Summary of Survey Results


This is a summary of the results of a Mosaic survey characterizing the
neurobiological research process as perceived by neurobiologists at
Caltech. There were approximately 120 responses collectively across
the various parts of the survey. Respondents were not required to
answer every question, only those that interested them (you can't push
their patience too far).

In general they agreed that the diagram I used to characterize
the main tasks involved in their research was about right.

Performing Experiments

They do experiments because:
	1. they need data for their simulation model that is not in the literature.
	2. the published data is suspect.
	3. there is a new phenomenon they wish to investigate.
	4. they want to test a hypothesis.

One kind of experiment they perform is called a "slice experiment",
in which they remove the brain from a rat, cut it into thin sections,
and place a section in a petri dish with nutrient fluid that sustains
the sample for several hours. They remarked that although slice
experiments are informative, they never produce the same results as
those found in vivo. Slice experiments are still done because some
experiments simply cannot be done in vivo.  Each lab preparation is a
collection of tricks and specialized knowledge, and they pointed out
that any data stored in the database must be identified by the type of
experiment used to obtain it (in vivo, slice, etc.).

GENESIS helps them do experiments because it lets them see which
variables are important in determining cell behavior. That is,
experiments help them parameterize the models, playing with the models
lets them make predictions, and further experiments then allow them to
test those predictions.

Some consider experiments tedious, noting that enormous amounts of
work yield little data.  Some prefer experiments, while others prefer
simulations.  Simulations are better for asking functional questions
and seeing the big picture.

During the experiments, data is stored on VHS video tape from the
recording rig.  They then spend the next day looking over the data and
picking out the traces they want for analysis. The data (in Truxpac
binary format) is stored on a PC hard drive and then transferred via
ftp to a Sun for analysis.  On the Sun, data is archived to Exabyte
tape.  Sometimes data that is a year old will be re-examined.  They
indicated that it would be nice to be able to do all of the above from
a single workstation/platform.

Analysis of Data

Data analysis is very specific to the experiment, and C programs,
shell scripts, and Perl scripts are written to do it, although Matlab
is starting to replace the need for these.

They liked Matlab because:

	1. it can do serious number crunching quickly and efficiently
	2. it is easy to program
	3. it has many built in functions like FFT's
	4. it has good graphics capabilities for plotting data, although
	they use xplot a lot simply because it has a simple interface
	and can handle a large number of plots simultaneously. It allows
	them to interactively turn plots on and off so they can choose the
	plots they want to compare. But xplot starts to slow down a lot
	when there is a lot of data.

The only major problem with Matlab is that they have to write C programs or
scripts to translate the data into Matlab format.  So in any event
some C program must be written somewhere, whether for analysis or for
translation.
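
The survey does not describe the Truxpac layout, so the following is
only a sketch of the kind of translator they describe, written in
Python for concreteness (the labs write these in C). The assumption of
a flat buffer of little-endian 16-bit samples, and the names
`translate_trace` and `write_matlab_ascii`, are invented for
illustration.

```python
import struct

def translate_trace(raw_bytes, scale=1.0):
    """Convert a buffer of little-endian 16-bit samples (an assumed
    layout; the real Truxpac format is not described here) into a
    list of floats, optionally scaled to physical units."""
    n = len(raw_bytes) // 2
    samples = struct.unpack("<%dh" % n, raw_bytes[:2 * n])
    return [s * scale for s in samples]

def write_matlab_ascii(samples, dt, path):
    """Write time/value pairs as plain ASCII columns, which Matlab
    can read directly with load('trace.dat')."""
    with open(path, "w") as f:
        for i, v in enumerate(samples):
            f.write("%g %g\n" % (i * dt, v))
```

Once the trace is in a two-column ASCII file, no further C code is
needed on the Matlab side.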

They also use Excel for doing simple manipulations like computing averages
and standard deviations.
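
These manipulations amount to a few lines of code; a minimal sketch
(in Python, for concreteness) of the two statistics they mention,
using the sample (n - 1) convention that Excel's STDEV uses:

```python
import math

def mean(xs):
    """Arithmetic mean of a list of measurements."""
    return sum(xs) / len(xs)

def stdev(xs):
    """Sample standard deviation (n - 1 denominator), matching
    Excel's STDEV convention."""
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))
```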

There are a number of other C programs individuals write to compare
data from GENESIS simulation with experiments to check goodness-of-fit
during parameter searches.  Parameter searches are described below.
Comparisons involve kinetic data on ionic channels, such as
patch-clamp or voltage-clamp data.  Channel data is used to model
membrane currents, which determine the physiology of neurons, hence
they are very important. Patch-clamp experiments let them measure the
amplitudes of currents through single channels and the kinetic
behavior of the channels. Voltage clamp lets them set the membrane
potential of the cell almost instantaneously at any level and hold it
there while recording the current flowing across the membrane.
Voltage-clamp data from experiments or references is used to model
membrane currents.
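
The survey does not say which goodness-of-fit measure the individual C
programs compute; assuming a root-mean-square error between two traces
sampled at the same times, the core of such a program reduces to a few
lines (sketched here in Python):

```python
import math

def rms_error(model, experiment):
    """Root-mean-square difference between a simulated trace and an
    experimental one sampled at the same time points."""
    if len(model) != len(experiment):
        raise ValueError("traces must be sampled at the same points")
    n = len(model)
    return math.sqrt(sum((m - e) ** 2 for m, e in zip(model, experiment)) / n)
```

A parameter search (described below) can then minimize this number.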

One other thing they do is spend hours tediously transcribing graphs
from papers into numerical form so that they can be compared with the
output of their simulation models. Comparisons are done either with a
C program as described above or by eyeballing with xplot. They can
qualitatively compare spiking patterns from the model with
voltage-clamp data.


Building Simulation Models

GENESIS has the potential to be very influential in education at the
undergrad and grad levels, especially with its graphics and available
tutorials.

C coding is sometimes necessary in GENESIS because the documentation
for the higher-level GENESIS objects is so poor that looking at the C
code is the only way to figure out what an object does. I have had to
do this myself!  Also, some objects in GENESIS just aren't exactly
what you need, so you have to write your own C code and compile it as
part of GENESIS, a nasty and very poorly documented task requiring the
neuroscientist to understand the elements of message passing and
event-driven processing.

Contrary to Springmeyer's findings, the speed of the simulator is very
important, since it lets them ask more questions faster. Hence they
find supercomputing very important.  One person invested two months in
writing a numerical integration routine in C for their simulation.
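
As one illustration of what such a routine involves, here is a sketch
(in Python rather than the C they actually used) of exponential-Euler
integration of a passive membrane equation, a method of the kind
GENESIS itself has traditionally favored. The parameter values used
below are illustrative, not taken from any lab.

```python
import math

def integrate_passive(V0, Erest, Rm, Cm, I, dt, steps):
    """Exponential-Euler integration of a passive membrane:

        Cm dV/dt = (Erest - V)/Rm + I

    The update V <- Vinf + (V - Vinf)*exp(-dt/tau) is exact for this
    linear equation at any step size, which is one attraction of this
    class of method for neural simulation."""
    tau = Rm * Cm                 # membrane time constant
    Vinf = Erest + I * Rm         # steady-state voltage
    decay = math.exp(-dt / tau)
    V = V0
    for _ in range(steps):
        V = Vinf + (V - Vinf) * decay
    return V
```

Because the update is exact for the passive case, one large step and
many small steps land on the same voltage, as the test confirms.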

Simulation models help you understand how everything works together,
giving you a global view of the cell or network that you can't get
from reading papers. Good experiments provide data to parameterize a
model, and a good model nearly always suggests new experiments to fill
in missing pieces of data.

The models they build are an amalgamation of previous code from the
prototypes library, equations from the literature, and new code.
Often they prefer to write their own code from scratch, since it is
important to have a complete understanding of the simulation.

In building network models of large numbers of cells, a statistical
spread of randomness is introduced into the cell parameters since, in
real life, cells in the same class have significantly different
properties (thresholds, etc.).
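
A minimal sketch of introducing such a spread, assuming a Gaussian
distribution around the nominal value (the survey does not say which
distribution the labs use; the function name and numbers are
illustrative):

```python
import random

def jitter_parameters(base_value, n_cells, spread, seed=None):
    """Draw one value per cell from a Gaussian centered on the nominal
    parameter, so cells of the same class differ as real cells do.
    `spread` is the standard deviation as a fraction of base_value."""
    rng = random.Random(seed)
    return [rng.gauss(base_value, spread * abs(base_value))
            for _ in range(n_cells)]
```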

A single cell model in GENESIS is built as follows:

1. Build a replica of the gross structural features of the cell (morphology).

2. Try to match the passive membrane properties of the model to
experimental data, usually by passing hyperpolarizing current pulses
and measuring things like input resistance and time constants.

3. Try to match the response of the cell to depolarizing current
pulses (which cause the cell to spike at some point) by adding active
channels.  This is the most difficult part and requires parameter
search methods.  Neurokit is good for this but slow; some write their
own simple graphical interface and do the manipulations by hand.
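
For step 2, the quantities being matched follow from standard
compartmental-model formulas; a sketch for a single isopotential
spherical compartment (the specific membrane parameters in the test
are typical textbook values, used here only for illustration):

```python
import math

def passive_properties(diam_um, Rm_ohm_cm2, Cm_uF_cm2):
    """Input resistance and membrane time constant for one spherical
    compartment, the quantities matched against hyperpolarizing-pulse
    data.  Rm is specific membrane resistance (ohm*cm^2), Cm specific
    capacitance (uF/cm^2)."""
    d_cm = diam_um * 1e-4
    area_cm2 = math.pi * d_cm ** 2           # surface area of a sphere
    Rin_ohm = Rm_ohm_cm2 / area_cm2          # lumped input resistance
    tau_ms = Rm_ohm_cm2 * Cm_uF_cm2 * 1e-3   # tau = Rm*Cm, in ms
    return Rin_ohm, tau_ms
```

Note that tau depends only on the specific membrane properties, while
the input resistance also depends on the geometry from step 1.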

A network model is constructed as follows (note: the process is less
well defined than for single-cell models):

1. Decide the level of detail at which to model the cells.
2. Create prototypes of all cells, as well as of whatever inputs the
network gets.
3. Create a graphical interface to view outputs, or dump them to a file.
4. Connect up the cells and run.
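
The four steps can be caricatured with a toy network of leaky
integrate-and-fire cells. This is not GENESIS syntax, and every name
and number here is invented for illustration only:

```python
import random

def run_network(n_cells=10, steps=200, dt=1.0, seed=0):
    """Toy version of the steps above: prototype simple cells, wire
    them up with random excitatory weights, drive them with a constant
    input, and record spike times (standing in for a GUI or file dump)."""
    rng = random.Random(seed)
    # steps 1-2: prototype leaky integrate-and-fire cells
    V = [0.0] * n_cells
    threshold, tau, drive = 1.0, 20.0, 0.06
    # step 4 (wiring): all-to-all random excitatory weights
    w = [[rng.uniform(0.0, 0.02) for _ in range(n_cells)]
         for _ in range(n_cells)]
    spikes = []  # step 3: recorded output
    for t in range(steps):
        fired = [i for i in range(n_cells) if V[i] >= threshold]
        for i in fired:
            spikes.append((t, i))
            V[i] = 0.0  # reset after a spike
        for i in range(n_cells):
            syn = sum(w[j][i] for j in fired)
            V[i] += dt * (-V[i] / tau + drive) + syn
    return spikes
```

With the constant drive above, each cell's steady-state voltage sits
over threshold, so the network spikes repeatedly during the run.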

Sources of info for models:
1. Published papers & experiments
2. Adapt data from similar brain structures
3. Guess!

When they're trying to match a model's output with the real output,
a number of parameters have to be tweaked. This process of tweaking
also helps them to understand how the model behaves.
Tweaks are needed especially for data that is difficult or impossible
to measure experimentally such as densities of active conductances
on dendrites, or subtle variations in kinetics.

The following are parameters that are tweaked for a model:
1. channel density (current magnitude)
2. types of channels
3. location and magnitude of synaptic channels
4. passive properties (Rm, Ra, Cm)
5. timing of synaptic stimulation
6. in networks, tweak synaptic weights

Verification of a tweak is difficult. They try to constrain tweak
values to physically reasonable parameters from experiments.  They
also perform detailed parameter searches to see if other tweaks can
produce the same results.  Also, if a tweak produces the desired
results but falls apart for other parameters, chances are it is wrong,
since it is unlikely that a real cell/network can regulate its
parameters with very high precision.
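
In its simplest form, a detailed parameter search is a brute-force
sweep that minimizes a fit error like the one described earlier. The
sketch below is in Python for concreteness; `toy_model` is a stand-in
for an actual GENESIS run, invented purely for illustration:

```python
def grid_search(model, target, values):
    """Run the model at each candidate parameter value and keep the
    one whose output best matches the target trace (sum of squared
    errors)."""
    best_val, best_err = None, float("inf")
    for v in values:
        out = model(v)
        err = sum((o - t) ** 2 for o, t in zip(out, target))
        if err < best_err:
            best_val, best_err = v, err
    return best_val, best_err

def toy_model(density):
    """Stand-in for a simulation run whose output scales with a
    channel-density parameter."""
    return [density * x for x in (1.0, 2.0, 3.0)]
```

Real searches sweep several parameters at once, which is why speed of
the simulator matters so much to them.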


The one thing they do more than anything else is read the literature!

The following is a list of data they retrieve from research papers:
1. Parameter data from references
2. Voltage clamp data
3. Data for kinetic equations
4. Passive properties- time constant, average input resistance
5. Information about experimental methods so that they can reproduce
   experiments if necessary.
6. They would like to get 3D morphologies out of the papers, from
which they could construct compartmental models with greater ease, but
journal diagrams are always 2D. For the most part morphology does not
affect a GENESIS simulation, but the neuroscientists need it to
convince others that they are really modeling the cell rather than
something else.

Many people who do experiments are more concerned with the qualitative
rather than the quantitative properties of their data, while the
quantitative properties are what matter to modelers. This makes
finding the above data extremely difficult.  On top of that,
experimental preps are sometimes suspect, and so one wonders whether
what the experimenters see is an artifact.

Taking Personal Notes

They maintain notes for experiments but generally not for simulation work.
They feel that a notebook is essentially embedded in the simulation.
Accusations of dishonesty in modeling are very different from those in
experiments, since the claims of a model can be verified by simulation runs.

Many experiments involve stereotyped techniques, and so they have
standard forms for those.

When doing data analysis after the experiment, they tend to write down
everything needed to understand what is being looked at.

Whether they log research paths so that they can backtrack if
necessary depends on the researcher. Some do, some don't.

Figures from papers and references are collected by xeroxing material
and stored in the notebook or a big file cabinet. References may be
entered as part of a GENESIS simulation script. 

Writing Research Papers

As far as the database is concerned, they felt that papers stored in
it should follow a very standardized form, especially papers with
information on channels.  Again we're back to the problem of generally
poor reporting of quantitative results in papers.

Transferring equations used in GENESIS or taken from a research paper
into the paper they are writing is done manually in the word processor.
Plots from xplot are output as PostScript to be included in the
document or appended to it.