The neural models used by Rubin and Terman are Hodgkin-Huxley-type, which means that each of the voltage-gated membrane proteins is modelled by one or more differential equations. In the case of the subthalamic neurons, for example, there are seven first-order differential equations that must be solved simultaneously.
Numerically, there are many methods for solving these differential equations. The simplest are Forward Euler and Runge-Kutta, each of which uses a mathematical approximation of the derivative to iteratively determine the next value of the solution from the current one. More advanced methods such as Runge-Kutta-Fehlberg (RKF) and Runge-Kutta Prince-Dormand (RKPD) use adaptive time-stepping: they exploit the fact that larger time steps can be taken when the solution isn't changing much. Of course, there is computational overhead involved in calculating the optimal time step, which can temper some of the advantage of adaptive time-stepping.
I decided to run a sample simulation on a few different computers to determine how fast the simulations run. I simulated an array of 100 subthalamic neurons (each with seven differential equations) for 2500 ms under a variety of conditions. Not that it matters for the timing, but the neurons were independent; there was no interneuron connectivity in this test. All code was written in C++ using the GSL library to solve the differential equations.
I ran two tests: (1) which simulation method is best, and (2) how fast are the various computers we are using.
Using our fastest computer (a dedicated processing-only workstation; see below for more details), I simulated the 100-neuron model using three different differential-equation solvers.
| Method | Timestep Type | Execution Time | Points Generated |
|--------|---------------|----------------|------------------|
The RKPD method produces by far the fewest data points, but takes about 17% longer to execute than the fastest method, RKF.
The simulation was repeated on five different computers; the RKF solver was used in every case.
| Machine | Specs | Execution Time |
|---------|-------|----------------|
| Dedicated Linux Workstation | 3.2 GHz Quad Core i7, 6 GB RAM | 4.86 s |
| iMac (2009) | 3.06 GHz Core 2 Duo, 4 GB RAM | 5.40 s |
| Mac Mini (2011) | 2.4 GHz Core 2 Duo, 2 GB RAM | 6.88 s |
| MacBook Pro (2007) | 2.4 GHz Core 2 Duo, 4 GB RAM | 7.21 s |
| Old Laptop\* | 2.2 GHz Core 2 Duo, 4 GB RAM | 17.45 s |
\*The "old laptop" simulation was actually run in a virtual Ubuntu box running on the old laptop under Windows.
The first test is interesting because it emphasizes the tradeoff between generating fewer points and taking longer to run. The second test demonstrates that our new dedicated Linux machine is quite fast, even when compared with other reasonably fast machines.
It's important to work through some of these issues while the simulations are still relatively small; understanding the tradeoffs now will be very helpful when the simulations grow to thousands or even tens of thousands of neurons.