This article is written as a Jupyter notebook which you can execute and modify interactively. You can either download it via the "Source" link on the top right, or run it directly in the browser on the mybinder infrastructure.
For more information, see our general Notes on Notebooks.
Quickstart: to run the code below:
- Click on the cell to select it.
- Press SHIFT+ENTER on your keyboard, or press the play button in the toolbar above, while the cell is selected.
This article demonstrates how a control flow in which simulation parameters depend on the results of previous simulations can be expressed using standard control structures in Python. With access to the full expressivity of a general-purpose programming language, expressing such control flow is straightforward; this would not be the case for a declarative model description.
Our goal in this toy example is to find the threshold voltage of a neuron as a function of the density of sodium channels.
This example is from our eLife paper (Stimberg et al. 2019).
We start with the basic setup:
```python
from brian2 import *
%matplotlib notebook

defaultclock.dt = 0.01*ms  # small time step for stiff equations
```
Our model of the neuron is based on the classical model from Hodgkin and Huxley (1952). Note that this is not actually a model of a neuron, but rather of a (space-clamped) axon. However, to avoid confusion with spatially extended models, we simply use the term "neuron" here. In this model, the membrane potential is shifted, i.e. the resting potential is at 0 mV:
```python
El = 10.613*mV
ENa = 115*mV
EK = -12*mV
gl = 0.3*msiemens/cm**2
gK = 36*msiemens/cm**2
gNa_max = 100*msiemens/cm**2
gNa_min = 15*msiemens/cm**2
C = 1*uF/cm**2

eqs = '''
dv/dt = (gl * (El-v) + gNa * m**3 * h * (ENa-v) + gK * n**4 * (EK-v)) / C : volt
gNa : siemens/meter**2
dm/dt = alpham * (1-m) - betam * m : 1
dn/dt = alphan * (1-n) - betan * n : 1
dh/dt = alphah * (1-h) - betah * h : 1
alpham = (0.1/mV) * (-v+25*mV) / (exp((-v+25*mV) / (10*mV)) - 1)/ms : Hz
betam = 4 * exp(-v/(18*mV))/ms : Hz
alphah = 0.07 * exp(-v/(20*mV))/ms : Hz
betah = 1/(exp((-v+30*mV) / (10*mV)) + 1)/ms : Hz
alphan = (0.01/mV) * (-v+10*mV) / (exp((-v+10*mV) / (10*mV)) - 1)/ms : Hz
betan = 0.125*exp(-v/(80*mV))/ms : Hz
'''
```
We simulate 100 neurons at the same time, each of them having a density of sodium channels between 15 and 100 mS/cm²:
```python
neurons = NeuronGroup(100, eqs, method='rk4', threshold='v>50*mV')
neurons.gNa = 'gNa_min + (gNa_max - gNa_min)*1.0*i/N'
```
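To see what the string expression above computes, here is a plain-NumPy sketch of the same assignment (outside Brian2, with the conductances expressed as bare numbers in mS/cm²). In Brian2, `i` is the neuron index and `N` the group size; note that the division by `N` (rather than `N-1`) means the last neuron stays just below `gNa_max`:

```python
import numpy as np

# Plain-NumPy illustration of the string expression
# 'gNa_min + (gNa_max - gNa_min)*1.0*i/N' evaluated per neuron.
# Values here are bare numbers in mS/cm².
N = 100
gNa_min, gNa_max = 15.0, 100.0
gNa = gNa_min + (gNa_max - gNa_min) * np.arange(N) / N

print(gNa[0])   # first neuron: exactly gNa_min
print(gNa[-1])  # last neuron: just below gNa_max (endpoint excluded)
```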
We initialize the state variables to their resting-state values. Note that the values for $m$, $n$, $h$ depend on the values of $\alpha_m$, $\beta_m$, etc., which themselves depend on $v$. The order of the assignments ($v$ is initialized before $m$, $n$, and $h$) therefore matters, something that is naturally expressed by stating initial values as sequential assignments to the state variables. In a declarative approach, this would be potentially ambiguous.
```python
neurons.v = 0*mV
neurons.m = '1/(1 + betam/alpham)'
neurons.n = '1/(1 + betan/alphan)'
neurons.h = '1/(1 + betah/alphah)'
```
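The expression `1/(1 + beta/alpha)` is algebraically the same as the familiar steady-state value $\alpha/(\alpha+\beta)$. As a quick sanity check (plain Python, outside Brian2, with $v$ in mV and rates in 1/ms as in the model equations above):

```python
import math

# Check that 1/(1 + beta/alpha) equals the steady state alpha/(alpha + beta),
# evaluated for the m gate at the resting potential v = 0 mV.
v = 0.0
alpham = 0.1 * (-v + 25) / (math.exp((-v + 25) / 10) - 1)
betam = 4 * math.exp(-v / 18)

m_init = 1 / (1 + betam / alpham)
m_inf = alpham / (alpham + betam)
print(abs(m_init - m_inf) < 1e-12)  # True: the two forms are identical
```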
We record the spiking activity of the neurons and store the current network state so that we can later restore it and run another iteration of our experiment:
```python
S = SpikeMonitor(neurons)
store()
```
The algorithm we use here to find the voltage threshold is a simple bisection: we try to find the threshold voltage of a neuron by repeatedly testing values and increasing or decreasing these values depending on whether we observe a spike or not. By continuously halving the size of the correction, we quickly converge to a precise estimate.
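To make the mechanics of the search concrete before the Brian2 version below, here is a pure-Python sketch for a single neuron. The `spikes` function is a hypothetical stand-in for running the simulation, and the true threshold of 33.7 mV is an arbitrary value chosen for illustration:

```python
# Illustrative halving-step search (not Brian2 code).
# `spikes` stands in for a simulation run: the toy neuron "spikes"
# whenever the starting voltage exceeds an assumed threshold.
true_threshold = 33.7  # arbitrary value in mV

def spikes(v0):
    return v0 > true_threshold

v0, step = 25.0, 25.0  # initial estimate and correction step, in mV
for _ in range(10):
    if spikes(v0):
        v0 -= step   # spiked: estimate was too high
    else:
        v0 += step   # no spike: estimate was too low
    step /= 2.0      # halve the correction for the next round

print(v0)  # within ~0.05 mV of the true threshold after 10 iterations
```

After $k$ iterations the estimate is guaranteed to be within roughly $s/2^{k-1}$ of the true threshold, where $s$ is the initial step size.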
We start with the same initial estimate for all neurons, 25 mV above the resting potential, and the same value for the size of the "correction step":
```python
v0 = 25*mV*ones(len(neurons))
step = 25*mV
```
For later visualization of how the estimates converged towards their final values, we also store the intermediate values of the estimates:
```python
estimates = np.full((11, len(neurons)), np.nan)*mV
estimates[0, :] = v0
```
We now run 10 iterations of our algorithm:
```python
for i in range(10):
    # Reset to the initial state
    restore()
    # Set the membrane potential to our threshold estimate
    neurons.v = v0
    # Run the simulation for 20 ms
    run(20*ms)
    # Decrease the estimates for neurons that spiked
    v0[S.count > 0] -= step
    # Increase the estimates for neurons that did not spike
    v0[S.count == 0] += step
    # Reduce the step size and store the current estimate
    step /= 2.0
    estimates[i + 1, :] = v0
```
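The two masked assignments update all 100 estimates at once via NumPy-style boolean indexing. A small sketch with a made-up spike-count array (the values are arbitrary, standing in for `S.count`) shows the mechanics:

```python
import numpy as np

# Boolean-mask updates as used in the loop, with a made-up spike count
# instead of a SpikeMonitor. Values are arbitrary, in mV.
v0 = np.array([25.0, 25.0, 25.0, 25.0])  # current estimates
count = np.array([0, 2, 0, 1])           # spikes per neuron
step = 25.0

v0[count > 0] -= step    # spiking neurons: lower the estimate
v0[count == 0] += step   # silent neurons: raise the estimate

print(v0)  # spiking neurons moved down, silent neurons moved up
```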
After the 10 iteration steps, we plot the results:
```python
examples = [10, 50, 90]
fig, (ax_top, ax_bottom) = plt.subplots(2, 1, figsize=(9, 6))
for color, example in enumerate(examples):
    ax_top.plot(np.arange(11), estimates[:, example]/mV, '-',
                marker='o', c='C'+str(color))
ax_top.set(xlabel='Iteration', ylabel='Threshold estimate (mV)')
ax_bottom.plot(neurons.gNa/(mS/cm**2), v0/mV, '-', c='gray')
for color, example in enumerate(examples):
    ax_bottom.plot([neurons.gNa[example]/(mS/cm**2)],
                   [estimates[-1, example]/mV], 'o', c='C'+str(color))
ax_bottom.set(xlabel='gNa (mS/cm²)', ylabel='Threshold estimate (mV)')
fig.tight_layout()
```