Posts about Development (old posts, page 1)

Building sparse connections

We just added a new feature for building sparse connections: the sparseness keyword now accepts functions in the same way as the weight keyword, e.g.:

[python]
myconnection.connect_random(group1, group2, 0.1, weight=1*nS, sparseness=exp(-abs(i-j)*.1))
[/python]

makes random synapses between neurons with a distance-dependent probability (assuming the neurons are topographically arranged, with a linear topology).
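Outside Brian, the same distance-dependent rule can be sketched in plain numpy. This is an illustrative sketch, not Brian's implementation: the function name and the 0.1 decay constant are just for the example.

```python
import numpy as np

def connect_random_distance(n_pre, n_post, decay=0.1, rng=None):
    """Boolean connectivity matrix where the probability of a synapse
    from neuron i to neuron j falls off exponentially with |i - j|."""
    rng = np.random.default_rng() if rng is None else rng
    i = np.arange(n_pre)[:, None]
    j = np.arange(n_post)[None, :]
    p = np.exp(-np.abs(i - j) * decay)  # connection probability per pair
    return rng.random((n_pre, n_post)) < p

C = connect_random_distance(100, 100)
```

Since the probability is 1 on the diagonal, each neuron always connects to its topographic partner, and connections become sparser with distance.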

Code generation

The latest SVN version of Brian includes a new experimental package, brian.experimental.codegen, which is designed for automatic generation of code for numerical integration in Python, C++ and GPU C++. If the global preference usecodegen is True then it will be used automatically for all nonlinear differential equations. If usecodegenweave is True then C code will be generated, otherwise Python code. In all cases the code runs faster. GPU code is not currently used. Set these preferences in your global preferences file to activate code generation for all your code. As an example of how it works, let's use the following Brian Equations object:

[python]
eqs = Equations('''
dV/dt = -W*V/(10*second) : volt
dW/dt = -V**2/(1*second) : volt
''')
[/python]

The codegen module generates the following code for the Euler update scheme. For Python:

[python]
V = _S[0]
W = _S[1]
V__tmp = -0.1*V*W
W__tmp = -1.0*V**2
V += V__tmp*dt
W += W__tmp*dt
[/python]

For C:

[cpp]
double *V__Sbase = _S+0*num_neurons;
double *W__Sbase = _S+1*num_neurons;
for(int _i=0; _i<num_neurons; _i++){
    double &V = *V__Sbase++;
    double &W = *W__Sbase++;
    double V__tmp = -0.1*V*W;
    double W__tmp = -1.0*pow(V, 2);
    V += V__tmp*dt;
    W += W__tmp*dt;
}
[/cpp]

And for the GPU:

[cpp]
__global__ void stateupdate(int num_neurons, double t, double *S)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if(i>=num_neurons) return;
    double &V = S[i+0*num_neurons];
    double &W = S[i+1*num_neurons];
    double V__tmp = -0.1*V*W;
    double W__tmp = -1.0*pow(V, 2);
    V += V__tmp*dt;
    W += W__tmp*dt;
}
[/cpp]

These are all generated from the following template for the Euler integration scheme:

[python]
euler_scheme = [
    (('foreachvar', 'nonzero'),
     '''
     $vartype ${var}__tmp = $var_expr
     '''),
    (('foreachvar', 'nonzero'),
     '''
     $var += ${var}__tmp*dt
     '''),
    ]
[/python]
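The `$var`/`${var}` placeholders behave like Python's standard `string.Template` substitution. As a rough sketch of the mechanism (not Brian's actual implementation), expanding the first template line for each nonzero variable of the example equations might look like:

```python
from string import Template

line = Template('$vartype ${var}__tmp = $var_expr')

# Expand once per nonzero variable, as the ('foreachvar', 'nonzero')
# directive would; expressions match the example equations above.
# $vartype is empty for Python output (it would be 'double' for C).
expanded = [
    line.substitute(vartype='', var='V', var_expr='-0.1*V*W').strip(),
    line.substitute(vartype='', var='W', var_expr='-1.0*V**2').strip(),
]
```

The two expanded strings are exactly the `V__tmp` and `W__tmp` lines of the generated Python code above.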

The other schemes are the RK2 and exponential Euler scheme. For example, the exponential Euler scheme is:

[python]
exp_euler_scheme = [
    (('foreachvar', 'nonzero'),
     '''
     $vartype ${var}__B = @substitute(var_expr, {var:0})
     $vartype ${var}__A = @substitute(var_expr, {var:1})
     ${var}__A -= ${var}__B
     ${var}__B /= ${var}__A
     ${var}__A *= dt
     '''),
    (('foreachvar', 'nonzero'),
     '''
     $var += ${var}__B
     $var *= exp(${var}__A)
     $var -= ${var}__B
     '''),
    ]
[/python]

On the Equations above, this generates the following Python code:

[python]
V = _S[0]
W = _S[1]
V__B = 0
W__B = -1.0*V**2
V__A = -0.1*W
W__A = -1.0*V**2
V__A -= V__B
W__A -= W__B
V__B /= V__A
W__B /= W__A
V__A *= dt
W__A *= dt
V += V__B
W += W__B
V *= exp(V__A)
W *= exp(W__A)
V -= V__B
W -= W__B
[/python]

And similarly for C++ and GPU.
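For a linear equation dV/dt = a*V + b, the exponential Euler step is exact, which is easy to verify numerically. A minimal sketch in plain Python (the constants are arbitrary), following the scheme above step by step:

```python
import math

a, b = -2.0, 3.0   # dV/dt = a*V + b
V0, dt = 1.0, 0.1

# Exponential Euler step, as in the scheme:
# B = f(V=0), A = f(V=1) - B, then B /= A, A *= dt
B = a*0 + b        # f evaluated at V = 0
A = (a*1 + b) - B  # f evaluated at V = 1, minus B, leaves the slope a
B = B / A
A = A * dt
V = V0
V += B
V *= math.exp(A)
V -= B

# Exact solution of the linear ODE after time dt, for comparison
V_exact = (V0 + b/a)*math.exp(a*dt) - b/a
```

For nonlinear equations the scheme is no longer exact, but it remains much more stable than plain Euler when the dynamics are stiff.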

We will be extending the automatic generation of code into other areas, including resets, thresholds, and STDP code.

If you have any comments on the approach, bug reports, etc. please let us know!

Sharing numpy arrays between processes

This is a little trick that may be useful to people using multiprocessing and numpy that I couldn’t find any good examples of online.

Using the Python multiprocessing package you create a new Python process for each CPU in the system, but you may often want to work on the same data without making a copy. Here is one way to do that. First of all, we assume you're starting with a large numpy array S. We replace S with a version in shared memory, which we can then pass to other processes without copying:

[python]
from multiprocessing import sharedctypes
size = S.size
shape = S.shape
S.shape = size
S_ctypes = sharedctypes.RawArray('d', S)
S = numpy.frombuffer(S_ctypes, dtype=numpy.float64, count=size)
S.shape = shape
[/python]

Now we can send S_ctypes and shape to a child process in multiprocessing, and convert it back to a numpy array in the child process as follows:

[python]
from numpy import ctypeslib
S = ctypeslib.as_array(S_ctypes)
S.shape = shape
[/python]
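Putting the two snippets together, the round trip can be checked in a single process: both numpy views alias the shared buffer, so a write through one is visible through the other. A self-contained sketch (the array contents are arbitrary):

```python
import numpy
from multiprocessing import sharedctypes
from numpy import ctypeslib

S = numpy.arange(12, dtype=numpy.float64).reshape(3, 4)
size, shape = S.size, S.shape

# Move S into shared memory, as in the parent process
S.shape = size
S_ctypes = sharedctypes.RawArray('d', S)
S = numpy.frombuffer(S_ctypes, dtype=numpy.float64, count=size)
S.shape = shape

# What a child process would do after receiving (S_ctypes, shape)
child_view = ctypeslib.as_array(S_ctypes)
child_view.shape = shape

# Writes through one view are visible through the other: shared, not copied
child_view[1, 2] = 99.0
```

In a real multiprocessing setup, `S_ctypes` and `shape` would be passed to the child process (e.g. as arguments to `Process`), and the child would run only the `as_array` part.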


Brian SVN has moved

The Brian SVN repository has moved from SourceForge to a new home. The current release (1.1.3) is still available at the SourceForge and PyPI pages, but future releases will be at the new page. The current development version will only be available on the new page from now on.

Real time plotting

A feature of the pylab/matplotlib plotting package that I hadn’t used before is animation. The Animations entry on the Scipy Cookbook explains how to do it. Here is an example of using it to do realtime plotting of a network in Brian. We plot a raster plot and sample membrane potential trace as the CUBA example runs:

[python]
from brian import *

# Set up the standard CUBA example
N = 4000
eqs = '''
dv/dt = (ge+gi-(v+49*mV))/(20*ms) : volt
dge/dt = -ge/(5*ms) : volt
dgi/dt = -gi/(10*ms) : volt
'''
P = NeuronGroup(N, eqs, threshold=-50*mV, reset=-60*mV)
P.v = -60*mV+10*mV*rand(len(P))
Pe = P.subgroup(3200)
Pi = P.subgroup(800)
Ce = Connection(Pe, P, 'ge', weight=1.62*mV, sparseness=0.02)
Ci = Connection(Pi, P, 'gi', weight=-9*mV, sparseness=0.02)

# Real time plotting stuff
# First of all, we use standard Brian monitors to record values
M = SpikeMonitor(P)
trace = StateMonitor(P, 'v', record=0)

# The ion() command switches pylab's "interactive mode" on
ion()

# We set up the plot with the correct axes
subplot(211)
# Note that we have to store a copy of the objects (plot lines) whose data
# we will update in real time
rasterline, = plot([], [], '.') # plot points, hence the '.'
axis([0, 1, 0, N])
subplot(212)
traceline, = plot([], []) # plot lines, hence no '.'
axis([0, 1, -0.06, -0.05])

# This network operation updates the graphics every 10ms of simulated time
@network_operation(clock=EventClock(dt=10*ms))
def draw_gfx():
    # This returns two lists i, t of the neuron indices and spike times for
    # all the recorded spikes so far
    i, t = zip(*M.spikes)
    # Now we update the raster and trace plots with this new data
    rasterline.set_xdata(t)
    rasterline.set_ydata(i)
    traceline.set_xdata(trace.times)
    traceline.set_ydata(trace[0])
    # and finally tell pylab to redraw it
    draw()

run(1*second)
draw_gfx() # final draw to get the last bits of data in
ioff() # switch interactive mode off
show() # and wait for user to close the window before shutting down
[/python]

We’re thinking about adapting some of this to include in the standard Brian plotting library. Any thoughts?

Linked variables

Please see the entry "The idea" on this development blog.

One thing we end up doing very often in our own Brian code is copying the values of a variable in one group to the values of a variable in another group. That is, a variable from one group should essentially be defined as the same as the variable from another group. This comes up in models of the auditory system we're working on, because we use one NeuronGroup to represent the displacement of hair cells in the cochlea, and a separate NeuronGroup to represent the auditory nerve fibres which receive graded rather than spiking inputs from them. In general, the same thing might be useful in any case where there is a graded rather than spiking connection between two NeuronGroups. What we were doing was this:

[python]
haircells = NeuronGroup(...)
nervefibres = NeuronGroup(...)
@network_operation(when='start')
def graded_connection():
    nervefibres.input = haircells.output
[/python]

This works fine, but generally we think using network operations in this technical sort of a way is not very intuitive for most users, so we thought about a way of doing this automatically with a nice syntax. We came up with this:

[python]
nervefibres.input = linked_var(haircells, 'output')
[/python]

The practical effect of this is exactly the same as the code above, but the syntax is much nicer. Behind the scenes, the linked_var function creates a LinkedVar class instance which is recognised by the __setattr__ method of the NeuronGroup, which creates a network operation to do the copy, but you didn't want to know that.

The question is, what should the syntax be? At the moment, we’ve gone with:

[python]
linked_var(source, var, func, when, clock)
[/python]

The func, when and clock arguments are optional. The func argument allows you to pass the value of the variable in the source group through a function; for example, in the auditory system you might apply half-wave rectification and logarithmic compression. The when argument tells it when to do the copy operation (at the start of the simulation loop by default), and the clock argument tells it what schedule to update on (by default, the target group's clock). Is this a nice syntax? Is there anything else this should be able to do?
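As a rough sketch of how the __setattr__ trick might work (hypothetical class and method names, not Brian's actual implementation):

```python
class LinkedVar:
    # Marker object returned by linked_var(); just records the link
    def __init__(self, source, var, func=None):
        self.source, self.var, self.func = source, var, func

def linked_var(source, var, func=None):
    return LinkedVar(source, var, func)

class Group:
    # Stand-in for NeuronGroup, showing only the linking mechanism
    def __init__(self, **vars):
        self.__dict__['_links'] = []
        self.__dict__.update(vars)

    def __setattr__(self, name, value):
        if isinstance(value, LinkedVar):
            # Record the copy operation instead of storing the value
            self._links.append((name, value))
        else:
            self.__dict__[name] = value

    def update(self):
        # What the generated network operation would do each timestep
        for name, link in self._links:
            value = getattr(link.source, link.var)
            if link.func is not None:
                value = link.func(value)
            self.__dict__[name] = value

haircells = Group(output=0.25)
nervefibres = Group(input=0.0)
nervefibres.input = linked_var(haircells, 'output')
nervefibres.update()  # copies haircells.output into nervefibres.input
```

The key point is that assigning a LinkedVar does not store a value at all; it registers a copy operation that runs on the group's schedule.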


RecentStateMonitor

Please see the entry "The idea" on this development blog.

As part of our efforts to implement STDP with heterogeneous delays, which I’ll write more about some other time, we realised we needed to access recent values of a variable, but never covering more than a fixed duration T. That is, we needed access to V(t-s) for 0<s<T, where t is the current time. We could get access to this by recording all the values of V, but that’s a big memory hog for a long running simulation. Instead, we came up with a new object, the RecentStateMonitor, which works in a very similar way to StateMonitor.

For the user, you can use RecentStateMonitor in exactly the same way as a StateMonitor (but with some extra options for allowing more efficient access to the data). We imagine it might be useful for things like network operations that monitor the dynamics of a variable as a simulation runs, and inject different currents depending on those dynamics. It could also be used together with a custom SpikeMonitor for a very simple spike triggered average monitor.

The implementation uses a cylindrical array, a 2D generalisation of a circular array that is circular in only one dimension. It is a 2D array V(i,j) where i represents a time index and j a neuron index. The time index is circular and relative to a pointer into the array which advances at each time step; when the pointer reaches the end of the fixed-size array, it wraps back to the beginning in a circular fashion. This cylindrical array structure is good for memory access because no memory needs to be allocated or deallocated as the simulation runs.
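A minimal sketch of such a cylindrical array in numpy (hypothetical class, not Brian's implementation):

```python
import numpy as np

class CylindricalArray:
    """Keeps the last `duration` timesteps of a vector of `n` variables.
    Circular in the time dimension only: no allocation after init."""
    def __init__(self, duration, n):
        self.data = np.zeros((duration, n))
        self.duration = duration
        self.ptr = 0  # next row to overwrite

    def push(self, values):
        self.data[self.ptr] = values
        self.ptr = (self.ptr + 1) % self.duration  # wrap around

    def recent(self, steps_back):
        # V(t - steps_back*dt); steps_back = 1 is the newest row
        return self.data[(self.ptr - steps_back) % self.duration]

buf = CylindricalArray(duration=3, n=2)
for t in range(5):
    buf.push([t, 10*t])
# The buffer now holds only the last 3 pushes: t = 2, 3, 4
```

Reads for any delay in the window are a single index computation, which is what makes this attractive for the heterogeneous-delay STDP use case.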

At the moment, RecentStateMonitor only records the values for a single variable, but it will very likely be extended, or an additional class added which handles multiple variables (since this seems to be a very likely use case). Any ideas on other features you would like this to have? or more efficient implementations?


TimedArray

Please see the entry "The idea" on this development blog.

One annoying thing that crops up often when writing programs for Brian is that you have some set of equations with a parameter which varies over time according to the values in some array. Suppose that I is an array of length N with values sampled at intervals dt. You would like to be able to write an equation like:

[python]
I = randn(N)
eqs = '''
dV/dt = (-V+I(t))/tau : 1
'''
[/python]

One way of solving the problem is to use a network operation, like so:

[python]
I = randn(N)
eqs = '''
dV/dt = -(V+J)/tau : 1
J : 1
'''
G = NeuronGroup(m, eqs, ...)
@network_operation
def update_J(clk):
    i = int(clk.t/clk.dt)
    G.J = I[i]
[/python]

This works, but it's not terribly intuitive. The new TimedArray class allows you to do both of these things. The first could be written like so:

[python]
I = TimedArray(randn(N))
eqs = '''
dV/dt = (-V+I(t))/tau : 1
'''
[/python]

The second like so:

[python]
I = randn(N)
eqs = '''
dV/dt = -(V+J)/tau : 1
J : 1
'''
G = NeuronGroup(m, eqs, ...)
G.J = TimedArray(I)
[/python]

Behind the scenes, the first one works by adding a __call__ method to numpy arrays which interpolates. The second one works by adding a network operation to the network. The first breaks linearity and so nonlinear solvers are always used; if your equations are linear, the second is probably better to use.

TimedArray has several options, including the use of evenly sampled grid points based on a start time and dt (given by a clock by default), or a fixed array of times which needn’t be evenly sampled. For example to turn a signal on between 1 second and 1.1 second you could do:

[python]
times = [0*second, 1*second, 1.1*second]
values = [0, 1, 0]
x = TimedArray(values, times)
[/python]

The TimedArray class is defined in the module timedarray. Take a look at the documentation, try some examples, and let us know what you think.
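A rough sketch of what such a callable array might look like internally, using nearest-sample lookup on an evenly sampled grid (hypothetical class; see the timedarray module for the real one, which also handles interpolation and uneven times):

```python
import numpy as np

class SimpleTimedArray:
    # Evenly sampled variant: values[i] holds the signal at t = start + i*dt
    def __init__(self, values, dt=1.0, start=0.0):
        self.values = np.asarray(values)
        self.dt, self.start = dt, start

    def __call__(self, t):
        i = int((t - self.start) / self.dt)
        i = min(max(i, 0), len(self.values) - 1)  # clamp to valid range
        return self.values[i]

I = SimpleTimedArray([0.0, 1.0, 0.0], dt=1.0)
```

With this signal, `I(t)` is 0 before t = 1, 1 between t = 1 and t = 2, and 0 afterwards, mimicking the turn-on/turn-off example above on a unit timescale.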

Upcoming features

At the moment, we’re working on these features for Brian 1.1.3 which should be coming very soon:

  • STDP working with connections with heterogeneous delays.
  • A new RecentStateMonitor for storing only the most recent values of a variable.
  • A new TimedArray class to make it easier to set the values of a variable in an equation from a precomputed array.
  • Progress reporting for simulation runs, with estimates of how long the computation will take.
In the medium term, possibly for Brian 1.2, we’re working on:
  • Support for parallel processing with a GPU - the work in progress on this is already available in the experimental subpackage in Brian.
  • Support for automatic generation and compilation of C code for nonlinear differential equation solvers.
  • A subpackage, “Brian hears”, for auditory system modelling, including efficient GPU based filterbank operations.

The idea

As part of the new website, we’ll be writing entries on this development blog, showing ideas we plan to put in the next release of Brian. The idea is to let people know about new developments and features, and to get feedback about them. We hope that you might try out experimental or new features available in the experimental subpackage of Brian, or from the most recent revision on the SVN, and let us know what you think about them. Useful questions we would like feedback on are:

  • Does it work?
  • Is it too slow?
  • Does it have all the features you’d like it to have?
  • Is the syntax clear?
Brian is an open source package, and partly a community effort. The more feedback and suggestions we get, the better we can make it. Hopefully this blog will also foster efforts to bring more people into the Brian development team.

Please comment on entries on the blog, send us email directly, or join in on the discussion forums.