Brian at NeuroComp

We will be at NeuroComp 2009 in Bordeaux on 16 September, where we will give a tutorial at 4.30 pm. Try to install Brian before the tutorial, and we will help you if you have any problems with it. You will need to bring your laptop (there are no machines in the room). If there is anything special you would like us to address during the tutorial, please don’t hesitate to contact us. Slides for the tutorial: ppt, pdf. Examples of the tutorial.

Brian is in PyNN

Since version 0.5, PyNN has supported Brian. PyNN is a simulator-independent language for building neuronal network models: you write the code for a model once, using the PyNN API and the Python programming language, and then run it without modification on any simulator that PyNN supports (currently NEURON, NEST, PCSIM and Brian).
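
For readers unfamiliar with PyNN, here is a minimal sketch of the idea. The parameters are illustrative and the exact call signatures may differ between PyNN versions, so check the PyNN documentation before relying on it:

[python]
# Minimal PyNN sketch (illustrative only; see the PyNN docs for the exact API).
# Switching the import line to pyNN.neuron, pyNN.nest or pyNN.pcsim runs the
# same script on another simulator.
from pyNN.brian import *

setup(timestep=0.1)
p = Population(100, IF_cond_exp)   # 100 conductance-based IF neurons
p.record_v()                       # record the membrane potential
run(1000.0)                        # simulate for 1 second (times are in ms)
end()
[/python]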

Linked variables

Please see 'The idea' post on this development blog.

One thing we end up doing very often in our own Brian code is copying the values of a variable in one group to the values of a variable in another group. That is, a variable in one group should essentially be defined as being the same as a variable in another group. This comes up in the models of the auditory system we’re working on, because we use one NeuronGroup to represent the displacement of hair cells in the cochlea, and a separate NeuronGroup to represent the auditory nerve fibres, which receive graded rather than spiking inputs from them. In general, the same thing might be useful in any case where there is a graded rather than spiking connection between two NeuronGroups. What we were doing was this:

[python]
haircells = NeuronGroup(...)
nervefibres = NeuronGroup(...)

@network_operation(when='start')
def graded_connection():
    nervefibres.input = haircells.output
[/python]

This works fine, but we think using network operations in this technical sort of way is not very intuitive for most users, so we thought about a way of doing it automatically with a nicer syntax. We came up with this:

[python]
nervefibres.input = linked_var(haircells, 'output')
[/python]

The practical effect is exactly the same as the code above, but the syntax is much nicer. Behind the scenes, the function linked_var creates a LinkedVar class instance which is recognised by the __setattr__ method of the NeuronGroup, which in turn creates a network operation to do the copy (but you didn’t want to know that). The question is, what should the syntax be? At the moment, we’ve gone with:

[python]
linked_var(source, var, func, when, clock)
[/python]

The func, when and clock arguments are optional. The func argument allows you to pass the value of the variable in the source group through a function; for example, in the auditory system you do half-wave rectification and logarithmic compression. The when argument tells it when to do the copy operation (at the start of the simulation loop by default), and the clock tells it what schedule to update on (by default it uses the target group’s clock). Is this a nice syntax? Is there anything else it should be able to do?
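
As an illustration of the optional func argument, here is a hedged sketch. The group definitions and the compression function are ours, purely for illustration; only the linked_var call follows the signature given above:

[python]
# Sketch only: the groups and the compression function are illustrative,
# not part of Brian; linked_var is used as described above.
from brian import *

haircells = NeuronGroup(3000, model='output : 1')
nervefibres = NeuronGroup(3000, model='''dV/dt = (input - V)/(10*ms) : 1
                                         input : 1''')

def compress(x):
    # half-wave rectify, then log-compress the hair cell displacement
    return log(1.0 + clip(x, 0, Inf))

nervefibres.input = linked_var(haircells, 'output', func=compress)
[/python]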

RecentStateMonitor

Please see 'The idea' post on this development blog.

As part of our efforts to implement STDP with heterogeneous delays, which I’ll write more about some other time, we realised we needed access to recent values of a variable, but never covering more than a fixed duration T. That is, we needed access to V(t-s) for 0<s<T, where t is the current time. We could get this by recording all the values of V, but that’s a big memory hog for a long-running simulation. Instead, we came up with a new object, RecentStateMonitor, which works in a very similar way to StateMonitor. You can use RecentStateMonitor in exactly the same way as a StateMonitor (but with some extra options allowing more efficient access to the data). We imagine it might be useful for things like network operations that monitor the dynamics of a variable as a simulation runs and inject different currents depending on those dynamics. It could also be used together with a custom SpikeMonitor for a very simple spike-triggered average monitor.

The implementation uses a cylindrical array, a 2D generalisation of a circular array that is circular in only one dimension. It is a 2D array V(i,j), where i is a time index and j a neuron index. The time index is circular and relative to a pointer into the array which updates at each time step; when the pointer reaches the end of the fixed-size array, it wraps back to the beginning. This cylindrical array structure is good for memory access because no memory needs to be allocated or deallocated as the simulation runs.

At the moment, RecentStateMonitor only records the values of a single variable, but it will very likely be extended, or an additional class added, to handle multiple variables (since this seems to be a very likely use case). Any ideas on other features you would like this to have? Or more efficient implementations?
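
To make the cylindrical array idea concrete, here is a minimal standalone sketch. This is our illustration of the data structure, not Brian's actual RecentStateMonitor code:

[python]
# Minimal sketch of a cylindrical array: a 2D buffer that is circular in the
# time dimension only. Illustrative code, not Brian's implementation.
import numpy as np

class CylindricalArray(object):
    def __init__(self, num_timesteps, num_neurons):
        self.values = np.zeros((num_timesteps, num_neurons))
        self.cursor = 0  # row that will be overwritten at the next time step

    def push(self, v):
        # overwrite the oldest row with the newest values; no allocation needed
        self.values[self.cursor, :] = v
        self.cursor = (self.cursor + 1) % self.values.shape[0]

    def ordered(self):
        # return the stored values from oldest to newest
        return np.vstack((self.values[self.cursor:], self.values[:self.cursor]))

# Example: keep only the last 5 time steps of a 3-neuron variable
buf = CylindricalArray(5, 3)
for t in range(12):
    buf.push(t * np.ones(3))
print(buf.ordered()[:, 0])  # -> [ 7.  8.  9. 10. 11.]
[/python]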

TimedArray

Please see 'The idea' post on this development blog.

One annoying thing that crops up often when writing Brian programs is that you have some set of equations with a parameter which varies over time according to the values in some array. Suppose that I is an array of length N with values sampled at intervals dt; you would like to be able to write an equation like:

[python]
I = randn(N)
eqs = '''
dV/dt = (-V+I(t))/tau : 1
'''
[/python]

One way of solving the problem is to use a network operation, like so:

[python]
I = randn(N)
eqs = '''
dV/dt = (-V+J)/tau : 1
J : 1
'''
G = NeuronGroup(m, eqs, ...)

@network_operation
def update_J(clk):
    i = int(clk.t/clk.dt)
    G.J = I[i]
[/python]

This works, but it’s not terribly intuitive. The new TimedArray class allows you to do both of these things. The first could be written like so:

[python]
I = TimedArray(randn(N))
eqs = '''
dV/dt = (-V+I(t))/tau : 1
'''
[/python]

The second like so:

[python]
I = randn(N)
eqs = '''
dV/dt = (-V+J)/tau : 1
J : 1
'''
G = NeuronGroup(m, eqs, ...)
G.J = TimedArray(I)
[/python]

Behind the scenes, the first one works by adding a __call__ method to numpy arrays which interpolates, and the second by adding a network operation to the network. The first breaks linearity, so nonlinear solvers are always used; if your equations are linear, the second is probably better. TimedArray has several options, including evenly sampled grid points based on a start time and dt (given by a clock by default), or a fixed array of times which needn’t be evenly sampled. For example, to turn a signal on between 1 second and 1.1 seconds you could do:

[python]
times = [0*second, 1*second, 1.1*second]
values = [0, 1, 0]
x = TimedArray(values, times)
[/python]

The TimedArray class is defined in the timedarray module. Take a look at the documentation, try some examples, and let us know what you think.
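
As a concrete illustration of the first usage, here is a hedged end-to-end sketch that drives a group of leaky integrators with a precomputed random signal. The parameter values are ours, and the exact behaviour may differ slightly between Brian versions:

[python]
# Hedged sketch: drive leaky integrators with a precomputed signal via
# TimedArray. Parameter values are illustrative.
from brian import *

tau = 10*ms
N = 1000                      # number of samples in the signal
I = TimedArray(0.5*randn(N))  # by default, sampled on the default clock's dt

eqs = '''
dV/dt = (-V + I(t))/tau : 1
'''
G = NeuronGroup(100, eqs)
M = StateMonitor(G, 'V', record=0)
run(N*defaultclock.dt)

plot(M.times/ms, M[0])
show()
[/python]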

Upcoming features

At the moment, we’re working on these features for Brian 1.1.3, which should be coming very soon:

  • STDP working with connections with heterogeneous delays.
  • A new RecentStateMonitor for storing only the most recent values of a variable.
  • A new TimedArray class to make it easier to set the values of a variable in an equation from a precomputed array.
  • Progress reporting for simulation runs, with estimates of how long the computation will take.
In the medium term, possibly for Brian 1.2, we’re working on:
  • Support for parallel processing on a GPU - the work in progress on this is already available in the experimental subpackage of Brian.
  • Support for automatic generation and compilation of C code for nonlinear differential equation solvers.
  • A subpackage, “Brian hears”, for auditory system modelling, including efficient GPU-based filterbank operations.

The idea

As part of the new website, we’ll be writing entries on this development blog, showing ideas we plan to put in the next release of Brian. The idea is to let people know about new developments and features, and to get feedback about them. We hope that you might try out experimental or new features available in the experimental subpackage of Brian, or from the most recent revision on the SVN, and let us know what you think about them. Useful questions we would like feedback on are:

  • Does it work?
  • Is it too slow?
  • Does it have all the features you’d like it to have?
  • Is the syntax clear?
Brian is an open source package, and partly a community effort. The more feedback and suggestions we get, the better we can make it. Hopefully this blog will also foster efforts to bring more people into the Brian development team. Please comment on entries on the blog, send us email directly, or join in on the discussion forums.