# Brian tutorial at Telluride

There will be a tutorial on Brian at the Telluride workshop in Colorado, tomorrow at noon (via Skype as I’m in Paris).

# Building sparse connections

We just added a new feature for building sparse connections: the sparseness keyword now accepts functions in the same way as the weight keyword, e.g.:

[python]
myconnection.connect_random(group1, group2, 0.1, weight=1*nS,
                            sparseness=lambda i, j: exp(-abs(i-j)*.1))
[/python]

makes random synapses between neurons with a distance-dependent probability (assuming the neurons are topographically arranged, with a linear topology).
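The same idea can be sketched in plain NumPy, independent of Brian: build a Boolean connectivity matrix where synapse i → j exists with probability proportional to exp(-|i-j|*0.1), as in the example above (function name and the fixed seed are mine, for illustration only):

```python
import numpy as np

def distance_dependent_connectivity(n, p_max, length_const=0.1, seed=0):
    """Boolean connection matrix where synapse i -> j exists with
    probability p_max * exp(-|i - j| * length_const), mirroring the
    distance-dependent example above (linear topology)."""
    rng = np.random.default_rng(seed)
    dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    p = p_max * np.exp(-dist * length_const)
    return rng.random((n, n)) < p, dist

C, dist = distance_dependent_connectivity(100, p_max=0.1)
near = C[dist < 5].mean()   # connection rate between nearby neurons
far = C[dist > 50].mean()   # connection rate between distant neurons
```

With a decay constant of 0.1, nearby pairs are connected at close to the peak rate while pairs more than 50 neurons apart are almost never connected.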

# Code generation

The latest SVN version of Brian includes a new experimental package, brian.experimental.codegen, which is designed for automatic generation of code for numerical integration in Python, C++ and GPU C++. If the global preference usecodegen is True then it will be used automatically for all nonlinear differential equations. If usecodegenweave is True then C code will be generated, otherwise Python code. In all cases the code runs faster. GPU code is not currently used. Use the brian_global_preferences.py file to activate it for all your code. As an example of how it works, let’s use the following Brian Equations object:

[python]
eqs = Equations('''
dV/dt = -W*V/(10*second) : volt
dW/dt = -V**2/(1*second) : volt
''')
[/python]

The codegen module generates the following code for the Euler update scheme. For Python:

[python]
V = _S[0]
W = _S[1]
V__tmp = -0.1*V*W
W__tmp = -1.0*V**2
V += V__tmp*dt
W += W__tmp*dt
[/python]

For C:

[cpp]
double *V__Sbase = _S+0*num_neurons;
double *W__Sbase = _S+1*num_neurons;
for(int _i=0;_i<num_neurons;_i++){
    double &V = *V__Sbase++;
    double &W = *W__Sbase++;
    double V__tmp = -0.1*V*W;
    double W__tmp = -1.0*pow(V, 2);
    V += V__tmp*dt;
    W += W__tmp*dt;
}
[/cpp]

And for the GPU:

[cpp]
__global__ void stateupdate(int num_neurons, double t, double *S)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if(i>=num_neurons) return;
    double &V = S[i+0*num_neurons];
    double &W = S[i+1*num_neurons];
    double V__tmp = -0.1*V*W;
    double W__tmp = -1.0*pow(V, 2);
    V += V__tmp*dt;
    W += W__tmp*dt;
}
[/cpp]

These are all generated from the following template for the Euler integration scheme:

[python]
euler_scheme = [
    (('foreachvar', 'nonzero'),
     '''
     $vartype ${var}__tmp = $var_expr
     '''),
    (('foreachvar', 'nonzero'),
     '''
     $var += ${var}__tmp*dt
     ''')
]
[/python]
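The substitution mechanics can be sketched with Python's standard string.Template: filling in vartype, var, and var_expr for the V equation from the example above reproduces the corresponding line of generated Python code. This is a simplified illustration of the idea, not the actual codegen implementation:

```python
from string import Template

# First line of the Euler scheme template above. For the Python target,
# $vartype expands to the empty string (the C target would use "double").
line_tmpl = Template('$vartype ${var}__tmp = $var_expr')

# Substitute the values for the V equation from the earlier example,
# stripping the leftover whitespace from the empty $vartype
line = line_tmpl.substitute(vartype='', var='V', var_expr='-0.1*V*W').strip()
```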

The other schemes are the RK2 and exponential Euler scheme. For example, the exponential Euler scheme is:

[python]
exp_euler_scheme = [
    (('foreachvar', 'nonzero'),
     '''
     $vartype ${var}__B = @substitute(var_expr, {var:0})
     $vartype ${var}__A = @substitute(var_expr, {var:1})
     ${var}__A -= ${var}__B
     ${var}__B /= ${var}__A
     ${var}__A *= dt
     '''),
    (('foreachvar', 'nonzero'),
     '''
     $var += ${var}__B
     $var *= exp(${var}__A)
     $var -= ${var}__B
     ''')
]
[/python]

On the Equations above, this generates the following Python code:

[python]
V = _S[0]
W = _S[1]
V__B = 0
W__B = -1.0*V**2
V__A = -0.1*W
W__A = -1.0*V**2
V__A -= V__B
W__A -= W__B
V__B /= V__A
W__B /= W__A
V__A *= dt
W__A *= dt
V += V__B
W += W__B
V *= exp(V__A)
W *= exp(W__A)
V -= V__B
W -= W__B
[/python]

And similarly for C++ and GPU.
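For an equation that is linear in the variable being updated, dV/dt = a·V + b, the template computes B = f(0) = b and A = f(1) − f(0) = a, then applies V ← (V + b/a)·exp(a·dt) − b/a, which is the exact solution over one time step. A quick plain-Python check of that arithmetic (the helper function and its name are mine, for illustration):

```python
from math import exp

def exp_euler_step(V, f, dt):
    """One step of the exponential Euler scheme from the template above,
    for a single variable with dV/dt = f(V)."""
    B = f(0.0)   # ${var}__B: f with var substituted by 0, i.e. b
    A = f(1.0)   # ${var}__A: f with var substituted by 1, i.e. a + b
    A -= B       # A is now the slope a
    B /= A       # B is now b/a
    A *= dt      # A is now a*dt
    V += B
    V *= exp(A)
    V -= B
    return V

# For dV/dt = a*V + b this matches the closed-form solution exactly
a, b, dt, V0 = -0.1, 2.0, 0.01, 1.0
stepped = exp_euler_step(V0, lambda V: a*V + b, dt)
exact = (V0 + b/a) * exp(a*dt) - b/a
```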

We will be extending the automatic generation of code into other areas, including resets, thresholds, and STDP code.

If you have any comments on the approach, bug reports, etc. please let us know!

# Sharing numpy arrays between processes

This is a little trick that may be useful to people using multiprocessing and numpy that I couldn’t find any good examples of online.

Using the Python multiprocessing package you create a new Python process for each CPU in the system, but you may often want to work on the same data without making a copy. Here is one way to do that. First of all, we assume you're starting with a large numpy array S. We replace S with a version in shared memory, which we can then pass to child processes without copying the underlying data:

[python]
import numpy
from multiprocessing import sharedctypes

size = S.size
shape = S.shape
S.shape = size
S_ctypes = sharedctypes.RawArray('d', S)
S = numpy.frombuffer(S_ctypes, dtype=numpy.float64, count=size)
S.shape = shape
[/python]

Now we can send S_ctypes and shape to a child process in multiprocessing, and convert it back to a numpy array in the child process as follows:

[python]
from numpy import ctypeslib

S = ctypeslib.as_array(S_ctypes)
S.shape = shape
[/python]
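Even without spawning a child process, you can check that both views share the same memory: a write through the array obtained from ctypeslib.as_array is visible through the frombuffer view. This is a self-contained sketch combining the two snippets above (the small example array is mine):

```python
import numpy
from multiprocessing import sharedctypes
from numpy import ctypeslib

# "Parent" side: move an ordinary array into shared memory,
# following the first snippet above
S = numpy.arange(12, dtype=numpy.float64).reshape(3, 4)
size, shape = S.size, S.shape
S.shape = size
S_ctypes = sharedctypes.RawArray('d', S)
S = numpy.frombuffer(S_ctypes, dtype=numpy.float64, count=size)
S.shape = shape

# "Child" side: rebuild a numpy view onto the same buffer,
# following the second snippet above
S_child = ctypeslib.as_array(S_ctypes)
S_child.shape = shape

# A write through one view is visible through the other,
# because both are windows onto the same shared memory
S_child[0, 0] = 99.0
```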

# Brian 1.2

We have just released Brian 1.2.

The major feature of this release is the model fitting toolbox, designed for automatic fitting of spiking neural models to recorded data. The toolbox is capable of using multiple CPUs or GPUs, across multiple machines. For example, in our lab we use 3 computers with 2 GPUs each to get a 300x speed increase compared to using a single CPU.

Other features of this release include some support for real time plotting as the simulation runs and some important bug fixes. See below for the complete list of changes.

With this release, we are moving away from the old Sourceforge site to the new Brian Trac kindly hosted by Neural Ensemble. We have also been working on our release tools, making it much easier to produce new releases. We now plan to do fairly frequent ‘quick releases’ which will be less thoroughly tested, but useful for those who want the latest features without having to muck around with the SVN version.

Major features:

• Model fitting toolbox (library.modelfitting)

Minor features:

• New real-time “refresh=” options added to plotting functions
• Gamma factor in utils.statistics
• New RegularClock object
• Added brian_sample_run function to test installation in place of nose tests

Improvements:

• Speed improvements to monitors and plotting functions
• Sparse matrix support improved, should work with scipy versions up to 0.7.1
• Various improvements to brian.hears (still experimental though)
• Parameters now picklable

Bug fixes:

• Fixed major bug with subgroups and connections (announced on webpage)
• Fixed major bug with multiple clocks (announced on webpage)
• No warnings with Python 2.6
• Minor bugfix to TimedArray caused by floating point comparisons
• Bugfix: refractory neurons could fire in very extreme circumstances
• Fixed bug with DelayConnection not setting max_delay
• Fixed bug with STP
• Fixed bug with weight=lambda i,j:rand()

New examples:

• New multiprocessing examples
• Added examples of using derived classes with Brian
• Realtime plotting example

We recently found and fixed a bug which may silently cause spikes to be lost or extra spikes to be generated in certain circumstances. At the moment, the only known example of a problem is when using SpikeGeneratorGroup in a simulation with multiple clocks as in this message on the support group, but the problem may also appear in rare situations with other simulations with multiple clocks. We advise users to upgrade to Brian 1.1.4 when it is released next week.

# Bug advisory: connections from subgroups with delays

We recently found a bug that may silently cause problems with many simulations. The bug causes problems in the following situation:

• You create a NeuronGroup
• You create a subgroup of that group
• You create a Connection from that subgroup
• The Connection has delays

In this case, the SpikeContainer object would be improperly initialised and many spikes may be lost. The latest development version fixes this. See ticket #33 on our Trac.

If you have code running on an older version and think your code might be affected, please get in contact on the briansupport@googlegroups.com list.

# New SVN

The Brian SVN has moved from SourceForge to neuralensemble.org. The current release (1.1.3) is still available at the SourceForge and PyPI pages, but future releases will be at the new page. The current development version will only be available on the new page from now on.

# Real time plotting

A feature of the pylab/matplotlib plotting package that I hadn’t used before is animation. The Animations entry on the Scipy Cookbook explains how to do it. Here is an example of using it to do realtime plotting of a network in Brian. We plot a raster plot and sample membrane potential trace as the CUBA example runs:

[python] from brian import *

###### Set up the standard CUBA example

N = 4000
eqs = '''
dv/dt = (ge+gi-(v+49*mV))/(20*ms) : volt
dge/dt = -ge/(5*ms) : volt
dgi/dt = -gi/(10*ms) : volt
'''
P = NeuronGroup(N, eqs, threshold=-50*mV, reset=-60*mV)
P.v = -60*mV + 10*mV*rand(len(P))
Pe = P.subgroup(3200)
Pi = P.subgroup(800)
Ce = Connection(Pe, P, 'ge', weight=1.62*mV, sparseness=0.02)
Ci = Connection(Pi, P, 'gi', weight=-9*mV, sparseness=0.02)

## First of all, we use standard Brian monitors to record values

M = SpikeMonitor(P)
trace = StateMonitor(P, 'v', record=0)

ion()

## Set up the plots that we will update in real time

subplot(211)
rasterline, = plot([], [], '.')  # plot points, hence the '.'
axis([0, 1, 0, N])
subplot(212)
traceline, = plot([], [])  # plot lines, hence no '.'
axis([0, 1, -0.06, -0.05])

## This network operation updates the graphics every 10ms of simulated time

@network_operation(clock=EventClock(dt=10*ms))
def draw_gfx():
    # This returns two lists i, t of the neuron indices and spike times for
    # all the recorded spikes so far
    i, t = zip(*M.spikes)
    # Now we update the raster and trace plots with this new data
    rasterline.set_xdata(t)
    rasterline.set_ydata(i)
    traceline.set_xdata(trace.times)
    traceline.set_ydata(trace[0])
    # and finally tell pylab to redraw it
    draw()

run(1*second)
draw_gfx()  # final draw to get the last bits of data in
ioff()      # switch interactive mode off
show()      # and wait for user to close the window before shutting down
[/python]

We’re thinking about adapting some of this to include in the standard Brian plotting library. Any thoughts?

# Brian 1.1.3

We have just produced a new release of Brian, version 1.1.3, with the following changes:

• STDP now works with DelayConnection