# Brian tutorial at Telluride

There will be a tutorial on Brian at the Telluride workshop in Colorado, tomorrow at noon (via Skype as I’m in Paris).

We just added a new feature for building sparse connections: the sparseness keyword now accepts functions in the same way as the weight keyword, e.g.:

[python]
myconnection.connect_random(group1, group2, weight=1*nS, sparseness=lambda i, j: exp(-abs(i-j)*.1))
[/python]

makes random synapses between neurons with a distance-dependent probability (assuming the neurons are topographically arranged, with a linear topology).
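To make the distance dependence concrete, here is what that expression computes for a few neuron pairs, sketched in plain NumPy (independent of Brian; the small array size is just for illustration):

```python
import numpy as np

# Pairwise indices for a small linear array of 5 neurons
i, j = np.meshgrid(np.arange(5), np.arange(5), indexing='ij')

# Connection probability decays exponentially with distance |i - j|
p = np.exp(-np.abs(i - j) * .1)

print(p[0, 0])  # 1.0: zero distance gives probability 1
print(p[0, 4])  # exp(-0.4), about 0.67: distant pairs are less likely
```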

The latest SVN version of Brian includes a new experimental package, brian.experimental.codegen, which is designed for automatic generation of code for numerical integration in Python, C++ and GPU C++. If the global preference usecodegen is True then it will be used automatically for all nonlinear differential equations. If usecodegenweave is True then C code will be generated, otherwise Python code. In all cases the code runs faster. GPU code is not currently used. Use the brian_global_preferences.py file to activate it for all your code. As an example of how it works, let’s use the following Brian Equations object:

[python]
eqs = Equations('''
dV/dt = -W*V/(10*second) : volt
dW/dt = -V**2/(1*second) : volt
''')
[/python]

The codegen module generates the following code for the Euler update scheme. For Python:

[python]
V = _S[0]
W = _S[1]
V__tmp = -0.1*V*W
W__tmp = -1.0*V**2
V += V__tmp*dt
W += W__tmp*dt
[/python]

For C:

[cpp]
double *V__Sbase = _S+0*num_neurons;
double *W__Sbase = _S+1*num_neurons;
for(int _i=0;_i<num_neurons;_i++){
double &V = *V__Sbase++;
double &W = *W__Sbase++;
double V__tmp = -0.1*V*W;
double W__tmp = -1.0*pow(V, 2);
V += V__tmp*dt;
W += W__tmp*dt;
}
[/cpp]

And for the GPU:

[cpp]
__global__ void stateupdate(int num_neurons, double t, double *S)
{
int i = blockIdx.x * blockDim.x + threadIdx.x;
if(i>=num_neurons) return;
double &V = S[i+0*num_neurons];
double &W = S[i+1*num_neurons];
double V__tmp = -0.1*V*W;
double W__tmp = -1.0*pow(V, 2);
V += V__tmp*dt;
W += W__tmp*dt;
}
[/cpp]

These are all generated from the following template for the Euler integration scheme:

[python]
euler_scheme = [
    (('foreachvar', 'nonzero'),
     '''
     $vartype ${var}__tmp = $var_expr
     '''),
    (('foreachvar', 'nonzero'),
     '''
     $var += ${var}__tmp*dt
     ''')
]
[/python]
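The `$var`-style placeholders are ordinary Python template substitutions. A minimal sketch of how one template line expands for the variable V, using the standard library's string.Template (which may differ in detail from the exact mechanism codegen uses):

```python
from string import Template

line = Template('$vartype ${var}__tmp = $var_expr')

# Expand the template for V when targeting C
print(line.substitute(vartype='double', var='V', var_expr='-0.1*V*W'))
# -> double V__tmp = -0.1*V*W
```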

The other schemes are the RK2 and exponential Euler scheme. For example, the exponential Euler scheme is:

[python]
exp_euler_scheme = [
    (('foreachvar', 'nonzero'),
     '''
     $vartype ${var}__B = @substitute(var_expr, {var:0})
     $vartype ${var}__A = @substitute(var_expr, {var:1})
     ${var}__A -= ${var}__B
     ${var}__B /= ${var}__A
     ${var}__A *= dt
     '''),
    (('foreachvar', 'nonzero'),
     '''
     $var += ${var}__B
     $var *= exp(${var}__A)
     $var -= ${var}__B
     ''')
]
[/python]

On the Equations above, this generates the following Python code:

[python]
V = _S[0]
W = _S[1]
V__B = 0
W__B = -1.0*V**2
V__A = -0.1*W
W__A = -1.0*V**2
V__A -= V__B
W__A -= W__B
V__B /= V__A
W__B /= W__A
V__A *= dt
W__A *= dt
V += V__B
W += W__B
V *= exp(V__A)
W *= exp(W__A)
V -= V__B
W -= W__B
[/python]

And similarly for C++ and GPU.
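One way to see what this scheme is doing: for a linear equation dv/dt = a*v + b it reproduces the exact solution over a timestep. A standalone check in plain Python (no Brian; the values of tau, dt and v0 are arbitrary):

```python
import math

tau, dt, v0 = 0.02, 0.001, 1.0
f = lambda v: -v / tau   # dv/dt = -v/tau, a linear equation

# One step following the template above
B = f(0.0)          # @substitute(var_expr, {var:0})
A = f(1.0)          # @substitute(var_expr, {var:1})
A -= B              # A is now the slope a
B /= A
A *= dt
v = v0
v += B
v *= math.exp(A)
v -= B

exact = v0 * math.exp(-dt / tau)   # analytic solution after one step
print(abs(v - exact))              # agrees to machine precision
```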

We will be extending the automatic generation of code into other areas, including resets, thresholds, and STDP code.
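For reference, activating the feature for all your scripts means setting the two preferences in the brian_global_preferences.py file mentioned above; a sketch (check the documentation for the exact file location and syntax):

```python
# brian_global_preferences.py (sketch -- preference names as described above)
usecodegen = True        # use generated code for nonlinear equations
usecodegenweave = True   # generate C code via weave rather than Python
```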

If you have any comments on the approach, bug reports, etc. please let us know!

This is a little trick that may be useful to people using multiprocessing and numpy that I couldn’t find any good examples of online.

Using the Python multiprocessing package you create a new Python process for each CPU in the system, but you may often want those processes to work on the same data without making a copy. Here is one way to do that. First of all, we assume you’re starting with a large numpy array S. We replace S with a version stored in shared memory, which we can then pass to child processes without copying:

[python]
import numpy
from multiprocessing import sharedctypes

size = S.size
shape = S.shape
S.shape = size
S_ctypes = sharedctypes.RawArray('d', S)
S = numpy.frombuffer(S_ctypes, dtype=numpy.float64, count=size)
S.shape = shape
[/python]

Now we can send S_ctypes and shape to a child process in multiprocessing, and convert it back to a numpy array in the child process as follows:

[python]
from numpy import ctypeslib

S = ctypeslib.as_array(S_ctypes)
S.shape = shape
[/python]

We have just released Brian 1.2.

The major feature of this release is the model fitting toolbox, designed for automatic fitting of spiking neural models to recorded data. The toolbox is capable of using multiple CPUs or GPUs, across multiple machines. For example, in our lab we use 3 computers with 2 GPUs each to get a 300x speed increase compared to using a single CPU.

Other features of this release include some support for real time plotting as the simulation runs and some important bug fixes. See below for the complete list of changes.

With this release, we are moving away from the old Sourceforge site to the new Brian Trac kindly hosted by Neural Ensemble. We have also been working on our release tools, making it much easier to produce new releases. We now plan to do fairly frequent ‘quick releases’ which will be less thoroughly tested, but useful for those who want the latest features without having to muck around with the SVN version.

Major features:

- Model fitting toolbox (library.modelfitting)

- New real-time "refresh=" options added to plotting functions
- Gamma factor in utils.statistics
- New RegularClock object
- Added brian_sample_run function to test installation in place of nose tests

Improvements:

- Speed improvements to monitors and plotting functions
- Sparse matrix support improved, should work with scipy versions up to 0.7.1
- Various improvements to brian.hears (still experimental though)
- Parameters now picklable
- Made Equations picklable

Bug fixes:

- Fixed major bug with subgroups and connections (announced on webpage)
- Fixed major bug with multiple clocks (announced on webpage)
- No warnings with Python 2.6
- Minor bugfix to TimedArray caused by floating point comparisons
- Bugfix: refractory neurons could fire in very extreme circumstances
- Fixed bug with DelayConnection not setting max_delay
- Fixed bug with STP
- Fixed bug with weight=lambda i,j:rand()

New examples:

- New multiprocessing examples
- Added polychronisation example
- Added modelfitting examples
- Added examples of TimedArray and linked_var
- Added examples of using derived classes with Brian
- Realtime plotting example

We recently found and fixed a bug which may silently cause spikes to be lost or extra spikes to be generated in certain circumstances. At the moment, the only known example of a problem is when using SpikeGeneratorGroup in a simulation with multiple clocks as in this message on the support group, but the problem may also appear in rare situations with other simulations with multiple clocks. We advise users to upgrade to Brian 1.1.4 when it is released next week.

We recently found a bug that may silently cause problems with many simulations. The bug causes problems in the following situation:

- You create a NeuronGroup
- You create a subgroup of that group
- You create a Connection from that subgroup
- The Connection has delays

If you have code running on an older version and think your code might be affected, please get in contact on the briansupport@googlegroups.com list.

The Brian SVN has moved from SourceForge to neuralensemble.org. The current release (1.1.3) is still available at the SourceForge and PyPI pages, but future releases will be at the new page. The current development version will only be available on the new page from now on.

A feature of the pylab/matplotlib plotting package that I hadn’t used before is animation. The Animations entry on the Scipy Cookbook explains how to do it. Here is an example of using it to do realtime plotting of a network in Brian. We plot a raster plot and sample membrane potential trace as the CUBA example runs:

[python]
from brian import *

N = 4000
eqs = '''
dv/dt = (ge+gi-(v+49*mV))/(20*ms) : volt
dge/dt = -ge/(5*ms) : volt
dgi/dt = -gi/(10*ms) : volt
'''
P = NeuronGroup(N, eqs, threshold=-50*mV, reset=-60*mV)
P.v = -60*mV+10*mV*rand(len(P))
Pe = P.subgroup(3200)
Pi = P.subgroup(800)
Ce = Connection(Pe, P, 'ge', weight=1.62*mV, sparseness=0.02)
Ci = Connection(Pi, P, 'gi', weight=-9*mV, sparseness=0.02)

M = SpikeMonitor(P)
trace = StateMonitor(P, 'v', record=0)

ion()  # interactive mode on, so that draw() updates the figure

subplot(211)
rasterline, = plot([], [], '.')  # plot points, hence the '.'
axis([0, 1, 0, N])
subplot(212)
traceline, = plot([], [])  # plot lines, hence no '.'
axis([0, 1, -0.06, -0.05])

@network_operation(clock=EventClock(dt=10*ms))
def draw_gfx():
    if not M.spikes:
        return  # nothing recorded yet (e.g. at t=0)
    # This returns two sequences i, t of the neuron indices and spike times
    # for all the recorded spikes so far
    i, t = zip(*M.spikes)
    # Now we update the raster and trace plots with this new data
    rasterline.set_xdata(t)
    rasterline.set_ydata(i)
    traceline.set_xdata(trace.times)
    traceline.set_ydata(trace[0])
    # and finally tell pylab to redraw it
    draw()

run(1*second)
draw_gfx()  # final draw to get the last bits of data in
ioff()      # switch interactive mode off
show()      # and wait for user to close the window before shutting down
[/python]

We’re thinking about adapting some of this to include in the standard Brian plotting library. Any thoughts?

We have just produced a new release of Brian, version 1.1.3, with the following changes:

- STDP now works with DelayConnection
- Added EventClock
- Added RecentStateMonitor
- Added colormap option to StateMonitor.plot
- Added timed array module, see TimedArray class for details.
- Added optional progress reporting to run()
- New recall() function (converse to forget())
- Added progress reporting module (brian.utils.progressreporting)
- Added SpikeMonitor.spiketimes
- Added developer’s guide to docs
- Early version of brian.hears subpackage for auditory modelling
- Various bug fixes

You will also notice that the manual now includes a developer’s guide. If you are interested in contributing, we suggest subscribing to the Brian development mailing list.