ROI motion detection


Polina Litvak

Apr 21, 2016, 7:14:53 AM
to Bonsai Users
Hello!

I am new to Bonsai and am hoping to be able to use it to track mouse motion in an M maze. The idea is to detect the animal's presence in one of a number of ROIs and then trigger an automated pellet dispenser to drop food when the animal is correctly positioned in a region.  

For motion detection, I came across a post recommending cropping a region from the image and then summing all the segmented pixels in that region to get a continuous measure of how much the region has been activated. How can I access the segmented pixels of a region to be able to sum them up? Is this something that can be configured in the UI, or do I need scripting? An example would be greatly appreciated.

I also came across an alternative method of specifying MinArea in FindContours to track only when there is a large enough object in the ROI. Would that achieve the same end result?

Many thanks!
Polina

goncaloclopes

Apr 21, 2016, 7:25:59 AM
to Bonsai Users
Hi Polina,

If you already have the thresholded and cropped image, you can use the Sum node to add up all the pixels. Since non-segmented pixels will be zero, they will not contribute to the sum. Here's a quick example using Threshold; you can change it to BackgroundSubtraction or another segmentation method that works best for you:
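In case it helps to see the arithmetic outside Bonsai, here is a rough numpy sketch of the crop-threshold-sum idea (not Bonsai code; the frame, ROI rectangle, and threshold value are made-up numbers for illustration):

```python
import numpy as np

# Hypothetical 320x240 grayscale frame with a bright blob in it
frame = np.zeros((240, 320), dtype=np.uint8)
frame[100:120, 50:80] = 200

# Crop: extract the ROI rectangle (coordinates are arbitrary here)
x, y, w, h = 40, 90, 60, 60
roi = frame[y:y + h, x:x + w]

# Threshold: segmented pixels become 255, everything else 0
segmented = np.where(roi > 128, 255, 0).astype(np.uint32)

# Sum: zero-valued background pixels do not contribute
activation = int(segmented.sum())  # 600 segmented pixels * 255 = 153000
```

The activation value rises and falls with how much of the ROI is segmented, which is exactly the continuous measure described above.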


Hope this helps!

Polina Litvak

Apr 21, 2016, 7:37:19 AM
to Bonsai Users
Thanks for the speedy response! 
Since I have multiple ROIs, would I just create a separate branch starting with a Crop node for each of the ROIs? Wouldn't it be better to use CropPolygon, since it seems to support multiple regions? Also, I apologize if the question is silly, but how do I specify to the Sum node what it should sum over?

Best,
Polina

goncaloclopes

Apr 21, 2016, 8:03:13 AM
to Bonsai Users
Yes, you can just create one branch for each of the ROIs.

CropPolygon may or may not be better, depending on what you want to do. If you cropped the multiple regions together and then summed the result, you would get a single signal telling you how much any of the regions is activated, which would not work if you needed independent signals for each.

Also, if you are only thresholding the image inside each Crop, it can actually be faster, since you are processing only very small images and not the whole image.

Regarding the Sum node, it just sums the values of all the pixels inside the cropped region. However, because you have thresholded the values first, all other pixels will be at zero, which means they will not contribute to your sum (i.e. they are ignored).

Hope this helps, let me know if you run into any difficulties!

Polina Litvak

Apr 28, 2016, 5:16:20 AM
to Bonsai Users
As recommended, I am using a crop-threshold-background-subtraction branch for each of my ROIs (I have three). Because of my experimental setup, I sometimes detect motion in more than one ROI simultaneously (the mouse moving in one region, and a wire dangling in another). I am interested in finding out which ROI the mouse is in (i.e. which ROI has the largest activation), because I then need to activate the corresponding food pellet dispenser (each ROI has its own). How can I do this in Bonsai?

Many thanks!
Polina


goncaloclopes

Apr 28, 2016, 9:56:33 AM
to Bonsai Users
Hi Polina,

This is an interesting question. I believe there are several ways to do this, but I present here one possible solution that I think is relatively easy to understand. Here it is schematically (also attached as an example workflow):

So the basic idea is that you compute the activation for each ROI, as you are probably doing already, and then you combine everything using Zip. Then what you need to do is find the maximal activation, while also keeping track of which ROI generated it. This is done in the ExpressionTransform.


Finally, now that you have the maximal activation and which ROI produced it, you create 3 branches, one for each reward port/LED, and check (the green Condition nodes) whether the maximal ID matches the port of that branch and whether its activity exceeds the threshold. If that's true, you write to the DigitalOutput.


Most likely you will have to modify the specifics of this to fit your exact case, but hopefully this will give you a place to start.
roiactivation.bonsai

Polina Litvak

May 2, 2016, 10:00:37 AM
to Bonsai Users
Hi Gonçalo,

Thank you for the detailed explanation and the workflow. I have a few questions :

1. My scenario is a bit more complicated - there are certain rules that the mouse needs to obey in order to activate a food dispenser. This means that I need to keep track of the last two ROIs the mouse has activated and, in case the order of activation is correct, feed it using the right food dispenser (there are three, one for each arm of the M-shaped maze). What would be the best way to achieve this? I would happily code this in Python, if possible.

2. The ROI motion detection of the workflow doesn't seem to work correctly - the ExpressionTransform says 'port 2' at all times, even though the mouse is moving in other ROIs. Would you be able to take a look at the video recording I am using for testing and maybe point me in the right direction as to why my motion detection isn't working correctly? Here is a link to the video. My ROIs are the three arms of the M maze (the bottom part of each of the arms).


Many thanks for  your help!
Polina


goncaloclopes

May 3, 2016, 4:38:14 AM
to Bonsai Users
Hi Polina,

1. You can actually extend the workflow easily in order to handle activation transitions. Basically you can use the pattern for comparing adjacent data pairs in time, which is to Zip the sequence minus the first element (Skip(1)) with the original sequence. You end up with a sequence of pairs (t-1,t). Here's what it looks like:
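The Skip/Zip pairing pattern can be sketched over a plain list (a rough Python analogy, not Bonsai code):

```python
def adjacent_pairs(seq):
    # Zip the source minus its first element (Skip(1)) with the original:
    # each output element pairs the item at t-1 with the item at t.
    return list(zip(seq, seq[1:]))
```

For example, `adjacent_pairs([1, 2, 3, 2])` yields the transitions `[(1, 2), (2, 3), (3, 2)]`.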


I'm assuming you want to keep different thresholds for each of the ROIs. If a single threshold would do, you could drop the branch-and-merge section and use a single condition.



2. Actually, the expression I included in the last workflow was incorrect: there was a comparison missing for the last port. Here is the corrected expression:

new(
Item2 > Item3 ? (Item1 > Item2 ? 1 : 2) : (Item1 > Item3 ? 1 : 3) as port,
Math.Max(Item1,Math.Max(Item2, Item3)) as activation)
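For reference, the same selection logic can be written out in Python (a sketch only; in the workflow it lives inside the ExpressionTransform above):

```python
def max_port(item1, item2, item3):
    # Pick which of the three ROI activations is largest (ports 1-3),
    # mirroring the nested conditionals in the corrected expression.
    if item2 > item3:
        port = 1 if item1 > item2 else 2
    else:
        port = 1 if item1 > item3 else 3
    return {"port": port, "activation": max(item1, item2, item3)}
```

So `max_port(5, 9, 3)` reports port 2 with activation 9.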

I'm attaching the revised workflow for reference.
Hope this helps!
roiactivation.bonsai

Polina Litvak

May 3, 2016, 11:07:52 AM
to Bonsai Users
Hi Gonçalo,

This is great - I seem to detect motion properly now, but I am still not able to keep track of the ROI activation order. This is what I am seeing in the last Zip node, after the Skip:

({port=3, activation=13770}, {port=3, activation=50490})

The two elements of each pair always list the same port (port = 1, port = 1 or port = 2, port = 2, etc.)

Can you briefly explain how the Skip node works? Is it possible to view the sequence of elements in the Zip node? You are saying that I should end up with a sequence of pairs (t-1,t) - what do the t times signify? How often are they recorded? For this to work, I need t to be the times of ROI activation events, so that I can check whether the events at t and (t-1) activate two consecutive ROIs, which would be correct behavior in our case.

Thank you very much for all your help!
Polina


goncaloclopes

May 4, 2016, 5:51:12 AM
to Bonsai Users
Hi Polina,

The ROI activation is evaluated every frame. The workflow up to ExpressionTransform is simply calculating the port which has the maximal ROI activation above the threshold. However, once an ROI is activated, it can remain so across multiple frames.

What the Skip node is doing is pairing the maximal ROI activation on one frame with the previously reported maximal ROI activation. What you are seeing is that, most of the time, the last activated port is the same as the currently activated port, in which case the two ports appear the same. However, in there you should also have the events in which the port changes. You can maybe see this more easily if you switch quickly between two ports, or if you expand the size of the text visualizer window so that it shows more frames.

You can write a little expression to discard the pairs that have the same port:

Item1.port != Item2.port

Just place this inside a Condition node after the Zip and I think this should be what you are after.
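As a rough Python analogy (hypothetical dictionaries standing in for the expression's tuples), the Condition keeps only the transition events:

```python
def port_changes(pairs):
    # Keep only (previous, current) pairs where the port actually changed,
    # mirroring the Condition expression Item1.port != Item2.port.
    return [(prev, curr) for prev, curr in pairs if prev["port"] != curr["port"]]
```

Pairs where the mouse stayed in the same ROI across frames are silently dropped, leaving one event per ROI change.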

Hope this helps,

Polina Litvak

May 17, 2016, 9:33:14 AM
to Bonsai Users
Hi Gonçalo,

Thanks! Adding the Condition node after the last Zip works, and I now only output pairs that have different ports. My next question is this - what is the easiest way to identify port pairs that differ by one and, when such a pair arrives (i.e. the mouse moves correctly from one maze arm to another, without skipping an arm), send a signal to a food dispenser (via an Arduino)? From reading the forums, I gather this should be possible using a Repeat node/Boolean node, but I am not totally sure how to set this up to fire only once for every new (and correct) port pair.


Many thanks,
Polina

goncaloclopes

May 18, 2016, 9:03:00 PM
to Bonsai Users
Hi Polina,

What does the signal to the dispenser need to be? Is it just a TTL that goes high for a certain period of time and then stops?
This looks like it could be done by setting up another condition after the one you already have (you could fold everything into a single condition, but maybe it's best to keep them separate to make things more clear).

Basically the condition this time can just be, if I understand correctly: Math.Abs(Item1.port - Item2.port) == 1
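In plain Python terms (a sketch, not the workflow itself), that test amounts to:

```python
def is_adjacent_move(prev_port, curr_port):
    # True when the mouse moved to a neighboring arm without skipping one,
    # i.e. Math.Abs(Item1.port - Item2.port) == 1.
    return abs(prev_port - curr_port) == 1
```

A 1 -> 3 jump fails the test, while 1 -> 2 or 3 -> 2 passes.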

Now at the output of this condition you should have the events that will trigger your dispenser. For each of these events you now want to send two signals to the Arduino, one immediately to set the TTL high, and another one later to set it low. Whenever you want to generate "many" events from a single event, you can use the SelectMany node. This is a nested node where you can specify a mini-workflow that will run for each of your input events.

For example, you can use a Boolean to turn the Arduino DigitalOutput on immediately, followed by a Delay and a BitwiseNot (to flip the boolean) to turn it off again. Something like this:
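In case the screenshot is missing here, the high -> delay -> low sequence inside the SelectMany can be sketched in Python (`write_pin` is a hypothetical stand-in for the DigitalOutput node):

```python
import time

def pulse(write_pin, duration=1.0):
    # For each triggering event: drive the pin high (DigitalOutput),
    # wait (Delay), flip the boolean (BitwiseNot) and drive it low again.
    level = True
    write_pin(level)
    time.sleep(duration)
    level = not level
    write_pin(level)
```

Each dispenser event thus produces exactly two writes: True, then False after the delay.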


Be aware that if you don't have anything else using the Arduino in your workflow, having all this inside the SelectMany will basically switch the Arduino on and off every time (because it is only in use while handling this event). This may add significant latency to processing the TTL. In order to avoid this, you can actually place the last DigitalOutput after the SelectMany, so that it is always "active".


Anyway, there are many ways to solve this depending on the exact logic you were after. I'm not sure I got all the details though so do let me know if there are additional complications!

Best,

Polina Litvak

May 19, 2016, 9:41:53 AM
to Bonsai Users
Thank you for the detailed write up Gonçalo. 

You've lost me with the following: 'whenever you want to select "many" events from a single event, you can use the SelectMany node' - I don't quite understand when I would select many events vs. one, and am also unclear about the role of the Take node - is it there to grab one event from the list of events that will trigger my dispensers?

All I need to do is signal one of three dispensers to drop a food pellet when the corresponding event arrives. I've added three Condition nodes (three branches), one for each event type (port1, port2 or port3), and I assumed that Item1.port from the tuple (Item1, Item2) coming out of the Skip node will contain the ROI activated at time t.

I would like to keep Arduino active to minimize delays and simply send it a TTL that will go high for a period of time for each of my events.

Thanks!
Polina



goncaloclopes

May 21, 2016, 8:09:34 PM
to Bonsai Users
Hi Polina,

Sure, no worries, SelectMany can be a tricky concept to get, but it's a very useful one. Let me start quickly from the beginning:

1) you can think of every single node in Bonsai as producing a sequence of events.

2) it just so happens that some nodes can produce events by themselves (sources), while other nodes can only produce events by reacting to other events (combinators).

3) some combinators produce only one output event for every input event. This is the case of transforms like Grayscale: every time there is an event with an input image, Grayscale converts the image to grayscale and generates an event with the result image.

4) even more interesting are combinators that can produce multiple output events for every input event. For example, you can think of your "pulse" as two events: one for the pulse going "high" and another some time later for the pulse going "low", i.e. the "pulse" is defined by the sequence of events high->low. SelectMany is the node that allows you to specify that in response to one input event you want a whole sequence of output events. Does this make sense?

For example, your dispenser could be defined by something like this:


Where basically, for every input, you take a Boolean, write it to the Arduino, then wait for some amount of time (Delay), flip the Boolean (BitwiseNot) and write it back to the Arduino. In fact, looking at this, you may not even need the SelectMany; it may just work to paste this sequence at the end of the activation workflow you have now.


Anyway, hope this helps.

Polina Litvak

Jul 26, 2016, 8:05:31 AM
to Bonsai Users
Dear Gonçalo,

Thanks for all your input. I am attaching the final version of my workflow and was wondering if you could take a look and provide feedback regarding simplification and performance concerns.

Also, I am struggling with the following, perhaps you could help:

1. How do I save position and time (timeAndPos.csv) such that they can be read into Matlab in matrix form (each field in a separate column)?
2. How do I include time in the armChange.csv and correctArmChange.csv data files? When I add a link from my existing Timestamp node to the Zip node in front of the ArmChange node, I get an out-of-memory exception.

Many thanks,
Polina

master-copy.bonsai

goncaloclopes

Jul 27, 2016, 11:01:05 PM
to Bonsai Users
Hi Polina,

I went through the file and reorganized and simplified things to give you an example of how I would usually do it. I hope you don't mind, but I'm also including a Before and After image as an attachment, because I think this was a really good example of cleaning up and refactoring that I would like to keep as a reference for later posts.

The general spirit of a nice Bonsai workflow is to avoid branching, so you need to prune a lot (hence the name Bonsai!). Even though I do want to improve support for bigger workflows, for now I see the ugliness of the layout as an advantage, reminding me that a workflow is too complicated.

Some useful tips on pruning:

- Sink nodes (e.g. CsvWriter, VideoWriter, etc.) can be placed anywhere in the pipeline without changing anything, so there is no need to create new branches just for them. Also, some sinks like CsvWriter allow you to specify a selector in the node to pick which elements you want to save, without using extra branches and Zip nodes.

- You can group semi-independent parts of the workflow (e.g. Tracking and the ArmActivation) into nested workflows for improved readability. You can even group workflows as Sink groups, which allow you to process some workflow just for saving and keep everything else the same (I used a couple of them here).

- Starting from Bonsai 2.2, you can now pipe outputs into shared variables (i.e. Subject nodes). These variables can be subscribed to from anywhere in the same or a lower workflow level (e.g. with SubscribeSubject). This allows you to prune those annoying branches that connect two otherwise unrelated workflows (e.g. in your case the video and ephys acquisition).

Hopefully this workflow is equivalent, and if not, it should be easier to modify and fix.

Also, I've added the timestamps you were asking about. Basically, the problem is that you cannot use Zip in the case of your ArmActivation, because these events do not happen every frame. Zip requires a 1-1 correspondence between the combined events, which does not hold in this case (i.e. you have many more images and timestamps than activation events), hence the memory accumulation.

The way to do it in this case is to use CombineLatest, which is usually more appropriate when you have independent streams. In this case you take the Latest timestamp and use it to stamp the current activation.
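The difference can be sketched in Python over a single arrival-ordered stream (the ("ts", ...)/("evt", ...) encoding here is just for illustration):

```python
def stamp_activations(stream):
    # CombineLatest-style pairing: remember the most recent timestamp and
    # attach it to each sparse activation event, instead of Zip's strict
    # 1-1 pairing, which would buffer unmatched timestamps without bound.
    latest_ts = None
    stamped = []
    for kind, value in stream:
        if kind == "ts":
            latest_ts = value
        else:  # an activation event: stamp it with the latest timestamp
            stamped.append((latest_ts, value))
    return stamped
```

Frequent timestamps simply overwrite each other until a rare activation event arrives to consume the latest one.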

Hope this helps, let me know if you have further questions.
master-copy-rev.bonsai
beforeafter.png

Polina Litvak

Jul 28, 2016, 5:29:11 PM
to Bonsai Users
Thanks a lot for your feedback, this is great!
I'd like to ask one more question - in my CorrectArmChange at the moment I only check that the mouse navigates the M maze arms in order, without skipping (inner arm to outer arms and vice versa), but it turns out I need to keep track of the last two ROI activations to only reward the mouse on one of the following sequences (these being the only valid arm navigations):

1 -> 2 -> 3
3 -> 2 -> 1
2 -> 3 -> 2
2 -> 1 -> 2

What would be the correct way of doing this?

Best,
Polina 


goncaloclopes

Jul 29, 2016, 12:18:10 AM
to Bonsai Users
I guess you can just test explicitly for the activation pairs. Some of the pairs in the list you showed are redundant (repeated). Presumably these are the distinct valid ones:

1 -> 2
2 -> 3
3 -> 2
2 -> 1

So you can just add these tests to the ExpressionTransform inside CorrectArmChange:

(Item1.port == 1 && Item2.port == 2) ||
(Item1.port == 2 && Item2.port == 1) ||
(Item1.port == 2 && Item2.port == 3) ||
(Item1.port == 3 && Item2.port == 2)
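Or, as a Python sketch of the same test:

```python
# The four distinct valid arm-to-arm moves from the sequences above
VALID_PAIRS = {(1, 2), (2, 1), (2, 3), (3, 2)}

def is_valid_transition(prev_port, curr_port):
    # Mirrors the ExpressionTransform test over the last two ROI activations.
    return (prev_port, curr_port) in VALID_PAIRS
```

Any transition involving a skipped arm (1 -> 3 or 3 -> 1) is rejected.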

I think this should work, but I don't know much about the context of your task. Anyway, the general idea is that you now have the last two ROI activations in your input, so you can test for whatever you like.

Hope this helps.

goncaloclopes

Jul 29, 2016, 12:22:34 AM
to Bonsai Users
Ah, sorry, I didn't realize you actually wanted three ROI activations.

In this case, just add one more branch with a Skip node to the Zip, with the skip value set to 2. This will combine, at the output of the Zip, the 3 most recent ROI activations, which you can then test with an expression similar to the one I showed above.
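The three-element window built by the extra Skip branch behaves like this Python sketch:

```python
def sliding_triples(seq):
    # Zip of the source with its Skip(1) and Skip(2) branches: each output
    # combines the three most recent ROI activations (t-2, t-1, t).
    return list(zip(seq, seq[1:], seq[2:]))
```

Each triple can then be tested against the valid sequences (1,2,3), (3,2,1), (2,3,2) and (2,1,2).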

I'm attaching a workflow with the Zip section done, but you will need to fill in the ExpressionTransform appropriately.
master-copy-rev2.bonsai

Martin Both

Jul 29, 2016, 11:34:25 AM
to Bonsai Users
Hi Goncaloclopes,

I am working with Polina and have an additional question:

It takes around 4 seconds after the right location is detected for the Arduino board to activate the respective channel. Can I somehow accelerate this? There are Delay nodes between the yes/no decision and the activation of the Arduino output via the COM port in the workflow; if I take them out or reduce them to less than 2 seconds, there is no output.

thanks!
Martin

goncaloclopes

Jul 29, 2016, 7:58:18 PM
to bonsai...@googlegroups.com
Hi Martin and welcome to the forums!

Yes, I noticed this in Polina's workflow but forgot to warn her. The way the Arduino nodes work is they shut off the connection to the COM port when not in use. The problem with this in the activation task is that you are turning the nodes on and off only at certain times.

There is a very, very easy way to solve this for now: simply add an AnalogInput or DigitalInput to the topmost workflow - just a node that stays there, unconnected to the rest of the workflow. This will guarantee that there is always an active connection to the Arduino and should be enough to get rid of all the delays; changes should be observed immediately.

Let me know if this helps.

Martin Both

Aug 1, 2016, 6:52:07 AM
to Bonsai Users
Hi Goncaloclopes,

thanks a lot! Now it works perfectly!

I have another problem, though. The camera sends trigger pulses each time a frame is captured and moved to RAM. The inter-trigger intervals are of perfectly consistent length. The timestamps of the frames are well within normal jitter around the same average time interval. However, there are more triggers in the ephys data than frames that are actually saved. I believe that once the camera is started, it moves frames into RAM and produces triggers, while taking the frames from RAM and saving them to disk in Bonsai starts a little bit later, when the ephys is already running. Thus, more triggers are recorded than frames saved. Unfortunately, it is now hard for me to assign the saved frames to the triggers and find lost frames. Is there a general way to do this easily?

Thank a lot again!
Martin

Gonçalo Lopes

Aug 1, 2016, 6:56:36 AM
to Martin Both, Bonsai Users

Hi Martin,

 

Does the inter-trigger interval in the ephys match the interval distribution you get from video acquisition timestamps?

--
You received this message because you are subscribed to the Google Groups "Bonsai Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bonsai-users...@googlegroups.com.
Visit this group at https://groups.google.com/group/bonsai-users.
To view this discussion on the web visit https://groups.google.com/d/msgid/bonsai-users/d25fdf2e-7fa0-49cf-b829-f6189c3b0b35%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

 

Martin Both

Aug 1, 2016, 7:13:24 AM
to Bonsai Users, martin...@gmail.com
Hi Goncaloclopes,

Yes, the median from the video timestamps matches the median from the ephys trigger input.

Gonçalo Lopes

Aug 1, 2016, 7:45:46 AM
to Martin Both, Bonsai Users, martin...@gmail.com

This makes me think the frame loss happens at the beginning or the end. Let’s go step by step. Can you split the workflow into two separate Bonsai files that you can run independently? I need to know the answer to the following questions:

 

1. If you start ephys with no video capture node, do you still see pulses?

2. If you stop video capture but keep ephys running, do you still see pulses?


Martin Both

Aug 3, 2016, 9:34:33 AM
to Bonsai Users, martin...@gmail.com
This is a very good suggestion, thank you! I am on holiday right now but will test this when I'm back. Thanks again for answering so quickly!

Martin Both

Aug 24, 2016, 4:15:53 AM
to Bonsai Users, martin...@gmail.com
Hi Goncaloclopes,

I made some tests by turning on an LED during recording and comparing the TTL signal of the camera and the TTL signal from the LED recorded by the Intan digital inputs with the timestamps of the images where I saw the LED in the video. It looks as if there are some additional frames at the beginning (ephys starts around 1.8 seconds after frames start to be taken) and the video stops around 0.2 seconds before the ephys is stopped.
I guess I will just place an LED somewhere where I can record it with the camera and where it doesn't disturb the other recordings, to synchronize each experiment. What is the best way to turn on an LED about 2 seconds after the start of the recording and turn it off again a second later?

Thanks again!
Martin

Gonçalo Lopes

Aug 24, 2016, 11:23:46 AM
to Martin Both, Bonsai Users

​Hi Martin,

I'm curious, what happens if you use the DelaySubscription node to delay video initialization by 5 seconds for example? Are you able to compensate for this initialization delay?

Anyway, if you have access to an Arduino, something like the following workflow will suffice for the LED synch:


Basically you use a Timer set to a DueTime of 2 seconds, and a GreaterThanOrEqual node to convert the number into a boolean (True). This boolean is sent to a digital pin; then you Delay the event for 1 second, flip the boolean from True to False with BitwiseNot, and write it again to the digital output.
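As a timeline, the Timer/Delay combination produces exactly two digital writes (a sketch using the 2 s DueTime and 1 s Delay from above):

```python
def led_schedule(due_time=2.0, delay=1.0):
    # Timer fires at due_time -> write True (LED on);
    # Delay + BitwiseNot -> write False one second later (LED off).
    return [(due_time, True), (due_time + delay, False)]
```

With the defaults, the LED turns on 2 seconds into the recording and off at 3 seconds, giving a sync marker visible in both the video and the ephys.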

Hope this helps.


Martin Both

Aug 30, 2016, 10:49:10 AM
to Bonsai Users, martin...@gmail.com
Thank you, that works fine! 
I tried to delay the video initialization, but I cannot put the DelaySubscription in front of the uEyeCamera node. Alternatively, is there a possibility to start the camera initialization triggered by the end of the ephys initialization?

Gonçalo Lopes

Aug 30, 2016, 11:27:26 AM
to Martin Both, Bonsai Users
Hi Martin,

I'm curious, what exactly happens when you place DelaySubscription in front of the uEyeCapture node? Is there a Bonsai error at build time or run time? Is the behavior unchanged?

Anyway, yes, you can condition the video initialization on ephys initialization by using the SubscribeWhen node. If you use the first acquired sample from ephys as a trigger to subscribe to the camera, you will be sure that video capture is only started when the ephys pipeline is fully initialized. See below:


Hope this helps.


Martin Both

Aug 30, 2016, 11:34:01 AM
to Bonsai Users, martin...@gmail.com
Cool, I'll try that.
Bonsai just does not let me connect the DelaySubscription to the uEyeCamera node. Or do I have to connect it the other way round (from the uEye to the DS)?

Gonçalo Lopes

Aug 30, 2016, 11:49:31 AM
to Martin Both, Bonsai Users
Ah, that's it then, indeed you connect it the other way around (from uEye to the DS).

The way to think about it is that the DelaySubscription node modifies the behavior of any sequence it receives as input. That is, you can put it in the middle of any workflow and it will behave exactly the same way, except that the initialization of the sequence will be delayed by the time specified in the DelaySubscription.


Martin Both

Aug 31, 2016, 4:35:17 AM
to Bonsai Users, martin...@gmail.com
Hi Goncaloclopes,

I have another question. I have set up Bonsai on the computer in my office to test and develop the workflow. Now I get an error message: 'could not load file or assembly 'Bonsai.uEye, Version=2.0.0.0, Culture=neutral, PublicKeyToken=null''. Can you help me? Unfortunately, Polina left us, so I have to ask you.

Best, Martin

Gonçalo Lopes

Aug 31, 2016, 5:24:13 AM
to Martin Both, Bonsai Users
Hi Martin,

This is most likely because you need to install the uEye prototype package. It's not yet included in the official distribution. You can find it in this thread:




Martin Both

Aug 31, 2016, 7:57:28 AM
to Bonsai Users, martin...@gmail.com
Yes, I thought so, and now it works!
Thanks again!

Martin Both

Sep 2, 2016, 11:11:54 AM
to Bonsai Users, martin...@gmail.com
Dear Goncaloclopes,

I've played around trying to synchronize ephys and camera, but it's harder than I thought. If I use a frame rate below 15 Hz, everything looks fine, as my computer is fast enough to detect the position and turn on the food dispensers within the time the next frame is taken. However, if the frame rate is higher, frames are lost on a regular basis. I can try to synchronize by turning on the LED and matching the timestamp of the first image where I see the LED lighting up with the time point where I see it in the digital input of the Intan. However, if exactly this frame was lost, I cannot synchronize.
I tried to postpone the initialization of the camera like you told me, with DelaySubscription or with SubscribeWhen, but it doesn't work. I can put any number I want into DelaySubscription; it does not have any effect.
There is another interesting point. Sometimes the last frame is taken and saved as an image (I now save each image as a JPG instead of an AVI movie), and this frame is counted and stored in the videoMetadata file I save, but it is not processed for position, so the timeAndPosition file has one entry less. Does that happen when I press the stop button after the frame has been saved but before it has been processed?

Have a nice weekend!
Martin

Gonçalo Lopes

Sep 3, 2016, 5:47:49 AM
to Martin Both, Bonsai Users
I see, indeed it looks like your image processing pipeline is too heavy for your data volume. Can I ask what is the resolution of your camera? Also, are you processing color or grayscale images? What are your CPU specs?

One possibility is to use Resize and Grayscale to try and reduce the data size of each image to something that your computer can handle. Also, it is possible that there are some redundant operations in the workflow that can be optimized to create a faster pipeline. Feel free to post your current workflow so we can take a look.

I find it very curious that DelaySubscription didn't work. If I remember correctly, you reported before that when running the Ephys node alone you don't detect any camera sync pulses in the Ephys auxiliary analog inputs. Is this correct? If so, it should indeed be possible to control things using DelaySubscription, but we really need to be sure that the camera is not simply sending pulses all the time (the PointGrey cameras, for example, certainly send pulses all the time, which makes syncing more annoying).
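As an aside for readers used to text-based code, the difference between DelaySubscription and a plain Delay can be sketched in Python (this illustrates the reactive concept only, not Bonsai's actual implementation): DelaySubscription defers the act of subscribing, so the source is not started at all until the delay elapses.

```python
import threading

def delay_subscription(subscribe, seconds):
    """Defer calling subscribe() until `seconds` have elapsed.

    Nothing runs before the timer fires: the source is never started,
    so a camera wrapped this way would emit no sync pulses until then.
    """
    timer = threading.Timer(seconds, subscribe)
    timer.start()
    return timer

# Usage: start a (hypothetical) camera object 5 seconds into the workflow.
# delay_subscription(camera.start, 5.0)
```

By contrast, Delay starts the source immediately and only time-shifts its elements, which is why a camera that sends hardware pulses from the moment it is plugged in defeats this strategy.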

Regarding your last question: yes, it can indeed happen that when you press the stop button a frame has been saved but not yet processed, so a one-frame difference at the end is to be expected. If this is really a problem there are ways to control it, but they make the workflow slightly more complicated, so usually people don't bother since it is at most one frame (if there is more than one frame of difference, then that is another issue).

To unsubscribe from this group and stop receiving emails from it, send an email to bonsai-users+unsubscribe@googlegroups.com.

Martin Both

unread,
Sep 5, 2016, 8:55:03 AM9/5/16
to Bonsai Users, martin...@gmail.com
The resolution of the camera is 1280x1024, and you are right, we do not need that much. I tried to use the camera's 2x2 hardware binning, but unfortunately Bonsai crashes while initializing the camera. I wanted to use 4x4 binning, but the camera only supports 2x2. I now also resize the image to 340x256 before processing, and I process in grayscale.
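As an editorial aside, a quick back-of-the-envelope calculation shows why Resize plus Grayscale helps so much here (assuming 8-bit pixels, 3 color channels at full resolution, and a nominal 30 fps; the actual frame rate and color mode of this camera may differ):

```python
def data_rate_mb_per_s(width, height, channels, fps):
    """Raw video bandwidth in megabytes per second, assuming 8-bit channels."""
    return width * height * channels * fps / 1e6

full = data_rate_mb_per_s(1280, 1024, 3, 30)   # full-resolution color: ~118 MB/s
small = data_rate_mb_per_s(340, 256, 1, 30)    # 340x256 grayscale: ~2.6 MB/s
print(round(full / small))                     # roughly a 45-fold reduction
```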
Attached you will find my current workflow. Did I implement DelaySubscription correctly? It still doesn't do anything.

I will check again what pulses the camera sends.

Is it possible, to can the system time/timestamp of the first ephys data that was recorded?
MMaze_v6.svg
MMaze_v6.bonsai
MMaze_v6.bonsai.layout

Martin Both

unread,
Sep 5, 2016, 8:56:58 AM9/5/16
to Bonsai Users, martin...@gmail.com
...sorry for the last sentence, I'm a bit in a hurry.

Is it possible, to save the system time/timestamp of the first ephys data sample that was recorded?

goncal...@gmail.com

unread,
Sep 5, 2016, 9:36:55 AM9/5/16
to Martin Both, Bonsai Users, martin...@gmail.com

Yes, you can. Add a Timestamp node after the ephys source and then create a branch with Take(1) and CsvWriter to save the data.
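In Python terms, that branch behaves roughly like the sketch below (the column names here are made up for illustration): stamp each incoming sample with its arrival time, keep only the first pair (Take(1)), and write it out as CSV.

```python
import csv
import io
from datetime import datetime, timezone

def write_first_timestamp(samples, out):
    """Timestamp -> Take(1) -> CsvWriter: record when the first sample arrived."""
    writer = csv.writer(out)
    writer.writerow(["Timestamp", "Value"])
    for value in samples:
        writer.writerow([datetime.now(timezone.utc).isoformat(), value])
        break  # Take(1): only the first element matters
    return out

# Usage with a stand-in for the ephys stream:
buf = io.StringIO()
write_first_timestamp(iter([0.12, 0.13, 0.11]), buf)
```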

 

Not sure why it crashes when initializing with binning. Any error messages? Did the resize improve performance? Images are not terribly large…

 

Will look into the workflow later.

 

Martin

Cool, I try that. 

https://lh5.googleusercontent.com/proxy/HI_xplUX9wcR672l8l-KsPfIsz0FQIT90O4tqx2I-xj3j00TBtBox7Q5NVLRSdye9GBcPDPcqqvw1FHQCaAU5QOFHBz9FQUuQmt5KJVzQqQR83WHV22u_7qWoc8lEe3YipLCxCkNrfGNZUzYJdFeOCkeEtkE=w5000-h5000

 

Hope this helps.

 

On 30 August 2016 at 15:49, Martin Both <martin...@gmail.com> wrote:

Thank you, that works fine! 

I tried to delay the video initialization, but I cannot put the DelaySubscription in front of the uEyeCamera node. Alternatively, is there a possibility to start the camera initialization triggered by the end of the ephys initialization?

On Wednesday, 24 August 2016 at 17:23:46 UTC+2, goncaloclopes wrote:


​Hi Martin,

 

I'm curious, what happens if you use the DelaySubscription node to delay video initialization by 5 seconds for example? Are you able to compensate for this initialization delay?

 

Anyway, if you have access to an Arduino, something like the following workflow will suffice for the LED sync:

 

https://lh6.googleusercontent.com/proxy/2Dg0t4O50dKSlemi7W-WeH6u-JM4E6YVIV08ftyiKPDERocVGkh1a5XgTBl076CjYKUkLOXODF2Gw27j_L3TgKIjSw-Fjy0Zl10o15W7wKUBrhZb6D9u-cg6lMLOwH-0zQ8i9iIVIcgSovofc5bssO8HouSAjpk=w5000-h5000

Matthias Klumpp

unread,
Sep 26, 2016, 10:21:55 AM9/26/16
to Bonsai Users, martin...@gmail.com
Hello!

I am now also working on this project. My experience with Bonsai is still limited (I have just gone through a few examples), but I am quite experienced in text-based programming languages.
We are still not able to synchronize the recordings or delay the camera subscription properly, but it looks like we will soon be able to trigger the camera from Bonsai instead, which would resolve this issue as well.

On Monday, 5 September 2016 at 15:36:55 UTC+2, goncaloclopes wrote:
> [...]

> Not sure why it crashes when initializing with binning. Any error messages? Did the resize improve performance? Images are not terribly large…
> Will look into the workflow later.

I have not yet reproduced the bug, but will do so soon. Did you look into the workflow and spot any obvious mistakes we should be aware of?

Kind regards,
    Matthias Klumpp

Gonçalo Lopes

unread,
Sep 26, 2016, 4:01:13 PM9/26/16
to Matthias Klumpp, Bonsai Users, Martin Both
Hi Matthias and welcome to the forums!

There is nothing obviously wrong with the workflow. My suspicion is still that this particular camera is probably sending pulses from the moment that you plug it into the computer (i.e. same as PointGrey).

Looking back over the conversation with Martin, I realize we never ended up confirming whether that was the case (i.e. if you have a workflow with no uEye capture, just Rhd2000EvalBoard, do you still see camera pulses?).

I think this is the most relevant test to do right now. If this is the case, we have to figure out some alternative strategy. Triggering the camera frames explicitly from hardware is one possibility. Another might be to find a mode in which the camera stops sending pulses. For example, in the PointGrey cameras, when the camera is in trigger mode it stops sending out pulses (since they are physically locked to the shutter).

Let me know how the tests go and we can decide from there.


Paolo Botta

unread,
Nov 5, 2017, 12:51:03 PM11/5/17
to Gonçalo Lopes, Bonsai Users
Hi Goncalo,

I am trying to set a crop area of the maze and detect the centroid of the black disk on top of a white background.
Unfortunately I do not understand why it does not track it. In CropPolygon I have selected an ROI using the BinaryInv mask type. In addition, I have been using a Threshold of 105 with BinaryInv and a MaxValue of 255. Even changing the order of the nodes, it does not work. I have also tried decreasing the minimum size for detection in FindContours. Do you have any suggestions?

Inline image 1

Thanks,
Paolo



Paolo Botta
Postdoctoral fellow, Costa lab
Zuckerman institute
Columbia University
Tel. +1 (347) 525-3666 | Skype paolobotta | LinkedIn

Gonçalo Lopes

unread,
Nov 5, 2017, 8:00:20 PM11/5/17
to Paolo Botta, Bonsai Users
Hi Paolo,

Usually you would not use the BinaryInv option with CropPolygon, otherwise the structure inside the image gets lost (as you can see in the visualizer for CropPolygon). Rather, you should leave it at its default value of ToZero, or ToZeroInv. These will simply set the pixels outside of the mask to zero, leaving the pixels inside intact.
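To make the ToZero behaviour concrete, here is a plain-Python sketch (nested lists stand in for image matrices; the real node operates on OpenCV images): pixels outside the mask become zero while pixels inside keep their values, so Threshold and FindContours downstream still see the animal.

```python
def crop_to_zero(image, mask):
    """Zero every pixel where mask is False; keep the rest intact."""
    return [[px if keep else 0 for px, keep in zip(row, mrow)]
            for row, mrow in zip(image, mask)]

image = [[10, 200, 30],
         [40, 250, 60]]
mask = [[False, True, False],
        [False, True, False]]
print(crop_to_zero(image, mask))  # [[0, 200, 0], [0, 250, 0]]
```

With a Binary mask type, by contrast, the pixel values themselves are replaced by binary values, which is why the structure inside the region is lost.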

If you do this, the rest of the pipeline should work as expected. The visualizers can help you make sure that all the steps of the workflow make sense.

Hope this helps.


Paolo Botta

unread,
Nov 7, 2017, 12:50:16 PM11/7/17
to Bonsai Users
Thanks a lot. It worked!

Can I now trigger a servo motor when the animal is in an ROI?
What do you suggest?


Gonçalo Lopes

unread,
Nov 9, 2017, 9:20:22 AM11/9/17
to Paolo Botta, Bonsai Users
Hi Paolo,

There are a lot of posts in the forums already about triggering different devices with visual ROIs:

Can you check whether any of these posts helps with your case?


Paolo Botta

unread,
Nov 10, 2017, 2:48:57 PM11/10/17
to Bonsai Users
Hi Goncalo,

Thanks for the link! Bonsai is great!

I have managed to trigger the servo motor with one visit in the ROI.


Now the challenge is to use two ROIs and set a condition so that the trigger occurs only when the detected object enters one ROI and then the other in sequence (1 entrance in ROI1 followed by 1 entrance in ROI2 ->> triggering of the servo motor).
I have been searching the forum for this, but I could not find a similar example.

Thanks for giving any feedback!
Cheers,
Paolo

Gonçalo Lopes

unread,
Nov 11, 2017, 6:52:51 AM11/11/17
to Paolo Botta, Bonsai Users
Hi Paolo,

Just to clarify, would you like the two ROIs to be triggered in alternation constantly, i.e. first A, then B, then A, then B, and so on?



Paolo Botta

unread,
Nov 11, 2017, 9:59:26 AM11/11/17
to goncaloclopes, Paolo Botta, Bonsai Users
No. I want to trigger one servo motor based on the alternation of visits between ROI1 and ROI2 (4 pairs in total: roi1-roi2, roi1-roi2, roi1-roi2, roi1-roi2). At the moment I can trigger the motor only when the animal goes to one ROI.

Gonçalo Lopes

unread,
Nov 14, 2017, 8:14:07 PM11/14/17
to Paolo Botta, Paolo Botta, Bonsai Users
Hi Paolo,

Do you want to simply count the number of ROI entries and trigger every 4th entry? You can do this by using the Slice operator after the entry Condition. If you set the Step property to 4, this will create a trigger only on every 4th activation.

Hope this helps.
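In Python terms, the Slice idea looks roughly like this (the exact meaning of the Start property is an assumption here; with Start at 3 and Step at 4, the output fires on the 4th, 8th, 12th, ... entries):

```python
def slice_events(events, start=3, step=4):
    """Keep the elements at indices start, start + step, start + 2*step, ..."""
    return [e for i, e in enumerate(events)
            if i >= start and (i - start) % step == 0]

entries = list(range(1, 13))   # entry numbers 1..12
print(slice_events(entries))   # [4, 8, 12]: a trigger on every 4th entry
```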

Paolo Botta

unread,
Nov 15, 2017, 5:31:57 PM11/15/17
to Gonçalo Lopes, Bonsai Users
Yes, I would like to trigger the servo motor only after alternation between the two areas (A and B):

A-B, A-B, A-B, A-B (or even more repetitions of the A-B sequence) should turn on the servo motor.
The code should be fairly compact, as there will be other spatial sequences that trigger other servo motors.

So far I have managed to trigger the servo motor only when the animal enters in one area.

Any suggestion will be helpful.
Thanks a lot,
Paolo

Paolo Botta
Postdoctoral fellow, Costa lab
Zuckerman Institute
Columbia University
Tel. +1 3475253666 | Skype paolo...@gmail.com | LinkedIn

goncal...@gmail.com

unread,
Nov 15, 2017, 6:22:10 PM11/15/17
to Paolo Botta, Bonsai Users

Hi Paolo,

 

Can you share the workflow you are using currently (i.e. the Bonsai file, not an image)? This will make it easier to understand and modify what you have so far.

 

 


Paolo Botta

unread,
Nov 15, 2017, 6:33:21 PM11/15/17
to Gonçalo Lopes, Bonsai Users
Sure. Here it is.
Thanks
Paolo

Paolo Botta
Postdoctoral fellow, Costa lab
Zuckerman Institute
Columbia University
Tel. +1 3475253666 | Skype paolo...@gmail.com | LinkedIn


flycapture_triggeringSERVO.bonsai
flycapture_triggeringSERVO.bonsai.layout

Gonçalo Lopes

unread,
Nov 16, 2017, 1:41:30 PM11/16/17
to Paolo Botta, Bonsai Users
Hey Paolo,

Here is a modification that should do what you wanted:

Inline images 1

There are two parts to this:
 1) First I detect the entries into both areas and store them in shared variables (EnterA and EnterB, in the RED selection above).
 2) The alternation logic is then specified in a sequence below, which uses those variables to decide what to do (the BLUE selection).

Try modifying the sequence to see how it works. For example, you can swap EnterA and EnterB in the sequence and the activation order will change, or you can add two EnterA nodes before the EnterB to force an A->A->B sequence.

Hope this helps.
flycapture_triggeringSERVO.bonsai
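For readers more comfortable with text-based code, the EnterA/EnterB sequence above can be sketched as a small state machine in Python (the ignore-unmatched-entries behaviour is an assumption about how the workflow treats out-of-order visits; the Bonsai file is the actual implementation):

```python
def alternation_trigger(entries, pairs_needed=4):
    """Fire once the stream contains pairs_needed completed A->B alternations.

    Entries that do not match the currently expected ROI are ignored,
    mirroring a sequence that simply waits for the next expected event.
    """
    expecting, completed = "A", 0
    for roi in entries:
        if roi != expecting:
            continue  # ignore repeats and out-of-order entries
        if expecting == "A":
            expecting = "B"
        else:
            expecting = "A"
            completed += 1
            if completed == pairs_needed:
                return True  # this is where the servo would be triggered
    return False

print(alternation_trigger(list("ABABABAB")))   # True: four A->B pairs
print(alternation_trigger(list("AABBABAB")))   # False: repeats ignored, only 3 pairs
```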

Paolo Botta

unread,
Nov 18, 2017, 2:44:43 PM11/18/17
to Gonçalo Lopes, Bonsai Users
Thank you Goncalo, it is working well!

I have added a sequence of EnterA and EnterB nodes to trigger the servo motor. Additionally, the centroid of the animal can be followed in other areas.

Here is the modification that triggers the servo motor after repeated consecutive visits to the two areas. As I need to add extra servo motors and more ROIs, I will share the code once it is done.

Cheers,
Paolo

Inline image 1



Paolo Botta
Postdoctoral fellow, Costa lab
Zuckerman Institute
Columbia University
Tel. +1 3475253666 | Skype paolo...@gmail.com | LinkedIn
