Rotating images


Vincent Prevosto
Mar 28, 2016, 5:35:41 PM
to Bonsai Users

Hi,
I'm trying to build on previous topics related to heading and orientation. The idea here is to use the heading angle to rotate each frame so that, in the resulting video, the head (or animal) always lies on a vertical axis (the motivation is to perform some analysis on whiskers in freely behaving rodents). The workflow is attached below.
I had good success getting a heading angle with BinaryRegionExtremes (using the Vertical method) and feeding it to the "angleline" Python transform referenced above.
However, I fail at rotating the video frames. I combine the output of the Python transform (i.e., the detected heading) with the video stream in a CombineLatest node. Then I feed that to another Python transform, which goes as follows:


import clr
clr.AddReference("OpenCV.Net")
from OpenCV.Net import CV2

@returns(IplImage)
def process(value):
  angle = value.Item1
  img = value.Item2

  rows = img.Size.Height
  cols = img.Size.Width

  RMatrix = cv2.getRotationMatrix2D((cols/2,rows/2),angle,1)
  output = cv2.warpAffine(img,RMatrix,(cols,rows))

  return output

Apparently, CV2 doesn't work here.
What would be the appropriate method?

Also, an additional question: I made a small addition to the "angleline" Python transform so that, if there's no detected region, the output angle is the last detected angle (instead of NaN). See the line elif (pt1.X == Single(float.NaN)) and (head is not None): below.
But it doesn't seem to work. Any suggestions?

import clr
clr.AddReference("OpenCV.Net")
from System import Tuple, Math, Single
from OpenCV.Net import Point2f

head = None
tail = None

def distancesquare(pt1,pt2):
  dx = (pt2.X - pt1.X)
  dy = (pt2.Y - pt1.Y)
  return dx * dx + dy * dy

def angleline(pt1,pt2):
  dx = (pt2.X - pt1.X)
  dy = (pt2.Y - pt1.Y)
  return Math.Atan2(dy, dx)

@returns(Single)
def process(value):
  global head, tail
  pt1 = value.Item1
  pt2 = value.Item2
  if (pt1.X == Single(float.NaN)) and (head is None):
    head = None
    tail = None
    return float.NaN
  elif (pt1.X == Single(float.NaN)) and (head is not None):
    return angleline(head,tail)
  else:
    if head is None or distancesquare(pt1, head) < distancesquare(pt1, tail):
      head = pt1
      tail = pt2
    else:
      head = pt2
      tail = pt1
    return angleline(head,tail)


Thanks!
Vincent






Head_centered_RefFrame.bonsai

goncaloclopes
Mar 28, 2016, 6:21:57 PM
to bonsai...@googlegroups.com
Hi Vincent,

Cool stuff :-)

OpenCV.NET does not have "CV2", only "CV". However, all the functions you are looking for should be there. You can check the documentation here.
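For example, the same rotation could be written against the CV class along these lines. This is a minimal sketch, assuming OpenCV.Net mirrors the C API names (cv2DRotationMatrix as CV.GetRotationMatrix2D, cvWarpAffine as CV.WarpAffine) and the Mat/IplImage constructors shown; check the documentation to confirm the exact signatures:

import clr
clr.AddReference("OpenCV.Net")
from OpenCV.Net import CV, IplImage, Mat, Point2f, Depth, WarpFlags, Scalar

@returns(IplImage)
def process(value):
  angle = value.Item1
  img = value.Item2
  # pivot the rotation on the image center; note the C API expects the angle
  # in degrees, so convert first if the heading comes from Atan2 in radians
  center = Point2f(img.Size.Width / 2.0, img.Size.Height / 2.0)
  rotation = Mat(2, 3, Depth.F32, 1)  # 2x3 affine transform matrix
  CV.GetRotationMatrix2D(center, angle, 1, rotation)
  output = IplImage(img.Size, img.Depth, img.Channels)
  CV.WarpAffine(img, output, rotation, WarpFlags.Linear | WarpFlags.FillOutliers, Scalar.All(0))
  return output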
However, if you want to rotate frames given the angle, you can actually use the WarpAffine node directly, like this:

[workflow screenshot]
It's a bit involved, but once you get the hang of it it's actually quite powerful. There are two branches here:
 1) WarpAffine just warps the image, but it needs to be given the Transform matrix, which is computed in the second branch;

 2) The first part of this branch is just the normal tracking stuff (I just used color tracking here). After you get the largest object information (you can refine this with better heading information using Extremes, etc.), you need to create the transform matrix. You can do this with the AffineTransform node, which simply creates an affine transform matrix from the specified Translation, Rotation and Scale properties. Image transformations like rotation and scale are also applied around a defined Pivot point. So all that is left to do is specify these properties from the largest blob. In this case I used an expression script to do this (ExpressionTransform). It went like this:

new(
Centroid as pivot,
-Centroid.X + 320 as translationX,
-Centroid.Y + 240 as translationY,
Orientation as rotation)

Basically, I'm defining the Pivot, Translation and Rotation properties. What I want is to rotate the image by the heading angle (Rotation) around the centroid (which is my Pivot), and then offset it by the negative of the centroid (Translation) in order to bring the object to the image origin (0,0). Now, because the "origin" (0,0) in these images is in the top-left corner, we actually need to add (width/2, height/2), or (320,240) in this case, to bring the object to the "center".
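A quick way to sanity-check that mapping outside Bonsai (a plain Python sketch, not part of the workflow; the 640x480 frame size is assumed from the (320,240) offsets above):

import math

def warp_point(px, py, cx, cy, rotation):
  # rotate (px, py) around the pivot (cx, cy)
  c, s = math.cos(rotation), math.sin(rotation)
  rx = c * (px - cx) - s * (py - cy) + cx
  ry = s * (px - cx) + c * (py - cy) + cy
  # then translate by (-cx + 320, -cy + 240)
  return rx - cx + 320, ry - cy + 240

# the pivot itself lands on the image center regardless of the angle:
print(warp_point(100, 50, 100, 50, 0.7))  # (320.0, 240.0)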

I hope this makes sense. If you try it you should see that this workflow always keeps the largest object neatly aligned in the center. Of course, the angle can sometimes jump around, because the orientation only gives you a heading in [-pi/2 ; pi/2], but you can fix this with any of the other heading techniques discussed previously.

Finally, regarding your modifications to the angleline script: I think this may be because of the way you are comparing against NaN. In Python, the recommended way would be to either use the math module:
import math

@returns(bool)
def process(value):
  return math.isnan(value)
or the .NET Single NaN:
from System import Single

@returns(bool)
def process(value):
  return Single.IsNaN(value)
Either of these should work. In general, you don't want to compare against NaN using the equality operator, because all comparisons with NaN return false, as per the IEEE 754 standard.
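Applied to the angleline transform above, the NaN test would change along these lines (a sketch of just the relevant check; the region_missing helper is introduced here purely for illustration):

import clr
clr.AddReference("OpenCV.Net")
from System import Single
from OpenCV.Net import Point2f

def region_missing(pt):
  # True when BinaryRegionExtremes reports no detection (NaN coordinates)
  return Single.IsNaN(pt.X)

# inside process(value):
#   if region_missing(pt1) and head is None: ...
#   elif region_missing(pt1) and head is not None: ...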

Hope this helps :-)
warplargest.bonsai

Vincent Prevosto
Mar 29, 2016, 1:49:34 AM
to Bonsai Users
Thank you Gonçalo! That's beautiful. I had tried playing with AffineTransform, but I was nowhere near imagining how to make it work.
Now, I combined PythonTransforms to refine the ConnectedComponent's Orientation (i.e., "heading"), like this:

import clr
import math
clr.AddReference("Bonsai.Vision")
clr.AddReference("OpenCV.Net")
from System import Tuple, Math, Single
from OpenCV.Net import Point2f
from Bonsai.Vision import ConnectedComponent

head = None
tail = None

def distancesquare(pt1,pt2):
  dx = (pt2.X - pt1.X)
  dy = (pt2.Y - pt1.Y)
  return dx * dx + dy * dy

def angleline(pt1,pt2):
  dx = (pt2.X - pt1.X)
  dy = (pt2.Y - pt1.Y)
  return Math.Atan2(dy, dx)

@returns(ConnectedComponent)
def process(value):
  global head, tail
  pi = math.pi

  LBRegion = value.Item1
  OrientationAngle = value.Item2
  pt1 = OrientationAngle.Item1
  pt2 = OrientationAngle.Item2

  centroid = LBRegion.Centroid
  contour = LBRegion.Contour
  if contour is not None:
    #if (math.isnan(pt1.X)) and (head is None):
    #  head = None
    #  tail = None
    #  return float.NaN
    #elif (math.isnan(pt1.X)) and (head is not None):
    #  LBRegion.Orientation = angleline(head,tail)
    #else:
    if head is None or distancesquare(pt1, head) < distancesquare(pt1, tail):
      head = pt1
      tail = pt2
    else:
      head = pt2
      tail = pt1
    LBRegion.Orientation = angleline(head,tail)
  return LBRegion

It also works - sort of. The issue is that the resulting video is very jittery when the heading detection is poor, which happens quite often due to experimental constraints (the field of view of the high-speed camera is quite limited, so the head pops in and out of view).
As you can see in the code above, I didn't manage to get the bit about preserving the past heading value to work. I think I'll find a solution to that one. In any case, all I need now is to be more conservative and put less weight on cases where the ConnectedComponent's area is small (and of course assume that the head's direction doesn't change drastically from one frame to the next). I'll play with these parameters and develop the angleline function a bit more. If you have any further suggestions, I'm always happy to listen.
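One possible shape for that damping (a sketch with assumed parameters: FULL_AREA is an arbitrary blob size above which a detection is fully trusted; angles are blended through unit vectors to avoid wrap-around at +/-pi):

import math

FULL_AREA = 2000.0  # assumed: blob area at which a detection is fully trusted
last_angle = None

def damp_angle(new_angle, area):
  global last_angle
  if math.isnan(new_angle):
    return last_angle  # keep the previous heading when detection fails
  if last_angle is None:
    last_angle = new_angle
    return new_angle
  w = min(area / FULL_AREA, 1.0)  # small blob => trust the old heading more
  x = (1 - w) * math.cos(last_angle) + w * math.cos(new_angle)
  y = (1 - w) * math.sin(last_angle) + w * math.sin(new_angle)
  last_angle = math.atan2(y, x)
  return last_angle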
Best,
Vincent

Vincent Prevosto
Mar 29, 2016, 1:54:14 AM
to Bonsai Users
Here's the current workflow, comparing two transformations, and an example video file.
Head_centered_RefFrame.bonsai
PrV77_56_Trial_X.avi

goncaloclopes
Mar 29, 2016, 12:54:26 PM
to Bonsai Users
Here's an updated workflow with improved body and head tracking.

It's not perfect yet, but it takes advantage of the fact you know that the animal always comes from the bottom of the screen. This means that the "body" of the animal is always the bottom row of pixels and the "head" should be the top-most pixels.

The idea is that you can ignore the extremes, centroid, etc., and just take the bottom pixels and the top pixels, average their X positions, and you'll get pretty good estimates of where these two features are in space.

In fact, I believe the body estimate is as good as it's ever going to be, because the body really is always the bottom row of pixels. The tip of the nose is trickier because the shape can vary quite a bit, and you can't rely just on how high up the pixels are... anyway, maybe from here you can tweak further and get something better. The idea is always to come up with some feature that can be reliably extracted from the video.
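The averaging idea in isolation looks something like this (a plain-Python sketch over a binary mask rather than the OpenCV.Net API, just to make the geometry explicit):

def body_and_head_x(mask):
  # mask: rows from top to bottom, each a list of 0/1 pixel values
  def mean_x(row):
    xs = [x for x, v in enumerate(row) if v]
    return sum(xs) / float(len(xs)) if xs else None

  body_x = mean_x(mask[-1])  # the body: always the bottom row
  # the head: the first row from the top with any foreground pixels
  head_x = next((m for m in (mean_x(r) for r in mask) if m is not None), None)
  return body_x, head_x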

Hope this helps,
Head_centered_RefFrame.bonsai

Vincent Prevosto
Mar 29, 2016, 7:01:39 PM
to Bonsai Users
Fantastic.
Quick edit: in the 2nd PythonTransform, the last line should be
return result
instead of
return LBRegion
(I guess ;) )
The two points I'll work on from here are 1/ to make sure that the output frames don't crop out essential regions and 2/ to adjust the rotation damping according to "body" mass.
Thanks a million again for your help.
Vincent

Bruno Cruz
Jul 15, 2016, 9:39:20 AM
to Bonsai Users
Does anyone have an alternative that would center the animal relative to the center of its body (simply detected using largest binary region analysis)?

goncaloclopes
Jul 16, 2016, 10:17:08 AM
to Bonsai Users
Hi Bruno,

Just remove the Rotation mapping from the original example: double-click the InputMapping node and delete the Rotation line, then set Rotation to zero in the AffineTransform properties so it stays fixed at zero.
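For reference, the ExpressionTransform mapping from the earlier example then reduces to translation only (a sketch; with Rotation fixed at zero the pivot no longer matters, and the 640x480 frame size is assumed as before):

new(
-Centroid.X + 320 as translationX,
-Centroid.Y + 240 as translationY)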

Hope this helps.