Getting and setting ROI coordinates


François-Xavier Thomas

Jan 22, 2014, 12:03:53 PM
to pyqt...@googlegroups.com
Hi all, hi Luke,

My UI consists of a GraphicsView, to which I added a rectangular ROI and an ImageItem displaying an image from a camera.

I'd like to get and set the image coordinates of the ROI, which can be rotated, so this is not a simple matter of getting its (X, Y) position and size.

So far, I have managed to extract the ROI handles by doing this:

>>> hh = roi.getHandles()
>>> hh = [h.mapToItem(image_item, h.pos()) for h in hh]

It *seems* to work, but I am not sure whether this is the right way, or whether there is a much simpler solution.

Also, I haven't managed to find out how to do the opposite: updating the ROI from image coordinates.

Any ideas?

Thanks!
François-Xavier

Luke Campagnola

Jan 22, 2014, 8:18:45 PM
to pyqt...@googlegroups.com
On Wed, Jan 22, 2014 at 12:03 PM, François-Xavier Thomas <fx.t...@gmail.com> wrote:
My UI consists of a GraphicsView, to which I added a rectangular ROI and an ImageItem displaying an image from a camera.

I'd like to get and set the image coordinates of the ROI, which can be rotated, so this is not a simple matter of getting its (X, Y) position and size.

So far, I have managed to extract the ROI handles by doing this:

>>> hh = roi.getHandles()
>>> hh = [h.mapToItem(image_item, h.pos()) for h in hh]

It *seems* to work, but I am not sure whether this is the right way, or whether there is a much simpler solution.

That's incorrect--if you read the Qt GraphicsView docs carefully, they say that QGraphicsItem.pos() returns a location in the coordinate system of the item's *parent*. So the correction looks like:

    hh = [roi.mapToItem(image_item, h.pos()) for h in hh]

Alternatively, this code should give the same result:

    hh = [h.mapToItem(image_item, pg.Point(0, 0)) for h in hh]

You should find that either of these produces better results. If not, then there is probably some other information about your code that I am missing.
 

Also, I haven't managed to find out how to do the opposite: updating the ROI from image coordinates.

Call roi.setPos, .setSize, or .setAngle to move the ROI. To work out what values you need here might take some effort--you probably want to map locations from the image back to the ROI or the ROI's parent. For example, to set the position of the ROI to be at (5, 5) in the image:

    roi.setPos(roi.parentItem().mapFromItem(image_item, pg.Point(5, 5)))
 

Luke

François-Xavier Thomas

Jan 23, 2014, 3:42:28 AM
to pyqt...@googlegroups.com
Hi Luke,

On Thu, Jan 23, 2014 at 2:18 AM, Luke Campagnola
<luke.ca...@gmail.com> wrote:
>
> On Wed, Jan 22, 2014 at 12:03 PM, François-Xavier Thomas <fx.t...@gmail.com> wrote:
>>
>> My UI consists of a GraphicsView, to which I added a rectangular ROI and an ImageItem displaying an image from a camera.
>>
>> I'd like to get and set the image coordinates of the ROI, which can be rotated, so this is not a simple matter of getting its (X, Y) position and size.
>>
>> So far, I have managed to extract the ROI handles by doing this:
>>
>> >>> hh = roi.getHandles()
>> >>> hh = [h.mapToItem(image_item, h.pos()) for h in hh]
>>
>>
>> It *seems* to work, but I am not sure whether this is the right way, or whether there is a much simpler solution.
>
>
> That's incorrect--if you read the Qt GraphicsView docs carefully, they say that QGraphicsItem.pos() returns a location in the coordinate system of the item's *parent*.

Ha, thanks! Sounds like I should read the docs more and not rely too
much on playing around with IPython ;)

> So the correction looks like:
>
> hh = [roi.mapToItem(image_item, h.pos()) for h in hh]
>
> Alternatively, this code should give the same result:
>
> hh = [h.mapToItem(image_item, pg.Point(0, 0)) for h in hh]
>
> You should find that either of these produces better results. If not, then there is probably some other information about your code that I am missing.

They should both produce the same results though, right? Well, aside
from rounding and precision errors.

Is there a method that returns the coordinates of the full path
defined in the ROI (i.e. not just the handles but the corners of the
polygon)? Or do I need to work my way through roi.path() manually?

>> Also, I haven't managed to find out how to do the opposite: updating the ROI from image coordinates.
>
> Call roi.setPos, .setSize, or .setAngle to move the ROI. To work out what values you need here might take some effort--you probably want to map locations from the image back to the ROI or the ROI's parent. For example, to set the position of the ROI to be at (5, 5) in the image:
>
> roi.setPos(roi.parentItem().mapFromItem(image_item, pg.Point(5, 5)))

Thanks. I will try this out!

Cheers,
François-Xavier

François-Xavier Thomas

Jan 23, 2014, 4:59:12 AM
to pyqt...@googlegroups.com
Small update: after some trial and error I managed to find something that
appears to work! Multiple ROIs for multiple scenes are constructed
using:

import numpy as np
import pyqtgraph as pg
from pyqtgraph.Qt import QtCore

roi = pg.RectROI([0, 0], [50, 50], pen=(0, 9))
roi.addScaleRotateHandle([1, 0], [1, 1])
roi.addScaleHandle([0, 0], [1, 1])
roi.sigRegionChanged.connect(self._roi_update)

Then, the other ROIs' positions are updated when the user moves the first
ROI (in self._roi_update) with:

# ...after getting the handles' positions (mapped to image coordinates)
# from the updated ROI, as the list hh
ht = [np.array([h.x(), h.y(), 0.]) for h in hh]

# The new ROI position is the position of the third handle
new_position = QtCore.QPointF(float(ht[2][0]), float(ht[2][1]))

# Its new size is defined by the handles as well
new_size = QtCore.QSizeF(
    float(np.linalg.norm(ht[2] - ht[1])),
    float(np.linalg.norm(ht[1] - ht[0]))
)

# The new rotation is the angle between the vector from the first to the
# second handle and the vertical axis.
# angle_between is lazily stolen from
# http://stackoverflow.com/questions/2827393
new_rotation = angle_between(ht[1] - ht[0], np.array([0., 1., 0.]))
new_rotation = 180. * new_rotation / np.pi - 180.

# Set them
roi.setPos(new_position, update=False, finish=False)
roi.setSize(new_size, update=False, finish=False)
roi.setAngle(new_rotation, update=False, finish=False)
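
For reference, a minimal angle_between along the lines of that Stack Overflow
answer might look like this (a sketch, not necessarily the exact code used):

    import numpy as np

    def angle_between(v1, v2):
        # Unsigned angle in radians between vectors v1 and v2
        v1_u = v1 / np.linalg.norm(v1)
        v2_u = v2 / np.linalg.norm(v2)
        return np.arccos(np.clip(np.dot(v1_u, v2_u), -1.0, 1.0))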

With this I have linked ROIs: when I move one ROI in one camera
frame, the other ROIs move as well.

The complete program also involves a homography that is computed in
the background to let the user select matching regions in multiple
images taken by a multi-frame camera mounted on a rigid support. All of
that took about 3 or 4 hours of work, including the UI. PyQtGraph is
pretty neat! ;)

Cheers,
François-Xavier

Luke Campagnola

Jan 23, 2014, 10:51:18 AM
to pyqt...@googlegroups.com
On Thu, Jan 23, 2014 at 4:59 AM, François-Xavier Thomas <fx.t...@gmail.com> wrote:
[snip]
 
With this I have linked ROIs: when I move one ROI in one camera
frame, the other ROIs move as well.

So that's what you were after! Does this work instead?

    roi2.setState( roi1.saveState() )
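
A minimal sketch of wiring that up for two ROIs that live in the same
(untransformed) coordinate system might be (sync_roi is a hypothetical
helper, not part of pyqtgraph):

    def sync_roi(source, target):
        # blockSignals avoids an infinite update loop when the link is bidirectional
        target.blockSignals(True)
        target.setState(source.saveState())
        target.blockSignals(False)

    roi1.sigRegionChanged.connect(lambda: sync_roi(roi1, roi2))
    roi2.sigRegionChanged.connect(lambda: sync_roi(roi2, roi1))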


 
The complete program also involves a homography that is computed in
the background to let the user select matching regions in multiple
images taken by a multi-frame camera mounted on a rigid support. All of
that took about 3 or 4 hours of work, including the UI. PyQtGraph is
pretty neat! ;)

Cool, send us a link when it is done!


Cheers,
Luke
 

Kedar Patwardhan

Feb 21, 2014, 11:58:31 AM
to pyqt...@googlegroups.com
Hi Luke,
Is getting the handle positions the only way to extract the 4 corners of the RectROI?
I am trying to build an image annotation tool where the user selects an object of interest in the image and then I can save the co-ordinates of the RectROI corners as my annotation for the object.

Thanks for your help. I am a pyqt & pyqtgraph noob.

kedar

Luke Campagnola

Feb 24, 2014, 11:25:14 AM
to pyqt...@googlegroups.com
On Fri, Feb 21, 2014 at 11:58 AM, Kedar Patwardhan <kedar.a.p...@gmail.com> wrote:
Hi Luke,
Is getting the handle positions the only way to extract the 4 corners of the RectROI?
I am trying to build an image annotation tool where the user selects an object of interest in the image and then I can save the co-ordinates of the RectROI corners as my annotation for the object.


There are a few different approaches you could use, depending on your purpose (a short sketch follows the list):

- You can always manually map the corner positions of the ROI to the coordinate system of the image.
- If the ROI is not rotated, you can use ROI.pos() and .size()
- ROI.saveState() returns a serializable object that you can use to save/restore the ROI.
- ROI.getArraySlice is used to determine the region of an image that intersects the ROI
- ROI.getArrayRegion(..., returnMappedCoords=True) can be used to return the exact image coordinates that are accessed when using ROI.getArrayRegion().
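
A minimal sketch of the first and fourth options, assuming roi is the RectROI, image_item is the ImageItem, and image_data is the underlying array:

    import pyqtgraph as pg

    # Map the four corners of the (possibly rotated) RectROI into image coordinates
    w, h = roi.size()
    local_corners = [pg.Point(0, 0), pg.Point(w, 0), pg.Point(w, h), pg.Point(0, h)]
    corners = [roi.mapToItem(image_item, p) for p in local_corners]
    print([(p.x(), p.y()) for p in corners])

    # getArraySlice returns the slice of the array covered by the ROI's bounding rect
    slices, transform = roi.getArraySlice(image_data, image_item)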

DB

Mar 5, 2014, 2:28:56 PM
to pyqt...@googlegroups.com
The complete program also involves a homography that is computed in
the background to let the user select matching regions in multiple
images taken by a multi-frame camera mounted on a rigid support. All of
that took about 3 or 4 hours of work, including the UI. PyQtGraph is
pretty neat! ;)


That looks interesting. Do you mind sharing which library you are using for your multi-camera setup? OpenCV?

François-Xavier Thomas

Mar 5, 2014, 3:28:53 PM
to pyqt...@googlegroups.com
Hey all,

You're in luck, I was just going back to this subject! February was a
bit intense, so I wasn't really able to reply to Luke's request for
source code, sorry about that.

I took some time to write a small, simple example from scratch;
hopefully this will help some people. It is now live on my GitHub
[0], and I may add more if I find other useful uses for PyQtGraph ;)

Yes, we're using OpenCV; the Python bindings work pretty well. My
example doesn't show how to compute the perspective transformation
between two different images, but this is pretty easy to do if you
have worked a little with that library -- look for
`cv2.findHomography` [1] and use it on matching features (e.g. SIFT).
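
For anyone looking for a starting point, a rough sketch of that step might
look like the following (using ORB features here since the SIFT constructor
varies between OpenCV versions; the filenames are hypothetical):

    import cv2
    import numpy as np

    img1 = cv2.imread('frame1.png', cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread('frame2.png', cv2.IMREAD_GRAYSCALE)

    # Detect and match features between the two frames
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    # Estimate the homography on the matched points, rejecting outliers with RANSAC
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)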

@Luke: Using setState/saveState won't work as the images are
transformed using a perspective transformation, unfortunately...

I was also a bit fuzzy about image coordinates while writing the
example. OpenCV and PyQtGraph's conventions are very different
concerning the order of the coordinates and the origin of the axes, so
I had to do some weird things like transposing/mirroring the image I
read from OpenCV to get it displayed upright. I recall having to do
the same thing while writing the application we're using at work.
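
The kind of shuffling involved might look roughly like this (a sketch with a
hypothetical filename, not the exact code from the example):

    import cv2
    import pyqtgraph as pg

    app = pg.mkQApp()
    win = pg.GraphicsLayoutWidget()
    view = win.addViewBox()
    view.invertY(True)           # image convention: +y points down
    view.setAspectLocked(True)

    img = cv2.imread('frame.png')
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # OpenCV loads BGR, Qt expects RGB
    img = img.transpose(1, 0, 2)                 # (row, col) -> (col, row) for pyqtgraph

    view.addItem(pg.ImageItem(img))
    win.show()
    app.exec_()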

Also, for some reason the handles of the second ROI don't move if you
rotate/scale the first one, but the ROI itself moves. Translating
works without issues.

Maybe you can look a little at what I have written and shed some light
on those two things. Questions and feedback are of course appreciated!

Cheers,
François-Xavier

[0] https://github.com/fxthomas/pg-examples
[1] http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#findhomography

Luke Campagnola

Mar 5, 2014, 9:33:02 PM
to pyqt...@googlegroups.com
On Wed, Mar 5, 2014 at 3:28 PM, François-Xavier Thomas <fx.t...@gmail.com> wrote:
@Luke: Using setState/saveState won't work as the images are
transformed using a perspective transformation, unfortunately...

I was also a bit fuzzy about image coordinates while writing the
example. OpenCV and PyQtGraph's conventions are very different
concerning the order of the coordinates and the origin of the axes, so
I had to do some weird things like transposing/mirroring the image I
read from OpenCV to get it displayed upright. I recall having to do
the same thing while writing the application we're using at work.

There are two related issues here:

1) Should image data be interpreted as (rows, columns) or (columns, rows)?  Most other libraries (including Qt) use the former, whereas pyqtgraph uses the latter. This is a common enough complaint that I am considering (maybe) breaking backward compatibility to fix this in the future. In any case, I certainly need to discuss this more in the documentation.

2) Should the y-axis be positive=up or positive=down? This is much less clear--plots usually use the former, images usually use the latter. By default, the ViewBox uses +y=up, and this causes images to be displayed upside-down because they expect the opposite. One option would be to reverse the y-axis on images such that they display correctly in +y=up coordinate systems.

I am considering implementing both changes and adding a global configuration option that would make it easy to restore the legacy behavior for systems that are already written.
 

Also, for some reason the handles of the second ROI don't move if you
rotate/scale the first one, but the ROI itself moves. Translating
works without issues.

Maybe you can look a little at what I have written and shed some light
on those two things. Questions and feedback are of course appreciated!

The handle issue is because you used update=False when modifying the target ROI. I'll open a PR with a suggested fix. 

I have an interesting suggestion (that I may try implementing in another PR): Since you are using perspective transforms, which are supported by Qt GraphicsView, why not simply apply the known transformation to the second ROI? Then you really could just use setState/saveState.
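
To make that idea concrete, a rough sketch of turning an OpenCV homography H
into a Qt transform might be (H and roi2 are assumed names, and this is not
the code from the actual PR):

    from pyqtgraph.Qt import QtGui

    # QTransform maps row vectors, so the 3x3 OpenCV matrix goes in transposed
    t = QtGui.QTransform(H[0, 0], H[1, 0], H[2, 0],
                         H[0, 1], H[1, 1], H[2, 1],
                         H[0, 2], H[1, 2], H[2, 2])

    # Apply it to the second ROI (or to a parent item holding it, so that the
    # ROI's position is mapped through the homography as well); after that,
    # roi2.setState(roi1.saveState()) should keep the two regions in register.
    roi2.setTransform(t)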


Luke
 

Luke Campagnola

Mar 5, 2014, 10:33:29 PM
to pyqt...@googlegroups.com
On Wed, Mar 5, 2014 at 9:33 PM, Luke Campagnola <luke.ca...@gmail.com> wrote:

Also, for some reason the handles of the second ROI don't move if you
rotate/scale the first one, but the ROI itself moves. Translating
works without issues.

Maybe you can look a little at what I have written and shed some light
on those two things. Questions and feedback are of course appreciated!

The handle issue is because you used update=False when modifying the target ROI. I'll open a PR with a suggested fix. 

I have an interesting suggestion (that I may try implementing in another PR): Since you are using perspective transforms, which are supported by Qt GraphicsView, why not simply apply the known transformation to the second ROI? Then you really could just use setState/saveState.


Success! Check out the PR--both ROIs are kept synchronized using only target.setState(source.saveState()), and it works as expected when a perspective factor is added to the transformation. The only hiccup is that the ROI handles do not ignore the perspective as they should and end up drawn too small. 

 