Calibration of oblique images


Tomasz Bugajski

Feb 13, 2024, 5:12:55 PM
to XMALab
Hi,

I am hoping to gain some insight into the DLT calculation performed in XMALab. Using my lab's biplanar X-ray system, I have captured some images in which the X-rays are non-orthogonal to the detectors. Based on my understanding of DLT, I think it should be possible to calibrate in this scenario. After calibrating with the small Lego cube, the reported errors were small, but the locations of the images/X-ray sources in the world view are not correct. I am happy to share some phantom data of this scenario, which I have stored in my Google Drive.

Cheers,

Tomasz

Aaron Olsen

Feb 13, 2024, 6:24:29 PM
to XMALab, tbug...@gmail.com
Hi Tomasz,

I can't address all of your questions, but I can say that XMALab doesn't use DLT for the calibration. DLT is an older technique that allows for easy transformation between 2D and 3D, but it is incompatible with the virtual cameras in 3D modeling software (e.g., Maya, Blender).

Instead of DLT, XMALab uses a standard computer vision camera model with internal parameters (focal length and principal point in x and y) and external parameters (rotation and position of the camera). This is compatible with the virtual cameras in 3D modeling software. There is also an option to add distortion parameters.
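
To give a rough idea of what that model looks like, here is a minimal numpy sketch of a pinhole projection with those parameters (the focal lengths, principal point, and pose below are made-up placeholder values, not anything XMALab actually outputs):

import numpy as np

# Internal parameters: focal length and principal point, in pixels (placeholders)
fx, fy = 5000.0, 5000.0
cx, cy = 1536.0, 1536.0          # e.g., the center of a 3072x3072 image
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# External parameters: rotation and translation of the camera (placeholder pose)
R = np.eye(3)
t = np.array([[0.0], [0.0], [500.0]])

# Project a 3D world point (in mm) into the image
X_world = np.array([[10.0], [20.0], [0.0]])
X_cam = R @ X_world + t          # world -> camera coordinates
uvw = K @ X_cam                  # camera -> homogeneous pixel coordinates
u, v = (uvw[:2] / uvw[2]).ravel()    # perspective divide gives pixel coordinates
print(u, v)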

I can't speak to the mismatch between the estimated and real-world locations of the cameras/X-ray sources. When I've done something similar in the past, the output camera positions and orientations did not match where my cameras "actually were" either. I'm guessing the estimated positions and orientations are more like theoretical virtual cameras matching the camera model parameters, not the actual cameras, since reproducing the physical setup might require knowing additional parameters about the camera itself (e.g., the type of lens, type of camera, etc.).

Hope that helps!
Aaron

Benjamin Knörlein

Feb 13, 2024, 6:51:37 PM
to Aaron Olsen, XMALab, tbug...@gmail.com
Hi Tomasz,

As Aaron pointed out, XMALab uses a different camera model than DLT, although the two are fairly similar. There is a good chance the model can compensate for your setup by moving the principal point (camera center) out of the image and adjusting other parameters.

I think the best thing is simply to run some experiments: e.g., calibrate with the cube and then track some objects with known marker-to-marker distances inside the workspace. We used a Lego wand with embedded markers, as this was easy to construct and quite accurate.

This should give you a good idea of your accuracy in the workspace, which is more important than the absolute camera or detector positions. At the end of the day, you want to make sure that your marker-to-marker distances are stable when a rigid body moves through the imaged area and that they are close to the CT values.
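
For example, if you export the 3D marker coordinates, a rough numpy sketch of that check (the array shapes and data below are placeholders, not real tracking output) could look like:

import numpy as np
from itertools import combinations

def pairwise_distances(points_3d):
    # points_3d has shape (frames, markers, 3); returns (frames, n_pairs) and the pair list
    n_markers = points_3d.shape[1]
    pairs = list(combinations(range(n_markers), 2))
    dists = np.stack([np.linalg.norm(points_3d[:, i] - points_3d[:, j], axis=1)
                      for i, j in pairs], axis=1)
    return dists, pairs

# Placeholder tracked data: 100 frames, 4 wand markers
points_3d = np.random.rand(100, 4, 3) * 50.0
dists, pairs = pairwise_distances(points_3d)

# A good calibration keeps each pair's distance stable across frames (low std)
# and close to the known reference value (from CT or the wand geometry).
print("mean distance per pair:", dists.mean(axis=0))
print("std per pair:          ", dists.std(axis=0))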

Hope that helps,
Ben


Tomasz Bugajski

Feb 13, 2024, 6:52:07 PM
to Aaron Olsen, XMALab
Hi Aaron,

Sorry to hear that you ran into a similar issue. This is great information, thank you for sharing. Do you know if there is any documentation on how the internal and external parameters are calculated?

Tomasz



Gatesy, Stephen

Feb 13, 2024, 9:40:19 PM
to Tomasz Bugajski, Aaron Olsen, XMALab
Tomasz,

Could it be that your Maya scenes are correct except for the translation of the image planes? You need to put in the actual source-image distance (SID) for the scene to fully resemble your experimental setup (the plane will scale accordingly). I don't know how it chooses a default SID when the camera is created (Ben?).

Steve


--
Stephen Gatesy
Dept. Ecology, Evolution, & Organismal Biology
Box G-B209
Brown University
Providence, RI 02912 USA
401-863-3770 (office)
401-863-9169 (lab)

Aaron Olsen

Feb 13, 2024, 9:43:33 PM
to Tomasz Bugajski, XMALab
Hi Tomasz,

Not that I know of; the documentation really just covers use of the software. You'd probably have to look at the source code.

If you want more details on how camera parameters are calculated more generally, I'd recommend the documentation for the OpenCV (Open Source Computer Vision) project. It's really extensive, and there's a whole book on it, available for purchase as a paperback or free as a PDF.
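
For reference, the basic checkerboard workflow from the OpenCV documentation looks roughly like the sketch below. This is general-purpose calibration, not XMALab's code, and the file path and board size are placeholders:

import glob
import cv2
import numpy as np

board = (9, 6)                                    # inner corners per row/column
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)   # planar 3D grid

obj_points, img_points = [], []
for fname in glob.glob("calibration_images/*.png"):   # placeholder path
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Returns the RMS reprojection error, the internal parameters (camera matrix),
# distortion coefficients, and per-image external parameters (rvecs, tvecs).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)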

Aaron

Tomasz Bugajski

Feb 13, 2024, 9:59:39 PM
to XMALab, aaro...@gmail.com, Tomasz Bugajski
Hi everyone,

Thank you all for your input. Aaron, I will look into your recommendations!

I may be misinterpreting Ben's advice on experimentation, but the issue is within the calibration itself. The images are not in the correct locations, so when tracking, the models do not align with the images properly (i.e., if the model is in the correct location in one image, it is not in the other). What Steve is suggesting may be a solution, but I am unaware of any way to specify SIDs in XMALab. Is this possible?

Cheers,

Tomasz

Gatesy, Stephen

Feb 13, 2024, 10:18:36 PM
to Tomasz Bugajski, XMALab, aaro...@gmail.com
Sorry, my suggestion will not solve misalignment problems. It's entirely cosmetic (change the Z-translate attribute of the yourCameraName_Plane node).
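
If it helps, in Maya's Script Editor (Python tab) that change is just a one-liner like the one below, where "yourCameraName_Plane" and the distance value stand in for your actual node name and source-image distance (in your scene units; the sign depends on how the camera is oriented in your scene):

import maya.cmds as cmds

# Placeholder node name and SID value; adjust to your own scene
cmds.setAttr("yourCameraName_Plane.translateZ", -600.0)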

Steve

Benjamin Knörlein

Feb 14, 2024, 12:58:34 AM
to Tomasz Bugajski, XMALab, aaro...@gmail.com
I'm not sure I understand what is wrong. Are you talking about the 3D view or the 2D views when you are tracking? Is the 3D model not correctly aligned, or can you not track the markers?

Can you maybe send some screenshots? Also, what are your calibration errors? You can expand the title of the images by clicking on the Info icon.

Tomasz Bugajski

Feb 14, 2024, 10:34:59 AM
to XMALab, knoe...@gmail.com, aaro...@gmail.com, Tomasz Bugajski
Sorry, I should also clarify that I am importing the MayaCam 2.0 calibration files into AutoscoperM to track 3D models. When doing this, the 3D view of the world is incorrect and does not show the correct setup we had in the lab (the same can be said for XMALab). As a result, tracking is not possible because the models do not align, due to the positions of the 2D images (this is how I see it, but maybe I am thinking about it wrong). I have attached images of the calibration and the tracking result. The resolution of the images is 3072x3072.
Attachments: Calibration.png, Autoscoper.png

Benjamin Knörlein

Feb 14, 2024, 8:43:26 PM
to Tomasz Bugajski, XMALab, aaro...@gmail.com
Oh, I see, you are using Autoscoper. The errors for your calibration actually look fine. You could digitize the calibration cube images to see if the epipolar lines are correct. You can find some information on how XMALab works on the wiki: https://bitbucket.org/xromm/xmalab/wiki/Home
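
If you export corresponding 2D points from both views (e.g., the digitized cube markers), you could also check the epipolar geometry outside XMALab with a short OpenCV sketch like the one below; the point arrays are placeholders for your exported coordinates:

import cv2
import numpy as np

# Placeholder Nx2 point arrays; replace with matching 2D points from each view
pts1 = (np.random.rand(12, 2) * 3072).astype(np.float32)
pts2 = (np.random.rand(12, 2) * 3072).astype(np.float32)

# Estimate the fundamental matrix from the correspondences
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)

# Epipolar lines in image 2 for the points in image 1; each line (a, b, c)
# satisfies a*x + b*y + c = 0, so well-matched points lie close to their line.
lines2 = cv2.computeCorrespondEpilines(pts1.reshape(-1, 1, 2), 1, F).reshape(-1, 3)
residuals = np.abs(np.sum(lines2 * np.hstack([pts2, np.ones((len(pts2), 1))]), axis=1))
print("mean point-to-line distance (pixels):", residuals.mean())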

As you are using Autoscoper, there could be other problems as well. I am not up to date on the development there, but you first have to align the bone properly on one frame, and then it should be able to track it in 3D based on the 2D images. I would assume the problem is another setting rather than the calibration.

Sorry that I cannot be of more help. Maybe someone else has an idea.

Best,
Ben