Calibration issues


Andrew Hogue

Nov 30, 2020, 1:11:37 PM
to Brekel
Hi again,

I've been trying my hardest to get Brekel working properly in our lab but to no avail.
System setup:
4x cameras (Azure Kinect, updated to the latest firmware)
4x PCs (2 Dell G7 15 laptops with GeForce RTX GPUs, 2 desktops with 1080 Ti graphics cards)

Markers:
- I have printed my markers at a large size (one is 605 mm).
- When I choose "custom size" it doesn't give me the option to change the size unless "Advanced Settings" is on.
- When I set my marker size to 605 mm and then click "start sensor calibration" it resets it to 186 mm.
- Only the local camera finds the marker even though all cameras can clearly see it; it doesn't matter whether I move the marker or use a larger or smaller one.

- At some point I got 2 cameras to find the marker, but it didn't actually do anything with them. It drew a coordinate system on the marker from each camera but never moved the cameras to align with each other, no matter what I did.

- Point cloud alignment: nothing I do can make this work.

I have uploaded some videos of me trying to get things working; please take a look at them and let me know what I'm doing wrong. I tried to show as many settings as possible and flipped through all of the video/depth/etc. views.

Here is a link to a shared folder with all of the videos:

Any suggestions?  I need to get this working for next semester.

-- Andrew

Brekel

Dec 1, 2020, 6:26:43 AM
to Brekel
Hi Andrew,

From the movies it seems your marker is static on the floor. Keep in mind that marker alignment is designed to work with multiple frames of a slowly moving marker.
Try picking it up and moving and rotating it slowly through your scene while keeping it visible to your sensors.
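To see why a moving marker helps: each frame in which two sensors both see the marker contributes a set of corresponding 3D points, and the camera-to-camera alignment is the rigid transform that best maps one set onto the other. A static marker yields only a few coplanar points, which constrains that transform poorly; sweeping the marker through the volume adds many non-coplanar correspondences. A minimal sketch of the underlying estimation using the Kabsch algorithm (an illustration of the general technique, not Brekel's actual code):

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate rotation R and translation t so that R @ p + t maps src onto dst.

    src, dst: (N, 3) arrays of corresponding 3D points, e.g. marker corner
    positions observed in two different cameras' coordinate frames over
    many frames of a moving marker.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)       # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Synthetic check: the same points expressed in two camera frames.
rng = np.random.default_rng(0)
pts_a = rng.uniform(-1, 1, size=(50, 3))
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.2, -0.1, 1.5])
pts_b = pts_a @ R_true.T + t_true
R, t = rigid_align(pts_a, pts_b)
print(np.allclose(R, R_true) and np.allclose(t, t_true))  # True
```

With only four coplanar corners from a single static view, the cross-covariance matrix is poorly conditioned and small depth noise swings the estimate; more spread-out correspondences make it stable.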

I am currently rewriting the marker detector/alignment/calibration code so hopefully it will be more robust in the near future.

Greets,

On Monday, November 30, 2020 at 7:11:37 PM UTC+1, Andrew Hogue wrote:

susana

Dec 1, 2020, 4:08:59 PM
to Brekel
Hi Brekel, hi Andrew,

Andrew – we had a similar set-up and very similar issues to the ones you describe. For now we have moved on to other potential solutions, but we're looking forward to seeing this work; it seems very close and very promising.

Susana

ferg...@gmail.com

Dec 2, 2020, 12:06:46 AM
to Brekel
Same boat as you both, Andrew and Susana; I'm ready to pull the trigger on a purchase as soon as I can see it align and export with 4x Azures. Agreed that it looks promising and so close.

-Daniel

Brekel

Dec 19, 2020, 10:32:05 AM
to Brekel
You may want to check out the latest v0.69 beta, which includes a rewritten marker aligner (and a lot of other things). Here's the full changelog:

------------
 BETA v0.69
------------
- completely rewrote marker aligner
  - now has an option to work with either a static or a moving marker
  - now comes with a brand new mathematical solver that uses more data to converge on a better solution
  - ability to run a two pass solve (based on moving marker) for tighter alignment
  - ability to detect markers and do alignment based on clips on the timeline
    - this can be more flexible and more accurate than doing everything on live sensors
- implemented a new marker pattern/detector
  - old markers are still fully supported
  - new markers are labeled experimental
    - detection is usually a bit faster and more efficient, especially with moving markers
    - corner refinement should be a bit more robust
    - should be a bit more robust during movement
    - still labeled experimental (feedback is welcome)
- improved internals of marker detector to more intelligently use sensor calibration and detect marker type/size/mirroring
- rewrote how paper size and marker length are passed around
- added export preset for Holo CatchLight by Prometheus Vision https://assetstore.unity.com/packages/add-ons/holo-catchlight-plugin-pro-177405
  - allows you to use Holo CatchLight's Unity playback integration
  - exports OBJ and JPG/PNG texture sequence
  - automatically uses compatible filename structure and settings
  - use Holo CatchLight's ObjSeqConverter to convert the output to an MP4 file that can be used with the Unity integration
- added a LAS pointcloud exporter option
- added global offset
  - applies a position/rotation offset to all clips on the timeline
- added tooltip for SenseXR export
- added a Z-tweak multiplier option to BPC clips
- added a warning to the message log if multiple sensors are detected with one sensor not receiving any frames
  - suggesting that the computer may not have enough USB bandwidth for all the sensors
- added functionality to drag & drop these files onto the 3D viewport
  - alignment file: loads onto live sensors or the timeline (depending on what's currently visible)
  - timeline file
  - output settings file
- added some missing tooltips
- Batch Processor now has "Create Timeline Subfolder" enabled by default
- moved timestamp further to the left in the sensor table
- when Kinect v2 could not be initialized with synchronization it is now retried without synchronization
- fixed potential crash when loading calibration data from an old BPC file
- fixed a regression issue where mute/solo of a track would hide the wrong clip in the 3D viewport
- fixed a regression issue where PNG output would save as JPG in last version
- fixed issue where timeline would not always resize correctly when loading new BPC file(s)
- fixed erroneous "Timestamps of sensors seems to vary quite a lot" message when GUI was displaying video or depth view
- fixed cosmetic issue where progress bars in batch processor were scaled 1 frame too large
- fixed cosmetic issue where "Generate Marker PDF" window wasn't showing preview of the marker in release version
- fixed issue with Batch Processor not exporting SenseXR files
- updated to latest Stereolabs ZED SDK v3.3.3
- extended beta to 1st of February
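For anyone wondering what the new "global offset" entry amounts to conceptually: it applies a single position/rotation offset, i.e. one rigid transform, to every clip on the timeline. A minimal pure-Python sketch of such an offset (a yaw rotation plus a translation; the function name and parameters here are illustrative, not Brekel's API):

```python
import math

def apply_global_offset(points, tx, ty, tz, yaw_deg):
    """Rotate each (x, y, z) point around the vertical (Y) axis by yaw_deg
    degrees, then translate by (tx, ty, tz). A full implementation would
    accept all three rotation axes; one axis keeps the sketch short."""
    a = math.radians(yaw_deg)
    ca, sa = math.cos(a), math.sin(a)
    out = []
    for x, y, z in points:
        rx = ca * x + sa * z      # yaw rotation in the XZ plane
        rz = -sa * x + ca * z
        out.append((rx + tx, y + ty, rz + tz))
    return out

# Rotating two points 90 degrees around the up axis, no translation.
for p in apply_global_offset([(1.0, 0.0, 0.0), (0.0, 1.0, 2.0)], 0.0, 0.0, 0.0, 90.0):
    print(tuple(round(c, 6) for c in p))  # prints (0.0, 0.0, -1.0) then (2.0, 1.0, 0.0)
```

Because the same transform is applied to every clip, the relative alignment between sensors is untouched; only the whole scene is repositioned.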

On Wednesday, December 2, 2020 at 6:06:46 AM UTC+1, ferg...@gmail.com wrote: