Objectron Newsletter - March 2021


Liangkai Zhang
Mar 19, 2021, 7:22:13 PM
to obje...@googlegroups.com


Connect with Objectron

We hope this newsletter finds you well! It has been a busy start to the year for Objectron.

This month:

- Objectron paper to appear at CVPR 2021.

- Python and Web APIs for Objectron models (available via MediaPipe and a Python colab).

- Camera pose refinement via offline bundle adjustment.

- Community spotlight: Objectron is now available via the Activeloop Hub.

     Did something amazing with Objectron? Please let us know; we would love to highlight it in our newsletter.

All our best,

The Objectron team at Google Research

Updates to Objectron

Get Started with the Objectron Python Solution API

Objectron models now offer a ready-to-use yet customizable Python solution as part of the prebuilt MediaPipe Python package, which is available on PyPI for Linux, macOS, and Windows.


To install the MediaPipe Python package:

$ python3 -m venv mp_env && source mp_env/bin/activate

(mp_env)$ pip install mediapipe

(mp_env)$ python3


To learn more about how to configure and run the Objectron solution in Python, see the API tutorial and the Python colab.
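
For a quick taste, here is a minimal sketch of the Python solution API for static images, following the published MediaPipe documentation (the input filename is hypothetical):

import cv2
import mediapipe as mp

mp_objectron = mp.solutions.objectron
mp_drawing = mp.solutions.drawing_utils

image = cv2.imread('shoe.jpg')  # hypothetical input image of a shoe

with mp_objectron.Objectron(static_image_mode=True,
                            max_num_objects=5,
                            min_detection_confidence=0.5,
                            model_name='Shoe') as objectron:
    # MediaPipe expects RGB input; OpenCV loads images as BGR.
    results = objectron.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.detected_objects:
    for detected_object in results.detected_objects:
        # Draw the projected 3D bounding box and the object's coordinate axes.
        mp_drawing.draw_landmarks(image, detected_object.landmarks_2d,
                                  mp_objectron.BOX_CONNECTIONS)
        mp_drawing.draw_axis(image, detected_object.rotation,
                             detected_object.translation)
    cv2.imwrite('shoe_annotated.jpg', image)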

Camera Pose Refinement

We have updated the camera pose matrices in the Objectron dataset. Previously we used the camera poses from the AR session data, which are computed online during video recording. We have replaced them with more accurate poses obtained by running offline global bundle adjustment on each video. The refined poses are re-scaled back to metric scale by registration against the AR camera poses.
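
As an illustration of the registration step, a similarity transform between the bundle-adjusted camera centers and the metric AR camera centers can be estimated with the Umeyama method. The NumPy sketch below illustrates the idea; it is not the exact pipeline we used.

import numpy as np

def umeyama_alignment(src, dst):
    """Estimate scale s, rotation R, translation t minimizing ||dst - (s R src + t)||.

    src, dst: (N, 3) corresponding camera centers
    (bundle-adjusted centers vs. metric AR centers).
    """
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0  # guard against a reflection solution
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_dst - s * R @ mu_src
    return s, R, t

Applying (s, R, t) to every refined camera center brings the whole trajectory back to metric scale while preserving its more accurate relative geometry.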


[Figure: comparison of unrefined poses (dashed) and refined poses (solid)]

The updated poses are particularly useful for applications that require accurate, consistent camera poses across a sequence, such as neural view synthesis (see the spotlight below).

Spotlight on Objectron + NeRF

Neural Volumetric Rendering has become very popular thanks to very promising results in the NeRF paper and many related works. Neural volume rendering refers to neural methods that generate images and videos using ray tracing. Now with more accurate camera poses in Objectron, you can also train NeRF models on the Objectron dataset and use it for synthesizing novel views, getting a depth mask or even an object's segmentation mask! (We used the amazing JaxNerf code for training these models).
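
For reference, the heart of NeRF rendering is a numerical quadrature along each camera ray. The NumPy sketch below shows the compositing step, following the NeRF paper rather than the JaxNeRF implementation; it also yields the expected depth behind the depth images shown below.

import numpy as np

def composite_ray(sigmas, rgbs, t_vals):
    """Volume-rendering quadrature along one ray (Mildenhall et al., 2020).

    sigmas: (N,) densities at the ray samples; rgbs: (N, 3) sample colors;
    t_vals: (N,) sample depths. Returns the rendered color and expected depth.
    """
    deltas = np.diff(t_vals, append=1e10)    # spacing between adjacent samples
    alphas = 1.0 - np.exp(-sigmas * deltas)  # per-segment opacity
    trans = np.cumprod(np.append(1.0, 1.0 - alphas[:-1] + 1e-10))  # transmittance
    weights = alphas * trans                 # contribution of each sample
    color = (weights[:, None] * rgbs).sum(axis=0)
    depth = (weights * t_vals).sum()         # expected ray termination depth
    return color, depth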

[Video: original capture | synthesized novel views rendered using NeRF | estimated depth from NeRF]

[Figures: original frame with annotated 3D bounding box | frame rendered using NeRF | estimated depth image from NeRF | segmentation mask extracted by culling the depth map with the 3D bounding box]
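
The culling step can be sketched as follows: unproject every pixel's NeRF depth into world space using the camera intrinsics and pose, then keep the pixels whose 3D points fall inside the annotated oriented box. The sketch below assumes a pinhole camera, a camera-to-world pose, and a box given by its center, rotation (box-to-world), and full per-axis extents; it illustrates the idea rather than the exact code we ran.

import numpy as np

def box_culling_mask(depth, K, cam_to_world, box_rotation, box_center, box_scale):
    """Segmentation mask by culling a depth map with an oriented 3D box.

    depth: (H, W) NeRF depth map; K: (3, 3) pinhole intrinsics;
    cam_to_world: (4, 4) camera pose; box_rotation: (3, 3) box-to-world;
    box_center: (3,); box_scale: (3,) full extents (assumed conventions).
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    # Unproject pixels to camera-space points at the rendered depth.
    rays_cam = pix @ np.linalg.inv(K).T
    pts_cam = rays_cam * depth.reshape(-1, 1)
    # Transform the points to world coordinates.
    pts_world = pts_cam @ cam_to_world[:3, :3].T + cam_to_world[:3, 3]
    # Express the points in the box frame and test containment.
    pts_box = (pts_world - box_center) @ box_rotation  # world -> box axes
    inside = (np.abs(pts_box) <= box_scale / 2.0).all(axis=1)
    return inside.reshape(H, W)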

This is possible because of the increased accuracy of the refined camera poses.

[Figures: NeRF trained with the original camera poses vs. NeRF trained with the refined camera poses]

Where We’re Publishing


The Objectron paper will appear in the CVPR 2021 main conference. You can read the arXiv preprint here. In the paper, we discuss the dataset in more detail, including how we collected and annotated the data, as well as the architecture of the Objectron models.

Community Spotlight: Activeloop Hub

The folks at Activeloop.ai have added support for Objectron to their Hub package.

Hub provides a visualization tool for the Objectron dataset. The dataset visualizer is available here, and an example notebook is provided here.

If you would like your work to be featured in our newsletter Spotlight, please let us know!


Invite others to join Connect with Objectron

Know someone else who might be interested in joining our community? Feel free to forward this email and invite them to join our mailing list!


Questions? Visit our GitHub page or send them to our mailing list!

© 2021 Google LLC. 1600 Amphitheatre Parkway, Mountain View, CA 94043

You received this email because you are on the Objectron mailing list. If you don’t wish to receive these emails in the future, you may leave the mailing list.

Visit google.com/careers for all career opportunities.
