[osg-users] LOD capabilities


Bruno Oliveira

Apr 26, 2016, 4:20:07 AM
to OpenSceneGraph Users
Hello,

I am thinking of porting my code for representing a huge point cloud scene from a simple osg::Group/osg::Node structure to a PagedLOD. This is due to the fact that I can't

I have built a custom octree for this purpose (I could probably publish it in the future if it would be useful for the OSG lib).

My problem is that I need to perform some cloud manipulations, e.g. delete points, add more points, perform picking...
  1. Would this be possible using the OSG PagedLOD node structure? Or is it better to do all these operations on my own?
  2. Would this be possible without ever loading the entire cloud at once (with the already existing code)? How much would this affect the overall picking performance, etc?
  3. I still don't understand OSG Point Picking very well. Are there any good examples on this?

Thank you!

Robert Osfield

Apr 26, 2016, 4:40:29 AM
to OpenSceneGraph Users
Hi Bruno,

On 26 April 2016 at 09:20, Bruno Oliveira <bruno.mana...@gmail.com> wrote:
> I am thinking of porting my code for representing a huge point cloud scene from a simple osg::Group/osg::Node structure to a PagedLOD. This is due to the fact that I can't
>
> I have built a custom octree for this purpose (I could probably publish it in the future if it would be useful for the OSG lib).
>
> My problem is that I need to perform some cloud manipulations, e.g. delete points, add more points, perform picking...
>   1. Would this be possible using the OSG PagedLOD node structure? Or is it better to do all these operations on my own?
You can use PagedLOD, but you'll need to write a tool to pre-process the raw data into a hierarchy of tiles that the PagedLODs will reference.

 
> 2. Would this be possible without ever loading the entire cloud at once (with the already existing code)?
How long is the piece of string I have in my hand? Please answer this question to within 1 cm of accuracy, without asking any further questions.

If you find this question difficult to answer, then you'll understand why we don't have a hope in hell of answering your question.

Please state size of your datasets.

Robert

Bruno Oliveira

Apr 26, 2016, 11:04:06 AM
to OpenSceneGraph Users
Robert,

the tool I'm creating is designed to handle clouds from 100 million to 1 billion points, hence the out-of-core rendering.
By "with the already existing code", I mean code from OSG.

Thank you

_______________________________________________
osg-users mailing list
osg-...@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Robert Osfield

Apr 26, 2016, 11:28:17 AM
to OpenSceneGraph Users
Hi Bruno,

On 26 April 2016 at 16:04, Bruno Oliveira <bruno.mana...@gmail.com> wrote:
> the tool I'm creating is designed to handle clouds from 100 million points to 1 billion, hence the out of core rendering.
> By "with the already existing code", I mean code from OSG.

100 million to 1 billion does qualify as a "huge" point cloud.

The issues aren't directly related to the OSG, but to how you manage that amount of data on disk, in main memory and on the GPU. The OSG itself can be used to store the data in main memory and on the GPU, and even provides mechanisms for paging data in and out from disk (via PagedLOD/DatabasePager). You are responsible for how you set up the scene graph, and how you do this is the crux of the task; it's not trivial given the amount of data you are dealing with.

When working out what can be done, you need to start by doing the maths on how much memory each point in your cloud requires. A single vec3 per point is 12 bytes, so 100 million points require 1.2 GB and 1 billion requires 12 GB. When passing data to OpenGL you put it into the OpenGL FIFO, from where it gets copied into driver memory. If you are using VBOs you'll potentially end up with one copy of the data in application memory, one in driver memory, and, once it finally gets rendered, a copy on the GPU too. This means at least 2.4 GB of main memory and 1.2 GB on the GPU just for 100 million vertices, and that's without anything else.

And that is without any colours, normals or textures. You haven't mentioned anything about these, but perhaps you should, as they make a big difference to the memory footprint we are talking about. I'm not expecting that you'll just have white points...

So, when you ask whether the OSG can do it: you can get it to scale to multi-terabyte databases thanks to the built-in paging support, but only if you give it an appropriately built scene graph. Point clouds are a niche that doesn't have open-source tools to build the database for you. You need to be realistic about the memory management; handling really large amounts of data requires far more skill than rendering a few pretty textured polygons. I don't know what your background knowledge is; what can we assume?

If you don't have the skills right now then you'll need to patiently develop them, or pay a third-party engineer who has the skills to work with you on it.

Robert.

Jason Beverage

Apr 26, 2016, 11:50:08 AM
to OpenSceneGraph Users
Hi Bruno,

We have a point cloud capability that we've developed here at Pelican Mapping that integrates nicely with osgEarth. You can see a video of it in action here, rendering around 5 billion points: https://youtu.be/lUeF4Y8yGNI. We can easily perform picking on individual points, so you can get all the information about a point (location, RGB, intensity, etc.), but we don't have any support for modifying the point cloud (although that is something we could add). Editing a giant paged dataset would take some special consideration, but it's definitely doable.

Let me know if you want any more information; happy to chat sometime.

Thanks!

Jason
