Python frontend mesh processing

Andrei Mancu

Mar 8, 2021, 12:22:11 PM
to Neuroglancer
Hi,

We are trying to render the meshes of cells together with their corresponding organelles. The ultimate goal is to be able to toggle between subsources (the cell organelles) in the web app.
Consequently, our getVolumeDataSource() function in python/frontend.ts should look like this:

Screenshot from 2021-03-08 17-53-39.png

Our idea is to pass a precomputed mesh through a LocalVolume layer to the server: a single bytearray containing not only the data, but also the on-demand cell mesh and the organelle meshes associated with that cell, all concatenated. We have succeeded in this part. The next step is to unpack that bytearray and create the chunks for the frontend. As far as I understand, this unpacking is done in src/neuroglancer/datasource/python/backend.ts (since we are using Python), more precisely in the decodeFragmentChunk() function.
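To make the concatenated bytearray concrete, here is a minimal sketch of the layout we have in mind. This is plain TypeScript with no Neuroglancer API; packFragments/unpackFragments and the header layout (uint32 count, then count+1 byte offsets, then the payload) are made up for illustration, not anything Neuroglancer itself defines:

```typescript
// Hypothetical layout (little-endian):
//   uint32 count
//   uint32 offsets[count + 1]   // byte offsets into the payload
//   uint8  payload[...]         // concatenated fragment bytes
function packFragments(fragments: Uint8Array[]): Uint8Array {
  const headerWords = 1 + fragments.length + 1;
  const payloadLength = fragments.reduce((sum, f) => sum + f.length, 0);
  const out = new Uint8Array(headerWords * 4 + payloadLength);
  const header = new Uint32Array(out.buffer, 0, headerWords);
  header[0] = fragments.length;
  let offset = 0;
  fragments.forEach((f, i) => {
    header[1 + i] = offset;
    out.set(f, headerWords * 4 + offset);
    offset += f.length;
  });
  header[headerWords - 1] = offset;  // end offset of the last fragment
  return out;
}

function unpackFragments(data: Uint8Array): Uint8Array[] {
  const count = new Uint32Array(data.buffer, data.byteOffset, 1)[0];
  const offsets =
      new Uint32Array(data.buffer, data.byteOffset + 4, count + 1);
  const payloadStart = (1 + count + 1) * 4;
  const result: Uint8Array[] = [];
  for (let i = 0; i < count; ++i) {
    result.push(data.subarray(payloadStart + offsets[i],
                              payloadStart + offsets[i + 1]));
  }
  return result;
}
```

The unpacking half is roughly what decodeFragmentChunk() would have to do on our encoding before handing the sub-meshes onward.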

My question is the following: Is assignMultiscaleMeshFragmentData() better suited for our problem than the plain assignMeshFragmentData()? Can the MultiscaleFragmentChunk object hold multiple subchunks via its subChunkOffsets, which could then be accessed later by the getVolumeDataSource() function in python/frontend.ts?

Screenshot from 2021-03-08 18-19-26.png

This approach is inspired by the downloadFragment() function of BrainmapsMultiscaleMeshSource in brainmaps/backend.ts.

In case this approach is erroneous, is there a better way to achieve this? Should we instead have multiple chunks, each holding the corresponding mesh, through multiple calls to assignMeshFragmentData()? Or perhaps define another subclass of Chunk that can support this?
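For context, this is my understanding of the single-resolution fragment layout that decodeFragmentChunk() ultimately parses: a uint32 vertex count, then the float32 vertex positions, then uint32 triangle indices to the end of the buffer. This is only a sketch based on my reading of the code, and the decoding below is my own illustration rather than Neuroglancer's actual implementation:

```typescript
// Assumed legacy fragment layout (little-endian):
//   uint32  numVertices
//   float32 vertexPositions[numVertices * 3]
//   uint32  triangleIndices[... to end of buffer]
function decodeLegacyFragment(data: Uint8Array):
    {vertexPositions: Float32Array, indices: Uint32Array} {
  const dv = new DataView(data.buffer, data.byteOffset, data.byteLength);
  const numVertices = dv.getUint32(0, /*littleEndian=*/ true);
  const vertexBytes = numVertices * 3 * 4;
  const vertexPositions =
      new Float32Array(data.buffer, data.byteOffset + 4, numVertices * 3);
  const indices = new Uint32Array(
      data.buffer, data.byteOffset + 4 + vertexBytes,
      (data.byteLength - 4 - vertexBytes) / 4);
  return {vertexPositions, indices};
}
```

If each organelle mesh we concatenate is itself in this format, the per-chunk alternative would amount to decoding each slice like this and assigning it to its own chunk.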

I would be very thankful for any help.

Best regards,
Andrei Mancu
