- decouple the server and the viewer. So we created a server (still Tornado) that can support multiple viewers (neuroglancer instances) at once.
- add miscellaneous support to the server, reusing some methods already implemented in neuroglancer, such as link sharing.
- in order to have flexibility, we created our own "knossos" protocol, with endpoints that transfer the data exactly the way we want (chunk sizes, compression, etc.); see the rough sketch after this list.
- to keep everything running smoothly, we configured nginx to serve the (compressed) files directly from disk.
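
To make that concrete, here is a rough TypeScript sketch of the kind of chunk request those endpoints serve. The URL layout, field names, and file extension are purely illustrative, not our actual protocol:

```typescript
// Illustrative sketch only: the endpoint layout and field names are made up
// to show the idea of a per-chunk request, not our real "knossos" protocol.
interface ChunkRequest {
  dataset: string;
  scaleKey: string;                   // e.g. "mag1"
  origin: [number, number, number];   // chunk origin in voxels
  size: [number, number, number];     // chunk size, e.g. [128, 128, 128]
}

async function fetchRawChunk(baseUrl: string, req: ChunkRequest): Promise<ArrayBuffer> {
  // One file per chunk; nginx can serve these (pre-compressed) files straight from disk.
  const url = `${baseUrl}/${req.dataset}/${req.scaleKey}/` +
    `${req.origin.join('_')}__${req.size.join('_')}.snappy`;
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`chunk request failed: ${response.status}`);
  }
  // Still compressed at this point; decompression happens in the frontend.
  return response.arrayBuffer();
}
```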
Now this is where the problems started to arise: because we are serving chunks of size 128x128x128 (and even larger chunk sizes on bigger datasets), the decompression step, which now happens in the frontend, gets slow. To address this, we adopted modules that handle it very quickly (for example, @evan/wasm for snappy decompression).
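
The wrapper around the WASM decompressor then looks roughly like the sketch below. I'm deliberately not spelling out the @evan/wasm import, since its exact module path and function name should be taken from the package docs; the decompressor is passed in as a parameter instead:

```typescript
// The actual decompress function would come from a WASM module such as
// @evan/wasm (snappy); its exact import path is intentionally not shown here.
type SnappyDecompress = (compressed: Uint8Array) => Uint8Array;

function decodeSnappyChunk(
  decompress: SnappyDecompress,
  compressedChunk: ArrayBuffer,
  chunkSize: [number, number, number],  // e.g. [128, 128, 128]
  bytesPerVoxel: number,                // e.g. 1 for uint8 data
): Uint8Array {
  const voxels = decompress(new Uint8Array(compressedChunk));
  const expected = chunkSize[0] * chunkSize[1] * chunkSize[2] * bytesPerVoxel;
  if (voxels.length !== expected) {
    throw new Error(`chunk size mismatch: expected ${expected}, got ${voxels.length}`);
  }
  return voxels;
}
```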
For JPEG decompression, we first tried the decoder already used by neuroglancer (jpgjs), but it turned out to be really slow.
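
For comparison, one alternative to a pure JS/WASM decoder is the browser-native path via createImageBitmap and an OffscreenCanvas (both usable in web workers in recent browsers). A minimal sketch, assuming the JPEG bytes arrive as an ArrayBuffer:

```typescript
// Minimal sketch of a browser-native JPEG decode, as an alternative to jpgjs.
// createImageBitmap and OffscreenCanvas are both available in web workers.
async function decodeJpegNative(jpegBytes: ArrayBuffer,
                                width: number,
                                height: number): Promise<Uint8ClampedArray> {
  const blob = new Blob([jpegBytes], { type: 'image/jpeg' });
  const bitmap = await createImageBitmap(blob);
  const canvas = new OffscreenCanvas(width, height);
  const ctx = canvas.getContext('2d');
  if (ctx === null) {
    throw new Error('could not get 2d context');
  }
  ctx.drawImage(bitmap, 0, 0);
  bitmap.close();
  // RGBA pixel data; grayscale volumes would only need one channel of this.
  return ctx.getImageData(0, 0, width, height).data;
}
```

The caveat is that this yields RGBA pixels through the browser's image pipeline, so whether it actually beats a WASM decoder on our chunk sizes would need measuring.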
Have you tried any JPEG decompression libraries other than jpgjs? I would be really interested, especially in WASM-based ones. I have tried @saschazar/wasm-image-loader, but it doesn't seem to build well with neuroglancer.
It would be awesome to get some informed input on this! Thank you!
Best regards,
Andrei Mancu