
Pure web DICOM client: is it possible?


Aaron Boxer

Apr 1, 2014, 2:18:13 PM
Folks,

Inspired by Chris Hafey's recently launched HTML5 viewer and js DICOM parser,
I am wondering out loud if a complete open source DICOM viewer solution is possible using just web tools? And, my feeling is that it *is* possible, given the recent advances in both DICOM and web technologies.

So, here is my ideal stack:

1) cornerstone 2D viewer with all of the usual tools, CINE, etc.
2) PACS search page using QIDO
3) key image creation using STOW
3) WADO streaming to local disk using web workers and IndexedDB local storage
4) real time decompression of encapsulated pixel data using asm.js and WebCL
5) MPR using asm.js and WebGL
6) Volume Rendering using asm.js, WebCL and WebGL

If enough people pitch in, we could be leaving the dark period of abandonware,
hobbled "open source" and broken promises behind, and entering a golden age.

Aaron







Chris Hafey

Apr 1, 2014, 3:01:56 PM
On Tuesday, April 1, 2014 1:18:13 PM UTC-5, Aaron Boxer wrote:
> Folks,
>
> Inspired by Chris Hafey's recently launched HTML5 viewer and js DICOM parser,
> I am wondering out loud if a complete open source DICOM viewer solution is possible using just web tools? And, my feeling is that it *is* possible, given the recent advances in both DICOM and web technologies.

Woot, I am happy to be inspiring :) I do think it is possible and am going to do what I can to make it a reality.

> So, here is my ideal stack:
>
> 1) cornerstone 2D viewer with all of the usual tools, CINE, etc.

Yeah :)

> 2) PACS search page using QIDO

Possibly, but it might be better to have the client talk to an app server which then calls QIDO due to the following reasons:
1) Security - It seems unlikely that a QIDO server would have adequate authz/authc control to support the wide variety of access control scenarios that exist
2) An app server will likely be needed anyway, see below

> 3) key image creation using STOW

OK, but the app server should make the STOW call, not the client. Security is one big reason - I know I wouldn't want to give an end user browser the ability to add data to a clinical record without being sure that it is doing the right thing.

> 3) WADO streaming to local disk using web workers and IndexedDB local storage

I think there is a very low limit on local storage - something like 5MB - so it may not be that helpful. WADO works for small images, but I am skeptical about it working for large matrix sizes and multiframe.

> 4) real time decompression of encapsulated pixel data using asm.js and WebCL

Yes, this should be possible, but I would prefer to have the server do it and provide a more uniform interface to the client. In general I believe that servers should make life as simple for clients as possible; requiring the client to decompress every possible DICOM transfer syntax is not in line with this belief. There are some creative things you can do with using PNG as a container for transmitting losslessly compressed pixel data....

> 5) MPR using asm.js and WebGL

I don't think this is something to pursue anytime soon because:
1) Loading all the slices to the client is time consuming
2) Requires a ton of memory
3) Will cause browser instability due to lack of memory management capabilities of the JS VM

This should really be done on the server, which a) should have a high bandwidth pipe to the archive and b) has ample memory. It should also be done in a language with deterministic memory management, which JavaScript lacks.

> 6) Volume Rendering using asm.js, WebCL and WebGL

Not yet - same reason as above.

> If enough people pitch in, we could be leaving the dark period of abandonware,
> hobbled "open source" and broken promises behind, and entering a golden age.

Yup, I think it's time we make this happen.

Chris Hafey

Apr 1, 2014, 3:13:41 PM
One more reason to have an app server between the client and the QIDO/WADO/STOW server is cross-origin requests: http://en.wikipedia.org/wiki/Cross-origin_resource_sharing. See the CORS web site
http://enable-cors.org/ for more information on this. For some reason Microsoft has gone off the deep end security-wise here, and IE11 seems to have extended this to all HTTP requests for the originating site. I hope I am wrong about IE11, or that Microsoft changes their direction on this, but it could also be the new trend...
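As a rough sketch, enabling CORS on such an app server mostly amounts to sending the right response headers and answering preflight requests; the Node.js http module, origin, and port below are illustrative assumptions, not part of any specific product:

// Minimal CORS-enabled endpoint sketch (Node.js core http module).
// The allowed origin and the port are placeholders.
var http = require('http');

http.createServer(function (req, res) {
  // Let a known viewer origin call this server from the browser.
  res.setHeader('Access-Control-Allow-Origin', 'https://viewer.example.com');
  res.setHeader('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
  res.setHeader('Access-Control-Allow-Headers', 'Content-Type, Accept');

  if (req.method === 'OPTIONS') {
    // Preflight request: the headers above are the whole answer.
    res.writeHead(204);
    res.end();
    return;
  }

  // ...forward the request to the QIDO/WADO/STOW backend here...
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end('{}');
}).listen(8080);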

Aaron Boxer

Apr 1, 2014, 6:26:14 PM
On Tuesday, April 1, 2014 3:01:56 PM UTC-4, Chris Hafey wrote:
> On Tuesday, April 1, 2014 1:18:13 PM UTC-5, Aaron Boxer wrote:
> > Folks,
> >
> > Inspired by Chris Hafey's recently launched HTML5 viewer and js DICOM parser,
> > I am wondering out loud if a complete open source DICOM viewer solution is possible using just web tools? And, my feeling is that it *is* possible, given the recent advances in both DICOM and web technologies.
>
> Woot, I am happy to be inspiring :) I do think it is possible and am going to do what I can to make it a reality.

Cool!

> > So, here is my ideal stack:
> >
> > 1) cornerstone 2D viewer with all of the usual tools, CINE, etc.
>
> Yeah :)
>
> > 2) PACS search page using QIDO
>
> Possibly, but it might be better to have the client talk to an app server which then calls QIDO due to the following reasons:
> 1) Security - It seems unlikely that a QIDO server would have adequate authz/authc control to support the wide variety of access control scenarios that exist

Well, if you have an app server to do queries, then I suppose you would just do good old Q/R, and not need QIDO at all. QIDO allows browsers to query PACS without an intermediary.

Regarding security, I don't know enough about ACL to comment, although you can certainly add basic auth to the search page itself.
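For what it's worth, a direct browser-to-PACS QIDO-RS query could look roughly like the sketch below, assuming the PACS allows CORS and accepting that the base URL, credentials, and accepted media type are all placeholders:

// Sketch of a study-level QIDO-RS search straight from the browser.
// Base URL and credentials are placeholders; the PACS must allow CORS.
function searchStudies(patientName, onResult) {
  var url = 'https://pacs.example.com/qido/studies' +
            '?PatientName=' + encodeURIComponent(patientName) +
            '&limit=25';
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, true);
  xhr.setRequestHeader('Accept', 'application/dicom+json');
  // Basic auth as mentioned above; a real deployment would want something stronger.
  xhr.setRequestHeader('Authorization', 'Basic ' + btoa('user:password'));
  xhr.onload = function () {
    if (xhr.status === 200) {
      onResult(JSON.parse(xhr.responseText)); // array of study-level attribute objects
    }
  };
  xhr.send();
}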


> 2) An app server will likely be needed anyway, see below
>
> > 3) key image creation using STOW
>
> OK, but the app server should make the STOW call, not the client. Security is one big reason - I know I wouldn't want to give an end user browser the ability to add data to a clinical record without being sure that it is doing the right thing.

Yes, I have to think through the security angle. Does the standard address security, I wonder?

> > 3) WADO streaming to local disk using web workers and IndexedDB local storage
>
> I think there is a very low limit on local storage - something like 5MB - so it may not be that helpful. WADO works for small images, but I am skeptical about it working for large matrix sizes and multiframe.

You can increase the cap, but the user has to accept.
The trick here, and I have done this in other languages, is to stream data directly to disk, without parsing or decompressing, so memory requirements are low - assuming JS cleans up memory properly after a stream is closed. If the cap can be made large (and if the user is kind enough to install an [inexpensive] SSD), then it will work quite nicely. A spinning disk will work fine too.
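A rough sketch of that idea - a web worker that pulls WADO objects and drops the raw bytes into IndexedDB without parsing them; the database and store names, message shapes, and URL scheme are illustrative assumptions:

// worker.js - fetch WADO objects and store the raw bytes in IndexedDB untouched.
var db;
var open = indexedDB.open('dicomCache', 1);
open.onupgradeneeded = function (e) {
  e.target.result.createObjectStore('instances'); // keyed by SOP Instance UID
};
open.onsuccess = function (e) {
  db = e.target.result;
};

self.onmessage = function (msg) {
  var uid = msg.data.sopInstanceUid;
  var xhr = new XMLHttpRequest();
  xhr.open('GET', msg.data.wadoUrl, true);
  xhr.responseType = 'arraybuffer';   // keep it as raw bytes, no parsing here
  xhr.onload = function () {
    var tx = db.transaction('instances', 'readwrite');
    tx.objectStore('instances').put(xhr.response, uid);
    tx.oncomplete = function () {
      self.postMessage({ sopInstanceUid: uid, stored: true });
    };
  };
  xhr.send();
};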

> > 4) real time decompression of encapsulated pixel data using asm.js and WebCL
>
> Yes, this should be possible, but I would prefer to have the server do it and provide a more uniform interface to the client. In general I believe that servers should make life as simple for clients as possible; requiring the client to decompress every possible DICOM transfer syntax is not in line with this belief. There are some creative things you can do with using PNG as a container for transmitting losslessly compressed pixel data....

Interesting, I am not familiar with the PNG trick.


> > 5) MPR using asm.js and WebGL
>
> I don't think this is something to pursue anytime soon because:
> 1) Loading all the slices to the client is time consuming
> 2) Requires a ton of memory
> 3) Will cause browser instability due to lack of memory management capabilities of the JS VM
>
> This should really be done on the server, which a) should have a high bandwidth pipe to the archive and b) has ample memory. It should also be done in a language with deterministic memory management, which JavaScript lacks.
>
> > 6) Volume Rendering using asm.js, WebCL and WebGL
>
> Not yet - same reason as above.

Yes, perhaps advanced 3D features may not work.

So, stepping back, it looks like we need two different types of viewer:

1) basic 2D image review - a pure DICOM solution - WADO/QIDO/STOW with basic authentication on the login page. JPEG images are pulled from the PACS, with optional WADO streaming for large studies if local storage tech is sufficiently mature. This should be fairly straightforward, and the nice thing is that this client will be able to connect to any WADO/QIDO enabled PACS, or a proxy PACS can be used. Either way, the client stays completely within the standard.

2) Power Viewer: add an app server for security, advanced ACL, patient reconciliation and feeding RIS reports, plus a native browser plugin (NaCl for Chrome) to handle the load: the full DICOM image is used, with a full MPR/VR solution using VTK

#2 just builds on top of #1.


> > If enough people pitch in, we could be leaving the dark period of abandonware,
> > hobbled "open source" and broken promises behind, and entering a golden age.
>
> Yup, I think it's time we make this happen.


Yes!


Fuli

Apr 1, 2014, 6:57:05 PM
I think parsing of complex DICOM transfer syntaxes should be done on the server side. A uniform, complete DICOM file would then be transferred to the browser, with JP2K format for the pixel data and UTF-8 encoding for the meta-data. That will minimize the bandwidth requirement of the network transfer.

JavaScript decoding of a complete DICOM file in real time will become a reality, even if it uses JP2K encoding (see Mozilla's work on decoding PDF files).

I prefer to display the image data using WebGL. Changing WW/WL or the CLUT is then done entirely on the browser side and in real time.
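A minimal sketch of browser-side WW/WL along those lines, assuming the slice has already been uploaded as a luminance texture with modality rescale applied and values normalized to 0..1 (the uniform names and that normalization are assumptions):

// Fragment shader applying a linear window to a grayscale texture.
// u_windowCenter/u_windowWidth are in the same normalized units as the texture.
var wwwlFragmentShader = [
  'precision mediump float;',
  'uniform sampler2D u_image;',
  'uniform float u_windowCenter;',
  'uniform float u_windowWidth;',
  'varying vec2 v_texCoord;',
  'void main() {',
  '  float value = texture2D(u_image, v_texCoord).r;',
  '  float lower = u_windowCenter - 0.5 * u_windowWidth;',
  '  float display = clamp((value - lower) / u_windowWidth, 0.0, 1.0);',
  '  gl_FragColor = vec4(display, display, display, 1.0);',
  '}'
].join('\n');
// Changing WW/WL is then just updating two uniforms and redrawing - no server
// round trip and no re-upload of the pixel data.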

Fuli

Apr 1, 2014, 7:08:46 PM
Totally agree. At least for now, it is not realistic to render MPR or DVR (direct volume rendering) on the browser side, especially when the data size is extremely large. So all the heavy CPU/GPU computing is put on the server side. The rendering result is then packed into JP2K/PNG data and streamed to the browser, which uses a web worker to receive the rendering stream from the server and displays it with WebGL.

On the server side, I think VTK is the best option for MPR. But for volume rendering or surface extraction, VTK is not the best - we have a lot of toolkits from the visualization and graphics community.

Chris Hafey

Apr 1, 2014, 9:11:05 PM
OK, let's break this down:
1) Agreed that the server should mask the complexity of DICOM from the client
2) Agree that the pixel data should be compressed and a single compression algorithm should be used
3) I disagree that DICOM is the right container to use to ship the image to the client. The main reason for this is multiframe - the client mainly wants to see a single frame at a time and you just don't want to send an entire 2GB multiframe instance to it so it can display one frame (if it could even handle the instance without crashing the browser...). So lets say you decide to extract a frame, compress it with JP2K and then package it as a single frame sop instance to send to the client. Yes this would work but you are dramatically changing the SOPInstance in this process so the benefit of DICOM is lost compared to other more web friendly formats.
4) I agree that javascript will keep getting faster as well as CPUs so the decoding overhead of JP2K will not be an issue in time.
5) I am uncertain that JP2K is the right compression to use but I could be convinced. My main concerns are a) patent issues b) availability of robust free decoders and c) how interoperable it is. My information is a bit dated but I know there were issues with all three of these a few years ago - do you have data that says otherwise? While JP2K is mathematically the "superior" compression scheme in terms of size, I would prefer to use a patent free, popular and stable compression scheme even if it means a lower compression ratio. GZIP or Deflate come to mind and also have the benefit of using native code for decompression (instead of javascript).


> JavaScript decoding of a complete DICOM file in real time will become a reality, even if it uses JP2K encoding (see Mozilla's work on decoding PDF files).

I agree that JP2K decompression performance will eventually be a non issue due to speed improvements in javascript and of course faster CPUs. I don't think we are there yet for phones and tablets though. As said above, I think a compression algorithm that is patent free with many stable implementations would serve us better. I don't want to be the one to tell a patient or doctor that they can't see images because we chose a compression algorithm that is technically superior but buggy.

> I prefer to display the image data using WebGL. Changing WW/WL or the CLUT is then done entirely on the browser side and in real time.

Do you have any sample code that does WW/WL using this approach? Would love to take a look...

Chris

Aaron Boxer

Apr 1, 2014, 10:19:10 PM
This is interesting; can you elaborate on why VTK is not the best for volume rendering and surface extraction? What toolkit do you feel does the best job here?

Aaron Boxer

Apr 1, 2014, 10:40:00 PM
> 3) I disagree that DICOM is the right container to use to ship the image to the client. The main reason for this is multiframe - the client mainly wants to see a single frame at a time and you just don't want to send an entire 2GB multiframe instance to it so it can display one frame (if it could even handle the instance without crashing the browser...). So lets say you decide to extract a frame, compress it with JP2K and then package it as a single frame sop instance to send to the client. Yes this would work but you are dramatically changing the SOPInstance in this process so the benefit of DICOM is lost compared to other more web friendly formats.

I did a little research: WADO does support a frame list containing multiple frames. So, let's say we have a large multiframe CT object, we could stream the data to the client as 10-slice WADO j2k compressed objects. On a gig network, this should work quite well, I think. Over the internet, you would need to adjust your parameters. It would be nice if the client could somehow set the number of slices in the WADO request with a header field - if the field is absent, just return a single frame.
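As a rough illustration, a frame-list retrieve could be built along these lines with WADO-RS, where the frame list is a comma-separated path segment; the base URL is a placeholder and server support for this is assumed:

// Sketch: build a WADO-RS URL that retrieves a contiguous batch of frames.
function buildFramesUrl(baseUrl, studyUid, seriesUid, instanceUid, firstFrame, count) {
  var frames = [];
  for (var i = 0; i < count; i++) {
    frames.push(firstFrame + i);          // frame numbers are 1-based
  }
  return baseUrl +
    '/studies/' + studyUid +
    '/series/' + seriesUid +
    '/instances/' + instanceUid +
    '/frames/' + frames.join(',');
}

// e.g. request frames 1-10 of a multiframe CT object in one round trip:
// var url = buildFramesUrl('https://pacs.example.com/wado-rs', study, series, instance, 1, 10);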

I favour a two-pronged approach - the level 1 viewer stays fully within the standard - it is server agnostic and can connect to any WADO enabled PACS - QIDO too, native or via proxy. It may not be very fast, but it all works. And on a fast network it will be fast enough.


Level 2 does everything level 1 does, but optimized for performance and large studies. Plus throw in server-side 3D. And here, you could sacrifice compliance for raw speed.


> 4) I agree that javascript will keep getting faster as well as CPUs so the decoding overhead of JP2K will not be an issue in time.
>
> 5) I am uncertain that JP2K is the right compression to use but I could be convinced. My main concerns are a) patent issues b) availability of robust free decoders and c) how interoperable it is. My information is a bit dated but I know there were issues with all three of these a few years ago - do you have data that says otherwise? While JP2K is mathematically the "superior" compression scheme in terms of size, I would prefer to use a patent free, popular and stable compression scheme even if it means a lower compression ratio. GZIP or Deflate come to mind and also have the benefit of using native code for decompression (instead of javascript).
>
> > JavaScript decoding of a complete DICOM file in real time will become a reality, even if it uses JP2K encoding (see Mozilla's work on decoding PDF files).
>
> I agree that JP2K decompression performance will eventually be a non issue due to speed improvements in javascript and of course faster CPUs. I don't think we are there yet for phones and tablets though. As said above, I think a compression algorithm that is patent free with many stable implementations would serve us better. I don't want to be the one to tell a patient or doctor that they can't see images because we chose a compression algorithm that is technically superior but buggy.

Actually, I am working at the moment on a fast j2k decoder. In time, it will be robust. As I understand, the j2k patent holders have agreed to allow royalty free usage of their IP, although there may be submarine patents lurking. But, can we not make the compression technique an implementation detail? With the proper architecture, it should be easy to switch compression techniques if the need arises. Also, those phones and tablets have multi-core and GPU chips these days; there is a lot of horsepower there to be utilized. So, I think j2k holds a lot of promise. And given that this project we are dreaming up won't get out into the wild for another year at least, the hardware will be that much faster at release.

Fuli

Apr 1, 2014, 11:48:01 PM
VTK is a little behind the state of the art in volume rendering in recent years, especially in using the GPU for local or global illumination of volume data.

For example: http://voreen.uni-muenster.de/?q=node/10 - its rendering results are better than VTK's while the rendering speed is faster. Or compare my two results ( goo.gl/NqRrxA ) to OsiriX.

About extracting surfaces or meshes: CGAL is the most excellent library for any mesh operation. Of course, we still need ITK for medical image operations.

Fuli

Apr 2, 2014, 12:24:41 AM
Agreed! If we want doctors to use this system, we must integrate it as much as possible into existing PACS systems and stay compliant with existing DICOM standards, while supporting the new RESTful standards for DICOM. We should only build a custom, non-standard protocol to transfer meta-data or pixel data between browser and server when no standard supports it, such as rendering parameters for volume data or an ROI for 2D images.



> > 4) I agree that javascript will keep getting faster as well as CPUs so the decoding overhead of JP2K will not be an issue in time.
>
> > 5) I am uncertain that JP2K is the right compression to use but I could be convinced. My main concerns are a) patent issues b) availability of robust free decoders and c) how interoperable it is. My information is a bit dated but I know there were issues with all three of these a few years ago - do you have data that says otherwise? While JP2K is mathematically the "superior" compression scheme in terms of size, I would prefer to use a patent free, popular and stable compression scheme even if it means a lower compression ratio. GZIP or Deflate come to mind and also have the benefit of using native code for decompression (instead of javascript).

Actually, we have no other options. Almost all existing PACS systems use uncompressed, JPEG, or JPEG 2000 formats for pixel data. Maybe Deflate is a good option, but based on my knowledge it is the best option for meta-data, not for pixel data.

> > > JavaScript decoding of a complete DICOM file in real time will become a reality, even if it uses JP2K encoding (see Mozilla's work on decoding PDF files).
> >
> > I agree that JP2K decompression performance will eventually be a non issue due to speed improvements in javascript and of course faster CPUs. I don't think we are there yet for phones and tablets though. As said above, I think a compression algorithm that is patent free with many stable implementations would serve us better. I don't want to be the one to tell a patient or doctor that they can't see images because we chose a compression algorithm that is technically superior but buggy.
>
> Actually, I am working at the moment on a fast j2k decoder. In time, it will be robust. As I understand, the j2k patent holders have agreed to allow royalty free usage of their IP, although there may be submarine patents lurking. But, can we not make the compression technique an implementation detail? With the proper architecture, it should be easy to switch compression techniques if the need arises. Also, those phones and tablets have multi-core and GPU chips these days; there is a lot of horsepower there to be utilized. So, I think j2k holds a lot of promise. And given that this project we are dreaming up won't get out into the wild for another year at least, the hardware will be that much faster at release.

I agree that a GPU j2k decoder is faster. It is also quite fundamental work for DICOM image decoding.
But actually, OpenJPEG is fast enough compared to the KDU library on the PC and Mac platforms (I am not sure about iOS or Android), especially on multi-core architectures; KDU's advantage in decoding speed can be negligible.

For the pure browser side, we can find some libraries on GitHub that convert OpenJPEG to JavaScript. I don't know how fast they decode JP2K images now.

Aaron Boxer

Apr 2, 2014, 8:49:57 AM
Thanks, those images look very nice. I will have to play around with Voreen.

So, a mixture of VTK for MPR, Voreen for VR, CGAL for mesh creation and ITK for surface extraction should be a good engine on the server side for 3D.

Aaron Boxer

Apr 2, 2014, 8:59:56 AM
Yes. Although I don't think any production PACS supports QIDO search at the moment, so, as Chris stated, there will have to be another server doing the search, even for a level 1 system. I prefer to have a QIDO proxy for this, because the solution I have in mind could also proxy WADO requests for non-WADO PACS and cache pixel data and search results for better performance.




> > > 4) I agree that javascript will keep getting faster as well as CPUs so the decoding overhead of JP2K will not be an issue in time.
> > >
> > > 5) I am uncertain that JP2K is the right compression to use but I could be convinced. My main concerns are a) patent issues b) availability of robust free decoders and c) how interoperable it is. My information is a bit dated but I know there were issues with all three of these a few years ago - do you have data that says otherwise? While JP2K is mathematically the "superior" compression scheme in terms of size, I would prefer to use a patent free, popular and stable compression scheme even if it means a lower compression ratio. GZIP or Deflate come to mind and also have the benefit of using native code for decompression (instead of javascript).
>
> Actually, we have no other options. Almost all existing PACS systems use uncompressed, JPEG, or JPEG 2000 formats for pixel data. Maybe Deflate is a good option, but based on my knowledge it is the best option for meta-data, not for pixel data.
>
Deflate would be perfectly fine for meta data. But, of course, it is the pixel data that makes the heaviest load on the system.


> > > > JavaScript decoding of a complete DICOM file in real time will become a reality, even if it uses JP2K encoding (see Mozilla's work on decoding PDF files).
> > >
> > > I agree that JP2K decompression performance will eventually be a non issue due to speed improvements in javascript and of course faster CPUs. I don't think we are there yet for phones and tablets though. As said above, I think a compression algorithm that is patent free with many stable implementations would serve us better. I don't want to be the one to tell a patient or doctor that they can't see images because we chose a compression algorithm that is technically superior but buggy.
> >
> > Actually, I am working at the moment on a fast j2k decoder. In time, it will be robust. As I understand, the j2k patent holders have agreed to allow royalty free usage of their IP, although there may be submarine patents lurking. But, can we not make the compression technique an implementation detail? With the proper architecture, it should be easy to switch compression techniques if the need arises. Also, those phones and tablets have multi-core and GPU chips these days; there is a lot of horsepower there to be utilized. So, I think j2k holds a lot of promise. And given that this project we are dreaming up won't get out into the wild for another year at least, the hardware will be that much faster at release.
>
> I agree that a GPU j2k decoder is faster. It is also quite fundamental work for DICOM image decoding.
> But actually, OpenJPEG is fast enough compared to the KDU library on the PC and Mac platforms (I am not sure about iOS or Android), especially on multi-core architectures; KDU's advantage in decoding speed can be negligible.

In my experience, OpenJPEG is slow. In fact, according to a thread on their list, version 2.0 is even slower than 1.5, and nobody knows why. KDU leaves them very far behind, so I would not consider OpenJPEG very usable.
My encoder, which uses OpenCL for GPU acceleration, should be much better, and could take advantage of WebCL and asm.js to provide a plugin-free solution.

> For the pure browser side, we can find some libraries on GitHub that convert OpenJPEG to JavaScript. I don't know how fast they decode JP2K images now.

Yes, this is asm.js using Mozilla emscripten. It promises to be just half the speed of native code, which is pretty good.


Chris Hafey

Apr 2, 2014, 9:26:57 AM
On Tuesday, April 1, 2014 11:24:41 PM UTC-5, Fuli wrote:
> Actually, we have no other options. Almost all existing PACS systems use uncompressed, JPEG, or JPEG 2000 formats for pixel data. Maybe Deflate is a good option, but based on my knowledge it is the best option for meta-data, not for pixel data.

PNG uses deflate and it is the web standard for lossless images. JP2K will provide a higher level of lossless compression, so it is better than deflate in that regard. I believe that many customers with VNAs will likely store the data in JP2K to minimize storage costs. I also believe that VNAs will more than likely be the WADO server this viewer would communicate with. So it does make sense to try and make JP2K work if we can.

> For the pure browser side, we can find some libraries on GitHub that convert OpenJPEG to JavaScript. I don't know how fast they decode JP2K images now.

It would be great if someone could do some performance benchmarking of such a library using a variety of devices (phone, tablet, desktop, etc) with different sized images (256x256->4kx4k).
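A minimal timing harness for such a benchmark might look like the sketch below; decodeJ2K stands in for whichever JavaScript decoder is under test and is purely a placeholder:

// Times a hypothetical decodeJ2K(arrayBuffer) function over several runs and
// reports the average decode time per image in milliseconds.
function benchmarkDecode(decodeJ2K, encodedBuffers, runs) {
  var total = 0;
  for (var r = 0; r < runs; r++) {
    for (var i = 0; i < encodedBuffers.length; i++) {
      var start = performance.now();
      decodeJ2K(encodedBuffers[i]);
      total += performance.now() - start;
    }
  }
  return total / (runs * encodedBuffers.length);
}
// Run this with 256x256 up to 4kx4k test images on a phone, a tablet and a
// desktop, and record the averages to compare devices and image sizes.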

Chris Hafey

Apr 2, 2014, 9:38:49 AM
On Tuesday, April 1, 2014 9:40:00 PM UTC-5, Aaron Boxer wrote:
> I did a little research: WADO does support a frame list containing multiple frames. So, let's say we have a large multiframe CT object, we could stream the data to the client as 10-slice WADO j2k compressed objects. On a gig network,

I wasn't aware of this, I'll do some research on this.

> this should work quite well, I think. Over the internet, you would need to adjust your parameters. It would be nice if the client could somehow set the number of slices in the WADO request with a header field - if the field is absent, just return a single frame.
>
> I favour a two-pronged approach - the level 1 viewer stays fully within the standard - it is server agnostic and can connect to any WADO enabled PACS - QIDO too, native or via proxy. It may not be very fast, but it all works. And on a fast network it will be fast enough.

Yes, this makes a lot of sense. Level 2 might even drive additional work in the standards if we can use it to point out opportunities to improve over the level 1 "standards" version. That standards work would be accelerated if we could compare apples to apples here, with the source available so everyone could determine whether a difference was due to an implementation issue or not.

> Level 2 does everything level 1 does, but optimized for performance and large studies. Plus throw in server-side 3D. And here, you could sacrifice compliance for raw speed.

I have a need, a need for SPEED! There are often competing needs around here - one to be 100% standards compliant and another to produce the fastest software possible. If we can come up with an architecture/strategy that unifies both groups it would be a huge victory. Your 2 pronged approach sounds like it could be this...


David Clunie

Apr 2, 2014, 10:49:06 AM
By the way, have you guys looked at XTK and SliceDrop:

http://slicedrop.com/
https://github.com/xtk/X#readme

No idea how robust these are, but there would seem to be a lot of
useful Web-GL client-side rendering ideas in there (before you
dismiss that possibility).

Not everything that a viewer needs to display is a volume, but when
it is ...

Also, when you need progressive (to lossless or lossy) transmission
or lower resolutions than native, have you considered JPIP (which
is usable whether or not the server side representation is J2K)?

David

Chris Hafey

Apr 2, 2014, 12:33:34 PM
On Wednesday, April 2, 2014 9:49:06 AM UTC-5, David Clunie wrote:
> By the way, have you guys looked at XTK and SliceDrop:
>
> http://slicedrop.com/
> https://github.com/xtk/X#readme

Wow this looks pretty cool - thanks for posting these links!

> No idea how robust these are, but there would seem to be a lot of
> useful Web-GL client-side rendering ideas in there (before you
> dismiss that possibility).

I do believe there are 3D use cases for small slice data sets like those used in the slicedrop examples. For those use cases, it may make sense to use browser side rendering. The majority of use cases for 3D have to deal with thin slice CT data which has large slice counts (otherwise why would it be thin?). For these use cases, you definitely need to render on the server. Given the need for server side rendering, it does raise the question of how valuable it is to do rendering in two different places with two different technologies. Ultimately I think it should be possible to do this, but I am not confident that it would be the default strategy.

> Not everything that a viewer needs to display is a volume, but when
> it is ...
>
> Also, when you need progressive (to lossless or lossy) transmission
> or lower resolutions than native, have you considered JPIP (which
> is usable whether or not the server side representation is J2K)?

My understanding is that JPIP is good technology but not widely implemented and has questionable patent related issues. I haven't kept up on this though so it may be different now - do you have any data about these two issues in particular? It wouldn't make sense to build a viewer that requires JPIP if there was nothing to connect to (perhaps this is a chicken and egg thing though?). Also it wouldn't make sense to use JPIP and then get us or our users sued over patent infringement.

Chris

David Clunie

Apr 2, 2014, 1:14:08 PM
The world is "definitely" flat and the universe "definitely" rotates
around it, just as browsers "definitely" cannot cope with large
data volumes client side.

Not sure the JPIP is any more or less encumbered by patents (valid
or otherwise) than various aspects of web viewers.

David

Fuli

Apr 2, 2014, 6:39:38 PM
On Wednesday, April 2, 2014 10:49:06 PM UTC+8, David Clunie wrote:
> By the way, have you guys looked at XTK and SliceDrop:
>
> http://slicedrop.com/
> https://github.com/xtk/X#readme
>
> No idea how robust these are, but there would seem to be a lot of
> useful Web-GL client-side rendering ideas in there (before you
> dismiss that possibility).

Thanks for the reminder, David, I almost forgot about it!

Yes, XTK was the first WebGL viewer in the medical imaging field.

But XTK/SliceDrop can't cope with complex DICOM files that use JPEG/JPEG 2000 encoding and meta-data, so it only reads the pixel data and displays it as a texture in WebGL.

To implement a real browser DICOM viewer, we have to cope with encoding and decoding the data transferred over the web. After XTK, the Mozilla folks implemented a PDF viewer in JavaScript, and a vital part of it is a JPEG/JPEG 2000 decoder written in JavaScript.

Aaron Boxer

Apr 2, 2014, 9:29:53 PM
On Wednesday, April 2, 2014 6:39:38 PM UTC-4, Fuli wrote:
> On Wednesday, April 2, 2014 10:49:06 PM UTC+8, David Clunie wrote:
> > By the way, have you guys looked at XTK and SliceDrop:
> >
> > http://slicedrop.com/
> > https://github.com/xtk/X#readme
> >
> > No idea how robust these are, but there would seem to be a lot of
> > useful Web-GL client-side rendering ideas in there (before you
> > dismiss that possibility).
>
> Thanks for the reminder, David, I almost forgot about it!
>
> Yes, XTK was the first WebGL viewer in the medical imaging field.
>
> But XTK/SliceDrop can't cope with complex DICOM files that use JPEG/JPEG 2000 encoding and meta-data, so it only reads the pixel data and displays it as a texture in WebGL.
>
> To implement a real browser DICOM viewer, we have to cope with encoding and decoding the data transferred over the web. After XTK, the Mozilla folks implemented a PDF viewer in JavaScript, and a vital part of it is a JPEG/JPEG 2000 decoder written in JavaScript.

By the beard of Zeus!! You're right. Here it is, a JPEG 2000 decoder in JavaScript:

https://github.com/mozilla/pdf.js/blob/master/src/core/jpx.js


Aaron Boxer

Apr 2, 2014, 9:33:48 PM
Hi David, welcome to the thread...

On Wednesday, April 2, 2014 10:49:06 AM UTC-4, David Clunie wrote:
> By the way, have you guys looked at XTK and SliceDrop:
>
> http://slicedrop.com/
> https://github.com/xtk/X#readme
>
> No idea how robust these are, but there would seem to be a lot of
> useful Web-GL client-side rendering ideas in there (before you
> dismiss that possibility).

I found these demos really cool, but I am not sure how they would measure up in a real-world situation. This is definitely an interesting option to consider, though.


> Also, when you need progressive (to lossless or lossy) transmission
> or lower resolutions than native, have you considered JPIP (which
> is usable whether or not the server side representation is J2K)?

Good idea. This is one of the nice benefits of j2k over other compression algos.




Aaron Boxer

Apr 2, 2014, 9:38:58 PM
On Wednesday, April 2, 2014 1:14:08 PM UTC-4, David Clunie wrote:
> The world is "definitely" flat and the universe "definitely" rotates
> around it, just as browsers "definitely" cannot cope with large
> data volumes client side.

From playing around with large studies using Oviyam 2 and Chrome, I would tend to agree with Chris that we are not there yet in terms of browser performance and memory management. Of course, this might be down to the implementation I am using. And 64-bit browsers are coming into the mainstream now, so memory will become less of an issue in time. I think we will know fairly early whether our viewer will be able to handle large volumes.

And, of course, to support large volumes on mobile devices, you will need server side rendering.

Aaron Boxer

Apr 3, 2014, 12:05:47 AM
On Wednesday, April 2, 2014 10:49:06 AM UTC-4, David Clunie wrote:
> By the way, have you guys looked at XTK and SliceDrop:
>
> http://slicedrop.com/
> https://github.com/xtk/X#readme

Well, I tried this out on an OsiriX sample image, the PHENIX CT data set: uncompressed @ 120 MB.

http://www.osirix-viewer.com/datasets/


On the latest Chrome, loading this into SliceDrop used about 900 MB of memory, performance was bad, and image quality was terrible. So, unfortunately, this is just a toy at the moment :( I think a more promising client-side solution at this time is an NaCl Chrome native plugin wrapping Voreen or VTK.

Chris Hafey

Apr 3, 2014, 11:16:07 AM
To help put some structure on this discussion, I created a GitHub repo for this viewer and a Trello board with a number of todos. I even came up with a reasonable name for this project - uView. The u stands for "universal" as well as "you". Not bad for a non-marketing guy, eh? Anyway, here are the links:

github:
https://github.com/chafey/uView

trello:
https://trello.com/b/D0uNwlHT/uview

Anyone is welcome to help and there are even tasks that don't require any programming. For example, it would be great to build a list of which products support JPIP. Feel free to add tasks that you think need to be done as well.

If you think this project is important, please consider contributing in some way!

Aaron Boxer

Apr 3, 2014, 11:51:28 AM
On Thursday, April 3, 2014 11:16:07 AM UTC-4, Chris Hafey wrote:
> To help put some structure on this discussion, I created a GitHub repo for this viewer and a Trello board with a number of todos. I even came up with a reasonable name for this project - uView. The u stands for "universal" as well as "you". Not bad for a non-marketing guy, eh? Anyway, here are the links:

Love it. But I was hoping for: UNiversal Imager BROWser => UNIBROW

All kidding aside, I am really excited about this project. I think we should announce this on various sympathetic forums such as Mirth, and Dcm4chee, to help build momentum.

Chris Hafey

Apr 3, 2014, 1:38:24 PM
On Thursday, April 3, 2014 10:51:28 AM UTC-5, Aaron Boxer wrote:
>
> All kidding aside, I am really excited about this project. I think we should announce this on various sympathetic forums such as Mirth, and Dcm4chee, to help build momentum.

Before we can throw people at this project we need a much clearer vision so people can decide if this project is worth their time to begin with. Assuming they believe it is worth their time and have the time to contribute, they will need a structure to work within. We are far short on both of these right now so I suggest we focus our efforts in these areas:
1) What is it that we are trying to build? We need consensus beyond "Pure web DICOM Client". Vision statement, requirements, use cases are all helpful here
2) How are we going to get there? Roadmap, tasks, etc

I did make a list of the "key features" on the github site. How about if those interested review those and provide feedback on this thread so we can try and get some consensus around what it is we are going to do. It is just as important to decide what it isn't too - so brainstorm away!

Once we get past this initial state we can think about setting up a conference call over skype or google hangout to work through some of these issues in real time. Once we have a structure in place, we can make a strong push to recruit people to it.

Aaron Boxer

Apr 3, 2014, 2:54:20 PM
Good plan. Realistically, it's going to be you and me working on this until we reach the Minimum Viable Product stage. I've found through experience that for every 10 people who show interest in an open source project, perhaps one ends up contributing.


The MVP could be as simple as: search and display single instance jpeg via WADO. This would already be quite useful, if delivered over the web with no
installation required.

Over the long run, I have big plans for this project. It has the potential to dominate the field, similar to how MIRTH is the number one choice for HL7 integration engines, or dcm4chee is the obvious choice for a PACS server. I would view MIRTH as a model open source medical project, although we will see if things change since their purchase last year by NextGen.

Fuli

Apr 3, 2014, 7:12:11 PM
> The MVP could be as simple as: search and display single instance jpeg via WADO. This would already be quite useful, if delivered over the web with no
> installation required.

Yes! I believe such work may have already been started by someone, somewhere.

> Over the long run, I have big plans for this project. It has the potential to dominate the field, similar to how MIRTH is the number one choice for HL7 integration engines, or dcm4chee is the obvious choice for a PACS server. I would view MIRTH as a model open source medical project, although we will see if things change since their purchase last year by NextGen.

Great! The long-term vision is so exciting!

Personally, I consider the short-term objectives to be as follows: a server prototype running on Amazon; the browser can search and pull down DICOM files from the server using a Web Worker and a WebSocket, then decode the pixel data and parse the meta-data in real time, store some meta information in IndexedDB, and finally display the pixel data with WebGL.

Fuli

Apr 3, 2014, 7:15:02 PM
Great! Have a good start.

Aaron Boxer

Apr 3, 2014, 7:55:00 PM
So, I found a few tools to benchmark the javascript j2k decoder.

- img2pdf will take any jpeg 2000 image and convert it to pdf.
- openjpeg has a large collection of j2k images for testing
- mozilla hosts a trial of their js pdf reader

So, I tried this out on an openjpeg dataset, and the results were very promising. Performance was roughly equivalent to openjpeg 2 decoding, which is native.

Given a stack of j2k instances, if we launch a web worker decompression thread in the background, and store decompressed images to indexeddb disk, then I think it would be quite feasible to pull DICOM, parse, decompress and view in a reasonable amount of time on new hardware.
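A sketch of the main-thread side of that pipeline is below; decodeWorker.js, the message shapes, and the cache/display helpers are illustrative assumptions, and the actual J2K decode would live inside the worker:

// A small pool of workers receives compressed frames, decodes them, and posts
// back raw pixel buffers, which are then cached and handed to the viewport.
var workers = [];
for (var i = 0; i < 4; i++) {
  var w = new Worker('decodeWorker.js');
  w.onmessage = function (e) {
    // e.data: { sopInstanceUid, frame, pixelData } with pixelData an ArrayBuffer
    cacheDecodedFrame(e.data);   // assumed IndexedDB/memory cache helper
    displayIfVisible(e.data);    // assumed viewport update helper
  };
  workers.push(w);
}

var next = 0;
function decodeInBackground(sopInstanceUid, frame, compressedBytes) {
  // Round-robin across the pool; transfer the buffer to avoid a copy.
  workers[next].postMessage(
    { sopInstanceUid: sopInstanceUid, frame: frame, bytes: compressedBytes },
    [compressedBytes]
  );
  next = (next + 1) % workers.length;
}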

One nice thing is that, if we have problems reading a particular j2k image, we can just convert it to pdf and submit a bug report to Mozilla :) So, I think we are on easy street as far as decompression goes. The pdf viewer also reads jpeg, btw.




Chris Hafey

Apr 8, 2014, 7:16:19 PM
For those interested in this, I have a very primitive study viewer built using the cornerstone libraries, you can check it out here:

http://chafey.github.io/cornerstoneDemo/

It doesn't look like anyone has picked up any tasks on the uView Trello - if you are interested in helping there is plenty to do!

Aaron Boxer

Apr 9, 2014, 4:20:31 PM
On Tuesday, April 8, 2014 7:16:19 PM UTC-4, Chris Hafey wrote:
> For those interested in this, I have a very primitive study viewer built using the cornerstone libraries, you can check it out here:
>
> http://chafey.github.io/cornerstoneDemo/

Cool. I like the black theme.


> It doesn't look like anyone has picked up any tasks on the uView Trello - if you are interested in helping there is plenty to do!

I don't think this is a very high traffic forum, and there don't seem to be a lot of hackers here; you might try the Mirth/dcm4chee forums for more recruits.

ivmartel

Apr 10, 2014, 5:34:53 AM
Hi, I'm Yves, the author of https://github.com/ivmartel/dwv. I just found this thread and the other one about the sorry state of open source medical software.

The cornerstoneDemo looks pretty cool, congrats! We have very similar visions, let's see where all of this leads us!

About JPEG2000, I ended up using https://github.com/kripken/j2k.js, but it is true the decoding on the client side takes time... I haven't checked the latest devs of the mozilla decoder, they seem promising.

Chris Hafey

Apr 11, 2014, 11:35:36 AM
On Thursday, April 10, 2014 4:34:53 AM UTC-5, ivmartel wrote:
> Hi, I'm Yves, the author of https://github.com/ivmartel/dwv. I just found this thread and the other one about the sorry state of open source medical software.
>
> The cornerstoneDemo looks pretty cool, congrats! We have very similar visions, let's see where all of this leads us!

Hi Yves,
Nice to meet you. I agree that our visions are similar and you have done a fantastic job on DWV. Perhaps there is a way for us to work together, I'll send you an email.

Fuli

Jul 15, 2014, 8:34:35 PM



http://techcrunch.com/2014/07/15/medxts-platform-brings-medical-imaging-in-line-with-todays-cloud-technology/



On Tuesday, April 1, 2014 11:18:13 AM UTC-7, Aaron Boxer wrote:
> Folks,
>
> Inspired by Chris Hafey's recently launched HTML5 viewer and js DICOM parser,
> I am wondering out loud if a complete open source DICOM viewer solution is possible using just web tools? And, my feeling is that it *is* possible, given the recent advances in both DICOM and web technologies.
>
> So, here is my ideal stack:
>
> 1) cornerstone 2D viewer with all of the usual tools, CINE, etc.
> 2) PACS search page using QIDO
> 3) key image creation using STOW
> 3) WADO streaming to local disk using web workers and IndexedDB local storage
> 4) real time decompression of encapsulated pixel data using asm.js and WebCL
> 5) MPR using asm.js and WebGL
> 6) Volume Rendering using asm.js, WebCL and WebGL
>
> If enough people pitch in, we could be leaving the dark period of abandonware,
> hobbled "open source" and broken promises behind, and entering a golden age.
>
> Aaron

jfpambrun

Nov 14, 2014, 12:03:42 PM
On Tuesday, April 1, 2014 2:18:13 PM UTC-4, Aaron Boxer wrote:
> Folks,
>
> Inspired by Chris Hafey's recently launched HTML5 viewer and js DICOM parser,
> I am wondering out loud if a complete open source DICOM viewer solution is possible using just web tools? And, my feeling is that it *is* possible, given the recent advances in both DICOM and web technologies.
>
> So, here is my ideal stack:
>
> 1) cornerstone 2D viewer with all of the usual tools, CINE, etc.
> 2) PACS search page using QIDO
> 3) key image creation using STOW
> 3) WADO streaming to local disk using web workers and IndexedDB local storage
> 4) real time decompression of encapsulated pixel data using asm.js and WebCL
> 5) MPR using asm.js and WebGL
> 6) Volume Rendering using asm.js, WebCL and WebGL
>
> If enough people pitch in, we could be leaving the dark period of abandonware,
> hobbled "open source" and broken promises behind, and entering a golden age.
>
> Aaron

I am a bit late to the discussion, but I have been working on 4) using Mozilla's J2K implementation with multiple web workers and Chris Hafey's framework. On my 3rd gen i7 computer using Chrome 38, I can download and decode a CT series with 750 images compressed at 10:1 in about 18 seconds. The first slice is usually displayed in under 500 ms, and the user can immediately start to navigate the stack. In my experience, it is not possible to "outrun" the loading process using the mouse wheel.

I've done some benchmarks with 4 parallel workers on my university's internet connection: 18 seconds are spent downloading (20 ms per slice on average) while 52 seconds are spent decoding (70 ms per slice on average). But like I said, the user sees only 18 seconds.

I have also noticed that reducing the compression ratio not only slows the downloading process, but also makes decompression take longer. I haven't tried with large CRs, but I assume JS is still a bit slow for that use case. I suppose using JPIP could help in both cases.

I have a small demo at http://jpx.jfpb.net/, but it won't be available for long.

JF

Leonardo M. Ramé

Mar 31, 2015, 9:07:03 AM
Hi Jean, I tried your demo and it's the fastest I have tested so far.

On desktop PCs it works really fast, but I tried it on a low end Android phone and it's unusable until the images are loaded (something related to the 4 worker threads?).

Can you post a similar example using only one thread, so I can compare the loading speed?

Jean-Francois Pambrun

Apr 1, 2015, 12:33:14 PM
I've modified http://jpx.jfpb.net/ to use only one web worker, but I don't have a low end device to test on. Please let me know when you are done so I can revert the change.

I have also made the code available under a BSD licence :
https://github.com/jpambrun/dcmj2k-streaming-viewer

JF