By the Beard of Zeus!!

Aaron Boxer

Apr 2, 2014, 9:57:53 PM
to open...@googlegroups.com
Something astonishing: a JPEG 2000 decoder written in JavaScript:

https://github.com/mozilla/pdf.js

This is part of a Mozilla project, pdf.js, to write a PDF reader in pure JavaScript. And since PDF supports JPEG 2000, they had to write a parser.

Bob Friesenhahn

Apr 3, 2014, 9:12:54 PM
to open...@googlegroups.com
How does its performance compare with OpenJPEG? How does its
performance compare with OpenJPEG extended with OpenCL?

Bob
--
Bob Friesenhahn
bfri...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/

Aaron Boxer

Apr 3, 2014, 10:41:15 PM
to open...@googlegroups.com
Hi Bob,


> How does its performance compare with OpenJPEG? How does its performance compare with OpenJPEG extended with OpenCL?

I did a quick test on a single image from the OpenJPEG test set, Bretagne something or other, and I was very favourably impressed.
It was on par with my Java viewer opening the image with OpenJPEG 2 via JNI.

I can get numbers for you if you like.  

Another interesting fact: a tool like img2pdf (https://github.com/josch/img2pdf) will take a compressed JP2
file and wrap it up as a PDF file (without decompressing and re-compressing). So, if a particular file is not being decoded
correctly by this parser, one simply needs to wrap it as a PDF and submit it to Mozilla as a bug report. Problem solved :)
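
The wrapping is a one-liner from the shell (the file name here is just illustrative):

    img2pdf Bretagne1.jp2 -o Bretagne1.pdf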

But... I did notice that there are over 4000 closed issues and 300 open issues on this PDF viewer, so the quality of the code
may not be great.

As for OpenCL, there is no OpenCL extension of OpenJPEG at the moment. I am working on a separate prototype library, but I am only just beginning, so I can't give numbers yet.

Cheers,
Aaron

Bob Friesenhahn

Apr 6, 2014, 11:46:18 AM
to open...@googlegroups.com
On Thu, 3 Apr 2014, Aaron Boxer wrote:
>
> As for OpenCL, there is no OpenCL extension of OpenJPEG at the moment. I am working on a separate prototype
> library, but I am only beginning, so no number is possible yet. 

Something odd I have noticed about OpenCL acceleration in various free
software projects is that in the end it is very rare to see any useful
performance results posted. Quite often the implementation code is
then lost, discarded, or never successfully enabled in builds. Thus
far I am not finding any successes, where success means the feature is
available to end users and delivers performance benefits substantially
beyond what the host CPU already provides.

AMD's work on adding OpenCL acceleration to ImageMagick
(https://www.youtube.com/watch?v=ET8YbFXeJRI) is a good example of
this. The work sounded interesting but where did it go? Were the
performance improvements mentioned already available via other means
(e.g. on AMD's own existing CPUs)?

With wonders like a JPEG 2000 decoder appearing in JavaScript, I hope
that OpenCL implementations in free software projects will likewise
see the light of day.

Aaron Boxer

Apr 6, 2014, 9:44:12 PM
to open...@googlegroups.com
Hi Bob,


> Something odd I have noticed about OpenCL acceleration in various free software projects is that in the end it is very rare to see any useful performance results posted. Quite often the implementation code is then lost, discarded, or never successfully enabled in builds. Thus far I am not finding any successes, where success means the feature is available to end users and delivers performance benefits substantially beyond what the host CPU already provides.

Well, I can't speak for other projects, but my prototype is based on a GPL CUDA project, and they post the following benchmarks for compression:

http://apps.man.poznan.pl/trac/jpeg2k
Benchmarks should usually be taken with a large grain of salt; they often are constructed to make the benchmarker's preferred library
look good. And it is still rare for the benchmarker to publish the detailed machine specs and the code and data for the benchmark, so I'm not sure how much these numbers really say about actual performance. But, from working with the code, I can see that
the original implementers have barely scratched the surface of what a GPU can actually do for performance. So, my prototype should be quite a bit faster once I start optimizing.

For OpenJPEG, unfortunately, performance has been dropping with each version, and nobody knows why, nor does anyone seem to be trying to find out:



> AMD's work on adding OpenCL acceleration to ImageMagick (https://www.youtube.com/watch?v=ET8YbFXeJRI) is a good example of this. The work sounded interesting but where did it go? Were the performance improvements mentioned already available via other means (e.g. on AMD's own existing CPUs)?

For small images, I think the CPU will win over the GPU, because of the memory bottleneck in transferring to the GPU and back to main memory.
Although NVidia just announced plans to eliminate this bottleneck with something they developed with IBM: NVLink. Supposed to be
available in 2016. But for large images, the GPU is going to flatten the CPU.
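
To put rough numbers on the transfer cost, here is a back-of-envelope sketch in JS; the ~6 GB/s effective PCIe figure and everything else here are assumptions for illustration, not measurements:

    // Round-trip PCIe transfer time for an uncompressed 8-bit RGB image.
    var pcieBytesPerSec = 6e9;   // assumed effective PCIe bandwidth, ~6 GB/s
    var bytesPerPixel   = 3;     // 8-bit RGB

    function transferMs(megapixels) {
      var bytes = megapixels * 1e6 * bytesPerPixel;
      return 2 * bytes / pcieBytesPerSec * 1000;  // to the GPU and back, in ms
    }

    console.log(transferMs(1));    // ~1 ms for a 1 MP image
    console.log(transferMs(100));  // ~100 ms for a 100 MP image

So the round trip is a tax proportional to image size, and the GPU only wins once its compute advantage on the wavelet transform and block coding is large enough to cover it.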

Check out the Comprimato encoding frame rates (CUDA and highly optimized for NVidia hardware):

http://www.comprimato.com/

> With wonders like a JPEG 2000 decoder appearing in JavaScript, I hope that OpenCL implementations in free software projects will likewise see the light of day.


Yes, JS is getting pretty fast these days, with increasingly sophisticated browser JITs. The decoder I mentioned could get even faster with
WebCL, a new JS binding to OpenCL. There is a Firefox WebCL plugin right now. This is another project I would like to tackle; a little sketch of what the API looks like is below.
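
Just to give a flavour (a minimal sketch against the Khronos WebCL draft API; the kernel and every name here are made up for illustration, nothing from the pdf.js decoder):

    // Run a trivial per-pixel kernel on an image buffer via WebCL.
    var src =
      "__kernel void darken(__global uchar* pix, uint n) {" +
      "  size_t i = get_global_id(0);" +
      "  if (i < n) pix[i] = pix[i] >> 1;" +  // toy per-pixel operation
      "}";

    var ctx     = webcl.createContext();       // default platform/device
    var program = ctx.createProgram(src);
    program.build();                           // compile for that device
    var kernel  = program.createKernel("darken");

    var pixels = new Uint8Array(1024 * 1024);  // host-side image data
    var buf    = ctx.createBuffer(webcl.MEM_READ_WRITE, pixels.byteLength);
    var queue  = ctx.createCommandQueue();

    queue.enqueueWriteBuffer(buf, false, 0, pixels.byteLength, pixels);
    kernel.setArg(0, buf);
    kernel.setArg(1, new Uint32Array([pixels.length]));
    queue.enqueueNDRangeKernel(kernel, 1, null, [pixels.length], null);
    queue.enqueueReadBuffer(buf, false, 0, pixels.byteLength, pixels);
    queue.finish();                            // block until complete

The interesting kernels for a decoder would of course be the inverse wavelet transform and the block decoder, not a toy like this.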

Will keep you posted,
Aaron



Bob Friesenhahn

Apr 9, 2014, 10:38:40 AM
to open...@googlegroups.com
On Sun, 6 Apr 2014, Aaron Boxer wrote:
>
> Well, I can't speak for other projects, but my prototype is based on a GPL CUDA project, and they post the following benchmarks
> for compression:
>
> http://apps.man.poznan.pl/trac/jpeg2k

These benchmarks seem to be three years old. Hardware is continually
changing and three years is a long time in hardware years.

The "Core 2 Duo" CPU is a very poor performer compared with today's
available CPUs.

Comparing against a "Core 2 Quad", I am seeing as much as a 16X real
performance gain on more modern CPUs.

> Benchmarks should usually be taken with a large grain of salt; they
> often are constructed to make the benchmarker's preferred library
> look good.

Of course. A common approach is to emphasize larger image sizes, which
make the results look better: larger images provide more opportunity
for parallel processing. What really matters is the image size used
for the particular application the user happens to be interested in.
For full-color images, 12 megapixels is already a pretty large image.
Many useful images will be only 1 or 2 megapixels, since this
represents what users can directly see on commonly available display
hardware.

> For small images, I think the CPU will win over the GPU, because of the memory bottleneck in transferring to the GPU and back to main
> memory.
> Although NVidia just announced plans to eliminate this bottleneck with something they developed with IBM: NVLink. Supposed to be
> available in 2016. But for large images, the GPU is going to flatten the CPU.

It is always necessary to get the data off disk and then back onto
disk. As the encoding/decoding gets faster, this becomes a more
serious concern. The CPU has the advantage that the data is closer to
it.

> Check out the Comprimato encoding frame rates (CUDA and highly
> optimized for NVidia hardware):
>
> http://www.comprimato.com/ 

The rates for the fastest GPUs are unrealistic for a whole system; the
bottleneck just gets moved elsewhere.
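
A quick sanity check on storage bandwidth (every figure here is assumed for illustration, not taken from any vendor's page):

    // Can the storage system even feed a fast GPU codec?
    var frameBytes = 2048 * 1080 * 3;          // uncompressed 2K RGB frame, ~6.6 MB
    var claimedFps = 400;                      // a hypothetical GPU codec rate
    var neededBps  = frameBytes * claimedFps;  // ~2.65 GB/s of raw frames
    var diskBps    = 150e6;                    // one spinning disk, ~150 MB/s
    console.log(neededBps / diskBps);          // ~17.7x what the disk can deliver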

Aaron Boxer

Apr 12, 2014, 1:53:16 PM
to open...@googlegroups.com
Hi Bob,
Thanks, you have made some interesting points. Would you be interested in creating a decent benchmark, with published code, hardware, and methodology, for testing JPEG 2000 performance and image quality? Then we would know for sure what the dealio really is.




Cheers,
Aaron


