Processing on GPU


Thomas Kilvær

Jan 21, 2018, 2:54:25 PM
to QuPath users
Is it possible to offload some tasks to the GPU when using QuPath?

micros...@gmail.com

Jan 21, 2018, 5:28:32 PM
to QuPath users
I have only just gotten started using CUDA for deep learning, and Pete will probably have a better answer (and you might know better than I!), but I'm not sure that much of what QuPath currently does would be significantly sped up by using the GPU.
As a reference, see the first answer here: https://stackoverflow.com/questions/22866901/using-java-with-nvidia-gpus-cuda

I could see it being useful for the neural networks option in the classifier, and maybe some of the other classifiers, but I'm not sure it would help as much for things like cell detection.  I could also be way off in my interpretation, or maybe you have some other way of using it in mind :)
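For what it's worth, the options that StackOverflow answer surveys are mostly Java wrappers around CUDA/OpenCL (JCuda, JOCL, Aparapi and the like). Purely as a hypothetical sketch of what offloading a simple per-pixel operation could look like - this uses Aparapi, which translates Java bytecode to OpenCL and falls back to a thread pool when no GPU is available, and it has nothing to do with QuPath's actual code (the array names and weights are made up):

    import com.aparapi.Kernel;
    import com.aparapi.Range;

    public class PerPixelGpuExample {
        public static void main(String[] args) {
            final int n = 1024 * 1024;            // pixels in one tile
            final float[] r = new float[n];       // red channel (fill from an image)
            final float[] g = new float[n];       // green channel
            final float[] b = new float[n];       // blue channel
            final float[] out = new float[n];     // combined per-pixel measurement

            Kernel kernel = new Kernel() {
                @Override
                public void run() {
                    int i = getGlobalId();        // one work-item per pixel
                    // Arbitrary weights, just to show an elementwise operation
                    out[i] = 0.2f * r[i] + 0.7f * g[i] + 0.1f * b[i];
                }
            };
            kernel.execute(Range.create(n));      // runs on the GPU via OpenCL if available,
            kernel.dispose();                     // otherwise falls back to a Java thread pool
        }
    }

(Older Aparapi releases use the com.amd.aparapi package instead of com.aparapi.)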

Thomas Kilvær

Jan 23, 2018, 4:15:27 AM
to QuPath users
Thanks for the answer. I am by no means a programmer, but I saw that at least part of the OpenCV library can be run on the GPU using CUDA (although this information might be outdated?). According to your link, it could perhaps speed up computation when doing cell detection and using local smoothing/pixel information, and when calculating image features, as these must be some sort of floating point operations on matrices.

Although... to enable this, a rewrite of the OpenCV interface is probably needed? Sadly, that is beyond my capabilities.

micros...@gmail.com

Jan 23, 2018, 1:24:20 PM
to QuPath users
I looked into it a little more, and I think it is likely that the cell detection could actually be sped up using GPU processing, since it appears to be threshold-based.  https://www.sciencedirect.com/science/article/pii/S1361841514001819 has some information on segmentation methods.
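If the bottleneck really is per-pixel, threshold-like work, it is at least the kind of operation that maps naturally onto a GPU, since every pixel can be processed independently. A purely hypothetical sketch in the same Aparapi style as above (the threshold value and array names are made up, and this is not QuPath's actual detection code):

    import com.aparapi.Kernel;
    import com.aparapi.Range;

    public class GpuThresholdExample {
        public static void main(String[] args) {
            final int n = 4096 * 4096;            // e.g. one large image tile
            final float[] pixels = new float[n];  // input intensities (fill from an image)
            final int[] mask = new int[n];        // output binary mask (0 or 1)
            final float threshold = 0.5f;         // made-up threshold value

            Kernel kernel = new Kernel() {
                @Override
                public void run() {
                    int i = getGlobalId();
                    // Each pixel is independent, so the whole image can be thresholded in parallel
                    if (pixels[i] > threshold) {
                        mask[i] = 1;
                    } else {
                        mask[i] = 0;
                    }
                }
            };
            kernel.execute(Range.create(n));
            kernel.dispose();
        }
    }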

Unfortunately, I don't know how easy it is to check whether someone's GPU is set up and free for use with such processing, or how generally applicable you could make it.  Setting up CUDA for playing with deep learning models took a bit of doing, for example, and still occasionally runs into problems with memory on the video card not being released when a process is interrupted (which may be Jupyter-specific).  TensorFlow was designed to let you swap things between CPU and GPU nicely, but that is all code you are probably writing yourself and are in control of.  I'm not sure how OpenCV handles it, but trying to get things working smoothly on Mac, Linux and Windows, across various types of GPUs, might be a bit trickier than a few lines of code :(
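On the "is a GPU even available" question, a best-effort check is at least possible from Java. As a rough sketch only - this assumes JCuda (jcuda.org), which is designed to mirror the CUDA driver API, and the exact class/method names here should be double-checked against its documentation:

    import jcuda.driver.JCudaDriver;

    public class GpuCheck {
        // Returns the number of visible CUDA devices, or 0 if the native
        // CUDA/JCuda libraries are missing or initialization fails.
        public static int cudaDeviceCount() {
            try {
                JCudaDriver.setExceptionsEnabled(true);
                JCudaDriver.cuInit(0);                // initialize the CUDA driver API
                int[] count = new int[1];
                JCudaDriver.cuDeviceGetCount(count);  // query the device count
                return count[0];
            } catch (Throwable t) {
                // UnsatisfiedLinkError, CUDA errors, etc. -> assume no usable GPU
                return 0;
            }
        }

        public static void main(String[] args) {
            int devices = cudaDeviceCount();
            System.out.println(devices > 0
                    ? devices + " CUDA device(s) found"
                    : "No usable CUDA device - falling back to the CPU");
        }
    }

Whether a device is actually free, rather than merely present, is another matter entirely.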

And while I think it would be neat, I wouldn't want to try to troubleshoot the problems that might come up from people trying to use it :)

Pete

Jan 23, 2018, 2:04:42 PM
to QuPath users
If it's the cell detection you'd like to speed up, it would be possible from the point of view of how the algorithm works... but it would take rather a lot of effort.

As it stands, the primary cell detection method doesn't actually use OpenCV - rather, it uses ImageJ functionality for things like filtering.  The OpenCV implementations would almost certainly be faster, and may also be able to use the GPU, but in the end those parts should contribute a relatively small amount to the total processing time.
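To illustrate the kind of swap that would involve, here is a rough sketch of an equivalent Gaussian blur via ImageJ and via OpenCV. The OpenCV part uses the official org.opencv Java bindings purely for illustration - QuPath bundles OpenCV differently (through JavaCPP), so the package names there differ - and the image size and sigma are made up:

    import ij.process.FloatProcessor;
    import org.opencv.core.Core;
    import org.opencv.core.CvType;
    import org.opencv.core.Mat;
    import org.opencv.core.Size;
    import org.opencv.imgproc.Imgproc;

    public class BlurComparison {
        public static void main(String[] args) {
            int width = 2048, height = 2048;   // made-up tile size
            double sigma = 2.0;                // made-up smoothing sigma

            // ImageJ route (roughly the style of filtering the current detection relies on)
            FloatProcessor fp = new FloatProcessor(width, height);
            fp.blurGaussian(sigma);

            // OpenCV route (CPU here; some OpenCV builds can also run filters on the GPU)
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);  // load the OpenCV native library
            Mat src = new Mat(height, width, CvType.CV_32F);
            Mat dst = new Mat();
            Imgproc.GaussianBlur(src, dst, new Size(0, 0), sigma);  // kernel size derived from sigma
        }
    }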

Probably the slowest bits involve a watershed transform and morphological reconstruction... which I implemented myself in Java, because neither OpenCV nor ImageJ had suitable implementations.  I did quite a lot of optimization (albeit not using the GPU), but they are inherently quite computationally intensive.  Because morphological reconstruction is only used for background estimation, you can skip it by setting the background radius to <= 0 (in which case no background subtraction will be applied).  I suspect this is the area where there is most to gain in terms of performance.
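For anyone unfamiliar with the operation, here is a deliberately naive sketch of greyscale reconstruction by dilation, just to show why it is expensive: the marker image is repeatedly dilated (a 3x3 maximum filter) and clamped under the mask until nothing changes, which can mean many full passes over the image. This is emphatically not QuPath's implementation - that one is considerably more optimized - but the underlying idea is the same:

    public class ReconstructionSketch {
        // Naive greyscale morphological reconstruction by dilation.
        // marker and mask are row-major images of size w x h, with marker <= mask pointwise.
        static float[] reconstruct(float[] marker, float[] mask, int w, int h) {
            float[] current = marker.clone();
            boolean changed = true;
            while (changed) {                    // may require many passes over the image
                changed = false;
                float[] next = new float[w * h];
                for (int y = 0; y < h; y++) {
                    for (int x = 0; x < w; x++) {
                        // 3x3 neighborhood maximum (dilation)
                        float max = current[y * w + x];
                        for (int dy = -1; dy <= 1; dy++) {
                            for (int dx = -1; dx <= 1; dx++) {
                                int yy = y + dy, xx = x + dx;
                                if (yy >= 0 && yy < h && xx >= 0 && xx < w)
                                    max = Math.max(max, current[yy * w + xx]);
                            }
                        }
                        // Clamp under the mask so the reconstruction never exceeds it
                        float value = Math.min(max, mask[y * w + x]);
                        next[y * w + x] = value;
                        if (value != current[y * w + x])
                            changed = true;
                    }
                }
                current = next;
            }
            return current;
        }
    }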

Nevertheless, I agree that I wouldn't want to try to troubleshoot the problems that might come up :) From the little bit of GPU programming I've done, it takes an awful lot of effort (especially if the result should be cross-platform), and not all users would benefit.  There is also some cost associated with transferring data to and from the GPU, plus the fact that other applications may be competing for limited GPU resources (including QuPath's own user interface).  For those reasons I think that, rather than adapting any of the existing algorithms to use the GPU, it would be better to focus on creating new algorithms - and perhaps write them using libraries that already include GPU acceleration.

Pete

Jan 23, 2018, 2:06:54 PM
to QuPath users
But if there is anything in particular that needs acceleration, it could certainly be added as a feature request.  There might be other ways to achieve a good speedup.