If it's the cell detection you'd like to speed up, it would be possible from the point of view of how the algorithm works... but rather a lot of effort.
As it stands, the primary cell detection method doesn't actually use OpenCV - rather it uses ImageJ functionality for things like filtering. The OpenCV implementations would almost certainly be faster, and might also use the GPU, but in the end those parts should contribute a relatively small amount to the total processing time.
Probably the slowest bits involve a watershed transform and morphological reconstruction... which I implemented myself in Java, because neither OpenCV nor ImageJ had suitable implementations. I did quite a lot of optimization (albeit not using the GPU), but they are inherently quite computationally intensive. Because morphological reconstruction is only used for background estimation, you can skip it by setting the background radius to be <= 0 (in which case no background subtraction will be applied). I suspect this is the area where there is most to gain in terms of performance.
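To give a sense of why morphological reconstruction is expensive: the textbook definition is to repeatedly dilate a "marker" image while clamping it to a "mask" image until nothing changes. Below is a minimal, deliberately naive Java sketch of that definition - it is *not* QuPath's actual (optimized) implementation, and the class and method names are made up for illustration - but it shows how the cost scales with image size times the number of iterations, which is why even an optimized CPU version remains heavy on large images.

```java
public class Reconstruction {

    /**
     * Naive grayscale morphological reconstruction by dilation.
     * Repeatedly applies a 3x3 dilation (local maximum) to the marker,
     * clamping the result to the mask, until the image stops changing.
     * Assumes marker <= mask everywhere; images are row-major w x h arrays.
     */
    static float[] reconstruct(float[] marker, float[] mask, int w, int h) {
        float[] cur = marker.clone();
        boolean changed = true;
        while (changed) {                       // one full pass per iteration
            changed = false;
            float[] next = new float[w * h];
            for (int y = 0; y < h; y++) {
                for (int x = 0; x < w; x++) {
                    float v = cur[y * w + x];
                    // 3x3 dilation: take the max over the neighborhood
                    for (int dy = -1; dy <= 1; dy++) {
                        for (int dx = -1; dx <= 1; dx++) {
                            int yy = y + dy, xx = x + dx;
                            if (yy >= 0 && yy < h && xx >= 0 && xx < w)
                                v = Math.max(v, cur[yy * w + xx]);
                        }
                    }
                    // Clamp to the mask so the result never exceeds it
                    v = Math.min(v, mask[y * w + x]);
                    if (v != cur[y * w + x])
                        changed = true;
                    next[y * w + x] = v;
                }
            }
            cur = next;
        }
        return cur;
    }
}
```

In the worst case the number of iterations grows with the image diameter, so this naive form is O(pixels × diameter); practical implementations (e.g. queue-based scanning) do far better, but the workload is still substantial - which is why skipping it entirely via the background radius setting can be the biggest single win.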
Nevertheless, I agree that I wouldn't want to try to troubleshoot the problems that might come up :) From the little bit of GPU programming I've done, it takes an awful lot of effort (especially if the result should be cross-platform), and not all users would benefit. There is also some cost in transferring data to and from the GPU, and other applications may be competing for limited GPU resources (including QuPath's own user interface). For those reasons I think that rather than adapting any of the existing algorithms to use the GPU, it would be better to focus on creating new algorithms - and perhaps write them using libraries that already include GPU acceleration.