I think this is a really interesting application, and a few possibilities come to mind.
As microscopyra suggested, the simpler the transformation, the easier life is. Translation only should be fine; any kind of rotation is more awkward, but tolerable. Both of those can be represented efficiently and applied dynamically, but if any non-rigid or local transformations are involved then it probably requires a completely separate alignment step that writes out 'perfectly' aligned whole slide images.
Under the assumption that, because it's the same tissue and scanner, we just need to deal with translation, the 'simple' approach might be this:
- Mark a single point on each image, at the centre of the exact same nucleus in each case.
- Mark one or more 'area' annotations (e.g. rectangles) on one image.
- With a script, determine the x & y shift from the matched points and apply it to each area annotation when transferring it to the other images.
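A real script would use QuPath's Groovy API, but the arithmetic behind those steps is just a subtraction and an offset. Here's a plain-Python sketch (all coordinates are invented values for illustration):

```python
# Minimal sketch: transfer a rectangle annotation between images using a
# single matched anchor point, assuming a translation-only transform.

def translation(anchor_src, anchor_dst):
    """x & y shift that maps the source anchor onto the destination anchor."""
    dx = anchor_dst[0] - anchor_src[0]
    dy = anchor_dst[1] - anchor_src[1]
    return dx, dy

def transfer_rect(rect, shift):
    """Apply the shift to a rectangle given as (x, y, width, height)."""
    x, y, w, h = rect
    dx, dy = shift
    return (x + dx, y + dy, w, h)

# The same nucleus marked on both images
anchor_a = (1520.0, 980.0)   # image A
anchor_b = (1475.0, 1012.0)  # image B

shift = translation(anchor_a, anchor_b)  # (-45.0, 32.0)

rect_a = (1200.0, 800.0, 300.0, 200.0)   # annotation drawn on image A
rect_b = transfer_rect(rect_a, shift)    # (1155.0, 832.0, 300.0, 200.0)
```

The same shift would be applied to every annotation being transferred, so only one anchor point per image is needed.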
If you think that would help then I can certainly help with that script. It could either be applied to all images in a project 'blindly', or to images that are all opened simultaneously in QuPath using multiple viewers - whichever approach you think would help most. The latter would make it quick to check that it has done the right thing, but having lots of images open at the same time can get pretty awkward.
(In the longer term, this could be built in to QuPath so that the Transfer last annotation command understands it - but it would require defining some kind of 'anchor point' annotation... and therefore be a slightly bigger change, and not possible through a script alone.)
The more elaborate approach I can think of is something like this:
- Figure out the transformation between images either using manual points as above, or automatically (OpenCV has a nice method for this that could be called from within QuPath; the main question is how well it can handle different staining patterns, and if it can be run at a high enough resolution)
- Create a QuPath 'ImageServer' that automatically applies color deconvolution to separate out the DAB (or AEC) staining, and 'tricks' QuPath into thinking that the deconvolved DAB for each image represents a single channel of a fluorescence image. Multiple channels would come from the multiple images, applying the transform dynamically. You might optionally also include the hematoxylin channel from one image (or more) too.
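On the first step: OpenCV's `estimateAffine2D` does this robustly, but the core idea is just a least-squares fit of an affine matrix to matched point pairs. A NumPy sketch (the point pairs are invented, and real use would add outlier rejection):

```python
import numpy as np

# Sketch: estimate a 2x3 affine transform from matched point pairs by
# least squares - the same idea as OpenCV's estimateAffine2D, minus the
# robust (RANSAC-style) outlier handling.

def fit_affine(src, dst):
    """Solve dst ~= A @ [x, y, 1] for a 2x3 affine matrix A."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    X = np.hstack([src, np.ones((len(src), 1))])   # N x 3
    coef, *_ = np.linalg.lstsq(X, dst, rcond=None) # 3 x 2
    return coef.T                                  # 2 x 3

# Matched points, e.g. clicked on corresponding nuclei in two images
src = [(100.0, 100.0), (400.0, 120.0), (250.0, 380.0)]
dst = [(112.0,  96.0), (412.0, 116.0), (262.0, 376.0)]  # pure shift (+12, -4)

A = fit_affine(src, dst)
# For a pure translation, the 2x2 linear part comes out as the identity
# and the last column is the shift.
```

If the fit comes back as (near) identity-plus-shift, that's good evidence the translation-only assumption holds for those images.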
To get reasonable performance in this case, the spatial transform has to be really fast - which it sounds like it should be here. If it is, the color deconvolution part should not be a problem. And then as far as QuPath is concerned, it should just 'think' it has a fluorescence image and handle it accordingly - with access to the same annotation and analysis tools.
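For the color deconvolution part, the per-pixel math is cheap: convert RGB to optical density and unmix with a stain matrix. A NumPy sketch, using the published Ruifrok & Johnson H-DAB stain vectors (QuPath lets you refine these per image, so treat the numbers as placeholders):

```python
import numpy as np

# Sketch of color deconvolution: RGB -> optical density -> stain densities.
# Stain vectors are the standard Ruifrok & Johnson H-DAB values.

h   = np.array([0.65, 0.70, 0.29])  # hematoxylin
dab = np.array([0.27, 0.57, 0.78])  # DAB
res = np.cross(h, dab)              # residual, orthogonal to both

M = np.stack([h, dab, res])
M /= np.linalg.norm(M, axis=1, keepdims=True)  # unit stain vectors (3 x 3)

def deconvolve(rgb):
    """rgb: float array (..., 3) in [0, 255]; returns per-pixel stain densities
    in the order (hematoxylin, DAB, residual)."""
    od = -np.log10(np.maximum(rgb, 1.0) / 255.0)  # optical density
    return od @ np.linalg.inv(M)                  # solve od = stains @ M

pixel = np.array([[180.0, 120.0, 60.0]])  # an example RGB pixel
stains = deconvolve(pixel)
```

The deconvolved DAB column is what the hypothetical ImageServer would expose as one 'fluorescence' channel per registered image.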
The second option would be a considerably bigger task... if you want to take it on, it could be a fantastic addition to QuPath. Or send me an email if you want to discuss whether it's something we could feasibly collaborate on to solve together - and, indeed, whether the idea is likely to work. Without seeing the images I'm guessing quite a bit, but it sounds to me like it should be possible.