Can you describe the starting point in more detail, e.g.
- whole slide scans (TMA/biopsy/whole section) or smaller images?
- brightfield H-DAB, brightfield with other stains (which?), or fluorescence?
- images already registered or not?
Some extra notes:
- QuPath can already handle z-stacks, so if you can use some other software to write your registered images as a z-stack in a QuPath-compatible file format (e.g. an ImageJ TIFF, if it's small enough), then this should work (see the first sketch after these notes).
- Any deformable registration would need to be done outside QuPath. 'Outside' could still mean via a script, but one in which you'd probably need to call external libraries and do a lot of the work yourself. The result would then need to be saved as a full image (the second sketch after these notes shows the general shape of that step). The most that QuPath could apply dynamically is an affine transform.
- If you have multiple pre-registered images, it would be possible to write an ImageServer for QuPath that dynamically combines them to give additional channels and/or a z-stack.
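For the z-stack route, here's a minimal sketch of what that 'other software' step might look like, using ImageJ's API from a Groovy script. The file paths are placeholders, and it assumes your registered images are single-plane, the same size, and small enough to fit in memory:

```groovy
import ij.IJ
import ij.ImagePlus
import ij.ImageStack

// Paths to the already-registered images (placeholders) - all single-plane & the same size
def paths = [
        '/path/to/registered_1.tif',
        '/path/to/registered_2.tif',
        '/path/to/registered_3.tif'
]

// Use the first image to define the stack dimensions, then add each image as a slice
def first = IJ.openImage(paths[0])
def stack = new ImageStack(first.getWidth(), first.getHeight())
paths.each { p ->
    def imp = IJ.openImage(p)
    stack.addSlice(new File(p).getName(), imp.getProcessor())
}

// Wrap as an ImagePlus; by default the slices are treated as z-planes
def impStack = new ImagePlus('Registered stack', stack)
// impStack.setDimensions(paths.size(), 1, 1)  // ...uncomment to mark the slices as channels instead
IJ.saveAsTiff(impStack, '/path/to/registered_stack.tif')
```

This needs ImageJ on the classpath, e.g. run from Fiji's script editor (or QuPath's, since QuPath bundles ImageJ).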
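For registration performed outside QuPath, here's a rough sketch of the general shape of that step: apply a precomputed transform to an image and save the result as a full image, using only standard Java classes. The affine values are made up (in practice they'd come from your registration), a deformable method would replace the AffineTransformOp line with something considerably more elaborate, and TIFF read/write via ImageIO needs Java 9+ or a suitable plugin:

```groovy
import java.awt.geom.AffineTransform
import java.awt.image.AffineTransformOp
import javax.imageio.ImageIO

// Read the moving image (path is a placeholder)
def input = ImageIO.read(new File('/path/to/moving_image.tif'))

// A made-up affine transform; in practice the values come from your registration
def affine = new AffineTransform()
affine.translate(12.5, -4.0)
affine.rotate(Math.toRadians(1.2))

// Apply the transform with bilinear interpolation & write the result as a full image
def op = new AffineTransformOp(affine, AffineTransformOp.TYPE_BILINEAR)
def output = op.filter(input, null)
ImageIO.write(output, 'tif', new File('/path/to/moving_image_transformed.tif'))
```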
I suspect that multiple channels in a (pseudo-)fluorescence image would be most useful. If your data is brightfield, you'd also need to perform color deconvolution and treat each deconvolved stain as a separate fluorescence channel - but all this can certainly be done in a custom ImageServer. It would involve wrapping one or more existing ImageServers (i.e. the Java objects used to read the pixels in your original images) and intercepting the pixels to apply some extra transformation before returning them (sketched below).
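To give an idea of the 'intercept the pixels' part, here's a minimal, self-contained sketch of brightfield color deconvolution using commonly quoted H-DAB stain vectors (after Ruifrok & Johnston; you'd normally estimate vectors for your own images). It isn't QuPath's own implementation - QuPath has color deconvolution built in - it just shows the arithmetic a custom ImageServer wrapper would apply to each tile before returning the pixels as separate 'fluorescence-like' channels:

```groovy
import java.awt.image.BufferedImage

/** Deconvolve an RGB tile into per-stain optical density channels (hematoxylin, DAB, residual). */
float[][] deconvolveHDAB(BufferedImage img) {
    // Normalized stain OD vectors (rows of the stain matrix); commonly used H-DAB defaults
    double[] hem = normalize([0.65, 0.70, 0.29] as double[])
    double[] dab = normalize([0.27, 0.57, 0.78] as double[])
    double[] res = normalize(cross(hem, dab))             // residual completes the basis

    // Columns of the inverse stain matrix (scaled by 1/det),
    // so each stain amount becomes a simple dot product with the OD vector
    double det = dot(hem, cross(dab, res))
    double[] cHem = scale(cross(dab, res), 1.0 / det)
    double[] cDab = scale(cross(res, hem), 1.0 / det)
    double[] cRes = scale(cross(hem, dab), 1.0 / det)

    int w = img.getWidth()
    int h = img.getHeight()
    float[][] channels = new float[3][w * h]
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int rgb = img.getRGB(x, y)
            // Convert 8-bit RGB to optical densities
            double[] od = [
                -Math.log10(Math.max((rgb >> 16) & 0xFF, 1) / 255.0),
                -Math.log10(Math.max((rgb >> 8) & 0xFF, 1) / 255.0),
                -Math.log10(Math.max(rgb & 0xFF, 1) / 255.0)
            ] as double[]
            int i = y * w + x
            channels[0][i] = (float) dot(od, cHem)   // hematoxylin
            channels[1][i] = (float) dot(od, cDab)   // DAB
            channels[2][i] = (float) dot(od, cRes)   // residual
        }
    }
    return channels
}

double   dot(double[] a, double[] b)   { a[0]*b[0] + a[1]*b[1] + a[2]*b[2] }
double[] cross(double[] a, double[] b) { [a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]] as double[] }
double[] scale(double[] a, double s)   { [a[0]*s, a[1]*s, a[2]*s] as double[] }
double[] normalize(double[] v)         { scale(v, 1.0 / Math.sqrt(dot(v, v))) }
```

Each of the returned arrays could then be exposed by the wrapping ImageServer as a separate float channel.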
I've done various tricks like this with ImageServers in the past, so I can confirm it works - and it can even be done with Groovy rather than Java. If you have any pre-registered images for which this would be meaningful, just let me know if you'd like to discuss or explore it further.