Thank you for your generous comments.
Oops! I apologise for misleading you or anyone else with the Hubble photo. It wasn't my intention.
I had two things in mind when making such a comparison. The main one was to compare results, value for money (VFM) and reward/effort ratio, for those thinking of how best to start off in astro imaging, anyone wanting simplicity, or those not wanting to spend a lot.
The relevant comparison, I thought, is between what is possible with a £400 complete and integrated imaging system, albeit uncooled and of small aperture but highly portable, and what might be expected from a more traditional and typical set-up with a good mount, a shortish-focal-length refractor of medium aperture, and a cooled colour or mono camera with filters and guiding. That could easily come in at over £2500 being optimistic, and actually a lot more without trying hard.
I chose an old Hubble photo because it looks lame against Hubble's most recent images of M82, which are now on a much higher level, but it is nevertheless something that would still be impressive for a skilled and dedicated amateur with a conventional rig.
I think the Dwarf results make a good case for smartscopes, both in the absolute comparison of results and in comparisons of relative cost, value for money and ease, and simplicity versus learning curves.
The other reason was prompted by how AI astro programs work, i.e. by 'training' on comparisons between the best images of a given target and those typically achieved. No doubt Nvidia hardware does the brute-force part, trying anything and everything on pixel matrices at billions of operations per second until the conversion method passes the coder's quantified objective measure, producing the algorithm you get in the app.
This approach certainly applies to AI denoising, for one, and it supports the USP of apps like the XTerminator series: that their adjustments give accurate representations of celestial objects rather than invented cosmetic 'airbrushing'.
My test was therefore to see how good the Dwarf is in capturing actual details, for which Hubble images are a good reference.
Processing of Dwarf images is an interesting topic. The answer is that it's up to you. At its simplest, processing is pressing a button on your smartphone to upload to DwarfVision, then downloading a very good processed result in less than a minute.
At the other end of the spectrum, you download the raw FITS files from the Dwarf, which mounts as a local USB drive, and then apply the same hard graft as to any other image data.
If you want the gold cup, that means PixInsight, expensive AI programs, steep learning curves, and much acquired knowledge, skill and practice in using it all (not to mention the need for a dark sky with reliably good seeing).
The auto processing is undoubtedly very competent, in part because the Dwarf automatically produces a matched master dark. (I don't know how this works, but with many users of completely standardised set-ups, it must be easy to produce a suitable cloud library of darks at various gain settings for each 1°C temperature increment.)
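For what it's worth, the lookup I'm imagining could be as simple as the Python sketch below. To be clear, all the names and the library layout are my own assumptions for illustration, not anything Dwarf has published: match on gain, then pick the nearest 1°C temperature bin.

```python
# Hypothetical sketch of a cloud dark-library lookup for a fully
# standardised smartscope: match the gain exactly, then choose the
# master dark whose sensor temperature is nearest (1 degree C bins).
# All names here are illustrative, not the Dwarf's actual API.

def nearest_master_dark(library, gain, temp_c):
    """library: dict mapping (gain, temp_c) -> master dark id."""
    candidates = [key for key in library if key[0] == gain]
    if not candidates:
        raise KeyError(f"no darks for gain {gain}")
    # pick the entry with the smallest temperature difference
    best = min(candidates, key=lambda key: abs(key[1] - temp_c))
    return library[best]

# toy library: darks at gain 80 for a few 1 degree C increments
library = {(80, 18): "dark_80_18", (80, 19): "dark_80_19", (80, 20): "dark_80_20"}
print(nearest_master_dark(library, 80, 19.4))  # nearest bin is 19 C
```

The point is only that, with identical hardware across all users, a modest pre-built library indexed by gain and temperature would cover every frame the scope can produce.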
Going towards the other end of the spectrum is what I attempt. The workflow is PixInsight based, but I also use add-ons (the letter X features a lot in their names), choosing freebie alternatives rather than buying the expensive apps.
* Blink: to weed out obviously dud frames
* Calibration: taking my own darks, flats and bias/flat-dark frames, run through PI's Weighted Batch Pre-Processing (WBPP).
* Integration of the dithered data: (I'm still not convinced about drizzling. The latest PI version has a Fast option that ripped through WBPP for my 122 Bode frames in 6 minutes including 2x drizzle, but it can be very slow if you use the full-power version, and it adds a lot of noise to the results. I usually try it but then don't use the result.)
* Remove stars: PI StarNet2 (free) or the paid StarXTerminator, to allow separate processing of background and stars: stretch the starless linear original, protect the RGB stars, and then add them back to the processed monochrome image. Very helpful.
* Deconvolution: influencer pester power says buy BlurXTerminator. The more thoughtful reviews say it works best on data that's already excellent, and plenty of before-and-after images on the web suggest this is true. I usually skip this stage, but there are free third-party PI sharpening scripts.
* Gradient removal: PI ABE or DBE, Seti Astro's scripts, or the free AI-based GraXpert.
* Noise reduction: the paid route is NoiseXTerminator, but I use the Dwarf version or freebies like the PI EZ suite.
* Channel creation for duo-band filters (Ha/Oiii is a built-in filter in Dwarfs): PixelMath, including the possibility of dynamic weights, where, say, a created Oiii channel combines the blue and green duo-band data with a per-pixel weight w that depends on the local balance of colours, i.e. w = fn(local R, G, B), e.g. use more B where the local R and G are weak, rather than a fixed w*B + (1-w)*G formula. The obscure PixelMath syntax is beyond me, but you can copy and paste, and the principle is clear.
* Stretching from the original linear state: standard PI STF and Histogram Transformation, though there is now a wider choice thanks to a recent burst of innovation, e.g. Generalised Hyperbolic Stretch and Statistical Stretch.
* Colourisation: SPCC on the RGB stars, with perhaps curves boosts, and histogram colour-channel adjustments to taste.
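To make the dynamic-weight idea from the channel-creation step concrete, here is a rough NumPy sketch of the principle. The particular weighting function is my own illustrative choice, not the actual PixelMath expression, and the array names are assumptions:

```python
import numpy as np

# Sketch of the dynamic-weight idea for building an Oiii channel from
# duo-band data: the per-pixel weight w leans on B where the local R and
# G signal is weak, instead of using one fixed w for the whole frame.

def oiii_dynamic(R, G, B, eps=1e-6):
    w = B / (R + G + B + eps)          # per-pixel weight in [0, 1)
    return w * B + (1.0 - w) * G       # same blend formula, but w varies per pixel

# two toy pixels: first has weak R/G and strong B, second the reverse
R = np.array([[0.01, 0.5]])
G = np.array([[0.02, 0.4]])
B = np.array([[0.30, 0.1]])
oiii = oiii_dynamic(R, G, B)
# first pixel: w is near 1, so the result leans towards B (about 0.27)
# second pixel: w is small, so the result stays close to G (about 0.37)
```

The fixed-formula version would apply the same w to both pixels; the dynamic version adapts per pixel, which is the whole point of the approach.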
My halfway house at present, though, is usually to press the button for the early stages, e.g. stacking, and then apply the later stages above myself.
Finally, economies of scale from the take-up of a given make of smartscope are important, not just for auto darks but also, and not least, because there is a growing community of users, many at a high level, who can and do advise on processing Dwarf images specifically.
James