Can you clarify some points? (I'm partly asking in case this helps someone else give you a better answer, but also because I want to do some dynamic-range things in hugin better than I currently know how to, so I think a clearer question from you might draw out a better answer for me too.)
Your starting point has "great dynamic range". Does that mean the starting point has enough extra bits (beyond the basic 8 per channel) to directly represent that dynamic range? Does it mean the multiple images have many different exposures (multiple per stack and/or each appropriate to the brightness of its own content)? Or both extra bits and varying exposure?
Your "output does not". What output? From doing what to the input images? And is that 8 bit or 16 bit output?
How do you hope to represent high dynamic range in the final result?
Do you want to stay in 16 bit (relying on that to have enough range, and on the viewing tool to convert that range usefully to the display's range when displaying)?
Do you want to do mask-based brightness adjustments after stitching (I want to, and don't yet know how to put that whole workflow together), so the final image is "dishonest" in showing the brighter parts of dark areas as brighter than the darker parts of bright areas, even though in the real-world scene those dark parts of bright areas were brighter than the bright parts of dark areas?
Do you want to find a good non-linear mapping from 16 bit to 8 bit (probably after stitching) to produce a more "honest" image, so brighter pixels in the true scene are consistently brighter in the result than less bright pixels from the scene? In other words, a global scaling that makes the original dynamic range more visible in the result without being supported by the display hardware.
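To make that concrete, here is a minimal sketch of the kind of global, order-preserving 16-to-8-bit mapping I mean: a plain power-law curve in numpy. The function name and the gamma value are just illustrative assumptions, not anything hugin itself does; the point is only that a monotonic curve keeps scene brightness ordering "honest" while compressing the range.

```python
import numpy as np

def tonemap_16_to_8(img16, gamma=2.2):
    """Globally map 16-bit values to 8-bit with a power-law curve.

    The curve is monotonic, so pixels that were brighter in the
    16-bit data stay brighter in the 8-bit result (illustrative
    only; hugin's own tools are not implied to use this curve)."""
    x = img16.astype(np.float64) / 65535.0  # normalize to [0, 1]
    y = x ** (1.0 / gamma)                  # lift shadows, compress highlights
    return np.round(y * 255.0).astype(np.uint8)

# A smooth 16-bit ramp keeps its brightness ordering after mapping.
ramp = np.linspace(0, 65535, 1000).astype(np.uint16)
out = tonemap_16_to_8(ramp)
assert np.all(np.diff(out.astype(int)) >= 0)  # ordering preserved
```

Any monotonic curve (log, sigmoid, etc.) would serve the same purpose; gamma is just the simplest to write down.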
Do you mainly want to defeat hugin's automatic exposure correction? So far as I understand, hugin generally tries to change all the images to what they would have been if they had all been taken at the same exposure. I almost never want that, and I don't have a good understanding of what is involved in preventing it. The boundaries become harder to deal with if that feature is disabled: blending an overlap where the two images have different exposures of the same content, without permitting an exposure correction, can produce a very ugly blend. But the original exposures are often different for good reason, and "correcting" that is uglier than dealing with the more difficult blend. Not correcting it and successfully blending gives roughly the same kind of "dishonest" image (which I want, but I don't know whether you do) as the mask-based final adjustment would, but unlike that it doesn't even depend on 16 bits being enough to allow undoing the harm of automatic exposure correction.
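As a toy illustration of why an uncorrected overlap is harder to blend, here is a 1-D numpy sketch: two "frames" of the same scene at different gains, joined once with a hard seam and once with a feathered (linear-weight) blend. The scene values and gains are made up, and this is not hugin's blending code; it just shows that the seam discontinuity is what feathering has to hide when exposures differ.

```python
import numpy as np

# Two "exposures" of the same 1-D scene: same content, different gain.
scene = np.linspace(0.2, 0.8, 100)
bright = np.clip(scene * 1.6, 0.0, 1.0)  # over-exposed frame
dark = np.clip(scene * 0.7, 0.0, 1.0)    # under-exposed frame

# Hard seam: switch images abruptly at the midpoint.
hard = np.concatenate([bright[:50], dark[50:]])

# Feathered blend: weight ramps from one image to the other
# across the overlap, spreading the exposure difference out.
w = np.linspace(1.0, 0.0, 100)
feathered = w * bright + (1.0 - w) * dark

seam_jump = abs(float(hard[50]) - float(hard[49]))
max_step = float(np.max(np.abs(np.diff(feathered))))
# The hard seam has one large jump; the feather keeps every step small.
assert seam_jump > 10 * max_step
```

The visible cost of skipping exposure correction is that this seam jump is large, so the blend has to smear it over a wide overlap instead of a narrow one.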