I was trying to stitch two photos of the same object. One image was large, about 5000x3200 px; the other was smaller, 1200x800 px.
The first one contained lens data, the other did not.
The detector did not find any control points, so I placed them manually where they should be.
After optimizing positions and view (y, p, r, v) the 2nd image was still rendered much smaller, so I copied the field of view from the 1st image to the 2nd.
The sizes still did not match, and it took many attempts to find a value that worked.
Finally, the panorama rendered correctly.
Another example: one image is the original from a camera, the other is a crop of the original, and both carry EXIF data.
The field of view of both images is set to the same value, so the cropped image renders much larger than it should. (A crop covers a narrower angle, so its true field of view is smaller than the original's.)
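For the crop case the correct value can be computed directly instead of by trial and error. A minimal sketch, assuming a rectilinear lens and a roughly centered crop (the function name and the example numbers are mine, not anything from Hugin):

```python
import math

def crop_hfov(hfov_full_deg, width_full, width_crop):
    """Horizontal FoV of a centered crop from a rectilinear image."""
    f_px = (width_full / 2) / math.tan(math.radians(hfov_full_deg) / 2)
    return math.degrees(2 * math.atan((width_crop / 2) / f_px))

# e.g. a 5000 px wide original with a 50 deg HFoV, cropped to 1200 px:
print(crop_hfov(50.0, 5000, 1200))  # ~12.8 deg, not 50
```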
Now for my questions:
Is there a way to calculate the field of view by comparing the distances between control points? (Rectilinear projection, with the position anchor centered.)
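To illustrate the idea: near the center of a rectilinear image, the pixel distance between two points scales linearly with the focal length in pixels, so the ratio of control point distances gives a FoV estimate. A rough sketch, assuming the control point pairs are not too far from the image centers (the function and parameter names are my own):

```python
import math
from statistics import median

def estimate_hfov(hfov1_deg, w1, w2, pairs1, pairs2):
    """Estimate image 2's horizontal FoV from image 1's, using the pixel
    distances between the same pairs of control points in both images.
    Small-angle approximation: near a rectilinear image's center,
    distances scale linearly with the focal length in pixels."""
    f1 = (w1 / 2) / math.tan(math.radians(hfov1_deg) / 2)
    ratios = []
    for (a1, b1), (a2, b2) in zip(pairs1, pairs2):
        d1 = math.dist(a1, b1)   # distance between a CP pair in image 1
        d2 = math.dist(a2, b2)   # distance between the same pair in image 2
        ratios.append(d2 / d1)
    f2 = f1 * median(ratios)     # median is robust against bad pairs
    return math.degrees(2 * math.atan((w2 / 2) / f2))
```

With enough pairs spread over the overlap this would at least give a usable starting value for v that the optimizer could then refine.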
Just as one image can be set as the anchor for position and exposure, could one image be set as the reference for field of view?
Could the control point detector (temporarily and optionally, if no CPs are found) resize all images to the same size before looking for control points?
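Until something like that exists, the same idea can be tried outside Hugin, e.g. with OpenCV: scale the larger image down to the smaller one's size, run a feature detector, and map the matches back to full-size coordinates. A rough sketch (the function name and the 0.75 ratio threshold are my choices, not anything from Hugin or cpfind):

```python
import cv2

def match_after_resize(path_big, path_small):
    """Match features between two differently sized images by first
    scaling the larger one down to the smaller one's width, then mapping
    the keypoint coordinates back to the original pixel grid."""
    big = cv2.imread(path_big, cv2.IMREAD_GRAYSCALE)
    small = cv2.imread(path_small, cv2.IMREAD_GRAYSCALE)
    scale = small.shape[1] / big.shape[1]
    big_rs = cv2.resize(big, None, fx=scale, fy=scale,
                        interpolation=cv2.INTER_AREA)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(big_rs, None)
    kp2, des2 = sift.detectAndCompute(small, None)

    matcher = cv2.BFMatcher()
    pairs = []
    for m, n in matcher.knnMatch(des1, des2, k=2):
        if m.distance < 0.75 * n.distance:      # Lowe's ratio test
            x, y = kp1[m.queryIdx].pt
            # map back to the full-size image's coordinate system
            pairs.append(((x / scale, y / scale), kp2[m.trainIdx].pt))
    return pairs
```

The recovered coordinates could then be entered as control points in Hugin, manually or via a script.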
I can add that a mobile app can stitch images of various sizes and aspect ratios as long as it finds common points; I use this for merging two or more exposures or applied filters.
Even when the images vary in size, they are stitched almost perfectly, and the blurred (overlaid) details can be corrected in Hugin by masking.