Best Dynamic Range Camera Phone

Егор Ульянов

Aug 3, 2024, 3:04:37 PM
to teovemeappmatch

The next shot is very complex in terms of shadows and highlights. Google does an excellent job overall, producing an exposure that is a close representation of the actual scene. The iPhone XS follows suit with just a tad brighter shadows, with Samsung opting to go even brighter.

The OnePlus 6 looks to have the best dynamic range shot of all the phones here, maintaining the best contrast in the fallen leaves while neither overexposing the shadows nor bringing down the highlights too much. While the Pixel 3 does well in representing the brightness of the scene, it also loses some detail because it is darker.

Similar to the last scenario, I would suggest that the OnePlus 6 had the best overall shot here in terms of exposure and HDR balance, with the Pixel 3 following closely. The iPhone XS also has an excellent shot. The Note9 loses too much contrast by raising the shadows, and the S9+ has a blown-out sky.

Why can a camera not capture an image similar to what my eyes can see? I would think that newer cameras should be able to capture this much dynamic range easily. I do not believe that the display is a problem if this much dynamic range is captured, because it can be normalized. With a digital camera I have to set an exposure that will only capture either the outdoor scene or the indoor scene correctly.

A similar question is already discussed here: "How to capture the scene exactly as my eyes can see?". I am not talking about resolution, focusing, or detail. I am interested in exposure, or dynamic range, similar to when we fix our eyes on a single scene.

The reason you can see such a large dynamic range isn't that the eye, as an optical device, can actually capture such a range - the reason is that your brain can combine information from lots and lots of "exposures" from the eyes and create an HDR panorama of the scene in front of you.

The brain takes all those images from the eye and creates the image you think you see - this includes details from images at different sensitivities, and even details that are completely made up based on what you expected to see. (This is one reason why there are optical illusions - the brain can be fooled into "seeing" things that aren't really there.)

So, you can see with your camera just like with your eye: take lots of exposures at different settings, then load everything into Photoshop, create an HDR panorama, and use "content aware fill" to fill the gaps.
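If you would rather script the bracketing-and-merge part instead of doing it in Photoshop, here is a minimal sketch using OpenCV's HDR module; the file names and exposure times are made-up placeholders:

import cv2
import numpy as np

# Hypothetical bracketed shots and their exposure times in seconds
files = ["dark.jpg", "mid.jpg", "bright.jpg"]
times = np.array([1/1000, 1/60, 1/4], dtype=np.float32)
images = [cv2.imread(f) for f in files]

# Recover the camera response curve, then merge into one HDR radiance map
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, times)
merge = cv2.createMergeDebevec()
hdr = merge.process(images, times, response)

# Save as a Radiance .hdr file; it still needs tone mapping for a normal screen
cv2.imwrite("merged.hdr", hdr)

Stitching into a panorama and content-aware fill are separate steps; this only covers the exposure merge.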

By the way, why "should" cameras be able to capture that range while monitors shouldn't be able to reproduce it? If technology that doesn't exist yet should exist, then monitors should be able to reproduce anything we can see (and I should be able to take a vacation at a low-gravity hotel on the moon).

Your eye may have a slight advantage in sensor dynamic range over a camera, but most of what makes the difference is having a sophisticated autoexposure system, saccades, HDR processing, and a scene-recognition system that persists across multiple exposures. The human brain is at least as important to the visual system as the eye is.

We only see colour and detail within a very narrow field in the centre of our vision. To build up the detailed colourful image we perceive, the brain moves this central spot around without us knowing.

I'm not a neurobiologist, but it stands to reason that as the brain builds up this wider picture from lots of tiny snapshots, it also does some normalisation of the brightness, yielding an image that appears roughly the same brightness everywhere despite some areas being much brighter in reality. Basically, the ability to see dark and bright things at the same time is an illusion.

There's no reason this behaviour can't be imitated by digital cameras, nor is there any reason we can't make sensors capable of much greater dynamic range in a single exposure. In fact Fuji manufactured a sensor with extra low sensitivity photosites to capture extra highlight detail.

The problem comes down to the inability to display high dynamic range images. In order to display such images on a standard low dynamic range monitor you need to do some special processing called tone mapping, which has its own set of disadvantages. To most consumers, high dynamic range cameras would simply be more hassle.
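To give a concrete sense of what tone mapping does, here is a rough sketch of a simple global operator in the style of Reinhard's L/(1+L) curve; it assumes you already have a linear HDR image loaded as a float NumPy array, and the constants are just illustrative defaults:

import numpy as np

def tonemap_reinhard(hdr, key=0.18, eps=1e-6):
    # Luminance from linear RGB (Rec. 709 weights)
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    # Scale so the log-average luminance lands on "key" (roughly middle grey)
    log_avg = np.exp(np.mean(np.log(lum + eps)))
    scaled = (key / log_avg) * lum
    # Compress: very bright values approach 1 instead of clipping
    compressed = scaled / (1.0 + scaled)
    # Apply the luminance change back to the colour channels
    ratio = compressed / (lum + eps)
    ldr = np.clip(hdr * ratio[..., None], 0.0, 1.0)
    return (ldr ** (1 / 2.2) * 255).astype(np.uint8)  # gamma for an 8-bit display

The disadvantages show up exactly here: squeezing a huge range into 8 bits flattens global contrast, and fancier local operators trade that for halos and an unnatural look.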

The light level in a darkened room with a window open to an exterior scene may be as low as about 0.1 lux (0.1 lumen per square metre). The outside scene light level may be anything from tens to thousands of lux in the situation you describe.

At 100 lux external and 0.1 lux internal the ratio is 1000:1, or just under 10 bits of dynamic range. Many modern cameras could differentiate tonal differences at both ends of this range if set correctly. If the exposure was set so that the tree light level was just saturating a sensor with, say, 14 bits of dynamic range, then you'd have about 4 bits of level available inside the room, i.e. 16 levels of lighting, so you could see some degree of detail at the brightest interior levels - except that that level of light is so low that your eyes would have trouble with it.

If the tree light level was 1000 lux (= 1% of full sunlight) you'd need about 13 bits of dynamic range. The very best 35mm full-frame cameras available would handle this. Camera adjustment would need to be spot-on, and you would have about zero tonal information inside the room. This level of external lighting is higher than you would get in anything other than a flood-lit night-time situation.
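To make the arithmetic explicit, the required dynamic range in stops (bits) is just the base-2 logarithm of the luminance ratio; a couple of lines of Python reproduce the figures above:

import math

def stops_needed(bright_lux, dark_lux):
    # Dynamic range in stops/bits = log2 of the luminance ratio
    return math.log2(bright_lux / dark_lux)

print(stops_needed(100, 0.1))   # ~9.97  -> just under 10 bits
print(stops_needed(1000, 0.1))  # ~13.3  -> about 13 bits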

Many modern mid-range to top-end DSLRs have built-in HDR processing that allows far greater dynamic range to be obtained by combining multiple images. Even a 2-image HDR photo would easily accommodate your scene. My Sony A77 offers up to +/- 6 EV, 3-frame HDR. That will give well over 20 bits of dynamic range, allowing very adequate tonal variation at both the top and bottom ends in your example.
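As a back-of-envelope check of that figure (the per-frame number below is an assumption, not a measured value for the A77):

# Rough estimate of the dynamic range of a bracketed HDR stack
single_frame_stops = 12   # assumed dynamic range of one exposure
bracket_span_ev = 12      # frames spaced from -6 EV to +6 EV
combined = single_frame_stops + bracket_span_ev
print(combined)           # 24 stops -> "well over 20 bits"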

None of the answers have touched on this yet, directly at least... yes, it is very much an issue with film, too. The famous Fuji Velvia colour transparency film, for example, has a truly rotten dynamic range (great colour, though!). Transparency film in general suffers from this. On the other hand, negative films can have very good dynamic range, about as good as the best current digital cameras. It is handled a bit differently, though: while digital has a linear response to light, film tends to have a marked "S"-shaped contrast curve built in. The blacks and almost-blacks, and the whites and almost-whites, are bunched up more than the middle tones.

Keep in mind that since film photos will generally end up printed in ink on a white paper background, there is a not-too-generous limit on how much dynamic range one would want them to capture in the first place! Capturing, say, a thirty-stop dynamic range and then outputting it to a... what is the ballpark dynamic range of a photographic print anyway? Five stops? Six? ...output medium would look odd, to say the least. I suspect that it is this factor, more than any insurmountable hurdles with the chemistry, that has limited photographic film's dynamic range. It is not so much that we cannot do it; it is more that we actively don't want to do it.

So if you assume a scene where the brightness goes from 1 to 10,000 (a randomly chosen number), then in log base 10 the human eye would see the brightness as roughly 0 to 4, while the camera, linearly, sees it as 1 to 10,000. Building a sensor that can cover such a large range is difficult, as you have noise interfering with low measurements and overspill interfering with higher brightness measurements. Having said that, I believe there is a RED camera that can record 18 stops of dynamic range - not sure if it is only a prototype or a production model, though.
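A quick numeric illustration of that compression, using the same made-up brightness values:

import numpy as np

brightness = np.array([1, 10, 100, 1000, 10000], dtype=float)
linear = brightness / brightness.max()   # linear scaling: 0.0001, 0.001, 0.01, 0.1, 1.0
perceived = np.log10(brightness)         # log-like response: 0, 1, 2, 3, 4

The linear version spans four orders of magnitude, while the log version fits comfortably into a handful of evenly spaced steps.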

The eye doesn't capture dynamic range. It compresses dynamic range, and then the "post processing" in the brain creates the illusion of dynamic range. A compressed dynamic range is why you can see into shadows and lit areas at the same time. The "gain", so to speak, is automatically cranked up in the parts of the retina that are sensing the shadows, making them brighter, and reduced where the retina is seeing lit areas. The brain still knows that it's looking into a shadow, so it creates the sensation that it is dark there. A kind of expansion of the compressed data is going on, so to speak, so that you're not aware that the dynamic range has been compressed.

The sensors in digital cameras could easily outperform the retina in raw dynamic range. The problem is that you don't control the exposure on a per-area basis. Cameras have gain settings (usually presented in film terminology as ISO settings) which are global.

If the camera could adjust gain for specific areas of pixels based on brightness, that would be undoubtedly useful, but we know from applying such gain-leveling effects in post-processing that the brain is not really fooled by them. It does not look natural. It looks natural only when your own eye is doing it in coordination with your own brain.
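For what it's worth, here is a crude sketch of that kind of per-area gain applied in post (the sort of shadow-lifting that tends to look unnatural); the parameters are arbitrary and it assumes a float RGB image in [0, 1]:

import numpy as np
from scipy.ndimage import gaussian_filter

def local_gain(img, target=0.5, sigma=50, max_gain=4.0, eps=1e-6):
    # Estimate large-scale local brightness with a heavy blur
    lum = img.mean(axis=-1)
    local = gaussian_filter(lum, sigma)
    # Boost dark regions and tame bright ones toward the target brightness
    gain = np.clip(target / (local + eps), 1.0 / max_gain, max_gain)
    return np.clip(img * gain[..., None], 0.0, 1.0)

Flattening the big brightness differences like this is exactly what makes the result look "HDR-ish" rather than natural.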

Let's consider the closest option. Tone mapping is a method in which a low-pass filter is applied to the exponent values of an RGBE image. That plays a large part in how eyes see something. But consider that our eyes are taking in lengthy streams of imagery; they work a lot more like video cameras than photo cameras.

In a much more simplified example, the iPhone's "HDR" photos are composites of a low-exposure and a high-exposure image pushed through a tone-mapping process, and it works fairly well; try it if you haven't. Many other consumer-grade cameras do similar things.
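A toy version of that kind of two-exposure composite might look like the sketch below (real pipelines also align the frames and handle motion; the weighting here is just an illustrative "well-exposedness" weight):

import numpy as np

def fuse_two(low, high, sigma=0.2):
    # low, high: float RGB arrays in [0, 1]; "low" underexposed, "high" overexposed
    def weight(img):
        lum = img.mean(axis=-1)
        # Pixels near middle grey get the most weight
        return np.exp(-((lum - 0.5) ** 2) / (2 * sigma ** 2)) + 1e-6
    w_low, w_high = weight(low), weight(high)
    total = w_low + w_high
    return low * (w_low / total)[..., None] + high * (w_high / total)[..., None]

So highlights come mostly from the darker frame and shadows mostly from the brighter one, which is the basic idea behind these "HDR" modes.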

There is also the fascinating subject of how intuition/intention/free will plays into how your eyes are being calibrated along the stream of time. If you're looking at a dark wall and think about turning your head towards a brightly lit window, your brain can tell your eyes to go ahead and start closing your pupils. A camera with automatic exposure can do the same thing, but only after there's too much light coming in. People who work in cinema spend a lot of time getting the timing of movie cameras' settings to flow smoothly so that they feel natural in a complicated shot (or lighting a scene in such a way that the cameras' settings don't actually have to be adjusted). But again, the only reason those sorts of things work is because the director knows what's going to happen to the camera before it happens.
