I often say and write that simple test patterns are often the toughest for display systems to get right because the eye knows when something is wrong. If a flat white image is displayed and you see color(s), you know something is wrong. Humans are terrible judges of absolute color, including various white color temperatures, but the eye is sensitive to variations in color. As will be shown, the Apple Vision Pro (AVP) fails this simple (mostly) white display test. While not as bad as some other headsets, you would never buy a modern TV or computer monitor with such poor white uniformity.
To support the wide FOV with a relatively small display (compared to the Meta Quest Pro and Quest 3), Apple has developed a more radical approach that curves the quarter waveplate and puts a concave lens on the eye side of the optics (below left, from an interesting analysis by Hypervision). In contrast, the Meta pancake optics (below right) have a flat quarter waveplate and a convex lens on both the eye and display sides.
By definition, quarter waveplates (QWPs) are color/wavelength dependent, and their effect on polarization also varies significantly with the angle of the incident light. Curving the QWP is necessitated by the optics design.
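To make the wavelength dependence concrete, here is a minimal sketch, assuming a simple single-layer waveplate with constant birefringence (the values are illustrative, not the AVP film's; real films are partially achromatized):

```python
# Sketch: why a single-layer quarter waveplate (QWP) cannot be a
# quarter wave for all colors. Illustrative values, not the AVP's.
DESIGN_WL = 550e-9   # design wavelength in meters (green), assumed
DELTA_N = 0.009      # birefringence, illustrative value

# Choose film thickness so retardance is exactly 1/4 wave at 550 nm.
d = DESIGN_WL / (4 * DELTA_N)

for wl_nm in (450, 550, 630):  # blue, green, red
    retardance_waves = DELTA_N * d / (wl_nm * 1e-9)
    print(f"{wl_nm} nm: {retardance_waves:.3f} waves (ideal = 0.250)")
```

Blue light is retarded by more than a quarter wave and red by less, so the polarization is no longer perfectly circular for those colors and some light leaks where it shouldn't; the angle dependence adds a further, position-varying error on top of this.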
There is no free lunch with digital pre-correction; the resampling that removes the distortion comes at the expense of resolution. Display pixels in the center of the FOV are less magnified, while pixels become more magnified moving out from the center. A thin line might be the size of one pixel in the center of the FOV but less than 1/3rd the size of a pixel in the outer part of the FOV.
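Below is a toy sketch of that resolution math, assuming a made-up radial magnification profile (not measured AVP data) in which an edge pixel covers about three times the angle of a center pixel:

```python
# Sketch: resolution cost of digital pre-correction for lens distortion.
# The distortion profile below is illustrative, not the AVP's.

def degrees_per_pixel(r_norm):
    """Angular size of one display pixel vs. normalized field radius."""
    center_dpp = 1 / 44.4                      # ~44.4 PPD in the center
    return center_dpp * (1 + 2.0 * r_norm**2)  # 3x larger at the edge

# A thin line that subtends the angle of one center pixel:
line_angle = degrees_per_pixel(0.0)
for r in (0.0, 0.5, 1.0):
    frac = line_angle / degrees_per_pixel(r)
    print(f"r = {r:.1f}: line covers {frac:.2f} of a display pixel")
```

With this profile, the same angular line width shrinks from one full pixel at the center to a third of a pixel at the edge, which is why fine detail falls apart in the periphery.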
While the natively rendered spreadsheet caused the dramatic problems I wrote about previously, I have seen similar eye-tracking artifacts with some static bitmaps (to be shown in a future article). Eye tracking and foveated rendering are clearly being applied to bitmap images.
While all the marketing attention is on using eye tracking for input selection, eye tracking is critical to generating a good image, more so than with prior optics. This also helps explain why specific, characterized inserts are required for vision correction, even though there is enough room and eye relief for glasses with small frames to fit.
The amount of color variation is not noticeable in typical colorful scenes like movies and photographs. Still, it is noticeable when displaying mostly white screens, as commonly occurs with web browsing, word processing, or spreadsheets.
The image below is a crop from the center of the original 400-megapixel picture. I tried to pack a lot into this image, including some pieces of the source image scaled up to about the same size as in the photograph, a view through a 50mm lens (with 3.125 times the center resolution of the original 16mm lens) that was used to estimate the FOV of each center pixel, plus some highly magnified overlays showing details of the lines in the test sub-patterns.
To determine the pixels per degree in the center of the AVP screen, I have been comparing high-resolution images taken with a narrow-FOV 50mm lens, where pixel boundaries are clearly visible, and scaling and fitting them to the images from the much wider FOV 16mm lens. The result I get is about 44.4 pixels per degree (PPD) in the center of the image.
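For those who want to follow the arithmetic, here is a minimal sketch of the method, with placeholder camera numbers chosen only to show how a ~44.4 PPD figure falls out (the sensor pitch and measured pixel span below are assumptions, not my actual measurements):

```python
import math

# Placeholder values for illustration only.
SENSOR_PITCH_MM = 3.76e-3     # camera pixel pitch in mm, assumed
FOCAL_MM = 50.0               # narrow-FOV lens used for the close-up
CAM_PIX_PER_AVP_PIX = 5.23    # hypothetical span of one AVP pixel

# Small-angle approximation: degrees seen by one camera pixel.
deg_per_cam_pix = math.degrees(SENSOR_PITCH_MM / FOCAL_MM)

deg_per_avp_pix = CAM_PIX_PER_AVP_PIX * deg_per_cam_pix
print(f"{1 / deg_per_avp_pix:.1f} pixels per degree")  # ~44.4 here
```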
Computer-generated images with sharp edges, including everyday applications like word processing, simple presentation graphics and charts, and spreadsheets, are very hard to reproduce when locked into 3-D space (see the Appendix for more information).
Returning to the original full camera image above, a large dashed-line square roughly indicates the foveated-rendering boundary. The image below is a full-resolution crop showing a horizontal boundary (2a) and a vertical boundary (2b).
I see different processing and artifacts happening when natively rendering on the AVP (such as when running Excel), displaying a saved bitmap from a file on the AVP, displaying a bitmap image on a web page, and mirroring the content of a MacBook. With each test image, it is an adventure to see how it will display in each of these modes.
This certainly makes sense and seems to agree with what I am seeing. It appears the AVP first renders the image at higher than native resolution and then scales/resamples that high-resolution image into 3-D space. The problem is that even if you scale a bitmap up to a much higher resolution, some detail will be lost (see the Appendix on the Nyquist rate).
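A toy example of the Nyquist point: a pattern of alternating 1-pixel black and white lines sits exactly at the sampling limit, so resampling it at a half-pixel phase offset (which inevitably happens somewhere when a bitmap is warped into 3-D space) can destroy its contrast entirely, no matter how good the scaler is:

```python
import numpy as np

# Alternating 1-pixel black/white lines: a signal at the Nyquist limit.
pattern = np.tile([0.0, 1.0], 50)

# Resample with linear interpolation at a half-pixel phase offset.
x = np.arange(len(pattern) - 1) + 0.5
resampled = np.interp(x, np.arange(len(pattern)), pattern)

print("original contrast: ", pattern.max() - pattern.min())      # 1.0
print("resampled contrast:", resampled.max() - resampled.min())  # 0.0
```

Upscaling the bitmap first raises the limit but does not remove it; detail near the display's own Nyquist rate will still soften or alias.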
My studies use conventional camera equipment to capture what the eye sees to give a heuristic feel for how the various headsets perform. Detailed evaluations necessary for both R&D and production require specialized cameras with robots to simulate eye and head movement.
OptoFidelity announcement: We are excited to inform you that we will comprehensively evaluate the Apple Vision Pro using the BUDDY test system. Our testing will cover a range of performance metrics for the Vision Pro, including:
Looking at the test pattern on the Vision Pro, with it filling up my entire field of view, I am able to easily make out four distinct lines in the test pattern. I have to reduce the size and move it far away to see the lines merge into three or two. Are you not able to make out four distinct lines in the test pattern when you look at it through the lenses?
I have the HoloLens 2 and the Quest 3, and those displays are objectively worse. I agree that Apple is cheating a lot in making things bigger in order to make text more discernible, but I have been pleasantly surprised at how productive I can actually be with text-related tasks that I would never attempt on either of the other two devices, such as reading email or editing code. I worried that the loss of information density would be much greater than it actually is. I am able to have enough code visible to be productive. And reading email or web articles is very comfortable. I am very much looking forward to an update that allows for multiple virtual Mac screens. Is it yet a complete desktop replacement? No. But it is much closer than I anticipated.
In going back and looking at the color uniformity, the cyan outer rings cannot be missed and seem to take up about 5-10% of the FOV. The reddish tint in the center, however, is less noticeable with both eyes open; if the color variations differ between the eyes, they tend to cancel each other out.
I went back tonight and looked at the color uniformity issue. To be clear, the AVP's white uniformity is not horrible, but it is not great either. Tonight I did a back-and-forth comparison with the Meta Quest 3.
I think a lot of the variation you see is due to different amounts of rescaling/resampling. The resolution the virtual monitor is trying to emulate and the quality of the scaling software can have as big an effect as the resolution of the display.
The question is whether people will take their AVP with them instead of a laptop, or along with it. A laptop is much more conveniently carried, as it folds flat. If I want a second monitor for, say, videos or unrelated content, I can use a tablet or my cell phone; once again, the two devices combined are easier to carry than an AVP.
The vertical FOV in the close-to-the-lens picture is significantly cut off by the camera lens. You will see the red dotted line where I estimated the cutoff horizontally, but I did not do the same vertically (the camera has to be off-center to capture the picture correctly, as the FOV is off-center in each eye). I would assume the FOV difference is similar in both directions.
On that topic, is there any standard way of calculating peak PPD? Reporting the average over the central 30° area (a circle with a 30° radius) vs. the central 10° area may yield different numbers, though probably not markedly so in most cases, as manufacturers presumably want to keep distortion low in this area.
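To illustrate why the averaging radius matters, here is a small sketch with an assumed, purely illustrative PPD falloff profile (not any headset's measured data):

```python
# Average PPD over a central disc depends on the disc radius.
# The falloff profile below is an assumption for illustration.

def ppd(theta_deg):
    """Assumed PPD vs. field angle; 44.4 on-axis, falling off-axis."""
    return 44.4 / (1 + 0.01 * theta_deg**2)

def avg_ppd(radius_deg, steps=1000):
    """Area-weighted average of ppd() over a disc of the given radius."""
    total = weight = 0.0
    for i in range(steps):
        theta = (i + 0.5) * radius_deg / steps
        w = theta  # annulus area is proportional to theta * d_theta
        total += ppd(theta) * w
        weight += w
    return total / weight

print(f"average over 10-degree radius: {avg_ppd(10):.1f} PPD")
print(f"average over 30-degree radius: {avg_ppd(30):.1f} PPD")
```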
I had really bad display color calibration issues, so much so that I had to exchange my unit because the reds in the left and right eyes were extremely off, but no one else seems to have talked about this. My new unit, while much better, is still not a perfect match in red hues, which can be tested using the color chart in the Color Filters accessibility menu.
I also like, as Eugene mentioned, having a laptop screen that is positioned higher for better posture. And a huge bonus coming up: I can work outdoors (like on a porch) in conditions that would wash out a laptop screen.
I do notice light-blooming issues at times, but so far really just in movies. I find it a little annoying at times, but again, the utility of being able to watch a movie on an apparently large screen in any comfortable place I can find is kind of invaluable.
Thanks for the tip! 1) What app do you use to display images on the AVP? 2) When you take off the headset, how do you aim the external camera at the virtual target image? By keeping the headset in the same direction/orientation?
1) I use several different applications. I have used the web browser to look at images on my website, apps running on the AVP (Excel and a picture display app), and a connection to a MacBook Pro M3 Pro.
2) I wear the headset to get things generally arranged and lined up. I then have a clamp to hold it on a tripod that is roughly at my head position. I then do some fine-tuning with the tripod adjustments once the headset is clamped on the tripod.
I was not able to see the red color non-uniformity in mine, but I can see the cyan at the border when the whole display is white. To be honest, they hide it somehow in any other image; I can only see it when the display is all white.
I think I like it better as an on-the-go laptop monitor replacement than you seem to. It will depend on the task, but in addition to a bigger display, the ergonomics are a bit better (looking ahead rather than down at my laptop), though I still like physical monitors better. I can see some display artifacts in strange images with some CAD tools, but overall it seems a worthwhile trade-off if you are not at your desk.