I have a basic understanding (which could easily be wrong, so please correct me) that for approximately every −6 dB of volume leveling there is a 1-bit truncation (or loss?) in the output resolution. So for every 6 dB drop in volume leveling in Roon, 1 bit of resolution is lost?
Before anyone asks whether I can hear a 1- or 2-bit loss in resolution, the answer is probably/maybe no. But I do like understanding (at a high level only) what Roon is doing along my signal chain.
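For what it's worth, the rule of thumb checks out arithmetically: each bit of a fixed-point sample doubles the number of levels, and doubling amplitude corresponds to 20·log10(2) ≈ 6.02 dB. A tiny Python sketch of the relationship (my own illustration, not anything from Roon):

```python
import math

# Each extra bit doubles the number of levels a fixed-point sample can
# represent; doubling amplitude corresponds to 20*log10(2) ≈ 6.02 dB.
DB_PER_BIT = 20 * math.log10(2)

for attenuation_db in (3, 6, 12, 18, 24):
    bits = attenuation_db / DB_PER_BIT
    print(f"-{attenuation_db} dB of leveling ≈ top {bits:.2f} bits of the "
          f"output word left unused by the signal")
```

Whether that headroom is actually "lost" depends on where the gain is applied: in a floating-point pipeline nothing is truncated until the final requantization, which is the point the next reply makes for DSD.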
With DSD, the full 64-bit floating-point stream goes into the sigma-delta modulator without truncation. No volume-leveling adjustment could cause us to lose information in that scenario.
We insist on putting volume control in the endpoint to minimize lag/latency when changing volume. Doing a DSD soft volume control would consume a lot of resources. Many endpoints in our ecosystem are not powerful enough to perform the adjustment.
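A minimal numpy sketch (my own illustration, not Roon's code) of why a gain applied in 64-bit float doesn't itself truncate anything: the only rounding happens when the stream is finally quantized back to a fixed-point width.

```python
import numpy as np

rng = np.random.default_rng(0)
pcm24 = rng.integers(-2**23, 2**23, size=8, dtype=np.int32)  # pretend 24-bit samples

x = pcm24.astype(np.float64) / 2**23   # exact: 24-bit ints fit in a 53-bit mantissa
y = x * 10 ** (-12 / 20)               # -12 dB of leveling, done in 64-bit float

# Nothing has been thrown away yet: the original is recoverable to float precision.
assert np.allclose(y * 10 ** (12 / 20), x)

# Truncation only happens here, when requantizing to the output width:
out24 = np.round(y * 2**23).astype(np.int32)
```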
Internet music dealers currently sell "CD-quality" tracks, or even better ("Studio Master"), thanks to lossless audio coding formats (FLAC, ALAC). However, a lossless format does not guarantee that the audio content is what it seems to be. The audio signal may have been upscaled (increasing the bit depth), upsampled (increasing the sampling rate), or even transcoded from a lossy to a lossless format. Lossless Audio Checker analyzes lossless audio tracks and detects upscaling, upsampling, and transcoding.
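I don't know Lossless Audio Checker's internals, but one common heuristic for the transcoding case is to look for a hard spectral ceiling well below Nyquist, since most lossy codecs discard the top of the band. A rough numpy/scipy sketch (the file name is hypothetical):

```python
import numpy as np
from scipy.io import wavfile   # assumes a PCM WAV for simplicity

rate, data = wavfile.read("suspect_track.wav")   # hypothetical file
mono = data.mean(axis=1) if data.ndim == 2 else data.astype(float)

# Average the magnitude spectrum over one-second windows.
win = rate
frames = [mono[i:i + win] for i in range(0, len(mono) - win, win)]
spectrum = np.mean([np.abs(np.fft.rfft(f * np.hanning(win))) for f in frames], axis=0)
freqs = np.fft.rfftfreq(win, 1 / rate)

# Find where the spectrum falls ~60 dB below its midband level.
ref = spectrum[(freqs > 1000) & (freqs < 5000)].mean()
above = freqs[spectrum > ref * 10 ** (-60 / 20)]
cutoff = above.max() if above.size else 0.0
print(f"energy extends to ~{cutoff / 1000:.1f} kHz of {rate / 2 / 1000:.1f} kHz Nyquist")
# A hard ceiling around 16-20 kHz in a supposedly hi-res file suggests a lossy ancestor.
```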
Shooting my son's graduation today, I tried 14-bit uncompressed for the first time on my D810. Wow! Not for any improvement in IQ, but for file size. Normally I was getting a raw file size in the low-40s MB, but with 14-bit uncompressed, files are now pushing 80 MB. To be clear, I always shoot raw and post-process in Lightroom.
I'm embarrassed to say I don't recall what I was shooting before switching to 14-bit uncompressed, but it was fine for me, and I will shoot some test images to look at file size and figure out where I used to be. Working the images in LR this evening, I was not impressed by whatever advantage 14-bit uncompressed gives. Google tells me that there's improved shadow detail; well, maybe, but I thought the images looked far more contrasty and even noisy on the indoor shots at 3200, an ISO that should be fine for the D810.
The 14-bit setting also allows 16,384 shades per channel vs. 12-bit's "measly" 4,096 shades of red, green, and blue. My eyes saw the contrasty look, but could not appreciate 12k more shades of blue. I will switch back, but to what? My test images will tell.
I try to always shoot lossless; I don't care if it's compressed. (If no information is lost, how the data is stored doesn't matter to me.) I nearly always shoot 14-bit, but will drop to 12-bit when going for a lot of frames with a camera that slows down at 14-bit. My old Nikon D7100, with a very small buffer, slowed down or stopped after very few shots when shooting raw. It was a little better with 12-bit than 14-bit.
I typically shoot 14-bit lossless compressed; the lossless compression makes for smaller files but does not lose information. The 14-bit vs. 12-bit difference is really subtle; if you are not shooting at a low ISO, 14 bits probably does not give you any extra information. However, it's annoying to have to switch between modes, so I rarely do. If I'm shooting a buffer-constrained camera and my ISO is higher than about 400, then I may, for specific tasks, switch to 12-bit lossless compressed or 12-bit compressed to get a bit more room to work with, but then I may not remember to switch back. Any differences between these formats are going to be really subtle and unlikely to be visible at all, except (perhaps) at base ISO, where it makes sense to use 14-bit files. Nonetheless, in practice I usually do shoot 14-bit lossless with all my cameras.
99% of the time I shoot 14-bit lossless compressed to the faster card (CF or XQD depending on camera) and JPEG fine to the slower (SD) card. I don't see a point in uncompressed - corrupted files are extremely rare, and very little of the time taken in my image processing has anything to do with decompressing the raw file (whereas running out of card space and buffer has happened to me).
I did briefly switch to 12-bit lossless (and a smaller image area) for a fast burst a few months ago to maximise buffer space. Thom Hogan argues that you should use 12-bit above, IIRC, about ISO 400; it'd be nice to have an option to automate this, though I usually try to stay as near to ISO 64 as possible anyway. In theory more bits should give you a more accurate estimate of the true value, so they ought to help even with a noisy image, but I can understand that the extra accuracy usually gets lost in the noise.
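A hedged simulation of that point as I understand it (the numbers are made up for illustration): once the sensor's own noise is several times the size of a 12-bit quantization step, the noise effectively dithers the quantization away, and the 12-bit estimate of a pixel's true value lands essentially on top of the 14-bit one.

```python
import numpy as np

rng = np.random.default_rng(1)
true_signal = 1000.0     # arbitrary "true" pixel value, in 14-bit DN
read_noise = 12.0        # made-up high-ISO noise level, also in 14-bit DN

samples = true_signal + rng.normal(0, read_noise, 100_000)

q14 = np.round(samples)              # 14-bit: quantization step = 1 DN
q12 = np.round(samples / 4) * 4      # 12-bit: quantization step = 4 DN

print("RMS quantization error, 14-bit:", np.sqrt(np.mean((q14 - samples) ** 2)))
print("RMS quantization error, 12-bit:", np.sqrt(np.mean((q12 - samples) ** 2)))
print("error in the mean estimate:",
      abs(q14.mean() - true_signal), "vs", abs(q12.mean() - true_signal))
# With noise >> step size, both means land within a fraction of a DN
# of the true value: the extra two bits buy almost nothing.
```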
IIRC the D810's "small raw" is pretty useless (compared with using lossy compressed 12-bit at full resolution). The D850's versions are different and closer to a proper downsampled 12-bit image (actual pixel binning is useful, especially in low light); I've yet to experiment with them.
I shot some test shots, and as many have noted, the sweet spot for me is 14-bit lossless compressed. 14-bit uncompressed was better, but only when comparing directly, and the difference was subtle. Third place was 14-bit compressed. I guess this is how one would expect it to be. Looking at file size, I think I must've been shooting in 14-bit lossless all along, except for yesterday's shots.
As far as image quality goes, there should be absolutely no difference between lossless compressed and uncompressed. Lossless should mean no loss at all. The difference is that the files are smaller after compression, and it will take a little time to expand it when you edit it.
At thirty quid or so per Terabyte, it's never been cheaper. And transfer rates of > 90 MB/s are pretty standard. Gee, I wish I could have processed colour film at almost zero cost at the rate of a frame a second 20 years ago!
BTW, the difference between compressed and uncompressed isn't in the recovered quality; it's in the robustness of the files. Try forcing a single bit error into both file formats and see which one is most easily recovered.
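If anyone wants to try that experiment, here is a quick sketch that damages a single bit in a copy of a file (the file names are hypothetical); open both damaged copies in your raw converter and see how far the corruption spreads:

```python
import shutil

def flip_one_bit(src, dst, byte_offset=500_000, bit=3):
    """Copy src to dst, then flip one bit at a fixed offset into the copy."""
    shutil.copyfile(src, dst)
    with open(dst, "r+b") as f:
        f.seek(byte_offset)
        b = f.read(1)[0]
        f.seek(byte_offset)
        f.write(bytes([b ^ (1 << bit)]))

# Hypothetical file names: compare how the two formats degrade.
flip_one_bit("uncompressed.nef", "uncompressed_damaged.nef")
flip_one_bit("lossless_compressed.nef", "compressed_damaged.nef")
```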
Unlike with film, you can't go back and re-scan with digital if you want better quality. If your images are for personal use, it makes sense to save them with the highest available quality. If your images are used once and tossed (e.g., pro sports), and speed of transmittal is of the essence, then smaller (e.g., JPEG) files are prescribed, with several choices for "quality" (compression).
You probably can't see a difference between 12- and 14-bit files, or even 8-bit files. However, a 14-bit file has four times as many steps between white and black as a 12-bit file (and 64 times as many as an 8-bit file), which means it can be edited more severely before banding or posterization occurs. If you always get it "right" the first time, it probably doesn't matter which you choose.
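A small numpy sketch to make the "steps" point concrete: push a dark gradient up hard, as you might when rescuing shadows, and count how many distinct tones survive at each bit depth. Fewer surviving tones across the same brightness range is exactly what shows up as banding.

```python
import numpy as np

def surviving_levels(bits, gain=8.0):
    # A dark gradient occupying the bottom 1/8 of the tonal range,
    # pushed up hard in post (multiplied by `gain`).
    levels = 2 ** bits
    gradient = np.linspace(0, levels // 8, 4096)   # continuous scene
    captured = np.round(gradient)                  # quantized capture
    pushed = np.clip(captured * gain, 0, levels - 1)
    return len(np.unique(pushed))

for bits in (8, 12, 14):
    print(f"{bits}-bit: {surviving_levels(bits)} distinct tones after the push")
# Fewer distinct tones spread over the same brightness range = visible banding.
```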
Compression usually involves combining blocks of pixels with similar information, saving the common data once along with a descriptor or map explaining how to interpret it for each pixel. The problem with images is that the information contained in the pixels is nearly random, so assumptions must be made about what to keep and what to throw away. In a word, this constitutes "lossy" compression, as commonly represented by JPEG files.
Whether a file is 12 or 14 bits makes no difference in the uncompressed file size. Bit depth increases word size in increments of bytes, i.e., 8 bits, so a 12- or 14-bit sample occupies 16 bits, with some bits left empty. Lossless compression makes use of those empty bits to combine and save image data. Theoretically, the file-size reduction would be 25% or less; anything greater indicates that there are, in fact, losses.
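As a sanity check of the word-padding idea against the ~80 MB figure reported above (assuming the D810's 7360×4912 sensor and 14-bit samples padded to 16-bit words, which may not hold for every mode):

```python
pixels = 7360 * 4912          # D810 pixel count, ~36.2 MP (from the spec sheet)
bytes_per_pixel = 2           # one 14-bit sample padded to a 16-bit word
print(pixels * bytes_per_pixel / 1e6)   # ≈ 72.3 MB; plus metadata ≈ the ~80 MB seen
```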
Storage compression can in theory be "lossless"; however, the space saved comes at the expense of redundancy and check values. In other words, a data error cannot be recovered completely, unlike with uncompressed storage. In short, file compression is a bad choice for long-term storage. The next step is to save your images on a RAID array, which is redundant across two or more disk drives and can fully recover data if one of those drives fails.
I thought I could see a small difference looking hard at shadow detail, insignificant to be sure, and only when the two are side by side. And if I cared, I could lift the shadows in post without a second thought.
"Lossless" in English may mean without loss, but in a Nikon lab setting there may be a factor of loss that is considered "lossless," even though there is, in a lab setting, some amount of loss, however small. Maybe Shun knows? It would beg the question: Why have both lossless and uncompressed if they are identical? I'd bet that number would be guarded and inside info. Nikon would not want Canon or Sony to know what is considered "lossless" in NikonWorld.
It's not the size of individual files that concerns me (way too strong a word), but the amount of time it takes to transfer from card to hard drive to Lightroom and back: significantly longer, like go-watch-a-TV-show-while-you-wait longer. Storage is cheap but not infinite. Although what I noticed first was the size difference, what I really noticed was the time it took to transfer the files and then upload the correspondingly larger JPEGs to SmugMug and Google.
Looking at the big picture, though, I think the most telling thing, again for me, is that I loved the IQ of what I was using before (14-bit lossless compressed), and the question only arose when I saw the file size after changing the setting to 14-bit uncompressed. Though I know more about it all now, and it is interesting to know what others shoot, it turns out that, for me, "ignorance was bliss." :)
The "redundancy" in an uncompressed image is that there's a 1:1 relationship between bytes in the file and pixels in the image; if some bytes get corrupted, only those pixels are affected. The compressed formats make use of the similarities between pixels in an image to reduce the number of bytes needed to describe them; because the pixels are interdependent, if some get corrupted, likely many more will be. The less precise relationship between pixels due to noise is why lossless compressed files tend to get bigger as ISO increases.