The Firebox uses gzip to try to unpack (inflate) any file it thinks might be a compressed archive. Depending on your settings, it'll log the file and then allow, deny, lock, or in some cases quarantine it.
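For illustration only (this is a sketch of the general idea, not WatchGuard's actual implementation; the function names are made up), a content scanner of this kind typically sniffs the gzip magic bytes and then attempts to inflate the stream with zlib, treating a decode failure or a policy match as a reason to deny or lock the file:

    #include <zlib.h>
    #include <vector>

    // Rough sketch: does this buffer look like gzip, and does it inflate cleanly?
    bool looks_like_gzip(const std::vector<unsigned char>& data) {
        return data.size() >= 2 && data[0] == 0x1f && data[1] == 0x8b;  // gzip magic bytes
    }

    bool inflates_cleanly(const std::vector<unsigned char>& data) {
        z_stream strm{};                       // zero-initialised: zalloc/zfree/opaque are null
        // windowBits = 15 + 32 lets zlib auto-detect gzip or zlib headers
        if (inflateInit2(&strm, 15 + 32) != Z_OK) return false;
        std::vector<unsigned char> out(64 * 1024);
        strm.next_in  = const_cast<Bytef*>(data.data());
        strm.avail_in = static_cast<uInt>(data.size());
        int ret = Z_OK;
        while (ret == Z_OK) {
            strm.next_out  = out.data();       // discard output; we only care whether it decodes
            strm.avail_out = static_cast<uInt>(out.size());
            ret = inflate(&strm, Z_NO_FLUSH);
        }
        inflateEnd(&strm);
        return ret == Z_STREAM_END;            // anything else: truncated or corrupt stream
    }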
We currently have several surveys, created via Survey123 Connect, deployed to 200 users in our ArcGIS Online organization, and the surveys allow them to attach photos. Lately we've been getting complaints of photos coming out blurry, and after checking, it appears the photos are being reduced in resolution and compressed. This is despite the photo size setting in Connect being set to "Unrestricted".
We were using the annotate appearance on the image question so users could mark up and highlight features on the photos they took. As it turns out, photos taken with or uploaded to an annotate image question are heavily compressed, for reasons unknown to us. We have since updated our surveys to include an image question without the annotate appearance, and photos taken with that question retain their original, unreduced quality.
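For reference, a minimal XLSForm sketch of that workaround (the question names and labels are hypothetical; the relevant part is simply leaving the appearance column blank on the second image question):

    type     name             label                  appearance
    image    photo_markup     Annotated photo        annotate
    image    photo_original   Photo (full quality)

Photos attached through the plain image question come through at full resolution, while the annotate question can still be kept for markup.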
A great deal of effort has already gone into ways of reducing this scaling, with many interesting approaches being used. In refs. 18,19, constant complexity in S is achieved by concatenating two descriptors, one of which is element-agnostic and another in which the contributions from each element are weighted. This effectively amounts to embedding the elemental information into two dimensions, rather than keeping different elements entirely distinct. A similar strategy was used in refs. 20,21, except that there the element embeddings were optimised during model fitting, so that the final embeddings contained a data-driven measure of chemical similarity between the elements. Approaching the problem from an information-content perspective, the recent work of ref. 22 demonstrated that a model fitted to as few as 10% of the power spectrum components led to negligible degradation in force errors on an independent test set, suggesting that significant compression can be achieved. Similar results were seen in ref. 23, where descriptors were selected using CUR matrix decomposition (ref. 24); in ref. 25, where state-of-the-art model performance was achieved on the QM9 dataset with a heavily compressed descriptor obtained through repeated tensor products and reduction using Principal Component Analysis (PCA); and in ref. 26, where a data-driven approach to constructing an optimal radial basis was employed with great success.
In this work we introduce two non-data-driven approaches for compressing the SOAP power spectrum, which is available through the quippy (refs. 28,29), dscribe (ref. 30) and librascal (ref. 31) packages. Firstly, by considering the ability to recover the density expansion coefficients from the power spectrum, we introduce a compressed power spectrum and show that, under certain conditions, the original descriptor can be recovered from the compressed version. Secondly, we introduce a generalisation of the SOAP kernel which affords compression with respect to both S and the number of radial basis functions N used in the density expansion. This kernel retains a useful physical interpretation, and the ideas used are applicable to all body-ordered descriptors, which we demonstrate using the ACSFs (ref. 32). Finally, we evaluate the performance of the compressed descriptors across a variety of datasets using numerical tests which probe their information content, their sensitivity to small perturbations and the accuracy of fitted energy models and force fields.
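For orientation (this is the conventional SOAP notation from the literature cited above, not an equation quoted from this paper, so prefactors and index conventions may differ), the power spectrum components are built from the density expansion coefficients as

    $$ p^{Z_1 Z_2}_{n_1 n_2 l} \;\propto\; \sum_{m=-l}^{l} \left(c^{Z_1}_{n_1 l m}\right)^{*} c^{Z_2}_{n_2 l m}, $$

so the number of components grows roughly as $N^2 S^2 (l_{\max}+1)$ for N radial basis functions, S chemical species and maximum angular momentum $l_{\max}$, which is the scaling in both N and S that the compressions introduced here aim to reduce.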
The compressions outlined in Fig. 2 have been implemented in the Gaussian Approximation Potential fitting code gap_fit (ref. 48). A Jupyter notebook demonstrating how the WTPl-compressed power spectrum is computed, and how the original power spectrum can be recovered from it, is available at
My goal is to have a zlib-compressed file that I append to from C/C++ at different intervals (such as a log file). Due to buffer size constraints, I was hoping to avoid having to keep the entire file in memory when appending new items.
I ended up simply appending a delimiter to each section of data (e.g. @@delimiter@@). Once the finished file is ready to be read (by a different application), the reader seeks out these delimiters, builds an array of the compressed sections, and then decompresses each section individually.
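A minimal C++ sketch of that delimiter approach using zlib's compress2/uncompress (an illustration of the idea, not the poster's actual code; the function names, the 1 MiB chunk-size guess and the error handling are made up):

    #include <zlib.h>
    #include <fstream>
    #include <iterator>
    #include <stdexcept>
    #include <string>
    #include <vector>

    static const std::string kDelim = "@@delimiter@@";   // marker between compressed sections

    // Compress one chunk independently and append it, plus the delimiter, to the file.
    void append_chunk(const std::string& path, const std::string& data) {
        uLongf destLen = compressBound(data.size());
        std::vector<Bytef> buf(destLen);
        if (compress2(buf.data(), &destLen,
                      reinterpret_cast<const Bytef*>(data.data()),
                      data.size(), Z_BEST_SPEED) != Z_OK)
            throw std::runtime_error("compress2 failed");
        std::ofstream out(path, std::ios::binary | std::ios::app);
        out.write(reinterpret_cast<const char*>(buf.data()), destLen);
        out.write(kDelim.data(), kDelim.size());
    }

    // Read the whole file, split on the delimiter, and inflate each section.
    // maxChunk is an assumed upper bound on the largest uncompressed chunk.
    std::vector<std::string> read_chunks(const std::string& path, size_t maxChunk = 1 << 20) {
        std::ifstream in(path, std::ios::binary);
        std::string all((std::istreambuf_iterator<char>(in)),
                        std::istreambuf_iterator<char>());
        std::vector<std::string> chunks;
        size_t pos = 0;
        while (pos < all.size()) {
            size_t end = all.find(kDelim, pos);
            if (end == std::string::npos) break;          // ignore trailing partial data
            std::string plain(maxChunk, '\0');
            uLongf plainLen = plain.size();
            if (uncompress(reinterpret_cast<Bytef*>(&plain[0]), &plainLen,
                           reinterpret_cast<const Bytef*>(all.data() + pos),
                           end - pos) == Z_OK)
                chunks.push_back(plain.substr(0, plainLen));
            pos = end + kDelim.size();
        }
        return chunks;
    }

One caveat with a raw delimiter: the deflated bytes are binary, so the marker could in principle appear inside a compressed section. Prefixing each chunk with its compressed length, or writing concatenated gzip members (which a reader can walk with inflateReset), avoids that ambiguity.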
I am having a similar issue where GTmetrix says some images can be compressed by up to 95% (and I can confirm they can using …). In fact, changing the quality does almost nothing: the quality-0 version looks like nothing but grey artifacts and should be 2 bytes, yet the file size doesn't change.
Since about five years ago (starting with the D2H), Nikon NEF compression has been done in dedicated hardware, so there is no apparent speed penalty. Obviously, if your RAW files are compressed, they need to be decompressed before actual RAW processing, but the time difference is minimal.
While compression does save some room on the media card, there was a significant penalty in processing speed later. Viewing compressed NEFs on a WinXP P4 took almost twice as long as viewing uncompressed NEFs, and these were from a 4 MP camera. When I tried to view compressed NEFs on an 850 MHz PIII, it either took up to a minute per file or locked up the PC.
Stephen, that's completely untrue; on the vast majority of Nikon DSLRs sold, compressed NEF is the only raw option. ACR, Capture One, etc. have zero issues reading the raw files of the D70, D50, D40 and D80, which are all compressed NEF (though using a different compression scheme).
Bogdan - I would assume that the lossless compressed NEF uses a very straightforward compression scheme. Compared to the Bayer demosaicing algorithms that any future software will need in order to read D3 NEF files in twenty years, the compression is a walk in the park. There have been zero reports of shooting-speed decreases, because it's likely implemented fully in hardware, and your CF card write speed will increase because there's less data being written to the card. Shoot losslessly compressed and don't worry about it.
Secondly, you might be confusing standard NEFs, for which it's safe to assume some form of compression is used, with what Nikon calls "compressed" NEFs. (Otherwise the 6 MB NEFs from my D2H would be the same size as in-camera TIFFs which, if I'm recalling correctly, are closer to 11 MB.) However, Nikon simply states on pg. 41 of the D2H manual that "NEF images are not compressed." This is to distinguish standard NEFs from compressed NEFs, which Nikon claims use a lossless algorithm.
This issue has been discussed many times before. Folks who have studied both say there is, in fact, some minor loss of detail in the highlights with compressed NEFs. This may be considered insignificant for most purposes.
Considering that two of us have specifically described problems with NEFs from certain models under specific circumstances, the better advice is to try both and evaluate for oneself, not to "shoot losslessly compressed and don't worry about it."
In any case, the bottom line is a saving of about 8.4 MB per file using lossless versus uncompressed when shooting 14-bit. If there is no speed penalty in-camera and/or in post, then of course lossless makes sense. But if that were the case, why would Nikon even bother offering uncompressed? There must be a catch somewhere, no?
There could be a speed penalty or a speed gain when shooting. If the bottleneck is card write speed, you would gain speed shooting compressed; if, however, the bottleneck is processing power, you would lose speed shooting compressed raw.
I looked at the D3 and D300 manuals, and it looks as if these are the first Nikon DSLRs to offer "lossless compression". Before that it was just compressed vs. uncompressed, with compressed being slightly inferior, depending on what you were shooting, how large you were printing, etc., but inferior nevertheless. This now makes sense, though my question still remains: if lossless compression is the new thing, why even bother to offer uncompressed? Even the fastest CF cards and computers will have to take a hit somewhere, or maybe there are issues with third-party RAW converters. Hmm..
Is the time spent by the camera doing "lossless compression" equal to or greater than the time it takes to write an uncompressed file directly to the CF card? If it's equal, it's a wash. If greater, then you are taking a performance hit in-camera.
Since this is new to the D300 and D3, it remains to be seen. It's hard to do scientific tests, but it seems that uncompressed may be the safer bet until there are proven tests. Wish Nikon wasn't so vague about this!
I didn't use a stopwatch, but the differences between the three NEF recording modes were very slight. It felt like compressed NEF was the fastest, non-lossy compression was a fraction of a second slower, and, as Nikon states, non-compressed NEF was maybe two ticks longer than non-lossy mode. I was judging the duration from when the shutter closed to when the green "write" light went out.
It's true that compressed formats would reduce the number of disks needed and also the transfer times. However, I am annoyed by delays in the editing process, where I go through many images quickly and need to stay focused (loading ... saving ... ugh). With the copying and burning processes I can be doing other things in the meantime.
Bernard, that's quite good compression. I have some LZW-compressed TIFFs, 48-bit RGB raw scans from a Minolta Scan Dual II, that shrank about 20%. I've had no problems opening or using them so far, knock on wood. They were created with VueScan.
I finally opted for uncompressed TIFFs for my current scanning project. I'm getting a whole 20 images per DVD and will likely need between 80 and 100 discs before I'm done. Plus, I'm making two copies. A few years down the road, I'm sure the "storage issue" will become trivial. Something to keep in mind?
(In running a print shop, time is money. The LZW slow-down is unacceptable much of the time, and compression is only used where warranted, such as for the final signage file. Once compressed, an LZW or Group 4 file cannot be compressed further by zipping.)