By enabling drive compression, you save space on your hard drive, but the benefit is not without cost. Compression uses processing power (CPU): every time you access a file, it has to be read and decompressed before it can be worked with, and every file you save or edit has to be compressed again.

Compression can have a positive effect on a computer with an older, slower disk. The CPU may have enough horsepower that decompressing and compressing on read/write is faster than letting the drive read the raw data. It has definitely sped up my five-year-old laptop.


Only add NTFS compression to files and folders where you mostly read the data. DON'T ENABLE it for folders where you write data very often. With information being compressed on the fly, you consume more of an SSD's available write cycles than if you were writing the files uncompressed, which could have negative implications for the drive's endurance.
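
In Windows, per-folder NTFS compression can be toggled from the command line with the built-in compact tool; the folder paths below are purely illustrative:

    :: compress a read-mostly folder and everything under it
    compact /c /s:"D:\ReadMostlyData"
    :: reverse that for a folder that sees frequent writes
    compact /u /s:"D:\FrequentWrites"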


I don't think that compression would be all that bad on an SSD. Your operating system uses a temporary holding area for data that it needs to fetch when applications are loaded; compressed files are decompressed and loaded into this holding area. In Windows, this file is typically found on the root of the drive (C:, presumably) and is called pagefile.sys. All major operating systems do this; Unix uses a special partition called the swap partition (for the iOS and Linux fans).


When your computer runs low on hard drive space, Windows adjusts how much data this file can hold, which hinders performance when the file has to shrink because of limited resources. This is probably why hlintrup said he noticed a performance increase: he was probably out of space (hence the decision to compress the file system). When he turned on file compression, it freed up more room, so the swap file could grow to an optimal size and applications could be cached again.


(Notes: (1) The previous post addresses potential path length issues arising from copying DATA into such folders. Among other things, for my purposes, it appeared I was best advised to keep paths under 240 characters; Path Length Checker could help with that. (2) With a few ext4 exceptions, I formatted my drives as NTFS. (3) Linux might ignore Windows files whose names contained unorthodox characters. An easy way to identify such files was to search for regex:[^\x00-\x7F]+ in Everything. See also another post.)
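
On the Linux side, a rough equivalent of that Everything search (assuming GNU grep with PCRE support; the path is illustrative) would be:

    # list file paths containing any non-ASCII character
    find /path/to/DATA -print | grep -P '[^\x00-\x7F]'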


Once I copied DATA into a folder on H: (e.g., H:\2020-01-01), the next step was to deduplicate it using Borg commands in Linux. The next section of this post (below) details those commands. That process would generate Borg output archive files, known as segments (see previous post). By default, with few exceptions, each full Borg segment was about 500 MiB in size.
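
As a preview, a minimal sketch of that step might look like the following, assuming H: is mounted at /mnt/H and the repository (named BorgRepoApr, as explained below) lives on the BORG_APR drive; the full commands appear in the next section:

    # one-time repository setup (the encryption mode here is a choice, not a recommendation)
    borg init --encryption=none /mnt/BORG_APR/BorgRepoApr
    # deduplicate one dated DATA copy into an archive of the same name
    borg create --stats --progress \
        /mnt/BORG_APR/BorgRepoApr::2020-01-01 /mnt/H/2020-01-01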


I put those Borg segments on a drive named BORG_APR. (When I was examining the BORG drives in Windows, I mounted them as drive B.) That name indicated that I intended to update that drive with new archives each April and October. There was also a BORG_JAN drive, to be updated each January and July. Each of these two drives also had backups (e.g., BORG_APR_BUP).


That state of affairs made life harder for people who might want to back up their Borg segments by burning them onto BD-R. To organize that backup process, I created a temporary folder with a name like BORG_APR_Burning on another drive, UTILITY (U:), with subfolders like U:\BORG_APR_Burning\2020-01-01. From BORG_APR, I made copies of the relevant Borg segments onto U: for each such folder. (Again, I am using Windows drive references for simplicity. I could instead conduct these and other file operations in Linux.)
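
One hedged way to script that copying step in Linux: touch a marker file after finalizing each archive, then copy only the segments created since the previous marker. The paths and the marker scheme are my own assumptions, not Borg features:

    cd /mnt/BORG_APR/BorgRepoApr
    mkdir -p /mnt/UTILITY/BORG_APR_Burning/2020-01-01
    # copy every segment newer than the marker, preserving the data/x/y layout
    # (touch the marker once, before the first archive, to initialize it)
    find data -type f -newer /tmp/last_burned.marker \
        -exec cp --parents {} /mnt/UTILITY/BORG_APR_Burning/2020-01-01/ \;
    touch /tmp/last_burned.marker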


As that example suggests, Borg organized its segments not by archive (e.g., 2020-01-01) but by repository; here, the repository was named BorgRepoApr. Along with its data subfolder, the repository would contain a handful of top-level files. For example, a top-level file of interest in the previous post would be visible (in Windows) as B:\BorgRepoApr\config.


Those top-level files were important. The names (more precisely, the extensions) of several would change after the addition of each new archive to the repository. For instance, if Borg captured H:\2020-01-01 in segments starting at B:\BorgRepoApr\data\0\2 and ending at B:\BorgRepoApr\data\1\1453, then there would be files named B:\BorgRepoApr\hints.1453, index.1453, and integrity.1453. (As the previous post indicates, these top-level files were not the only sources of information about the contents of Borg archives. Much of that information was stored on the operating system drive, and would be reconstructed if I ran Borg from a different drive.)
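
To make the layout concrete, a repository finalized at segment 1453 might look roughly like this (an illustrative sketch; the exact top-level files vary with Borg version and encryption mode):

    BorgRepoApr/
        README
        config
        hints.1453
        index.1453
        integrity.1453
        data/
            0/
                2
                3
                ...
            1/
                ...
                1453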


Those top-level files were the reason why I had to abandon my previous effort and start over with this new blog post, capturing a new attempt to use Borg for backup onto BD-R. What I needed to do was save those top-level files when I finished with each archive along the way. They would prove essential when it came time to restore from the backup.
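
In script form, that step might look like this minimal sketch, where the destination folder is my assumption based on the burning layout described above:

    # snapshot the top-level files immediately after finalizing an archive
    mkdir -p /mnt/UTILITY/BORG_APR_Burning/2020-01-01/toplevel
    cp /mnt/BORG_APR/BorgRepoApr/config \
       /mnt/BORG_APR/BorgRepoApr/hints.* \
       /mnt/BORG_APR/BorgRepoApr/index.* \
       /mnt/BORG_APR/BorgRepoApr/integrity.* \
       /mnt/UTILITY/BORG_APR_Burning/2020-01-01/toplevel/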


In one case, I failed to save those top-level files. As I found in a long struggle, they could not be reliably reconstructed after the fact, and the current and subsequent archives would be useless without them. (To emphasize, most of these concerns were specific to the Blu-ray context. The Borg backup repository on the BORG_APR HDD remained functional; it sailed through the ordeal of the previous post with no worries.)


In other words, the time to set aside a copy of these Borg files for burning onto Blu-ray disc was when I had finished whatever checking, compaction, or other processes seemed advisable to finalize a specific archive. Borg itself was not designed for finalizing archives. To the contrary, Borg was designed for backup administrators who might be constantly revising their archives to fit ongoing needs. For instance, there might be a corporate decision to expunge all archival references to a certain product, person, or event. The last thing these people would want, in that kind of situation, would be a BD-R archive that attempted to carve history into digital stone. The previous post provides three installments (1 2 3) in a longer story about how and why one might deliberately remove material from a Borg archive.


As discussed in the previous post, Borg did have an append-only mode. Unfortunately, based on various user comments, it did not seem suited to my purposes. My best course seemed instead to be to run borg check and then borg compact after Borging each archive.
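
Both are plain repository-level commands in Borg 1.2; only the mount point here is an assumption:

    # run after each new archive, before setting files aside for burning
    borg check /mnt/BORG_APR/BorgRepoApr
    borg compact /mnt/BORG_APR/BorgRepoApr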


The compact command, especially, would apply to the whole repository, not just to one archive. It seemed best to let it change whatever it was going to change within the current archive, so that it would not make further changes to prior archives when I ran it again later, after adding future archives to the repository.


The concern there was that Borg looked back at prior archives when processing new incoming material. This seemed to occur in two ways. For deduplication, Borg looked at previous segments to decide whether it needed to store a full copy of incoming material or could instead store just a pointer to an already existing copy of that material. And for administration, Borg looked at the latest versions of those top-level files (e.g., hints.1453).


Suppose, for illustration, that running borg check and borg compact after the next archive bumped the repository's top-level files from .1453 to .1461 extensions. A Blu-ray backup made without those updated files would then not mesh with the repository: the repository on BORG_APR would know that it now ended with a hints.1461 file, whereas the BD-R backup would have no hints.1461 file; it would still be thinking in terms of hints.1453. So if I tried to restore that next archive from BD-R, maybe everything would work OK up through that preceding archive, but the latest one would fail to compute: it would have been created against a repository state that the .1453 top-level files did not reflect, and perhaps also with references to segment files that borg compact had subsequently deleted.
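
When the time came to test a restore, a minimal verification sketch might look like this (the mount point and archive name are assumptions; borg check and the --dry-run flag are standard Borg commands):

    # reassemble the BD-R contents into one folder first, then verify
    borg check /mnt/restore/BorgRepoApr
    # confirm the archive is readable without writing anything to disk
    borg extract --dry-run /mnt/restore/BorgRepoApr::2020-01-01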


It was possible that other Borg commands lurked that I had not needed so far, but that could have comparably disruptive effects on my BD-R backups in the future. My experience so far suggested that, for my purposes, I only needed to worry about borg check and borg compact. These were the only commands I had needed to run that updated hints.xxxx and the other top-level files as just described.
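
One low-tech way to watch for such commands would be to snapshot the top-level file names before and after running anything new, and compare (a hypothetical sketch; the paths are assumptions):

    ls -1 /mnt/BORG_APR/BorgRepoApr | grep -E '^(hints|index|integrity)\.' > /tmp/before.txt
    borg compact /mnt/BORG_APR/BorgRepoApr    # or whatever command is under suspicion
    ls -1 /mnt/BORG_APR/BorgRepoApr | grep -E '^(hints|index|integrity)\.' > /tmp/after.txt
    diff /tmp/before.txt /tmp/after.txt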


That concludes this introduction to the arrangement of drives and folders that I encountered while using Borg to create a deduplicated backup that I could store on BD-R. The following sections add some wrinkles and provide more detailed, real-world applications of the foregoing remarks.


Postscript: when I finished writing this section, it occurred to me that I did not need to rush into the project of redoing most of the archives on the BORG_APR drive and/or reburning their BD-R backups. Possibly, with additional experience, I would find a way to use those materials, so as to avoid the ordeal of a full do-over. Therefore, the remainder of this post focuses on the BORG_JAN drive, setting aside the BORG_APR drive for the time being.


The preceding post began in February 2022, with efforts to use Borg to create a backup onto a drive whose name evolved to BORG_JAN. That post continued with a recounting of certain mistakes and experiments, culminating in the impression that I would probably want to start over with that drive, at some point. Then, as summarized above, I made a start (and more mistakes) with a separate drive, named BORG_APR, and with its accompanying Blu-ray backups.


Now the calendar had moved on. It was July 2022. The concept was that I would update these drives semiannually. I had already made a new copy of DATA onto HIST_ARCHIVE in a folder named 2022-07-01. Now it was time to wipe BORG_JAN and start over, with a set of archives ending with that 2022-07-01 installment.
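
Put together, the rebuild amounted to a loop over the dated folders, finalizing after each one. Here is a hypothetical sketch, with mount points and repository name assumed:

    # fresh start: initialize the wiped repository
    borg init --encryption=none /mnt/BORG_JAN/BorgRepoJan
    # archive each dated DATA copy in order, ending with 2022-07-01
    for d in /mnt/HIST_ARCHIVE/20??-??-??; do
        borg create --stats /mnt/BORG_JAN/BorgRepoJan::"$(basename "$d")" "$d"
        borg check /mnt/BORG_JAN/BorgRepoJan
        borg compact /mnt/BORG_JAN/BorgRepoJan
        # set aside config, hints.*, index.*, integrity.* here, per the
        # finalization step described earlier
    done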


Aside from the instances when I connected NTFS drives to my faster Windows 10 desktop computer, this Borg work took place either on an Acer Aspire 5 A515-51-563W with an Intel Core i5-7200U CPU and 20GB DDR3 RAM, or on an old (2012) Lenovo E430 ThinkPad laptop with a dual-core Intel i3-2350M CPU and 8GB RAM. (Monitors intended to ensure that the Lenovo was up to the task proved informative but unnecessary; its internal speed restrictors seemed to keep it cool.) Windows 10 21H2 was installed on the Acer and ran from a Windows To Go (WTG) USB drive on the Lenovo. My Linux on both laptops was Ubuntu 22.04 LTS, running from USB. I was using Borg 1.2.
