Download Backup From Google Drive


Argelia Fernandez

Jul 22, 2024, 8:08:05 AM
to fuzcariba

Time Machine automatically makes hourly backups for the past 24 hours, daily backups for the past month, and weekly backups for all previous months. The oldest backups are deleted when your backup disk is full.

The first backup might take longer than you expect, but you can continue using your Mac while a backup is underway. Time Machine backs up only the files that changed since the previous backup, so future backups will be faster.

I'm assuming that none of that stuff is necessary as long as I'm just rsyncing from a computer to a FireWire-connected external drive. Am I wrong in assuming that? Are things really going to be more complicated than that innocuous command?

Rsync works fine across local drives. However, if it detects local paths it automatically goes into --whole-file mode, which does not transfer diffs but simply copies the source file over the destination file. Rsync will still skip files that haven't changed at all, though. When bandwidth between the source and destination is high (as with two local disks), this is much faster than reading both files and then copying just the changed bits.

However, if one or both drives happen to be NTFS-formatted and are accessed from *nix, or even from within Windows using MobaXterm/Cygwin, then rsync's incremental functionality won't work well with rsync -a (the archive flag), since NTFS cannot store the ownership and permission metadata that -a tries to preserve.

One thing you might want to consider when using rsync however is to make use of the --link-dest option. This lets you keep multiple backups, but use hard links for any unchanged files, effectively making all backups take the space of an incremental. An example use would be:

Your command as written should work; however, you might want to look at a program called rsnapshot, which is built on top of rsync and keeps multiple versions of files so you can go back and look at things as they were last week or last month. The configuration is pretty easy, and it is really good at space optimization, so unless you have a lot of churn it doesn't take up much more space than a single backup.
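For reference, a minimal rsnapshot configuration might look like the following (the paths and retention counts are made up for illustration, and note that rsnapshot requires tabs, not spaces, between fields):

```
snapshot_root	/mnt/backup/snapshots/
retain	daily	7
retain	weekly	4
backup	/home/	localhost/
backup	/etc/	localhost/
```

Running "rsnapshot daily" from cron then rotates daily.0 through daily.6, with unchanged files hard-linked between snapshots.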

Finally I have ended up with 'backup2l - low-maintenance backup/restore tool', it's easy. I like the way it manages planning and rotation (in levels). I run it whenever I have my USB external drive attached from the command line but you can also automate it.

Try dirvish to do the backup.
It uses the hard links from rsync in so-called vaults. You can keep as many of your older dumps as the USB disk can hold, or set it up in an automated way.

Once you understand the idea of dirvish, it is more convenient to use than rsync with all its options by itself.

I do not use rsync with local drives, but rsync is wonderful for sync, cloning, backup, and restore of data between networked Linux systems. It is a fantastic network-enabled Linux tool worth spending time learning. Learn how to use rsync with hard links (--link-dest=) and life will be good.

In the name of speed, rsync, in my experience, automatically changes many operational parameters when it believes it has detected two local drives, which makes matters worse. What is considered "local" from rsync's perspective is sometimes not really local at all: for example, rsync sees a mounted SMB network share as a local drive. One can argue, correctly, that for the rsync program instance the drives are all "local", but this misses the point.

The point is that scripts that operate as expected between a local and a remote drive do not work as expected when the same script is used where rsync sees both data paths as local drives. Many rsync options seem to change or stop working as expected when all drives are local. File updates can slow to a crawl when one of the "local" drives is a networked SMB share or a large, slower USB drive.

For example, with "cwrsync -av /local/files/ /mountedSMBshare/files" and no -c (checksum) option, where the need for transfer should be determined by file size and date, with all-local drives I see whole files copied between source and destination even when the files have not changed. This is not helpful behavior when one "drive" is a slower SMB networked share and the other a slow NTFS USB drive. An ssh into the SMB share's server would be much better, but this is not always possible, and Windows, much hated, is part of everyday commercial life.

I would have preferred that rsync's operation were consistent regardless of the drives' "location", with a command-line option for the user to explicitly invoke "local" operation when a speed advantage is available and helpful. In my humble opinion this would make rsync more consistent, easier to use, and more functional.

I'm a new Linux user. I've reinstalled my Wubi from scratch at least ten times the last few weeks because while getting the system up and running (drivers, resolution, etc.) I've broken something (X, grub, unknowns) and I can't get it back to work. Especially for a newbie like me, it's easier (and much faster) to just reinstall the whole shebang than try to troubleshoot several layers of failed "fixing" attempts.

Coming from Windows, I expect that there is some "disk image" utility that I can run to make a snapshot of my Linux install (and of the boot partition!!) before I meddle with stuff. Then, after I've foobar'ed my machine, I would somehow restore my machine back to that working snapshot.

All references to the file system and hard disks are located locally on the virtual /dev/ filesystem. There are a multitude of "nodes" in /dev/ that are interfaces to almost all the devices on your computer. For example, /dev/hda or /dev/sda would refer to the first hard drive in your system (hda vs. sda depends on the drive's interface and driver), and /dev/sda1 would refer to the first partition on that drive.
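A quick way to see these nodes on your own system (output varies by machine; lsblk is part of util-linux and may not be installed everywhere):

```shell
# List block-device nodes; sd*/hd* names only exist if such drives are present
ls -l /dev/sd* /dev/hd* 2>/dev/null || true
# A friendlier tree of disks and their partitions, where available
lsblk 2>/dev/null || true
```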

The most straightforward way to make a raw image of your partitions is to use dd to dump the entire partition to a single file (remember, the OS accesses the partition /dev/sda1 through a file interface). Make sure you are on a larger partition or on a secondary drive and perform the following commands:

dd if=/dev/hda1 of=./part1.image    # to back up (repeat for each partition)
dd if=./part1.image of=/dev/hda1    # to restore

When you back up /dev/hda1, the partition should be unmounted (or mounted read-only) to avoid potential corruption.

There is one limitation though, when restoring the backup: The partition needs to be the same size (or bigger) as the partition you took the image from, so this limits your options in case of a restore. However, you can always expand the partition after you've restored the backup using gparted or parted. The picture gets even muddier when you are trying to restore entire disk copies. However, if you are restoring the backup to the same exact hard drive, you don't need to worry about this at all.

Optionally, in order to minimize the space taken by the saved image, a partition can be first shrunk (from end, that is from right) so that it would not include the empty space. Here is a post on that: create partition backup image no larger than its files.

Backup with dd
The following example will create a drive image of /dev/sda, the image will be backed up to an external drive, and compressed. For example, one may use bzip2 for maximum compression:
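A sketch of that pipeline, using a scratch file in place of the real /dev/sda so it can be run harmlessly; substitute the actual device node and a backup path on the external drive:

```shell
# Stand-in "drive": a 2 MiB file of random data (would be /dev/sda)
dd if=/dev/urandom of=fake_sda.img bs=1M count=2 2>/dev/null
# Back up: stream the raw image through bzip2 onto the backup disk
dd if=fake_sda.img bs=64K 2>/dev/null | bzip2 -9 > sda.img.bz2
# Restore: reverse if and of, decompressing on the way back
bunzip2 -c sda.img.bz2 | dd of=fake_sda_restored.img bs=64K 2>/dev/null
# Verify the round trip
cmp fake_sda.img fake_sda_restored.img
```

On a real drive the same backup step would read from /dev/sda and write the .bz2 file to the mounted external disk; restoring from a live environment reverses the direction exactly as shown.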

Restoring a drive image
To restore a drive image, one will want to boot into a live environment. Restoration is quite simple, and really just involves reversing the if and of values. This will tell dd to overwrite the drive with the data that is stored in the file. Ensure the image file isn't stored on the drive you're restoring to. If you do this, eventually during the operation dd will overwrite the image file, corrupting it and your drive.

However, you might be interested in using Clonezilla if you have an external USB hard disk drive or a NAS. You just have to download an ISO image from the Clonezilla download page and burn it with "Brasero". Boot from the Clonezilla Live CD and perform a backup (disk or partition to image) of your main hard disk drive (with your healthy Ubuntu). Please note that you can't back up the partition you have mounted as the backup destination (quite logical). If your system is broken, you just have to boot again with the Clonezilla Live CD and perform a restore of your system. Don't forget that Clonezilla makes snapshots, so if you have your data ("/home", "/etc", ...) on the same disk/partition as the Ubuntu system, you'll get back the one from the backup and lose whatever has been done since that backup was performed.

You can also use "Back In Time" (backintime-gnome, available from Ubuntu Software Center) or other tools (Déjà Dup, ...) alongside it to get a backup of your data. You just have to include "/home", "/etc", "/var", "/usr/local", and so on in the backup profile. That way you can get back your healthy system with Clonezilla and then your latest data with "Back In Time" or the like.

I have an ancient Duo V1 with the original Sparc processor, which is limited to 2TB drives internally. It just keeps chugging along, and works great except for the 2TB capacity limit. I found out about that the hard way when I tried to install 3TB drives after getting close to filling up the 1 TB drives I had in it. I dashed out & bought a couple 2TB drives, but my existing USB backup drives are all still 1TB.

I also have a ReadyNAS 312 with 3TB drives I'm in the process of setting up. I've got several 3TB USB drives to rotate for backups (including the 3TB drives I couldn't install in the Duo...). It would be nice if I could use the same 3TB USB drives to also back up my V1 Duo. What I don't know is if the Duo will have a nervous breakdown trying to deal with a USB drive larger than 2TB. If not, then I have to either retire the Duo, buy some more 2TB backup drives, or backup over the network, which will tend to bog things down.

I don't know why you are thinking that. The Duo's USB speeds are extremely slow; it is actually much faster to back it up over the network. That also eliminates the issue with the USB drive size limitation.
