Not directly. When you set the backup in Menu > Settings > General > Backup, it takes the time at which you change the setting and click OK as the time to run the backup. So if it is 5.17pm your time and you go into settings and set the backup to run daily, it will run at about 5.17pm every day thereafter.
Using version 9.0.1708 Free - While I have had no problems with auto-backup actually working, or retrieving a saved back-up, I do find that while eClient carries out a back-up it switches focus to the main screen on Windows 10, even when I am in the middle of gaming; the game then continues unseen in the background, causing a variety of problems. Can this be stopped (other than by turning off back-up)? Thanks.
Apparently this will only happen every 2-3 months when Windows updates are installed, and will be completed by normal staff members (not CV administrators), so they need access to Commvault and instructions on how to perform a manual backup.
We opened a CMR to introduce support, in conjunction with server plans, for creating on-demand backups with the ability to specify the retention for the recovery point to be created.
While thinking about it: this could also be achieved if they enhanced blackout windows to allow on-demand backups during a blackout period.
All future things, but for now I think the solution presented by @MichaelCapon is the most logical way of doing it that I can think of, and it only requires you to hook the subclient/VM group onto the storage policy related to the plan, although I'm not sure if this possibility will remain in the future.
I have a number of policies which are scheduled to run (on a calendar basis) each day. I ran a manual backup of these policies yesterday, which completed successfully, but then the scheduled backups did not start last night!
I was under the impression that a manual backup did not affect the scheduled attempts, so even though the manual backup ran, the scheduled backup should still have started as per the schedule. Is that the case?
At one extreme, you might want to back up absolutely everything, so that in the case of a disk failure you can restore to precisely where you were: every app, file, setting, configuration, &c. This isn't really practical to do manually. (It'd need root access, knowing exactly which vm/cache files to exclude, what to sync first, &c.) Time Machine can create incremental backups, and tools such as Carbon Copy Cloner and SuperDuper! can create clones of your drive(s).
At the other extreme, you might only want to back up your most irreplaceable files (e.g. documents, photos). This should be easiest, but will of course require the most work in the case of a failure (re-downloading and reinstalling apps, re-setting up all the config, &c). For this, you should know where the files are; it's easy to copy them by dragging them to another drive.
Also bear in mind that apps, drivers, and anything else that's not plain data will need care to preserve user, group, permissions, ACLs, &c; most external filesystems won't fully support all of those. And you may need root access to back up and/or restore some things. All in all, it's much easier to leave it to a dedicated program!
Finally, this is a good opportunity to remind everyone of the importance of backups. There's no single correct strategy; everyone's needs are different. But please think about what would happen if a disk died, and do as much or as little as you need to prevent that becoming a disaster.
The items in the /Applications folder can always be re-downloaded, but it won't hurt to back them up too. Pay special attention to apps that can't easily be obtained by downloading from the Mac App Store or from their website, such as an older version of an app, an app no longer available from the developer/publisher, or a self-built app.
I manually installed a Kubernetes cluster of 3 nodes (1 master, 2 workers). Now I want to upgrade the k8s version (say, from 1.7 to 1.11). As the gap is large, the preferred method would be to forcefully reinstall all the required packages. Is there a better way to do this? If yes, could you please tell me how?
Such a large version gap is not tested (for example, a 1.11 master with 1.7 nodes). And usually each release has some action-required items, for example alpha features that change the on-disk format for some resources, or YAML fields, etc.
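If the cluster was set up with kubeadm, the supported path is to upgrade one minor version at a time rather than jumping straight from 1.7 to 1.11. Below is a rough sketch of a single step; the version numbers, package pins, and node name are illustrative only, `kubeadm upgrade` exists only from 1.8 onward, and for a gap this large, rebuilding the cluster and redeploying workloads is often less work than four chained upgrades.

```shell
# On the master: upgrade the kubeadm binary to the next minor version
# first, then let it upgrade the control plane. Repeat this whole step
# per minor version (1.7 -> 1.8 -> 1.9 -> 1.10 -> 1.11); never skip minors.
apt-get update && apt-get install -y kubeadm=1.8.15-00
kubeadm upgrade plan            # lists the target versions available
kubeadm upgrade apply v1.8.15

# On each node: drain it, upgrade the kubelet, then put it back in service.
kubectl drain <node-name> --ignore-daemonsets
apt-get install -y kubelet=1.8.15-00
systemctl restart kubelet
kubectl uncordon <node-name>
```

Check each release's notes for the action-required items mentioned above before every step.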
Just a little question: I had to do a manual backup recently and it took quite a long time to complete. I've noticed that the backup that occurs before syncing the iPhone (a 4S on iOS 8.1 in my case) with iTunes (latest version) doesn't take that long, and I would like to know why.
There is normally one rolling backup for each device. The initial backup will take some time. Subsequent backups will update the backup with recent changes. Automatic backups take place with the first sync on each calendar day. Media and apps are not included in the backups.
As long as the apps and media are already in your library they can be restored. I made the distinction just in case you, or someone else reading the thread, is connecting to a new library/machine. Apps and iTunes Store purchases may be transferred into the current library but it is worth being aware that purchases and media from other sources are not actually backed up by the iOS backup process.
Look under Edit > Preferences > Devices. There is normally only one backup for each device. If/when you restore a backup to a device the current version of it is used for the restore, the backup set is archived with the date of the restore, and a new rolling backup for that device is created. Whether you refresh the backup manually or via the first sync of the day, the overall size of the backup won't vary much unless you have added lots of new data to the device that would be included. Note that if your device normally backs up to iCloud then it is only backed up to your computer if you instigate a backup manually.
I run a cron job that uses rsync to copy the backups every night, about an hour after the nightly backup job runs on my MiaB. That way, if I lose the MiaB data, I can restore to its state from the night prior to the failure.
For security, I do my backups in the other direction. Nothing reaches into the backup server, but the backup server can reach into my MiaB (via rsync over ssh) and extract a copy of the encrypted backups folder. That way my storage is not visible to the external world (touch wood).
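A sketch of that pull model, run from the backup server. The hostname, key path, and destination directory are placeholders; `/home/user-data/backup/encrypted` is where MiaB keeps its encrypted duplicity backups by default, but verify the path on your own box.

```shell
#!/bin/sh
# Pull the encrypted backup folder FROM the MiaB host. The MiaB box
# holds no credentials for the backup server, so a compromise of the
# mail server cannot reach the backup copies.
rsync -avz --delete \
    -e "ssh -i /home/backup/.ssh/id_ed25519" \
    root@miab.example.com:/home/user-data/backup/encrypted/ \
    /srv/miab-backups/

# Schedule it nightly at 03:30 (an hour after MiaB's own backup job)
# with a crontab entry on the backup server, e.g.:
# 30 3 * * * /usr/local/bin/pull-miab-backup.sh
```

Because the folder being pulled is already encrypted by duplicity, the copy at rest on the backup server is protected too.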
Same here. For me I even had to install Samsung Switch on the phone for it, but after 2 days there is still no backup made and no way to force it. Maybe it will only do a backup after being idle for 2 hours, the same as the phone does with the Samsung account?
Unfortunately for me the watch has not backed up so far. Last night I tried sleep tracking for the first time, so I charged it to 100% before going to bed, and I charged the watch again in the morning after exercise.
That night no backup was made either. I have the same message as @emeles in the Wear backup setting. It's really strange: just 4 days ago I performed a manual backup with my Watch5 Pro, and I was even able to use that backup to transfer my watch settings and watch tiles from the Watch5 Pro to my new Watch6 Classic.
If I do a manual log backup of a DB (to clear the log, to enable shrinking the log file, etc.) and then copy that file to the secondary log shipping file folder, will the log restore job collect and restore this log file?
I know the log shipping job stamps the .trn files in a certain format (DBNAME_20120807190200), but I am unsure whether my ad-hoc backup will not get collected because of this, or whether the log shipping restore job looks at the file attributes rather than the timestamp in the file name.
If this can't be done, then log shipping would break if I did a manual T-SQL log backup, because the log shipping restore job could not restore the next log file in sequence even though the file does actually exist in the correct location.
For tackling transaction log space issues on databases configured for log shipping, there is no need to take a manual backup. Just start the log backup job and shrink the log file after the job completes. Often you need to take the log backup twice to release the space completely; no problem, run the job again.
Yes, that will solve your problem. Taking a manual backup will not 'break' log shipping. The log shipping restore job will fail because it will try to restore the next transaction log backup that was initiated by the log shipping process. However, if you copy the backup you took across to your secondary server and restore it manually, log shipping will continue quite happily.
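To make that concrete, here is a minimal T-SQL sketch; the database name and paths are hypothetical. The key point is that the manual restore on the secondary must use NORECOVERY so the log chain stays intact and subsequent shipped logs can still be applied in sequence:

```sql
-- On the primary: the ad-hoc log backup (the file name does not need to
-- match the DBNAME_yyyymmddhhmmss pattern the log shipping job uses).
BACKUP LOG [MyDB] TO DISK = N'\\secondary\LogShip\MyDB_manual.trn';

-- On the secondary: apply it by hand with NORECOVERY so the restore job
-- can pick up the next shipped log in sequence afterwards.
RESTORE LOG [MyDB] FROM DISK = N'\\secondary\LogShip\MyDB_manual.trn'
    WITH NORECOVERY;
```

If your secondary is in standby (read-only) mode rather than norecovery mode, adjust the restore options accordingly.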
OK, I have now published my backup plugin, with which I create my Joplin backups under Windows (but I hope it also runs on other operating systems).
The backups can be created manually or automatically by time interval.
What happens when you restore a single JEX archive into your active profile? Do all the notebooks in the JEX get recreated? What happens if existing notebooks in your profile share the same names as notebooks in the JEX?
The single jex export may not be suitable for some but my backups are essentially insurance against total loss or for when I scrub everything down (including the sync target) and start again in order to clear out too many "orphaned" files. Being able to dump everything back in in one go is a great benefit compared to doing it notebook by notebook.
For plugins there is currently no way to receive a close event. And even if there were, it could delay shutdown considerably (for me, a backup takes about 9 minutes).
The option to create a backup only when something has changed => added to the list, and I will check it.