Glad that worked for you. To add to the nature of compression, here are some extra details when it comes to performance.
I use aggressive Bareos compression on most of my clients because most are either over home connections (slow upload) or metered connections (AWS @ $0.10/GB). Also, my volume of data on these hosts is modest and I have CPU to spare. This works because Bareos compression is client side.
In my testing, if you use anything other than LZ4 the CPU will often become the bottleneck rather than your network, because the compressors in Bareos are single threaded and can't use multiple cores. You could work around this by running multiple parallel jobs, I think, but I never tested it.
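(If you did try the parallel-job workaround, the knob would be Maximum Concurrent Jobs on the Director/Client/Storage resources; the value below is just an example, not something I've tuned:)

  Director {
    Name = bareos-dir
    Maximum Concurrent Jobs = 4    # each concurrent job gets its own single-threaded compressor on the client
    # (other required directives omitted)
  }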
Tape drive compression is done in hardware and runs at the speed of the drive, so it is often much faster, but the compression is generally only around LZ4 quality.
So if you want to minimize upload bandwidth for performance or cost reasons, Bareos compression is a good option, trading CPU for bandwidth. If bandwidth isn't an issue, LZ4 or no compression is also an option. BTW, I would never turn tape drive compression off. Modern drives are smart and won't create any performance or space issues if you leave it on.
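For reference, client-side compression is just an option in the FileSet. A minimal sketch (name and path are made up; pick GZIP for smaller uploads or LZ4 if CPU is the bottleneck):

  FileSet {
    Name = "remote-client-fs"      # hypothetical name
    Include {
      Options {
        Signature = MD5
        Compression = GZIP6        # trade CPU for bandwidth; use LZ4 to go easier on the CPU
      }
      File = /home
    }
  }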
Bareos also has the autoxflate plugin, which lets you do some compression/decompression on the SD side:
https://docs.bareos.org/TasksAndConcepts/Plugins.html#autoxflate-sd
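My reading of the docs is that the setup looks roughly like this; treat it as untested (see below), not a known-good config:

  # bareos-sd: load the plugin in the Storage resource
  Storage {
    Name = bareos-sd
    Plugin Directory = /usr/lib/bareos/plugins
    Plugin Names = "autoxflate"
    # (other required directives omitted)
  }

  # then enable it per Device
  Device {
    Name = FileStorage
    Auto Deflate = out             # compress data written by this device
    Auto Deflate Algorithm = LZ4
    Auto Inflate = in              # decompress data read back in
    # (other required directives omitted)
  }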
I never got it to work, but I didn't try hard. I'm not worried about super fast local restore speed, decompression is way faster than compression anyway, and my restores are to systems over slower networks.
Many data types, like video and pictures, are already compressed and are often the bulk of your data. Bareos can't tell the difference unless you make filesets based on file type (sounds risky) and define a job for each; see the sketch below. Things like mailboxes, log files, and SQL dumps compress great.
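If you did want to split by file type, the FileSet lets you stack Options blocks so already-compressed extensions skip compression, something like this (untested sketch, made-up names):

  FileSet {
    Name = "mixed-data-fs"         # hypothetical name
    Include {
      Options {                    # first matching Options block wins
        WildFile = "*.jpg"
        WildFile = "*.mp4"
        WildFile = "*.gz"
        Signature = MD5            # checksum these, but don't recompress them
      }
      Options {
        Signature = MD5
        Compression = GZIP6        # everything else compresses well
      }
      File = /data
    }
  }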
At work I actually deal with this daily, as much of our genomics users' data is already compressed, but they will compress it again when tarring it into an archive. So I made a tool that manages using parallel compression and skipping over large files as part of the tar process. Our goal here was to archive multi-TByte data as quickly as possible.
https://github.com/brockpalen/archivetar/
On 36-core nodes we regularly get 1 GB/s+ compression with pigz using this tool.
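The parallel compression part is basically just streaming tar through pigz (paths and thread count here are only illustrative; archivetar adds the large-file skipping on top):

  # stream the tar through pigz, using 36 threads for compression
  tar -cf - /path/to/dataset | pigz -p 36 > dataset.tar.gz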
It mostly illustrates that compression can significantly slow things down, but often doesn't help with the bulk of your data if that data is already compressed or compresses poorly.
Compression performance comparison:
https://www.failureasaservice.com/2020/10/parallel-compressor-performance-for.html
Brock Palen
bro...@mlds-networks.com
www.mlds-networks.com
Websites, Linux, Hosting, Joomla, Consulting