Bareos Internet Backup Timeout


chri.s

May 2, 2013, 8:13:06 AM
to bareos...@googlegroups.com
Hi,

I was in search of an Internet backup solution (first via VPN, later SSL-secured). Now I have found Bacula, or even better, Bareos. ;-)

I set everything up as it should be, I think. My problem is the following: I want a daily (or nightly) backup of remote data, approximately 30-31 GB. The daily change is about 300 MB.

I did some small initial backups of about 10 MB and 3 GB, which ran as they should.
After this I wanted to do a full initial backup of all data, which led to a timeout error after 10 hours and ~11 GB of data transferred, once my internet connection was resynced.

My question now is how to deal with these timeouts. Is there a flag to set for an increased "hey Bareos, there could be a reconnect soon" time? So that I could set it to maybe 10 or 15 minutes and everything should be fine?

The second, less important question: is there another, maybe higher compression rate than GZIP3, which I am using at the moment?

Thanks for your help and of course time.

Chris

Philipp Storz

May 2, 2013, 8:42:27 AM
to bareos...@googlegroups.com
Hello Chris,


On 02.05.2013 14:13, chri.s wrote:
> Hi,
>
> I was in search of an Internet backup solution (first via VPN, later SSL-secured). Now I have found Bacula, or even better, Bareos. ;-)
>
> I set everything up as it should be, I think. My problem is the following: I want a daily (or nightly) backup of remote data, approximately 30-31 GB. The daily change is about 300 MB.
>
> I did some small initial backups of about 10 MB and 3 GB, which ran as they should.
> After this I wanted to do a full initial backup of all data, which led to a timeout error after 10 hours and ~11 GB of data transferred, once my internet connection was resynced.

If I understand you correctly, your internet connection is reset and you get a new IP address, right?

So it's not a timeout issue but a network connection issue.

This is not recoverable, as the complete network connection is lost.

> My question now is how to deal with these timeouts. Is there a flag to set for an increased "hey Bareos, there could be a reconnect soon" time? So that I could set it to maybe 10 or 15 minutes and everything should be fine?
>
> The second, less important question: is there another, maybe higher compression rate than GZIP3, which I am using at the moment?

Yes, you can use "GZIP9" to choose the best compression that gzip is capable of.

If your problem is not bandwidth but CPU, you can also try LZO compression.
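
For reference, a minimal sketch of how a compression option is set in a Bareos director FileSet (the resource name and path here are examples, not from this thread):

```
# bareos-dir.conf excerpt (name and path are examples)
FileSet {
  Name = "RemoteDataFS"
  Include {
    Options {
      Signature = MD5
      Compression = GZIP9   # or "Compression = LZO" if CPU, not bandwidth, is the bottleneck
    }
    File = /srv/data
  }
}
```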

Maybe you can split the data you want to back up into smaller chunks (smaller jobs with different
filesets) and reset your internet line when the first job is done.
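
Splitting could look roughly like this: two smaller filesets, each backed by its own job, so a dropped line only loses one part (all names and paths are hypothetical):

```
# bareos-dir.conf excerpt (hypothetical names and paths)
FileSet {
  Name = "RemoteData-Part1"
  Include {
    Options { Compression = GZIP6 }
    File = /srv/data/part1
  }
}

FileSet {
  Name = "RemoteData-Part2"
  Include {
    Options { Compression = GZIP6 }
    File = /srv/data/part2
  }
}
```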

> Thanks for your help and of course time.
>
> Chris
>

Best regards,

Philipp


--
Kind regards

Philipp Storz philip...@bareos.com
Bareos GmbH & Co. KG Phone: +49 163 32 777 92
http://www.bareos.com

Sitz der Gesellschaft: Köln | Amtsgericht Köln: HRA 29646
Geschäftsführer: Stephan Dühr, M. Außendorf,
J. Steffens, P. Storz, M. v. Wieringen

chri.s

May 2, 2013, 11:35:36 AM
to bareos...@googlegroups.com, philip...@bareos.com
Hello Philipp,

First, thank you for your help.
[I cut the quotes for a better reading experience.]

I understand your reasoning, but in my case, with VPNs, there should be no such problem, or am I wrong?

I mean, if I connect from backup host one (192.168.1.1/24) to client host two (192.168.2.1/24), there should be no difference if the global external address has changed, since the UTM is handling the external net-to-net communication?

Of course there could be a problem in the case of direct (secured) connections without any VPN, like masquerading. But the Bareos components should actually be able to re-resolve a possibly changed DNS name in case of a connection error.

But I will give GZIP9 compression a try; maybe (and I would hope so) that is already a good enough way to work around the problem.

Thank you,

Best regards,

Chris

Bruno Friedmann

May 3, 2013, 7:21:22 AM
to bareos...@googlegroups.com
On Thursday 02 May 2013 08.35:36 chri.s wrote:
> Hello Philipp,
>
> First, thank you for your help.
> [I cut the quotes for a better reading experience.]
>
> I understand your reasoning, but in my case, with VPNs, there should be
> no such problem, or am I wrong?
>
> I mean, if I connect from backup host one (192.168.1.1/24) to client host
> two (192.168.2.1/24), there should be no difference if the global external
> address has changed, since the UTM is handling the external net-to-net
> communication?
>
> Of course there could be a problem in the case of direct (secured)
> connections without any VPN, like masquerading. But the Bareos components
> should actually be able to re-resolve a possibly changed DNS name in case
> of a connection error.

The main thing is not the name: the TCP connections between the client and the
director, and also between client and storage, have been broken.

If you're using a VPN (especially OpenVPN over UDP), you should normally be able
to recover from connection drops. But they have to be restored quite quickly.
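
As an illustration, OpenVPN can be told to tolerate short drops with its keepalive and persist options; a minimal client-side sketch (the values are example choices, not from this thread):

```
# OpenVPN client config excerpt (example values)
proto udp
keepalive 10 120   # ping every 10 s; assume the peer is down after 120 s
persist-tun        # keep the tun device open across restarts
persist-key        # don't re-read key material on restart
```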

>
> But I will give GZIP9 compression a try; maybe (and I would hope so)
> that is already a good enough way to work around the problem.
>
> Thank you,
>
> Best regards,
>
> Chris
I would avoid GZIP9: it's really time-consuming for pretty much no saving
compared to GZIP6, something like saving 0.2% more for 400% more time.
You will have to test different scenarios and find which one works better
in an acceptable timeframe.
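
The diminishing-returns point can be illustrated with Python's zlib (the same DEFLATE algorithm gzip uses); the sample buffer below is artificial, so real backup data will show different ratios:

```python
import zlib

# Artificial, highly repetitive sample; real backup data compresses differently.
data = b"bareos backup sample block " * 4096

for level in (3, 6, 9):
    size = len(zlib.compress(data, level))
    print(f"level {level}: {size} bytes")
```

On data like this the higher levels barely shrink the output further while costing more CPU time.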


--

Bruno Friedmann

openSUSE Member
GPG KEY : D5C9B751C4653227
irc: tigerfoot

chri.s

May 4, 2013, 7:05:12 AM
to bareos...@googlegroups.com
Hi Bruno,

Thanks for your reply.

The main concern is that the whole backup ran until the resync of the client-side connection at 4:40 (last night); before that it had run well since 15:30, and in that time about 14 GB were transferred. Even the resync of the server-side internet connection hadn't broken the backup. So I am looking for a way to avoid the whole thing breaking off just because of a maybe 2 or 3 minute resync.

Is there a way to set these limits a little higher?

I will run some tests today, GZIP6 vs. GZIP9, with partial backup data, but it seems you are right: GZIP6 gives me a backup rate of 264 kB/s and GZIP9 about 220 kB/s with a 40 MB package. I will raise the amount to a gigabyte now.
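
For a rough sense of what those rates would mean for the full 30 GB set (using the figures from this thread; the rates come from a small sample and may not hold for the whole backup):

```python
# Rough transfer-time estimate: 30 GB at the measured rates (kB/s).
SIZE_GB = 30

for label, rate_kbps in [("GZIP6", 264), ("GZIP9", 220)]:
    seconds = SIZE_GB * 1024 * 1024 / rate_kbps
    print(f"{label}: ~{seconds / 3600:.1f} hours")
# GZIP6: ~33.1 hours
# GZIP9: ~39.7 hours
```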

One more question: can I do a full backup from another, pre-synced server, and how can I avoid a full backup syncing the whole mass of data again from the source instead of using the former incremental/differential backup states?

Thank you once more.

Have a nice weekend.

Chris
