
Debian server for backups of Windows clients


Daniel Bareiro

Aug 2, 2016, 5:20:04 PM
Hi all!

I'm thinking of deploying a Debian backup server using Dirvish (which
is based on rsync and is, in fact, packaged in Debian). On previous
occasions I implemented such solutions seamlessly with GNU/Linux
clients, but now I would like to add Windows clients.

I'd like to use Dirvish because I have had very good experiences with
it. Besides, using rsync with hard links for files that do not change
from one backup to the next saves considerable disk space.

But to use Dirvish with Windows clients I will need to install an SSH
server on them. I thought Cygwin could be an alternative, but while
looking for documentation I have not found any uniform process for
installing and configuring a Cygwin SSH server on Windows.

I would like to know if anyone has experience in this regard that they
could share.


Thanks in advance.

Kind regards,
Daniel


David Christensen

Aug 2, 2016, 11:30:03 PM
Cygwin sshd and rsync are okay for interactive use. There is a shell
script (ssh-host-config) provided with the Cygwin openssh package for
setting up sshd as a service.
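For reference, the setup described amounts to something like the following sketch (run from an elevated Cygwin terminal; exact prompts and defaults vary by Cygwin version, so treat this as an outline rather than a recipe):

```shell
# Sketch of setting up Cygwin sshd as a Windows service
# (assumes the Cygwin openssh package is already installed;
# the service password below is a placeholder).

# Generate host keys, create the sshd service user, and register the service:
ssh-host-config -y -w 'choose-a-strong-password'

# Start the service (either of these should work):
net start sshd
# cygrunsrv -S sshd

# Verify that sshd is listening on port 22:
netstat -an | grep ':22'
```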


Unfortunately, Cygwin rsync is notorious for working for a while, and
then hanging in the middle of a transfer. I've seen this for years, and
I saw it on up-to-date installs less than a week ago. So for automated
backups, you need to detect this failure mode and deal with it. The
only way I found to get rsync working again was to reboot.


You might find more encouraging answers on the Cygwin mailing list:

https://cygwin.com/lists.html


Currently, Windows Backup and Restore is the most reliable solution I've
found for automated backups. Either it works, or it doesn't (the last
Windows Vista box I maintain has broken Volume Shadow Copy, which breaks
Backup and Restore).


David

didier gaumet

Aug 3, 2016, 4:40:04 AM
On 02/08/2016 at 23:12, Daniel Bareiro wrote:
[...]
> But to use Dirvish with Windows clients I will need to install an SSH
> server [...] on Windows.
[...]

Apart from a Cygwin solution, there seems to be an official open-source
Microsoft effort to port OpenSSH (including the sshd service):
https://blogs.msdn.microsoft.com/powershell/2015/10/19/openssh-for-windows-update/
https://github.com/PowerShell/Win32-OpenSSH/wiki

I cannot offer more details: I very rarely use SSH and Windows together.

Darac Marjal

Aug 8, 2016, 5:10:05 AM
Can I recommend BackupPC (also in Debian), which performs similar
de-duplication to dirvish (hard links between identical files with
optional compression as well), but can pull files from computers using
rsync, ftp or, relevant to you, SMB. That is, it can back up Windows
clients without any client software installed on them.

Obviously, SMB will probably be slower than rsync as you'll need to pull
every file every time, but you may find that's preferable to trying to
get rsync installed on your Windows machines.

--
For more information, please reread.

Daniel Bareiro

Aug 8, 2016, 2:30:04 PM
Hi, David.

On 03/08/16 00:23, David Christensen wrote:

>> I'm thinking of deploying a Debian backup server using Dirvish (which
>> is based on rsync and is, in fact, packaged in Debian). On previous
>> occasions I implemented such solutions seamlessly with GNU/Linux
>> clients, but now I would like to add Windows clients.
>>
>> I'd like to use Dirvish because I have had very good experiences with
>> it. Besides, using rsync with hard links for files that do not change
>> from one backup to the next saves considerable disk space.
>>
>> But to use Dirvish with Windows clients I will need to install an SSH
>> server on them. I thought Cygwin could be an alternative, but while
>> looking for documentation I have not found any uniform process for
>> installing and configuring a Cygwin SSH server on Windows.
>>
>> I would like to know if anyone has experience in this regard that they
>> could share.

> Cygwin sshd and rsync are okay for interactive use. There is a shell
> script (ssh-host-config) provided with the Cygwin openssh package for
> setting up sshd as a service.
>
> Unfortunately, Cygwin rsync is notorious for working for a while, and
> then hanging in the middle of a transfer. I've seen this for years, and
> I saw it on up-to-date installs less than a week ago. So for automated
> backups, you need to detect this failure mode and deal with it. The
> only way I found to get rsync working again was to reboot.
>
> You might find more encouraging answers on the Cygwin mailing list:
>
> https://cygwin.com/lists.html

That sounds like a blocking issue. Researching it, I read something
about what you mention here [1]. It says the problem seems to be
resolved in newer versions of Cygwin, although you say you had it a
week ago (with the latest published version?). Then again, that article
is "a bit" old.

He also mentions a problem with backing up open files. Have you
experienced that problem?

> Currently, Windows Backup and Restore is the most reliable solution I've
> found for automated backups. Either it works, or it doesn't (the last
> Windows Vista box I maintain has broken Volume Shadow Copy, which breaks
> Backup and Restore).

That does not sound very encouraging. An alternative I considered as a
last resort was to mount the remote filesystem on the backup server
using Samba and then run rsync on the mount point, although I'm not
sure how efficient that would be.


Thanks for your reply.

Kind regards,
Daniel

[1]
http://www.trueblade.com/techblog/backing-up-windows-computers-with-dirvish


Daniel Bareiro

Aug 8, 2016, 4:10:05 PM
Hi, Darac.

On 03/08/16 05:49, Darac Marjal wrote:

>> I'm thinking of deploying a Debian backup server using Dirvish (which
>> is based on rsync and is, in fact, packaged in Debian). On previous
>> occasions I implemented such solutions seamlessly with GNU/Linux
>> clients, but now I would like to add Windows clients.
>>
>> I'd like to use Dirvish because I have had very good experiences with
>> it. Besides, using rsync with hard links for files that do not change
>> from one backup to the next saves considerable disk space.
>>
>> But to use Dirvish with Windows clients I will need to install an SSH
>> server on them. I thought Cygwin could be an alternative, but while
>> looking for documentation I have not found any uniform process for
>> installing and configuring a Cygwin SSH server on Windows.
>>
>> I would like to know if anyone has experience in this regard that they
>> could share.

> Can I recommend BackupPC (also in Debian), which performs similar
> de-duplication to dirvish (hard links between identical files with
> optional compression as well), but can pull files from computers using
> rsync, ftp or, relevant to you, SMB. That is, it can back up Windows
> clients without any client software installed on them.
>
> Obviously, SMB will probably be slower than rsync as you'll need to pull
> every file every time, but you may find that's preferable to trying to
> get rsync installed on your Windows machines.

Thanks for the recommendation. I haven't had the opportunity to use
BackupPC, but I will look into it.

As I said in another message in this thread, an alternative could be to
mount the Windows client's filesystem on the Debian server using Samba
and then run rsync against the mount point, but I'm not sure how
efficient that would be. There are also security implications, as the
backup server and the Windows computer are in different offices, so the
backup would travel over the Internet.
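A minimal sketch of that Samba-mount approach, with hypothetical host names, share names, and paths (mount.cifs comes from Debian's cifs-utils package):

```shell
# Sketch: mount the Windows share read-only on the Debian server, then
# rsync from the mount point into a dated, hard-linked snapshot.
# Host name, share name, dates, and paths below are hypothetical.

mkdir -p /mnt/winclient
mount -t cifs //winclient/Users /mnt/winclient \
    -o ro,credentials=/root/.smbcred-winclient

# Hard-link unchanged files against yesterday's snapshot, Dirvish-style:
rsync -a --delete \
    --link-dest=/backup/winclient/2016-08-07 \
    /mnt/winclient/ /backup/winclient/2016-08-08/

umount /mnt/winclient
```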

Glenn English

Aug 8, 2016, 5:40:04 PM

> On Tue, Aug 02, 2016 at 06:12:33PM -0300, Daniel Bareiro wrote:
>> Hi all!
>>
>> I'm thinking of deploying a Debian backup server using Dirvish (which
>> is based on rsync and is, in fact, packaged in Debian). On previous
>> occasions I implemented such solutions seamlessly with GNU/Linux
>> clients, but now I would like to add Windows clients.
>>
>> I'd like to use Dirvish because I have had very good experiences with
>> it. Besides, using rsync with hard links for files that do not change
>> from one backup to the next saves considerable disk space.
>>
>> But to use Dirvish with Windows clients I will need to install an SSH
>> server on them. I thought Cygwin could be an alternative, but while
>> looking for documentation I have not found any uniform process for
>> installing and configuring a Cygwin SSH server on Windows.

Have you tried PuTTY? I know next to nothing about Windows, but I've seen PuTTY discussed as if it does SSH seriously.

Dirvish, I think, is a good backup package. It creates backups that can be handled with plain *nix software if you lose the backup software. Like Amanda.

--
Glenn English

Daniel Bareiro

Aug 8, 2016, 9:20:04 PM
Hi, Glenn.

On 08/08/16 18:34, Glenn English wrote:

>>> I'm thinking of deploying a Debian backup server using Dirvish (which
>>> is based on rsync and is, in fact, packaged in Debian). On previous
>>> occasions I implemented such solutions seamlessly with GNU/Linux
>>> clients, but now I would like to add Windows clients.
>>>
>>> I'd like to use Dirvish because I have had very good experiences with
>>> it. Besides, using rsync with hard links for files that do not change
>>> from one backup to the next saves considerable disk space.
>>>
>>> But to use Dirvish with Windows clients I will need to install an SSH
>>> server on them. I thought Cygwin could be an alternative, but while
>>> looking for documentation I have not found any uniform process for
>>> installing and configuring a Cygwin SSH server on Windows.

> Have you tried PuTTY? I know next to nothing about Windows, but I've seen
> PuTTY discussed as if it does SSH seriously.

I'm not sure I understand your idea. I know that PuTTY is an SSH
client used mostly on Windows, but there are also versions for
GNU/Linux (also in the Debian repositories).

In this case the idea is to initiate the connection from the Debian
server to the Windows server, to bring the files to the Debian server.
This connection would be over the Internet, as the two machines are not
on the same local network.

The idea is to use an automated process, with a connection as secure as
possible.

> Dirvish, I think, is a good backup package. It creates backups that can be
> handled with plain *nix software if you lose the backup software. Like
> Amanda.

I mentioned Dirvish because it is what I use to back up GNU/Linux
clients. That way I would have a homogeneous backup solution, which
would simplify maintenance.

I will try some tests locally with Windows 2012 and Cygwin, using
rsync + ssh. The alternative would be to use Samba to access the
Windows filesystem as if it were local to the Debian server, then run
rsync against the mount point. But exposing Samba access over the
Internet does not convince me from a security point of view. Any
comments on this?


Thanks for your interest.

Kind regards,
Daniel


David Christensen

Aug 8, 2016, 11:30:04 PM
On 08/08/2016 01:05 PM, Daniel Bareiro wrote:
> There are also security implications, as the backup server and the
> Windows computer are in different offices, so the backup would travel
> over the Internet.

For security, you can use an SSH tunnel (Cygwin openssh on the Windows
machine).


Backing up over a WAN connection is only practical if your links are
fast and there isn't much data. The same goes for verification,
restore, and archiving jobs. Imaging is likely to be impractical. I'd
consider building another backup machine and deploying it on the remote LAN.
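As a sketch of the tunneling idea (user and host names are hypothetical):

```shell
# Sketch: two ways to protect rsync traffic over the WAN.
# User and host names below are hypothetical.

# (a) rsync directly over SSH -- encrypted end to end:
rsync -az -e ssh 'Administrator@winbox.example.com:/cygdrive/c/Users/' \
    /backup/winbox/

# (b) forward a local port to an rsync daemon on the Windows box
#     through an SSH tunnel, then sync via the tunnel:
ssh -f -N -L 8873:localhost:873 Administrator@winbox.example.com
rsync -az rsync://localhost:8873/users/ /backup/winbox/
```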


David

David Christensen

Aug 8, 2016, 11:30:04 PM
On 08/08/2016 11:25 AM, Daniel Bareiro wrote:
> He also mentions a [Cygwin rsync] problem with backing up open files. Have you
> experienced that problem?

I'm not sure. It's been many moons since I tried to do automated
backups of Windows machines using rsync.


> An alternative I considered as a last resort was to mount the remote
> filesystem on the backup server using Samba and then run rsync on the
> mount point, although I'm not sure how efficient that would be.

Try it and let us know how well it works.


David

Mike McGinn

Aug 9, 2016, 9:40:05 AM
I have done this many times. It is not that complicated.

On the Windows machine, make sure you have Cygwin and ssh installed. On
the Linux machine, create the account for the Windows machine to store
backups in. Use an easy password for now.

On the Windows machine, at the Cygwin terminal, type "ssh-keygen -t rsa".
Next use "ssh-copy-id" to copy the key to the Linux machine. You will be
asked for your password.

Now write a script to tar cfz everything up and scp it to the Linux
machine. Not as elegant as rsync, but the idea is to have a backup; most
Windows machines don't have one. Test your script. Once you are satisfied
with it, create a batch file to call the script.

You have to set the actual "DOS" path to your Cygwin installation. I have
pasted the actual lines from mine below:

SET PATH=C:\cygwin64\bin;D:\cygwin\bin;%PATH%
C:\cygwin64\bin\bash.exe D:\cygwin\home\mike\bin\wwwBackup.sh

Now set up a scheduled task to run it every day on the Windows machine
and a cron on the Linux machine to delete them after so many days so
they don't fill up the disk.
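The wwwBackup.sh script itself was not posted; a hypothetical sketch of what such a tar-and-scp script might look like (all paths, user names, and host names are placeholders):

```shell
#!/bin/sh
# Hypothetical sketch of a wwwBackup.sh-style script: tar everything up
# and scp it to the Linux machine using the key installed with ssh-copy-id.

SRC="/cygdrive/c/Users"                 # what to back up (placeholder)
DEST="backupuser@backuphost:backups/"   # where to send it (placeholder)
STAMP="$(date +%Y-%m-%d)"

# Create a dated compressed archive, copy it, and clean up locally:
tar cfz "/tmp/backup-$STAMP.tar.gz" "$SRC"
scp -p "/tmp/backup-$STAMP.tar.gz" "$DEST"
rm -f "/tmp/backup-$STAMP.tar.gz"
```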

Mike

--
Mike McGinn KD2CNU
President, UU Congregation at Rock Tavern * www.uucrt.org
Laziness is what separates us from the beavers.
More kidneys than eyes ** Registered Linux User 377849

Daniel Bareiro

Aug 9, 2016, 7:30:05 PM

Hi, David.

On 09/08/16 00:21, David Christensen wrote:

>> He also mentions a [Cygwin rsync] problem with backing up open files. Have you
>> experienced that problem?

> I'm not sure. It's been many moons since I tried to do automated
> backups of Windows machines using rsync.

Well, I've been testing with rsync+ssh from Cygwin on a KVM virtual
machine with Windows Server 2012. So far everything has worked smoothly:

-------------------------------------------------------------------
viper@orion:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 8.5 (jessie)
Release: 8.5
Codename: jessie
-------------------------------------------------------------------
viper@orion:~$ rsync --stats --progress -vae ssh
Admini...@10.1.0.33:/cygdrive/c/Users/Administrator/Documents
/ImagenCorporativa /tmp/Backup/
Admini...@10.1.0.33's password:
receiving incremental file list
(...)
ImagenCorporativa/disk01.img
3,221,225,472 100% 1.19MB/s 0:43:02 (xfr#2, to-chk=53/56)
(...)

Number of files: 56 (reg: 48, dir: 8)
Number of created files: 56 (reg: 48, dir: 8)
Number of deleted files: 0
Number of regular files transferred: 48
Total file size: 3,226,528,378 bytes
Total transferred file size: 3,226,528,378 bytes
Literal data: 3,226,528,378 bytes
Matched data: 0 bytes
File list size: 1,360
File list generation time: 0.012 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 996
Total bytes received: 3,227,319,409

sent 996 bytes received 3,227,319,409 bytes 1,244,868.04 bytes/sec
total size is 3,226,528,378 speedup is 1.00
-------------------------------------------------------------------
viper@orion:~$ rsync --stats --progress -vae ssh
Admini...@10.1.0.33:/cygdrive/c/Users/Administrator/Documents/ImagenCorporativa
/tmp/Backup/
Admini...@10.1.0.33's password:
receiving incremental file list

Number of files: 56 (reg: 48, dir: 8)
Number of created files: 0
Number of deleted files: 0
Number of regular files transferred: 0
Total file size: 3,226,528,378 bytes
Total transferred file size: 0 bytes
Literal data: 0 bytes
Matched data: 0 bytes
File list size: 1,360
File list generation time: 0.015 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 32
Total bytes received: 1,379

sent 32 bytes received 1,379 bytes 166.00 bytes/sec
total size is 3,226,528,378 speedup is 2,286,696.23
-------------------------------------------------------------------

As you can see, the transfer was over 3 GB and it did not hang. I did
several tests, all without problems.

I wonder whether, in the hang episodes you mentioned, the transferred
volume was higher or lower than in this case (or whether it hung
randomly).

As a side note, the largest file (disk01.img) took more than 40 minutes
to transfer, so rsync ran for quite some time without hanging. Though it
is off-topic for this thread, the rsync progress data shows that the
average transfer rate was 10 Mbps. I guess that is because I'm going
through a wireless network: in this test the Debian computer is a
notebook connected to the wireless router, and the KVM Windows machine
is on the wired network. Can the decrease in transfer speed really be
that large? The wireless router is a TP-Link WDR3600 running OpenWrt.


Kind regards,
Daniel


Daniel Bareiro

Aug 9, 2016, 7:40:04 PM
Hi, Didier.

On 03/08/16 05:30, didier gaumet wrote:

>> But to use Dirvish with Windows clients I will need to install an SSH
>> server [...] on Windows.

> Apart from a Cygwin solution, there seems to be an official open-source
> Microsoft effort to port OpenSSH (including the sshd service):
> https://blogs.msdn.microsoft.com/powershell/2015/10/19/openssh-for-windows-update/
> https://github.com/PowerShell/Win32-OpenSSH/wiki
>
> I cannot offer more details: I very rarely use SSH and Windows together.

Thanks for the info. I will consider it if I find that the Cygwin SSH
server does not work satisfactorily, or in some other case.


Kind regards,
Daniel


Joel Wirāmu Pauling

Aug 9, 2016, 8:20:04 PM
The best option is to put an SMB/NFS share for all the Windows clients on your backup server.

RAID it and run $whatever backup tools you wish on the exports.

If you need OS-level backups, the best way is to use iSCSI mounts served from the NAS/SAN as the root disks of the Windows machines.

David Christensen

Aug 9, 2016, 10:00:04 PM
On 08/09/2016 04:27 PM, Daniel Bareiro wrote:
> As you can see, the transfer was over 3 GB and it did not hang. I did
> several tests, all without problems.
>
> I wonder whether, in the hang episodes you mentioned, the transferred
> volume was higher or lower than in this case (or whether it hung
> randomly).

Script it and run it every night for a week. If it works every time,
try again for 30 days. Then 90. Then 365.


>
> As a side note, the largest file (disk01.img) took more than 40 minutes
> to transfer, so rsync ran for quite some time without hanging. Though it
> is off-topic for this thread, the rsync progress data shows that the
> average transfer rate was 10 Mbps. I guess that is because I'm going
> through a wireless network: in this test the Debian computer is a
> notebook connected to the wireless router, and the KVM Windows machine
> is on the wired network. Can the decrease in transfer speed really be
> that large? The wireless router is a TP-Link WDR3600 running OpenWrt.

My laptop has 802.11 a/b/g WiFi and Fast Ethernet. Wireless data
transfers are slow (~50 Mbps). Wired is twice as fast (100 Mbps); still
slow. Newer WiFi (n, ac) should be faster, but only the newest WiFi
hardware can match or beat Gigabit.


For the initial full backup, I have found that scp is faster than rsync.


When I know that I've added a bunch of new and/or large files on the
sender, I sometimes try the rsync 'whole-file' option. As I haven't
benchmarked it, I don't know if/when it is helping.
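For illustration, that option is spelled like this (paths are hypothetical):

```shell
# Sketch: skipping the delta algorithm when most of the data is new anyway.
# -W/--whole-file sends changed files in full instead of computing deltas,
# which can be faster on fast links or when files have changed entirely.
rsync -aW /data/new-videos/ backuphost:/backup/videos/
```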


My biggest problem with rsync is when I reorganize file/directory trees
on my file server, especially big stuff -- raw video, movies, disk
images, ISO images, etc. I have yet to figure out an rsync incantation
that does the corresponding moves on the destination, rather than
mindlessly copying and deleting hundreds of GB. I have often considered
writing an rsync prelude script for just this case.


David

didier gaumet

Aug 11, 2016, 12:50:03 PM

I recently upgraded a Windows 10 PC to the latest 1607 build: with
developer mode enabled, one has access to the "Windows Subsystem for
Linux (beta)", the official Microsoft port of Ubuntu bash.
sshd is part of the bundle (I've not tested it, though).

didier gaumet

Aug 11, 2016, 1:00:04 PM

Celejar

Sep 9, 2016, 3:00:03 PM
On Tue, 9 Aug 2016 18:57:02 -0700
David Christensen <dpch...@holgerdanske.com> wrote:

...

> My laptop has 802.11 a/b/g WiFi and Fast Ethernet. Wireless data
> transfers are slow (~50 Mbps). Wired is twice as fast (100 Mbps); still
> slow. Newer WiFi (n, ac) should be faster, but only the newest WiFi
> hardware can match or beat Gigabit.

You get ~50Mbps over a/b/g? 54Mbps is the theoretical maximum, and
everything I've read says that 20-24Mbps is the real-world maximum.

Celejar

Daniel Bareiro

Sep 9, 2016, 3:50:04 PM
Hi, David.

Thanks for your reply.

On 09/08/16 22:57, David Christensen wrote:

>> As you can see, the transfer was over 3 GB and it did not hang. I did
>> several tests, all without problems.
>>
>> I wonder whether, in the hang episodes you mentioned, the transferred
>> volume was higher or lower than in this case (or whether it hung
>> randomly).

> Script it and run it every night for a week. If it works every time,
> try again for 30 days. Then 90. Then 365.

Yes, I have to start testing on a daily basis. Anyway, the test results
I mentioned were quite satisfactory.

>> As a side note, the largest file (disk01.img) took more than 40 minutes
>> to transfer, so rsync ran for quite some time without hanging. Though it
>> is off-topic for this thread, the rsync progress data shows that the
>> average transfer rate was 10 Mbps. I guess that is because I'm going
>> through a wireless network: in this test the Debian computer is a
>> notebook connected to the wireless router, and the KVM Windows machine
>> is on the wired network. Can the decrease in transfer speed really be
>> that large? The wireless router is a TP-Link WDR3600 running OpenWrt.

> My laptop has 802.11 a/b/g WiFi and Fast Ethernet. Wireless data
> transfers are slow (~50 Mbps). Wired is twice as fast (100 Mbps); still
> slow. Newer WiFi (n, ac) should be faster, but only the newest WiFi
> hardware can match or beat Gigabit.

I think it is reasonable to expect the wireless transfer rate to be
lower than on a wired network. But there is a big difference compared
to the ~50 Mbps you mentioned. The peak obtained with rsync was 10
Mbps. Maybe the best approach is to take a measurement with iperf;
what do you think?

> For the initial full backup, I have found that scp is faster than rsync.

That is likely, since rsync adds control information used by the rsync
algorithm to track synchronization.

> When I know that I've added a bunch of new and/or large files on the
> sender, I sometimes try the rsync 'whole-file' option. As I haven't
> benchmarked it, I don't know if/when it is helping.
>
> My biggest problem with rsync is when I reorganize file/ directory trees
> on my file server; especially big stuff -- raw video, movies, disk
> images, ISO images, etc.. I have yet to figure out an rsync incantation
> that does the corresponding moves on the destination, rather than
> mindlessly copying and deleting 100's of GB. I have often considered
> writing an rsync prelude script for just this case.

If you move files, but always within the same root filesystem given to
rsync, you might consider using --delete to get an identical image on
the source and destination.


Kind regards,
Daniel


Daniel Bareiro

Sep 9, 2016, 3:50:04 PM
Hi, Celejar.

On 09/09/16 15:51, Celejar wrote:

>> My laptop has 802.11 a/b/g WiFi and Fast Ethernet. Wireless data
>> transfers are slow (~50 Mbps). Wired is twice as fast (100 Mbps); still
>> slow. Newer WiFi (n, ac) should be faster, but only the newest WiFi
>> hardware can match or beat Gigabit.

> You get ~50Mbps over a/b/g? 54Mbps is the theoretical maximum, and
> everything I've read says that 20-24Mbps is the real-world maximum.

Still, 20-24 Mbps is more than the 10 Mbps I was seeing with rsync.
Could there be a bottleneck somewhere?


Kind regards,
Daniel


Celejar

Sep 9, 2016, 5:20:04 PM
As per your own suggestion in another message, definitely benchmark
with iperf to see if that's better. And as we discussed in another
thread some time ago, (especially) if you're using wireless, benchmark
throughput in *both* directions, since the transmitter (or receiver) may
be better on one machine than on another.
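A sketch of such a measurement with iperf3 (addresses are hypothetical; iperf3 is in the Debian repositories, and Windows builds exist as well):

```shell
# Sketch: measuring raw TCP throughput in both directions with iperf3.
# The address below is hypothetical.

# On one machine (server side):
iperf3 -s

# On the other machine (client side), normal direction:
iperf3 -c 10.1.0.33

# Reverse direction (-R): the server transmits, the client receives,
# which exposes asymmetric wireless performance:
iperf3 -c 10.1.0.33 -R
```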

Celejar

deloptes

Sep 9, 2016, 6:10:04 PM
Daniel Bareiro wrote:

> Still, 20-24 Mbps is more than the 10 Mbps I was seeing with rsync.
> Could there be a bottleneck somewhere?

In my case it was disk I/O - I couldn't do more than 12 Mbps even on a
wired connection, because I have an encrypted disk ... it took me a
while to understand why, though.

David Christensen

Sep 9, 2016, 11:40:04 PM
Benchmarking using WiFi (48 Mb/s):

2016-09-09 20:18:51 dpchrist@t7400 ~
$ time dd if=/dev/urandom of=urandom.100M bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 12.6709 s, 8.3 MB/s

real 0m12.703s
user 0m0.000s
sys 0m12.481s

2016-09-09 20:19:32 dpchrist@t7400 ~
$ time scp -p urandom.100M samba:.
urandom.100M


100% 100MB 1.5MB/s 01:08

real 1m16.023s
user 0m4.548s
sys 0m0.744s


So, 1048576900 bytes * 8 bits / byte / 76.024 seconds

= 110341671 bits/second


Testing again using Fast Ethernet (100 Mb/s):

2016-09-09 20:29:54 dpchrist@t7400 ~
$ time scp -p urandom.100M samba:.
urandom.100M


100% 100MB 2.4MB/s 00:42

real 0m43.377s
user 0m4.476s
sys 0m0.876s


So, 1048576900 bytes * 8 bits / byte / 43.377 seconds

= 193388552. bits/second


Wow. Even worse than I was expecting...


David

David Christensen

Sep 9, 2016, 11:50:04 PM
On 09/09/2016 12:43 PM, Daniel Bareiro wrote:
> On 09/08/16 22:57, David Christensen wrote:
>> My laptop has 802.11 a/b/g WiFi and Fast Ethernet. Wireless data
>> transfers are slow (~50 Mbps). Wired is twice as fast (100 Mbps); still
>> slow. Newer WiFi (n, ac) should be faster, but only the newest WiFi
>> hardware can match or beat Gigabit.
>
> I think it is reasonable to expect the wireless transfer rate to be
> lower than on a wired network. But there is a big difference compared
> to the ~50 Mbps you mentioned. The peak obtained with rsync was 10
> Mbps. Maybe the best approach is to take a measurement with iperf;
> what do you think?

See the benchmark I just posted for 802.11g WiFi -- dm-crypt -> scp ->
dm-crypt, all without AES-NI -- 110341671 bits/second. Yuck.


>> My biggest problem with rsync is when I reorganize file/ directory trees
>> on my file server; especially big stuff ... I have yet to figure out an rsync incantation
>> that does the corresponding moves on the destination ...
>
> If you move files, but always within the same root filesystem given to
> rsync, you might consider using --delete to get an identical image on
> the source and destination.

--delete is a different idea. I'm thinking -y/--fuzzy.
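A sketch of the sort of invocation meant here (paths are hypothetical; note that --fuzzy only searches each file's destination directory for a basis file, so it helps more with in-place renames than with moves across trees):

```shell
# Sketch: asking rsync to look for a similar existing file as a delta
# basis instead of re-sending moved/renamed data from scratch.
# --delete-delay postpones deletions until the end of the transfer,
# so old copies can still serve as basis files.
rsync -a --fuzzy --delete-delay /srv/media/ backuphost:/backup/media/
```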


David

David Wright

Sep 10, 2016, 12:20:04 AM


What's this 9?

Cheers,
David.

Neal P. Murphy

Sep 10, 2016, 1:30:04 AM
Assuming the talk is about transfer rates over the medium, not something like pre-compression data rates (which might be called 'marketing-speak').

Good eye! I was going to say it's not possible to get 110 Mb/s over 802.11g; 40-50 is closer to the best I get. And 193 Mb/s over 100 Mb/s ethernet is right out; the best I've ever managed is maybe 97 Mb/s, and 92-95 is more typical. 11,034,157 b/s on W/L and 19,338,838 b/s on wired is *much* more believable.

Unless one has a very fast multicore CPU with hardware crypto assistance, very fast RAM, and the data to be transferred cached in RAM, one will probably never saturate a fastE or gigE link where one end must decrypt the data from disk/cache then encrypt the data to scp, and the other end must decrypt the data from scp then encrypt the data to disk. Even simple compression slows the transfer down far too much.

Now if one had many CPUs, hacked scp to open as many sockets and thread/child procs as there are CPUs, and had each thread work on a small-ish block of data at a time, one *might* be able to speed up the transfer.

David Christensen

Sep 10, 2016, 2:20:04 AM
A typographical error.

104857600 bytes * 8 bits/byte / 76.024 seconds

= 11034158 bits/second
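The corrected arithmetic can be checked directly:

```shell
# Check of the corrected figure: 104857600 bytes * 8 bits/byte / 76.024 s
awk 'BEGIN { printf "%d\n", 104857600 * 8 / 76.024 }'
# prints 11034157 (truncated; the exact value is 11034157.63...,
# which rounds to the 11034158 figure above)
```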


David

Dan Ritter

Sep 10, 2016, 8:50:04 AM
SSDs can routinely read 400-600 MB/s. No need to have everything
cached in RAM.

In 2010, the first generation of i5 CPUs with hardware support for AES
could encrypt at about 15 MB/s, more than filling a 100 Mb/s pipe.

Here's a table of recent CPUs with AES support, running with
OpenSSL/LibreSSL. https://calomel.org/aesni_ssl_performance.html

It's in megabytes per second, so assume 1000/8 = 250 MB/s is the
bandwidth of a gigabit ethernet NIC. Anything which can do 2x
that can approach encrypting/decrypting from SSD, then
decrypting/encrypting over an SSH connection.

There are a lot of 500s and above on that chart.

And that's per-core, so even the 250+ CPUs can fill a gig-e pipe
while reading from SSD.

Nor are they monstrously expensive: an AMD FX-6300 is $90, a
motherboard for it could be another $90, and you can get a
decent SSD for $100 these days. A $400 desktop can be put
together that can saturate a gig-E link with encrypted traffic
from an encrypted disk.

Truly we live in marvelous times.

-dsr-

rhkr...@gmail.com

Sep 10, 2016, 10:30:04 AM
On Saturday, September 10, 2016 08:41:53 AM Dan Ritter wrote:
> It's in megabytes per second, so assume 1000/8 = 250 MB/s is the
> bandwidth of a gigabit ethernet NIC.

Sorry, I tend to pick at nits, but, for the record, 1000/8 is 125 Mb/s. It
doesn't (really) change your conclusions.

regards,
Randy Kramer

Gene Heskett

Sep 10, 2016, 10:50:04 AM
You make an assumption many folks do, but there's a start bit and a stop
bit, so the math is more like 1000/10 = 100 Mb/s.

Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty:
soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page <http://geneslinuxbox.net:6309/gene>

rhkr...@gmail.com

Sep 10, 2016, 11:00:03 AM
On Saturday, September 10, 2016 10:40:26 AM Gene Heskett wrote:
> On Saturday 10 September 2016 10:26:15 rhkr...@gmail.com wrote:
> > On Saturday, September 10, 2016 08:41:53 AM Dan Ritter wrote:
> > > It's in megabytes per second, so assume 1000/8 = 250 MB/s is the
> > > bandwidth of a gigabit ethernet NIC.
> >
> > Sorry, I tend to pick at nits, but, for the record, 1000/8 is 125
> > Mb/s. It doesn't (really) change your conclusions.
> >
> > regards,
> > Randy Kramer
>
> You make an assumption many folks do, but theres a start bit and a stop
> bit so the math is more like 1000/10=100 Mb/s.


Well, 1000/8 is still 125 ;-) but I wouldn't have written back just to say
that. Isn't it the case that there is something less than 1 start and 1 stop
for every byte--maybe like 1 stop bit for every several bytes? (I am just
(slightly) curious.)

And, iirc, there are variations (which may be obsolete--I seem to remember one
protocol that had either 2 start or 2 stop bits?)

regards,
Randy Kramer

David Christensen
Sep 10, 2016, 11:40:04 AM
I remember start/stop bits from RS-232/485, but Gigabit Ethernet
signaling is more advanced:

https://en.wikipedia.org/wiki/Gigabit_Ethernet


David

Gene Heskett
Sep 10, 2016, 1:40:04 PM
Yes, still in use in some legacy stuff.

There may be some inroads into the 10 bits per byte, but tcp is so old I
doubt that the synchronization portion took a hit. That's what it is:
keeping everything in sync. Even USB has that same data format. SATA
for disks, being much newer, may have abandoned that, particularly for
the disks whose native format is a 4096-byte sector. I've also found
SATA cabling is about 1000% flakier, requiring more frequent
replacements.

> regards,
> Randy Kramer

Celejar
Sep 10, 2016, 10:30:04 PM
On Fri, 9 Sep 2016 20:43:44 -0700
David Christensen <dpch...@holgerdanske.com> wrote:

> On 09/09/2016 12:43 PM, Daniel Bareiro wrote:
> > On 09/08/16 22:57, David Christensen wrote:
> >> My laptop has 802.11 a/b/g WiFi and Fast Ethernet. Wireless data
> >> transfers are slow (~50 Mbps). Wired is twice as fast (100 Mbps); still
> >> slow. Newer WiFi (n, ac) should be faster, but only the newest WiFi
> >> hardware can match or beat Gigabit.
> >
> > I think it is reasonable to expect that the wireless transfer rate is
> > lower than the one obtained in a wired network. But there is a big
> > difference compared to the ~50 Mbps you mentioned. The peak obtained
> > with rsync was 10 Mbps. Maybe the best is to take a metric with iperf,
> > what do you think?
>
> See the benchmark I just posted for 802.11g WiFi -- dm-crypt -> scp ->
> dm-crypt, all without AES-NI -- 110341671 bits/second. Yuck.

FTR: there seem to be more typos here. The actual figure should be
11034157.6344 bits/second.

Celejar

Celejar
Sep 10, 2016, 10:30:04 PM
On Fri, 9 Sep 2016 20:36:39 -0700
David Christensen <dpch...@holgerdanske.com> wrote:

> On 09/09/2016 11:51 AM, Celejar wrote:
> > On Tue, 9 Aug 2016 18:57:02 -0700
> > David Christensen <dpch...@holgerdanske.com> wrote:
> >
> > ...
> >
> >> My laptop has 802.11 a/b/g WiFi and Fast Ethernet. Wireless data
> >> transfers are slow (~50 Mbps). Wired is twice as fast (100 Mbps); still
> >> slow. Newer WiFi (n, ac) should be faster, but only the newest WiFi
> >> hardware can match or beat Gigabit.
> >
> > You get ~50Mbps over a/b/g? 54Mbps is the theoretical maximum, and
> > everything I've read says that 20-24Mbps is the real-world maximum.
> >
> > Celejar
> >
>
> Benchmarking using WiFi (48 Mb/s):
>
> 2016-09-09 20:18:51 dpchrist@t7400 ~
> $ time dd if=/dev/urandom of=urandom.100M bs=1M count=100
> 100+0 records in
> 100+0 records out
> 104857600 bytes (105 MB) copied, 12.6709 s, 8.3 MB/s

...

> 2016-09-09 20:19:32 dpchrist@t7400 ~
> $ time scp -p urandom.100M samba:.
> urandom.100M
>
>
> 100% 100MB 1.5MB/s 01:08
>
> real 1m16.023s
> user 0m4.548s
> sys 0m0.744s
>
>
> So, 1048576900 bytes * 8 bits / byte / 76.024 seconds
>
> = 110341671 bits/second

So assuming that '9' is a typo, as per another message of yours in this
thread, your actual throughput is more like 11 Mbps, correct?

Celejar

David Christensen
Sep 10, 2016, 11:10:05 PM
On 09/10/2016 07:23 PM, Celejar wrote:
> FTR: there seem to be more typos / here. The actual figure should be
> 11034157.6344 bits/second.


Yes, let's whip those typos out of this dead horse some more:

On 09/09/2016 08:36 PM, David Christensen wrote:
> Benchmarking using WiFi (48 Mb/s):
>
> 2016-09-09 20:18:51 dpchrist@t7400 ~
> $ time dd if=/dev/urandom of=urandom.100M bs=1M count=100
> 100+0 records in
> 100+0 records out
> 104857600 bytes (105 MB) copied, 12.6709 s, 8.3 MB/s
>
> real 0m12.703s
> user 0m0.000s
> sys 0m12.481s
>
> 2016-09-09 20:19:32 dpchrist@t7400 ~
> $ time scp -p urandom.100M samba:.
> urandom.100M
>
>
> 100% 100MB 1.5MB/s 01:08
>
> real 1m16.023s
> user 0m4.548s
> sys 0m0.744s

2016-09-10 19:53:48 dpchrist@t7400 ~
$ perl -e 'print 104857600*8/76.023, "\n"'
11034302.7767912


On 09/09/2016 08:36 PM, David Christensen wrote:
> Testing again using Fast Ethernet (100 Mb/s):
>
> 2016-09-09 20:29:54 dpchrist@t7400 ~
> $ time scp -p urandom.100M samba:.
> urandom.100M
>
>
> 100% 100MB 2.4MB/s 00:42
>
> real 0m43.377s
> user 0m4.476s
> sys 0m0.876s

2016-09-10 19:54:43 dpchrist@t7400 ~
$ perl -e 'print 104857600*8/43.377, "\n"'
19338838.5549946


David

Neal P. Murphy
Sep 11, 2016, 1:50:03 AM
On Sat, 10 Sep 2016 10:53:20 -0400
rhkr...@gmail.com wrote:

> On Saturday, September 10, 2016 10:40:26 AM Gene Heskett wrote:
> > On Saturday 10 September 2016 10:26:15 rhkr...@gmail.com wrote:
> > > On Saturday, September 10, 2016 08:41:53 AM Dan Ritter wrote:
> > > > It's in megabytes per second, so assume 1000/8 = 250 MB/s is the
> > > > bandwidth of a gigabit ethernet NIC.
> > >
> > > Sorry, I tend to pick at nits, but, for the record, 1000/8 is 125
> > > Mb/s. It doesn't (really) change your conclusions.
> > >
> > > regards,
> > > Randy Kramer
> >
> > You make an assumption many folks do, but theres a start bit and a stop
> > bit so the math is more like 1000/10=100 Mb/s.
>
>
> Well, 1000/8 is still 125 ;-) but I wouldn't have written back just to say
> that. Isn't it the case that there is something less than 1 start and 1 stop
> for every byte--maybe like 1 stop bit for every several bytes? (I am just
> (slightly) curious.)
>
> And, iirc, there are variations (which may be obsolete--I seem to remember one
> protocol that had either 2 start or 2 stop bits?


Start/stop bits apply to async TIA-232.

Speaking very generally, 100Mb/s Ethernet actually operates at 125Mb/s; that includes the LAPB-like protocol that actually transmits the packets. All the layer 1 overhead goes in that extra 25Mb/s.

So, more correctly, you have data + TCP + IP overhead + L2 overhead: around 3% for full packets, higher for smaller packets. This is why 100Mb/s ethernet saturates at around 92%-95% *data* transmission. The rest is protocol overhead and delays (probably akin to RR and RNR).
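
The ~3% overhead and 92%-95% saturation figures can be sanity-checked from
the standard frame layout. A minimal sketch, assuming full 1500-byte-MTU
frames, 20-byte IP and TCP headers (no options), and the usual Ethernet
preamble, FCS, and inter-frame gap:

```python
# Per-frame byte counts for TCP over Ethernet at a 1500-byte MTU.
PREAMBLE = 8     # preamble + start-of-frame delimiter
ETH_HEADER = 14  # dst MAC, src MAC, EtherType
MTU = 1500       # IP packet (headers + payload)
FCS = 4          # frame check sequence
IFG = 12         # minimum inter-frame gap

IP_HEADER = 20
TCP_HEADER = 20

wire_bytes = PREAMBLE + ETH_HEADER + MTU + FCS + IFG  # 1538 bytes on the wire
tcp_payload = MTU - IP_HEADER - TCP_HEADER            # 1460 bytes of data

goodput_fraction = tcp_payload / wire_bytes
print(round(100 * goodput_fraction, 1))  # ~94.9 percent of line rate
```

Smaller packets push the data fraction down further, which is consistent
with the 92%-95% range seen in practice.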

Daniel Bareiro
Sep 11, 2016, 8:10:03 AM
Hi, deloptes.
This is an interesting fact, because 'orion' (the notebook used in the
mentioned test) also has an encrypted disk. In the test, the notebook
was pulling the files from the Windows VM over the wired network.

root@orion:~# dmsetup ls --target crypt
sda5_crypt (254, 0)

root@orion:~# cryptsetup luksDump /dev/sda5 | grep Version -A3
Version: 1
Cipher name: aes
Cipher mode: xts-plain64
Hash spec: sha1

viper@orion:~$ lsblk --fs
NAME FSTYPE LABEL UUID MOUNTPOINT
sda
├─sda1 /boot
├─sda2
└─sda5
└─sda5_crypt
├─main-swap [SWAP]
├─main-root /
└─main-datos /datos
sr0


I did not think this could affect the network transfer so strongly.


Kind regards,
Daniel


Daniel Bareiro
Sep 11, 2016, 8:30:03 AM
Hi, Celejar

On 09/09/16 18:18, Celejar wrote:

>>>> My laptop has 802.11 a/b/g WiFi and Fast Ethernet. Wireless data
>>>> transfers are slow (~50 Mbps). Wired is twice as fast (100 Mbps); still
>>>> slow. Newer WiFi (n, ac) should be faster, but only the newest WiFi
>>>> hardware can match or beat Gigabit.

>>> You get ~50Mbps over a/b/g? 54Mbps is the theoretical maximum, and
>>> everything I've read says that 20-24Mbps is the real-world maximum.

>> Still, 20-24 Mbps is more than the 10 Mbps I was seeing with rsync. Could
>> there be a bottleneck somewhere?

> As per your own suggestion in another message, definitely benchmark
> with iperf to see if that's better.

Yes, it could be. I was thinking about what I said in a previous message
about the control information added by rsync to the packets sent.

I think this would be important only if we focus on efficiency (number
of bits of data sent / total number of bits sent). In this case, the
focus is the transfer rate, for which the amount of control bits used
would be irrelevant, since we need to know how many bits per second we
are getting, regardless of how useful those bits are.
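
In those terms, the conversion is just bytes moved times eight over elapsed
time; a small Python sketch using the scp timings quoted earlier in the
thread:

```python
# Raw transfer rate in bits per second: total bytes * 8 / elapsed seconds.
def rate_bps(total_bytes, seconds):
    return total_bytes * 8 / seconds

wifi = rate_bps(104857600, 76.023)   # the 802.11g scp run
wired = rate_bps(104857600, 43.377)  # the Fast Ethernet scp run
print(round(wifi / 1e6, 1))   # ~11.0 Mb/s
print(round(wired / 1e6, 1))  # ~19.3 Mb/s
```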

> And as we discussed in another thread some time ago, (especially) if
> you're using wireless, benchmark throughput in *both* directions,
> since the transmitter (or receiver) may be better on one machine than
> on another.

Interesting sidelight. Thanks for sharing.


Kind regards,
Daniel
