
cpio files to remote server


Kevin Fleming

Sep 20, 2005, 4:22:32 PM
Hey Everybody,

This is what I'm starting with to backup some data from one box to
another:
find /dir/data -depth -print |cpio -o |rcmd backup "cpio -ivdum"

The job copies some data over every night. Right now, both servers are
on the same LAN, but the backup server is going to be moved offsite,
with a slower connection. Is there a way that I can add something like
gzip into the command to compress the data before sending it to the
remote server?

Thanks for any ideas,
Kevin Fleming

Jean-Pierre Radley

Sep 20, 2005, 5:06:16 PM
Kevin Fleming typed (on Tue, Sep 20, 2005 at 01:22:32PM -0700):

Install and use rsync, which can compress if you tell it to do so.

--
JP

John DuBois

Sep 21, 2005, 2:47:01 AM
In article <1127247752....@z14g2000cwz.googlegroups.com>,

Sure. Try:

find /dir/data -depth -print |cpio -o | gzip |rcmd backup "gunzip | cpio -ivdum"

John
--
John DuBois spc...@armory.com KC6QKZ/AE http://www.armory.com/~spcecdt/

Ian Wilson

Sep 21, 2005, 6:00:52 AM

In my experience, rsync will be far more efficient since it only
transmits the changed parts of changed files.
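A minimal nightly rsync run, sketched under the assumption that the remote
host is still called backup and that the destination directory is
/backup/data (both placeholders), might look like:

# -a preserves permissions, times and ownership; -z compresses in transit;
# --delete removes files on the backup that were removed from the source
rsync -az --delete /dir/data/ backup:/backup/data/

After the first full copy, later runs only send the changed portions of
changed files, which is where the savings over a full cpio stream come from.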

Kevin Fleming

Sep 21, 2005, 10:31:12 AM
"John DuBois" <spc...@armory.com> wrote in message
news:11j20f5...@corp.supernews.com...
> In article <1127247752....@z14g2000cwz.googlegroups.com>,

> Kevin Fleming <kevin...@gmail.com> wrote:
>>Hey Everybody,
>>
>>This is what I'm starting with to backup some data from one box to
>>another:
>>find /dir/data -depth -print |cpio -o |rcmd backup "cpio -ivdum"
>>
>>The job copies some data over every night. Right now, both servers are
>>on the same LAN, but the backup server is going to be moved offsite,
>>with a slower connection. Is there a way that I can add something like
>>gzip into the command to compress the data before sending it to the
>>remote server?
>
> Sure. Try:
>
> find /dir/data -depth -print |cpio -o | gzip |rcmd backup "gunzip |
> cpio -ivdum"
>
> John
> --
> John DuBois spc...@armory.com KC6QKZ/AE
> http://www.armory.com/~spcecdt/


Thanks John, that works...any ideas on how to see the amount of data that
was actually transmitted? I suppose I could just gzip all of the data to a
file on its own so I'd know what the compression is like, but I thought
someone might have a slicker way of doing it.

"Ian Wilson" <scob...@infotop.co.uk> wrote in message
news:<dgrb0j$jf7$1...@nwrdmz02.dmz.ncs.ea.ibs-infra.bt.com>...

> In my experience, rsync will be far more efficient since it only

> transmits the changed parts of changed files.


Also, thanks to JP Radley and Ian Wilson for their rsync suggestions...I'll
have to look into that.


Thanks,
Kevin Fleming

John DuBois

Sep 22, 2005, 2:29:14 PM
In article <1127313072.2...@g44g2000cwa.googlegroups.com>,

Kevin Fleming <kevin...@gmail.com> wrote:
>"John DuBois" <spc...@armory.com> wrote in message
>news:11j20f5...@corp.supernews.com...
>> In article <1127247752....@z14g2000cwz.googlegroups.com>,
>> Kevin Fleming <kevin...@gmail.com> wrote:
>>>Hey Everybody,
>>>
>>>This is what I'm starting with to backup some data from one box to
>>>another:
>>>find /dir/data -depth -print |cpio -o |rcmd backup "cpio -ivdum"
>>>
>>>The job copies some data over every night. Right now, both servers are
>>>on the same LAN, but the backup server is going to be moved offsite,
>>>with a slower connection. Is there a way that I can add something like
>>>gzip into the command to compress the data before sending it to the
>>>remote server?
>>
>> Sure. Try:
>>
>> find /dir/data -depth -print |cpio -o | gzip |rcmd backup "gunzip |
>> cpio -ivdum"
>
>Thanks John, that works...any ideas on how to see the amount of data that
>was actually transmitted?

You could do:

find /dir/data -depth -print | cpio -o | gzip | dd obs=1024k | rcmd backup "gunzip | cpio -ivdum"

dd will report the number of (1MB) blocks that it writes ("records out").

SDS

Sep 24, 2005, 3:01:41 PM
I know this is slightly off topic but some answers later in this chain
mention gzip.

Any thoughts as to whether that is more or less efficient than the old Unix
zip and unzip we have been using for years?

I prefer the unix zip and unzip programs because they are completely
compatible with the zip and unzip programs built into Windows XP and also
with older zip programs.

I know for sure that compress and pack (in Unix) are, in most instances,
less efficient than zip and unzip for un-compiled programs and data files.
They may all turn out to be equally efficient for binary files, however.

All comments appreciated.

Thanks,

DAW
==================

"Kevin Fleming" <kevin...@gmail.com> wrote in message
news:1127247752....@z14g2000cwz.googlegroups.com...

Jean-Pierre Radley

Sep 24, 2005, 3:32:24 PM
SDS typed (on Sat, Sep 24, 2005 at 07:01:41PM +0000):
| I know this is slightly off topic ....

It's not at all off-topic.

| Any thoughts as to whether [gzip is] more or less efficient than the
| old Unix zip and unzip we have been using for years?

Gzip is much more efficient than zip or compress, and bzip2 is still
better.
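One quick way to check this against your own data is to compress the same
sample file with each tool and compare the sizes (the file name here is only
a placeholder):

cp /dir/data/somefile /tmp/sample
compress -c /tmp/sample > /tmp/sample.Z
gzip -c /tmp/sample > /tmp/sample.gz
bzip2 -c /tmp/sample > /tmp/sample.bz2
ls -l /tmp/sample /tmp/sample.Z /tmp/sample.gz /tmp/sample.bz2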

| I prefer the unix zip and unzip programs because they are completely
| compatible with the zip and unzip programs built into Windows XP and also
| with older zip programs.

Winzip handles gzip, and the www.gzip.org page will point you to many
other possibilities.

| I know for sure that compress and pack (in Unix) are, in most instances,
| less efficient than zip and unzip for un-compiled programs and data files.
| They may all turn out to be equally efficient for binary files, however.

They are not equally efficient for binary files any more than they are
for text files.

--
JP

Bela Lubkin

Sep 24, 2005, 5:39:02 PM
sd...@earthlink.net wrote:

> I know this is slightly off topic but some answers later in this chain
> mention gzip.
>
> Any thoughts as to whether that more or less efficient than the old uniz zip
> and unzip we have been using for years?

The algorithm used by `gzip` is one of the ones used by `zip`. `zip`
supports several algorithms and tries to choose which one will produce
the smallest output (actually the Unix port of `zip` just uses one
algorithm, called "deflate").

`gzip` and `bzip2` are single-file compressors: foo -> foo.gz or
foo.bz2. `zip` and many others like it are combined archivers and
compressors. Running `zip foo.zip foo bar baz` creates a single file,
foo.zip, that contains those three files. The equivalent with `gzip`
would be something like: `tar cf foo.tar foo bar baz; gzip foo.tar`.

`zip` archive format is imperfect for Unix purposes: it doesn't store
all Unix attributes, I don't think it stores directory permissions,
stuff like that. As long as you keep those things in mind it's probably
fine.

> I prefer the unix zip and unzip programs because they are completely
> compatible with the zip and unzip programs built into Windows XP and also
> with older zip programs.

There are so many newer archivers for Windows -- `zip` is rather
archaic. Two of the most popular ones these days are `rar` and `7-zip`.
I've been experimenting with these and have found that `7-zip` can get
the best compression of any compressor I've ever tried. Note that I say
"_can_" get. It has a lot of knobs you can twiddle. Its default
compression is similar to that of `rar`. (I'm working on some long-term
archival storage where minimizing size is more important than saving
compression time. For typical backup tasks, a faster compressor that
leaves a few percent on the table is probably more appropriate.)

7-zip for Unix lives at http://p7zip.sourceforge.net/. I'm not aware of
an OpenServer port (I've been fiddling with it on Windows & Linux).
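As a rough sketch (assuming the p7zip port puts a 7z binary on the PATH, and
using placeholder archive names), the gap between the default level and the
maximum is controlled with the -mx switch:

# default compression level
7z a backup.7z /dir/data
# maximum compression: slower, but usually a smaller archive
7z a -mx=9 backup-max.7z /dir/data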

> I know for sure that compress and pack (in Unix) are, in most instances,
> less efficient than zip and unzip for un-compiled programs and data files.
> They may all turn out to be equally efficient for binary files, however.

`pack` is an ancient algorithm that is always less efficient than the
modern ones. `compress` is newer and more efficient, but still nowhere
near the more modern compressors.

>Bela<

Kevin K

Sep 24, 2005, 8:39:35 PM
On Sat, 24 Sep 2005 19:32:24 UTC, Jean-Pierre Radley <j...@jpr.com>
wrote:

> | Any thoughts as to whether [gzip is] more or less efficient than the
> | old uniz zip and unzip we have been using for years?
>
> Gzip is much more efficient than zip or compress, and bzip2 is still
> better.
>
>

Actually, you have to decide what you mean by efficient. In the
normal case of compression ratio, yes, bzip2 is more efficient than
gzip. But it is less efficient in CPU usage. At least when I've
uncompressed large source archives, it seems to take longer with
bzip2.

Even when using gzip, I often choose the compression level based on time
constraints, or on the CPU that will later uncompress the data. gzip -2, for
instance, generally compresses fairly well and runs faster than -7 or -9.
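A quick way to see that trade-off on your own data is to time two levels
against the same input (the file names here are placeholders):

time gzip -2 < sample.tar > sample-2.tar.gz
time gzip -9 < sample.tar > sample-9.tar.gz
ls -l sample-2.tar.gz sample-9.tar.gz

The same -1 through -9 switches drop straight into the backup pipeline
discussed earlier, e.g. gzip -2 in place of plain gzip.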

--

Bill Vermillion

Sep 24, 2005, 10:45:06 PM
In article <KIRoJuEXw9g9-pn2-yr7KZIDdF7yT@ecs>,

In a discussion in an email list I get, it was pointed out that
bzip2 is less efficient for small files than gzip. For such things
as man pages, the bzip2 file is often larger than the gzip file.

For under 3K, gzip wins. For 3K to 6K it's about even. And over
6K bzip2 comes out ahead in space savings.

Over 10K bzip2 is a hands-down winner.
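Those break-even points are easy to check locally; a loop along these lines
compares the two on a few files of different sizes (the paths are only
illustrative, and assume both gzip and bzip2 are installed):

for f in /etc/profile /etc/termcap /etc/services
do
  orig=`wc -c < "$f"`
  gz=`gzip -c "$f" | wc -c`
  bz=`bzip2 -c "$f" | wc -c`
  echo "$f: original $orig, gzip $gz, bzip2 $bz"
done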

Bill

--
Bill Vermillion - bv @ wjv . com
