
FSArchiver - I had a very bad idea


Soviet_Mario

Dec 27, 2021, 10:32 AM

The problem, a self-inflicted one alas, is still ongoing.

From time to time I use FSArchiver to back up file systems,
as it is easy and very space-efficient (it compresses a
great deal).

Normally I boot from a live distro when backing up the ROOT
partition.

This time (since I needed to keep working meanwhile) I tried
to back up the mounted ROOT from inside the running system itself.

Apart from it having been running for over 24 h, the
compressed size (maximum compression level 9) has exceeded
the used space on the partition (the .FSA archive is already
110 GB, while the used space on the root partition is 105 GB
uncompressed). I gather it may well be stuck on mutable
files; I don't know how it is supposed to manage them. Maybe
it tries to "add" (?) each file's new state to the old one,
growing a journal to track the commits?

Using lsblk, the root partition, which has no label, appears
MOUNTED TWICE: once as '/' and again as
'/tmp/fsa/20211226-165002-000718e8-00'

It seems self-evident that FSArchiver remounted the same
partition.
I don't use BTRFS, only plain EXT4, so the second mount
cannot literally be a COW-style "snapshot".

Does anybody know the logic FSArchiver uses to manage a
mutable, mounted filesystem?

Is there under Linux (Debian Bullseye) something similar,
better, or different to what I recall being called VOLUME
SHADOW COPY under Windows? (I never knew how that worked...
while the COW mechanism, also used by VirtualBox snapshots,
is rather simple and clear: a cluster-level, time-ordered
journal.)

And what to do? Abort the process and trash the huge file?
It would surely end up in a corrupted state if I abort the
program (at this point I suspect that either the process
will NEVER end, or it will produce a corrupted image even if
it terminates gracefully).

What should I do? (Apart from never again running FSArchiver
on the root from inside.)

Tnx



--
1) Resist, resist, resist.
2) If everyone pays their taxes, the taxes get paid by everyone
Soviet_Mario - (aka Gatto_Vizzato)

J.O. Aho

Dec 27, 2021, 11:25 AM

On 27/12/2021 16.31, Soviet_Mario wrote:
>
> The problem, a self-inflicted problem alas, is still ongoing.
>
> From time to time I use FSArchiver to backup file systems as it is easy
> and very space-efficient (it compresses a great deal).

Myself, I have never used it; nowadays I use only btrfs, and even one
of my phones uses it as its default file system.

> Normally I boot from a live distro, when backing up the ROOT partition
>
> This time I tried (since I needed to work meanwhile) to backup a mounted
> ROOT, from inside the system itself.

As you are using ext4, which doesn't have snapshot support, backups
of live systems won't be 100% accurate. However, most Linux
distributions seem to use LVM, and LVM has snapshot support, so you
can take a snapshot of your current root and then back up the snapshot.
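A rough, untested sketch of that flow (all names here are examples: a
volume group called vg0, a logical volume called root, and some free
space left in the VG for the snapshot):

# freeze a point-in-time view of the root LV, archive it, then drop it
sudo lvcreate --snapshot --size 5G --name rootsnap /dev/vg0/root
sudo fsarchiver savefs /mnt/backup/root.fsa /dev/vg0/rootsnap
sudo lvremove -y /dev/vg0/rootsnap

The snapshot only has to hold the blocks that change while the backup
runs, so it can be much smaller than the origin volume.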


> Apart from being running for over 24 h, and a compressed size (maximum
> level of compression 9) that has exceeded the used space on the
> partition, (the .FSA archive is already 110 GB and the used root
> partition is 105 uncompressed)

Some files, like already-compressed video and images, aren't a good
idea to compress; those tend to become larger. Also, picking the
highest compression level (9) does not always give the best result
size-wise: for small text files, level 3 can be optimal, and for
empty files any compression only adds overhead.

Of course, traditionally you generate a tarball and compress that, so
individual file sizes don't matter as much, but you still shouldn't
compress already-compressed files like videos and images.


> Is it there under Linux (Debian Bullseye) sth similar, or better, or
> different, to what under windows I can still recall it was called VOLUME
> SHADOW COPY ? (I did not know how it worked ... while the COW mechanism,
> also used by Virtual Box snapshots is rather simple and clear, a
> cluster-level time Z-order journal).

Btrfs and LVM are the two alternatives you have at the moment; in the
future you will also be able to do this with bcachefs. MS-Windows uses
a snapshotting feature in NTFS that was supported by MS-Windows Server
2008 and later and MS-Windows 7 and later.


> And what to do ? Abort the process and have to trash the huge file ? It
> would surely end up in a corrupted state, if I abort the program (at
> this point I suspect that either the process would NEVER end or it would
> end up with a corrupted image even if it terminates gracefully).

Yes, you have to assume the image will be corrupted; if you are
lucky you can use part of it, but do not count on that.

> What should I do ? (apart from never more run FSArchiver on the root
> from inside)

Use a snapshot and back that up, and maybe use a simple backup suite
or kbackup, depending on which desktop environment you like.


--

//Aho

Paul

Dec 27, 2021, 12:32 PM
It seems file-based at some level.

https://www.fsarchiver.org/quickstart/

OK, there's even a page about live-backup.

https://www.fsarchiver.org/live-backup/

"FSArchiver can be used to backup linux operating systems when they are running."

LVM Logical-Volumes == snapshot.
partition at rest == remount ro
"hot" backup is next...

"If there is no risk of inconsistency, then you can use fsarchiver with option -A
to continue the backup of a filesystem which is mounted in read-write mode."

[-A, --allow-rw-mounted]

It's about as risky as you would expect it to be :-)
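So a hot backup would look something like this (device and destination
paths are examples, not from the docs):

# -A acknowledges the source is mounted read-write; -z9 is max compression
fsarchiver savefs -A -z9 /mnt/external/root.fsa /dev/sda1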

Of course you would abort the operation, especially if the .fsa file
is stored on the partition you are backing up :-/

Paul

Carlos E.R.

Dec 27, 2021, 2:48 PM
On 27/12/2021 16.31, Soviet_Mario wrote:

(not familiar with FSArchiver, thus ignoring that part)

...

> Is it there under Linux (Debian Bullseye) sth similar, or better, or
> different, to what under windows I can still recall it was called VOLUME
> SHADOW COPY ? (I did not know how it worked ... while the COW mechanism,
> also used by Virtual Box snapshots is rather simple and clear, a
> cluster-level time Z-order journal).

Besides btrfs or LVM, which Aho mentioned, I believe XFS can do it,
but I have never tried it.

You could also use a RAID mirror: remove one side from the
logical mirror and clone that. When finished, restore the mirror.
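With Linux software RAID that would go roughly like this, assuming a
hypothetical md0 mirror with member /dev/sdb1 (untested sketch):

mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1  # detach one half
# ... clone or archive /dev/sdb1 while md0 runs degraded ...
mdadm /dev/md0 --add /dev/sdb1                      # re-add; rebuild starts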

--
Cheers, Carlos.

J.O. Aho

Dec 27, 2021, 6:07 PM
On 27/12/2021 20.46, Carlos E.R. wrote:
> On 27/12/2021 16.31, Soviet_Mario wrote:
>
> (not familiar with FSArchiver, thus ignoring that part)
>
> ...
>
>> Is it there under Linux (Debian Bullseye) sth similar, or better, or
>> different, to what under windows I can still recall it was called
>> VOLUME SHADOW COPY ? (I did not know how it worked ... while the COW
>> mechanism, also used by Virtual Box snapshots is rather simple and
>> clear, a cluster-level time Z-order journal).
>
> Besides btrfs or LVM that Aho mentioned, I believe XFS can do it, but I
> never tried.

XFS has the tools xfs_freeze, xfsdump, and xfsrestore for making
backups and restoring from them; it's a bit more complicated than
btrfs IMHO.
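Roughly like this (untested; the mount point and destination are
examples):

xfs_freeze -f /home       # block writes, flush everything to disk
xfsdump -l 0 -L homedump -M media0 -f /mnt/backup/home.xfsdump /home
xfs_freeze -u /home       # thaw
# restore later with: xfsrestore -f /mnt/backup/home.xfsdump /home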

ZFS also has snapshot support, but it isn't built into the kernel
like btrfs/lvm/xfs.

JFS has snapshot support when run on AIX; the Linux version lacks
that functionality, and people are recommended to use LVM or EVMS
(whose main development seems to have stopped in 2006, with the
latest patches from 2014).


IMHO, nowadays btrfs seems to be the better choice for a file system
on Linux. Sure, it's not always the optimal file system for every
task, but it gives you a lot of good options to work with, like
snapshots and transparent compression.


> Also you could use a raid mirror: remove one of the sides from the
> logical mirror, and clone that. When finished, restore the mirror.

It's possible, but I wouldn't recommend it, especially when you start
working with larger file systems; it takes time to rebuild the mirror.

--

//Aho

Carlos E.R.

Dec 27, 2021, 6:20 PM
On 28/12/2021 00.07, J.O. Aho wrote:
> On 27/12/2021 20.46, Carlos E.R. wrote:
>> On 27/12/2021 16.31, Soviet_Mario wrote:
>>
>> (not familiar with FSArchiver, thus ignoring that part)
>>
>> ...
>>
>>> Is it there under Linux (Debian Bullseye) sth similar, or better, or
>>> different, to what under windows I can still recall it was called
>>> VOLUME SHADOW COPY ? (I did not know how it worked ... while the COW
>>> mechanism, also used by Virtual Box snapshots is rather simple and
>>> clear, a cluster-level time Z-order journal).
>>
>> Besides btrfs or LVM that Aho mentioned, I believe XFS can do it, but
>> I never tried.
>
> XFS has the tools xfs_freeze, xfsdump and xfsrestore to make backup and
> restore from them, it's a bit more complicated than btrfs IMHO.
>
> ZFS has also snapshot support, but ain't built into the kernel like
> btrfs/lvm/xfs.
>
> JFS has snapshot support when run on AIX, the Linux version lacks the
> functionality and people are recommended to use LVM or EVMS (main
> development seems to stopped 2006 and latest patches are from 2014).
>
>
> IMHO nowadays btrfs seems to be the better choice when it comes for a
> file system on Linux, sure it's not always the optimal file system for
> all tasks, but gives you a lot of good options to work with like
> snapshots and transparent compression.

I use btrfs to write backups to, with compression and encryption (LUKS),
but without snapshots. Otherwise, I use ext4 for root and xfs for home
and data. Main reason: I know what to do with them in case of disaster.


>> Also you could use a raid mirror: remove one of the sides from the
>> logical mirror, and clone that. When finished, restore the mirror.
>
> It's possible, but I wouldn't recommend, specially when you start to
> work on larger file systems, it takes time to rebuild the mirror.

It does that, but you can tune the rebuild speed.

--
Cheers, Carlos.

Diesel / Gremlin Kook

Dec 27, 2021, 11:54 PM
Most regulars in this group do application development either as a pastime
or as a part of their job, so I doubt Troll Killer Snit consider writing
macros to be "witch craft". He pisses off multiple groups of people who are
just bystanders, but that's a conceited fool for you. What Troll Killer Snit
and you care about isn't a factor. Lines of text containing numbers and symbols.
I know we have two different views entirely.


--
I Left My Husband & Daughter At Home And THIS happened!
<http://web.archive.org/web/20200911090505/https://www.usphonebook.com/dustin-
cook/UQjN2UTM5IzM1ADM0czNwMjMyYzR?Gremlin=&Diesel=>
Dustin Cook: Functional Illiterate Fraud

Soviet_Mario

Dec 28, 2021, 3:57 AM
On 27/12/21 17:25, J.O. Aho wrote:
>
> On 27/12/2021 16.31, Soviet_Mario wrote:
>>
>> The problem, a self-inflicted problem alas, is still ongoing.
>>
>>  From time to time I use FSArchiver to backup file systems
>> as it is easy and very space-efficient (it compresses a
>> great deal).
>
> Myself I never used it, nowadays I use only btrfs, even one
> of my phones uses it as default file system.
>
>> Normally I boot from a live distro, when backing up the
>> ROOT partition
>>
>> This time I tried (since I needed to work meanwhile) to
>> backup a mounted ROOT, from inside the system itself.
>
> As you are using ext4, it don't have snapshot support which
> makes backups of live systems to not be 100% accurate, but
> most Linux distributions seems to use LVM, LVM has snapshot

Interesting. I'm 90% sure I don't have LVM support enabled.
Which test should I run to check that?

> support, so you can take a snapshot of your current root,
> then make a backup of the snapshot.
>
>
>> Apart from being running for over 24 h, and a compressed
>> size (maximum level of compression 9) that has exceeded
>> the used space on the partition, (the .FSA archive is
>> already 110 GB and the used root partition is 105
>> uncompressed)
>
> Some files ain't good idea to compress like compressed video

I have NONE, or nearly none, in the root partition. All "user data"
are elsewhere.

> and images, those tend to become larger. Also picking
> highest compression (9) do not always mean you get the best
> compression (size-wise), for small text files compression
> level 3 can be the most optimal and of course empty files
> any compression would make them larger.

I feel the problem was not the algorithm chosen, as I don't
even keep video/audio/photos on the "system" partition, just
programs, caches, and recalcitrant junk (yes, by means of the
FIND program, I discovered that even BLEACHBIT does not deeply
clean a lot of stuff: caches, thumbnails, and so on).

THE SAME DRIVE, compressed correctly with the system shut
down, gave 29.5 GB, a ratio of less than 30%. So that
wasn't the issue.

>
> Of course traditionally you generate a tarball which you
> compress, then the individual file size don't matter, but
> still you shouldn't compress already compressed files like
> videos and images.
>
>
>> Is it there under Linux (Debian Bullseye) sth similar, or
>> better, or different, to what under windows I can still
>> recall it was called VOLUME SHADOW COPY ? (I did not know
>> how it worked ... while the COW mechanism, also used by
>> Virtual Box snapshots is rather simple and clear, a
>> cluster-level time Z-order journal).
>
> Btrfs or LVM are the two alternatives you have at the
> moment, you will also in the future be able to do this with
> bcachefs too. MS-Windows uses a snapshotting feature in NTFS
> that was only supported by MS-Windows server 2008 and later
> and MS-Windows 7 and later.
>
>
>> And what to do ? Abort the process and have to trash the
>> huge file ? It would surely end up in a corrupted state,
>> if I abort the program (at this point I suspect that
>> either the process would NEVER end or it would end up with
>> a corrupted image even if it terminates gracefully).
>
> Yes, you have to count on that the image will be corrupted,
> if you are lucky you can use part of it, but do not count on
> that.

I broke the process, which, wisely and without my intervention,
deleted its own backup file by itself.

>
>> What should I do ? (apart from never more run FSArchiver
>> on the root from inside)
>
> Use a snapshot and backup that one and maybe use simple
> backup suite or use kbackup, depending on what desktop
> environment you like.
>

Apart from FSArchiver, DAR, and DBackup, I've never "met" a
backup program that suited me. Sometimes I use FIND manually,
piped into ZIP, or FreeFileSync (which does not support
compression, though).

J.O. Aho

Dec 28, 2021, 5:06 AM
On 27/12/2021 19.27, Soviet_Mario wrote:
> On 27/12/21 17:25, J.O. Aho wrote:

>> As you are using ext4, it don't have snapshot support which makes
>> backups of live systems to not be 100% accurate, but most Linux
>> distributions seems to use LVM, LVM has snapshot
>
> intresting. I'm 90 % sure I don't have LVM support enabled. Which test
> should I do to assess that ?

Just run:
sudo lvdisplay

no output = no lvm



> THE SAME DRIVE, compressed correctly with a shut-down system, gave 29,5
> GB, with a less than 30% ratio. So that wasn't the issue.

Could it be that it hasn't yet reached the compression part of the
process?



> apart from FSArchiver, DAR and Dbackup I've never "met" a backup program
> that suited me. Sometimes I use FIND manually and ZIP with pipe, or
> FreeFileSync (who does not support compression though)

For me it's a small script that backs up my data with the help of
btrfs send/receive; it hardly has any impact when running
incremental backups.
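The core of it is just a snapshot plus a send/receive pair; a hedged
sketch (subvolume and mount-point names are made up):

btrfs subvolume snapshot -r /home /home/.snap/today   # read-only snapshot
btrfs send -p /home/.snap/yesterday /home/.snap/today | btrfs receive /mnt/backup
# -p sends only the delta against the previous snapshot (incremental)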

--

//Aho

STALKING_TARGET_12

Dec 28, 2021, 6:00 AM
Adriana Ayaquica reported Snit sock Snit Michael Glasser years ago. As
expected, it did naught to stop the dunce.

All joking aside, those of you who troll are not able to control yourselves
anymore. And what did Snit sock Snit Michael Glasser have to say about
this long list of "convenient friends" who popped up at just the right
moment? Most of the time he would try to push the reader to believe they're
"honest and honorable" posters who just happened to have a detailed understanding
of the history of the group. How stupid does he think people are?

Given how repeatedly it seems that Snit sock Snit Michael Glasser's .sig
file is some skew of a quote Adriana Ayaquica posted which had been a beating
on Snit sock Snit Michael Glasser for something he did which was asinine/false/etc...
its clearly a repeated reminder of Snit sock Snit Michael Glasser's deep
rooted distress for having been so routinely pwned.

Why would Adriana Ayaquica need 'helpers'? He's the one who shows facts
for his side of the "fights".

-
Puppy Videos
https://duckduckgo.com/?q=Steve+Petruzzellis+the+narcissistic+bigot
https://www.bing.com/search?q=Dustin+Cook+the+functional+illiterate+fraud
Steve Carroll the Narcissistic Bigot

J.O. Aho

Dec 28, 2021, 6:21 AM
On 28/12/2021 12.00, STALKING_TARGET_12 wrote:

> Adriana Ayaquica reported Snit sock Snit Michael Glasser years ago. As
> expected, it did naught to stop the dunce.

It would have been nice if you could keep the Snit-bashing to its own
thread, especially when he isn't around; and when he is, don't feed
the troll.

--

//Aho

Steve Carroll

Dec 28, 2021, 8:39 AM
Narcissistic Bigot AKA Steve Carroll. With no proof at all, as is the norm
for Snit sock HWSNBN.

It's not surprising that Snit sock HWSNBN faked boredom at the appearance
of being seen as credible... knowing that it is not likely to ever happen.
I have been reading a bit from some of those old threads he was previously
getting his ass kicked. I noticed that many of those he'd go out of his way
to again and again attack had broad computer knowledge. I did not find many
who were also proficient in the hardware side of things, or were also trained
as an electrician along with various aspects of IS; with the degrees to back
it all up, too.

My position: Even if a person was only learning to annoy others, the notion
that obtaining proficiency as being one of having "the big duck egg" to show
for it does not fly because you will by definition have the wisdom to show
for it and expertise is beyond Snit sock HWSNBN.

--
"You'll notice how quickly he loses interest when everything is about him.
He clearly wants the attention"
Steve Carroll, making the dumbest comment ever uttered.

Soviet_Mario

Dec 28, 2021, 10:15 AM
On 27/12/21 18:31, Paul wrote:
The program itself, when asked gracefully to abort with CTRL+C
(I did not first try to terminate it in the task monitor),
deleted its own huge .FSA file.

tnx

>
> Of course you would abort the operation, especially if the
> .fsa file
> is stored on the partition you are backing up :-/
>
>    Paul
>


Soviet_Mario

Dec 28, 2021, 10:19 AM
On 28/12/21 11:06, J.O. Aho wrote:
> On 27/12/2021 19.27, Soviet_Mario wrote:
>> On 27/12/21 17:25, J.O. Aho wrote:
>
>>> As you are using ext4, it don't have snapshot support
>>> which makes backups of live systems to not be 100%
>>> accurate, but most Linux distributions seems to use LVM,
>>> LVM has snapshot
>>
>> intresting. I'm 90 % sure I don't have LVM support
>> enabled. Which test should I do to assess that ?
>
> Just run:
> sudo lvdisplay
>
> no output = no lvm

Tnx. Confirmed: no LVM support.

Since I did not use the -A flag for live partitions, I dunno
what it actually did.

I am quite convinced that FSArchiver does not store the
whole UNCOMPRESSED image first and compress it later. I
deduce this from the very slow rate at which the file grew
in the meantime. If it were just an uncompressed copy (SSD
to SSD), it would have grown much faster, and only later
taken a long time to be compressed into a "solid" block.

>
>
>
>> THE SAME DRIVE, compressed correctly with a shut-down
>> system, gave 29,5 GB, with a less than 30% ratio. So that
>> wasn't the issue.
>
> could it be that it hasn't yet come to the compression part
> of the process.

I strongly suspect it does not store a whole uncompressed
image anywhere, but compresses in chunks: a couple of GB
every 5 min or so, at that extreme level of compression.

>
>
>
>> apart from FSArchiver, DAR and Dbackup I've never "met" a
>> backup program that suited me. Sometimes I use FIND
>> manually and ZIP with pipe, or FreeFileSync (who does not
>> support compression though)
>
> For me it's a small script that backup my data with help of
> btrfs send/receive, hardly has any impact when running
> incremental backups.
>

I understand... but I am not familiar with scripts. I am
unable to learn the bash language, in that I really cannot
figure out the effects and the number of steps of parameter
SUBSTITUTION.
I have given up, alas.

Paul

Dec 28, 2021, 11:04 AM
Please do not reply to the frelwizzen bot.

You can filter out these spew posts using the email field
of the message.

Paul

David W. Hodgins

Dec 28, 2021, 12:57 PM
On Mon, 27 Dec 2021 13:27:05 -0500, Soviet_Mario <Sovie...@cccp.mir> wrote:

<snip details of never ending backup>

I haven't used fsarchiver to back up a mounted / system.

That given, there are two common reasons a backup never ends:

1. Backing up pseudo files that never end, such as /dev/urandom

Fixed by making sure the contents of directories where pseudo file systems are
mounted are excluded. Typically /dev, /sys, and /proc. If the backup includes
remote file systems, be careful not to include their pseudo files either.

2. A backup that includes its own destination, resulting in an I/O loop
(what's written becomes input later in the process).

Fixed by excluding the destination path from the backup selection.
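Both fixes can be seen in a single GNU tar invocation (paths are
examples; fsarchiver's own -e/--exclude option serves the same purpose):

# --one-file-system stops at mount-point boundaries, so /proc, /sys,
# /dev and friends are skipped; excluding the destination avoids the loop
tar -czpf /mnt/backup/root.tar.gz --one-file-system --exclude=/mnt/backup /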

Regards Dave Hodgins

Carlos E.R.

Dec 28, 2021, 4:28 PM
On 28/12/2021 16.19, Soviet_Mario wrote:
> On 28/12/21 11:06, J.O. Aho wrote:
>> On 27/12/2021 19.27, Soviet_Mario wrote:
>>> On 27/12/21 17:25, J.O. Aho wrote:


>> For me it's a small script that backup my data with help of btrfs
>> send/receive, hardly has any impact when running incremental backups.
>>
>
> I understand ... but I am not familiar with scripts. I am unable to
> learn bash language in that I really cannot figure out the effects and
> the number of steps of parameter SUBSTITUTION.

Why do you need to know?

> I have given up alas.


--
Cheers, Carlos.

J.O. Aho

Dec 28, 2021, 5:34 PM
On 28/12/2021 16.19, Soviet_Mario wrote:
> On 28/12/21 11:06, J.O. Aho wrote:
>> On 27/12/2021 19.27, Soviet_Mario wrote:

>>> THE SAME DRIVE, compressed correctly with a shut-down system, gave
>>> 29,5 GB, with a less than 30% ratio. So that wasn't the issue.
>>
>> could it be that it hasn't yet come to the compression part of the
>> process.
>
> I strongly suspect it does not store anywhere a whole uncompressed
> image, but do compression in chunks, a couple of GB every 5 min or so,
> at that extreme level of compression

Maybe it was including itself in the backup; that could cause both the
slowness and the disk-space consumption.


>>> apart from FSArchiver, DAR and Dbackup I've never "met" a backup
>>> program that suited me. Sometimes I use FIND manually and ZIP with
>>> pipe, or FreeFileSync (who does not support compression though)
>>
>> For me it's a small script that backup my data with help of btrfs
>> send/receive, hardly has any impact when running incremental backups.
>>
>
> I understand ... but I am not familiar with scripts.

Scripts are much the same as what you type at your command prompt;
you just do more than one thing at a time, and some steps may run
only if a condition is met.


> I am unable to
> learn bash language in that I really cannot figure out the effects and
> the number of steps of parameter SUBSTITUTION.

Not much different from other languages, except it's a lot easier to
execute commands (you don't need to invoke a shell, check the return
value of the call, and, if something went wrong, try to parse it from
the output). Using a search engine when you need to figure out how to
do things really helps.


--

//Aho

J.O. Aho

Dec 28, 2021, 5:38 PM

On 28/12/2021 18.57, David W. Hodgins wrote:

> 1. Backing up psuedo files that never end such as /dev/urandom
>
> Fixed by making sure the contents of directories where pseudo file
> systems are
> mounted are excluded. Typically  /dev, /sys, and /proc. Be careful if
> the backup
> includes remote file systems, not to include their psuedo files.

Didn't think of that. I think you could also include /run in the list
of directories that shouldn't be backed up; you may have some nice
sockets there.


--

//Aho

David W. Hodgins

Dec 28, 2021, 5:53 PM
Yes. All directories shown as mount points in "grep -v ^/dev /proc/mounts" should be
excluded.

In the case of subdirectories such as /dev/pts, it gets excluded by excluding
all directories starting with /dev.

So on my current system, the exclude list would be /dev, /proc, /run, /sys, and /tmp.
Don't forget to exclude any currently mounted remote file systems.
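You can even generate the exclude flags mechanically; a small untested
pipeline (eyeball the output before trusting it):

# print one --exclude=<mountpoint> per filesystem not backed by a /dev device
grep -v ^/dev /proc/mounts | awk '{print "--exclude=" $2}'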

Regards, Dave Hodgins

Stefen Petruzzellis

Dec 28, 2021, 8:07 PM
Just look at your programs and look at mine, there is nothing for me to
learn from a jerk like Kelly Phillips. But hey, let him keep making an
ass of himself. I am sure one of his stooges will come to the rescue.
Android is based on Linux. No debate. Idiot.

Wow, in Kelly Phillips's 'brain', that an app has been published is "confirmation"
that Michael Snit Glasser built it now? At one point, he said an online
denizen was "obsessing" over him, which was shown to be merely him posting
to himself.

Kelly Phillips doesn't have any clue what he is sniveling about.

--
Puppy Videos!!
https://www.linkedin.com/in/michael-glasser-b7075a23
https://www.google.com/search?q=Dustin+Cook%3A+functional+illiterate+fraud

Soviet_Mario

Dec 29, 2021, 4:02 AM
On 28/12/21 22:25, Carlos E.R. wrote:
Dunno; the problem is not a lack of documentation, but a lack in
my own ability to figure out the steps.
There is nothing I can do about that; it's a limitation of mine.
Tnx anyway

>
>> I have given up alas.
>
>


--

Soviet_Mario

Dec 29, 2021, 4:13 AM
On 28/12/21 23:34, J.O. Aho wrote:
> On 28/12/2021 16.19, Soviet_Mario wrote:
>> On 28/12/21 11:06, J.O. Aho wrote:
>>> On 27/12/2021 19.27, Soviet_Mario wrote:
>
>>>> THE SAME DRIVE, compressed correctly with a shut-down
>>>> system, gave 29,5 GB, with a less than 30% ratio. So
>>>> that wasn't the issue.
>>>
>>> could it be that it hasn't yet come to the compression
>>> part of the process.
>>
>> I strongly suspect it does not store anywhere a whole
>> uncompressed image, but do compression in chunks, a couple
>> of GB every 5 min or so, at that extreme level of compression
>
> Maybe it was including itself in the backup, that could
> cause a slow and disk space consuming effect.

Interesting / intriguing.

To be honest, I have just faced this nightmare using DBackup
(a "per-directory" compressor).
I prepared an exclude list with a lot of care, but I forgot a
symlink to "itself" in one folder which, after some days of
uninterrupted work (I was not able to understand the problem
until I finally explored the backup folder), ended up three
levels of nesting deep. I had sparked a recursion cycle!

But the same situation would be strange in a "per-partition"
approach, since the target folder WAS NOT on the same
partition: I would have been simply MAD to make such a choice.

The previous time I used DBackup, the source was actually a
multi-FS file-listing file, and the target was on the same
partition as most of the source files. But I forgot a "hook"
in the source pointing to one target, and DBackup maybe was
instructed to "follow" symlinks.

>
>
>>>> apart from FSArchiver, DAR and Dbackup I've never "met"
>>>> a backup program that suited me. Sometimes I use FIND
>>>> manually and ZIP with pipe, or FreeFileSync (who does
>>>> not support compression though)
>>>
>>> For me it's a small script that backup my data with help
>>> of btrfs send/receive, hardly has any impact when running
>>> incremental backups.
>>>
>>
>> I understand ... but I am not familiar with scripts.
>
> Scripts in much are the same as you type in your command
> prompt, just you select to do more than one thing at the
> time and may have some steps done if a condition is met.
>
>
>> I am unable to learn bash language in that I really cannot
>> figure out the effects and the number of steps of
>> parameter SUBSTITUTION.
>
> Not much different from other languages,

Apart from the C++ preprocessor (which, in fact, I never
mastered, for the same reasons as here), I have never used
languages where tokens change semantics as they are used.
Strangely, I have never had any problem with the "pointer"
(or reference) concept, which is considered harmful.

By chance I recently discovered that JAVASCRIPT has
introduced the so-called "BACKTICK" notation (a back-slanted
apostrophe) to enable literal substitution of variable
contents within strings.
I'm almost sure I will never master such a feature beyond
very simple one-step cases.

> except you have a
> lot easier to execute the commands (don't need to invoke a
> shell and then check the return value from the call and if
> something went wrong try to parse it from the output). Using
> a search engine when need to figure out how to do things are
> really helpful.

I just tried to study parameter substitution, but I am
unable to figure out the process, so I cannot design code
properly.

Soviet_Mario

Dec 29, 2021, 4:17 AM
On 28/12/21 18:57, David W. Hodgins wrote:
> On Mon, 27 Dec 2021 13:27:05 -0500, Soviet_Mario
> <Sovie...@cccp.mir> wrote:
>
> <snip details of never ending backup>
>
> I haven't used fsarchiver to backup a mounted / system.
>
> That given, there are two common reasons a backup never ends:
>
> 1. Backing up psuedo files that never end such as /dev/urandom
>
> Fixed by making sure the contents of directories where
> pseudo file systems are
> mounted are excluded. Typically  /dev, /sys, and /proc. Be

Tnx, you confirm my suspicion.
The source partition (the mounted ROOT FS) actually contains
all the directories you mentioned.

> careful if the backup
> includes remote file systems, not to include their psuedo
> files.

It's not clear to me what a remote file system is.

>
> 2. Backup that includes the destination, resulting in an i/o
> loop (what's written
> becomes input later in the process).

LOL, I have met this before (using DBackup, with sources and
target on the same partition and a leftover symlink that
caused an infinite recursion). This time that was not the
problem; the target was on another physical drive.

>
> Fixed by excluding the destination path from the backup
> selection.
>
> Regards Dave Hodgins


J.O. Aho

Dec 29, 2021, 6:29 AM
On 29/12/2021 10.13, Soviet_Mario wrote:
> On 28/12/21 23:34, J.O. Aho wrote:
>> On 28/12/2021 16.19, Soviet_Mario wrote:
>>> On 28/12/21 11:06, J.O. Aho wrote:
>>>> On 27/12/2021 19.27, Soviet_Mario wrote:
>>
>>>>> THE SAME DRIVE, compressed correctly with a shut-down system, gave
>>>>> 29,5 GB, with a less than 30% ratio. So that wasn't the issue.
>>>>
>>>> could it be that it hasn't yet come to the compression part of the
>>>> process.
>>>
>>> I strongly suspect it does not store anywhere a whole uncompressed
>>> image, but do compression in chunks, a couple of GB every 5 min or
>>> so, at that extreme level of compression
>>
>> Maybe it was including itself in the backup, that could cause a slow
>> and disk space consuming effect.
>
> intresting / intriguing

But more likely it's what David mentioned; I would exclude at least
/dev, /proc, /run, /sys, /mnt, and /tmp.

I'm not familiar with the backup tools you use, but check whether they
have a flag for not crossing into other file systems; otherwise you
need to exclude the directories mentioned above.



> But the same situation would be strange in a "per-partition" approach,
> since the target folder WAS NOT on the same partition : I would have
> been simply MAD to make such a choice.

If you say you want to back up / and you have mounted the destination
partition at /mnt/backup without excluding it, the backup will also
include the /mnt/backup content.



>> Not much different from other languages,
>> except you have a lot easier to execute the commands (don't need to
>> invoke a shell and then check the return value from the call and if
>> something went wrong try to parse it from the output). Using a search
>> engine when need to figure out how to do things are really helpful.
>
> I just tried to study parameter substitution, but I am unable to figure
> out the process, so I cannot design code properly

In the first place you would just need to be able to assign a value
to a variable and then use the variable; beyond that you don't need
much more to make a simple script, as everything is just like when
you run stuff on the command line. So if you can't master this, then
I suggest you just stay with a GUI, point and click on icons, and use
the default setup of everything.

--

//Aho

Carlos E.R.

Dec 29, 2021, 9:08 AM
On 29/12/2021 10.02, Soviet_Mario wrote:
> On 28/12/21 22:25, Carlos E.R. wrote:
>> On 28/12/2021 16.19, Soviet_Mario wrote:
>>> On 28/12/21 11:06, J.O. Aho wrote:
>>>> On 27/12/2021 19.27, Soviet_Mario wrote:
>>>>> On 27/12/21 17:25, J.O. Aho wrote:
>>
>>
>>>> For me it's a small script that backup my data with help of btrfs
>>>> send/receive, hardly has any impact when running incremental backups.
>>>>
>>>
>>> I understand ... but I am not familiar with scripts. I am unable to
>>> learn bash language in that I really cannot figure out the effects
>>> and the number of steps of parameter SUBSTITUTION.
>>
>> Why do you need to know?
>
> dunno, the problem is not lack of documentation, but lack in my own
> ability to figure out steps.
> There is nothing I can do about that, it's a limitation of mine.
> Tnx anyway

Really, you do not need to read documentation to start. It is really simple.

You just create a text file that starts with:

#!/bin/bash

and then write lines exactly the same as you would in a terminal, typing
commands. That's a script, you don't need anything else. A list of
commands to run one after the other, but without having to type them
every time.

Then, as with any other programming language, you add conditionals,
loops, and variables. Just look at existing scripts for ideas, then
write your own text file of examples.

Defining a variable:

MYVAR="something"


Using a variable:

echo $MYVAR


For starters, it works. Ok, if your string has spaces or special chars
there are problems, but that is "advanced" and there are tricks. For
example, using a variable:

echo "$MYVAR"



For debugging: change the first line to:

#!/bin/bash -x
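Putting those pieces together, here is a complete toy script (every
name in it is an example to adapt):

#!/bin/bash
# back up one directory into a dated tarball
SRC="/home/user/docs"      # what to save
DEST="/mnt/backup"         # where to put it
STAMP=$(date +%Y%m%d)
if [ -d "$DEST" ]; then
    tar -czf "$DEST/docs-$STAMP.tar.gz" "$SRC"
else
    echo "destination $DEST is missing" >&2
    exit 1
fi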


--
Cheers, Carlos.

Carlos E.R.

Dec 29, 2021, 9:12 AM
On 29/12/2021 10.13, Soviet_Mario wrote:
Even on a different partition, the destination is still part of the
same filesystem tree. In Linux and Unix, there is always one single
filesystem tree.

--
Cheers, Carlos.

David W. Hodgins

Dec 29, 2021, 11:46 AM
On Wed, 29 Dec 2021 04:17:02 -0500, Soviet_Mario <Sovie...@cccp.mir> wrote:
> I've not clear what is a remote File System

Then you're not using one. :-)

Typically, samba, nfs, sshfs, gdrive, and other packages that allow accessing the
file system on a remote computer as if it were a local file system.

I use sshfs to make it easy to copy files between my computer and computers I have
set up with ssh access, though I've used other methods too.
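For example (host and paths are placeholders):

sshfs user@server:/srv/data /mnt/remote   # mount the remote directory
cp -a /mnt/remote/somefile /home/user/    # then use it like any local path
fusermount -u /mnt/remote                 # unmount when done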

Regards Dave Hodgins

Soviet_Mario

Dec 29, 2021, 2:41 PM
On 29/12/21 12:29, J.O. Aho wrote:
No, I asked it to back up /dev/nvme0n1p2 (the device associated
with the partition that contains / and all of its contents). The
target was elsewhere.

I think FSA is a low-level tool (at the level of disk clusters),
not really a file-system-level tool (though it discards unused
blocks/clusters).

But I am supposing it "monitors" changes to the disk in order to
update whatever is needed (how? Journaling? COW? Dunno!). And
possibly some continually changing files create the problem.


> partition to /mnt/backup without excluding it, the backup
> would also include the /mnt/backup content.
>
>
>
>>> Not much different from other languages, except you have
>>> a lot easier to execute the commands (don't need to
>>> invoke a shell and then check the return value from the
>>> call and if something went wrong try to parse it from the
>>> output). Using a search engine when need to figure out
>>> how to do things are really helpful.
>>
>> I just tried to study parameter substitution, but I am
>> unable to figure out the process, so I cannot design code
>> properly
>
> In the first place you would just need to be able to to
> assign a value to a variable and then use the variable, out
> over that you do not need much more to make a simple script
> as everything is just like when you run stuff on the command
> line. So if you can't m,aster this, then I suggest you just
> stay with a GUI and point and click on icons and just use
> default setups of everything.

good advice

Soviet_Mario

Dec 29, 2021, 2:44 PM
On 29/12/21 15:09, Carlos E.R. wrote:
I can't understand this... so why is it said that hardlinks
must reside on the same file system while symlinks needn't?
And how can one have a mix of ext4 / xfs / ntfs / joliet /
fuse and so on? Aren't those different file systems?

David W. Hodgins

Dec 29, 2021, 3:27 PM
On Wed, 29 Dec 2021 14:43:58 -0500, Soviet_Mario <Sovie...@cccp.mir> wrote:
> I cant understand this ... so why it is said that hardlinks
> must reside on the same file system and symlinks don't need to ?
> and how can one have a mix of ext4 / xfs / ntfs / joliet /
> fuse and else ? Aren't they different file systems ?

At the start of the thread, the / file system was being backed up, not a device.

The / directory is the mount point for the root file system. It includes the
directories that are used as mount points for other file systems.

When backing up via the / directory, the backup operates on files and
directories. Unless the software is configured to only back up things from the
same file system, it will include the files and directories from all of the
mounted file systems.

Regarding symlinks and hard links, a hard link is just a normal directory entry
for an existing file while a symlink is a directory entry with the path and name
of another file.

A hard link has the name and a reference to the existing file's inode within
that file system. It cannot point to a different file system. It cannot be a
link to a directory.

Since a symlink stores the path and file name, it may point to a directory on
the same file system or a different one. Care must be taken with symlinks to
avoid recursion.

For example, directory Alpha contains one normal file and one symlink to
directory Beta. Directory Beta contains one file and a symlink to directory
Alpha. When directory Alpha and its sub-directories/files are being backed up,
the backup includes Beta, which includes Alpha, and never stops. That's called
recursion. Most software will stop after "too many" levels of recursion, but
not all will.
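You can see the difference in a terminal with throwaway files:

echo hello > original
ln original hard           # hard link: another name for the same inode
ln -s original soft        # symlink: a tiny file holding a path
ls -li original hard soft  # original and hard show the same inode number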

Backing up a device works quite differently than backing up files and
directories. It simply copies blocks within the device. It may select only
blocks that belong to currently used clusters, or it may copy all blocks.

Regards, Dave Hodgins

J.O. Aho

Dec 29, 2021, 5:18 PM
I have only taken a really short and hasty look at the source code,
but it seems fsarchiver does mount file systems. It could be that
when you give it a partition that is already mounted, it then reads
from the mounted file system.


> I think FSA is a low level (at level of disk-clusters) tool, not really
> a File-system level tool (but It discards unused blocks/clusters).

I'm not so sure, as it mounts the file system to access the data; the
blocks it mentions seem to be fsarchiver's own internal blocks, not
blocks on the disk itself. It does require the relevant file system
maintenance tools to be installed in order to handle a file system.


> But I am supposing it "monitors" the changes to the disk in order to
> update (how ? Journalling ? Cow ? Dunno !) all is needed. And possibly
> some continually changing files create the problem.

I don't think it handles things much differently than cp; it's just a
matter of how it saves the data to its destination file. It then acts
on the archive file's metadata when it restores data.


--

//Aho

Carlos E.R.

Dec 29, 2021, 10:04 PM
Suppose you are backing up the so called root filesystem to an external
usb disk.

/
  /home
  /etc

Suppose the external is on /external/usb/storage/

So you say: copy root ("/") to "/external/usb/storage/".


You are creating an infinite loop, because "/external/usb/storage/" is
part of "/" as well, so you are copying "/external/usb/storage/" to
"/external/usb/storage/" as well.

/
  /home
  /etc
  /external/usb/storage/
    /home
    /etc
    /external/usb/storage/
      /home
      /etc
      /external/usb/storage/


ad infinitum.


It is all one filesystem tree; the words are confusing. The root
filesystem can span hundreds of different disks, but all of them form
a single tree.

This is Unix 001.


--
Cheers, Carlos.

Soviet_Mario

Dec 30, 2021, 3:46 AM
On 29/12/21 23:18, J.O. Aho wrote:
As I said in the first message, exploring the disks with
LSBLK I discovered that the "root" partition was mounted
TWICE: once as "/" as expected, and again as
"/tmp/fsa/20211226-165002-000718e8-00"

My /tmp folder is virtual: it has no physical drive below it
but is memory-backed, since I have 64 GB of RAM here.

> could be that when you assign an partition and it is already
> mounted it will then read from the mounted file system.
>
>
>> I think FSA is a low level (at level of disk-clusters)
>> tool, not really a File-system level tool (but It discards
>> unused blocks/clusters).
>
> I'm not that sure as it mounts the file system to access the

I dunno about possible "special" flags of the mount command.
Maybe
"/tmp/fsa/20211226-165002-000718e8-00"
was a special block device?

> data, the blocks it's mentioning seems to be fsarchives own
> internal blocks,

What do you mean? That it produces a self-extracting archive?

> not blocks on the disk itself. It do
> require the needed file system maintenance tools to be
> installed to be able to handle a file system.
>
>
>> But I am supposing it "monitors" the changes to the disk
>> in order to update (how ? Journalling ? Cow ? Dunno !) all
>> is needed. And possibly some continually changing files
>> create the problem.
>
> Don't think it handles things much more differently than cp,
> it's just how it saves the data to it's destination file. It
> will then act based on the archive file meta data when it
> restores data.
>

Anyway, I will never again try that stupid thing! :\

Soviet_Mario

Dec 30, 2021, 3:51 AM
On 30/12/21 04:02, Carlos E.R. wrote:
Yes, this I understand, but I asked FSArchiver to back up not
the special folder "/" but the partition /dev/nvme0n1p2...
So I was thinking it would take a partition-level approach
rather than a higher-level one.

>
>
> It is all one filesystem tree. words are confusing. The root
> filesystem can span hundreds of different disks, but all of
> them form a single tree.
>
> This is unix 001.
>
>


--

J.O. Aho

Dec 30, 2021, 5:35 AM

On 30/12/2021 09.46, Soviet_Mario wrote:
> On 29/12/21 23:18, J.O. Aho wrote:

>> As I have only taken a really short and hasty look at the source code,
>> it seems fsarchive do mount file systems. It
>
> as I said in the first message, exploring the disks with LSBLK, i
> discovered that "root" partition  was mounted TWICE, one as "/" as
> expected, and another as "/tmp/fsa/20211226-165002-000718e8-00"

So it's not a block-level backup but a data-level backup.


> My /tmp folder is virtual, as it has no physical drive below, but is
> memory backed since here I have 64 GB ram

It would be possible if fsarchiver had support for tmpfs, which I
don't think it has.



>>> I think FSA is a low level (at level of disk-clusters) tool, not
>>> really a File-system level tool (but It discards unused
>>> blocks/clusters).
>>
>> I'm not that sure as it mounts the file system to access the
>
> I dunno about possible "special" flags of the mount command. Maybe that
> "/tmp/fsa/20211226-165002-000718e8-00"
> was a special block device ?

The only block device there is your hard drive. The mount command
will never generate a new block device.

>> data, the blocks it's mentioning seems to be fsarchives own internal
>> blocks,
>
> what do you mean ? It produces a self-extracting archive ?

It's just how fsarchiver organizes the data in the .fsa file, nothing more.


--

//Aho

Snit Michael Glasser

Dec 30, 2021, 6:20 AM
Usenet is a world-wide phenomenon based on the belief in good character
that Shadow lacks. People who've known Shadow for quite some time, and
also have a background with him highly advise ignoring him to get him
to play with circuits someplace else. As long as Theo and anyone else
continues to play his games, he won't seek the spotlight someplace else.

Why would you want to limit any applications on Fedora to what can be
done on Windows?

Theo reported Shadow years ago. As expected, it did nothing to stop the
dunderhead.


--
Get Rich Slow!
https://duckduckgo.com/?q=%22NARCISSISTIC+BIGOT%22
Steve 'Narcissistic Bigot' Petruzzellis

HHI

Dec 30, 2021, 7:56 AM
The only way that I could condone AZ Code's constant need for maintenance
because of crashing troubles or badly written updates is if I enjoyed the
regular supply of lies its developers have been feeding Michael Snit Glasser
since 1989. Being independent as it is, usenet will never go away but it'll
never be very useful.

I am a total fanboi of AZ Code, because that's where all the thrilling programming
is happening. Steven Petruzzellis: Narcissistic Bigot.

LOL!

--
Do not click this link!
https://gibiru.com/results.html?q=%22functional%20illiterate%20fraud%22
https://swisscows.com/web?query=steve%20carroll%20%22narcissistic%20bigot%22
Dustin Cook: Functional Illiterate Fraud

Paul

Dec 30, 2021, 10:42 AM
The output of an fsarchiver run shows its "awareness" of file types.
It applies a taste test to each object it processes. I expect a
"special" might be something like a socket, a thing to be avoided.

-[00][ 99%][DIR ] /etc/ntp
-[00][ 99%][REGFILEM] /etc/ntp/ntp.keys
-[00][ 99%][REGFILE ] /etc/ntp/step-tickers
-[00][ 99%][DIR ] /etc/gtk-2.0
-[00][ 99%][REGFILEM] /etc/gtk-2.0/im-multipress.conf
-[00][ 99%][DIR ] /etc/gnupg
-[00][ 99%][REGFILEM] /etc/dnsmasq.conf
-[00][100%][REGFILEM] /etc/named.conf
Statistics for filesystem 0
* files successfully processed:....regfiles=330321, directories=24531, symlinks=13466, hardlinks=4187, specials=9882
* files with errors:...............regfiles=0, directories=0, symlinks=0, hardlinks=0, specials=0

*******

I think as long as you use it in one of its unambiguous
modes of operation, you'd probably be safe.

Since it allows specifying more than one partition in a single
command, you could back up the entire machine from a LiveCD
with a one-liner :-) But that's not really very practical at
restore time. My practice is to back up partitions into single
files, to make it easier to handle the content later (some
partitions are more valuable than others).
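For the record, the one-liner form would be something like this
(device names are examples):

# each filesystem becomes a separately numbered entry inside the archive
fsarchiver savefs -z7 /mnt/usb/machine.fsa /dev/sda1 /dev/sda2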

Paul

Steve Carroll

Dec 30, 2021, 11:39 AM
I know we have different notions entirely. Hope Alan B likes the housing
in the packed cesspit of my rubbish bin.

And in response you have nothing but a crack to start a war. Did he think
that was clever?

--
Get Rich Slow!
https://www.bing.com/search?q=dustin%20cook%20functionally%20illiterate%20fraud
http://cityrecord.engineering.nyu.edu/data/1910/1910-01-11.pdf
https://www.prescotthouse.com/professional-staff/

Petruzzellis Kids

Dec 31, 2021, 5:02 AM
Ronb is trying ("very, very" hard) to project their trolling crap onto John
Gohde. For as long as I can remember Ronb has pressed the idea that John
Gohde needs 'backing' to point out all his poor behavior. The fact is that
nobody needs any proof to do that. So Ronb pulls this hateful flooding foolishness
in an incompetent effort to 'sell' the idea that John Gohde is like he is.
Researching AZ Code... still a tenderfoot. I've been scanning a bit from
some of those old threads he was previously short circuiting. I noticed
that many of those he'd go out of his way to repeatedly attack had extensive
technical knowledge. I did not find many who were also competent in the
scripting side of things, or were also trained as an technician along with
various aspects of IT; with the diplomas to back it all up, too. John Gohde
has gone the extra mile, essentially hand holding Ronb on coding practices
only for Ronb to blindly attack him and continue to show that he has no
real interest in the subject. In all reality, it's too hard for snit. Folks
who've known Ronb for quite a while, and also have a background with him
highly advise avoiding him to get him to setup circle someplace else. As
long as John Gohde and anyone else continues to spoon feed him, he will
not seek attention someplace else. Lots of posters continue responding to
Ronb. I don't indict John Gohde for being annoyed but, truly, I can't see
why he stays here with Ronb here. John Gohde is better at discussions as
found in a serious venue and advocacy forums simply aren't it.

--
E-commerce Simplified
https://swisscows.com/web?query=%22functionally%20illiterate%20fraud%22
Dustin Cook: Functionally Illiterate Fraud