
Does rsync "lock" files?


Bengt T

Jul 21, 2015, 7:00:27 AM

I plan to use rsync regularly, e.g. once per hour triggered by cron, to
make backups from PC1 to PC2.

PC1 is constantly used, resulting in file changes (creation, deletion,
modification) all the time, even when the rsync backup procedure has
started.

My question is whether and how this will work with rsync as described
above. E.g. how is an opened file handled by rsync?

Chris Ahlstrom

Jul 21, 2015, 7:53:10 AM

Bengt T wrote this copyrighted missive and expects royalties:

Presumably rsync opens the file and sends it. If changes occur in the
meantime, doesn't matter, thanks to copy-on-write.

--
"Your stupidity, Allen, is simply not up to par."
-- Dave Mack (ma...@inco.UUCP)

"Yours is."
-- Allen Gwinn (al...@sulaco.sigma.com), in alt.flame

Richard Kettlewell

Jul 21, 2015, 3:25:35 PM

Chris Ahlstrom <OFee...@teleworm.us> writes:
> Bengt T wrote this copyrighted missive and expects royalties:
>> Plan to use rsync to regularly, e. g. once per hour triggered by cron, in
>> making backup from PC1 to PC2.
>>
>> PC1 is constantly used, resulting in file changes (creation, deletion,
>> modification) all the time. Even when rsync/backup procedure is
>> started.
>>
>> My question is if and how this will work with rsync as described
>> above? E.g. how is an opened file handled by rsync?

If the file changes while rsync is reading it then you risk getting an
inconsistent backup. It has no idea that anything else is accessing the
file.

This is a good use case for LVM snapshots, although you still risk
backing up ‘intermediate’ states of the overall system - partial writes,
inconsistencies between pairs of files, etc. If you want a fully
consistent backup then you will need to quiesce any processes that
modify the filesystems somehow.
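
For the archive, a minimal sketch of the snapshot approach; the volume
group vg0, the logical volume home, and all paths here are invented, so
adjust to taste, and leave some free extents in the volume group for
the snapshot's copy-on-write area:

$ lvcreate --snapshot --size 1G --name home_snap /dev/vg0/home
$ mount -o ro /dev/vg0/home_snap /mnt/snap
$ rsync -a /mnt/snap/ pc2:/backup/pc1/home/    # copy the frozen view
$ umount /mnt/snap
$ lvremove -f /dev/vg0/home_snap

Each file in the copy is then as it was at one single instant, although
that instant may still fall in the middle of an application-level
operation, as above.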

> Presumably rsync opens the file and sends it. If changes occur in the
> meantime, doesn't matter, thanks to copy-on-write.

rsync just uses read() to read the file. Any changes made while it’s
half way through will be reflected in subsequent reads.

--
http://www.greenend.org.uk/rjk/

Chris Ahlstrom

Jul 21, 2015, 5:39:42 PM

Richard Kettlewell wrote this copyrighted missive and expects royalties:

> Chris Ahlstrom <OFee...@teleworm.us> writes:
>> Bengt T wrote this copyrighted missive and expects royalties:
>>> Plan to use rsync to regularly, e. g. once per hour triggered by cron, in
>>> making backup from PC1 to PC2.
>>>
>>> PC1 is constantly used, resulting in file changes (creation, deletion,
>>> modification) all the time. Even when rsync/backup procedure is
>>> started.
>>>
>>> My question is if and how this will work with rsync as described
>>> above? E.g. how is an opened file handled by rsync?
>
> If the file changes while rsync is reading it then you risk getting an
> inconsistent backup. It has no idea that anything else is accessing the
> file.

Rsync doesn't need to know that. The kernel knows it, preserves
the handle the rsync is using, and makes a copy of the (modified) file.

When rsync closes the file handle, the kernel removes that old file,
and only the modified one exists.

> This is a good use case for LVM snapshots, although you still risk
> backing up ‘intermediate’ states of the overall system - partial writes,
> inconsistencies between pairs of files, etc. If you want a fully
> consistent backup then you will need to quiesce any processes that
> modify the filesystems somehow.
>
>> Presumably rsync opens the file and sends it. If changes occur in the
>> meantime, doesn't matter, thanks to copy-on-write.
>
> rsync just uses read() to read the file. Any changes made while it’s
> half way through will be reflected in subsequent reads.

You sure about that? Try out the scenario on a single file using
your favorite text editor.

--
You know you've been sitting in front of your Lisp machine too long
when you go out to the junk food machine and start wondering how to
make it give you the CADR of Item H so you can get that yummie
chocolate cupcake that's stuck behind the disgusting vanilla one.

Jasen Betts

Jul 22, 2015, 3:29:01 AM

in one terminal

$ while yes AAAAAAAAAAAAAAA | dd conv=notrunc bs=20K count=100 of=a
do sleep 0.01
yes BBBBBBBBBBBBBBB | dd conv=notrunc bs=20K count=100 of=a
sleep 0.01
done

in another terminal, rsync the file "a" over a slow connection

I got lines with BBB, then AAA, then BBB, then AAA again

$ od -ta a
0000000 B B B B B B B B B B B B B B B nl
*
1540000 A A A A A A A A A A A A A A A nl
*
1560000 B B B B B B B B B B B B B B B nl
*
2220000 A A A A A A A A A A A A A A A nl
*
6070000


Richard Kettlewell

Jul 22, 2015, 4:33:46 AM

Chris Ahlstrom <OFee...@teleworm.us> writes:
> Richard Kettlewell wrote this copyrighted missive and expects royalties:
>> Chris Ahlstrom <OFee...@teleworm.us> writes:
>>> Bengt T wrote this copyrighted missive and expects royalties:
>>>> Plan to use rsync to regularly, e. g. once per hour triggered by cron, in
>>>> making backup from PC1 to PC2.
>>>>
>>>> PC1 is constantly used, resulting in file changes (creation, deletion,
>>>> modification) all the time. Even when rsync/backup procedure is
>>>> started.
>>>>
>>>> My question is if and how this will work with rsync as described
>>>> above? E.g. how is an opened file handled by rsync?
>>
>> If the file changes while rsync is reading it then you risk getting an
>> inconsistent backup. It has no idea that anything else is accessing the
>> file.
>
> Rsync doesn't need to know that. The kernel knows it, preserves
> the handle the rsync is using, and makes a copy of the (modified) file.
>
> When rsync closes the file handle, the kernel removes that old file,
> and only the modified one exists.

This is not an accurate description of Unix IO semantics.

>> This is a good use case for LVM snapshots, although you still risk
>> backing up ‘intermediate’ states of the overall system - partial writes,
>> inconsistencies between pairs of files, etc. If you want a fully
>> consistent backup then you will need to quiesce any processes that
>> modify the filesystems somehow.
>>
>>> Presumably rsync opens the file and sends it. If changes occur in the
>>> meantime, doesn't matter, thanks to copy-on-write.
>>
>> rsync just uses read() to read the file. Any changes made while it’s
>> half way through will be reflected in subsequent reads.
>
> You sure about that? Try out the scenario on a single file using
> your favorite text editor.

Yes, I’m sure about it, based on decades of general experience with Unix
platforms, use and development of applications that depend on this
behavior (e.g. anything with a single-file database), reading
documentation (e.g. SUS), reading of implementations, etc.

What are you basing your view on?

--
http://www.greenend.org.uk/rjk/

crankypuss

Jul 22, 2015, 6:06:22 AM

Richard Kettlewell wrote:

> Chris Ahlstrom <OFee...@teleworm.us> writes:
>> Bengt T wrote this copyrighted missive and expects royalties:
>>> Plan to use rsync to regularly, e. g. once per hour triggered by
>>> cron, in making backup from PC1 to PC2.
>>>
>>> PC1 is constantly used, resulting in file changes (creation,
>>> deletion, modification) all the time. Even when rsync/backup
>>> procedure is started.
>>>
>>> My question is if and how this will work with rsync as described
>>> above? E.g. how is an opened file handled by rsync?
>
> If the file changes while rsync is reading it then you risk getting an
> inconsistent backup. It has no idea that anything else is accessing
> the file.
>
> This is a good use case for LVM snapshots, although you still risk
> backing up ‘intermediate’ states of the overall system - partial
> writes,
> inconsistencies between pairs of files, etc. If you want a fully
> consistent backup then you will need to quiesce any processes that
> modify the filesystems somehow.

Now take that logic and apply it to hibernation, in particular
'uswsusp', and consider the integrity guarantees that can possibly be
made regarding the snapshot and subsequent restoration. Taking a
snapshot of memory implies taking a snapshot of open file handles, and
there you have a recipe for trouble... the easiest way to get 'fsck' to
bitch about unclosed files and "fix" them is to hibernate one install
then run another install on the same system when there is a shared
partition used by both. (I wonder why information about what files are
open is stored on the media, that just seems messed up, like some kind
of anti-atomicity strategy or something.)

>> Presumably rsync opens the file and sends it. If changes occur in
>> the meantime, doesn't matter, thanks to copy-on-write.
>
> rsync just uses read() to read the file. Any changes made while it’s
> half way through will be reflected in subsequent reads.

It sounds to me as though Chris was thinking about btrfs rather than
other filesystems like ext2/3/etc; or maybe not, I'm not familiar with
how btrfs is implemented.

--
totally-portable-software.blogspot.com

Chris Ahlstrom

Jul 22, 2015, 6:38:01 AM

Jasen Betts wrote this copyrighted missive and expects royalties:

What does that prove? dd is opening and closing "a" a number of times.

rsync just grabbed "a" as it existed just after the previous close.

--
Time is the most valuable thing a man can spend.
-- Theophrastus

Chris Ahlstrom

Jul 22, 2015, 6:45:47 AM

Richard Kettlewell wrote this copyrighted missive and expects royalties:

> Chris Ahlstrom <OFee...@teleworm.us> writes:
>> Richard Kettlewell wrote this copyrighted missive and expects royalties:
>>> Chris Ahlstrom <OFee...@teleworm.us> writes:
>>>> Bengt T wrote this copyrighted missive and expects royalties:
>>>>> Plan to use rsync to regularly, e. g. once per hour triggered by cron, in
>>>>> making backup from PC1 to PC2.
>>>>>
>>>>> PC1 is constantly used, resulting in file changes (creation, deletion,
>>>>> modification) all the time. Even when rsync/backup procedure is
>>>>> started.
>>>>>
>>>>> My question is if and how this will work with rsync as described
>>>>> above? E.g. how is an opened file handled by rsync?
>>>
>>> If the file changes while rsync is reading it then you risk getting an
>>> inconsistent backup. It has no idea that anything else is accessing the
>>> file.
>>
>> Rsync doesn't need to know that. The kernel knows it, preserves
>> the handle the rsync is using, and makes a copy of the (modified) file.
>>
>> When rsync closes the file handle, the kernel removes that old file,
>> and only the modified one exists.
>
> This is not an accurate description of Unix IO semantics.

Whatever. It's just a quick-and-dirty statement.

>>> This is a good use case for LVM snapshots, although you still risk
>>> backing up ‘intermediate’ states of the overall system - partial writes,
>>> inconsistencies between pairs of files, etc. If you want a fully
>>> consistent backup then you will need to quiesce any processes that
>>> modify the filesystems somehow.
>>>
>>>> Presumably rsync opens the file and sends it. If changes occur in the
>>>> meantime, doesn't matter, thanks to copy-on-write.
>>>
>>> rsync just uses read() to read the file. Any changes made while it’s
>>> half way through will be reflected in subsequent reads.
>>
>> You sure about that? Try out the scenario on a single file using
>> your favorite text editor.
>
> Yes, I’m sure about it, based on decades of general experience with Unix
> platforms, use and development of applications that depend on this
> behavior (e.g. anything with a single-file database), reading
> documentation (e.g. SUS), reading of implementations, etc.
>
> What are you basing your view on?

Certainly not "decades of experience" with all those applications you
mention that are not rsync. In any case, adducing "experience"
is not a sound application of logic. Better to start how rsync works
and painted a detailed scenario.

And, to be sure, I'm not basing my conjecture on experience with rsync.
Just the text-editor experiment I mentioned.

Now, I would guess that rsync opens the file once, to calculate if it needs
to transfer it. If the calculation indicates the need to transfer it,
the file may have changed again and again before, and while, rsync transfers
it.

If the calculation indicates no need to transfer, well, you'll just have to
wait for rsync to catch it the next time around.

But that's only my conjecture about how rsync might work, at a high level.
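
You can at least watch that decision from the outside with a dry run.
A hypothetical example (the file name is invented; -n means dry-run and
-i itemizes what rsync thinks it needs to do):

$ rsync -ain src/ dst/
>f.st...... somefile

where ">f" marks a file that would be transferred and "s" and "t" flag
a size and mtime difference; see --itemize-changes in rsync(1).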

--
Dimensions will always be expressed in the least usable term, convertible
only through the use of weird and unnatural conversion factors. Velocity,
for example, will be expressed in furlongs per fortnight.

Chris Ahlstrom

Jul 22, 2015, 6:47:14 AM

crankypuss wrote this copyrighted missive and expects royalties:

I'm not thinking about file-systems, at all. Not that I'd claim they have
no influence on how copy-on-write works. I simply do not know enough.

Words to live by:

--
You can observe a lot just by watching.
-- Yogi Berra

Jasen Betts

Jul 22, 2015, 7:58:52 AM

On 2015-07-22, Chris Ahlstrom <OFee...@teleworm.us> wrote:
> Jasen Betts wrote this copyrighted missive and expects royalties:

>>> You sure about that? Try out the scenario on a single file using
>>> your favorite text editor.
>>
>> in one terminal
>>
>> $ while yes AAAAAAAAAAAAAAA | dd conv=notrunc bs=20K count=100 of=a
>> do sleep 0.01
>> yes BBBBBBBBBBBBBBB | dd conv=notrunc bs=20K count=100 of=a
>> sleep 0.01
>> done
>>
>> in another terminal, rsync the file "a" over a slow connection
>>
>> I got lines with BBB, then AAA, then BBB, then AAA again
>>
>> $ od -ta a
>> 0000000 B B B B B B B B B B B B B B B nl
>> *
>> 1540000 A A A A A A A A A A A A A A A nl
>> *
>> 1560000 B B B B B B B B B B B B B B B nl
>> *
>> 2220000 A A A A A A A A A A A A A A A nl
>> *
>> 6070000
>
> What does that prove? dd is opening and closing "a" a number of times.

Can you read dd output?

the file only ever had all AAAA or all BBBB or one part AAAA and one part
BBBB (or vice versa) during the writing. it never had 4 distinct parts.

It proves that rsynced copies of files which are being written may appear as
they never were.

> rsync just grabbed "a" as it existed just after the previous close.

It grabbed "a" as it never existed.

--
umop apisdn

Richard Kettlewell

Jul 22, 2015, 8:36:39 AM

Jasen Betts <ja...@xnet.co.nz> writes:
> On 2015-07-22, Chris Ahlstrom <OFee...@teleworm.us> wrote:
>> Jasen Betts wrote this copyrighted missive and expects royalties:
>>> I got lines with BBB, then AAA, then BBB, then AAA again
>>>
>>> $ od -ta a
>>> 0000000 B B B B B B B B B B B B B B B nl
>>> *
>>> 1540000 A A A A A A A A A A A A A A A nl
>>> *
>>> 1560000 B B B B B B B B B B B B B B B nl
>>> *
>>> 2220000 A A A A A A A A A A A A A A A nl
>>> *
>>> 6070000
>>
>> What does that prove? dd is opening and closing "a" a number of times.
>
> Can you read dd output?

ITYM ‘od output’ l-)

--
http://www.greenend.org.uk/rjk/

Chris Ahlstrom

Jul 22, 2015, 9:18:50 AM

> Can you read dd output?

What do you mean?

> the file only ever had all AAAA or all BBBB ...

How can it ever have only BBBB, given the notrunc option?
However, just running that code above, and Ctrl-C'ing it, I did see
a case where it was all BBBB.

> or one part AAAA and one part
> BBBB (or vice versa) during the writing. it never had 4 distinct parts.

None of that makes any sense. How can BBBBBB appear first, ever?
And yet it happened for me (once only).

And how does the "yes AAAAAAAAAAA" ever terminate?

>> It proves that rsynced copies of files which are being written may appear as
>> they never were.

That seems to be the case. But add a couple of echoes, one after the while
line, and one after the done line, and you'll see some weird stuff.
It's beyond my current knowledge, but there seems to be a lot going on
behind the scenes.

--
paycheck:
The weekly $5.27 that remains after deductions for federal
withholding, state withholding, city withholding, FICA,
medical/dental, long-term disability, unemployment insurance,
Christmas Club, and payroll savings plan contributions.

David W. Hodgins

Jul 22, 2015, 11:23:10 AM

On Wed, 22 Jul 2015 04:33:45 -0400, Richard Kettlewell <r...@greenend.org.uk> wrote:

> Yes, I’m sure about it, based on decades of general experience with Unix
> platforms, use and development of applications that depend on this
> behavior (e.g. anything with a single-file database), reading
> documentation (e.g. SUS), reading of implementations, etc.

It depends on what program is being used to update a file. Most text
editors create a copy of the file, then unlink/rename the copy, when
saving changes. There are some that use direct i/o. It's a bad idea to
create a backup while the system is in use, as files in databases have
to be kept in sync or the database will be unusable. Learned that one
the hard way, on an IBM mainframe DB/2 system when the drive with the
catalog on it died. That one took me about 80 hours straight to get the
system running again.

Regards, Dave Hodgins

--
Change nomail.afraid.org to ody.ca to reply by email.
(nomail.afraid.org has been set up specifically for
use in usenet. Feel free to use it yourself.)

Richard Kettlewell

Jul 22, 2015, 6:23:55 PM

"David W. Hodgins" <dwho...@nomail.afraid.org> writes:
> Richard Kettlewell <r...@greenend.org.uk> wrote:
>> Yes, I’m sure about it, based on decades of general experience with Unix
>> platforms, use and development of applications that depend on this
>> behavior (e.g. anything with a single-file database), reading
>> documentation (e.g. SUS), reading of implementations, etc.
>
> It depends on what program is being used to update a file. Most text
> editors create a copy of the file, then unlink/rename the copy, when
> saving changes.

Yes, rename-into-place is a common strategy in all sorts of
applications. (Although I’d strace your favourite editor before being
100% certain that it uses it.) But the OP didn’t specify what programs
were modifying their files or what strategies they use, and there are
plenty of other common strategies too.
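
Sketchily, and with invented names, the rename-into-place pattern
amounts to:

$ printf '%s\n' "new contents" > target.tmp   # write the complete new version
$ mv target.tmp target                        # rename(2) swaps it in atomically

A reader like rsync that opens "target" then sees either the old
version or the new one, whole, never a mixture; a reader that already
had the old file open keeps reading the old inode.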

--
http://www.greenend.org.uk/rjk/

Jasen Betts

Jul 23, 2015, 2:58:50 AM

I meant 'od' output, what do you mean?

>> the file only ever had all AAAA or all BBBB ...
>
> How can it ever have only BBBB, given the notrunc option?

the notrunc option does something other than what you expect.

without notrunc the file is truncated and rewritten, with it the file
is overwritten instead.
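
A quick way to see the difference on a scratch file:

$ printf '0123456789\n' > a
$ printf 'XX\n' | dd conv=notrunc of=a 2>/dev/null
$ cat a    # overwritten in place, the tail survives
XX
3456789
$ printf 'XX\n' | dd of=a 2>/dev/null
$ cat a    # truncated first, only the new data remains
XX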

> However, just running that code above, and Ctrl-C'ing it, I did see
> a case where it was all BBBB.
>
>> or one part AAAA and one part
>> BBBB (or vice versa) during the writing. it never had 4 distinct parts.
>
> None of that makes any sense. How can BBBBBB appear first, ever?
> And yet it happened for me (once only).

The file is repeatedly overwritten from start to end with lines of
AAAA and then rewritten from the start with lines of BBBB.
the line length is 16 bytes so that od can compress the repeated lines.

> And how does the "yes AAAAAAAAAAA" ever terminate?

a SIGPIPE signal(7) some time soon after dd completes the count of
blocks and closes stdin.
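
You can see it in the exit status if you're curious. In bash the
per-command statuses of a pipeline land in PIPESTATUS, and 141 is
128 + 13, i.e. killed by SIGPIPE (head stands in here for dd as a
reader that stops early):

$ yes AAAAAAAAAAAAAAA | head -n 1 > /dev/null
$ echo ${PIPESTATUS[0]}
141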


--
umop apisdn

Chris Ahlstrom

Jul 23, 2015, 6:23:21 AM

Jasen Betts wrote this copyrighted missive and expects royalties:

> On 2015-07-22, Chris Ahlstrom <OFee...@teleworm.us> wrote:
>> Jasen Betts wrote this copyrighted missive and expects royalties:
>>
>>>>> I got lines with BBB, then AAA, then BBB, then AAA again
>>>>>
>>>>> $ od -ta a
>>>>> 0000000 B B B B B B B B B B B B B B B nl
>>>>> *
>>>>> 1540000 A A A A A A A A A A A A A A A nl
>>>>> *
>>>>> 1560000 B B B B B B B B B B B B B B B nl
>>>>> *
>>>>> 2220000 A A A A A A A A A A A A A A A nl
>>>>> *
>>>>> 6070000
>>>>
>>> the file only ever had all AAAA or all BBBB ...
>>
>> How can it ever have only BBBB, given the notrunc option?
>
> the notrunc option does something other than what you expect.
>
> without notrunc the file is truncated and rewritten, with it the file
> is overwritten instead.

Okay.

>> However, just running that code above, and Ctrl-C'ing it, I did see
>> a case where it was all BBBB.
>>
>>> or one part AAAA and one part
>>> BBBB (or vice versa) during the writing. it never had 4 distinct parts.
>>
>> None of that makes any sense. How can BBBBBB appear first, ever?
>> And yet it happened for me (once only).
>
> The file is repeatedly overwritten from start to end with lines of
> AAAA and then rewritten from the start with lines of BBBB.
> the line length is 16 bytes so that od can compress the repeated lines.
>
>> And how does the "yes AAAAAAAAAAA" ever terminate?
>
> a SIGPIPE signal(7) some time soon after dd completes the count of
> blocks and closes stdin.

Ahhh, did not know that, thanks.

Definitely a lot that goes on "behind the scenes".

--
EARL GREY PROFILES

NAME: Jean-Luc Perriwinkle Picard
OCCUPATION: Starship Big Cheese
AGE: 94
BIRTHPLACE: Paris, Terra Sector
EYES: Grey
SKIN: Tanned
HAIR: Not much
LAST MAGAZINE READ:
Lobes 'n' Probes, the Ferengi-Betazoid Sex Quarterly
TEA: Earl Grey. Hot.

EARL GREY NEVER VARIES.

William Unruh

Jul 23, 2015, 6:58:57 PM

rsync will warn you that the file has changed since it was checked. No,
rsync does not lock files, AFAIK. So the backup of a file in use may not
be a good backup.
Ideally, backup should occur only when files are not changing.

crankypuss

Jul 24, 2015, 5:39:56 AM

I'm pretty sure it does something with temporary symlinks (having seen
an error message about its being unable to create one under some funky
set of conditions) but haven't looked at the source code, I get confused
enough when I'm reading my own source code.

> Ideally backup should occur only when files are not changing.

That works well enough for use on my netbook where I'm the only user,
but backing up a server seems to be both possible and common practice;
to me that means that however rsync does it probably works pretty well
and I could maybe learn something interesting from how it's done if I
had the time and inclination to dig around looking for more
understanding.

--
totally-portable-software.blogspot.com

William Unruh

Jul 24, 2015, 8:06:54 AM

On 2015-07-24, crankypuss <inv...@invalid.invalid> wrote:
> William Unruh wrote:
>
>> On 2015-07-21, Bengt T <bengt_t...@hotmail.com> wrote:
>>> Plan to use rsync to regularly, e. g. once per hour triggered by
>>> cron, in making backup from PC1 to PC2.
>>>
>>> PC1 is constantly used, resulting in file changes (creation,
>>> deletion, modification) all the time. Even when rsync/backup
>>> procedure is started.
>>>
>>> My question is if and how this will work with rsync as described
>>> above? E. g. how is an opened file handled by rsync?
>>
>> rsync will warn you that the file has changed since it was checked. No,
>> rsync does not lock files, AFAIK. So the backup of a file in use may not
>> be a good backup.
>
> I'm pretty sure it does something with temporary symlinks (having seen
> an error message about its being unable to create one under some funky
> set of conditions) but haven't looked at the source code, I get confused
> enough when I'm reading my own source code.

Yes. rsync creates a temporary file on the destination, and
relinks/moves the file at the end. However, it does not, AFAIK, do
anything at the source end.
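
Something like this, going by the options in rsync(1); the dot-file
name below is only indicative of what I have seen, so don't rely on
the exact pattern:

$ rsync -a src/ dst/              # builds dst/.file.XXXXXX, renames it over dst/file
$ rsync -a --inplace src/ dst/    # writes directly into dst/file, no temporary
$ rsync -a -T /tmp src/ dst/      # -T/--temp-dir: keep the temporaries elsewhere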

>
>> Ideally backup should occur only when files are not changing.
>
> That works well enough for use on my netbook where I'm the only user,
> but backing up a server seems to be both possible and common practice;
> to me that means that however rsync does it probably works pretty well
> and I could maybe learn something interesting from how it's done if I
> had the time and inclination to dig around looking for more
> understanding.

That "well enough" does I believe mean that there is a chance that the
destination is inconsistant if the source is changed during the
download. However I cannot find anything in the rsync man page which
specifically addresses this. At best you will get a copy as it existed
at one specific instance, and if a change was made a millisecond later
it will not be included.

See also
http://unix.stackexchange.com/questions/90245/is-using-rsync-while-source-is-being-updated-safe



>

Jasen Betts

Jul 24, 2015, 9:29:31 AM

On 2015-07-24, crankypuss <inv...@invalid.invalid> wrote:
> William Unruh wrote:
>
>> rsync will warn you that the file has changed since it was checked. No,
>> rsync does not lock files, AFAIK. So the backup of a file in use may not
>> be a good backup.

Only if the file changes size; internal changes seem to go unnoticed.

> I'm pretty sure it does something with temporary symlinks (having seen
> an error message about its being unable to create one under some funky
> set of conditions) but haven't looked at the source code, I get confused
> enough when I'm reading my own source code.

Nah, but --delete-after fails if you replace a directory with a symlink.
Maybe you were thinking of that?

--
umop apisdn

crankypuss

Jul 25, 2015, 5:11:14 AM

I'm juggling so much shit that it's falling all over me, no way I'm
going to remember the exact circumstance of that one message, but as I
recall it happened just after fsck helped me out by cleaning up open
file handles that were open because the install that had those files
open was in hibernate. I've decided that when you have one system with
several distros/versions installed and you're switching between them by
rebooting, and you have a data partition shared between them, the
hibernate support provided by uswsusp is not as useful as one might
wish. Well, you know, unless you don't mind a little data corruption
between friends. Anyway since hibernate doesn't work very well in the
multi-distro case, I'll just not use it and end up saving some time.


--
totally-portable-software.blogspot.com

William Unruh

Jul 25, 2015, 9:17:26 AM

Hibernate with multi distro? I would never try that. Hibernate is an
instantaneous snapshot of the system, and I would never trust anything to
have completed, including disk writes.

>
>

crankypuss

Jul 26, 2015, 6:43:37 AM

Did I say I was trusting it? On the other hand by trying it, some flaws
became apparent, and I learned a little more about how things work. I
would say that the developers should have tried it before releasing it.
Probably they did. Apparently they figured it was close enough. A lot
of stuff seems to be that way. When applications are developed on a
framework that does not support graceful application checkpointing,
perhaps global checkpointing is what one falls back to since it's the
best one can do without a major shirtsleeve-rollup. I don't know about
you or others, but I remember that Unix began as an ad-hoc deal, and I
can see from parameters and options that it never truly moved past that
phase even as linux. Not that such is necessarily bad, it's just a
stage of industry development; first we get new hardware toys to play
with, then we spew "interfaces" so people can find out how the hardware
will really be used, then apps are developed using the new capabilities
and the whole thing becomes a mess, then there's a consolidation and
redesign phase, then we rinse and repeat. I'm thinking that Android is
a first effort toward consolidation and I'm not liking it much for many
reasons. So it goes, right? Just because a non-working hibernate
facility is offered, that doesn't mean we have to use it, even though it
might point out some filesystem design issues.


--
totally-portable-software.blogspot.com

William Unruh

Jul 26, 2015, 7:37:32 AM

On 2015-07-26, crankypuss <inv...@invalid.invalid> wrote:
....
> best one can do without a major shirtsleeve-rollup. I don't know about
> you or others, but I remember that Unix began as an ad-hoc deal, and I

Unix did not begin as an "ad hoc deal" any more so than any other
operating system did. It was a commercial operation.

> can see from parameters and options that it never truly moved past that
> phase even as linux.

No idea what that sentence is supposed to mean.

> stage of industry development; first we get new hardware toys to play
> with, then we spew "interfaces" so people can find out how the hardware
> will really be used, then apps are developed using the new capabilities
> and the whole thing becomes a mess, then there's a consolidation and

That is how all operating systems develop. That is how they MUST
develop if you are not living in a planned economy which freezes
everything forever.

> redesign phase, then we rinse and repeat. I'm thinking that Android is
> a first effort toward consolidation and I'm not liking it much for many
> reasons. So it goes, right? Just because a non-working hibernate
> facility is offered, that doesn't mean we have to use it, even though it
> might point out some filesystem design issues.

Hibernation works. Hibernation in which you then want to use data
that another OS has been using when hibernated does not, cannot, and
will never work by definition of hibernation. It is not a filesystem
design problem. It is a failure to understand what hibernation means.


>
>

Richard Kettlewell

Jul 26, 2015, 8:15:33 AM

William Unruh <un...@invalid.ca> writes:
> Unix did not begin as an "ad hoc deal" any more so than any other
> operating system did. It was a commercial operation.

“citation needed”

> Hibernation works. Hibernation in which you then want to use data
> that another OS has been using when hibernated does not, cannot, and
> will never work by definition of hibernation. It is not a filesystem
> design problem. It is a failure to understand what hibernation means.

Hibernate then boot into another OS is analogous to pulling the power
and then rebooting. Modern filesystems cope well with this, within
inherent limitations like not being able to retrieve data written by
processes but not yet committed to disk. (In Linux modern means ‘at
least ext3’ here). Applications of course may cope less well since they
have no way to tell what’s going on.

--
http://www.greenend.org.uk/rjk/

Aragorn

Jul 26, 2015, 8:36:57 AM

On Sunday 26 Jul 2015 13:35, William Unruh conveyed the following to
alt.os.linux...

> On 2015-07-26, crankypuss <inv...@invalid.invalid> wrote:
>
>> [...] I remember that Unix began as an ad-hoc deal [...]
>
> Unix did not begin as an "ad hoc deal" any more so than any other
> operating system did. It was a commercial operation.

That's not entirely correct, Bill. The original AT&T Unix was developed
as a hobby project ─ to be more precise, so as to have an operating
system for a DEC PDP-7 upon which Dennis Ritchie, Ken Thompson, Joe
Ossanna and Douglas McIlroy could then play games which they had written
themselves and which they had previously been playing on a Multics
mainframe; games which of course took CPU time away from the mainframe.

The system was modeled after Multics ─ but scaled down ─ and it didn't
even have a name at first. The name Unics ("Uniplexed Information and
Computation System") ─ spelled like this ─ was something one of their
colleagues came up with, as an incorrect pun on the name Multics
("Multiplexed Information and Computation System"), suggesting that
"Unics" (which sounds like the word "eunuchs") was an emasculated
Multics. The pun was incorrect because Unix was not "uniplexed".

The name stuck however ─ it was later on spelled "Unix" ─ and after the
AT&T brass found out about it, Thompson and Ritchie convinced them that
Unix could be used for the processing of patent documents. As such, the
system was used internally at AT&T Bell Labs for exactly that purpose,
but as AT&T Bell Labs was a US government-subsidized company, AT&T could
not commercially sell Unix, so they gave copies of the source code away
to whoever was interested, including the Berkeley University. Berkeley
then started making modifications to the system and commercially
offering it.

Nowadays, UNIX ─ spelled in all-uppercase ─ is not actually an operating
system but a mere trademark, owned by The Open Group [*], and any
operating system can apply for UNIX certification, even though the
validation test is prohibitively expensive for GNU/Linux distributions
and the various BSD-based distributions.

Technically speaking, even OpenIndiana does not have UNIX certification,
even though it's basically the same thing as Solaris, which does have
the certification. And, ironically, Apple OS X, which is the least
UNIX-like ─ at least, in the traditional sense ─ of all UNIX-family
operating systems, does have UNIX certification. But then again, Apple
can afford the validation tests.


[*] The trademark was owned by SCO ─ the real Santa Cruz Operation,
not the renamed Caldera Systems ─ and donated to The Open Group.
The AT&T Unix patents and copyrights were sold by AT&T to Novell,
and Novell was itself in turn acquired by Attachmate, although
its patent portfolio was initially sold to CPTN Holdings LLC,
which is a holding headed by Microsoft. The sale of the patents
has to the best of my knowledge not been finalized yet due to
antitrust concerns. As far as I know, the matter is still under
investigation at this point in time.

--
= Aragorn =

http://www.linuxcounter.net - registrant #223157

Eef Hartman

Jul 26, 2015, 2:56:18 PM

William Unruh <un...@invalid.ca> wrote:
> Unix did not begin as an "ad hoc deal" any more so than any other
> operating system did. It was a commercial operation.

No, Unix began as an in-house operating system at Western Bell
laboratories. AT&T, of which Western Bell was a part, wasn't even
ALLOWED to sell it, but they did give at cost price (tape and postage)
copies to universities. One of those was the University of California,
Berkeley, which grew into the BSD (Berkeley Software Distribution)
version of Unix, of which the sources are free.

Much later, after the split-up of AT&T, they _did_ develop a
commercial version of Unix (System III and V), from which most modern
Unix versions (like HPUX, Sun Solaris and even IBM's AIX) have been
developed (but the BSD derived ones are still open-source, although
Berkeley has stopped development).

Eef Hartman

Jul 26, 2015, 3:18:49 PM

Aragorn <thor...@telenet.be.invalid> wrote:
> That's not entirely correct, Bill. The original AT&T Unix was developed
> as a hobby project ─ to be more precise, so as to have an operating
> system for a DEC PDP-7

Actually, it was mostly the project to PORT the (full-assembler) O/S from
that PDP-7 to the newer PDP-11, which is rather complicated for such
totally different hardware: the PDP-7 was 18-bit, the -11 was 16-bit and
the first computer with 8-bit byte addressing.
So they came up with a new language, C, and rewrote the O/S almost
completely IN that language, which made it into the Unix as we now
know it.
The later port from PDP-11 to the DEC VAX was much simpler, as by then
they had reasonably portable code and a compiler.

The VAX version is where later Unix ports were derived from
(version 7/PWB).

John Hasler

Jul 26, 2015, 4:30:37 PM

Eef Hartman writes:
> ...Western Bell laboratories.

You are confounding Bell Laboratories and Western Electric (both
subsidiaries of AT&T).
--
John Hasler
jha...@newsguy.com
Dancing Horse Hill
Elmwood, WI USA

Eef Hartman

Jul 26, 2015, 4:42:30 PM

John Hasler <jha...@newsguy.com> wrote:
> Eef Hartman writes:
>> ...Western Bell laboratories.
>
> You are confounding Bell Laboratories and Western Electric (both
> subsiduaries of AT&T).

You're right, thanks for the correction.

Aragorn

Jul 26, 2015, 11:09:01 PM

On Sunday 26 Jul 2015 22:41, Eef Hartman conveyed the following to
alt.os.linux...
And Unix wasn't the end of the line yet for Dennis Ritchie & Co either,
because once Bell Labs had been split off from AT&T, they started
developing Plan 9, which was very Unix-like but extended that paradigm
by another dimension, i.e. it was a distributed operating system, and it
had a graphical user interface.

I believe that Plan 9 would be Free & Open Source Software these days,
but the original code was proprietary. It was never really successful,
though ─ certainly not as successful as what would ultimately become the
System V and BSD variants of Unix.

Robert Newson

Jul 27, 2015, 2:37:08 AM

On 22/07/15 11:37, Chris Ahlstrom wrote:
> Jasen Betts wrote this copyrighted missive and expects royalties:
>
>> On 2015-07-21, Chris Ahlstrom <OFee...@teleworm.us> wrote:
...
>> in one terminal
>>
>> $ while yes AAAAAAAAAAAAAAA | dd conv=notrunc bs=20K count=100 of=a
>> do sleep 0.01
>> yes BBBBBBBBBBBBBBB | dd conv=notrunc bs=20K count=100 of=a
>> sleep 0.01
>> done
>>
>> in another terminal, rsync the file "a" over a slow connection
>>
>> I got lines with BBB, then AAA, then BBB, then AAA again
>>
>> $ od -ta a
>> 0000000 B B B B B B B B B B B B B B B nl
>> *
>> 1540000 A A A A A A A A A A A A A A A nl
>> *
>> 1560000 B B B B B B B B B B B B B B B nl
>> *
>> 2220000 A A A A A A A A A A A A A A A nl
>> *
>> 6070000
>
> What does that prove? dd is opening and closing "a" a number of times.

No, it doesn't prove that, it shows with a high probability that you
know nothing about *nix systems and how they work.

If ``dd'' is opening and closing file ``a'' a number of times, is that
standard practice? If so, then ``yes'' should also be opening and
closing its output a number of times. BUT once ``yes'' closes its
output, it can never re-open the pipe. This means there is probably
another explanation to fit the facts.

It is more likely that:

The writing of ``dd'' and the reading of ``rsync'' have been interlaced
with the ``rsync'' reading more than the ``dd'' has managed to write.

Possible proof:

The first fact to note is that there are a large number of Bs before a
short number of As before the Bs recommence

Second is that no indication is given of (a) the total number of
processors the machine has, nor (b) how many processes are actually
running on the machine - there are at least 3 programs (the ``yes'',
``dd'' and ``rsync'') and the network handling of the kernel. If there
are not enough processors for one for each of the running programs at
some stage the running programs will have to physically stop (sleep/be
suspended) temporarily as the processor is used to run another program.

Third is that disk access is SLOW. As a result, data is buffered in
memory until physically written to disk

Fourth is that programs will avoid unnecessary overheads to ensure they
complete as fast as possible. Opening and closing files incurs an
overhead which means that ``dd'' is very unlikely to keep closing and
opening the output file.

Fifth is that an unnamed pipe between programs is a circular buffer of
memory of limited length (64 KiB).

Sixth is that programs will not [in general, especially if a file is
[very] large] read the whole of a file at once, but read a buffer full
at a time (which is then processed before the next buffer full is read);
the optimum size of buffer depends upon various factors (eg size of data
that can be physically read in one go from a disk drive; reliability of
packet size across a network, etc).

How can these facts account for the output?

First consider the output of ``yes'' and the input of ``dd'' - unless
the latter is extracting data from the pipe at the same rate (or higher)
as the former is putting it in, at some stage the former will block
waiting for space in the pipe; when such blocking occurs, it will be
put in a sleep mode. Similarly the latter: if it tries to read from an
empty pipe (eg because the former was suspended by the scheduler as the
processor was needed for another program) it will block waiting for data
and be put in a sleep mode.

Next, ``dd'' will spew out to file ``a'' as fast as it can based on as
fast as it can read data from its input and filling its internal buffer
(block). If the disk [write] buffering becomes too much it too might be
blocked until the [write] buffers are written to disk (to ensure data
integrity on the disk).

Now how does the ``rsync'' fit in? Under *nix there is [generally] no
file locking UNLESS the programs mutually agree to it - they set up
their own flags. So, as soon as ``rsync'' starts to read file ``a'' it
will get whatever happens to be available to read at the position it has
got to in the file. If the data is not buffered in memory, it may get
suspended whilst it waits for the data to be read from the disk; and it
may also get suspended whilst it waits to ensure any network data sent
is acknowledged as being received at the other end.

So, if the machine only has 1 or 2 processors (or there are other
programs running which brings the total number of programs running to
greater than the number of processors), at some stage only 1 or 2 of
``yes'', ``dd'' and ``rsync'' can be running at any moment. If dd
manages to process 01540000 bytes of overwriting file ``a'' with Bs
before it gets suspended then rsync runs and processes 0156000 bytes of
file ``a'' before it gets suspended and ``dd'' starts running again to
process a further 0460000 bytes before being suspended and rsync
restarting and processing more than 0440000 bytes, what would the output
of ``rsync'' look like at this stage? Exactly the above.

QED.

> as soon as ``rsync'' opens file ``a''
> rsync just grabbed "a" as it existed just after the previous close.
>
It wasn't closed, it was sync'd to the disk/the dd process got suspended
and rsync ran.

crankypuss

Jul 27, 2015, 4:11:49 AM

William Unruh wrote:

> On 2015-07-26, crankypuss <inv...@invalid.invalid> wrote:
> ....
>> best one can do without a major shirtsleeve-rollup. I don't know
>> about you or others, but I remember that Unix began as an ad-hoc
>> deal, and I
>
> Unix did not begin as an "ad hoc deal" any more so than any other
> operating system did. It was a commercial operation.

Unix was initially developed as an ad-hoc/skunkworks gaming system:
http://sites.fas.harvard.edu/~lib215/reference/history/spacetravel.html

>> can see from parameters and options that it never truly moved past
>> that phase even as linux.
>
> No idea what that sentence is supposed to mean.

Count the number of options supported by 'tar' and/or 'rsync', as
particularly egregious examples. Count the number of options that don't
belong there and were only ever put there because it was the most
expedient path. Figure it out.

It happens to all operating systems, but some development groups are
cohesive enough that those things get noticed and bitched about and
fixed, even if it takes some redesign, before it ever goes out
development's door to testing. Other groups are in a bigassed hurry to
get the next release out and will ship crap if that's what it takes.

>> stage of industry development; first we get new hardware toys to play
>> with, then we spew "interfaces" so people can find out how the
>> hardware will really be used, then apps are developed using the new
>> capabilities and the whole thing becomes a mess, then there's a
>> consolidation and
>
> That is how all operating systems develop. That is how they MUST
> develop if you are not living in a planned economy which freezes
> everything forever.

Yeah, I think they call that the Dalvic emulator, or will once SkyNet
has their way and we're all only able to purchase Dalvic (for example)
machines.

>> redesign phase, then we rinse and repeat. I'm thinking that Android
>> is a first effort toward consolidation and I'm not liking it much for
>> many
>> reasons. So it goes, right? Just because a non-working hibernate
>> facility is offered, that doesn't mean we have to use it, even though
>> it might point out some filesystem design issues.
>
> Hibernation works.

Yet I just explained how it does not, so clearly we have different
understandings of the term "works". You probably do not consider
"works" equivalent to "bulletproof", otoh I've actually worked in the
development industry and don't care to sit over the dunking pool unless
the code is as bulletproof as I can make it: "works" does not mean that
it works under the 3 conditions it was coded to handle, it means it
*works*.

> Hibernation in which you then want to use data
> that another OS has been using when hibernated does not, cannot, and
> will never work by definition of hibernation. It is not a filesystem
> design problem. It is a failure to understand what hibernation means.

Yeah, it's unfortunate that you and others don't seem to grasp that.

If you think it's not a design problem, perhaps you can explain why
information about open file handles is stored on the media; imo it
shouldn't be, ever, media is for fully consistent data, not some notes
the filesystem had to leave itself because *nix filesystems use a
"sharing" model rather than a "server" model.

Whatever, try and have a decent day today, it's all going to work out
fine one way or the other, and when they come to piss on our graves
we'll neither know nor care.

--
totally-portable-software.blogspot.com

Chris Ahlstrom

Jul 27, 2015, 6:27:33 AM

Robert Newson wrote this copyrighted missive and expects royalties:

> On 22/07/15 11:37, Chris Ahlstrom wrote:
>> Jasen Betts wrote this copyrighted missive and expects royalties:
>>
>>> On 2015-07-21, Chris Ahlstrom <OFee...@teleworm.us> wrote:
> ...
>>> $ od -ta a
>>> 0000000 B B B B B B B B B B B B B B B nl
>>> 1540000 A A A A A A A A A A A A A A A nl
>>> 1560000 B B B B B B B B B B B B B B B nl
>>> 2220000 A A A A A A A A A A A A A A A nl
>>> 6070000
>>
>> What does that prove? dd is opening and closing "a" a number of times.
>
> No, it doesn't prove that, it shows with a high probability that you
> know nothing about *nix systems and how they work.

Apart from the needless insult, thanks for the detailed answer below.

> If ``dd'' is opening and closing file ``a'' a number of times, is that
> standard practice? If so, then ``yes'' should also be opening and
> closing its output a number of times. BUT once ``yes'' closes its
> output, it can never re-open the pipe. This means there is probably
> another explanation to fit the facts.
>
> It is more likely that:

Likely? I thought you implied you actually knew!
I'm saving your answer for study later. Thanks.

--
It may be possible to get this condition from within Perl if a signal
handler runs at just the wrong moment. Another point for Chip... :-)
-- Larry Wall in <1997101615...@wall.org>

Chris Ahlstrom

Jul 27, 2015, 6:29:06 AM

crankypuss wrote this copyrighted missive and expects royalties:

> Whatever, try and have a decent day today, it's all going to work out
> fine one way or the other, and when they come to piss on our graves
> we'll neither know nor care.

Words to live... er, die... by.

--
pos += screamnext[pos] /* does this goof up anywhere? */
-- Larry Wall in util.c from the perl source code

Jasen Betts

Jul 27, 2015, 7:58:50 AM

> The first fact to note is that there are a large number of Bs before a
> short number of As before the Bs recommence

Proves nothing; on multitasking systems processes can be interrupted
at any time, so time appears to be lumpy.

> Second is that no indication is given of (a) the total number of
> processors the machine has, nor (b) how many processes are actually
> running on the machine - there are at least 3 programs (the ``yes'',
> ``dd'' and ``rsync'')

rsync is three processes itself, two rsyncs and an ssh.
There's also gnome terminal (which is multithreaded),
bash, dd, yes, Xorg... that was an 8-core running a bunch of other mostly
idle processes at the same time. The more recent test was on a 2-core
machine.

> Third is that disk access is SLOW. As a result, data is buffered in
> memory until physically written to disk

The first test was on an SSD, the second on an SSHD, but I think it was
mostly writing to buffers in both cases.

> First consider the output of ``yes'' and the input of ``dd'' - unless
> the latter is extracting data from the pipe at the same rate (or higher)
> as the former is putting it in, at some stage the former will block
> waiting for space in the pipe; when such blocking occurs, it will be
> put in a sleep mode. Similarly the latter: if it tries to read from an
> empty pipe (eg because the former was suspended by the scheduler as the
> processor was needed for another program) it will block waiting for data
> and be put in a sleep mode.

Conversely, when the buffer is empty/full the appropriate process will be
reawakened by the scheduler.

> Next, ``dd'' will spew out to file ``a'' as fast as it can based on as
> fast as it can read data from its input and filling its internal buffer
> (block). If the disk [write] buffering becomes too much it too might be
> blocked until the [write] buffers are written to disk (to ensure data
> integrity on the disk).

Here I explicitly set the size to 20 KiB.

> Now how does the ``rsync'' fit in? Under *nix there is [generally] no
> file locking UNLESS the programs mutually agree to it - they set up
> their own flags. So, as soon as ``rsync'' starts to read file ``a'' it
> will get whatever happens to be available to read at the position it has
> got to in the file. If the data is not buffered in memory, it may get
> suspended whilst it waits for the data to be read from the disk; and it
> may also get suspended whilst it waits to ensure any network data sent
> is acknowledged as being received at the other end.

AFAIK rsync writes to a pipeline and ssh does the actual network stuff.

> So, if the machine only has 1 or 2 processors (or there are other
> programs running which brings the total number of programs running to
> greater than the number of processors), at some stage only 1 or 2 of
> ``yes'', ``dd'' and ``rsync'' can be running at any moment. If dd
> manages to process 01540000 bytes of overwriting file ``a'' with Bs

Those numbers are octal. (You appear to be aware of that, I'm just
pointing it out for others.)

> before it gets suspended then rsync runs and processes 0156000 bytes of
> file ``a'' before it gets suspended and ``dd'' starts running again to
> process a further 0460000 bytes before being suspended and rsync
> restarting and processing more than 0440000 bytes, what would the output
> of ``rsync'' look like at this stage? Exactly the above.
>
> QED.

That's speculation, and easily disproven.

I ran it again but with four patterns instead of two. There's no evidence
of interleaving.

jasen@frederick:/tmp$ od -ta a
0000000 A A A A A A A A A A A A A A A nl
*
1710000 D D D D D D D D D D D D D D D nl
*
2010000 C C C C C C C C C C C C C C C nl
*
2050000 B B B B B B B B B B B B B B B nl
*
2200000 D D D D D D D D D D D D D D D nl
*
4300000

--
umop apisdn

crankypuss

Jul 28, 2015, 5:55:44 AM

Don't give yourself a hernia over it, without knowing exactly what type
of locking is used by the various threads and processors involved it's
guesswork you can spare yourself; you're no more likely to understand it
the Nth time than you were to understand it the 1st time unless you ask
some specific questions, because unless all locking is preemptive and
atomic, the code is running a crapshoot driven by timing that can't
reliably be distinguished from true randomity.


--
totally-portable-software.blogspot.com

crankypuss

Jul 28, 2015, 6:03:44 AM

It proves nothing but it can be indicative of relative processor loading
or relative thread priority or any number of other things; by itself not
so useful, but sometimes a useful second-level pattern can be seen in
enough data, and if you're shooting a subtle bug in something low-level
every little drop of information is precious.

--
totally-portable-software.blogspot.com

Chris Ahlstrom

Jul 28, 2015, 6:19:37 AM

crankypuss wrote this copyrighted missive and expects royalties:

I quite agree.

Oh, I don't expect to understand all that's going on with this particular
(and quite artificial and complex, apparently) use-case. However, just
taking the effort will help me learn something.

Most of my programming on Linux has been limited to the very occasional GUI,
a lot of data-crunching console apps, and some sockets (Berkeley,
Winsockets, and Boost.ASIO), synchronization objects, and pthreads. I've
spent a lot of time poring over "Advanced Programming in the UNIX
Environment", but even that tome doesn't seem to come close to presenting
the complexities of UNIX.

But, I try. :-)

--
If you continually give you will continually have.

crankypuss

Jul 29, 2015, 6:03:13 AM

Chris Ahlstrom wrote:

> crankypuss wrote this copyrighted missive and expects royalties:
>
>> Chris Ahlstrom wrote:
>>
>>> Robert Newson wrote this copyrighted missive and expects royalties:

[massive example/explanation snipped]

>>> I'm saving your answer for study later. Thanks.
>>
>> Don't give yourself a hernia over it, without knowing exactly what
>> type of locking is used by the various threads and processors
>> involved it's guesswork you can spare yourself; you're no more likely
>> to understand it the Nth time than you were to understand it the 1st
>> time unless you ask some specific questions, because unless all
>> locking is preemptive and atomic, the code is running a crapshoot
>> driven by timing that can't reliably be distinguished from true
>> randomity.
>
> I quite agree.
>
> Oh, I don't expect to understand all that's going on with this
> particular
> (and quite artificial and complex, apparently) use-case.

There is "understanding all that's going on" (meaning *every* detail),
and there's "understanding what's going on" (meaning no usecase will
generate unexpected results).

You're never going to truly understand all that's going on in something
like this unless you can step through the code, even (perhaps
especially, for some of us) if you wrote the code, and the insertion of
instrumentation outside of a simulated environment will almost certainly
disrupt the exact timing conditions from which the example's result
emerged; in a sense, it's a Catch-22 that can only be resolved by total
rigor on one side, or lackadaisical supplication of the gods of chaos on
the other side... epistemology and "pure" logic are not just pretty
pictures on a poster, they are fairly essential in realtime work.

> However,
> just taking the effort will help me learn something.

Maybe, if you don't confuse away whatever understanding you had before
you tried to unscrew the inscrutable.

> Most of my programming on Linux has been limited to the very
> occasional GUI, a lot of data-crunching console apps, and some sockets
> (Berkeley,
> Winsockets, and Boost.ASIO), synchronization objects, and pthreads.
> I've spent a lot of time poring over "Advanced Programming in the UNIX
> Environment", but even that tome doesn't seem to come close to
> presenting the complexities of UNIX.

It's not just UNIX that's complex. They used to teach operating-systems
theory in school, at least they did when I was in school, but maybe they
stopped because so few people could put it into practice, or because the
four jobs in the US to which it applies had already been filled, I have
no idea what they teach these days.

It looks like I'm going to need to re-learn C (last use was... 1994?),
set up some kind of debugging environment, and learn how to build linux
components from source pretty soon... I don't think I've looked inside a
makefile since 2001 and didn't claim any "make" expertise then. Eh,
WTF, it'll only be necessary to do it once, then it can sit there and
get dusty and forgotten while I digress again.

> But, I try. :-)

I used to try, then life got so busy that I didn't have time to try; you
only get to fly the highwire with a net when it doesn't count, but it
always counts when you land on your head. <g>


--
http://totally-portable-software.blogspot.com

Chris Ahlstrom

Jul 29, 2015, 6:22:10 AM

Again, I agree.

>> However,
>> just taking the effort will help me learn something.
>
> Maybe, if you don't confuse away whatever understanding you had before
> you tried to unscrew the inscrutable.

Ain't gonna happen.

>> Most of my programming on Linux has been limited to the very
>> occasional GUI, a lot of data-crunching console apps, and some sockets
>> (Berkeley,
>> Winsockets, and Boost.ASIO), synchronization objects, and pthreads.
>> I've spent a lot of time poring over "Advanced Programming in the UNIX
>> Environment", but even that tome doesn't seem to come close to
>> presenting the complexities of UNIX.
>
> It's not just UNIX that's complex. They used to teach operating-systems
> theory in school, at least they did when I was in school, but maybe they
> stopped because so few people could put it into practice, or because the
> four jobs in the US to which it applies had already been filled, I have
> no idea what they teach these days.

Apparently a lot of Java :-D

> It looks like I'm going to need to re-learn C (last use was... 1994?),
> set up some kind of debugging environment, and learn how to build linux
> components from source pretty soon... I don't think I've looked inside a
> makefile since 2001 and didn't claim any "make" expertise then. Eh,
> WTF, it'll only be necessary to do it once, then it can sit there and
> get dusty and forgotten while I digress again.

I recommend getting used to GNU autotools (automake, autoconf,
autoconf-archive, etc).

It's a bit arcane and complex, but the basics are easy; just cut-and-paste
configure.ac and Makefile.am from an active project.

Some have tried to replace it, but end up duplicating the arcaneness without
solving a lot of the issues that GNU autotools solves. (Cmake, I'm talking
to YOU -- who the hell names a config file with a ".txt" extension???)

Of course, the other option is just a simple Makefile that matches your
system, and your system only.
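
Once configure.ac and Makefile.am exist, the whole dance (assuming the
autotools are installed) is just:

$ autoreconf -i    # generate configure and the Makefile.in files
$ ./configure      # generate Makefiles suited to this system
$ make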

>> But, I try. :-)
>
> I used to try, then life got so busy that I didn't have time to try; you
> only get to fly the highwire with a net when it doesn't count, but it
> always counts when you land on your head. <g>

I've had a little more time on my hands the last few weeks.

--
Finagle's Creed:
Science is true. Don't be misled by facts.

crankypuss

Jul 29, 2015, 9:27:12 AM

Chris Ahlstrom wrote:

> crankypuss wrote this copyrighted missive and expects royalties:
>
>> Chris Ahlstrom wrote:
>>
>>> crankypuss wrote this copyrighted missive and expects royalties:
>>>
>>>> Chris Ahlstrom wrote:
[snip]

>>> However,
>>> just taking the effort will help me learn something.
>>
>> Maybe, if you don't confuse away whatever understanding you had
>> before you tried to unscrew the unscrutable.
>
> Ain't gonna happen.

Let's hope so. I used to believe that, once upon a time. Then I
unscrewed a larger can of inscrutable than I could eat at one sitting.
Necessitated something of a database reorganization. So it goes, as
long as you come out of the deal with a simplified yet consistent
abstraction it's probably a net profit.


>>> Most of my programming on Linux has been limited to the very
>>> occasional GUI, a lot of data-crunching console apps, and some
>>> sockets (Berkeley,
>>> Winsockets, and Boost.ASIO), synchronization objects, and pthreads.
>>> I've spent a lot of time poring over "Advanced Programming in the
>>> UNIX Environment", but even that tome doesn't seem to come close to
>>> presenting the complexities of UNIX.
>>
>> It's not just UNIX that's complex. They used to teach
>> operating-systems theory in school, at least they did when I was in
>> school, but maybe they stopped because so few people could put it
>> into practice, or because the four jobs in the US to which it applies
>> had already been filled, I have no idea what they teach these days.
>
> Apparently a lot of Java :-D

Yeah, java and mysql, those two and a dose of the clap will doubtless
save the world before Tuesday.

>> It looks like I'm going to need to re-learn C (last use was...
>> 1994?), set up some kind of debugging environment, and learn how to
>> build linux components from source pretty soon... I don't think I've
>> looked inside a
>> makefile since 2001 and didn't claim any "make" expertise then. Eh,
>> WTF, it'll only be necessary to do it once, then it can sit there and
>> get dusty and forgotten while I digress again.
>
> I recommend getting used to GNU autotools (automake, autoconf,
> autoconf-archive, etc).
>
> It's a bit arcane and complex, but the basics are easy; just
> cut-and-paste configure.ac and Makefile.am from an active project.
>
> Some have tried to replace it, but end up duplicating the arcaneness
> without
> solving a lot of the issues that GNU autotools solves. (Cmake, I'm
> talking to YOU -- who the hell names a config file with a ".txt"
> extension???)
>
> Of course, the other option is just a simple Makefile that matches
> your system, and your system only.

I'm a bit confused about the whole build process with linux. It seems
like Git is necessary for participating in group projects so learning
and using that seems like it would be the right way to go. Otoh I'm
still making my working environment more portable so I'll be able to do
something on jessie besides sputter curses and beat on the useless gnome
so-called user interface POS. I should also maybe post some more stuff
on the blog in case anyone is interested in playing, but that ain't
happening until the mood strikes.

>>> But, I try. :-)
>>
>> I used to try, then life got so busy that I didn't have time to try;
>> you only get to fly the highwire with a net when it doesn't count,
>> but it always counts when you land on your head. <g>
>
> I've had a little more time on my hands the last few weeks.

I hope that's meant in the good way. I've had a little less time on my
hands lately and I'm starting to get the itch to cut firewood for the
winter, go figure, but the weather has seemed kind of hinky lately and
I'd hate to start Fimbulwinter with an empty woodshed, that would be in
Very Bad Taste.

--
http://totally-portable-software.blogspot.com