
Problem mounting encrypted blu-ray disc or image


B.M.

Jul 4, 2022, 9:00:05 AM
Hello

I have been creating encrypted backups on Blu-ray discs with a bash script for
some years now, but I have encountered a problem mounting some of these discs
(not all of them - in fact, my last backups consist of two discs each, and for
each of them I cannot mount the first disc but I can mount the second one -
which seems strange...). The problem is not age-related (and the discs are not
very old).

In detail, I use the following commands:

IMGFILE=/home/TMP_BKP/backup.img
IMGSIZE=24064000K
IMGLOOP=`losetup -f`

touch $IMGFILE
truncate -s $IMGSIZE $IMGFILE
losetup $IMGLOOP $IMGFILE
cryptsetup luksFormat --cipher aes-xts-plain64 $IMGLOOP
cryptsetup luksOpen $IMGLOOP BDbackup
mkudffs -b 2048 --label $1 /dev/mapper/BDbackup
mount -t udf /dev/mapper/BDbackup /mnt/BDbackup

... then I create my compressed backup files ...

umount /mnt/BDbackup
cryptsetup luksClose /dev/mapper/BDbackup
losetup -d $IMGLOOP

growisofs -dvd-compat -Z /dev/dvd=$IMGFILE; eject


In order to mount the disc, I use:

cryptsetup luksOpen -r /dev/dvd BDbackup
mount -t udf /dev/mapper/BDbackup /mnt/BDbackup


Unfortunately, this fails now for some of my discs and also for the last image
file I created (not deleted yet...):

mount: /mnt/BDbackup: wrong fs type, bad option, bad superblock on /dev/mapper/BDbackup, missing codepage or helper program, or other error.

And dmesg shows:

UDF-fs: warning (device dm-10): udf_load_vrs: No VRS found
UDF-fs: Scanning with blocksize 2048 failed
UDF-fs: warning (device dm-10): udf_load_vrs: No VRS found
UDF-fs: Scanning with blocksize 4096 failed


Any ideas what may be happening here?

Thank you.

Best,
Bernd

Thomas Schmitt

Jul 4, 2022, 2:00:05 PM
Hi,

B.M. wrote that dmesg reports:
> UDF-fs: warning (device dm-10): udf_load_vrs: No VRS found

That's a very early stage of UDF recognition.
Given that you were able to copy files into that UDF image with the help of
the Linux kernel driver, I deem it improbable that the properly decrypted
UDF format would be in such bad shape.

So it looks like decryption uses a wrong key when you mount it again.

Consider exercising the procedure without encryption to make sure
that the resulting $IMGFILE is recognizable UDF and contains the files
which you expect. Just to be sure.
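
For example, a bare-bones sketch of such a test run without encryption
(the size is the one from your script; /mnt/BDtest is just an illustrative
mount point, not something from your setup):

IMGFILE=/home/TMP_BKP/test.img
IMGLOOP=`losetup -f`
truncate -s 24064000K $IMGFILE
losetup $IMGLOOP $IMGFILE
mkudffs -b 2048 --label TEST $IMGLOOP
mount -t udf $IMGLOOP /mnt/BDtest
# ... copy the usual backup files ...
umount /mnt/BDtest
losetup -d $IMGLOOP
# Re-mount read-only and check that the files are all there:
mount -t udf -o loop,ro $IMGFILE /mnt/BDtest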


> my last backups consist of two discs each, and I cannot
> mount the first one but I can mount the second one for each of them

Hard to explain.

I see a theoretical race condition in the sequence of
IMGLOOP=`losetup -f`
and
losetup $IMGLOOP $IMGFILE
but cannot make up a situation where this would lead to silent failure
to encrypt.

What does
file "$IMGFILE"
say ?

Consider using --verbose and/or --debug with the two runs of
cryptsetup luksOpen. Maybe you will see a reason why they are at odds.


Have a nice day :)

Thomas

David Christensen

Jul 4, 2022, 8:20:05 PM
That approach is complex and Linux-specific. I am unclear whether contents
put into a UDF filesystem mounted on a dm-crypt volume inside a
loopback device, mastered and burned to BD-R, opened directly with
cryptsetup(1), and mounted results in an exact copy of the starting
contents (data and metadata). I would want to build a script to do
round-trip validation.


I prefer to do filesystem archive operations at the filesystem level,
and to use standard tools such as tar(1), gzip(1), rsync(1), and
ccencrypt(1). My scripts typically produce an encrypted file (plus an
MD5 file and a SHA256 file), which I then burn to disc using whatever
standard tool the platform provides. Later, I mount the disc using
whatever standard tool that particular platform provides and use
standard tools to access the content.


Suggestions:

1. Put your commands into one or more scripts. I try to keep my
scripts simple, each doing some incremental step of an overall process.
I can run them individually by hand, or feed them into higher level
scripts. Traditional Bourne syntax works well in many cases. I upgrade
to Perl if and when I need more power.

2. Assuming Bourne, enable the 'errexit' and 'nounset' options in the
script:

set -o errexit

set -o nounset

3. Assuming Bourne, enable the 'xtrace' option during development.
Once the script is working reliably, you can comment it out:

set -o xtrace

4. Put good comments in the script. Once a backup script like this is
working, I am loath to touch it. But when I must, climbing the learning
curve again is far easier with good comments.

5. For every command issued that makes some change to the system,
include additional commands to check that the command succeeded. The
script should stop and dump useful debugging information if a check fails.

6. Some shell commands may return before the changes have fully
propagated throughout the system, but the script will blindly charge
ahead at full speed regardless, resulting in race-condition bugs.
Ironically -- the faster the computer, the more likely the problem.
Ideally, devise commands and checks that are smart enough to accommodate
such delays. A quick and dirty work-around is to add a short delay
between the command and the check (a fuller sketch follows after this list):

sleep 3

7. Add option processing to the script. Provide a "-n" option ("dry run").

8. Decompose the overall script into smaller pieces, and work each
piece in turn.

9. Refactor common functionality into reusable components (e.g.
functions, libraries). Beware that KISS coding techniques such as
cut-and-paste can be easier to write, debug, and maintain than advanced
programming techniques.

10. Idempotent scripts are nice, especially when there are failures part
way through a long process:

https://en.wikipedia.org/wiki/Idempotent

11. Do round-trip testing -- e.g. backup, restore to a side location,
and confirm the two are identical (e.g. data and metadata).

12. Write an automated test suite for the script and/or its components.
Perl has good support for test driven development.

13. Write documentation for the script and/or its components. Perl has
support for built-in text documentation, and automation for converting
that into manual pages.
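
As a rough illustration of suggestions 5 and 6 above, here is a Bourne
shell sketch. It borrows the IMGLOOP variable and the mapper name from
the original script, so treat it as a sketch to adapt rather than
drop-in code:

set -o errexit
set -o nounset

cryptsetup luksOpen "$IMGLOOP" BDbackup

# Check the result instead of charging ahead: wait for the mapper node
# to appear, and dump debugging information if it never does.
i=0
while [ ! -b /dev/mapper/BDbackup ]; do
    i=$((i + 1))
    if [ "$i" -gt 10 ]; then
        echo "ERROR: /dev/mapper/BDbackup did not appear" >&2
        ls -l /dev/mapper >&2
        dmsetup ls >&2
        exit 1
    fi
    sleep 1
done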


David

B.M.

Jul 5, 2022, 1:10:05 PM
Well, I can provide you with:

file "$IMGFILE"
LUKS encrypted file, ver 2 [, , sha256] UUID: 835847ff-2cb3-4c6d-aa04-d3b79010a2d3

and I also compared

cryptsetup luksOpen -r --verbose --debug /dev/dvd BDbackup

for two discs, one mounting without any problem, one with the above-mentioned
problem. The differences are only the checksums of the LUKS header, the DM-UUIDs
and the udev cookie values - as expected, I would say.

I also tried mounting again, and here is once again output from dmesg:

mount -t udf /dev/mapper/BDbackup /mnt/BDbackup

[62606.932713] UDF-fs: warning (device dm-10): udf_load_vrs: No VRS found
[62606.932717] UDF-fs: Scanning with blocksize 512 failed
[62606.932860] UDF-fs: warning (device dm-10): udf_load_vrs: No VRS found
[62606.932862] UDF-fs: Scanning with blocksize 1024 failed
[62606.932982] UDF-fs: warning (device dm-10): udf_load_vrs: No VRS found
[62606.932984] UDF-fs: Scanning with blocksize 2048 failed
[62606.933111] UDF-fs: warning (device dm-10): udf_load_vrs: No VRS found
[62606.933113] UDF-fs: Scanning with blocksize 4096 failed

and if I skip VRS (from man mount: Ignore the Volume Recognition Sequence and
attempt to mount anyway.):

mount -t udf -o novrs /dev/mapper/BDbackup /mnt/BDbackup

[62614.207353] UDF-fs: warning (device dm-10): udf_load_vrs: No anchor found
[62614.207358] UDF-fs: Scanning with blocksize 512 failed
[62614.207667] UDF-fs: warning (device dm-10): udf_load_vrs: No anchor found
[62614.207670] UDF-fs: Scanning with blocksize 1024 failed
[62614.207920] UDF-fs: warning (device dm-10): udf_load_vrs: No anchor found
[62614.207922] UDF-fs: Scanning with blocksize 2048 failed
[62614.208202] UDF-fs: warning (device dm-10): udf_load_vrs: No anchor found
[62614.208204] UDF-fs: Scanning with blocksize 4096 failed
[62614.208205] UDF-fs: warning (device dm-10): udf_fill_super: No partition found (1)

So now I'm stuck again, but maybe one little step later...


Reply to David Christensen's comment:

Thank you for your suggestions; most of them are not new to me and I'm
following them already, especially as my solution is basically a bash script
which I also keep in git ;-) Years ago, when I started with this solution, I
tested it and could restore all files successfully. Since the script is
unchanged, I didn't expect this kind of problem now.

Encrypting each file separately would be another way, but for me it has always
been nice to just have to decrypt the whole disc once, on the cli, the same
way as I can do it with my external hard disks and so on...

My use case - which, by the way, might be of interest for others as well - is
backing up all the typical family stuff... files, images, mails and so on. I do
bi-weekly backups alternating between several encrypted HDDs, stored offsite
(at my office desk). Additionally, I started writing encrypted BD discs, just
to have incremental read-only backups, created every 3 - 6 months and stored
offsite in the office as well...


Thanks again for any hints...

(Please add me to your reply in CC as I'm currently not subscribed to the list
anymore.)

Bernd

Thomas Schmitt

Jul 5, 2022, 2:20:04 PM
Hi,

B.M. wrote:
> file "$IMGFILE"
> LUKS encrypted file, ver 2 [, , sha256] UUID: 835847ff-2cb3-4c6d-aa04-d3b79010a2d3

So it did not stay unencrypted by mistake.
(I assume this is one of the unreadable images.)


> mount -t udf -o novrs /dev/mapper/BDbackup /mnt/BDbackup
> [62614.207920] UDF-fs: warning (device dm-10): udf_load_vrs: No anchor found
> [62614.207922] UDF-fs: Scanning with blocksize 2048 failed
> So now I'm stuck again, but maybe one little step later...

Yeah. Reading the anchor is a little bit further in the procedure.
But already the missing VRS is a clear indication that the image or disc
does not get properly decrypted when being mounted for reading.
The VRS was there when it was mounted for writing. Later it's gone.

A UDF filesystem image is supposed to begin with 32 KiB of zeros.
Have a look at /dev/mapper/BDbackup with a hex dumper or editor.
If you see something heavily non-zero, then decryption is the main
suspect.
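
For example (a sketch; opened read-only, so nothing gets changed):

cryptsetup luksOpen -r /dev/dvd BDbackup
dd if=/dev/mapper/BDbackup bs=1K count=32 | hexdump -C | head
cryptsetup luksClose /dev/mapper/BDbackup

If the first 32 KiB are zeros, hexdump shows a single line of zeros
followed by a "*".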


> Thanks again for any hints...

As said, I would try whether UDF works fine without encryption.
If yes, I would try whether dd-ing an unencryptedly populated UDF image
into /dev/mapper/BDbackup yields images which are more reliably readable.

If UDF does not work even unencrypted, then I'd consider ext2 or ISO 9660
as alternatives.
(ext2 would be treated like UDF. For ISO 9660 I'd propose xorriso and
directing its output to the not-mounted /dev/mapper/BDbackup.)

B.M.

Jul 7, 2022, 8:50:05 AM
> > file "$IMGFILE"
> > LUKS encrypted file, ver 2 [, , sha256] UUID:
> > 835847ff-2cb3-4c6d-aa04-d3b79010a2d3
> So it did not stay unencrypted by mistake.
> (I assume this is one of the unreadable images.)

It looks like this for both the readable and the unreadable discs.

> > mount -t udf -o novrs /dev/mapper/BDbackup /mnt/BDbackup
> > [62614.207920] UDF-fs: warning (device dm-10): udf_load_vrs: No anchor
> > found [62614.207922] UDF-fs: Scanning with blocksize 2048 failed
> > So now I'm stuck again, but maybe one little step later...
>
> Yeah. Reading the anchor is a little bit further in the procedure.
> But already the missing VRS is a clear indication that the image or disc
> does not get properly decrypted when being mounted for reading.
> The VRS was there when it was mounted for writing. Later it's gone.
>
> A UDF filesystem image is supposed to bear at its start 32 KiB of zeros.
> Have a look with a hex dumper or editor at /dev/mapper/BDbackup.
> If you see something heavily non-zero, then decryption is the main
> suspect.

This is indeed the case:

9F AC 31 11 1B EA FC 5D 28 A7 41 4E 12 B6 DA D1 | .¬1..êü](§AN.¶ÚÑ
AE 29 C2 30 ED 7D 1E 75 80 2A 1E 3D 4A 45 1C 6F | ®)Â0í}.u.*.=JE.o
78 0C 78 F1 6F 6F FB 62 A6 79 E5 50 CA 67 9F 6E | x.xñooûb¦yåPÊg.n
69 C2 86 C0 36 40 A8 62 2C F5 15 0F 83 79 B8 46 | iÂ.À6@¨b,õ...y¸F
DF 38 E7 33 0D 2D C9 59 20 4C AF 06 B1 37 80 B2 | ß8ç3.-ÉY L¯.±7.²
D8 D3 00 61 69 07 2B 4B 1D 64 20 92 4A B9 72 29 | ØÓ.ai.+K.d .J¹r)
66 65 A8 FE F0 BF D1 1F AC 48 2E 7B 65 42 CB 69 | fe¨þð¿Ñ.¬H.{eBËi
9B DA EC 7E 55 F3 F3 08 82 F5 A9 0F DB D2 BD 6D | .Úì~Uóó..õ©.ÛÒ½m
2B BC 00 F5 A2 68 A2 CF 18 11 77 49 05 18 B1 18 | +¼.õ¢h¢Ï..wI..±.
C1 18 E5 CB 48 F3 C6 FF E5 85 C3 E5 60 F9 01 81 | Á.åËHóÆÿå.Ãå`ù..
96 DA B0 44 07 A4 E6 8D 99 E0 A4 F5 6F 1F F8 2E | .Ú°D.¤æ..à¤õo.ø.
36 B4 80 19 11 1F C3 93 0A EA BC 3B 09 D7 B2 D4 | 6´....Ã..ê¼;.ײÔ

For a readable disc, this looks like you said:
only zeros.


>
> > Thanks again for any hints...
>
> As said, i would try whether UDF works fine without encryption.
> If yes, i would try whether dd-ing an unencryptedly populated UDF image
> into /dev/mapper/BDbackup yields images which are more reliably readable.
>
> If UDF does not work even unencrypted, then i'd consider ext2 or ISO 9660
> as alternatives.
> (ext2 would be treated like UDF. For ISO 9660 i'd propose xorriso and
> directing its output to the not mounted /dev/mapper/BDbackup.)

Why should UDF not work correctly without encryption?

I have an idea what might be the root cause of my problems:
As I mentioned earlier, from the small sample of discs I checked it seems that
if I burned two discs for a backup session instead of one (too much data for
one disc), the first one is unreadable, but the second one is readable.
With respect to the first discs, it might be that during the execution of my
script, files get copied until the filesystem is full. Multi-disc backups are
not handled by my script; I have to intervene manually. I never expected this
to harm my process: I moved some backup files manually and created another
image which I burned on a second disc. So my question is basically:

Might it be possible that when my UDF filesystem gets filled completely, the
encryption gets damaged? Or is my filesystem too large?

# Parameter:
[...]
IMGSIZE=24064000K
# There is an old comment in my script at this line, saying:
# let's try that: 24064000K
# 24438784K according to dvd+rw-mediainfo but creates at
# least sometimes INVALID ADDRESS FOR WRITE;
# alternative according to internet research: 23500M
IMGFILE=/home/TMP_BKP/backup.img
IMGLOOP=`losetup -f`

[...]

# Prepare loopback device:
echo "Preparing loopback device..."
touch $IMGFILE
truncate -s $IMGSIZE $IMGFILE
losetup $IMGLOOP $IMGFILE
echo "Creating encryption, filesystem and mounting:"
cryptsetup luksFormat --cipher aes-xts-plain64 $IMGLOOP
cryptsetup luksOpen $IMGLOOP BDbackup
mkudffs -b 2048 -m bdr --label $1 /dev/mapper/BDbackup
mount -t udf /dev/mapper/BDbackup /mnt/BDbackup

But: it's not only the burned disc which is not readable/mountable, it's also
the image I created before burning.

Thank you once again.

Best,
Bernd

Thomas Schmitt

Jul 7, 2022, 10:10:06 AM
Hi,

i wrote:
> > A UDF filesystem image is supposed to bear at its start 32 KiB of zeros.

B.M. wrote:
> This is indeed the case:
> [...]
> For a readable disc, this looks like you said: only zeros.

So it looks like at least a part of the problem is decryption.


> > If UDF does not work even unencrypted,

> Why should UDF not work correctly without encryption?

It's improbable, I confess.
But for now we are hunting an unexplainable problem, so we have to divide
the situation in order to narrow the set of suspects.

Verifying that your procedure with two UDF images is not the culprit would
help even if the result is boringly OK, as we expect. (Or we are in for
a surprise ...)

After the boring outcome you have the unencrypted images for the next
step, namely to create /dev/mapper/BDbackup with a new empty image file
as base, to copy the images into it (e.g. by dd), and to close it.
Then try whether the two encrypted image files can be properly opened
as /dev/mapper/BDbackup and show mountable UDF filesystems.
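
A sketch of that second step (udf_plain.img stands for one of the
unencrypted UDF images from the first step; the container file is made a
little larger than the image so that the LUKS header still leaves enough
payload):

IMGFILE=/home/TMP_BKP/test_enc.img
IMGLOOP=`losetup -f`
truncate -s 24100000K $IMGFILE
losetup $IMGLOOP $IMGFILE
cryptsetup luksFormat --cipher aes-xts-plain64 $IMGLOOP
cryptsetup luksOpen $IMGLOOP BDbackup
dd if=udf_plain.img of=/dev/mapper/BDbackup bs=1M conv=fsync
cryptsetup luksClose /dev/mapper/BDbackup
losetup -d $IMGLOOP

# Re-open and try to mount:
IMGLOOP=`losetup -f`
losetup $IMGLOOP $IMGFILE
cryptsetup luksOpen -r $IMGLOOP BDbackup
mount -t udf -o ro /dev/mapper/BDbackup /mnt/BDbackup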


> it's not only the burned disc which is not readable/mountable, it's
> also the image I created before burning.

So we can exclude growisofs as culprit.


> Might it be possible, that when my UDF filesystem gets filled completely,
> the encryption get damaged?

That would be a bad bug in the device-mapper code and also such a mishap
is hard to imagine. The UDF driver is supposed not to write outside its
filesystem data range. That range would be at most as large as the payload
of the device mapping.


> Multi-disc backups are not
> handled by my script, I have to intervene manually.

That's always a potential source of problems.

(Around 1999 I addressed the multi-disc problem for CDs, after my manually
maintained scripts grew in number from 2 to 3. Regrettably I see no
simple way to let scdbackup handle your special encryption wish. We'd have
to hack it a bit. And it's for ISO 9660, not UDF, of course.)


> I never expected it to
> harm my process, moved some backup files manually, created another image
> which I burned on a second disc.

Do I get it right that your script copies files into the mounted UDF
and gets a "filesystem full" error?

What exactly are you doing next?
(From where to where are you moving the surplus files?
Does the first /dev/mapper device stay open while you create the encrypted
device for the second UDF filesystem? Anything I don't think of ... ?)


> Or is my filesystem too large?

25 "GB" would rather be too small to swim in the swarm of other cryptsetup
users.


-----------------------------------------------------------------------
Slightly off topic: A riddle about your UDF image sizes:

> # There is an old comment in my script at this line, saying:
> # let's try that: 24064000K
> # 24438784K according to dvd+rw-mediainfo but creates at
> # least sometimes INVALID ADDRESS FOR WRITE;
> # alternative according to internet research: 23500M

An unformatted single layer BD-R has 12,219,392 blocks = 23866 MiB =
24,438,784 KiB.
But growisofs formats it by default to 11,826,176 = 23098 MiB =
23,652,352 KiB.

growisofs_mmc.cpp emits a message in function bd_r_format()
fprintf (stderr,"%s: pre-formatting blank BD-R for %.1fGB...\n",
ioctl_device,(f[0]<<24|f[1]<<16|f[2]<<8|f[3])*2048.0/1e9);
Watch your growisofs run for it.
(Note that it talks of merchant's GB = 1 billion bytes, not of programmer's
GiB = 1,073,741,824 bytes. 23098 MiB = 24.220008448 GB)


> IMGSIZE=24064000K
> truncate -s $IMGSIZE $IMGFILE

The man page of truncate says that its "K" means 1024, i.e. KiB.
So your image has 23500 MiB, which is too large for the default format
as normally applied to BD-R by growisofs.

growisofs has a bug: it accepts burn jobs which fit into an unformatted BD-R
but then spoils them by applying its default format:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=699186

So how come your growisofs run does not fail in the end?

There is an undocumented growisofs option to suppress BD-R formatting:
-use-the-force-luke=spare=none
There is also
-use-the-force-luke=spare=min
which (I guess) will bring 23610 MiB of payload.

(I take the occasion to point out that xorriso does not format BD-R
by default. I.e. default capacity is 23866 MiB.)

Thomas Schmitt

Jul 9, 2022, 1:10:06 PM
Hi,

B.M. wrote:
> If you want you can have a look at my script, I attached it to this mail...

Will do. (There must be some rational explanation ...)


> "Filesystem full" is not handled at all. Typically if this happens it's
> quite late i.e. most folders are already backuped and I do the following:
> - remove the last lz-file, I never checked if it is corrupted
> - burn the image

No
cryptsetup luksClose /dev/mapper/BDbackup
between the remove and the burn?

-----------------------------------------------------------------------

I wrote:
> > But growisofs formats it [BD-R] by default to 11,826,176 = 23098 MiB =
> > 23,652,352 KiB.

> I didn't know that growisofs gives away a few bytes... Do you know why
> that's the case?

Andy Polyakov decided to format BD-R by default. Possibly because he used
an operating system (IIRC, Solaris) which did not expect that BD-R can be
used for multi-session. So its mount program followed the volume descriptors
starting at block 16 rather than at 16 blocks after the start of the youngest
session.
Whatever, growisofs by default wants to update the volume descriptors at
block 16 of the BD-R and for this uses BD-R Pseudo-Overwrite formatting.
This special feature uses the Defect Management to replace old written blocks
by newly written blocks.

Formatted BD-R media cause the drive to perform Defect Management when
writing. This means half write speed at best, heavy clonking with minor write
quality problems, and often miserable failure on media which work well
unformatted.


> Never heard about xorriso before

It makes ISO 9660 filesystems and burns them to optical media.
I am its developer.


> - from my understanding I could use it instead of growisofs, but with
> larger images?

Be invited. :))
Image burning is handled by its cdrecord emulation mode.

growisofs -dvd-compat -Z /dev/dvd=$IMGFILE; eject

translates to

xorriso -as cdrecord -v dev=/dev/dvd -eject $IMGFILE

But xorriso (in particular: libburn) cannot write more bytes to a BD-R
than growisofs with option -use-the-force-luke=spare=none can.
It's just a matter of program defaults in this case.
(There are other cases where I think to have outperformed growisofs.)


> > -use-the-force-luke=spare=...

> I didn't use these options.

That's why I am puzzled that your burns do not fail in the end.
What do you get from a run of

dvd+rw-mediainfo /dev/dvd

or

xorriso -outdev /dev/dvd -toc -list_formats

with the burnt BD-R medium in /dev/dvd?


> General question:
> Do you think I should completely change my script such that it creates lz-
> files, encrypts each of them and then writes them on an unencrypted disc?

We should first find out why your procedure produces a bad encrypted image
when you do your manual overflow handling.

In the case of no overflow it looks perfectly ok. The result has some
advantages over a home-made encryption of file content or the whole
filesystem.
(I am still looking for a stream facility which produces encryption
which can later be put on a block device and decrypted by a /dev/mapper
device. Your way of creating a big image has the disadvantage of needing
extra disk space. Cool would be to write directly to the BD-R. But it
is a block device only for reading, not when it gets written.)

I have a backup use case where i define an encryption filter and apply
it to data file content. The filter makes use of an external encryption
program which can operate on data streams. (In this case it is self-made
from some published encryption algorithm. But any stream capable encryption
program which can read the key from a file should do.)
It is for multi-session. So the /dev/mapper approach will meet more
problems. I doubt that dm-crypt handles growing devices.

David Christensen

Jul 9, 2022, 5:30:05 PM
On 7/9/22 08:41, B.M. wrote:

> If you want you can have a look at my script, I attached it to this mail...


I have written several generations of such scripts in Bourne and Perl
over the past 3+ decades. They all have obvious and non-obvious
limitations and bugs.


What we both have are programs. What we really want is a programming
systems product [1] -- especially if we are going to trust it for backup
and recovery.


Learning and implementing the suggestions in my prior response [2] has
improved my various scripts, but I have often thought I should just
migrate to an established and mature FOSS solution:

https://www.linuxlinks.com/backup/


Doing so would:

1. Give me more confidence in my backups, and the ability to restore.

2. Once I learned the software, I could just use it and not have to
debug or upgrade my own software every time I find a bug or want another
feature.


David


[1]
https://www.pearson.com/us/higher-education/program/Brooks-Mythical-Man-Month-The-Essays-on-Software-Engineering-Anniversary-Edition-2nd-Edition/PGM172844.html

[2] https://www.mail-archive.com/debia...@lists.debian.org/msg783600.html

Tim Woodall

Jul 10, 2022, 12:50:05 AM
On Sat, 9 Jul 2022, B.M. wrote:

>> Verifying that your procedure with two UDF images is not the culprit would
>> help even if the result is boringly ok, as we expect. (Or we are in for
>> a surprise ...)
>
> I don't have two UDF images.

Not been following this closely, but I do something very similar and
have never had a problem.

However, immediately after burning the disk I verify it like this:


fileSHA=$( sha1sum $UDFIMAGE | cut -d' ' -f1 )
cdromSHA=$( dd status=progress if=/dev/cdrom bs=1k count=$maxsize |
sha1sum | cut -d' ' -f1 )

STATUS=0

[[ "$fileSHA" != "$cdromSHA" ]] && STATUS=1


It's unusual, but I have had instances where the burn completed
without any issues but the verify failed. When that happened I got
several failures close together - I've assumed faulty discs.

I write slightly more often than once a month on average and I'm now on
disc 90 - nearly 7 years (prior to that I was using DVD) - and I have
never had an issue accessing old backups (which I do from time to time).


Tim


> In my script I create a file, put an encrypted UDF filesystem into it and start
> writing compressed files into it. Unfortunately it can happen (and happened in
> the past) that the filesystem got filled up completely.
>
> Beside that, I use a fully encrypted system with several partitions...
> Extract from df -h:
>
> Filesystem Size Used Avail Use% Mounted on
> /dev/mapper/sdb2_crypt 28G 23G 3.0G 89% /
> /dev/sdb1 447M 202M 221M 48% /boot
> /dev/mapper/var_crypt 27G 18G 8.4G 68% /var
> /dev/mapper/vraid1-home 1.8T 1.5T 251G 86% /home
> /dev/mapper/BDbackup 6.5M 6.5M 2.0K 100% /mnt/BDbackup
>
> (I create the image file as /home/TMP_BKP/backup.img just because that's where
> I have enough available space.)
>
>> After the boring outcome you have the unencrypted images to make the next
>> step, namely to create /dev/mapper/BDbackup with a new empty image file
>> as base, to copy the images into it (e.g. by dd), and to close it.
>> Then try whether the two encrypted image files can be properly opened
>> as /dev/mapper/BDbackup and show mountable UDF filesystems.
>>
>>> it's not only the burned disc which is not readable/mountable, it's
>>> also the image I created before burning.
>>
>> So we can exclude growisofs as culprit.
>>
>>> Might it be possible, that when my UDF filesystem gets filled completely,
>>> the encryption get damaged?
>>
>> That would be a bad bug in the device-mapper code and also such a mishap
>> is hard to imagine. The UDF driver is supposed not to write outside its
>> filesystem data range. That range would be at most as large as the payload
>> of the device mapping.
>
> Doesn't look like that - I tried the following several times:
> Create (a much smaller) image file, put an encrypted filesystem in it, fill it
> completely with either cp or dd, unmount it, close and re-open with
> cryptsetup, then check /dev/mapper/BDbackup: no problems, only hex zeros and
> it's mountable.
>
>>> Multi-disc backups are not
>>> handled by my script, I have to intervene manually.
>>
>> That's always a potential source of problems.
>
>> Do i get it right, that your script copies files into the mounted UDF
>> and gets a "filesystem full" error ?
>>
>> What exactly are you doing next ?
>> (From where to where are you moving the surplus files ?
>> Does the first /dev/mapper device stay open while you create the encrypted
>> device for the second UDF filesystem ? Anything i don't think of ... ?)
>
> If you want you can have a look at my script, I attached it to this mail...
>
> Basically, I use extended attributes (user.xdg.tags) to manage which folders
> have to get backuped, write the last backup date into user.xdg.comment. By
> comparing file timestamps with these backup dates this allows for incremental
> backups.
> Then for each folder which should be backuped, I use tar and plzip, writing
> into BKPDIR="/mnt/BDbackup".
>
> "Filesystem full" is not handled at all. Typically if this happens it's quite
> late i.e. most folders are already backuped and I do the following:
> - remove the last lz-file, I never checked if it is corrupted
> - burn the image
> - reset user.xdg.comment for not yet backuped folders manually
> - execute the script again, burn the so created second image
>
> Since this is quite ugly, I try to prevent it by moving very large lz-files
> from /mnt/BDbackup to a temporary location outside of /mnt/BDbackup while the
> script is running. When the "create lz-files"-part of my script has finished, I
> check if there is sufficient space to move the large files back to /mnt/
> BDbackup. If yes I do this, if not I leave them outside, burn the first disc,
> then I create a second image manually, put the large files into the empty
> filesystem, burn this disc as well. Not perfect at all, I know, but it's
> working... and I do this about every 3 or 6 months. Beside that, it's just a
> second kind of backup additionally to bi-weekly backups on external, also
> encrypted HDDs. (I think with these two kind of backups I'm doing enough to
> save our precious personal files, images, videos etc., doing much more than
> most people out there ;-)
>
> Honestly I don't see where this process may corrupt the UDF fs or the
> encryption. And I don't see an error / bug in my script either.
>
>>> Or is my filesystem too large?
>>
>> 25 "GB" would rather be too small to swim in the swarm of other cryptsetup
>> users.
>>
>>
>> -----------------------------------------------------------------------
>>
>> Slightly off topic: A riddle about your UDF image sizes:
>>> # There is an old comment in my script at this line, saying:
>>> # let's try that: 24064000K
>>> # 24438784K according to dvd+rw-mediainfo but creates at
>>> # least sometimes INVALID ADDRESS FOR WRITE;
>>> # alternative according to internet research: 23500M
>>
>> An unformatted single layer BD-R has 12,219,392 blocks = 23866 MiB =
>> 24,438,784 KiB.
>> But growisofs formats it by default to 11,826,176 = 23098 MiB =
>> 23,652,352 KiB.
>
> Thanks for pointing this out, I didn't know that growisofs gives away a few
> bytes... Do you know why that's the case?
>
>> growisofs_mmc.cpp emits a message in function bd_r_format()
>> fprintf (stderr,"%s: pre-formatting blank BD-R for %.1fGB...\n",
>>
>> ioctl_device,(f[0]<<24|f[1]<<16|f[2]<<8|f[3])*2048.0/1e9); Watch your
>> growisofs run for it.
>
> Never noticed this error message though, see my third to last paragraph below.
>
>> (Note that it talks of merchant's GB = 1 billion, not of programmer's
>> GiB = 1,073,741,824 bytes. 23098 MiB = 24.220008448 GB)
>>
>>> IMGSIZE=24064000K
>>> truncate -s $IMGSIZE $IMGFILE
>>
>> The man page of truncate says that it's "K" are 1024, i.e KiB.
>> So your image has 23500 MiB which is too large for the default format
>> as normally applied to BD-R by growisofs.
>>
>> growisofs has a bug to accept burn jobs which fit into unformatted BD-R
>> but then to spoil them by applying its default format:
>> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=699186
>>
>> So how come that your growisofs run does not fail in the end ?
>>
>> There is an undocumented growisofs option to suppress BD-R formatting:
>> -use-the-force-luke=spare=none
>> There is also
>> -use-the-force-luke=spare=min
>> which (i guess) will bring 23610 MiB of payload.
>
> I didn't use these options.
>
> With all that tried and learned, I'm going to try another full run of my
> script, closely monitoring what's happening during the different steps. Not
> today but when I have enough time, maybe in a week or even later.
>
>>
>> (I take the occasion to point out that xorriso does not format BD-R
>> by default. I.e. default capacity is 23866 MiB.)
>
> Never heard about xorriso before - from my understanding I could use it
> instead of growisofs, but with larger images?
>
> General question:
> Do you think I should completely change my script such that it creates lz-
> files, encrypts each of them and then writes them on an unencrypted disc?
>
> Thank you very much.
>
> Best,
> Bernd
>

Thomas Schmitt

Jul 10, 2022, 3:30:05 AM
Hi,

Tim Woodall wrote:
> cdromSHA=$( dd status=progress if=/dev/cdrom bs=1k count=$maxsize |
> sha1sum | cut -d' ' -f1 )
> [...]
> It's unusual, but I have had instances where the burn has completed
> without any issues but the verify has failed. When that happens I got
> several failures close together - I've assumed faulty disks.

Verification errors without i/o error from the block layer (forwarded by
dd) are indeed very unusual. Normally the checksums in the data sectors of
optical media let the drive detect bad readability and cause it to report
an error to the operating system.

I can only remember one occasion when a DVD was readable but with wrong
checksum by one drive. Three others reported i/o error. One other was able
to produce the correct checksum.

I am paranoid enough to equip everything in my backups with MD5 checksums,
and to checkread important old backups from time to time. I make daily
incremental backups of $HOME on 1 BD-R, 3 BD-RE, and 1 DVD+RW.
The DVD+RW gets zisofs compression, because else it would be too small
even for the ~ 4.5 GiB base backup.
Every other day I make incremental backups by scdbackup of my larger
multi-media data and $WORK data, either on a single BD-RE or on 1 to 4
DVD+RW. When the changed data exceed that size, I put the old backup
level on the shelf and start a new one with the old as its base.


> I have
> never had an issue accessing old backups (which I do from time to time)

Nevertheless, consider adding a file with a list of checksums for the data
files on the medium. This way you can later distinguish bad files from
good ones if the overall media verification fails.
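
For example (a sketch, using /mnt/BDbackup from earlier in the thread as
the example mount point), while the backup filesystem is still mounted
and before the image gets burnt:

cd /mnt/BDbackup
find . -type f ! -name MD5SUMS -exec md5sum {} + > /tmp/MD5SUMS
cp /tmp/MD5SUMS MD5SUMS

# Later, on the mounted disc:
cd /mnt/BDbackup && md5sum -c MD5SUMS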

B.M.

Jul 10, 2022, 6:20:05 AM
> No
> cryptsetup luksClose /dev/mapper/BDbackup
> between remove and burn ?

To be honest, I cannot say for sure, so maybe yes. But what would be the
implication? The fs inside is already unmounted; does cryptsetup luksClose
modify anything within the image?

> Andy Polyakov decided to format BD-R by default. Possibly because he used
> an operating system (IIRC, Solaris) which did not expect that BD-R can be
> used for multi-session. So its mount program followed the volume descriptors
> starting at block 16 rather than at 16 blocks after the start of the
> youngest session.
> Whatever, growisofs by default wants to update the volume descriptors at
> block 16 of the BD-R and for this uses BD-R Pseudo-Overwrite formatting.
> This special feature uses the Defect Management to replace old written
> blocks by newly written blocks.
>
> Formatted BD-R cause the drive to perform Defect Management when writing.
> This means half write speed at best, heavy clonking with smaller write
> quality problems, and often miserable failure on media which work well
> unformatted.

Ah, I remember - some years ago, before I started using BD, I had a look at
their specification.

> That's why i riddle why your burns do not fail in the end.
> What do you get from a run of
>
> dvd+rw-mediainfo /dev/dvd

INQUIRY: [PIONEER ][BD-RW BDR-209D][1.30]
GET [CURRENT] CONFIGURATION:
Mounted Media: 41h, BD-R SRM+POW
Media ID: CMCMAG/BA5
Current Write Speed: 12.0x4495=53940KB/s
Write Speed #0: 12.0x4495=53940KB/s
Write Speed #1: 10.0x4495=44950KB/s
Write Speed #2: 8.0x4495=35960KB/s
Write Speed #3: 6.0x4495=26970KB/s
Write Speed #4: 4.0x4495=17980KB/s
Write Speed #5: 2.0x4495=8990KB/s
Speed Descriptor#0: 00/12088319 R...@12.0x4495=53940KB/s W...@12.0x4495=53940KB/s
Speed Descriptor#1: 00/12088319 R...@10.0x4495=44950KB/s W...@10.0x4495=44950KB/s
Speed Descriptor#2: 00/12088319 R...@8.0x4495=35960KB/s W...@8.0x4495=35960KB/s
Speed Descriptor#3: 00/12088319 R...@6.0x4495=26970KB/s W...@6.0x4495=26970KB/s
Speed Descriptor#4: 00/12088319 R...@4.0x4495=17980KB/s W...@4.0x4495=17980KB/s
Speed Descriptor#5: 00/12088319 R...@2.0x4495=8990KB/s W...@2.0x4495=8990KB/s
POW RESOURCES INFORMATION:
Remaining Replacements:16843296
Remaining Map Entries: 0
Remaining Updates: 0
READ DISC INFORMATION:
Disc status: appendable
Number of Sessions: 1
State of Last Session: incomplete
"Next" Track: 1
Number of Tracks: 2
READ TRACK INFORMATION[#1]:
Track State: partial incremental
Track Start Address: 0*2KB
Free Blocks: 0*2KB
Track Size: 12032000*2KB
READ TRACK INFORMATION[#2]:
Track State: invisible incremental
Track Start Address: 12032000*2KB
Next Writable Address: 12032000*2KB
Free Blocks: 56320*2KB
Track Size: 56320*2KB
FABRICATED TOC:
Track#1 : 14@0
Track#AA : 14@12088320
Multi-session Info: #1@0
READ CAPACITY: 12088320*2048=24756879360

While for a readable disc I get:

INQUIRY: [PIONEER ][BD-RW BDR-209D][1.30]
GET [CURRENT] CONFIGURATION:
Mounted Media: 41h, BD-R SRM+POW
Media ID: CMCMAG/BA5
Current Write Speed: 12.0x4495=53940KB/s
Write Speed #0: 12.0x4495=53940KB/s
Write Speed #1: 10.0x4495=44950KB/s
Write Speed #2: 8.0x4495=35960KB/s
Write Speed #3: 6.0x4495=26970KB/s
Write Speed #4: 4.0x4495=17980KB/s
Write Speed #5: 2.0x4495=8990KB/s
Speed Descriptor#0: 00/12088319 R...@12.0x4495=53940KB/s W...@12.0x4495=53940KB/s
Speed Descriptor#1: 00/12088319 R...@10.0x4495=44950KB/s W...@10.0x4495=44950KB/s
Speed Descriptor#2: 00/12088319 R...@8.0x4495=35960KB/s W...@8.0x4495=35960KB/s
Speed Descriptor#3: 00/12088319 R...@6.0x4495=26970KB/s W...@6.0x4495=26970KB/s
Speed Descriptor#4: 00/12088319 R...@4.0x4495=17980KB/s W...@4.0x4495=17980KB/s
Speed Descriptor#5: 00/12088319 R...@2.0x4495=8990KB/s W...@2.0x4495=8990KB/s
POW RESOURCES INFORMATION:
Remaining Replacements:16843296
Remaining Map Entries: 0
Remaining Updates: 0
READ DISC INFORMATION:
Disc status: appendable
Number of Sessions: 1
State of Last Session: incomplete
"Next" Track: 1
Number of Tracks: 2
READ TRACK INFORMATION[#1]:
Track State: partial incremental
Track Start Address: 0*2KB
Free Blocks: 0*2KB
Track Size: 12032000*2KB
READ TRACK INFORMATION[#2]:
Track State: invisible incremental
Track Start Address: 12032000*2KB
Next Writable Address: 12032000*2KB
Free Blocks: 56320*2KB
Track Size: 56320*2KB
FABRICATED TOC:
Track#1 : 14@0
Track#AA : 14@12088320
Multi-session Info: #1@0
READ CAPACITY: 12088320*2048=24756879360

> Your way of creating a big image has the disadvantage of needing
> extra disk space. Cool would be to write directly to the BD-R. But it
> is a block device only for reading, not when it gets written.)

Absolutely ;-)

> I have a backup use case where i define an encryption filter and apply
> it to data file content. The filter makes use of an external encryption
> program which can operate on data streams. (In this case it is self-made
> from some published encryption algorithm. But any stream capable encryption
> program which can read the key from a file should do.)
> It is for multi-session. So the /dev/mapper approach will meet more
> problems. I doubt that dm-crypt handles growing devices.

Since I didn't find anything like that, I went for the image file solution,
which - while not being "pretty" - should at least work, and I'm not disk-space
limited (at least as far as the size of a BD is concerned).

Best,
Bernd

Thomas Schmitt

Jul 10, 2022, 8:20:06 AM
Hi,

i wrote:
> > No
> > cryptsetup luksClose /dev/mapper/BDbackup
> > between remove and burn ?

B.M. wrote:
> To be honest, I cannot say for sure, so maybe yes. But: what would be the
> implication? The fs inside is already unmounted, is cryptsetup luksClose
> modifying anything within the image?

Good questions. Make some experiments. :))
At least the manual intervention is a good suspect because it occurs exactly
when you get undecryptable images.

I see in your script:

umount /mnt/BDbackup
cryptsetup luksClose /dev/mapper/BDbackup
losetup -d $IMGLOOP


#
# Step 5: Burn to BD-R
#

and would expect that the three lines are there for a reason.


Do I understand correctly that the overflow happens in line 173
with the tar run?

tar cf - -C "`dirname "$line"`" "`basename "$line"`" | plzip > "$zipfilename1"

If so: what happens next? Does the script abort without cleaning up?
(I.e. no unmounting, closing, and de-looping by the script?)

> > dvd+rw-mediainfo /dev/dvd

> INQUIRY: [PIONEER ][BD-RW BDR-209D][1.30]

That's the killer of Verbatim BD-RE. (If you buy BD-RE, then take any other
brand, which will probably be made by Ritek.)

It's also a super fast BD-R burner and super loud with 10x speed.
If you ever use it with unformatted BD-R by
growisofs -use-the-force-luke=spare=none ...
then consider curbing its speed by e.g. -speed=6
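
I.e. something like this (a sketch based on your original burn command):

growisofs -use-the-force-luke=spare=none -speed=6 -dvd-compat -Z /dev/dvd=$IMGFILE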


> Mounted Media: 41h, BD-R SRM+POW

So it is a BD-R in the default formatting state of growisofs.

> Speed Descriptor#0: 00/12088319 R...@12.0x4495=53940KB/s

The speed descriptor says that it is valid from block 0 to 12088319.
That would be the size of 23610 MiB achieved by growisofs option
-use-the-force-luke=spare=min
I have no idea from growisofs source code in
https://sources.debian.org/src/dvd%2Brw-tools/7.1-14/growisofs_mmc.cpp/#L713
why it has chosen this size. I'd expect it came to line 738
i = 8; // grab default descriptor
and not to the lines under 721
if (spare == 0) // locate descriptor with maximum capacity

But it explains why your burn runs succeeded.

(Looking at the patches confuses me even more. One by me is present twice:
https://sources.debian.org/src/dvd%2Brw-tools/7.1-14/debian/patches/10-blue-ray-bug713016.patch/
https://sources.debian.org/src/dvd%2Brw-tools/7.1-14/debian/patches/fix_burning_bd-r_discs.patch/
and one other aims to prevent the SRM+POW formatting which you get
https://sources.debian.org/src/dvd%2Brw-tools/7.1-14/debian/patches/ignore_pseudo_overwrite.patch/
)


> READ TRACK INFORMATION[#1]:
> Track State: partial incremental
> Track Start Address: 0*2KB
> Free Blocks: 0*2KB
> Track Size: 12032000*2KB

That's the 24064000K of your image file. All seems well, as far as the BD-R
is concerned.

> READ TRACK INFORMATION[#2]:
> ...
> Next Writable Address: 12032000*2KB
> Free Blocks: 56320*2KB

There would still be 100 MB free for another session.
(But don't mess with good backups which are not intended as multi-session.)


> I went for the image file solution, which
> - while not being "pretty" - should at least work and I'm not disk space
> limited (at least as far as the size of a BD is concerned).

With BD-RE media and the patience for <= 4.5 MB/s write speed you could
install /dev/mapper/BDbackup directly on /dev/dvd, mount and populate it
like the encrypted image, umount, execute sync(1), cryptsetup luksClose,
and another sync just to be sure.

The BD-RE has to be already formatted.
Default size by growisofs' companion:
dvd+rw-format /dev/dvd
(To request other sizes you'll have to study
https://sources.debian.org/src/dvd%2Brw-tools/7.1-14/dvd%2Brw-format.cpp/#L258
from where i guess that option -spare=min brings 23610 MiB.)
With xorriso i would do:
xorriso -outdev /dev/dvd -format by_size_23610m

Afterwards the Linux kernel should be able to handle it without the need
for a burn program. But also without the neat streaming of image writing
and without the opportunity to get full 2x speed (9 MiB/s) by disabling
Defect management.
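
Roughly like this (a sketch, not a tested recipe; label and mount point
as in your script):

xorriso -outdev /dev/dvd -format by_size_23610m   # or: dvd+rw-format /dev/dvd
cryptsetup luksFormat --cipher aes-xts-plain64 /dev/dvd
cryptsetup luksOpen /dev/dvd BDbackup
mkudffs -b 2048 --label $1 /dev/mapper/BDbackup
mount -t udf /dev/mapper/BDbackup /mnt/BDbackup
# ... create the backup files ...
umount /mnt/BDbackup
sync
cryptsetup luksClose /dev/mapper/BDbackup
sync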

fxkl...@protonmail.com

Jul 10, 2022, 9:00:05 AM
I'm just flapping my gums
As a systems administrator for UNIX systems I wrote more than a few scripts
Many times I found it necessary to put a sleep between operations
Several decades ago I was taught to type sync and then type sync again before unmounting a drive
The only reason I ever got was that the second sync was a time delay

Nicolas George

Jul 10, 2022, 12:50:05 PM
fxkl...@protonmail.com (12022-07-10):
I do not know if it was ever useful at all or if it always was cargo
cult, but I am quite sure it has not been of any use on Linux in the last
decade.

You can check for yourself: mount a slow USB stick, then create a file that
is just large enough to fit in memory, using something that you are sure will
not do a (f)sync:

head -c $[8*1024*1024] /dev/urandom > /media/blah/dummy

Then unmount and see what happens.
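
For instance (a sketch; same mount point as above):

time umount /media/blah

The time spent in umount is the time the kernel needs to flush the dirty
pages to the stick.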

For extra benefit, start “grep Dirty /proc/meminfo” in another terminal
before you start and keep an eye on it.

Regards,

--
Nicolas George

fxkl...@protonmail.com

Jul 10, 2022, 1:00:05 PM
On Sun, 10 Jul 2022, Nicolas George wrote:

> fxkl...@protonmail.com (12022-07-10):
>> I'm just flapping my gums
>> As a systems administrator for UNIX systems I wrote more than a few scripts
>> Many time I found it necessary to put a sleep between operations
>> Several decades ago I was taught to type sync and then type sync again before unmounting a drive
>> The only reason I ever got was that the second sync was a time delay
>
> I do not know if it was ever useful at all or if it always was cargo
> cult, but I am quite sure it has not been any use on Linux in the last
> decade.

But I would not be able to sleep at night
I would have nightmares about corrupt data haunting me :)

>
> You can check for yourself: mount a slow USB stick, create file that is
> just large enough to fit in memory with something that you are sure will
> not make a (f)sync:
>
> head -c $[8*1024*1024] /dev/urandom > /media/blah/dummy
>
> Then unmount and see what happens.
>
> For extra benefit, start “grep Dirty /proc/meminfo” in another terminal

Nicolas George

Jul 10, 2022, 1:10:06 PM
fxkl...@protonmail.com (12022-07-10):
> I would have nightmares about corrupt data haunting me :)

Well, you can see a therapist, or you can conduct the experiment I
suggested:

to...@tuxteam.de

Jul 10, 2022, 3:20:05 PM
On Sun, Jul 10, 2022 at 07:01:49PM +0200, Nicolas George wrote:
> fxkl...@protonmail.com (12022-07-10):
> > I would have nightmares about corrupt data haunting me :)
>
> Well, you can see a therapist, or you can conduct the experiment I
> suggested:

But then, always doing sync twice looks like a very mild measure, and
far cheaper than seeing a therapist. Especially given that the second
sync will typically be very quick. If it's working, I'd go with that :)

Since writing to USBs for me mostly involves copying whole images
(with exception of my backup, which is an rsync: there I do use
sync, but just once), I do use sync much less these days after
having discovered dd's oflag=sync.
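
For example (a sketch; /dev/sdX stands for the stick and the image name
is arbitrary):

dd if=image.img of=/dev/sdX bs=1M oflag=sync status=progress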

Cheers
--
t

David Christensen

Jul 10, 2022, 3:40:05 PM
On 7/10/22 12:10, to...@tuxteam.de wrote:

> ... I do use sync much less these days after having discovered dd's oflag=sync.


+1


When doing pipelines involving 'dd bs=1M ...', I have also found
'iflag=fullblock' to be useful.
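
For example (a sketch with hypothetical paths); iflag=fullblock keeps dd
from doing short reads when its input comes from a pipe:

zcat image.img.gz | dd of=/dev/sdX bs=1M iflag=fullblock oflag=sync status=progress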


David

B.M.

Jul 11, 2022, 3:50:05 AM
> Good questions. Make some experiments. :))
> At least the manual intervention is a good suspect because it occurs exactly
> when you get undecryptable images.

Will do later.

> I see in your script:
>
> umount /mnt/BDbackup
> cryptsetup luksClose /dev/mapper/BDbackup
> losetup -d $IMGLOOP
>
>
> #
> # Step 5: Burn to BD-R
> #
>
> and would expect that the three lines are there for a reason.

Well, I'm a thorough guy ;-) If I do losetup, luksOpen, mount before copying
files, I do umount, luksClose, losetup afterwards as well.

> Do i understand correctly that the overflow happens in line 173
> with the tar run ?
>
> tar cf - -C "`dirname "$line"`" "`basename "$line"`" | plzip >
> "$zipfilename1"

Exactly.

> If so: What happens next ? Does the script abort without cleaning up ?
> (I.e. no unmounting, closing, and de-looping by the script ?)

It tries all remaining folders, all of which immediately fail because the
disk is full, and then it waits for input at step 4, i.e. no automatic cleanup;
but if I continue (which I do) it will do the cleanup before burning.


> > > dvd+rw-mediainfo /dev/dvd
> >
> > INQUIRY: [PIONEER ][BD-RW BDR-209D][1.30]
>
> That's the killer of Verbatim BD-RE. (If you buy BD-RE, then take any other
> brand, which will probably be made by Ritek.)

Do I understand correctly that this Pioneer drive doesn't work well
with Verbatim BD-RE, i.e. their rewriteable BDs? Since I only use BD-R, it
doesn't matter for me and my use case, but thank you nevertheless.

> There would still be 100 MB free for another session.
> (But don't mess with good backups which are not intended as multi-session.)

For backup reasons, I use each BD disc once, no overwriting, no multi-session,
just write and forget (OK, I should have tested them for readability
afterwards, not just randomly but all of them - lesson learned) ;-)

Best,
Bernd

Thomas Schmitt

Jul 11, 2022, 4:20:04 AM
Hi,

B.M. wrote:
> Do I understand correctly, you say that this Pioneer drive doesn't work well
> with Verbatim BD-RE, i.e. their rewriteable BDs.

Yes. The problem is with the high reading speed of the drive and with
a physical flaw of Verbatim BD-RE (CMCMAG/CN2/0).
The flaw is that there are letters engraved in the transparent area around
the inner hole, which reach to the thickened ring around the hole.
This ring is obviously essential for physical stability and the letters
weaken it enough so that 10 to 20 full read runs on my Pioneer BDR-209
are enough to produce a radial crack at the hole. This crack grows towards
the rim in a few more full speed reads. As soon as the dye is reached, the
medium is unreadable.

Writing is no problem, because it happens at most at 2.0x speed.
Older Verbatim BD-RE (VERBAT/IM0/0) are no problem. But one cannot buy
them any more.
Reading the new Verbatim BD-RE media is no problem on Optiarc BD RW BD-5300S,
LG BD-RE BH16NS40, and ASUS BW-16D1HT. And of course not with the old LG
drives like BD-RE GGW-H20L which read (and write) BD-RE at 2.3x speed.


> Since I only use BD-R, it
> doesn't matter for me and my use case, but thank you nevertheless.

I tested about 50 reads with RITEK/BR3/0 and Verbatim CMCMAG/BA5/0
BD-R media. No problems. (And no engraved letters to be seen around the
inner hole.)
The media which you inspected with dvd+rw-mediainfo are CMCMAG/BA5
(dunno what dvd+rw-mediainfo did to the "/0" part of the name).


I am still curious whether the decryption problems are caused by
not closing the /dev/mapper device.

Nicolas George

Jul 11, 2022, 1:50:06 PM
to...@tuxteam.de (12022-07-10):
> But then, always doing sync twice looks like a very mild measure, and
> far cheaper than seeing a therapist. Especially given that the second
> sync will typically be very quick. If it's working, I'd go with that :)
>
> Since writing to USBs for me mostly involves copying whole images
> (with exception of my backup, which is an rsync: there I do use
> sync, but just once), I do use sync much less these days after
> having discovered dd's oflag=sync.

On Linux, a process that writes to a device that is not currently
mounted and therefore has no page cache will go into D state when
closing the associated file descriptor.

You can check for it with this kind of command:

strace -ftttT sh -c "( dd if=/dev/urandom bs=1M count=0 seek=32000; cat /tmp/file ) > /dev/disk/by-label/CIGAES_R64"

where /tmp/file was obtained with the opposite dd command:

[pid 3708814] 1657561091.466910 write(1, "\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377"..., 131072) = 131072 <0.000233>
[pid 3708814] 1657561091.467280 read(3, "", 131072) = 0 <0.000106>
[pid 3708814] 1657561091.467527 munmap(0x7f14a2f0e000, 139264) = 0 <0.000062>
[pid 3708814] 1657561091.467658 close(3) = 0 <0.000018>
[pid 3708814] 1657561091.467755 close(1) = 0 <161.011040>
[pid 3708814] 1657561252.478904 close(2) = 0 <0.000019>
[pid 3708814] 1657561252.479045 exit_group(0) = ?
[pid 3708814] 1657561252.479252 +++ exited with 0 +++
1657561252.479286 <... wait4 resumed>[{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 3708814 <288.273781>
1657561252.479345 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=3708814, si_uid=1000, si_status=0, si_utime=1, si_stime=168} ---
1657561252.479383 rt_sigreturn({mask=[]}) = 3708814 <0.000011>
1657561252.479433 wait4(-1, 0x7ffee8b3cc5c, WNOHANG, NULL) = -1 ECHILD (No child processes) <0.000011>
1657561252.479510 exit_group(0) = ?
1657561252.479624 +++ exited with 0 +++

Notice the time taken by the close(1).

If you naively run strace on cat itself, then you do not see anything,
because then strace itself is holding a copy of the file descriptor, and
it is strace that will go into D state.

I do not know where this behavior is documented, but I suspect it is
somewhere.

Regards,

--
Nicolas George

B.M.

Jul 25, 2022, 8:50:04 AM
Hello again

First of all, I tested all my BD backup discs now, and there are no problems
with #1 (2017) - #12 (05/2018) as well as #14 - #17 (2019 - 05/2020).
#13 from 03/2019 fails.

#1 to #10 are all from 05/2017, they're the first BD backup at all and I assume
I used some manual workflow back then to start with; they also contain only
pictures, not my current larger set of files.

Then there's #11/#12 (2018), #14/#15 (01/2020) as well as #16/#17 (05/2020),
with each of these pairs being completely ok.

Afterwards (#18 - #22) there is a pattern such as
#18 (disc 1 of 2 for 01/2021) fails
#19 (disc 2 of 2 for 01/2021) is ok
#20 (disc 1 of 2 for 07/2021) fails
#21 (disc 2 of 2 for 07/2021) is ok
#22 is disc 1 of "NA" for 06/2022: I noticed the problem and didn't
continue...

I use git for my script, but only since 2020; there's no change to my IMGSIZE
setting in the git log, so this cannot explain why #11 - #17 are OK but #18
starts failing. #13 from 2019 is kind of an outlier.


I'm going to try a new full (not incremental) backup, spanning multiple discs,
in the near future and test my script thoroughly... but first, I'll add an
option to it which allows me to modify the "last backup date" value stored in
the extended attributes of the filesystem for all backed-up folders. That's the
part I really like: I can see very easily in a file manager (Dolphin in this
case) which folders are backed up and when the last backup was done.

Best,
Bernd