restore interrupted filesystem

Christian Loitsch

Jun 26, 2019, 8:21:30 AM
to s3...@googlegroups.com
Hello.

About a year ago I started an upgrade of an online filesystem.
Unfortunately I don't know which version it was on.  I am nearly 100% sure that the upgrade was to 2.28 or to an even earlier version.

During the upgrade, files were missing, and I decided to copy everything to a local disk before continuing the upgrade process.  This took until now.

Unfortunately I can now neither upgrade, fsck, nor mount the filesystem.

Version 2.28 says (`s3qladm --debug upgrade local:///run/media/cl/disk1/cl/christian/`):
2019-06-26 14:04:25.058 11301 INFO     MainThread s3ql.adm.upgrade: Getting file system parameters..
2019-06-26 14:04:25.059 11301 ERROR    MainThread root.excepthook: Backend data corrupted, or file system revision needs upgrade.

Version 2.27.1 says (same command):
2019-06-26 14:03:31.840 10810 INFO     MainThread s3ql.adm.upgrade: Getting file system parameters..
2019-06-26 14:03:31.900 10810 ERROR    MainThread root.excepthook: Uncaught top-level exception:
Traceback (most recent call last):
  File "/usr/bin/s3qladm", line 11, in <module>
    load_entry_point('s3ql==2.27.1', 'console_scripts', 's3qladm')()
  File "/usr/lib/python3.7/site-packages/s3ql/adm.py", line 94, in main
    return upgrade(options)
  File "/usr/lib/python3.7/site-packages/s3ql/common.py", line 433, in wrapper
    return fn(*a, **kw)
  File "/usr/lib/python3.7/site-packages/s3ql/adm.py", line 240, in upgrade
    param = backend.lookup('s3ql_metadata')
  File "/usr/lib/python3.7/site-packages/s3ql/backends/comprenc.py", line 72, in lookup
    meta_raw = self.backend.lookup(key)
  File "/usr/lib/python3.7/site-packages/s3ql/backends/local.py", line 60, in lookup
    return _read_meta(src)
  File "/usr/lib/python3.7/site-packages/s3ql/backends/local.py", line 241, in _read_meta
    raise CorruptedObjectError('Invalid object header: %r' % buf)
s3ql.backends.common.CorruptedObjectError: Invalid object header: b'BZh91AY&S'

The 'BZh91AY&S' seems to come from one of the metadata files.  `head -n 1` displays these characters for
s3ql_metadata_bak_0
s3ql_metadata_bak_1
s3ql_metadata_bak_2
s3ql_metadata_bak_3 and
s3ql_metadata_bak_4

s3qladm download-metadata skips all metadata files:
ERROR: Error retrieving information about s3ql_metadata, skipping
(same for all other bak files)

All metadata files are bzip2 files:
file s3ql_metadata*
s3ql_metadata:        bzip2 compressed data, block size = 900k
s3ql_metadata_bak_0:  bzip2 compressed data, block size = 900k
s3ql_metadata_bak_1:  bzip2 compressed data, block size = 900k
s3ql_metadata_bak_10: bzip2 compressed data, block size = 900k
s3ql_metadata_bak_2:  bzip2 compressed data, block size = 900k
s3ql_metadata_bak_3:  bzip2 compressed data, block size = 900k
s3ql_metadata_bak_4:  bzip2 compressed data, block size = 900k
s3ql_metadata_bak_5:  bzip2 compressed data, block size = 900k
s3ql_metadata_bak_6:  bzip2 compressed data, block size = 900k
s3ql_metadata_bak_7:  bzip2 compressed data, block size = 900k
s3ql_metadata_bak_8:  bzip2 compressed data, block size = 900k
s3ql_metadata_bak_9:  bzip2 compressed data, block size = 900k
s3ql_metadata_new:    bzip2 compressed data, block size = 900k

This is the content of my authinfo2:
=====
[s3ql]
storage-url: local:///run/media/cl/disk1/
fs-passphrase: B5mJhEEQu2MhGR8yBh55L
=====

I still have the db and param file from the online filesystem.
xx.param:
{ 'revision': 23, 'seq_no': 18, 'label': '', 'max_obj_size': 10485760, 'needs_fsck': False, 'inode_gen': 0, 'max_inode': 108788, 'last_fsck': 1511608008.239305, 'last-modified': 1511612643.0979273 }

last-modified: Saturday, November 25, 2017
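
That date is just the 'last-modified' timestamp converted, e.g.:

import datetime
print(datetime.datetime.fromtimestamp(1511612643.0979273, tz=datetime.timezone.utc))
# 2017-11-25 12:24:03.097927+00:00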


Do you have any idea what I am doing wrong?

regards
Christian

Daniel Jagszent

Jun 26, 2019, 8:44:02 AM
to s3...@googlegroups.com
Hello Christian,
> [...] During the update files were missing and I decided to copy
> everything to a local disk before continuing the upgrade process.  [...]
did you copy the files from an object storage (S3, Google Storage,
OpenStack Swift)? That does not work. S3QL needs metadata of the copied
"files" that you probably did not copy over – and even if, the Local
file backend stores the metadata differently, AFAIK.

You cannot easily copy a S3QL filesystem from one backend type to another.
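
To make that concrete, here is a rough illustration (not S3QL code -- the real logic and the exact header format live in s3ql/backends/local.py and vary between versions): an object written through the local backend starts with a backend header that holds the object's metadata, while a file copied straight out of an object store starts with the payload itself, so the header check fails.

def read_first_bytes(path):
    # A file copied out of an object store contains only the payload.
    # For a bzip2-compressed object that payload begins with
    # b'BZh91AY&S...' -- exactly the bytes in the CorruptedObjectError
    # quoted above -- instead of the header the local backend expects.
    with open(path, 'rb') as fh:
        return fh.read(9)

print(read_first_bytes('/run/media/cl/disk1/cl/christian/s3ql_metadata'))
# b'BZh91AY&S'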

Nikolaus Rath

Jun 26, 2019, 3:36:54 PM
to s3...@googlegroups.com
On Jun 26 2019, Daniel Jagszent <dan...@jagszent.de> wrote:
> Hello Christian,
>> [...] During the update files were missing and I decided to copy
>> everything to a local disk before continuing the upgrade process. 
>> [...]
> did you copy the files from an object storage (S3, Google Storage,
> OpenStack Swift)? That does not work. S3QL needs metadata of the
> copied "files" that you probably did not copy over – and even if, the
> Local file backend stores the metadata differently, AFAIK.

That's right.

> You cannot easily copy a S3QL filesystem from one backend type to
> another.

It's pretty easy if you use the right tools :-).

https://github.com/s3ql/s3ql/blob/master/contrib/clone_fs.py
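
Invocation is roughly

  ./clone_fs.py <source-storage-url> <destination-storage-url>

(check --help for the exact options of your version).  Since it copies the objects through S3QL's own backend classes, the per-object metadata comes along.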


Best,
-Nikolaus

--
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

»Time flies like an arrow, fruit flies like a Banana.«

Christian Loitsch

Jun 27, 2019, 4:43:32 AM
to s3...@googlegroups.com
Hello Nikolaus!

Am I correct that `clone_fs.py` requires the "old" filesystem to be available?

I only took a quick look, but AFAICT I can't convert the already copied filesystem.  Correct?

I guess I will either have to set up a fake backend that serves my local files, or rewrite s3ql.

regards
Christian

Nikolaus Rath

Jun 27, 2019, 3:56:56 PM
to s3...@googlegroups.com
Hi Christian,

A: Because it confuses the reader.
Q: Why?
A: No.
Q: Should I write my response above the quoted reply?

..so please quote properly, as I'm doing in the rest of this mail:

On Jun 27 2019, Christian Loitsch <chri...@loitsch.com> wrote:
>> On Jun 26 2019, Daniel Jagszent <dan...@jagszent.de> wrote:
>> >> [...] During the update files were missing and I decided to copy
>> >> everything to a local disk before continuing the upgrade process.
>> >> [...]
>> > did you copy the files from an object storage (S3, Google Storage,
>> > OpenStack Swift)? That does not work. S3QL needs metadata of the
>> > copied "files" that you probably did not copy over – and even if, the
>> > Local file backend stores the metadata differently, AFAIK.
>>
>> That's right.
>>
>> > You cannot easily copy a S3QL filesystem from one backend type to
>> > another.
>>
>> It's pretty easy if you use the right tools :-).
>>
>> https://github.com/s3ql/s3ql/blob/master/contrib/clone_fs.py
>
> Am I correct, that `clone_fs.py` requires the "old" filesystem to be
> available?
>
> I only took a quick look, but AFAICT I can't convert the already copied
> filesystem. correct?

As Daniel said, your copy did not include all the data. So no, it can't
conjure this data back :-).

Christian Loitsch

Jun 29, 2019, 5:22:43 PM
to s3...@googlegroups.com
Hi Nikolaus!

On Thu, 27 Jun 2019 at 21:56, Nikolaus Rath <Niko...@rath.org> wrote:
[...]
On Jun 27 2019, Christian Loitsch <chri...@loitsch.com> wrote:
>> On Jun 26 2019, Daniel Jagszent <dan...@jagszent.de> wrote:
>> >> [...] During the update files were missing and I decided to copy
>> >> everything to a local disk before continuing the upgrade process.
>> >> [...]
>> > did you copy the files from an object storage (S3, Google Storage,
>> > OpenStack Swift)? That does not work. S3QL needs metadata of the
>> > copied "files" that you probably did not copy over – and even if, the
>> > Local file backend stores the metadata differently, AFAIK.
>>
>> That's right.
>>
>> > You cannot easily copy a S3QL filesystem from one backend type to
>> > another.
I am currently trying to retrieve the metadata.
AFAICT it's:
X-Object-Meta-000: 'format_version': 2,
X-Object-Meta-001: 'compression': 'ZLIB',
X-Object-Meta-002: 'encryption': 'None',
X-Object-Meta-003: 'data': b'eyAgfQ==',
X-Object-Meta-004: 'needs_reupload': True,
X-Object-Meta-Format: raw2
X-Object-Meta-Md5: q0cp30i91gdvcDkh5w2rsw==

for all files.  (Even Meta-Md5 is identical)
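
Incidentally, that 'data' value is just base64 for an (almost) empty JSON object:

import base64
print(base64.b64decode('eyAgfQ=='))
# b'{  }'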

I would now try to rewrite the 'local' backend to return those values:
rewrite _read_meta(fh) so that it does not read anything from the file, but instead retrieves the metadata from somewhere else.
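
Roughly this kind of (completely untested) monkey-patch -- the metadata dict below is only my guess based on the headers above, and the layers above the local backend may expect something different:

import s3ql.backends.local as local

RECOVERED_META = {
    'format_version': 2,
    'compression': 'ZLIB',
    'encryption': 'None',
    'data': b'eyAgfQ==',
    'needs_reupload': True,
}

def _read_meta(fh):
    # The copied files contain only the raw payload, so don't consume any
    # header bytes; just hand back the metadata recovered from the object
    # store.
    return RECOVERED_META

local._read_meta = _read_meta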

But before investing too much work: is this even plausible?

I think I've never done anything other than fsck / mount on the "online" filesystem, but especially the
X-Object-Meta-002: 'encryption': 'None',
looks incorrect.

It should use encryption:
===   ~/.s3ql/authinfo2 ===
[s3ql]
[...]
fs-passphrase: B5mJhEEQu2MhGR8yBh55L
======


>>
>> It's pretty easy if you use the right tools :-).
>>
>> https://github.com/s3ql/s3ql/blob/master/contrib/clone_fs.py
I would really like to use clone_fs.py, but the connection is simply not stable enough.

regards
Christian

Nikolaus Rath

Jun 30, 2019, 5:15:46 AM
to s3...@googlegroups.com
> But before investing too much work: is this even plausible?

Yeah, in principle that's possible.

> I think I've never done anything other than fsck / mount on the "online"
> filesystem, but especially the
> X-Object-Meta-002: 'encryption': 'None',
> looks incorrect.

Well, it looks like you haven't enabled encryption.

>
> It should use encryption:
> === ~/.s3ql/authinfo2 ===
> [s3ql]
> [...]
> fs-passphrase: B5mJhEEQu2MhGR8yBh55L
> ======

mount.s3ql doesn't try to use the passphrase if the filesystem isn't
encrypted.

>> >> It's pretty easy if you use the right tools :-).
>> >>
>> >> https://github.com/s3ql/s3ql/blob/master/contrib/clone_fs.py
>
> I would really like to use clone_fs.py, but the connection is simply not
> stable enough.

Not sure I follow. It should retry automatically on network issues. If
you were able to download everything with other tools, then you should
be able to do it with clone_fs.py too.