ERROR: File system revision needs upgrade (or backend data is corrupted)

Alessandro Boem

Mar 15, 2023, 10:57:46 AM
to s3ql
We're trying to recover the consistency of the database after a machine power outage.
I know that the project is no longer developed, but we're looking for extra documentation/help to recover the data.

Reading and following the available documentation, we have already tried these:
fsck.s3ql s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/
s3ql_verify --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/ s3c://r1-it.storage.cloud.it/bdrive
s3qladm upgrade s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/
We always receive this message:
ERROR: File system revision needs upgrade (or backend data is corrupted)
We have a backup of the db file and we restored it, without success (we still receive the previous error).
Taking a look at the backend data, we can see that all the metadata copies have the same datetime:
[Attachment: Screenshot 2023-03-15 at 15.56.05.png -- backend object listing showing all metadata backup copies with the same timestamp]
Shouldn’t we have had different copies in order to recover the last valid one before the crash?
Obviously we don't mind losing the data written after the last correct upload.

Nikolaus Rath

Mar 16, 2023, 7:49:56 AM
to noreply-spamdigest via s3ql
Hi Alessandro,

On Wed, 15 Mar 2023, at 14:57, Alessandro Boem wrote:
We're trying to recover the consistency of the database after a machine power outage.
I know that the project is no longer developed, but we're looking for extra documentation/help to recover the data.

Reading and following the available documentation, we have already tried these:
fsck.s3ql s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/
s3ql_verify --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/ s3c://r1-it.storage.cloud.it/bdrive
s3qladm upgrade s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/
We always receive this message:
ERROR: File system revision needs upgrade (or backend data is corrupted)
We have a backup of the db file and we restored it, without success (we still receive the previous error).

Can you provide a bit more information? How exactly did you restore the metadata backup (full command and output)? Which backups did you try?

Is it possible that you accidentally upgraded from an old S3QL version (so your data isn't corrupted at all, and you just have to upgrade through some not-quite-as-old S3QL version)?

Taking a look at the backend data, we can see that all the metadata copies have the same datetime:

The metadata backups may be created either by copying or by moving operations: https://github.com/s3ql/s3ql/blob/master/src/s3ql/metadata.py

It is possible that your backend uses copy, and sets the modification date to the date of the copy (rather than the modification date of the source). Is that possible? Are the *contents* of the backups identical as well?
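If comparing them by hand is awkward, checksums of downloaded copies would answer that. This is only a sketch, assuming you can pull the s3ql_metadata_bak_* objects down with your provider's web console or any S3-compatible client:

# Compare the downloaded metadata objects; identical hashes mean identical contents.
sha256sum s3ql_metadata s3ql_metadata_bak_*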

Best,
-Nikolaus

--
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

             »Time flies like an arrow, fruit flies like a Banana.«







Alessandro Boem

Mar 16, 2023, 10:50:42 AM
to s3ql
Hi Nikolaus,

On Thursday, March 16, 2023 at 12:49:56 PM UTC+1, Nikolaus Rath wrote:
Hi Alessandro,

On Wed, 15 Mar 2023, at 14:57, Alessandro Boem wrote:
We're trying to recover the consistency of the database after a machine power outage.
I know that the project is no longer developed, but we're looking for extra documentation/help to recover the data.

Reading and following the available documentation, we have already tried these:
fsck.s3ql s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/
s3ql_verify --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/ s3c://r1-it.storage.cloud.it/bdrive
s3qladm upgrade s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/
We always receive this message:
ERROR: File system revision needs upgrade (or backend data is corrupted)
We have a backup of the db file and we restored it, without success (we still receive the previous error).

Can you provide a bit more information? How exactly did you restore the metadata backup (full command and output)? Which backups did you try?
First I ran all three of the cited commands in that order, and all of them returned the error.
I did not restore the metadata from the backend, but I tried to restore the file with the .db extension in /var/cache/s3ql/bdrive from the machine's last backup before the crash (the backup was performed at midnight on the same day as the crash; the power outage was at 09:30).

 
Is it possible that you accidentally upgraded from an old S3QL version (so your data isn't corrupted at all, and you just have to upgrade through some not-quite-as-old S3QL version)?

We're using s3ql version 3.3.2 on an Ubuntu 20.04 server release.
I also tried to upgrade the package to a later version (4.0.0), compiling and installing it with setup.py, then I ran
s3qladm upgrade s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/
again with the 4.0.0 release, but it returned the same error: ERROR: File system revision needs upgrade (or backend data is corrupted)
Taking a look at the backend data, we can see that all the metadata copies have the same datetime:

The metadata backups may be created either by copying or by moving operations: https://github.com/s3ql/s3ql/blob/master/src/s3ql/metadata.py

It is possible that your backend uses copy, and sets the modification date to the date of the copy (rather than the modification date of the source). Is that possible? Are the *contents* of the backups identical as well?
I checked the metadata copies on the backend with a hex editor and they are different.
Can I use a backup copy of the metadata to restore file system consistency?

Nikolaus Rath

Mar 16, 2023, 2:05:44 PM
to noreply-spamdigest via s3ql


On Thu, 16 Mar 2023, at 14:50, Alessandro Boem wrote:
Hi Nikolaus,


On Thursday, March 16, 2023 at 12:49:56 PM UTC+1, Nikolaus Rath wrote:

Hi Alessandro,


On Wed, 15 Mar 2023, at 14:57, Alessandro Boem wrote:
We're trying to recover the consistency of the database after a machine power outage.
I know that the project is no longer developed, but we're looking for extra documentation/help to recover the data.

Reading and following the available documentation, we have already tried these:
fsck.s3ql s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/
s3ql_verify --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/ s3c://r1-it.storage.cloud.it/bdrive
s3qladm upgrade s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/
We always receive this message:
ERROR: File system revision needs upgrade (or backend data is corrupted)
We have a backup of the db file and we restored it, without success (we still receive the previous error).

Can you provide a bit more information? How exactly did you restore the metadata backup (full command and output)? Which backups did you try?

First I ran all three of the cited commands in that order, and all of them returned the error.
I did not restore the metadata from the backend, but I tried to restore the file with the .db extension in /var/cache/s3ql/bdrive from the machine's last backup before the crash (the backup was performed at midnight on the same day as the crash; the power outage was at 09:30).

The error message refers to what is stored in the cloud. It's quite possible that nothing at all is wrong with the files in /var/cache/s3ql.





Is it possible that you accidentally upgraded from an old S3QL version (so your data isn't corrupted at all, and you just have to upgrade through some not-quite-as-old S3QL version)?
We're using s3ql version 3.3.2 on an Ubuntu 20.04 server release.


That's what you're using now, right? Which version did you use before the crash?



I also tried to upgrade the package to a later version (4.0.0), compiling and installing it with setup.py, then I ran
s3qladm upgrade s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/
again with the 4.0.0 release, but it returned the same error: ERROR: File system revision needs upgrade (or backend data is corrupted)

Of course. What I said is that you may need an *older* release (if you accidentally upgraded, that is).



Taking a look at the backend data, we can see that all the metadata copies have the same datetime:


The metadata backups may be created either by copying or by moving operations: https://github.com/s3ql/s3ql/blob/master/src/s3ql/metadata.py

It is possible that your backend uses copy, and sets the modification date to the date of the copy (rather than the modification date of the source). Is that possible? Are the *contents* of the backups identical as well?
I checked the metadata copies on the backend with a hex editor and they are different.
Can I use a backup copy of the metadata to restore file system consistency?

Yes, that is what 's3qladm download-metadata' is intended for.

Best,
-Nikolaus

Alessandro Boem

Mar 17, 2023, 5:42:26 AM
to s3ql
Hi,

The file system was created with version 3.3.2, and that's the version that was in use at the time of the crash.
 

I also tried to upgrade the package to a later version (4.0.0), compiling and installing it with setup.py, then I ran
s3qladm upgrade s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/
again with the 4.0.0 release, but it returned the same error: ERROR: File system revision needs upgrade (or backend data is corrupted)

Of course. What I said is that you may need an *older* release (if you accidentally upgraded, that is).



Taking a look at the backend data, we can see that all the metadata copies have the same datetime:


The metadata backups may be created either by copying or by moving operations: https://github.com/s3ql/s3ql/blob/master/src/s3ql/metadata.py

It is possible that your backend uses copy, and sets the modification date to the date of the copy (rather than the modification date of the source). Is that possible? Are the *contents* of the backups identical as well?
I checked the metadata copies on the backend with a hex editor and they are different.
Can I use a backup copy of the metadata to restore file system consistency?

Yes, that is what 's3qladm download-metadata' is intended for.

Ok nice. I ran s3qladm download-metadata s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/
but I got the same error: ERROR: File system revision needs upgrade (or backend data is corrupted)
If I need to try the other copies of the metadata on the backend, do I have to rename a copy to s3ql_metadata (I believe starting from s3ql_metadata_bak_0) and rerun s3qladm download-metadata s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/ ?

Thanks
 


Nikolaus Rath

Mar 17, 2023, 7:05:03 AM
to noreply-spamdigest via s3ql
On Fri, 17 Mar 2023, at 09:42, Alessandro Boem wrote:
The file system was created with version 3.3.2, and that's the version that was in use at the time of the crash.
[...]


Ok nice. I ran s3qladm download-metadata s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/
but I got the same error: ERROR: File system revision needs upgrade (or backend data is corrupted)

If you really did not change S3QL versions, then this sounds as if the s3ql_passphrase object in the cloud has somehow been corrupted.

I have no idea how this could possibly happen (since S3QL never writes to it after mkfs), nor how it could be related to a local system crash.

Maybe check if there's a backup of this object somewhere? Otherwise you can use 's3qladm recover-key' to restore this object from your offline copy of the master key (which you hopefully created at mkfs time).

This object contains the master key, so without it you can't decrypt any of the other objects.

Best,
-Nikolaus

Alessandro Boem

Mar 17, 2023, 7:31:33 AM
to s3ql
On Friday, March 17, 2023 at 12:05:03 PM UTC+1, Nikolaus Rath wrote:
On Fri, 17 Mar 2023, at 09:42, Alessandro Boem wrote:
The file system was created with version 3.3.2, and that's the version that was in use at the time of the crash.
[...]


Ok nice. I ran s3qladm download-metadata s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/
but I got the same error: ERROR: File system revision needs upgrade (or backend data is corrupted)

If you really did not change S3QL versions, then this sounds as if the s3ql_passphrase object in the cloud has somehow been corrupted.

I have no idea how this could possibly happen (since S3QL never writes to it after mkfs), nor how it could be related to a local system crash.

Maybe check if there's a backup of this object somewhere? Otherwise you can use 's3qladm recover-key' to restore this object from your offline copy of the master key (which you hopefully created at mkfs time).

This object contains the master key, so without it you can't decrypt any of the other objects.


When I created the file system I specified the --plain option. Should I find an s3ql_passphrase object in the backend if I used that option? I can't see any object with that name on the backend...

Nikolaus Rath

Mar 18, 2023, 6:39:29 AM
to noreply-spamdigest via s3ql




On Fri, 17 Mar 2023, at 11:31, Alessandro Boem wrote:


On Friday, March 17, 2023 at 12:05:03 PM UTC+1, Nikolaus Rath wrote:
On Fri, 17 Mar 2023, at 09:42, Alessandro Boem wrote:
The file system was created with version 3.3.2, and that's the version that was in use at the time of the crash.
[...]


Ok nice. I ran s3qladm download-metadata s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/
but I got the same error: ERROR: File system revision needs upgrade (or backend data is corrupted)

If you really did not change S3QL versions, then this sounds as if the s3ql_passphrase object in the cloud has somehow been corrupted.

I have no idea how this could possibly happen (since S3QL never writes to it after mkfs), nor how it could be related to a local system crash.

Maybe check if there's a backup of this object somewhere? Otherwise you can use 's3qladm recover-key' to restore this object from your offline copy of the master key (which you hopefully created at mkfs time).

This object contains the master key, so without it you can't decrypt any of the other objects.


When I created the file system I specified the --plain option. Should I find an s3ql_passphrase object in the backend if I used that option? I can't see any object with that name on the backend...

In that case there should be no such object. Also, in that case you should never see the above warning, because that gets generated if this object exists but cannot be read.

Might be interesting to dump the communication with the backend to find out what the backend is returning when S3QL asks for this object. It should be a 404 response, which in turn should tell S3QL that the filesystem is not encrypted.
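For example (a sketch only; I'm assuming your S3QL build accepts the standard --debug option, so check 'fsck.s3ql --help' first):

# Capture verbose output, including the backend requests and responses, to a log file.
# The --debug flag is an assumption -- verify it exists in this fsck.s3ql version.
/usr/bin/fsck.s3ql --debug s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/ 2>&1 | tee /tmp/fsck-debug.log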

You could force this behavior by replacing 'backend.fetch('s3ql_passphrase')' with 'raise NoSuchObject('s3ql_passphrase')' in https://github.com/s3ql/s3ql/blob/master/src/s3ql/common.py#L265 (but that's not really a solution).
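In practical terms that would be a temporary hand edit along these lines (just a sketch, not a fix; back up the file first, and note that the installed path and exact line number may differ on your system):

# Keep a pristine copy before editing (adjust the path to wherever the s3ql package is installed).
cp /usr/lib/s3ql/s3ql/common.py /usr/lib/s3ql/s3ql/common.py.orig
# Then, around line 265 of common.py, replace the call
#     backend.fetch('s3ql_passphrase')
# with
#     raise NoSuchObject('s3ql_passphrase')
# (NoSuchObject lives in s3ql.backends.common and may need to be imported there).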

Best,
-Nikolaus

Alessandro Boem

Mar 18, 2023, 7:24:01 PM
to s3ql
I've changed the line to raise NoSuchObject instead of fetching the passphrase, but it returns the same error.
Running fsck, it doesn't raise the exception at line 283 but the one at line 325 of common.py.
So the backend seems to give the correct 404 response when requesting the (correctly missing) s3ql_passphrase, but s3ql_metadata is unreadable.
Do I have to try the other copies of s3ql_metadata?
 

Nikolaus Rath

Mar 19, 2023, 11:13:39 AM
to noreply-spamdigest via s3ql
Above you said you are running `s3qladm download-metadata` and that this failed. Could you please include the full command and full output of everything you have done (not your summary of it)?

s3qladm does not attempt to download metadata until you have picked an object to restore. Did you get that far? If so, which objects have you tried to download? Do you get the error for all of them?

Similarly, fsck.s3ql should not attempt to download any objects if it is running on the same computer on which the filesystem was most recently mounted. Did you perhaps run it from a different system, or with a different cache directory? Or is it giving the error at the *end* of the fsck (when attempting to rotate metadata)?
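As a quick sanity check (assuming /var/cache/s3ql/bdrive/ is the cache directory that was in use when the file system was last mounted), it may be worth confirming that a locally cached metadata copy is actually present:

# List the cache directory; a .db file (and, I believe, a matching .params file)
# should be there, with timestamps roughly matching the last time the fs was mounted.
ls -la /var/cache/s3ql/bdrive/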

Best,
-Nikolaus

Alessandro Boem

Mar 20, 2023, 1:15:45 PM
to s3ql
On Sunday, March 19, 2023 at 4:13:39 PM UTC+1, Nikolaus Rath wrote:

On Sat, 18 Mar 2023, at 23:24, Alessandro Boem wrote:

On Fri, 17 Mar 2023, at 11:31, Alessandro Boem wrote:


On Friday, March 17, 2023 at 12:05:03 PM UTC+1, Nikolaus Rath wrote:
On Fri, 17 Mar 2023, at 09:42, Alessandro Boem wrote:
The file system was created with version 3.3.2, and that's the version that was in use at the time of the crash.
[...]


Ok nice. I ran s3qladm download-metadata s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/
but I got the same error: ERROR: File system revision needs upgrade (or backend data is corrupted)

If you really did not change S3QL versions, then this sounds as if the s3ql_passphrase object in the cloud has somehow been corrupted.

I have no idea how this could possibly happen (since S3QL never writes to it after mkfs), nor how it could be related to a local system crash.

Maybe check if there's a backup of this object somewhere? Otherwise you can use 's3qladm recover-key' to restore this object from your offline copy of the master key (which you hopefully created at mkfs time).

This object contains the master key, so without it you can't decrypt any of the other objects.


When I created the file system I specified the --plain option. Should I find an s3ql_passphrase object in the backend if I used that option? I can't see any object with that name on the backend...

In that case there should be no such object. Also, in that case you should never see the above warning, because that gets generated if this object exists but cannot be read.

Might be interesting to dump the communication with the backend to find out what the backend is returning when S3QL asks for this object. It should be a 404 response, which in turn should tell S3QL that the filesystem is not encrypted.

You could force this behavior by replacing 'backend.fetch('s3ql_passphrase')' with 'raise NoSuchObject('s3ql_passphrase')' in https://github.com/s3ql/s3ql/blob/master/src/s3ql/common.py#L265 (but that's not really a solution).

I've changed the line to raise NoSuchObject instead of fetching the passphrase, but it returns the same error.
Running fsck, it doesn't raise the exception at line 283 but the one at line 325 of common.py.
So the backend seems to give the correct 404 response when requesting the (correctly missing) s3ql_passphrase, but s3ql_metadata is unreadable.
Do I have to try the other copies of s3ql_metadata?

Above you said you are running `s3qladm download-metadata` and that this failed. Could you please include the full command and full output of everything you have done (not your summary of it)?
 
The file system was created last year with this command:
mkfs.s3ql --plain --cachedir=/var/cache/s3ql/bdrive/ --authfile=/etc/s3ql.authinfo s3c://r1-it.storage.cloud.it/bdrive
We used it until 13/03/2023. We kept it mounted on the same machine where we created it.
This is the sequence of commands I ran and their output after the crash:
root@srvorg06:~# /usr/bin/fsck.s3ql s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/

ERROR: File system revision needs upgrade (or backend data is corrupted)
root@srvorg06:~# /usr/bin/s3ql_verify --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/ s3c://r1-it.storage.cloud.it/bdrive

ERROR: File system revision needs upgrade (or backend data is corrupted)
root@srvorg06:~# /usr/bin/s3qladm upgrade s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/
Getting file system parameters..

ERROR: File system revision needs upgrade (or backend data is corrupted)

On 17/03/2023, following this thread, I ran:
root@srvorg06:~# /usr/bin/s3qladm download-metadata s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/

ERROR: File system revision needs upgrade (or backend data is corrupted)
On 18/03/2023, following this thread, I changed line 265 of common.py from 'backend.fetch('s3ql_passphrase')' to 'raise NoSuchObject('s3ql_passphrase')' and I ran:
root@srvorg06:~# /usr/bin/fsck.s3ql s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/

ERROR: File system revision needs upgrade (or backend data is corrupted)
As you can see, none of the commands produces any output other than the mentioned error: ERROR: File system revision needs upgrade (or backend data is corrupted)

Nikolaus Rath

Mar 21, 2023, 2:34:55 PM
to noreply-spamdigest via s3ql
Hi Alessandro,

On Mon, 20 Mar 2023, at 17:15, Alessandro Boem wrote:
I've changed the line to raise NoSuchObject instead of fetching the passphrase, but it returns the same error.
Running fsck, it doesn't raise the exception at line 283 but the one at line 325 of common.py.
So the backend seems to give the correct 404 response when requesting the (correctly missing) s3ql_passphrase, but s3ql_metadata is unreadable.
Do I have to try the other copies of s3ql_metadata?

Above you said you are running `s3qladm download-metadata` and that this failed. Could you please include the full command and full output of everything you have done (not your summary of it)?
 
The file system was created last year with this command:
mkfs.s3ql --plain --cachedir=/var/cache/s3ql/bdrive/ --authfile=/etc/s3ql.authinfo s3c://r1-it.storage.cloud.it/bdrive
We used it until 13/03/2023. We kept it mounted on the same machine where we created it.

This is the sequence of commands I ran and their output after the crash:

root@srvorg06:~# /usr/bin/fsck.s3ql s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/
ERROR: File system revision needs upgrade (or backend data is corrupted)
root@srvorg06:~# /usr/bin/s3qladm download-metadata s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/
ERROR: File system revision needs upgrade (or backend data is corrupted)

On 18/03/2023, following this thread, I changed line 265 of common.py from 'backend.fetch('s3ql_passphrase')' to 'raise NoSuchObject('s3ql_passphrase')' and I ran:
root@srvorg06:~# /usr/bin/fsck.s3ql s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/
ERROR: File system revision needs upgrade (or backend data is corrupted)

This is quite intriguing.  I don't understand where this exception is coming from if not from attempting to read the s3ql_passphrase object.

Could you apply the following patch and re-run both 's3qladm download-metadata' and 'fsck.s3ql'?

diff --git a/src/s3ql/logging.py b/src/s3ql/logging.py
index 6238fb1..ad16784 100644
--- a/src/s3ql/logging.py
+++ b/src/s3ql/logging.py
@@ -141,7 +141,7 @@ def setup_excepthook():
 
     def excepthook(type_, val, tb):
         root_logger = logging.getLogger()
-        if isinstance(val, QuietError):
+        if False:
             root_logger.error(val.msg)
             sys.exit(val.exitcode)
         else:

Alessandro Boem

Mar 22, 2023, 6:21:02 AM
to s3ql
This is the output of both commands:
root@srvorg06:~# /usr/bin/fsck.s3ql s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/
ERROR: Uncaught top-level exception:
Traceback (most recent call last):
  File "/usr/lib/s3ql/s3ql/common.py", line 323, in get_backend_factory
    tmp_backend.fetch('s3ql_metadata')
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 293, in fetch
    return self.perform_read(do_read, key)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 108, in wrapped
    return method(*a, **kw)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 256, in perform_read
    fh = self.open_read(key)
  File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 194, in open_read
    fh = self.backend.open_read(key)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 108, in wrapped
    return method(*a, **kw)
  File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 325, in open_read
    meta = self._extractmeta(resp, key)
  File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 713, in _extractmeta
    raise CorruptedObjectError('Invalid metadata format: %s' % format_)
s3ql.backends.common.CorruptedObjectError: Invalid metadata format: raw

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/bin/fsck.s3ql", line 11, in <module>
    load_entry_point('s3ql==3.3.2', 'console_scripts', 'fsck.s3ql')()
  File "/usr/lib/s3ql/s3ql/fsck.py", line 1141, in main
    backend = get_backend(options)
  File "/usr/lib/s3ql/s3ql/common.py", line 248, in get_backend
    return get_backend_factory(options)()
  File "/usr/lib/s3ql/s3ql/common.py", line 325, in get_backend_factory
    raise QuietError('File system revision needs upgrade '
s3ql.logging.QuietError: File system revision needs upgrade (or backend data is corrupted)

root@srvorg06:~# /usr/bin/s3qladm download-metadata s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/
ERROR: Uncaught top-level exception:
Traceback (most recent call last):
  File "/usr/lib/s3ql/s3ql/common.py", line 323, in get_backend_factory
    tmp_backend.fetch('s3ql_metadata')
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 293, in fetch
    return self.perform_read(do_read, key)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 108, in wrapped
    return method(*a, **kw)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 256, in perform_read
    fh = self.open_read(key)
  File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 194, in open_read
    fh = self.backend.open_read(key)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 108, in wrapped
    return method(*a, **kw)
  File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 325, in open_read
    meta = self._extractmeta(resp, key)
  File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 713, in _extractmeta
    raise CorruptedObjectError('Invalid metadata format: %s' % format_)
s3ql.backends.common.CorruptedObjectError: Invalid metadata format: raw

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/bin/s3qladm", line 11, in <module>
    load_entry_point('s3ql==3.3.2', 'console_scripts', 's3qladm')()
  File "/usr/lib/s3ql/s3ql/adm.py", line 104, in main
    with get_backend(options) as backend:
  File "/usr/lib/s3ql/s3ql/common.py", line 248, in get_backend
    return get_backend_factory(options)()
  File "/usr/lib/s3ql/s3ql/common.py", line 325, in get_backend_factory
    raise QuietError('File system revision needs upgrade '
s3ql.logging.QuietError: File system revision needs upgrade (or backend data is corrupted)

 

Nikolaus Rath

Mar 23, 2023, 8:58:38 AM
to noreply-spamdigest via s3ql



On Wed, 22 Mar 2023, at 10:21, Alessandro Boem wrote:
 
This is the output of both commands:

root@srvorg06:~# /usr/bin/fsck.s3ql s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/
ERROR: Uncaught top-level exception:
Traceback (most recent call last):
  File "/usr/lib/s3ql/s3ql/common.py", line 323, in get_backend_factory
    tmp_backend.fetch('s3ql_metadata')
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 293, in fetch
    return self.perform_read(do_read, key)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 108, in wrapped
    return method(*a, **kw)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 256, in perform_read
    fh = self.open_read(key)
  File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 194, in open_read
    fh = self.backend.open_read(key)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 108, in wrapped
    return method(*a, **kw)
  File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 325, in open_read
    meta = self._extractmeta(resp, key)
  File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 713, in _extractmeta
    raise CorruptedObjectError('Invalid metadata format: %s' % format_)
s3ql.backends.common.CorruptedObjectError: Invalid metadata format: raw


Ah, this makes more sense. I'm afraid it also further supports my initial theory though. The s3ql_metadata object has been created with an ancient S3QL version that recent versions cannot read.

Are you absolutely sure about the versions that you have used?


If you can confirm that you've actually used such an old S3QL version before, then it's safe to just upgrade step-by-step (the old releases are available on the Github releases page).
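A minimal sketch of how that step-by-step route could look without touching the system installation (version numbers and tarball names are placeholders; pick the actual releases from the GitHub releases page and expect to install each release's build dependencies):

# Build an intermediate S3QL release in a throwaway virtualenv and run the
# upgrade from there; repeat for each intermediate release as needed.
python3 -m venv /tmp/s3ql-old
. /tmp/s3ql-old/bin/activate
tar xf s3ql-X.Y.tar.gz && cd s3ql-X.Y         # placeholder release tarball
python3 setup.py install                      # same approach you already used for 4.0.0
s3qladm upgrade s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/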

If you are sure that you have been using a newer version successfully with this backend, then I'm afraid I'm at my wits' end. I do not know how you could possibly end up with such an old metadata object, and I therefore also have no real idea how to recover from it. You could comment out everything that attempts to use remote data (starting from line 713 of s3c.py) and thereby force S3QL to use the cached data, but I have no idea if that will help or make things worse (since I'm also not sure about the format of the locally cached data).