Hello Johan,
It looks like the most recent remote sqlite database and the corresponding .params file do not match. That is bad, and you should figure out how it happened: it might be a bug in S3QL's incremental database backup logic, your S3 storage provider may not have persisted the data correctly, or something else specific to your setup.
The output of your fsck.s3ql call suggests that you moved the .db/.db-wal and .params files away, used a different cache directory, or deleted them.
Hopefully you just moved them away or used a different cache directory. In that case you can try the recovery procedure described at https://www.sqlite.org/recovery.html.
I would start with:
1. Read https://www.rath.org/s3ql-docs/durability.html – especially rules 4 and 5.
2. Update S3QL to 5.3.0 – there the fsck.s3ql command gained a new "--fast" option that will help you, since it skips the remote metadata consistency check (which would fail in your case).
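If you installed S3QL from PyPI (an assumption – if you use a distribution package, upgrade through your package manager instead), the update could look like:

  pip install --upgrade 's3ql>=5.3.0'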
3. Recover your local sqlite database. The commands below assume that your current working directory is the S3QL cache directory that contains the corrupted files (by default ~/.s3ql):
  sqlite3 's3c:=2F=2Fmystorage.com=2Fdbsqlbackup=2F.db' ".recover --ignore-freelist" > recovered.sql
  sqlite3 recovered.db < recovered.sql
  mv 's3c:=2F=2Fmystorage.com=2Fdbsqlbackup=2F.db' 's3c:=2F=2Fmystorage.com=2Fdbsqlbackup=2F.corrupt.db'
  mv 's3c:=2F=2Fmystorage.com=2Fdbsqlbackup=2F.db-wal' 's3c:=2F=2Fmystorage.com=2Fdbsqlbackup=2F.corrupt.db-wal'  # this file might not exist; if so, skip this command
  mv recovered.db 's3c:=2F=2Fmystorage.com=2Fdbsqlbackup=2F.db'
  md5sum 's3c:=2F=2Fmystorage.com=2Fdbsqlbackup=2F.db'
  nano 's3c:=2F=2Fmystorage.com=2Fdbsqlbackup=2F.params'  # <- change the "db_md5" value to the hash printed by the md5sum command
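As an optional sanity check (my addition, not strictly required), you can let sqlite verify the recovered database before running fsck:

  sqlite3 's3c:=2F=2Fmystorage.com=2Fdbsqlbackup=2F.db' 'PRAGMA integrity_check;'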
4. Run fsck.s3ql with the correct cache directory and the --fast option. When this works, it will probably report many inconsistencies that it found and corrected, but you should be able to mount the filesystem afterwards.
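Assuming the storage URL that your cache file name decodes to (s3c://mystorage.com/dbsqlbackup/) and the default cache directory – adjust both to your setup:

  fsck.s3ql --fast --cachedir ~/.s3ql s3c://mystorage.com/dbsqlbackup/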
5. While the filesystem is mounted, make some insignificant change so that the metadata is modified (e.g., touch a file), then trigger a manual metadata backup (https://www.rath.org/s3ql-docs/man/ctrl.html). Do this five times: the five newest metadata backups on the remote should then be consistent. (fsck.s3ql only checks the last five metadata backups.)
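A minimal sketch, assuming /mnt/s3ql as the mountpoint and the upload-meta action; check the linked man page for the exact action name in your S3QL version:

  for i in 1 2 3 4 5; do
      touch /mnt/s3ql/.metadata-bump    # insignificant change so the metadata is dirty
      s3qlctrl upload-meta /mnt/s3ql    # trigger a manual metadata backup
  done
  rm /mnt/s3ql/.metadata-bump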
6. Unmount the filesystem.
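Again assuming the /mnt/s3ql mountpoint:

  umount.s3ql /mnt/s3ql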
7. Run fsck.s3ql --force (without --fast) on the filesystem. The remote metadata check should pass now.
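With the same assumed storage URL and cache directory as above:

  fsck.s3ql --force --cachedir ~/.s3ql s3c://mystorage.com/dbsqlbackup/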
8. Consider running s3ql_verify --data (https://www.rath.org/s3ql-docs/man/verify.html) on your filesystem if you suspect that your S3 storage might be flaky. This will probably take a long time, since it downloads every object from the object storage and verifies its integrity locally.
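With the same assumed storage URL; capturing the output in a file is useful because the run can take hours:

  s3ql_verify --data s3c://mystorage.com/dbsqlbackup/ 2>&1 | tee verify.log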