ConstraintError: UNIQUE constraint failed: obj_ids.id

Marcin Ciesielski

Jun 29, 2019, 5:30:57 PM
to s3ql


Version: S3QL 3.1
Backend: local

My filesystem ran out of space while writing to S3QL.
Now during fsck I am getting this error:


Starting fsck of local:///mnt/.s3qldata/
Using cached metadata.
WARNING: Remote metadata is outdated.
Checking DB integrity...
Creating temporary extra indices...
Checking lost+found...
Checking for dirty cache objects...
Checking names (refcounts)...
Checking contents (names)...
Checking contents (inodes)...
Checking contents (parent inodes)...
Checking for temporary objects (backend)...
Checking objects (reference counts)...
Checking objects (backend)...
..processed 692625 objects so far..
Dropping temporary indices...
ERROR: Uncaught top-level exception:
Traceback (most recent call last):
  File "/usr/local/bin/fsck.s3ql", line 11, in <module>
    load_entry_point('s3ql==3.1', 'console_scripts', 'fsck.s3ql')()
  File "/usr/local/lib/python3.5/dist-packages/s3ql-3.1-py3.5-linux-x86_64.egg/s3ql/fsck.py", line 1273, in main
    fsck.check(check_cache)
  File "/usr/local/lib/python3.5/dist-packages/s3ql-3.1-py3.5-linux-x86_64.egg/s3ql/fsck.py", line 92, in check
    self.check_objects_id()
  File "/usr/local/lib/python3.5/dist-packages/s3ql-3.1-py3.5-linux-x86_64.egg/s3ql/fsck.py", line 965, in check_objects_id
    self.conn.execute('INSERT INTO obj_ids VALUES(?)', (obj_id,))
  File "/usr/local/lib/python3.5/dist-packages/s3ql-3.1-py3.5-linux-x86_64.egg/s3ql/database.py", line 98, in execute
    self.conn.cursor().execute(*a, **kw)
  File "src/cursor.c", line 236, in resetcursor
apsw.ConstraintError: ConstraintError: UNIQUE constraint failed: obj_ids.id

I cleaned up the cache but it did not help.
Any chance to get it running again?

Nikolaus Rath

Jun 30, 2019, 5:17:14 AM
to s3...@googlegroups.com
It looks like something is messed up in your backend directory, since
listing it returns the same object twice. I'm not sure how that can
happen. I'd suggest annotating the code to print out the name of the
duplicate object and then taking a look at what's stored in the backend
directory. Maybe using --debug-module s3ql is already sufficient.


Best,
-Nikolaus

--
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

»Time flies like an arrow, fruit flies like a Banana.«

Marcin Ciesielski

Jun 30, 2019, 6:47:47 AM
to s3ql
Thanks, Nikolaus

I managed to recover.
It seems that the problem was that I misspelt the cache directory name and was pointing to different directories for mount and fsck.
I cleaned the one used by mount, but fsck was probably still holding data that was a few weeks old.
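For anyone hitting the same thing: passing the same --cachedir explicitly to both commands keeps them in sync (the cache path and mount point below are just examples, the storage URL is mine):

mount.s3ql --cachedir /var/cache/s3ql local:///mnt/.s3qldata/ /mnt/data
fsck.s3ql --cachedir /var/cache/s3ql local:///mnt/.s3qldata/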

Best
Marcin