Hard Link Problem

Mark Clarkson

Feb 4, 2016, 5:58:12 PM
to lessfs
Hi,
I want to use lessfs for incremental backups using 'cp -al oldbackup newbackup' (hard links)  then 'rsync server:/dir newbackup'.
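The rotation described above is the classic hard-link snapshot scheme; a minimal local sketch (all paths are illustrative, and a real run would rsync from the server rather than simply deleting the link tree):

```shell
# Hard-link snapshot rotation, demonstrated in a scratch directory.
# 'cp -al' recreates the tree as hard links, so the "copy" costs
# almost no space; rsync would then replace only the changed files.
set -e
rm -rf demo && mkdir -p demo/oldbackup
echo "hello" > demo/oldbackup/foo

cp -al demo/oldbackup demo/newbackup   # hard-link snapshot, not a data copy
stat -c %h demo/newbackup/foo          # link count is 2: one inode, two names

# Removing one snapshot must leave the other intact on a POSIX filesystem:
rm -rf demo/newbackup
cat demo/oldbackup/foo                 # still prints: hello
```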

I made a backup of 80 servers then did a 'cp -al'. Later I deleted the newbackup directory and lessfs deleted /all/ the data. The directory structure and files remained for oldbackup, but every file was corrupt.

A simple test shows the problem:

  # echo "hello" >foo
  # ln foo bar
  # cat foo bar
  hello
  hello
  # ls -l
  total 2
  drwxr-xr-x 10 user user 4096 Feb  4 09:21 20160204.1
  -rw-r--r--  2 root root    6 Feb  4 16:45 bar
  -rw-r--r--  2 root root    6 Feb  4 16:45 foo
  # rm foo
  rm: remove regular file `foo'? y
  # cat bar
  hello
  # ls -l
  total 1
  drwxr-xr-x 10 user user 4096 Feb  4 09:21 20160204.1
  -rw-r--r--  1 root root    6 Feb  4 16:46 bar
  # ln bar foo

Now only one 'hello' is shown:

  # cat bar foo
  hello
  # ls -l
  total 2
  drwxr-xr-x 10 user user 4096 Feb  4 09:21 20160204.1
  -rw-r--r--  2 root root    6 Feb  4 16:46 bar
  -rw-r--r--  2 root root    6 Feb  4 16:46 foo

Although the link count and sizes are correct, it is not possible to 'cat foo'.
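A quick way to confirm whether the two names actually share an inode is to compare inode numbers; a minimal sketch, run in a scratch directory (on a healthy POSIX filesystem both names stay readable):

```shell
# Check that a hard link really names the same inode as the original.
echo "hello" > bar
ln bar foo          # hard link: foo and bar name the same inode
cat bar foo         # prints "hello" twice on a healthy filesystem
ls -i bar foo       # both lines should show the same inode number
```

On lessfs, comparing the 'ls -i' output for foo and bar could help narrow down whether the directory entries diverge before the data lookup fails.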

/var/log/lessfs-bdb_err.txt is empty. No errors.

This happened with both bdb and tokyocabinet - the two backends I have tried.

Here's my lessfs.cfg (BACKGROUND_DELETE was 'on' when the fs was created):

DEBUG = 5
HASHNAME=MHASH_TIGER192
HASHLEN = 24
BLOCKDATA_IO_TYPE=file_io
BLOCKDATA_PATH=/data/dta/blockdata.dta
META_PATH=/data/mta
META_BS=1048576
CACHESIZE=32768
COMMIT_INTERVAL=10
LISTEN_IP=127.0.0.1
LISTEN_PORT=100
MAX_THREADS=12
DYNAMIC_DEFRAGMENTATION=on
COREDUMPSIZE=2560000000
SYNC_RELAX=1
BACKGROUND_DELETE=off
ENCRYPT_DATA=off
ENCRYPT_META=off
ENABLE_TRANSACTIONS=on
BLKSIZE=131072
COMPRESSION=snappy

Is there something I'm doing wrong?

Thanks!
Mark

Mark Ruijter

Feb 5, 2016, 9:09:12 AM
to les...@googlegroups.com

Hi Mark,

First of all, corruption should never happen.
So let me investigate the problem.

I think there may be more optimal ways to achieve your goal.
For example, could you simply rsync the lessfs files themselves?

I assume that you are using the most recent lessfs sources?

Thanks,

Mark

Mark Clarkson

Feb 5, 2016, 9:22:02 AM
to lessfs
I liked the 'cp -al' route on a normal filesystem: the hard links take very little space, rsync only unlinks the changed files in the new directory, and it should work on most filesystems.

I'm not sure what you mean about copying the lessfs files. The data directory takes up about 220GB - do you mean I could rsync that, then mount different /data directories with different lessfs.cfg files? Seven days of backups plus a couple of monthly backups would then take nearly 2 TB, rather than a few GB extra for hard links plus changed files (which will mostly contain duplicate blocks anyway). Can you tell me the steps you had in mind?

If you have any suggestions for snapshot backups I'd be willing to try.

Mark Clarkson

Feb 5, 2016, 9:45:37 AM
to lessfs
Sorry, I forgot to say I'm using lessfs-1.7.0.
