Is there a possibility to create a folder structure on swift?


Riku Bister

Dec 7, 2015, 1:21:06 PM
to s3ql

Hi. How do I explain this? Using s3ql with a local:// mountpoint it creates a folder structure on the filesystem, but using s3ql with swift:// it just creates files under one folder: all the data, metadata and passphrase objects go into the same folder.

The problem is that I'm using it in the cloud via swift, and there is a limit of 50,000 files, or at least the webpage says it's better not to go over that (I think it would still work, but what if it breaks?). I'm using 50M slices, but it grows fast and I'm going to hit that limit very soon! My space is more than 10T, so it is growing FAST. I have created 3 separate filesystems, but I can't do more; also, the cache on my VPS server is eating disk space (it is running out) because I need to keep it over 1 Gig.

It works nicely, no problems at all, but is there any possibility to change this to use folders? I know I would need to redo everything afterwards, and moving the files around will take a week even on a 150 Mbit connection, but better now than too late. It is also OK if I have to create the folders manually; with each additional folder I could store another 50,000 files.

Thanks already, and happy Xmas! :)



I found this, but it is not my case; I'm not using Amazon S3:
https://groups.google.com/forum/#!searchin/s3ql/folder/s3ql/m83YlB2tDIk/G6mMafe0X1AJ

Nikolaus Rath

Dec 7, 2015, 1:23:34 PM
to s3...@googlegroups.com
On Dec 07 2015, Riku Bister <riku...@gmail.com> wrote:
> Hi. How do I explain this? Using s3ql with a local:// mountpoint it creates
> a folder structure on the filesystem, but using s3ql with swift:// it just
> creates files under one folder: all the data, metadata and passphrase
> objects go into the same folder.

You are talking about a folder in the swift backend, right?

> The problem is that I'm using it in the cloud via swift, and there is a limit
> of 50,000 files, or at least the webpage says it's better not to go over that
> (I think it would still work, but what if it breaks?),

Which webpage?


Best,
-Nikolaus

--
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

»Time flies like an arrow, fruit flies like a Banana.«

Riku Bister

Dec 7, 2015, 1:28:02 PM
to s3ql
> You are talking about a folder in the swift backend, right?
Yes.


> Which webpage?
The cloud provider's instructions page, https://goo.gl/0j8pfq. It says: "for optimum performance, we recommend that you do not exceed 50,000 files per folder."

Nikolaus Rath

Dec 8, 2015, 11:38:10 AM
to s3...@googlegroups.com
On Dec 07 2015, Riku Bister <riku...@gmail.com> wrote:
>> You are talking about a folder in the swift backend, right?
> Yes.
>
>> Which webpage?
> The cloud provider's instructions page, https://goo.gl/0j8pfq. It says: "for
> optimum performance, we recommend that you do not exceed 50,000 files per
> folder."

Ah, hubic again. I would have been surprised to find this limitation in
the official OpenStack documentation.

Yes, in principle S3QL could be changed to distribute files among
different folders (to use Hubic terminology). However, this would break
backward compatibility with all existing S3QL file systems stored in
swift. Furthermore, this would apparently only be required for Hubic,
which is not supported very well in any case
(cf. https://bitbucket.org/nikratio/s3ql/issues/132/). Finally, there is
no good reason for having such a limitation in the backend, and Hubic is
already known to me for its brain-dead API design. Therefore, I'm
unlikely to change this in S3QL. I recommend you just pick a different
storage provider.

Riku Bister

Dec 8, 2015, 2:24:40 PM
to s3ql
I know their API sucks and they are not doing anything about it.

Well, I have no problems with hubic. I have 3 file systems there at the moment and they are all working fine: no breakups, no disconnects, and file transfers with s3ql are super fast (a 150 Mbit connection gets capped). I can stream full-HD and upload at the same time with no issues at all. I have already uploaded about 1T of files and the filesystem is working well: no errors, I can remount, nothing in mount.log. I was just afraid that the 50k limit would break things.
Actually, s3ql is what made it possible to use this as proper storage. I'm very thankful for your software! It's the best, and I'm already using it for other purposes as well. :)
It would also be nice if the file structure were the same as local://, because right now I can't mount it locally (with FUSE) in case of emergency.

Well, I guess I'm going to delete one filesystem and replace it with 100-200M slices, or just take a look at the code to see whether I can do anything about it myself. I'm not a coder and my knowledge of these things is bad, but well... hehe ;)
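
For example, something like this should create the replacement filesystem with bigger slices (the --max-obj-size flag and its KiB unit are how I remember the mkfs.s3ql documentation, so better double-check with mkfs.s3ql --help; the hostname and container are just placeholders):

# Larger storage objects mean far fewer objects overall:
# --max-obj-size is given in KiB, so 204800 KiB = 200 MiB per object,
# which keeps roughly 10T of data at around 50,000 objects.
mkfs.s3ql --max-obj-size 204800 swift://auth.example.com/my-container/fs1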


Riku Bister

Dec 8, 2015, 2:35:01 PM
to s3ql
I don't see any reason to change provider: this one is cheap, unmetered, has a lot of disk space and works as fast as the internet allows, at least in Europe. I don't see any other provider with the same throughput in Europe at that price, or even one that works this fast.

Isaac Aymerich

Dec 8, 2015, 5:23:59 PM
to s3ql
Hi, I have the same problem. I'm using hubic, and I wrote a similar piece of software to s3ql, but simpler and without its own filesystem like s3ql has.

Hubic is the cheapest OpenStack storage on the market, I think, so you guys should add the directory feature to the filesystem backend files, or at least add a parameter to --backend-options, specific to OpenStack, that allows creating directories automatically, just to work around the 50K files limit.

I did some numbers: using 50 MB chunk files you get a minimum of 262,144 objects when filling the 12.5 TB of space on hubic (if you fill the space, of course).

If you made, I don't know, a random directory structure (because I understand that no particular logic is required in the backend), it could fix the problem.
It might also improve the speed of getting the directory info, because in OpenStack you can limit the length of the list of files returned by using delimiters.

You can, for example, use a delimiter to say that you only want to get the files in /folder1/random2/.
If you have 50K chunks and you want to list all the files using the OpenStack API, and you suppose that the JSON for every chunk is 500 characters, that means 500 * 50,000 = 25,000,000 bytes / 1024 ≈ 24,414 KB.
That is too much for a single GET (LIST) request; if you split the chunks into 1,000 chunks per folder, it is 500 * 1,000 bytes ≈ 488 KB, which is much faster.
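
For example, something like this (the prefix and delimiter query parameters are standard Swift API; the storage URL and token are only placeholders):

# List only the objects "inside" the virtual folder folder1/random2/
# instead of the whole container. prefix and delimiter are standard
# Swift listing parameters; the URL and token are placeholders.
curl -s -H "X-Auth-Token: $TOKEN" \
  "https://storage.example.com/v1/AUTH_account/container?format=json&prefix=folder1/random2/&delimiter=/"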

Sorry for my bad English. :P

And thank you for reading about our problems.

Best!

Daniel Jagszent

Dec 8, 2015, 7:30:13 PM
to s3ql

Hi Isaac, Hi Riku,

Are you sure that this limit affects you? Did you try creating more than 50,000 data blocks?

AFAIK S3QL does not do directory/bucket/container listings on the Swift backend, so it does not matter whether there are thousands or millions of data blocks on your Swift storage.

[...] you guys should add the directory feature to the filesystem backend files, or at least add a parameter to --backend-options, specific to OpenStack, that allows creating directories automatically, just to work around the 50K files limit. [...]

There is no such thing as a folder/directory in Swift terminology. Swift is (just like S3 and all other object storages) a flat filesystem. You can have many buckets (in Swift they are called containers), but in each bucket you can only store files (or objects, in object storage terminology). So a recursive directory structure is not possible. Swift clients can (and most of them do) opt to show you a virtual folder structure underneath a container: if an object is named “folder/structure/file.name”, the Swift client can show you a virtual folder structure with one top-level folder “folder” that has a sub-folder “structure” and a file “file.name” in that sub-folder.

I suspect the 50,000 files per directory limit is not a limit of the hubic backend but of one or more of the hubic clients (probably the web app or the sync client). If you use that hubic account for nothing but S3QL filesystems, you should be OK. Otherwise you might want to experiment with the “prefix” option of S3QL to put all S3QL files into a virtual folder of your hubic store, and then not touch this “folder” in any other application that accesses your hubic account.
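
Something along these lines, for example (the storage URL format with a trailing prefix is how I read the S3QL documentation, so double-check it there; hostname, container, prefix and mountpoint are placeholders):

# Everything S3QL stores ends up under the key prefix "s3ql/" inside the
# container, so other applications can simply leave that virtual "folder"
# alone. Hostname, container, prefix and mountpoint are placeholders.
mount.s3ql swift://auth.example.com/my-container/s3ql/ /mnt/s3ql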

I have had an S3QL file system on OVH Object Storage (the “pro” version of hubic, https://www.ovh.co.uk/cloud/storage/object-storage.xml) with approx. 3 million data blocks / Swift objects in the container (but only approx. 400 GB of storage: many small and very well compressible files).
The Swift backend did not mind that many objects in a single container. I'm currently deleting the filesystem, which involves doing a listing of all objects and issuing an HTTP DELETE request for each of them. This takes many hours and is not finished yet, but otherwise works OK.

I opted not to use that filesystem anymore because S3QL and the backup application that filled up that filesystem (Burp, http://burp.grke.org/) are not a good fit. In the end the filesystem had 16 million directory entries and 1.5 million inodes (Burp uses hard links excessively), and the sqlite database that S3QL uses to store the filesystem structure was 1.2 GB uncompressed. An s3qlstat or df on that filesystem took several seconds because of the huge database size. Also, S3QL does not scale very well with parallel file accesses, but Burp does a ton of those. (The sqlite database is not thread safe, and thus every read/write access to the database gets serialized by S3QL.)

Now I use Bareos and 4 (in the future possibly more) different S3QL mounts (on 4 different containers, but you could do that in one container with different prefixes for every filesystem). Bareos distributes the read/write access across the 4 S3QL mounts, and backing up the same data as before, the combined uncompressed database size of all 4 S3QL filesystems is only 10 MB.

Daniel Jagszent

Dec 9, 2015, 11:06:22 AM
to s3ql, Isaac Aymerich

Hi Isaac,

(I will reply on the list, since I suspect your answer was supposed to go there, too?)

Isaac Aymerich wrote:

No, I didn't try, but in the hubic documentation it is defined as a limitation. Anyway, I will try to create 100K 1 KB files this afternoon to find out whether it is really a limitation or only a recommendation.

Keep in mind: S3QL does data de-duplication on the block level. Creating 100K 1 KB files with the same content will only create one single object in the Swift backend. You need to create 100K files with different contents.

Something simple like this should do it:

cd /path/to/your/S3QL/mountpoint
for i in {1..1000000}; do echo $i > $i; done

And about the file listing... I suppose then that s3ql keeps a little database with the names of the binary data files?

Yes, have a look at http://www.rath.org/s3ql-docs/impl_details.html where Nikolaus explains some details about S3QL’s inner workings.

Riku Bister

Dec 9, 2015, 3:12:02 PM
to s3ql, isaac.a...@gmail.com
Heya,

I haven't tried 50k of them yet; I'm afraid it will break. I'm not sure whether it affects me, I just read it on their page. Nothing there says directly that it is a hard limit, but anyway.
I could create a new filesystem, set a 1M block size and then upload 50G, but if you are already trying it...

I asked on the hubic forums about this 50k limit and, well, got banned from their forums. The reason given was "spam": they have a problem with a lot of spam there, and now they have banned me and my post was deleted... Well, I already created another account and complained about it in a private message, but support there takes a very long time, so I'm not expecting any answer. I also looked at the French version of the forum, where support is more active, but I'm not French.


Nikolaus Rath

Dec 9, 2015, 5:34:24 PM
to s3...@googlegroups.com
On Dec 09 2015, Daniel Jagszent <dan...@jagszent.de> wrote:
> AFAIK S3QL does not do directory/bucket/container listings on the Swift
> backend. So it does not matter if there are thousands or millions of
> data blocks on your Swift storage.

Well, fsck.s3ql does such listings. But they are paginated, so there
should be no issues.
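
(Swift caps a single listing at 10,000 names, so a complete listing is just a loop over the marker parameter, roughly like the sketch below; the URL and token are placeholders, and object names with unusual characters would additionally need URL-encoding.)

# Fetch the plain-text container listing page by page: each response
# holds at most 10,000 object names, and the next request starts after
# the last name seen. URL and token are placeholders.
marker=""
while :; do
    page=$(curl -s -H "X-Auth-Token: $TOKEN" \
        "https://storage.example.com/v1/AUTH_account/container?marker=$marker")
    [ -z "$page" ] && break   # an empty body means no more results
    echo "$page"
    marker=$(echo "$page" | tail -n 1)
done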

> I opted to not use that filesystem anymore because S3QL and the backup
> application that filled up that filesystem (Burp, http://burp.grke.org/)
> are not a good fit. At the end the filesystem had 16 million directory
> entries and 1.5 million inodes (Burp uses hard links excessively) and
> the sqlite database that S3QL uses to store the filesystem structure was
> 1.2 GB uncompressed.

This is not unreasonable though. Note that ext4 would require at least 5
GB of metadata as well - just to store the inodes (assuming 4096 bytes
inode size). That's not yet counting directory entry *names*.


> Also S3QL scales not very good with parallel file accesses but Burp
> does a ton of those. (The sqlite database is not thread safe and thus
> every read/write access to the database gets serialized by S3QL).

Both are true, but one is not the cause of the other. Most reads/writes
don't require access to the database and could run in parallel. However,
S3QL itself is mostly single threaded at the moment, so the requests are
indeed serialized.

However, I have plans in the drawer to fix this at some point. The idea
is to handle reads/writes for blocks that are already cached entirely at
the C level. This will allow concurrency *and* at the same time boost
single-threaded performance as well. Just need to find the time...

Nikolaus Rath

Dec 9, 2015, 5:37:25 PM
to s3...@googlegroups.com
On Dec 08 2015, Riku Bister <riku...@gmail.com> wrote:
> I don't see any reason to change provider: this one is cheap, unmetered, has
> a lot of disk space
[...]

Well, what about the reason that it doesn't work well with S3QL (which I
believe prompted you to start this thread)?

Or maybe the reason given by yourself shortly after?

On Dec 09 2015, Riku Bister <riku...@gmail.com> wrote:
> I asked on the hubic forums about this 50k limit and, well, got banned from
> their forums. The reason given was "spam": they have a problem with a lot of
> spam there, and now they have banned me and my post was deleted... Well, I
> already created another account and complained about it in a private message,
> but support there takes a very long time, so I'm not expecting any answer.


Not saying you should switch providers, just pointing out an
inconsistency in your arguments :-).

Riku Bister

Dec 9, 2015, 6:01:53 PM
to s3ql


Well, what about the reason that it doesn't work well with S3QL (which I
believe prompted you to start this thread)?


Please, can you explain? How is it not working well? I have been using it without any problems or errors for some time now, with over 500 GB of data stored (and I'm still uploading...). It is unknown whether things will break once the 50k mark is reached; I just wanted to bring it up in time, in case it does not work...
Again, without your software this would not have been possible. s3ql is what makes hubic usable; without it users are forced to use hubicfuse, which is slow, full of problems and not good.
Nikolaus, please realize that I'm very happy about your work. :) Without you this would never have been possible.

I started this thread to ask whether it is possible for swift:// to do the folder thing the way local:// does, just to have the same directory/file structure, in case I need to access the data directly and mount it via local:// in an emergency. (At the same time this would solve the 50k files problem, because the local:// structure creates numbered folders with the files stored under each number.)
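
Roughly the idea I mean, just as a sketch (this is not how S3QL actually names its objects, only an illustration of the numbered-folder layout):

# Sketch only: put object number 1234567 into a numbered sub-folder,
# e.g. one folder per 10,000 objects, so no folder ever holds more than
# 10,000 files. Not S3QL's real naming scheme, just the idea.
obj_id=1234567
folder=$(( obj_id / 10000 ))          # -> 123
key="${folder}/s3ql_data_${obj_id}"   # -> 123/s3ql_data_1234567
echo "$key"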




Daniel Jagszent

Dec 9, 2015, 8:35:19 PM
to s3...@googlegroups.com

Nikolaus Rath wrote:

On Dec 09 2015, Daniel Jagszent <dan...@jagszent.de> wrote:
> AFAIK S3QL does not do directory/bucket/container listings on the Swift
> backend. So it does not matter if there are thousands or millions of
> data blocks on your Swift storage.
Well, fsck.s3ql does such listings. But they are paginated, so there
should be no issues.

I can confirm that. Due to stupidity (wanting to increase the open-files limit with ulimit -n but actually setting a hard limit on file size with ulimit -f) I once got the sqlite database corrupted on that big file system. After re-creating the database (with the sqlite command line tool) I naturally needed to run fsck.s3ql on the file system. It took some time but worked flawlessly.
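
(In other words, what I meant to run versus what I actually ran:)

# What I wanted: raise the limit on open file descriptors.
ulimit -n 65536
# What I actually ran: set a maximum file size (in 512-byte blocks),
# which is presumably how the database file ended up truncated.
ulimit -f 65536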

> [...] At the end the filesystem had 16 million directory
> entries and 1.5 million inodes (Burp uses hard links excessively) and
> the sqlite database that S3QL uses to store the filesystem structure was
> 1.2 GB uncompressed.
This is not unreasonable though. Note that ext4 would require at least 5
GB of metadata as well - just to store the inodes (assuming 4096 bytes
inode size). That's not yet counting directory entry *names*.

Sure. The size of the sqlite database is reasonable for that many inodes/directory entries. But I suspect that ext4 will scale better in terms of execution time for normal operations like file system stats (df). S3QL needs to do several full table scans for that, and this takes its time for tables that big (in my case approx. 10 seconds).

> Also S3QL scales not very good with parallel file accesses but Burp
> does a ton of those. (The sqlite database is not thread safe and thus
> every read/write access to the database gets serialized by S3QL).
Both is true, but one is not the cause of another. Most reads/writes
don't require access to the database and could run in parallel. However,
S3QL itself is mostly single threaded at the moment so the requests are
indeed serialized.

Thanks for the clarification!

However, I have plans in the drawer to fix this at some point. The idea
is to handle reads/writes for blocks that are already cached entirely at
the C level. This will allow concurrency *and* at the same time boost
single-threaded performance as well. Just need to find the time...

That sounds great! (More performance always does :) )
Am I right in assuming that this will speed up read/write syscalls but not stuff that solely works on the database? (like opendir or the attr and xattr calls)

Chris

Dec 9, 2015, 11:03:15 PM
to s3ql
What settings do you all use with hubic and s3ql?

My Kimsufi (OVH) dedicated server has a 100 Mbit connection and I can only get about 3000 KB/sec upload speed on a good day. Using swiftexplorer or the like I can easily max out my upload speed, so the bottleneck is s3ql in my case. I tried disabling compression and it made no difference, so I'm wondering if there is something else I could try. Maybe my Atom processor is too weak?
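
Is there anything obvious to tune beyond something like the following? (The option names are from the mount.s3ql documentation as far as I remember and should be checked against mount.s3ql --help; the storage URL, cache size and mountpoint are just placeholders.)

# No compression, more parallel upload threads, bigger local cache.
# Option names should be verified against mount.s3ql --help; the
# storage URL and mountpoint are placeholders.
mount.s3ql --compress none --threads 8 --cachesize 2097152 \
    swift://auth.example.com/my-container /mnt/s3ql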

Nikolaus Rath

Dec 10, 2015, 11:16:40 AM
to s3...@googlegroups.com
On Dec 10 2015, Daniel Jagszent <dan...@jagszent.de> wrote:
>>> > [...] At the end the filesystem had 16 million directory
>>> > entries and 1.5 million inodes (Burp uses hard links excessively) and
>>> > the sqlite database that S3QL uses to store the filesystem structure was
>>> > 1.2 GB uncompressed.
>>
>> This is not unreasonable though. Note that ext4 would require at least 5
>> GB of metadata as well - just to store the inodes (assuming 4096 bytes
>> inode size). That's not yet counting directory entry *names*.
>
> Sure. The size of the sqlite database is reasonable for so many
> inodes/directory entries. But I suspect that ext4 will scale better in
> terms of execution time for normal operations like e.g. file system
> stats (df). S3QL needs to do several full table scans
> <https://bitbucket.org/nikratio/s3ql/src/default/src/s3ql/fs.py?fileviewer=file-view-default#fs.py-916:918>
> for that and this will take its time for tables that big (In my case
> approx. 10 seconds).

True. But this could be fixed easily by keeping the relevant numbers in
memory and updating them on write/delete. This would be rather simple;
it's just that no one has ever complained about the time required for df
or s3qlstat before. Personally, I don't use them more than once every few
weeks, so the performance never bothered me.

>> However, I have plans in the drawer to fix this at some point. The idea
>> is to handle reads/writes for blocks that are already cached entirely at
>> the C level. This will allow concurrenty *and* at the same time boost
>> single-threaded performance as well. Just need to find the time...
>
> That sounds great! (More performance always does :) )
> Am I right in assuming that this will speed up read/write syscalls but
> not stuff that solely works on the database?

Yes

> (like opendir or the attr
> and xattr calls)

xattr requires database access and would stay slow.
opendir and getattr don't necessarily act on the database (getattr
results are cached), but they still wouldn't be sped up, because S3QL
itself is mostly single threaded.

Isaac Aymerich

Dec 10, 2015, 12:27:07 PM
to s3...@googlegroups.com
Trying to upload 100K files I'm getting data corruption in s3ql: at some point s3ql crashes because hubic closes the connections, and I lose all data since the last metadata backup. :/

This is part of the log:

2015-12-10 16:48:54.503 28772:Thread-14 s3ql.backends.common.wrapped: Had to retry 535 times over the last 60 seconds, server or network problem?
2015-12-10 16:48:54.539 28772:Thread-22 s3ql.backends.common.wrapped: Had to retry 536 times over the last 60 seconds, server or network problem?
2015-12-10 16:48:54.545 28772:Thread-19 s3ql.backends.common.wrapped: Had to retry 537 times over the last 60 seconds, server or network problem?
2015-12-10 16:48:54.591 28772:Thread-8 s3ql.backends.common.wrapped: Had to retry 538 times over the last 60 seconds, server or network problem?
2015-12-10 16:49:31.752 28772:MainThread s3ql.mount.main: FUSE main loop terminated.
2015-12-10 16:49:31.762 28772:MainThread s3ql.mount.unmount: Unmounting file system...
2015-12-10 16:49:32.153 28772:MainThread s3ql.metadata.dump_and_upload_metadata: Dumping metadata...
2015-12-10 16:49:32.153 28772:MainThread s3ql.metadata.dump_metadata: ..objects..
2015-12-10 16:49:32.165 28772:MainThread s3ql.metadata.dump_metadata: ..blocks..
2015-12-10 16:49:32.186 28772:MainThread s3ql.metadata.dump_metadata: ..inodes..
2015-12-10 16:49:32.223 28772:MainThread s3ql.metadata.dump_metadata: ..inode_blocks..
2015-12-10 16:49:32.241 28772:MainThread s3ql.metadata.dump_metadata: ..symlink_targets..
2015-12-10 16:49:32.242 28772:MainThread s3ql.metadata.dump_metadata: ..names..
2015-12-10 16:49:32.257 28772:MainThread s3ql.metadata.dump_metadata: ..contents..
2015-12-10 16:49:32.275 28772:MainThread s3ql.metadata.dump_metadata: ..ext_attributes..
2015-12-10 16:49:32.276 28772:MainThread s3ql.metadata.upload_metadata: Compressing and uploading metadata...
2015-12-10 16:49:33.234 28772:MainThread s3ql.metadata.upload_metadata: Wrote 987 KiB of compressed metadata.
2015-12-10 16:49:33.234 28772:MainThread s3ql.metadata.upload_metadata: Cycling metadata backups...
2015-12-10 16:49:33.234 28772:MainThread s3ql.metadata.cycle_metadata: Backing up old metadata...
2015-12-10 16:49:39.172 28772:MainThread s3ql.mount.main: Cleaning up local metadata...
2015-12-10 16:49:39.298 28772:MainThread s3ql.mount.main: All done.
2015-12-10 16:49:59.295 35486:MainThread s3ql.mount.get_metadata: Using cached metadata.
2015-12-10 16:49:59.296 35486:MainThread s3ql.mount.main: Mounting filesystem...
2015-12-10 16:49:59.301 35490:MainThread s3ql.daemonize.detach_process_context: Daemonizing, new PID is 35491
2015-12-10 17:17:20.900 35491:Thread-3 s3ql.backends.swift._do_request: OpenStack auth token seems to have expired, requesting new one.
2015-12-10 17:17:20.911 35491:Thread-14 s3ql.backends.swift._do_request: OpenStack auth token seems to have expired, requesting new one.
2015-12-10 17:17:20.914 35491:Thread-5 s3ql.backends.swift._do_request: OpenStack auth token seems to have expired, requesting new one.
2015-12-10 17:17:20.920 35491:Thread-19 s3ql.backends.swift._do_request: OpenStack auth token seems to have expired, requesting new one.
2015-12-10 17:17:20.940 35491:Thread-17 s3ql.backends.swift._do_request: OpenStack auth token seems to have expired, requesting new one.
2015-12-10 17:17:20.944 35491:Thread-7 s3ql.backends.swift._do_request: OpenStack auth token seems to have expired, requesting new one.
2015-12-10 17:17:20.952 35491:Thread-12 s3ql.backends.swift._do_request: OpenStack auth token seems to have expired, requesting new one.
2015-12-10 17:17:20.955 35491:Thread-22 s3ql.backends.swift._do_request: OpenStack auth token seems to have expired, requesting new one.
2015-12-10 17:17:20.957 35491:Thread-6 s3ql.backends.swift._do_request: OpenStack auth token seems to have expired, requesting new one.
2015-12-10 17:17:20.962 35491:Thread-11 s3ql.backends.swift._do_request: OpenStack auth token seems to have expired, requesting new one.
2015-12-10 17:17:20.973 35491:Thread-4 s3ql.backends.swift._do_request: OpenStack auth token seems to have expired, requesting new one.
2015-12-10 17:17:20.979 35491:Thread-18 s3ql.backends.swift._do_request: OpenStack auth token seems to have expired, requesting new one.
2015-12-10 17:17:21.004 35491:Thread-21 s3ql.backends.swift._do_request: OpenStack auth token seems to have expired, requesting new one.
2015-12-10 17:17:21.011 35491:Thread-10 s3ql.backends.swift._do_request: OpenStack auth token seems to have expired, requesting new one.
2015-12-10 17:17:21.014 35491:Thread-8 s3ql.backends.swift._do_request: OpenStack auth token seems to have expired, requesting new one.
2015-12-10 17:17:21.037 35491:Thread-9 s3ql.backends.swift._do_request: OpenStack auth token seems to have expired, requesting new one.
2015-12-10 17:17:21.920 35491:Thread-14 s3ql.backends.common.wrapped: Encountered HTTPError (509 unused), retrying ObjectW.close (attempt 3)...
2015-12-10 17:17:22.344 35491:Thread-3 s3ql.backends.common.wrapped: Encountered HTTPError (509 unused), retrying ObjectW.close (attempt 4)...
2015-12-10 17:17:22.347 35491:Thread-17 s3ql.backends.common.wrapped: Encountered HTTPError (509 unused), retrying ObjectW.close (attempt 4)...
2015-12-10 17:17:22.352 35491:Thread-14 s3ql.backends.common.wrapped: Encountered HTTPError (509 unused), retrying ObjectW.close (attempt 4)...
2015-12-10 17:17:22.356 35491:Thread-7 s3ql.backends.common.wrapped: Encountered HTTPError (509 unused), retrying ObjectW.close (attempt 4)...
2015-12-10 17:17:22.499 35491:Thread-18 root.excepthook: Uncaught top-level exception:
Traceback (most recent call last):
  File "/usr/lib/s3ql/s3ql/mount.py", line 64, in run_with_except_hook
    run_old(*args, **kw)
  File "/usr/lib/python3.4/threading.py", line 868, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/s3ql/s3ql/block_cache.py", line 404, in _upload_loop
    self._do_upload(*tmp)
  File "/usr/lib/s3ql/s3ql/block_cache.py", line 431, in _do_upload
    % obj_id).get_obj_size()
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 107, in wrapped
    return method(*a, **kw)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 337, in perform_write
    return fn(fh)
  File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 716, in __exit__
    self.close()
  File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 710, in close
    self.fh.close()
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 107, in wrapped
    return method(*a, **kw)
  File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 910, in close
    headers=self.headers, body=self.fh)
  File "/usr/lib/s3ql/s3ql/backends/swift.py", line 210, in _do_request
    self.conn =  self._get_conn()
  File "/usr/lib/s3ql/s3ql/backends/swift.py", line 167, in _get_conn
    raise AuthorizationError(resp.reason)
s3ql.backends.common.AuthorizationError: Access denied. Server said: Authorization Required
2015-12-10 17:17:22.536 35491:Thread-22 s3ql.backends.common.wrapped: Encountered HTTPError (509 unused), retrying ObjectW.close (attempt 5)...
2015-12-10 17:17:22.540 35491:Thread-21 s3ql.backends.common.wrapped: Encountered HTTPError (509 unused), retrying ObjectW.close (attempt 5)...
2015-12-10 17:17:22.552 35491:Thread-17 s3ql.backends.common.wrapped: Encountered HTTPError (509 unused), retrying ObjectW.close (attempt 5)...
2015-12-10 17:17:22.554 35491:Thread-3 s3ql.backends.common.wrapped: Encountered HTTPError (509 unused), retrying ObjectW.close (attempt 5)...
2015-12-10 17:17:22.560 35491:Thread-14 s3ql.backends.common.wrapped: Encountered HTTPError (509 unused), retrying ObjectW.close (attempt 5)...
2015-12-10 17:17:22.664 35491:Thread-19 s3ql.mount.exchook: Unhandled top-level exception during shutdown (will not be re-raised)
Traceback (most recent call last):
  File "/usr/lib/s3ql/s3ql/mount.py", line 64, in run_with_except_hook
    run_old(*args, **kw)
  File "/usr/lib/python3.4/threading.py", line 868, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/s3ql/s3ql/block_cache.py", line 404, in _upload_loop
    self._do_upload(*tmp)
  File "/usr/lib/s3ql/s3ql/block_cache.py", line 431, in _do_upload
    % obj_id).get_obj_size()
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 107, in wrapped
    return method(*a, **kw)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 337, in perform_write
    return fn(fh)
  File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 716, in __exit__
    self.close()
  File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 710, in close
    self.fh.close()
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 107, in wrapped
    return method(*a, **kw)
  File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 910, in close
    headers=self.headers, body=self.fh)
  File "/usr/lib/s3ql/s3ql/backends/swift.py", line 210, in _do_request
    self.conn =  self._get_conn()
  File "/usr/lib/s3ql/s3ql/backends/swift.py", line 167, in _get_conn
    raise AuthorizationError(resp.reason)
s3ql.backends.common.AuthorizationError: Access denied. Server said: Authorization Required
2015-12-10 17:17:22.666 35491:Thread-10 s3ql.mount.exchook: Unhandled top-level exception during shutdown (will not be re-raised)





Riku Bister

Dec 10, 2015, 3:14:10 PM
to s3ql


On Thursday, December 10, 2015 at 19:27:07 UTC+2, Isaac Aymerich wrote:
Trying to upload 100K files I'm getting data corruption in s3ql: at some point s3ql crashes because hubic closes the connections, and I lose all data since the last metadata backup. :/

This is part of the log:


Which version of s3ql are you using, and how do you connect?