Device or resource busy


Jimb0

unread,
Sep 30, 2011, 10:29:27 AM9/30/11
to dedupfilesystem-sdfs-user-discuss
When copying many, many files to the dedup directory via scp or smbget, I
get a lot of errors like the one below.

Can't open Dell Drivers/Desktops/CHIPSET/All/g33q35.cat: Device or
resource busy


When I try to ls my dedup directory I get: ls: reading directory .:
Permission denied.

But if I Ctrl-C the scp or smbget and wait a few seconds, it comes back.

Is this normal behavior? Any debugging I can do to help?

Daniel Lindgren

unread,
Sep 30, 2011, 12:48:30 PM9/30/11
to dedupfilesystem-...@googlegroups.com
2011/9/30 Jimb0 <mccan...@gmail.com>:

> When coping many many files, via scp or smbget to the dedup dir I get
> many of these type of errors below.
>
> Can't open Dell Drivers/Desktops/CHIPSET/All/g33q35.cat: Device or
> resource busy

Have you checked the number of open files allowed? See the part about
limits.conf here: http://opendedup.org/quickstart
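
A quick way to see the limit that actually applies to the running SDFS
mount process (rather than to your shell) is to read /proc. Just a
sketch, assuming the Java process command line contains "sdfs":

cat /proc/$(pgrep -f sdfs | head -1)/limits | grep "open files"

If that still reports 1024, the values from limits.conf are not reaching
the process.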

Cheers,
Daniel

Jimb0

unread,
Sep 30, 2011, 1:57:46 PM9/30/11
to dedupfilesystem-sdfs-user-discuss
Yep, this is what I have.

cat /etc/security/limits.conf
# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
#
#<domain> <type> <item> <value>
#
#Where:
#<domain> can be:
# - an user name
# - a group name, with @group syntax
# - the wildcard *, for default entry
# - the wildcard %, can be also used with %group syntax,
# for maxlogin limit
# - NOTE: group and wildcard limits are not applied to root.
# To apply a limit to the root user, <domain> must be
# the literal username root.
#
#<type> can have the two values:
# - "soft" for enforcing the soft limits
# - "hard" for enforcing hard limits
#
#<item> can be one of the following:
# - core - limits the core file size (KB)
# - data - max data size (KB)
# - fsize - maximum filesize (KB)
# - memlock - max locked-in-memory address space (KB)
# - nofile - max number of open files
# - rss - max resident set size (KB)
# - stack - max stack size (KB)
# - cpu - max CPU time (MIN)
# - nproc - max number of processes
# - as - address space limit (KB)
# - maxlogins - max number of logins for this user
# - maxsyslogins - max number of logins on the system
# - priority - the priority to run user process with
# - locks - max number of file locks the user can hold
# - sigpending - max number of pending signals
# - msgqueue - max memory used by POSIX message queues (bytes)
# - nice - max nice priority allowed to raise to values: [-20, 19]
# - rtprio - max realtime priority
# - chroot - change root to directory (Debian-specific)
#
#<domain> <type> <item> <value>
#


* soft nofile 65535
* hard nofile 65535


#* soft core 0
#root hard core 100000
#* hard rss 10000
#@student hard nproc 20
#@faculty soft nproc 20
#@faculty hard nproc 50
#ftp hard nproc 0
#ftp - chroot /ftp
#@student - maxlogins 4

# End of file



Sam Silverberg

unread,
Sep 30, 2011, 2:21:22 PM9/30/11
to dedupfilesystem-...@googlegroups.com, dedupfilesystem-sdfs-user-discuss
Jim,

Take a look at the log files in /var/log/sdfs/. If you could email me any output from the <volume-config>.log that looks suspect I can troubleshoot.

Sent from my iPhone

Jimb0

unread,
Oct 4, 2011, 10:58:31 AM10/4/11
to dedupfilesystem-sdfs-user-discuss
Everyone, make sure that after changing limits.conf you reboot the
system.
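
After the reboot (or after a fresh login, assuming pam_limits is
enabled), a quick sanity check from the account that mounts the volume:

ulimit -Hn
ulimit -Sn

Both should report the value set in limits.conf (65535 above).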

Peter Mah

unread,
Oct 12, 2011, 11:01:58 PM10/12/11
to dedupfilesystem-sdfs-user-discuss
I have the same problem. I modified /etc/security/limits.conf as per
the quick start and rebooted.

It looks like the limit changes have taken effect. Here is the output
of ulimit:

pmah@rufus:~$ ulimit -Hn
65535
pmah@rufus:~$ ulimit -Sn
65535
pmah@rufus:~$ lsof | wc -l
730

Here is a sample of the errors I'm getting from the sdfs volume log.

2011-10-12 20:48:30,299 [Thread-11] WARN sdfs - unable to write file metadata for [/mnt/data01/sdfs/files/backups/zbot/20111012-162356/pics/pics_2005/september/Picture 038.jpg]
java.io.FileNotFoundException: /mnt/data01/sdfs/files/backups/zbot/20111012-162356/pics/pics_2005/september/Picture 038.jpg (Too many open files)
    at java.io.FileOutputStream.open(Native Method)
    at java.io.FileOutputStream.<init>(Unknown Source)
    at java.io.FileOutputStream.<init>(Unknown Source)
    at org.opendedup.sdfs.io.MetaDataDedupFile.writeFile(MetaDataDedupFile.java:583)
    at org.opendedup.sdfs.io.MetaDataDedupFile.unmarshal(MetaDataDedupFile.java:610)
    at org.opendedup.sdfs.io.MetaDataDedupFile.sync(MetaDataDedupFile.java:912)
    at fuse.SDFS.SDFSFileSystem.mknod(SDFSFileSystem.java:381)
    at fuse.Filesystem3ToFuseFSAdapter.mknod(Filesystem3ToFuseFSAdapter.java:132)
2011-10-12 20:48:35,975 [Thread-297] WARN sdfs - unable to write file metadata for [/mnt/data01/sdfs/files/backups/zbot/20111012-162356/peter/work/fasttrack/spike_report/modules/DataModel/FileSystem/Export/LocalToLocalHandler.php.html]
java.io.FileNotFoundException: /mnt/data01/sdfs/files/backups/zbot/20111012-162356/peter/work/fasttrack/spike_report/modules/DataModel/FileSystem/Export/LocalToLocalHandler.php.html (Too many open files)
    at java.io.FileOutputStream.open(Native Method)
    at java.io.FileOutputStream.<init>(Unknown Source)
    at java.io.FileOutputStream.<init>(Unknown Source)
    at org.opendedup.sdfs.io.MetaDataDedupFile.writeFile(MetaDataDedupFile.java:583)
    at org.opendedup.sdfs.io.MetaDataDedupFile.unmarshal(MetaDataDedupFile.java:610)
    at org.opendedup.sdfs.filestore.MetaFileStore$1.onEviction(MetaFileStore.java:46)
    at org.opendedup.sdfs.filestore.MetaFileStore$1.onEviction(MetaFileStore.java:1)
    at com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap.notifyListener(ConcurrentLinkedHashMap.java:567)
    at com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap.afterCompletion(ConcurrentLinkedHashMap.java:350)
    at com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap.put(ConcurrentLinkedHashMap.java:778)
    at com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap.put(ConcurrentLinkedHashMap.java:750)
    at org.opendedup.sdfs.filestore.MetaFileStore.cacheMF(MetaFileStore.java:66)
    at org.opendedup.sdfs.filestore.MetaFileStore.getMF(MetaFileStore.java:101)
    at fuse.SDFS.SDFSFileSystem.mknod(SDFSFileSystem.java:380)
    at fuse.Filesystem3ToFuseFSAdapter.mknod(Filesystem3ToFuseFSAdapter.java:132)
2011-10-12 20:48:35,976 [Thread-297] WARN sdfs - unable to write file metadata for [/mnt/data01/sdfs/files/backups/zbot/20111012-162356/pics/pics_2005/september/Picture 071.jpg]
java.io.FileNotFoundException: /mnt/data01/sdfs/files/backups/zbot/20111012-162356/pics/pics_2005/september/Picture 071.jpg (Too many open files)
    at java.io.FileOutputStream.open(Native Method)
    at java.io.FileOutputStream.<init>(Unknown Source)
    at java.io.FileOutputStream.<init>(Unknown Source)
    at org.opendedup.sdfs.io.MetaDataDedupFile.writeFile(MetaDataDedupFile.java:583)
    at org.opendedup.sdfs.io.MetaDataDedupFile.unmarshal(MetaDataDedupFile.java:610)
    at org.opendedup.sdfs.io.MetaDataDedupFile.sync(MetaDataDedupFile.java:912)
    at fuse.SDFS.SDFSFileSystem.mknod(SDFSFileSystem.java:381)
    at fuse.Filesystem3ToFuseFSAdapter.mknod(Filesystem3ToFuseFSAdapter.java:132)

I never had a problem with 1.0.1, but started having problems after
upgrading to 1.0.6. I had the same problem with 1.0.7 and still have it
with 1.1.0.

I basically just copy (via scp) about 50GB of files nightly to an sdfs
volume.

Any suggestions?

Thanks.

Peter

Sam Silverberg

unread,
Oct 13, 2011, 12:25:08 AM10/13/11
to dedupfilesystem-...@googlegroups.com
Peter,

I will test your issue here. Approximately how many files are in your nightly backup?

Also, can you send me the volume config XML for the volume in question?

Sent from my iPhone

Daniel Lindgren

unread,
Oct 13, 2011, 8:17:16 AM10/13/11
to dedupfilesystem-...@googlegroups.com
>> It looks like the limit changes have taken effect. Here is the output
>> of ulimit:
>>
>> pmah@rufus:~$ ulimit -Hn
>> 65535
>> pmah@rufus:~$ ulimit -Sn
>> 65535
>> pmah@rufus:~$ lsof | wc -l
>> 730

If you use sudo to mount the SDFS volume you may get different ulimits
for that process.

Try running "sudo ulimit -n" and see if it differs.

Logging in as root and then mounting the volume is a workaround.

Cheers,
Daniel

Daniel Lindgren

unread,
Oct 13, 2011, 8:21:31 AM10/13/11
to dedupfilesystem-...@googlegroups.com

I meant run "sudo su -" and then "ulimit -n", not "sudo ulimit -n".

If you use sudo to log on as root (sudo su -), you may get different
ulimits than if you manually login as root (su -).
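
As the comments in limits.conf point out, the wildcard entries are not
applied to root, so if the volume is mounted as root it may also be worth
adding explicit root entries. A sketch, not something I have verified
here:

root soft nofile 65535
root hard nofile 65535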

Cheers,
Daniel

Peter Mah

unread,
Oct 13, 2011, 12:40:57 PM10/13/11
to dedupfilesystem-sdfs-user-discuss
Ah. OK. I didn't realize that it wasn't a global setting.

pmah@rufus:~$ sudo su -
[sudo] password for pmah:
root@rufus:~# ulimit -n
1024
root@rufus:~# ulimit -Hn
1024
root@rufus:~# ulimit -Sn
1024

I'll look into changing it when mounting through sudo.
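
One option might be to raise the limit in the same shell that performs
the mount, so the mounting process inherits it. A rough sketch, assuming
the volume is mounted with the mount.sdfs helper (I'd substitute my
actual mount command and arguments):

sudo sh -c 'ulimit -n 65535 && mount.sdfs <volume-name> <mount-point>'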

Thanks!



Peter Mah

unread,
Oct 13, 2011, 12:46:22 PM10/13/11
to dedupfilesystem-sdfs-user-discuss
Thanks. I think Daniel may have found my problem, but here is my
information just in case.

About 76000 files in 146GB of data.

Config file: /etc/sdfs/dedupe_vol_01-volume-cfg.xml

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<subsystem-config version="1.1.0">
<locations dedup-db-store="/mnt/data01/sdfs/ddb" io-log="/mnt/data01/sdfs/ioperf.log"/>
<io chunk-size="128" claim-hash-schedule="0 0 0/2 * * ?" dedup-files="true" file-read-cache="5" hash-size="16" log-level="1" max-file-inactive="900" max-file-write-buffers="1" max-open-files="1024" meta-file-cache="1024" multi-read-timeout="1000" safe-close="true" safe-sync="false" system-read-cache="1000" write-threads="12"/>
<permissions default-file="0644" default-folder="0755" default-group="0" default-owner="0"/>
<launch-params class-path="/usr/share/sdfs/lib/truezip-samples-7.3.2-jar-with-dependencies.jar:/usr/share/sdfs/lib/commons-collections-3.2.1.jar:/usr/share/sdfs/lib/sdfs.jar:/usr/share/sdfs/lib/jacksum.jar:/usr/share/sdfs/lib/slf4j-log4j12-1.5.10.jar:/usr/share/sdfs/lib/slf4j-api-1.5.10.jar:/usr/share/sdfs/lib/simple-4.1.21.jar:/usr/share/sdfs/lib/commons-io-1.4.jar:/usr/share/sdfs/lib/clhm-release-1.0-lru.jar:/usr/share/sdfs/lib/trove-3.0.0a3.jar:/usr/share/sdfs/lib/quartz-1.8.3.jar:/usr/share/sdfs/lib/log4j-1.2.15.jar:/usr/share/sdfs/lib/bcprov-jdk16-143.jar:/usr/share/sdfs/lib/commons-codec-1.3.jar:/usr/share/sdfs/lib/commons-httpclient-3.1.jar:/usr/share/sdfs/lib/commons-logging-1.1.1.jar:/usr/share/sdfs/lib/java-xmlbuilder-1.jar:/usr/share/sdfs/lib/jets3t-0.7.4.jar:/usr/share/sdfs/lib/commons-cli-1.2.jar" java-options="-Djava.library.path=/usr/share/sdfs/bin/ -Dorg.apache.commons.logging.Log=fuse.logging.FuseLog -Dfuse.logging.level=INFO -server -XX:+UseG1GC -Xmx20000m -Xmn2000m" java-path="/usr/share/sdfs/jre1.7.0/bin/java"/>
<sdfscli enable="true" enable-auth="false" listen-address="localhost" password="e5136700d0f57b5d5c912532e55293ca2e24d545bc7e74414b87362d4cfec18c" port="6442" salt="vtBbKt"/>
<local-chunkstore allocation-size="3221225472000" chunk-gc-schedule="0 0 0/4 * * ?" chunk-store="/mnt/data01/sdfs/chunkstore/chunks" chunk-store-dirty-timeout="1000" chunk-store-read-cache="5" chunkstore-class="org.opendedup.sdfs.filestore.FileChunkStore" enabled="true" encrypt="false" encryption-key="JWcsb9EecadV6f3-VMZn9kyTk6u8d8UWuxK" eviction-age="6" gc-class="org.opendedup.sdfs.filestore.gc.PFullGC" hash-db-store="/mnt/data01/sdfs/chunkstore/hdb" pre-allocate="false" read-ahead-pages="1">
<network enable="false" hostname="0.0.0.0" port="2222" upstream-enabled="false" upstream-host="" upstream-host-port="2222" upstream-password="admin" use-udp="false"/>
</local-chunkstore>
<volume capacity="3000GB" closed-gracefully="false" current-size="18876420664" duplicate-bytes="2426929152" maximum-percentage-full="-1.0" path="/mnt/data01/sdfs/files" read-bytes="0" write-bytes="16486760448"/>
</subsystem-config>

Peter

