Error using Opendedup with Oracle RMAN over NFS


Roberto

May 29, 2011, 4:01:02 PM
to dedupfilesystem-sdfs-user-discuss
When accessing an SDFS file system over NFS as an Oracle RMAN backup target, RMAN returns this
error:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on t2 channel at 05/29/2011 00:30:46
ORA-19502: write error on file "/oraarchsdfs/db601/db/DB601_full_blmdgj6k", blockno 231425 (blocksize=8192)
ORA-27072: File I/O error
Linux-x86_64 Error: 116: Stale NFS file handle
Additional information: 4
Additional information: 231425
Additional information: -1
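For anyone decoding the OS-level error: code 116 in "Linux-x86_64 Error: 116" is ESTALE. A quick way to confirm the mapping from a shell, assuming python3 is available (nothing SDFS-specific here):

```shell
# Error 116 reported under ORA-27072 is the Linux errno ESTALE
# ("Stale file handle"), raised when the NFS server no longer
# recognizes the file handle the client is using.
python3 -c 'import errno, os; print(errno.ESTALE, os.strerror(errno.ESTALE))'
```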

Sam Silverberg

May 29, 2011, 4:13:38 PM
to dedupfilesystem-...@googlegroups.com, dedupfilesystem-sdfs-user-discuss
Can you send the XML config? It's located in /etc/sdfs.

Sent from my iPhone

Roberto

May 29, 2011, 4:33:47 PM
to dedupfilesystem-sdfs-user-discuss
Here it is:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<subsystem-config version="1.0.1">
<locations dedup-db-store="/opt/sdfs/volumes/rmandedup_sdfs/ddb" io-log="/opt/sdfs/volumes/rmandedup_sdfs/io.SDFSLogger.getLog()"/>
<io chunk-size="128" claim-hash-schedule="0 0 0/2 * * ?" dedup-files="true" file-read-cache="5" hash-size="16" max-file-inactive="900" max-file-write-buffers="32" max-open-files="1024" meta-file-cache="1024" multi-read-timeout="1000" safe-close="false" safe-sync="false" system-read-cache="1000" write-threads="12"/>
<permissions default-file="0644" default-folder="0755" default-group="0" default-owner="0"/>
<local-chunkstore allocation-size="1099511627776" chunk-gc-schedule="0 0 0/4 * * ?" chunk-store="/opt/sdfs/volumes/rmandedup_sdfs/chunkstore/chunks" chunk-store-dirty-timeout="1000" chunk-store-read-cache="5" enabled="true" encrypt="false" encryption-key="FJXX51bYcRwhkdEwzGRgjzAwSJVrRNUSWre" eviction-age="6" hash-db-store="/opt/sdfs/volumes/rmandedup_sdfs/chunkstore/hdb" pre-allocate="false" read-ahead-pages="1"/>
<volume capacity="1TB" current-size="0" duplicate-bytes="34078720" maximum-percentage-full="-1.0" path="/opt/sdfs/volumes/rmandedup_sdfs/files" read-bytes="0" write-bytes="169345024"/>
</subsystem-config>

Best regards,
Roberto

On May 29, 22:13, Sam Silverberg <sam.silverb...@gmail.com> wrote:
> Can you send the XML config? It's located in /etc/sdfs.
>
> Sent from my iPhone
>

Sam Silverberg

May 29, 2011, 5:29:17 PM
to dedupfilesystem-...@googlegroups.com
Not sure why it would be failing. Have you tried this with SDFS 1.0.5?

Roberto

May 30, 2011, 4:48:22 AM
to dedupfilesystem-sdfs-user-discuss
Yes, I tried with both 1.0.1 and 1.0.5, and the error is the same.
I am using RHEL 6 x86_64.

On May 29, 23:29, Sam Silverberg <sam.silverb...@gmail.com> wrote:
> Not sure why it would be failing. Have you tried this with SDFS 1.0.5?
>

henk bakker

May 30, 2011, 7:07:14 AM
to dedupfilesystem-...@googlegroups.com
Perhaps one of the following three options?

1. retry mount from client
2. on client, unmount, then mount
3. service nfsd restart on server

Roberto

May 30, 2011, 8:29:19 AM
to dedupfilesystem-sdfs-user-discuss
I tried all of your suggestions, but the problem persists.
I also tried exporting an ext4 file system over NFS from the same server, and
RMAN on the client works correctly against it.


On May 30, 13:07, henk bakker <novastor...@gmail.com> wrote:
> perhaps one of the next three options?
>
> Basically 3 options:
>
> 1. retry mount from client
> 2. on client, unmount, then mount
> 3. service nfsd restart on server
>

Sam Silverberg

May 30, 2011, 11:45:59 AM
to dedupfilesystem-...@googlegroups.com
NFS transfers should work just fine. Here are the NFS export options I use for VMware ESXi 4.1:

/media/dedup  192.168.0.0/255.255.255.0(rw,async,no_subtree_check,fsid=0)

Also, can you send me the logs from /var/log/sdfs/ ?

Roberto

May 30, 2011, 12:48:14 PM
to dedupfilesystem-sdfs-user-discuss
I tried exporting the file system with the same parameters as yours, but the
problem persists.

I have recreated the SDFS volume:
[root@srvmdedup1 ~]# mkfs.sdfs --io-safe-close=false --volume-name=rmandedup_sdfs --volume-capacity=1TB
Attempting to create volume ...
Volume [rmandedup_sdfs] created with a capacity of [1TB]
check [/etc/sdfs/rmandedup_sdfs-volume-cfg.xml] for configuration details if you need to change anything
[root@srvmdedup1 ~]#
[root@srvmdedup1 ~]# mount.sdfs rmandedup_sdfs /media/rmandedup
Running SDFS Version 1.0.5
reading config file = /etc/sdfs/rmandedup_sdfs-volume-cfg.xml
-f
/media/rmandedup
-o
direct_io,big_writes,allow_other,fsname=rmandedup_sdfs-volume-cfg.xml
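The mounted volume is then exported over NFS. For completeness, a hypothetical /etc/exports entry for this mount point, modeled on Sam's ESXi example (the network range is an assumption; adjust to your subnet):

```
# /etc/exports -- hypothetical entry exporting the SDFS mount point above
/media/rmandedup  192.168.0.0/255.255.255.0(rw,async,no_subtree_check,fsid=0)
```

After editing /etc/exports, `exportfs -ra` re-reads it without restarting the NFS service.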

and this is the log after the error:
ORA-19502: write error on file "/oraarchsdfs/db601/db/DB601_full_0dmdl7ih", blockno 18433 (blocksize=8192)
ORA-27072: File I/O error
Linux-x86_64 Error: 116: Stale NFS file handle

[root@srvmdedup1 ~]# cat /var/log/sdfs/rmandedup_sdfs-volume-cfg.xml.log
2011-05-30 18:31:38,264 [main] INFO sdfs - Running SDFS Version 1.0.5
2011-05-30 18:31:38,274 [main] INFO sdfs - Parsing subsystem-config version 1.0.5
2011-05-30 18:31:38,275 [main] INFO sdfs - parsing folder locations
2011-05-30 18:31:38,275 [main] INFO sdfs - Setting hash size to 16
2011-05-30 18:31:38,277 [main] INFO sdfs - Mounting volume /opt/sdfs/volumes/rmandedup_sdfs/files
2011-05-30 18:31:38,280 [main] INFO sdfs - Setting volume size to 1099511627776
2011-05-30 18:31:38,280 [main] INFO sdfs - Setting maximum capacity to infinite
2011-05-30 18:31:38,280 [main] INFO sdfs - ######### Will allocate 1099511627776 in chunkstore ##############
2011-05-30 18:31:38,479 [main] INFO sdfs - Opening Chunk Store
2011-05-30 18:31:40,730 [main] INFO sdfs - ChunkStore /opt/sdfs/volumes/rmandedup_sdfs/chunkstore/chunks/chunks.chk created
2011-05-30 18:31:41,738 [main] INFO sdfs - ########## Finished Loading Hash Database in [0] seconds ###########
2011-05-30 18:31:41,738 [main] INFO sdfs - loaded [0] into the hashtable [/opt/sdfs/volumes/rmandedup_sdfs/chunkstore/hdb/hashstore-sdfs] free slots available are [0] free slots added [0] end file position is [0]!
2011-05-30 18:31:41,739 [main] INFO sdfs - Cache Size = 131072 and Dirty Timeout = 1000
2011-05-30 18:31:41,739 [main] INFO sdfs - Total Entries 0
2011-05-30 18:31:41,740 [main] INFO sdfs - Added sdfs
2011-05-30 18:31:41,742 [main] INFO sdfs - Current DSE Percentage Full is [0.0] will run GC when [0.1]
2011-05-30 18:31:43,322 [main] INFO sdfs - ###################### Management WebServer Started at localhost/127.0.0.1:6442 #########################
2011-05-30 18:31:43,331 [main] INFO sdfs - mounting /opt/sdfs/volumes/rmandedup_sdfs/files to null
2011-05-30 18:31:43,432 [main] INFO sdfs - Mounted SDFS FileSystem
[root@srvmdedup1 ~]# tail -f /var/log/sdfs/rmandedup_sdfs-volume-cfg.xml.log
(tail shows the same log lines as above)


On May 30, 5:45 pm, Sam Silverberg <sam.silverb...@gmail.com> wrote:
> NFS transfers should work just fine. Here are my nfs options that I use for
> VMWare ESX 4.1i .
>
> /media/dedup  192.168.0.0/255.255.255.0(rw,async,no_subtree_check,fsid=0)
>
> Also, can you send me the logs from /var/log/sdfs/ ?
>

Sam Silverberg

May 30, 2011, 3:14:34 PM
to dedupfilesystem-...@googlegroups.com
Sorry for the 100 questions. I just can't seem to figure this out. NFS export of SDFS is fairly common.

I am uploading a new version of SDFS tomorrow. Can you wait and try it with that? I have seen issues in 1.0.5 where, for a second after data is written, it cannot be read yet. This may be related.

Sent from my iPhone

Roberto

May 31, 2011, 4:04:59 PM
to dedupfilesystem-sdfs-user-discuss
Hi,
I don't have good news: I tried the new release, 1.0.6, but it doesn't
resolve the problem.
This is the log:
2011-05-31 21:55:25,121 [main] INFO sdfs - Running SDFS Version 1.0.6
2011-05-31 21:55:25,122 [main] INFO sdfs - Parsing subsystem-config version 1.0.6
2011-05-31 21:55:25,122 [main] INFO sdfs - parsing folder locations
2011-05-31 21:55:25,123 [main] INFO sdfs - Setting hash size to 16
2011-05-31 21:55:25,124 [main] INFO sdfs - Mounting volume /opt/sdfs/volumes/rmandedup_sdfs/files
2011-05-31 21:55:25,126 [main] INFO sdfs - Setting volume size to 1099511627776
2011-05-31 21:55:25,126 [main] INFO sdfs - Setting maximum capacity to infinite
2011-05-31 21:55:25,126 [main] INFO sdfs - ######### Will allocate 1099511627776 in chunkstore ##############
2011-05-31 21:55:25,147 [main] INFO sdfs - Wrote volume config = /etc/sdfs/rmandedup_sdfs-volume-cfg.xml
2011-05-31 21:55:25,150 [main] INFO sdfs - Opening Chunk Store
2011-05-31 21:55:25,323 [main] INFO sdfs - ChunkStore /opt/sdfs/volumes/rmandedup_sdfs/chunkstore/chunks/chunks.chk created
2011-05-31 21:55:25,481 [main] INFO sdfs - ########## Finished Loading Hash Database in [0] seconds ###########
2011-05-31 21:55:25,481 [main] INFO sdfs - loaded [0] into the hashtable [/opt/sdfs/volumes/rmandedup_sdfs/chunkstore/hdb/hashstore-sdfs] free slots available are [0] free slots added [0] end file position is [0]!
2011-05-31 21:55:25,482 [main] INFO sdfs - Cache Size = 131072 and Dirty Timeout = 1000
2011-05-31 21:55:25,482 [main] INFO sdfs - Total Entries 0
2011-05-31 21:55:25,482 [main] INFO sdfs - Added sdfs
2011-05-31 21:55:25,485 [main] INFO sdfs - Current DSE Percentage Full is [0.0] will run GC when [0.1]
2011-05-31 21:55:25,485 [main] INFO sdfs - Using org.opendedup.sdfs.filestore.gc.PFullGC for DSE Garbage Collection
2011-05-31 21:55:25,545 [main] INFO sdfs - ###################### Management WebServer Started at localhost/127.0.0.1:6442 #########################
2011-05-31 21:55:25,547 [main] INFO sdfs - mounting /opt/sdfs/volumes/rmandedup_sdfs/files to null


On May 30, 9:14 pm, Sam Silverberg <sam.silverb...@gmail.com> wrote:
> Sorry for the 100 questions. I just can't seem to figure this out. Nfs export of sdfs is fairly common.
>
> I am uploading a new version of sdfs tomorrow. Can you wait and try it with that? I have seen issues in 1.0.5 where data is written and it takes a second where it cannot be read yet. This may be related.
>
> Sent from my iPhone
>

Sam Silverberg

May 31, 2011, 4:15:44 PM
to dedupfilesystem-...@googlegroups.com
Roberto,

I am sorry to hear the issue is not fixed. From my understanding of NFS, a stale file handle error is usually due to two clients accessing the same file at the same time. Can you add the noac option to the client's NFS mount options and see if you still get the error?

Roberto

May 31, 2011, 4:39:30 PM
to dedupfilesystem-sdfs-user-discuss
On the client I use these NFS mount parameters:
hard,bg,proto=tcp,suid,rsize=32768,wsize=32768,noac
but the problem persists.
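For reference, those options in an /etc/fstab entry would look something like this (the server name and mount points are assumptions taken from the shell prompts and RMAN paths earlier in the thread):

```
# /etc/fstab -- hypothetical client-side entry with the options above plus noac
srvmdedup1:/media/rmandedup  /oraarchsdfs  nfs  hard,bg,proto=tcp,suid,rsize=32768,wsize=32768,noac  0 0
```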

On May 31, 10:15 pm, Sam Silverberg <sam.silverb...@gmail.com> wrote:
> Roberto,
>
> I am sorry to hear the issue is not fixed. From my understanding of nfs, a
> Stale file error is usually due to two clients accessing the same file at a
> time. Can you add the noac attribute on the client mount options for nfs and
> see if you still get the error?
>

Sam Silverberg

May 31, 2011, 4:52:13 PM
to dedupfilesystem-...@googlegroups.com
I don't have an Oracle box to test against, but I will try out your client-side parameters and test. I won't be able to get to it until Friday evening, but I will look at it.

Sent from my iPhone

Roberto

May 31, 2011, 5:00:31 PM
to dedupfilesystem-sdfs-user-discuss
Is there a debug parameter to analyze whether the problem is in SDFS or in NFS?

On May 31, 10:52 pm, Sam Silverberg <sam.silverb...@gmail.com> wrote:
> I don't have an oracle box to test
> Against but will try out your client side parameters and test. I will not be able to get to it until friday evening but I will look at it.
>
> Sent from my iPhone
>