Re Syncing Production Ready


Davide D'Amico

unread,
Feb 21, 2009, 3:19:03 AM2/21/09
to chironfs-forum
Hi, I'd like to set up a FreeBSD NFS cluster and ChironFS seems to be
the only solution out there.
I have this scenario:
-) NFS "node" nfsA;
-) NFS "node" nfsB;
-) WEB "node" webA;
-) WEB "node" webB;
-) WEB "node" webC;
-) WEB BALANCER "node" balA;
-) WEB BALANCER "node" balB.

The idea of the local copy seems interesting (I still have to test
it), but I have a question: if nfsB fails, I have to resync it before
reinserting it into the NFS cluster, but while I am resyncing it, other
writes could occur on nfsA, so nfsA and nfsB will never be in sync
with each other! Ideally I would have to "stop" nfsA, sync nfsA with
nfsB and then reinsert them into the cluster, but this is not workable
because webA, webB and webC would see disk I/O failures.
How could I accomplish this (or how do you do it)? :-)

Thanks in advance,
d.

Keith Freedman

unread,
Feb 21, 2009, 3:44:21 AM2/21/09
to chironf...@googlegroups.com
I use Gluster; its auto-healing doesn't require resyncing when you
bring something back online.

In your particular case, you'll want to use client-side replication
with Gluster 2.0.
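For reference, client-side replication in Gluster 2.x is configured in a client volfile, roughly like the sketch below. The hostnames and volume names here are made up to match Davide's scenario, and the exact option set may differ between Gluster versions, so treat it as a starting point rather than a working config:

```
# Two protocol/client volumes, one per storage server,
# mirrored on the client by the cluster/replicate translator.
volume nfsA-client
  type protocol/client
  option transport-type tcp
  option remote-host nfsA
  option remote-subvolume brick
end-volume

volume nfsB-client
  type protocol/client
  option transport-type tcp
  option remote-host nfsB
  option remote-subvolume brick
end-volume

volume mirror
  type cluster/replicate
  subvolumes nfsA-client nfsB-client
end-volume
```

Because the replication happens on the client side, each web node writes to both storage servers itself, which is what makes the self-heal on reconnect possible.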

Davide D'Amico

unread,
Feb 21, 2009, 6:52:01 AM2/21/09
to chironf...@googlegroups.com
2009/2/21 Keith Freedman <keith.f...@gmail.com>:
>
> I use Gluster; its auto-healing doesn't require resyncing when you
> bring something back online.
>
> In your particular case, you'll want to use client-side replication
> with Gluster 2.0.
So much cool software I still don't know about...

Thanks in advance,
d.


Davide D'Amico

unread,
Feb 21, 2009, 7:00:07 AM2/21/09
to chironf...@googlegroups.com
2009/2/21 Davide D'Amico <davide...@gmail.com>:

> 2009/2/21 Keith Freedman <keith.f...@gmail.com>:
>>
>> I use gluster. it's auto-healing doesn't require resyncing when you
>> bring something back online.
>>
>> in your particular case, you'll want to use client side replication
>> using gluster 2.0
> How many cool software I don't (still) know...
>
I've been looking at GlusterFS: should I use it instead of ChironFS,
or on top of it?

d.

Alexandre Fernandes

unread,
Feb 21, 2009, 11:33:28 AM2/21/09
to chironf...@googlegroups.com
Hello Davide and Keith,

Unfortunately, at the moment ChironFS does not have
'auto-healing' or 'hot-resync' features. Judging by the
description of your problem, I suggest you look for
another solution and, as Keith pointed out, Gluster is a good one.

I don't think you need to mix them. Everything that ChironFS
will be capable of doing in the future will be a subset of what
Gluster, for example, can do. The key motivation for this
project is simplicity: ChironFS wants to offer one of the core
services for high-availability computing [the file replication
thing] in a secure, lightweight way that is very easy to set up
and maintain. Take a good look at Gluster and you'll see that
it's a lot more than this.

What ChironFS wants to do in the future is invite people
to compare the size of the solution with the size of the problem
and, maybe, offer an alternative for those who don't have, or don't
want to spend, the resources.

Best regards,
Alexandre
--
Any sufficiently advanced satire is indistinguishable from reality.

Martin Fick

unread,
Feb 21, 2009, 12:02:25 PM2/21/09
to chironfs-forum
I have implemented patches that block writes, along with a script
that rsyncs the good copy to the bad one (to first catch up on large
heals), then initiates the write block, then rsyncs again (which
should be quick since we already resynced once), and finally unblocks
all writes. I posted a few more details about the implementation in
this thread:

http://groups.google.com/group/chironfs-forum/browse_thread/thread/be66253dd51728c5

If you want to try out my patches and the sync script, I can send them
to you.

-Martin

Alexandre Fernandes

unread,
Feb 21, 2009, 12:13:00 PM2/21/09
to chironf...@googlegroups.com
Hi Martin,

Glad to hear that! Please send me the patches and the
script. I'll be in touch with impressions and an idea.

TIA,
Alexandre

2009/2/21 Martin Fick <mogu...@yahoo.com>:

Davide D'Amico

unread,
Feb 21, 2009, 1:15:57 PM2/21/09
to chironf...@googlegroups.com
Thanks Alexandre,
I found this:
http://www.gluster.org/docs/index.php/Automatic_File_Replication_(Mirror)_across_Two_Storage_Servers

which fits my needs.
Anyway, I'll keep an eye on your project.

d.


2009/2/21 Alexandre Fernandes <alexandre...@gmail.com>:
--
d.

Martin Fick

unread,
Feb 21, 2009, 1:35:22 PM2/21/09
to chironfs-forum
> Glad to hear that! Please send me the patches and the
> script. I'll be in touch with impressions and an idea

Here, under:

http://www.theficks.name/bin/lib/chironfs/chironfs-1.1.1.mtn1/

try the patches from chironfs-1.1.1:

chironfs.mtn1.patch

And the sync script:

chironsync

Usage:

chironsync [options] [--ctl path_to_ctl_fs]

Sync all outdated FSes

OR

chironsync [options] [--sync good_mnt outdated_mnt outdated_ctl_fs]

Sync a specific outdated FS


Options:

--rsyncopts <opts> Flags for rsync.
The defaults are: $RSYNC_OPTS

-1 Only rsync once (during the write block)
-2 Presync before write blocking

--ctl


Hope this helps,

-Martin

------------
Extra:

Here are the patched files:
chirctl.c
chironfs.c
chironfs.h

If you do not want to compile the patches, these executables compiled
on Debian might work for you:
chirctl
chironfs

Martin Fick

unread,
Feb 21, 2009, 9:27:40 PM2/21/09
to chironfs-forum
On Feb 21, 11:15 am, "Davide D'Amico" <davide.dam...@gmail.com> wrote:
> I found this:http://www.gluster.org/docs/index.php/Automatic_File_Replication_(Mir...
>
> that fits my needs.

Be aware that GlusterFS cannot yet heal files while they are open. If
you only have short-lived web server processes opening files, you can
probably live with this.

Davide D'Amico

unread,
Feb 22, 2009, 3:38:22 AM2/22/09
to chironf...@googlegroups.com
2009/2/22 Martin Fick <mogu...@yahoo.com>:
I know this is off-topic, but I think it's interesting for ChironFS
developers: how do you solve the problem of auto-healing and open
files in GlusterFS?
Another question: in GlusterFS, what's the difference between the
cluster/ha and cluster/replicate translators?


Thanks,
d.

Keith Freedman

unread,
Feb 22, 2009, 3:45:55 AM2/22/09
to chironf...@googlegroups.com

I think this is an issue for any replicated distributed filesystem;
ChironFS or Gluster or any other will face a challenge with this.
I think Gluster's problem is that it's bound by the limits of FUSE.
In any case, Gluster replicates on file close, so the best thing, if
you're using Gluster, is to understand its limitations and work within them.

I would not, for example, use it for a shared database.
It seems to do OK with log files, but it depends on how the application
handles them.
Also, Gluster 2.0 seems to have better handling of open files, but I
can't speak to the specifics.

ChironFS has its limitations as well, so it may be that, for
you, a combination will offer the best solution.


Alexandre Fernandes

unread,
Feb 22, 2009, 8:36:43 AM2/22/09
to chironf...@googlegroups.com
Hi Davide,

Sorry, I guess I assumed a somewhat authoritative tone in my
last message. ChironFS's author and principal maintainer is Luis Otávio
Furquim; I only [ and happily ;-) ] happen to be in a position to
help him through these years.

Best regards,
Alexandre