I'm in a similar situation, though not yet on 7.0. We originally
deployed two clusters for an archive service, for which replication plus
direct access to a distinct "readonly" fileserver name on the DR cluster
meets the requirements.
New services will require a quick turnaround when moving file access to
the DR location, and the DNS CNAME configuration doesn't work for us.
Possible options being considered:
1. Use a load balancer to direct DNS traffic to the (SmartConnect) DNS server on the active cluster
- Storage admins are delegated the ability to mark LB targets active or inactive
- An automatic LB failover config may work, but may also be prone to transient issues
- Works for CIFS as well as NFS
2. Mount both the primary and the failover on all clients, and use an NFS replica mount or a sync tool to manage the root of the tree
Here's the replica version, which hopefully explains the idea:
Primary (set up directories):
mkdir -p /ifs/wnas/svc1/content   # directory holding content
mkdir /ifs/wnas/svc1/root         # service root directory
cd /ifs/wnas/svc1/root
# make link for primary service
ln -s /svc1-primary/content content
[ sync /ifs/wnas/svc1 to alternate cluster ]
Primary and alternate:
isi nas export --path=/ifs/wnas/svc1/content
isi nas export --path=/ifs/wnas/svc1/root
Client:
mount svc1-primary:/ifs/wnas/svc1 /svc1-primary
mount svc1-alternate:/ifs/wnas/svc1 /svc1-alternate
# replica readonly mount
mount -o ro svc1-primary:/ifs/wnas/svc1/root,svc1-alternate:/ifs/wnas/svc1/root /svc1
# Move service to alternate
[ if primary out of service, make alternate RW locally on isilon ]
cd /ifs/wnas/svc1/root; rm content
# make link for alternate service
ln -s /svc1-alternate/content content
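The move-to-alternate step above can be sketched as a small script. The function just repoints the service-root symlink; the demo below runs it against a throwaway directory tree rather than a real /ifs path, and all names (svc1, /svc1-alternate/content) are the hypothetical ones from the example:

```shell
#!/bin/sh
# Sketch only: repoint the service root symlink at the alternate
# cluster's content mount. Run on the Isilon after the alternate copy
# has been made writable.

flip_to_alternate() {
    svc_root="$1"     # e.g. /ifs/wnas/svc1/root
    target="$2"       # e.g. /svc1-alternate/content
    # subshell so the caller's working directory is untouched
    ( cd "$svc_root" && rm -f content && ln -s "$target" content )
}

# Demo against a temporary tree instead of a live service root:
demo=$(mktemp -d)
mkdir -p "$demo/root"
ln -s /svc1-primary/content "$demo/root/content"
flip_to_alternate "$demo/root" /svc1-alternate/content
readlink "$demo/root/content"    # now shows /svc1-alternate/content
```

Because the rm and ln are two operations, there is a brief window with no 'content' link; clients that have the old symlink result cached may also need a revalidation before they follow the new target.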
The second method seems desirable in that it manages the service by
manipulating filesystem content. However, if the primary goes down and
the alternate's svc1/root is changed, there would be a gap when the
primary is returned to service during which its copy of svc1/root would
be inconsistent. This may be acceptable, since stable clients would not
automatically flip back to the primary.
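To shrink that gap, one option is to compare the root symlink on both copies before re-enabling the primary and flag (or repair) a stale one. A minimal sketch, using the hypothetical mount points from the example:

```shell
#!/bin/sh
# Sketch: detect a stale service-root symlink on a returning primary by
# comparing the 'content' link under each cluster's copy of svc1/root.
# All paths are hypothetical.

check_root_links() {
    primary_root="$1"     # e.g. /svc1-primary/root
    alternate_root="$2"   # e.g. /svc1-alternate/root
    p=$(readlink "$primary_root/content")
    a=$(readlink "$alternate_root/content")
    if [ "$p" = "$a" ]; then
        echo "consistent: $p"
    else
        echo "MISMATCH: primary->$p alternate->$a"
        return 1
    fi
}

# Demo with two throwaway copies of the root:
d=$(mktemp -d)
mkdir -p "$d/p" "$d/a"
ln -s /svc1-alternate/content "$d/p/content"
ln -s /svc1-alternate/content "$d/a/content"
check_root_links "$d/p" "$d/a"    # prints: consistent: /svc1-alternate/content
```

Running this as a gate in whatever procedure returns the primary to service would keep a stale svc1/root from being exposed to clients.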
Questions:
- Would a [separate] NFSv4 server for the top level, using a referral instead of a symlink, be a way to achieve this so clients could use a single mount?
Does Isilon support referrals? Also, how often do clients check for and detect a change of referral definitions?
- Is either of these alternatives clearly best?
- Is there a way to set up NFS exports or a SmartConnect zone so it must be manually activated after an event like a power failure?
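On the last question: this isn't an Isilon feature I know of, but one low-tech pattern for option 1 is to gate activation behind a flag file that an admin must create by hand, so a cluster that comes back after a power failure is not automatically marked active on the load balancer. A sketch, with hypothetical names and paths throughout:

```shell
#!/bin/sh
# Sketch: decide whether a target may be marked active on the LB.
# A target is eligible only if (a) an admin has manually created its
# enable flag after the event AND (b) its health probe passes.

FLAG_DIR=${FLAG_DIR:-/var/run/nas-failover}   # hypothetical flag location

probe() {
    # Placeholder health check; in practice this might be a dig against
    # the cluster's SmartConnect service IP or a test NFS mount.
    true
}

target_eligible() {
    name="$1"     # e.g. svc1-primary
    [ -e "$FLAG_DIR/$name.enabled" ] || { echo "$name: held (no enable flag)"; return 1; }
    probe "$name" || { echo "$name: probe failed"; return 1; }
    echo "$name: eligible"
}
```

The manual step is just "touch the flag file", which fits the delegated-admin model in option 1; removing the file takes the target out of rotation on the next check.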