On Feb 15, 2:21 pm, Cydrome Leader <prese...@MUNGEpanix.com> wrote:
Yup, exactly.
>
> Here's the cool part. Veritas can mount anything in site B with no
> problems. It's even aware that they are replicas of original data. I'm not
> sure how it knows, but it does.
That's interesting.
ZFS is *not* aware of replication.
All three of our replicated copies (yes, 3) have the same zfs pool ID.
Seems like, as far as zfs is concerned, "same pool ID == same pool",
no ifs, ands, or buts.
That's okay for us, though.
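If you want to see that for yourself, the pool GUID shows up with the
ordinary zpool commands; a minimal sketch, with 'tank' as a made-up
pool name:

  # each replicated copy reports the same GUID
  zpool get guid tank

  # exported/replicated copies show up under the same pool ID here too
  zpool import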
> It's a journaled filesystem so if there
> was some loss of writes, they roll back to a sane state and no fsck is
> needed. You can't fsck zfs and if it feels the data is corrupt, that's it,
> game over.
Contrariwise, it has other advantages. For example, in my testing I
impolitely yanked out one side of a ZFS mirror, then kept writing to
the side that was still working. Several gigabytes of writes later,
when you power the yanked disk back on, zfs detects that the disk
'should' belong to the pool, knows it was up to date until the moment
it disappeared, and kicks off a resync...
and it resyncs ONLY the data that is out of date.
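You can reproduce roughly the same test politely, without pulling any
cables; a sketch, assuming a mirrored pool 'tank' and a made-up device
name c0t1d0:

  # take one side of the mirror away
  zpool offline tank c0t1d0

  # ...do a pile of writes to the pool...

  # bring the disk back; zfs resilvers only what changed while it was out
  zpool online tank c0t1d0
  zpool status tank    # watch the resilver progress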
> It's possible to get more crazy over at site A and then use plexes so
> veritas itself can mirror data across multiple SANs.
Yeah, but if you have high latency (say, 1000 miles between sites),
you can't just use veritas volume manager any more (well, not if your
app has a low-latency response requirement); you need to use veritas
volume replicator.
Which is why we're doing what we're doing.
> It supports this, and
> you can break and resync these anytime. It has snapshots too, so you can
> mirror a filesystem from a point in time if you want, which makes sense
> for local backups.
What Cindy was saying about pool split makes sense for that same
purpose of "local backups". It would be nice for isolating backup I/O
from production usage, but regular zfs snapshots work pretty well for
backups also.
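Roughly what both options look like, as a sketch (pool and filesystem
names are made up, and 'zpool split' needs a mirrored pool on a
recent-enough zfs version):

  # split one side of a mirrored pool into its own pool, then import it
  # and run backups against it, isolated from production I/O
  zpool split tank tankbackup
  zpool import tankbackup

  # or just take a snapshot and send it somewhere
  zfs snapshot tank/data@backup
  zfs send tank/data@backup > /backup/tank-data.zfs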