[zfs-macos] Might have ruined my array


Anders Wallén

Apr 18, 2013, 4:27:00 AM
to zfs-...@googlegroups.com
Expletives!

I accidentally pulled the USB cord out and lost the array. The Mac Mini force-rebooted itself too.

The array wouldn't automatically remount, and the manual mounting commands didn't work either, so I re-created the array [sudo zpool create array raidz2 [list of disc names here]], which now looks empty.

Did I just erase the whole thing? Or is there anything I can do to restore it?


Any help would be greatly appreciated,

Anders


P.S.
Sooner or later, I will have to move the whole server into a closet. What should I do then to prevent this from happening again?
D.S.

Anders Wallén

Apr 18, 2013, 4:42:02 AM
to zfs-...@googlegroups.com

Fastmail Jason

Apr 18, 2013, 7:57:57 AM
to zfs-...@googlegroups.com
Well, your data is probably still on the disk(s), but you're going to need some tools and skills, and from the sound of things they will have to be provided by someone else.

I'd post all kinds of questions on why you were doing the things you were doing, but other than posterity I don't see the point of the answers.

Never, ever, ever use USB drives. It's been posted so often, on every forum, that this will cause you heartache. Of course, overwriting your pool can be done on any drive structure....

Can everyone also assume you have no backups?

--
Jason Belec
Sent from my iPad
--
 
---
You received this message because you are subscribed to the Google Groups "zfs-macos" group.
To unsubscribe from this group and stop receiving emails from it, send an email to zfs-macos+...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
 
 

David Cantrell

Apr 18, 2013, 8:28:35 AM
to zfs-...@googlegroups.com
On Thu, Apr 18, 2013 at 07:57:57AM -0400, Fastmail Jason wrote:

> Never, ever, ever use USB drives. It's been posted so often, on every forum that this will cause you heartache.

If only there were a cost-effective alternative.

--
David Cantrell | Minister for Arbitrary Justice

On the bright side, if sendmail is tied up routing spam and pointless
uknot posts, it's not waving its arse around saying "root me!"
-- Peter Corlett, in uknot

Fastmail Jason

Apr 18, 2013, 8:48:36 AM
to zfs-...@googlegroups.com
Does losing your data due to known issues with a technology justify the cost savings of purchasing said technology? If so, you're good to go.

In this instance it may have simply been a compounding of mistakes/errors, but it would have been easily avoidable with a proper action plan in place.

I represent a lot of clients who never thought about the real cost of data loss until it was far too late. For some it meant they had violated the law, which of course compounded the cost. Some lost valuable family memories, not unlike precious moments lost in a fire; they are gone forever. Some use a service without knowing what that really means, and the fact is that no matter how bad the service feels, if data is lost it is still gone. Some bought one-push-backup nonsense; some have various things they tried over the years and never moved old data to new, so it's still lost, but hey, it was cheap!

Just saying, if you don't care, no worries. If you do care take a little time to learn what to do when something goes wrong and plan for it. The one thing I can guarantee is that something will go wrong. And it sucks. And it will be at the worst possible time.


--
Jason Belec
Sent from my iPad

Gregg Wonderly

Apr 18, 2013, 8:49:28 AM
to zfs-...@googlegroups.com
I guess in the end, it depends on how valuable your data really is, to you. Nothing is free any longer, and in the end, there is an old saying that seems to echo in the hallways in cases like this. The saying is "You have to pay to play". Somehow, you will eventually pay. Either in lost data, or time to recover your data from a haphazardly assembled array that experiences a failure such as this. I choose to use ZFS on Illumos just because I have equipment that I bought to do that. It's not MacOS-X, but my file storage and backups are sitting safely where I can recover what I need. I have all of my data in 5 different places (the computers, and on two different servers each with mirrored pairs of drives). Yes, it costs to do that, but that's where the value is for me. It's worth that expense.

Gregg Wonderly

Peter Lai

Apr 18, 2013, 9:50:21 AM
to zfs-...@googlegroups.com
On Thu, Apr 18, 2013 at 6:57 AM, Fastmail Jason
<jason...@belecmartin.com> wrote:
> Well your data is probably still on the disk(s), but your going to need some
> tools and skills and from the sound of things it will have to be provided by
> another.
>
> I'd post all kinds of questions on why you were doing the things you were
> doing, but other than posterity I don't see the point of the answers.
>

But posterity is where you learn some of the lessons. In this case,
it's along the lines "why did you do what you did" which should teach
the lesson of "have a plan" and then "when things go to hell, don't
panic": http://macnugget.org/stuff/unix-horror-story.txt

> Never, ever, ever use USB drives. It's been posted so often, on every forum
> that this will cause you heartache. Of course over writing your pool can be
> done on any drive structure....
>

Of course, that's the other lesson to learn here.

> Can everyone also assume you have no backups?
>

At least keep a backup of zpool.cache!
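For anyone following along, a minimal sketch of what that backup might look like. The /etc/zfs/zpool.cache path and the backup directory are assumptions; verify the cache location on your own install:

```shell
# Keep a dated copy of the pool cache somewhere *off* the pool itself,
# so it survives if the pool does not.
mkdir -p ~/zfs-backups
sudo cp /etc/zfs/zpool.cache ~/zfs-backups/zpool.cache.$(date +%Y%m%d)
```

Even without the cache, 'zpool import' can rediscover pools by scanning devices; the cache mainly makes automatic import at boot work.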

Peter Lai

Apr 18, 2013, 9:50:47 AM
to zfs-...@googlegroups.com
esata/port-multiplier enclosures are not cost-effective?

David Cantrell

Apr 18, 2013, 10:15:42 AM
to zfs-...@googlegroups.com
On Thu, Apr 18, 2013 at 08:50:47AM -0500, Peter Lai wrote:
> On Thu, Apr 18, 2013 at 7:28 AM, David Cantrell <da...@cantrell.org.uk> wrote:
> > On Thu, Apr 18, 2013 at 07:57:57AM -0400, Fastmail Jason wrote:
> >> Never, ever, ever use USB drives. It's been posted so often, on every forum that this will cause you heartache.
> > If only there were a cost-effective alternative.
> esata/port-multiplier enclosures are not cost-effective?

Given that this is an OS X mailing list, no, they're not, because no
Macs come with e-SATA.

I've been using things like this <http://www.amazon.co.uk/dp/B004I6OCRO>
for a few years now (with UFS) and the documented problem of USB drives
spinning down doesn't seem to affect them - at least, my data has always
been available instantly when I wanted access to it. I'll report back
once I get round to testing with zfs.

However, given that every other filesystem seems to cope just fine with
such drives, I don't really understand why it's a problem for zfs.

--
David Cantrell | London Perl Mongers Deputy Chief Heretic

fdisk format reinstall, doo-dah, doo-dah;
fdisk format reinstall, it's the Windows way

Fastmail Jason

Apr 18, 2013, 10:33:27 AM
to zfs-...@googlegroups.com
USB is not safe. Period. Using ZFS or otherwise, the failure rates are very high. For anything non-critical it's just fine.

It has not been hard to use eSATA on a Mac since around 2007. If you're into ZFS, then you're already into tweaking/modding your system. Add the connector, or extend the internal port to external, or swap in the appropriate card to handle multiple drives.

FireWire hubs exist.

New Macs have Thunderbolt which opens you up to every possible connection.

Use USB and accept that you need current backups, always.

--
Jason Belec
Sent from my iPad

Daniel Becker

Apr 18, 2013, 10:44:41 AM
to zfs-...@googlegroups.com, zfs-...@googlegroups.com
On Apr 18, 2013, at 6:50 AM, Peter Lai <cow...@gmail.com> wrote:

>> Never, ever, ever use USB drives. It's been posted so often, on every forum
>> that this will cause you heartache. Of course over writing your pool can be
>> done on any drive structure....
>
> Of course, that's the other lesson to learn here.

How so? He would have lost his data just the same if he'd yanked a Thunderbolt or eSATA cable instead...

Daniel Becker

Apr 18, 2013, 10:48:51 AM
to zfs-...@googlegroups.com, zfs-...@googlegroups.com
On Apr 18, 2013, at 7:33 AM, Fastmail Jason <jason...@belecmartin.com> wrote:

> Use USB and accept that you need current backups, always.

You should have those no matter what interface tech you use.

Fastmail Jason

Apr 18, 2013, 10:51:28 AM
to zfs-...@googlegroups.com
Actually, not so. I have more than enough test machines and devices from various manufacturers around the mad-science lab to be able to say that very few have this issue, save USB.

His initial issue probably could have been resolved quite easily by transferring the drives to a FireWire enclosure or something else after a full shutdown and a clean restart with nothing connected. I've been able to get quite a few pools to return this way, as OS X, trying to be helpful, sometimes holds onto bad information longer than Windows does.

The overwriting of the pool would have been bad on any system, but not unrecoverable, just painful.
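Roughly, the non-destructive sequence to try after such a crash, before anything else (the pool name is illustrative):

```shell
# After a full shutdown and clean restart, reattach the drives, then:
sudo zpool import            # scan devices and list any importable pools
sudo zpool import -f array   # force-import "array" if it was never cleanly exported
zpool status array           # check device health before touching anything else
```

The key point is that import only reads existing labels; create is the destructive one.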


--
Jason Belec
Sent from my iPad

Fastmail Jason

Apr 18, 2013, 10:54:42 AM
to zfs-...@googlegroups.com
Yeah, but for some reason everyone thinks they can save a few bucks and avoid the inevitable just long enough.


--
Jason Belec
Sent from my iPad

Daniel Becker

Apr 18, 2013, 11:00:14 AM
to zfs-...@googlegroups.com, zfs-...@googlegroups.com
On Apr 18, 2013, at 1:27 AM, "Anders Wallén" <walle...@gmail.com> wrote:

> The array wouldn't automatically remount, and the manual mounting commands didn't work either,

How so? What were the symptoms?

> so I re-created the array [sudo zpool create array raidz2 [list of disc names here]], which now looks empty.

You just created a new pool, so yeah, of course it's empty.

> Did I just erase the whole thing? Or is there anything I can do to restore it?

I don't think there's an easy way to do this, as all your uberblocks will have been nuked in creating the new pool.

> Sooner or later, I will have to move the whole server into a closet. What should I do then to prevent this from happening again?

If it just sits in a closet anyway, for starters, I would think about using Illumos or FreeBSD or anything else with a mature, up-to-date ZFS implementation that doesn't just panic when a pool loses more devices than its redundancy allows for.

Additionally, if you value your data, make sure you know what you're doing. ZFS is a lot more complex than traditional RAID arrays, so don't just expect it to have the same behavior.

Daniel Becker

Apr 18, 2013, 11:17:24 AM
to zfs-...@googlegroups.com, zfs-...@googlegroups.com
As far as I understand, the initial panic was expected behavior for MacZFS (all devices lost -> pool degraded beyond redundancy). This would have happened regardless of interface, and obviously a clean shutdown wasn't in the cards any more at that point.

I can see how a cheap USB case might have a higher chance of having some uncommitted older writes still in cache and thus lost, but I'm not convinced that would outright kill the pool. The OP never said what exactly the problem was in trying to re-mount and/or if he tried re-importing at all, but based on how he proceeded after, I would not totally rule out user error either.

I'm curious how you'd try to reconstruct a pool after another pool was created on top of it; I would expect that to cause all uberblocks to be overwritten, so you no longer have a root pointer to your metadata tree. I don't see how you could realistically recover from that.

Fastmail Jason

Apr 18, 2013, 11:30:01 AM
to zfs-...@googlegroups.com
Well, I'm not guaranteeing it can be reconstructed, but if it's the most valuable thing under the sun, one could dig up posts by Max Bruning on ZFS. He has proven to be an amazing resource on how data is stored on disk and how it can be retrieved. I'm going to give this a try myself later, as it is worth exploring. However, as stated by several people, backups are the joy of life.


--
Jason Belec
Sent from my iPad

Gregg Wonderly

Apr 18, 2013, 11:47:49 AM
to zfs-...@googlegroups.com

On Apr 18, 2013, at 8:50 AM, Peter Lai <cow...@gmail.com> wrote:

> On Thu, Apr 18, 2013 at 6:57 AM, Fastmail Jason
> <jason...@belecmartin.com> wrote:
>> Well your data is probably still on the disk(s), but your going to need some
>> tools and skills and from the sound of things it will have to be provided by
>> another.
>>
>> I'd post all kinds of questions on why you were doing the things you were
>> doing, but other than posterity I don't see the point of the answers.
>>
>
> But posterity is where you learn some of the lessons. In this case,
> it's along the lines "why did you do what you did" which should teach
> the lesson of "have a plan" and then "when things go to hell, don't
> panic": http://macnugget.org/stuff/unix-horror-story.txt

I've edited an executable with 'dd' to fix a bug at a remote site where I had no compiler…

There are lots of ways to skin a cat, you just have to recognize which tool will work.

Gregg

Developer

Apr 18, 2013, 12:22:13 PM
to zfs-...@googlegroups.com
Yes, I have done this as well in the past, on more than ZFS. Gregg is right, lots of tools.

Daniel Bethe

Apr 18, 2013, 1:57:10 PM
to zfs-...@googlegroups.com
> The array wouldn't automatically remount, and the manual mounting commands didn't work either,

That could happen sometimes in precarious situations (for instance, if your /etc/zfs cache, which stores the locations of your known arrays, was on HFS+ and got corrupted in the crash), so there is a 'zpool import -d /dev' command which will manually scan all device files in that directory for a valid zpool. That's always worked for me. The only other lasting problem I've heard of is when a zpool imported but with one spare drive stuck in a degraded status. Doing some other commands listed in the FAQ, or in the very rare case, rebooting to another OS with a newer ZFS and doing a scrub, would solve that.
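Concretely, that rescue sequence is something like this (the pool name is illustrative):

```shell
# Ignore any stale /etc/zfs cache and scan every node under /dev
# for valid ZFS labels:
sudo zpool import -d /dev          # list whatever pools the scan finds
sudo zpool import -d /dev array    # then import the pool by name
```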
> If it just sits in a closet anyway, for starters, I would think about using Illumos or FreeBSD or anything else with a mature, up-to-date ZFS implementation that doesn't just panic when a pool loses more devices than its redundancy allows for.

I'm sorry to say it, sir, but that's totally unhelpful. Our mature, behind-the-date ZFS implementation performing a panic is hardly a problem in that particular case. Those other implementations allow booting from ZFS; what are they going to do if you unplugged their root device? What more are they going to do with the cheapest, most unreliable storage enclosure, which is practically trying to lie to them, or if the user sabotages the array? Besides, I wouldn't recommend either of those OSes for non-expert users, especially on any old Mac, without prior research. For someone else or for another situation, maybe. This conclusion is wildly out of scope of the situation.


> Additionally, if you value your data, make sure you know what you're doing. ZFS is a lot more complex than traditional RAID arrays, so don't just expect it to have the same behavior.

I would! If you yank out any arrayed hard drives, you should expect the system to crash, and you should be diligently paranoid nonetheless, whether it does crash or not. However, with another array technology, I wouldn't always expect to recover its data. Because ZFS is generally far simpler. ^_^

Daniel Bethe

Apr 18, 2013, 2:50:16 PM
to zfs-...@googlegroups.com
>>> If only there were a cost-effective alternative.
>> esata/port-multiplier enclosures are not cost-effective?
>
> Given that this is an OS X mailing list, no, they're not, because no
> Macs come with e-SATA.
FYI, eSATA cards can cost something like $20. Just be sure to get the best-supported Silicon Image one, because they conceived the cool port-multiplier technologies (FIS). Obviously, make sure that your enclosure supports that too; a several-bay enclosure costs about $100. The prices have dropped in the last year or two.

Daniel Becker

Apr 18, 2013, 3:36:40 PM
to zfs-...@googlegroups.com, zfs-...@googlegroups.com
On Apr 18, 2013, at 10:57 AM, Daniel Bethe <d...@smuckola.org> wrote:

>> If it just sits in a closet anyway, for starters, I would think about using Illumos or FreeBSD or anything else with a mature, up-to-date ZFS implementation that doesn't just panic when a pool loses more devices than its redundancy allows for.
> I'm sorry to say it, sir, but that's totally unhelpful. Our mature, behind-the-date, ZFS implementation performing a panic is hardly a problem in that particular case.

It most certainly is: If the pool had just gone into "unavailable" state instead of panicking the machine, it would not have disappeared in the first place, and thus not prompted the OP to make matters worse by creating a new pool from the disks.

> Those other implementations allow booting from ZFS; what are they going to do if you unplugged their root device?

They support it, but obviously nobody forces you to put root on ZFS (and in fact it takes a good bit of extra work to set it up that way), so I'm not sure how that's relevant. Clearly, the OP did have a separate boot device.

> What more are they going to do with the cheapest, most unreliable storage enclosure which is practically trying to lie to them or if the user sabotages the array?

Not panic just because an external enclosure gets disconnected, even if no critical system files are located on that array. Allow for graceful recovery instead.

> Besides, I wouldn't recommend either of those OSes for non-expert users, especially on any old Mac without prior research. For someone else or for another situation, maybe. This conclusion is wildly out of scope of the situation.

Fair enough; I would argue that unless you know what you're doing and you're comfortable enough to work on the command line beyond just blindly following howtos, you shouldn't be using ZFS on any platform; the problem at hand pretty clearly illustrates why. I think we've disagreed on that before, though.

>> Additionally, if you value your data, make sure you know what you're doing. ZFS is a lot more complex than traditional RAID arrays, so don't just expect it to have the same behavior.
> I would! If you yank out any arrayed hard drives, a person should expect the system to crash and then one should be diligently paranoid nonetheless, whether it does crash or not. However, with another array technology, I wouldn't always expect to recover its data. Because ZFS is generally far simpler. ^_^

My point was, on a RAID array or LVM, it is perfectly reasonable to assume that just reassembling the array does not affect any data already on the drives. However, because ZFS combines LVM and FS functionality into one, 'zpool create' is not just equivalent to reassembling an array (as it appears the OP was expecting), but does in fact affect the on-disk data (specifically, it initializes certain critical metadata).
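To make the contrast concrete, a sketch (device and pool names are illustrative; mdadm shown only as the traditional-RAID counterpart):

```shell
# Traditional Linux software RAID: --assemble re-reads existing
# metadata and is non-destructive.
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1

# ZFS: the non-destructive analogue is *import* ...
sudo zpool import array

# ... whereas *create* writes fresh labels and uberblocks,
# destroying whatever pool was on those disks before.
sudo zpool create array raidz2 disk1 disk2 disk3 disk4
```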

Daniel Becker

Apr 18, 2013, 3:48:09 PM
to zfs-...@googlegroups.com, zfs-...@googlegroups.com
On that note, does anybody have any suggestions for how to best hook up a 4-bay eSATA enclosure to a 2011 iMac without breaking the bank?
--

Fastmail Jason

Apr 18, 2013, 4:40:25 PM
to zfs-...@googlegroups.com
A Sonnet Tech PCIe-to-Thunderbolt chassis is between $140 and $170; add the card of your choice and the enclosure of your choice, and all your legacy stuff is now current. Pretty cost effective. Of course, their other products allow for bigger enterprise cards and large array boxes, like 12 to 24 disks or even more.



--
Jason Belec
Sent from my iPad

Daniel Becker

Apr 18, 2013, 8:00:55 PM
to zfs-...@googlegroups.com, zfs-...@googlegroups.com
I suppose I should have been more specific as far as breaking the bank goes. :) This is for home use, and just for that one 4-bay array, so $200+ seems a bit overkill-ish. I'm also more concerned with space than with performance. Are there any PMP-capable FireWire solutions out there?

Bjoern Kahl

Apr 18, 2013, 8:53:57 PM
to zfs-...@googlegroups.com
On 18.04.13 10:42, Anders Wallén wrote:
> Explicits!
>
> I accidentally pulled the USB cord out and lost the array. The Mac Mini force-rebooted itself too.

Expected and documented behavior. No reason for the operator to panic too.


> The array wouldn't automatically remount, and the manual mounting commands didn't work either,

Depending on the storage technology used, this is either expected to happen (i.e. with any kind of USB enclosure) or possible in case of bad luck (pretty much any other technology).

The reason is that such an event can (and with USB usually will) result in out-of-order writes and/or partial writes. As such, it destroys the last ueberblock and possibly other metadata.


> so I re-created the array [sudo zpool create array raidz2 [list of disc names here]], which now looks empty.

Bad idea. You voluntarily destroyed the pool. Why are you surprised it's empty now?

OT:
There are two things I will *never* understand people doing:

- Why on earth are people *first* yanking a disk (or worse: zeroing
it) and *then* issuing a "zpool replace" against the now-missing or
totally blank disk?!?

Seen a couple of times on various ZFS mailing lists. That is a sure
way to destroy a pool.

- Why on earth are people carelessly issuing "zpool create" in various
circumstances? (No, you are not the first to do that, Anders!)

/OT

> Did I just erase the whole thing? Or is there anything I can do to restore it?

Yes, you erased it.

Recovering from this stage is possible, but it is an expert thing. You
need to find someone doing professional data recovery from ZFS.
Basically, the job is to scan the disk block-by-block for fragments of
the MOS and try to reassemble a recent version of the MOS. Once found,
construct a matching ueberblock, and do a modified "zfs send" of the
whole pool to a new one (i.e. never-ever checkpoint the send position
to disk, because we don't really know which blocks are free and as
such safe to write -- alternatively, try to also reconstruct the
spacemaps (zdb can do that) to be able to write to the pool).


> Any help would be greatly appreciated,

Sorry, but any help from this point on will probably be expensive.


Best regards

Björn

--
| Bjoern Kahl +++ Siegburg +++ Germany |
| "googlelogin@-my-domain-" +++ www.bjoern-kahl.de |
| Languages: German, English, Ancient Latin (a bit :-)) |

Fastmail Jason

Apr 18, 2013, 9:03:38 PM
to zfs-...@googlegroups.com
There is only one FireWire hub, and the cost is about the same, but FireWire generates heat.

I'd get the Sonnet box, a good PCI card, and a multi-port SATA hub from American. Then power it and up to 5 drives with a computer power supply. That can grow quite large with the addition of more American hubs, drives, and power. You're looking at several hundred, but it works and can make use of all your legacy tech with the new. And of course it is future-proof.



--
Jason Belec
Sent from my iPad

David Cantrell

Apr 23, 2013, 7:26:19 AM
to zfs-...@googlegroups.com
AFAIK Apple do not currently sell any machines that have anywhere to put
such a card - the Mac Pro is "currently unavailable" according to their
website. And even if the Mac Pro becomes available again, it's still
not cost-effective. The cost of a Mac Pro is high enough to make the
extra hassle of a non-Apple OS and non-Apple hardware a price worth
paying, at least for me.

--
David Cantrell | Cake Smuggler Extraordinaire

Alex Bowden

Apr 23, 2013, 7:34:26 AM
to zfs-...@googlegroups.com

The Mac Pro is available anywhere except the EU.

I find them extremely cost effective.

Jason

Apr 23, 2013, 12:44:21 PM
to zfs-...@googlegroups.com
Good man Alex.


Jason
Sent from my iPhone