I did not notice this last night, but SDS is now complaining that all 8
metadevices "need maintenance." They mounted okay, the data is fine,
and I've confirmed that there is nothing physically wrong with any of
the components.
How do I clear the "Needs maintenance" messages from my metadevices
without destroying data?
This is an example of one of the devices. Yes, it's a "one-sided"
mirror that I've set up. That way, if I ever need to migrate to
another type of storage, I can do it simply by adding the new storage
and mirroring onto it.
d16: Mirror
    Submirror 0: d106
      State: Needs maintenance
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 17694080 blocks

d106: Submirror of d16
    State: Needs maintenance
    Invoke: after replacing "Maintenance" components:
                metareplace d16 c3t61d6s0 <new device>
    Size: 17694080 blocks
    Stripe 0:
        Device       Start Block  Dbase  State        Hot Spare
        c3t61d6s0              0  No     Last Erred
As I said, there's nothing wrong with the disk or data, yet SDS is
complaining.
try metastat -i
You could try replacing the device with itself:
metareplace -e d16 c3t61d6s0
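If the slice really is fine, that should put the component back into a resync
and then the Okay state. Something like this would let you watch it happen
(d16 and c3t61d6s0 are taken from your metastat output):

    metareplace -e d16 c3t61d6s0   # re-enable the component in place
    metastat d16                   # state should go Resyncing, then Okay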
JulianJ
I'll allocate some temporary storage to create a two-copy mirror, sync
up, do the metareplace, and voilà!
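Roughly, the plan is this (d206 and c4t0d0s0 are just placeholder names for
the temporary submirror and slice, which needs to be at least as large as the
existing one):

    metainit d206 1 1 c4t0d0s0      # build a concat on the temporary slice
    metattach d16 d206              # attach it as a second submirror; resync starts
    metastat d16                    # wait for the resync to finish
    metareplace -e d16 c3t61d6s0    # re-enable the errored component
    metadetach d16 d206             # detach the temporary submirror again
    metaclear d206                  # and remove it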
JulianJ
Good Luck!
A component may be in one of several states. The Last Erred and the
Maintenance states require action. Always replace components in the
Maintenance state first, followed by a resync and validation of data. After
components requiring maintenance are fixed, validated, and resynced,
components in the Last Erred state should be replaced. To avoid data loss,
it is always best to back up all data before replacing Last Erred devices.
-e Transitions the state of component to the available state and
resyncs the failed component. If the failed component has been hot spare
replaced, the hot spare is placed in the available state and made available
for other hot spare replacements. This command is useful when a component
fails due to human error (for example, accidentally turning off a disk), or
because the component was physically replaced. In this case, the replacement
component must be partitioned to match the disk being replaced before
running the metareplace command.
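If a disk ever does get physically replaced, the usual way to make the
partitioning match is to copy the VTOC over from a disk of the same geometry,
e.g. (c3t60d6 here is just a stand-in for whichever healthy disk you copy
from):

    prtvtoc /dev/rdsk/c3t60d6s2 | fmthard -s - /dev/rdsk/c3t61d6s2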
As this mirror is a single-sided mirror, we have two options:
1. Delete and recreate the mirror (a rough sketch follows below).
2. Replace the errored component with itself.
The second option is the fastest, as only a metareplace -e d16 c3t61d6s0 needs
to be done.
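For completeness, option 1 would look roughly like this (a sketch only,
assuming the filesystem can be unmounted and the slice layout stays exactly
the same; metaclear only removes the metadevice definitions, not the data on
the slice, but back up first anyway):

    umount /export                    # assumed mount point
    metaclear d16                     # remove the mirror definition
    metaclear d106                    # remove the submirror definition
    metainit -f d106 1 1 c3t61d6s0    # recreate the concat on the same slice
    metainit d16 -m d106              # recreate the one-way mirror on top of it
    mount /export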
Is it worth having these mirrors? All you are doing is putting LVM, or
whatever you want to call it, in the way.
The chances are that if you do migrate to new storage you will most likely
also move to larger LUNs, so you would have to grow the filesystem as well,
which adds another step to your migration plan. Bear in mind that growfs
write-locks the file system while it runs.
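For what it is worth, growing a mirrored filesystem after such a migration
would look something like this (the extra slice and the mount point are
assumptions):

    metattach d106 c4t1d0s0              # concatenate the extra space onto the submirror
    growfs -M /export /dev/md/rdsk/d16   # grow the filesystem; it is write-locked while this runs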
JulianJ