
LSI SAS2008 mps driver preferred firmware version


Kai Gallasch

Nov 12, 2015, 4:12:21 PM

Hi.

I'm currently building a new ZFS-based FreeBSD 10.2 server with an
LSI SAS9211-8i SAS/SATA HBA.

Is there a preferred or recommended firmware version for Fusion-MPT
SAS-2 2008 chipset based LSI cards like the SAS9211-8i? mps(4) does not
give any information about this.

The current versions on my SAS9211-8i are:

BIOS: v7.05.05.00 (2010.05.19)
FW: 5.00.17.00-IR
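
For reference, the flashed BIOS and firmware versions can also be read
back from the host with Avago/LSI's sas2flash utility; a minimal sketch,
assuming the utility is installed and the card is controller 0:

sas2flash -c 0 -list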


IR vs. IT firmware:

Are there any advantages to replacing the -IR (Integrated RAID) firmware
on the LSI controller with an -IT (Initiator Target, i.e. plain HBA)
version, if the RAID functionality of the HBA is not used at all?

There have been claims that running the -IR version in a ZFS JBOD setup
results in a small performance penalty compared to -IT, and that a
controller running the -IR firmware could potentially damage ZFS data on
a disk by putting RAID metadata somewhere on the drive, even when the
RAID features of the card are not used!

I'd appreciate it if someone could shed some light on this.

Regards,
Kai.

--
PGP-KeyID = 0x70654D7C4FB1F588
One day a lemming will fly..




Royce Williams

Nov 12, 2015, 5:21:32 PM
Firmware should match driver, e.g.:

mps0: Firmware: 19.00.00.00, Driver: 19.00.00.00-fbs
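
A quick way to check that pairing on a running system is to grep the
boot messages; a small sketch (the mps unit number may differ):

dmesg | grep -i 'mps0: firmware'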


Some of this may help -- not yet updated for 10.2, but may still be useful:

http://roycebits.blogspot.com/2015/01/freebsd-lsi-sas9211-8i-hba-firmware.html

Royce

Stephen Mcconnell via freebsd-stable

Nov 12, 2015, 5:45:12 PM
> -----Original Message-----
> From: owner-fre...@freebsd.org [mailto:owner-freebsd-
> sc...@freebsd.org] On Behalf Of Royce Williams
> Sent: Thursday, November 12, 2015 3:21 PM
> To: Kai Gallasch
> Cc: freebs...@freebsd.org; freebsd-stable
> Subject: Re: LSI SAS2008 mps driver preferred firmware version
>
> Firmware should match driver, e.g.:
>
> mps0: Firmware: 19.00.00.00, Driver: 19.00.00.00-fbs

I've never heard of any problems when these are mismatched, so I'm not
sure why FreeNAS would complain. Anyway, you should use the latest of
both, in my opinion.

The latest FW on the avagotech website is 20.00.04.00. I have heard that
some FreeBSD users have had some problems with the PH19 FW.

Steve McConnell

Stephen Mcconnell via freebsd-stable

Nov 12, 2015, 6:28:09 PM
Also, I asked someone who works on the FW about these IR concerns. He
says the only reason for a performance difference is that the IR FW is a
bit larger, so the command queue depth is smaller due to the amount of
resources available; a slight performance degradation is therefore
possible in some cases. Other than that, once the FW determines that
there are no IR volumes, it acts just like IT.

And there is no data corruption issue for ZFS disks. If there were, that
would be bad and a high-priority defect would need to be filed :) If
there are no IR volumes, the FW works just like IT, so there is no reason
for it to write metadata to a non-IR disk. Even if there were a separate
IR volume, the ZFS disk would not be written with metadata because it's
not part of an IR volume.

Steve

Kai Gallasch

Nov 14, 2015, 7:18:52 AM
On 12.11.2015 23:20 Royce Williams wrote:
> Firmware should match driver, e.g.:
>
> mps0: Firmware: 19.00.00.00, Driver: 19.00.00.00-fbs
>
>
> Some of this may help -- not yet updated for 10.2, but may still be useful:
>
> http://roycebits.blogspot.com/2015/01/freebsd-lsi-sas9211-8i-hba-firmware.html

Thanks! Lots of information about reflashing the 9211-8i.
So I upgraded the controller's old firmware from

mps0: Firmware: 05.00.17.00, Driver: 20.00.00.00-fbsd

to

mps0: Firmware: 20.00.04.00, Driver: 20.00.00.00-fbsd

(FreeBSD 10.2)

As I understand it, firmware 20.00.00.00 was pulled by Avago and
replaced with the fixed version 20.00.04.00.
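
For anyone repeating this: the reflash itself is usually done with
Avago's sas2flash utility from the firmware package. A hedged sketch;
the file names below are the usual ones for a 9211-8i P20 package, but
they should be taken from the actual download, not from here:

sas2flash -c 0 -list
sas2flash -o -f 2118it.bin -b mptsas2.rom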

I will give feedback if I notice any problems with this FW version.

As a side note: Flashing the 9211-8i to the new firmware version changed
the way FreeBSD orders the disk devices on this server:

With the old firmware it looked like this:

root@:~ # camcontrol devlist
<HITACHI HUS156030VLS600 A760> at scbus0 target 10 lun 0 (pass0,da0)
<HITACHI HUS156030VLS600 A5D0> at scbus0 target 11 lun 0 (pass1,da1)
<ATA INTEL SSDSC2BA10 0270> at scbus0 target 12 lun 0 (pass2,da2)
<ATA INTEL SSDSC2BA10 0270> at scbus0 target 13 lun 0 (pass3,da3)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 14 lun 0 (pass4,da4)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 15 lun 0 (pass5,da5)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 16 lun 0 (pass6,da6)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 17 lun 0 (pass7,da7)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 18 lun 0 (pass8,da8)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 19 lun 0 (pass9,da9)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 20 lun 0 (pass10,da10)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 21 lun 0 (pass11,da11)
<SUN HYDE12 0341> at scbus0 target 22 lun 0 (pass12,ses0)
<AHCI SGPIO Enclosure 1.00 0001> at scbus7 target 0 lun 0 (pass13,ses1)

The order matches the order of the disks in the drive bays
(da0 = bay 1, da1 = bay 2, ...).


With the new firmware it now looks like this:

<WD WD2001FYYG-01SL3 VR08> at scbus0 target 8 lun 0 (pass0,da0)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 9 lun 0 (pass1,da1)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 10 lun 0 (pass2,da2)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 11 lun 0 (pass3,da3)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 12 lun 0 (pass4,da4)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 13 lun 0 (pass5,da5)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 14 lun 0 (pass6,da6)
<WD WD2001FYYG-01SL3 VR08> at scbus0 target 15 lun 0 (pass7,da7)
<ATA INTEL SSDSC2BA10 0270> at scbus0 target 16 lun 0 (pass8,da8)
<ATA INTEL SSDSC2BA10 0270> at scbus0 target 17 lun 0 (pass9,da9)
<HITACHI HUS156030VLS600 A5D0> at scbus0 target 18 lun 0 (pass10,da10)
<HITACHI HUS156030VLS600 A760> at scbus0 target 19 lun 0 (pass11,da11)
<SUN HYDE12 0341> at scbus0 target 20 lun 0 (pass12,ses0)
<AHCI SGPIO Enclosure 1.00 0001> at scbus7 target 0 lun 0 (pass13,ses1)

So now the drive sitting in the last drive bay is seen as da0 and the
drive in the first drive bay as da11.

But: In the controller BIOS the scan order of the drives did not change
at all with the new firmware! So the change is only in the way FreeBSD
sees the drives.

My explanation for this change in drive ordering is that my 9211-8i is
a SUN-branded one (SGX-SAS6-INT-Z) and the server is a SUN server. So
maybe the original firmware contained some adaptations for this server
that are missing in the new firmware.

Can the way FreeBSD orders scanned SAS drives be changed? If not, no
problem, as I use partition labels for my ZFS pools and the disks are
also physically labelled on the server.

Gary Palmer

Nov 14, 2015, 9:31:32 AM
You can do things in /boot/loader.conf to hard-code bus and drive
assignments.

e.g.

hint.da.0.at="scbus0"
hint.da.0.target="19"
hint.da.0.unit="0"
hint.da.1.at="scbus0"
hint.da.1.target="18"
hint.da.1.unit="0"

See scsi(4) or cam(4) for more hints.
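
For the hints to be meaningful across reboots, the SCSI bus itself is
usually wired down to the controller as well; a sketch extending the
example above (mps0 assumed to be the HBA):

hint.scbus.0.at="mps0"
hint.da.0.at="scbus0"
hint.da.0.target="19"
hint.da.0.unit="0"

After a reboot, camcontrol devlist should show da0 at the wired target.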

You're probably better off using GPT labels though, as they will
survive any future disk order changes. The fact that the target numbers
changed means that loader.conf changes will fix the current issue
but may not work properly after any future firmware updates.
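
Creating such a label is a one-time step per disk; a minimal sketch
(the label name bay1 is just an example):

gpart create -s gpt da0
gpart add -t freebsd-zfs -a 1m -l bay1 da0

The partition then appears as /dev/gpt/bay1 no matter which da number
the disk gets.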

Gary

Stephen Mcconnell via freebsd-stable

Nov 14, 2015, 12:48:55 PM
> -----Original Message-----
> From: owner-fre...@freebsd.org [mailto:owner-freebsd-
> sc...@freebsd.org] On Behalf Of Gary Palmer
> Sent: Saturday, November 14, 2015 7:31 AM
> To: Kai Gallasch
> Cc: freebs...@freebsd.org; Royce Williams; freebsd-stable
> Subject: Re: LSI SAS2008 mps driver preferred firmware version
>
The driver and card have a way of keeping the order of disks persistent
across reboots. The reason your drive order changed is probably that
flashing the new firmware erased the NVRAM that stores this information
on the card.

You can set your card up for either disk-persistent mapping or
enclosure/slot mapping, or you can turn mapping off altogether. When you
boot up the first time, disks are placed in the mapping table on the card
as they are discovered and then kept in that order forever, until the
data is erased or mapping is turned off. So I would say it's possible
that you do not have mapping turned on, or that the new firmware changed
this setting from disk persistence to enclosure/slot persistence or vice
versa, or something like that. Maybe too much information, but that's
probably what happened.


Slawa Olhovchenkov

Nov 14, 2015, 3:27:28 PM
On Sat, Nov 14, 2015 at 01:18:14PM +0100, Kai Gallasch wrote:

> So now the drive stuck in the last drive bay is seen as da0 and the
> drive in the first drive bay as da11
>
> But: In the controller BIOS the scan order of the drives did not change
> at all with the new firmware! So the change is only in the way FreeBSD
> sees the drives.

For ZFS this does not matter.

Borja Marcos

Nov 16, 2015, 4:10:17 AM

On Nov 14, 2015, at 3:31 PM, Gary Palmer wrote:

> You can do things in /boot/loader.conf to hard-code bus and drive
> assignments.
>
> e.g.
>
> hint.da.0.at="scbus0"
> hint.da.0.target="19"
> hint.da.0.unit="0"
> hint.da.1.at="scbus0"
> hint.da.1.target="18"
> hint.da.1.unit="0"

Beware, the target number assignment is not predictable. There's no
guarantee, especially if you replace a disk.

Borja.

Kevin Oberman

Nov 16, 2015, 2:36:47 PM
On Mon, Nov 16, 2015 at 1:00 AM, Borja Marcos <bor...@sarenet.es> wrote:

>
> On Nov 14, 2015, at 3:31 PM, Gary Palmer wrote:
>
> > You can do things in /boot/loader.conf to hard-code bus and drive
> > assignments.
> >
> > e.g.
> >
> > hint.da.0.at="scbus0"
> > hint.da.0.target="19"
> > hint.da.0.unit="0"
> > hint.da.1.at="scbus0"
> > hint.da.1.target="18"
> > hint.da.1.unit="0"
>
> Beware, the target number assignment is not predictable. There's no
> guarantee especially if you replace
> a disk.
>
> Borja.
>

As already mentioned, unless you are using zfs, use gpart to label your
file systems/disks. Then use /dev/gpt/LABEL as the mount device in fstab.
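
A sketch of such an fstab entry (label and mount point hypothetical):

/dev/gpt/data0   /data   ufs   rw   2   2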
--
Kevin Oberman, Part time kid herder and retired Network Engineer
E-mail: rkob...@gmail.com
PGP Fingerprint: D03FB98AFA78E3B78C1694B318AB39EF1B055683

Freddie Cash

Nov 16, 2015, 2:40:37 PM
On Mon, Nov 16, 2015 at 11:36 AM, Kevin Oberman <rkob...@gmail.com> wrote:

> On Mon, Nov 16, 2015 at 1:00 AM, Borja Marcos <bor...@sarenet.es> wrote:
>
> >
> > On Nov 14, 2015, at 3:31 PM, Gary Palmer wrote:
> >
> > > You can do things in /boot/loader.conf to hard-code bus and drive
> > > assignments.
> > >
> > > e.g.
> > >
> > > hint.da.0.at="scbus0"
> > > hint.da.0.target="19"
> > > hint.da.0.unit="0"
> > > hint.da.1.at="scbus0"
> > > hint.da.1.target="18"
> > > hint.da.1.unit="0"
> >
> > Beware, the target number assignment is not predictable. There's no
> > guarantee especially if you replace
> > a disk.
> > Borja.
> >
>
> As already mentioned, unless you are using zfs, use gpart to label your
> file systems/disks. Then use /dev/gpt/LABEL as the mount device in fstab.
>

Even if you are using ZFS, labelling the drives with the location of the
disk in the system (enclosure, column, row, whatever) makes things so much
easier to work with when there are disk-related issues.

Just create a single partition that covers the whole disk, label it, and
use the label to create the vdevs in the pool.
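
A sketch of that workflow for one mirrored pair (pool and label names
hypothetical; the labelling step itself is shown earlier in the thread):

zpool create tank mirror gpt/enc0a1 gpt/enc0b1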

--
Freddie Cash
fjw...@gmail.com

Slawa Olhovchenkov

Nov 16, 2015, 3:58:04 PM
On Mon, Nov 16, 2015 at 11:40:12AM -0800, Freddie Cash wrote:

> On Mon, Nov 16, 2015 at 11:36 AM, Kevin Oberman <rkob...@gmail.com> wrote:
>
> > On Mon, Nov 16, 2015 at 1:00 AM, Borja Marcos <bor...@sarenet.es> wrote:
> >
> > >
> > > On Nov 14, 2015, at 3:31 PM, Gary Palmer wrote:
> > >
> > > > You can do things in /boot/loader.conf to hard-code bus and drive
> > > > assignments.
> > > >
> > > > e.g.
> > > >
> > > > hint.da.0.at="scbus0"
> > > > hint.da.0.target="19"
> > > > hint.da.0.unit="0"
> > > > hint.da.1.at="scbus0"
> > > > hint.da.1.target="18"
> > > > hint.da.1.unit="0"
> > >
> > > Beware, the target number assignment is not predictable. There's no
> > > guarantee especially if you replace
> > > a disk.
> > > Borja.
> > >
> >
> > As already mentioned, unless you are using zfs, use gpart to label your
> > file systems/disks. Then use /dev/gpt/LABEL as the mount device in fstab.
> >
>
> Even if you are using ZFS, labelling the drives with the location of the
> disk in the system (enclosure, column, row, whatever) makes things so
> much easier to work with when there are disk-related issues.
>
> Just create a single partition that covers the whole disk, label it, and
> use the label to create the vdevs in the pool.
Bad idea. A disk replaced into a different bay doesn't get relabelled
automatically. Another issue: when disks are placed into bays by remote
hands at the data center, I really don't know how the disks are
distributed across the bays. The best way to identify a disk is to use
enclosure services.

I have many sites with ZFS on whole disks and some sites with ZFS on
GPT partitions. ZFS on GPT is heavier to administer.

Freddie Cash

Nov 16, 2015, 4:20:25 PM
On Mon, Nov 16, 2015 at 12:57 PM, Slawa Olhovchenkov <s...@zxy.spb.ru> wrote:

> On Mon, Nov 16, 2015 at 11:40:12AM -0800, Freddie Cash wrote:
>
> > On Mon, Nov 16, 2015 at 11:36 AM, Kevin Oberman <rkob...@gmail.com>
> wrote:
> > > As already mentioned, unless you are using zfs, use gpart to label your
> > > file systems/disks. Then use /dev/gpt/LABEL as the mount device in fstab.
> > >
> >
> > Even if you are using ZFS, labelling the drives with the location of the
> > disk in the system (enclosure, column, row, whatever) makes things so
> > much easier to work with when there are disk-related issues.
> >
> > Just create a single partition that covers the whole disk, label it, and
> > use the label to create the vdevs in the pool.
>
> Bad idea. A disk replaced into a different bay doesn't get relabelled
> automatically.
>

Did the original disk get labelled automatically? No, you had to do that
when you first started using it. So why would you expect a replaced disk
to get labelled automatically?

Offline the dead/dying disk.
Physically remove the disk.
Insert the new disk.
Partition / label the new disk.
"zpool replace" using the new label to get it into the pool.

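A concrete sketch of that procedure (pool and label names hypothetical;
da6 is whatever device node the new disk gets):

zpool offline tank gpt/enc0a6
(physically swap the disk in bay enc0a6)
gpart create -s gpt da6
gpart add -t freebsd-zfs -a 1m -l enc0a6 da6
zpool replace tank gpt/enc0a6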

> Another issue: when disks are placed into bays by remote hands at the data
> center, I really don't know how the disks are distributed across the bays.
>

You label the disks as they are added to the system the first time. That
way, you always know where each disk is located, and you only deal with the
labels.

Then, when you need to replace a disk (or ask someone in a remote location
to replace it) it's a simple matter: the label on the disk itself tells
you where the disk is physically located. And it doesn't change if the
controller decides to change the direction it enumerates devices.

Which is easier to tell someone in a remote location:
Replace disk enc0a6 (meaning enclosure 0, column A, row 6)?
or
Replace the disk called da36?
or
Find the disk with serial number XXXXXXXX?
or
Replace the disk where the light is (hopefully) flashing (but I can't
tell you which enclosure, front or back, or anything else like that)?

The first one lets you know exactly where the disk is located physically.

The second one just tells you the name of the device as determined by the
OS, but doesn't tell you anything about where it is located. And it can
change with a kernel update, driver update, or firmware update!

The third requires you to pull every disk in turn to read the serial number
off the drive itself.

In order for the second or third option to work, you'd have to write down
the device names and/or serial numbers and stick them onto the drive bay
itself.


> The best way to identify a disk is to use enclosure services.
>

Only if your enclosure services are actually working (or even enabled).
I've yet to work on a box where that actually works (we custom-build our
storage boxes using OTS hardware).

The best way, IMO, is to use the physical location of the device as the
actual device name itself. That way, there's never any ambiguity at the
physical layer, the driver layer, the OS layer, or the ZFS pool layer.


> I have many sites with ZFS on whole disks and some sites with ZFS on
> GPT partitions. ZFS on GPT is heavier to administer.
>

It's one extra step: partition the drive, supplying the location of the
drive as the label for the partition.

Everything else works exactly the same.

I used to do everything with whole drives and no labels. Did that for
about a month, until 2 separate drives on separate controllers died (in a
24-bay setup) and I couldn't figure out where they were located as a BIOS
upgrade changed which controller loaded first. And then I had to work on a
server that someone else configured with direct-attach bays (24 cables)
that were connected almost at random.

Then I used glabel(8) to label the entire disk, and things were much
better. But that didn't always play well with 4K drives, and replacing
drives that were the same size didn't always work as the number of sectors
in each disk was different (ZFS plays better with this now).

Then I started to GPT partition things, and life has been so much simpler.
All the partitions are aligned to 1 MB, and I can manually set the size of
the partition to work around different physical sector counts. All the
partitions are labelled using the physical location of the disk (originally
just row/column naming like a spreadsheet, but now I'm adding enclosure
name as well as we expand to multiple enclosures per system). It's so much
simpler now, ESPECIALLY when I have to get someone to do something
remotely. :)

Everyone has their own way to manage things. I just haven't seen any
better setup than labelling the drives themselves using their physical
location.

--
Freddie Cash
fjw...@gmail.com

krad

Nov 17, 2015, 3:37:48 AM
I disagree; get the remote hands to copy the serial number to an easily
visible location on the drive when it's in the enclosure. Then label the
drives with the serial number (or a compatible version of it). That way
the label is tied to the drive, and you don't have to rely on the remote
hands 100%. Better still, do the physical labelling yourself.

Slawa Olhovchenkov

Nov 18, 2015, 5:25:31 AM
On Mon, Nov 16, 2015 at 01:19:55PM -0800, Freddie Cash wrote:

> On Mon, Nov 16, 2015 at 12:57 PM, Slawa Olhovchenkov <s...@zxy.spb.ru> wrote:
>
> > On Mon, Nov 16, 2015 at 11:40:12AM -0800, Freddie Cash wrote:
> >
> > > On Mon, Nov 16, 2015 at 11:36 AM, Kevin Oberman <rkob...@gmail.com>
> > wrote:
> > > > As already mentioned, unless you are using zfs, use gpart to label your
> > > > file systems/disks. Then use /dev/gpt/LABEL as the mount device in fstab.
> > > >
> > >
> > > Even if you are using ZFS, labelling the drives with the location of the
> > > disk in the system (enclosure, column, row, whatever) makes things so
> > > much easier to work with when there are disk-related issues.
> > >
> > > Just create a single partition that covers the whole disk, label it, and
> > > use the label to create the vdevs in the pool.
> >
> > Bad idea. A disk replaced into a different bay doesn't get relabelled
> > automatically.
> >
>
> Did the original disk get labelled automatically? No, you had to do that
> when you first started using it. So why would you expect a
> replaced disk

Initial labeling is a problem too. For a new chassis with 36 identical
disks (already installed) -- what is a simple way to label the disks?

> to get labelled automatically?

Keeping things consistent is another problem.

> Offline the dead/dying disk.
> Physically remove the disk.
> Insert the new disk.
> Partition / label the new disk.
> "zpool replace" using the new label to get it into the pool.

A new disk can be inserted into a free bay. This may be done by remote
hands, and then I can miss the information about where the disk was
placed.


> > Another issue: when disks are placed into bays by remote hands at the data
> > center, I really don't know how the disks are distributed across the bays.
> >
>
> You label the disks as they are added to the system the first time. That
> way, you always know where each disk is located, and you only deal with the
> labels.
>
> Then, when you need to replace a disk (or ask someone in a remote location
> to replace it) it's a simple matter: the label on the disk itself tells
> you where the disk is physically located. And it doesn't change if the
> controller decides to change the direction it enumerates devices.
>
> Which is easier to tell someone in a remote location:

"Replace the disk in the bay with the blinking LED."

Author: bapt
Date: Sat Sep 5 00:06:01 2015
New Revision: 287473
URL: https://svnweb.freebsd.org/changeset/base/287473

Log:
Add a new sesutil(8) utility

This is a utility for managing SCSI Enclosure Services (SES) devices.

For now only one command is supported, "locate", which will change the
state of the external LED associated with a given disk.

Usage is the following:
sesutil locate disk [on|off]

Disk can be a device name ("da12") or the special keyword "all".
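
For example, remote hands can be pointed at the right bay with something
like the following (da12 hypothetical):

sesutil locate da12 on
(swap the disk)
sesutil locate da12 off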



> Replace disk enc0a6 (meaning enclosure 0, column A, row 6)?
> or
> Replace the disk called da36?
> or
> Find the disk with serial number XXXXXXXX?
> or
> Replace the disk where the light is (hopefully) flashing (but I can't
> tell you which enclosure, front or back, or anything else like that)?
>
> The first one lets you know exactly where the disk is located physically.
>
> The second one just tells you the name of the device as determined by the
> OS, but doesn't tell you anything about where it is located. And it can
> change with a kernel update, driver update, or firmware update!
>
> The third requires you to pull every disk in turn to read the serial number
> off the drive itself.

Usually the serial number can be read without pulling the disk (for
SuperMicro cases this is true; remote hands replaced a disk by S/N for me
without pulling every disk).

> In order for the second or third option to work, you'd have to write down
> the device names and/or serial numbers and stick that onto the drive bay
> itself.
>
>
> > The best way to identify a disk is to use enclosure services.
> >
>
> Only if your enclosure services are actually working (or even enabled).
> I've yet to work on a box where that actually works (we custom-build our
> storage boxes using OTS hardware).
>
> The best way, IMO, is to use the physical location of the device as the
> actual device name itself. That way, there's never any ambiguity at the
> physical layer, the driver layer, the OS layer, or the ZFS pool layer.
>
>
> > I have many sites with ZFS on whole disks and some sites with ZFS on
> > GPT partitions. ZFS on GPT is heavier to administer.
> >
>
> It's one extra step: partition the drive, supplying the location of the
> drive as the label for the partition.
>
> Everything else works exactly the same.
>
> I used to do everything with whole drives and no labels. Did that for
> about a month, until 2 separate drives on separate controllers died (in a
> 24-bay setup) and I couldn't figure out where they were located as a BIOS
> upgrade changed which controller loaded first. And then I had to work on a
> server that someone else configured with direct-attach bays (24 cables)
> that were connected almost at random.

All the servers I currently use have some randomness in how controllers
and HDDs are detected and reported. That's no problem for ZFS and/or for
replacement by remote hands (by S/N).

Freddie Cash

Nov 18, 2015, 11:15:42 AM
On Wed, Nov 18, 2015 at 2:25 AM, Slawa Olhovchenkov <s...@zxy.spb.ru> wrote:

> On Mon, Nov 16, 2015 at 01:19:55PM -0800, Freddie Cash wrote:
>
> > On Mon, Nov 16, 2015 at 12:57 PM, Slawa Olhovchenkov <s...@zxy.spb.ru>
> wrote:
> > Did the original disk get labelled automatically? No, you had to do that
> > when you first started using it. So why would you expect a
> > replaced disk
>
> Initial labeling is a problem too. For a new chassis with 36 identical
> disks (already installed) -- what is a simple way to label the disks?
>

That's the easy part. Boot with all the drives pulled out a bit, so they
aren't connected/detected.

Insert the first disk, wait for it to be detected and get a /dev node,
then partition/label it. Repeat for each disk. It takes about 5 minutes
to label a 45-bay JBOD chassis.

No different from how you would get the serial number off each disk
before inserting it into the chassis, so you'd know for sure which slot
it's in.
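
The serial numbers can also be collected from the OS without touching the
hardware; a sketch, assuming the disks all show up as da(4) devices
(camcontrol's -S flag prints only the serial number):

camcontrol inquiry da0 -S
for d in $(sysctl -n kern.disks); do echo "$d: $(camcontrol inquiry $d -S)"; done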

> "Replace the disk in the bay with the blinking LED."
>
> Author: bapt
> Date: Sat Sep 5 00:06:01 2015
>

And how did you manage to do that before Sep 5, 2015?

> Usually the serial number can be read without pulling the disk (for
> SuperMicro cases this is true; remote hands replaced a disk by S/N for
> me without pulling every disk).
>

How? We have all SuperMicro storage chassis (SC2xx, SC8xx, and JBODs) and
server chassis in our data centre here. None of them allow you to read the
serial number off the physical disk without pulling the disk out
completely. You'd have to manually label each bay with the serial number
before inserting the disk into the chassis ... which is no different from
labelling the device in the OS. Except it's much faster to find a 3D
co-ordinate (enc0a6) than to scan every bay looking for a specific serial
number.

But, to each their own. :) Everyone has their "perfect" system that works
for them. :D

--
Freddie Cash
fjw...@gmail.com

Slawa Olhovchenkov

Nov 18, 2015, 11:54:42 AM
On Wed, Nov 18, 2015 at 08:15:15AM -0800, Freddie Cash wrote:

> On Wed, Nov 18, 2015 at 2:25 AM, Slawa Olhovchenkov <s...@zxy.spb.ru> wrote:
>
> > On Mon, Nov 16, 2015 at 01:19:55PM -0800, Freddie Cash wrote:
> >
> > > On Mon, Nov 16, 2015 at 12:57 PM, Slawa Olhovchenkov <s...@zxy.spb.ru>
> > wrote:
> > > Did the original disk get labelled automatically? No, you had to do that
> > > when you first started using it. So why would you expect a
> > > replaced disk
> >
> > Initial labeling is a problem too. For a new chassis with 36 identical
> > disks (already installed) -- what is a simple way to label the disks?
> >
>
> That's the easy part. Boot with all the drives pulled out a bit, so they
> aren't connected/detected.
>
> Insert the first disk, wait for it to be detected and get a /dev node,
> then partition/label it. Repeat for each disk. It takes about 5 minutes
> to label a 45-bay JBOD chassis.

Hmm, the server is more than 1700 km away from me; how can I do this?

> No different from how you would get the serial number off each disk
> before inserting it into the chassis, so you'd know for sure which slot
> it's in.

This is done by the manufacturer, or in the DC after ordering the
service. I don't assemble servers, in general, and I don't see the
servers or know how they look.

> > "Replace the disk in the bay with the blinking LED."
> >
> > Author: bapt
> > Date: Sat Sep 5 00:06:01 2015
> >
>
> And how did you manage to do that before Sep 5, 2015?

A detached disk doesn't blink its activity LED.

> > Usually the serial number can be read without pulling the disk (for
> > SuperMicro cases this is true; remote hands replaced a disk by S/N for
> > me without pulling every disk).
> >
>
> How? We have all SuperMicro storage chassis (SC2xx, SC8xx, and JBODs) and
> server chassis in our data centre here. None of them allow you to read the
> serial number off the physical disk without pulling the disk out
> completely. You'd have to manually label each bay with the serial number
> before inserting the disk into the chassis ... which is no different from
> labelling the device in the OS. Except it's much faster to find a 3D
> co-ordinate (enc0a6) than to scan every bay looking for a specific serial
> number.

For the SC847A this was done for me in an NL DC (as I understand it, by
reading the serial through the holes at an angle).
