
Re: Do I need SAS drives?..


Frank Leonhardt (m)

Aug 9, 2017, 11:13:24 AM
Simple answer is to use either. You're running FreeBSD with ZFS, right? BSD will hot plug anything. I suspect the 'hot plug' designation is really about working around Microsoft or hardware RAID limitations.

Hot plug enclosures will also let the host know a drive has been pulled. Otherwise ZFS won't know whether it was pulled or is unresponsive due to it being on fire or something. With 8 drives in your array you can probably figure this out yourself.
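
As a rough sketch of the hot-plug dance on FreeBSD (device names here are
made up - adjust for your hardware):

    camcontrol rescan all     # after inserting the new disk, tell CAM to look for it
    camcontrol devlist        # check that it showed up (say, as da8)
    camcontrol standby da8    # spin a disk down before you pull it, if you're feeling polite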

SAS drives use SCSI commands, which are supposedly better than SATA commands. Electrically they are the same. SAS drives are more expensive and tend to be higher spec mechanically, but not always so. Incidentally, nearline SAS is a cheaper SATA drive that understands SAS protocol and has dual ports. Marketing.

Basically, if you really want speed at all costs go for SAS. If you want best capacity for your money, go SATA. If in doubt, go for SATA. If you don't know you need SAS for some reason, you probably don't.

Regards, Frank.


On 9 August 2017 15:27:37 BST, "Mikhail T." <mi...@aldan.algebra.com> wrote:
>My server has 8 "hot-plug" slots, that can accept both SATA and SAS
>drives. SATA ones tend to be cheaper for the same features (like
>cache-sizes), what am I getting for the extra money spent on SAS?
>
>Asking specifically about the protocol differences... It would seem,
>for example, SATA can not be as easily hot-plugged, but with
>camcontrol(8) that should not be a problem, right? What else? Thank
>you!
>--
>Sent from mobile device, please, pardon shorthand.
>
>

--
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Josh Paetzel

Aug 9, 2017, 11:30:06 AM
I have a different take on this. For starters, SAS and SATA aren't
electrically compatible. There's a reason SAS drives are keyed so you
can't plug them into a SATA controller: it keeps the magic smoke
inside the drive. SAS controllers can tunnel SATA (they confusingly
call this STP - not Spanning Tree Protocol, but SATA Tunneling Protocol).
It's imperfect but good enough for 8 drives. You really do not want to
put 60 SATA drives in a SAS JBOD.

SAS can be a shared fabric, which means a group of drives are like a
room full of people having a conversation. If someone starts screaming
and spurting blood it can disrupt the conversations of everyone in the
room. Modern RAID controllers are pretty good at disconnecting drives
that are not working properly but not completely dead. Modern HBAs not
so much. If your controller is an HBA, trying to keep a SAS fabric
stable with SATA drives can be more problematic than if you use SAS
drives... and as Frank pointed out, nearline SAS drives are essentially
SATA drives with a SAS interface (and typically carry under a $20 premium).

If performance was an issue we'd be talking about SSDs. While SAS
drives do have a performance advantage over SATA in
multiuser/multiapplication environments (they have a superior queuing
implementation) it's not worth considering when the real solution is
SSDs.

My recommendation is if you have SAS expanders and an HBA use SAS
drives. If you have direct wired SAS or a RAID controller you can use
either SAS or SATA. If your application demands performance or
concurrency, get a couple of SSDs. They'll smoke anything a spinning
drive can do.

--

Thanks,

Josh Paetzel

Frank Leonhardt

Aug 9, 2017, 5:23:23 PM
On 09/08/2017 16:59, Alan Somers wrote:
> On Wed, Aug 9, 2017 at 8:27 AM, Mikhail T. <mi...@aldan.algebra.com> wrote:
>> My server has 8 "hot-plug" slots, that can accept both SATA and SAS drives. SATA ones tend to be cheaper for the same features (like cache-sizes), what am I getting for the extra money spent on SAS?
>>
>> Asking specifically about the protocol differences... It would seem, for example, SATA can not be as easily hot-plugged, but with camcontrol(8) that should not be a problem, right? What else? Thank you!
>> --
>> Sent from mobile device, please, pardon shorthand.
> Good question. First of all, hot-plugability has more to do with the
> controller than the protocol. Since you have a SAS controller, you
> should have no problem hot plugging SATA drives. But SAS drives still
> have a few advantages:
>
> 1) When a SATA drive goes into error recovery, it can lock up the bus
> indefinitely. This won't matter if your drives are directly connected
> to a SAS HBA. But if you have an expander with say, 4 SAS lanes going
> to the HBA, then a flaky SATA drive can reduce the bandwidth available
> to the good drives.
>
> 2) Even with NCQ, the SATA protocol is limited to queueing one or more
> write commands OR one or more read commands. You can't queue a
> mixture of reads and writes at the same time. SAS does not have that
> limitation. In this sense, SAS is theoretically more performant.
> However, I've never heard of anybody observing a performance problem
> that can be definitely blamed on this effect.
>
> 3) SAS drives have a lot of fancy features that you may not need or
> care about. For example, they often have features that are useful in
> multipath setups (dual ports, persistent reservations), their error
> reporting capabilities are more sophisticated than SMART, their self
> encrypting command set is more sophisticated, etc etc.
>
> 4) The SAS activity LED is the opposite of SATA's. With SATA, the LED
> is off for an idle drive or blinking for a busy drive. With SAS, it's
> on for an idle drive or blinking for a busy drive. This makes it
> easier to see at a glance how many SAS drives you have installed. I
> think some SATA drives have a way to change the LED's behavior, though.
>
> 5) Desktop class SATA drives can spend an indefinite amount of time in
> error recovery mode. If your RAID stack doesn't timeout a command,
> that can cause your array to hang. But SAS drives and RAID class
> SATA drives will fail any command that spends too much time in error
> recovery mode.
>
> 6) But the most important difference isn't something you'll find on
> any datasheet or protocol manual. SAS drives are built to a higher
> standard of quality than SATA drives, and have accordingly lower
> failure rates.
>
> I'm guessing that you don't have an expander (since you only have 8
> slots), so item 1 doesn't matter to you. I'll guess that item 3
> doesn't matter either, or you wouldn't have asked this question. Item
> 5 can be dealt with simply by buying the higher end SATA drives. So
> item 6 is really the most important. If this system needs to have
> very high uptime and consistent bandwidth, or if it will be difficult
> to access for maintenance, then you probably want to use SAS drives.
> If not, then you can save some money by using SATA. Hope that helps.
>
> -Alan

Alan makes a good point about SAS expanders and their tendency to stick
when some SATA drives go off on a trip. I'm also assuming Mikhail(?)'s
setup doesn't use one.

On BSD with ZFS, a SATA drive chucking a shoe doesn't make any
difference if it's directly connected to the HBA (the same applies to
GEOM RAID/MIRROR). Drive silent? Detach it.
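
In ZFS terms that's roughly the following (pool and device names are
hypothetical):

    zpool status tank          # the dead disk shows up as REMOVED or UNAVAIL
    zpool offline tank da3     # take it out of service if ZFS hasn't already
    zpool replace tank da3     # after swapping the disk, resilver onto the new one

If the SATA drives support SCT error recovery control, capping the recovery
time stops them hanging quietly instead of failing - something like
"smartctl -l scterc,70,70 /dev/da3" (7 seconds for reads and writes), assuming
the drive honours it.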

I'm not at all convinced that SAS is any more reliable than SATA per se.
This is based on 30+ years' experience with Winchesters, starting with
the ST506. In the UK I used to write most of the storage articles for a
couple of major tech publishers, and I spent a lot of time talking to
and visiting the manufacturers and looking around the factories. Some of
this may now be out-of-date (Conner went bust for a start).

The thing is that if you opened an XXX-brand SCSI disk and the IDE
version, guess what? They were the same inside. I spoke to the makers,
and apparently the electronics on the SCSI version is a lot more
expensive. Why? Well, we don't sell as many, er, um.

Okay, they don't make cheap and nasty SCSI (or SAS) drives, but they do
make low-end IDE/SATA. They also make some very nice drives that are
only available as SAS. An equivalent quality SAS/SATA drive will be just
as reliable - there's no mechanical reason for them not to be. They come
off the same line.

Then there's the MTBF and the unrecoverable error rates. On high-end
drives the latter is normally claimed to be 10x better than the cheap
ones. Pretty much always, and exactly 10x. This is utter bilge. What
they're saying is that the unrecoverable error rate is this figure or
better, and any study into this has shown that it's usually a lot
better than both figures. So both figures are technically correct; it
just makes the SATA drive look worse. If anyone has any actual evidence
of equivalent SAS and SATA drives having a different error rate, please
get in touch.
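
For scale (typical datasheet figures rather than anything specific from this
thread): cheap drives are usually rated at one unrecoverable read error per
10^14 bits and the dearer ones at one per 10^15. That works out to one
expected bad sector per roughly 12.5TB read versus 125TB read - and, as
above, real-world measurements tend to beat both numbers comfortably.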

MTBF? Okay, SATA drives do fail more quickly. Run a drive 24/7 for a
couple of years in an array and it only spins up once and runs at a
constant speed; doesn't get knocked and has properly organised air
conditioning (no thermal shocks). The SATA drive in a desktop, on the
other hand, gets turned on and off and generally abused. It may be
running for less actual time but the odds are stacked against it. How
many light bulbs fail while they're running vs. how many fail when you
turn them on?

Finally, there's been my experience running a load of drives in data
centres for many years. In some servers there are SAS drives. In others
there are SATA server drives (supplied by Dell at 4x the cost of cheap
ones). And in others there are cheapo drives that were around and
whacked in when half a mirror failed. You know what's coming, don't you?
So I won't say it.

Regards, Frank.

Lanny Baron

Aug 9, 2017, 5:37:30 PM
Not sure what kind of server you are referring to but our servers can
take SAS and SATA at the same time. We build plenty of servers running
FreeBSD which in some cases have SATA SSD for boot drives (in a RAID-1)
and then X amount of either SATA or SAS or both in a different RAID
configuration all connected to the same high quality RAID Controller.

I have yet to see any complaint with the configurations we've done for
our clients.

SAS drives can be much faster: 15K RPM vs. 7.2K RPM for SATA. Your choice
would depend on how busy the server is.

Regards,
Lanny

Alan Somers

Aug 10, 2017, 10:02:03 AM
On Thu, Aug 10, 2017 at 7:44 AM, Ben RUBSON <ben.r...@gmail.com> wrote:
>> On 09 Aug 2017, at 17:59, Alan Somers <aso...@freebsd.org> wrote:
>>
>> 3) SAS drives have a lot of fancy features that you may not need or
>> care about. For example, (...) their error
>> reporting capabilities are more sophisticated than SMART
>
> Really interesting answer Alan, thank you very much !
> Slightly off-topic but I take this opportunity,
> how do you check SAS drives health ?
> I personally cron a background long test every 2 weeks (using smartmontools).
> I have not experienced a SAS drive error yet, so I am not sure how this behaves.
> Does the drive report to FreeBSD when its read or write error rate crosses
> a threshold (so that we can replace it before it fails)?
> Or perhaps smartd will do ?
>
> As an example, below is a SAS error counter log returned by smartctl:
>
>          Errors Corrected by           Total     Correction    Gigabytes      Total
>              ECC          rereads/     errors    algorithm     processed      uncorrected
>          fast | delayed   rewrites  corrected    invocations   [10^9 bytes]   errors
> read:        0       49         0         49        233662      73743.588          0
> write:       0        3         0          3         83996       9118.895          0
> verify:      0        0         0          0         28712          0.000          0
>
> Thank you !
>
> Ben

smartmontools is probably the best way to read SAS error logs.
Interpreting them can be hard, though. The Backblaze blog is probably
the best place to get current advice. But the easiest thing to do is
certainly to wait until something fails hard. With ZFS, you can have
up to 3 drives' worth of redundancy, and hotspares too.
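
For the fortnightly-test setup Ben describes, a minimal smartd sketch might
look like this (device names are hypothetical, smartmontools from ports):

    # /usr/local/etc/smartd.conf
    # watch two SAS disks, mail root on trouble, long self-test Saturdays at 03:00
    /dev/da0 -d scsi -a -m root -s L/../../6/03
    /dev/da1 -d scsi -a -m root -s L/../../6/03

The same thing can be cron'd directly with "smartctl -t long /dev/da0" if you
would rather not run the daemon.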

-Alan

Frank Leonhardt

Nov 7, 2017, 3:43:05 AM


On 06/11/2017 10:09, Zane C. B-H. wrote:
> In my decade-plus of DC work, I've seen both SAS and SATA
> drives flake and render systems inoperable till the offending drive is
> removed.
>

My experience too.
> For Supermicro it will vary between backplanes.
>
Very true indeed. If they go on or off from time to time, that's good
enough.

>> I'm guessing that you don't have an expander (since you only have 8
>> slots), so item 1 doesn't matter to you. I'll guess that item 3
>> doesn't matter either, or you wouldn't have asked this question. Item
>> 5 can be dealt with simply by buying the higher end SATA drives. So
>> item 6 is really the most important. If this system needs to have
>> very high uptime and consistent bandwidth, or if it will be difficult
>> to access for maintenance, then you probably want to use SAS drives.
>> If not, then you can save some money by using SATA. Hope that helps.
>
> Actually most boxes with more than 4 slots tend to use multipliers.
>
I'm more mixed on that. There are quite a few Dells with eight or
twelve-slot backplanes, even if it means two HBAs. Apart from better
performance, the cost of 2xHBA+backplane is bizarrely less than
1xHBA+Expander. All the Supermicros I've seen have had expanders though.

> As to uptime, that is trivial to achieve with both.
>
> With both it is of importance of drive monitoring and regular self tests.

WHS! Biggest cause of problems is discovering a flaky drive or two AFTER
the redundant one has failed. I don't know what anyone else thinks, but
I'm inclined to do a straightforward read of a block device rather than
a ZFS scrub because (a) I think it's quicker, especially when there's
not much workload; and (b) it also reads unused blocks, which are
probably the majority. "Best Practice" says you should do a scrub every
three months - seems way too long a gap for my liking.
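
By way of illustration (device and pool names hypothetical):

    dd if=/dev/da2 of=/dev/null bs=1m conv=noerror   # sequential read of the whole disk
    zpool scrub tank                                 # versus a scrub of just the allocated blocks
    zpool status tank                                # scrub progress and any checksum errors

The trade-off being that dd touches every sector, used or not, while the
scrub actually verifies what it reads against the ZFS checksums.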