
freebsd-stable Digest, Vol 349, Issue 5


From: freebsd-sta...@freebsd.org
Date: Mar 25, 2010, 8:00:26 AM
To: freebsd...@freebsd.org
Send freebsd-stable mailing list submissions to
freebsd...@freebsd.org

To subscribe or unsubscribe via the World Wide Web, visit
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
or, via email, send a message with subject or body 'help' to
freebsd-sta...@freebsd.org

You can reach the person managing the list at
freebsd-st...@freebsd.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of freebsd-stable digest..."


Today's Topics:

1. 8.x Amd64, ATA SATA mode reporting (John Long)
2. Re: Multi node storage, ZFS (David Magda)
3. Re: Multi node storage, ZFS (Patrick M. Hausen)
4. Re: Multi node storage, ZFS (Michal)
5. Re: Powerd and est / eist functionality (Torfinn Ingolfsen)


----------------------------------------------------------------------

Message: 1
Date: Wed, 24 Mar 2010 19:06:51 -0700
From: John Long <fb...@sstec.com>
Subject: 8.x Amd64, ATA SATA mode reporting
To: freeBSD-STABLE Mailing List <freebsd...@freebsd.org>
Message-ID: <5.2.1.1.2.201003...@mail.sstec.com>
Content-Type: text/plain; charset="us-ascii"; format=flowed

Moved from another thread
>> I csupd to stable amd64 8.0 and rebuilt, then noticed from dmesg that I
>> went from SATA150 (it should be SATA300) to UDMA100 SATA.
>
>This is a bug/quirk of some changes in ata(4). Your drive should be
>operating at full SATA speed (probably SATA300). You can bring this up
>in another thread if you want, but it's purely cosmetic as far as I
>know. atacontrol(8) and diskinfo(8) -t and -c will come in handy.

I am looking for stability, so I find this possibly disconcerting.
It looks like you are right about it being cosmetic, since the speed test
results are about the same either way. With GENERIC, atacontrol cannot
determine the mode at all, and with stable it shows up, but as the wrong
thing: UDMA100 SATA instead of SATA300.

dmesg reports on both the same:
atapci0: <Intel ICH7 SATA300 controller> port
0x1f0-0x1f7,0x3f6,0x170-0x177,0x376,0xf000-0xf00f at device 31.2 on pci0
ata0: <ATA channel 0> on atapci0
ata0: [ITHREAD]
ata1: <ATA channel 1> on atapci0
ata1: [ITHREAD]

I noticed the following line from sysctl in the stable build, and it turns
out GENERIC reports it as well:
hptmv.status: RocketRAID 18xx SATA Controller driver Version v1.16
So I added "nodevice hptmv" and rebuilt; that took it out, but there was no
improvement in the results. Kernel additions are at the end.
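For reference, the rebuild was the usual custom-kernel procedure, roughly as
below (a sketch; it assumes the config shown at the end is installed as
/usr/src/sys/amd64/conf/SSTEC):

```shell
# Rebuild and install a custom kernel after editing the config file
# (assumes /usr/src/sys/amd64/conf/SSTEC holds the config named below).
cd /usr/src
make buildkernel KERNCONF=SSTEC
make installkernel KERNCONF=SSTEC
shutdown -r now
```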

So, if it is of no matter then cool, if you want me to do something just
let me know.

thx,

John

8 stable:
dmesg: ad0: 953868MB <WDC WD1001FALS-00J7B0 05.00K05> at ata0-master UDMA100 SATA
%atacontrol list
ATA channel 0:
Master: ad0 <WDC WD1001FALS-00J7B0/05.00K05> SATA revision 2.x
Slave: no device present
ATA channel 1:
Master: acd0 <CREATIVE CD5230E/C1.01> ATA/ATAPI revision 0
Slave: no device present

%atacontrol mode ad0
current mode = UDMA100 SATA

%diskinfo -t ad0
ad0
512 # sectorsize
1000203804160 # mediasize in bytes (932G)
1953523055 # mediasize in sectors
0 # stripesize
0 # stripeoffset
1938018 # Cylinders according to firmware.
16 # Heads according to firmware.
63 # Sectors according to firmware.
WD-WMATV5906953 # Disk ident.

Seek times:
Full stroke: 250 iter in 4.930305 sec = 19.721 msec
Half stroke: 250 iter in 3.434680 sec = 13.739 msec
Quarter stroke: 500 iter in 5.588974 sec = 11.178 msec
Short forward: 400 iter in 1.601327 sec = 4.003 msec
Short backward: 400 iter in 2.187711 sec = 5.469 msec
Seq outer: 2048 iter in 0.391555 sec = 0.191 msec
Seq inner: 2048 iter in 0.243926 sec = 0.119 msec
Transfer rates:
outside: 102400 kbytes in 0.945528 sec = 108299 kbytes/sec
middle: 102400 kbytes in 1.066398 sec = 96024 kbytes/sec
inside: 102400 kbytes in 1.775079 sec = 57688 kbytes/sec

%diskinfo -c ad0
ad0
512 # sectorsize
1000203804160 # mediasize in bytes (932G)
1953523055 # mediasize in sectors
0 # stripesize
0 # stripeoffset
1938018 # Cylinders according to firmware.
16 # Heads according to firmware.
63 # Sectors according to firmware.
WD-WMATV5906953 # Disk ident.

I/O command overhead:
time to read 10MB block 0.105669 sec = 0.005 msec/sector
time to read 20480 sectors 2.138126 sec = 0.104 msec/sector
calculated command overhead

=========================================================
8 Generic:
dmesg: ad0: 953868MB <WDC WD1001FALS-00J7B0 05.00K05> at ata0-master SATA150
%atacontrol list
ATA channel 0:
Master: ad0 <WDC WD1001FALS-00J7B0/05.00K05> SATA revision 2.x
Slave: no device present
ATA channel 1:
Master: acd0 <CREATIVE CD5230E/C1.01> ATA/ATAPI revision 0
Slave: no device present

%atacontrol mode ad0
current mode = ???

%diskinfo -t ad0
ad0
512 # sectorsize
1000203804160 # mediasize in bytes (932G)
1953523055 # mediasize in sectors
0 # stripesize
0 # stripeoffset
1938018 # Cylinders according to firmware.
16 # Heads according to firmware.
63 # Sectors according to firmware.
WD-WMATV5906953 # Disk ident.

Seek times:
Full stroke: 250 iter in 4.990163 sec = 19.961 msec
Half stroke: 250 iter in 3.460091 sec = 13.840 msec
Quarter stroke: 500 iter in 5.572893 sec = 11.146 msec
Short forward: 400 iter in 1.601550 sec = 4.004 msec
Short backward: 400 iter in 2.187599 sec = 5.469 msec
Seq outer: 2048 iter in 0.378502 sec = 0.185 msec
Seq inner: 2048 iter in 0.248222 sec = 0.121 msec
Transfer rates:
outside: 102400 kbytes in 0.955476 sec = 107172 kbytes/sec
middle: 102400 kbytes in 1.067399 sec = 95934 kbytes/sec
inside: 102400 kbytes in 1.776965 sec = 57626 kbytes/sec

%diskinfo -c ad0
ad0
512 # sectorsize
1000203804160 # mediasize in bytes (932G)
1953523055 # mediasize in sectors
0 # stripesize
0 # stripeoffset
1938018 # Cylinders according to firmware.
16 # Heads according to firmware.
63 # Sectors according to firmware.
WD-WMATV5906953 # Disk ident.

I/O command overhead:
time to read 10MB block 0.090332 sec = 0.004 msec/sector
time to read 20480 sectors 2.140744 sec = 0.105 msec/sector
calculated command overhead = 0.100 msec/sector

=======================================================
#
# SSTEC custom -- FreeBSD/amd64
#
# $FreeBSD: src/sys/amd64/conf/GENERIC,v 1.531.2.8 2010/01/18 00:53:21 imp Exp $

include GENERIC

ident SSTEC

#
# CPU control pseudo-device. Provides access to MSRs, CPUID info and
# microcode update feature.
###
#device cpuctl

## ======================================================================
## Additions for Firewall / Divert

options IPFIREWALL #firewall
options IPFIREWALL_VERBOSE #enable logging to syslogd(8)
options IPFIREWALL_VERBOSE_LIMIT=100 #limit verbosity
options IPFIREWALL_FORWARD #packet destination changes
options IPDIVERT #divert sockets

###
#options IPFIREWALL_NAT #ipfw kernel nat support
# libalias library, performing NAT required for IPFIREWALL_NAT
###
#options LIBALIAS

# Statically Link in accept filters
options ACCEPT_FILTER_DATA
options ACCEPT_FILTER_HTTP
###
options ACCEPT_FILTER_DNS

# With DUMMYNET it is advisable to also have at least "options HZ=1000" to
# achieve a smooth scheduling of the traffic.
options DUMMYNET
###
options HZ=1000

## ======================================================================

#excludes
nodevice hptmv # Highpoint RocketRAID 182x
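With IPFIREWALL and DUMMYNET compiled in as above, traffic shaping can be
exercised with something like the following (the rule number and the 1 Mbit/s
limit are only examples):

```shell
# Create a dummynet pipe limited to 1 Mbit/s and send all IP traffic
# through it (example rule number and bandwidth).
ipfw pipe 1 config bw 1Mbit/s
ipfw add 100 pipe 1 ip from any to any

# Inspect the ruleset and the pipe
ipfw list
ipfw pipe 1 show
```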

------------------------------

Message: 2
Date: Wed, 24 Mar 2010 22:27:53 -0400
From: David Magda <dma...@ee.ryerson.ca>
Subject: Re: Multi node storage, ZFS
To: Michal <mic...@ionic.co.uk>
Cc: freebsd...@freebsd.org
Message-ID: <C202E72E-9721-4B50...@ee.ryerson.ca>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes

On Mar 24, 2010, at 19:45, Michal wrote:

> It's all well and good having 1 ZFS server, but it's fragile in the
> the sense of no redundancy, then we have 1 ZFS server and a 2nd with
> DRBD, but that's a waste of money...think 12 TB, and you need to pay
> for another 12TB box for redundancy, and you are still looking at 1
> server. I am thinking a cheap solution but one that has IO
> throughput, redundancy and is easy to manange and expand across
> multiple nodes

If you want an appliance, a Sun/Oracle 7000 series may be close:

http://www.oracle.com/us/products/servers-storage/storage/open-storage/

The 7310 allows for two active-active heads, with fail over if one
dies. Does NFS, SMB/CIFS, and iSCSI; the newest software release
(2010.Q1) gives SAN functionality so you can export LUNs via FC if you
purchase the optional HBAs.

Unfortunately Oracle's web site seriously sucks compared to Sun's for
product information. A lot of good weblog posts though:

http://blogs.sun.com/brendan/
http://blogs.sun.com/brendan/entry/slog_screenshots (write perf.)
http://blogs.sun.com/brendan/entry/l2arc_screenshots (read perf.)

http://blogs.sun.com/ahl/entry/sun_storage_7310
http://blogs.sun.com/wesolows/category/Clustering

Probably cheaper than most vendors' offerings, but more expensive than
DIY (though you have to add the cost of your time).

Disclaimer: just a generally happy Sun customer. (We'll see about
Oracle though. :)
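A middle ground between a single box and a full DRBD mirror is periodic ZFS
replication to a second node; roughly like this (the pool/dataset name
"tank/data", the snapshot names, and the host "standby" are hypothetical):

```shell
# Replicate a ZFS dataset to a standby node (sketch; names are examples).
zfs snapshot tank/data@snap1
zfs send tank/data@snap1 | ssh standby zfs receive -F tank/data

# Later snapshots can be sent incrementally:
zfs snapshot tank/data@snap2
zfs send -i snap1 tank/data@snap2 | ssh standby zfs receive tank/data
```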

------------------------------

Message: 3
Date: Thu, 25 Mar 2010 09:54:45 +0100
From: "Patrick M. Hausen" <hau...@punkt.de>
Subject: Re: Multi node storage, ZFS
To: Michal <mic...@ionic.co.uk>
Cc: freebsd...@freebsd.org
Message-ID: <20100325085...@hugo10.ka.punkt.de>
Content-Type: text/plain; charset=iso-8859-1

Hi, all,

On Wed, Mar 24, 2010 at 11:45:25PM +0000, Michal wrote:
> I am thinking a cheap solution but one that
> has IO throughput, redundancy and is easy to manange and expand across
> multiple nodes

Fast, reliable, cheap. Pick any two.

IMHO this is just as true today as it was twenty years ago.

Best regards,
Patrick
--
punkt.de GmbH * Kaiserallee 13a * 76133 Karlsruhe
Tel. 0721 9109 0 * Fax 0721 9109 100
in...@punkt.de http://www.punkt.de
Gf: Jürgen Egeling AG Mannheim 108285


------------------------------

Message: 4
Date: Thu, 25 Mar 2010 09:09:04 +0000
From: Michal <mic...@ionic.co.uk>
Subject: Re: Multi node storage, ZFS
To: freebsd...@freebsd.org
Message-ID: <4BAB2830...@ionic.co.uk>
Content-Type: text/plain; charset=ISO-8859-1

On 25/03/2010 08:54, Patrick M. Hausen wrote:
> Hi, all,
>
> On Wed, Mar 24, 2010 at 11:45:25PM +0000, Michal wrote:
>> I am thinking a cheap solution but one that
>> has IO throughput, redundancy and is easy to manange and expand across
>> multiple nodes
>
> Fast, reliable, cheap. Pick any two.
>
> IMHO this is just as true today as it was twenty years ago.
>
> Best regards,
> Patrick

Doing it on the cheap will never get you what spending a lot of money
would. But yes, I agree to a certain extent: it's still expensive and out
of SMB reach.


>If you want an appliance, a Sun/Oracle 7000 series may be close:
>
>http://www.oracle.com/us/products/servers-storage/storage/open-storage/
>
>The 7310 allows for two active-active heads, with fail over if one
>dies. Does NFS, SMB/CIFS, and iSCSI; the newest software release
>(2010.Q1) gives SAN functionality so you can export LUNs via FC if you
>purchase the optional HBAs.

There are cheaper options, yes, I agree, but I think even those might be
out of my budget; I've been fighting for it for months. Time is OK as a
factor: learning it only helps me in the long run, so it's win-win for me.
I, too, am still unsure how the Oracle buyout will affect Sun. I'm not
optimistic, though. I expect to move off MySQL at some point; when, I don't
know, but I think I will be forced to for some reason or another.

------------------------------

Message: 5
Date: Thu, 25 Mar 2010 11:16:11 +0100
From: Torfinn Ingolfsen <torfinn....@broadpark.no>
Subject: Re: Powerd and est / eist functionality
To: freebsd...@freebsd.org
Message-ID: <20100325111611.b1e89...@broadpark.no>
Content-Type: text/plain; CHARSET=US-ASCII

On Wed, 24 Mar 2010 18:04:51 -0700
John Long <fb...@sstec.com> wrote:

> I want to thank you very much for all the info you have provided. It has
> clued me in to a much better understanding, and I see that monitoring
> these functions is a very non-standard thing. It seems that things are

FYI: for (some) Asus boards there is also acpi_aiboost(4).
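Loading it at boot follows the usual module convention; a sketch (the exact
sensor sysctl names vary by board):

```shell
# /boot/loader.conf -- load the Asus AI Booster ACPI driver at boot
acpi_aiboost_load="YES"

# After boot, its sensors should appear under the dev.acpi_aiboost tree:
sysctl dev.acpi_aiboost
```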
--
Regards,
Torfinn Ingolfsen

------------------------------


End of freebsd-stable Digest, Vol 349, Issue 5
**********************************************
