
ZFS on top of GELI / Intel Atom 330 system


Dan Naumov

May 29, 2009, 4:19:44 AM5/29/09
to freebsd...@freebsd.org
Is there anyone here using ZFS on top of a GELI-encrypted provider on
hardware which could be considered "slow" by today's standards? What
are the performance implications of doing this? The reason I am asking
is that I am in the process of building a small home NAS/webserver,
starting with a single disk (intending to expand as the need arises)
on the following hardware:
http://www.tranquilpc-shop.co.uk/acatalog/BAREBONE_SERVERS.html This
is essentially an Intel Atom 330 1.6 Ghz dualcore on an Intel
D945GCLF2-based board with 2GB Ram, the first disk I am going to use
is a 1.5TB Western Digital Caviar Green.

I had someone run a few openssl crypto benchmarks (to unscientifically
assess the maximum possible GELI performance) on a machine running
FreeBSD on nearly the same hardware and it seems the CPU would become
the bottleneck at roughly 200 MB/s throughput when using 128 bit
Blowfish, 70 MB/s when using AES128 and 55 MB/s when using AES256.
This, on its own, is definitely enough for my needs (especially in
the case of using Blowfish), but what are the performance implications
of using ZFS on top of a GELI-encrypted provider?
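For reference, the kind of quick ceiling estimate described above can be reproduced with OpenSSL's built-in benchmark (a sketch; `openssl speed` is single-threaded, so the result is a rough per-core upper bound, not a GELI measurement):

```shell
# Rough per-core crypto ceiling using the same ciphers GELI offers.
# Single-threaded; multiply by usable cores for an optimistic bound.
openssl speed bf-cbc aes-128-cbc aes-256-cbc
```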

Also, feel free to criticize my planned filesystem layout for the
first disk of this system, the idea behind /mnt/sysbackup is to take a
snapshot of the FreeBSD installation and its settings before doing
potentially hazardous things like upgrading to a new -RELEASE:

ad1s1 (freebsd system slice)
ad1s1a => 128bit Blowfish ad1s1a.eli 4GB swap
ad1s1b 128GB ufs2+s /
ad1s1c 128GB ufs2+s noauto /mnt/sysbackup

ad1s2 => 128bit Blowfish ad1s2.eli
zpool
/home
/mnt/data1
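A minimal sketch of how the encrypted data slice above might be created (device name matches the layout; the pool name "tank" and geli flags are assumptions in 7.x-era syntax, not something settled in this thread):

```shell
# Assumed sketch: Blowfish-128 GELI provider on the data slice,
# then a single-disk zpool on top of the .eli device.
geli init -e blowfish -l 128 /dev/ad1s2   # prompts for a passphrase
geli attach /dev/ad1s2                    # creates /dev/ad1s2.eli
zpool create tank /dev/ad1s2.eli          # "tank" is a hypothetical name
zfs create tank/home
zfs set mountpoint=/home tank/home
```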


Thanks for your input.

- Dan Naumov

Pete French

May 29, 2009, 5:10:38 AM5/29/09
to dan.n...@gmail.com, freebsd...@freebsd.org
> Is there anyone here using ZFS on top of a GELI-encrypted provider on
> hardware which could be considered "slow" by today's standards? What

I run a mirrored zpool on top of a pair of 1TB SATA drives - they are
only 7200 rpm so pretty dog slow as far as I'm concerned. The
CPU is a dual core Athlon 6400, and I am running amd64. The performance
is not brilliant - about 25 meg/second writing a file, and about
53 meg/second reading it.

It's a bit disappointing really - that's a lot slower than I expected
when I built it, especially the write speed.

-pete.

Dan Naumov

May 29, 2009, 5:17:26 AM5/29/09
to Pete French, freebsd...@freebsd.org
Ouch, that does indeed sound quite slow, especially considering that
a dual core Athlon 6400 is a pretty fast CPU. Have you done any
comparison benchmarks between UFS2 with Softupdates and ZFS on the
same system? What are the read/write numbers like? Have you done any
investigating regarding possible causes of ZFS working so slow on your
system? Just wondering if it's an ATA chipset problem, a drive problem,
a ZFS problem or what...

- Dan Naumov

Philipp Wuensche

May 29, 2009, 5:30:45 AM5/29/09
to Dan Naumov, freebsd...@freebsd.org
Dan Naumov wrote:
> Is there anyone here using ZFS on top of a GELI-encrypted provider on
> hardware which could be considered "slow" by today's standards? What
> are the performance implications of doing this? The reason I am asking
> is that I am in the process of building a small home NAS/webserver,
> starting with a single disk (intending to expand as the need arises)
> on the following hardware:
> http://www.tranquilpc-shop.co.uk/acatalog/BAREBONE_SERVERS.html This
> is essentially: Intel Atom 330 1.6 Ghz dualcore on an Intel
> D945GCLF2-based board with 2GB Ram, the first disk I am going to use
> is a 1.5TB Western Digital Caviar Green.
>
> I had someone run a few openssl crypto benchmarks (to unscientifically
> assess the maximum possible GELI performance) on a machine running
> FreeBSD on nearly the same hardware and it seems the CPU would become
> the bottleneck at roughly 200 MB/s throughput when using 128 bit
> Blowfish, 70 MB/s when using AES128 and 55 MB/s when using AES256.
> This, on its own, is definitely enough for my needs (especially in
> the case of using Blowfish), but what are the performance implications
> of using ZFS on top of a GELI-encrypted provider?

I have a zpool mirror on top of two 128bit GELI blowfish devices with
Sectorsize 4096, my system is a D945GCLF2 with 2GB RAM and an Intel Atom
330 1.6 Ghz dualcore. The two disks are a WDC WD10EADS and a WD10EACS
(5400rpm). The system is running 8.0-CURRENT amd64. I have set
kern.geom.eli.threads=3.
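The 4096-byte sector size and thread count mentioned here would be set up roughly like this (a sketch; the device name is an assumption - larger GELI sectors mean fewer encryption operations per megabyte of I/O):

```shell
# Assumed sketch: GELI provider with 4 KB sectors, Blowfish-128,
# and three kernel crypto worker threads, as in Philipp's setup.
geli init -e blowfish -l 128 -s 4096 /dev/ad4
geli attach /dev/ad4
sysctl kern.geom.eli.threads=3   # GELI worker threads per provider
```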

This is far from a real benchmark, but:

Using dd with bs=4m I get 35 MByte/s writing to the mirror (writing 35
MByte/s to each disk) and 48 MByte/s reading from the mirror (reading
with 24 MByte/s from each disk).

My experience is that ZFS does not add much overhead and will not
degrade performance as much as the encryption does, so GELI is the
limiting factor. Using ZFS without GELI on this system gives way higher
read and write numbers, like reading with 70 MByte/s per disk etc.

greetings,
philipp


Pete French

May 29, 2009, 5:40:38 AM5/29/09
to dan.n...@gmail.com, freebsd...@freebsd.org
> Ouch, that does indeed sound quite slow, especially considering that
> a dual core Athlon 6400 is a pretty fast CPU. Have you done any
> comparison benchmarks between UFS2 with Softupdates and ZFS on the

Not at all - but, now you have got me curious, I just went to
a completely different system (four core opteron box, no ecnryption,
four 15k SCSI drives and a zpool of 2 mirrored pairs), and that
also gave me about 25 meg/second!

I am using the wildly unscientific "how long to copy a file"
method to benchmark here, with the file residing on a different
drive, which can provide it at 80 meg/second.

> same system? What are the read/write numbers like? Have you done any
> investigating regarding possible causes of ZFS working so slow on your
> system? Just wondering if it's an ATA chipset problem, a drive problem,
> a ZFS problem or what...

I have no idea, and now I think I need to look into it! Certainly
I should be getting better than 25 meg/sec out of the 15K SCSIs.

-pete.

Dan Naumov

May 29, 2009, 6:12:58 AM5/29/09
to Philipp Wuensche, freebsd...@freebsd.org
Thank you for your numbers, now I know what to expect when I get my
new machine, since our system specs look identical.

So basically on this system:

unencrypted ZFS read: ~70 MB/s per disk

128bit Blowfish GELI/ZFS write: 35 MB/s per disk
128bit Blowfish GELI/ZFS read: 24 MB/s per disk

I am curious what part of GELI is so inefficient as to cause such a
dramatic slowdown. In comparison, my home desktop is a

C2D E6600 2,4 Ghz, 4GB RAM, Intel DP35DP, 1 x 1,5TB Seagate Barracuda
- Windows Vista x64 SP1

Read/Write on an unencrypted NTFS partition: ~85 MB/s
Read/Write on a Truecrypt AES-encrypted NTFS partition: ~65 MB/s

As you can see, the performance drop is noticeable, but not anywhere
nearly as dramatic.


- Dan Naumov

Morgan Wesström

May 29, 2009, 6:47:38 AM5/29/09
to freebsd...@freebsd.org

Dan Naumov wrote:
> Thank you for your numbers, now I know what to expect when I get my
> new machine, since our system specs look identical.
>
> So basically on this system:
>
> unencrypted ZFS read: ~70 MB/s per disk
>
> 128bit Blowfish GELI/ZFS write: 35 MB/s per disk
> 128bit Blowfish GELI/ZFS read: 24 MB/s per disk
>
> I am curious what part of GELI is so inefficient to cause such a
> dramatic slowdown. In comparison, my home desktop is a
>


You can benchmark the encryption subsystem only, like this:

# kldload geom_zero
# geli onetime -s 4096 -l 256 gzero
# sysctl kern.geom.zero.clear=0
# dd if=/dev/gzero.eli of=/dev/null bs=1M count=512

512+0 records in
512+0 records out
536870912 bytes transferred in 11.861871 secs (45260222 bytes/sec)

The benchmark will use 256-bit AES and the numbers are from my Core2 Duo
Celeron E1200 1,6GHz. My old trusty Pentium III 933MHz performs at
13MB/s on that test. Both machines are recompiled with CPUTYPE=core2 and
CPUTYPE=pentium3 respectively but unfortunately I have no benchmarks on
how they perform without the CPU optimizations.
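The same gzero trick can be looped over ciphers to compare them directly (a sketch building on the commands above; algorithm names as geli accepts them in this era, and no disk is involved since gzero is a memory-backed provider):

```shell
# Compare raw GELI decryption throughput per cipher using the
# synthetic gzero provider (reads cost only CPU, no disk I/O).
kldload geom_zero
sysctl kern.geom.zero.clear=0
for alg in blowfish aes; do
    geli onetime -e $alg -l 128 -s 4096 gzero
    echo "== $alg =="
    dd if=/dev/gzero.eli of=/dev/null bs=1M count=512
    geli detach gzero
done
```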

I'm in the same spot as you, planning to build a home NAS. I have
settled for graid5/geli but haven't yet decided if I would benefit most
from a dual core CPU at 3+ GHz or a quad core at 2.6. Budget is a concern...

Regards
Morgan

Dan Naumov

May 29, 2009, 7:34:57 AM5/29/09
to Morgan Wesström, freebsd...@freebsd.org
Now that I have evaluated the numbers and my needs a bit, I am really
confused about what the appropriate course of action for me would be.

1) Use ZFS without GELI and hope that zfs-crypto gets implemented in
Solaris and ported to FreeBSD "soon" and that when it does, it won't
come with such a dramatic performance decrease as GELI/ZFS seems to
result in.
2) Go ahead with the original plan of using GELI/ZFS and grind my
teeth at the 24 MB/s read speed off a single disk.


>> So basically on this system:
>>
>> unencrypted ZFS read: ~70 MB/s per disk
>>
>> 128bit Blowfish GELI/ZFS write: 35 MB/s per disk
>> 128bit Blowfish GELI/ZFS read: 24 MB/s per disk

> I'm in the same spot as you, planning to build a home NAS. I have
> settled for graid5/geli but haven't yet decided if I would benefit most
> from a dual core CPU at 3+ GHz or a quad core at 2.6. Budget is a concern...

Our difference is that my hardware is already ordered and Intel Atom
330 + D945GCLF2 + 2GB ram is what it's going to have :)


- Dan Naumov

Emil Mikulic

May 29, 2009, 7:29:19 AM5/29/09
to Morgan Wesström, freebsd...@freebsd.org
On Fri, May 29, 2009 at 12:47:38PM +0200, Morgan Wesström wrote:
> You can benchmark the encryption subsystem only, like this:
>
> # kldload geom_zero
> # geli onetime -s 4096 -l 256 gzero
> # sysctl kern.geom.zero.clear=0
> # dd if=/dev/gzero.eli of=/dev/null bs=1M count=512

I don't mean to take this off-topic wrt -stable but just
for fun, I built a -current kernel with dtrace and did:

geli onetime gzero
./hotkernel &
dd if=/dev/zero of=/dev/gzero.eli bs=1m count=1024
killall dtrace
geli detach gzero

The hot spots:
[snip stuff under 0.3%]
kernel`g_eli_crypto_run 50 0.3%
kernel`_mtx_assert 56 0.3%
kernel`SHA256_Final 58 0.3%
kernel`rijndael_encrypt 72 0.4%
kernel`_mtx_unlock_flags 74 0.4%
kernel`rijndael128_encrypt 74 0.4%
kernel`copyout 92 0.5%
kernel`_mtx_lock_flags 93 0.5%
kernel`bzero 114 0.6%
kernel`spinlock_exit 240 1.3%
kernel`bcopy 325 1.7%
kernel`sched_idletd 810 4.3%
kernel`swcr_process 1126 6.0%
kernel`SHA256_Transform 1178 6.3%
kernel`rijndaelEncrypt 5574 29.7%
kernel`acpi_cpu_c1 8383 44.6%

I had to build crypto and geom_eli into the kernel to get proper
symbols.

References:
http://wiki.freebsd.org/DTrace
http://www.brendangregg.com/DTrace/hotkernel

--Emil

Ivan Voras

May 29, 2009, 7:49:54 AM5/29/09
to freebsd...@freebsd.org

Hi,

What is the meaning of counts? Number of calls made or time?


Vlad Galu

May 29, 2009, 7:59:50 AM5/29/09
to Ivan Voras, freebsd...@freebsd.org
On Fri, May 29, 2009 at 2:49 PM, Ivan Voras <ivo...@freebsd.org> wrote:
>
> Hi,
>
> What is the meaning of counts? Number of calls made or time?
>
>

The former.

Emil Mikulic

May 29, 2009, 8:02:12 AM5/29/09
to Ivan Voras, freebsd...@freebsd.org
On Fri, May 29, 2009 at 01:49:54PM +0200, Ivan Voras wrote:
> Emil Mikulic wrote:
[...]

> > kernel`SHA256_Transform 1178 6.3%
> > kernel`rijndaelEncrypt 5574 29.7%
> > kernel`acpi_cpu_c1 8383 44.6%
>
> Hi,
>
> What is the meaning of counts? Number of calls made or time?

Time.

Sorry, I inadvertently cut off the headings: function, count, percent

As I understand it, hotkernel uses statistical sampling at 1001 Hz, so
the percentage is an approximation of how much time is spent in each
function, based on how many profiler samples ended up in each function.
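In other words, each count is the number of 1001 Hz samples that landed in that function, and the percentage is that count over the total number of samples. A quick sanity check against the listed figures (the total is approximated from the acpi_cpu_c1 line, since entries under 0.3% were snipped):

```python
# Reconstruct hotkernel-style percentages from raw sample counts.
# Total sample count is approximated: 8383 samples == 44.6%.
counts = {
    "rijndaelEncrypt": 5574,
    "SHA256_Transform": 1178,
    "swcr_process": 1126,
    "acpi_cpu_c1": 8383,
}
total = counts["acpi_cpu_c1"] / 0.446   # roughly 18,800 samples

def pct(name):
    """Share of profiler samples spent in this function."""
    return 100.0 * counts[name] / total

# AES (rijndaelEncrypt) dominates the non-idle time, ~29.7%
print(f"rijndaelEncrypt: {pct('rijndaelEncrypt'):.1f}%")
```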

--Emil

Dan Naumov

May 29, 2009, 8:12:08 AM5/29/09
to Emil Mikulic, Morgan Wesström, freebsd...@freebsd.org
Pardon my ignorance, but what do these numbers mean and what
information can be deduced from them?

- Dan Naumov

Chris Dillon

May 29, 2009, 9:00:08 PM5/29/09
to Dan Naumov, freebsd...@freebsd.org, Pete French
Quoting Dan Naumov <dan.n...@gmail.com>:

> Ouch, that does indeed sound quite slow, especially considering that
> a dual core Athlon 6400 is a pretty fast CPU. Have you done any
> comparison benchmarks between UFS2 with Softupdates and ZFS on the
> same system? What are the read/write numbers like? Have you done any
> investigating regarding possible causes of ZFS working so slow on your
> system? Just wondering if its an ATA chipset problem, a drive problem,
> a ZFS problem or what...

I recently built a home NAS box on an Intel Atom 330 system (MSI Wind
Nettop 100) with 2GB RAM and two WD Green 1TB (WD10EADS) drives in a
mirrored ZFS pool using a FreeNAS 0.7 64-bit daily build. I only see
25-50MB/sec via Samba from my XP64 client, but in my experience SMB
always seems to have horrible performance no matter what kind of
servers and clients are used. However, dd shows a different set of
figures:

nas:/mnt/tank/scratch# dd if=/dev/zero of=zero.file bs=1M count=4000
4000+0 records in
4000+0 records out
4194304000 bytes transferred in 61.532492 secs (68164052 bytes/sec)

nas:/mnt/tank/scratch# dd if=zero.file of=/dev/null bs=1M
4000+0 records in
4000+0 records out
4194304000 bytes transferred in 33.347020 secs (125777476 bytes/sec)

68MB/sec writes and 125MB/sec reads... very impressive for such a
low-powered box, I think, and yes the drives are mirrored, not striped!
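As a cross-check on those dd figures, throughput is simply bytes transferred over elapsed seconds:

```python
# Verify the dd throughput numbers above: bytes / seconds.
def mb_per_sec(nbytes, secs):
    """Decimal MB/s, matching how dd reports bytes/sec."""
    return nbytes / secs / 1e6

write = mb_per_sec(4_194_304_000, 61.532492)  # ~68.2 MB/s
read = mb_per_sec(4_194_304_000, 33.347020)   # ~125.8 MB/s
print(f"write {write:.1f} MB/s, read {read:.1f} MB/s")
```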


Ronald Klop

May 31, 2009, 10:29:59 AM5/31/09
to Dan Naumov, Morgan Wesström, freebsd...@freebsd.org
On Fri, 29 May 2009 13:34:57 +0200, Dan Naumov <dan.n...@gmail.com>
wrote:

> Now that I have evaluated the numbers and my needs a bit, I am really
> confused about what appropriate course of action for me would be.
>
> 1) Use ZFS without GELI and hope that zfs-crypto gets implemented in
> Solaris and ported to FreeBSD "soon" and that when it does, it won't
> come with such a dramatic performance decrease as GELI/ZFS seems to
> result in.
> 2) Go ahead with the original plan of using GELI/ZFS and grind my
> teeth at the 24 MB/s read speed off a single disk.

3) Add extra disks. It will speed up reading. One extra disk will
roughly double the read speed.

Ronald.

Dan Naumov

May 31, 2009, 10:47:33 AM5/31/09
to Ronald Klop, Morgan Wesström, freebsd...@freebsd.org
I am pretty sure that adding more disks wouldn't solve anything in
this case; only a faster CPU or a faster crypto system would. When you
are capable of 70 MB/s reads off a single unencrypted disk, but
only 24 MB/s reads off the same disk while encrypted, your disk speed
isn't the problem.

- Dan Naumov

On Sun, May 31, 2009 at 5:29 PM, Ronald Klop
<ronald-...@klop.yi.org> wrote:
> On Fri, 29 May 2009 13:34:57 +0200, Dan Naumov <dan.n...@gmail.com> wrote:
>

>> Now that I have evaluated the numbers and my needs a bit, I am really
>> confused about what appropriate course of action for me would be.
>>
>> 1) Use ZFS without GELI and hope that zfs-crypto gets implemented in
>> Solaris and ported to FreeBSD "soon" and that when it does, it won't
>> come with such a dramatic performance decrease as GELI/ZFS seems to
>> result in.
>> 2) Go ahead with the original plan of using GELI/ZFS and grind my
>> teeth at the 24 MB/s read speed off a single disk.
>

Ulrich Spörlein

May 31, 2009, 11:39:53 AM5/31/09
to Morgan Wesström, freebsd...@freebsd.org
On Fri, 29.05.2009 at 12:47:38 +0200, Morgan Wesström wrote:
> You can benchmark the encryption subsystem only, like this:
>
> # kldload geom_zero
> # geli onetime -s 4096 -l 256 gzero
> # sysctl kern.geom.zero.clear=0
> # dd if=/dev/gzero.eli of=/dev/null bs=1M count=512
>
> 512+0 records in
> 512+0 records out
> 536870912 bytes transferred in 11.861871 secs (45260222 bytes/sec)
>
> The benchmark will use 256-bit AES and the numbers are from my Core2 Duo
> Celeron E1200 1,6GHz. My old trusty Pentium III 933MHz performs at
> 13MB/s on that test. Both machines are recompiled with CPUTYPE=core2 and
> CPUTYPE=pentium3 respectively but unfortunately I have no benchmarks on
> how they perform without the CPU optimizations.

Hi Morgan,

thanks for the nice benchmarking trick. I tried this on two ~7.2
systems:

CPU: Intel Pentium III (996.77-MHz 686-class CPU)
-> 14.3MB/s

CPU: Intel(R) Pentium(R) 4 CPU 2.80GHz (2793.01-MHz 686-class CPU)
-> 47.5MB/s

Reading a big file from the pool of this P4 results in a 27.6MB/s net
transfer rate (single 7200 rpm SATA disk).

I would be *very* interested in numbers from the dual core Atom, both
with 2 CPUs and with 1 active core only. I think that having dual core
is a must for this setup, so you can use 2 GELI threads and have the ZFS
threads on top of that to spread the load.

Cheers,
Ulrich Spörlein
--
http://www.dubistterrorist.de/

Morgan Wesström

May 31, 2009, 12:00:23 PM5/31/09
to freebsd...@freebsd.org
> Hi Morgan,
>
> thanks for the nice benchmarking trick. I tried this on two ~7.2
> systems:
>
> CPU: Intel Pentium III (996.77-MHz 686-class CPU)
> -> 14.3MB/s
>
> CPU: Intel(R) Pentium(R) 4 CPU 2.80GHz (2793.01-MHz 686-class CPU)
> -> 47.5MB/s
>
> Reading a big file from the pool of this P4 results in 27.6MB/s netto
> transfer rate (single 7200 rpm SATA disk).
>
> I would be *very* interested in numbers from the dual core Atom, both
> with 2 CPUs and with 1 active core only. I think that having dual core
> is a must for this setup, so you can use 2 GELI threads and have the ZFS
> threads on top of that to spread the load.
>
> Cheers,
> Ulrich Spörlein

Credit to pjd@ actually. Picked up the trick myself from freebsd-geom
some time ago :-)
http://lists.freebsd.org/pipermail/freebsd-geom/2007-July/002498.html

My Eee PC with a single core N270 is being repaired atm, it suffered a
bad BIOS flash so I can't help you with benchmarks until it's back. I
don't have access to another Atom CPU unfortunately.

/Morgan

Ulrich Spörlein

May 31, 2009, 12:05:33 PM5/31/09
to Dan Naumov, freebsd...@freebsd.org
On Fri, 29.05.2009 at 11:19:44 +0300, Dan Naumov wrote:
> Also, feel free to criticize my planned filesystem layout for the
> first disk of this system, the idea behind /mnt/sysbackup is to take a
> snapshot of the FreeBSD installation and its settings before doing
> potentially hazardous things like upgrading to a new -RELEASE:
>
> ad1s1 (freebsd system slice)
> ad1s1a => 128bit Blowfish ad1s1a.eli 4GB swap
> ad1s1b 128GB ufs2+s /
> ad1s1c 128GB ufs2+s noauto /mnt/sysbackup
>
> ad1s2 => 128bit Blowfish ad1s2.eli
> zpool
> /home
> /mnt/data1

Hi Dan,

everybody has different needs, but what exactly are you doing with 128GB
of / ? What I did is the following:

2GB CF card + CF to ATA adapter (today, I would use 2x8GB USB sticks,
CF2ATA adapters suck, but then again, which Mobo has internal USB ports?)

Filesystem 1024-blocks Used Avail Capacity Mounted on
/dev/ad0a 507630 139740 327280 30% /
/dev/ad0d 1453102 1292296 44558 97% /usr
/dev/md0 253678 16 233368 0% /tmp

/usr is quite crowded, but I just need to clean up some ports again.
/var, /usr/src, /home, /usr/obj, /usr/ports are all on the GELI+ZFS
pool. If /usr turns out to be too small, I can also move /usr/local
there. That way booting and single user involves trusty old UFS only.

I also do regular dumps from the UFS filesystems to the ZFS tank, but
there's really no sacred data under / or /usr that I would miss if the
system crashed (all configuration changes are tracked using mercurial).

Anyway, my point is to use the full disks for GELI+ZFS whenever
possible. This makes it easier to replace faulty disks or grow ZFS
pools. The FreeBSD base system I would put somewhere else.

Dan Naumov

May 31, 2009, 12:28:51 PM5/31/09
to u...@spoerlein.net, freebsd...@freebsd.org
Hi

Since you are suggesting 2 x 8GB USB for a root partition, what is
your experience with read/write speed and lifetime expectation of
modern USB sticks under FreeBSD and why 2 of them, GEOM mirror?

- Dan Naumov

Pertti Kosunen

May 31, 2009, 1:08:06 PM5/31/09
to freebsd...@freebsd.org
Ulrich Spörlein wrote:
> 2GB CF card + CF to ATA adapter (today, I would use 2x8GB USB sticks,
> CF2ATA adapters suck, but then again, which Mobo has internal USB ports?)

Many have an internal USB header.

http://www.logicsupply.com/products/afap_082usb

Freddie Cash

May 31, 2009, 1:43:55 PM5/31/09
to freebsd...@freebsd.org
On Sun, May 31, 2009 at 9:05 AM, Ulrich Spörlein <u...@spoerlein.net> wrote:
> everybody has different needs, but what exactly are you doing with 128GB
> of / ? What I did is the following:
>
> 2GB CF card + CF to ATA adapter (today, I would use 2x8GB USB sticks,
> CF2ATA adapters suck, but then again, which Mobo has internal USB ports?)

You can get CF-to-SATA adapters. We've used CF-to-IDE quite
successfully in a pair of storage servers. We have a couple of the
SATA adapters on order to test with, as our new motherboards only have
1 IDE controller, and doing mirroring across master/slave of the same
channel sucks.

> /usr is quite crowded, but I just need to clean up some ports again.
> /var, /usr/src, /home, /usr/obj, /usr/ports are all on the GELI+ZFS
> pool. If /usr turns out to be to small, I can also move /usr/local
> there. That way booting and single user involves trusty old UFS only.

That's what we do as well, but with /usr/local on ZFS, leaving just /
and /usr on UFS.

--
Freddie Cash
fjw...@gmail.com

Ulrich Spörlein

Jun 1, 2009, 1:45:10 AM6/1/09
to Dan Naumov, freebsd...@freebsd.org
On Sun, 31.05.2009 at 19:28:51 +0300, Dan Naumov wrote:
> Hi
>
> Since you are suggesting 2 x 8GB USB for a root partition, what is
> your experience with read/write speed and lifetime expectation of
> modern USB sticks under FreeBSD and why 2 of them, GEOM mirror?

Well, my current setup is using an old 2GB CF card, so read/write speeds
suck (14 and 7 MB/s, respectively, IIRC), but then again, there are not
many actual reads or writes on / or /usr for my setup anyway.

The 2x 8GB USB sticks I would of course use to gmirror the setup,
although I have been told that this is rather excessive. Modern flash
media should cope with enough write cycles to get you through a decade.
With /var being on GELI+ZFS this point is even more moot, IMHO.
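For reference, mirroring two USB sticks with gmirror would look roughly like this (a sketch; the da0/da1 device names and gm0 label are assumptions, and the mirror still needs to be partitioned before newfs):

```shell
# Assumed sketch: gmirror two USB sticks for the root filesystem.
gmirror label -v -b round-robin gm0 /dev/da0 /dev/da1
echo 'geom_mirror_load="YES"' >> /boot/loader.conf  # load at boot
newfs -U /dev/mirror/gm0a   # after labeling/partitioning the mirror
```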

A recent 8GB Sandisk U3 stick of mine manages to read/write ~25MB/s
(working from memory here), so this is pretty much the maximum USB 2.0
is giving you.
