HDD/SSD capacity supported by GnuBee


Chih Yu Tseng

Mar 21, 2023, 6:05:24 AM
to GnuBee
What is the capacity of HDD/SSD supported by the single SATA interface in GnuBee PC1 and PC2?

If all 6 ports are connected to hard disks at the same time, what is the maximum capacity that can be supported?

Thanks for your help!

Brett Neumeier

Mar 21, 2023, 10:09:21 AM
to Chih Yu Tseng, GnuBee
On Tue, Mar 21, 2023 at 5:05 AM Chih Yu Tseng <cccc1...@gmail.com> wrote:
What is the capacity of HDD/SSD supported by the single SATA interface in GnuBee PC1 and PC2?

I am probably not the best person to answer this but I will give it a shot!

I believe the question has no completely trivial answer, because the practical limit comes from the *power drawn by each drive* and the *type of partition table being used*, not from any per-port capacity cap on the SATA interface itself. Another factor that might be relevant for the PC1 is the *height of the drives*, since there is limited space between the SATA slots on that board.

The biggest limiting factor, again especially for the PC1, is the amount of power drawn by the drives. The power supply brick (transformer) provided with the PC2 is rated for 8 A; the one provided with the PC1 is rated for 3 A. If you have drives that draw more power than the power supply is intended to provide, you'll have issues. I'm not sure what the maximum power rating of the power circuitry on the PCB is, but it is probably not more than that of the power supply provided with the units.

I had six 2TB drives with a max power draw of 1A connected to a PC1 (total power drawn 6x1A = 6A, which is greater than the 3A available through the transformer), and after sustained heavy use the power adapter on the board got hot enough to melt. That is the kind of problem I'm suggesting you will want to avoid. I put the same six drives on a PC2 (using 2.5" to 3.5" adapters), and they have worked fine there, so I would definitely opt for the PC2 if you have power-hungry drives.

The power usage for a drive is not necessarily obvious when looking at an online store entry. I did some poking around and found that WD Red Plus 14TB 3.5" SATA drives draw 1.85 A of current, so six of them would be over 11 A, which is too much for either type of GnuBee. You could perhaps use four of them? You could *try* using six, but at the risk of melting your GnuBee. I also found the manual for the Seagate IronWolf 125 SSD, which is available with up to 4 TB of storage. It says that the 4 TB drive uses 2800 mW at 5 V, which I believe is 0.56 A, so six of them would be 3.36 A, which would be okay for a PC2 but not a PC1.

Does that make sense? I can use more words if I have been unclear. Basically, I believe you will be looking for drives that draw up to about 1.3 A each if you're using a PC2 (8 A / 6 drives = 1.33 A) and up to 0.5 A each if you're using a PC1 (3 A / 6 drives = 0.5 A).

In terms of the drive capacity per se -- the only limiting factor I can think of is the partition table. MBR partition tables can only address 2 TB (with 512-byte sectors), so they are limited to drives of that size or smaller. Linux, of course, supports GPT partition tables, so once Linux is running you will be able to use GPT on the storage disks; but I do not know whether the u-boot boot loader understands GPT, so you may need to either boot from the SD card or keep an MBR on at least one of the six SATA drives in order to boot.
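
For what it's worth, checking and switching partition table types is easy once Linux is up. A minimal sketch (the device name is just an example, and mklabel destroys whatever is on the disk):

    # Show the current partition table type (the "Partition Table:" line)
    parted /dev/sda print
    # Put a GPT label on a data disk -- wipes the existing table!
    parted /dev/sda mklabel gpt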

If all 6 ports are connected to hard disks at the same time, what is the maximum capacity that can be supported?

Wikipedia says: "As of May 2022, the largest hard drive is 22 TB (while SSDs can be much bigger at 100 TB, mainstream consumer SSDs cap at 8 TB). Smaller, 2.5-inch drives, are available at up to 2TB for laptops, and 5TB as external drives." 

So, supposing that you are able to find 22 TB 3.5" drives that draw no more than 1.3 A each, and 5 TB 2.5" drives that draw no more than 0.5 A each, AND supposing that u-boot is only able to load Linux from a storage device with an MBR partition table, AND you are okay with Linux being loaded from the SD card rather than one of the SATA drives ... then the maximum capacity is:

For PC2: 6 x 22 TB = 132 TB
For PC1: 6 x 5 TB = 30 TB

If this does not answer your question, please ask follow-ups!

Cheers,

Brett

--
Brett Neumeier (bneu...@gmail.com)

Matthias Urlichs

Mar 21, 2023, 12:26:28 PM
to gnu...@googlegroups.com
On 21.03.23 15:09, Brett Neumeier wrote:
> you may need to either boot from the SD card or have at least one of
> the 6 SATA drives have an MBR in order to boot it.

U-Boot is perfectly happy to boot from basically any USB drive you plug
into the machine, so no worries here.
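
(The generic U-Boot incantation looks something like the following; the device index, load address variable and file name are illustrative only:)

    usb start
    ls usb 0:1
    load usb 0:1 ${kernel_addr_r} uImage
    bootm ${kernel_addr_r}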

Also, if you really want a whole lot of disk space, the GnuBee has USB 3. You can buy 4-disk USB-C enclosures (IcyBox, for instance) that accept 3.5" disks, and you can buy a 16-port USB 3 hub. That's 16 x 4 = 64 USB disks plus the six on SATA: 70 disks with 22 TB each, or about 1.5 petabytes.

Filling that would take a year of continuous writing (assuming you can
sustain 50 MBytes/sec, which on the GnuBee you probably can't), so using
more than one of these hubs would be even more silly. ;-)

--
-- mit freundlichen Grüßen
--
-- Matthias Urlichs


Jernej Jakob

Mar 22, 2023, 9:21:55 AM
to 'Matthias Urlichs' via GnuBee
IIRC it's less than 50 MB/s, and even less when using a network protocol.
I did some benchmarks years ago and it wasn't that great, but
acceptable (IIRC 20-30 MB/s over SMB).
RAID will not get you more speed, as the bottleneck is not the drives
but the CPU (I think specifically the RAM bandwidth).
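
If you want to repeat the benchmark, something like this gives a rough
sequential number (the paths are placeholders; direct I/O keeps the
page cache from flattering the result):

    # Sequential write to the array
    dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=1024 oflag=direct
    # Sequential read from a raw drive (read-only)
    dd if=/dev/sda of=/dev/null bs=1M count=1024 iflag=direct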

Jernej Jakob

Mar 22, 2023, 9:31:18 AM
to GnuBee
You can enable PUIS on the drives, so they will power up in standby
(spun down); the kernel will then spin them up one at a time during
boot (staggered spin-up). Some drives have jumpers for this, others
take an ATA command to enable PUIS. See "hdparm -s". This worked on
some WD and Seagate drives I enabled it on.
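
Roughly like this (sdX is a placeholder, and the hdparm man page flags
-s as dangerous on drives that don't support it, so check yours first):

    # Enable power-up in standby (PUIS); -s0 disables it again
    hdparm -s1 /dev/sdX
    # Check whether a drive is currently active or in standby
    hdparm -C /dev/sdX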

Then you mostly need to look at the drive power draw during operation
(writing/seeking), not so much spin-up, because only one drive will be
spinning up at a time.

Of course it's best to pick low-power drives; the CPU is not powerful
enough to come anywhere near the limits of even low-end drives today
(in sequential speed, that is; random I/O is a different story).

I would keep the OS on a separate drive; I have a 2.5" SSD for it (you
can plug a 2.5" SSD into a GB-PC2, just don't screw it down, maybe
stick it to the side with some tape). You could also boot off the SD
card or USB, but use an OS meant for flash drives so as not to wear out
and kill the card.
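
If you do go the SD or USB route, one common trick is to keep the
high-churn paths in RAM via /etc/fstab (the sizes here are just a
starting point):

    tmpfs  /var/log  tmpfs  defaults,noatime,size=64m   0  0
    tmpfs  /tmp      tmpfs  defaults,noatime,size=128m  0  0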

Matthias Urlichs

Mar 22, 2023, 9:50:29 AM
to gnu...@googlegroups.com
On 22.03.23 14:21, Jernej Jakob wrote:
> IIRC it's less than 50MB/s, even less if using a network protocol.
> I did some benchmarks years ago and it wasn't that great, but
> acceptable (IIRC 20-30MB/s SMB).

Yeah, I'm getting ~25 MB/s here – exporting the drives via NBD, because
presumably that has the least possible CPU load.

The same drive on USB-C via an IcyBox 4x enclosure gives me ~150 MB/s, and
that's while the host is reading the data from another drive, also via USB.

I don't know whether I shall be more annoyed at the fact that the world
has left the GnuBee in the dust, speed-wise, or that five years ago I
could run a full backup to the GnuBee in a day – while now I'd need a month.

Hans Henry von Tresckow

May 16, 2023, 4:15:35 AM
to GnuBee
One potential issue to be aware of is that for large RAID volumes, the 512 MB of memory on the PC2 is a serious limitation. I had two 3 TB and two 4 TB disks set up as two RAID1 arrays and ran into trouble when I had to rebuild one of them after a WD Red disk failed: I kept hitting the kernel OOM killer and had to restart. In the end I gave up and tried to rebuild the array in an external enclosure, but that ended up corrupting the whole array... I ended up putting the disks in a Raspberry Pi CM4 based setup instead (currently 1 GB of RAM, but I am hoping to grab a 4 or 8 GB CM4 once they are no longer unobtainium).
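
(For anyone who hits the same wall: throttling the resync is supposed to reduce the pressure. I never got to test whether it actually avoids the OOM, so treat this as a maybe:)

    # Throttle md resync, in KB/s per device
    echo 5000  > /proc/sys/dev/raid/speed_limit_min
    echo 20000 > /proc/sys/dev/raid/speed_limit_max
    # Watch the rebuild
    cat /proc/mdstat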

Jernej Jakob

May 21, 2023, 1:56:09 PM
to 'Matthias Urlichs' via GnuBee, Matthias Urlichs
On Wed, 22 Mar 2023 14:49:43 +0100
"'Matthias Urlichs' via GnuBee" <gnu...@googlegroups.com> wrote:
> I don't know whether I shall be more annoyed at the fact that the world
> has left the GnuBee in the dust, speed-wise, or that five years ago I
> could run a full backup to the GnuBee in a day – while now I'd need a month.

The CPU wasn't meant for this kind of workload even back in the day.
It was meant for small consumer network routers, APs, those kinds of
things. Trying to do network + PCIe + RAID + encryption + userspace all
at once is too much. I only have SMB and a borg backup server running
on it. The borg server process would OOM until I enabled a swap
partition on the root SSD. Now it runs acceptably for daily backups of
my machines, but they are scheduled one at a time.
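
The swap setup itself is just the usual (the partition name is a
placeholder):

    mkswap /dev/sda2
    swapon /dev/sda2
    # make it permanent
    echo '/dev/sda2 none swap sw 0 0' >> /etc/fstab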

Jernej Jakob

May 21, 2023, 2:08:43 PM
to 'Matthias Urlichs' via GnuBee, Matthias Urlichs
I forgot to say: I was looking at building a more powerful NAS, and to
use ZFS you would want ECC RAM, which none of the affordable SBCs on
the market today offer, at least none that I could find.
What I would like to see in a Gnubee v3: a quad-core CPU, at least 4 GB
of ECC RAM, and two or more independent GbE ports (maybe an SFP+ or
better slot for a 10 GbE DAC or fiber).
Rockchip and Allwinner ARM CPUs offer, I think, the best
price-to-performance ratio, and they can more or less be made to run
with fully free (blob-free) firmware (some interfaces won't work; I
haven't tried it, so I don't know exactly).
An example would be the Pine64 RockPro64 SBC, which has a PCIe x4 slot
that would hold a SATA HBA with 4, 8 or more ports. But it doesn't have
ECC, and securely mounting everything would take more DIY effort. If
you don't use ZFS, it is a more powerful alternative to the Gnubee even
today. Personally I want ZFS, so I need something with ECC (and low
power!)

Zenaan Harkness

May 21, 2023, 8:05:18 PM
to Brett Neumeier, Chih Yu Tseng, GnuBee
> I also found the manual for the Seagate IronWolf 125 SSD, which is
> available with up to 4 TB of storage. It says that the 4 TB drive uses 2800
> mW at 5 V, which I believe is 0.56 A, so six of them would be 3.36 A, which
> would be okay for a PC2 but not a PC1.

Please note, 1000mW = 1W, so 2800mW = 2.8W

and so 2.8W at 3.36A is 9.408A

The IronWolf drives are not low power drives.

Matthias Urlichs

May 21, 2023, 8:59:19 PM
to gnu...@googlegroups.com
On 22.05.23 02:05, Zenaan Harkness wrote:
> and so 2.8W at 3.36A is 9.408A

2.8W times six is 16.8W (assuming all drives are on max load, which on a
GnuBee is somewhat improbable), which of course translates to 3.36A at 5V.

You don't multiply watt with ampere (which appears to be how you get
those 9.4A), that doesn't make sense.

Casey Crockett

May 29, 2023, 12:27:56 AM
to GnuBee
Something I didn't think about beforehand was the 32-bit processor. I have a PC2 with six drives connected, split into two arrays of 18 and 24 TB. Everything seemed to be going swimmingly until I tried to mount either of the arrays and received a "file too big" error. My research pointed to the 32-bit kernel's addressing limitation as the issue, with 16 TB mentioned as the limit.
I will pull those drives off, save them for something else, and get some smaller drives that fit within that limit.
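
If anyone wants to check their own array against that limit before making a filesystem (the device name is a placeholder):

    # Array size in bytes; the 32-bit ceiling is 2^44 = 17,592,186,044,416
    blockdev --getsize64 /dev/md0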

Zenaan Harkness

May 29, 2023, 1:17:49 AM
to Matthias Urlichs, gnu...@googlegroups.com
On 5/22/23, 'Matthias Urlichs' via GnuBee <gnu...@googlegroups.com> wrote:
> On 22.05.23 02:05, Zenaan Harkness wrote:
>> and so 2.8W at 3.36A is 9.408A
>
> 2.8W times six is 16.8W (assuming all drives are on max load, which on a
> GnuBee is somewhat improbable), which of course translates to 3.36A at 5V.
>
> You don't multiply watt with ampere (which appears to be how you get
> those 9.4A), that doesn't make sense.

Sorry, you're right: P = V × I (power in watts = volts times amps)

Reminder to self: do not do math before coffee...

Matthias Urlichs

May 29, 2023, 3:26:10 AM
to gnu...@googlegroups.com
On 29.05.23 06:27, Casey Crockett wrote:
> My research of the error pointed to the 32 bit kernel address space
> limitation being the issue, and they mentioned 16TB as the limit there.

Well, almost. The problem isn't the address space, otherwise we
couldn't go beyond 4 GB. The limit is the unsigned 32-bit integers the
kernel uses to index pages in the page cache: 2³² pages times the
kernel's page size of 4 KiB is 2⁴⁴ bytes, i.e. 16 TiB (about 17.6 TB).

Incidentally your 18 TB array only holds 16.37 (binary) terabytes
("tebibytes"; funny how "only" and "terabytes" can even be in the same
sentence these days …), so it misses fitting under that 16 TiB ceiling
by a hair; if I were you I'd shrink it slightly rather than replace the
drives.

Miles Raymond

Jun 10, 2023, 1:35:47 AM
to GnuBee
I'd really like to see an optimized Gnubee v3 with:
- 8x 3.5" drives
- smaller overall PCB (more optimal/compact layout)
- holes in the PCB for airflow between SATA connectors
- no-screw assembly case that allows hot-swapping drives (no drive cages)
- 2.5 Gb or 10 Gb Ethernet
- USB-C power in (100 W PD supplies are everywhere now)

Jernej Jakob

Jun 10, 2023, 11:15:34 AM
to Miles Raymond, GnuBee
I've thought about the hardware design too. It depends on whether you
want it to remain fanless or not. For fanless operation you need a good
heatsink screwed to the drives, with the drives oriented so that
convective airflow can move vertically. The current Gnubee design uses
the aluminum chassis side pieces as heatsinks for the drives; you would
need to keep that kind of arrangement, with screws, to maintain good
thermal contact between drive and heatsink. Even then you'd need to
stay away from the high-power server-grade drives that absolutely do
need forced-air cooling (but you'd want to do that anyway for power
consumption's sake).

If using a fan is acceptable, you could use any off-the-shelf
hot-swappable drive cage, both to keep the cost down and to avoid
having to design and fabricate your own cage and backplane. Then you'd
only need to make the outer case, which could be built from aluminum
sheet and channel or from 3D-printed plastic parts.
Using an off-the-shelf drive cage also means the main Gnubee PCB could
be smaller, because it would only need one multi-lane SAS connector or
several single-lane SATA connectors. You could use an off-the-shelf
SATA HBA and an SBC with a PCIe slot, like the RockPro64.
I would not use a plastic drive cage: plastic is a thermal insulator,
which would make the drives harder to cool and require a higher fan
speed. You would also need to find a cage with good airflow, without
the excessively restrictive air openings that some cages (drive sleds)
have on the front.
The fan would need a speed controller that defaults to a medium speed
before the OS is loaded, after which a userspace program would monitor
all drive temperatures and regulate fan speed according to the hottest
drive; when idle it could slow down or even turn off, and spin up again
when needed. This could be an add-on board connected to the SBC over
GPIO. That same add-on board could also handle power supply control
(switching power off on SBC command). The USB-C PD board could be a
standalone extra, as not everyone would need or want one (they are
already available off the shelf).
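
A rough sketch of that control loop as a shell script (the hwmon path
and the smartctl attribute name vary per board and drive, so this is a
template, not a drop-in):

    #!/bin/sh
    while true; do
        hottest=0
        for d in /dev/sd?; do
            # Raw value of the temperature attribute, drive-dependent
            t=$(smartctl -A "$d" | awk '/Temperature_Celsius/ {print $10}')
            [ "${t:-0}" -gt "$hottest" ] && hottest=$t
        done
        # Map 30-50 C onto PWM 0-255, clamped at both ends
        pwm=$(( (hottest - 30) * 255 / 20 ))
        [ "$pwm" -lt 0 ] && pwm=0
        [ "$pwm" -gt 255 ] && pwm=255
        echo "$pwm" > /sys/class/hwmon/hwmon0/pwm1
        sleep 30
    done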
I guess you could make it a NAS built from off-the-shelf components,
just integrated into a neat enclosure. The Gnubee product would then
mostly be integrating those components and designing the parts and
models needed to fabricate the case. Compared to the Gnubee v2 there
are better hardware options available these days. Unless you really
wanted to put a lot of work into it and make a completely integrated
backplane, SATA controllers and SBC on one board like the current
Gnubee; but then you'd still have to design and make the drive cage out
of metal parts and build the entire enclosure around it.

Miles Raymond

Jun 11, 2023, 3:11:24 PM
to GnuBee
That seems excessive compared to what I had in mind: the current design with some slight alterations, not a full redesign.

For instance, improvements could be:
+ adding holes in the PCB to allow airflow up between the drives
+ PCB holes for a 100 mm fan mount on the bottom
+ cutout folds in the side aluminum to act both as guide rails and as airflow holes
+ USB-C power input instead of the 12 V 5.5/2.5 mm barrel
+ compacting the board (it doesn't need to stick out so far on both ends)
+ compact the board (no need to stick so far out on both ends)