Help with 200TB solution

338 views

Solid Artists

Jan 21, 2016, 3:40:16 AM1/21/16
to OpenStoragePod
Hello,

I am tasked with building around a 200TB NAS for use in a post production facility. 

We work with large files (100GB+) that are normally ProRes 422 HQ or ProRes 4444.

We would like to build a system that our main editing bays can work off directly, allowing incremental backups of all work, while keeping the files usable by other machines.

Only about 5 of the machines would need fast read/write speeds; the rest should be fine with fast read speeds. 3-5 of the computers would want to be connected over a 10Gb connection, either fibre or copper.

We currently do not have any rack setup at all, so when responding please be aware there is only a 1Gb infrastructure currently (a single patch panel and a fully managed Netgear switch).

The budget for the project is $50-80K.

Please help! 

-Andrew 

jason andrade

Jan 21, 2016, 6:59:01 AM1/21/16
to opensto...@googlegroups.com

G'day Andrew,

Would love to help but perhaps you can elaborate on exactly what you need help with?

- Are you trying to build and/or use the open storage pod architecture to meet your technical requirements?
- Have you looked at other solutions before going down this path (e.g. from name-brand vendors)?
- What do you define as 'fast read/write' speed, i.e. what does your application actually work best with? A firehose (as much as you can throw at it), or some optimal value above which you get no benefit?

Doing some math (based around your 200TB figure), you may be better served with a chassis that can hold ~90 drives (at 4TB each) to give you around ~270TB usable; I'm using a rough rule of thumb that you'll get 75% of the raw total as usable space. There are a few chassis out there now that can do this density. Whether that meets your performance figure is another matter.
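In rough Python terms (just restating the rule of thumb; the 4TB drive size and 75% usable fraction are the assumptions above, not measurements):

```python
# Rough capacity estimate: raw drive count -> approximate usable space,
# using the ~75%-usable-of-raw rule of thumb.

def usable_tb(num_drives, drive_tb=4, usable_fraction=0.75):
    """Approximate usable capacity in TB for a given drive count."""
    return num_drives * drive_tb * usable_fraction

print(usable_tb(90))   # 90 x 4TB drives -> 270.0 TB usable, above the 200TB target
```

Tweak `drive_tb` or the fraction to match whatever redundancy scheme you actually pick; raidz2/raidz3 overhead varies with vdev width.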

Of course, if you instead spread this across a Ceph cluster (i.e. multiple servers), you'd likely get much better overall performance.

I'm also not sure why you need to limit yourself to 10GbE anymore. Mellanox do really cost-effective 40GbE and 56GbE (if you buy one of their switches), and if I were working with 100GB files I'd prefer to do it at 5GByte/sec rather than 1GByte/sec... if you can read/write that fast, that is (but 90 or more drives should be able to give you that).
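To put numbers on that, here's a quick sketch of how long a 100GB file takes at various link speeds (assuming the link is the only bottleneck and ignoring protocol overhead):

```python
# Time to move a file over a network link, link-limited, no overhead.

def transfer_seconds(file_gb, link_gbit):
    """Seconds to move file_gb gigabytes over a link_gbit Gb/s link."""
    return (file_gb * 8) / link_gbit  # 8 bits per byte

for gbit in (1, 10, 40, 56):
    print(f"{gbit:>2} GbE: {transfer_seconds(100, gbit):6.1f} s")
# 1 GbE -> 800 s, 10 GbE -> 80 s, 40 GbE -> 20 s, 56 GbE -> ~14.3 s
```

In practice NFS/SMB overhead and disk throughput will shave something off these, but the relative gap between 1GbE and 40GbE is the point.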

You haven't talked about how exactly you plan to back up this data (when, where, and how).

You possibly need to think about data integrity too (so something with ZFS, if you don't go down the Ceph path?).

How long do you plan to keep this bit of kit around? Do you need to grow it?

What happens if you lose power to it and have to restore the whole thing from your backups? (It might take days... weeks?)

How long do you think it would take to fsck the whole filesystem?

Sorry if I haven't answered your initial question, but hopefully I've allowed you to think about the scope a bit better.


So, some quick and dirty example scenarios to think about:

- 1 x open storage pod (or vendor pod) server with all your disks in it, connected with one or more 10GbE or 40GbE interfaces (the monolithic server approach), with ZFS underneath and NFS on top (ugh)
- N x Ceph servers with disks spread around and some investment in networking it all together (the distributed approach), using CephFS, Ceph volumes, or object storage...

regards,

-jason
-----
M: +61 402 489 637 E: jason....@gmail.com
> --
> You received this message because you are subscribed to the Google Groups "OpenStoragePod" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to openstoragepo...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

Tim Lossen

Jan 21, 2016, 9:11:51 AM1/21/16
to opensto...@googlegroups.com
hi andrew,

out of curiosity, i ran the numbers to see if an all-flash setup built from off-the-shelf components would fit your budget:

https://docs.google.com/spreadsheets/d/1RRPFQSKyo9Tl6ffeWDgocNeM_nzy8TFCOtF_Ge0-wZw

yes it does! this doesn’t include the actual NAS server though. it would take up around 1/3 of a rack.

cheers,
tim

ps: while this is how i might approach your task, i haven't actually built such a system.

--
http://tim.lossen.de

Josh Fienstein

Jan 7, 2017, 5:51:54 PM1/7/17
to OpenStoragePod
Hey Andrew,

I've worked in a post-production DC. They used a frontend cache in addition to the bulk storage: because the media files are indeed large, editors interacted with them from the cache. The caches were large and connected over 40GbE ports. The company in question was in the process of moving the bulk of the storage to S3 services; I'm not sure whether they were using Amazon or building their own Ceph cluster, as Jason mentioned. A failure of a single large storage pod would cause too much downtime, so I wouldn't recommend that approach. The companies renting editing bays are paying a lot of money and will expect a high-quality service.

Thanks

Terry LoBianco

Jan 9, 2017, 2:57:12 PM1/9/17
to OpenStoragePod
Hi Andrew,
Backblaze B2 is a cloud storage service, similar to Amazon S3. However, Backblaze B2 is 1/6th the price. Storing 1TB of data in B2 only costs $5/month. Perhaps we can help?
-Terry

Rick Peralta

Sep 7, 2019, 8:23:52 AM9/7/19
to OpenStoragePod
 
Hi Andrew,

Whatever happened to your server project: 200 TB serving five workstations over 10 Gbps links?


The cost of SSDs is now under $0.10/GByte, so an all-flash build fits your $50-80K budget.
Since the OP is a few years old, whatever you built should be ebbing towards retirement by now.
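Back-of-envelope on that claim (the $0.10/GB figure is the assumption above; this covers raw drives only, not chassis, controllers, or networking):

```python
# Raw flash cost for the 200TB target at an assumed $/GB.

cost_per_gb = 0.10        # assumed SSD street price, $/GB
raw_tb = 200
raw_gb = raw_tb * 1000    # decimal TB -> GB

drive_cost = cost_per_gb * raw_gb
print(f"${drive_cost:,.0f}")   # -> $20,000 for raw drives alone
```

That leaves the bulk of a $50-80K budget for redundancy overhead, chassis, and switches.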

Regarding Backblaze and cloud: they are not set up for the sort of performance you are looking for.

Benjamin Lau

Sep 7, 2019, 3:57:58 PM9/7/19
to opensto...@googlegroups.com
45drives now sell such a beast.

