Greg,
For the server side it will take far less disk. You are basically
setting up a LAMP stack. If you are not going to be running MySQL on
the same box, I would use a VM. The POC box's MySQL had about 55,000
events and only used about 4 GB of data. I have never had a positive
experience with databases running on a VM, so I either put them on
dedicated hardware or have them share physical hardware with the
server.
For the sensor: 8 GB RAM, dual CPUs, and as much disk as you can throw
at it. SATA is fine.
If you are going to be seeing 800Mbps I would use 32GB RAM and 2x Intel
E5-2670. I've noticed over the years that a 1G interface usually will
saturate at about 600Mbps. Get Intel NICs (QP 1GigE) and at least one
Intel X520 for the fiber. With that much data hitting the sensor you
will want SAS drives, or at the very minimum near-line SAS.
My current setup is a 10G feed from the Gigamon; that tool port has 4x
1G taps connected to it. I currently use 8x 3TB near-line SAS drives
in a RAID5 with no hot spare to get the most disk space I can for
pcaps and session data, with 32GB RAM and 2 Intel E5-2670 CPUs. I can
handle 2 feeds with about 1 week of data retention.
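As a rough sanity check on that retention, using the "Mbps * 4.5 =
GB/day" rule of thumb quoted further down (the utilization figure here
is just an illustration, not a measurement from my feeds):

    # back-of-envelope retention check (illustrative numbers only)
    drives, size_tb = 8, 3
    usable_tb = (drives - 1) * size_tb    # RAID5, no hot spare -> ~21 TB usable
    avg_mbps = 800                        # assumed aggregate utilization
    gb_per_day = avg_mbps * 4.5           # rule of thumb from this thread
    print(usable_tb * 1000 / gb_per_day)  # ~5.8 days, i.e. about a week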
Suggested sensor setup:
RAM: 8 GB
CPU: dual Xeon 2.4GHz or better
NICs: Intel
Disk: 7200 RPM SATA drives, as many as you can get on board. RAID5 with
no hot spare, split into 2 virtual disks: 25 GB for the OS and the
remainder for /nsm (I run /nsm at 96% of capacity).
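If you would rather size /nsm up front for a target retention, the same
rule of thumb works in reverse; the 800 Mbps and 7-day figures below are
just example inputs, not recommendations:

    # rough /nsm sizing from the Mbps * 4.5 rule of thumb (example inputs)
    avg_mbps = 800                        # expected average link utilization
    target_days = 7                       # desired pcap/session retention
    print(avg_mbps * 4.5 * target_days)   # 25200 GB, i.e. ~25 TB of usable space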
Regards
Pete
On Thu, Jul 26, 2012 at 10:48 AM, Gregory Pendergast
<greg.pe...@gmail.com> wrote:
> On Wednesday, July 25, 2012 11:46:47 AM UTC-4, magickal1 wrote:
>> This is what I use to give a good "guestimate" of the disk usage. It
>> has worked well for me. (Thanks to RMB for this calculation)
>>
>> Drive storage can be calculated by the following:
>> "Avg Link Utilization in Mbps" * 4.5 == "Storage/Day in GBs" This is
>> valid on the Sensor side only! Database is really dependent on how
>> long you keep the data.
>
> Thanks Pete. Acknowledging that it depends on how long you retain data, I assume the per-day disk requirements for the server component (assuming I split those functions) would be substantially less. Am I right about that?
>
> Also, I've reviewed the hardware requirements page, but that's a little sparse. Do you have any suggestions for CPU and memory requirements on the server and/or CPU requirements on the sensor? (I'm assuming that the 1GB+ RAM for each interface applies more to the sensor than the server, so I'm taking that as the minimal sensor guideline.)
>
>
>> Using the GigaVue, I will assume that you are set up for aggregated
>> taps. You will want to keep an eye on the GigaVue for load and
>> dropped packets if you are going to use the 1G copper. Taking a 10G
>> feed and piping it down to 1G will place a load on the Gigamon
>> appliance. If you can, I would suggest that you utilize the fiber
>> tool ports to eliminate back pressure on the GigaVue.
>
> Right now we only have 1G tool ports on the GigaVue. I'm going to spec my sensor w/ a 1Gig & 10Gig interface card, then hope to get funding to add one or more 10G tool ports to the GigaVue as well. Thanks for the guidance though. I do need to keep a closer eye on the dropped packets.
>
> Thanks again,
> Greg
>