</sarcasm>
Hardware vendors and software vendors will continue to compete.
> Date: Fri, 24 Oct 2008 13:40:18 +0200
> From: tarry...@gmail.com
> To: cloud-c...@googlegroups.com
> Subject: Re: "Cloud Networking" defined in Arista Networks new CEO
> Jayshree Ullal's blog
>
> With Silicon Valley taking a much larger hit in funding and staff
> lay-offs, is it a good idea to do a start-up? We did see that Andy
> left Sun to do this. Although, from another perspective, this is also
> the moment to lay seeds for the new order. Nice buzzwords and shiny
> balls (http://www.aristanetworks.com/en/Solutions), but what else are
> they doing besides 10GbE and PHY?
>
> I think I need to dig in deep and see what they do.
>
> Tarry
>
> On Thu, Oct 23, 2008 at 5:52 PM, Pranta Das <pran...@yahoo.com> wrote:
>
>
> http://www.aristanetworks.com/en/JU_Cloud_Networking
>
> --
> Kind Regards,
>
> Tarry Singh
> ______________________________________________________________
> Founder, Avastu: Research-Analysis-Ideation
> "Do something with your ideas!"
> http://www.avastu.com
> Business Cell: +31630617633
> Private Cell: +31629159400
> LinkedIn: http://www.linkedin.com/in/tarrysingh
> Blogs: http://tarrysingh.blogspot.com
>
-----Original Message-----
From: cloud-c...@googlegroups.com [mailto:cloud-c...@googlegroups.com] On Behalf Of Chris Sears
Sent: Friday, October 24, 2008 1:29 PM
To: cloud-c...@googlegroups.com
Subject: Re: "Cloud Networking" defined in Arista Networks new CEO Jayshree Ullal's blog
I've heard anecdotal reports that the network can often be saturated on an EC2 instance, especially if you happen to land on a box with other heavy network users. Same with disk I/O... to the degree that some have found EBS is actually faster than local disk.
Bottlenecks are slippery little devils... what you say may be true, but I've yet to see any data that tells me it's the interconnect, and not some other chokepoint (e.g. hypervisor, TCP, SCSI, PCI, FSB, DDR, whatever...).
I think the promise of 10GE is a single, unified fabric that I could use for storage (FCoE or iSCSI) in addition to normal production TCP/IP uses. Cisco is moving in that direction with their Nexus line. Outside of cloud computing environments, I think anyone not stuck with an extensive FC fabric/infrastructure will be seriously considering 10GE as an alternative with some room for future growth.
100% agree. I believe that iSCSI was the primary motivation behind a lot of the TOE work a few years back....
And getting back to the EC2 example, cloud providers need a network infrastructure that can keep up with the advancements happening inside the servers... with hexa- and octa-core CPUs just around the corner and RAM constantly getting faster and cheaper, providers are looking at hosting many more VM instances per box. And with EBS / iSCSI / FCoE in the picture, the network traffic starts to add up pretty fast. We're going to need a very fat pipe going into each physical server to continue scaling for the next 5 years. 10GE looks like a good fit.
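The "traffic starts to add up" point is easy to make concrete with a back-of-the-envelope calculation. All the per-VM numbers below are made-up assumptions for illustration, not measurements; the point is only that aggregate demand scales linearly with VM density and quickly outgrows a 1GE uplink:

```python
# Back-of-the-envelope sketch of per-server bandwidth demand as VM
# density grows. All per-VM figures are illustrative assumptions.

def server_demand_mbps(vms_per_box, net_mbps_per_vm, storage_mbps_per_vm):
    """Aggregate NIC bandwidth a physical host would need, in Mbit/s."""
    return vms_per_box * (net_mbps_per_vm + storage_mbps_per_vm)

if __name__ == "__main__":
    for vms in (8, 16, 32):
        # Assume 100 Mbit/s of production traffic plus 150 Mbit/s of
        # EBS/iSCSI-style storage traffic per VM (hypothetical numbers).
        demand = server_demand_mbps(vms, net_mbps_per_vm=100,
                                    storage_mbps_per_vm=150)
        for link_name, link_mbps in (("1GE", 1_000), ("10GE", 10_000)):
            util = 100.0 * demand / link_mbps
            print(f"{vms:2d} VMs -> {demand:5d} Mbit/s "
                  f"({util:6.1f}% of {link_name})")
```

Under these assumptions a 32-VM host wants 8 Gbit/s of aggregate bandwidth, which is 8x oversubscribed on 1GE but fits comfortably inside a single 10GE uplink.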
I/O has not kept pace with CPU/disk/network capacity and remains several orders of magnitude behind. Fast multi-cores make going off-chip really expensive (i.e. slow). Spanning servers is even worse, and lots of large workloads trade off I/O for compute whenever possible.

Of course 10GE is inevitable, but it isn't a game changer. I've got 100Mb Ethernet all through my house, but YouTube is about the same speed as on my iPhone...

I think this new switch is trying to compete on features, not performance, which makes sense. It's not at all clear how those features will be available to users up the stack.

- Chris
It's been a while since I really looked into these things, but I seem to recall that one of the things holding back 10GE was actually filling it up. TCP itself becomes a bottleneck with any reasonably sized transfer. Lots of work was done on TCP offload engines (TOEs), fast restart, buffer sizes, etc., but I don't think they ever really solved the problem. I'd have to go look at current disk performance stats, but I doubt raw disks could even fill it up...
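The "filling it up" problem is largely the bandwidth-delay product: to keep a link busy, TCP needs a window of bandwidth times round-trip time in flight, and the default windows of the era were far too small for 10 Gbit/s. A minimal sketch (the RTT values are illustrative assumptions, not measured figures):

```python
# Bandwidth-delay product sketch: the TCP window needed to keep a link
# saturated is bandwidth * RTT. RTT values below are illustrative.

def bdp_bytes(bandwidth_bps, rtt_seconds):
    """TCP window (in bytes) required to fill the link at the given RTT."""
    return int(bandwidth_bps * rtt_seconds / 8)

if __name__ == "__main__":
    ten_ge = 10 * 10**9  # 10 Gbit/s
    for rtt_ms in (0.1, 1.0, 10.0):
        window = bdp_bytes(ten_ge, rtt_ms / 1000.0)
        print(f"RTT {rtt_ms:5.1f} ms -> window {window / 2**20:7.2f} MiB")
```

Even at a 1 ms datacenter RTT this works out to about 1.2 MiB of data in flight, well beyond the classic 64 KiB TCP window, which is why window scaling, large buffers, and offload engines all entered the picture.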