Reviews for Greenberg09VL2


Rodrigo Fonseca

unread,
Mar 18, 2013, 10:15:44 PM3/18/13
to csci2950u-...@googlegroups.com
Hi,

Please post your reviews to the VL2 paper here as a group reply to this message.

Thanks,
Rodrigo

Christopher Picardo

unread,
Mar 18, 2013, 10:56:36 PM3/18/13
to csci2950u-...@googlegroups.com

Paper Review - Christopher B. Picardo


Paper Title:

VL2: A Scalable and Flexible Data Center Network


Author(s):

Albert Greenberg, James R. Hamilton, Navendu Jain, Srikanth Kandula, Changhoon Kim, Parantap Lahiri, David A. Maltz, Parveen Patel, and Sudipta Sengupta.


Date:

August 17-21, 2009, ACM, New York.


Novel Idea:

A practical network architecture that scales to support huge data centers with uniform high capacity between servers (i.e., servers can be assigned to a service without having to consider network topology), performance isolation between services (i.e., the traffic of one service should not be affected by the traffic of any other service), and Ethernet layer-2 semantics (i.e., the servers in each service should experience the network as if it were an Ethernet Local Area Network).


Main Results:

-         Puts an end to the need for oversubscription in the data center network.

-         Benefits the cloud server programmer by providing a simple abstraction that all servers assigned to them are plugged into a single layer-2 switch with hot spot-free performance regardless of where the servers are actually connected in the topology.

-         VL2 enables agility: any service can be assigned to any server, while the network maintains uniform high bandwidth and performance isolation between services.

-         VL2 is a simple design that can be realized today.

-         The working prototype achieves 94% of optimal network capacity with a TCP fairness index of 0.995.

Impact:

No more limited server-to-server capacity, fragmentation of resources, poor reliability and utilization, network bottlenecks or frequent failures.

The VL2 testbed, which comprises 80 servers and 10 switches, provides an effective layer for a scalable data center network because it achieves 94% of optimal network capacity, a TCP fairness index of 0.995, graceful degradation under failures with fast reconvergence, and 50K lookups/s under 10 ms for fast resolution.
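The TCP fairness number cited here is presumably Jain's fairness index, the standard metric for how evenly throughput is shared. As a quick sanity check on what 0.995 means, it is easy to compute (the sample throughputs below are made up for illustration):

```python
def jain_fairness(throughputs):
    """Jain's fairness index: J(x) = (sum x_i)^2 / (n * sum x_i^2).

    Ranges from 1/n (one flow gets everything) to 1.0 (perfectly even)."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

# A perfectly even allocation scores 1.0; a fully skewed one scores 1/n.
assert jain_fairness([10.0, 10.0, 10.0, 10.0]) == 1.0
assert abs(jain_fairness([40.0, 0.0, 0.0, 0.0]) - 0.25) < 1e-12
```

An index of 0.995 across 75 senders therefore means the per-flow goodputs were very nearly identical.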


Prior work:

Data center network designs - Clos topology & off the shelf switches

Valiant load balancing – communication among parallel processors interconnected in a hypercube topology.

Scalable routing – locator/ID separation protocol uses map-and-encap as a key principle to achieve scalability and mobility in Internet routing.

Commercial networks – Data Center Ethernet by Cisco shares VL2’s goal of increasing network capacity through multipath.


Question/Criticism:

Having a system that is incrementally scalable seems like a good design choice; however, at some point special non-commodity hardware will be needed, at which point VL2 could fail and require readjustment or modification of the protocol and its topology. How much of VL2 would have to change to adapt to a very large and scalable system that requires new, non-standardized hardware?

Shu Zhang

unread,
Mar 18, 2013, 10:59:49 PM3/18/13
to csci2950u-...@googlegroups.com

This VL2 paper presents a new paradigm for data center networks. The VL2 network is not designed to solve one particular problem, but a series of problems that are related to each other. Meanwhile, the techniques VL2 uses are not closely related to one another; the new type of network aims to overhaul the data center network, which is quite an ambitious goal.


There are several drawbacks to the traditional data center topology and addressing mechanism. The major cause of these drawbacks is the hierarchical topology, which brings along problems like oversubscription and immobility. In detail, the three major problems are limited server-to-server capacity, fragmentation of resources, and poor reliability and utilization. The main reason for oversubscription is that the higher the level, the more expensive the routers/switches in that layer. To reduce cost, operators oversubscribe the upper layers, but this largely limits bandwidth utilization and brings in interference between different services.

The design philosophy of VL2 is to avoid hardware changes in existing data centers, implying that only software modifications are allowed. VL2 adds an additional end-system networking stack and a directory service. The topology VL2 advocates is a Clos topology, which consists of low-cost switches that can be scaled out and are resilient to failure. The traffic engineering method is primarily VLB, which builds on ECMP to randomly distribute flows and provide an even flow distribution. Another important idea of VL2 is to provide a service-independent view of addressing, which enables flexible assignment of services to any end server.
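The VLB-over-ECMP idea of randomly distributing flows can be sketched as hashing each flow's 5-tuple to pick an intermediate switch: every packet of one flow takes the same path (so TCP sees no reordering), while distinct flows spread roughly evenly. A minimal illustration; the switch names and the use of SHA-256 are my own, not VL2's actual implementation:

```python
import hashlib

# Hypothetical set of intermediate switches (names invented for illustration).
INTERMEDIATE_SWITCHES = ["int-1", "int-2", "int-3", "int-4"]

def pick_intermediate(src_ip, src_port, dst_ip, dst_port, proto="tcp"):
    """Hash the flow's 5-tuple so every packet of one flow is pinned to the
    same intermediate switch, while many flows spread roughly uniformly
    across all switches (the Valiant Load Balancing effect)."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return INTERMEDIATE_SWITCHES[digest % len(INTERMEDIATE_SWITCHES)]

# Same flow -> same switch, deterministically.
assert pick_intermediate("10.0.0.1", 5000, "10.0.0.2", 80) == \
       pick_intermediate("10.0.0.1", 5000, "10.0.0.2", 80)
```

With many concurrent flows, the hash spreads load across all intermediate switches without any centralized coordination, which is exactly the "even flow distribution" property described above.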


There are five implications of data center behavior that serve as the basic building blocks for VL2. First, most traffic is internal to the data center, which means data centers are computation-heavy. Second, network communication is the bottleneck of data center computation. Third, flow sizes are mainly distributed between 100 MB and 1 GB. Fourth, contrary to the fact that flow sizes are highly structured, the traffic patterns are unstable (what exactly does "traffic pattern" mean?). The last implication is that failures are quite frequent in data centers.


The core technique of VL2 is the virtual layer 2, which is presumably where the name VL2 comes from. It is "virtual" because servers in layer 2 have independent IP addresses, providing applications the illusion of a flat address space. The VL2 topology is less hierarchical: ToRs connect directly to aggregation switches (ASes), which sit in layer 3 and use a different address space, and the ASes connect to intermediate switches with a lot of redundancy. Compared to the conventional network architecture, the L2 switches and access routers disappear in the VL2 network, so the topology is flatter and more extensible. Addressing in VL2 is two-level, which requires an additional layer in the communication stack, and address resolution is based on table queries rather than traditional ARP, which reduces broadcast traffic. The workflow of addressing goes like this: an application sends a packet destined to another application's address; the VL2 agent intercepts the ARP request and converts it into a unicast query to the VL2 directory system for the corresponding LA. The LA mapping has two levels: first AA to ToR LA, then ToR LA to intermediate-switch LA. The encapsulation is stripped on the way from the intermediate router to the destination ToR. This might be where ECMP/VLB comes in: the hashing can map each flow to a different switch (how does the hashing work? How are different next-hop switches generated?), so as to achieve load balancing. To support the directory system, dedicated servers must be maintained. There are two types of directory servers, read-optimized lookup servers and write-optimized RSM servers, and caching is used to speed up queries. The inconsistency-detection mechanism is not very up-to-date: an inconsistency will be discovered at the destination ToR if it occurs within the 30 s RSM synchronization period.
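The AA-to-LA resolution and encapsulation workflow described above might look roughly like this (all addresses, the directory contents, and the function names are invented for illustration; real VL2 does the encapsulation at the IP layer):

```python
# Hypothetical directory state: application address (AA) of a server
# mapped to the locator address (LA) of the ToR switch it sits behind.
DIRECTORY = {
    "20.0.0.5": "10.1.1.1",
    "20.0.0.9": "10.1.2.1",
}

def vl2_agent_send(payload, src_aa, dst_aa, directory=DIRECTORY):
    """Sketch of the sender-side VL2 agent: intercept the outgoing packet,
    resolve the destination AA to its ToR's LA via a unicast directory
    lookup (instead of an ARP broadcast), and encapsulate the AA packet
    inside an LA-addressed outer header."""
    tor_la = directory[dst_aa]
    inner = {"src": src_aa, "dst": dst_aa, "payload": payload}
    return {"dst_la": tor_la, "inner": inner}

def tor_receive(outer):
    """Sketch of the destination ToR: strip the LA header (decapsulate)
    and deliver the inner AA-addressed packet to the server."""
    return outer["inner"]

packet = vl2_agent_send("hello", "20.0.0.9", "20.0.0.5")
assert packet["dst_la"] == "10.1.1.1"            # routed by locator address
assert tor_receive(packet)["dst"] == "20.0.0.5"  # server still sees the AA
```

The point of the split is that the fabric routes only on the small, topology-derived LA space, while the directory, not the network, absorbs churn in the AA-to-location mapping.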


The evaluation does experiments on three concerns of VL2. The first graph shows that VL2 distributes goodput evenly over time and that utilization and efficiency are quite high. Also, the performance of different flows is isolated: one flow will not interfere with another. Lastly, the VL2 directory system's performance is comparable to ARP's. At the very end of the paper, it says that although VLB has some limits in utilizing full bandwidth, experiments show that the simplicity and universality of VLB cost relatively little capacity compared to much more complex traffic engineering schemes.


Question:

VL2 has a two-level addressing space. This reminds me of a technique used in operating systems: two-level memory addressing. If we think of LA as physical memory, then AA can be mapped to virtual memory. But are the AAs of different applications isolated from each other? For example, for different applications and services, do their application addresses all start from 20.0.0.0? Or do they share a common flat addressing space?


           


DTrejo

unread,
Mar 18, 2013, 11:57:27 PM3/18/13
to csci2950u-...@googlegroups.com
Paper: VL2: A Scalable and Flexible Data Center Network
Novel Idea: Create a 100% device-independent VLAN that is agnostic to machine migrations and service migrations, while remaining fully backwards-compatible. The key idea is to separate the naming of machines from their IPs, and do the complicated routing on the datacenter machines rather than changing router software.
Results: Great — they empirically tested their ideas and were able to sustain 94% transfer rates from machine to machine in the datacenter.
Evidence: Tested the code that lives on host-machines in a real datacenter.
Prior work: Clos topology, VLB, Locator/ID Separation Protocol, DCE.
Reproducibility: Doable — unclear whether code is open-source. Avoids custom routers, making it easier to do.


Zhiyuan "Eric" Zhang

unread,
Mar 19, 2013, 12:09:02 AM3/19/13
to csci2950u-...@googlegroups.com

Paper Title

VL2: A Scalable and Flexible Data Center Network

 

Author(s)

Albert Greenberg, James R. Hamilton, Navendu Jain, Srikanth Kandula, Changhoon Kim, Parantap Lahiri, David A. Maltz, Parveen Patel, and Sudipta Sengupta

 

Date

SIGCOMM ’09 (Barcelona, Spain, Aug. 17–21, 2009)

 

Novel Idea

This paper presents VL2, a data center architecture that provides high capacity between servers, performance isolation between services, and L2 semantics. To achieve high capacity, VL2 uses a Clos network topology and a randomized routing algorithm to deal with volatility and balance internal traffic. Another idea is to separate names from locators: there are two kinds of addresses (LA and AA), and a directory system maps between them.

 

Main Result(s)

The authors evaluate VL2 using a cluster with 80 servers and 10 switches. There are two main evaluations: all-to-all data shuffle stress test and performance isolation between volatile services. The result shows that VL2 can achieve both uniform high capacity (94% efficiency) and performance isolation. They also test the performance of the directory system, and demonstrate that it is fast enough for both lookup and update.

 

Evidence

The authors show a very nice study of the data center traffic of a large cloud service provider. There are three key findings: most traffic is internal, the network is the bottleneck, and the flow sizes are structured. All three are very interesting observations, although I'm not sure they still hold for other types of data centers (other than cloud-service ones). It would be nice if they discussed what types of traffic their data center carries and whether the observations generalize.

 

Reproducibility

VL2 only requires low-cost commodity switches and the existing software and network stack. One of the design goals is ease of deployment without any changes to the hardware. However, it is difficult to reproduce their experiments at the scale of their network and traffic.

 

Question

Their results show that VLB performs about the same as other routing schemes in terms of link utilization. It seems like a randomized routing scheme should introduce overhead traffic, so I'm wondering why VLB's performance is not impacted.



Charles Zhang

unread,
Mar 18, 2013, 11:07:32 PM3/18/13
to Rodrigo Fonseca, csci2950u-...@googlegroups.com

Paper Title:

VL2: A Scalable and Flexible Data Center Network


Authors: Albert Greenberg, James R. Hamilton, Navendu Jain, Srikanth Kandula, Changhoon Kim, Parantap Lahiri, David A. Maltz, Parveen Patel, Sudipta Sengupta


Date: March 2011, Vol. 54 Communications of the ACM


Novel Idea:

This paper proposes a virtual layer-2 abstraction for data center networks that acts as if the network were one giant switch, helping to better balance network traffic in data centers while providing high fault tolerance. The goals are uniform high capacity, performance isolation, and layer-2 semantics, and they achieve these goals by using two addressing schemes (application-specific addresses and locator-specific addresses), allowing each switch to have complete knowledge of the switch-level topology.


Main Results: They report, as predicted in their goals section, uniform high capacity and performance isolation; their directory system provides high throughput and fast response times for lookups, handles high update rates, scales well, and offers high availability, failure resistance, and graceful degradation.


Impact:

A better network architecture for data centers, one that delivers all the properties described in the paper's goals.


Evidence:

To test capacity, they created an all-to-all data shuffle traffic matrix involving 75 servers that shuffled a total of 2.7 TB of data from memory to memory. They provide a strong argument for performance isolation, and finally evaluate directory-system performance through micro- and macro-benchmarks.


Reproducibility:

The locator/ID separation could be a bit difficult to implement, but the other findings are fairly easy to reproduce if you can get the same number of servers to run the tests they ran.


Shao, Tuo

unread,
Mar 18, 2013, 11:55:25 PM3/18/13
to csci2950u-...@googlegroups.com
Paper Title
VL2: A Scalable and Flexible Data Center Network

Authors
Albert Greenberg, James R. Hamilton, Navendu Jain, Srikanth Kandula, Changhoon Kim, Parantap Lahiri, David A. Maltz, Parveen Patel, and Sudipta Sengupta

Novel Idea
The paper presents a network solution for data centers that allows service instances to be placed anywhere in the network, balances the load on each link, and eliminates the bottleneck of Ethernet scaling overhead.

Main Results
The paper proposes a scale-out topology in which the large number of paths between any two aggregation switches helps reduce the effect of link failures on bandwidth. It describes the design and maintenance of a system that manages application-specific and location-specific addresses, which provides the agility to reassign servers to different locations and also replaces the functionality of ARP, reducing ARP broadcast overhead. Finally, on top of the VL2 directory system, the paper utilizes the ECMP protocol for load balancing and routing.

Impact
VL2 provides uniform high capacity, which is much better than current data centers, and it provides performance isolation for different services. The VL2 directory system is much more scalable than Ethernet and shows great performance.

Evidence
The paper first points out the limitations of current data center network designs. Based on measurements of flow sizes and flow counts and an analysis of traffic patterns, the paper describes a new design for the network topology and addressing system. By conducting experiments to evaluate this design, the paper demonstrates that it achieves its initial goals.

Prior Work
Notably, ECMP protocol is used for routing and load balancing in this paper.

Reproducibility
We can reproduce this paper's results, since it describes its design and maintenance in detail and uses commodity switches.

Question
I'm curious how many data centers, including those in enterprises and research institutes, are using VL2 or a fat-tree topology.





Place, Jordan

unread,
Mar 19, 2013, 12:35:05 AM3/19/13
to csci2950u-...@googlegroups.com
VL2: A Scalable and Flexible Data Center Network
Microsoft Research & Amazon Web Service
SIGCOMM '09
VL2 ("Virtual Layer 2") is a network architecture which looks to solve
the current data center challenges of oversubscription, performance
isolation, and dynamic addressing without introducing the need for
non-commodity hardware. To achieve these goals, the authors begin by
explaining why these problems exist in today's data centers.
Oversubscription occurs as the result of static routing which crowds
one path between two parts of the network despite there being an open
alternate path to use. Performance isolation issues occur when one
application crowds a link and increases the latency of an unrelated
flow that also happens to be using the same link. Dynamic addressing
is the ability to move IP addresses to different physical locations in
the network as is often desirable when hosting VMs in a data center.
Doing this quickly is difficult due to the distributed nature of the
routing protocols used today.
VL2 solves these issues to a large extent. To eliminate the need for oversubscription, VL2 proposes a Clos network topology in combination with path randomization (modified ECMP) to spread traffic over multiple paths; the authors justify the use of randomization experimentally. Performance isolation comes for free as a result of removing oversubscription and relying on TCP fairness. Dynamic addressing is accomplished by "separating names from locators": VL2 proposes a set of centralized servers that maintain a mapping from servers' IP names to their location-specific addresses. ARP requests for an IP are converted into requests to these servers, which respond with the corresponding physical location.
The performance of VL2 is impressive, as it achieves 94% of the total possible efficiency available in the test data center. VL2 seems like a great network architecture for data centers today, with very few drawbacks or weaknesses.



Zhou, Rui

unread,
Mar 18, 2013, 11:20:15 PM3/18/13
to csci2950u-...@googlegroups.com
Paper:
VL2: A Scalable and Flexible Data Center Network
Authors:
Albert Greenberg, James R. Hamilton, Navendu Jain, Srikanth Kandula, Changhoon Kim, Parantap Lahiri, David A. Maltz, Parveen Patel, and Sudipta Sengupta

Review:
Traditional tree-structured data center networks suffer from problems including limited server-to-server capacity, fragmentation of resources, and poor reliability and utilization. These problems stem largely from the fact that a tree-structured hierarchy requires all traffic to go through the high-level roots, and that the routes between two hosts are quite limited.
However, this paper proposes VL2, which has the following smart designs:
1. A "Clos" or fat-tree-like structure, which enables using many commodity switches instead of a few high-end, expensive ones, and provides a large selection of routes between any two hosts/racks. Packets get delivered randomly through one of the routes, so there is no heavy traffic concentration, since no single root routes everything. If one of the links fails, there are still lots of available routes for packets to take, and the overall capacity degrades gracefully by only a little.
2. Besides the structure, VL2 provides the virtualization of a dedicated LAN for each application group. This is achieved by the combined use of LA and AA addresses: AA is the address in the virtualized LAN, and LA is what the aggregation and upper-layer switches use to route packets. Packets routed by LA address are encapsulated packets with AA addresses inside. The mapping between addresses is done by a directory service, which is basically an ARP service for the VLAN.

The most impressive aspect of the paper is that it again proves the power of virtualization, with which we can apply proven ideas to solve new problems in an innovative way.

Criticism:
This virtualization adds more layers to an already complex stack; it provides a better data center solution at the cost of complexity, which may confuse people who are trying to look into a packet and debug. This encapsulate-more trend may not be a good one to push further.

Questions:
What if we hashed a packet to a bad link? Do we hash again with some salt? 
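One plausible answer to this question, assuming the sender learns which paths are down: keep the per-flow hash deterministic, but rehash with an incrementing salt until a healthy path comes up. A sketch of that idea (my own construction, not VL2's actual mechanism):

```python
import hashlib

def pick_path(flow_key, paths, is_healthy, max_tries=16):
    """Hash a flow onto one of `paths`; if the chosen path is known to be
    down, rehash the same flow key with an incrementing salt until a
    healthy path turns up. Deterministic per (flow, failure set), so a
    flow stays on one path as long as the failure set is stable."""
    for salt in range(max_tries):
        digest = hashlib.sha256(f"{flow_key}|{salt}".encode()).digest()
        candidate = paths[int.from_bytes(digest[:8], "big") % len(paths)]
        if is_healthy(candidate):
            return candidate
    raise RuntimeError("no healthy path found")

paths = ["p0", "p1", "p2", "p3"]
down = {"p2"}
chosen = pick_path("10.0.0.1:5000->10.0.0.2:80", paths, lambda p: p not in down)
assert chosen in paths and chosen not in down
```

The cost is that flows hashed to a failed path get rerouted (and possibly reordered) once when the failure is detected, which matches the graceful-degradation behavior the paper reports.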





Papagiannopoulou, Dimitra

unread,
Mar 18, 2013, 11:35:36 PM3/18/13
to Rodrigo Fonseca, csci2950u-...@googlegroups.com

Paper Title: VL2: A Scalable and Flexible Data Center Network

 

Authors: Albert Greenberg, James R. Hamilton, Navendu Jain, Srikanth Kandula, Changhoon Kim, Parantap Lahiri, David A. Maltz, Parveen Patel, and Sudipta Sengupta

 

Communications of the ACM, March'11

 

 

Novel Idea: In this paper, the authors present VL2, a scalable network architecture appropriate for very big data centers, which achieves agility by meeting three key objectives: uniform high capacity between servers, performance isolation between services and Ethernet layer-2 semantics. It uses Valiant Load Balancing (VLB) to spread traffic uniformly across all available network paths, without any centralized coordination, flat addressing to allow service instances to be placed anywhere in the network and end system-based address resolution to achieve high scalability without adding complexity.

 

Main Results: The main result of the paper is the scalable and reliable VL2 network architecture that uses already available low-cost and high speed commodity hardware and can be realized without changes to the control and data plane capabilities.  The authors evaluated VL2 and built a working prototype that achieves 94% efficiency on all-to-all data shuffle communications with a TCP fairness index of 0.995.

 

Impact: The impact of this work is significant, as it provides agility, a feature that current conventional data center architectures fail to provide. Most of the existing architectures are based on configurations based on trees and built from expensive hardware, that often lead to oversubscription. Moreover, they don't prevent effects from a traffic flood in one service to reach and affect other services and their routing design often creates further limitations. VL2 overcomes all these limitations and provides agility, the ability to assign any server to any service, which is one of the most important and desirable properties for a data center.

 

Evidence: The authors begin their analysis by discussing why existing architectures cannot serve large cloud-service data centers, as they suffer from problems such as limited server-to-server capacity, fragmentation of resources, and insufficient utilization and reliability. Then they present their findings from studying the production data centers of a large cloud service provider, to explain how they made their design choices for VL2. They find that most traffic is internal to the data center, that the network bottlenecks computation, and that the majority of flows in the data centers are small. They also studied the traffic patterns and reliability issues of these data centers. They describe the design principles that were based on these findings, and they present the VL2 addressing and routing process. Finally, they evaluate VL2 using a prototype that runs on 10 commodity switches and an 80-server testbed, built using the Clos network topology. Through this evaluation they show that VL2 provides 94% of optimal network capacity, a TCP fairness index of 0.995, graceful degradation under failures with fast reconvergence, and fast address resolution.

 

Prior Work: VL2 uses Valiant Load Balancing (VLB) to spread traffic uniformly across all available network paths, without any centralized coordination. VLB was introduced as a randomized scheme for communication among parallel processors that are interconnected in a hypercube topology [6].

 

Competitive Work: Other works have focused on building data center networks using commodity switches and a Clos topology [2,11,20,21], but they differ from this work in their traffic engineering strategy, their control planes, and their compatibility with existing switches. In [1], [12], [13], the servers are used for switching data packets; here, they are used only to control the way traffic is routed. The approach followed in VL2 is also similar to that of the Locator/ID Separation Protocol [9], but it is targeted at data centers and implemented on end hosts.

 

Criticism: The contribution of this paper is very important, as it provides agility, the ability to assign any server, anywhere in the data center, to any service, a feature that is not provided by existing data center architectures. The design of VL2 manages to replace today's expensive switches with low-cost switches and achieves both uniform high capacity and performance isolation. It also provides layer-2 semantics, thus solving the problem of server-capacity fragmentation that exists in other designs. After creating an all-to-all data shuffle traffic matrix involving 75 servers, the authors found that VL2 achieves an aggregate goodput more than 10 times what current data center designs can achieve with the same resources. The goodput efficiency of VL2 was found to be 94%, which is very impressive. The fairness index of 0.995 shows that VL2 can achieve uniform high bandwidth across all the servers of the data center. Finally, a cost comparison of VL2 against a conventional design showed that building a conventional network with no oversubscription would cost 14 times as much as building a VL2 network with no oversubscription. All the aforementioned results are very strong and make the contribution of this work significant.

 

Reproducibility: The results are reproducible.

 

 

 

 



kmdent

unread,
Mar 18, 2013, 10:38:20 PM3/18/13
to csci2950u-...@googlegroups.com

VL2: A Scalable and Flexible Data Center Network, March 2011

By A. Greenberg, J. Hamilton, N. Jain, S. Kandula, C. Kim, P. Lahiri, D. Maltz, P. Patel, S.Sengupta


Novel Idea: VL2 is a data center network that gives each service the illusion that all of the servers on the network belong to it. Services should experience congestion only if there is no available capacity on their servers, and services shouldn't affect other services. The network also shouldn't be difficult to configure, as the current VLAN system of assigning services to servers is.


Main Results: VL2 uses VLB for load balancing, randomly distributing flows across servers. VL2 uses n + m redundancy, which degrades performance over multiple failures instead of shutting down completely. The network also separates names from IP addresses so that it can quickly grow or shrink.


Evidence: They use 80 servers and 10 commodity switches in a Clos network topology. VL2 achieved 94% of optimal network capacity and 0.995 on TCP fairness. The AA-to-LA resolution system did 50K lookups in under 10 ms. There was also graceful degradation under multiple failures, coupled with fast reconvergence. Their tests included an all-to-all data shuffle where each server had to transfer 500 MB to every other server, a test pairing a steady workload with a volatile one, and another that exercised address resolution.


Competitive Work: Fat Trees


Criticism: They say that VLB never really causes congestion by assigning a large flow to a path that already has a large flow, because the flows never really get that big. In larger systems with a lot of large file transfers, this could cause a major problem. It might be a good idea to keep track of where the large flows are assigned and avoid assigning another large flow there; that would certainly improve throughput over the ECN method that currently controls congestion.


-- 
kmdent
