Reviews for VeriFlow


Rodrigo Fonseca

Apr 15, 2013, 11:00:25 PM
to csci2950u-...@googlegroups.com
Hi,

Please post your reviews of VeriFlow as a group reply to this message.
No review is needed for NDB, but you can add it to the message if you already did it.

Thanks, and see you tomorrow (I will be there :)

Rodrigo

Shu Zhang

Apr 15, 2013, 11:10:12 PM
to csci2950u-...@googlegroups.com

1. Paper Title:

Where is the Debugger for my Software-Defined Network?


2. Date:

HotSDN 2012


3. Novel Idea:

The paper observes that there is still no gdb-like debugger for SDN applications, so it is worth developing one for SDN programmers. The debugger's behavior is inspired by gdb, primarily the ideas of breakpoints and backtraces.


4. Main Results:

The authors developed a prototype SDN debugger (ndb) that initially supports breakpoints and backtraces. While developing a load balancer, they encountered three bugs (a missing flow entry, a server location bug, and a connection failure), and the backtrace feature proved effective in diagnosing all three.


5. Impact

The emergence of ndb could affect the future design of SDN applications and protocols. The paper includes a wishlist advocating that future versions of OpenFlow add features to support the debugger, such as atomic flow table updates, layer-2 support, and additional forwarding actions such as writing arbitrary metadata into packet headers.


6. Evidence

The main idea behind backtracing any packet is to have switches send “postcards” to the controller. The “stamp” model, in which path state accumulates inside the packet itself, was rejected because commodity switches do not support adding this state to a packet. To implement postcards, the proxy intercepts flow entry modification messages and appends an additional “send postcard” action. To group postcards belonging to the same packet, the collector maintains a path table whose keys are header fields and whose values are lists of collected postcards; the backtrace routes can then be reconstructed using topology information.
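To make the mechanism concrete, the collector described above might be sketched roughly as follows (a toy Python illustration, not the authors' code; the class name, the postcard fields, and the topology encoding are all my assumptions):

```python
from collections import defaultdict

class PostcardCollector:
    """Toy sketch of ndb's collector: postcards are
    (switch_id, flow_table_version, out_port) triples, grouped
    in a path table keyed by the packet's header fields."""

    def __init__(self, topology):
        # topology: (switch_id, out_port) -> next-hop switch_id
        self.topology = topology
        self.path_table = defaultdict(list)

    def receive_postcard(self, header, switch_id, version, out_port):
        self.path_table[header].append((switch_id, version, out_port))

    def backtrace(self, header):
        """Order the collected postcards into a forwarding path."""
        cards = {sw: (ver, port) for sw, ver, port in self.path_table[header]}
        # The first hop is the switch that no other postcard forwards into.
        successors = {self.topology[(sw, port)]
                      for sw, (_, port) in cards.items()
                      if (sw, port) in self.topology}
        current = next(sw for sw in cards if sw not in successors)
        path = []
        while current in cards:
            ver, port = cards[current]
            path.append((current, ver, port))
            current = self.topology.get((current, port))
        return path
```

Note that postcards can arrive out of order (as they do in the real system), which is why the path is reconstructed from topology information rather than arrival order.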

But problems with ambiguity appear, in two forms: flow table ambiguity and packet ambiguity. For flow table ambiguity, the debugger may have to attach a version number to the entire flow table. As for packet ambiguity, where packets with identical headers could follow conflicting routes, the paper argues we need not worry much about it.


7. Related Work

If SDN becomes mainstream in the networking community, more and more papers on debugging SDN applications will emerge. An earlier version of VeriFlow, another system for verifying SDN networks, also appeared at HotSDN 2012. VeriFlow takes effect before errors happen, whereas the debugger does its work after a bug is discovered; both methods are dynamic. There are also publications on statically checking SDNs, such as NICE and Anteater.



Criticism / Question

I understand this is a short paper for HotSDN, but the results are not convincing because more detailed experiments are absent. One of the few data points in the paper suggests that in a 5-hop network, postcards increase traffic by 31%. That number is not negligible, so methods should be developed to limit collection to the packets we are interested in, reducing the extra burden on the whole network.

ndb monitors and records the history (and future) of each packet, which might not be necessary. The idea of equivalence classes (ECs) in VeriFlow could be a good way to reduce the traffic and resource consumption of maintaining information about all packets.


Zhiyuan "Eric" Zhang

Apr 15, 2013, 11:43:20 PM
to csci2950u-...@googlegroups.com
Paper Title
VeriFlow: Verifying Network-Wide Invariants in Real Time

Authors
Ahmed Khurshid, Xuan Zou, Wenxuan Zhou, Matthew Caesar, P. Brighten Godfrey

Date
NSDI '13

Novel Idea
This paper presents VeriFlow, a network debugging tool that can verify network-wide invariants in real time. VeriFlow runs as a layer between an SDN controller and the network devices, so each time an OpenFlow rule is inserted or modified in one of the switches, VeriFlow can check the invariants dynamically.

Main Results
The most challenging goal is to perform the checks in real time, which requires VeriFlow to verify the invariant at very high speed. The results show that this goal has been achieved. The problem is solved in three aspects:
1. Monitor all network updates as they are generated
2. Confine verification to only the parts of the network influenced by a new update
3. Use a custom (and faster) algorithm to check invariants

Impact
According to the authors, VeriFlow is the first tool that can verify network-wide invariants in real time.

Evidence
The authors performed two experiments to evaluate the performance of their VeriFlow design and prototype. The first shows that VeriFlow performs its operations fast enough to run in real time; the second evaluates the effect of VeriFlow's operations on TCP connection setup latency and network throughput.

Prior Work
Techniques for checking network-wide invariants existed before VeriFlow, although most of them operate on timescales of seconds to hours rather than in real time. There are two common approaches: analyzing configuration files and analyzing static snapshots of the network's data-plane state.

Reproducibility
Their prototype is implemented with the NOX controller and driven by a Mininet network. Although it requires a fair amount of work, reproducing their results should not be very hard.

Christopher Picardo

Apr 16, 2013, 1:03:23 AM
to csci2950u-...@googlegroups.com

Paper Title: VeriFlow – Verifying Network-Wide Invariants in Real Time

Authors: Ahmed Khurshid, Xuan Zou, Wenxuan Zhou, Matthew Caesar, P. Brighten Godfrey.

Date: In 10th USENIX Symposium on Networked Systems Design and Implementation (NSDI), April 2013

Novel Idea: To demonstrate that the goal of real-time verification of network-wide invariants is achievable.

Specifically, “the paper presents Veriflow, a network debugging tool to find faulty rules issued by SDN applications, and optionally prevent them from reaching the network and causing anomalous network behavior”.

Main Results: VeriFlow verifies network-wide invariants within hundreds of microseconds as new rules are introduced into the network (i.e., VeriFlow is fast enough to compute all the conflicting rules within hundreds of microseconds for 99% of the updates).

Of importance, the verification performed by Veriflow after receiving each flow rule from the controller inflates the end-to-end connection setup latency to some extent.

As we increase the number of OpenFlow packet header fields, the overhead of Veriflow increases gradually but remains low enough to ensure real-time response.

Impact: Packet forwarding is a complex process. It requires a big effort to achieve network correctness, security, and fault tolerance. Unfortunately, errors do occur in the form of loops, suboptimal routing, black holes, and access control violations.

Software defined networking makes development easier via centralized network programmability, but software complexity tends to be high.

Also, running multiple SDN applications simultaneously on the same physical network causes coherence/reliability problems.

Furthermore, applying pre-deployment static software checks results in offline checks that only find bugs after they happen.

Hence, VeriFlow addresses these situations by using an SDN component to obtain a picture of the network, then applying incremental search algorithms to find violations, aided by OpenFlow and custom IP forwarding rules. All these steps happen within a real-time response window. VeriFlow aims to identify, in time:

- Availability of a path to the destination
- Absence of forwarding loops
- Isolation between virtual networks
- Enforcement of access control policies

It is infeasible to check the entire network state every time a new flow rule is inserted; doing so is wasteful and fails to provide a real-time response. Instead, the authors exploit the fact that each forwarding rule affects only a small portion of all possible packets, and slice the network into equivalence classes.
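The slicing idea can be made concrete with a small sketch (my own one-dimensional simplification; VeriFlow's real data structure is a multi-dimensional trie over several packet header fields):

```python
def equivalence_classes(rules):
    """Slice a one-dimensional header space into equivalence classes.

    Each rule is an inclusive (lo, hi) range of header values. Every
    returned interval is matched by the same set of rules throughout,
    so all packets inside one interval are treated identically and
    only one representative per class needs to be verified."""
    # Boundary points are the only places the matching rule set can change.
    points = set()
    for lo, hi in rules:
        points.add(lo)
        points.add(hi + 1)
    points = sorted(points)
    classes = []
    for lo, hi_excl in zip(points, points[1:]):
        matching = frozenset(i for i, (rlo, rhi) in enumerate(rules)
                             if rlo <= lo and hi_excl - 1 <= rhi)
        if matching:
            classes.append(((lo, hi_excl - 1), matching))
    return classes
```

For example, two overlapping rules covering 0-7 and 4-11 yield three classes: values matched only by the first rule, by both, and only by the second.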

Criticism: VeriFlow has difficulty verifying invariants in real time when large swaths of the network's forwarding behavior are altered in one operation, and when there is a link failure.

Question: In Figure 3d, what causes that big jump of the CDF right after 0.15 ms (i.e.: Why do we have a positive step function as CDF increases?)

Future work: In a multi-controller scenario, attaining a global view in real time could be very complex. The paper leaves this for future study.

Jeff Rasley

Apr 16, 2013, 8:09:42 AM
to csci2950u-...@googlegroups.com
Title: Where is the debugger for my software-defined network?

Authors: Handigol, Nikhil and Heller, Brandon and Jeyakumar, Vimalkumar and Mazières, David and McKeown, Nick (Stanford)

Context: HotSDN '12

Novel: The authors have created a tool called ndb, which is the gdb equivalent for SDNs. The user of the tool is able to set breakpoints and is then able to get backtraces of how a packet got to this breakpoint.

Main Results: As a packet goes through the network "postcards" are generated at each switch that contain detailed information about the state at that router when the packet was received. Then once the packet hits the breakpoint switch it reports all the associated postcards to the user.

Reproducibility: Nikhil stated in his video that the code for ndb will be available "soon", which will make the reproducibility quite easy.

Side Note: Best Student Presentation Award

Questions: Generating postcards for every packet in the network seems expensive; there was a question at the end of the talk about this. Nikhil stated you could potentially generate postcards only for packets that might match a breakpoint, but this also seems expensive. It also seems hard to determine which packets may or may not match breakpoints farther along in the network.


------------

Title: VeriFlow: Verifying Network-Wide Invariants in Real Time

Authors: Ahmed Khurshid, Xuan Zou, Wenxuan Zhou, Matthew Caesar, and P. Brighten Godfrey (UIUC)

Context: NSDI '13

Novel: Real-time checking of the data plane for network invariants; these techniques were previously only available offline.

Main Results: The authors discuss the design and implementation of a runtime system for checking network invariants by monitoring the data plane of the network. They do this by intercepting OpenFlow requests and building a trie data structure representing equivalence classes (ECs), in order to reduce the number of areas that must be verified. This way, when they need to check whether a new rule satisfies their invariants, they only have to check the ECs affected by the update and can skip the ECs that are not affected.
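The trie-based lookup might be sketched like this (a toy one-field binary trie of my own, not VeriFlow's multi-dimensional implementation): rules overlapping a new prefix are exactly those on the path down to the prefix's node (shorter, enclosing prefixes) plus everything in the subtree below it (longer, enclosed prefixes).

```python
class PrefixTrie:
    """Toy binary trie over one header field, illustrating how
    rules overlapping a new prefix rule can be found quickly."""

    def __init__(self):
        self.children = {}   # bit '0' or '1' -> child PrefixTrie
        self.rules = []      # rules whose prefix ends at this node

    def insert(self, prefix, rule):
        node = self
        for bit in prefix:
            node = node.children.setdefault(bit, PrefixTrie())
        node.rules.append(rule)

    def overlapping(self, prefix):
        found, node = [], self
        # Walk down: collect shorter prefixes that enclose `prefix`.
        for bit in prefix:
            found.extend(node.rules)
            if bit not in node.children:
                return found
            node = node.children[bit]
        # Collect the whole subtree: longer prefixes enclosed by `prefix`.
        stack = [node]
        while stack:
            n = stack.pop()
            found.extend(n.rules)
            stack.extend(n.children.values())
        return found
```

A lookup therefore touches only one root-to-node path and one subtree, which is what keeps the affected-EC computation fast.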

Evidence: The authors built this system in NOX, evaluated its performance, and also emulated it in Mininet using Route Views traces.

Prior Work: A HotSDN paper about an earlier version of VeriFlow and a few other papers from some of the same authors on offline verification methods.

Limitations: A question at the end of the talk asks about memory usage; Ahmed says it took 5-6 GB of memory to store the trie data structure, so this method uses a lot of memory. Also, invariants involving rules that rewrite packets are not currently supported; the authors leave that to future work.



Shao, Tuo

Apr 16, 2013, 1:20:57 AM
to csci2950u-...@googlegroups.com
Paper Title
VeriFlow: Verifying Network-Wide Invariants in Real Time

Authors
Ahmed Khurshid, Xuan Zou, Wenxuan Zhou, Matthew Caesar, P. Brighten Godfrey

Novel Idea
This paper provides a tool that, given update operations, verifies their effects on the network and checks for violations of current network-wide invariants.

Main Results
The paper presents the design and implementation of VeriFlow, with the goal of real-time invariant verification. It also evaluates the processing time and the effect on network performance.

Impact
I think this is a very useful tool for examining a network update before it gets deployed. It prevents SDN developers from mistakenly deploying network updates that change the network invariants. Otherwise, developers would need a good knowledge of the current network state, including each switch's configuration and other information, which runs against the goal of SDN.

Evidence
The paper first introduces the problem with some real-world concerns. It then describes several approaches and previous systems aiming to solve it; however, these approaches are all offline and find bugs after they happen. The paper then gives its own design, optimized by slicing the network into ECs according to the rules and checking invariants on that basis. Finally, the paper describes the implementation and evaluation of this design.

Competitive Work
FlowChecker, Anteater, Header Space Analysis: these are offline data-plane analysis systems.
The Hassel tools and a BDD-based analysis tool: VeriFlow outperforms these two in processing time (reachability check).

Criticism and Question
The paper mentions that one way to implement VeriFlow is to build a proxy process that is independent of the controller, but it doesn't discuss this in the rest of the paper. I am not sure this could be done, because I think VeriFlow needs the controller's help to learn the network state for tasks like generating forwarding graphs.

Inspiration on future work
We are implementing a rate-limiting service on SDN, which requires parsing different policies or rules and finding overlapping ones. I think the trie used in this paper could be adopted in our project.




--
You received this message because you are subscribed to the Google Groups "CSCI2950-u Spring 13 - Brown" group.

Papagiannopoulou, Dimitra

Apr 16, 2013, 3:17:14 AM
to Rodrigo Fonseca, csci2950u-...@googlegroups.com

Title: VeriFlow: Verifying Network-Wide Invariants in Real Time

Authors: Ahmed Khurshid, Xuan Zou, Wenxuan Zhou, Matthew Caesar, P. Brighten Godfrey

Novel Idea: This paper proposes VeriFlow, a network debugging tool that dynamically checks for network-wide invariant violations at low latency, so that network performance is not significantly affected. VeriFlow performs real-time verification in the context of SDNs, aiming to detect and potentially prevent bugs by blocking changes to forwarding behavior that would violate important invariants.

 

Main Result: The authors found that VeriFlow can verify network wide invariants within hundreds of microseconds per rule insertion or deletion. They also found that it has a small impact on network performance and increases TCP connection setup latency by 15.5% on average.

 

Evidence: The authors used a stream of updates from a simulated IP network, constructed with Rocketfuel topology data and real BGP traces, to evaluate VeriFlow's performance. Its overhead on the NOX controller was also evaluated, using Mininet to emulate an OpenFlow network. The total verification time of VeriFlow remained below 1 ms for 97.8% of the updates verified; the mean verification time was 0.38 ms, and the query phase took 0.01 ms on average. After evaluating the effect of VeriFlow's operations on TCP connection setup latency and network throughput, the authors reported the number of TCP connections successfully completed per second for different workloads, with and without VeriFlow. In all cases, VeriFlow caused negligible overhead on TCP connection setup throughput (the largest reduction observed was only 0.74%).

 

Prior Work: VeriFlow leverages Software-Defined Networking (SDN) to create a picture of the evolving network, but it does not rely on SDN alone: it also uses novel algorithms to search for violations of key network invariants in order to ensure real-time response. The authors built two versions of VeriFlow: one is a proxy process [25] that sits between the controller and the network and is independent of the particular controller; the other is integrated with the NOX OpenFlow controller [14] for improved performance.

 

Competitive Work: The paper mentions plenty of related and competitive work on debugging networks and SDNs. Some of it focuses on detecting network anomalies [10, 19]; other work, such as NICE [12], focuses on checking OpenFlow applications, though NICE is not designed for checking network properties in real time. Others aim to ensure data-plane consistency [20, 24] or to allow multiple applications to run side by side without conflicts [22, 23, 25]. FlowVisor [25] in particular differs from VeriFlow in that it neither verifies the rules that applications send to the switches nor checks for violations of key network invariants. Among other related works, Anteater [19] uses data-plane network information and checks for violations of key network invariants, while ConfigChecker [10] and FlowChecker [9] also check network invariants, using BDDs to model the network state and running queries in CTL, whereas VeriFlow uses graph search. One important characteristic of VeriFlow compared to competitive work is that it is capable of preventing problems from reaching the forwarding plane, which many others cannot achieve.

 

Reproducibility: The results of this paper are reproducible.

 

Criticism: The results of the paper are good. The authors paid particular attention to distinguishing VeriFlow from existing work. One difference that makes the results particularly interesting is that VeriFlow is the first tool that can dynamically verify network-wide invariants in an evolving network in real time. Furthermore, it is also capable of preventing faulty rules issued by SDN applications from reaching and affecting the network. These two characteristics make VeriFlow a very interesting approach.

 

 



Charles Zhang

Apr 16, 2013, 7:38:51 AM
to csci2950u-...@googlegroups.com

Paper Title: VeriFlow: Verifying Network-Wide Invariants in Real Time

Authors: Ahmed Khurshid, Xuan Zou, Wenxuan Zhou, Matthew Caesar, P. Brighten Godfrey

Date: NSDI '13 (April 2013)

Novel idea:

This paper proposes VeriFlow, a layer between a software-defined networking controller and the network devices that checks for network-wide invariant violations dynamically as each rule is inserted, modified, or deleted. Their results show that rigorous checking can be performed within hundreds of microseconds per rule insertion or deletion.

Main results:
This paper studies the question of whether it is possible to check network-wide correctness in real time as the network evolves. Through VeriFlow, the authors demonstrate that the goal of real-time verification of network-wide invariants is achievable. The design of VeriFlow has four parts. First, slice the network into equivalence classes, using a multi-dimensional prefix tree (trie) to quickly store new network rules, find overlapping rules, and compute the affected equivalence classes. Second, for each of those ECs, generate a forwarding graph that serves as a model of the network's behavior. Third, VeriFlow maintains a list of invariants to be checked, such as basic reachability, loop-freeness, and consistency. Lastly, VeriFlow achieves real-time response by confining its verification activities to the parts of the network affected when a forwarding rule changes. Their experiments show that verification time is roughly linear in the number of ECs affected.
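The per-EC invariant checks described above can be sketched in a few lines (an illustrative toy, not VeriFlow's actual algorithm; the graph encoding as a next-hop map is my assumption):

```python
def check_invariants(forwarding_graph, destination):
    """Check loop-freedom and reachability on one EC's forwarding graph.

    forwarding_graph maps each switch to its next hop for this
    equivalence class; a missing or None next hop means the packet
    is dropped (a black hole)."""
    violations = []
    for start in forwarding_graph:
        seen, node = set(), start
        while node is not None and node != destination:
            if node in seen:
                violations.append(("loop", start))
                break
            seen.add(node)
            node = forwarding_graph.get(node)
        else:
            # Loop exited without `break`: either reached the
            # destination (fine) or fell off the graph (black hole).
            if node is None:
                violations.append(("black_hole", start))
    return violations
```

Because each rule update only touches a handful of ECs, only a handful of such small graphs needs to be re-checked per update, which is the source of the sub-millisecond verification times.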

They built two versions of VeriFlow: one is a proxy process that sits between the controller and the network; the other is integrated into the controller, and is the version they later evaluated. Because of the way certain match and packet header fields are handled in the OpenFlow specs, they used an optimization in the verification process: as the traversal of the trie goes deeper, finer rules at other devices may be encountered, which generate new packet sets with finer granularity.

Lastly, their implementation exposes API calls for writing general queries.

Impact:
This work is the first tool that can verify network wide invariants in a live network in real time. 

Evidence:
They ran tests to evaluate VeriFlow's per-update processing time and its effects on network performance.

Prior work:
Related work exists on finding bugs in OpenFlow applications, ensuring data-plane consistency, and checking network invariants.





Paper Title: Where is the Debugger for my Software-Defined Network?

Authors: Nikhil Handigol, Brandon Heller, Vimalkumar Jeyakumar, David Mazieres, Nick McKeown

Date: HotSDN’12, August 13 2012, Helsinki, Finland

Novel Idea:
This paper introduced ndb, a prototype network debugger for SDNs which implements two primitives, one being breakpoints and another packet backtraces.

Main Results: They designed the architecture of ndb, in which a proxy modifies control messages to tell switches to create postcards, and a collector stores the postcards. They also gave a solution for resolving ambiguity of flow table entries, made three suggestions for future work, and answered some interesting frequently asked questions.

Impact:
With debuggable software-defined networks, engineers could more easily and quickly write programs for SDNs, with a more thorough understanding of what's going on in the network.

Prior work: gdb

Question: I don’t quite get how the breakpoint works. What exactly stops the flow of execution when a breakpoint condition is met?

Ideas for future work:
Cloud software could be much easier to debug with a utility like this.



kmdent

Apr 15, 2013, 11:51:48 PM
to csci2950u-...@googlegroups.com

Where is the Debugger for my Software-Defined Network?

by Nikhil Handigol, Brandon Heller, Vimalkumar Jeyakumar, David Mazières, and Nick McKeown


Novel Idea: A debugger for networks that is inspired by gdb. The debugger provides several functionalities: breakpoint, watch, backtrace, step, and continue. This paper only goes over the implementation of breakpoint and backtrace.


Main Results: When compared against common bugs faced by SDN developers, ndb would help significantly with correctness issues while not doing much for performance bugs. It does well in helping to identify race conditions, logic errors, and errors in switch implementations. ndb uses a postcard model in which each switch sends a “postcard” containing the switch ID, a version number, and an output port. These values are sent to the collector, which hashes them into the path table. After the maximum time it takes to traverse the network elapses, the postcard is ejected from the table.


Evidence: There wasn’t much evidence other than a couple of bugs mentioned in the implementation of a load balancer that could be solved by ndb.


Impact: I believe this is the start of something very good for the networking industry. As systems get more complex, we need better and better ways of debugging them with ease.


Reproducibility: This work is fairly reproducible. The paper outlines the basic implementation of traces but is otherwise lacking in details; with enough time, it could be reproduced.


Prior Work: gdb, Anteater, Header Space Analysis, OFRewind, Frenetic, Nettle, and NICE.


Question/Criticism/Future work: One of my main problems with ndb is that a breakpoint doesn’t seem to actually be a breakpoint; instead, it is just information about the packet at a certain point. I realize that actually freezing the network would be extremely difficult, and probably currently impossible. One of the most important things in debugging is being able to interactively change the state and see the outcome; for example, in gdb you can change variables and then see how it affects your program. For further work: a network topology analyzer that can translate your current network to run on a Mininet-like emulator. This emulator should let you input a network state, interactively pause the network, change packet headers, and then resume the simulation. This is a lot of work, but I think it should be the goal of network debugging.

VeriFlow: Verifying Network-Wide Invariants in Real Time


Novel Idea: A layer between the SDN controller and the network hardware that verifies, in real time, that there are no invariant violations. Currently, many of the systems in place to analyze network-wide invariants can only be used offline, and thus catch bugs after they happen.


Main Results: To do real-time verification, you must monitor all network update events in the live network as they are created by the applications, devices, and operators of the network. To achieve this, the authors add a shim layer between the controller and the network. For speed, verification occurs only on the parts of the network influenced by a new update; to find these, they use equivalence classes, i.e., sets of packets that experience the same forwarding behavior, implemented with a multi-dimensional prefix tree. They also use forwarding graphs, which represent connectivity throughout the network; this data structure allows VeriFlow to determine reachability, cycles, and rule consistency. VeriFlow is able to check new rules for invariant violations within hundreds of microseconds, with very little impact on network performance. In cases where a new rule affects many others on the network, VeriFlow first installs the rule and then verifies correctness, but this happens only about 1% of the time.


Evidence: They tested VeriFlow with Rocketfuel simulation data and real BGP traces using an OSPF simulator. 94.5% of the updates affected a single rule, and 99% affected 10 or fewer; in these cases, the average verification time was around 0.38 ms. It took VeriFlow on average 1 second to identify a link failure, and in the worst case, 4 seconds. To test TCP performance with and without VeriFlow, they used 172 switches in Mininet. The only time there is significant overhead is when many flow modifications are happening; in that case, the reduction was around 12%.


Impact: The impact of VeriFlow is not yet clear, but it seems like something many network admins would want on their networks.


Other Work: Flow Checker, Anteater, Header Space Analysis


Question: They mention that for IPv4 the trie is 32 levels deep. Does that mean with IPv6 it is 128? Wouldn’t that have a non-negligible effect on the runtime?




-- 
kmdent

Rodrigo Fonseca

Apr 16, 2013, 8:22:50 AM
to csci2950u-...@googlegroups.com
On behalf of David Trejo:

[David Trejo's review was attached as an image: IMG_20130416_000045.jpg]

Rodrigo