Reviews: DCTCP

Rodrigo Fonseca

Oct 27, 2010, 7:54:36 PM
to CSCI2950-u Fall 10 - Brown
Hi, as usual, please post your reviews here.

Thanks,
Rodrigo

James Chin

Oct 27, 2010, 10:45:06 PM
to CSCI2950-u Fall 10 - Brown
Paper Title: “Data Center TCP (DCTCP)”

Author(s): Mohammad Alizadeh, Albert Greenberg, David A. Maltz,
Jitendra Padhye, Parveen Patel, Balaji Prabhakar, Sudipta Sengupta,
Murari Sridharan

Date: 2010 (SIGCOMM ‘10)

Novel Idea: The authors claim that today’s state-of-the-art TCP
protocol falls short for cloud data centers with diverse applications,
which mix workloads that require small predictable latency with others
requiring large sustained throughput. To address this issue, they
propose DCTCP, a TCP-like protocol for data center networks. DCTCP
leverages Explicit Congestion Notification (ECN) in the network to
provide multi-bit feedback to end hosts.

Main Result(s): The authors found that DCTCP delivers the same or
better throughput than TCP, while using 90% less buffer space. Unlike
TCP, DCTCP also provides high burst tolerance and low latency for
short flows. In handling workloads derived from operational
measurements, the authors found that DCTCP enables the applications to
handle 10X the current background traffic, without impacting
foreground traffic. Further, a 10X increase in foreground traffic
does not cause any timeouts, thus largely eliminating incast problems.

Impact: Application requirements for low latency directly impact the
quality of the result returned and thus revenue. Also, many
applications require high utilization for large flows, so the
freshness of internal data structures affects the quality of the
results. Thus, high throughput for these long flows is as essential
as low latency and burst tolerance. This paper proposes DCTCP, which
attempts to address the impairments that hinder these requirements.

Evidence: First, the authors measured and analyzed production traffic
(>150 TB of compressed data), collected over the course of a month
from ~6,000 servers, extracting application patterns and needs
(particularly low-latency ones), from data centers whose networks are
built from commodity switches. Impairments that hurt performance
are identified and linked to properties of the traffic and the
switches. Then the authors evaluated the following aspects of DCTCP
at 1 and 10 Gbps speeds on ECN-capable commodity switches: throughput
and queue length, RED, fairness and convergence, multi-hop networks,
and traffic. They also tested DCTCP against a series of
microbenchmarks that show how it addresses performance impairments.

Prior Work: DCTCP differs from one of the earliest ECN schemes,
DECbit, in the way AQM feedback is smoothed (filtered) across time.
In DECbit, the router averages the queue length parameter over recent
cycles, while DCTCP uses a simple threshold and delegates the
smoothing across time of the feedback to the host (sender).
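A minimal sketch of that host-side smoothing, assuming the EWMA form from the paper with an illustrative gain g:

```python
# DCTCP's host-side smoothing of ECN feedback (contrast with DECbit,
# where the router itself averages the queue length). The switch applies
# only a threshold; the sender folds each window's marked fraction F into
# a running estimate alpha. The gain g = 1/16 is an illustrative value.

def update_alpha(alpha, marked, total, g=1.0 / 16):
    """Fold one window of ACKs into the congestion estimate alpha."""
    f = marked / total if total else 0.0  # F: fraction marked this window
    return (1 - g) * alpha + g * f

# With a persistently 50%-marked stream, alpha converges toward 0.5.
alpha = 0.0
for _ in range(200):
    alpha = update_alpha(alpha, marked=5, total=10)
```

The EWMA keeps a single burst of marks from triggering a drastic reaction, which is exactly the smoothing the paper moves from the router to the host.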

Competitive Work: QCN is being developed as an optional standard for
Ethernet. Also, several TCP variants aim to reduce queue lengths at
routers: delay-based congestion control (Vegas and CTCP), explicit
feedback schemes (RCP, VCP, and BMCC), and AQM schemes (RED and PI).
Finally, much research has been devoted, with success, to improving
TCP performance on paths with high bandwidth-delay product, including
High-speed TCP, CUBIC, and FAST.

Reproducibility: The findings appear to be reproducible if one follows
the testing procedures outlined in the paper and has access to the
code that the authors used. The setup does seem to be a bit involved,
though.

Question: This paper was published quite recently. Is Microsoft
planning on deploying DCTCP in-house in the near future?

Criticism: The paper could be a bit more polished. For instance, in
Part 1 (Introduction), “highly performant computing” should be
replaced with something like “high-performance computing,” as
“performant” is not a word. Also, in Part 5 (Related Work), the first
sentence of the third paragraph is missing “, and” before “AQM
schemes.”

Ideas for further work: Perform the evaluation of DCTCP on an even
larger scale, beyond 94 machines.

Hammurabi Mendes

Oct 27, 2010, 11:57:58 PM
to brown-csci...@googlegroups.com
Paper Title

Data Center TCP (DCTCP)

Authors

Mohammad Alizadeh, Albert Greenberg, David A. Maltz, Jitendra Padhye,
Parveen Patel, Balaji Prabhakar, Sudipta Sengupta, Murari Sridharan

Date

SIGCOMM'10 - August 2010

Novel Idea

Return multi-bit ECN information, accumulated across multiple packets,
to the sending host, which adapts its congestion window incrementally
(less abruptly).

Main Results

The paper presents real data center traffic measurements and analysis,
and proposes changes to the TCP protocol for this particular context,
based on identified traffic patterns. One of the identified patterns
is the requirement for low latency. Moreover, they propose changes to
TCP that give the sender better congestion notification and permit
better congestion-window adaptivity.

Impact

The paper analyzes real data-center information, and notices that
low-latency queries are in high demand in such systems (basically
because of SLA constraints and the aggregation pattern in data
processing). This obviously promotes awareness of this issue.

Moreover, the proposed changes to TCP for the data center environment
build upon those observations and actually show improved performance
with respect to the previous concerns.

Evidence

They begin their argument by identifying issues in data center
communication using real data. Among the issues, they describe incast
(synchronized short-term flows), queue-length problems (if the queue
is long, latency on small queries is compromised), and buffer
"pressure" (long flows take more and more buffer space, which leaves
little buffer space for short queries).

The algorithm is described in detail, and they evaluate their solution
using the previous issues as metrics, also evaluating throughput,
queue length, and fairness convergence time. They run both isolated
tests and tests that simulate real environments.

Prior Work + Competitive Work

They use ideas from other approaches that do active queue management,
such as RED and PI (they even use RED mechanisms to implement DCTCP).

They talk about the approach of jittering (perturbing) flows, which
increases median response time. They also talk about the possibility
of strongly reducing RTOmin plus fine-grained retransmission, which
can relieve incast problems but doesn't affect the queue-length
problem.

They also mention other TCP variants (Vegas, CTCP).

Reproducibility

I think their evaluation section is reproducible. The paper provides a
substantial amount of technical details that permit such
reproducibility. Of course, publicly available code would make it
easier to do so.

The measurements stand on different footing in this regard. Of
course, they were probably taken in a business context, and it is
understandable that the data is not widely available. Even so, if the
patterns are actually characteristic, they should appear in other
people's measurements.

Questions + Criticism

Their tests used TCP SACK, which seems important to their results
(in a single SACK packet, they could signal many ECN-Echo flags at
once, right?). Is TCP SACK common nowadays? (I think I'm obsolete
on this.)

They mention that RED+ECN results in high queue length because it is
supposedly too slow to react to bursts of traffic. But if they could
infer a probability of congestion, they could perhaps start marking
packets based on this probability, and the receiving nodes could
employ even finer-grained control over the congestion window. And if
it is too slow, perhaps they could still use two different thresholds,
but set to lower values; the reaction could actually be quicker.

Ideas for Further Work

I think it makes sense to use both low and high thresholds of
RED-enabled switches to try to infer congestion indications "in a
shade of gray", and then mark the packets according to the inferred
probability, instead of joining both thresholds into a single K.
Perhaps, a probabilistic marking on the packets could have interesting
impact on the oscillation of the congestion window.
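A minimal sketch of this idea (this is the reviewer's proposal, not the paper's mechanism, and both threshold values are made up for illustration):

```python
import random

# RED-style probabilistic marking: instead of a single cutoff K, mark
# with a probability that grows linearly between a low and a high queue
# threshold. k_min and k_max are illustrative values.

def mark_probability(queue_len, k_min=20, k_max=80):
    """Linearly interpolate the marking probability between the thresholds."""
    if queue_len <= k_min:
        return 0.0
    if queue_len >= k_max:
        return 1.0
    return (queue_len - k_min) / (k_max - k_min)

def should_mark(queue_len, rng=random.random):
    """Decide whether to set the CE bit on an arriving packet."""
    return rng() < mark_probability(queue_len)
```

Receivers would then see marks in proportion to queue depth, giving the "shade of gray" congestion signal described above.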

Jake Eakle

Oct 28, 2010, 1:21:49 AM
to brown-csci...@googlegroups.com
Paper Title

Data Center TCP (DCTCP)

Author(s)

Mohammad Alizadeh, Albert Greenberg, David A. Maltz, Jitendra Padhye, Parveen Patel, Balaji Prabhakar, Sudipta Sengupta, Murari Sridharan

Date 2010
Novel Idea Common cluster workflows cause three main performance impairments in commodity switches running standard TCP. The first is known as incast, and occurs when a large number of simultaneous requests flood a switch, causing some of them to be dropped and incurring a TCP RTO due to the small amount of buffer space allocated for packets in these switches.

The second is that long flows (typically cluster-maintenance software keeping nodes in sync, etc) cause short flows (typically processing related to a user query that needs to be responded to very quickly) to have increased latency due to queueing.

The last they call "buffer pressure", and occurs for the same reason as the second - the queues formed by the long flows can fill up the space that would be used for buffering short flow packets, causing them to be dropped.

They propose to solve all three problems by modifying TCP's ECN logic to react not merely to the presence of congestion (quite drastically), but to its extent.
Main Result(s) They describe DCTCP, an extension to TCP with the following properties: 

1) Packets simply have the congestion bit set if they enter a queue of length k or greater, in contrast to TCP's more complex marking semantics.

2) Receivers echo fully explicit ECN information in acks. To preserve the benefits of delayed acks, they only send acks either when the normal m packets have been received, or whenever they receive a packet with a different ECN state from the last.

3) Senders keep track of an estimated congestion fraction alpha based on the information they receive in said acks. They use this to decrease the send window multiplicatively, by a factor that grows with alpha from 0 up to 0.5, causing a gentler drop-off than TCP when congestion is estimated to be low.
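A rough sketch of the sender-side reaction in (3), with illustrative variable names:

```python
# DCTCP cuts the congestion window in proportion to the estimated
# congestion level alpha, while classic TCP always halves it.

def dctcp_cut(cwnd, alpha):
    """cwnd <- cwnd * (1 - alpha / 2), with alpha in [0, 1]."""
    return cwnd * (1 - alpha / 2)

def tcp_cut(cwnd):
    """Classic TCP halves the window on any congestion signal."""
    return cwnd / 2

# Mild congestion (alpha = 0.1) costs only 5% of the window, while at
# alpha = 1 (every packet marked) DCTCP matches TCP's halving.
```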
Impact Ran out of time for this again...
Evidence They verify that DCTCP results in smaller and less variable queues on a real actual cluster running what seems like real actual cluster-style code. This is remarkable! They also show that it converges to fairness faster, and compares well in multi-hop network tests.
Prior Work Their work is largely in response to prior work that has tried to avoid incast by artificially adding jitter to the system, which does solve the problem but increases median latency significantly.
Reproducibility They describe their algorithm really well, it seems to me.
Question They use shared-memory switches, which they claim are "like most commodity switches." What other kinds are available? Do they solve any of these challenges, and/or add different challenges of their own?
Criticism This paper honestly seems really good. They lay out some very precise problems, propose a solution, and show convincingly that it works. They compare their solution to other work in the field and explain why those preexisting technologies are unsatisfactory. I'm sure they are doing *something* wrong, but I don't see it.
Ideas for further work It seems like in very large clusters (or perhaps just poorly organized ones?) the problem they outline with incast resulting from a single packet from each machine could start to be more of an issue than they claim it is. When it is, what can be done? Perhaps there is a way to dynamically turn on a small amount of jitter when the system notices that's happening? Perhaps the jitter can be done at the application level for applications that are likely to cause that pattern?

--
A warb degombs the brangy. Your gitch zanks and leils the warb.

Dimitar

Oct 27, 2010, 11:36:32 PM
to CSCI2950-u Fall 10 - Brown
Data Center TCP (DCTCP)
Authors: Mohammad Alizadeh, Albert Greenberg, David A. Maltz,
Jitendra Padhye, Parveen Patel,
Balaji Prabhakar, Sudipta Sengupta, Murari Sridharan

Date: August 30–September 3, 2010

Novel Idea: The authors propose a new variant of TCP, called Data
Center TCP (DCTCP), which deals with the problems data centers face in
terms of traffic between servers. The new protocol attempts to
improve the network latency required by some applications; it needs to
be able to sustain high-burst traffic and high utilization so that
applications can continuously update their internal data structures.
The paper also analyzes production traffic from over 6,000 servers.

Main Result(s): A protocol that achieves the above goals. By using
Explicit Congestion Notification and a new marking scheme at the
source, DCTCP can operate with a small amount of buffer space, thus
reducing latency at the switches.

Impact: The proposed protocol can substitute for TCP in data centers.

Evidence: The authors evaluate their work based on queue length,
throughput, fairness and convergence, and the identified impairments.

Prior Work: There are several works that try to reduce the size of the
queue, like RCP, VCP, and BMCC. The key difference between them and
DCTCP is that they require complex switches that are not available
commercially. On the other hand, DCTCP can be used with the current
network architecture.

Reproducibility: I think the work is reproducible, since the
implementation is well explained; the evaluation on 6,000 servers, not
so much.

Question : Is it possible to apply some of the techniques described in
the paper to TCP outside data
centers to improve performance?

Criticism: I think the paper is well written. The authors clearly
explained their goals. The implementation of their protocol was
detailed, and the test cases were sufficient.

Visawee

Oct 27, 2010, 11:49:02 PM
to CSCI2950-u Fall 10 - Brown
Paper Title :
Data Center TCP (DCTCP)


Author(s) :
Mohammad Alizadeh, Albert Greenberg, David A. Maltz, Jitendra Padhye,
Parveen Patel, Balaji Prabhakar, Sudipta Sengupta, Murari Sridharan


Date :
SIGCOMM’10, August 30-September 3, 2010, New Delhi, India


Novel Idea :
Provides end hosts with multi-bit feedback using Explicit Congestion
Notification (ECN), giving high burst tolerance and low latency for
short flows.


Main Result(s) :
(1) DCTCP gives the same or better throughput than TCP while using 90%
less buffer space.
(2) DCTCP provides high burst tolerance and low latency for short
flows.
(3) DCTCP enables applications to handle 10X the current background
traffic without impacting foreground traffic.


Impact :
A new variant of TCP that can handle a diverse mix of short and long
flows better than traditional TCP. It also relieves application
developers from limiting the size of query responses to a very small
size.


Evidence :
(1) The DCTCP performance microbenchmarks show that
- DCTCP achieves full throughput, while using a very small queue
length compared to TCP
- DCTCP converges quickly, and all flows achieve their fair share
with a Jain's fairness index of 0.99.
(2) The Impairment microbenchmarks show that DCTCP outperforms TCP in
terms of fairness, high throughput, high burst tolerance, low latency,
and high performance isolation.
(3) The benchmark on real traffic also suggests that if the data
center used DCTCP, it could handle 10X larger query responses and 10X
larger background flows while performing better than it does with TCP
today.


Reproducibility :
The results are reproducible, except for those obtained from the real
traffic patterns. The authors explain the algorithm used in DCTCP in
detail. The microbenchmarks are also explained in detail.


Criticism :
This paper is very well structured and easy to follow. It starts from
describing issues found in the data centers. Then it analyzes the root
cause of the problems. It, then, suggests and evaluates the solution.

Matt Mallozzi

Oct 27, 2010, 8:14:51 PM
to brown-csci...@googlegroups.com
Matt Mallozzi
10/28/10

Title:
Data Center TCP (DCTCP)
Authors:
Alizadeh, Greenberg, Maltz, Padhye, Patel, Prabhakar, Sengupta, Sridharan
Date:
2010
Novel Idea:
Using Explicit Congestion Notification (ECN) to alert communication
endpoints how bad congestion is. Since the Congestion Experienced (CE) data
is only one bit in a packet, DCTCP sets the CE bit on a fraction of packets
corresponding to the level of congestion, which the other endpoint can
infer by sampling the packets.
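The sampling step described above might look like this toy sketch (the real protocol computes this fraction over each window of ACKs and then smooths it over time; the names here are assumptions):

```python
# Infer the congestion level from CE marks: the fraction of acknowledged
# packets carrying the mark estimates how congested the queue is.

def estimate_congestion(ce_flags):
    """ce_flags: one boolean per acknowledged packet (True = CE-marked)."""
    if not ce_flags:
        return 0.0
    return sum(ce_flags) / len(ce_flags)
```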
Main Results:
A TCP-like protocol designed for data centers: one that can provide low
latency to "foreground" services and high throughput to "background"
processes, while being able to withstand bursts of traffic, all using
commodity switches with small buffers.
Impact:
This could have a large impact on data center networking. By making the
network tolerant of traffic spikes, it helps distributed systems themselves
reach that goal. Also, it makes the network more efficient, by allowing ten
times the "background" traffic while not affecting the performance of the
"foreground" traffic.
Evidence:
They ran microbenchmarks to test each goal of DCTCP, as well as benchmarks
modeled after production traffic. They also provide a formal analysis of
some aspects of performance and of how to choose certain parameters.
Prior Work:
There has been a lot of work on congestion control, but it consists
largely of proposals that increase latency or do not withstand traffic
bursts. There is also a layer 2 approach, QCN, which would extend
Ethernet to contain hardware rate limiters.
Competitive Work:
There is also significant work done in layer 4 TCP variants, but many of
these do not perform well, and others require switches themselves to do
complex operations.
Reproducibility:
Fairly reproducible, as the algorithm is very simple. The value of g
would probably have to be tuned depending on the setup and on the load
of the system.
Question:
How does using Delayed ACK affect latency, and is this hit (if any) worth
the load that it saves?
Criticism:
They should have evaluated more TCP alternatives alongside DCTCP and TCP.

Sandy Ryza

Oct 27, 2010, 10:55:32 PM
to CSCI2950-u Fall 10 - Brown
Title:
Data Center TCP

Authors:
Mohammad Alizadeh, Albert Greenberg, David A. Maltz, Jitendra Padhye,
Parveen Patel, Balaji Prabhakar, Sudipta Sengupta, Murari Sridharan

Date:
SIGCOMM '10

Novel Idea:
The authors propose a set of modifications to TCP to attempt to solve
the problems on cluster networks caused by the filling up of shallow
commodity switch buffers. Their solution keeps buffer occupancies
persistently low by adjusting window sizes according to the extent of
network congestion. It gathers data on the extent of congestion
through a single-bit flag set when the buffer occupancy goes above a
certain threshold.
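The single-bit marking rule described above can be sketched as follows (the dict-based queue model and the value of K are illustrative; the paper derives K from the link speed):

```python
# Switch-side rule: set the CE bit on an arriving packet iff the
# instantaneous queue occupancy is at or above a threshold K.

K = 65  # marking threshold, in packets (illustrative value)

def enqueue(queue, packet):
    packet["ce"] = len(queue) >= K
    queue.append(packet)
    return packet

short_q = []
assert not enqueue(short_q, {})["ce"]   # under threshold: no mark
long_q = [{} for _ in range(70)]
assert enqueue(long_q, {})["ce"]        # over threshold: marked
```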

Main Result(s):
The authors find that DCTCP is effective in both reducing latencies
and maintaining throughput levels.

Evidence:
At the beginning of the paper, a fair amount of evidence is presented
to motivate their solution. They gather data on latencies and the
occupancies of switch buffers in different scenarios to prove that
overrunning buffer sizes causes higher latencies in their network. A
deductive, mathematical analysis of DCTCP's efficacy is provided in
addition to a thorough comparison between TCP and DCTCP run on a large
cluster. The comparison focuses on latencies, fairness, convergence,
and congestion.

Prior Work & Competitive Work:
The obvious competitor is current state-of-the-art TCP. Other
research attempts to decrease the negative impact of timeouts caused
by full buffers, whereas DCTCP attempts to prevent these timeouts by
not letting the buffer become full in the first place. There are also
application-level approaches to congestion control, such as jittering.

Reproducibility:
Modifications are both simple and presented in great detail. The main
barrier to reproducibility would likely be access to the resources
that Microsoft has (clusters of 6000 machines, etc.), but given these,
I imagine the experiments could be fully reproduced.

Criticism:
While the section at the beginning is important because it argues for
the need for their solution, the paper takes overly long to get to its
actual contribution.

Question:
What would be the implications of using DCTCP on wide area networks?
Is my understanding correct that the reason it would not be used is
because analyzing latencies in the wide area case provides a better
estimate of congestion? If so, would its use have a harmful effect or
just a neutral one?


Duy Nguyen

Oct 27, 2010, 11:35:56 PM
to brown-csci...@googlegroups.com
Title:
Data Center TCP

Authors:
MSR & Stanford people

Date:
2010/SIGCOMM

Novel Idea:
A TCP-like protocol with a new congestion control algorithm that
decreases application latency inside the data center by reducing queue
lengths and packet loss while maintaining high throughput.

Main Results:
The authors first describe Partition/Aggregate, a common application
architecture used in modern data centers in which latency is the key
metric. Then they illustrate the characteristics of data center
traffic by experimenting on 3 production clusters, which points out
how poorly original TCP performs. From those arguments, DCTCP is
proposed and tested. Results show that DCTCP uses 90% less buffer
space and provides high burst tolerance and low latency for short
flows.

Impact:
I don't have enough networking background to give any judgment. But I
think the results would be more convincing if the testbed were closer
to the real environment at scale. The testbed has only 94 machines.

Evidence:
They first do a "throughput test" to show that DCTCP achieves the same
throughput as TCP on long-lived flows. Other things, like fairness and
performance in multihop networks, are also tested. Then they show how
DCTCP addresses the problems pointed out in experiments with original
TCP, such as incast impairments.


Prior Work:
Based on a vast body of work on congestion control algorithms.

Competitive Work:
N/A

Reproducibility:
It's hard to reproduce; the paper is quite dense with formulas. It's
somewhat difficult to imagine the whole big picture.

Question/Criticism:
N/A

Siddhartha Jain

Oct 27, 2010, 11:42:32 PM
to brown-csci...@googlegroups.com
Title: DCTCP

Novel Idea:
Network traffic from data centers is analyzed, issues that hurt performance are
identified, and a new protocol, DCTCP, is proposed to remedy those issues.

Main Results:
The protocol is described and the performance is evaluated.

Evidence:
Performance in terms of queue buildup and latency is evaluated, and the numbers
look really good compared to TCP.

Prior Work: 
A lot of prior work on congestion control: QCN, which requires hardware rate
limiters; delay-based congestion control; explicit feedback mechanisms; and
various AQM schemes.

Reproducibility:
No code is available, so it is not readily reproducible.

Question:
How much impact, roughly as a percentage, does the network topology (incast)
versus the presence of both background and query traffic on the same nodes
have on latency and queue buildup?

Criticism:
Why was a comparison with other congestion reduction schemes not done?

Ideas for future work:
How easy would it be to separate out the query and background traffic for
something like, say, MapReduce? Would DCTCP still have as much of an
advantage then?


Tom Wall

Oct 27, 2010, 8:23:11 PM
to CSCI2950-u Fall 10 - Brown
Data Center TCP (DCTCP)
Mohammad Alizadeh, Albert Greenberg, David A. Maltz, Jitendra Padhye,
Parveen Patel, Balaji Prabhakar, Sudipta Sengupta, Murari Sridharan

Novel Idea:
TCP has weaknesses for some combinations of workloads, and these
weaknesses have a direct impact on the quality (and thus revenue) of
some applications. Using a month of logs from a 6000 server data
center, they first identify applications' usage patterns and where TCP
falls short for them. Armed with this information, they design and
evaluate DCTCP, a TCP-like protocol enhanced for data center
applications. DCTCP encodes multi-bit information over time using the
single-bit ECN field of TCP packets so that switches can better report
congestion and servers can react accordingly.

Main Result:
They evaluate DCTCP to verify that their intuitions about TCP's
shortcomings in the data center were valid, and that their design of
DCTCP addresses these shortcomings. It turns out they were correct.
DCTCP does a much better job than TCP at keeping buffer queues short
and available, reducing lost packets while maintaining both throughput
and fast response times.

Evidence:
They first provide a mathematical analysis of DCTCP to get a good idea
of how to optimally tune it for best performance. Once they do this,
they run a series of tests on a real network. They compare against TCP
and TCP with RED and find that DCTCP always does a better job.

Impact:
Assuming it is not too difficult to adapt a data center for use with
DCTCP, it clearly can do better than TCP, at least for similar
workloads.

Similar Work:
They mention two active queue management schemes, RED and PI, which
aim to fix the same congestion problem as DCTCP without modifying the
TCP protocol. They find that they do not work well enough in data
center scenarios because they must sacrifice either throughput or
latency.

QCN is a standard that calls for NICs to implement rate limiters on
TCP buffers. To cut costs, however, flows are grouped together and
limited as a single unit, which can be unfair.

Questions/Comments/Criticisms:
If the three types of traffic do not play well together, why not try
to isolate them? Might TCP do better if you separate the MLAs from
the worker machines (p.65, Query Traffic section)? Similarly, I think
a hybrid approach of VL2 (for its application-level isolation and
fairness) with DCTCP might be a killer combination that they should
explore.

They say that the convergence delay introduced with DCTCP is not an
issue, but do little to back up this claim. They say microbursts
aren't affected by the delay - is this because they typically finish
transmitting in fewer RTTs than are required for convergence?

Their tests don't appear to be done at the scale of a real data
center. Some tests never see traffic go beyond the Top of Rack switch.

How much work is it to adapt an existing data center to use DCTCP?
While it might run on the same hardware, there are some software
changes (I assume all switches and NICs will need firmware updates to
support DCTCP) and possibly architectural changes (such as separating
internet-facing TCP-only machines from the DCTCP machines, since they
apparently do not work well together) that may need to happen.

Zikai

Dec 9, 2010, 4:58:02 PM
to CSCI2950-u Fall 10 - Brown
Paper Title: DCTCP: Efficient Packet Transport for the Commoditized
Data Center

Author(s): Mohammad Alizadeh, Albert Greenberg, David A. Maltz, Jitu
Padhye, Parveen Patel, Balaji Prabhakar, Sudipta Sengupta, and Murari
Sridharan

Date/Conference: SIGCOMM 2010

Novel Idea: Use a variant of TCP called DCTCP which leverages
Explicit Congestion Notification (ECN) and a simple multibit feedback
mechanism at the host to overcome limitations of traditional TCP in
data centers like queue buildup, buffer pressure, incast and high
latencies.

Main Result(s): (1) Design and implement DCTCP, a variant of TCP, to
achieve lower latency and higher throughput than TCP in typical data
center environments.
(2) Evaluate DCTCP at 1 and 10Gbps speeds, through benchmark
experiments and analysis.

Impact: While TCP’s limitations cause data center application
developers to restrict the traffic they send today, using DCTCP
enables the applications to handle 10X the current background traffic,
without impacting foreground traffic. Further, a 10X increase in
foreground traffic does not cause any timeouts, thus largely
eliminating incast problems.

Evidence: In Part 4, the authors evaluate DCTCP at 1 and 10Gbps speeds,
through benchmark experiments and analysis. They find that in the data
center operating with commodity, shallow buffered switches, DCTCP
delivers the same or better throughput than TCP, while using 90% less
buffer space. Unlike TCP, it also provides high burst tolerance and
low latency for short flows.

Prior Work: In Part 5, the authors discuss related work like congestion
control for TCP, queue length reduction at routers, earlier ECN
schemes and TCP performance improvements on high bandwidth-delay
product paths.

Reproducibility: As with TCP, there are a lot of subtle implementation
issues, while the paper only addresses the general design ideas and
algorithms. Therefore, it may not be easy to reproduce DCTCP and
evaluate it.

Question: Can DCTCP be used outside data center scenarios? Can we use
it to replace traditional TCP entirely at the transport layer? Will we
have an RFC standard for DCTCP?