hi,

you say: "1. In a high bandwidth-delay product environment, high throughputs can
only be achieved if the packet loss rate is unrealistically low."

does anyone know what a realistic packet loss rate not due to congestion is in
gigabit networks?

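For a rough sense of scale, here is a quick back-of-the-envelope sketch in
Python (the 1 Gbit/s rate, 100 ms RTT, and 1500-byte segments are my own
assumed numbers, not measurements), using the steady-state response function
of Mathis et al., rate ~ (MSS/RTT) * sqrt(3/2) / sqrt(p). It estimates the
loss rate that point 1 below implies, and the additive-increase ramp time
behind point 3:

    rate_bps = 1e9       # assumed target throughput: 1 Gbit/s
    rtt_s    = 0.1       # assumed round-trip time: 100 ms
    mss_bits = 1500 * 8  # assumed segment size: 1500 bytes

    # Window (in segments) needed to keep the pipe full: BDP / MSS
    bdp_segments = rate_bps * rtt_s / mss_bits

    # Invert the Mathis model, rate = (mss/rtt) * sqrt(1.5 / p), to get
    # the loss rate p that still allows the target rate:
    p = 1.5 * (mss_bits / (rtt_s * rate_bps)) ** 2

    # Additive increase grows the window by ~1 segment per RTT, so growing
    # from near-empty to the BDP takes about bdp_segments RTTs.
    ramp_s = bdp_segments * rtt_s

    print(f"window to fill the pipe: {bdp_segments:,.0f} segments")
    print(f"tolerable loss rate    : {p:.1e} (about 1 in {1 / p:,.0f} packets)")
    print(f"additive-increase ramp : {ramp_s / 60:.0f} minutes")

With those assumptions it prints a window of about 8,300 segments, a tolerable
loss rate of about 2e-08 (roughly one packet in 46 million), and a ramp of
about 14 minutes; the question is whether non-congestion loss on real gigabit
paths ever gets anywhere near that low.
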
Saverio

On 2/24/06, S. Keshav <keshav@uwaterloo.ca> wrote:
> Folks,
>     It's been quiet on this list for a while. Can I take it as consensus on
> the following:
>
> A. Definition of Congestion:
>
> Network congestion is a state of degraded performance from the perspective
> of a particular user. A network is said to be congested from the perspective
> of a user if that user's utility has decreased due to an increase in network
> load.
>
> B. Problems with TCP congestion control:
>
> 1. In a high bandwidth-delay product environment, high throughputs can only
>    be achieved if the packet loss rate is unrealistically low.
>
> 2. TCP has low throughput in lossy environments because it uses loss as an
>    indication of congestion.
>
> 3. Due to additive increase, it takes a long time for a flow to ramp up to
>    transient increases in available capacity, which results in unnecessarily
>    long flow-completion times.
>
> 4. Even when a flow is capable of completing within a round-trip time,
>    slow-start makes flows last multiple round-trip times just to find their
>    fair-share rate. Many flows complete before they exit the slow-start phase.
>
> 5. TCP fills up all available buffers at the bottleneck links, which results
>    in long latency.
>
> 6. TCP shares bandwidth in inverse proportion to flow RTTs.
>
> 7. TCP builds a standing queue at the point of congestion, which increases
>    the delay.
> -------
>
> If we agree on this (and if you do not, this is the time to speak up), I
> would like to propose that we discuss the requirements of a new congestion
> control protocol, both theoretically and practically.
>
> To start off this debate, I would like to state the following top-level
> requirements.
>
> First, given the definition of congestion, I argue that the proposed
> protocol should allow two things: decoupling and observability.
>
>   o Decoupling means that the traffic from one user should not affect (or
>     should minimally affect) other users.
>   o Observability means that users should be able to observe the network
>     state in some fashion, so that they can control their input so as not
>     to cause overload and a consequent decrease in utility.
>
> Second, in addition to these, the new mechanism should also not suffer from
> the seven problems with TCP.
>
> Comments?
>
> thanks
>
> keshav
>
> _______________________________________________
> Iccrg mailing list
> Iccrg@cs.ucl.ac.uk
> http://oakham.cs.ucl.ac.uk/mailman/listinfo/iccrg