[Iccrg] LT-TCP followup
lachlan.andrew at gmail.com
Thu Aug 9 22:28:27 BST 2007
On 09/08/07, Dirceu Cavendish <dirceu_cavendish at yahoo.com> wrote:
> So it makes sense to me to ask: "how well can we design a CC scheme
> with as little new features from routers as possible?"
Agreed. Larry Dunn has convinced me of that.
My point was that we ideally don't want to design protocols which
*rely* on undesirable artefacts of current networks, which we might
otherwise want (and be able) to get rid of. One such artefact is that
current networks have very long queueing delay during congestion
(because of Reno's "need" for bandwidth-delay-product droptail buffers).
> Explicit signalling requires specific
> router behavior from the get go, AND it is difficult to guarantee anything
> if not adopted by all routers on a session path.
That is true if the explicit signalling is used to say "slow down".
What Michael proposed was something to say "*don't* slow down, that
wasn't congestion". The absence of that signal doesn't break
anything: flows simply fall back to today's conservative behaviour.
If a single router on the path knows that it is wireless, and likely
to drop packets without congestion, then even if only that one router
implements the signalling, we avoid unnecessary window halving.
The wireline/core routers needn't be touched.
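The distinction above can be sketched in a few lines. This is only an
illustrative sender-side sketch (the function and flag names are mine, not
from Michael's proposal or any implementation): the window is preserved only
when the explicit "not congestion" signal is present, so paths without any
upgraded router behave exactly as today.

```python
def on_loss(cwnd, loss_not_congestion):
    """React to a lost packet.

    loss_not_congestion is True when some router on the path has
    flagged the loss as corruption (e.g. a wireless drop) rather
    than congestion.
    """
    if loss_not_congestion:
        # Retransmit, but keep the window: no congestion occurred.
        return cwnd
    # Default (signal absent): standard multiplicative decrease,
    # so unmodified paths behave exactly as current TCP.
    return max(cwnd // 2, 1)
```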
Conversely, consider the case of implicit signalling: A single
congested queue drops many packets without introducing queueing delay
(perhaps it has a small buffer). If this is treated as heavy
"corruption" by the congestion control, then the source doesn't slow
down enough, and congestion persists.
(Minor comments below.)
> specific router behavior may interfere with some such passive schemes.
> But that is the same as specific receiver misbehavior (either maliciously or
Sort of... If we can devise a way for routers not to delay packets
despite persistent congestion, that is "progress", not "misbehaviour".
However, if it causes all losses to be treated as non-congestion
losses and so breaks congestion control, it is likely to be
"disallowed" by the IETF.
> I guess it is widespread consensus today that complexity at the end points
> is best, so routers should keep its simplicity
Yes, although wireless devices are likely to be cheap/simple devices.
It was pointed out at the open mike session in Chicago that this
particular "axiom" was just a description of what suited the 1980s
networks, and should be reconsidered now. (Not necessarily
changed, just re-examined rather than taken on faith.)
> Fortunately most routers adopt droptail queues.
Why is it that fortunate? It ensures large queueing delays and makes
ECN impossible (increasing loss/inefficiency).
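To make the droptail/ECN contrast concrete, here is a toy sketch (threshold
and names are illustrative, not any router's actual algorithm): droptail can
only signal congestion by discarding packets once the buffer is full, whereas
ECN requires an AQM that marks ECN-capable packets while the queue is still
building.

```python
def droptail(queue_len, buf_size):
    # Only signal available: drop when the buffer is already full.
    return "drop" if queue_len >= buf_size else "enqueue"

def ecn_aqm(queue_len, mark_threshold, ect):
    # ect: packet is ECN-capable transport.
    if queue_len < mark_threshold:
        return "enqueue"
    # Congestion building: mark ECN-capable traffic instead of dropping,
    # signalling the sender without losing the packet.
    return "mark" if ect else "drop"
```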
> I am not advocating routers
> imposing "unecessary queue delays" (I am not sure what you mean by that)
As I pointed out, routers which drop packets based on a virtual queue
will not incur queueing delays when they are congested. Any queueing
delay which is not due to the buffer smoothing packet-level burstiness
is, in that sense, unnecessary.
(An example of a protocol which forces unnecessary queueing delay is
Reno, which forces people to buffer one bandwidth-delay product of
data so that a single flow can get high utilisation. If we didn't use
Reno, queueing delays would be much smaller, improving user QoS.)
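A back-of-envelope illustration of that buffering rule (the link figures
below are example values, not from this thread): sizing a droptail buffer at
one bandwidth-delay product means a full buffer adds a whole extra RTT of
queueing delay.

```python
def bdp_bytes(bandwidth_bps, rtt_s):
    """One bandwidth-delay product, in bytes."""
    return bandwidth_bps * rtt_s / 8

# A 100 Mbit/s link with a 100 ms round-trip time:
buf = bdp_bytes(100e6, 0.100)   # 1.25e6 bytes, i.e. 1.25 MB of buffer
# When that buffer is full it drains at line rate in 100 ms,
# so every packet can see up to one extra RTT of queueing delay.
```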
Lachlan Andrew Dept of Computer Science, Caltech
1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA
Phone: +1 (626) 395-8820 Fax: +1 (626) 568-3603