[Iccrg] LT-TCP followup

Dirceu Cavendish dirceu_cavendish at yahoo.com
Thu Aug 9 23:18:37 BST 2007


Dear Andrew,

I sense we are in violent agreement on various points, although we come from different perspectives. See below...


----- Original Message ----
From: Lachlan Andrew <lachlan.andrew at gmail.com>
To: Dirceu Cavendish <dirceu_cavendish at yahoo.com>
Cc: iccrg at cs.ucl.ac.uk
Sent: Thursday, August 9, 2007 2:28:27 PM
Subject: Re: [Iccrg] LT-TCP followup


Greetings Dirceu,

On 09/08/07, Dirceu Cavendish <dirceu_cavendish at yahoo.com> wrote:
> So it makes sense to me to ask: "how well can we design a CC scheme
> with as few new features from routers as possible?"

Agreed.  Larry Dunn has convinced me of that.

My point was that we ideally don't want to design protocols which
*rely* on undesirable artefacts of current networks, which we might
otherwise want (and be able) to get rid of.  One such artefact is that
current networks have very long queueing delay during congestion
(because of Reno's "need" for bandwidth-delay droptail buffers).

<DC> Agree. CC protocols that maintain a high queueing delay are not desirable. I alluded to this as the "operating point" of a cc scheme in my presentation in Chicago. Because the AIMD template is driven by packet loss/no-loss detection, many TCPs tend to operate at heavy queue-filling levels. But the operating point of a cc scheme can be tuned to low queueing delays as well; this is a matter of design.
</DC>
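
To make the "operating point" idea concrete, here is a rough, purely illustrative AIMD sketch (all parameter names and values are invented): with a loss-only trigger the controller settles near a full buffer, while adding a queueing-delay trigger moves the equilibrium toward an empty queue.

    def aimd_update(cwnd, loss, queue_delay, delay_threshold=0.005,
                    alpha=1.0, beta=0.5):
        """One RTT of a generic AIMD controller (illustrative only)."""
        if loss or queue_delay > delay_threshold:  # the trigger choice sets
            return max(1.0, beta * cwnd)           # multiplicative decrease
        return cwnd + alpha                        # additive increase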

> Explicit signalling requires specific
> router behavior from the get go, AND it is difficult to guarantee anything
> if not adopted by all routers on a session path.

That is true if the explicit signalling is used to say "slow down".
What Michael proposed was something to say "*don't* slow down, that
wasn't congestion".  The absence of that signal doesn't break
anything.

<DC> Agree. But some people also advocate that L2 corruption loss should cause rate reduction, as a thread on this list has shown. In that context, I assume a router would decide that THAT particular corruption loss was spurious (not congestion-related), and hence that a drastic rate/window reduction should not be exercised.
</DC>
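
As a rough sketch of how a sender might consume such a signal (the corruption_notified bit is hypothetical, standing in for whatever a wireless router would set; if no router on the path sets it, the sender defaults to the standard halving, so nothing breaks):

    def on_loss(cwnd, corruption_notified=False):
        # Hypothetical signal meaning "this loss wasn't congestion".
        if corruption_notified:
            return cwnd               # corruption loss: hold the rate
        return max(1, cwnd // 2)      # assume congestion: halve as usual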

If a single router on the path knows that it is wireless, and likely to drop packets without congestion, then even if only that one router implements the signalling, we avoid unnecessary window halving.
The wireline/core routers needn't be touched.

<DC> Agree once more. However, persistent corruption losses on an AMC channel may trigger a mode change, with a reduction in wireless service capacity. In this scenario, it makes sense to me for the TCP sources to slow down, as the service bit rate has been significantly reduced (at a likely session bottleneck).
</DC>
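
One way to sketch that caveat (thresholds invented for illustration): honour the corruption signal only while such losses stay isolated, and back off once they persist, on the assumption that an AMC mode change has reduced the channel's service rate.

    PERSISTENCE_LIMIT = 3  # corruption losses per RTT before backing off

    def on_loss_with_history(cwnd, corruption_notified, recent_corruptions):
        if corruption_notified and recent_corruptions < PERSISTENCE_LIMIT:
            return cwnd               # isolated corruption: hold the rate
        return max(1, cwnd // 2)      # persistent losses: reduce anyway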

Conversely, consider the case of implicit signalling:  A single
congested queue drops many packets without introducing queueing delay
(perhaps it has a small buffer).  If this is treated as heavy
"corruption" by the congestion control, then the source doesn't slow
down enough, and congestion persists.

<DC> Let's take the extreme case: a wireless channel with no storage space whatsoever. If it is an adaptive channel (AMC), the channel will reduce its modulation rate in order to achieve an acceptable BER. If not, the wireless channel may indeed become a black hole. In this case, rate reduction is unlikely to help, and the TCP should and will take more drastic action (e.g., a new slow start), triggered by RTOs, excessive retransmissions, etc.
</DC>
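
For concreteness, that "more drastic action" is roughly the standard RTO path, sketched here with RFC 6298-style exponential timer backoff (the 60 s cap is a common choice, not something this discussion depends on):

    def on_rto(cwnd, rto):
        ssthresh = max(2, cwnd // 2)   # remember half the old window
        cwnd = 1                       # re-enter slow start from one segment
        rto = min(2 * rto, 60.0)       # exponential timer backoff
        return cwnd, ssthresh, rto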

(Minor comments below.)

Cheers,
Lachlan

> specific router behavior may interfere with some such passive schemes.
> But that is the same as specific receiver misbehavior (whether malicious or
> not).

Sort of...  If we can devise a way for routers not to delay packets
despite persistent congestion, that is "progress", not "misbehaviour".
However, if it causes all losses to be treated as non-congestion
losses and so breaks congestion control, it is likely to be
"disallowed" by the IETF.

> I guess there is widespread consensus today that complexity at the end points
> is best, so routers should keep their simplicity

Yes, although wireless devices are likely to be cheap and simple.
It was pointed out at the open mike session in Chicago that this
particular "axiom" was just a description of what suited the 1980s
networks, and should be reconsidered now.  (Not necessarily changed,
just reconsidered.)

<DC> I just want to state that I am not arguing for dirt-cheap routers, but simply for routers where added complexity is well justified.
</DC>

> Fortunately most routers adopt droptail queues.

Why is it that fortunate?  It ensures large queueing delays and makes
ECN impossible (increasing loss/inefficiency).

<DC> Not necessarily. If you assume current TCPs, then I can see your concern. That is, if TCPs slow down only as a reaction to packet drops, then routers would have to signal "random" drops in advance in order to keep their queues at low filling levels. But IMO it is much easier to fix an end-point behavior so as to shift the cc operating point to a low queueing delay than to instruct every router to try to trigger a rate reduction in order to keep its buffer level low, which to me amounts to trying to "fool" TCP senders into believing that router queues are about to overflow. In any event, let me restate that I agree that large queues are not the desirable operating point of a session/network.
</DC>

> I am not advocating routers
> imposing "unecessary queue delays" (I am not sure what you mean by that)

As I pointed out, routers which drop packets based on a virtual queue
will not incur queueing delays when they are congested.  Any queueing
delay which is not due to the buffer smoothing packet-level burstiness
is unnecessary.
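
For readers unfamiliar with the mechanism, a rough sketch of a virtual-queue AQM (parameters invented): a counter drained at a fraction of the real link rate, so congestion is signalled when the *virtual* queue overflows, while the real queue, and hence queueing delay, stays near zero.

    import time

    class VirtualQueue:
        def __init__(self, link_rate_bps, theta=0.95, limit_bits=150_000):
            self.drain_rate = theta * link_rate_bps  # virtual capacity
            self.limit = limit_bits                  # virtual buffer size
            self.backlog = 0.0                       # virtual queue length
            self.last = time.monotonic()

        def on_packet(self, size_bits):
            now = time.monotonic()
            # Drain the virtual queue for the time elapsed since last packet.
            self.backlog = max(0.0, self.backlog
                               - self.drain_rate * (now - self.last))
            self.last = now
            if self.backlog + size_bits > self.limit:
                return "mark_or_drop"   # signal congestion early
            self.backlog += size_bits
            return "forward"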

(An example of a protocol which forces unnecessary queueing delay is
Reno, which forces people to buffer one bandwidth-delay product of
data so that a single flow can get high utilisation.  If we didn't use
Reno, queueing delays would be much smaller, improving user QoS.)
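
To put a number on that bandwidth-delay product (figures chosen purely for illustration):

    link_rate_bps = 100e6               # 100 Mbit/s bottleneck (example)
    rtt_s = 0.100                       # 100 ms round-trip time (example)
    bdp_bits = link_rate_bps * rtt_s    # 10,000,000 bits
    packets = bdp_bits / 8 / 1500       # ~833 full-sized 1500 B packets
    print(f"BDP buffer: {bdp_bits / 8 / 1e6:.2f} MB (~{packets:.0f} pkts)")

A droptail buffer of that size, when full, adds a full extra RTT of queueing delay at the bottleneck.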

<DC> Again, I concur with your point; I am just not sure that I agree with the remedy. However, there can be many solutions to decreasing queueing delays in a TCP session, so we don't need to agree on a single one (but hopefully we will understand each other's solutions :-). Our experience with Reno shows a "medium" bottleneck queue utilization (filling level). There are less aggressive protocols (ours, CCP :-) and more aggressive ones (FAST).

Dirceu
</DC>

-- 
Lachlan Andrew  Dept of Computer Science, Caltech
1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA
Phone: +1 (626) 395-8820    Fax: +1 (626) 568-3603


       