[Iccrg] LT-TCP followup
dirceu_cavendish at yahoo.com
Thu Aug 9 17:22:21 BST 2007
----- Original Message ----
From: Michael Welzl <michael.welzl at uibk.ac.at>
To: Dirceu Cavendish <dirceu_cavendish at yahoo.com>
Cc: l.andrew at ieee.org; Shivkumar Kalyanaraman <shivkuma at ecse.rpi.edu>; "RAMAKRISHNAN, KADANGODE K (K. K.)" <kkrama at research.att.com>; iccrg at cs.ucl.ac.uk; Vijay Subramanian <subrav at rpi.edu>
Sent: Thursday, August 9, 2007 9:13:55 AM
Subject: Re: [Iccrg] LT-TCP followup
> Various TCP variants today track RTTs, which would allow differentiation
> between corruption loss and overflow loss. Obviously, in an extreme worst
> case, an overflow loss could occur at a single packet with no "RTT
> evidence" (RTT increase) on surrounding packets of a TCP stream. But
> IMHO, in most of the scenarios I've seen, the RTT increase evidence is there.
I know, and I don't think I said anything to discourage using other
implicit feedback too. That being said, an RTT increase is misleading as
well: how do you distinguish a growing queue from link-layer ARQ happening
because of corruption?
<DC> You may not WANT to distinguish between an L2 queue increase due to L2 corruption-recovery schemes and an L3 router queue overflow. For instance, an L2 with retransmission may signal corruption to higher layers precisely by overflowing its interface queues. One can view that as L2 corruption causing a decrease in the L2 link capacity, and hence "congestion", similar to an AMC wireless link that changes its coding/bit rate in response to channel fading.
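The "RTT evidence" idea in this exchange could be sketched as a toy loss classifier. This is my illustration, not anything from the thread: the function name, threshold, and sample values are all hypothetical, and a real implementation would need path-dependent tuning and smoothed (SRTT-style) samples.

```python
# Hypothetical sketch (not from the thread): classify a loss event as
# likely congestion if RTT samples around the loss rose noticeably above
# the baseline RTT, else as likely corruption. The 1.2x threshold is an
# illustrative assumption, not a recommended value.

def classify_loss(rtt_samples, base_rtt, threshold=1.2):
    """Return 'congestion' if the mean of rtt_samples exceeds
    threshold * base_rtt, else 'corruption'.

    rtt_samples: RTT measurements (seconds) surrounding the loss.
    base_rtt:    baseline RTT, e.g. the minimum observed so far.
    """
    avg = sum(rtt_samples) / len(rtt_samples)
    return "congestion" if avg > threshold * base_rtt else "corruption"

# Loss preceded by clear queueing delay ("RTT evidence"):
print(classify_loss([0.11, 0.14, 0.16], base_rtt=0.10))   # congestion
# Loss with flat RTTs (no queue growth):
print(classify_loss([0.10, 0.101, 0.099], base_rtt=0.10)) # corruption
```

As the reply above notes, such a heuristic cannot by itself tell a growing router queue from delay added by link-layer ARQ; it only captures the "surrounding packets" evidence being discussed.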
> Notice that mistakenly taking a corruption loss as an overflow loss does
> not necessarily have a big impact on a TCP session. It is bad enough for
> VJ (alpha=2) TCP, but other controllers can be much more "immune" to a
> sporadic "misreading" of packet loss information.
OK, I can imagine that.
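The point about controllers being more or less "immune" to a misread loss can be made concrete with a little arithmetic. This is my own illustration under stated assumptions: additive increase of one segment per RTT, and a single multiplicative decrease with factor beta, where beta = 0.5 stands for VJ/Reno-style halving and beta = 0.875 stands for a gentler decrease in the spirit of high-speed variants.

```python
# Illustrative arithmetic (assumptions mine, see lead-in): after one
# spurious multiplicative decrease cwnd -> beta * cwnd, additive increase
# of +1 segment per RTT needs (cwnd - floor(beta * cwnd)) RTTs to regain
# the original window, so a gentler beta recovers much faster.

def recovery_rtts(cwnd, beta):
    """RTTs of +1-segment-per-RTT additive increase needed to regain
    cwnd after a single multiplicative decrease to floor(beta * cwnd)."""
    return cwnd - int(cwnd * beta)

print(recovery_rtts(100, 0.5))    # Reno-style halving: 50 RTTs
print(recovery_rtts(100, 0.875))  # gentler decrease: 13 RTTs
```

So a single misclassified corruption loss costs a halving controller dozens of RTTs of reduced rate, while a controller with a smaller decrease factor gives up far less, which is one way to read the "immunity" claim above.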