# [Iccrg] LT-TCP followup

Lachlan Andrew lachlan.andrew at gmail.com
Sat Aug 4 00:55:55 BST 2007

Greetings Michael,

On 03/08/07, Michael Welzl <michael.welzl at uibk.ac.at> wrote:
> > Lachlan Wrote
> >
> > No, we consider the whole path.  The assumption is just that the
> > wireless link is the *last* hop.
>
> Hmm... I don't think that this makes much of a difference for what
> I was trying to say; see below...

The difference is that, in the framework we consider, we still account
for the congestion caused by the excess transmissions.  Intuitively,
the wireless link being the last hop is the worst case, because it
causes the most congestion for a given received rate.

> That reminds me of the old congestion collapse scenario in one
> of the early Raj Jain papers, where a source which doesn't
> know about the small capacity of its last hop yet sends at
> a high rate can contribute to congestion on a hop before that,
> thereby causing the total throughput of the network to
> decrease even when sending rates are increased.

Yes, at Marina Del Rey you also mentioned the "two hop congestion
collapse" problem.  That may be a good way to explain my point.  A
flow which gets  1/k  of its packets through is exactly analogous to a
flow which uses  k  congested hops.  For a given receive rate, both
use  k  times "their share" of the resources, compared with a
single-hop lossless flow.  TCP doesn't give each flow a rate in
proportion to 1/k -- it gives it a rate in proportion to  1/sqrt(k),
allowing the "resource intensive" flow to consume more resources.

Unfortunately, current TCP behaviour fits Jain's original definition
of congestion collapse.  Imagine a two-hop "parking lot" topology, in
which the two-hop flow is application limited.  If the two-hop flow
increases its sending rate, then the total network throughput goes
down.  I believe this is Jain's definition of congestion collapse.

That is all fixed by measuring the change in utility instead of
throughput.  The extra benefit to the two-hop flow outweighs the
disadvantage to the two short flows; because it has a lower current
rate, the percentage increase it gets is much larger than their
percentage decreases.
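The utility argument can be checked with a small numeric sketch (pure
Python; the unit link capacity and the log utility of proportional
fairness are my assumptions for illustration, not anything from a
protocol spec):

```python
import math

# Two-hop "parking lot" with capacity 1 on each link.  The two-hop flow
# sends at rate y, so each one-hop flow gets 1 - y.  Total throughput is
# y + 2*(1 - y) = 2 - y: raising y REDUCES total throughput.
def total_throughput(y):
    return y + 2 * (1 - y)

# With log utility (proportional fairness), total utility still RISES as
# long as y is below its fair share of 1/3: the percentage gain to the
# low-rate flow outweighs the percentage losses of the two short flows.
def total_utility(y):
    return math.log(y) + 2 * math.log(1 - y)

y_lo, y_hi = 0.1, 0.2   # both below the fair share of 1/3
assert total_throughput(y_hi) < total_throughput(y_lo)   # throughput falls
assert total_utility(y_hi) > total_utility(y_lo)         # utility rises
```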

"Proportional fairness" (which in our context allocates equal send
rate to users, regardless of drop rate) would allocate 2/3 of each
bottleneck capacity to the one-hop flows, and 1/3 of each capacity to
the two-hop flow.  That corresponds to setting a window proportional
to  1/p  for a drop rate of  p.  In this simple case, the long flow
causes twice the congestion per unit throughput (the same amount on
two routers), and consequently gets half the rate.  Good.
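As a sanity check on the 2/3 vs 1/3 split, here is a brute-force
maximisation of the proportional-fair (sum-of-logs) objective for the
two-hop parking lot (pure Python sketch; the grid search and unit
capacity are just for illustration):

```python
import math

def pf_long_flow_rate(capacity=1.0, steps=100_000):
    """Maximise  log(x0) + 2*log(capacity - x0)  over the two-hop flow's
    rate x0.  Both links are full at the optimum, so each one-hop flow
    gets  capacity - x0."""
    best_x, best_u = None, float("-inf")
    for i in range(1, steps):
        x0 = capacity * i / steps
        u = math.log(x0) + 2 * math.log(capacity - x0)
        if u > best_u:
            best_x, best_u = x0, u
    return best_x
```

The maximum lands at x0 = capacity/3, i.e. the one-hop flows get 2/3 of
each bottleneck and the two-hop flow gets 1/3.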

TCP is "more fair", and gives a higher rate to the low-throughput
flow, by setting the window proportional to  1/sqrt(p).  (Is that
"acceptable"?  That's not for me to justify...)  This gives a ratio of
1:sqrt(2) instead of 1:2 in the rates.  Thus, the flow with the lower
rate is allowed to create more total congestion.  This is "fair",
because it is the underdog and needs help.

We can have a whole family, where the rate is proportional to
p^{-1/alpha};  proportional fairness uses  alpha=1,  TCP uses
alpha=2.  (Again, this isn't just what I'm proposing; this is what TCP
already does for flows of equal RTT.)
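To make the family concrete, a tiny sketch (pure Python; the drop rate
is made up, and I use the approximation that the two-hop flow sees
twice the per-hop drop rate):

```python
import math

def rate(p, alpha):
    """Rate proportional to p^(-1/alpha): alpha=1 is proportional
    fairness (window ~ 1/p), alpha=2 is TCP's  1/sqrt(p)  behaviour."""
    return p ** (-1.0 / alpha)

p = 0.01                       # assumed drop rate on one congested hop
for alpha, name in ((1, "proportional fairness"), (2, "TCP")):
    ratio = rate(2 * p, alpha) / rate(p, alpha)
    print(f"{name}: two-hop flow gets {ratio:.3f}x the one-hop rate")
# alpha=1 gives the 1:2 split; alpha=2 gives 1:sqrt(2) ~ 1:1.414
```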

> Consider a long path with a wireless last hop. Consider a source
> which sends across this path, and let's say that all of a sudden,
> 90% of the packets are lost due to corruption, and that situation
> stays stable for a while. You argue that the sender should increase
> its rate.
>
> Now consider the first hop, which is the bottleneck, and assume that
> all other flows sharing it get 100% of their packets across most of
> the time (they don't traverse a wireless network).
>
> The  part that I don't get is: why can it be acceptable for the
> single sender which only gets 10% of its packets across anyway
> to increase its rate, and thereby increase the chance for all others
> to experience congestion?

That comes down to the whole issue of "what is fair?"  TCP wasn't
designed to be fair for non-identical flows -- that is why flows with a
larger MSS or shorter RTT get higher throughput.

The best way I know of to answer fairness questions is in terms of
Kelly's utility maximisation.  The conversation would be:
A: Why should the high-loss flow get less effective throughput?
B: Because it creates more congestion per byte of throughput
A: How much less throughput should it get?
B: That depends on how much benefit it loses by sending at the lower rate.

Let's be specific.  Assume there are two flows sharing a 1Gbps link.
The "proportional fair" allocation (accounting for loss) would be that
User A gets 500Mbps throughput, while user B gets 500Mbps through the
link, but only 50Mbps reaches the destination.  Intuitively, this
again corresponds to each flow causing the same amount of congestion.

However, remember that TCP is "more fair" than proportional fairness,
by giving extra bandwidth to the flows which get the least,
corresponding to  alpha=2.  Thus, in your example, when the flow
suddenly starts losing 90% of its packets, the TCP-fair thing to do
would be to reduce its payload by *less* than a factor of 10.  Of
course, to do that it needs to increase the sending rate slightly.
This flow causes more congestion than the lossless flow, the same as a
two-hop flow causes more congestion than a one-hop flow.  That is just
what TCP's fairness scheme implies.

In this example, the ratio would be 1:sqrt(10%).  User A would get
240Mbps throughput (and use 240Mbps of bandwidth), while User B would
get 76Mbps throughput (and use 760Mbps of bandwidth).

For comparison, in a 10-hop parking lot with lossless 316Mbps links,
each one-hop flow would get 240Mbps, and the 10-hop flow would get
76Mbps, using 760 "link * Mbps".
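Both sets of numbers fall out of the same alpha=2 rule; here is a small
sketch that reproduces them (pure Python; the closed-form split is my
own algebra from the sqrt ratio, not from any protocol spec):

```python
import math

def tcp_fair_split(capacity, survival):
    """Split a link between lossless flow A and flow B, of which only a
    fraction `survival` of packets get through, so that the goodput
    ratio B:A is sqrt(survival) (TCP's 1/sqrt(p) rule, alpha = 2).

    With A using bandwidth x and B using y = capacity - x:
        survival * y / x = sqrt(survival)  =>  y = x / sqrt(survival)
    """
    x = capacity / (1 + 1 / math.sqrt(survival))
    y = capacity - x
    return x, survival * y, y      # A goodput, B goodput, B bandwidth

a_goodput, b_goodput, b_bandwidth = tcp_fair_split(1000, 0.1)
# roughly 240 Mbps, 76 Mbps and 760 Mbps, matching the figures above
```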

> I was also not aware that you're making a concrete proposal
> for a certain behavior - at least back in Marina Del Rey, I thought that
> you're merely trying to point out an interesting counterintuitive issue

True.  I don't have a particular protocol (I'll put my vote behind
LT-TCP :) and I'm not sure if we should be TCP-friendly or
proportionally fair.  However, I think fairness should be consistently
designed into protocols, rather than just "emerging" from a set of
rules.  In that case, being TCP-friendly implies we should increase
the send-rate of flows suffering packet corruption.

> In any case it would be helpful if you could explain it by means of
> one or two simple examples like the one I gave above.

I hope the above example explains the intuition.

Have a good trip,
Lachlan

--
Lachlan Andrew  Dept of Computer Science, Caltech
1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA
Phone: +1 (626) 395-8820    Fax: +1 (626) 568-3603