# [Iccrg] LT-TCP followup

Michael Welzl michael.welzl at uibk.ac.at
Mon Aug 6 14:32:31 BST 2007

> > > Lachlan Wrote
> > >
> > > No, we consider the whole path.  The assumption is just that the
> > > wireless link is the *last* hop.
> >
> > Hmm... I don't think that this makes much of a difference for what
> > I was trying to say; see below...
>
> The difference is that, in the framework we consider, we still
> consider the congestion caused by the excess transmissions.
> Intuitively, the wireless links being the last hop is the worst case,
> because it causes the most congestion for a given received rate.

This probably means that you assume that the receiving
end point of the wireless link drops any corrupt packet
anyway (otherwise I absolutely don't understand why
this would be the worst case). However, it shouldn't
do that for packets which experience corruption that upper
layers could detect (e.g. UDP-Lite packets).

> > That reminds me of the old congestion collapse scenario in one
> > of the early Raj Jain papers, where a source which doesn't
> > yet know about the small capacity of its last hop and sends at
> > a high rate can contribute to congestion on a hop before that,
> > thereby causing the total throughput of the network to
> > decrease even when sending rates are increased.
>
> Yes, at Marina Del Rey you also mentioned the "two hop congestion
> collapse" problem.  That may be a good way to explain my point.  A

Indeed that's exactly the scenario which I meant here.

> flow which gets  1/k  of its packets through is exactly analogous to a
> flow which uses  k  congested hops.  For a given receive rate, both
> use  k  times "their share" of the resources, compared with a
> single-hop lossless flow.  TCP doesn't give each flow a rate in

Sorry: I just tried hard, but I don't understand that analogy.
Maybe that's the problem. I can think of proportional fairness
in terms of multiple congested resources, but I never understood
how this relates to flows which have their packets dropped because
of corruption.

> proportion to 1/k -- it gives it a rate in proportion to 1/sqrt(k),
> allowing the "resource intensive" flow to consume more resources.
>
>
> Unfortunately, current TCP behaviour fits Jain's original definition
> of congestion collapse.  Imagine a two-hop "parking lot" topology, in
> which the two-hop flow is application limited.  If the two-hop flow
> increases its sending rate, then the total network throughput goes
> down.  I believe this is Jain's definition of congestion collapse.

I don't know, because again I don't quite get it. What do you
mean by "application limited"?

Normally, to me, this means that the application can't send
at the rate at which it should (according to what the network
allows - i.e. a TCP can't increase its rate as the congestion
control mechanism normally would). Then why would it increase
its sending rate at all? It can't!

> That is all fixed by measuring the change in utility instead of
> throughput.  The extra benefit to the two-hop flow outweighs the
> disadvantage to the two short flows; because it has a lower current
> rate, the percentage increase it gets is much larger than their
> percentage decreases.

Because it's a log utility function?

> "Proportional fairness" (which in our contexts allocates equal send
> rate to users, regardless of drop rate) would allocate 2/3 of each
> bottleneck capacity to the one-hop flows, and 1/3 of each capacity to
> the two-hop flow.  That corresponds to setting a window proportional
> to  1/p  for a drop rate of  p.  In this simple case, the long flow
> causes twice the congestion per unit throughput (the same amount on
> two routers), and consequently gets half the rate.  Good.
>
> TCP is "more fair", and gives a higher rate to the low-throughput
> flow, by setting the window proportional to  1/sqrt(p).  (Is that
> "acceptable"?  That's not for me to justify...)  This gives a ratio of
> 1:sqrt(2) instead of 1:2 in the rates.  Thus, the flow with the lower
> rate is allowed to create more total congestion.  This is "fair",
> because it is the underdog and needs help.

I'm having a lot of trouble trying to grasp all that. I already
have a problem with the sentence: "that corresponds to setting
a window proportional to 1/p for a drop rate of p". This assumes
that the packet loss ratio experienced by the two-hop flow is
exactly 2 times the packet loss ratio experienced by the one-hop
flows, which isn't true.

Even if we assume that p doesn't depend on the rate of the
flows under consideration (which is a story in itself...),
then p only becomes approximately 2p in the 2-hop case if
p is very small.

If a flow traverses two hops, and we assume that the packet
loss probability is independent and equal to p at both
hops, the probability that a packet is dropped for that flow
is the probability that it is dropped at the first hop, plus
the probability that it is not dropped at the first hop and
it is dropped at the second hop.

In other words, it's p + (1-p)*p, which is 2p-p^2, which
is only reasonably close to 2p for small values of p,
and, as I said, this is even assuming that p is independent.
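A quick numeric check of the arithmetic above (a sketch in plain
Python; the function name is mine):

```python
# End-to-end drop probability over two hops with independent,
# equal per-hop loss probability p, as derived in the text:
# p + (1-p)*p = 2p - p^2, which only approaches 2p for small p.
def two_hop_drop(p):
    """Probability that a packet is lost on at least one of two hops."""
    return p + (1 - p) * p  # equivalently: 1 - (1 - p)**2

for p in (0.01, 0.1, 0.5):
    print(f"p={p}: exact={two_hop_drop(p):.4f}, approximation 2p={2 * p:.4f}")
```

For p=0.01 the exact value 0.0199 is close to 2p=0.02, but for
p=0.5 it is 0.75 rather than 1.0, illustrating the point that the
approximation only holds for small p.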

In fact, since the second hop also carries my own two-hop flow,
whose rate has already been reduced along the path, p at the
second hop will be less than p at the first hop, so the real
total p of the two-hop flow will be even less.

Now this may or may not change the intuition of what you're
trying to say here, but in any case, with all these weird
assumptions as a basis, it is hard for me to intuitively
understand what you are saying.

> We can have a whole family, where the rate is proportional to
> p^{-1/alpha};  proportional fairness uses  alpha=1,  TCP uses
> alpha=2.  (Again, this isn't just what I'm proposing; this is what TCP
> already does for flows of equal RTT.)

okay...
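To make the quoted rate ~ p^(-1/alpha) family concrete: if the long
flow sees roughly twice the drop rate of a short flow (as in the
two-hop parking lot), the long:short rate ratio is 2^(-1/alpha),
independent of p itself. A small sketch (function name is mine):

```python
# Rate allocated in proportion to p**(-1/alpha): the ratio between a
# flow seeing drop rate k*p and one seeing p is k**(-1/alpha).
def rate_ratio(alpha, loss_factor=2.0):
    """Long-flow rate relative to a short flow, given the loss-rate factor."""
    return loss_factor ** (-1.0 / alpha)

print(rate_ratio(1))  # proportional fairness (alpha=1): 0.5, i.e. 1:2
print(rate_ratio(2))  # TCP-like (alpha=2): 1/sqrt(2), i.e. 1:sqrt(2)
```

This reproduces the 1:2 versus 1:sqrt(2) splits mentioned earlier in
the thread.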

> > Consider a long path with a wireless last hop. Consider a source
> > which sends across this path, and let's say that all of a sudden,
> > 90% of the packets are lost due to corruption, and that situation
> > stays stable for a while. You argue that the sender should increase
> > its rate.
> >
> > Now consider the first hop, which is the bottleneck, and assume that
> > all other flows sharing it get 100% of their packets across most of
> > the time (they don't traverse a wireless network).
> >
> > The  part that I don't get is: why can it be acceptable for the
> > single sender which only gets 10% of its packets across anyway
> > to increase its rate, and thereby increase the chance for all others
> > to experience congestion?
>
> That comes down to the whole issue of "what is fair?"  TCP wasn't
> designed to be fair for non-identical flows -- that is why flows with a
> larger MSS or shorter RTT get higher throughput.

okay...

> The best way I know of to answer fairness questions is in terms of
> Kelly's utility maximisation.  The conversation would be:
> A: Why should the high-loss flow get less effective throughput?
> B: Because it creates more congestion per byte of throughput
> A: How much less throughput should it get?
> B: That depends on how much benefit it loses by sending at the lower rate.
>
> Let's be specific.  Assume there are two flows sharing a 1Gbps link.
> The "proportional fair" allocation (accounting for loss) would be that
> User A gets 500Mbps throughput, while user B gets 500Mbps through the
> link, but only 50Mbps reaches the destination.  Intuitively, this
> again corresponds to each flow causing the same amount of congestion.

Yes...

> However, remember that TCP is "more fair" than proportional fairness,
> by giving extra bandwidth to the flows which get the least,
> corresponding to  alpha=2.  Thus, in your example, when the flow
> suddenly starts losing 90% of its packets, the TCP-fair thing to do
> would be to reduce its payload by *less* than a factor of 10.  Of
> course, to do that it needs to increase the sending rate slightly.

What?? I'm lost. The part that I don't get is the last sentence.
This whole business with the square root is the sender behavior,
not the reception rate of the receiver. If I understand you
correctly, what you're saying is that, in order for the TCP
receiver to get exactly as many packets as it would get if there
were no corruption, the sender would have to increase the sending
rate.
Who says that you should do that?

> This flow causes more congestion than the lossless flow, the same as a
> two-hop flow causes more congestion than a one-hop flow.  That is just
> what TCP's fairness scheme implies.

Of course it causes more congestion, but why would TCP-friendliness
imply that you do that? It's all about the sending rate, not the
throughput as seen by the receiver.

> In this example, the ratio would be 1:sqrt(10%).  User A would get
> 240Mbps throughput (and use 240Mbps of bandwidth), while User B would
> get 76Mbps throughput (and use 760Mbps of bandwidth).
>
> For comparison, in a 10-hop parking lot with lossless 316Mbps links,
> each one-hop flow would get 240Mbps, and the 10-hop flow would get
> 76Mbps, using 760 "link * Mbps".
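The arithmetic in this quoted example can be reproduced with a short
sketch (variable names and the solving step are mine; the constraints
are those stated above: sending rates share the 1Gbps link, flow B
delivers only 10% of what it sends, and the goodput ratio is
sqrt(10%) : 1):

```python
import math

capacity = 1000.0  # Mbps shared by the sending rates of both flows
delivery = 0.1     # fraction of B's packets that survive the lossy hop

# Constraints: a + b_send = capacity, b_goodput = delivery * b_send,
# and b_goodput / a = sqrt(delivery). Substituting gives:
# a * (1 + sqrt(delivery) / delivery) = capacity.
a = capacity / (1 + math.sqrt(delivery) / delivery)
b_send = capacity - a
b_goodput = delivery * b_send

print(round(a), round(b_send), round(b_goodput))  # roughly 240, 760, 76
```

This matches the quoted figures: User A gets about 240Mbps, User B
sends about 760Mbps but delivers only about 76Mbps.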
>
> > I was also not aware that you're making a concrete proposal
> > for a certain behavior - at least back in Marina Del Rey, I thought that
> > you're merely trying to point out an interesting counterintuitive issue
>
> True.  I don't have a particular protocol (I'll put my vote behind
> LT-TCP :) and I'm not sure if we should be TCP-friendly or
> proportionally fair.  However, I think fairness should be consistently
> designed in to protocols, rather than just "emerging" from a set of
> rules.  In that case, being TCP-friendly implies we should
> increase the send rate of flows suffering packet corruption.

If I understood you correctly, I disagree. That's because
TCP-friendliness is about the sending rate. I think that
the earliest RFC to define it is RFC 2309:

   We introduce the term "TCP-compatible" for a flow that behaves under
   congestion like a flow produced by a conformant TCP.  A TCP-
   compatible flow is responsive to congestion notification, and in
   steady-state it uses no more bandwidth than a conformant TCP running
   under comparable conditions (drop rate, RTT, MTU, etc.)

Despite the slightly different term, it's clear that this is
about the rate at which the sender sends, not about the rate
at which the receiver is supposed to get its data.

> > In any case it would be helpful if you could explain it by means of
> > one or two simple examples like the one I gave above.
>
> I hope the above example explains the intuition.

Somewhat - but at that point, I'd say that either I still
don't fully understand it, or the main message is simply:
"in order for a receiver to get as much as it would
get from a standard TCP sender without corruption in
the network, the sender has to increase its rate".

If this is what you're saying, it's obviously true, but
this doesn't have anything to do with TCP-friendliness
and doesn't convince me in the least that this is
appropriate behavior.

Cheers,
Michael