[Iccrg] Answer to MulTFRC reviews

Michael Welzl michawe at ifi.uio.no
Thu Jan 21 13:08:20 GMT 2010


On Jan 21, 2010, at 1:05 AM, Lachlan Andrew wrote:

> Greetings Michael,
>
> 2010/1/20 Michael Welzl <michawe at ifi.uio.no>:
>> Concern #4:
>>
>> MulTFRC, like MulTCP, simply seems to increase (or decrease) the
>> aggressiveness, without regard for how large or small the BDP is.
>> Since TCP is (too?) aggressive on small-BDP paths, but not aggressive
>> enough on large-BDP paths, it is not clear that a "safe" setting of N
>> will be useful.  I think even extremely aggressive algorithms are
>> unlikely to cause congestion collapse in the Internet, and so from
>> that point of view, MulTFRC is "safe".  However, if the
>> user/application can set  N,  then it could easily become part of the
>> "Linux beats Microsoft" arms race Michael described at PFLDnet.
>> (Lachlan)
>>
>> I would like to work out a solution for the small-BDP vs. large-BDP  
>> path
>> concern, but for this, I would need some more details about your
>> statement that "TCP is (too?) aggressive on small-BDP-paths".
>> Could you elaborate, maybe with a reference to a study showing
>> that TCP is too aggressive on small-BDP paths, and what exactly
>> you mean by "small"?
>
> My point was that standard TCP is   sufficiently   aggressive in a LAN
> environment.

Not necessarily - this depends on the buffer size:
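To illustrate what I mean, here is a rough back-of-the-envelope
sketch (my own illustration with assumed numbers: one long-lived
Reno-style flow, a drop-tail queue, fixed base RTT). With a buffer
of one BDP the link stays full, with no buffer at all a single flow
only achieves about 75%, and in between it depends on the
buffer/BDP ratio:

    # Sketch: utilization of one Reno sawtooth vs. buffer size
    # (assumptions: one flow, drop-tail queue, fixed base RTT,
    # +1 packet per RTT, window halved on loss).
    def reno_utilization(bdp, buf):
        if buf >= bdp:
            return 1.0                    # queue never drains completely
        w_max = bdp + buf                 # window at which the buffer overflows
        w_min = w_max / 2.0               # window right after the halving
        t_idle = bdp - w_min              # RTTs with the link under-filled
        t_full = w_max / 2.0 - t_idle     # RTTs at line rate
        data = (w_min + bdp) / 2.0 * t_idle + bdp * t_full
        return data / (bdp * w_max / 2.0)

    for frac in (0.0, 0.25, 0.5, 1.0):
        print(frac, round(reno_utilization(100, 100 * frac), 2))
    # -> 0.75, 0.89, 0.96, 1.0 (roughly)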


> One example where standard TCP is too aggressive is in highly-buffered
> ADSL links.  (You could argue that the problem is the size of the
> buffer rather than the fact that the BDP is low, but if the BDP were
> higher then that size of buffer would be fine.)  The same is true of
> basically any loss-based algorithm, although Lawrence Stewart here at
> Swinburne showed that H-TCP's concave increase actually causes lower
> average queueing than Reno in these cases.

How do you define "too aggressive"?

It causes delay by letting the queue overflow, which, as you
rightly say, is true for any loss-based algorithm. Having been
part of that work at Swinburne myself, I happen to know Lawrence's
investigation quite well :-)

It led to some interesting and partly surprising conclusions,
but none that could be interpreted as "mechanism X is *too*
aggressive", not least because no threshold was set for
what "too" is supposed to mean. All loss-based mechanisms
can have a bad impact on VoIP, no doubt about that, and
some are worse than others.
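
Just to put a rough number on that delay (hypothetical figures of
my own, not taken from Lawrence's measurements): once a loss-based
flow has filled the uplink buffer of an ADSL line, the standing
queueing delay is simply the buffer size divided by the uplink rate.

    # Hypothetical example: standing delay once the uplink buffer is full.
    buf_bytes = 64 * 1024        # 64 KB of uplink buffering (assumed)
    uplink_bps = 512 * 1000      # 512 kbit/s ADSL uplink (assumed)
    print(buf_bytes * 8.0 / uplink_bps)   # -> ~1 s added to every VoIP packet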

I would argue that, while this is an interesting study, trying
to optimize a mechanism for it means optimizing for a very
poorly tuned special circumstance, where the only real
solution is to throw all non-delay-based schemes away.

Indeed, the buffer size is the problem, and I don't see how
this fact is changed by saying "if the BDP were higher then that
size of buffer would be fine" - because it just isn't higher,
and if it were, the problem would disappear, both with
MulTFRC and with all other loss-based mechanisms.


>> About the arms race concern: one way to work against this
>> is to have a uniform system-wide non-user-accessible
>> upper limit, which we recommend to have.
>
> That limits the scalability.  It may buy us an extra generation of
> Ethernet (increase aggressiveness by a factor of 10 to match going
> from GbE to 10GbE), but doesn't address the inherent scalability
> problem.  We should be aiming to make changes now which can scale to
> bandwidths a million times higher than we currently have, like the
> original Tahoe/Reno algorithm did.

I don't get this; most probably it's a misunderstanding.
A limit of, e.g., N=6 emulated flows will always give you
at least 95% link utilization, irrespective of the BDP.
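
For intuition (my own back-of-the-envelope reasoning, assuming
MulTCP-style behaviour where a single loss reduces the aggregate
by a factor of 1/(2N), linear increase in between, and the worst
case of no bottleneck buffering): the aggregate then oscillates
between the capacity C and C*(1 - 1/(2N)), so the long-run average
is roughly C*(1 - 1/(4N)) - and that fraction does not depend on
the BDP.

    # Sketch: worst-case average utilization of an aggregate that
    # backs off by 1/(2N) on a loss (MulTCP-style reasoning, no buffer).
    def worst_case_utilization(n):
        return 1.0 - 1.0 / (4.0 * n)     # mean of C and C*(1 - 1/(2N))

    print(worst_case_utilization(6))     # -> ~0.958, i.e. >= 95%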


>> One could argue that the value of this system-wide upper
>> limit could itself be a part of the arms race, no matter
>> what the specification recommends. However, we believe
>> this to be unlikely. As we state in the draft:
>> "Thus, setting N to a much larger value than the values
>> mentioned above will only yield a marginal benefit in
>> isolation but can significantly affect other traffic."
>
> That comment seems to apply only to current BDPs.  The setting of that
> upper limit is indeed the cause for concern.

Just like above, either you're misunderstanding what this
limit does or I'm misunderstanding your point.


>> Concern #5: "The abstract has a rather weak motivation and should
>> be strengthened"
>> (Dirceu)
>>
>> We'll do that.
>
> You can also think about why/whether this is the "right" solution for
> the need, as well as making a stronger case that there is a need.
>
> Of course, for an experimental RFC it need not be the very best
> solution, but receiving that stamp is a strong endorsement.  I'd be
> more in favour of a rate-based version of one of the new-generation
> algorithms already before the ICCRG (C-TCP, CUBIC or H-TCP) or LEDBAT.
> Once simulation/test-bed studies have shown which of the four options
> seems most promising for "new TFRC", we can set the best one loose on
> the internet.

Now that really makes no sense to me, as MulTFRC is
not a TCP variant, and is by no means meant to be one.
Being slowly reactive, yet having a smooth sending rate
(which most TCP applications wouldn't care about),
it is just not designed to replace Reno, C-TCP, CUBIC,
H-TCP, etc. You're comparing apples and oranges here.

Cheers,
Michael



