[Iccrg] Answer to MulTFRC reviews

Lachlan Andrew lachlan.andrew at gmail.com
Thu Jan 21 00:05:28 GMT 2010


Greetings Michael,

2010/1/20 Michael Welzl <michawe at ifi.uio.no>:
> Concern #4:
>
> MulTFRC, like MulTCP, simply seems to increase (or decrease) the
> aggressiveness, without regard for how large or small the BDP is.
> Since TCP is (too?) aggressive on small-BDP paths, but not aggressive
> enough on large-BDP paths, it is not clear that a "safe" setting of N
> will be useful.  I think even extremely aggressive algorithms are
> unlikely to cause congestion collapse in the Internet, and so from
> that point of view, MulTFRC is "safe".  However, if the
> user/application can set  N,  then it could easily become part of the
> "Linux beats Microsoft" arms race Michael described at PFLDnet.
> (Lachlan)
>
> I would like to work out a solution for the small-BDP vs. large-BDP path
> concern, but for this, I would need some more details about your
> statement that "TCP is (too?) aggressive on small-BDP-paths".
> Could you elaborate, maybe with a reference to a study showing
> that TCP is too aggressive on small-BDP paths, and what exactly
> you mean by "small"?

My point was that standard TCP is *sufficiently* aggressive in a LAN
environment.

One example where standard TCP is too aggressive is in highly-buffered
ADSL links.  (You could argue that the problem is the size of the
buffer rather than the fact that the BDP is low, but if the BDP were
higher then that size of buffer would be fine.)  The same is true of
basically any loss-based algorithm, although Lawrence Stewart here at
Swinburne showed that H-TCP's concave increase actually causes lower
average queueing than Reno in these cases.
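To put rough numbers on that (the figures below are hypothetical, purely
for scale): a loss-based sender keeps pushing until the buffer fills, so
the standing queue delay is just buffer size over link rate, and on a
slow, highly-buffered uplink that dwarfs the path's BDP.

```python
def queueing_delay_s(buffer_bytes, link_bps):
    """Worst-case standing-queue delay once a loss-based sender
    has filled the buffer (delay = buffered bits / link rate)."""
    return buffer_bytes * 8 / link_bps

# Hypothetical ADSL uplink: 1 Mbit/s link, 256 KB device buffer.
delay = queueing_delay_s(256 * 1024, 1_000_000)
print(f"standing queue delay: {delay:.2f} s")   # ~2.1 s once the buffer fills

# Compare with the path BDP at, say, 50 ms RTT: only a few KB.
bdp_bytes = 1_000_000 * 0.050 / 8
print(f"BDP: {bdp_bytes/1024:.1f} KB  vs buffer: 256 KB")
```

Two seconds of self-inflicted queueing on a link whose BDP is a few
kilobytes is what I mean by "too aggressive" at low BDP.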


> About the arms race concern: one way to work against this
> is to have a uniform system-wide non-user-accessible
> upper limit, which we recommend to have.

That limits the scalability.  It may buy us an extra generation of
Ethernet (increase aggressiveness by a factor of 10 to match going
from GbE to 10GbE), but doesn't address the inherent scalability
problem.  We should be aiming to make changes now which can scale to
bandwidths a million times higher than we currently have, like the
original Tahoe/Reno algorithm did.
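To see why a constant multiplier only buys a constant factor, here is a
back-of-the-envelope sketch using the simplified Mathis-style response
function, rate ~ N * 1.22 * MSS / (RTT * sqrt(p)) (the MSS and RTT
values below are illustrative, not from the draft):

```python
MSS = 1500 * 8   # bits (illustrative segment size)
RTT = 0.1        # seconds (illustrative)

def loss_needed(target_bps, n=1):
    """Loss rate at which n Reno-like flows sustain target_bps under
    the simplified response function rate = n*1.22*MSS/(RTT*sqrt(p))."""
    return (n * 1.22 * MSS / (RTT * target_bps)) ** 2

for gbps in (1, 10, 1000):
    p1 = loss_needed(gbps * 1e9)
    p10 = loss_needed(gbps * 1e9, n=10)
    print(f"{gbps:>5} Gbit/s: p ~ {p1:.1e} for N=1, {p10:.1e} for N=10")

# N=10 at 10 Gbit/s needs the same loss rate as N=1 at 1 Gbit/s:
# a fixed multiplier buys one capacity generation, nothing more.
```

The required loss rate falls as the square of the target rate, so a
capped N shifts the curve by a constant while the scalability problem
stays exactly where it was.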

> One could argue that the value of this system-wide upper
> limit could itself be a part of the arms race, no matter
> what the specification recommends. However, we believe
> this to be unlikely. As we state in the draft:
> "Thus, setting N to a much larger value than the values
> mentioned above will only yield a marginal benefit in
> isolation but can significantly affect other traffic."

That comment seems to apply only to current BDPs.  The setting of that
upper limit is indeed the cause for concern.
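A toy fixed-point model makes the mechanics concrete (this is not
MulTFRC's actual equation; the capacity, MSS and RTT below are made up):
an isolated N-flow aggregate on a droptail bottleneck stays pinned at
the link rate while its self-induced loss grows as N^2, which is why
extra N is marginal in isolation but costly to any competing traffic.

```python
MSS = 1500 * 8   # bits (illustrative)
RTT = 0.1        # seconds (illustrative)
C = 100e6        # bottleneck capacity in bit/s (made up)

K = 1.22 * MSS / RTT   # aggregate rate is ~ n*K/sqrt(p)

def equilibrium(n):
    """Self-induced loss and throughput when n Reno-like flows run
    alone on a droptail bottleneck of capacity C (toy fixed point)."""
    p = (n * K / C) ** 2          # loss at which the aggregate just fills C
    rate = min(C, n * K / p ** 0.5)
    return p, rate

for n in (1, 2, 10, 100):
    p, rate = equilibrium(n)
    print(f"N={n:>3}: throughput {rate/1e6:.0f} Mbit/s, "
          f"self-induced loss {p:.1e}")
```

At today's capacities the conclusion holds; raise C (or lower the
ambient loss) and the N at which the benefit tapers off moves with it,
which is again why the choice of the cap matters.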

> Concern #5: "The abstract has a rather weak motivation and should
> be strengthened"
> (Dirceu)
>
> We'll do that.

You can also think about why/whether this is the "right" solution for
the need, as well as making a stronger case that there is a need.

Of course, for an experimental RFC it need not be the very best
solution, but receiving that stamp is a strong endorsement.  I'd be
more in favour of a rate-based version of one of the new-generation
algorithms already before the ICCRG (C-TCP, CUBIC or H-TCP) or LEDBAT.
Once simulation/test-bed studies have shown which of the four options
seems most promising for "new TFRC", we can set the best one loose on
the Internet.

Cheers,
Lachlan

-- 
Lachlan Andrew  Centre for Advanced Internet Architectures (CAIA)
Swinburne University of Technology, Melbourne, Australia
<http://caia.swin.edu.au/cv/landrew> <http://netlab.caltech.edu/lachlan>
Ph +61 3 9214 4837


