<html>
<body>
Dirceu,<br><br>
Why do you want the comparison to be between /sessions/, given that
sessions are defined by identifiers drawn arbitrarily many times from a
large number space?<br><br>
And are all your metrics deliberately instantaneous metrics? Or do you
think that fairness should actually be judged over time? <br><br>
And when you say that routers should have the power to share their link
capacity, do you mean any router, or only a router able to see all the
sessions of one user?<br><br>
<br><br>
Bob<br><br>
At 18:50 04/04/2008, Dirceu Cavendish wrote:<br><br>
<blockquote type=cite class=cite cite="">One more heretic coming out of
the woodwork. However, my disclaimer is that I sympathize with the concept
:-)<br><br>
<br><br>
On top of all technical difficulties, the concept of "fairness"
in itself is elusive. All things being equal, one session should
have the same (pick one - bandwidth, transaction response time,
packet loss) as the others. The utopia comes from the "all
things being equal" part - in the Internet?<br><br>
<br><br>
Zeroing in on the bandwidth metric, per-flow queueing would give routers
the power to share their link capacities however they see fit - WFQ,
max-min fairness, etc. - independently of other factors, such as the
sessions' RTTs. In this scenario, an aggressive congestion control would
cause its own traffic to be spilled, so "isolation" of sessions would be
achieved.<br>
<br>
<br><br>
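The max-min fair sharing mentioned above can be made concrete with the classic progressive-filling algorithm. A minimal Python sketch (illustrative only: a single link, and the function name is mine):

```python
def max_min_fair(capacity, demands):
    """Progressive-filling max-min fair allocation of one link's capacity.

    Flows whose demand is at or below the current fair share keep their
    demand; the leftover capacity is split evenly among the rest.
    """
    alloc = [0.0] * len(demands)
    remaining = float(capacity)
    active = list(range(len(demands)))
    while active and remaining > 1e-12:
        share = remaining / len(active)
        satisfied = [i for i in active if demands[i] - alloc[i] <= share]
        if satisfied:
            for i in satisfied:
                remaining -= demands[i] - alloc[i]
                alloc[i] = demands[i]
            active = [i for i in active if i not in satisfied]
        else:
            # No flow can be fully satisfied: split what's left evenly.
            for i in active:
                alloc[i] += share
            remaining = 0.0
    return alloc

# A 10 Mb/s link shared by demands of 2, 4 and 8 Mb/s: the 2 Mb/s flow
# is satisfied, and the other two split the remaining 8 Mb/s.
print(max_min_fair(10, [2, 4, 8]))  # → [2.0, 4.0, 4.0]
```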
Dirceu<br><br>
----- Original Message ----<br>
From: Bob Briscoe <rbriscoe@jungle.bt.co.uk><br>
To: Matt Mathis <mathis@psc.edu><br>
Cc: iccrg@cs.ucl.ac.uk<br>
Sent: Friday, April 4, 2008 6:53:43 AM<br>
Subject: Re: [Iccrg] Heresy following "TCP:
Train-wreck"<br><br>
Matt,<br><br>
From one heretic to another...<br><br>
0) Priority to shortest jobs<br>
In your bullet list you missed a point that I believe is the most <br>
important one: about arrivals and departures of whole flows. If you <br>
have a mix of flow activity at a shared bottleneck, some continuously
<br>
streaming and some intermittent (as we do on the Internet), you can <br>
make the intermittent flows go much, much faster while hardly <br>
prolonging the completion time of the continuous flows. It's totally
<br>
unfair (and very inefficient) for the intermittent flows to get the <br>
same bit rate as the continuous flows, because their behaviour is <br>
much more multiplexable. A picture may help: <br>
<<a href="http://www.cs.ucl.ac.uk/staff/B.Briscoe/presents/0801cfp/shortestJobsPriority.png">
http://www.cs.ucl.ac.uk/staff/B.Briscoe/presents/0801cfp/shortestJobsPriority.png</a>
><br><br>
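The effect in that picture can be reproduced with a toy calculation: compare an idealised fair-share (processor-sharing) scheduler against shortest-job-first for nine short transfers sharing a unit-rate link with one long one. The sizes below are illustrative, not taken from the linked slide:

```python
def srpt_completions(sizes):
    """Shortest job first: with all jobs arriving at t=0, simply run
    them in size order and record completion times."""
    t, done = 0.0, {}
    for i, s in sorted(enumerate(sizes), key=lambda p: p[1]):
        t += s
        done[i] = t
    return [done[i] for i in range(len(sizes))]

def ps_completions(sizes):
    """Processor sharing (idealised per-flow fair queuing): each of the
    k unfinished jobs gets 1/k of the link until it completes."""
    remaining = {i: float(s) for i, s in enumerate(sizes)}
    t, done = 0.0, {}
    while remaining:
        k = len(remaining)
        smallest = min(remaining.values())
        dt = smallest * k          # time until the smallest job drains
        t += dt
        for i in list(remaining):
            remaining[i] -= dt / k
            if remaining[i] <= 1e-12:
                done[i] = t
                del remaining[i]
    return [done[i] for i in range(len(sizes))]

sizes = [1] * 9 + [100]    # nine short transfers plus one long stream
print(ps_completions(sizes))    # shorts all finish at t=10, long at t=109
print(srpt_completions(sizes))  # shorts at t=1..9, long still at t=109
```

The long flow completes at t=109 under both schedulers, while the short flows' mean completion time halves: priority to short jobs costs the continuous flow essentially nothing.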
If you're intrigued by how we'd move to such a world, I recently <br>
posted a bit more on this here: <br>
<<a href="http://episteme.arstechnica.com/eve/forums?a=tpc&s=50009562&f=174096756&m=703000231931&r=160007441931#160007441931">
http://episteme.arstechnica.com/eve/forums?a=tpc&s=50009562&f=174096756&m=703000231931&r=160007441931#160007441931</a>
><br><br>
Now thoughts on your main points:<br><br>
1) Fair queuing<br>
Many broadband deployments already do some form of per-flow FQ <br>
(usually WFQ) at the most likely bottleneck (the broadband remote <br>
access server, BRAS). I'm less sure what common practice is in <br>
broadband cellular networks. I believe WFQ is not such an obvious <br>
choice for radio access because of the unpredictable radio link rate.
<br>
I think there are a range of schemes, some distributed at the base <br>
station and others centralised at the radio network controller
(RNC).<br><br>
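One cheap approximation of the per-flow FQ these boxes implement is Deficit Round Robin. A simplified sketch (byte-counting only, no virtual timestamps; the function and flow names are mine):

```python
from collections import deque

def deficit_round_robin(flows, quantum=1500):
    """Serve per-flow packet queues with Deficit Round Robin.

    `flows` maps a flow id to a list of packet sizes in bytes.
    Each round, every backlogged flow's deficit grows by one quantum,
    and it may send packets until the deficit runs out.
    Returns the order in which packets are sent.
    """
    queues = {f: deque(pkts) for f, pkts in flows.items() if pkts}
    deficit = {f: 0 for f in queues}
    sent = []
    while queues:
        for f in list(queues):
            deficit[f] += quantum
            q = queues[f]
            while q and q[0] <= deficit[f]:
                pkt = q.popleft()
                deficit[f] -= pkt
                sent.append((f, pkt))
            if not q:
                del queues[f]
                deficit[f] = 0   # reset deficit when the queue empties
    return sent

# A bulk flow with large packets cannot starve a thin flow of small ones:
order = deficit_round_robin({"bulk": [1500] * 4, "thin": [100] * 4})
print(order)
```

With a 1500-byte quantum, the thin flow's four packets all get through in the first round, interleaved with only one bulk packet.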
Incidentally, back to the first point, I recently realised that most
<br>
deployed per-flow FQ tends to help short duration flows, by giving <br>
higher priority at the start of a flow and reducing it the longer each <br>
flow continues. Altho this helps a little bit today, it's also <br>
actually a huge potential problem for the future - same problem as <br>
TCP: it still converges to the incorrect goal of sharing out bit-rate
<br>
equally for each flow at the bottleneck - tho at least at a BRAS it's
<br>
done separately for each user.<br><br>
My point about priority to shortest jobs (and your point about huge <br>
differences between numbers of flows per app) shows that flow rates need
<br>
to be /very/ unequal to be fair. So per-flow FQ embedded in the <br>
network will be fighting what we really need to do in the transport <br>
in future (tho operators would obviously turn off per-flow FQ if they
<br>
were trying to encourage the new world I'm talking about).<br><br>
<br>
2) Edge bottlenecks protecting core<br>
Although it would be nice, we can't mandate that the core be <br>
protected by bottlenecks at the edge. Technology limits & economics
<br>
determine these things, not the IETF/IRTF:<br>
* At present technology trends are moving bottlenecks gradually <br>
closer to the core as access rates increase, because the newest <br>
access is now using the same technology as the core (optics) and <br>
there's nothing faster on the horizon.<br>
* However, economics always pushes the bottleneck location the other
<br>
way - outwards. The cost of a bps of logical channel capacity follows <br>
about a square-root (?) law w.r.t. the bandwidth of the physical pipe <br>
in which the logical channel sits. I.e. where you need many physical <br>
cables/fibres to cover a dispersed area, the cost per bps is much <br>
greater than where each bps can be provided within a few large
pipes.<br><br>
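Taking the (hedged) square-root law at face value, the per-bps saving from aggregation is easy to quantify; the capacities below are illustrative:

```python
import math

def cost_per_bps(capacity_bps, k=1.0):
    """Assume pipe cost grows like k*sqrt(capacity), per the square-root
    law suggested above (an assumption, not established fact).
    Then cost per bps falls as k/sqrt(capacity)."""
    return k * math.sqrt(capacity_bps) / capacity_bps

# Aggregating traffic from 10 Mb/s access links into a 1 Gb/s core pipe:
edge = cost_per_bps(10e6)   # dispersed access capacity
core = cost_per_bps(1e9)    # large aggregated pipe
print(edge / core)          # ~10: each core bps is about 10x cheaper
```

Under this assumed law, a 100x larger pipe carries each bps at a tenth of the cost, which is why economics pushes bottlenecks outwards even as optics pulls them inwards.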
The net effect of these two opposing forces is pushing bottlenecks <br>
inwards at the moment - particularly onto border routers. The message
<br>
of a talk I gave to a recent workshop (on photonics research and <br>
future Internet design) was that the challenge is to find ways to <br>
push complex trust-related stuff currently done at border routers <br>
outward to the edge, so we can have dumb all-optical interconnection
<br>
without electronics: <br>
<<a href="http://www.cs.ucl.ac.uk/staff/B.Briscoe/present.html#0709ecoc-fid">
http://www.cs.ucl.ac.uk/staff/B.Briscoe/present.html#0709ecoc-fid</a>
><br><br>
3) RTT fairness<br>
I'd say this is only a small part of the problem, because it's <br>
relatively easy to solve in the transport alone - e.g. FAST TCP <br>
[Jin04:FAST_TCP] ensures its dynamics are slower for longer RTTs but,
<br>
even tho it takes longer to get there, it ends up at the same rate as
<br>
competing FAST TCPs with shorter RTTs.<br><br>
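FAST's RTT-independence falls out of its window update. A toy iteration (simplified: queueing delay is held fixed, whereas in reality it depends on all competing flows; the alpha and gamma values are illustrative):

```python
def fast_rate(base_rtt, queue_delay, alpha=200.0, gamma=0.5, steps=500):
    """Iterate FAST TCP's window update w <- (1-g)w + g(baseRTT/RTT*w + a)
    with a fixed queueing delay.  At equilibrium w = (baseRTT/RTT)*w + a,
    so rate = w/RTT = alpha/queue_delay -- independent of the flow's RTT."""
    rtt = base_rtt + queue_delay
    w = 10.0                     # starting window, packets
    for _ in range(steps):
        w = (1 - gamma) * w + gamma * (base_rtt / rtt * w + alpha)
    return w / rtt               # packets per second

short = fast_rate(base_rtt=0.010, queue_delay=0.020)   # 10 ms path
long_ = fast_rate(base_rtt=0.100, queue_delay=0.020)   # 100 ms path
print(short, long_)   # both converge to alpha/queue_delay = 10000 pkt/s
```

The long-RTT flow's dynamics are indeed slower (its per-step contraction factor is closer to 1), but both settle at the same rate, as the text describes.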
<br>
Bob<br><br>
[Jin04:FAST_TCP] Cheng Jin, David Wei and Steven Low, "FAST TCP: <br>
Motivation, Architecture, Algorithms, Performance", In Proc. IEEE <br>
Conference on Computer Communications (Infocom'04), March 2004<br><br>
<br><br>
At 17:35 02/04/2008, Matt Mathis wrote:<br>
>I just attended the "The Future of TCP: Train-wreck or
Evolution?" <br>
>at Stanford last week, and it solidified my thoughts on a subject
<br>
>that is sure to be controversial.<br>
><br>
>I think it is time to abandon the concept of "TCP-Friendly"
and <br>
>instead expect the network to protect itself and other users from
<br>
>aggressive protocols and applications. For the moment I am
going to <br>
>assume two mechanisms, although I suspect that there will prove to be
more.<br>
><br>
>1) Deploy some form of Fair Queuing at the edges of the network.<br>
><br>
>2) Protect the core by bottlenecks at the edges of the Internet.<br>
><br>
>I observe that both of these mechanisms are already being <br>
>implemented due to existing market forces, and the natural <br>
>consequence of their implementation is to make TCP-friendliness a
<br>
>whole lot less important. I admit that it is not clear at this
<br>
>point if these two mechanisms will ultimately prove to be sufficient
<br>
>to address fairness in all situations, such as overloaded core <br>
>routers, but I suspect that sufficient mechanisms do exist.<br>
><br>
>Supporting arguments:<br>
><br>
>FQ is already being deployed at the edges to solve several existing
<br>
>and growing fairness problems:<br>
><br>
>* Non-IETF, UDP protocols that are non-responsive.<br>
><br>
>* P2P and other applications that open huge numbers of
connections.<br>
><br>
>* Stock TCP is egregiously unfair when very short RTT flows compete
with wide<br>
> area flows. This can be a real killer in a number of
settings such as data<br>
> centers and university campuses. The symptoms of this
problem will become<br>
> more pronounced as TCP autotuning continues to be rolled out
in Vista,<br>
> Linux, and various BSDs.<br>
><br>
>* Autotuning will also greatly magnify RFC 2309 [1] problems, since
every<br>
> single TCP flow with sufficient data will cause congestion
somewhere in the<br>
> network. At the very least this will gradually force the
retirement of<br>
> drop-tail equipment, creating the opportunity for RED and/or
FQ. Since RED<br>
> by itself is insufficient to solve the other fairness
problems, it will not<br>
> be the first choice replacement.<br>
><br>
>I should note that "Fair Queuing" is overly specific. The network <br>
The network <br>
>needs to do something to large flows to prevent them from <br>
>overwhelming smaller flows and to limit queue occupancy. FQ is
one <br>
>way, but there are others.<br>
><br>
>If you have ever shared a drop-tail home router with a teenager, you
<br>
>might have observed some of these issues first hand, as has <br>
>Comcast.[2] As I understand it, some form of enforced fairness is
<br>
>now part of all commercial broadband services.<br>
><br>
>The core of the Internet is already mostly protected by bottlenecks
<br>
>at the edges. This is because ISPs can balance the allocation
of <br>
>revenue from customers between the relatively expensive access link,
<br>
>its own backbone links and interconnections to other ISPs.
Since <br>
>congestion in the core has proven to cause complaints from <br>
>commercial customers (and perhaps SLA problems), most providers are
<br>
>careful to keep adequate capacity in the core, and can do so pretty
<br>
>easily, as long as their duty cycle models hold true.<br>
><br>
>Are these two mechanisms sufficient to make TCP-friendliness <br>
>completely moot? Probably not.<br>
><br>
>We still have some work to do:<br>
><br>
>First, stop whining about non-TCP-friendly protocols. They are here <br>
here <br>
>to stay and they can't hear us. We are wasting our breath and
<br>
>impeding real progress in well designed alternatives to <br>
>"TCP-friendly". This concept came from an era when
the Internet was <br>
>a gentleman's club, but now it needs to be retired.<br>
><br>
>Second, blame the network when the network deserves it. In
<br>
>particular if there are drop tail queues without AQM, be very <br>
>suspicious of RFC2309 problems. In fact every drop tail queue
<br>
>without AQM should be viewed as a bug waiting to bite <br>
>someone. Likewise remember that "TCP-friendly" is
extremely unfair <br>
>when the RTTs are extremely different.<br>
><br>
>Third, think about the hard cases: overloaded interconnects, <br>
>failure conditions, etc. Can FQ be approximated at core <br>
>scales? Where else are my proposed mechanisms
insufficient? I'm sure <br>
>there are some.<br>
><br>
>Fourth, start dreaming about what it would take to make Moore's law
<br>
>apply to end-to-end protocol performance, as it does to just about
<br>
>everything else in the computing universe. I suspect that in
some <br>
>future hindsight, we will come to realize that TCP-friendly was <br>
>actually an untenable position, and has held us back from important
innovations.<br>
><br>
>[1] RFC2309 "Recommendations on Queue Management and Congestion
<br>
>Avoidance in the Internet", Bob Braden, et al.<br>
><br>
>[2] Richard Bennett "New and Improved Traffic Shaping"
<br>
><a href="http://bennett.com/blog/index.php/archives/2008/03/27/new-and-improved-traffic-shaping/">
http://bennett.com/blog/index.php/archives/2008/03/27/new-and-improved-traffic-shaping/</a>
<br>
>------------------<br>
><br>
>It was a very stimulating conference!<br>
>Thanks Nandita and everyone else who made it happen!<br>
>--MM--<br>
>-------------------------------------------<br>
>Matt Mathis
<a href="http://staff.psc.edu/mathis">http://staff.psc.edu/mathis</a><br>
>Work:412.268.3319 Home/Cell:412.654.7529<br>
>-------------------------------------------<br>
><br>
><br>
>_______________________________________________<br>
>Iccrg mailing list<br>
><a href="mailto:Iccrg@cs.ucl.ac.uk">Iccrg@cs.ucl.ac.uk</a><br>
><a href="http://oakham.cs.ucl.ac.uk/mailman/listinfo/iccrg" eudora="autourl">
http://oakham.cs.ucl.ac.uk/mailman/listinfo/iccrg</a><br><br>
____________________________________________________________________________<br>
Bob Briscoe,
<<a href="mailto:bob.briscoe@bt.com">bob.briscoe@bt.com</a>
> Networks Research Centre, BT
Research<br>
B54/77 Adastral Park,Martlesham Heath,Ipswich,IP5
3RE,UK. +44 1473 645196 <br><br>
<br><br>
<br>
<br>
</blockquote>
<x-sigsep><p></x-sigsep>
____________________________________________________________________________<br>
Bob Briscoe, <bob.briscoe@bt.com>
Networks Research Centre, BT Research<br>
B54/77 Adastral Park,Martlesham Heath,Ipswich,IP5
3RE,UK. +44 1473 645196</body>
</html>