<html><head><style type="text/css"><!-- DIV {margin:0px;} --></style></head><body><div style="font-family:times new roman, new york, times, serif;font-size:10pt"><P>One more heretic coming out of the woodwork. However, my disclaimer is that I sympathize with the concept :-)</P>
<P> </P>
<P>On top of all the technical difficulties, the concept of "fairness" is itself elusive. All things being equal, each session should get the same (pick one - bandwidth, transaction response time, packet loss) as the others. The utopia comes from the "all things being equal" part - in the Internet?</P>
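<P>(For what it's worth, that elusiveness can at least be put on a scale: Jain's fairness index is the usual way to score how evenly a chosen metric is shared. A minimal sketch in Python - the function name and examples are mine, purely illustrative:)</P>

```python
def jain_index(rates):
    """Jain's fairness index for a list of per-session rates.

    Returns 1.0 when every session gets the same share, and tends
    toward 1/n when a single session takes everything.
    """
    n = len(rates)
    total = sum(rates)
    return total * total / (n * sum(r * r for r in rates))

# Equal shares score 1.0; one session hogging the link scores 1/n.
print(jain_index([5, 5, 5, 5]))   # 1.0
print(jain_index([20, 0, 0, 0]))  # 0.25
```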
<P> </P>
<P>Zeroing in on the bandwidth metric, per-flow queueing would give routers the power to share their link capacities however they see fit - WFQ, max-min fairness, etc. - independent of other factors, such as the sessions' RTTs. In this scenario, an aggressive congestion control would only cause its own traffic to be dropped, so "isolation" of sessions would be achieved.</P>
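<P>(For concreteness, max-min fairness is computable by "progressive filling": repeatedly give every unsatisfied flow an equal share of what is left, satisfying small demands in full. A hedged sketch - function name and interface are my own, not from the thread:)</P>

```python
def max_min_share(capacity, demands):
    """Max-min fair allocation via progressive filling.

    demands: mapping flow -> desired rate. Flows wanting less than the
    current equal share are fully satisfied; the leftover capacity is
    re-split equally among the remaining flows.
    """
    alloc = {}
    remaining = dict(demands)
    cap = float(capacity)
    while remaining:
        fair = cap / len(remaining)
        satisfied = {f: d for f, d in remaining.items() if d <= fair}
        if not satisfied:
            # nobody fits under the equal share: split what's left equally
            for f in remaining:
                alloc[f] = fair
            break
        for f, d in satisfied.items():
            alloc[f] = d
            cap -= d
            del remaining[f]
    return alloc

# Three flows on a 10 Mb/s link: small demands are met in full,
# the greedy flow gets the residual equal share.
print(max_min_share(10, {"a": 2, "b": 4, "c": 10}))  # {'a': 2, 'b': 4, 'c': 4.0}
```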
<P> </P>
<P>Dirceu</P>
<DIV style="FONT-SIZE: 10pt; FONT-FAMILY: times new roman, new york, times, serif"><BR>
<DIV style="FONT-SIZE: 12pt; FONT-FAMILY: times new roman, new york, times, serif">----- Original Message ----<BR>From: Bob Briscoe <rbriscoe@jungle.bt.co.uk><BR>To: Matt Mathis <mathis@psc.edu><BR>Cc: iccrg@cs.ucl.ac.uk<BR>Sent: Friday, April 4, 2008 6:53:43 AM<BR>Subject: Re: [Iccrg] Heresy following "TCP: Train-wreck"<BR><BR>Matt,<BR><BR>From one heretic to another...<BR><BR>0) Priority to shortest jobs<BR>In your bullet list you missed a point that I believe is the most <BR>important one: about arrivals and departures of whole flows. If you <BR>have a mix of flow activity at a shared bottleneck, some continuously <BR>streaming and some intermittent (as we do on the Internet), you can <BR>make the intermittent flows go much much faster while hardly <BR>prolonging the completion time of the continuous flows. It's totally <BR>unfair (and very inefficient) for the intermittent flows to get the <BR>same bit rate as the continuous flows,
because their behaviour is <BR>much more multiplexable. A picture may help: <BR><<A href="http://www.cs.ucl.ac.uk/staff/B.Briscoe/presents/0801cfp/shortestJobsPriority.png" target=_blank>http://www.cs.ucl.ac.uk/staff/B.Briscoe/presents/0801cfp/shortestJobsPriority.png</A>><BR><BR>If you're intrigued by how we'd move to such a world, I recently <BR>posted a bit more on this here: <BR><<A href="http://episteme.arstechnica.com/eve/forums?a=tpc&s=50009562&f=174096756&m=703000231931&r=160007441931#160007441931" target=_blank>http://episteme.arstechnica.com/eve/forums?a=tpc&s=50009562&f=174096756&m=703000231931&r=160007441931#160007441931</A>><BR><BR>Now thoughts on your main points:<BR><BR>1) Fair queuing<BR>Many broadband deployments already do some form of per-flow FQ <BR>(usually WFQ) at the most likely bottleneck (the broadband remote <BR>access server, BRAS). I'm less sure what common practice is in
<BR>broadband cellular networks. I believe WFQ is not such an obvious <BR>choice for radio access because of the unpredictable radio link rate. <BR>I think there are a range of schemes, some distributed at the base <BR>station and others centralised at the radio network controller (RNC).<BR><BR>Incidentally, back to the first point, I recently realised that most <BR>deployed per-flow FQ tends to help short duration flows, by giving <BR>higher priority at the start of a flow and reducing it the longer each <BR>flow continues. Altho this helps a little bit today, it's also <BR>actually a huge potential problem for the future - same problem as <BR>TCP: it still converges to the incorrect goal of sharing out bit-rate <BR>equally for each flow at the bottleneck - tho at least at a BRAS it's <BR>done separately for each user.<BR><BR>My point about priority to shortest jobs (and your point about huge <BR>differences between no.s of flows per app) shows that flow
rates need <BR>to be /very/ unequal to be fair. So per-flow FQ embedded in the <BR>network will be fighting what we really need to do in the transport <BR>in future (tho operators would obviously turn off per-flow FQ if they <BR>were trying to encourage the new world I'm talking about).<BR><BR><BR>2) Edge bottlenecks protecting core<BR>Although it would be nice, we can't mandate that the core be <BR>protected by bottlenecks at the edge. Technology limits & economics <BR>determine these things, not the IETF/IRTF:<BR>* At present technology trends are moving bottlenecks gradually <BR>closer to the core as access rates increase, because the newest <BR>access is now using the same technology as the core (optics) and <BR>there's nothing faster on the horizon.<BR>* However, economics always pushes the bottleneck location the other <BR>way - outwards. The cost of a bps of logical channel capacity follows <BR>about a square root (?) law wrt the bandwidth
of the physical pipe <BR>in which the logical channel sits. Ie. where you need many physical <BR>cables/fibres to cover a dispersed area, the cost per bps is much <BR>greater than where each bps can be provided within a few large pipes.<BR><BR>The net effect of these two opposing forces is pushing bottlenecks <BR>inwards at the moment - particularly onto border routers. The message <BR>of a talk I gave to a recent workshop (on photonics research and <BR>future Internet design) was that the challenge is to find ways to <BR>push complex trust-related stuff currently done at border routers <BR>outward to the edge, so we can have dumb all-optical interconnection <BR>without electronics: <BR><<A href="http://www.cs.ucl.ac.uk/staff/B.Briscoe/present.html#0709ecoc-fid" target=_blank>http://www.cs.ucl.ac.uk/staff/B.Briscoe/present.html#0709ecoc-fid</A>><BR><BR>3) RTT fairness<BR>I'd say this is only a small part of the problem, because it's <BR>relatively
easy to solve in the transport alone - e.g. FAST TCP <BR>[Jin04:FAST_TCP] ensures its dynamics are slower for longer RTTs but, <BR>even tho it takes longer to get there, it ends up at the same rate as <BR>competing FAST TCPs with shorter RTTs.<BR><BR><BR>Bob<BR><BR>[Jin04:FAST_TCP] Cheng Jin, David Wei and Steven Low "FAST TCP: <BR>Motivation, Architecture, Algorithms, Performance", In "Proc. IEEE <BR>Conference on Computer Communications (Infocomm'04)" (March, 2004)<BR><BR><BR><BR>At 17:35 02/04/2008, Matt Mathis wrote:<BR>>I just attended the "The Future of TCP: Train-wreck or Evolution?" <BR>>at Stanford last week, and it solidified my thoughts on a subject <BR>>that is sure to be controversial.<BR>><BR>>I think it is time to abandon the concept of "TCP-Friendly" and <BR>>instead expect the network to protect itself and other users from <BR>>aggressive protocols and applications. For the moment I am going to <BR>>assume
two mechanisms, although I suspect that there will prove to be more.<BR>><BR>>1) Deploy some form of Fair Queuing at the edges of the network.<BR>><BR>>2) Protect the core by bottlenecks at the edges of the Internet.<BR>><BR>>I observe that both of these mechanisms are already being <BR>>implemented due to existing market forces, and the natural <BR>>consequence of their implementation is to make TCP-friendliness a <BR>>whole lot less important. I admit that it is not clear at this <BR>>point if these two mechanisms will ultimately prove to be sufficient <BR>>to address fairness in all situations, such as overloaded core <BR>>routers, but I suspect that sufficient mechanisms do exist.<BR>><BR>>Supporting arguments:<BR>><BR>>FQ is already being deployed at the edges to solve several existing <BR>>and growing fairness problems:<BR>><BR>>* Non-IETF, UDP protocols that are
non-responsive.<BR>><BR>>* P2P and other applications that open huge numbers of connections.<BR>><BR>>* Stock TCP is egregiously unfair when very short RTT flows compete with wide<BR>> area flows. This can be a real killer in a number of settings such as data<BR>> centers and university campuses. The symptoms of this problem will become<BR>> more pronounced as TCP autotuning continues to be rolled out in Vista,<BR>> Linux, and various BSDs.<BR>><BR>>* Autotuning will also greatly magnify RFC 2309 [1] problems, since every<BR>> single TCP flow with sufficient data will cause congestion somewhere in the<BR>> network. At the very least this will gradually force the retirement of<BR>> drop-tail equipment, creating the opportunity for RED and/or FQ. Since RED<BR>> by itself is insufficient to solve the other fairness problems, it will
not<BR>> be the first choice replacement.<BR>><BR>>I should note that "Fair Queuing" is overly specific. The network <BR>>needs to do something to large flows to prevent them from <BR>>overwhelming smaller flows and to limit queue occupancy. FQ is one <BR>>way, but there are others.<BR>><BR>>If you have ever shared a drop-tail home router with a teenager, you <BR>>might have observed some of these issues first hand, as has <BR>>Comcast.[2] As I understand it, some form of enforced fairness is <BR>>now part of all commercial broadband services.<BR>><BR>>The core of the Internet is already mostly protected by bottlenecks <BR>>at the edges. This is because ISPs can balance the allocation of <BR>>revenue from customers between the relatively expensive access link, <BR>>its own backbone links and interconnections to other ISPs. Since <BR>>congestion in the core has proven to
cause complaints from <BR>>commercial customers (and perhaps SLA problems), most providers are <BR>>careful to keep adequate capacity in the core, and can do so pretty <BR>>easily, as long as their duty cycle models hold true.<BR>><BR>>Are these two mechanisms sufficient to make TCP-friendliness <BR>>completely moot? Probably not.<BR>><BR>>We still have some work to do:<BR>><BR>>First, stop whining about non-TCP-friendly protocols. They are here <BR>>to stay and they can't hear us. We are wasting our breath and <BR>>impeding real progress in well designed alternatives to <BR>>"TCP-friendly". This concept came from an era when the Internet was <BR>>a gentleman's club, but now it needs to be retired.<BR>><BR>>Second, blame the network when the network deserves it. In <BR>>particular if there are drop tail queues without AQM, be very <BR>>suspicious of RFC2309 problems. In
fact every drop tail queue <BR>>without AQM should be viewed as a bug waiting to bite <BR>>someone. Likewise remember that "TCP-friendly" is extremely unfair <BR>>when the RTTs are extremely different.<BR>><BR>>Third, think about the hard cases: overloaded interconnects, <BR>>failure conditions, etc. Can FQ be approximated at core <BR>>scales? Where else are my proposed mechanisms insufficient? I'm sure <BR>>there are some.<BR>><BR>>Fourth, start dreaming about what it would take to make Moore's law <BR>>apply to end-to-end protocol performance, as it does to just about <BR>>everything else in the computing universe. I suspect that in some <BR>>future hindsight, we will come to realize that TCP-friendly was <BR>>actually an untenable position, and has held us back from important innovations.<BR>><BR>>[1] RFC2309 "Recommendations on Queue Management and Congestion
<BR>>Avoidance in the Internet", Bob Braden, et al.<BR>><BR>>[2] Richard Bennett "New and Improved Traffic Shaping" <BR>><A href="http://bennett.com/blog/index.php/archives/2008/03/27/new-and-improved-traffic-shaping/" target=_blank>http://bennett.com/blog/index.php/archives/2008/03/27/new-and-improved-traffic-shaping/</A><BR>>------------------<BR>><BR>>It was a very stimulating conference!<BR>>Thanks Nandita and everyone else who made it happen!<BR>>--MM--<BR>>-------------------------------------------<BR>>Matt Mathis <A href="http://staff.psc.edu/mathis" target=_blank>http://staff.psc.edu/mathis</A><BR>>Work:412.268.3319 Home/Cell:412.654.7529<BR>>-------------------------------------------<BR>><BR>><BR>>_______________________________________________<BR>>Iccrg mailing list<BR>><A href="mailto:Iccrg@cs.ucl.ac.uk"
ymailto="mailto:Iccrg@cs.ucl.ac.uk">Iccrg@cs.ucl.ac.uk</A><BR>><A href="http://oakham.cs.ucl.ac.uk/mailman/listinfo/iccrg" target=_blank>http://oakham.cs.ucl.ac.uk/mailman/listinfo/iccrg</A><BR><BR>____________________________________________________________________________<BR>Bob Briscoe, <<A href="mailto:bob.briscoe@bt.com" ymailto="mailto:bob.briscoe@bt.com">bob.briscoe@bt.com</A>> Networks Research Centre, BT Research<BR>B54/77 Adastral Park,Martlesham Heath,Ipswich,IP5 3RE,UK. +44 1473 645196 <BR><BR><BR><BR>_______________________________________________<BR>Iccrg mailing list<BR><A href="mailto:Iccrg@cs.ucl.ac.uk" ymailto="mailto:Iccrg@cs.ucl.ac.uk">Iccrg@cs.ucl.ac.uk</A><BR><A href="http://oakham.cs.ucl.ac.uk/mailman/listinfo/iccrg" target=_blank>http://oakham.cs.ucl.ac.uk/mailman/listinfo/iccrg</A><BR></DIV><BR></DIV></div><br>
</body></html>