[Iccrg] IETF plenary: fast timescales

ken carlberg carlberg at g11.org.uk
Wed Aug 11 13:28:45 BST 2010


Mikael,

> Devices are getting less and less buffer space over time, not more. (re-)ECN relies on big buffers and WRED to signal congestion. Implementation of ECN (a 2001 standard) in major vendor core routers today is minimal (where the buffers are), and I haven't seen it at all on switches that do policing only (it makes little sense on a device with 1-5 ms of buffering).

yes, I'd quite agree about the sparse ECN deployment, and I wouldn't expect that to change drastically.  But the point you're missing is that ECN support is not needed, or even expected, throughout the Internet.  Ideally, ECN would be deployed at points that are prone to wide variations in traffic, and/or in those transit networks that choose to add this additional notification.
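
to make that concrete, here's a minimal sketch (Python; the thresholds and names are illustrative, not from any particular implementation) of what such a point would do -- loosely following RED (RFC 2309) and ECN (RFC 3168) semantics, it marks ECN-capable packets instead of dropping them once the average queue depth signals incipient congestion:

    import random

    # illustrative RED/ECN sketch -- thresholds are hypothetical
    MIN_TH, MAX_TH, MAX_P = 5, 15, 0.1   # queue thresholds (pkts), max mark probability

    def on_arrival(avg_qlen, ect):
        """Decide what to do with one arriving packet (ect = ECN-capable transport)."""
        if avg_qlen < MIN_TH:
            return "enqueue"                  # no congestion
        if avg_qlen >= MAX_TH:
            return "mark" if ect else "drop"  # persistent congestion
        # between the thresholds, notify probabilistically (early notification)
        p = MAX_P * (avg_qlen - MIN_TH) / (MAX_TH - MIN_TH)
        if random.random() < p:
            return "mark" if ect else "drop"
        return "enqueue"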

> Also, the best and easiest way (which is also net neutral) to solve congestion is to make sure that the heavy users either mark their packets as low priority (or it can be marked for them by means of having a different type of subscription) and queue accordingly, or by means of monthly caps (and having the user pay for extra credits which means congested links are financed).

marking packets can be nice (and I assume you refer to Diffserv), but the markings only have local significance within a domain.  without a chain of trust, the code points will either be cleared, ignored, or reset to other values (e.g., to protect routing traffic between adjacent BGP speakers).
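
as a sketch of what that boundary behavior typically looks like (the peer names, DSCP values, and SLA table below are all hypothetical), an ingress policy re-marks untrusted code points to best effort while reserving the control code point for its own traffic:

    BEST_EFFORT = 0x00   # DSCP 0
    CS6 = 0x30           # DSCP 48, commonly reserved for network control (e.g., BGP)

    TRUSTED_PEERS = {"as65001"}          # hypothetical peers with a marking agreement
    ALLOWED_FROM_PEER = {0x2e, 0x0a}     # e.g., EF and AF11, per a notional SLA

    def remark_at_ingress(dscp, peer, is_local_control):
        """Return the DSCP value this domain will actually honor."""
        if is_local_control:
            return CS6                   # protect routing traffic between BGP speakers
        if peer in TRUSTED_PEERS and dscp in ALLOWED_FROM_PEER:
            return dscp                  # honor markings covered by the agreement
        return BEST_EFFORT               # otherwise clear to best effort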

<snip>

> I'm from a market where 10/100 megabit/s ETTH is common (15-20 percent of the households have access to this service). Congestion is rare because bad performance is not tolerated by the end users and they switch providers. There are plenty of tools available to test bandwidth, and they work. People are not satisfied with badly performing ISPs so this is a problem of the past (it was "solved" 5-8 years ago).

lucky you.  try spending time in South America, where economics still puts significant downward pressure on traffic between tiered providers egressing a country/region.  Or try other spots along the Pacific Rim.  Point being that the entire Internet doesn't have the luxury of significant over-provisioning -- which was the point of one of the speakers.

and speaking of which, one tool you didn't mention is MPLS (specifically, traffic engineering).  This is done in part because one can't over-provision to satisfy the sum capacity of all the leaf links.  Up until about a year ago, this was becoming more acute with ever-rising P2P traffic, motivating the recent actions from Comcast.

> We have plenty of tools in the toolbox already, we have precedence/dscp markings, we have ECN and there have been plenty of mechanisms proposed over the past 10 years, but what is still most in use is simple WFQ and WRED; all of these are at least 10 years old but they work.

although those are somewhat static; folks don't alter the settings with any great frequency.
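
for reference, the core of WFQ is small enough to sketch (a simplified Python version that assumes all flows stay backlogged; real schedulers also track a global virtual clock): each flow accrues a virtual finish time scaled by its weight, and the smallest finish time is served next.

    import heapq

    class WfqSketch:
        """Simplified WFQ: only valid while all flows stay backlogged (illustrative)."""
        def __init__(self, weights):
            self.weights = weights                     # e.g., {"voice": 4.0, "bulk": 1.0}
            self.finish = {f: 0.0 for f in weights}    # last virtual finish per flow
            self.heap, self.seq = [], 0                # seq breaks ties deterministically

        def enqueue(self, flow, size):
            # virtual finish grows by size/weight: a heavier weight is served more often
            self.finish[flow] += size / self.weights[flow]
            heapq.heappush(self.heap, (self.finish[flow], self.seq, flow, size))
            self.seq += 1

        def dequeue(self):
            _, _, flow, size = heapq.heappop(self.heap)
            return flow, size

with weights of 4:1 as above, "voice" drains roughly four times as fast as "bulk" for equal-sized packets -- which is exactly the kind of setting that, once configured, tends to sit untouched for years.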

> We should work to remove congestion, not hide its impact. Congestion within the ISP means something is full and the ISP hasn't provisioned enough bandwidth to deliver the service it promised to the user. The only place congestion should happen is on customer access links.

you make some good points, and I would agree that, barring the backhoe problem (or other disasters that damage infrastructure), one should mostly see congestion closer to the customer access links.  But I would contend that the picture you paint isn't complete.

cheers,

-ken



