[Iccrg] Definition of Congestion

John Leslie john at jlc.net
Mon Feb 27 13:07:12 GMT 2006


Dado Colussi <gdc at iki.fi> wrote:
> Fil Dickinson wrote:
> 
>> 'Network congestion is a state where there are insufficient resources
>> in a network for a flow, resulting in excessive delay. This in turn
>> results in the flow failing to meet its minimum effective performance.'

   I read Fil to be saying that "congestion" exists when information
flow is delayed beyond the requirements for effective use of the network.

> If we consider Keshav's utilities mathematically, we probably see
> functions u: X -> Y, where X is a set of resource bundles and Y is some
> set with an order property. Keshav says that a source experiences
> congestion if u(x1) > u(x2), where x1 and x2 are the user's resource
> bundles before and after an increase in network load. I find this
> definition quite elegant and expressive.

   It is indeed technically possible that an increase in network load
could _increase_ a user's perceived utility.

   More obviously, it is true that different users will have different
utility functions, and some will perceive degradation in the same set
of resource bundles where others do not.
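
   (To make that concrete, here is a toy sketch in Python of the
utility-comparison test as I read it. The ResourceBundle fields, the
particular utility() function, and the numbers are all invented for
illustration -- they are not Keshav's, and real utilities will differ
per user and per application:

    from dataclasses import dataclass

    @dataclass
    class ResourceBundle:
        throughput_kbps: float   # what the user gets
        delay_ms: float          # how long the user waits

    def utility(x: ResourceBundle) -> float:
        # One made-up utility: reward throughput, penalize delay.
        return x.throughput_kbps - 0.5 * x.delay_ms

    def experiences_congestion(x1: ResourceBundle,
                               x2: ResourceBundle) -> bool:
        # Keshav-style test: congestion if utility dropped going from
        # x1 (before the load increase) to x2 (after it).
        return utility(x1) > utility(x2)

    before = ResourceBundle(throughput_kbps=800, delay_ms=40)
    after  = ResourceBundle(throughput_kbps=850, delay_ms=300)
    print(experiences_congestion(before, after))   # True here

With a throughput-only utility the same pair of bundles would not
count as congestion at all, which is exactly the per-user sensitivity
noted above.)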

> The insufficient resources you suggest are all those resource bundles 
> that don't result in maximum utility when the utility is sensitive to 
> excessive delay (or throughput, as in your second definition).

   I assume Dado is referring to:
] 
] 'Network congestion is a state that occurs when a network element in
] a path reaches a resource limit. A network is said to be congested
] from the perspective of a user if that user's minimum effective
] throughput has decreased due to a limit being attained.'

which I read to say that whenever a user's experience is degraded, the
network is by definition "congested". (This does not strike me as a
useful definition.)
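
   (Read literally, that reduces to a check like the following -- my
own toy rendering in Python, with made-up parameter names, not
anything taken from the quoted text:

    def congested_for_user(throughput_before_kbps: float,
                           throughput_after_kbps: float,
                           limit_reached: bool) -> bool:
        # Literal reading of the second definition: the user calls the
        # network congested when some element in the path has reached
        # a resource limit and the user's effective throughput has
        # dropped as a result.
        return limit_reached and (throughput_after_kbps
                                  < throughput_before_kbps)

Practically any degradation satisfies it, which is why it does not
strike me as useful.)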

> However, utility functions differ per application and even per user,
> and are not limited to excessive delay only (or throughput only).

   I quite agree.

   (Which is why I prefer to work on things we can measure...)

> Furthermore, the Internet exists for its users and it's more important
> to adjust it for the users than for mere existence. That's why I think
> a user-centric approach is well justified.

   I come from the old school, which holds that the intelligence
should be at the edges, with the center kept as simple as possible.

   This is not necessarily at odds with Dado and/or Fil: congestion
management has traditionally been an edge function; and anything they
want to try at the edges is fine with me.

   My approach in <iccrg> is that we may have made the "backbone"
_too_ simple, and I'd like to work on identifying points of overload
and seeing whether we can "route around them".

   (Also, the "backbone" _will_ need to protect itself against excessive
loading, possibly by discarding types of traffic which seem to be
failing to apply appropriate congestion management: there are questions
worth studying about how to distinguish such traffic.)
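
   (As a strawman only: one could imagine flagging flows whose send
rate never drops even while loss stays high. The sketch below is my
own invention -- the sampling of rates and loss, and the threshold,
are assumptions rather than a proposal for any particular mechanism:

    def looks_unresponsive(rates_kbps: list[float],
                           loss_rates: list[float],
                           loss_threshold: float = 0.02) -> bool:
        # A sender applying congestion management (e.g. TCP-like)
        # should slow down under sustained loss; flag flows whose rate
        # has not dropped between the first and last high-loss samples.
        lossy = [i for i, loss in enumerate(loss_rates)
                 if loss > loss_threshold]
        if len(lossy) < 2:
            return False
        first, last = lossy[0], lossy[-1]
        return rates_kbps[last] >= rates_kbps[first]

Doing this robustly -- without punishing short flows, bursty but
well-behaved flows, or delay-based congestion control -- is exactly
the sort of question I mean.)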

> You're right about the problem of measurement. I think it would be 
> essential to discuss u, X and Y. If we knew them, u and X in particular, 
> we would be in a better position to decide what tradeoffs to make in 
> suggesting a framework for measurement. It's not feasible to find 
> utility functions for each application and user but I do think it would 
> be feasible to find a sufficiently expressive classification that would 
> enable us to justify the tradeoffs. I'm not suggesting it would be easy 
> though.

   I do think it's clear that there's no single utility function. It
may be possible to identify a useful set of functions -- which brings up
the question of how to tell which function to use...
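
   (One crude way to pick among a set of functions would be a coarse
traffic classification, along the lines Dado suggests. The classes
and the utilities below are invented purely for illustration:

    def utility_for(app_class: str,
                    throughput_kbps: float,
                    delay_ms: float) -> float:
        # Map a coarse application class onto one of a small set of
        # utility functions; classes and weights are made up.
        if app_class == "interactive":  # e.g. voice: delay-dominated
            return -delay_ms
        if app_class == "bulk":         # e.g. file transfer: throughput-dominated
            return throughput_kbps
        return throughput_kbps - 0.5 * delay_ms  # default: weigh both

The hard part, of course, is deciding which class a given flow belongs
to -- which is the same "which function to use" question again.)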

--
John Leslie <john at jlc.net>


