[Fwd: Re: [Iccrg] Congestion control definition and requirements of a new protocol]

Ferenc Kubinszky ferenc.kubinszky at ericsson.com
Wed Mar 1 11:48:16 GMT 2006


Hi,

Please find my comments inline.

S. Keshav wrote:
> Folks, 
>     Its been quiet on this list for a while. Can I take it as consensus on
> the following:
> 
> A. Definition of Congestion:
> 
> Network congestion is a state of degraded performance from the perspective
> of a particular user. A network is said to be congested from the perspective
> of a user if that user's utility has decreased due to an increase in network
> load. 

What exactly does "user's utility has decreased" mean? Is it an absolute
or a relative measure? It might also be important to distinguish whether
the source of the load increase is the user itself.

By this definition the network is congested whenever one user is using
the whole capacity of a given link and another user starts a flow. Is
that really congestion? In that case the network would be congested
practically all the time...
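
To make my question concrete, here is a small sketch (entirely my own
illustration: the throughput-minus-delay utility function and the
threshold are assumptions, not part of the definition above) of how the
"utility decreased due to an increase in load" test could be written
down. Whether the comparison should be absolute or relative is exactly
what I find unclear:

# Sketch only: "utility" here is an assumed throughput-minus-delay
# penalty; the proposed definition does not prescribe any particular one.
def utility(throughput_bps, delay_s, delay_weight=1e6):
    return throughput_bps - delay_weight * delay_s

def is_congested_for_user(before, after, relative_threshold=0.0):
    """before/after: (throughput_bps, delay_s) seen by one user,
    measured before and after an increase in network load."""
    u_before = utility(*before)
    u_after = utility(*after)
    # Absolute reading: any decrease counts as congestion.
    # Relative reading: only a decrease larger than some fraction counts.
    return (u_before - u_after) > relative_threshold * abs(u_before)

# Example: a user's throughput drops from 10 to 6 Mbit/s and its delay
# grows from 20 ms to 50 ms when a second flow starts.
print(is_congested_for_user((10e6, 0.020), (6e6, 0.050)))   # True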



> B. Problems with TCP congestion control:
> 
> 1. In a high bandwidth-delay product environment, high throughputs can only
> be achieved if the packet loss rate is unrealistically low.
> 
I agree with this. However, newer TCP variants such as HighSpeed TCP,
Scalable TCP and BIC do roughly 10x better than e.g. Reno. Still, this
remains a strict constraint for high-speed networks.
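
To put a rough number on point 1, here is a back-of-the-envelope
calculation with the standard steady-state TCP response function
(rate ~ (MSS/RTT) * 1.22/sqrt(p)); the 10 Gbit/s and 100 ms figures are
just an example, the same one RFC 3649 uses:

# Solving the Reno-like response function for p gives the largest loss
# rate at which the target rate can still be sustained.
def max_tolerable_loss(target_bps, mss_bytes=1500, rtt_s=0.100):
    mss_bits = mss_bytes * 8
    return (1.22 * mss_bits / (rtt_s * target_bps)) ** 2

print(max_tolerable_loss(10e9))   # ~2e-10, about one loss per 5e9 packets
print(max_tolerable_loss(1e9))    # ~2e-8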

> 2. TCP has low throughput under lossy environments because it uses loss as
> an indication of congestion.

Sure. In some networks loss isn't the best congestion-detection signal.
But TCP still has to learn somehow about the congested state of the
network (whatever that state is). There are other possible measures,
such as RTT changes, packet inter-arrival times, etc.
A new congestion control mechanism should combine these wisely. But the
problem remains that none of these measures is caused strictly by
congestion.
Consider also that packet loss on a link or network can mostly be traded
for delay (or delay variation, if you wish), and vice versa.
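
As a strawman for "combine these wisely", something along these lines (a
pure illustration, loosely Vegas-like; the weights and margins are
arbitrary placeholders) shows how loss, RTT growth and inter-arrival
jitter could feed one detector, and also why the output can only be an
estimate, since every one of these signals has non-congestion causes too:

# Toy detector that fuses several noisy signals into a single congestion
# estimate in [0, 1].  None of the signals is caused strictly by
# congestion, so the output is an estimate, not a fact.
class CongestionEstimator:
    def __init__(self, rtt_margin=1.25, jitter_margin=2.0):
        self.base_rtt = None        # smallest RTT seen so far
        self.base_jitter = None     # smoothed inter-arrival variation
        self.rtt_margin = rtt_margin
        self.jitter_margin = jitter_margin

    def update(self, rtt_s, jitter_s, loss_seen):
        self.base_rtt = rtt_s if self.base_rtt is None else min(self.base_rtt, rtt_s)
        self.base_jitter = jitter_s if self.base_jitter is None else \
            0.9 * self.base_jitter + 0.1 * jitter_s

        score = 0.0
        if loss_seen:
            score += 0.5                               # strong but ambiguous signal
        if rtt_s > self.rtt_margin * self.base_rtt:
            score += 0.3                               # a queue is probably building
        if jitter_s > self.jitter_margin * self.base_jitter:
            score += 0.2                               # bursty delivery downstream
        return min(score, 1.0)

est = CongestionEstimator()
print(est.update(rtt_s=0.050, jitter_s=0.002, loss_seen=False))   # 0.0
print(est.update(rtt_s=0.080, jitter_s=0.006, loss_seen=True))    # 1.0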

> 3. Due to additive increase, it takes a long time for a flow to ramp up to
> transient increases in available capacity which results in unnecessarily
> long flow-completion times.
> 
It's true, and it might be important in future networks.
Maybe 'transient' is not the right word for this, because the change in
capacity may be sudden and yet stay at the new level for a long time.
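
To illustrate how slowly additive increase reacts when extra capacity
appears, a rough calculation assuming Reno-style congestion avoidance
(window growth of one MSS per RTT); the 1 Gbit/s, 100 ms and 1500-byte
values are just example numbers:

# Claiming an extra delta_bps of capacity needs roughly
# (delta_bps * RTT / MSS_bits) additional segments of window,
# gained at one segment per RTT.
def ramp_up_time(extra_bps, mss_bytes=1500, rtt_s=0.100):
    extra_segments = extra_bps * rtt_s / (mss_bytes * 8)
    return extra_segments * rtt_s

print(ramp_up_time(1e9))    # ~833 s, i.e. close to 14 minutes for +1 Gbit/s
print(ramp_up_time(10e6))   # ~8.3 s even for a modest +10 Mbit/s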

I would split this point into two sub-sections, like this:

3. Sudden changes in network/link characteristics

A sudden change in network characteristics might be caused by changes
in the network path or by changes in the number of users on a given link.

a) Capacity changes
  ** Your definition here **
  ++ I think that sudden capacity degradation might be worth examining
too. It could cause buffer overruns and bursty losses.

b) Base RTT changes
  As link characteristics change, there might be changes in the RTT
values. These might be negligible, but they should not cause any
problems for future protocols.


> 4. Even when the flow is capable of completing within a round-trip time,
> slow-start makes flows last multiple round-trip times just to find their
> fair share rate. Many flows complete before they exit slow-start phase.   
> 
> 5. TCP fills up all available buffers at the bottleneck links, which results
> in long latency. 
> 
> 6. TCP  shares bandwidth inversely proportional to flow RTTs
> 
> 7. TCP builds a standing queue at the point of congestion, which increases
> the delay. 
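
To put rough numbers on points 5-7 (the buffer size, link rate and RTTs
below are examples I picked, nothing more):

# Points 5 and 7: a loss-based TCP keeps the bottleneck buffer full, so
# the buffer becomes a standing queue adding buffer_bits / rate of delay.
def standing_queue_delay(buffer_bytes, link_bps):
    return buffer_bytes * 8 / link_bps

print(standing_queue_delay(1000000, 10e6))   # 0.8 s extra delay on a 10 Mbit/s link

# Point 6: with the usual response function throughput scales as 1/RTT,
# so two competing flows share capacity roughly in inverse proportion
# to their RTTs.
rtt_a, rtt_b = 0.010, 0.100
print(rtt_b / rtt_a)                         # the short-RTT flow gets ~10x the share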

8. Impact of jitter (this point should come right after the loss-related one)
Today's TCPs perform poorly in networks with high jitter.





> -------
> 
> If we agree on this, (and if you do not, this is the time to speak up) I
> would like to propose that we discuss the requirements of a new congestion
> control protocol, both theoretically and practically.
> 
> To start off this debate, I would like to state the following top level
> requirements. 
> 
> First, given the definition of congestion, I argue that the proposed
> protocol should allow two things: decoupling and observability.
> 
>     0 Decoupling means that the traffic from one user should not affect (or
> minimally affect) other users.
>     0 Observability means that users should be able to observe the network
> state in some fashion, so that they can control their input so as to not
> cause overload and a consequent decrease in utility.
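
On observability: the way I read it, each sender needs some observable
signal plus a rule for reacting to it, so that it backs off before its
own utility drops. A minimal sketch of such an observe-then-control loop
(the overload signal, the probe step and the back-off factor are all my
own placeholders, not a proposal):

# The sender periodically reads some network observation (ECN marks,
# delay, an explicit rate hint, ...) and adjusts its sending rate so
# that it stays below the point where its utility would start to drop.
def control_step(current_rate_bps, observed_overload, max_rate_bps,
                 probe_step_bps=1e6, backoff=0.85):
    if observed_overload:
        # The observed state says "too much": reduce input before the
        # user's own utility degrades further.
        return current_rate_bps * backoff
    # Otherwise probe upward gently, bounded by what the user needs.
    return min(current_rate_bps + probe_step_bps, max_rate_bps)

rate = 5e6
for overload in [False, False, True, False]:
    rate = control_step(rate, overload, max_rate_bps=20e6)
    print(round(rate / 1e6, 2), "Mbit/s")
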
> 
> Second, in addition to these, the new mechanism should also not suffer from
> the seven problems with TCP.
> 
> Comments?
> 
> thanks
> 
> keshav
> 
> 
> 




