Hi Keshav,

My two cents on the questions you raised:

>> A. What is the commonly agreed definition for congestion. (I will post my suggestion later today). If we do not agree on a definition, then indices are meaningless.

Congestion occurs when demand exceeds capacity. Queueing and loss are *outcomes* of congestion, not congestion itself.

Therefore, in steady state, if there is congestion, the queue builds and, beyond a point, loss results. If there is no congestion, there is no queue and no congestion-induced loss. The converse does not hold: queueing and loss do not necessarily imply congestion, since other causes (for example, wireless link errors) can produce them.

At each link, if the aggregate incoming traffic rate is less than the link capacity, then there is no congestion on that link (in steady state). If no link in a network is congested, the network as a whole is not congested.
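
To make the definition concrete, here is a minimal sketch of the per-link test described above. The link names, capacities, and rates are hypothetical examples; the queue growth rate follows directly from demand minus capacity:

    # Minimal sketch of the per-link congestion test described above.
    # All link names, capacities, and rates are hypothetical examples.

    def link_congested(aggregate_rate_bps: float, capacity_bps: float) -> bool:
        """A link is congested (in steady state) when demand exceeds capacity."""
        return aggregate_rate_bps > capacity_bps

    def queue_growth_bps(aggregate_rate_bps: float, capacity_bps: float) -> float:
        """Steady-state queue growth rate: the excess of demand over capacity.
        Zero when the link is uncongested -- the queue does not build."""
        return max(0.0, aggregate_rate_bps - capacity_bps)

    def network_congested(links: dict[str, tuple[float, float]]) -> bool:
        """A network is congested iff at least one of its links is congested."""
        return any(link_congested(rate, cap) for rate, cap in links.values())

    # Example: link B receives 12 Mb/s of aggregate demand but has only
    # 10 Mb/s of capacity, so its queue grows at 2 Mb/s and the network
    # is congested.
    links = {"A": (8e6, 10e6), "B": (12e6, 10e6)}
    print(network_congested(links))       # True
    print(queue_growth_bps(12e6, 10e6))   # 2000000.0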

>> B. What's wrong with TCP's congestion control scheme. Does someone have a concise summary that they can post here? I am sure this will be a cut-and-paste job for someone who has written a paper on congestion control recently :-)

TCP has two main problems:

1) It uses loss to detect congestion. Although this works in wired networks such as today's Internet, loss is a late, binary signal --- by the time loss is observed, congestion has already happened. In wireless networks, loss not caused by congestion is also common, so loss is an ambiguous signal there.

2) The limited *information* that loss carries about congestion constrains the choice of congestion control algorithm. TCP's AIMD has a dynamic range that does not scale to future high-speed, long-delay networks.
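
A back-of-the-envelope sketch of why AIMD's dynamic range is a problem: after a single loss, AIMD halves the window and then regrows it by one segment per RTT, so recovery takes W/2 RTTs, where W is the window in segments. The link parameters below are hypothetical, chosen only to illustrate the scaling:

    # Rough sketch of AIMD (additive-increase, multiplicative-decrease)
    # recovery time on a high bandwidth-delay-product path. The link
    # parameters are hypothetical, chosen only to illustrate the scaling.

    def aimd_recovery_time(capacity_bps: float, rtt_s: float,
                           mss_bytes: int = 1500) -> float:
        """Seconds for AIMD to regrow from W/2 back to the full window W.

        W = bandwidth-delay product in segments; after a loss the window
        is halved, then grows by one segment per RTT, taking W/2 RTTs.
        """
        window_segments = capacity_bps * rtt_s / (mss_bytes * 8)
        return (window_segments / 2) * rtt_s

    # A 10 Gb/s path with a 100 ms RTT gives W of roughly 83,000
    # segments, so a single loss costs about 4,200 seconds (~70 minutes)
    # of recovery time.
    print(aimd_recovery_time(10e9, 0.100))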

Best,
Yong