[Iccrg] Proposal for ICCRG operations

Damon Wischik D.Wischik at cs.ucl.ac.uk
Sun Feb 5 16:57:36 GMT 2006


> Here's my take on the questions put forth by Keshav. For now I will limit
> my answers to elastic traffic.
> [...]
> * A network-centric definition of congestion: Congestion is anything that
> deviates significantly from an efficient and fair usage of the network
> resources --- mostly network bandwidth and buffers.
> * User-centric definition: From an user's view point what really matters
> is how quickly does my flow finish, so congestion is long flow-completion
> times. Often, it isn't the per-packet latency that users care about, but
> just how fast the entire flow completes.

Here's a different perspective:

Most flows in the Internet are short-duration, e.g. HTTP requests retrieving
one- or two-packet items from a web page. Increasingly, we will also want to
use the Internet for real-time traffic, e.g. video. For other uses, e.g. web
browsing, we want the first page of text to arrive fast, but we don't mind so
much about the rest. For web services, too, it is often a collection of short
flows that makes up a transaction. [These are just plain assertions on my
part. I haven't got good data to hand to back them up.]

Elastic flows (by which I mean flows which are in congestion avoidance, or
maybe I mean flows which can't complete in one RTT) matter, but they're not
a good subset of flows on which to base a notion of congestion.

Instead, a notion of congestion should address the performance perceived by
very short flows. Since these flows have no time to respond to any feedback
about the state of the network, they are basically "open loop". Therefore
the most succinct way to say what performance they will perceive is to
summarize network-centric congestion measures.

(I noticed your paper "Why flow-completion time is the right metric for
congestion control", SIGCOMM/CCR January 2006. You point out that TCP and
XCP can make flows take several RTT to complete, when in fact they should be
able to complete in less than one RTT. Your proposal, RCP, seems like an
excellent way to achieve this, for flows of moderate duration (meaning: more
than one RTT). For flows of very short duration (meaning: capable of being
transferred in less than one RTT), we'd like to be able to transfer the file
without having to wait for any feedback from the network, and surely the
only way to achieve this is to have the network operating at say 95% or less
capacity, leaving 5% of capacity free for these instantaneous transfers.
Your RCP algorithm also seems to achieve something like this, but I am not
clear if this was designed in, or if it is serendipity.)
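(A back-of-envelope sketch of the headroom argument above, under my own
modelling assumption, not anything from RCP itself: if we treat the link as
an M/M/1 queue, the mean number of packets in the system is rho/(1-rho), so
running at 95% utilisation leaves an instantaneous transfer facing a mean
backlog of 19 packets, versus 4 packets at 80%. The point is only that the
headroom you leave determines what an open-loop impulse sees on arrival.)

```python
def mm1_mean_queue(rho):
    """Mean number of packets in system for an M/M/1 queue at utilisation rho.

    This is a standard queueing-theory formula, used here purely as a
    back-of-envelope model; real links are not M/M/1.
    """
    assert 0 <= rho < 1, "formula only valid below saturation"
    return rho / (1 - rho)

for rho in (0.80, 0.90, 0.95, 0.99):
    print(f"rho={rho:.2f}  mean packets in system={mm1_mean_queue(rho):6.1f}")
```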

Therefore, perhaps, congestion should be measured by (A) drop probability,
which seriously impacts flows that last only one or two packets, by sending
them into timeout; and (B) queueing delay, which hurts real-time traffic;
and (C) correlation of losses, since bursts of loss also hurt real-time
traffic; and (D) inability to accomodate a reasonable volume of very short
non-responsive transactions (I hesitate now to call them "flows" when
they're really just "impulses").
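(To make (A)-(C) concrete: given a per-packet trace from a link, each of
these measures is a one-line summary statistic. The sketch below is my own
illustration, with a made-up trace format of (queueing-delay, dropped) pairs;
measure (D) needs an offered-load experiment rather than a passive trace, so
it is omitted.)

```python
def congestion_summary(trace):
    """Summarise measures (A)-(C) from a per-packet trace.

    trace: list of (queue_delay_ms, dropped) records, one per packet,
    with dropped being 0 or 1. The trace format is hypothetical.
    """
    n = len(trace)
    drops = [d for _, d in trace]
    # (A) drop probability: what sends one- or two-packet flows into timeout
    p_drop = sum(drops) / n
    # (B) mean queueing delay: what hurts real-time traffic
    mean_delay = sum(q for q, _ in trace) / n
    # (C) loss correlation: P(drop | previous packet dropped); if this is
    # well above p_drop, losses come in bursts
    pairs = [(a, b) for a, b in zip(drops, drops[1:]) if a]
    p_drop_given_drop = sum(b for _, b in pairs) / len(pairs) if pairs else 0.0
    return p_drop, mean_delay, p_drop_given_drop

# Toy trace: a burst of two drops during a period of high queueing delay.
trace = [(2.0, 0), (3.5, 0), (9.0, 1), (8.5, 1), (1.0, 0), (0.5, 0)]
print(congestion_summary(trace))
```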

Interestingly, however, since the bulk of packets come from long flows, that
is where we should work on getting congestion-control right. I suggest that
a good question might be: "What sort of congestion control for long-lived
flows will lead to the best network-centric congestion measures, i.e. will
lead to the best performance for short-lived open-loop flows?"

Damon.
