[Iccrg] convergence time

Lachlan Andrew lachlan.andrew at gmail.com
Sat Oct 27 21:00:01 BST 2007


Greetings all,

With the TCP evaluation round table just under two weeks away, let's
keep the discussions moving.  It would be great for people to start
threads on whatever issues they think we should agree on. Even if
you're not coming in person or through VRVS, feel free to start a
thread.


How should we measure the responsiveness of a TCP algorithm?  One way
is to measure "convergence time" as the time after a step change in
traffic until rate is within x% of its final value.

If we agree on that, we need to decide:

- What should x be?  I think it is not very critical, as long as we
agree on it.  If it is too low (like within 10%), it becomes too
sensitive to how we measure rates.  If it is too high (like within
50%), it doesn't capture the whole convergence process.  Is 30% OK?
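
As a strawman, here is a minimal sketch of that metric: given a
sampled rate trace, find the first time the rate enters the x% band
around its final value and never leaves it again.  The trace, the
sample interval, and x = 30% are illustrative assumptions, not
agreed-upon values.

```python
def convergence_time(rates, dt, final_rate, x=0.30):
    """Time (in seconds) after which the rate stays within
    x * final_rate of final_rate, or None if it never settles.
    rates: samples taken every dt seconds after the step change."""
    band = x * final_rate
    settled_at = None
    for i, r in enumerate(rates):
        if abs(r - final_rate) <= band:
            if settled_at is None:
                settled_at = i * dt    # tentatively settled here
        else:
            settled_at = None          # left the band; reset
    return settled_at

# Toy trace: rate ramping from 1 toward 10 Mbit/s, sampled every second.
trace = [1, 3, 5, 8, 9, 9.5, 9.8, 10.1, 9.9, 10.0]
print(convergence_time(trace, dt=1.0, final_rate=10.0))  # → 3.0
```

Requiring the rate to *stay* in the band (rather than first touch it)
avoids declaring convergence during a transient overshoot.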

- How do we determine the "final value" of rate?  If everything is
symmetric, we could just take it as the "fair" (equal) rates, but I
think we should define it independently of fairness, for those cases
in which flows never reach equal rates.  For experiments with just one
step change followed by a long period of "steady state" (possibly with
cross traffic coming and going), we can just average over a period "a
long time" after the event.  How long should that be?  It could be
something like "when the rate of change has dropped to 5% of the
original rate of change" or some such.
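
One possible reading of that rule, as a sketch: start the averaging
window at the first sample where the per-sample rate of change has
dropped to 5% of the initial rate of change, and average from there to
the end of the trace.  The 5% figure and the trace are the assumptions
under discussion above, not settled choices.

```python
def final_rate(rates, frac=0.05):
    """Average the tail of a sampled rate trace, starting where the
    per-sample change first falls below frac * the initial change."""
    if len(rates) < 2:
        return rates[-1]
    diffs = [abs(b - a) for a, b in zip(rates, rates[1:])]
    initial = diffs[0]
    start = len(rates) - 1             # fallback: just the last sample
    for i, d in enumerate(diffs):
        if d <= frac * initial:
            start = i + 1
            break
    tail = rates[start:]
    return sum(tail) / len(tail)

trace = [1, 3, 5, 8, 9, 9.5, 9.8, 10.1, 9.9, 10.0]
print(final_rate(trace))  # → 10.0
```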

- What timescale should we average the "current" rate over?  Rates
vary due to AIMD and cross traffic, as well as the convergence
process.  For loss-based protocols, including hybrid loss+delay, we
could base it on the rate (or window) just before or just after a loss
event.  For non-loss-based protocols, we could simply average over one
RTT.  Thoughts?
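
For the non-loss-based case, per-RTT averaging might look like the
following sketch: bucket the received bytes into one-RTT bins and
report one average rate per bin.  The packet timestamps and the RTT
value are invented for illustration.

```python
def rtt_averaged_rates(arrivals, rtt):
    """arrivals: list of (timestamp_seconds, bytes) pairs, sorted by
    time.  Returns one average rate (bytes/second) per RTT-long bin."""
    if not arrivals:
        return []
    t0 = arrivals[0][0]
    nbins = int((arrivals[-1][0] - t0) / rtt) + 1
    bins = [0.0] * nbins
    for t, nbytes in arrivals:
        bins[int((t - t0) / rtt)] += nbytes
    return [b / rtt for b in bins]

# Toy example with a 100 ms RTT: 200 bytes land in each bin.
print(rtt_averaged_rates([(0.0, 100), (0.05, 100), (0.12, 200)], 0.1))
```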

- The convergence time depends on the setting, such as the number of
competing flows, and the RTT.  Should we specify a few settings
specifically for determining "the convergence speed of the algorithm",
or should we just say how to measure convergence time for each
experiment?

- The rise time of a single flow into an empty system is not very
interesting, because it mainly measures the impact of slow start.  Is it
interesting to consider the response of one existing flow to one new
flow?  An alternative is to consider the time to settle when a flow
*departs*, although that mainly measures the aggressiveness of the
protocol, rather than its responsiveness.

- We need to make these repeatable.  That is particularly hard with
cross traffic.  Should we specify a minimum number of runs to average
over?  If so, there is a tradeoff between accuracy and time to
complete the tests.  Would averaging over 5 tests be enough?
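
Whatever number of runs we pick, reporting the spread alongside the
mean would help us judge whether it is enough.  A minimal sketch, with
made-up per-run convergence times:

```python
from statistics import mean, stdev

def summarize(times):
    """Mean convergence time and sample standard deviation over runs."""
    return mean(times), stdev(times)

runs = [2.1, 2.4, 1.9, 2.6, 2.2]   # hypothetical times (s) from 5 runs
m, s = summarize(runs)
print(f"mean {m:.2f}s, stdev {s:.2f}s")  # → mean 2.24s, stdev 0.27s
```

If the standard deviation stays large relative to the mean after 5
runs, that would argue for specifying more.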

Cheers,
Lachlan

-- 
Lachlan Andrew  Dept of Computer Science, Caltech
1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA
Phone: +1 (626) 395-8820    Fax: +1 (626) 568-3603


