[Iccrg] TCP test suite
david.hayes at ieee.org
Thu Oct 6 04:52:38 BST 2011
Quoting from the current draft:
"The goal of the test suite is to allow researchers to quickly and easily
evaluate their proposed TCP extensions in simulators and testbeds using a common
set of well-defined, standard test cases, in order to compare and contrast
proposals against standard TCP as well as other proposed modifications. This
test suite is not intended to result in an exhaustive evaluation of a proposed
TCP modification or new congestion control mechanism. Instead, the focus is on
quickly and easily generating an initial evaluation report that allows the
networking community to understand and discuss the behavioral aspects of a new
proposal, in order to guide further experimentation that will be needed to fully
investigate the specific aspects of a new proposal."
The current version of the draft can be found at
http://tools.ietf.org/html/draft-irtf-tmrg-tests-02. A revised version is
expected to be out soon.
So far, the following tests have been implemented in the ns2 version of the TCP
test suite:
1. Basic dumbbell scenarios:
- trans-oceanic link
- geostationary satellite
2. Delay/throughput trade-off as a function of queue size
(This is really the basic access-link test for different queue sizes and AQM)
3. Ramp up time
4. Impact on standard TCP traffic
5. Multiple bottlenecks
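To illustrate how a test such as the delay/throughput trade-off sweeps queue
sizes across these dumbbell scenarios, here is a back-of-the-envelope sketch of
bandwidth-delay-product-based queue sizing. The link rates and RTTs below are
illustrative assumptions for the purpose of the example, not the suite's actual
parameters:

```python
# Rough sketch: queue sizes as fractions of the bandwidth-delay product (BDP)
# for dumbbell scenarios. Link rates and RTTs here are illustrative
# assumptions, not the test suite's actual parameters.

def bdp_packets(rate_bps, rtt_s, pkt_bytes=1500):
    """Bandwidth-delay product expressed in packets."""
    return rate_bps * rtt_s / (8 * pkt_bytes)

scenarios = {
    "access-link":   (100e6, 0.10),  # 100 Mb/s, 100 ms RTT (assumed)
    "trans-oceanic": (1e9,   0.20),  # 1 Gb/s, 200 ms RTT (assumed)
    "geo-satellite": (40e6,  0.60),  # 40 Mb/s, 600 ms RTT (assumed)
}

for name, (rate, rtt) in scenarios.items():
    bdp = bdp_packets(rate, rtt)
    # A delay/throughput trade-off test might sweep the bottleneck queue
    # over fractions of the BDP:
    for frac in (0.25, 0.5, 1.0):
        print(f"{name}: queue = {frac:.2f} x BDP = {frac * bdp:.0f} packets")
```

The point of expressing queues in BDP fractions is that it keeps the sweep
comparable across scenarios whose raw bandwidths and delays differ by orders of
magnitude.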
Tests not yet implemented are:
1. Transients: release of bandwidth, arrival of many flows
2. Intra-protocol and inter-RTT fairness
I plan to release the implemented tests in the next few days for people to
evaluate. However, there are a couple of general points worth highlighting now:
A. The tests use Tmix to generate the test traffic (in most cases) and the
background traffic. The tmix traffic traces are not stationary (traces will
be released with the tests).
- Comparing protocols within a particular test, say the basic dumbbell
access-link test, works quite well.
- However, comparing access-link test results with data-center results is less
  straightforward, as the data-center test consumes more of the traffic traces
  than the access-link test, and this extra trace data is statistically
  different from the earlier data in the traces.
Is this a problem (or a feature)?
Special tmix traces could be concocted so that there is less statistical
variability in the traffic for each basic dumbbell test. However, a data-center
scenario with 1 Gbps links goes through much more trace data than a dial-up
scenario with a 56 kbps bottleneck link. Would the resulting trace still be
representative?
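To put the difference in trace consumption in rough numbers, here is a
back-of-the-envelope sketch. The 1 Gbps and 56 kbps rates come from the
scenarios above; the simulated duration and the assumption of a fully utilised
link are simplifications for illustration only:

```python
# Back-of-the-envelope: how much more trace data a fast scenario consumes
# than a slow one over the same simulated duration. The 1 Gb/s and 56 kb/s
# rates come from the scenarios being compared; full link utilisation is
# assumed purely for simplicity.

def bytes_consumed(rate_bps, duration_s):
    """Upper bound on trace data carried, assuming a fully utilised link."""
    return rate_bps * duration_s / 8

data_center = bytes_consumed(1e9, 600)   # 1 Gb/s for 10 simulated minutes
dial_up     = bytes_consumed(56e3, 600)  # 56 kb/s for the same duration

ratio = data_center / dial_up
print(f"data-center consumes ~{ratio:.0f}x more trace data")  # ~17857x
```

Even under less idealised assumptions, the ratio stays in the tens of
thousands, which is why the two tests end up sampling statistically different
portions of the same trace.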
B. Resource consumption.
First, a note: work done in conjunction with the Tmix developers has greatly
reduced resource consumption from what it was. I expect that this Tmix work
will be released in the final version of ns-2.35.
Simulations were run on an i7 930 with 12 GB RAM and 12 GB swap, running
FreeBSD 8.2. Rough resource usage for the New Reno mild-congestion dumbbell
tests:
- access-link: <4 GB memory, 1 hr simulation time
- data-center: ~6 GB, 2.3 hrs
- trans-oceanic: >12 GB, 4 hrs (although >12 GB, it is not slowed much by swapping)
Is it reasonable to expect draft authors submitting new TCP proposals to have
access to machines with at least these resources?
The ns2 version of the test suite and its documentation will be released over
the next week or so at http://caia.swin.edu.au/ngen/tcptestsuite/tools.html
I'll notify the group as they are released.
| David A. Hayes |
| david.hayes at ieee.org |