[Iccrg] ctcp review: big picture issues (3 of 4)

Mark Allman mallman at icir.org
Tue Dec 4 23:22:58 GMT 2007


> I think that the "experiments with only random loss" are incomplete
> rather than inappropriate in themselves.  

Well, fair enough.  I should have been more careful.  If these were a
first set of experiments and there were others then OK.  Perhaps I
should have said that my opinion is that such experiments are "not
sufficient" and should not have said "inappropriate".

> I assume that they are modelling the case in which current TCP fails.
> That is the case in which a long flow is repeatedly interrupted by
> slow-start of short-lived connections, giving losses which cause the
> window to be reduced, but not hanging around long enough to give link
> utilization.  They're taking the short-cut of just forcing the
> sporadic losses.  It is no more or less valid than considering a
> "response function" which considers losses without specifying where
> they come from.  These tests are also useful to show that the congestion
> control doesn't do as much harm when there is no congestion as current
> TCP does.

OK, but if these tests are designed to show that there is no problem
when there is no congestion then I can't make a determination of whether
I think a **congestion control** scheme is "safe" or not.  If you don't
test a congestion controller in the presence of congestion then it is a
non-starter in my book.  Maybe I am weird.

> 1. I didn't think our job was to say how often there will be a benefit
> in real networks.  I thought our job was to say whether they're
> "allowed" to do experiments in real networks to find out if there is a
> significant benefit.

I am not sure what the RG's job is in general.  With regards to
reviewing this for TCPM the job of the RG is, as far as I understand it,
to evaluate the safety statement in the document.  In this document that
statement does not say it is safe enough to play with, it says it "is
safe to deploy on the current Internet".

I would like to think that the RG provides more than simply this review
for TCPM and that in fact other technical comments about proposed CC
schemes would be in bounds.

> 2. CTCP isn't as sensitive to the actual queue estimate as algorithms
> like Vegas are.  Vegas sets its rate based on the estimated queue
> size, while CTCP simply detects whether or not to be aggressive based
> on whether or not it thinks the queue is empty.  For that, it isn't
> clear that we need a correlation between our own number of packets in
> flight and our precise observed queueing delay.  That means it will
> only be aggressive (but no more than existing experimental RFCs) if it
> has a *reason* to think the link is underutilized.

Perhaps you're right.  I am not sure I feel like I understand it well
enough to weigh in on that.  But, certainly if this is the case it'd be
nice to see.
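To make the quoted distinction concrete, here is a minimal sketch (my own illustration, not code from the draft or from either implementation; the function names and the alpha/beta/gamma thresholds are assumptions for illustration).  Vegas steers its window based on the *value* of the estimated per-flow queue, while a CTCP-style test only needs a yes/no answer about whether the path looks underutilized:

```python
def estimated_queue(cwnd, base_rtt, current_rtt):
    """Standard delay-based estimate of this flow's queued packets:
    (expected throughput - actual throughput) * base RTT."""
    expected = cwnd / base_rtt      # throughput if there were no queueing
    actual = cwnd / current_rtt     # throughput actually observed
    return (expected - actual) * base_rtt

def vegas_adjust(cwnd, diff, alpha=2, beta=4):
    """Vegas-style control: steer cwnd so that between alpha and beta
    packets sit in the queue.  The precise value of diff matters."""
    if diff < alpha:
        return cwnd + 1             # queue too small: speed up
    if diff > beta:
        return cwnd - 1             # queue too large: back off
    return cwnd

def ctcp_is_aggressive(diff, gamma=30):
    """CTCP-style binary test: stay in the aggressive regime only
    while the estimated queue is below a threshold, i.e. while the
    link appears underutilized.  Only the comparison matters."""
    return diff < gamma
```

The point of the quoted argument, as I read it, is that an error in the queue estimate shifts Vegas's operating point directly, whereas the CTCP-style check only misbehaves if the error is large enough to flip the comparison.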

I still believe my high-order bit, however.  If you have no contention in
the network (or at least little, and an unassessed amount) and yet are
relying on that contention (in terms of queue buildup) as part of the
controller, then how can that be an effective evaluation?

allman


