[Iccrg] Meeting minutes
Michael Welzl
michawe at ifi.uio.no
Wed Aug 4 09:01:05 BST 2010
Hi all,
Please see the minutes taken by Andrew McGregor below, and send me any
corrections that
you may have within the next week or so.
Thanks!
Cheers,
Michael
=========================================================
Agenda (12 July 2010) of ICCRG meeting @ IETF 78
Friday 30 July 2010, 9:00-11:30 in 0.2 Berlin
Title: RG Status
Presenter: Chairs
Time: 10 minutes
Title: "MulTFRC update (draft-irtf-iccrg-multfrc-00.txt)"
Presenter: Michael Welzl
Time: 10 minutes
(no comments)
Title: "News from CAIA's newtcp project -- delay-based TCP and improved
instrumentation of FreeBSD's TCP stack"
Presenter: Michael Welzl (on behalf of CAIA)
Time: 10 minutes
Fred Baker: The Cisco funding was me. The reason is really the LEDBAT
question: how do I move a whole lot of data through the network
without destroying the ISP? Secondly, high-loss radios in things
like 6LoWPAN and CoAP, where loss-based TCP really has a problem.
The idea is to tune to the mean instead of the congestion decay cliff,
so we can get fair sharing without attacking the network. I'm
therefore very interested in congestion control that achieves this. I
asked one of the Contiki authors what their congestion control is, and
the reply was that there is only one buffer, so there cannot be
congestion.
Lars -> Lawrence: Please tell me you are leaving the default in BSD at
NewReno.
Lawrence Stewart -> Lars: Absolutely
Title: "TCP modifications to reduce thin-stream latency"
Presenter: Andreas Petlund
Time: 20 minutes
q: Do you have measurements on extra transmissions?
a: The answer is in the slides: about 6-10%.
q: Thin streams are <4 packets in flight, so all streams are thin at
the beginning?
a: Yes
q: Bundling... when you send the second packet, how many bytes do you
have in flight?
a: We're not tracking that
q: What about the congestion window?
a: Not that either; that remains to be done.
Gorry: We're interested in streams that vary between thick and thin,
so let's talk later.
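The thin-stream definition quoted above (fewer than 4 packets in flight, i.e. too few to trigger fast retransmit) and the latency-reducing modifications can be sketched as follows. This is an illustrative sketch of the heuristic, not the actual kernel implementation; the threshold and the linear-timeout behaviour are taken from the discussion, everything else (names, RTO cap) is assumed for illustration.

```python
# Sketch of the thin-stream heuristic: a stream with fewer than 4
# packets in flight cannot trigger fast retransmit, so the
# modifications keep its retransmission timeout linear instead of
# backing off exponentially, reducing worst-case retransmit latency.

THIN_STREAM_THRESHOLD = 4  # packets in flight (from the minutes)

def is_thin(packets_in_flight: int) -> bool:
    """A connection is 'thin' if it cannot trigger fast retransmit."""
    return packets_in_flight < THIN_STREAM_THRESHOLD

def next_rto(base_rto: float, retransmissions: int,
             packets_in_flight: int, max_rto: float = 120.0) -> float:
    """RTO after `retransmissions` consecutive timeouts.

    Thick streams use standard exponential backoff; thin streams
    keep retransmitting at the base RTO (linear timeouts).
    """
    if is_thin(packets_in_flight):
        return min(base_rto, max_rto)
    return min(base_rto * (2 ** retransmissions), max_rto)
```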
Title: "Seeding TCP retransmission timer using SYN/SYNACK RTT sample
(draft-ycheng-tcpm-rtosynrtt-00.txt)"
Presenter: Yu-chung Cheng
Time: 15 minutes
Gorry: Analysis is on fixed lines rather than wireless?
a: Usually on those links the SYN RTT is higher, but could be lower.
q: The SYN RTT on wireless can in fact be much higher, but it seems
the estimator could be too high based on this.
a: We could clamp to 3s.
q: You have this 'd' in your formula representing delay for two
packets, should you change this if the IW is 10 packets?
a: TCP typically acks every other packet, so the first ack is usually
after receiving two packets.
q: RFC 2988 says to reset the timer after each ACK, but a stack may
sample only once per window. What are the implications?
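The seeding scheme under discussion can be sketched with RFC 2988's rules for the first RTT measurement R (SRTT = R, RTTVAR = R/2, RTO = SRTT + max(G, 4*RTTVAR)), applied to the SYN/SYNACK sample. The 3-second ceiling reflects the "clamp to 3s" suggestion in the exchange above; the exact bounds and function name are assumptions of this sketch, not the draft's specification.

```python
# Sketch: seed TCP's retransmission timer from the SYN/SYNACK RTT
# sample, following RFC 2988's first-measurement rules:
#   SRTT   = R
#   RTTVAR = R / 2
#   RTO    = SRTT + max(G, 4 * RTTVAR)
# clamped to [1s, 3s] (the 3s ceiling is the clamp suggested above).

def seed_rto_from_syn(syn_rtt: float, clock_granularity: float = 0.1,
                      min_rto: float = 1.0, max_rto: float = 3.0) -> float:
    srtt = syn_rtt
    rttvar = syn_rtt / 2.0
    rto = srtt + max(clock_granularity, 4.0 * rttvar)
    return min(max(rto, min_rto), max_rto)
```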
Title: "Scaling IW with Internet scale"
Presenter: Matt Mathis
Time: 10 minutes
Fred Baker: How do you expect clients to know the link buffer space?
A: They do in some cases.
Bob Briscoe: 83 SYNs for multiple objects?
A: I didn't dig through, but there were 83 SYNs
Bob: Parallel for the same object?
Tim: 83 immediately?
Matt: I didn't check how many were cascaded.
Carsten: I usually see 30-40 objects in 3-5 waves, later waves delayed
by download of earlier.
Matt: I've spoken to these developers (both browser and site
developers) and they do it for latency... deliberately.
Costin: A lot of these connections are there for pipelining your local
link...
Matt: Which is why it is reasonable to have some parallelism, but not
that much.
Aaron: What's a 'slow' link?
Matt: Coming to that
Bob: Problem here is, buffer size is getting smaller, based on long
flows.
Matt: Those analyses are for highly aggregated links, though.
Bob: That analysis across both small and large numbers of flows is
supposed to decrease for faster links, but to optimise startup you
need more buffer.
Matt: Slow links are mostly NOT shared, and have only a couple of
segments worth of buffer. IW=3 is too big then.
Aaron: You're saying that there is only dumb queueing?
Matt: I'd hope not, but the paper I read did not say.
Content developers go to a lot of trouble to spread assets across
servers.
Bob: Isn't that the same thing, though? You have multiple IWs at the same time.
Title: "A Simulation Study on Increasing TCP's IW - Preliminary Results"
Presenter: Ilpo Jaervinen
Time: 30 minutes
Bob: This is a useful presentation to give some intuition as to what
happens. What sort of queues were you using?
A: Tail drop
Bob: I think the results would be very different with a different kind
of queue.
Jana: Do you have an insight as to why the fairness between bursts
goes down at IW=10?
Richard: What kind of stack and congestion control?
A: This is ns-2, its defaults.
Title: "Increasing TCP's Initial Window (draft-hkchu-tcpm-
initcwnd-01.txt)"
Presenter: Nandita Dukkipati
Time: 45 minutes
Fred: What was the impact on sessions competing with yours? In other
words, have you completely blasted the rest of the world off the
network?
A: We can't completely answer that.
--- new speaker
Jana: TCP latency is transfer completion time?
A: Latency is time from first byte leaving to last byte acked
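With latency defined as above, a back-of-the-envelope model (my assumption, not from the draft) shows why a larger IW shortens transfers: under idealized, loss-free slow start the window doubles each RTT, so a transfer of n segments completes in roughly ceil(log2(n/IW + 1)) round trips.

```python
# Idealized slow-start model: rounds needed to deliver n segments
# starting from initial window iw, doubling the window each RTT
# (no loss, no delayed-ACK effects). Illustrative only.

def slow_start_rounds(n_segments: int, iw: int) -> int:
    rounds, sent, cwnd = 0, 0, iw
    while sent < n_segments:
        sent += cwnd   # one window's worth delivered this RTT
        cwnd *= 2      # window doubles each round
        rounds += 1
    return rounds
```

For a 26-segment response, IW=3 needs 4 round trips while IW=10 needs 2, which is the latency saving being measured.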
Costin: Why is there a dip at 60 KB?
A: There are always some proxies, and there was a particular site
skewing the data.
Richard S: You're investigating long links, high RTT, what happens if
you run at very short RTT?
A: We also studied that, but didn't really see anything interesting.
Q: Did you see initial timeouts increase?
A: Somewhat, there is a slide on that.
Q: Are you using a Linux stack?
A: Stock Linux stack.
Q: That stack is unique in retransmission recovery.
A: We're using a stock stack.
Q: That should be noted.
Bob: It would be better to come with the question 'what should IW be?'
Nandita: Should we be trying to find a global for the internet?
Lars: This is a TCPM proposal. We'd like to come with a proposal for
standardising.
Nandita: It's a great research question.
Jana: Thanks for doing all this. I suggest that 'offered load' is too
coarse a metric, we can talk about ways of doing something more
realistic.
Nandita: For sure... offered load >1 is unstable. Do you have any
real traces?