[Iccrg] Meeting minutes

michawe at ifi.uio.no
Wed Aug 19 14:30:51 BST 2009


The notes from our last meeting (taken by Joseph Ishac - thanks!)
are now available at:

and at the bottom of this email.



ICCRG Notes:

Start Time: 9:02 am
Room State: Packed

No agenda bashing.
First is ICCRG status: no comments.

Dimitri Papadimitriou
"Open Research Issues" Talk
Wes: Lot of work has gone into this document, chairs think it's finished. 
Trying to close on this document, please put forth any final comments in
the next two weeks so that it can go to review.  (No additional comments
from room)

Lawrence Stewart:
The future of congestion control: feedback from an implementer's perspective
30 mins (9:15)

(13:59) Bob Briscoe: Is the motivation of the offloading to reach those
working at high speeds, or is it more to do with other factors (doing
other things at high speed, interrupts, etc.)?  Bob thinks it's the latter.
Lawrence and Bob: agree on the clarification (that it is about the other
factors).
(18:25) Question off Mic (from the crowd): Have you looked at ECN behavior?
A: No, have not yet looked at ECN personally, there are implementations in
BSD, etc... but I've not worked with it for this.

(31:40) Aaron Falk: So you're saying that the I-Ds should state
packet/byte behavior differences - so you're saying that researchers
should understand what the differences are... should be part of describing
the performance.... a TMRG input point
A: Absolutely... important work that someone should do.  Some of these
issues don't come up unless you put them in the light of a particular
scenario.
Aaron Falk: A nice candidate for a short RFC would be to describe the
MUSTs and SHOULDs of how congestion control should be specified.
A: Definitely only highlight the key ones here, but I could see a document
detailing these and more.
Tim Shepard: If I had one version, couldn't you just figure out the other
- or might it not be that simple. - Appreciates the issues that can arise
from the two approaches (Path MTU discovery...)
A: Right there are a lot of issues... Certainly being more specific if
we're leaving things unspecified would be an excellent start
Mike Lambrets:  Works for an ISP.  Has users that complain about the
speed of the network (slow).  Would exposing TCP performance in a
standard way be within scope?  An API to get those metrics... be able to
answer the question: "Is the download slow because of X reason?"
A: Gray area, isn't there an API for this already? in some part? API calls
to adapt behavior?
Aaron Falk (shout out): Thinks Web100 has that
A: Would be nice to have this to tune behavior, with an appliance application
ML: Yes, it's hard to do manually with Wireshark, etc...
A: Yes, it doesn't scale - it's a nightmare.
(Agreement, examples)
A: It would be nice to integrate with pcap libraries (author has some
effort in this)... should also be helping implementers more often.

(43:00) Bryan Ford:  Likes the results on delay and is a personal interest
area.  Not sure if this research group has looked at the detail of the
basic question - if it's possible for a CC algorithm in the wild to
realistically achieve a low delay and still be operable.
A: Personally, thinks the answer is yes... graceful degradation, if you
have a fat pipe, there's not much of a problem - but we need to
standardize the approaches.  Don't have a concrete answer, but the point
is to document these.
Wolfgang Beck (JABBER): ledbat might have some useful input to this.
A: Sure, the scavenger class TCP is interesting in certain uses.  Most
people are concerned with good TCP performance (tuned right), although not
completely up on all of ledbat's work.
S. Shalunov:  Remind everyone of ledbat... doesn't think it's a well
formed goal to try and fill buffers and at the same time keep them empty.
A: Thinks we show that it's not good to fill buffers completely
SS: Do you think it's possible to compete with NewReno "fairly" and
maintain low delay... it pursues two goals of keeping the buffer both
full and empty at the same time... how do you do that??
A: Clarifying position: More in favor of keeping the buffers "less full". 
There's more that we should be able to do to make customers happier, even
if we don't fully utilize the network
A: There are lot of different options you can tweak to get better
performance, in a home scenario (science labs might be a different
story)... like to make sure the default is good and does the least amount
of harm.
Joseph Ishac: Shouldn't your goal be to keep the network utilized and the
delays as low as possible... what that means to your buffers is not
necessarily so cut and dry
Bryan Ford: Is it possible to have TCP friendliness and empty buffers??
A: And it's a good one.
Wes: Go to the ledbat wg this afternoon
SS: Perplexed that the relationship between buffer occupancy and extra
delay isn't cut and dry.  It seems obvious that it's proportional.
Tim Shepard: It could be more complicated... it depends on how the queues
are serviced.  You can have multiple queues... etc.
A: It would be interesting to see what would happen if simple things like
priority or weighted queuing were put in routers.... or maybe just shrink
the buffers.
SS: I like that... you tell these people and they just do it? In the many
devices already deployed... (/sarcasm)
A: Sure, we'll piggyback this on IPv6 deployment... it's hard work... we
shouldn't work on the assumption that this will never change.  Would like
to see these recommendations come out from somewhere.
ML: Ethernet-to-the-home space has very little buffering... L3 switches
with ms buffers.  Agrees that CPU manufacturers should use fair queuing...
prioritize ACKs, etc...

Bob Briscoe / Matt Mathis
Beyond TCP-friendly Design Team
30 minutes (53:00)

(1:00:00) Aaron Falk: Question about the goals - how to shift to a new
capacity sharing architecture... you want to do capacity sharing,
understand how you're going to implement it and how to migrate to it...
is it really that broad a scope?
A: Yes - both what is the target and how do we get there.
AF: process note - you need to have these teams have outgoing feedback to
share problems
A: Plan to use ICCRG list
Wes: Arch here means defining the pieces that would be needed and not
specifying the actual pieces to put into these slots.
AF (no mic): This goes beyond requirements right?
Wes: This is a feasible set of things that can be used, not specifying
specific pieces for each slot.
A: where do functions fit... and how do you accommodate existing stuff.
AF:  Do you believe you will develop and recommend mechanisms?
A: No on either count... but would like to deprecate potentially.
AF: This isn't a mechanism development... ie: ReECN
A: No... that's coming up next... this is beside that, not in it.
A & Wes: Not trying to approve a particular architecture... Really need
to develop an architecture and see what we need, and where current
activities fit.
??(A: Haven't really fully run these thoughts past the other authors.)

A: Asks chairs if a particular diagram is a good record of this.
Wes: Thinks the diagram is similar... just made up one that was better
for Aaron in the plenary.
Aaron Falk: Notes that an RG can't publish BCPs.
Tim Shepard (no mic): current practice or BCP?
ML: Are you planning on deprecating RFC 5033 with this document? (or ...)
A: Might not need to be... the document doesn't say TCP-friendly is
good... not very strict on recommendations.  Might be able to coexist.

(1:29:29) ??: Doesn't think the comparison is fair... TCP gets less CC
A: Right, because of its behavior.
??: So overall it's still AIMD... you're saying that if you're getting a
constant flow of signals then you need to do AIAD?
A: Doesn't work like that... by working in the terms here, you will get
enough signals, whereas TCP doesn't get enough as it gets faster.

(1:39:00) Dave Craig: Confused by the goals; Relentless wants to pick up
more loss to figure out the bandwidth, but the policing is going to put a
limit on the congestion you can create.  Why would I want to run
Relentless if that's going to put me up against the congestion limit
faster?
A: Depends on how others play the game... and it is a tradeoff.  That
space is why I think congestion needs to be the right metric for the
internet space and why this needs to be figured out early.
Gorry Fairhurst: Re-ECN, Relentless, and then slides to say how we fit
this and more all together.  Interesting to see how we build something
with all these different mechanisms... Is that still a focus of this
activity... as we really need that still
A: Still of interest and we would like to get more involved.
GF: How do these things fight, and in different environments... is a very
interesting question
Bryan Ford: Doesn't this have a vulnerability to "rich jerks"... aren't
these preferential to folks that can buy a lot of bandwidth?  Or who can
be a primary cause of the congestion.
A: You have a mechanism to mediate between rich jerk and congestion
(network)... use $$ from jerks to upgrade bottlenecks, ie.
BF: Two orthogonal items in moment fairness (at a granularity of per
flow/user) ... or done in high order metrics (number of bytes)
A: Would like to have an algorithm that could be used liberally, but there
might be conditionals to it in either case... and there will most
definitely be tussles.  I would like to see those play out.
Janardhan Iyengar: Can you say more about the transition in moving away
from the current solutions.
A: The transition is almost more away from a mind set (ie TCP Friendly). 
If we take those away, what new "Rules" do we use?  Not just the
mechanisms, but the mindset as well.  How to transition when the network
is doing policing in one area, but not in others, etc.  Do we need to say
something to operators?
Mike Lambrets: From experience in residential ethernet... trend has been
for more fast dumb devices... instead of being clever with what you have,
sometimes it is better to spend the $$ on just upgrading the system
operationally.  Better to look at a graph that doesn't flatline. 
Operationally... fast is the answer.
A: Exactly what I'm trying to do... trying to find the min - a place where
you just have ECN.
ML: If you only have a 5ms buffer, I don't see the point in ECN
A: As a technique, you can run a token bucket as if you were running at a
slower rate...
ML: (understands the technique) Also just wanted to note how the ISP
community was moving (to faster devices)
A: Understood, but the assumptions are also changing as we now have
things like window scaling changes, etc.
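The token-bucket technique mentioned above (behaving as if attached to a
slower link) can be sketched as a minimal shaper; this is my own
illustration, and the class and parameter names are assumptions, not
anything discussed at the meeting:

```python
import time

class TokenBucket:
    """Minimal token-bucket shaper sketch.

    Tokens accrue at `rate` bytes/sec up to a depth of `burst` bytes; a
    packet of `size` bytes may be sent only when enough tokens are
    available.  Configuring `rate` below the actual link rate makes the
    sender behave as if it were running at that slower rate.
    """

    def __init__(self, rate, burst, now=None):
        self.rate = float(rate)      # token fill rate, bytes/sec
        self.burst = float(burst)    # bucket depth, bytes
        self.tokens = float(burst)   # start with a full bucket
        self.last = now if now is not None else time.monotonic()

    def allow(self, size, now=None):
        """Return True and consume tokens if `size` bytes may be sent now."""
        if now is None:
            now = time.monotonic()
        # Refill for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False
```

For example, a bucket with `rate=1000` and `burst=1500` lets one 1500-byte
packet through immediately, then forces a 1.5-second wait before the next.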
Doug G) If you're not looking at tuples... do you create an opportunity
for DoS attacks that happen in the middle of the network?
A) Yes it is an attack that Mark Handley thought up a while back.  There
might be a way of combating the situation by adjusting the behavior at the
application layer.  Example given was a DNS scenario.
DG) However, they're not causing the issue, and they have no control over
the policing (in this attack)
A) So yes the question is very important, a case we thought of - and a
solution needs to be worked out.  However, we need a good solution for
handling this congestion as well.

Michael Welzl
30 minutes (1:58:00)

(2:13:00) Bob Briscoe: Clarifying... you don't get any more performance
if you're fighting yourself... if you're self-congested
A: Sure if you own your own bottleneck - or the only one fighting for your
choke point.
Bryan Ford: Seems like you're most interested in N>1, and there are two
uses... one is being more aggressive against other users, the other is
when you're self-congested and not getting the performance out of your
access link... but that's where CC algorithms such as CUBIC come in.
A: But then you don't have some of these solutions for things like
multimedia traffic (no CUBIC, for example, which you might use for file
transfers).
BF: So in terms of a specific number, the TCP spec has a magic number of
2 for the # of connections to be fair
A: (willing to compromise with 4 (from 6))
BF: Wanted to suggest the most interesting use case might be when N<1...
simple and safe short term way of allowing for a good starting point on
fair multipath behavior
A: (agrees on multipath behavior benefit)
Gorry: Have you looked at sharing with bursty traffic sources instead of
just continuous traffic sources.
A: Have not yet looked into that.... just normal TCP competition.
Pasi Sarolahti: Would like to get an impression of whether people would
be interested in following up... seeing this happen in the DCCP working
group.
A + Wes: Show of hands for discussing this in the DCCP working group?
(Not much show of hands)
Wes: Asks who is actually in DCCP (not much show of hands either - maybe
one) - so there may be interest, but no personnel present.

Dragana Damjanovic
Explicit Feedback on Access Links
30 minutes (11 mins actual time left) (2:21:00)

(2:25:00) Joseph Ishac: Clarifying - So this is coming from just the
next hop?
Lloyd Wood: Getting information from the link is a difficult issue.  Had
this issue with router/modem integration... modem has to talk to the next
hop router.  You have a similar problem here, host on a home net and a
stub link.  Did you think of the mechanisms to actually communicate or
send the data?  Multicast?
Tim Shepard: One of your slides said IP option.  Not sure what was meant.
A: One of the methods would be to use a small IP option from the end host
to the router and no further.
LW: Will supply info about this issue, there might be some commonality.
Bryan Ford: Unclear of the purpose... is it fairness?  or is it taking
advantage of optimal bandwidth as soon as possible.
A: Kind of managing the access link, trying to smartly use your own link.
Tim Shepard: As an example - does this already - uses Linux traffic
control to not saturate the line between his Linux box and DSL modem...
to not fill its queue, and to make sure the receive window doesn't open
too large.  Does this by hand.  Is your approach to come up with some
automated solution?
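A hand-tuned setup of the kind Tim Shepard describes could look roughly
like the following untested sketch using the Linux `tc` token-bucket
qdisc; the interface name and rates are placeholders, not values from
the meeting:

```shell
# Shape upstream traffic slightly below the DSL sync rate so the
# modem's own queue never fills (placeholders: eth0, 900kbit).
tc qdisc add dev eth0 root tbf rate 900kbit burst 10kb latency 50ms

# Inspect the installed qdisc and its drop/overlimit counters:
tc -s qdisc show dev eth0
```

Keeping the shaped rate just under the modem's uplink rate moves the
queue into the host, where it can be managed, instead of the modem.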
A: Yes... just really wanted to clarify the objective (reducing delay?)
TS: What do you mean by IP option?  IPv4 option?  It doesn't seem like a
good idea.  If a box isn't there that understands it, then it's going to
be dropped, or forwarded into the network and dropped by some router.
Don't see a path forward with IPv4 options.
A: Yeah, I don't have a complete solution.  The router might be able to
send to the end host without issue, but not the other way around.
??: If that's the case (direction?), why mess with options at all - just
roll your own solution.
A: Yes that's possible.
(shout out): Source Quench
GF: Quick start caution... assuming that the first link is the bottleneck
is a bit risky... might not be doing the right thing for you.
A: Understood - QS isn't necessarily the option to use, but you could use
a QS to the first router, not the whole path.
GF: That's a big assumption if you're going to another node or access
link... recommends QS be done on the whole path... simply doing it on
your access link and assuming that is the bottleneck may be a "little"
risky
ML: Should the OS try to queue this or should TCP fix it?  Want to do
this for more than TCP (ie: UDP streams).  How do you communicate this?
Seems...

(2:31:55) Adjourn
