[Iccrg] Meeting agenda

Lachlan Andrew lachlan.andrew at gmail.com
Sat Sep 9 19:08:00 BST 2006


Greetings all,

Executive summary:  the amount by which the source should back off
when there is corruption increases with the amount of congestion
(signalled by loss).

Detailed arguments (plus proposed response) below:

On 09/09/06, John Leslie <john at jlc.net> wrote:
> Michael Welzl <michael.welzl at uibk.ac.at> wrote:
> > This inspires me to mention one more:
> >
> > The correct reaction to corruption.
>
>    Actually, there is one: increase the redundancy of coding.
>    This actually will result in a hopefully-slight _increase_ in bandwidth
> demand. This is _not_ wrong!

Good point.  However, bear in mind that it is the role of the source
to do source coding, rather than "100% reliable" channel coding, which
is best done at the physical/link level.  One reason to allow the
network to deliver corrupt packets is that the ultimate application may
be able to withstand some corruption (as when transmitting perceptual
data).

In that case, the application should balance more concise source
coding against the increased channel coding to maximise the user's
perception *given* the network resources.  It does not immediately
tell us whether we should increase or decrease the bandwidth.

> > There seems to be some agreement (and, by the way, I
> > also personally agree  :-)   ) that the (not so uncommon)
> > idea of a sender which would not reduce its rate at all
> > in response to corruption is not the right way to do it.
>
> > Here's a (somewhat artificial) example explanation: you may
> > have a sender which generates a lot of traffic, yet doesn't
> > deliver anything useful - yet, this sender may cause other
> > senders to reduce their rates, even though they would
> > not experience any corruption whatsoever.
>
>    It is always "wrong" to fill a pipe with traffic which serves no useful
> purpose. Alas, there's often no way to determine whether it's useful.

Exactly.  Getting corrupted video is arguably more "useful" than
getting reliable spam.  It should be up to the application to
determine what is useful.

> > It may be the right thing to reduce the rate by less than
> > in the normal congestion case, but by how much...?
>
>    Wrong question!
>
>    Sufficient redundancy can repair the corruption (and, of course,
> eliminate the need for retransmission).

No, it is the right question.  The well-developed theory of Network
Utility Maximisation tells us that *** the amount by which the source
should back off when there is corruption increases with the amount of
congestion (signalled by loss).*** It is up to the corruption-tolerant
application whether it uses the bandwidth for redundancy or reduced
compression.

The issue of whether different applications should get more or less
bandwidth is a matter of choosing the "utility" the application
obtains by getting the data.  Michael is right that the incremental
utility gained by getting corrupt data is smaller than that from
uncorrupted data.  That means that the "optimal" solution will cause
that rate to decrease *provided* there is other traffic to take up the
slack (indicated by loss).  If there is no loss, the source should not
back off due to corruption.
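
To make that concrete, here is a toy Network Utility Maximisation
calculation in Python (a sketch only: the logarithmic utilities, the
weights and the capacity are my illustrative assumptions).  With
utilities  w_i log(x_i)  on a single link of capacity  C,  the optimum
is simply  x_i = w_i C / (sum of the w's),  so a lone corruption-
tolerant flow keeps the whole link, but backs off as soon as
higher-utility traffic appears:

# Weighted proportional fairness on one link:
# maximise sum_i w_i*log(x_i) subject to sum_i x_i <= capacity;
# the optimum is x_i = w_i*capacity/sum(w).  Numbers are illustrative.
def weighted_proportional_fair(weights, capacity):
    total = sum(weights)
    return [w * capacity / total for w in weights]

# Corruption-tolerant flow alone (no loss): it keeps the full capacity,
# however small its utility weight -- no back-off without congestion.
print(weighted_proportional_fair([0.2], 10.0))       # [10.0]

# A competing flow with higher per-bit utility (intact data) appears:
# the corrupt-data flow now gives up most of the link.
print(weighted_proportional_fair([0.2, 1.0], 10.0))  # [1.67, 8.33]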

A crude approach would be to respond to a loss rate  X  and corruption
rate  Y  in the same way as to a loss rate of  X (X + 2Y) / (X + Y)
with no corruption.  For  X << Y  (an uncongested lossy link), this
gives a "loss" rate of about  2X  and hence high potential throughput,
while still deferring to flows that see less loss.  If  X >> Y  (a
congested link), it gives a "loss" rate of about  X + Y,  treating
corruption as loss.

This can be implemented by a lookup table, the same way as HS-TCP is.
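
For illustration, here is a minimal Python sketch of the mapping above
(the function name and the example numbers are mine), computed directly
rather than via the lookup table:

# Crude mapping from measured loss rate x and corruption rate y to the
# effective loss rate x*(x + 2y)/(x + y) that the source responds to.
def effective_loss(x, y):
    if x + y == 0:
        return 0.0
    return x * (x + 2 * y) / (x + y)

# Uncongested but lossy link (x << y): roughly 2x.
print(effective_loss(1e-4, 1e-2))   # ~2.0e-4

# Congested link (x >> y): roughly x + y, corruption treated as loss.
print(effective_loss(1e-2, 1e-4))   # ~1.01e-2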

An important question is whether all applications should be
arbitrarily assigned the same utility (as is implicitly done by all
TCP variants), or the utility should reflect the actual application's
benefit from getting a certain amount of data at a given level of
corruption.  Since different applications gain vastly different amounts
of utility from corrupt data, I would suggest having two classes of
utility (i.e., two responses to congestion) -- one for TCP-like
behaviour which seeks 100% reliability and one for perceptual
applications which do their own redundancy tradeoff.
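
Purely as an illustration of what "two classes of utility" might mean
at the transport layer (the class split and the formulas here are my
assumptions, not a concrete proposal), the two classes could simply
feed different effective-loss signals to the same controller:

def effective_loss_reliable(x, y):
    # TCP-like class: data must arrive intact, so a corrupted packet
    # is as useless as a lost one and is counted as loss.
    return x + y

def effective_loss_perceptual(x, y):
    # Perceptual class: corruption is tolerable, so it only costs rate
    # to the extent that the link is also congested (mapping as above).
    if x + y == 0:
        return 0.0
    return x * (x + 2 * y) / (x + y)

x, y = 1e-3, 1e-2   # illustrative loss and corruption rates
print(effective_loss_reliable(x, y))    # 0.011
print(effective_loss_perceptual(x, y))  # ~0.0019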

Congestion control that responds to "heterogeneous" signals, like
loss and corruption, has begun to be studied by Kevin Tang
<http://netlab.caltech.edu/~aotang/pub/07/ton2007.pdf>.

Cheers,
Lachlan


-- 
Lachlan Andrew  Dept of Computer Science, Caltech
1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA
Phone: +1 (626) 395-8820    Fax: +1 (626) 568-3603


