[Iccrg] Meeting agenda

Lachlan Andrew lachlan.andrew at gmail.com
Wed Sep 13 16:51:28 BST 2006


Greetings,

I tried posting the below, but it bounced due to the attachment.  The
attachment is at
<http://netlab.caltech.edu/~lachlan/optimum_erased.pdf>.

Cheers,
Lachlan

On 12/09/06, Lachlan Andrew <lachlan.andrew at gmail.com> wrote:
> Greetings,
>
> On 12/09/06, John Leslie <john at jlc.net> wrote:
> > Lachlan Andrew <lachlan.andrew at gmail.com> wrote:
> >
> > > The well-developed theory of Network
> > > Utility Maximisation tells us that *** the amount by which the source
> > > should back off when there is corruption increases with the amount of
> > > congestion (signalled by loss).***
> >
> >    Citation?
>
> The attached is a very rough sketch of my argument.  It's probably
> full of holes, and I'd be happy to hear about them.
>
> > > Michael is right that the incremental
> > > utility gained by getting corrupt data is smaller than that from
> > > un-corrupt data.
> >
> >    But we don't know how much smaller...
>
> True.  We can treat the corrupt data as worthless (as Michael later
> suggested), which gives an upper bound on the amount we should back
> off.
>
> >    Regardless, I'd aim for enough redundancy to render the utility of
> > corrupted data "close enough" to the utility of uncorrupted data.
>
> That may also be hard to determine, and may involve too much
> redundancy.  Consider the case of a small fraction of packets being
> corrupted, but the corruption is almost complete.  To make those
> almost-useless packets useful, we would need massive redundancy.
> However, even the uncorrupted packets must incur that redundancy
> (since we don't know in advance which will be corrupted).  If the
> redundancy exceeds the rate of corrupt packets, we're better off just
> treating them as erasures.
>
> > > A crude approach would be to respond to a loss rate  X  and corruption
> > > rate  Y  the same way as to  X ((X + 2Y) / (X+Y))  loss with no
> > > corruption.
> >
> >    But what crystal ball do you consult to choose the constant (2)?
> > It seems rather low to me for many near-real-time uses...
>
> *grin*  I said it was crude :)  The attached sketch gives a first-cut
> systematic derivation.  It seems that for Reno, the "right" approach
> is simply to scale the window down by a factor of
> (1-corruption_rate),  assuming we treat corruption as erasure.  I
> haven't got around to thinking about partial corruption yet...
>
> > > Congestion control that responds to "heterogeneous" signals, like
> > > loss and corruption, has begun to be studied by Kevin Tang
> > > <http://netlab.caltech.edu/~aotang/pub/07/ton2007.pdf>.
> >
> >    This paper seems to be concerned with stability -- an important
> > concern, to be sure, but I didn't read any indication that his chosen
> > conditions were _necessary_ for stability.
>
> It mentions stability, but the focus is on the equilibrium rate
> allocations.  Normally, we can say for a given set of TCP flows what
> their equilibrium share of the rate will be.  In "heterogeneous"
> networks, it can depend intrinsically on the order of their arrival.
> That isn't just "slow convergence to fairness", but multiple stable
> rate allocations.  (That is why sufficient conditions for stability
> are of interest here, rather than necessary conditions.)  That is
> important if we are trying to work out what control will give
> different flows some desired share of the bandwidth.  This may be too
> "theoretical" to be of interest to ICCRG, but I thought it may be of
> some use.
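
As a plain-numbers illustration of the two rules discussed above (the crude
X((X + 2Y)/(X + Y)) mapping and the (1 - corruption_rate) window scaling),
here is a minimal sketch; the function names and the sample rates are mine,
not from the original messages:

```python
def effective_loss_rate(x, y):
    """Crude mapping from the thread: respond to loss rate x and
    corruption rate y as if seeing x * (x + 2y) / (x + y) loss
    with no corruption."""
    if x + y == 0:
        return 0.0
    return x * (x + 2 * y) / (x + y)

def scaled_window(cwnd, corruption_rate):
    """Reno-style adjustment from the sketch: scale the window down
    by (1 - corruption_rate), treating corruption as erasure."""
    return cwnd * (1 - corruption_rate)

if __name__ == "__main__":
    x, y = 0.01, 0.02  # illustrative: 1% loss, 2% corruption
    print(effective_loss_rate(x, y))  # 0.01 * 0.05 / 0.03 ~ 0.0167
    print(scaled_window(20.0, y))     # 20 * 0.98 = 19.6
```

Note that the crude mapping inflates the apparent loss rate (here roughly
1.67% instead of 1%), while the erasure view only shrinks the window in
proportion to the corruption rate.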


-- 
Lachlan Andrew  Dept of Computer Science, Caltech
1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA
Phone: +1 (626) 395-8820    Fax: +1 (626) 568-3603


