[Iccrg] Meeting agenda

John Leslie john at jlc.net
Tue Sep 12 18:01:52 BST 2006


Lachlan Andrew <lachlan.andrew at gmail.com> wrote:
> On 09/09/06, John Leslie <john at jlc.net> wrote:
>> Michael Welzl <michael.welzl at uibk.ac.at> wrote:
>> 
>>> This inspires me to mention one more:
>>>
>>> The correct reaction to corruption.
>>
>> Actually, there is one: increase the redundancy of coding.
>> This actually will result in a hopefully-slight _increase_ in bandwidth
>> demand. This is _not_ wrong!
> 
> Good point.  However, bear in mind that it is the role of the source
> to do source coding, rather than "100% reliable" channel coding, which
> is best done at the physical/link level.

   Point well taken: I was not differentiating by role, which may have
been the intent of the question.

   But forget not that in _any_ real-time application, our end-to-end
model imposes on each end the responsibility to determine the appropriate
amount of redundancy.

   "100% reliable" is telco-think. That model has been rejected in the
Internet design. It's better to use a more precise term here. (I'm not
proposing one, though.)

   There is a channel-level responsibility for _some_ level of reliability,
but it's not clear what that level might be. We'd _like_ ethernet-level
reliability, but that's not always appropriate. (Lachlan is quite correct
to assign this responsibility to link level or below.)

> One purpose of allowing the network to deliver corrupt packets is if
> the ultimate application can withstand some corruption (like
> transmitting perceptual data).

   Almost any near-real-time application will be better off receiving
the corrupted packets. (This implies some alternative to TCP.)

> In that case, the application should balance more concise source
> coding against the increased channel coding to maximise the user's
> perception *given* the network resources.  It does not immediately
> tell us whether we should increase or decrease the bandwidth.

   True. I merely meant that (hopefully-small) increases in network
traffic can improve reliability of transit for a given raw data rate.
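
   To make that concrete, here's a toy calculation (my own illustration,
not tied to any particular code or transport). Assuming independent
per-packet corruption, and that corrupted packets can be detected and
treated as erasures, a simple (n, k) block code lets a small bandwidth
overhead buy a large drop in residual loss:

    from math import comb

    def residual_loss(n, k, p):
        """Probability that an (n, k) erasure code fails to recover a
        block: more than n - k of its n packets are corrupted, assuming
        independent per-packet corruption with probability p."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(n - k + 1, n + 1))

    # 20 data packets per block, 1% per-packet corruption on the raw channel.
    k, p = 20, 0.01
    for n in range(k, k + 5):
        overhead = (n - k) / k    # extra bandwidth demanded, as a fraction
        print(f"overhead {overhead:4.0%}   residual loss {residual_loss(n, k, p):.2e}")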

>> It is always "wrong" to fill a pipe with traffic which serves no useful
>> purpose. Alas, there's often no way to determine whether it's useful.
> 
> Exactly.  Getting corrupted video is arguably more "useful" than
> getting reliable spam.  It should be up to the application to
> determine what is useful.

   I agree. (But of course TCP doesn't do this.)

>>> It may be the right thing to reduce the rate by less than
>>> in the normal congestion case, but by how much...?
>>
>> Wrong question!
>>
>> Sufficient redundancy can repair the corruption (and, of course,
>> eliminate the need for retransmission).
> 
> No, it is the right question.  The well-developed theory of Network
> Utility Maximisation tells us that *** the amount by which the source
> should back off when there is corruption increases with the amount of
> congestion (signalled by loss).***

   Citation?

> The issue of whether different applications should get more or less
> bandwidth is an issue of choosing the "utility" the application
> obtains by getting the data.  Michael is right that the incremental
> utility gained by getting corrupt data is smaller than that from
> un-corrupt data.

   But we don't know how much smaller...

> That means that the "optimal" solution will cause that rate to decrease
> *provided* there is other traffic to take up the slack (indicated by
> loss).  If there is no loss, the source should not back off due to
> corruption.

   I certainly agree with the last sentence.

   TCP backs off as if the corruption were congestion. Assuming any
reasonable percentage of TCP traffic, I doubt there's any need for other
traffic to back off. But there may be cases where congestion loss
continues despite ongoing corruption. I had presumed that reacting to
the _actual_ congestion loss would be sufficient -- and, to tell the
truth, I can't think of a useful way to determine how much additional
backing off would be right if that weren't sufficient.
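
   For illustration only (this is not what TCP does, and I'm assuming the
classification of a loss as congestion vs. corruption comes from somewhere
-- link-layer hints, explicit signalling, whatever), a rate update that
reacts only to actual congestion loss might look like:

    def update_cwnd(cwnd, event, mss=1.0):
        """Toy AIMD update that backs off only for losses attributed to
        congestion.  'event' is 'ack', 'congestion_loss', or
        'corruption_loss'; how a loss gets classified is assumed to be
        provided from below."""
        if event == "ack":
            return cwnd + mss / cwnd      # additive increase per ACK
        if event == "congestion_loss":
            return max(cwnd / 2, mss)     # multiplicative decrease
        if event == "corruption_loss":
            return cwnd                   # don't slow down; repair with redundancy instead
        raise ValueError(event)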

   Regardless, I'd aim for enough redundancy to render the utility of
corrupted data "close enough" to the utility of uncorrupted data.
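
   As a sketch of how "close enough" might be chosen (same toy
erasure-coding model as above; the target residual rate is where the
application's utility judgement comes in), the sender could simply pick
the smallest redundancy that meets its target:

    from math import comb

    def residual_loss(n, k, p):
        # Same binomial tail as in the earlier sketch.
        return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(n - k + 1, n + 1))

    def redundancy_for_target(k, p, target, n_max=1024):
        """Smallest n >= k whose residual block-loss rate falls below
        'target'; returns None if nothing up to n_max suffices."""
        for n in range(k, n_max + 1):
            if residual_loss(n, k, p) < target:
                return n
        return None

    print(redundancy_for_target(k=20, p=0.01, target=1e-4))    # prints 23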

> A crude approach would be to respond to a loss rate  X  and corruption
> rate  Y  the same way as to  X ((X + 2Y) / (X+Y))  loss with no
> corruption.  For  X << Y  (an uncongested lossy link), this gives
> "loss" rate 2X and hence high potential throughput, but deferring to
> those with less loss.  If  X >> Y (a congested link) it gives  "loss"
> rate X+Y, treating corruption as loss.

   But what crystal ball do you consult to choose the constant (2)?
It seems rather low to me for many near-real-time uses...
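
   For what it's worth, that crude mapping is easy to write down with the
constant exposed as a parameter; what value it should take (2 in your
example, larger for near-real-time uses, I'd guess) is exactly the open
question:

    def effective_loss(x, y, w=2.0):
        """Lachlan's crude mapping: treat loss rate x and corruption rate
        y as an effective loss rate x * (x + w*y) / (x + y).  With w = 2
        this tends to 2x when x << y and to x + y when x >> y."""
        if x + y == 0:
            return 0.0
        return x * (x + w * y) / (x + y)

    print(effective_loss(0.001, 0.05))    # x << y: roughly 2 * x
    print(effective_loss(0.05, 0.001))    # x >> y: roughly x + y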

> Congestion control that responds to "heterogeneous" signals, like
> loss and corruption, has begun to be studied by Kevin Tang
> <http://netlab.caltech.edu/~aotang/pub/07/ton2007.pdf>.

   This paper seems to be concerned with stability -- an important
concern, to be sure, but I didn't read any indication that his chosen
conditions were _necessary_ for stability.

--
John Leslie <john at jlc.net>


