Newsgroups: comp.dcom.cell-relay
Path: sparky!uunet!mcsun!sun4nl!research.ptt.nl!walvdrk
From: walvdrk@research.ptt.nl (KEES VAN DER WAL)
Subject: Re: Congestion Avoidance
Message-ID: <1992Jul27.203657.1@research.ptt.nl>
Sender: usenet@spider.research.ptt.nl (USEnet News)
Nntp-Posting-Host: dnlts0
Organization: PTT Research, The Netherlands
References: <1992Jul18.230550.3046@sics.se> <1992Jul19.130005.1@research.ptt.nl> <1992Jul25.092823.12189@e2big.mko.dec.com>
Date: Mon, 27 Jul 1992 19:36:57 GMT
Lines: 165

In article <1992Jul25.092823.12189@e2big.mko.dec.com>,
pettengill@cvg.enet.dec.com () writes:

> In article <1992Jul19.130005.1@research.ptt.nl>, walvdrk@research.ptt.nl
> (KEES VAN DER WAL) writes:

> |> PS. What worries me most at the moment is more the flow control measures to be
> |> taken for the network's sake, ie. how to avoid or solve congestion. I haven't
> |> seen any clear concepts on how to do that.

> Think about congestion avoidance. Why avoidance? Well, if you do congestion
> control, then you incur large (relative to the normal data rate) timeouts,
> and all sorts of other disruptions. With congestion avoidance, you are
> somehow detecting the possibility of loss and then taking action to prevent
> this loss from occurring.
>
> There is one sure way to avoid congestion loss: reserve sufficient bandwidth.
> The problem with data communication is that, unlike voice which goes from
> nothing to next to nothing, or video which is basically fixed at a constant
> lot of data, data comm goes from nothing to mucho data and then back to nothing
> in rather unpredictable patterns. Reserving mucho capacity will be expensive,
> but not reserving it leads to the likelihood that every user will want to
> use mucho bandwidth at just about the same time. I've monitored Ethernet
> traffic, and I've seen it behave like sewer flow on Super Bowl Sunday, massive
> flows during timeouts and halftime, but not much in between.

I partly agree, but not about video: the cell rate that comes out of the coder
depends very much on the coding/compression algorithm. Actually, the "natural"
behaviour is quite irregular: a lot of data when there is a lot of change on
the screen and hardly any data when there isn't. Of course it's better to
"smooth" the raw output of the coder, at the cost of some additional delay.
Smoothing can be made so effective that a continuous flow results. If the
coder creates more data than the smoother is allowed to put into the network,
you lose quality. In a cell-relay network you might (in principle) be allowed
to send that additional amount of data rather than discarding it at the
source.

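To make the smoothing idea concrete, here is a rough sketch (in Python, my
own illustration with invented numbers, not any standard mechanism): the
coder dumps a variable number of cells per frame into a buffer that is
drained at a fixed cell rate. The buffer occupancy is the extra delay you
pay, and buffer overflow is where the quality is lost.

    # Rough sketch of a coder-output smoother: VBR frames go into a
    # buffer drained at a fixed cell rate. Buffer depth costs delay;
    # overflow is where quality is lost. All numbers are invented.
    def smooth(frames_cells, drain_per_frame, max_buffer):
        """Return (cells sent per frame time, cells dropped per frame time)."""
        fill = 0
        sent, dropped = [], []
        for arriving in frames_cells:
            fill += arriving
            overflow = max(0, fill - max_buffer)   # the quality loss
            dropped.append(overflow)
            fill -= overflow
            out = min(fill, drain_per_frame)       # fixed smoothed rate
            sent.append(out)
            fill -= out
        return sent, dropped

    # A bursty source: quiet, a scene change, then quiet again.
    frames = [10, 12, 11, 200, 180, 15, 10, 9, 11, 10]
    print(smooth(frames, drain_per_frame=50, max_buffer=300))

The deeper the buffer, the smoother the flow, but every cell of buffer depth
is delay added to the picture.
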
But, going back to congestion: somewhere in the grey history of ATM it was
decided that it should be possible to guarantee the user a certain level of
cell loss and cell delay. Not an unwise decision in my opinion, but that
rather simple choice had great consequences:

You can only "guarantee" something if you know what's going to happen in
your network, so:

1) You can only do that in a connection-oriented network.

2) The user has to specify the (worst case) characteristics of his source.

3) The source has to be policed to ensure 2).

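The usual candidate for 3) is some form of leaky bucket. A small sketch (in
Python, virtual-scheduling form, all parameter values invented) of what the
network edge could run per connection:

    # Leaky-bucket policer in virtual-scheduling form. T is the nominal
    # inter-cell time (1/peak rate), tau the tolerance. Cells arriving
    # "too early" violate the contract; whether they are then discarded
    # or only tagged is an operator policy choice.
    def make_policer(T, tau):
        tat = 0.0                      # theoretical arrival time
        def conforms(arrival_time):
            nonlocal tat
            if arrival_time < tat - tau:
                return False           # too early: non-conforming
            tat = max(arrival_time, tat) + T
            return True
        return conforms

    # Contract: one cell per 10 time units, tolerance of half a slot.
    police = make_policer(T=10.0, tau=5.0)
    for t in [0, 10, 15, 18, 40, 41, 60]:
        print(t, "ok" if police(t) else "VIOLATION")
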
But, next to those high-quality applications, there are also applications
that are not much interested in strict "guarantees" of network performance.
For the sake of simplicity I only take the "loss" parameter into account
here, not the "delay".

Combine that with the observation that there are a number of sources sending
either "nothing" or "mucho" but "moderate" on average, and you have a number
of ideal candidates to be multiplexed together.

That won't work if "mucho" is e.g. 1/2 or 1/10 of the rate at the output of
the multiplexer. But if the source peak rate is well below that, and if the
"on"-time of the sources is known, you can calculate the probability that a
certain amount of output capacity is required.

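A back-of-the-envelope version of that calculation, assuming N independent
on/off sources (Python sketch, all numbers invented): the chance that the
momentary aggregate exceeds the allocated capacity is a binomial tail.

    # N independent on/off sources, each "on" with probability p_on and
    # sending at rate "peak" when on. Probability that the aggregate
    # exceeds the allocated capacity. All numbers are invented.
    from math import comb

    def overflow_probability(n_sources, p_on, peak, capacity):
        fits = int(capacity // peak)     # how many "on" sources fit
        return sum(comb(n_sources, k) * p_on**k * (1 - p_on)**(n_sources - k)
                   for k in range(fits + 1, n_sources + 1))

    # 100 sources, each on 10% of the time, peak 1 Mbit/s. Allocating
    # the full peak sum (100 Mbit/s) makes overflow impossible;
    # allocating only 30 Mbit/s gives the tail probability below.
    print(overflow_probability(100, 0.10, 1.0, 30.0))   # roughly 6e-9
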
Even with overall loss figures in the order of 1E-10, nobody is going to
observe any difference between:

Operator A, who allocates on the peak rate of every individual source;

Operator B, who allocates less than A, but has calculated that the
probability that the required aggregate capacity exceeds his allocation
is 1E-11.

Both operators are following the creed of "congestion avoidance". Obviously,
operator B needs more complicated traffic contracts and more elaborate
policing to stick to the "guaranteed" figures.

Now operator C comes in, eager to do better than A and B. He realises that a
large number of applications don't really need the 1E-10 loss figure and
that, for the same situation, he can allocate even less than operator B.
Until, of course, his customers start to complain.

The development from B to C seems "natural" to me, and it continues until
even the data applications start to complain about too much loss. If C wants
to keep his "high-quality" customers happy, he'll have to do more work than
A and B to ensure a strict separation of the two (or more) "quality classes".

C has to cope with congestion, because it is his "policy" to allow congestion
for a limited fraction of the time. As a rough guess, I estimate that being
congested for something in the order of 1E-3 to 1E-4 of the time wouldn't be
a problem for most data applications.

Congestion can only be avoided by following operator A's (or B's) policy;
with C's policy it has to be dealt with.

> Some protocols are real bad news under these circumstances; faced with a
> loss, they just add their retries on top of new requests. This, of course,
> just makes matters worse. If you can see the global picture, the best
> solution in an ATM kind of network would be discard everything from most
> of the active circuit in sort of a rolling blackout kind of mode. Anything
> else, and you end up with either total congestion collapse or you end up
> with the good citizens like TCP and DECnet giving up virtually all claim
> to any bandwidth to the hogs like NFS.

But how do you recover from such a "discard all" situation, where all sources
are eager to restart transmission to finally send those cells that should
have been sent a second ago? It looks to me like a self-synchronising
phenomenon: the occurrence of one congestion acts as the "director" that
orchestrates the next disaster.

The only "solutions" I see are the following two approaches (a toy
simulation of both follows below):

i) After detecting a congestion, every source throttles down to zero (or
some guaranteed minimum) and then resumes normal, unrestricted operation in
an *un*correlated way by waiting a randomly selected time.

ii) After detecting a congestion, every source throttles down to zero (or
some guaranteed minimum) and then resumes normal, unrestricted operation by
slowly increasing its peak cell rate from the minimum to the maximum
allowed. The increase in peak rate should be so slow that the time from
end-of-congestion to start-of-potential-new-congestion is long enough that
uncorrelated behaviour is a safe assumption by then.

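The toy sketch of the two restart disciplines (Python, all constants
invented), just to show the shape of each:

    # After a congestion signal each source drops to a guaranteed
    # minimum rate, then either (i) waits a per-source random time and
    # jumps back to full rate, or (ii) ramps its allowed peak up slowly.
    import random

    def rate_random_wait(t, t_cong, r_min, r_max, wait):
        """(i): hold r_min for a random 'wait', then full rate again."""
        return r_min if t < t_cong + wait else r_max

    def rate_slow_ramp(t, t_cong, r_min, r_max, ramp_time):
        """(ii): raise the allowed peak linearly over ramp_time."""
        frac = min(1.0, max(0.0, (t - t_cong) / ramp_time))
        return r_min + frac * (r_max - r_min)

    t_cong = 100.0
    waits = [random.uniform(0, 50) for _ in range(5)]  # decorrelation
    for t in [100, 110, 130, 160]:
        print(t,
              [rate_random_wait(t, t_cong, 1.0, 10.0, w) for w in waits],
              round(rate_slow_ramp(t, t_cong, 1.0, 10.0, 60.0), 2))

In (i) the randomness spreads the sources out in time; in (ii) the slow ramp
itself gives them time to drift apart.
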
> While this situation can and does occur with frame switching (bridged or
> routed Ethernet, token ring, or FDDI LANs), at least you can be fairly
> sure that discarding a frame isn't going to result the retransmission of
> hundreds of frames, and the number of packets is small enough so that a
> discard policy that considers past history is conceivable.

I don't understand this point. What do you mean by "discard policy"? Do you
propose to selectively discard (all) cells from a limited number of
connections, e.g. randomly choose some scapegoat connections, kill all their
cells and so resolve the congestion?

> In addition,
> it is also reasonable to include sufficient buffering in the network
> to handle seconds worth data to allow riding thru short periods of congestion.
> This latter solution is not at all possible for ATM because of the timeliness
> requirements of voice and video.

It could be done, but then you need time priorities in the network to handle
the delay-sensitive cells (connections) separately from the delay-tolerant
cells (connections). That would be another step in complicating the ATM
network, and I'm not eager to take it before I've been convinced it's
_really_ necessary.

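For what it's worth, the mechanism itself is simple enough; a minimal sketch
(Python, purely illustrative) of such time priorities at a switch output
port, with two strict-priority classes:

    # Two strict-priority FIFO queues at an output port: delay-sensitive
    # cells are always served before delay-tolerant ones. The complication
    # is not this loop but dimensioning and managing two classes.
    from collections import deque

    class PriorityPort:
        def __init__(self):
            self.urgent = deque()    # voice/video: delay-sensitive
            self.relaxed = deque()   # data: delay-tolerant, deep buffer

        def enqueue(self, cell, delay_sensitive):
            (self.urgent if delay_sensitive else self.relaxed).append(cell)

        def transmit(self):
            """One cell per slot; the urgent class has absolute priority."""
            if self.urgent:
                return self.urgent.popleft()
            if self.relaxed:
                return self.relaxed.popleft()
            return None              # idle slot

    port = PriorityPort()
    port.enqueue("data-1", delay_sensitive=False)
    port.enqueue("voice-1", delay_sensitive=True)
    for _ in range(3):
        print(port.transmit())       # voice-1, data-1, None
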
Regards, <kees>

Kees van der Wal                        e-mail: J.C.vanderWal@research.ptt.nl
----------------------------------------------------------------------------
PTT Research Neher Laboratories         Room:  E130
Sint Paulusstraat 4                     Fax:   +31 70 3326477
2264 XZ Leidschendam  The Netherlands   Phone: +31 70 3326295