3 ADAPTING TO THE PATH: CONGESTION AVOIDANCE

If there's no congestion, γ must be near zero and the load approximately constant. The network announces, via a dropped packet, when demand is excessive but says nothing if a connection is using less than its fair share (since the network is stateless, it cannot know this). Thus a connection has to increase its bandwidth utilization to find out the current limit. E.g., you could have been sharing the path with someone else and converged to a window that gives you each half the available bandwidth. If she shuts down, 50% of the bandwidth will be wasted unless your window size is increased. What should the increase policy be?

The first thought is to use a symmetric, multiplicative increase, possibly with a longer time constant, $W_i = bW_{i-1}$, $1 < b \le 1/d$. This is a mistake. The result will oscillate wildly and, on the average, deliver poor throughput. The analytic reason for this has to do with the fact that it is easy to drive the net into saturation but hard for the net to recover (what [18], chap. 2.1, calls the rush-hour effect).^10 Thus overestimating the available bandwidth is costly. But an exponential, almost regardless of its time constant, increases so quickly that overestimates are inevitable.

Without justification, we'll state that the best increase policy is to make small, constant changes to the window size:^11

On no congestion: $W_i = W_{i-1} + u \quad (u \ll W_{max})$

where $W_{max}$ is the pipesize (the delay-bandwidth product of the path minus protocol overhead; i.e., the largest sensible window for the unloaded path). This is the additive increase / multiplicative decrease policy suggested in [15] and the policy we've implemented in TCP. The only difference between the two implementations is the choice of constants for $d$ and $u$. We used 0.5 and 1 for reasons partially explained in appendix D. A more complete analysis is in yet another in-progress paper.
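To make the policy concrete, here is a small, self-contained simulation of the window iteration above. It is illustrative only: the 16-packet pipe and the drop condition are invented for the example, while d = 0.5 and u = 1 are the constants chosen in the text.

    /* Illustrative simulation of additive increase / multiplicative
     * decrease: W_i = d*W_{i-1} on a drop, W_i = W_{i-1} + u otherwise.
     * The 16-packet pipe size and the drop rule are invented here. */
    #include <stdio.h>

    #define D 0.5   /* multiplicative decrease factor, d */
    #define U 1.0   /* additive increase, u, in packets per round trip */

    int main(void)
    {
        double w = 1.0;        /* window, in packets */
        double pipe = 16.0;    /* pretend pipesize W_max */

        for (int rtt = 0; rtt < 24; rtt++) {
            if (w > pipe)
                w = D * w;     /* drop: multiplicative decrease */
            else
                w = w + U;     /* no congestion: additive increase */
            printf("rtt %2d: window %.1f\n", rtt, w);
        }
        return 0;
    }

Running it shows the characteristic sawtooth: the window climbs linearly to just past the pipesize, is cut in half on the resulting drop, and climbs again.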
The preceding has probably made the congestion control algorithm sound hairy but it's not. Like slow-start, it's three lines of code (sketched in C below):

• On any timeout, set cwnd to half the current window size (this is the multiplicative decrease).

• On each ack for new data, increase cwnd by 1/cwnd (this is the additive increase).^12

• When sending, send the minimum of the receiver's advertised window and cwnd.
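A minimal sketch of these three rules follows. It is hypothetical, not the paper's code: the struct and function names are invented, and cwnd is kept in packets as a double for clarity, whereas a real kernel implementation such as the 4BSD one keeps it in bytes and uses integer arithmetic.

    /* Hypothetical sketch of the three congestion-control rules above.
     * cwnd is in packets (a double) for clarity only. */
    struct conn {
        double cwnd;    /* congestion window, in packets */
        double wnd;     /* receiver's advertised window, in packets */
    };

    void on_timeout(struct conn *c)
    {
        c->cwnd /= 2;                /* multiplicative decrease */
        if (c->cwnd < 1)
            c->cwnd = 1;             /* keep at least one packet in flight */
    }

    void on_new_ack(struct conn *c)
    {
        c->cwnd += 1.0 / c->cwnd;    /* additive increase: ~1 packet per RTT */
    }

    double send_window(const struct conn *c)
    {
        /* never send more than the receiver offers or cwnd allows */
        return c->cwnd < c->wnd ? c->cwnd : c->wnd;
    }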
----------

^10 In fig. 1, note that the 'pipesize' is 16 packets, 8 in each path, but the sender is using a window of 22 packets. The six excess packets will form a queue at the entry to the bottleneck and that queue cannot shrink, even though the sender carefully clocks out packets at the bottleneck link rate. This stable queue is another, unfortunate, aspect of conservation: The queue would shrink only if the gateway could move packets into the skinny pipe faster than the sender dumped packets into the fat pipe. But the system tunes itself so each time the gateway pulls a packet off the front of its queue, the sender lays a new packet on the end.

A gateway needs excess output capacity (i.e., ρ < 1) to dissipate a queue and the clearing time will scale like $(1-\rho)^{-2}$ ([18], chap. 2 is an excellent discussion of this). Since at equilibrium our transport connection 'wants' to run the bottleneck link at 100% (ρ = 1), we have to be sure that during the non-equilibrium window adjustment, our control policy allows the gateway enough free bandwidth to dissipate queues that inevitably form due to path testing and traffic fluctuations. By an argument similar to the one used to show exponential timer backoff is necessary, it's possible to show that an exponential (multiplicative) window increase policy will be 'faster' than the dissipation time for some traffic mix and, thus, leads to an unbounded growth of the bottleneck queue.

^11 See [4] for a complete analysis of these increase and decrease policies. Also see [8] and [9] for a control-theoretic analysis of a similar class of control policies.

^12 This increment rule may be less than obvious. We want to increase the window by at most one packet over a time interval of length R (the round trip time). To make the algorithm 'self-clocked', it's better to increment by a small amount on each ack rather than by a large amount at the end of the interval. (Assuming, of course, the sender has effective silly window avoidance and doesn't attempt to send packet fragments because of the fractionally sized window.)
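To spell out footnote 12's arithmetic (a worked restatement, not text from the paper): with a window of $W$ packets, roughly $W$ acks arrive per round trip time $R$, and each ack adds $1/W$, so over one round trip

$$ W \;\to\; W + W \cdot \frac{1}{W} \;=\; W + 1, $$

the same aggregate growth of $u = 1$ packet per interval $R$ as the additive-increase rule, but spread across the acks so that each small increment is clocked by the network rather than applied in one burst.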