6.1.2 Main Objectives of Flow Control

We look now at the main principles that guide the design of flow control algorithms. Our focus is on two objectives. First, strike a good compromise between throttling sessions (subject to minimum data rate requirements) and keeping average delay and buffer overflow at a reasonable level. Second, maintain fairness between sessions in providing the requisite quality of service.

Limiting delay and buffer overflow. We mentioned earlier that for important classes of sessions, such as voice and video, packets that are excessively delayed are useless. For such sessions, a limited delay is essential and should be one of the chief concerns of the flow control algorithm; for example, such sessions may be given high priority.

For other sessions, a small average delay per packet is desirable but it may not be crucial. For these sessions, network-level flow control does not necessarily reduce delay; it simply shifts delay from the network layer to higher layers. That is, by restricting entrance to the subnet, flow control keeps packets waiting outside the subnet rather than in queues inside it. In this way, however, flow control avoids wasting subnet resources in packet retransmissions and helps prevent a disastrous traffic jam inside the subnet. Retransmissions can occur in two ways: first, the buildup of queues causes buffer overflow to occur and packets to be discarded; second, slow acknowledgments can cause the source node to retransmit some packets because it thinks mistakenly that they have been lost. Retransmissions waste network resources, effectively reduce network throughput, and cause congestion to spread. The following example (from [GeK80]) illustrates how buffer overflow and attendant retransmissions can cause congestion.

Example 6.1

Consider the five-node network shown in Fig. 6.1(a). There are two sessions, one from top to bottom with a Poisson input rate of 0.8, and the other from left to right with a Poisson input rate f. Assume that the central node has a large but finite buffer pool that is shared on a first-come first-serve basis by the two sessions. If the buffer pool is full, an incoming packet is rejected and then retransmitted by the sending node. For small f, the buffer rarely fills up and the total throughput of the system is 0.8 + f. When f is close to unity (which is the capacity of the rightmost link), the buffer of the central node is almost always full, while the top and left nodes are busy most of the time retransmitting packets. Since the left node is transmitting 10 times faster than the top node, it has a 10-fold greater chance of capturing a buffer space at the central node, so the left-to-right throughput will be roughly 10 times larger than the top-to-bottom throughput. The left-to-right throughput will be roughly unity (the capacity of the rightmost link), so that the total throughput will be roughly 1.1. This is illustrated in more detail in Fig. 6.1(b), where it can be seen that the total throughput decreases toward 1.1 as the offered load f increases.
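The buffer-capture effect of Example 6.1 can be reproduced with a small event-driven simulation. The sketch below is an illustration, not part of the text: it assumes exponential transmission times, a shared pool of 20 buffers at the central node, input links of speed 1 (top) and 10 (left), and output links of capacity 1, as the example suggests; all function and variable names are made up for this sketch. As f grows, the printed total throughput should drift toward roughly 1.1, with the left-to-right session taking about ten times the top-to-bottom share.

import heapq
import random

def simulate(f, buffer_size=20, sim_time=50_000, seed=1):
    """Rough sketch of Example 6.1: two sessions share a finite FCFS buffer
    pool at the central node; rejected packets are retransmitted by the
    sending node."""
    rng = random.Random(seed)
    exp = lambda rate: rng.expovariate(rate)

    IN_RATE = {"top": 0.8, "left": f}      # external Poisson arrival rates
    LINK_IN = {"top": 1.0, "left": 10.0}   # input link speeds into the center
    LINK_OUT = 1.0                         # each output link has capacity 1

    waiting = {"top": 0, "left": 0}        # packets held at the entry nodes
    sending = {"top": False, "left": False}
    center = {"top": 0, "left": 0}         # packets of each session in the pool
    serving = {"top": False, "left": False}
    delivered = {"top": 0, "left": 0}

    events = []                            # (time, kind, session)
    for s in IN_RATE:
        heapq.heappush(events, (exp(IN_RATE[s]), "arrival", s))

    def start_input(t, s):
        if not sending[s] and waiting[s] > 0:
            sending[s] = True
            heapq.heappush(events, (t + exp(LINK_IN[s]), "input_done", s))

    def start_output(t, s):
        if not serving[s] and center[s] > 0:
            serving[s] = True
            heapq.heappush(events, (t + exp(LINK_OUT), "output_done", s))

    while events:
        t, kind, s = heapq.heappop(events)
        if t > sim_time:
            break
        if kind == "arrival":
            waiting[s] += 1
            heapq.heappush(events, (t + exp(IN_RATE[s]), "arrival", s))
            start_input(t, s)
        elif kind == "input_done":
            sending[s] = False
            if center["top"] + center["left"] < buffer_size:
                waiting[s] -= 1            # accepted into the shared pool
                center[s] += 1
                start_output(t, s)
            # else: rejected, packet stays at the entry node and is retried
            start_input(t, s)
        else:                              # "output_done"
            serving[s] = False
            center[s] -= 1
            delivered[s] += 1
            start_output(t, s)

    return {s: delivered[s] / sim_time for s in delivered}

if __name__ == "__main__":
    for f in (0.2, 0.6, 0.9, 2.0, 5.0):
        thr = simulate(f)
        print(f"f={f:>4}: top={thr['top']:.2f}  left={thr['left']:.2f}  "
              f"total={thr['top'] + thr['left']:.2f}")

The key mechanism is in the rejection branch: a packet turned away from the full pool remains at its entry node and competes again, and because the left link fires retransmission attempts ten times as often, it wins roughly ten out of eleven freed buffers, exactly as argued in the example.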
This example also illustrates how, with buffer overflow, some sessions can capture almost all the buffers and nearly prevent other sessions from using the network. To avoid this, it is sometimes helpful to implement a buffer management scheme. In such a scheme, packets are divided into different classes based, for example, on origin, destination
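The passage breaks off before describing the scheme in detail. One common form such a scheme can take, sketched below purely as an illustration and not as the text's own proposal, is a shared pool in which each packet class is also subject to its own cap, so that no single class can occupy every buffer; the class names, limits, and method names here are hypothetical.

class SharedBufferPool:
    """Illustrative sketch: a shared pool with a per-class cap, preventing the
    kind of buffer capture seen in Example 6.1."""

    def __init__(self, total_buffers, class_limit):
        self.total = total_buffers   # size of the shared pool
        self.limit = class_limit     # max buffers any one class may hold
        self.used = 0                # buffers occupied overall
        self.per_class = {}          # buffers occupied by each class

    def try_accept(self, packet_class):
        """Accept a packet unless the pool is full or its class is at its cap."""
        held = self.per_class.get(packet_class, 0)
        if self.used >= self.total or held >= self.limit:
            return False             # rejected; the sender retransmits later
        self.per_class[packet_class] = held + 1
        self.used += 1
        return True

    def release(self, packet_class):
        """Free a buffer when a packet of this class departs."""
        self.per_class[packet_class] -= 1
        self.used -= 1

# With, say, 12 of 20 buffers as the per-class cap, the left-to-right session
# of Example 6.1 can no longer squeeze the top-to-bottom session out of the pool.
pool = SharedBufferPool(total_buffers=20, class_limit=12)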