Congestion control methods in broadband networks

Abstract

The development of broadband expands high-bandwidth access over IP platforms to form a broadband superhighway. Traffic congestion may occur as usage increases, leading to slower speeds, longer trip times and increased queuing. The study of congestion control methods is necessary to avoid network overload or congestive collapse. The methods discussed in this paper are congestion control on IP networks, covering load control and capacity dimensioning, and congestion control in packet networks using Binary Feedback Models and Price Based Models.

1.0 Introduction

-what are broadband networks & the history

The term broadband was originally applied to differentiate multifrequency communications systems from baseband systems, because earlier services offered only a limited range of highly reliable, lower-bandwidth connections. Broadband, as a newer technology, provides higher-bandwidth services.

-broadband network architecture

Routing Gateways (RG) provide a layer 3 (IP) gateway function between end devices and the network. They include functions for IP routing, IP address allocation to end devices, QoS, Network Address Translation (NAT), firewall, management, Domain Name Server (DNS) and network authentication. A Retail NSP may include additional capabilities on the RG not listed above.

-what is network overload or congestive collapse and how it happen

-what is congestion control

Congestion control is needed to manage traffic entry into a telecommunications network and so avoid congestive collapse.

Congestion control concerns controlling traffic entry into a telecommunications network, so as to avoid congestive collapse by attempting to avoid oversubscription of any of the processing or link capabilities of the intermediate nodes and networks, and by taking resource-reducing steps such as reducing the rate of sending packets. (http://en.wikipedia.org/wiki/Congestion_control)

-the use of traffic control in network

In computer networking, network traffic control is the process of managing, prioritising, controlling or reducing network traffic, particularly Internet bandwidth. It is used by network administrators to reduce congestion, latency and packet loss, and is part of bandwidth management. In order to use these tools effectively, it is necessary to measure the network traffic to determine the causes of network congestion and attack those problems specifically.

- IP networks

-packet networks

Assuming the Internet will continue to become congested due to a scarcity of bandwidth, several approaches to controlling best-effort traffic follow. One approach involves the deployment of packet scheduling disciplines in routers that isolate each flow, as much as possible, from the effects of other flows [She94]. This approach suggests the deployment of per-flow scheduling mechanisms that separately regulate the bandwidth used by each best-effort flow, usually in an effort to approximate max-min fairness. A second approach, outlined in this paper, is for routers to support the continued use of end-to-end congestion control as the primary mechanism for best-effort traffic to share scarce bandwidth, and to deploy incentives for its continued use. These incentives would take the form of router mechanisms that restrict the bandwidth of best-effort flows using a disproportionate share of the bandwidth in times of congestion. Such mechanisms would give a concrete incentive to end-users, application developers, and protocol designers to use end-to-end congestion control for best-effort traffic. A third approach would be to rely on financial incentives or pricing mechanisms to control sharing. Relying exclusively on financial incentives would be a risky gamble that network providers could provision additional bandwidth and deploy effective pricing structures fast enough to keep up with the growth in unresponsive best-effort traffic in the Internet.

2.0 Content

- Congestion control on IP networks

Load control and Capacity dimensioning (Techniques in Internet Congestion Control)

- Congestion control in packet networks using (Approaches to Congestion Control in Packet Networks)

Binary Feedback Models and Price Based Models

3.0 Discussion

- The suitable methods for broadband

4.0 Conclusions

The two dimensions of congestion control are explored: 1) load control, which controls the amount of traffic transmitted onto the network, and 2) capacity dimensioning, which provisions enough capacity to meet the anticipated load and so avoid congestion.

Load Control

We begin the work on load control by examining the design of Active Queue Management (AQM) algorithms. We focus on AQMs that are based on an integrator rate control structure. Unlike some previous AQMs, which measure congestion by measuring backlog at the link, this structure is based on the measurement of packet arrival rate at the link. The new AQMs are able to control link utilisation, and therefore control and reduce the amount of queuing and delay in the network. The rate-based AQMs can be used to provide extremely low queuing delays for IP networks, and enable a multi-service best-effort network that can support real-time and non-real-time applications.
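
The integrator rate control structure described above can be sketched as follows: the controller adjusts a mark/drop probability from the measured packet arrival rate at the link, rather than from queue backlog. The class name, the target utilisation of 0.95 and the gain constant are assumptions for illustration, not values from any specific AQM.

```python
# Minimal sketch of a rate-based integrator AQM (illustrative only).
# Congestion is measured from the packet arrival rate at the link,
# not from the backlog in the queue.

class RateBasedAQM:
    def __init__(self, capacity_pps, target_util=0.95, gain=0.05):
        self.capacity = capacity_pps                # link capacity, packets/sec
        self.target = target_util * capacity_pps    # target arrival rate
        self.gain = gain                            # integrator gain (assumed)
        self.p = 0.0                                # mark/drop probability

    def update(self, arrival_rate_pps):
        # Integrate the normalised rate mismatch: when arrivals exceed the
        # target rate, p rises; when the link is under-utilised, p falls.
        error = (arrival_rate_pps - self.target) / self.capacity
        self.p = min(1.0, max(0.0, self.p + self.gain * error))
        return self.p

aqm = RateBasedAQM(capacity_pps=10000)
for rate in [12000, 12000, 9000]:   # measured arrival rates per interval
    p = aqm.update(rate)
print(round(p, 4))                  # probability rose under overload, then eased
```

Because the controller targets a utilisation below 100%, the queue stays nearly empty in equilibrium, which is how such AQMs keep queuing delay low.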

The history of the Internet reflects the two fundamental approaches to the problem of controlling congestion in networks: 1) capacity provisioning and 2) load control. Since congestion collapse occurs when the load of packets placed onto the network exceeds the network's capacity to carry them, the capacity provisioning approach is to ensure that there is enough capacity to meet the load, while the load control approach is to ensure that the load of packets placed onto the network stays within the capacity of the network. Capacity provisioning is achieved either by accurate performance analysis and traffic modelling, or by the brute-force approach of over-provisioning. There is a range of load control strategies for networks, from connection admission control schemes through to best-effort flow control as on the Internet.

As is evident from the ARPANET experience, economic and technical reasons limit capacity provisioning as a sufficient solution to the congestion control problem. Provisioning for peak loads is an expensive proposition, and in any case it may not always be possible to anticipate what the peak load will be. Therefore, load control has a permanent role in congestion control in networks. In this thesis, we will develop the areas of Internet congestion control by developing load control as well as provisioning techniques.

2.1.1 Load control mechanisms

When the capacity available is less than the demand for capacity, load control is the critical element which determines how many packets are allowed onto each link of the network, who gets to send them and when. This controls the quality of service (QoS) metrics such as bandwidth, latency and jitter experienced by users. There are a number of load control mechanisms, and these can be classified by the type of QoS guarantees they can deliver.

At one end of the spectrum are the connection admission control (CAC) schemes, such as the Resource Reservation Protocol (RSVP) [4]. Such schemes require the network to maintain information about each connection and to arbitrate whether connections are admitted or rejected, so that the connections that are admitted can be absolutely guaranteed their required bandwidth for the duration of the connection. When the load of requested connections increases beyond the capacity of the network, some new users will be rejected in order to maintain the bandwidth guarantees made to already admitted users. CAC is good for honouring bandwidth supply contracts that specify minimum rates. The IntServ proposal [122] uses the RSVP signalling protocol to extend Internet functionality to include CAC. However, CAC schemes are not widely deployed on the Internet, and it is believed they cannot scale to widespread use [93] due to the amount of per-connection information required inside Internet routers and switches.
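
A CAC scheme of this kind can be pictured as simple bandwidth accounting at a link: a connection is admitted only if its requested rate still fits under capacity alongside all existing reservations. The sketch below is hypothetical; the LinkCAC name and its methods are illustrative, not RSVP or IntServ code.

```python
# Illustrative sketch of connection admission control (CAC) accounting.
# Admitted connections keep a hard bandwidth guarantee; new requests
# that would break existing guarantees are rejected.

class LinkCAC:
    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps
        self.reserved = 0.0   # bandwidth already promised to admitted flows

    def request(self, bw_mbps):
        if self.reserved + bw_mbps <= self.capacity:
            self.reserved += bw_mbps
            return True       # admit: the guarantee can be honoured
        return False          # reject: protect already-admitted users

    def release(self, bw_mbps):
        # Called when an admitted connection terminates.
        self.reserved = max(0.0, self.reserved - bw_mbps)

link = LinkCAC(capacity_mbps=100)
print(link.request(60), link.request(30), link.request(30))  # third is rejected
```

Note that even this toy version keeps per-connection-request state at the link, which hints at why per-connection CAC is considered hard to scale across Internet routers.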

TCP/IP

By far the most dominant load control paradigm on the Internet is source flow control. The dominant source flow control protocol on today's Internet is TCP. In [116], measurements of traffic at core routers indicate that about 95% of traffic volume on the Internet is generated by the Transmission Control Protocol (TCP). The remaining traffic is UDP (about 4%) and ICMP. There are various versions of TCP, including Reno [112, 113], SACK [82] and Vegas [18]. The most widely deployed version is TCP Reno. This modern version of TCP stems from the first congestion-controlled TCP, TCP-Tahoe, as described by Jacobson and Karels [57].

The congestion control behaviour of the TCP protocol is important to the techniques developed in this thesis, and we will detail the congestion control mechanism of TCP in this section. The key function of TCP is to provide a reliable connection for the application layer across an unreliable best-effort network. TCP provides a number of guarantees to the application layer: (1) delivery of data, (2) in-order delivery of data and (3) error-free delivery of data. These guarantees are achieved by error detection, buffering and retransmission. However, in the case of congestion collapse, none of these guarantees can be met. The congestion control mechanism in TCP is in place to protect the integrity of the network and prevent congestion collapse.

To ensure the integrity of the network, TCP is designed around the "conservation of packets" principle. This principle demands that, in equilibrium, a packet must be removed from the network before a new packet is placed onto it. The intuitive argument for this principle is that the successful delivery of a packet frees network resources for the delivery of another packet. To support this conservation principle, TCP uses window based flow control.

Window based flow control limits the number of packets TCP may transmit onto the network that have not yet been acknowledged as received. As shown in Fig. 2.1, only the packets that fit into the window are allowed onto the network; packets generated by the application that do not fit wait at the source. The destination sends acknowledgement packets for each packet received successfully. Once the acknowledgement for a packet is received at the source, the acknowledged packet and the acknowledgement packet are removed from the transmission window, leaving room for the transmission of new packets awaiting transmission.
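
The window mechanism described above can be sketched as follows. The names (WindowSender, try_send, on_ack) are illustrative and do not come from any TCP implementation; the sketch only shows the conservation-of-packets bookkeeping, not retransmission or sequencing details.

```python
# Sketch of window based flow control: at most `window` unacknowledged
# packets may be in flight; a new packet enters the network only when
# an acknowledgement removes an old one from the window.

from collections import deque

class WindowSender:
    def __init__(self, window):
        self.window = window
        self.in_flight = deque()   # sequence numbers awaiting an ACK
        self.next_seq = 0

    def try_send(self):
        # Send only if the window has room (conservation of packets).
        if len(self.in_flight) < self.window:
            self.in_flight.append(self.next_seq)
            self.next_seq += 1
            return True
        return False               # application data waits at the source

    def on_ack(self, seq):
        # An ACK for the oldest in-flight packet frees a window slot.
        if self.in_flight and self.in_flight[0] == seq:
            self.in_flight.popleft()

s = WindowSender(window=3)
sends = [s.try_send() for _ in range(4)]   # fourth attempt is blocked
s.on_ack(0)                                # ACK frees one window slot
print(sends, s.try_send())
```

The self-clocking behaviour falls out of this structure: ACKs returning from the destination pace the release of new packets onto the network.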

At the opposite end of the load control spectrum is the best-effort network. Unlike a CAC based network, the best-effort network does not need information about each connection to be stored in the routers and switches on the data path, because there is no resource reservation for a connection. The best-effort network sees each packet transmitted as independent of any other packet. The sources themselves decide how much they should send and on the Internet the sources are predominantly TCP/IP. All requested connections are admitted, and the available capacity is shared between the connections. As a result, no explicit guarantees can be given about the bandwidth available to each connection. However, the simplicity of the network infrastructure is compelling, since there is no concept of connection within the network, only the ability to forward packets is required of the network. This simplicity is a key reason for the success of IP networks.

Somewhere between CAC and best-effort networks are flow aggregate schemes such as DiffServ [26] [10] [29]. With DiffServ, although individual connections are given no guarantees of minimum bandwidth, classes of connections are given minimum bandwidth guarantees. The bandwidth allocated to each connection within a class is determined essentially by a best-effort mechanism, and each connection within a class competes for bandwidth. However, the aggregate of connections within the class is guaranteed a minimum bandwidth in the presence of other classes in the network. Such aggregate schemes require only a small amount of state information in the network, such as the relative bandwidth assignment between the classes, and this state information is proportional to the number of classes. Although not offering the per-connection guarantees of CAC systems, they allow the network operator to give important applications some protection from less important flows. We will discuss DiffServ in detail in Chapter 4.
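
The class-level guarantee can be illustrated by the per-class state such a scheme keeps: one weight per class, from which each aggregate's minimum share of the link follows. The function, class names and weights below are assumptions for illustration, not part of the DiffServ specification.

```python
# Illustrative sketch of DiffServ-style aggregate bandwidth sharing.
# The network stores only per-class weights (state proportional to the
# number of classes); flows inside a class then compete best-effort
# for that class's share.

def class_shares(capacity_mbps, class_weights):
    """Split link capacity between classes in proportion to their weights."""
    total = sum(class_weights.values())
    return {c: capacity_mbps * w / total for c, w in class_weights.items()}

weights = {"gold": 3, "silver": 2, "best_effort": 1}   # assumed weights
shares = class_shares(100, weights)
print(shares)   # each aggregate's guaranteed minimum on a 100 Mbps link
```

In a real scheduler these shares would be enforced by a weighted queueing discipline; unused capacity in one class is typically redistributed to the others, so the shares are minimums rather than caps.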
