TOPIC NO 8:

Discuss the concept of Quality of Service (QoS) when applied to systems and networks. Describe IntServ and DiffServ, two well-known Internet QoS models. Explore why these QoS models have failed to gain universal acceptance.

Abstract

The demand for Quality of Service (QoS) in networks and systems has grown together with the use of multimedia applications over the Internet. Nowadays, network design engineers treat QoS as a core part of the network architecture, since networks are increasingly built for specialised multimedia solutions. QoS for IP based networks is a subject that has received a great deal of attention recently: with the continuing demand for multimedia and Voice over IP (VoIP) applications over the Internet, QoS requirements have also grown rapidly in recent years.

Several applications on the Internet require some level of Quality of Service (QoS) guarantee in terms of variables such as bandwidth, delay and packet loss. Real-time and time-critical applications such as video conferencing, streaming video and VoIP depend on QoS. IP was not designed with QoS in mind; instead it was designed to provide a best-effort service for transporting data packets from source to destination. IPv4 has been the backbone of the Internet since it was deployed, but it has shown its limitations in terms of QoS as global Internet communication and service demands have increased. IPv6 is the next-generation protocol designed as a replacement for IPv4. As the IPv4 address space is running out quickly, the arrival of IPv6 in general networks seems ever nearer, and it is interesting to examine what QoS capabilities IPv6 offers.

Internet traffic has grown exponentially and continues to do so, so there is a need for a QoS framework that can provide service guarantees for QoS-sensitive traffic in IP based networks.

This dissertation project aims to study the Quality of Service (QoS) issues in IPv4 and IPv6 based networks. In recent years, a number of research efforts have been directed at these issues, resulting in proposals for QoS frameworks such as Integrated Services (IntServ) and Differentiated Services (DiffServ).

My aim is to take an in-depth look at how these two frameworks work and to investigate how they improve QoS for IP based network traffic. This is done by setting up IntServ and DiffServ enabled networks in simulation software (OPNET IT Guru 9.1).

1. Introduction

Quality of Service (QoS) for IP based networks is a subject that has received a great deal of attention in recent years. With the continuing growth of multimedia applications on the Internet, there is an increasing need to satisfy the QoS requirements of such applications. QoS refers to the capacity of a network to provide better service to selected network traffic across various network technologies. The primary goal of QoS is to give priority to a certain 'flow' along with its bandwidth, delay, jitter, packet-loss and reliability requirements.

Various research efforts have resulted in proposals for several QoS frameworks, including Integrated Services (IntServ), Differentiated Services (DiffServ), Multiprotocol Label Switching (MPLS) and others. I will concentrate my research on IntServ and DiffServ and how they meet QoS service requirements.

Providing Quality of Service (QoS) in terms of bandwidth and delay is very important for Internet applications, especially to support the requirements of real-time and mission-critical applications. The core technology of the Internet, IP, does not have integrated QoS features, so some framework or technology is needed to provide QoS capabilities to applications. The IntServ and DiffServ frameworks provide such capabilities for QoS-sensitive applications over IP routed networks.

The IntServ framework was proposed in the early 1990s to provide QoS on the Internet. In IntServ, RSVP (Resource Reservation Protocol) is the signalling protocol. RSVP negotiates end-to-end QoS across the IP network by reserving bandwidth at the routers before the flow is sent. When a flow with QoS requirements arrives, the sender's edge router initiates path establishment by sending a PATH message towards the destination edge router. The receiving side responds with a RESV message back along the same path, attempting to reserve the bandwidth required to meet the requested QoS. Core routers configure their traffic control mechanisms so that each admitted flow is guaranteed to receive the bandwidth it has reserved. Through this per-flow, hop-by-hop signalling, IntServ provides end-to-end QoS guarantees. IntServ defines two classes of service, namely guaranteed service and controlled load service, and each flow is assigned to one of them.

In the DiffServ framework, QoS is achieved by providing services on a per-class rather than per-flow basis, with flows classified at the edge routers of the network. Packets are marked with special tags to prioritise them within the network traffic. Core routers then forward each packet to its next hop according to a per-hop behaviour determined by the packet's traffic class, which is identified by the Differentiated Services Code Point (DSCP) in the packet header (the Type of Service bits).

How to meet QoS requirements in today's Internet backbone networks is a popular research area, and the IntServ and DiffServ frameworks provide the necessary technology to explore it. My research concentrates on these two models and how they work in IP based networks. Different scheduling and queuing techniques that can be used to implement IntServ and DiffServ have been explored, and the OPNET IT Guru simulation software is used to study these technologies.

1.1 Aims and Objectives

The major aim of this thesis is to understand the core concepts and workings of IP based networks that require QoS, to deal with the problems that arise from this, and to investigate how the IntServ and DiffServ frameworks provide QoS to traffic in IP networks. The investigation covers IPv4 and IPv6 based networks and the QoS issues that arise in them. In essence, the aims of this dissertation are:

  • Investigation of IPv4 and IPv6 in terms of QoS issues: what are the problems and solutions?
  • To investigate how IntServ and DiffServ are used to provide Quality of Service.
  • Setting up a simulated network in simulation software to analyse IntServ and DiffServ in IP based networks. The work of the dissertation culminates in a simulated network built with OPNET, which produces performance measurements for QoS that explain how IntServ and DiffServ improve it.

2. Literature Review

In recent years we have witnessed exponential growth of the Internet in terms of the number of connected hosts, traffic volume and capacity. In the early years of the Internet, e-mail and file transfer applications accounted for more than 90% of the traffic, but they have now largely been surpassed by web and multimedia (voice, music and video) traffic [7].

The Internet has changed its shape and complexion as it has become an integral part of day-to-day business and personal life. As more businesses moved to the Internet, more Internet applications were developed. While the types of applications used on the Internet advanced rapidly, the actual backbone of the Internet remained the same. In the current age of cheap communication (courtesy of Voice over IP), the Internet has played a big role, which has led to large amounts of traffic being directed onto it. Sustained QoS has become a fundamental requirement for time-critical and real-time applications.

The increasing popularity of IP has shifted the paradigm from 'IP over everything' to 'everything over IP' [5]. It is important to understand how IPv4 has evolved to support QoS and how the next-generation IPv6 has been developed with inherent QoS features.

The issue of QoS is not something that has been noticed only recently; various research efforts have addressed it since the early 1990s. An early protocol for bandwidth reservation, known as ST-II (ST2+) [6], was proposed in 1995. The IETF, however, has been the main source of solutions to the end-to-end QoS problem, producing two QoS frameworks: Integrated Services (IntServ) [1] in 1997 and Differentiated Services (DiffServ) [2][3] in 1998. The IntServ framework provides per-flow QoS guarantees to individual flows. It presented a solution better than the traditional 'best-effort' nature of IP networks by giving service guarantees in terms of resource allocation for the entire session [8]. The IntServ architecture relies on the Resource Reservation Protocol (RSVP) [9] for signalling and for reserving QoS for each flow in the network. DiffServ, on the other hand, provides differential levels of service to different aggregate flows by marking the Type of Service (ToS) byte of the IP header with a Differentiated Services Code Point (DSCP).

Do these two technologies solve the problem of end-to-end QoS? Ilvesmaki [4] argues that with the current Internet architecture there are no guarantees of absolute QoS. It is debatable whether the Internet will ever be able to provide absolute end-to-end QoS, because it comprises many different networking technologies, some capable of offering QoS and some not. Furthermore, the diversity of access technologies such as xDSL, ISDN, broadband and traditional PSTN modems creates a situation in which consistent QoS may not be possible. Nevertheless, IntServ and DiffServ offer a potential route to end-to-end QoS and have become the most widely accepted QoS architectures. The reason is that the Internet is predominantly based on IP, and IntServ and DiffServ use IP headers for their core functionality: IPv4 and IPv6 have specialised header fields which can be manipulated to give one packet better access to resources than another.

IntServ/RSVP and DiffServ have their shortcomings too, such as the scalability problem in IntServ and the lack of end-to-end QoS guarantees in DiffServ. Recently, new standards such as aggregate RSVP [11] have been defined to overcome key shortcomings of RSVP, and there have been proposals to combine aggregate RSVP and DiffServ so that guaranteed service levels for particular services can be provided [12]. Other architectures such as DiffServ-aware Traffic Engineering (DiffServ-TE) [11] adapt the basic DiffServ standard with traffic engineering capabilities. A lot of interest has also been shown in Multiprotocol Label Switching (MPLS) networks [11], which are widely seen as key to a QoS solution at layer 2.

According to Huston [12], the tools for QoS have not changed appreciably over the past few years, so IntServ and DiffServ remain the core technologies around which research efforts can be organised, and they provide the necessary technology to explore the QoS area. My research concentrates on these two models and how they work in IPv4 and IPv6 based networks. Different techniques and methodologies that can be used to implement IntServ and DiffServ on routers are explored, and simulation software is used to study these technologies and gather results in order to understand the issues and propose possible solutions.

3. Quality of Service

Within the last decade Internet traffic has grown rapidly, partly because the Internet has become an integral part of today's business and day-to-day life. The flows on the Internet range from traditional data services to, most recently, voice and multimedia services. As a consequence, the term Quality of Service (QoS) has attracted particular interest from large businesses and Internet Service Providers (ISPs).

3.1 What is Quality of Service?

Quality of Service (QoS) refers to the capacity of a network to provide better service to selected network traffic across different network technologies, e.g. Frame Relay, ATM and IP-routed networks. The emergence of QoS-sensitive applications such as Voice over IP and multimedia applications requires some sort of network performance guarantee from the Internet.

Most research in the field of QoS has addressed QoS in terms of bandwidth and delay. Quality of Service is a broad term, but its primary goal is to give priority to a certain flow together with appropriate bandwidth, delay, jitter, packet-loss and reliability requirements. This can be achieved either by raising the priority of a certain flow or by reducing the priority of other flows. QoS means different things to different applications, depending on how stringent their requirements are: for some applications it could mean guaranteed delivery of every data packet, whereas for others minor packet loss would not be a problem but latency, jitter or bandwidth would be of high importance. In this sense, Quality of Service means that the user application receives a predefined, though not necessarily constant, amount of requested network resource within the set of parameters associated with QoS. The parameters commonly associated with QoS are listed below; a short sketch of how they can be computed follows the list:

  • Bandwidth: the maximum amount of data that can be transferred in a fixed time between two end points of the network, usually expressed in bits per second (bps) or bytes per second.
  • Delay: the elapsed time for a packet to travel from its source through the network to its destination.
  • Jitter: the variation in end-to-end delay, also referred to as delay variation.
  • Packet loss rate: the ratio of dropped packets to the total number of packets sent.
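To make these definitions concrete, the following is a minimal sketch (my own illustration, not part of the dissertation's simulations) of how the four parameters could be computed from hypothetical per-packet records of send and receive times:

```python
# Each record holds (sequence_number, send_time_s, receive_time_s or None if lost).
def qos_metrics(packets, payload_bytes=160):
    delays = [rx - tx for _, tx, rx in packets if rx is not None]
    lost = sum(1 for _, _, rx in packets if rx is None)

    # Delay: elapsed time from source to destination, averaged over delivered packets.
    avg_delay = sum(delays) / len(delays)

    # Jitter: variation in end-to-end delay between consecutive delivered packets.
    jitter = sum(abs(a - b) for a, b in zip(delays[1:], delays)) / (len(delays) - 1)

    # Packet loss rate: dropped packets divided by total packets sent.
    loss_rate = lost / len(packets)

    # Throughput (a proxy for the bandwidth actually used): delivered bytes over the interval.
    duration = max(rx for _, _, rx in packets if rx is not None) - min(tx for _, tx, _ in packets)
    throughput_bps = (len(delays) * payload_bytes * 8) / duration
    return avg_delay, jitter, loss_rate, throughput_bps

# Example trace: three delivered packets and one lost packet.
trace = [(1, 0.00, 0.030), (2, 0.02, 0.055), (3, 0.04, None), (4, 0.06, 0.095)]
print(qos_metrics(trace))
```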

3.2 Quality of Service issues in IPv4 and IPv6

The Internet Protocol (IP) has been around since the early 1980s, and its early design did not anticipate that the Internet would become so popular; it was not designed for real-time applications. Yet there are now many time-critical and real-time applications sending data over the Internet. Such applications have specific QoS requirements, and most of their data cannot tolerate packet loss, delay or jitter.

IPv4 is a connectionless protocol with no guarantees regarding the delivery of a packet, and IP has no concept of a 'flow'. For real-time applications the sequence and timeliness of packet delivery are very important, but IP networks treat each packet separately and regard packets as unrelated. IP routers, for their part, work on the principle of dynamic routing, so data packets for a single application do not necessarily follow the same path to the destination. The absence of a fixed path means that different packets suffer different delays and therefore consistent QoS guarantees cannot be given.

There is also no signalling at the IP level. Therefore, there is no way to tell a network that it is about to receive traffic with particular QoS requirements, and IP has no mechanism to warn a sender in advance to back off if there is congestion.

3.2.1 QoS support in IPv4

QoS support in Internet Protocol version 4 is very limited. IPv4 defines a Type of Service (ToS) field in its header with the idea that some traffic can be given priority over other traffic. Figure 3.1 shows the IPv4 header, and the Type of Service byte is shown in Figure 3.2.

In the ToS byte, bits 0-2 are defined for Precedence. Their defined values are:

  • 111 - Network Control
  • 110 - Internetwork Control
  • 101 - CRITIC/ECP
  • 100 - Flash Override
  • 011 - Flash
  • 010 - Immediate
  • 001 - Priority
  • 000 - Routine

Bits 3-6 are defined for type of service (a small encoding sketch follows the list). They are:

  • 0000 - All normal
  • 1000 - Minimize delay
  • 0100 - Maximize throughput
  • 0010 - Maximize reliability
  • 0001 - Minimize monetary cost
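The following minimal sketch (my own illustration) shows how the ToS byte combines the precedence bits (0-2) and the type-of-service bits (3-6) described above:

```python
# Precedence occupies bits 0-2, the ToS bits occupy bits 3-6, bit 7 is unused.
PRECEDENCE = {
    "routine": 0b000, "priority": 0b001, "immediate": 0b010, "flash": 0b011,
    "flash_override": 0b100, "critic_ecp": 0b101,
    "internetwork_control": 0b110, "network_control": 0b111,
}
TOS_BITS = {
    "normal": 0b0000, "min_delay": 0b1000, "max_throughput": 0b0100,
    "max_reliability": 0b0010, "min_cost": 0b0001,
}

def tos_byte(precedence, service):
    return (PRECEDENCE[precedence] << 5) | (TOS_BITS[service] << 1)

# A VoIP-like packet might ask for 'immediate' precedence with minimum delay.
print(format(tos_byte("immediate", "min_delay"), "08b"))   # 01010000
```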

The ToS byte is too small a field to provide a complete end-to-end QoS solution; it offers only a fixed and limited model for service differentiation. The precedence field defines only a few relative priority codes, which are not enough to express the requirements of real-time applications.

The ToS field has never been widely accepted or adopted, and most routers on the Internet do not use it for any purpose. QoS can still work with IPv4, provided some standard architecture that supports end-to-end quality of service is used with it. However, there are several different interpretations and definitions of the IPv4 QoS standards, which means that not all QoS-compliant devices are compatible with one another. Some of these QoS architectures are discussed in the sections below.

3.2.2 QoS support in IPv6

Looking at the IPv6 header in Figure 3.3, IPv6 includes standardised support for QoS by defining the Traffic Class (also known as Priority) and Flow Label fields. The Traffic Class field allows different QoS classes of traffic to be defined and serviced according to their classification. The Flow Label field (20 bits wide) allows routers to identify packets belonging to an individual QoS flow, so that they can allocate the necessary amount of bandwidth to those packets. Because the QoS instructions are included in the IPv6 packet header, packets can be processed efficiently, reducing queuing delays at the routers. It also means that the packet body can be encrypted while QoS is still provided, because the header portion containing the QoS instructions is not encrypted. This makes it possible to send streaming audio and video over the Internet with encryption techniques such as IPsec.
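As a small illustration of the field layout just described (an assumption-level sketch using raw bit packing, not tied to any particular library), the first 32 bits of an IPv6 header carry the Version (4 bits), Traffic Class (8 bits) and Flow Label (20 bits):

```python
def ipv6_first_word(traffic_class, flow_label):
    # Version (always 6) | Traffic Class | Flow Label, packed into one 32-bit word.
    assert 0 <= traffic_class < 256 and 0 <= flow_label < 2**20
    version = 6
    return (version << 28) | (traffic_class << 20) | flow_label

word = ipv6_first_word(traffic_class=0b10111000, flow_label=0x12345)
print(format(word, "032b"))
```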

IPv6 does give standardised support for QoS, but there are pending problems with flow labels: it is not clear whether they will succeed, because of the lack of a widely adopted architecture for their use. This lack of consensus about the QoS architecture prevents adequate support from IP based protocols.

3.3 Quality of Service Solutions

Given the architecture of the Internet backbone protocols, the question arises: how can the problem of QoS in IP networks be tackled, and what are the solutions? There are three possible approaches:

3.3.1 Over Provisioning

One solution is to provision surplus bandwidth for a flow beforehand, sufficient to handle the peak data rates. However, dedicating bandwidth to one flow is not economical: the spare capacity is wasted whenever it is not used, when it could otherwise have accommodated other flows.

3.3.2 Explicit Reservation

A better solution is explicit reservation, in which an explicit path is established through the network before actual data is passed. A request is first sent through the network to reserve bandwidth for the flow. The answer to the request is a simple 'yes' or 'no': either the capacity is available or it is not. This makes the network behave more like a conventional circuit-switched telecommunication network, where a path is established before the actual call is set up. In this case, network resources are reserved according to the requirements in the application's QoS request.

3.3.3 Prioritisation

This solution makes use of traffic classification. Different classes of network traffic are 'marked' with different priorities, and higher priority is given to traffic marked with a high QoS demand. These classifications are made possible by a bandwidth management policy, also known as a Service Level Agreement (SLA).

3.4 Quality of Service Models

In the beginning, Quality of Service techniques were largely implemented as features on routers and other networking equipment, and they worked well enough in a bounded network. As the need for QoS increased, various research efforts were made to define a comprehensive model that could encompass a wide area of the network. QoS models can mainly be characterised by how they manage particular flows through the network, as follows:

3.4.1 Fine-grained QoS Architecture

Each flow has a guaranteed reservation of resources before it is actually passed through the network. The resources are reserved through a separate signalling exchange carried by a dedicated signalling protocol.

3.4.2 Coarse-grained QoS Architecture

Different network traffic flows are grouped into classes, and guarantees are provided to the flow aggregates rather than to each individual flow. Because resources are shared by the different flows within a class, no individual flow can be guaranteed an absolute reservation of resources.

The IETF took charge of defining a standard model for QoS and came up with two contrasting models: Integrated Services or IntServ [1], first defined in 1997, and Differentiated Services or DiffServ [2][3], first defined in 1998.

IntServ is an example of a fine-grained QoS architecture. It provides an infrastructure to handle both conventional and QoS-sensitive Internet traffic, using the Resource ReSerVation Protocol (RSVP) to reserve bandwidth before the actual flow is sent.

DiffServ, on the other hand, is an example of a coarse-grained architecture, which uses prioritisation of flow aggregates.

In the following sections we will look at these two models in detail.

3.5 Integrated Services (IntServ)

The Integrated Services (IntServ) architecture, designed and developed by the IETF, is based on the fine-grained QoS approach: IntServ explicitly reserves a path for QoS-sensitive flows from source to destination. A flow can be defined as a communication between two applications at different locations in the network. IntServ works more like conventional circuit switching and can be characterised as connection-oriented because of its path reservation feature. The model is therefore concerned with the timeliness of packet delivery, and to achieve this the routers involved in a connection have to maintain state for each individual flow.

In the IntServ framework the Resource ReSerVation Protocol (RSVP) is the signalling protocol. RSVP negotiates the end-to-end resource reservation for a particular flow before it is actually transmitted. Thus IntServ provides end-to-end QoS by using end-to-end signalling, state maintenance and admission control at each network element.

3.5.1 Classification of types of Applications

The IntServ architecture is based on the assumption that only some network flows require quality of service guarantees, while the others are adequately served by the normal best-effort service. Before explaining the architecture of IntServ it is important to understand how applications can be classified according to their service requirements:

  • Elastic: Elastic applications are not concerned with the timeliness of packet arrival; whenever packets arrive they are processed immediately. E-mail and FTP are examples of elastic applications.
  • Real-time: In real-time applications, packets must arrive at the destination within some bounded time after the arrival of the previous packet, otherwise they are useless, e.g. Voice over IP (VoIP) and multimedia applications.

Real-time applications are further classified into tolerant and intolerant real-time applications: minor jitter is acceptable in tolerant real-time applications, while intolerant real-time applications cannot afford even minor jitter.

3.5.2 Types of Services

Based on these types of applications, IntServ defines two types of service:

  1. Guaranteed Service: In this service the delay bound is strict and zero queuing packet loss is guaranteed. It is intended for intolerant real-time applications that require a strict bound on end-to-end latency, e.g. Voice over IP and video conferencing applications.
  2. Controlled Load Service: The aim of this type of service is to give traffic flows in a heavily loaded network roughly the same service they would receive in a lightly loaded network; in other words, it guarantees at least 'best-effort' behaviour even under heavy load. It is intended for elastic and tolerant real-time applications that can tolerate slight jitter and latency, e.g. e-mail and streaming audio/video applications.

IntServ assigns one of these services to each flow, although in principle IntServ can carry more than these two classes of service.

Figure 3.4 shows an IntServ enabled network containing a number of routers, all of which implement IntServ-specific tasks. A particular flow can take any path through this network; in the figure, the arrows show the end-to-end path from sender to receiver. Sender and receiver use the RSVP signalling mechanism to request either Guaranteed Service or Controlled Load Service from the network. The routers and nodes involved must keep knowledge of each individual flow in order to provide the requested resources for the entire session. State is maintained by the routers for each individual flow, and there can be many flows at the same time; as more individual flows are requested, the state that routers must maintain grows linearly and network performance can degrade accordingly. IntServ works on the assumption that only a small number of flows require QoS, while most flows use the traditional best-effort service.

To request Controlled Load Service, an application provides estimated traffic parameters such as the maximum burst size and the required transmission rate. By providing these parameters, the application's flow will suffer little or no delay as long as its bursts are not too large. Controlled Load Service allows some flexibility through infrequent degradation of service, but the transit delay experienced by a high percentage of packets will not greatly exceed the minimum end-to-end delay of a successfully delivered packet, which in turn stays within the bounds experienced by the best-effort service [15].

Guaranteed Service is designed for intolerant real-time applications and provides firm guarantees on bandwidth and delay. If the flow remains within its specified traffic parameters, this type of service guarantees that packets arrive at their destination within the guaranteed delivery time and are not dropped because of router queue overflows. To request Guaranteed Service, the application provides a traffic specification and the desired service specification to the nodes involved in the actual flow; these requests are delivered to the nodes through RSVP messages.

To request these services, the IntServ framework uses the RSVP signalling protocol. At times it may not be possible for IntServ to provide the required service to a flow because the requested resources are not available at that moment; the answer to a QoS request is therefore simply 'yes' or 'no'.

To support these two types of service, IntServ requires all the network elements along the path of the flow to implement its mechanisms. Because of the per-flow nature of the IntServ framework, every router that takes part in forwarding the packets must hold information about that particular flow. This information is essential for providing the flow with network parameters such as available bandwidth, delay and packet loss.

3.5.3 Traffic Control Components

The IntServ framework consists of the following components:

  • Packet Scheduler: The packet scheduler is in charge of how packets are forwarded; it forwards the different streams of packets based on how they are queued.
  • Classifier: Incoming packets are mapped into different classes for the purpose of traffic control. All packets that are mapped to a class receive the same treatment.
  • Admission Control: Admission control ensures that an incoming flow can actually be granted the service it is asking for. The decision is based on the new flow's service requirements and the current state of the network, and it is made at every node along the path of the flow, as sketched below.
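As a rough illustration of the admission control decision just described (my own sketch; the link capacities and flow rates are made-up numbers), a node admits a new flow only if the requested bandwidth still fits within the unreserved capacity of the outgoing link:

```python
class AdmissionControl:
    def __init__(self, link_capacity_bps):
        self.capacity = link_capacity_bps
        self.reserved = 0.0

    def request(self, flow_id, requested_bps):
        # Admit only if the reservation would not oversubscribe the link.
        if self.reserved + requested_bps <= self.capacity:
            self.reserved += requested_bps
            return True   # flow admitted, resources reserved
        return False      # flow rejected, left to best-effort service

node = AdmissionControl(link_capacity_bps=1_544_000)   # e.g. a T1 link
print(node.request("voice-1", 64_000))      # True
print(node.request("video-1", 2_000_000))   # False: would exceed capacity
```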

3.5.4 RSVP Signalling

The Resource ReSerVation Protocol, or RSVP, is the signalling protocol for IntServ. When used with IntServ, RSVP provides a mechanism that enables an application to request a particular quality of service for a session by creating and maintaining dynamic state on the network elements along the path of the flow. RSVP is therefore at the very core of the IntServ architecture.

RSVP is designed as a 'soft-state' protocol, meaning that it requires periodic updates from the senders and receivers. In the absence of these updates, the reservation state at a network element automatically times out and the reserved resources are released.

RSVP provides a communication medium for senders, receivers and intermediate routers so that the necessary router state is set up to support a required IntServ service class. RSVP identifies a particular session by the combination of IP address, protocol type and port number taken from the packet headers. The messages used by RSVP to accomplish this setup are PATH and RESV.

PATH: The PATH message originates from the sender of the traffic. Its primary role is to install reverse-route state in the routers and to inform the receivers about the sender's traffic.

RESV: The RESV message originates from the traffic receiver. Its primary role is to deliver the resource reservation request to all the routers between receiver and sender.

When an application wants to send traffic with QoS requirements, the sender sends an RSVP PATH message towards the receiver through the network. Each router along the path processes the PATH message and checks whether its resources can support the requested QoS parameters before passing it on to the next node. Each router also stores a 'soft state' for the session, and periodic updates are sent to all routers involved to maintain it. Once the PATH message reaches the receiver, the receiver checks the requested parameters and determines whether it can support such a session. If the receiver decides to accept the traffic, it sends a RESV message back to the sender along the same path the PATH message travelled (although the path taken by the actual traffic can change later on). When the RESV message arrives at the routers, they again check whether the resources can be allocated to the session. If resources are available, the session is established; otherwise a tear-down message is generated to clear the reservations.

Once a session is established, all the routers involved must maintain it by exchanging periodic PATH and RESV refresh messages; the IETF working group recommends that these updates be sent every 30 seconds. If routers do not receive an update, the 'soft state' times out and the session is torn down. This is a good mechanism for tearing down unresponsive sessions so that the reserved bandwidth can be freed, as sketched below.
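The following schematic sketch (not OPNET code; the state lifetime as a multiple of the refresh interval is my own assumption) illustrates this soft-state behaviour: a reservation survives only while refreshes keep arriving, otherwise it expires and its bandwidth is released.

```python
REFRESH_INTERVAL = 30                    # seconds, as recommended by the IETF working group
STATE_LIFETIME = 3 * REFRESH_INTERVAL    # assumed multiple after which unrefreshed state expires

class SoftStateTable:
    def __init__(self):
        self.sessions = {}   # session id -> (reserved_bps, last_refresh_time)

    def refresh(self, session_id, reserved_bps, now):
        # A PATH or RESV refresh re-arms the timer for this session.
        self.sessions[session_id] = (reserved_bps, now)

    def expire(self, now):
        # Sessions not refreshed within the lifetime are torn down and resources released.
        for sid, (_, last) in list(self.sessions.items()):
            if now - last > STATE_LIFETIME:
                del self.sessions[sid]

table = SoftStateTable()
table.refresh("10.0.0.1:udp:5004", 64_000, now=0)
table.expire(now=120)    # no refresh for 120 s > lifetime, so the state is removed
print(table.sessions)    # {}
```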

3.5.5 Disadvantages of IntServ

The IntServ model is not an easy model to deploy over the current Internet architecture. Theoretically it is possible to provide QoS for every flow in the network, provided it is requested through RSVP and the resources are available; in practice, however, it requires significant changes to and investment in the current network. The following limitations have prevented IntServ from becoming the favoured QoS solution in the Internet community:

  • Every networking node involved in the communication, including sender, receiver and intermediate routers, must be RSVP compatible and capable of signalling the required QoS.
  • Since state has to be maintained for each flow at each intermediate router, scalability becomes a big issue when a large number of flows run through the network.
  • The IntServ mechanism is processor-intensive for routers because of per-flow processing and state maintenance.
  • Reservations at each router are 'soft', so periodic refresh messages have to be sent across the network, which places extra load on it.

3.5.6 Solutions

The basic shortcomings of the RSVP protocol have been addressed by the RSVP refresh overhead reduction and reliable messaging extensions [13][14]. These extensions address scalability by defining a Bundle message, which reduces the raw RSVP message rate through message aggregation, and they reduce state-change propagation time by defining a short-hand message identifier that lets a receiver recognise unchanged messages easily, reducing processing. Other solutions include RSVP scalability enhancements and proxy RSVP, which significantly reduce overhead and enhance the capabilities of RSVP.

3.6 Differentiated Services (DiffServ)

Differentiated Services is the coarse-grained solution. The DiffServ model does not explicitly reserve resources for individual flows; instead, individual flows are grouped together and these groups are sent across the network with appropriate service guarantees.

Unlike IntServ, the DiffServ framework works on a provisioned QoS model: it does not rely on a signalling protocol to provide quality of service, but instead defines an architectural framework as a complete QoS solution. The network routers and other elements are set up to service different classes of traffic, each with different QoS requirements. DiffServ simplifies the problem by classifying traffic flows into different classes, also known as Classes of Service (CoS), and applying QoS parameters to those classes. Packets are marked with a special tag by manipulating the values of the Type of Service (ToS) byte in the IPv4 header (Fig. 3.1) or the Traffic Class byte in IPv6 (Fig. 3.3), so that differential levels of service can be given to different aggregate flows at the entry points to the network. This byte in the IPv4 and IPv6 headers is generally known as the DS field, as shown in Figure 3.5. The value is a 6-bit pattern called the Differentiated Services Code Point (DSCP); the DSCP occupies the first six bits while the last two bits are unused and set to zero.

The IPv4 Type of Service byte is also used for the IP precedence mechanism defined by the IETF, in which packets are marked with an appropriate precedence value. Every network node along the path of the packet knows the meaning of the value set in the IP header's ToS field and provides the appropriate service level. The IPv4 ToS byte can be seen in Figure 3.2. The 3 bits used to specify IP precedence can define eight different categories, and packets with a lower precedence level have a higher probability of being dropped if there is congestion. Each packet is also marked with levels of delay, throughput and reliability specified in the D, T and R bits. This scheme has not become popular with network operators because of its limitations. DiffServ uses the same sort of concept, but in this case the IPv4 ToS byte or IPv6 Traffic Class byte is known as the DiffServ (DS) field.

The workload on core routers in a DiffServ enabled network is reduced by aggregating flows at the edge routers. When traffic reaches the edge routers it is categorised into different classes. A specific forwarding treatment is then given to each packet at each node of the network, providing each packet with appropriate QoS in terms of delay, jitter, bandwidth and so on. This behaviour is known as Per-Hop Behaviour (PHB), which is explained later in the chapter. The combination of packet marking and PHBs results in a scalable quality of service solution for QoS-sensitive applications. In a real networking environment, service providers provide a PHB on the basis of an agreement called a Service Level Agreement (SLA), which ensures that data from a particular client is served with the QoS requirements expressed in the SLA.

3.6.1 Differentiated Services Model

A typical DiffServ network is shown in Figure 3.6. Each node involved in a DiffServ enabled network is called a Differentiated Services (DS) node, and a Differentiated Services domain is a set of contiguous DS nodes that operate with a common service provisioning policy and set of PHB groups implemented on each node [3]. A DS domain consists of interior routers and is bounded by boundary routers known as ingress and egress routers. Traffic enters a DS domain through an ingress router and leaves through an egress router, so a particular boundary router can act as both ingress and egress router at the same time. The DiffServ architecture requires both interior and boundary routers to be able to provide the appropriate PHB to each packet. Figure 3.6 shows the layout of a sample DiffServ network, with several traffic flows entering the DS domain via the ingress router and leaving via the egress router. The role of the ingress router is to ensure that traffic entering the DS domain conforms to the Traffic Conditioning Agreement (TCA), which is agreed as part of the SLA. Each flow is therefore marked with a different DSCP value and so served with a different PHB. The role of the egress router is to perform traffic conditioning according to the TCA between the peering domain and its own domain. A collection of packets that carry the same DSCP and are forwarded in the same direction is called a Behaviour Aggregate (BA).

The edge routers always expect classified, marked or conditioned traffic. The SLA between two DS domains defines which domain is responsible for marking the traffic to comply with the agreed traffic conditioning agreement (TCA), which is part of the SLA. This ensures that each traffic aggregate receives the service promised to it by the SLA, and also that the BAs passing through the core routers do not suffer.

3.6.2 Traffic Classification and Conditioning

DiffServ uses the following mechanisms to classify and condition packets in the network:

3.6.2.1 Packet Marking

Routers mark packets on entry into the DS domain by manipulating the ToS byte in the packet header. Six bits are used to classify the packet and are known as the Differentiated Services Code Point (DSCP); with the DSCP, up to 64 different classes can be defined and supported.

3.6.2.2 Classifier

As defined in RFC 2474 [2], packet classifiers are used to steer traffic within a DS domain. The classifier function on a particular router reads the classifier keys in incoming packets and assigns the packets to outgoing flows. Two types of classifier are defined: Behaviour Aggregate (BA) classifiers, used at core routers, and Multi-Field (MF) classifiers, used at edge routers. The classifier keys differ depending on where the router sits in the DS domain: core routers classify packets on the basis of the DSCP only, while edge (ingress) routers use a classifier key consisting of a combination of fields such as source address, destination address, protocol ID, and source and destination ports.

3.6.2.3 Conditioner

The conditioner function at the routers ensures that classified traffic conforms to the traffic conditioning agreement (TCA). The conditioner contains the following elements to enforce the TCA: meters, markers, shapers and droppers (a simple sketch of this chain follows the list below).

  • Meters: The meter compares the actual traffic against the profile stored as part of the SLA. Traffic that conforms to this profile is called in-profile traffic, while the rest is known as out-of-profile traffic. The meter passes state information to the other conditioning functions, which take the appropriate action for each packet depending on whether it is in-profile or out-of-profile.
  • Markers: In-profile traffic can enter the DS domain if it already carries an appropriate DSCP; otherwise the marker sets the appropriate DSCP so that the right PHB is selected.
  • Shapers: Out-of-profile traffic is sent to the shaper, which delays it until it conforms to the profile; traffic that cannot be brought into profile is passed to the droppers.
  • Droppers: The dropper discards out-of-profile packets so that the stream is made compliant with the SLA profile.
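The following is a simplified sketch of the meter/marker/dropper chain listed above (my own illustration; the SLA profile values are invented, and a token bucket is assumed as the metering profile, with the shaping step reduced to a comment):

```python
class TokenBucketConditioner:
    def __init__(self, rate_bps, burst_bytes, in_profile_dscp):
        self.rate = rate_bps / 8.0       # token refill rate in bytes per second
        self.burst = burst_bytes         # bucket depth
        self.tokens = burst_bytes
        self.last = 0.0
        self.dscp = in_profile_dscp

    def condition(self, packet_bytes, arrival_time):
        # Meter: refill the bucket, then check whether the packet is in-profile.
        self.tokens = min(self.burst, self.tokens + (arrival_time - self.last) * self.rate)
        self.last = arrival_time
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return ("forward", self.dscp)   # Marker: in-profile packets keep the agreed DSCP
        return ("drop", None)               # Dropper: out-of-profile packet discarded
                                            # (a shaper would instead delay it until tokens exist)

cond = TokenBucketConditioner(rate_bps=128_000, burst_bytes=3_000, in_profile_dscp=0b101110)
print(cond.condition(1_500, arrival_time=0.00))   # in-profile
print(cond.condition(1_500, arrival_time=0.01))   # still in-profile
print(cond.condition(1_500, arrival_time=0.02))   # dropped: bucket exhausted
```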

3.6.3 Per Hop Behaviour (PHB)

DiffServ reduces the complexity at the core routers by aggregating flows into BAs, classifying packets under one of the DSCPs; a BA can contain packets from more than one application. PHB refers to the packet forwarding behaviour of a router for any given packet belonging to a particular BA. Routers allocate resources to a particular stream of packets on the basis of PHBs, which are specified in terms of delay, packet loss, bandwidth, buffer size and so on. A PHB is selected at a router by mapping the DSCP of the packet. The IETF DiffServ group has recommended that PHBs be defined as groups to provide consistency, and standard PHBs with recommended DSCP values have been defined. The total DSCP space is larger than the space reserved for recommended code points, so locally configurable mappings can also be accommodated. The standard PHBs are:

3.6.3.1 Default PHB

Any packet with a DSCP value of '000000' receives the traditional best-effort service from a DS router. If a packet's DSCP value is not mapped to any other PHB, it is automatically mapped to the default PHB; this is also the case if there are no locally configured policies.

3.6.3.2 Class selector PHB

To provide interoperability with non-DS-compliant nodes, the Class Selector PHBs are defined, using DSCP values of the form 'xxx000' (where x can be 0 or 1). This makes a DS-compliant node compatible with the IP precedence scheme: the Class Selector PHBs show the same sort of behaviour as IP precedence based classification and forwarding on a non-DS-compliant node.

3.6.3.3 Expedited Forwarding PHB

The DSCP value '101110' is defined for the Expedited Forwarding (EF) PHB. The EF PHB is DiffServ's basic building block for a low-loss, low-delay, low-jitter service, defined specifically for applications that need guaranteed bandwidth, such as voice. To minimise delay, EF packets should encounter very small or, in some cases, no queues. For this, routers implement special queuing mechanisms, namely:

  • Priority Queuing (PQ): In this queuing mechanism the EF flows simply get higher priority than other flows. This ensures that during congestion the higher priority data is not delayed by lower priority data; consequently, lower priority traffic can suffer significant delays (a minimal sketch of this behaviour follows the list).
  • Weighted Fair Queuing (WFQ): In WFQ, appropriate weights are assigned to EF and non-EF flows so that non-EF flows receive a fair service while EF flows are still given priority. This is done by allocating to each traffic class a percentage of the output bandwidth equal to its relative weight during congestion.
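As a rough illustration of the priority-queuing idea (my own sketch; the two-class EF/best-effort split mirrors the discussion above but is otherwise an assumption), the lower-priority queue is served only when the higher-priority queue is empty:

```python
from collections import deque

class PriorityScheduler:
    def __init__(self):
        self.queues = {"EF": deque(), "best_effort": deque()}

    def enqueue(self, klass, packet):
        self.queues[klass].append(packet)

    def dequeue(self):
        # EF packets always go first; best-effort waits until the EF queue drains.
        for klass in ("EF", "best_effort"):
            if self.queues[klass]:
                return klass, self.queues[klass].popleft()
        return None

sched = PriorityScheduler()
sched.enqueue("best_effort", "p1")
sched.enqueue("EF", "p2")
print(sched.dequeue())   # ('EF', 'p2') even though p1 arrived first
```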

3.6.3.4 Assured Forwarding PHB

The Assured Forwarding (AF) PHB defines methods by which BAs can be given different forwarding behaviours. It defines four classes for classifying traffic, namely AF1, AF2, AF3 and AF4, and each class is assigned a certain amount of bandwidth and resources according to the SLA. When there is congestion, AF drops packets on the basis of a drop precedence, which is defined within each class; Table 3.1 shows the values for each class and drop precedence. Flows that need AF are marked at the ingress router with a DSCP whose class bits correspond to one of the four AF classes (001 for AF1, 010 for AF2, 011 for AF3 and 100 for AF4). The traffic is then metered according to the traffic conditioning agreement (TCA), and the markers set the drop precedence of each packet in the remaining bits (010 for DP1, 100 for DP2 or 110 for DP3). In-profile packets are marked with a low drop precedence and out-of-profile packets with a higher one. This mechanism is useful when flows within a BA exceed their assigned bandwidth: the packets of such a flow are marked with a higher drop precedence and, in case of congestion, packets with higher drop precedence are dropped first. Under a light traffic load, however, even traffic with the highest drop precedence (DP3) will pass through successfully.
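The following small sketch (my own, following the class and drop-precedence bit layout described above) derives the 6-bit DSCP for an Assured Forwarding class AFxy:

```python
def af_dscp(af_class, drop_precedence):
    assert 1 <= af_class <= 4 and 1 <= drop_precedence <= 3
    # Bits 0-2 carry the class (001..100), bits 3-4 the drop precedence, bit 5 is 0.
    return (af_class << 3) | (drop_precedence << 1)

for c in range(1, 5):
    print([format(af_dscp(c, dp), "06b") for dp in (1, 2, 3)])
# AF1x: 001010 001100 001110, AF2x: 010010 010100 010110, etc.,
# matching the standard AF code points.
```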

Routers implement the AF PHB using a special queuing mechanism called Random Early Detection (RED). RED applies a higher queue-length threshold to in-profile packets and a lower one to out-of-profile packets, so that as the queue grows, out-of-profile traffic has a higher probability of being dropped.

3.6.4 Disadvantages of DiffServ

Although DiffServ solves the shortcomings of IntServ in terms of scalability and provides coarse-grained QoS for a network, it has some drawbacks of its own. Some of the problems associated with DiffServ are:

  • DiffServ cannot guarantee absolute end-to-end QoS because no signalling is involved. It has no prior knowledge of whether a specific flow will reach its intended destination with the promised QoS parameters, so if there is heavy congestion at some point there is no way for applications to adjust to the network conditions.
  • DiffServ needs to be provisioned through SLAs. To set up the different classes and aggregate traffic through the network, routers must have knowledge of the applications and statistical aggregates of the traffic flows. DiffServ lacks dynamic admission control, so it cannot deliver optimal performance if traffic conditions change.
  • DiffServ divides flows into Behaviour Aggregates, which can contain flows from many different applications and sources. If one of these flows does not conform to the SLA, the other flows within that BA will also suffer.

3.6.5 Solutions

The advantage of IntServ is that it provides end-to-end QoS guarantees, which DiffServ lacks; conversely, DiffServ aggregates flows into traffic classes while IntServ reserves QoS resources on a per-flow basis. To address the individual disadvantages of the two frameworks, research efforts have been made to combine them, with IntServ implemented at the edge routers and DiffServ used in the core. This approach gives the network the best of both worlds by mapping the IntServ-specific parameters at the edge routers onto the DiffServ PHBs of the core routers.

Two such models, commonly known as hybrid models, have been defined: the Microsoft model [17] and the tunnelled aggregate model [18]. They differ from the plain DiffServ model only in the way the DiffServ core network deals with reservation requests.

4. QoS Simulation

One of the aims of this dissertation is to set up an artificial environment of IP based networks and test the IntServ and DiffServ QoS architectures in it. Simulation is a very good way to understand and study these two QoS frameworks; for this purpose I have used the OPNET IT Guru Academic Edition 9.1 simulation software [19].

4.1 IntServ Network Simulation

As discussed earlier, the IntServ framework defines two types of service, Controlled Load Service and Guaranteed Service. OPNET has an integrated model of the RSVP protocol which supports Controlled Load Service, so this type of service is used for the IntServ simulation. To illustrate how RSVP works within the IntServ framework, this simulation reserves the bandwidth and buffer size at the beginning of the simulation, and the entire simulation then uses these reserved resources.

The simulation uses the same traffic type (voice) with and without IntServ support. In this scenario there are two routers; the link between them is the bottleneck, and the applications' data is transferred to its respective destinations through this very link (defined as a PPP DS0 link).

4.1.1 Scenario: Voice Application

The figure below shows the network topology for this scenario, in which the performance of voice applications is gauged with and without RSVP reservations. IntServ uses RSVP for its explicit reservations, and here we see how this improves the end-to-end delay. The following network topology is used for the simulation:

As can be seen from the following graph, the end-to-end delay for voice traffic under the traditional best-effort service is much greater than for the traffic sent via the IntServ/RSVP mechanism, in which explicit bandwidth (5,000 bytes/sec) and buffer space (5,000 bytes) were reserved to provide a better service level.

[Figure: End-to-end delay (sec) for voice traffic, with and without RSVP]

[Figure: CDF of end-to-end delay]

The graph above shows the cumulative distribution function (CDF) of the end-to-end delay in both cases. It shows that most of the packets whose service was reserved via RSVP suffered a delay of less than one second, while most of the remaining traffic suffered delays of more than one second.

This simulation shows that by using the IntServ/RSVP framework, network traffic can be given the agreed service guarantees: in both cases the traffic type is the same, yet there is a large difference in the average end-to-end delays.

4.2 DiffServ Network Simulation

The network topology used for the simulation of DiffServ networks consists of two routers. There are four clients in this network carrying the same kind of traffic (video) but with different Type of Service (ToS) and/or DSCP values, and each client has a corresponding server at the other end of the network. Traffic from all the sources is mixed as it is transferred between the two routers, Router A and Router B; this creates a bottleneck between them and causes variable queuing delays for each packet, which are monitored for the purposes of this simulation. To study the effects of the DiffServ mechanism, the Expedited Forwarding PHB is studied with two queuing mechanisms, Priority Queuing and Weighted Fair Queuing.

4.2.1 Expedited Forwarding PHB with Priority Queuing

To show the effects of the EF PHB with Priority Queuing (PQ), there are two scenarios. Scenario 1 employs PQ on the routers on the basis of the ToS field values in the packets' IP headers. Scenario 2 employs PQ on the basis of the DiffServ Code Point (DSCP) values in the IP header's DS field. The network topology is the same in both scenarios, except that 'client ToS 1' is replaced by 'client_EF_PHB' along with its corresponding server at the other end. Scenario 2 is then compared with Scenario 1, which employs PQ on the basis of the ToS field of the IP packets instead of DSCP codes.

4.2.1.1 Scenario 1 and Scenario 2 - End-to-end delay

[Figure: End-to-end delay (sec), Scenario 1 and Scenario 2]

The graph above compares the end-to-end delay of the traffic in the two scenarios. In Scenario 1, the blue curve shows the end-to-end delay of the traffic sent by 'client ToS 1' with a ToS value of 1, i.e. the lowest priority; consequently it suffers the most, with an average end-to-end delay of about 0.6 seconds. The traffic with a ToS value of 4, the highest among the clients, gets the least delay (depicted by the light blue curve).

In Scenario 2, the Expedited Forwarding (EF) PHB is employed by the routers. The other clients use the same ToS values as before, while the one with a ToS value of 1 is replaced by a client marking its traffic with the EF DSCP code in the DS field. The blue curve shows that this traffic is given the highest priority, with an end-to-end delay of less than 0.025 seconds compared with 0.6 seconds in the previous scenario, while the rest are given priorities according to the ToS values in their IP headers.

In Scenario 2 the traffic from 'client_EF_PHB' is classified as EF PHB according to its DSCP value. The traffic of the other three clients is automatically classified into the Class Selector PHB because their precedence is expressed with ToS values rather than DSCP codes; each of these three clients sends traffic with its specific ToS value (2, 3 or 4). In this case the routers allocate a separate queue to the traffic from 'client_EF_PHB' as it is classified with the EF DSCP code, while the rest of the clients only get the chance to send their traffic when the higher priority queue is empty.

4.2.1.2 Traffic Dropped

Traffic drop in Scenario 2 is higher because priority is given to the 'client_EF_PHB' traffic; the other traffic gets its chance only when the higher priority queue is empty. As a consequence, the other clients' traffic is dropped when there is congestion.

[Figure: Traffic dropped (packets/sec)]

4.2.1.3 Priority Queue Delays at the routers in Scenario 1

PQ provides differential service by keeping a different queue for each type of traffic, and the queue with the higher priority (higher ToS value) is served first. The service given to lower priority traffic therefore depends heavily on the service given to higher priority traffic: if there is a constant inflow of traffic into the higher priority queue, the lower priority traffic suffers greatly, as depicted in the graph below.

[Figure: Priority queuing delay (sec), Scenario 1]

The graph shows that in Scenario 1 four different queues are maintained at the routers, one for each type of traffic. Each source marks its packets with the appropriate ToS value (1 to 4 in this case), and the routers, Router1 and Router2, manage a separate queue for each. The blue curve shows the queuing delay for traffic with a ToS value of 1, which suffers the most compared with the higher priority queues. One thing to note is that low priority should be assigned to traffic that can adapt to packet loss, e.g. FTP traffic, which runs over TCP and can adjust to loss by slowing its sending rate.

4.2.1.4 Priority Queue Delays at routers in Scenario 2

In this case only two queues are maintained at both routers: one for DSCP based EF PHB traffic and the other for the rest of the traffic (automatically classified into the Class Selector PHB). Here we see that the queuing delay for the EF PHB traffic is almost negligible, while the rest of the traffic has to wait until that queue is empty, which brings its average delay to about 0.13 seconds.

[Figure: Priority queuing delay (sec), Scenario 2]

4.2.2 Expedited Forwarding PHB with Weighted Fair Queuing

As we have seen, the Priority Queuing mechanism has a fairness problem. This can be avoided by employing a fair scheduling scheme in the shape of Weighted Fair Queuing (WFQ). In WFQ, a weight is associated with each queue, and the number of consecutive time slots given to a queue depends on its weight. This mechanism ensures that the low priority queues do not starve and that each queue receives bandwidth appropriate to the weight associated with it, as sketched below.
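To make the weighting idea concrete, here is a simplified sketch (my own; real WFQ computes per-packet finish times, whereas this approximates it with weighted round robin, and the class names and weights are illustrative):

```python
from collections import deque

class WeightedScheduler:
    def __init__(self, weights):
        self.weights = weights                      # e.g. {"EF": 3, "default": 1}
        self.queues = {k: deque() for k in weights}

    def enqueue(self, klass, packet):
        self.queues[klass].append(packet)

    def service_round(self):
        # Each round, a queue may send up to 'weight' packets, so no class starves
        # entirely, but heavier-weighted classes get proportionally more bandwidth.
        sent = []
        for klass, weight in self.weights.items():
            for _ in range(weight):
                if not self.queues[klass]:
                    break
                sent.append((klass, self.queues[klass].popleft()))
        return sent

sched = WeightedScheduler({"EF": 3, "default": 1})
for i in range(4):
    sched.enqueue("EF", f"ef{i}")
    sched.enqueue("default", f"be{i}")
print(sched.service_round())   # three EF packets, then one best-effort packet
```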

The simulation model is the same as in the previous Priority Queuing case, except that WFQ scheduling is used at the routers.

In both scenarios the network is composed of four pairs of video traffic clients. Traffic is queued at the routers and differentiated using the WFQ mechanism; in Scenario 1 this differentiation is done on the basis of the Type of Service (ToS), while in Scenario 2 it is done by the DiffServ code point (DSCP).

4.2.2.1 Scenario 1 and Scenario 2 - End-to-end delay

The routers maintain a queue for each type of service. In Scenario 1, queue 4 receives the 'client TOS 4' traffic, queue 3 receives the 'client TOS 3' traffic, and so on. With WFQ employed, the queues send traffic according to the weights defined for them (see Table 4.1). As a result of this classification, the traffic with higher ToS values gets the lowest end-to-end delay (queues 4, 3 and 2), but queue 1 is left starving for bandwidth, as depicted by the blue curve in the left-hand graph.

[Figure: End-to-end delay (sec), Scenario 1 (left) and Scenario 2 (right)]

In Scenario 2, queue 4 receives the 'client TOS 4' traffic, queue 3 receives the 'client TOS 3' traffic and so on, while queue 1 (depicted by the blue curve in the graph on the right) receives the traffic from the DSCP based client. The queues send traffic according to the weights defined for them in Table 4.2. As a result of this classification, the traffic from 'client_EF_PHB' gets the highest priority and enjoys most of the bandwidth. Note that the rest of the traffic suffers almost the same end-to-end delay, because it is buffered in the same queue. This simulation shows an improvement in the overall fairness of the bandwidth allocation.

4.2.2.2 Queuing delay & Packet dropped at router Scenario 1

In Scenario 1, queue 1 suffers the most as it has the smallest weight attached to it, while the rest of the traffic enjoys a good share of the bandwidth. Similarly, the packet drop rate is highest for queue 1, then queue 2, and so on. Relative fairness is improved compared with Scenario 1 of the Priority Queuing mechanism.

4.2.2.3 Queuing Delay at routers & Packets dropped at router Scenario 2

The graph above shows that two queues are maintained by the router: one for DSCP based EF PHB traffic and the other for the rest of the traffic (automatically classified into the Class Selector PHB). With WFQ employed, the queuing delay for the EF PHB traffic is almost negligible, as it has a weight of 55 attached to it, as shown in Table 4.2; the rest of the traffic has to wait until this queue is empty.

This simulation shows that the DiffServ model can create classes of network traffic and that different classes can be given service guarantees according to their respective requirements. The PQ and WFQ mechanisms employed at the routers determine how the different classes of traffic are queued and processed.

5. Conclusion

An IPv4 based network needs many modifications to support the QoS requirements of time-critical and real-time applications. An IPv6 network does support standard QoS features, but the lack of consensus about a QoS architecture leaves it with some uncertainty. Two QoS architectures, IntServ and DiffServ, provide the much needed QoS support for these connectionless protocols.

This work has demonstrated through simulation the ability of IntServ and DiffServ to provide QoS to selected network traffic. The simulation results show that the IntServ/RSVP framework provides service enhancement by reserving bandwidth and other resources, but in doing so it requires per-flow reservations at all nodes along the network path, which makes its implementation complex and raises doubts about its scalability.

The simulation of DiffServ enabled networks has shown how traffic can be differentiated. Most routers on the Internet use a FIFO scheduling scheme at their interfaces; it has been shown that the use of the PQ and WFQ scheduling schemes improves performance for specific network traffic. PQ can provide very good service to higher priority traffic but leaves other traffic starving for bandwidth, whereas WFQ can provide fairness for everyone involved while still giving priority to EF class traffic.

Simulation is a good way for researchers to become familiar with a technology. However, simulation ignores the actual complexity of configuring a real network and simplifies the task by assuming that most of the configuration parameters are working correctly.

The simulation work has provided a fair comparison of the IntServ and DiffServ frameworks and of the impact of the scheduling algorithms they use on QoS, but there are other factors, such as SLAs and router configuration, that dictate how QoS is provided. Service Level Agreements (SLAs) form the basis for the level of service provided to different users and types of application. Similarly, every entity involved in a communication network must be QoS-aware and provide support at the various hardware and software levels in order to deliver a complete end-to-end QoS solution.

5.1 Future Research

During this research it has been observed that the following issues need more attention in future research:

  • QoS routing protocols
  • Billing for QoS-enabled networks
  • IntServ over DiffServ architecture

6. References

[1] J. Wroclawski, The Use of RSVP with IETF Integrated Services. RFC 2210, IETF, September 1997

[2] K. Nichols, S. Blake, F. Baker, D. Black. Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers. RFC 2474, IETF Network Working Group, December 1998

[3] S. Blake et al. An Architecture for Differentiated Services. RFC 2475, IETF Network Working Group, December 1998

[4] Mika Ilvesmaki. QoS in the Internet: RSVP / Integrated Services. October 2004.

[5] Cisco White Paper. DiffServ - The scalable end-to-end quality of service model. August 2005. http://www.cisco.com/en/US/products/ps6610/products_white_paper09186a00800a3e2f.shtml (12 Feb. 2006)

[6] L. Delgrossi and L. Berger. Internet Stream Protocol Version 2 (ST2) Protocol Specification - Version ST2+. RFC1819, August 1995.

[7] K. Claffy, G. Miller, K. Thompson. The nature of the beast: recent traffic measurements from an internet backbone. Inet98 1998 http://www.caida.org/outreach/papers/1998/Inet98/Inet98.html (12 Feb. 2006)

[8] S. Shenker and J. Wroclawski. General characterization parameters for Integrated Service network elements. RFC2215 IETF Networking Group, September 1997.

[9] R. Braden, L. Zhang, S. Berson, S. Herzog and S. Jamin. Resource ReSerVation Protocol (RSVP) - Version 1, Functional Specification. RFC2205 IETF Networking Group, September 1997.

[10] El-Bahlul Fgee, Jason D. Kenny, W. J. Philips, William Robertson and S. Sivakumar. Comparison of QoS performance between IPv6 QoS management model and IntServ and DiffServ QoS models. CNSR'05

[11] F. Le Faucheur et al. Protocol Extensions for Support of Diffserv-aware MPLS Traffic Engineering. RFC 4124, IETF Network Working Group, June 2005

[12] G. Huston. Next Steps for the IP QoS Architecture. RFC 2990, IETF Network Working Group, November 2000

[13] L. Berger, D. Gan, G. Swallow, P. Pan , F. Tommasi, S. Molendini, RSVP Refresh Overhead Reduction Extension. RFC 2961 IETF Networking Group, April 2001

[14] Lou Berger, Update of RSVP Refresh Reduction Extensions. November 1999 http://www.isi.edu/rsvp/washdc.meeting/berger.refresh.slides.pdf

[15] Amit P Kuchera, Scalable Emulation of IP Networks through Virtualization. Master Thesis University of Kansas, 2003

[16] Bay Networks, White Paper IPv6. 1997 http://www.cs-ipv6.lancs.ac.uk/ipv6/documents/papers/BayNetworks/bay6.gif

[17] Y. Bernet, P. Ford, R. Yavatkar, F. Baker, L. Zhang, M. Speer, R. Braden, B. Davie, J. Wroclawski, E. Felstaine, A Framework for Integrated Services Operations over DiffServ Networks. RFC2998 IETF Networking Group, November 2000

[18] F. Baker, C. Iturralde, F. Le Faucheur, B. Davie, Aggregation of RSVP for IPv4 and IPv6 Reservations. RFC3175 IETF Networking Group, September 2001

[19] OPNET IT Guru Academic edition 9.1. OPNET Technologies Inc. http://www.opnet.com
