Multiprotocol Label Switching (MPLS)


Multiprotocol Label Switching (MPLS) is a comprehensive data networking technology that provides many benefits to enterprises and carriers. AT&T has had many years of experience with MPLS, and was an early adopter, announcing its first MPLS-based service in 1999. Since then, AT&T has continually rolled out new and enhanced MPLS-based IP VPN services in support of enterprise customers. Today, AT&T is regarded by leading telecommunications analysts as having one of the most comprehensive VPN portfolios in the industry, including MPLS, IPSec and SSL-based solutions.

Now considered a mainstream technology throughout the telecommunications industry, MPLS is the key technological component underpinning AT&T's current and future network evolution. AT&T has adopted MPLS as a strategic platform onto which all of its diverse data networks will converge into a single, seamless global MPLS network, transported on an intelligent optical infrastructure. MPLS also provides additional traffic engineering mechanisms, such as path control and path calculation.

Multiprotocol Label Switching (MPLS) is an emerging technology that aims to address many of the existing issues associated with packet forwarding in today's internetworking environment. Members of the IETF community worked extensively to bring a set of standards to market and to evolve the ideas of several vendors and individuals in the area of label switching. The IETF document draft-ietf-mpls-framework contains the framework of this initiative and describes the primary goal as follows: "The primary goal of the MPLS working group is to standardize a base technology that integrates the label swapping forwarding paradigm with network layer routing. This base technology (label swapping) is expected to improve the price/performance of network layer routing, improve the scalability of the network layer, and provide greater flexibility in the delivery of new routing services by allowing new routing services to be added without a change to the forwarding paradigm."


Multiprotocol Label Switching (MPLS) is a standardized protocol and comprehensive unifying networking architecture. The design of MPLS follows the principles described by the classic 1984 paper on systems design, "End-to-End Arguments in System Design," by J.H. Saltzer, D.P. Reed and D.D. Clark. In this seminal work, the authors describe the principle of keeping the core of a system simple and placing complexity at the edges. To both simplify and increase the efficiency of core transport, the MPLS protocol enables data to be transmitted efficiently across a network infrastructure using a technique known as "label switching." As customers' data enters the network, a short header containing a 20-bit identifier called a label is appended to each packet. Labels can convey several types of information about a packet, but probably the most frequent use of a label is to uniquely identify a customer's Virtual Private Network in a shared infrastructure and keep it private. Used in this fashion, a label uniquely identifies a packet as belonging to a specific IP VPN. Upon reaching its destination, the label is removed, thereby returning the data packet to its original state. The process is seamless and unnoticeable to end-users. One can think of MPLS in this context as a "special delivery courier service" for network data packets.

There has been much confusion in the industry regarding terminology related to VPNs and MPLS. In this paper we'll be using the following definitions. "MPLS VPN" will be used to describe the full range of VPN functionality supported by MPLS: both Layer 2 and Layer 3 VPN services over an MPLS core. This comprehensive and somewhat generic term includes Frame Relay, Asynchronous Transfer Mode (ATM) and IP VPNs. An "IP VPN" is a Layer 3 IP-routed service which is provided to the customer; various technologies can be used to create an IP VPN, such as IPSec, MPLS, SSL, etc. This paper does not describe all possible IP VPN technologies but is focused on MPLS, the technology, its key attributes, and how it can be used to create IP VPNs. The term "IP VPN" does not describe the type of network upon which the service is provisioned; it is possible to create IP VPNs on both public (connected to the Internet) and private (not connected to the Internet) IP networks. Lastly, an "MPLS-based IP VPN" is an IP VPN which is provisioned over a network which is MPLS-enabled. Within the context of any specific paragraph where one of these terms is used, "VPN" without any modifiers may be used later in the same paragraph after the specific type of VPN has been identified.


MPLS is a technology used for optimizing traffic forwarding through a network. Though MPLS can be applied in many different network environments, this discussion will focus primarily on MPLS in IP packet networks, by far the most common application of MPLS today.

MPLS assigns labels to packets for transport across a network. The labels are contained in an MPLS header inserted into the data packet (Figure below).

These short, fixed-length labels carry the information that tells each switching node (router) how to process and forward the packets, from source to destination. They have significance only on a local node-to-node connection. As each node forwards the packet, it swaps the current label for the appropriate label to route the packet to the next node. This mechanism enables very-high-speed switching of the packets through the core MPLS network.
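The node-by-node label swap described above can be sketched as a small table-driven simulation. This is an illustrative sketch, not router code; the node names and label values are invented, and real LSRs build these tables via a label distribution protocol.

```python
# Hypothetical per-node label tables: incoming label -> (outgoing label, next node).
LABEL_TABLES = {
    "LSR-A": {17: (22, "LSR-B")},
    "LSR-B": {22: (39, "LSR-C")},
    "LSR-C": {39: (None, "egress")},  # None: the label is popped at the egress
}

def forward(node, label, path=None):
    """Follow label swaps hop by hop until the label is popped."""
    path = path or [node]
    out_label, next_node = LABEL_TABLES[node][label]
    path.append(next_node)
    if out_label is None:
        return path                      # packet leaves the MPLS domain
    return forward(next_node, out_label, path)

print(forward("LSR-A", 17))  # ['LSR-A', 'LSR-B', 'LSR-C', 'egress']
```

Each node consults only its local table, which is why the labels need only local significance.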

MPLS combines the best of both Layer 3 IP routing and Layer 2 switching. In fact, it is sometimes called a "Layer 2½" protocol. While routers require network-level intelligence to determine where to send traffic, switches only send data to the next hop, and so are inherently simpler, faster, and less costly. MPLS relies on traditional IP routing protocols to advertise and establish the network topology. MPLS is then overlaid on top of this topology. MPLS predetermines the path data takes across a network and encodes that information into a label that the network's routers understand. This is the connection-oriented approach.

Since route planning occurs ahead of time and at the edge of the network (where the customer and service provider network meet), MPLS-labeled data requires less router horsepower to traverse the core of the service provider's network.


Every MPLS node must run one or more IP routing protocols (or rely on static routing) to exchange IP routing information with other MPLS nodes in the network. In this sense, every MPLS node (including ATM switches) is an IP router on the control plane. Similar to traditional routers, the IP routing protocols populate the IP routing table. In traditional IP routers, the IP routing table is used to build the IP forwarding cache (fast switching cache in Cisco IOS) or the IP forwarding table (Forwarding Information Base [FIB] in Cisco IOS) used by Cisco Express Forwarding (CEF). In an MPLS node, the IP routing table is used to determine the label binding exchange, where adjacent MPLS nodes exchange labels for individual subnets that are contained within the IP routing table. The label binding exchange for unicast destination-based IP routing is performed using the Cisco proprietary Tag Distribution Protocol (TDP) or the IETF-specified Label Distribution Protocol (LDP). The MPLS IP Routing Control process uses labels exchanged with adjacent MPLS nodes to build the Label Forwarding Table, which is the forwarding plane database that is used to forward label packets through the MPLS network.
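The control-plane step described above, combining the IP routing table with the labels learned from adjacent nodes into a label forwarding table, can be sketched as follows. All prefixes, labels, and neighbor names are invented for illustration; a real LFIB is built by LDP/TDP exchange, not from static dictionaries.

```python
# Hypothetical IP routing table: prefix -> next-hop LSR.
routing_table = {
    "10.1.0.0/16": "LSR-B",
    "10.2.0.0/16": "LSR-C",
}
# Hypothetical label bindings advertised by each neighbor for each prefix.
neighbor_bindings = {
    ("LSR-B", "10.1.0.0/16"): 21,
    ("LSR-C", "10.2.0.0/16"): 34,
}

def build_lfib(local_labels):
    """Combine routing table and neighbor bindings into a label forwarding table.

    local_labels: labels this LSR advertised upstream, per prefix.
    Returns {incoming label: (outgoing label, next hop)}.
    """
    lfib = {}
    for prefix, next_hop in routing_table.items():
        in_label = local_labels[prefix]
        out_label = neighbor_bindings[(next_hop, prefix)]
        lfib[in_label] = (out_label, next_hop)
    return lfib

print(build_lfib({"10.1.0.0/16": 18, "10.2.0.0/16": 19}))
# {18: (21, 'LSR-B'), 19: (34, 'LSR-C')}
```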

Layer 3 Router in a Network

Because Layer 3 addresses usually have structure, routers can use techniques such as address summarization to build networks that maintain performance and responsiveness as they grow in size and shape. By imposing a hierarchical structure on a network, routers can efficiently use redundant paths and determine optimal routes in a constantly changing network environment.
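The address summarization mentioned above can be illustrated with Python's standard ipaddress module: four contiguous /24 subnets collapse into a single /22 advertisement, shrinking the routing tables of upstream routers. The subnets are invented for the example.

```python
import ipaddress

# Four contiguous branch-office subnets (illustrative values).
branch_subnets = [
    ipaddress.ip_network("10.1.0.0/24"),
    ipaddress.ip_network("10.1.1.0/24"),
    ipaddress.ip_network("10.1.2.0/24"),
    ipaddress.ip_network("10.1.3.0/24"),
]

# collapse_addresses merges contiguous networks into the shortest summary.
summary = list(ipaddress.collapse_addresses(branch_subnets))
print(summary)  # [IPv4Network('10.1.0.0/22')]
```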

The router functions that are vital in a switched LAN design are described in this section:

  • Broadcast and multicast control.
  • Broadcast segmentation.
  • Media transition.

Broadcast and Multicast Control

If user applications require broadcast or multicast support, such as videoconferencing, IPTV, or streaming data such as a stock ticker, the broadcasts and multicasts that can cause network congestion must be managed. Routers are best suited to control these broadcasts and multicasts in the network by performing the following functions:

  • Caching the addresses of remote hosts: When a host sends a broadcast to determine the address of a remote host that the router already knows about, the router responds on behalf of the remote host and drops the broadcast packet, keeping it off the rest of the network.
  • Caching advertised network services: When a router learns of a new network service, it caches the necessary information and does not forward broadcasts related to that service. When a client of that service sends a broadcast to locate it, the router responds on behalf of the service and drops the broadcast packet, sparing the other network hosts from having to respond. For instance, Novell Internetwork Packet Exchange (IPX) clients use broadcasts to find local services; in a network without a router, every server would respond to every client broadcast by multicasting its list of services. Routers manage these Novell broadcasts by collecting the services that are not local and sending out periodic updates describing the services offered on the entire network.
  • Providing special protocols: Special multicast protocols, such as the Internet Group Management Protocol (IGMP) and Protocol Independent Multicast (PIM), should be provided. These protocols enable multicast applications to "negotiate" with routers, switches, and workstations to determine which devices belong to a multicast group. This negotiation helps limit the range and impact of the multicast stream on the network.

A good network design usually contains a mix of appropriately scaled switching and routing implementations. Given the effects of broadcast radiation on CPU performance, well-managed switched LAN designs must include routers for broadcast and multicast management to keep the network from being saturated and crippled with unnecessary traffic.


Label switching is a technique for overcoming the inefficiency of traditional Layer 3 hop-by-hop routing. Labels assigned to packets allow network devices to forward them at Layer 2 at high speed. The label points to an entry in the forwarding table that specifies where the packet should be forwarded. This label switching technique is much faster than the traditional routing method, in which each packet's header is examined before a forwarding decision is made. Key topics in label switching include:

  • Label switching techniques
  • Topology-driven label assignment
  • Signaling/request/control-driven label assignment
  • Traffic-driven label assignment
  • Traffic engineering across explicit routes
  • LSRs (label switching routers)
  • GSMP (General Switch Management Protocol)


The performance limits of the current Internet make the integration of IP with ATM a hotly debated issue in the networking arena, one which has led to various competing approaches and products. However, legitimate technical and market issues are often intertwined with biased views and hype, since vendors compete in the standards arena as well as in the markets. This, together with the speed of technical evolution, causes confusion for purchasers of networking equipment, who usually prefer a single vendor for their networks; they may run the risk of remaining locked into a solution that will not scale with their evolving needs and that may not fully interoperate with other networks, even those using the IP protocol. It is important to note that the term 'Internet' refers to a specific network. In the course of this study, we will refer more generally to the TCP/IP family of protocols, in order to cover all types of IP networks, such as intranets, extranets and the Internet. We shall cover both IP version 4 (IPv4) and IP version 6 (IPv6); unless IPv6 is explicitly stated, IPv4 should be assumed. IP and ATM integration here refers to the support of IP over (or within) ATM. From the user's point of view, integration is a particular case of coexistence. Another particular case is IP and ATM interworking, which means interoperation between applications, or between complete protocol stacks, based on IP technology on one side and ATM technology on the other: for instance, interworking between IETF IP-based videoconferencing and ATM-based videoconferencing. This study does not deal with interworking scenarios; they do not arise very often and are difficult to realize. Additional non-trivial aspects, such as interworking at the user plane, mapping of addresses, and several other issues strictly dependent upon the technologies involved, would have to be considered in that case.


MPLS Today

Multi-Protocol Label Switching (MPLS) is an important initiative in many enterprises today. Purpose-built for delivering multimedia voice, video and data in prioritized QoS classes, MPLS is now being rolled out by IT organizations to regional and remote offices as they look for new ways to lower cost, extend scalability, improve reliability, and secure their networked data.

Business and Network Challenges

Ensuring high-quality delivery of all applications and services across an MPLS network, at all times, for all offices and users, is essential for today's enterprises. To meet this goal, enterprises must:

  • Maintain network visibility when migrating from Frame Relay or ATM
  • Manage capacity and budget planning activities
  • Validate choices of tiered, QoS-based service plans
  • Enable high levels of employee productivity
  • Troubleshoot network and application response time issues
  • Consolidate reporting and analysis of all WAN technologies
Network Performance Management

Managing application and network performance is daunting. A major challenge facing IT professionals today is gaining adequate visibility into all the network technologies that have been implemented to enhance the delivery of business services. To manage the increasing complexity and diversity, you need a network performance monitoring system that spans the enterprise, regardless of network topology.

Application Performance Monitoring

Corporations and government agencies around the world rely on networked applications to conduct day-to-day business. Anything that interrupts or impedes performance of business-critical applications can directly impact revenue, productivity and/or customer services. SNMP utilization statistics are no longer sufficient - without visibility into all application traffic, companies simply cannot make informed business decisions.

Network Capacity Planning

Many factors affect the continued growth of network traffic: corporate initiatives like VoIP, new application implementations such as an ERP, and even recreational usage like streaming radio or video. These business changes, along with technology innovation, have led to an explosion of new, bandwidth-intensive applications and an increased probability of network congestion, a significant risk to business processes. Without information on what applications are consuming bandwidth, corporations are forced to upgrade to higher- and higher-speed networks.

    Business-Based Network Capacity Planning with the NetScout Solution

    With the NetScout Performance Management Solution you can report on and analyze growth trends and usage patterns in order to make decisions about optimizing bandwidth, rescheduling activities, reallocating traffic, or even creating use policies. In addition to enterprise-wide baselines and forecasts, nGenius goes beyond basic utilization to identify the applications that consume bandwidth. This provides the justification for capacity planning and allows you to keep your budget in check.

    • Proactively combat network congestion by reporting on bandwidth growth and forecasting capacity shortfalls
    • Gain quantifiable business justification by understanding and reporting on which applications consume your network resources, in order to postpone upgrades and justify growth and policy decisions
    • Tune traffic to optimize resources by identifying over- and under-utilized (physical and logical) segments so you can redistribute load and balance costs
    • Baseline current traffic patterns to ensure new applications can be supported during peak activity periods
    • Plan enterprise-wide network capacity with a unified performance management solution that supports all data sources and areas of your network, from the core to the access layers, including higher-speed networks such as 10 Gigabit Ethernet and OC-12 ATM
    • Curtail network misuse for better network utilization by identifying and reporting on non-business uses of the network

Network Troubleshooting

When the network is hampered by degradations and outages, it is essential to re-establish service or fix the slowdown as quickly as possible so that business operations are not unduly affected. Without highly granular, real-time information, troubleshooting can be a discouraging, time-consuming task with a high mean time to repair (MTTR).

VoIP Use Today

While a majority of organizations have transferred telephone service from common plain old telephone service (POTS) to Voice over IP (VoIP), dealing with converging voice, video and data traffic in the service delivery environment continues to raise new challenges. Networks are becoming more complex and the services that reside on them are more diverse than ever before - yet end-users require the same level of quality from VoIP telephone service, if not better, as they received with POTS in order to perform productively. It is also vital that these new voice initiatives do not degrade the performance of key business data applications, or vice versa.

Managing VoIP Effectively

The nGenius® Service Assurance Solution leverages packet-flow data derived from the real traffic traversing the network to provide the deep visibility and action-oriented data needed to:

  • Predict service delivery issues with intelligent early warning that leverages key performance indicators (KPIs) and automated behavior analysis; action-oriented alerts mean problems are recognized sooner and their causes diagnosed before the end-user is impacted and calls the help desk.
  • Effectively manage application and network performance with unified visibility that provides the ability to view voice, data and video services side-by-side, in order to understand the interrelationships of all services that traverse the network infrastructure.

Network Performance Management in Support of IT Standards and Regulatory Compliance

The increase in societal scrutiny of public corporations and service organizations has led to a number of regulatory initiatives, such as Sarbanes-Oxley for financial disclosure, Basel II in banking, and HIPAA in the medical industry. Because of the central role today's networks play in the delivery of business operations and services, the IT department is wholly impacted by these external regulations. In fact, many leading companies are even adopting best-practice models, such as ITIL or COBIT, in order to gain service efficiencies and to be readily accountable to their organizations, end users and regulatory bodies.

  • Monitor all applications and business processes on your network, including those subject to regulation, new or custom applications deployed in support of compliance, or activity indicating inappropriate use
  • Report the response time of key business applications to verify service level compliance, e.g., HIPAA standards for electronic healthcare transactions, or to demonstrate the continued improvement of business processes for an ITIL implementation
  • Benefit from unified performance management to serve a number of job functions within the IT environment, helping to reduce costs associated with IT management
  • Streamline the troubleshooting workflow in support of process improvements with nGenius's intuitive functionality and superior performance management architecture


Each packet enters an MPLS network at an ingress LSR and exits the MPLS network at an egress LSR. This mechanism creates what is known as a Label Switched Path (LSP), which essentially describes the set of LSRs through which a labeled packet must traverse to reach the egress LSR for a particular FEC. This LSP is unidirectional, which means that a different LSP is used for return traffic from a particular FEC. The creation of the LSP is a connection-oriented scheme because the path is set up prior to any traffic flow. However, this connection setup is based on topology information rather than a requirement for traffic flow; the path is created regardless of whether any traffic actually flows along it to a particular set of FECs. As the packet traverses the MPLS network, each LSR swaps the incoming label with an outgoing label, much like the mechanism used within ATM where the VPI/VCI is swapped to a different VPI/VCI pair when exiting the ATM switch. This continues until the last LSR, known as the egress LSR, is reached. Each LSR keeps two tables that hold information relevant to the MPLS forwarding component; the first is known in Cisco IOS as the Tag Information Base (TIB). The key concept in MPLS is identifying and marking IP packets with labels and forwarding them to a router, which then uses the labels to switch the packets through the network. The labels are created and assigned to IP packets based upon information gathered from existing IP routing protocols.

Routers on the edge of the network (label edge routers [LERs]) attach labels to packets based on a forwarding equivalence class (FEC). Packets are then forwarded through the MPLS network based on their associated FECs, with labels swapped by routers or switches in the core of the network, called label switch routers (LSRs), until they reach their destination (refer to Fig. 3).
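The FEC classification performed at the ingress LER can be sketched as a longest-prefix-match lookup on the destination address. The prefixes and FEC names below are invented for illustration; real LERs may also classify on other fields such as the arrival interface or QoS markings.

```python
import ipaddress

# Hypothetical FEC table: prefix -> FEC name.
FEC_TABLE = {
    ipaddress.ip_network("10.0.0.0/8"): "FEC-corporate",
    ipaddress.ip_network("10.20.0.0/16"): "FEC-datacenter",
    ipaddress.ip_network("0.0.0.0/0"): "FEC-default",
}

def classify(dst):
    """Assign a packet to a FEC by longest (most specific) matching prefix."""
    addr = ipaddress.ip_address(dst)
    match = max((net for net in FEC_TABLE if addr in net),
                key=lambda net: net.prefixlen)
    return FEC_TABLE[match]

print(classify("10.20.5.1"))   # FEC-datacenter (the /16 beats the /8)
print(classify("192.0.2.7"))   # FEC-default
```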

MPLS also makes it possible to have granular control over a packet's path by referencing incoming labels against the LIB (Label Information Base). As the network is established and signaled, each MPLS router builds a Label Information Base, a table that specifies how to forward a packet. This table associates each label with its corresponding FEC and the outgoing port on which to forward the packet. The LIB is typically established in addition to the routing table and Forwarding Information Base (FIB) that traditional routers maintain. Consider again Fig. 2. Table I below is an example of Router C's LIB. Packets destined for Router F originating from Router A will follow the solid path, while packets originating from Router B will follow the dotted path. This is accomplished by referencing incoming labels against the LIB to obtain the value of the outgoing label and the outgoing interface. Packets arriving on interface S2 with label value 60 will be forwarded on interface S0 with outgoing label 20. Similarly, packets arriving on interface S3 with label value 55 will be forwarded on interface S1 with outgoing label value 70.
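The LIB lookups just described can be modeled as a table keyed by incoming interface and incoming label; the entries below mirror the two example flows in the text (S2/60 to S0/20, and S3/55 to S1/70).

```python
# Router C's LIB, as described above: (incoming interface, incoming label)
# -> (outgoing interface, outgoing label).
LIB = {
    ("S2", 60): ("S0", 20),
    ("S3", 55): ("S1", 70),
}

def switch(in_if, in_label):
    """Resolve the outgoing interface and label for an arriving packet."""
    out_if, out_label = LIB[(in_if, in_label)]
    return out_if, out_label

print(switch("S2", 60))  # ('S0', 20)
print(switch("S3", 55))  # ('S1', 70)
```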

MPLS Components

  1. Forwarding Equivalence Class (FEC)

    A FEC is a set of packets that are forwarded in the same way through a network. A FEC can include all packets whose destination address matches a particular IP network prefix, or all packets between a particular source and destination. FECs are usually built from information learned through an IGP, such as OSPF or RIP.

    When a packet enters an MPLS network, the MPLS edge router classifies the packet as part of a particular Forwarding Equivalence Class based on information gleaned from the packet, such as the source or destination address, the physical interface the packet arrived on, Quality of Service requirements, etc. These groups of packets are forwarded through the MPLS network over the same path with the same treatment.

  2. Label

    A label is a short, fixed-length, physically contiguous identifier used to identify a FEC. It contains all the information needed to forward the packet. Labels are created and assigned to IP packets based upon information gathered from existing IP routing protocols. The MPLS header fields are:

    • Label (20 bits): A locally significant ID used to represent a particular FEC during the forwarding process.
    • EXP or CoS (3 bits): Class of service (CoS); also called the experimental field, and used for QoS implementations.
    • S (1 bit): Signifies whether further label stack entries follow. If the label is the only one present, or is at the bottom of the stack, the bit is set to one.
    • TTL (8 bits): Signifies the number of MPLS nodes that a packet may traverse to reach its destination. The value is copied from the IP packet header and copied back into it when the packet emerges from the Label Switched Path.

    When an IP packet is presented to the LER, it pushes the shim header between the Layer 2 and Layer 3 headers. The shim header is part of neither Layer 2 nor Layer 3, but it provides a means to relate Layer 2 and Layer 3 information. MPLS uses the control-driven model to initiate the assignment and distribution of label bindings; that is, labels are assigned in response to the normal processing of routing protocol traffic, control traffic, or static configuration. The MPLS control component centers on IP functionality, with new standards-based IP signaling and label distribution protocols as well as extensions to existing protocols. The forwarding component is based on the label-swapping algorithm.

  3. Label Edge Router (LER)

    A router that sits at the edge of an MPLS domain and is capable of using routing information to assign labels to packets and then forward them into the MPLS domain.

  4. Label Switching Router (LSR)

    A router that typically resides in the middle of the network and is capable of forwarding packets based upon a label.

  5. Label Switched Path (LSP)

    The path followed by a packet through the MPLS domain; it represents the set of routers that the packet traverses.

  6. Label Stack

    By placing multiple labels onto a packet, MPLS can support a hierarchical routing design. The set of labels attached to a packet is called the label stack. As the packet traverses the network, only the topmost label is swapped. The labels are organized in last-in, first-out order: the topmost label signifies the highest-level LSP, and each successive label signifies the next lower-level LSP.
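The 32-bit shim-header layout (Label 20 bits, EXP 3, S 1, TTL 8) and the last-in, first-out stack behavior can be sketched together in Python. This is an illustrative encoding exercise, not router code; the label values (a transport label stacked over a VPN label) are invented. Per the standard MPLS encoding, the bottom stack entry carries S=1.

```python
import struct

def pack_entry(label, exp, s, ttl):
    # Layout: Label (20 bits) | EXP (3) | S (1) | TTL (8), big-endian.
    return struct.pack("!I", (label << 12) | (exp << 9) | (s << 8) | ttl)

def pack_stack(labels, exp=0, ttl=64):
    """Encode a label stack; labels are top-of-stack first, last entry gets S=1."""
    out = b""
    for i, label in enumerate(labels):
        s = 1 if i == len(labels) - 1 else 0
        out += pack_entry(label, exp, s, ttl)
    return out

def unpack_stack(data):
    """Decode entries until the bottom-of-stack bit is seen."""
    entries = []
    for i in range(0, len(data), 4):
        (word,) = struct.unpack("!I", data[i:i + 4])
        entries.append({"label": word >> 12, "s": (word >> 8) & 1})
        if (word >> 8) & 1:            # bottom of stack reached
            break
    return entries

wire = pack_stack([17, 500])           # hypothetical transport label over VPN label
print(unpack_stack(wire))
# [{'label': 17, 's': 0}, {'label': 500, 's': 1}]
```

In transit, only the first (topmost) 4-byte entry would be rewritten, which is exactly the LIFO behavior the stack description calls for.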


The Label Distribution Protocol (LDP) is used to distribute label binding information to label switch routers (LSRs) in an MPLS network. It maps FECs (Forwarding Equivalence Classes) to labels, which in turn creates Label Switched Paths (LSPs). LDP sessions are established between LDP peers in the MPLS network (not necessarily adjacent). The peers exchange the following types of messages:

  1. Discovery messages: announce and maintain the presence of an LSR in a network.
  2. Session messages: establish, maintain, and terminate sessions between LDP peers.
  3. Advertisement messages: create, change, and delete label mappings for FECs.
  4. Notification messages: provide advisory information and signal error information.

Extensions of the base LDP protocol have also been defined to support explicit routing based on QoS (Quality of Service) and CoS (Class of Service).

With destination-based routing, a router makes the forwarding decision based on the Layer 3 destination address carried in the packet and the information stored in the forwarding information base (FIB) maintained by the router. A router constructs its FIB using the information it receives from routing protocols, such as OSPF and BGP.

However, an LSR must distribute the labels it allocates so that peer LSRs can correctly forward frames. LSRs distribute labels using a label distribution protocol (LDP). A label binding associates a destination subnet with a locally significant label. (Labels are replaced at each hop, so they are locally significant.) Whenever an LSR discovers a neighbor LSR, the two establish a TCP connection to transfer label bindings. LDP exchanges subnet/label bindings using one of two methods: downstream unsolicited distribution or downstream-on-demand distribution. Both LSRs must agree on which mode to use.


Both LDP and TDP (Cisco proprietary) can be used to advertise label bindings for IGP prefixes. Although TDP and LDP are similar, they have a number of differences.


Note that this section focuses on LDP, although TDP is discussed where needed.

Extensions to the RSVP

Extended RSVP is usually used in MPLS networks to signal the TE tunnels. TE LSP tunnels can be used to make a better use of bandwidth by taking advantage of underutilized paths through the network.

TE LSPs can also be reserved based on bandwidth requirements and administrative policies. TE LSPs can also follow an explicit or dynamic path. Irrespective of whether they are explicit or dynamic, however, paths must conform to all bandwidth and administrative requirements.

Extensions to OSPF and IS-IS facilitate the flooding of link bandwidth and policy information throughout the MPLS network. This allows the TE tunnel head-end (initiating) LSR to calculate the path using a constrained shortest path first (CSPF) algorithm.

Once the path has been calculated, the tunnel is signaled using RSVP Path and Resv messages. Path messages contain a LABEL_REQUEST object (among others) and travel hop-by-hop along the described path to the tunnel tail-end. Resv messages contain a LABEL object and travel backward along the path from the tail-end to the head-end LSR. The purpose of the LABEL_REQUEST object is to request a label binding for the LSP, while the purpose of the LABEL object is to distribute the label bindings for the LSP. TE tunnels use downstream-on-demand label distribution.
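The tail-to-head label assignment performed by Resv messages can be sketched as a toy simulation: labels are allocated starting at the tail end and propagate backward toward the head end. The node names and starting label value are invented, and actual RSVP message formats are ignored here.

```python
def signal_tunnel(path_nodes):
    """Return {node: outgoing label} as installed by the returning Resv message.

    path_nodes lists LSRs head-to-tail; the egress (tail) pops, so it
    allocates labels for its upstream neighbors first.
    """
    bindings = {}
    next_label = 16                    # hypothetical first free label
    # The Resv message travels backward: allocation starts nearest the tail.
    for node in reversed(path_nodes[:-1]):
        bindings[node] = next_label    # label this node will impose toward the tail
        next_label += 1
    return bindings

print(signal_tunnel(["head", "mid", "tail"]))  # {'mid': 16, 'head': 17}
```

This ordering is exactly why TE tunnels are described as downstream-on-demand: no LSR has a usable label until its downstream neighbor has answered.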


  • A mechanism that enables an LSR to discover potential LDP peers
  • Avoids unnecessary explicit configuration of LSR label switching peers
  • Two variants of the discovery mechanism
    • Basic discovery mechanism: used to discover LSR neighbors that are directly connected at the link level
    • Extended discovery mechanism: used to locate LSRs that are not directly connected at the link level


  • Exchange of LDP discovery Hellos triggers session establishment
  • Two-step process
    • Transport connection establishment
  • If LSR1 does not already have an LDP session for the exchange of label spaces LSR1:a and LSR2:b, it attempts to open a TCP connection with LSR2
  • LSR1 determines the transport addresses at its end (A1) and LSR2's end (A2) of the TCP connection
  • If A1>A2, LSR1 plays the active role; otherwise it is passive
    • Session initialization
  • Negotiate session parameters by exchanging LDP initialization messages
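The active/passive decision above can be sketched directly: the LSR whose transport address is numerically greater plays the active role and opens the TCP connection. The addresses below are illustrative.

```python
import ipaddress

def ldp_role(a1, a2):
    """Return LSR1's role: 'active' if its transport address A1 > A2, else 'passive'."""
    return "active" if ipaddress.ip_address(a1) > ipaddress.ip_address(a2) else "passive"

print(ldp_role("10.0.0.2", "10.0.0.1"))  # active
print(ldp_role("10.0.0.1", "10.0.0.2"))  # passive
```

Comparing as addresses rather than strings matters: "10.0.0.9" is numerically less than "10.0.0.10", though it sorts after it lexicographically.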


  • Two label distribution techniques
    • Downstream on demand label distribution: An LSR can distribute a FEC label binding in response to an explicit request
    • Downstream Unsolicited label distribution: Allows an LSR to distribute label bindings to LSRs that have not explicitly requested them
  • Both can be used in the same network at the same time; however, each LSR must be aware of the distribution method used by its peer


  1. Independent Label Distribution Control
    • Each LSR may advertise label mappings to its neighbors at any time
    • Independent Downstream on Demand mode - LSR answers without waiting for a label mapping from next hop
    • Independent Downstream Unsolicited mode - LSR advertises label mapping for a FEC whenever it is prepared
    • Consequence: upstream label can be advertised before a downstream label is received
  2. Ordered Label Distribution Control
    • An LSR initiates transmission of a label mapping for a FEC only if it has received a mapping from the FEC's next hop, or if it is itself the egress for the FEC
    • Otherwise, the LSR waits until it receives a label from the downstream LSR
    • An LSR acts as the egress for a particular FEC if:
      • The next hop router for the FEC is outside the label switching network, or
      • FEC elements are reachable only by crossing a routing domain boundary
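
The difference is easiest to see for Ordered control on a linear path. A minimal sketch, with illustrative LSR names and label values: the egress advertises first, and each upstream LSR advertises only after hearing from its downstream neighbor:

```python
# Sketch of Ordered label distribution control on a linear path:
# the label mapping for a FEC propagates upstream from the egress,
# each LSR advertising only once it holds a mapping from its
# downstream next hop.  (Under Independent control, by contrast,
# every LSR could advertise immediately, in any order.)
def ordered_distribution(hops, egress, alloc):
    """hops: LSRs listed ingress -> egress.  Returns the order in
    which mappings are advertised, with the label each LSR binds."""
    order = []
    for lsr in reversed(hops):              # start at the egress
        if lsr == egress or order:          # egress, or downstream label known
            order.append((lsr, alloc()))
    return order

labels = iter(range(16, 100))               # illustrative label values
adverts = ordered_distribution(["A", "B", "C", "D"], "D", lambda: next(labels))
print(adverts)   # D advertises first, then C, then B, then A
```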

MPLS versus ATM

ATM is a high-speed networking technology built on two simple concepts: fixed-length cells and QoS. Information (data, video, Internet and voice) transferred across an ATM network is carried in 53-byte chunks called cells. The fixed cell length enables an ATM network to switch information very quickly, with a high degree of data integrity and low latency. Each 53-byte cell comprises 48 bytes of payload (user information) and a 5-byte header. The header carries all addressing and QoS information at the front of the cell, so the payload can be switched, rather than routed, without delay and according to priority. A uniform cell structure lets all information travel the network at a predictable interval, and cells from different business applications can share the same network while each is still handled according to its priority, without loss of integrity.

While the underlying protocols and technologies are different, both MPLS and ATM provide a connection-oriented service for transporting data across computer networks. In both technologies, connections are signaled between endpoints, connection state is maintained at each node in the path, and encapsulation techniques are used to carry data across the connection. Excluding differences in the signaling protocols (RSVP/LDP for MPLS and PNNI, the Private Network-to-Network Interface, for ATM), there still remain significant differences in the behavior of the technologies.

The most significant difference is in the transport and encapsulation methods. MPLS is able to work with variable length packets while ATM transports fixed-length (53 byte) cells. Packets must be segmented, transported and re-assembled over an ATM network using an adaptation layer, which adds significant complexity and overhead to the data stream. MPLS, on the other hand, simply adds a label to the head of each packet and transmits it on the network.

Differences exist, as well, in the nature of the connections. An MPLS connection (LSP) is uni-directional, allowing data to flow in only one direction between two endpoints. Establishing two-way communications between endpoints requires a pair of LSPs. Because two LSPs are required for connectivity, data flowing in the forward direction may use a different path from data flowing in the reverse direction. ATM point-to-point connections (Virtual Circuits), on the other hand, are bi-directional, allowing data to flow in both directions over the same path (only SVC ATM connections are bi-directional; PVC ATM connections are uni-directional).

Both ATM and MPLS support tunneling of connections inside connections. MPLS uses label stacking to accomplish this, while ATM uses Virtual Paths. MPLS can stack multiple labels to form tunnels within tunnels. The ATM Virtual Path Identifier (VPI) and Virtual Channel Identifier (VCI) are both carried together in the cell header, limiting ATM to a single level of tunneling.
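
Label stacking is visible in the encoding itself. Below is a sketch of the 32-bit label stack entry defined in RFC 3032 (20-bit label, 3-bit Exp, 1-bit bottom-of-stack flag, 8-bit TTL); the label values chosen are illustrative:

```python
# Sketch of an MPLS label stack entry per RFC 3032:
#   label (20 bits) | Exp (3 bits) | S, bottom-of-stack (1 bit) | TTL (8 bits)
# Stacking entries (S=0 on all but the last) is what gives MPLS its
# tunnels-within-tunnels capability.
def encode_entry(label: int, exp: int, s: int, ttl: int) -> int:
    return (label << 12) | (exp << 9) | (s << 8) | ttl

def decode_entry(word: int):
    return (word >> 12, (word >> 9) & 0x7, (word >> 8) & 0x1, word & 0xFF)

# A two-level stack: outer tunnel label 100 (S=0), inner label 200 (S=1).
stack = [encode_entry(100, 0, 0, 64), encode_entry(200, 0, 1, 64)]
assert decode_entry(stack[0]) == (100, 0, 0, 64)
assert decode_entry(stack[1])[2] == 1   # bottom-of-stack flag set
```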

The biggest single advantage that MPLS has over ATM is that it was designed from the start to be complementary to IP. Modern routers are able to support both MPLS and IP natively across a common interface, allowing network operators great flexibility in network design and operation. ATM's incompatibilities with IP require complex adaptation, making it comparatively less suitable for today's predominantly IP networks.

Benefits of MPLS

Comparing MPLS with existing IP core and IP/ATM technologies, MPLS has many advantages and benefits:

  • The performance characteristics of Layer 2 networks
  • The connectivity and network services of Layer 3 networks
  • Improved price/performance of network layer routing
  • Improved scalability
  • Improved possibilities for traffic engineering
  • Support for the delivery of services with QoS guarantees
  • No need to coordinate IP and ATM address allocation and routing information

MPLS Virtual Private Networks

One of the most compelling drivers for MPLS in service provider networks is its support for Virtual Private Networks (VPNs), in which the provider's customers can connect geographically diverse sites across the provider's network.

There are three kinds of MPLS-based VPN:

Layer 3 VPNs: With L3 VPNs the service provider participates in the customer's Layer 3 routing. The customer's CE router at each site speaks a routing protocol such as BGP or OSPF to the provider's PE router, and the IP prefixes advertised at each customer site are carried across the provider network. L3 VPNs are attractive to customers who want to leverage the service provider's technical expertise to ensure efficient site-to-site routing.
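
A sketch of why the provider can carry many customers' routes at once: in BGP/MPLS L3 VPNs each route is disambiguated by a route distinguisher, so the PE effectively keys its VPN routes on (RD, prefix) and two customers can use overlapping address space. The RD values and next hops below are illustrative:

```python
# Sketch of VPN route disambiguation on a PE router: routes are keyed
# by (route distinguisher, prefix), so identical customer prefixes
# from different VPNs never collide.
vpn_routes = {}

def install(rd: str, prefix: str, next_hop: str):
    vpn_routes[(rd, prefix)] = next_hop

install("65000:1", "10.1.0.0/16", "PE2")   # customer A
install("65000:2", "10.1.0.0/16", "PE3")   # customer B, same prefix

assert vpn_routes[("65000:1", "10.1.0.0/16")] == "PE2"
assert vpn_routes[("65000:2", "10.1.0.0/16")] == "PE3"
```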

Layer 2 VPNs: The provider interconnects the customer sites via the Layer 2 technology - usually ATM, Frame Relay, or Ethernet - of the customer's choosing. The customer implements whatever Layer 3 protocol he wants to run, with no participation by the service provider at that level. L2 VPNs are attractive to customers who want complete control of their own routing; they are attractive to service providers because they can serve up whatever connectivity the customer wants simply by adding the appropriate interface in the PE router.

Virtual Private LAN Service: VPLS makes the service provider's network look like a single Ethernet switch from the customer's viewpoint. The attraction of VPLS to customers is that they can make their WAN look just like their local campus- or building-scope networks, using a single technology (Ethernet) that is cheap and well understood. Unlike traditional Metro Ethernet services built around actual Ethernet switches, service providers can connect VPLS customers at regional all the way up to global scales. So a customer with sites in London, Dubai, Bangalore, Hong Kong, Los Angeles, and New York can connect all his sites with what appears to be a single Ethernet switch.

MPLS Traffic Engineering

Multiprotocol Label Switching (MPLS) traffic engineering software enables an MPLS backbone to replicate and expand upon the traffic engineering capabilities of Layer 2 ATM and Frame Relay networks.

Traffic engineering is essential for service provider and Internet service provider (ISP) backbones. Such backbones must support a high use of transmission capacity, and the networks must be very resilient, so that they can withstand link or node failures. MPLS traffic engineering provides an integrated approach to traffic engineering. With MPLS, traffic engineering capabilities are integrated into Layer 3, which optimizes the routing of IP traffic, given the constraints imposed by backbone capacity and topology. MPLS traffic engineering routes traffic flows across a network based on the resources the traffic flow requires and the resources available in the network. MPLS traffic engineering employs "constraint-based routing," in which the path for a traffic flow is the shortest path that meets the resource requirements (constraints) of the traffic flow. In MPLS traffic engineering, the flow has bandwidth requirements, media requirements, a priority versus other flows, and so on.
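
Constraint-based routing as described here amounts to pruning infeasible links and running a shortest-path search over what remains. A minimal sketch, with an illustrative four-node topology and bandwidth as the only constraint:

```python
# Sketch of constraint-based routing (CSPF): drop links that cannot
# satisfy the flow's bandwidth requirement, then find the shortest
# path over the pruned topology.
import heapq

def cspf(links, src, dst, bw_needed):
    """links: {(u, v): (cost, available_bw)}; returns shortest feasible path."""
    adj = {}
    for (u, v), (cost, bw) in links.items():
        if bw >= bw_needed:                      # constraint: enough bandwidth
            adj.setdefault(u, []).append((v, cost))
    heap, seen = [(0, src, [src])], set()
    while heap:
        dist, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in adj.get(node, []):
            heapq.heappush(heap, (dist + cost, nxt, path + [nxt]))
    return None                                  # no feasible path

links = {("A", "B"): (1, 50), ("B", "D"): (1, 50),    # shorter but thin
         ("A", "C"): (2, 200), ("C", "D"): (2, 200)}  # longer but fat
print(cspf(links, "A", "D", 100))   # ['A', 'C', 'D'] - thin path pruned
```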

MPLS traffic engineering gracefully recovers from link or node failures that change the topology of the backbone by adapting to the new set of constraints.

Why Use MPLS Traffic Engineering?

WAN connections are an expensive item in an ISP budget. Traffic engineering enables ISPs to route network traffic in such a way that they can offer the best service to their users in terms of throughput and delay. Currently, some ISPs base their services on an overlay model. In this approach, transmission facilities are managed by Layer 2 switching. The routers see only a fully meshed virtual topology, making most destinations appear one hop away. The use of the explicit Layer 2 transit layer gives you precise control over the ways in which traffic uses the available bandwidth. However, the overlay model has a number of disadvantages. MPLS traffic engineering provides a way to achieve the same traffic engineering benefits of the overlay model without needing to run a separate network, and without needing a non-scalable full mesh of router interconnects.

Existing Cisco IOS software releases (for example, Cisco IOS Release 12.0) contain a set of features that enable elementary traffic engineering capabilities. Specifically, you can create static routes and control dynamic routes through the manipulation of link state metrics. This functionality is useful in some tactical situations, but is insufficient for all the traffic engineering needs of ISPs.

With MPLS traffic engineering, you do not have to manually configure the network devices to set up explicit routes. Instead, you can rely on the MPLS traffic engineering functionality to understand the backbone topology and the automated signaling process.

MPLS traffic engineering accounts for link bandwidth and for the size of the traffic flow when determining explicit routes across the backbone. Dynamic adaptation is also necessary. MPLS traffic engineering has a dynamic adaptation mechanism that provides a complete traffic engineering solution for a backbone. This mechanism enables the backbone to be resilient to failures, even if many primary paths are precalculated offline.

How MPLS Traffic Engineering Works

MPLS is an integration of Layer 2 and Layer 3 technologies. By making traditional Layer 2 features available to Layer 3, MPLS enables traffic engineering. Thus, you can offer in a one-tier network what now can be achieved only by overlaying a Layer 3 network on a Layer 2 network.

MPLS traffic engineering automatically establishes and maintains the tunnel across the backbone, using RSVP. The path used by a given tunnel at any point in time is determined based on the tunnel resource requirements and network resources, such as bandwidth. Available resources are flooded via extensions to a link-state Interior Gateway Protocol (IGP). Tunnel paths are calculated at the tunnel head based on a fit between required and available resources (constraint-based routing). The IGP automatically routes the traffic into these tunnels. Typically, a packet crossing the MPLS traffic engineering backbone travels on a single tunnel that connects the ingress point to the egress point.

MPLS traffic engineering is built on the following IOS mechanisms:

  • Label-switched path (LSP) tunnels, which are signaled through RSVP, with traffic engineering extensions. LSP tunnels are represented as IOS tunnel interfaces, have a configured destination, and are unidirectional.
  • A link-state IGP (such as IS-IS) with extensions for the global flooding of resource information, and extensions for the automatic routing of traffic onto LSP tunnels as appropriate.
  • An MPLS traffic engineering path calculation module that determines paths to use for LSP tunnels.
  • An MPLS traffic engineering link management module that does link admission and bookkeeping of the resource information to be flooded.
  • Label switching forwarding, which provides routers with a Layer 2-like ability to direct traffic across multiple hops as directed by the resource-based routing algorithm.

One approach to engineering a backbone is to define a mesh of tunnels from every ingress device to every egress device. The IGP, operating at an ingress device, determines which traffic should go to which egress device, and steers that traffic into the tunnel from ingress to egress. The MPLS traffic engineering path calculation and signaling modules determine the path taken by the LSP tunnel, subject to resource availability and the dynamic state of the network. For each tunnel, counts of packets and bytes sent are kept. Sometimes, a flow is so large that it cannot fit over a single link, so it cannot be carried by a single tunnel. In this case, multiple tunnels between a given ingress and egress can be configured, and the flow is load-shared among them.
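
Load sharing across parallel tunnels is typically done per flow rather than per packet, so packet ordering within a flow is preserved. A sketch of the idea, with illustrative tunnel names, using a hash of the flow identifiers:

```python
# Sketch of per-flow load sharing across parallel TE tunnels: hashing
# on flow identifiers keeps every packet of a flow on one tunnel
# (preserving ordering) while spreading flows over all tunnels.
import zlib

tunnels = ["tunnel-1", "tunnel-2", "tunnel-3"]

def pick_tunnel(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> str:
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return tunnels[zlib.crc32(key) % len(tunnels)]

# The same flow always maps to the same tunnel:
t = pick_tunnel("10.0.0.1", "10.9.9.9", 40000, 443)
assert t == pick_tunnel("10.0.0.1", "10.9.9.9", 40000, 443)
assert t in tunnels
```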

Benefits of MPLS Traffic Engineering

MPLS traffic engineering offers benefits in two main areas:

  1. Higher return on network backbone infrastructure investment. Specifically, the best route between a pair of POPs is determined taking into account the constraints of the backbone network and the total traffic load on the backbone.
  2. Reduction in operating costs. Costs are reduced because a number of important processes are automated, including the setup, configuration, mapping, and selection of Multiprotocol Label Switching traffic-engineered (MPLS TE) tunnels across a Cisco 12000 series backbone.

Conclusion

The question of what WAN communication infrastructure a company should implement is no longer a simple comparison of old technologies versus the new technology; multiple new technologies must be considered. MPLS is typically used by a service provider to better manage its network resources and is focused on providing a connection-oriented VPN. This is similar in many respects to older technologies like Frame Relay and ATM. Customers satisfied with these older VPN technologies may look to MPLS or similar technologies, like Sprint IP Intelligent Frame Relay, to provide their solution. Growing organizations with a variety of communication requirements should look to IPSec VPNs, which offer flexibility not inherent in MPLS or other traditional WAN technologies. Another advantage of IPSec VPNs is the inclusion of QoS guarantees that are necessary to support a successful business strategy. Look for the next Sprint white paper to gain an even greater understanding of the VPN solutions available to meet all of your business communication needs.


