Network Traffic Routing using Neural Network

ABSTRACT

This application is designed to check the efficiency of packets sent in different topologies. We support four types of topology:

  1. Star topology
  2. Ring topology
  3. Bus topology
  4. Mesh topology

  We will be sending packets from one client system to another client system via routing. We will be using different types of algorithms to test the efficiency of the packets sent:
  • Flooding

  • Hot-Potato

  • Source Routing

  • Distance Vector (Bellman-Ford)

  • RIP (Routing Information Protocol)

  • Link state

    From this we can determine which topology should be used in our network to make it more efficient so that packets move faster.

    INTRODUCTION

    This application is intended to check the efficiency of packets sent in different topologies. We will be sending packets from one client system to another client system via routing. We will be using different types of algorithms to test the efficiency of the packets sent.

    From this we can determine which topology should be used in our network to make it more efficient so that packets move faster. We are checking which topology is best for reducing the traffic and increasing the efficiency of the packets travelling between the nodes.

    PROJECT PURPOSE

    The purpose of the project is to check which topology is best for reducing the traffic and increasing the efficiency of the packets travelling between the nodes. We also send the packets along the shortest path.

    PROJECT SCOPE

    From this we can determine which topology should be used in our network to make it efficient so that packets move faster, and so that the packets sent reach their destination safely and quickly within the network, without damage.

    PROJECT OUTLINE

    Packets sent should reach their destination safely and quickly within the network.

    PROBLEM DEFINITION

    Packets are transferred from the server to the client or from the client to the server. A packet may be broken, lost, or damaged within the network.

    EXISTING SYSTEM AND PROBLEMS

    In the existing system, packets are transferred from the server to the client or from the client to the server. A packet may be broken, lost, or damaged within the network.

    PROPOSED SYSTEM

    In the existing system, packets are transferred from the server to the client or from the client to the server. A packet may be broken, lost, or damaged within the network. In the proposed system we check which topology is best for reducing the traffic and increasing the efficiency of the packets travelling between the nodes. We also send the packets along the shortest path.

    MODULES

    1. Client
    2. Server
    3. Router

    Neural Network

    This site is intended to be a guide to neural network technologies; technologies that I believe are an essential basis for what awaits us in the future. The site is divided into three sections. The first contains technical information about known neural network architectures; this section is purely theoretical. The second section is a set of topics related to neural networks, such as artificial intelligence, genetic algorithms, and DSPs, among others.

    The third section is the site blog, where I present personal projects related to neural networks and artificial intelligence, and where certain theoretical problems can be understood with the aid of source code programs. The site is regularly updated with new content and new topics; these topics are related to artificial intelligence technologies.

    Artificial neural network

    An artificial neural network is a system based on the operation of biological neural networks; in other words, it is an emulation of a biological neural system. Why would the implementation of artificial neural networks be necessary? Although computing these days is truly advanced, there are certain tasks that a program written for an ordinary microprocessor is unable to perform; nevertheless, a software implementation of a neural network can be made, with its own advantages and disadvantages.

    Advantages:

    • A neural network can perform tasks that a linear program cannot.

    • When an element of the neural network fails, the network can continue without any problem because of its parallel nature.

    • A neural system learns and does not need to be reprogrammed.

    • It can be implemented in any application.

    • It can be implemented without any problem.

    Disadvantages:

    • The neural network needs training to operate.

    • The architecture of a neural network is different from the architecture of microprocessors and therefore needs to be emulated.

    • It requires high processing time for large neural networks.

    Another aspect of artificial neural networks is that there are different architectures, which consequently require different types of algorithms, but despite being an apparently complex system, a neural network is relatively simple.

    Artificial neural networks (ANN) are among the most recent signal-processing technologies in the engineer's toolbox. The field is highly interdisciplinary, but my approach will restrict the view to the engineering perspective. In engineering, neural networks serve two significant functions: as pattern classifiers and as nonlinear adaptive filters. I will provide a brief overview of the theory, learning rules, and applications of the most important neural network models.

    Definitions and Style of Computation. An Artificial Neural Network is an adaptive, most often nonlinear system that learns to perform a function (an input/output map) from data. Adaptive means that the system parameters are changed during operation, normally called the training phase. After the training phase the Artificial Neural Network parameters are fixed and the system is deployed to solve the problem at hand (the testing phase). The Artificial Neural Network is built with a systematic step-by-step procedure to optimize a performance criterion or to follow some implicit internal constraint, which is commonly referred to as the learning rule. The input/output training data are fundamental in neural network technology, because they convey the necessary information to "discover" the optimal operating point. The nonlinear nature of the neural network processing elements (PEs) provides the system with lots of flexibility to achieve practically any desired input/output map, i.e., some Artificial Neural Networks are universal mappers. There is a style in neural computation that is worth describing.

    An input is presented to the neural network and a corresponding desired or target response is set at the output (when this is the case the training is called supervised). An error is composed from the difference between the desired response and the system output. This error information is fed back to the system, which adjusts the system parameters in a systematic fashion (the learning rule). The process is repeated until the performance is acceptable. It is clear from this description that the performance hinges heavily on the data. If one does not have data that cover a significant portion of the operating conditions, or if the data are noisy, then neural network technology is probably not the right solution. On the other hand, if there is plenty of data but the problem is too poorly understood to derive an approximate model, then neural network technology is a good choice. This operating procedure should be contrasted with the traditional engineering design, made of exhaustive subsystem specifications and intercommunication protocols. In artificial neural networks, the designer chooses the network topology, the performance function, the learning rule, and the criterion to stop the training phase, but the system automatically adjusts the parameters. So, it is difficult to bring a priori information into the design, and when the system does not work properly it is also hard to incrementally refine the solution. But ANN-based solutions are extremely efficient in terms of development time and resources, and in many difficult problems artificial neural networks provide performance that is difficult to match with other technologies. Denker said 10 years ago that "artificial neural networks are the second best way to implement a solution", motivated by the simplicity of their design and because of their universality, only shadowed by the traditional design obtained by studying the physics of the problem. At present, artificial neural networks are emerging as the technology of choice for many applications, such as pattern recognition, prediction, system identification, and control.
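    As a concrete illustration of the supervised training style described above, the following minimal sketch (in Python, with a made-up toy dataset and learning rate chosen only for illustration, not the project's actual training code) adjusts the parameters of a single linear processing element from the error between the desired response and the system output:

```python
import random

# Toy supervised training loop: adjust parameters from the error
# between the desired response and the system output (the learning rule).

def train(samples, epochs=500, lr=0.05):
    w, b = random.uniform(-1, 1), 0.0          # system parameters
    for _ in range(epochs):                    # training phase
        for x, desired in samples:
            output = w * x + b                 # system output
            error = desired - output           # error = desired response - output
            w += lr * error * x                # feed the error back to adjust parameters
            b += lr * error
    return w, b

# Hypothetical input/output training data: y = 2x + 1 with a little noise.
data = [(x, 2 * x + 1 + random.gauss(0, 0.05)) for x in [0.0, 0.25, 0.5, 0.75, 1.0]]
w, b = train(data)
print(f"learned parameters: w={w:.2f}, b={b:.2f}")  # in the testing phase these stay fixed
```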

    Introduction

    The power and usefulness of artificial neural networks have been demonstrated in several applications including speech synthesis, diagnostic problems, medicine, business and finance, robotic control, signal processing, computer vision and many other problems that fall under the category of pattern recognition. For some application areas, neural models show promise in achieving human-like performance over more traditional artificial intelligence techniques.

    What, then, are neural networks? And what can they be used for? Although von-Neumann-architecture computers are much faster than humans in numerical computation, humans are still far better at carrying out low-level tasks such as speech and image recognition. This is due in part to the massive parallelism employed by the brain, which makes it easier to solve problems with simultaneous constraints. It is with this type of problem that traditional artificial intelligence techniques have had limited success. The field of neural networks, however, looks at a variety of models with a structure roughly analogous to that of the set of neurons in the human brain.

    The branch of artificial intelligence called neural networks dates back to the 1940s, when McCulloch and Pitts [1943] developed the first neural model. This was followed in 1962 by the perceptron model, devised by Rosenblatt, which generated much interest because of its ability to solve some simple pattern classification problems. This interest started to fade in 1969 when Minsky and Papert [1969] provided mathematical proofs of the limitations of the perceptron and pointed out its weakness in computation. In particular, it is incapable of solving the classic exclusive-or (XOR) problem, which will be discussed later. Such drawbacks led to the temporary decline of the field of neural networks.
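    To make the XOR limitation concrete, the short sketch below (a hypothetical Python illustration, not part of the original work) trains a single-layer perceptron with the classic perceptron learning rule on AND and on XOR; it converges on AND but never finds a separating line for XOR, since XOR is not linearly separable.

```python
def train_perceptron(samples, epochs=100, lr=0.1):
    """Single-layer perceptron with a step activation and the perceptron learning rule."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        mistakes = 0
        for x1, x2, target in samples:
            out = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = target - out
            if err:
                mistakes += 1
                w1 += lr * err * x1          # adjust weights on every misclassification
                w2 += lr * err * x2
                b  += lr * err
        if mistakes == 0:                    # converged: all samples classified correctly
            return True
    return False                             # never converged within the epoch budget

AND = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
XOR = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print("AND converges:", train_perceptron(AND))  # True
print("XOR converges:", train_perceptron(XOR))  # False -- not linearly separable
```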

    The last decade, however, has seen renewed interest in neural networks, both among researchers and in areas of application. The development of more-powerful networks, better training algorithms, and improved hardware has all contributed to the revival of the field. Neural-network paradigms in recent years include the Boltzmann machine, Hopfield's network, Kohonen's network, Rumelhart's competitive learning model, Fukushima's model, and Carpenter and Grossberg's Adaptive Resonance Theory model [Wasserman 1989; Freeman and Skapura 1991]. The field has generated interest from researchers in such diverse areas as engineering, computer science, psychology, neuroscience, physics, and mathematics. We describe several of the more important neural models, followed by a discussion of some of the available hardware and software used to implement these models, and a sampling of applications.

    Applications

    Neural networks have been applied to a wide variety of different areas including speech synthesis, pattern recognition, diagnostic problems, medical illnesses, robotic control and computer vision.

    Neural networks have been shown to be particularly useful in solving problems where traditional artificial intelligence techniques involving symbolic methods have failed or proved inefficient. Such networks have shown promise in problems involving low-level tasks that are computationally intensive, including vision, speech recognition, and many other problems that fall under the category of pattern recognition. Neural networks, with their massive parallelism, can provide the computing power needed for these problems. A major shortcoming of neural networks lies in the long training times that they require, particularly when many layers are used. Hardware advances should diminish these limits, and neural-network-based systems will become greater complements to conventional computing systems.

    Since the 1970s, work has been done on monitoring the Space Shuttle Main Engine (SSME), involving the development of an Integrated Diagnostic System (IDS). The IDS is a hierarchical multilevel system, which integrates various fault discovery algorithms to provide a monitoring system that works for all stages of operation of the SSME. Three fault-detection algorithms have been used, depending on the SSME sensor data. These employ statistical methods that have a high computational complexity and a low degree of reliability, particularly in the presence of noise. Systems based on neural networks offer promise for a fast and reliable real-time system to help overcome these difficulties, as is seen in the work of Dietz et al. [1989]. This work involves the development of a fault diagnostic system for the SSME that is based on three-layer back propagation networks. Neural networks in this application allow for better performance and for the diagnosis to be accomplished in real time. Furthermore, because of the parallel structure of neural networks, better performance is realized by parallel algorithms running on parallel architectures.

    Routing

    A simple definition of routing is "knowing how to get from here to there." In some cases, the term routing is used in a very strict sense to refer only to the process of obtaining and distributing information, but not to the process of using that information to actually get from one place to another. Since it is difficult to grasp the usefulness of information that is acquired but never used, we employ the term routing to refer in general to all the things that are done to discover and advertise paths from here to there and to actually move packets from here to there when necessary. The distinction between routing and forwarding is preserved in the formal discussion of the functions performed by OSI end systems and intermediate systems, in which context the distinction is meaningful.

    Routing is the act of moving information across an internetwork from a source to a destination. Along the way, at least one intermediate node is typically encountered. Routing is the process of finding a path from a source to every destination in the network. It allows users in remote parts of the world to reach information and services provided by computers anywhere in the world. Routing is accomplished by means of routing protocols that establish mutually consistent routing tables in every router in the network.

    When a packet is received by the router, or is forwarded by the host, both must make decisions as to how to send the packet. To do this, the router and the host consult a database for this information, known as the routing table. This database is stored in RAM so that the lookup process is optimized. As the packet is forwarded through various routers towards its destination, each router decides how to proceed by consulting its routing table.

    ROUTING TABLE

    A routing table consists of at least two columns: the first is the address of a destination endpoint or destination network, and the second is the address of the next element, that is, the next hop on the "best" path to that destination. When a packet arrives at a router, the router or the switch controller consults the routing table to decide the next hop for the packet. Not only local information but also global information is consulted for routing. But global information is hard to collect, subject to frequent changes, and voluminous.

    The information in the routing table can be generated in one of two ways. The first method is to manually configure the routing table with routes for each destination network. This is known as static routing. The second method for generating routing table information is to make use of a dynamic routing protocol. A dynamic routing protocol consists of routing tables that are built and maintained automatically through ongoing communication between routers. Periodically, or on demand, messages are exchanged between routers for the purpose of updating the information kept in their routing tables.
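    The minimal sketch below (hypothetical Python, using made-up network addresses) shows a statically configured routing table as the two-column mapping described above, from destination network to next hop, together with the lookup a router performs for each packet; a dynamic protocol would instead update this table from messages exchanged with neighbouring routers.

```python
# Static routing table: destination network -> next hop (the two columns described above).
routing_table = {
    "10.1.0.0/16": "192.168.0.2",   # hypothetical addresses, for illustration only
    "10.2.0.0/16": "192.168.0.3",
    "0.0.0.0/0":   "192.168.0.1",   # default route
}

def next_hop(destination_network: str) -> str:
    """Consult the routing table to decide the next hop for a packet."""
    return routing_table.get(destination_network, routing_table["0.0.0.0/0"])

print(next_hop("10.2.0.0/16"))    # 192.168.0.3
print(next_hop("172.16.0.0/16"))  # falls back to the default route
```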


    Router

    The network forwards IP packets from a source to a destination using the destination address field in the packet header. A router is defined as a host that has an interface on more than one network.

    Every router along the path has a routing table with at least two fields:

    A network number, and the interface on which to send packets with that network number.

    The router reads the destination address from an incoming packet's header and uses the routing table to forward it to the suitable interface. By introducing routers with interfaces on more than one cluster, we can connect clusters into larger ones. By induction we can build arbitrarily large networks in this fashion, as long as there are routers with interfaces on each subcomponent of the network.
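    As a sketch of this forwarding step (hypothetical Python with invented network numbers and interface names, not the project's code), a router maps each network number to an outgoing interface and forwards incoming packets accordingly.

```python
# Per-router forwarding table: network number -> outgoing interface.
forwarding_table = {
    "net-A": "eth0",   # invented network numbers and interface names
    "net-B": "eth1",
    "net-C": "eth1",   # several networks may be reached through the same interface
}

def forward(packet: dict) -> str:
    """Read the destination network from the packet header and pick an interface."""
    network = packet["header"]["dest_network"]
    interface = forwarding_table.get(network)
    if interface is None:
        raise ValueError(f"no route to {network}")
    return interface

packet = {"header": {"dest_network": "net-B"}, "payload": b"hello"}
print(forward(packet))  # eth1
```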

    PACKET

    The network carries all information using packets.

    A packet has two parts:

    The information content, called the payload, and the information about the payload, called the meta-data.

    The meta-data consists of fields such as the source and destination addresses, data length, sequence number and data type. The introduction of meta-data is a fundamental improvement in networking technology. The network cannot determine where samples originate, or where they are going, without supplementary context information. Meta-data makes information self-descriptive, allowing the network to understand the data without additional background information. In particular, if the meta-data contains a source and a destination address, then no matter where in the network the packet is, the network knows where it came from and where it wants to go. The network can store a packet, for hours if necessary, then "freeze" it and still know what has to be done to deliver the data. Packets are efficient for data transfer, but are not so attractive for real-time services such as voice.
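    A minimal sketch of this packet structure, in Python with illustrative field names chosen here (not taken from the project), separates the payload from the self-descriptive meta-data:

```python
from dataclasses import dataclass

@dataclass
class MetaData:
    source: str        # where the packet came from
    destination: str   # where it wants to go
    length: int        # data length in bytes
    sequence: int      # sequence number
    data_type: str     # e.g. "text", "voice"

@dataclass
class Packet:
    meta: MetaData     # information about the payload
    payload: bytes     # the information content itself

pkt = Packet(MetaData("node-1", "node-7", 5, 0, "text"), b"hello")
print(pkt.meta.destination)  # the network can read this anywhere along the path
```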

    LINK

    A link is the connection between two routers. If there are two routers, messages are sent from one to the other using the link. So a link acts as a bridge between two routers. If a link goes down, then information will not be conveyed between the routers, and we have to search for alternative links to reach from source to destination. Hence the link plays a major role in the transmission of data, as it acts as a carrier of the messages sent by the routers.

    ROUTING ALGORITHM

    Routing is accomplished by means of routing protocols that establish mutually consistent routing tables in every router in the network. A routing protocol written in the form of code is a routing algorithm. A routing algorithm asynchronously updates the routing tables at every router or switch controller. The global information to be maintained by routing tables is voluminous. The routing algorithm summarizes this information to extract only the portions pertinent to each node. The heart of the routing algorithm does all the work.
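    One of the algorithms listed in the abstract, Distance Vector (Bellman-Ford), updates tables in exactly this style: each node keeps only its own distance estimates and refreshes them from the vectors advertised by its neighbours. The sketch below is a hypothetical Python illustration of such rounds of updates on an invented four-node topology, not the project's implementation.

```python
import math

# Invented topology: link costs between directly connected nodes.
links = {("A", "B"): 1, ("B", "C"): 2, ("A", "C"): 5, ("C", "D"): 1}
nodes = ["A", "B", "C", "D"]

def cost(u, v):
    return links.get((u, v), links.get((v, u), math.inf))

# Each node starts out knowing only the distance to itself.
dist = {u: {v: (0 if u == v else math.inf) for v in nodes} for u in nodes}

def one_round():
    """Every node refreshes its distance vector from its neighbours' vectors."""
    changed = False
    for u in nodes:
        for v in nodes:
            best = min(cost(u, n) + dist[n][v] for n in nodes if cost(u, n) < math.inf)
            best = min(best, 0 if u == v else math.inf)
            if best < dist[u][v]:
                dist[u][v] = best
                changed = True
    return changed

while one_round():       # repeat until no table changes (convergence)
    pass
print(dist["A"]["D"])    # 4: the path A -> B -> C -> D
```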

    The various concepts for discussion are:

    • Design goals of Routing Algorithm
    • Factors that decide the best Path
    • Choices in Routing

    Routing algorithms often have one or more of the following design goals

    • Optimality
    • Optimality refers to the capability of the routing algorithm to select the best route, which depends on the metrics and metric weightings used to make the calculation. One routing algorithm, for example, may use the number of hops and the delay, but may weight delay much more heavily in the calculation. Naturally, routing protocols must define their metric calculation algorithms strictly.

    • Simplicity and low overhead
    • Routing algorithms are also designed to be as simple as possible, with a minimum of software and computational overhead. In other words, the routing algorithm must offer its functionality efficiently, with a minimum of software and utilization overhead. Efficiency is particularly important when the software implementing the routing algorithm must run on a computer with limited physical resources.

    • Robustness and stability
    • Routing algorithms must be robust, which means that they should perform correctly in the face of unusual or unforeseen circumstances, such as hardware failures, high load conditions, and incorrect implementations. Because routers are located at network junction points, they can cause substantial problems when they fail. The best routing algorithms are often those that have withstood the test of time and have proven stable under a variety of network conditions.

    • Rapid convergence
    • Routing algorithms must converge quickly. Convergence is the process of agreement, by all routers, on optimal routes. When a network event causes routes either to go down or become available, routers distribute routing update messages that permeate the network, stimulating recalculation of optimal routes and eventually causing all routers to agree on these routes. Routing algorithms that converge slowly can cause routing loops or network outages.

    • Flexibility
    • Routing algorithms should also be flexible, which means that they should quickly and accurately adapt to a variety of network circumstances. Routing algorithms can be programmed to adapt to changes in network bandwidth, router queue size, and network delay, among other variables.

    Factors that decide the best path

    Routing algorithms have used many different metrics to determine the best route. Sophisticated routing algorithms can base route selection on multiple metrics, combining them into a single (hybrid) metric. All of the following metrics have been used:

    Path Length

    Path length is the most common routing metric. Some routing protocols allow network administrators to assign arbitrary costs to each network link. In this case, path length is the sum of the costs associated with each link traversed. Other routing protocols define hop count, a metric that specifies the number of passes through internetworking products, such as routers, that a packet must take en route from a source to a destination.
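    As a minimal sketch of this metric (hypothetical Python, with invented link costs), the path length of a route is the sum of the administrator-assigned costs of its links, while hop count simply counts the links traversed:

```python
# Invented per-link costs, as assigned by a network administrator.
link_cost = {("A", "B"): 3, ("B", "C"): 1, ("C", "D"): 4}

def path_length(path):
    """Sum of the costs of each link traversed along the path."""
    return sum(link_cost[(u, v)] for u, v in zip(path, path[1:]))

def hop_count(path):
    """Number of links (hops) traversed along the path."""
    return len(path) - 1

route = ["A", "B", "C", "D"]
print(path_length(route))  # 8
print(hop_count(route))    # 3
```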

    Reliability

    Reliability, in the context of routing algorithms, refers to the dependability (usually described in terms of the bit-error rate) of each network link. Some network links might go down more often than others. After a network fails, certain network links might be repaired more easily or more quickly than other links. Any reliability factors can be taken into account in the assignment of the reliability ratings, which are arbitrary numeric values usually assigned to network links by network administrators.

    Delay

    Routing delay refers to the length of time required to move a packet from source to destination through the internetwork. Delay depends on many factors, including the bandwidth of intermediate network links, the port queues at each router along the way, network congestion on all intermediate network links, and the physical distance to be traveled. Because delay is a conglomeration of several important variables, it is a common and useful metric.

    Bandwidth

    Bandwidth refers to the available traffic capacity of a link. All other things being equal, a 10-Mbps Ethernet link would be preferable to a 64-kbps leased line. Although bandwidth is a rating of the maximum attainable throughput on a link, routes through links with greater bandwidth do not necessarily provide better routes than routes through slower links. If, for example, a faster link is busier, the actual time required to send a packet to the destination could be greater.

    Load

    Load refers to the degree to which a network resource, such as a router, is busy. Load can be calculated in a variety of ways, including CPU utilization and packets processed per second. Monitoring these parameters on a repeated basis can be resource-intensive itself.

    Communication Cost

    Communication cost is another important metric, especially because some companies may not care about performance as much as they care about operating expenditures. Even though line delay may be longer, they will send packets over their own lines rather than through the public lines that cost money for usage time.

    Choices in Routing

    Routing algorithms can be classified by type. Key differentiators include:

    Static versus dynamic (Non-adaptive versus Adaptive)

    Non-adaptive algorithms do not base their routing decisions on measurements or estimates of the current traffic and topology. The choice of route is computed in advance, offline, and downloaded to the routers when the network is booted. Adaptive algorithms, in contrast, change their decisions to reflect changes in the topology and the traffic.

    Single-path versus Multi-path

    Some sophisticated routing protocols support multiple paths to the same destination. Unlike single-path algorithms, these multipath algorithms permit traffic multiplexing over multiple lines. The advantages of multipath algorithms are obvious: they can provide substantially better throughput and reliability.

    Flat versus Hierarchical

    Some routing algorithms operate in a flat space, while others use routing hierarchies. In a flat routing system, each router is a peer of all the others. In a hierarchical routing system, some routers form what amounts to a routing backbone. Packets from non-backbone routers travel to the backbone routers, where they are sent through the backbone until they reach the general area of the destination. At this point, they travel from the last backbone router through one or more non-backbone routers to the final destination.

    Host-intelligent versus Router-intelligent(Source Routing versus Hop by hop)

    Some routing algorithms assume that the source end-node will determine the entire route. This is usually referred to as source routing. In source-routing systems, routers merely act as store-and-forward devices, simply sending the packet to the next stop. Other algorithms assume that hosts know nothing about routes. In these algorithms, routers determine the path through the internetwork based on their own calculations. In the first system, the hosts have the routing intelligence. In the latter system, the routers have the routing intelligence.
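    The difference can be sketched as follows (hypothetical Python with invented node names): in source routing the packet carries the full path chosen by the sender and each router simply takes the next listed stop, whereas in hop-by-hop routing each router consults its own table.

```python
# Source routing: the host computes the entire route and places it in the packet.
packet = {"route": ["R1", "R2", "R3", "host-B"], "payload": b"data"}

def source_route_step(packet):
    """A store-and-forward device just takes the next stop listed in the packet."""
    return packet["route"].pop(0)

# Hop-by-hop routing: each router decides the next hop from its own table.
tables = {"R1": {"host-B": "R2"}, "R2": {"host-B": "R3"}, "R3": {"host-B": "host-B"}}

def hop_by_hop_step(router, destination):
    return tables[router][destination]

print(source_route_step(packet))        # R1 (then R2, R3, ... on later steps)
print(hop_by_hop_step("R1", "host-B"))  # R2, decided by R1's own table
```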

    Intra domain versus Inter domain

    Some routing algorithms work only within domains; others work within and between domains. The nature of these two algorithm types is different. It stands to reason, therefore, that an optimal intradomain-routing algorithm would not necessarily be an optimal interdomain-routing algorithm.

    Centralized versus Decentralized

    In centralized routing, a central processor collects information about the status of each link and processes this information to compute a routing table for every node. It then distributes these tables to all the routers. In decentralized routing, routers must cooperate using a distributed routing protocol to create mutually consistent routing tables.
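    A minimal sketch of the centralized approach (hypothetical Python, with invented link costs) has the central processor run a shortest-path computation, here Dijkstra's algorithm in the link-state style listed in the abstract, over the collected link status and derive a next-hop table for each node:

```python
import heapq

# Link status collected by the central processor (invented symmetric costs).
graph = {
    "A": {"B": 1, "C": 5},
    "B": {"A": 1, "C": 2},
    "C": {"A": 5, "B": 2, "D": 1},
    "D": {"C": 1},
}

def shortest_paths(source):
    """Dijkstra's algorithm: distances and previous-node map from `source`."""
    dist, prev, heap = {source: 0}, {}, [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, cost in graph[u].items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, prev

def routing_table(source):
    """Next-hop table the central processor would distribute to `source`."""
    _, prev = shortest_paths(source)
    table = {}
    for dest in graph:
        if dest == source or dest not in prev:
            continue
        hop = dest
        while prev[hop] != source:   # walk back towards the source
            hop = prev[hop]
        table[dest] = hop
    return table

print(routing_table("A"))  # {'B': 'B', 'C': 'B', 'D': 'B'}
```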
