Optical communication system

FEC FOR 40 GBIT/S OPTICAL COMMUNICATION SYSTEMS

INTRODUCTION

The capacity of optical transmission systems has increased dramatically over the past ten years. As the data rate of multimode fiber (MMF) optical systems reaches tens of gigabits per second, channel impairments become more and more severe, and the degradation of the optical signal limits the data transmission distance. Advanced digital signal processing (DSP) techniques, such as equalization (EQ) and forward error correction (FEC) coding, are therefore employed to enhance transmission capacity in optical systems. These DSP techniques help overcome channel impairments, improving transmission quality and increasing transmission distance. The Reed-Solomon (RS) code is one of the most widely used FEC codes: system simulation of the optical system shows that the RS(255, 239) code can provide approximately 5.5 dB of coding gain, substantially decreasing the bit error rate (BER) when correcting random errors.

FORWARD ERROR CORRECTION CODES

FEC coding is a digital signal processing technique that improves data reliability by introducing a known structure into a data sequence prior to transmission. This structure enables the receiver to detect and correct errors caused by corruption in the channel. As the name implies, the coding technique allows the decoder to fix errors without requesting a retransmission of the original data.

The Aerospace Corporation has devoted substantial effort to researching and developing forward error correction techniques, with particular emphasis on a procedure known as turbo coding. This work plays an important role in supporting several government programs, including the Advanced Extremely High Frequency program, the Wideband Gapfiller System, and the Geostationary Operational Environmental Satellite system.

Evolution of FEC:

In communication systems that employ FEC coding, a digital data source sends a data sequence to an encoder. The encoder inserts redundant bits, outputting a longer sequence of code bits called a code-word. Code-words are then transmitted to a receiver, which uses a suitable decoder to extract the original data sequence.

Codes that introduce a large amount of redundancy convey relatively little data per individual code bit. This is beneficial because it decreases the probability that all of the original data will be wiped out during a single transmission. On the other hand, the added parity bits increase bandwidth requirements and message delay.

Block coding, also called algebraic coding, was the only type of FEC coding in use when Claude Shannon published his Mathematical Theory of Communication in 1948. With this technique, the encoder mixes parity bits into the data sequence using a particular algebraic algorithm. At the receiver, the decoder applies the inverse of the algebraic algorithm to identify and correct any errors caused by channel corruption.

Another FEC technique, known as convolutional coding, was first introduced in 1955. Convolutional codes process the incoming bits in streams rather than in blocks. The paramount feature of such codes is that the encoding of any bit is strongly influenced by the bits that preceded it. A convolutional decoder takes this memory into account when trying to estimate the most likely sequence of data that produced the received sequence of code bits. Historically, the first type of convolutional decoding, known as sequential decoding, used a systematic procedure to search for a good estimate of the message sequence; however, such procedures require a great deal of memory, and typically suffer from buffer overflow and ungraceful degradation.

In 1967, Andrew Viterbi developed a decoding technique that has since become the standard for decoding convolutional codes. At each bit interval, the Viterbi decoding algorithm compares the actual received code bits with the code bits that could have been generated for each possible memory-state transition. It chooses, based on metrics of similarity, the most likely sequence within a specific time frame. The Viterbi algorithm requires less memory than sequential decoding because unlikely sequences are dismissed early, leaving a relatively small number of candidate sequences that need to be stored.

Some types of algebraic coding are most effective in combating "bursty" errors, while convolutional coding is usually more robust against random errors or white noise; however, any decoding errors that do occur in a convolutional decoder are likely to occur in bursts. In 1974, Joseph Odenwalder combined these two coding techniques to form a concatenated code. In this arrangement, the encoder linked an algebraic code followed by a convolutional code. The decoder, a mirror image of the encoding operation, consisted of a convolutional decoder followed by an algebraic decoder. Thus, any bursty errors produced by the convolutional decoder could be effectively corrected by the algebraic decoder. Performance was further enhanced by using an interleaver between the two encoding stages to break up any bursts that might be too long for the algebraic decoder to handle. This structure demonstrated significant improvement over previous coding systems and is currently used in the Deep Space Network and the Air Force Satellite Control Network, as well as in commercial broadcasting services.

Reed-Solomon Codes:

Reed-Solomon codes can be used both as error-correcting and as erasure codes. In the error-correcting setting, we wish to transmit a sequence of numbers over a noisy communication channel, and the channel noise may corrupt the data before it arrives. In the erasure setting, the channel might fail to deliver parts of our message. In both cases, we handle the problem of noise by sending additional data beyond the original message: what is sent is an encoding of the original message. If the noise is small enough, the additional data allows the original message to be recovered through a decoding process.

CHAPTER 2

LITERATURE REVIEW

Optical Communications

The use of light to send messages is not new. Fires were used for signaling in biblical times, smoke signals have been used for thousands of years, and flashing lights have been used to communicate between warships at sea since the days of Lord Nelson.

Development of fibers and devices for optical communications began in the early 1960s and continues strongly today. But the real change came in the 1980s: during this decade, optical communication in public communication networks developed from a curiosity into the dominant technology. Among the tens of thousands of developments and inventions that have contributed to this progress, four stand out as milestones:

  • The invention of the LASER (late 1950s)
  • The development of low-loss optical fiber (1970s)
  • The invention of the optical fiber amplifier (1980s)
  • The invention of the in-fiber Bragg grating (1990s)

The continuing development of semiconductor technology is quite fundamental, though of course not specifically optical. The predominant use of optical technology is as a very fast "electric wire": optical fibers replace electric wires in communications systems, and little else changes. Perhaps this is not quite fair. The very speed and quality of optical communications systems has itself driven the development of new types of electronic communications designed to run on optical connections; ATM and SDH technologies are good examples of these new systems.

It is important to realize that optical communication is not like electronic communication. While it seems that light travels in a fiber much as electricity does in a wire, this is very misleading. Light is an electromagnetic wave, and optical fiber is a waveguide. Everything to do with transport of the signal, even simple things like coupling two fibers into one, is very different from what happens in the electronic world. The two fields, electronics and optics, while closely related, employ different principles in different ways.

Some people look ahead to "true" optical networks: networks in which routing is done optically from one end-user to another without the signal ever becoming electronic. Indeed, some experimental local area (LAN) and metropolitan area (MAN) networks like this have been built. In 1998, optically routed nodal wide area networks are imminently feasible and the necessary components to build them are available. However, no such networks have been deployed operationally yet.

In 1998 the "happening" area in optical communications is Wavelength Division Multiplexing (WDM): the ability to send many, perhaps up to 1000, independent optical channels on a single fiber. The first fully commercial WDM products appeared on the market in 1996. WDM is a major step toward fully optical networking.

Optical Transmission System Concepts

The basic components of an optical communication system are:

  • A serial bit stream in electrical form is presented to a modulator, which encodes the data appropriately for fiber transmission.
  • A light source (laser or Light Emitting Diode - LED) is driven by the modulator, and the light is focused into the fiber.
  • The light travels down the fiber.
  • At the receiver end the light is fed to a detector and converted to electrical form.
  • The signal is then amplified and fed to another detector, which isolates the individual state changes and their timing. It then decodes the sequence of state changes and reconstructs the original bit stream.
  • The timed bit stream so received may then be fed to the device that will use it.

Optical communication has many well-known advantages:

Weight and Size

Fiber cable is significantly smaller and lighter than electrical cable doing the same job. In the wide area environment, a large coaxial cable system can easily involve a cable several inches in diameter that weighs many pounds per foot. A fiber cable doing the same job could be less than half an inch in diameter and weigh a few ounces per foot. This means that the cost of laying the cable is dramatically reduced.

Material Cost

Fiber cable costs significantly less than copper cable for the same transmission capacity.

Data Capacity

The data rate of systems in use in 1998 is usually 150 or 620 Mbps on a single (unidirectional) fiber, because these systems were installed in earlier years. The usual rate for new systems is 2.4 Gbps or even 10 Gbps. This is very high in digital transmission terms. In telephone transmission terms, the very best coaxial cable systems give about 2,000 analog voice circuits, while a 150 Mbps fiber connection gives just over 2,000 digital telephone (64 Kbps) connections. But 150 Mbps represents a very early stage in the development of fiber optical systems. The coaxial cable system with which it is being compared is much more costly and has been developed to its fullest extent; fiber technology is still in its infancy. Using just a single channel per fiber, researchers have trial systems in operation that communicate at speeds of 100 Gbps. By sending many ("wavelength division multiplexed") channels on a single fiber, this capacity can be increased a hundred and perhaps a thousand times. Recently, researchers at NEC reported a successful experiment in which 132 optical channels of 20 Gbps each were carried over 120 km. This is 2.64 terabits per second, enough capacity to carry about 30 million uncompressed telephone calls (at 64 Kbps per channel). Thirty million calls is about the maximum number in progress in the world at any particular moment; that is to say, we could carry the world's peak telephone traffic over one pair of fibers. Most practical fiber systems don't attempt to do this, because it costs less to put multiple fibers in a cable than to use sophisticated multiplexing technology.

No Electromagnetic Interference:

Because the connection is not electrical, you can neither pick up nor create electrical interference. This is one reason optical communication has so few errors: there are very few sources of interference that can distort the signal. In a building, this means fiber cables can be placed almost anywhere electrical cables would have problems. In an industrial plant such as a steel mill, this gives much greater flexibility in cabling than previously available. In the wide area networking environment, there is much greater flexibility in route selection; cables may be located near water or power lines without risk to people or equipment.

Use of Forward Error Correction Codes:

As bandwidth demands increase and the tolerance for errors and latency decreases, designers of data-communication systems are looking for new ways to expand available bandwidth and improve transmission quality. One solution is not actually new, but it could prove quite useful. Called forward error correction (FEC), this technique has been used for years to enable efficient, high-quality data communication over noisy channels, such as those found in satellite and digital cellular communications applications.

Recently, there have been significant advances in FEC technology that allow today's systems to approach the Shannon limit: theoretically, the maximum information rate of a given channel. These advances are being used successfully to decrease cost and increase performance in a variety of communications systems, including satellites, wireless LANs, and fiber communications. In addition, high-speed silicon ASICs for FEC applications have been developed, promising to further revolutionize communication system design.

As the capabilities of FEC increase, the number of errors that can be corrected also increases. The advantage is obvious: noisy channels create a relatively large number of errors, and the ability to correct these errors means the noisy channel can be used reliably. This enhancement can be parlayed into several system improvements, including bandwidth efficiency, extended range, higher data rate, and greater power efficiency, as well as increased data reliability.

FEC requires that the data first be encoded. The original user data to be transmitted over the channel are called data bits, while the data after the addition of error-correction bits are called coded bits.

For k data bits, the encoding process produces n coded bits, where n > k. All n bits are transmitted. At the receiver, channel measurements are made and estimates of the n transmitted bits are generated. An FEC decoder uses these n bit estimates, along with knowledge of how all n bits were created, to generate estimates of the k data bits. The decoding process effectively detects and corrects errors in the n channel-bit estimates while recovering the original k data bits.
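To make the k-to-n mapping concrete, here is a minimal MATLAB sketch (not from the original text) that encodes k = 4 data bits into n = 7 coded bits with a simple Hamming block code, corrupts one channel bit, and recovers the data. It assumes the Communications Toolbox functions encode and decode are available.

k = 4; n = 7;
data = [1 0 1 1]';                            % k data bits
coded = encode(data, n, k, 'hamming/binary'); % n coded bits, n > k
coded(3) = ~coded(3);                         % corrupt one bit in the channel
est = decode(coded, n, k, 'hamming/binary');  % decoder corrects the error
isequal(est, data)                            % returns logical 1 (true)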

Because the decoder uses only the received data and never requests a retransmission, the flow of data always moves forward. The process is therefore known as forward error correction.

The FEC decoding process doesn't need to generate n-bit estimates as an intermediate step. In a well-designed decoder, quantized channel-measurement data are taken directly as the decoder input. This raw channel-measurement data consists of n metrics, where each metric corresponds to the probability that a particular bit is a logical 1 (the probability that a given bit is a logical 0 is directly related to this number). These metrics are usually represented by 3- or 4-bit integers called soft-decision metrics. The decoder output is an estimate of the k data bits.

A code's performance depends strongly on the data transmission channel. To facilitate the comparison of one code with another, a model is used in which noise is added to antipodal signals. In this model, the noise is additive white Gaussian noise (AWGN): uncorrelated noise samples are added to antipodal channel symbols. The variance of the noise is related to the power spectral density of the noise (No). With antipodal signaling, 1s and 0s are transmitted as +Z and -Z. For example, Z could represent 1 V on a transmission wire, so 1s and 0s would be transmitted as +1 V and -1 V, respectively. The received energy per transmitted data bit (Eb) is proportional to Z^2. An important parameter in the system is the signal-to-noise ratio, Eb/No. The AWGN model accurately represents many types of real channels, and channels exhibiting other types of impairments often have AWGN-like impairment as well.
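The following hedged MATLAB sketch illustrates the AWGN model just described: antipodal (+/-1 V) symbols with added white Gaussian noise and hard decisions at the receiver. The awgn function is from the Communications Toolbox; for BPSK at one sample per symbol, the per-sample SNR argument corresponds to Eb/No.

bits = randi([0 1], 1, 1e5);   % random data bits
symbols = 1 - 2*bits;          % antipodal mapping: 0 -> +1, 1 -> -1
rx = awgn(symbols, 4);         % add white Gaussian noise at Eb/No = 4 dB
est = rx < 0;                  % hard decision: negative sample -> bit 1
ber = mean(est ~= bits)        % empirical bit error rate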

FEC codes come in two primary types, convolutional and block. In a simple convolutional encoder, a sequence of data bits passes through a shift register, and two output bits are generated and transmitted per data bit. Essentially, the decoder estimates the state of the encoder for each set of two channel symbols it receives. If the decoder accurately knows the encoder's state sequence, then it knows the original data sequence too.
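As an illustrative sketch (assuming the Communications Toolbox), the following MATLAB fragment builds exactly this kind of rate-1/2 convolutional code, producing two output bits per data bit, and decodes it with the Viterbi algorithm discussed earlier. The constraint length and generator polynomials are common textbook choices, not values taken from the original text.

trellis = poly2trellis(7, [171 133]); % shift-register encoder description
data = randi([0 1], 100, 1);          % data bits
coded = convenc(data, trellis);       % 200 coded bits: two per data bit
est = vitdec(coded, trellis, 34, 'trunc', 'hard'); % Viterbi decoding
isequal(est, data)                    % true over a noiseless channel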

The Reed-Solomon (RS) code is one of the most widely used FEC codes.

Reed-Solomon Code

Despite revolutionary developments in capacity-approaching codes in recent years, Reed-Solomon (RS) codes remain very relevant today, especially for high-rate systems with relatively small data packets. RS codes are straightforward to decode and have excellent burst-correction capability. But in addition to their practical value in facilitating reliable communication over noisy channels, Reed-Solomon codes are also beautiful, elegant, and almost endlessly fascinating in their own right. We provide only a brief introduction here, and the reader is strongly encouraged to seek out further reading.

Before we describe the codes themselves we will need to touch on some background material. A full understanding of RS codes, however, requires a certain level of mathematical maturity that is not possible to provide in a short monograph.

Linear Block Codes

Reed-Solomon codes are linear block codes. A block code is a way of mapping some number k of symbols to another number n of symbols, with n > k. We call the block of k symbols a message word and the block of n symbols a code-word. The process of mapping k message symbols to n code symbols is called encoding, and the reverse process is called decoding. The mapping can be systematic, wherein the block of n code-word symbols consists of the k message-word symbols plus (n - k) added redundant symbols, or non-systematic, where the message-word symbols cannot be directly read off from the code-word without decoding. Any linear block code can be represented by an "equivalent" systematic code. A block code is called "linear" if the sum of two code-words is always a valid code-word and a scalar multiple of any code-word is also a valid code-word.

Code-word Symbols

So coding maps k symbols to n symbols, but what do we mean by symbols? A binary code uses two values, for example the binary numbers {0, 1}, as symbols. Q-ary symbols, as the name suggests, are taken from an alphabet A of q possible values. So, for example, 3-ary symbols would be chosen from a set of three elements, such as {0, 1, 2}. Practical RS codes use q = 256, which, it turns out, can be conveniently represented using the 8-bit symbols called bytes, familiar to computer users. So a Reed-Solomon code of length n = 255 consists of 255 8-bit symbols per code-word, or 2040 binary bits.
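As a small illustration (assuming the Communications Toolbox), byte-valued symbols can be declared as elements of GF(2^8) with the gf function; arithmetic on them then follows the field rules described in the next section rather than ordinary integer arithmetic.

m = 8;                      % bits per symbol
sym = gf([7 200 255], m);   % three byte symbols as elements of GF(256)
sym(1) + sym(2)             % field addition (bitwise XOR in GF(2^m))
sym(1) * sym(2)             % field multiplication, reduced modulo the
                            % field's irreducible polynomial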

Fields

The 8-bit symbols used in practical RS codes are manipulated in the encoding process not as ordinary binary numbers, but as elements of a mathematical structure called a Finite Field, or Galois Field, in honor of its inventor, the brilliant French mathematician Évariste Galois. Without getting too caught up in the details, a field is a set of elements that can be added, subtracted, multiplied, and divided, with the important stipulation that the result of any of those operations is always still an element of the field. Additionally, we require that additive and multiplicative identity elements exist, that every element have an additive inverse, that every non-zero element have a multiplicative inverse, and that the field elements obey the familiar commutative, associative, and distributive properties. The real numbers are a familiar example of a field: we can add, subtract, multiply, and divide any two real numbers (excluding division by zero) and the result is always another real number. The multiplicative identity element is '1', and the additive identity element is '0'. The real numbers form an infinite field; we can also construct fields with a finite number of elements, if we follow certain rules for constructing them.

Finite Fields or Galois Fields

Let's choose, for example, the field with 5 elements, and call the elements {0, 1, 2, 3, 4}. We denote this field GF(5), the Galois field with 5 elements. Note that though the elements of our finite field look like the ordinary integers, they are not; this is simply a convenient notation. The most important rule is that adding, subtracting, multiplying, and dividing any two elements of the field always results in another element of the field (i.e., the operations are "closed"). So what happens when we add, say, 3 and 4? Clearly the result can't be 7, because the only elements in our field are {0, 1, 2, 3, 4}. One possible and intuitive solution is to use modular arithmetic, that is, to do operations in the field with 5 elements using mod 5 arithmetic. One way to illustrate this is with the division algorithm, which represents a number (the dividend) as a multiple of another number (the divisor) plus some remainder. In our example, we divide 3 + 4 = 7 by 5 and take the remainder: 7 = 5 × 1 + 2. The remainder (or "residue") is always less than 5, and therefore is in our set {0, 1, 2, 3, 4}. Using mod 5 arithmetic, then, 3 + 4 = 2. We won't sweat the details here, but suffice it to say that this procedure always works, and our set of five elements is, in fact, a field if we conduct all our operations using mod 5 arithmetic. This construction procedure works for all finite fields with a prime number of elements. So, for example, we could construct a finite field with 23 elements {0, 1, 2, ..., 22} using modulo 23 arithmetic. We can also construct finite fields whose number of elements is a power of a prime (i.e., fields of order p^m, for any prime p) using polynomial arithmetic modulo an irreducible polynomial. This is, in fact, the type of finite field used in practical RS codes, namely fields with 2^8 = 256 elements. These fields can be constructed using an irreducible polynomial of degree 8 whose coefficients are taken from GF(2), i.e., are binary numbers. An irreducible polynomial is to polynomial arithmetic what a prime number is to integer arithmetic: it can't be factored into the product of two smaller polynomials, just as a prime number, by definition, can't be factored into the product of two smaller integers.
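A minimal sketch of GF(5) arithmetic, using only plain MATLAB and the mod-5 rule described above:

p = 5;
add5 = @(a, b) mod(a + b, p);   % field addition
mul5 = @(a, b) mod(a .* b, p);  % field multiplication
add5(3, 4)                      % returns 2, matching the example above
mul5(3, 4)                      % 12 mod 5 = 2
mul5(3, 2)                      % 6 mod 5 = 1, so 2 is the inverse of 3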

Polynomials

We can construct RS codes using linear algebra alone, but for practical reasons it is customary to add an additional structural constraint. We map our k message symbols to a polynomial of degree k - 1, called the message polynomial: m(x) = m_{k-1}x^{k-1} + m_{k-2}x^{k-2} + ... + m_1x + m_0. The n code-word symbols can likewise be mapped to a code-word polynomial of degree n - 1: c(x) = c_{n-1}x^{n-1} + c_{n-2}x^{n-2} + ... + c_1x + c_0.

Cyclic codes

In addition to being linear block codes, RS codes are cyclic codes. In simple terms, a code is cyclic if, for every code-word of the form {c_0, c_1, ..., c_{n-1}}, the circular shift {c_{n-1}, c_0, c_1, ..., c_{n-2}} is also a code-word. That is, all circular shifts of any code-word in the code are also code-words in the code. In polynomial terms, if c(x) is a code-word, then x·c(x) mod (x^n - 1) is also a code-word. In fact, it's easy to show that multiplication of a cyclic code-word by x^m results in a (right) circular shift of the code-word by m places. Since RS codes are also linear (the sum of two code-words is always a code-word, and a scalar multiple of a code-word is always a code-word), we can write expressions like a_{n-1}x^{n-1}c(x) + a_{n-2}x^{n-2}c(x) + ... + a_0c(x), with the result always guaranteed to be a code-word if the code is a linear cyclic code. This leads to the remarkable result that multiplying any code-word c(x) by any polynomial a(x), and reducing the result modulo x^n - 1, yields a valid code-word.

In fact, though we won't prove it here, for any cyclic code there exists exactly one monic polynomial of least degree, called the generator polynomial of the code, and the code consists of all polynomial multiples of the generator polynomial (i.e., the generator polynomial can be used to "generate" the code). This fact is crucial to the efficient construction of RS codes, because polynomial multiplication is relatively easy to implement in hardware. Note that all our polynomial coefficients are Galois field elements as defined above, and hence all our arithmetic operations must be done using Galois field arithmetic. Also note that straightforward multiplication of a message m(x) by a generator polynomial, i.e., c(x) = m(x)g(x), results in a non-systematic code-word. Most practical RS schemes use a systematic form, so it is customary to multiply the message by x^{n-k} (which shifts the message into the n - k highest coefficients of the code-word polynomial), divide the result by g(x), and add the remainder, giving the systematic code-word c(x) = x^{n-k}m(x) + R_{g(x)}[x^{n-k}m(x)], where R_{g(x)}[·] denotes the remainder that results from dividing by g(x).

Defining Reed-Solomon codes

So, now that we've provided some rudimentary background, it's time to define Reed-Solomon codes themselves. There are two common definitions of Reed-Solomon codes: as polynomial codes over finite fields (construction I), and as cyclic codes of length q - 1 over GF(q) (construction II). Other definitions are possible, for example those involving orthogonal arrays, the frequency-domain construction, or a projective geometry over GF(q), but in the interest of brevity we will concern ourselves with the two popular constructions outlined above. We note that these two definitions are not strictly equivalent, though in the case of cyclic code constructions of length n = q - 1 it is possible to show the equivalence of the two constructions.

Polynomial Codes over Certain Finite Fields (construction I)

This is the original definition given by Irving S. Reed and Gustave Solomon in their landmark 1960 paper "Polynomial codes over certain finite fields," published in the Journal of the Society for Industrial and Applied Mathematics. The idea is this: given the message m(x) in the form of a polynomial, as outlined above, whose k coefficients are taken from the finite field GF(q), simply evaluate the polynomial at n distinct elements of the field to obtain the n coefficients of the code-word. In other words, if we denote n distinct elements of the field a_0, a_1, a_2, ..., a_{n-1}, then:

(c_0, c_1, c_2, ..., c_{n-1}) = (m(a_0), m(a_1), m(a_2), ..., m(a_{n-1}))

Although this construction is appealingly simple, it is usually not used in practice because of the lack of efficient methods for encoding and decoding. Also note that this construction is not systematic, and that it yields codes of length q if used over all the elements of the field GF(q), though often the evaluation of the message at zero, m(0), is omitted, giving a code of length n = q - 1.
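For illustration only, here is a hedged MATLAB sketch of construction I over the small field GF(8), evaluating the message polynomial at n = q - 1 distinct non-zero field elements. It assumes the Communications Toolbox Galois arrays, for which polyval is overloaded; the particular message values are arbitrary.

m = 3; q = 2^m;            % small field GF(8) for illustration
k = 3; n = q - 1;          % message length and code length
msg = gf([5 1 3], m);      % k message coefficients: m(x) = 5x^2 + x + 3
alpha = gf(2, m);          % a primitive element of GF(8)
pts = gf(zeros(1, n), m);
for i = 1:n
    pts(i) = alpha ^ (i - 1);  % n distinct non-zero evaluation points
end
cw = polyval(msg, pts)     % code-word: m(x) evaluated at each point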

A generalization of the above construction leads to the definition of Generalized Reed-Solomon (GRS) codes: let a_0, a_1, ..., a_{n-1} be n distinct elements of GF(q), and let v_0, v_1, ..., v_{n-1} be n non-zero (but not necessarily distinct) elements of GF(q); then the GRS_k(a, v) code consists of all vectors

(c_0, c_1, c_2, ..., c_{n-1}) = (v_0·m(a_0), v_1·m(a_1), v_2·m(a_2), ..., v_{n-1}·m(a_{n-1})).

It should be clear that if the message has k symbols and the code length n = q - 1, then the code consists of n equations in k unknowns, which is overdetermined when n > k. For instance, when n = 255 and k = 239, there are 255 equations but only 239 unknown message coefficients, so the correct coefficients can be recovered even if some of them are corrupted. This redundancy gives the code its error-correcting capability.

Generator polynomial approach (construction II)

Recall, from our discussion above, that a cyclic code can be completely specified as all polynomial multiples of a generator polynomial. Then given the message m(x) in the form of a polynomial, as outlined above, whose k coefficients are taken from the finite field with q elements, we can construct RS code-words c(x) = m(x) g(x) (or the equivalent systematic construction). All we need to do is specify the generator polynomial of the code. We'll need a few additional definitions first:

Powers of a field element

We form the powers of an element a of a Galois field in the usual way, namely a^0 = 1, a^1 = a, a^2 = a·a, a^3 = a·a·a, and so on.

Primitive element

In any Galois field there exist one or more primitive elements. A primitive element is defined as follows: take any element α of the field, and form its successive powers: α^0 = 1, α, α^2, α^3, .... This gives a sequence of distinct field elements. The elements have to repeat eventually, however, because there are only q distinct elements in the finite field GF(q). It can easily be proven that the first field element to repeat is, in fact, always 1. The smallest power x of a given field element α such that α^x = 1 is called the order of α. If the order of a field element is q - 1, then the field element is called a primitive element of the field. A primitive field element α can be used to "generate" all the non-zero elements of the field by taking successive powers of α.

The general form of the generator polynomial of a RS code is defined in such a way as to have as its roots 2t consecutive powers of a primitive element α. Thus we can write,

g(x) = (x - α^b)(x - α^{b+1})(x - α^{b+2}) ... (x - α^{b+2t-1})

For convenience, the constant b is often chosen to be 0 or 1. Given the generator polynomial, RS code-words can now be constructed as c(x) = m(x)g(x), or as c(x) = x^{n-k}m(x) + R_{g(x)}[x^{n-k}m(x)]. This is the procedure used most often in practice.
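As a hedged sketch (Communications Toolbox assumed), the generator polynomial of RS(255, 239) can be formed and a message encoded systematically as follows; rsgenpoly builds g(x) with 2t consecutive powers of α as roots, and rsenc implements the systematic construction.

n = 255; k = 239;                   % 2t = 16 check symbols, t = 8
g = rsgenpoly(n, k);                % generator polynomial over GF(256)
msg = gf(randi([0 255], 1, k), 8);  % k random byte symbols
cw = rsenc(msg, n, k);              % systematic code-word of n symbols
isequal(cw(1:k), msg)               % message appears in first k symbols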

How RS codes work

Now that we've covered the construction of RS codes, let's look at how they work. The idea behind error correction coding is to start with a "message" (i.e. the thing you want to encode) of length k, and convert it to a "code-word" of longer length n, in such a way that the additional data in the coded form allows one to recover the original message if parts of it are corrupted. To see how this works, we'll need some additional definitions:

Hamming weight

The Hamming weight of a code-word is simply the number of non-zero symbols in the code-word. So, for example, the binary code-word 1001010001 has weight 4. This works the same way regardless of the field the symbols come from: the code-word 30104102, with symbols from the field GF(5), has weight 5.

Hamming distance

The Hamming distance between two code-words in a code is the number of places in which the code-words differ. For example, the Hamming distance between the two binary code-words 100011 and 110000 is 3.
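Both quantities are one-liners in MATLAB; this small sketch (plain MATLAB, no toolboxes) reproduces the two examples above.

cw = [1 0 0 1 0 1 0 0 0 1];
nnz(cw)            % Hamming weight: 4 non-zero symbols
a = [1 0 0 0 1 1];
b = [1 1 0 0 0 0];
sum(a ~= b)        % Hamming distance: the code-words differ in 3 places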

Minimum distance of a code

It is calculated as follows: take the distance between each code-word in the code and every other code-word in the code; the minimum over all these pairwise distances is the minimum distance of the code. Luckily, for linear codes there is a much easier way to find the minimum distance: the minimum distance of a linear code is the weight of the lowest-weight non-zero code-word in the code.

The minimum distance of a code is by far the most important property in determining the error-correcting capability of the code. To see why, we need to look more closely at the decoding process. Consider a simple binary repetition code of length 4. A repetition code of length n, as the name suggests, is a code in which each code-word consists of n repetitions of a single symbol. So there are two code-words in the binary repetition code of length 4: (0000) and (1111). Clearly, the minimum distance of this code is 4. Now suppose we send the message (1) as the code-word (1111) across our communication channel. If the first bit gets corrupted in transmission, then we will receive the word (0111). In this case the (Hamming) distance between our received word and (1111) is 1, and the distance between our received word and (0000) is 3. So a reasonable decoding scheme is: if the received word is not (1111) or (0000), choose as the decoded word the code-word that is closest in Hamming distance to the received word. This is called nearest-neighbor decoding, and in our example (0111) would clearly be corrected to (1111). Now consider the case where two bits are corrupted in transmission: we send (1111), and the word received is (1010). Now we can't recover the sent word, because the distance to both (1111) and (0000) is 2. This is known as decoder failure: the decoder knows it has received a corrupted code-word, but it isn't capable of correcting it. Finally, suppose we send (1111) and three bits are corrupted in transmission, so that we receive (0001). In this case we will incorrectly decode the word as (0000). This situation is known as decoder error.
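A minimal sketch of nearest-neighbor decoding for this length-4 repetition code (plain MATLAB; uses implicit expansion, available since R2016b):

codebook = [0 0 0 0; 1 1 1 1];   % the two valid code-words
rx = [0 1 1 1];                  % (1111) with its first bit corrupted
dists = sum(codebook ~= rx, 2);  % Hamming distance to each code-word
[~, idx] = min(dists);
codebook(idx, :)                 % nearest neighbor: decodes to (1 1 1 1)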

This kind of decoding can be generalized to much larger and more complicated codes, but it very quickly becomes impractical to compare a received word to every code-word in the code and choose the code-word that is the smallest distance away. The important thing to observe, however, is that for any code with a minimum distance d, we can always correct up to [(d - 1)/2] errors (here we use square brackets [] to indicate the "floor" function, i.e., the greatest integer less than or equal to the expression in the brackets).

For an RS code with 2t redundant "check" symbols, the minimum distance cannot exceed 2t + 1 (that is, d_min ≤ n - k + 1), because a code-word can exist whose message part has only one non-zero symbol. This is referred to as the Singleton bound. Using construction I, we can see that m(x), being of degree at most k - 1, cannot have more than k - 1 zeros. Thus c(x) can't have more than k - 1 zero positions (for a non-zero code-word). This means that d_min ≥ n - (k - 1). Hence, combining the two bounds, for RS codes d_min = n - k + 1 = 2t + 1. Codes that have this distance property are called Maximum Distance Separable (MDS) codes.

So an RS code with 2t check symbols can correct up to [(2t + 1 - 1)/2] = t errors.
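An end-to-end sketch of this error-correcting capability for RS(255, 239), again assuming the Communications Toolbox: with 2t = 16 check symbols the decoder corrects up to t = 8 symbol errors.

n = 255; k = 239; t = (n - k)/2;
msg = gf(randi([0 255], 1, k), 8);
cw = rsenc(msg, n, k);
cw(5:4+t) = cw(5:4+t) + 1;     % corrupt t = 8 consecutive symbols
[dec, nerr] = rsdec(cw, n, k); % decoder corrects all of them
isequal(dec, msg), nerr        % returns true, and nerr reports 8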

The preceding is true for all codes. So what makes RS codes so special? First, RS codes are MDS, which, as established above, is the best minimum distance achievable. Recall that minimum distance is the most important property of an error-correction code. Second, because of their structure, RS codes are easy to encode and relatively easy to decode.

CHAPTER 3

MATLAB

What Is MATLAB?

MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation.

Typical uses of MATLAB

  • Math and computation
  • Algorithm development
  • Data acquisition
  • Modeling, simulation, and prototyping
  • Data analysis, exploration, and visualization
  • Scientific and engineering graphics
  • Application development, including graphical user interface building

The main features of MATLAB

  • Advanced algorithms for high-performance numerical computation, especially in the field of matrix algebra
  • A large collection of predefined mathematical functions and the ability to define one's own functions
  • Two- and three-dimensional graphics for plotting and displaying data
  • A complete online help system
  • A powerful, matrix/vector-oriented, high-level programming language for individual applications
  • Toolboxes available for solving advanced problems in several application areas

Features and capabilities of MATLAB

MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. This allows you to solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar, non-interactive language such as C or FORTRAN.

MATLAB was originally written to provide easy access to matrix software developed by the LINPACK and EISPACK projects. MATLAB has evolved over a period of years with input from many users. In university environments, it is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-productivity research, development, and analysis.

MATLAB features a family of add-on, application-specific solutions called toolboxes. Very important to most users of MATLAB, toolboxes allow you to learn and apply specialized technology. Toolboxes are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, simulation, and many others.

The MATLAB System:

The MATLAB system consists of five major parts:

Development Environment:

This is the set of tools and facilities that help you use MATLAB functions and files. Many of these tools are graphical user interfaces.

The MATLAB Mathematical Function:

This is a vast collection of computational algorithms ranging from elementary functions like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.

The MATLAB Language:

This is a high-level matrix/array language with control flow statements, functions, data structures, input/output, and object-oriented programming features. It allows both "programming in the small" to rapidly create quick, throw-away programs, and "programming in the large" to create large and complex application programs.

Graphics:

MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as for annotating and printing these graphs. It includes high-level functions for two-dimensional and three-dimensional data visualization, image processing, animation, and presentation graphics. It also includes low-level functions that allow you to fully customize the appearance of graphics and to build complete graphical user interfaces on your MATLAB applications.

The MATLAB Application Program Interface (API):

This is a library that allows you to write C and Fortran programs that interact with MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling MATLAB as a computational engine, and reading and writing MAT-files.

MATLAB WORKING ENVIRONMENT:

MATLAB DESKTOP:

The MATLAB desktop is the main MATLAB application window. The desktop contains five sub-windows: the command window, the workspace browser, the current directory window, the command history window, and one or more figure windows, which are shown only when the user displays a graphic.

The command window is where the user types MATLAB commands and expressions at the prompt (>>) and where the output of those commands is displayed. MATLAB defines the workspace as the set of variables that the user creates in a work session. The workspace browser shows these variables and some information about them. Double-clicking on a variable in the workspace browser launches the Array Editor, which can be used to obtain information about, and in some instances edit, certain properties of the variable.

The Current Directory tab above the workspace tab shows the contents of the current directory, whose path is shown in the current directory window. For example, in the Windows operating system the path might be C:\MATLAB\Work, indicating that directory "work" is a subdirectory of the main directory "MATLAB", which is installed in drive C. Clicking on the arrow in the current directory window shows a list of recently used paths, and clicking on the button to the right of the window allows the user to change the current directory.

MATLAB uses a search path to find M-files and other MATLAB-related files, which are organized in directories in the computer file system. Any file to be run in MATLAB must reside in the current directory or in a directory that is on the search path. By default, the files supplied with MATLAB and MathWorks toolboxes are included in the search path. The easiest way to see which directories are on the search path, or to add or modify the search path, is to select Set Path from the File menu on the desktop, and then use the Set Path dialog box. It is good practice to add commonly used directories to the search path to avoid repeatedly having to change the current directory.
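For example, the search path can also be inspected and extended from the command window; the folder and file names below are purely illustrative.

addpath('C:\MATLAB\Work')   % put a directory on the search path
path                        % list the directories on the search path
which myscript              % locate an (illustrative) M-file on the path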

The Command History window contains a record of the commands a user has entered in the command window, including both current and previous MATLAB sessions. Previously entered MATLAB commands can be selected and re-executed from the command history window by right-clicking on a command or sequence of commands; this launches a menu from which the user can select various options in addition to executing the commands. This is a useful feature when experimenting with various commands in a work session.

Implementations:

1. Arithmetic operations

Entering Matrices

The best way for you to get started with MATLAB is to learn how to handle matrices. Start MATLAB and follow along with each example.

You can enter matrices into MATLAB in several different ways:

  • Enter an explicit list of elements.
  • Load matrices from external data files.
  • Generate matrices using built-in functions.
  • Create matrices with your own functions in M-files.

Start by entering Dürer's matrix as a list of its elements. You only have to follow a few basic conventions:

  • Separate the elements of a row with blanks or commas.
  • Use a semicolon to indicate the end of each row.
  • Surround the entire list of elements with square brackets, [ ].

To enter the matrix, simply type in the Command Window:

A = [16 3 2 13; 5 10 11 8; 9 6 7 12; 4 15 14 1]

MATLAB displays the matrix you just entered:

A =

16 3 2 13

5 10 11 8

9 6 7 12

4 15 14 1

This matrix matches the numbers in the engraving. Once you have entered the matrix, it is automatically remembered in the MATLAB workspace, and you can refer to it simply as A. Now that you have A in the workspace, you can work with it using MATLAB's matrix functions.

sum, transpose, and diag

You are probably already aware that the special properties of a magic square have to do with the various ways of summing its elements. If you take the sum along any row or column, or along either of the two main diagonals, you will always get the same number. Let us verify that using MATLAB.

The first statement to try is

sum(A)

MATLAB replies with

ans =

34 34 34 34

When you do not specify an output variable, MATLAB uses the variable ans, short for answer, to store the results of a calculation. You have computed a row vector containing the sums of the columns of A. Sure enough, each of the columns has the same sum, the magic sum, 34.

How about the row sums? MATLAB has a preference for working with the columns of a matrix, so one way to get the row sums is to transpose the matrix, compute the column sums of the transpose, and then transpose the result. (An alternative that avoids the double transpose is to use the dimension argument of the sum function.) MATLAB has two transpose operators. The apostrophe operator (e.g., A') performs a complex conjugate transposition: it flips a matrix about its main diagonal and also changes the sign of the imaginary component of any complex elements of the matrix. The dot-apostrophe operator (e.g., A.') transposes without affecting the sign of complex elements. For matrices containing all real elements, the two operators return the same result.

So

A'

produces

ans =

16 5 9 4

3 10 6 15

2 11 7 14

13 8 12 1

and

sum(A')'

produces a column vector containing the row sums

ans =

34

34

34

34

The sum of the elements on the main diagonal is obtained with the sum and the diag functions:

diag(A)

produces

ans =

16

10

7

1

and

sum(diag(A))

produces

ans =

34

The other diagonal, the so-called antidiagonal, is not so important mathematically, so MATLAB does not have a ready-made function for it. But a function originally intended for use in graphics, fliplr, flips a matrix from left to right:

sum(diag(fliplr(A)))

ans =

34

You have verified that the matrix in Dürer's engraving is indeed a magic square and, in the process, have sampled a few MATLAB matrix operations.

Operators

Expressions use familiar arithmetic operators and precedence rules.

+ Addition

- Subtraction

* Multiplication

/ Division

\ Left division (described in "Matrices and Linear Algebra" in the MATLAB documentation)

^ Power

' Complex conjugate transpose

( ) Specify evaluation order

Generating Matrices

MATLAB provides four functions that generate basic matrices:

zeros All zeros

ones All ones

rand Uniformly distributed random elements

randn Normally distributed random elements

Here are some examples:

Z = zeros(2, 4)

Z =

0 0 0 0

0 0 0 0

F = 5*ones(3, 3)

F =

5 5 5

5 5 5

5 5 5

N = fix(10*rand(1, 10))

N =

9 2 6 4 8 7 4 0 8 4

R = randn(4, 4)

R =

0.6353 0.0860 -0.3210 -1.2316

-0.6014 -2.0046 1.2366 1.0556

0.5512 -0.4931 -0.6313 -0.1132

-1.0998 0.4620 -2.3252 0.3792

Using the MATLAB Editor to Create M-Files:

The MATLAB editor is both a text editor specialized for creating M-files and a graphical MATLAB debugger. The editor can appear in a window by itself, or it can be a sub-window in the desktop. M-files are denoted by the extension .m, as in pixelup.m. The MATLAB editor window has numerous pull-down menus for tasks such as saving, viewing, and debugging files. Because it performs some simple checks and also uses color to differentiate between various elements of code, this text editor is recommended as the tool of choice for writing and editing M-functions. To open the editor, type edit at the prompt; typing edit filename opens the M-file filename.m in an editor window, ready for editing. As noted earlier, the file must be in the current directory or in a directory on the search path.

Getting Help:

The principal way to get help online is to use the MATLAB Help Browser, opened as a separate window either by clicking on the question mark symbol (?) on the desktop toolbar, or by typing helpbrowser at the prompt in the command window. The Help Browser is a web browser integrated into the MATLAB desktop that displays Hypertext Markup Language (HTML) documents. The Help Browser consists of two panes: the help navigator pane, used to find information, and the display pane, used to view that information. Self-explanatory tabs in the navigator pane are used to perform a search.

CHAPTER 4

DESIGN AND IMPLEMENTATION
