This project studies the principles behind various error correction and control mechanisms used in wireless networks today. Focusing on forward error correction techniques, convolutional and block codes are studied. The Viterbi Algorithm is a very popular algorithm for decoding convolutional codes and is in use in most communication systems today. However, due to its computational complexity, a major portion of the energy consumption at the receiver end, almost one-third, comes from the decoder. This paper investigates an energy saving strategy that will enable wireless receivers, which are constrained by energy availability, to decode transmissions optimally. An analysis of how factors such as Signal to Noise Ratio and Bit Error Rate affect this algorithm will also be made.
This section provides a brief overview of the project and the major concepts that will be discussed in this initial report. It also gives an outline of the main motivations and ideas that underpin this project.
Unlike packetized wired digital networks, packetized wireless digital networks are much more prone to bit errors. Packets may be lost, and even the ones that are received may be damaged. Therefore, good error detection and correction mechanisms are vital to ensure efficient and accurate wireless communication systems. Numerous techniques exist for ensuring that the receiver gets an error-free message. A few of these techniques are Automatic Repeat Request (ARQ), Forward Error Correction (FEC) and Hybrid Automatic Repeat Request (H-ARQ).
Forward Error Correction or FEC refers to the process of detecting and correcting errors in a received signal. There are various FEC codes in use today for the purpose of error correction. Most codes fall into one of two major categories: Block Codes and Convolutional Codes. Block codes work with fixed-length blocks of code. Convolutional codes, on the other hand, deal with sequential data, taken a few bits at a time, with the output depending on both present and past input. Chapter II Section 2 gives a brief description of some of these different kinds of FEC codes.
In terms of actual implementation, block codes are complex and hard to implement, while convolutional codes are easier. Convolutionally coded data is still transmitted in blocks; however, these blocks are much larger than those used by block codes. The fact that convolutional codes are easier to implement, coupled with the emergence of a very efficient convolutional decoding algorithm known as the Viterbi Algorithm, may be the reason convolutional codes became the preferred method for real-time communication technologies.
The constraint length of a convolutional code is the number of stages present in the combinatorial logic of the encoder. The error correction power of a convolutional code increases with its constraint length. However, the decoding complexity increases exponentially as the constraint length increases. Since its discovery in 1967, the Viterbi algorithm has been the most widely used decoding logic for convolutional codes. The reasons for this will be described in detail in Chapter II Section 3. For constraint lengths greater than 9, another decoding algorithm known as sequential decoding is used. Due to its high accuracy in finding the most likely sequence of states, the Viterbi algorithm is used in many applications, ranging from radio communication links and geostationary satellite networks to voice recognition and even DNA sequence analysis.
This project studies the use of various error detection and correction techniques for mobile networks with a focus on the Viterbi Algorithm. Since preservation of battery energy is a major concern for mobile devices, it is essential that the error detection and correction mechanism take the minimum amount of energy to execute. To this end, this project explores the possibility of improving the energy efficiency of the Viterbi decoder and attempts to develop an algorithm for the same.
Outline of the Scope and Context of the Investigation
This project focuses on the use of the Viterbi Algorithm for forward error correction in mobile networks. A brief explanation of the encoding and decoding mechanisms used is given. This algorithm forms the basis for many wireless communication systems such as Bluetooth, WiFi, WiMax and the like. A brief study of other error detection and correction mechanisms will also be made.
One of the main considerations in designing decoders for mobile networks has been energy consumption. It is necessary to keep energy consumption at a minimum in order to optimize the use of available battery energy. However, in order to get good error correcting capabilities, we need to keep the constraint length high. As described before, the decoding complexity of a convolutional code increases exponentially with constraint length. This makes the decoding mechanism more complex, and as a result it consumes more energy.
In Chapter II Section 5, we look at ways by which we can improve the power efficiency of the decoding mechanism. The line of thought, originally proposed by Barry Cheetham in his paper (name), was that if we can switch off the Viterbi decoder when we know that no errors are occurring, it would save a good amount of energy. When bit-errors are detected, the bits can be traced back, the Viterbi decoder switched on and the errors corrected normally. In this project we try to simulate this process, look at the possible problems that may occur and work out how they can be overcome. We analyze how much energy this method would save and whether this presents a considerable improvement over existing mechanisms.
In Chapter II Section 5, a literature review is provided that gives an overview of the current research and theories relevant to this project. Chapter III outlines the research methodologies that will be followed, the overall plan for the various tasks in the project and the criteria with which the outcomes of the tests will be analyzed.
This section describes different kinds of error detection and correction techniques used in wireless networks. It also provides a literature review detailing the different methods that are being researched in an attempt to optimize the energy consumption of the Viterbi decoder.
AUTOMATIC REPEAT REQUEST (ARQ)
Automatic Repeat Request or ARQ is a method in which the receiver sends back a positive acknowledgement if no errors are detected in the received message. The sender waits for this acknowledgement. If it does not receive an acknowledgement (ACK) within a predefined time, or if it receives a negative acknowledgement (NAK), it retransmits the message. Retransmission continues either until an ACK is received or until a specified number of retransmissions is exceeded.
This method has a number of drawbacks. Firstly, transmission of a whole message takes much longer, as the sender has to keep waiting for acknowledgements from the receiver. Secondly, due to this delay, it is not possible to have practical, real-time, two-way communications. There are a few simple variations to the standard Stop-and-Wait ARQ, such as Go-back-N ARQ and Selective Repeat ARQ.
'Stop and Wait' ARQ
The transmitter sends a packet and waits for a positive acknowledgement. Only once it receives this ACK does it proceed to send the next packet.
'Continuous' ARQ
The transmitter transmits packets continuously until it receives a NAK. A sequence number is assigned to each transmitted packet so that it may be properly referenced by the NAK. There are two ways a NAK is processed.
In 'Go-back-N' ARQ, the packet that was received in error is retransmitted along with all the packets that followed it until the NAK was received. N refers to the number of packets that have to be traced back to reach the packet that was received in error. In some cases this value is determined using the sequence number referenced in the NAK; in others, it is calculated using the round-trip delay. The disadvantage of this method is that even though subsequent packets may have been received without error, they have to be discarded and retransmitted. This results in loss of efficiency.
This disadvantage is overcome by using Selective-repeat ARQ. When a NAK is received, only the packet that was received in error needs to be retransmitted. The other packets that have already been sent in the meantime are stored in a buffer and can be used once the packet in error is retransmitted correctly. The transmissions then pick up from where they left off.
Continuous ARQ requires a higher memory capacity as compared to Stop and Wait ARQ. However it reduces delay and increases information throughput. 
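The retransmission behaviour described above can be sketched as a small simulation. It is shown here in Python for illustration, although the project itself uses MATLAB; the error probability and retry limit are arbitrary illustrative values.

```python
import random

def stop_and_wait(packets, error_prob, max_retries=10, rng=None):
    """Simulate Stop-and-Wait ARQ: each packet is retransmitted until
    the receiver detects no error and returns an ACK."""
    rng = rng or random.Random(0)           # fixed seed for repeatability
    transmissions = 0
    delivered = []
    for pkt in packets:
        for _ in range(max_retries):
            transmissions += 1
            # The receiver NAKs with probability error_prob (detected error).
            if rng.random() >= error_prob:  # packet arrived intact -> ACK
                delivered.append(pkt)
                break
        else:
            raise RuntimeError("retry limit exceeded")
    return delivered, transmissions

msgs = ["p0", "p1", "p2", "p3"]
received, sent = stop_and_wait(msgs, error_prob=0.3)
assert received == msgs    # every packet is eventually delivered
assert sent >= len(msgs)   # detected errors cost extra transmissions
```

Go-back-N and Selective Repeat differ from this sketch only in how many already-sent packets are retransmitted after a NAK.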
The main advantage of ARQ is that, as it only detects errors and makes no attempt to correct them, it requires much simpler decoding equipment and much less redundancy compared to the Forward Error Correction techniques described below. The huge drawback, however, is that the ARQ method may require a large number of retransmissions to get the correct packet. Hence the delay in getting messages across may be excessive.
HYBRID AUTOMATIC REPEAT REQUEST (H-ARQ)
Hybrid Automatic Repeat Request or H-ARQ is another variation of the ARQ method. In this technique, error correction information is also transmitted along with the code. This gives better performance, especially when a lot of errors are occurring. However, it introduces a larger amount of redundancy in the information sent and therefore reduces the rate at which the actual information can be transmitted. There are two main kinds of H-ARQ, namely Type I and Type II H-ARQ.
FORWARD ERROR CORRECTION CODES
Forward Error Correction is a method used to improve the utilization of channel capacity by introducing redundant data into the message. This redundant data allows the receiver to detect and correct errors without the need for retransmission of the message. Forward Error Correction proves advantageous in noisy channels, where a large number of retransmissions would normally be required before a packet is received without error. It is also used in cases where no backward channel exists from the receiver to the transmitter. A complex algorithm or function is used to encode the message with redundant data. The process of adding redundant data to the message is called channel coding. This encoded message may or may not contain the original information in an unmodified form. Systematic codes are those in which a portion of the output directly resembles the input; non-systematic codes are those in which it does not.
It was earlier believed that, as some degree of noise was present in all communication channels, it would not be possible to have error-free communications. This belief was proved wrong by Claude Shannon in 1948. In his paper titled "A Mathematical Theory of Communication", Shannon proved that channel noise limits the transmission rate and not the error probability. According to his theory, every communication channel has a capacity C (measured in bits per second), and as long as the transmission rate, R (measured in bits per second), is less than C, it is possible to design an error-free communications system using error control codes. The now famous Shannon-Hartley theorem describes how this channel capacity can be calculated. However, Shannon did not describe how such codes may be developed. This led to a widespread effort to develop codes that would produce the very small error probability predicted by Shannon. It was only in the 1960s that such codes were finally discovered. Two major classes of codes were developed, namely Block Codes and Convolutional Codes.
As described by Proakis, linear block codes consist of fixed-length vectors called code words. The length of the code word is the number of elements in the vector and is denoted by n. A binary code of length n can form 2^n different code words. Out of these possible code words, we select a subset of M = 2^k code words, where k < n. So, k information bits are mapped onto an n-length code word selected from this set. The resulting code is called an (n,k) code. The ratio k/n = Rc is called the rate of the code. Encoding is done using a generator matrix: Cm = Xm G, where G = [Ik | P]. Ik is a k x k identity matrix and P is a k x (n-k) matrix which determines the n-k redundant bits, or parity check bits.
To summarize, a block code is described using two integers k and n, and a generator matrix or polynomial. The integer k is the number of data bits in the input to the block encoder. The integer n is the total number of bits in the generated codeword. Also, each n bit codeword is uniquely determined by the k bit input data. 
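The encoding operation Cm = Xm G can be illustrated with a small systematic (7,4) code. The sketch below is in Python for illustration, and the parity sub-matrix P is one arbitrary illustrative choice, not a code specified in this project.

```python
# Systematic (7,4) block code: generator matrix G = [I_k | P] over GF(2).
# The parity sub-matrix P below is one arbitrary illustrative choice.
P = [
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 1],
]
I4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
G = [I4[i] + P[i] for i in range(4)]   # k x n generator matrix (k=4, n=7)

def block_encode(msg, G):
    """Codeword c = m G (mod 2); the first k bits reproduce the message."""
    n = len(G[0])
    return [sum(m * G[i][j] for i, m in enumerate(msg)) % 2 for j in range(n)]

c = block_encode([1, 0, 1, 1], G)
assert c[:4] == [1, 0, 1, 1]   # systematic: data bits appear unchanged
assert len(c) == 7             # n - k = 3 parity check bits appended
```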
Another parameter used to describe a block code is its weight, defined as the number of non-zero elements in a code word. In general, each code word has its own weight. If all M code words have equal weight, the code is said to be a fixed-weight code.
A commonly known linear block code is the Hamming code. Hamming codes can detect and correct single bit errors in a block of data. In these codes, every bit is included in a unique set of parity bits. A bit-error can be checked by analyzing the parity bits using a pattern of errors known as the error syndrome. If all the parity bits are correct according to this pattern, we can conclude that there is no single bit error in the message. If there are errors in the parity bits, we can find the erroneous data bit by adding up the positions of the erroneous parity bits. Thus, we also know that if only a single parity bit is in error, it is the parity bit which is erroneous. The stated reference provides the general algorithm used for creating hamming codes. 
While Hamming codes are easy to implement, a problem arises if more than one bit in the received message is erroneous. This may result in either the error going undetected or the code being corrected to a wrong value. Hence, we need to find more robust error detection and correction schemes that will be able to accommodate and correct multiple errors in a transmitted message.
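The syndrome calculation described above can be sketched as follows (Python for illustration; positions are numbered from 1, with parity bits at positions 1, 2 and 4, as in the classic Hamming(7,4) arrangement):

```python
def hamming_syndrome(bits):
    """bits[0] is position 1 ... bits[6] is position 7 of a Hamming(7,4)
    word with parity bits at positions 1, 2 and 4.  Returns the position
    of a single-bit error, or 0 if every parity check passes."""
    syndrome = 0
    for p in (1, 2, 4):                  # each parity-bit position
        parity = 0
        for pos in range(1, 8):
            if pos & p:                  # positions covered by this check
                parity ^= bits[pos - 1]
        if parity:                       # check failed ...
            syndrome += p                # ... so add its position
    return syndrome

word = [0, 0, 0, 0, 0, 0, 0]            # the all-zero word is a codeword
assert hamming_syndrome(word) == 0
word[4] ^= 1                            # flip the bit at position 5
err = hamming_syndrome(word)
assert err == 5                         # failing checks 1 and 4 sum to 5
word[err - 1] ^= 1                      # correcting it clears the syndrome
assert hamming_syndrome(word) == 0
```

Flipping a second bit in the same word would, as noted above, either escape detection or be "corrected" to the wrong position.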
Cyclic codes and Cyclic Redundancy Checks (CRC)
Cyclic Codes are linear codes that can be expressed by the following mathematical property.
If C = [c_(n-1) c_(n-2) ... c_1 c_0] is a code word of a cyclic code, then [c_(n-2) c_(n-3) ... c_0 c_(n-1)], which is obtained by cyclically shifting all the elements one place to the left, is also a code word. In other words, every cyclic shift of a codeword results in another codeword. This cyclic structure is very useful in encoding and decoding operations because it is very easy to implement in hardware.
A cyclic redundancy check or CRC is a very common form of cyclic codes which are used for error detection purposes in communication systems. Using different kinds of generator polynomials, it is possible to detect different kinds of errors such as all single bit errors, all double bit errors, any odd number of errors, or any burst error of length less than a particular value. This makes the CRC check a very useful form of error detection. The CRC check which is in use for WLAN's according to the IEEE 802.11 standard is the CRC-32.  This will be used for the purpose of the project.
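As a quick illustration, the CRC-32 polynomial used by IEEE 802.11 is available in Python's standard library as zlib.crc32, which makes it easy to demonstrate the detect-on-mismatch principle (the frame contents here are arbitrary):

```python
import zlib

frame = b"wireless payload"
crc = zlib.crc32(frame)              # CRC-32, as used for 802.11 frame checks

# The transmitter appends the CRC; the receiver recomputes and compares.
assert zlib.crc32(frame) == crc      # an intact frame passes the check

corrupted = b"wireless paylo@d"      # a single corrupted byte (8-bit burst)
assert zlib.crc32(corrupted) != crc  # any burst shorter than 32 bits is caught
```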
Convolutional codes are codes that are generated sequentially by passing the information sequence through a linear finite-state shift register. A convolutional code is described using 3 parameters k, n and K. The integer k represents the number of input bits for each shift of the register. The integer n represents the number of output bits generated at each shift of the register. K is an integer known as constraint length, which represents the number of k bit stages present in the encoding shift register.
An important difference between block codes and convolutional codes is that convolutional encoders have memory. The n-bit output generated by the encoder depends not only on the present k input bits but also on the previous K-1 sets of k input bits.
There are alternative ways of describing a convolutional code. It can be expressed as a tree diagram, a trellis diagram or a state diagram. For the purpose of this project, we will use the trellis and state diagrams. Below we describe how these are constructed.
A state diagram shows all possible present states of the encoder as well as all the possible next states. First a state transition table is made, which shows the next state for each possible combination of the present state and the input to the encoder. This table can then be mapped onto a diagram, called the state diagram. The following figures show how a state diagram is drawn for a convolutional encoder. For the purpose of illustration, a 3-stage encoder with rate 1/2 has been shown. In the actual experiment, we will be using the standard 7-stage encoder with rate 1/2.
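The construction of the state transition table can be sketched for a small rate-1/2, 3-stage encoder (Python for illustration; the generator polynomials 7 and 5 octal are an illustrative choice, not necessarily those of the encoder in the figure):

```python
# State transition table for a small rate-1/2, 3-stage (K=3) encoder.
# Generator polynomials 7 and 5 (octal) are an illustrative choice.
K = 3
polys = (0b111, 0b101)
table = {}
for state in range(1 << (K - 1)):            # 4 states of K-1 = 2 memory bits
    for bit in (0, 1):                       # each possible input
        full = (state << 1) | bit            # register after shifting bit in
        next_state = full & ((1 << (K - 1)) - 1)
        output = tuple(bin(full & p).count("1") % 2 for p in polys)
        table[(state, bit)] = (next_state, output)

assert table[(0, 0)] == (0, (0, 0))          # idling in state 0 emits 00
assert table[(0, 1)] == (1, (1, 1))          # input 1 from state 0 emits 11
```

Each entry of `table` corresponds to one arrow of the state diagram: an arc from `state` to `next_state`, labelled with the input bit and the output pair.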
In a trellis diagram, the mappings from current state to next state are done in a slightly different manner, as shown in Figure 2.3, which draws a trellis diagram for the above-mentioned convolutional encoder. Additionally, the diagram is extended to represent all the time instances until the whole message is decoded. The complete trellis diagram will replicate this figure for each time instance that is to be considered.
The most common convolutional code used in communication systems has a code rate of 1/2 and constraint length K = 7. This means that for each bit of information passed to the encoder, two bits of output are produced. The constraint length 7 implies that each input bit persists in the system, affecting the output, for 7 clock cycles.
Convolutionally coded data can be decoded using different algorithms, of which the Viterbi Algorithm is the most common. However, in recent years, other coding schemes have come into use which provide superior performance. Two of these are described below.
Concatenated coding schemes combine two or more relatively simple component codes as a means of achieving large coding gains. Such concatenated codes have the error-correction capability of much longer codes while at the same time permitting relatively easy to moderately complex decoding. [Ref: Bernard Sklar. Fundamentals of Turbo Codes. http://www.informit.com/articles/article.aspx?p=25936, 2002]
Encoder - In most communication links, bit-errors are introduced into the message as short bursts due to some sudden disturbance in the medium. When many bit-errors occur adjacent to each other, it is more difficult to correct them. Turbo Codes try to reduce the occurrence of such bursts of error by scrambling the input message before encoding. This is achieved by means of a transpose operation on the matrix holding the information bits. The transposed matrix is then convolutionally coded and transmitted. The advantage is that any bursts of errors that occur will now be spread over a wider range of bits. As the bit-errors are now farther apart, there is a higher probability that they can be corrected at the decoder. This method is advantageous when the medium is known to produce burst errors. There is, however, also a probability that this type of code adversely affects the outcome. This may happen if bit-errors which would have been far apart end up adjacent to each other after the scrambling and unscrambling operations.
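The scrambling step can be illustrated with a simple row/column (transpose) interleaver. This is a Python sketch; the matrix dimensions are arbitrary illustrative values.

```python
def interleave(bits, rows, cols):
    """Write bits row-by-row into a rows x cols matrix, read column-by-column
    (a matrix transpose), spreading adjacent bits 'rows' positions apart."""
    assert len(bits) == rows * cols
    matrix = [bits[r * cols:(r + 1) * cols] for r in range(rows)]
    return [matrix[r][c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    return interleave(bits, cols, rows)   # transposing twice restores order

data = list(range(12))
scrambled = interleave(data, 3, 4)
assert deinterleave(scrambled, 3, 4) == data
# The first three transmitted symbols come from positions 0, 4 and 8 of the
# original order, so a burst hitting them damages widely separated bits:
assert scrambled[:3] == [0, 4, 8]
```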
Decoder - The decoder consists of an iterative mechanism in which the output of one decoder is passed to the input of another decoder and then sent back to the first decoder. This process is repeated a number of times and is expected to reduce bit-errors with each iteration. In order to make full use of this method, the decoders must produce soft decision outputs as hard decisions will severely limit its error correcting capability.
Low Density Parity Check Codes
Low Density Parity Check codes, or LDPC codes as they are known, are block codes that have a parity-check matrix in which every row and column is 'sparse'. This means that each constraint, that is each row or column of the matrix, has only a few of its members acting as variables. In other words, only a few of the members in each row or column will have a value of 1; the remaining members will have a value of 0.
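The sparsity property can be illustrated with a toy parity-check matrix (Python for illustration; the matrix below is illustrative only, not a practical LDPC code):

```python
# A toy low-density parity-check matrix: every row and column is sparse.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 1],
]

def satisfies_checks(word, H):
    """A word is a codeword iff every parity check (row of H) sums to 0 mod 2."""
    return all(sum(h * c for h, c in zip(row, word)) % 2 == 0 for row in H)

# Sparsity: each check involves only 3 of the 6 bits, and each bit
# participates in only 2 of the 4 checks.
assert all(sum(row) == 3 for row in H)
assert satisfies_checks([0, 0, 0, 0, 0, 0], H)      # all-zero word passes
assert not satisfies_checks([1, 0, 0, 0, 0, 0], H)  # a single flip is caught
```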
THE VITERBI ALGORITHM
The Viterbi Algorithm was developed by Andrew J. Viterbi and published in the IEEE Transactions on Information Theory in 1967. It is a maximum likelihood decoding algorithm for convolutional codes. This algorithm provides a method of finding the path through the trellis diagram that has the highest probability of matching the actual transmitted sequence of bits. Since its discovery, it has become one of the most popular algorithms in use for convolutional decoding. Apart from being efficient and robust, it has the advantage of having a fixed decoding time. This makes it suitable for hardware implementation.
Data is convolutionally coded by using a series of shift registers and an associated combinatorial logic which usually consists of a series of exclusive-or gates.
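A software model of this shift-register-and-XOR structure, for the standard rate-1/2, K=7 (171, 133) code, might look as follows. This is a Python sketch for illustration; bit-ordering and tap conventions vary between references, and this sketch uses one common choice.

```python
G1, G2 = 0o171, 0o133     # standard generator polynomials, in octal
K = 7                     # constraint length
MASK = (1 << K) - 1

def conv_encode(bits):
    """Rate-1/2 K=7 convolutional encoder: a 7-bit shift register whose
    tapped stages are XORed (parity) to form the two output bits."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & MASK           # shift the new bit in
        out.append(bin(state & G1).count("1") % 2)  # upper output bit
        out.append(bin(state & G2).count("1") % 2)  # lower output bit
    return out

coded = conv_encode([1, 0, 1, 1, 0, 0, 0])
assert len(coded) == 14                 # two output bits per input bit
assert conv_encode([1])[:2] == [1, 1]   # both polynomials tap the new bit
```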
The decoding mechanism comprises three major stages:
- Branch Metric Computation (BMC) - The state diagram describes all the possible states that can follow a particular state when given an input of 1 or 0. The error metric for each state transition at a particular time instant is measured as the sum of the error metric for the preceding state and the Hamming distance between the received bits and the expected output bits for that transition. This error metric is calculated for each state at each time instant.
- Add-compare-select (ACS)
The error metrics from different predecessors to a particular state are compared and the one with the smallest error metric is selected. It is considered that this is the most probable transition that occurred in the original message. This process is repeated for each state at each time instant and the surviving states are stored in a matrix. In cases where more than one path results in the same accumulated error metric, we consistently choose either the higher or lower state as the surviving state.
- Traceback - Once the end of the trellis is reached, the state with the least accumulated error metric is selected and its state number is stored. Its predecessor is selected from the surviving state history table and that state number stored. In this way we work backwards through the trellis, storing the state number of the predecessor at each time instant.
We then determine the input bit that corresponds to each transition of state by comparing with the state diagram. In this way, working forward through the trellis we establish the entire bit sequence and this represents the decoded message.
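The three stages can be combined into a compact hard-decision decoder. The sketch below (Python for illustration) uses a small K=3, rate-1/2 code with generator polynomials 7 and 5 octal to keep the trellis small; the structure is identical for the K=7 code, only with 64 states.

```python
K = 3
POLYS = (0b111, 0b101)        # small illustrative code (7, 5 octal)
N_STATES = 1 << (K - 1)

def encode3(bits):
    """Reference K=3 rate-1/2 encoder used to generate test data."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)
        out += [bin(state & p).count("1") % 2 for p in POLYS]
    return out

def viterbi_decode(received, nbits):
    INF = float("inf")
    metric = [0.0] + [INF] * (N_STATES - 1)   # encoder starts in state 0
    history = []
    for t in range(nbits):
        r = received[2 * t:2 * t + 2]
        new_metric = [INF] * N_STATES
        prev = [0] * N_STATES
        inp = [0] * N_STATES
        for s in range(N_STATES):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                full = (s << 1) | b
                ns = full & (N_STATES - 1)
                expect = [bin(full & p).count("1") % 2 for p in POLYS]
                # Branch Metric Computation: Hamming distance to r
                m = metric[s] + sum(x != y for x, y in zip(expect, r))
                # Add-Compare-Select: keep the lower-metric predecessor
                if m < new_metric[ns]:
                    new_metric[ns], prev[ns], inp[ns] = m, s, b
        metric = new_metric
        history.append((prev, inp))
    # Traceback from the state with the least accumulated error metric
    s = metric.index(min(metric))
    decoded = []
    for prev, inp in reversed(history):
        decoded.append(inp[s])
        s = prev[s]
    return decoded[::-1]

msg = [1, 0, 1, 1, 0, 1]
coded = encode3(msg + [0, 0])      # K-1 zero tail bits flush the encoder
coded[3] ^= 1                      # inject a single channel bit-error
assert viterbi_decode(coded, len(msg) + 2)[:len(msg)] == msg
```

Note that the single injected bit-error is corrected: the erroneous branch accumulates a higher metric than the true path and is discarded during Add-Compare-Select.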
LITERATURE REVIEW
Power Saving Strategy
This section describes the core objectives of the project and the research methodologies that will be adopted to achieve project goals. Descriptions of the key deliverables and software tools that will be used are provided. Finally, a project plan for the research project has been developed and summarized with the help of a Gantt chart.
Significance of Project
It is known that the power consumption of the Viterbi decoder can account for as much as one third of the power consumption of the baseband processing. The significance of this project lies in improving the energy efficiency of the decoders used in mobiles today. An improved algorithm may reduce the amount of energy required to decode signals received on a mobile handset and in turn improve the battery life of the handset.
Aims of the project
The project consists of two parts. The first part involves a study of convolutional codes and other existing forward error correction mechanisms in use today. The second part of the project involves evaluating the feasibility of a power saving strategy for decoding signals encoded using the Viterbi algorithm.
We investigate a method which allows us to switch off the Viterbi decoder when error free transmissions are being received. When errors start occurring, the algorithm should trace back the required number of bits, switch on the Viterbi decoder and proceed. The project aims to simulate a communication system in MATLAB where signals are coded and transmitted. Bit-errors will be introduced at random intervals. The received signals will be decoded using the developed algorithm. Using this simulation we estimate the power used and compare it with that consumed by traditional methods. The following research approach will be adopted for the purpose of this project.
The initial phase will consist of a study of existing algorithms and mechanisms. Following this, a suitable algorithm will be designed component-wise to meet each function of the new system. In this project, we will consider a K=7 (171, 133) convolutional encoder. This is the industry standard in use in most communication systems today.
The underlying principle of the switch-off mechanism can be described in the following way. Taking the case of the K=7 (171, 133) convolutional encoder, each input bit is exclusive-ORed with flip-flops 1, 2, 3 and 6 for the lower output bit and flip-flops 2, 3, 5 and 6 for the upper output bit. The lower and upper bits are then interleaved and transmitted.
Exclusive-OR has the property that ((A XOR B) XOR B) = A. Therefore, at the receiver end, if we XOR alternate arriving bits with the corresponding set of flip-flops (flip-flops 1, 2, 3 and 6 for lower bits and 2, 3, 5 and 6 for upper bits), we can get back the original message bit. Since we know that both the upper bit and the lower bit were produced from the same original message bit, XORing them with the corresponding flip-flops should produce the same value for both.
If they are equal, we can be reasonably sure that no bit-error was introduced in transmission. If they are not equal, we know that one of the bits contains an error. We then need to go back a certain number of bits, start up the Viterbi decoder and proceed conventionally. This principle lies at the heart of the attempt to reduce the energy consumption of the Viterbi decoder.
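A hypothetical sketch of this check (Python for illustration; the register and tap conventions are assumptions, and the actual implementation will be in MATLAB) shows both the error-free agreement and the disagreement caused by a bit-error:

```python
# Register and tap conventions here are assumptions for illustration.
G1, G2 = 0o171, 0o133
MASK = (1 << 7) - 1

def encode_pair(state, bit):
    """One step of the K=7 (171,133) encoder: returns (state, upper, lower)."""
    state = ((state << 1) | bit) & MASK
    return state, bin(state & G1).count("1") % 2, bin(state & G2).count("1") % 2

def recover(state, upper, lower):
    """XOR each received output bit with the parity contributed by the
    receiver's replica register ((A XOR B) XOR B = A); both polynomials
    tap the newest bit, so each estimate reduces to the message bit."""
    mem = (state << 1) & MASK            # replica register before the new bit
    est_u = upper ^ (bin(mem & G1).count("1") % 2)
    est_l = lower ^ (bin(mem & G2).count("1") % 2)
    return est_u, est_l

msg = [1, 0, 1, 1, 0, 0, 1]
tx_state = rx_state = 0
for b in msg:
    tx_state, u, l = encode_pair(tx_state, b)
    est_u, est_l = recover(rx_state, u, l)
    assert est_u == est_l == b           # agreement: decoder can stay off
    rx_state = ((rx_state << 1) | est_u) & MASK

# A corrupted bit makes the two estimates disagree, which is the cue
# to trace back and switch the Viterbi decoder on:
tx_state, u, l = encode_pair(tx_state, 1)
est_u, est_l = recover(rx_state, u ^ 1, l)
assert est_u != est_l
```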
Each component that is developed will be implemented in MATLAB and tested using carefully selected data. Once all the components have been developed and tested individually, they will be integrated into a single system. The use of SIMULINK, a simulation software developed by MathWorks, or other simulation software to simulate the communication channel may be considered. The implementation will first be done using hard decision inputs to the Viterbi decoder. Subsequently soft decision inputs and possibly soft decision outputs may be incorporated as well.
Data Collection and Analysis
The simulations will be run repeatedly under various test conditions. The parameters to be varied will include Signal to Noise Ratio (SNR) and Bit Error Rate (BER). The data collected will be analyzed to estimate the amount of power being used at the receiver end. This will then be compared to the power that is estimated to be used without the switch off mechanism in place.
A chief concern will be an analysis of whether switching the decoder on and off multiple times results in a higher overall power use compared to the conventional methods.
The following are the major deliverables of the project.
A MATLAB implementation of the algorithm to add CRC check bits to the message to be transmitted and convolutionally encode the data using a rate-1/2, K=7 (171, 133) convolutional encoder.
An implementation of the logic that will be used if it is determined that bits are being received error-free. This will permit switching off the decoder while still enabling us to separate the actual message from the received code.
An implementation of the logic to trace back the bits and turn on the decoder when bits are being received with error
An implementation of the Viterbi algorithm to decode the received data. This will consist of a branch metric computation (BMC) segment, an Add-Compare-Select (ACS) segment and a Traceback segment.
An implementation that performs a CRC check on the decoded data.
A complete simulated communication system where information is sent with random errors introduced to simulate channel noise and a receiver that implements the developed algorithm. The use of SIMULINK or other simulation software to achieve this will be considered.
A suitable algorithm to estimate power used at the receiver.
A dissertation report
This project will be developed using MATLAB R2007b and possibly SIMULINK from the same release to simulate the communication channel. Both are developed by MathWorks.
The results of the simulation need to be evaluated in the following way.
Calculation of the amount of power that would be used without the power saving mechanism. This will be done by modifying the code to have the decoder on throughout the simulation regardless of the occurrence or non-occurrence of bit errors.
Comparison of this value with the amount of power used when the power saving mechanism is in operation. The power used by the Viterbi decoder at the receiver will be estimated in the following way. The amount of time that the Viterbi decoder is operational is measured using 'tic' and 'toc' statements in MATLAB. A counter is also kept to record the number of times the decoder is switched on. Based on this information, we can calculate the amount of energy spent if we know the energy spent by the decoder per unit time.
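The energy estimate can be sketched as follows (Python stands in for MATLAB's tic/toc here; the power and start-up figures are placeholder assumptions, not measured values):

```python
import time

# Placeholder energy model: these figures are illustrative assumptions,
# not measured values for any real decoder.
DECODER_POWER_W = 0.5      # assumed power draw while the decoder is on
STARTUP_COST_J = 0.001     # assumed fixed energy cost per switch-on

def timed(fn, *args):
    """Analogue of MATLAB's tic/toc: wall-clock one decoder invocation."""
    t0 = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - t0

on_time = 0.0
switch_ons = 0
for burst in ([1, 0, 1], [0, 0, 1, 1]):    # one decoder run per error burst
    _, dt = timed(sorted, burst)           # 'sorted' stands in for a decode
    on_time += dt
    switch_ons += 1

energy = DECODER_POWER_W * on_time + STARTUP_COST_J * switch_ons
assert switch_ons == 2 and energy > 0
```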
These simulations will be carried out repeatedly and under different scenarios. Some of the parameters that will be varied are the Signal to Noise Ratio and the Bit Error Rate.
Analysis of whether the difference in energy used is substantial enough to merit redesign of the receiver systems.
The project has been categorized into three main tasks:
- Project Background Research
- Design and Implementation of Code
- Preparation of Dissertation Report
LIST OF REFERENCES
- A. J. Viterbi, "Error bounds for convolutional codes and an asymptotically optimum decoding algorithm," IEEE Trans. Inform.Theory, vol. IT-13, pp. 260-269, Apr. 1967
- Peterson and Davie. (2003). Computer Networks: A Systems Approach, Third Edition.
- Tanenbaum, Andrew S. (2003). Computer networks. Upper Saddle River, New Jersey: Prentice Hall.
- Sklar, B. (2001), Digital Communications - Fundamentals and Applications. Second edition.
- Forward Error Correction. Wikipedia. http://en.wikipedia.org/wiki/Forward_error_correction. Retrieved 12th April 2010
- C.E. Shannon, "A Mathematical Theory of Communication", Bell System Technical Journal, vol. 27, pp.379-423, 1948
- Jacobsmeyer, M.J., P.E. Introduction to Error Control Coding. Pericle Communications Company
- Proakis, J.G. Digital Communications. Third edition. McGraw-Hill, Inc.
- General Algorithm. Hamming Codes. http://en.wikipedia.org/wiki/Hamming_code#General_algorithm. Retrieved 14th April 2010
- P. Brenner. A Technical Tutorial on the IEEE 802.11 Protocol. BreezeCom Wireless Communications, 1992.
- Fleming, C. Tutorial on Convolutional Coding with Viterbi Decoding. Spectrum Applications
- David J.C. MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003
- Sherif Welson Shaker. Design and Implementation of Low-Power Viterbi Decoder for Software-Defined WiMAX Receiver. 17th Telecommunications Forum TELFOR 2009.
- R.Henning and C.Chakrabarti, "An Approach for Adaptively Approximating the Viterbi Algorithm to Reduce Power Consumption While Decoding Convolutional Codes," IEEE Trans. Signal processing, vol. 52. May 2004.