Video Coding Technology
During the past two decades, video coding technology has matured, and state-of-the-art coding standards have become a very important part of the video industry. Standards such as MPEG-2 and H.264/AVC provide strong support for digital video transmission, storage and streaming applications. Video streaming addresses the problem of transferring video data as a continuous stream: the end user can start displaying the video or multimedia data before the entire file has been transmitted. To achieve this, bandwidth efficiency and flexibility between video servers and end-user equipment are important and challenging problems, and a variety of video coding and streaming techniques have been proposed to provide streaming services over wireless networks. This research discusses and analyses different streaming approaches and comprehensively investigates scalable video streaming over the Internet, beginning with a brief overview of the diverse range of video streaming and communication applications. Different classes of video applications impose different sets of constraints and degrees of freedom on system design. The three most fundamental challenges in video streaming are unknown and time-varying bandwidth, delay jitter, and packet loss.
The Moving Picture Experts Group (MPEG) was formed by the ISO to set standards for audio and video compression and transmission; its official designation is ISO/IEC JTC1/SC29 WG11 - Coding of moving pictures and audio. However, wireless channel characteristics such as shadowing, multipath, fading, and interference still limit the bandwidth available to deployed applications. Consequently, video compression techniques are a crucial part of multimedia applications over WLANs.
In this research we address H.264 wireless video transmission over IEEE 802.11 WLANs by proposing a robust cross-layer architecture that leverages the inherent H.264 error resilience tools and the QoS capabilities of the existing IEEE 802.11e MAC protocol to deliver the desired quality to most users of the wireless technology.
The recently developed H.264 video standard achieves efficient encoding over a bandwidth ranging from a few kilobits per second to several megabits per second. Hence, transporting H.264 video is expected to be an important component of many wireless multimedia services, such as video conferencing, real-time network gaming, and TV broadcasting. However, owing to wireless channel characteristics and the lack of QoS support, the basic 802.11 channel access procedure is sufficient only for non-real-time traffic. Delivery should be augmented by mechanisms that take the different QoS requirements into account and adjust the medium access parameters to the characteristics of the video content, making transmission feasible for high- as well as low-traffic users.
H.264 STANDARD OVERVIEW
H.264 consists of two conceptually different layers. First, the video coding layer (VCL) contains the specification of the core video compression engine, which performs basic functions such as motion compensation, transform coding of coefficients, and entropy coding.
This layer is transport-unaware, and its highest data structure is the video slice, a collection of coded macroblocks (MBs) in scan order. Second, the network abstraction layer (NAL) is responsible for encapsulating the coded slices into the transport entities of the network. In this H.264 overview, we focus in particular on the NAL features and transport possibilities.
The emerging H.26L standard has a number of features that distinguish it from existing standards, while at the same time sharing common features with other standards in use worldwide.
The following points are the key features of H.26L:
- Up to 50% in bit rate savings: Compared to H.263v2 (H.263+) or MPEG-4 Simple Profile, H.26L permits an average reduction in bit rate by up to 50% for a similar degree of encoder optimization at most bit rates.
- High quality video: H.26L offers consistently high video quality at all bit rates, including low bit rates.
- Error resilience: H.26L provides the tools necessary to deal with packet loss in packet networks and bit errors in error-prone wireless networks.
NETWORK ABSTRACTION LAYER
The NAL defines an interface between the video codec itself and the transport world. It operates on NAL units (NALUs), which improve transport abilities over almost all existing networks. An NALU consists of a one-byte header and a bit string that represents the bits constituting the MBs of a slice. The header byte itself consists of an error flag, a disposable-NALU flag, and the NALU type. Finally, the NAL provides a means to transport high-level syntax (i.e., syntax assigned to more than one slice, e.g., to a picture, a group of pictures, or an entire sequence).
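As a rough illustration of how the one-byte NALU header can be split into the fields named above, consider the following sketch. The exact bit positions (1-bit error flag, 2-bit priority/disposability field, 5-bit NALU type) are an assumption for illustration, not a normative parse of the H.264 bitstream syntax.

```python
def parse_nalu_header(header_byte):
    """Split a one-byte NALU header into its three fields.

    Assumed layout (illustrative): bit 7 = error flag,
    bits 6-5 = priority/disposable flag, bits 4-0 = NALU type.
    """
    return {
        "error_flag": (header_byte >> 7) & 0x1,
        "priority": (header_byte >> 5) & 0x3,
        "nalu_type": header_byte & 0x1F,
    }
```

For example, a header byte of 0x65 would decode to priority 3 and NALU type 5 under this assumed layout.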
MPEG-4 is a patented collection of methods defining compression of audio and visual (AV) digital data. It was introduced in late 1998 and designated a standard for a group of audio and video coding formats and related technology agreed upon by the ISO/IEC Moving Picture Experts Group (MPEG) under the formal standard ISO/IEC 14496. Uses of MPEG-4 include compression of AV data for web (streaming media) and CD distribution, voice (telephone, videophone) and broadcast television applications.
One of the driving forces of the next wireless LAN (WLAN) generation is the promise of high-speed multimedia service. Providing multimedia services to mobiles and fixed users through wireless access can be a reality with the development of:
- Two high-speed physical (PHY) layers, IEEE 802.11g (54 Mb/s) and IEEE 802.11n (100 Mb/s)
- The new IEEE 802.11e quality of service (QoS)-based medium access control (MAC) layer
Initially, MPEG-4 was aimed primarily at low bit-rate video communications; however, its scope as a multimedia coding standard was later expanded. MPEG-4 is efficient across a variety of bit rates ranging from a few kilobits per second to tens of megabits per second. MPEG-4 provides the following functionalities:
- Improved coding efficiency
- Ability to encode mixed media data (video, audio, speech)
- Error resilience to enable robust transmission
- Ability to interact with the audio-visual scene generated at the receiver
Adaptive streaming is a process that adjusts the quality of a video delivered to a web page based on changing network conditions to ensure the best possible viewer experience. Internet connection speeds vary widely, and the speed of each type of connection also varies depending on a wide variety of conditions. For example, if a user connects to an ISP at 56 Kbps, that does not mean that 56 Kbps of bandwidth is available at all times. Bandwidth can vary, meaning that a 56-Kbps connection may decrease or increase based on current network conditions, causing video quality to fluctuate as well. Adaptive streaming adjusts the bit rate of the video to adapt to changing network conditions.
Adaptive streaming simplifies content creation and management, making streaming video easy to deploy without requiring any custom programming.
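The core of the rate-adjustment idea above can be sketched as a simple selection rule: given a measured throughput, pick the highest encoding rate that still fits within a safety margin. The bit-rate ladder and the 80% headroom factor here are illustrative assumptions, not values from any particular streaming system.

```python
def select_bitrate(measured_kbps, ladder=(56, 128, 256, 512, 1024), headroom=0.8):
    """Pick the highest rate in the ladder that fits within a
    safety margin of the currently measured throughput; fall back
    to the lowest rate when even that does not fit."""
    budget = measured_kbps * headroom
    chosen = ladder[0]
    for rate in ladder:
        if rate <= budget:
            chosen = rate
    return chosen
```

On a nominally 56-Kbps connection whose effective throughput has dropped, this rule would keep the stream at the lowest rung; as measured bandwidth recovers, the selected rate steps back up.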
System and Network Models (Simulations)
Simulations are conducted using the network simulator ns2. We used the network architecture shown below to simulate a service provided by an MPEG-4 server attached to node "S". The server sends data to the client attached to node "C". The client is an ns2 agent that extends the capabilities of the RTP sink by reporting statistics to the server.
The network is loaded with FTP streams carried over TCP, which allows the link between routers "R1" and "R2" to be congested to different degrees.
Classification of cross-layer designs
1. Top-down approach: the higher-layer protocols optimize their parameters and strategies at the next lower layer.
2. Bottom-up approach: the lower layers try to insulate the higher layers from losses and bandwidth variations.
3. Application-centric approach: the APP layer optimizes the lower layer parameters one at a time in a bottom-up (starting from the PHY) or top-down manner, based on its requirements.
4. MAC-centric approach: the APP layer passes its traffic information and requirements to the MAC, which decides which APP layer packets/flows should be transmitted and at what QoS level.
5. Integrated approach: strategies are determined jointly by all the open system interconnection (OSI) layers.
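To make the MAC-centric category (item 4) concrete, the following sketch shows an APP layer tagging packets with their frame type and a MAC deciding transmission order from that information. The frame-type priority values are hypothetical, chosen only to reflect the usual importance ordering of I, P and B frames.

```python
# Hypothetical importance ordering: I-frames most important,
# B-frames least (lower number = higher priority).
FRAME_PRIORITY = {"I": 0, "P": 1, "B": 2}

def mac_schedule(app_packets):
    """MAC-centric sketch: the APP layer passes frame-type
    information down; the MAC orders transmission by importance."""
    return sorted(app_packets, key=lambda p: FRAME_PRIORITY[p["type"]])
```

In a real design the MAC would also map each priority to an 802.11e access category rather than merely reordering a queue.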
Transmission by signalling
The system obtains the SNR (Signal-to-Noise Ratio) from the PHY layer and translates it into a throughput estimate for video rate control at the APP layer, which adjusts the compression ratio of the video encoder. When the wireless channel is unstable, the SNR drops rapidly; by adjusting the encoded bit rate of the video encoder in real time at the streaming server, the throughput can be controlled.
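A minimal sketch of this SNR-to-rate signalling path follows. The linear mapping over a 5-25 dB range and the rate bounds are illustrative assumptions; a real system would derive the mapping from measured throughput curves.

```python
def snr_to_encoder_rate(snr_db, max_rate_kbps=1024, min_rate_kbps=64):
    """Translate a PHY-layer SNR report into a target encoder bit
    rate (hypothetical linear mapping between 5 dB and 25 dB)."""
    if snr_db <= 5:
        return min_rate_kbps
    if snr_db >= 25:
        return max_rate_kbps
    frac = (snr_db - 5) / 20.0
    return int(min_rate_kbps + frac * (max_rate_kbps - min_rate_kbps))
```

The streaming server would call this on each SNR report and feed the result to the encoder's rate controller.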
Transmission by packet bursting
TS-DCF (Timestamp DCF) was proposed for video streaming. The timestamp of a video packet can be obtained from the RTP (Real-time Transport Protocol) header (Figure 2.3). Packets with the same timestamp belong to the same type of video frame (I/P/B), so TS-DCF transmits a video frame of a given type by packet bursting.
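The grouping step at the heart of this scheme can be sketched as follows: packets sharing an RTP timestamp (i.e., belonging to the same frame) are collected into one burst. This is only an illustration of the grouping logic, not the TS-DCF channel-access procedure itself.

```python
from itertools import groupby

def burst_groups(packets):
    """Group packets that share an RTP timestamp into bursts.

    Assumes the packet list is already sorted by timestamp, as it
    would be when drained from a per-flow transmit queue."""
    return [list(group) for _, group in groupby(packets, key=lambda p: p["ts"])]
```

Each returned group would then be sent back-to-back in a single channel-access opportunity.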
Transmission by TAR
The Time-based Adaptive Retry (TAR) mechanism was proposed by the CMU Advanced Multimedia Processing Lab. Under the classification above, TAR is a MAC-centric approach. TAR improves the retransmission mechanism for video packets in DCF.
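The time-based idea can be sketched as a retry predicate: a lost video packet is retransmitted only while its playout deadline has not passed, instead of being retried a fixed number of times as in plain DCF. The deadline field and retry cap here are illustrative assumptions.

```python
def should_retry(now_ms, deadline_ms, retry_count, max_retries=7):
    """Time-based retry sketch: keep retransmitting a lost packet
    only while its playout deadline is still in the future (and an
    upper retry bound, assumed here, has not been reached)."""
    return now_ms < deadline_ms and retry_count < max_retries
```

A packet whose deadline has already passed is dropped immediately, freeing the channel for packets that can still be played out.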
Transmission by classification of video slice
This approach maps the different video slices of H.264/AVC to different Access Categories (ACs) of IEEE 802.11e. Under the classification above, the method is a MAC-centric approach.
To support the varying Quality-of-Service (QoS) requirements of emerging applications, a new standard, IEEE 802.11e, has been specified. The 802.11e standard defines four access categories (ACs) with different transmission priorities. The transmission priority is the probability of successfully earning the chance to transmit when individual ACs compete to access the wireless channel; the higher the transmission priority, the better the opportunity to transmit. However, on a wireless channel, unavoidable burst loss, excessive delay, and limited bandwidth remain challenges for efficient multimedia transmission. Consequently, several advanced mechanisms based on 802.11e have been proposed to support multimedia transmission, and video transmission quality in particular. Most of these mechanisms improve performance by adjusting the operation of the 802.11e MAC, such as the Contention Window size, the TXOPlimit and the data transmission rate.
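A slice-to-AC mapping of the kind described above can be sketched as a simple lookup. The particular assignment below (IDR slices to the voice category, I/P slices to video, B slices to best effort) is a hypothetical example of such a mapping, not the assignment from any specific proposal.

```python
# Hypothetical mapping of H.264 slice importance to 802.11e ACs.
SLICE_TO_AC = {
    "IDR": "AC_VO",  # most important slices -> highest-priority AC
    "I":   "AC_VI",
    "P":   "AC_VI",
    "B":   "AC_BE",  # droppable slices -> best effort
}

def classify_slice(slice_type):
    """Return the 802.11e access category for a slice type,
    defaulting unknown types to background traffic."""
    return SLICE_TO_AC.get(slice_type, "AC_BK")
```

The MAC then queues each slice into the returned AC, so that loss under congestion falls preferentially on the least important slices.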
QOSC (Quality of Service Control) in MPEG-4
Adaptive Video Streaming
MPEG-4 enables different software and hardware developers to create multimedia objects possessing better adaptability and flexibility, improving the quality of services and technologies such as digital television, animation graphics, the World Wide Web and their extensions. The standard enables developers to better control their content and to fight more effectively against copyright violations. Data network providers can use MPEG-4 for data transparency: with the help of standard procedures, MPEG-4 data can be interpreted and transformed into other signal types compatible with any available network. The MPEG-4 format provides end users with a wide range of interaction with various animated objects. It also standardizes Digital Rights Management signaling, known in the MPEG community as Intellectual Property Management and Protection (IPMP).
Analysis of a Live P2P Video Multicast
Live multicast of a 4-hour baseball game to over 120K IP addresses on the Internet:
- Pre-planned live multicast, so additional bandwidth provisioning was possible
- Content format: CBR, WMV codec, 759 kbps audio+video stream at 29 fps and VGA resolution
- Content generated and mostly consumed in a developed country, hence good network infrastructure
- Mesh-based P2P algorithms used
The challenge was in rebuilding the macro-characteristics of the multicast session from the available log snippets.
QoS cross layer architecture with adaptive enhanced distributed channel
Proposed Architecture (PA)
The video coding layer and network abstraction layer of the existing architecture are used in the PA without modification (Fig. 2), and the mapping algorithm used in the PA is similar to that of the existing architecture. However, the MAC layer protocol has been changed from EDCA to AEDCA. In EDCA, the CWmin and CWmax values are statically set for each priority level, and after each successful transmission the CW value is reset to CWmin. In AEDCA, the CW value is reset more slowly, to adaptive values chosen by considering the current window sizes and the average collision rate, while maintaining priority-based discrimination; this avoids bursts of collisions. After each unsuccessful transmission, the new CW value is increased by a Persistence Factor (PF), and it is ensured that higher-priority traffic has a smaller PF than lower-priority traffic.
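The two AEDCA contention-window rules described above can be sketched as follows. The multiplicative-decrease rule after success, the clamping of the decrease factor, and the example PF values are illustrative assumptions; the point is only the contrast with EDCA's immediate reset to CWmin.

```python
def cw_after_failure(cw, priority_pf, cw_max):
    """After an unsuccessful transmission the contention window
    grows by the Persistence Factor (PF); higher-priority traffic
    is assigned a smaller PF and so backs off less aggressively."""
    return min(int(cw * priority_pf), cw_max)

def cw_after_success(cw, cw_min, avg_collision_rate):
    """After a successful transmission AEDCA shrinks CW gradually
    toward CWmin instead of resetting it at once, shrinking faster
    when collisions are rare (hypothetical decrease rule with the
    factor clamped to at least 0.5)."""
    new_cw = int(cw * max(avg_collision_rate, 0.5))
    return max(new_cw, cw_min)
```

Because the window decays gradually, stations that just transmitted do not all jump back to CWmin at once, which is what suppresses the burst collisions mentioned above.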
Challenges in Cross Layer Design for Multimedia Transmission over Wireless
Over the last few years a number of new protocols have been developed for multimedia applications across the whole OSI layer stack. To name a few, the RTP and RTCP protocols, which operate at the transport layer, usually on top of UDP, were designed especially for multimedia data transmission. The RTSP protocol offers control mechanisms for real-time multimedia transmission, whereas SIP and H.323 are used in multimedia conferencing.
Apart from the above developments, there have been a number of proposals for improving QoS in multimedia applications through cross-layer adaptation strategies. For example, the need for cross-layer optimization has been examined and an adaptation framework proposed spanning the APP, MAC and Physical (PHY) layers.
The cross-layer architectures available in the literature can be divided into the following categories:
- Creation of new interfaces: Several cross-layer designs require creation of new interfaces between the layers. The new interfaces are used for information sharing between the layers at runtime.
- Merging of adjacent layers: Another way to do cross-layer design is to design two or more adjacent layers together such that the service provided by the new layer is the union of the services provided by the constituent layers.
- Design coupling without new interfaces: Another category of cross-layer design involves coupling two or more layers at design time without creating any extra interfaces for information sharing at runtime.
- Vertical calibration across layers: Adjusting parameters that span across layers.
Many signaling issues between the layers for cross-layer optimization over wireless networks are also examined in this research. Various proposals exist, including a new signaling framework in which signaling can be carried out between two non-neighbouring layers through lightweight messages, with a message control mechanism to avoid message dissemination overflow. Although this proposal avoids the heavy ICMP messages proposed earlier for out-of-band signaling between the layers, it introduces very high complexity.
Packet transmission is made with a novel scheduling algorithm at the MAC layer whose function is based on the user and application priority levels.
Priorities are assigned to users on the basis of paid services, in which users are classified into groups with different QoS levels.
In this research, we propose a cross-layer architecture for H.264 (MP4) video transmission over IEEE 802.11e wireless networks. Our cross-layer architecture covers the complete transmission chain, from application-layer source coding to wired and wireless channel models, the required cross-layer signalling mechanisms, and full network functionality.
We also analyzed the benefits of cross-layer designs by determining a set of transmissions schemes and combining them through time sharing.
We have also proposed several protocol solutions that are optimal in terms of the network overhead they cause. At the protocol level, we proposed several new enhancements to standard solutions, such as dynamic modifications to the transport- and MAC-layer partial checksum, adaptive video transmission optimization, and cross-layer information delivery at the data link layer. The simulation results show that the proposed scheme outperforms EDCA and the static mapping schemes under both light and heavy load. This cross-layer optimization makes efficient use of network resources by allocating them only to links carrying traffic.