Mini Aerial Vehicle

Chapter 1 : Introduction and Project Overview

1.1 Introduction

The final year project (FYP) is a milestone for an undergraduate: the opportunity to prove that a student is capable of developing a project based on the knowledge gained throughout the years of study at Multimedia University, in lecture classes and through the completion of assignments.

1.2 Final Year Project Report Objective

* To present the work done in good quality: short but compact, with sufficient details of the project

* To produce a report with a logical flow and strong explanations, so that it is more understandable to others

* To assist students in preparing a report in accordance with the standards set by an accrediting body

* To be useful to students seeking jobs after graduation, as proof of the ability to develop a given subject independently or in cooperation with others

1.3 Overview of Project Title

The flapping wing surveillance mini aerial vehicle is an embedded imaging robot capable of carrying a visual processing system while flying. The aim of this project is to develop a mini aerial vehicle with video processing ability for visual information gathering while attracting minimal attention from the public. The intended applications are military reconnaissance and simultaneous localization and mapping outdoors, and a movable surveillance camera indoors. A PIC18F2620 is the main microcontroller; it controls the retrieval of visual images and processes them either for storage on an external storage medium or as a live image feed to a terminal. Images are captured with a small camera called the uCAM, a serial JPEG camera module that takes pictures over the EUSART communication protocol. The image data is processed and stored on an external storage medium (SD card) over SPI. The MAV type chosen to carry the camera system is a flapping wing MAV, or ornithopter.
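As an illustration of the camera interface described above, the sketch below shows how the PIC18F2620 EUSART might be initialized and a uCAM synchronization command sent. The 6-byte command frame and SYNC/ACK identifiers follow the general outline of the uCAM serial protocol, but the exact bytes, baud-rate value and register settings here are assumptions and should be verified against the uCAM manual and the PIC18F2620 data sheet.

```c
#include <xc.h>
#include <stdint.h>

/* Initialize the EUSART for asynchronous serial. The SPBRG value
   assumes Fosc = 16 MHz with BRGH = BRG16 = 1 for ~115200 baud;
   adjust to the actual clock configuration.                       */
void eusart_init(void)
{
    TRISCbits.TRISC6 = 1;      /* per data sheet, TRISC<7:6> = 1   */
    TRISCbits.TRISC7 = 1;      /* so the EUSART controls the pins  */
    TXSTAbits.BRGH   = 1;      /* high-speed baud rate generator   */
    BAUDCONbits.BRG16 = 1;     /* 16-bit baud rate generator       */
    SPBRG = 34;                /* assumption: ~115200 @ 16 MHz     */
    TXSTAbits.SYNC   = 0;      /* asynchronous mode                */
    RCSTAbits.SPEN   = 1;      /* enable serial port               */
    TXSTAbits.TXEN   = 1;      /* enable transmitter               */
    RCSTAbits.CREN   = 1;      /* enable continuous receive        */
}

static void eusart_put(uint8_t b)
{
    while (!PIR1bits.TXIF)     /* wait until TXREG is empty        */
        ;
    TXREG = b;
}

static uint8_t eusart_get(void)
{
    while (!PIR1bits.RCIF)     /* block until a byte is received   */
        ;
    return RCREG;
}

/* Send one 6-byte uCAM command frame (every frame starts 0xAA). */
static void ucam_cmd(uint8_t id, uint8_t p1, uint8_t p2,
                     uint8_t p3, uint8_t p4)
{
    eusart_put(0xAA);
    eusart_put(id);
    eusart_put(p1); eusart_put(p2); eusart_put(p3); eusart_put(p4);
}

/* Repeat SYNC (ID 0x0D) until the camera answers with an ACK frame
   (ID 0x0E). Simplified: the full handshake also has the camera
   send its own SYNC, which the host must acknowledge.             */
void ucam_sync(void)
{
    for (;;) {
        ucam_cmd(0x0D, 0x00, 0x00, 0x00, 0x00);
        if (eusart_get() == 0xAA && eusart_get() == 0x0E)
            break;             /* ACK received, camera is awake    */
    }
}
```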

Chapter 2 : Literature Review and Theoretical Background

2.1 Mini Aerial Vehicle

2.1.1 Background on Unmanned Aerial Vehicle and Mini Aerial Vehicle

The mini aerial vehicle (MAV) is one of the categories of unmanned aerial vehicles (UAVs). UAVs are commonly used in military applications for reconnaissance, environmental observation and maritime surveillance. They are also used in non-military applications such as environmental observation, rice paddy remote sensing and infrastructure maintenance. The term UAV covers all engineered flying objects, mechanical and electronic, that fly without a pilot on board while remaining controllable. Remotely controlled aerial vehicles are defined by the Unmanned Vehicle Systems International Association as micro, mini, close range and medium range UAVs, depending on their size, endurance, range and flight altitude. The UVS community definitions into which a vehicle may fit are listed in Table 2.1 below; all other aircraft outside these categories fall into the general 'High Altitude Long Endurance' group.

| Category Name            | Mass (kg)  | Range (km) | Flight Altitude (m) | Endurance (hours) |
|--------------------------|------------|------------|---------------------|-------------------|
| Micro                    | <5         | <10        | 250                 | 1                 |
| Mini                     | <25/30/150 | <10        | 150/250/300         | <2                |
| Close Range              | 25-150     | 10-30      | 3000                | 2-4               |
| Medium Range             | 50-250     | 30-70      | 3000                | 3-6               |
| High Alt. Long Endurance | >250       | >70        | >3000               | >6                |

Table 2.1: Categories of UAV

The development of UAVs was strongly motivated by military applications after World War II, when nations were looking for aerial vehicles that could replace the deployment of human beings in high-risk areas for surveillance, reconnaissance and penetration of hostile terrain. The development of insect-sized UAVs is reportedly expected in the near future. Although military use is one of the motivating factors for advancing UAV development, UAVs are also used commercially in scientific, police and mapping applications, for hazardous terrain and places inaccessible by ground.

There are three types of MAVs under consideration: airplane-like fixed wing models, bird- or insect-like ornithopter (flapping wing) models, and helicopter-like rotary wing models. Each type has its own advantages and disadvantages depending on the scenario in which it is used. Fixed wing MAVs achieve longer flight times and higher efficiency, but are generally hard to use indoors because they cannot hover or turn tight corners, so they are suited to tasks that require extended loitering times. Rotary wing MAVs allow hovering and movement in any direction, at the cost of shorter flight time, since the rotor must keep working to maintain the vehicle's altitude. Flapping wings offer the most potential in miniaturization and maneuverability, but lack the power to carry much load onboard. The figure below shows the three types of MAVs.

2.1.2 Ornithopter (flapping wing) model

A common belief about how ornithopters or birds fly is that they produce the lift force by flapping their wings; in fact they produce lift the same way an airplane does, simply by their forward motion through the air. Birds move through the air with their wings held in a fixed position when gliding. Held at a slight angle, the wings deflect the air gently downward, producing a reaction force opposite to the pushed-down air. This force, called lift, follows from Newton's third law: for any force applied to an object, a force of the same magnitude but opposite direction is exerted back on whatever applied it. Lift acts perpendicular to the wing surface and prevents the bird from falling. Figure 2.4 shows how the lift force is produced when the air is directed downward. The bird eventually slows down in the presence of air resistance, or drag, on its body and wings, until it no longer has enough speed to continue flying. To prevent this, the bird can lean forward a little and go into a shallow dive; the wings then produce a lift force angled slightly forward, which helps the bird speed up. The bird sacrifices some height in exchange for an increase in speed. In general, the bird is constantly losing altitude relative to the surrounding air in order to maintain the speed it needs to keep flying. Figure 2.5 shows how birds maintain speed by a slight dive.

The slight angle of the wings that lets them deflect the air downward and produce lift is called the angle of attack. If the angle of attack is too great, the wing suffers a lot of air resistance; if it is too small, the wing does not produce sufficient lift. The best angle depends on the shape of the wing, and what matters is the angle relative to the direction of travel. An ornithopter wing is usually made up of a thin fabric membrane, which takes on a curved or cambered shape when pushed against the air. Bird wings have a more rounded leading edge, which helps reduce air resistance.
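The dependence of lift on speed and angle of attack described above is often summarized with the standard aerodynamic lift equation (not given in the original text, but a common way to make the relationship concrete):

```latex
% Lift L grows with air density rho, airspeed v, wing area S, and the
% lift coefficient C_L, which rises with angle of attack alpha up to
% the stall angle -- hence "too small an angle, too little lift".
L = \tfrac{1}{2}\,\rho\, v^{2}\, S\, C_L(\alpha)
```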

Bird wings flap with an up-and-down motion while the whole body moves forward. There is very little up-and-down movement close to the bird's body, but toward the wingtips the vertical motion has much greater amplitude. As the bird flaps along, it must maintain the correct angle of attack all along its wingspan. Since the outer part of the wing moves more steeply than the inner part, the wing has to twist so that each part can keep the correct angle of attack. The wing twists automatically if it is flexible enough, as shown in figure 2.8.

As the wing moves downward and twists, the outer part behaves as if it were in a steep dive, so the lift force there is angled forward. However, it is only the wing that is moving downward, not the whole bird, so the bird can generate a large amount of forward propulsive force, or thrust, without losing altitude, as shown in figure 2.9. The air is deflected not only downward but also toward the rear of the bird; it is forced back just as it would be by the propeller of an airplane.

On the other hand, many people believe the upstroke of the wings somehow cancels the lift produced during the downstroke, but this too is controlled by the angle of attack, and birds do make the upstroke more efficient. Figure 2.10 shows that the outer part of the wing points straight along its direction of travel so it can pass through the air with the least possible air resistance; in other words, the angle of attack is reduced. In addition, the bird partially folds its wings to reduce the wingspan and eliminate the drag of the outer part. The inner part of the wing is different: there is little up-and-down movement there, so that part continues to provide lift simply as a result of its forward motion. The bird's body rises and falls slightly in flight because the inner part of the wings produces lift during the upstroke, while the upstroke as a whole provides less lift than the downstroke. So, as in an airplane, the lift and thrust functions are separated: the inner part of the wings produces lift and the outer part provides thrust. Figure 2.11 shows that the inner part of the wing produces lift even during the upstroke.
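The wing-twist argument above can be made precise. At a spanwise station a distance r from the body, the flapping motion adds a vertical velocity that tilts the local relative wind; the sketch below is my formalization of the text's reasoning, with the flapping angular rate denoted \omega and the forward speed U (assumed notation, not from the source):

```latex
% During the downstroke the wing section at spanwise distance r moves
% down with speed  \omega r,  so the local relative wind is tilted by
\varphi(r) = \arctan\frac{\omega\, r}{U}
% The tilt grows toward the tip. Each section must therefore be
% pitched (twisted) by about \varphi(r) to keep its angle of attack,
% which also tilts the local lift vector forward -- producing thrust,
% exactly as described for the outer part of the wing.
```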

2.2 Surveillance Camera System

2.2.1 Image sensor

An image sensor is a device that converts an optical image into electronic signals; in other words, it converts light into electrons. Early sensors were video camera tubes; today, image sensors fall into two categories, the charge-coupled device (CCD) and the complementary metal-oxide-semiconductor (CMOS) active-pixel sensor, and most digital cameras use one or the other. An easy way to understand the conversion of light from an imaged object is to imagine the sensor as a 2-D array of thousands or millions of tiny solar cells, each of which transforms the light reflected from one small portion of the image into electrons. Both CCD and CMOS devices perform this task, using different technologies; the next step in either case is to read out the value (accumulated charge) of each cell in the array. Several parameters can be used to evaluate the performance of an image sensor, including its dynamic range, its signal-to-noise ratio and its low-light sensitivity. An image sensor alone produces only a gray-scale picture; a color separation mechanism must be paired with it to produce color images, and the most common color image sensor today is the Bayer-filter sensor. Image sensors come in a variety of sizes, with the smallest used in point-and-shoot cameras and the largest in professional SLRs. Consumer SLRs often use sensors the same size as a frame of Advanced Photo System (APS) film, and professional SLRs occasionally use sensors the same size as a frame of 35 mm film. Larger image sensors generally have larger photosites that capture more light with less noise.

Some typical sensor sizes are shown in Table 2.2 below.

| Size       | Width (mm) | Height (mm) |
|------------|------------|-------------|
| 1/4"       | 3.2        | 2.4         |
| 1/3"       | 4.8        | 3.6         |
| 1/2"       | 8          | 6.4         |
| 2/3"       | 11         | 8.8         |
| 1"         | 16         | 12.8        |
| APS-C      | 22.2       | 14.8        |
| Full Frame | 36         | 24          |

Table 2.2: Typical sensor sizes

2.2.2 Charge-coupled Device (CCD)

A charge-coupled device (CCD) uses a special manufacturing process that lets it transport accumulated charge within the photoactive region to a region where the charge can be processed, by shifting the charge signals between stages within the device one at a time. CCDs are implemented as shift registers that move charge between capacitive bins in the device. When an image is projected through a lens onto the capacitor array (the photoactive region), each capacitor accumulates an electric charge proportional to the light intensity at that location. The fundamental light-sensing unit of the CCD is a metal-oxide-semiconductor (MOS) capacitor operated as a photodiode and storage buffer. A one- or two-dimensional array captures a correspondingly one- or two-dimensional picture of the scene projected onto the focal plane of the sensor. Once the array has been exposed to the light from an image, a control circuit causes each capacitor to transfer its contents to its neighbor, operating as a serial shift register. The last capacitor in the array shifts its charge into a charge amplifier or metering register, which converts the charge into a corresponding voltage. By repeating this process, the controlling circuit converts the entire contents of the array into a sequence of voltages. In a digital device these voltages are sampled, digitized and stored in a memory block; in an analog device they are processed into a continuous analog signal, which is then sent to other circuits for transmission, recording or other processing. A single frame capture with a full-frame CCD camera system can be summarized as follows (a readout sketch follows the list):

* Camera shutter is opened to allow accumulation of photoelectrons, with the gate electrodes biased appropriately for charge collection.

* At the end of the exposure period, the shutter is closed and the accumulated charge in the pixels is shifted row by row across the parallel register under clock control signals. Rows of charge packets are transferred in sequence from one edge of the parallel register into the serial shift register.

* Charge contents of pixels in the serial register are transferred one pixel at a time into an output node to be read by a charge amplifier, which boosts the electron signal and converts it into an analog voltage signal.

* An analog-to-digital converter (ADC) assigns a digital value to each pixel according to its voltage amplitude, and the value is stored in a memory buffer.

* The serial readout process is repeated until all pixel rows of the parallel register are emptied.
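To make the readout sequence concrete, here is a toy simulation of the parallel-then-serial shift described above. It is purely illustrative: the array size, the 10-bit ADC, the full-well value and the function names are my own assumptions, not from the source.

```c
#include <stdio.h>
#include <stdint.h>

#define ROWS 4          /* parallel register rows (toy size)        */
#define COLS 6          /* pixels per row / serial register length  */

/* Toy ADC: map an accumulated "charge" to a 10-bit code, assuming
   an arbitrary full-well capacity of 1000 electrons.               */
static uint16_t adc_convert(double charge)
{
    double code = charge / 1000.0 * 1023.0;
    if (code > 1023.0) code = 1023.0;     /* clip at full scale     */
    return (uint16_t)code;
}

int main(void)
{
    /* Charge accumulated during exposure (made-up values). */
    double pixel[ROWS][COLS] = {
        { 120, 340, 560, 780, 900, 1000 },
        {  90, 210, 430, 650, 870,  990 },
        {  60, 180, 300, 420, 540,  660 },
        {  30, 150, 270, 390, 510,  630 },
    };
    double serial_reg[COLS];

    /* Row by row: shift one row in parallel into the serial
       register, then clock the serial register out one pixel at a
       time into the output amplifier / ADC, as in the list above.  */
    for (int r = 0; r < ROWS; r++) {
        for (int c = 0; c < COLS; c++)
            serial_reg[c] = pixel[r][c];      /* parallel transfer  */

        for (int c = 0; c < COLS; c++)        /* serial readout     */
            printf("%4u ", (unsigned)adc_convert(serial_reg[c]));
        printf("\n");
    }
    return 0;
}
```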

CCD image sensors can be manufactured in several different architectures. The most common are full-frame, frame-transfer and interline (see figure 2.15), and the distinguishing characteristic of each architecture is its approach to the problem of shuttering. The full-frame CCD features high-density pixel arrays capable of producing digital images at the highest resolution. In a full-frame device (figure 2.12), all of the image area is photoactive and there is no electronic shutter. The imaging surface, which forms the parallel shift register, must be protected from incident light during readout, so a mechanical shutter must be added or the image smears as the device is clocked out. Charge accumulated while the shutter is open is transferred and read out after the shutter closes: the rows of image information are shifted in parallel, one row at a time, into the serial shift register, and the serial register then shifts each row sequentially to an output amplifier as a data stream. Because the exposure and readout steps cannot occur simultaneously, frame rates are limited by the mechanical shutter speed, the charge transfer rate and the readout steps.

Frame-transfer CCDs can operate at faster frame rates than full-frame devices because exposure and readout can occur simultaneously, with various degrees of overlap in timing. They are similar to full-frame devices in the structure of the parallel register, but half of the silicon surface is covered by an opaque mask, typically made of aluminum, which serves as an image storage buffer for the photoelectrons gathered by the unmasked photoactive region. The image can be transferred quickly from the photoactive area to the storage region with a small amount of smear, a few percent, which is acceptable. The stored image can then be read out slowly while a new image is integrating, or exposing, in the photoactive area. A camera shutter is not necessary because the time required for charge transfer from the image area to the storage area is only a fraction of the time needed for a typical exposure, as illustrated in figure 2.13. A common disadvantage of the frame-transfer architecture is that it requires twice the silicon real estate of an equivalent full-frame device; hence, it costs almost twice as much.

The interline architecture is designed to compensate for many of the shortcomings of the frame-transfer CCD. These devices have a hybrid structure that incorporates a separate photodiode and an associated parallel-readout CCD storage region into each pixel element. The two regions are isolated by a metallic mask placed over the light-shielded parallel-readout CCD area. In this design, columns of active imaging pixels and masked storage/transfer pixels alternate over the entire parallel register array. Because a charge-transfer channel is located immediately adjacent to each photosensitive pixel column, stored charge need only be shifted one column into the transfer channel. This single transfer step can be performed within milliseconds, after which the storage array is read out by a series of parallel shifts into the serial shift register while the image area is being exposed for the next image. The architecture allows very short integration periods through electronic control of the exposure interval: without a mechanical shutter, the array can be rendered effectively light-insensitive by discarding accumulated charge rather than shifting it into the transfer channel, and smear is essentially eliminated. These advantages come at a cost, however, as the imaging area is now covered by opaque strips, dropping the fill factor to approximately 50 percent and the effective quantum efficiency by an equivalent amount. Modern designs address this drawback by covering the surface of the device with microlenses that divert light away from the opaque regions and onto the active area. Microlenses can bring the fill factor back up to 90 percent or more, depending on pixel size and the overall optical design of the system.

Most common types of CCDs are sensitive to near-infrared light, which allows infrared photography, night-vision devices, and zero-lux (luminance) photography. For normal silicon-based sensors, the sensitivity extends to about 1.1 µm. As a consequence of this infrared sensitivity, light from remote controls and other infrared-emitting devices often appears in images from CCD-based digital cameras unless an infrared filter is placed above the imaging area to block infrared wavelengths and pass only visible light. Cooling reduces the array's dark current and thermal noise, improving the sensitivity of the CCD to low light intensities, even at ultraviolet and visible wavelengths.
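The 1.1 µm figure follows from the band gap of silicon: a photon can only free an electron if its energy exceeds the gap, which sets a cutoff wavelength. This back-of-the-envelope check is mine, not from the source:

```latex
% Photon energy E = hc / lambda must exceed silicon's band gap
% E_g \approx 1.12 eV, giving the long-wavelength cutoff:
\lambda_{\max} = \frac{h c}{E_g}
             \approx \frac{1240\ \mathrm{nm \cdot eV}}{1.12\ \mathrm{eV}}
             \approx 1.1\ \mu\mathrm{m}
```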

Although CCDs are not inherently color sensitive, three approaches are commonly employed to produce color images with a CCD camera system in order to capture the visual appearance of an object. Acquiring color images with a CCD requires that red, green and blue wavelengths be separated by color filters, acquired separately, and subsequently combined into a composite color image. Each approach has strengths and weaknesses, and all impose constraints that limit speed, reduce dynamic range, lower temporal and spatial resolution, and increase noise in color cameras compared with gray-scale cameras. The most common approach is to mask the CCD pixel array with alternating red, green and blue (RGB) microlens filters arranged in a specific pattern, usually the Bayer mosaic pattern. Alternatively, in a three-chip design, the image is divided by a beam-splitting prism and color filters into three (RGB) components, which are captured by separate CCDs whose outputs are recombined into a color image. The third approach is a frame-sequential method that uses a single CCD to capture a separate image for each color in turn, by switching color filters placed in the illumination path or above the photoactive area.
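Since each pixel behind a Bayer mosaic records only one of the three colors, the two missing values at every pixel must be interpolated from neighbors (demosaicing). The sketch below reconstructs just the red plane with a nearest-neighbor fill for an assumed RGGB tile layout; the layout and function names are illustrative assumptions, and real cameras use more sophisticated interpolation:

```c
#include <stdint.h>
#include <stddef.h>

/* Assumed RGGB Bayer tile:   even row: R G     odd row: G B       */

/* Nearest-neighbor demosaic of the red plane: every pixel in a 2x2
   tile takes the tile's single red sample (at the even row/column).
   The green and blue planes would be filled analogously.           */
void demosaic_red_nn(const uint8_t *raw, uint8_t *red,
                     size_t w, size_t h)
{
    for (size_t y = 0; y < h; y++) {
        size_t ry = y & ~(size_t)1;          /* even row of tile    */
        for (size_t x = 0; x < w; x++) {
            size_t rx = x & ~(size_t)1;      /* even column of tile */
            red[y * w + x] = raw[ry * w + rx];  /* copy R sample    */
        }
    }
}
```

Bilinear interpolation, which averages the two or four nearest same-color samples instead of copying one, removes most of the blockiness this nearest-neighbor fill leaves behind.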

2.2.3 CMOS Image Sensors

The term CMOS image sensor refers to the process by which the sensor is manufactured, not to a specific light-sensing technology. CMOS sensors have a light-sensing mechanism similar to the CCD's, taking advantage of the photoelectric effect, in which photons striking the crystalline silicon excite electrons from the valence band into the conduction band. CMOS sensors have low power consumption, a master clock, and a single-voltage power supply. When the specially doped silicon semiconductor material is exposed to visible light across a wide wavelength band, electrons are released in numbers proportional to the light intensity striking the surface of the photodiodes. The electrons are collected in a potential well until the exposure is finished, then converted into voltages before passing to an ADC, which forms the digital electronic representation of the imaged object. A CMOS image sensor can integrate a number of processing and control functions, beyond the primary task of photon collection, directly onto the sensor integrated circuit; these typically include timing logic, white balance, ADC, shutter control, gain adjustment and image-processing algorithms. Because of this capability, the CMOS circuit architecture resembles a random-access memory more than a simple photodiode array. The most popular CMOS designs are based on active pixel sensor (APS) technology, in which each pixel incorporates both a photodiode and a readout amplifier; the accumulated charge is converted into an amplified voltage inside each pixel and then transferred sequentially to the signal-processing area.
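The "random-access" character mentioned above is what lets a CMOS sensor read out an arbitrary window of pixels, something a CCD's bucket-brigade shift cannot do. The toy illustration below addresses a region of interest directly by row and column; the array size and function names are my own assumptions:

```c
#include <stdio.h>
#include <stdint.h>

#define W 8
#define H 8

/* Read out only a rectangular region of interest (ROI), addressing
   pixels directly by row and column. This is possible because each
   CMOS APS pixel has its own amplifier and can be selected at will,
   unlike a CCD, which must shift out the whole array in order.     */
static void read_roi(uint8_t frame[H][W],
                     int x0, int y0, int w, int h)
{
    for (int y = y0; y < y0 + h; y++) {
        for (int x = x0; x < x0 + w; x++)
            printf("%3u ", (unsigned)frame[y][x]);
        printf("\n");
    }
}

int main(void)
{
    uint8_t frame[H][W];

    /* Fill with a gradient as stand-in pixel data. */
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            frame[y][x] = (uint8_t)(y * W + x);

    read_roi(frame, 2, 2, 4, 3);   /* read a 4x3 window at (2,2)    */
    return 0;
}
```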
