When a photograph is taken in low-light conditions or of a fast-moving object, motion blur can cause significant degradation of the image. This is caused by relative movement between the object and the camera sensor while the shutter is open; both object motion and camera shake contribute to the blurring. The problem is particularly apparent in low light, when the exposure time can often be in the region of several seconds. Many methods are available for preventing motion blur at the time of image capture, and also for post-processing images to remove it afterwards. As well as in everyday photography, the problem is particularly important in applications such as video surveillance, where low-quality cameras are used to capture sequences of photographs of moving objects (usually people). Currently adopted techniques can be categorised as follows:

  • Better hardware in the optical system of the camera to stabilise it and prevent blur at capture time.
  • Post-processing of the image to remove blur by estimating the camera's motion:
      • from a single photograph (blind deconvolution)
      • from a sequence of photographs
  • A hybrid approach that measures the camera's motion during photograph capture.


Image blur is a common problem. It may be due to the point spread function of the sensor, sensor motion, or other reasons.

The linear model of the observation system is given as

g(x, y) = f(x, y) * h(x, y) + w(x, y)
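
Here f(x, y) is the ideal image, h(x, y) the blurring point-spread function, w(x, y) additive noise, and * denotes 2-D convolution. As an informal illustration of this model only (a sketch assuming MATLAB with the Image Processing Toolbox; cameraman.tif is one of the toolbox's sample images and the Gaussian PSF is just an example choice), an observed image can be simulated as follows:

    % Sketch: simulate g = f * h + w with an example Gaussian PSF
    f = im2double(imread('cameraman.tif'));   % ideal image f(x, y)
    h = fspecial('gaussian', [9 9], 2);       % example blurring PSF h(x, y)
    g = imfilter(f, h, 'conv', 'circular');   % 2-D convolution f(x, y) * h(x, y)
    g = g + 0.01 * randn(size(g));            % additive noise w(x, y)
    imshow(g)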


The blurring or degradation of an image can be due to many factors, such as:

1. Relative motion between the camera and the subject during image capture, or comparatively long exposure times.

2. Defocus of the lens, use of a highly convex lens, wind, or a short exposure time that reduces the number of photons captured.

3. Scattered-light disturbance in confocal microscopy.


In television sports, where conventional camera lenses expose pictures 25 or 30 times per second, motion blur can be inconvenient because it obscures the exact position of a projectile or athlete in slow motion. Special cameras are used in these cases, which eliminate motion blurring by taking each exposure in 1/1000 of a second and then transmitting it over the course of the next 1/25 or 1/30 of a second. Although this gives sharper, clearer slow-motion replays, it can look unnatural at normal speed because the eye expects to see motion blur. The process of deconvolution can sometimes remove motion blur from images.


The first step, following the linear model given above, is to create a point-spread function that adds blur to an image. The blur was created using a PSF filter in MATLAB that approximates linear motion blur. This PSF was then convolved with the original image to generate a blurred image. Convolution is a mathematical process by which a signal is mixed with a filter to produce a resulting signal; here the signal is the image and the filter is the PSF. The amount of blur added to the original image depends on two parameters of the PSF: the length of the blur and its angle. These attributes can be adjusted to generate different amounts of blur, but in most practical cases a length of 31 pixels and an angle of 11 degrees were found to be sufficient to add motion blur to the image. A sketch of this step is given below.
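
A minimal MATLAB sketch of this blurring step (assuming the Image Processing Toolbox; fspecial('motion', ...) is one standard way to build such a linear-motion PSF, and cameraman.tif is simply an illustrative input):

    % Sketch: build a linear-motion PSF and convolve it with the original image
    I   = im2double(imread('cameraman.tif'));     % original image
    PSF = fspecial('motion', 31, 11);             % blur length 31 pixels, angle 11 degrees
    B   = imfilter(I, PSF, 'conv', 'circular');   % convolving the image with the PSF gives the blurred image
    imshow(B)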


After a chosen amount of blur was mixed into the original image, an attempt was made to restore the blurred image and recover the original. This can be achieved using several algorithms. In our treatment, a blurred image, i, results from:

i(x, y) = s(x, y) * o(x, y) + n(x, y)

Here 's' is the PSF, which is convolved with the ideal image 'o'. Additionally, some additive noise, 'n', may be introduced during image capture. The well-known inverse filter employs linear deconvolution. Because the inverse filter is a linear filter, it is computationally cheap, but it gives poorer results in the presence of noise.
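
As a rough illustration of the idea (a sketch only, not the exact implementation used in this work), a regularised inverse filter can be applied in the frequency domain. The file name, the motion PSF and the small constant reg are all illustrative choices; with reg set to zero this reduces to the pure inverse filter, which amplifies noise badly:

    % Sketch: naive regularised inverse filter in the frequency domain
    o       = im2double(imread('cameraman.tif'));      % ideal image o
    s       = fspecial('motion', 31, 11);              % PSF s
    blurred = imfilter(o, s, 'conv', 'circular');      % blurred image i = s * o (no noise added here)
    S       = psf2otf(s, size(blurred));               % frequency response of the PSF
    reg     = 1e-3;                                    % small constant; reg = 0 gives the pure inverse filter
    Ohat    = fft2(blurred) .* conj(S) ./ (abs(S).^2 + reg);
    restored = real(ifft2(Ohat));
    imshow(restored)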



When an image is captured with a camera, rather than a static instant, the image represents the scene over a short period of time, which may include motion. While objects in a scene are moving, an image of that scene represents an integration of all positions of those objects, together with any movement of the camera's viewpoint, over the exposure period determined by the shutter speed. An object moving with respect to the camera therefore appears blurred or smeared along the direction of relative motion. This smearing may occur on the moving object itself, or it may affect a static background if the camera is moving. It can look natural in a film or television image, because the human eye behaves in a similar way.

Since blur is generated by the relative motion between the camera and the objects or the background scene, it can be reduced if the camera tracks the moving objects. In this case, even with long exposure times, the tracked objects appear sharper while the background appears more blurred.


Similarly, in real-time computer animation each frame shows a static instant in time with zero motion blur. This is why a video game running at 25-30 frames per second appears staggered, while natural motion filmed at the same frame rate appears rather more continuous. Many recent video games include a motion-blur effect, especially vehicle simulation games. In pre-rendered computer animation (e.g. CGI movies), realistic motion blur can be drawn because the renderer has more time to draw each frame.

Chapter 2: BLUR MODELS

The blurring of images is modeled, as in (1), as the convolution of an ideal image with a 2-D point-spread function (PSF). The interpretation of (1) is that if the ideal image consisted of a single point source, that point would be recorded as a spread-out intensity pattern; hence the name point-spread function.
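
For spatially discrete images this convolution can be written out explicitly (a standard form, stated here for clarity; d(n1, n2) denotes the discrete PSF, following the notation used in the rest of this chapter):

g(n1, n2) = sum over (k1, k2) of d(k1, k2) f(n1 - k1, n2 - k2) + w(n1, n2)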

It should be noted that the point-spread functions (PSFs) described here are spatially invariant: they are not a function of the spatial location under consideration. This assumes that the image is blurred in exactly the same way at every spatial location. PSFs that do not satisfy this assumption arise, for example, from rotational blurs (such as turning wheels) or local blurs (such as a person out of focus while the background is in focus). The modeling, restoration and identification of images degraded by spatially varying blurs is outside the scope of the presented work and remains a challenging task.

In general the blurring of images is spatially continuous in nature. Blur models are therefore presented first in their continuous forms, followed by their discrete (sampled) counterparts, since the identification and restoration algorithms are always based on spatially discrete images. The image sampling rate is assumed to be chosen high enough to minimize the (aliasing) errors involved in converting the continuous models to discrete ones.

The spatially continuous PSF of a blur generally satisfies three constraints:

  • d(x, y) takes on non-negative values only, because of the physics of the underlying image formation process;
  • when dealing with real-valued images the point-spread function d(x, y) is real-valued too;
  • the imperfections introduced during image formation can be modeled as passive operations on the data, i.e. no energy is absorbed or generated; for spatially continuous blurs the PSF therefore has to satisfy a normalization condition, written out below.
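
The condition itself is not reproduced in this text; it is the standard requirement that the PSF integrates to one:

∫∫ d(x, y) dx dy = 1
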
3.1. NO BLUR

When the scene is perfectly imaged, no blur is apparent in the recorded discrete image, and the spatially continuous PSF can be described by a Dirac delta function:
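
The equations referred to in the next paragraph as (6a) and (6b) are not reproduced in this text; presumably they are the continuous and discrete forms of the delta-function PSF:

d(x, y) = delta(x, y)         (continuous case)
d(n1, n2) = delta(n1, n2)     (discrete case: 1 at (0, 0) and 0 elsewhere)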

Theoretically, (6a) can never be satisfied. However, (6b) can be satisfied, provided that the amount of "spreading" in the continuous image is smaller than the sampling grid used to obtain the discrete image.


Generally, motion blur arises from relative motion between the recording device and the scene. This can be a linear translation, a rotation, a sudden change of scale, or some combination of these. Here only the case of a global translation will be considered.
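
For the simplest case of a horizontal translation over a blur length L during the exposure, for example, the spatially continuous PSF usually quoted (a reconstruction, since the formula is not given in this text; L is simply my label for the blur length) is:

d(x, y) = 1 / L   if |x| <= L / 2 and y = 0,
d(x, y) = 0       otherwise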


When a camera images a 3-D scene onto a 2-D imaging plane, some parts of the scene are in focus while others are not. If the camera aperture is circular, the image of any point source is a small disk, known as the circle of confusion (COC). The degree of defocus (the diameter of the COC) depends on the focal length and aperture number of the lens, and on the distance between camera and object. An accurate model should describe both the diameter of the COC and the intensity distribution within it. If the degree of defocusing is large relative to the wavelengths considered, however, a geometrical approach can be taken, giving a uniform intensity distribution within the COC. The spatially continuous PSF of this uniform out-of-focus blur with radius R is given by:
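
The formula itself is missing here; the standard uniform ("pillbox") form is:

d(x, y) = 1 / (pi R^2)   if x^2 + y^2 <= R^2,
d(x, y) = 0              otherwise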

For this PSF the discrete version d(n1, n2) is not easily arrived at either. A coarse approximation is the following spatially discrete PSF:
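
The approximation itself (equation (8b) in the source numbering) is not reproduced here; a common form, consistent with the description of the constant C below, is:

d(n1, n2) = C   if n1^2 + n2^2 <= R^2,
d(n1, n2) = 0   otherwise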

Here C is a constant that has to be chosen so that (5b) is satisfied. The approximation (8b) is not correct for the fringe elements of the point-spread function; a more accurate model for these elements should use the area covered by the spatially continuous PSF, integrated over each fringe pixel, as illustrated in Figure 5. Figure 5(a) shows that the fringe elements should be calculated by integration for accuracy, and Figure 5(b) shows the modulus of the Fourier transform of the PSF for R = 2.5. A low-pass behaviour is observed (in this case both horizontally and vertically), together with a characteristic pattern of spectral zeros.
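
A quick way to observe this low-pass behaviour and the spectral zeros (a sketch only, using MATLAB's built-in circular averaging filter as the discrete disk PSF; the radius 2.5 and the 64-point zero padding are illustrative choices):

    % Sketch: discrete disk PSF and the modulus of its Fourier transform
    PSF = fspecial('disk', 2.5);               % circular averaging filter of radius 2.5
    D   = abs(fftshift(fft2(PSF, 64, 64)));    % zero-padded spectrum of the PSF
    figure, imagesc(D), axis image, colorbar   % low-pass shape with a pattern of spectral zeros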


Atmospheric turbulence is a severe limitation in remote sensing. Although the blur it introduces depends on a variety of external factors (such as temperature, wind speed and exposure time), for long-term exposures the point-spread function can be described reasonably well by a Gaussian function:
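
The Gaussian form referred to as (9a) is not reproduced here; it is typically written as:

d(x, y) = C exp( -(x^2 + y^2) / (2 sigma^2) )

where sigma controls the severity of the blur and C is a normalizing constant ensuring the PSF integrates to one.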

Since this PSF does not have finite support, it has to be truncated appropriately. The spatially discrete approximation of (9a) is then given by:
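
The discrete approximation is likewise missing from this text; a common form samples and truncates the Gaussian (K denotes the truncation radius, my notation):

d(n1, n2) = C exp( -(n1^2 + n2^2) / (2 sigma^2) )   for |n1| <= K and |n2| <= K,
d(n1, n2) = 0                                       otherwise,

with C chosen so that the coefficients sum to one.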
