Ray tracing is, in many ways, a standardized technique. Almost all ray tracing programs (known as "ray tracers") use the same basic algorithm in one form or another. This algorithm is presented below.
- THE PRE-RENDERING SECTION
- THE MAIN RENDERING LOOP
- THE POST-RENDERING SECTION
THE PRE-RENDERING SECTION
This section is executed before the actual rendering of the scene takes place. It is the most implementation- and programmer-dependent of all the sections of the algorithm. Its structure and content rely entirely on how the programmer has chosen to describe and store the scene and on what variables need to be initialized for the program to run properly. Some parts that are usually present in some form or another include generation of efficiency structures, initialization of the output device, setting up various global variables etc. No standard algorithm exists for this section.
THE MAIN RENDERING LOOP
This section actually produces the picture of the 3-D scene. The algorithm for this section is given below. Note that the procedure RayTrace is recursive, that is, it calls itself.
For each pixel on the screen,
Generate a ray R from the viewing position through the point on the view
plane corresponding to this pixel.
Call the procedure RayTrace() with the arguments R and 0
Plot the pixel with the colour value returned by RayTrace()
Procedure RayTrace(ray R, integer Depth) returns colour
Set the numerical variable Dis to a maximum value
Set the object pointer Obj to null
For each object in the scene
Calculate the distance (from the starting point of R) of the nearest
intersection of R with the object in the forward direction
If this distance is less than Dis
Update Dis to this distance
Set Obj to point to this object
If Obj is not null
Set the position variable Pt to the nearest intersection point of R and Obj
Set the total colour C to black
For each light source in the scene
For each object in the scene
If this object blocks the light coming from the light source to Pt
Attenuate the intensity of the received light by the transmittivity
of the object
Calculate the perceived colour of Obj at Pt due to this light source
using the value of the attenuated light intensity
Add this colour value to C
Next light source
If Depth is less than a maximum value
Generate two rays Refl and Refr in the reflected and refracted directions,
starting from Pt
Call RayTrace with arguments Refl and Depth + 1
Add (the return value * reflectivity of Obj) to C
Call RayTrace with arguments Refr and Depth + 1
Add (the return value * transmittivity of Obj) to C
Else
Set the total colour C to the background colour
Return the colour C
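The pseudocode above can be sketched as a minimal Python ray tracer. Everything concrete here is an assumption for illustration: the Sphere class, the all-or-nothing shadow test, and the reflection-only recursion (refraction is omitted for brevity) are not part of the original algorithm description.

```python
import math

BACKGROUND = (0.0, 0.0, 0.0)   # background colour returned on a miss
MAX_DEPTH = 3                  # recursion limit ("maximum value" of Depth)

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(v, s): return tuple(x * s for x in v)
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(v): return scale(v, 1.0 / math.sqrt(dot(v, v)))

class Sphere:
    def __init__(self, centre, radius, colour, reflectivity=0.0):
        self.centre, self.radius = centre, radius
        self.colour, self.reflectivity = colour, reflectivity

    def intersect(self, origin, direction):
        """Distance to the nearest forward intersection, or None.
        Assumes direction is a unit vector."""
        oc = sub(origin, self.centre)
        b = 2.0 * dot(oc, direction)
        c = dot(oc, oc) - self.radius ** 2
        disc = b * b - 4.0 * c
        if disc < 0:
            return None
        t = (-b - math.sqrt(disc)) / 2.0
        return t if t > 1e-6 else None

def ray_trace(origin, direction, scene, light, depth=0):
    # Find the nearest object along the ray (Dis / Obj in the pseudocode).
    dis, obj = float("inf"), None
    for o in scene:
        t = o.intersect(origin, direction)
        if t is not None and t < dis:
            dis, obj = t, o
    if obj is None:
        return BACKGROUND          # ray hit nothing: background colour
    pt = add(origin, scale(direction, dis))
    # Shadow test: any blocking object fully extinguishes the light
    # (opaque objects only; no partial transmittivity in this sketch).
    to_light = norm(sub(light, pt))
    lit = 1.0
    for o in scene:
        if o is not obj and o.intersect(pt, to_light) is not None:
            lit = 0.0
    n = norm(sub(pt, obj.centre))
    c = scale(obj.colour, lit * max(dot(n, to_light), 0.0))
    # Recurse for the reflected ray while under the depth limit.
    if depth < MAX_DEPTH and obj.reflectivity > 0:
        refl = sub(direction, scale(n, 2.0 * dot(direction, n)))
        c = add(c, scale(ray_trace(pt, refl, scene, light, depth + 1),
                         obj.reflectivity))
    return c
```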
THE POST-RENDERING SECTION
This section is also largely programmer-dependent, but the functions performed usually fall into one or more of the following groups:
- Memory cleanup: The space taken up for the scene and rendering data is released.
- Applying post-process filters: Special effects such as blur, edge highlighting, suppression of colour channels, gamma correction etc. may be added to the picture after the actual rendering has taken place. Anti-aliasing, motion blur etc. are usually a part of this stage.
- Freeing output devices: If the output device is a file, it is closed. If it is a hardware device, then any link established with it may be suspended.
- Displaying statistics and other user data: This is, of course, entirely up to the programmer. But it is useful to know the details of the rendering process.
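As an illustration of one such post-process filter, the sketch below applies display gamma correction to a rendered image stored as rows of RGB tuples with linear channel values in [0, 1]; the function name and image layout are assumptions, not part of any standard post-rendering section.

```python
def gamma_correct(image, gamma=2.2):
    """Apply display gamma to an image of linear [0, 1] RGB tuples.
    Each channel value v becomes v ** (1 / gamma)."""
    inv = 1.0 / gamma
    return [[tuple(channel ** inv for channel in pixel) for pixel in row]
            for row in image]
```

Because every pixel is transformed independently, filters like this one can be run over the finished image in a single pass after the main rendering loop.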
There are two popular ray tracing techniques. The first is the method of images, where a form of image theory establishes exact optical radiation paths. The other, often called brute-force ray tracing or ray launching, launches rays from a transmitter to estimate the radiation paths. There are two subsets of ray launching techniques, denoted hereafter as ray shooting and tube shooting.
METHOD OF IMAGES
The method of images uses a concept similar to image theory to place artificial sources in the environment that model reflections from the flat planes of a database. Image theory determines exact radiation paths and introduces no kinematic errors into the radiation paths found. Fig 1.2 demonstrates a simple application of the method of images. The first surface creates an image transmitter. The second surface creates two additional transmitters: a primary image and an image of the first image. This procedure repeats for all of the reflective surfaces of a database.

[Fig 1.2 caption (partial): second image plane rays; (d) resulting ray traced paths.]

The implementation is straightforward, but the time for image mapping increases exponentially with the number of surface elements in the environment: every additional reflective surface will double the number of images available to a receiver. Image-based ray tracing can be accelerated by purging extra images that do not have a field of view to other reflective surfaces, but the method of images still exhibits a strong dependence on database complexity. Therefore, this ray tracing technique does not lend itself to complicated, three-dimensional cityscapes, which require multiple reflections from thousands of surfaces.

Many wireless ray tracing algorithms incorporate the method of images into hybrid ray launching techniques. The most common example of this simulation method is the urban canyon approximation for propagation in cities. This technique assumes that transmitters and receivers lie well below the lowest rooftops, that the sides of a building are flat from top to bottom, and that the ground is planar. Propagation prediction then reduces to a two-dimensional, top-view problem; the method of images compensates for the third dimension by assuming that every 2D overhead path also includes a ground reflection. Indoor simulations also use the method of images to compensate for ceiling-floor interactions.
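The placement of an image transmitter can be sketched by mirroring the transmitter position across a flat reflecting surface. This is a minimal sketch, assuming each surface is described by a point on its plane and a unit normal vector; the function name is illustrative.

```python
def image_source(tx, plane_point, plane_normal):
    """Mirror a transmitter position across a flat reflecting plane.
    plane_normal is assumed to be a unit vector."""
    # Signed distance from the transmitter to the plane along the normal.
    d = sum((t - p) * n for t, p, n in zip(tx, plane_point, plane_normal))
    # The image sits the same distance behind the plane.
    return tuple(t - 2.0 * d * n for t, n in zip(tx, plane_normal))
```

For example, mirroring a transmitter at (1, 2, 3) across the ground plane z = 0 places the image at (1, 2, -3). Applying this function repeatedly, surface by surface, is what drives the exponential growth in image count described above.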
ONE FINAL NOTE OF CAUTION:
The ray tracing literature often refers to the method of images as "image theory" ray tracing. This work intentionally avoids that terminology because it is often misleading or incorrect. Image theory is a rigorously defined concept in electromagnetic theory that solves for propagation by placing image sources outside the solution area that match the field boundary conditions. Typically, image theory is only applicable to environments with smooth, infinite or semi-infinite perfect electrically conducting surfaces arranged in a limited set of canonical geometries. Propagation in complicated urban or indoor environments quickly violates every one of these criteria. The success of the image concept in ray tracing is based solely on its value as a path-finding technique and not as a field-finding technique.

RAY LAUNCHING

The counterpart to image theory ray tracing is brute-force ray tracing, or ray launching. A ray launching algorithm starts at a transmitter location and sends thousands of test rays in every direction into the numerical environment.
The algorithm searches through the test rays to discover which ones pass closest to a receiver location and then uses these rays to estimate the exact propagation path taken by a multipath wave. Field strength and power estimates must then be calculated from the path using geometrical optics. Numerous ray launching schemes and implementations are possible, but all of them perform the following three steps:
- Launch test rays,
- Find rays closest to receiver locations, and
- Interpret field strength from these paths.
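The first two steps above can be sketched, under heavy simplification, for a 2D scene with straight-line rays and no reflections; the function and its parameters are illustrative only, and a real ray launcher would trace each ray through the environment before measuring its miss distance.

```python
import math

def launch_rays(tx, rx, num_rays=360):
    """Launch num_rays test rays uniformly in angle from transmitter tx
    and return the launch angle whose ray passes closest to receiver rx,
    along with that perpendicular miss distance."""
    best_angle, best_miss = None, float("inf")
    for i in range(num_rays):
        theta = 2.0 * math.pi * i / num_rays
        dx, dy = math.cos(theta), math.sin(theta)
        rxx, rxy = rx[0] - tx[0], rx[1] - tx[1]
        # Only consider receivers ahead of the ray, not behind it.
        along = rxx * dx + rxy * dy
        if along <= 0:
            continue
        # Perpendicular distance from the receiver to the ray's line.
        miss = abs(rxx * -dy + rxy * dx)
        if miss < best_miss:
            best_miss, best_angle = miss, theta
    return best_angle, best_miss
```

The third step (interpreting field strength) would then apply geometrical optics along the winning path, which this sketch omits.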
Ray launching techniques are very powerful when used in complicated 3D environments. Although the intersection and reflection calculations for thousands of test rays appear staggering, the calculation time for a ray launcher does not depend as strongly on the complexity of the environment database when compared to the method of images. By using spatial discrimination techniques, such as bounding volume hierarchies (a technique originally developed for computer graphics), the calculation time exhibits a logarithmic dependence on the number of surfaces in the environment database.
TUBE SHOOTING

Tube shooting, like its closely related counterpart ray shooting, is an example of brute-force ray tracing. Tube shooting algorithms use adjacent rays to define ray tubes, as in Fig 1.3. The power density of a ray tube varies inversely with the total area outlined by the surrounding rays, thereby conserving the total power of each tube. Each ray must carry a list of adjacent rays or some orientation information in order to reconstruct the ray tube geometry after twisting and bouncing through the environment. Receivers accumulate power by intersecting with the tube area instead of the individual rays.
Tube shooting predictions become attractive when the computer database contains curved surfaces and undulating terrain. Ray tubes benefit from their ability to adapt to curved surface reflections, which is why the technique has origins in radar cross section calculations of irregular bodies. The ability of a ray tube to adapt to an irregular wave front also has promising applications for GTD paths, where the diffracted wave front becomes astigmatic. On the other hand, the use of adjacent rays to define tubes implies dependence between rays and adds complexity to the algorithm. There may also be ambiguity along surface boundaries when a tube's constituent rays take completely different path directions. Many researchers have used tube shooting successfully for three-dimensional wireless propagation predictions.
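The inverse relationship between tube cross-section area and power density can be sketched numerically. The diverging-tube area model below (cross-section growing with r², giving the familiar inverse-square falloff) is an assumed simplification for a tube launched from a point source; real tubes change area irregularly as they twist through the environment.

```python
def tube_power_density(total_power, tube_area):
    """Power density of a ray tube: the tube's total power is conserved,
    so density falls as the cross-section outlined by its rays spreads."""
    return total_power / tube_area

def diverging_tube_area(initial_area, r):
    """Assumed model: a tube diverging from a point source spreads
    its cross-section with the square of the distance r."""
    return initial_area * r * r
```

Doubling the distance quadruples the tube area, so the density a receiver inside the tube accumulates drops to one quarter.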
RAY SHOOTING

Ray shooting is a versatile ray tracing technique that launches rays from a transmitter and reflects them through the environment. The launched rays that pass arbitrarily close to a receiver establish the actual radiation paths. The physical tracing of rays does not differ from tube shooting algorithms; only the interpretation of the traced rays differs. One benefit of ray shooting is that the trajectory and field calculations of each ray can be performed completely independently of one another. The simplicity and low computational overhead of ray shooting algorithms have powerful implications for parallel processing. Since this technique is trivially parallelizable, using parallel processing on a large computer network can save tremendous amounts of calculation time.
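Because each launched ray is traced independently, the work parallelizes with a plain worker pool and no coordination between rays. In this sketch, trace_one is a hypothetical stand-in for the per-ray trajectory and field calculation; only the parallel structure is the point.

```python
from concurrent.futures import ThreadPoolExecutor

def trace_one(angle):
    """Hypothetical stand-in for tracing a single launched ray
    (trajectory plus field calculation) at the given launch angle."""
    return angle * 2.0  # placeholder per-ray result

def trace_all(angles, workers=4):
    """Trace every launched ray in parallel. Since no ray depends on any
    other, the launch set can be split freely across workers; map()
    preserves the launch order in the results."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(trace_one, angles))
```

The same pattern scales from a thread pool on one machine to a process pool or a cluster of machines, which is what makes ray shooting attractive for large propagation studies.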