As already mentioned, one of the major drawbacks of raytracing is its computational expense compared to other image rendering techniques. Given this, it is not surprising that many workers have attempted to speed up raytracing. The attempts can be divided into five categories:
(1) Devising cheaper intersection tests;
(2) Using special purpose hardware to speed up the calculations involved;
(3) Using parallel processing hardware to generate sections of the image in parallel;
(4) Adaptively controlling the number of rays cast;
(5) Decreasing the total number of expensive intersection tests.
Note that this is only one of several ways of classifying the work. The following sections briefly review these different approaches to speeding up raytracing.
One operation that conventional raytracing algorithms perform many thousands of times during the generation of an image is ascertaining whether a ray intersects an object. This is generally done using analytical geometry, which in turn involves, among other things, finding roots of polynomials, a process that can be very expensive in computer time. To reduce this expense several workers have devised cheaper intersection tests for certain classes of objects.
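As an illustration of the analytic geometry involved, the following is a minimal sketch in C of the classic ray-sphere test: substituting the ray equation into the sphere's implicit equation yields a quadratic in the ray parameter t, whose roots must be found before a hit can be reported. The function name and calling convention are illustrative only.

    #include <math.h>

    typedef struct { double x, y, z; } Vec3;

    static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3 sub(Vec3 a, Vec3 b) { Vec3 r = {a.x-b.x, a.y-b.y, a.z-b.z}; return r; }

    /* Intersect a ray (origin o, unit direction d) with a sphere of the
     * given centre and radius.  Substituting the ray equation into the
     * sphere's implicit equation gives a quadratic t^2 + b*t + c = 0;
     * the nearest positive root is the intersection distance.  Returns 1
     * and stores the distance in *t_hit on a hit, 0 otherwise. */
    int ray_sphere(Vec3 o, Vec3 d, Vec3 centre, double radius, double *t_hit)
    {
        Vec3 oc = sub(o, centre);
        double b = 2.0 * dot(oc, d);      /* leading coefficient is 1 for a unit d */
        double c = dot(oc, oc) - radius * radius;
        double disc = b * b - 4.0 * c;
        if (disc < 0.0) return 0;         /* no real roots: ray misses the sphere */
        double root = (-b - sqrt(disc)) / 2.0;
        if (root < 0.0) root = (-b + sqrt(disc)) / 2.0;
        if (root < 0.0) return 0;         /* sphere is entirely behind the ray */
        *t_hit = root;
        return 1;
    }

Even this, one of the cheapest tests, requires a square root per hit; tests for tori, surfaces of revolution or patches involve polynomials of higher degree and cost correspondingly more.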
One way of reducing the expense of intersection tests is to surround each object with a bounding volume. If a ray does not intersect the bounding volume, then there is no way it could have intersected the object itself. This technique is particularly attractive where an object with an inexpensive intersection test, such as a sphere or cube, is used as a bounding volume for an object with an expensive intersection test. The use of bounding volumes was pioneered by Rubin and Whitted [RUBI80] and has since been incorporated into most raytracing systems, including the one at Curtin. Weghorst, Hooper and Greenberg [WEGH84] explored the idea of surrounding objects with bounding volumes only if there is a likely saving to be made, and of adaptively choosing the shape of the bounding volume depending on the complexity of the object.
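A sketch of the rejection test, assuming an axis-aligned box as the bounding volume (the well known "slab" method; the function name and calling convention are illustrative only):

    #include <float.h>

    /* Cheap "slab" test against an axis-aligned bounding box.  The ray is
     * o + t*d; on each axis the ray is clipped against the pair of planes
     * bounding the box, and the surviving parameter interval [tmin, tmax]
     * must stay non-empty.  If this returns 0 the enclosed object cannot
     * be hit, so its expensive intersection test is skipped. */
    int ray_hits_box(const double o[3], const double d[3],
                     const double lo[3], const double hi[3])
    {
        double tmin = 0.0, tmax = DBL_MAX;
        for (int i = 0; i < 3; i++) {
            if (d[i] == 0.0) {             /* ray parallel to this slab */
                if (o[i] < lo[i] || o[i] > hi[i]) return 0;
            } else {
                double t1 = (lo[i] - o[i]) / d[i];
                double t2 = (hi[i] - o[i]) / d[i];
                if (t1 > t2) { double t = t1; t1 = t2; t2 = t; }
                if (t1 > tmin) tmin = t1;
                if (t2 < tmax) tmax = t2;
                if (tmin > tmax) return 0; /* intervals no longer overlap */
            }
        }
        return 1;
    }

Only if ray_hits_box returns 1 is the enclosed object's exact intersection test performed.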
Another way of reducing the computational expense of intersection tests is by refining the actual algorithm used to perform the test. Kajiya published an article showing improved intersection tests for prisms, surfaces of revolution and fractal surfaces [KAJI83].
The speed of raytracing can be improved by moving parts of the process from software into silicon. As techniques for designing and producing special purpose VLSI chips improve, this method may become increasingly attractive. Pulleyblank and Kapenga have conducted research into the feasibility of a VLSI chip for raytracing bicubic patches [PULL87].
Several workers have used unconventional general purpose hardware to speed up raytracing.
Supercomputers have been used in brute force attempts to produce raytraced animation sequences; one example is [OSAK83], cited in [FUJI85].
Plunkett and Bailey [PLUN85] used vector processing hardware to perform parts of the raytracing algorithm in parallel, and Goldsmith and Salmon [GOLD87] developed a raytracing system to run on the 64 processor "Mark II Caltech Hypercube".
There are two ways that the number of rays cast to generate a raytraced image can be reduced.
One of these involves adaptively controlling the density of primary rays cast, depending on the intensity differences between adjacent rays. This approach has been described by Roth [ROTH82]. Antialiasing can be achieved by extending this approach to a sub-pixel level; see Whitted [WHIT80], Miller [MILL85] or Fujimoto [FUJI85].
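A minimal sketch of the approach in C: the four corners of a pixel-sized region are sampled, and the region is subdivided to sub-pixel level only where the corner intensities disagree. trace_primary, the threshold and the recursion limit are all hypothetical; a practical implementation would also cache corner samples shared between neighbouring regions rather than recomputing them.

    /* Hypothetical hook into the renderer: returns the intensity of the
     * primary ray through screen point (x, y). */
    extern double trace_primary(double x, double y);

    #define MAX_SUBDIV  3       /* deepest sub-pixel recursion         */
    #define THRESHOLD   0.05    /* tolerated corner-to-corner difference */

    /* Adaptively sample the square region with corners (x, y) and
     * (x+s, y+s).  If the four corner intensities agree within THRESHOLD
     * the region is considered flat and their average is returned;
     * otherwise it is split into four quadrants sampled recursively. */
    double sample_region(double x, double y, double s, int depth)
    {
        double c[4];
        c[0] = trace_primary(x,     y);
        c[1] = trace_primary(x + s, y);
        c[2] = trace_primary(x,     y + s);
        c[3] = trace_primary(x + s, y + s);

        double lo = c[0], hi = c[0];
        for (int i = 1; i < 4; i++) {
            if (c[i] < lo) lo = c[i];
            if (c[i] > hi) hi = c[i];
        }
        if (depth >= MAX_SUBDIV || hi - lo <= THRESHOLD)
            return (c[0] + c[1] + c[2] + c[3]) / 4.0;

        double h = s / 2.0;                /* subdivide into quadrants */
        return (sample_region(x,     y,     h, depth + 1) +
                sample_region(x + h, y,     h, depth + 1) +
                sample_region(x,     y + h, h, depth + 1) +
                sample_region(x + h, y + h, h, depth + 1)) / 4.0;
    }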
Most raytracing systems allow rays to be traced to an arbitrary depth, which corresponds to the number of times that a ray is reflected or refracted before reaching the viewer's eye. However, it is not necessary to trace all rays to the same arbitrary depth: the depth to which rays are traced can be varied adaptively. Hall and Greenberg [HALL83] were probably the first to propose this scheme.
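A sketch of adaptive depth control under assumed scene and shading hooks (intersect_scene, local_shade and the other helpers are hypothetical): each ray carries the weight by which its result will be scaled on the way back up the ray tree, and a reflected ray is only spawned while that weight could still visibly affect the pixel.

    /* Hypothetical scene-intersection and shading hooks. */
    typedef struct Surface Surface;
    extern const Surface *intersect_scene(const double o[3], const double d[3],
                                          double hit[3], double n[3]);
    extern double local_shade(const Surface *s, const double hit[3],
                              const double n[3]);
    extern double surface_reflectance(const Surface *s);
    extern void reflect_dir(const double d[3], const double n[3], double out[3]);

    #define MAX_DEPTH  8
    #define CUTOFF     0.01   /* contributions below this are invisible */

    /* Trace a ray carrying an accumulated weight: the fraction of the
     * final pixel intensity this ray can still influence.  Rather than
     * always recursing to MAX_DEPTH, recursion stops as soon as the
     * weight of the spawned ray drops below CUTOFF, since its result
     * could not be seen in the image anyway. */
    double trace(const double o[3], const double d[3], double weight, int depth)
    {
        double hit[3], n[3];
        const Surface *s = intersect_scene(o, d, hit, n);
        if (s == 0) return 0.0;                   /* ray leaves the scene */

        double intensity = local_shade(s, hit, n);
        double kr = surface_reflectance(s);

        if (depth < MAX_DEPTH && weight * kr > CUTOFF) {
            double rd[3];
            reflect_dir(d, n, rd);                /* spawn reflected ray */
            intensity += kr * trace(hit, rd, weight * kr, depth + 1);
        }
        return intensity;
    }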
One of the most successful approaches to speeding up raytracing has been to reduce the number of expensive ray object intersection tests required to generate an image.
Rubin and Whitted [RUBI80], and also Weghorst, Hooper and Greenberg [WEGH84], used a hierarchy of bounding volumes to reduce the number of intersection tests required. The improvement that this technique offers over conventional raytracing can be likened to the improvement a binary search offers over a linear one. Where Rubin and Whitted manually encoded objects into a hierarchy, Goldsmith and Salmon [GOLD87] devised an algorithm for automating this process.
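A sketch of traversing such a hierarchy, reusing the ray_hits_box test shown earlier; the node layout and helper names are assumptions for illustration, not the authors' actual structures.

    struct Object;

    /* Node of a bounding volume hierarchy.  Interior nodes hold children;
     * leaves hold a single object with an expensive exact test. */
    typedef struct BVNode {
        double lo[3], hi[3];           /* bounding box of everything below */
        struct BVNode *left, *right;   /* both NULL at a leaf              */
        struct Object *object;         /* valid only at a leaf             */
    } BVNode;

    extern int ray_hits_box(const double o[3], const double d[3],
                            const double lo[3], const double hi[3]);
    extern int exact_intersect(struct Object *obj, const double o[3],
                               const double d[3], double *t);

    /* Walk the hierarchy, pruning any subtree whose bounding volume the
     * ray misses.  Returns the nearest object hit, or NULL; *t_min must
     * be initialised to a large value by the caller and returns the
     * nearest hit distance. */
    struct Object *bvh_intersect(const BVNode *node, const double o[3],
                                 const double d[3], double *t_min)
    {
        if (node == 0 || !ray_hits_box(o, d, node->lo, node->hi))
            return 0;                        /* prune this whole subtree */

        if (node->left == 0 && node->right == 0) {   /* leaf: exact test */
            double t;
            if (exact_intersect(node->object, o, d, &t) && t < *t_min) {
                *t_min = t;
                return node->object;
            }
            return 0;
        }

        struct Object *hit_l = bvh_intersect(node->left,  o, d, t_min);
        struct Object *hit_r = bvh_intersect(node->right, o, d, t_min);
        return hit_r ? hit_r : hit_l;   /* right result, if any, is nearer */
    }

A single failed bounding volume test near the root rejects every object in that subtree at once, which is the source of the binary search analogy above.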
Weghorst, Hooper and Greenberg [WEGH84] used image coherence to cut down the number of intersection calculations required for primary rays. Using scan line methods, they performed a visible surface preprocess to create an "item buffer" consisting of a list of the objects fully or partially visible through each pixel on the screen. Then, during the casting of primary rays, only objects deemed visible by the preprocess were tested for intersection with the ray. This approach is conceptually inelegant in that primary rays have to be dealt with differently from all other rays; however, since the average ray tree depth for a complex image is often quite low, according to Hall and Greenberg [HALL83], primary rays account for a large fraction of all rays cast, so substantial savings can be made.
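A sketch of how the item buffer might be consulted during primary ray casting (the cell layout and names are illustrative, not the authors' actual data structures):

    struct Object;

    /* Hypothetical item buffer: for each pixel, the list of objects the
     * scan line preprocess found fully or partially visible there. */
    typedef struct {
        struct Object **objects;   /* candidate objects for this pixel */
        int count;
    } ItemBufferCell;

    extern ItemBufferCell *item_buffer;   /* width * height cells */
    extern int screen_width;
    extern int exact_intersect(struct Object *obj, const double o[3],
                               const double d[3], double *t);

    /* Cast the primary ray for pixel (px, py): instead of testing the
     * whole scene database, only the handful of objects recorded in the
     * pixel's item buffer cell are tested.  *t_min is initialised large
     * by the caller.  Secondary rays still use the full test. */
    struct Object *cast_primary(int px, int py, const double o[3],
                                const double d[3], double *t_min)
    {
        ItemBufferCell *cell = &item_buffer[py * screen_width + px];
        struct Object *nearest = 0;
        double t;
        for (int i = 0; i < cell->count; i++) {
            if (exact_intersect(cell->objects[i], o, d, &t) && t < *t_min) {
                *t_min = t;
                nearest = cell->objects[i];
            }
        }
        return nearest;
    }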
Shadow testing is the process of determining whether a point is in shadow with respect to each of the light sources in a scene. Traditionally, shadow testing accounts for a large proportion of the computation time of raytraced images, since a shadow ray must be traced towards every light source from every ray-surface intersection. Haines and Greenberg [HAIN86], using a similar approach to Weghorst et al, reduced the number of ray object intersection tests required during shadow testing. A "light buffer", consisting of lists of objects visible from the light source, was produced for each light source via a preprocess, which took advantage of scan line methods. During shadow testing the light buffer was accessed to provide a reduced list of objects that could possibly be shadowing a point.
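A corresponding sketch for shadow testing (again with hypothetical names; Haines and Greenberg's actual light buffer partitions the directions around each light using a cube centred on it):

    struct Object;

    /* Hypothetical light buffer cell: the objects the preprocess found
     * visible from the light in some range of directions. */
    typedef struct {
        struct Object **objects;
        int count;
    } LightBufferCell;

    extern LightBufferCell *light_buffer_cell(const double to_light[3]);
    extern int exact_intersect(struct Object *obj, const double o[3],
                               const double d[3], double *t);

    /* Shadow test for a point: only the objects recorded for this
     * direction cell can possibly occlude, so the full scene database is
     * never consulted.  dist_to_light bounds the search along the shadow
     * ray; returns 1 if the point is in shadow. */
    int in_shadow(const double point[3], const double to_light[3],
                  double dist_to_light)
    {
        LightBufferCell *cell = light_buffer_cell(to_light);
        double t;
        for (int i = 0; i < cell->count; i++)
            if (exact_intersect(cell->objects[i], point, to_light, &t) &&
                t > 0.0 && t < dist_to_light)
                return 1;          /* any single occluder suffices */
        return 0;
    }

Note that shadow testing can stop at the first occluder found, since it needs only a yes/no answer rather than the nearest hit.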
Some of the most spectacular improvements in raytracing computation time have been achieved by techniques that divide the three dimensional space of a scene into small compartments. A ray is traced from compartment to compartment, and ray object intersection tests are performed only for objects in the current compartment. Glassner [GLAS84], and Fujimoto and Iwata [FUJI85], have published articles in well known journals on this subject, and several other workers [CLEA83] [MATS83] have published articles in 'harder to procure' journals.
Glassner's approach was to dynamically subdivide space into cubes of decreasing volume until each cube (called a voxel) contained fewer than a maximum number of objects. This process results in a tree structure (called an octree) of cubes of various sizes, each containing fewer than the maximum number of objects. An algorithm was developed to move from voxel to voxel along the path of a ray. This algorithm worked by first determining a point guaranteed to lie in the next voxel, and then determining which voxel that point was in.
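A sketch of the two steps, assuming a fully subdivided octree node layout (all names are illustrative): locate() performs the point-in-which-voxel descent from the root, and next_voxel() finds the ray's exit from the current cube and nudges a query point just beyond it.

    /* Hypothetical octree node: either subdivided into eight children or
     * a leaf holding a small object list (omitted here). */
    typedef struct Octree {
        double lo[3], hi[3];        /* the cube of space this node covers */
        struct Octree *child[8];    /* all NULL at a leaf                 */
    } Octree;

    /* Descend from the root to the leaf voxel containing point p,
     * choosing at each level the child on the correct side of the cube's
     * midplanes.  This is the "which voxel is this point in" step. */
    Octree *locate(Octree *node, const double p[3])
    {
        while (node->child[0] != 0) {
            int idx = 0;
            for (int i = 0; i < 3; i++)
                if (p[i] > (node->lo[i] + node->hi[i]) / 2.0)
                    idx |= 1 << i;      /* pick the upper half on axis i */
            node = node->child[idx];
        }
        return node;
    }

    /* Step to the next voxel along the ray: find the parameter at which
     * the ray leaves the current cube, then locate the point just beyond
     * that boundary.  EPSILON pushes the query point safely across. */
    #define EPSILON 1e-6
    Octree *next_voxel(Octree *root, Octree *cur,
                       const double o[3], const double d[3])
    {
        double t_exit = 1e30;
        for (int i = 0; i < 3; i++) {
            if (d[i] == 0.0) continue;
            double bound = (d[i] > 0.0) ? cur->hi[i] : cur->lo[i];
            double t = (bound - o[i]) / d[i];
            if (t < t_exit) t_exit = t;
        }
        double p[3];
        for (int i = 0; i < 3; i++)
            p[i] = o[i] + (t_exit + EPSILON) * d[i];  /* point in next voxel */
        /* a real system would first check p still lies in the scene cube */
        return locate(root, p);
    }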
Fujimoto and Iwata used two different ways of subdividing space. One involved subdividing space into voxels of constant size; the other used octrees to dynamically subdivide space into voxels of decreasing size, in much the same manner as Glassner. To move from voxel to voxel along the path of a ray, they developed an especially innovative algorithm which they called the "Three Dimensional Digital Differential Analyser" (3DDDA), because it is a three dimensional extension of the two dimensional DDA widely used for drawing lines on raster displays. The algorithm uses incremental methods to calculate successive voxels pierced by a ray. Using the system they developed to test the algorithm, a picture of a DNA molecule containing 7011 atoms took only 2 hours and 15 minutes to generate; classical raytracing would have taken an estimated 40 days.
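The following sketch conveys the flavour of such incremental traversal for a uniform grid. It is the widely used parametric formulation rather than Fujimoto and Iwata's own integer arithmetic, and visit() is a hypothetical callback that tests the ray against the objects stored in a voxel. After a short setup, each move to the next voxel costs only comparisons and one addition, with no per-step multiplications or divisions.

    #include <math.h>

    extern void visit(int x, int y, int z);

    /* Walk the voxels of a uniform n*n*n grid spanning [0,1]^3 that are
     * pierced by the ray o + t*d.  The ray origin is assumed to lie
     * inside the grid. */
    void grid_walk(const double o[3], const double d[3], int n)
    {
        int    v[3], step[3];
        double t_next[3], t_delta[3];
        double cell = 1.0 / n;

        for (int i = 0; i < 3; i++) {
            v[i] = (int)(o[i] / cell);              /* starting voxel      */
            if (v[i] < 0) v[i] = 0;
            if (v[i] > n - 1) v[i] = n - 1;
            step[i] = (d[i] >= 0.0) ? 1 : -1;       /* direction of travel */
            double next_bound = (v[i] + (step[i] > 0 ? 1 : 0)) * cell;
            t_next[i]  = (d[i] != 0.0) ? (next_bound - o[i]) / d[i] : 1e30;
            t_delta[i] = (d[i] != 0.0) ? cell / fabs(d[i])          : 1e30;
        }

        for (;;) {
            visit(v[0], v[1], v[2]);                /* test objects here   */

            /* advance along the axis whose voxel boundary comes first */
            int a = (t_next[0] < t_next[1])
                        ? (t_next[0] < t_next[2] ? 0 : 2)
                        : (t_next[1] < t_next[2] ? 1 : 2);
            v[a] += step[a];
            if (v[a] < 0 || v[a] >= n) break;       /* ray has left the grid */
            t_next[a] += t_delta[a];                /* one addition per step */
        }
    }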
It is hard to compare the results of the above workers because of the lack of suitable benchmarks; however, the results obtained by Fujimoto and Iwata [FUJI85] seem spectacular compared to the others, which is the principal reason that the work of these two Japanese researchers was chosen as a platform on which to base the Raytrace System.