Computer Graphics
For each pixel, examine all n objects to determine the one closest to the viewer.
If there are p pixels in the image, the complexity is O(np).
Coherence means making use of the results calculated for one part of the scene or image for other
nearby parts.
Coherence is the result of local similarity: as objects have continuous spatial extent, object
properties vary smoothly within a small local region in the scene, so calculations can be made
incremental.
Types of coherence:
1. Object Coherence:
Visibility of an object can often be decided by examining a circumscribing solid (which may be of
simple form, e.g. a sphere or a polyhedron).
2. Face Coherence:
Surface properties computed for one part of a face can be applied to adjacent parts after small
incremental modification. (E.g. if the face is small, we can sometimes assume that if one part of
the face is invisible to the viewer, the entire face is also invisible.)
3. Edge Coherence:
The visibility of an edge changes only when it crosses another edge, so if one segment of a non-intersecting edge is visible, the entire edge is also visible.
4. Scan line Coherence:
Line or surface segments visible in one scan line are also likely to be visible in adjacent scan lines.
Consequently, the image of a scan line is similar to the image of adjacent scan lines.
5. Area and Span Coherence:
A group of adjacent pixels in an image is often covered by the same visible object. This coherence is
based on the assumption that a small enough region of pixels will most likely lie within a single
polygon. This reduces computation effort in searching for those polygons which contain a given
screen area (region of pixels) as in some subdivision algorithms.
6. Depth Coherence:
The depths of adjacent parts of the same surface are similar.
7. Frame Coherence:
Pictures of the same scene at successive points in time are likely to be similar, despite small changes
in objects and viewpoint, except near the edges of moving objects.
Most visible surface detection methods make use of one or more of these coherence properties of a
scene.
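Several of these coherence properties translate directly into incremental code. Below is a minimal sketch (with illustrative names, not from the text) of depth coherence for a planar surface Ax + By + Cz + D = 0: stepping one pixel along a scan line changes the depth by the constant -A/C, so a single addition replaces a full re-evaluation of the plane equation.

```python
def depth_at(x, y, A, B, C, D):
    """Full depth evaluation from the plane equation (C must be non-zero)."""
    return -(A * x + B * y + D) / C

def scanline_depths(x_start, x_end, y, A, B, C, D):
    """Incremental depths across one scan line using the constant step -A/C."""
    z = depth_at(x_start, y, A, B, C, D)
    step = -A / C                      # depth change per unit step in x
    depths = []
    for _ in range(x_start, x_end + 1):
        depths.append(z)
        z += step                      # one addition per pixel
    return depths

# The incremental values match direct per-pixel evaluation:
A, B, C, D = 1.0, 2.0, 4.0, -8.0
incremental = scanline_depths(0, 5, 1, A, B, C, D)
direct = [depth_at(x, 1, A, B, C, D) for x in range(0, 6)]
```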
These methods take advantage of regularities in a scene; for example, constant relationships can
often be established between objects and surfaces in a scene.
This method requires an additional buffer (compared with the Depth-Sort Method) and incurs the
overhead of updating that buffer. So this method is less attractive in cases where only a few
objects in the scene are to be rendered.
Step 2 is not efficient because not all polygons necessarily intersect the scan line.
The depth calculation in step 2a is not needed if only one polygon in the scene is mapped onto a
segment of the scan line.
To speed up the process:
Recall the basic idea of polygon filling: For each scan line crossing a polygon,
this algorithm locates the intersection points of the scan line with the polygon
edges. These intersection points are sorted from left to right. Then, we fill the
pixels between each intersection pair.
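The fill step just described can be sketched as follows; this is a minimal even-odd fill with illustrative names, assuming polygons are given as vertex lists and horizontal edges are skipped.

```python
def scanline_intersections(polygon, y):
    """x-coordinates where scan line y crosses the polygon's edges, sorted."""
    xs = []
    n = len(polygon)
    for i in range(n):
        (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
        # Half-open rule [min, max) avoids counting a shared vertex twice
        # and skips horizontal edges (y0 == y1 fails both conditions).
        if (y0 <= y < y1) or (y1 <= y < y0):
            t = (y - y0) / (y1 - y0)
            xs.append(x0 + t * (x1 - x0))
    return sorted(xs)

def fill_scanline(polygon, y):
    """Integer pixel x-positions covered on scan line y (even-odd fill)."""
    xs = scanline_intersections(polygon, y)
    pixels = []
    for left, right in zip(xs[0::2], xs[1::2]):   # fill between pairs
        pixels.extend(range(int(round(left)), int(round(right)) + 1))
    return pixels

# A rectangle from x=2 to x=7 crossed by scan line y=1:
square = [(2, 0), (7, 0), (7, 3), (2, 3)]
```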
Using a similar idea, we fill every scan line span by span. When polygons overlap on a scan
line, we perform depth calculations at their edges to determine which polygon should be
visible over which span.
Any number of overlapping polygon surfaces can be processed with this method. Depth
calculations are performed only when there are polygons overlapping.
We can take advantage of coherence along the scan lines as we pass from one scan line to the
next. If there is no change in the pattern of the intersection of polygon edges with the
successive scan lines, it is not necessary to do depth calculations.
This works only if surfaces do not cut through or otherwise cyclically overlap each other. If
cyclic overlap happens, we can divide the surfaces to eliminate the overlaps.
If there are any overlaps in depth, we need to make some additional comparisons to
determine whether a pair of surfaces should be reordered. The checks are as follows:
a. The bounding rectangles in the xy plane for the 2 surfaces do not overlap
b. The surface S with greater depth is completely behind the overlapping surface relative to the
viewing position.
c. The overlapping surface is completely in front of the surface S with greater depth relative to the
viewing position.
d. The projections of the 2 surfaces onto the view plane do not overlap.
If any of the above tests passes, then the surfaces need not be re-ordered.
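The two cheapest of these checks, (a) and (b), can be sketched as below. This is a hedged illustration with invented names: it assumes surfaces are stored as 3D vertex lists and that each plane Ax + By + Cz + D = 0 has its normal pointing toward the viewer.

```python
def xy_rects_disjoint(poly_a, poly_b):
    """Check (a): the xy bounding rectangles of two surfaces do not overlap."""
    ax = [p[0] for p in poly_a]; ay = [p[1] for p in poly_a]
    bx = [p[0] for p in poly_b]; by = [p[1] for p in poly_b]
    return (max(ax) < min(bx) or max(bx) < min(ax) or
            max(ay) < min(by) or max(by) < min(ay))

def behind_plane(poly, plane):
    """Check (b): every vertex of poly lies on the back side of the other
    surface's plane Ax + By + Cz + D = 0 (normal toward the viewer)."""
    A, B, C, D = plane
    return all(A * x + B * y + C * z + D <= 0 for (x, y, z) in poly)
```

Check (c) is the same half-space test applied the other way round, and (d) requires an exact polygon-polygon overlap test on the projections.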
This method is suitable for a static group of 3D polygons to be viewed from a number of viewpoints.
It is based on the observation that hidden-surface elimination of a polygon is guaranteed if all
polygons on the other side of it from the viewer are painted first, then the polygon itself, then all
polygons on the same side of it as the viewer.
BSP Algorithm
Procedure DisplayBSP(tree: BSP_tree)
Begin
    If tree is not empty then
        If viewer is in front of the root then
        Begin
            DisplayBSP(tree.back_child)
            displayPolygon(tree.root)
            DisplayBSP(tree.front_child)
        End
        Else
        Begin
            DisplayBSP(tree.front_child)
            displayPolygon(tree.root)
            DisplayBSP(tree.back_child)
        End
End
Discussion:
- Back face removal is achieved by not displaying a polygon if the viewer is located in its back
half-space
- It is an object space algorithm (sorting and intersection calculations are done in object space
precision)
- If the view point changes, the BSP tree needs only minor re-arrangement: the traversal order
changes, not the tree itself.
- A new BSP tree is built if the scene changes.
- The algorithm displays polygons back to front (cf. Depth-sort)
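The pseudocode above can be made runnable as follows; this is a sketch with illustrative names, assuming each node stores its polygon, that polygon's plane coefficients (A, B, C, D), and optional front/back children, and that "display" simply records the paint order.

```python
class BSPNode:
    def __init__(self, polygon, plane, front=None, back=None):
        self.polygon = polygon    # polygon stored at this node
        self.plane = plane        # (A, B, C, D) of the polygon's plane
        self.front = front        # subtree in the plane's front half-space
        self.back = back          # subtree in the plane's back half-space

def display_bsp(node, viewer, paint):
    """Back-to-front traversal: paint the far subtree, the node, the near one."""
    if node is None:
        return
    A, B, C, D = node.plane
    x, y, z = viewer
    if A * x + B * y + C * z + D > 0:   # viewer in front of the node's plane
        display_bsp(node.back, viewer, paint)
        paint(node.polygon)
        display_bsp(node.front, viewer, paint)
    else:                               # viewer behind (or on) the plane
        display_bsp(node.front, viewer, paint)
        paint(node.polygon)
        display_bsp(node.back, viewer, paint)

# Two polygons split by the plane x = 0; a viewer at x = 5 sees "left" first.
tree = BSPNode("root", (1, 0, 0, 0),
               front=BSPNode("right", (1, 0, 0, -2)),
               back=BSPNode("left", (1, 0, 0, 2)))
order = []
display_bsp(tree, (5, 0, 0), order.append)
```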
The procedure to determine whether we should subdivide an area into smaller rectangles is:
1. We first classify each of the surfaces, according to their relations with the area:
Surrounding surface - a single surface completely encloses the area
Overlapping surface - a single surface that is partly inside and partly outside the area
Inside surface - a single surface that is completely inside the area
Outside surface - a single surface that is completely outside the area.
To improve the speed of classification, we can make use of the bounding rectangles of surfaces for
early confirmation or rejection of whether a surface belongs to a given type.
2. Check the results from step 1; if any of the following conditions is true, then no subdivision of
this area is needed:
a. All surfaces are outside the area.
b. Only one surface (an inside, overlapping, or surrounding surface) is in the area.
c. A surrounding surface obscures all other surfaces within the area boundaries.
For cases b and c, the color of the area can be determined from that single surface.
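The bounding-rectangle shortcut from step 1 can be sketched as below. This is an illustrative classifier, assuming both the surface's bounding rectangle and the area are axis-aligned rectangles; for non-rectangular surfaces the "surrounding" result is only a candidate that an exact test must confirm.

```python
def classify(surface_rect, area_rect):
    """Classify a surface against an area; rects are (xmin, ymin, xmax, ymax)."""
    sx0, sy0, sx1, sy1 = surface_rect
    ax0, ay0, ax1, ay1 = area_rect
    if sx1 < ax0 or sx0 > ax1 or sy1 < ay0 or sy0 > ay1:
        return "outside"        # early rejection: rectangles are disjoint
    if sx0 <= ax0 and sy0 <= ay0 and sx1 >= ax1 and sy1 >= ay1:
        return "surrounding"    # candidate; exact test must confirm coverage
    if sx0 >= ax0 and sy0 >= ay0 and sx1 <= ax1 and sy1 <= ay1:
        return "inside"
    return "overlapping"
```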
So, visibility of surfaces can be determined by tracing a ray of light from the centre of projection
(the viewer's eye) to the objects in the scene (backward tracing).
Find out which objects the ray of light intersects.
Then, determine which one of these objects is closest to the viewer.
Then, set the pixel color to that of this object.
The ray-casting approach is an effective visibility-detection method for scenes with curved surfaces,
particularly spheres.
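For the sphere case mentioned above, the ray-object intersection reduces to a quadratic in the ray parameter t. Below is a minimal sketch with illustrative names: substitute p = origin + t*direction into |p - c|^2 = r^2, solve for t, and keep the nearest hit in front of the viewer.

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Smallest t > 0 where the ray origin + t*direction hits the sphere, or None."""
    ox, oy, oz = origin; dx, dy, dz = direction; cx, cy, cz = center
    fx, fy, fz = ox - cx, oy - cy, oz - cz
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (fx * dx + fy * dy + fz * dz)
    c = fx * fx + fy * fy + fz * fz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                      # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2 * a) # nearer root first
    if t > 0:
        return t
    t = (-b + math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

def nearest_hit(origin, direction, spheres):
    """Closest sphere along the ray: the object whose color the pixel takes."""
    best = None
    for (center, radius, color) in spheres:
        t = ray_sphere(origin, direction, center, radius)
        if t is not None and (best is None or t < best[0]):
            best = (t, color)
    return best

# A ray cast along +z from the eye at the origin; the nearer sphere wins.
spheres = [((0, 0, 5), 1.0, "red"), ((0, 0, 10), 1.0, "blue")]
hit = nearest_hit((0, 0, 0), (0, 0, 1), spheres)
```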