MODULE 4
3D VIEWING AND VISIBLE SURFACE DETECTION
4.1. 3D Viewing
4.1.1. 3D Viewing Concepts
Each object in a 3D scene model is defined with a set of surfaces that forms a closed
boundary around the object interior.
• The definition may also include information about the interior structure of an
object.
• Graphics packages provide routines for displaying internal components or cross-
sectional views of a solid object.
• Viewing functions process the object descriptions through a set of procedures that
ultimately project a specified view of the objects onto the surface of a display
device.
Viewing a Three-Dimensional Scene
To obtain a display of a three-dimensional world-coordinate scene
• Set up a coordinate reference for the viewing, or “camera,” parameters. This defines
the position and orientation for a view plane (or projection plane) corresponding
to a camera film plane (Figure 1).
Figure 1: Coordinate reference for obtaining a selected view of a three-
dimensional scene
• Transfer the object descriptions to the viewing reference coordinates and project
onto the view plane.
• View of an object can be either
o Generated on the output device in wire-frame (outline) form or
o Apply lighting and surface-rendering techniques to obtain a realistic
shading of the visible surfaces.
Projections
Different methods for projecting a scene onto the view plane include:
Computer Graphics & Visualization-17CS62 MODULE 4
• To get the description of a solid object onto a view plane, project points on the
object surface along parallel lines → parallel projection (Figure 2)
Figure 2: Three parallel-projection views of an object, showing relative
proportions from different viewing positions.
• To generate a view of a three-dimensional scene, project points to the view plane
along converging paths → perspective projection
o This causes objects farther from the viewing position to be displayed
smaller than objects of the same size that are nearer to the viewing position.
o The scene generated appears more realistic
NOTE
Parallel lines along the viewing direction appear to converge to a distant point in
the background, and objects in the background appear to be smaller than objects
in the foreground.
Depth Cueing
Depth information is important in a three-dimensional scene to easily identify, for a
particular viewing direction, which is the front and which is the back of each displayed
object.
Figure 3: The wire-frame representation of the pyramid in (a) contains no depth
information to indicate whether the viewing direction is (b) downward from a position
above the apex or (c) upward from a position below the base
An ambiguity can result when a wire-frame object is displayed without depth
information (Figure 3).
To indicate depth with wire-frame displays, vary the brightness of line segments
according to their distances from the viewing position (Figure 4).
Jayashree N, Dept. of CSE, CBIT, Kolar. Page 2 of 18 2019-20
Figure 4: A wire-frame object displayed with depth cueing, so that the brightness of lines
decreases from the front of the object to the back
• The lines closest to the viewing position are displayed with the highest intensity,
and lines farther away are displayed with decreasing intensities.
• Depth cueing is applied by choosing a maximum and a minimum intensity value
and a range of distances over which the intensity is to vary.
• Depth cueing is also used to model the effect of the atmosphere on the perceived
intensity of objects.
o More distant objects appear dimmer to us than nearer objects due to light
scattering by dust particles, haze, and smoke.
o The atmosphere can even change the perceived color of an object.
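The intensity variation described above can be sketched in Python. The function name and parameter ordering here are illustrative, and the linear falloff between the chosen minimum and maximum intensities is one common choice (atmospheric models often use an exponential falloff instead):

```python
def depth_cue_intensity(d, d_min, d_max, i_min, i_max):
    """Depth-cued intensity for a point at distance d from the viewing position.

    Points at d_min (nearest) get the maximum intensity i_max; points at
    d_max (farthest) get the minimum intensity i_min; in between, the
    intensity falls off linearly with distance.
    """
    d = max(d_min, min(d, d_max))        # clamp into the chosen cueing range
    t = (d - d_min) / (d_max - d_min)    # 0 at the near limit, 1 at the far limit
    return i_max + t * (i_min - i_max)
```

A wire-frame renderer would evaluate this per line segment (or per endpoint) to dim the lines toward the back of the object.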
Identifying Visible Lines and Surfaces
To clarify depth relationships in a wire-frame display using techniques other than depth
cueing, the approaches include:
i. Highlight the visible lines or display them in a different color.
ii. Display the nonvisible lines as dashed lines or
iii. Remove the nonvisible lines from the display (Figures 3(b) and 3(c))
• Removing the hidden lines, however, also removes information about the shape of
the back surfaces of an object, whereas wire-frame representations are often used
to get an indication of an object’s overall appearance, front and back.
To produce a realistic view of a scene, back parts of the objects are completely
eliminated so that only the visible surfaces are displayed by applying surface-rendering
procedures.
Surface Rendering
Added realism is attained in displays by rendering object surfaces using the lighting
conditions in the scene and the assigned surface characteristics.
• Set the lighting conditions by specifying the color and location of the light sources
• Also set background illumination effects.
• Surface properties of objects
o Transparent or opaque
o Smooth or rough.
• Parameters to model surfaces
o glass, plastic, wood-grain patterns, and the bumpy appearance of an orange.
Exploded and Cutaway Views
Objects are to be defined as hierarchical structures, so that internal details can be stored.
• Exploded and cutaway views of objects are used to show the internal structure
and relationship of the object parts.
• Cutaway view removes part of the visible surfaces to show internal structure
Three-Dimensional and Stereoscopic Viewing
This adds a sense of realism to a computer-generated scene
• 3D views can be obtained by reflecting a raster image from a vibrating, flexible
mirror.
• The vibrations of the mirror are synchronized with the display of the scene on the
cathode ray tube (CRT).
• As the mirror vibrates, the focal length varies so that each point in the scene is
reflected to a spatial position corresponding to its depth.
Stereoscopic devices present two views of a scene: one for the left eye and the other
for the right eye
• The viewing positions correspond to the eye positions of the viewer.
• The two views are displayed on alternate refresh cycles of a raster monitor
• The special glasses alternately darken first one lens and then the other, in
synchronization with the monitor refresh cycles → scene is displayed with a 3D
effect.
4.1.2. 3D Viewing Pipeline
Procedures for generating a computer-graphics view of a three-dimensional scene are
similar to taking a photograph.
• Choose a viewing position corresponding to camera position.
• Viewing position is chosen according to whether we want to display a front, back,
side, top, or bottom view of the scene.
• A position can also be picked in the middle of a group of objects or even inside a
single object, such as a building or a molecule. Then decide on the camera
orientation (Figure 5).
Figure 5: Photographing a scene involves selection of the camera position and
orientation.
o Decide which way the camera is pointed from the viewing position, and how
the camera is rotated around the line of sight to set the “up” direction for the
picture.
o When the shutter is snapped, the scene is cropped to the size of a selected
clipping window, which corresponds to the aperture or lens type of a
camera, and light from the visible surfaces is projected onto the camera film.
A computer-graphics program has more flexibility and more options for generating
views of a scene than a real camera does.
• Either a parallel projection or a perspective projection can be used.
• It is possible to
o Selectively eliminate parts of a scene along the line of sight.
o Move the projection plane away from the “camera” position and get a
picture of objects in back of the synthetic camera.
Some of the viewing operations for a 3D scene are the same as or similar to those
used in the 2D viewing pipeline.
• A 2D viewport is used to position a projected view of the 3D scene on the output
device, and
• A 2D clipping window is used to select a view that is to be mapped to the viewport.
• Display window can be set up in screen coordinates as in 2D applications.
• Clipping windows, viewports, and display windows are specified as rectangles with
their edges parallel to the coordinate axes.
In 3D viewing, the clipping window is positioned on a selected view plane, and
scenes are clipped against an enclosing volume of space, which is defined by a set of
clipping planes.
Figure 6: General three-dimensional transformation pipeline, from modeling coordinates
(MC) to world coordinates (WC) to viewing coordinates (VC) to projection coordinates
(PC) to normalized coordinates (NC) and, ultimately, to device coordinates (DC)
The general processing steps for creating and transforming a three-dimensional
scene to device coordinates are shown in Figure 6.
• Once the scene has been modeled in world coordinates, a viewing-coordinate
system is selected and the description of the scene is converted to viewing
coordinates.
• The viewing coordinate system defines the viewing parameters, including the
position and orientation of the projection plane (view plane), similar to the camera
film plane.
• A 2D clipping window, corresponding to a selected camera lens, is defined on the
projection plane, and a 3D clipping region is established → View Volume.
• The view volume’s shape and size depend on the dimensions of the clipping
window, the type of projection chosen, and the selected limiting positions along
the viewing direction.
• Projection operations are performed to convert the viewing-coordinate description
of the scene to coordinate positions on the projection plane.
• Objects are mapped to normalized coordinates, and all parts of the scene outside
the view volume are clipped off.
o The clipping operations can be applied after all device-independent
coordinate transformations are completed.
• The viewport limits could be given in normalized coordinates or in device
coordinates.
To develop viewing algorithms,
• Assume that the viewport is to be specified in device coordinates and that
normalized coordinates are transferred to viewport coordinates, following the
clipping operations.
• Identify visible surfaces
• Apply the surface-rendering procedures.
• Lastly map viewport coordinates to device coordinates within a selected display
window.
NOTE:
Scene descriptions in device coordinates are sometimes expressed in a left-handed
reference frame so that positive distances from the display screen can be used to measure
depth values in the scene.
4.1.3. 3D Viewing Coordinate Parameters
Establishing a 3D viewing reference frame is similar to setting up the 2D viewing reference
frame. (Figure 7)
• Select a world-coordinate position P0=(x0, y0, z0) for the viewing origin → View
point or viewing position (eye position or the camera position).
• Specify a view-up vector V, which defines the yview direction.
• For 3D space, also assign a direction for one of the remaining two coordinate
axes; this is done with a second vector that defines the zview axis.
Figure 7: A right-handed viewing-coordinate system, with axes xview, yview, and zview,
relative to a right-handed world-coordinate frame.
The View-Plane Normal Vector
Because the viewing direction is usually along the zview axis, the view plane, also called the
projection plane, is assumed to be perpendicular to this axis. Therefore, the orientation of
the view plane, as well as the direction for the positive zview axis, can be defined with a
view-plane normal vector N (Figure 8).
Figure 8: Orientation of the view plane and view-plane normal vector N.
An additional scalar parameter is used to set the position of the view plane at some
coordinate value zvp along the zview axis (Figure 9).
Figure 9: Three possible positions for the view plane along the zview axis.
• It is the distance from the viewing origin along the direction of viewing, which is
often taken to be in the negative zview direction.
• Therefore, the view plane is always parallel to the xviewyview plane, and the
projection of objects to the view plane represents the view of the scene displayed
on the output device.
• Specifying Vector N
o The direction for N is defined to be along the line from the world-coordinate
origin to a selected point position or
o N is said to be in the direction from a reference point Pref to the viewing
origin P0 (Figure 10).
Figure 10: Specifying the view-plane normal vector N as the direction from a
selected reference point Pref to the viewing-coordinate origin P0
• Here, the reference point is referred to as a look-at point within the scene, with the
viewing direction opposite to the direction of N.
• The direction angles can be used to define the view-plane normal vector and
other vector directions.
o These are the three angles α, β, and γ that a spatial line makes with the x, y,
and z axes, respectively.
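As a small illustration (the helper name is made up for this sketch), the direction angles of any vector follow from its direction cosines, i.e. each component divided by the vector’s magnitude:

```python
import math

def direction_angles(a):
    """Direction angles (alpha, beta, gamma), in degrees, that the line along
    vector a makes with the x, y, and z axes, respectively.

    Each angle is the arccosine of the corresponding direction cosine.
    """
    m = math.sqrt(sum(c * c for c in a))           # vector magnitude
    return tuple(math.degrees(math.acos(c / m)) for c in a)
```

For example, a line along the x axis has direction angles (0°, 90°, 90°).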
The View-Up Vector
After choosing view-plane normal vector, set the direction for the view-up vector V, where
V is used to establish the positive direction for the yview axis.
• V is defined by selecting a position relative to the world-coordinate origin where
the direction for the view-up vector is from the world origin to the selected position.
• Vector V should be perpendicular to N, because N defines the direction for the
zview axis; however, it is difficult to choose a direction for V that is exactly
perpendicular to N.
Hence, viewing routines typically adjust the user-defined orientation of
vector V (Figure 11) so that V is projected onto a plane that is perpendicular to
the view-plane normal vector.
Figure 11: Adjusting the input direction of the view-up vector V to an orientation
perpendicular to the view-plane normal vector N.
o A convenient choice for V is often a direction parallel to the world yw axis;
that is, set V = (0, 1, 0).
The uvn Viewing-Coordinate Reference Frame
• Left-handed viewing coordinates are sometimes used in graphics packages, with
the viewing direction in the positive zview direction. Increasing zview values are
interpreted as being farther from the viewing position along the line of sight.
• Right-handed viewing systems are more common, because they have the same
orientation as the world-reference frame. Here, the graphics package deals with
only one coordinate orientation for both the world and viewing references.
• Left-handed coordinate references are often used to represent screen coordinates
and for the normalization transformation
• N defines the direction for the zview axis and V is used to obtain the direction for the
yview axis. There is a need to determine the direction for the xview axis.
o Using the input values for N and V, it is possible to compute a third vector,
U, perpendicular to both N and V.
o Vector U defines the direction for the positive xview axis.
▪ Determine the correct direction for U by taking the vector cross
product of V and N so as to form a right-handed viewing frame.
▪ The vector cross product of N and U produces adjusted value for V,
perpendicular to both N and U, along the positive yview axis.
▪ Therefore, set of unit axis vectors for a right-handed viewing
coordinate system are:
\[
\mathbf{n} = \frac{\mathbf{N}}{|\mathbf{N}|} = (n_x, n_y, n_z), \quad
\mathbf{u} = \frac{\mathbf{V} \times \mathbf{n}}{|\mathbf{V} \times \mathbf{n}|} = (u_x, u_y, u_z), \quad
\mathbf{v} = \mathbf{n} \times \mathbf{u} = (v_x, v_y, v_z)
\tag{1}
\]
Figure 12: A right-handed viewing system defined with unit
vectors u, v, and n.
The coordinate system formed with these unit vectors→ uvn
viewing-coordinate reference frame (Figure 12).
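The construction above can be sketched directly; `uvn_frame` is a hypothetical helper name, and the cross products follow the right-handed convention described in the text (u = V × n, v = n × u):

```python
def uvn_frame(N, V):
    """Build the right-handed uvn viewing axes from view-plane normal N
    and a (possibly non-perpendicular) view-up vector V."""
    def norm(a):
        m = sum(c * c for c in a) ** 0.5
        return tuple(c / m for c in a)

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    n = norm(N)              # unit vector along the positive z_view axis
    u = norm(cross(V, n))    # positive x_view axis, perpendicular to V and n
    v = cross(n, u)          # adjusted view-up: positive y_view axis
    return u, v, n
```

Note that v is recomputed as n × u, so the returned view-up is perpendicular to the other two axes even when the input V was not.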
Generating Three-Dimensional Viewing Effects
Different views of objects in a scene can be obtained by varying the viewing parameters.
• From a fixed viewing position, the direction of N can be changed to display objects
at positions around the viewing-coordinate origin.
• N can be varied to create a composite display consisting of multiple views from a
fixed camera position.
• A wide viewing angle can be simulated by producing several views of the scene
from the same viewing position, but with slight shifts in the viewing direction;
these views can then be combined to form a composite display.
• Stereoscopic views can be generated by shifting the viewing direction as well as
shifting the view point slightly to simulate the two eye positions.
Interactive applications:
• The normal vector N is the viewing parameter that is most often changed.
• When the direction of N is changed, the directions of other vector axes must also
be changed, to maintain a right-handed viewing-coordinate system.
To simulate an animation panning effect, as when a camera moves through a scene
or follows an object that is moving through a scene, the direction of N can be fixed as
the view point is moved (Figure 13).
Figure 13: Panning across a scene by changing the viewing position, with a fixed
direction for N.
To display different views of an object, such as a side view and a front view, view
point can be moved around the object (Figure 14).
Figure 14: Viewing an object from different directions using a fixed reference point.
4.1.4. Transformation from World to Viewing Coordinates
• After constructing a scene, the 3D viewing pipeline transfers the object
descriptions to the viewing-coordinate reference frame.
• This is equivalent to a sequence of transformations that superimposes the viewing
reference frame onto the world frame.
• Use methods for transforming between coordinate systems:
i. Translate the viewing-coordinate origin to the origin of the world-coordinate
system
ii. Apply rotations to align the xview, yview, and zview axes with the world xw, yw,
and zw axes, respectively.
The viewing-coordinate origin is at world position P0 = (x0, y0, z0). The
matrix for translating the viewing origin to the world origin is:
\[
\mathbf{T} = \begin{pmatrix} 1 & 0 & 0 & -x_0 \\ 0 & 1 & 0 & -y_0 \\ 0 & 0 & 1 & -z_0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\tag{2}
\]
For the rotation transformation, use the unit vectors u, v, and n to form the
composite rotation matrix that superimposes the viewing axes onto the world frame.
The transformation matrix is:
\[
\mathbf{R} = \begin{pmatrix} u_x & u_y & u_z & 0 \\ v_x & v_y & v_z & 0 \\ n_x & n_y & n_z & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\tag{3}
\]
The elements of matrix R are the components of the uvn axis vectors.
The coordinate transformation matrix is then obtained as the product of the
translation and rotation matrices:
\[
\mathbf{M}_{WC,VC} = \mathbf{R} \cdot \mathbf{T}
\tag{4}
\]
The translation factors are the negatives of the vector dot products of each of
the u, v, and n unit vectors with P0 (the vector from the world origin to the
viewing origin); that is, they are the negative projections of P0 on each of the
viewing-coordinate axes.
The matrix elements are evaluated as:
\[
\mathbf{M}_{WC,VC} = \begin{pmatrix} u_x & u_y & u_z & -\mathbf{u} \cdot \mathbf{P}_0 \\ v_x & v_y & v_z & -\mathbf{v} \cdot \mathbf{P}_0 \\ n_x & n_y & n_z & -\mathbf{n} \cdot \mathbf{P}_0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\tag{5}
\]
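This composite world-to-viewing matrix can be sketched directly as a row-major 4 × 4 list (the function name is illustrative): the rows hold the uvn axis vectors, and the last column holds the negative projections of P0 on those axes.

```python
def world_to_viewing_matrix(u, v, n, p0):
    """Composite matrix M = R . T that transforms world coordinates to
    viewing coordinates."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    return [
        [u[0], u[1], u[2], -dot(u, p0)],   # x_view row
        [v[0], v[1], v[2], -dot(v, p0)],   # y_view row
        [n[0], n[1], n[2], -dot(n, p0)],   # z_view row
        [0.0, 0.0, 0.0, 1.0],              # homogeneous row
    ]
```

Applied to the homogeneous point P0 itself, this matrix yields the viewing origin (0, 0, 0), as expected.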
4.1.5. Projection Transformation
Object descriptions are projected to the view plane. Graphics packages support both
parallel and perspective projections.
i. Parallel Projection
• Coordinate positions are transferred to the view plane along parallel lines
• Parallel projection for a straight-line segment defined with endpoint
coordinates P1 and P2 is given in Figure 15.
Figure 15: Parallel projection of a line segment onto a view plane.
• Preserves relative proportions of objects
• Used in computer-aided drafting and design to produce scale drawings of
3D objects.
• All parallel lines in a scene are displayed as parallel
• There are two general methods for obtaining a parallel-projection view of
an object:
o Project along lines that are perpendicular to the view plane
o Project at an oblique angle to the view plane.
ii. Perspective projection
• Object positions are transformed to projection coordinates along lines that
converge to a point behind the view plane.
• Perspective projection for a straight-line segment, defined with endpoint
coordinates P1 and P2, is given in Figure 16.
Figure 16: Perspective projection of a line segment onto a view plane
• It does not preserve relative proportions of objects.
• These views of a scene are more realistic because distant objects in the
projected display are reduced in size.
4.1.6. Orthogonal Projections
A transformation of object descriptions to a view plane along lines that are all parallel to
the view-plane normal vector N is called an orthogonal projection (or, orthographic
projection).
• Produces a parallel-projection transformation in which the projection lines are
perpendicular to the view plane.
• Used to produce the front, side, and top views of an object (Figure 17).
Figure 17: Orthogonal projections of an object, displaying plan and elevation
views.
o Front, side, and rear orthogonal projections→ elevations
o Top orthogonal view→ plan view.
Axonometric and Isometric Orthogonal Projections
• Orthogonal projections that display more than one face of an object → axonometric
orthogonal projections.
• The most commonly used axonometric projection is the isometric projection
o Generated by aligning the projection plane (or the object) so that the plane
intersects each coordinate axis in which the object is defined, called the
principal axes, at the same distance from the origin.
o An isometric projection for a cube is given in Figure 18.
Figure 18: An isometric projection of a cube
o This is obtained by aligning the view-plane normal vector along a cube
diagonal.
o There are eight positions, one in each octant, for obtaining an isometric
view.
o All three principal axes are foreshortened equally in an isometric projection
to maintain relative proportions
• In a general axonometric projection, scaling factors may be different for the three
principal directions
Orthogonal Projection Coordinates
With the projection direction parallel to the zview axis, the transformation equations for an
orthogonal projection are trivial. For any position (x, y, z) in viewing coordinates, as in
Figure 19, the projection coordinates are
xp = x, yp = y
Figure 19: An orthogonal projection of a spatial position onto a view plane.
The z-coordinate value is preserved by any projection transformation for use in the
visibility-determination procedures, and each three-dimensional coordinate point in a
scene is converted to a position in normalized space.
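The trivial projection step can be written out explicitly (the function name is illustrative):

```python
def orthogonal_project(x, y, z):
    """Orthogonal projection of a viewing-coordinate position onto the view
    plane: xp = x and yp = y, while z is passed through unchanged for the
    later visibility-determination procedures."""
    return (x, y), z
```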
Clipping Window and Orthogonal-Projection View Volume
In the camera analogy, the type of lens is one factor that determines how much of the scene
is transferred to the film plane. A wide-angle lens takes in more of the scene than a regular
lens. For computer-graphics applications, we use the rectangular clipping window for this
purpose. As in two-dimensional viewing, graphics packages typically require that clipping
rectangles be placed in specific positions.
In OpenGL, we set up a clipping window for three-dimensional viewing just as we did for
two-dimensional viewing, by choosing two-dimensional coordinate positions for its lower-
left and upper-right corners.
For three-dimensional viewing, the clipping window is positioned on the view plane with
its edges parallel to the xview and yview axes, as shown in figure 20.
Figure 20: A clipping window on the view plane, with minimum and maximum coordinates
given in the viewing reference system.
The edges of the clipping window specify the x and y limits for the part of the scene
that we want to display. These limits are used to form the top, bottom, and two sides of a
clipping region called the orthogonal-projection view volume.
Because projection lines are perpendicular to the view plane, these four boundaries
are planes that are also perpendicular to the view plane and that pass through the edges of
the clipping window to form an infinite clipping region, as in Figure 21
Figure 21: Infinite orthogonal-projection view volume.
We can limit the extent of the orthogonal view volume in the zview direction by
selecting positions for one or two additional boundary planes that are parallel to the view
plane. These two planes are called the near-far clipping planes, or the front-back clipping
planes. The near and far planes allow us to exclude objects that are in front of or behind
the part of the scene that we want to display.
Some graphics libraries provide these two planes as options, and other libraries
require them. When the near and far planes are specified, we obtain a finite orthogonal
view volume that is a rectangular parallelepiped, as shown in figure 22 along with possible
placement for the view plane.
Figure 22: A finite orthogonal view volume with the view plane “in front” of the
near plane.
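A point-containment test against this finite view volume can be sketched as follows. The z comparison assumes viewing along the negative z_view axis, so that z_near > z_far; that sign convention is an assumption, as some packages reverse it.

```python
def inside_ortho_view_volume(p, xw_min, xw_max, yw_min, yw_max, z_near, z_far):
    """True if viewing-coordinate point p = (x, y, z) lies inside the
    rectangular-parallelepiped orthogonal view volume."""
    x, y, z = p
    return (xw_min <= x <= xw_max and   # between the two side planes
            yw_min <= y <= yw_max and   # between the top and bottom planes
            z_far <= z <= z_near)       # between the near and far planes
```

Clipping proper would trim primitives at the six boundary planes; this predicate only decides whether a single point survives.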
Normalization Transformation for an Orthogonal Projection
Using an orthogonal transfer of coordinate positions onto the view plane, we obtain
the projected position of any spatial point (x, y, z) as simply (x, y). Thus, once we have
established the limits for the view volume, coordinate descriptions inside this rectangular
parallelepiped are the projection coordinates, and they can be mapped into a normalized
view volume without any further projection processing.
Because screen coordinates are often specified in a left-handed reference frame
(Figure23), normalized coordinates also are often specified in a left-handed system. This
allows positive distances in the viewing direction to be directly interpreted as distances
from the screen (the viewing plane). Thus, we can convert projection coordinates into
positions within a left-handed normalized-coordinate reference frame, and these coordinate
positions will then be transferred to left-handed screen coordinates by the viewport
transformation.
Figure 23: A left-handed screen-coordinate reference frame.
To illustrate the normalization transformation, we assume that the orthogonal-
projection view volume is to be mapped into the symmetric normalization cube within a
left-handed reference frame. Also, z-coordinate positions for the near and far planes are
denoted as znear and zfar, respectively. Figure 24 illustrates this normalization
transformation. Position (xmin, ymin, znear) is mapped to the normalized
position (−1, −1, −1), and position (xmax, ymax, zfar) is mapped to (1, 1, 1).
Figure 24: Normalization transformation from an orthogonal-projection view
volume to the symmetric normalization cube within a left-handed reference frame.
Transforming the rectangular-parallelepiped view volume to a normalized cube is
similar to the methods for converting the clipping window into the normalized symmetric
square. The normalization transformation for the orthogonal view volume is
\[
\mathbf{M}_{\text{ortho,norm}} = \begin{pmatrix}
\dfrac{2}{x_{\max}-x_{\min}} & 0 & 0 & -\dfrac{x_{\max}+x_{\min}}{x_{\max}-x_{\min}} \\
0 & \dfrac{2}{y_{\max}-y_{\min}} & 0 & -\dfrac{y_{\max}+y_{\min}}{y_{\max}-y_{\min}} \\
0 & 0 & \dfrac{-2}{z_{\text{near}}-z_{\text{far}}} & \dfrac{z_{\text{near}}+z_{\text{far}}}{z_{\text{near}}-z_{\text{far}}} \\
0 & 0 & 0 & 1
\end{pmatrix}
\]
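The per-coordinate mapping can also be sketched numerically; this follows the correspondence stated above, (xmin, ymin, znear) → (−1, −1, −1) and (xmax, ymax, zfar) → (1, 1, 1), with illustrative names.

```python
def normalize_ortho(x, y, z, xw_min, xw_max, yw_min, yw_max, z_near, z_far):
    """Map orthogonal-projection coordinates into the symmetric cube
    [-1, 1]^3 of the left-handed normalized frame: the view-volume corner
    (xw_min, yw_min, z_near) goes to (-1, -1, -1) and
    (xw_max, yw_max, z_far) goes to (1, 1, 1)."""
    xn = -1.0 + 2.0 * (x - xw_min) / (xw_max - xw_min)
    yn = -1.0 + 2.0 * (y - yw_min) / (yw_max - yw_min)
    zn = -1.0 + 2.0 * (z - z_near) / (z_far - z_near)
    return xn, yn, zn
```

Each coordinate is scaled and translated independently, which is why the transformation can be written as a single 4 × 4 matrix.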