Application of Computer Graphics
Presentation Graphics
To produce illustrations which summarize various kinds of data. Besides 2D graphics, 3D graphics are good
tools for reporting more complex data.
Computer Art
Painting packages are available. With a cordless, pressure-sensitive stylus, artists can produce electronic
paintings which simulate different brush strokes, brush widths, and colors. Photorealistic techniques,
morphing and animation are very useful in commercial art. For film, 24 frames per second are
required. For video monitors, 30 frames per second are required.
Entertainment
Motion pictures, Music videos, and TV shows, Computer games
Visualization
For analyzing scientific, engineering, medical and business data or behavior. Converting data to visual
form can help us understand large volumes of data very efficiently.
Image Processing
Image processing applies techniques to modify or interpret existing pictures. It is widely used in
medical applications.
2.1 Cathode-Ray Tubes (CRT) - still the most common video display device presently
An electron gun emits a beam of electrons, which passes through focusing and deflection systems and
hits the phosphor-coated screen. The number of points displayed on a CRT is referred to as the
resolution (eg. 1024x768). Different phosphors emit small light spots of different colors, which can
combine to form a range of colors. A common methodology for color CRT display is the Shadow-
mask method.
The light emitted by the phosphor fades very rapidly, so the picture needs to be redrawn repeatedly. There
are 2 kinds of redrawing mechanisms: Raster-Scan and Random-Scan
Raster-Scan
The electron beam is swept across the screen one row at a time from top to bottom. As it moves across
each row, the beam intensity is turned on and off to create a pattern of illuminated spots. This
scanning process is called refreshing. Each complete scanning of a screen is normally called a frame.
The refreshing rate, called the frame rate, is normally 60 to 80 frames per second, or described as 60
Hz to 80 Hz.
Picture definition is stored in a memory area called the frame buffer. This frame buffer stores the
intensity values for all the screen points. Each screen point is called a pixel (picture element).
On black-and-white systems, the frame buffer storing the values of the pixels is called a bitmap. Each
entry in the bitmap is a single bit which determines whether the pixel's intensity is on (1) or off (0).
On color systems, the frame buffer storing the values of the pixels is called a pixmap (Though
nowadays many graphics libraries name it as bitmap too). Each entry in the pixmap occupies a
number of bits to represent the color of the pixel. For a true-color display, the number of bits for each
entry is 24 (8 bits per red/green/blue channel; each channel has 2^8 = 256 intensity levels, i.e. 256
voltage settings for each of the red/green/blue electron guns).
Random-Scan
The CRT's electron beam is directed only to the parts of the screen where a picture is to be drawn. The
picture definition is stored as a set of line-drawing commands in a refresh display file or a refresh
buffer in memory.
Random-scan systems generally have higher resolution than raster systems and can produce smooth line
drawings; however, they cannot display realistic shaded scenes.
2.2 Flat-Panel Displays - will be the most common video display device very soon.
- Liquid crystal refers to compounds which are in crystalline arrangement, but can flow like liquid.
- The light source passes through a liquid-crystal material that can be aligned to either block or
transmit the light.
- 2 glass plates, each containing a light polarizer at right angles to the other, sandwich a liquid
crystal material.
- Rows of horizontal transparent conductors are built into one glass plate. Columns of vertical
conductors are put into the other plate. The intersection of 2 conductors defines a pixel position.
(This design is called a passive-matrix LCD.)
- In the "on" state, polarized light passing through the material is twisted so that it will pass through
the opposite polarizer.
- Different materials can display different colors.
- By placing thin-film transistors at pixel locations, the voltage at each pixel can be controlled.
(This design is called an active-matrix LCD.)
In this context we discuss the graphics systems of raster-scan devices. A graphics processor accepts
graphics commands from the CPU and executes the graphics commands which may involve drawing
into the frame buffer. The frame buffer acts as a temporary store of the image and also as a decoupler
to allow the graphics processor and the display controller to operate at different speeds. The display
controller reads the frame buffer line by line and generates the control signals for the screen.
Graphics commands:
- Draw point
- Draw polygon
- Draw text
- Clear frame buffer
- Change drawing color
2D graphics processors execute commands in 2D coordinates. When objects overlap, the one being
drawn will obscure objects drawn previously in the region. BitBlt operations (Bit Block Transfer) are
usually provided for moving/copying one rectangular region of frame buffer contents to another
region.
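As a minimal sketch of such a BitBlt operation (the frame-buffer layout and the names fb, FB_WIDTH and blit_copy are assumptions for illustration, not part of any particular package), copying one rectangular region to another could look like this:

#include <stdint.h>
#include <string.h>

#define FB_WIDTH  1024
#define FB_HEIGHT 768

static uint32_t fb[FB_WIDTH * FB_HEIGHT];   /* frame buffer, one 32-bit pixel per entry */

/* Copy a w x h rectangle from (sx, sy) to (dx, dy), row by row.
 * (Overlapping source and destination regions would need memmove or a
 * carefully chosen copy order.) */
void blit_copy(int sx, int sy, int dx, int dy, int w, int h)
{
    for (int row = 0; row < h; row++) {
        memcpy(&fb[(dy + row) * FB_WIDTH + dx],
               &fb[(sy + row) * FB_WIDTH + sx],
               (size_t)w * sizeof(uint32_t));
    }
}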
Display Controller for a raster display device reads the frame buffer and generates the control signals
for the screen, ie. the signals for horizontal scanning and vertical scanning. Most display controllers
include a colormap (or video look-up table). The major function of a colormap is to provide a
mapping between the input pixel value to the output color.
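A small sketch of what the colormap stage amounts to (the array names and the 8-bit pixel depth are assumptions for illustration):

#include <stdint.h>

#define FB_WIDTH 1024

static uint8_t  frame_buffer[FB_WIDTH];   /* one scan line of 8-bit pixel values */
static uint32_t colormap[256];            /* video look-up table: pixel value -> packed RGB */

/* What the display controller conceptually does while scanning out one line:
 * read each pixel value and translate it through the colormap into the RGB
 * signal actually sent to the screen. */
void scan_out_line(uint32_t rgb_signal[FB_WIDTH])
{
    for (int x = 0; x < FB_WIDTH; x++)
        rgb_signal[x] = colormap[frame_buffer[x]];
}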
A tablet contains a stylus and a drawing surface and it is mainly used for the input of drawings. A
tablet is usually more accurate than a mouse, and is commonly used for large drawings.
Scanners are used to convert drawings or pictures in hardcopy format into digital signal for computer
processing.
Touch panels allow displayed objects or screen positions to be selected with the touch of a finger. In
these devices a touch-sensing mechanism is fitted over the video monitor screen. Touch input can be
recorded using optical, electrical, or acoustical methods.
Hard-copy output is produced by directing pictures to a printer or plotter, giving output on 35-mm slides,
overhead transparencies, or plain paper. The quality of the pictures depends on dot size and the number
of dots per inch (DPI).
Laser printers use a laser beam to create a charge distribution on a rotating drum coated with a
photoelectric material. Toner is applied to the drum and then transferred to the paper. To produce
color outputs, the 3 color pigments (cyan, magenta, and yellow) are deposited on separate passes.
Inkjet printers produce output by squirting ink in horizontal rows across a roll of paper wrapped on a
drum. To produce color outputs, the 3 color pigments are shot simultaneously on a single pass along
each print line on the paper.
Inkjet or pen plotters are used to generate drafting layouts and other drawings of normally larger
sizes. A pen plotter has one or more pens of different colors and widths mounted on a carriage which
spans a sheet of paper.
General graphics packages are designed to be used with Cartesian coordinate representations (x,y,z).
Usually several different Cartesian reference frames are used to construct and display a scene:
3. Output Primitives
Shapes and colors of objects can be described internally with pixel arrays or sets of basic geometric
structures such as straight line segments and polygon color areas. The functions provided by graphics
programming packages to deal with these basic geometric structures are called output primitives.
For example:
Drawing a point: SetPixel(100,200,RGB(255,255,0));
Drawing a line: MoveTo(100,100); LineTo(100,200);
Drawing some text: SetText(100,200,"Hello");
Drawing an ellipse: Ellipse(100,100,200,200);
Painting a picture: BitBlt(100,100,50,50,srcImage,0,0,SRCCOPY);
Line drawing is done by computing intermediate discrete coordinates along the line path between 2
specified endpoint positions. The corresponding entries for these discrete coordinates in the frame
buffer are then marked with the desired line color.
This algorithm is very efficient since it uses only incremental integer calculations. Instead of
calculating the non-integral values of D1 and D2 to decide the pixel location, it computes an integer
decision parameter p whose sign indicates which of the two candidate pixels lies closer to the true line.
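A minimal sketch of such an incremental integer line algorithm, in the style of Bresenham's method, assuming a slope between 0 and 1 and a hypothetical plot() routine:

/* Hypothetical pixel-plotting routine. */
void plot(int x, int y);

/* Incremental integer line drawing for slopes between 0 and 1:
 * only integer additions are needed inside the loop. */
void bresenham_line(int x0, int y0, int x1, int y1)
{
    int dx = x1 - x0, dy = y1 - y0;
    int p  = 2 * dy - dx;            /* initial decision parameter */
    int x = x0, y = y0;

    plot(x, y);
    while (x < x1) {
        x++;
        if (p < 0) {
            p += 2 * dy;             /* stay on the same row */
        } else {
            y++;
            p += 2 * (dy - dx);      /* step up one row */
        }
        plot(x, y);
    }
}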
To save time in drawing a circle, we can make use of the symmetry of a circle: we only compute the
segment of the circle between 0 and 45 degrees and repeat it 8 times, as shown in the diagram, to
produce the full circle. This algorithm also employs the incremental method, which further
improves the efficiency.
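A minimal sketch of this idea (midpoint-style incremental decisions plus eight-way symmetry; the plot() routine and the initial decision value are assumptions for illustration):

/* Hypothetical pixel-plotting routine. */
void plot(int x, int y);

/* Plot the eight symmetric points for a point (x, y) computed in one octant
 * of a circle centred at (xc, yc). */
static void plot8(int xc, int yc, int x, int y)
{
    plot(xc + x, yc + y); plot(xc - x, yc + y);
    plot(xc + x, yc - y); plot(xc - x, yc - y);
    plot(xc + y, yc + x); plot(xc - y, yc + x);
    plot(xc + y, yc - x); plot(xc - y, yc - x);
}

/* Midpoint circle algorithm: incremental integer decisions, one octant only. */
void midpoint_circle(int xc, int yc, int r)
{
    int x = 0, y = r;
    int p = 1 - r;                   /* initial decision parameter */
    plot8(xc, yc, x, y);
    while (x < y) {
        x++;
        if (p < 0) {
            p += 2 * x + 1;
        } else {
            y--;
            p += 2 * (x - y) + 1;
        }
        plot8(xc, yc, x, y);
    }
}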
- Basic idea: For each scan line crossing a polygon, this algorithm locates the intersection points of
the scan line with the polygon edges. These intersection points are sorted from left to right.
Then, we fill the pixels between each intersection pair.
- Some scan-line intersections at polygon vertices require special handling. A scan line passing
through a vertex meets two polygon edges at that position. In this case we may add either 1 or 2
points to the list of intersections. The decision depends on whether the 2 edges on either side of the
vertex are both above, both below, or one above and one below the scan line. Only when both edges
are above or both are below the scan line do we add 2 points.
- Inside-Outside Tests: The above algorithm only works for standard polygon shapes. For cases where
the edges of the polygon intersect each other, we need a rule to identify whether a point is an
interior or exterior point. Students may find interesting descriptions of 2 methods to solve this
problem in many textbooks: the odd-even rule and the nonzero winding number rule (a small sketch of
the odd-even rule follows).
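A small sketch of the odd-even rule used as a point-in-polygon test (the array layout and names are assumptions for illustration):

/* Odd-even rule: count how many polygon edges a ray from (px, py) towards
 * +x crosses; an odd count means the point is inside.
 * vx[], vy[] hold the n polygon vertices. */
int inside_odd_even(double px, double py, const double vx[], const double vy[], int n)
{
    int crossings = 0;
    for (int i = 0, j = n - 1; i < n; j = i++) {
        /* Does edge (j -> i) straddle the horizontal line y = py? */
        if ((vy[i] > py) != (vy[j] > py)) {
            /* x coordinate where the edge meets that horizontal line */
            double x = vx[j] + (py - vy[j]) * (vx[i] - vx[j]) / (vy[i] - vy[j]);
            if (x > px)
                crossings++;
        }
    }
    return crossings & 1;   /* 1 = interior, 0 = exterior */
}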
- This algorithm starts at a point inside a region and paints the interior outward towards the boundary.
- This is a simple method but not an efficient one: it is recursive, and the recursion may occupy a
large stack in main memory.
void BoundaryFill(int x, int y, COLOR fill, COLOR boundary)
{
    COLOR current = GetPixel(x, y);              /* color currently stored at (x, y) */
    if ((current != boundary) && (current != fill))
    {
        SetPixel(x, y, fill);
        BoundaryFill(x + 1, y, fill, boundary);  /* recurse into the 4 neighbours */
        BoundaryFill(x - 1, y, fill, boundary);
        BoundaryFill(x, y + 1, fill, boundary);
        BoundaryFill(x, y - 1, fill, boundary);
    }
}
- More efficient methods fill horizontal pixel spans across scan lines, instead of proceeding to
neighbouring points.
- Flood-Fill is similar to Boundary-Fill. The difference is that Flood-Fill fills an area which is
not defined by a single boundary color; instead, it repaints all connected pixels that currently have
a specified interior color (a non-recursive sketch is given below).
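A sketch of a non-recursive, 4-connected flood fill using an explicit stack, assuming GetPixel/SetPixel as in the BoundaryFill routine above and hypothetical screen dimensions:

#include <stdlib.h>

typedef unsigned int COLOR;

/* Assumed to be provided by the graphics package, as in BoundaryFill above. */
COLOR GetPixel(int x, int y);
void  SetPixel(int x, int y, COLOR c);

#define SCREEN_W 1024
#define SCREEN_H 768

/* 4-connected flood fill: repaint every connected pixel whose colour equals
 * old_color, starting from (x, y).  An explicit stack of (x, y) pairs avoids
 * the deep recursion of the simple recursive formulation. */
void FloodFill(int x, int y, COLOR fill, COLOR old_color)
{
    if (fill == old_color)
        return;                                   /* nothing to do, and avoids looping */

    /* Each pixel is filled at most once and has 4 neighbours, so this (generous)
     * bound on the number of pushes is safe. */
    long cap = (long)SCREEN_W * SCREEN_H * 4 + 4;
    int *stack = malloc(sizeof(int) * 2 * cap);
    long top = 0;
    if (!stack) return;

    stack[top++] = x; stack[top++] = y;
    while (top > 0) {
        y = stack[--top];
        x = stack[--top];
        if (x < 0 || y < 0 || x >= SCREEN_W || y >= SCREEN_H) continue;
        if (GetPixel(x, y) != old_color) continue;
        SetPixel(x, y, fill);
        stack[top++] = x + 1; stack[top++] = y;
        stack[top++] = x - 1; stack[top++] = y;
        stack[top++] = x;     stack[top++] = y + 1;
        stack[top++] = x;     stack[top++] = y - 1;
    }
    free(stack);
}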
- A character is defined by its outline, which is usually composed of lines and curves.
- We can use a method similar to the one for rendering polygons to render a character.
- However, because text is used very often, we usually convert characters into bitmaps in advance to
improve drawing efficiency.
- To draw a character on the screen, all we need to do is copy the corresponding bitmap to
the specified coordinate.
- The problem with this method is that scaling a bitmapped character to produce different
character sizes results in a block-like structure (stair-casing, aliasing). Hence we normally
render a few bitmaps for a single character to represent different sizes of the same character.
3.7 Bitmap
- A graphics pattern such as an icon or a character may be needed frequently, or may need to be
re-used.
- Generating the pattern every time when needed may waste a lot of processing time.
- A bitmap can be used to store a pattern and duplicate it to many places on the image or on
the screen with simple copying operations.
3.8 Properties
In graphical packages, we can specify such properties; e.g. in PowerPoint, we can modify
the properties of objects through a Format command.
In programming tools, we may pass the properties as arguments when we call the functions of these
primitives, or we may pre-select the properties before calling the functions.
In many applications, changes in orientations, size, and shape are accomplished with geometric transformations
that alter the coordinate descriptions of objects.
Other transformations:
Reflection
Shear
Basic Transformations
Translation
We translate a 2D point by adding translation distances, tx and ty, to the original coordinate position
(x,y):
x' = x + tx, y' = y + ty
Alternatively, the translation can be expressed with the transformation matrix

T(t_x, t_y) = \begin{pmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{pmatrix}
Then we can rewrite the formula as:
\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} =
\begin{pmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
For example, to translate a triangle with vertices at original coordinates (10,20), (10,10), (20,10) by
tx=5, ty=10, we compute as follows:
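For instance, applying the translation matrix to the first vertex (10,20):

\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} =
\begin{pmatrix} 1 & 0 & 5 \\ 0 & 1 & 10 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} 10 \\ 20 \\ 1 \end{pmatrix} =
\begin{pmatrix} 10 + 5 \\ 20 + 10 \\ 1 \end{pmatrix} =
\begin{pmatrix} 15 \\ 30 \\ 1 \end{pmatrix}

and similarly for the other two vertices.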
The resultant coordinates of the triangle vertices are (15,30), (15,20), and (25,20) respectively.
Exercise: translate a triangle with vertices at original coordinates (10,25), (5,10), (20,10) by tx=15,
ty=5. Roughly plot the original and resultant triangles.
To rotate an object about the origin (0,0), we specify the rotation angle θ. Positive and negative values
for the rotation angle define counterclockwise and clockwise rotations respectively. The computation of
this rotation for a point is:

x' = x \cos\theta - y \sin\theta, \qquad y' = x \sin\theta + y \cos\theta
Alternatively, this rotation can also be specified by the following transformation matrix:
R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}
Then we can rewrite the formula as:
\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} =
\begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
For example, to rotate a triangle about the origin with vertices at original coordinates (10,20), (10,10),
(20,10) by 30 degrees, we compute as follows (using cos 30° ≈ 0.866 and sin 30° = 0.5). Rotation of
vertex (10,10):

\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} =
\begin{pmatrix} 0.866 & -0.5 & 0 \\ 0.5 & 0.866 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} 10 \\ 10 \\ 1 \end{pmatrix} =
\begin{pmatrix} 0.866 \cdot 10 - 0.5 \cdot 10 \\ 0.5 \cdot 10 + 0.866 \cdot 10 \\ 1 \end{pmatrix} =
\begin{pmatrix} 3.66 \\ 13.66 \\ 1 \end{pmatrix}
Rotation of vertex (20,10):
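Applying the same rotation matrix to this vertex gives:

\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} =
\begin{pmatrix} 0.866 & -0.5 & 0 \\ 0.5 & 0.866 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} 20 \\ 10 \\ 1 \end{pmatrix} =
\begin{pmatrix} 0.866 \cdot 20 - 0.5 \cdot 10 \\ 0.5 \cdot 20 + 0.866 \cdot 10 \\ 1 \end{pmatrix} =
\begin{pmatrix} 12.32 \\ 18.66 \\ 1 \end{pmatrix}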
Exercise: Rotate a triangle with vertices at original coordinates (10,20), (5,10), (20,10) by 45
degrees. Roughly plot the original and resultant triangles.
We scale a 2D object with respect to the origin by setting the scaling factors sx and sy, which
are multiplied with the original vertex coordinate positions (x,y):

x' = x \cdot s_x, \qquad y' = y \cdot s_y
Alternatively, this scaling can also be specified by the following transformation matrix:
S(s_x, s_y) = \begin{pmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{pmatrix}
Then we can rewrite the formula as:

\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} =
\begin{pmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
Exercise: Scale a triangle with vertices at original coordinates (10,25), (5,10), (20,10) by sx=1.5,
sy=2, with respect to the origin. Roughly plot the original and resultant triangles.
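As a small illustration of these three basic transformations in code (the type and function names here are hypothetical, not from any particular graphics package):

#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979323846

typedef struct { double m[3][3]; } Mat3;

/* Build the basic 2D transformation matrices in homogeneous coordinates. */
Mat3 translate2d(double tx, double ty)
{
    Mat3 t = {{{1, 0, tx}, {0, 1, ty}, {0, 0, 1}}};
    return t;
}

Mat3 rotate2d(double theta)           /* theta in radians, about the origin */
{
    Mat3 r = {{{cos(theta), -sin(theta), 0},
               {sin(theta),  cos(theta), 0},
               {0,           0,          1}}};
    return r;
}

Mat3 scale2d(double sx, double sy)    /* with respect to the origin */
{
    Mat3 s = {{{sx, 0, 0}, {0, sy, 0}, {0, 0, 1}}};
    return s;
}

/* Apply a matrix to the homogeneous point (x, y, 1). */
void apply(const Mat3 *a, double x, double y, double *xo, double *yo)
{
    *xo = a->m[0][0] * x + a->m[0][1] * y + a->m[0][2];
    *yo = a->m[1][0] * x + a->m[1][1] * y + a->m[1][2];
}

int main(void)
{
    double x, y;
    Mat3 r = rotate2d(30.0 * PI / 180.0);
    apply(&r, 10, 10, &x, &y);
    printf("(10,10) rotated by 30 degrees -> (%.2f, %.2f)\n", x, y);  /* (3.66, 13.66) */
    return 0;
}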
Composite Transformations
Suppose we apply two transformations, B followed by A, to a point C (written as a homogeneous column
vector). The result can be computed either as C' = A·(B·C) or as C' = (A·B)·C.
The advantage of computing it using C' = (A·B)·C instead of C' = A·(B·C) is that, for computing
the 3 vertices of a triangle, C1, C2, C3, the computation time is shortened:
Using C' = A·(B·C):
- compute B·C1 and put the result into I1
- compute A·I1 and put the result into C1'
- compute B·C2 and put the result into I2
- compute A·I2 and put the result into C2'
- compute B·C3 and put the result into I3
- compute A·I3 and put the result into C3'
(6 matrix-vector multiplications in total)
Using C' = (A·B)·C:
- compute A·B once and put the result into M
- compute M·C1, M·C2 and M·C3 to obtain C1', C2' and C3'
(1 matrix-matrix multiplication plus 3 matrix-vector multiplications; the saving grows with the number
of vertices)
Example: Rotate a triangle with vertices (10,20), (10,10), (20,10) about the origin by 30 degrees
and then translate it by tx=5, ty=10. First, we compute the rotation matrix:

B = \begin{pmatrix} 0.866 & -0.5 & 0 \\ 0.5 & 0.866 & 0 \\ 0 & 0 & 1 \end{pmatrix}
And we compute the translation matrix:
A = \begin{pmatrix} 1 & 0 & 5 \\ 0 & 1 & 10 \\ 0 & 0 & 1 \end{pmatrix}
M = A \cdot B =
\begin{pmatrix} 1 & 0 & 5 \\ 0 & 1 & 10 \\ 0 & 0 & 1 \end{pmatrix} \cdot
\begin{pmatrix} 0.866 & -0.5 & 0 \\ 0.5 & 0.866 & 0 \\ 0 & 0 & 1 \end{pmatrix} =
\begin{pmatrix} 0.866 & -0.5 & 5 \\ 0.5 & 0.866 & 10 \\ 0 & 0 & 1 \end{pmatrix}

Transformation of vertex (10,10):

\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} =
\begin{pmatrix} 0.866 & -0.5 & 5 \\ 0.5 & 0.866 & 10 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} 10 \\ 10 \\ 1 \end{pmatrix} =
\begin{pmatrix} 0.866 \cdot 10 - 0.5 \cdot 10 + 5 \\ 0.5 \cdot 10 + 0.866 \cdot 10 + 10 \\ 1 \end{pmatrix} =
\begin{pmatrix} 8.66 \\ 23.66 \\ 1 \end{pmatrix}
Transformation of vertex (20,10):
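Using the composite matrix M computed above:

\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} =
\begin{pmatrix} 0.866 & -0.5 & 5 \\ 0.5 & 0.866 & 10 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} 20 \\ 10 \\ 1 \end{pmatrix} =
\begin{pmatrix} 0.866 \cdot 20 - 0.5 \cdot 10 + 5 \\ 0.5 \cdot 20 + 0.866 \cdot 10 + 10 \\ 1 \end{pmatrix} =
\begin{pmatrix} 17.32 \\ 28.66 \\ 1 \end{pmatrix}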
Exercise: Translate a triangle with vertices (10,20), (10,10), (20,10) by tx=5, ty=10 and then rotate it about
the origin by 30 degrees. Compare the result with the one obtained previously: (3.66,32.32),
(8.66,23.66), and (17.32,28.66) by plotting the original triangle together with these 2 results.
Translations
By common sense, if we translate a shape with 2 successive translation vectors (tx1, ty1) and (tx2, ty2), it is
equal to a single translation of (tx1+tx2, ty1+ty2).
This additive property can be demonstrated by composite transformation matrix:
\begin{pmatrix} 1 & 0 & t_{x1} \\ 0 & 1 & t_{y1} \\ 0 & 0 & 1 \end{pmatrix} \cdot
\begin{pmatrix} 1 & 0 & t_{x2} \\ 0 & 1 & t_{y2} \\ 0 & 0 & 1 \end{pmatrix} =
\begin{pmatrix} 1 & 0 & t_{x1}+t_{x2} \\ 0 & 1 & t_{y1}+t_{y2} \\ 0 & 0 & 1 \end{pmatrix}
This demonstrates that 2 successive translations are additive.
Rotations
By common sense, if we rotate a shape with 2 successive rotation angles θ and α about the origin, it
is equal to rotating the shape once by an angle θ + α about the origin.
Similarly, this additive property can be demonstrated by composite transformation matrix:
\begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix} \cdot
\begin{pmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{pmatrix} =
\begin{pmatrix}
\cos\theta\cos\alpha - \sin\theta\sin\alpha & -(\cos\theta\sin\alpha + \sin\theta\cos\alpha) & 0 \\
\sin\theta\cos\alpha + \cos\theta\sin\alpha & \cos\theta\cos\alpha - \sin\theta\sin\alpha & 0 \\
0 & 0 & 1
\end{pmatrix}
= \begin{pmatrix} \cos(\theta+\alpha) & -\sin(\theta+\alpha) & 0 \\ \sin(\theta+\alpha) & \cos(\theta+\alpha) & 0 \\ 0 & 0 & 1 \end{pmatrix}
This demonstrates that 2 successive rotations are additive.
By common sense, if we scale a shape with 2 successive scaling factors (sx1, sy1) and (sx2, sy2), with
respect to the origin, it is equal to a single scaling of (sx1·sx2, sy1·sy2) with respect to the origin. This
multiplicative property can be demonstrated by composite transformation matrix:
\begin{pmatrix} s_{x1} & 0 & 0 \\ 0 & s_{y1} & 0 \\ 0 & 0 & 1 \end{pmatrix} \cdot
\begin{pmatrix} s_{x2} & 0 & 0 \\ 0 & s_{y2} & 0 \\ 0 & 0 & 1 \end{pmatrix} =
\begin{pmatrix} s_{x1} s_{x2} & 0 & 0 \\ 0 & s_{y1} s_{y2} & 0 \\ 0 & 0 & 1 \end{pmatrix}
This demonstrates that 2 successive scalings with respect to the origin are multiplicative.
General Pivot-Point Rotation
Rotation about an arbitrary pivot point is not as simple as rotation about the origin. The procedure for
rotation about an arbitrary pivot point is:
- Translate the object so that the pivot-point position is moved to the origin.
- Rotate the object about the origin.
- Translate the object so that the pivot point is returned to its original position.
With pivot point (x_r, y_r), the composite matrix is:

T(x_r, y_r) \cdot R(\theta) \cdot T(-x_r, -y_r) =
\begin{pmatrix}
\cos\theta & -\sin\theta & x_r(1-\cos\theta) + y_r\sin\theta \\
\sin\theta & \cos\theta & y_r(1-\cos\theta) - x_r\sin\theta \\
0 & 0 & 1
\end{pmatrix}
General Fixed-Point Scaling
Scaling with respect to an arbitrary fixed point is not as simple as scaling with respect to the origin.
The procedure of scaling with respect to an arbitrary fixed point is:
1. Translate the object so that the fixed point coincides with the origin.
2. Scale the object with respect to the origin.
3. Use the inverse translation of step 1 to return the object to its original position.
\begin{pmatrix} 1 & 0 & x_f \\ 0 & 1 & y_f \\ 0 & 0 & 1 \end{pmatrix} \cdot
\begin{pmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{pmatrix} \cdot
\begin{pmatrix} 1 & 0 & -x_f \\ 0 & 1 & -y_f \\ 0 & 0 & 1 \end{pmatrix} =
\begin{pmatrix} s_x & 0 & x_f(1-s_x) \\ 0 & s_y & y_f(1-s_y) \\ 0 & 0 & 1 \end{pmatrix}
General Scaling Directions
Scaling along an arbitrary direction is not as simple as scaling along the x and y axes. The procedure for
scaling along and normal to an arbitrary direction (with scaling factors s1 and s2), with respect to the origin, is:
1. Rotate the object so that the directions for s1 and s2 coincide with the x and y axes respectively.
2. Scale the object with respect to the origin using (s 1, s2).
3. Use an opposite rotation to return points to their original orientation.
Reflection
Shear
X-direction shear, with a shearing parameter shx, relative to the x-axis:
\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} =
\begin{pmatrix} 1 & sh_x & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}

i.e. x' = x + sh_x \cdot y, \qquad y' = y
Exercise: Think of a y-direction shear, with a shearing parameter shy, relative to the y-axis.
Transformation Between 2 Cartesian Systems
For modelling and design applications, individual objects may be defined in their own
local Cartesian References. The local coordinates must then be transformed to position
the objects within the overall scene coordinate system.
Suppose we want to transform object descriptions from the xy system to the x'y' system:
The composite transformation consists of a translation that brings the x'y' origin to the xy origin,
followed by a rotation that aligns the two sets of axes.
Clipping
Line Clipping
This section treats clipping of lines against rectangles. Although there are specialized algorithms
for rectangle and polygon clipping, it is important to note that other graphic primitives can be
clipped by repeated application of the line clipper.
Before we discuss clipping lines, let's look at the simpler problem of clipping individual
points.
If the x coordinate boundaries of the clipping rectangle are Xmin and Xmax, and the y
coordinate boundaries are Ymin and Ymax, then the following inequalities must be
satisfied for a point at (X,Y) to be inside the clipping rectangle:

Xmin \le X \le Xmax \quad \text{and} \quad Ymin \le Y \le Ymax

If any of the four inequalities does not hold, the point is outside the clipping rectangle.
Cohen-Sutherland Line Clipping
1. Endpoint pairs are checked for trivial acceptance or trivial rejection using their outcodes.
2. If the line can be neither trivially accepted nor trivially rejected, it is divided into two segments at a clip edge.
3. The segments are iteratively tested for trivial acceptance or rejection, and divided at clip edges,
until what remains is completely inside the window or is trivially rejected.
To perform the trivial accept and reject tests, we extend the edges of the clip rectangle to
divide the plane of the clip rectangle into nine regions. Each region is assigned a 4-bit
code determined by where the region lies with respect to the outside halfplanes of the
clip-rectangle edges. Each bit in the outcode is set to either 1 (true) or 0 (false); the 4 bits
in the code indicate whether the point is above, below, to the right of, or to the left of the clip rectangle.
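A sketch of how the outcode might be computed (the particular bit assignment below is just one common convention; the notes' ordering of the four conditions may differ):

/* 4-bit Cohen-Sutherland outcode; this bit assignment is one common convention. */
#define LEFT   1   /* x < Xmin */
#define RIGHT  2   /* x > Xmax */
#define BOTTOM 4   /* y < Ymin */
#define TOP    8   /* y > Ymax */

int outcode(double x, double y,
            double Xmin, double Xmax, double Ymin, double Ymax)
{
    int code = 0;
    if (x < Xmin) code |= LEFT;
    else if (x > Xmax) code |= RIGHT;
    if (y < Ymin) code |= BOTTOM;
    else if (y > Ymax) code |= TOP;
    return code;
}

/* Trivial accept: both endpoint outcodes are 0.
 * Trivial reject: the bitwise AND of the two endpoint outcodes is non-zero. */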
Note the difference between this strategy for a polygon and the Cohen-Sutherland algorithm for
clipping a line: The polygon clipper clips against four edges in succession, whereas the line
clipper tests the outcode to see which edge is crossed, and clips only when necessary.
Polygons can be clipped against each edge of the window one at a time. Window/edge
intersections, if any, are easy to find since the X or Y coordinates are already known.
Vertices which are kept after clipping against one window edge are saved for clipping
against the remaining edges.
Note that the number of vertices usually changes and will often increase.
We are using the Divide and Conquer approach.
The clip boundary determines a visible and invisible region. The edge from vertex i to vertex
i+1 can be one of four types (handled as in the sketch below):
- Both endpoints inside the boundary: output vertex i+1.
- Leaving the visible region (i inside, i+1 outside): output only the intersection with the boundary.
- Both endpoints outside the boundary: output nothing.
- Entering the visible region (i outside, i+1 inside): output the intersection with the boundary, then vertex i+1.
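A sketch of clipping a polygon against a single boundary (here the left edge x = Xmin; the names and array layout are assumptions), handling the four edge types listed above; the same routine is repeated for the other three boundaries:

/* Clip a polygon against the left boundary x = xmin.
 * in_x/in_y hold the n input vertices; out_x/out_y receive the result.
 * Returns the new vertex count (out arrays should hold up to 2n entries). */
int clip_left(const double in_x[], const double in_y[], int n,
              double out_x[], double out_y[], double xmin)
{
    int m = 0;
    for (int i = 0; i < n; i++) {
        int j = (i + 1) % n;                       /* edge from vertex i to i+1 */
        int i_in = in_x[i] >= xmin;
        int j_in = in_x[j] >= xmin;

        if (i_in != j_in) {                        /* edge crosses the boundary */
            double t = (xmin - in_x[i]) / (in_x[j] - in_x[i]);
            out_x[m] = xmin;
            out_y[m] = in_y[i] + t * (in_y[j] - in_y[i]);
            m++;
        }
        if (j_in) {                                /* keep the endpoint that is inside */
            out_x[m] = in_x[j];
            out_y[m] = in_y[j];
            m++;
        }
    }
    return m;
}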
All information in the user dialogue is then presented in the language of the
application. In an architectural design package, this means that all interactions
are described only in architectural terms, without reference to particular data
structures or other concepts that may be unfamiliar to an architect. In the following
sections, we discuss some of the general considerations in structuring a user
dialogue.
Windows and Icons
Figure 8-1 shows examples of common window and icon graphical interfaces. Visual
representations are used both for objects to be manipulated in an application
and for the actions to be performed on the application objects.
A window system provides a window-manager interface for the user and
functions for handling the display and manipulation of the windows. Common
functions for the window system are opening and closing windows, repositioning
windows, resizing windows, and display routines that provide interior and
exterior clipping and other graphics functions. Typically, windows are displayed
with sliders, buttons, and menu icons for selecting various window options.
Some general systems, such as X Windows and NeWS, are capable of supporting
multiple window managers so that different window styles can be accommodated,
each with its own window manager. The window managers can then be
designed for particular applications. In other cases, a window system is designed
for one specific application and window style.
Icons representing objects such as furniture items and circuit elements are
often referred to as application icons. The icons representing actions, such as rotate,
magnify, scale, clip, and paste, are called control icons, or command icons.
The virtual world is the scene database which contains the geometric
representations and attributes for all objects within the environment. The format
of this representation is dependent on the graphics and simulation engines used.
The graphics engine is responsible for actually generating the image which a
viewer will see. This is done by taking into account the scene database and the
viewer's current position and orientation. It also includes combining information
from the scene database with textures, sounds, special effects, etc. to produce an
impression that you are looking into the scene from a particular point. The
simulation engine actually does most of the work required to maintain a virtual
environment. It is concerned purely with the dynamics of the environment - how
it changes over time and how it responds to the user’s actions. This includes
handling any interactions, programmed object actions, physical simulation (e.g.
gravity or inertia) or user actions. Finally, the user interface controls how the user
navigates and interacts with this virtual environment. It acts as a buffer between
the virtual world software and the myriad of input and output devices which may
be used. Inputs and outputs are mostly independent of the VR software except in
specialist applications.
There are hundreds of different software packages which allow users to either
experience virtual worlds, or even to create and edit them. The majority of
professional VR packages offer the same basic functionality, allowing a world to
be created from any number of 3D objects which can be arbitrarily defined using
either graphic primitives or specific face sets. These packages also offer total
freedom in viewing the virtual world from any conceivable position and
orientation. Different systems merely offer additional features, perform operations
better, give better performance or image quality, etc. A more interesting
development of 3D graphics engines and immersive environments has occurred in
the computer games industry. Regardless of the stigma of computer games within
the serious research community, it is undeniable that they offer some of the most
immersive, usable and engrossing virtual environments - how would they sell
otherwise?
Most professional VR packages are very expensive and often require high
specification workstations to run properly. The benefits of such systems are their
flexibility and generic nature. Anyone who needed such powerful packages, and
could afford them, could also afford the computing power needed to run them.
Computer games, on the other hand, have had to evolve in a much more
restrictive environment. In order to be successful they have to sell as many units
as possible at a price which ordinary computer owners can afford. In order to do
this they must be developed to run on as many "normal" computers as possible.
The tricks needed here are to fit as many features as possible into the product,
while using as little computing power as possible. Clearly not an easy task. For
this reason 3D computer games have evolved in more pronounced steps than
professional VR systems. Professional VR systems at the outset have tried to
create a flexible, true-3d world almost irrespective of the hardware requirements,
whereas 3D games have tried to provide as much as possible in the commonly
available hardware at the time.
Visual realism
Image resolution
Image resolution is another factor which is closely linked with visual realism.
Computer generated images consist of discrete picture elements or pixels, the size
and number of these being dependent on the display size and resolution. At higher
resolutions the discrete nature of the display becomes less apparent, however, the
number of pixels in the image becomes vastly greater. As the colour and intensity
of each pixel must be generated individually, this puts a heavier load on the
graphics system.
Frame rate
Frame rate is another effect of the discrete nature of computer graphics and
animation. To give the impression of a dynamic picture, the system simply
updates the display very frequently with a new image. This system relies on the
human phenomenon of persistence of vision, our ability to integrate a rapid
succession of discrete images into a visual continuum. This occurs at frequencies
above the Critical Fusion Frequency (CFF) which can be as low as 20Hz. Normal
television broadcasts update at a frequency of 50Hz (in the UK - 60 Hz in the
US). This means that in order for a virtual environment to appear flicker free, the
system must update the image more than 20 times each second - again a heavy
load on the graphics system.
Latency
Latency is probably one of the most important aspects of a virtual reality system
which must be addressed to make the environment not only more realistic, but
simply tolerable. Latency or lag is the delay induced by the various components
of a VR system between a user’s inputs and the corresponding response from the
system in the form of a change in the display. As latency increases, a user’s senses
become increasingly confused as their actions become more and more delayed.
Chronic cases can even result in simulator sickness, a recognised medical problem
associated with virtual environments. Latency must be kept to a minimum in order
to create a usable VR system.
Types of VR systems
Video mapping
Monitoring the user with a video camera provides another form of interactive
environment. The computer identifies the user’s body and overlays it upon a
computer generated scene. The user can watch a monitor which shows the
combined image. By gesturing and moving around in front of the camera the user
can interact with the virtual environment.
Immersive VR
Telepresence
Telepresence links remote sensors and cameras in the real world with an interface
to a human operator. For example, the remote robots used in bomb disposal
operations are a form of telepresence. The operator can see the environment
which the robot is in and can control its position and actions from a safe distance.
Such systems are used widely in any applications which must be performed in
hostile or dangerous environments.
Augmented reality
Fish tank VR
This phrase was used to describe a hybrid system which incorporated a standard
desktop VR system with a stereoscopic viewing and head tracking mechanism.
The system used LCD shutter glasses to provide the stereoscopic images and a
head tracker which monitored the user’s point of view on the screen. As the user
moved their head, the screen display updated to show the new perspective. This
provided a far superior viewing experience than normal desktop VR systems by
providing motion parallax as the user moved their head.
We will now take a look at some aspects of VR applications. This list is only a
very small sample of the full potential of this technology.
Flight simulation
One of the main contributors to VR research is the work that came from
developing immersive simulations. In both civil and military aviation, pilot
training is an incredibly costly and time consuming business. Pilots must spend
hundreds of hours of flight time during their training and even still this cannot
prepare them for all the possible emergencies or problems that may arise in a
flight. Flight simulators were developed to provide a safe and realistic addition to
pilot training. Pilots could use the simulators to enact almost any conceivable
emergency scenario which would not usually be possible during a real flight.
One other aspect of engineering in which VR can play a useful role is in industrial
prototyping. The design process usually involves creating a number of, often fully
working, scale model or real sized prototypes of a product. These prototypes are
created to evaluate the product before the design is finalised and it goes into full
production. Often these prototypes are very expensive and time consuming to
construct, especially in large projects such as car design. VR and CAD tools can
be used to quickly prototype and evaluate a product. The benefits of this approach
allow a far greater flexibility than a model. Often the virtual prototype can be
created automatically from the schematics and can be quickly revised to
demonstrate design alternatives.
Visualisation
These fields have all been serviced relatively well by normal 2D tools, but more
and more research is looking into the applications of 3D graphics and VR
technology to aid the understanding of these information systems. Data
visualisation has used 3D to highlight trends and anomalies in large,
multidimensional data sets and also for displaying demographic data
superimposed on meaningful image maps. VR has been used to visualise and
query large databases by generating a ‘landscape’ from the structure and contents
of the databases. Users can then intuitively explore the database by using their
natural abilities and perception. Some research has even combined theories from
town planning and information visualisation to create more legible environments.
Finally, software visualisation has seen limited success with standard 2D
representations. The size and complexity of modern software make visualising or
understanding their structure an increasingly difficult problem. Investigations are
being made into how VR can help to create more understandable and information
rich software visualisations.
Architecture
Architects have always used CAD software for planning and designing buildings.
Even when such software was limited to simple 2D elevations it proved extremely
beneficial. Now, CAD systems play an even greater role in architectural design by
providing plans, sections, elevations, line perspectives and fully rendered
visualisations of the interior and exterior of a building. VR systems are
increasingly being used for large projects to provide clients with a virtual walk
through of their proposed building. This allows the architects to interact and
communicate with the client on specific points and enable the client to gain a
better idea of what the final result will be. It also means that if any changes are to
be made in the design, they can be made quickly and cost effectively at an early
stage in the construction process.
Another project, by the Centre for Computer Graphics research at the University
of Pennsylvania, has developed "Jack", a virtual human model which can be used
to evaluate many aspects of human factors in new designs or situations. Jack is a
fully articulated human 3D model which incorporates 68 joints. All articulation
points are restricted to moving within human limits. Also, the limbs and torso
possess attributes such as mass, centre of gravity and moment of inertia allowing
Jack to be subjected to dynamic simulations in order to observe his movement and
reaction, for example in a crash simulation. Jack can also be customised to the
physical attributes of any individual.
Physical simulations
VR systems have been used greatly for visualising simulation results allowing the
user to see the invisible. One application provided a simulation of a wind tunnel
experiment with a model aircraft as the subject. The simulation modelled the flow
of air over the surface of the virtual model using accurate physical equations to
provide realistic results. A user could enter the virtual experiment and inspect any
aspect of it freely without interrupting the simulation. The user could also
introduce smoke trails at any point and view how the smoke behaved in the air
currents. Another early example is molecular modelling. One such system used a
large boom arm control which a chemist could use to manually ‘dock’ compounds
into the appropriate receptors. The simulation modelled the atomic forces at work
in the molecules and provided tactile feedback to the user.
UNIT II
Three Dimensional Transformations
Methods for geometric transformations and object modelling in 3D are extended from 2D methods by including
considerations for the z coordinate.
Translation
We translate a 3D point by adding translation distances, tx, ty, and tz, to the original coordinate
position (x,y,z):
x' = x + tx, y' = y + ty, z' = z + tz
Alternatively, translation can also be specified by the transformation matrix in the following formula:
\begin{pmatrix} x' \\ y' \\ z' \\ 1 \end{pmatrix} =
\begin{pmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}
Exercise: translate a triangle with vertices at original coordinates (10,25,5), (5,10,5), (20,10,10) by
tx=15, ty=5, tz=5. For verification, roughly plot the x and y values of the original and
resultant triangles, and imagine the locations of z values.
Scaling With Respect to the Origin
We scale a 3D object with respect to the origin by setting the scaling factors sx, sy and sz, which
are multiplied to the original vertex coordinate positions (x,y,z):
x' = x * sx, y' = y * sy, z' = z * sz
Alternatively, this scaling can also be specified by the transformation matrix in the following formula:
\begin{pmatrix} x' \\ y' \\ z' \\ 1 \end{pmatrix} =
\begin{pmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}
Exercise: Scale a triangle with vertices at original coordinates (10,25,5), (5,10,5), (20,10,10) by
sx=1.5, sy=2, and sz=0.5 with respect to the origin. For verification, roughly plot the x
and y values of the original and resultant triangles, and imagine the locations of z values.
Coordinate-Axes Rotations
A 3D rotation can be specified around any line in space. The easiest rotation axes to handle are the
coordinate axes.
Z-axis rotation: x' = x cos θ - y sin θ,
y' = x sin θ + y cos θ, and
z' = z
or:
\begin{pmatrix} x' \\ y' \\ z' \\ 1 \end{pmatrix} =
\begin{pmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}
X-axis rotation: y' = y cos θ - z sin θ,
z' = y sin θ + z cos θ, and
x' = x
or:
\begin{pmatrix} x' \\ y' \\ z' \\ 1 \end{pmatrix} =
\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}
Y-axis rotation: z' = z cos θ - x sin θ,
x' = z sin θ + x cos θ, and
y' = y
or:
\begin{pmatrix} x' \\ y' \\ z' \\ 1 \end{pmatrix} =
\begin{pmatrix} \cos\theta & 0 & \sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}
3D Rotations About an Axis Which is Parallel to an Axis
Step 1. Translate the object so that the rotation axis coincides with the parallel coordinate axis.
Step 2. Perform the specified rotation about that axis.
Step 3. Translate the object so that the rotation axis is moved back to its original position.
General 3D Rotations
Step 1. Translate the object so that the rotation axis passes through the coordinate origin.
Step 2. Rotate the object so that the axis of rotation coincides with one of the coordinate axes.
Step 3. Perform the specified rotation about that coordinate axis.
Step 4. Rotate the object so that the rotation axis is brought back to its original orientation.
Step 5. Translate the object so that the rotation axis is brought back to its original position.
UNIT III
Color Models
A color model is a method for explaining the properties or behaviour of color within some
particular context.
Light or colors are from a narrow frequency band within the electromagnetic spectrum:
RGB Model
YIQ Model
CMY Model
Consider that,
- Magenta ink indeed subtracts the green component from incident light, so the remaining red and
blue components are seen by us, as the resultant color magenta.
- Cyan ink indeed subtracts the red component from incident light, so the remaining green and
blue components are seen by us, as the resultant color cyan.
- If we mix the ink of magenta and cyan, then, this ink subtracts the green and red component from
the incident light, and the remaining blue component is seen by us, as a resultant color of blue.
HSV Model
HLS Model
Used by Tektronix.
H: Hue
L: Lightness
S: Saturation
Values of intensity calculated by an illumination model must be converted to one of the allowable
intensity levels for the particular graphics system in use.
2. The intensities produced by display devices are not linear with the electron-gun
voltage. This is solved by applying a gamma correction through the video lookup table.
The voltage for intensity I_k is computed as

V_k = (I_k / a)^{1/\gamma}

where a is a constant and γ is an adjustment factor controlled by the user.
For example, the NTSC signal standard uses γ = 2.2.
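A small sketch of building such a gamma-correction lookup table from this formula (the 256-entry size and names are assumptions for illustration):

#include <math.h>

#define LEVELS 256

/* Fill a video look-up table so that a requested intensity I_k in [0,1] is
 * mapped to the gun voltage V_k = (I_k / a)^(1/gamma). */
void build_gamma_table(double table[LEVELS], double a, double gamma)
{
    for (int k = 0; k < LEVELS; k++) {
        double intensity = (double)k / (LEVELS - 1);
        table[k] = pow(intensity / a, 1.0 / gamma);
    }
}

/* Example: build_gamma_table(table, 1.0, 2.2);  -- NTSC-style gamma of 2.2 */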
Halftoning is used when an output device has a limited intensity range, but we want to create an
apparent increase in the number of available intensities.
Example: The following shows an original picture and the display of it in output devices of limited
intensity ranges (4 colors, 8 colors, 16 colors):
Each r,g,b color has 4 phosphor dots in the pattern, which allows 5
possible settings per color. This gives a total of 125 different color
combinations.
Dithering
The above approach, however, needs a higher-resolution output device to display a picture in the same physical
dimensions. So, in reality, we have to refine the approach so that it does not require a higher resolution.
Dithering generally means approximating halftones without this requirement. Interested students may find
further discussion on dithering in many textbooks.
Below are two examples of dithering results, using 4 and 2 colors respectively.
6.4 Anti-Aliasing
On dealing with integer pixel positions, jagged or stairstep appearances occur very often. This
distortion of information due to undersampling is called aliasing. A number of antialiasing methods
have been developed to compensate for this problem.
One way is to display objects at a higher resolution. However, there is a limit to how big we can make
the frame buffer while still maintaining an acceptable refresh rate.
Other methods modify pixel intensities by varying them along the boundaries of primitives
=> smoothing the edges. These include supersampling, area sampling, and pixel phasing.
Supersampling
In supersampling, intensity information is obtained from multiple points that contribute to the
overall intensity of a pixel.
Pixel-Weighting Masks: Supersampling can be implemented by giving more weight to sub-pixels near
the center of a pixel area.
1 2 1
2 4 2
1 2 1
Filtering Technique: Similar to pixel-weighting. Instead of using the grid of weighting values, we
imagine a continuous weighting surface covering the pixel:
Area Sampling
In area sampling, we set each pixel intensity proportional to the area of overlap of the pixel.
Pixel Phasing
Move the electron beam to more nearly approximate positions (Micropositioning).
Diagonal lines normally appear less bright than horizontal lines. To compensate for this effect, the
intensity of each line can be adjusted according to its slope.
To smooth area outlines, we may adjust each pixel intensity at the boundary positions according to the
percent of pixel area that is inside the boundary.
Temporal Aliasing
To cope with the different requirements and characteristics of image files, most applications can handle
multiple coloring modes.
1. Bitmap mode
Uses one of two color values (black or white) to represent the pixels in an image. Images in Bitmap
mode are called bitmapped, or 1-bit, images because they have a bit depth of 1.
2. Grayscale mode
Uses up to 256 shades of gray. Every pixel of a grayscale image has a brightness value ranging from 0
(black) to 255 (white).
3. Indexed Color mode
Uses at most 256 colors. When converting a true-color image to indexed color, a color lookup table is
built to store and index the colors in the image. If a color in the original image does not appear in
the table, the software chooses the closest one or simulates the color using available colors.
By limiting the palette of colors, indexed color can reduce file size while maintaining visual quality.
4. Multichannel mode
File Compression
Many image file formats use compression techniques to reduce the storage space required by bitmap
image data. Compression techniques are distinguished by whether they remove detail and color from
the image. Lossless techniques compress image data without removing detail; lossy techniques
compress images by removing detail.
- Run Length Encoding (RLE) is lossless. This method scans the bitmap row by row. For each row,
it divides the line of pixels into run lengths according to the changes of color, then records, for each
color, how many pixels are to be painted (a short sketch follows this list).
- Joint Photographic Experts Group (JPEG) is lossy. It achieves data compression through sampling
techniques in the context of digital signal processing. It is best for continuous-tone images, such
as photographs. You can manipulate the compression parameters to choose between greater
compression or greater accuracy.
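A sketch of run-length encoding one row of 8-bit pixels as (count, value) pairs, as described for RLE above (the in-memory format and names are assumptions for illustration):

#include <stddef.h>

/* Encode one scan line of 8-bit pixel values as (count, value) pairs.
 * Returns the number of bytes written to out[], which must hold at
 * least 2 * width bytes for the worst case. */
size_t rle_encode_row(const unsigned char *row, size_t width, unsigned char *out)
{
    size_t n = 0;
    size_t i = 0;
    while (i < width) {
        unsigned char value = row[i];
        size_t run = 1;
        while (i + run < width && row[i + run] == value && run < 255)
            run++;                       /* extend the run while the colour stays the same */
        out[n++] = (unsigned char)run;   /* how many pixels to paint ... */
        out[n++] = value;                /* ... with this colour value   */
        i += run;
    }
    return n;
}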
Common File Formats
1. BMP
BMP is the standard Windows image format on DOS and
Windows-compatible computers. The BMP format supports RGB, indexed-color,
grayscale, and Bitmap color modes.
2. GIF
The Graphics Interchange Format (GIF) is the file format commonly used to
display indexed-color graphics and images. GIF uses a LZW-compressed format.
Transparent color is supported.
3. JPEG format
The Joint Photographic Experts Group (JPEG) format is commonly used to
display photographs and other continuous-tone images. The JPEG format
supports CMYK, RGB, and grayscale color modes.
4. PCX
The PCX format supports RGB, indexed-color, grayscale, and Bitmap color
modes. PCX supports the RLE compression method. Images can have a bit depth
of 1, 4, 8, or 24.
5. PDF
Portable Document Format (PDF) is used by Adobe Acrobat, Adobe’s electronic
publishing software. PDF files can represent both vector and bitmap graphics, and
can contain electronic document search and navigation features such as electronic
links.
6. Raw
The Raw format is a flexible file format for transferring files between
applications and computer platforms. Raw format consists of a stream of bytes
describing the color information in the file. Each pixel is described in binary
format, with 0 equalling black and 255 white (for images with 16-bit channels,
the white value is 65535).
7. TIFF
The Tagged-Image File Format (TIFF) is used to exchange files between
applications and computer platforms. TIFF is a flexible bitmap image format
supported by virtually all paint, image-editing, and page-layout applications.
Also, virtually all desktop scanners can produce TIFF images.
The TIFF format supports CMYK, RGB, and grayscale files with alpha channels,
and Lab, indexed-color, and Bitmap files without alpha channels. TIFF also
supports LZW compression.
Surface Shading
A shading model is used in computer graphics to simulate the effects of light shining
on a surface.
Types of illumination:
- Point source
- Distributed light source
The area-subdivision method takes advantage of area coherence in a scene by locating those view
areas that represent part of a single surface.
The total viewing area is successively divided into smaller and smaller rectangles until each small area
is simple, ie. it is a single pixel, or is covered wholly by a part of a single visible surface or no surface
at all.
The procedure to determine whether we should subdivide an area into smaller rectangles is:
1. We first classify each of the surfaces, according to their relations with the area:
Surrounding surface - a single surface completely encloses the area
Overlapping surface - a single surface that is partly inside and partly outside the
area Inside surface - a single surface that is completely inside the area
Outside surface - a single surface that is completely outside the area.
To improve the speed of classification, we can make use of the bounding rectangles of surfaces for
early confirmation or rejection that a surface belongs to a given type.
2. Check the results from step 1; if any of the following conditions is true, then no subdivision of this
area is needed:
(a) All surfaces are outside the area.
(b) Only one inside, overlapping or surrounding surface is in the area.
(c) A surrounding surface obscures all other surfaces within the area boundaries.
For cases (b) and (c), the color of the area can be determined from that single surface.
In these methods, octree nodes are projected onto the viewing surface
in a front-to-back order. Any surfaces toward the rear of the front
octants (0,1,2,3) or in the back octants (4,5,6,7) may be hidden by the
front surfaces.
The intensity of a pixel in an image is due to a ray of light that has been reflected from some object in
the scene and pierces through the centre of the pixel.
So, visibility of surfaces can be determined by tracing a ray of light from the centre of projection
(viewer's eye) to objects in the scene. (backward-tracing).
The ray-casting approach is an effective visibility-detection method for scenes with curved surfaces,
particularly spheres.
In order to produce a realistic image with various kinds of reflection, there are 3 common shading
methods which are mainly applied to polygons:
2. Gouraud Shading
- The intensity value is calculated once for each vertex
of a polygon.
- The intensity values for the inside of the polygon are
obtained by interpolating the vertex values.
- Eliminates the intensity discontinuity problem.
- Still does not model specular reflection correctly.
- The interpolation of color values can cause bright or dark intensity streaks, called Mach
bands, to appear on the surface.
3. Phong Shading
- Instead of interpolating the intensity values, the normal
vectors are interpolated between the vertices.
Shadow
7. Shadows can help to create realism. Without them, a cup on a table, for example, may look as if it
is floating in the air above the table.
8. By applying hidden-surface methods with the light-source position treated as the viewing position,
we can find which surface sections cannot be "seen" from the light source => these are the
shadow areas.
9. We usually display shadow areas with ambient-light intensity only.
Texture Mapping
Since it is still very difficult for the computer to generate realistic textures, a method called texture
mapping is developed in which a photograph of real texture is input into the computer and mapped
onto the object surface to create the texture for the object.
8. The texture pattern is defined in an MxN array of texels, or a texture map, indexed by (u,v) coordinates.
9. For each pixel in the display:
Map the 4 corners of the pixel back to the object surface (for curved surfaces, these 4 points
define a surface patch).
Map the surface patch onto the texture map (this mapping computes the source area in the
texture map).
The pixel value is then modified by a weighted sum of the texels' colors.
Bump Mapping
- Rather than modelling surface roughness within the geometric description of the object, the
appearance of roughness can be generated by perturbing the surface normals.
Fog Effect
- As an object is further away from the observer, the color of the object fades.
- Fog is a general term that describes similar forms of atmospheric effects. It can be used to
simulate haze, mist, smoke, or pollution.
The previously mentioned shading methods have a fundamental limitation that they do not model
light reflection / refraction well. The methods discussed below, ray tracing and radiosity, are able
to generate realistic light reflection and refraction behavior.
Ray tracing
- For each pixel on the image plane, a ray is projected from the center of projection through the
pixel into the scene.
- The first object that the ray intersects is determined.
- The point at which the ray hits the object (ie. the ray intersection point) is also determined. The
color value is then calculated according to the directions of the light sources to the surface normal.
- However, if there is another object located between a particular light source and the ray
intersection point, then the point is in shadow and the light contribution from that particular light
source is not considered.
- The ray is then reflected and projected from the object until it intersects with another object. (If
the surface is a transparent surface, the ray is refracted as well as reflected.)
- The point at which the reflected ray hits the second object is determined, and the color value is
again calculated in a similar way as for the first object.
- The reflected ray is then reflected again from the second object. This process will continue until
the color contribution of an intersected object is too small to be considered.
- In practical situations, we would specify the maximum number of reflections allowed for a pixel's ray,
to prevent spending too much processing time on a particular pixel.
- All the color values calculated from the intersected objects will be weighted by the attenuation
factors (which depend on the surface properties of the objects) and added up to produce a single
color value. This color value becomes the pixel value.
Because the ray-tracing method calculates the intensity value for each pixel independently, we can
consider specular reflection and refraction in the calculation. Hence the method can generate very
realistic images. The major problem of this method, however, is that it requires a lot of computations
and therefore this method is slow.
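A compact, self-contained sketch of this recursive process (one sphere, one directional light, grey intensities only, and hypothetical constants, to keep it short; a real ray tracer would loop over all pixels and all objects and also handle refraction and shadows):

#include <math.h>
#include <stdio.h>

typedef struct { double x, y, z; } Vec3;

static Vec3 vadd(Vec3 a, Vec3 b)      { return (Vec3){a.x+b.x, a.y+b.y, a.z+b.z}; }
static Vec3 vsub(Vec3 a, Vec3 b)      { return (Vec3){a.x-b.x, a.y-b.y, a.z-b.z}; }
static Vec3 vscale(Vec3 a, double k)  { return (Vec3){a.x*k, a.y*k, a.z*k}; }
static double vdot(Vec3 a, Vec3 b)    { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 vnorm(Vec3 a)             { return vscale(a, 1.0 / sqrt(vdot(a, a))); }

/* One sphere and one directional light keep the sketch short. */
static const Vec3   sphere_center = {0, 0, -5};
static const double sphere_radius = 1.0;
static const Vec3   light_dir     = {0.577, 0.577, 0.577};   /* towards the light */
static const double reflectivity  = 0.3;                     /* attenuation factor */

#define MAX_DEPTH 5     /* maximum number of reflections per pixel */

/* Smallest positive t with |origin + t*dir - center| = radius, or -1 if none. */
static double hit_sphere(Vec3 origin, Vec3 dir)
{
    Vec3 oc = vsub(origin, sphere_center);
    double b = 2.0 * vdot(oc, dir);
    double c = vdot(oc, oc) - sphere_radius * sphere_radius;
    double disc = b * b - 4.0 * c;
    if (disc < 0) return -1.0;
    double t = (-b - sqrt(disc)) / 2.0;
    return t > 1e-4 ? t : -1.0;
}

/* Follow one ray: diffuse shading plus a weighted, recursive reflection. */
static double trace(Vec3 origin, Vec3 dir, int depth)
{
    double t = hit_sphere(origin, dir);
    if (t < 0 || depth > MAX_DEPTH)
        return 0.1;                                   /* background / ambient level */

    Vec3 hit = vadd(origin, vscale(dir, t));
    Vec3 n   = vnorm(vsub(hit, sphere_center));
    double diffuse = fmax(0.0, vdot(n, light_dir));

    /* Reflect the ray about the surface normal and continue tracing. */
    Vec3 r = vsub(dir, vscale(n, 2.0 * vdot(dir, n)));
    return diffuse + reflectivity * trace(hit, vnorm(r), depth + 1);
}

int main(void)
{
    Vec3 eye = {0, 0, 0};
    Vec3 dir = vnorm((Vec3){0.1, 0.1, -1.0});         /* one ray through one pixel */
    printf("intensity = %.3f\n", trace(eye, dir, 0));
    return 0;
}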
To calculate the color of a pixel, consider the following diagram:
Radiosity
Ray-tracing can model specular reflection very well, but not diffuse reflection.
Why?
Recall that in diffuse reflection, although light is reflected with equal intensity in all directions,
the amount of light energy received by another surface depends on the orientation of the surface
relative to the source. Hence surfaces of different orientations may receive different amount of
light.
Consider the diagram, assuming that all 3 surfaces are matte. Although A reflects light with equal
intensity to B and C, B would receive more light energy than C because B has a smaller angle of
incidence (i.e. a higher cosine factor).
- In a closed environment such as a room, the rate at which energy leaves a surface, called its
radiosity, is the sum of the rates at which the surface emits energy and it reflects (or transmits)
energy from other surfaces.
- To simplify the calculation, all surfaces in the scene are broken into small patches, each of which
is assumed to be of finite size, emitting and reflecting light uniformly over its entire area.
- Once we have obtained a radiosity for each patch, we can render the scene with a scan-conversion
method using the calculated radiosities as the intensities of the patches.
- Note that the radiosities calculated are view-independent. Hence, although the radiosity method
can deal with diffuse reflection well, it cannot deal with specular reflection. (Specular reflection is
view-dependent.)
- To be able to deal with both, we may combine the radiosity method with the ray-tracing method.
The price is the added complexity of the algorithm and the increase in computational time. An
example is a 2-pass approach that includes a view-independent radiosity process executed in the
first pass, followed by a view-dependent ray-tracing approach in the second pass.
(Figure: left, radiosity only; right, diffuse first pass and ray-tracing second pass.)
1. Modelling Transformation:
In this stage, we transform objects in their local modelling coordinate
systems into a common coordinate system called the world coordinates.
Note that:
Perspective transformation is different from perspective projection:
Perspective projection projects a 3D object onto a 2D plane perspectively.
Perspective transformation converts a 3D object into a deformed 3D object.
After the transformation, the depth value of an object remains unchanged.
Before the perspective transformation, all the projection lines converge to
the center of projection. After the transformation, all the projection lines
are parallel to each other.
Perspective Projection = Perspective Transformation + Parallel Projection
3. Clipping:
In 3D clipping, we remove all objects and parts of objects which are
outside of the view volume.
Since we have done perspective transformation, the 6 clipping planes,
which form the parallelepiped, are parallel to the 3 axes and hence clipping
is straightforward.
Hence the clipping operation can be performed in 2D. For example, we
may first perform the clipping operations on the x-y plane and then on the
x-z plane.
Characteristics of approaches:
- Require large memory size?
- Require long processing time?
- Applicable to which types of objects?
Considerations:
- Complexity of the scene
- Type of objects in the scene
- Available equipment
- Static or animated?
- Making use of the results calculated for one part of the scene or image for other nearby parts.
- Coherence is the result of local similarity
- As objects have continuous spatial extent, object properties vary smoothly within a small local
region in the scene; calculations can then be made incrementally.
Types of coherence:
1. Object Coherence:
Visibility of an object can often be decided by examining a circumscribing solid (which may be of
simple form, eg. A sphere or a polyhedron.)
2. Face Coherence:
Surface properties computed for one part of a face can be applied to adjacent parts after small
incremental modification. (eg. If the face is small, we sometimes can assume if one part of the face is
invisible to the viewer, the entire face is also invisible).
3. Edge Coherence:
The visibility of an edge changes only when it crosses another edge, so if one segment of a non-
intersecting edge is visible, the entire edge is also visible.
6. Depth Coherence:
The depths of adjacent parts of the same surface are similar.
7. Frame Coherence:
Pictures of the same scene at successive points in time are likely to be similar, despite small changes
in objects and viewpoint, except near the edges of moving objects.
Most visible surface detection methods make use of one or more of these coherence properties of a
scene.
This lets them take advantage of regularities in a scene; for example, constant relationships can often
be established between objects and surfaces in a scene.
In a solid object, there are surfaces which are facing the viewer (front faces) and there are surfaces
which are opposite to the viewer (back faces).
These back faces account for approximately half of the total number of surfaces. Since we cannot see
them anyway, we can save processing time by removing them before the clipping process with a
simple test.
Each surface has a normal vector. If this vector is pointing in the direction of the center of projection,
it is a front face and can be seen by the viewer. If it is pointing away from the center of projection, it
is a back face and cannot be seen by the viewer.
The test is very simple: if the z component of the normal vector is positive, it is a back face; if the
z component is negative, it is a front face.
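A minimal sketch of this test, following the sign convention stated above (the triangle and helper names are illustrative only):

def surface_normal(p0, p1, p2):
    """Normal of the triangle (p0, p1, p2), from the cross product of two edges."""
    ux, uy, uz = (p1[i] - p0[i] for i in range(3))
    vx, vy, vz = (p2[i] - p0[i] for i in range(3))
    return (uy * vz - uz * vy,
            uz * vx - ux * vz,
            ux * vy - uy * vx)

def is_back_face(p0, p1, p2):
    nx, ny, nz = surface_normal(p0, p1, p2)
    return nz > 0          # positive z component -> back face, per the notes

# A triangle whose normal is (0, 0, 1) counts as a back face under this convention:
print(is_back_face((0, 0, 0), (1, 0, 0), (0, 1, 0)))   # True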
Note that this technique only works well for non-overlapping convex
polyhedra.
For other cases, where there are concave polyhedra or overlapping objects,
we still need to apply other methods to determine where faces are partially
or completely hidden by other objects (eg. using the Depth-Buffer Method
or the Depth-Sort Method).
Algorithm:
1. Initially each pixel of the z-buffer is set to the maximum depth value (the depth of the back
clipping plane).
2. The image buffer is set to the background color.
3. Surfaces are rendered one at a time.
4. For the first surface, the depth value of each pixel is calculated.
5. If this depth value is smaller than the corresponding depth value in the z-buffer (ie. it is closer to
the view point), both the depth value in the z-buffer and the color value in the image buffer are
replaced by the depth value and the color value of this surface calculated at the pixel position.
6. Repeat step 4 and 5 for the remaining surfaces.
7. After all the surfaces have been processed, each pixel of the image buffer represents the color of a
visible surface at that pixel.
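A minimal sketch of steps 1-7 above (purely illustrative: it assumes each surface can report the pixels it covers together with a depth and a color at each pixel, an interface that is not part of the notes):

WIDTH, HEIGHT = 640, 480
MAX_DEPTH = float('inf')          # stands in for the back clipping plane depth
BACKGROUND = (0, 0, 0)

def render(surfaces):
    z_buffer = [[MAX_DEPTH] * WIDTH for _ in range(HEIGHT)]    # step 1
    image    = [[BACKGROUND] * WIDTH for _ in range(HEIGHT)]   # step 2
    for surface in surfaces:                                   # steps 3 and 6
        for x, y, depth, color in surface.covered_pixels():    # step 4
            if depth < z_buffer[y][x]:                         # step 5: closer?
                z_buffer[y][x] = depth
                image[y][x] = color
    return image                                               # step 7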
- This method requires an additional buffer (compared with the Depth-Sort Method) and incurs the
overhead of updating that buffer, so it is less attractive when only a few objects in the scene are to
be rendered.
- Simple and does not require additional data structures.
- The z-value of a polygon can be calculated incrementally.
- No pre-sorting of polygons is needed.
- No object-object comparison is required.
- Can be applied to non-polygonal objects.
- Hardware implementations of the algorithm are available in some graphics workstations.
- For large images, the algorithm could be applied to, eg., the 4 quadrants of the image separately,
so as to reduce the requirement of a large additional buffer.
In this method, as each scan line is processed, all polygon surfaces intersecting that line are examined
to determine which are visible. Across each scan line, depth calculations are made for each
overlapping surface to determine which is nearest to the view plane. When the visible surface has
been determined, the intensity value for that position is entered into the image buffer.
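A deliberately simplified sketch of this idea for a single scan line (the span interface below is made up for illustration, and the full method described in these notes uses edge and surface tables rather than a per-pixel loop):

# Each "surface" is assumed to provide
# spans_on_line(y) -> iterable of (x_left, x_right, depth_at, color),
# where depth_at(x) gives the surface's depth at pixel (x, y).

def process_scan_line(y, surfaces, width, background=(0, 0, 0)):
    depth = [float('inf')] * width
    line = [background] * width
    for surface in surfaces:
        for x_left, x_right, depth_at, color in surface.spans_on_line(y):
            for x in range(x_left, x_right + 1):
                d = depth_at(x)          # depth of this surface at (x, y)
                if d < depth[x]:         # the nearest surface wins the pixel
                    depth[x] = d
                    line[x] = color
    return line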
- Step 2 is not efficient because not all polygons necessarily intersect with the scan line.
- Depth calculation in 2a is not needed if only 1 polygon in the scene is mapped onto a segment of
the scan line.
- To speed up the process:
Recall the basic idea of polygon filling: For each scan line crossing a polygon,
this algorithm locates the intersection points of the scan line with the polygon
edges. These intersection points are sorted from left to right. Then, we fill the
pixels between each intersection pair.
With a similar idea, we fill every scan line span by span. When polygons overlap on a scan line,
we perform depth calculations at their edges to determine which polygon should be visible in
which span.
Any number of overlapping polygon surfaces can be processed with this method. Depth
calculations are performed only when there are polygons overlapping.
We can take advantage of coherence along the scan lines as we pass from one scan line to the
next: if there is no change in the pattern of intersections of polygon edges with successive scan
lines, it is not necessary to repeat the depth calculations.
This works only if surfaces do not cut through or otherwise cyclically overlap each other. If
cyclic overlap happens, we can divide the surfaces to eliminate the overlaps.
- The algorithm is applicable to non-polygonal surfaces (use of surface and active surface table, z-
value is computed from surface representation).
- Memory requirement is less than that for depth-buffer method.
- A lot of sorting is done on x-y coordinates and on depths.
1. Sort all surfaces according to their distances from the view point.
2. Render the surfaces to the image buffer one at a time starting from the farthest surface.
3. Surfaces close to the view point will replace those which are far away.
4. After all surfaces have been processed, the image buffer stores the final image.
The basic idea of this method is simple. When there are only a few objects in the scene, this method
can be very fast. However, as the number of objects increases, the sorting process can become very
complex and time consuming.
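A minimal sketch of steps 1-4 above (it assumes each surface has a single representative depth and can paint itself into the image buffer, which are simplifying assumptions; the extra tests needed when surfaces overlap in depth are omitted):

def depth_sort_render(surfaces, image):
    # Step 1: sort by distance from the view point, farthest first.
    for surface in sorted(surfaces, key=lambda s: s.depth, reverse=True):
        # Steps 2-3: nearer surfaces are painted later and overwrite farther ones.
        surface.paint(image)
    # Step 4: the image buffer now holds the final picture.
    return image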
Discussion:
- Back face removal is achieved by not displaying a polygon if the viewer is located in its back
half-space
- It is an object space algorithm (sorting and intersection calculations are done in object space
precision)
- If the view point changes, the BSP needs only minor re-arrangement.
- A new BSP tree is built if the scene changes
- The algorithm displays polygon back to front (cf. Depth-sort)
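A minimal sketch of such a back-to-front display traversal of a BSP tree (the node fields, the plane representation (a, b, c, d) and the draw callback are assumptions for illustration, not taken from the notes):

def side_of(plane, point):
    """Signed side of 'point' with respect to the plane a*x + b*y + c*z + d = 0."""
    a, b, c, d = plane
    x, y, z = point
    return a * x + b * y + c * z + d

def display_bsp(node, viewer, draw):
    """Display the polygons stored in a BSP tree back to front, relative to 'viewer'."""
    if node is None:
        return
    if side_of(node.plane, viewer) > 0:          # viewer is in the front half-space
        display_bsp(node.back, viewer, draw)     # draw the far side first,
        draw(node.polygons)                      # then the polygons on the plane,
        display_bsp(node.front, viewer, draw)    # then the near side
    else:                                        # viewer is in the back half-space
        display_bsp(node.front, viewer, draw)
        draw(node.polygons)
        display_bsp(node.back, viewer, draw)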