Computer Graphics Complete
Simulation and animation: - The use of graphics in simulation makes mathematical models and mechanical systems more realistic and easier to study.
Art and commerce: - Graphics provides many tools that allow users to make their pictures animated and attractive, which is widely used in advertising.
Process control: - Nowadays automated processes are common, and their status is displayed graphically on the screen.
Cartography: - Computer graphics is also used to represent geographic maps, weather maps,
oceanographic charts etc.
Education and training: - Computer graphics can be used to generate models of physical, financial and
economic systems. These models can be used as educational aids.
Image processing: - It is used to process images by changing properties of the image.
Display devices
Display devices are also known as output devices.
Most commonly used output device in a graphics system is a video monitor.
Cathode-ray-tubes
Fig. 1.2: - Architecture of a vector display (CPU, I/O port, display buffer memory, display controller, CRT, keyboard and mouse).
A vector scan display directly traces out only the desired lines on the CRT.
If we want a line between points P1 and P2, we directly drive the beam deflection circuitry, which moves the beam from P1 to P2.
If we just want to move to P2 without displaying a line, we can blank the beam as we move it.
To move the beam across the CRT, information about both magnitude and direction is required. This information is generated with the help of the vector graphics generator.
Fig. 1.2 shows architecture of vector display. It consists of display controller, CPU, display buffer memory
and CRT.
Display controller is connected as an I/O peripheral to the CPU.
Display buffer stores computer produced display list or display program.
The Program contains point & line plotting commands with end point co-ordinates as well as character
plotting commands.
The display controller interprets the commands and sends digital point coordinates to a vector generator.
The vector generator then converts the digital coordinate values to analog voltages for the beam deflection circuits, which deflect the electron beam to the corresponding points on the CRT screen.
In this technique the beam is deflected from end point to end point, hence this technique is also called random scan.
As the beam strikes the phosphor-coated screen it emits light, but that light decays after a few milliseconds; therefore it is necessary to cycle through the display list and refresh the screen at least 30 times per second to avoid flicker.
As the display buffer is used to store the display list and is used for refreshing, it is also called the refresh buffer.
Fig. 1.3: - Architecture of a raster display: CPU, I/O port, display controller, refresh buffer (storing the image as 0s and 1s), video controller, CRT, keyboard and mouse.
Fig. 1.3 shows the architecture of Raster display. It consists of display controller, CPU, video controller,
refresh buffer, keyboard, mouse and CRT.
The display image is stored in the form of 1’s and 0’s in the refresh buffer.
The video controller reads this refresh buffer and produces the actual image on screen.
It will scan one line at a time from top to bottom & then back to the top.
Fig. 1.4: - Raster scan pattern with horizontal and vertical retrace (beam ON moving left to right, OFF during retrace).
In this method the horizontal and vertical deflection signals are generated to move the beam all over the
screen in a pattern shown in fig. 1.4.
Here beam is swept back & forth from left to the right.
When beam is moved from left to right it is ON.
Difference between raster scan and random scan:

Electron Beam: In raster scan, the electron beam is swept across the screen, one row at a time, from top to bottom. In random scan, the electron beam is directed only to the parts of the screen where a picture is to be drawn.

Resolution: Raster scan resolution is poor because the raster system produces zigzag lines that are plotted as discrete point sets. Random scan resolution is good because this system produces smooth line drawings, as the CRT beam directly follows the line path.

Picture Definition: In raster scan, picture definition is stored as a set of intensity values for all screen points, called pixels, in a refresh buffer area. In random scan, picture definition is stored as a set of line-drawing instructions in a display file.

Realistic Display: The capability of the raster system to store intensity values for pixels makes it well suited for the realistic display of scenes containing shadow and color patterns. Random scan systems are designed for line drawing and cannot display realistic shaded scenes.

Draw an Image: Raster scan uses screen points/pixels to draw an image. Random scan uses mathematical functions to draw an image.
Beam-penetration technique
Shadow-mask technique
Advantage of DVST
Refreshing of CRT is not required.
Very complex pictures can be displayed at very high resolution without flicker.
Flat screen.
Disadvantage of DVST
They do not display color and are available with only a single level of line intensity.
For erasing, it is necessary to remove the charge on the storage grid, so the erasing and redrawing process takes several seconds.
Erasing a selected part of the screen is not possible.
They cannot be used for dynamic graphics applications, as erasing produces an unpleasant flash over the entire screen.
It has poor contrast as a result of the comparatively low accelerating potential applied to the flood
electrons.
The performance of DVST is somewhat inferior to the refresh CRT.
It is similar to the plasma panel display, but the region between the glass plates is filled with a phosphor, such as zinc sulphide doped with manganese, instead of gas.
When sufficient voltage is applied, the phosphor becomes a conductor in the area of intersection of the two electrodes.
Electrical energy is then absorbed by the manganese atoms, which release the energy as a spot of light, similar to the glowing plasma effect in a plasma panel.
It requires more power than a plasma panel.
Good color and gray scale are difficult to achieve with this display.
Fig. 1.10: - Light twisting shutter effect used in design of most LCD.
It is generally used in small systems such as calculators and portable laptops.
This non-emissive device produces a picture by passing polarized light from the surroundings or from an internal light source through a liquid-crystal material that can be aligned to either block or transmit the light.
The term liquid crystal refers to the fact that these compounds have a crystalline arrangement of molecules, yet they flow like a liquid.
It consists of two glass plates, each with a light polarizer at right angles to the other, sandwiching the liquid-crystal material between the plates.
Rows of horizontal transparent conductors are built into one glass plate, and columns of vertical conductors are put into the other plate.
The intersection of two conductors defines a pixel position.
In the ON state, polarized light passing through the material is twisted so that it will pass through the opposite polarizer.
In the OFF state, it is reflected back towards the source.
We apply a voltage to the two intersecting conductors to align the molecules so that the light is not twisted.
This type of flat panel device is referred to as a passive matrix LCD.
In active matrix LCD transistors are used at each (x, y) grid point.
A vibrating mirror changes its focal length due to vibration, which is synchronized with the display of an object on the CRT.
Each point on the object is reflected from the mirror into a spatial position corresponding to the distance of that point from the viewing position.
A very good example of this system is the GENISCO SPACE GRAPH system, which uses a vibrating mirror to project 3D objects into a 25 cm by 25 cm by 25 cm volume. This system is also capable of showing 2D cross sections at different depths.
Stereoscopic system
Stereoscopic views do not produce truly three-dimensional images; rather, they produce a 3D effect by presenting a different view to each eye of an observer so that the scene appears to have depth.
To obtain this we first need two views of the object, generated from viewing directions corresponding to each eye.
We can construct the two views as computer-generated scenes with different viewing positions, or we can use a stereo camera pair to photograph some object or scene.
When we see both views simultaneously, the left view with the left eye and the right view with the right eye, the two views merge into a single image that appears to have depth.
One way to produce a stereoscopic effect is to display each of the two views on a raster system on alternate refresh cycles.
The screen is viewed through glasses, with each lens designed in such a way that it acts as a rapidly alternating shutter synchronized to block out one of the views.
Virtual-reality
Virtual reality is a system which produces images in such a way that we feel our surroundings are what is presented on the display device, even though in reality they are not.
In virtual reality user can step into a scene and interact with the environment.
Raster graphics systems have an additional processing unit, such as a video controller or display controller.
Here the frame buffer can be anywhere in system memory, and the video controller accesses it to refresh the screen.
In sophisticated raster systems, other processors are used as co-processors in addition to the video controller to accelerate the system.
Raster graphics system with a fixed portion of the system memory reserved for the frame buffer

Fig. 1.15: - Architecture of a raster graphics system with a fixed portion of the system memory reserved for the frame buffer.
Fig.: - Basic video-controller refresh operations: an X register and a Y register address pixel positions in the frame buffer.
Two registers, X and Y, are used to store the coordinates of the screen pixels.
Initially X is set to 0 and Y is set to Ymax.
The value stored in the frame buffer for this pixel position is retrieved and used to set the intensity of the CRT beam.
After this the X register is incremented by one.
This procedure is repeated until X becomes equal to Xmax.
Then X is reset to 0, Y is decremented by one, and the above procedure is repeated for the next scan line.
This whole procedure is repeated until Y becomes equal to 0, which completes one refresh cycle. Then the controller resets the registers to the top-left corner, i.e. X = 0 and Y = Ymax, and the refresh process starts over for the next refresh cycle.
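The loop structure just described can be sketched in C. This is a minimal illustration only; the frame-buffer array and the setBeamIntensity() routine are illustrative assumptions, not part of any real controller interface.

/* A minimal sketch of one refresh cycle, assuming a frame buffer stored
   as a 2D array and a hypothetical setBeamIntensity() routine. */
#define XMAX 639
#define YMAX 479

int frameBuffer[YMAX + 1][XMAX + 1];               /* one intensity value per pixel */

void setBeamIntensity(int x, int y, int value) { } /* drives the CRT beam (stub)    */

void refreshCycle(void)
{
    for (int y = YMAX; y >= 0; y--) {              /* start at the top scan line    */
        for (int x = 0; x <= XMAX; x++) {          /* sweep left to right           */
            /* retrieve the stored value and set the beam intensity */
            setBeamIntensity(x, y, frameBuffer[y][x]);
        }
    }
    /* registers are now reset to X = 0, Y = YMAX for the next cycle */
}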
Since the screen must be refreshed at a rate of 60 frames per second, the simple procedure illustrated above cannot be accommodated by typical RAM chips.
To speed up pixel processing, the video controller retrieves multiple pixel values at a time into a group of registers and refreshes a block of pixels simultaneously.
In this way it can accommodate refresh rates of 60 frames per second or more.
An application program is input & stored in the system memory along with a graphics package.
Graphics commands in the application program are translated by the graphics package into a display file
stored in the system memory.
This display file is used by display processor to refresh the screen.
The display processor goes through each command in the display file once during every refresh cycle.
Sometimes the display processor in random scan system is also known as display processing unit or a
graphics controller.
In this system pictures are drawn on a random scan system by directing the electron beam along the component lines of the picture.
Lines are defined by coordinate end points.
These input coordinate values are converted to X and Y deflection voltages.
A scene is then drawn one line at a time.
Keyboards
Keyboards are used for entering text strings. They are efficient devices for inputting non-graphic data such as picture labels.
Cursor-control keys and function keys are common features on general-purpose keyboards.
In everyday use of computer graphics, the keyboard also serves many other purposes, such as issuing commands and controlling applications.
Mouse
A mouse is a small hand-held box used to position the screen cursor.
A wheel, roller, or optical sensor directs the pointer according to the movement of the mouse.
Three buttons are placed on the top of the mouse for signaling the execution of some operation.
Nowadays more advanced mice are available which are very useful in graphics applications, for example the Z mouse.
Joysticks
A joystick consists of a small vertical lever mounted on a base that is used to steer the screen cursor around.
Most joysticks select screen positions according to the actual movement of the stick (lever).
Some joysticks work based on the pressure applied to the stick.
Sometimes a joystick is mounted on a keyboard, and sometimes it is used alone.
Movement of the stick defines the movement of the cursor.
In a pressure-sensitive stick, the pressure applied to the stick decides the movement of the cursor. This pressure is measured using a strain gauge.
These pressure-sensitive joysticks are also called isometric joysticks, and they are non-movable sticks.
Data glove
A data glove is used to grasp virtual objects.
The glove is constructed with a series of sensors that detect hand and finger motions.
Electromagnetic coupling between transmitter and receiver antennas is used to provide the position and orientation of the hand.
The transmitter and receiver antennas can each be structured as a set of three mutually perpendicular coils, forming a 3D Cartesian coordinate system.
Input from the glove can be used to position or manipulate objects in a virtual scene.
Digitizer
A digitizer is a common device for drawing, painting, or interactively selecting coordinate positions on an object.
One type of digitizer is the graphics tablet, which inputs two-dimensional coordinates by activating a hand cursor or stylus at selected positions on a flat surface.
A stylus is a pencil-shaped device that is pointed at positions on the tablet.
Image Scanner
An image scanner scans drawings, graphs, color or black-and-white photos, or text, which can be stored for computer processing by passing an optical scanning mechanism over the information to be stored.
Once we have an internal representation of a picture we can apply transformations to it.
We can also apply various image processing methods to modify the picture.
For scanned text we can apply modification operations.
Touch Panels
As the name suggests, touch panels allow displayed objects or screen positions to be selected with the touch of a finger.
A typical application is selecting processing options shown as graphical icons.
Some systems, such as plasma panels, are designed with a touch screen.
Other systems can be adapted for touch input by fitting a transparent touch-sensing mechanism over the screen.
Touch input can be recorded with following methods.
1. Optical methods
2. Electrical methods
3. Acoustical methods
Optical method
An optical touch panel employs a line of infrared LEDs along one vertical and one horizontal edge.
The opposite edges contain light detectors.
When we touch a particular position, the light paths through it are broken, and the coordinate values are measured from the broken paths.
If two light paths are broken, the average of the two positions is taken.
The LEDs operate at infrared frequencies, so the light is not visible to the user.
Electrical method
An electrical touch panel is constructed with two transparent plates separated by a small distance.
One plate is coated with a conducting material and the other with a resistive material.
When the outer plate is touched, it comes into contact with the inner plate.
The contact creates a voltage drop across the resistive plate, which is converted into the coordinate values of the selected position.
Acoustical method
In an acoustical touch panel, high-frequency sound waves are generated in the horizontal and vertical directions across a glass plate.
When we touch the screen, the waves along those lines are reflected from the finger.
The reflected waves reach the transmitter position again, and the time difference between sending and receiving is measured and converted into coordinate values.
Light pens
Light pens are pencil-shaped devices used to select positions by detecting the light coming from points on the CRT screen.
An activated light pen, pointed at a spot on the screen as the electron beam lights up that spot, generates an electronic pulse that causes the coordinate position of the electron beam to be recorded.
Voice systems
Voice systems are used to accept voice commands in some graphics workstations.
They are used to initiate graphics operations.
The system matches the input against a predefined dictionary of words and phrases.
The dictionary is set up for a particular operator by recording his or her voice.
Each word is spoken several times; the system then analyzes the word and establishes a frequency pattern for that word, along with the corresponding function to be performed.
When the operator speaks a command, the system matches it against the predefined dictionary and performs the desired action.
A general programming package provides an extensive set of graphics functions that can be used in a high-level programming language such as C or FORTRAN.
It includes basic drawing elements and shapes such as lines, curves, and polygons, as well as color settings and transformations of elements.
Example: - GL (Graphics Library).
Special-purpose application packages are customized for a particular application; they implement the required facilities and provide an interface so that users need not worry about how things work internally (programming). Users can simply work through the application's interface.
Example: - CAD, medical and business systems.
Coordinate representations
With few exceptions, general packages are designed to be used with Cartesian coordinate specifications.
If the coordinate values for a picture are specified in some other reference frame, they must be converted to Cartesian coordinates before being given as input to the graphics package.
Special-purpose packages may allow the use of other coordinate systems that suit the application.
In general several different Cartesian reference frames are used to construct and display scene.
We can construct the shape of an object with a separate coordinate system called modeling coordinates, sometimes also called local coordinates or master coordinates.
Once individual object shapes have been specified, we can place the objects into appropriate positions within the scene using a reference frame called world coordinates.
Finally, the world-coordinate description of the scene is transferred to one or more output-device reference frames for display. These display coordinate systems are referred to as device coordinates or screen coordinates.
Generally a graphics system first converts world-coordinate positions to normalized device coordinates, in the range from 0 to 1, before the final conversion to specific device coordinates.
An initial modeling-coordinate position (Xmc, Ymc) is thus transferred to a device-coordinate position (Xdc, Ydc) through the sequence (Xmc, Ymc) → (Xwc, Ywc) → (Xnc, Ync) → (Xdc, Ydc).
Graphic Function
A general-purpose graphics package provides the user with a variety of functions for creating and manipulating pictures.
The basic building blocks for pictures are referred to as output primitives. They include character strings and geometric entities such as points, straight lines, curved lines, filled areas, and shapes defined with arrays of color points.
Input functions are used to control and process the various input devices such as mouse, tablet, etc.
Control operations are used for controlling and housekeeping tasks such as clearing the display screen.
All such built-in functions which we can use for our purposes are known as graphics functions.
This system (GKS) was adopted as the first graphics software standard by the International Standards Organization (ISO) and various national standards organizations, including ANSI.
GKS was originally designed as a two-dimensional graphics package; an extension was later developed for three dimensions.
PHIGS is an extension of GKS. Increased capabilities for object modeling, color specification, surface rendering, and picture manipulation are provided in PHIGS.
An extension of PHIGS, called PHIGS+, was developed to provide three-dimensional surface-shading capabilities not available in PHIGS.
Fig. 2.1: - Stair step effect produced when line is generated as a series of pixel positions.
The stair-step shape is noticeable in low-resolution systems, and we can improve its appearance somewhat by displaying the lines on high-resolution systems.
More effective techniques for smoothing raster lines are based on adjusting pixel intensities along the line paths.
For the raster device-level algorithms discussed here, object positions are specified directly in integer device coordinates.
Pixel positions are referenced by scan-line number and column number, as illustrated in the following figure.
Fig. 2.2: - Pixel positions referenced by scan-line number and column number.
To load the specified color into the frame buffer at a particular position, we will assume we have
available low-level procedure of the form 𝑠𝑒𝑡𝑝𝑖𝑥𝑒𝑙(𝑥, 𝑦).
DDA Algorithm
The digital differential analyzer (DDA) is a scan-conversion line-drawing algorithm based on calculating either ∆y or ∆x from the line's slope equation.
We sample the line at unit intervals in one coordinate and find corresponding integer values nearest the
line path for the other coordinate.
Consider first a line with positive slope and slope is less than or equal to 1:
We sample at unit x intervals (∆x = 1) and calculate each successive y value as follows:
y = m ∙ x + b
y1 = m ∙ (x + 1) + b
In general, yk = m ∙ (x + k) + b, and
yk+1 = m ∙ (x + k + 1) + b
Now write this equation in the form:
yk+1 − yk = (m ∙ (x + k + 1) + b) − (m ∙ (x + k) + b)
yk+1 = yk + m
This can be computed quickly, since addition is faster than multiplication on a computer.
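A sketch of the DDA algorithm in C follows; it handles all slopes by sampling along the axis of greater extent. setpixel() is the low-level frame-buffer procedure assumed earlier in the text.

#include <stdlib.h>

void setpixel(int x, int y);                   /* assumed low-level procedure */

void lineDDA(int xa, int ya, int xb, int yb)
{
    int dx = xb - xa, dy = yb - ya, steps, k;
    float xIncrement, yIncrement, x = (float)xa, y = (float)ya;

    /* sample along the axis with the larger extent */
    if (abs(dx) > abs(dy))
        steps = abs(dx);
    else
        steps = abs(dy);

    if (steps == 0) { setpixel(xa, ya); return; }  /* degenerate: single point */

    xIncrement = dx / (float)steps;            /* either ±1 or ±1/m */
    yIncrement = dy / (float)steps;            /* either ±m or ±1   */

    setpixel((int)(x + 0.5f), (int)(y + 0.5f));
    for (k = 0; k < steps; k++) {
        x += xIncrement;                       /* y(k+1) = y(k) + m when |m| <= 1 */
        y += yIncrement;
        setpixel((int)(x + 0.5f), (int)(y + 0.5f));
    }
}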
Fig. 2.4: - Section of a display screen where a straight line segment is to be plotted, starting from the pixel at column 10 on scan line 11.
Fig. 2.5: - Section of a display screen where a negative-slope line segment is to be plotted, starting from the pixel at column 50 on scan line 50.
The vertical axes show scan-line positions and the horizontal axes identify pixel column.
Sampling at unit 𝑥 intervals in these examples, we need to decide which of two possible pixel position is
closer to the line path at each sample step.
To illustrate bresenham’s approach, we first consider the scan-conversion process for lines with positive
slope less than 1.
Pixel positions along a line path are then determined by sampling at unit 𝑥 intervals.
Starting from left endpoint (𝑥0 , 𝑦0 ) of a given line, we step to each successive column and plot the pixel
whose scan-line 𝑦 values is closest to the line path.
Assuming we have determined that the pixel at (𝑥𝑘 , 𝑦𝑘 ) is to be displayed, we next need to decide which
pixel to plot in column 𝑥𝑘 + 1.
Our choices are the pixels at positions (𝑥𝑘 + 1, 𝑦𝑘 ) and (𝑥𝑘 + 1, 𝑦𝑘 + 1).
Let's see the mathematical calculation used to decide which pixel position to light up.
We know that equation of line is:
𝑦 = 𝑚𝑥 + 𝑏
Now for position 𝑥𝑘 + 1.
𝑦 = 𝑚(𝑥𝑘 + 1) + 𝑏
Now calculate the distance between the actual line's y value and the lower pixel as d1, and the distance between the actual line's y value and the upper pixel as d2.
𝑑1 = 𝑦 − 𝑦𝑘
d1 = m(xk + 1) + b − yk ……………………………………………………………………………………………………………..……...(1)
𝑑2 = (𝑦𝑘 + 1) − 𝑦
𝑑2 = (𝑦𝑘 + 1) − 𝑚(𝑥𝑘 + 1) − 𝑏..…………………………………………………………………………………………………………(2)
Now calculate 𝑑1 − 𝑑2 from equation (1) and (2).
𝑑1 − 𝑑2 = (𝑦 – 𝑦𝑘 ) – ((𝑦𝑘 + 1) – 𝑦)
𝑑1 − 𝑑2 = {𝑚(𝑥𝑘 + 1) + 𝑏 − 𝑦𝑘 } − {(𝑦𝑘 + 1) − 𝑚(𝑥𝑘 + 1) − 𝑏}
𝑑1 − 𝑑2 = {𝑚𝑥𝑘 + 𝑚 + 𝑏 − 𝑦𝑘 } − {𝑦𝑘 + 1 − 𝑚𝑥𝑘 − 𝑚 − 𝑏}
𝑑1 − 𝑑2 = 2𝑚(𝑥𝑘 + 1) − 2𝑦𝑘 + 2𝑏 − 1……………………………………………………………………………….……………..(3)
Now substitute 𝑚 = ∆𝑦/∆𝑥 in equation (3)
d1 − d2 = 2(∆y⁄∆x)(xk + 1) − 2yk + 2b − 1 ….………………………………….………………………………………………….(4)
Now we have decision parameter 𝑝𝑘 for 𝑘 𝑡ℎ step in the line algorithm is given by:
𝑝𝑘 = ∆𝑥(𝑑1 − 𝑑2 )
𝑝𝑘 = ∆𝑥(2∆𝑦/∆𝑥(𝑥𝑘 + 1) – 2𝑦𝑘 + 2𝑏 – 1)
𝑝𝑘 = 2∆𝑦𝑥𝑘 + 2∆𝑦 − 2∆𝑥𝑦𝑘 + 2∆𝑥𝑏 − ∆𝑥
𝑝𝑘 = 2∆𝑦𝑥𝑘 − 2∆𝑥𝑦𝑘 + 2∆𝑦 + 2∆𝑥𝑏 − ∆𝑥 ……………………………………………………….………………………(5)
𝑝𝑘 = 2∆𝑦𝑥𝑘 − 2∆𝑥𝑦𝑘 + 𝐶 (𝑊ℎ𝑒𝑟𝑒 𝐶𝑜𝑛𝑠𝑡𝑎𝑛𝑡 𝐶 = 2∆𝑦 + 2∆𝑥𝑏 − ∆𝑥)…………………….……………...(6)
The sign of 𝑝𝑘 is the same as the sign of 𝑑1 − 𝑑2 , since ∆𝑥 > 0 for our example.
Parameter 𝑐 is constant which is independent of pixel position and will eliminate in the recursive
calculation for 𝑝𝑘 .
Now if 𝑝𝑘 is negative then we plot the lower pixel otherwise we plot the upper pixel.
So successive decision parameters using incremental integer calculation as:
𝑝𝑘+1 = 2∆𝑦𝑥𝑘+1 − 2∆𝑥𝑦𝑘+1 + C
Now Subtract 𝑝𝑘 from 𝑝𝑘+1
pk+1 − pk = 2∆y∙xk+1 − 2∆x∙yk+1 + C − 2∆y∙xk + 2∆x∙yk − C
pk+1 − pk = 2∆y(xk+1 − xk) − 2∆x(yk+1 − yk)
But 𝑥𝑘+1 = 𝑥𝑘 + 1, so that (𝑥𝑘+1 − 𝑥𝑘 ) = 1
𝑝𝑘+1 = 𝑝𝑘 + 2∆𝑦 − 2∆𝑥(𝑦𝑘+1 − 𝑦𝑘 )
Where the terms 𝑦𝑘+1 − 𝑦𝑘 is either 0 or 1, depends on the sign of parameter 𝑝𝑘 .
This recursive calculation of decision parameters is performed at each integer 𝑥 position starting at the
left coordinate endpoint of the line.
The first decision parameter 𝑝0 is calculated using equation (5) as first time we need to take constant
part into account so:
𝑝𝑘 = 2∆𝑦𝑥𝑘 − 2∆𝑥𝑦𝑘 + 2∆𝑦 + 2∆𝑥𝑏 − ∆𝑥
𝑝0 = 2∆𝑦𝑥0 − 2∆𝑥𝑦0 + 2∆𝑦 + 2∆𝑥𝑏 − ∆𝑥
Now 𝑆𝑢𝑏𝑠𝑡𝑖𝑡𝑢𝑡𝑒 𝑏 = 𝑦0 – 𝑚𝑥0
𝑝0 = 2∆𝑦𝑥0 − 2∆𝑥𝑦0 + 2∆𝑦 + 2∆𝑥(𝑦0 − 𝑚𝑥0 ) − ∆x
Now Substitute 𝑚 = ∆𝑦/𝛥𝑥
𝑝0 = 2∆𝑦𝑥0 − 2∆𝑥𝑦0 + 2∆𝑦 + 2∆𝑥(𝑦0 − (∆𝑦/∆𝑥)𝑥0 ) − ∆x
𝑝0 = 2∆𝑦𝑥0 − 2∆𝑥𝑦0 + 2∆𝑦 + 2∆𝑥𝑦0 − 2∆𝑦𝑥0 − ∆x
𝑝0 = 2∆𝑦 − ∆x
Let's see Bresenham's line drawing algorithm for |m| < 1:
1. Input the two line endpoints and store the left endpoint in (x0, y0).
2. Load (x0, y0) into the frame buffer; that is, plot the first point.
3. Calculate constants ∆x, ∆y, 2∆y, and 2∆y − 2∆x, and obtain the starting value for the decision parameter as p0 = 2∆y − ∆x.
4. At each xk along the line, starting at k = 0, perform the following test: if pk < 0, the next point to plot is (xk + 1, yk) and pk+1 = pk + 2∆y; otherwise, the next point to plot is (xk + 1, yk + 1) and pk+1 = pk + 2∆y − 2∆x.
5. Repeat step 4 ∆x − 1 times.
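These steps translate directly into C. The sketch below covers the positive-slope case with |m| < 1 that the derivation assumes; setpixel() is again the assumed low-level procedure.

#include <stdlib.h>

void setpixel(int x, int y);                   /* assumed low-level procedure */

/* Bresenham's algorithm for lines with 0 < m < 1 */
void lineBres(int x0, int y0, int xEnd, int yEnd)
{
    int dx = abs(xEnd - x0), dy = abs(yEnd - y0);
    int p = 2 * dy - dx;                       /* p0 = 2∆y − ∆x */
    int twoDy = 2 * dy, twoDyMinusDx = 2 * (dy - dx);
    int x, y;

    if (x0 > xEnd) {                           /* store the left endpoint in (x, y) */
        x = xEnd; y = yEnd; xEnd = x0;
    } else {
        x = x0; y = y0;
    }

    setpixel(x, y);
    while (x < xEnd) {
        x++;
        if (p < 0)
            p += twoDy;                        /* lower pixel: plot (x, y)     */
        else {
            y++;                               /* upper pixel: plot (x, y + 1) */
            p += twoDyMinusDx;
        }
        setpixel(x, y);
    }
}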
Fig. 2.6: - Bounding box for a line with coordinate extents ∆x and ∆y.
Another way to set up parallel algorithms on raster systems is to assign each processor to a particular group of screen pixels.
With a sufficient number of processors, we can assign each processor to one pixel within some screen region.
This approach can be adapted to line display by assigning one processor to each of the pixels within the limits of the bounding rectangle and calculating pixel distances from the line path.
The number of pixels within the bounding rectangle of a line is ∆x × ∆y.
Perpendicular distance 𝑑 from line to a particular pixel is calculated by:
𝑑 = 𝐴𝑥 + 𝐵𝑦 + 𝐶
Where
𝐴 = −∆𝑦/𝑙𝑖𝑛𝑒𝑙𝑒𝑛𝑔𝑡ℎ
𝐵 = −∆𝑥/𝑙𝑖𝑛𝑒𝑙𝑒𝑛𝑔𝑡ℎ
𝐶 = (𝑥0 ∆𝑦 − 𝑦0 ∆𝑥)/𝑙𝑖𝑛𝑒𝑙𝑒𝑛𝑔𝑡ℎ
With
𝑙𝑖𝑛𝑒𝑙𝑒𝑛𝑔𝑡ℎ = √∆𝑥 2 + ∆𝑦 2
Once the constants A, B, and C have been evaluated for the line, each processor needs to perform two multiplications and two additions to compute the pixel distance d.
A pixel is plotted if d is less than a specified line-thickness parameter.
Instead of partitioning the screen into single pixels, we can assign to each processor either a scan line or a column of pixels, depending on the line slope.
Circle
Properties of Circle
The distance relationship is expressed by the Pythagorean theorem in Cartesian coordinates as:
(𝑥 − 𝑥𝑐 )2 + (𝑦 − 𝑦𝑐 )2 = 𝑟 2
We could use this equation to calculate circular boundary points by incrementing x by 1 at each step from xc − r to xc + r and calculating the corresponding y values at each position as:
(x − xc)² + (y − yc)² = r²
(y − yc)² = r² − (x − xc)²
y = yc ± √(r² − (xc − x)²)
But this is not the best method for generating a circle, because it requires a large number of calculations which take more time to execute.
Also, the spacing between the plotted pixel positions is not uniform, as shown in the figure below.
Fig. 2.8: - Positive half of a circle showing non-uniform spacing between calculated pixel positions.
Fig.: - Symmetry of a circle: calculation of a point (x, y) in one 45° octant yields the circle points shown for the other seven octants.
Fig. 2.10: - Midpoint between candidate pixel at sampling position 𝑥𝑘 + 1 along circle path.
Assuming we have just plotted the pixel at (xk, yk), we next need to determine whether the pixel at position (xk + 1, yk) or the one at position (xk + 1, yk − 1) is closer to the circle boundary.
To find which pixel is closer, we use a decision parameter evaluated at the midpoint between the two candidate pixels:
pk = fcircle(xk + 1, yk − 1/2)
pk = (xk + 1)² + (yk − 1/2)² − r²
If 𝑝𝑘 < 0 this midpoint is inside the circle and the pixel on the scan line 𝑦𝑘 is closer to circle boundary.
Otherwise the midpoint is outside or on the boundary and we select the scan line 𝑦𝑘 − 1.
Successive decision parameters are obtained using incremental calculations as follows: if pk < 0, the next point is (xk + 1, yk) and pk+1 = pk + 2xk+1 + 1; otherwise, the next point is (xk + 1, yk − 1) and pk+1 = pk + 2xk+1 + 1 − 2yk+1. The initial decision parameter at the start position (0, r) is p0 = 5/4 − r, which rounds to p0 = 1 − r for integer r. A C sketch of the resulting algorithm follows.
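This sketch implements the incremental midpoint calculations above; circlePlotPoints() applies the eight-way symmetry of the circle, and setpixel() is assumed as before.

void setpixel(int x, int y);                   /* assumed low-level procedure */

static void circlePlotPoints(int xc, int yc, int x, int y)
{
    /* one calculated point gives a point in each of the eight octants */
    setpixel(xc + x, yc + y);  setpixel(xc - x, yc + y);
    setpixel(xc + x, yc - y);  setpixel(xc - x, yc - y);
    setpixel(xc + y, yc + x);  setpixel(xc - y, yc + x);
    setpixel(xc + y, yc - x);  setpixel(xc - y, yc - x);
}

void circleMidpoint(int xc, int yc, int r)
{
    int x = 0, y = r;
    int p = 1 - r;                             /* p0 = 5/4 − r, rounded */

    circlePlotPoints(xc, yc, x, y);
    while (x < y) {
        x++;
        if (p < 0)
            p += 2 * x + 1;                    /* midpoint inside: keep y        */
        else {
            y--;                               /* midpoint outside: select y − 1 */
            p += 2 * (x - y) + 1;
        }
        circlePlotPoints(xc, yc, x, y);
    }
}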
Ellipse
Properties of Ellipse
If we label the distances from the two foci to any point on the ellipse boundary as d1 and d2, then the general equation of an ellipse can be written as follows:
d1 + d2 = Constant
Expressing the distances in terms of the focal coordinates f1 = (x1, y1) and f2 = (x2, y2), we have
√((x − x1)² + (y − y1)²) + √((x − x2)² + (y − y2)²) = Constant
An interactive method for specifying an ellipse in an arbitrary orientation is to input the two foci and a point on the ellipse boundary.
With these three coordinate positions we can evaluate the constant in the equation above.
We can also write this equation in the form
𝐴𝑥 2 + 𝐵𝑦 2 + 𝐶𝑥𝑦 + 𝐷𝑥 + 𝐸𝑦 + 𝐹 = 0
Where the coefficients 𝐴, 𝐵, 𝐶, 𝐷, 𝐸, and 𝐹 are evaluated in terms of the focal coordinates and the
dimensions of the major and minor axes of the ellipse.
The major axis of an ellipse is the straight line segment passing through both foci, extending to the boundary on both sides.
The minor axis spans the shortest dimension of the ellipse; it bisects the major axis at right angles into two equal halves.
Then coefficient in 𝐴𝑥 2 + 𝐵𝑦 2 + 𝐶𝑥𝑦 + 𝐷𝑥 + 𝐸𝑦 + 𝐹 = 0 can be evaluated and used to generate pixels
along the elliptical path.
Ellipse equations are greatly simplified if we align the major and minor axes with the coordinate axes, i.e. the x-axis and y-axis.
We say an ellipse is in standard position if its major and minor axes are parallel to the x-axis and y-axis, as shown in the figure below.
Fig. 2.12: - Ellipse centered at (𝑥𝑐 , 𝑦𝑐 ) with semi major axis 𝑟𝑥 and semi minor axis 𝑟𝑦 are parallel to
coordinate axis.
The equation of the ellipse shown in Fig. 2.12 can be written in terms of the ellipse center coordinates and the parameters rx and ry as:
((x − xc)/rx)² + ((y − yc)/ry)² = 1
Using the polar coordinates 𝑟 and 𝜃, we can also describe the ellipse in standard position with the
parametric equations:
𝑥 = 𝑥𝑐 + 𝑟𝑥 cos θ
𝑦 = 𝑦𝑐 + 𝑟𝑦 sin θ
Symmetry considerations can be used to further reduce computation.
An ellipse in standard position is symmetric between quadrants, but unlike a circle it is not symmetric between octants.
Thus we must calculate boundary points for one quadrant; the points in the other three quadrants can then be obtained by symmetry, as shown in the figure below.
Fig. 2.13: - Symmetry of an ellipse centered at (xc, yc) with semi-major axis rx and semi-minor axis ry: a calculated point (x, y) yields the symmetric points (−x, y), (x, −y), and (−x, −y).
Fig. 2.14: - Ellipse processing regions. Over the region 1 the magnitude of ellipse slope is < 1 and over
the region 2 the magnitude of ellipse slope > 1.
We take unit step in 𝑥 direction if magnitude of slope is less than 1 in that region otherwise we take unit
step in 𝑦 direction.
The boundary between the two regions is where the curve has slope = −1.
With rx < ry, we process this quadrant by taking unit steps in the x direction in region 1 and unit steps in the y direction in region 2.
Regions 1 and 2 can be processed in various ways.
We can start from (0, ry) and step clockwise along the elliptical path in the first quadrant, shifting from unit steps in x to unit steps in y when the slope becomes less than −1.
Alternatively, we could start at (rx, 0) and select points in counterclockwise order, shifting from unit steps in y to unit steps in x when the slope becomes greater than −1.
With parallel processors, we could calculate pixel positions in the two regions simultaneously.
With parallel processors, we could calculate pixel positions in the two regions simultaneously.
Here we consider a sequential implementation of the midpoint algorithm. We take the start position at (0, ry) and step along the elliptical path in clockwise order through the first quadrant.
We define ellipse function for center of ellipse at (0, 0) as follows.
𝑓𝑒𝑙𝑙𝑖𝑝𝑠𝑒 (𝑥, 𝑦) = 𝑟𝑦 2 𝑥 2 + 𝑟𝑥 2 𝑦 2 − 𝑟𝑦 2 𝑟𝑥 2
Which has the following properties:
fellipse(x, y) < 0 if (x, y) is inside the ellipse boundary
fellipse(x, y) = 0 if (x, y) is on the ellipse boundary
fellipse(x, y) > 0 if (x, y) is outside the ellipse boundary
Thus the ellipse function serves as the decision parameter in the midpoint ellipse algorithm.
At each sampling position we select the next pixel from two candidate pixel.
Fig. 2.15: - Midpoint between candidate pixels at sampling position 𝑥𝑘 + 1 along an elliptical path.
Assume we are at (𝑥𝑘 , 𝑦𝑘 ) position and we determine the next position along the ellipse path by
evaluating decision parameter at midpoint between two candidate pixels.
p1k = fellipse(xk + 1, yk − 1/2)
p1k = ry²(xk + 1)² + rx²(yk − 1/2)² − rx²ry²
If 𝑝1𝑘 < 0, the midpoint is inside the ellipse and the pixel on scan line 𝑦𝑘 is closer to ellipse boundary
otherwise the midpoint is outside or on the ellipse boundary and we select the pixel 𝑦𝑘 − 1.
At the next sampling position decision parameter for region 1 is evaluated as.
p1k+1 = fellipse(xk+1 + 1, yk+1 − 1/2)
p1k+1 = ry²[(xk + 1) + 1]² + rx²(yk+1 − 1/2)² − rx²ry²
Now subtract 𝑝1𝑘 from 𝑝1𝑘+1
p1k+1 − p1k = ry²[(xk + 1) + 1]² + rx²(yk+1 − 1/2)² − rx²ry² − ry²(xk + 1)² − rx²(yk − 1/2)² + rx²ry²
p1k+1 − p1k = ry²[(xk + 1) + 1]² + rx²(yk+1 − 1/2)² − ry²(xk + 1)² − rx²(yk − 1/2)²
p1k+1 − p1k = ry²(xk + 1)² + 2ry²(xk + 1) + ry² + rx²(yk+1 − 1/2)² − ry²(xk + 1)² − rx²(yk − 1/2)²
p1k+1 − p1k = 2ry²(xk + 1) + ry² + rx²[(yk+1 − 1/2)² − (yk − 1/2)²]
Now making p1k+1 the subject:
p1k+1 = p1k + 2ry²(xk + 1) + ry² + rx²[(yk+1 − 1/2)² − (yk − 1/2)²]
Here yk+1 is either yk or yk − 1, depending on the sign of p1k.
Fig. 2.16: - Midpoint between candidate pixels at sampling position 𝑦𝑘 − 1 along an elliptical path.
For this region, the decision parameter is evaluated as follows.
p2k = fellipse(xk + 1/2, yk − 1)
p2k = ry²(xk + 1/2)² + rx²(yk − 1)² − rx²ry²
If 𝑝2𝑘 > 0 the midpoint is outside the ellipse boundary, and we select the pixel at 𝑥𝑘 .
If 𝑝2𝑘 ≤ 0 the midpoint is inside or on the ellipse boundary and we select 𝑥𝑘 + 1.
At the next sampling position decision parameter for region 2 is evaluated as.
p2k+1 = fellipse(xk+1 + 1/2, yk+1 − 1)
p2k+1 = ry²(xk+1 + 1/2)² + rx²[(yk − 1) − 1]² − rx²ry²
Now subtract 𝑝2𝑘 from 𝑝2𝑘+1
p2k+1 − p2k = ry²(xk+1 + 1/2)² + rx²[(yk − 1) − 1]² − rx²ry² − ry²(xk + 1/2)² − rx²(yk − 1)² + rx²ry²
p2k+1 − p2k = ry²(xk+1 + 1/2)² + rx²(yk − 1)² − 2rx²(yk − 1) + rx² − ry²(xk + 1/2)² − rx²(yk − 1)²
p2k+1 − p2k = ry²(xk+1 + 1/2)² − 2rx²(yk − 1) + rx² − ry²(xk + 1/2)²
p2k+1 − p2k = −2rx²(yk − 1) + rx² + ry²[(xk+1 + 1/2)² − (xk + 1/2)²]
Now making p2k+1 the subject:
p2k+1 = p2k − 2rx²(yk − 1) + rx² + ry²[(xk+1 + 1/2)² − (xk + 1/2)²]
Here 𝑥𝑘+1 is either 𝑥𝑘 or 𝑥𝑘 + 1, depends on the sign of 𝑝2𝑘 .
In region 2 the initial position is the last position selected in region 1, and the initial decision parameter is calculated as follows:
p20 = fellipse(x0 + 1/2, y0 − 1)
p20 = ry²(x0 + 1/2)² + rx²(y0 − 1)² − rx²ry²
To simplify the calculation of p20, we could instead select pixel positions in counterclockwise order starting at (rx, 0); in that case we would take unit steps in the positive y direction up to the last point selected in region 1.
If 𝑝1𝑘 < 0, the next point along the ellipse centered on (0, 0) is (𝑥𝑘+1 , 𝑦𝑘 ) and
𝑝1𝑘+1 = 𝑝1𝑘 + 2𝑟𝑦 2 𝑥𝑘+1 + 𝑟𝑦 2
If 𝑝2𝑘 > 0, the next point along the ellipse centered on (0, 0) is (𝑥𝑘 , 𝑦𝑘 − 1) and
𝑝2𝑘+1 = 𝑝2𝑘 − 2𝑟𝑥 2 𝑦𝑘+1 + 𝑟𝑥 2
Otherwise, the next point along the ellipse is (𝑥𝑘 + 1, 𝑦𝑘 − 1) and
𝑝2𝑘+1 = 𝑝2𝑘 − 2𝑟𝑥 2 𝑦𝑘+1 + 𝑟𝑥 2 + 2𝑟𝑦 2 𝑥𝑘+1
Using the same incremental calculations for 𝑥 and 𝑦 as in region 1.
6. Determine symmetry points in the other three quadrants.
7. Move each calculated pixel position (𝑥, 𝑦) onto the elliptical path centered on (𝑥𝑐 , 𝑦𝑐 ) and plot the
coordinate values:
𝑥 = 𝑥 + 𝑥𝑐
𝑦 = 𝑦 + 𝑦𝑐
Repeat the steps for region 2 until y = 0.
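The whole first-quadrant procedure can be sketched in C as follows, with the region-1 and region-2 loops using the incremental updates derived above; ellipsePlotPoints() applies the four-way symmetry, and setpixel() is assumed as before.

void setpixel(int x, int y);                   /* assumed low-level procedure */

static void ellipsePlotPoints(int xc, int yc, int x, int y)
{
    setpixel(xc + x, yc + y);  setpixel(xc - x, yc + y);
    setpixel(xc + x, yc - y);  setpixel(xc - x, yc - y);
}

void ellipseMidpoint(int xc, int yc, int rx, int ry)
{
    long rx2 = (long)rx * rx, ry2 = (long)ry * ry;
    long twoRx2 = 2 * rx2, twoRy2 = 2 * ry2;
    long x = 0, y = ry;
    long px = 0, py = twoRx2 * y;              /* px = 2ry²x, py = 2rx²y */
    double p;

    ellipsePlotPoints(xc, yc, (int)x, (int)y);

    /* Region 1: unit steps in x while 2ry²x < 2rx²y (|slope| < 1) */
    p = ry2 - (rx2 * ry) + (0.25 * rx2);       /* p1_0 = ry² − rx²ry + rx²/4 */
    while (px < py) {
        x++;
        px += twoRy2;
        if (p < 0)
            p += ry2 + px;                     /* p1_{k+1} = p1_k + 2ry²x + ry²      */
        else {
            y--;
            py -= twoRx2;
            p += ry2 + px - py;                /* ... − 2rx²y when y is decremented  */
        }
        ellipsePlotPoints(xc, yc, (int)x, (int)y);
    }

    /* Region 2: unit steps in y, starting from the last region-1 point */
    p = ry2 * (x + 0.5) * (x + 0.5) + rx2 * (y - 1) * (y - 1) - rx2 * ry2;
    while (y > 0) {
        y--;
        py -= twoRx2;
        if (p > 0)
            p += rx2 - py;                     /* p2_{k+1} = p2_k − 2rx²y + rx²      */
        else {
            x++;
            px += twoRy2;
            p += rx2 - py + px;                /* ... + 2ry²x when x is incremented  */
        }
        ellipsePlotPoints(xc, yc, (int)x, (int)y);
    }
}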
Filled-Area Primitives
In practice we often use polygons which are filled with some color or pattern.
There are two basic approaches to area filling on raster systems.
One way to fill an area is to determine the overlap intervals for scan lines that cross the area.
Another method is to start from a given interior position and paint outwards from this point until we encounter the boundary.
Fig. 2.17: - Interior pixels along a scan line passing through a polygon area.
For each scan line crossing a polygon, the algorithm locates the intersection points of the scan line with the polygon edges.
These intersection points are sorted from left to right.
Frame-buffer positions between each pair of intersection points are set to the specified fill color.
Scan lines that intersect a polygon at a vertex require special handling.
For a vertex, we must look at the other endpoints of the two line segments of the polygon which meet at this vertex.
If these points lie on the same side (both above or both below) of the scan line, then the vertex counts as two intersection points.
If they lie on opposite sides of the scan line, then the vertex is counted as a single intersection.
This is illustrated in figure below
Fig. 2.18: - Intersection points along the scan line that intersect polygon vertices.
As shown in Fig. 2.18, each scan line intersects a vertex or vertices of the polygon. For scan line 1, the other endpoints (B and D) of the two line segments meeting at the vertex lie on the same side of the scan line, so the vertex is counted as two intersection points.
Fig. 2.19: - A line with slope 7/3 and the integer calculation of its scan-line intersections using the equation xk+1 = xk + ∆x/∆y.
Steps for the above procedure (a C sketch of these steps follows):
1. Suppose m = 7/3 (so ∆y = 7 and ∆x = 3).
2. Initially, set the counter to 0 and the increment to 3 (which is ∆x).
3. When moving to the next scan line, increment the counter by adding ∆x.
4. When the counter becomes equal to or greater than 7 (which is ∆y), increment the x-intercept (in other words, the x-intercept for this scan line is one more than that of the previous scan line), and decrement the counter by 7 (which is ∆y).
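This counter-based update can be sketched in C for the ∆y = 7, ∆x = 3 edge of the example; the edge endpoints (x0, y0) and yTop are illustrative parameters, not names from the text.

/* Integer x-intercept update along one polygon edge with slope ∆y/∆x = 7/3 */
void edgeScan(int x0, int y0, int yTop)
{
    int dx = 3, dy = 7;                        /* slope m = dy/dx = 7/3 */
    int x = x0, counter = 0;

    for (int y = y0; y <= yTop; y++) {
        /* x is the integer x-intercept used for scan line y */
        counter += dx;                         /* next scan line: add ∆x       */
        if (counter >= dy) {                   /* crossed one whole pixel in x */
            x++;                               /* advance the x-intercept      */
            counter -= dy;                     /* decrement counter by ∆y      */
        }
    }
}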
To efficiently perform a polygon fill, we can first store the polygon boundary in a sorted edge table that contains all the information necessary to process the scan lines efficiently.
We use a bucket sort to store the edges, sorted on the smallest y value of each edge, in the correct scan-line positions.
Only the non-horizontal edges are entered into the sorted edge table.
The figure below shows an example of such an edge table.
Fig. 2.20: - A polygon and its sorted edge table, where each edge entry stores the maximum y value of the edge, the x-intercept at its lower endpoint, and the inverse slope 1/m.
Inside-Outside Tests
In area filling and other graphics operations, we often need to determine whether a particular point is inside or outside the polygon.
For identifying which region is inside and which is outside, most graphics packages use either the odd-even rule or the nonzero winding-number rule.
Fig. 2.21: - Identifying interior and exterior region for a self-intersecting polygon.
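A sketch of the odd-even rule in C: cast a ray from the test point in the +x direction and count edge crossings; an odd count means the point is interior. The Point type and the vertex-array interface are illustrative assumptions.

typedef struct { double x, y; } Point;

/* Returns 1 if p is inside polygon v[0..n-1] by the odd-even rule, else 0 */
int insideOddEven(Point p, const Point v[], int n)
{
    int crossings = 0;
    for (int i = 0, j = n - 1; i < n; j = i++) {
        /* does edge (v[j], v[i]) straddle the horizontal line y = p.y? */
        if ((v[i].y > p.y) != (v[j].y > p.y)) {
            /* x coordinate where the edge crosses y = p.y */
            double xCross = v[j].x +
                (p.y - v[j].y) * (v[i].x - v[j].x) / (v[i].y - v[j].y);
            if (xCross > p.x)                  /* crossing to the right of p */
                crossings++;
        }
    }
    return crossings % 2;                      /* odd = inside */
}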
Fig. 2.24: - Boundary fill across pixel spans for a 4-connected area.
Flood-Fill Algorithm
Sometimes we need to fill an area that is not defined within a single color boundary.
In such cases we can fill the area by replacing a specified interior color instead of searching for a boundary color.
This approach is called a flood-fill algorithm. As in the boundary-fill algorithm, we start with some seed point and examine the neighbouring pixels.
However, here pixels are checked for a specified interior color instead of a boundary color, and they are replaced by the new color.
Using either a 4-connected or an 8-connected approach, we can step through pixel positions until all interior points have been filled.
The following procedure illustrates the recursive method for filling a 4-connected region with the flood-fill algorithm.
Procedure:

void floodFill4(int x, int y, int newColor, int oldColor)
{
    if (getpixel(x, y) == oldColor)      /* still the interior color? */
    {
        putpixel(x, y, newColor);        /* repaint this pixel        */
        floodFill4(x + 1, y, newColor, oldColor);   /* right */
        floodFill4(x, y + 1, newColor, oldColor);   /* up    */
        floodFill4(x - 1, y, newColor, oldColor);   /* left  */
        floodFill4(x, y - 1, newColor, oldColor);   /* down  */
    }
}
Note: the getpixel function gives the color of the specified pixel, and the putpixel function draws the pixel with the specified color.
Character Generation
We can display letters and numbers in a variety of sizes and styles.
The overall design style for a set of characters is called a typeface.
Today a large number of typefaces are available for computer applications, for example Helvetica, New York, Palatino, etc.
Originally, the term font referred to a set of cast metal character forms in a particular size and format,
such as 10-point Courier Italic or 12- point Palatino Bold. Now, the terms font and typeface are often
used interchangeably, since printing is no longer done with cast metal forms.
Two different representations are used for storing computer fonts.
Outline Font
In this method a character is generated as a combined assembly of curve sections and straight lines.
The figure below shows how it is generated.
Starbust Method
Fig. 2.28: - (a) Starbust Method. (b) Letter V using starbust method
In this method a fixed pattern of line segments is used to generate characters.
As shown in figure 2.28 there are 24 line segments.
We highlight those line segments which are necessary to draw a particular character.
The pattern for a particular character is stored in the form of a 24-bit code, in which each bit represents the correspondingly numbered line segment.
The code contains a 1 for each line segment that must be highlighted and a 0 for every other segment.
Code for letter V is
110011100001001100000000
This technique is not used nowadays because:
1. It requires more memory to store the 24-bit code for a single character.
Line Attributes
Basic attributes of a straight line segment are its type, its dimension, and its color. In some graphics
packages, lines can also be displayed using selected pen or brush option.
Line Type
Possible selections for the line-type attribute include solid lines, dashed lines, dotted lines, etc.
We modify a line-drawing algorithm to generate such lines by setting the length and spacing of the displayed solid sections along the line path.
A dashed line could be displayed by generating an inter-dash spacing that is equal to the length of the solid sections. Both the length of the dashes and the inter-dash spacing are often specified as user options.
To set line type attributes in a PHIGS application program, a user invokes the function:
setLinetype (lt)
Where parameter lt is assigned a positive integer value of 1, 2, 3, 4, etc. to generate lines that are, respectively, solid, dashed, dotted, or dash-dotted.
Other values for the line-type parameter lt could be used to display variations in the dot-dash patterns.
Once the line-type parameter has been set in a PHIGS application program, all subsequent line-drawing
commands produce lines with this Line type.
Raster graphics generates these line types by turning some pixels on and some pixels off along the line path.
We can define patterns by specifying 1 for an on pixel and 0 for an off pixel; for example, the mask 1010101 generates a dotted line.
Line types are used in many applications, for example for comparing data sets in graphical form.
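A sketch of this masking idea in C; the mask string and the plotMasked() helper are illustrative, meant to be called at each position generated by a DDA or Bresenham loop.

#include <string.h>

void setpixel(int x, int y);                   /* assumed low-level procedure */

/* Plot a pixel only where the repeating mask has a '1'. */
void plotMasked(int x, int y, const char *mask, int *pos)
{
    if (mask[*pos] == '1')                     /* '1' = pixel on, '0' = off */
        setpixel(x, y);
    *pos = (*pos + 1) % (int)strlen(mask);     /* cycle through the pattern */
}

/* e.g. with mask "1010101" successive calls alternate on/off, giving a dotted line */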
Line Width
Implementation of line-width options depends on the capabilities of the output device.
A heavy line on a video monitor could be displayed as adjacent parallel lines, while a pen plotter might
require pen changes.
To set line width attributes in a PHIGS application program, a user invokes the function:
setLinewidthScalFactor (lw)
Line-width parameter lw is assigned a positive number to indicate the relative width of the line to be
displayed.
Values greater than 1 produce lines thicker than the standard width, and values less than 1 produce lines thinner than the standard width.
Fig. 2.30: - Double-wide raster line with slope |𝑚| < 1 generated with vertical pixel spans.
Fig. 2.31: - Raster line with slope |𝑚| > 1 and line-width parameter 𝑙𝑤 = 4 plotted with horizontal pixel
spans.
As we change the width of a line, we can also choose how its ends are drawn; the figure below illustrates all three types of line ends.
Fig. 2.32: - Thick lines drawn with (a) butt caps, (b) projecting square caps, and (c) round caps.
Similarly, the joins of two lines of modified width are shown in the figure below, which illustrates all three types of joins.
Fig. 2.33: - Thick lines segments connected with (a) miter join, (b) round join, and (c) bevel join.
Line Color
The name itself suggests that it is defining color of line displayed on the screen.
By default the system produces lines with the current color, but we can change this color with the following function in the PHIGS package:
setPolylineColorIndex (lc)
Here lc is a constant specifying the particular color to be set.
Greyscale
With monitors that have no color capability, the color functions can be used in an application program to set shades of grey (greyscale) for display primitives.
Numeric values between 0 and 1 can be used to specify greyscale levels.
These numeric values are converted to binary codes for storage in the raster system.
The table below shows intensity codes for a four-level greyscale system.
Area-Fill Attributes
For filling any area we have a choice between solid colors and patterns; all of these are included in the area-fill attributes.
Areas can be painted with various brushes and styles.
Fill Styles
Areas are generally displayed with three basic styles: hollow with a color border, filled with a solid color, or filled with some design.
In PHIGS package fill style is selected by following function.
setInteriorStyle (fs)
Values of fs include hollow, solid, pattern, etc.
Another value for fill style is hatch, which fills the area with a line pattern such as parallel lines or crossed lines.
The figure below shows different styles of filling an area.
Pattern Fill
We select the pattern with
setInteriorStyleIndex (pi)
Where the pattern index parameter pi specifies an entry position in the pattern table.
The table below shows an example pattern table.
Index (pi)    Pattern (cp)

1             [4 0]
              [0 4]

2             [2 1 2]
              [1 2 1]
              [2 1 2]
Table 2.2: - Pattern table.
For example, the following set of statements would fill the area defined in the fillArea command with
the second pattern type stored in the pattern table:
setInteriorStyle (pattern);
setInteriorStyleIndex (2);
fillArea (n, points);
A separate table can be maintained for hatch patterns, and we can generate our own table with the required patterns.
Other functions used for setting additional style options are as follows:
setPatternSize (dx, dy)
setPatternReferencePoint (position)
We can create our own pattern by setting and resetting groups of pixels and then mapping the pattern into the color matrix.
Character Attributes
The appearance of displayed characters is controlled by attributes such as font, size, color, and
orientation.
Attributes can be set for an entire string or for individual characters.
Text Attributes
In text we have many styles and designs, such as italic fonts, bold fonts, etc.
For setting the font style in PHIGS package we have one function which is:
setTextFont (tf)
Where tf is used to specify the text font.
It sets the specified font as the current character font.
For setting color of character in PHIGS we have function:
setTextColorIndex (tc)
Where text color parameter tc specifies an allowable color code.
For setting the size of the text we use the function:
setCharacterHeight (ch)
For scaling the characters we use the function:
setCharacterExpansionFactor (cw)
Where the character-width parameter cw is set to a positive real number that scales the character body width.
Spacing between characters is controlled by the function:
setCharacterSpacing (cs)
Where the character-spacing parameter cs can be assigned any real value.
The orientation for a displayed character string is set according to the direction of the character up
vector:
setCharacterUpVector (upvect)
Parameter upvect in this function is assigned two values that specify the 𝑥 and 𝑦 vector components.
Text is then displayed so that the orientation of characters from baseline to cap line is in the direction of
the up vector.
For setting the path of the character we use function:
setTextPath (tp)
Where the text path parameter tp can be assigned the value: right, left, up, or down.
It will set the direction in which we are writing.
For setting the alignment of the text we use function.
setTextAlignment (h, v)
Where parameter h and v control horizontal and vertical alignment respectively.
The precision for text display is specified with the function:
setTextPrecision (tpr)
Where text precision parameter tpr is assigned one of the values: string, char, or stroke.
The highest-quality text is produced when the parameter is set to the value stroke.
Marker Attributes
A marker symbol is a single character displayed in different colors and sizes.
Marker attributes are implemented by procedures that load the chosen character into the raster at the defined position with the specified color and size.
We select marker type using function.
setMarkerType (mt)
Where marker type parameter mt is set to an integer code.
Typical codes for marker type are the integers 1 through 5, specifying, respectively, a dot (.), a vertical
cross (+), an asterisk (*), a circle (o), and a diagonal cross (x). Displayed marker types are centred on the
marker coordinates.
We set the marker size with function.
setMarkerSizeScaleFactor (ms)
Where the marker-size parameter ms is assigned a positive number according to the scaling needed.
For setting marker color we use function.
setPolymarkerColorIndex (mc)
Where parameter mc specify the color of the marker symbol.
Transformation
Changing Position, shape, size, or orientation of an object on display is known as transformation.
Basic Transformation
Basic transformations include three transformations: translation, rotation, and scaling.
These three are known as basic transformations because any other transformation can be obtained as a combination of them.
Translation
Fig.: - Translating a point from position (x, y) to position (x′, y′) with translation distances tx and ty; the new coordinates are x′ = x + tx and y′ = y + ty.
Rotation
It is a transformation used to reposition an object along a circular path in the xy-plane.
To generate a rotation we specify a rotation angle 𝜽 and the position of the Rotation Point (Pivot
Point) (𝒙𝒓, 𝒚𝒓 ) about which the object is to be rotated.
Positive value of rotation angle defines counter clockwise rotation and negative value of rotation angle
defines clockwise rotation.
We first find the equation of rotation when pivot point is at coordinate origin(𝟎, 𝟎).
Fig.: - Rotation of a point from position (x, y) to position (x′, y′) through angle θ about the origin.
Fig.: - Rotation of a point through angle θ about pivot point (xr, yr), where ∅ is the original angular position of the point.
Scaling
Translation
P′ = T(tx, ty) ∙ P
[x′]   [1 0 tx] [x]
[y′] = [0 1 ty] [y]
[1 ]   [0 0  1] [1]
NOTE: - The inverse of the translation matrix is obtained by putting −tx and −ty in place of tx and ty.
Rotation
P′ = R(θ) ∙ P
[x′]   [cos θ  −sin θ  0] [x]
[y′] = [sin θ   cos θ  0] [y]
[1 ]   [  0       0    1] [1]
NOTE: - The inverse of the rotation matrix is obtained by replacing θ by −θ.
Scaling
P′ = S(sx, sy) ∙ P
[x′]   [sx  0  0] [x]
[y′] = [ 0 sy  0] [y]
[1 ]   [ 0  0  1] [1]
NOTE: - The inverse of the scaling matrix is obtained by replacing sx and sy by 1/sx and 1/sy respectively.
Composite Transformation
We can set up a matrix for any sequence of transformations as a composite transformation matrix by
calculating the matrix product of individual transformation.
For column matrix representation of coordinate positions, we form composite transformations by
multiplying matrices in order from right to left.
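These homogeneous-coordinate operations can be sketched in C; the Mat3 type and the function names are illustrative. multMat() builds a composite by multiplying two matrices, and composing right-to-left means the matrix applied first appears rightmost in the product.

#include <math.h>
#include <string.h>

typedef double Mat3[3][3];

/* P' = M ∙ P for a homogeneous point [x, y, 1] */
void applyMat(Mat3 m, double *x, double *y)
{
    double xn = m[0][0] * (*x) + m[0][1] * (*y) + m[0][2];
    double yn = m[1][0] * (*x) + m[1][1] * (*y) + m[1][2];
    *x = xn;  *y = yn;
}

void translateMat(Mat3 m, double tx, double ty)
{
    Mat3 t = {{1, 0, tx}, {0, 1, ty}, {0, 0, 1}};
    memcpy(m, t, sizeof(Mat3));
}

void rotateMat(Mat3 m, double theta)           /* rotation about the origin */
{
    Mat3 r = {{cos(theta), -sin(theta), 0},
              {sin(theta),  cos(theta), 0},
              {0, 0, 1}};
    memcpy(m, r, sizeof(Mat3));
}

void scaleMat(Mat3 m, double sx, double sy)
{
    Mat3 s = {{sx, 0, 0}, {0, sy, 0}, {0, 0, 1}};
    memcpy(m, s, sizeof(Mat3));
}

/* Composite: c = a ∙ b, so b is applied to a point first, then a */
void multMat(Mat3 c, Mat3 a, Mat3 b)
{
    Mat3 tmp;
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            tmp[i][j] = a[i][0] * b[0][j] + a[i][1] * b[1][j] + a[i][2] * b[2][j];
    memcpy(c, tmp, sizeof(Mat3));
}

For example, a rotation about a pivot point (xr, yr) can be composed with two multMat() calls as T(xr, yr) ∙ R(θ) ∙ T(−xr, −yr).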
Translations
Two successive translations are performed as:
P′ = T(tx2, ty2) ∙ {T(tx1, ty1) ∙ P}
P′ = {T(tx2, ty2) ∙ T(tx1, ty1)} ∙ P
     [1 0 tx2] [1 0 tx1]
P′ = [0 1 ty2] [0 1 ty1] ∙ P
     [0 0  1 ] [0 0  1 ]
     [1 0 tx1 + tx2]
P′ = [0 1 ty1 + ty2] ∙ P
     [0 0     1    ]
P′ = T(tx1 + tx2, ty1 + ty2) ∙ P
Here 𝑷′ and 𝑷 are column vector of final and initial point coordinate respectively.
This concept can be extended for any number of successive translations.
Example: Obtain the final coordinates after two translations of point p(2, 3) with translation vectors (4, 3) and (−1, 2) respectively.
     [1 0 3] [2]   [5]
P′ = [0 1 5] [3] = [8]
     [0 0 1] [1]   [1]
The final coordinates after the two translations are p′(5, 8).
Rotations
Two successive Rotations are performed as:
P′ = R(θ2) ∙ {R(θ1) ∙ P}
P′ = {R(θ2) ∙ R(θ1)} ∙ P
     [cos θ2  −sin θ2  0] [cos θ1  −sin θ1  0]
P′ = [sin θ2   cos θ2  0] [sin θ1   cos θ1  0] ∙ P
     [  0        0     1] [  0        0     1]
     [cos θ2 cos θ1 − sin θ2 sin θ1   −(sin θ1 cos θ2 + sin θ2 cos θ1)   0]
P′ = [sin θ1 cos θ2 + sin θ2 cos θ1     cos θ2 cos θ1 − sin θ2 sin θ1    0] ∙ P
     [             0                                 0                   1]
     [cos(θ1 + θ2)  −sin(θ1 + θ2)  0]
P′ = [sin(θ1 + θ2)   cos(θ1 + θ2)  0] ∙ P
     [     0              0        1]
P′ = R(θ1 + θ2) ∙ P
Scaling
Two successive scaling are performed as:
P′ = S(sx2, sy2) ∙ {S(sx1, sy1) ∙ P}
P′ = {S(sx2, sy2) ∙ S(sx1, sy1)} ∙ P
     [sx2  0  0] [sx1  0  0]
P′ = [ 0 sy2  0] [ 0 sy1  0] ∙ P
     [ 0   0  1] [ 0   0  1]
     [sx1 ∙ sx2      0      0]
P′ = [    0      sy1 ∙ sy2  0] ∙ P
     [    0          0      1]
P′ = S(sx1 ∙ sx2, sy1 ∙ sy2) ∙ P
Here 𝑷′ and 𝑷 are column vector of final and initial point coordinate respectively.
This concept can be extended for any number of successive scaling.
Example: Obtain the final coordinates after two scalings of line pq [p(2, 2), q(8, 8)] with scaling factors (2, 2) and (3, 3) respectively.
     [6 0 0] [2 8]   [12 48]
P′ = [0 6 0] [2 8] = [12 48]
     [0 0 1] [1 1]   [ 1  1]
The final coordinates after the two scalings are p′(12, 12) and q′(48, 48).
Reflection
Fig.: - Reflection of an object about the x axis.
The transformation matrix for reflection about the x axis (the line y = 0) is:
[1  0 0]
[0 −1 0]
[0  0 1]
Fig.: - Reflection of an object about the y axis.
The transformation matrix for reflection about the y axis (the line x = 0) is:
[−1 0 0]
[ 0 1 0]
[ 0 0 1]
Fig.: - Reflection of an object about the origin.
The transformation matrix for reflection about the origin is:
[−1  0 0]
[ 0 −1 0]
[ 0  0 1]
Fig.: - Reflection of an object about the line x = y.
The transformation matrix for reflection about the line x = y is:
[0 1 0]
[1 0 0]
[0 0 1]
Fig.: - Reflection of an object about the line x = −y.
The transformation matrix for reflection about the line x = −y is:
[ 0 −1 0]
[−1  0 0]
[ 0  0 1]
Example: - Find the coordinates after reflection of the triangle [A (10, 10), B (15, 15), C (20, 10)] about x
axis.
     [1  0 0] [10 15 20]
P′ = [0 −1 0] [10 15 10]
     [0  0 1] [ 1  1  1]
     [ 10  15  20]
P′ = [−10 −15 −10]
     [  1   1   1]
The final coordinates after reflection are [A′(10, −10), B′(15, −15), C′(20, −10)].
Shear
A transformation that distorts the shape of an object such that the transformed shape appears as if the
object were composed of internal layers that had been caused to slide over each other is called shear.
Two common shearing transformations are those that shift coordinate x values and those that shift y
values.
Shear in x-direction:
Fig.: - A unit square before and after an x-direction shear.
A shear relative to the x axis shifts each point horizontally by an amount proportional to its y coordinate, x′ = x + shx ∙ y and y′ = y, with the matrix:
[1 shx 0]
[0  1  0]
[0  0  1]
Shear in y-direction:
Fig.: - A unit square before and after a y-direction shear.
A shear relative to the y axis shifts each point vertically by an amount proportional to its x coordinate, x′ = x and y′ = y + shy ∙ x, with the matrix:
[ 1  0 0]
[shy 1 0]
[ 0  0 1]
Fig. 3.1: - A viewing transformation using standard rectangles for the window and viewport.
Now we see steps involved in viewing pipeline.
Fig. 3.3: - A viewing-coordinate frame is moved into coincidence with the world frame in two steps: (a)
translate the viewing origin to the world origin, and then (b) rotate to align the axes of the two systems.
We can set up a viewing reference frame in any direction and at any position.
To handle such a case, we first translate the viewing-frame origin to the origin of the standard world frame and then rotate to align the axes, as shown in Fig. 3.3.
In this way we can adjust a window defined in any reference frame.
Once object descriptions are in viewing coordinates, the window-to-viewport mapping keeps each point at the same relative position in the viewport as it had in the window, which requires:
(xv − xvmin)/(xvmax − xvmin) = (xw − xwmin)/(xwmax − xwmin)
(yv − yvmin)/(yvmax − yvmin) = (yw − ywmin)/(ywmax − ywmin)
Solving these expressions for the viewport position (xv, yv), we obtain:
xv = xvmin + (xw − xwmin) ∙ sx
yv = yvmin + (yw − ywmin) ∙ sy
where the scaling factors are:
sx = (xvmax − xvmin)/(xwmax − xwmin)
sy = (yvmax − yvmin)/(ywmax − ywmin)
We can also map window to viewport with the set of transformation, which include following sequence
of transformations:
1. Perform a scaling transformation using a fixed-point position of (xwmin, ywmin) that scales the window area to the size of the viewport.
2. Translate the scaled window area to the position of the viewport.
For maintaining relative proportions we take sx = sy; if the two are not equal, the picture is stretched or contracted in the x or y direction when displayed on the output device.
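The mapping equations above can be sketched as a small C function; the Rect type and the names are illustrative.

typedef struct { double xmin, ymin, xmax, ymax; } Rect;

/* Map a world point (xw, yw) in window win to (xv, yv) in viewport vp */
void windowToViewport(Rect win, Rect vp, double xw, double yw,
                      double *xv, double *yv)
{
    double sx = (vp.xmax - vp.xmin) / (win.xmax - win.xmin);
    double sy = (vp.ymax - vp.ymin) / (win.ymax - win.ymin);

    *xv = vp.xmin + (xw - win.xmin) * sx;
    *yv = vp.ymin + (yw - win.ymin) * sy;
}

/* Choosing the window and viewport so that sx == sy preserves relative
   proportions; otherwise the picture is stretched in x or y. */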
Characters are handled in two different ways: one is to simply maintain their relative position, like other primitives; the other is to maintain a standard character size even when the viewport is enlarged or reduced.
A number of display devices can be used in an application, and a different window-to-viewport transformation can be used for each. This mapping is called the workstation transformation.
Fig.: - Line clipping against a rectangular window: line segments with endpoints P1 … P10 before and after clipping.
Algorithm
Step-1:
Assign a region code to both endpoints of the line, depending on where each endpoint is located.
Step-2:
If both endpoint have code ‘0000’
Then line is completely inside.
Otherwise
Perform logical ending between this two codes.
Step-3:
Draw line segment which are completely inside and eliminate other line segment which found completely
outside.
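A minimal sketch of the region-code test described above; the 4-bit layout (left, right, bottom, top) is one common convention, chosen here for illustration:

    INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

    def region_code(x, y, xmin, ymin, xmax, ymax):
        # Build the 4-bit region code for one endpoint.
        code = INSIDE
        if x < xmin:
            code |= LEFT
        elif x > xmax:
            code |= RIGHT
        if y < ymin:
            code |= BOTTOM
        elif y > ymax:
            code |= TOP
        return code

    def trivial_test(c1, c2):
        if c1 == 0 and c2 == 0:
            return "completely inside"        # both codes are 0000
        if c1 & c2:
            return "completely outside"       # logical AND is nonzero
        return "needs intersection tests"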
Algorithm
1. Read two end points of line 𝑃1 (𝑥1 , 𝑦1 ) and 𝑃2 (𝑥2 , 𝑦2 )
2. Read two corner vertices, left top and right bottom of window: (𝑥𝑤𝑚𝑖𝑛 , 𝑦𝑤𝑚𝑎𝑥 ) and (𝑥𝑤𝑚𝑎𝑥 , 𝑦𝑤𝑚𝑖𝑛 )
3. Calculate values of parameters 𝑝𝑘 and 𝑞𝑘 for 𝑘 = 1, 2, 3, 4 such that,
𝑝1 = −∆𝑥, 𝑞1 = 𝑥1 − 𝑥𝑤𝑚𝑖𝑛
𝑝2 = ∆𝑥, 𝑞2 = 𝑥𝑤𝑚𝑎𝑥 − 𝑥1
𝑝3 = −∆𝑦, 𝑞3 = 𝑦1 − 𝑦𝑤𝑚𝑖𝑛
𝑝4 = ∆𝑦, 𝑞4 = 𝑦𝑤𝑚𝑎𝑥 − 𝑦1
4. If pk = 0 for any value of k = 1, 2, 3, 4, then the line is parallel to the corresponding (kth) boundary; in that case, if the matching qk < 0, the line lies completely outside that boundary and can be rejected.
Advantages
1. More efficient.
2. Only requires one division to update 𝑢1 and 𝑢2 .
3. Window intersections of line are calculated just once.
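A sketch of the Liang-Barsky parameter updates using the pk, qk values defined above; u1 and u2 are the entering and leaving line parameters, and each boundary costs only the single division noted in the advantages:

    def liang_barsky(x1, y1, x2, y2, xmin, ymin, xmax, ymax):
        dx, dy = x2 - x1, y2 - y1
        p = [-dx, dx, -dy, dy]
        q = [x1 - xmin, xmax - x1, y1 - ymin, ymax - y1]
        u1, u2 = 0.0, 1.0
        for pk, qk in zip(p, q):
            if pk == 0:                    # line parallel to this boundary
                if qk < 0:
                    return None            # outside the boundary: reject
            else:
                r = qk / pk                # the single division per boundary
                if pk < 0:
                    u1 = max(u1, r)        # potentially entering point
                else:
                    u2 = min(u2, r)        # potentially leaving point
        if u1 > u2:
            return None                    # line completely outside
        return (x1 + u1 * dx, y1 + u1 * dy), (x1 + u2 * dx, y1 + u2 * dy)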
Fig. 3.7: - Three possible positions for a line endpoint P1 in the NLN line-clipping algorithm.
We can also extend this procedure to all nine regions.
When p1 is inside the window, we divide the whole area into regions as shown in the corresponding figure.
Fig. 3.10: - Two possible sets of clipping regions when P1 is in a corner region.
Regions are named in such a way that the name of the region in which p2 falls gives the window edges at which the line must be clipped.
For example, region LT says that the line needs to be clipped at the left and top boundaries.
To find in which region the line p1p2 falls, we compare the slope of the line to the slopes of the region boundaries:
slope(p1pB1) < slope(p1p2) < slope(p1pB2)
where p1pB1 and p1pB2 are the boundary lines.
For example, when p1 is in an edge region, we check whether p2 is in region LT by comparing the slope of p1p2 with the slopes of the lines from p1 to the window corners.
Polygon Clipping
For polygon clipping we need to modify the line-clipping procedure, because line clipping deals only with line segments, while polygon clipping must consider the interior area and the new boundary of the polygon after clipping.
Sutherland-Hodgeman Polygon Clipping
To clip a polygon correctly, we process the polygon boundary as a whole against each window edge.
This is done by processing all polygon vertices against each clip-rectangle boundary, one boundary at a time.
Beginning with the initial set of polygon vertices, we first clip against the left boundary to produce a new sequence of vertices.
That new set of vertices is then passed in turn to the right-boundary clipper, the bottom-boundary clipper, and the top-boundary clipper, as shown in the figure below.
Fig. 3.12: - Processing the vertices of the polygon through the boundary clippers.
There are four possible cases when processing vertices in sequence around the perimeter of a polygon. For each edge from vertex V1 to vertex V2, relative to one clipping boundary: if V1 is outside and V2 inside, we output the intersection point and then V2; if both are inside, we output only V2; if V1 is inside and V2 outside, we output only the intersection point; if both are outside, we output nothing. A sketch of this per-boundary clipper is given below.
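A minimal sketch of clipping a vertex list against a single boundary using these four cases; inside() and intersect() are assumed helper callables for the boundary being processed, not functions defined in this text:

    def clip_against_boundary(vertices, inside, intersect):
        # vertices: list of (x, y) points in order around the polygon.
        # inside(p): True if p is on the inside half-plane of this boundary.
        # intersect(p1, p2): intersection of edge p1p2 with the boundary.
        out = []
        for i in range(len(vertices)):
            v1 = vertices[i - 1]           # previous vertex (wraps around)
            v2 = vertices[i]               # current vertex
            if inside(v2):
                if not inside(v1):
                    out.append(intersect(v1, v2))   # out -> in: intersection, then v2
                out.append(v2)                      # in -> in: v2 only
            elif inside(v1):
                out.append(intersect(v1, v2))       # in -> out: intersection only
            # out -> out: output nothing
        return out

Running the polygon through this function once per window boundary gives the full Sutherland-Hodgeman pipeline described above.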
Fig. 3.14: - Clipping a concave polygon (a) with the Weiler-Atherton algorithm generates the two separate polygon areas in (b).
As shown in the figure, we start from V1 and move clockwise towards V2, adding the intersection point and the next point to the output list by following the polygon boundary; then from V2 to V3 we add V3 to the output list.
From V3 to V4 we calculate the intersection point, add it to the output list, and then follow the window boundary.
Similarly, from V4 to V5 we add the intersection point and the next point and follow the polygon boundary; next, from V5 to V6, we add the intersection point and follow the window boundary; finally, the edge from V6 to V1 is outside, so nothing is added.
In this way we get two separate polygon sections after clipping.
Parallel Projection
This method generates a view of a solid object by projecting parallel lines onto the display plane.
By changing the viewing position we can get different views of a 3D object on the 2D display screen.
Perspective projection
This method generates a view of a 3D object by projecting points onto the display plane along converging paths.
Depth cueing
Many times depth information is important so that, for a particular viewing direction, we can identify which are the front surfaces and which are the back surfaces of a displayed object.
A simple method to do this is depth cueing, in which higher intensity is assigned to closer objects and lower intensity to farther objects.
Depth cueing is applied by choosing maximum and minimum intensity values and a range of distances over which the intensities are to vary.
Another application of depth cueing is modeling the effect of the atmosphere.
Surface Rendering
A more realistic image is produced by setting the surface intensity according to the light reflected from that surface and the characteristics of the surface.
This gives more intensity to shiny surfaces and less to dull surfaces.
It also applies high intensity where more light falls and low intensity where little light falls.
A vibrating mirror changes its focal length as it vibrates, and the vibration is synchronized with the display of an object on the CRT.
Each point on the object is reflected from the mirror into a spatial position corresponding to the distance of that point from a specified viewing position.
A very good example of this system is the GENISCO SPACE GRAPH system, which uses a vibrating mirror to project 3D objects into a 25 cm by 25 cm by 25 cm volume. This system is also capable of showing 2D cross-sections at different depths.
Another way is stereoscopic views.
Stereoscopic viewing does not produce true three-dimensional images, but it produces a 3D effect by presenting a different view to each eye of an observer, so that scenes appear to have depth.
To obtain this we first need two views of the object, generated from viewing directions corresponding to the positions of the two eyes.
We can construct the two views as computer-generated scenes with different viewing positions, or we can use a stereo camera pair to photograph an object or scene.
When we simultaneously see the left view with the left eye and the right view with the right eye, the two views merge into a single image that appears to have depth.
One way to produce a stereoscopic effect is to display each of the two views on a raster system on alternate refresh cycles.
The screen is viewed through glasses, with each lens designed in such a way that it acts as a rapidly alternating shutter synchronized to block out one of the views.
Polygon Surfaces
A polygonal surface can be thought of as a surface composed of polygonal faces.
The most commonly used boundary representation for a three-dimensional object is a set of polygon surfaces that enclose the object interior.
Polygon Tables
Representation of vertex coordinates, edges, and other properties of a polygon in table form is called a polygon table.
Polygon data tables can be organized into two groups: geometric tables and attribute tables.
Geometric tables contain vertex coordinates and the other parameters that specify the geometry of the polygon.
Attribute tables store other information such as color, transparency, etc.
A convenient way to organize the geometric data is to split it into three tables: a vertex table, an edge table, and a polygon-surface table.
(Figure: two adjacent polygon surfaces S1 and S2 sharing edge E3, with vertices V1–V5 and edges E1–E6.)
Vertex Table:
V1: x1, y1, z1
V2: x2, y2, z2
V3: x3, y3, z3
V4: x4, y4, z4
V5: x5, y5, z5
Edge Table:
E1: V1, V2
E2: V2, V3
E3: V3, V1
E4: V3, V4
E5: V4, V5
E6: V5, V1
Polygon Surface Table:
S1: E1, E2, E3
S2: E3, E4, E5, E6
Fig. 4.5: - Edge table of the above example with extra information in the form of surface pointers.
With such cross-references, when a surface entry in the polygon table refers to an edge in the edge table, we can verify whether that edge actually belongs to the surface; if not, the error is detected, and it may be corrected if sufficient extra information is stored.
Plane Equations
To produce a display of 3D objects we must process the input data representation for the objects through several procedures.
For this processing we sometimes need to find the orientation of a surface, which can be obtained from the vertex coordinate values and the equation of the polygon plane.
Equation of plane is given as
𝐴𝑥 + 𝐵𝑦 + 𝐶𝑧 + 𝐷 = 0
where (x, y, z) is any point on the plane and A, B, C, D are constants. The constants can be found by writing the plane equation for three non-collinear points on the plane and solving the resulting simultaneous equations for the ratios A/D, B/D, and C/D:
$$\frac{A}{D}x_1 + \frac{B}{D}y_1 + \frac{C}{D}z_1 = -1$$
$$\frac{A}{D}x_2 + \frac{B}{D}y_2 + \frac{C}{D}z_2 = -1$$
$$\frac{A}{D}x_3 + \frac{B}{D}y_3 + \frac{C}{D}z_3 = -1$$
Solving by determinants:
$$A = \begin{vmatrix} 1 & y_1 & z_1 \\ 1 & y_2 & z_2 \\ 1 & y_3 & z_3 \end{vmatrix} \quad B = \begin{vmatrix} x_1 & 1 & z_1 \\ x_2 & 1 & z_2 \\ x_3 & 1 & z_3 \end{vmatrix} \quad C = \begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{vmatrix} \quad D = -\begin{vmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{vmatrix}$$
Expanding the determinants gives:
𝐴 = 𝑦1 (𝑧2 − 𝑧3 ) + 𝑦2 (𝑧3 − 𝑧1 ) + 𝑦3 (𝑧1 − 𝑧2 )
𝐵 = 𝑧1 (𝑥2 − 𝑥3 ) + 𝑧2 (𝑥3 − 𝑥1 ) + 𝑧3 (𝑥1 − 𝑥2 )
𝐶 = 𝑥1 (𝑦2 − 𝑦3 ) + 𝑥2 (𝑦3 − 𝑦1 ) + 𝑥3 (𝑦1 − 𝑦2 )
𝐷 = −𝑥1 (𝑦2 𝑧3 − 𝑦3 𝑧2 ) − 𝑥2 (𝑦3 𝑧1 − 𝑦1 𝑧3 ) − 𝑥3 (𝑦1 𝑧2 − 𝑦2 𝑧1 )
These values of A, B, C, and D are then stored in the polygon data structure with the other polygon data.
The orientation of the plane is described by the normal vector to the plane:
N = (A, B, C)
Fig. 4.6: - The vector N, normal to the surface.
Here N = (A, B, C), where A, B, C are the plane coefficients.
When we are dealing with polygon surfaces that enclose an object interior, we define the side of each face toward the object interior as the inside face and the outward side as the outside face.
We can calculate the normal vector N for any particular surface as the cross product of two edge vectors taken in counterclockwise order in a right-handed system:
N = (V2 − V1) × (V3 − V1)
N then gives the values of A, B, and C for that plane, and D can be obtained by substituting these values and one of the vertices into the plane equation and solving for D.
Using the plane equation in vector form, we can obtain D as:
N · P = −D
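A small sketch computing N and D from three vertices with NumPy, following the cross-product formula above (the function name is illustrative):

    import numpy as np

    def plane_coefficients(v1, v2, v3):
        # N = (V2 - V1) x (V3 - V1) gives (A, B, C); D follows from N . P = -D.
        v1, v2, v3 = map(np.asarray, (v1, v2, v3))
        n = np.cross(v2 - v1, v3 - v1)
        d = -np.dot(n, v1)
        return n[0], n[1], n[2], d        # A, B, C, D

    # Example: the plane z = 0 through (0,0,0), (1,0,0), (0,1,0) gives (0, 0, 1, 0).
    print(plane_coefficients((0, 0, 0), (1, 0, 0), (0, 1, 0)))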
The plane equation is also used to find the position of any point relative to the plane surface: the point is inside if Ax + By + Cz + D < 0 and outside if Ax + By + Cz + D > 0.
Polygon Meshes
Fig. 4.7: - A triangle strip formed with 11 triangles connecting 13 vertices. Fig. 4.8: - A quadrilateral mesh containing 12 quadrilaterals constructed from a 5 by 4 input vertex array.
A polygon mesh is a collection of edges, vertices, and faces that defines the shape of a polyhedral object in 3D computer graphics and solid modeling.
An edge can be shared by two or more polygons, and a vertex is shared by at least two edges.
A polygon mesh can be represented in the following ways:
o Explicit representation
o Pointer to vertex list
o Pointer to edge list
Explicit Representation
In explicit representation, each polygon stores all of its vertices in order in memory as
P = ((x1, y1, z1), (x2, y2, z2), …, (xn, yn, zn))
It processes fast but requires more memory for storage.
Spline Representations
A spline is a flexible strip used to produce a smooth curve through a designated set of points.
Several small weights are attached to the spline to hold it in a particular position.
A spline curve is a curve drawn with this method.
The term spline curve now refers to any composite curve formed with polynomial sections satisfying specified continuity conditions at the boundaries of the pieces.
A spline surface can be described with two sets of orthogonal spline curves.
Approximation Spline: - When the curve section follows the general control-point path without necessarily passing through any control point, the resulting curve is said to approximate the set of control points, and the curve is known as an approximation spline.
A spline curve can be modified by selecting different control-point positions.
We can apply transformations to the curve according to need, such as translation, scaling, etc.
The convex polygon boundary that encloses a set of control points is called the convex hull.
Fig. 4.11: -convex hull shapes for two sets of control points.
A polyline connecting the sequence of control points for an approximation spline is usually displayed to remind a designer of the control-point ordering. This set of connected line segments is often referred to as the control graph of the curve.
The control graph is also referred to as the control polygon or characteristic polygon.
Fig. 4.12: -Control-graph shapes for two different sets of control points.
Fig. 4.13: - Piecewise construction of a curve by joining two curve segments uses different orders of
continuity: (a) zero-order continuity only, (b) first-order continuity, and (c) second-order continuity.
First-order continuity is often sufficient for general applications, but some graphics packages, such as CAD systems, require second-order continuity for accuracy.
Hermite Interpolation
It is named after the French mathematician Charles Hermite.
It is an interpolating piecewise cubic polynomial with a specified tangent at each control point.
It can be adjusted locally because each curve section depends only on its endpoint constraints.
The parametric cubic point function for any curve section is then given by the boundary conditions:
p(0) = pk
p(1) = pk+1
p′(0) = dpk
p′(1) = dpk+1
where dpk and dpk+1 are the values of the parametric derivatives at control points pk and pk+1, respectively.
The vector equation of the cubic spline section is:
p(u) = au³ + bu² + cu + d
where the x component of p is x(u) = axu³ + bxu² + cxu + dx, and similarly for the y and z components.
The matrix form of the above equation is:
$$p(u) = \begin{bmatrix} u^3 & u^2 & u & 1 \end{bmatrix} \begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix}$$
The derivative of p(u) is p′(u) = 3au² + 2bu + c.
The matrix form of p′(u) is:
$$p'(u) = \begin{bmatrix} 3u^2 & 2u & 1 & 0 \end{bmatrix} \begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix}$$
Now substitute the endpoint values 0 and 1 for u and combine all four parametric boundary conditions in matrix form:
$$\begin{bmatrix} p_k \\ p_{k+1} \\ dp_k \\ dp_{k+1} \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 & 1 \\ 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 0 \\ 3 & 2 & 1 & 0 \end{bmatrix} \begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix}$$
Solving for the polynomial coefficients:
$$\begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix} = \begin{bmatrix} 2 & -2 & 1 & 1 \\ -3 & 3 & -2 & -1 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} p_k \\ p_{k+1} \\ dp_k \\ dp_{k+1} \end{bmatrix} = M_H \begin{bmatrix} p_k \\ p_{k+1} \\ dp_k \\ dp_{k+1} \end{bmatrix}$$
Substituting this into the equation for p(u):
$$p(u) = \begin{bmatrix} u^3 & u^2 & u & 1 \end{bmatrix} \begin{bmatrix} 2 & -2 & 1 & 1 \\ -3 & 3 & -2 & -1 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} p_k \\ p_{k+1} \\ dp_k \\ dp_{k+1} \end{bmatrix}$$
$$p(u) = p_k(2u^3 - 3u^2 + 1) + p_{k+1}(-2u^3 + 3u^2) + dp_k(u^3 - 2u^2 + u) + dp_{k+1}(u^3 - u^2)$$
$$p(u) = p_k H_0(u) + p_{k+1} H_1(u) + dp_k H_2(u) + dp_{k+1} H_3(u)$$
where Hk(u), for k = 0, 1, 2, 3, are referred to as blending functions because they blend the boundary constraint values to produce the curve section.
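A minimal sketch evaluating one Hermite section directly from these blending functions (the function name and tuple-based points are illustrative):

    def hermite_point(u, pk, pk1, dpk, dpk1):
        # Blending functions H0..H3 from the derivation above.
        h0 = 2*u**3 - 3*u**2 + 1
        h1 = -2*u**3 + 3*u**2
        h2 = u**3 - 2*u**2 + u
        h3 = u**3 - u**2
        return tuple(h0*a + h1*b + h2*c + h3*d
                     for a, b, c, d in zip(pk, pk1, dpk, dpk1))

    # Example: endpoints (0,0) and (1,0) with tangents (1,1) and (1,-1).
    print(hermite_point(0.5, (0, 0), (1, 0), (1, 1), (1, -1)))  # (0.5, 0.25)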
Cardinal Splines
Like Hermite splines, cardinal splines are interpolating piecewise cubics with specified endpoint tangents at the boundary of each section.
However, with cardinal splines we do not have to input the values of the endpoint tangents.
In a cardinal spline, the slope at a control point is calculated from the two immediately neighboring control points.
A cardinal spline section is completely specified by four control points.
Fig. 4.16: -parametric point function p(u) for a cardinal spline section between control points pk and pk+1.
The middle two control points are the endpoints of the curve section, and the other two are used to calculate the slopes at the endpoints.
The parametric boundary conditions for a cardinal spline section are:
$$p(0) = p_k$$
$$p(1) = p_{k+1}$$
$$p'(0) = \frac{1}{2}(1 - t)(p_{k+1} - p_{k-1})$$
$$p'(1) = \frac{1}{2}(1 - t)(p_{k+2} - p_k)$$
where the parameter t is called the tension parameter, since it controls how loosely or tightly the cardinal spline fits the control points.
Fig. 4.17: -Effect of the tension parameter on the shape of a cardinal spline section.
When t = 0, this class of curves is referred to as Catmull-Rom splines, or Overhauser splines.
Using a method similar to the one used for Hermite splines, we can obtain:
$$p(u) = \begin{bmatrix} u^3 & u^2 & u & 1 \end{bmatrix} \cdot M_C \cdot \begin{bmatrix} p_{k-1} \\ p_k \\ p_{k+1} \\ p_{k+2} \end{bmatrix}$$
where the cardinal matrix is:
$$M_C = \begin{bmatrix} -s & 2-s & s-2 & s \\ 2s & s-3 & 3-2s & -s \\ -s & 0 & s & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}$$
with s = (1 − t)/2.
Putting the value of MC into the equation for p(u) and expanding:
$$p(u) = p_{k-1}(-su^3 + 2su^2 - su) + p_k\big((2-s)u^3 + (s-3)u^2 + 1\big) + p_{k+1}\big((s-2)u^3 + (3-2s)u^2 + su\big) + p_{k+2}(su^3 - su^2)$$
$$p(u) = p_{k-1}\,CAR_0(u) + p_k\,CAR_1(u) + p_{k+1}\,CAR_2(u) + p_{k+2}\,CAR_3(u)$$
where the polynomials CARk(u), for k = 0, 1, 2, 3, are the cardinal blending functions.
Figure below shows this blending function shape for t = 0.
Fig. 4.18: -The cardinal blending function for t=0 and s=0.5.
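A sketch evaluating one cardinal section directly from the blending functions, with s = (1 − t)/2 as defined above (names are illustrative):

    def cardinal_point(u, p0, p1, p2, p3, t=0.0):
        # p0..p3 correspond to p(k-1)..p(k+2); t is the tension parameter.
        s = (1 - t) / 2
        car0 = -s*u**3 + 2*s*u**2 - s*u
        car1 = (2 - s)*u**3 + (s - 3)*u**2 + 1
        car2 = (s - 2)*u**3 + (3 - 2*s)*u**2 + s*u
        car3 = s*u**3 - s*u**2
        return tuple(car0*a + car1*b + car2*c + car3*d
                     for a, b, c, d in zip(p0, p1, p2, p3))

    # With t = 0 this is a Catmull-Rom section interpolating from p1 to p2.
    print(cardinal_point(0.0, (0, 0), (1, 0), (2, 1), (3, 1)))  # (1.0, 0.0)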
Kochanek-Bartels spline
It is an extension of the cardinal spline.
Two additional parameters are introduced into the constraint equations that define the Kochanek-Bartels spline, to provide further flexibility in adjusting the shape of curve sections.
The parametric boundary conditions are as follows:
$$p(0) = p_k$$
$$p(1) = p_{k+1}$$
$$p'(0) = \frac{1}{2}(1 - t)\big[(1 + b)(1 - c)(p_k - p_{k-1}) + (1 - b)(1 + c)(p_{k+1} - p_k)\big]$$
$$p'(1) = \frac{1}{2}(1 - t)\big[(1 + b)(1 + c)(p_{k+1} - p_k) + (1 - b)(1 - c)(p_{k+2} - p_{k+1})\big]$$
where t is the tension parameter, the same as used in the cardinal spline; b is the bias parameter; and c is the continuity parameter.
In this spline, parametric derivatives may not be continuous across section boundaries.
The bias b is used to adjust the amount that the curve bends at each end of a section.
Fig. 4.19: -Effect of bias parameter on the shape of a Kochanek-Bartels spline section.
Parameter c controls the continuity of the tangent vectors across section boundaries: if c is nonzero, there is a discontinuity in the slope of the curve across section boundaries.
This is useful in animation paths; in particular, abrupt changes in motion can be simulated with nonzero values of c.
Bezier Curves
A Bezier curve section can be fitted to any number of control points.
The number of control points and their relative positions determine the degree of the Bezier polynomial.
As with the interpolation splines, a Bezier curve can be specified with boundary conditions or with blending functions.
The most convenient method is to specify the Bezier curve with blending functions.
Suppose we are given n + 1 control-point positions p0 to pn, where pk = (xk, yk, zk).
These are blended to give the position vector p(u), which describes the path of the approximating Bezier curve:
$$p(u) = \sum_{k=0}^{n} p_k \, BEZ_{k,n}(u), \qquad 0 \le u \le 1$$
where the blending functions are the Bernstein polynomials BEZk,n(u) = C(n, k) uᵏ(1 − u)ⁿ⁻ᵏ, with C(n, k) the binomial coefficients.
Fig. 4.20: -Example of 2D Bezier curves generated by different number of control points.
An efficient method for determining coordinate positions along a Bezier curve can be set up using recursive calculations.
For example, successive binomial coefficients can be calculated as:
$$C(n, k) = \frac{n - k + 1}{k}\,C(n, k - 1), \qquad n \ge k$$
The Bezier blending functions always sum to one:
$$\sum_{k=0}^{n} BEZ_{k,n}(u) = 1$$
so any curve position is simply the weighted sum of the control-point positions.
A Bezier curve smoothly follows the control points without erratic oscillations.
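A sketch of 2D Bezier evaluation using the recursive binomial-coefficient relation above (names are illustrative):

    def bezier_point(u, points):
        # points: list of (x, y) control points p0..pn.
        n = len(points) - 1
        c = 1                                  # C(n, 0)
        x = y = 0.0
        for k in range(n + 1):
            if k > 0:
                c = c * (n - k + 1) // k       # C(n,k) = C(n,k-1)*(n-k+1)/k
            blend = c * u**k * (1 - u)**(n - k)
            x += blend * points[k][0]
            y += blend * points[k][1]
        return x, y

    # Quadratic example: starts at p0, is pulled toward p1, ends at p2.
    print(bezier_point(0.5, [(0, 0), (1, 2), (2, 0)]))   # (1.0, 1.0)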
Fig. 4.21: - A closed Bezier curve, generated by specifying the first and last control points at the same location (P0 = P5).
If we specify multiple control points at the same position, that position gets more weight and the curve is pulled towards it.
Fig. 4.22: - A Bezier curve can be made to pass closer to a given coordinate position by assigning multiple control points at that position (here P1 = P2).
A Bezier curve can be fitted to any number of control points, but many control points require higher-order polynomial calculation.
A complicated Bezier curve can instead be generated by dividing the whole curve into several lower-order polynomial curves; this also gives better control over the shape of each small region.
Since a Bezier curve passes through its first and last control points, it is easy to join two curve sections with zero-order parametric continuity (C0).
For first-order continuity we place the endpoint of the first curve and the start point of the second curve at the same position, and make the last two control points of the first curve collinear with the first two control points of the second curve; the second control point of the second curve is placed at position
pn + (pn − pn−1)
so that the control points are equally spaced.
Fig. 4.23: - Zero- and first-order continuous curves obtained by placing the control points at the proper positions.
Similarly, for second-order continuity, the third control point of the second curve is expressed in terms of the positions of the last three control points of the first curve section as
pn−2 + 4(pn − pn−1)
C2 continuity can be unnecessarily restrictive; especially for cubic curves, it leaves only one control point free for adjusting the shape of the curve.
The form of the blending functions determines how the control points affect the shape of the curve for values of the parameter u over the range 0 to 1.
At u = 0, BEZ0,3(u) is the only nonzero blending function, with value 1.
At u = 1, BEZ3,3(u) is the only nonzero blending function, with value 1.
So the cubic Bezier curve always passes through p0 and p3.
The other blending functions affect the shape of the curve at intermediate values of the parameter u.
BEZ1,3(u) is maximum at u = 1/3 and BEZ2,3(u) is maximum at u = 2/3.
Each blending function is nonzero over the entire range of u, so the Bezier form does not allow local control of the curve shape.
At end point positions parametric first order derivatives are :
𝑝′ (0) = 3(𝑝1 − 𝑝0 )
𝑝′ (1) = 3(𝑝3 − 𝑝2 )
And second order parametric derivatives are.
𝑝′′ (0) = 6(𝑝0 − 2𝑝1 + 𝑝2 )
𝑝′′ (1) = 6(𝑝1 − 2𝑝2 + 𝑝3 )
These expressions can be used to construct piecewise curves with C1 and C2 continuity.
We can also represent the polynomial expressions for the blending functions in matrix form:
$$p(u) = \begin{bmatrix} u^3 & u^2 & u & 1 \end{bmatrix} \cdot M_{BEZ} \cdot \begin{bmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \end{bmatrix}$$
$$M_{BEZ} = \begin{bmatrix} -1 & 3 & -3 & 1 \\ 3 & -6 & 3 & 0 \\ -3 & 3 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix}$$
We can also add additional parameters, such as tension and bias, as we did with the interpolating splines.
Bezier Surfaces
Two sets of orthogonal Bezier curves can be used to design an object surface from an input mesh of control points.
Taking the Cartesian product of Bezier blending functions, we obtain the parametric vector function:
$$p(u, v) = \sum_{j=0}^{m} \sum_{k=0}^{n} p_{j,k}\, BEZ_{j,m}(v)\, BEZ_{k,n}(u)$$
with pj,k specifying the locations of the (m + 1) by (n + 1) control points.
Fig. 4.25: -Bezier surfaces constructed for (a) m=3, n=3, and (b) m=4, n=4. Dashed line connects the
control points.
Each curve of constant u is plotted by varying v over the interval from 0 to 1, and similarly for curves of constant v.
Bezier surfaces have the same properties as Bezier curves, so they can be used in interactive design applications.
For each surface patch we can first select a mesh of control-point positions in the xy plane and then choose elevations in the z direction.
We can put two or more surface patches together to form the required surface, using methods similar to those for joining curve sections, with continuity C0, C1, or C2 as needed.
B-Spline Curves
The general expression for a B-spline curve in terms of blending functions is given by:
$$p(u) = \sum_{k=0}^{n} p_k\, B_{k,d}(u)$$
For any u between u_{d−1} and u_{n+1}, the sum of all the blending functions is 1, i.e., $\sum_{k=0}^{n} B_{k,d}(u) = 1$.
There are three general classifications for knot vectors:
o Uniform
o Open uniform
o Non-uniform
where
$$M_B = \frac{1}{6}\begin{bmatrix} -1 & 3 & -3 & 1 \\ 3 & -6 & 3 & 0 \\ -3 & 0 & 3 & 0 \\ 1 & 4 & 1 & 0 \end{bmatrix}$$
We can also modify the B-Spline equation to include a tension parameter t.
The periodic cubic B-spline with tension matrix then has the form:
$$M_{Bt} = \frac{1}{6}\begin{bmatrix} -t & 12-9t & 9t-12 & t \\ 3t & 12t-18 & 18-15t & 0 \\ -3t & 0 & 3t & 0 \\ t & 6-2t & t & 0 \end{bmatrix}$$
When t = 1, MBt = MB.
We can obtain the cubic B-spline blending functions for the parametric range from 0 to 1 by converting the matrix representation into polynomial form; for t = 1 we have:
$$B_{0,3}(u) = \frac{1}{6}(1 - u)^3$$
$$B_{1,3}(u) = \frac{1}{6}(3u^3 - 6u^2 + 4)$$
$$B_{2,3}(u) = \frac{1}{6}(-3u^3 + 3u^2 + 3u + 1)$$
$$B_{3,3}(u) = \frac{1}{6}u^3$$
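A sketch evaluating one periodic cubic B-spline section from these four polynomials (the approximating analogue of the Hermite and cardinal evaluators sketched earlier):

    def bspline_point(u, p0, p1, p2, p3):
        # Periodic cubic B-spline blending functions B0,3..B3,3 (t = 1 case).
        b0 = (1 - u)**3 / 6
        b1 = (3*u**3 - 6*u**2 + 4) / 6
        b2 = (-3*u**3 + 3*u**2 + 3*u + 1) / 6
        b3 = u**3 / 6
        return tuple(b0*a + b1*b + b2*c + b3*d
                     for a, b, c, d in zip(p0, p1, p2, p3))

    # The four weights sum to 1 for every u, so the curve stays in the convex hull.
    print(bspline_point(0.5, (0, 0), (1, 0), (2, 1), (3, 1)))  # (1.5, 0.5)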
A repeated knot value reduces the continuity of the curve by 1 for each repetition of that value.
We can solve a non-uniform B-spline using a method similar to the one used for uniform B-splines.
For a set of n + 1 control points, we set the degree d and the knot values.
Then, using the recurrence relations, we can obtain the blending functions or evaluate curve positions directly for display of the curve.
B-Spline Surfaces
B-spline surface formation is similar to that of Bezier splines: orthogonal sets of curves are used, and two surfaces are connected using the same methods as for Bezier surfaces.
The vector equation of a B-spline surface is given by the Cartesian product of B-spline blending functions:
$$p(u, v) = \sum_{k_1=0}^{n_1} \sum_{k_2=0}^{n_2} p_{k_1,k_2}\, B_{k_1,d_1}(u)\, B_{k_2,d_2}(v)$$
3D Translation
Similar to 2D translation, which used 3×3 matrices, 3D translation uses 4×4 matrices (x, y, z, h).
In 3D translation, a point (x, y, z) is translated by the amounts tx, ty, and tz to the location (x′, y′, z′):
x′ = x + tx
y′ = y + ty
z′ = z + tz
Let's see the matrix equation:
$$P' = T \cdot P$$
$$\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$
Example: - Translate the point P(10, 10, 10) in 3D space with translation factors T(10, 20, 5).
$$P' = T \cdot P$$
$$\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 10 \\ 0 & 1 & 0 & 20 \\ 0 & 0 & 1 & 5 \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} 10 \\ 10 \\ 10 \\ 1 \end{bmatrix} = \begin{bmatrix} 20 \\ 30 \\ 15 \\ 1 \end{bmatrix}$$
The final coordinate after translation is P′(20, 30, 15).
Rotation
For 3D rotation we need to pick an axis to rotate about.
The most common choices are the X-axis, the Y-axis, and the Z-axis
Z-Axis Rotation
Two-dimensional rotation equations can easily be converted into 3D z-axis rotation equations.
For rotation about the z axis we leave the z coordinate unchanged:
x′ = x cos θ − y sin θ
y′ = x sin θ + y cos θ
z′ = z
where the parameter θ specifies the rotation angle.
The matrix equation is written as:
$$P' = R_z(\theta) \cdot P$$
$$\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$
X-Axis Rotation
The transformation equations for x-axis rotation are obtained from the z-axis rotation equations by the cyclic replacement
x → y → z → x
For rotation about the x axis we leave the x coordinate unchanged:
y′ = y cos θ − z sin θ
z′ = y sin θ + z cos θ
x′ = x
where the parameter θ specifies the rotation angle.
The matrix equation is written as:
$$P' = R_x(\theta) \cdot P$$
$$\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$
Y-Axis Rotation
The transformation equations for y-axis rotation are obtained from the x-axis rotation equations by the same cyclic replacement
x → y → z → x
which gives (leaving the y coordinate unchanged):
z′ = z cos θ − x sin θ
x′ = z sin θ + x cos θ
y′ = y
General 3D Rotations when the rotation axis is parallel to one of the standard axes
Three steps are required to complete such a rotation:
1. Translate the object so that the rotation axis coincides with the parallel coordinate axis.
2. Perform the specified rotation about that axis.
3. Translate the object so that the rotation axis is moved back to its original position.
This can be represented in equation form as:
𝑷′ = 𝑻−𝟏 ∙ 𝑹(𝜽) ∙ 𝑻 ∙ 𝑷
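A minimal NumPy sketch of this T⁻¹ · R(θ) · T composition, for a rotation axis parallel to the z axis passing through (px, py); the helper names are illustrative:

    import numpy as np

    def translate(tx, ty, tz):
        m = np.eye(4)
        m[:3, 3] = (tx, ty, tz)
        return m

    def rotate_z(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0, 0],
                         [s,  c, 0, 0],
                         [0,  0, 1, 0],
                         [0,  0, 0, 1]])

    def rotate_about_parallel_axis(theta, px, py):
        # Translate the axis onto the z axis, rotate, translate back.
        return translate(px, py, 0) @ rotate_z(theta) @ translate(-px, -py, 0)

    p = np.array([2, 1, 5, 1])                               # homogeneous point
    print(rotate_about_parallel_axis(np.pi / 2, 1, 1) @ p)   # ~ [1, 2, 5, 1]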
(Figure: rotation about an arbitrary axis defined by points P1 and P2, with unit axis vector u and the angles α and β used to align u with a coordinate axis.)
Scaling
It is used to resize objects in 3D space.
We can apply uniform as well as non-uniform scaling by selecting proper scaling factors.
Scaling in 3D is similar to scaling in 2D; only one extra coordinate, z, needs to be considered. With scaling factors sx, sy, and sz about the origin:
$$\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \begin{bmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$
(Figure: scaling of an object with respect to a fixed point.)
Other Transformations
Reflections
Reflection produces the mirror image obtained when a mirror is placed at the required position.
When the mirror is placed in the xy plane, we obtain the coordinates of the image by simply changing the sign of the z coordinate.
The transformation matrix for reflection about the xy plane is given below:
$$RF_z = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
Similarly, the transformation matrix for reflection about the yz plane is:
$$RF_x = \begin{bmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
Similarly, the transformation matrix for reflection about the xz plane is:
$$RF_y = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
Shears
Shearing transformation can be used to modify object shapes.
They are also useful in 3D viewing for obtaining general projection transformations.
Here we use shear parameters a and b.
The shear matrix for the z axis is given below:
$$SH_z = \begin{bmatrix} 1 & 0 & a & 0 \\ 0 & 1 & b & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
Similarly, the shear matrix for the x axis is:
$$SH_x = \begin{bmatrix} 1 & 0 & 0 & 0 \\ a & 1 & 0 & 0 \\ b & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
Similarly, the shear matrix for the y axis is:
$$SH_y = \begin{bmatrix} 1 & a & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & b & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
Viewing Pipeline
Viewing Co-ordinates.
Generating a view of an object is similar to photographing the object.
We can take photograph from any side with any angle & orientation of camera.
Similarly, we can specify the viewing coordinate system in any direction and at any position.
Fig. 5.9: -A right handed viewing coordinate system, with axes Xv, Yv, and Zv, relative to a world-
coordinate scene.
Fig. 5.10: -Viewing scene from different direction with a fixed view-reference point.
Fig. 5.11: - Aligning a viewing system with the world-coordinate axes using a sequence of translate-rotate
transformations.
The figure shows the steps of this transformation sequence.
Projections
Once world-coordinate descriptions of the objects in a scene are converted to viewing coordinates, we
can project the three-dimensional objects onto the two-dimensional view plane.
The process of converting three-dimensional coordinates into a two-dimensional scene is known as projection.
There are two projection methods, namely:
1. Parallel Projection.
2. Perspective Projection.
Let's discuss each one.
Parallel Projections
(Figure: parallel projection of line segment P1P2 onto the view plane at P1′P2′.)
(Figure: orthographic projection of a point (x, y, z) along a projection line perpendicular to the view plane, giving position (x, y) in the Xv–Yv plane.)
(Figure: oblique projection of a point (x, y, z) to position (xp, yp) on the view plane, with projection-line length L and angles α and Φ.)
Perspective Projection
(Figure: perspective projection of line segment P1P2 onto the view plane, with the projection lines converging at the projection reference point.)
(Figure: perspective projection of a point P = (x, y, z) to position (xp, yp, zvp) on the view plane, for a projection reference point at zprp on the zv axis.)
(Figure: (a) a parallelepiped view volume for parallel projection, bounded by the window, front plane, and back plane; (b) a frustum view volume for perspective projection, with its apex at the projection reference point.)
(Figure: view volumes for different positions of the view plane along the view-plane normal N, and the frustum centerline passing through the center of the window.)
With the projection reference point at a general position (xprp, yprp, zprp), the transformation involves a combination of a z-axis shear and a translation:
$$M_{shear} = \begin{bmatrix} 1 & 0 & a & -a\,z_{prp} \\ 0 & 1 & b & -b\,z_{prp} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
where the shear parameters are:
$$a = -\frac{x_{prp} - (x_{wmin} + x_{wmax})/2}{z_{prp}}, \qquad b = -\frac{y_{prp} - (y_{wmin} + y_{wmax})/2}{z_{prp}}$$
Points within the view volume are transformed by this operation as
𝒙′ = 𝒙 + 𝒂(𝒛 − 𝒛𝒑𝒓𝒑 )
𝒚′ = 𝒚 + 𝒃(𝒛 − 𝒛𝒑𝒓𝒑 )
𝒛′ = 𝒛
After the shear we apply a scaling operation:
$$x'' = x'\left(\frac{z_{prp} - z_{vp}}{z_{prp} - z}\right) + x_{prp}\left(\frac{z_{vp} - z}{z_{prp} - z}\right)$$
$$y'' = y'\left(\frac{z_{prp} - z_{vp}}{z_{prp} - z}\right) + y_{prp}\left(\frac{z_{vp} - z}{z_{prp} - z}\right)$$
The homogeneous matrix for this transformation is:
$$M_{scale} = \begin{bmatrix} 1 & 0 & \dfrac{-x_{prp}}{z_{prp} - z_{vp}} & \dfrac{x_{prp}\,z_{vp}}{z_{prp} - z_{vp}} \\ 0 & 1 & \dfrac{-y_{prp}}{z_{prp} - z_{vp}} & \dfrac{y_{prp}\,z_{vp}}{z_{prp} - z_{vp}} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & \dfrac{-1}{z_{prp} - z_{vp}} & \dfrac{z_{prp}}{z_{prp} - z_{vp}} \end{bmatrix}$$
Therefore the general perspective-projection transformation is obtained as:
$$M_{perspective} = M_{scale} \cdot M_{shear}$$
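A sketch assembling these two matrices with NumPy, directly from the formulas above (the function name is illustrative):

    import numpy as np

    def perspective_matrix(xprp, yprp, zprp, zvp, xwmin, xwmax, ywmin, ywmax):
        # Shear parameters a and b, as defined above.
        a = -(xprp - (xwmin + xwmax) / 2) / zprp
        b = -(yprp - (ywmin + ywmax) / 2) / zprp
        m_shear = np.array([[1, 0, a, -a * zprp],
                            [0, 1, b, -b * zprp],
                            [0, 0, 1, 0],
                            [0, 0, 0, 1]], dtype=float)
        dz = zprp - zvp
        m_scale = np.array([[1, 0, -xprp / dz, xprp * zvp / dz],
                            [0, 1, -yprp / dz, yprp * zvp / dz],
                            [0, 0, 1, 0],
                            [0, 0, -1 / dz, zprp / dz]], dtype=float)
        return m_scale @ m_shear      # M_perspective = M_scale . M_shear

Applying the result to a homogeneous point and dividing by the final h component gives the projected coordinates.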
Back-Face Detection
Back-face detection is a simple and fast object-space method.
It identifies the back faces of polygons based on inside-outside tests.
A point (x, y, z) is inside a surface if Ax + By + Cz + D < 0, where A, B, C, and D are the constants of the polygon's plane equation.
We can simplify the test by taking the normal vector N = (A, B, C) of the polygon surface and a vector V in the viewing direction from the eye, as shown in the figure.
Fig. 6.1:- vector V in the viewing direction and back-face normal vector N of a polyhedron.
Then we check the condition: if V · N > 0, the polygon is a back face.
If we convert the object description to projection coordinates and our viewing direction is parallel to zv, then V = (0, 0, Vz) and
V · N = Vz C
so we only need to check the sign of C.
In a right-handed viewing system, V is along the negative zv axis; in that case the polygon is a back face if C < 0.
Also, we cannot see any face for which C = 0.
So, in general, for a right-handed system:
If C ≤ 0, the polygon is a back face.
A similar method can be used for a left-handed system.
In a left-handed system, V is along the positive z direction, and the polygon is a back face if C ≥ 0.
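A minimal sketch of both forms of the test (NumPy assumed; names are illustrative):

    import numpy as np

    def is_back_face(normal, view_dir):
        # General test: back face if V . N > 0.
        return np.dot(view_dir, normal) > 0

    def is_back_face_rh(c):
        # Right-handed system, viewing along -zv: the test reduces to C <= 0.
        return c <= 0

    print(is_back_face(np.array([0, 0, -1]), np.array([0, 0, -1])))  # True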
For a single convex polyhedron, such as a pyramid, we can identify all the back faces by examining the parameter C of each plane.
As long as the scene contains only non-overlapping convex polyhedra, the back-face method works properly.
For other objects, such as the concave polyhedron shown in the figure below, more tests are needed to determine whether faces are back faces.
Fig. 6.2:-view of a concave polyhedron with one face partially hidden by other faces.
Fig. 6.3:- At view-plane position (x, y), surface S1 has the smallest depth from the view plane and so is visible at that position.
We start with a pixel position on the view plane and consider a particular surface of the object.
If we take the orthographic projection of any point (x, y, z) of the surface onto the view plane, we get the two-dimensional coordinates (x, y) at which that point is displayed.
Here we take an (x, y) position on the plane and find the depth of each surface at that position.
We can implement the depth-buffer algorithm in normalized coordinates, so that z values range from 0 at the back clipping plane to zmax at the front clipping plane.
Fig. 6.4:-From position (x,y) on a scan line, the next position across the line has coordinates (x+1,y), and
the position immediately below on the next line has coordinates (x,y-1).
The depth of a surface at any position (x, y) is obtained from its plane equation as z = (−Ax − By − D)/C. Moving along a horizontal line, the next pixel's z value can be calculated by putting x′ = x + 1 into this equation:
$$z' = \frac{-A(x + 1) - By - D}{C} = z - \frac{A}{C}$$
Similarly, for a vertical line, the pixel below the current pixel has y′ = y − 1, so its z value can be calculated as:
$$z' = \frac{-Ax - B(y - 1) - D}{C} = z + \frac{B}{C}$$
If we move along a polygon boundary, this incremental calculation improves performance by eliminating extra work.
Moving top to bottom along the polygon boundary, we get x′ = x − 1/m and y′ = y − 1, so the z value is obtained as:
$$z' = \frac{-A(x - 1/m) - B(y - 1) - D}{C} = z + \frac{A/m + B}{C}$$
Alternatively, we can use the midpoint method to find the z values.
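A sketch of the horizontal incremental update inside one scan line of a depth-buffer pass; the buffer handling here is illustrative, with the buffer initialized to 0 (the back-plane depth) so that larger z means closer to the viewer, as in the convention above:

    def scan_row(y, x_start, x_end, A, B, C, D,
                 depth_buffer, frame_buffer, color):
        # z at (x_start, y) from the plane equation, then z' = z - A/C per step.
        z = (-A * x_start - B * y - D) / C
        for x in range(x_start, x_end + 1):
            if z > depth_buffer[y][x]:        # this surface is closer here
                depth_buffer[y][x] = z
                frame_buffer[y][x] = color
            z -= A / C                        # incremental update along the row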
Light source
When we see any object, we see the light reflected from it. The total reflected light is the sum of the contributions from all light sources plus reflected light from other objects that falls on the object.
Thus a surface that is not directly exposed to a light source may still be visible if a nearby object is illuminated.
Ambient Light
A simple way to model the combination of light reflections from various surfaces is to assume a uniform illumination called ambient light, or background light.
Ambient light has no directional properties; the amount of ambient light incident on all surfaces and objects is constant in every direction.
If we consider ambient light of intensity Ia, with each surface illuminated at intensity Ia, then the resulting reflected light is constant for all surfaces.
Diffuse Reflection
When light of some intensity falls on an object surface and the surface reflects that light in all directions in equal amounts, the resulting reflection is called diffuse reflection.
Ambient-light reflection is an approximation of global diffuse lighting effects.
Diffuse reflections are constant over each surface, independent of our viewing direction.
The amount of reflected light depends on the parameter Kd, the diffuse-reflection coefficient or diffuse reflectivity.
Kd is assigned a value between 0 and 1, depending on the reflecting properties: shiny surfaces reflect more light, so they are assigned larger values, while dull surfaces are assigned smaller values.
If a surface is exposed only to ambient light, we calculate the ambient diffuse reflection as:
$$I_{ambdiff} = K_d I_a$$
where Ia is the intensity of the ambient light falling on the surface.
In practice, objects are usually illuminated by at least one light source, so we now discuss the diffuse-reflection intensity for a point source.
Fig. 6.6:- Radiant energy from a surface area dA in direction Φn relative to the surface normal direction.
As shown, the reflected light intensity does not depend on the viewing direction; for Lambertian reflection, the intensity of the light is the same in all viewing directions.
Even though a perfect reflector distributes light equally in all directions, the brightness of a surface does depend on the orientation of the surface relative to the light source.
As the angle between the surface normal and the incident light direction increases, the light falling on the surface decreases.
Fig. 6.7:- An illuminated area projected perpendicular to the path of the incoming light rays.
If we denote the angle of incidence between the incoming light direction and the surface normal as θ, then the projected area of a surface patch perpendicular to the light direction is proportional to cos θ.
If 𝐼𝑙 is the intensity of the point light source, then the diffuse reflection equation for a point on the
surface can be written as
𝐼𝑙,𝑑𝑖𝑓𝑓 = 𝐾𝑑 𝐼𝑙 𝑐𝑜𝑠𝜃
A surface is illuminated by a point source only if the angle of incidence is in the range 0° to 90°; for other values of θ, the light source is behind the surface.
Fig. 6.8:-Angle of incidence 𝜃 between the unit light-source direction vector L and the unit surface
normal N.
As shown in the figure, if N is the unit normal vector to the surface and L is the unit vector in the direction of the light source, then their dot product gives:
N · L = cos θ
and the diffuse-reflection equation can then be written in vector form as Il,diff = Kd Il (N · L).
Fig. 6.10:- Calculation of vector R by considering projection onto the direction of the normal vector N.
𝑅 + 𝐿 = (2𝑁 ∙ 𝐿)𝑁
𝑅 = (2𝑁 ∙ 𝐿)𝑁 − 𝐿
A somewhat simplified Phong model calculates the halfway vector H and uses the dot product of N and H in place of V and R.
Here H is calculated as follows:
$$H = \frac{L + V}{|L + V|}$$
$$I = K_a I_a + \sum_{i=1}^{n} I_{l_i}\left[K_d\,(N \cdot L_i) + K_s\,(N \cdot H_i)^{n_s}\right]$$
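A sketch evaluating this combined model for a single point light, with unit vectors computed from positions (all names and the single-light restriction are illustrative):

    import numpy as np

    def phong_intensity(point, normal, light_pos, view_pos,
                        Ia, Il, Ka, Kd, Ks, ns):
        n = normal / np.linalg.norm(normal)
        L = light_pos - point
        L = L / np.linalg.norm(L)                 # unit vector toward the light
        V = view_pos - point
        V = V / np.linalg.norm(V)                 # unit vector toward the viewer
        H = (L + V) / np.linalg.norm(L + V)       # halfway vector
        diff = max(np.dot(n, L), 0.0)             # zero if the light is behind
        spec = max(np.dot(n, H), 0.0) ** ns
        return Ka * Ia + Il * (Kd * diff + Ks * spec)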
Properties of Light
Light is an electromagnetic wave. Visible light occupies a narrow band of the electromagnetic spectrum, roughly from 400 nm to 700 nm; light in other bands is not visible to the human eye.