Introduction To Computer Graphics: Xiaoyu Zhang
Technically, it's about the production, manipulation, and display of images using computers. Practically, it's about movies, games, art, science, training, advertising, design, and more.
Imaging = representing 2D images
Modeling = representing 3D objects
Rendering = constructing 2D images from 3D models
Animation = simulating changes over time
2D Graphics
Sprites in games: Images are built by overlaying characters and objects on a background
High quality and artistry
Big budgets
Complicated models and high quality rendering
Video Games
Focus on interactivity
Cost-effective solution
Highly tuned for performance
Drive the commodity graphics hardware technology
Medical Imaging
Emphasize precision and correctness
Focus on presentation and interpretation of data
Construction of models from acquired data
Scientific Visualization
Broad introduction to Computer Graphics: algorithms, software, hardware
2D Graphics: drawing lines and curves, clipping, transformations
3D Graphics: viewing, transformations, lighting, texture mapping
3D Modeling: describing volumes and surfaces and drawing them effectively
Programmable pipeline, shaders
OpenGL
Other interesting topics: ray tracing, animation, ...
Cal State San Marcos
Prerequisites
Software Infrastructure
Provides an API for drawing objects specified in 3D. Included as part of Windows; available for Linux and other platforms.
Supports mouse and keyboard input. Only popup menus; no widgets such as buttons, scroll bars, etc.
You are welcome to use other interface toolkits, for example GLUI, SDL, FLTK, Qt, etc. See the class web page under resources.
References
The definitive references. Version 1.1 is available online; the newest version (edition 5) can be purchased.
Introduction
Graphics System Raster Image Frame buffer Light & Color Graphics pipeline and hardware
[Diagram: a graphics system with input devices, an image stored in the frame buffer (FB), and an output device.]
Raster Images
Frame buffer stores an image as an array (the raster) of picture elements (pixels)
What is a pixel?
A pixel is not... a box, a disk, or a teeny tiny little light
A pixel is a point... it has no dimension, it occupies no area, it can have a coordinate
Digital Images
Computers work with discrete pieces of information How do we digitize a continuous image? Sampling!
Break the continuous space into small areas, pixels
Use a single value for each pixel, the pixel value (intensity, color, ...)
No longer continuous in space or intensity
An ideal image can be viewed as a function, I(x, y), that gives an intensity for any given coordinate (x, y). We could plot this function as a height field. This plot would be lowest at dark points in the image and highest at bright points.
Sampling an Image
An image is actually a sample of the function at the pixel locations. Pixels are stored in memory as arrays of numbers representing the intensity of the underlying function. Insufficient sampling causes aliasing.
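Sampling a continuous image function at pixel centers can be sketched in a few lines. This is an illustrative Python sketch; the function `ideal_image` is a made-up example, not from the slides:

```python
import math

def ideal_image(x, y):
    # A continuous "ideal image" I(x, y): a single bright blob centered at (2, 2).
    return math.exp(-((x - 2) ** 2 + (y - 2) ** 2))

def sample(image_fn, width, height):
    # Sampling: evaluate the continuous function only at the pixel centers (i, j).
    return [[image_fn(i, j) for i in range(width)] for j in range(height)]

raster = sample(ideal_image, 5, 5)
# The pixel at the blob's center holds the maximum intensity.
assert raster[2][2] == 1.0
```

A raster with too few samples per image feature would miss the blob entirely, which is the aliasing problem mentioned above.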
Pixel Grids
Pixel Centers: Address pixels by integer coordinates (i, j) Pixel Center Grid: Set of lines passing through pixel centers Pixel Domains: Rectangular Semi-open areas surrounding each pixel center
P_{i,j} = [i - 1/2, i + 1/2) × [j - 1/2, j + 1/2)
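The semi-open pixel domains give a simple rule for finding which pixel a continuous point falls in; a minimal Python sketch (the function name `pixel_of` is mine):

```python
import math

def pixel_of(x, y):
    # A point in the semi-open domain [i - 1/2, i + 1/2) x [j - 1/2, j + 1/2)
    # belongs to pixel (i, j), i.e. i = floor(x + 1/2), j = floor(y + 1/2).
    return (math.floor(x + 0.5), math.floor(y + 0.5))

assert pixel_of(3.2, 7.9) == (3, 8)
assert pixel_of(2.5, 2.5) == (3, 3)  # the lower boundary belongs to the next pixel
```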
Calligraphic (Vector) Display Devices draw polygons and line segments directly: e.g. Plotters
Called vector images for historical reasons Postscript (PDF) is the most famous vector image format
Rendering requires rasterization algorithms to quickly convert geometric primitives into pixels.
Raster Cathode Ray Tubes (CRTs) are the most common display device
Capable of high resolution. Good color fidelity. High contrast (100:1). High update rates.
Electron beam scanned in regular pattern of horizontal scan lines. At each pixel in scan line, intensity of electron beam modified by the pixel value in the frame buffer.
Color CRT
Color CRTs have three different colors of phosphor and three independent electron guns. Shadow Masks allow each gun to irradiate only one color of phosphor.
LCD
Liquid Crystal Displays (LCDs) are becoming more popular and reasonably priced
Flat panels
Flicker free
Decreased viewing angle
Random access to LCD cells: electrical signals control the polarization of the LCD cells, turning the light passing through the panel on or off using polarizing filters
Sub-pixel color filter masks used for RGB
Works as follows:
Raster images are stored in the frame buffer. Frame buffers are composed of VRAM (video RAM). VRAM is dual-ported memory capable of
Random access Simultaneous high-speed serial output: built-in serial shift register can output entire scan line at high rate synchronized to pixel clock.
Sampling Issues
Can only store a finite number of pixels Resolution: Pixels per inch, or dpi (dots per inch from printers) Storage space goes up with square of resolution
Can only store a finite range of intensity values Typically referred to as depth - number of bits per pixel
Also concerned with the minimum and maximum intensity, the dynamic range. Both film and digital cameras have highly limited dynamic range. The big question is: what is enough resolution and enough depth?
Each pixel requires at least 3 bytes. One byte for each primary color: RGB. Each pixel can be one of 2^24 = 16 million colors Frame buffer size (1280 * 1024): 1280*1024*3 = 3.75 MB
Each pixel uses one byte
Each byte is an index into a color map
Color-map animation: the color map can be changed on the fly
Each pixel may be one of 2^24 colors, but only 256 colors can be displayed at a time
Frame buffer size (1280 * 1024): 1280*1024 = 1.25 MB
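The two frame buffer size figures can be checked with a few lines of arithmetic; Python is used only as a calculator here, and the slides' figures assume binary megabytes:

```python
def framebuffer_bytes(width, height, bytes_per_pixel):
    # Total memory needed to store one full raster image.
    return width * height * bytes_per_pixel

MB = 2 ** 20  # binary megabyte, matching the 3.75 MB / 1.25 MB figures

true_color = framebuffer_bytes(1280, 1024, 3) / MB  # 24-bit RGB, 3 bytes/pixel
indexed = framebuffer_bytes(1280, 1024, 1) / MB     # 8-bit index into a color map

assert true_color == 3.75
assert indexed == 1.25
```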
Display synchronized with CRT sweep. The frame buffer is updated while it's being scanned. Generally, updates are visible.
Double Buffering
Adds a second frame buffer Swaps during vertical blanking Updates are invisible
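The swap logic can be sketched as follows; `DoubleBuffer` is a hypothetical class for illustration, not a real graphics API:

```python
class DoubleBuffer:
    # Hypothetical illustration: draw into the back buffer while the front
    # buffer is being scanned out, then swap during vertical blanking.
    def __init__(self, width, height):
        self.front = [[0] * width for _ in range(height)]
        self.back = [[0] * width for _ in range(height)]

    def draw(self, i, j, value):
        self.back[j][i] = value  # all drawing goes to the invisible back buffer

    def swap(self):
        # Performed during vertical blanking, so no half-drawn frame is shown.
        self.front, self.back = self.back, self.front

fb = DoubleBuffer(4, 4)
fb.draw(1, 2, 255)
assert fb.front[2][1] == 0  # the update is invisible before the swap
fb.swap()
assert fb.front[2][1] == 255  # and visible after it
```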
Objects
Viewer (camera)
Light source(s)
Attributes that govern how light interacts with the materials in the scene
Note the independence of the objects, the viewer, and the light source(s)
Light is an electromagnetic wave in the visible spectrum. The frequency of light determines its color.
Three-Color Theory
Color receptors
There are three types of cones, referred to as S, M, and L. They are roughly equivalent to blue, green, and red sensors, respectively. Their peak sensitivities are located at approximately 430nm, 560nm, and 610nm for the "average" observer
Trichromacy
Experimentally, it is possible to match almost all colors using only three primary sources, the principle of trichromacy. In practical terms, this means that if you show someone the right amount of each primary, they will perceive the right color. This was how experimentalists knew there were three types of cones.
Many colors can be represented as a mixture of R, G, B: M = rR + gG + bB (additive matching). This gives a color description system: two people who agree on R, G, B need only supply (r, g, b) to describe a color.
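Additive matching M = rR + gG + bB can be sketched numerically; the unit primaries below are an illustrative choice, picked so the match weights read off directly:

```python
def match(r, g, b):
    # Additive matching: mix scalar amounts (r, g, b) of three primaries.
    R, G, B = (1, 0, 0), (0, 1, 0), (0, 0, 1)  # illustrative unit primaries
    return tuple(r * Ri + g * Gi + b * Bi for Ri, Gi, Bi in zip(R, G, B))

# Two observers who agree on the primaries need only exchange (r, g, b):
assert match(1.0, 1.0, 0.0) == (1.0, 1.0, 0.0)  # red + green gives yellow
```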
Rendering
Rendering is the conversion of a 3D scene into a 2D raster image:
The scene is composed of models in three-dimensional space. Models are composed of primitives supported by the rendering system. Models are entered by hand or created by a program. The image is drawn on a monitor, printed on a laser printer, or written to a raster in memory or a file. These output options require us to consider device independence.
Rendering pipeline
Software
Rendering Primitives
Models composed of, or converted to a large number of geometric primitives. The only rendering primitives typically supported in hardware are
Points (single pixels)
Line Segments
Polygons (often restricted to convex polygons)
Piecewise polynomial (spline) curves
Piecewise polynomial (spline) surfaces
Implicit surfaces (quadrics, ...)
Other...
A software renderer may support modeling primitives directly, or may convert them into polygonal or linear approximations for hardware rendering.
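Converting a curved modeling primitive into line segments the hardware can draw might look like this sketch for a quadratic Bezier (spline) curve; the fixed subdivision count is an illustrative choice:

```python
def bezier_point(p0, p1, p2, t):
    # de Casteljau evaluation of a quadratic Bezier curve at parameter t.
    a = tuple((1 - t) * u + t * v for u, v in zip(p0, p1))
    b = tuple((1 - t) * u + t * v for u, v in zip(p1, p2))
    return tuple((1 - t) * u + t * v for u, v in zip(a, b))

def to_polyline(p0, p1, p2, segments=4):
    # Approximate the curve by a fixed number of line segments.
    return [bezier_point(p0, p1, p2, i / segments) for i in range(segments + 1)]

poly = to_polyline((0, 0), (1, 2), (2, 0))
assert poly[0] == (0.0, 0.0) and poly[-1] == (2.0, 0.0)  # endpoints interpolated
assert poly[2] == (1.0, 1.0)  # apex of this symmetric curve
```

A real renderer would typically subdivide adaptively based on curvature and screen size rather than with a fixed count.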
Graphics Pipeline
Classically, "model" to "scene" to "image" rendering is broken into finer steps, called the graphics pipeline. Parts of the pipeline are often implemented in graphics hardware to achieve interactive speeds.
[Diagram: the graphics pipeline runs from the application program to the display.]
Vertex Processing
Much of the work in the pipeline is in converting object representations from one coordinate system to another
Every change of coordinates is equivalent to a matrix transformation Vertex processor also computes vertex colors
Modeling transformations
We start with 3-D models defined in their own model space (MCS)
Modeling transformations orient models within a common coordinate frame called world space (WCS)
All objects, light sources, and the viewer live in world space
Transformations are represented as matrices
Viewing Transformation
Another change of coordinate systems
Maps points from world space into eye space (VCS)
Viewing position is transformed to the origin
Viewing direction is oriented along some axis
A viewing volume is defined for clipping
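Every change of coordinates is equivalent to a matrix transformation; a minimal sketch in Python (plain lists rather than a matrix library) shows the viewing position being carried to the origin:

```python
def translate(tx, ty, tz):
    # 4x4 homogeneous translation matrix, row-major.
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def apply(m, p):
    # Multiply a 4x4 matrix by a homogeneous point (x, y, z, 1).
    return tuple(sum(m[r][c] * p[c] for c in range(4)) for r in range(4))

# If the viewer sits at (0, 0, 5) in world space, the viewing transformation
# translates every point by (0, 0, -5), so the eye lands on the origin.
eye = (0, 0, 5, 1)
assert apply(translate(0, 0, -5), eye) == (0, 0, 0, 1)
```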
Lighting
Lighting is computed at each vertex to determine the color of each point in the scene. Lighting depends on
Light sources
Surface properties
Projection
The projection step maps all 3-D objects onto the 2D/3D screen space (NDCS).
This is greatly simplified by the fact that the viewing transformation maps the eye to the origin and the viewing direction to the -z axis. There are parallel projections and perspective projections.
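With the eye at the origin looking down the -z axis, perspective projection reduces to similar triangles; a sketch, where the projection-plane distance `d` is an illustrative parameter:

```python
def perspective_project(x, y, z, d=1.0):
    # Similar triangles: projecting onto the plane z = -d gives
    # x' = -d * x / z and y' = -d * y / z (z is negative in front of the eye).
    assert z < 0, "point must be in front of the eye"
    return (-d * x / z, -d * y / z)

# A point twice as far away projects to half the size:
assert perspective_project(2.0, 2.0, -2.0) == (1.0, 1.0)
assert perspective_project(2.0, 2.0, -4.0) == (0.5, 0.5)
```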
Primitive Assembly
Vertices must be collected into geometric objects before clipping and rasterization can take place
Clipping
The picture on the right shows the view volume visible through a perspective projection window, called the viewing frustum. It is bounded by near and far clipping planes and four side planes. Anything outside the frustum is not shown on the projected image and doesn't need to be rendered. The process of removing invisible objects from rendering is called clipping.
Rasterization
Also called scan conversion, rasterization converts primitives into fragments in SCS. Fragments are potential pixels: they have a location in the frame buffer, plus color and depth attributes. Vertex attributes are interpolated over objects by the rasterizer. Further operations may be applied to fragments.
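A classic scan-conversion algorithm is the DDA line, which steps one pixel along the major axis and rounds the interpolated minor coordinate; a minimal sketch:

```python
def dda_line(x0, y0, x1, y1):
    # DDA scan conversion: one unit step along the major axis per pixel,
    # interpolating and rounding the other coordinate.
    steps = max(abs(x1 - x0), abs(y1 - y0))
    if steps == 0:
        return [(x0, y0)]
    dx, dy = (x1 - x0) / steps, (y1 - y0) / steps
    return [(int(x0 + i * dx + 0.5), int(y0 + i * dy + 0.5))
            for i in range(steps + 1)]

assert dda_line(0, 0, 4, 2) == [(0, 0), (1, 1), (2, 1), (3, 2), (4, 2)]
```

The int(v + 0.5) rounding assumes non-negative coordinates; it keeps the sketch short. Production rasterizers use incremental integer algorithms such as Bresenham's.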
Fragment Processing
Fragments are processed to determine the color of the corresponding pixel in the frame buffer Colors can be determined by texture mapping or other fragment processing Fragments may be blocked by other fragments closer to the camera
Hidden-surface removal
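The standard technique is the depth (z) buffer: keep, per pixel, only the nearest fragment seen so far. A sketch, with fragments represented as (i, j, depth, color) tuples of my own choosing:

```python
def zbuffer_resolve(fragments, width, height):
    # Depth-buffer hidden-surface removal: a fragment wins a pixel only if
    # it is closer to the camera than every fragment seen there before.
    depth = [[float("inf")] * width for _ in range(height)]
    color = [[None] * width for _ in range(height)]
    for i, j, z, c in fragments:
        if z < depth[j][i]:
            depth[j][i] = z
            color[j][i] = c
    return color

# Two fragments cover pixel (0, 0); the nearer red one blocks the blue one.
frags = [(0, 0, 5.0, "blue"), (0, 0, 2.0, "red")]
assert zbuffer_resolve(frags, 2, 2)[0][0] == "red"
```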
Programmer sees the graphics system through a software interface: the Application Programmer Interface (API)
API Contents
Objects
Viewer
Light source(s)
Materials
Input from devices such as mouse and keyboard
Capabilities of system
[Diagram: the application on the CPU sends 3D vertices and other information to the GPU; results can be rendered to texture.]
Note:
Vertex processor does all transform and lighting
Pipe widths vary
Can now program vertex processor!
Can now program pixel processor!
Summary
Read textbook chapter 1. Try to download GLUT and run the included examples. Next time: 2D Graphics