Difference Between Computer Vision, Image Processing and Computer Graphics
Computer vision is a field of computer science that aims at enabling computers to process and identify images and videos in the same way that human vision does.
It aims to mimic the human visual system.
Its objective is to build artificial systems that can extract information from images, i.e., to make computers understand images and videos.
Difference between computer vision, image processing and computer graphics
In Computer Vision (image analysis, image interpretation, scene understanding), the
input is an image and the output is an interpretation of the scene. Image analysis is concerned with
making quantitative measurements from an image to give a description of the image.
In Image Processing (image recovery, reconstruction, filtering, compression,
visualization), the input is an image and the output is also an image.
In Computer Graphics, the input is any real-world scene and the output is an image.
Computer vision makes a model from images (analysis), whereas computer graphics takes a
model as an input and converts it to an image (synthesis).
The image formation process can be mathematically represented as:
Image = PSF ∗ Object function + Noise
The object function is the object or scene that is being imaged.
The point spread function (PSF) is the impulse response of an imaging system when the inputs
and outputs are intensities of light.
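A minimal numerical sketch of this model, assuming an illustrative point object, a 3×3 box-blur PSF, and Gaussian noise (all of these choices are assumptions, not from the notes):

```python
import numpy as np

# Sketch of the image formation model: Image = PSF * Object function + Noise.
rng = np.random.default_rng(0)

# "Object function": a 32x32 scene containing a single bright point.
obj = np.zeros((32, 32))
obj[16, 16] = 1.0

# PSF: a normalized 3x3 box blur standing in for the system's impulse response.
psf = np.ones((3, 3)) / 9.0

# 2-D convolution of the object with the PSF (same-size output, zero padding).
pad = np.pad(obj, 1)
blurred = np.zeros_like(obj)
for i in range(obj.shape[0]):
    for j in range(obj.shape[1]):
        blurred[i, j] = np.sum(pad[i:i + 3, j:j + 3] * psf[::-1, ::-1])

# Additive noise completes the model.
image = blurred + rng.normal(0.0, 0.01, obj.shape)
```

The point source spreads into a 3×3 patch, which is exactly what the PSF describes: how the system images a single point of light.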
Signal processing is a discipline in electrical engineering and in mathematics that deals with the
analysis and processing of analog and digital signals, including storing, filtering, and other
operations on signals.
Image processing is the field that deals with the type of signals for which the input is an image
and the output is also an image. As its name suggests, it deals with the processing of images.
Image processing basically includes three steps: importing the image via image acquisition tools, analysing and manipulating the image, and producing an output (an altered image or an analysis report).
There are two methods of image processing, namely analog image processing and digital image processing.
Analog image processing is done on analog signals, i.e., processing of two-dimensional analog
signals. Images are manipulated by electrical means, by varying the electrical signal. Examples
include the television image and hard copies such as printouts and photographs.
Digital image processing involves developing a digital system that performs operations on a digital
image; it helps in the manipulation of digital images by using computers. The three general
phases that all types of data have to undergo while using the digital technique are pre-processing,
enhancement and display, and information extraction.
Fundamental steps in image processing:
1. Image Sensors: sense the intensity, amplitude, co-ordinates and other features of the
images and pass the result to the image processing hardware. This stage includes the problem
domain.
2. Image Processing Hardware: the dedicated hardware that is used to process the data
obtained from the image sensors. It passes the result to a general-purpose
computer.
3. Computer: the general-purpose computer used in the image processing system, of the same
kind used by us in daily life.
4. Image Processing Software: the software that includes all the mechanisms and algorithms
used in the image processing system.
5. Mass Storage: stores the pixels of the images during the processing.
6. Hard Copy Device: once the image is processed, it is stored in the hard copy device, which
can be a pen drive or any external ROM device.
7. Image Display: It includes the monitor or display screen that displays the processed images.
8. Network: is the connection of all the above elements of the image processing system.
The image enhancement process involves different techniques that are used to improve the visual
quality or appearance of an image. Image enhancement is the improvement of image quality
without knowledge of the source of degradation.
If the source of degradation is known, then the process of image quality improvement is called
“image restoration.”
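As a simple illustration of enhancement without any degradation model, a linear contrast stretch could look like the sketch below (the image values are illustrative assumptions):

```python
import numpy as np

# Minimal enhancement sketch: linear contrast stretching. It improves the
# appearance of a low-contrast image without modeling any degradation.
img = np.array([[60.0, 80.0],
                [100.0, 120.0]])  # illustrative low-contrast image

lo, hi = img.min(), img.max()
stretched = (img - lo) / (hi - lo) * 255.0  # map [60, 120] onto [0, 255]
```

The darkest pixel becomes 0 and the brightest 255, spreading the narrow input range over the full display range.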
Image filtering is a process to modify the pixels of an image based on some function of a local
neighborhood of the pixel (neighborhood operation).
• Sharpening filters: These are used to highlight fine details in an image. Sharpening can be done
by differentiation; each pixel is replaced by its second-order derivative or Laplacian.
• Unsharp masking and high-boost filters: This can be done by subtracting a blurred version of
an image from the image itself.
• Median filters: These are used to remove salt-and-pepper noise in the image. The figure below
shows one example of image filtering.
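A minimal sketch of a 3×3 median filter removing salt-and-pepper pixels (the test image and the noise positions are illustrative assumptions):

```python
import numpy as np

# A flat 100-valued image corrupted by one "salt" and one "pepper" pixel.
img = np.full((5, 5), 100.0)
img[1, 1] = 255.0   # salt
img[3, 3] = 0.0     # pepper

# 3x3 median filter: each pixel is replaced by the median of its neighborhood.
pad = np.pad(img, 1, mode='edge')
out = np.zeros_like(img)
for i in range(img.shape[0]):
    for j in range(img.shape[1]):
        out[i, j] = np.median(pad[i:i + 3, j:j + 3])
```

Because each 3×3 window contains at most one outlier among nine values, the median ignores it, and the filtered image is uniformly 100 again.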
Spatial domain filtering: the neighborhood of a point (x, y) can be defined by using a square area
centered at (x, y), and this square/rectangular area is called a “mask” or “filter.”
The Fourier transform is an image processing tool which is used to decompose an image into its sine
and cosine components. Important applications of the Fourier transform include image analysis, image
filtering, image reconstruction and image compression.
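A small sketch of this decomposition with NumPy's 2-D FFT, assuming an illustrative image made of a single horizontal cosine:

```python
import numpy as np

N = 8
x = np.arange(N)
# Image whose rows are one period of a cosine across the width.
img = np.cos(2 * np.pi * x / N) * np.ones((N, 1))

F = np.fft.fft2(img)   # frequency-domain representation
mag = np.abs(F)
# Energy concentrates at horizontal frequencies +1 and -1; all other
# coefficients vanish, and the inverse FFT recovers the image exactly.
```

Because the image is a pure cosine, its entire content lives in two Fourier coefficients, illustrating both decomposition and (via the inverse transform) reconstruction.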
Digital images are formed from an optical image. The optical image has two primary components: the
lighting component, which corresponds to the lighting condition of a scene (the incident
illumination), and the reflectance component, which corresponds to the way the objects in the image
reflect light. The optical image is the product of the two: f(x, y) = i(x, y) · r(x, y).
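The product model above can be sketched numerically (the illumination and reflectance values below are illustrative assumptions):

```python
import numpy as np

# Illumination-reflectance model: f(x, y) = i(x, y) * r(x, y).
illumination = np.array([[100.0, 100.0],
                         [ 50.0,  50.0]])   # lighting component i(x, y)
reflectance = np.array([[0.8, 0.2],
                        [0.8, 0.2]])        # reflectance component r(x, y)

image = illumination * reflectance          # formed optical image f(x, y)
```

The same surface (constant reflectance down each column) yields different pixel values under different lighting, which is why the two components are modeled separately.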