PROJECT REPORT

Object Tracking For Autonomous Vehicles

Submitted for the Course

ECE 2008 – ROBOTICS AND AUTOMATION


SLOT: B2

SUBMITTED BY:
MARAM SAI KALYAN 18BEC0156

J SANTHAN REDDY 18BEC0016

V PRANAY TEJA 18BEC0188

SIDDAM SAATHWIK SAGAR 18BEC0543

SUBMITTED TO:
PROF. SANJAY R
CERTIFICATE

This is to certify that the project work entitled “OBJECT TRACKING FOR
AUTONOMOUS VEHICLES”, being submitted by a group of four candidates for
ROBOTICS AND AUTOMATION [ECE-2008], is a record of bona fide work done
under my supervision. The contents of this project work, in full or in part,
have neither been taken from any other source nor been submitted for any
other CAL course.

Place: Vellore
Date: 28th October 2020

Signature of students:
Maram Sai Kalyan – 18BEC0156
J Santhan Reddy – 18BEC0016
V Pranay Teja – 18BEC0188
Siddam Saathwik Sagar – 18BEC0543
ACKNOWLEDGEMENT

We would like to thank our professor, Prof. Sanjay R, to whom we are highly
indebted, for giving us the opportunity to carry out this project and for his
help and guidance throughout.
We would like to express our gratitude to VIT University and the School of
Electronics and Communication Engineering (SENSE).
We would also like to extend our gratitude to the Dean of SENSE and to this
prestigious University for supporting us and giving us the opportunity to
carry out our studies here.
Object Tracking For Autonomous Vehicles
Abstract
The autonomous vehicle is an engineering technology that can improve transportation safety,
alleviate traffic congestion, and reduce carbon emissions. In this project we have developed a
pedestrian detection and avoidance system for deployment in vehicles that, when fully
implemented, could prevent some of the numerous vehicle-pedestrian accidents. The system
is vision-based, and the challenge has been to design vision algorithms that are robust enough
to reliably detect and warn of any pedestrians in a highly cluttered urban environment, yet
fast enough to run online in “smart” vehicles.

Introduction
In recent years, autonomous/self-driving cars have drawn much interest as a topic of research
for both academia and industry. For a car to be truly autonomous, it must make sense of the
environment through which it is driving. The autonomous car must be able both to localize
itself in an environment and to identify and keep track of objects (moving and stationary). The
car gathers information about the environment using exteroceptive sensors such as LiDAR,
cameras, inertial sensors, and GPS. The information from these sensors can be fused to
localize the car and track objects in its environment, allowing it to travel successfully from
one point to another.
We developed an algorithm that detects objects without LiDAR, since LiDAR is still in trial
stages and is not fully permitted in our country owing to rules and regulations concerning the
safety and security of its people. Our algorithm detects pedestrians and vehicles in real time
and can be deployed in a car.
The process of path planning and autonomous vehicle guidance depends on three things:
localization, mapping, and tracking objects. Localization is the process of identifying the
position of the autonomous vehicle in the environment. Mapping means being able to make
sense of the environment. Tracking of moving objects involves being able to identify the
moving objects and follow them during navigation.

Technologies Used
1) Computer Vision
Computer vision is a field of study concerned with how computers see and understand
digital images and videos. It involves sensing a visual stimulus, making sense of what has
been seen, and extracting complex information that can be used for other machine-learning
activities.
Applications of Computer Vision
There are many practical applications of computer vision:
● Autonomous Vehicles — This is one of the most important applications of computer
vision: self-driving cars need to gather information about their surroundings
to decide how to behave.
● Facial Recognition — This is also a very important application of computer vision,
where electronic devices use facial-recognition technology to validate the identity
of the user.
● Image Search and Object Recognition — We can now search for objects within an image
using image search. A good example is Google Lens, where we can search for a
particular object in an image by taking a photo of it; the computer-vision
algorithm then searches through a catalogue of images and extracts
information from the image.
● Robotics — Most robotic machines, often in manufacturing, need to see their
surroundings to perform the task at hand. In manufacturing, machines may be used to
inspect assembly tolerances by “looking at” them.
2) OpenCV (Open Source Computer Vision Library: http://opencv.org) is an open-source
BSD-licensed library that includes several hundred computer vision algorithms.
3) Haar Cascade Classifiers: We implement our use case using the Haar cascade
classifier, an effective object-detection approach proposed by Paul Viola and
Michael Jones in their 2001 paper, “Rapid Object Detection using a Boosted Cascade
of Simple Features”.
4) Software Used: Python IDLE

System model methodology


Step 1

We resize our image, import cv2 and numpy, and use the CascadeClassifier function of
OpenCV to point to the location where we stored the XML file, haarcascade_fullbody.xml
in our case. We downloaded the XML file locally and used the path on our machine.
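A minimal sketch of this step is given below; the file names are placeholders for wherever the XML file and test frame were stored on our machine.

import cv2
import numpy as np

# Point the classifier to the locally stored XML file
# (the path is a placeholder for the location used on our machine).
body_classifier = cv2.CascadeClassifier('haarcascade_fullbody.xml')

# Read a frame and resize it before further processing.
image = cv2.imread('test_frame.jpg')
image = cv2.resize(image, (640, 480))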

Step 2
The second step is to load the image and convert it to grayscale. Before showing the code,
it is worth explaining why we convert the image to grayscale here.

Generally, the images we see are made up of RGB channels (red, green, blue). When OpenCV
reads an RGB image, it stores it in BGR (blue, green, red) channel order. For the purposes
of detection we convert this BGR image to a grayscale image, because a single grayscale
channel is easier to process and computationally less intensive than three colour channels.
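The conversion itself is a single OpenCV call:

# OpenCV stores the loaded image in BGR order; convert it to a
# single-channel grayscale image for faster, simpler detection.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)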

Step 3
After converting the image from RGB to grayscale, we try to locate the features of the
objects of interest (vehicles, in this case). Let us see how we could implement that in code.

In this piece of code we take the edge_classifier, an object loaded with
haarcascade_car.xml, and call its built-in function detectMultiScale.

This function finds the features/locations of objects in the new image: it applies all the
features from the edge_classifier object to the new image and returns the detected
locations.
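A sketch of this step follows; the variable names mirror the description above, while the parameter values are illustrative assumptions rather than the exact values used in our tests.

# edge_classifier is a CascadeClassifier loaded with haarcascade_car.xml.
edge_classifier = cv2.CascadeClassifier('haarcascade_car.xml')

# detectMultiScale scans the grayscale frame at multiple scales and
# returns a list of bounding boxes (x, y, w, h) for detected vehicles.
cars = edge_classifier.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)

for (x, y, w, h) in cars:
    # Draw a red box around each detected vehicle (BGR colour order).
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)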

Step 4

We do the same as for car detection, with small changes: we use haarcascade_fullbody.xml
to identify the features of the pedestrian's body.
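A sketch of the pedestrian step, reusing the grayscale frame from Step 2 (parameter values again illustrative):

# body_classifier was loaded with haarcascade_fullbody.xml in Step 1.
bodies = body_classifier.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)

for (x, y, w, h) in bodies:
    # Draw a yellow box around each detected pedestrian (BGR colour order).
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 255), 2)

# Display the annotated frame.
cv2.imshow('Detections', image)
cv2.waitKey(0)
cv2.destroyAllWindows()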

Performance Analysis
Haar cascades are not well suited to mobile and lower-end processor-oriented devices, as
the detector relies on floating-point computation.
• High-quality imaging – the quality of the captured image should be high; otherwise noise
may appear in the output.
• Because we do not have many night-view datasets, we could not test many night-view
videos for vehicle tracking and detection.
Compared to neural-network-based methods, our model produces results faster and requires
less computation.

Results and discussion


1. Detection of Car
• Original Picture of Video

• Grayscale Picture of Video

• Detected Picture of Video (Red Box for Vehicle)


2. Detection of Human
• Original Picture of Video
• Grayscale Picture of Video
• Detected Picture of Video (Yellow Box for Human)

3. Simultaneous Detection of both Pedestrians and Vehicles


• Original Picture of Video

• Grayscale Picture of Video


• Output picture of Video

Although the Haar cascade classifier is quite helpful, there are a few drawbacks to this approach.
● The most challenging part is accurately specifying the values of the scaleFactor and
minNeighbors parameters of the detectMultiScale function. It is quite common to run
into scenarios where both parameters have to be tuned on an image-by-image basis,
which is a major drawback for an image-detection use case (see the sketch after this list).
● The scaleFactor controls the image pyramid, which in turn is used to detect the object
at various scales of an image. If the scaleFactor is too large, detection becomes less
accurate and we may miss objects at scales that fall between the pyramid layers.
● However, if we decrease the scaleFactor, many more pyramid layers are generated,
which makes detection slower and increases false positives.
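To illustrate how sensitive the detector is to these two parameters, the calls below use hypothetical values, not the ones from our tests.

# Larger scaleFactor: fewer pyramid layers, faster but coarser search.
coarse = edge_classifier.detectMultiScale(gray, scaleFactor=1.4, minNeighbors=3)

# Smaller scaleFactor: many more pyramid layers, denser but slower search.
dense = edge_classifier.detectMultiScale(gray, scaleFactor=1.05, minNeighbors=3)

# Raising minNeighbors demands more overlapping hits per detection,
# which suppresses false positives at the risk of missing objects.
strict = edge_classifier.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=6)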
Conclusion
Our aim is not only to develop the algorithm for this project but also to deploy it in cars
for real-time use. Many of the research papers and articles we read describe algorithms that
require a high-configuration PC, mainly with a powerful Graphics Processing Unit (GPU). Our
model is simple, robust, and built with little human intervention. We can use a Raspberry Pi,
a credit-card-sized single-board computer, inside our vehicles, together with a Pi camera (a
portable, lightweight camera that supports the Raspberry Pi). We wrote the algorithm in
Python 3 (an interpreted, high-level, general-purpose programming language), which is
currently a very popular language according to the PYPL index (Popularity of Programming
Language, 31.02%) and is widely used for advanced technologies such as artificial
intelligence, machine learning, and data science.
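As a sketch of how such a real-time deployment could look, the loop below reads frames from a camera exposed as a standard video device (on a Raspberry Pi the Pi camera is commonly available at index 0); the device index, frame size, and detection parameters are assumptions.

import cv2

car_classifier = cv2.CascadeClassifier('haarcascade_car.xml')
body_classifier = cv2.CascadeClassifier('haarcascade_fullbody.xml')

cap = cv2.VideoCapture(0)  # assumed camera index; the Pi camera often appears here

while True:
    ret, frame = cap.read()
    if not ret:
        break
    frame = cv2.resize(frame, (640, 480))
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Red boxes for vehicles, yellow boxes for pedestrians (BGR colours).
    for (x, y, w, h) in car_classifier.detectMultiScale(gray, 1.1, 3):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    for (x, y, w, h) in body_classifier.detectMultiScale(gray, 1.1, 3):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 255), 2)

    cv2.imshow('Object tracking', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()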

References
1. Zhilu Chen, "Computer Vision and Machine Learning for Autonomous Vehicles",
Worcester Polytechnic Institute, August 2017.
2. Mohana and H. V. Ravish Aradhay, "Object Detection and Tracking using Deep Learning
and Artificial Intelligence for Video Surveillance Applications", (IJACSA)
International Journal of Advanced Computer Science and Applications, Vol. 10, No.
12, 2019.
3. Aryal, Milan, "Object Detection, Classification, and Tracking for Autonomous
Vehicle" (2018). Masters Theses, 912. https://scholarworks.gvsu.edu/theses/912
4. https://towardsdatascience.com/computer-vision-detecting-objects-using-haar-cascade-classifier-4585472829a9
5. https://docs.opencv.org/3.4/db/d28/tutorial_cascade_classifier.html
