A Framework of an Obstacle Avoidance Robot for the Visually Impaired People

Sudipto Chaki, Milon Biswas, et al.
Abstract. Human beings perceive visual information with the help of their eyes. However, people who are visually impaired cannot collect this visual information, so they have to rely on others while walking or moving around. We have therefore proposed a framework that solves this issue effectively and gives blind people the scope to move around freely. Some researchers have already addressed this issue and proposed walker-based systems to assist visually impaired people, but most of these walkers are either heavy or large: the weight keeps them from being user-friendly, and the size makes them impossible to use in narrow spaces. In this regard, we have developed a lightweight robot that can assist blind people in finding an obstacle-free way while moving. Our robot is integrated with a Microsoft Kinect camera, and a Raspberry Pi serves as the computational unit that processes the captured images. Using the depth sensor of the Kinect device, the robot collects images of its surroundings and sends them to the Raspberry Pi unit, which analyzes them to detect any approaching obstacle. We equip our robot with an obstacle avoidance algorithm so that, if an obstacle is detected within a certain range, the robot automatically moves either right or left depending on the intensity values of the images.
1 Introduction
Navigation without vision is difficult. Navigating through spaces is quite a challenge for blind users, as they largely have to rely on their hands and ears for spatial perception. These people suffer from serious visual impairments that prevent them from traveling independently. Several technologies have emerged to help visually impaired people, and in this regard we have built a prototype robot that can securely guide them. The self-driving car concept is now a very popular application area in autonomous vehicle management systems. A laser-light-reflection-based obstacle avoidance vehicle management system is proposed by Nikolaos Baras et al. [1]. The authors use a Raspberry Pi as the processing unit, integrated with their obstacle avoidance algorithm, but the system depends largely on Wi-Fi for navigation, which may fail when no Wi-Fi is available in a real-world environment. Nowadays, infrared light is also being used in automatic vehicle management systems [2]: the authors propose a prototype autonomous vehicle for indoor environments, but they have not deployed it outdoors. The future of our transportation will depend largely on autonomous vehicle management systems. The key contributions of our proposed framework are highlighted in the following:
– We develop a prototype obstacle avoidance robot that is capable of providing a safe passage by avoiding the obstacles in its way.
– Our prototype robot is integrated with an obstacle avoidance algorithm that performs well with low time complexity.
– Our proposed framework can be beneficial for visually impaired people if our robot is integrated with a smart cane.
The rest of this paper is organized as follows. Section 2 describes the research background of our proposed work. Section 3 presents the robot-based framework of this paper. Section 4 analyzes our experimental results. Finally, Section 5 adds the conclusion and future work.
2 Related Research
Some previous research treats collision avoidance for a robot as a high-level task. Takizawa et al. [3] propose a novel technique for mobile robots showing that the robot can be controlled through a real-time monitoring system. The Microsoft Kinect [4] has a depth sensor and is equipped with an infrared-light imaging technique that helps identify any type of 3-D object in the environment, but that system relies on a tactile device that must be held by its user, which may not be user-friendly for visually impaired people. To remove this problem, we integrate a Kinect device with a robot that paves an obstacle-free way without any physical involvement. For real-time processing we use a Raspberry Pi unit that can process the captured images faster than the systems in [5,6]. Apart from the Kinect device, a radio-frequency-identification-based object detection technique [7] has been applied for blind people so that they may move freely in indoor environments using tags attached to objects; to identify obstacles while moving, blind users need to wear a glove and detect obstacles by touching the objects. A novel approach for surveillance [8] that integrates computer vision and neural network techniques is proposed by Budiharto.
A recent advancement in obstacle avoidance robots is proposed by Vakada Naveen et al. [9,10], who built a prototype ultrasonic-sensor-based electronic travel aid for avoiding obstacles while moving through the environment. However, due to the large size of this prototype device, it is not suitable for lightweight vision-less robots. SICK-sensor-based [11] object detection and identification for ground robots is proposed by Yan Peng et al.; to reduce noise in the sensed images, they apply a filtering technique that improves performance by lowering the noise level. Over the past few years, the Raspberry Pi [12,13] has become a very popular computational processing unit, especially in miniature robots. In our work, we use it to run our novel obstacle avoidance algorithm and to send signals that make the robot avoid upcoming obstacles by turning its wheels in the direction the algorithm provides.
An experimental autonomous ground vehicle system [14,15] is used to test a unique obstacle avoidance approach. In comparison to prior techniques, the suggested method provides a novel solution to the problem and has numerous advantages: it is simple to tune and takes into account the robot's range of vision as well as non-holonomic constraints. The Bug algorithms [16], the Potential Field techniques, and the Vector Field Histogram approach, all active sensor-based methods [17], are discussed for obstacle avoidance and path planning. Unlike most existing hybrid navigation systems [18], in which deliberative layers play a dominant role and reactive layers are merely simple executors, the proposed architecture focuses on the navigation problems and pursues a more independent reactive layer that can guarantee convergence without the assistance of a deliberative layer. The papers [19,20] describe a hybrid approach (Roaming Trails) that combines prior knowledge of the environment with local perceptions to complete assigned tasks efficiently and safely, ensuring that the robot never becomes stuck in deadlocks even when operating in a partially unknown dynamic environment. Another obstacle avoidance robot uses sonar range sensors as its sensory element [21]. The authors of [22] introduce an improved artificial potential field-based regression search (improved APF-based RS) technique that can quickly find a globally sub-optimal or optimal path [23] in a completely known environment [24,25] without local minima or oscillations. The use of computer vision [26,27] and image-sequence algorithms to detect and avoid obstacles for autonomous land vehicle (ALV) navigation on outdoor roads has also been proposed.
In our work, we use the Microsoft Kinect sensor to obtain depth information of the scene for obstacle classification. The advantage of our framework over existing products is its ability to detect obstacles within the specified range faster, thanks to our obstacle avoidance algorithm. Besides, our robot automatically moves in a direction where no obstacle is present, which leads the user along an obstacle-free path. It will also be much cheaper than the other devices currently available on the market; the lower price and the ability to detect obstacles will be greatly helpful for visually impaired individuals.
3 Proposed Framework
This device starts up when power is supplied to the processor and the Microsoft Kinect camera. After the Raspberry Pi, which is used as the processing device, boots up, a program inside it runs automatically; this is done with a bash script, a text file containing a series of commands. The camera senses the surrounding environment and, based on the position of the obstacle, the processor sends signals to the motor driver, which turns the wheels attached to the device according to those signals. A 12 V battery supplies power to the Raspberry Pi unit and the Microsoft Kinect sensor, and a voltage regulator regulates the voltage level. As the Raspberry Pi operates on 5 V, a buck module is used to supply the exact amount of voltage, while the Kinect is powered through a booster module.

Once the position of the obstacle has been detected, the Raspberry Pi sends signals to the motor driver, which amplifies the current level. Two polarities are applied to the motor driver, and depending on these polarities the wheels are rotated. To provide a clear understanding of the proposed system, we subdivide it into two main parts: the object detection phase and the obstacle avoidance phase.
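To make the two phases concrete, the sketch below shows one plausible way to structure the main loop. It assumes the libfreenect Python bindings (freenect) for grabbing Kinect depth frames; the threshold value and the steering rule are illustrative assumptions, not the exact parameters of our implementation.

```python
import freenect

NEAR = 600  # illustrative raw-depth threshold; smaller raw values taken as nearer

def detect_obstacle(depth):
    # Phase 1 (object detection): flag an obstacle when the central third of
    # the depth map contains pixels nearer than the threshold.
    w = depth.shape[1]
    return depth[:, w // 3 : 2 * w // 3].min() < NEAR

def avoid(depth):
    # Phase 2 (obstacle avoidance): turn toward the image half with the
    # larger mean depth, i.e. toward more free space.
    w = depth.shape[1]
    left, right = depth[:, : w // 2].mean(), depth[:, w // 2 :].mean()
    return "left" if left > right else "right"

while True:
    depth, _ = freenect.sync_get_depth()  # raw 11-bit depth map and a timestamp
    print(avoid(depth) if detect_obstacle(depth) else "forward")
```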
Formation Generation Layer: The formation generation layer uses the positions of suspected obstacles to decide where the robot should travel in order to produce a specific configuration. The terms "target" and "place" are used throughout the article to refer to an obstacle-free path and to a point where the robot should travel based on the position of the obstruction. The generalized format of the formation matrix is given in Eq. 1. The robot measures the distance d from the upcoming obstacles in its way in order to move in a certain direction (i.e., either left or right, depending on the condition arising from the flag bit). The distance measurement is expressed in Eq. 2, and an outline of our robot's distance measurement is shown in Fig. 2(a). The location and target are calculated using a distance cost table; the creation of this table based on the formation shape is depicted in Fig. 2(b). While moving in a certain direction, the robot takes its decision based on the distances from the cost table (i.e., the formation matrix of distances).

Fig. 2. (a) Measurement of the distance from the obstacle; (b) measurement of the angle during the movement of the robot.
$$
F(M) =
\begin{bmatrix}
P(1) \\ P(2) \\ \vdots \\ P(M)
\end{bmatrix}
=
\begin{bmatrix}
T(1) & d(1) & \theta(1) \\
T(2) & d(2) & \theta(2) \\
\vdots & \vdots & \vdots \\
T(M) & d(M) & \theta(M)
\end{bmatrix}
\qquad (1)
$$

$$
d_{i,j} = \sqrt{d_j^2 + d_{i,l}^2 - 2\,d_j\,d_{i,l}\cos(\Delta\theta)}
\qquad (2)
$$

Notation: $P(i)$ – the $i$-th row of the formation matrix, $T(i)$ – target, $d(i)$ – distance, $\theta(i)$ – angle.
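As a quick check of Eq. 2, the law-of-cosines form transcribes directly into a few lines of Python; the numeric values below are purely illustrative.

```python
import math

def obstacle_distance(d_j, d_il, delta_theta):
    # Eq. 2: law of cosines between two range readings separated by delta_theta.
    return math.sqrt(d_j**2 + d_il**2 - 2 * d_j * d_il * math.cos(delta_theta))

# Illustrative values: readings of 1.0 m and 1.2 m separated by 30 degrees.
print(round(obstacle_distance(1.0, 1.2, math.radians(30)), 3))  # 0.601
```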
Erosion: The basic idea is to erode the regions of foreground pixels, which are typically the white pixels. Erosion takes two inputs: the first is the image to be eroded, and the second is a set of coordinate points called the structuring element, also known as the kernel. The structuring element determines the precise effect of the erosion on the input image.
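A minimal OpenCV sketch of this erosion step follows; the file name and the 5×5 rectangular kernel are illustrative assumptions rather than the exact structuring element we use.

```python
import cv2
import numpy as np

binary = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
kernel = np.ones((5, 5), np.uint8)  # structuring element (the "kernel")
eroded = cv2.erode(binary, kernel, iterations=1)  # shrinks the white foreground
```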
Edge Detection: To identify an object coming towards the robot as an obstacle, edge detection is needed. Most of the shape information of an image is enclosed in its edges. In this regard, we use the Canny edge detector and then enhance the areas of the image that contain edges. The sharpness of the image is increased in several steps, sketched in code after the following list.
– Noise Reduction: Edge detection is very susceptible to noise, so we reduce the noise level in the image using proper filtering techniques.
– Intensity Gradient of the Image: After noise removal, the noise-free image is filtered with a Sobel kernel to get the first derivative in the horizontal and vertical directions.
– Non-Maximum Suppression: The previous steps reduce the noise level, but some unwanted pixels that do not form the necessary edges may still remain, so pixels that are not local maxima along the gradient direction are suppressed.
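Assuming OpenCV, the whole pipeline above reduces to two calls: a Gaussian blur for noise reduction, and cv2.Canny, which internally computes the Sobel gradients, applies non-maximum suppression, and performs hysteresis thresholding. The kernel size and the thresholds (50, 150) are illustrative.

```python
import cv2

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
blurred = cv2.GaussianBlur(gray, (5, 5), 0)  # step 1: noise reduction
edges = cv2.Canny(blurred, 50, 150)          # steps 2-3: gradient, NMS, hysteresis
```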
Contour Detection: A contour is a curve joining all the continuous points (along a boundary) that share the same intensity value. Contours are a useful tool in the process of object detection. The contour-finding function takes three arguments: the input image, the contour retrieval mode, and the contour approximation method; it then returns the detected contours.
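A hedged sketch with OpenCV 4.x, where findContours returns the contours together with their hierarchy; the retrieval mode and approximation method shown are common choices, not necessarily the ones used in our system.

```python
import cv2

edges = cv2.Canny(cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE), 50, 150)
contours, hierarchy = cv2.findContours(
    edges,                    # input image (a binary edge map)
    cv2.RETR_EXTERNAL,        # contour retrieval mode
    cv2.CHAIN_APPROX_SIMPLE,  # contour approximation method
)
```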
Binning: This is a way to group several continuous values into a smaller number of
bins. Binning combines the information of adjacent pixels into a resulting information
bin. This operation leads to a reduced resolution.
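Binning can be written as a reshape-and-mean in NumPy; the 2×2 bin size below is an illustrative choice, and the frame is cropped so its dimensions are divisible by the bin size.

```python
import numpy as np

def bin_image(img, k=2):
    # Crop to a multiple of k, then average each k-by-k block into one bin.
    h, w = img.shape
    cropped = img[: h - h % k, : w - w % k]
    return cropped.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

frame = np.random.randint(0, 2048, (480, 640))  # stand-in for a Kinect depth map
print(bin_image(frame).shape)  # (240, 320): reduced resolution
```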
Depending on the position of the detected obstacle, the robot turns either right or left. The connection between the robot and the Raspberry Pi is made through the GPIO (general-purpose input-output) pins, which are arranged in a (2×20) fashion and provide the interface between the Raspberry Pi and the motor driver. These pins act as switches with a high level of 3.3 V and a low level of 0 V, connecting the Raspberry Pi to the navigation robot.
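A minimal sketch of this switching, assuming the RPi.GPIO library; the BCM pin numbers (17 and 27) and the polarity convention are illustrative, not the wiring used on our robot.

```python
import RPi.GPIO as GPIO

LEFT_PIN, RIGHT_PIN = 17, 27  # hypothetical motor-driver input pins (BCM numbering)
GPIO.setmode(GPIO.BCM)
GPIO.setup([LEFT_PIN, RIGHT_PIN], GPIO.OUT)

def turn(direction):
    # Opposite polarities on the two inputs make the motor driver rotate
    # the wheels to the left or to the right.
    GPIO.output(LEFT_PIN, GPIO.HIGH if direction == "left" else GPIO.LOW)
    GPIO.output(RIGHT_PIN, GPIO.HIGH if direction == "right" else GPIO.LOW)

turn("left")
GPIO.cleanup()
```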
Objects approaching the robot are classified as obstacles based on the intensity values described in the previous section. Fig. 3(a) shows an object in the path of our moving robot. The Kinect device captures the image and, after analyzing the intensity values shown in Fig. 3(b), our system determines that the object is at a safe distance and generates a signal for the robot to move forward. Intensity levels from 0 to 7, stored as NumPy (a Python library) data, are used to identify the distance from the obstacle, so in this safe state our obstacle avoidance algorithm sends a signal to the robot to move forward.
Fig. 3. (a) An obstacle exists at a safe distance, and (b) NumPy library data for the safe
state.
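One plausible way to obtain the eight intensity levels from an 11-bit raw Kinect depth map is a simple bit-shift. Following the "darker when nearer" description above, the sketch treats low levels as near pixels; the safe-level threshold is an illustrative assumption.

```python
import numpy as np

def intensity_levels(depth):
    # Collapse 11-bit raw depth values (0..2047) into eight coarse levels (0..7).
    return (depth.astype(np.uint16) >> 8).astype(np.uint8)

def is_safe(levels, safe_level=2):
    # Low levels mark near pixels; the frame is safe while none falls below
    # the threshold.
    return levels.min() >= safe_level

frame = np.random.randint(0, 2048, (480, 640))  # stand-in for a Kinect depth frame
print("forward" if is_safe(intensity_levels(frame)) else "avoid")
```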
When the obstacle is closer than the safe distance, as shown in Fig. 4(a), the robot rotates in a certain direction based on the intensity levels shown in Fig. 4(b). The intensity levels become darker than in the previous safe state because the obstacle is nearer to our robot.

Fig. 4. (a) An obstacle exists at a nearer distance, and (b) NumPy library data for the nearer state.
When the object is at a collision state, as shown in Fig. 5(a) (i.e., the obstacle is imminent), the nearest pixels of the obstacle become even darker than in the previous scenarios, as shown in Fig. 5(b). This indicates that a collision may happen if the robot continues in the same direction. To avoid this collision, the algorithm sends a collision alert signal to the robot through the Raspberry Pi unit, and the motor driver takes the signal and steers the robot onto an obstacle-free way by turning its wheels.
Fig. 5. (a) An obstacle exists at a collision state, and (b) NumPy library data for the collision
state.
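Putting the safe, nearer, and collision states together, one hedged reading of the decision logic is sketched below; the level thresholds and the rule of steering toward the freer half of the image are illustrative assumptions.

```python
import numpy as np

def decide(levels):
    # levels: 2-D array of 0-7 intensity levels, with 0 as the nearest pixels.
    nearest = int(levels.min())
    if nearest >= 2:  # safe state: keep moving forward
        return "forward"
    w = levels.shape[1]
    left, right = levels[:, : w // 2].mean(), levels[:, w // 2 :].mean()
    side = "left" if left > right else "right"  # steer toward more free space
    if nearest == 1:  # nearer state: rotate away from the obstacle
        return side
    return "collision alert: " + side  # collision state: imminent obstacle

levels = (np.random.randint(0, 2048, (480, 640)) >> 8).astype(np.uint8)
print(decide(levels))
```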
We have tested our robot several times with different hurdles to verify that it identifies both moving and stationary objects in its surroundings. With the help of the Kinect sensor and the integrated obstacle avoidance algorithm, our robot has successfully detected obstacles and avoided them by turning its wheels. In our experimental environment, the robot moves at an average speed of 0.43 m/s while avoiding obstacles successfully, whereas the average speed of the LIDAR-based robot proposed in [1] was 0.31 m/s. Our robot achieves this higher speed thanks to the lightweight design combined with our proposed obstacle avoidance algorithm.
We compare the performance of our robot with other existing obstacle avoidance robots in Table 1 in terms of five parameters. To move fast and avoid upcoming obstacles effectively, we use a Raspberry Pi unit as our main processing unit. In addition, our proposed obstacle avoidance algorithm analyzes the robot's surroundings through the MKC-360 unit more efficiently than the previously proposed frameworks. The algorithm is capable of identifying the distance to the obstacle and then deciding on the movement based on the intensity values (i.e., the higher the intensity level, the higher the probability of collision).
Table 1. A comparative study of available systems and our proposed framework.
References
1. Baras, N., Nantzios, G., Ziouzios, D., Dasygenis, M. (2019, May). Autonomous obstacle avoidance vehicle using LIDAR and an embedded system. In 2019 8th International Conference on Modern Circuits and Systems Technologies (MOCAST) (pp. 1-4). IEEE.
2. Run, R. S., Xiao, Z. Y. (2018). Indoor autonomous vehicle navigation—a feasibility study based on infrared technology. Applied System Innovation, 1(1), 4.
3. Takizawa, H., Yamaguchi, S., Aoyagi, M., Ezaki, N., Mizuno, S. (2013, June). Kinect cane: Object recognition aids for the visually impaired. In 2013 6th International Conference on Human System Interactions (HSI) (pp. 473-478). IEEE.
4. Khan, A., Moideen, F., Lopez, J., Khoo, W. L., Zhu, Z. (2012, July). KinDectect: Kinect detecting objects. In International Conference on Computers for Handicapped Persons (pp. 588-595). Springer, Berlin, Heidelberg.
5. Filipe, V., Fernandes, F., Fernandes, H., Sousa, A., Paredes, H., Barroso, J. (2012). Blind navigation support system based on Microsoft Kinect. Procedia Computer Science, 14, 94-101.
6. Creo, L. M. V., Dacanay, G. M., Jarque, L. C. P., Umali, C. J. P., Tolentino, E. R. E. (2021, June). Controlling the Bomb Disposal Robot using Microsoft Kinect Sensor. In 2021 International Conference on Communication, Control and Information Sciences (ICCISc) (Vol. 1, pp. 1-6). IEEE.
7. Ganz, A., Gandhi, S. R., Schafer, J., Singh, T., Puleo, E., Mullett, G., Wilson, C. (2011, August). PERCEPT: Indoor navigation for the blind and visually impaired. In 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (pp. 856-859). IEEE.
8. Budiharto, W. (2015). Intelligent surveillance robot with obstacle avoidance capabilities using neural network. Computational Intelligence and Neuroscience, 2015.
9. Naveen, V., Aasish, C., Kavya, M., Vidhyalakshmi, M., Sailaja, K. (2021). Autonomous Obstacle Avoidance Robot Using Regression. In Proceedings of International Conference on Computational Intelligence and Data Engineering (pp. 1-13). Springer, Singapore.
10. Prasad, A., Sharma, B., Vanualailai, J., Kumar, S. A. (2020). A geometric approach to target convergence and obstacle avoidance of a nonstandard tractor-trailer robot. International Journal of Robust and Nonlinear Control, 30(13), 4924-4943.
11. Peng, Y., Qu, D., Zhong, Y., Xie, S., Luo, J., Gu, J. (2015, August). The obstacle detection and obstacle avoidance algorithm based on 2-D LIDAR. In 2015 IEEE International Conference on Information and Automation (pp. 1648-1653). IEEE.
12. Deac, M. A., Al-doori, R. W. Y., Negru, M., Dǎnescu, R. (2018, September). Miniature autonomous vehicle development on Raspberry Pi. In 2018 IEEE 14th International Conference on Intelligent Computer Communication and Processing (ICCP) (pp. 229-236). IEEE.
13. Sunehra, D., Jhansi, B., Sneha, R. (2021, April). Smart Robotic Personal Assistant Vehicle Using Raspberry Pi and Zero UI Technology. In 2021 6th International Conference for Convergence in Technology (I2CT) (pp. 1-6). IEEE.
14. Sezer, V., Gokasan, M. (2012). A novel obstacle avoidance algorithm: "Follow the Gap Method". Robotics and Autonomous Systems, 60(9), 1123-1134.
15. Xie, Y., Zhang, X., Meng, W., Zheng, S., Jiang, L., Meng, J., Wang, S. (2021). Coupled fractional-order sliding mode control and obstacle avoidance of a four-wheeled steerable mobile robot. ISA Transactions, 108, 282-294.
16. Zhang, W., Cheng, H., Hao, L., Li, X., Liu, M., Gao, X. (2021). An obstacle avoidance algorithm for robot manipulators based on decision-making force. Robotics and Computer-Integrated Manufacturing, 71, 102114.
17. Oroko, J., Ikua, B. (2012). Obstacle avoidance and path planning schemes for autonomous navigation of a mobile robot: a review. Sustainable Research and Innovation Proceedings, 4.
18. Zhu, Y., Zhang, T., Song, J., Li, X. (2012). A new hybrid navigation algorithm for mobile robots in environments with incomplete knowledge. Knowledge-Based Systems, 27, 302-313.
19. Kumari, C. L. (2012). Building algorithm for obstacle detection and avoidance system for wheeled mobile robot. Global Journal of Research in Engineering.
20. Tian, S., Li, Y., Kang, Y., Xia, J. (2021). Multi-robot path planning in wireless sensor networks based on jump mechanism PSO and safety gap obstacle avoidance. Future Generation Computer Systems, 118, 37-47.
21. Yufka, A., Parlaktuna, O. (2009, May). Performance comparison of bug algorithms for mobile robots. In Proceedings of the 5th International Advanced Technologies Symposium, Karabuk, Turkey (pp. 13-15).
22. Li, G., Yamashita, A., Asama, H., Tamura, Y. (2012, August). An efficient improved artificial potential field based regression search method for robot path planning. In 2012 IEEE International Conference on Mechatronics and Automation (pp. 1227-1232). IEEE.
23. Chen, K. H., Tsai, W. H. (2000). Vision-based obstacle detection and avoidance for autonomous land vehicle navigation in outdoor roads. Automation in Construction, 10(1), 1-25.
24. Biswas, M., Whaiduzzaman, M. D. (2018). Efficient mobile cloud computing through computation offloading. Int. J. Adv. Technol, 10(2).
25. Akib, A. A. S., Ferdous, M. F., Biswas, M., Khondokar, H. M. (2019, May). Artificial Intelligence Humanoid BONGO Robot in Bangladesh. In 2019 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT) (pp. 1-6). IEEE.
26. Biswas, M., Rahman, A., Kaiser, M. S., Al Mamun, S., Ebne Mizan, K. S., Islam, M. S., Mahmud, M. (2021, September). Indoor Navigation Support System for Patients with Neurodegenerative Diseases. In International Conference on Brain Informatics (pp. 411-422). Springer, Cham.
27. Biswas, M., Kaiser, M. S., Mahmud, M., Al Mamun, S., Hossain, M., Rahman, M. A. (2021, September). An XAI Based Autism Detection: The Context Behind the Detection. In International Conference on Brain Informatics (pp. 448-459). Springer, Cham.