GNSS and Map Based
Navigation of Drones
-Dr. Bhaskar Anand
Assistant Professor, KLH University Hyderabad
PhD, IIT Hyderabad
Introduction
• GNSS (waypoint) and map-based navigation are crucial for UAVs (Unmanned Aerial Vehicles) because they enable autonomous flight
• A pre-defined path is followed through a series of designated points ("waypoints") on a digital map, allowing the drone to navigate complex environments without constant manual control. This makes it ideal for tasks such as aerial photography, surveying, inspections, and delivery, where precise flight paths are required.
UAV Navigation System
• A typical UAV navigation system uses information from various sensors to estimate the position, velocity, and orientation of the UAV.
• In addition, support systems perform the important task of tracking or avoiding (static or dynamic) obstacles
• Increased levels of autonomy and flight stabilization require a robust and efficient navigation system
Typical UAV Navigation system
Subsystems of UAV navigation
• Navigation systems can be split into three main subsystems,
• Pose estimation
• Obstacle detection and avoidance
• Visual servoing (VS)
• Pose estimation (localization): estimate the UAV’s orientation and position in 2D and 3D (approach: visual odometry and simultaneous localization and mapping (SLAM) based)
• Obstacle detection and avoidance: make the appropriate decisions to avoid obstacles and collision zones (approach: stereo/monocular camera or LiDAR based)
• Visual servoing: use visual data to maintain the stability of the UAV and its flying maneuvers (approach: visual image-based)
Pose Estimation
• Pose estimation includes estimating the
position and orientation of UAVs during
motion based on data obtained from
several sensors, including GPS, IMU,
vision, laser, and ultrasonic sensors
• Information obtained from the various
sensors can be used separately or fused.
Position estimation is a fundamental
component of the navigation and
mapping processes.
Pose estimation: GPS
• The GPS, also known as a satellite-based
navigation system (SNS), is considered one
of the best methods for providing 3D
positions to unmanned ground vehicles
(UGVs), UAVs, and autonomous underwater
vehicles (AUVs)
• GPS is commonly used to determine a
UAV’s location during localization.
• To increase accuracy: Differential GPS
(DGPS) and real-time kinematic (RTK)
corrections
• Buildings, forests, and mountains can
significantly reduce satellite visibility,
especially in urban environments
• GPS is rendered ineffective in the absence
of satellite signals, such as when flying
indoors.
GPS-Aided Systems
• Stand-alone GPS can also suffer errors due to
poor reception or jamming of satellite signals,
resulting in loss of navigational data.
• UAVs require a robust positioning system, for
which various approaches are used
• GPS-aided systems : The gathered GPS data are
fused with data from other sensors
• This multisensory fusion can consist of two or
more sensors. One of the most popular
configurations is the GPS/INS approach, where
the data from the INS and GPS are merged to
compensate for the errors generated by both
sensors and increase the accuracy of localization
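The GPS/INS fusion described above is typically implemented with a Kalman filter. Below is a minimal 1D sketch in which INS acceleration drives the prediction step and each GPS fix corrects the accumulated drift; all numeric values (time step, noise levels) are illustrative assumptions, not parameters of any real flight stack.

```python
import numpy as np

def fuse_gps_ins(gps_positions, ins_accels, dt=0.1, gps_noise=2.0, accel_noise=0.5):
    """Toy 1D GPS/INS fusion with a linear Kalman filter.

    State x = [position, velocity]; the INS acceleration drives the
    prediction step, and each GPS fix corrects the accumulated drift.
    Noise values are illustrative, not tuned for any real sensor.
    """
    x = np.zeros(2)                          # state estimate [pos, vel]
    P = np.eye(2) * 10.0                     # state covariance
    F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity motion model
    B = np.array([0.5 * dt**2, dt])          # acceleration (control) input
    H = np.array([[1.0, 0.0]])               # GPS observes position only
    Q = np.eye(2) * accel_noise**2           # process noise
    R = np.array([[gps_noise**2]])           # GPS measurement noise

    estimates = []
    for z, a in zip(gps_positions, ins_accels):
        # Predict: propagate the state with the INS acceleration
        x = F @ x + B * a
        P = F @ P @ F.T + Q
        # Update: correct with the GPS position fix
        y = z - H @ x                        # innovation
        S = H @ P @ H.T + R                  # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
    return estimates
```

A real GPS/INS stack would use a 3D state, sensor biases, and asynchronous measurement rates; the two-step predict/update structure is the same.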
Vision-Based Systems: VSLAM
• Visual pose estimation methods are based on
information provided by the visual sensors of
cameras.
• Different types of visual information are used in these
methods, such as horizon detection, landmark
tracking, and edge detection
• A vision system can also be classified by its structure
as monocular, binocular, trinocular, or
omnidirectional
• Approaches: Visual simultaneous localization and
mapping (VSLAM) and visual odometry (VO).
• VSLAM algorithms aim at constructing a consistent
map of the environment and simultaneously
estimating the position of the UAV within the map.
Vision-Based Systems: Visual odometry (VO)
• Visual odometry (VO) is the process of estimating the pose
and motion of the camera from an image sequence
• Using consumer-grade cameras rather than expensive
sensors or systems, VO is an inexpensive and simple
approach to estimate the location of robots and vehicles
(Gonzalez et al., 2012).
• This method provides incremental online estimation of
vehicle position by matching successive image frames in a
sequence and accumulating the relative poses between
frames (Li et al., 2019)
• Contrary to SLAM technology, VO does not maintain a map
of the environment, allowing robot localization and
navigation without recording observed key points.
• VO technology is characterized by a good balance between
implementation complexity, reliability, and cost. Because no
external signal or reference is required, VO works effectively in
environments where the GPS signal is weak or non-existent.
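The core of VO is "accumulating the relative poses between frames," i.e., chaining frame-to-frame motion estimates into a global trajectory. A minimal sketch in 2D (real VO recovers each relative motion from image matching; here the motions are given directly for illustration):

```python
import numpy as np

def se2(dx, dy, dtheta):
    """Homogeneous 2D transform for a relative motion (dx, dy, dtheta)."""
    c, s = np.cos(dtheta), np.sin(dtheta)
    return np.array([[c, -s, dx],
                     [s,  c, dy],
                     [0,  0, 1.0]])

def accumulate_poses(relative_motions):
    """Chain frame-to-frame motions into a global trajectory (the VO core idea)."""
    pose = np.eye(3)            # world-from-camera pose, starting at the origin
    trajectory = [(0.0, 0.0)]
    for dx, dy, dtheta in relative_motions:
        pose = pose @ se2(dx, dy, dtheta)   # compose the relative pose
        trajectory.append((pose[0, 2], pose[1, 2]))
    return trajectory
```

Because each relative motion carries a small error, the composed pose drifts over time, which is exactly why VO without a map (unlike SLAM) cannot correct itself with loop closures.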
Obstacle detection and Avoidance
• Autonomous navigation systems must detect and avoid
obstacles. Furthermore, this process is considered
challenging, particularly for vision-based systems.
• Obstacle detection and avoidance have been solved using
different approaches in vision-based navigation systems
• A 3D model of the obstacle within the environment can be
constructed
• Stereo cameras have been introduced to estimate the
proximity of obstacles from disparity between the two views.
• By analyzing the disparity images and viewing angle, the
system determines the size and position of the obstacles
• In addition, this method calculates the relationship
between the size of a detected obstacle and its distance
from the UAV.
• LiDAR can directly provide depth of obstacle along with
size and shape
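The disparity-based relations above are Z = f·B/d for depth and a proportional scaling for metric obstacle size. A minimal sketch, with the focal length, baseline, and pixel measurements as purely illustrative values:

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth from stereo disparity: Z = f * B / d (pinhole model)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def obstacle_width(width_px, depth_m, focal_px):
    """Metric obstacle size from its pixel extent and estimated depth."""
    return width_px * depth_m / focal_px
```

For example, with a 700 px focal length and a 12 cm baseline, a 35 px disparity corresponds to an obstacle about 2.4 m away, and a 100 px-wide detection at that depth spans roughly 0.34 m.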
UAV Navigation: GNSS and Vision
• GNSS and vision are both commonly used to
navigate UAVs, but both of them have their
own advantages and disadvantages
• GPS-based navigation systems:
• global coverage, accuracy, and low cost
• suitable for outdoor navigation
• GPS receivers are widely available and relatively
inexpensive, and they can provide sub-meter
accuracy in the open sky.
• However, GPS is vulnerable to interference and
reliant on satellite signals
• a clear view of the sky is required for GPS to
function, which may not be possible in certain
environments (for example, indoors, in urban
areas, and in areas devoid of GPS signals)
UAV Navigation: GNSS and Vision
• Vision-based navigation systems have several advantages, including Robustness to interference,
high resolution, and low cost
• When GPS signals are blocked, a vision-based system can estimate the UAV’s position by using
visual information from its surroundings.
• High-resolution images captured by cameras are useful for detailed localization and mapping of the
environment
• However, vision-based systems typically have a limited range, and the UAV must remain close to
the target in order to achieve an accurate location
• Affected by lighting conditions such as glare and shadows
• In certain environments (such as featureless terrain, snow, and deserts), vision-based methods
cannot be used because there are no distinctive visual features in the environment.
• GPS is typically used for outdoor navigation, whereas vision-based sensors are used indoors or in GPS-denied
situations, where GPS signals are blocked or unavailable.
• Furthermore, UAV navigation can be improved through the combination of vision-based methods
and GPS
Drone Waypoint GPS Navigation
• Waypoint GPS navigation
allows a drone to fly on its
own, with its flight
destinations (waypoints)
preplanned and configured in
the drone's remote-control
navigation software.
• This instructs the drone where
to fly, at what height, and at
what speed; it can also be
configured to hover at each
waypoint.
Drone Waypoint GPS Navigation
• Waypoint: reference point on the way
• 3 important things we can do with GPS waypoints
• 1) From anywhere in the world, whether 20 meters, 100 kilometers,
or 1,000 kilometers away, we can return in a straight line to an
exact stored location, such as our starting point.
• 2) Waypoints can be transferred onto a computer’s digital mapping
software.
• 3) We can share our waypoints with other people.
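Returning in a straight line to a stored waypoint (point 1 above) reduces to computing the great-circle distance and initial bearing between the current GPS fix and the stored one. A sketch using the haversine formula and a mean-Earth-radius assumption:

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (m) and initial bearing (deg) between two GPS fixes."""
    R = 6371000.0  # mean Earth radius in metres (spherical-Earth assumption)
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    # Haversine formula for the central angle
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))
    # Initial bearing, measured clockwise from true north
    y = math.sin(dlmb) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlmb)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return dist, bearing
```

A flight controller would feed the bearing to its heading controller and the distance to its position controller; one degree of latitude is roughly 111 km under this model.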
Drone Waypoint GPS Navigation
• With Waypoint GPS navigation, a
site can be surveyed at the correct
locations.
• The drone can fly directly to each
specified location while the pilot on
the ground concentrates on
operating the camera to take aerial
photographs or video.
• The drone takes the shortest route
to each waypoint, saving battery and
filming time.
Key components of map-based UAV
navigation
• Map creation:
• A digital map of the area is created beforehand, either using aerial imagery,
ground-based surveys, or a combination of both, which can include features
like buildings, roads, terrain contours, and other identifiable landmarks.
• Visual perception:
• The drone's camera captures images of its surroundings in real-time, which
are then processed by the onboard computer to identify relevant features and
landmarks.
• Localization:
• By comparing the detected features in the live camera feed to the pre-loaded
map, the drone's navigation system determines its current position and
orientation within the mapped area.
Key components of map-based UAV
navigation
• Path planning:
• Based on the calculated position and the desired destination, the
navigation system generates an optimal flight path, considering
obstacles and terrain constraints.
• Obstacle avoidance:
• In addition to path planning, the drone may use real-time obstacle
detection to adjust its flight path and avoid collisions with
unexpected objects.
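Path planning over a gridded map, as described above, is commonly implemented with A* search. A minimal sketch on a 4-connected occupancy grid (0 = free, 1 = obstacle), where the grid, start, and goal are purely illustrative:

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 2D occupancy grid (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start)]       # (f = g + h, g, cell)
    came_from = {start: None}
    g_cost = {start: 0}
    closed = set()
    while open_set:
        _, g, cur = heapq.heappop(open_set)
        if cur in closed:
            continue
        closed.add(cur)
        if cur == goal:                     # reconstruct the path backwards
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and nb not in closed:
                ng = g + 1
                if ng < g_cost.get(nb, float("inf")):
                    g_cost[nb] = ng
                    came_from[nb] = cur
                    heapq.heappush(open_set, (ng + h(nb), ng, nb))
    return None                             # no path exists
```

In a real planner the grid comes from the mapping subsystem, cells carry traversal costs rather than just 0/1, and the plan is replanned whenever obstacle detection updates the map.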
Map based navigation
• The map-based system allows the UAV to
navigate with detour behavior and
movement planning capabilities based on
the predefined map and laid-out
environment
• Maps can vary in their level of detailing,
from a 3D model of an entire environment
to a diagram of the interconnection of
elements of an environment
• Map-oriented navigation systems can be
divided into three categories
• map-independent
• map-dependent, and
• map-building-based.
Map-independent navigation
• The map-independent navigation system
operates without a known map, whereas UAVs
navigate only by observing and extracting
distinct features from their surroundings
• Currently, optical flow and feature tracking
methods are the most commonly used
methods in map-independent navigation
systems.
• Optical flow-based navigation refers to a
system in which a drone uses a camera to
analyze the apparent motion of pixels in a
scene (optical flow) to determine its own
movement and direction
• Feature Tracking-Based Navigation System
uses the identification and tracking of distinct
features in an environment (like corners, lines,
or specific landmarks) to determine the
position and orientation of a moving object
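The optical-flow idea of recovering ego-motion from apparent pixel motion can be illustrated with a heavily simplified global block-matching estimate: try every small integer shift and keep the one that best explains the change between two frames. Real systems use dense or sparse flow methods such as Lucas-Kanade rather than this brute-force sketch.

```python
import numpy as np

def estimate_shift(prev, curr, max_shift=3):
    """Brute-force estimate of the dominant pixel motion between two frames.

    A simplified stand-in for optical flow: for each candidate integer
    shift (dy, dx), compare the overlapping regions of the two frames
    and keep the shift minimising the mean squared difference.
    """
    h, w = prev.shape
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Overlapping windows under the hypothesis curr[y+dy, x+dx] = prev[y, x]
            a = prev[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
            b = curr[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
            err = np.mean((a.astype(float) - b.astype(float)) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best
```

The recovered pixel shift, combined with altitude and camera intrinsics, gives a metric velocity estimate, which is how optical-flow sensors stabilize drones in GPS-denied hover.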
Map-dependent navigation
• A map-dependent approach relies on the spatial layout
of the environment to enable UAVs to navigate with
detour behavior and plan their movements
• Two different types of maps are primarily used in these
methods: octree and occupancy grid maps
• The maps contain a wide range of information, which
includes 3D models of a complete environment and
maps showing the interconnections between the
elements of that environment.
• Furthermore, when the 3D data are projected into a
2D map, they can be applied in environments where
height information is less critical, such as office
areas, wide halls, or flat outdoor fields.
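Occupancy grid cells, one of the two map types mentioned above, are typically updated in log-odds form as range measurements arrive: each "hit" raises a cell's occupancy belief and each "miss" lowers it. A minimal per-cell sketch with illustrative sensor-model constants:

```python
import math

# Illustrative inverse-sensor-model constants (log-odds increments)
L_OCC = 0.85    # added when the sensor reports the cell occupied
L_FREE = -0.4   # added when the sensor reports the cell free

def update_cell(log_odds, hit):
    """Log-odds update of one occupancy cell from a range-sensor reading."""
    return log_odds + (L_OCC if hit else L_FREE)

def occupancy_prob(log_odds):
    """Convert a cell's log-odds back to an occupancy probability in [0, 1]."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))
```

Log-odds make the update a simple addition per measurement, and a cell starting at 0 (probability 0.5, "unknown") accumulates evidence from repeated observations; an octree map applies the same update to hierarchically stored 3D cells.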
Map-Building-Based Navigation Systems
• Owing to natural calamities, such as storms or
heavy rain, UAVs cannot always recognize the target area
• Therefore, generating maps while navigating through a
complex environment is an effective solution
• Generally, map-building approaches are widely used in
autonomous and semi-autonomous fields and have become
widely accepted and more popular owing to the rapid
development of visual simultaneous localization and mapping
(VSLAM)
• Vision-based SLAM algorithms, which are based on cameras,
have been significantly developed for simultaneously
recovering camera poses and structure of the environment
and have used three types of methods: indirect, direct, and
hybrid methods, depending on the visual sensor image
processing.
Map-Building-Based Navigation Systems
• Indirect approaches identify and extract features; rather than using the images
directly, these features serve as the input for motion estimation and localization methods.
• Features are often intended to be rotation and perspective invariant and robust against
motion blur and noise
• The direct method optimizes the geometric parameters by utilizing the intensity information of
all image pixels, providing resistance to photometric and geometric aberrations
• Furthermore, direct techniques are more likely to identify dense correspondences,
allowing them to reconstruct dense maps at a higher computational cost.
• Hybrid approaches combine both the direct and indirect approaches
• As a first step, they initialize feature-related maps using indirect approaches; then, for
more accurate results, they continuously refine the camera poses using direct methods.
Practical systems for UAV navigation
• Drone waypoint GPS navigation
• GNSS/LiDAR with costmap-based navigation: a GNSS/LiDAR integrated system with trajectory replanning after avoiding an obstacle
• Stereo-vision-based navigation in a map-based environment: an onboard-calculated 3D map is constructed from onboard depth data