Lab Procedure for Simulink
Obstacle Detection
Setup
1. It is recommended that you review Lab 4 – Application Guide before starting this lab.
2. Turn on the QBot Platform by pressing the power button once. To ensure the robot is
ready for the lab, check the following conditions.
a. The LEDs on the robot base should be solid red.
b. The LCD should display the battery level. It is recommended that the battery
level be above 12.5 V.
c. The Logitech F710 joystick’s wireless receiver is connected to the QBot
Platform. Before use, always make sure the switch on top is in the X position
and that the LED next to the Mode button is off.
d. Make sure your computer is connected to the same network that the QBot
Platform is on. If using the provided router, the network should be
Quanser_UVS-5G.
e. Test connectivity to the QBot. Using the IP displayed on the robot's LCD,
enter the following command in your local computer terminal and press Enter:
ping 192.168.2.x
3. Deploy and run qbot_platform_driver_physical on QBot Platform:
a. Right click on qbot_platform_driver_physical.rt-linux_qbot_platform, select
“Show more options”, then select “Run on target”.
b. Change Target URI to: tcpip://192.168.2.x:17000
c. Change Model Arguments to -d /tmp -uri tcpip://192.168.2.x:17099
d. Click Run.
e. The QBot Platform LEDs should pulse white if the driver is deployed and
running successfully.
4. Open the Simulink Model obstacle_detection.slx, as shown in Figure 1. Configure the
model so that it can be deployed to the QBot Platform:
a. Open Hardware Settings under the Hardware ribbon in your model.
b. Expand and browse to Code Generation > Interface.
c. Change the MEX-file arguments to the following string including single quotes,
'-w -d /tmp -uri %u','tcpip://192.168.2.x:17001'
Figure 1. Lab 4 Obstacle Detection Simulink Model
LiDAR Data Localization
1. In this lab you will focus on processing the LiDAR scan data to enable obstacle
detection on the QBot. First, take a look at the Lidar and Localization
subsystem, which should look like Figure 2. In this subsystem, the reference frame of the
LiDAR data is transformed from the center of the sensor to the center of the QBot. The
Quanser Ranging Sensor block outputs distances measured in meters and the
corresponding headings. Other properties, such as the standard deviation (sigma) and
quality (qual) of the measurement, are not used in our application. To learn more
about the output of the Quanser Ranging Sensor block, right-click on the block and select
"Help" to open its documentation.
Figure 2. Lidar & Localization Block
2. Notice that the raw headings signal is connected to a Gain block with a value of -1 and
a Bias block with a value of pi/2. This corrects the headings so that an angle
value of 0 corresponds to the front of the QBot, as shown in Figure 3. In addition, the
angles now increase counterclockwise, consistent with the positive
convention of our reference frame.
Figure 3. LiDAR measurement correction
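The Gain and Bias blocks above amount to a single formula, corrected = -raw + pi/2. A minimal Python stand-in for the two Simulink blocks (the example values are illustrative, not taken from the lab):

```python
import math

def correct_heading(raw_heading):
    """Mirror of the Gain (-1) and Bias (+pi/2) blocks: flip the sign so
    angles increase counterclockwise, then shift the zero reference so
    that 0 points to the front of the QBot."""
    return -1.0 * raw_heading + math.pi / 2

# With this correction, a raw heading of pi/2 maps to 0 (the robot's front).
print(correct_heading(math.pi / 2))  # → 0.0
```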
3. Two Selector blocks are used to select every 4th element in the distance and headings
signals, effectively downsampling them fourfold. Downsampling is beneficial in
the obstacle detection application because it speeds up computation when high
resolution is not necessary.
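The Selector blocks' keep-every-4th-element behavior corresponds to a simple stride operation. A quick Python illustration with a placeholder scan (the 16-sample array is made up for the example):

```python
# Placeholder scan of 16 range samples standing in for the LiDAR signal.
distances = list(range(16))

# Equivalent of the Selector block: keep every 4th element (stride of 4),
# reducing the number of points to process fourfold.
downsampled = distances[::4]
print(downsampled)  # → [0, 4, 8, 12]
```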
4. Open the MATLAB Function block labeled correct_lidar. To complete this function,
you will use the processed range and angle data and the position of the LiDAR to
adjust the current center of measurement (LiDAR) to the center of the QBot.
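The re-centering you implement in correct_lidar is a polar-to-Cartesian shift and back. The sketch below is a hedged Python stand-in for the MATLAB Function block, not the required solution; the LiDAR offset of 0.1 m along the robot's forward axis is a placeholder, and the actual offset should come from your QBot model:

```python
import math

def correct_lidar(ranges, angles, lidar_offset_x=0.1):
    """Re-center polar LiDAR measurements from the sensor origin to the
    robot center, assuming the LiDAR sits lidar_offset_x meters ahead of
    the QBot center along its forward (x) axis. The 0.1 m default is a
    placeholder value for illustration only."""
    corrected_ranges, corrected_angles = [], []
    for r, th in zip(ranges, angles):
        # Convert the measurement to Cartesian coordinates in the robot frame.
        x = r * math.cos(th) + lidar_offset_x
        y = r * math.sin(th)
        # Convert back to polar form about the robot center.
        corrected_ranges.append(math.hypot(x, y))
        corrected_angles.append(math.atan2(y, x))
    return corrected_ranges, corrected_angles

# A point 1 m directly ahead of the LiDAR is 1.1 m ahead of the robot center.
print(correct_lidar([1.0], [0.0]))  # → ([1.1], [0.0])
```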
Dynamic Monitor Region
1. Go back to the main Simulink model and open Obstacle Detection via Lidar
subsystem, as shown in Figure 4. This subsystem uses the LiDAR measurement to
determine when the QBot should stop to avoid collision with an obstacle. In this lab,
you will implement a simple obstacle detection algorithm using the MATLAB Function
block labeled detect_obstacle.
Figure 4. Obstacle Detection via Lidar block
2. Notice that in addition to the LiDAR measurements, QBot body speed commands are
also used as inputs to the function. This allows our obstacle detection algorithm to
dynamically change the monitor region and safety threshold, as shown in Figure 5.
Figure 5. Dynamic Scanning
3. Open the Saturation blocks leading into the forSpd and turnSpd ports of the
detect_obstacle block. Take note of the upper and lower limits; they represent the
maximum forward and turn speeds of the QBot.
4. Open the detect_obstacle block. In section 1, the variable startingIndex corresponds to
the westmost angle of the monitor region. First, normalize turnSpd using the turn
speed limit. Then derive an equation for startingIndex in terms of the normalized
turnSpd and turnSpeedGain, and complete section 1.
5. Similarly, normalize forSpd first, then derive an equation for safetyThreshold in terms
of the normalized forSpd and forSpeedGain, and complete section 2.
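Steps 4 and 5 share the same normalize-then-scale pattern. The Python sketch below only illustrates that pattern; the exact startingIndex equation depends on how the monitor-region indices are laid out in your model, and every numeric value here (limits, gains, base values) is a hypothetical placeholder, not taken from the lab:

```python
def dynamic_region(forSpd, turnSpd,
                   maxForSpd=0.35, maxTurnSpd=0.7,
                   forSpeedGain=0.2, turnSpeedGain=10,
                   baseIndex=0, baseThreshold=0.3):
    """Illustrative only: shift the monitor-region start index with the
    normalized turn speed (section 1) and grow the safety threshold with
    the normalized forward speed (section 2). All defaults are
    placeholders, not values from the QBot model."""
    turnNorm = turnSpd / maxTurnSpd                      # section 1: normalize
    startingIndex = round(baseIndex + turnSpeedGain * turnNorm)
    forNorm = forSpd / maxForSpd                         # section 2: normalize
    safetyThreshold = baseThreshold + forSpeedGain * forNorm
    return startingIndex, safetyThreshold

# At rest the region sits at its base; at full speed both quantities grow.
print(dynamic_region(0.0, 0.0))    # → (0, 0.3)
print(dynamic_region(0.35, 0.7))   # → (10, 0.5)
```

Normalizing by the saturation limits first keeps the gains dimensionless, so the same gain values behave consistently even if the speed limits change.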
6. Close the MATLAB Function and go back to the main Simulink model. Verify that the
Image Processing subsystem has already been completed for you. Feel free to
change the gains in the PD controller or copy over your own subsystem block from
the previous lab.
7. Click Monitor & Tune in the Hardware or QUARC ribbon to deploy and run the
model. When the model is run successfully, the QBot Platform LEDs will turn blue.
8. Open the Polar Figure labeled Monitor Scan. This figure should display the LiDAR
scan in the monitor region, as well as a uniform arc representing the safety threshold.
9. Without arming the QBot, move the joystick around. Notice that the monitor region
and safety threshold may not be changing with the body speed commands. This is
most likely due to forSpeedGain and turnSpeedGain being too small to produce any
noticeable responses.
10. Open Obstacle Detection via Lidar subsystem again to tune the two gains. Observe
how the monitor region and safety threshold dynamically change in response to the
varying speed commands. Iterate through this process as much as you need until you
are satisfied with the monitor region responses. Press the right button (RB) to stop the
model.
Obstacle Detection
1. Go back to the Obstacle Detection via Lidar subsystem. Notice that minThreshold has
already been defined for you. This variable represents the radius of the bounding circle
of the QBot.
2. Open the detect_obstacle function. In section 3, the total number of points that lie
between minThreshold and safetyThreshold is computed and compared to
obstacleNumPoints. When the number of obstacle points exceeds obstacleNumPoints,
obstacleFlag is set to true. Complete section 3 and close the function.
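The count-and-compare logic of section 3 can be sketched in a few lines. This Python stand-in is only an illustration of the described behavior, not the required MATLAB implementation, and the numbers in the usage example are invented:

```python
def detect_flag(ranges, minThreshold, safetyThreshold, obstacleNumPoints):
    """Count scan points whose range falls between the robot's bounding
    circle (minThreshold) and the dynamic safetyThreshold; raise the flag
    when the count exceeds obstacleNumPoints."""
    count = sum(1 for r in ranges if minThreshold < r < safetyThreshold)
    return count > obstacleNumPoints

# Two of these placeholder points fall inside (0.25, 0.5), exceeding the
# allowed count of 1, so the flag is raised.
print(detect_flag([0.2, 0.4, 0.45, 1.0], 0.25, 0.5, 1))  # → True
```

Requiring more than one point inside the band (rather than a single point) makes the flag robust to isolated noisy returns.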
3. Click Monitor & Tune in the Hardware or QUARC ribbon to deploy and run the
model. When the model is run successfully, the QBot Platform LEDs will turn blue.
4. Without arming the QBot, gradually move an obstacle towards the LiDAR sensor to
trigger obstacleFlag. When an obstacle is detected, the User LED will turn magenta.
5. How sensitive is your obstacle detection algorithm to obstacles? Does it trigger too
early or too late?
6. Tune obstacleNumPoints until you are satisfied with the obstacle detection response.
7. With the QBot armed, drive it to a line on the mat, and press the A button to start line
following. Observe the obstacle detection response in different scenarios.
a. The obstacle is directly on the line.
b. The obstacle is on the side of the line such that the QBot would narrowly drive
past the obstacle.
8. Does the QBot stop in time to avoid a collision? Does the QBot stop even if it could move
past the obstacle?
9. Can you move the QBot closer to the obstacle after obstacleFlag is triggered?
10. Tune forSpeedGain, turnSpeedGain, and obstacleNumPoints until the QBot
responds correctly to the two obstacle configurations in step 7.
11. Stop the Simulink model when complete by pressing the right button (RB). Ensure that
you save a copy of your completed files for review later.
12. Turn OFF the robot by pressing the power button once (do not hold it down until
it turns off). After shutdown, all the LEDs should be completely OFF.