KFDynaSLAM is a monocular SLAM system designed to solve the problem of dynamic point culling in SLAM. It uses Mask R-CNN to cull dynamic points, and segments only KeyFrames in order to speed up the system. The base system is ORB-SLAM.
Hence the name KFDynaSLAM.
This is part of the code of my undergraduate project at Shanghai Jiao Tong University, "Improvement of Indoor Positioning Technology Based on SLAM -- Research on the Problem of Dynamic Point Culling".
The other part, EnvDynaSLAM, is dedicated to using auxiliary information from cameras in the environment to help with dynamic point removal. If you find any error, please open an issue or contact [email protected]
Below is a demo screenshot of KFDynaSLAM processing a video taken in my house.
Figure 1: Demo
We provide examples to run the SLAM system on the TUM dataset in monocular mode, since the project researches the use of SLAM for indoor localization. I therefore only use the TUM dataset and my own videos.
- Prepare ORB-SLAM
  - Install ORB-SLAM2 prerequisites: C++11 or C++0x Compiler, Pangolin, OpenCV and Eigen3 (https://github.com/raulmur/ORB_SLAM2).
- Boost for Python support
  - Install the Boost libraries with the command

    sudo apt-get install libboost-all-dev
- Prepare Mask R-CNN
  - Install detectron2. You can find the install tutorial at https://github.com/facebookresearch/detectron2/blob/master/INSTALL.md.
  - Make sure your system Python is >= 3.6, since only Python >= 3.6 can install PyTorch >= 1.4 and then detectron2.
  - Download the R50-FPN 3x model from https://github.com/facebookresearch/detectron2/blob/master/MODEL_ZOO.md.
  - To test whether the installation succeeded, run step.py (remember to verify the input and output paths):

    python src/python/step.py
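Mask R-CNN outputs one boolean mask per detected instance, while the SLAM front end only needs a single binary image marking all potentially dynamic pixels. A minimal sketch of that merging step (function and variable names are my own illustration, not the repo's actual step.py):

```python
import numpy as np

def merge_instance_masks(instance_masks, height, width):
    """Merge per-instance boolean masks (N, H, W) into one uint8 mask.

    Pixels belonging to any detected (potentially dynamic) instance
    become 255; everything else stays 0.
    """
    if len(instance_masks) == 0:
        # No detections: the whole frame is treated as static
        return np.zeros((height, width), dtype=np.uint8)
    stacked = np.asarray(instance_masks, dtype=bool)
    return stacked.any(axis=0).astype(np.uint8) * 255

# Two fake 4x4 instance masks standing in for Mask R-CNN output
m1 = np.zeros((4, 4), dtype=bool); m1[0, 0] = True
m2 = np.zeros((4, 4), dtype=bool); m2[3, 3] = True
merged = merge_instance_masks([m1, m2], 4, 4)
print(int(merged[0, 0]), int(merged[3, 3]), int(merged[1, 1]))  # 255 255 0
```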
- RUN
  - Clone this repo:

    git clone https://github.com/mingsjtu/ORB-SLAM_MaskRCNN_KeyFrame.git
    cd ORB-SLAM_MaskRCNN_KeyFrame

  - Change some paths in CMakeLists.txt if needed, then build the project:

    chmod +x build.sh
    ./build.sh

  - Change the path in python_predictor according to your own model path.
I provide a prepare_data folder which contains Python files to help you prepare data from your own video.
- Convert the videos to a picture sequence
  Change the file path and start time in video2pic.py:

    video_name = "/media/gm/Data/SLAM/self_video/5.16morn/huawei_20200516_081324"  # mp4 file path without '.mp4'
    video_to_image(video_name + ".mp4", video_name + "/rgb", 8*3600 + 11*60 + 50)  # start time: hour*3600 + minute*60 + second
    generate_rgbtxt(video_name + "/rgb", video_name + "/rgb.txt")
  Then run

    cd prepare_data
    python video2pic.py

  You will get a sequence of pictures and a TXT file containing the picture names and timestamps.
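The TXT file follows the TUM rgb.txt convention of one "timestamp filename" line per image. A minimal sketch of how such a file can be generated (the helper name matches generate_rgbtxt above, but the implementation details, including the assumption that each image is named `<timestamp>.png`, are mine, not necessarily the repo's):

```python
import os
import tempfile

def generate_rgbtxt(rgb_dir, txt_path):
    """Write a TUM-style rgb.txt: one 'timestamp rgb/filename' line per image.

    Assumes each image is named '<timestamp>.png' (an assumption about
    the frame-extraction step, not confirmed by the repo).
    """
    names = sorted(n for n in os.listdir(rgb_dir) if n.endswith(".png"))
    with open(txt_path, "w") as f:
        f.write("# color images\n")
        f.write("# timestamp filename\n")
        for name in names:
            timestamp = os.path.splitext(name)[0]
            f.write(f"{timestamp} rgb/{name}\n")

# Demo with a temporary directory of empty placeholder "frames"
tmp = tempfile.mkdtemp()
for ts in ("1305031102.175304", "1305031102.211214"):
    open(os.path.join(tmp, ts + ".png"), "w").close()
generate_rgbtxt(tmp, os.path.join(tmp, "rgb.txt"))
print(open(os.path.join(tmp, "rgb.txt")).read())
```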
- Download a sequence from http://vision.in.tum.de/data/datasets/rgbd-dataset/download and uncompress it.
- Or use your own video with the Prepare Data part above.
- Execute the following command. Change TUMX.yaml to TUM1.yaml, TUM2.yaml or TUM3.yaml for freiburg1, freiburg2 and freiburg3 sequences respectively. Change PATH_TO_SEQUENCE_FOLDER to the uncompressed sequence folder. Change PATH_TO_MASKS to your own mask path, or to the path where you want the Segment module to save its results.

If PATH_TO_MASKS is provided, Mask R-CNN is used to segment the potential dynamic content of every frame, and these masks are saved in the provided folder PATH_TO_MASKS. If this argument is no_save, the masks are used but not saved. If the system finds already-computed Mask R-CNN dynamic masks in PATH_TO_MASKS, it uses them and does not compute them again.

    ./Examples/Monocular/mono_tum Vocabulary/ORBvoc.txt Examples/Monocular/TUMX.yaml PATH_TO_SEQUENCE_FOLDER PATH_TO_MASKS
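The PATH_TO_MASKS behaviour above (compute and save, no_save, or reuse precomputed masks) can be sketched as follows. This is an illustrative Python sketch with hypothetical names; the actual logic lives in the C++ SLAM system:

```python
import os
import tempfile

def get_mask(frame_name, path_to_masks, compute_mask):
    """Return the dynamic-object mask for one frame.

    compute_mask is a callable standing in for the Mask R-CNN segmenter
    (a hypothetical stand-in, not the repo's real interface).
    - path_to_masks == "no_save": compute the mask but never store it.
    - otherwise: reuse a previously saved mask if present, else compute
      the mask and cache it under path_to_masks.
    """
    if path_to_masks == "no_save":
        return compute_mask(frame_name)        # use the mask, don't save it
    mask_file = os.path.join(path_to_masks, frame_name + ".mask")
    if os.path.exists(mask_file):
        with open(mask_file, "rb") as f:       # reuse the precomputed mask
            return f.read()
    mask = compute_mask(frame_name)            # first run: segment this frame
    with open(mask_file, "wb") as f:           # ...and cache it for next time
        f.write(mask)
    return mask

# Demo: the fake segmenter records how often it is actually called
calls = []
def fake_segment(name):
    calls.append(name)
    return b"\x00\xff"

masks_dir = tempfile.mkdtemp()
get_mask("frame0001", masks_dir, fake_segment)  # computed and saved
get_mask("frame0001", masks_dir, fake_segment)  # reused from disk
get_mask("frame0002", "no_save", fake_segment)  # computed, not saved
print(len(calls))  # 2: the second frame0001 request hit the cache
```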
I compare KFDynaSLAM, offlineSLAM (masks precomputed and placed in PATH_TO_MASKS before running SLAM) and ORB-SLAM on the TUM dynamic dataset. The results are shown below.
Figure 2: Max error of different methods
Figure 3: Min error of different methods
It shows that KFDynaSLAM has better accuracy than ORB-SLAM, and that offlineSLAM is the best.
I also compare the running time of KFDynaSLAM, offlineSLAM (masks precomputed and placed in PATH_TO_MASKS before running SLAM), EnvDynaSLAM and ORB-SLAM on the TUM dynamic dataset. The results are shown below.
Figure 4: Time test result
It shows that the segmentation part is very time-consuming.
You can find EnvDynaSLAM here, which is another repository of mine. It can cull dynamic points in a very short time.
Thanks to ORB-SLAM2 and DynaSLAM, which were a great help to my project.