- 😎End-to-end architecture: Directly outputs lane shape parameters.
- ⚡Super lightweight: The number of model parameters is only 765,787.
- ⚡Super low complexity: The number of MACs (1 MAC = 2 FLOPs) is only 574.280M.
- 😎Training friendly: Low GPU memory cost. An input of (360, 640, 3) with batch_size 16 uses only 1245 MiB of GPU memory.
PyTorch (1.5.0) code for training, evaluating, and pretrained models for LSTR (Lane Shape Prediction with Transformers). We streamline lane detection into a single-stage framework by proposing a novel lane shape model that achieves 96.18% TuSimple accuracy.
For details, see End-to-end Lane Shape Prediction with Transformers by Ruijin Liu, Zejian Yuan, Tie Liu, and Zhiliang Xiong.
- 【2021/11/16】 We fixed multi-GPU training.
- 【2020/12/06】 We now support the CULane dataset.
- LSTR-nano (new backbone): 96.33% TuSimple accuracy with only 40% of the MACs (229.419M) and 40% of the #Params (302,546) of LSTR.
- Mosaic Augmentation.
- Loguru based logger module.
- Geometry based loss functions.
- Segmentation prior.
We provide the baseline LSTR model file in ./cache/nnet/LSTR/.
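If you want to sanity-check the parameter count quoted above, here is a minimal sketch; the checkpoint filename and its layout as a plain state dict are assumptions, not guaranteed by this repo:

```python
import torch

# Hypothetical checkpoint name; substitute the actual file found in ./cache/nnet/LSTR/.
ckpt = torch.load("./cache/nnet/LSTR/LSTR_500000.pkl", map_location="cpu")

# Assume the file is (or wraps) a plain state dict of tensors; unwrap if nested.
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt.state_dict()
num_params = sum(v.numel() for v in state_dict.values() if torch.is_tensor(v))
print(f"parameters: {num_params:,}")  # should be close to 765,787
```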
Download and extract the TuSimple train, val, and test sets with annotations from TuSimple. We expect the following directory structure:
TuSimple/
    LaneDetection/
        clips/
        label_data_0313.json
        label_data_0531.json
        label_data_0601.json
        test_label.json
    LSTR/
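For reference, each line of a label_data_*.json file is an independent JSON record in the standard TuSimple format ('raw_file', 'lanes', 'h_samples'). A minimal sketch for inspecting one record (the path below assumes the layout shown above):

```python
import json

with open("TuSimple/LaneDetection/label_data_0313.json") as f:
    record = json.loads(f.readline())

raw_file = record["raw_file"]    # image path relative to the dataset root, e.g. clips/...
lanes = record["lanes"]          # per-lane x coordinates; -2 marks rows where the lane is absent
h_samples = record["h_samples"]  # shared y coordinates (image rows) for all lanes
print(raw_file, f"{len(lanes)} lanes sampled at {len(h_samples)} rows")
```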
- Linux Ubuntu 16.04
git clone https://github.com/liuruijin17/LSTR.git -b multiGPU
conda env create --name lstr --file environment.txt
conda activate lstr
pip install -r requirements.txt
To train a model (if you only want to use the train set, see ./config/LSTR.json and set "train_split": "train"):
python train.py LSTR -d 1 -t 8
- Visualized images are in ./results during training.
- Saved model files are in ./cache during training.
To resume training from a snapshot model file:
python train.py LSTR -d 1 -t 8 -r
To evaluate (you should obtain a result slightly better than the paper's):
python test.py LSTR -d 1 -b 16
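As a reminder of what the reported TuSimple accuracy measures, here is a simplified sketch of per-lane point accuracy. The official benchmark script additionally handles lane matching and false positives/negatives; the 20-pixel threshold and the prediction-to-ground-truth pairing are assumptions of this sketch:

```python
import numpy as np

def lane_point_accuracy(pred_xs, gt_xs, pixel_thresh=20):
    """Fraction of ground-truth lane points matched by the prediction.

    Simplified: assumes pred_xs and gt_xs are already paired and sampled
    at the same h_samples rows; -2 marks rows where the lane is absent.
    """
    pred = np.asarray(pred_xs, dtype=float)
    gt = np.asarray(gt_xs, dtype=float)
    valid = gt != -2
    if not valid.any():
        return 1.0
    return float((np.abs(pred[valid] - gt[valid]) < pixel_thresh).mean())
```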
To demo on TuSimple images (results are saved in ./results/LSTR/507640/testing/lane_debug):
python demo.py LSTR
- Demo (displayed parameters are rounded to three significant figures).
To demo TuSimple decoder attention maps (add --debugEnc to visualize encoder attention maps):
python demo.py LSTR -dec
To demo on your own images (put them in ./assets; results will be saved in ./assets_output):
python demo.py LSTR -f ./assets
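For example, a minimal shell sketch (the source image path is a placeholder):

```bash
mkdir -p ./assets
cp /path/to/your_images/*.jpg ./assets/
python demo.py LSTR -f ./assets
# predictions are written to ./assets_output
```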
@InProceedings{LSTR,
author = {Ruijin Liu and Zejian Yuan and Tie Liu and Zhiliang Xiong},
title = {End-to-end Lane Shape Prediction with Transformers},
booktitle = {WACV},
year = {2021}
}
LSTR is released under the BSD 3-Clause License. Please see the LICENSE file for more information.
We actively welcome your pull requests!