KittiBox is a collection of scripts to train our model FastBox on the Kitti Object Detection Dataset. A detailed description of FastBox can be found in our MultiNet paper.
FastBox is designed to achieve high detection performance at a very fast inference speed. On Kitti data the model has a throughput of 28 fps (36 ms per frame), which is more than twice as fast as Faster-RCNN. Despite its speed, FastBox also outperforms Faster-RCNN significantly.
Model | moderate | easy | hard | speed (ms) | speed (fps) |
---|---|---|---|---|---|
FastBox | 86.45 % | 92.80 % | 67.59 % | 35.75 ms | 27.97 |
Faster-RCNN [1] | 78.42 % | 91.62 % | 66.85 % | 78.30 ms | 12.77 |
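The two speed columns report the same measurement in different units; as a quick sanity check of the numbers above:

```python
# fps and ms in the table are the same measurement: fps = 1000 / latency_ms
for name, latency_ms in [('FastBox', 35.75), ('Faster-RCNN', 78.30)]:
    print('%s: %.2f fps' % (name, 1000.0 / latency_ms))
# FastBox: 27.97 fps, Faster-RCNN: 12.77 fps
```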
The repository contains code to train, evaluate and visualize FastBox in TensorFlow. It is built to be compatible with the TensorVision backend, which allows experiments to be organized in a very clean way. Also check out KittiSeg, a similar project implementing a state-of-the-art road segmentation model. The code for joint inference can be found in the MultiNet repository.
The code requires Tensorflow 1.0 as well as the following python libraries:
- matplotlib
- numpy
- Pillow
- scipy
- runcython
Those modules can be installed using `pip install numpy scipy pillow matplotlib runcython` or `pip install -r requirements.txt`.
This code requires Tensorflow Version >= 1.0rc to run. There have been a few breaking changes recently. If you are currently running an older Tensorflow version, I suggest creating a new virtualenv and installing 1.0rc using:
export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0rc0-cp27-none-linux_x86_64.whl
pip install --upgrade $TF_BINARY_URL
The commands above install the Linux version with GPU support. For other versions follow the instructions here.
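As a quick sanity check (not part of the KittiBox scripts themselves), you can verify from Python that the installed TensorFlow meets the version requirement:

```python
# Minimal check that the installed TensorFlow satisfies the >= 1.0 requirement
from distutils.version import LooseVersion

import tensorflow as tf

assert LooseVersion(tf.__version__) >= LooseVersion('1.0.0'), \
    "Found TensorFlow %s, but KittiBox needs >= 1.0" % tf.__version__
print("TensorFlow %s looks fine." % tf.__version__)
```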
1. Clone this repository: `git clone https://github.com/MarvinTeichmann/KittiBox.git`
2. Initialize all submodules: `git submodule update --init --recursive`
3. Run `cd submodules/utils && make` to build the cython code
4. [Optional] Download the Kitti Object Detection Data:
   - Retrieve the Kitti data url here: http://www.cvlibs.net/download.php?file=data_object_image_2.zip
   - Call `python download_data.py --kitti_url URL_YOU_RETRIEVED`
5. [Optional] Run `cd submodules/KittiObjective2/ && make` to build the Kitti evaluation code (see `submodules/KittiObjective2/README.md` for more information)
Running `demo.py` does not require steps 4 and 5. Those steps are only needed if you want to train your own model using `train.py` or benchmark a model against the official evaluation score using `evaluate.py`. Also note that I strongly recommend using `download_data.py` instead of downloading the data yourself. The script will also extract and prepare the data. See Managing Folders if you would like to control where the data is stored.
This project is developed, tested and maintained on a Linux operating system. It is written to be compatible with Windows, however a few modifications are necessary. You can find instructions on how to make the code run under Windows here.
In general I would suggest installing Linux, at least on a virtual system. Getting used to Linux is not that hard and most deep learning code is written for Linux. In the long run you will save yourself quite a bit of pain.
To update an existing KittiBox installation do:
- Pull all patches:
git pull
- Update all submodules:
git submodule update --init --recursive
If you forget the second step you might end up with an inconsistent repository state: you will already have the new KittiBox code but run it with old submodule versions. This can work, but I do not run any tests to verify it.
Run `python demo.py --input_image data/demo.png` to obtain a prediction using demo.png as input.

Run `python evaluate.py` to compute train and validation scores.

Run `python train.py` to train a new model on the Kitti data.
If you would like to understand the code, I recommend looking at demo.py first. I have documented each step as thoroughly as possible in this file.
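For orientation, the overall inference flow follows the pattern sketched below. This is only a simplified illustration, not the actual script: `build_inference_graph` is a placeholder stub standing in for the code in demo.py that wires the encoder (`encoder/vgg.py`) and the FastBox decoder (`decoder/fastBox.py`) together, and weight restoring is only hinted at in a comment.

```python
# Simplified sketch of the demo inference flow (illustration only, not the real demo.py).
import json

import numpy as np
import scipy.misc
import tensorflow as tf


def build_inference_graph(hypes, image_pl):
    """Placeholder: demo.py builds the real graph from the modules listed in hypes['model']."""
    # Stand-in tensors so the sketch runs end to end; the real graph returns the
    # predicted boxes and confidences of FastBox.
    pred_boxes = tf.zeros([1, 4])
    pred_confidences = tf.zeros([1, 1])
    return pred_boxes, pred_confidences


with open('hypes/kittiBox.json') as f:
    hypes = json.load(f)                      # hyperparameters and module paths

image_pl = tf.placeholder(tf.float32)         # input image placeholder
pred_boxes, pred_confidences = build_inference_graph(hypes, image_pl)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # demo.py additionally restores trained weights (tf.train.Saver) at this point.
    image = scipy.misc.imread('data/demo.png').astype(np.float32)
    boxes, confidences = sess.run([pred_boxes, pred_confidences],
                                  feed_dict={image_pl: image})
    print(boxes.shape, confidences.shape)
```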
The model is controlled by the file `hypes/kittiBox.json`. Modifying this file should be enough to train the model on your own data and adjust the architecture according to your needs. You can create a new file `hypes/my_hype.json` and train that architecture using:

python train.py --hypes hypes/my_hype.json
For advanced modifications, the code is controlled by 5 different modules, which are specified in `hypes/kittiBox.json`:
"model": {
"input_file": "../inputs/idl_input.py",
"architecture_file" : "../encoder/vgg.py",
"objective_file" : "../decoder/fastBox.py",
"optimizer_file" : "../optimizer/generic_optimizer.py",
"evaluator_file" : "../inputs/cars_eval.py"
},
Those modules operate independently. This allows easy experiments with different datasets (`input_file`), encoder networks (`architecture_file`), etc. Also see TensorVision for a specification of each of those files.
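As a rough illustration of how such a module can be swapped out, here is a minimal sketch of a custom `architecture_file`. It assumes the convention used by `encoder/vgg.py`: the module exposes an `inference(hypes, images, train)` function returning a dict of feature tensors that the FastBox decoder consumes. Check that file and the TensorVision documentation for the exact interface before relying on this sketch; the layer stack and the dict keys below are placeholders.

```python
# my_encoder.py -- minimal sketch of a custom architecture_file (illustration only).
# Assumes the interface convention of encoder/vgg.py: inference(hypes, images, train)
# returns a dict of named feature tensors consumed by the decoder (fastBox.py).
import tensorflow as tf


def inference(hypes, images, train=True):
    """Builds the encoder network and returns its feature maps."""
    logits = {}
    net = images
    # A deliberately tiny conv stack as a stand-in for a real backbone such as VGG.
    for i, filters in enumerate([64, 128, 256]):
        net = tf.layers.conv2d(net, filters, 3, strides=2, padding='same',
                               activation=tf.nn.relu, name='conv%d' % i)
    logits['deep_feat'] = net    # feature map used by the detection head (key is an assumption)
    logits['early_feat'] = net   # placeholder; a real encoder would expose an earlier layer here
    return logits
```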
By default, the data is stored in the folder `KittiBox/DATA` and the output of runs in `KittiBox/RUNS`. This behaviour can be changed by adjusting the environment variables `$TV_DIR_DATA` and `$TV_DIR_RUNS`.
For organizing your experiments you can use:

python train.py --project batch_size_bench --name size_5

This will store the run in the subfolder: `$TV_DIR_RUNS/batch_size_bench/size_5_%DATE`. This is useful if you want to run different series of experiments.
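For illustration, the run directory described above is composed roughly like this (the exact timestamp format appended to the run name is an assumption here, not taken from the code):

```python
# Illustration only: how the run directory name above is put together.
import datetime
import os

runs_dir = os.environ.get('TV_DIR_RUNS', 'RUNS')
project, name = 'batch_size_bench', 'size_5'
date = datetime.datetime.now().strftime('%Y_%m_%d_%H.%M')   # timestamp format is an assumption
print(os.path.join(runs_dir, project, '%s_%s' % (name, date)))
# e.g. RUNS/batch_size_bench/size_5_2017_03_01_12.30
```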
KittiBox is built on top of the TensorVision backend. TensorVision modularizes computer vision training and helps organizing experiments.
To utilize the entire TensorVision functionality install it using:
$ cd KittiBox/submodules/TensorVision
$ python setup.py install
Now you can use the TensorVision command line tools, which include:
tv-train --hypes hypes/KittiBox.json
trains the model defined in the json hype file
tv-continue --logdir PATH/TO/RUNDIR
continues an interrupted training run
tv-analyze --logdir PATH/TO/RUNDIR
evaluates a trained model
Here are some flags which will be useful when working with KittiBox and TensorVision. All flags are available across all scripts.
--hypes
: specify which hype-file to use
--logdir
: specify which logdir to use
--gpus
: specify on which GPUs to run the code
--name
: assign a name to the run
--project
: assign a project to the run
--nosave
: debug run, logdir will be set to debug
In addition, the following TensorVision environment variables will be useful:
$TV_DIR_DATA
: specify meta directory for data
$TV_DIR_RUNS
: specify meta directory for output
$TV_USE_GPUS
: specify default GPU behaviour.
On a cluster it is useful to set `$TV_USE_GPUS=force`. This will make the flag `--gpus` mandatory and ensure that the run is executed on the right GPU.
This project started out as a fork of TensorBox.
[1]: Code to reproduce the Faster-RCNN results can be found here. The repository contains the official py-faster-rcnn code applied to the Kitti Object Detection Dataset.