This repository contains the code used for our work, 'Source-Free Domain Adaptation for YOLO Object Detection,' presented at the ECCV 2024 Workshop on Out-of-Distribution Generalization in Computer Vision Foundation Models. You can find our paper here.
Here is an example of using SF-YOLO for the Cityscapes to Foggy Cityscapes scenario.
```bash
conda create --name sf-yolo python=3.11
conda activate sf-yolo
conda install pytorch torchvision pytorch-cuda=11.8 -c pytorch -c nvidia
pip install -r requirements.txt
```
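Before moving on, you can verify that the installed PyTorch build can reach your GPU (a quick sanity check, not part of the original instructions):

```python
# Sanity check: confirm the installed PyTorch build is CUDA-enabled.
import torch

print(torch.__version__)          # expect a CUDA build, e.g. 2.x+cu118
print(torch.cuda.is_available())  # expect True on a CUDA-capable machine
```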
- Download the Foggy Cityscapes dataset from the official website.
- Convert the datasets to YOLO format and place them into the ./datasets folder (a label-format sketch is shown after this list).
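YOLO labels are one `.txt` file per image, with one `class x_center y_center width height` line per object, all values normalized to [0, 1]. A minimal conversion helper, assuming your source annotations provide absolute-pixel corner boxes (the function name and box layout are illustrative, not from this repository):

```python
# Hypothetical helper: convert one absolute-pixel box to a YOLO label line.
# Assumes the source annotation gives (class_id, x_min, y_min, x_max, y_max) in pixels.
def to_yolo_line(class_id, x_min, y_min, x_max, y_max, img_w, img_h):
    x_c = (x_min + x_max) / 2 / img_w  # normalized box-center x
    y_c = (y_min + y_max) / 2 / img_h  # normalized box-center y
    w = (x_max - x_min) / img_w        # normalized box width
    h = (y_max - y_min) / img_h        # normalized box height
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# Example: a 200x300 pixel box at (100, 50) in a 2048x1024 Cityscapes frame.
print(to_yolo_line(0, 100, 50, 300, 350, 2048, 1024))
```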
Extract the target training data:

```bash
cd TargetAugment_train
python extract_data.py --scenario_name city2foggy --images_folder ../datasets/CityScapesFoggy/yolov5_format/images --image_suffix jpg
```
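The style image used later (data/meanfoggy/meanfoggy.jpg) appears to be a per-pixel mean over the target-domain training images. If you need to rebuild such an image for a new scenario, here is a hypothetical sketch (an assumption about what the style reference represents, not the actual code of extract_data.py; it also assumes all images share one resolution, which holds for Cityscapes):

```python
# Hypothetical sketch: average all target-domain images into one mean style image.
from pathlib import Path

import numpy as np
from PIL import Image

paths = sorted(Path("../datasets/CityScapesFoggy/yolov5_format/images").glob("**/*.jpg"))
acc = None
for p in paths:
    img = np.asarray(Image.open(p).convert("RGB"), dtype=np.float64)
    acc = img if acc is None else acc + img  # running per-pixel sum

mean_img = (acc / len(paths)).astype(np.uint8)  # per-pixel mean over the dataset
Image.fromarray(mean_img).save("meanfoggy.jpg")
```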
Then train the Target Augmentation Module:

```bash
python train.py --scenario_name city2foggy --content_dir data/city2foggy --style_dir data/meanfoggy --vgg pre_trained/vgg16_ori.pth --save_dir models/city2foggy --n_threads=8 --device 0
```
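The Target Augmentation Module builds on AdaIN, whose core operation renormalizes VGG content features to match the channel-wise statistics of the style features. A minimal sketch of that operation (the full encoder/decoder training lives in TargetAugment_train/train.py):

```python
# Core AdaIN operation: align content feature statistics with style statistics.
import torch

def adain(content_feat: torch.Tensor, style_feat: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # Both inputs are (N, C, H, W) feature maps from the VGG-16 encoder.
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    # Whiten the content features, then re-color them with the style statistics.
    return s_std * (content_feat - c_mean) / c_std + s_mean
```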
Download the Cityscapes source model weights and place them in the ./source_weights folder, then run:

```bash
python train_sf-yolo.py --epochs 60 --batch-size 16 --data foggy_cityscapes.yaml --weights ./source_weights/yolov5l_cityscapes.pt --decoder_path TargetAugment_train/models/city2foggy/decoder_iter_160000.pth --encoder_path TargetAugment_train/pre_trained/vgg16_ori.pth --fc1 TargetAugment_train/models/city2foggy/fc1_iter_160000.pth --fc2 TargetAugment_train/models/city2foggy/fc2_iter_160000.pth --style_add_alpha 0.4 --style_path ./TargetAugment_train/data/meanfoggy/meanfoggy.jpg --SSM_alpha 0.5 --device 0
```
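Two flags above correspond to simple blending operations: --style_add_alpha mixes the stylized image back into the original target image, and --SSM_alpha weights the moving-average blend between teacher and student weights that stabilizes self-training. A hedged sketch of both (names are illustrative; the exact update direction and schedule are defined in train_sf-yolo.py):

```python
# Illustrative sketches of the two alpha-weighted blends suggested by the flags.
import torch

def blend_style(image: torch.Tensor, stylized: torch.Tensor, style_add_alpha: float) -> torch.Tensor:
    # Mix the original target image with its stylized counterpart.
    return (1.0 - style_add_alpha) * image + style_add_alpha * stylized

@torch.no_grad()
def ema_blend(params_a, params_b, alpha: float) -> None:
    # In-place weighted average of two parameter sets:
    # params_a <- alpha * params_a + (1 - alpha) * params_b
    for pa, pb in zip(params_a, params_b):
        pa.mul_(alpha).add_(pb, alpha=1.0 - alpha)
```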
Other scenarios can be run by following the same steps. All source model weights are available here.
Thanks to the creators of YOLOv5, AdaIN, and LODS, which this implementation is built upon.