Kaiyi Huang¹, Kaiyue Sun¹, Enze Xie², Zhenguo Li², and Xihui Liu¹
¹The University of Hong Kong, ²Huawei Noah’s Ark Lab
Before running the scripts, make sure to install the library's training dependencies:
Important
To make sure you can successfully run the latest versions of the example scripts, we highly recommend installing from source and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
Then cd into the example folder and run
pip install -r requirements.txt
And initialize an 🤗Accelerate environment with:
accelerate config
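Or, for a default 🤗Accelerate configuration without answering questions about your environment, run
accelerate config default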
- LoRA finetuning
To use the LoRA finetuning method, please refer to the following link to download the "lora_diffusion" directory:
https://github.com/cloneofsimo/lora/tree/master
- Example usage
export project_dir=/T2I-CompBench
cd $project_dir
export train_data_dir="examples/samples/"
export output_dir="examples/output/"
export reward_root="examples/reward/"
export dataset_root="examples/dataset/color.txt"
export script=GORS_finetune/train_text_to_image.py
accelerate launch --multi_gpu --mixed_precision=fp16 \
--num_processes=8 --num_machines=1 \
--dynamo_backend=no "${script}" \
--train_data_dir="${train_data_dir}" \
--output_dir="${output_dir}" \
--reward_root="${reward_root}" \
--dataset_root="${dataset_root}"
or run
cd T2I-CompBench
bash GORS_finetune/train.sh
The image directory should contain the training images, named after their prompts, e.g.,
examples/samples/
├── a green bench and a blue bowl_000000.png
├── a green bench and a blue bowl_000001.png
└──...
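The filename convention pairs each image with its prompt and a sample index. As a minimal sketch (assuming zero-padded six-digit sample indices, as in the listing above, and PIL images), generated samples could be saved like this:

from pathlib import Path

def save_samples(images, prompt, out_dir="examples/samples/"):
    """Save generated PIL images as "<prompt>_<index>.png" (hypothetical helper)."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for idx, img in enumerate(images):
        # Zero-padded six-digit sample index, matching the listing above.
        img.save(out / f"{prompt}_{idx:06d}.png")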
The reward directory should include a json file named "vqa_result.json", which is a list of entries that each map a "question_id" to an "answer" (the reward score, stored as a string), e.g.,
[{"question_id": 0, "answer": "0.7110"},
{"question_id": 1, "answer": "0.7110"},
...]
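As a minimal sketch of producing this file (assuming one reward score per training image, with question_id following the image order and the score stored as a 4-decimal string, as in the example above), one could write:

import json
from pathlib import Path

def write_rewards(scores, reward_root="examples/reward/"):
    """Write per-image reward scores to vqa_result.json (hypothetical helper)."""
    Path(reward_root).mkdir(parents=True, exist_ok=True)
    entries = [{"question_id": i, "answer": f"{score:.4f}"} for i, score in enumerate(scores)]
    with open(Path(reward_root) / "vqa_result.json", "w") as f:
        json.dump(entries, f)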
The dataset (prompt) file, e.g., "color.txt", should be placed in the directory "examples/dataset/".
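Each dataset file is assumed to be a plain-text list of prompts, one per line, e.g. (contents illustrative):

a green bench and a blue bowl
A bathroom with green tile and a red shower curtain
...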
- Install the requirements
The MiniGPT-4 evaluation is based on the following repository; please refer to it for environment dependencies and weights:
https://github.com/Vision-CAIR/MiniGPT-4
- Example usage
For evaluation, the input image files are stored in the directory "examples/samples/", in the same format as the training data. To compute BLIP-VQA scores, run:
export project_dir="BLIPvqa_eval/"
cd $project_dir
out_dir="examples/"
python BLIP_vqa.py --out_dir=$out_dir
or run
cd T2I-CompBench
bash BLIPvqa_eval/test.sh
The output is a json file named "vqa_result.json" in the "examples/annotation_blip/" directory.
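The "answer" field stores each image's score as a string, keyed by its question_id. As a minimal sketch (assuming the overall BLIP-VQA score is simply the mean over all images), the result can be read back like this:

import json

# Load the BLIP-VQA results produced by the step above.
with open("examples/annotation_blip/vqa_result.json") as f:
    results = json.load(f)

scores = [float(r["answer"]) for r in results]
print(f"mean BLIP-VQA score over {len(scores)} images: {sum(scores) / len(scores):.4f}")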
For the UniDet-based positional (spatial) evaluation, download the detector weights and put them under "UniDet_eval/experts/expert_weights":
mkdir -p UniDet_eval/experts/expert_weights
cd UniDet_eval/experts/expert_weights
wget https://huggingface.co/shikunl/prismer/resolve/main/expert_weights/Unified_learned_OCIM_RS200_6x%2B2x.pth
export project_dir=UniDet_eval
cd $project_dir
python determine_position_for_eval.py --outpath=../examples/
# Add --simple_structure if the caption/prompt has a 'simple' structure
The output is a json file named "vqa_result.json" in the "examples/labels/annotation_obj_detection" directory. If no positional information is found in the caption, the score will be -1.
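Because entries without positional information are marked with -1, any aggregation needs to treat them specially. A minimal sketch, assuming such entries are simply excluded from the average (the official scripts may use a different aggregation rule):

import json

# Load the UniDet positional-evaluation results produced by the step above.
with open("examples/labels/annotation_obj_detection/vqa_result.json") as f:
    results = json.load(f)

# Keep only prompts where positional information was found (answer != -1).
scores = [float(r["answer"]) for r in results if float(r["answer"]) != -1]
if scores:
    print(f"mean positional score over {len(scores)} prompts: {sum(scores) / len(scores):.4f}")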
For CLIPScore evaluation, run:
outpath="examples/"
python CLIPScore_eval/CLIP_similarity.py --outpath=${outpath}
or run
cd T2I-CompBench
bash CLIPScore_eval/test.sh
The output is a json file named "vqa_result.json" in the "examples/annotation_clip" directory.
For the 3-in-1 evaluation, run:
export project_dir="3_in_1_eval/"
cd $project_dir
outpath="examples/"
data_path="examples/dataset/"
python "3_in_1.py" --outpath=${outpath} --data_path=${data_path}
The output is a json file named "vqa_result.json" in the "examples/annotation_3_in_1" directory.
For MiniGPT-4 CoT evaluation, if the category to be evaluated is one of color, shape and texture:
export project_dir=Minigpt4_CoT_eval
cd $project_dir
category="color"
img_file="examples/samples/"
output_path="examples/"
python mGPT_cot_attribute.py --category=${category} --img_file=${img_file} --output_path=${output_path}
If the category to be evaluated is one of spatial, non-spatial and complex:
export project_dir=MiniGPT4_CoT_eval/
cd $project_dir
category="non-spatial"
img_file="examples/samples/"
output_path="examples"
python mGPT_cot_general.py --category=${category} --img_file=${img_file} --output_path=${output_path}
The output is a csv file named "mGPT_cot_output.csv" in the output_path directory.
To run inference with the finetuned LoRA weights, run inference.py:
export pretrained_model_path="checkpoint/color/lora_weight.pt"
export prompt="A bathroom with green tile and a red shower curtain"
python inference.py --pretrained_model_path "${pretrained_model_path}" --prompt "${prompt}"
If you're using T2I-CompBench in your research or applications, please cite using this BibTeX:
@article{huang2023t2icompbench,
  title={T2I-CompBench: A Comprehensive Benchmark for Open-world Compositional Text-to-image Generation},
  author={Kaiyi Huang and Kaiyue Sun and Enze Xie and Zhenguo Li and Xihui Liu},
  journal={arXiv preprint arXiv:2307.06350},
  year={2023},
}
This project is licensed under the MIT License. See the "License.txt" file for details.