We use mmsegmentation and follow Swin-Transformer-Semantic-Segmentation to set up our pipelines.
To evaluate a pre-trained FocalNet on ADE20K, run:
python -m torch.distributed.launch --nproc_per_node <num-of-gpus-to-use> --master_port 12345 tools/test.py \
<config-file> <ckpt-path> --options data.samples_per_gpu=<samples_per_gpu> --launcher pytorch --eval mIoU
For multi-scale evaluation, run:
python -m torch.distributed.launch --nproc_per_node <num-of-gpus-to-use> --master_port 12345 tools/test.py \
<config-file> <ckpt-path> --options data.samples_per_gpu=<samples_per_gpu> --launcher pytorch --eval mIoU --aug-test
For example, to evaluate the UperNet model with FocalNet-B (LRF) on 8 GPUs:
python -m torch.distributed.launch --nproc_per_node 8 --master_port 12345 tools/test.py \
configs/focalnet/upernet_focalnet_base_patch4_512x512_160k_ade20k_lrf.py focalnet_base_lrf_upernet_160k.pth \
--cfg-options data.samples_per_gpu=1 model.backbone.focal_levels='[3,3,3,3]' --launcher pytorch --eval mIoU
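The dotted keys passed to --cfg-options (or --options) override fields of the config file before the test script builds the model and data loaders. A minimal sketch of the same overrides applied programmatically, assuming mmcv 1.x's Config API and the config name from the example above:

# Sketch: apply the same dotted overrides as the --cfg-options flags above (assumes mmcv 1.x).
from mmcv import Config

cfg = Config.fromfile('configs/focalnet/upernet_focalnet_base_patch4_512x512_160k_ade20k_lrf.py')
cfg.merge_from_dict({
    'data.samples_per_gpu': 1,
    'model.backbone.focal_levels': [3, 3, 3, 3],
})
print(cfg.model.backbone.focal_levels)  # -> [3, 3, 3, 3]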
To train a UperNet model with a pretrained FocalNet backbone, run:
python -m torch.distributed.launch --nproc_per_node <num-of-gpus-to-use> --master_port 12345 tools/train.py \
<config-file> \
--options \
model.pretrained=<pretrained/model/path> \
data.samples_per_gpu=<samples_per_gpu> \
--launcher pytorch
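Here model.pretrained should point to an ImageNet-pretrained FocalNet classification checkpoint. A minimal sketch to sanity-check such a checkpoint before launching training, assuming the usual torch.save layout where the weights sit under a 'model' or 'state_dict' key:

# Sketch: inspect a pretrained backbone checkpoint (layout assumption: 'model'/'state_dict' key).
import torch

ckpt = torch.load('focalnet_tiny_srf.pth', map_location='cpu')
state_dict = ckpt.get('model', ckpt.get('state_dict', ckpt))
print(len(state_dict), 'tensors')
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))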
For example, we train UperNet with the following commands:
UperNet with FocalNet-T
FocalNet-T (SRF):
python -m torch.distributed.launch --nproc_per_node 8 --master_port 12345 tools/train.py \
configs/focalnet/upernet_focalnet_tiny_patch4_512x512_160k_ade20k_srf.py \
--options \
model.pretrained='focalnet_tiny_srf.pth' \
data.samples_per_gpu=2 \
--launcher pytorch
FocalNet-T (LRF):
python -m torch.distributed.launch --nproc_per_node 8 --master_port 12345 tools/train.py \
configs/focalnet/upernet_focalnet_tiny_patch4_512x512_160k_ade20k_lrf.py \
--options \
model.pretrained='focalnet_tiny_lrf.pth' \
data.samples_per_gpu=2 \
--launcher pytorch
UperNet with FocalNet-S
FocalNet-S (SRF):
python -m torch.distributed.launch --nproc_per_node 8 --master_port 12345 tools/train.py \
configs/focalnet/upernet_focalnet_small_patch4_512x512_160k_ade20k_srf.py \
--options \
model.pretrained='focalnet_small_srf.pth' \
data.samples_per_gpu=2 \
--launcher pytorch
FocalNet-S (LRF):
python -m torch.distributed.launch --nproc_per_node 8 --master_port 12345 tools/train.py \
configs/focalnet/upernet_focalnet_small_patch4_512x512_160k_ade20k_lrf.py \
--options \
model.pretrained='focalnet_small_lrf.pth' \
data.samples_per_gpu=2 \
--launcher pytorch
UperNet with FocalNet-B
FocalNet-B (SRF):
python -m torch.distributed.launch --nproc_per_node 8 --master_port 12345 tools/train.py \
configs/focalnet/upernet_focalnet_base_patch4_512x512_160k_ade20k_srf.py \
--options \
model.pretrained='focalnet_base_srf.pth' \
data.samples_per_gpu=2 \
--launcher pytorch
FocalNet-B (LRF):
python -m torch.distributed.launch --nproc_per_node 8 --master_port 12345 tools/train.py \
configs/focalnet/upernet_focalnet_base_patch4_512x512_160k_ade20k_lrf.py \
--options \
model.pretrained='focalnet_base_lrf.pth' \
data.samples_per_gpu=2 \
--launcher pytorch
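After training, the resulting UperNet checkpoint can also be used for single-image inference outside the distributed test script. A minimal sketch, assuming the mmseg 0.x Python API, the config and checkpoint names from the evaluation example above, and a placeholder image path:

# Sketch: single-image inference with a trained FocalNet-B (LRF) UperNet (assumes mmseg 0.x).
from mmseg.apis import init_segmentor, inference_segmentor

config = 'configs/focalnet/upernet_focalnet_base_patch4_512x512_160k_ade20k_lrf.py'
checkpoint = 'focalnet_base_lrf_upernet_160k.pth'

# Build the model and load the trained weights.
model = init_segmentor(config, checkpoint, device='cuda:0')
# 'demo.jpg' is a placeholder image; the result is a per-pixel class-index map.
result = inference_segmentor(model, 'demo.jpg')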