Object Detection: training a custom YOLOv4 model

import numpy as np
import cv2  # OpenCV

clone the darknet project


!git clone https://github.com/AlexeyAB/darknet.git

%cd darknet

1st: change settings in the Makefile to enable GPU processing, cuDNN and OpenCV
!sed -i 's/GPU=0/GPU=1/g' Makefile

!sed -i 's/CUDNN=0/CUDNN=1/g' Makefile

!sed -i 's/OPENCV=0/OPENCV=1/g' Makefile

!head Makefile # confirm changes

Double-click the Makefile, scroll down to line 20 (ARCH = ...) and delete the two lines:

-gencode arch=compute_35,code=sm_35

-gencode arch=compute_50,code=[sm_50,compute_50]
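If you prefer not to edit the Makefile by hand, the two entries can also be stripped programmatically. A minimal sketch, assuming the two -gencode entries appear exactly as quoted above and are each followed by a line-continuation backslash:

import re

with open('Makefile') as f:
    mk = f.read()
# Remove the two obsolete -gencode entries together with their trailing backslash and newline
mk = re.sub(r'-gencode arch=compute_35,code=sm_35\s*\\\n\s*', '', mk)
mk = re.sub(r'-gencode arch=compute_50,code=\[sm_50,compute_50\]\s*\\\n\s*', '', mk)
with open('Makefile', 'w') as f:
    f.write(mk)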

%mkdir ../drive/MyDrive/my_model/
%cp Makefile ../drive/MyDrive/my_model/

2nd: Get the pre-trained weights, yolov4.weights for testing and yolov4.conv.137 as the starting point for the custom training (from the AlexeyAB GitHub releases)
%mkdir customization
%cd customization
!wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights
!wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.conv.137

%cp yolov4.weights ../../drive/MyDrive/my_model/

3rd: create a new custom network configuration file

make a new copy of the original configuration file


%cp ../cfg/yolov4.cfg .

Double-click on yolov4.cfg to open it in the editor pane to the left of this window for editing

change the following:


line 3: subdivisions is the number of mini-batches each training batch is split into so that it fits in GPU memory. Change this from subdivisions=8 -> subdivisions=64

line 7: the resized image width. Change this from width=608 -> width=416

line 8: the resized image height. Change this from height=608 -> height=416
line 19: max_batches should equal classes*2000, but not less than the number of training images and not less than 6000. Change this from max_batches = 500500 -> 4000 for our two classes.

line 21: change steps to 80% and 90% of max_batches. We use a single step for memory efficiency. Change this from steps=400000,450000 -> steps=3200

change the last set of filters before each output layer:


lines 961, 1049, 1137: change from filters=255 -> filters=21. The rule is filters = (classes + 5)x3, and it applies only to the last [convolutional] block before each of the three [yolo] layers.

change the number of classes in each output layer:

lines 968, 1056, 1144: change from classes=80 -> classes=2, then save the file (Ctrl+S).

REMARK
The max_batches entry is set to 4000 based on the YOLO guidelines, but this would result in approximately 10 h of training. Since it was observed empirically that the best network weights are obtained before 2000 iterations, it is recommended to use the following instead:

max_batches = 2000

steps = 1600
!cat yolov4.cfg
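As an alternative to the manual edits, the same substitutions can be scripted. A minimal sketch, assuming the stock yolov4.cfg values quoted above and relying on filters=255 and classes=80 occurring only in the three YOLO heads:

with open('yolov4.cfg') as f:
    cfg = f.read()

# Apply the edits described above (values on the left are assumed to be the stock ones)
for old, new in [('subdivisions=8', 'subdivisions=64'),
                 ('width=608', 'width=416'),
                 ('height=608', 'height=416'),
                 ('max_batches = 500500', 'max_batches = 2000'),
                 ('steps=400000,450000', 'steps=1600'),
                 ('filters=255', 'filters=21'),
                 ('classes=80', 'classes=2')]:
    cfg = cfg.replace(old, new)

with open('yolov4.cfg', 'w') as f:
    f.write(cfg)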

back it up
%cp yolov4.cfg ../../drive/MyDrive/my_model/

Get the data

1st: Create the directories


%cd ../
%mkdir custom_data
%cd custom_data
%mkdir images
%mkdir labels

2nd: download the data archive from Google Drive and unpack it


%cd ./images/
%cp /content/drive/MyDrive/object_detection/data/archive.zip .
!unzip archive.zip -d .
%rm archive.zip

%mv ./images/* .  # move all the images up into the current directory


%rm -r images/  # remove the now-empty nested folder
!find . -type f | awk -F. '!a[$NF]++{print $NF}'  # list every file extension present
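Roughly the same check can be done in Python with the standard library only (a sketch, run from the same custom_data/images/ directory):

from collections import Counter
import os

# Count files per extension in the current directory (Python counterpart of the find/awk line)
print(Counter(os.path.splitext(f)[1] for f in os.listdir('.') if os.path.isfile(f)))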

from glob import glob


pngs = glob('./*.png')

for j in pngs:
    img = cv2.imread(j)
    cv2.imwrite(j[:-3] + 'jpg', img)  # convert each PNG to JPG

%rm *.png

3rd: populate the labels/ directory


%cp *.txt ../labels/

4th: create the auxiliary files; these store the train and validation image paths and the dataset configuration
%cd ../
!touch training_data.txt
!touch validation_data.txt
!touch face_mask_classes.names
!touch face_mask.data

append the class names to face_mask_classes.names (the line order defines the class indices used in the label files: 0 = no face mask, 1 = face mask)


!echo "no face mask" >> face_mask_classes.names
!echo "face mask" >> face_mask_classes.names

configure the data file


!echo "classes = 2" >> face_mask.data
!echo "train = custom_data/training_data.txt" >> face_mask.data
!echo "valid = custom_data/validation_data.txt" >> face_mask.data
!echo "names = custom_data/face_mask_classes.names" >> face_mask.data
!echo "backup = backup/" >> face_mask.data

!head face_mask.data

back them up
%cp face_mask.data ../../drive/MyDrive/my_model/
%cp face_mask_classes.names ../../drive/MyDrive/my_model/

5th: split the data into train and validation sets

and populate the two respective text files with the appropriate file names
from sklearn.model_selection import train_test_split
import pandas as pd
import os

PATH = 'images/'
list_img = [img for img in os.listdir(PATH) if img.endswith('.jpg')]
path_img = []

for i in range(len(list_img)):
    path_img.append(PATH + list_img[i])  # builds paths like images/<filename>.jpg

df = pd.DataFrame(path_img)

# split 80/20 into train and validation sets
data_train, data_test, labels_train, labels_test = train_test_split(
    df[0], df.index, test_size=0.20, random_state=42)

train_idx = list(data_train.index)
test_idx = list(data_test.index)

# paths in the text files are relative to the darknet binary's working directory


relpath = "custom_data/"
backup_path = "/content/drive/MyDrive/my_model/"

# Train file
# Open the file with access mode 'a' (append)
with open("training_data.txt", "a") as file_object:
    for i in range(len(train_idx)):
        file_object.write(relpath + data_train[train_idx[i]] + "\n")  # e.g. custom_data/images/<filename>.jpg

# Validation file
with open("validation_data.txt", "a") as file_object:
    for i in range(len(test_idx)):
        file_object.write(relpath + data_test[test_idx[i]] + "\n")
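A small optional sanity check that the split produced the expected counts (roughly 80/20 of the .jpg files):

# Count the entries written to each list file
for fname in ("training_data.txt", "validation_data.txt"):
    with open(fname) as f:
        print(fname, sum(1 for _ in f), "entries")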

back them up
%cp training_data.txt ../../drive/MyDrive/my_model/
%cp validation_data.txt ../../drive/MyDrive/my_model/

Compile DarkNet
%cd ..

!make -j4

Test DarkNet with COCO example data


!./darknet detector test cfg/coco.data cfg/yolov4.cfg customization/yolov4.weights data/person.jpg

confirm findings:
from google.colab.patches import cv2_imshow
test_image = cv2.imread("data/person.jpg")
cv2_imshow(test_image)
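The cell above displays the raw input image. Darknet also writes its annotated result to predictions.jpg in the working directory, so the detections themselves can be inspected with a similar snippet:

# Display the annotated output produced by the detector test command
pred_image = cv2.imread("predictions.jpg")
cv2_imshow(pred_image)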

Train a Custom Model


Create a backup directory for the weights
%rm -r /content/darknet/backup

%mkdir ../drive/MyDrive/my_model/backup/

!ln -s /content/drive/MyDrive/my_model/backup/ /content/darknet/

Train the model

Auxiliary parameters:

-map flag: if set, darknet periodically evaluates the mean average precision and plots it together with the loss on the progress chart
-dont_show flag: if set, it prevents attempts to display the progress chart in a window, which may cause disruptions in the notebook environment

!./darknet detector train custom_data/face_mask.data customization/yolov4.cfg customization/yolov4.conv.137 -map -dont_show

It timed out after 9h 17m 45s with approximately 95% of the iterations completed. We used the best weights obtained.

While Training
progress chart: an image file (PNG) is generated periodically to report the latest mean average precision (mAP) vs the loss value for each iteration. It can be found inside the darknet directory. You may download it to check the training progress.

log: there may be times when the log appears to stall, but that is temporary; the process keeps running in the background.

disconnection/timeout: in this case, check the backup folder in Google Drive for which you created the symbolic link. You will find several weight files: one generated every 1000 iterations, one with the best weights computed (highest mAP, not lowest loss), and one with the latest weights before timeout or training completion. You may choose to reconnect the notebook and pick up where you stopped by running the following:

!./darknet detector train custom_data/face_mask.data customization/yolov4.cfg /content/drive/MyDrive/my_model/backup/yolov4_last.weights -map -dont_show

Upon Training Completion

Once training is complete, download the following (a sketch for fetching them from the notebook follows the list):


best weights file, i.e. MyDrive/my_model/backup/yolov4_best.weights

the configuration file /content/darknet/customization/yolov4.cfg

the summary chart image file /content/darknet/chart_yolov4.png
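The best weights are already in your Drive thanks to the backup symlink; the other two can be pulled to your machine with Colab's files helper, for example (a sketch, assuming the default Colab environment and the paths used above):

from google.colab import files

# Download the custom configuration and the training chart from the Colab VM
files.download('/content/darknet/customization/yolov4.cfg')
files.download('/content/darknet/chart_yolov4.png')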
