Published: 2022-09-09 02:00
In the config files under darknet\cfg\, change `classes`
and `filters` (the value such as 255 just before each output layer).
Each cell of the final feature map predicts num boxes, so:
v3: filters = (num / 3) * (classes + 5), with num = 9 anchors split across 3 [yolo] layers
; v2: filters = num * (classes + 5)
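As a quick sanity check of the formula (a minimal sketch; `yolo_filters` is just an illustrative helper, not part of darknet):

```python
def yolo_filters(classes, anchors_per_layer=3):
    # Each anchor predicts 4 box coordinates + 1 objectness score
    # + one score per class, hence (classes + 5) values per anchor.
    return anchors_per_layer * (classes + 5)

print(yolo_filters(20))  # 20 VOC classes -> 75
print(yolo_filters(80))  # 80 COCO classes -> 255
```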
cfg\voc.data
Fill in the paths here. data\voc.names
holds your class names. data\train_list.txt
is the generated list of image paths; open it in vim
and run :set fileformat=unix so the line endings are unix-style.
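If the list file is generated on Windows, an alternative to fixing it in vim is normalizing the line endings with a short script (a sketch; `to_unix` is a hypothetical helper and the sample file contents are made up):

```python
def to_unix(path):
    # Rewrite the file in place with LF line endings only.
    with open(path, 'rb') as f:
        data = f.read()
    with open(path, 'wb') as f:
        f.write(data.replace(b'\r\n', b'\n'))

# Example: create a CRLF file, then normalize it.
with open('train_list.txt', 'wb') as f:
    f.write(b'a.jpg\r\nb.jpg\r\n')
to_unix('train_list.txt')
```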
example\detector.c
is where you can change the interval at which the model is saved.
./darknet detector train cfg/voc.data cfg/yolov3-spp.cfg darknet53.conv.74  # training command
./darknet detector train cfg/voc.data cfg/yolov3-spp.cfg backup/yolov3-spp.backup  # resume from a checkpoint
ls /dev/video*  # list the available cameras
| tee train_yolov3.log  # append to the training command to also save the console output to a log
import os

def file_name(file_dir):
    # Walk the local image folder and write one image path per line;
    # the prefix is the directory where the images live on the training machine.
    with open('train_list.txt', 'w') as f:
        for root, dirs, files in os.walk(file_dir):
            for i in files:
                name = '/home/wyh/data/images/' + i + '\n'
                f.write(name)

if __name__ == '__main__':
    file_name(r'C:\Users\Administrator\Desktop\images')
import argparse
import sys
import matplotlib.pyplot as plt

def main(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument("log_file", help="path to log file")
    parser.add_argument("option", help="0 -> loss vs iter")
    args = parser.parse_args()

    with open(args.log_file) as f:
        lines = [line.rstrip("\n") for line in f]

    # skip the first 3 lines (startup output)
    lines = lines[3:]
    numbers = {'1', '2', '3', '4', '5', '6', '7', '8', '9', '0'}
    iters = []
    loss = []
    for line in lines:
        # summary lines start with the iteration number, e.g. "100: 3.2, ..."
        if line and line[0] in numbers:
            tokens = line.split(" ")
            if len(tokens) > 3:
                iters.append(int(tokens[0][:-1]))  # strip the trailing ':'
                loss.append(float(tokens[2]))

    plt.plot(iters, loss)
    plt.xlabel('iters')
    plt.ylabel('loss')
    plt.grid()
    plt.show()

if __name__ == "__main__":
    main(sys.argv)
./darknet detector valid cfg/voc.data cfg/yolov3.cfg backup/yolov3.weights -out ''
The detection results are written to the results folder.
https://github.com/rbgirshick/py-faster-rcnn/tree/master/lib/datasets
from voc_eval import voc_eval
print voc_eval('/home/cxx/Amusi/Object_Detection/YOLO/darknet/results/{}.txt',
               '/home/cxx/Amusi/Object_Detection/YOLO/darknet/datasets/pjreddie-VOC/VOCdevkit/VOC2007/Annotations/{}.xml',
               '/home/cxx/Amusi/Object_Detection/YOLO/darknet/datasets/pjreddie-VOC/VOCdevkit/VOC2007/ImageSets/Main/test.txt',
               'person', '.')
python2 compute_mAP.py  # the py-faster-rcnn eval code uses Python 2 print statements
To switch to another dataset, delete the cached annotations first:
rm annots.pkl
darknet's cfg files have a configuration parameter: burn_in
burn_in=1000
The assumption is that the global optimum lies near the initial weights, so for the first burn_in updates the learning rate ramps up from small to large; once the update count exceeds burn_in, the configured learning-rate policy takes over and decays it from large to small. This is clearly worth trying when fine-tuning.
Once this parameter is set, while update_num is less than burn_in the configured policy is not used; instead the learning rate follows
lr = base_lr * power(batch_num/burn_in, pwr)
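The warm-up can be sketched as follows (assuming base_lr=0.001, burn_in=1000 and pwr=4, values typical of the yolov3 cfgs; `burn_in_lr` is an illustrative helper, not darknet's actual code):

```python
def burn_in_lr(batch_num, base_lr=0.001, burn_in=1000, pwr=4):
    # During warm-up (batch_num < burn_in) the learning rate ramps up
    # polynomially; afterwards the configured policy (steps, etc.) takes over,
    # shown here simply as the base rate.
    if batch_num < burn_in:
        return base_lr * (batch_num / burn_in) ** pwr
    return base_lr  # placeholder for the configured policy

print(burn_in_lr(100))   # tiny at the start of training
print(burn_in_lr(1000))  # reaches base_lr exactly at burn_in
```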