Installation:
1. Create a conda virtual environment
conda create -n openmmlab python=3.7 -y
conda activate openmmlab
2. Install PyTorch (this example assumes CUDA 10.1)
conda install pytorch==1.7.0 torchvision==0.8.0 cudatoolkit=10.1 -c pytorch
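Optionally, a quick Python check confirms that this PyTorch build can see the GPU (a minimal sketch; run it inside the activated openmmlab environment):

import torch

# Expect '1.7.0', '10.1' and True if the CUDA 10.1 build installed correctly
print(torch.__version__)
print(torch.version.cuda)
print(torch.cuda.is_available())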
3. Install mmcv-full (required version: >=1.4.5, <=1.6.0)
pip install mmcv-full==1.4.5 -f https://download.openmmlab.com/mmcv/dist/cu101/torch1.7.0/index.html
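To confirm that the CUDA ops of mmcv-full were compiled against the expected CUDA and compiler versions, mmcv ships two small helpers (a minimal sketch):

from mmcv.ops import get_compiler_version, get_compiling_cuda_version

# Both should be consistent with the installed PyTorch / CUDA 10.1 build
print(get_compiling_cuda_version())
print(get_compiler_version())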
4. Install MMDetection
pip install mmdet
5. Install MMRotate
pip install mmrotate
6. Install the related dependencies (build MMRotate from source)
git clone https://github.com/open-mmlab/mmrotate.git
cd mmrotate
pip install -r requirements/build.txt
pip install -v -e .  # or "python setup.py develop"
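After the source install, a short import check verifies that all three packages are visible and that the mmcv-full version falls in the required range (a minimal sketch):

import mmcv
import mmdet
import mmrotate

# mmcv-full should report a version in [1.4.5, 1.6.0]
print(mmcv.__version__)
print(mmdet.__version__)
print(mmrotate.__version__)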
Verify the installation with a demo:
1. Download the pretrained weights
Download link: https://download.openmmlab.com/mmrotate/v0.1.0/oriented_rcnn/oriented_rcnn_r50_fpn_fp16_1x_dota_le90/oriented_rcnn_r50_fpn_fp16_1x_dota_le90-57c88621.pth
2. Run the demo:
python demo/image_demo.py demo/demo.jpg configs/oriented_rcnn/oriented_rcnn_r50_fpn_fp16_1x_dota_le90.py checkpoints/oriented_rcnn_r50_fpn_fp16_1x_dota_le90-57c88621.pth
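The same inference can also be done from Python via the MMDetection API that MMRotate builds on; the sketch below assumes the checkpoint was saved under checkpoints/ and writes the visualization to result.jpg (both paths are just examples):

from mmdet.apis import inference_detector, init_detector

import mmrotate  # noqa: F401  # importing mmrotate registers the rotated detectors

config_file = 'configs/oriented_rcnn/oriented_rcnn_r50_fpn_fp16_1x_dota_le90.py'
checkpoint_file = 'checkpoints/oriented_rcnn_r50_fpn_fp16_1x_dota_le90-57c88621.pth'

# Build the detector from the config and load the downloaded weights
model = init_detector(config_file, checkpoint_file, device='cuda:0')

# Run inference on the demo image; the result is a per-class list of rotated boxes
result = inference_detector(model, 'demo/demo.jpg')

# Draw boxes with score >= 0.3 and save the visualization
model.show_result('demo/demo.jpg', result, score_thr=0.3, out_file='result.jpg')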
3. Check the test result:
Prepare the DOTA dataset:
1. The DOTA aerial imagery dataset
Download link: https://captain-whu.github.io/DOTA/dataset.html
2. DOTA dataset directory structure
mmrotate
├── mmrotate
├── tools
├── configs
├── data
│   ├── DOTA
│   │   ├── train
│   │   ├── val
│   │   ├── test
3. Change the dataset base path
Change data_root in configs/_base_/datasets/dotav1.py so that it points to the split DOTA dataset (see the sketch below).
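Inside configs/_base_/datasets/dotav1.py the change amounts to pointing data_root at the directory produced by the split step that follows; the exact path is an example and depends on where img_split.py writes its output:

# configs/_base_/datasets/dotav1.py (excerpt)
dataset_type = 'DOTADataset'
# Example path: the single-scale split produced by img_split.py
data_root = 'data/split_ss_dota/'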
4. Split (crop) the dataset
python tools/data/dota/split/img_split.py --base-json \
tools/data/dota/split/split_configs/ss_trainval.json
python tools/data/dota/split/img_split.py --base-json \
tools/data/dota/split/split_configs/ss_test.json
To obtain the multi-scale version of the dataset:
python tools/data/dota/split/img_split.py --base-json \
tools/data/dota/split/split_configs/ms_trainval.json
python tools/data/dota/split/img_split.py --base-json \
tools/data/dota/split/split_configs/ms_test.json
Test a model:
# single-gpu
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]
# multi-gpu
./tools/dist_test.sh ${CONFIG_FILE} ${CHECKPOINT_FILE} ${GPU_NUM} [optional arguments]
# multi-node in slurm environment
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments] --launcher slurm
For example:
# single GPU
python ./tools/test.py \
configs/rotated_retinanet/rotated_retinanet_obb_r50_fpn_1x_dota_le90.py \
checkpoints/SOME_CHECKPOINT.pth --format-only \
--eval-options submission_dir=work_dirs/Task1_results
# single node, multiple GPUs (here the number of GPUs is set to 1)
./tools/dist_test.sh \
configs/rotated_retinanet/rotated_retinanet_obb_r50_fpn_1x_dota_le90.py \
checkpoints/SOME_CHECKPOINT.pth 1 --format-only \
--eval-options submission_dir=work_dirs/Task1_results
Evaluate the test accuracy:
# single GPU
python ./tools/test.py \
configs/rotated_retinanet/rotated_retinanet_obb_r50_fpn_1x_dota_le90.py \
checkpoints/SOME_CHECKPOINT.pth --eval mAP
# single node, multiple GPUs (here the number of GPUs is set to 1)
./tools/dist_test.sh \
configs/rotated_retinanet/rotated_retinanet_obb_r50_fpn_1x_dota_le90.py \
checkpoints/SOME_CHECKPOINT.pth 1 --eval mAP
Visualize the results:
python ./tools/test.py \
configs/rotated_retinanet/rotated_retinanet_obb_r50_fpn_1x_dota_le90.py \
checkpoints/SOME_CHECKPOINT.pth \
--show-dir work_dirs/vis
Train a model:
# single GPU; to specify the work directory on the command line, add --work-dir ${YOUR_WORK_DIR}
python tools/train.py ${CONFIG_FILE} [optional arguments]
# single node, multiple GPUs
./tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM} [optional arguments]
The optional arguments are:
--no-validate (not recommended): by default, the codebase runs evaluation during training; use --no-validate to disable this behavior.
--work-dir ${WORK_DIR}: override the work directory specified in the config file.
--resume-from ${CHECKPOINT_FILE}: resume training from a previous checkpoint file.
The difference between resume-from and load-from: resume-from loads both the model weights and the optimizer state, and the epoch counter is also inherited from the specified checkpoint, so it is typically used to resume a training run that was interrupted unexpectedly. load-from only loads the model weights and training starts from epoch 0, so it is usually used for fine-tuning (see the sketch below).
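For fine-tuning, the pretrained weights can also be specified directly in the config instead of on the command line; a minimal sketch (the checkpoint path is only an example):

# excerpt from a training config
# load_from starts from the given weights but trains from epoch 0 (fine-tuning)
load_from = 'checkpoints/oriented_rcnn_r50_fpn_fp16_1x_dota_le90-57c88621.pth'
# resume_from would additionally restore the optimizer state and epoch counter
resume_from = None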