TensorRT Development Based on YOLOv3

Models

TensorRT for YOLOv3: https://github.com/lewes6369/TensorRT-Yolov3

Test Environments

Ubuntu 16.04

TensorRT 5.0.2.6/4.0.1.6

CUDA 9.2


Download the Caffe model converted from the official model:

Baidu Cloud here pwd: gbue

Google Drive here

If you run a model trained by yourself, comment out the "upsample_param" blocks and modify the last layer of the prototxt as follows:

layer {
    # the bottoms are the yolo input layers
    bottom: "layer82-conv"
    bottom: "layer94-conv"
    bottom: "layer106-conv"
    top: "yolo-det"
    name: "yolo-det"
    type: "Yolo"
}

If your model uses different YOLO kernels, you also need to change the YOLO configuration in "YoloConfigs.h".
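For orientation, here is a sketch of what that configuration looks like: one kernel entry per YOLO output scale, each with its grid size and anchors. The struct layout follows the wrapper's style from memory and the anchors are the standard YOLOv3 values, so treat it as illustrative rather than a verbatim copy:

// YoloConfigs.h (sketch): one kernel description per YOLO output scale.
namespace Yolo
{
    static constexpr int CHECK_COUNT = 3;    // anchor pairs per scale

    struct YoloKernel
    {
        int width;                       // grid width of the output layer
        int height;                      // grid height of the output layer
        float anchors[CHECK_COUNT * 2];  // (w, h) anchor pairs
    };

    // Grids for a 608x608 input are 19/38/76; for 416x416 they are 13/26/52.
    static const YoloKernel yolo1 = {19, 19, {116, 90, 156, 198, 373, 326}};
    static const YoloKernel yolo2 = {38, 38, {30, 61, 62, 45, 59, 119}};
    static const YoloKernel yolo3 = {76, 76, {10, 13, 16, 30, 33, 23}};
}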

Run Sample

# build source code
git submodule update --init --recursive
mkdir build
cd build && cmake .. && make && make install
cd ..

# for yolov3-608
./install/runYolov3 --caffemodel=./yolov3_608.caffemodel --prototxt=./yolov3_608.prototxt --input=./test.jpg --W=608 --H=608 --class=80

# for fp16
./install/runYolov3 --caffemodel=./yolov3_608.caffemodel --prototxt=./yolov3_608.prototxt --input=./test.jpg --W=608 --H=608 --class=80 --mode=fp16

# for int8 with calibration datasets
./install/runYolov3 --caffemodel=./yolov3_608.caffemodel --prototxt=./yolov3_608.prototxt --input=./test.jpg --W=608 --H=608 --class=80 --mode=int8 --calib=./calib_sample.txt

# for yolov3-416 (need to modify include/YoloConfigs for YoloKernel)
./install/runYolov3 --caffemodel=./yolov3_416.caffemodel --prototxt=./yolov3_416.prototxt --input=./test.jpg --W=416 --H=416 --class=80


Performance

Eval Result

Run the models above with "--evallist=labels.txt" appended.

The INT8 calibration data was made from 200 images picked from val2014 (see the scripts directory).

Note:

The Caffe implementation does not differ in the YOLO layer and the NMS, so its results should be similar to the TensorRT FP32 results.

Details About Wrapper

See TensorRTWrapper: https://github.com/lewes6369/tensorRTWrapper

TRTWrapper

Desc

a wrapper for TensorRT nets (Caffe parser)

Test Environments

Ubuntu 16.04

TensorRT 5.0.2.6/4.0.1.6

CUDA 9.2

About Wrapper

You can use the wrapper like this:

// normal (FP32)
std::vector<std::vector<float>> calibratorData;
trtNet net("vgg16.prototxt", "vgg16.caffemodel", {"prob"}, calibratorData);

// fp16
trtNet net_fp16("vgg16.prototxt", "vgg16.caffemodel", {"prob"}, calibratorData, RUN_MODE::FLOAT16);

// int8
trtNet net_int8("vgg16.prototxt", "vgg16.caffemodel", {"prob"}, calibratorData, RUN_MODE::INT8);

// run inference:
net.doInference(input_data.get(), outputData.get());

// can print time cost
net.printTime();

// can write the engine to disk and load from the engine file
net.saveEngine("save_1.engine");
trtNet net2("save_1.engine");
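Putting the pieces together, a minimal end-to-end sketch follows. It is based on the wrapper's sample usage; the preprocessImage() helper is a hypothetical placeholder, and getOutputSize() is assumed to return the output byte count as in the sample code, so adjust to your version:

#include <memory>
#include <string>
#include <vector>
#include "TrtNet.h"  // wrapper header; adjust the include path to your checkout

// Hypothetical placeholder: real code should decode the image, resize it to
// w x h, normalize, and lay it out in CHW float order.
std::vector<float> preprocessImage(const std::string& /*path*/, int c, int h, int w)
{
    return std::vector<float>(static_cast<size_t>(c) * h * w, 0.0f);
}

int main()
{
    // For INT8 calibration each inner vector holds one preprocessed input;
    // leave the outer vector empty for FP32/FP16 runs.
    std::vector<std::vector<float>> calibratorData;
    // calibratorData.push_back(preprocessImage("calib_0001.jpg", 3, 608, 608));

    trtNet net("yolov3_608.prototxt", "yolov3_608.caffemodel", {"yolo-det"}, calibratorData);

    std::vector<float> input = preprocessImage("test.jpg", 3, 608, 608);

    // Assumption: getOutputSize() returns the output byte count, as in the
    // wrapper's sample; otherwise size the buffer from the engine bindings.
    std::unique_ptr<float[]> outputData(new float[net.getOutputSize() / sizeof(float)]);

    net.doInference(input.data(), outputData.get());
    net.printTime();
    return 0;
}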

When you need to add a new plugin, just add the plugin code to the PluginFactory.
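As an illustration of the general shape such an addition takes with the TensorRT 5 Caffe-parser interface, here is a sketch; the "my-custom-layer" name and createMyCustomPlugin() are hypothetical placeholders, not the wrapper's actual factory code:

#include <cstring>
#include "NvCaffeParser.h"
#include "NvInfer.h"

// Hypothetical: builds your nvinfer1::IPlugin implementation for the layer.
nvinfer1::IPlugin* createMyCustomPlugin(const nvinfer1::Weights* weights, int nbWeights);

// Sketch of a Caffe-parser plugin factory in the TensorRT 5 style.
class PluginFactory : public nvcaffeparser1::IPluginFactory
{
public:
    // The parser asks whether a layer name from the prototxt is a plugin.
    bool isPlugin(const char* layerName) override
    {
        return std::strstr(layerName, "my-custom-layer") != nullptr; // hypothetical name
    }

    // For plugin layers, the parser asks the factory to construct them.
    nvinfer1::IPlugin* createPlugin(const char* layerName,
                                    const nvinfer1::Weights* weights,
                                    int nbWeights) override
    {
        if (std::strstr(layerName, "my-custom-layer"))
            return createMyCustomPlugin(weights, nbWeights);
        return nullptr; // not a plugin layer handled by this factory
    }
};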

Run Sample

# for classification
cd sample
mkdir build
cd build && cmake .. && make && make install
cd ..
./install/runNet --caffemodel=${CAFFE_MODEL_NAME} --prototxt=${CAFFE_PROTOTXT} --input=./test.jpg
