I made some changes to the model, so I retrained it and converted it to ONNX again. I will share the file via a Google Drive link.
The output_data folder contains the images and steering values I collected in CARLA.
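For context, the export step looked roughly like the minimal sketch below. This assumes a PyTorch PilotNet class in a module named model, a 3x66x200 input, and a checkpoint named pilotnet.pth; these names and the input size are placeholders, not the exact training code.

# Minimal sketch of the ONNX export step (assumed names and input size).
import torch
from model import PilotNet  # hypothetical module / class name

net = PilotNet()
net.load_state_dict(torch.load("pilotnet.pth", map_location="cpu"))
net.eval()

dummy_input = torch.randn(1, 3, 66, 200)  # classic PilotNet crop size (assumed)
torch.onnx.export(
    net,
    dummy_input,
    "pilotnet.onnx",
    input_names=["input"],
    output_names=["steer"],
    opset_version=11,  # any opset the SDK accepts
)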
(topst) avees@avees-MS-7E01:~/tc-nn-toolkit/EnlightSDK$ python converter.py pilotnet.onnx --output pilotnet.enlight --type unknown --dataset Custom --dataset-root ./output_data --num-images 100 --enable-track
Running the conversion with this command gave the output below.
[INFO] Find model_config pilotnet.json, but can't find it
[INFO] so, set DEFAULT config parameter
[v0.9.9] Converting to enlight format. start.
model : pilotnet.onnx
output : pilotnet.enlight
model_config : auto
type : unknown
yolo_version :
dfl_reg_max : 0
weight : None
mean : (0.486, 0.456, 0.406)
std : (0.229, 0.224, 0.225)
add_detection_post_process : None
num_class : -1
class_labels : None
omit_post_process : False
output_order : auto
variance : (0.125, 0.125)
no_background : False
logistic : softmax
force_output : None
input_shape : None
input_quantization_scale : 128.0
dataset : Custom
dataset_root : ./output_data
image_set : test
download : False
batch_size : 4
num_workers : 0
enable_letterbox : False
enable_track : True
num_images : 100
dump_stats : False
compatibility_log_root : ./log/compatibility_results
compatibility_list : None
disable_checking_compatibility : False
input_ch_pad : None
debug : False
track_per_channel : False
enable_channel_equalization : False
Checking arguments… done
Start ConstantFold optimizer
optimizing … done.
Start DeadCodeElimination optimizer
optimizing … done.
Start FuseLayers optimizer
optimizing … done.
Start DeadCodeElimination optimizer
optimizing … done.
Start ReplaceLayerEnlightFriendly optimizer
optimizing … done.
Start FuseLayers optimizer
optimizing … done.
Start ConstantFuse optimizer
optimizing … done.
Start PostProcessParameterFold optimizer
optimizing … done.
Start InputQNormalizeFold optimizer
optimizing … done.
Start ExposePadLayer optimizer
optimizing … done.
Start DecomposeActivation optimizer
optimizing … done.
Start MakeInputChannelPartition optimizer
optimizing … done.
Checking arguments… done
Checking compatibility with ENLIGHT NPU
╒═══════════════════════════════╤════╕
│ SUMMARY │ │
├───────────────────────────────┼────┤
│ Number of compatible layers │ 11 │
├───────────────────────────────┼────┤
│ Number of incompatible layers │ 0 │
├───────────────────────────────┼────┤
│ Total number of layers │ 11 │
╘═══════════════════════════════╧════╛
Checking compatibility with ENLIGHT NPU … Done
Writing compatibility result to … /home/avees/tc-nn-toolkit/EnlightSDK/log/compatibility_results/pilotnet_compatibility.log
Serializing Graph done.
Writing to File… Done
Writing File path : pilotnet.enlight
Converter. done.
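As a cross-check against the simulator results below, it can help to run the original ONNX model on the same image with the normalization values the converter printed above; the de-quantized value from enlight_sim.py should then land close to this reference. A minimal sketch using onnxruntime, assuming the ONNX model expects a normalized NCHW float input of size 66x200; the input size and the resize-only preprocessing are assumptions and should match how the model was trained.

# Reference inference on the original ONNX model, using the mean/std
# printed by converter.py above. Input size and preprocessing are assumed.
import numpy as np
import onnxruntime as ort
from PIL import Image

mean = np.array([0.486, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)

img = Image.open("010023.jpg").convert("RGB").resize((200, 66))  # (W, H)
x = np.asarray(img, dtype=np.float32) / 255.0
x = (x - mean) / std
x = x.transpose(2, 0, 1)[None, ...]  # HWC -> NCHW, add batch dim

sess = ort.InferenceSession("pilotnet.onnx")
input_name = sess.get_inputs()[0].name
steer = sess.run(None, {input_name: x})[0]
print("ONNX reference steer:", steer)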
In addition, the output of enlight_sim.py was as follows.
(topst) avees@avees-MS-7E01:~/tc-nn-toolkit/EnlightSDK$ python3 enlight_sim.py pilotnet.enlight --inputs 010023.jpg
[INFO] Find model_config pilotnet.json, but can't find it
[INFO] so, set DEFAULT config parameter
Inference image. start.
model : pilotnet.enlight
inputs : 010023.jpg
model_config : auto
output : None
result_root : ./output_result
th_iou : 0.5
th_conf : 0.5
has_background : False
topk : 5
force_resize : None
crop : None
use_cv2 : False
enable_letterbox : False
image_format : RGB
dump : False
dump_root : ./output_dump
dump_shape : enlight
dump_format : enlight
enable_show : False
enable_customize_post_process : None
enable_blazeface_post_process : None
enable_save_npy_result_for_unknown : False
enable_opts : False
Initializing Network done.
Custom_100
Checking arguments…
Save results (result) path : ./output_result/pilotnet/010023
Show de-quantized output tensor
shape: torch.Size([1, 1, 1])
OutputTensor([[[-0.2038]]], device='cuda:0')
Save results (text) path: ./output_result/pilotnet/010023/010023_result.txt
[1 / 1]
Inference. done.
For quantization, I ran the command below.
(topst) avees@avees-MS-7E01:~/tc-nn-toolkit/EnlightSDK$ python quantizer.py pilotnet.enlight
[INFO] Find model_config pilotnet.json, but can't find it
[INFO] so, set DEFAULT config parameter
Quantization. start
model : pilotnet.enlight
output : None
model_config : auto
stats_file : None
scales_file : None
qbits_file : None
dump_scales : False
dump_qbits : False
dump_type : enlight_name
disable_sanity_checker : False
custom_qparam_type : enlight_name
overwrite_concat_qscale : False
m_std_8 : None
m_std_4 : 5
m_std_ratio : None
weight_range_asymmetric : False
disable_clip_min_max : False
force_output_scale : None
Start BatchormalizeFold optimizer
optimizing … done.
Start FuseConstantEltwLayer optimizer
optimizing … done.
Start TreatIntermediateOuput optimizer
optimizing … done.
Start ModifyLayerQuantizationFriendly optimizer
optimizing … done.
quantization… done.
End sanity check for custom quantization
Start MakeCompilerFriendly optimizer
optimizing … done.
Start MakeLayerAligned optimizer
optimizing … done.
Serializing Graph done.
Writing to File… Done
Writing File path : pilotnet_quantized.enlight
Quantizer done.
Running enlight_sim.py on the quantized file gave the following result.
(topst) avees@avees-MS-7E01:~/tc-nn-toolkit/EnlightSDK$ python3 enlight_sim.py pilotnet_quantized.enlight --inputs 010023.jpg
[INFO] Find model_config pilotnet.json, but can't find it
[INFO] so, set DEFAULT config parameter
Inference image. start.
model : pilotnet_quantized.enlight
inputs : 010023.jpg
model_config : auto
output : None
result_root : ./output_result
th_iou : 0.5
th_conf : 0.5
has_background : False
topk : 5
force_resize : None
crop : None
use_cv2 : False
enable_letterbox : False
image_format : RGB
dump : False
dump_root : ./output_dump
dump_shape : enlight
dump_format : enlight
enable_show : False
enable_customize_post_process : None
enable_blazeface_post_process : None
enable_save_npy_result_for_unknown : False
enable_opts : False
Initializing Network done.
Custom_100
Checking arguments…
Save results (result) path : ./output_result/pilotnet_quantized/010023
Show de-quantized output tensor
shape: torch.Size([1, 1, 1])
OutputTensor([[[-0.2023]]], device='cuda:0')
Save results (text) path: ./output_result/pilotnet_quantized/010023/010023_result.txt
[1 / 1]
Inference. done.
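As a quick sanity check on the two de-quantized outputs above (-0.2038 before quantization, -0.2023 after), the quantization error for this single image works out to be small; it would still be worth repeating the comparison over more images from output_data.

# Float vs. quantized steering output for the single test image 010023.jpg,
# using the two values printed by enlight_sim.py above.
float_out = -0.2038  # pilotnet.enlight
quant_out = -0.2023  # pilotnet_quantized.enlight

abs_err = abs(float_out - quant_out)
rel_err = abs_err / abs(float_out)
print(f"absolute error: {abs_err:.4f}")  # 0.0015
print(f"relative error: {rel_err:.2%}")  # ~0.74%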
Up to this point the process appears to have gone without any major issues; could you please confirm whether that is the case?
Also, if there is any documentation that covers the subsequent compilation or the AI-G board deployment steps in more detail than the files on GitHub, I would really appreciate it if you could share it!