Model inference wrong

Hello.

I have a monocular depth estimation model and have completed compilation and quantization. But on the AI-G it doesn’t run inference; it just returns the input video as output. Can you check all my commands and my model?

Thank you.

Hello. This is TOPST.
Have you resolved the camera-related issue you previously asked about?
First, you converted a model of a type the toolkit does not recognize; currently, only classification and object detection models can be inferred in the toolkit and inference app.
Slight modifications are required to use other types of models.

First, the output tensor is currently emitted in custom_postproc.c under the build_network directory’s custom_postproc folder, and in post_process.c within the compiled network.

Currently, it simply prints the output tensor, so it may look as if no visualization or inference is being performed. You need to analyze this tensor, perform post-processing, and then retrieve the resulting values in AI-G’s tcnnapp for visualization.
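As an illustration only, a minimal sketch of what the tensor-analysis step might do for a depth model, assuming the output is a flat float map that gets normalized to 0..255 for later visualization. The function name and signature below are hypothetical, not the toolkit’s actual API:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical sketch: normalize a raw float depth map to 8-bit values
 * so it can later be colorized and drawn. Names are illustrative. */
static void depth_normalize_u8(const float *tensor, size_t len, uint8_t *out)
{
    float min = tensor[0], max = tensor[0];
    for (size_t i = 1; i < len; i++) {
        if (tensor[i] < min) min = tensor[i];
        if (tensor[i] > max) max = tensor[i];
    }
    /* guard against a constant map to avoid division by zero */
    float range = (max - min) > 0.0f ? (max - min) : 1.0f;
    for (size_t i = 0; i < len; i++)
        out[i] = (uint8_t)(255.0f * (tensor[i] - min) / range);
}
```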

In tcnnapp, you need to modify the post-processing steps in NnAppMain.c (which we shared previously) for the case where the model type is custom.
You also need to modify NnNeuralNetwork.c to determine which structure the values are stored in when the model type is custom.
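Roughly, the NnNeuralNetwork.c change amounts to adding a custom branch next to the existing classification/detection cases. The enum and function below are assumptions for illustration, not the actual tc-nn-app identifiers:

```c
#include <string.h>

/* Illustrative sketch: select which result structure holds the
 * post-processed values based on the model type; the custom case
 * is the new branch that has to be added. */
typedef enum { MODEL_CLASSIFICATION, MODEL_DETECTION, MODEL_CUSTOM } model_type_t;

static const char *select_result_store(model_type_t type)
{
    switch (type) {
    case MODEL_CLASSIFICATION: return "classification_result";
    case MODEL_DETECTION:      return "detection_result";
    case MODEL_CUSTOM:         return "custom_result";   /* new branch */
    default:                   return "unknown";
    }
}
```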

A model conversion guide for custom types is currently being prepared.
Please understand that the materials are still insufficient. Please feel free to contact us with any further questions.

Thank you.

Hello.

So I need to modify custom_postproc.c in tc-nn-toolkit when converting, quantizing, and compiling the model from TFLite to the Enlight type.

After that, I modify the post-processing in NnAppMain.c and NnNeuralNetwork.c and then rebuild AI-G. However, I see two occurrences of NPU_POST_CUSTOM in NnAppMain.c. Do I need to modify both of them?

One in the NnDrawResult(app_context_t *pContext) function,

one in the NnOutputResultData(app_context_t *pContext, MessageHandle msgHandle) function.

Thank you.

Yes, you can modify both. I handled everything in the NnDrawResult function, so NnOutputResultData was left empty.

Hello.

I’m working on another Linux server that never had the AI-G firmware installed. I ran ‘bitbake -c compile -f tc-nn-app’ instead of ‘bitbake telechips-topst-ai-image’ at step 3.6. Then I modified the post-processing in NnAppMain.c and NnNeuralNetwork.c.

After that, how do I rebuild AI-G/tcnnapp? Do I just follow section 3.7 in the TOPST guide?

Thank you.

Hello,
After making the modifications, run ‘bitbake -c compile -f tc-nn-app’ again.
Then, as shown in the picture below, navigate to the build directory of tcnnapp, and you should be able to see the built tcnnapp.
image
After transferring the app to the board using the scp command, set the permission with “chmod 755 tcnnapp” and run it with “./tcnnapp”.
Thank you.

Hello.

I have analyzed the model output in custom_postproc.c, then converted, quantized, and compiled the model.
Can I handle all post-processing in custom_postproc.c?

Is the output from the custom_postproc.c file passed to NnNeuralNetwork.c and then to NnAppMain.c?

Thank you.

Hello.

This is the TOPST manager. First, post-processing is handled in custom_postproc.c and post_process.c within the model.

In post_process.c, the output tensor is obtained through the API, and post-processing is performed on it in custom_postproc.c.

After that, in tc-nn-app, the post-processed results are obtained through the NPU API, and in NnAppMain.c, this value is used to perform post-processing tasks such as visualization.

In the post-processing code, you should declare a structure for the values produced from the obtained tensor, then declare the same structure within the tc-nn-app source code so that you can receive these values.
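As a hedged sketch of such a shared declaration (the structure name, fields, and dimensions below are assumptions for a depth model, not part of the SDK), the same layout would be kept byte-identical in the model’s post-processing code and in the tc-nn-app headers:

```c
#include <stdint.h>

/* Hypothetical shared result structure for a depth model.
 * The identical definition must appear on both sides so the
 * values are read back unchanged. */
#define DEPTH_W 256
#define DEPTH_H 256

typedef struct {
    int32_t width;                          /* map width in pixels   */
    int32_t height;                         /* map height in pixels  */
    uint8_t depth_map[DEPTH_W * DEPTH_H];   /* normalized 0..255 map */
} DepthResult;
```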

Thank you.

Hello.

I have declared struct Depth in post_process.c and NnType.h.

I declared “DepthResult depth;” in both custom_postproc.c and NnAppMain.c, and “extern DepthResult depth;” in NnNeuralNetwork.c and NnRtpm.c. Then “bitbake -c compile -f tc-nn-app” and chmod both completed successfully.

But when I run “./tcnnapp -n midas3_quantized/”, it freezes even though I didn’t add any logic yet. Are there any other files I need to modify?

Thanks.

Hello.
Could you please send me the modified app source code and post-processing code?
I’ll get back to you after testing.

Thank you.

This drive includes git, midas3_quantized, and custom_postproc.c.

In the source code, I modified NnType.h for the new struct, the enum _message_result_type in message_api.h, NnDrawResult in NnAppMain.c, NnRunInference in NnNeuralNetwork.c, and RtpmSendResultDataAsJson in NnRtpm.c.

Thank you.

Hello.
First, I ran the app and model you provided.
Command used: ./tcnnapp -N midas3_quantized/

I used camera input and display output. With MiDaS, inference proceeded but no subsequent logs were output, whereas with YOLO I confirmed that inference proceeded correctly.

Also, when running the unchanged tcnnapp -N midas3_quantized/, I confirmed that the inference logs were displayed correctly.

Since there is no problem with the model, I analyzed the tcnnapp source.
In NnNeuralNetwork.c, for the custom type, the call to the network_run_postprocess() function, which starts post-processing, is missing.

If you just add the function call in that part, it should resolve the issue without any problem.
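A minimal sketch of that fix, with network_run_postprocess() stubbed out since the real NPU API call’s exact signature is not shown in this thread (the surrounding function names here are also illustrative):

```c
#include <stdbool.h>

/* Stand-in for the NPU API; the real signature may differ. */
static bool g_postprocess_ran = false;

static void network_run_postprocess(void)
{
    g_postprocess_ran = true;
}

/* Sketch of the custom-type path in NnNeuralNetwork.c: inference ran,
 * but the post-processing call was missing and has to be added. */
static void run_custom_inference(void)
{
    /* ... inference runs here ... */
    network_run_postprocess();  /* the previously missing call */
}
```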
Thank you~

Thanks for your help. It works.

For color visualization in RTPM and on the display, which files do I need to modify?

Thank you.

For display visualization, you can add custom type logic inside the NnDrawResult function in the NnAppMain.c file. OpenCV-related functions exist in git/common/utils, so it would be good to refer to them.
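As a hedged sketch of what such custom-type logic might compute before handing pixels to those OpenCV helpers: map each normalized depth value onto an RGB color. A simple two-color ramp is used here; the type and function names are illustrative, not part of tc-nn-app:

```c
#include <stdint.h>

/* Illustrative colorization for a normalized 0..255 depth value:
 * one end of the range ramps toward red, the other toward blue. */
typedef struct { uint8_t r, g, b; } rgb_t;

static rgb_t depth_to_color(uint8_t d)
{
    rgb_t c;
    c.r = d;           /* high values -> red  */
    c.g = 0;
    c.b = 255 - d;     /* low values  -> blue */
    return c;
}
```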

In the case of RTPM, after sending the inference result values to RTPM, you can modify model->PostProcessor.py in the RTPM folder.
Thank you.