TI中文支持网
TI's dedicated Chinese-language site for collecting and sharing technical Q&A

AM62P: Poor performance when the MNN (Alibaba Mobile Neural Network) inference framework runs inference on the GPU via OpenCL

Part Number: AM62P

We are running the MNN inference framework on the AM62P, using the OpenCL interface to perform deep learning inference on the GPU. The model requires approximately 3.3 GFLOPs of compute per inference, and the AM62P GPU has a peak throughput of 50 GFLOPS, yet our tests show a per-frame inference time of 150 ms. Inference time after int8 quantization is also around 150 ms. Has TI developed any applications that use the GPU for deep learning inference on the AM62P? Could you also provide detailed information about the AM62P GPU (model, instruction set, etc.)?
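For reference, the gap described above can be quantified with a quick back-of-the-envelope calculation; this sketch simply plugs in the numbers quoted in the question (3.3 GFLOPs per frame, 50 GFLOPS peak, 150 ms measured) and assumes the workload would be compute-bound in the ideal case:

```python
# Rough GPU-utilization estimate from the figures quoted in the question.
model_gflops = 3.3        # compute per inference, in GFLOPs (from the post)
gpu_peak_gflops = 50.0    # quoted peak GPU throughput, in GFLOPS
measured_ms = 150.0       # observed per-frame latency

# Ideal compute-bound time: work divided by peak throughput.
theoretical_ms = model_gflops / gpu_peak_gflops * 1000.0

# Throughput actually achieved, and the fraction of peak it represents.
achieved_gflops = model_gflops / (measured_ms / 1000.0)
utilization = achieved_gflops / gpu_peak_gflops

print(f"theoretical lower bound: {theoretical_ms:.0f} ms")   # 66 ms
print(f"achieved throughput: {achieved_gflops:.1f} GFLOPS")  # 22.0 GFLOPS
print(f"utilization of peak: {utilization:.0%}")             # 44%
```

By this estimate the model achieves roughly 44% of the quoted peak, so the 150 ms figure is slower than the ~66 ms compute-bound lower bound but not wildly so; memory bandwidth, kernel launch overhead, or lack of int8 acceleration in the OpenCL path could plausibly account for the difference, and the fact that int8 quantization does not help suggests the bottleneck is not raw arithmetic.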


Hello!

We have received your case and the investigation will take some time. Thank you for your patience.
