libarmnn33

Inference engine for CPUs, GPUs and NPUs - shared library

Description

Arm NN is a set of tools that enables machine learning workloads on any hardware. It provides a bridge between existing neural network frameworks and whatever hardware is available and supported. On Arm architectures (arm64 and armhf) it uses the Arm Compute Library to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as possible. On other architectures/hardware it falls back to unoptimised reference implementations.

This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX. Arm NN takes networks from these frameworks, translates them into its internal format and then, through the Arm Compute Library, deploys them efficiently on Cortex-A CPUs and, if present, Mali GPUs.

This is the shared library package.

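For illustration, the following is a minimal sketch of how an application might use this library through the Arm NN C++ API: loading a TensorFlow Lite model, optimising it for the CpuAcc (Arm Compute Library) backend with a CpuRef fallback, and running inference. The model file name and the tensor names "input"/"output" are placeholders, a float32 model is assumed, and exact API details vary between Arm NN releases.

  #include <armnn/ArmNN.hpp>
  #include <armnnTfLiteParser/ITfLiteParser.hpp>

  #include <utility>
  #include <vector>

  int main()
  {
      // Parse a TensorFlow Lite model into an Arm NN network.
      auto parser = armnnTfLiteParser::ITfLiteParser::Create();
      armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile("model.tflite");

      // Look up input/output bindings (layer id + tensor info) for the first subgraph.
      auto inputBinding  = parser->GetNetworkInputBindingInfo(0, "input");
      auto outputBinding = parser->GetNetworkOutputBindingInfo(0, "output");

      // Create a runtime and optimise the network for the preferred backends:
      // CpuAcc (Arm Compute Library on Cortex-A), falling back to CpuRef.
      armnn::IRuntime::CreationOptions options;
      armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);
      armnn::IOptimizedNetworkPtr optNet = armnn::Optimize(
          *network,
          {armnn::Compute::CpuAcc, armnn::Compute::CpuRef},
          runtime->GetDeviceSpec());

      armnn::NetworkId netId;
      runtime->LoadNetwork(netId, std::move(optNet));

      // Bind input/output buffers and run one inference.
      std::vector<float> inputData(inputBinding.second.GetNumElements());
      std::vector<float> outputData(outputBinding.second.GetNumElements());

      armnn::InputTensors inputs{
          {inputBinding.first, armnn::ConstTensor(inputBinding.second, inputData.data())}};
      armnn::OutputTensors outputs{
          {outputBinding.first, armnn::Tensor(outputBinding.second, outputData.data())}};

      runtime->EnqueueWorkload(netId, inputs, outputs);
      return 0;
  }

Linking such a program typically requires the development package and the parser library in addition to this shared library package.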