Theatre
Neural Networks: From GPU to MCU
Louis Gobin + 2 more
37:37
Join this webinar to understand, step by step, how to port your machine learning model from your preferred development environment to an MCU. We will go over a few of the pipelines available to you and cover:
- Graph optimization
- Model quantization
- Memory optimization
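As a quick illustration of the quantization step above, here is a minimal NumPy sketch of symmetric per-tensor int8 quantization — the general idea behind post-training quantization tools, not the specific pipeline shown in the webinar (function names are illustrative):

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map floats to [-127, 127]."""
    scale = np.max(np.abs(weights)) / 127.0  # one scale for the whole tensor
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.0, 0.25, 0.9], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# int8 storage cuts weight memory 4x vs. float32, at the cost of a small
# rounding error (at most half a quantization step per weight)
```

Production flows (e.g. TensorFlow Lite conversion before import into STM32Cube.AI) add calibration data and per-channel scales, but the memory/accuracy trade-off is the same.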
Why would a team choose STM32Cube.AI instead of NanoEdge AI Studio when moving a model from a GPU development environment to an STM32 microcontroller?
A
Because STM32Cube.AI ingests models from common frameworks (TensorFlow, PyTorch, ONNX, etc.) and optimizes them for STM32 without changing the model's internal logic by default.
B
Because STM32Cube.AI automatically generates a new STM32-specialized model from raw data, replacing the need to train any model beforehand.
C
Because STM32Cube.AI only supports tiny, single-sensor signal analysis models and cannot handle vision or multimodal models.
D
Because STM32Cube.AI forces a full integer-only quantization of any model to guarantee smallest possible memory footprint.
E
Because STM32Cube.AI only produces binary firmware blobs that can’t be inspected or integrated into custom projects.








