Talk
ML on the Edge: Tradeoffs and Requirements
Kate Stewart
28:55
Over the last few years, machine learning has increasingly been deployed closer to where data is collected, in embedded systems. These endpoint devices may be resource constrained, though, in terms of power, memory, or communication capabilities, and sometimes all three. Applying machine learning on these endpoint devices is possible, and it enables system-wide efficiencies to be realized. This talk will explore the requirements and tradeoffs to consider for such systems when using the Zephyr RTOS and TensorFlow Lite for Microcontrollers projects.
According to Kate Stewart, what is the primary benefit of running machine learning on resource-constrained edge devices rather than sending all data to the cloud?
A
It eliminates the need for any cloud services for the product lifecycle.
B
It enables system-wide efficiencies by processing data where it is collected, reducing communication overhead and only sending important events upstream.
C
It allows devices to run arbitrarily large, complex models locally without regard to power or memory.
D
It guarantees that devices will never need security updates because nothing is transmitted.
E
It primarily reduces development time since models are easier to train on-device.