
Make your IoT device feel, hear and see things with TinyML

Jan Jongboom - EOC 2020 - Duration: 40:09


Many IoT devices are very simple: just a radio sending raw sensor values to the cloud. But this limits the usefulness of a deployment. A sensor can report that it saw movement in front of it, but not what it saw. Or a sensor might notice that it's being moved around, but not whether it's attached to a vehicle or just being carried around. The reason is simple: to know what's happening in the real world you need lots of data, and sending all that data over your IoT network quickly drains your battery and racks up your network bill.

How can we do better? In this talk we'll look at ways to draw conclusions from raw sensor data right on the device, from signal processing to running neural networks on the edge. It's time to add some brains to your IoT deployment. In this talk you'll learn:

  • What is TinyML, and how can your sensors benefit from it?
  • How signal processing can help you make your TinyML deployment more predictable and better performing.
  • How you can start making your devices feel, hear and see things - all running in real time on Cortex-M-class devices.
  • Hands-on demonstrations: from initial data capture with real devices, to building and verifying TinyML models, to deployment on device.

Kevin.Kramb
Score: 0 | 4 years ago | 1 reply

Thanks Jan. Very intriguing technology and cool demo. I'm wondering how narrowly or broadly the inferencing engine works after training. Will it continue to work in environments or with noise that weren't included in the original training?
For example, in the demo you trained with pictures of eval boards on your desk. If you moved the eval boards over to your piano keyboard would that original model continue to work on the different background? Or would you need to retrain with additional photos taken on the keyboard?
Another example, I assume you trained the "hello world" detector with your voice in your room. Would that model continue to work if you spoke in a noisier environment such as outside with traffic noise? Would that model work with another person's voice that wasn't included in the training?

janjongboom (Speaker)
Score: 0 | 4 years ago | no reply

Great question. In general neural networks become unpredictable when dealing with unseen data, which is why it's important to have a good, representative subset of real-world data - and also why building a quality dataset is often the most work. We do a couple of things to help out with this, though:

  1. I like to deploy a secondary ML model for anomaly detection alongside the neural network. This model is only responsible for classifying whether the data looks like anything it saw during training. If not, discard the classification and make a note of it. You can build this for a number of models (but not for audio/images at the moment) in Edge Impulse already.
  2. We leverage data augmentation to harden the model for the real world. E.g. for audio we add artificial noise and mask time/frequency bands during training, and for images we randomly crop, zoom in, and add noise to the dataset. This helps a lot (see the sketch just after this list).
  3. For audio the signal processing blocks we use help a lot with cleaning up the data, and highlighting the interesting parts of the signal, making this work much better in real-world scenarios. E.g. MFCC highlights and separates the signal into frequency bands associated with human speech, and thus you learn time/frequency correlations not on the raw signal but on how humans perceive the sound. This helps with generalization. It's still important to have a nice combination of data across genders, accents, pitches, etc. to have something work very well, but you don't need a sample of every single person in the world.
  4. On image models: the transfer learning model forces the neural network to recognize things from shapes / contrast rather than take shortcuts ("I see a piece of Jan's desk, it must be this dev board"). That's because the lower layers of the neural network are trained on a generic set of images, and then the top layers are retrained with your image set. Together with data augmentation this helps a lot with making a model generalize well in the real world.
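
To make point 2 a bit more concrete, here is a rough NumPy sketch of that style of audio augmentation (noise injection plus masking random time/frequency bands of a spectrogram). This is purely illustrative - the parameters and the actual implementation inside Edge Impulse differ:

```python
import numpy as np

rng = np.random.default_rng(42)

def augment_spectrogram(spec, noise_level=0.05, n_freq_masks=2, n_time_masks=2,
                        max_mask_width=8):
    """Noise injection plus time/frequency masking on a (freq_bins, time_frames) array."""
    out = spec.copy()

    # 1. Add Gaussian noise scaled to the signal's dynamic range.
    out += rng.normal(0.0, noise_level * np.std(out), size=out.shape)

    # 2. Zero out a few random frequency bands (rows).
    for _ in range(n_freq_masks):
        width = int(rng.integers(1, max_mask_width + 1))
        start = int(rng.integers(0, max(1, out.shape[0] - width)))
        out[start:start + width, :] = 0.0

    # 3. Zero out a few random time bands (columns).
    for _ in range(n_time_masks):
        width = int(rng.integers(1, max_mask_width + 1))
        start = int(rng.integers(0, max(1, out.shape[1] - width)))
        out[:, start:start + width] = 0.0

    return out

# Example: augment a made-up 13-coefficient x 49-frame MFCC window.
mfcc_window = rng.normal(size=(13, 49))
augmented = augment_spectrogram(mfcc_window)
```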

Anyway, I could summarize the wall of text above simply with: it depends, make sure to measure, and never blindly trust neural networks - have a safety guard (whether it's normal thresholds, basic control flow checking, or anomaly detection code).
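
To give a minimal, hypothetical sketch of what such a safety guard can look like in practice - the names and thresholds below are made up, only the shape of the check matters:

```python
# Hypothetical post-inference guard: confidence threshold plus anomaly check.
CONFIDENCE_THRESHOLD = 0.80   # minimum top-class probability we act on
ANOMALY_THRESHOLD = 0.30      # above this, the input doesn't resemble training data

def guarded_prediction(class_scores, anomaly_score):
    """Return a label only when the classifier is confident and the input looks familiar.

    `class_scores` is a dict of label -> probability from the classifier;
    `anomaly_score` comes from a separate anomaly-detection model (e.g. distance
    to the nearest cluster seen during training). Both are assumed inputs here.
    """
    label, confidence = max(class_scores.items(), key=lambda kv: kv[1])

    if anomaly_score > ANOMALY_THRESHOLD:
        return None, "input does not resemble training data"
    if confidence < CONFIDENCE_THRESHOLD:
        return None, "classifier not confident enough"
    return label, "ok"

# Example usage with made-up numbers:
print(guarded_prediction({"hello world": 0.92, "noise": 0.08}, anomaly_score=0.12))
print(guarded_prediction({"hello world": 0.55, "noise": 0.45}, anomaly_score=0.12))
```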

Doini
Score: 0 | 4 years ago | no reply

Thank you for this great presentation!

IoTsri
Score: 0 | 4 years ago | no reply

Very interesting presentation.

Stefan.Krueger
Score: 1 | 4 years ago | no reply

thanks for this nice explanation!
enjoyed your walk-through!

krish
Score: 1 | 4 years ago | no reply

Excellent presentation, thank you very much.
