
Security for Edge AI Models

Sek Chai - EOC 2023 - Duration: 18:25

Security is usually an afterthought. Edge AI models can be copied, cloned, reverse-engineered, and tampered with. In this presentation, we will talk about how AI models can be secured for deployment.

schai
Score: 0 | 1 year ago | no reply

@rjculley -- good question. Training only locks in the parameters of the model, but it does not guarantee that the model that is actually performing the inference on device is the same model. An attack may compromise the model on-device so that it no longer behaves the way you've trained it. Watermarking allows you to "check" the model after the model is trained and deployed.
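
For anyone curious what that "check" can look like in practice, here is a minimal, hypothetical sketch of a black-box trigger-set check in Python. The trigger set, the `model_predict` placeholder, and the 0.95 threshold are illustrative assumptions, not the specific scheme from the talk:

```python
import numpy as np

# Minimal sketch (hypothetical): verify a deployed model against a secret trigger set.
# A trigger-set watermark works the same way in spirit: the owner picks secret inputs,
# records the responses the trained model should give, and later re-queries the
# deployed model to confirm it still answers that way.

rng = np.random.default_rng(42)
trigger_inputs = rng.normal(size=(8, 16))   # secret "key" inputs kept by the model owner

def model_predict(x):
    # Placeholder for the real on-device inference call.
    return int(np.argmax(x[:4]))

# Recorded right after training, before the model ships.
expected_outputs = np.array([model_predict(x) for x in trigger_inputs])

def verify_model(predict_fn, triggers, expected, min_match=0.95):
    """Black-box check: re-query the deployed model with the secret triggers and
    confirm its answers still match what the trained model produced."""
    answers = np.array([predict_fn(x) for x in triggers])
    return np.mean(answers == expected) >= min_match

if verify_model(model_predict, trigger_inputs, expected_outputs):
    print("Check passed: deployed model still behaves like the trained model")
else:
    print("Check failed: model may have been tampered with or replaced")
```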

rjculley
Score: 0 | 1 year ago | no reply

I worked at a company that worked on a neural network to evaluate the consistency of a material based on the work used to stir the material. With this, we developed coefficients that were loaded into the code after the training process was complete. If my neural network is no longer learning, how can there be an attack on the device that would need watermarking?
