
Embedded Vision: an Introduction

Peter McLaughlin - Watch Now - Duration: 56:16

Computer vision is being revolutionized by advances in artificial intelligence and edge hardware. By applying deep learning, computers can detect features in images more reliably and more effectively than the human eye. Integrating computer vision into embedded systems creates powerful new functionality, exciting new use-cases, and opportunities for entirely new products. Present-day capabilities and applications merely scratch the surface of computer vision's potential, which, according to Forbes magazine, is essential to the Fourth Industrial Revolution. Deploying computer vision in embedded systems, however, presents considerable technical challenges, in particular requiring knowledge of camera optics and advanced image processing techniques. In this session, Peter McLaughlin will provide a broad introduction to computer vision in the embedded system space. Topics covered include computer vision technologies and their use-cases, camera types and data interfaces, optics, image processing, and hardware requirements. Attendees will walk away armed with information to help them get started with embedded vision and apply it to their own projects.

Keywords

Computer vision, image processing, artificial intelligence.

Takeaway

Attendees will take away an understanding of:

  • The impact and potential of computer vision in the embedded system space
  • Embedded vision use-cases, hardware selection and image processing
  • What to contemplate when designing an embedded vision solution
  • How to get started with computer vision and apply it to an embedded systems project

Intended audience

This session targets embedded system developers and managers who are curious about applying computer vision to embedded systems.

Nyquist
Score: -1 | 12 months ago | 1 reply

Hi Peter,
I have a number of follow-up questions from your session on Embedded Vision: an Introduction. (1) Beyond using FFmpeg, does ARM or NXP offer an onboard codec chip that compresses streaming video into an MP4 format? (2) Given that I am looking to extend a camera lens to a distance of 10 feet from the microcontroller, is there an ARM or NXP kit that you might suggest? (3) I have been using the STM32F4 Discovery kit, only to find that the sensor lens cannot be extended beyond 3 feet, and I know of no way to record streaming video onto the microSD card in a format such as MP4. Could you suggest how I might achieve that with a different kit?
Thanks in advance for your help ...
I thoroughly enjoyed your presentation.

Peter_McLaughlin (Speaker)
Score: 0 | 12 months ago | 1 reply

Hello, thank you for watching the talk. Feedback on your questions:

  1. I'm not aware of a specialized onboard DSP dedicated to video compression. Regarding the use of libraries, this NXP application note gives a good overview for i.MX RT MCUs: https://www.nxp.com/docs/en/application-note/AN13205.pdf
  2. If you want to place the camera 10 feet from the MCU, protocols like MIPI and DVP can't go that far. GMSL is an option, but it's more expensive to integrate. I'd suggest looking at GigE or USB3 cameras for that distance, and I'd recommend the following ST kit for exploring GigE / USB: https://www.st.com/en/evaluation-tools/stm32mp157f-dk2.html. Note that this is an MPU rather than an MCU, so it uses Linux and Yocto tooling - an important consideration if you're more MCU oriented.
  3. The kit mentioned in my answer to (2) above, being Linux based, would make it easier to pull in a compression library than a bare-metal MCU would (see the sketch below).
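
To make (2) and (3) concrete, here is a minimal sketch (not covered in the talk) of capturing frames from a USB (UVC) camera on a Linux board such as the STM32MP157F-DK2 and writing them to an MP4 file with OpenCV. It assumes OpenCV with FFmpeg support is present in your Yocto image and that the camera enumerates as /dev/video0; the device index, resolution, frame rate, and codec are placeholders to adjust for your setup.

```python
# Sketch: capture from a USB (UVC) camera on an embedded Linux board and
# record a short MP4 clip. Assumes OpenCV built with FFmpeg support and a
# camera that enumerates as /dev/video0.
import cv2

CAMERA_INDEX = 0                    # /dev/video0 - adjust for your board
FRAME_WIDTH, FRAME_HEIGHT = 640, 480
FPS = 30
DURATION_S = 10                     # record 10 seconds as a smoke test

cap = cv2.VideoCapture(CAMERA_INDEX)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, FRAME_WIDTH)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, FRAME_HEIGHT)
cap.set(cv2.CAP_PROP_FPS, FPS)

if not cap.isOpened():
    raise SystemExit("Camera not found - check the V4L2 device index")

# 'mp4v' (MPEG-4 Part 2) is widely available; H.264 ('avc1') depends on the
# codecs compiled into your OpenCV/FFmpeg build.
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
writer = cv2.VideoWriter("capture.mp4", fourcc, FPS,
                         (FRAME_WIDTH, FRAME_HEIGHT))

for _ in range(FPS * DURATION_S):
    ok, frame = cap.read()
    if not ok:
        break
    writer.write(frame)

writer.release()
cap.release()
```

If the mp4v FourCC isn't available in your OpenCV build, writing MJPG frames to an AVI container is a common fallback while you sort out the codec support in your image.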
Nyquist
Score: 0 | 12 months ago | 1 reply

Thanks for the links. Have you heard of Phytec? They have a couple of starter kits that claim to transmit up to 15 m using MIPI CSI-2 over FPD-Link III. It appears this is achieved with a TI DS90UB953 FPD-Link III serializer. Have you had any experience with an FPD-Link III serializer? A number of the compatible kits appear to be NXP based. I wonder whether the resolution would be compromised by introducing the FPD-Link III serializer?
https://www.phytec.eu/en/produkte/embedded-vision/

Peter_McLaughlin (Speaker)
Score: 0 | 12 months ago | no reply

Yes, FPD-Link III is a "SerDes" protocol which can transport a number of other protocols over longer distances. It's an alternative to GMSL, which is used in automotive. If you are prototyping, I'd suggest going with USB or GigE for simplicity to start with and figure the protocol out later. Driver integration with MIPI CSI-2 can be quite time consuming.
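
As a hypothetical first step when prototyping with a USB camera, a quick probe like the sketch below (device index assumed to be /dev/video0) reports the resolution and frame rate the camera actually negotiates - a useful sanity check for whether a given link or driver stack is limiting resolution.

```python
# Quick probe of a USB (UVC) camera: report the negotiated resolution and
# frame rate before investing in MIPI CSI-2 or SerDes driver integration.
import cv2

cap = cv2.VideoCapture(0)  # /dev/video0 - adjust as needed
if not cap.isOpened():
    raise SystemExit("No camera found on /dev/video0")

width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
fps = cap.get(cv2.CAP_PROP_FPS)
print(f"Negotiated mode: {int(width)}x{int(height)} @ {fps:.1f} fps")

cap.release()
```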

Peter_McLaughlin (Speaker)
Score: 0 | 12 months ago | no reply

For people interested in getting started with ST vision, check out my video on the B-CAMS-OMV module: https://agmanic.com/stm32-b-cams-omv-walkthrough/

rokath
Score: 0 | 12 months ago | no reply

Thanks for this compact, in-depth overview, Peter. This is very helpful even for those of us who don't work in image processing.

ErikS
Score: 0 | 12 months ago | no reply

Great presentation; lots of helpful information. Thank you!

mohammed.eshaq
Score: 0 | 12 months ago | no reply

A really useful, informative presentation. I enjoyed watching this. Thank you so much!

Thomas.Schaertel
Score: 0 | 12 months ago | no reply

Peter: This was a really great overview of vision, from historic cameras up to AI used in embedded vision. I enjoyed your talk very much as an introduction to the field. Thanks a lot for this!
