
Live Q&A - Embedded Vision: an Introduction

Peter McLaughlin - Duration: 21:09

Live Q&A with Peter McLaughlin for the talk titled Embedded Vision: an Introduction

Nyquist
Score: -1 | 12 months ago | 1 reply

Hi Peter,
I have a number of follow-up questions from your session on Embedded Vision: an Introduction. (1) Beyond using FFmpeg, do ARM or NXP offer an onboard codec chip that compresses streaming video into MP4 format? (2) Given that I am looking to extend a camera lens to a distance of 10 feet from the microcontroller, is there an ARM or NXP kit that you might suggest? (3) I have been using the STM32F4 Discovery Kit, only to find that the sensor lens cannot be extended beyond 3 feet and there is no means that I know of to record streaming video onto the microSD card in a format such as MP4 - might you suggest how I could achieve that objective with a different kit?
Thanks in advance for your help ...
I thoroughly enjoyed your presentation.

Peter_McLaughlin (Speaker)
Score: 0 | 12 months ago | 1 reply

Hello, thank you for watching the talk. Feedback on your questions:

  1. I'm not aware of a specialized onboard DSP dedicated to video compression. Regarding the use of libraries, this NXP application note gives a good overview for i.MX RT MCUs: https://www.nxp.com/docs/en/application-note/AN13205.pdf
  2. If you want to place the camera 10 feet from the MCU, protocols like MIPI and DVP can't go that far. GMSL is an option but it's more expensive to integrate. For that distance I'd suggest looking at GigE or USB3 cameras, and the following ST kit is a good starting point for exploring GigE / USB: https://www.st.com/en/evaluation-tools/stm32mp157f-dk2.html. Note that this is an MPU rather than an MCU, so development means Linux and Yocto tooling - an important consideration if you're more MCU oriented.
  3. The kit mentioned in my answer to (2) above, being Linux based, makes it much easier to pull in a compression library than a bare-metal MCU would; see the sketch after this list.
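To make (3) concrete, here is a minimal sketch of recording camera frames to an MP4 file on a Linux-based board such as the STM32MP157F-DK2. It assumes Python and OpenCV built with an FFmpeg or GStreamer backend are present in the board's image (not part of a default Yocto build), and the device index, codec, resolution, and frame count are placeholder assumptions rather than anything specific to that kit:

```python
# Minimal sketch: grab frames from a V4L2 camera and write them to an MP4
# file with OpenCV. Assumes OpenCV has an FFmpeg or GStreamer backend on the
# target; device index, codec, and duration below are placeholder assumptions.
import cv2

cap = cv2.VideoCapture(0)            # /dev/video0: USB (UVC) or CSI camera
if not cap.isOpened():
    raise SystemExit("camera not found")

width  = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fourcc = cv2.VideoWriter_fourcc(*"mp4v")   # MPEG-4; H.264 needs extra codec support
out = cv2.VideoWriter("capture.mp4", fourcc, 30.0, (width, height))

for _ in range(300):                 # roughly 10 s at 30 fps
    ok, frame = cap.read()
    if not ok:
        break
    out.write(frame)

cap.release()
out.release()
```

Since, as far as I know, the STM32MP1 has no dedicated video encode block, this is software encoding; the snippet is only meant to show the overall capture-encode-write flow.
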
Nyquist
Score: 0 | 12 months ago | 1 reply

Thanks for the links. Have you heard of Phytec? They have a couple of starter kits that claim to carry MIPI CSI-2 up to 15 m over FPD-Link III. It would appear that this is achieved through an FPD-Link III serializer, the TI DS90UB953. Have you had any experience with an FPD-Link III serializer? A number of the compatible kits appear to be NXP based. I wonder if the resolution would be compromised by the introduction of the FPD-Link III serializer?
https://www.phytec.eu/en/produkte/embedded-vision/

Peter_McLaughlin (Speaker)
Score: 0 | 12 months ago | no reply

Yes, FPD-Link III is a "SerDes" protocol which can transport a number of other protocols over longer distances. It's an alternative to GMSL, which is used in automotive. If you are prototyping, I'd suggest going with USB or GigE for simplicity to start with and figure the protocol out later. Driver integration with MIPI CSI-2 can be quite time consuming.
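If you do start with a USB (UVC) camera, a quick sanity check on the resolution question is to see which mode the camera actually negotiates before committing to a MIPI CSI-2 / SerDes design. A rough sketch, assuming Python and OpenCV are available; the device index and the requested 1080p30 mode are just placeholders:

```python
# Quick prototyping check: request a mode from a USB (UVC) camera and print
# what the driver actually negotiated. Device index and 1080p30 are assumptions.
import cv2

cap = cv2.VideoCapture(0)                        # first enumerated camera
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)          # request 1080p
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
cap.set(cv2.CAP_PROP_FPS, 30)                    # request 30 fps

print("negotiated:",
      int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), "x",
      int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)), "@",
      cap.get(cv2.CAP_PROP_FPS), "fps")
cap.release()
```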

Peter_McLaughlin (Speaker)
Score: 0 | 12 months ago | no reply

For people interested in getting started with ST vision, check out my video on the B-CAMS-OMV module: https://agmanic.com/stm32-b-cams-omv-walkthrough/

rokath
Score: 0 | 12 months ago | no reply

Thanks for this compact, in-depth overview, Peter. This is very helpful even for non-image-processing folks.

ErikS
Score: 0 | 12 months ago | no reply

Great presentation; lots of helpful information. Thank you!

mohammed.eshaq
Score: 0 | 12 months ago | no reply

A really useful, informative presentation. I enjoyed watching this. Thank you so much!

Thomas.Schaertel
Score: 0 | 12 months ago | no reply

Peter: This was a really great overview for vision starting from the historic cameras up to AI used in embedded vision. I enjoyed your talk very much as a great introduction of the field. Thanks a lot for this!

15:36:03	 From  René Andrés Ayoroa : What are your thoughts on platforms like Raspberry Pi for embedded vision?
15:36:18	 From  Thomas Schaertel : You were right that, in former times, lighting was critical. But that is still the case; I often use polarization filters for reflection suppression. I was reading a flowmeter with ML to get the values (digital and analog), which were then compared with a level-meter app to prove that everything was correct. So lighting is still critical for good results, but with LEDs it is not expensive.
15:39:17	 From  nyquist : Of the various vendors out there (ARM, NXP, etc.), which family lends itself to compressing video into MP4 or a similar format?
15:40:51	 From  Thomas Schaertel : For starters I recommend OpenMV (based on STM); besides the camera, it also includes MicroPython, for around $100.
15:41:30	 From  nyquist : A codec
15:42:26	 From  René Andrés Ayoroa : On Linux-based embedded systems, is the processing for vision done in C or Python?
15:45:29	 From  John : What frame rates are used in the various applications? Robotics and vehicles, vs. barcodes and the roll-of-paper line-scan camera at ~kHz as in your presentation?
15:46:02	 From  John : Do you expect CMOS will completely displace CCD? Why?
15:47:28	 From  BobF : Bit of an open-ended question (imaging in general) - the visible spectrum is the most utilised with infra-red and ultra-violet catching-up in specialised applications. How low and how high could we conceivably go, frequency-band wise? Improvements in existing solutions aside.
15:48:18	 From  John : What sort of algorithms are used to deal with motion jitter? Are these in the SoC, the ISP, or the library, or does the programmer need to implement these for each new application?
15:50:30	 From  John : ! The US-manufacturing issue, how does that apply to vision sensors? Is everything made in China right now, and companies are variously moving these to friendly countries too?
15:50:52	 From  John : Thanks!
15:51:24	 From  Thomas Schaertel : Thank you!
15:51:29	 From  Stephane   to   Peter McLaughlin(Direct Message) : Thank you Peter!
15:52:23	 From  René Andrés Ayoroa : Thank you Peter
15:52:26	 From  BobF : Comprehensive slides on a big subject - Thanks
