
Live Q&A - Hands-on With CUDA-C on a Nvidia Jetson Nano GPU

Mohammed Billoo - Watch Now - EOC 2024 - Duration: 19:31

Live Q&A with Mohammed Billoo for the talk titled Hands-on With CUDA-C on a Nvidia Jetson Nano GPU

Score: 0 | 2 months ago | 1 reply

Thanks for a very informative presentation, as always. I particularly liked the "gotchas" one should keep in mind to leverage efficient parallelisation.
In your opinion, would you consider C++'s intrinsic support for vector manipulation to be generally a better fit than C when interfacing with CUDA, or perhaps when passing the results back to the host?
Do you know how available CUDA is when it comes to integration with the Yocto Project?
Looking forward to the follow-up article :)

Score: 0 | 1 month ago | no reply

Great questions! Ultimately, vectors in the C++ STL are essentially C arrays under the hood with some extra features (e.g. automatically re-allocating memory as needed, efficiently accessing elements). I would actually caution against using C++ vectors initially, since it's important to ensure that memory is aligned properly when passing data to the GPU; otherwise, you risk losing the benefits of the GPU. CUDA support exists in The Yocto Project: you can check out the meta-tegra layer here: https://github.com/OE4T/meta-tegra. I'll look to author another blog post on adding CUDA support, along with cross-compiling CUDA-C applications, when using The Yocto Project.

15:40:14 From BobF to Everyone:
	You include floating-point calculations in the slides, naturally ... have the applicable standards, especially IEEE 754, now been set in stone, i.e. no further amendments here?
15:43:14 From Thomas Schaertel to Everyone:
	Mohammed, what do you think of the tight integration with, and dependency on, Nvidia? I used CUDA for deep learning about 6 years ago, but my customers were not amazed to be dependent on one semiconductor manufacturer. Nowadays, AMD and even Intel (called Guida?) offer GPUs, but none of them support CUDA (afaik).
15:49:23 From Thomas Schaertel to Everyone:
	Did you use real GPU hardware or did you rent a GPU (on AWS etc.)?
15:50:28 From Lyden Smith to Everyone:
	Thanks Mohammed!
15:50:28 From Thomas Schaertel to Everyone:
	Thank you both!
15:51:06 From BobF to Everyone:
	Thanks ... peak performance!