Live Q&A - Hands-on With CUDA-C on a Nvidia Jetson Nano GPU
Mohammed Billoo - Duration: 19:31
15:40:14 From BobF to Everyone: You include floating-point calculation in the slides, naturally ... have the applicable standards, especially IEEE 754, now been set in stone, i.e. no further amendments?
15:43:14 From Thomas Schaertel to Everyone: Mohammed, what do you think of the tight integration with and dependency on Nvidia? I used CUDA for deep learning about 6 years ago, but my customers were not amazed to be dependent on one semiconductor manufacturer. Nowadays, AMD and even Intel (called Guida?) offer GPUs, but none of them support CUDA (afaik).
15:49:23 From Thomas Schaertel to Everyone: Did you use real GPU hardware, or did you rent a GPU (on AWS etc.)?
15:50:28 From Lyden Smith to Everyone: Thanks Mohammed!
15:50:28 From Thomas Schaertel to Everyone: Thank you both!
15:51:06 From BobF to Everyone: Thanks ... peak performance!
Thanks for a very informative presentation, as always. I particularly liked the "gotchas" one should keep in mind to leverage efficient parallelisation.
In your opinion, would you consider C++'s intrinsic support for vector manipulation to be generally a better fit than C when interfacing with CUDA, or when passing the results back to the host?
Do you know how readily CUDA can be integrated with the Yocto Project?
Looking forward to the follow-up article :)