Nvidia's CUDA platform extends support to RISC-V, bringing the open-source instruction set into CUDA-based AI platforms alongside the x86 and Arm architectures.
Nvidia has made a significant announcement that could reshape AI and high-performance computing (HPC) processor designs. The company revealed that its CUDA software platform will support the RISC-V instruction set architecture (ISA) on the CPU side[1][3].
This move is expected to allow RISC-V CPUs to serve as the main processors in CUDA-based AI systems, marking a shift from the traditional reliance on x86 or Arm cores in such systems.
Key implications of this development include:
- Expansion of RISC-V in AI/HPC systems: CUDA compatibility with RISC-V could enable RISC-V CPUs to run as primary application processors for AI and HPC workloads, initially in edge devices such as Nvidia’s Jetson modules and potentially in data center environments over time[1][4].
- Potential future presence of RISC-V in data centers: While widespread adoption of RISC-V for hyperscale data centers is not imminent, Nvidia envisions RISC-V playing a role in data-center-class CUDA systems, contingent on the availability of mature RISC-V CPU platforms tailored for those environments[2][4].
- Enhancement of open-source and custom silicon ecosystems: Integrating RISC-V with CUDA broadens the processor options beyond proprietary x86 and Arm ISAs, potentially benefiting markets that favor open architectures or customized silicon designs. This could be especially meaningful in regions like China, where RISC-V adoption is rapidly growing and some AI accelerator markets face restrictions[5].
- Multi-processor system synergy: Nvidia’s presentation suggests RISC-V CPUs would run the CUDA drivers and application-level logic while Nvidia GPUs execute CUDA kernels, potentially alongside data processing units (DPUs), forming heterogeneous systems optimized for AI and HPC workloads[5]; the sketch after this list illustrates that host/device split.
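To make that division of labor concrete, here is a minimal sketch of an ordinary CUDA C++ program: the host code (memory allocation, driver calls, kernel launch) runs on the CPU, while the `__global__` kernel executes on the GPU. Under the announced support, the host side could in principle be compiled for a RISC-V application processor; the example itself is standard CUDA and assumes nothing about Nvidia's unreleased RISC-V toolchain.

```cuda
// Minimal sketch of the CUDA host/device split described above.
// Host code targets whichever CPU ISA runs the driver (x86, Arm, or, under
// the announced support, RISC-V); the __global__ kernel runs on the GPU.
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: executes on the GPU regardless of the host CPU's ISA.
__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host-side work (allocation, driver calls, kernel launch) is what a
    // RISC-V application processor would handle in the envisioned systems.
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(a, b, c, n);  // GPU executes the kernel
    cudaDeviceSynchronize();                      // CPU waits for completion

    printf("c[0] = %f\n", c[0]);                  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Because CUDA kernels are compiled for the GPU independently of the host ISA, this boundary suggests that extending CUDA to RISC-V hosts is primarily a driver, runtime, and toolchain porting effort rather than a change to kernel code.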
This move aligns with wider industry trends toward diversifying processor architectures in AI and HPC, and it may accelerate innovation by enabling open-source ISA adoption in ecosystems historically dominated by x86 and Arm. However, the CUDA support for RISC-V is still a work in progress without a specified release timeline, requiring ongoing collaboration with the broader RISC-V ecosystem and further development of data-center-grade RISC-V CPUs[2][4].
In summary, Nvidia’s CUDA compatibility with RISC-V is expected to increase RISC-V's relevance in AI and HPC processor designs, particularly expanding its role beyond edge devices toward eventual data center deployment, thereby diversifying the ISA landscape in these critical computing domains[1][3][5].
Nvidia's vision is to build heterogeneous compute platforms in which RISC-V CPUs manage and orchestrate workloads. However, the announcement does not explicitly confirm whether the workloads in this setup will be exclusively AI-related. The development may also prompt other companies to adopt RISC-V in future AI and HPC processor designs across data centers.
Key takeaways:
- Compatibility between Nvidia's CUDA software platform and the RISC-V instruction set architecture (ISA) may allow RISC-V CPUs to serve as primary application processors for AI and high-performance computing (HPC) workloads, potentially extending to data center environments in the future.
- Nvidia's move to integrate RISC-V with CUDA could encourage other companies to adopt RISC-V in their future AI and HPC processor designs, further diversifying the instruction set architecture landscape in these critical computing domains.