Decoupling CUDA from Nvidia: GPU Veteran Raja Koduri Challenges Nvidia's AI Dominance

By Manuel Christa | Translated by AI | 3 min reading time

Oxmiq is a platform by Raja Koduri designed to bring CUDA-compatible AI workloads to non-Nvidia hardware. Its core is a proprietary software layer that abstracts RISC-V hardware and directly executes CUDA programs.

GPU legend Raja Koduri aims to rival Nvidia with his own start-up. (Image: Manuel Christa)

Raja Koduri, the former GPU chief of Intel, AMD, Apple, and ATI, is stepping out of the shadows with his own startup. Oxmiq Labs, as it is called, is developing a novel GPU platform based on RISC-V. The goal is not the next gaming GPU, but a flexible infrastructure for AI, graphics, and multimodal applications. Particularly noteworthy: Python-based CUDA applications can be run unchanged on different hardware using Oxmiq's software stack.

Koduri describes his company as "probably the first new GPU start-up in Silicon Valley in over 25 years" and targets a gap in the current AI infrastructure: the close dependence of many developers and data centers on Nvidia.

The Goal: CUDA without Nvidia

The core of the platform is the software layer OXCapsule. It abstracts the underlying hardware and encapsulates applications into so-called "heterogeneous containers." These run on CPUs, GPUs, or specialized AI accelerators, independently of the underlying chip. For developers, this means: write once, run anywhere.

A particularly exciting component is OXPython. This compatibility layer automatically translates CUDA workloads written in Python into the Oxmiq runtime stack, with no recompilation and no code changes. At launch, OXPython does not run on Oxmiq's own hardware but on the Wormhole and Blackhole AI accelerators from Tenstorrent.
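To make the portability claim concrete, the sketch below shows the kind of Python workload OXPython targets. The OXPython API itself is not public, so this is only an illustration of the pattern: CuPy code that expects a CUDA backend, here with a NumPy fallback so the identical function runs on hardware without Nvidia's stack. The backend-swap via `xp` is an assumption for demonstration, not part of Oxmiq's product.

```python
# Illustrative only: a typical CUDA-based Python workload of the sort a
# compatibility layer like OXPython could redirect to non-Nvidia hardware.
import numpy as np

try:
    import cupy as xp  # CUDA-backed arrays (requires an Nvidia GPU + CUDA)
except ImportError:
    xp = np            # CPU fallback: same array API, different backend

def softmax(logits):
    # Numerically stable softmax; only the array library changes between
    # backends, which is the portability property such layers exploit.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    exps = xp.exp(shifted)
    return exps / exps.sum(axis=-1, keepdims=True)

probs = softmax(xp.asarray([[1.0, 2.0, 3.0]]))
print(float(probs.sum()))  # each row of probabilities sums to 1.0
```

Because CuPy mirrors the NumPy API, the function body never mentions a device; a translation layer can therefore retarget the whole workload without touching application code.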

"OXPython brings CUDA-based Python workloads to platforms like Wormhole and Blackhole. This is an important step for developer portability and a growing ecosystem," says Tenstorrent CEO Jim Keller. "It fits perfectly with our goal of giving developers full control over their AI stacks."

Modular Hardware, Minimal Overhead

Oxmiq follows an "asset-light" approach: Instead of manufacturing its own chips or investing in expensive production tools, the company focuses on licensing intellectual property (IP). The hardware foundation is called OxCore and combines scalar, vector, and tensor units in a modular RISC-V design. It is complemented by OxQuilt, a chiplet-based modular system. Depending on the application, this allows for the configuration of SoCs for edge inference, large AI models, or specialized graphics solutions.

Oxmiq does not provide all components of traditional GPUs. Features like texture units, ray tracing, or display outputs must be added by customers themselves. The focus is clearly on specialized accelerators, not on gaming cards.

Nevertheless, Koduri's experience as a GPU architect is reflected in Oxmiq's technical orientation: for example, in the combination of specialized computing units for graphics and AI applications, the modular architecture, and the focus on programmable hardware interfaces. In the 1990s, he developed graphics chips at S3 and ATI, later managed Radeon at AMD, and led the development of Xe GPUs at Intel.

Investors Bet on a Licensing Model

Oxmiq has raised 20 million US dollars in seed funding. The investors include prominent tech names like MediaTek. The company has already generated initial revenue from software licenses.

"Oxmiq has a bold vision and a top-notch team," says Lawrence Loh, Senior Vice President at MediaTek. "Oxmiq's GPU IP and software enable new possibilities in computing infrastructure – from mobile devices to automotive and edge AI."

Whether the strategy succeeds depends largely on how open the developer community is. If Oxmiq manages to free developers from their dependence on Nvidia, it would mark the first crack in a previously closed ecosystem. Oxmiq positions itself as a complement to the existing hardware landscape – not as a replacement. However, the mere prospect of running CUDA workloads independently of Nvidia is likely to catch many people's attention. (mc)
