AI engines, programmable logic, and CPUs combined
New FPGA SoCs: Single-chip intelligence for embedded systems

By Michael Eckstein | Translated by AI | Reading time: 4 min

AMD has introduced the second generation of its Versal FPGA SoCs. The "Gen 2" components are aimed, among other things, at end-to-end acceleration of AI-driven embedded systems and are expected to deliver three times the AI computing power per watt and up to ten times the CPU-based scalar computing power of their predecessors.

(Image: AMD)

AMD is expanding its portfolio of adaptive System-on-Chips (SoCs) with the Versal AI Edge Series Gen 2 and Versal Prime Series Gen 2. According to the manufacturer, the AI variant features optimized AI engines and can therefore handle preprocessing (in the FPGA fabric), AI inference (in the vector-processor array), and post-processing (on the CPU) within a single device. The AI engines are expected to deliver up to three times more TOPS per watt than their predecessors. Subaru is among the first customers to announce that it will use the AMD Versal AI Edge Series Gen 2 in the next generation of its EyeSight ADAS.

The Prime models combine programmable logic for sensor data processing with powerful embedded Arm CPUs for end-to-end acceleration of non-AI-based embedded systems. According to AMD, the high-performance Arm CPUs provide up to ten times more scalar computing power than the previous generation.

Balanced ratio of computing power, power consumption, and chip area

With new hard IP for high-throughput video processing, including multi-channel workflows at up to 8K, the Versal Prime Gen 2 devices are suited to applications such as ultra-high-definition (UHD) video streaming and recording, industrial PCs, and onboard computers. AMD emphasizes the balanced ratio of performance, power consumption, and chip area as well as the functional safety features of the Versal Gen 2 series, which targets markets such as automotive, aerospace and defense, industry, image processing, healthcare, broadcast, and pro AV.

"The demand for AI-enabled embedded applications is exploding and driving the need for single-chip solutions for the most efficient end-to-end acceleration, taking into account the energy and area constraints of embedded systems," says Salil Raje, Senior Vice President and General Manager, Adaptive and Embedded Computing Group, AMD. Building on over 40 years of experience in adaptive computing, the second generation Versal devices would combine multiple computing engines on a single architecture and can thus provide high computational efficiency and performance with scalability from the low-end to the high-end range.

Already chosen for Subaru's next-generation ADAS vision system

According to AMD, car manufacturer Subaru intends to use Versal AI Edge Series Gen 2 devices in its new EyeSight Advanced Driver Assistance System (ADAS). EyeSight will be integrated into select Subaru vehicle models and will enable advanced safety features such as adaptive cruise control, lane-keeping assist, and pre-collision braking. Subaru already uses AMD's adaptive SoC technology in current EyeSight-equipped vehicles.

"Subaru has chosen the Versal AI Edge Series Gen 2 to deliver the next generation of automotive AI performance and safety for EyeSight-equipped vehicles," said Satoshi Katahira, General Manager, Advanced Integration System Department & ADAS Development Department, Engineering Division, Subaru Corporation. The Versal AI Edge Gen 2 devices offer high AI inferencing performance, extremely low latencies, and extensive functional safety features.

Versal AI Edge Series Generation 2

The real-world environments in which vehicles operate are undeniably complex, and the demands on data acquisition and processing in the embedded systems they carry are correspondingly high. AMD claims to have found an "optimal mix of processors for all three phases of AI-driven acceleration of embedded systems" with its Versal AI Edge Series Gen 2 components (a minimal code sketch of this division of labor follows the list):

  • Pre-processing: FPGA programmable logic for real-time pre-processing with high flexibility for connecting a wide range of sensors and implementing data processing pipelines with high throughput and low latency

  • AI inference: An array of vector processors in the form of modern AI engines for efficient AI inference

  • Post-processing: Arm CPU cores take over complex decision-making processes and controls in safety-critical applications.
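To make this division of labor concrete, here is a minimal C++ sketch that models the three stages as plain functions. It is purely illustrative: all type and function names are invented for this article, the stage bodies are trivial placeholders, and nothing here reflects an actual AMD API.

```cpp
#include <cstdint>
#include <vector>

// Conceptual sketch only: the names below are invented for illustration
// and do not come from an AMD SDK.

struct Frame   { std::vector<uint8_t> pixels; };   // raw camera data
struct Tensor  { std::vector<float>   values; };   // model input/output
struct Command { float steering = 0.0f; float braking = 0.0f; };

// Stage 1 -- programmable logic: sensor interfacing and real-time
// conditioning (debayering, scaling, normalization). Placeholder body.
Tensor preprocess_on_fpga(const Frame& frame) {
    Tensor t;
    t.values.assign(frame.pixels.begin(), frame.pixels.end());
    for (float& v : t.values) v /= 255.0f;          // normalize to [0, 1]
    return t;
}

// Stage 2 -- AI Engine vector-processor array: the neural-network
// inference. Placeholder standing in for an offloaded model.
Tensor infer_on_ai_engines(const Tensor& /*input*/) {
    return Tensor{{0.9f}};                          // e.g. an obstacle score
}

// Stage 3 -- Arm CPU cores: turn the inference result into a
// safety-checked control decision.
Command postprocess_on_cpu(const Tensor& detections) {
    Command cmd;
    if (!detections.values.empty() && detections.values[0] > 0.8f)
        cmd.braking = 1.0f;                         // request pre-collision braking
    return cmd;
}

// End-to-end flow on one device: data stays on-chip between the stages.
Command process_frame(const Frame& frame) {
    return postprocess_on_cpu(infer_on_ai_engines(preprocess_on_fpga(frame)));
}
```

In an actual Versal AI Edge design, the first stage would be implemented in the programmable logic, the second would run on the AI Engine array, and only the third would execute as software on the Arm cores; the point of the single-chip approach is that the intermediate data never has to leave the device.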

According to AMD, this single-chip intelligence can eliminate the need for multi-chip processing solutions, leading to smaller, more efficient embedded AI systems and potentially faster time to market.

Versal Prime Series Generation 2

According to AMD, the Versal Prime Series Gen 2 offers end-to-end acceleration for traditional, non-AI-based embedded systems. It combines programmable logic for sensor processing with powerful embedded Arm CPUs. Compared with the first generation, these devices offer up to ten times more scalar processing power and can efficiently handle sensor processing and complex scalar workloads.

Optimized development cycles

According to AMD, the tools and libraries of the Vivado Design Suite help developers of embedded hardware systems increase their productivity, among other things by shortening compile times and streamlining design cycles. The Vitis Unified Software Platform is likewise intended to accelerate the development of embedded software, signal processing, and AI designs at the user's preferred level of abstraction, even without FPGA experience.
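The article does not describe the programming model itself. As a rough orientation only, the following sketch shows how host software running on the Arm cores could hand a buffer to an accelerated kernel via the XRT native C++ API used in the Vitis flow; the kernel name, binary file name, and buffer size are hypothetical, and the example is generic rather than specific to Versal Gen 2.

```cpp
#include <cstdint>
#include <vector>
// XRT native C++ host API, shipped as part of the Vitis flow.
#include <xrt/xrt_device.h>
#include <xrt/xrt_kernel.h>
#include <xrt/xrt_bo.h>

int main() {
    const size_t n = 1024;                              // hypothetical buffer size
    std::vector<uint8_t> input(n, 0), output(n, 0);

    // Open the first device and load the compiled hardware binary.
    xrt::device device(0);
    auto uuid = device.load_xclbin("design.xclbin");    // hypothetical file name

    // Look up a kernel by name; "preprocess" is a placeholder.
    xrt::kernel krnl(device, uuid, "preprocess");

    // Allocate device buffers in the memory banks used by the kernel arguments.
    xrt::bo bo_in(device, n, krnl.group_id(0));
    xrt::bo bo_out(device, n, krnl.group_id(1));

    // Copy input data to the device.
    bo_in.write(input.data());
    bo_in.sync(XCL_BO_SYNC_BO_TO_DEVICE);

    // Launch the kernel and wait for completion.
    auto run = krnl(bo_in, bo_out, static_cast<int>(n));
    run.wait();

    // Read the result back to the host.
    bo_out.sync(XCL_BO_SYNC_BO_FROM_DEVICE);
    bo_out.read(output.data());
    return 0;
}
```

The hardware binary (.xclbin) loaded here would be produced by the Vitis tool chain from the kernel sources; from the host program's point of view, invoking the accelerated stage then reduces to the short launch-and-wait sequence shown above.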

AMD expects Versal Series Gen 2 samples to be available in the first half of 2025, followed by evaluation kits and System-on-Module samples in mid-2025 and mass production likely by the end of 2025. (me)