Vehicle Connectivity
Edge AI: How the Next Wave of Innovation in Vehicles Is Created
A guest article by
Sonatus | Translated by AI
9 min Reading Time
Software-defined vehicles have redefined expectations regarding the speed of innovation: functions can be activated via updates, variants can be differentiated via software, and new services are created via data. However, the more sensors, software and connectivity move into the vehicle, the clearer the limits of purely connected approaches become. Edge AI promises a remedy.
Software-defined vehicles have redefined expectations regarding the speed of innovation.
(Image: Sonatus)
Software-defined vehicles (SDVs) are the future because they maintain, if not increase, their value over their entire service life thanks to constant updates. However, they also show that purely connectivity-based approaches are reaching their limits when it comes to data processing. This is not because cloud computing itself is too slow or too expensive, but because many of the new, data-driven functions have to work where the data is generated and where decisions are required in milliseconds: in the vehicle.
This is where edge AI comes in, which is in the process of becoming a crucial building block for SDV architectures. Edge AI accelerates new developments, makes functions more robust, improves data protection and reduces costs. Above all, however, it helps car manufacturers to make the leap from feasibility studies to fleet-wide roll-out when it comes to artificial intelligence (AI).
Edge and Cloud AI Complement Each Other
Edge AI means that AI models run locally on vehicle hardware and process data directly in the vehicle. In practice, this usually involves inference, i.e. the application of trained models to current data streams. Training and extensive analyses still often take place centrally, for example in the backend or in the cloud. However, the decision as to whether a certain pattern is present, whether an anomaly is identifiable or whether a system parameter should be optimized can be made locally.
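In code, the split described above can look as simple as applying shipped model weights to the current frame of sensor data. The following Python sketch is purely illustrative: the signal names, weights, and threshold are invented, and a production model would of course be far more complex, but the pattern — training happens centrally, the decision happens locally — is the same.

```python
from dataclasses import dataclass

# Illustrative sketch: a small model trained in the backend, with its
# weights deployed to the vehicle. All names and values are invented.

@dataclass
class EdgeModel:
    weights: dict[str, float]
    bias: float
    threshold: float

    def score(self, frame: dict[str, float]) -> float:
        """Apply the trained linear model to one frame of sensor data."""
        return sum(w * frame.get(k, 0.0) for k, w in self.weights.items()) + self.bias

    def decide(self, frame: dict[str, float]) -> bool:
        """Local decision in the vehicle: is the pattern present?"""
        return self.score(frame) > self.threshold

# Weights arrive via an update; inference runs on the live data stream.
model = EdgeModel(
    weights={"coolant_temp": 0.02, "rpm": 0.0005},
    bias=-1.0,
    threshold=1.0,
)

frame = {"coolant_temp": 95.0, "rpm": 3000.0}
# No cloud round trip: score = 0.02*95 + 0.0005*3000 - 1.0 = 2.4 > 1.0
print(model.decide(frame))  # True
```

The point of the pattern is the division of labor: the expensive part (training) stays central, while the latency-critical part (inference) runs where the data originates.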
The distinction from cloud AI is less ideological than architectural. Cloud approaches are strong when it comes to fleet aggregation, model training, data-intensive evaluations or non-time-critical services. Edge AI is strong when the focus is on latency, robustness, data protection and efficiency. It therefore closes a gap that is becoming increasingly visible in SDV scenarios: Many functions must work reliably even if the connection fluctuates, if bandwidth is scarce or if sensitive raw data is better left in the vehicle.
The importance of edge AI is currently increasing for several reasons. These include, for example:
The amount of data in the vehicle is growing rapidly, not only due to cameras and radar sensors, but also due to the large number of electronic control units, domain and zone controllers and software logs.
The expectation of real-time behavior is increasing, for example in safety functions, diagnostics or comfort.
Although connectivity is improving in principle, it is not guaranteed. Tunnels, rural regions, parking garages or urban radio cells are typical examples.
Data sovereignty is becoming increasingly important. The more data is processed about driving behavior, surroundings or vehicle condition, the greater the focus on data protection, cybersecurity and regulatory requirements.
These factors make edge AI attractive because, in principle, it delivers low latency, independence from connectivity and a more efficient data economy. In addition, modern vehicle platforms provide significantly more computing power than just a few years ago. This makes edge AI not only technically possible, but also economically viable.
Market dynamics are another indicator. Forecasts see the market for AI in the automotive sector growing strongly in the coming years, and AI is increasingly becoming a key differentiating factor in vehicles. It is important to note that the added value is not created exclusively in the backend. In many cases, it is created in the vehicle itself, where decisions and experiences take place.
Edge AI provides a Playing Field for New Developments
The discussion surrounding artificial intelligence in vehicles has long been dominated by ADAS and autonomous functions. This is understandable because these applications are highly visible and place high demands on real-time and safety. However, edge AI opens up a second, often underestimated area for software-defined vehicles: vehicle subsystems and cross-sectional functions.
This involves optimizations and new functions that are difficult to achieve with traditional programming. AI can recognize patterns that are hidden in high-dimensional data streams. It can identify anomalies before they lead to failures. It can dynamically adjust parameters to optimize energy consumption, wear and tear or comfort. And it can enable personalization without raw data permanently leaving the vehicle.
Typical fields of application are predictive maintenance, condition monitoring, energy and thermal management, safety in the vehicle electrical system, comfort functions in the interior or the derivation of virtual signals from existing sensors. The latter is particularly relevant: If an AI model can reliably deduce a variable from existing sensor signals that previously required a physical sensor, there are direct cost benefits. It also reduces complexity in the supply chain. This logic is not new, but edge AI makes it more scalable because models can run locally in series.
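The virtual-signal idea can be made concrete with a short sketch. Everything here is hypothetical — the signal names, the coefficients, and the relationship between them are invented for illustration; in practice, the mapping would be a model trained and validated on real fleet data.

```python
# Illustrative "virtual signal": an estimate of a quantity that would
# otherwise require its own physical sensor, derived from signals the
# vehicle already has. Coefficients below are invented, not calibrated.

def virtual_oil_temp(coolant_temp_c: float, engine_load_pct: float) -> float:
    """Derive a hypothetical oil-temperature estimate from existing signals."""
    return 0.9 * coolant_temp_c + 0.25 * engine_load_pct + 5.0

# Downstream software consumes the estimate like any other sensor value,
# which is where the cost benefit of omitting the physical sensor arises.
estimate = virtual_oil_temp(coolant_temp_c=90.0, engine_load_pct=40.0)
print(round(estimate, 1))  # 0.9*90 + 0.25*40 + 5 = 96.0
```

In a real deployment the estimator would be a trained model rather than a fixed formula, with validation against the physical sensor before it is allowed to replace it.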
What Problems Does Edge AI Solve?
Edge AI is far more than a technology trend; it is a real problem solver for very specific challenges:
Latency: Many functions require responses in the millisecond range, for example for security-relevant warnings, stability management, monitoring critical components or detecting cyber anomalies. Here, the detour via the cloud is often too slow or too insecure.
Connectivity: Cloud-based functions can be slowed down in practice by network interruptions. Edge AI keeps core functions running, regardless of whether the connection is stable.
Data protection and security: If inference takes place locally, sensitive raw data can remain in the vehicle. This reduces the need to transmit and store data widely. At the same time, edge AI can also improve security itself, for example through local anomaly detection in the vehicle network.
Costs: Data transfer, cloud storage and large-scale backend computing are expensive. The more data is processed and pre-filtered locally, the lower the running costs. This is particularly true if only summarized events, characteristic values or diagnostic signals reach the backend, rather than raw data.
Scaling: In practice, many AI projects fail not because of the model, but because of the path to series production. Edge AI requires a robust lifecycle that includes training, validation, optimization, deployment and monitoring. Without this lifecycle, AI remains stuck in the pilot stage.
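The connectivity, data protection and cost points above share one mechanism: the vehicle screens raw data locally and forwards only compact events. The sketch below illustrates that mechanism with an invented signal and a simple z-score rule; window sizes, thresholds and the event format would look different in a real system.

```python
import statistics

# Hedged sketch of local pre-filtering: raw samples stay in the vehicle,
# and only summarized anomaly events are forwarded to the backend.
# The z-score threshold of 3.0 is an illustrative choice.

def summarize_anomalies(samples: list[float], threshold: float = 3.0) -> list[dict]:
    """Return compact event records for outliers; raw data never leaves."""
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    events = []
    for i, value in enumerate(samples):
        if stdev > 0 and abs(value - mean) / stdev > threshold:
            events.append({
                "index": i,
                "value": value,
                "deviation": round((value - mean) / stdev, 2),
            })
    return events

# 1,000 raw samples reduce to a single event worth uploading.
signal = [12.0] * 999 + [60.0]
print(len(summarize_anomalies(signal)))  # 1
```

The same structure serves both purposes named above: it keeps sensitive raw data local (data protection) and shrinks the payload that generates transfer and storage costs.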
Where the AI Bottlenecks Lie: Integration, Silos, Hardware
But how can the power of innovation be unleashed in the car? And what is currently holding it back? Although SDVs are developing rapidly, the way they work is often still characterized by traditional development models. Vehicle data, machine learning development, embedded software and system integration are often carried out by different teams and processes. These silos slow down collaboration and make it difficult to transfer models to integration.
Date: 08.12.2025
There is also another problem: tool landscapes are fragmented. Different tools often exist for training, validation, optimization, deployment, integration and monitoring. This increases complexity, creates discontinuities between tools and drives up costs.
And last but not least, the variety of hardware must be taken into account: Car manufacturers have to operate models on a variety of platforms, from entry-level to premium series, from older vehicle architectures to new zone platforms. An AI model that works well on a reference platform is far from ready for series production if it cannot be robustly rolled out across variants.
The challenge is therefore less about "developing AI" and more about "operationalizing AI". This is why edge AI is also becoming a strategic issue, as it requires standardized, repeatable paths from the idea to the fleet.
Bringing Edge AI Into Series Production
But what is the path to scalability? Four levers can be derived from practical experience.
Uniform workflows from concept to market launch
An end-to-end workflow reduces friction losses. It defines how data is provided, how models are validated, which quality criteria apply and how the transition to embedded software and system integration takes place. It is crucial that everyone involved works along a common process chain. This shortens development cycles and reduces the risk of models failing in the integration phase.
End-to-end tool chains instead of fragmented toolsets
A consolidated toolchain bundles the steps in the AI lifecycle. This includes training, validation, optimization for target hardware, deployment, integration and monitoring in the field. The goal is not maximum centralization, but consistency. When tooling is standardized across vehicle models and control units, complexity is reduced and the path to series production becomes repeatable.
Vehicle MLOps for fast, consistent fleet rollouts
MLOps is established in the IT environment, but is much more demanding in the vehicle environment. Updates must be rolled out securely, variants must be taken into account, monitoring should be robust and rollbacks must be possible. Standardized pipelines for deployment in the vehicle accelerate updates and ensure that models run consistently across fleets. This is crucial for SDVs because function development is increasingly taking place on a continuous basis instead of project-based.
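The requirements just listed — variant awareness, monitoring, rollback — can be sketched as one rollout step. The class and field names below are invented stand-ins, not any real deployment API; a production pipeline would add signing, staging rings and telemetry.

```python
from dataclasses import dataclass

# Hypothetical sketch of a variant-aware fleet rollout with automatic
# rollback. All names are invented for illustration.

@dataclass
class Ecu:
    variant: str               # e.g. "zone-a", "legacy"
    model_version: str = "v1"
    healthy: bool = True

def rollout(fleet: list[Ecu], new_version: str, supported: set[str],
            health_check) -> list[Ecu]:
    """Deploy only to validated variants; roll back ECUs that fail the check."""
    for ecu in fleet:
        if ecu.variant not in supported:
            continue                      # variant not validated for this model
        previous = ecu.model_version
        ecu.model_version = new_version   # staged deployment
        if not health_check(ecu):         # field monitoring after the update
            ecu.model_version = previous  # automatic rollback
    return fleet

fleet = [Ecu("zone-a"), Ecu("legacy"), Ecu("zone-a", healthy=False)]
rollout(fleet, "v2", supported={"zone-a"}, health_check=lambda e: e.healthy)
print([e.model_version for e in fleet])  # ['v2', 'v1', 'v1']
```

The essential properties are visible even at this scale: unsupported variants are skipped rather than broken, and an unhealthy ECU ends up on its previous model version, not in an undefined state.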
Edge-first design for offline robustness and efficiency
Many use cases only work reliably if they are not dependent on a permanent connection. Edge-first means that core logic and inference are designed locally. Backend and cloud remain important, but they complement operations instead of carrying them. This improves robustness and reduces ongoing data costs.
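Structurally, edge-first means the core decision path has no dependency on the network at all; the backend call is strictly additive. The following minimal sketch, with invented function names and thresholds, shows that separation.

```python
# Sketch of the edge-first pattern: the core decision never depends on
# connectivity, and backend sync is best-effort. Names and the limit
# value are illustrative assumptions.

def core_decision(sensor_value: float, limit: float = 100.0) -> str:
    """Local core logic: always available, even fully offline."""
    return "warn" if sensor_value > limit else "ok"

def try_sync(summary: dict, connected: bool) -> bool:
    """Complementary backend sync; failure must not affect the core function."""
    if not connected:
        return False    # queue or drop; the local decision already happened
    # ... transmit the summary to the backend here ...
    return True

decision = core_decision(120.0)
synced = try_sync({"decision": decision}, connected=False)
print(decision, synced)  # warn False
```

Inverting this dependency — calling the backend first and falling back to local logic on failure — is precisely the design that edge-first avoids, because it makes latency and robustness hostage to the network.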
Save Costs and Increase Quality
If these levers are successfully implemented, car manufacturers can quickly realize significant economic effects: On the cost side, local data processing reduces transmission costs and speeds up inference. Optimized models maximize the use of existing hardware, which can delay or avoid investments in new infrastructure. Real-time diagnostics shorten repair cycles and can reduce warranty costs. On a large scale, predictive edge AI can help to detect quality problems at an early stage. Depending on the initial situation and use case, savings in warranty and recall costs in the single-digit to double-digit percentage range are conceivable.
On the quality side, edge AI improves the ability to detect conditions and deviations at an early stage. This can increase the reliability of vehicles and fleets. At the same time, local intelligence increases functional robustness because it is less dependent on cloud and network conditions.
For drivers and fleet operators, the benefits are tangible when functions react more quickly, when maintenance requirements become visible at an early stage and when energy consumption, safety and comfort are noticeably improved. The driving experience also benefits because personalization and intelligent assistance functions can work without waiting times and without constant data transmission.
Practical Example: Embedded Tire Digital Twin
The example of an Embedded Tire Digital Twin, which Michelin has developed together with Sonatus, shows how edge AI can work in practice beyond ADAS. The approach aims to model software-based tire information in the vehicle and derive condition indicators from it. The focus is on the idea of improving maintenance and safety through continuous local inference.
Tires are relevant to safety and at the same time a factor for efficiency and operating costs. Wear, load, temperature and condition have a direct impact on driving behavior and energy consumption. A digital twin that works locally in the vehicle can provide information in real time, for example on wear or optimum use. There are potentially clear benefits for fleets: fewer unplanned breakdowns, more predictable maintenance, lower overall costs and increased safety.
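The article does not disclose how the Michelin/Sonatus twin actually models tire condition, so the sketch below is a purely illustrative stand-in for the pattern: a condition indicator computed continuously in the vehicle from locally available signals. Inputs, coefficients and thresholds are all invented.

```python
# Purely illustrative tire-condition indicator, NOT the real Michelin
# model. It only demonstrates the pattern of continuous local inference
# over wear-relevant signals; every number here is an assumption.

def tire_wear_indicator(km_driven: float, avg_load_factor: float,
                        avg_temp_c: float) -> float:
    """Return a 0..1 wear indicator derived from locally available signals."""
    wear = km_driven / 60_000 * (1.0 + 0.3 * avg_load_factor)
    if avg_temp_c > 40.0:
        wear *= 1.1          # hot operation accelerates wear (illustrative)
    return min(wear, 1.0)

def maintenance_due(wear: float, threshold: float = 0.8) -> bool:
    """Continuous local check: flag maintenance before a failure occurs."""
    return wear >= threshold

wear = tire_wear_indicator(km_driven=45_000, avg_load_factor=0.5, avg_temp_c=25.0)
print(round(wear, 3), maintenance_due(wear))
```

For a fleet operator, the output of such an indicator is exactly the kind of compact, locally computed signal that turns unplanned breakdowns into scheduled maintenance.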
What is important about this example is not so much the specific feature as the pattern: Edge AI turns a subsystem into a new software function that can be developed, validated and rolled out in SDV logic. Subsystem use cases like this are the pragmatic entry point into edge AI for many car manufacturers because they offer clear metrics, manageable integration spaces and direct business impact.
What Role Platforms Play for Edge AI
If car manufacturers want to scale edge AI, they need platform capabilities beyond individual models: interoperability across hardware variants, a consistent lifecycle and simple integration.
This is where solutions such as Sonatus AI Director, which aim to operationalize edge AI workloads in the vehicle, can help. Product names matter less than the underlying capabilities: an end-to-end toolchain, rollout across ECUs and vehicle lines, standardized deployment and monitoring, and secure, controllable updates.
Conclusion: The Race Is Decided by Scaling
Edge AI is a logical response to SDV requirements. It combines real-time capability, robustness, data protection and cost control, and expands the innovation space beyond ADAS. However, the decisive difference lies not in the proof of concept, but in scaling.
Automotive manufacturers who take a strategic approach to edge AI start with use cases that deliver measurable benefits, for example in maintenance, quality or energy efficiency. Then it's the industrial implementation that counts: workflows, toolchains, vehicle MLOps and an edge-first design for robust core functions. Those who lay these foundations and make smart use of partnerships in the ecosystem will shorten product cycles and set new standards in terms of safety and efficiency. (se)