Autonomous vehicles, drones, and robots are increasingly operating independently. To ensure they function safely, they must be tested under a wide range of conditions before deployment. Digital twins, virtual representations of systems and their environments, are considered a key technology for this.
Digital twin: Shared representation of a robotics system—left as a digital simulation, right in the real production environment.
(Image: AI-generated)
In combination with artificial intelligence (AI), digital twins enable fast, risk-free, and cost-efficient validation. They can generate large volumes of test data, simulate critical scenarios, and systematically check performance, long before a prototype hits the road, takes to the air, or enters production. This approach is particularly helpful for testing rare but safety-critical edge cases that would otherwise be difficult or impossible to test safely in real operation. But how is the necessary data generated, and what exactly is the role of AI?
How the Digital Twin Becomes Realistic
But first, a fundamental question arises: What requirements must the data meet? The prerequisite for precise, realistic, and therefore truly useful simulated test results is a simulation platform with physics-based, realistic models. The goal of these simulations is nothing less than to accurately replicate traffic flows, sensor behavior under varying environmental conditions, different sensor types (camera, radar, LiDAR, etc.), as well as weather, lighting, and other environmental influences. Additionally, the underlying system must ensure high reproducibility of tests in large-scale scenario generation, which is critical for reliable regression testing and performance benchmarks.
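As a minimal sketch of such reproducibility, scenario variants can be drawn from a seeded random generator so that every regression run tests exactly the same set. The parameter names here are purely illustrative, not taken from any particular platform:

```python
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class Scenario:
    # Hypothetical scenario parameters, purely for illustration
    rain_intensity: float    # mm/h
    sun_elevation: float     # degrees above the horizon
    pedestrian_speed: float  # m/s

def generate_scenarios(seed: int, n: int) -> list:
    """Deterministically sample n scenario variants from a fixed seed,
    so a regression run always replays the exact same test set."""
    rng = random.Random(seed)
    return [
        Scenario(
            rain_intensity=rng.uniform(0.0, 50.0),
            sun_elevation=rng.uniform(-5.0, 60.0),
            pedestrian_speed=rng.uniform(0.5, 2.5),
        )
        for _ in range(n)
    ]

# The same seed yields an identical scenario set, run after run
assert generate_scenarios(seed=42, n=100) == generate_scenarios(seed=42, n=100)
```

Because the set is a pure function of the seed, two software versions can be benchmarked against literally identical conditions.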
The trend towards simulation-supported validation is also reflected in new safety standards: ISO 21448 ("Safety of the Intended Functionality", SOTIF) requires a systematic examination of potential hazards—even in systems operating nominally error-free. Standardized formats such as ASAM OpenX (e.g., OpenSCENARIO, OpenDRIVE) provide the foundation for formally and universally describing scenarios.
Given these high demands, the challenge is obvious: such a simulation platform requires an immense number of data points, along with the capacity to process them meaningfully. This is where AI comes into play; without it, the technology would hardly be conceivable. AI plays a crucial role in the simulation and use of digital twins in several ways: in the generation and enrichment of realistic test data, in scenario planning, in validation, and in subsequent continuous training.
Data Generation: Much training and test data cannot be captured in reality. AI helps generate synthetic data that corresponds to physically realistic sensor output. Modern simulators produce photorealistic camera streams, LiDAR point clouds, and radar echoes with correct weather, lighting, and motion effects, for example when detecting pedestrians at dusk. The challenge is to keep the reality gap, i.e., the discrepancy between virtual and real data, to a minimum. Metrics such as mean Average Precision (mAP) or the nuScenes Detection Score help quantify whether perception results on synthetic data hold up, because the quality of the data determines the quality of the simulation.
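Average Precision itself can be computed from a ranked list of detections. The following is a simplified, self-contained sketch; real mAP tooling additionally handles IoU matching per class and averages over classes (and, for nuScenes, over further error terms):

```python
def average_precision(tp_flags, num_gt):
    """Simplified AP: tp_flags lists detections pre-sorted by descending
    confidence; entry i is True if detection i matched a ground-truth
    object (e.g. IoU >= 0.5). Integrates the precision-recall curve.
    mAP is then the mean of this value over all object classes."""
    ap, tp, fp, prev_recall = 0.0, 0, 0, 0.0
    for is_tp in tp_flags:
        tp += is_tp
        fp += not is_tp
        recall = tp / num_gt
        precision = tp / (tp + fp)
        ap += (recall - prev_recall) * precision
        prev_recall = recall
    return ap

# Three detections against two ground-truth pedestrians
print(average_precision([True, False, True], num_gt=2))  # 5/6, about 0.833
```

Running the same evaluation on synthetic and on real recordings of comparable scenes gives a direct, numeric handle on the reality gap.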
Scenario Planning: The diversity and complexity of virtual test scenarios are crucial for the significance of the tests. AI methods are used to automatically generate scenarios or derive them from real data. This way, realistic tests can be derived from recorded driving data, or particularly critical edge cases can be identified with algorithmic support. A structured approach is essential, and several standards are now established: The PEGASUS project, a German research initiative that developed unified quality criteria for automated driving, describes a scenario as a model of increasingly complex layers, ranging from the road network through traffic participants up to environmental conditions. ASAM (Association for Standardization of Automation and Measuring Systems) has also developed the OpenX standards as a foundation for a structured scenario-testing workflow.
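To illustrate the idea of composing a scenario from such layers, the following sketch assembles a road layer, a moving-objects layer, and an environment layer into an OpenSCENARIO-inspired XML skeleton. The element names and file name are simplified illustrations, not the actual ASAM schema:

```python
import xml.etree.ElementTree as ET

def build_scenario_xml(road_file: str, actors: list, weather: str) -> str:
    """Compose a scenario from PEGASUS-style layers (road network,
    moving objects, environment) into an OpenSCENARIO-inspired XML
    skeleton. Element names are simplified, not the real ASAM schema."""
    root = ET.Element("Scenario")
    ET.SubElement(root, "RoadNetwork", file=road_file)  # road layer
    env = ET.SubElement(root, "Environment")            # environment layer
    ET.SubElement(env, "Weather", condition=weather)
    entities = ET.SubElement(root, "Entities")          # moving-objects layer
    for name in actors:
        ET.SubElement(entities, "Vehicle", name=name)
    return ET.tostring(root, encoding="unicode")

doc = build_scenario_xml("town01.xodr", ["ego", "pedestrian_1"], "rain")
print(doc)
```

Because each layer is described separately, thousands of variants can be generated by permuting one layer (e.g. weather) while the others stay fixed.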
First Test, Then Improve
Validation: In the validation phase, AI ensures that the obtained test results are correctly interpreted and utilized. AI-based analysis models evaluate the virtual test runs in the previously defined scenarios to automatically detect risks or anomalies. For example, machine learning methods can be used to train prediction models that recognize impending erroneous decisions from sensor streams at an early stage. Additionally, AI systems support adaptive controls that are tested in the simulator against unforeseen behavior, such as an AI-supported emergency system that learns in the simulator how to respond to new danger patterns. Lastly, AI ensures the traceability and repeatability of the tests: through automated execution (for instance, via cloud-based batch testing), identical scenarios can be replayed to compare the impact of software changes exactly.
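A hypothetical regression check over such repeated runs might compare aggregated metrics between a baseline build and a candidate build. The metric names below are invented for illustration:

```python
def find_regressions(baseline: dict, candidate: dict, tolerance: float = 0.01) -> dict:
    """Compare per-metric scores from two software versions run on the
    identical, seeded scenario set; flag metrics where the candidate
    build dropped by more than the tolerance."""
    regressions = {}
    for metric, base_value in baseline.items():
        new_value = candidate[metric]
        if base_value - new_value > tolerance:
            regressions[metric] = (base_value, new_value)
    return regressions

# Invented example metrics from two builds on the same scenario set
baseline = {"mAP": 0.82, "min_ttc_pass_rate": 0.97}
candidate = {"mAP": 0.84, "min_ttc_pass_rate": 0.91}
print(find_regressions(baseline, candidate))  # flags only min_ttc_pass_rate
```

Because the scenario set is identical across runs, any flagged difference is attributable to the software change rather than to test variation.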
Date: 08.12.2025
Training and Continuous Improvement: In virtual environments, AI algorithms can be trained risk-free—such as through reinforcement learning. Continuous learning can also be implemented safely: New software versions are tested through simulation before rollout to avoid unwanted side effects. Fleet operators also use simulation to optimize their autonomous systems in operation—by extensively testing new routes or strategies on the digital fleet twin before implementing them in reality.
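As a toy illustration of risk-free training in simulation, the following tabular Q-learning agent learns in a one-dimensional "braking" world that crashing is far costlier than braking in time. This is purely didactic and far simpler than real reinforcement-learning setups for autonomous systems:

```python
import random

# Toy world: states 0..4 are distance steps towards an obstacle behind
# state 4. Actions: 0 = keep driving (free, but crashing costs -100),
#                   1 = brake (ends the episode safely at a cost of -1).
random.seed(0)
N_STATES = 5
ACTIONS = (0, 1)
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for _ in range(2000):
    s = 0
    while True:
        if random.random() < eps:
            a = random.choice(ACTIONS)               # explore
        else:
            a = max(ACTIONS, key=lambda x: Q[s][x])  # exploit
        if a == 1:                     # brake: safe and terminal
            Q[s][a] += alpha * (-1 - Q[s][a])
            break
        if s == N_STATES - 1:          # drove too far: crash, terminal
            Q[s][a] += alpha * (-100 - Q[s][a])
            break
        Q[s][a] += alpha * (gamma * max(Q[s + 1]) - Q[s][a])
        s += 1

# The learned policy brakes in the last state instead of crashing
assert Q[N_STATES - 1][1] > Q[N_STATES - 1][0]
```

The crashes the agent needs in order to learn happen only in the virtual world, which is precisely the point of training in a digital twin.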
Effort and Benefit in Balance
The integration of digital twins into existing development processes is challenging. Building realistic simulations requires detailed data, precise sensor calibration, and considerable expert knowledge. The benefits often only become apparent in the medium term, which can initially seem daunting.
Moreover, the simulation is only as good as the data that drives it. A consistent data strategy is therefore essential, spanning high-resolution maps, realistic traffic models, and precise sensor data. To keep the reality gap mentioned above small, constant alignment with real driving data and the incorporation of actual sensor logs are imperative. Culturally, the introduction of simulated testing is also a challenge: engineering teams must learn to take virtual tests as seriously as real ones. Silo thinking within companies can also hinder acceptance if software, testing, and safety departments do not work closely together.
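One simple way to quantify that alignment is a two-sample Kolmogorov-Smirnov statistic over matched sensor readings from simulation and the field; the sample values below are invented for illustration:

```python
import bisect

def ks_statistic(sim_samples, real_samples):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the empirical CDFs of simulated and real sensor readings. A large
    value signals a reality gap that needs recalibration."""
    sim = sorted(sim_samples)
    real = sorted(real_samples)

    def ecdf(sorted_xs, v):
        # Fraction of samples less than or equal to v
        return bisect.bisect_right(sorted_xs, v) / len(sorted_xs)

    return max(abs(ecdf(sim, v) - ecdf(real, v)) for v in sim + real)

sim_ranges = [10.1, 10.3, 10.2, 10.4]   # simulated LiDAR ranges (illustrative)
real_ranges = [10.0, 10.2, 10.5, 11.0]  # logged real-world ranges
print(ks_statistic(sim_ranges, real_ranges))  # 0.5, a large mismatch
```

Tracking such a distribution distance per sensor channel over time turns "alignment with real driving data" into a measurable engineering target.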
Despite all hurdles, companies that rely on simulation-based development report measurable benefits: shorter development times, lower costs, and higher system reliability. With an experienced technology partner, the path to an autonomous future can be strategically shaped.
About the Author: Thomas Guntschnig is Managing Director for the EMEA region at MORAI Inc., a leading provider of simulation-based validation solutions for autonomous systems. The Korean technology company develops high-precision digital twin platforms for the virtual validation of autonomous vehicles and other autonomous mobility systems. In its home market, MORAI works with Hyundai, Samsung Heavy Industries, and the South Korean government, among others, and collaborates closely with renowned research institutions in Asia and Europe.
In his role, Guntschnig drives the international expansion of MORAI and is involved in the development of global standards and partnerships in simulation-based safety testing as part of the IAMTS (International Alliance for Mobility Testing and Standardization).