Digital Transformation AI Meets Digital Twins: From Data Generation to Testing

By Thomas Guntschnig | Translated by AI | 5 min reading time

Autonomous vehicles, drones, and robots are increasingly operating independently. To ensure they function safely, they must be tested under a wide range of conditions before they are actually deployed. Digital twins, which are virtual representations of systems and their environments, are considered a key technology for this.

Digital twin: Shared representation of a robotics system—left as a digital simulation, right in the real production environment. (Image: AI-generated)

In combination with artificial intelligence (AI), digital twins enable fast, risk-free, and cost-efficient validation. They can generate masses of test data, simulate critical scenarios, and systematically check performance—long before a prototype hits the road, takes to the air, or enters production. This approach is particularly helpful in testing rare but safety-critical edge cases that would otherwise be difficult or impossible to test safely in real operations. But how is the necessary data generated for this—and what exactly is the role of AI?

How the Digital Twin Becomes Realistic

But first, a fundamental question arises: what requirements must the data meet? The prerequisite for precise, realistic—i.e., truly useful—simulated test results is a simulation platform with physics-based, realistic models. The goal of these simulations is nothing less than to accurately replicate traffic flows, sensor behavior under various environmental conditions, different sensor types (camera, radar, LiDAR, etc.), as well as weather, lighting conditions, and other environmental influences. In addition, the underlying system must be capable of ensuring high reproducibility of tests in large-scale scenario generation—critical for reliable regression testing and performance benchmarks.
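
The reproducibility requirement can be sketched in a few lines: if every scenario suite is derived deterministically from a seed, two runs of the pipeline produce identical test sets, which is the precondition for regression comparisons across software versions. The parameter names below are illustrative assumptions, not taken from any simulator or standard:

```python
import random
from dataclasses import dataclass

# Hypothetical scenario parameters -- names are illustrative only.
@dataclass(frozen=True)
class ScenarioParams:
    weather: str
    time_of_day: float   # hours, 0-24
    pedestrian_count: int

def generate_scenarios(n: int, seed: int) -> list[ScenarioParams]:
    """Sample n scenario variants deterministically from a fixed seed,
    so the same suite can be replayed exactly in regression tests."""
    rng = random.Random(seed)
    weathers = ["clear", "rain", "fog", "snow"]
    return [
        ScenarioParams(
            weather=rng.choice(weathers),
            time_of_day=round(rng.uniform(0.0, 24.0), 2),
            pedestrian_count=rng.randint(0, 20),
        )
        for _ in range(n)
    ]

# Same seed -> identical test suite, the basis for comparing software versions.
assert generate_scenarios(100, seed=42) == generate_scenarios(100, seed=42)
```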

The trend towards simulation-supported validation is also reflected in new safety standards: ISO 21448 ("Safety of the Intended Functionality", SOTIF) requires a systematic examination of potential hazards—even in systems operating nominally error-free. Standardized formats such as ASAM OpenX (e.g., OpenSCENARIO, OpenDRIVE) provide the foundation for formally and universally describing scenarios.

AI as a Prerequisite in All Areas of Simulation

Given these high demands, the challenge is obvious: such a simulation platform requires an immense number of data points—and the capacity to process them meaningfully. This is where AI comes into play, without which this technology would probably be inconceivable. AI plays a crucial role in the simulation and use of digital twins in many ways: in the generation and enrichment of realistic test data, scenario planning, validation, and subsequent continuous training.

Data Generation: Much of the required training and test data cannot be captured in reality. AI helps generate synthetic data that matches physically realistic sensor data. Modern simulators produce photorealistic camera streams, LiDAR point clouds, and radar echoes with correct weather, lighting, and motion effects—for example, for detecting pedestrians at dusk. The challenge is to keep the reality gap—i.e., discrepancies between virtual and real data—to a minimum. Metrics such as mAP (mean Average Precision) or the nuScenes Detection Score help ensure that the results meet the highest standards, because the quality of the data determines the quality of the simulation.
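
To make the mAP idea concrete, here is a minimal sketch of average precision in its classic ranked-retrieval form. Real object-detection mAP additionally matches detections to ground truth via IoU thresholds and averages over classes (and, in the nuScenes Detection Score, folds in further error terms), so this is a simplification:

```python
def average_precision(scores, labels, n_positives):
    """AP over a ranked detection list: mean of the precision values
    observed at each true positive.
    scores: detection confidences; labels: 1 if the detection is correct."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp, ap = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i]:
            tp += 1
            ap += tp / rank
    return ap / n_positives

def mean_average_precision(per_class_ap):
    """mAP: the mean of per-class AP values."""
    return sum(per_class_ap) / len(per_class_ap)

# Perfectly ranked detections (all true positives first) give AP = 1.0
print(average_precision([0.9, 0.8, 0.3], [1, 1, 0], n_positives=2))  # 1.0
```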

Scenario Planning: The diversity and complexity of virtual test scenarios are crucial for how meaningful the tests are. AI methods are used to automatically generate scenarios or derive them from real data. This way, realistic tests can be derived from recorded driving data, or particularly critical edge cases can be identified with algorithmic support. A structured approach is essential, and several standards are now established: The PEGASUS project—a German research project for developing unified quality criteria for autonomous vehicles—describes a scenario as a model of increasingly complex layers built from macro- and micro-level data. The Association for Standardization of Automation and Measuring Systems (ASAM) has also developed the OpenX standards as a foundation for a structured scenario test workflow.
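
The layered idea can be made tangible with a combinatorial sweep over scenario dimensions. The layers and values below are illustrative assumptions loosely inspired by the PEGASUS layer model, not an implementation of it:

```python
from itertools import product

# Illustrative scenario layers -- names and values are assumptions,
# loosely echoing the PEGASUS idea of composing scenarios from layers.
road_layouts = ["straight", "curve", "intersection"]
weather = ["clear", "rain", "fog"]
actors = ["none", "pedestrian_crossing", "cut_in_vehicle"]

# Every combination of the three layers yields one concrete test scenario.
scenarios = [
    {"road": r, "weather": w, "actor": a}
    for r, w, a in product(road_layouts, weather, actors)
]
print(len(scenarios))  # 27 combinations from 3 x 3 x 3 layer values
```

In practice, AI-based methods prune or prioritize such sweeps, since exhaustive enumeration explodes quickly as layers and values are added.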

First Test, Then Improve

Validation: In the validation phase, AI ensures that the obtained test results are correctly interpreted and utilized. AI-based analysis models evaluate the virtual test runs in the previously defined scenarios to automatically detect risks or anomalies. For example, machine learning methods can be used to train prediction models that recognize impending erroneous decisions from sensor streams at an early stage. Additionally, AI systems support adaptive controls that are tested in the simulator against unforeseen behavior—such as an AI-supported emergency system that can learn in the simulator how to respond to new danger patterns. Lastly, AI ensures the traceability and repeatability of the tests: through automated execution (for instance, via cloud-based batch testing), identical scenarios can be replayed repeatedly to compare the impact of software changes exactly.
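
As a stand-in for the learned analysis models mentioned above, a simple threshold detector on a sensor stream shows the basic pattern of flagging anomalies in a virtual test run. Real systems would use trained models rather than this z-score heuristic, which is purely illustrative:

```python
from statistics import mean, stdev

def zscore_anomalies(stream, window=10, threshold=3.0):
    """Flag indices where a reading deviates more than `threshold` standard
    deviations from its trailing window -- a toy stand-in for learned
    anomaly-detection models evaluating a simulated sensor stream."""
    flags = []
    for i in range(window, len(stream)):
        hist = stream[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and abs(stream[i] - mu) / sigma > threshold:
            flags.append(i)
    return flags

# A stable signal with one spike: only the spike is flagged.
readings = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0, 5.0, 1.0]
print(zscore_anomalies(readings))  # [10]
```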


Training and Continuous Improvement: In virtual environments, AI algorithms can be trained risk-free—such as through reinforcement learning. Continuous learning can also be implemented safely: New software versions are tested through simulation before rollout to avoid unwanted side effects. Fleet operators also use simulation to optimize their autonomous systems in operation—by extensively testing new routes or strategies on the digital fleet twin before implementing them in reality.
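
Risk-free training in a virtual environment can be sketched with a deliberately tiny example: tabular Q-learning on a five-cell corridor, where the agent learns by trial and error to reach the goal cell. This is a textbook toy under simplified assumptions, not how an autonomous driving stack would actually be trained:

```python
import random

# Tabular Q-learning on a toy 5-cell corridor: start at cell 0,
# reward for reaching cell 4, small step cost otherwise.
# Purely illustrative of risk-free trial-and-error in simulation.
N_STATES, ACTIONS = 5, (-1, +1)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
rng = random.Random(0)
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration rate

for _ in range(2000):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if rng.random() < eps:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else -0.01
        # Q-learning update: bootstrap from the best action in the next state.
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

greedy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(greedy)  # the learned greedy policy should move right in every cell
```

Crucially, the thousands of "crashes" the agent suffers while learning happen only in the virtual world, which is the whole point of training in a digital twin.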

Effort and Benefit in Balance

The integration of digital twins into existing development processes is challenging. Building realistic simulations requires detailed data, precise sensor calibration, and considerable expert knowledge. The benefits often only become apparent in the medium term, which can initially seem daunting.

Moreover, the simulation is only as good as the data that drives it. A consistent data strategy is therefore essential—from high-resolution maps and realistic traffic models to precise sensor data. To avoid the mentioned reality gap, constant alignment with real driving data and the incorporation of actual sensor logs are imperative. Culturally, the introduction of simulated testing is also a challenge: engineering teams must learn to take virtual tests as seriously as real ones. Silo thinking within companies can also hinder acceptance if software, testing, and safety departments do not work closely together.
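
The alignment with real driving data can be quantified with simple distribution tests. As one assumption-laden proxy, a two-sample Kolmogorov-Smirnov statistic compares real and simulated readings of a single sensor channel; values near 0 suggest a small reality gap on that channel:

```python
def ks_statistic(real, sim):
    """Two-sample Kolmogorov-Smirnov statistic: the largest vertical gap
    between the empirical CDFs of two samples. Used here as a crude proxy
    for the reality gap on one sensor channel."""
    xs = sorted(set(real) | set(sim))

    def ecdf(sample, x):
        # Fraction of sample values less than or equal to x.
        return sum(v <= x for v in sample) / len(sample)

    return max(abs(ecdf(real, x) - ecdf(sim, x)) for x in xs)

# Identical distributions -> 0.0; fully disjoint distributions -> 1.0
print(ks_statistic([0.1, 0.2, 0.3], [0.1, 0.2, 0.3]))  # 0.0
```

In a production data strategy, such checks would run continuously against incoming real sensor logs, triggering recalibration of the simulation models when the gap grows.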

Despite all hurdles, companies that rely on simulation-based development report measurable benefits: shorter development times, lower costs, and higher system reliability. With an experienced technology partner, the path to an autonomous future can be strategically shaped.

About the Author: Thomas Guntschnig is Managing Director for the EMEA region at MORAI Inc., a leading provider of simulation-based validation solutions for autonomous systems. The Korean technology company develops high-precision digital twin platforms for the virtual validation of autonomous vehicles and other autonomous mobility systems. In its home market, MORAI works with Hyundai, Samsung Heavy Industries, and the South Korean government, among others, and collaborates closely with renowned research institutions in Asia and Europe.

In his role, Guntschnig drives the international expansion of MORAI and is involved in the development of global standards and partnerships in simulation-based safety testing as part of the IAMTS (International Alliance for Mobility Testing and Standardization).