AI in Product Development: "Engineers Will Need to Think Outside the Box More in the Future"

By Monika Zwettler | Translated by AI | 12 min reading time

The use of AI in product development is often fragmented today, with separate information and tools. Comprehensive data quality and interoperability, however, are crucial for successfully deploying AI, as Dr. Dirk Molitor explains in the interview. He also provides tips on how companies can advance their strategic "AI-ification."

One result of the study: the future of engineering lies in agentic AI, i.e., systems capable of autonomous reasoning and orchestrating workflows across various disciplines and tools. (Image: © Boonyapawn - stock.adobe.com)

Accenture, the German Research Center for Artificial Intelligence (DFKI), and the Fraunhofer Institute for Software and Systems Engineering ISST describe in a report how AI can accelerate and transform product development. The report also includes a scalable framework for implementing AI. Dr. Dirk Molitor, who works in Engineering Digitization and PLM Consulting at Accenture GmbH and led the project, provides further insights and advice in this interview on what should be done now.

Dr. Molitor, the whitepaper states that AI has the potential to fundamentally transform and accelerate product development. But what do companies need to do to achieve this?

To fully leverage the benefits of AI in product development, companies must establish solid foundations. Often, AI operates in environments designed for isolated optimization rather than integrated orchestration. The product development process (PDP) consists of numerous interdependent engineering and support processes. Currently, AI is typically applied selectively to automate and simplify individual tasks. However, the true value of AI lies in synchronizing processes across system and domain boundaries, continuously monitoring compatibility between system levels, and coordinating teams and disciplines.

For this potential to be realized, technical standards, harmonized data and tool landscapes, and clear organizational responsibilities must be established. Without a machine-readable, comprehensive understanding of processes, methods, data, and tools, AI remains blind to complex interrelationships and cannot orchestrate today's fragmented organizational units across the board. The consequences are data gaps, inconsistent models, redundant work steps, and a lack of end-to-end transparency.

Dr. Dirk Molitor is an Associate Manager at Accenture GmbH. He advises clients from the automotive, aerospace, and mechanical/plant engineering sectors, focusing on engineering toolchain transformation and AI integration. He previously studied industrial engineering and mechanical engineering at TU Darmstadt and earned his doctorate at the Institute for Production Engineering and Forming Machines on AI-controlled production processes and machine tools. (Image: Accenture)

You argue that companies must act early and decisively to avoid getting stuck in fragmented solutions. What type of fragmentation is most harmful in engineering, and how does this disadvantage manifest in the daily work of a designer?

The most damaging form of fragmentation in engineering is the simultaneous separation of information, data, and tools across teams, domains, and systems. Every engineer is familiar with the consequences from daily work: a significant portion of working time is not spent on constructive value creation but on the tedious searching, merging, and interpreting of information. Dependencies between components, functions, or disciplines often need to be clarified through numerous coordination loops with various colleagues. Even then, uncertainty frequently remains about whether all impacts of one's work on other artifacts have been fully considered. A large part of the relevant connections also resides in implicit knowledge, such as personal experience or scattered expert knowledge, which is neither centrally documented nor machine-readable. This is precisely where the daily productivity and quality loss occurs: designers work in a constant state of incomplete transparency.

Designers work in a constant state of incomplete transparency.

How does AI help in this regard?

AI could overcome this fragmentation of information, data, and tools by providing the right information, connections, and potential causal chains depending on the task and context. However, this requires a machine-readable modeling of the entire product development process, including the integration of implicit knowledge. Without this foundation, AI applications remain isolated, and engineers remain stuck in searching rather than developing.

What does the framework you proposed in the whitepaper look like in detail?

The framework presented in the whitepaper defines the fundamental prerequisites for AI to be used effectively in engineering, not just in isolated cases but system-wide. It encompasses five interconnected dimensions:

  • Data quality: ensures that product and process data are consistent, complete, and machine-readable, which is essential for any AI-driven analysis and automation.
  • Interoperability: the ability to seamlessly exchange information across tools, domains, and system levels. Without standards for interoperable data and interfaces, AI remains limited to isolated solutions.
  • Powerful AI platform: provides unified access to models, computing resources, development processes, and security mechanisms, preventing different teams from creating incompatible AI stacks.
  • Context management: goes beyond mere data storage by linking data with its technical, procedural, and organizational meaning, turning it into machine-readable product creation knowledge. AI can thus not only interpret the data but also place it within the engineering context.
  • Federated governance: ensures that all departments operate under common rules without overburdening central teams. It defines roles, responsibilities, quality policies, and security requirements, thereby supporting the scalable and controlled deployment of AI solutions.

Together, these five dimensions form a solid foundation on which AI in engineering can realize its full systemic potential.

How does the framework help overcome the concept of isolated tool integration in favor of a scalable Digital Thread (DT)?

The framework helps move beyond the typically isolated tool integration of today and instead build a scalable digital thread by directly addressing the structural causes of data fragmentation. A key initial lever is improving data quality: through AI-supported methods, such as automatic classification, consistency checks, or gap detection, as well as continuous quality monitoring, data is not only cleaned but maintained at a reliable level over the long term. This creates a solid foundation for a seamless digital thread.
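
To make this more concrete, here is a minimal sketch of rule-based gap and consistency checks on a small parts table. The column names, example data, and rules are illustrative assumptions, not anything prescribed by the whitepaper's framework.

```python
# Minimal sketch: rule-based gap detection on engineering metadata.
# Columns, example values, and rules are illustrative assumptions.
import pandas as pd

parts = pd.DataFrame([
    {"part_id": "P-100", "mass_kg": 2.4,  "material": "AlMg3", "cad_ref": "asm_100.stp"},
    {"part_id": "P-101", "mass_kg": None, "material": "AlMg3", "cad_ref": "asm_101.stp"},
    {"part_id": "P-102", "mass_kg": 0.8,  "material": "",      "cad_ref": None},
])

def quality_report(df: pd.DataFrame) -> pd.DataFrame:
    """Flag missing or inconsistent entries per part (simple gap detection)."""
    issues = []
    for _, row in df.iterrows():
        row_issues = []
        if pd.isna(row["mass_kg"]):
            row_issues.append("mass missing")
        if not row["material"]:
            row_issues.append("material missing")
        if not row["cad_ref"]:
            row_issues.append("no CAD reference")
        issues.append(", ".join(row_issues) or "ok")
    return df.assign(quality_issues=issues)

print(quality_report(parts))
```

In practice, such rule-based checks would sit alongside AI-supported classification and run continuously as part of quality monitoring, as described above.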

Building on this, the framework relies on the standardization and semantic linking of data through ontologies and binding data standards. These ontologies create a common language across tools, domains, and system contexts. When companies apply these standards consistently, formerly isolated data sources can be systematically integrated and placed into a broader context. This is a fundamental prerequisite for enabling information to flow throughout the product lifecycle.
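
As an illustration of what such semantic linking can look like, the following sketch uses rdflib and an invented "eng" vocabulary to connect a requirement, a part, and a simulation model originating from different tools. The namespace, classes, and relations are assumptions made for the example, not a published ontology or data standard.

```python
# Minimal sketch: linking artifacts from different tools via a shared vocabulary.
# Namespace, class names, and relations are illustrative assumptions.
from rdflib import Graph, Namespace, RDF, Literal

ENG = Namespace("http://example.org/engineering#")
g = Graph()
g.bind("eng", ENG)

# A requirement (e.g., from an ALM tool) and a part (e.g., from PLM) described in one vocabulary
g.add((ENG.REQ_42, RDF.type, ENG.Requirement))
g.add((ENG.REQ_42, ENG.text, Literal("Bracket shall withstand 5 kN")))
g.add((ENG.BRK_7, RDF.type, ENG.Part))
g.add((ENG.BRK_7, ENG.satisfies, ENG.REQ_42))
g.add((ENG.FEM_3, RDF.type, ENG.SimulationModel))
g.add((ENG.FEM_3, ENG.verifies, ENG.REQ_42))

# Cross-tool question: which artifacts are connected to requirement REQ_42, and how?
q = """
SELECT ?artifact ?relation WHERE {
    ?artifact ?relation eng:REQ_42 .
}
"""
for artifact, relation in g.query(q, initNs={"eng": ENG}):
    print(artifact, relation)
```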

And future agent systems can work with that?

A crucial aspect is the multi-layered modeling of data. This ensures that different AI agents each receive the context relevant to their tasks. For instance, an agent conducting impact chain analyses at the system level requires abstract models, functional dependencies, and system architectures. An agent supporting designers in component creation, on the other hand, needs geometric details, manufacturing constraints, and material-specific parameters. An agent that automates embedded software development, in turn, requires entirely different data, such as state machines, interfaces, or hardware compatibility.

For this layered context to be provided as needed, data must be modeled hierarchically, analogous to the organizational structure: from coarse to fine, from systemic to domain-specific. Through this structuring, AI agents can interact consistently at all levels, from system architecture to individual components or software modules.
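
A minimal sketch of this idea, with invented layer contents and agent names, might hand each agent only the slice of the product model its task needs:

```python
# Minimal sketch: serving different context layers to different AI agents.
# Layer contents and agent names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class SystemContext:
    """Abstract layer: functions, architecture, dependencies between subsystems."""
    functions: list[str] = field(default_factory=list)
    dependencies: dict[str, list[str]] = field(default_factory=dict)

@dataclass
class ComponentContext:
    """Detailed layer: geometry, manufacturing constraints, material parameters."""
    geometry_ref: str = ""
    material: str = ""
    constraints: list[str] = field(default_factory=list)

CONTEXT_LAYERS = {
    "impact_analysis_agent": SystemContext(
        functions=["braking", "energy recovery"],
        dependencies={"brake_controller": ["battery_mgmt", "wheel_speed_sensor"]},
    ),
    "design_assistant_agent": ComponentContext(
        geometry_ref="bracket_v3.stp",
        material="AlMg3",
        constraints=["min wall thickness 2 mm", "die-cast draft angle 3 deg"],
    ),
}

def context_for(agent: str):
    """Return only the layer of the product model that this agent's task requires."""
    return CONTEXT_LAYERS[agent]

print(context_for("impact_analysis_agent"))
```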

Most of the scientific publications analyzed show higher vertical than horizontal maturity, meaning they focus on domain-specific problems. What do you attribute this to?

The greatest barrier to horizontal integration cannot be attributed to a single cause; it is a combination of various factors: lack of tool interoperability, insufficient coordination between engineering silos, and a lack of semantic consistency in data. This situation is significantly exacerbated in engineering due to the high heterogeneity of data. Here, 3D geometry data, text documents, sensor and measurement time series, structured data from PLM or ERP systems, and unstructured information from emails, reports, or presentations converge. Additionally, even similar data types exist in various, sometimes proprietary formats—such as CAD data from different manufacturers or domain-specific simulation formats.

And this fragmentation leads to barriers in horizontal integration, meaning the connection of requirements, architecture, design, and testing?

This fragmentation results in AI applications often relying on very limited data foundations and therefore being restricted to small, clearly defined engineering areas. Horizontally integrated use cases that connect multiple domains, tools, and system levels fail either due to technical hurdles such as missing or closed interfaces, organizational boundaries such as silo structures or lack of shared responsibility, or because data exists but is not semantically compatible.

To overcome these barriers, significantly more standardization, open interfaces, and cross-domain modeling of products, processes, and data are needed. Only when data is uniformly described through ontologies and standards, made accessible via open APIs, and linked in shared models can horizontal integration succeed—enabling AI to deliver its value across individual domains and throughout the entire product development process.

Could you elaborate on the difference between a vertical use case and a true horizontal use case in terms of challenges and added value for everyday engineering?

The difference between a highly refined vertical and a true horizontal AI use case is clearly evident in everyday engineering. A vertical use case optimizes a well-defined task within a single domain—for instance, the analysis of an FEM model or the evaluation of measurement data. While it provides local benefits, it does not help to understand how a change impacts other areas. The tedious search for affected artifacts, dependencies, and responsible individuals remains. A horizontal use case, on the other hand, links requirements, architecture, design, and testing into a seamless context. In the case of a product change, for example, this means that instead of manually tracing the impacts across various tools and documents, the AI can suggest which artifacts need to be updated, how to implement the change, and which stakeholders to involve. Engineers would still validate these suggestions, but an accuracy rate of even 80 percent would significantly accelerate changes and greatly reduce coordination efforts.

In short: Vertical AI optimizes individual steps—horizontal AI accelerates the entire development flow.
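
To illustrate the horizontal scenario described above, the sketch below models traceability links between requirements, architecture, design, and test as a directed graph and derives the artifacts potentially affected by a change. The artifact names and edges are invented for the example, and networkx merely stands in for whatever traceability backbone a company actually uses.

```python
# Minimal sketch of horizontal change-impact analysis over a traceability graph.
# Artifact names and edges are illustrative assumptions.
import networkx as nx

trace = nx.DiGraph()
trace.add_edges_from([
    ("REQ_42", "FUNC_braking"),          # requirement realized by a function
    ("FUNC_braking", "ARCH_brake_ctrl"), # function allocated to an architecture element
    ("ARCH_brake_ctrl", "CAD_bracket"),  # architecture element implemented by a design
    ("ARCH_brake_ctrl", "SW_brake_sm"),  # ... and by embedded software
    ("CAD_bracket", "TEST_vibration"),   # design verified by a test case
])

def impacted_artifacts(changed: str) -> set[str]:
    """All downstream artifacts that may need an update after a change."""
    return nx.descendants(trace, changed)

# Example: a requirement change propagates across domains and tools
print(sorted(impacted_artifacts("REQ_42")))
```

An AI suggestion of this kind would still be validated by engineers, as noted above, but it replaces the manual tracing across tools and documents.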

The Digital Thread and MBSE (Model-Based Systems Engineering) are seen as complementary. How does MBSE provide the machine-readable foundation that AI systems need to gain an overview of the entire system before they can delegate development tasks?

MBSE is a key technology for enabling horizontal AI applications in the PDP because it provides the formalized, machine-readable system description that AI needs to understand the overall system. Through the model-based representation of requirements, functions, architecture, interfaces, and dependencies, especially with modern SysML v2-based tools, a holistic digital representation of the product is created. This structure allows AI systems to recognize system-wide interrelations, predict impacts, and assign development tasks specifically to the appropriate teams or tools.

However, not all companies are proficient in MBSE today, correct?

The creation of such system models is itself an enormous challenge, especially for complex mechatronic products. Here, AI can provide support by linking data objects, identifying inconsistencies, or completing missing relationships. These are tasks that are hardly manageable manually. This creates a symbiosis: AI assists in building and maintaining the system model, while the system model, in turn, provides AI with the structured context it needs to efficiently, securely, and transparently support or automate horizontal engineering tasks.

Many companies work with a large number of legacy tools. How can they modify their tool landscape at a reasonable cost to meet the requirements for AI-driven engineering?

Companies with extensive legacy tool landscapes must find a pragmatic yet consistent approach to establishing interoperability and standardization for AI-supported engineering. The first step is a clear assessment of the value contribution of each tool: Which tools are truly business-critical, which are used out of habit, and which add more complexity than benefit? Similar to product configurations, each additional tool increases integration and development efforts, as it introduces new interfaces, data formats, and dependencies.

Building on this, companies must link their processes more closely to the tool landscape and ensure that every remaining tool can provide standardized, semantically describable data formats. Where native support is lacking, connectors, APIs, or middleware can help convert proprietary formats into open standards.

Only through such a mix of consolidation, standardization, and targeted modernization can the legacy landscape be developed to meet the requirements of AI-driven engineering.

At the same time, every legacy tool should undergo a critical "justification review": Does it provide indispensable value, or can it be replaced in the long term if it cannot be interoperably integrated? Only through such a mix of consolidation, standardization, and targeted modernization can the legacy landscape be developed to meet the requirements of AI-driven engineering, without companies having to overhaul their entire infrastructure all at once.
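
As a rough illustration of such a connector, the sketch below maps a record from a hypothetical legacy PLM export onto a common, semantically described data model. The field names and target schema are assumptions made for the example, not a specific industry standard.

```python
# Minimal sketch of a connector translating a proprietary export into an open format.
# Field names and the target schema are illustrative assumptions.
import json

def legacy_to_standard(legacy_record: dict) -> dict:
    """Translate a proprietary export record into the common data model."""
    return {
        "artifact_id": legacy_record["ItemNo"],
        "artifact_type": "Part",
        "name": legacy_record["Bezeichnung"],  # legacy German field name
        "mass": {"value": float(legacy_record["Gewicht_g"]) / 1000.0, "unit": "kg"},
        "source_tool": "LegacyPLM",
    }

legacy = {"ItemNo": "4711", "Bezeichnung": "Halter", "Gewicht_g": "240"}
print(json.dumps(legacy_to_standard(legacy), indent=2, ensure_ascii=False))
```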

What does the roadmap you propose look like?

The proposed roadmap follows a phased approach that places an early and clear focus on value-adding use cases while ensuring their long-term integratability. Each use case is implemented in such a way that it does not remain isolated but gradually creates the prerequisites for scalable, horizontal AI integration in the PDP. Specifically, this means that data from different domains and tools is progressively brought into a common data layer, where it is quality-assured, semantically linked via ontologies, and made reusable for future applications. In this way, domain-specific data spaces are initially created, which are gradually interconnected over time. The goal is to achieve seamless, horizontal integration in the long term.

Companies need clear support from management and must understand the "AI-ification" of product development as an integral part of their R&D strategy.

This approach is only successful if it is strategically anchored. Companies need clear support from management and must incorporate the "AI-ification" of product development as a fixed part of their R&D strategy. Equally crucial is an open error culture, as overcoming existing fragmentation is only possible if experimentation is encouraged and AI integrations are supported with sufficient resources. This creates a roadmap that begins pragmatically, delivers value quickly, and simultaneously lays the foundation for a sustainable, scalable digital thread.

The roadmap begins with building the foundation before automating and orchestrating. What two specific technological measures should companies prioritize today to ensure the integration capability of their AI use cases into future multi-agent systems?

Firstly, companies should familiarize themselves early with graph and vector databases and evaluate how their own engineering data can be meaningfully aggregated within them. Graph databases enable the representation of complex technical dependencies, while vector databases are optimized for semantic representations and fast AI queries. Together, they form the basis for AI agents to efficiently search engineering knowledge, identify connections, and provide information as needed. Both are critical components for drastically reducing search times and making AI agents system-capable.
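
The following sketch illustrates the interplay of the two database types on toy data: networkx plays the role of the graph database, and a NumPy cosine similarity stands in for a vector store. The embeddings, artifact names, and dependencies are invented for the example.

```python
# Minimal sketch: combining a dependency graph with a vector index for semantic search.
# networkx and NumPy stand in for dedicated graph/vector databases; data is invented.
import networkx as nx
import numpy as np

# Graph side: explicit technical dependencies
deps = nx.DiGraph()
deps.add_edges_from([("REQ_42", "CAD_bracket"), ("CAD_bracket", "TEST_vibration")])

# Vector side: toy embeddings of artifact descriptions for semantic lookup
embeddings = {
    "REQ_42":         np.array([0.9, 0.1, 0.0]),
    "CAD_bracket":    np.array([0.8, 0.2, 0.1]),
    "TEST_vibration": np.array([0.1, 0.9, 0.3]),
}

def semantic_search(query_vec: np.ndarray, top_k: int = 2) -> list[str]:
    """Return the artifacts whose embeddings are most similar to the query."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(embeddings, key=lambda k: cos(query_vec, embeddings[k]), reverse=True)
    return ranked[:top_k]

# An agent first finds semantically relevant artifacts, then follows explicit dependencies
for hit in semantic_search(np.array([0.85, 0.15, 0.05])):
    print(hit, "->", list(deps.successors(hit)))
```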

Secondly, companies should prioritize the development of AI and data platforms. Recently, this has been described in the literature with the term "Engineering Data Backbone." Such a backbone standardizes data from various engineering tools, connects them via standardized interfaces, and makes them accessible for AI applications. This includes ensuring interoperability between tools and the platform, developing consistent data models, as well as the technical implementation and automated monitoring of governance rules. Only with such a platform can future multi-agent systems be reliably orchestrated and scaled across domains.
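
A hypothetical, much-simplified configuration of such a backbone might register tool connectors and governance rules declaratively and check them automatically. All names, endpoints, and rules below are illustrative assumptions, not part of the whitepaper.

```python
# Minimal sketch: declarative tool registration and automated governance checks
# for a hypothetical engineering data backbone. All values are illustrative assumptions.
BACKBONE_CONFIG = {
    "tools": [
        {"name": "PLM",       "connector": "rest", "endpoint": "https://plm.example/api",  "format": "JSON"},
        {"name": "ALM",       "connector": "oslc", "endpoint": "https://alm.example/oslc", "format": "RDF"},
        {"name": "CAD_vault", "connector": "file", "endpoint": "/mnt/cad_exports",         "format": "STEP"},
    ],
    "governance": {
        "required_metadata": ["artifact_id", "owner", "last_modified"],
        "allowed_formats": ["JSON", "RDF", "STEP"],
    },
}

def check_governance(record: dict, config: dict = BACKBONE_CONFIG) -> list[str]:
    """Automated check of governance rules before data enters the backbone."""
    required = config["governance"]["required_metadata"]
    return [f"missing metadata: {f}" for f in required if f not in record]

print(check_governance({"artifact_id": "P-100", "owner": "team_chassis"}))
```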

What new skills do engineers urgently need to develop to succeed in this future AI-supported ecosystem?

Engineers will need to evolve their roles in a future AI-supported development ecosystem: from simply executing complex tasks to evaluating, managing, and ensuring the quality of AI-generated results. Deep specialized knowledge will remain indispensable, but the time required for many operational tasks will significantly decrease thanks to AI. This shifts the focus of work: creative, system-designing activities will increase, while repetitive routine tasks will be increasingly automated or AI-accelerated. One of the most important new competencies will therefore be the ability to critically interpret, validate, and justify AI results to third parties.

So domain experts remain important?

Not only that, but the demand for engineers with T-shaped skills will also grow: deep expertise in their domain combined with a broad understanding of adjacent areas. This is the only way they can assess the impact of their work on other disciplines, identify dependencies, and make horizontal decisions. This trend is already evident in the software industry today: there is less demand for pure specialists in individual programming languages and more for system and software architects who understand highly interconnected systems and interfaces. A similar transformation will encompass engineering as a whole. Engineers will need to "think outside the box," recognize interrelations, and think in terms of integrated systems.

Thank you!

About the whitepaper “AI in New Product Development” 

"The whitepaper 'AI in New Product Development' is a joint work by Accenture GmbH, the Fraunhofer Institute for Software and Systems Engineering ISST, and the German Research Center for Artificial Intelligence (DFKI). Other authors involved in the whitepaper include Vlad Larichev (Accenture), Dr. Tobias Guggenberger (ISST), Dr. Marcel Altendeitering (ISST), Dr. Daniel Porta (DFKI), and Dr. Matthias Ziegler (Accenture)."