Companies that have adopted AI engineering practices for developing and managing adaptive AI systems have a clear competitive advantage, according to the US-based research and consulting firm Gartner.
By 2026, such pioneers will outperform their competitors by at least 25 percent, both in the number of AI models they operationalize and in the time this takes. It is therefore important for companies to keep driving AI adoption and to explore new use cases so as not to fall behind. Johanna Pingel, Product Marketing Manager for AI at MathWorks, explains the AI trends that engineers should keep an eye on and the challenges that need to be addressed:
Physics-based AI models consider rules and principles of the real world
As AI spreads into more and more research areas, including complex technical systems, AI models must respect the physical constraints of those systems to remain relevant. Combining data with physics, for example through neural ODEs (ordinary differential equations) or PINNs (physics-informed neural networks), holds great potential. Simulations are at the core of physics-informed AI: complex models can be configured as variations within a simulation, allowing developers to switch quickly between models and arrive at the best, most accurate solution.
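To illustrate the idea, the following is a minimal sketch of a physics-informed neural network in PyTorch; the equation (du/dt = -u with u(0) = 1), the network size, and the training settings are illustrative assumptions, not part of the article.

```python
# Minimal PINN sketch: learn u(t) satisfying du/dt = -u with u(0) = 1,
# whose analytic solution is exp(-t). All sizes/settings are illustrative.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

# Collocation points where the physics residual is enforced, plus the initial condition.
t_colloc = torch.linspace(0.0, 5.0, 200).reshape(-1, 1).requires_grad_(True)
t0 = torch.zeros(1, 1)

for step in range(5000):
    optimizer.zero_grad()
    u = net(t_colloc)
    # Physics residual: du/dt + u should vanish everywhere on the collocation grid.
    du_dt = torch.autograd.grad(u, t_colloc,
                                grad_outputs=torch.ones_like(u),
                                create_graph=True)[0]
    physics_loss = torch.mean((du_dt + u) ** 2)
    # Data term: enforce the known initial condition u(0) = 1.
    ic_loss = (net(t0) - 1.0).pow(2).mean()
    loss = physics_loss + ic_loss
    loss.backward()
    optimizer.step()

# After training, net(t) approximates the analytic solution exp(-t).
```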
Reduced-order modeling (ROM) with physics-based reduction models is another important emerging trend. Here, AI can accelerate simulations by replacing an extremely computationally intensive first-principles model of a system with a learned approximation, while maintaining accuracy.
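As a hedged illustration of such an AI-based surrogate, the following sketch fits a small neural network to sampled input/output pairs of a placeholder solver; the function expensive_simulation, the sampling ranges, and the network size are assumptions made only for this example.

```python
# Sketch of an AI surrogate for an expensive simulation: fit a small neural
# network offline, then use it as a fast stand-in for the slow solver online.
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_simulation(x):
    # Placeholder for a computationally intensive first-principles model.
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1])

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(2000, 2))   # sampled operating conditions
y = expensive_simulation(X)                 # ground-truth responses (slow, offline)

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X, y)                         # fast approximation for use inside simulations

print("surrogate prediction:", surrogate.predict(X[:3]))
```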
Collaboration on AI - free access to AI is becoming established
Researchers, engineers, and data scientists should deepen their interdisciplinary collaboration in order to approach innovative solutions from different perspectives. To provide the latest models on demand and let users build on recent research results quickly, hosted version-control services for software development projects, such as GitHub, are recommended. Open-source solutions are also gaining popularity, as engineering teams often work with models from different frameworks. Closer networking between academia, research institutions, and companies is further driving AI research, benefiting both researchers and users. This applies, for example, to topics such as physics-informed machine learning and biomedical image processing.
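One common way to work across frameworks is the open ONNX exchange format; as a hedged sketch, the example below exports a small PyTorch network to ONNX so it can be consumed in another toolchain. The network, tensor names, and file name are illustrative assumptions.

```python
# Exchanging a model between frameworks via the open ONNX format.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
dummy_input = torch.randn(1, 10)   # example input that defines the exported graph shape

torch.onnx.export(model, dummy_input, "shared_model.onnx",
                  input_names=["features"], output_names=["prediction"])
# The resulting .onnx file can be loaded in other environments,
# e.g., ONNX Runtime or importers of other ML toolchains.
```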
Companies are focusing on smaller, more easily explainable AI models
AI users are increasingly finding that, for their models to be relevant, they need to deploy them, tailor them to the target hardware, and explain the models' decisions. The explainability of models and of the corresponding applications is therefore becoming ever more important for engineers.
To meet the requirements of cost-effective, low-power devices with explainable outputs, engineers are increasingly turning to traditional machine learning models and parametric models. These models are compact, have low memory requirements, and their outputs are easy to interpret, which satisfies the application's requirements.
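The following sketch shows what such a compact, interpretable model can look like in practice: a shallow decision tree whose complete decision logic can be printed as readable rules. The data set and tree depth are illustrative choices, not taken from the article.

```python
# A small, interpretable "traditional" model: a shallow decision tree.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
model = DecisionTreeClassifier(max_depth=3, random_state=0)   # small, low-memory
model.fit(X, y)

# The full decision logic fits into a few readable rules, which makes each
# prediction easy to explain and to review on constrained devices.
print(export_text(model, feature_names=load_iris().feature_names))
```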
When newer, more memory-intensive models are needed, quantization and pruning techniques offer ways to compress them, reducing model size with minimal impact on accuracy. Where necessary, engineering teams can thus use interpretability, quantization, and pruning to bring AI, both deep learning and traditional machine learning models, into conventional model development.
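As a hedged example of these compression steps, the sketch below applies magnitude pruning and post-training dynamic quantization to a small PyTorch network; the layer sizes and the 30 percent pruning ratio are assumptions made for illustration.

```python
# Two common compression steps in PyTorch: magnitude pruning and dynamic quantization.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Pruning: zero out the 30 % smallest-magnitude weights of each linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")   # make the sparsity permanent

# Quantization: store Linear weights as int8 for a smaller, faster model.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

print(quantized)   # the compressed model, typically with only minor accuracy loss
```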
AI is becoming crucial for the design, development, and operation of modern technical systems
AI is gaining ground across all industries and applications and will be crucial for technological progress and for the development and operation of modern technical systems. In more established fields where AI has only recently been introduced, engineers often need additional background on the technology and concrete reference examples to integrate AI into their work. Starting from proven reference examples, engineering teams can contribute their own data and expertise, extend the examples, and thus integrate AI tailored specifically to their tasks.
What challenges AI engineers can expect
What challenges do these developments bring? Because different teams are often responsible for creating and deploying AI models, engineers continue to face complex challenges in the AI environment. For example, the selection of preprocessing algorithms and model training typically falls to data scientists, who focus on accuracy and robustness. For a successful port to the target platform, however, engineers must take many other criteria into account. Early feasibility testing of algorithms, for example with processor-in-the-loop (PIL) runs, can prevent already trained and sometimes very powerful models from having to be discarded in the end.
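A simple desktop-level stand-in for such early feasibility testing is to compare a deployment-oriented export of the model against the floating-point reference on representative inputs before moving to PIL on real hardware; the sketch below does this with a traced PyTorch model, where the network, input shapes, and tolerance are illustrative assumptions.

```python
# Early equivalence check between a reference model and a deployment candidate.
import torch
import torch.nn as nn

reference = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
# Deployment-oriented export (here: a traced TorchScript module as a stand-in
# for the artifact that would later run on the target processor).
candidate = torch.jit.trace(reference, torch.randn(1, 16))

inputs = torch.randn(256, 16)   # representative test vectors
with torch.no_grad():
    max_error = (reference(inputs) - candidate(inputs)).abs().max().item()

print(f"max deviation vs. reference: {max_error:.6f}")
assert max_error < 1e-3, "candidate deviates too much; revisit before PIL testing"
```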
In most cases, AI training takes place in a different programming language than the implementation on the hardware, and models from the training environment cannot simply be run on the target hardware. To bridge these language barriers, engineers can use runtime interpreters (such as TensorFlow Lite), machine learning compiler frameworks such as Apache TVM, or automatic code generation in MATLAB/Simulink.
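As an example of the interpreter route, the following sketch converts a small Keras model to TensorFlow Lite for execution by the TFLite runtime on a target device; the model architecture and file name are illustrative assumptions.

```python
# Export a trained Keras model to TensorFlow Lite for embedded deployment.
import tensorflow as tf

# Small Keras model standing in for a trained network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # optional size/latency optimization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)   # artifact for the TensorFlow Lite interpreter on the target
```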
Finally, the safety of AI models remains an important topic: while AI models are allowed to make mistakes in the training environment in order to learn and improve, errors after deployment on hardware can cause significant damage in real-world systems. Reliable, objectively verifiable criteria for when a model can be considered safe will remain an important research area in the years ahead.
Outlook
The introduction of AI has implications for the entire company, from interdisciplinary collaboration to the design of individual components. It is therefore crucial for engineers to identify use cases that align with their short- and long-term goals and to implement them accordingly. As AI advances into all areas of work, including safety-critical ones, questions of model quality, language interoperability, and safety will come into particular focus.