Deployment Successes: AI Applications in Production

By Dr. Bernhard Valnion | Translated by AI | Reading time: 8 minutes


Technological advances in machine learning and artificial intelligence (AI) are progressing at a breathtaking pace. A recent event came at just the right time with its goal of providing a comprehensive overview of the current state of applications.

(Image: STUDIO.no.3 - stock.adobe.com / AI-generated)

Knowledge about AI quickly becomes outdated, and deciding which fields are worth investing in is anything but easy. This makes events like "Insight.AI in Production // Status—Challenges—Opportunities—Use Cases," organized by the Upper Austrian business agency Business Upper Austria and held on the Keba premises, all the more important. The half-day event used concrete practical examples and ongoing projects in the mechatronics cluster to give a comprehensive overview of the current state of applications in Austria. Particularly impressive was the tour of the Keba Innospace at the end of the event.

Following the introductory remarks, Elmar Paireder presented the key focus areas of the mechatronics cluster and passed the baton to Malte Scheuvens from Fraunhofer Austria. Malte Scheuvens used the powerful image of a breaking wave with a surfer to highlight the challenges posed by AI. This advance of new technologies, the speaker noted, is accompanied by increasing confusion due to ever-emerging buzzwords. Who can confidently place terms like Knowledge Graph, xLSTM, Edge Computing, EU AI Act, or GDPR correctly in the context of AI?


The speaker therefore recommended engaging in communities like AI Austria GenAI to stay up to date.

Following this introduction, Malte Scheuvens discussed ongoing research projects, such as the "TeSLA Learning Assistance System" in collaboration with Infineon Technologies Austria, or "Transparency over Asset Inventory through AI and Semantic Systems" in partnership with Wien Energie. TeSLA is designed to support maintenance personnel working on ion implantation systems; their work is sped up by faster documentation processes, achieved by linking a knowledge base with an AI-based database. In the project, semantic modeling is used to build a network (knowledge graph) of real entities such as system components, work orders, and spare parts, enabling cross-site maintenance strategies.
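As an illustration of this kind of semantic modeling, a knowledge graph can be represented as a set of subject-predicate-object triples. The following minimal sketch uses invented entity names, not data from the TeSLA project:

```python
# Illustrative sketch: real-world entities (components, work orders,
# spare parts) become nodes, typed relations link them into a graph.
# All names below are invented for illustration.

class KnowledgeGraph:
    def __init__(self):
        self.triples = set()  # (subject, predicate, object)

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """Return all triples matching the given (partial) pattern."""
        return [
            t for t in self.triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)
        ]

kg = KnowledgeGraph()
kg.add("implanter_A", "has_component", "beamline_filter")
kg.add("work_order_17", "targets", "beamline_filter")
kg.add("beamline_filter", "replaced_by", "spare_part_F3")

# Which work orders touch components of implanter_A?
for _, _, comp in kg.query("implanter_A", "has_component"):
    for order, _, _ in kg.query(predicate="targets", obj=comp):
        print(order, "->", comp)
```

Queries over such a triple store are what allow maintenance knowledge collected at one site to be reused at another, since the entities and relations are modeled explicitly rather than buried in free-text documentation.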

And in the project together with Wien Energie, Fraunhofer Austria's approach enabled the analysis and linking of several tens of thousands of plant components based on their structural similarity. This project also aims to harmonize maintenance strategies across the many historically separate plant locations.

Energy Guzzlers: ChatGPT And Other LLMs Require High Computing Power

Next was Bernhard A. Moser, who has been the honorary president of the Austrian Society for Artificial Intelligence since 2020. This organization aims to strengthen Austrian AI research, for example, by making recommendations to the federal government.

The research director addressed a fundamental problem of AI applications: today's algorithms are based primarily on the mathematical operations of multiplication and addition. GPT-4, for example, is estimated to use around 1.7 trillion weights, so each pass involves approximately that many multiplications and a comparable number of additions. However, a multiplication is, on average, about 16 times more resource-intensive than an addition, which is reflected in energy consumption: ChatGPT requires about 1 kilowatt-hour for 350 requests, while our brain performs the same task with only 20 watts, or 1/50th of the energy.
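The arithmetic behind these figures can be sketched in a few lines. The weight count and the 16x cost ratio are the values quoted in the talk; everything else is a back-of-envelope illustration:

```python
# Back-of-envelope check of the figures quoted above (weight count
# and 16x cost ratio taken from the talk, as average estimates).
weights = 1.7e12   # ~1.7 trillion weights per pass
mult_cost = 16     # one multiplication ~16x one addition
add_cost = 1

total = weights * mult_cost + weights * add_cost
mult_share = weights * mult_cost / total
print(f"Multiplications account for {mult_share:.0%} of the arithmetic cost")
# 16/17 of the cost, roughly 94 percent, comes from multiplications,
# which is why replacing them is a central lever for saving energy.
```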

The demand for computing power driven by Large Language Models (LLMs) is growing significantly faster than the improvements expected from Moore's Law. Even the performance advances of Nvidia graphics processors (GPUs) are merely a drop in the ocean: although computing power improved by a factor of 317 between 2012 and 2021, far exceeding what Moore's Law would predict, these impressive gains are still not sufficient. As a result, experts like Bernhard A. Moser, together with colleagues from materials science, device and circuit engineering, system design, and algorithm and software development, have joined forces to explore new approaches in the field of so-called neuromorphic engineering or computing.

New Approaches Could Improve Data Processing

Such collaborative approaches are essential to bridging the gap between biological systems and conventional AI through innovations. The term "neuromorphic," coined back in the 1980s, in its modern interpretation refers to a system with brain-inspired features such as in-memory computing, hardware learning, spike-based data processing, fine-grained parallelism, and reduced precision computing.

Neuromorphic research can be divided into three areas. First, "Neuromorphic Engineering" uses either complementary metal-oxide-semiconductor (CMOS) or cutting-edge post-CMOS technologies to replicate the brain's analytical mechanisms. Second, "Neuromorphic Computing" explores new data processing methods, and finally, the development of novel innovative nanodevices represents another field of innovation.


Artificial Intelligence in the Service of Resource Conservation

The business with wind turbines is booming. Therefore, the Siemens Energy plant in the Weiz district of Styria will triple its production area for wind power transformers. Optimizing material consumption in the manufacturing of power transformers is thus a worthwhile field.

Michael Zwick from the Software Competence Center Hagenberg (SCCH, an initiative of the Johannes Kepler University Linz) presented a project with remarkable results. Transformer cores are composed of various layers of steel sheets to minimize power loss. An important task in production planning is determining how to cut the sheets from the coils in such a way that the transformer core meets the required properties while also optimizing raw material usage. Michael Zwick emphasized that this is a multifaceted combinatorial optimization problem: not only should the residual material be minimized, but, for example, the noise level threshold should also not be exceeded.

Predict Properties of the End Product

The project employed an AI component that, based on the properties of the raw materials and the cutting plan, predicts the characteristics of the final product. The optimization relies on Stochastic Local Search, which includes random elements to prevent the search from getting "stuck in a local minimum," as Michael Zwick put it. With local search, once an initial solution is found, further small steps are taken to potentially arrive at an even better one.

In this evolutionary approach, constraints must be adhered to, which, unlike optimization goals, are not negotiable. For example, the resulting production plan must be applicable to the machines. However, as Michael Zwick noted, goals and constraints can indeed be interchanged because the strict constraints are sometimes very difficult to meet and can therefore be approximated as optimization goals.
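The interplay of stochastic local search and constraints relaxed into penalty terms can be sketched as follows. The cost function, the "noise limit," and all numbers are invented stand-ins for illustration, not the actual Siemens objective:

```python
import random

# Minimal sketch of stochastic local search with a hard constraint
# relaxed into a weighted penalty. The "cutting plan" is abstracted
# to a vector of numbers; cost and limit are invented stand-ins.

def cost(plan):
    scrap = sum(plan)                      # stand-in for residual material
    noise_violation = max(0, plan[0] - 5)  # stand-in hard limit
    return scrap + 100 * noise_violation   # constraint as penalty term

def local_search(plan, steps=1000, seed=0):
    rng = random.Random(seed)
    current = list(plan)
    best = list(plan)
    for _ in range(steps):
        candidate = list(current)
        i = rng.randrange(len(candidate))
        candidate[i] = max(0.0, candidate[i] + rng.uniform(-1, 1))
        # Accept improvements always; occasionally accept a worse
        # solution to escape local minima (the stochastic element).
        if cost(candidate) < cost(current) or rng.random() < 0.05:
            current = candidate
        if cost(current) < cost(best):
            best = list(current)
    return best

start = [8.0, 6.0, 7.0]       # violates the stand-in noise limit
solution = local_search(start)
print(cost(start), "->", round(cost(solution), 2))
```

Because the constraint enters the cost as a large penalty rather than a hard filter, the search can pass through slightly infeasible plans on its way to a good feasible one, which matches the interchangeability of goals and constraints described above.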

SCCH developed an optimization framework that relied on formalized property descriptions from Siemens experts. The analysis also incorporated values from measurements of the coils' electric fields.

The final transformers are also measured and compared with the predictions. The SCCH model is thus refined through feedback loops. In any case, the results speak for themselves: an overall cost savings of 7 percent was achieved, and for large transformers, this can even lead to a 15 percent material savings (up to 52 US tons for a transformer core weight of 298 US tons).

Advancements in Image Recognition for Quality Control

Danube Dynamics is pulling out all the stops in AI-based embedded system development. The startup was founded six years ago by Nico Teringl and two other colleagues. Their motto is: "Hardware, software, and AI from a single source—all made in Austria." For about two years, Danube Dynamics has been focusing intensely on quality control combined with image recognition. The reason is obvious: due to competitive pressure from Asia, the demands on the precision of manufactured parts are becoming increasingly higher.

Nico Teringl made it clear at the beginning of his lecture that the use of AI is not a guarantee of success: "There is a rule of thumb—what a human cannot see, AI cannot see either." In other words, if the operator cannot clearly identify a quality feature in images, image recognition will also be blind.

The startup uses two AI variants. The first is simple classification trained on "bad" (defect) images using Convolutional Neural Networks; with training on just a few images, high defect detection rates can be achieved, but the downside is that all defect classes must be known and trained in advance. The newer method is anomaly detection, where the AI learns only from "good" images. In analogy to LLMs, Vision Language Models (VLMs) and other techniques are employed. This latter method is more complex but "robust" and can easily be extended to additional scenarios. Robust in this context means that the phenomenon of hallucination, known from LLMs, can now be effectively managed in image recognition as well.
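The contrast between the two variants can be illustrated with a minimal anomaly-detection sketch that learns only from "good" samples. The random feature vectors stand in for image embeddings from a vision model; all distributions and thresholds are invented for illustration:

```python
import numpy as np

# Minimal sketch of anomaly detection trained only on "good" samples.
# Random feature vectors stand in for image embeddings; a real system
# would use features from a pretrained vision model or VLM.

rng = np.random.default_rng(42)

# "Good" training features cluster around a common appearance.
good = rng.normal(loc=0.0, scale=1.0, size=(200, 64))
mean = good.mean(axis=0)
std = good.std(axis=0)

def anomaly_score(x):
    """Mean absolute deviation from the 'good' distribution."""
    return float(np.mean(np.abs((x - mean) / std)))

# Calibrate the decision threshold on the good training data only.
threshold = max(anomaly_score(g) for g in good)

normal_sample = rng.normal(0.0, 1.0, size=64)
defect_sample = rng.normal(5.0, 1.0, size=64)  # clearly off-distribution

print("normal:", anomaly_score(normal_sample) <= threshold)  # likely passes
print("defect:", anomaly_score(defect_sample) > threshold)   # flagged
```

The key property, mirroring the description above, is that no defect class is ever modeled: anything sufficiently far from the learned "good" distribution is flagged, so new defect types need no retraining.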

Error Detection Rate Over 98 Percent

Subsequently, various use cases were presented, such as quality assurance in the production of peanut snacks, spritz cookies, or French fries. The error classes were interpreted using a VLM in combination with an adapter. Training with 25,000 images took a whole year, but now the system works very well, as Nico Teringl assured. The error detection rate stands at an impressive 98 percent or higher. The inspection setup for this use case includes a camera and an edge computing device for AI analysis.

The grand finale of the event was a tour of the Keba Innospace with Thomas Linde, Chief Innovation Officer at Keba. This "runway of innovations" showcases AI-based solutions that Keba has designed and implemented together with a varied network of partners: universities (including the Art Academy Linz), research institutions, and corporate consortia.

At Keba in the Darkroom of Innovation

Upon entering the dimly lit room, visitors are greeted by a kind of bar counter with glass cubes containing 3D photos. These building blocks can be used to configure the experience space in various ways: for example, the "Unskilled People" block can be combined with the "Robotic" block. In the background, a Knowledge Engine (Knowledge Graph) immediately gets to work, enriching the preconfigured use cases depending on the chosen contexts.

One challenge was to host the language model for the assistant Kiki locally at the computing edge, said Thomas Linde. For the experts among us: the hardware consists of just a single-chip system (SoC) with an ARM CPU, a Raspberry Pi costing around 100 US dollars, and no data center! Unlike conventional LLMs, where the solution space is much larger and the computational requirements significantly higher, Kiki's capabilities are limited to meaningful, context-relevant responses. This greatly increases response speed. Incidentally, Kiki understands multiple languages, including Italian and Croatian.

It was a content-rich event that inspired participants to engage more deeply with applications surrounding artificial intelligence. This helps Austria as a business location to stay ahead.

Keba brings AI to the industry

For several years now, Keba has been working on the topic of artificial intelligence, as the potential for its use in the industry is immense. AI offers the ability to solve complex problems, optimize processes, and develop innovative products.

This is why Keba launched its own AI product at the end of 2023: the AI Extension Module, which enhances industrial control systems with the capability to run AI models locally in real time. Stefan Fischereder, Product Manager for Industrial AI at Keba, describes it as "accelerator hardware," similar to that in a consumer computer. It is streamlined to deliver the essential performance needed to compute neural networks.

An additional software stack is designed to enable customers to access these capabilities from PLC programming languages. Both the AI expansion module and the software stack are independent of the cloud and can be used in combination with any industrial controller from Keba.