A technology combining industrial image processing with AI automatically inspects weld joints on car bodies and identifies anomalies. The application improves the consistency, speed, reliability, and accuracy of the entire inspection process—completely autonomously.
DGH developed an automated system for inspecting weld joints using cameras and lighting mounted on robots.
(Image: DGH)
In automotive production, high quality standards are essential, particularly in welding processes for the body-in-white (BIW). The importance of the structural stability of the body is self-evident. More intriguing is the question of how to ensure the high quality of weld joints—automatically and comprehensively. The challenge lies in the fact that many different defects can occur and compromise the quality of the body: cracks, incomplete weld seams, and irregular welding patterns, for example, must be precisely identified.

This is the challenge tackled by the DGH Group. The Spanish company, headquartered in Valladolid and recently integrated into the Groupe ADF, supports various industry sectors with innovative products. The result is an inspection system that automatically captures images of weld joints. These images are then immediately analyzed by the AI-based algorithms of MVTec Halcon and DGH's image processing software. The software transmits the results—OK or NOK (not okay)—to the PLC, which then determines the next steps for the body accordingly.

Halcon is the standard software for industrial image processing (machine vision) developed by MVTec. The Munich-based family business, established in 1996, develops hardware-independent image processing software for industrial applications and is considered a technology leader in this field—partly due to its range of powerful deep-learning algorithms.
Deep Learning in Production: Optical Inspection of Weld Joints
Deep learning is a subset of artificial intelligence. In industrial image processing, deep learning enables the implementation of an increasing number of applications, including ones that were previously not possible. Additionally, the performance of existing applications can be significantly enhanced. The DGH Group also leveraged these developments. On behalf of a major French automotive manufacturer, the expert team at DGH Group developed an automated system for inspecting welds created through metal inert gas welding (MIG welding). "Until now, the inspection was always carried out by experienced employees. It's not always easy to determine whether the quality of the welds from the various processes is acceptable. In the development of the new system, we incorporated employee expertise. Specifically, we trained the underlying deep learning networks with their knowledge. The required robust detection rates are only achievable through the use of deep learning," explains Guillermo Martín, Innovation & Technology Director at DGH. The primary goal of the implementation was to achieve a very high-quality standard for all weld seams. Additionally, the new autonomous quality inspection aims to bring the fundamental benefits of automation into play—namely, higher speed, reliability, accuracy, and clear consistency in decision-making, in contrast to the subjectivity of human decisions.
Clean Process Integration of Industrial Image Processing
The implementation of such a system came with several challenges. "It was clear to us that we had to design the system based on machine vision. Sensors or conventional 2D vision systems fail due to the complexity of the welds. The first challenge was therefore to develop a viable solution and reliably detect various types of defects. Additionally, and this was the second challenge, it was necessary to transfer the knowledge of experienced employees into this system, a deep-learning application. The third challenge was to carry out the inspection processes within a short timeframe, due to the tightly scheduled cycle times," explains Martín.

The newly implemented system at the French automotive manufacturer operates as follows: when a car body arrives at the inspection station, the PLC triggers the various inspection procedures. Upon receiving a trigger signal, the installed 2D cameras capture images of the welds, individually or sequentially, and transmit them via the GigE Vision protocol to the machine vision software for processing. The system examines whether any anomalies can be detected around the welds. The setup reliably inspects the different types of weld seams, joints, and points produced by the various welding methods.
The data is then sent to the PLC, and the corresponding results are visualized on a screen. The inspection application developed by the DGH Group was implemented on an industrial PC, and the system continuously monitors communication with the production facility's PLC as well as with multiple 2D cameras. The core of the setup is the machine vision software Halcon.
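The trigger-driven flow described above can be sketched as a small Python loop. All names here (`on_plc_trigger`, `inspect_weld`, the score values) are illustrative stand-ins chosen for this sketch, not the DGH software or the Halcon API; in the real system the anomaly scores would come from the deep-learning inference step and the results would travel back over the PLC link.

```python
from dataclasses import dataclass

@dataclass
class InspectionResult:
    weld_id: str
    ok: bool  # True -> report "OK" to the PLC, False -> "NOK"

def inspect_weld(weld_id, anomaly_score, threshold):
    """Classify one captured weld image: scores below the threshold pass."""
    return InspectionResult(weld_id, ok=anomaly_score < threshold)

def on_plc_trigger(weld_ids, scores, threshold=0.35):
    """One inspection cycle: on the PLC trigger signal, inspect each weld
    image (captured individually or sequentially) and collect the OK/NOK
    results that are sent back to the PLC and visualized on screen."""
    return [inspect_weld(w, s, threshold) for w, s in zip(weld_ids, scores)]

# Simulated cycle with precomputed (hypothetical) anomaly scores for two welds.
results = on_plc_trigger(["seam_01", "seam_02"], [0.12, 0.81])
print([(r.weld_id, "OK" if r.ok else "NOK") for r in results])
# [('seam_01', 'OK'), ('seam_02', 'NOK')]
```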
Deep Learning Methods
To reliably detect defects, the image processing software utilizes two deep-learning methods. First, "Instance Segmentation" is applied to locate the relevant area on the captured images, i.e., the weld seam. This deep-learning technology is capable of assigning objects pixel-precisely to various trained classes. In the next step, "Anomaly Detection" is used: deep-learning-based anomaly detection enables automated surface inspection and accurately identifies deviations, i.e., defects of any kind.

"Anomaly Detection offered us two key advantages: On the one hand, the detection rates are very high and robust. On the other hand, training the underlying neural networks was simple. Primarily, 'good images,' meaning images of welds without defects, were needed to train the deep-learning networks. The network for anomaly detection is trained exclusively with good images. While the presence of 'bad images' is not a requirement for anomaly detection, having a few bad images can help determine the optimal threshold for distinguishing between good and defective welds. This threshold is applied to the anomaly score, which is the output of the anomaly detection network. However, determining the threshold is not part of the training process. We only required a small number of good images, which is very practical as they can be quickly and easily obtained. Defective images are much harder to organize, not to mention the impossibility of acquiring images of all potential defects. This is where deep learning has a clear advantage," explains Guillermo Martín.

In images of welds that differ from the trained images, the anomalies or defects are reliably detected. The permissible extent of the difference (delta) between OK and NOK is determined by the threshold value: a parameter within the deep-learning method that defines how much the image being inspected may deviate from the trained "good images."
This parameter can be freely adjusted by the user, providing transparency in the "black box" of decision-making by artificial intelligence.
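The two-stage pipeline, locating the seam via instance segmentation and then thresholding an anomaly score over the segmented region, can be illustrated with a deliberately simplistic toy sketch. The `segment_weld` and `anomaly_score` functions below are stand-ins (an intensity rule and a mean absolute deviation), not the trained networks or the Halcon API; only the control flow mirrors the description above, and all values are invented for illustration.

```python
def segment_weld(image):
    """Stand-in for instance segmentation: a boolean mask marking
    weld-seam pixels (here: a trivial intensity rule)."""
    return [[px > 0.5 for px in row] for row in image]

def anomaly_score(image, mask, reference):
    """Stand-in for anomaly detection: mean absolute deviation of the
    masked pixels from the 'good' reference appearance."""
    devs = [abs(px - reference)
            for row, mrow in zip(image, mask)
            for px, m in zip(row, mrow) if m]
    return sum(devs) / len(devs)

def classify(image, reference=0.8, threshold=0.05):
    """Segment first, then apply the user-adjustable threshold to the
    anomaly score of the segmented region."""
    mask = segment_weld(image)
    return "OK" if anomaly_score(image, mask, reference) < threshold else "NOK"

good = [[0.8] * 4 for _ in range(4)]      # uniform "good" weld patch
bad = [row[:] for row in good]
bad[1][1] = bad[1][2] = bad[2][1] = bad[2][2] = 0.55  # local deviation

print(classify(good), classify(bad))  # OK NOK
```

Raising `threshold` makes the check more tolerant of deviations from the trained good images; lowering it makes the check stricter, which is exactly the transparency knob the article describes.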
Date: 08.12.2025
Labeling the Images and Training the Neural Networks
The deep learning technology requires the neural networks to be trained with images before operation, and these images must first be labeled. For this preparatory work, DGH used MVTec's free Deep Learning Tool, with which image data can be easily labeled and the networks conveniently trained afterward.

To start, DGH collected images of welds, leveraging the expertise of employees. The employees reviewed each image to ensure that primarily "good images" were used for training, as a mistakenly used "bad image" would distort the training results. The "good images" were then uploaded to the Deep Learning Tool and specifically labeled for the Instance Segmentation technology. For this purpose, the "Smart Label Tool" is available: the operator simply clicks on the weld area with the computer mouse, and the tool automatically outlines the weld. This ensures that the network is subsequently trained only on the relevant areas of the image. The employees of the automotive manufacturer were also involved in this step, as they knew which areas of the image provided critical information about the weld and how large the corresponding outline around the weld should be.

After the images were labeled, a split was performed. The dataset was typically divided into 50 percent for training, 25 percent for validation, and another 25 percent for testing. Training, validation, and testing are executed in the Deep Learning Tool easily with the click of a button. The trained model is then saved and loaded into the machine vision software Halcon through the tool's seamless integration. The software is now ready for operation.
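The 50/25/25 split described above is simple to reproduce outside the tool as well. The sketch below assumes a plain list of image filenames (the names are hypothetical) rather than the Deep Learning Tool's own project format; a seeded shuffle keeps the split reproducible.

```python
import random

def split_dataset(items, seed=0):
    """Shuffle the labeled images and split them 50/25/25 into
    training, validation, and test sets, as described for this project."""
    items = list(items)
    random.Random(seed).shuffle(items)  # seeded for reproducibility
    n_train = len(items) // 2
    n_val = (len(items) - n_train) // 2
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# Hypothetical filenames standing in for the collected weld images.
train, val, test = split_dataset([f"weld_{i:03d}.png" for i in range(100)])
print(len(train), len(val), len(test))  # 50 25 25
```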
Machine Vision Software as the Core Component of the Inspection System
"We at DGH have been working with MVTec for over ten years and are therefore well aware of their powerful tools and algorithms. That's why we decided to trust MVTec Halcon for this project as well," reveals Guillermo Martín. The challenges regarding training and speed due to the tightly scheduled cycle times have already been mentioned. In addition, there was another requirement for the machine vision software: the environment is challenging due to reflective metal surfaces and varying lighting conditions.
The DGH Group was able to overcome all challenges and deliver a system with the desired high quality. "At the beginning of 2024, the first system was commissioned at the automotive manufacturer's plant. After it ran successfully, we received a new request in April 2024 from the same manufacturer to implement a second system for weld inspection," says Guillermo Martín with satisfaction. The goal of reducing dependence on skilled workers for quality inspection processes amid a labor shortage and consequently increasing the level of automation was achieved. Automation based on machine vision and artificial intelligence has demonstrably minimized errors and ensured consistent and reliable detection of welding defects. For this reason, Guillermo Martín believes there is still growth potential—despite the large number of image processing systems already in use across different industries and manufacturing processes—particularly for deep learning solutions in highly demanding and complex applications.