The 2D Grasping Kit from Schunk allows companies to automate their gripping and sorting tasks with ease. Thanks to offline AI support and a simple user interface, no specialized personnel are needed for setup, teaching, and operation. The kit was awarded the Hermes Award at Hannover Messe 2024.
The AI-supported image processing software of the 2D Grasping Kit from Schunk communicates the position, rotation angle, and gripping point of each component to the robot.
(Image: Schunk)
All manufacturing companies face the same challenge: how can they maintain or even expand economical, efficient production with the same number of employees? Staff shortages have become the norm in every sector. Companies therefore want to automate more and more production steps, especially physically demanding or monotonous tasks for which it is increasingly difficult to find workers.
Challenge of handling components with the robot
Advances in robotics, AI, and gripping systems are constantly opening up new, economical opportunities for automation. In the past, it could still be assumed that companies had enough skilled personnel to set up and operate automation systems. But as these systems gained functions, they also became more difficult to operate. This poses a problem for small and medium-sized enterprises today: the very staffing shortages that drive them toward automation also prevent them from implementing it and thus future-proofing their production. The handling of components with a robot presents a particular challenge. If a camera system is also required for precise positioning of the components, many companies reach their limits and have to rely on external automation service providers, making them dependent on those providers.
To master this challenge, Schunk has developed the 2D Grasping Kit, an application kit that enables fast, cost-effective, and uncomplicated automation with the help of AI developed by Schunk in Germany. The kit consists of:
a camera with lens
an industrial PC
the Schunk AI software
the required cables
All components are matched to each other and, thanks to an open TCP/IP interface, can be combined with any robot or higher-level control system (for example, a Siemens PLC). The kit handles and sorts various components arranged randomly on a plane. This finally offers a solution for a type of task that was previously complex to automate, yet monotonous and uninteresting for human workers.
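The article does not specify the wire format of the open TCP/IP interface, so the message layout below is purely illustrative (field names, units, and the JSON encoding are assumptions, not Schunk's documented protocol). It sketches how a robot-side client might decode one detection result into grip parameters:

```python
import json
from dataclasses import dataclass


@dataclass
class GripPose:
    """Grip parameters for one detected component (hypothetical fields)."""
    part: str          # trained object name
    x_mm: float        # position on the plane
    y_mm: float
    angle_deg: float   # rotation angle for the gripper
    opening_mm: float  # how far the gripper should open


def parse_grip_message(raw: str) -> GripPose:
    """Parse one detection result as it might arrive over the TCP/IP link."""
    msg = json.loads(raw)
    return GripPose(
        part=msg["part"],
        x_mm=msg["x"],
        y_mm=msg["y"],
        angle_deg=msg["angle"],
        opening_mm=msg["opening"],
    )


# Example message as a robot controller might receive it (format is illustrative):
sample = '{"part": "nut_M8", "x": 152.4, "y": 88.1, "angle": 37.5, "opening": 14.0}'
pose = parse_grip_message(sample)
print(pose.part, pose.angle_deg)
```

In a real cell, the robot controller or PLC would receive such messages over a persistent TCP socket and transform the coordinates into its own base frame before moving to the part.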
For example, when turned parts come out of a machine at a contract manufacturer, they usually fall into a box. An employee then sorts them by hand and places them correctly in trays so that subsequent processing steps are easier to automate. In the process, components can easily be damaged or mixed up. If a robot takes over, the employee is relieved, and at the same time the effort and failure-proneness of subsequent automated processes are reduced.
Complex task made easy
Schunk uses the 2D Grasping Kit in its own production in Germany. Customers have the opportunity to validate their own applications in the CoLab Robot Application Center and find out with little effort how the system can improve their own production.
Once the system is mechanically set up, an average user needs less than half a day to teach the system new components. The web interface of the software guides them step by step to the result. The various steps are:
Step 1: Take pictures of components
Step 2: Define objects and gripping points
Step 3: Train the AI and start
This is what the individual steps look like in detail:
Step 1: Take pictures of components
The camera looks down on a feed belt, a tray, or a staging table. The AI software recognizes and distinguishes the components based on previously trained images and outputs the optimal gripping position. To this end, the camera first captures the background on which the components will later lie, then takes several pictures of the parts to be gripped. If, for example, the robot needs to grip parts in transparent packaging for a picking task, such as screws and nuts in a plastic bag, the operator simply takes several images of the components in various positions.
A frequently underestimated challenge for camera-assisted automation systems is lighting. Depending on the installation situation, it can be difficult to choose suitable lighting, especially since there are many parameters to consider, such as size, distance, wavelength, or beam angle. The 2D Grasping Kit does not require a special light source and, thanks to its AI-supported software, is significantly more resistant to extraneous light than traditional vision systems. The camera can therefore handle changing light conditions, such as varying daylight, as well as changing backgrounds. The color and reflection behavior of the surface also have minimal impact; for example, the system reliably recognizes metallic components even on bright backgrounds.
Date: 08.12.2025
Step 2: Define objects and gripping points
In the next step, the operator marks and names the components. The Schunk AI software automatically extracts the contour of each object from the background and calculates variances for viewing angle, lighting conditions, and other parameters. After only 10 to 20 images, it has a sufficient dataset of the objects to be detected.
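The article states that the software derives variances for viewing angle and lighting from only 10 to 20 photos. One common way to get a usable dataset from so few images is synthetic augmentation; the sketch below is a simplification under that assumption, not Schunk's actual pipeline, and only counts how many training samples a rotation-and-brightness augmentation plan would yield:

```python
import itertools


def augmentation_plan(n_photos: int,
                      angles=(0, 90, 180, 270),
                      brightness=(0.8, 1.0, 1.2)) -> int:
    """Count the synthetic variants derived from the captured photos.

    Each photo is combined with every (rotation angle, brightness factor)
    pair; the parameter grid here is purely illustrative.
    """
    variants = list(itertools.product(angles, brightness))
    return n_photos * len(variants)


# 15 photos x 4 rotations x 3 brightness levels = 180 training samples
print(augmentation_plan(15))  # 180
```

Even this minimal grid turns a handful of teach-in photos into a dataset of a few hundred samples, which is consistent with training finishing in one to two hours on a local industrial PC.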
Step 3: Train the AI and start
Once the first two steps are done, the AI trains itself - completely offline. The customer has full control over the data at all times, as it remains completely within their company network. The training takes just one to two hours. Then the 2D Grasping Kit is ready to go.
The AI-supported camera now recognizes the components in the bags based on characteristic features such as shape, size, and color. Variations caused by reflections or deformation of the bags are compensated for by the AI. The image processing software tells the robot which components it recognizes, where they are positioned, how far the gripping system should open, and at what rotation angle it can best grip them. The robot then moves the gripper to the component, picks it up, and places it in a predefined position in the correct orientation. During the gripping and placing process, the camera already captures the next object and has its type and gripping point calculated; this takes about two seconds, so the robot can grab the second object immediately after placing the first.
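The overlapped cycle described above (analyzing the next object while the current one is being handled) can be sketched as a simple two-stage pipeline. The function and log format are illustrative, not part of the Schunk software:

```python
from collections import deque


def pipelined_pick(parts):
    """Sketch of the overlapped cycle: while the robot picks and places
    part i, the vision system already analyzes part i+1."""
    queue = deque(parts)
    analyzed = None  # result ready for the robot from the previous cycle
    log = []
    while queue or analyzed is not None:
        next_part = queue.popleft() if queue else None
        # These two actions run concurrently in the real cell:
        if analyzed is not None:
            log.append(f"pick {analyzed}")   # robot handles the ready part
        if next_part is not None:
            log.append(f"analyze {next_part}")  # camera works on the next one
        analyzed = next_part
    return log


print(pipelined_pick(["bolt_1", "bolt_2"]))
```

Because analysis (about two seconds per object) is hidden behind the robot's own pick-and-place motion, the robot never has to wait for the vision system between cycles.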
Various grippers can be used
A unique feature is that, along with object detection, the system also automatically calculates the gripping points for the gripper in use and passes the parameters (such as rotation angle and opening width) to the robot controller. If desired, users can of course also store multiple gripping points manually.
In the example described, the 2D Grasping Kit works with the EGK universal gripper. In the future, it will also support pneumatic and mechatronic parallel grippers as well as magnetic, vacuum, and adhesion grippers.