Smart vision sensors such as the O2D500 from IFM are well suited for integrating image processing into cobot applications; they can be put into operation easily through configuration alone.
Smart vision sensors facilitate applications where the robot needs to recognize objects.
(Image: IFM Electronic GmbH)
Image processing is still considered by many users to be a field that requires extensive specialized knowledge. However, especially when using collaborative robots (cobots), it is often important for the robot to be able to "perceive" its environment in order to enable safe and efficient interaction with humans. Furthermore, image processing is important when robots need to recognize objects they are supposed to manipulate—such as moving, assembling, sorting, picking, or packaging.
However, image processing systems do not necessarily have to be highly complex or demand extensive know-how. The basic structure of image processing is always similar and consists of three steps: capturing the image, evaluating it, and outputting the results. The concept of smart vision sensors, such as the O2D500 from IFM, is to implement all three steps in a single compact device.
Depending on the application, the vision sensor, which has integrated lighting, is available in an infrared version or a version for the visible range. To match the geometric conditions, particularly the distance between object and sensor, standard, wide-angle, or telephoto lenses are available. The focus, which ensures a sharp image, is adjusted electromechanically. Rotatable connectors make installation and wiring straightforward, and for integration into higher-level systems, either EtherNet/IP or Profinet is available.
Calibrate sensor in no time
When using a vision sensor with a robot, one of the important tasks is to calibrate the image captured by the sensor with the robot's coordinate system. To make this task as simple as possible, the Vision Assistant is used. During the development of this software, IFM placed particular emphasis on ensuring that it can also be operated by users who are not specialized experts in image processing. Various image processing algorithms are included and can be used without the need for programming.
This also applies to the sensor-robot calibration, which is based on a so-called marker calibration and works largely automatically with just a few clicks. To do this, the user can print out a calibration sheet from the software, which is then placed in the robot's work area and in the field of view of the vision sensor. After the focus and exposure are set—which also happens automatically with just a few clicks—the tool center point of the robot must be placed sequentially on the four markers. The user then enters the coordinates into the corresponding fields in a table in the Vision Assistant.
In the next step, the data is applied with a click on the teach button. Finally, up to 16 shots of the calibration sheet are taken at slightly different positions to improve the calibration. The software displays a quality indicator; if it reaches at least 85 percent, the calibration is complete. If necessary, a Z-offset can also be specified. To verify the calibration once more, the user can measure the length of a line, which is also printed on the sheet, in the software. If this matches the actual length, the vision sensor is ready for use.
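The mathematics behind this kind of marker calibration can be illustrated with a short sketch: from the four marker correspondences (pixel coordinates seen by the sensor, robot coordinates entered by the user), a least-squares mapping from image to robot space is fitted, and the residual error yields a quality figure. Note that IFM does not publish the Vision Assistant's internal algorithm or its quality metric; the affine fit and the percentage formula below are illustrative assumptions only.

```python
import numpy as np

def fit_pixel_to_robot(pixel_pts, robot_pts):
    """Least-squares affine mapping from sensor pixel coords to robot XY.

    pixel_pts, robot_pts: lists of (x, y) pairs, e.g. the four markers.
    Returns a (3, 2) coefficient matrix.
    """
    px = np.asarray(pixel_pts, dtype=float)
    rb = np.asarray(robot_pts, dtype=float)
    A = np.hstack([px, np.ones((len(px), 1))])        # homogeneous coords
    coeffs, *_ = np.linalg.lstsq(A, rb, rcond=None)   # solve A @ coeffs ~ rb
    return coeffs

def pixel_to_robot(coeffs, pixel_pt):
    """Map one pixel coordinate into the robot's coordinate system."""
    x, y = pixel_pt
    return np.array([x, y, 1.0]) @ coeffs

def calibration_quality(coeffs, pixel_pts, robot_pts):
    """Illustrative 0-100 score from the RMS residual (not IFM's metric)."""
    A = np.hstack([np.asarray(pixel_pts, float),
                   np.ones((len(pixel_pts), 1))])
    rms = np.sqrt(np.mean((A @ coeffs - np.asarray(robot_pts, float)) ** 2))
    return max(0.0, 100.0 * (1.0 - rms))
```

Taking additional shots of the sheet at slightly different positions, as the software does, simply adds more correspondences to the least-squares fit, which averages out measurement noise.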
Particularly helpful with this type of robot-sensor calibration is that the sensor adapts to the robot's coordinate system. This eliminates the coordinate transformation in the robot program that other systems often require and that is always a potential source of error. To simplify integration, IFM provides example programs for various robot manufacturers that users can work with directly; the syntax for other robot types can also be generated with just a few clicks. Ready-made communication modules connect the Vision Assistant to the robot, and communication between the robot and the O2D500 takes place via TCP.
Detect components on a conveyor belt
The other functions that are already pre-programmed in the Vision Assistant can be easily combined with the robot functionality. These include contour and object analyses, which can be further expanded with logic functions. This means there are almost no limits to the use of the combination of robot and vision sensor. A typical application is, for example, the recognition of components on a conveyor belt, which the robot then picks up and sorts appropriately. And thanks to the intuitive software, no programming effort is necessary for this.
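For the conveyor-belt scenario, the sensor's reply typically has to be turned into a pick pose for the robot. The field layout below (semicolon-separated X, Y, and rotation, already in robot coordinates thanks to the calibration) is a hypothetical example format, not the O2D500's actual output string, which is defined in the configured output logic.

```python
def parse_detection(message):
    """Parse a hypothetical ';'-separated detection string into a pick pose.

    Assumed payload: "<x>;<y>;<angle>" in robot coordinates (mm, degrees).
    """
    x, y, angle = (float(v) for v in message.strip().split(";")[:3])
    return {"x": x, "y": y, "angle": angle}
```

A pick-and-sort cycle then reduces to: trigger an evaluation, parse the pose, and hand it to the robot's move command.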
Date: 08.12.2025