Vision sensor: the sharp eyes of the robot

A guest contribution by IFM Electronic | 3 min reading time

Smart vision sensors such as the O2D500 from IFM make it easy to integrate image processing into cobot applications. The sensor can be put into operation through configuration alone, without any programming.

Smart vision sensors facilitate applications where the robot needs to recognize objects. (Image: IFM Electronic GmbH)

Image processing is still considered by many users to be a field that requires extensive specialized knowledge. Yet especially when collaborative robots (cobots) are used, it is often important for the robot to be able to "perceive" its environment in order to interact safely and efficiently with humans. Image processing also matters when robots need to recognize the objects they are supposed to manipulate, for example by moving, assembling, sorting, picking, or packaging them.

However, image processing systems do not have to be highly complex or demand extensive know-how. The basic structure of image processing is always similar and consists of three steps: capturing the image, evaluating it, and outputting the results. The concept behind smart vision sensors such as the O2D500 from IFM is to implement all three steps in a single compact device.
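
This three-step pipeline can be pictured as follows. The sketch below is purely conceptual: all names are hypothetical stand-ins for what the sensor firmware does internally, not the O2D500 API.

```python
# Conceptual sketch of the capture -> evaluate -> output pipeline.
# All names are hypothetical; the O2D500 runs these steps on-device.

from dataclasses import dataclass

@dataclass
class Detection:
    found: bool     # was a matching object detected?
    x_mm: float     # object position, e.g. in robot coordinates
    y_mm: float

def capture() -> list[list[int]]:
    """Step 1: acquire an image (here a dummy 2x2 grayscale frame)."""
    return [[0, 255], [255, 0]]

def evaluate(frame: list[list[int]]) -> Detection:
    """Step 2: run the configured analysis (here a trivial stand-in)."""
    found = any(px > 128 for row in frame for px in row)
    return Detection(found, 12.5, 40.0)

def output(det: Detection) -> None:
    """Step 3: report the result, e.g. to a robot controller or PLC."""
    print(f"found={det.found} x={det.x_mm} y={det.y_mm}")

output(evaluate(capture()))
```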


Depending on the application, the vision sensor, which already has integrated lighting, is available in an infrared version or a version for the visible range. To account for the geometric conditions, particularly the distance between object and sensor, standard, wide-angle, and telephoto lenses are available. The focus, which ensures a sharp image, is adjusted electromechanically. Rotatable connectors make installation and wiring straightforward. For integration into higher-level systems, either EtherNet/IP or Profinet is available.

Calibrating the sensor in no time

When a vision sensor is used with a robot, one key task is to calibrate the image captured by the sensor to the robot's coordinate system. The Vision Assistant software makes this task as simple as possible. During its development, IFM placed particular emphasis on ensuring that it can also be operated by users who are not image processing specialists. Various image processing algorithms are included and can be used without any programming.

This also applies to the sensor-robot calibration, which is based on a so-called marker calibration and runs largely automatically with just a few clicks. The user prints a calibration sheet from the software and places it in the robot's work area, within the field of view of the vision sensor. After focus and exposure are set, which likewise happens automatically with a few clicks, the robot's tool center point (TCP) is placed on each of the four markers in turn. The user then enters the robot coordinates into the corresponding fields of a table in the Vision Assistant.
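
Conceptually, this marker calibration amounts to estimating a mapping from image pixels to robot coordinates. A minimal sketch of the idea, assuming a planar work surface, an affine model, and made-up example values; IFM's actual algorithm is not documented here:

```python
# Sketch: fit a pixel -> robot-coordinate mapping from four marker
# correspondences via least squares. Generic affine model with
# example values, not ifm's actual calibration algorithm.

import numpy as np

# Marker positions in the image (pixels) and the robot TCP positions
# taught at the same markers (millimetres) -- example values.
pixels = np.array([[100, 120], [520, 115], [110, 400], [530, 410]], float)
robot  = np.array([[250.0, -80.0], [250.0, 120.0],
                   [390.0, -75.0], [395.0, 125.0]])

# Solve robot ~= [u, v, 1] @ A in the least-squares sense.
M = np.hstack([pixels, np.ones((len(pixels), 1))])  # shape (4, 3)
A, *_ = np.linalg.lstsq(M, robot, rcond=None)       # A has shape (3, 2)

def pixel_to_robot(u: float, v: float) -> np.ndarray:
    """Map an image point to robot coordinates with the fitted model."""
    return np.array([u, v, 1.0]) @ A

print(pixel_to_robot(300, 260))  # a point near the image centre
```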

In the next step, the data is applied with a click on the Teach button. Finally, up to 16 images of the calibration sheet are captured in slightly different positions to improve the calibration. The software displays a quality indicator; if it reaches at least 85 percent, the calibration is complete. If necessary, a Z-offset can also be specified. To verify the calibration once more, the user can use the software to measure the length of a line that is also printed on the sheet. If this matches the actual length, the vision sensor is ready for use.
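
This final plausibility check can be pictured as follows. The endpoint coordinates and the nominal length below are example values, and pixel_to_robot() refers to the calibration sketch above; the Vision Assistant performs the check interactively in its GUI.

```python
# Sketch of the length check on the printed reference line.
# All values are examples.

import numpy as np

# Endpoints of the printed line, already mapped into robot
# coordinates (e.g. with pixel_to_robot() from the sketch above).
p1 = np.array([260.0, -50.0])
p2 = np.array([260.0, 50.0])

nominal_mm = 100.0                            # length printed on the sheet
measured_mm = float(np.linalg.norm(p2 - p1))  # Euclidean distance
error_mm = abs(measured_mm - nominal_mm)

print("calibration OK" if error_mm < 1.0 else "re-calibrate")
```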

A particular advantage of this type of robot-sensor calibration is that the sensor adapts to the robot's coordinate system. This eliminates the coordinate transformation in the robot program that other systems often require and that is always a potential source of error. To simplify the application, IFM has created example programs for various robot manufacturers that users can work with directly; the required syntax can also be generated for other robot types with just a few clicks. The communication modules from the Vision Assistant to the robot are likewise ready-made, and communication between the robot and the O2D500 takes place via TCP.
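
On the robot or PC side, such a TCP link boils down to opening a socket, sending a command, and reading the reply. A generic sketch; the address, port, and command strings below are placeholders, not the O2D500's actual process interface, which is defined in the device manual:

```python
# Generic sketch of querying a vision sensor over TCP. Host, port,
# and the command/reply format are placeholders; consult the O2D500
# manual for the actual process interface.

import socket

SENSOR_HOST = "192.168.0.69"   # example IP address
SENSOR_PORT = 50010            # placeholder port

def query_sensor(command: bytes) -> bytes:
    """Open a connection, send one command, and return the reply."""
    with socket.create_connection((SENSOR_HOST, SENSOR_PORT), timeout=2.0) as s:
        s.sendall(command)
        return s.recv(4096)

# E.g. trigger an evaluation and print whatever the sensor answers.
reply = query_sensor(b"trigger\r\n")
print(reply.decode(errors="replace"))
```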

Detecting components on a conveyor belt

The other functions that come pre-programmed in the Vision Assistant can easily be combined with the robot functionality. These include contour and object analyses, which can be further extended with logic functions, so there are almost no limits to what the combination of robot and vision sensor can do. A typical application is recognizing components on a conveyor belt, which the robot then picks up and sorts appropriately. Thanks to the intuitive software, no programming effort is needed for this.
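
To illustrate the kind of contour analysis involved, here is a generic sketch using OpenCV on a PC. The sensor performs the equivalent analysis on-device and without programming; the file name and area threshold are example values.

```python
# Illustration of a simple contour analysis with OpenCV on a PC.
# "belt.png" and the area threshold are example values; the sensor
# runs the equivalent analysis on-device.

import cv2

img = cv2.imread("belt.png", cv2.IMREAD_GRAYSCALE)  # example image
assert img is not None, "example image not found"

# Binarize with Otsu's method, then find the outer contours.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    if cv2.contourArea(c) < 500:       # logic function: ignore small blobs
        continue
    x, y, w, h = cv2.boundingRect(c)   # position for the robot to pick
    print(f"part at pixel ({x + w // 2}, {y + h // 2})")
```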
