Vision systems allow robots to see what they are doing




  • March 21, 2022
  • KUKA Systems Corp. North America
  • News




For nearly a century, manufacturers have developed and applied robotic equipment to increase speed, accuracy, and repeatability in a wide variety of tasks. Over the past 40 years, vision technology has enabled robots to visualize a known workspace, detect objects, and perform desired actions. Together, the robot, programming software, and imaging hardware increase productivity and consistency. Vision systems expand a robot’s range of applications with increased flexibility to handle greater variation in parts and processes.


Basic functions of the vision system


Robotic vision systems offer four basic functions: positioning, inspection, measurement, and code reading. The most common is positioning, i.e., determining the location of an object and reporting it to the robot controller. The controller then typically instructs the robot to pick up the object and place it elsewhere. Inspection functions image an object to identify missing or defective features. Users can also program a system to measure an object’s dimensions, area, and volume.
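
As a rough illustration of the positioning function, the sketch below locates a single high-contrast part in a 2D camera image with OpenCV and converts its pixel coordinates to robot-frame millimetres. The scale factor, offset, and function names are hypothetical placeholders for a real camera-to-robot calibration, not any particular vendor's interface.

```python
# Minimal 2D positioning sketch (hypothetical calibration values): locate a part
# in a camera image and convert its pixel position to robot-frame coordinates.
import cv2
import numpy as np

def locate_part(image_bgr, mm_per_pixel=0.5, origin_offset_mm=(120.0, 80.0)):
    """Return the part's (x_mm, y_mm, angle_deg) in the robot frame, or None."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Strong contrast between part and background is assumed (see the 2D section below).
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    part = max(contours, key=cv2.contourArea)              # largest blob = the part
    (cx_px, cy_px), _, angle_deg = cv2.minAreaRect(part)   # center and rotation in pixels
    # Pixel-to-millimetre conversion from a prior camera-to-robot calibration.
    x_mm = origin_offset_mm[0] + cx_px * mm_per_pixel
    y_mm = origin_offset_mm[1] + cy_px * mm_per_pixel
    return x_mm, y_mm, angle_deg
```

The resulting pose would then be reported to the robot controller, for example as a pick position over the vendor's communication interface.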


Finally, a vision system can decode and read one-dimensional (1D) and two-dimensional (2D) codes and provide optical character recognition (OCR) and optical character verification (OCV). OCR recognizes alphanumeric characters by comparing them with a library of character patterns, while OCV verifies that an expected character string has been printed completely and legibly.
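
As a minimal sketch of 2D code reading with an OCV-style check, the snippet below uses OpenCV's built-in QR-code detector; the expected string and function name are hypothetical.

```python
# Decode a 2D (QR) code with OpenCV and verify the decoded string matches
# the content that was expected to be printed on the part.
import cv2

def read_and_verify_code(image_bgr, expected_text):
    detector = cv2.QRCodeDetector()
    decoded_text, points, _ = detector.detectAndDecode(image_bgr)
    if points is None or not decoded_text:
        return False, None                      # no code found or unreadable
    # OCV-style verification: the printed code must match what was expected.
    return decoded_text == expected_text, decoded_text
```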


Robotic vision levels


Introduced in the 1980s and early 1990s, the first commercial robotic vision systems performed 2D part recognition, in which a camera captures images of objects in a single plane, along the X and Y axes. These systems provide two-dimensional feedback to guide basic robotic functions and are most often used in simple applications that involve batches of very similar parts in predetermined locations.


The recognition and localization capabilities of a 2D vision system eliminate the need for an operator to manipulate a part and place it in a fixture. With 2D vision, robots can sort and organize objects. Due to their simplicity and extensive development over time, 2D systems are easy and cost-effective to integrate and operate.


2D vision technology works best with strong, contrasting lighting and parts that lie relatively flat with limited overlap, such as in simple automated pick-and-place operations. Conversely, variable part shapes, poor or uneven lighting, and advanced manipulation operations can limit the effectiveness of 2D robotic vision systems.


For more complex tasks, 3D vision uses more sophisticated imaging technologies, such as multiple cameras, to detect part position along the X and Y axes as well as Z (height) measurements and angles about all three axes, allowing the system to determine an object’s shape and volume. A 3D vision system can accommodate overlapping and stacked elements in so-called “semi-structured” applications that involve parts at different heights, layers, and angles. Applications mainly focus on positioning, measurement, and inspection.


The flexibility of 3D vision systems allows them to adapt to changes in a process, operate accurately in poor lighting conditions, and fully exploit the capabilities of six-axis robots. These systems recognize parts, determine distances and calculate trajectories in 3D space, allowing a robot to optimize the trajectory along which it moves an object.
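
For illustration only, one common 3D building block is estimating where a part surface sits and how it is tilted from a patch of 3D points. The plane-fit sketch below assumes the points come from a stereo or time-of-flight camera; it is not any specific product's algorithm.

```python
# Estimate a part surface's position and orientation from a patch of 3D points.
import numpy as np

def surface_pose(points_xyz):
    """points_xyz: (N, 3) array of 3D points on the part surface.
    Returns (centroid, unit_normal): where the surface is and how it is tilted."""
    centroid = points_xyz.mean(axis=0)
    # Principal-component plane fit: the singular vector with the smallest
    # singular value is the surface normal.
    _, _, vt = np.linalg.svd(points_xyz - centroid)
    normal = vt[-1]
    if normal[2] < 0:            # make the normal point up, toward the camera
        normal = -normal
    return centroid, normal
```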


These 3D robotic vision systems expand the range of robotic applications beyond simple part location. With built-in 3D cameras, advanced robots can inspect components, build complex assemblies, and dynamically adapt to different parts and locations. The trade-offs are added complexity and expense. To determine whether an application requires 3D vision or whether a 2D configuration will meet the needs of a production line, manufacturers can consult the robot manufacturer or a system integrator to evaluate the application and plan the best approach.


Robotic camera systems


Depending on what a vision system needs to accomplish, it uses one of three main camera locations. Fixed configurations mount the camera where it can see the workspace, at the expense of limited operating flexibility. Alternatively, a robot-mounted camera provides increased flexibility and can cover a large area, but cycle times can increase to allow software to process camera inputs before directing the robot’s next move.


In a third approach, a fixed camera observes the robot as it transports a part, perhaps manipulating and placing a sheet metal part. The robot grabs the part randomly with a suction cup gripper. The camera then determines the position of the sheet on the gripper and the software directs the robot to place the part precisely in the desired location.
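
A minimal sketch of that correction step, assuming planar (x, y, angle) poses and hypothetical function names: the measured offset of the sheet on the gripper is folded into the nominal place pose so the part still lands exactly on target.

```python
# Correct the place pose using the camera's measurement of where the part
# actually sits on the gripper (planar poses only, for illustration).
import numpy as np

def pose_to_matrix(x, y, theta_rad):
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def corrected_place_pose(nominal_place, measured_part_in_gripper):
    """Both arguments are (x, y, theta) tuples. Returns the tool pose that puts
    the part, wherever it sits on the gripper, at the nominal place pose."""
    T_target = pose_to_matrix(*nominal_place)             # where the part must end up
    T_offset = pose_to_matrix(*measured_part_in_gripper)  # part pose in the tool frame
    T_tool = T_target @ np.linalg.inv(T_offset)           # required tool pose
    x, y = T_tool[0, 2], T_tool[1, 2]
    theta = np.arctan2(T_tool[1, 0], T_tool[0, 0])
    return x, y, theta
```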


Camera environment and technology


2D applications depend on the coverage area of the camera lens and on lighting that generates sufficient contrast for clear part identification. Different lighting techniques, such as ring lights or backlighting, can produce optimal results depending on the part and its surroundings. 3D imaging can use a variety of camera technologies, including electronic scanning or snapshot acquisition.


Object detection and imaging technologies include structured-light sensors, which analyze the reflection of a light pattern projected onto a part in order to read its dimensions. Time-of-flight cameras shine infrared light at an object and measure how long the light takes to reflect back to the camera, from which they determine depth information.
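
The underlying time-of-flight relation is simple: depth is half the round-trip travel time multiplied by the speed of light, as in this small illustrative snippet.

```python
# Time-of-flight depth: light travels to the object and back, so the
# distance is half the round-trip time multiplied by the speed of light.
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_time_s):
    return C * round_trip_time_s / 2.0

# Example: a round trip of roughly 6.67 ns corresponds to about 1 m of depth.
print(tof_distance_m(6.67e-9))   # ~1.0
```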


Software


Robotic vision applications use software that processes a camera image and then directs the robot’s action based on the visual information. Rules-based software stores, sorts, and uses data according to rules developed by humans; the system applies those rules to interpret and act on what the camera sees. Other software packages use deep learning, a branch of artificial intelligence (AI) and machine learning, to accomplish tasks such as object detection and recognition.
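
To make the distinction concrete, here is a toy rules-based check in which the thresholds are written by a human rather than learned; a deep-learning system would instead learn the accept/reject decision from labeled images. The measurements, tolerances, and function name are hypothetical.

```python
# Rules-based inspection: accept or reject a part from simple measurements
# using human-defined tolerances.
def inspect_part(area_mm2, hole_count, nominal_area_mm2=2500.0, expected_holes=4):
    rules = [
        abs(area_mm2 - nominal_area_mm2) / nominal_area_mm2 < 0.05,  # area within 5%
        hole_count == expected_holes,                                # nothing missing
    ]
    return all(rules)
```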


Complete and flexible 2D and 3D robotic vision


Integrated systems embedded in robots can offer powerful tools for 2D object recognition, barcode reading, and performing OCR and OCV. Vision tools locate, inspect, and read codes on fixed or moving parts. Systems like KUKA.VisionTech are designed to be easy to integrate, access, and use.


With a high-quality camera in an IP 67 enclosure, KUKA.VisionTech supports a wide variety of robot operations, even in unstructured environments, for use in applications ranging from fast-moving consumer goods to food manufacturing. Code recognition capability simplifies product traceability, which can be critical for sustainability or quality control. At the same time, the system allows manufacturers to secure production and reduce costs.


When it comes to 3D vision systems, 3D stereo cameras have had a huge impact on the advancement of robotic vision technology. They allow robots to recognize not only a part’s location but also its orientation. With such systems, KUKA successfully automates the very difficult bin-picking process. The 3D stereo camera system captures images of the parts and transfers them to the software, which extracts data representing viable parts for the robot to select. From that data, the software determines which part is in, or closest to, the optimal picking position and then sends the decision to the robot.
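
As an illustrative sketch only, and not KUKA’s actual selection logic, the final bin-picking decision can be thought of as scoring each detected candidate by how high it sits in the bin and how directly its surface faces the camera, then picking the best-scoring part.

```python
# Pick selection sketch: each candidate is a (position_xyz, unit_normal) pair,
# e.g. produced by a surface-pose estimate like the one sketched earlier.
import numpy as np

def choose_pick(candidates):
    """Returns the index of the part to pick, or None if the bin is empty."""
    if not candidates:
        return None
    up = np.array([0.0, 0.0, 1.0])
    scores = [pos[2] + 0.05 * float(np.dot(normal, up))   # height plus upward-facing bonus
              for pos, normal in candidates]
    return int(np.argmax(scores))
```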


Choosing a robotic vision system


Robotic vision capability has evolved from simple part recognition to fast, flexible, and sophisticated sensor systems. Before adding vision capabilities, manufacturers should consider what their systems need to bring to production operations and select the technology that fully meets those needs. To choose the most efficient and cost-effective system, and one that interfaces seamlessly with different types of cameras, rely on the advice of a robot manufacturer such as KUKA or a system integrator.




