Focusing on vision-guided robots (VGR): meeting complex automation challenges by combining robots with advanced machine vision

We have all seen videos of robots rapidly assembling cars with little human intervention. Industrial robots like these reduce costs and increase productivity in almost every area of manufacturing, but they have one major drawback: they cannot “see.” They are programmed to repeat exactly the same action over and over, and they cannot detect and manipulate objects that vary in shape, size, or color, or that touch or overlap one another. As a result, if a product changes or a new product is added to the production line, the robot must be reprogrammed, and if components reach the line via a traditional hopper and vibrating table, the vibratory feeder must be retooled.

Coping with chaos

Now, a new generation of robots guided by advanced machine vision is enabling robots to do far more than the repetitive tasks common in mass production. Driven by smaller, more powerful, and cheaper cameras and other vision sensors, increasingly sophisticated robotics algorithms, and processors with dedicated machine-vision hardware accelerators, these vision-guided robotics (VGR) systems are rapidly changing manufacturing and fulfillment processes.

VGR makes robots more adaptable and easier to deploy in industries where new products are introduced frequently and production runs are short, including medical device and pharmaceutical manufacturing, food packaging, agricultural applications, and the life sciences.[1]

For example, a leading global car manufacturer operating a large factory in China uses Teledyne DALSA’s GEVA 1000 vision system to ensure that robots on two assembly lines firmly grasp parts and place them on fast-moving conveyor belts. Previously, parts were lifted and placed by hand; automation has increased productivity roughly sixfold. Systems like this suit environments where clutter is unavoidable or too costly to eliminate, or where the line moves too fast for human workers. Advanced systems can even tackle the most challenging VGR application: picking randomly distributed objects of varying size, shape, and weight out of bins in factories and distribution centers, such as Amazon’s large network of automated fulfillment centers.

Random bin picking

Having a robot pick parts out of a bin at random is extremely challenging, because the VGR system must locate and grasp a specific part in a cluttered scene. As the robot removes parts, the remaining ones may shift position and change orientation. The system must identify the correct objects, determine the order in which to pick them, and compute how to grasp, lift, and place each one without colliding with other objects or the walls of the bin. This demands high-performance machine vision hardware, sophisticated software, and enough computing power to process large amounts of visual data in real time.
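
One piece of the puzzle, deciding which part to pick next, can be sketched as a simple scoring rule: prefer parts near the top of the pile that are least occluded by their neighbors. The candidate fields and the weighting below are illustrative assumptions, not any specific vendor’s API.

```python
def choose_next_pick(candidates):
    """Rank detected parts and return the best pick candidate.

    Each candidate is a dict with a 3D centroid (x, y, z, where z
    points up out of the bin, in mm) and an 'occlusion' score in
    [0, 1] estimated by the vision system (0 = fully exposed).
    """
    def score(part):
        # Prefer parts high in the pile that are least covered.
        return part["z"] - 100.0 * part["occlusion"]
    return max(candidates, key=score)

parts = [
    {"id": "A", "x": 10.0, "y": 5.0, "z": 42.0, "occlusion": 0.2},
    {"id": "B", "x": 12.0, "y": 7.0, "z": 55.0, "occlusion": 0.6},
    {"id": "C", "x": 30.0, "y": 2.0, "z": 48.0, "occlusion": 0.0},
]
print(choose_next_pick(parts)["id"])  # "C": high in the pile, fully exposed
```

A production system would fold many more factors into this ranking (graspability, part type, reachability), but the structure, score every detection and pick the best, is the same.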

The machine vision hardware can range from a compact smart camera with an integrated vision processor (such as Teledyne DALSA’s BOA Spot) to sophisticated laser and infrared sensors paired with high-resolution, high-speed cameras.

What about 3D vision?

VGR systems usually combine more than one type of sensor to construct 3D images. For example, a robot with a 3D area sensor can locate and grasp randomly placed parts in a bin; a 2D camera then instantly detects the orientation of each part so the robot can place it correctly on the conveyor belt.
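
The 2D orientation step can be illustrated with the standard second-moment (principal-axis) method: from points on a part’s silhouette, compute the angle of the major axis. This is a generic textbook sketch, not a particular camera vendor’s implementation.

```python
import math

def part_orientation(points):
    """Estimate the in-plane orientation (degrees) of a part from 2D
    points on its silhouette, via the principal axis of the points."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    # Central second moments of the point set.
    mu11 = sum((x - cx) * (y - cy) for x, y in points)
    mu20 = sum((x - cx) ** 2 for x, _ in points)
    mu02 = sum((y - cy) ** 2 for _, y in points)
    # Angle of the major axis relative to the x axis.
    return math.degrees(0.5 * math.atan2(2.0 * mu11, mu20 - mu02))

# A thin bar of points rotated 30 degrees about the origin.
theta = math.radians(30.0)
bar = [(t * math.cos(theta), t * math.sin(theta)) for t in range(-10, 11)]
print(round(part_orientation(bar)))  # 30
```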

Some VGR systems combine 3D time-of-flight (ToF) scanning with snapshot 3D image capture. Compared with a scanning system alone, they achieve a resolution that can handle a wider range of objects, without the moving camera that traditional high-speed camera systems require. ToF scanning determines depth by measuring the time light emitted by a laser takes to travel between the camera and the object’s surface, and has the advantage of working under any lighting conditions.


Structured-light 3D systems, such as Microsoft’s Kinect sensor for video games, project an invisible infrared light pattern onto the object and then use a 2D camera to detect the distortion of the pattern, generating a 3D depth image. This technique can be used to build a 3D map of multiple objects in a picking bin.
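
The geometry behind structured light is the same triangulation used in stereo vision: a pattern feature seen by a camera offset from the projector appears shifted by an amount inversely proportional to depth, Z = f·B/d. The numbers below are illustrative, not actual Kinect calibration constants.

```python
def depth_from_shift(focal_px, baseline_m, shift_px):
    """Depth (m) by triangulation: a projected pattern feature viewed
    by a camera offset `baseline_m` from the projector appears shifted
    by `shift_px` pixels; a larger shift means a closer surface."""
    return focal_px * baseline_m / shift_px

# With a 580 px focal length and a 7.5 cm projector-camera baseline,
# a feature shifted by 29 px lies about 1.5 m away.
print(round(depth_from_shift(580.0, 0.075, 29.0), 3))  # 1.5
```

Running this per pattern feature across the image yields the dense depth map used for 3D mapping of the bin contents.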

Powerful hardware and algorithms

These advanced vision systems process large amounts of data using hardware accelerators such as FPGAs and application-specific integrated circuits (ASICs), allowing them to handle thousands of SKUs on the production line and in order-fulfillment applications.

A key component of an advanced VGR system is the software that keeps the robot and its end-of-arm gripping tool from colliding with the sides of the bin or with other objects. This interference-avoidance software must be very robust, because a different path must be planned each time an item is removed from the bin, and the parts are often stacked together and hard to distinguish.
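
The core of such a check can be sketched as sampling a candidate gripper path and testing an axis-aligned bounding box of the gripper against obstacle boxes (bin walls, other parts). Real planners search full 6-DOF trajectories; this sketch, with invented dimensions, only validates one straight-line motion.

```python
def boxes_overlap(a, b):
    """Axis-aligned boxes given as (xmin, ymin, zmin, xmax, ymax, zmax)."""
    return all(a[i] < b[i + 3] and b[i] < a[i + 3] for i in range(3))

def path_is_clear(start, end, half, obstacles, steps=20):
    """Sample a straight-line gripper path; `half` holds the gripper's
    half-extents per axis (m). Returns False on any sampled collision."""
    for k in range(steps + 1):
        t = k / steps
        c = [s + t * (e - s) for s, e in zip(start, end)]
        gripper = (c[0] - half[0], c[1] - half[1], c[2] - half[2],
                   c[0] + half[0], c[1] + half[1], c[2] + half[2])
        if any(boxes_overlap(gripper, ob) for ob in obstacles):
            return False
    return True

bin_wall = (0.28, -0.5, 0.0, 0.30, 0.5, 0.4)  # a thin wall near x = 0.3 m
half = (0.02, 0.02, 0.05)
lift_straight_up = path_is_clear((0.25, 0.0, 0.05), (0.25, 0.0, 0.6),
                                 half, [bin_wall])
drag_through_wall = path_is_clear((0.25, 0.0, 0.05), (0.45, 0.0, 0.05),
                                  half, [bin_wall])
print(lift_straight_up, drag_through_wall)  # True False
```

Lifting straight up clears the wall, while dragging sideways through it is rejected, so the planner would retreat vertically before moving laterally.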

Looking to the future

A growing body of VGR software, including the open-source Robot Operating System (ROS), which is robot- and sensor-agnostic, will let robot integrators deliver VGR systems faster and more easily, and adopt new, more capable sensors as they become available.

At the same time, machine vision and robotics suppliers are working closely together to make VGR easier to use. For example, machine vision vendors have developed tools that make it easier for engineers to model and optimize sensors for robotic cells. They are also developing Windows-based VGR systems that are easy for end customers to use.

Thanks to these innovations, nearly 50% of robots used in consumer electronics (above the circuit-board level) and other light assembly in Asia now use VGR. As random bin-picking technology rapidly becomes a flexible, well-understood, interchangeable commodity, small and medium-sized companies can adopt it to reduce manual intervention and improve safety, quality, and productivity.
