Google's ClearGrasp AI model clears things up for computers
What you need to know
- ClearGrasp is a collaboration between Google, Columbia University, and Synthesis AI.
- The research is an effort to help computers estimate not only the reflected light from transparent objects but also the refracted light.
- Grasping success rates rose from 12% to 74% with a parallel-jaw gripper, and from 64% to 86% with a suction gripper.
As humans, we typically don't have much trouble grabbing objects from a table, whether they're solid, like an apple, or transparent, like a glass. For computers and robots, though, that's a different story. Thanks to a new algorithm called ClearGrasp, that may become a thing of the past.
As explained in a recent post on Google's AI Blog, a team of researchers from Google, Columbia University, and Synthesis AI developed a new machine learning algorithm that can accurately estimate the 3D geometry of transparent objects from RGB-D images. Most imaging models assume that all surfaces, whether a table or a soda can, are Lambertian, meaning they reflect light evenly in all directions. Transparent objects break that assumption: they don't just reflect light, they also refract it, which causes problems for depth-sensing systems.
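To make the Lambertian assumption concrete, here's a minimal, hypothetical Python sketch of Lambert's cosine law, the "reflect light evenly in all directions" model mentioned above. It is purely illustrative and not code from ClearGrasp, which is a learned model rather than a formula.

```python
import numpy as np

def lambertian_intensity(normal, light_dir, albedo=1.0):
    """Lambert's cosine law: reflected brightness depends only on the angle
    between the surface normal and the incoming light, not on where the
    camera is. (Illustrative only; not the ClearGrasp model itself.)"""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return albedo * max(0.0, float(np.dot(n, l)))

# A matte surface facing the light reflects the most...
print(lambertian_intensity(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0])))  # 1.0
# ...and less as the light grazes it, the same from every viewing angle.
print(lambertian_intensity(np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 1.0])))  # ~0.71
```

A glass doesn't behave this way: much of the light passes through and bends, so the simple normal-times-light relationship no longer predicts what the camera sees.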
Object imaging is used in many applications, from warehouses to the automotive industry; heck, it's even being used in kitchens. So the ability to see not only solid objects but also transparent ones is appealing for a multitude of reasons. This new AI model teaches computers to reconstruct accurate depth for transparent objects from the images captured by RGB-D cameras.
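For a sense of what "reconstructing depth" means here, below is a small, hypothetical Python sketch of an RGB-D frame. Depth cameras commonly return missing values where a transparent object refracts the projected light, and a model like ClearGrasp predicts plausible depth for exactly those pixels; the naive fill-in at the end is a stand-in, not the actual method.

```python
import numpy as np

# A toy 4x4 RGB-D frame: color (H, W, 3) plus per-pixel depth in meters.
rgb = np.zeros((4, 4, 3), dtype=np.uint8)
depth = np.full((4, 4), 0.80)     # the table surface, 0.8 m from the camera

# Where a glass sits, the depth sensor often reports nothing usable,
# so the object effectively vanishes from the depth channel.
depth[1:3, 1:3] = 0.0

missing = depth == 0.0
print(f"{missing.sum()} of {depth.size} pixels have no usable depth")

# A learned depth-completion model estimates values for the missing pixels
# from the RGB image; this crude stand-in just copies the table's depth.
depth[missing] = 0.80
```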
The research teams fed large amounts of data into the machine learning model to increase the accuracy of object detection for their "pick and place" robot system, which, as the name implies, picks up objects and places them in another location. The new system increased the robot's ability to accurately detect and grab transparent objects from 12% to 74% with a parallel-jaw gripper, and from 64% to 86% with a suction gripper.
With the use of robotics increasing and new applications for their capabilities appearing all the time, this research will only make robots more useful. Computer imaging isn't just for robots grabbing objects, though; it's also used in cameras in homes, cars, and many other places, so who knows what the future may hold for these systems?