A researcher at Missouri University of Science and Technology is working on making robots smarter, and the technology could end up in self-driving cars, search and rescue equipment or military weapons.
Electrical engineering professor Jagannathan Sarangapani said one of the major constraints on robots that act on their own is that they're limited to seeing things in two dimensions.
“A robot can use coordinates, like GPS, but they never use their visual information to make real-time decisions, unlike a human,” Sarangapani said, “when they are driving a car, for instance.”
He’s working on hardware and software that could enable a robot’s camera to take dozens of pictures a second, compile them into a 3-D model and then use that information to decide what to do next.
“In a robot-type application, where you are sticking the camera onto it, but you are taking a series of images now and then processing them and trying to use that information from a visual perception point of view,” Sarangapani said.
That could include a bomb-defusing robot deciding if it was safe to move closer, a hazardous chemical disposal unit making adjustments for terrain or a military weapon deciding to neutralize a threat.
The combination of software and hardware could also be used in a swarm of robots in which the primary unit is controlled by a human operator but the followers could “learn” from the leader.
“Using deep learning, we can program leader-follower robots to learn and imitate the behavior of a human,” Sarangapani said.
Sarangapani said that the technology has had some success in medical applications and that the next step is to get funding from the military to develop it for use in the most severe situations.
Follow Jonathan on Twitter: @JonathanAhl