Self-driving cars detect and analyze their environment in many different ways. This is what helps them respond more quickly, efficiently, and consistently than their human counterparts.
But adapting to the elements of the road is just one requirement of a responsible urban trekker. It turns out that smaller self-driving robots will also need social rules and conduct to become even more efficient, according to this latest report.
The Social ‘Drive’ and Adaptation
That cute yellow robot shown above is the brainchild of engineers at the Massachusetts Institute of Technology. Don’t let its rather simple appearance fool you: it has been developed with a sense of “social awareness”, a new AI factor that could help self-driving robots navigate urban locations even more efficiently.
As described in the report, there are currently four major challenges in autonomous navigation: localization, perception, motion planning, and control. Each is governed by several sensors and algorithms that collate data to identify the robot’s surroundings and to plan an optimal route to its destination.
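To make the four stages concrete, here is a minimal sketch of how such a navigation pipeline might be wired together. The function names, sensor fields, and placeholder logic are all illustrative assumptions, not any real robot's code.

```python
# Illustrative four-stage navigation pipeline: localization, perception,
# motion planning, and control. Every function body is a placeholder.

def localize(sensor_data):
    # Where am I? (stand-in for GPS/odometry fusion)
    return sensor_data["gps"]

def perceive(sensor_data):
    # What is around me? (stand-in for lidar/camera processing)
    return sensor_data["obstacles"]

def plan_motion(position, obstacles, goal):
    # Keep only waypoints that don't collide with known obstacles.
    return [p for p in [position, goal] if p not in obstacles]

def control(path):
    # Actuate toward the next waypoint on the planned path.
    return f"steer toward {path[-1]}"

def navigate(sensor_data, goal):
    pos = localize(sensor_data)
    obstacles = perceive(sensor_data)
    path = plan_motion(pos, obstacles, goal)
    return control(path)

print(navigate({"gps": (0, 0), "obstacles": [(1, 1)]}, (5, 5)))
# → steer toward (5, 5)
```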
Most bots take an “analyze then move” approach. MIT’s bot, however, uses a combination of “analyze then move” and what we could probably describe as “conditional execution”: it can fall back on a default path or setting when certain environmental conditions are met. This significantly reduces analysis time and helps it adapt to the near-infinite variability of human decisions and crowd unpredictability.
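The idea can be sketched in a few lines. The pattern names, cached actions, and cost figures below are hypothetical, invented purely to illustrate the shortcut; they are not taken from MIT's actual system.

```python
# Hypothetical sketch of "conditional execution": if the scene matches a
# known pattern, reuse a cheap cached decision; otherwise fall back to the
# expensive full analysis.

def full_analysis(observation):
    # Stand-in for the costly perception + motion-planning step.
    return {"action": "replan", "cost": 100}

# Precomputed defaults for recognized environmental conditions.
DEFAULT_ACTIONS = {
    "clear_corridor": {"action": "go_straight", "cost": 1},
    "oncoming_pedestrian": {"action": "keep_right", "cost": 1},
}

def decide(observation):
    pattern = observation.get("pattern")
    if pattern in DEFAULT_ACTIONS:       # condition met: use the default
        return DEFAULT_ACTIONS[pattern]
    return full_analysis(observation)    # otherwise: analyze then move

print(decide({"pattern": "clear_corridor"})["action"])  # → go_straight
print(decide({"pattern": "crowded_plaza"})["action"])   # → replan
```

The payoff is in the cost column: a matched pattern skips the heavy analysis entirely, which is where the claimed reduction in decision time would come from.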
Conquering Human Pedestrian Traffic
While it is natural and intuitive to us, pedestrian traffic can be much more unpredictable for self-driving robots. Each person walks his or her own path, which isn’t always straight and direct. As hinted earlier, MIT developed the robot with this element of urban unpredictability in mind. The researchers wanted to optimize its decisions around people without having to analyze too much data from their paths and motions.
The “conditional execution” we described earlier is a form of reinforcement learning. To create the default paths/settings and the environmental conditions that trigger them, the robot was subjected to a multitude of computer simulations. These simulations provided the learning patterns which, combined with specific pedestrian conventions (e.g. staying in the right lane) and standard automated navigation systems, form the core of the robot’s “social awareness”.
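A toy version of that training loop might look like the following. The states, actions, and reward values are invented for illustration; the only thing this sketch shares with MIT's setup is the general idea of rewarding socially compliant behavior across many simulated encounters.

```python
import random

# Toy reinforcement-learning loop: across many simulated pedestrian
# encounters, actions that match social convention (keeping right)
# earn higher rewards, so the robot learns them as its default.

random.seed(0)
ACTIONS = ["keep_right", "keep_left", "stop"]
q = {a: 0.0 for a in ACTIONS}  # single-state action-value estimates
alpha = 0.1                    # learning rate
epsilon = 0.1                  # exploration probability

def simulated_reward(action):
    # Pedestrians in the simulation tend to keep right, so "keep_right"
    # usually avoids conflict. Values are purely illustrative.
    return {"keep_right": 1.0, "keep_left": -0.5, "stop": -0.1}[action]

for episode in range(500):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < epsilon:
        a = random.choice(ACTIONS)
    else:
        a = max(q, key=q.get)
    q[a] += alpha * (simulated_reward(a) - q[a])

print(max(q, key=q.get))  # → keep_right
```

After enough episodes, "keep_right" dominates the value table, and that learned preference is exactly the kind of cached default the conditional-execution step can reuse at runtime.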
The robot was also programmed to scan its environment every tenth of a second. Coupled with its leisurely speed of 1.2 meters per second, this lets it navigate seamlessly through a wide variety of urban situations without its decision delays becoming apparent.
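A quick back-of-the-envelope check shows why those two figures work well together: between consecutive scans, the robot covers only about 12 centimeters.

```python
# Distance traveled between environment scans, from the figures above.
speed_m_per_s = 1.2      # robot's walking-pace speed
scan_interval_s = 0.1    # one environment scan every tenth of a second

distance_per_scan_m = speed_m_per_s * scan_interval_s
print(f"{distance_per_scan_m * 100:.0f} cm per decision cycle")  # → 12 cm
```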
Crowd Guidance, Assistance, and Control
Aside from making delivery and service bots potentially smarter on the road, one of the most obvious applications of this new system would be on-the-spot guidance and assistance. While the research did not directly suggest this, it was quite heavily hinted in its closing commentary about “how robots might handle crowds in a pedestrian environment”.
Of course, it won’t be as sophisticated as a billionaire philanthropist’s army of public peacemakers. Perhaps something along the lines of South Korea’s current airport robot guides, but more capable. At the very least, the finalized version should be significantly more sensible than this poor, troubled fellow.