New Application Helps Modern Robots Navigate Difficult Terrain
Robots may soon be able to not only detect when their legs are tangled, but also figure out how to get themselves free.
As more industries grow comfortable and proficient at employing robots to do their legwork, plenty of onlookers would like to do the same. However, environmental constraints like loose wires, rubble, debris, or thick vegetation can make that impossible, because many popular robotic designs cannot move through such obstacles.
At Carnegie Mellon University, an application is being developed that gives robots the ability to detect when they are getting tangled up and the know-how to free themselves.
“Our objective was to get legged robots, in particular, to be more effective at getting untangled and getting around obstacles that might get in their way even if they don't know specifically what stiffness or geometry these obstacles have,” explained Justin Yim, an engineering professor at the University of Illinois Urbana-Champaign and collaborator on the project. “We targeted working with legged robots because we think that they could be really effective in this area, but aren't living up to their potential.”
The team, led by Aaron Johnson and including master's degree students Jiming Ren, David Ologan, and Selvin Garcia Gonzalez, started with a simple case that mimicked vines or branches: a robot walking through bungee cords. Each time the robot encountered the cords, it first had to detect where it was tangled. The application would be of no use if the robot picked up all of its legs every time it was caught on something. Using proprioceptive joint torque estimation, the robot can monitor its legs and determine which one is stuck.
Since many modern quadruped robots are built with a quasi-direct drive strategy, their inherent construction is “transparent,” meaning the robot can monitor the behavior of the motor when force is applied to its limbs, Yim explained. This application builds on that existing framework and doesn’t require any additional sensors.
“We want to be able to measure those forces,” Yim said. “With a transparent robot limb, we can track the motor’s current and velocity and use that to determine what loads are applied to the leg.”
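The idea behind a "transparent" quasi-direct-drive leg can be sketched in a few lines: with low gearing and low friction, joint torque is roughly proportional to motor current, so external loads can be inferred without extra sensors. The constants and function below are illustrative assumptions, not values from the CMU project.

```python
# Hypothetical sketch: estimating joint torque from motor current on a
# quasi-direct-drive ("transparent") actuator. All constants are assumed.

KT = 0.068        # motor torque constant, N*m/A (assumed)
GEAR_RATIO = 6.0  # low gear ratio typical of quasi-direct drives (assumed)
B_VISCOUS = 0.01  # viscous friction coefficient, N*m*s/rad (assumed)

def estimate_joint_torque(current_amps: float, joint_velocity: float) -> float:
    """Estimate torque at the joint from measured motor current and velocity.

    With a transparent (low-friction, low-gearing) actuator, motor torque
    reflects almost directly to the joint, so external loads show up in the
    current measurement without a dedicated force sensor.
    """
    motor_torque = KT * current_amps
    joint_torque = GEAR_RATIO * motor_torque
    # Subtract a simple viscous-friction model so external load stands out.
    return joint_torque - B_VISCOUS * joint_velocity
```

In practice the friction model would be identified per joint, but the point stands: current and velocity alone give a usable torque estimate.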
Proprioceptive torque estimation has been used on robot limbs before, arms specifically. However, this application required a couple of adjustments. With arms, the goal is for them not to hit anything, but a robot leg hits the ground with every step. The key is to distinguish expected ground contact from unexpected contact mid-swing. For this, the proprioceptive torque estimator uses a momentum-based observer to detect when a limb runs into an object before it is supposed to hit the ground, a sign that it is tripping or stuck on something.
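A momentum-based observer of the kind described above can be sketched for a single joint: the observer's residual converges to the external torque acting on the joint, and a large residual during swing phase, before expected touchdown, flags a trip or snag. The inertia, gain, and threshold values here are illustrative assumptions, not the team's parameters, and real implementations include Coriolis and gravity terms omitted here.

```python
# Hedged sketch of a momentum-based disturbance observer for one joint.

INERTIA = 0.05   # joint inertia, kg*m^2 (assumed)
GAIN = 50.0      # observer gain, 1/s (assumed)
DT = 0.001       # control timestep, s (assumed)

class MomentumObserver:
    def __init__(self) -> None:
        self.integral = 0.0   # running integral of (commanded torque + residual)
        self.residual = 0.0   # estimate of external torque on the joint

    def update(self, commanded_torque: float, velocity: float) -> float:
        momentum = INERTIA * velocity
        # r = K * (p - integral of (tau_cmd + r) dt); friction/gravity omitted.
        self.integral += (commanded_torque + self.residual) * DT
        self.residual = GAIN * (momentum - self.integral)
        return self.residual

def in_unexpected_contact(residual: float, leg_in_swing: bool,
                          threshold: float = 0.5) -> bool:
    # Contact during swing, before expected touchdown, suggests a snag.
    return leg_in_swing and abs(residual) > threshold
```

Driving the joint with a constant unmodeled external torque makes the residual converge to that torque, which is exactly the signal used to tell "tripping" from "stepping."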
With this application, when the robot is tangled up, it is then directed to lift its leg upward. This reaction doesn’t work in every scenario, but it works in the majority of situations and that was the team's most immediate goal with the application, Yim explained.
“We wanted to figure out how we can move around and enable the robot to be more effective at relatively high speed walking through objects like plants or cables,” he said. “It was intuitively designed to work in most cases by stepping over objects.”
The robot is also directed to move the limb slightly forward as it lifts, which allows the robot to stay in contact with the object and gives it some idea of the object's shape. It would be very difficult for a robot to detect whether or not it has cleared an object without remaining in contact with it, Yim added.
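The recovery motion described above, lifting mostly upward with a slight forward drift so the foot keeps tracing the obstacle, can be sketched as a foot-position target update. This is an illustrative toy, not the team's actual controller, and the step sizes are assumed values.

```python
# Illustrative sketch of the snag-recovery foot motion: lift upward while
# drifting slightly forward to stay in light contact with the obstacle.

LIFT_STEP = 0.02      # vertical increment per control tick, m (assumed)
FORWARD_STEP = 0.005  # small forward drift per tick, m (assumed)

def recovery_foot_target(foot_x: float, foot_z: float,
                         snagged: bool) -> tuple[float, float]:
    """Return the next foot position target for one control tick."""
    if not snagged:
        return foot_x, foot_z  # normal gait controller takes over
    # Lift mostly upward, with a slight forward bias so the foot keeps
    # contact with the obstacle and can feel when it has cleared it.
    return foot_x + FORWARD_STEP, foot_z + LIFT_STEP
```

The forward bias is the crucial design choice: retreating from the obstacle would, as Yim notes, leave the robot bouncing back and forth with no sense of whether it had cleared it.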
“If you imagine that you run into a step and you pull your foot backward, it's really hard to tell whether you’ve made it over the object,” he said. “So in that case, if you lift the leg and it retreats away from the object, then you might advance the leg and try to go forward and run into the object again and just kind of bounce back and forth, which would not be very effective.”
One of the challenges the team faced when developing the application was balancing speed and control. To remain efficient, the robot typically needed to keep up its pace and couldn't come to a full stop every time it encountered an obstacle.
“By taking another step when it's stuck, the robot can recover its balance and continue walking,” Yim said. “But if it takes a very, very slow step, it tends to lose control and can fall over. However, if it moves too quickly through the obstacles, it becomes difficult for it to sense when it's touching things. So I think these types of counter objectives make it a balancing task of coming up with a strategy that allows both of the components—the sensing and the control—to succeed.”
Another advantage of this application is that it allows a robot to unstick multiple legs, one at a time, until it frees itself from the obstacle. This type of technology would be very useful in situations where getting stuck is almost a guarantee and will help a robot power through without losing too much energy in the process.
“We hope that this is a nice middle ground that allows us to make robots that are more effective at moving quickly in most cases,” Yim said.
In addition to exploring different strategies to help robots detangle themselves, the team is also working to integrate the application with an open-source package so that others may use the software to run quadruped robots of their own.
Further down the line, the team hopes to explore how to apply these strategies to other types of robotic frameworks such as hexapods or bipeds so that robots can be adapted to more environments.
Cassie Kelly is a science and technology writer in Columbus, Ohio.