Programming artificial intelligence (AI) to account for uncertainty, combined with computer vision, can improve the safety of robotic prosthetics for users.
Walking on different types of terrain can be challenging for people who use prosthetics. Engineers at North Carolina State University have now developed software that helps users of robotic exoskeletons or robotic prosthetics walk more safely and with a more natural gait.
This is achieved by incorporating computer vision into prosthetic leg control and by enabling the AI to account for uncertainty about the terrain.
Safety of robotic prosthetics
Lower-limb robotic prosthetics behave differently depending on the terrain users are walking on, but problems arise when uncertainty is introduced, sometimes causing the robotic limb to default to 'safe mode'.
The framework takes six different terrains into account. Edgar Lobaton, co-author of a paper on the work and an associate professor of electrical and computer engineering at North Carolina State University, said: “The framework we’ve created allows the AI in robotic prostheses to predict the type of terrain users will be stepping on, quantify the uncertainties associated with that prediction, and then incorporate that uncertainty into its decision-making.”
This will make walking much safer for robotic limb users.
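The article does not describe the framework's internals, but the idea of folding uncertainty into the decision can be illustrated with a minimal sketch: classify the upcoming terrain, measure how confident the prediction is, and fall back to a conservative 'safe mode' when confidence is too low. The terrain labels, entropy threshold, and function names below are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

# Illustrative terrain labels; the study considered six terrain types.
TERRAINS = ["tile", "brick", "concrete", "grass", "upstairs", "downstairs"]

def entropy(probs):
    """Shannon entropy (in nats) of a probability vector: a simple
    measure of how uncertain the classifier's prediction is."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def choose_gait_mode(class_probs, max_entropy=0.8):
    """Select a terrain-specific gait controller, or default to
    'safe_mode' when the prediction is too uncertain (hypothetical
    threshold)."""
    if entropy(class_probs) > max_entropy:
        return "safe_mode"
    return TERRAINS[int(np.argmax(class_probs))]

# A confident prediction selects the matching terrain controller:
print(choose_gait_mode(np.array([0.90, 0.02, 0.02, 0.02, 0.02, 0.02])))
# An ambiguous prediction falls back to safe mode:
print(choose_gait_mode(np.array([0.20, 0.20, 0.15, 0.15, 0.15, 0.15])))
```

In this sketch, 'safe mode' plays the same role as in the article: a conservative default that protects the user when the system cannot trust its own terrain prediction.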
A significant advancement for Artificial Intelligence
The incorporation of uncertainty is a significant advancement for AI.
The researchers trained the AI by attaching cameras to able-bodied people who walked across a variety of terrains. This was followed by an evaluation with a person with lower-limb amputation wearing the cameras whilst walking across the same environments.
Lobaton commented: “We came up with a better way to teach deep-learning systems how to evaluate and quantify uncertainty in a way that allows the system to incorporate uncertainty into its decision making.
“This is certainly relevant for robotic prosthetics, but our work here could be applied to any type of deep-learning system. We found that the model can be appropriately transferred so the system can operate with subjects from different populations.
“That means that the AI worked well even though it was trained by one group of people and used by somebody different.”
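The article does not specify how the team's deep-learning system quantifies uncertainty. One widely used technique for this in general, shown here purely as an illustration and not as the paper's method, is Monte Carlo dropout: keep dropout active at inference time, run several stochastic forward passes, and treat the spread across passes as an uncertainty estimate. The tiny network below uses random placeholder weights.

```python
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES, N_CLASSES = 8, 6
W = rng.normal(size=(N_FEATURES, N_CLASSES))  # fixed placeholder weights

def stochastic_forward(x, drop_rate=0.2):
    """One forward pass with dropout left ON at inference time."""
    mask = rng.random(x.size) > drop_rate
    h = (x * mask / (1.0 - drop_rate)) @ W   # inverted dropout scaling
    e = np.exp(h - h.max())
    return e / e.sum()                       # softmax probabilities

def mc_dropout_predict(x, n_samples=50):
    """Average many stochastic passes; the per-class standard deviation
    across passes serves as an uncertainty estimate."""
    samples = np.stack([stochastic_forward(x) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)

mean_probs, uncertainty = mc_dropout_predict(np.ones(N_FEATURES))
```

A downstream controller could then use both outputs, acting on `mean_probs` only when `uncertainty` is low, in the spirit of the decision-making the researchers describe.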
Using a camera on the limb
Mounting one camera on the limb and another on a pair of glasses allowed the AI to utilise computer-vision data from both viewpoints.
Co-author of the paper, Helen Huang, Jackson Family Distinguished Professor of Biomedical Engineering in the Joint Department of Biomedical Engineering at NC State and the University of North Carolina at Chapel Hill, said: “Incorporating computer vision into control software for wearable robotics is an exciting new area of research.
“We found that using both cameras worked well but required a great deal of computing power and may be cost-prohibitive. However, we also found that using only the camera mounted on the lower limb worked pretty well – particularly for near-term predictions, such as what the terrain would be like for the next step or two.”
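The trade-off Huang describes can be pictured as a late-fusion step: each camera produces its own per-terrain probabilities, which are then combined, whereas the cheaper single-camera setup simply uses the lower-limb stream alone. The weighted-average fusion below is an assumed, simplified scheme for illustration; the paper's actual fusion method is not described in this article.

```python
import numpy as np

def fuse_predictions(p_limb, p_glasses, w_limb=0.5):
    """Combine per-terrain probabilities from the two cameras with a
    weighted average (hypothetical fusion rule). Setting w_limb=1.0
    corresponds to the cheaper limb-camera-only configuration."""
    p = w_limb * np.asarray(p_limb) + (1.0 - w_limb) * np.asarray(p_glasses)
    return p / p.sum()

# Both cameras agree on class 0, so fusion keeps that decision:
p = fuse_predictions([0.7, 0.3], [0.6, 0.4])
print(p)  # fused probabilities, summing to 1
```

With only the near-term, next-step horizon to cover, the limb camera's view of the ground directly ahead is often enough, which is why the single-camera configuration "worked pretty well" in the team's evaluation.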