Prosthetic Limb ‘Sees’ What Its User Wants to Grab

Adding computer vision and deep learning to a prosthetic makes it far more effective

A prosthetic hand outfitted with an inexpensive webcam lets its user grab objects with less effort. Newcastle University, UK

When you grab something, your hand does most of the work. Your brain just says "go," and you don't worry about how it happens. But with a prosthetic, even the most advanced, that action requires much more deliberate intention. As a result, many patients abandon their state-of-the-art limbs.

Modern prosthetics receive commands in the form of electrical signals from the muscles they're attached to. But even the best prosthetics can't do much yet. Users need a long training period to get used to the limb, the devices can often move in only limited ways, and users must manually switch between grips to accomplish different tasks, say, opening a door versus pinching and turning a key. All in all, it means the hand can't work seamlessly with the brain.

One tool that might help solve this problem is computer vision. Researchers at Newcastle University mounted a webcam on a prosthetic hand, connected it to a deep learning neural network, and gave the device to two people whose arms had been amputated above the wrist but below the elbow. The computer used the camera to see what the user was reaching for and automatically adjust the prosthetic's grip.

The results have, so far, been promising. In an article in the Journal of Neural Engineering, the team from Newcastle reported that the users had success rates above 80 percent for picking up and moving objects.

“If we can improve that, get a hundred percent, it would be much more reliable to use the hand for the amputees," says Ghazal Ghazaei, a PhD student at Newcastle and the lead author of the paper. "If it’s going to be used in real life, it should be errorless.”

The device itself was an off-the-shelf prosthetic called an i-limb ultra, and the webcam was a low-resolution, inexpensive Logitech Quickcam Chat. The real innovation was how Ghazaei’s team devised a computer learning scheme to use the information from the webcam.

The software recognizes patterns in the shape of the object to be lifted and classifies them into categories based on the grip needed to grasp them effectively. To teach the computer this technique, Ghazaei fed it 72 images of each of 500 objects, taken at 5-degree increments. The software filters the objects by their features and learns, through trial and error, which ones fall into which categories.
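For readers curious how such a scheme might be set up in code, here is a minimal training sketch, assuming PyTorch, a folder of images grouped by grip class, and a small convolutional network; the input size, architecture, and hyperparameters are illustrative guesses, not the configuration described in the Newcastle paper.

```python
# Minimal sketch of training an image classifier that maps object views to grip
# classes. PyTorch, the folder layout, and all hyperparameters are assumptions
# for illustration, not the Newcastle team's actual setup.
import torch
import torch.nn as nn
from torchvision import datasets, transforms

# 72 views per object (5-degree increments) of ~500 objects, each view stored
# in a folder named after its grip class (pinch, tripod, etc.).
transform = transforms.Compose([
    transforms.Grayscale(),
    transforms.Resize((72, 72)),          # low-resolution input, per the article
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("grasp_library/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

# A small convolutional network: feature filters followed by a 4-way classifier.
model = nn.Sequential(
    nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 15 * 15, 4),           # four grip classes
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):                   # "trial and error": predict, compare, adjust
    for images, grip_labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), grip_labels)
        loss.backward()
        optimizer.step()
```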

Then, when the prosthetic is presented with an object, the network classifies the low-resolution image based on its broad, abstract shape. It needn't be something the system has seen before; the general shape of the object is enough to tell the hand which grip to use. Ghazaei and her team used four grip types: pinch (two fingers), tripod (three fingertips), neutral palmar (as when grasping a coffee cup), and pronated palmar (palm facing downward).
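The recognition step might look roughly like the following, again in hypothetical Python with OpenCV and the classifier from the training sketch: grab one low-resolution frame from the webcam, classify it, and map the predicted class to a grip name. The class ordering and camera index are assumptions, and sending the chosen grip to the hand is left out, since the i-limb's control interface isn't described in the article.

```python
# Sketch of the inference step: capture one frame, downsample it, classify it,
# and translate the predicted class into a grip name.
import cv2
import torch

GRIPS = ["pinch", "tripod", "neutral_palmar", "pronated_palmar"]  # assumed order

def choose_grip(model, camera_index=0):
    cap = cv2.VideoCapture(camera_index)          # the mounted webcam
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("camera read failed")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (72, 72))            # broad shape is enough
    x = torch.from_numpy(small).float().div(255).unsqueeze(0).unsqueeze(0)
    with torch.no_grad():
        class_id = model(x).argmax(dim=1).item()
    return GRIPS[class_id]
```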

Computer vision has been used on robotic hands before, both in prosthetics and in industrial robots. But such efforts have either involved objects of standard size and shape, as in a manufacturing environment, or relied on slower algorithms. The system developed at Newcastle was fast enough to correctly classify objects in 450 microseconds, or around 1/2000th of a second. “The main difference is the time that it takes to provide a grasp and do the task,” says Ghazaei. “For some of them it’s about four seconds, and some of them need several snapshots. For us, it’s just one snapshot and it’s very fast.”
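As a loose illustration, timing a single-snapshot classification with the model from the training sketch above takes only a few lines; the result depends entirely on the hardware and the network, and won't reproduce the 450-microsecond figure reported for the Newcastle system.

```python
# Rough latency check for one forward pass of the (assumed) classifier.
import time
import torch

x = torch.rand(1, 1, 72, 72)          # one pre-processed snapshot
with torch.no_grad():
    model(x)                           # warm-up pass
    start = time.perf_counter()
    model(x)                           # the timed single-snapshot classification
    elapsed = time.perf_counter() - start
print(f"classification took {elapsed * 1e6:.0f} microseconds")
```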

The impacts of this technology go far beyond picking up household items. Imaging systems could help prosthetic legs know how far they are from the ground, and adjust accordingly, for example. What both instances have in common is a robotic system that is working in conjunction with the brain.

“The main idea is to have an interaction between the robotic device and the human, adding some intelligence into the robotic system,” says Dario Farina, a professor of neurorehabilitation engineering at Imperial College London, whose lab studies neuromuscular interfaces for bodies and brains and the devices they connect to.

“It is not only the patient that controls, with his brain and through the neural interface, the prosthesis, but it is also [that] the patient is helped by a second intelligent entity, which is mounted on the prosthesis and which can see the environment," says Farina, who was not involved with the Newcastle study. "The main challenge in this is really to be able to share the control between the human and the intelligence system.”

It’s an early inroad into merging artificial intelligence with the brain, sussing out which actions work best for each without creating conflict. Ghazaei has encountered this problem: she is still working out how much of the broad motion should be controlled by the prosthetic's computer versus by the user's actions. Right now, the user points the prosthetic at the item and prompts it to take a photo, and the arm then chooses the grip and grabs.
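Sketched in the same hypothetical Python, that shared-control flow might look like the loop below, reusing choose_grip from the earlier sketch; trigger_pressed and drive_hand are stand-ins for the user's trigger signal and the prosthesis interface, neither of which is documented in the article.

```python
# Minimal sketch of the shared-control loop: the user decides when to act,
# the vision system decides how to grasp.
def control_loop(model, trigger_pressed, drive_hand):
    while True:
        if trigger_pressed():              # user points the hand and signals intent
            grip = choose_grip(model)      # vision system picks one of the four grips
            drive_hand(grip)               # hypothetical call that closes the hand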

It’s just one of many remaining challenges. Right now, the system can’t make sense of long objects that extend out of view. It has trouble with crowded backgrounds. Sometimes it interprets a farther-away object as a smaller, nearer one. And Ghazaei says increasing the number of grasp types to 10 or 12 is another goal. But already, she says, the two users in the trial appreciated the increase in performance and the simplicity it lends to the basic act of picking something up.

Video: Grasp classification in myoelectric hands
