Have you ever gone to change the channel, then realized your remote had fallen behind the couch? Or wanted to scroll through a recipe on your tablet, but your hands were covered in dough?
Well, you may be in luck. Researchers at Lancaster University in England have developed a gesture recognition technology that can turn basically anything—your hand, a spatula, a screwdriver—into a remote control. The technology, which the researchers call “Matchpoint,” works via a webcam, using computer vision to track a body part or object as it interacts with a small widget in the corner of the screen. The widget is surrounded by targets corresponding to different functions—volume control, scrolling, channel changing, etc. The user gestures to interact with the targets, activating sliders that move up and down with the user’s movement to control functions like volume.
Existing gesture control technologies generally work only with specific objects or body parts that the system already recognizes, such as a dedicated controller or a human hand. And they can be finicky about how close the user is to the screen, or whether they’re completely visible to the camera, since the system needs to be able to clearly identify the object in question. Matchpoint’s technology is different in that it looks not for a specific object, but for a rotating movement. Matchpoint’s creators hope this will make for an easier user experience.
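The core idea, detecting a rotating movement rather than a recognized object, can be sketched as motion correlation: the widget's target orbits on screen, and the system checks whether any tracked point moves in sync with that orbit. The sketch below is a hypothetical illustration of that principle; the function names, thresholds, and circular-orbit assumption are mine, not details of the actual Matchpoint system.

```python
import math

def target_position(t, radius=1.0, period=2.0):
    """Position of an orbiting on-screen target at time t (circular path)."""
    angle = 2 * math.pi * t / period
    return (radius * math.cos(angle), radius * math.sin(angle))

def correlation(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    vy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (vx * vy) if vx and vy else 0.0

def is_synchronised(tracked_points, times, threshold=0.9):
    """True if a tracked point's trajectory correlates with the target's orbit.

    The tracked point can be anything the camera follows -- a hand, a
    spatula, a screwdriver -- because only its motion matters, not its shape.
    """
    target = [target_position(t) for t in times]
    cx = correlation([p[0] for p in tracked_points], [q[0] for q in target])
    cy = correlation([p[1] for p in tracked_points], [q[1] for q in target])
    return min(cx, cy) > threshold

# A user circling any object in time with the target correlates strongly;
# unrelated movement (e.g. walking past in a straight line) does not.
times = [i * 0.1 for i in range(40)]
in_sync = [target_position(t) for t in times]   # perfectly synchronous motion
out_of_sync = [(t, 0.5) for t in times]         # straight-line motion
print(is_synchronised(in_sync, times))      # True
print(is_synchronised(out_of_sync, times))  # False
```

Because the match is on motion rather than appearance, the user's distance, posture, or choice of object is irrelevant, which is what lets the system work from under a blanket.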
“The system is most useful when the user is engaged in other activities and may not be in the most conventional position to perform a gesture,” says Christopher Clarke, one of Matchpoint’s creators. “As the system does not rely on human or object detection per se, it still works irrespective of the user’s position or posture, or objects they may be holding—hiding under a nice comfy blanket with a cup of tea.”
Matchpoint also allows multiple hands or objects to interact with digital whiteboards, zooming and rotating images, which could be useful for giving group presentations. Additionally, users can permanently link specific objects to the controls, meaning they’ll retain their control function without any need for initialization. So a user could, for example, link a toy train with the volume control function of their tablet. Every time they moved the toy forward, the volume would go up.
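The toy-train example above amounts to a persistent mapping from one object's movement to one control. A minimal sketch of that linking logic, with an invented `ControlLink` class standing in for whatever the real system uses (the tracker and volume API are assumptions):

```python
class ControlLink:
    """Maps a tracked object's forward/backward movement onto a control value."""

    def __init__(self, control_name, value=50, step=5):
        self.control_name = control_name
        self.value = value      # e.g. current volume on a 0-100 scale
        self.step = step
        self.last_x = None      # last observed horizontal position of the object

    def update(self, x):
        """Nudge the control whenever the linked object moves."""
        if self.last_x is not None and x > self.last_x:
            self.value = min(100, self.value + self.step)   # forward -> louder
        elif self.last_x is not None and x < self.last_x:
            self.value = max(0, self.value - self.step)     # backward -> quieter
        self.last_x = x
        return self.value

# Link a toy train to the volume control: each forward push raises the volume,
# and the link persists, so no re-initialization gesture is needed next time.
volume = ControlLink("volume")
for position in [10, 12, 15, 14]:   # train moves forward, forward, then back
    level = volume.update(position)
print(level)  # 55
```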
It’s easy to see how Matchpoint could be useful around the home, allowing users to, say, pause a car repair how-to video without stepping away from the engine, or change the channel with a wave of a baby bottle without waking a sleeping newborn. But its creators hope it has uses beyond entertainment.
“The applications we find most interesting involve ‘sterile’ applications, such as surgery or working in the kitchen, where it is desirable to have a system that users can use any type of object with and doesn’t involve touching things and cross-contaminating objects,” Clarke says.
The system could also potentially be useful to people with disabilities that make it difficult for them to use traditional interface tools like a remote or computer mouse.
Clarke and his cocreator, Hans Gellersen, will present a paper about Matchpoint this month at the 2017 ACM Symposium on User Interface Software and Technology in Quebec City, a conference about technologies for human-machine interfacing. In the future they hope to move Matchpoint beyond the prototype phase and explore commercialization opportunities.
Gesture control technologies have become more common in recent years, but developers are still working out the kinks. Most systems have trouble recognizing gestures in low-light conditions or if the user is too far away. They also generally don’t work outside, since they rely on infrared sensors to assess distance; natural sunlight obscures the infrared beams. They need cameras, which means they won’t work with older televisions. And some users simply feel silly or uncomfortable waving their hands around.
Matchpoint represents "incremental" rather than "transformational" change in the development of better gesture recognition systems, says Juan P. Wachs, a professor in the School of Industrial Engineering at Purdue University who studies human-machine interfaces.
"The big leap will occur when the computer or device does not tell you what is the movement that you need to do to control something, but you act naturally and the technology can 'infer' what you want to do," he says. "Maybe using brain signals as well, as part of an integrated solution."
But if Matchpoint does come to market, at least lost remotes may soon be a thing of the past, and we’ll all be changing the channel with the wave of a banana.