To find an (honest) man, Diogenes famously used a lantern – the philosopher relied solely on optical recognition methods. Today, however, scientists suggest using Wi-Fi signals for this purpose. More specifically, the method developed by three researchers at Carnegie Mellon University uses the signal from an ordinary home Wi-Fi router to not only pinpoint a person’s location in a room, but also to identify their pose.
Why Wi-Fi? There are several reasons for this. Firstly, unlike optical recognition, radio signals work perfectly in the dark and aren’t hindered by small obstacles like furniture. Secondly, it’s cheap, which can’t be said for lidars and radars – other tools that could potentially do the job. Thirdly, Wi-Fi is already ubiquitous – just reach out and grab it. But just how effective is this method? And what can you do with it? Let’s dive in.
DensePose: a method for recognizing human poses in images
Before getting to Wi-Fi, however, we need to back up a bit and understand how human bodies and their poses are accurately recognized in the first place. In 2018, another group of scientists presented a method called DensePose. They successfully used it to recognize human poses in photographs – that is, ordinary two-dimensional images with no additional depth data.
Here’s how it works: first, the DensePose model searches the image for objects it recognizes as human bodies. These objects are then segmented into distinct areas, each corresponding to a specific body part, and analyzed individually. This approach is used because different body parts move in very different ways: the head and torso, for example, behave quite differently from the arms and legs.
As a result, the model learned to correlate a 2D image with the 3D surface of the human body, producing not only image annotations corresponding to the recognized pose, but also a UV map of the body depicted in the photo. The latter makes it possible, for example, to overlay a texture on the image.
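DensePose is publicly available as a project within the Detectron2 framework, so you can try pose recognition on your own photos. Below is a rough sketch of what inference looks like; the config and checkpoint paths are placeholders (the real ones are listed in the DensePose model zoo), and the exact output field names may vary between versions.

```python
# Rough sketch of DensePose inference via the Detectron2 DensePose project.
# Paths to the config and weights are placeholders, not the files from either paper.
import cv2
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
from densepose import add_densepose_config  # installed from detectron2's projects/DensePose

cfg = get_cfg()
add_densepose_config(cfg)
# Placeholder paths: substitute a real config and checkpoint from the DensePose model zoo
cfg.merge_from_file("configs/densepose_rcnn_R_50_FPN_s1x.yaml")
cfg.MODEL.WEIGHTS = "model_final.pkl"
predictor = DefaultPredictor(cfg)

image = cv2.imread("group_photo.jpg")      # an ordinary 2D photo, no depth data
instances = predictor(image)["instances"]  # one entry per detected person

# Each detected person carries a bounding box plus a body-part segmentation and
# per-pixel UV coordinates mapping image pixels onto the 3D body surface
# (field names such as pred_densepose depend on the installed version).
print(len(instances), "people detected")
print(instances.pred_boxes)
print(instances.pred_densepose)
```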
Most impressively, this technique can accurately recognize the poses of multiple people in group photos, even those chaotic “prom night” pictures where people are huddled together and partially obstruct each other.
What’s more, if the images presented in the paper and the videos published by the researchers are to be believed, the system can confidently handle even the most unusual body positions. For example, the neural network correctly identifies people on bicycles, motorcycles, and horseback, and also accurately determines the poses of baseball players, soccer players, and even breakdancers, who often move in unpredictable ways.
Another advantage of DensePose is that it doesn’t demand extraordinary computing power. Using a GeForce GTX 1080 – hardly a top-of-the-line graphics card, even at the time the study was published – DensePose processes 20-26 frames per second at a resolution of 240×320, and up to five frames per second at a resolution of 800×1100.
DensePose over Wi-Fi: radio waves instead of photos
Essentially, the Carnegie Mellon researchers’ idea was to take DensePose – an existing, well-performing body-recognition model – and feed it Wi-Fi signals instead of photographs.
For their experiment, they constructed the following setup:
- Two stands with standard TP-Link home routers, each equipped with three antennas: one served as a transmitter, the other as a receiver.
- The scene to be recognized, positioned between these stands.
- A camera mounted on a stand next to the receiver router, capturing the same scene that the researchers were aiming to recognize using Wi-Fi signals.
Next, they ran DensePose, which identified body positions using the camera installed next to the receiver router, and used its output to train another neural network – one that worked not with images, but with the Wi-Fi signal picked up by the receiving router. This signal was preprocessed and modified for more reliable recognition – but those are minor details. The point is that the researchers were indeed able to create a new Wi-Fi DensePose model that accurately reconstructs the spatial positions of human bodies using Wi-Fi signals.
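Conceptually, this is a teacher-student setup: the camera-based DensePose acts as the teacher that generates the "ground truth", and the Wi-Fi network is the student that learns to reproduce it from channel data alone. Here is a deliberately toy PyTorch sketch of that idea; the network, tensor shapes, and random stand-in data are our own illustrative assumptions, not the architecture from the paper.

```python
# Toy teacher-student sketch: a "student" network learns to predict DensePose-style
# maps from preprocessed Wi-Fi channel data, supervised by a camera-based "teacher".
import torch
import torch.nn as nn

class WifiPoseNet(nn.Module):
    """Toy student network: maps a CSI-like tensor to a DensePose-style output map."""
    def __init__(self, csi_channels=6, out_channels=25):
        super().__init__()
        # 6 input channels stand in for amplitude + phase across several antenna pairs
        self.encoder = nn.Sequential(
            nn.Conv2d(csi_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        # 25 output channels stand in for body-part / UV predictions
        self.head = nn.Conv2d(128, out_channels, 1)

    def forward(self, csi):
        return self.head(self.encoder(csi))

# Hypothetical stand-ins so the sketch runs end to end: the "teacher" here just
# returns random targets of the right shape; in the real setup it would be the
# image-based DensePose model applied to the synchronized camera frames.
def image_densepose_teacher(frames):
    return torch.rand(frames.shape[0], 25, 64, 64)

wifi_and_camera_loader = [
    (torch.randn(4, 6, 64, 64), torch.randn(4, 3, 240, 320)) for _ in range(5)
]

student = WifiPoseNet()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for csi_batch, camera_frames in wifi_and_camera_loader:
    with torch.no_grad():
        target = image_densepose_teacher(camera_frames)  # pseudo-labels from the camera
    pred = student(csi_batch)                            # prediction from Wi-Fi data alone
    loss = loss_fn(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Once trained this way, the student no longer needs the camera at all: at inference time it produces pose estimates from the Wi-Fi signal by itself.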
Limitations of the method
However, let’s not rush to write headlines like “Scientists Learn to See Through Walls Using Wi-Fi” just yet. First of all, the “seeing” here is quite abstract – the model doesn’t actually “see” the human body, but can predict its location and pose with a certain probability based on indirect data.
Visualizing anything with intricate detail using Wi-Fi signals is a complex challenge. This is demonstrated by another, similar study in which researchers experimented with objects much simpler than human bodies – and the results were, to put it mildly, far from ideal.
It’s also important to note that the model built by the Carnegie Mellon University researchers is significantly less accurate than the original method of recognizing poses in photographs, and also exhibits quite serious “hallucinations”. The model has particular difficulty with unusual poses or scenes involving more than two people.
In addition, the test conditions in the study were meticulously controlled: a simple, well-defined geometry, a clear line of sight between the transmitter and receiver, minimal radio signal interference – the researchers set up everything so they could easily “penetrate” the scene with radio waves. This ideal scenario is unlikely to be replicated in the real world.
So if you’re worried about someone hacking into your Wi-Fi router and monitoring what you do at home, relax. If there’s anything to be concerned about in your home, it’s household appliances. For example, smart pet feeders or even children’s toys have cameras, microphones, and cloud connectivity, while robot vacuum cleaners even have lidars that work flawlessly in the dark, as well as the ability to move around.
And just outside, another spy is waiting for you – a four-wheeled one. In terms of the amount of information they collect, today’s cars are miles ahead of smartwatches, smart speakers, and other everyday gadgets.