A group of researchers from Nanyang Technological University in Singapore recently introduced a groundbreaking technique for monitoring human movement in the metaverse.
One of the key aspects of the metaverse is its ability to depict real-world objects and individuals in the digital realm in real-time. In virtual reality, for instance, users can change their perspective by turning their heads, or influence the digital environment by manipulating handheld controllers in the physical world.
The current practice for capturing human activity in the metaverse involves the use of device-based sensors, cameras, or a combination of both. However, as the researchers state in their preprint research paper, these modalities have inherent limitations.
A device-based sensing system, such as a handheld controller with a motion sensor, “only captures information from one point of the human body and therefore cannot model highly complex activities,” state the researchers. On the other hand, camera-based tracking systems struggle with low-light conditions and physical obstacles.
Enter WiFi sensing.
Scientists have been utilizing WiFi sensors to track human movement for years. Much like radar, the radio signals that carry WiFi data can be used to detect objects in space: a body moving through the signal path perturbs those signals in measurable ways.
WiFi sensors can be calibrated to detect heartbeats, monitor breathing and sleep patterns, and even detect people through walls.
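The core intuition is simple enough to sketch in a few lines. The example below is a minimal illustration under stated assumptions, not any production sensing system: it assumes WiFi channel state information (CSI) has already been captured as a NumPy array of amplitude readings, and it flags motion whenever the signal's short-term variance spikes. The function name, window size, and threshold are all hypothetical.

```python
import numpy as np

def detect_motion(csi_amplitudes: np.ndarray, window: int = 100,
                  threshold: float = 0.5) -> np.ndarray:
    """Flag motion in a stream of WiFi CSI amplitude readings.

    csi_amplitudes: array of shape (time_steps, subcarriers) holding
    amplitude measurements from a WiFi receiver. A person moving
    through the signal path perturbs these amplitudes, so a jump in
    short-term variance serves as a crude motion indicator.
    """
    # Average across subcarriers to get one amplitude value per time step.
    signal = csi_amplitudes.mean(axis=1)

    # Slide a window over the signal; high variance suggests movement.
    flags = np.zeros(len(signal), dtype=bool)
    for t in range(window, len(signal)):
        flags[t] = signal[t - window:t].var() > threshold
    return flags
```

Real systems go far beyond this, of course: recovering heartbeats or through-wall silhouettes requires far more sophisticated signal processing, which is where machine learning comes in.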
Researchers in the metaverse field have previously experimented with combining traditional tracking methods with WiFi sensing, with varying degrees of success.
Enter artificial intelligence.
Making sense of raw WiFi signals requires artificial intelligence models. Training those models, however, has proven highly challenging for researchers.
As the Singaporean team explains in its paper, training models for WiFi sensing in human activity recognition (HAR) requires constructing a library of training data. Depending on the objectives of the specific model, these datasets can contain thousands or even millions of data points.
Often, labeling these datasets can be the most time-consuming aspect of conducting these experiments.
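To make that labeling burden concrete, here is a hypothetical sketch of what a supervised HAR training set might look like. The field names and activity labels are illustrative, not taken from the paper: the point is that every window of sensor readings needs a human-assigned label before a supervised model can learn from it.

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical activity labels a human annotator must assign to each sample.
ACTIVITIES = ["walking", "sitting", "waving", "falling"]

@dataclass
class HARSample:
    """One supervised training example for human activity recognition."""
    csi_window: np.ndarray  # e.g. (time_steps, subcarriers) of WiFi readings
    label: str              # manually assigned from ACTIVITIES

def make_dataset(raw_windows: list[np.ndarray],
                 labels: list[str]) -> list[HARSample]:
    # Each of the thousands (or millions) of windows needs a matching
    # human-provided label -- this pairing step is where the manual
    # annotation cost lives.
    assert len(raw_windows) == len(labels)
    return [HARSample(w, l) for w, l in zip(raw_windows, labels)]
```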
Enter MaskFi.
The Nanyang Technological University team developed “MaskFi” to tackle this challenge. It utilizes AI models created through a technique known as “unsupervised learning.”
In the unsupervised learning paradigm, an AI model is pretrained without manually labeled examples and then iteratively refined until it can accurately predict output states, so only a comparatively small amount of data needs human annotation. This approach allows researchers to focus their efforts on developing robust models rather than the laborious task of building extensive labeled training datasets.
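As a rough illustration of the idea (and only an illustration: the team's actual system is multimodal and considerably more elaborate), the sketch below pretrains a tiny model by masking out random portions of each unlabeled input and training it to reconstruct the hidden values. No human labels are involved; the data supervises itself. All module sizes and names here are assumptions.

```python
import torch
import torch.nn as nn

class MaskedPretrainer(nn.Module):
    """Toy masked-reconstruction model: hide parts of an unlabeled
    input and learn to predict the hidden values from the rest."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 32), nn.ReLU())
        self.decoder = nn.Linear(32, dim)

    def forward(self, x: torch.Tensor, mask_ratio: float = 0.3):
        # Randomly zero out a fraction of the input features.
        mask = torch.rand_like(x) < mask_ratio
        x_masked = x.masked_fill(mask, 0.0)
        recon = self.decoder(self.encoder(x_masked))
        # Loss is measured only on the masked positions: the model is
        # supervised by the data itself, not by human labels.
        return ((recon - x) ** 2 * mask).sum() / mask.sum().clamp(min=1)

# One pretraining step on a batch of unlabeled feature vectors.
model = MaskedPretrainer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = model(torch.randn(8, 64))
loss.backward()
opt.step()
```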
According to the researchers, the MaskFi system achieved an accuracy of approximately 97% across two related benchmarks. This suggests that, with further development, the system could pave the way for an entirely new kind of metaverse: one that provides a real-time, 1:1 representation of the physical world.