This project has developed a light-based optical sensor and a set of algorithms that determine accurate location and positioning at ultra-high speed, extremely low cost, and very low power. It uses optical pre-processing to reduce data volume and computational complexity by approximately three orders of magnitude, and it leverages existing room lights as location reference points instead of expensive cameras and hardware. Applications include AR/VR, robotics, drones, automotive, and indoor positioning and tracking.
Location tracking is the core and most complex part of a VR device, and good tracking is essential for immersion. There are currently two major approaches: outside-in tracking and inside-out tracking. Most major VR providers use outside-in tracking, but it is not perfect: the movement range is limited, it is susceptible to occlusion, and the space must be prepared in advance. Inside-out tracking was developed to make up for these shortcomings.
Inside-out tracking requires no external sensors, so it works in unprepared, marker-free environments, is unaffected by occlusion, and is not limited by a sensor's monitoring range, giving greater mobility and a higher degree of freedom. However, current inside-out systems suffer from low accuracy and noticeable latency; the inside-out optical tracking technology developed by this project solves both problems. Beyond these technical issues, motion sickness is another major obstacle to the popularity of VR. The brain has a highly sensitive, built-in “poisoning detector” tied to the vestibulo-ocular reflex (VOR): when head movement does not match what the eyes perceive, it produces a feeling of nausea similar to drunkenness. The project's 1000 FPS capture, coupled with a deterministic 2 ms end-to-end latency, delivers highly responsive tracking across all six degrees of freedom on both tethered and untethered systems, which effectively avoids this nausea.
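Why latency matters for nausea can be illustrated with simple arithmetic: during the end-to-end delay, the head keeps turning while the display shows a stale pose, and the resulting angular mismatch is what the VOR detects. The sketch below uses an illustrative fast head turn of 300 deg/s (a common figure in VR discussions, not a number from this project) to compare a 2 ms latency against a more typical 20 ms.

```python
# Worst-case angular mismatch between head motion and the displayed image
# for a given end-to-end ("motion-to-photon") latency. The head-turn rate
# is an illustrative assumption, not a measured figure from the project.

def angular_error_deg(head_rate_deg_s: float, latency_s: float) -> float:
    """Angle (degrees) the head turns before the display catches up."""
    return head_rate_deg_s * latency_s

fast_turn = 300.0  # deg/s, a brisk head rotation

print(angular_error_deg(fast_turn, 0.002))  # 2 ms latency  -> 0.6 deg lag
print(angular_error_deg(fast_turn, 0.020))  # 20 ms latency -> 6.0 deg lag
```

At 2 ms the visual scene lags a fast head turn by a fraction of a degree, an order of magnitude less mismatch than at 20 ms.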
This project developed an optical compression technique that accurately captures multi-megapixel images at frame rates of up to 1000 frames per second without extreme bandwidth or power requirements. Its algorithm team re-imagined SLAM for high-speed operation and rounded out the system by integrating additional data from a MEMS inertial sensor.
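The source does not disclose the fusion algorithm, but the general idea of combining a fast, drift-prone MEMS gyro with slower, drift-free optical fixes can be sketched with a textbook complementary filter on a single rotation axis. This is an illustration of the technique, not the project's actual pipeline; the bias value and blend factor below are assumptions.

```python
# Minimal visual-inertial fusion sketch (one axis): a complementary filter
# blends a fast-but-drifting gyro integral with absolute optical angle fixes.
# Illustrative only -- not the project's actual SLAM implementation.

def complementary_filter(gyro_rates, optical_angles, dt, alpha=0.98):
    """Fuse gyro rate samples (deg/s) with absolute optical angles (deg)."""
    angle = optical_angles[0]  # initialize from the absolute reference
    fused = []
    for rate, optical in zip(gyro_rates, optical_angles):
        predicted = angle + rate * dt                      # dead-reckon with the gyro
        angle = alpha * predicted + (1 - alpha) * optical  # pull back toward optical fix
        fused.append(angle)
    return fused

# Simulated 2 s of motion at 1000 Hz: true rotation of 10 deg/s, with a
# gyro bias of +0.5 deg/s that would accumulate to 1 deg if uncorrected.
dt = 0.001
true_angles = [10.0 * dt * i for i in range(2000)]
gyro = [10.5] * 2000                    # biased rate measurements
fused = complementary_filter(gyro, true_angles, dt)
drift = fused[-1] - true_angles[-1]     # residual error stays small and bounded
```

The gyro alone would drift by 1 degree over these two seconds; with the optical correction the residual error settles to a small bounded offset, which is the essence of why an absolute optical reference keeps high-rate inertial tracking drift-free.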
Working at high frame rates gives the system a decisive advantage: even the fastest and most unpredictable movements do not take it by surprise. The system currently provides full drift-free 6DOF positional information with sub-centimeter and sub-degree accuracy and end-to-end latency of under 2 ms. It runs without resorting to a GPU or DSP and can even work in systems without DRAM, making it ideal for a range of both tethered and standalone applications.
This project aims to raise $5M for a two-year runway.