I’d like to localize a robot to sub-millimeter precision (< 1 mm) within a roughly 3 m square area.
At first I considered a cheap webcam approach — visual odometry or even just reading linear fiducials (tape measures) with OpenCV — but a friend suggested Valve’s “Lighthouse” positioning system, which looks much more promising. (Also: A great excuse to goof with microcontrollers and analog circuitry.)
These are my research notes. They’re written for my own reference, but feel free to email me if you’re working on a similar project.
The lighthouse system is based on precise timing of near-infrared light pulses. A base station (“lighthouse”) emits a wide-field-of-view sync pulse and then sweeps a beam horizontally or vertically across the space. A receiver uses the elapsed time between the sync pulse and the sweep pulse to determine its horizontal or vertical angular position with respect to the lighthouse. Here’s a great animation:
https://www.youtube.com/embed/oqPaaMR4kY4
Receivers do not need to communicate with the lighthouse, which is just always broadcasting.
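The timing-to-angle conversion is simple enough to sketch. This assumes the commonly documented 60 Hz rotor speed of the 1.0 base station (one full 360° rotation every ~8.33 ms); the function name and timestamps are my own, not from any particular library.

```python
# A minimal sketch of sync-to-sweep timing -> angle, assuming the 1.0
# base station's rotor spins at 60 Hz (one 360° rotation per 1/60 s).

ROTATION_PERIOD_S = 1 / 60.0  # ~8.333 ms per full rotation

def sweep_angle_deg(t_sync_s: float, t_sweep_s: float) -> float:
    """Angle of the receiver (in degrees) from the elapsed time between
    the sync flash and the moment the sweeping beam crosses the sensor."""
    dt = t_sweep_s - t_sync_s
    return (dt / ROTATION_PERIOD_S) * 360.0

# A beam hit 1/240 s (~4.17 ms) after sync would put the sensor at 90°:
print(sweep_angle_deg(0.0, 1 / 240.0))  # -> 90.0
```

With two such angles (one horizontal, one vertical) per lighthouse, a receiver gets a bearing ray; intersecting rays from two lighthouses, or combining several sensors of known geometry, yields a 3D position.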
The original (1.0) base station can be purchased on Amazon for $130 and a revised (2.0) base station can be purchased from Valve for $149 (or as part of the Vive Pro System for $1,400, if you actually want to play videogames).
The 2.0 base stations have a curved front plate and emit a fancier modulated signal of some kind. Apparently they’re mechanically simpler and allow for combining more lighthouses to create a larger tracked area. See libsurvive for discussion and reverse engineering attempts.
For my project, I’m using the 1.0 base station since there’s more available prior art and the signal looks simpler to decode.
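Part of what makes the 1.0 signal approachable is that each sync pulse encodes three bits (skip, data, axis) in its width. The constants below follow community reverse-engineering notes (nairol’s LighthouseRedox writeup and libsurvive): widths start around 62.5 µs and step by ~10.4 µs per code. Treat the exact numbers and bit order as assumptions to verify against those sources, not ground truth.

```python
# Sketch: classify a Lighthouse 1.0 sync pulse by its measured width.
# Assumed encoding (per community docs): width ≈ 62.5 µs + 10.4 µs * code,
# where code = skip*4 + data*2 + axis.

def decode_sync_pulse(width_us: float):
    """Recover (skip, data, axis) bits from a sync pulse width in microseconds."""
    code = round((width_us - 62.5) / 10.4)
    if not 0 <= code <= 7:
        raise ValueError(f"{width_us} us doesn't look like a sync pulse")
    skip = (code >> 2) & 1  # 1 = this station's sweep is suppressed this cycle
    data = (code >> 1) & 1  # one bit of the slow "OOTX" info stream
    axis = code & 1         # which axis the upcoming sweep covers
    return skip, data, axis

print(decode_sync_pulse(62.5))   # -> (0, 0, 0)
print(decode_sync_pulse(104.1))  # -> (1, 0, 0)
```

Rounding to the nearest code makes the classifier tolerant of a few microseconds of measurement jitter, which matters when timing pulses on a microcontroller.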
Many people have worked on building open-source lighthouse receivers, and Alan Yates (the original designer) actively helps out on Reddit as /u/vk2zay. Resources that I’ve found most helpful:
Triad Semiconductor sells chips for the Valve Lighthouse system: