This unified system is envisioned as a pair of eyeglasses with an integrated display that would overlay the wearer’s view of the local surroundings, with multiple miniature cameras, inertial sensors, and computation and communication modules built into the eyeglass frame. The system would simultaneously acquire and build up a visual model of the surrounding scene while also estimating the location and orientation of the eyeglasses and the hand gestures and body pose of the wearer; some of the cameras would point toward different parts of the wearer’s body, including the eyes, mouth, hands, and feet. The display would be optically overlaid on the eyeglasses, allowing the synthetic imagery to relate visually to the wearer’s surroundings. Commercially, the system could be an alternative to 3D displays because the wearable display gives each user a custom stereoscopic view of the object or scene that the user desires to see. In addition, when the display is transparent, the rendered object or scene overlays the wearer’s view of the local surroundings. Such a see-through wearable display has many potential applications, such as telepresence, medicine, and health care.
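The simultaneous estimation of the eyeglasses’ orientation from cameras and inertial sensors could be sketched, in highly simplified form, as a complementary filter that blends a gyroscope’s smooth but drifting integration with a camera’s noisy but drift-free fix. This is only an illustrative assumption, not the proposal’s actual method; the function name, filter choice, and the single-axis (yaw-only) simplification are all hypothetical.

```python
import numpy as np

def fuse_yaw(gyro_rate, dt, camera_yaw, prev_yaw, alpha=0.98):
    """Complementary filter: blend gyro dead-reckoning (smooth, drifting)
    with a camera-derived yaw measurement (noisy, drift-free)."""
    gyro_yaw = prev_yaw + gyro_rate * dt      # integrate angular rate
    return alpha * gyro_yaw + (1 - alpha) * camera_yaw

# Synthetic demo: the wearer turns at 0.5 rad/s; both sensors are noisy.
rng = np.random.default_rng(0)
true_yaw, est_yaw, dt = 0.0, 0.0, 0.01
for _ in range(500):
    true_yaw += 0.5 * dt
    gyro_rate = 0.5 + rng.normal(0, 0.02)        # noisy gyro reading
    camera_yaw = true_yaw + rng.normal(0, 0.05)  # noisy visual fix
    est_yaw = fuse_yaw(gyro_rate, dt, camera_yaw, est_yaw)
print(abs(est_yaw - true_yaw))  # residual tracking error stays small
```

A full system would extend this to 3D orientation and position (e.g. a Kalman filter over quaternions), but the same fusion principle applies.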
• The proposed head-mounted display is self-contained
• The initial embodiment is one or more smartphones and one or more Kinect sensors
• A more advanced embodiment will be eyeglasses similar to the LUMUS wearable display, enhanced with sensors and a computation unit
• Multiple pairs of eyeglasses in the same local environment can cooperate to capture a more complete 3D description of that environment, and can assist each other for improved tracking
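The cooperative capture idea in the last bullet can be sketched as merging the point clouds of two devices once their relative pose is known (for instance, from mutually observed landmarks). This is a minimal illustrative sketch under those assumptions; the function name and the way the relative pose is obtained are hypothetical, not part of the proposal.

```python
import numpy as np

def merge_maps(points_a, points_b, R_ab, t_ab):
    """Merge point clouds from two eyeglasses: express device B's points
    in device A's coordinate frame, then concatenate.
    R_ab, t_ab: rotation and translation taking B's frame into A's
    (assumed known, e.g. estimated from shared landmarks)."""
    points_b_in_a = points_b @ R_ab.T + t_ab
    return np.vstack([points_a, points_b_in_a])

# Toy example: device B is rotated 90 degrees about the vertical axis
# and offset by 2 m from device A.
theta = np.pi / 2
R_ab = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                 [np.sin(theta),  np.cos(theta), 0.0],
                 [0.0,            0.0,           1.0]])
t_ab = np.array([2.0, 0.0, 0.0])
points_a = np.array([[0.0, 0.0, 1.0]])   # point seen by A
points_b = np.array([[1.0, 0.0, 1.0]])   # point seen by B, in B's frame
combined = merge_maps(points_a, points_b, R_ab, t_ab)
print(combined.shape)  # (2, 3)
```

In practice the relative pose would be refined continuously (e.g. by aligning overlapping scene structure), which is also how one device could help re-localize another whose tracking has failed.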