It is estimated that more than 1.3 billion people worldwide have a vision impairment (i.e., are blind or have low vision). Vision impairment in the US is substantial: in 2015, 1.02 million people were blind and 3.22 million were classified as having low vision, and this prevalence is expected to double by 2050.
Navigating unknown built environments is extremely challenging for people with vision impairments. It limits their ability to use transportation to commute independently to their destinations, which in turn can affect their ability to obtain employment and participate in social activities. Navigation is also a challenge indoors: a visually impaired individual may need to memorize the location of every object that might obstruct movement in their own home, and must rely on assistive technologies such as a white cane in unfamiliar indoor environments. Navigating to a given room in an unknown building may require the assistance of a sighted individual, resulting in a loss of independence and a costly burden on caregivers.
In this project, we will build upon our existing work to develop a smart mobility system featuring assisted navigation, autonomous maneuvering, and real-time data integration; together, these technologies will form the basis of an entirely new end-to-end mobility system for individuals with vision impairments. The technologies will be integrated into working prototypes and evaluated for usefulness and usability, two factors predictive of technology adoption. The objective of assisted navigation is to empower people with vision impairments to independently navigate their indoor and outdoor environments (i.e., end-to-end) by offering turn-by-turn navigation instructions. Autonomous maneuvering will address the critical problem of helping individuals with vision impairments orient and position themselves inside unknown rooms by adaptively sensing ambient obstructions and exploring feasible paths that guide them to where they need to be (e.g., navigation from a bus stop to the main building entrance, followed by indoor navigation from the entrance to a meeting room, after which maneuvering guides the individual to their seat while avoiding obstructions). This requires on-demand sensing of the current environment, creating real-time maps, and preparing data for obstacle avoidance.
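To make the path-exploration step of maneuvering concrete, the following is a minimal sketch, not the proposal's implementation, assuming the sensed room has already been reduced to a 2-D occupancy grid; the A* search, 4-connected motion model, and all names here are our own illustrative choices.

```python
import heapq

def a_star(grid, start, goal):
    """Find a feasible path on a 2-D occupancy grid (1 = obstructed, 0 = free).

    `grid` is a list of rows; `start` and `goal` are (row, col) cells.
    Returns the path as a list of cells, or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan-distance heuristic, admissible for 4-connected motion.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # entries are (f = g + h, g, cell)
    came_from = {}
    best_g = {start: 0}

    while open_heap:
        _, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            # Reconstruct the path by walking parent links back to the start.
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nbr = (cell[0] + dr, cell[1] + dc)
            if 0 <= nbr[0] < rows and 0 <= nbr[1] < cols and grid[nbr[0]][nbr[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nbr, float("inf")):
                    best_g[nbr] = ng
                    came_from[nbr] = cell
                    heapq.heappush(open_heap, (ng + h(nbr), ng, nbr))
    return None

# Example: route around an obstruction (e.g., a table) in a small room.
room = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
print(a_star(room, start=(0, 0), goal=(2, 2)))
```

In the actual system, the returned sequence of cells would be translated into the non-visual orientation and directional cues described below, rather than printed.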
Our approach will integrate visual SLAM, occupancy grid mapping, and feature maps derived from point clouds, along with deep learning, to develop the navigation and maneuvering system. The system will rely on sensors placed on the individual and in the space, and our research will result in recommendations for each of these cases. A multi-modal interface will be developed to provide non-visual orientation and directional cues to the user, building on our prior work on non-visual guidance for navigation and reach-and-grasp tasks. Throughout, mixed-methods usability studies will evaluate the proposed system's ease of use and potential adoption by the target community.
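For illustration, the sketch below shows the standard log-odds Bayesian update at the core of occupancy grid mapping, which is one of the named components; the increment values, grid size, and function names are illustrative assumptions on our part, not specifics from the proposal.

```python
import numpy as np

# Illustrative log-odds increments for occupied and free observations.
L_OCC, L_FREE = 0.85, -0.4

def update_grid(log_odds, hit_cell, free_cells):
    """Fuse one range measurement into a log-odds occupancy grid.

    `log_odds` is a 2-D array initialized to 0 (p = 0.5, i.e., unknown).
    `hit_cell` is where the beam returned; `free_cells` are the cells the
    beam passed through (e.g., obtained via Bresenham ray tracing).
    """
    for r, c in free_cells:
        log_odds[r, c] += L_FREE  # beam passed through: more likely free
    r, c = hit_cell
    log_odds[r, c] += L_OCC       # beam endpoint: more likely occupied
    return log_odds

def occupancy_probability(log_odds):
    # Convert log-odds back to probabilities for obstacle-avoidance queries.
    return 1.0 / (1.0 + np.exp(-log_odds))

# Example: one beam from (0, 0) that hits an obstruction at (0, 3).
grid = np.zeros((10, 10))
update_grid(grid, hit_cell=(0, 3), free_cells=[(0, 0), (0, 1), (0, 2)])
print(occupancy_probability(grid)[0, :5])
```

Because updates are additive per cell, repeated measurements accumulate evidence over time, which is what allows the map to be refined in real time as the user moves through the space.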