W. Fink, M. Tarbell, J. Weiland, M. Humayun; DORA: Digital Object Recognition Audio-Assistant for the Visually Impaired. Invest. Ophthalmol. Vis. Sci. 2004;45(13):4201.
Purpose: To provide a camera-based object detection system for severely visually impaired or blind patients. The system recognizes surrounding objects and their descriptive attributes (brightness, color, etc.) and announces them on demand to the patient via a computer-based voice synthesizer.

Methods: A digital camera mounted on the patient's eyeglasses or head captures images on demand, e.g., at the push of a button or by voice command. Near-real-time image processing algorithms then determine parameters such as brightness (bright, medium, or dark), color (according to a predefined color palette), and the content of the captured image frame. Content is determined as follows: first, the captured frame is processed for edge detection within its central region only, to avoid disturbing effects along the border. The resulting edge pattern is then classified by artificial neural networks previously trained on a list of "known", i.e., identifiable, objects such as table, chair, door, or car. Finally, a descriptive sentence is constructed from the classified object and its descriptive attributes (brightness, color, etc.), e.g., "This is a dark blue chair," and announced verbally to the severely visually impaired or blind patient via the computer-based voice synthesizer.

Results: We have created a Mac OS X version of the digital object recognition audio-assistant (DORA) outlined above, using a FireWire camera. DORA currently comprises algorithms for brightness, color, and edge detection. A basic object recognition system, using artificial neural networks and capable of recognizing a set of predefined objects, is currently under development.

Conclusions: Severely visually impaired patients, blind patients, and blind patients with retinal implants alike can benefit from a system such as the object recognition assistant presented here.
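The frame-analysis steps described above (brightness classification, palette color matching, central-region edge detection, and sentence construction) can be sketched as follows. This is a minimal illustration under our own assumptions, not the authors' implementation: the thresholds, the six-color palette, and all function names are hypothetical, and a stub stands in for the trained neural-network classifier.

```python
import numpy as np

# Hypothetical palette, standing in for the abstract's "predefined color palette".
PALETTE = {
    "red": (255, 0, 0), "green": (0, 128, 0), "blue": (0, 0, 255),
    "yellow": (255, 255, 0), "white": (255, 255, 255), "black": (0, 0, 0),
}

def classify_brightness(rgb_frame):
    """Map mean luminance (0-255) to the three classes named in the abstract."""
    luma = rgb_frame.mean()
    if luma < 85:
        return "dark"
    if luma < 170:
        return "medium"
    return "bright"

def classify_color(rgb_frame):
    """Nearest palette entry to the frame's mean RGB value."""
    mean_rgb = rgb_frame.reshape(-1, 3).mean(axis=0)
    return min(PALETTE,
               key=lambda n: np.linalg.norm(mean_rgb - np.array(PALETTE[n])))

def central_edges(gray_frame, margin=0.2, threshold=30.0):
    """Gradient-magnitude edge map restricted to the central region,
    avoiding disturbing effects along the border (margin is arbitrary)."""
    h, w = gray_frame.shape
    crop = gray_frame[int(h * margin):int(h * (1 - margin)),
                      int(w * margin):int(w * (1 - margin))].astype(float)
    gy, gx = np.gradient(crop)
    return np.hypot(gx, gy) > threshold   # boolean edge mask

def describe(rgb_frame, classify_object):
    """Compose the spoken sentence from attributes and the classified object.
    classify_object stands in for the trained neural network."""
    gray = rgb_frame.mean(axis=2)
    obj = classify_object(central_edges(gray))
    return f"This is a {classify_brightness(rgb_frame)} " \
           f"{classify_color(rgb_frame)} {obj}."

# Synthetic dim-blue frame and a stub classifier:
frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[..., 2] = 200
print(describe(frame, lambda edges: "chair"))  # → This is a dark blue chair.
```

The resulting sentence would then be handed to the voice synthesizer for announcement; the classifier stub is where the edge pattern would be fed to the trained neural networks.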
With the basic infrastructure (image capture, image analysis, and verbal announcement of image content) in place, the system can be expanded to include attributes such as object size and distance, or, by means of an IR-sensitive camera, to provide basic "sight" in poor visibility (e.g., foggy weather) or even at night.