What if your smartphone could see for you, the same way it tells time, takes pictures, crushes candy, and occasionally calls people for you?
BlindTool, by Joseph Paul Cohen, a PhD candidate at the University of Massachusetts Boston, is a free Android app that helps the blind and visually impaired navigate the world. Using computer vision technology — a robot analogue of human vision, in which computers analyze visual information on video — it identifies objects it “sees” through the camera lens and describes them in a robotic monotone. If you point your phone at a chair, it says “chair”; if you point it at a banana, it says “banana.”
Though it’s far from perfect — BlindTool identified a quarter as a ping-pong ball and an image of Chewbacca from Star Wars as a Yorkshire Terrier — it’s an application of computer vision technology that has the potential to change lives. Cohen is currently raising funds on Kickstarter for BlindTool version 2, which will have more accurate object recognition and personalization. (As of now, its visual vocabulary is limited to 1,000 things, hence the Yorkshire Terrier confusion.)
Cohen’s dream for BlindTool is total scene recognition, for it to “identify all objects in any given scene as if a sighted person were telling you about them.” This made me curious: Could BlindTool, or technology like it, someday aid in efforts to help the blind “see” visual art? Could it lead to a kind of talking Shazam for art? Cohen says that, theoretically, it could.
What would it take to get BlindTool to be able to recognize and identify works of art? Just a little art-historical training, Cohen answers. With a huge data set of images, “you could train this app to recognize all the famous fine artists and artistic styles, and it could speak those things aloud when they’re recognized. If you showed it a bunch of Picasso paintings, you could then take some random piece of art by an anonymous artist and it could say, ‘This artwork is 20% in Picasso’s style.’”
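Cohen’s “20% in Picasso’s style” figure is the kind of percentage a trained classifier produces when it converts its raw per-style scores into probabilities. As a rough illustrative sketch — not BlindTool’s actual code, with made-up style names and scores — the standard softmax function does exactly this:

```python
import math

def style_probabilities(scores):
    """Convert raw per-style scores (e.g. from the final layer of a
    neural network trained on labeled artworks) into probabilities
    that sum to 1, using the softmax function."""
    exps = {style: math.exp(s) for style, s in scores.items()}
    total = sum(exps.values())
    return {style: e / total for style, e in exps.items()}

# Hypothetical scores a classifier might assign an anonymous artwork
# after training on images of each artist's work.
scores = {"Picasso": 1.2, "Monet": 0.4, "Rothko": -0.5}
probs = style_probabilities(scores)

# An app could then speak the result aloud, e.g.
# "This artwork is {percent}% in Picasso's style."
top_style = max(probs, key=probs.get)
percent = round(100 * probs[top_style])
```

The same mechanism underlies BlindTool’s current 1,000-object vocabulary: each spoken label is simply the category with the highest probability.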
In recent years, particularly since the 25th anniversary of the Americans with Disabilities Act, art museums have placed greater emphasis on programming for blind visitors. Some, like the Museum of Modern Art, the Whitney, and the Minneapolis Institute of Art, offer “Touch Tours,” allowing blind visitors to touch specific sculptures in order to understand their contours and textures. Others, like Florence’s Uffizi Gallery and Madrid’s Museo del Prado, offer 3D models of certain paintings for visitors to touch.
Perhaps someday, an app like BlindTool could help the visually impaired “see” art even when such programs aren’t available. A better current alternative is an app called “Be My Eyes,” which lets volunteers serve as surrogate eyes for the visually impaired via video chat, describing the person’s surroundings in great detail.
The prospect of BlindTool waxing poetic about an Impressionist landscape is exciting but still a ways off. As of now, the app “kind of objectively looks at the world,” Cohen says. “It takes things completely out of context and will tell you what an object statistically looks like.” It’s still learning, and has yet to develop the human eye’s ability to pick up on subtle cues like depth or variations in light and color. “BlindTool is a more naive viewer,” adds Cohen. “I liken it to a baby, because it makes mistakes when identifying things.”
BlindTool V2 is fundraising on Kickstarter through February 6.
h/t Fast Company