Only a few short years ago, if you suffered from a severe visual impairment or were totally blind, all you had to rely on was a hodgepodge of analog and “dumb” digital tools cobbled together to live your life.
Today, a camera no bigger than a thumb drive can be mounted on the side of any pair of glasses and guide its user through the world around them, all thanks to AI. While AI has appeared in some digital accessibility tools before, such as extracting text from photos, the OrCam MyEye that was demoed for me and the AI-driven updates to the Be My Eyes app change everything for the blind and visually impaired.
I suffer from a degenerative eye disorder that will one day, hopefully in the very distant future, fully take my sight. This used to feel like a death sentence, but with technology advancing this quickly, and this far ahead of when I’ll need it, I worry far less.
The MyEye handles problems I’ve feared since I first understood exactly what I was dealing with: the fact that there’s no tactile way to distinguish paper US currency, for example, or how to plan an outfit while alone. This device takes the guesswork out of daily life. Most importantly, it gives a person with a visual disability enormous independence.
Able to identify text simply by your pointing at it, people by their faces, currency by sight, clothing by touch, and products by their barcodes or appearance, the MyEye feels like an artificial eye that can walk you through the world using only its voice. The main shortcoming of a device that packs a camera, processor, battery, speaker and touch controls into a package the size of a Bic lighter? Its $4,000 price tag.
That’s where Be My Eyes steps in to fill the gap until standalone hardware becomes more cost-effective. At CSUN 2023, an accessibility technology conference, Be My Eyes announced Virtual Volunteer, a service powered by GPT-4 that can do everything from identifying food in your fridge to guiding you by describing landmarks in your photo.
YouTuber Lucy Edwards did a series of ads for Virtual Volunteer, showing off its capabilities in her everyday life. The AI was able to stand in for a sighted guide and give her back so much of her autonomy through a simple photo and a one-line question. The best part? Once it’s out of beta, all you’ll need is the cellphone you already have. The barrier to entry is minimal for such a massive quality-of-life improvement.
I’m used to seeing accessibility tech advance painfully slowly. But with such wide-ranging applications, AI has made leaps in months where earlier advancements, like decent screen readers, took years. I can’t wait to see what comes next, and I dream of the day we can feed these tools video, or even a live stream, to get information even faster.
While I hopefully won’t need to rely on these tools for several more years, I’m awestruck by the progress being made now. I can’t even begin to imagine what will be available, and easily accessible, when the time comes that I need these aids. I’m in no rush to need them, but because of them, I no longer fear the inevitable.