Revolutionizing web accessibility with EveryAlt and GPT-4 Vision
Nine months ago, it was a pipe dream. Today, it’s real.
Back in February, the world was abuzz with realistic images generated by Midjourney and the C-3PO-esque responses of ChatGPT. I wanted to see if we could use AI in the opposite way –
To accurately describe images, rather than create them.
If you’ve ever built a website, you know why this is a big deal. U.S. law (most notably the Americans with Disabilities Act) now requires many companies to make their sites fully accessible to people with disabilities, which means describing almost every image on a site with “alternative text” baked into the page’s code. That text is what lets people using screen readers (or people on slow internet connections) understand the meaning of an image they can’t see.
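For the uninitiated, that alternative text lives in the “alt” attribute of an HTML image tag, something like <img src="team.jpg" alt="Five coworkers laughing around a conference table"> (the filename and description here are purely illustrative).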
With this idea in hand, my team and I raced to build EveryAlt, bringing it from concept to completion in less than three weeks.
We put it out to the world, to lots of rave reviews — and, of course, a heap of AI skepticism.
EveryAlt was pretty good, but there was a problem — our results were only about 80% accurate. This meant that it could save the right users a lot of time, but still wasn’t ready for use by everyone.
Our AI-skeptic friends enjoyed reminding us of this — and even predicted that certain things would “never” be possible or would take “years.”
Fast forward nine months, and all those problems are solved.
We just switched from our old models (a mix of Azure Computer Vision and GPT-3.5) to the new GPT-4 Turbo Vision model — and we’ve yet to find an image that EveryAlt can’t accurately describe!
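To give a flavor of what this looks like under the hood, here’s a minimal sketch of an alt-text request to the GPT-4 Turbo Vision model using OpenAI’s Python SDK. The prompt, image URL, and token limit are illustrative, not EveryAlt’s production values:

```python
# Minimal sketch: asking a GPT-4 vision model to write alt text for an image.
# Assumes the OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # GPT-4 Turbo with vision
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": (
                        "Write one sentence of concise, accurate alt text "
                        "for this image, suitable for a screen reader."
                    ),
                },
                {
                    # Illustrative URL; the API also accepts base64 data URLs.
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
    max_tokens=120,  # vision models default to a low limit, so set one explicitly
)

print(response.choices[0].message.content)
```

A production service would layer prompt tuning, validation, and error handling on top of a single call like this.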
Here are some examples — with a comparison of EveryAlt 1 vs. EveryAlt 2 below each photo.
V1 — The AI model described this image of a Bible based on the events recounted in the text on the page (it read the words, but missed the visual context): “This image depicts Jesus performing miracles, such as walking on water and healing the sick, while being questioned by the Pharisees about why their disciples are not following their traditions.”
V2 — Perfection: “A person is holding an open book with text, likely a Bible, adorned with a small branch and a pink rose. The background is blurred.”
V1 — “A person hands a piece of food to another person.” Not bad, but it’s a bag of food, not a piece of food, and a human would definitely add more detail and context if they were writing this themselves.
V2 — Accurate and detailed: “A street food vendor is handing over a Grab Food delivery bag to a courier in green uniform, both smiling, with food displayed in front.”
V1 — Weird: “A person is holding their thumb up, revealing the veins in their wrist and the nail on their finger.”
V2 — Amazing: “The image displays two hands forming a heart shape with fingers against a blurred background, symbolizing love, affection, or friendship.”
Better than human?
Alright, it’s great that EveryAlt V2 is better than V1. But where it gets really wild is when EveryAlt identifies things that I wouldn’t even have been able to identify myself.
In this case, EveryAlt benefits from a whole world of knowledge — which means it can accurately identify elements of an image even if the human author doesn’t know what they are.
For example — what are these? I did not know until I put this into EveryAlt.
EveryAlt V2 says: “A glass jar is tipped over with green hop cones spilling out onto a wooden surface, commonly used in brewing beer for flavor and aroma.”
They’re hops! I’m not a big craft beer guy, so I just learned something new from AI.
Let’s try again — do you recognize this building?
EveryAlt does! “This image shows the Colorado State Capitol Building under a cloudy sky. It features a gold dome, grand stairway, and individuals near the entrance.”
Notice that there’s nothing in particular in this image to identify it as being in Colorado — the AI is just that good. It’s not even based on the filename — it’s pure image recognition.
One last thing — we believe EveryAlt and AI image recognition are going to change the world in a positive, extraordinary way. And to make sure we’re always moving in that direction, we’re donating 10% of all revenue to Helen Keller International, one of the world’s top-rated charities, to help in their mission of saving sight around the world.
We’ve launched EveryAlt V2 to the public — so you can try it now with a free account. If you find it helps you accomplish your web accessibility goals, we’d love to hear your story.