Automated Scanners Do Not Work

Automated scanners miss about 70% of ADA problems.

why automated scanners miss 70% of lawsuits

In 2023, UsableNet, an accessibility technology company that tracks digital accessibility litigation, counted 4,605 federal website accessibility lawsuits in the United States. That works out to about twelve new filings every day. Its researchers read complaints from dozens of cases. The pattern kept repeating: plaintiffs described problems that automated scanners never flagged.

Missing form labels. Broken keyboard focus. Checkout pages that trap a screen reader. Videos with captions that show but don’t sync.

Those issues don’t come from fancy code. They come from normal websites built with normal tools.


what automated scanners actually check

Automated scanners from vendors like Deque Systems (maker of axe) and WebAIM (maker of WAVE) look for patterns in HTML.

Missing alt text. Low color contrast. Empty buttons. Duplicate IDs.

They scan the DOM. They don’t behave like a user.
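Those pattern checks are simple enough to sketch. The toy scanner below is a minimal illustration, not how axe or WAVE are actually implemented; it flags two of the patterns listed above using nothing but Python's standard-library HTML parser:

```python
from html.parser import HTMLParser

class StaticScanner(HTMLParser):
    """Toy scanner: flags two string-level patterns real tools also catch."""
    def __init__(self):
        super().__init__()
        self.issues = []
        self._in_button = False
        self._button_has_text = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.issues.append("img missing alt attribute")
        if tag == "button":
            self._in_button = True
            self._button_has_text = False

    def handle_data(self, data):
        if self._in_button and data.strip():
            self._button_has_text = True

    def handle_endtag(self, tag):
        if tag == "button":
            self._in_button = False
            if not self._button_has_text:
                self.issues.append("button with no accessible text")

scanner = StaticScanner()
scanner.feed('<img src="logo.png"><button></button><button>Buy</button>')
print(scanner.issues)
```

Everything it finds is a string-level pattern in the markup. Nothing in this approach can observe tab order, focus movement, or what a screen reader actually announces.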

The math is simple. WCAG 2.1 has 78 success criteria. Automated tools reliably test maybe 20–30% of them. The rest need a human.

Keyboard order. Context changes. Error handling. Focus traps. Cognitive issues. Language clarity.

That’s the gap.

axe DevTools and WAVE are honest about this. Their own documentation says automated testing finds only a fraction of the problems. Still, companies run a scan, see a score of 92, and call it done.

Then a complaint shows up.

how lawsuits are actually written

Most accessibility lawsuits follow a template. A blind or low-vision plaintiff visits a site using a screen reader like NVDA or JAWS. They try to buy something. They can’t.

The complaint lists exact pages and elements.

“On January 14, 2024, the plaintiff attempted to add an item to the cart on www.example.com. The Add to Cart button lacked a programmatic label.”

That line doesn’t come from a scanner. It comes from a human session.

Take Robles v. Domino’s Pizza. The plaintiff couldn’t order food using screen-reader software. Domino’s argued the ADA didn’t apply to websites. The Ninth Circuit disagreed, and the U.S. Supreme Court declined review in 2019. The case didn’t hinge on missing alt text. It hinged on a real user failing to order dinner.

Another example is Gil v. Winn-Dixie Stores. In 2017, a federal trial court found the grocery chain’s site wasn’t accessible to screen-reader users. The problem was navigation and forms, not color contrast.

Those details rarely appear in automated scan reports.

scanners don’t click buttons

A scanner loads HTML. A user presses Tab.

That difference matters.

Consider a dropdown menu built with JavaScript. It opens on hover. Keyboard users can’t open it. A scanner sees valid HTML. No errors.

A blind user presses Tab, lands on nothing, and stops.

That single issue can block the whole site.
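To see why a scanner stays silent, consider a hypothetical hover-only menu. The markup below is invented for illustration, but the shape is common: valid HTML, a submenu revealed purely by a CSS `:hover` rule, and nothing for a static pattern check to flag.

```python
from html.parser import HTMLParser

# Hypothetical hover-only dropdown. Every element is valid HTML,
# so a static pattern check reports nothing to fix.
dropdown_html = """
<nav>
  <div class="menu">Products
    <ul class="submenu"> <!-- shown only via a CSS :hover rule -->
      <li><a href="/shoes">Shoes</a></li>
      <li><a href="/hats">Hats</a></li>
    </ul>
  </div>
</nav>
"""

def static_issues(html):
    """Static check in the spirit of automated scanners: flags
    images without alt text and anchors without an href."""
    found = []
    class P(HTMLParser):
        def handle_starttag(self, tag, attrs):
            a = dict(attrs)
            if tag == "img" and "alt" not in a:
                found.append("img missing alt")
            if tag == "a" and "href" not in a:
                found.append("link without href")
    P().feed(html)
    return found

print(static_issues(dropdown_html))  # empty list: no issues reported
# Yet Tab never opens the submenu: the <div> is not focusable, and
# the submenu only ever appears on mouse hover.
```

The failure lives in behavior (CSS and missing keyboard handling), which no amount of markup inspection can reveal.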

In 2022, a small online shoe store in Texas settled a lawsuit for $18,000 plus legal fees. The problem list included:

  • No skip-to-content link
  • Keyboard trap in size selector
  • Error messages that only appeared visually

Their Lighthouse score was 96. Their scanner report showed three minor issues.

They paid anyway.

dynamic content fools scanners

Single-page apps built with React or Vue load content after the page renders. Automated tools often scan the initial state.

Error modals, hidden fields, and conditional forms never get checked.

A checkout page that only shows shipping options after entering a ZIP code might hide accessibility failures until step three. Plaintiffs document those steps in complaints. Automated tools don’t.

This is common on Shopify themes. The add-to-cart popup appears after clicking a button. Focus jumps to the background page. Screen readers lose context.

A scanner sees valid HTML. A user hears silence.
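A rough simulation of that gap, using invented markup: scan the page as it first loads, then scan it as it looks after the popup appears. (The label check below is a crude proxy; real tools also match `<label for>` associations and other ARIA attributes.)

```python
from html.parser import HTMLParser

# Hypothetical add-to-cart flow: the popup only exists in the DOM
# after the button is clicked, so a scan of the initial page never
# sees the unlabeled quantity field inside it.
initial_html = '<button id="add">Add to cart</button>'

after_click_html = initial_html + """
<div class="popup">
  <input type="number" name="qty">  <!-- no label, no aria-label -->
  <button>Checkout</button>
</div>
"""

def unlabeled_inputs(html):
    """Crude proxy: an input with no aria-label and no id (so no
    <label for> can point at it) counts as unlabeled."""
    hits = []
    class P(HTMLParser):
        def handle_starttag(self, tag, attrs):
            a = dict(attrs)
            if tag == "input" and not (a.get("aria-label") or a.get("id")):
                hits.append(a.get("name", "?"))
    P().feed(html)
    return hits

print(unlabeled_inputs(initial_html))      # what the scanner sees
print(unlabeled_inputs(after_click_html))  # what the user actually hits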

scanners can’t judge meaning

WCAG has rules about context.

Link text must make sense. Instructions must be clear. Errors must explain how to fix a problem.

Automated tools can’t judge language.

A button labeled “Go” passes automated tests. It fails a blind user trying to renew a driver’s license.

A real example came from a 2021 complaint filed in New York. The site’s login form said “Invalid entry” without identifying which field had failed. The plaintiff had to try six times.

No scanner flags that.
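The underlying limitation is easy to state in code. A tool can verify that some accessible name exists; it cannot verify that the name helps anyone. A sketch:

```python
def has_accessible_name(attrs, inner_text):
    """What a tool can verify: some non-empty name exists.
    It cannot verify that the name means anything to the user."""
    a = dict(attrs)
    return bool(a.get("aria-label") or inner_text.strip())

# Both pass the automated check. Only one tells a blind user
# what pressing the button will actually do.
print(has_accessible_name([], "Go"))                      # True
print(has_accessible_name([], "Renew driver's license"))  # True
print(has_accessible_name([], ""))                        # False: the only case a tool catches
```

"Go" and "Renew driver's license" are indistinguishable to the check. Judging which one is usable takes a human.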

automated testing ignores video and audio details

Scanners check if captions exist. They don’t check if captions match the audio.

A caption file can be present and useless. Words out of sync. Speaker names missing. Music cues ignored.

In 2020, Harvard and MIT settled a lawsuit over captioning accuracy on online lectures. The issue wasn’t missing captions. It was incorrect captions.

Automated tools would mark those videos as compliant.

color contrast isn’t enough

Color contrast failures are easy to detect. Tools highlight them in red.

But lawsuits rarely focus on color alone.

Low-contrast text usually appears alongside other problems: hidden labels, tiny hit targets, broken forms.

A company might fix contrast and still face claims because navigation is unreadable by screen readers.

Contrast is visible. Lawsuits come from interaction.

scanners don’t test real workflows

A plaintiff tries to:

  • find a store location
  • book a hotel room
  • submit a job application
  • refill a prescription

Those are workflows.

Automated tools don’t walk through workflows. They test pages one at a time.

A hospital site might pass every page test but fail when the patient portal logs out after 30 seconds without warning. That violates WCAG timing rules.

The scan says fine. The user can’t refill medication.

the legal standard is human access

The ADA doesn’t mention WCAG directly. Courts use WCAG as a guideline, but the legal question is access.

Can a blind person buy a ticket? Can a deaf person watch the training video? Can a motor-impaired user fill out a form?

The Department of Justice has said repeatedly that businesses must provide equal access. They haven’t said scanners are enough.

Settlements often require manual audits every year.

That tells you what courts think about automation.

an anecdote from a real audit

In April 2024, I audited a regional bank’s site in Ohio. Their marketing team showed me a report from a scanner. Two pages failed contrast. Everything else passed.

I opened NVDA and tried to apply for a loan.

The income field had no label. The error message said “Invalid input.” The form reset after pressing submit.

The whole process took fifteen minutes and ended with nothing.

The bank fixed it in two days. They’d run automated scans for a year and never saw the problem.

why companies rely on scanners anyway

They’re cheap. axe DevTools is free in the browser. WAVE is free. Lighthouse is built into Chrome.

Manual testing costs money. A full audit of a 500-page ecommerce site can cost $8,000 to $25,000 depending on scope.

That price looks high until a lawsuit arrives.

Typical settlements in accessibility cases range from $5,000 to $50,000 plus attorney fees. Some are higher. A 2022 settlement involving a national retailer topped $100,000.

The math pushes companies toward shortcuts.

scanners don’t test assistive tech combinations

Users don’t all run the same setup.

NVDA with Firefox behaves differently from JAWS with Chrome. VoiceOver on iOS handles modals differently from desktop readers.

Automated tools don’t simulate these combinations.

A menu might work in Chrome but break in Safari. A scanner running in Chrome says fine. An iPhone user gets stuck.

Complaints often mention device and browser details. That’s evidence.

false positives and wasted time

Automated tools flag issues that don’t matter legally.

A decorative icon missing alt text. A redundant ARIA label. A color contrast issue on a hidden element.

Teams spend hours fixing minor flags while missing real barriers.

In 2023, a clothing retailer spent three weeks fixing 1,200 contrast warnings. None were on checkout pages. Their cart still trapped keyboard users. They settled a lawsuit two months later.
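The decorative-icon case shows how cheap a flag can be. Under WCAG, decorative images should carry an explicitly empty `alt=""` so screen readers skip them, yet a naive "has alt?" rule flags a missing attribute the same way whether the image carries meaning or not. A toy triage:

```python
def alt_verdict(attrs):
    """Naive alt-text rule: flags any img without an alt attribute,
    with no way to tell decoration from meaningful content."""
    a = dict(attrs)
    if "alt" not in a:
        return "flagged: missing alt"
    if a["alt"] == "":
        return "ok: explicitly decorative"
    return "ok: described"

# A purely decorative divider gets the same flag as a chart a blind
# user actually needs described. The fix for the divider is one
# attribute (alt=""); the real work is elsewhere.
print(alt_verdict([("src", "divider.png")]))
print(alt_verdict([("src", "divider.png"), ("alt", "")]))
print(alt_verdict([("src", "chart.png"), ("alt", "Q3 sales up 12%")]))
```

Hundreds of flags like the first one can bury the handful of barriers that actually end up in a complaint.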

the limits of overlays and widgets

Accessibility overlays promise instant compliance. Install a script. Get a badge. Avoid lawsuits.

They don’t work.

Overlay companies have been named in lawsuits because their scripts interfered with screen readers. They injected ARIA attributes incorrectly. They hid real content.

The overlay couldn’t fix missing labels or broken focus.

Some settlements require removing overlays.

why the “70%” number exists

Accessibility experts often say automated tools find about 30% of issues. The rest need manual review.

That number comes from practical testing.

Deque published data showing automated testing caught about 57% of issues in one internal study, but only when paired with heavy customization. Real-world teams without customization catch less.

UsableNet’s lawsuit reviews show most complaints describe issues automated tools don’t detect.

So “70%” is a rough but honest estimate.

common lawsuit issues scanners miss

  • Missing form instructions
  • Focus jumps after a modal closes
  • No warning before a timeout
  • PDF documents unreadable by screen readers
  • CAPTCHAs without an audio option
  • Keyboard-inaccessible sliders
  • Autocomplete fields that trap screen readers
  • Carousels that auto-rotate

These problems appear again and again.

accessibility requires user testing

Manual testing means using the site.

Turn off the mouse. Use Tab. Use NVDA. Zoom to 200%. Switch to high-contrast mode.

Watch where it breaks.

Real audits record video. They log keystrokes. They capture screen-reader output. Those records appear in court filings.

Automated reports don’t.

limitations of manual testing

Manual testing isn’t perfect. Auditors miss things. Different testers find different issues. Large sites can’t be fully tested without sampling.

It’s slower. It costs more. It depends on skill.

A bad manual audit can miss problems a scanner would catch. That happens.

Still, lawsuits come from user experience, not HTML structure.

accessibility debt grows quietly

A site launches clean. Six months later marketing adds a popup. Product adds a carousel. HR uploads PDFs.

Accessibility breaks.

Automated scans might still pass because they only test templates. Real users hit edge cases.

That’s how small sites end up with long complaint lists.

compliance isn’t a checkbox

Courts don’t ask for a score. They look at barriers.

A site with 98% compliance can still be inaccessible if the checkout fails. A site with many minor issues might avoid lawsuits if core workflows work.

Automated tools push teams toward numbers. Lawsuits focus on experience.

practical numbers from real cases

  • Average settlement reported by Seyfarth Shaw in 2022 ADA web cases: about $25,000 excluding fees.
  • UsableNet tracked over 8,000 combined federal and state filings in 2023.
  • About 80% of lawsuits targeted ecommerce, food service, and retail.

Most complaints list issues scanners don’t catch.

accessibility work that actually reduces risk

  • Label every form field
  • Test keyboard order
  • Check focus after popups
  • Write clear error messages
  • Caption videos accurately
  • Test checkout with screen readers

Those steps cost time. They don’t show up as a single score.

They stop lawsuits.

why this keeps happening

Websites are built fast. Designers work visually. Developers rely on frameworks. QA teams don’t test with assistive tech.

Automated scanners fit into CI pipelines. Manual testing doesn’t.

So teams trust the tool.

The lawsuits keep coming.
