
AI Rules for WCAG Accessibility

AI-generated UIs routinely violate all four WCAG principles. Rules for perceivable content, operable controls, understandable interfaces, robust markup, and automated accessibility testing in CI.

8 min read·April 5, 2025

No alt text, 2.5:1 contrast, div buttons, placeholder labels — 15% of users excluded

Perceivable, operable, understandable, robust — WCAG four principles with axe-core CI testing

AI Generates UIs That Exclude 15% of Users

AI generates interfaces with: no alt text on images (screen readers announce "image" with no description), insufficient color contrast (light gray text on white background — unreadable for low-vision users), no keyboard navigation (custom dropdowns and modals unreachable without a mouse), no form labels (inputs with placeholder-only labels that disappear on focus), and div-based interactive elements (clickable divs that screen readers cannot identify as buttons). 15% of the global population has a disability. WCAG compliance is not a nice-to-have — it is a legal requirement in many jurisdictions (ADA, EAA, Section 508).

WCAG 2.2 organizes accessibility into four principles: Perceivable (can users perceive the content? — alt text, contrast, captions, text resize), Operable (can users operate the interface? — keyboard access, timing, no seizure triggers), Understandable (can users understand the interface? — labels, error messages, predictable behavior), and Robust (does it work with assistive technology? — valid HTML, proper ARIA, semantic elements). AI violates all four principles by default.

These rules cover: the four WCAG principles with practical implementation, color contrast requirements, keyboard navigation patterns, form accessibility, semantic HTML, ARIA usage, and automated testing with axe-core in CI.

Rule 1: Perceivable — Content Users Can See, Hear, or Touch

The rule: 'Every non-text element has a text alternative. Images: alt attribute describing the content (alt="Bar chart showing Q1 revenue growth of 15%"), decorative images: alt="" (empty alt — screen readers skip). Videos: captions and transcripts. Audio: transcripts. Color is never the only indicator (red border for errors + error text + error icon). Text is resizable to 200% without loss of content or functionality.'

For color contrast: 'WCAG AA: 4.5:1 contrast ratio for normal text, 3:1 for large text (at least 18pt/24px, or 14pt/~18.5px if bold). WCAG AAA: 7:1 for normal text, 4.5:1 for large text. Tools: Chrome DevTools color picker (shows contrast ratio), axe-core (flags violations automatically), Colour Contrast Analyser (desktop app). Common violations: light gray placeholder text (#999 on #fff = 2.85:1, fails AA), disabled button text (too faint to read), and link text that relies solely on color to distinguish it from body text (add an underline).'
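
The AA/AAA thresholds above are mechanical enough to compute directly. A minimal sketch of the WCAG 2.x contrast formula (assumes 6-digit hex colors; the helper names are illustrative, not from any library):

```typescript
// Relative luminance per the WCAG 2.x definition (sRGB linearization).
function relativeLuminance(hex: string): number {
  const [r, g, b] = [1, 3, 5].map((i) => {
    const c = parseInt(hex.slice(i, i + 2), 16) / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio = (lighter + 0.05) / (darker + 0.05), always >= 1.
function contrastRatio(fg: string, bg: string): number {
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// The light-gray-on-white violation from above: fails the 4.5:1 AA minimum.
console.log(contrastRatio("#999999", "#ffffff").toFixed(2)); // ≈ 2.85
```

This is the same calculation Chrome DevTools and axe-core perform; running it in a design-token test keeps failing pairs like #999 on #fff out of the palette before they reach a component.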

AI generates: <img src="chart.png" /> (no alt), placeholder text at 2.5:1 contrast, and error states indicated by red border only (colorblind users miss it). Three violations in three elements. With: <img src="chart.png" alt="Q1 revenue: $1.2M, up 15%" />, placeholder at 4.5:1+ contrast, and error = red border + error text + warning icon. Same elements, fully perceivable by everyone.

  • Alt text on every image: descriptive for content images, empty for decorative
  • Color contrast: 4.5:1 minimum for normal text (AA), 7:1 for AAA
  • Never color alone: errors = color + text + icon — not just red border
  • Video captions and transcripts: required for deaf and hard-of-hearing users
  • Text resizable to 200%: layout must not break, content must not be hidden

💡 Three Violations in Three Elements

AI generates: <img> with no alt, placeholder at 2.5:1 contrast, error shown by red border only. Three elements, three WCAG violations. Fix: alt text describing content, 4.5:1+ contrast, error = color + text + icon. Same elements, perceivable by everyone.

Rule 2: Operable — Interface Users Can Navigate and Control

The rule: 'Every interactive element is reachable and operable via keyboard. Tab navigates between focusable elements. Enter/Space activates buttons and links. Escape closes modals and popups. Arrow keys navigate within composite widgets (menus, tabs, radio groups). Focus is visible (:focus-visible outline — never outline: none without a replacement). Focus is trapped in modals (Tab does not escape the modal to the background page). Focus returns to the trigger element when a modal closes.'
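
The keyboard contract above can be sketched as two pure helpers (a hedged sketch: the names and the vertical-menu assumption are mine; a real widget would wire these into keydown handlers on the container):

```typescript
type WidgetAction = "activate" | "close" | "next" | "previous" | "none";

// Map a KeyboardEvent.key to a widget action for a vertical menu-style widget.
function keyToAction(key: string): WidgetAction {
  switch (key) {
    case "Enter":
    case " ": // Space activates, matching native <button> behavior
      return "activate";
    case "Escape": // closes the modal or menu
      return "close";
    case "ArrowDown":
      return "next";
    case "ArrowUp":
      return "previous";
    default:
      return "none"; // Tab is left alone so normal browser focus order works
  }
}

// Focus trap arithmetic: Tab from the last focusable element wraps to the
// first; Shift+Tab from the first wraps to the last.
function nextTrappedIndex(current: number, count: number, shiftKey: boolean): number {
  return shiftKey ? (current - 1 + count) % count : (current + 1) % count;
}
```

Keeping the key-to-action mapping pure makes the keyboard contract unit-testable without a DOM; only the thin event-listener layer needs integration tests.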

For timing and motion: 'No time limits on completing actions (or provide a way to extend). No content that flashes more than 3 times per second (seizure risk). Provide a skip-to-content link as the first focusable element (<a href="#main" class="sr-only focus:not-sr-only">Skip to content</a>). Users can pause, stop, or hide auto-playing content (carousels, animations, videos). prefers-reduced-motion: respect it (disable or reduce animations).'
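
One common blanket implementation of the prefers-reduced-motion rule looks like this (a heavy-handed sketch; per-component opt-outs that swap animations for instant transitions are gentler):

```css
/* Respect the user's OS-level "reduce motion" preference */
@media (prefers-reduced-motion: reduce) {
  *,
  *::before,
  *::after {
    animation-duration: 0.01ms !important;
    animation-iteration-count: 1 !important;
    transition-duration: 0.01ms !important;
    scroll-behavior: auto !important;
  }
}
```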

AI generates: <div onClick={handler}>Click me</div> — not focusable (divs are not in the tab order), not activatable by keyboard (no Enter/Space handling), and not announced as interactive (screen reader says "Click me" with no indication it is a button). Replace with: <button onClick={handler}>Click me</button> — focusable, keyboard-activatable, announced as "Click me, button." One element change, three accessibility fixes.

⚠️ One Element Change, Three Fixes

<div onClick>: not focusable, not keyboard-activatable, not announced as interactive. <button onClick>: focusable, Enter/Space activatable, announced as 'button.' Replace div with button: one element change fixes three accessibility violations instantly.

Rule 3: Understandable — Interface Users Can Comprehend

The rule: 'Every form input has a visible label (<label htmlFor="email">Email</label><input id="email" />). Error messages are specific and actionable ("Email format is invalid. Example: user@example.com" not "Invalid input"). Page language is declared (<html lang="en">). Navigation is consistent across pages (same position, same order). Changes of context do not happen unexpectedly (selecting a dropdown option does not submit the form or navigate away without warning).'

For error handling: 'On form validation error: (1) focus moves to the first error field, (2) the error message appears next to the field (not in a list at the top of the page), (3) the error message is associated with the field via aria-describedby, (4) the error message explains what is wrong AND how to fix it ("Password must be at least 8 characters" not just "Invalid password"), (5) the field is marked with aria-invalid="true". Screen readers announce: "Password, invalid entry, password must be at least 8 characters." The user knows: which field, what is wrong, and how to fix it.'
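
Steps (3) and (5) above, the programmatic association, can be derived from the error state by one small helper (a sketch; the "fieldId-error" id convention is an assumption for illustration):

```typescript
// ARIA attributes for a form field, derived from its validation state.
interface FieldAria {
  "aria-invalid"?: "true";
  "aria-describedby"?: string;
}

function errorAria(fieldId: string, error: string | null): FieldAria {
  if (!error) return {}; // valid field: no ARIA error attributes at all
  return {
    "aria-invalid": "true",
    // Points at the visible error element, e.g. <p id="password-error">...</p>,
    // so screen readers announce the message along with the field.
    "aria-describedby": `${fieldId}-error`,
  };
}
```

The visible error element rendered next to the input would carry the matching id, so the label, the error text, and the screen reader announcement all stay in sync from one piece of state.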

AI generates: <input placeholder="Email" /> — the placeholder is the only label. On focus: the placeholder disappears, the user forgets what the field is for. Screen readers may not announce placeholders as labels. Error: "Invalid" with no explanation. With proper label + error: the label is always visible, the error is specific, and the association is programmatic. The form is understandable by everyone, not just users who can see and remember the placeholder.

  • Visible <label> for every input — placeholder is not a label (disappears on focus)
  • Specific error messages: what is wrong + how to fix it — not just 'Invalid'
  • aria-describedby links error message to field — screen readers announce both
  • Focus moves to first error on submit — user does not have to search for the error
  • Consistent navigation: same position and order across all pages

Rule 4: Robust — Works with Assistive Technology

The rule: 'Use semantic HTML elements: <button> for actions (not <div onClick>), <a href> for navigation (not <span onClick>), <nav> for navigation regions, <main> for primary content, <header>/<footer> for page structure, <h1>-<h6> for heading hierarchy (no skipped levels), and <ul>/<ol> for lists. Semantic elements communicate purpose to assistive technology automatically. A <button> is announced as "button" by screen readers. A <div> with onClick is announced as nothing — the user does not know it is interactive.'
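
The "no skipped levels" heading rule above is easy to check mechanically. A minimal sketch (function name is mine; axe-core performs an equivalent check on the rendered DOM):

```typescript
// Given heading levels in document order (e.g. [1, 2, 3, 2]), return the
// indices of headings that skip a level, such as an h3 directly after an h1.
function skippedHeadingLevels(levels: number[]): number[] {
  const skips: number[] = [];
  for (let i = 1; i < levels.length; i++) {
    // Going deeper by more than one level at a time is a hierarchy violation;
    // jumping back up (h3 -> h2) is always allowed.
    if (levels[i] > levels[i - 1] + 1) skips.push(i);
  }
  return skips;
}
```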

For ARIA usage: 'ARIA (Accessible Rich Internet Applications) fills gaps that semantic HTML cannot cover: role="dialog" for custom modals, aria-expanded for collapsible sections, aria-live="polite" for dynamic content updates, aria-label for elements with no visible text, and aria-hidden="true" for decorative elements that should be skipped. The first rule of ARIA: do not use ARIA if a native HTML element provides the same semantics (<button> instead of <div role="button">). ARIA supplements HTML — it does not replace it.'

AI generates: <div role="button" tabIndex={0} onClick={handler} onKeyDown={handleKeyboard}>Action</div> — four attributes (role, tabIndex, onClick, onKeyDown) to replicate what <button onClick={handler}>Action</button> provides natively. The div version must manually handle Enter and Space key events, manually manage focus styling, and may still miss edge cases (form submission, disabled state, type attribute). The button version gets all of this from the browser. Use the native element first; reach for ARIA only when no native element exists.

Rule 5: Automated Accessibility Testing in CI

The rule: 'Run axe-core accessibility checks in CI on every pull request. axe-core detects: missing alt text, insufficient contrast, missing form labels, invalid ARIA, duplicate IDs, and heading hierarchy violations. Integration: jest-axe for unit tests (const results = await axe(container); expect(results).toHaveNoViolations()), @axe-core/playwright for E2E tests (check rendered pages), and eslint-plugin-jsx-a11y for static analysis (catch violations at lint time, before tests). Automated testing catches 30-40% of accessibility issues — the rest requires manual testing with screen readers.'
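
The static-analysis layer might be wired up like this (a sketch assuming eslint-plugin-jsx-a11y is installed; the specific rule overrides are illustrative, the recommended preset already enables both):

```json
{
  "plugins": ["jsx-a11y"],
  "extends": ["plugin:jsx-a11y/recommended"],
  "rules": {
    "jsx-a11y/alt-text": "error",
    "jsx-a11y/label-has-associated-control": "error"
  }
}
```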

For the testing pyramid: 'Static analysis (eslint-plugin-jsx-a11y): catches missing alt text, missing labels, invalid ARIA — fastest, runs on every save. Unit tests (jest-axe): renders components, runs axe on the DOM — catches dynamic violations. E2E tests (@axe-core/playwright): runs axe on full rendered pages — catches layout and interaction issues. Manual testing (NVDA, VoiceOver): tests actual screen reader experience — catches flow, context, and announcement issues that automated tools miss. All four levels are needed.'
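
The three automated layers of the pyramid might run on every pull request like this (a GitHub Actions sketch; the job name and npm script names are assumptions about the project's setup):

```yaml
name: accessibility
on: pull_request
jobs:
  a11y:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run lint      # eslint-plugin-jsx-a11y: static analysis
      - run: npm test          # jest-axe: component-level axe checks
      - run: npm run test:e2e  # @axe-core/playwright: full rendered pages
```

Any violation at any layer fails the build before merge; only the manual screen reader layer stays outside CI.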

AI generates: no accessibility testing. Violations are discovered by: users who report issues (reactive, not proactive), legal complaints (expensive, reputation-damaging), or annual audits (issues accumulate between audits). axe-core in CI: every PR is checked automatically. Violations fail the build before merge. The baseline accessibility level is maintained with zero manual review effort. The 30-40% of issues caught automatically never reach production.

ℹ️ 30-40% of Issues Caught Automatically

axe-core in CI: every PR checked, violations fail the build before merge. eslint-plugin-jsx-a11y: catches missing alt/labels at lint time. Together they catch 30-40% of issues automatically. The rest requires manual screen reader testing — quarterly with NVDA and VoiceOver.

Complete WCAG Accessibility Rules Template

Consolidated rules for WCAG accessibility.

  • Perceivable: alt text on images, 4.5:1 contrast, captions, never color-only indicators
  • Operable: keyboard navigation, visible focus, focus trap in modals, skip-to-content link
  • Understandable: visible labels, specific error messages with fix instructions, consistent navigation
  • Robust: semantic HTML (button not div), ARIA only when no native element exists
  • axe-core in CI: fail build on violations — catches 30-40% automatically
  • eslint-plugin-jsx-a11y: static analysis on every save — fastest feedback loop
  • Manual screen reader testing: NVDA (Windows), VoiceOver (Mac) — quarterly
  • 15% of users have disabilities: WCAG is a legal requirement, not a nice-to-have