visually representing ARIA

Jennifer Sutton

Oct 13, 2013, 7:39:46 PM
to free-aria@googlegroups.com
Greetings, Free-ARIA:

I thought I'd throw out an idea to see what folks might think. The
TL;DR version:

Would it be possible to build a tool that visually represents how
screen readers interact with ARIA widgets, and if so, does one
already exist?

Basically, I'm not a fan of making sighted folks learn to use
screen readers, and I think one of the issues sighted devs have
with implementing ARIA is that they cannot see what it's supposed
to do in the native coding environments where they're already
working.

I know of these two items:

Enabling landmark-based keyboard navigation in Firefox (The
Paciello Group Blog):
http://blog.paciellogroup.com/2013/07/enabling-landmark-based-keyboard-navigation-in-firefox/


and this Favlet that Pearson was working on:
http://wps.pearsoned.com/WAI_ARIA_Testing/139/35647/9125753.cw/index.html
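
For the landmark half, I imagine even a simple favelet-style script
could get partway there. Very roughly, something like this (the
roles list and the styling are just guesses on my part, not a real
tool):

    // Rough sketch of a favelet-style landmark highlighter; the
    // roles list and styling are illustrative choices only.
    const LANDMARK_ROLES = [
      "banner", "navigation", "main", "complementary",
      "contentinfo", "search", "form", "region",
    ];
    const selector = LANDMARK_ROLES.map((r) => `[role="${r}"]`).join(", ");

    document.querySelectorAll<HTMLElement>(selector).forEach((el) => {
      const role = el.getAttribute("role") || "";
      const label = el.getAttribute("aria-label");
      el.style.outline = "3px solid red";

      // Pin a small text badge inside the landmark so its role
      // (and label, if any) is visible on screen.
      const badge = document.createElement("span");
      badge.textContent = label ? `${role}: ${label}` : role;
      badge.style.cssText =
        "background:red; color:white; font:12px sans-serif; padding:2px 4px;";
      el.insertBefore(badge, el.firstChild);
    });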


But I'd love to see something more robust that sighted people could
use to test both landmarks and widgets, so that they wouldn't have
to learn to use a screen reader. And I do especially mean testing
widgets. I'm imagining some way to emulate what a screen reader
would do, without sighted people having to learn one: they'd just
use keystrokes, and the screen would display the same result that a
screen reader user would get.
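
To make that concrete, here is the rough shape of what I'm
picturing, sketched as browser code. None of this is a real tool;
the "announce" panel and the name computation are deliberately
crude stand-ins:

    // Very rough sketch: whenever focus moves, display on screen
    // roughly what a screen reader might announce for the newly
    // focused element.
    const panel = document.createElement("div");
    panel.setAttribute("aria-hidden", "true"); // a visual aid only
    panel.style.cssText =
      "position:fixed; bottom:0; left:0; right:0; z-index:9999; " +
      "background:black; color:white; font:16px sans-serif; padding:8px;";
    document.body.appendChild(panel);

    function roughName(el: HTMLElement): string {
      // A real tool would follow the full accessible-name
      // computation (aria-labelledby, labels, content, etc.).
      return el.getAttribute("aria-label") || (el.textContent || "").trim();
    }

    document.addEventListener("focusin", (event) => {
      const el = event.target as HTMLElement;
      const role = el.getAttribute("role") || el.tagName.toLowerCase();
      const expanded = el.getAttribute("aria-expanded");
      const state = expanded === null ? "" : `, expanded: ${expanded}`;
      panel.textContent = `${roughName(el)}, ${role}${state}`;
    });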

Yes, there are now VoiceOver and NVDA (so at least sighted people
have free tools to try). Maybe using them is as good as it's
realistic to hope for. But I think people get too bogged down in
how the TTS sounds, and they get overwhelmed by having to learn
something new.

I haven't looked much at the Chrome dev tools, so I don't know what
they may or may not do in terms of representing ARIA.

Of course, I know of the Firefox extension Fangs, but I don't know
whether it's still being developed.

I understand that maybe this idea isn't possible, since there are
variations in how screen readers work, and I assume that the
underlying accessibility APIs would have to come into play. But it
would at least be great if there were a visual representation of
what the guidelines expect screen readers to do when appropriate
general (rather than screen-reader-specific) keystrokes are used.
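
As one concrete case, the guidelines describe general keystrokes
for each widget type: in a tablist, arrow keys move between tabs no
matter which screen reader is running. So a tool could encode
expectations like this (again, just a sketch, with
selection-follows-focus as one common choice):

    // Sketch of guideline-level keystroke handling for a tablist:
    // Left/Right arrows move focus (and here, selection) between
    // tabs, independent of any particular screen reader.
    document.querySelectorAll<HTMLElement>('[role="tablist"]').forEach((list) => {
      const tabs = Array.from(list.querySelectorAll<HTMLElement>('[role="tab"]'));
      list.addEventListener("keydown", (event) => {
        const i = tabs.indexOf(event.target as HTMLElement);
        if (i === -1) return;
        let next = -1;
        if (event.key === "ArrowRight") next = (i + 1) % tabs.length;
        if (event.key === "ArrowLeft") next = (i - 1 + tabs.length) % tabs.length;
        if (next === -1) return;
        event.preventDefault();
        tabs.forEach((t, j) => t.setAttribute("aria-selected", String(j === next)));
        // A focus-watcher like the one sketched above would then
        // "announce" the newly focused tab on screen.
        tabs[next].focus();
      });
    });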

I find myself doubting that the new speech history coming out in
JFW 15 (or the visual representation already available in NVDA)
would really address what I perceive to be one of the issues behind
the mis-implementation of ARIA. I know Jamie and Mic are totally
swamped, but I wonder if there might be some way to build an NVDA
extension that worked only with the visual representation. But that
would still mean working with NVDA, and I think it would be ideal
if this sort of thing worked as a browser extension with the Chrome
dev tools or Firebug (as examples).

Thanks for reading and for any thoughts that folks may have.

Best,
Jennifer
