Victor et al:
I'll try to describe a bit more of what I mean
below, but I'll preface it by saying that what
I imagine is (or isn't) happening visually may
not actually be the case, so I'll be glad to
learn. I don't work with sighted devs on
a daily basis, so I'm not always able to sit
side-by-side with them to discuss how my screen
reader experience of a widget may (or may not)
differ from what they experience when testing
with a browser and the keyboard, alone.
At 06:43 PM 10/13/2013, you wrote:
>Hi Jennifer.
>To add to the list of favlets, bookmarklets,
>extensions and such, there is also a SkipTo Menu
>from my accessibility team at PayPal:
>http://paypal.github.io/SkipTo/.
>This enables keyboard navigation to the ARIA
>landmarks found on the page. We did not include
>all the landmarks on purpose because we wanted
>to make the menu as practical as possible.
JS: Thanks for this reminder. I haven't had a
chance to look closely at this, but I knew of it.
>To your question, however… What do you mean by
>“visual representation”? Do you mean the speech
>output, keyboard shortcuts for landmarks or something more extensive?
JS: It seems to me that landmarks are fairly well
covered by some of what already exists, so I mean
widgets. If we take tabs, as an example, when a
sighted person selects a tab (presumably with the
keyboard but without a screen reader), I guess,
if it's working right, there's a visual
indication of the selection. But I want sighted
folks to *see* what I hear, so I want them to see
the words, in some special color or highlight,
that say "tab selected." I want sighted people to
be able to diagnose ARIA issues quickly by just
glancing with their eyes, rather than having to
fiddle around with a screen reader. I want them
to "get" how ARIA works (and when it's not
implemented correctly) by being able to read, see
highlights, little boxes, colors, or whatever
else will make it very clear to them what they need to do.
For example, I want them to see exactly where
content comes into the page (or goes away) based
on whether or not a tab is selected, and I want
them to see it at the *point* in the page where a screen reader would read it.
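To sketch roughly what I mean (this is only an illustration of my own; the selectors and wording are made-up assumptions, not an existing tool): a small bookmarklet-style script could build a stylesheet that paints the screen-reader wording right onto the widgets, so "tab selected" appears visually where I would hear it.

```javascript
// Rough sketch of a bookmarklet that overlays, as visible text, some of
// what a screen reader announces for ARIA widgets. The selector/wording
// pairs below are illustrative only.
const ARIA_LABELS = [
  ['[role="tab"][aria-selected="true"]', 'tab selected'],
  ['[role="tab"]:not([aria-selected="true"])', 'tab'],
  ['[role="tabpanel"]', 'tab panel'],
  ['[aria-expanded="true"]', 'expanded'],
  ['[aria-expanded="false"]', 'collapsed'],
];

// Build one CSS rule per pair: append the announcement text, highlighted,
// immediately after the matching element.
function buildOverlayCss(rules) {
  return rules
    .map(([selector, words]) =>
      `${selector}::after { content: " [${words}]"; background: yellow; color: black; }`)
    .join('\n');
}

// In a page, the bookmarklet would inject the stylesheet like this, and
// removing the <style> element is the on/off switch:
//   const style = document.createElement('style');
//   style.textContent = buildOverlayCss(ARIA_LABELS);
//   document.head.appendChild(style);
```

Something along these lines would make a broken widget jump out: a tab that never shows "[tab selected]" when activated is a tab whose aria-selected state isn't being set.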
Maybe it's possible to see these things already
with VO (I'm not sure how iOS and the Mac may or
may not differ) or even NVDA. But I thought that,
for example, the NVDA speech viewer might show
the words for the landmarks, but not for the other widgets.
To put it as simply as I can, I want a "what I
hear is what you see" type of interface. And yes,
by all means, stretch what I am saying: show
labels and anything/everything else. I find
myself thinking of the old IBM Home Page Reader,
but no TTS would be required at all. Showing the
screen reader experience in combination with a
testing tool (i.e., a choice to turn it on/off) would be great.
I am envisioning something that would serve as an
error-checking and a teaching tool, at the same
time. I'm focusing on ARIA, though, because it
seems to be difficult to learn/conceptualize.
Many other accessibility issues can be chalked up
to not implementing best practices in coding, generally.
Yes, people should read all of the great ARIA
examples and guidance, but even if they have,
they need visual quick checks, or so it seems to me.
Basically, I'd like to move away from the need
for sighted people to have to learn how to
pretend to be blind in order to understand how ARIA should work.
I hope this helps clarify what I'm envisioning.
Best,
Jennifer
Victor continued: