Re: [free-aria] visually representing ARIA

Victor Tsaran

unread,
Oct 13, 2013, 9:43:02 PM
to free...@googlegroups.com
Hi Jennifer.
To add to the list of favlets, bookmarklets, extensions and such, there is also a SkipTo Menu from my accessibility team at PayPal: http://paypal.github.io/SkipTo/. This enables keyboard navigation to the ARIA landmarks found on the page. We did not include all the landmarks on purpose because we wanted to make the menu as practical as possible.
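To make the landmark idea concrete, here is a toy sketch in JavaScript. It is not PayPal's actual SkipTo code; the plain objects standing in for DOM elements and all function names are illustrative only.

```javascript
// Toy sketch (not PayPal's actual SkipTo implementation) of how a
// SkipTo-style menu might gather landmarks. Plain objects stand in for
// DOM elements; all names here are illustrative.
const IMPLICIT_LANDMARKS = {
  main: "main",
  nav: "navigation",
  aside: "complementary",
  header: "banner",
  footer: "contentinfo",
};

// An explicit role attribute wins; otherwise fall back to the implicit role.
function landmarkRole(el) {
  return el.role || IMPLICIT_LANDMARKS[el.tag] || null;
}

// Build menu entries only for the roles the menu chooses to expose,
// mirroring the decision to omit some landmarks for practicality.
function buildSkipMenu(elements, includedRoles) {
  return elements
    .map((el) => ({ role: landmarkRole(el), label: el.label }))
    .filter((e) => e.role && includedRoles.includes(e.role))
    .map((e) => (e.label ? `${e.label} (${e.role})` : e.role));
}
```

For example, given a labeled `<nav>`, a `<main>`, and a `role="search"` div, with only navigation and main exposed, this would build the two entries "Site (navigation)" and "main".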

To your question, however… What do you mean by “visual representation”? Do you mean the speech output, keyboard shortcuts for landmarks or something more extensive?

I completely agree that one of the reasons sighted developers forget even the simplest accessibility rules is that a lot of it applies only to non-visual interfaces: think alt text, ARIA landmarks, slider values, labels of various kinds, etc. OK, I am stretching the point a bit too far, but the gist remains the same.

So, what kind of visual representation do you have in mind?

Best,
Victor



On Oct 13, 2013, at 4:39 PM, Jennifer Sutton <jsut...@gmail.com> wrote:

Greetings, Free-ARIA:

I thought I'd throw out an idea to see what folks might think. The TL;DR version:

Would it be possible to make a tool to visually represent ARIA-widget interactions of screen readers, and if so, does one exist?

Basically, I'm not a fan of sighted folks having to learn to use screen readers, and I think one of the issues that sighted devs have, in terms of how to implement ARIA, is that they cannot see what it's supposed to do, in the native coding environments where they're already working.

I know of these two items:

Enabling landmark-based keyboard navigation in Firefox – The Paciello Group Blog
http://blog.paciellogroup.com/2013/07/enabling-landmark-based-keyboard-navigation-in-firefox/


and this Favlet that Pearson was working on:
http://wps.pearsoned.com/WAI_ARIA_Testing/139/35647/9125753.cw/index.html


But I'd love to see something more robust that sighted people could use to test both landmarks and widgets, so that they wouldn't have to learn to use a screen reader. And I do especially mean widgets. I'm imagining some way to emulate what a screen reader would do, but without the sighted person having to learn how to use one. They'd just use keystrokes, and the screen would display the same result that a screen reader user would get.

Yes, there are now VoiceOver and NVDA (so at least sighted people have free tools to try). Maybe using them is as good as it's realistic to believe is possible. But I think people get too bogged down in how TTS sounds, and they get overwhelmed by having to learn something new.

I haven't looked much at the Chrome dev tools, so I don't know what they may, or may not, do, in terms of representing ARIA.

Of course, I know of the Firefox extension, Fangs, but I don't know that it's being further developed.

I understand that maybe this idea isn't possible since there are variations in how screen readers work, and I assume that the underlying APIs and such would have to come into play. But it would at least be great if there were a visual representation of what the guidelines would expect screen readers to do when appropriate general (rather than screen-reader-specific) keystrokes were used.

I find myself doubting that the new speech history coming out in JFW 15 (or the visual representation already available in NVDA) would really address what I perceive to be one of the issues in the mis-implementation of ARIA. I know Jamie and Mic are totally swamped, but I wonder if there might be some way to build an NVDA extension that could work only with the visual representation. But that would still mean working with NVDA, and I think it would be ideal if this sort of thing would work as a browser extension with Chrome dev tools or Firebug (as examples).

Thanks for reading and for any thoughts that folks may have.

Best,
Jennifer

--
You received this message because you are subscribed to the Google Groups "Free ARIA Community" group.
To unsubscribe from this group and stop receiving emails from it, send an email to free-aria+...@googlegroups.com.
To post to this group, send email to free...@googlegroups.com.
Visit this group at http://groups.google.com/group/free-aria.
For more options, visit https://groups.google.com/groups/opt_out.

James Teh

unread,
Oct 13, 2013, 9:49:40 PM
to free...@googlegroups.com
Hi Jennifer,

On 14/10/2013 9:39 AM, Jennifer Sutton wrote:
> I find myself doubting that the new speech history coming out in JFW 15
> (or the visual representation already available in NVDA) would really
> address what I perceive to be one of the issues in the
> mis-implementation of ARIA.
Can you clarify why NVDA's speech viewer doesn't address this? (I
recognise that it still requires installing NVDA, but am curious as to
whether there are other reasons.)

> I wonder if there might be some way to build an NVDA extension that
> could work only with the visual representation.
Do you mean just the speech viewer without TTS or something more? You
can disable speech easily enough using NVDA+s or set the synthesiser to
no speech. If this is sufficient, is this a documentation/awareness
problem? If not, can you elaborate on your suggestion?

Thanks,
Jamie

--
James Teh
Director, NV Access Limited
Ph + 61 7 5667 8372
www.nvaccess.org
Facebook: http://www.facebook.com/NVAccess
Twitter: @nvaccess

Jennifer Sutton

unread,
Oct 13, 2013, 10:40:44 PM
to free...@googlegroups.com
Victor et al:

I'll try to describe a bit more what I mean below, but I will preface it by saying that it's conceivable that what I'm imagining is going on visually (or not) may not be the case, so I'll be glad to learn. I don't work with sighted devs on a daily basis, so I'm not always able to sit side-by-side with them to discuss how my screen reader experience of a widget may (or may not) differ from what they experience when testing with a browser and the keyboard alone.



At 06:43 PM 10/13/2013, you wrote:
>Hi Jennifer.
>To add to the list of favlets, bookmarklets, extensions and such, there is also a SkipTo Menu from my accessibility team at PayPal: http://paypal.github.io/SkipTo/. This enables keyboard navigation to the ARIA landmarks found on the page. We did not include all the landmarks on purpose because we wanted to make the menu as practical as possible.


JS: Thanks for this reminder. I haven't had a chance to look closely at this, but I knew of it.



>To your question, however… What do you mean by “visual representation”? Do you mean the speech output, keyboard shortcuts for landmarks or something more extensive?


JS: It seems to me that landmarks are fairly well covered by some of what already exists, so I mean widgets. If we take tabs as an example, when a sighted person selects a tab (presumably with the keyboard but without a screen reader), I guess, if it's working right, there's a visual indication of the selection. But I want sighted folks to *see* what I hear, so I want them to see the words, in some special color or highlight, that say "tab selected." I want sighted people to be able to diagnose ARIA issues quickly by just glancing with their eyes, rather than having to fiddle around with a screen reader. I want them to "get" how ARIA works (and when it's not implemented correctly) by being able to read, see highlights, little boxes, colors, or whatever else will make it very clear to them what they need to do.

For example, I want them to see exactly where content comes into the page (or goes away) based on whether or not a tab is selected, and I want them to see it at the point in the page *where* a screen reader would read it.

Maybe it's possible to see these things already with VO (I'm not sure how iOS and the Mac may or may not differ) or even NVDA. But I thought that, for example, the NVDA speech viewer might show the words for the landmarks, but not for the other widgets.

To put it as simply as I can, I want a "what I hear is what you see" type of interface. And yes, by all means, stretch what I am saying: show labels and anything/everything else. I find myself thinking of the old Home Page Reader, but no TTS would be required at all. Showing the screen reader experience in combination with a testing tool (i.e., a choice to turn that on/off) would be great.

I am envisioning something that would serve as an error-checking tool and a teaching tool at the same time. I'm focusing on ARIA, though, because it seems to be difficult to learn/conceptualize. Many other accessibility issues can be chalked up to not implementing best practices in coding, generally.

Yes, people should read all of the great ARIA examples and guidance, but even if they have, they need visual quick checks, or so it seems to me.

Basically, I'd like to move away from the need for sighted people to have to learn how to pretend to be blind in order to understand how ARIA should work.

I hope this helps clarify what I'm envisioning.

Best,
Jennifer


Jennifer Sutton

unread,
Oct 13, 2013, 11:06:11 PM
to free...@googlegroups.com
Jamie et al:

Comments below:
At 06:49 PM 10/13/2013, you wrote:


><snip>



>JT: Can you clarify why NVDA's speech viewer doesn't address this?
>(I recognise that it still requires installing NVDA, but am curious
>as to whether there are other reasons.)
>
JS: Based on my response to Victor, can you describe for me how it would handle tabs, or a carousel? What text would be put on the screen, and how would it look?

I guess a sighted person would have to look back and forth between
the speech viewer and the page in their browser. I wonder, when
testing widgets, if that becomes hard to visually track.

It would be cool if there were a way to show some kind of split screen.

Maybe NVDA would present enough on the screen, but since I'm not visually looking at the output from the speech viewer, it's hard for me to know *exactly* what the output looks like to a sighted person.

Does it present *everything* a blind user hears, in writing, on the
screen, in the order that NVDA speaks it? Is it easy to read?



Let me be clear that I'm not trying to suggest that there's a problem
with the speech viewer; far from it. In fact, I'm recommending that
people use NVDA with the speech viewer more and more (and turn off
the speech). And I also remind people to donate.



>JS: I wonder if there might be some way to build an NVDA extension that could work only with the visual representation.


JT:
>Do you mean just the speech viewer without TTS or something more?
>You can disable speech easily enough using NVDA+s or set the
>synthesiser to no speech. If this is sufficient, is this a
>documentation/awareness problem?


JS: Perhaps it's a documentation/awareness issue. But again, I'm not
sure since I am not clear about *exactly* what the output looks like,
especially when it comes to representing ARIA visually.


I hope these responses help clarify at least the need that I
perceive. Maybe there's another way to address it; I just keep
feeling like it'd be easier for sighted people to understand how to
implement ARIA (or other nonvisual items) if they could see the
issues in environments where they're already comfortable.

Best,
Jennifer

James Teh

unread,
Oct 13, 2013, 11:34:10 PM
to free...@googlegroups.com
On 14/10/2013 1:06 PM, Jennifer Sutton wrote:
>> JS: Based on my response to Victor, can you describe for me how it
>> would handle tabs, or a carousel? What text would be put on the
>> screen, and how would it look?
NVDA's speech viewer is a little text window which *should* display
every piece of text that would be spoken via TTS. For example, if you
press right arrow on a tab control and it moves to a "Notifications"
tab, NVDA might speak "Notifications tab" and "Notifications tab" would
also be written to the speech viewer.
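
To make that concrete, here is a toy illustration (this is NOT NVDA's actual speech code; the function and field names are mine) of how such an utterance might be composed from a widget's accessible name, role, and states:

```javascript
// Toy illustration only, NOT NVDA source: composes the sort of utterance
// a screen reader might speak, and a speech viewer might log, from a
// widget's accessible name, role, and states.
function utterance(widget) {
  const parts = [widget.name, widget.role];
  if (widget.selected) parts.push("selected");
  if (widget.expanded === true) parts.push("expanded");
  if (widget.expanded === false) parts.push("collapsed");
  return parts.filter(Boolean).join(" ");
}

// Arrowing onto the "Notifications" tab:
utterance({ name: "Notifications", role: "tab", selected: true });
// "Notifications tab selected"
```

A visual ARIA tool could render exactly these strings next to the widgets they describe, which is one possible reading of the "what I hear is what you see" request.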

> I guess a sighted person would have to look back and forth between the
> speech viewer and the page in their browser. I wonder, when testing
> widgets, if that becomes hard to visually track.
It possibly does, though I'm not sure what could be done about this.

> It would be cool if there were a way to show some kind of split screen.
You're supposed to be able to drag the Speech Viewer window around. Of
course, I can't confirm this myself, but if there are bugs in this, they
need to be reported and addressed.

> Does it present *everything* a blind user hears, in writing, on the
> screen, in the order that NVDA speaks it? Is it easy to read?
Yes. Each utterance is displayed on its own line. I guess this could get
tricky where there are lots of small utterances. Again, it's possible
there are bugs here; we don't get much feedback on this.

> Let me be clear that I'm not trying to suggest that there's a problem
> with the speech viewer; far from it.
Understood.

> Maybe there's another way to address it; I just keep feeling like it'd
> be easier for sighted people to understand how to implement ARIA (or
> other nonvisual items) if they could see the issues in environments
> where they're already comfortable.
I guess the problem is that the way a screen reader user interacts is
somewhat different to a sighted user and the problems encountered are
often very specific to that interaction. For example, one issue I'm
seeing a bit lately is a button which brings up a pop-up which is just
appended to the end of the DOM without setting focus. The pop-up is
positioned in the correct place visually, but a screen reader user has
no idea that it has appeared or how to get to it. I'm not sure how this
could be communicated without the sighted user being aware of the way a
screen reader user interacts. I guess one way is to show some sort of
representation of the object tree or browse mode, but that doesn't tell
the whole story.
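
The pop-up problem can be modeled in a few lines (illustrative only; none of this is real screen reader code): browse mode walks the DOM in document order and ignores CSS positioning, so a pop-up appended at the end of the document reads last no matter where it is drawn.

```javascript
// Illustrative model of the pop-up problem: a browse-mode pass visits
// nodes in DOM order and ignores CSS positioning, so visual placement
// says nothing about where the pop-up lands in the reading order.
function browseModeOrder(nodes) {
  return nodes.map((n) => n.text);
}

const dom = [
  { text: "Open menu button", visualTop: 100 },
  { text: "Page footer", visualTop: 900 },
  // Appended to the end of the DOM but positioned near the button via CSS:
  { text: "Menu pop-up", visualTop: 120 },
];

browseModeOrder(dom);
// ["Open menu button", "Page footer", "Menu pop-up"]
```

Visually the pop-up sits beside the button; in the reading order it comes after the footer, and without a focus move the user never learns it appeared.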

Ted Drake

unread,
Oct 14, 2013, 1:25:15 AM
to free...@googlegroups.com, free-aria@googlegroups.com
I can appreciate this idea, especially for widgets like tab panels, tree menus, and grids. It would be much easier to explain and troubleshoot if there were a visual representation when dealing with expected keyboard navigation, aria-controls, aria-expanded, roles, etc. It would also help to see when a user would need to switch their navigation modes.
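
As a very rough sketch of what such a visual representation might show (all names hypothetical; this is not an existing tool): a function that turns a widget's ARIA attributes into a short badge an overlay could paint next to the element.

```javascript
// Hypothetical sketch: turn a widget's ARIA attributes (given here as a
// plain object) into a short badge string that an overlay tool could
// paint next to the element on screen.
function ariaBadge(attrs) {
  const parts = [];
  if (attrs.role) parts.push(attrs.role);
  if (attrs["aria-selected"] === "true") parts.push("selected");
  if (attrs["aria-expanded"] === "true") parts.push("expanded");
  if (attrs["aria-expanded"] === "false") parts.push("collapsed");
  if (attrs["aria-controls"]) parts.push(`controls #${attrs["aria-controls"]}`);
  return parts.join(", ");
}

ariaBadge({ role: "tab", "aria-selected": "true", "aria-controls": "panel-1" });
// "tab, selected, controls #panel-1"
```

Seeing "tab, selected, controls #panel-1" in a colored box next to the tab would cover much of the keyboard-navigation and state troubleshooting described above.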

Ted Drake

Sent from my iPad

Victor Tsaran

unread,
Oct 14, 2013, 1:38:52 AM
to free...@googlegroups.com
Hi Jennifer and Ted.
Doesn't the "caption panel" in VoiceOver or the speech viewer in NVDA fulfill this requirement? In both instances, you can drag the window that outputs the speech to wherever you want it on the screen.

Dominic Mazzoni

unread,
Oct 14, 2013, 2:49:52 AM
to free-aria
Hi Jennifer,

Great discussion topic.

I explored this idea a couple of years ago with a Chrome extension I called ChromeShades. It reformats documents to try to reflect how a screen reader user might interact with them. The extension itself has rotted a bit and needs to be updated if you want it to work, but the Help page I wrote is pretty good; it gives you an idea of what features I added and how they work, with some simple examples:


One of the main challenges I had was with dynamic sites that keep changing their HTML; the extension couldn't always keep up. I think it'd be possible to fix that, but it'd require significantly rethinking some of the design. The other challenge is that sometimes it took away too much information: the original site was actually more accessible than ChromeShades made it seem. It might be possible to solve that issue too; I think it'd be better if it erred on the side of changing the page a bit less.

- Dominic
