[Re-sending my message from April 6 which was emailed but is not showing up in the web interface: apologies to anybody who sees this twice!]
Hi Eric,
I'm an engineer on Chrome, and I've been working with the Polymer team on making sure that we address Custom Element accessibility.
I agree with you that it is critically important that we don't leave users with disabilities behind in the Web Components world. I am cautiously optimistic, however.
Firstly, web developers are already a long way down the path of creating complicated, mouse-driven UIs which lack semantic information. Web Components gives us an opportunity to formalize techniques for creating robust custom elements with good accessibility.
For example, the ARIA working group are currently discussing proposals for allowing developers to extend ARIA to better support novel types of UI (see the discussion beginning with this email: http://lists.w3.org/Archives/Public/public-pfwg/2014Apr/0034.html). One problem with ARIA is that it is not supported by the dominant voice control software, Dragon, which still uses very outdated technology - we have work to do on the standards side, but we also need those standards to be supported by the relevant tools.
There is also work underway to formalize the possible actions and events which may occur on an element: http://www.w3.org/TR/indie-ui-events/ - although this is currently in the early stages and in need of more support from the community.
Secondly, Web Components gives developers a chance to try out novel ways of supporting accessibility-related use cases, as argued by Steve Faulkner: http://blog.paciellogroup.com/2014/04/usability-accessibility-opportunities-web-compenent-world/ - approaches which could, in future, become part of the standard HTML spec.
Our recent article, http://www.polymer-project.org/articles/accessible-web-components.html, outlines the strategies the Polymer team are using to address accessibility.
The example below contrasts performing the same task (applying a border to a paragraph of text) by hand and by speech:

By hand: Move mouse to beginning of text, hold down mouse button, drag to end of text. Right-click text, move down menu to Paragraph, left-click Paragraph. Click the "Borders" tab, click the second-from-the-left line arrangement, click OK.

By speech, by graphical element: Move mouse to the beginning of text. "hold left mouse button", "drag right" ...wait for it... "stop". "right-click" ...counting menu items... "move down seven", "mouseclick". Move mouse over the Borders tab, "mouseclick". Move mouse over the line arrangement box (second from the left), "mouseclick".

By speech, with deeper application knowledge: Move mouse to start of text, "leave mark", move mouse to end of text, "turn on all borders".
Here is how we are trying to address your four points using the technology available today:
> 1) read the state of anything that can be displayed or changed via a GUI. This is a getter function.
We annotate custom elements with ARIA role, name, state and value properties. This provides state information which can be queried by speech recognition software via platform accessibility APIs such as IAccessible2, UI Automation and NSAccessibility, and allows you to query the interface by name.
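For instance, a rough sketch of what that annotation might look like (the <custom-checkbox> element name here is hypothetical, not a real Polymer element):

    <!-- Hypothetical custom element annotated so that platform
         accessibility APIs can report its role, name and state. -->
    <custom-checkbox role="checkbox"
                     tabindex="0"
                     aria-label="Subscribe to newsletter"
                     aria-checked="false">
    </custom-checkbox>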
> 2) change the state of anything that can be changed by a GUI. This is a putter function.
This is where there is currently a gap in the specifications, and authors are forced to implement their own interactions. The IndieUI spec proposes one possible mechanism for addressing this: http://www.w3.org/TR/indie-ui-events/#intro-example-valuechangerequest. To fill the gap for now, we suggest using data binding to translate user gestures (keyboard or mouse events) into changes to attributes on the custom element, which are then reflected in the ARIA attributes exposed via the platform APIs.
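As a rough sketch of that approach, continuing with the hypothetical <custom-checkbox> above, a script can translate user events into a state change which is mirrored in ARIA:

    // Sketch: translate a user gesture into a state change which is
    // reflected in the element's ARIA attributes, so that platform
    // accessibility APIs see the new value immediately.
    var checkbox = document.querySelector('custom-checkbox');

    function toggle() {
      var checked = checkbox.getAttribute('aria-checked') === 'true';
      checkbox.setAttribute('aria-checked', String(!checked));
    }

    checkbox.addEventListener('click', toggle);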
> 3) do something. This is usually the action associated with a link or button, but can also drive mouseover or any other event causing an action.
Similarly, acting on an element is currently handled via keyboard and mouse events, though this could be supported at a higher level by something like IndieUI actions (http://www.w3.org/TR/indie-ui-events/#actions). Currently, Polymer elements listen for mouse and keyboard events and use them to drive actions on the element. As you say, these events can be simulated by assistive technology via the platform accessibility APIs. We do recommend providing robust keyboard support and ensuring that elements are focusable, to avoid having to perform fiddly mouse interactions.
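Continuing the sketch above, keyboard support might look something like this, so that activating the element never requires pixel-precise mouse movement:

    // Sketch: make Space and Enter activate the element, mirroring
    // native checkbox/button behaviour. tabindex="0" (set in the
    // markup above) already places it in the tab order.
    checkbox.addEventListener('keydown', function(event) {
      if (event.keyCode === 32 || event.keyCode === 13) {  // Space, Enter
        event.preventDefault();  // stop Space from scrolling the page
        toggle();
      }
    });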
> 4) tell me when something changes. These event notifications allow you to use hand/mouse at the same time as speech, and let the speech system stay in sync with what's being displayed.
ARIA provides the aria-live attribute (also implicit in certain role values, such as status and alert) to notify users when the content of certain regions changes.
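For example, a minimal live region might look like this; when script changes its contents, assistive technology announces the new text without the user having to move focus (the id and message here are just for illustration):

    <!-- Sketch: a polite live region for status messages. -->
    <div id="status" role="status" aria-live="polite"></div>

    <script>
      // Announced by screen readers when the text changes.
      document.getElementById('status').textContent = 'Border applied.';
    </script>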
I would greatly appreciate it if you could let us know whether you see any areas where we could improve this strategy to address your needs. It would also help to hear more about your specific experience: what browser and assistive technology are you using? Are there any sites which work well for you now that could be used as a model for best practices?
Thanks,
Alice