On 10/03/2009 09:20 AM, blindfold wrote:
> After about ten minutes or so I got tired of
> the screen still visually reading "Installing voice data on SD card"
> and pressed the home button. Things seemed to work at least partially
> with speech now, and I could navigate my contacts list with the
> entries spoken. Sounds quite good so far, but I feel a bit
> uncomfortable about the possibly unfinished "Installing voice data on
> SD card" part although that most likely has nothing to do with your
> screen reader work and everything with the SVOX installation.
Yeah, I don't actually trigger speech data installation. Ideally it'd be
nice to bundle that with Spiel such that one install rules them all and
the process is somewhat more accessible, but I'm not immediately sure
how to do that.
> I just
> ran Twidroid, and there too I got speech for the UI elements, but not
> for instance the labels alongside the edit boxes.
Yes, this appears to be an API limitation. I get a stream of events, but
can't see a way to get anything more significant. So, for instance, I
might get an event dumping the screen content as an array of Strings,
then events containing the class and content of UI elements that get
focus, but I can't locate the element in the overall hierarchy to make a
guess at its label. This is where my questions about touchscreen access
come in. If you run your finger over non-keyboard-focusable elements,
are they spoken? If labels of text input elements aren't spoken when
focused via the keyboard, is it then possible to find their labels via
the touchscreen and have them read? Not ideal, I know, but it'd
at least be something until the API gets more fleshed out in Donut+X.
> Still, it looks like
> you made great progress. Congratulations!!
>
>
Thanks, good to hear. :)
> Next I ran my own fully accessible app, The vOICe for Android, and as
> expected I now got "double speech": every entry in its menu was spoken
> by both a male and a female voice. The speech in The vOICe is still
> based on the "old" Android 1.5 TTS for Android library, so it is not
> so strange that we now get two TTS engines talking at the same time. I
> have not yet started work on the Android 1.6 TTS features that should
> let me drop most of my own accessibility code and avoid this double
> speech output.
>
>
Sweet, would be good to hear from actual app developers about what
functionality I should work on next. :)
OK, so just to be clear: the default UI doesn't support swiping a
finger across the screen, as is done on the iPhone? The only
interaction type is clicking with finger presses? Damn, that's
going to make a lot of things difficult or impossible just now.
> I'm not blind so I'm not qualified to judge screen reader
> functionality in any detail, but I noticed that your screen reader
> only says "button" in case of radio buttons, so I think it would be
> useful to refine that. In fact you may find it useful to try the
> "double speech" with my app to notice the differences in how dialog
> elements are spoken since The vOICe for Android does say "radio
> button" where applicable. However, this is a just a detail, and
>
Good point. Currently there's some very hacky code that just matches the
event's originating class against the string "Button", mainly meant to
get things up and running quickly, but it's simple enough to add
checks for RadioButton and the like before that match triggers.
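A minimal sketch of that ordering in plain Java (the class-name strings are stand-ins for whatever the event actually reports; this isn't Spiel's real code): the more specific widget names have to be checked before the generic "Button" substring match, or radio and toggle buttons would all fall through to "button".

```java
public class RoleGuesser {
    // Map an event's originating class name to a spoken role.
    // Order matters: "ToggleButton" and "RadioButton" both contain
    // "Button", so the specific checks must come first.
    public static String roleFor(String className) {
        if (className.contains("RadioButton"))  return "radio button";
        if (className.contains("ToggleButton")) return "toggle button";
        if (className.contains("CheckBox"))     return "check box";
        if (className.contains("Button"))       return "button";
        if (className.contains("EditText"))     return "edit box";
        return ""; // unknown widget: speak its content only
    }
}
```

So `roleFor("android.widget.RadioButton")` yields "radio button" rather than the bare "button" the current hack produces.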
> overall I am very impressed with how far you already got. I also
> informed the seeingwithsound user group today, and one question from a
> blind member there was "this is interesting. I wonder what the cost
> will eventually be". What are your plans that I can report back?
>
>
Right now it's open source under the MIT license (see
http://gitorious.org/spiel for the code). There are no build
instructions currently, but I'll throw together a README in the next few days.
It will always be open source, though the license is still up for
debate: maybe MIT, maybe GPL, not sure yet. So anyone willing to build it themselves
has it for free. If it ever becomes usable, I'll probably put it on the
Android Market for $5-$10, basically just to get a few beers for the
effort. :) No intent to actually make a killing on this, since the "AT
market as highway robbery" model isn't one I find particularly
appealing. :) Then again, I'm not sure the Android Market is accessible
just now, and if it uses the browser then it definitely isn't. I really
want to make this usable since I want an Android phone quite badly, but
as was stated earlier in this thread, the current API is very basic.
> The hardest part is perhaps to deal with proximity in case say an edit
> box is positioned near a (visually) corresponding text label on the
> screen but without a direct programmatic connection. User interface
> elements can be far apart in the GUI element tree but located close
> together on the screen such that sighted users will visually link them
> together, as was probably intended by the application programmer. One
> would then like to have the text label spoken along with the nearby
> edit box and its content, but I do not know if the screen coordinates
> as needed for doing this can be accessed within the new Android
> accessibility framework. If it is not possible at present, then it
> would be a strong suggestion to the Google Android developers to add
> this information to allow for more intelligent screen reader
> functionality. (This must be "intelligent" because one must inevitably
> make educated/heuristic guesses about which elements at what screen
> separations should be linked together, and which elements should not
> be linked in screen reading.)
>
Agreed, and based on what I've seen, I'm not sure this is possible at
the moment. I think I've gone as far as I can using the current API, and
thus far I've not gotten answers to any of my inquiries on these
matters. Hopefully Donut+X will give me more to work with. In the
meantime, I'm filing Android issues based on the limitations I'm
encountering and hoping a few months will bring positive change. Based
on what I've seen on this list, I'm hopeful in that regard.
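If screen coordinates ever do become available, the proximity heuristic described above might look something like this sketch (plain Java; the bounds type and the pixel threshold are my own assumptions, not anything the current accessibility framework exposes):

```java
public class LabelLinker {
    // Simple axis-aligned bounds for an on-screen element.
    static class Bounds {
        final int left, top, right, bottom;
        Bounds(int left, int top, int right, int bottom) {
            this.left = left; this.top = top;
            this.right = right; this.bottom = bottom;
        }
    }

    // Pixel gap between two bounds along both axes; 0 if they touch
    // or overlap.
    static int gap(Bounds a, Bounds b) {
        int dx = Math.max(0, Math.max(b.left - a.right, a.left - b.right));
        int dy = Math.max(0, Math.max(b.top - a.bottom, a.top - b.bottom));
        return Math.max(dx, dy);
    }

    // Link a label to an edit box only if they sit close together on
    // screen, no matter how far apart they are in the GUI element
    // tree. The 16-pixel threshold is a guess that would need tuning
    // against real layouts.
    static boolean shouldLink(Bounds label, Bounds editBox) {
        return gap(label, editBox) <= 16;
    }
}
```

On a typical form this would link a label ending at x=100 to an edit box starting at x=104, while leaving an element 200 pixels away unlinked; the educated-guess part is entirely in choosing and tuning that threshold.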