Screen reader presentation


Devin Prater

Oct 27, 2021, 3:55:29 AM
to a11y-dev
Good afternoon,

I'm Devin, a screen reader user who knows maybe 0.2% of Python, so not much of a developer. However, being totally blind, I rely on screen readers 100% of the time that I use digital interfaces of any operating system.

So, since Fuchsia is a new, upcoming OS, I wanted to see if I could bring my perspective on things to hopefully give the architects of the future more input to work with. My concern is that the way screen readers work in modern OSes, like iOS, macOS, ChromeOS, and Android, doesn't give them enough flexibility to present information as well as they could.

So, first, I'll define terms that I'll use. I don't think enough research has been done on this subject for there to be any kind of official terms, so I'll go with:

* Text reader: A screen reader that reads documents, web pages, and other text as if it were a document. It loads web pages into a buffer and updates that buffer based on what the browser reports, not particularly on what the user does within the buffer. Other textual objects are treated similarly to documents as well.
* Object reader: A screen reader which treats everything, including text, as an object or "element". Each paragraph is treated as a single object. Any formatted text inside the paragraph is split into its own object, with the paragraph continuing in another object after it.
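To make the second definition concrete, here is a minimal sketch (my own illustration, with made-up function names, not any real accessibility API) of how an object reader might split a paragraph containing one bold span into three separate text objects, while a text reader would present the same paragraph as a single run:

```python
# Hypothetical sketch of the two segmentation paradigms.
# A "run" is a (text, formatting) pair; names are illustrative only.

def object_reader_segments(runs):
    """An object reader splits on every formatting change:
    each run becomes its own object."""
    return [text for text, _fmt in runs]

def text_reader_segments(runs):
    """A text reader joins the runs into one document line,
    keeping formatting as metadata instead of splitting the text."""
    return ["".join(text for text, _fmt in runs)]

paragraph = [
    ("The quick brown ", None),
    ("fox", "bold"),  # the formatted span becomes its own object
    (" jumps over the lazy dog.", None),
]

print(object_reader_segments(paragraph))  # three objects: speech pauses at each boundary
print(text_reader_segments(paragraph))   # one object: speech reads straight through
```

The pauses Devin describes below fall exactly at the boundaries between the three objects in the first output.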

So, these are, mainly, just two different ways to read rich text or web content. A screen reader could have an object-browser mode, like NVDA's object navigation. There are consequences to having an object reader, though.

First, because formatted text is split into its own object, speech stops reading the paragraph when it reaches the end of the preceding object, speaks the few formatted words on their own, then resumes with the rest of the paragraph. A user, such as a child learning to use a computer, would be confused as to why that word stands by itself. Speech has a further limitation: many modern text-to-speech engines offer few ways to convey different formatting attributes.

If braille is used, the problem is even more noticeable. When the user encounters the formatted text object, they just read a mostly empty line with a few words on it, depending on how much text is formatted. The braille isn't even shown with formatting, even though braille has plenty of formatting indicators.

Text readers don't always do much better. Narrator on Windows does the best among modern screen readers, but Emacspeak (on Linux, Mac, and in the Linux container on Chromebooks) has always done best at speaking formatting.

Object readers do have their wins, though, such as being able to play sounds based on where an object is. I hope HRTF is used with speech as well, so that the user can position the speech somewhere besides the middle of their head. Personally, I'd love the speech to come from somewhere in front of me, making it a little easier to think about what I'm listening to while I listen. Braille is better for this kind of contemplation, but not many blind people have the luxury of owning a braille display.
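Pulling speech off-center can be approximated even without full HRTF rendering. The sketch below (plain constant-power stereo panning, my own illustration and not Fuchsia code; a real HRTF pipeline would also filter each ear's signal) shows the basic level-based idea:

```python
import math

def pan_mono(samples, azimuth):
    """Constant-power pan of a mono signal into stereo pairs.
    azimuth: -1.0 (hard left) .. 0.0 (center) .. 1.0 (hard right).
    This only adjusts per-ear level; true HRTF processing would
    additionally apply direction-dependent filtering and delay."""
    angle = (azimuth + 1.0) * math.pi / 4.0   # map to 0..pi/2
    left_gain, right_gain = math.cos(angle), math.sin(angle)
    return [(s * left_gain, s * right_gain) for s in samples]

# Place a speech buffer slightly to the right of center.
stereo = pan_mono([0.5, -0.25, 0.1], azimuth=0.3)
```

Constant-power panning keeps the perceived loudness roughly steady as the source moves, which is why it is the usual starting point before layering on HRTF filters.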

I hope that my perspective has been of some help. I don't know if all this discussion has been had internally on the Fuchsia accessibility team, or if it's too early for these discussions to be held yet, but as there are already plans to build out Fuchsia into other devices and form-factors, I hope as many perspectives can be given as possible, so that accessibility can meet as many needs as possible.
Thanks all for all your hard work.

Lucas Radaelli

Oct 27, 2021, 4:20:48 PM
to Devin Prater, a11y-dev
Hey Devin,

I am Lucas, the TL (tech lead) for the Fuchsia Screen Reader, and I am also a blind developer.

All the things you have said are on my mind. Text mode vs. object mode (aka browse mode vs. focus mode in NVDA) is an interesting paradigm, and each mode indeed has its advantages.

If you take VoiceOver on the Mac, for example, I consider it to be an object-based screen reader. However, some of the text formatting is more complicated -- in my opinion it is much easier to read a document in a virtual buffer than in object mode, because the virtual buffer tends to group a lot of text into a single object.

The Fuchsia screen reader does not follow either of these paradigms yet. For now, we implemented basic support for touch-screen based devices, and there, as you probably have experienced, the model is a bit different: it is a mix of object mode with some smart processing of text to make it more readable with screen readers. The points you bring up are super relevant, and, yes, we will start discussing them as we submit more code to the codebase to support controlling the Screen Reader via a keyboard.

Fuchsia is still developing as a platform, and we are preparing a roadmap (a list of things we would like to do) to evolve the accessibility framework. We intend to publish some documentation of what already exists, and what we want to implement in the future, at the end of this year or the beginning of next year, so I would love to receive your feedback once you have the chance to read it.

I am an NVDA user myself, as well as an Emacspeak user when I am programming on Linux. I also use VoiceOver and TalkBack, so I think I have good experience with all these different screen readers.

One important thing about Fuchsia is the exact point you made: because it is in its beginning, we have the ability to shape what the screen reader of the future will look like. Yes, this may take a while. I can't make promises of dates or anything like that, but this kind of discussion is super relevant.

As I said, our documentation is getting ready, and I think the best way to move the discussion forward is to receive some feedback, questions or comments on what we write in the near future.

Thanks for reaching out!


Devin Prater

Nov 1, 2021, 7:56:23 PM
to Lucas Radaelli, a11y-dev
Thank you so much. I look forward to reading all I can about Fuchsia accessibility. I'm glad I took the step of reaching out and interacting, and am really glad to see accessibility taken seriously enough to have a mailing list all its own.