Technical Review - the way forward

Steve Lee

Nov 6, 2006, 7:08:28 AM
to osk...@googlegroups.com
At the risk of overloading everyone with more long messages, I'd really like to open up the technical discussion on the way forward, now that I realise how powerful LSR is. I hope that by sharing my thoughts we can incorporate as many ideas as possible before forging ahead with code. Obviously we can change tack at any point, but it would be good to get the best start from where we are now.

The big vision is to provide transparent access to computers and the internet for users with mobility impairments by creating innovative alternative input mechanisms.

We need

* Basic OSK display and operation with declarative layout definitions
* Direct in application highlighting and selection/operation
* Various scanning modes and other innovative input schemes
* Dynamic OSK creation based on application context.
* Innovations such as dynamic sizing of buttons and saving user favourites
* Other output such as speech
* Event handling from a range of devices on all platforms
* Interaction with target applications via a11y APIs, OS/Desktop APIs or synthetic events
* End user or 3rd party customisation via documents (appearance) and script (behaviour)
* Cross platform/desktop (Linux, Windows, Mac, possibly embedded).
* A flexible architectural structure (a platform for alt input solutions)
* General services from OS/Desktop and connectivity.


Q: How can we best use the existing experience and code bases to move forward towards our goal?

Good solutions to much of this puzzle already exist in the various OSK projects (GOK, SAW, OnBoard, Hawking Tool Bar). LSR provides important infrastructure (check the webcast). We have the start of an audit of current solutions on the wiki page and I've added this text. I'll get some architectural UML diagrams sorted.


What We Have

OnBoard provides a clean and lean basic OSK that uses SVG via Cairo. Perhaps a good starting point.

GOK & SAW both have many advanced switch scanning features and GOK includes text prediction and UI grab for dynamic selection sets.

Python is the language of choice for portability, efficiency and 'batteries included'. Modules can wrap existing C functionality from GOK and SAW before an eventual port to Python.
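
To make that concrete, here is the sort of thing I have in mind with ctypes (only a sketch; 'libgokscan' and the functions below are invented, not real GOK or SAW symbols):

from ctypes import CDLL, c_int

# Sketch only: 'libgokscan' and its functions are invented here,
# standing in for whatever C code we lift from GOK or SAW.
_lib = CDLL("libgokscan.so")
_lib.scan_start.argtypes = [c_int]   # e.g. scan rate in milliseconds
_lib.scan_start.restype = c_int

def start_scanning(rate_ms=750):
    """Thin Python wrapper; later the module could become pure Python."""
    if _lib.scan_start(rate_ms) != 0:
        raise RuntimeError("scanner failed to start")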

LSR offers a flexible and general AT framework with a good domain-specific class hierarchy and API available from within Perks scripts, plus good support for various input devices and a flexible approach to UI.

The Mozilla platform offers great interactive UI (XUL/XBL) and connectivity. XULRunner gives standalone operation and Python support (via xpPython), and XPCOM gives a component-oriented architecture.


Way to Go

I was envisioning using the Mozilla platform for XUL with SVG extensions or perhaps onBoard's UI. XPCOM can package existing code.

An alternative approach is to start with LSR and use either onBoard's SVG, XUL or something else for the UI.

I'd like to blend both LSR and Mozilla to get the best of both worlds.

Comments please.....

--
Steve Lee
www.oatsoft.org
www.fullmeasure.co.uk

davidb

Nov 6, 2006, 9:23:24 AM
to OSK-ng
Hi Steve,

This is great! My comments are inline below.

Steve Lee wrote:
> At the risk of overloading everyone with more long messages I'd really like
> to open up the technical discussion on the way forward now I realise how
> powerful LSR is. I hope by sharing my thoughts to incorporate as many ideas
> as possible before forging ahead with code. Obviously we can change tack at
> any point but it would be good to get the best start from where we are now.
>
> The big vision is to provide transparent access to computers and the internet
> for users with mobility impairments by creating innovative alternative input
> mechanisms.
>
> We need
>
> * Basic OSK display and operation with declarative layout definitions
> * Direct in application highlighting and selection/operation
> * Various scanning modes and other innovative input schemes
> * Dynamic OSK creation based on application context.
> * Innovations such as dynamic sizing of buttons and saving user favourites
> * Other output such as speech
> * Event handling from a range of devices on all platforms
> * Interaction with target applications via a11y APIs, OS/Desktop APIs or
> synthetic events
> * End user or 3rd party customisation via documents (appearance) and script
> (behaviour)

Will the appearance of the OSK (when not direct in-app) play nice with
desktop themes? For example, GOK can be configured to adopt the
desktop theme (e.g. high contrast) simply because it uses native
widgets. We really need users to tell us, though I suspect opinions
would vary.

> * Cross platform/desktop (Linux, Windows, Mac, possibly embedded).
> * A flexible architectural structure (a platform for alt input solutions)
> * General services from OS/Desktop and connectivity.
>

Do we want the OSK to play nice with magnification? An example might
be scanning directly in-application with magnification set to follow
the highlight/indicator, or a region of the screen that shows a
magnified image of the indicated area. Smells like scope creep, I know.

>
> Q: how can we best use the existing experience and code bases to move
> forward to our goal?
>
> Good solutions to much of this puzzle already exist in the various OSK
> projects (GOK, SAW, OnBoard, Hawking Tool Bar). LSR provides important
> infrastructure (check the webcast). We have the start of an audit of current
> solutions on the wiki page and I've added this text. I'll get some
> architectural UML diagrams sorted.
>
>
> What We have
>
> OnBoard provides a clean and lean basic OSK that uses SVG via Cairo. Perhaps
> a good starting point
>
> GOK & SAW both have many advanced switch scanning features and GOK includes
> text prediction and UI grab for dynamic selection sets.
>

The main thing is to capture the best ideas and features from
everything, or improve them (or make them obsolete). Using code 'as
is' is less important IMHO. Having this group discussion is ideal for
capturing the ideas and features (thankfully, most of which I think
Steve is already aware of).

> Python is the language of choice for portability, efficiency and 'batteries included'.
> Modules can wrap existing C functionality from GOK and SAW before eventual
> port to Python.
>
> LSR offers a flexible and general AT framework with a good domain specific
> class hierarchy and API from within Perks scripts. Plus good support for
> various input devices and a flexible approach to UI.
>
> Mozilla platform offers great interactive UI (XUL/XBL) and connectivity.
> XULRunner gives standalone operation and Python (via xpPython). XPCOM gives
> component oriented architecture.
>
>
> Way to Go
>
> I was envisioning using the Mozilla platform for XUL with SVG extensions or
> perhaps onBoard's UI. XPCOM can package existing code.
>
> An alternative approach is to start with LSR and use either onBoard's SVG,
> XUL or another for UI.
>

I suppose you've probably considered or plan to have some abstraction
of the GUI? This will make it easier to change your mind later.

> I'd like to blend both LSR and Mozilla to get the best of both worlds.
>
> Comments please.....
>
> --
> Steve Lee
> www.oatsoft.org
> www.fullmeasure.co.uk
>

cheers,
David

Steve Lee

Nov 6, 2006, 9:49:01 AM
to osk...@googlegroups.com
Hey David,

Yes, working with existing accessibility features and tools will be an important general issue, in addition to the specifics that you mention, e.g. speech. I'm not sure about interaction with a screen reader though. We should probably let users decide whether to use existing settings or override them.

You're right about ideas being more important than code. My reason for mentioning it was as a faster way forward, as long as the extra effort is not large. If you can encapsulate the prediction (say) and use it as a replaceable component, that is a win in the short term.

Absolutely to GUI abstraction, and the same for events. We want a coding environment that matches the problem domain, e.g. cells, actions, scan steps, gestures, symbols/concepts.
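
Something like the following is the level I mean (just a sketch with made-up names, to show the problem-domain flavour rather than a real design):

class Cell(object):
    """One selectable item on the keyboard/grid."""
    def __init__(self, label, action):
        self.label = label    # what the user sees
        self.action = action  # callable run when the cell is selected

class ScanStep(object):
    """One step of a scan pattern: the group of cells highlighted together."""
    def __init__(self, cells):
        self.cells = cells

class RowColumnScanner(object):
    """Very simple row/column scan over a grid of cells."""
    def __init__(self, rows):
        self.rows = rows

    def steps(self):
        for row in self.rows:  # first pass: highlight whole rows
            yield ScanStep(row)
        # a real scanner would then descend into the chosen row, cell by cell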

Actually I've been thinking that it would be cool not only to create an OSK-ng application but also to make it an open platform for creating exciting new applications and games. For that we present lightweight, data-driven customisation with scripting. Something like HTML + CSS + JavaScript, but in an abstract environment (an OSK virtual machine, if you like). That should make it easy for almost anyone to pick up and create solutions with. We then just sit back and watch the innovation happen. That's one of the reasons Mozilla is so attractive: it's already working that way with web standards.

Steve

billh

Nov 6, 2006, 12:26:52 PM
to OSK-ng
Hi All;

I'd like to step back a little in the discussion. Since we are talking
about a new, and more general, framework for a next generation OSK, I
think we need to broaden our scope and be very careful not to build in
too many limitations from the start.

I am also wary of using LSR as our framework, given its current licence
and a few implementation issues; if we are going to use Python I think
we should use the existing Python AT-SPI bindings rather than wrap cspi
(LSR currently does the latter, and as the cspi author I specifically
do not recommend this).

What we talked about in Boston was working to build a common
library/set of python modules of general utility to all our ATs,
including GOK, LSR, and orca. Orca, too, can be used as a
general-purpose AT framework, not just a screenreader, but I think it
makes more sense to abstract out some of the common operations that are
required by AT-SPI clients and share that code among our respective
clients/projects.

That said, I do think collaborating on the OSK-ng makes sense; part of
that would involve re-using this shared code base, and building the
pieces of particular interest to OSKs.

I also think we should consider a modular approach to front and
back-ends; this is I think the best way to satisfy the sometimes
conflicting goals different groups have regarding UIs. I personally
would not think that a XUL based OSK makes sense, but I certainly would
not object to a XUL front-end for a "pluggable" OSK framework.
Similarly, one could build a cairo, or Qt, or even SVG front end, if
one were motivated, and this would allow for different visions of what
is best for one's particular user focus (i.e. for kiosks and some end
users, you may want a naturalistically rendered keyboard, for other
users you want a high impact, color-coded front end - witness onBoard
vs. GOK). This also allows for non-graphical front-ends to the same
technology - even WAP or voice interfaces can easily be plugged in to
such a system, if you think about it. Neither XUL nor GTK+ gives you
that, which is why a pluggable front-end makes sense to me.
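
To make that concrete, the contract might be no more than something
like this (a sketch only; the names are illustrative, not a proposal):

class FrontEnd(object):
    """Pluggable front-end contract; GTK+, XUL, cairo/SVG, voice or WAP
    front ends would all implement the same few methods."""

    def show_layout(self, layout):
        """Render the given keyboard layout."""
        raise NotImplementedError

    def highlight(self, cell_ids):
        """Indicate the currently scanned group of cells."""
        raise NotImplementedError

    def on_activate(self, callback):
        """Register a callback invoked when the user selects a cell."""
        raise NotImplementedError

# The core engine only ever talks to a FrontEnd instance, so swapping GTK+
# for XUL or speech is a configuration choice, not a rewrite.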

Similarly, input devices can be quite varied. While up until now we've
mostly abstracted them all to be either mouse-like valuators or
button-like switches, there is a limit to how far this can get you, and
it still doesn't solve the problem of OS-dependent configurations (like
evdev, libusb, XInput, joystick, etc. etc.). Input devices are key to
OSK use, and they are the source of the most difficult and annoying
bugs - making the input modules pluggable gives us more flexibility and
room to grow.
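
In sketch form (invented names, just to show the shape of the
abstraction I mean):

class SwitchEvent(object):
    """Button-like input: a switch pressed or released."""
    def __init__(self, switch_id, pressed):
        self.switch_id = switch_id
        self.pressed = pressed

class ValuatorEvent(object):
    """Mouse-like input, e.g. a head pointer or joystick position."""
    def __init__(self, x, y):
        self.x, self.y = x, y

class InputDriver(object):
    """One pluggable module per backend (evdev, libusb, XInput, joystick...)."""
    def poll(self):
        """Return the SwitchEvents/ValuatorEvents seen since the last poll."""
        raise NotImplementedError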

As far as the keyboard layout "description language", I think something
based on XML probably makes sense (because of the wide variety of
transformation tools available). The GOK XML could be used as a
starting point, or just re-examined for clues about what we did
wrong+right when designing that XML flavor. We would also need to
consider whether we wanted to use XML for the in-memory keyboard layout
API, or build something more efficient for passing info around at
runtime (since dynamic keyboard layouts may need to change quite
quickly on-the-fly, and this can begin to take a significant fraction
of one's compute cycles if it isn't efficient).
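
For the runtime side I am picturing something as plain as this (a
sketch with made-up names), where the XML only exists at load time and
on-the-fly changes go straight to cheap Python objects:

class KeyboardLayout(object):
    """In-memory layout, decoupled from whatever XML it was loaded from."""
    def __init__(self, rows):
        self.rows = rows        # list of lists of key labels
        self._listeners = []

    def changed(self, listener):
        """Front ends register here to hear about dynamic updates."""
        self._listeners.append(listener)

    def replace_row(self, index, keys):
        """Cheap on-the-fly update, e.g. new word-prediction candidates."""
        self.rows[index] = keys
        for listener in self._listeners:
            listener(index)     # redraw just the changed row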

Best regards,

Bill

Peter Parente

Nov 6, 2006, 12:42:36 PM
to OSK-ng
Hello Bill,

> we should use the existing Python AT-SPI bindings rather than wrap cspi
> (LSR currently does the latter, and as the cspi author I specifically
> do not recommend this).

This is incorrect. LSR correctly uses pyORBit, the recommended
approach. Please be careful about what you're stating as fact.

> I am also wary of using LSR as our framework, given its current licence

I acknowledged this problem in my last email. We are fighting to
address it.

I do agree that taking a step back is a good idea, though. When I
posted about LSR, I was clear in saying "this is food for thought" and
"I am joining this conversation late." I do not have the big picture of
exactly what you're trying to accomplish. I can only tell you about
LSR's capabilities.

Pete

billh

Nov 6, 2006, 12:58:32 PM
to OSK-ng

Peter Parente wrote:
> Hello Bill,

Hi Peter;

> > we should use the existing Python AT-SPI bindings rather than wrap cspi
> > (LSR currently does the latter, and as the cspi author I specifically
> > do not recommend this).
>
> This is incorrect. LSR correctly uses pyORBit, the recommended
> approach. Please be careful about what you're stating as fact.

My apologies - is this a change? I guess I got mixed up with what
dogtail and LDTP are doing (neither are using pyORBit directly yet, but
there seemed to be consensus that they should move in that direction).

> > I am also wary of using LSR as our framework, given its current licence
>
> I acknowledged this problem in my last email. We are fighting to
> address it.

Cool, that's great news.

> I do agree that taking a step back is a good idea, though. When I
> posted about LSR, I was clear in saying "this is food for thought" and
> "I am joining this conversation late." I do not have the big picture of
> exactly what you're trying to accomplish. I can only tell you about
> LSR's capabilities.

Cool; I do think the LSR concept of a general harness is a powerful and
good one (and I think it applies to orca too, though the orca team has
been focussed pretty closely on the issues of screenreading and
magnification). I sort of think that the best mix of code might
include pieces from both frameworks, as well as some new code that
takes some inspiration from the things GOK has been doing in C. Maybe
even some of the dogtail or LDTP code could be reused.

As you say, if we get the license compatibilities sorted out, then we
can pick and mix from a number of sources (or, since python is pretty
compact in terms of value for code, it might be just as effective to
write new modules using these existing sources as inspiration).

I suppose you might say that the basic abstraction for an OSK (or at
least the adaptive kind we are probably talking about) is pretty
stateful. That stateful OSK however needs to rely on a fairly complex
stream of events in order to present the "most useful" and "most
appropriate" set of choices to the end user, reflective of the user's
current application context. That event-driven part is where I think
code reuse between screen readers like LSR and orca, test frameworks
like dogtail and LDTP, and OSKs has a role. And I think you have the
right idea in LSR, to do some broad thinking about how these concepts
can go beyond the standard idea of what an "onscreen keyboard" is (as
we've been doing with GOK).

Perhaps I should start a separate thread on the basic abstractions of
interest?

Bill

> Pete

Peter Parente

Nov 6, 2006, 1:34:18 PM
to OSK-ng
Hi Bill,

> > This is incorrect. LSR correctly uses pyORBit, the recommended
> > approach. Please be careful about what you're stating as fact.

> My apologies - is this a change? I guess I got mixed up with what
> dogtail and LDTP are doing (neither are using pyORBit directly yet, but
> there seemed to be consensus that they should move in that direction).

No problem. We've been using pyORBit since the LSR project started.

> I sort of think that the best mix of code might
> include pieces from both frameworks, as well as some new code that
> takes some inspiration from the things GOK has been doing in C. Maybe
> even some of the dogtail or LDTP code could be reused.

Agreed. For instance, Steve pointed out that he might like to support
declarative scripts. Both LSR and Orca are object oriented, so
inspiration (and maybe code) will have to come from elsewhere for that
feature.

> As you say, if we get the license compatibilities sorted out, then we
> can pick and mix from a number of sources (or, since python is pretty
> compact in terms of value for code, it might be just as effective to
> write new modules using these existing sources as inspiration).

I will keep the group posted on this topic as it relates to LSR.

Steve Lee

Nov 6, 2006, 5:24:41 PM
to osk...@googlegroups.com
Yep, declarative enables easy end-user experimentation and
customisation. By end user I mean SEN teachers, clinical staff and
facilitators, who may have limited technical skills and time. We want
to encourage innovation, and they are the people with the most
experience of the requirements. Anything we can do to encourage them
to customise for their clients is great.

Here are some more comments:

Framework
---------
It looks like we are moving in the direction of an open framework,
LSR + Mozilla or otherwise. I KNOW developers ALWAYS want to do that,
but in this case.... We are making a bigger problem for ourselves, but
given the vision we want a solution big enough to meet it. I have
always had in mind a component-based architecture for, say, the scan
logic (e.g. XPCOM), and a custom pluggable 'bus' is close to that.
However, we also want something working in a reasonable timescale, so
we may want to evolve from simpler implementations.
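
Roughly this sort of thing to start with (a sketch only, far simpler
than XPCOM, but something we could evolve from):

class Bus(object):
    """Minimal publish/subscribe 'bus' connecting pluggable components."""
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, handler):
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, data=None):
        for handler in self._subscribers.get(topic, []):
            handler(data)

# e.g. the scan logic publishes, while the front end and a logger subscribe:
bus = Bus()
log = []
bus.subscribe('cell-activated', log.append)
bus.publish('cell-activated', 'a')   # log is now ['a']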


UI & Keyboard layout
---------------
For the UI I think we either come up with a usable
least-common-denominator choice or go pluggable through an
abstraction, possibly data driven. The layouts are naturally data
driven (see below), but what about other, more dynamic UI and the
in-application selection? (We should present all UI in a consistent
way, of course.) The choices seem to be API-driven via extension
scripts (as in LSR) or dynamic creation of declarative XML which is
then 'rendered' and run (like DHTML). If we want something that allows
several different groups to implement their projects then pluggable
must be the way, given the wide variety of requirements. A simple
cross-platform 'standard' default implementation will still be needed.

I kinda take it as read that predefined layouts will be defined in an
XML application. A flexible XML schema can include standard behaviours
and allow scripting for exceptions. This, to my mind, is an
application of XUL/XBL.

Do we have any idea how inefficient XML DOM is for manipulation
compared to Python classes (say)? Are some Python XML libs much better
than others? Could we parse with DOM, or do we need SAX? I doubt
layouts ever get that big.
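
A quick sanity check might be to parse a small layout with minidom and
convert it straight to plain Python objects, so the DOM only exists at
load time (the layout flavour below is made up, not GOK's):

from xml.dom import minidom

SAMPLE = """<layout name="abc">
  <row><key label="a"/><key label="b"/><key label="c"/></row>
</layout>"""

def load_layout(xml_text):
    doc = minidom.parseString(xml_text)
    rows = []
    for row in doc.getElementsByTagName('row'):
        rows.append([key.getAttribute('label')
                     for key in row.getElementsByTagName('key')])
    doc.unlink()  # drop the DOM; runtime manipulation uses the plain lists
    return rows

print(load_layout(SAMPLE))   # [['a', 'b', 'c']]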


Input devices
---------------
This is probably the critical part, and a good abstraction and
mechanisms are needed fairly early. Pluggable may be the way, and it
should be fully run-time. I'd like to get to the position where users
can just have various devices connected and they work without complex
config, e.g. switches and head pointers together. That's a sort of USB
plug-'n'-play utopia. Rapid testing of various combinations will be a
bonus during assessments and would allow machine sharing.

Peter Korn mentioned problems with drivers under Linux - anyone have details?

Keith Packard has indicated that Xevie is solving some of the problems
that GOK has faced with device capture under X.

On Windows there are many legacy systems like Serial Keys. Do we support them?

Steve
