Bayesian computer interfaces.


swf...@gmail.com
Sep 29, 2014, 9:42:32 AM
to open-bermuda

I have been thinking about making more use of Bayes' theorem in computer interfaces and systems.

The idea behind Bayes' theorem is that you start off with some prior assumption about the probability of things, then re-estimate those probabilities as more information becomes available.
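To make that concrete, here is a minimal sketch of Bayesian updating in Python. The coin example and all the numbers in it are made up for illustration: we start with a 50/50 prior over two hypotheses and re-weight them after each observation.

```python
# Minimal sketch of Bayes' theorem as sequential updating:
# posterior is proportional to likelihood times prior.
# Hypothetical example: is a coin fair (P(heads)=0.5) or
# biased (P(heads)=0.8), given a run of observed heads?

def bayes_update(prior, likelihoods):
    """Return posterior probabilities for one observation,
    given a prior dict and per-hypothesis likelihoods."""
    unnormalised = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnormalised.values())
    return {h: v / total for h, v in unnormalised.items()}

# Start with no reason to favour either hypothesis.
posterior = {"fair": 0.5, "biased": 0.5}

# Observe three heads in a row, updating after each one.
for _ in range(3):
    posterior = bayes_update(posterior, {"fair": 0.5, "biased": 0.8})

print(posterior)  # belief in "biased" grows with each head
```

After three heads the "biased" hypothesis holds roughly 80% of the probability mass, which is the whole trick: the prior keeps you sane early on, and the evidence takes over as it accumulates.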


There was a good attempt at explaining Bayes' theorem here:

http://www.theguardian.com/science/life-and-physics/2014/sep/28/belief-bias-and-bayes

I haven't read this piece in full, but a quick skim suggests it might be along the lines I am thinking of:

http://www.sciencedirect.com/science/article/pii/S0957417412010731


The idea is to create better UIs. For example, predictive text on phones is good up to a point, but then fails spectacularly -- for instance, mine thinks I am obsessed with ducks. I think some Bayesian methods are already used in these algorithms, but they need work.


It would be awesome if UIs could learn from their users and help guess which button they were really trying to touch (e.g. if a user hits a button and then promptly hits back, that is a clue they needed more information to make the right choice before hitting the button).
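The button-guessing idea can also be phrased in Bayesian terms. The sketch below is hypothetical: button names, positions, usage shares, and the touch-noise figure are all invented. The prior is each button's historical usage frequency, and the likelihood of a touch point given an intended button is a Gaussian falloff with distance from that button's centre.

```python
import math

# Hypothetical sketch: infer which button a touch was aimed at.
# Prior = the button's historical usage share; likelihood = Gaussian
# over the distance from the touch point to the button's centre.
# All names and numbers are made up for illustration.

BUTTONS = {                # name: (centre_x, centre_y, prior usage share)
    "back":   (20, 40, 0.5),
    "home":   (60, 40, 0.3),
    "search": (100, 40, 0.2),
}
TOUCH_NOISE = 15.0         # assumed std. dev. of touch error, in pixels

def likelihood(touch, centre):
    """Unnormalised Gaussian likelihood of a touch given a button centre."""
    dx, dy = touch[0] - centre[0], touch[1] - centre[1]
    return math.exp(-(dx * dx + dy * dy) / (2 * TOUCH_NOISE ** 2))

def intended_button(touch):
    """Posterior probability, per button, that this touch was meant for it."""
    scores = {name: p * likelihood(touch, (x, y))
              for name, (x, y, p) in BUTTONS.items()}
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

# A touch exactly between "back" and "home": equal distances,
# so the usage-frequency prior tips the guess toward "back".
print(intended_button((40, 40)))
```

A "hit button, then promptly hit back" event could then feed back into the model, e.g. by shifting probability mass away from the button that was registered, which is exactly the kind of learning from users described above.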


John

James Tucker
Sep 29, 2014, 5:45:43 PM
to swf...@gmail.com, open-bermuda
It's an interesting concept, but with all training systems you generally find that they want (or need) to adapt over time to converge on optimal solutions.

There's another side to UI/UX though, which is that humans are very adaptable and good at learning too. A good example is QWERTY. There are common misconceptions that Dvorak is a "faster" layout, and other rumours like QWERTY being designed to "slow people down". Unfortunately this is all misinformation: fast typists are generally fast on any layout (insane things aside). Similarly, the world record holder for tap-texting is a man who used to send telegrams during the world war. He was adapted to Morse, but he was able to learn a numeric keypad almost instantly. N.b. some skills are transferable, some are not.

The point is, the brain is pretty good at adapting to "suboptimal" layouts without suffering much. For most UIs, having a button on one side of the screen rather than the other will rarely make a statistically significant difference to usage reliability and speed in normal use. In some games it could help, sure, but we generally spread our hands until we find an arrangement that works. If the middle finger doesn't work, we use another until something does.

For deeply nested UIs, "frequent use" ordering and the like works pretty well but requires some training. Users tend to scream at adaptive UIs though -- take all the changes to the Windows Start menu since XP (inclusive). This is a constant problem on phone home screens too, and it's extremely hard. Users are getting better at arranging things themselves, though, and heavy MS Office users have been doing the same with the adaptable and custom menu bars for years.

In the home space, what I really (personally) want to start with is a shared-state web interface supporting multiple controllers. Adaptive would be ok, but then I'd have to predict what the interfaces are in order to not be surprised - if I could just start with consistent, that'd be a huge step forward from the offerings on the market today.

/ramble


