text editing


John Asmuth

Apr 26, 2012, 4:09:59 PM
to go-...@googlegroups.com
Anyone know anything about this? With draw2d, I only find out how wide a character is after I draw it, so for things like knowing what to select when the mouse clicks, the only way I can see to do it is to buffer each letter individually and use some arithmetic to figure out which one the mouse wants.

Is this the normal way to do it?

I suppose one thing I could do is render each letter individually, but to the same buffer, and keep track of where each letter begins and ends... I'll work on this approach for now, but I'm happy to hear about what might work better.
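That bookkeeping could look something like the sketch below: record the x coordinate where each glyph ends as you render it, then a binary search maps a mouse click to a character index. (A minimal illustration; the `line` type and its fields are made up, not part of go.uik or draw2d.)

```go
package main

import (
	"fmt"
	"sort"
)

// line records, for one rendered line of text, the x coordinate at
// which each glyph ends. Since widths are only known after drawing,
// this slice is filled in as each letter is rendered.
type line struct {
	glyphEnds []float64
}

// indexAt returns the index of the glyph under x, clamping clicks
// past the last glyph to the final index.
func (l *line) indexAt(x float64) int {
	// smallest i such that glyphEnds[i] >= x
	i := sort.SearchFloat64s(l.glyphEnds, x)
	if i >= len(l.glyphEnds) {
		return len(l.glyphEnds) - 1
	}
	return i
}

func main() {
	// Suppose "abc" rendered with widths 5, 7, and 6 pixels.
	l := &line{glyphEnds: []float64{5, 12, 18}}
	fmt.Println(l.indexAt(3))  // glyph 0
	fmt.Println(l.indexAt(9))  // glyph 1
	fmt.Println(l.indexAt(99)) // clamped to glyph 2
}
```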

John Asmuth

Apr 26, 2012, 10:54:23 PM
to go-...@googlegroups.com
I've got a basic text entry widget, it seems to be working pretty well. Hopefully someone else can figure out how to interact with the copy/paste system...

Alex

May 13, 2012, 1:52:41 AM
to go-...@googlegroups.com
Does the current design allow for more complex input methods? I'm talking here about entering languages like Chinese. I completely understand that this isn't core functionality or something that necessarily needs to be implemented this early in development. I just want to make sure that the API doesn't expect you to go straight from keyboard events to drawing text; there should be some intermediate layer to allow for this kind of thing. Sorry if this is a stupid question — I'm new to graphics programming and haven't yet grokked enough of uik's code to answer this myself.

John Asmuth

May 13, 2012, 9:07:11 AM
to go-...@googlegroups.com
At the moment, widgets receive a wde.KeyTypedEvent, which has a Glyph field - the glyph is a string holding the text that would be inserted in a text widget. Currently there is no mapping, but that's only because I have absolutely no idea how to do this mapping. How is this kind of thing usually dealt with?

Alex

May 13, 2012, 11:02:54 AM
to go-...@googlegroups.com
I used Chinese as an example because there are so many different input methods for Chinese. If Chinese is supported, other languages will be fine too. Typically, Chinese users will type out the pronunciation of a character (using either English letters or ) and the system will try to guess what character is intended. Since many characters are pronounced the same, and since the pronunciation is usually only partially specified, this guess will often be wrong and require manual correction from a drop-down list of choices. Smart input methods will use statistics to guess more accurately based on context, which means that entering a new character will often change a previously entered character.

Alternatively, there are some lesser-used input methods which require no guessing (or very little guessing anyway). Each character is mapped with a distinct series of keystrokes according to a set of rules. These are faster to input, easier to implement, but harder to learn.
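A deterministic method of that kind boils down to a plain table lookup from a completed key sequence to a character, with no statistics involved. A toy sketch in Go (the table entries here are invented, not the rules of any real input method):

```go
package main

import "fmt"

// A rule-based input method is essentially a deterministic lookup:
// each key sequence maps to exactly one character, so no guessing
// or after-the-fact correction is needed.
var table = map[string]rune{
	"a":   '日',
	"ab":  '明',
	"abc": '晶',
}

// lookup resolves a completed key sequence to its character.
func lookup(keys string) (rune, bool) {
	r, ok := table[keys]
	return r, ok
}

func main() {
	if r, ok := lookup("ab"); ok {
		fmt.Printf("%c\n", r) // 明
	}
}
```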

On Linux, all of this is wrapped up in ibus. Ibus supports many languages using pluggable engines. It supports GTK, Qt, and X (via XIM). I don't know how it's done on Windows or Mac. On Plan 9, there was a proposal to add non-English input which I don't think was ever implemented. Since I get the impression that uik is partially inspired by Plan 9's windowing system, you may get some ideas by looking at the proposed architecture[1].

Essentially, you need to send keyboard events to some other component to do this processing and delay text insertion until that component commits something. You also need a way to see intermediate results. For English, this component will just immediately send back the letter that was typed. For other languages, you could reuse systems like ibus or invent your own architecture if you don't like ibus. Reinventing the wheel here wouldn't actually be as hard as it sounds, since most input methods are already packaged as libraries and are in fact agnostic of systems like ibus (which was able to replace scim without a whole lot of work).
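That commit/preedit split could be expressed as a small interface, with the English case being a trivial passthrough engine. A hedged sketch (every name here is hypothetical — none of this is actual go.uik, go.wde, or ibus API):

```go
package main

import "fmt"

// InputResult is what an input-method component sends back to the
// widget: either intermediate (preedit) text or a final commit.
type InputResult struct {
	Text      string
	Committed bool // false while the user is still composing
}

// InputMethod turns raw key events into text. A passthrough engine
// (e.g. for English) commits every keystroke immediately; a pinyin
// engine would buffer keys and commit only once a choice is made.
type InputMethod interface {
	Feed(key rune) InputResult
}

// passthrough is the trivial engine: every key commits as-is.
type passthrough struct{}

func (passthrough) Feed(key rune) InputResult {
	return InputResult{Text: string(key), Committed: true}
}

func main() {
	var im InputMethod = passthrough{}
	r := im.Feed('a')
	fmt.Printf("%q committed=%v\n", r.Text, r.Committed) // "a" committed=true
}
```

The point of the indirection is that a text widget only ever inserts `Text` when `Committed` is true; what happens between keystrokes and commits is the engine's business.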

[1]: http://www.mail-archive.com/plan9...@googlegroups.com/msg00169.html

Alex

May 13, 2012, 11:46:17 AM
to go-...@googlegroups.com
And again, if this is encapsulated in the widget then I don't think it's something you need to worry about yet. Just don't get locked into a bad design decision by making developers handle raw keycodes for text input. I've seen lots of applications using less common toolkits get this wrong, and wanted to bring it up sooner rather than later. Keep up the good work, and hopefully I'll be up-to-speed enough to contribute code soon!

John Asmuth

May 13, 2012, 12:16:25 PM
to go-...@googlegroups.com


On Sunday, May 13, 2012 11:46:17 AM UTC-4, Alex wrote:
Just don't get locked into a bad design decision by making developers handle raw keycodes for text input.

This is what I hope to encapsulate in the KeyTypedEvent (which is distinct from the KeyDownEvent). The only thing this doesn't support easily is context.

Michael Schneider

May 14, 2012, 3:12:58 PM
to go-...@googlegroups.com
I've got a basic text entry widget, it seems to be working pretty well. Hopefully someone else can figure out how to interact with the copy/paste system...


I really want to dive into adding cut and paste to the "entry" widget. But before I do, have we come up with what is canonical for our event system? Do events bubble from outside in (Menu, then Window, then Entry widget)? Or from inside out (Entry widget, then Window, then Menu), looking for the widget that will accept and handle the event? Do we start a convention like in Cocoa, where the developer is expected to "setFirstResponder" as the focused widget to start from, then bubble out from there?

John Asmuth

May 14, 2012, 4:14:19 PM
to go-...@googlegroups.com
This merits some planning. Do you ever visit #go-nuts on irc.freenode.net?

Michael Schneider

May 14, 2012, 4:24:33 PM
to go-...@googlegroups.com
This merits some planning. Do you ever visit #go-nuts on irc.freenode.net?

I haven't yet.  Are you planning on scheduling a meeting? 

John Asmuth

May 14, 2012, 5:40:53 PM
to go-...@googlegroups.com
No, but I idle there, and so do lots of smart people who might have opinions on this kind of thing.

Michael Schneider

May 15, 2012, 4:23:47 PM
to go-...@googlegroups.com
My apologies if this gets a little rambly, I've been pondering this all day.

In response to your statement:  "is it worth it to create a whole new slew of events? ForwardWordEvent, EndLineEvent?"

I believe these events shouldn't be generated in wde. Say a developer is writing a game with go.uik.  Let's say the sprite walks forward with the right arrow key, and the developer wants the sprite to run when pressing alt+right_arrow.  Does that mean he would need to catch a ForwardWordEvent?  That's kinda out of left field.  

The more I think about it, the less I like the idea of go.wde firing any widget-specific events. That should be reserved for each individual widget in go.uik to implement. I'm trying to keep things simple here, just like in Rob Pike's paper (http://doc.cat-v.org/bell_labs/concurrent_window_system/concurrent_window_system.pdf), where each client has two incoming channels, one for the keyboard and one for the mouse. I feel that having go.wde capable of firing off dozens of different event types greatly muddies the elegance of that simple design. I believe that go.uik widgets should receive raw keyboard events and fire off (or call) the appropriate events themselves. For example, the text and entry widgets could receive the raw KeyDown events (or KeyTypedEvents) and internally generate the proper cursor-movement events, or cut/paste events, or undo/redo events, etc. Since we do like to keep all system-dependent stuff in go.wde, we could still put all the mapping stuff in go.wde while invoking it from go.uik. Pasting in entry.go would end up looking like this:

case ev := <-e.UserEvents:
    switch ev := ev.(type) {
    case uik.KeyTypedEvent:
        if ev.Chord == wde.PasteChord {
            paste()
        }
    }

On a separate note, the more I consider your top-down approach to how events fire, the more I like it. In Rob Pike's paper, he speaks of every window in his windowing system only needing to accept two channels, the keyboard channel and the mouse channel. In fact, it's so easy to plug into the system that he says it'd be trivial to embed another entire instance of the windowing system within a window. With this in mind, we're taking this design one step further with our widgets. We can embed our widgets within other widgets, as long as they accept and respond to our event channels. Any events not caught by Cocoa (when on Mac) will be passed to our window. Any events not caught by the window will be passed on to the (for example) tab widget. Events not caught by the tab widget will be passed to (for example) the text widget. :)
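That outside-in chain can be sketched as a simple dispatch loop: each widget either consumes the event or lets it continue inward. A minimal illustration (the interface and types here are hypothetical, not actual go.uik code):

```go
package main

import "fmt"

// Handler is the one method a widget needs in order to take part in
// the outside-in chain: it reports whether it consumed the event.
type Handler interface {
	HandleEvent(ev interface{}) bool
}

// dispatch walks from the outermost widget inward (window, then tab,
// then entry, ...), stopping at the first widget that consumes the
// event, and reports whether anyone handled it.
func dispatch(ev interface{}, chain ...Handler) bool {
	for _, h := range chain {
		if h.HandleEvent(ev) {
			return true
		}
	}
	return false
}

// named is a stand-in widget that consumes events its predicate accepts.
type named struct {
	name    string
	accepts func(ev interface{}) bool
}

func (n named) HandleEvent(ev interface{}) bool {
	if n.accepts(ev) {
		fmt.Println(n.name, "consumed", ev)
		return true
	}
	return false
}

func main() {
	window := named{"window", func(ev interface{}) bool { return ev == "close" }}
	tab := named{"tab", func(ev interface{}) bool { return ev == "switch-tab" }}
	entry := named{"entry", func(ev interface{}) bool { return true }} // innermost catch-all

	dispatch("switch-tab", window, tab, entry) // stops at the tab widget
	dispatch("x", window, tab, entry)          // falls through to the entry widget
}
```

Embedding a widget inside another is then just putting it later in the chain — which is the property that makes the "windowing system within a window" trick in Pike's design work.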
