That is, we will need the ability to emit a KeyEvent that carries only the key, or only the key_meaning, with the semantics of each case firmed up. Today, in practice, we either have the key without the key_meaning, or we have both filled in. Our spec is not clear about when either of these situations happens, nor what they mean when they do, which implies that anything goes and that the implementation is the spec.
The principled reason to do this is to decouple the two different communication intentions which we currently bundle together: one is reporting key events (pressed/released); the other is reporting key-event-based text edits.
A practical reason came up while I was working on dead key support, specifically around some interesting edge cases. Namely, what should happen if the user presses a dead key, and then presses a different dead key while the composition is still in progress? Or, for that matter, any of the other possible interleavings of keypresses which are logically equivalent to that one. One possible answer is that the first dead key press should be output as a standalone diacritic (since it won't ever compose with the other dead key), and then the composition should be restarted with the other dead key. However, this would require us to report two different events, while we only have a budget of one event to send out: the KeyEvent corresponding to the second dead key actuation. If we were allowed to decouple the key press from the key meaning, we could report the key meaning separately and do the right thing.
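To make the edge case concrete, here is a minimal sketch of what the decoupled handling could look like. The types (KeyEvent with optional key and key_meaning, KeyMeaning, on_dead_key) are hypothetical simplifications invented for illustration, not the real API; the point is only that flushing the pending diacritic requires a meaning-only event with no physical key attached.

```rust
// Hypothetical, simplified types for illustration only; the real
// definitions differ.
#[derive(Debug, Clone, PartialEq)]
enum KeyMeaning {
    Codepoint(char), // a character to emit as-is
    DeadKey(char),   // start of a composition, e.g. a circumflex
}

#[derive(Debug, Clone, PartialEq)]
struct KeyEvent {
    key: Option<u32>,                // physical key id, if any
    key_meaning: Option<KeyMeaning>, // interpreted meaning, if any
}

// A dead key is pressed while another dead key composition is pending:
// first flush the pending diacritic as a meaning-only event (there is no
// physical key press backing it), then report the new key press together
// with its dead key meaning, restarting the composition.
fn on_dead_key(pending: Option<char>, key: u32, diacritic: char) -> Vec<KeyEvent> {
    let mut out = Vec::new();
    if let Some(prev) = pending {
        // Decoupled case: key_meaning without a key.
        out.push(KeyEvent {
            key: None,
            key_meaning: Some(KeyMeaning::Codepoint(prev)),
        });
    }
    // Normal case: key and key_meaning together.
    out.push(KeyEvent {
        key: Some(key),
        key_meaning: Some(KeyMeaning::DeadKey(diacritic)),
    });
    out
}
```

With today's coupling, only the second of these two events can be expressed, which is exactly the budget problem described above.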
Alternatively, of course, a text editing API could handle the above case; but since we still must provide key-event-based text edits, we need some solution for this issue regardless. Hence the decoupling proposal.
Thoughts?