Matt Stine and I were talking tonight about the future of Editor and our tooling in general. Conventional desktop widgeting is done so frequently for game tools that it seems old hat to keep grinding on that design. We brainstormed a bit about how to mix in some web UI (HTML/CSS/JS), but perhaps not go so far as to just dump all the tools into a web browser window. From what I have heard from others doing that, it's a pretty mixed bag (having to put your renderer into a custom plugin that is hosted in some browser process). Things get weird really fast. This got me thinking about some alternatives.
I think it would be conceivable to use http://docs.wxwidgets.org/trunk/classwx_web_view.html to build a new hybrid app around the core engine and renderer. Using a custom helium:// URI scheme hooked into the new wxWidgets web view control, we could conceivably keep a really skinny MainFrame window for Editor that is principally just a render view (there could be multiple of these windows as well). The custom URI would allow for an in-process REST API, no sockets needed (though I suppose you could route it over a socket if you really wanted to). The actual editor GUI elements would then be hosted in MiniFrame tool windows, each occupied by a single wxWebView control. They would use the helium:// URI to load whatever "page" they wanted in the positioned window (there could just be a drop down or combo box at the top of each mini frame, or a context menu, that would select which client page to put into that window). Many tools already end up here just in a conventional widgeting toolkit (I am pretty sure Blender does this). Since each window is ostensibly controlled by our C++ code and wxWidgets, we can cache their size and location on the desktop between runs. This effectively makes Editor work like Photoshop on the Mac, where each render window is a document and the tool windows are free floating on whatever monitor the user wants. There would be no master MainFrame that is a parent of every window.
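To make the in-process REST idea concrete, here is a minimal sketch of the routing core such a scheme could sit on. All the names here (HeliumRouter, AddRoute, HandleRequest) are hypothetical; the assumption is that a custom wxWebViewHandler registered for the helium:// scheme would call HandleRequest() and wrap the returned string in a wxFSFile, so no sockets are involved:

```cpp
#include <functional>
#include <map>
#include <string>

// Hypothetical sketch: the core of an in-process "REST" layer. A custom
// wxWebViewHandler registered for the helium:// scheme would delegate each
// request here and serve the returned string back to the wxWebView.
class HeliumRouter {
public:
    using Handler = std::function<std::string()>;

    // Register a page generator for a path, e.g. "/scene/outline".
    void AddRoute(const std::string& path, Handler handler) {
        m_routes[path] = std::move(handler);
    }

    // Maps "helium://scene/outline" to the handler for "/scene/outline".
    std::string HandleRequest(const std::string& uri) const {
        const std::string scheme = "helium://";
        if (uri.compare(0, scheme.size(), scheme) != 0)
            return "<html><body>400 Bad Scheme</body></html>";
        const std::string path = "/" + uri.substr(scheme.size());
        auto it = m_routes.find(path);
        if (it == m_routes.end())
            return "<html><body>404 Not Found</body></html>";
        return it->second();
    }

private:
    std::map<std::string, Handler> m_routes;
};
```

Each MiniFrame's wxWebView would then just call LoadURL() with a helium:// address, and picking a different tool page is nothing more than loading a different URI into that same control.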
We also talked about the concept of a WebGL renderer and how that might work. Right now asset resources call into native OpenGL or D3D routines to allocate GPU resources for incoming asset data from disk, but a WebGL renderer could instead just allocate DOM elements that stand in for VBO/texture buffers, which in turn just get delivered a la page load or ajax call. Basically the Renderer interface just manages page elements instead of GPU memory allocations. It's neat to think about. The only bits we would probably need to upload back via the client page (into the engine proper) would be the input data, which we could handle the same way we would handle input from any other window. The game update would still happen completely in C++ via Components. At that point you could just host a game on real or virtual server hardware and have people play with a locally rendered but remotely updating game engine. It would need to be low latency for some styles of games, but not all games require lots of bandwidth or a quick roundtrip. Especially games that people could run in their mobile browser without any app install. Also, running in a mobile browser has low buy-in for linking from social networks, and it doesn't have any restriction about paying the platform provider for in-app transactions.
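As a sketch of what "managing page elements instead of GPU memory" might look like on the engine side, the renderer abstraction could hand back opaque handles, and the WebGL-backed implementation would never touch the GPU at all; it would just record commands describing client-side buffer objects to be shipped to the page. Every name below (Renderer, WebGLProxyRenderer, RendererCommand, Flush) is hypothetical, not the engine's actual API:

```cpp
#include <cstdint>
#include <string>
#include <utility>
#include <vector>

// A command describing a resource the client page should create; the
// transport layer would serialize these (e.g. as JSON) for the page to
// replay against WebGL, streaming the payload bytes on demand.
struct RendererCommand {
    std::string op;      // e.g. "createVBO", "createTexture"
    uint32_t    handle;  // opaque client-side object id
    size_t      bytes;   // payload size to stream later
};

// The engine-side abstraction: callers get handles, not GPU pointers.
class Renderer {
public:
    virtual ~Renderer() = default;
    virtual uint32_t AllocateVertexBuffer(size_t bytes) = 0;
    virtual uint32_t AllocateTexture(size_t bytes) = 0;
};

// WebGL-backed implementation: records commands instead of allocating.
class WebGLProxyRenderer : public Renderer {
public:
    uint32_t AllocateVertexBuffer(size_t bytes) override {
        return Record("createVBO", bytes);
    }
    uint32_t AllocateTexture(size_t bytes) override {
        return Record("createTexture", bytes);
    }
    // Drain commands queued since the last flush for the transport layer.
    std::vector<RendererCommand> Flush() {
        return std::exchange(m_pending, {});
    }

private:
    uint32_t Record(const std::string& op, size_t bytes) {
        const uint32_t h = m_nextHandle++;
        m_pending.push_back({op, h, bytes});
        return h;
    }
    uint32_t m_nextHandle = 1;
    std::vector<RendererCommand> m_pending;
};
```

The appeal of this shape is that the Components-driven game update never has to know whether handles point at real GPU buffers or at DOM stand-ins on a remote client.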
This all came up as ideas to potentially zag where other engines zig... try to mix things up to target platforms or styles of games that aren't as easy for engines that require complete integration with the OS (a native app).