Hi,
I’m a programming language researcher, and we are working on a new visual programming language.
I’m investigating using Blink as the rendering component of our language editing/visualization system.
Essentially, what I'd like to be able to do, programmatically from C++, is:

1: generate a DOM tree of a new family of Element-derived objects,
2: generate a rendering tree (with new node types that have custom rendering and layout behavior),
3: lay out the tree,
4: attach a GraphicsContext to an existing window or bitmapped resource, and
5: use Blink's GraphicsContext to render the custom rendering tree.

The custom Element-derived objects would be attached to the rendering tree and make use of it. I'd also like to respond to the standard events, i.e. mouse, keyboard, etc.
It looks like this would be possible using Blink as a library, because the render tree types (LayoutObject-derived) are defined as publicly exported (they have the CORE_EXPORT macro). I'm a lot more familiar with the old WebKit rendering architecture, but it looks like in Blink the rendering tree types are all LayoutObject subtypes, LayoutObject apparently having replaced WebKit's RenderObject.

Are there any design documents that discuss Blink's rendering architecture? The docs on the Blink/Chromium project site seem to be a bit different from what I can glean from the Blink source.

I would really prefer to use Blink in single-process mode, at least for our initial implementation. Would it be possible to create a Skia surface in memory, attach a GraphicsContext to it, and then pass the context to the LayoutObject::paint method directly?
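For concreteness, the Skia half of that is easy to sketch; the open question for me is the GraphicsContext wrapping, which seems to have changed across Blink revisions. The snippet below only shows the Skia side (include paths assume a Chromium checkout, and the exact factory name is approximate):

#include "third_party/skia/include/core/SkCanvas.h"
#include "third_party/skia/include/core/SkSurface.h"

// Create an in-memory, CPU-backed surface; nothing is attached to a window.
// (Older Skia spells this SkSurface::MakeRasterN32Premul; newer trees have
// moved the raster factories around.)
sk_sp<SkSurface> MakeOffscreenSurface(int width, int height) {
  return SkSurface::MakeRasterN32Premul(width, height);
}

// The part I can't pin down: wrapping surface->getCanvas() in a blink
// GraphicsContext and handing that to LayoutObject::paint(). Is there a
// supported way for an embedder to drive that entry point directly, or is
// paint() only meant to be called from Blink's own frame machinery?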
Hi Stephen,

Yes, this will be a stand-alone, compiled application. As this is part of a compiler, one of its goals is to compile source code and produce executables that users can run and distribute. But it's a visual programming language, so it needs a way to read source code, generate layout information, and display it on the screen.

A very simplified overview of how a traditional HTML rendering engine works is:

(html src) -> [parser] -> (DOM tree) -> (render tree) -> [painter] -> (output device)

I'm interested in re-using just the later stages of the layout/rendering process. We're developing a new programming language, and one of the outputs of the parser is intended to be a render tree made up of a family of LayoutObject-derived subtypes. The first stages of the process in our case start with source code (which has no relation to HTML), and the parser will know how to generate a render tree from the source abstract syntax tree (AST).

Basically, what I'm looking for is a canvas and a layout engine. I originally intended to use only Skia directly and create our own custom layout engine, but as Blink already has this, I think it's simply more efficient to re-use as much of it as possible.

So, essentially, our proposed rendering process will look like:

(source code) -> [parser] -> (AST) -> [attach style info] -> (render tree) -> [painter] -> (output device)

The first stage of the rendering process in our case has no HTML, but the AST is analogous to an HTML DOM. I'm planning on creating our own attach module (WebKit called it "attach"; I'm not sure if Blink renamed it) which combines an AST with style rules obtained from Blink's CSS parser to generate a render tree, which can then be given to Blink's layout and painting engines.

I'd like to be able to build Blink as a shared library and just use the LayoutObject-derived types and the GraphicsContext. The GraphicsContext looks very easy to attach to an existing Skia surface, which in turn is really easy to create.

I already have a very simplified set of LayoutObject-style classes working which can do some simple layout and draw themselves to a Skia canvas. This is working OK, but I'd like to be able to re-use the additional functionality in Blink's layout engine and style parser.
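To give an idea of the shape of what I already have, here is a stripped-down sketch of those classes. The names are my own placeholders, not Blink types; the point is just a tree of nodes that can lay themselves out and paint to an SkCanvas:

#include "third_party/skia/include/core/SkCanvas.h"
#include "third_party/skia/include/core/SkPaint.h"
#include "third_party/skia/include/core/SkRect.h"
#include <memory>
#include <vector>

// Placeholder node family, loosely modelled on LayoutObject; not Blink code.
class VisualNode {
 public:
  virtual ~VisualNode() = default;

  // Very naive layout: stack children vertically and grow to fit them.
  virtual void Layout() {
    float y = 0;
    for (auto& child : children_) {
      child->Layout();
      child->frame_ = SkRect::MakeXYWH(0, y, child->frame_.width(),
                                       child->frame_.height());
      y += child->frame_.height();
    }
    if (!children_.empty())
      frame_.fBottom = frame_.fTop + y;
  }

  // Paint this node's bounds, then recurse into children.
  virtual void Paint(SkCanvas* canvas) {
    SkPaint paint;
    paint.setStyle(SkPaint::kStroke_Style);
    canvas->drawRect(frame_, paint);
    canvas->save();
    canvas->translate(frame_.left(), frame_.top());
    for (auto& child : children_)
      child->Paint(canvas);
    canvas->restore();
  }

  void AddChild(std::unique_ptr<VisualNode> child) {
    children_.push_back(std::move(child));
  }

  SkRect frame_ = SkRect::MakeWH(200, 40);

 private:
  std::vector<std::unique_ptr<VisualNode>> children_;
};

Obviously I'd much rather throw this away and reuse Blink's real block/inline layout and style resolution than keep growing it myself.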
I still have 2 things I don't understand about your need.

- If this is a visual programming language, what are you using for the coding portion of the project? As I understand "visual programming", it means that a programmer manipulates graphical elements to "draw" the program flow. In other words, what is to the left of "source code" in your program flow diagram, and what is "source code"?

- How is creating layout objects directly helpful to you at all? The process of creating and maintaining a render tree implements the functionality you appear to want, so why are you writing custom code to create it rather than creating and updating styled HTML and letting the browser do its job?

Stephen.
The source on the left side is the programming language's source code. The trick is that it's intended to be visually edited, hence the need for hit detection. A component may be dragged from one region to another; this would trigger an event which would cause one branch of the AST to be pruned, modified, and re-attached to another branch. This in turn would trigger a regeneration of the render tree, and eventually a repaint. I've done something very similar in the past when I created a visual computer algebra system.

I see this as a very dynamic application, and all of the internal logic, which is a mix of C and JIT-compiled code from the new language, needs to interact with both the visual representation and the physics engine (the language is designed for real-time physics simulation). Ideally, I'd like to be able to render a visual representation of a language element to an OpenGL texture, and have this exist in the physics engine.
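To make that edit loop concrete, the flow I picture for a drag-and-drop edit is roughly the following (everything here is an illustrative placeholder of ours, not a Blink or Chromium API):

#include <algorithm>
#include <vector>

// Illustrative AST node; our real nodes carry language-level information.
struct AstNode {
  AstNode* parent = nullptr;
  std::vector<AstNode*> children;
};

// Prune `node` from its current parent and re-attach it under `new_parent`.
void Reparent(AstNode* node, AstNode* new_parent) {
  if (node->parent) {
    auto& siblings = node->parent->children;
    siblings.erase(std::remove(siblings.begin(), siblings.end(), node),
                   siblings.end());
  }
  new_parent->children.push_back(node);
  node->parent = new_parent;
}

// Hit detection tells us which AST node a drop landed on; after the AST edit
// we would regenerate the render tree, re-run layout, and schedule a repaint:
//   render_tree = BuildRenderTree(ast_root);
//   render_tree->Layout();
//   RequestRepaint();
void OnDrop(AstNode* dragged, AstNode* drop_target) {
  Reparent(dragged, drop_target);
}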
Is it possible to add native C++ objects wrapped with a V8 binding on the content layer side, and have these objects callable from JavaScript?

On the content side, from C++, is it possible to grab and access JavaScript objects?

If so, then I think the content layer approach might work. The key fundamental thing is that there is a LOT of functionality already implemented in our C++ code, such as the parser, the physics engine, etc. It would be straightforward for me to wrap all of this in V8 bindings; the important question is whether it's possible to pass these wrapped objects to the currently running V8 instance that's associated with the current web view.
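In other words, something along these lines, executed against the v8::Context of the page's main world. The V8 embedding calls here are from memory and the exact Maybe/Local signatures have shifted between V8 revisions, and Physics is just a stand-in for our existing native code:

#include <v8.h>

// Stand-in for existing native functionality we want to expose to script.
struct Physics {
  double Step(double dt) { return dt; }  // advance the simulation
};

// Called when JS invokes physics.step(dt).
void StepCallback(const v8::FunctionCallbackInfo<v8::Value>& args) {
  v8::Isolate* isolate = args.GetIsolate();
  v8::Local<v8::Context> context = isolate->GetCurrentContext();
  auto* self = static_cast<Physics*>(
      args.This()->GetInternalField(0).As<v8::External>()->Value());
  double dt = args[0]->NumberValue(context).FromMaybe(0.0);
  args.GetReturnValue().Set(v8::Number::New(isolate, self->Step(dt)));
}

// Inject an existing Physics instance as a global named "physics" into an
// already-running context (e.g. the one associated with the web view).
void ExposePhysics(v8::Isolate* isolate, v8::Local<v8::Context> context,
                   Physics* physics) {
  v8::HandleScope scope(isolate);
  v8::Context::Scope context_scope(context);

  v8::Local<v8::ObjectTemplate> templ = v8::ObjectTemplate::New(isolate);
  templ->SetInternalFieldCount(1);  // slot for the back-pointer to Physics
  templ->Set(isolate, "step",
             v8::FunctionTemplate::New(isolate, StepCallback));

  v8::Local<v8::Object> obj = templ->NewInstance(context).ToLocalChecked();
  obj->SetInternalField(0, v8::External::New(isolate, physics));

  context->Global()
      ->Set(context,
            v8::String::NewFromUtf8(isolate, "physics").ToLocalChecked(), obj)
      .Check();
}

// And the other direction: grab a JS value the page has defined globally.
v8::Local<v8::Value> GetJsGlobal(v8::Isolate* isolate,
                                 v8::Local<v8::Context> context,
                                 const char* name) {
  return context->Global()
      ->Get(context, v8::String::NewFromUtf8(isolate, name).ToLocalChecked())
      .ToLocalChecked();
}

The part I can't work out is where, in the content/Blink layering, I would get hold of that isolate and context for the view, and whether that has to happen inside the render process.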
So, in the Chrome multi-process model, the HTML DOM, render tree and renderer live in one process (the render process), and the main app which hosts the content layer lives in the main process.

Is there any documentation describing this? I managed to find the content docs, but they don't seem to indicate which parts of Chrome are separate processes, which are shared libraries, and which are statically linked.

I gather that one part of content is a shared library with a set of interfaces that Blink calls. Content embedders create one of these, implement its interfaces, and somehow make Blink aware of it. How exactly is Blink made aware of this module?
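My rough mental model, pieced together from how Content Shell appears to bootstrap itself, is something like the following; the class names and signatures are from memory and have changed between Chromium revisions, so please correct me if this is off:

#include "content/public/app/content_main.h"
#include "content/public/app/content_main_delegate.h"

// The embedder supplies a delegate; the content layer asks it for the
// browser-side and renderer-side client objects and calls into them (and
// into Blink) itself.
class MyMainDelegate : public content::ContentMainDelegate {
 public:
  content::ContentBrowserClient* CreateContentBrowserClient() override {
    return nullptr;  // would return our ContentBrowserClient subclass
  }
  content::ContentRendererClient* CreateContentRendererClient() override {
    return nullptr;  // would return our ContentRendererClient subclass
  }
};

int main(int argc, const char** argv) {
  MyMainDelegate delegate;
  content::ContentMainParams params(&delegate);
  params.argc = argc;
  params.argv = argv;
  // The same binary is re-invoked for each process type; ContentMain decides
  // whether this instance runs as the browser or a renderer.
  return content::ContentMain(params);
}

Is that roughly right, and is the delegate the place where I'd hook in my own renderer-side code?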
Then there's the render process, which I assume loads Blink. The browser process spawns a set of render processes.
1: [...] Since this is backed by native code which creates native windows, should these objects live in the render process or the browser process? How should I go about adding these objects to JS?
2: I'd like to create a top-level UI layer in native code and embed a Chrome window. I'd like my native code to be able to call JS code that is currently running in the HTML view, and to add listeners that JS could call back into.
3: If JS creates a WebGL canvas, I'd like the JS to call my existing native code with the OpenGL rendering context for that window, so my native code could draw into it. I assume this should be done in the render process.
I've embedded WebKit a number of times in Objective-C apps.
I’m still a bit fuzzy as to whether the browser process or the render process actually creates the native OS windows, listens for user input and does the drawing to the screen.
As I understand [...]
What about an OpenGL canvas, a video element, or plugins like Flash?
On that same note, it looks like the rendering process has a message loop.
With that, could a native-backed extension loaded in the render process create a new native OS window?
I guess this would require some trickery to get this window to appear as belonging to the main browser hosting process.