Blink render tree


Andy Somogyi

Jun 1, 2016, 10:45:30 AM
to blink-dev
Hi,

I’m a programming language researcher, and we are working on a new visual programming language. 

I’m investigating using Blink as the rendering component of our language editing/visualization system. 

Essentially, what I’d like to be able to do programmatically (from C++) is:

1: generate a DOM tree of a new family of Element-derived objects,
2: generate a rendering tree (with new node types that have custom rendering and layout behavior),
3: lay out the tree,
4: attach a GraphicsContext to an existing window or bitmapped resource, and
5: use Blink's GraphicsContext to render the custom rendering tree.

These custom Element-derived objects would be attached to, and make use of, the rendering tree. I’d also like to respond to the standard events, e.g. mouse, keyboard, etc.

It looks like this would be possible using Blink as a library, because the render tree types (LayoutObject-derived) are publicly exported (they have the CORE_EXPORT macro). I'm a lot more familiar with the old WebKit rendering architecture, but it looks like in Blink the rendering tree types are all LayoutObject subtypes; LayoutObject appears to have replaced WebKit's RenderObject.

Are there any design documents that discuss Blink's rendering architecture? The docs on the blink/chromium project site seem to describe something a bit different from what I can glean from the blink source.

I would really prefer to use Blink in single-process mode, at least for our initial implementation. Would it be possible to create a Skia surface in memory, attach a RenderingContext to it, and then pass the context directly to the LayoutObject::paint method?

thanks

-- Andy Somogyi PhD
School of Informatics and Computing
Indiana University

Stephen Chenney

Jun 1, 2016, 11:42:08 AM
to Andy Somogyi, blink-dev
Could you provide more context so we can help you best? In particular:

1) Are you creating a standalone application? If so, could you explain why you want to use web technologies yet don't want to implement it as a web app?

2) Blink is designed to be embedded in chromium's content layer, which provides services that Blink needs. Why are you not modifying Content Shell to do what you want?

Cheers,
Stephen.


Jeremy Roman

Jun 1, 2016, 12:06:23 PM
to Andy Somogyi, blink-dev
I'd advise against this strategy.

In particular, Blink (especially Source/core/) is not intended to be used as a library, and we may radically change how it works without warning.

I'd recommend instead using the content embedding layer (possibly through a third-party library such as CEF), and then use ordinary web development tools to construct your custom elements. (As a bonus, this would make it easier to port your UI to ordinary web browsers.) Or for even less headache from Chromium internals changing, you might consider writing it as a web application (with your other application logic in an HTTP server).

On Wed, Jun 1, 2016 at 10:45 AM, Andy Somogyi <andy.s...@gmail.com> wrote:
Hi,

I’m a programming language researcher, and we are working on a new visual programming language. 

I’m investigating using Blink as the rendering component of our language editing/visualization system. 

Essentially, what I’d like to be able to do programmatically (from C++) is:

1: generate a DOM tree of a new family of Element-derived objects,
2: generate a rendering tree (with new node types that have custom rendering and layout behavior),
3: lay out the tree,
4: attach a GraphicsContext to an existing window or bitmapped resource, and
5: use Blink's GraphicsContext to render the custom rendering tree.

These custom Element-derived objects would be attached to, and make use of, the rendering tree. I’d also like to respond to the standard events, e.g. mouse, keyboard, etc.

It looks like this would be possible using Blink as a library, because the render tree types (LayoutObject-derived) are publicly exported (they have the CORE_EXPORT macro). I'm a lot more familiar with the old WebKit rendering architecture, but it looks like in Blink the rendering tree types are all LayoutObject subtypes; LayoutObject appears to have replaced WebKit's RenderObject.

Yes, WebCore::RenderObject has become blink::LayoutObject.

CORE_EXPORT does not mean that the API is intended to be public. Rather, it means that the symbol is exported from the blink_core shared library (libblink_core.so, blink_core.dll, etc.) when a Chromium "component build" is used. Component builds aren't meant for public consumption, but rather to reduce build time during development; it is expected that these components be linked only with the rest of Chromium, built at the same revision, for development. 
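
To illustrate, CORE_EXPORT boils down to a symbol-visibility macro that is a no-op outside component builds. A simplified sketch (the real definition lives in Source/core/CoreExport.h and differs in detail):

// Simplified sketch of the CORE_EXPORT macro; not the exact Chromium source.
#if defined(COMPONENT_BUILD)
#if defined(WIN32)
#if defined(BLINK_CORE_IMPLEMENTATION)
#define CORE_EXPORT __declspec(dllexport)
#else
#define CORE_EXPORT __declspec(dllimport)
#endif
#else  // !defined(WIN32)
#define CORE_EXPORT __attribute__((visibility("default")))
#endif
#else  // !defined(COMPONENT_BUILD)
#define CORE_EXPORT
#endif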
 
Are there any design documents that discuss Blink's rendering architecture? The docs on the blink/chromium project site seem to describe something a bit different from what I can glean from the blink source.

Regrettably a number of our documents are out of date. It's something we hope to improve on in the future.
 

I would really prefer to use Blink in single-process mode, at least for our initial implementation. Would it be possible to create a Skia surface in memory, attach a RenderingContext to it, and then pass the context directly to the LayoutObject::paint method?

Blink relies on the compositor to achieve a number of effects. Single-process mode is fairly unsupported (except, I believe, for Android WebView); rendering without a compositor is even less so.

Andy Somogyi

Jun 1, 2016, 12:07:04 PM
to Stephen Chenney, blink-dev
Hi Stephen, 

Yes, this will be a standalone, compiled application. As this is part of a compiler, one of its goals is to compile source code and produce executables that users can run and distribute. But it's a visual programming language; hence, it needs a way to read source code, generate layout info, and display it on the screen.

A very simplified overview of how a traditional HTML rendering engine works is:

(html src) -> [parser] -> (DOM tree) -> (render tree) -> [painter] -> (output device)

I’m interested in re-using just the later stages of the layout/rendering process. We’re developing a new programming language, and one of the outputs of its parser is intended to be a render tree made up of a family of LayoutObject-derived subtypes. The first stages of the process in our case start with source code (which has no relation to HTML), and the parser will know how to generate a render tree from the source abstract syntax tree (AST).

Basically, what I’m looking for is a canvas and a layout engine.

I originally intended to use only Skia directly and create our own custom layout engine, but as Blink already has one, I think it's simply more efficient to re-use as much of it as possible.

So, essentially, our proposed rendering process will be like:

(source code) -> [parser] -> (AST) ->  [attach to style info] -> (render tree) -> [painter] -> (output device)

The first stage of the rendering process in our case has no HTML, but the AST is analogous to an HTML DOM. I’m planning on creating our own attach module (WebKit called this step "attach"; I'm not sure if Blink re-named it) which combines an AST with style rules obtained from Blink’s CSS parser to generate a render tree, which can then be given to Blink’s layout and painting engines.
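
To make that concrete, here's roughly the shape of the attach pass I have in mind (all names are placeholders for illustration; none of this is Blink API):

#include <memory>
#include <vector>

// Placeholder types, not Blink classes.
struct ComputedStyle { /* resolved style properties */ };

struct AstNode {
  std::vector<std::unique_ptr<AstNode>> children;
};

struct RenderNode {
  const AstNode* source = nullptr;  // back-pointer for hit detection
  ComputedStyle style;
  std::vector<std::unique_ptr<RenderNode>> children;
};

// In the real design this would consult rules produced by a CSS parser.
ComputedStyle ResolveStyle(const AstNode&) { return ComputedStyle{}; }

// The "attach" pass: pair each AST node with a resolved style and emit a
// render-tree node, recursing over the children.
std::unique_ptr<RenderNode> Attach(const AstNode& ast) {
  auto node = std::make_unique<RenderNode>();
  node->source = &ast;
  node->style = ResolveStyle(ast);
  for (const auto& child : ast.children)
    node->children.push_back(Attach(*child));
  return node;
}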

I’d like to be able to build Blink as a shared library, and just use the LayoutObject derived types and the GraphicsContext. The GraphicsContext looks very easy to attach to an existing Skia surface, which again is really easy to create.  
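
For reference, creating the in-memory surface is plain Skia, roughly like this (standalone Skia usage, not a Blink API; exact header paths and factory names vary by Skia version):

#include "SkCanvas.h"
#include "SkSurface.h"

int main() {
  // An 800x600 raster surface backed by main memory.
  sk_sp<SkSurface> surface = SkSurface::MakeRasterN32Premul(800, 600);
  SkCanvas* canvas = surface->getCanvas();
  canvas->clear(SK_ColorWHITE);
  // ... hand `canvas` to whatever paints the tree ...
  return 0;
}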

I already have a very very simplified set of LayoutObject type classes working which can do some simple layout and draw themselves to a Skia canvas. This is working OK, but I’d like to be able to re-use the additional functionality in Blink’s layout engine and style parser. 

Stephen Chenney

Jun 1, 2016, 1:09:36 PM
to Andy Somogyi, blink-dev, Jeremy Roman
On Wed, Jun 1, 2016 at 12:07 PM, Andy Somogyi <andy.s...@gmail.com> wrote:
Hi Stephen, 

Yes, this will be a standalone, compiled application. As this is part of a compiler, one of its goals is to compile source code and produce executables that users can run and distribute. But it's a visual programming language; hence, it needs a way to read source code, generate layout info, and display it on the screen.

A very simplified overview of how a traditional HTML rendering engine works is:

(html src) -> [parser] -> (DOM tree) -> (render tree) -> [painter] -> (output device)

I’m interested in re-using just the later stages of the layout/rendering process. We’re developing a new programming language, and one of the outputs of its parser is intended to be a render tree made up of a family of LayoutObject-derived subtypes. The first stages of the process in our case start with source code (which has no relation to HTML), and the parser will know how to generate a render tree from the source abstract syntax tree (AST).

Basically, what I’m looking for is a canvas and a layout engine.

I originally intended to use only Skia directly and create our own custom layout engine, but as Blink already has one, I think it's simply more efficient to re-use as much of it as possible.

So, essentially, our proposed rendering process will be like:

(source code) -> [parser] -> (AST) ->  [attach to style info] -> (render tree) -> [painter] -> (output device)

If your AST is analogous to DOM, why not write the AST into DOM+style and then serve it up to a browser? You can also write software to generate the DOM content and pass it directly into the Blink parser and renderer. There are examples of this throughout Chromium, including the code that creates menus for DOM elements like select or the date picker.

Have you looked at WebASM? Does it help you at all?
 

The first stage of the rendering process in our case has no HTML, but the AST is analogous to an HTML DOM. I’m planning on creating our own attach module (WebKit called this step "attach"; I'm not sure if Blink re-named it) which combines an AST with style rules obtained from Blink’s CSS parser to generate a render tree, which can then be given to Blink’s layout and painting engines.

I’d like to be able to build Blink as a shared library, and just use the LayoutObject derived types and the GraphicsContext. The GraphicsContext looks very easy to attach to an existing Skia surface, which again is really easy to create.  

I already have a very very simplified set of LayoutObject type classes working which can do some simple layout and draw themselves to a Skia canvas. This is working OK, but I’d like to be able to re-use the additional functionality in Blink’s layout engine and style parser. 

I still have two things I don't understand about your need.
- If this is a visual programming language, what are you using for the coding portion of the project? As I understand "visual programming", it means that a programmer manipulates graphical elements to "draw" the program flow. In other words, what is to the left of "source code" in your program flow diagram, and what is "source code"?
- How is creating layout objects directly helpful to you at all? The process of creating and maintaining a render tree implements the functionality you appear to want, so why are you writing custom code to create it rather than creating and updating styled HTML and letting the browser do its job?

Stephen.

Stefan Zager

Jun 1, 2016, 1:12:56 PM
to Andy Somogyi, blink-dev, ikilp...@chromium.org
This sounds like an interesting project, and I can see why you might want to integrate directly into blink.  However, I recommend that you consider a different approach: generate HTML from your AST, and feed it into the content layer.  For the rendering part of it, you can probably leverage the in-progress work to implement custom layout and paint in blink -- briefly, you can write javascript to place and render layout objects however you want.  I'm not sure how you could capture the graphic output at the other end, but that seems like a pretty solvable problem that won't require invasive changes to chromium/blink.

I really think this will be a much more tractable and sustainable approach than what you're proposing.  If you go mucking about in the style/layout/paint C++ code, you will have to fork blink, and you probably would never be able to un-fork or merge down ongoing blink changes; you would get hopelessly broken every time you tried.  You would wind up reinventing a lot of wheels and probably spend many hours mucking through some pretty impenetrable code.  Just getting the style resolver to work with your AST could be enough to send a mortal programmer over the brink.

The code name for the custom layout/paint project is "Houdini".  A web search for "houdini custom layout" should turn up some relevant info, and ikilpatrick@ (cc-ed) can tell you more.

Andy Somogyi

Jun 1, 2016, 2:05:57 PM
to Stephen Chenney, blink-dev, Jeremy Roman


On Jun 1, 2016, at 1:09 PM, Stephen Chenney <sche...@chromium.org> wrote:

I still have two things I don't understand about your need.
- If this is a visual programming language, what are you using for the coding portion of the project? As I understand "visual programming", it means that a programmer manipulates graphical elements to "draw" the program flow. In other words, what is to the left of "source code" in your program flow diagram, and what is "source code"?
- How is creating layout objects directly helpful to you at all? The process of creating and maintaining a render tree implements the functionality you appear to want, so why are you writing custom code to create it rather than creating and updating styled HTML and letting the browser do its job?

Stephen.

The source on the left side is programming-language source code.

Now the trick is that it's intended to be visually edited, hence the need for hit detection. So, a component may be dragged from one region to another. This would trigger an event which would cause one branch of the AST to be pruned, modified, and re-attached to another branch. This in turn would trigger a re-generation of the render tree, and eventually a repaint. I've done something very similar in the past when I created a visual computer algebra system.

I see this as a very dynamic application, and all of the internal logic, which is a mix of C and JIT-compiled code from the new language, needs to interact with both the visual representation and the physics engine (the language is designed for real-time physics simulation). Ideally, I'd like to be able to render a visual representation of a language element to an OpenGL texture, and have this exist in the physics engine.

Stefan Zager

Jun 1, 2016, 2:22:23 PM
to Andy Somogyi, Stephen Chenney, blink-dev, Jeremy Roman
On Wed, Jun 1, 2016 at 11:05 AM Andy Somogyi <andy.s...@gmail.com> wrote:


On Jun 1, 2016, at 1:09 PM, Stephen Chenney <sche...@chromium.org> wrote:

I still have two things I don't understand about your need.
- If this is a visual programming language, what are you using for the coding portion of the project? As I understand "visual programming", it means that a programmer manipulates graphical elements to "draw" the program flow. In other words, what is to the left of "source code" in your program flow diagram, and what is "source code"?
- How is creating layout objects directly helpful to you at all? The process of creating and maintaining a render tree implements the functionality you appear to want, so why are you writing custom code to create it rather than creating and updating styled HTML and letting the browser do its job?

Stephen.

The source on the left side is programming-language source code.

Now the trick is that it's intended to be visually edited, hence the need for hit detection. So, a component may be dragged from one region to another. This would trigger an event which would cause one branch of the AST to be pruned, modified, and re-attached to another branch. This in turn would trigger a re-generation of the render tree, and eventually a repaint. I've done something very similar in the past when I created a visual computer algebra system.

I see this as a very dynamic application, and all of the internal logic, which is a mix of C and JIT-compiled code from the new language, needs to interact with both the visual representation and the physics engine (the language is designed for real-time physics simulation). Ideally, I'd like to be able to render a visual representation of a language element to an OpenGL texture, and have this exist in the physics engine.


I still maintain that using the content layer as-is will be a more fruitful avenue.  You can implement drag-n-drop editing in javascript and use css animations to make blocks resize and slide around (flexbox is your friend).  Maybe you'd want to simplify dumping the DOM tree to an output file, and you'd still need to figure out the graphics capture part of it.  But compared to wiring everything into the C++ layer, implementing the editing stuff in javascript will be significantly less nightmare-ish.

Andy Somogyi

Jun 2, 2016, 1:23:11 AM
to Stefan Zager, Stephen Chenney, blink-dev, Jeremy Roman
Is it possible to add native C++ objects wrapped with a V8 binding on the content layer side, and have these objects callable from javascript?

On the content side, from C++, is it possible to grab and access JavaScript objects?

If so, then I think the content layer approach might work. 

But the key fundamental thing is that there is a LOT of functionality already implemented in our C++ code, such as the parser, physics engine, etc. It would be straightforward for me to wrap all of this in V8 bindings. But the important question is whether it's possible to pass these wrapped objects to the currently running V8 instance that's associated with the current web view.
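
To be concrete, the wrappers I mean are plain V8 embedder-API bindings, roughly like this (placeholder names; a sketch against the 2016-era V8 API):

#include <v8.h>

// Native function exposed to JS as stepSimulation(); in the real code this
// would call into the C++ physics engine.
static void StepSimulation(const v8::FunctionCallbackInfo<v8::Value>& info) {
  info.GetReturnValue().Set(true);
}

// Install the binding on the global object when creating a context.
v8::Local<v8::Context> CreateContext(v8::Isolate* isolate) {
  v8::Local<v8::ObjectTemplate> global = v8::ObjectTemplate::New(isolate);
  global->Set(v8::String::NewFromUtf8(isolate, "stepSimulation"),
              v8::FunctionTemplate::New(isolate, StepSimulation));
  return v8::Context::New(isolate, nullptr, global);
}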

Stefan Zager

Jun 2, 2016, 4:16:17 PM
to Andy Somogyi, Stefan Zager, Stephen Chenney, blink-dev, Jeremy Roman
On Wed, Jun 1, 2016 at 10:23 PM Andy Somogyi <andy.s...@gmail.com> wrote:
Is it possible to add native C++ objects wrapped with a V8 binding on the content layer side, and have these objects callable from javascript?

Yes; look at the various *.idl files in the repository, and the ScriptWrappable class.
 
On the content side, from C++, is it possible to grab and access JavaScript objects?

Yes, again, look at how ScriptWrappable is used.
 
If so, then I think the content layer approach might work. 

But the key fundamental thing is that there is a LOT of functionality already implemented in our C++ code, such as the parser, physics engine, etc. It would be straightforward for me to wrap all of this in V8 bindings. But the important question is whether it's possible to pass these wrapped objects to the currently running V8 instance that's associated with the current web view.

That should all be doable.  My recommendation is that you add a directory to src/third_party/WebKit/modules/, and put all of your source code in there.  Write .idl files to generate v8 bindings for your APIs.
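
For a rough idea of the C++ side, a minimal module class might look something like this (a sketch against the 2016-era Blink heap/bindings macros; all names here are hypothetical, and the actual bindings are generated from a matching .idl file):

// third_party/WebKit/Source/modules/mylang/MyLangEngine.h (hypothetical)
#include "bindings/core/v8/ScriptWrappable.h"
#include "platform/heap/Handle.h"

namespace blink {

// Exposed to JavaScript through a generated V8 binding; a matching
// MyLangEngine.idl would declare the interface.
class MyLangEngine final : public GarbageCollected<MyLangEngine>,
                           public ScriptWrappable {
  DEFINE_WRAPPERTYPEINFO();

 public:
  static MyLangEngine* create() { return new MyLangEngine; }

  // Exposed to JS as engine.version().
  int version() const { return 1; }

  DEFINE_INLINE_TRACE() {}
};

}  // namespace blink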

Andy Somogyi

Jun 3, 2016, 4:27:06 PM
to Stefan Zager, Stephen Chenney, blink-dev, Jeremy Roman
Thanks, 

Got a few questions about this approach. 

Just to make sure we’re on the same page with terminology, I’ll use ‘extension module’ to refer to classes created in src/third_party/WebKit/modules/. 

So, in the chrome multi-process model, the html dom, render tree, and renderer live in one process (the render process), and the main app which hosts the content layer lives in the main process.

When content layer code accesses active JS objects (these live in the render process, right?), does the entire object get marshaled across process boundaries?

I’d like to be able to create native windows (either NSWindow on OS X, or native W64 windows via CreateWindow on Windows), and I’d like to create a set of C++ objects with appropriate JavaScript bindings that currently executing JavaScript can create and interact with. It's trivial for me to create the V8 bindings; the bit that I'm not sure about is whether these should live in the main process or in the render process. If my v8 wrapped objects are created on the content side, I’m assuming there’s some API to add the definitions to the currently running v8 instance.

It seems like copying content-shell is a good way to start building a content app. I'd also be adding a subdirectory to src/third_party/WebKit/modules/. The Chromium checkout and build system is rather different from any git workflow I'm used to. Normally, what I'd do is have two remotes, my own repo and the upstream repo, and rebase onto the upstream repo every so often. Would this work with the depot_tools system, i.e. would git rebase-update and gclient sync have issues with my changes?

thanks

Avi Drissman

Jun 3, 2016, 4:36:23 PM
to Andy Somogyi, Stefan Zager, Stephen Chenney, blink-dev, Jeremy Roman
V8 bindings and stuff live in the render process. NSWindows and the like live in the browser process.

There's nothing that magically connects them. If you want to pass data from the render to the browser process or vice versa, you'll need to make the IPCs happen yourself. Our current IPC layer makes it relatively straightforward for you to write code to move C++ objects across the boundary (templatize a ParamTraits::Read/Write) and then you make IPC messages that take them as parameters.
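
For example, a message definition looks roughly like this (hypothetical message name, using the old-style macros; see ipc/ipc_message_macros.h):

#include <string>

#include "ipc/ipc_message_macros.h"

#define IPC_MESSAGE_START TestMsgStart

// Browser -> renderer: deliver a string payload to your code in the render
// process, routed to a specific frame.
IPC_MESSAGE_ROUTED1(MyAppMsg_UpdateSource, std::string)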

Avi

Avi Drissman

Jun 3, 2016, 4:43:12 PM
to Andy Somogyi, Stefan Zager, Stephen Chenney, blink-dev, Jeremy Roman
Also, your email shows a misunderstanding that I'd like to clarify.

On Fri, Jun 3, 2016 at 4:27 PM, Andy Somogyi <andy.s...@gmail.com> wrote:
So, in the chrome multi-process model, the html dom, render tree, and renderer live in one process (the render process), and the main app which hosts the content layer lives in the main process.

"which hosts the content layer" implies incorrect things.

Content encompasses both render processes as well as part of the browser ("main") process. When you say, "If my v8 wrapped objects are created on the content side", that isn't a meaningful statement. There is no "content side". There are only browser processes and render processes.

And if you're using content, you as a content embedder can run code in both browser and render processes without having that code be part of content. Content gives you lots of callbacks that allow you to customize its behavior.

Avi

Andy Somogyi

Jun 6, 2016, 11:14:41 AM
to Avi Drissman, Stefan Zager, Stephen Chenney, blink-dev, Jeremy Roman
Thanks, 

I’m finding the overall architecture a bit difficult to grasp. 

I did find the content docs, but they seem to be a bit slim. 

Is there any documentation describing it? I managed to find the content docs, but they don’t seem to indicate which parts of chrome are processes, which are shared libs, and which are statically linked.

I gather that one part of content is a shared library with a set of interfaces that blink calls. Content embedders create one of these, implement these interfaces, and somehow make blink aware of it. How exactly does blink get made aware of this module?

Then there’s the render process which I assume loads blink. The browser process spawns a set of render processes. 

Then there’s a set of interfaces that content embedders can implement in the main browser process. 

Basically what I’d like to do is

1: I have a large amount of code that creates native OS windows and has a lot of internal functionality, which is accessed via a couple of facade classes. I’d like to wrap these with a V8 wrapper and have them be available to javascript, and I’d also like the JS to listen for events coming from these objects. Making the V8 wrapper is super easy; that's already done, and I have it running in a stand-alone V8 engine. I’d just like for these objects to be available to JS in an html page.

Since this is backed by native code which creates native windows, should these objects live in the render process or the browser process? How should I go about adding these objects to JS?

2: I’d like to create a top level UI layer in native code, and embed a chrome window, and I’d like for my native code to call JS code that is currently running in the html view, and I’d like for my native code to add listeners that JS could call back into. 

3: If JS creates a WebGL canvas, I’d like for the JS to call my existing native code with the OpenGL rendering context for that window, so my native code could draw into it. I assume this should be done in the rendering process.

I’ve written these kinds of native code / html-javascript applications before, first embedding Internet Explorer in C++ applications, then embedding WebKit a number of times in Objective-C apps. Both IE and WebKit were very straightforward to embed and to interact with JS from. Sorry for the questions; I’m just finding that it's taking me a while to understand the Chrome architecture.

thanks 

Avi Drissman

Jun 6, 2016, 1:04:03 PM
to Andy Somogyi, Stefan Zager, Stephen Chenney, blink-dev, Jeremy Roman
On Mon, Jun 6, 2016 at 11:14 AM, Andy Somogyi <andy.s...@gmail.com> wrote:
Is there any documentation describing it? I managed to find the content docs, but they don’t seem to indicate which parts of chrome are processes, which are shared libs, and which are statically linked.

The documentation is a bit weak. If you want a primer, look at //content/shell. It's a simple embedder of content that puts up a web page in a window.
 
I gather that one part of content is a shared library with a set of interfaces that blink calls. Content embedders create one of these, implement these interfaces, and somehow make blink aware of it. How exactly does blink get made aware of this module?

That doesn't sound right to me. A content embedder creates WebContents objects and calls methods on them. There are observers that you can put on various objects in content, as well as a general set of callbacks (the ContentClient).

There shouldn't ever be a time that a content embedder has to deal with Blink directly.
 
Then there’s the render process which I assume loads blink. The browser process spawns a set of render processes. 

Yes, and this is mostly invisible to you as the content embedder.
 
1: [...] Since this is backed by native code which creates native windows, should these objects live in the render process or the browser process? How should I go about adding these objects to JS?

Your call. You can put them on either side, but you have to add IPCs to communicate between your code on both sides of the divide.

Let me pull a random example from Chrome to show you how this goes.

ContentRendererClient has a function called RenderFrameCreated that's called when a RenderFrame is created. Chrome is an embedder of content, and in the ChromeContentRendererClient::RenderFrameCreated override, it creates a ContentSettingsObserver. The ContentSettingsObserver is a RenderFrameObserver, so it gets access to incoming IPCs.

Pick a random IPC that it listens to, for example, ChromeViewMsg_ReloadFrame. You can see that on the browser side, there is code in Chrome that says:

web_contents()->GetMainFrame()->Send(new ChromeViewMsg_ReloadFrame(
      web_contents()->GetMainFrame()->GetRoutingID()));

That code is in Chrome, the IPC gets routed around via content to the render process, and then gets passed back to Chrome code in the render process for handling.

You get to build your IPCs to hold whatever info you want. If you want to pass the objects themselves over IPC, that's simple to do in our IPC code; use the ParamTraits::Read/Write mechanism.
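
A minimal ParamTraits specialization for a custom struct looks roughly like this (hypothetical type; see ipc/ipc_message_utils.h for the real examples):

#include <string>

#include "base/pickle.h"
#include "ipc/ipc_param_traits.h"

struct MyAppPoint {
  double x;
  double y;
};

namespace IPC {

// Teaches the IPC layer to serialize/deserialize MyAppPoint.
template <>
struct ParamTraits<MyAppPoint> {
  typedef MyAppPoint param_type;
  static void Write(base::Pickle* m, const param_type& p) {
    m->WriteDouble(p.x);
    m->WriteDouble(p.y);
  }
  static bool Read(const base::Pickle* m,
                   base::PickleIterator* iter,
                   param_type* r) {
    return iter->ReadDouble(&r->x) && iter->ReadDouble(&r->y);
  }
  static void Log(const param_type& p, std::string* l) {}
};

}  // namespace IPC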
 
2: I’d like to create a top level UI layer in native code, and embed a chrome window, and I’d like for my native code to call JS code that is currently running in the html view, and I’d like for my native code to add listeners that JS could call back into. 

I'm sure this is possible as we do it; it's a little out of my area of expertise so I can't point you at an example.
 
3: If JS creates a WebGL canvas, I’d like for the JS to call my existing native code with the OpenGL rendering context for that window, so my native code could draw into it. I assume this should be done in the rendering process.

Graphics are well out of my knowledge area, sorry.
 
embedding WebKit a number of times in Objective-C apps.

Think of content's WebContents as analogous to WKWebView. You create a WebContents, you get its native view, and you drop it into your view hierarchy. That part is the same. The details of interacting with a multi-process architecture are a bit different, though.

Avi

Andy Somogyi

Jun 7, 2016, 12:36:53 PM
to blink-dev
I’m trying to understand how the Chrome content layer works, specifically how content_shell gets built, and where exactly the native OS windows are created.

I’m still a bit fuzzy as to whether the browser process or the render process actually creates the native OS windows, listens for user input and does the drawing to the screen.

As I understand it, there’s a RenderWidgetHostView, which is basically a native OS window hosted in the browser process. This is connected to a corresponding RenderWidget in the render process. All of the DOM, JS, rendering, etc. takes place in the render process, and the RenderWidget is essentially an off-screen pixel buffer to which the html is drawn. Each time it’s updated, it sends an IPC message to the browser process with some sort of pixel data that the RenderWidgetHostView then displays on the screen. The RenderWidgetHostView listens for native input, and sends it via IPC to its corresponding RenderWidget in the render process.


Now, I’m trying to understand how the browser and render processes get built. Looking at the BUILD.gn (in content_shell), two apps get built; the relevant rules are pasted below.

But both of them appear to be built from only app/shell_main.cc, and both appear to link to the same framework. My question is: how does the distinction get made as to whether it runs as a browser or helper app?



mac_app_bundle("content_shell_helper_app") {
testonly = true
output_name = content_shell_helper_name
sources = [
"app/shell_main.cc",
]
deps = [
":content_shell_framework+link",
]
ldflags = [
# The helper is in Content Shell.app/Contents/Frameworks/Content Shell Helper.app/Contents/MacOS/
# so set rpath up to the base.
"-rpath",
"@loader_path/../../../../../..",
]
info_plist_target = ":content_shell_helper_plist"
}


And


mac_app_bundle("content_shell") {
testonly = true
output_name = content_shell_product_name
sources = [
"app/shell_main.cc",
]
deps = [
":content_shell_framework_bundle_data",
":content_shell_resources_bundle_data",

# TODO(rsesek): Remove this after GYP is gone, since it only needs to
# be here per the comment in blink_test_platform_support_mac.mm about
# the bundle structure.
"//components/test_runner:resources",
]
info_plist_target = ":content_shell_plist"
}

Torne (Richard Coles)

Jun 7, 2016, 12:50:51 PM
to Andy Somogyi, blink-dev
Both processes run the same binary. There isn't a browser process app and a renderer process app. There's a command line flag used to tell it what to run as.
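
Roughly, the dispatch looks like this (a simplified sketch of what content does; see content/app/content_main_runner.cc and content/public/common/content_switches.h):

#include <string>

#include "base/command_line.h"
#include "content/public/common/content_switches.h"

// The browser launches child processes with --type=renderer,
// --type=gpu-process, etc.; an empty --type means "run as the browser".
std::string GetProcessType() {
  const base::CommandLine& cmd = *base::CommandLine::ForCurrentProcess();
  return cmd.GetSwitchValueASCII(switches::kProcessType);
}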

Avi Drissman

Jun 7, 2016, 1:07:24 PM
to Andy Somogyi, blink-dev
On Tue, Jun 7, 2016 at 12:36 PM, Andy Somogyi <andy.s...@gmail.com> wrote:
I’m still a bit fuzzy as to whether the browser process or the render process actually creates the native OS windows, listens for user input and does the drawing to the screen.

All that happens in the browser process. The render process is sandboxed. It has no access to anything.
 
As I understand [...]

That understanding is all essentially correct. (The graphics pipeline hasn't actually been that simple for years, but that mental model will get you most of the way.)

Avi

Andy Somogyi

Jun 8, 2016, 2:41:29 AM
to Avi Drissman, blink-dev
Thanks, that makes a lot more sense now.

What about an OpenGL canvas, or the video element, or plugins like Flash?

It seems like a huge amount of overhead for the GL canvas to render offscreen, copy that pixel buffer from the graphics board to main memory, pipe it to another process, and finally blit it back to the screen.

Same with video.

On that same note, it looks like the rendering process has a message loop. With that, could a native backed extension loaded in the render process create a new native OS window? I guess this would require some trickery to get this window to appear as belonging to the main browser hosting process.

Sent from my iPhone

Andy Somogyi

Jun 8, 2016, 4:06:31 AM
to Avi Drissman, blink-dev
On analyzing the multi-process architecture further, I just don't think it will work for my needs. I understand the security advantage; however, in my case I have complete control over the DOM creation, with no external content.

I fundamentally need JavaScript code to access my native-backed windows in the same process.

That being said, I understand that when chrome is the content embedder, single-process mode is discouraged. However, Android's WebView is single process.

So, using the content API, is it supported to create a renderer and view all in one single process?

Would it be possible to get some tips on setting up a single-process model and attaching it to a native window (either an HDC or CGWindow)?

Thanks 



Sent from my iPhone

Torne (Richard Coles)

Jun 8, 2016, 5:01:38 AM
to Andy Somogyi, Avi Drissman, blink-dev
Single-process mode is only regularly used/tested in Android WebView (and some tests). On other OSes/configurations, things may be broken, and depending on it for a real product is probably not a good idea.

Avi Drissman

Jun 8, 2016, 10:57:48 AM
to Andy Somogyi, blink-dev
On Wed, Jun 8, 2016 at 2:41 AM, Andy Somogyi <andy.s...@gmail.com> wrote:
What about an OpenGL canvas, or the video element, or plugins like Flash?

I worked on the graphics pipeline back when it was literally an array of pixels sent from the render process to the browser process over IPC. It hasn't been that way for five years now, at least. I literally have zero knowledge there, and can't illuminate this area for you. Sorry.
 
On that same note, it looks like the rendering process has a message loop.

Yes; something has to keep the events flowing.
 
With that, could a native backed extension loaded in the render process create a new native OS window?

First, it's probably the wrong type of message loop. There are different kinds. The one running the browser process is backed by the OS, usually, so that it can do things like windows and OS integration. The one running the render process is a simpler, more stripped-down one.

Second, even if you were running a native OS message loop, the renderer process is sandboxed. That means that it can't reach out to the OS to make things happen. That's a core part of the security of Chromium. All your OS UI calls are going to fail.
 
I guess this would require some trickery to get this window to appear as belonging to the main browser hosting process.

Yeah. That's kind of a third point: even if you didn't have the sandbox, it is a separate process.

The moral of the story is that we pay a high price in complexity to maintain the multiprocess model. We think that it's definitely worth it for the S's (we need sandboxes for Security, we need individual processes that can crash without taking down the whole for Stability, we need the ability to throttle processes for Speed), but it does make the learning curve much steeper.

Avi

Stefan Zager

Jun 8, 2016, 4:53:32 PM
to Andy Somogyi, Avi Drissman, blink-dev
As other posters have noted, the single-process model is unsupported for desktop.  Anyways, I don't believe that would solve your problem.

Despite your insistence, I am skeptical that your project really requires javascript bindings to run in the same process as the native display.  Can you explain?  Why is it so important to have control over the graphics context?  Are you doing some post-processing or analysis of the rendered pixels?

Andy Somogyi

Jun 8, 2016, 5:25:00 PM
to Stefan Zager, Avi Drissman, blink-dev
Hi Stefan, 

First off, I’ve only been studying the Chrome architecture for about a week now, so I don't understand it very well yet, and I'm certain I have many misconceptions and misunderstandings about it.

However, I am very familiar with embedding and interacting with Internet Explorer as a COM component, and with embedding and interacting with WebKit as an Objective-C component, accessing and modifying the DOM via COM/Objective-C, and adding new COM- or JavaScriptCore-wrapped C++ objects on both platforms.

What I would like to be able to do is have javascript interact with our existing C++ 3D game engine. This engine has a public API which creates windows, edits a scene, runs a physics simulation thread, and lets callers listen for events. It's very easy for me to manually create V8 wrappers for this API and have these wrappers accept JS callback functions; no problem here. I would like the JS to be able to create, manipulate, and listen for events from this native-backed content.


I would also like the same JS to create lots of UI windows via the standard w3c window.open method, add content to these windows, and listen for events.

In response to events from either the native-backed game engine windows (I understand these can only be created in the main process) AND html events, I'd like the JS to interact with either these game engine windows or the html DOM.

I’m afraid I don’t understand how I can add JS bindings to native code on the browser side, and have JS code on the render side interact with it. 

Stefan Zager

Jun 8, 2016, 6:06:14 PM
to Andy Somogyi, Stefan Zager, Avi Drissman, blink-dev
Sounds like what you really want is React and emscripten, but if that's not performant enough (or you are a glutton for punishment), then my recommendation is that you make your engine paint to the chrome compositor, and let the chrome compositor take care of getting it to the native display.  HTMLCanvasPainter.cpp might get you pointed in the right direction.

Lucas Gadani

Jun 8, 2016, 6:24:43 PM
to Andy Somogyi, Stefan Zager, Avi Drissman, blink-dev
There are many ways to do this, but if you want to look at an example, ExtensionFunction does basically what you want; it allows you to define a javascript API and automatically handles the IPC to the browser process.

They are not available in content_shell, since it's part of extensions/, but you can check the implementation in src/extensions/browser/extension_function.h to get an idea of how this can be accomplished.
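
A sketch of the shape (hypothetical function name; the real base classes are in that header, and details differ by Chromium revision):

#include "extensions/browser/extension_function.h"

// Hypothetical JS API ("myApp.stepSimulation") backed by browser-process
// code; the extensions bindings layer handles the renderer->browser IPC.
class MyAppStepSimulationFunction : public UIThreadExtensionFunction {
 public:
  DECLARE_EXTENSION_FUNCTION("myApp.stepSimulation", UNKNOWN)

 protected:
  ~MyAppStepSimulationFunction() override {}

  // Runs on the UI thread in the browser process.
  ResponseAction Run() override {
    return RespondNow(NoArguments());
  }
};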

Andy Somogyi

Jun 8, 2016, 6:50:10 PM
to Lucas Gadani, Stefan Zager, Avi Drissman, blink-dev
Thanks Lucas

This looks like exactly what I've been looking for.

Sent from my iPhone