GRC and service architecture


Oystein Eftevaag

May 9, 2017, 4:38:29 PM5/9/17
to services-dev

Hi services-dev folks,

 

We (primiano@ and I) recently did some brainstorming around GRC architecture with rockot@ and we wanted to bring the discussion to a wider audience since it seems it’ll probably be something that’ll come up for future servicification/GRC work as well.

 

Background: GRC goals.

GRC is a foundational service. It will receive input from content, Blink, mus+ash, etc. Think of it as lightweight tracing extended to the service boundaries, with the novel use case of looping performance-metrics data back into Chrome to inform run-time decisions. The heuristics it needs in order to hint at optimal resource usage necessarily involve domain-specific concepts like processes, tabs, frames, audio state, layers, etc.

This is to solve end-to-end performance problems like: if we know how many processes we have, and which of them are hosting the foreground frames, we can hint to V8 to give more GC headroom (and hence reduce pauses) on the active frames, throttle background frames, hint the compositor to drop tiles for tabs that have not been used recently, etc.

 

Concrete problem that triggered this discussion:

primiano@ is moving Memory-Infra into the GRC service. Today (pre-GRC) MI dumps raw information into the trace, and that data is consumed offline (by telemetry and the chrome://tracing UI). We now need to have that information available within Chrome. The driving use cases are: (i) fixing memory UMA (go/memory-uma); (ii) making MI data available to the MemoryCoordinator so it can take the proper decisions. To do this we need to know what a “GPU process”, a “V8 heap”, a “font cache” and so on are.

primiano@ was about to plumb an enum inside GRC of the form ProcessType { kGpuProcess, kBrowserProcess, kRendererProcess, kUtilityProcess, kOtherUnknownProcess} to implement an API of the form “here’s the memory snapshot for this process”. In turn this is to ultimately summarize those snapshots into UMA/UKM keys of the form “Memory.GPU”, “Memory.Renderer” etc.
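To make that concrete, here is a rough sketch (illustration only: the enum values and the “Memory.GPU”/“Memory.Renderer” keys are the ones above; everything else, including the helper name and the remaining key names, is made up):

// Illustrative sketch only; not the actual CL.
enum class ProcessType {
  kGpuProcess,
  kBrowserProcess,
  kRendererProcess,
  kUtilityProcess,
  kOtherUnknownProcess,
};

// Hypothetical mapping used when summarizing per-process snapshots
// into UMA/UKM keys.
const char* UmaKeyForProcessType(ProcessType type) {
  switch (type) {
    case ProcessType::kGpuProcess:
      return "Memory.GPU";
    case ProcessType::kRendererProcess:
      return "Memory.Renderer";
    case ProcessType::kBrowserProcess:
      return "Memory.Browser";   // Guessed key name.
    case ProcessType::kUtilityProcess:
      return "Memory.Utility";   // Guessed key name.
    case ProcessType::kOtherUnknownProcess:
      return "Memory.Other";     // Guessed key name.
  }
  return "Memory.Other";
}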

This raised some concerns of the form “A service shouldn’t know about the existence of renderers and the GPU process. Can you architect this in a more generic way?”.

 

Proposal

While we understand the architectural concerns around the service vision (specifically foundational services), we have a number of speed-related problems to face every day. Our developers are constantly struggling with plumbing boilerplate, and concerns are arising about the maintenance cost of several instrumentation frameworks (see V8ScriptRunner::RunCompiledScript as a key example of four different probes wrapping a single script call).

 

Speed is by nature a cross-cutting problem, and we cannot easily solve it while pretending to know nothing about the architecture of Chrome.

We are not stating that it is impossible to architect GRC in a way that reflects the service layering. Our genuine fear is that enforcing that sort of architecture right now is premature and will slow down progress with big upfront designs.

 

Until both the servicification and GRC have more use cases, it would be great if we could keep things simple and revisit the internal architecture once all the use cases have been consolidated.

 

To clarify we are NOT talking about layering violations in terms of actual code dependencies (read: we are NOT talking about having GRC service depending on content/).

We are talking about being able to model Chrome-specific concepts in GRC (read: having Mojo structs that represent tabs, frames, and layers) in order to receive the right signals and then formulate the right hints.

 

We are aiming for an architecture for GRC which:

  • Enables us to formulate resource usage hints in one central location. This gives our developers velocity: they can quickly iterate on prototypes and capture data for experiments.

  • Doesn’t re-create all of the current layering plumbing that’s needed when, for example, getting an event up from Blink through //content all the way up to //chrome.

  • Doesn’t require us to over-design GRC until the use cases we care about are there.

 

Let us know what you think!


John Abd-El-Malek

May 9, 2017, 6:05:45 PM5/9/17
to Oystein Eftevaag, services-dev
On Tue, May 9, 2017 at 1:38 PM, 'Oystein Eftevaag' via services-dev <servic...@chromium.org> wrote:

Hi services-dev folks,

 

We (primiano@ and I) recently did some brainstorming around GRC architecture with rockot@ and we wanted to bring the discussion to a wider audience since it seems it’ll probably be something that’ll come up for future servicification/GRC work as well.

 

Background: GRC goals.

GRC is a foundational service. It will receive input from content, Blink, mus+ash, etc. Think of it as lightweight tracing extended to the service boundaries, with the novel use case of looping performance-metrics data back into Chrome to inform run-time decisions. The heuristics it needs in order to hint at optimal resource usage necessarily involve domain-specific concepts like processes, tabs, frames, audio state, layers, etc.

This is to solve end-to-end performance problems like: if we know how many processes we have, and which of them are hosting the foreground frames, we can hint to V8 to give more GC headroom (and hence reduce pauses) on the active frames, throttle background frames, hint the compositor to drop tiles for tabs that have not been used recently, etc.

 

Concrete problem that triggered this discussion:

primiano@ is moving Memory-Infra into the GRC service. Today (pre-GRC) MI dumps raw information into the trace, and that data is consumed offline (by telemetry and the chrome://tracing UI). We now need to have that information available within Chrome. The driving use cases are: (i) fixing memory UMA (go/memory-uma); (ii) making MI data available to the MemoryCoordinator so it can take the proper decisions. To do this we need to know what a “GPU process”, a “V8 heap”, a “font cache” and so on are.

primiano@ was about to plumb an enum inside GRC of the form ProcessType { kGpuProcess, kBrowserProcess, kRendererProcess, kUtilityProcess, kOtherUnknownProcess} to implement an API of the form “here’s the memory snapshot for this process”. In turn this is to ultimately summarize those snapshots into UMA/UKM keys of the form “Memory.GPU”, “Memory.Renderer” etc.


For this example, where is the code that does the UMA/UKM? i.e. in the GRC or in a consumer? If it's a consumer, then maybe an option is to have the process types as strings, and the consumer can translate the string into an enum. If it's the GRC, is it possible to prefix/append the process name to the UMA/UKM keys? If that's not possible, then I think it's fine. Ken and I chatted about this; other places like service_manager will have to know what the different processes' purposes are, for sandboxing needs.
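Roughly something like this on the consumer side (just a sketch; the strings and names here are made up):

#include <string>

enum class ProcessType { kGpu, kBrowser, kRenderer, kUtility, kUnknown };

// Consumer-side translation of a free-form process-type string coming
// out of the service; the string values are illustrative.
ProcessType ProcessTypeFromString(const std::string& type) {
  if (type == "gpu") return ProcessType::kGpu;
  if (type == "browser") return ProcessType::kBrowser;
  if (type == "renderer") return ProcessType::kRenderer;
  if (type == "utility") return ProcessType::kUtility;
  return ProcessType::kUnknown;
}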

 

This raised some concerns of the form “A service shouldn’t know about the existence of renderers and the GPU process. Can you architect this in a more generic way?”.

 

Proposal

While we understand the architectural concerns around the service vision (specifically foundational services), we have a number of speed-related problems to face every day. Our developers are constantly struggling with plumbing boilerplate, and concerns are arising about the maintenance cost of several instrumentation frameworks (see V8ScriptRunner::RunCompiledScript as a key example of four different probes wrapping a single script call).

 

Speed is by nature a cross-cutting problem, and we cannot easily solve it while pretending to know nothing about the architecture of Chrome.

We are not stating that it is impossible to architect GRC in a way that reflects the service layering. Our genuine fear is that enforcing that sort of architecture right now is premature and will slow down progress with big upfront designs.

 

Until both the servicification and GRC have more use cases, it would be great if we could keep things simple and revisit the internal architecture once all the use cases have been consolidated.


I sympathize that bringing Chrome-specific code into //services now comes with these extra constraints. FWIW, many of us are dealing with this.

I do think we want to be careful, as this argument is a slippery slope. It'd be great to send other examples to this list so we can discuss it, if they come up.

 

To clarify we are NOT talking about layering violations in terms of actual code dependencies (read: we are NOT talking about having GRC service depending on content/).

We are talking about being able to model Chrome-specific concepts in GRC (read: having Mojo structs that represent tabs, frames, and layers) in order to receive the right signals and then formulate the right hints.


FWIW this seems fine to me.
 

 


prim...@chromium.org

May 10, 2017, 5:42:09 AM5/10/17
to services-dev, oyst...@google.com
On Tuesday, May 9, 2017 at 11:05:45 PM UTC+1, John Abd-El-Malek wrote:


On Tue, May 9, 2017 at 1:38 PM, 'Oystein Eftevaag' via services-dev <servic...@chromium.org> wrote:

Hi services-dev folks,

 

We (primiano@ and I) recently did some brainstorming around GRC architecture with rockot@ and we wanted to bring the discussion to a wider audience since it seems it’ll probably be something that’ll come up for future servicification/GRC work as well.

 

Background: GRC goals.

GRC is a foundational service. It will receive input from content, Blink, mus+ash, etc. Think of it as lightweight tracing extended to the service boundaries, with the novel use case of looping performance-metrics data back into Chrome to inform run-time decisions. The heuristics it needs in order to hint at optimal resource usage necessarily involve domain-specific concepts like processes, tabs, frames, audio state, layers, etc.

This is to solve end-to-end performance problems like: if we know how many processes we have, and which of them are hosting the foreground frames, we can hint to V8 to give more GC headroom (and hence reduce pauses) on the active frames, throttle background frames, hint the compositor to drop tiles for tabs that have not been used recently, etc.

 

Concrete problem that triggered this discussion:

primiano@ is moving Memory-Infra into the GRC service. Today (pre-GRC) MI dumps raw information into the trace, and that data is consumed offline (by telemetry and the chrome://tracing UI). We now need to have that information available within Chrome. The driving use cases are: (i) fixing memory UMA (go/memory-uma); (ii) making MI data available to the MemoryCoordinator so it can take the proper decisions. To do this we need to know what a “GPU process”, a “V8 heap”, a “font cache” and so on are.

primiano@ was about to plumb an enum inside GRC of the form ProcessType { kGpuProcess, kBrowserProcess, kRendererProcess, kUtilityProcess, kOtherUnknownProcess} to implement an API of the form “here’s the memory snapshot for this process”. In turn this is to ultimately summarize those snapshots into UMA/UKM keys of the form “Memory.GPU”, “Memory.Renderer” etc.


For this example, where is the code that does the UMA/UKM? i.e. in the GRC or in a consumer?

We are writing/discussing these CLs as we speak, but right now the plan is to:
1) Definitely expose a method in the service that returns a snapshot of the various processes. The method is already here; what's missing is an extra argument that returns the actual memory data (today it returns only a bool |success| that says whether the dump has been injected into the trace or not).
2) At that point, (1) could be used either from within GRC or from //chrome/browser/metrics/metrics_memory_details.cc, whichever is more convenient. I haven't written the code for this yet, but right now I think the former is slightly more convenient (so, say, expose another method in GRC of the form void CreateMemoryDumpAndCreateUMAHistogram()); rough sketch below.
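Roughly, the shape I have in mind is something like this (all names and signatures here are placeholders for the CLs under discussion, not final APIs):

#include <functional>
#include <vector>

// Placeholder for the per-process memory data the extra argument would
// carry (OS counters + Chrome-specific counters).
struct ProcessMemorySnapshot {};

struct GlobalMemoryDumpResult {
  bool success = false;                      // Today's only output.
  std::vector<ProcessMemorySnapshot> dumps;  // The data we want to add.
};

class Coordinator {
 public:
  virtual ~Coordinator() = default;

  // (1) The existing snapshot request, extended to hand back the data.
  virtual void RequestGlobalMemoryDump(
      std::function<void(GlobalMemoryDumpResult)> callback) = 0;

  // (2) Convenience entry point if the UMA summarization lives in GRC.
  virtual void CreateMemoryDumpAndCreateUMAHistogram() = 0;
};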

 
If it's a consumer, then maybe an option is to have the process types as strings, and the consumer can translate the string into an enum.
 
We could work around this specific issue as you suggest, although that means that the final client (say chrome/browser/metrics if we go for the second approach) has to know that it has to string-match against kProcessTypeGpu & co.
However, I feel we are just pushing the problem off by a few weeks and it's going to come back soon.
As part of the memory snapshot we are going to express both general concepts ("this is the total memory usage") and chrome-specific concepts ("this is the size of the V8 heaps").
Actually, now that I think about it, it looks like we have already jumped this fence; in fact //services/resource_coordinator/public/interfaces/memory/memory_instrumentation.mojom already has:

struct OSMemDump {
  uint32 resident_set_kb = 0;
  PlatformPrivateFootprint platform_private_footprint;
};

struct ChromeMemDump {
  uint32 malloc_total_kb = 0;
  uint32 partition_alloc_total_kb = 0;
  uint32 blink_gc_total_kb = 0;
  uint32 v8_total_kb = 0;
};
 
I suppose it didn't attract much architectural attention during the reviews since these were just uint32s, but it seems we are already there in terms of crossing conceptual layers.

If it's the GRC, is it possible to prefix/append the process name to the UMA/UKM keys? If that's not possible, then I think it's fine. Ken and I chatted about this; other places like service_manager will have to know what the different processes' purposes are, for sandboxing needs.
 
UKM is going to be trickier because there the keys are not freeform; we have to fill a proto where the keys are determined a priori. I'd like to avoid a state where, every time something needs to know about Chrome-specific concepts, we have to plumb something from //services to //chrome. I don't see a clear advantage, as we'd just be cheating with patterns like "who is the GPU process? I am just a string that happens to contain ['g','p','u']", while on the other hand we'd be adding more marshaling, string copies and IPC round trips.
 

 


Colin Blundell

May 10, 2017, 5:42:47 AM5/10/17
to John Abd-El-Malek, Oystein Eftevaag, services-dev
Would it be technically feasible to layer the concept of "informing run-time decisions based on performance metrics" on top of the concept of "tracing"? Naively, the latter seems to me like the one that should be generic, whereas it makes sense that the former has to have knowledge of/assumptions about what's going on in its environment in order to make those run-time decisions.


Primiano Tucci

May 10, 2017, 6:44:33 AM5/10/17
to Colin Blundell, John Abd-El-Malek, Oystein Eftevaag, services-dev
Before GRC started, Oystein and I considered this (having everything based on top of tracing). So, say, a pipeline that looks like:
[various parts of the codebase] -> [tracing] -> {[dump to file], [consume within chrome to produce metrics]}

We later realized that there were too many technological and architectural limitations with this approach, for instance:

Architecturally:
Tracing doesn't have a strongly defined API surface; everything is based on strings. Today's telemetry metrics built on top of it rely on the TRACE_EVENT macros not moving or getting renamed, but that is hard to enforce for us and hard to maintain for codebase owners. We have been bitten by this on several occasions (refactorings that reshuffled TRACE_EVENTs caused false telemetry regressions). As long as this happens in telemetry it is fine, because telemetry is bisectable and we can spot the issue in O(days). If the same pattern repeats in UMA, it is going to be much more painful to debug.
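To make the fragility concrete, the pattern looks roughly like this (illustrative event and function, not a specific call site):

#include "base/trace_event/trace_event.h"

void RunScript() {
  // The instrumentation point is identified purely by two string literals.
  TRACE_EVENT0("v8", "V8.Execute");
  // ... run the script ...
}

// An offline metric then matches the literal "V8.Execute" name; renaming
// or moving the macro silently breaks the metric rather than the build.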

Technologically:
It would have caused too much overhead to compute metrics within Chrome on top of tracing. The problem is that such a model requires every single event to be logged into the buffer and piped through, introducing some overhead in the client [1], plus an extra thread/process has to keep up with the buffer. All of this only to drop most of the events on the floor and react to the few we actually need.

The GRC architecture is based more on small and efficient local loops, where the metrics code stays local until it has some meaningful information that needs to be dispatched higher up, and on building blocks that help create these small local loops. Tracing in this context will remain a logging mechanism used by some of our pipelines (chrome://tracing, telemetry) but not all of them (UMA/UKM), which will instead be fed by GRC.
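As a toy example of what I mean by a small local loop (all names and the threshold policy here are invented for illustration):

#include <cstdint>
#include <functional>
#include <utility>

// Stays inside the instrumented component; instead of streaming every
// event into a trace buffer, it only dispatches a summary to GRC when
// something meaningful happens.
class LocalMemoryLoop {
 public:
  LocalMemoryLoop(uint64_t report_threshold_kb,
                  std::function<void(uint64_t)> report_to_grc)
      : report_threshold_kb_(report_threshold_kb),
        report_to_grc_(std::move(report_to_grc)) {}

  void OnHeapSample(uint64_t usage_kb) {
    if (usage_kb >= report_threshold_kb_)
      report_to_grc_(usage_kb);  // Dispatch higher up.
    // Otherwise: no logging, no trace buffer, no extra thread catching up.
  }

 private:
  const uint64_t report_threshold_kb_;
  const std::function<void(uint64_t)> report_to_grc_;
};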

I can give you more context about memory-infra which is the use case I am dealing with right now. Today the TL;DR of memory-infra is:
- Various pieces of the codebase register their observers and fill some tables with their domain-specific snapshots (example)
- When MI decides to take snapshots (periodically in the UI, triggered by devtools or telemetry) these tables are serialized into the trace. Each process serializes its own state into the per-process trace buffer.
- The JS metric code in catapult (used by telemetry) decodes these dumps and rationalizes them, creating a graph of the form "this is the tree of memory reported by v8; this one arena is suballocated from malloc; this texture is backed by shared memory with the gpu process"

This makes it impossible, as is, to use this data within Chrome, as those snapshots are rationalized only in the catapult JS. Hence today we have two distinct memory computation pipelines for UMA and telemetry. For instance, the MemoryCoordinator folks have on various occasions had to copy/paste code (e.g. crrev.com/2636873002, crrev.com/2566043004) just to get similar metrics.

The model we are switching to is one where the snapshots are requested and the results are delivered back via mojo to GRC (coming up here). GRC can then decide whether to route these snapshots to tracing (to preserve the current telemetry and chrome://tracing pipelines) or hand them to the MemoryCoordinator, to UMA/UKM, and in the future to things like the task manager and devtools if necessary.
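Roughly, the fan-out inside GRC would look like this (the function names are stand-ins for the real consumers, not actual APIs):

#include <vector>

struct ProcessMemorySnapshot {};  // Per-process OS + Chrome counters.

// Stand-ins for the real consumers named above.
void AddSnapshotsToTrace(const std::vector<ProcessMemorySnapshot>&) {}
void NotifyMemoryCoordinator(const std::vector<ProcessMemorySnapshot>&) {}
void RecordMemoryUmaAndUkm(const std::vector<ProcessMemorySnapshot>&) {}

// Once the snapshots come back to GRC over mojo, GRC decides where to
// route them.
void OnGlobalMemoryDump(const std::vector<ProcessMemorySnapshot>& snapshots) {
  AddSnapshotsToTrace(snapshots);      // Keeps telemetry + chrome://tracing working.
  NotifyMemoryCoordinator(snapshots);  // Feeds run-time decisions.
  RecordMemoryUmaAndUkm(snapshots);    // Feeds UMA/UKM.
}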

[1] On Android that has a cost of 5-7 us per TRACE_EVENT, mostly dominated by TimeTicks::Now() (or whatever time source we currently use)


John Abd-El-Malek

May 10, 2017, 10:12:00 AM5/10/17
to Primiano Tucci, services-dev, Oystein Eftevaag

FWIW this stuff seems fine to me; i.e. this looks like a breakdown of where large amounts of memory can be used. Generally, if the system design changes, this won't have to. I suspect Ken brought up the process-type stuff because in that case, as the process model changes, we will have to keep another place in services/ in sync.

If it's the GRC, is it possible to prefix/append the process name to the UMA/UKM keys? If that's not possible, then I think it's fine. Ken and I chatted about this; other places like service_manager will have to know what the different processes' purposes are, for sandboxing needs.
 
UKM is going to be trickier because there the keys are not freeform; we have to fill a proto where the keys are determined a priori. I'd like to avoid a state where, every time something needs to know about Chrome-specific concepts, we have to plumb something from //services to //chrome. I don't see a clear advantage, as we'd just be cheating with patterns like "who is the GPU process? I am just a string that happens to contain ['g','p','u']", while on the other hand we'd be adding more marshaling, string copies and IPC round trips.

Just to be clear, I think this stuff is fine to add. The most important point, which we all agree on, is that it doesn't include blink or content per services/readme. The other stuff, like the ints or an enum for a process type, seems like stuff we can change in the future if we want to make GRC usable in other contexts and we find that they're hindering that.
 

 


Primiano Tucci

May 10, 2017, 10:52:27 AM5/10/17
to John Abd-El-Malek, services-dev, Oystein Eftevaag
Yeah, I know, but conceptually if the process model changes we really do have to change memory-infra to adapt it to whatever new process model we use.
To be clear, this (lack of sync) is not going to create build breakages, but it is going to cause misattribution.
 

If it's the GRC, is it possible to prefix/append the process name to the UMA/UKM keys? If that's not possible, then I think it's fine. Ken and I chatted about this; other places like service_manager will have to know what the different processes' purposes are, for sandboxing needs.
 
UKM is going to be trickier because there the keys are not freeform; we have to fill a proto where the keys are determined a priori. I'd like to avoid a state where, every time something needs to know about Chrome-specific concepts, we have to plumb something from //services to //chrome. I don't see a clear advantage, as we'd just be cheating with patterns like "who is the GPU process? I am just a string that happens to contain ['g','p','u']", while on the other hand we'd be adding more marshaling, string copies and IPC round trips.

Just to be clear, I think this stuff is fine to add. The most important point, which we all agree on, is that it doesn't include blink or content per services/readme.
Yup, fully agree.
 
The other stuff, like the ints or an enum for a process type, seems like stuff we can change in the future if we want to make GRC usable in other contexts and we find that they're hindering that.
SG. To make this more concrete, this is a draft CL of what I'm talking about:

Thanks for the clarifications.
 


Colin Blundell

May 11, 2017, 2:55:21 AM5/11/17
to Primiano Tucci, John Abd-El-Malek, services-dev, Oystein Eftevaag
This discussion makes sense to me as well, FWIW. As John wrote upthread, as you go on please be careful and aware of exactly how you're tying the service to assumptions about/knowledge of the architecture of Chromium (and raise anything that seems doubtful/controversial/etc. for discussion). It sounds like you're already thinking about all of this really carefully, so steady on :).

Primiano Tucci

May 11, 2017, 1:44:44 PM5/11/17
to Colin Blundell, John Abd-El-Malek, services-dev, Oystein Eftevaag
Given the discussion, I thought it was worth dumping my brain onto a less ephemeral medium. I wrote down a short design doc about the plans for memory-infra and the short-term quirks to handle the transition. Any comments are welcome (make sure you log in with a chromium.org account).
