Michael Lippautz posted comments on this change.
Patch set 2:Commit-Queue +1
To view, visit change 506152. To unsubscribe, visit settings.
Michael Lippautz posted comments on this change.
Patch set 3:Commit-Queue +1
Michael Lippautz posted comments on this change.
Patch set 4:Commit-Queue +1
PTAL; I incorporated comments from https://chromium-review.googlesource.com/c/502810/. If we are happy with this version, we can also land the ItemParallelJob.
(2 comments)
Patch Set #4, Line 366: kNumMarkers
Simple constant for now. If we feel that this doesn't play nicely then we can think about something more advanced.
File src/heap/mark-compact.cc:
Patch Set #4, Line 2712: kBufferSize
This one is a compromise between load balancing from the roots and allocation/deallocation overhead of items.
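The batching tradeoff described here can be illustrated with a small standalone sketch (not V8 code; `Batch`, `BatchingVisitor`, and `BatchRoots` are hypothetical names): root pointers are buffered and flushed in fixed-size batches, so per-item allocation cost is amortized while still producing enough items for load balancing.

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// One work item bundling several root objects (here modeled as ints).
struct Batch {
  std::vector<int> objects;
};

// Buffers visited roots and flushes them in batches of kBufferSize.
class BatchingVisitor {
 public:
  static constexpr std::size_t kBufferSize = 32;

  explicit BatchingVisitor(std::vector<Batch>* out) : out_(out) {
    buffer_.reserve(kBufferSize);
  }

  void Visit(int object) {
    buffer_.push_back(object);
    if (buffer_.size() == kBufferSize) Flush();
  }

  // Flush any remainder; must be called once after the last Visit.
  void Flush() {
    if (buffer_.empty()) return;
    out_->push_back(Batch{std::move(buffer_)});
    // Moving leaves the vector in a valid but unspecified state, so reset it
    // with calls that have no preconditions before reusing it.
    buffer_.clear();
    buffer_.reserve(kBufferSize);
  }

 private:
  std::vector<Batch>* out_;
  std::vector<int> buffer_;
};

// Visits num_roots roots and returns the resulting batches.
inline std::vector<Batch> BatchRoots(int num_roots) {
  std::vector<Batch> batches;
  BatchingVisitor visitor(&batches);
  for (int i = 0; i < num_roots; i++) visitor.Visit(i);
  visitor.Flush();
  return batches;
}
```

With 70 roots and a buffer of 32 this yields three items (32 + 32 + 6), i.e. allocation of three items instead of seventy.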
Michael Lippautz uploaded patch set #5 to this change.
[heap] MinorMC: Parallel marking
Bug: chromium:651354
Change-Id: I9df2ca542112f04787987bda67657fc4015787b5
---
M src/flag-definitions.h
M src/heap/gc-tracer.cc
M src/heap/gc-tracer.h
M src/heap/mark-compact.cc
M src/heap/mark-compact.h
5 files changed, 277 insertions(+), 56 deletions(-)
Hannes Payer posted comments on this change.
Patch set 5:Code-Review +1
I like it!
(3 comments)
Patch Set #4, Line 366: kNumMarkers
Simple constant for now. If we feel that this doesn't play nicely then we can think about something more advanced.
How about going to number of available cores?
File src/heap/mark-compact.cc:
Patch Set #4, Line 2712: kBufferSize
This one is a compromise between load balancing from the roots and allocation/deallocation overhead of items.
Ack
File src/heap/mark-compact.cc:
Patch Set #5, Line 2589: EmptySpecificMarkingDeque
Shouldn't marking_deque_ be empty here?
Ulan Degenbaev posted comments on this change.
Patch set 5:Code-Review +1
nice!
(1 comment)
File src/heap/mark-compact.cc:
Should it be just buffered_objects_.size() == kBufferSize ?
Michael Lippautz posted comments on this change.
Patch set 6:
Addressed comments
(3 comments)
Patch Set #4, Line 366: kNumMarkers
How about going to number of available cores?
I only verified that the item based approach properly distributes (in most cases) with few tasks. I will profile with more tasks though and potentially increase if it helps.
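The task-count policy under discussion follows the shape of NumberOfMarkingTasks() in the diff: use the available cores but cap at a compile-time maximum, and fall back to a single task when the flag is off. A minimal standalone sketch (not V8 code; the parameters stand in for FLAG_minor_mc_parallel_marking and NumberOfAvailableCores()):

```cpp
#include <algorithm>
#include <cassert>

// Compile-time upper bound on parallel markers (cf. kNumMarkers in the CL).
constexpr int kNumMarkers = 4;

// Number of marking tasks: available cores capped by kNumMarkers, or a single
// task when parallel marking is disabled.
inline int NumberOfMarkingTasks(bool parallel_marking_enabled,
                                int available_cores) {
  return parallel_marking_enabled ? std::min(available_cores, kNumMarkers) : 1;
}
```

Raising the cap beyond the core count would only add scheduling overhead, which is why profiling with more tasks is the next step before increasing kNumMarkers.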
File src/heap/mark-compact.cc:
Patch Set #5, Line 2589: DCHECK(marking_deque_->IsEmpty());
Shouldn't marking_deque_ be empty here?
Yes, I replaced it with a DCHECK.
This was a leftover. Initially the deque was only emptied after processing all items, but I think it is better to interleave the processing.
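The interleaving described here matches the RunInParallel() loop in the CL: each task drains its local deque after every item rather than once at the end, so the deque stays short and newly discovered objects are processed promptly. A standalone sketch under simplified assumptions (not V8 code; `Object` and `ProcessItems` are illustrative, with marking modeled as a bool):

```cpp
#include <cassert>
#include <deque>
#include <vector>

// Toy heap object: visiting it may discover children.
struct Object {
  std::vector<Object*> children;
  bool marked = false;
};

// Processes a list of work items (roots), draining the local deque after each
// item. Returns the number of objects visited.
inline int ProcessItems(const std::vector<Object*>& items) {
  std::deque<Object*> deque;
  int visited = 0;
  for (Object* root : items) {
    if (!root->marked) {
      root->marked = true;
      deque.push_back(root);
    }
    // Interleave: empty the deque now instead of after all items, so
    // transitively reachable objects are handled while the item is hot.
    while (!deque.empty()) {
      Object* object = deque.front();
      deque.pop_front();
      visited++;
      for (Object* child : object->children) {
        if (!child->marked) {
          child->marked = true;
          deque.push_back(child);
        }
      }
    }
  }
  return visited;
}
```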
Should it be just buffered_objects_size() == kBufferSize ?
Done
Michael Lippautz posted comments on this change.
Patch set 6:Commit-Queue +1
Michael Lippautz posted comments on this change.
Patch set 8:Commit-Queue +1
Michael Lippautz posted comments on this change.
Patch set 8:Commit-Queue +2
Commit Bot posted comments on this change.
Patch set 8:
CQ is trying da patch.
Note: The patchset sent to CQ was uploaded after this CL was approved.
"Rebase" https://chromium-review.googlesource.com/c/506152/8
Follow status at: https://chromium-cq-status.appspot.com/v2/patch-status/chromium-review.googlesource.com/506152/8
Bot data: {"action": "start", "triggered_at": "2017-05-17T09:27:25.0Z", "revision": "4eb81c43c08f40f991dd7c9391f344789a5326ae"}
Commit Bot merged this change.
[heap] MinorMC: Parallel marking
Bug: chromium:651354
Change-Id: I9df2ca542112f04787987bda67657fc4015787b5
Reviewed-on: https://chromium-review.googlesource.com/506152
Commit-Queue: Michael Lippautz <mlip...@chromium.org>
Reviewed-by: Hannes Payer <hpa...@chromium.org>
Reviewed-by: Ulan Degenbaev <ul...@chromium.org>
Cr-Commit-Position: refs/heads/master@{#45363}
---
M src/flag-definitions.h
M src/heap/gc-tracer.cc
M src/heap/gc-tracer.h
M src/heap/mark-compact.cc
M src/heap/mark-compact.h
5 files changed, 276 insertions(+), 55 deletions(-)
diff --git a/src/flag-definitions.h b/src/flag-definitions.h
index 2a5d49f..66921ab 100644
--- a/src/flag-definitions.h
+++ b/src/flag-definitions.h
@@ -671,6 +671,10 @@
"use incremental marking for marking wrappers")
DEFINE_BOOL(concurrent_marking, V8_CONCURRENT_MARKING, "use concurrent marking")
DEFINE_BOOL(trace_concurrent_marking, false, "trace concurrent marking")
+DEFINE_BOOL(minor_mc_parallel_marking, true,
+ "use parallel marking for the young generation")
+DEFINE_BOOL(trace_minor_mc_parallel_marking, false,
+ "trace parallel marking for the young generation")
DEFINE_INT(min_progress_during_incremental_marking_finalization, 32,
"keep finalizing incremental marking as long as we discover at "
"least this many unmarked objects")
@@ -1300,6 +1304,7 @@
DEFINE_NEG_IMPLICATION(single_threaded, concurrent_recompilation)
DEFINE_NEG_IMPLICATION(single_threaded, concurrent_marking)
DEFINE_NEG_IMPLICATION(single_threaded, concurrent_sweeping)
+DEFINE_NEG_IMPLICATION(single_threaded, minor_mc_parallel_marking)
DEFINE_NEG_IMPLICATION(single_threaded, parallel_compaction)
DEFINE_NEG_IMPLICATION(single_threaded, parallel_pointer_update)
DEFINE_NEG_IMPLICATION(single_threaded, concurrent_store_buffer)
diff --git a/src/heap/gc-tracer.cc b/src/heap/gc-tracer.cc
index 1f4ea9b..46d5bb6 100644
--- a/src/heap/gc-tracer.cc
+++ b/src/heap/gc-tracer.cc
@@ -531,8 +531,8 @@
"finish_sweeping=%.2f "
"mark=%.2f "
"mark.identify_global_handles=%.2f "
+ "mark.seed=%.2f "
"mark.roots=%.2f "
- "mark.old_to_new=%.2f "
"mark.weak=%.2f "
"mark.global_handles=%.2f "
"clear=%.2f "
@@ -552,8 +552,8 @@
current_.scopes[Scope::MINOR_MC_SWEEPING],
current_.scopes[Scope::MINOR_MC_MARK],
current_.scopes[Scope::MINOR_MC_MARK_IDENTIFY_GLOBAL_HANDLES],
+ current_.scopes[Scope::MINOR_MC_MARK_SEED],
current_.scopes[Scope::MINOR_MC_MARK_ROOTS],
- current_.scopes[Scope::MINOR_MC_MARK_OLD_TO_NEW_POINTERS],
current_.scopes[Scope::MINOR_MC_MARK_WEAK],
current_.scopes[Scope::MINOR_MC_MARK_GLOBAL_HANDLES],
current_.scopes[Scope::MINOR_MC_CLEAR],
diff --git a/src/heap/gc-tracer.h b/src/heap/gc-tracer.h
index e6d77c9..96b21c6 100644
--- a/src/heap/gc-tracer.h
+++ b/src/heap/gc-tracer.h
@@ -101,7 +101,7 @@
F(MINOR_MC_MARK) \
F(MINOR_MC_MARK_GLOBAL_HANDLES) \
F(MINOR_MC_MARK_IDENTIFY_GLOBAL_HANDLES) \
- F(MINOR_MC_MARK_OLD_TO_NEW_POINTERS) \
+ F(MINOR_MC_MARK_SEED) \
F(MINOR_MC_MARK_ROOTS) \
F(MINOR_MC_MARK_WEAK) \
F(MINOR_MC_MARKING_DEQUE) \
diff --git a/src/heap/mark-compact.cc b/src/heap/mark-compact.cc
index 47ca0ee..294a659 100644
--- a/src/heap/mark-compact.cc
+++ b/src/heap/mark-compact.cc
@@ -18,6 +18,7 @@
#include "src/heap/concurrent-marking.h"
#include "src/heap/gc-tracer.h"
#include "src/heap/incremental-marking.h"
+#include "src/heap/item-parallel-job.h"
#include "src/heap/mark-compact-inl.h"
#include "src/heap/object-stats.h"
#include "src/heap/objects-visiting-inl.h"
@@ -350,6 +351,12 @@
: 1;
}
+int MinorMarkCompactCollector::NumberOfMarkingTasks() {
+ return FLAG_minor_mc_parallel_marking
+ ? Min(NumberOfAvailableCores(), kNumMarkers)
+ : 1;
+}
+
MarkCompactCollector::MarkCompactCollector(Heap* heap)
: MarkCompactCollectorBase(heap),
page_parallel_job_semaphore_(0),
@@ -381,7 +388,9 @@
}
}
-void MinorMarkCompactCollector::SetUp() { marking_deque()->SetUp(); }
+void MinorMarkCompactCollector::SetUp() {
+ for (int i = 0; i < kNumMarkers; i++) marking_deque(i)->SetUp();
+}
void MarkCompactCollector::TearDown() {
AbortCompaction();
@@ -389,7 +398,9 @@
delete code_flusher_;
}
-void MinorMarkCompactCollector::TearDown() { marking_deque()->TearDown(); }
+void MinorMarkCompactCollector::TearDown() {
+ for (int i = 0; i < kNumMarkers; i++) marking_deque(i)->TearDown();
+}
void MarkCompactCollector::AddEvacuationCandidate(Page* p) {
DCHECK(!p->NeverEvacuate());
@@ -2477,8 +2488,8 @@
}
inline void MarkObjectViaMarkingDeque(HeapObject* object) {
- if (ObjectMarking::WhiteToBlack<MarkBit::NON_ATOMIC>(
- object, marking_state(object))) {
+ if (ObjectMarking::WhiteToBlack<MarkBit::ATOMIC>(object,
+ marking_state(object))) {
// Marking deque overflow is unsupported for the young generation.
CHECK(marking_deque_->Push(object));
}
@@ -2491,7 +2502,7 @@
Object* target = *p;
if (heap_->InNewSpace(target)) {
HeapObject* target_object = HeapObject::cast(target);
- if (ObjectMarking::WhiteToBlack<MarkBit::NON_ATOMIC>(
+ if (ObjectMarking::WhiteToBlack<MarkBit::ATOMIC>(
target_object, marking_state(target_object))) {
Visit(target_object);
}
@@ -2533,7 +2544,7 @@
if (ObjectMarking::WhiteToBlack<MarkBit::NON_ATOMIC>(
object, marking_state(object))) {
- collector_->marking_visitor_->Visit(object);
+ collector_->marking_visitor(kMainMarker)->Visit(object);
collector_->EmptyMarkingDeque();
}
}
@@ -2541,16 +2552,190 @@
MinorMarkCompactCollector* collector_;
};
+class MarkingItem;
+class PageMarkingItem;
+class RootMarkingItem;
+class YoungGenerationMarkingTask;
+
+class MarkingItem : public ItemParallelJob::Item {
+ public:
+ virtual ~MarkingItem() {}
+ virtual void Process(YoungGenerationMarkingTask* task) = 0;
+};
+
+class YoungGenerationMarkingTask : public ItemParallelJob::Task {
+ public:
+ YoungGenerationMarkingTask(Isolate* isolate,
+ MinorMarkCompactCollector* collector,
+ MarkingDeque* marking_deque,
+ YoungGenerationMarkingVisitor* visitor)
+ : ItemParallelJob::Task(isolate),
+ collector_(collector),
+ marking_deque_(marking_deque),
+ visitor_(visitor) {}
+
+ void RunInParallel() override {
+ double marking_time = 0.0;
+ {
+ TimedScope scope(&marking_time);
+ MarkingItem* item = nullptr;
+ while ((item = GetItem<MarkingItem>()) != nullptr) {
+ item->Process(this);
+ item->MarkFinished();
+ collector_->EmptySpecificMarkingDeque(marking_deque_, visitor_);
+ }
+ DCHECK(marking_deque_->IsEmpty());
+ }
+ if (FLAG_trace_minor_mc_parallel_marking) {
+ PrintIsolate(collector_->isolate(), "marking[%p]: time=%f\n",
+ static_cast<void*>(this), marking_time);
+ }
+ };
+
+ void MarkObject(Object* object) {
+ if (!collector_->heap()->InNewSpace(object)) return;
+ HeapObject* heap_object = HeapObject::cast(object);
+ if (ObjectMarking::WhiteToBlack<MarkBit::ATOMIC>(
+ heap_object, collector_->marking_state(heap_object))) {
+ visitor_->Visit(heap_object);
+ }
+ }
+
+ private:
+ MinorMarkCompactCollector* collector_;
+ MarkingDeque* marking_deque_;
+ YoungGenerationMarkingVisitor* visitor_;
+};
+
+class BatchedRootMarkingItem : public MarkingItem {
+ public:
+ explicit BatchedRootMarkingItem(std::vector<Object*>&& objects)
+ : objects_(objects) {}
+ virtual ~BatchedRootMarkingItem() {}
+
+ void Process(YoungGenerationMarkingTask* task) override {
+ for (Object* object : objects_) {
+ task->MarkObject(object);
+ }
+ }
+
+ private:
+ std::vector<Object*> objects_;
+};
+
+class PageMarkingItem : public MarkingItem {
+ public:
+ explicit PageMarkingItem(MemoryChunk* chunk) : chunk_(chunk) {}
+ virtual ~PageMarkingItem() {}
+
+ void Process(YoungGenerationMarkingTask* task) override {
+ base::LockGuard<base::RecursiveMutex> guard(chunk_->mutex());
+ MarkUntypedPointers(task);
+ MarkTypedPointers(task);
+ }
+
+ private:
+ inline Heap* heap() { return chunk_->heap(); }
+
+ void MarkUntypedPointers(YoungGenerationMarkingTask* task) {
+ RememberedSet<OLD_TO_NEW>::Iterate(chunk_, [this, task](Address slot) {
+ return CheckAndMarkObject(task, slot);
+ });
+ }
+
+ void MarkTypedPointers(YoungGenerationMarkingTask* task) {
+ Isolate* isolate = heap()->isolate();
+ RememberedSet<OLD_TO_NEW>::IterateTyped(
+ chunk_, [this, isolate, task](SlotType slot_type, Address host_addr,
+ Address slot) {
+ return UpdateTypedSlotHelper::UpdateTypedSlot(
+ isolate, slot_type, slot, [this, task](Object** slot) {
+ return CheckAndMarkObject(task,
+ reinterpret_cast<Address>(slot));
+ });
+ });
+ }
+
+ SlotCallbackResult CheckAndMarkObject(YoungGenerationMarkingTask* task,
+ Address slot_address) {
+ Object* object = *reinterpret_cast<Object**>(slot_address);
+ if (heap()->InNewSpace(object)) {
+ // Marking happens before flipping the young generation, so the object
+ // has to be in ToSpace.
+ DCHECK(heap()->InToSpace(object));
+ HeapObject* heap_object = reinterpret_cast<HeapObject*>(object);
+ task->MarkObject(heap_object);
+ return KEEP_SLOT;
+ }
+ return REMOVE_SLOT;
+ }
+
+ MemoryChunk* chunk_;
+};
+
+// This root visitor walks all roots and creates items bundling objects that
+// are then processed later on. Slots have to be dereferenced as they could
+// live on the native (C++) stack, which requires filtering out the indirection.
+class MinorMarkCompactCollector::RootMarkingVisitorSeedOnly
+ : public RootVisitor {
+ public:
+ explicit RootMarkingVisitorSeedOnly(ItemParallelJob* job) : job_(job) {
+ buffered_objects_.reserve(kBufferSize);
+ }
+
+ void VisitRootPointer(Root root, Object** p) override {
+ if (!(*p)->IsHeapObject()) return;
+ AddObject(*p);
+ }
+
+ void VisitRootPointers(Root root, Object** start, Object** end) override {
+ for (Object** p = start; p < end; p++) {
+ if (!(*p)->IsHeapObject()) continue;
+ AddObject(*p);
+ }
+ }
+
+ void FlushObjects() {
+ job_->AddItem(new BatchedRootMarkingItem(std::move(buffered_objects_)));
+ // Moving leaves the container in a valid but unspecified state. Reusing the
+ // container requires a call without precondition that resets the state.
+ buffered_objects_.clear();
+ buffered_objects_.reserve(kBufferSize);
+ }
+
+ private:
+ // Bundling several objects together in items avoids issues with allocating
+ // and deallocating items; both are operations that are performed on the main
+ // thread.
+ static const int kBufferSize = 32;
+
+ void AddObject(Object* object) {
+ buffered_objects_.push_back(object);
+ if (buffered_objects_.size() == kBufferSize) FlushObjects();
+ }
+
+ ItemParallelJob* job_;
+ std::vector<Object*> buffered_objects_;
+};
+
MinorMarkCompactCollector::MinorMarkCompactCollector(Heap* heap)
- : MarkCompactCollectorBase(heap),
- marking_deque_(heap),
- marking_visitor_(
- new YoungGenerationMarkingVisitor(heap, &marking_deque_)),
- page_parallel_job_semaphore_(0) {}
+ : MarkCompactCollectorBase(heap), page_parallel_job_semaphore_(0) {
+ for (int i = 0; i < kNumMarkers; i++) {
+ marking_deque_[i] = new MarkingDeque(heap);
+ marking_visitor_[i] =
+ new YoungGenerationMarkingVisitor(heap, marking_deque_[i]);
+ }
+}
MinorMarkCompactCollector::~MinorMarkCompactCollector() {
- DCHECK_NOT_NULL(marking_visitor_);
- delete marking_visitor_;
+ for (int i = 0; i < kNumMarkers; i++) {
+ DCHECK_NOT_NULL(marking_visitor_[i]);
+ DCHECK_NOT_NULL(marking_deque_[i]);
+ delete marking_visitor_[i];
+ delete marking_deque_[i];
+ }
}
SlotCallbackResult MinorMarkCompactCollector::CheckAndMarkObject(
@@ -2563,8 +2748,9 @@
HeapObject* heap_object = reinterpret_cast<HeapObject*>(object);
const MarkingState state = MarkingState::External(heap_object);
if (ObjectMarking::WhiteToBlack<MarkBit::NON_ATOMIC>(heap_object, state)) {
- heap->minor_mark_compact_collector()->marking_visitor_->Visit(
- heap_object);
+ heap->minor_mark_compact_collector()
+ ->marking_visitor(kMainMarker)
+ ->Visit(heap_object);
}
return KEEP_SLOT;
}
@@ -2578,6 +2764,33 @@
MarkingState::External(HeapObject::cast(*p)));
}
+void MinorMarkCompactCollector::MarkRootSetInParallel() {
+ // Seed the root set (roots + old->new set).
+ ItemParallelJob job(isolate()->cancelable_task_manager(),
+ &page_parallel_job_semaphore_);
+
+ {
+ TRACE_GC(heap()->tracer(), GCTracer::Scope::MINOR_MC_MARK_SEED);
+ RootMarkingVisitorSeedOnly root_seed_visitor(&job);
+ heap()->IterateRoots(&root_seed_visitor, VISIT_ALL_IN_SCAVENGE);
+ RememberedSet<OLD_TO_NEW>::IterateMemoryChunks(
+ heap(), [&job](MemoryChunk* chunk) {
+ job.AddItem(new PageMarkingItem(chunk));
+ });
+ root_seed_visitor.FlushObjects();
+ }
+
+ {
+ TRACE_GC(heap()->tracer(), GCTracer::Scope::MINOR_MC_MARK_ROOTS);
+ const int num_tasks = NumberOfMarkingTasks();
+ for (int i = 0; i < num_tasks; i++) {
+ job.AddTask(new YoungGenerationMarkingTask(
+ isolate(), this, marking_deque(i), marking_visitor(i)));
+ }
+ job.Run();
+ }
+}
+
void MinorMarkCompactCollector::MarkLiveObjects() {
TRACE_GC(heap()->tracer(), GCTracer::Scope::MINOR_MC_MARK);
@@ -2585,7 +2798,7 @@
RootMarkingVisitor root_visitor(this);
- marking_deque()->StartUsing();
+ for (int i = 0; i < kNumMarkers; i++) marking_deque(i)->StartUsing();
{
TRACE_GC(heap()->tracer(),
@@ -2594,30 +2807,9 @@
&Heap::IsUnmodifiedHeapObject);
}
- {
- TRACE_GC(heap()->tracer(), GCTracer::Scope::MINOR_MC_MARK_ROOTS);
- heap()->IterateRoots(&root_visitor, VISIT_ALL_IN_SCAVENGE);
- ProcessMarkingDeque();
- }
+ MarkRootSetInParallel();
- {
- TRACE_GC(heap()->tracer(),
- GCTracer::Scope::MINOR_MC_MARK_OLD_TO_NEW_POINTERS);
- RememberedSet<OLD_TO_NEW>::Iterate(
- heap(), NON_SYNCHRONIZED,
- [this](Address addr) { return CheckAndMarkObject(heap(), addr); });
- RememberedSet<OLD_TO_NEW>::IterateTyped(
- heap(), NON_SYNCHRONIZED,
- [this](SlotType type, Address host_addr, Address addr) {
- return UpdateTypedSlotHelper::UpdateTypedSlot(
- isolate(), type, addr, [this](Object** addr) {
- return CheckAndMarkObject(heap(),
- reinterpret_cast<Address>(addr));
- });
- });
- ProcessMarkingDeque();
- }
-
+ // Mark rest on the main thread.
{
TRACE_GC(heap()->tracer(), GCTracer::Scope::MINOR_MC_MARK_WEAK);
heap()->IterateEncounteredWeakCollections(&root_visitor);
@@ -2633,30 +2825,35 @@
ProcessMarkingDeque();
}
- marking_deque()->StopUsing();
+ for (int i = 0; i < kNumMarkers; i++) marking_deque(i)->StopUsing();
}
void MinorMarkCompactCollector::ProcessMarkingDeque() {
EmptyMarkingDeque();
- DCHECK(!marking_deque()->overflowed());
- DCHECK(marking_deque()->IsEmpty());
+ DCHECK(!marking_deque(kMainMarker)->overflowed());
+ DCHECK(marking_deque(kMainMarker)->IsEmpty());
}
-void MinorMarkCompactCollector::EmptyMarkingDeque() {
- while (!marking_deque()->IsEmpty()) {
- HeapObject* object = marking_deque()->Pop();
-
+void MinorMarkCompactCollector::EmptySpecificMarkingDeque(
+ MarkingDeque* marking_deque, YoungGenerationMarkingVisitor* visitor) {
+ while (!marking_deque->IsEmpty()) {
+ HeapObject* object = marking_deque->Pop();
DCHECK(!object->IsFiller());
DCHECK(object->IsHeapObject());
DCHECK(heap()->Contains(object));
-
DCHECK(!(ObjectMarking::IsWhite<MarkBit::NON_ATOMIC>(
object, marking_state(object))));
-
DCHECK((ObjectMarking::IsBlack<MarkBit::NON_ATOMIC>(
object, marking_state(object))));
- marking_visitor_->Visit(object);
+ visitor->Visit(object);
}
+ DCHECK(!marking_deque->overflowed());
+ DCHECK(marking_deque->IsEmpty());
+}
+
+void MinorMarkCompactCollector::EmptyMarkingDeque() {
+ EmptySpecificMarkingDeque(marking_deque(kMainMarker),
+ marking_visitor(kMainMarker));
}
void MinorMarkCompactCollector::CollectGarbage() {
diff --git a/src/heap/mark-compact.h b/src/heap/mark-compact.h
index 4b844b1..fa57bfb 100644
--- a/src/heap/mark-compact.h
+++ b/src/heap/mark-compact.h
@@ -360,14 +360,29 @@
void CleanupSweepToIteratePages();
private:
+ class RootMarkingVisitorSeedOnly;
class RootMarkingVisitor;
- inline MarkingDeque* marking_deque() { return &marking_deque_; }
+ static const int kNumMarkers = 4;
+ static const int kMainMarker = 0;
+
+ inline MarkingDeque* marking_deque(int index) {
+ DCHECK_LT(index, kNumMarkers);
+ return marking_deque_[index];
+ }
+
+ inline YoungGenerationMarkingVisitor* marking_visitor(int index) {
+ DCHECK_LT(index, kNumMarkers);
+ return marking_visitor_[index];
+ }
SlotCallbackResult CheckAndMarkObject(Heap* heap, Address slot_address);
void MarkLiveObjects() override;
+ void MarkRootSetInParallel();
void ProcessMarkingDeque() override;
void EmptyMarkingDeque() override;
+ void EmptySpecificMarkingDeque(MarkingDeque* marking_deque,
+ YoungGenerationMarkingVisitor* visitor);
void ClearNonLiveReferences() override;
void EvacuatePrologue() override;
@@ -376,12 +391,16 @@
void EvacuatePagesInParallel() override;
void UpdatePointersAfterEvacuation() override;
- MarkingDeque marking_deque_;
- YoungGenerationMarkingVisitor* marking_visitor_;
+ int NumberOfMarkingTasks();
+
+ MarkingDeque* marking_deque_[kNumMarkers];
+ YoungGenerationMarkingVisitor* marking_visitor_[kNumMarkers];
base::Semaphore page_parallel_job_semaphore_;
List<Page*> new_space_evacuation_pages_;
std::vector<Page*> sweep_to_iterate_pages_;
+ friend class MarkYoungGenerationJobTraits;
+ friend class YoungGenerationMarkingTask;
friend class YoungGenerationMarkingVisitor;
};