I haven't reviewed everything yet, but want to send out what I've got so far.
In particular, see my comment on L17 of synctest.go. I don't currently understand how to use this API safely with globals containing mutexes (including some std API examples).
// Time advances when every goroutine is idle.
I assume this means that time.Sleep blocks until all other goroutines are idle? That may be worth calling out here?
// If every goroutine is idle and there are no timers scheduled,
// Run panics.
Won't this be flaky under certain use-cases?
e.g., if one of the goroutines attempts to lock a sync.Mutex from outside of the group (e.g., `crypto/rand.Reader` is a global with a mutex; `reflect.FuncOf` has a global mutex protecting a cache) and there is contention causing the goroutine to block, that will make the goroutine "idle" and thus the group eligible to panic even though it will presumably still make progress.
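To make the shape of the hazard concrete, here is a rough, hypothetical sketch; cacheMu stands in for a package-level lock such as the one guarding reflect's cache and is not code from this CL:

```go
package main

import (
	"sync"
	"time"
)

// cacheMu is a stand-in for a package-level lock like the reflect cache
// mutex; purely illustrative.
var cacheMu sync.Mutex

func lookup() {
	// If a goroutine outside the bubble holds cacheMu, this Lock blocks on a
	// semaphore, which under the rules discussed above would make the
	// goroutine look idle even though it is about to make progress.
	cacheMu.Lock()
	defer cacheMu.Unlock()
	// ... consult or populate the cache ...
}

func main() {
	// Outside any bubble: something briefly holds the global lock.
	go func() {
		cacheMu.Lock()
		time.Sleep(10 * time.Millisecond)
		cacheMu.Unlock()
	}()
	time.Sleep(time.Millisecond) // let the holder win the race (illustrative)
	lookup()                     // in a real test this call would be inside a synctest bubble
}
```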
//go:linkname Run
I can never remember our preferred linkname form, so upon seeing this I went off and wrote some docs in CL 609715.
Apologies, my tangent delayed my review! (You are doing everything correctly here; nothing to do.)
// is idle.
micronit: accidental newline? Or intentional to put a newline after each comma?
// A goroutine executing a system call is never idle,
Does blocking in netpoll count as idle? (e.g., `os.File.Read` on a pipe)
I can't tell from the text here. The `os.File.Read` feels like a blocking system call like you describe here, but I know it is more complex under the hood.
// TestGC tests that GC workers created by a goroutine in a synctest group
While you are testing really subtle cases, a test that the finalizer goroutine isn't affected would be nice (see `runtime.createfing`).
// TestGC tests that GC workers created by a goroutine in a synctest group
The GC workers are created when the first GC starts. If a GC has already run prior to starting this test then you won't get the coverage you want.
To make this robust I think you'd need to run it in a subprocess where you can fail if a GC starts too soon. I don't know if that is worth the complexity.
runtime.ReadMemStats(&mb)
Consider reading `/gc/cycles/total:gc-cycles` from runtime/metrics instead.
ReadMemStats has to STW, which perturbs the scheduler a lot. I don't see a specific problem with STW, but it would make this test harder to reason about if there was a CI failure.
case waitReasonChanReceiveNilChan:
It would be nice to have a complete enumeration of all wait reasons with a build or init time check that we haven't forgotten any. It feels too easy to add a new wait reason without thinking to add it here.
goready(wake, 0)
It is invalid to ready a goroutine if it isn't waiting. IIUC, `active` should protect us from readying the root while it is running (since `active > 0` while root runs). If that is correct, could you note this somewhere?
sg.now = next
The semantics of time inside of synctest are pretty similar to faketime used by the playground (time_fake.go). It would be nice if we could have a single fake time implementation shared by both modes.
Marking as resolved because I don't see an obvious way to do this, but it is worth thinking about.
}
Perhaps check that `gp.timer.isFake` is false and throw otherwise?
`timeSleep` initializes `gp.timer` with `isFake` if the goroutine is in a group. Nothing ever clears `isFake`.
That is fine for most goroutines as they are in the group for their entire lifetime, but the root goroutine joins the group only for the duration of this function.
If something in this function were to call `timeSleep`, it could permanently poison this goroutine's sleep timer. I don't see any reason that we would do that, but it would be a nasty bug.
Thanks! Just responded to the high-level points so far. The point about reflect.FuncOf is a very good one, and I'm not certain yet what the answer (if any) is.
// Time advances when every goroutine is idle.
I assume this means that time.Sleep blocks until all other goroutines are idle? That may be worth calling out here?
Any suggestions on how to phrase that? "Sleep blocks until all other goroutines are idle" is *almost* correct, but if two goroutines sleep until the same instant then both wake at the same time.
My thought had been to include some examples in the public version of the package to demonstrate how Sleep interacts with other goroutines.
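Something along these lines, sketched against the eventual public testing/synctest wrapper rather than this internal package (so the import path and exact behavior are assumptions):

```go
package example_test

import (
	"testing"
	"testing/synctest" // eventual GOEXPERIMENT-guarded wrapper; path assumed
	"time"
)

func TestSleepWakesTogether(t *testing.T) {
	synctest.Run(func() {
		start := time.Now() // fake clock inside the bubble
		var aWoke, bWoke time.Duration
		go func() { time.Sleep(1 * time.Second); aWoke = time.Since(start) }()
		go func() { time.Sleep(1 * time.Second); bWoke = time.Since(start) }()

		// Time only advances when every goroutine in the bubble is durably
		// blocked, so by the time this 2s sleep returns, both 1s sleepers
		// have already woken, at the same fake instant.
		time.Sleep(2 * time.Second)
		synctest.Wait() // ensure both goroutines finished recording

		if aWoke != 1*time.Second || bWoke != 1*time.Second {
			t.Errorf("sleepers woke at %v and %v, want 1s", aWoke, bWoke)
		}
	})
}
```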
// If every goroutine is idle and there are no timers scheduled,
// Run panics.
Won't this be flaky under certain use-cases?
e.g., if one of the goroutines attempts to lock a sync.Mutex from outside of the group (e.g., `crypto/rand.Reader` is a global with a mutex; `reflect.FuncOf` has a global mutex protecting a cache) and there is contention causing the goroutine to block, that will make the goroutine "idle" and thus the group eligible to panic even though it will presumably still make progress.
I hadn't considered the `crypto/rand.Reader` or `reflect.FuncOf` cases. This is an interesting problem. (In the sense of "may you live in interesting times", I fear.)
For more visibility, I've commented further on: https://github.com/golang/go/issues/67434#issuecomment-2327535500
// is idle.
micronit: accidental newline? Or intentional to put a newline after each comma?
Intentional, following the "new line after every new concept" style, but I could merge it if you want.
// A goroutine executing a system call is never idle,
Does blocking in netpoll count as idle? (e.g., `os.File.Read` on a pipe)
I can't tell from the text here. The `os.File.Read` feels like a blocking system call like you describe here, but I know it is more complex under the hood.
Blocking in netpoll does not count as idle. Reworded a bit to try to make this clearer.
Treating a goroutine blocked on a read from a pipe or loopback connection as idle is problematic, because we can't distinguish between "durably blocked until this test proceeds and data is written to the pipe" and "the kernel is about to tell us data is available, but we haven't read the notification yet". So instead we consider anything that touches the OS non-idle. This means tests that work on network connections will need to use an in-process fake, like net.Pipe.
(net.Pipe's unbuffered nature makes it a poor fake for some cases, but that's a separate issue.)
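For illustration, a sketch of that pattern against the eventual public wrapper (import path assumed): the pipe is built from channels created inside the bubble, so a blocked Read can be treated as durably blocked.

```go
package example_test

import (
	"net"
	"testing"
	"testing/synctest" // eventual GOEXPERIMENT-guarded wrapper; path assumed
)

func TestPipeFake(t *testing.T) {
	synctest.Run(func() {
		client, server := net.Pipe() // in-process fake; no OS file descriptors
		go func() {
			defer server.Close()
			server.Write([]byte("hello")) // blocks until the client reads
		}()
		buf := make([]byte, 5)
		if _, err := client.Read(buf); err != nil {
			t.Errorf("Read: %v", err)
		}
		client.Close()
	})
}
```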
Damien Neil
I assume this means that time.Sleep blocks until all other goroutines are idle? That may be worth calling out here?
Any suggestions on how to phrase that? "Sleep blocks until all other goroutines are idle" is *almost* correct, but if two goroutines sleep until the same instant then both wake at the same time.
My thought had been to include some examples in the public version of the package to demonstrate how Sleep interacts with other goroutines.
Attempted a clarification here; PTAL.
// If every goroutine is idle and there are no timers scheduled,
// Run panics.
Damien Neil
Won't this be flaky under certain use-cases?
e.g., if one of the goroutines attempts to lock a sync.Mutex from outside of the group (e.g., `crypto/rand.Reader` is a global with a mutex; `reflect.FuncOf` has a global mutex protecting a cache) and there is contention causing the goroutine to block, that will make the goroutine "idle" and thus the group eligible to panic even though it will presumably still make progress.
I hadn't considered the `crypto/rand.Reader` or `reflect.FuncOf` cases. This is an interesting problem. (In the sense of "may you live in interesting times", I fear.)
For more visibility, I've commented further on: https://github.com/golang/go/issues/67434#issuecomment-2327535500
I've adjusted the idleness rules:
// TestGC tests that GC workers created by a goroutine in a synctest group
The GC workers are created when the first GC starts. If a GC has already run prior to starting this test then you won't get the coverage you want.
To make this robust I think you'd need to run it in a subprocess where you can fail if a GC starts too soon. I don't know if that is worth the complexity.
Moved this to the runtime runTestProg test.
// TestGC tests that GC workers created by a goroutine in a synctest group
While you are testing really subtle cases, a test that the finalizer goroutine isn't affected would be nice (see `runtime.createfing`).
Done.
This goes into an almost-no-dependencies runTestProg test in the runtime package, since otherwise the os package creates the finalizer goroutine before us.
runtime.ReadMemStats(&mb)
Consider reading `/gc/cycles/total:gc-cycles` from runtime/metrics instead.
ReadMemStats has to STW, which perturbs the scheduler a lot. I don't see a specific problem with STW, but it would make this test harder to reason about if there was a CI failure.
Done.
It would be nice to have a complete enumeration of all wait reasons with a build or init time check that we haven't forgotten any. It feels too easy to add a new wait reason without thinking to add it here.
Moved the list of reasons to runtime2.go, next to existing enumerations of reasons.
I think it's unlikely that we'd accidentally add a new reason which should be considered idle; synctest is conservative about when a goroutine becomes idle, and the default is always going to be non-idle.
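(For reference, the kind of exhaustiveness check suggested above would look roughly like the following, with placeholder constants standing in for the real waitReason values; this is a sketch, not code from the CL.)

```go
package main

import "fmt"

type waitReason int

const (
	waitReasonZero waitReason = iota
	waitReasonChanReceive
	waitReasonSelect
	numWaitReasons // keep last; placeholder for the real count
)

// isIdle is a stand-in for the idle/non-idle classification table.
var isIdle = map[waitReason]bool{
	waitReasonZero:        false,
	waitReasonChanReceive: true,
	waitReasonSelect:      true,
}

func init() {
	// Fail loudly if a reason was added without classifying it.
	for r := waitReason(0); r < numWaitReasons; r++ {
		if _, ok := isIdle[r]; !ok {
			panic(fmt.Sprintf("wait reason %d has no idle classification", r))
		}
	}
}

func main() { fmt.Println("all wait reasons classified") }
```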
It is invalid to ready a goroutine if it isn't waiting. IIUC, `active` should protect us from readying the root while it is running (since `active > 0` while root runs). If that is correct, could you note this somewhere?
Added a comment to `maybeWakeLocked`.
Perhaps check that `gp.timer.isFake` is false and throw otherwise?
`timeSleep` initializes `gp.timer` with `isFake` if the goroutine is in a group. Nothing ever clears `isFake`.
That is fine for most goroutines as they are in the group for their entire lifetime, but the root goroutine joins the group only for the duration of this function.
If something in this function were to call `timeSleep`, it could permanently poison this goroutine's sleep timer. I don't see any reason that we would do that, but it would be a nasty bug.
Done
I think the current version of the CL matches the version of the API we've decided to go ahead with, and should be ready for review.
Our plan is now to initially release the testing/synctest package behind a GOEXPERIMENT. I'll add the GOEXPERIMENT-guarded external package in a followup to this CL as a thin wrapper around the internal one.
// Wait blocks until every goroutine within the current bubble,
IMO, it would be worth stating that Wait panics if called from a non-bubbled goroutine.
// For example, a goroutine blocked reading from a network connection
// is not idle, even if no data is currently available on the connection.
//
// A goroutine is not idle when blocked on a send or receive on a channel
// that was not created within its bubble.
Doesn't need to be done now, but if this becomes a public package I think these need a "why?" explanation because this definition seems odd without the background context.
<-tm.C
Oh no, I hope no global APIs do this. (Luckily this seems fairly niche)
<-readyc
Nothing ever sends to or closes readyc. What is the purpose of this timer?
synctest bool // true if created in a synctest bubble
Can you remind me why we don't track the specific bubble? Is it just to save space?
// We don't want the goroutine's synctest group to become idle,
Why would this happen? Won't we ready the goroutine before ours parks? I assume I'm missing something here.
sg.decActive()
Does this mean that a coroutine that is not executing because it is waiting on the next iteration is considered idle by synctest?
// but avoids any contamination between GC assist and synctest.
Is this just about assists? If so, IMO it makes more sense in gcAssistAlloc.
e.g., newUserArenaChunk (for GOEXPERIMENT=arena) can also assist but is missing this change.
// Mark the group as active even if we block somewhere below.
gcBgMarkStartWorkers starts new goroutines. What prevents them from starting in the bubble? (Edit: isSystemGoroutine in newproc)
It might be better to drop the syncGroup during this function like mallocgc does.
// Mark the group as active even if we block somewhere below.
Why is this necessary given mallocgc drops the syncGroup?
nonsynctest := false
nit: multiple negatives are a little confusing. Perhaps invert this and call it allSynctest.
throw("semaphore wake of synctest goroutine from outside bubble")Print the id of both synctest groups? You'll thank me later.
{runtime.G{}, 276, 448}, // g, but exported for testing
Why 16 bytes?
waiter *g // caller of synctest.Wait
What if two goroutines call Wait?
(That seems fine semantically (though maybe confusing?), they both wake up when everything else is idle.)
Edit: I see below that this panics. The Wait documentation should say so.
active int // group will not idle when active > 0
The group will not idle when active > 0 or running > 0.
Perhaps expand upon this in the comments. I was initially under the impression that active covered all reasons for the group to be non-idle, but it turns out it only covers reasons other than running goroutines. It would be nice to have a breakdown of what both of these fields cover.
racereleasemergeg(gp, sg.raceaddr())
Are the race semantics documented in a comment somewhere?
sg.active++
Why is this necessary given that the sg.waiter/sg.root will be runnable soon (L89)? Is it just because we can't call goready with sg.mu held?
If so, perhaps say so explicitly in the comment. My read of changegstatus above is that runnable goroutines do count towards running.
// active > 0 while root runs, so we know it isn't running now.
Not true. Root only increments active on L144, _after_ creating a goroutine for f (L140). That goroutine could exit/block before root ever increments active.
But the root does have running > 0 at that point, so it should be OK.
sg.active++
What is the advantage to incrementing active vs marking waitReasonSynctestRun as non-idle?
if gp.syncGroup.running == 0 && gp.syncGroup.active == 0 {
IIUC, in this case, maybeWakeLocked will have woken root, as maybeWakeLocked looks at sg.waiter, not sg.waiting.
This seems like the wrong semantics, as the waiter and root will run at the same time.
Should maybeWakeLocked be looking at sg.waiting, and if sg.waiter is nil just do nothing with the knowledge that this function will shortly run the waiter?
func timeNow() (sec int64, nsec int32, mono int64) {
Since these are pushed, I think typical naming would be time.now (in time) and time_now (in runtime).
Same below (time.nano, time_nano).
return sec, nsec, 0
I wonder if any programs will be broken by the lack of monotonic time. I suppose not since the fake time is monotonic anyway? Still, why not just return sg.now here?
if gp.syncGroup != nil {
Do I understand correctly that we expect this to always be false because we expect to always be on a g0?
(Channel timers can't get here in synctest mode)
return
Comment: no need to do anything because root will do it?
TEXT time·now<ABIInternal>(SB),NOSPLIT,$16-24
IMO, this should be renamed to something in runtime now that it is unused by the time package.
func now() (sec int64, nsec int32, mono int64)
This has no callers, remove?
Thank you for a comprehensive and useful review! That was very helpful.
// Wait blocks until every goroutine within the current bubble,
IMO, it would be worth stating that Wait panics if called from a non-bubbled goroutine.
Done.
// For example, a goroutine blocked reading from a network connection
// is not idle, even if no data is currently available on the connection.
//
// A goroutine is not idle when blocked on a send or receive on a channel
// that was not created within its bubble.
Doesn't need to be done now, but if this becomes a public package I think these need a "why?" explanation because this definition seems odd without the background context.
Reworded the docs to change "idle" to "durably blocked", and specifically defined "durably blocked" as blocked in a way that can only be unblocked by another goroutine in the bubble.
Nothing ever sends to or closes readyc. What is the purpose of this timer?
I have no idea. Stale test code that snuck in, I think. Removed.
synctest bool // true if created in a synctest bubble
Can you remind me why we don't track the specific bubble? Is it just to save space?
Just to save space. A bool here doesn't change the size of hchan. I don't want to increase the size of every production program's channels for a test-only feature.
If we want to be more robust about catching cross-bubble operations, we could have a global map of chan->bubble, so only bubbled chans pay the cost of tracking. (Do we have an existing weak map implementation?)
// We don't want the goroutine's synctest group to become idle,
Why would this happen? Won't we ready the goroutine before ours parks? I assume I'm missing something here.
Good point. I believe this predates my hooking into casgstatus for bookkeeping, so the accounting of the goroutine waking occurred later. It also predates the association of channels with bubbles, in which an unbubbled goroutine could wake a bubbled one.
Under the current design, I don't think we need to do anything tricky here--either we're waking a goroutine in the same bubble (and the current goroutine keeps the bubble awake until it parks), or we're operating on an unbubbled channel (and the bubbled goroutine is not "durably blocked").
Dropped the incActive/decActive, added a cross-bubble-operation check.
sg.decActive()
Does this mean that a coroutine that is not executing because it is waiting on the next iteration is considered idle by synctest?
Yes.
// but avoids any contamination between GC assist and synctest.
Is this just about assists? If so, IMO it makes more sense in gcAssistAlloc.
e.g., newUserArenaChunk (for GOEXPERIMENT=arena) can also assist but is missing this change.
Moved this to gcAssistAlloc and gcStart.
// Mark the group as active even if we block somewhere below.
gcBgMarkStartWorkers starts new goroutines. What prevents them from starting in the bubble? (Edit: isSystemGoroutine in newproc)
It might be better to drop the syncGroup during this function like mallocgc does.
Done.
// Mark the group as active even if we block somewhere below.
Damien Neil
Why is this necessary given mallocgc drops the syncGroup?
Changed mallocgc to not drop the syncGroup, dropping the syncGroup here to cover both mallocgc and anything else that calls gcAssistAlloc.
gp.syncGroup.incActive()
Damien Neil
Why?
Because semacquire(&worldsema) can block on something external to the bubble, and we don't want to mark the bubble as durably blocked if it does.
But the simpler way to do this is to consider waitReasonSemacquire as a non-durable blocking status; changed to do so and dropped the incActive here. This requires giving sync.WaitGroup.Wait its own identifiable waitReason, so I added waitReasonSyncWaitGroupWait.
nonsynctest := false
nit: multiple negatives are a little confusing. Perhaps invert this and call it allSynctest.
Done.
throw("semaphore wake of synctest goroutine from outside bubble")Print the id of both synctest groups? You'll thank me later.
Done. (Printing just the id of the goroutine being woken, which I think is enough.)
{runtime.G{}, 276, 448}, // g, but exported for testing
Why 16 bytes?
Padding. Moved the field adjacent to another pointer to avoid it.
What if two goroutines call Wait?
(That seems fine semantically (though maybe confusing?), they both wake up when everything else is idle.)
Edit: I see below that this panics. The Wait documentation should say so.
Documented.
I don't think there's any use case for double-waiting, so it's simplest to disallow it. If we did want to support it, I think we'd wake one goroutine at a time to maintain the guarantee that after Wait returns no other goroutines in the bubble are executing.
The group will not idle when active > 0 or running > 0.
Perhaps expand upon this in the comments. I was initially under the impression that active covered all reasons for the group to be non-idle, but it turns out it only covers reasons other than running goroutines. It would be nice to have a breakdown of what both of these fields cover.
Done.
racereleasemergeg(gp, sg.raceaddr())Are the race semantics documented in a comment somewhere?
There's a comment in synctestWait; added one to raceaddr as well for better discoverability.
sg.active++
Why is this necessary given that the sg.waiter/sg.root will be runnable soon (L89)? Is it just because we can't call goready with sg.mu held?
If so, perhaps say so explicitly in the comment. My read of changegstatus above is that runnable goroutines do count towards running.
Done.
// active > 0 while root runs, so we know it isn't running now.
Not true. Root only increments active on L144, _after_ creating a goroutine for f (L140). That goroutine could exit/block before root ever increments active.
But the root does have running > 0 at that point, so it should be OK.
Done.
sg.active++
What is the advantage to incrementing active vs marking waitReasonSynctestRun as non-idle?
When Run is in waitReasonSynctestRun, it's waiting for the group to become idle so that it can advance the fake clock, panic with a deadlock report, or end the test. In this state, we don't want to consider it non-idle.
if gp.syncGroup.running == 0 && gp.syncGroup.active == 0 {
IIUC, in this case, maybeWakeLocked will have woken root, as maybeWakeLocked looks at sg.waiter, not sg.waiting.
This seems like the wrong semantics, as the waiter and root will run at the same time.
Should maybeWakeLocked be looking at sg.waiting, and if sg.waiter is nil just do nothing with the knowledge that this function will shortly run the waiter?
I realize now that this can never happen, since gopark increments active while running the unlock function (synctestwait_c). If Wait is called while all other goroutines in the bubble are blocked, it will always park and then be woken when gopark decrements active.
Changed to throw if the invariant changes.
maybeWakeLocked can't wake the root while Wait is in progress, because either running > 0 (the waiting goroutine hasn't started to park yet), active > 0 (we're in gopark), or waiter != nil (we've made it through to the end of gopark).
Since these are pushed, I think typical naming would be time.now (in time) and time_now (in runtime).
Same below (time.nano, time_nano).
Changed naming of the runtime package functions, but I'm keeping the time package naming: time.now exists and always returns the real clock, time.runtimeNow/runtimeNano are new functions that return the possibly-fake clock.
return sec, nsec, 0
I wonder if any programs will be broken by the lack of monotonic time. I suppose not since the fake time is monotonic anyway? Still, why not just return sg.now here?
Done.
if gp.syncGroup != nil {
Do I understand correctly that we expect this to always be false because we expect to always be on a g0?
(Channel timers can't get here in synctest mode)
I think that's correct, but I'm not 100% certain I know what a g0 is.
return
Comment: no need to do anything because root will do it?
Done.
IMO, this should be renamed to something in runtime now that it is unused by the time package.
Unfortunately, time.now is used via //go:linkname by various packages.
This has no callers, remove?
Unfortunately, used via //go:linkname by various packages.
The "this is used via linkname" comment was in runtime/timestub.go for some reason; I've copied it here for clarity.
| Code-Review | +2 |
synctest bool // true if created in a synctest bubble
Damien Neil
Can you remind me why we don't track the specific bubble? Is it just to save space?
Just to save space. A bool here doesn't change the size of hchan. I don't want to increase the size of every production program's channels for a test-only feature.
If we want to be more robust about catching cross-bubble operations, we could have a global map of chan->bubble, so only bubbled chans pay the cost of tracking. (Do we have an existing weak map implementation?)
Makes sense. We can always do more tracking in the future to be more strict. We have a weak package now, but no prebuilt weak-keyed map.
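Roughly, a weak-keyed side table could be assembled from weak.Pointer plus a finalizer, along these lines; fakeChan and bubbleID are illustrative stand-ins, not CL code:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"weak"
)

// fakeChan stands in for the runtime's hchan, and bubbleID for whatever
// identifies a bubble; both are assumptions for illustration only.
type fakeChan struct{ buf []int }
type bubbleID int

var (
	mu      sync.Mutex
	bubbles = map[weak.Pointer[fakeChan]]bubbleID{}
)

// track records which bubble a channel belongs to without keeping the
// channel alive; the finalizer drops the entry once the channel is collected.
func track(c *fakeChan, id bubbleID) {
	wp := weak.Make(c)
	mu.Lock()
	bubbles[wp] = id
	mu.Unlock()
	runtime.SetFinalizer(c, func(*fakeChan) {
		mu.Lock()
		delete(bubbles, wp)
		mu.Unlock()
	})
}

func bubbleOf(c *fakeChan) (bubbleID, bool) {
	mu.Lock()
	defer mu.Unlock()
	id, ok := bubbles[weak.Make(c)]
	return id, ok
}

func main() {
	c := &fakeChan{}
	track(c, 1)
	fmt.Println(bubbleOf(c)) // 1 true
}
```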
throw("channel wake of synctest goroutine " + string(itoa(gbuf[:], sgp.g.goid)) + " from outside bubble")Huh, I've never thought of doing this for a dynamic throw argument. Usually we use a separate print:
```
println("runtime: channel wake of synctest goroutine", sgp.g.goid, "from outside bubble")
throw("channel wake of synctest goroutine from outside bubble")
```
IMO, the print is cleaner, but I suppose itoa is OK as well.
throw("channel wake of synctest goroutine " + string(itoa(gbuf[:], sgp.g.goid)) + " from outside bubble")For these violations, perhaps these should use fatal instead of throw?
https://cs.opensource.google/go/go/+/master:src/runtime/panic.go;l=1111
The general idea is that fatal is for user error, while throw is an internal runtime problem.
// but avoids any contamination between GC assist and synctest.
```suggestion
// but avoids any contamination between GC and synctest.
```
// Goroutines which are either running, or non-durably blocked
```suggestion
// Goroutines which are either running, runnable, or non-durably blocked
```
nit: say runnable here? I suppose you could say that runnable goroutines are non-durably "blocked" by the scheduler, but that doesn't come to my mind immediately, so being explicit is nice.
if gp.syncGroup != nil {
Damien Neil
Do I understand correctly that we expect this to always be false because we expect to always be on a g0?
(Channel timers can't get here in synctest mode)
I think that's correct, but I'm not 100% certain I know what a g0 is.
"g0" is runtime.m.g0. Each M has a g struct named g0 that holds a stack that the M can use. Notably, this is used for systemstack(). g0 never runs user code.
TEXT time·now<ABIInternal>(SB),NOSPLIT,$16-24
Damien Neil
IMO, this should be renamed to something in runtime now that it is unused by the time package.
Unfortunately, time.now is used via //go:linkname by various packages.
Ah, it has a comment in https://cs.opensource.google/go/go/+/master:src/runtime/timestub.go;l=14 which I missed.
throw("channel wake of synctest goroutine " + string(itoa(gbuf[:], sgp.g.goid)) + " from outside bubble")For these violations, perhaps these should use fatal instead of throw?
https://cs.opensource.google/go/go/+/master:src/runtime/panic.go;l=1111
The general idea is that fatal is for user error, while throw is an internal runtime problem.
Good point. I was using fatal here because panic won't properly unwind the held locks, but this should be a panic.
Dropping the throw here, added a panic in send/recv to catch cross-bubble operations.
// but avoids any contamination between GC assist and synctest.
```suggestion
// but avoids any contamination between GC and synctest.
```
Done
hchan, root, timers, timer, notifyList, reflectOffs < synctest;
Small change to lock ranking to add `reflectOffs < synctest`.
// Goroutines which are either running, or non-durably blocked
```suggestion
// Goroutines which are either running, runnable, or non-durably blocked
```
nit: say runnable here? I suppose you could say that runnable goroutines are non-durably "blocked" by the scheduler, but that doesn't come to my mind immediately, so being explicit is nice.
25 is the latest approved patch-set.
The change was submitted with unreviewed changes in the following files:
```
The name of the file: src/runtime/chan.go
Insertions: 8, Deletions: 7.
@@ -316,6 +316,10 @@
// sg must already be dequeued from c.
// ep must be non-nil and point to the heap or the caller's stack.
func send(c *hchan, sg *sudog, ep unsafe.Pointer, unlockf func(), skip int) {
+ if c.synctest && sg.g.syncGroup != getg().syncGroup {
+ unlockf()
+ panic(plainError("send on synctest channel from outside bubble"))
+ }
if raceenabled {
if c.dataqsiz == 0 {
racesync(c, sg)
@@ -693,6 +697,10 @@
// sg must already be dequeued from c.
// A non-nil ep must point to the heap or the caller's stack.
func recv(c *hchan, sg *sudog, ep unsafe.Pointer, unlockf func(), skip int) {
+ if c.synctest && sg.g.syncGroup != getg().syncGroup {
+ unlockf()
+ panic(plainError("receive on synctest channel from outside bubble"))
+ }
if c.dataqsiz == 0 {
if raceenabled {
racesync(c, sg)
@@ -897,13 +905,6 @@
// else has won the race to signal this goroutine but the goroutine
// hasn't removed itself from the queue yet.
if sgp.isSelect {
- if sg := sgp.g.syncGroup; sg != nil {
- gp := getg()
- if gp.syncGroup != nil && gp.syncGroup != sg {
- var gbuf [20]byte
- throw("channel wake of synctest goroutine " + string(itoa(gbuf[:], sgp.g.goid)) + " from outside bubble")
- }
- }
if !sgp.g.selectDone.CompareAndSwap(0, 1) {
// We lost the race to wake this goroutine.
continue
```
```
The name of the file: src/runtime/lockrank.go
Insertions: 13, Deletions: 13.
@@ -41,9 +41,9 @@
lockRankTimer
lockRankNetpollInit
lockRankRoot
- lockRankSynctest
lockRankItab
lockRankReflectOffs
+ lockRankSynctest
lockRankUserArenaState
// TRACEGLOBAL
lockRankTraceBuf
@@ -115,9 +115,9 @@
lockRankTimer: "timer",
lockRankNetpollInit: "netpollInit",
lockRankRoot: "root",
- lockRankSynctest: "synctest",
lockRankItab: "itab",
lockRankReflectOffs: "reflectOffs",
+ lockRankSynctest: "synctest",
lockRankUserArenaState: "userArenaState",
lockRankTraceBuf: "traceBuf",
lockRankTraceStrings: "traceStrings",
@@ -196,9 +196,9 @@
lockRankTimer: {lockRankSysmon, lockRankScavenge, lockRankSweep, lockRankTestR, lockRankTimerSend, lockRankPollDesc, lockRankWakeableSleep, lockRankHchan, lockRankTimers},
lockRankNetpollInit: {lockRankSysmon, lockRankScavenge, lockRankSweep, lockRankTestR, lockRankTimerSend, lockRankPollDesc, lockRankWakeableSleep, lockRankHchan, lockRankTimers, lockRankTimer},
lockRankRoot: {},
- lockRankSynctest: {lockRankSysmon, lockRankScavenge, lockRankSweep, lockRankTestR, lockRankTimerSend, lockRankPollDesc, lockRankWakeableSleep, lockRankHchan, lockRankNotifyList, lockRankTimers, lockRankTimer, lockRankRoot},
lockRankItab: {},
lockRankReflectOffs: {lockRankItab},
+ lockRankSynctest: {lockRankSysmon, lockRankScavenge, lockRankSweep, lockRankTestR, lockRankTimerSend, lockRankPollDesc, lockRankWakeableSleep, lockRankHchan, lockRankNotifyList, lockRankTimers, lockRankTimer, lockRankRoot, lockRankItab, lockRankReflectOffs},
lockRankUserArenaState: {},
lockRankTraceBuf: {lockRankSysmon, lockRankScavenge},
lockRankTraceStrings: {lockRankSysmon, lockRankScavenge, lockRankTraceBuf},
@@ -211,16 +211,16 @@
lockRankProfBlock: {lockRankSysmon, lockRankScavenge, lockRankForcegc, lockRankSweepWaiters, lockRankAssistQueue, lockRankStrongFromWeakQueue, lockRankSweep, lockRankTestR, lockRankTimerSend, lockRankExecW, lockRankCpuprof, lockRankPollDesc, lockRankWakeableSleep, lockRankHchan, lockRankAllocmR, lockRankExecR, lockRankSched, lockRankAllg, lockRankAllp, lockRankNotifyList, lockRankTimers, lockRankTimer, lockRankItab, lockRankReflectOffs, lockRankUserArenaState, lockRankTraceBuf, lockRankTraceStrings},
lockRankProfMemActive: {lockRankSysmon, lockRankScavenge, lockRankForcegc, lockRankSweepWaiters, lockRankAssistQueue, lockRankStrongFromWeakQueue, lockRankSweep, lockRankTestR, lockRankTimerSend, lockRankExecW, lockRankCpuprof, lockRankPollDesc, lockRankWakeableSleep, lockRankHchan, lockRankAllocmR, lockRankExecR, lockRankSched, lockRankAllg, lockRankAllp, lockRankNotifyList, lockRankTimers, lockRankTimer, lockRankItab, lockRankReflectOffs, lockRankUserArenaState, lockRankTraceBuf, lockRankTraceStrings},
lockRankProfMemFuture: {lockRankSysmon, lockRankScavenge, lockRankForcegc, lockRankSweepWaiters, lockRankAssistQueue, lockRankStrongFromWeakQueue, lockRankSweep, lockRankTestR, lockRankTimerSend, lockRankExecW, lockRankCpuprof, lockRankPollDesc, lockRankWakeableSleep, lockRankHchan, lockRankAllocmR, lockRankExecR, lockRankSched, lockRankAllg, lockRankAllp, lockRankNotifyList, lockRankTimers, lockRankTimer, lockRankItab, lockRankReflectOffs, lockRankUserArenaState, lockRankTraceBuf, lockRankTraceStrings, lockRankProfMemActive},
- lockRankGscan: {lockRankSysmon, lockRankScavenge, lockRankForcegc, lockRankSweepWaiters, lockRankAssistQueue, lockRankStrongFromWeakQueue, lockRankSweep, lockRankTestR, lockRankTimerSend, lockRankExecW, lockRankCpuprof, lockRankPollDesc, lockRankWakeableSleep, lockRankHchan, lockRankAllocmR, lockRankExecR, lockRankSched, lockRankAllg, lockRankAllp, lockRankNotifyList, lockRankTimers, lockRankTimer, lockRankNetpollInit, lockRankRoot, lockRankSynctest, lockRankItab, lockRankReflectOffs, lockRankUserArenaState, lockRankTraceBuf, lockRankTraceStrings, lockRankFin, lockRankSpanSetSpine, lockRankMspanSpecial, lockRankGcBitsArenas, lockRankProfInsert, lockRankProfBlock, lockRankProfMemActive, lockRankProfMemFuture},
- lockRankStackpool: {lockRankSysmon, lockRankScavenge, lockRankForcegc, lockRankSweepWaiters, lockRankAssistQueue, lockRankStrongFromWeakQueue, lockRankSweep, lockRankTestR, lockRankTimerSend, lockRankExecW, lockRankCpuprof, lockRankPollDesc, lockRankWakeableSleep, lockRankHchan, lockRankAllocmR, lockRankExecR, lockRankSched, lockRankAllg, lockRankAllp, lockRankNotifyList, lockRankTimers, lockRankTimer, lockRankNetpollInit, lockRankRoot, lockRankSynctest, lockRankItab, lockRankReflectOffs, lockRankUserArenaState, lockRankTraceBuf, lockRankTraceStrings, lockRankFin, lockRankSpanSetSpine, lockRankMspanSpecial, lockRankGcBitsArenas, lockRankProfInsert, lockRankProfBlock, lockRankProfMemActive, lockRankProfMemFuture, lockRankGscan},
- lockRankStackLarge: {lockRankSysmon, lockRankScavenge, lockRankForcegc, lockRankSweepWaiters, lockRankAssistQueue, lockRankStrongFromWeakQueue, lockRankSweep, lockRankTestR, lockRankTimerSend, lockRankExecW, lockRankCpuprof, lockRankPollDesc, lockRankWakeableSleep, lockRankHchan, lockRankAllocmR, lockRankExecR, lockRankSched, lockRankAllg, lockRankAllp, lockRankNotifyList, lockRankTimers, lockRankTimer, lockRankNetpollInit, lockRankRoot, lockRankSynctest, lockRankItab, lockRankReflectOffs, lockRankUserArenaState, lockRankTraceBuf, lockRankTraceStrings, lockRankFin, lockRankSpanSetSpine, lockRankMspanSpecial, lockRankGcBitsArenas, lockRankProfInsert, lockRankProfBlock, lockRankProfMemActive, lockRankProfMemFuture, lockRankGscan},
- lockRankHchanLeaf: {lockRankSysmon, lockRankScavenge, lockRankForcegc, lockRankSweepWaiters, lockRankAssistQueue, lockRankStrongFromWeakQueue, lockRankSweep, lockRankTestR, lockRankTimerSend, lockRankExecW, lockRankCpuprof, lockRankPollDesc, lockRankWakeableSleep, lockRankHchan, lockRankAllocmR, lockRankExecR, lockRankSched, lockRankAllg, lockRankAllp, lockRankNotifyList, lockRankTimers, lockRankTimer, lockRankNetpollInit, lockRankRoot, lockRankSynctest, lockRankItab, lockRankReflectOffs, lockRankUserArenaState, lockRankTraceBuf, lockRankTraceStrings, lockRankFin, lockRankSpanSetSpine, lockRankMspanSpecial, lockRankGcBitsArenas, lockRankProfInsert, lockRankProfBlock, lockRankProfMemActive, lockRankProfMemFuture, lockRankGscan, lockRankHchanLeaf},
- lockRankWbufSpans: {lockRankSysmon, lockRankScavenge, lockRankForcegc, lockRankDefer, lockRankSweepWaiters, lockRankAssistQueue, lockRankStrongFromWeakQueue, lockRankSweep, lockRankTestR, lockRankTimerSend, lockRankExecW, lockRankCpuprof, lockRankPollCache, lockRankPollDesc, lockRankWakeableSleep, lockRankHchan, lockRankAllocmR, lockRankExecR, lockRankSched, lockRankAllg, lockRankAllp, lockRankNotifyList, lockRankSudog, lockRankTimers, lockRankTimer, lockRankNetpollInit, lockRankRoot, lockRankSynctest, lockRankItab, lockRankReflectOffs, lockRankUserArenaState, lockRankTraceBuf, lockRankTraceStrings, lockRankFin, lockRankSpanSetSpine, lockRankMspanSpecial, lockRankGcBitsArenas, lockRankProfInsert, lockRankProfBlock, lockRankProfMemActive, lockRankProfMemFuture, lockRankGscan},
- lockRankMheap: {lockRankSysmon, lockRankScavenge, lockRankForcegc, lockRankDefer, lockRankSweepWaiters, lockRankAssistQueue, lockRankStrongFromWeakQueue, lockRankSweep, lockRankTestR, lockRankTimerSend, lockRankExecW, lockRankCpuprof, lockRankPollCache, lockRankPollDesc, lockRankWakeableSleep, lockRankHchan, lockRankAllocmR, lockRankExecR, lockRankSched, lockRankAllg, lockRankAllp, lockRankNotifyList, lockRankSudog, lockRankTimers, lockRankTimer, lockRankNetpollInit, lockRankRoot, lockRankSynctest, lockRankItab, lockRankReflectOffs, lockRankUserArenaState, lockRankTraceBuf, lockRankTraceStrings, lockRankFin, lockRankSpanSetSpine, lockRankMspanSpecial, lockRankGcBitsArenas, lockRankProfInsert, lockRankProfBlock, lockRankProfMemActive, lockRankProfMemFuture, lockRankGscan, lockRankStackpool, lockRankStackLarge, lockRankWbufSpans},
- lockRankMheapSpecial: {lockRankSysmon, lockRankScavenge, lockRankForcegc, lockRankDefer, lockRankSweepWaiters, lockRankAssistQueue, lockRankStrongFromWeakQueue, lockRankSweep, lockRankTestR, lockRankTimerSend, lockRankExecW, lockRankCpuprof, lockRankPollCache, lockRankPollDesc, lockRankWakeableSleep, lockRankHchan, lockRankAllocmR, lockRankExecR, lockRankSched, lockRankAllg, lockRankAllp, lockRankNotifyList, lockRankSudog, lockRankTimers, lockRankTimer, lockRankNetpollInit, lockRankRoot, lockRankSynctest, lockRankItab, lockRankReflectOffs, lockRankUserArenaState, lockRankTraceBuf, lockRankTraceStrings, lockRankFin, lockRankSpanSetSpine, lockRankMspanSpecial, lockRankGcBitsArenas, lockRankProfInsert, lockRankProfBlock, lockRankProfMemActive, lockRankProfMemFuture, lockRankGscan, lockRankStackpool, lockRankStackLarge, lockRankWbufSpans, lockRankMheap},
- lockRankGlobalAlloc: {lockRankSysmon, lockRankScavenge, lockRankForcegc, lockRankDefer, lockRankSweepWaiters, lockRankAssistQueue, lockRankStrongFromWeakQueue, lockRankSweep, lockRankTestR, lockRankTimerSend, lockRankExecW, lockRankCpuprof, lockRankPollCache, lockRankPollDesc, lockRankWakeableSleep, lockRankHchan, lockRankAllocmR, lockRankExecR, lockRankSched, lockRankAllg, lockRankAllp, lockRankNotifyList, lockRankSudog, lockRankTimers, lockRankTimer, lockRankNetpollInit, lockRankRoot, lockRankSynctest, lockRankItab, lockRankReflectOffs, lockRankUserArenaState, lockRankTraceBuf, lockRankTraceStrings, lockRankFin, lockRankSpanSetSpine, lockRankMspanSpecial, lockRankGcBitsArenas, lockRankProfInsert, lockRankProfBlock, lockRankProfMemActive, lockRankProfMemFuture, lockRankGscan, lockRankStackpool, lockRankStackLarge, lockRankWbufSpans, lockRankMheap, lockRankMheapSpecial},
- lockRankTrace: {lockRankSysmon, lockRankScavenge, lockRankForcegc, lockRankDefer, lockRankSweepWaiters, lockRankAssistQueue, lockRankStrongFromWeakQueue, lockRankSweep, lockRankTestR, lockRankTimerSend, lockRankExecW, lockRankCpuprof, lockRankPollCache, lockRankPollDesc, lockRankWakeableSleep, lockRankHchan, lockRankAllocmR, lockRankExecR, lockRankSched, lockRankAllg, lockRankAllp, lockRankNotifyList, lockRankSudog, lockRankTimers, lockRankTimer, lockRankNetpollInit, lockRankRoot, lockRankSynctest, lockRankItab, lockRankReflectOffs, lockRankUserArenaState, lockRankTraceBuf, lockRankTraceStrings, lockRankFin, lockRankSpanSetSpine, lockRankMspanSpecial, lockRankGcBitsArenas, lockRankProfInsert, lockRankProfBlock, lockRankProfMemActive, lockRankProfMemFuture, lockRankGscan, lockRankStackpool, lockRankStackLarge, lockRankWbufSpans, lockRankMheap},
- lockRankTraceStackTab: {lockRankSysmon, lockRankScavenge, lockRankForcegc, lockRankDefer, lockRankSweepWaiters, lockRankAssistQueue, lockRankStrongFromWeakQueue, lockRankSweep, lockRankTestR, lockRankTimerSend, lockRankExecW, lockRankCpuprof, lockRankPollCache, lockRankPollDesc, lockRankWakeableSleep, lockRankHchan, lockRankAllocmR, lockRankExecR, lockRankSched, lockRankAllg, lockRankAllp, lockRankNotifyList, lockRankSudog, lockRankTimers, lockRankTimer, lockRankNetpollInit, lockRankRoot, lockRankSynctest, lockRankItab, lockRankReflectOffs, lockRankUserArenaState, lockRankTraceBuf, lockRankTraceStrings, lockRankFin, lockRankSpanSetSpine, lockRankMspanSpecial, lockRankGcBitsArenas, lockRankProfInsert, lockRankProfBlock, lockRankProfMemActive, lockRankProfMemFuture, lockRankGscan, lockRankStackpool, lockRankStackLarge, lockRankWbufSpans, lockRankMheap, lockRankTrace},
+ lockRankGscan: {lockRankSysmon, lockRankScavenge, lockRankForcegc, lockRankSweepWaiters, lockRankAssistQueue, lockRankStrongFromWeakQueue, lockRankSweep, lockRankTestR, lockRankTimerSend, lockRankExecW, lockRankCpuprof, lockRankPollDesc, lockRankWakeableSleep, lockRankHchan, lockRankAllocmR, lockRankExecR, lockRankSched, lockRankAllg, lockRankAllp, lockRankNotifyList, lockRankTimers, lockRankTimer, lockRankNetpollInit, lockRankRoot, lockRankItab, lockRankReflectOffs, lockRankSynctest, lockRankUserArenaState, lockRankTraceBuf, lockRankTraceStrings, lockRankFin, lockRankSpanSetSpine, lockRankMspanSpecial, lockRankGcBitsArenas, lockRankProfInsert, lockRankProfBlock, lockRankProfMemActive, lockRankProfMemFuture},
+ lockRankStackpool: {lockRankSysmon, lockRankScavenge, lockRankForcegc, lockRankSweepWaiters, lockRankAssistQueue, lockRankStrongFromWeakQueue, lockRankSweep, lockRankTestR, lockRankTimerSend, lockRankExecW, lockRankCpuprof, lockRankPollDesc, lockRankWakeableSleep, lockRankHchan, lockRankAllocmR, lockRankExecR, lockRankSched, lockRankAllg, lockRankAllp, lockRankNotifyList, lockRankTimers, lockRankTimer, lockRankNetpollInit, lockRankRoot, lockRankItab, lockRankReflectOffs, lockRankSynctest, lockRankUserArenaState, lockRankTraceBuf, lockRankTraceStrings, lockRankFin, lockRankSpanSetSpine, lockRankMspanSpecial, lockRankGcBitsArenas, lockRankProfInsert, lockRankProfBlock, lockRankProfMemActive, lockRankProfMemFuture, lockRankGscan},
+ lockRankStackLarge: {lockRankSysmon, lockRankScavenge, lockRankForcegc, lockRankSweepWaiters, lockRankAssistQueue, lockRankStrongFromWeakQueue, lockRankSweep, lockRankTestR, lockRankTimerSend, lockRankExecW, lockRankCpuprof, lockRankPollDesc, lockRankWakeableSleep, lockRankHchan, lockRankAllocmR, lockRankExecR, lockRankSched, lockRankAllg, lockRankAllp, lockRankNotifyList, lockRankTimers, lockRankTimer, lockRankNetpollInit, lockRankRoot, lockRankItab, lockRankReflectOffs, lockRankSynctest, lockRankUserArenaState, lockRankTraceBuf, lockRankTraceStrings, lockRankFin, lockRankSpanSetSpine, lockRankMspanSpecial, lockRankGcBitsArenas, lockRankProfInsert, lockRankProfBlock, lockRankProfMemActive, lockRankProfMemFuture, lockRankGscan},
+ lockRankHchanLeaf: {lockRankSysmon, lockRankScavenge, lockRankForcegc, lockRankSweepWaiters, lockRankAssistQueue, lockRankStrongFromWeakQueue, lockRankSweep, lockRankTestR, lockRankTimerSend, lockRankExecW, lockRankCpuprof, lockRankPollDesc, lockRankWakeableSleep, lockRankHchan, lockRankAllocmR, lockRankExecR, lockRankSched, lockRankAllg, lockRankAllp, lockRankNotifyList, lockRankTimers, lockRankTimer, lockRankNetpollInit, lockRankRoot, lockRankItab, lockRankReflectOffs, lockRankSynctest, lockRankUserArenaState, lockRankTraceBuf, lockRankTraceStrings, lockRankFin, lockRankSpanSetSpine, lockRankMspanSpecial, lockRankGcBitsArenas, lockRankProfInsert, lockRankProfBlock, lockRankProfMemActive, lockRankProfMemFuture, lockRankGscan, lockRankHchanLeaf},
+ lockRankWbufSpans: {lockRankSysmon, lockRankScavenge, lockRankForcegc, lockRankDefer, lockRankSweepWaiters, lockRankAssistQueue, lockRankStrongFromWeakQueue, lockRankSweep, lockRankTestR, lockRankTimerSend, lockRankExecW, lockRankCpuprof, lockRankPollCache, lockRankPollDesc, lockRankWakeableSleep, lockRankHchan, lockRankAllocmR, lockRankExecR, lockRankSched, lockRankAllg, lockRankAllp, lockRankNotifyList, lockRankSudog, lockRankTimers, lockRankTimer, lockRankNetpollInit, lockRankRoot, lockRankItab, lockRankReflectOffs, lockRankSynctest, lockRankUserArenaState, lockRankTraceBuf, lockRankTraceStrings, lockRankFin, lockRankSpanSetSpine, lockRankMspanSpecial, lockRankGcBitsArenas, lockRankProfInsert, lockRankProfBlock, lockRankProfMemActive, lockRankProfMemFuture, lockRankGscan},
+ lockRankMheap: {lockRankSysmon, lockRankScavenge, lockRankForcegc, lockRankDefer, lockRankSweepWaiters, lockRankAssistQueue, lockRankStrongFromWeakQueue, lockRankSweep, lockRankTestR, lockRankTimerSend, lockRankExecW, lockRankCpuprof, lockRankPollCache, lockRankPollDesc, lockRankWakeableSleep, lockRankHchan, lockRankAllocmR, lockRankExecR, lockRankSched, lockRankAllg, lockRankAllp, lockRankNotifyList, lockRankSudog, lockRankTimers, lockRankTimer, lockRankNetpollInit, lockRankRoot, lockRankItab, lockRankReflectOffs, lockRankSynctest, lockRankUserArenaState, lockRankTraceBuf, lockRankTraceStrings, lockRankFin, lockRankSpanSetSpine, lockRankMspanSpecial, lockRankGcBitsArenas, lockRankProfInsert, lockRankProfBlock, lockRankProfMemActive, lockRankProfMemFuture, lockRankGscan, lockRankStackpool, lockRankStackLarge, lockRankWbufSpans},
+ lockRankMheapSpecial: {lockRankSysmon, lockRankScavenge, lockRankForcegc, lockRankDefer, lockRankSweepWaiters, lockRankAssistQueue, lockRankStrongFromWeakQueue, lockRankSweep, lockRankTestR, lockRankTimerSend, lockRankExecW, lockRankCpuprof, lockRankPollCache, lockRankPollDesc, lockRankWakeableSleep, lockRankHchan, lockRankAllocmR, lockRankExecR, lockRankSched, lockRankAllg, lockRankAllp, lockRankNotifyList, lockRankSudog, lockRankTimers, lockRankTimer, lockRankNetpollInit, lockRankRoot, lockRankItab, lockRankReflectOffs, lockRankSynctest, lockRankUserArenaState, lockRankTraceBuf, lockRankTraceStrings, lockRankFin, lockRankSpanSetSpine, lockRankMspanSpecial, lockRankGcBitsArenas, lockRankProfInsert, lockRankProfBlock, lockRankProfMemActive, lockRankProfMemFuture, lockRankGscan, lockRankStackpool, lockRankStackLarge, lockRankWbufSpans, lockRankMheap},
+ lockRankGlobalAlloc: {lockRankSysmon, lockRankScavenge, lockRankForcegc, lockRankDefer, lockRankSweepWaiters, lockRankAssistQueue, lockRankStrongFromWeakQueue, lockRankSweep, lockRankTestR, lockRankTimerSend, lockRankExecW, lockRankCpuprof, lockRankPollCache, lockRankPollDesc, lockRankWakeableSleep, lockRankHchan, lockRankAllocmR, lockRankExecR, lockRankSched, lockRankAllg, lockRankAllp, lockRankNotifyList, lockRankSudog, lockRankTimers, lockRankTimer, lockRankNetpollInit, lockRankRoot, lockRankItab, lockRankReflectOffs, lockRankSynctest, lockRankUserArenaState, lockRankTraceBuf, lockRankTraceStrings, lockRankFin, lockRankSpanSetSpine, lockRankMspanSpecial, lockRankGcBitsArenas, lockRankProfInsert, lockRankProfBlock, lockRankProfMemActive, lockRankProfMemFuture, lockRankGscan, lockRankStackpool, lockRankStackLarge, lockRankWbufSpans, lockRankMheap, lockRankMheapSpecial},
+ lockRankTrace: {lockRankSysmon, lockRankScavenge, lockRankForcegc, lockRankDefer, lockRankSweepWaiters, lockRankAssistQueue, lockRankStrongFromWeakQueue, lockRankSweep, lockRankTestR, lockRankTimerSend, lockRankExecW, lockRankCpuprof, lockRankPollCache, lockRankPollDesc, lockRankWakeableSleep, lockRankHchan, lockRankAllocmR, lockRankExecR, lockRankSched, lockRankAllg, lockRankAllp, lockRankNotifyList, lockRankSudog, lockRankTimers, lockRankTimer, lockRankNetpollInit, lockRankRoot, lockRankItab, lockRankReflectOffs, lockRankSynctest, lockRankUserArenaState, lockRankTraceBuf, lockRankTraceStrings, lockRankFin, lockRankSpanSetSpine, lockRankMspanSpecial, lockRankGcBitsArenas, lockRankProfInsert, lockRankProfBlock, lockRankProfMemActive, lockRankProfMemFuture, lockRankGscan, lockRankStackpool, lockRankStackLarge, lockRankWbufSpans, lockRankMheap},
+ lockRankTraceStackTab: {lockRankSysmon, lockRankScavenge, lockRankForcegc, lockRankDefer, lockRankSweepWaiters, lockRankAssistQueue, lockRankStrongFromWeakQueue, lockRankSweep, lockRankTestR, lockRankTimerSend, lockRankExecW, lockRankCpuprof, lockRankPollCache, lockRankPollDesc, lockRankWakeableSleep, lockRankHchan, lockRankAllocmR, lockRankExecR, lockRankSched, lockRankAllg, lockRankAllp, lockRankNotifyList, lockRankSudog, lockRankTimers, lockRankTimer, lockRankNetpollInit, lockRankRoot, lockRankItab, lockRankReflectOffs, lockRankSynctest, lockRankUserArenaState, lockRankTraceBuf, lockRankTraceStrings, lockRankFin, lockRankSpanSetSpine, lockRankMspanSpecial, lockRankGcBitsArenas, lockRankProfInsert, lockRankProfBlock, lockRankProfMemActive, lockRankProfMemFuture, lockRankGscan, lockRankStackpool, lockRankStackLarge, lockRankWbufSpans, lockRankMheap, lockRankTrace},
lockRankPanic: {},
lockRankDeadlock: {lockRankPanic, lockRankDeadlock},
lockRankRaceFini: {lockRankPanic},
```
```
The name of the file: src/runtime/synctest.go
Insertions: 1, Deletions: 1.
@@ -20,7 +20,7 @@
// The group is active (not blocked) so long as running > 0 || active > 0.
//
// running is the number of goroutines which are not "durably blocked":
- // Goroutines which are either running, or non-durably blocked
+ // Goroutines which are either running, runnable, or non-durably blocked
// (for example, blocked in a syscall).
//
// active is used to keep the group from becoming blocked,
```
```
The name of the file: src/runtime/mklockrank.go
Insertions: 3, Deletions: 3.
@@ -90,14 +90,14 @@
# Semaphores
NONE < root;
-# Synctest
-hchan, root, timers, timer, notifyList < synctest;
-
# Itabs
NONE
< itab
< reflectOffs;
+# Synctest
+hchan, root, timers, timer, notifyList, reflectOffs < synctest;
+
# User arena state
NONE < userArenaState;
```
```
The name of the file: src/runtime/mgc.go
Insertions: 1, Deletions: 1.
@@ -642,7 +642,7 @@
if gp := getg(); gp.syncGroup != nil {
// Disassociate the G from its synctest bubble while allocating.
// This is less elegant than incrementing the group's active count,
- // but avoids any contamination between GC assist and synctest.
+ // but avoids any contamination between GC and synctest.
sg := gp.syncGroup
gp.syncGroup = nil
defer func() {
```
```
The name of the file: src/runtime/sema.go
Insertions: 4, Deletions: 4.
@@ -630,8 +630,8 @@
next := s.next
s.next = nil
if s.g.syncGroup != nil && getg().syncGroup != s.g.syncGroup {
- var gbuf [20]byte
- throw("semaphore wake of synctest goroutine " + string(itoa(gbuf[:], s.g.goid)) + " from outside bubble")
+ println("semaphore wake of synctest goroutine", s.g.goid, "from outside bubble")
+ panic("semaphore wake of synctest goroutine from outside bubble")
}
readyWithTime(s, 4)
s = next
@@ -687,8 +687,8 @@
unlock(&l.lock)
s.next = nil
if s.g.syncGroup != nil && getg().syncGroup != s.g.syncGroup {
- var gbuf [20]byte
- throw("semaphore wake of synctest goroutine " + string(itoa(gbuf[:], s.g.goid)) + " from outside bubble")
+ println("semaphore wake of synctest goroutine", s.g.goid, "from outside bubble")
+ panic("semaphore wake of synctest goroutine from outside bubble")
}
readyWithTime(s, 4)
return
```
internal/synctest: new package for testing concurrent code
Add an internal (for now) implementation of testing/synctest.
The synctest.Run function executes a tree of goroutines in an
isolated environment using a fake clock. The synctest.Wait function
allows a test to wait for all other goroutines within the test
to reach a blocking point.
For #67434
For #69687
Note that runtime:mayMoreStackMove.TestSynctest is [failing](https://ci.chromium.org/ui/p/golang/builders/ci/gotip-linux-amd64-longtest/b8730812608564042305/test-results?q=ExactID%3Aruntime%3AmayMoreStackMove.TestSynctest+VHash%3A369855b1842f6583) on longtest builders as of this CL.