But... not a single Brazilian. Hmpf.
-- rodrigo
Of course, I'm very happy that Go is in the top 10.
[1]: http://aichallenge.org/profile.php?user=589
[2]: http://aichallenge.org/profile.php?user=4513
I've heard that the Java guy has been preparing a new version since
October, while the Go guy iterated a few versions.
Hah. I'm so funny.
-- rodrigo
How often does this competition take place? I'm interested in participating, but this edition finishes in less than a week.
I would be interested in learning whether (in your opinion) converting
this goroutine-based implementation of your algorithm into a sequential
call-return implementation would increase the amount of code that
needed to be written.
Next, I would like to know whether the outcome of your algorithm (that
is, the strength of the solution found by the algorithm) depends on
the global order of messages sent via Go channels. My question
actually is whether you were able to map the optimization process to
some of the concurrency features offered by the Go language.
Why did you need to make your own multi-tasking scheduler? Wouldn't it
be better to call runtime.Gosched() periodically in your computation
loops (so computations are more evenly distributed and don't starve
communication), or was the cost of the call too high? (I just measured
and on my Macbook Pro I can do ~2.5 million runtime.Gosched calls in
500ms, which is surprisingly small)
Here is a link to the description of my entry
I learned the hard way that CPU-bound goroutines don't play nice. My program consisted of five explicit goroutines: in addition to the main goroutine, there was one each for the game-state lexer and parser, and one each for computing non-combat and combat moves. On each turn the main goroutine receives the game state from the parser, updates some data structures, and partitions the problem into combat and non-combat categories. It then launches a goroutine to process each category and waits for them to send their move orders via a channel. Each of the move-computing goroutines is entirely CPU bound and also needs to stop when the time limit is near, so that the program can reply to the game server with its moves before time has expired.
Initially I tried using a time.Timer to send a message on a channel when it was time to stop computing moves, but the goroutine used by the time package to sleep for the Timer was starved of CPU cycles by the CPU bound goroutines. Luckily I was able to find some discussion of this phenomenon here on golang-nuts that helped me realize it was an expected behavior rather than something I was doing wrong. I tried different hacks to make this architecture work, but the only approach that worked reliably was to call time.Nanoseconds() at manually tuned loop counts and exit when a threshold was exceeded.
I will use Go again. I enjoyed writing my entry in Go, and I will use it again if I compete in future competitions. I will also consider using Go on other projects if I think it is a good fit.
Why did you need to make your own multi-tasking scheduler? Wouldn't it
be better to call runtime.Gosched() periodically in your computation
loops (so computations are more evenly distributed and don't starve
communication), or was the cost of the call too high? (I just measured
and on my Macbook Pro I can do ~2.5 million runtime.Gosched calls in
500ms, which is surprisingly small)
I think I found why this happens. The Go scheduler keeps a list of
runnable goroutines; runtime.Gosched puts the current goroutine at the
end of that list and jumps into the scheduler. The problem is that
when using timers (e.g. <-time.After(delay)) two things are going on.
First, there's an underlying sleep syscall backing the timers; when
that syscall returns, the timer goroutine needs to resume, and it is
scheduled at the end of the list. The timer goroutine then does a send
on a channel, which finally wakes the receiving goroutine, again
adding it to the end of the list.
This basically means that at least *two* full cycles of each computing
goroutine will pass before you receive anything from the timer
channel. I played with it and came up with something like this (I'm
using current tip, though, not release 60.1) that shows what's going on:
package main

import (
	"fmt"
	"runtime"
	"time"
)

func calc() {
	d, _ := time.ParseDuration("100ms")
	t0 := time.Now()
	for {
		t1 := time.Now()
		if t1.Sub(t0) >= d {
			runtime.Gosched()
			t0 = time.Now()
		}
	}
}

func main() {
	d, _ := time.ParseDuration("10ms")
	t0 := time.Now()
	timer := time.After(d)
	go calc()
	go calc()
	<-timer
	t1 := time.Now()
	fmt.Println(t1.Sub(t0))
}
The goroutines are not strictly computing (they're calling time.Now),
but luckily time.Now is lightweight, handled by the runtime, and
doesn't cause entering/leaving a syscall. What it shows is that with
runtime.Gosched called every 100ms and small enough timer values, at
least 400ms will pass for 2 goroutines (2 cycles x 2 goroutines),
600ms for 3, 800ms for 4, etc.
So when planning how often you need to call Gosched you need to keep
this latency in mind (for 10ms latency with 2 computing goroutines you
would need to call Gosched every 2.5ms).
It turned out to be an interesting side effect of the current
scheduler design, hope this helps you in the future.