This looks reasonable, but I'm not sure you want to do it this way.
If you are consistently serving requests at a rate faster than the
data can be written to disk, then you could accumulate goroutines
waiting to send to the writer goroutine and eventually run out of memory.
Better, I think, would be to use a buffered channel (choose a
suitably big size) and do the send inline:
package main

import "os"

var writes = make(chan string, 1000)

func main() {
	f, err := os.Create("data.log")
	if err != nil {
		panic(err)
	}
	go func() {
		for data := range writes {
			f.WriteString(data)
		}
	}()
	// Register handleRequest with an HTTP mux and start the server here.
}

func handleRequest() {
	// Read data from the HTTP request into theData...
	theData := ""
	writes <- theData
	// Respond to the HTTP request.
}
You might also speed up disk throughput by
using a bufio.Writer (again with a suitably chosen
buffer size), but this has the disadvantage that
records will not necessarily be written in whole
units, and if you care about that, you'll want some
logic to make sure that the buffer is flushed periodically.
Another technique you could use to reduce disk writes
without using bufio.Writer is to read all available data
from the channel and then write it in a single call. This is an
improvement only if the data sent by each HTTP request is relatively small.
func writer(f *os.File, writes <-chan []byte) {
	var buf []byte
	for data := range writes {
		buf = append(buf, data...)
		// Accumulate data from any goroutines that are
		// ready to send it (up to ~256KB), so that we
		// issue fewer disk writes.
	drain:
		for len(buf) < 256*1024 {
			select {
			case d, ok := <-writes:
				if !ok {
					break drain
				}
				buf = append(buf, d...)
			default:
				break drain
			}
		}
		f.Write(buf)
		buf = buf[:0]
	}
}