--
Vasiliy Tolstov,
Clodo.ru
e-mail: v.to...@selfip.ru
jabber: va...@selfip.ru
Sure, but then there will be another array with the same amount of
space that cannot be used by any public API, and continued use of
the public API can cause the amount of wasted space to grow without
bound. There will be O(calls-to-Take) useless memory until the queue
becomes garbage.
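For concreteness, here is a minimal sketch of the kind of slice-backed queue being discussed; the Queue type and its methods are made up for illustration, not taken from any posted code:

package queue

// Queue is an illustrative FIFO backed by a slice. Take re-slices the
// backing array instead of copying or shrinking it, so the array only
// ever grows: every slot consumed by Take is unreachable through the
// public API yet still pinned in memory until the whole Queue is garbage.
type Queue struct {
	items []interface{}
}

// Put appends a value, growing the backing array as needed.
func (q *Queue) Put(v interface{}) {
	q.items = append(q.items, v)
}

// Take removes and returns the oldest value. The slot it occupied stays
// in the backing array, which is the O(calls-to-Take) wasted space.
func (q *Queue) Take() interface{} {
	v := q.items[0]
	q.items = q.items[1:]
	return v
}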
> through use. You could always make an accumulator that can be grown, but not
> shrunk or accessed via its public interface, but that's not really a leak
> either, just poor interface design.
> You can, however, leak a goroutine along with any memory it uses by having
> it loop forever without communication, or block on a channel that is no
> longer held by any other goroutine.
>
> Also, go currently has a conservative garbage collector, which means it's
> possible for the collector to overlook some garbage because it thinks there
> might be a pointer to it when there isn't.
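To make the quoted goroutine-leak point concrete, a minimal sketch (all names here are illustrative):

package main

import "fmt"

// leak starts a goroutine that blocks on a channel held by nobody else.
// The goroutine, its stack, and anything it references stay alive for
// the life of the program, even though nothing can ever unblock it.
func leak() {
	ch := make(chan int)
	go func() {
		fmt.Println(<-ch) // blocks forever: no other goroutine has ch
	}()
	// ch goes out of scope here; the receiver is stuck for good.
}

func main() {
	for i := 0; i < 1000; i++ {
		leak() // each call pins one more goroutine and its stack
	}
}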
My quibble is with defining "memory leak" based on garbage collection
at all. Not only does that definition not extend to languages without
GC that everyone agrees have memory leaks, it does not match developer
intuition.
A similarly naive queue of interface{}s that did not assign nil on
Take would maintain a pointer to something that is not accessible via
the public API. This seems like a leak to me.
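Concretely, the only difference between the leaking and non-leaking variants is whether Take drops the queue's own reference; a sketch using the same illustrative Queue as above:

// Take removes and returns the oldest value, clearing the vacated slot
// so the backing array no longer keeps the returned value reachable.
func (q *Queue) Take() interface{} {
	v := q.items[0]
	q.items[0] = nil // without this line, the value stays reachable from the array
	q.items = q.items[1:]
	return v
}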
I think we should be careful to take developer intuition into account
before saying that certain classes of bugs do not affect certain kinds
of systems.
You can keep asserting this definition and ignoring my arguments as to
why it is a poor definition, but mere assertions are not going to make
it a good one.
Ah. But what if we discover that Steven's definition has been the
commonly understood definition in the field for almost half a century?
Would it be acceptable to you then?
ron
If the poor schlubs who send emails titled "please help me find the
memory leak in my code?" are included "in the field", then yes.
val = make(url.Values)
for {
	r, err = http.Get(conf.Uri)
	if err != nil || r.StatusCode != 200 {
		if err == nil {
			// Non-200 response: close the body, otherwise the
			// connection (and its goroutines) is never released.
			r.Body.Close()
		}
		time.Sleep(conf.Pullint * 1000000000)
		continue
	}
	lerr = nil
	rb = bufio.NewReader(r.Body)
	for lerr == nil {
		line, lerr = rb.ReadString('\n')
		if lerr != nil && lerr != os.EOF {
			break
		}
		if lerr == os.EOF {
			break
		}
		match = pattern.FindStringSubmatch(line)
		if match == nil {
			break
		}
		match[1] = strings.Trim(match[1], "\n\t\r")
		match[2] = strings.Trim(match[2], "\n\t\r")
		switch match[1] {
		case "userpw":
			data.Userpw = match[2]
		case "rootpw":
			data.Rootpw = match[2]
		case "memmin":
			data.Memmin, _ = strconv.Atoui64(match[2])
			data.Memmin *= 1024 * 1024
		case "memmax":
			data.Memmax, _ = strconv.Atoui64(match[2])
			data.Memmax *= 1024 * 1024
		case "memint":
			data.Memint, _ = strconv.Atoi64(match[2])
			if data.Memint == 0 || data.Memint == 1000 {
				data.Memint = 1
			}
		case "memhold":
			data.Memhold, _ = strconv.Atoui64(match[2])
			data.Memhold *= 1024 * 1024
		case "memblk":
			data.Memblk, _ = strconv.Atoui64(match[2])
			data.Memblk *= 1024 * 1024
		case "pullint":
			conf.Pullint, _ = strconv.Atoi64(match[2])
		case "cmdline":
			if data.Cmdline != match[2] && match[2] != "" {
				data.Cmdline = match[2]
				cmd = exec.Command("/bin/sh", "-c", data.Cmdline)
				cmd.Start()
				// Reap the child so it does not linger as a zombie.
				go cmd.Wait()
			}
		}
	}
	// Close the GET body once it has been read; leaving it open keeps
	// the underlying connection and its goroutines alive.
	r.Body.Close()
	// Drop per-iteration references before the next round.
	match = nil
	rb = nil
	line = ""
	if data.Memmax < data.Memmin || data.Memmax == 0 {
		data.Memmax = data.Memmin
	}
	err = Memory(&mem)
	if err == nil {
		val.Set("mem_used", fmt.Sprintf("%v", mem.mem_used))
	}
	err = Disk(&disk)
	if err == nil {
		val.Set("disk_used", fmt.Sprintf("%v", disk.disk_used))
	}
	r, err = http.PostForm(conf.Uri, val)
	if err == nil {
		// Close the POST body exactly once; the earlier unconditional
		// Close after this block either double-closed it or dereferenced
		// a nil response when PostForm failed.
		r.Body.Close()
	} else {
		fmt.Print(err.String())
	}
	val.Del("mem_used")
	val.Del("disk_used")
	_data <- data
	if tr, ok := http.DefaultTransport.(*http.Transport); ok {
		tr.CloseIdleConnections()
	}
	time.Sleep(conf.Pullint * 1000000000)
}
return
Such a collector would have to perform *arbitrary computation* to decide
what garbage is and what garbage isn't. You can *not* implement such a
garbage collector - not because you (or I) don't like it, but because it
is impossible. You just need to accept the fact that it is indeed impossible.
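A tiny illustration of why: here f stands in for an arbitrary computation, and everything in the sketch is made up for the example:

package main

import "fmt"

// f stands for an arbitrary computation; whether it ever returns is, in
// general, the halting problem.
func f() {
	for {
	}
}

func main() {
	buf := make([]byte, 64<<20) // 64 MB, reachable for the whole function
	f()
	// While f runs, buf is garbage if and only if f never returns and the
	// line below is never reached. A collector that frees exactly what
	// will never be used again would have to decide that, which means
	// performing arbitrary computation about the program's future.
	fmt.Println(buf[0])
}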
I'm doing some profiling and this is what I get:
Total: 768.0 MB
586.5 76.4% 76.4% 586.5 76.4% runtime.malg
85.5 11.1% 87.5% 110.0 14.3% runtime/pprof.WriteHeapProfile
20.5 2.7% 90.2% 20.5 2.7% bufio.NewReaderSize
14.5 1.9% 92.1% 25.0 3.3% regexp.(*Regexp).FindStringSubmatch
12.5 1.6% 93.7% 12.5 1.6% bufio.NewWriterSize
10.5 1.4% 95.1% 10.5 1.4% io.Copy
7.0 0.9% 96.0% 7.0 0.9% bytes.(*Buffer).grow
7.0 0.9% 96.9% 7.0 0.9% regexp/syntax.(*compiler).inst
6.5 0.8% 97.7% 6.5 0.8% regexp.progMachine
5.0 0.7% 98.4% 5.0 0.7% runtime.convT2E
Is it possible to minimize memory usage?
P.S. I added some bad code lines and the memory does not leak now.
Hi,
Are you using 8g or 6g? The former does have some known problems,
which Dmitry is looking at solving.
Cheers
Dave
Russ has fixed a map-related memory leak recently. If I remember correctly, the last element wasn't freed, or something like that. Anyway, this fix is already included in r60.2, so make sure to use a recent version of Go.