The PNG decoder is significantly faster in Go tip when compared to Go 1.0.2:
I haven't measured the net effect but I imagine that it's around 2x
faster. There may be even more improvements possible. The next step is
probably to optimize Go's flate performance, whether in the
compress/flate/*.go code or in the compiler, but I haven't gotten
around to doing either yet.
The PNG encoder has not been optimized yet, and I wouldn't be
surprised if there were similar low-hanging performance fruit there.
As for writing the explicit loop versus using image/draw, I would
rather that you used image/draw. That package has some code fast-paths
for a number of common operations (such as blitting RGBAs over each
other), but it currently does not have a fast path to convert to an
image.Gray. That's simply a bug.
BTW, your explicit loop has an image.Gray dst and image.NRGBA src and does:
dst.Pix[i] = rgba.Pix[i*4 + 3]
which sets the destination gray value to the source's alpha value.
This translates both fully opaque black and fully opaque white to 100%
white, which might 'work' for your test image, but seems incorrect in
general.
Your saveImage function writes directly to a file. You may or may not
get better numbers if you wrap that file in a bufio.Writer (don't
forget to Flush it), to avoid lots of little writes to the file
system. This is arguably a bug in package image/png.
As for why not use libpng, others have already given some reasons, but
another is that Go is a memory-safe language: slice accesses are
bounds-checked. A native Go PNG decoder is not as susceptible to
buffer overflow attacks as libpng. Even though libpng has had over 15
years of develop-debug-optimize cycles, I note that the libpng project
has still issued two "serious vulnerability" warnings this year alone
about "the possibility of execution of hostile code".