Embedding binary data in Go source, byte array literals bloat the object file


Risto Saarelma

Dec 5, 2009, 6:52:35 AM
to golang-nuts
I want to embed the contents of small binary files in my Go binary so
that I can distribute an app with some data assets as a single
executable file. The most straightforward way seemed to be to generate
code that emits a byte array literal that contains the bytes of the
data.

However, when I tested this with 'var data = [...]byte { 0x01,
0x02, ... }' with 8g, every byte in the array seems to add 22 bytes to
the size of the object file. The ratio of object file size increase to
data size increase should be reasonably close to x1 for this technique
to be worth using, so the byte array literal approach won't do.

A way that does seem to give the 1:1 ratio in object files is putting
the data in a string constant, 'const data = "\x01\x02..."'. I can get
a byte array copy using strings.Bytes and use strings.NewReader on the
const string to get a file-like wrapper on the string, so the const
string approach does seem to work pretty well.
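
For illustration, a minimal sketch of the string-constant approach (the
package and function names are made up, and the []byte(data) conversion
stands in for the strings.Bytes call):

package assets

import "strings"

// data holds the raw bytes of the embedded file as a string constant.
const data = "\x01\x02\x03\x04"

// Bytes returns a mutable copy of the embedded data.
func Bytes() []byte { return []byte(data) }

// Reader returns a file-like view of the embedded data.
func Reader() *strings.Reader { return strings.NewReader(data) }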

Posting this in case anyone else wants to try the same thing and
instinctually goes for the byte array literals.

Helmar

Dec 5, 2009, 7:46:17 AM
to golang-nuts
Thanks,

well, this is to be expected.
I'm curious whether there is something else in the standard library besides
bytes.NewBufferString() to convert strings to []byte. It's a seldom
needed task - but it would be nice to have the counterpart of
string([]byte)...

I guess with a growing project just about everyone will need something like

func StrToByte(a string) []byte {
    r := make([]byte, len(a))
    for i := 0; i < len(a); i++ { // index byte by byte; a range loop over a string steps rune by rune
        r[i] = a[i]
    }
    return r
}

at some place.
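
(For the record, in today's Go the conversion is built in, so such a
helper is no longer needed:)

b := []byte(a) // copies the bytes of a into a fresh slice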

Again, thank you - and maybe you can use that func somewhere in your
project ;)

Regards,
-Helmar

Risto Saarelma

Dec 5, 2009, 9:21:30 AM
to golang-nuts
http://golang.org/pkg/strings/#Bytes is exactly this.
Helmar

Dec 5, 2009, 9:53:53 AM
to golang-nuts
Hi,

On Dec 5, 9:21 am, Risto Saarelma <rsaar...@gmail.com> wrote:
> http://golang.org/pkg/strings/#Bytes is exactly this.

Yes, thanks. Interesting that it lives in "strings" - I was
expecting it under bytes and was wrong. Just for the record, I renamed
my util to the one from "strings":
http://code.google.com/p/pdfreader/source/detail?r=7312848c0d30f474cc385ff6aa72fc555034e1f9

Regards,
-Helmar

Ben Bullock

Dec 5, 2009, 11:15:13 AM
to golang-nuts


On Dec 5, 8:52 pm, Risto Saarelma <rsaar...@gmail.com> wrote:
> I want to embed the contents of small binary files in my Go binary so
> that I can distribute an app with some data assets as a single
> executable file. The most straightforward way seemed to be to generate
> code that emits a byte array literal that contains the bytes of the
> data.
>
> However, when I tested this with 'var data = [...]byte { 0x01,
> 0x02, ... }' with 8g, every byte in the array seems to add 22 bytes to
> the size of the object file.

The object file contains information for the linker, though, so it
doesn't mean that these extra bytes are all reflected in the size of
the binary. Just looking at the assembler output from 8g (8g -S using
the test code below) it looks like there is a lot of type information
included with each of those data points.

I can't make much sense of the output of

8l -a

but it doesn't necessarily follow that the binary is bloated just
because the object file is. The same thing is true with C programs in
fact: if you have a lot of data then the object file may become very
big.

> The ratio of object file size increase to
> data size increase should be reasonably close to x1 for this technique
> to be worth using, so the byte array literal approach won't do.

I'm not sure that you are measuring it right.

Here is a test program:

------------------------

package main

var fabbydata = [...]byte{
	0x01, 0x02, 0x03,
}

func main() {
	for a, b := range fabbydata {
		println(a, " -> ", b)
	}
}

--------------

I copied and pasted the data line here several times so that there
are about four thousand data bytes:

(many lines omitted)
3977 -> 3

The binary is this size:

[ben@mikan] go 536 $ ll 8.out
-rwxr-xr-x 1 share share 76805 Dec 6 01:07 8.out*

Now I revert back to the version with three bytes of data only, as
above:

[ben@mikan] go 537 $ gogo bytes.go
Compiling
Linking
Running
0 -> 1
1 -> 2
2 -> 3

Now look at the size:

[ben@mikan] go 538 $ ll 8.out
-rwxr-xr-x 1 share share 72700 Dec 6 01:07 8.out*
[ben@mikan] go 539 $

It looks like about four thousand bytes of difference between the two
binaries, so that would be about what you'd expect. Surely the extra
bytes are used at the linking stage.

Ben Bullock

Dec 5, 2009, 11:22:39 AM
to golang-nuts
[Please excuse self-followup]

On Dec 6, 1:15 am, Ben Bullock <benkasminbull...@gmail.com> wrote:

> Now look at the size:
>
> [ben@mikan] go 538 $ ll 8.out
> -rwxr-xr-x  1 share  share  72700 Dec  6 01:07 8.out*
> [ben@mikan] go 539 $
>
> It looks like about four thousand bytes of difference between the two
> binaries, so that would be about what you'd expect. Surely the extra
> bytes are used at the linking stage.

Just to make sure I copied and pasted so that the data part is fifty-
five thousand bytes:

<many lines omitted>
55670 -> 3
[ben@mikan] go 540 $ ll 8.out
-rwxr-xr-x 1 share share 130056 Dec 6 01:20 8.out*
[ben@mikan] go 541 $

It really looks like only one byte per data point in the binary to me,
not twenty-two.

hong

Dec 5, 2009, 10:45:32 AM
to golang-nuts
Go does not support composite literals other than integer, float, and
string. If you define a struct or an array literal, Go generates code
that constructs the object at runtime. This behavior is almost the same
as in Java. Personally, I think this is a mistake. I wish Go could put
literals into the read-only data segment of the executable, just like C.

Hong

Hong Zhang

Dec 5, 2009, 2:51:25 PM
to r...@google.com, golan...@googlegroups.com
Thanks for the clarification. I was confused with const literals.

If Go generates data for literals, why does it not allow

const x = struct { a int; b string } { 3, "hi" }

The linker can just put it into a read-only data segment. If anyone modifies it,
it will trigger a crash. This is almost the same behavior as a C string
literal. Since Go already crashes on nil and out-of-bounds indexes, crashing
on a const seems to match the current style. I think it would make Go more
reliable in real life.

BTW, can you comment why the other person saw 22x blow up for byte array?

Hong

----- Original Message -----
From: Rob 'Commander' Pike <r...@google.com>
To: hong <ho...@google.com>
Cc: golang-nuts <golan...@googlegroups.com>
Sent: Sat Dec 05 08:56:52 2009
Subject: Re: [go-nuts] Re: Embedding binary data in Go source, byte array
literals bloat the object file


On Dec 5, 2009, at 7:45 AM, hong wrote:

> Go does not support composite literals other than integer, float, and
> string. If you define a struct or an array literal, Go generates code
> that constructs the object at runtime. This behavior is almost the same
> as in Java. Personally, I think this is a mistake. I wish Go could put
> literals into the read-only data segment of the executable, just like C.

Not true.

From this code:

var x = struct { a int; b string } { 3, "hi" }

6g produces this:

0018 (x.go:4) DATA x+0(SB)/4,$3
0018 (x.go:4) DATA x+8(SB)/8,$string."hi"+0(SB)
0018 (x.go:4) DATA x+16(SB)/4,$2

That's data, not code.

Similarly

var x = []string {"hi", "there"}

produces

0018 (x.go:4) DATA statictmp_0000+0(SB)/8,$string."hi"+0(SB)
0018 (x.go:4) DATA statictmp_0000+8(SB)/4,$2
0018 (x.go:4) DATA statictmp_0000+16(SB)/8,$string."there"+0(SB)
0018 (x.go:4) DATA statictmp_0000+24(SB)/4,$5
0018 (x.go:4) DATA x+0(SB)/8,$statictmp_0000+0(SB)
0018 (x.go:4) DATA x+8(SB)/4,$2
0018 (x.go:4) DATA x+12(SB)/4,$2

There are cases where code will be generated, but your sweeping statement is
false.

Anyway it's an implementation point, not a language requirement.

-rob

Helmar

Dec 5, 2009, 3:15:30 PM
to golang-nuts
Hi,

> BTW, can you comment why the other person saw 22x blow up for byte array?

Are you able to predict the binary sizes of executables? I think this
person is just reporting what he measured.
I actually measured about a 2x increase for a string literal converted
to []byte.

Regards,
-Helmar

Russ Cox

Dec 5, 2009, 5:00:45 PM
to Hong Zhang, golan...@googlegroups.com
On Sat, Dec 5, 2009 at 11:51, Hong Zhang <ho...@google.com> wrote:
> Thanks for the clarification. I was confused with const literals.
>
> If Go generates data for literals, why does it not allow
>
> const x = struct { a int; b string } { 3, "hi" }
>
> The linker can just put it into a read-only data segment. If anyone modifies it,

New features are considered on a design basis first,
and only then on implementation. Just because something
can be implemented doesn't mean it should be.
Go's constants are nice and simple right now,
and one reason is that they are limited to basic values.

Moving on to implementation, I don't know how you imagine
that x will be used, but right now Go doesn't require a
read-only data segment for correct program execution.
Certainly the compiler can disallow assignment to fields
of x at compile time without needing a runtime trap.
Your comment makes it sound like you expect programs
to be able to pass &x to a function. In Go, constants
cannot be addressed - they are actual constants, more
like enums than "const" variables in C++.
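
For example, roughly (a sketch of the distinction, not code from the
compiler):

const n = 3                   // fine: constants are limited to basic values
// const b = []byte{1, 2, 3}  // rejected: a composite value cannot be a constant
var b = []byte{1, 2, 3}       // fine: a variable, addressable and mutable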

Please try writing 1000 lines of Go code. It should give
you a better sense of what features Go does and does
not have. And of course, if you find that you can't program
without Turing complete constants, there's always C++.

> BTW, can you comment why the other person saw 22x blow up for byte array?

The extra space comes from each byte in the array being emitted
as a separate DATA statement in the .8 file (which is an
encoded form of a .s file, not actual machine code).
It would be easy for the compiler to rewrite large
initialized byte arrays into larger chunks, as the string
constants do; that would cut the overhead to something
more like 3x, which is more reasonable.

Note that this only affects the intermediate object files
(the .8 files) and not the final binary (the 8.out).

Russ

Hong Zhang

Dec 5, 2009, 7:15:35 PM
to r...@golang.org, golan...@googlegroups.com
The specific issue I am worried about is things like the unicode tables.
They were public and mutable the last time I looked. That is a serious
security hole: in theory, I can change a table and then use reflection to
access private fields and the like.

To avoid such risks, we have to do more security auditing on more code. To
answer your question, I can still write correct code (I could even do so in
asm), but it will have more chance of bugs and cost a bit more. I know this
is not a focus area of Go, but I just want to make the issue clear.

BTW, I didn't mean taking the address of a const. I was referring to code like
const X = []byte{1, 2, 3}; X[0] = 12;

Hong

----- Original Message -----
From: r...@google.com <r...@google.com>
To: Hong Zhang <ho...@google.com>
Cc: golan...@googlegroups.com <golan...@googlegroups.com>
Sent: Sat Dec 05 14:00:45 2009
Subject: Re: [go-nuts] Re: Embedding binary data in Go source, byte array
literals bloat the object file

r

Dec 5, 2009, 11:13:39 PM
to golang-nuts
[trying a third time, as r...@golang.org from inside google groups as i
try to debug my isolation]

This is a test. I tried sending this earlier but apparently it never
arrived at the list. Sending again.

-rob

SnakE

Dec 7, 2009, 1:33:39 PM
to Hong Zhang, r...@golang.org, golan...@googlegroups.com
2009/12/6 Hong Zhang <ho...@google.com>

The specific issue I am worried about is things like the unicode tables.
They were public and mutable the last time I looked. That is a serious
security hole: in theory, I can change a table and then use reflection to
access private fields and the like.

It makes no sense to talk about *security* in the context of a single process. You cannot defend against untrusted code if it's in the same process. It's only reasonable to discuss *safety*, that is, ease of avoiding bugs.

Hong Zhang

Dec 7, 2009, 1:52:26 PM
to SnakE, r...@golang.org, golan...@googlegroups.com
> It makes no sense to talk about *security* in the context of a single
> process.  You cannot defend against untrusted code if it's in the same
> process.  It's only reasonable to discuss *safety*, that is, ease of
> avoiding bugs.

We don't need to debate English terms. If you run a single Exchange
Server or SQL Server that serves thousands of users, any major mistake
will cause millions of dollars of loss, regardless of whether you call it
security, safety, or a bug.

In an ideal world, you can write high-quality software and make sure it
works well. This is what you get with a single-process Windows kernel plus
the Blue Screen of Death. In a less ideal world, you have to link with many
third-party libraries and pray they are all nice and safe. The word "trust"
is not free; we spend a lot of time on code audits and testing. There is a
cost associated with any trust or distrust.

My point is that if Go can provide some kind of immutable array or const
array, it will reduce some common risks, and hence cost. Of course, I can
write relatively safe code in C or asm, so it is not a blocking issue.

Hong

Miguel Pignatelli

Jan 18, 2010, 11:38:31 AM
to golan...@googlegroups.com
Hi all,

I'm trying to learn a bit more about concurrency (in general) and its
implementation in Go (in particular), and I would appreciate any help in
working out the best strategy for this (slightly simplified) case:

Suppose that I want to index all the "words" (not really words but DNA
hexamers) in a set of documents for downstream processing.

The sequential process for 1 document would involve reading the text
document and populating a map from strings to vectors of ints, where the
keys are the different possible "words" and each int vector represents
its positions in the document.
ACGAGT &[37 115 163 69] // ACGAGT appears in the document at
positions 37, 115, 163 and 69
ATAACG &[45 155]
...

Each document has millions of such words of 6 letters over an alphabet
of 4 letters, i.e., only 4096 (4^6) words (keys) are possible. The
number of documents used at the same time in one analysis is between 2
and 1000.
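
Roughly, the sequential version I have in mind is something like this
(just a sketch; the names are arbitrary):

// index maps each 6-letter word to the positions where it occurs in doc.
func index(doc string) map[string][]int {
    idx := make(map[string][]int)
    for i := 0; i+6 <= len(doc); i++ {
        w := doc[i : i+6]
        idx[w] = append(idx[w], i)
    }
    return idx
}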

If I want to load several documents concurrently, there seems to be
various possibilities:

1.- Open one thread per document and populate the map with the
position of each word and the document where it has been found:

This &[{"doc1" 32} {"doc1" 134} {"doc2" 34}]
...

Is there any problem in having various goroutines inserting such
values in the same data structure (the map of strings to structs)? If
so, I suppose that some kind of lock is needed, will this degrade the
gain of performance given by the concurrency? Would this be the best
approach?

2.- Another option would be to create one map per document, so every
goroutine will write its own map:

"doc1" =>
This &[32 134]
is &.....
"doc2" =>
This &[34]
is &....
.....

The drawback is that to look up a word (and I have to look up hundreds
of thousands of them), I have to search N maps instead of just one
(N == number of different documents).

3.- A third approach would be to mix the previous 2 by building 1 map
per document and merging them before starting the lookups.

4.- Other?

Which strategy do you anticipate that will have the biggest gain in
performance?
Also, any good reference text in concurrency/parallelism would be
welcome,

Thanx in advance,

M;

John Asmuth

Jan 18, 2010, 11:51:18 AM
to golang-nuts
Hi Miguel,

One issue that jumps out to me right away is that if all the documents
are on the same drive, if you aren't careful then you could have
massive hard disk thrashing and slow things down considerably. This
can be taken care of by reading in each document's raw data in its
entirety one at a time, in the beginning, and then process them
concurrently while in memory.

Beyond that, the questions seem to be "Is map threadsafe?" (I don't
know) and "Is there an efficient way to merge two maps?" (Still don't
know!). I'm sure some other readers of this list will be able to
answer these questions even though I cannot.

- John

On Jan 18, 11:38 am, Miguel Pignatelli <miguel.pignate...@uv.es>
wrote:

distributed

Jan 18, 2010, 12:32:27 PM
to golang-nuts
> One issue that jumps out to me right away is that if all the documents
> are on the same drive, if you aren't careful then you could have
> massive hard disk thrashing and slow things down considerably. This
> can be taken care of by reading in each document's raw data in its
> entirety one at a time, in the beginning, and then process them
> concurrently while in memory.

Or, as a middle ground: define a maximum number of CPU-bound processing
goroutines, typically $GOMAXPROCS. Have one goroutine, possibly your
main program, read files from disk and spawn a goroutine for every file
it has just read, as long as your maximum number of processing
goroutines has not been reached.
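
Something along these lines (only a sketch; process and the file names
are placeholders):

package main

import (
    "io/ioutil"
    "log"
    "runtime"
    "sync"
)

// process stands in for the per-document indexing work.
func process(doc []byte) {}

func main() {
    files := []string{"doc1.txt", "doc2.txt"} // placeholders

    // at most GOMAXPROCS documents are processed at once
    sem := make(chan bool, runtime.GOMAXPROCS(0))
    var wg sync.WaitGroup
    for _, name := range files {
        data, err := ioutil.ReadFile(name) // only this goroutine touches the disk
        if err != nil {
            log.Fatal(err)
        }
        sem <- true // wait for a free processing slot
        wg.Add(1)
        go func(doc []byte) {
            defer wg.Done()
            process(doc)
            <-sem // release the slot
        }(data)
    }
    wg.Wait()
}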


> Beyond that, the questions seem to be "Is map threadsafe?" (I don't
> know)

It is not. You will have to write some kind of server managing the
map.
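
A bare-bones sketch of such a server (the insert type and the field
names are invented):

type insert struct {
    word string
    pos  int
}

// mapServer owns the map; it is the only goroutine that touches it.
// When in is closed, the finished map is sent on out.
func mapServer(in chan insert, out chan map[string][]int) {
    m := make(map[string][]int)
    for req := range in {
        m[req.word] = append(m[req.word], req.pos)
    }
    out <- m
}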


> and "Is there an efficient way to merge two maps?" (Still don't
> know!). I'm sure some other readers of this list will be able to
> answer these questions even though I cannot.

That I don't know either.


cheers,
Michael

Miguel Pignatelli

Jan 18, 2010, 2:45:59 PM
to John Asmuth, golang-nuts
Hi John,

thanks for the input,

On 18/01/2010, at 17:51, John Asmuth wrote:

> Hi Miguel,
>
> One issue that jumps out to me right away is that if all the documents
> are on the same drive, if you aren't careful then you could have
> massive hard disk thrashing and slow things down considerably. This
> can be taken care of by reading in each document's raw data in its
> entirety one at a time, in the beginning, and then process them
> concurrently while in memory.

Hmmm, I thought that my alternative avoids that overhead. If I
understand correctly, you suggest to accumulate all the disk access at
the beginning of the execution time (by having all the goroutines
reading the entire files into memory first), wouldn't this cause the
massive disk thrashing? I was thinking of alternating the disk access
and the record processing by reading small chunks of text and process
them.
"distributed" points out a good alternative: having a goroutine for
disk reading and spawning other goroutines to process each document.

Also, regarding the locking of the map: only the current entry of the
map should be locked at any time by each goroutine (not the entire
map - other goroutines can "write" to other entries of the map). One
important question would be: what is the cost of locking a map entry
compared to the cost of accessing it?
I should test some of these alternatives to answer that myself, but any
comment is also welcome.

Cheers,

M;

John Asmuth

Jan 18, 2010, 2:53:06 PM
to golang-nuts

On Jan 18, 2:45 pm, Miguel Pignatelli <miguel.pignate...@uv.es> wrote:
> > One issue that jumps out to me right away is that if all the documents
> > are on the same drive, if you aren't careful then you could have
> > massive hard disk thrashing and slow things down considerably. This
> > can be taken care of by reading in each document's raw data in its
> > entirety one at a time, in the beginning, and then process them
> > concurrently while in memory.
>
> Hmmm, I thought that my alternative avoids that overhead. If I  
> understand correctly, you suggest to accumulate all the disk access at  
> the beginning of the execution time (by having all the goroutines  
> reading the entire files into memory first), wouldn't this cause the  
> massive disk thrashing?

Not if you read each document one-at-a-time, in its entirety.

I'm not saying this is the best solution, just a solution that would
avoid hard disk thrashing :)

> I was thinking of alternating the disk access  
> and the record processing by reading small chunks of text and process  
> them.

Done right this could work. But setting this up in such a way as to
avoid disk thrashing seems like a very challenging problem to me.

- John

Russ Cox

Jan 18, 2010, 2:58:33 PM
to Miguel Pignatelli, golan...@googlegroups.com
> Is there any problem in having various goroutines inserting such values in
> the same data structure (the map of strings to structs)? If so, I suppose
> that some kind of lock is needed, will this degrade the gain of performance
> given by the concurrency? Would this be the best approach?

Yes, some kind of lock is needed. There are no atomicity
guarantees about map accesses: if there are multiple writers
or a single writer and multiple readers, a lock is needed.

Even if individual map accesses were atomic, you would
still need a lock, because multiple goroutines would be doing:

array := m[key]
newArray := append(array, value)
m[key] = newArray

The lock is needed to make sure two simultaneous appends
to the array don't step on each other. Even if line 1 and line 3
happen as atomic operations, you need something making the
whole sequence atomic. That's exactly why Go doesn't bother
to make line 1 and line 3 individually atomic.
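
A sketch of the whole sequence under one lock (the names are invented
for the example; assumes "sync" is imported):

var mu sync.Mutex
var m = make(map[string][]int)

func add(key string, value int) {
    mu.Lock()
    m[key] = append(m[key], value)
    mu.Unlock()
}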

> Which strategy do you anticipate that will have the biggest gain in
> performance?
> Also, any good reference text in concurrency/parallelism would be welcome,

I think you'd have to build them and see. Luckily none of them
is a particularly large amount of code.

However, given that the key space is both small and likely to
be fairly dense, you'd probably be better off with a simple array,
something like:

type List struct {
	sync.Mutex
	// actual list stuff
}

var all [4096]List

then to update key k, you can use

all[k].Lock(); update all[k]; all[k].Unlock()

and not block updates involving other keys.
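
Filled out a little (the positions field is just one way to spell the
"actual list stuff"; assumes "sync" is imported):

type List struct {
    sync.Mutex
    positions []int
}

var all [4096]List

func update(k, pos int) {
    all[k].Lock()
    all[k].positions = append(all[k].positions, pos)
    all[k].Unlock()
}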

A common middle ground would be to have a smaller
number of locks L, with each protecting 4096/L keys.
Then each goroutine could wait until it has some
threshold K updates queued up, acquire L, apply all
the updates, and release L.

This last approach translates nicely into a program
with goroutines instead of locks: L goroutines each
in charge of a section of keys and reading from a
suitably buffered channel, and then one goroutine per
file sending updates on the correct one of the L channels
for each update. I used this structure in a program once
where both kinds of goroutine were quite a bit more complex
than in this example, and being able to split the two tasks
and have the many channels act as a cross-connect
helped quite a lot in keeping the complexity down.
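
A sketch of that shape (assumes "sync" is imported; L, the buffer size,
and the routing are arbitrary, and a single loop stands in for the
per-file goroutines):

const nkeys = 4096

type update struct{ key, pos int }

func run(L int, updates chan update) [][]int {
    all := make([][]int, nkeys)
    secs := make([]chan update, L)
    var wg sync.WaitGroup
    for i := range secs {
        secs[i] = make(chan update, 1024)
        wg.Add(1)
        go func(in chan update) { // sole owner of one section of keys
            defer wg.Done()
            for u := range in {
                all[u.key] = append(all[u.key], u.pos)
            }
        }(secs[i])
    }
    for u := range updates { // stands in for the per-file senders
        secs[u.key*L/nkeys] <- u
    }
    for _, c := range secs {
        close(c)
    }
    wg.Wait()
    return all
}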

It's true that once you get up to the GBs of documents,
the I/O should be a bigger concern than CPU
saturation, especially if the data is coming off a single
disk. In the program I just mentioned, I had data coming
off many disks in parallel, with one goroutine per disk.

Russ

Davies Liu

Jan 18, 2010, 9:33:38 PM
to r...@golang.org, Miguel Pignatelli, golan...@googlegroups.com
a MapReduce-like structure:

Reader
  ==(buffered chan []byte)==> N processors (1 map per doc)
  ==(buffered chan map[string]List)==> Merger (1 map)
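
A sketch of the Merger end (posting is an invented stand-in for the
List in the diagram):

type posting struct {
    doc string
    pos int
}

// merger folds the per-document maps coming from the processors into
// a single combined index.
func merger(in chan map[string][]posting, out chan map[string][]posting) {
    combined := make(map[string][]posting)
    for m := range in {
        for word, hits := range m {
            combined[word] = append(combined[word], hits...)
        }
    }
    out <- combined
}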

- Davies