The perspective of a Linux distribution


Gustavo Niemeyer

Mar 29, 2014, 12:38:23 AM
to golang-dev
This bug comment does a great job summarizing how static linkage and a
few other common Go practices affect the workflow of a Linux
distribution:

https://bugs.launchpad.net/ubuntu/+source/juju-core/+bug/1267393/comments/27

For some context, Jamie is an Ubuntu Security Engineer, and a very
good one at that. We've been negotiating with him and other Ubuntu
leads for a long time to get packaged Go software in Ubuntu, and this
message is the best summary to date of the perspectives of both sides.

It also gives some context for my interest in shared libraries,
discussed recently here in the list.

A good weekend to all.


gustavo @ http://niemeyer.net

Keith Rarick

Mar 29, 2014, 1:10:43 AM
to Gustavo Niemeyer, golang-dev
I think part of the problem is that Ubuntu thinks its job is to apply
security fixes in a package to commands that use that package,
while Go developers often think their job is to decide exactly
which version of each package to use in their commands.

These two "jobs" are fundamentally in conflict.

I tend to think Go developers are perfectly reasonable in wanting
such a level of control (I know I want it), but I can understand how
this would be unsettling for Ubuntu.

Perhaps instead of focusing on the existing (quite complicated)
methods for applying security fixes in the OS, we (Ubuntu and
Go) could think about how to make it easier – more automatic,
even – to apply security fixes upstream.
> --
>
> ---
> You received this message because you are subscribed to the Google Groups "golang-dev" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to golang-dev+...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

Gustavo Niemeyer

Mar 29, 2014, 1:30:01 AM
to Keith Rarick, golang-dev
On Sat, Mar 29, 2014 at 2:10 AM, Keith Rarick <k...@xph.us> wrote:
> I think part of the problem is that Ubuntu thinks its job is to apply
> security fixes in a package to commands that use that package,
> while Go developers often think their job is to decide exactly
> which version of each package to use in their commands.
>
> These two "jobs" are fundamentally in conflict.

The two jobs are actually the same one: individuals and companies are
held responsible for the software they distribute, and want control
over it so they can do a good job. That's totally reasonable, on both
ends. What's slightly special about a Linux distribution is that
there's a relatively large volume of software a single company is held
accountable for, so the mechanisms must scale.

> Perhaps instead of focusing on the existing (quite complicated)
> methods for applying security fixes in the OS, we (Ubuntu and
> Go) could think about how to make it easier - more automatic,
> even - to apply security fixes upstream.

That would be welcome, but would not solve the problem. The topic
there is how to get the fixes applied in systems running in the wild.
Even the developers that want control over their own software
generally appreciate and depend upon the fact that most of the
environment their software is running on is receiving security fixes
automatically. Those are the guys responsible for getting such fixes
there.


gustavo @ http://niemeyer.net

minux

Mar 29, 2014, 1:39:38 AM
to golang-dev
There are several problems with supporting traditional shared libraries in Go:
1. ABI compatibility (the same as in C: e.g. constants and type definitions
must remain the same); for Go, this also includes the runtime. This is a fairly
big problem in itself, but let's assume the distribution could handle all the implications.
2. Cross-package inlining: say pkgA is compiled to pkgA.so; should Go inline
trivial exported functions from pkgA when building pkgB?
I think we all agree that cross-package inlining is a very nice feature that
helps reduce function call overhead.

Problem 2 also applies to C++ (and, to a lesser extent, C), but the difference between
Go and C++/C is that in the latter two languages the developer has precise control
over which functions can be inlined; in the Go world, the compiler controls
this aspect.

Two options:
1. Disable cross-package inlining when building shared libraries.
2. Keep cross-package inlining enabled, in which case you can no longer freely
update pkgA.so without also updating pkgB.so.

PS: with gccgo, you can actually build a shared library for each package, but cmd/go
doesn't support that, so you need to do the compilation manually. And gccgo currently
lacks cross-package function inlining, so using gccgo to work around the
problem is viable but not ideal, as you don't want to depend on gccgo not supporting
cross-package inlining forever.

Niklas Schnelle

Mar 29, 2014, 8:26:09 AM
to golan...@googlegroups.com
Somehow I fail to see the advantage of dynamic linking over the mentioned foo-golang-dev packages and recompiling all reverse dependencies.
Linux distributions generally only ship open source software, and it's not too hard to use -dev packages instead of library embedding.
With Go's compile speed and modern build servers, recompiling even hundreds of packages wouldn't be much of an issue.

Stable ABIs really suck for anything with non-C semantics, and they are an unnecessary burden on compiler developers that will stifle innovation with little
to no gain, when all we need is source compatibility: something open source systems inherently provide, and an advantage we should exploit.

ron minnich

Mar 29, 2014, 11:42:19 AM
to Niklas Schnelle, golang-dev
If the compiler speed is such that it's not painful to just rebuild,
the code is small (as Go programs tend to be), and it makes the code
start faster when you run the program (watching all these .so's
load nowadays is just painful), I don't see the issue with making
upgrades source-code-based instead of shared-library-based.

I think sometimes people get stuck in a rut about "how things are
done" and this seems such a case.

ron

Gustavo Niemeyer

Mar 29, 2014, 12:32:31 PM
to ron minnich, golang-dev, Niklas Schnelle
On Mar 29, 2014 12:42 PM, "ron minnich" <rmin...@gmail.com> wrote:
> I think sometimes people get stuck in a rut about "how things are
> done" and this seems such a case.

They're happy to begin with the recompilation approach, as Jamie made
clear. They're concerned as well, because Go has the potential to have
a large presence in the main archive, and every security issue will be
a much more intense exercise than it ought to be in their experience,
with security issues happening every day [1]. I've been around Linux
distributions for quite a while, and I don't feel able to properly
judge what they're claiming, but it's nice that they're willing to
experiment so we can all learn from it.

[1] http://www.ubuntu.com/usn/


gustavo @ http://niemeyer.net

Ian Lance Taylor

Mar 29, 2014, 1:10:20 PM
to Gustavo Niemeyer, golang-dev
On Fri, Mar 28, 2014 at 9:38 PM, Gustavo Niemeyer <gus...@niemeyer.net> wrote:
>
> This bug comment does a great job summarizing how static linkage and a
> few other common Go practices affect the workflow of a Linux
> distribution:
>
> https://bugs.launchpad.net/ubuntu/+source/juju-core/+bug/1267393/comments/27

It's a good summary. It's basically the security argument for shared
libraries. (People also used to support shared libraries on the basis
of reducing disk and physical memory usage, but those concerns are
much less interesting today.)

It also highlights the difference between an organization like Ubuntu
and an organization like Google. On the one hand, pushing a new
shared library can fix a security issue for all applications at once.
On the other hand, pushing a new shared library can break an
application that was not tested with that library--or if you do it
very badly it can break all applications at once. The question is:
what do you build, and what do you test?

Ubuntu builds system libraries and it tests system libraries. Then
it pushes them out, and hopes that applications continue to work.
That is, of course, almost always true. And while occasionally some
obscure application may break, that application was almost certainly
doing something wrong, and it's not Ubuntu's application anyhow.

Google builds complete programs and it tests complete programs. Each
program is pushed out on its own release schedule. Because the
programs do not use shared libraries, they will continue working
reliably even as other programs and libraries are changed. (Changes
to the kernel are a different story, of course.) If Google programs
used shared libraries, and a shared library push broke some obscure
program, that program would almost certainly be doing something
wrong--but until that problem was fixed, some Google service would not
be working, and in the worst case a great deal of money would be lost.
When a security issue is found and fixed, each program is rebuilt and
retested on its own schedule--and, yes, for a serious security issue
that may be a crash schedule. But that still seems like the right
tradeoff.

To put it another way, when Ubuntu accidentally breaks an application
that was not written correctly, it doesn't matter very much to Ubuntu.
But when Google does it, it does matter a great deal to Google. On
the other hand, when Ubuntu finds a security problem, it has no way to
force application developers to rebuild their programs, but Google
does have a way to do that. So Ubuntu supports the use of shared
libraries, and Google does not.

Obviously there is no one right answer. I think we should solve the
shared library issues. I think we should make that a priority for the
1.4 release.

Ian

Keith Rarick

Mar 29, 2014, 7:36:31 PM
to Ian Lance Taylor, Gustavo Niemeyer, golang-dev, Daniel Farina
An idea about this came from my colleague Dan Farina, seeking
the "security argument" benefits of how shared libraries are used,
without the complexity of dynamic linking. His main observation is
that unloading is seldom used, and for most applications it would
be conceivable to do "just in time" static linking – instead of
shipping static binaries (or dynamic binaries), ship static object
files and do plain old static linking on the end user's computer
either at the time of installation or at the moment the command
is executed.

For the sake of concreteness, here's what it might look like in a
scenario for Ubuntu:

Command C uses dependency package D as well as the Go
standard library. This might be represented in Ubuntu as a
package "c", which depends on library package "golang-d" (its
dependency) and "golang-go" (the standard library). These Ubuntu
packages contain unlinked static object files. When the user
installs "c", an installation script runs the linker to produce a
statically-linked executable binary. When Ubuntu has a security
fix for package D, they rebuild and republish "golang-d", which,
when downloaded on the user's machine, causes the same
installation script to run, producing a new static executable "c".

This doesn't address the Google-style vs Ubuntu-style difference.
It's purely a possible implementation strategy for the Ubuntu-style
approach, with perhaps a simpler mechanism than full-blown
dynamic linking.

I'm afraid I don't know enough details to predict how well this
would work in practice, but I think the idea is interesting enough
to explore or at least discuss.

hans....@gmail.com

Mar 29, 2014, 11:29:35 PM
to golan...@googlegroups.com, Ian Lance Taylor, Gustavo Niemeyer, Daniel Farina
From a practical point of view, do they understand that the vast majority of Go programs don't use any system libraries? No C stdlib, no ssl, etc. Clearly, they try to fix problems in C, C++, Ruby, and Python programs by patching and updating the various system libraries. But even if Go were using dynamic libs today, those traditional choke points for fixing security problems would be irrelevant for Go.

I suppose if all Go libs were dynamically loaded, then they could roll out a security fix and all Go programs could pick it up, but I'm not seeing how that is meaningfully better than just rebuilding all the Go programs. By rebuilding, they get the benefit of whatever built in testing the project may have.

On the other hand, godeps can give much tighter control; it isn't a perfect answer, but it is functional. Package maintainers could just update the godeps JSON file to whatever version of the library they thought everyone should be using. It is explicit, and it allows different Go programs to use different versions of the same library.

However, my interest in dynamic libs is not for distribution, but for extending applications when it is not practical to rebuild the application. For instance, imagine a Minecraft server where many people are making code contributions, but they can't rebuild the server with their extensions, either because they don't have access to the sources or because they don't want to shut the server down every time they add an extension. Some Go projects have started doing these extensions via pipes and separate executables, which is fine when performance is not a concern. But sometimes you need to access data directly for performance reasons, at which point being in the same process space is the only reasonable solution. Clearly this can be fragile, but many significant programs available today use this model: Photoshop, Final Cut Pro, etc.

Oleku Konko

Mar 30, 2014, 2:08:01 AM
to golan...@googlegroups.com, Gustavo Niemeyer

I'm with Ian in support of making shared and dynamic library issues a priority for the 1.4 release.

Daniel Farina

Mar 29, 2014, 7:45:05 PM
to Keith Rarick, Ian Lance Taylor, Gustavo Niemeyer, golang-dev
On Sat, Mar 29, 2014 at 4:36 PM, Keith Rarick <k...@xph.us> wrote:
>
> An idea about this came from my colleague Dan Farina, seeking
> the "security argument" benefits of how shared libraries are used,
> without the complexity of dynamic linking. His main observation is
> that unloading is seldom used, and for most applications it would
> be conceivable to do "just in time" static linking - instead of
> shipping static binaries (or dynamic binaries), ship static object
> files and do plain old static linking on the end user's computer
> either at the time of installation or at the moment the command
> is executed.
>
> For the sake of concreteness, here's what it might look like in a
> scenario for Ubuntu:
>
> Command C uses dependency package D as well as the Go
> standard library. This might be represented in Ubuntu as a
> package "c", which depends on library package "golang-d" (its
> dependency) and "golang-go" (the standard library). These Ubuntu
> packages contain unlinked static object files. When the user
> installs "c", an installation script runs the linker to produce a
> statically-linked executable binary. When Ubuntu has a security
> fix for package D, they rebuild and republish "golang-d", which,
> when downloaded on the user's machine, causes the same
> installation script to run, producing a new static executable "c".
>
> This doesn't address the Google-style vs Ubuntu-style difference.
> It's purely a possible implementation strategy for the Ubuntu-style
> approach, with perhaps a simpler mechanism than full-blown
> dynamic linking.
>
> I'm afraid I don't know enough details to predict how well this
> would work in practice, but I think the idea is interesting enough
> to explore or at least discuss.


Yeah, the general idea I toyed with was "link-and-load". The problem
might be the speed of linking or having to thrash the fully-linked
output, among others. It's nothing more than deferring the last
stages of compilation from the .a files lying around.

The backdrop is that the number of programs that support, say,
*re*-loading something like a new OpenSSL or libc in C-land is close
to zero, so in practice a program restart is required in those cases
anyway. Maybe that apparently-acceptable work pattern can be
exploited somehow? "Link-and-load" is one thought experiment that
does that.

Anthony Martin

Apr 1, 2014, 8:34:29 AM
to hans....@gmail.com, golan...@googlegroups.com, Ian Lance Taylor, Gustavo Niemeyer, Daniel Farina
hans....@gmail.com once said:
> However, my interest in dynamic libs is not for distribution, but in
> extending applications when it is not practical to rebuild the application.
> For instance, imagine a MineCraft server where many people are making code
> contributions, but they can't rebuild the server with their extensions.
> Either because they don't have access to the sources, or they don't want to
> shut the server down every time they add an extension. Some Go projects
> have started doing these extensions via pipes and separate executables,
> which is fine when performance is not a concern. But sometimes you need to
> access data directly for performance reasons, at which point being in the
> same process space is the only reasonable solution. Clearly this can be
> fragile, but many significant programs available today use this model:
> Photoshop, Final Cut Pro, etc..

This thread is about dynamic linking. What you've described
here is dynamic loading. Almost all the pieces are in place
to make the latter possible in Go now that the compilers
output object code with relocation tables. I can imagine we
will soon see a proposal for a new "dynld" package. :)

Cheers,
Anthony

Elazar Leibovich

Apr 2, 2014, 4:39:35 AM
to golan...@googlegroups.com, Gustavo Niemeyer
I think the big issue here is not ".so" updates vs recompilation, but the lack of stable releases in the community and, hence, the tendency of developers to embed other sources of arbitrary versions in their source tree. For example


Moreover, many times you have to change the pbkdf2 package's source, for example if it has internal subpackages.

Now, from the security engineer's POV, this is asking for trouble: since it is not easy to find this code duplication, if a security bug is found in pbkdf2 it is a nightmare to track down and fix every embedded duplicate across all Go packages.

If anyone remembers the GDI+ bug in Windows, it was hard to fix because people embedded it.
In languages where the community has agreed on a way to version packages, embedding other packages is rare; in Go, where there's no such standard yet, embedding packages is one of the de facto standard solutions.

Nate Finch

Apr 2, 2014, 8:29:51 AM
to golan...@googlegroups.com, Gustavo Niemeyer
I've been through DLL hell on Windows often enough to know that shared libraries aren't a panacea. They break as much as they fix. I really don't want to complicate the Go ecosystem by having a bunch of people start writing libraries that I have to link against rather than including their code in my build. I really like the fact that Go is only statically linked. Disk and memory size are a ridiculous argument, even for phones.

My only concern is what impact dynamic linking will have on the Go ecosystem.  I don't want half the people in the go community to start distributing .so files instead of code.  The fact that everything is distributed as code is an incredibly awesome community builder.   

The linked thread seems to be saying "this is impossible! ...but bend over backwards and screw up your ecosystem to make it 10% less impossible for us", which seems ridiculous.

r.w.jo...@gmail.com

Apr 2, 2014, 8:58:06 AM
to golan...@googlegroups.com, Gustavo Niemeyer
I don't think that anyone is arguing that it should no longer be possible to build statically linked Go binaries.  You should read the original article.  Ian's summary was also quite good.  The requirements of a Linux distribution are different from yours, so you shouldn't be surprised if they are looking to make different trade-offs.

The question is not replacing static build with shared libraries, the question is can we reasonably provide both so that users can make the decision appropriate to their context?

Elazar Leibovich

Apr 2, 2014, 9:18:31 AM
to Nate Finch, golang-dev, Gustavo Niemeyer
I'm not sure if you were replying to me, so in case you did I want to clarify.

I think that the problem is not mainly the lack of .so's, which is solvable, but the custom of embedding third-party libraries in your code, due to the lack of a standard versioning scheme.



Gustavo Niemeyer

Apr 2, 2014, 9:40:18 AM
to Elazar Leibovich, golang-dev
Hey Elazar,

On Wed, Apr 2, 2014 at 5:39 AM, Elazar Leibovich <ela...@gmail.com> wrote:
> I think the big issue here is not ".so" updates vs recompilation, but the
> lack of stable releases in the community, hence, the tendency of developer
> to embed other sources of arbitrary versions in their source tree. For
> example

This problem is worth solving so we can cross-depend on our code bases
a bit more, but is unrelated to the security upgrades issue. The
problem Jamie brings up there assumes they can either find a patch in
the wild to get the problem fixed, or patch the problem themselves,
and then they must distribute the fixed build to users.


gustavo @ http://niemeyer.net

Nate Finch

Apr 2, 2014, 10:01:24 AM
to golan...@googlegroups.com, Elazar Leibovich
On Wednesday, April 2, 2014 9:40:18 AM UTC-4, Gustavo Niemeyer wrote:
This problem is worth solving so we can cross-depend on our code bases
a bit more, but is unrelated to the security upgrades issue. The
problem Jamie brings up there assumes they can either find a patch in
the wild to get the problem fixed, or patch the problem themselves,
and then they must distribute the fixed build to users.

This seems like a crazy assumption born of the days when we all depended on a handful of security libraries in the OS for most of our crypto needs, and everyone linked to them using C or C++. Those days are over.  Many languages use in-language replacements for OS libraries to provide cross compatibility.  

The problem, I think, is that the engineers see "language compiled to native code" and automatically put it in the mental bucket of C and C++ applications that behave in the way they're used to, rather than in a bucket with Python, Perl, Ruby, etc., which operate in a different way (generally not using OS libraries). How do they operate with those other non-compiled languages? I think that might provide better insight into how to handle Go code. What do they do about Python applications that embed "outside" code in their own package? (Leaving alone that the concept of "outside" is actually quite inaccurate in an open source world.)


Gustavo Niemeyer

Apr 2, 2014, 10:19:21 AM
to Keith Rarick, Ian Lance Taylor, golang-dev, Daniel Farina
On Sat, Mar 29, 2014 at 8:36 PM, Keith Rarick <k...@xph.us> wrote:
> An idea about this came from my colleague Dan Farina, seeking
> the "security argument" benefits of how shared libraries are used,
> without the complexity of dynamic linking. His main observation is
> that unloading is seldom used, and for most applications it would
> be conceivable to do "just in time" static linking - instead of

Indeed, unloading and dynamically loading pieces are different issues,
and not related to the security upgrade problem. The described issue
is simply being able to replace a common dependency of all affected
applications at once, rather than rebuilding and redistributing all of
the dependencies.

With that said, using pre-built archives would bring in a few
additional issues. For example, the linking phase for anything using
cgo will require the development packages for non-Go libraries to be
available at runtime. Also, given how packages transitively
build in their dependencies, they'll still need to look past the fixed
library and into its dependents. So it's worth keeping up the sleeve
for sure, but putting it forward as the first choice feels like something
that would justify their fears.


gustavo @ http://niemeyer.net

Gustavo Niemeyer

Apr 2, 2014, 10:21:19 AM
to Nate Finch, golang-dev, Elazar Leibovich
On Wed, Apr 2, 2014 at 11:01 AM, Nate Finch <nate....@gmail.com> wrote:
> The problem, I think, is that the engineers see "language compiled to native
> code" and automatically put it in the mental bucket of C and C++
> applications that behave in the way they're used to, rather than in a bucket
> like Python, Perl, Ruby, etc that operate in a different way (generally not
> using OS libraries). How do they operate with those other non-compiled
> languages?

$ ldd /usr/lib/python2.7/lib-dynload/_ssl.x86_64-linux-gnu.so | grep ssl
        libssl.so.1.0.0 => /lib/x86_64-linux-gnu/libssl.so.1.0.0 (0x00007faf4bcb9000)
$ ldd /usr/lib/perl5/auto/Net/SSLeay/SSLeay.so | grep ssl
        libssl.so.1.0.0 => /lib/x86_64-linux-gnu/libssl.so.1.0.0 (0x00007f487c189000)
$ ldd /usr/lib/ruby/1.9.1/x86_64-linux/openssl.so | grep ssl
        libssl.so.1.0.0 => /lib/x86_64-linux-gnu/libssl.so.1.0.0 (0x00007f69f1929000)


gustavo @ http://niemeyer.net

Elazar Leibovich

Apr 2, 2014, 10:29:51 AM
to Gustavo Niemeyer, golang-dev
How would Jamie find all the packages that were embedded in the source?

For example, suppose pbkdf2 had a bug that made it generate only a small set of keys. How would Jamie patch every copy of pbkdf2 embedded in many, many projects? The ID of a package in Go's ecosystem is its URL, so what would Jamie search for to make sure Ubuntu doesn't deliver buggy software?

Nate Finch

Apr 2, 2014, 10:39:51 AM
to golan...@googlegroups.com, Nate Finch, Elazar Leibovich
Ahh, huh. Thanks for that. I had thought they weren't using the native SSL libraries, but obviously, they are.  My apologies for jumping to conclusions.

Lars Seipel

Apr 2, 2014, 4:02:57 PM
to Nate Finch, golan...@googlegroups.com, Elazar Leibovich
On Wed, Apr 02, 2014 at 07:01:24AM -0700, Nate Finch wrote:
> (generally not using OS libraries). How do they operate with those other
> non-compiled languages? I think that might provide better insight into how
> to handle Go code. What do they do about python applications that embed
> "outside" code in their own package?

Embedding copies of other libraries is generally not allowed (or only
after careful consideration) in distro repositories. A packaged Python
app has to use the packaged Python system libraries. If the packager
isn't able to sanely unbundle it (e.g. because the embedded copy was
heavily modified), the app can't be shipped in the repos at all.

Yes, this leads to great friction with the Ruby or Java communities
where it's customary to depend on exact versions of a particular library
and little to no thought is given to stable library interfaces. It's
already terribly annoying when you want to integrate two packages
depending on different library versions into a single app, but it's
totally unworkable at distro scale.

Lars

Michael Hudson-Doyle

Apr 2, 2014, 4:29:36 PM
to Nate Finch, golan...@googlegroups.com, Elazar Leibovich
Nate Finch <nate....@gmail.com> writes:

> On Wednesday, April 2, 2014 9:40:18 AM UTC-4, Gustavo Niemeyer wrote:
>>
>> This problem is worth solving so we can cross-depend on our code bases
>> a bit more, but is unrelated to the security upgrades issue. The
>> problem Jamie brings up there assumes they can either find a patch in
>> the wild to get the problem fixed, or patch the problem themselves,
>> and then they must distribute the fixed build to users.
>>
>
> This seems like a crazy assumption born of the days when we all depended on
> a handful of security libraries in the OS for most of our crypto needs, and
> everyone linked to them using C or C++. Those days are over. Many
> languages use in-language replacements for OS libraries to provide cross
> compatibility.

I don't see how that's relevant really... it doesn't matter if the bug
is in openssl or go.crypto, we'd still like to fix it with one package
update.

(the embedded source problem is something else that makes the Ubuntu
security team's lives harder, but I think it isn't the same problem)

Cheers,
mwh

Nate Finch

Apr 2, 2014, 4:47:55 PM
to golan...@googlegroups.com, Nate Finch, Elazar Leibovich
So, above, Gustavo mentioned that there's some acceptance of simply rebuilding the binaries with updated code from third-party libraries (like go.crypto). That seems pretty workable even at scale: given typical Go compile times, recompiling even 100 applications, unless they are all huge, would only take a small number of minutes, if not less. What was the reason behind being unsatisfied with that approach?


Keith Randall

Apr 2, 2014, 4:59:15 PM
to Nate Finch, golang-dev, Elazar Leibovich
I would imagine it's also the file size of the resulting update.  If there are 100 executables that use go.crypto, the security patch is now 100x bigger than it would be if you just had to update gocrypto.so.




Nate Finch

Apr 2, 2014, 5:36:20 PM
to golan...@googlegroups.com, Nate Finch, Elazar Leibovich
That's a really good point. It would actually be bigger than 100x, since you'd have to redownload every application that got rebuilt.

Could you just distribute the updated code and compile on the client? That would keep it in line with how most Go programs are built right now: keep a private store of Go code that gets compiled once it's down on the machine to generate the binaries. I think someone else already mentioned that idea above, but I'm not sure it has been addressed.

Gustavo Niemeyer

Apr 2, 2014, 5:36:56 PM
to Keith Randall, Nate Finch, golang-dev, Elazar Leibovich
Right, and it's also not a one-off event. It's understandable that a
team working on releasing security fixes on a daily basis has the
desire to rebuild, redistribute, and reinstall a few packages rather
than hundreds of them.
--

gustavo @ http://niemeyer.net

Pierre Durand

Apr 3, 2014, 5:56:01 AM
to golan...@googlegroups.com, Keith Randall, Nate Finch, Elazar Leibovich
Solution:
- Distribute Go as a package
- Distribute apps/libraries as package with sources/.go files
- Compile apps/libraries on install (post-install script)

Niklas Schnelle

Apr 3, 2014, 8:14:48 AM
to golan...@googlegroups.com, Keith Randall, Nate Finch, Elazar Leibovich
I'd say the file size argument is also kind of moot. Yes, mobile updates need small download sizes, but that's
why they use diff-based/rsync-like upgrade mechanisms. Given that Go binaries aren't compressed, I'd guess something like rsync shouldn't have a hard
time turning recompiles with a typical security fix into tiny download sizes.

Nate Finch

Apr 3, 2014, 11:42:54 AM
to golan...@googlegroups.com, Keith Randall, Nate Finch, Elazar Leibovich
I don't know how consistent Go is about how it lays out its executables. It's possible a small change could cause a massive difference in the resultant executable; probably others on the list have a better idea about that. If it is relatively consistent, then that seems like it would mitigate most of the "100 downloads" problem, if each update is just a few kB of file diffs.

ron minnich

Apr 3, 2014, 12:40:43 PM
to Pierre Durand, golang-dev, Keith Randall, Nate Finch, Elazar Leibovich
On Thu, Apr 3, 2014 at 2:56 AM, Pierre Durand <pierre...@gmail.com> wrote:
> Solution:
> - Distribute Go as a package
> - Distribute apps/libraries as package with sources/.go files
> - Compile apps/libraries on install (post-install script)


slightly stranger idea. Instead of option 3, compile the
apps/libraries on usage and cache the binary in ramfs. The only binary
is the go compiler and a minimal shell.

So your path looks like this:
/ramfs/bin:/bin

/bin is set up in such a way that when you invoke, e.g, cp, and you
get /bin/cp, cp is built and the binary dropped into /ramfs/bin and
run.
From then on, when you run cp, you get the one in /ramfs/bin.

The first time you build something that needs lots of packages, those
packages get built too of course and cached into /ramfs/pkg.

This counts on the fact that go compiles are so fast. I hope go
continues to be a fast compiler. It also counts on the fact that ram
is plentiful nowadays. It's inspired by the way tinycore linux used to
work: on boot, all packages and binaries were installed to ram. So
your install was always clean.

To update all your binaries, you fetch all the new sources, and
rm -rf /ramfs/*

And you get your new programs. If you just think you have too many
commands lying around you're not using, then
rm /ramfs/bin/*

Proof of concept: https://github.com/rminnich/u-root

This is probably impractical but it's kind of fun.

ron

Liam

unread,
Dec 3, 2014, 9:50:55 PM12/3/14
to golan...@googlegroups.com
Canonical is floating this proposal for 1.5:

https://docs.google.com/document/d/1PxhXNhsdqwBjteW7poqv4Vf3PhtTmYuQKkOk_JNWeh0/edit?usp=sharing

Why is this preferable to install-time static linking?
Couldn't run-time dynamic linking slow launch times considerably for some apps?

The go platform would benefit immensely from a dlopen() to load go libs, and it seems to me that should be the priority in the shared-library realm.



On Friday, March 28, 2014 9:38:23 PM UTC-7, Gustavo Niemeyer wrote:
This bug comment does a great job summarizing how static linkage and a
few other common Go practices affect the workflow of a Linux
distribution:

https://bugs.launchpad.net/ubuntu/+source/juju-core/+bug/1267393/comments/27

For some context, Jamie is an Ubuntu Security Engineer, and a very
good one at that. We've been negotiating with him and other Ubuntu
leads for a long time to get packaged Go software in Ubuntu, and this
message is the best summary to date of the perspectives of both sides.

It also gives some context for my interest in shared libraries,
discussed recently here in the list.

A good weekend to all.


gustavo @ http://niemeyer.net

Michael Hudson-Doyle

unread,
Dec 4, 2014, 1:57:06 AM12/4/14
to Liam, golan...@googlegroups.com
Indeed we are.

> Why is this preferable to install-time static linking?

A few reasons might be:

1) What I proposed is more familiar to the Ubuntu security team and so
requires fewer changes to their processes
2) We also hope to reduce the disk requirements of go binaries (we want
to install several conceptually "small" go programs on the phone and
the comparatively large size of Go binaries does stand out)
3) Install-time static linking requires the toolchain be present on the
system (see the phone again, although phone updates are more
image-based by default).

There are counter arguments to all of these of course, the bikeshed is
of many colours.

> Couldn't run-time dynamic linking slow launch times considerably for some
> apps?

It seems a bit unlikely to me. If you end up writing the next
openoffice in Go and you do find it to be a problem, static compilation
will always be close at hand in the Go world I expect.

> The go platform would benefit immensely from a dlopen() to load go libs,
> and it seems to me that should be the priority in the shared-library realm.

I think (haven't explicitly checked with all the decision makers) the
Canonical position here is that we'd /like/ to be able to do that, but
it's not as high a priority for *us*. Some of the work is the same
though and I'd love help...

Cheers,
mwh


Liam Breck

unread,
Dec 4, 2014, 2:45:59 AM12/4/14
to Michael Hudson-Doyle, golang-dev
On Wed, Dec 3, 2014 at 10:56 PM, Michael Hudson-Doyle <michael...@canonical.com> wrote:

1) What I proposed is more familiar to the Ubuntu security team and so
   requires fewer changes to their processes
2) We also hope to reduce the disk requirements of go binaries (we want
   to install several conceptually "small" go programs on the phone and
   the comparatively large size of Go binaries does stand out)
 
How many go apps are in the distro, and how many critical "libgo" bugs have forced them to be rebuilt?
 
3) Install-time static linking requires the toolchain be present on the
   system (see the phone again, although phone updates are more
   image-based by default).

Does that rule out install-time linking, or just shift effort to the distro side?

I'm sympathetic to the problems of distros (we're bundling Arch and adding a custom repo on a forthcoming device) but this seems like a lot of work without much benefit to Go programmers.

> The go platform would benefit immensely from a dlopen() to load go libs,

I think (haven't explicitly checked with all the decision makers) the
Canonical position here is that we'd /like/ to be able to do that, but
it's not as high a priority for *us*.  Some of the work is the same
though and I'd love help...

Could you consider adding that to the proposal? I imagine you'd attract more hands that way...

Taru Karttunen

unread,
Dec 4, 2014, 3:44:46 AM12/4/14
to Liam, golan...@googlegroups.com
On 03.12 18:50, Liam wrote:
> Canonical is floating this proposal for 1.5:
>
> https://docs.google.com/document/d/1PxhXNhsdqwBjteW7poqv4Vf3PhtTmYuQKkOk_JNWeh0/edit?usp=sharing
>
> Why is this preferable to install-time static linking?
> Couldn't run-time dynamic linking slow launch times considerably for some
> apps?
>
> The go platform would benefit immensely from a dlopen() to load go libs,
> and it seems to me that should be the priority in the shared-library realm.

Simple shared library support vs all the mess of abused plugins?

Under the Canonical proposal how is a single package being compiled
into two separate shared libraries being handled?

e.g. if golang.org/x/net/ssh is included in libfoo and libbar how
do things work out?

- Taru Karttunen

Michael Hudson-Doyle

unread,
Dec 4, 2014, 4:12:19 AM12/4/14
to Taru Karttunen, Liam, golan...@googlegroups.com
In general, of course, this is something you should try to avoid
happening, which in the distro case will require a little but not
enormous discipline.

That said, Ian's document
(https://docs.google.com/document/d/1nr-TQHw_er6GOQRsF6T43GGhFDelrAP0NqSS_00RgZQ/edit)
has a plan for this (see "Multiples copies of a Go package"), which is
what I intend to implement. Basically it boils down to making sure all
the copies of the package are built from the same source and complaining
exceedingly noisily if they are not.

Cheers,
mwh

> - Taru Karttunen

Gustavo Niemeyer

unread,
Dec 4, 2014, 1:27:11 PM12/4/14
to Liam Breck, Michael Hudson-Doyle, golang-dev
Michael is listing several differences, but the most critical one is really number 1, which you jumped over. There is plenty of detail on it right above, in the thread we're commenting on. Tersely: there are significant infrastructure, policies, and minds (!) in place around those processes, so assuming the proposal is reasonable and the net outcome positive, at some point it's easier to simply allow the tool to do what people want.

Liam Breck

unread,
Dec 4, 2014, 5:13:35 PM12/4/14
to Gustavo Niemeyer, Michael Hudson-Doyle, golang-dev
On Thu, Dec 4, 2014 at 10:27 AM, Gustavo Niemeyer <gus...@niemeyer.net> wrote:
at some point it's easier to simply allow the tool to do what people want.

That could be an argument for prioritizing dlopen functionality; people have been asking for it for four years :-)
 

Florian Weimer

unread,
Dec 5, 2014, 5:31:04 AM12/5/14
to golan...@googlegroups.com
On 12/04/2014 08:45 AM, Liam Breck wrote:
>
> On Wed, Dec 3, 2014 at 10:56 PM, Michael Hudson-Doyle
> <michael...@canonical.com <mailto:michael...@canonical.com>> wrote:
>
>
> 1) What I proposed is more familiar to the Ubuntu security team and so
> requires fewer changes to their processes
> 2) We also hope to reduce the disk requirements of go binaries (we want
> to install several conceptually "small" go programs on the phone and
> the comparatively large size of Go binaries does stand out)
>
> How many go apps are in the distro, and how many critical "libgo" bugs
> have forced them to be rebuilt?

The Go backwards compatibility promise still allows ABI-breaking
changes, so you might still be forced to recompile the whole stack
depending on how things are fixed.

--
Florian Weimer / Red Hat Product Security

Manlio Perillo

unread,
Dec 5, 2014, 6:39:16 AM12/5/14
to golan...@googlegroups.com
On Saturday, March 29, 2014 at 5:38:23 AM UTC+1, Gustavo Niemeyer wrote:
This bug comment does a great job summarizing how static linkage and a
few other common Go practices affect the workflow of a Linux
distribution:

https://bugs.launchpad.net/ubuntu/+source/juju-core/+bug/1267393/comments/27

For some context, Jamie is an Ubuntu Security Engineer, and a very
good one at that. We've been negotiating with him and other Ubuntu
leads for a long time to get packaged Go software in Ubuntu, and this
message is the best summary to date of the perspectives of both sides.


I have very little experience with Go; however, I think that Debian/Ubuntu
should maintain a private Go workspace where all supported packages are stored.
Ubuntu should only support projects that follow semantic versioning.

When there is a security fix in one of the supported packages or the Go runtime,
all packages in the Debian workspace are rebuilt.

Go makes this workflow easy and fast; the only problem is an increased download size for
updates of Go packages.


Regards, Manlio

Michael Hudson-Doyle

unread,
Dec 7, 2014, 3:44:30 PM12/7/14
to Florian Weimer, golan...@googlegroups.com
Well, there is no proposal here to try to support ABI compatibility
between Go versions (I probably could have been more explicit about
this) and a once-per-6-months recompile-everything is acceptable to the
archive maintainers.

Cheers,
mwh

Florian Weimer

unread,
Dec 7, 2014, 4:19:55 PM12/7/14
to Michael Hudson-Doyle, golan...@googlegroups.com
Then why is the Ubuntu security team interested in this?

This is an honest question. Dynamic linking without ABI stability is
worse for updates (security or otherwise) than static linking, unless
you are prepared to rename the package on every ABI change and support
installation of multiple versions of the same library with different
ABIs (similarly to what happens with other ABI transitions, but it could
be more complicated due to longer dependency chains). But *that* would
cause a problem for the Debian security team because it would require
NEW processing on security.debian.org for every security update which is
a bit of a pain.

The Fedora aspects are discussed here:

<https://fedorahosted.org/fpc/ticket/382>
<https://fedoraproject.org/wiki/PackagingDrafts/Go>

This is mainly about finding something which works with the current
static linking support in the non-GCC toolchain.

Michael Hudson-Doyle

unread,
Dec 7, 2014, 5:58:44 PM12/7/14
to Florian Weimer, golan...@googlegroups.com
Florian Weimer <fwe...@redhat.com> writes:

> On 12/07/2014 09:44 PM, Michael Hudson-Doyle wrote:
>> Florian Weimer <fwe...@redhat.com> writes:
>>
>>> On 12/04/2014 08:45 AM, Liam Breck wrote:
>>>>
>>>> On Wed, Dec 3, 2014 at 10:56 PM, Michael Hudson-Doyle
>>>> <michael...@canonical.com <mailto:michael...@canonical.com>> wrote:
>>>>
>>>>
>>>> 1) What I proposed is more familiar to the Ubuntu security team and so
>>>> requires fewer changes to their processes
>>>> 2) We also hope to reduce the disk requirements of go binaries (we want
>>>> to install several conceptually "small" go programs on the phone and
>>>> the comparatively large size of Go binaries does stand out)
>>>>
>>>> How many go apps are in the distro, and how many critical "libgo" bugs
>>>> have forced them to be rebuilt?
>>>
>>> The Go backwards compatibility promise still allows ABI-breaking
>>> changes, so you might still be forced to recompile the whole stack
>>> depending on how things are fixed.
>>
>> Well, there is no proposal here to try to support ABI compatibility
>> between Go versions (I probably could have been more explicit about
>> this)
>
> Then why the Ubuntu security team is interested in this?

Because it should (hopefully, I admit this part is not proven yet) be
possible to provide updates to libraries (e.g. go.crypto) without
breaking ABI. This is different from providing a DSO that works with
both Go 1.5 and Go 1.6's runtime, which seems totally infeasible today.

(I don't really have any intuition about Go 1.X vs Go 1.X.1 here --
certainly it would be easier for distros if these were ABI compatible,
but that would be contrary to the way Go currently expects things to
work).

> This is an honest question. Dynamic linking without ABI stability is
> worse for updates (security or otherwise) than static linking, unless
> you are prepared to rename the package on every ABI change and support
> installation of multiple versions of the same library with different
> ABIs (similarly to what happens with other ABI transitions, but it could
> be more complicated due to longer dependency chains).

Re-naming the package and supporting co-installation of different ABI
versions is part of the plan, for sure. I expect ABI breaks to be more
frequent and it's yet another open question as to how much of a pain
this will be for package maintainers. In my dh-golang patches I've
tried to make it really easy for the packager to update the ABI version.

> But *that* would cause a problem for the Debian security team because
> it would require NEW processing on security.debian.org for every
> security update which is a bit of a pain.

If the update breaks ABI, then yes. Hopefully that won't happen very often.

Debian already has infrastructure to cope with packages that break ABI
on every upload (for Haskell packages), but AIUI adding more of this
would not be welcome if there is an alternative.

> The Fedora aspects are discussed here:
>
> <https://fedorahosted.org/fpc/ticket/382>
> <https://fedoraproject.org/wiki/PackagingDrafts/Go>
>
> This is mainly about finding something which works with the current
> static linking support in the non-GCC toolchain.

I'd seen the wiki page before and even chatted very briefly with vbatts
about it. My impression is, similar to the Debian/Ubuntu landscape, the
fedora packagers would like Go to just behave like C and stop being such
a special snowflake :-) One area where it might be nice to cooperate
more closely is to have some way of sharing the names and soversions of
Go packages so that they don't differ gratuitously between the RPM and
deb worlds. But this probably isn't the mailing list for that
conversation!

Cheers,
mwh
Reply all
Reply to author
Forward
0 new messages