Prerequisites for running a Crystal binary.


Tim Uckun

Aug 31, 2018, 10:27:28 PM
to crysta...@googlegroups.com
Hi everyone.

If I compile Crystal code on my Ubuntu laptop, what libs or packages do I need to install on the target machines to run that binary? I don't want to install Crystal on the target machine; I just want to wget the binary from someplace and run it.

How hard is it to run the same binary on a CentOS, Alpine, or Arch system?

Can I compile a binary on Ubuntu to be run on a Mac? What do I need to install on the Mac to make that binary run?

Thanks.

Ryan Gonzalez

Aug 31, 2018, 10:45:15 PM
to crysta...@googlegroups.com
I'm just going to say this right now: it's not really going to work that well. 

For starters, when you compile a binary, it depends on several shared libraries. The absolute minimum is glibc, Boehm libgc, and libevent. For all of these, the target system needs a version that is either the same as or newer than the one on your build machine. This is technically possible, but it can be a bit of a PITA, especially since glibc wasn't designed to be distributed like this (and it'll mess with the dynamic linker).
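(If you're curious which glibc symbol versions a binary actually requires, you can check with something like this; the binary name is a placeholder:

$ objdump -T ./myapp | grep -o 'GLIBC_[0-9.]*' | sort -Vu

The highest version in the output is the minimum glibc the target system needs.)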

Your best bet would be to simply compile your Crystal program on a sufficiently old distro (e.g. CentOS 6) that no person in their right mind would try to use it on anything older, and include everything *except* glibc (either bundle the libraries or statically link them). Good luck setting the build environment up and trying to debug spurious errors that have probably *already been fixed*. (This is largely why I no longer really endorse AppImages; it's too painful to set up and debug the required build environments.)

Of course, even in this case, there are certain other libraries you can't include (usually graphics related). Have fun figuring those out. Also, it won't work on Alpine Linux. 

What if instead you statically link *everything*? Well, you kinda can't. Glibc doesn't like static linking, and there are quite a few functions that, if used, will make static linking appear to work, except it'll still require the associated libraries at runtime.
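(You can actually watch glibc complain about this at link time; statically linking a program that calls, say, getaddrinfo produces a warning along these lines:

warning: Using 'getaddrinfo' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking

because the NSS plugins behind hostname and user lookups get loaded dynamically no matter what.)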

Alpine Linux and musl libc are your only hope for a fully statically linked binary... except when they aren't, because musl isn't fully compatible with a lot of stuff and you're inevitably going to run into issues.
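(That said, when it does work, the happy path on Alpine is short; assuming Crystal is available from the package repositories and your entry point is src/myapp.cr:

$ apk add crystal shards
$ crystal build src/myapp.cr --release --static -o myapp
$ file myapp    # should report "statically linked"

The --static flag is what tells the compiler to link everything in.)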

All this to say:

If you're going to do this, use AppImages. It'll still be hard to set up, but DIYing this is asinine. You'll still lose Alpine support, though.

My #1 recommendation would probably be to use containers (e.g. Docker or Podman) to handle this, or maybe one of the other Linux distribution tools (Flatpak or Snapcraft). Personally I Flatpak a ton of my command-line applications for ease of use.
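A minimal sketch of the Docker route is a two-stage build like the one below; the image tags, runtime package names, and app name are all illustrative and depend on your target distro:

FROM crystallang/crystal:latest AS builder
COPY . /src
WORKDIR /src
RUN crystal build src/myapp.cr --release -o myapp

FROM ubuntu:18.04
RUN apt-get update && apt-get install -y libevent-2.1-6 libgc1c2 libpcre3
COPY --from=builder /src/myapp /usr/local/bin/myapp
CMD ["myapp"]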

--

Ryan (ライアン)
Yoko Shimomura, ryo (supercell/EGOIST), Hiroyuki Sawano >> everyone else
https://refi64.com/

Tim Uckun

Aug 31, 2018, 11:00:08 PM
to crysta...@googlegroups.com
I definitely don't want to use containers. I was hoping I could do something like https://github.com/hone/mruby-cli; this project lets you build CLI apps in mruby and generates binaries that run on multiple platforms. I wanted to use Crystal though because I prefer static typing for the project I have in mind.

Maybe I'll just use go or mruby instead.



Luis Lavena

Sep 1, 2018, 6:56:10 AM
to Crystal
On Saturday, September 1, 2018 at 5:00:08 AM UTC+2, Tim Uckun wrote:
> I definitely don't want to use containers. I was hoping I could do something like https://github.com/hone/mruby-cli; this project lets you build CLI apps in mruby and generates binaries that run on multiple platforms. I wanted to use Crystal though because I prefer static typing for the project I have in mind.
>
> Maybe I'll just use go or mruby instead.


If you add the IO mgem to mruby, it will also depend on glibc, so it will suffer from the same issue. This also applies to any other dependencies that are not part of mruby's core.

There is honestly no easy way to escape this except, as pointed out, to use a musl-based static build environment and have static compilations of most of the libs you're going to depend on. That way, the final package will be completely standalone.

In the case of Crystal it gets more complicated, as you will also need an LLVM build that works in that musl build environment.

Something I tinkered with in the past was using cross-compilation of Crystal on my machine, and then building the required libcrystal in the musl environment so it could link against the other static libs.

Sadly I don't have documentation on this since it was a really late-night experiment.

Cheers,
--
Luis Lavena

Julian Fondren

Sep 2, 2018, 3:53:30 AM
to Crystal
The other answers are assuming that the target systems aren't under your control, I suppose.

If they *are* under your control, then just use one of them to build a binary of your Crystal app. Copy that binary to a target. ldd it. The missing libraries in the ldd output are the libraries you need. You can then use your package system to find the packages that provide those libraries.
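Concretely, something like this (binary name and output are illustrative):

$ ldd ./myapp
        libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007f...)
        libevent-2.1.so.6 => not found
        libgc.so.1 => not found
        ...
$ yum provides '*/libevent-2.1.so.6'    # RPM systems
$ apt-file search libevent-2.1.so.6     # Debian/Ubuntu, needs the apt-file package

Anything marked "not found" is a package you still need to install.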

People do this all the time, and they do it without containers or having to go with static binaries. "That path has thorns" does not mean "that path is impassable". The next easiest step up would be to have a nice build environment like Bamboo that creates RPMs or the like with the appropriate dependencies for each of your targets, which you can put in your own repo that your servers can install from.

The next step towards heresy would be to package some of your libraries up with your binary and use a script to ensure that the binary runs with them linked (a sketch follows below). This is still not perfect, but if your *only* problem is "I don't want to have to yum install libfunnypics" then this can close a lot of that gap. Again, if you're not deploying to your own farm of fairly uniform servers, you'd be better off with the orthodox answers like Flatpak.
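A sketch of such a wrapper script, assuming the bundled libraries ship in a lib/ directory next to the real binary (all names made up):

#!/bin/sh
# find the directory this script was installed into
here=$(dirname "$(readlink -f "$0")")
# prefer the bundled copies of the shared libraries over the system ones
export LD_LIBRARY_PATH="$here/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
exec "$here/myapp.bin" "$@"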

Tim Uckun

Sep 2, 2018, 5:11:44 PM
to crysta...@googlegroups.com
They are under my control, but I am surprised that the list of dependencies is not static and well known. Do the different parts of the stdlib in Crystal use different libs? I am happy to ship a script with a few apt-get installs in it if that's going to work.


Johannes Müller

Sep 7, 2018, 6:15:59 AM
to Crystal
A few statements in this thread sound awfully weird, or at least incomplete or overcomplicated.

# System Compatibility
First of all, a binary compiled on one system should run on any other system using the same target triple.
As long as they're run on the same architecture, you can use the same binary for any Linux distribution like Ubuntu, CentOS, Arch etc. An example of this is the compiler itself: in the release process it is compiled once (on Alpine) and then used for packages in the different distributions.

# Dependencies
Even when a binary is able to run on the target system, it usually still needs some dependencies, and you need to ensure they're available on the target machine. This is nothing exceptional for Crystal but common to all compiled binaries that use dynamic libraries.
What libraries a specific Crystal binary needs depends on which ones it uses. A default binary typically requires libc, libevent, libpcre, libpthread and others, which should be available on most common POSIX operating systems.
Some parts of the standard library or shards may add additional dependencies; for example, to use the YAML features you need libyaml. This can get a bit tricky because such libraries are often not installed by default, so when targeting different systems you need to figure out how to install these dynamic libraries on each one. If you're only targeting one system, this is not a big deal, though (maybe you don't even need additional libraries, depending on what your application uses). When you try to run the binary, it will tell you which libraries are missing.
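For example, if the binary was linked against libyaml and the target machine doesn't have it, running it fails with something like:

$ ./myapp
./myapp: error while loading shared libraries: libyaml-0.so.2: cannot open shared object file: No such file or directory

At that point it's just a matter of installing the distribution's libyaml package.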

To avoid having to install dependencies on the target system, you can use static libraries. This means the libraries are embedded directly into the binary. This has some obvious disadvantages, for example a bigger binary size and the inability to update dependencies individually, but that's not necessarily an issue. Such a statically compiled binary can just be downloaded to a target system and should execute without any external dependencies.
This method has become popular in recent years; Go in particular advocates it and has a really good toolchain for it.

With Crystal, it is a little bit more complicated. Fully static binaries can be built on Alpine Linux with musl libc. There might still be a few glitches, but in general this works pretty well.
An example of this is, again, the Crystal compiler itself.
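If you're not on Alpine yourself, an Alpine-based Crystal Docker image (for example crystallang/crystal:latest-alpine, if available for your version) makes this fairly painless; file names here are illustrative:

$ docker run --rm -v "$PWD":/src -w /src crystallang/crystal:latest-alpine \
    crystal build src/myapp.cr --release --static -o myapp

The resulting binary should then run on any x86_64 Linux without further dependencies.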

# Cross-Compiling
You can compile a binary for a different target system (for example, for Mac from Linux); this is called cross-compiling. The relevant compiler options for this are `--cross-compile` and `--target <target-triple>`.
Please note that this will only produce an object file, not a finished executable. It still needs to be linked, which can typically only be done on the target system itself.
The compiler in cross-compile mode outputs a linker command to run on the target system.
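A minimal sketch, with the target triple and file names as placeholders:

$ crystal build myapp.cr --cross-compile --target x86_64-apple-darwin
cc myapp.o -o myapp ...    # the compiler prints the full linker command to run

You copy myapp.o to the Mac, install the libraries it needs there, and run the printed cc command to produce the executable.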

On Mac, as on any other system, you have to install the libraries your application needs, and those can vary (see the previous section).
However, static linking is not supported on macOS at all.

I hope this helps. I'm by no means a linking expert, but from my experience and what I hear from others, building and distributing a Crystal binary is usually pretty smooth.
There might be a few glitches here and there; it's just a complicated topic.
But there are many people willing to help on the mailing list or in chat if you happen to hit any roadblocks.

Cheers,
Johannes

Chris Hobbs

Sep 7, 2018, 6:31:38 AM
to crysta...@googlegroups.com


On 07/09/18 11:15, Johannes Müller wrote:
> A few statements in this thread sound awfully weird, or at least incomplete or overcomplicated.
>
> # System Compatibility
> First of all, a binary compiled on one system should run on any other system using the same target triple.
> As long as they're run on the same architecture, you can use the same binary for any Linux distribution like Ubuntu, CentOS, Arch etc. An example of this is the compiler itself: in the release process it is compiled once (on Alpine) and then used for packages in the different distributions.
This isn't true unless you statically link, because different machines have differently compiled dynamic libraries which can easily be incompatible. The only way to ensure this is to statically link, which isn't yet easy.

Julian Fondren

Sep 7, 2018, 8:05:12 AM
to Crystal
On Friday, September 7, 2018 at 5:15:59 AM UTC-5, Johannes Müller wrote:
> A few statements in this thread sound awfully weird, or at least incomplete or overcomplicated.

There are two distinct groups here:

1. sysadmins who've built and deployed their own executables to their
own server farms

2. technical people who are very well read and very well versed in
both the currently hyped ways to do things, as well as shared wisdom
about problems with previous ways of doing things.

I don't intend to say that either group is necessarily better than the
other.  The first can be mired in bad tech; the second is *all of us*
if we try to be well-informed about matters we haven't gotten to yet.
But this discussion very clearly shows the conflict between the two
groups.

If you keep abreast of things, you'll learn

1. that BrandNewIdea is really cool, promises to do amazing things,
and BigCompany has proven it out in practice.

2. that BadOldIdeas have lots and lots of problems -- when you're
informed you can easily rattle off a dozen caveats about any given
technology.

People really like talking about #1, as the next big thing. People also
really like sharing horror stories (exciting firefighting, grim
post-mortems) about #2. Both of those are really good reading and
you'll get lots of internet points for sharing them. So "well-read" in
technology does not mean "you share an understanding with the masses
of people who do the thing"; it means "you're hyper-aware of what a
*few* well-positioned people are doing; you have a very clear picture
of the faults of anything those few people aren't doing anymore; and
you haven't heard yet about any problems with what those few people
are doing right now."

The above points can be restated:

1. BrandNewIdea has brand new unknown problems that we'll be
*completely blindsided* by, one day at 3AM on a holiday. And when that
happens you might have twenty otherwise useful people on a conference
call explaining that they don't know about BrandNewIdea yet, it's
Alice and Bob who know about that--has anyone reached them yet?

2. BadOldIdeas have lots of problems that we understand very well and
have already developed practices to mitigate. Oh no, a problem occurs
at 3AM on a holiday? Don't worry, that new hire that we don't let
touch anything, even he knows how to fix this problem.

Suppose you want to set up a lot of servers. Obviously, you need a lot
of storage for them. So how do you want your storage? Do you want
Ceph, a distributed object storage? Or do you want each server to have
a RAID of spinning disks?

I'm sure that everyone here can describe some compelling caveats for
the second solution. Have you ever dealt with a server where any
kind of write() to disk hangs uninterruptibly forever but read()s are
fine? Can you name a configuration option that if wrongly set could
wreck the filesystems across your entire farm all at once? How
about--it's already true that it's good to do more I/O with fewer I/O
syscalls via buffering, but Ceph can make that even *more* true.
Is that going to be a problem for you?

How about partitions vs. LVM? Did you know that you can *wreck* a
thin-provisioned LVM setup by writing too much to it? Even when configured
to fail nicely, you don't get the same behavior as writes to a full partition.

Bare metal vs. virtualized servers? A funny thing about disk cache
that's completely outside of the kernel's awareness--the kernel isn't
very good about managing it.

Every server handling a dozen instances of MINOR_SERVICE, or having them
all rely on a dozen clusters that are each dedicated to a MINOR_SERVICE?
Well if you're bored of single-server failures that bother a few
customers, it's very possible to architect your farm so that you get
failures that bother *most* of your customers :)

Again, I'm not saying that either one of the options above is
definitely universally preferable over the other. It's simply the case
that people have a clearer vision of the potential problems with older
technologies vs. newer technologies.

Back to the topic...

On Friday, September 7, 2018 at 5:31:38 AM UTC-5, RX14 wrote:
> This isn't true unless you statically link, because different machines have differently compiled dynamic libraries which can easily be incompatible. The only way to ensure this is to statically link, which isn't yet easy.

Tim's already said that he's wanting to deploy to his own servers. A
typical ad-hoc deployment scenario is "I have a bunch of x86_64 CentOS
servers on the same major version with very similar software installs
and I want to run this binary I've just built on them." Deployments
like this are very, very easy.

Can there be complications? Sure. You might have a few other archs
in there. If your major Linux distribution version isn't *dead*, then you
might get the odd dynamic library update that breaks your binary.
The more critical the function of your binary, the more you want to
take measures to avoid that.

And if it's not critical? Men make nothing that can last forever without
maintenance.

Johannes Müller

Sep 7, 2018, 11:47:51 AM
to Crystal

On Friday, September 7, 2018 at 12:31:38 PM UTC+2, RX14 wrote:
> On 07/09/18 11:15, Johannes Müller wrote:
>> # System Compatibility
>> First of all, a binary compiled on one system should run on any other system using the same target triple.
>> As long as they're run on the same architecture, you can use the same binary for any Linux distribution like Ubuntu, CentOS, Arch etc.
> This isn't true unless you statically link, because different machines have differently compiled dynamic libraries which can easily be incompatible. The only way to ensure this is to statically link, which isn't yet easy.

This section is only about binary compatibility. And in that regard, my statement is valid.

Obviously, there might be incompatibilities with installed dynamic libraries. But you don't have to use the ones provided by the system/package manager; worst case, you can build compatible libraries yourself. But I agree that in such a case it is probably better to just provide a statically linked binary.

Ryan Gonzalez

Sep 7, 2018, 11:50:48 AM
to crysta...@googlegroups.com
No offense, but I literally have no clue as to what this has to do with the original discussion...


Tim Uckun

Sep 8, 2018, 1:05:49 AM
to crysta...@googlegroups.com
My question isn't that complicated at all, but obviously I didn't explain it very well, so let me give it another try.

I want to write some utility-type CLI program. It's going to mess with the system, so it can't be run from a container. I was looking at mruby because the mruby-cli library will allow me to write the code on my laptop (Mac) and generate binaries for Linux, where I want to deploy it. From their docs it sounds like the generated binary is all I need to ship to the target machine. I don't want to use mruby because I want to write it in a strongly typed language. I could do the same thing with Go, but I don't like Go as a language.

I don't have to build on a Mac; I could build the binary in a Docker container (like the mruby-cli project does) and ship the generated binary to the destination machines. My question was "what packages do I need to install on the target machine in order for that binary to run?"

I realize that if I use YAML or Pg libs or whatever, I need to bootstrap with additional libs, but do I need to install some packages if I only use the standard Crystal libs? If so, does anybody have a canonical list of the libs used by the Crystal base classes?


Julian Fondren

Sep 8, 2018, 1:54:51 AM
to Crystal
On Saturday, September 8, 2018 at 12:05:49 AM UTC-5, Tim Uckun wrote:
> I don't have to build on a Mac; I could build the binary in a Docker container (like the mruby-cli project does) and ship the generated binary to the destination machines. My question was "what packages do I need to install on the target machine in order for that binary to run?"
>
> I realize that if I use YAML or Pg libs or whatever, I need to bootstrap with additional libs, but do I need to install some packages if I only use the standard Crystal libs? If so, does anybody have a canonical list of the libs used by the Crystal base classes?

You definitely don't want to build on a Mac; that's an entirely different platform. That it's x86 is not enough: it has its own kind of binaries and its own ABI, etc. VirtualBox will be easier to set up than a cross-compiler.

The question you're asking is not one that's typically answered for any language; it would just be a maintenance chore. Build something on the target system, ldd it, and that's about what you need.

Tim Uckun

Sep 9, 2018, 5:39:37 PM
to crysta...@googlegroups.com
I know I don't want to build on a Mac (though I really should be able to, because that's what cross-compilation is all about). I was planning on building it in a Docker container and then copying the binary to the server.

Go handles this task pretty well. Again, I don't like the language, but the tooling and the ecosystem are excellent.



Ryan Gonzalez

Sep 9, 2018, 7:06:52 PM
to crysta...@googlegroups.com
Go is basically able to handle it because it *reimplements everything found in libc*, which isn't without its own faults: https://marcan.st/2017/12/debugging-an-evil-go-runtime-bug/

Also, the second you call into C code you lose all that. 



Roger Pack

Oct 10, 2018, 11:19:24 PM
to Crystal


On Friday, September 7, 2018 at 4:31:38 AM UTC-6, RX14 wrote:
> On 07/09/18 11:15, Johannes Müller wrote:
>> First of all, a binary compiled on one system should run on any other system using the same target triple.
>> As long as they're run on the same architecture, you can use the same binary for any Linux distribution like Ubuntu, CentOS, Arch etc.
> This isn't true unless you statically link, because different machines have differently compiled dynamic libraries which can easily be incompatible. The only way to ensure this is to statically link, which isn't yet easy.

If your distro's libevent etc. dependency packages are dynamic libs, it's "usually" not too hard to remove them and install local versions that are static. Usually it's just:

$ ./configure --enable-static --disable-shared && sudo make install

If you want to stick with the dynamic lib route, you could instead make a .deb/.rpm that lists the dependencies for the target system.
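For reference, on Ubuntu 18.04 the dependency line for a plain Crystal binary would look something like this (package names vary by release, so treat them as examples):

Depends: libc6 (>= 2.27), libevent-2.1-6, libgc1c2, libpcre3

and on Debian-based systems, dpkg-shlibdeps can compute that list from the binary for you.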
Good luck!
-roger-