Offline Installation in Linux and Windows


Subhas Dandapani

Sep 1, 2014, 2:48:57 PM
to rapi...@googlegroups.com
Hi All,

This post is about offline installation of RapidFTR on Linux and Windows, given that we've switched to Docker.

Key Objective:
Make installation single-click and painless.
User should just launch one file and everything else should install.
Should work offline.

We have a RapidFTR Docker image as a single tar file of around 270 MB. But how do we create a one-click installer out of this? Do we bundle everything (including Docker) into a fat DEB or MSI? How do we tackle the usual Linux dependency hell (i.e. rapidftr needs docker, which needs Linux kernel 2.6.30 or later; how much of that chain do we package?). We had the same problems with the previous installer as well; the solution then was to first install LXC separately and then install the DEB file. Windows is trickier: do we bundle everything as a single installer, or keep the Boot2Docker installer and RapidFTR separate?

Here are some options:

1. Create Fat DEBs. In case of Windows, this means bundling everything (including boot2docker) into a single installer.

+ Everything in one
- Tons of compatibility issues and system matrices to test against
- High chance of failing on most Linux distributions with external package managers, because of dependency hell
- Big matrix of artifacts to maintain on the server (i.e. one for Ubuntu, one for CentOS, etc.)

2. We can create a simple shell script (similar to how Homebrew or RVM work: `curl | bash`), which looks for the image in the same folder and imports it. Similarly, create a script for Windows. It can also be clicked and run, so technically it will still be a single-click installer.

+ RapidFTR is separate and Docker is separate, no need to maintain matrices
+ Can handle different Linux distributions in the wrapper shell script
+ The user will still launch one single file, and that will do all the work of setting up, so the user is abstracted away from all those dependencies
- When downloading, we have to download multiple files. We should probably zip them all up for easy distribution.
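As a rough illustration, a minimal sketch of what such a wrapper script could look like (all file and image names here are assumptions, not decisions):

```shell
#!/bin/sh
# Hypothetical single-file installer: look for the Docker image tar next to
# this script, make sure Docker is present, then import and run the image.
set -e

here="$(cd "$(dirname "$0")" && pwd)"
image="$here/rapidftr-image.tar"     # assumed bundle name

ensure_docker() {
    # a real script would add distro-specific install steps (apt-get, yum, ...)
    command -v docker >/dev/null 2>&1
}

import_image() {
    [ -f "$image" ] || { echo "image tar not found: $image" >&2; return 1; }
    docker load -i "$image"
}

# entry point, commented out in this sketch:
# ensure_docker && import_image
```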

3. We can convert the Docker image tar into a DEB file whose post-install hook imports the image and runs it. Similarly, in the case of Windows, create an installer which first triggers Boot2Docker and then, post-installation, runs scripts to import the Docker image.

So any suggestions? Or any other alternatives?

- Subhas

Tumwebaze Charles

Sep 2, 2014, 2:41:56 AM
to rapi...@googlegroups.com
Thanks, Subhas, for raising this.

Thinking about this from the point of view of the user, what I would prefer is having an MSI file for Windows and a DEB file for Linux, because that is familiar to me (Option 1).

On another note, is there a reason we want to support multiple distributions of Linux? I think we should only support Ubuntu (a particular version, say >= 12) and Windows. Wouldn't this solve the dependency and matrix problem?

Reading this thread about the dependencies: what would be the impact of having documentation around preparing the systems for deployment, i.e. we support this version of Ubuntu or Windows, and that is what you should install on netbooks?

For those who want to install on different distributions of Linux, we just make documentation available listing the different packages that are required, and they can go ahead and do it themselves.

Charles


Regards,
Charles Tumwebaze
P.O.Box 35294 Kampala Uganda | Email: ctumw...@gmail.com 
Mob: +256 773 356153, +256 712 513893 | Skype Id: ctumwebaze



Subhas Dandapani

Sep 2, 2014, 5:22:10 AM
to rapi...@googlegroups.com
Hi Charles,

Sorry I know this is long, but please give this a detailed read.

On Windows, if you install CouchDB, you install it as one package and that's it. Everything required is bundled.

On Ubuntu 14.04, if you install CouchDB, it will install ~50 other packages (erlang-dev, erlang-http, couchdb-bin, couchdb-lib, etc.). Just install a fresh Ubuntu and try running "apt-get install couchdb" on it, and you'll see that it asks "will install 50 packages, Y/N".

So one package depends on X, X depends on Y, Y depends on Z, and so on. E.g. couchdb depends on erlang-dev, which depends on [erlang-c6, erlang-bin, etc.], which in turn depends on [build-essential, curl], usually culminating in top-level kernel-headers, glibc and libc6 dependencies.
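You can watch this chain unfold yourself with apt's query tools; a harmless dry run (guarded so it is a no-op on non-Debian systems):

```shell
# Inspect the dependency chain without installing anything.
if command -v apt-cache >/dev/null 2>&1; then
    apt-cache depends couchdb || true            # direct dependencies
    apt-get --simulate install couchdb || true   # everything apt would pull in
fi
```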

And that's where dependency hell fundamentally arises in Linux. You can run into it even within a single Ubuntu 14.04.

So if you install Docker, you need at least 10 other new packages, which in turn have more core dependencies. We cannot make all of these into one single package. (Deeper Linux folks: at this point you'll ask me about static linking, but here these are core system services, not shared libraries, and much of it consists of kernel modules as well!) It is extremely difficult because you'd have to manage every single dependency, and sometimes it's flatly impossible because of all the kernel modules involved in Docker.

Case example:
Default Ubuntu does not have any kernel development headers installed, but Docker needs those for compiling AUFS and other bits. So now we have to package kernel headers along with our DEB as well. But which version of kernel-headers do we pack? If you install a fresh Ubuntu 14.04, you'll have, say, kernel 3.13.0; later, when you do an apt-get update/upgrade, you'll have kernel 3.13.1. We can't pack one single kernel-header package. As you keep running apt-get update/upgrade, your distribution keeps changing and becomes a moving target. Hence a matrix would be required even for plain Ubuntu 14.04 itself.

The only thing that works, even inside a single Ubuntu:
We should NOT be managing or bundling dependencies. apt-get is the only thing in Ubuntu that is considered safe for resolving dependencies. So we just say "rapidftr depends on docker", and apt-get will install Docker. If we start bundling Docker, then we'd have to bundle Docker's X dependencies, and Y dependencies, and Z, and so on, which will break the system at some point.

If our dependencies were simply shared libraries (like libqt, libruby, etc.), we could do static linking (i.e. bundle all libraries along with our DEB). But unfortunately RapidFTR depends on Docker, which is all system and kernel components; packaging those with RapidFTR would be a definite recipe for incompatibilities.

This is a fundamental problem with Linux, and that's why we can't subvert apt-get. On Windows it's a different story, because Windows doesn't install 50 things when you install one. Everything is statically linked by default, and there are very few shared libraries on Windows.

- Subhas

Subhas Dandapani

Sep 2, 2014, 5:25:15 AM
to rapi...@googlegroups.com
TL;DR - We can make it one single DEB, but we can't bundle all dependencies. When you click the DEB file, Ubuntu will prompt for and install Docker. But Docker cannot be bundled into the same DEB.

- Subhas

James Cellini

Sep 2, 2014, 6:46:27 AM
to rapi...@googlegroups.com
Hi Subhas,
I bet the best way around this is to use the DEB file, which will ask for Docker to be installed.
That way, Docker is kept out of the developers' concern. How about that?


Subhas Dandapani

Sep 2, 2014, 9:21:09 AM
to rapi...@googlegroups.com
Hi James, yes, the DEB can contain just the RapidFTR image, with a dependency on Docker.

But the procedure to install Docker is sometimes complex - we have to add a PPA or something. That's why I felt that an RVM- or Homebrew-style "curl | bash", or executing a shell script which simply does the same, would be good, and would also help with distro compatibility.

DEB is tied to Debian/Ubuntu specifically, but the RapidFTR image is not. So instead of packaging the same image in multiple file formats in the future, we could simply have the image plus a shell script. The script will do whatever is necessary to install Docker and run the image.

Any suggestions or flaws with this approach?

- Subhas

Vijay Aravamudhan

Sep 2, 2014, 9:58:39 AM
to rapi...@googlegroups.com
You could always use backticks (`) to query the OS before the curl command (passing it as an argument), and the server could return a shell script that would work for that OS and then invoke the package manager corresponding to that OS. Wouldn't that work?
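Something along those lines, sketched below; the URL and query parameter are purely illustrative, and `lsb_release` is assumed to exist on most target distros (with `uname` as a fallback):

```shell
# Detect the distro and pass it to the install server, which can then
# return a script tailored to that OS.
os="$(lsb_release -is 2>/dev/null || uname -s)"

# On a real target machine this line would fetch and run the script:
# curl -sSL "https://get.example.org/install?os=${os}" | sh
echo "detected OS: ${os}"
```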

Tumwebaze Charles

Sep 2, 2014, 10:11:30 AM
to rapi...@googlegroups.com
Hi Subhas,

Wouldn't this approach still require the internet to be able to download the required dependencies? 

Regards,
Charles Tumwebaze


James Cellini

Sep 2, 2014, 10:19:32 AM
to rapi...@googlegroups.com
Hi Subhas,
Well, I agree with your approach of having both the image and the shell script.
Though I'm wondering how you are going to package the two... sorry if I sound funny on this.
I would like to be kept updated/informed. Thanks.

Best Regards,
James

James Cellini

Sep 2, 2014, 10:22:29 AM
to rapi...@googlegroups.com
Hi Charles,
Yes, that approach does require the internet to download the Docker dependencies, mostly when you run the "apt-get install <packagename>" command in the terminal. In other words, what Subhas is trying to do is leave the Docker issue out of the RapidFTR developer's concerns. Hope that clears the air on this.

BR,
James

James Cellini

Sep 2, 2014, 10:23:42 AM
to rapi...@googlegroups.com
The shell script will do the importing, Charles. Hope you get what I'm talking about.
Compare it to Maven. heheheh!!!

Tumwebaze Charles

Sep 2, 2014, 11:13:47 AM
to rapi...@googlegroups.com
James, if that approach does require the internet to download dependencies, then it doesn't make sense for offline installation, which is what the subject line of this thread suggests.

Regards,
Charles Tumwebaze


James Cellini

Sep 2, 2014, 11:26:25 AM
to rapi...@googlegroups.com
Well, Charles, I think you are right.
How are you looking at achieving this?
Because either way you will need dependencies to support the app installation on any Linux platform.
Please give us a way forward on this. Thanks.

Subhas Dandapani

Sep 2, 2014, 12:21:31 PM
to rapi...@googlegroups.com
Hi Charles, yes, you're right, but then this has always been one of the most limiting factors of Linux in general (leaving aside Windows).

If you want to install even, say, CouchDB offline, you're stuck. You have no single DEB. The best you can do is: start a fresh machine, do an apt-get install of CouchDB, and then take a backup of /var/cache/apt/archives. That will contain all the DEBs needed to install CouchDB. You can then copy them over to another machine, install them all using dpkg -i, and hope you don't run into any dependency conflicts (e.g. if your machine was a fresh Ubuntu 14.04 and the other one is an up-to-date 14.04).
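A rough transcript of that procedure, written as functions (the package name and paths come from the description above; treat this as a sketch, not a tested recipe):

```shell
# Step 1 (machine WITH internet): fill the APT cache and save the debs.
capture_debs() {
    apt-get clean                 # start from an empty cache
    apt-get -d install couchdb    # -d: download only, don't install
    mkdir -p ./offline-debs
    cp /var/cache/apt/archives/*.deb ./offline-debs/
}

# Step 2 (OFFLINE machine): install every cached deb in one shot.
install_debs() {
    dpkg -i ./offline-debs/*.deb
}
```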

We can do the same here. If someone wants to install Docker offline, we can provide a collection of DEBs from the APT cache, and the shell script installer can install everything. This would work. This is why most popular packages (like RVM, etc.) provide a shell-script-based installer, which will do the most correct thing based on your environment.

- Subhas

Andrew Clarke

Sep 2, 2014, 2:45:28 PM
to rapi...@googlegroups.com
Subhas, are you suggesting that the default route we take is a shell script approach that would install everything online, but if someone needed an offline solution we would also provide a copy of some APT caches?

Subhas Dandapani

Sep 2, 2014, 3:44:33 PM
to rapi...@googlegroups.com
Yes, Andrew. The script should prefer online apt-get installation if there is a network connection; if not, and if the APT caches are there in the same folder, it will use them.

We'll distribute the cached DEBs (for Docker) from a blank Ubuntu 14.04 install, but the caveat is that it's not guaranteed to work in certain situations (say in 14.04.1 or a differently up-to-date 14.04).

Script + Zip File (RapidFTR Image and Offline dependencies)
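The decision logic in the script could be as simple as the sketch below (the package name, probe host and the `offline-debs` folder are assumptions):

```shell
# Prefer a network apt-get install; fall back to the bundled debs.

have_network() {
    # cheap reachability probe; a real script might hit the APT mirror it uses
    ping -c 1 -W 2 archive.ubuntu.com >/dev/null 2>&1
}

install_docker() {
    if have_network; then
        apt-get install -y docker.io
    elif ls ./offline-debs/*.deb >/dev/null 2>&1; then
        dpkg -i ./offline-debs/*.deb     # cached debs shipped in the zip
    else
        echo "no network and no bundled debs found" >&2
        return 1
    fi
}
```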

How does that sound?

- Subhas

Andrew Clarke

Sep 3, 2014, 1:28:07 AM
to rapi...@googlegroups.com
That sounds like our best, if not only, option. I'm curious about the people who will be installing RapidFTR on the netbooks. Will this sound like a reasonable request? Are they Linux-savvy enough to know the difference between, say, Ubuntu 14.04 and 14.04.1? Not trying to insult anyone, just trying to gauge what we can expect so we can make their lives easier.

James Cellini

Sep 3, 2014, 2:54:09 AM
to rapi...@googlegroups.com
Hi Subhas,
I have been a strong supporter of all your opinions regarding this matter, though I'm worried about the size of the program if you are to include all those dependencies. It will definitely be huge, and hence downloading the package will only be feasible for those with a good network connection. Let me know what you think. Otherwise, thanks for the good work.

BR,
James

Subhas Dandapani

Sep 3, 2014, 5:05:51 AM
to rapi...@googlegroups.com
Hi James, Thanks for the insights, they were very useful!

Regarding the size, yes, you're right. So far the approach has been that people download the installation files to a USB/CD from a place with a good internet connection, and later take them to the disaster-hit areas and install offline.

The RapidFTR image is the biggest thing here: it's around 300 MB (including CouchDB, the JDK, Solr, Ruby/Rails and everything). The rest (Docker etc.) will be small. Maybe the Windows installation for Docker/VirtualBox will be bigger (around 160 MB). But the RapidFTR image is definitely the biggest part of the download.

So hopefully the Windows/Linux installation scripts can also be written in such a way that they re-use the same image file from the folder, rather than having one DEB (with one copy of the image inside) and another MSI (with another copy of the image inside). That's why keeping the image separate from the installer was the suggested option.

If we're downloading it right in the camps over a slow internet connection, then we'll have to look at other options (like torrenting, etc.). But for now, the assumption is that it will be downloaded once and then redistributed on USB/CD/etc.

- Subhas


Subhas Dandapani

Sep 3, 2014, 5:08:14 AM
to rapi...@googlegroups.com
@Vijay: yes, even with support for multiple Linux distributions, the shell script should be quite small.

Cross checking - get.docker.io script is actually really small!

Subhas Dandapani

Sep 3, 2014, 5:14:23 AM
to rapi...@googlegroups.com
@James: I think my last email kind of answers this. Rather than packaging the image and the installation script together, one option is to keep them separate, so that the Windows installer also re-uses the same image.

Otherwise we'll end up with one Linux installer (+ RapidFTR image) and another Windows installer (+ RapidFTR image), which would be a waste of bandwidth to download.

So ideally:
One download for the RapidFTR image
One download for the Linux installer => self-extracting zip (script to install Docker + script to import and run the image + maybe bundled default Docker DEBs with a strict warning)
One download for the Windows installer MSI (Boot2Docker for Windows + our own scripts to import and run the image)

- Subhas

James Cellini

Sep 3, 2014, 5:33:14 AM
to rapi...@googlegroups.com
+1 Subhas,
That is the way to go.
Any questions from the rest?

BR,
James

John D. Hume

Sep 3, 2014, 10:07:33 AM
to rapi...@googlegroups.com
On Tue, Sep 2, 2014 at 2:44 PM, Subhas Dandapani <r...@thoughtworks.com> wrote:
> We'll distribute the cached DEBs (for Docker) from a blank Ubuntu 14.04 install, but the caveat is that it's not guaranteed to work in certain situations (say in 14.04.1 or a differently up-to-date 14.04).

Is there any practical way to distribute a bootable image?

James Cellini

Sep 3, 2014, 10:17:51 AM
to rapi...@googlegroups.com
Hi John,
The practical way is to put it on an external hard disk or flash drive, the way Subhas suggested.
That is, after downloading the image, of course.

Cheers,
James

