[GSoC 2019] Dockerize OpenWISP2


Ajay Tripathi

Mar 24, 2019, 5:23:17 PM
to OpenWISP
Hi, 

I am trying to interpret the text for this project. However, I am having trouble matching requirements with solutions.
Hence, I need some clarification/confirmation of my understanding.

Firstly, my meta understanding of the project is that we want to deliver ready-made OpenWISP images to users, and the nginx, PostgreSQL and Postfix containers are just the "batteries included" for easy installation with configurability.
However, I am not able to understand the reason for having a dedicated container for the OpenWISP Admin interface (password reset, email confirmation), OpenWISP Controller (connections), OpenWISP Network Topology, OpenWISP Radius and Django-freeradius. Does this mean adding images with these tools on top of the docker-based OpenWISP installation image, or running these tools independently, each isolated in its own container?
If it's the latter, can you please provide some context so that I can put them in the picture correctly?
For example: would a user ever need just the OpenWISP Admin interface (password reset, email confirmation) in isolation, or are we going to use it in the future for development/testing purposes?

> "Create one or more set of images based on Alpine Linux and python 3.7 which have all the python packages needed for the different services."

Does "multiple sets of images" mean each set has a different commonly used configuration of OpenWISP, "plug-and-play" for a lot of basic uses?
** Where each set includes an OpenWISP instance image and all the other appropriate dedicated containers with configurations.

> "Provide the websocket server of OpenWISP in a dedicated container"

To the best of my understanding, websockets are managed by django-channels, so I am unable to understand the contents of this container. What is the expected duty that this container is to perform? (Please let me know if my approach here is incorrect and I need to research something.)


Thanks,
Ajay Tripathi

Federico Capoano

Mar 25, 2019, 9:55:05 AM
to OpenWISP
Hey Ajay,

On Sun, Mar 24, 2019 at 5:23 PM Ajay Tripathi <ajay...@gmail.com> wrote:
Hi, 

I am trying to interpret the text for this project. However, I am having trouble matching requirements with solutions.
Hence, I need some clarification/confirmation of my understanding.

Firstly, my meta understanding of the project is that we want to deliver ready-made OpenWISP images to users, and the nginx, PostgreSQL and Postfix containers are just the "batteries included" for easy installation with configurability.

Yes, but we also want to make it easier to scale horizontally, so that traffic can be dispatched to more instances of the same containers if needed (imagine a kubernetes cluster spread over multiple nodes).
 
However, I am not able to understand the reason for having a dedicated container for the OpenWISP Admin interface (password reset, email confirmation),

The Admin interface is used only by administrators, and not very often, so we likely won't need more instances of the same container to scale.
The load on the other services increases with the size of the network (number of devices or also number of users in the case of the radius module), so we will need to scale up with those. 
 
OpenWISP Controller (connections), OpenWISP Network Topology, OpenWISP Radius, Django-freeradius. Does this mean adding images with these tools on top of the docker-based OpenWISP installation image, or running these tools independently, each isolated in its own container?

Each module has its own URLs and APIs that are not admin related, which are used to provide configurations, update the network topology, communicate with freeradius and so on; each of these groups of features should run in isolation.
 
If it's the latter, can you please provide some context so that I can put them in the picture correctly?
For example: would a user ever need just the OpenWISP Admin interface (password reset, email confirmation) in isolation, or are we going to use it in the future for development/testing purposes?
 
A more realistic scenario is a user who wants to use only admin + radius module, or admin + controller module.
But we should not add restrictions on which containers the users want to use because we should consider this to be a base on which users and developers can build a solution tailored to their needs with custom modules.

> "Create one or more set of images based on Alpine Linux and python 3.7 which have all the python packages needed for the different services."

Does "multiple sets of images" mean each set has a different commonly used configuration of OpenWISP, "plug-and-play" for a lot of basic uses?
** Where each set includes an OpenWISP instance image and all the other appropriate dedicated containers with configurations.

Images usually do not have configurations, only pre-installed system packages and the python packages needed to run the OpenWISP module of that container.
We may also decide to have one image that is good for all the official modules, we have to do some research and analyze pros and cons.
 

> "Provide the websocket server of OpenWISP in a dedicated container"

To the best of my understanding, websockets are managed by django-channels, so I am unable to understand the contents of this container. What is the expected duty that this container is to perform? (Please let me know if my approach here is incorrect and I need to research something.)

django-channels is the framework with which you build the websocket server, but the actual logic that allows you to do anything with the websocket is in OpenWISP.
At the moment only OpenWISP Controller has some websocket logic (inherited from django-loci) but in the future we will have more.

The duty of this container is to serve the websocket server and process data coming from websocket clients.

Imagine the same installation we have today, but instead of having it on a single VM, we have it spread over different containers, each dedicated to a single service: the containers which receive more traffic can be scaled up, either vertically with more resources (RAM, CPU) or horizontally with more containers where possible. Horizontal scaling needs a load balancer in front of the containers to distribute traffic; we can do this with nginx for all the containers which serve HTTP or WebSocket requests. The celery containers don't need a load balancer, because they read from the broker service, which in our case is redis by default.

I hope it is clearer now!
Fed

Ajay Tripathi

Mar 26, 2019, 11:31:29 AM
to OpenWISP
Hi,

On Monday, March 25, 2019 at 7:25:05 PM UTC+5:30, Federico Capoano wrote:
Yes, but we also want to make it easier to scale horizontally, so that traffic can be dispatched to more instances of the same containers if needed (imagine a kubernetes cluster spread over multiple nodes).

Great. I made a docker swarm stack and tested horizontal scaling of my docker-compose prototype. I understand the requirement better now.
** The mentioned docker-compose stack is simply an openwisp container and a redis container.
 
The Admin interface is used only by administrators, and not very often, so we likely won't need more instances of the same container to scale.
The load on the other services increases with the size of the network (number of devices or also number of users in the case of the radius module), so we will need to scale up with those.  

Thanks for clearing that up.
  
Each module has its own URLs and APIs that are not admin related, which are used to provide configurations, update the network topology, communicate with freeradius and so on; each of these groups of features should run in isolation.
A more realistic scenario is a user who wants to use only admin + radius module, or admin + controller module.
But we should not add restrictions on which containers the users want to use because we should consider this to be a base on which users and developers can build a solution tailored to their needs with custom modules.
django-channels is the framework with which you build the websocket server, but the actual logic that allows you to do anything with the websocket is in OpenWISP.
At the moment only OpenWISP Controller has some websocket logic (inherited from django-loci) but in the future we will have more.

The duty of this container is to serve the websocket server and process data coming from websocket clients.

Imagine the same installation we have today, but instead of having it on a single VM, we have it spread over different containers, each dedicated to a single service: the containers which receive more traffic can be scaled up, either vertically with more resources (RAM, CPU) or horizontally with more containers where possible. Horizontal scaling needs a load balancer in front of the containers to distribute traffic; we can do this with nginx for all the containers which serve HTTP or WebSocket requests. The celery containers don't need a load balancer, because they read from the broker service, which in our case is redis by default.

That explained some crucial points to help me understand! :)
 
I hope it is clearer now!

Yes, thank you. Currently, I am reading more on the same. :)
I am making a prototype with some of the features to deepen my understanding, working on the features in whichever order I feel is most important for complete understanding.
Please let me know if there is a specific requirement for the prototype that you'd like to see before the deadline, so that I can focus on that first.

Ajay 

Federico Capoano

Mar 26, 2019, 12:25:42 PM
to OpenWISP
A demo on Kubernetes, using some kind of tool that makes it easy to provision (Marco Giuntini suggested https://github.com/ansible/awx; if you know something equivalent you could suggest it), would be a big win.

Helping out in writing tests (using some sort of mock SSH server) for https://github.com/openwisp/openwisp-controller/pull/31 is also a big plus, because if the students really understand the code of that branch they will have an easier time understanding how to deploy it. I have some automated test samples that I can show if needed.

Federico


--
You received this message because you are subscribed to the Google Groups "OpenWISP" group.
To unsubscribe from this group and stop receiving emails from it, send an email to openwisp+u...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Marco Giuntini

Mar 26, 2019, 1:56:27 PM
to open...@googlegroups.com
Hi,

https://github.com/ansible/awx is not a tool but an example of how the Ansible AWX project uses docker and ansible to deploy and configure the application.

Regards,

--
Marco Giuntini

Skype: hispanico70420
Twitter: @Hispanico81
PGP Key ID: BD774009

A Stanley

Mar 26, 2019, 2:14:24 PM
to open...@googlegroups.com
Terraform.io may also be an option.  I've had some success with it and Docker/Ansible and it has support for Kubernetes. 

Federico Capoano

Mar 26, 2019, 3:44:31 PM
to OpenWISP
Marco Giuntini also has some experience with Terraform, is that right, Marco?

I am not an expert on this front, so I'm open to suggestions. We need a tool which allows us to configure an instance.
I like ansible-openwisp2 because I can use a playbook or a series of playbooks for an installation, for example:

- a playbook for all the openwisp configurations, with additional tasks and roles for postgresql, freeradius, login page etc.
- a playbook for the openvpn server

I like the fact that most of the configurations are in YAML format and I can easily understand the configuration of an installation by looking at the YAML.

The problem is that it is not simple to support horizontal scaling, and that it becomes a pain to maintain over time.
For example, if you have installed an OpenWISP instance on Ubuntu 16 LTS and then have to migrate to Ubuntu 18 LTS, you create a new VM, run the playbooks there, and pray that it works. Most of the time it won't work out of the box, and then you have to spend a day or more fixing the issues that come up. These issues may be in other roles that you don't control, so you have to fork and patch them, then send pull requests to the maintainers, which take a month on average to be merged.
Or if you have a custom module: maybe to deploy it you have to execute additional tasks that may conflict with previous tasks, and you find out only when you deploy.
And if someone with access to the VM has made some manual editing... you risk overwriting something they did.

With docker instead, if you migrate to a new OS, you work on the images (which can also be painful, but if something breaks you find out soon, during testing, not when you deploy), and when the images build successfully you can run a full instance locally and test it before deploying.
The same applies when you add custom modules by extending a docker image (eg: using it as a base).
The good thing is the immutability of the result and the fact that as soon as a system package or a python dependency needs to be upgraded, you generate a new image, you bring up new containers and the old ones die.
No manual configuration can be made, because the services are ephemeral (they will be thrown away) by definition; everything can be, and must be, replaced easily, which means it can also be replicated and scaled up very easily.

So the ideal situation is one in which:

- we can manage most configurations of the services in some textual format
- we can manage the django settings easily for a single installation (different containers may have mostly identical django settings with some minor differences; django has a way to specify default settings, and we could use that low-level feature, for example)
- we can store all these configs under git in private repos

A Stanley

Mar 26, 2019, 4:28:06 PM
to open...@googlegroups.com
That's what made me think to mention Terraform. You can still leverage your ansible playbooks for configuration and use Terraform's Kubernetes integration to handle the infrastructure complexity. Of course, this is all easier said than done. I've done something sort of similar with Terraform and docker-compose on different cloud providers but haven't made the jump to Kubernetes yet. A simple non-scaling example is here:


Just wanted to add a suggestion.  I have not used AWX but I intend to check it out.

As for the pain of integrating multiple applications across multiple OSes, code bases, dependencies, etc.: I feel your pain. I recently started writing integration tests in Travis-CI for all of my containers, not just for code builds but for how the containers interact once they're built. This has saved me some headaches lately.

I'll be honest, I haven't tried customizing containers with ansible. I usually use ansible to customize my Docker hosts. I like baking custom containers that can be modified as needed with environment variables. The best examples I've seen come from cookiecutter-django; I usually start there and then publish the container with the app preloaded for reuse later. Not to drag on, but this subject is very broad and there are lots of options.

Federico Capoano

Mar 27, 2019, 11:39:42 AM
to OpenWISP
The solution is interesting; maybe AWX is not the right tool, but I would like to hear what Marco thinks about this.

The most important phase is going to be the initial one: the prototype will help us to refine the requirements.

Fed

Ajay Tripathi

Mar 27, 2019, 3:23:06 PM
to OpenWISP
Hi,

On Tuesday, March 26, 2019 at 9:55:42 PM UTC+5:30, Federico Capoano wrote:
A demo on Kubernetes and using some kind of tool that makes it easy to provision (Marco Giuntini suggested https://github.com/ansible/awx, if you know something equivalent you could suggest) would be a big win.

I am getting started with Kubernetes using kubeadm (very close to deploying a sample). :)
However, docker swarm has been really convenient and fast for implementing our stack, while kubernetes takes more resources to get started (it didn't even work on my laptop, I had to move to a different machine!).
That got me thinking about the advantages of using kubernetes in our use case, but the best I could find was that kubernetes is a more "production-ready" solution.
Can you please point me to reading about the advantages of using kubernetes in our development setup? Thanks!


Helping out in writing tests (using some sort of mock SSH server) for https://github.com/openwisp/openwisp-controller/pull/31 is also a big plus, because if the students really understand the code of that branch it will have an easier time to understand how to deploy it, I have some automated test samples that I can show if needed.

Sure, I'll start with it ASAP, thank you. The samples will be very helpful. :) 
 
On Wednesday, March 27, 2019 at 1:58:06 AM UTC+5:30, 2stacks wrote:
That's what made me think to mention Terraform.  You can still leverage your ansible playbooks for configuration and use Terraform's Kubernetes integration to handle the infrastructure complexity.  Of course this is all easier said than done.  I've done something sort of similar with Terraform and docker-compose on different cloud providers but haven't made the jump to Kubernetes yet.  A simple non-scaling example is here;

 
I'll check out Terraform ASAP. Many thanks; samples really help save time when learning new things. :)

I have a small question from the project list:

   - Provide the OpenWISP Admin interface and the views managing account information (password reset, email confirmation) in a dedicated container.

What does the "OpenWISP Admin interface" consist of?

On Wednesday, March 27, 2019 at 1:14:31 AM UTC+5:30, Federico Capoano wrote:
- we can manage most configurations of the services in some textual format
- we can manage the django settings easily for a single installation (in which different containers may have mostly identical django settings with some minor differences, django has a way to specify default settings, we could use that low level feature for example)
- we can store all these configs under git in private repos
 
Okay, the good news is that I finally understand the problem from a better perspective now. :)


Ajay

Ajay Tripathi

Mar 28, 2019, 12:19:57 PM
to OpenWISP
Hi,

Update: I have finally worked my way to a working instance of openwisp on kubernetes[1].

More importantly, I have made the Dockerfile, docker-compose file and some other related files available here[2].

To avoid writing multiple settings.py files:
I made a variable in settings.py whose value I read from an environment variable that I set when running the container.
I am exploring other options as well, to find the best one for our use case.
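The pattern described above (a settings value read from the container environment at start-up) can be sketched like this; the variable names and the small casting helper are illustrative, not the actual prototype code:

```python
import os

def env(name, default=None, cast=str, environ=os.environ):
    """Read a Django setting from the container environment, with an
    optional cast (the approach that django-environ generalizes)."""
    raw = environ.get(name)
    if raw is None:
        return default
    if cast is bool:
        return raw.lower() in ("1", "true", "yes")
    if cast is list:
        # comma-separated values, e.g. for ALLOWED_HOSTS
        return [item.strip() for item in raw.split(",") if item.strip()]
    return cast(raw)

# Example usage in settings.py (env var names are assumptions):
fake_env = {"DJANGO_DEBUG": "true",
            "DJANGO_ALLOWED_HOSTS": "demo.openwisp.org,localhost"}
DEBUG = env("DJANGO_DEBUG", default=False, cast=bool, environ=fake_env)
ALLOWED_HOSTS = env("DJANGO_ALLOWED_HOSTS", default=["localhost"],
                    cast=list, environ=fake_env)
```

In a real container the `environ` argument would be left at its default (`os.environ`), so the values come from whatever `docker run -e` or the compose file sets.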

Next, I am going to check out Terraform.


Ajay

---
Ref:

A Stanley

Mar 29, 2019, 1:40:26 PM
to open...@googlegroups.com
Looks good. I'll try it out later today. Have you looked at https://django-environ.readthedocs.io/en/latest/? It's helpful for working with some of the more common settings in Django.


A Stanley

Mar 29, 2019, 1:40:27 PM
to open...@googlegroups.com
I made some updates to include a django-freeradius server running inside docker. The docker-compose.yml will launch the full stack (freeradius, django, postgres) and includes integration tests with Travis-CI.

Ajay Tripathi

Mar 29, 2019, 4:40:31 PM
to OpenWISP
Hi,

Update:
Terraform files working with kubernetes for the same two containers are available in the repository[1] now.
I would really like to confirm my understanding again; please check out the image here[2].
This is how I think the final end user's environment will look.
Please let me know if I've understood something incorrectly. :)

On Friday, March 29, 2019 at 11:10:26 PM UTC+5:30, 2stacks wrote:
Looks good.  I'll try it out later today.  Have you looked at https://django-environ.readthedocs.io/en/latest/.  It's helpful for working with some of the more common settings in Django.

Thanks, I'll check out the link ASAP.
The final version is supposed to have the following settings easily changeable:
- CORS settings
- Sentry logging (including the celery container) using the more recent sentry_sdk (in ansible-openwisp2 we use the old python-raven module)
- DEFAULT_FROM_EMAIL
- SMTP settings
- language code
- timezone
- Leaflet settings
- default cert validity for django-x509
- default CA validity for django-x509

This list comes from the idea page[3], the first idea under the heading "Dockerization of OpenWISP 2".

On Friday, March 29, 2019 at 11:10:27 PM UTC+5:30, 2stacks wrote:
I made some updates to include a django-freeradius server running inside docker.  The docker-compose.yml will launch the full stack (freeradius, django, postgres) and includes integration tests with Travis-ci.


Ajay Tripathi

Mar 30, 2019, 4:47:44 AM
to OpenWISP
Hi,

I made a small edit to the previous architecture[1].
Instead of keeping many services in one pod, I've moved all the services to different pods because, say, if the radius module is receiving a lot of requests, then we can simply scale the radius service without needing to create a new nginx server as well.
In this case nginx will send requests to a ClusterIP service, and kubernetes will do the load balancing and send each request to an appropriate radius instance.

In any case, at the level of designing the docker containers, I don't think there will be any change. Here is the new diagram[2].
Let me know if my line of thought is going in the correct direction or not.

Thanks,
Ajay

---
Ref:

Federico Capoano

Mar 30, 2019, 11:45:44 AM
to OpenWISP
Great work Ajay!

Something you should keep in mind: since we are porting what we learned in ansible-openwisp2 to docker, we should somewhat follow the path we learned there and offer a similar feature set.

In your diagram, I see different boxes for PostgreSQL, I'm not sure if that's good, could you expand on that? Are those different postgresql server instances? How are they synced?
Is there a better way? Have you found some information about best practices around this subject?

There's a diagram with nginx and django-freeradius, this doesn't seem to make sense to me. OpenWISP Radius is based on django-freeradius and that's our radius module, we won't run django-freeradius alone (you can consider django-freeradius as a base library / django reusable app).

I see gunicorn being mentioned, I'm pretty happy with uWSGI and I would like to continue using that as our application server.

It would be great to see the different containers/services listed in the diagram, for example:
  • OpenWISP Dashboard: admin + openwisp-users views
  • OpenWISP Controller (connections branch) views and APIs
  • OpenWISP Network Topology views and APIs
  • OpenWISP Radius views and APIs
  • websocket server of OpenWISP
  • celery worker
  • celery-beat
  • OpenVPN management VPN
  • freeradius instance
And do not forget to indicate the mounted volumes that allow files uploaded by users (eg: floor plan images) to be stored persistently.

Federico


--

Ajay Tripathi

Mar 30, 2019, 2:43:36 PM
to OpenWISP
Hi,

Please see the following project details:

We create base images for openwisp-modules with all the system and python packages installed.
The user would be expected to take these base images and build images for their organization by following the provided usage instructions, after setting the values appropriate for their organization in the `.env` file.
Note: during this build, django settings like `SECRET_KEY` would be baked into the image, hence the generated images are to be kept in a private registry, and the values of the `.env` file can be saved privately for re-creating the images as well.
I have implemented this here[1].
The `base/` directory in this implementation contains the Dockerfiles and requirements for creating the base images that are to be pulled by the user.
The user simply has to set all the values in the `.env` file, then run `make_secret_key.py` to generate a new secret key.
Finally, build the images with `docker-compose build`. 
When the images are ready, the newly generated images can be used in production.
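For context, a script like `make_secret_key.py` can be written with only the Python standard library; the following is a generic sketch of such a generator, not the actual file from the repository:

```python
import secrets
import string

# The same character set Django's own get_random_secret_key() draws from.
CHARS = string.ascii_lowercase + string.digits + "!@#$%^&*(-_=+)"

def make_secret_key(length=50):
    """Return a random string suitable as a Django SECRET_KEY."""
    return "".join(secrets.choice(CHARS) for _ in range(length))

if __name__ == "__main__":
    # Print in .env format so it can be appended to the env file.
    print("SECRET_KEY=%s" % make_secret_key())
```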

Let me know if that sounds good to you.

Something you should keep in mind, since we are porting what we learned in ansible-openwisp2 to docker, we should somewhat follow the path we learned there and offer a similar feature set.

Got it!


In your diagram, I see different boxes for PostgreSQL, I'm not sure if that's good, could you expand on that? Are those different postgresql server instances? How are they synced?
Is there a better way? Have you found some information about best practices around this subject?

Unfortunately I don't have much knowledge about the best practices. For now, I'll represent postgres as one instance in the diagram, research the subject, and get back to you.
 

There's a diagram with nginx and django-freeradius, this doesn't seem to make sense to me. OpenWISP Radius is based on django-freeradius and that's our radius module, we won't run django-freeradius alone (you can consider django-freeradius as a base library / django reusable app).

I see gunicorn being mentioned, I'm pretty happy with uWSGI and I would like to continue using that as our application server.

Would be great to see the different containers/services being listed in the diagram example:
  • OpenWISP Dashboard: admin + openwisp-users views
  • OpenWISP Controller (connections branch) views and APIs
  • OpenWISP Network Topology views and APIs
  • OpenWISP Radius views and APIs
  • websocket server of OpenWISP
  • celery worker
  • celery-beat
  • OpenVPN management VPN
  • freeradius instance
And do not forget to indicate the mounted volumes that allow to store persistently files that are uploaded by users (eg: floor plan images).

Federico Capoano

Mar 30, 2019, 3:47:33 PM
to OpenWISP
We should only generate images once; the env vars can be set at run time, and configuration files can be loaded from volumes or from private git repositories if needed, and manipulated with shell or python scripts.
We can't generate one image for each organization. I myself manage many different organizations, and having to maintain one image for each would put a crazy load on people like me.
That is very inefficient and I see no advantage in doing it.

Keep in mind the following important concept:

- the image is the software, the base images will be identical for all users
- environment variables and django settings are configurations, those change by each organization and can be set at run time

The only env vars that make sense and I can think of now are env vars which contain the commit hash of the openwisp modules used. 

Please review this important detail before going ahead with your prototype.

Federico

A Stanley

Apr 1, 2019, 5:21:46 PM
to open...@googlegroups.com
Federico, just to make sure I understand: how do you envision an end user setting things like Django's 'SECRET_KEY' and 'ALLOWED_HOSTS'? I guess I'm confused by:

The only env vars that make sense and I can think of now are env vars which contain the commit hash of the openwisp modules used.

Also, for configuration files, should there be something like a 'base_settings.py' that includes all required settings and then an option for end users to load additional settings via bind mounts/volumes?

Thanks,

Andrew

Federico Capoano

Apr 1, 2019, 6:42:49 PM
to OpenWISP
On Mon, Apr 1, 2019 at 5:21 PM A Stanley <2st...@2stacks.net> wrote:
Federico, just to make sure I understand, how do you envision an end user setting things like Django's 'SECRET_KEY' and 'ALLOWED_HOSTS'?  I guess I'm confused by;

The only env vars that make sense and I can think of now are env vars which contain the commit hash of the openwisp modules used.
 
We should surely use env vars for SECRET_KEY and ALLOWED_HOSTS, but we should not set these with --build-arg at image build time; the env vars will be set by the system which brings up the container, right?

Do you know what --build-arg does?
I used it recently to embed the commit hash of a repo in the image, so it could be used easily in sentry, and from the sentry error detail you would see which version of the app is affected by a bug.
In our case it is a bit more complicated because we have more modules, but we may figure out a way to have that as well.

Also, for configuration files, should there be something like a 'base_settings.py' that includes all required settings and then an option for end users to load additional settings via bind mounts/volumes?
 
Django gives the possibility to set default settings, so we could do that in openwisp-utils for the settings that are usually the same for all openwisp instances (so the settings file becomes leaner and easier for us to read and maintain).
Then we can use env vars for the rest of the settings, and have some settings generated dynamically depending on the content of other settings or env variables; this kind of implementation could also live in openwisp-utils.
We have to figure out the best way; this will take some trial and error, I believe. Don't you think?
I tried providing some ideas, if anyone has other interesting ideas please share.
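The layering described above (shared defaults, overridden per installation by env vars, with some settings derived dynamically) could look roughly like this; the module layout, the default values and the setting names chosen here are hypothetical:

```python
import os

# Hypothetical defaults that openwisp-utils could ship for all
# OpenWISP instances, kept out of each project's settings.py.
OPENWISP_DEFAULTS = {
    "LANGUAGE_CODE": "en-gb",
    "TIME_ZONE": "UTC",
    "EMAIL_PORT": 587,
}

def build_settings(environ=os.environ):
    """Merge the shared defaults with per-installation overrides
    taken from environment variables set at container run time."""
    settings = dict(OPENWISP_DEFAULTS)
    for key in settings:
        if key in environ:
            # cast the env string to the type of the default value
            settings[key] = type(settings[key])(environ[key])
    # A setting generated dynamically from the content of another one:
    settings["EMAIL_USE_TLS"] = settings["EMAIL_PORT"] == 587
    return settings
```

The point of the sketch is only the mechanism: defaults live in one shared place, and the container environment wins where it is set.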

Federico

A Stanley

Apr 1, 2019, 7:32:20 PM
to open...@googlegroups.com
Thanks, that clears it up for me. Agreed: the system that launches the stack should provide all of the production settings, which shouldn't be stored in git or docker images.

I am familiar with --build-arg; I use it sometimes to save the date/time the container was built inside the image. The commit hash is good info. I may have to borrow that from you :)

I'm still new to Django so I've just been following the guidance from "Two Scoops of Django" and django-cookiecutter.  It's worked well for my use cases but I need to get more familiar with all of the parts of Openwisp to be able to say if it does or does not fit.


Federico Capoano

Apr 1, 2019, 7:43:28 PM
to OpenWISP
The best practices described in Two scoops of django and 12 factor app are good ones.

Ajay Tripathi

Apr 2, 2019, 5:18:29 AM
to OpenWISP
Hi,

Thanks for the information.

Update on my end: I have moved the files to a new location[1] to make it easier to clone and test the code.
In the interest of avoiding any future confusion, I'll remove the directory from the old repository ASAP.

So far, I've made images of:
- openwisp-controller
- openwisp-network-topology
- openwisp-radius 
- openwisp-dashboard 

openwisp-controller is currently not working with postgis**; however, the rest of them have been moved to a postgresql database with a persistent volume. I've also tested saving the user-generated data (floorplans) in a separate persistent volume.
All the above has been tested in kubernetes and deployed with terraform.
I've also submitted the first draft of the proposal on the GSoC's website.

Next, I am going to write the connections-branch test cases and then check out django-environ and cookiecutter-django. :)


** Because of compatibility problems; so far it looks like I'll have to compile postgresql on alpine to make everything work. I'll update on this front as soon as possible.


Thanks,
Ajay Tripathi

---
Ref:

Ajay Tripathi

Apr 6, 2019, 8:30:54 AM
to OpenWISP
Hi,

Update: I've added documentation for testing and building the containers.

Questions/Discussion:
1. About Database:
It looks like openwisp-radius and openwisp-controller can't migrate to the same database.
Unlike the network-topology module, which migrated to the same database as openwisp-controller, I had to change the database name for the radius module.
Is this expected behaviour, or do we need to make the openwisp-controller database compatible with the openwisp-radius database in the final version?

2. Migration problem with docker-compose:
When we run docker-compose for the first time, all the containers start migrating in parallel, which causes some of them
to fail their migrations. If we re-run the containers everything works fine, because the remaining containers get a chance to migrate.
I think we need some kind of flag for the containers to coordinate migrations on the first run. Please advise.

3. Terraform creation order:
When the database server starts, it takes a while to allow connections on port 5432.
But since Terraform starts all the pods in parallel, some pods try to connect before PostgreSQL is ready
and fail; re-running the pods solves the problem, but I could not find how to tell
Terraform that the database server is accepting connections and that dependent pods can be
created. Please advise how this is usually done. :)

Thanks,
Ajay Tripathi

A Stanley

unread,
Apr 6, 2019, 11:24:36 AM4/6/19
to open...@googlegroups.com
1. I'm pretty sure they need to share user table information but I'll defer to Federico.  Do you know the exact issue?

2.  I've been researching this lately as it pertains to running django-admin on a separate instance (container).  It's recommended best practice that migrations be run from only one container when multiple containers share a code base or models.

3. I use a combination of shell scripts and docker entry points.  Check out
https://github.com/eficode/wait-for and https://github.com/2stacks/freeradius-django/blob/master/compose/django/entrypoint. It's also possible in Terraform to create things with implicit dependencies.  I haven't tried with Kubernetes but I'm sure it can be done https://learn.hashicorp.com/terraform/getting-started/dependencies.html.
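A minimal sketch of that wait-for idea as a shell function (the function name, host/port arguments, and the commented entrypoint usage are illustrative; the linked wait-for script is more complete and typically relies on nc, while this sketch uses bash's /dev/tcp pseudo-device):

```shell
#!/bin/sh
# Sketch: poll a TCP port until it accepts connections or a timeout expires.
# wait_for and its arguments are illustrative names, not project code.
wait_for() {
  host="$1"; port="$2"; timeout="${3:-60}"
  i=0
  # Attempt to open a TCP connection; /dev/tcp is a bash feature.
  until (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; do
    i=$((i + 1))
    [ "$i" -ge "$timeout" ] && return 1
    sleep 1
  done
  return 0
}

# Typical entrypoint usage (not executed here):
#   wait_for "${DB_HOST:-postgres}" "${DB_PORT:-5432}" 60 || exit 1
#   exec uwsgi --ini uwsgi.ini
```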

Hope this helps.

--

Ajay Tripathi

unread,
Apr 6, 2019, 1:45:47 PM4/6/19
to OpenWISP
Hi,

On Saturday, April 6, 2019 at 8:54:36 PM UTC+5:30, 2stacks wrote:
1. I'm pretty sure they need to share user table information but I'll defer to Federico.  Do you know the exact issue?

I was thinking about the user table as well, I will investigate and report back.

2.  I've been researching this lately as it pertains to running django-admin on a separate instance (container).  It's recommended best practice that migrations be run from only one container when multiple containers share a code base or models.

Okay, thanks, I will look into it further.
However, I think making a single container responsible for running all the migrations might have a large cost. We already have a lot of services, and in the future we will have more, so a container that bundles all the modules and their dependencies just for migrations will become bulky. What do you think?
What do you think about a control variable that allows only one of the services to migrate at a time? (The rest stay in a while loop until the control variable allows another service to start its migration.) Let me know if I should elaborate this idea further.
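As an illustration of the control-variable idea, the gating in each container's entrypoint could be sketched like this (RUN_MIGRATIONS and the commented manage.py invocation are assumptions for illustration, not project code):

```shell
#!/bin/sh
# Sketch: only the container started with RUN_MIGRATIONS=1 applies migrations;
# the others skip them and assume the designated container runs them first.
run_migrations_if_leader() {
  if [ "${RUN_MIGRATIONS:-0}" = "1" ]; then
    echo "applying migrations"
    # python manage.py migrate --noinput
  else
    echo "skipping migrations (another container is responsible)"
  fi
}

# Each container's entrypoint would call this before starting its service.
```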

3. I use a combination of shell scripts and docker entry points.  Check out

That's perfect, thanks, I'll implement this in the prototype.

It's also possible in Terraform to create things with implicit dependencies.  I haven't tried with Kubernetes but I'm sure it can be done https://learn.hashicorp.com/terraform/getting-started/dependencies.html.
 
I tested Terraform's dependencies; they actually only wait for the container to start running, not for the services inside the container.
However, once I incorporate the entrypoint solution mentioned above, I think this problem will also be solved.


Hope this helps.

Yes, many thanks. :-)

Best,
Ajay Tripathi

A Stanley

unread,
Apr 6, 2019, 2:08:43 PM4/6/19
to open...@googlegroups.com
I think points 1. and 2. are very much related and common to scenarios of breaking down a monolithic application into microservices.  I could be wrong, but from what I've been reading I think it's recommended to have a common build for Django.  Not all apps will be enabled in the Django config and not all instances will have the same URL config, but for common models in either openwisp or dependencies like allauth, trying to run migrations in parallel from different instances is risky.  I'm not an expert on this and am still learning myself.

If you look at a Celery worker's build, it's basically a replica of your base Django app with the Celery services as the only delta.

Also, I'm not sure if each openwisp service instance (container) should have its own django-admin or if there should be a master admin interface colocated with one of the instances.  I think that depends on how the current build of openwisp behaves.  Again, this is uncharted territory for me.  I've been reviewing your repository.  Very impressive.  I'm confident you'll figure it out.

--

A Stanley

unread,
Apr 6, 2019, 3:03:43 PM4/6/19
to open...@googlegroups.com

Federico Capoano

unread,
Apr 7, 2019, 5:12:52 PM4/7/19
to OpenWISP
Premise: I don't think we are not breaking down OpenWISP into microservices.

My experience with microservices has not been a happy one: they make things a lot more complicated.
OpenWISP 2 is modular, which gives us the advantage that we don't need to force people to write modular code by separating features into different services; this helps keep things simple
and eases maintenance.

The modular nature of OpenWISP allows us to break a monolithic OpenWISP instance into different containers running different parts of it, but I would not call this a microservice architecture.
I want to do this breakdown of the monolithic OpenWISP instance into services because it will make it easier to assign resources to the services that need to be scaled up.

Let's not mention microservices anymore in this project because I believe it will fuel confusion.

On Sat, Apr 6, 2019 at 8:30 AM Ajay Tripathi <ajay...@gmail.com> wrote:
Hi,

Update: I've added documentation for testing and building the containers.

Questions/Discussion:
1. About Database:
It looks like openwisp-radius and openwisp-controller can't migrate to the same database.
Unlike the network-topology module which migrated to the same database as the openwisp-controller, 
I had to change the database name for the radius module.
Is this an expected behaviour or do we need to make openwisp-controller database compatible with 
the openwisp-radius database in the final version?

It's not the expected behaviour and it should not happen.

I have some openwisp instances running fine with both openwisp-radius and openwisp-controller (development version of all openwisp modules).

What issue are you having, can you paste the error you're getting?
 
2. Migration problem with docker-compose:
When we run docker-compose for the first time, all the containers start migrating in parallel, which causes some of them
to fail their migrations. If we re-run the containers everything works fine, because the remaining containers get a chance to migrate.
I think we need some kind of flag for the containers to coordinate migrations on the first run. Please advise.

We should find a way to start the general admin dashboard container first, which should have all the django apps in INSTALLED_APPS, so migrations are run there.
Then the other services can be started in parallel afterwards.
 
3. Terraform creation order:
When the database server starts, it takes a while to allow connections on port 5432.
But since Terraform starts all the pods in parallel, some pods try to connect before PostgreSQL is ready
and fail; re-running the pods solves the problem, but I could not find how to tell
Terraform that the database server is accepting connections and that dependent pods can be
created. Please advise how this is usually done. :)

The suggestion given by 2stacks seems good to me: if we need some dependencies to be up, we should use all the tools at our disposal to wait until those services become ready (with a configurable timeout).

I'll reply to other parts of the thread in my next email. 

Federico Capoano

unread,
Apr 7, 2019, 5:13:31 PM4/7/19
to OpenWISP
Errata, I meant: I don't think we are breaking down OpenWISP into microservices.

Federico Capoano

unread,
Apr 7, 2019, 5:19:32 PM4/7/19
to OpenWISP
On Sat, Apr 6, 2019 at 11:24 AM A Stanley <2st...@2stacks.net> wrote:
2.  I've been researching this lately as it pertains to running django-admin on a seperate instance (container).  Its recommended best practices that migrations only be run from one container when multiple containers share a code base or models.

I think this would work. As I anticipated in my previous email, the container which will have all the apps in INSTALLED_APPS is the following one:

Provide the OpenWISP Admin interface and the views managing account information (password reset, email confirmation) in a dedicated container.

We can also run migrations only there, it should work, because if we update any dependency that affects any of the services, most likely the admin/dashboard container will have to be re-deployed as well.

Ajay Tripathi

unread,
Apr 7, 2019, 6:12:12 PM4/7/19
to open...@googlegroups.com
Hi,

On Mon, Apr 8, 2019, 2:49 AM Federico Capoano <federico...@gmail.com> wrote:

Provide the OpenWISP Admin interface and the views managing account information (password reset, email confirmation) in a dedicated container.

We can also run migrations only there, it should work, because if we update any dependency that affects any of the services, most likely the admin/dashboard container will have to be re-deployed as well.

I think I am still a bit unclear about the "admin interface"/dashboard. From what I understand now, "dashboard" means all the OpenWISP services in one container,
i.e. openwisp-radius, openwisp-controller, openwisp-network-topology, and in the future openwisp-ipam and other services would be available in this container.

By this understanding, a basic user would only need the dashboard container in the beginning, and should they feel the need to add an instance of a specific service like radius, they can simply add a new container for openwisp-radius, and additional traffic would flow to the new instance with the help of a load balancer.


Is this understanding correct and by dashboard we mean all the services? Please correct me otherwise.

I have also sent the proposal on GSoC's website.
Please let me know if there are any final corrections to be made or any section / information to be added or removed in the application.


Thanks,
Ajay Tripathi

Federico Capoano

unread,
Apr 7, 2019, 6:18:18 PM4/7/19
to OpenWISP
On Sun, Apr 7, 2019 at 6:12 PM Ajay Tripathi <ajay...@gmail.com> wrote:
Hi,

On Mon, Apr 8, 2019, 2:49 AM Federico Capoano <federico...@gmail.com> wrote:

Provide the OpenWISP Admin interface and the views managing account information (password reset, email confirmation) in a dedicated container.

We can also run migrations only there, it should work, because if we update any dependency that affects any of the services, most likely the admin/dashboard container will have to be re-deployed as well.

I think I am still a bit unclear about the "admin interface"/dashboard. From what I understand now, "dashboard" means all the OpenWISP services in one container,
i.e. openwisp-radius, openwisp-controller, openwisp-network-topology, and in the future openwisp-ipam and other services would be available in this container.

Not really.

The modules would be in INSTALLED_APPS, but only some views and URLs would be enabled (all the views needed to make the admin work).
The APIs (e.g. the network topology API, the openwisp-radius API, the controller view) would not run there.
 
By this understanding, a basic user would only need the dashboard container in the beginning, and should they feel the need to add an instance of a specific service like radius, they can simply add a new container for openwisp-radius, and additional traffic would flow to the new instance with the help of a load balancer.

The service running openwisp-radius will be focused on running the openwisp-radius API, this is the part which receives a lot of traffic from freeradius instances that talk to the openwisp-radius API.
 
I have also sent the proposal on GSoC's website.
Please let me know if there are any final corrections to be made or any section / information to be added or removed in the application.

I will get there asap.

Federico 

Michael Baumhof

unread,
Apr 9, 2019, 11:39:19 AM4/9/19
to open...@googlegroups.com
Message has been deleted

Ajay Tripathi

unread,
Apr 10, 2019, 11:23:08 AM4/10/19
to OpenWISP
Hi, 

**Sorry for resending, but my last email isn't visible on groups.google.com, so I am sending it again to ensure it is received by all participants.**

On Wed, Apr 10, 2019 at 6:26 AM Ajay Tripathi <ajay...@gmail.com> wrote:
Hi,

Update: I've submitted the final application and I was able to make the video[1] demonstration of the prototype in time to add it in the proposal. :-)
 
1. About Database:

2. Migration problem with docker-compose:

3. Terraform creating order:
 
Thanks for all the pointers on the problems above. I've managed to work up a solution for them; please provide feedback on it:
When I started migrating from one container and moved all the containers to the development version of OpenWISP, these issues were solved.
Now the migrations run from openwisp_dashboard only. I've added a file named migration_settings.py containing all the apps in INSTALLED_APPS, and I migrate with the command `python manage.py migrate --settings=openwisp.migration_settings`, and that solved it.


However, I am still not clear about the "admin interface"/dashboard. I have the following questions that I think will help me be certain about it:

1. Is "dashboard" and "admin interface" the same container or are they different containers?

2. In the dashboard container I have all the basic Django INSTALLED_APPS plus openwisp_users only, and in the URLs of this container I have:
    url(r'^admin/', admin.site.urls),
    url(r'^accounts/', include('openwisp_users.accounts.urls'))
does the dashboard/admin-interface need any other URLs as well?

3. The admin container description explicitly mentions "views managing account information (password reset, email confirmation)"; by any chance, does that mean that the other containers should not contain views managing account information, or am I thinking in the wrong direction about it now?


Will update about the rest tomorrow or the day after tomorrow. :-)

Cheers,
Ajay Tripathi

---

Ref: 

Federico Capoano

unread,
Apr 14, 2019, 9:25:28 PM4/14/19
to OpenWISP
Hi Ajay,

sorry for the late reply, see below.

On Tue, Apr 9, 2019 at 8:56 PM Ajay Tripathi <ajay...@gmail.com> wrote:
Hi,

Update: I've submitted the final application and I was able to make the video[1] demonstration of the prototype in time to add it in the proposal. :-)
 
1. About Database:
2. Migration problem with docker-compose:

3. Terraform creating order:
 
Thanks for all the pointers on the problems above. I've managed to work up a solution for them, please provide feedback on the solution:
When I started migrating from one container and moved all the containers to the development version of OpenWISP, these issues were solved.
Now the migrations run from openwisp_dashboard only. I've added a file named migration_settings.py containing all the apps in INSTALLED_APPS, and I migrate with the command `python manage.py migrate --settings=openwisp.migration_settings`, and that solved it.


However, I am still not clear about the "admin interface"/dashboard. I have following questions that I think will help me be certain about it:

1. Is "dashboard" and "admin interface" the same container or are they different containers?
 
Same thing, let's find a good name for this. What do you suggest?

2. In the dashboard container I have all the basic django INSTALLED_APPS with openwisp_users only and in the urls of this container I have:
    url(r'^admin/', admin.site.urls),
    url(r'^accounts/', include('openwisp_users.accounts.urls'))
does the dashboard/admin-interface user need any other urls as well? 

Each module may have some URLs that are called from the admin.

For example:

- django_netjsonconfig has a URL for loading the JSON schemas in the schema-editor widget
- openwisp_controller.config has a URL for loading the default templates
- openwisp_controller.pki has a URL for downloading the CRL (certificate revocation list)

In truth, the cleanest solution would be to make sure that all the views and URLs that are loaded from the admin are automatically created in the admin (like the preview view).
We could start taking notes of all these URLs, create issues in each module, and then gradually fix this confusing situation over time.

Something you will have to do is to patiently try to use every feature of the admin (click on every button, try everything) and take note of what doesn't work.
 
3. The admin container information explicitly mentions about "views managing account information (password reset, email confirmation)", by any chance, does that mean that the other containers should not contain views managing account information or am I thinking in the wrong direction about it now?

I think you have understood.

The container which provides the API views for the controller module should not provide openwisp_users.accounts.urls.

Ajay Tripathi

unread,
Apr 22, 2019, 8:09:19 PM4/22/19
to OpenWISP
Hello,

Update: I implemented the prototype with uWSGI & nginx (previously the Python development server).
The implementation allows users to manipulate the nginx.conf file with the help of environment variables using `envsubst`.
They may also add new custom blocks simply by mounting them from an external volume, for example:

volumes:
   - ./custom.conf:/etc/nginx/conf.d/custom.conf:ro
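The envsubst mechanism works roughly like the sketch below (the NGINX_SERVER_NAME variable, the template contents, and the file paths are illustrative assumptions, not the actual image's; a sed fallback is included for environments without gettext's envsubst):

```shell
#!/bin/sh
# Sketch: render an nginx config template from environment variables.
cat > /tmp/nginx.template.conf <<'EOF'
server {
    listen 80;
    server_name ${NGINX_SERVER_NAME};
}
EOF

export NGINX_SERVER_NAME="dashboard.example.com"

if command -v envsubst >/dev/null 2>&1; then
  # Listing the variable explicitly prevents envsubst from clobbering
  # nginx's own $variables (e.g. $host, $remote_addr).
  envsubst '$NGINX_SERVER_NAME' < /tmp/nginx.template.conf > /tmp/nginx.conf
else
  # Fallback emulation of envsubst for a single variable.
  sed "s|\${NGINX_SERVER_NAME}|${NGINX_SERVER_NAME}|g" \
    < /tmp/nginx.template.conf > /tmp/nginx.conf
fi

cat /tmp/nginx.conf
```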


Ajay

---
P.S: I will reply to the openwisp-dashboard mail as soon as possible. :-)

Federico Capoano

unread,
Apr 22, 2019, 10:25:28 PM4/22/19
to OpenWISP
That's great Ajay, can you send a link to the commits that contain these changes please?

Federico

--

Ajay Tripathi

unread,
Apr 23, 2019, 8:42:27 PM4/23/19
to OpenWISP
Hi,


On Tuesday, April 23, 2019 at 7:55:28 AM UTC+5:30, Federico Capoano wrote:
That's great Ajay, can you send a link to the commits that contain these changes please?


Ajay Tripathi

unread,
Apr 27, 2019, 7:44:58 PM4/27/19
to OpenWISP
Hello,

Regarding the implementation of SSL, I am currently planning to make secure connections to:

1. postgresql-server
2. redis
3. uWSGI protocol
4. nginx

(1) A secure connection to postgresql-server has been done by @2stacks in one of the examples[1]; I am planning to implement that soon.

(2) & (3) redis and the uWSGI protocol don't seem to have native support for secure connections; all I could find was stunnel to secure the connections.
Something to note about stunnel is that it's distributed under GNU GPL version 2 or later with the OpenSSL exception, but stunnel is not a community project.

(4) While researching how to automatically renew the certificates on Kubernetes, I found cert-manager[2], which can be installed from Helm.
cert-manager looks like a good option to me.

Please review it and let me know your thoughts on it.


Thank You,
Ajay


Ajay Tripathi

unread,
May 1, 2019, 10:21:03 AM5/1/19
to OpenWISP
Hi,

Update: I have implemented SSL connections for nginx <---> users[1].
I've added a new container named openwisp-orchestration; this container does the job of creating new certs, asking Let's Encrypt if DEBUG mode is off and otherwise making self-signed certificates. This container will also renew the certs as per the certbot renew policy. The renew process runs from a cronjob at 3 AM on Sundays, and the nginx server reloads at 3:30 AM on Sundays.
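The renewal schedule described corresponds to crontab entries along these lines (a sketch; the exact commands inside the orchestration container are assumptions):

```
# m  h  dom mon dow  command
0   3  *   *   0    certbot renew --quiet   # Sundays 03:00: attempt renewal
30  3  *   *   0    nginx -s reload         # Sundays 03:30: pick up new certs
```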

However, I have not implemented SSL connections within the cluster. I think it wouldn't help: if a person has access to the cluster API, they can already access the keys and containers, so SSL will not help anyway; if they don't have access to the API, they can't even reach the cluster connections.
The only reason to encrypt the connections would be if someone implements a connection to outside the cluster, like a separate postgres instance on a different system outside the cluster.
For the postgres connection, I have added option:
DB_SSLMODE=disable
DB_SSLROOTCERT=''
If someone decides to have the instance outside the cluster they can set these options.

What are your views on this? Is there any case where making secure connections within the cluster helps?


Ajay

---
Ref:

A Stanley

unread,
May 1, 2019, 12:57:16 PM5/1/19
to open...@googlegroups.com
I agree, if the cluster itself is not multi-tenant then it's safe to leave off ssl.  If you were running in a multi-tenant or untrusted cluster then there are other ways to go about it (I'm thinking service mesh like istio).  The only other reason would be if you need to expose postgres outside the cluster but I agree I don't see a need.

I'll take a look at your orchestration container.  Did you look at https://github.com/jetstack/cert-manager as an alternative?  I have this working in the cluster I shared.  You just add a couple annotations to your ingress and the rest is magic ;)

--

Oscar STV

unread,
May 6, 2019, 12:24:35 PM5/6/19
to OpenWISP
Hello, Ajay.

First of all, great project!

I wonder if you could tell me how to build your containers. I'm on ARM64, so I need to build them.

Trying something like docker build -t openwisp-nginx .
while in the /build/nginx folder gives me errors:
Sending build context to Docker daemon  5.632kB
Step 1/17 : FROM nginx:alpine
 ---> 439a39fa0f80
Step 2/17 : COPY ./nginx/init_command.sh /etc/nginx/init_command.sh
COPY failed: stat /var/lib/docker/tmp/docker-builder310475260/nginx/init_command.sh: no such file or directory

And I have other errors when trying to build other containers...

Sorry, it's the first time I've needed to build containers.

Thank you very much,
Oscar

On Saturday, April 6, 2019 at 19:45:47 UTC+2, Ajay Tripathi wrote:

Ajay Tripathi

unread,
May 6, 2019, 12:35:19 PM5/6/19
to OpenWISP
Hi Oscar,

I think the error may be because you are trying to build without docker-compose;
can you please confirm this?

You need to use docker-compose to build the images.
On the root directory of the repository there is a docker-compose file.
You need to do `docker-compose build` to build the images
and `docker-compose up` to bring up the containers.

Best,
Ajay T.

A Stanley

unread,
May 6, 2019, 1:11:37 PM5/6/19
to open...@googlegroups.com
I can confirm the same.  The first time, I tried to build from inside the directory containing the Dockerfile with docker build -t.  I got the exact same error as Oscar.  Building with docker-compose from the root directory is what worked for me.

--

Oscar Esteve

unread,
May 6, 2019, 1:19:35 PM5/6/19
to open...@googlegroups.com
Hello.

Yes, I'm using docker-compose.

If I try:
root@MyServer:~/dockerize-openwisp# docker-compose build .
WARNING: Some services (controller, dashboard, nginx, postgres, radius, redis, topology) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm.
ERROR: No such service: .
I know this docker-compose has missing parameters... but don't know which.
This is what I have in this folder:
root@MyServer:~/dockerize-openwisp# ls -l
total 44
drwxr-xr-x 8 root root 4096 May  6 17:12 build
-rw-r--r-- 1 root root 6059 May  6 17:12 docker-compose.yml
drwxr-xr-x 2 root root 4096 May  6 17:12 kubernetes
-rw-r--r-- 1 root root 1070 May  6 17:12 LICENSE
-rw-r--r-- 1 root root  988 May  6 17:12 make_secret_key.py
-rw-r--r-- 1 root root  187 May  6 17:12 Pipfile
-rw-r--r-- 1 root root 8397 May  6 17:12 README.md
drwxr-xr-x 2 root root 4096 May  6 17:12 terraform
root@MyServer:~/dockerize-openwisp#
Thank you very much,
Oscar


Message from Ajay Tripathi <ajay...@gmail.com> on Mon, May 6, 2019 at 18:35:
--

Ajay Tripathi

unread,
May 6, 2019, 1:36:08 PM5/6/19
to OpenWISP
On Monday, May 6, 2019 at 10:49:35 PM UTC+5:30, Oscar STV wrote:
If I try:
root@MyServer:~/dockerize-openwisp# docker-compose build .

The command is `docker-compose build`, without the trailing `.` (dot).
When using docker-compose you don't need to specify the build directory.
No arguments need to be passed to `docker-compose build`; everything is defined
in the docker-compose file.

On Monday, May 6, 2019 at 10:41:37 PM UTC+5:30, 2stacks wrote:
I can confirm the same.  First time I tried to build from inside the directory with the docker file with docker build -t.  I got the exact same error as Oscar.  Building with docker-compose from the root directory is what worked for me.

`docker build .` is only allowed to read files from within the current directory (the build context),
while these images copy files from different directories. It *might* work if we
provide the correct parent directory** as the build context, but I've not tried
to do that! :-)

**The parent directory should be the root directory of the repository.

 
Ajay

A Stanley

unread,
May 6, 2019, 1:37:38 PM5/6/19
to open...@googlegroups.com
Don't add the '.' Just use 'docker-compose build' or 'docker-compose build --pull'

Oscar STV

unread,
May 6, 2019, 2:10:54 PM5/6/19
to OpenWISP
Thank you all :)
Now I get:
root@S912:~/dockerize-openwisp# docker-compose build

WARNING: Some services (controller, dashboard, nginx, postgres, radius, redis, topology) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm.
redis uses an image, skipping
postgres uses an image, skipping
Building controller
Step 1/35 : FROM python:3.7-alpine AS BASE
3.7-alpine: Pulling from library/python
6f37394be673: Already exists
055b14f83961: Pull complete
cc200570323c: Pull complete
d41862fe4c69: Pull complete
91e72a154929: Pull complete
Digest: sha256:ef431c6357f42a8507e01584038d5bda38f01664678e5737d3ba05afcf70133d
Status: Downloaded newer image for python:3.7-alpine
 ---> 14c42b351a64
Step 2/35 : WORKDIR /opt/openwisp
 ---> Running in 1db1702aac3c
Removing intermediate container 1db1702aac3c
 ---> f07dfd8da064
Step 3/35 : RUN apk add --update --no-cache build-base libffi-dev openssl-dev postgresql-dev gettext linux-headers python3-dev zlib-dev jpeg-dev musl-dev git --repository http://dl-cdn.alpinelinux.org/alpine/edge/testing gdal-dev geos-dev
 ---> Running in 39457af9df15
fetch http://dl-cdn.alpinelinux.org/alpine/edge/testing/aarch64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/main/aarch64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/community/aarch64/APKINDEX.tar.gz
ERROR: unsatisfiable constraints:
  gdal-dev (missing):
    required by: world[gdal-dev]
  geos-dev (missing):
    required by: world[geos-dev]
ERROR: Service 'controller' failed to build: The command '/bin/sh -c apk add --update --no-cache build-base libffi-dev openssl-dev postgresql-dev gettext linux-headers python3-dev zlib-dev jpeg-dev musl-dev git --repository http://dl-cdn.alpinelinux.org/alpine/edge/testing gdal-dev geos-dev' returned a non-zero code: 2


On Monday, May 6, 2019 at 19:37:38 UTC+2, 2stacks wrote:
To unsubscribe from this group and stop receiving emails from it, send an email to open...@googlegroups.com.

For more options, visit https://groups.google.com/d/optout.

--
You received this message because you are subscribed to the Google Groups "OpenWISP" group.
To unsubscribe from this group and stop receiving emails from it, send an email to open...@googlegroups.com.

Oscar STV

unread,
May 6, 2019, 2:14:34 PM5/6/19
to OpenWISP
Hi,

It seems docker-compose build --pull is what I needed :)

root@S912:~/dockerize-openwisp# docker-compose build --pull

WARNING: Some services (controller, dashboard, nginx, postgres, radius, redis, topology) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm.
postgres uses an image, skipping
redis uses an image, skipping
Building topology
Step 1/31 : FROM python:3.7-alpine AS BASE
3.7-alpine: Pulling from library/python
Digest: sha256:ef431c6357f42a8507e01584038d5bda38f01664678e5737d3ba05afcf70133d
Status: Image is up to date for python:3.7-alpine
 ---> 14c42b351a64
Step 2/31 : WORKDIR /opt/openwisp
 ---> Using cache
 ---> f07dfd8da064
Step 3/31 : RUN apk add  --update --no-cache build-base postgresql-dev gettext python3-dev linux-headers
 ---> Running in 0f0e3876c359
(1/32) Installing binutils (2.31.1-r2)
(2/32) Installing libmagic (5.36-r0)
(3/32) Installing file (5.36-r0)
(4/32) Installing gmp (6.1.2-r1)
etc.


On Monday, May 6, 2019 at 19:37:38 UTC+2, 2stacks wrote:
Don't add the '.' Just use 'docker-compose build' or 'docker-compose build --pull'


A Stanley

unread,
May 6, 2019, 2:47:44 PM5/6/19
to open...@googlegroups.com
👍


Oscar Esteve

unread,
May 6, 2019, 3:43:34 PM5/6/19
to open...@googlegroups.com
Hello.

Much better but I have an ERROR building "dashboard".
*****FIRST ATTEMPT*****
Building dashboard
Step 1/38 : FROM python:3.7-alpine AS BASE

3.7-alpine: Pulling from library/python
Digest: sha256:ef431c6357f42a8507e01584038d5bda38f01664678e5737d3ba05afcf70133d
Status: Image is up to date for python:3.7-alpine
 ---> 14c42b351a64
Step 2/38 : WORKDIR /opt/openwisp

 ---> Using cache
 ---> f07dfd8da064
Step 3/38 : RUN apk add --update --no-cache build-base libffi-dev openssl-dev postgresql-dev python3-dev zlib-dev jpeg-dev musl-dev gettext linux-headers git --repository http://dl-cdn.alpinelinux.org/alpine/edge/testing gdal-dev geos-dev
 ---> Running in 789501417acd
fetch http://dl-cdn.alpinelinux.org/alpine/edge/testing/aarch64/APKINDEX.tar.gz
ERROR: unsatisfiable constraints:
  gdal-dev (missing):
    required by: world[gdal-dev]
  geos-dev (missing):
    required by: world[geos-dev]
ERROR: Service 'dashboard' failed to build: The command '/bin/sh -c apk add --update --no-cache build-base libffi-dev openssl-dev postgresql-dev python3-dev zlib-dev jpeg-dev musl-dev gettext linux-headers git --repository http://dl-cdn.alpinelinux.org/alpine/edge/testing gdal-dev geos-dev' returned a non-zero code: 2
**********second attempt**********
Successfully built f107dd577522
Successfully tagged atb00ker/ready-to-run:openwisp-topology
Building controller
Step 1/35 : FROM python:3.7-alpine AS BASE

3.7-alpine: Pulling from library/python
Digest: sha256:ef431c6357f42a8507e01584038d5bda38f01664678e5737d3ba05afcf70133d
Status: Image is up to date for python:3.7-alpine
 ---> 14c42b351a64
Step 2/35 : WORKDIR /opt/openwisp
 ---> Using cache
 ---> fd060497e5b8

Step 3/35 : RUN apk add --update --no-cache build-base libffi-dev openssl-dev postgresql-dev gettext linux-headers python3-dev zlib-dev jpeg-dev musl-dev git --repository http://dl-cdn.alpinelinux.org/alpine/edge/testing gdal-dev geos-dev
ERROR: unsatisfiable constraints:
  gdal-dev (missing):
    required by: world[gdal-dev]
  geos-dev (missing):
    required by: world[geos-dev]
ERROR: Service 'controller' failed to build: The command '/bin/sh -c apk add --update --no-cache build-base libffi-dev openssl-dev postgresql-dev gettext linux-headers python3-dev zlib-dev jpeg-dev musl-dev git --repository http://dl-cdn.alpinelinux.org/alpine/edge/testing gdal-dev geos-dev' returned a non-zero code: 2
*******
Any ideas?

Thank you very much,
Oscar

On Mon, May 6, 2019 at 19:37, A Stanley <2st...@2stacks.net> wrote:

A Stanley

unread,
May 6, 2019, 4:10:40 PM5/6/19
to open...@googlegroups.com
It looks like gdal-dev and geos-dev are only available for x86 :(
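For builds like the one in the log above, one way to fail fast is a small architecture guard; this is a hedged sketch, not part of the project's Dockerfiles (the package names come from the build log, and the `uname -m` fallback is only for running outside Alpine):

```shell
# Hypothetical guard: warn early when the Alpine GIS packages are
# unlikely to exist for the build architecture.
# apk --print-arch works inside Alpine; uname -m is a fallback elsewhere.
arch="$(apk --print-arch 2>/dev/null || uname -m)"
case "$arch" in
  x86_64|x86|i386)
    echo "gdal-dev/geos-dev should be available for $arch" ;;
  *)
    echo "WARNING: gdal-dev/geos-dev may be missing for $arch (e.g. aarch64)" ;;
esac
```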

Oscar Esteve

unread,
May 6, 2019, 4:21:58 PM5/6/19
to open...@googlegroups.com
Thank you very much :)

I understand.

I have just run "docker-compose build" on an Intel server I have: it works :)

Thank you all very much, you have helped me a lot, and I've learnt a lot about docker-compose.
Òscar Esteve

On Mon, May 6, 2019 at 22:10, A Stanley <2st...@2stacks.net> wrote:

Oscar STV

unread,
May 8, 2019, 6:26:49 PM5/8/19
to OpenWISP
Hello, Ajay, and other guys here :)

I'm having some troubles with the HTTPS configuration.
I have installed your containers on an Intel machine, building them, as you know.
Now I'm trying to register my OpenWrt devices into OpenWISP.
I can't, because I'm connecting to it over HTTP.

logread | grep openwisp
says:
Wed May  8 23:18:15 2019 daemon.info openwisp: Registering device...
Wed May  8 23:18:15 2019 daemon.err openwisp: Invalid url: missing X-Openwisp-Controller header

But I'm not sure how to set the certificates on your NGINX server.

So, I have tried to get into it using a reverse proxy.
I have a machine, 192.168.1.2, that responds to mydomain.com
I'm trying to tell it something like "When somebody asks mydomain.com/openwisp2, go to my OpenWISP machine (it's 192.168.1.10), port 8080."

But I can't get it working.
I've tried several nginx configs; the last one I tried is:
location /openwisp2 {
                rewrite ^/openwisp2(.+)$ /admin/$1 break;
                proxy_pass http://192.168.1.10:8080;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

Do you know what I'm doing wrong?
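For what it's worth, a common pattern for stripping a sub-path prefix is to let `proxy_pass` do the rewrite by giving it a URI part; this is only a sketch, untested against OpenWISP (the upstream address is the one from the question). Note that Django applications generally also need to know the script prefix, so a dedicated subdomain, as used later in this thread, is usually simpler:

```nginx
location /openwisp2/ {
    # the trailing slash on the proxy_pass URI replaces the
    # matched /openwisp2/ prefix before forwarding upstream
    proxy_pass http://192.168.1.10:8080/;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```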


Thank you very much,
Oscar
On Sunday, April 28, 2019 at 1:44:58 UTC+2, Ajay Tripathi wrote:

Ajay Tripathi

unread,
May 8, 2019, 6:46:28 PM5/8/19
to OpenWISP
Hi,

I am still working on the SSL part; it is yet to be merged into the master branch. If all goes well, I should be able to upload a testable version by tomorrow. :-)
(Lookout for the next commit in the branch: sslmode.)

Also, it looks like you are trying to deploy this code with your devices.
Please remember that this project has only just started and we are still changing a lot of things. That means you may add your devices and configure everything, and then the next update may change something and you could lose your configuration!


Best,
Ajay Tripathi

Oscar Esteve

unread,
May 9, 2019, 7:10:33 AM5/9/19
to open...@googlegroups.com
Hello, Ajay!

Yeah, I know the project is in alpha state.

I've been working with OpenWrt APs for some years. This is my first time trying to use OpenWISP. I've had some troubles with Ansible and, to be honest, I think Docker is our present/near future.
But I'm learning Docker, it's very new for me.

Consider me one of your alpha testers, if you like the idea, please :)
Now I'm trying just to connect 2 OpenWrt APs I have at home, and I know the TLS certificate part is not solved yet; I can wait a day or a week, no problem with that. I just want to learn OpenWISP.

Now, I'll tell you the workaround I've done.
I have OpenWISP on 192.168.1.10
My nginx reverse proxy on 192.168.1.2
My main OpenWrt router on 192.168.1.1

I have configured a CNAME on my domain, it's like openwisp2.myself.com
I have added this subdomain to my Let's Encrypt certificate, too.
And I have added a server to nginx config file, these are the lines:
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    ssl_certificate /etc/letsencrypt/live/myself.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/myself.com/privkey.pem; # managed by Certbot

    server_name openwisp2.myself.com;

        location / {

        proxy_redirect  off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering off;
        proxy_set_header Connection "Keep-Alive";
        proxy_pass http://192.168.1.10:8081;
        }
}
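One detail in the server block above worth flagging: `proxy_set_header Connection "Keep-Alive"` will prevent websocket upgrades from passing through. If OpenWISP's websocket endpoints (served by django-channels) are ever proxied through this vhost, an upgrade-aware location is needed; a sketch follows, where the `/ws/` path and the port are assumptions, not confirmed by this thread:

```nginx
# hypothetical websocket location; path and port are assumptions
location /ws/ {
    proxy_pass http://192.168.1.10:8081;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```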

Now I can connect with HTTPS to the OpenWISP Controller, at home, from "the outside", and even from my router, simply by changing one line in /etc/config/openwisp:
From option url 'http://192.168.1.10'
to

But now I get this error:
Thu May  9 12:54:28 2019 daemon.info openwisp: OpenWISP config agent started
Thu May  9 12:54:28 2019 daemon.info openwisp: Registering device...
Thu May  9 12:54:29 2019 daemon.err openwisp: Registration failed! error: unrecognized secret

I'm sure I'm using the same shared_secret in your .env file and in the /etc/config/openwisp file.
Am I missing something?

Maybe it's an OpenWISP newbie question; I've seen one thread saying you must configure the shared_secret in the "organization" settings, maybe?

If this is "out of scope", just tell me; I don't know whether this is related to Docker or to OpenWISP itself.

As always, thank you very much for your efforts and time spent on this project, and thank you for sharing your GitHub with all of us.
Oscar
PS: I hope to be using your Docker containers in a bigger project in July: in the IT department, we are planning to deploy a new Wi-Fi network at my college. My idea is to do it with 30-40 UniFi APs running OpenWrt firmware, managed by OpenWISP. I hope I will be able to defend the idea, and that the other guys in the IT department will be impressed :)

On Thu, May 9, 2019 at 0:46, Ajay Tripathi <ajay...@gmail.com> wrote:
--
You received this message because you are subscribed to the Google Groups "OpenWISP" group.
To unsubscribe from this group and stop receiving emails from it, send an email to openwisp+u...@googlegroups.com.

Oscar Esteve

unread,
May 9, 2019, 7:22:41 AM5/9/19
to open...@googlegroups.com

I got a new shared secret for the default organization, and when I use it on my OpenWrt AP and type
/etc/init.d/openwisp_config start,
I get:
openwisp: you must either set uuid and key, or shared_secret in /etc/config/openwisp

How can I do that? In docker-compose.yml I can't see any reference to this folder; it doesn't seem to be declared as a bind mount, so maybe it's inside some volume?
I thought the devices would auto-register with the openwisp_config start command, and that they would get a uuid and key automatically.

Thank you again,
Oscar



A Stanley

unread,
May 9, 2019, 10:56:14 AM5/9/19
to open...@googlegroups.com
Is this error from your OpenWISP container or from your OpenWrt AP: "openwisp: you must either set uuid and key, or shared_secret in /etc/config/openwisp"?

Israel Surio

unread,
May 9, 2019, 11:04:14 AM5/9/19
to open...@googlegroups.com
The error is associated with your OpenWrt AP.

As described here:



Oscar Esteve

unread,
May 9, 2019, 11:58:00 AM5/9/19
to open...@googlegroups.com
Hello,

After going to "Organization" in the OpenWISP controller and copying the shared_secret into /etc/config/openwisp on my OpenWrt device, I try to start the config utility:
root@OpenWrt1043Main:~# /etc/init.d/openwisp_config start

openwisp: you must either set uuid and key, or shared_secret in /etc/config/openwisp

Yes, I'm in the OpenWrt AP ssh shell: it's where I run the command and where I get the message.

I'm following the process described in https://github.com/openwisp/openwisp-config#automatic-registration, as Israel Surio states.

My /etc/config/openwisp file is:
config controller 'http'
        option url 'https://openwisp2.myself.com'
        #option interval '120'
        option verify_ssl '1' # I've used the '0' option with the same results
        option shared secret 'dxxxJ4cbibDgvQRkQmQJpAwaErTyetc'
        #option shared_secret 'myself_secretkey' #the same shared secret I used in .env file, but it seems it's not used?
        #option consistent_key '1'
        option mac_interface 'eth0'
        #option management_interface 'tun0'
        #option merge_config '1'
        #option test_config '1'
        #option test_script '/usr/sbin/mytest'
        #option hardware_id_script '/usr/sbin/read_hw_id'
        #option hardware_id_key '1'
        option uuid ''
        option key ''
        list unmanaged 'system.@led'
        list unmanaged 'network.loopback'
        list unmanaged 'network.@switch'
        list unmanaged 'network.@switch_vlan'
        # curl options
        #option connect_timeout '15'
        #option max_time '30'
        #option capath '/etc/ssl/certs'
        #option cacert '/etc/ssl/certs/ca-certificates.crt'
        # hooks
        #option pre_reload_hook '/usr/sbin/my_pre_reload_hook'
        #option post_reload_hook '/usr/sbin/my_post_reload_hook'
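Two details in the file above are worth double-checking. The active line reads `option shared secret` (with a space), so UCI never sees a `shared_secret` option, which would match the error message; and the empty `option uuid ''` / `option key ''` lines may also interfere with auto-registration. A minimal sketch of a working file, assuming the same host and secret as above:

```
config controller 'http'
        option url 'https://openwisp2.myself.com'
        option verify_ssl '1'
        # note the underscore: shared_secret, not "shared secret"
        option shared_secret 'dxxxJ4cbibDgvQRkQmQJpAwaErTyetc'
        option mac_interface 'eth0'
        # leave uuid and key unset so auto-registration fills them in
```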

If there is a way to get into the container's /etc/config/openwisp, I'll set the shared_secret there, as my AP says I have to do.

[screenshot attached]
As you can see, auto-registration is enabled.

Maybe... and it's a guess... is the /etc/config/openwisp file mounted inside the container as read-only?

Please, if you clearly see what I'm doing wrong... maybe it's a newbie error.

Thank you very much for your help,
Oscar

On Thu, May 9, 2019 at 16:56, A Stanley <2st...@2stacks.net> wrote:

A Stanley

unread,
May 9, 2019, 12:13:43 PM5/9/19
to open...@googlegroups.com
I'm pretty sure the container doesn't have '/etc/config/openwisp'. It does look like an issue with the auto-registration if you have configured your AP's /etc/config/openwisp correctly. You could try manually setting the UUID and key in your AP config using what is shown in your OpenWISP controller. For what it's worth, you can launch a shell in the controller container with:

docker exec -it <container_name> sh

If nothing else, you can watch the logs there:

tail -f log/error.log


Oscar Esteve

unread,
May 9, 2019, 12:19:54 PM5/9/19
to open...@googlegroups.com
Thank you very much, A Stanley, I'll do all you say and give you feedback.

A Stanley

unread,
May 9, 2019, 12:28:07 PM5/9/19
to open...@googlegroups.com
Also don't forget https://github.com/openwisp/openwisp-config#debugging for your AP.  Watch both sides and see if anything obvious pops out.

Oscar Esteve

unread,
May 10, 2019, 7:25:39 AM5/10/19
to open...@googlegroups.com
Trouble adding devices.

Hello,

I'm trying to add a device manually. I'm following the tutorial "Introduction to OpenWisp2" https://www.youtube.com/watch?time_continue=300&v=MY097Y2cPQ0
At minute 5, it adds a "configuration"; in my OpenWISP it's a "device", but instead of the screen it's supposed to show, I get:
[screenshot attached]
Is this a bug in the OpenWISP controller Docker container? Maybe it's because I ran "docker-compose up -d" with es-es set as the language in the .env file?
I set:
DJANGO_LANGUAGE_CODE=es-es
DJANGO_TIME_ZONE=Europe/Madrid

Thank you,
Oscar

On Thu, May 9, 2019 at 18:28, A Stanley <2st...@2stacks.net> wrote:

A Stanley

unread,
May 10, 2019, 9:22:15 AM5/10/19
to open...@googlegroups.com

Oscar Esteve

unread,
May 10, 2019, 5:42:21 PM5/10/19
to open...@googlegroups.com
Hello,

Writing just to tell you I've deleted all the stuff (containers, and so on) and recreated everything, even re-cloning the GitHub repo.

I did docker-compose pull instead of building... same issue.

Am I the only one facing this issue? I'm using Debian Stretch and an Intel Atom Z8350; does it matter?

Thank you,
Oscar

On Fri, May 10, 2019 at 13:25, Oscar Esteve <osca...@gmail.com> wrote:

A Stanley

unread,
May 10, 2019, 6:06:05 PM5/10/19
to open...@googlegroups.com
Which issue are you referring to? If you are talking about the bug you mentioned before, others have reported the same. As for other issues with the Docker version of OpenWISP, I think you, Ajay, and I are the only alpha testers at the moment. You're a little ahead of me with integration testing. I flashed an AP today that I'll start testing with next week, so hopefully I can help more then.

Oscar STV

unread,
May 11, 2019, 5:20:15 PM5/11/19
to OpenWISP
Yes, I'm having the bug #111.
Thank you, I'll check inside the controller container. If I find the admin.css file, I'll change the code as Ajay states in the #111 thread.

Thank you very much 😃

A Stanley

unread,
May 11, 2019, 5:40:19 PM5/11/19
to open...@googlegroups.com
If you're using docker-compose look here on your local system.

/opt/openwisp/static/controller/django-netjsonconfig/css/admin.css


A Stanley

unread,
May 14, 2019, 1:36:54 PM5/14/19
to open...@googlegroups.com, Ajay Tripathi
Finally had a chance to connect an AP to the Docker version. There are a couple of things to note that I'm sure don't match the existing documentation.

- We haven't implemented SSL yet.
- The 'Dashboard' container is a work in progress and you shouldn't point any APs to this.
- The controller container is listening on port 8081 so make sure to add that to the 'option url' config in openwisp_config.

I was able to get an AP to auto register using the following configuration.

root@OpenWrt:/etc/config# cat openwisp

config controller 'http'
        option url 'http://openwisp2.2stacks.lab:8081'
        option verify_ssl '0'
        option shared_secret '<shared_secret from controller>'

@Ajay Tripathi I couldn't reproduce this on my Kubernetes deployment.  The Ingress wasn't passing the HTTP headers correctly.  I'll try to get more details soon.

A Stanley

unread,
May 15, 2019, 10:54:24 AM5/15/19
to open...@googlegroups.com, Ajay Tripathi
@Ajay Tripathi forgive my persistence, I know you should be out partying, but I believe I have it figured out 😉 Something in the URL config changed in the OpenWISP-Controller code between the time you last pushed your containers to Docker Hub and now. If I deploy to Kubernetes using recently built containers, the auto-registration process works.

Ajay Tripathi

unread,
May 17, 2019, 3:58:03 PM5/17/19
to OpenWISP
Hello,

@2stacks, thanks for the help here.
My exams are over now and I am getting back to work. :-)

Weekly Update:
In the past days, I've managed to complete the SSL work[1] and added the correct headers in nginx.
I have also made minor code improvements and optimised the Docker image build time.
Only some finishing touches remain; when I am done with those, I will merge this code and move it to the official openwisp repository.


Ajay

---
Ref:

Oscar STV

unread,
May 20, 2019, 4:56:37 AM5/20/19
to OpenWISP
Hi, @2stacks, thank you very much for helping me.

Now, with the new nginx container, I can auto-register my devices :) Great!

BUT
I'm having a weird issue.

I modified the admin.css file:
vi /opt/openwisp/static/controller/django-netjsonconfig/css/admin.css
From this:
.change-form #device_form div.inline-group.tab-content > fieldset.module > h2,
.change-form #device_form div.inline-group.tab-content > .tabular > fieldset.module > h2,
.change-form #device_form > div > fieldset.module.aligned,
.change-form #device_form > div > {      
  display: none;                                       
}
to this:
.change-form #device_form div.inline-group.tab-content > fieldset.module > h2,
.change-form #device_form div.inline-group.tab-content > .tabular > fieldset.module > h2,
.change-form #device_form > div > fieldset.module.aligned,
.change-form #device_form > div > {      
  display: block;
}
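The same edit can be applied non-interactively, which is handy when the file has to be patched again after a rebuild; a sketch using sed on a local stand-in file (the real path is the one opened with vi above):

```shell
# Illustrative only: apply the display toggle with sed on a copy
printf '  display: none;\n' > admin.css   # stand-in for the real admin.css
sed -i 's/display: none;/display: block;/' admin.css
cat admin.css                             # now shows: display: block;
rm -f admin.css
```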

I go into the container:
docker exec -it dockerize-openwisp_nginx_1 sh

Flush Nginx cache:
rm -R /var/cache/nginx/

and reload it:
nginx -s reload

Then I delete all the caches from chrome/firefox... and reload the page:
https://openwisp2.myhomeserver.com/admin/config/device/add/
or
http://192.168.1.10:8081/admin/config/device/add

I still see it as if I had "display: none"!
If I download the page as HTML, and I open admin.css, it's still display:none.
If I modify the admin.css from "none" to "block", and I load the downloaded HTML page, I can see the add device block correctly.

I'm not behind a proxy, so I cannot understand what's happening.

Thank you again,
Oscar



Oscar Esteve

unread,
May 20, 2019, 5:53:02 AM5/20/19
to OpenWISP
Solved.

I needed to change this file instead:
/opt/openwisp/static/django-netjsonconfig/css/admin.css

Thank you!

On Mon, May 20, 2019 at 10:56, Oscar STV <osca...@gmail.com> wrote:

A Stanley

unread,
May 20, 2019, 10:44:03 AM5/20/19
to open...@googlegroups.com
Sorry, I gave you the wrong file 😂 Thank you for helping us test. If you find additional issues (I'm sure you will), we are now tracking everything in the official repository: https://github.com/openwisp/docker-openwisp.
