GoCD in Kubernetes questions


Bánhidy Krisztián

Jul 19, 2019, 7:03:44 AM
to go-cd
Hello,

I am evaluating GoCD to replace Jenkins in an environment, but I have found some gaps that I would like some guidance on.
The goal is to run GoCD in a Kubernetes environment. I know there are Helm charts, but I have some further questions.

- Elastic agents have problems connecting to the GoCD server. From the forums and issues I have read, I found the following:
  - the main LoadBalancer can't be used because of a reverse proxy issue
  - the main service endpoint within Kubernetes doesn't work out of the box, because GoCD generates a self-signed certificate for its pod hostname:

bash-4.4# openssl s_client -connect gocd-server.gocd:8154
CONNECTED(00000003)
depth=0 CN = gocd-7766dcc46-jj5h9, OU = Cruise server webserver certificate
verify error:num=18:self signed certificate
verify return:1
depth=0 CN = gocd-7766dcc46-jj5h9, OU = Cruise server webserver certificate
verify return:1
---

What is the best practice or recommended way to handle the SSL certificate on the GoCD server or agents? Should I generate a self-signed certificate for gocd-server.gocd.svc.cluster.local and inject it into the container?
According to the docs, replacing the certificate requires running commands, presumably in an init container; I found no reference to this in the Helm chart.
Should the agents also get the certificate injected so that they can verify the chain?
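For example, I imagine generating a certificate whose subject alternative names cover the in-cluster service name, roughly like this (a sketch; the file names and one-year validity are my own choices):

```shell
# Self-signed certificate for the in-cluster service DNS name (sketch)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout gocd-server.key -out gocd-server.crt \
  -subj "/CN=gocd-server.gocd.svc.cluster.local" \
  -addext "subjectAltName=DNS:gocd-server.gocd.svc.cluster.local,DNS:gocd-server.gocd"

# Confirm the SANs on the generated certificate
openssl x509 -in gocd-server.crt -noout -ext subjectAltName
```

(The -addext flag needs OpenSSL 1.1.1 or newer.)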

- Pipelines can be defined declaratively from a Git repository, but I could not find any documentation for defining the setup of the GoCD server itself. I want a base configuration with SAML login configured (SAML plugin), plus server settings, already in place when I move GoCD to a new server environment. Even in the Salt formula I found no options to define settings that should be applied during creation.
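To be clear, the pipeline side seems well covered by the YAML config plugin, with files in the repository roughly like this (the pipeline name and repository URL here are just illustrative):

```yaml
# gocd.yaml sketch, in the gocd-yaml-config-plugin format
pipelines:
  build-app:
    group: defaultGroup
    materials:
      repo:
        git: https://example.com/my/repo.git
        branch: master
    stages:
      - build:
          jobs:
            build:
              tasks:
                - exec:
                    command: make
```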
How is this normally handled?

Thank you
Krisztian

Sasa Mitrovic

Jul 30, 2019, 8:13:11 AM
to go-cd
Hi Krisztian,

I have successfully run GoCD in K8s, but not using their Helm chart, as it is not friendly when GoCD needs to be upgraded to a new version. From my experience it is best to write your own YAML files and deploy them in K8s.
Use environment variables to add the two plugins for Docker and elastic agents. Manually configure the elastic profile and add persistent storage. It also proved to be good practice to edit the agent pod templates to include SSH keys.
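As a sketch of the env-variable plugin install (the GOCD_PLUGIN_INSTALL_* convention comes from the GoCD server Docker image; the release URLs below are placeholders to check against the plugin release pages):

```yaml
env:
  # Each variable installs a plugin at startup; the suffix is the
  # plugin name and the value is its download URL (placeholders here)
  - name: GOCD_PLUGIN_INSTALL_kubernetes-elastic-agents
    value: https://github.com/gocd/kubernetes-elastic-agents/releases/download/...
  - name: GOCD_PLUGIN_INSTALL_docker-registry-artifact-plugin
    value: https://github.com/gocd/docker-registry-artifact-plugin/releases/download/...
```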

This way everything works without errors, and all upgrades went fine. I have been using it in K8s for more than a year without issues.
My advice: install GoCD with the Helm chart, look at the configuration it creates for elastic profiles, and then reuse what you can from the generated YAMLs.

Regards  

Aravind SV

Jul 30, 2019, 5:37:12 PM
to go...@googlegroups.com
Hello Krisztian,

I would actually recommend using the helm chart, rather than rolling your own. At the end of the day, the helm chart is a set of k8s deployment yamls, which have done a lot of the work for you.

To get to your questions around the LoadBalancer:

1. Are you trying to connect agents from outside the Kubernetes cluster? From within, I'd expect them to not have any trouble connecting to port 8154.

2. Assuming that's true, what you mentioned about the self-signed certificate is relevant for GoCD versions less than 19.5.0. Since 19.5.0, there is a beta feature which allows you to terminate SSL outside. It is mentioned in the release notes: https://www.gocd.org/releases/#19-5-0 - you shouldn't even need port 8154 in that case.
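As a rough illustration of that setup (an Ingress terminating TLS in front of the plain HTTP port; the hostname, secret name, and namespace are placeholders, and the API version matches current clusters):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: gocd-server
  namespace: gocd
spec:
  tls:
    - hosts:
        - gocd.example.com
      secretName: gocd-tls      # certificate for the external hostname
  rules:
    - host: gocd.example.com
      http:
        paths:
          - backend:
              serviceName: gocd-server
              servicePort: 8153  # plain HTTP; TLS is terminated at the ingress
```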

Since you are having some trouble, we're assuming others would too. We're going to look into a setup such as the one you mention, so that we can improve the documentation, if nothing else. Please expect a reply here, with some more information.

If you can provide more information about the kind of setup you have, please let us know.


About your question around the initial setup: The server.persistence.subpath.godata property in the helm chart is probably going to be useful. It's usually pointed to a persistent volume, which has the setup you want.
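For instance, a values.yaml fragment along these lines (assuming the chart layout at the time; the subpath value shown is the chart default as I recall it):

```yaml
server:
  persistence:
    enabled: true
    subpath:
      godata: godata   # holds cruise-config.xml, plugins, and similar state
```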

Cheers,
Aravind

Aravind SV

Jul 30, 2019, 5:40:05 PM
to go...@googlegroups.com
Hello Sasa,

On Tue, Jul 30, 2019 at 05:13:10 -0700, Sasa Mitrovic wrote:
> I have successfully run GoCD in K8s, but not using their Helm chart, as it
> is not friendly when GoCD needs to be upgraded to a new version. From my
> experience it is best to write your own YAML files and deploy them in K8s.
> Use environment variables to add the two plugins for Docker and elastic
> agents. Manually configure the elastic profile and add persistent storage.
> It also proved to be good practice to edit the agent pod templates to
> include SSH keys.
>
> This way everything works without errors, and all upgrades went fine. I
> have been using it in K8s for more than a year without issues.
> My advice: install GoCD with the Helm chart, look at the configuration it
> creates for elastic profiles, and then reuse what you can from the
> generated YAMLs.

A normal "helm upgrade" should work. If there's anything that doesn't work with that, please feel free to either open an issue or mention it here. We'd love to improve the helm charts.

Thank you,
Aravind

Ketan Padegaonkar

Jul 31, 2019, 12:21:52 AM
to go...@googlegroups.com
On Tue, Jul 30, 2019 at 5:37 PM Aravind SV <arv...@thoughtworks.com> wrote:
> 2. Assuming that's true, what you mentioned about the self-signed certificate is relevant for GoCD versions less than 19.5.0. Since 19.5.0, there is a beta feature which allows you to terminate SSL outside. It is mentioned in the release notes: https://www.gocd.org/releases/#19-5-0 - you shouldn't even need port 8154 in that case.

Update: Beginning with version 19.6 of GoCD, this feature is turned on by default, so you really should not need port 8154 at all. Over the next few releases, we are considering (no promises) making port 8154 optional.

