I have set up Vault with a Consul backend, with ACLs enabled. Here is brief info on the environment:
Consul: 0.7.1
Vault: 0.6.2
The procedure was quite straightforward: Consul was deployed with ACLs enabled, and I generated a token for Vault:
[root@mmaster1 vault]# curl -X PUT http://consul1.sdncluster1.consul-host.marathon.mesos:8500/v1/acl/create?token=970fdd6b-b64b-416a-9289-977f9434a9d0 -d '{ "Name": "vault","Rules":"key \"vault/\"{policy=\"write\"}"}' | jq
{
"ID": "84a2b6e7-556f-95e1-e7ba-52c5073a128a"
}
[root@mmaster1 vault]#
Configured config.hcl on Vault with these stanzas:
listener "tcp" {
  address = "0.0.0.0:8200"
  tls_cert_file = "/vault/config/server-cert.pem"
  tls_key_file = "/vault/config/server-key.pem"
}
backend "consul" {
  address = "consul1.sdncluster1.consul-host.marathon.mesos:8500"
  advertise_addr = "http://vault2.sdncluster1.vault.marathon.mesos:8200"
  token = "84a2b6e7-556f-95e1-e7ba-52c5073a128a"
}
disable_mlock = true
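Before running 'vault init', it can help to ask the server whether it already considers itself initialized, via the unauthenticated sys/init endpoint. A minimal sketch - the live command is shown in the comment (with -k because the cert here is self-signed), and a sample response is parsed locally:

```shell
# In the live environment this would be:
#   curl -sk https://vault2.sdncluster1.vault.marathon.mesos:8200/v1/sys/init
# The response is a tiny JSON document; parsing a sample of it here:
RESPONSE='{"initialized":true}'
echo "$RESPONSE" | grep -o '"initialized":[a-z]*'
# → "initialized":true
```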
Now, Vault comes up with no errors. However, when I try to run 'vault init', I get:
* Vault is already initialized
OK. I go ahead and query Consul for Vault entries, and indeed they are there:
/ # consul kv get -keys -separator="" -token=$TOKEN vault
vault/core/cluster/local/info
vault/core/keyring
vault/core/master
vault/core/mounts
vault/core/seal-config
/ #
So, it looks like the Vault container is going ahead and initializing itself on startup. However, I can't find my Vault keys anywhere... They're not in the container's stdout either:
[root@mslave2 config]# docker logs a67
Generating RSA private key, 2048 bit long modulus
.......................................................+++
.....................+++
e is 65537 (0x10001)
Signature ok
subject=/CN=*
Getting Private key
==> Vault server configuration:
Backend: consul (HA available)
Cluster Address: https://vault1.sdncluster1.vault.marathon.mesos:8201
Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "", tls: "enabled")
Log Level: info
Mlock: supported: true, enabled: false
Redirect Address: http://vault1.sdncluster1.vault.marathon.mesos:8200
Version: Vault v0.6.2
==> Vault server started! Log data will stream in below:
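If some process inside the container ran 'vault init' itself, the unseal keys and root token would appear in that process's output somewhere. A hedged way to scan captured logs for them - the sample line below is a made-up stand-in; the live command would be `docker logs a67 2>&1 | grep -Ei 'unseal key|root token'`:

```shell
# Count log lines that look like 'vault init' output.
# SAMPLE is a hypothetical stand-in for real container logs.
SAMPLE='Unseal Key 1: abcdef0123456789...'
echo "$SAMPLE" | grep -Eci 'unseal key|root token'
# → 1
```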
Any suggestions?
Thanks,
Alex
Hi Alex,
There are two likely possibilities:
1) The container you're using is initializing Vault for you but storing the output somewhere.
2) You (or a process) previously initialized Vault and did not wipe the storage in Consul afterwards, so Vault now thinks it's already initialized when pointed at that backend.
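For case 2, the remedy is to wipe the vault/ prefix in Consul (destroying any existing Vault data) and re-initialize. A sketch using the token and address from this thread, with a dry-run guard so nothing is deleted until you deliberately flip it off:

```shell
# Compose the recursive delete of Vault's storage prefix in Consul.
CONSUL_HTTP_ADDR="consul1.sdncluster1.consul-host.marathon.mesos:8500"
CONSUL_TOKEN="970fdd6b-b64b-416a-9289-977f9434a9d0"
CMD="consul kv delete -recurse -token=$CONSUL_TOKEN -http-addr=$CONSUL_HTTP_ADDR vault/"
if [ "${DRY_RUN:-1}" = "1" ]; then
  # Preview only; nothing is deleted.
  echo "would run: $CMD"
else
  $CMD
fi
```

Only run this for real against a cluster whose Vault data you genuinely want gone - it is unrecoverable.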
Best,
Jeff
--
This mailing list is governed under the HashiCorp Community Guidelines - https://www.hashicorp.com/community-guidelines.html. Behavior in violation of those guidelines may result in your removal from this mailing list.
GitHub Issues: https://github.com/hashicorp/vault/issues
IRC: #vault-tool on Freenode
---
You received this message because you are subscribed to the Google Groups "Vault" group.
To unsubscribe from this group and stop receiving emails from it, send an email to vault-tool+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/vault-tool/03294f46-a135-42ff-b90d-81de02a098b8%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Consul environment up and running. No Vault K/Vs are present:
############################################################################
/ # consul kv get -keys -recurse -token=970fdd6b-b64b-416a-9289-977f9434a9d0 -http-addr=consul1.sdncluster1.consul-host.marathon.mesos:8500 vault/
/ #
- As you can see above, no Vault K/Vs present...
############################################################################
Let's go ahead and deploy the Vault environment as a container. It gets deployed. Below is our host, mslave2, running Vault as a container:
[root@mslave2 config]# docker logs -f b80
Generating RSA private key, 2048 bit long modulus
..................................+++
.....................................................+++
e is 65537 (0x10001)
Signature ok
subject=/CN=*
Getting Private key
==> Vault server configuration:
Backend: consul (HA available)
Cluster Address: https://vault1.sdncluster1.vault.marathon.mesos:8201
Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "", tls: "enabled")
Log Level: info
Mlock: supported: true, enabled: false
Redirect Address: http://vault1.sdncluster1.vault.marathon.mesos:8200
Version: Vault v0.6.2
==> Vault server started! Log data will stream in below:
- Good! Our Vault container node is up and running...
############################################################################
Display the K/V values for Vault - just to be on the safe side, making sure no K/Vs are present prior to 'vault init':
############################################################################
/ # consul kv get -keys -recurse -token=970fdd6b-b64b-416a-9289-977f9434a9d0 -http-addr=consul1.sdncluster1.consul-host.marathon.mesos:8500 vault/
/ #
############################################################################
Still nothing. Great. Go ahead and initialize Vault:
############################################################################
[root@mmaster1 vault]# docker run -e "VAULT_SKIP_VERIFY=true" -e "VAULT_ADDR=https://vault1.sdncluster1.vault.marathon.mesos:8200" --entrypoint=vault -t akamalov/vault-consul:0.6.2 init
Error initializing Vault: Error making API request.
URL: PUT https://vault1.sdncluster1.vault.marathon.mesos:8200/v1/sys/init
Code: 400. Errors:
* Vault is already initialized
[root@mmaster1 vault]#
############################################################################
Huh? OK. Let's go back to our host, mslave2, and display the log:
[root@mslave2 config]# docker logs -f b80
Generating RSA private key, 2048 bit long modulus
..................................+++
.....................................................+++
e is 65537 (0x10001)
Signature ok
subject=/CN=*
Getting Private key
==> Vault server configuration:
Backend: consul (HA available)
Cluster Address: https://vault1.sdncluster1.vault.marathon.mesos:8201
Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "", tls: "enabled")
Log Level: info
Mlock: supported: true, enabled: false
Redirect Address: http://vault1.sdncluster1.vault.marathon.mesos:8200
Version: Vault v0.6.2
==> Vault server started! Log data will stream in below:
2016/11/25 18:19:04.731522 [INFO ] core: security barrier not initialized
2016/11/25 18:19:04.793629 [INFO ] core: security barrier initialized: shares=5 threshold=3
2016/11/25 18:19:04.892369 [INFO ] core: post-unseal setup starting
2016/11/25 18:19:04.919307 [INFO ] core: successfully mounted backend: type=generic path=secret/
2016/11/25 18:19:04.919413 [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
2016/11/25 18:19:04.919723 [INFO ] core: successfully mounted backend: type=system path=sys/
2016/11/25 18:19:04.919950 [INFO ] rollback: starting rollback manager
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x60edc2]
goroutine 97 [running]:
panic(0x12f6bc0, 0xc420010070)
/goroot/src/runtime/panic.go:500 +0x1a1
github.com/hashicorp/vault/helper/salt.(*Salt).SaltID(0x0, 0x0, 0x0, 0xa, 0x1)
/gopath/src/github.com/hashicorp/vault/helper/salt/salt.go:119 +0x22
github.com/hashicorp/vault/vault.(*Router).routeCommon(0xc4204fee70, 0xc4204f0280, 0x40ef00, 0x0, 0x1490000, 0x0, 0x0)
/gopath/src/github.com/hashicorp/vault/vault/router.go:244 +0x813
github.com/hashicorp/vault/vault.(*Router).Route(0xc4204fee70, 0xc4204f0280, 0xc4204cbf00, 0xc4204957b0, 0xc420200700)
/gopath/src/github.com/hashicorp/vault/vault/router.go:187 +0x3a
github.com/hashicorp/vault/vault.(*RollbackManager).attemptRollback(0xc420200700, 0x14ee0ce, 0xa, 0xc4204cbf00, 0x0, 0x0)
/gopath/src/github.com/hashicorp/vault/vault/rollback.go:161 +0x2b3
created by github.com/hashicorp/vault/vault.(*RollbackManager).startRollback
/gopath/src/github.com/hashicorp/vault/vault/rollback.go:136 +0x14e
[root@mslave2 config]#
############################################################################
It looks like the container on our host mslave2 was killed off right after we issued 'vault init' from a different server (while pointing at the Vault container that resides on host mslave2).
Let's check whether Consul has registered any Vault K/Vs:
/ # consul kv get -keys -recurse -token=970fdd6b-b64b-416a-9289-977f9434a9d0 -http-addr=consul1.sdncluster1.consul-host.marathon.mesos:8500 vault/core/
vault/core/cluster/
vault/core/keyring
vault/core/master
vault/core/mounts
vault/core/seal-config
Ta-da! So our 'vault init' went through and wrote the K/Vs out to our Consul node, but then came back and said Vault is already initialized...
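One more observation: the server didn't just log an error here - it segfaulted in the rollback manager, which is why the log stream stops dead after the goroutine dump. A quick check that a container actually died from a Go panic (rather than being stopped externally) is to count panic headers in its logs; the live command would be `docker logs b80 2>&1 | grep -c '^panic:'`, sketched here against a sample log:

```shell
# Count Go panic headers in captured log output.
# LOG stands in for real 'docker logs' output.
LOG='==> Vault server started! Log data will stream in below:
panic: runtime error: invalid memory address or nil pointer dereference'
echo "$LOG" | grep -c '^panic:'
# → 1
```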
Hi Alex,
I don't really have any ideas, but since you have both Consul and Vault in containers, a good first step would be to see whether you can replicate the issue running Vault outside of a container with Consul inside one. You also didn't mention which Vault container you're using; if it's not our official container, you should test with that one too.
Best,
Jeff