Hi all,
I have been trying for almost a week now to deploy AWX (AWX Operator versions 0.26 through 0.29, with the latest stable AWX image) on a Rancher v2.6 three-node Kubernetes cluster with NFS storage.
I kept hitting the same error: at the point where the postgres-13 pod is created, it fails with
message: 'Error: stat /data/postgres-13: no such file or directory'
Since I am relatively new to Kubernetes and AWX, I fell back to a simplified setup with a single-node cluster and hostPath storage. Here is my configuration:
AWX Operator: v0.29.0
AWX app: latest
awx.yml:
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  # AWX administrator user/pass
  admin_user: admin
  admin_password_secret: ***********
  # Ingress params with TLS cert/key
  service_type: clusterip
  ingress_type: ingress
  ingress_tls_secret: awx-secret-tls
  # Specify custom AWX EE registry
  ee_images:
    - name: awx_ee
      image: x.y.z.w:5000/awx_ee
  ee_pull_credentials_secret: ************
  # Managed postgres parameters
  # postgres_configuration_secret: awx-postgres-configuration
  # Postgres storage params
  postgres_storage_class: awx-postgres-volume
  # postgres_data_path: /data/postgres-13
  postgres_storage_requirements:
    requests:
      storage: 8Gi
  # Persistence for AWX projects
  projects_persistence: true
  projects_existing_claim: awx-projects-claim
pv.yml:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: awx-postgres-13-volume
spec:
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 8Gi
  storageClassName: awx-postgres-volume
  hostPath:
    path: "/data/postgres-13"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: awx-projects-volume
spec:
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 2Gi
  storageClassName: awx-projects-volume
  hostPath:
    path: "/data/projects"
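One note on hostPath for anyone reproducing this: with the default (empty) hostPath `type`, the kubelet performs no checks and will not create a missing directory, so the paths above must already exist on the node (setting `type: DirectoryOrCreate` in the PV is the alternative). A minimal sketch, run on the node itself:

```shell
# Pre-create the hostPath directories referenced in pv.yml
# (run as root on the node, or prefix with sudo)
mkdir -p /data/postgres-13 /data/projects
chmod 755 /data/postgres-13 /data/projects
```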
pvc.yml:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: awx-projects-claim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 2Gi
  storageClassName: awx-projects-volume
kustomization.yml:
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: awx

# Disable name suffix for secretGenerator, so that we can reference the generated secret deterministically
generatorOptions:
  disableNameSuffixHash: true

# Generate some secrets for web server cert/key, postgres, AWX admin account, repo pull credentials, etc.
secretGenerator:
  # Key and cert for ingress/web server
  - name: awx-secret-tls
    files:
      - tls.crt
      - tls.key
    type: kubernetes.io/tls
  # AWX administrator password
  - name: awx-admin-password
    type: Opaque
    literals:
      - password=**********
  # This one is for plain HTTP pull from the internal private repo
  - name: awx-ee-pull-credentials
    type: Opaque
    literals:
      - url=x.y.z.w:5000/awx_ee
      - ssl_verify=false

# We specify our base resource (awx-operator) with our AWX ingress setup on top
resources:
  # Find the latest tag here: https://github.com/ansible/awx-operator/releases
  - github.com/ansible/awx-operator/config/default?ref=0.29.0
  - pv.yml
  - pvc.yml
  - awx.yml

# Set the image tag to match the git ref above
images:
  - name: quay.io/ansible/awx-operator
    newTag: 0.29.0
After the deployment is executed with:
rancher kubectl apply -k base-awx-sl
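To watch where the install gets stuck, the operator's installer playbook output can be followed directly (the deployment and container names below are the awx-operator 0.29.0 defaults; adjust if yours differ):

```shell
# Tail the operator's reconcile/installer log
# (names assume awx-operator 0.29.0 defaults)
kubectl logs -f deployment/awx-operator-controller-manager -c awx-manager -n awx
```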
I keep seeing the following in the operator logs:
com/v1beta1, Kind=AWX","event_type":"playbook_on_task_start","job":"5767333457580950644","EventData.Name":"installer : Wait for Database to initialize if managed DB"}
--------------------------- Ansible Task StdOut -------------------------------
TASK [installer : Wait for Database to initialize if managed DB] ***************
task path: /opt/ansible/roles/installer/tasks/database_configuration.yml:206
-------------------------------------------------------------------------------
{"level":"info","ts":1663766899.8263388,"logger":"proxy","msg":"cache miss: /v1, Kind=PodList err-Index with name field:status.phase does not exist"}
(the same "cache miss" message then repeats every ~5-6 seconds while the task waits)
Here are some screenshots from the Rancher web UI:
Additionally, kubectl shows that both the PVs and the PVCs are created and bound.
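For reference, these are the sort of checks that show the bound volumes and the pod events (the pod name `awx-postgres-13-0` assumes the operator's default StatefulSet naming; adjust to your cluster):

```shell
# PV/PVC status: both report Bound
kubectl get pv
kubectl get pvc -n awx
# The "stat /data/postgres-13" error surfaces in the pod events
kubectl describe pod awx-postgres-13-0 -n awx
```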
If you have any suggestions, or can point out where the mistake in the configuration is, it would be greatly appreciated.