I have a problem where the OLM pods won't stay running. After I increased memory, both pods start, but they keep crashing. I suspect they can't authenticate to the API server, because when I use kubectl's global --server flag it reports an unknown problem with the server. So I dropped that flag and kept the --kubeconfig flag.
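For reference, this is roughly the pattern I mean (the server URL below is a placeholder, not my real endpoint):

# this variant errors out against the API server:
kubectl get pods --server='https://<api-server-host>:6443'
# this variant works, so I kept it:
kubectl get pods --kubeconfig='/home/robmin/.kube/config'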
Here's what I did: I cloned the OLM repo to my local workspace, then ran:
kubectl create -f deploy/upstream/quickstart/crds.yaml --kubeconfig='/home/robmin/.kube/config'
kubectl create -f deploy/upstream/quickstart/olm.yaml --kubeconfig='/home/robmin/.kube/config'
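As a sanity check after applying crds.yaml, something like this should show the OLM CRDs registered (suggestion only; I didn't capture its output at the time):

kubectl get crd --kubeconfig='/home/robmin/.kube/config' | grep operators.coreos.com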
With 8 GB of memory on worker3, where the two OLM pods land, the catalog operator stays stuck in Pending and the olm-operator keeps crashing. After I bumped worker3 to 12 GB, the catalog operator now starts, but it also goes into a crash loop.
With 8 GB of memory:
catalog-operator-66d58f7877-45hff 0/1 Pending 0 12m <none> worker3 <none> <none>
olm-operator-5f75dd4c6c-ptrxb 0/1 CrashLoopBackOff 6 12m 192.168.189.77 worker2 <none> <none>
With 12 GB of memory:
olm catalog-operator-66d58f7877-45hff 0/1 CrashLoopBackOff 1 17m 192.168.182.12 worker3 <none> <none>
olm olm-operator-5f75dd4c6c-ptrxb 0/1 CrashLoopBackOff 7 17m 192.168.189.77 worker2 <none> <none>
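If logs or events would help, these are the commands I can run to pull more detail and post the output (pod names taken from the listings above):

# why the pod is Pending / crash-looping:
kubectl describe pod catalog-operator-66d58f7877-45hff -n olm --kubeconfig='/home/robmin/.kube/config'
# logs from the last crashed container:
kubectl logs olm-operator-5f75dd4c6c-ptrxb -n olm --previous --kubeconfig='/home/robmin/.kube/config'
# recent events in the namespace:
kubectl get events -n olm --sort-by=.lastTimestamp --kubeconfig='/home/robmin/.kube/config'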
Has anyone run into this and found a fix?
thanks,
Robin Hood