Private GKE cluster with public endpoint can't connect to public Cloud SQL


Juliusz Gonera

Mar 22, 2021, 6:37:44 AM
to Google Cloud SQL discuss
Hi,

I've tried googling, but I only find solutions to problems with private Cloud SQL instances. I'd be grateful for any help, as I've been banging my head against this for half the day...

I have a GKE cluster created with this command:

gcloud container clusters create my-cluster \
  --disk-size=10GB \
  --machine-type=e2-small \
  --node-locations=us-central1-b,us-central1-c,us-central1-f \
  --num-nodes=1 \
  --preemptible \
  --release-channel=regular \
  --workload-pool=my-project.svc.id.goog \
  --zone=us-central1-f \
  --no-enable-master-authorized-networks \
  --enable-ip-alias \
  --enable-private-nodes \
  --master-ipv4-cidr 172.16.0.32/28

And a Cloud SQL instance created with:

gcloud services enable sqladmin.googleapis.com
gcloud sql instances create my-db \
  --database-version=POSTGRES_12 \
  --region=us-central1 \
  --storage-auto-increase \
  --storage-size=10 \
  --storage-type=SSD \
  --tier=db-f1-micro

In my pod I have the following sidecar container:

      - name: cloud-sql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.20.2
        command:
          - "/cloud_sql_proxy"
          - "-instances=my-project:us-central1:my-db=tcp:5432"
          - "-term_timeout=20s"
        securityContext:
          runAsNonRoot: true


The pod uses a service account that has been created and configured with these commands:

gcloud iam service-accounts create my-service-account
gcloud iam service-accounts add-iam-policy-binding \
  --role=roles/iam.workloadIdentityUser \
  --member="serviceAccount:my-project.svc.id.goog[default/my-service-account]" \
  my-servic...@my-project.iam.gserviceaccount.com
gcloud projects add-iam-policy-binding my-project \
  --member serviceAccount:"my-servic...@my-project.iam.gserviceaccount.com" \
  --role "roles/cloudsql.client"
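
For completeness, Workload Identity also requires the Kubernetes service account to carry an annotation pointing at the Google service account. A sketch, using the names already shown above (KSA my-service-account in the default namespace, per the workloadIdentityUser binding):

```shell
# Map the Kubernetes service account to the Google service account
# created above, so pods using it can impersonate the GSA.
kubectl annotate serviceaccount my-service-account \
  --namespace default \
  iam.gke.io/gcp-service-account=my-service-account@my-project.iam.gserviceaccount.com
```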


Now when I try to connect to Postgres through cloud-sql-proxy in my app, the connection times out with the following error in cloud-sql-proxy's logs:

2021/03/19 21:51:29 couldn't connect to "my-project:us-central1:my-db": dial tcp MY_DB_PUBLIC_IP:3307: connect: connection timed out

Interestingly enough, I can run cloud-sql-proxy on my laptop to connect to the same instance without any problems. I checked my app's container in the pod and it has access to public Internet. What am I missing?

Thanks,
Juliusz

Tawatchai Worachattrakool

Mar 22, 2021, 9:53:23 AM
to Google Cloud SQL discuss
Hi Juliusz,

I think your problem is with Cloud NAT & Cloud Router, because:
1. A private GKE cluster needs Cloud Router & Cloud NAT to reach the public internet.
2. The Cloud SQL proxy connects to the instance over its public IP.

See also: "Using the proxy with private IP" in the Cloud SQL docs.


Thanks,
Tawatchai W.
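
A rough sketch of the private IP route (this assumes a private services access connection already exists on the default VPC; the exact command and flags may differ for your setup):

```shell
# Give the existing instance a private IP on the cluster's VPC.
# Requires private services access to be configured on the network first.
gcloud beta sql instances patch my-db \
  --network=projects/my-project/global/networks/default

# Then make the proxy sidecar prefer the private IP by adding this flag
# to its command arguments:
#   -ip_address_types=PRIVATE
```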

nibrass

Mar 23, 2021, 5:40:15 AM
to Google Cloud SQL discuss

Hello,

The Cloud SQL proxy connects to the instance's public IP by default, and since your cluster has private nodes with no internet access, that connection cannot be made. To mitigate this, either use a [private IP][1] for your SQL instance or configure a [NAT gateway for your cluster][2].


Best Regards,

Nibrass 

[1]: https://cloud.google.com/sql/docs/mysql/private-ip

[2]: https://cloud.google.com/solutions/using-a-nat-gateway-with-kubernetes-engine
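
For option [2], a minimal Cloud NAT sketch on the default network in the cluster's region (resource names here are illustrative):

```shell
# Create a Cloud Router in the cluster's region.
gcloud compute routers create my-router \
  --network=default \
  --region=us-central1

# Attach a Cloud NAT configuration so the private nodes
# get outbound internet access without external IPs.
gcloud compute routers nats create my-nat \
  --router=my-router \
  --region=us-central1 \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges
```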


Juliusz Gonera

Mar 23, 2021, 4:59:54 PM
to Google Cloud SQL discuss
Thank you both for your answers! What confused me is that, to check whether my pod had internet access, I tried fetching www.google.com, which surprisingly worked. Fetching any other website fails, though, so it seems www.google.com is reachable internally within GCP (likely via Private Google Access).
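
For anyone debugging the same thing, a quick way to test pod egress from inside the cluster (image and target hostname are just examples):

```shell
# Run a throwaway pod and try to reach a non-Google site.
# With private nodes and no Cloud NAT, this should time out.
kubectl run net-test --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -sS --max-time 5 https://example.com
```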