Connect to Cloud SQL from within another pod using the Cloud SQL Proxy


Shikha Agrawal

Nov 29, 2019, 8:45:30 AM
to Google Cloud SQL discuss
Hi

I have set up a deployment with the following command:
 ./cloud_sql_proxy -credential_file=/etc/secrets/mysql-gcp-sa.json -instances=kubeflow-test-servers1:us-central1:bitgrit-db=tcp:0.0.0.0:3306

When I connect using the command below:

mysql -u root --password -h 0.0.0.0

I get an entry in the Stackdriver log, but after a minute or so I get this error:
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 22 "Invalid argument"

My Stackdriver log entry is:
{
insertId: "d1wgvneko75d" 
logName: "projects/kubeflow-test-servers1/logs/cloudaudit.googleapis.com%2Factivity" 
protoPayload: {…} 
receiveTimestamp: "2019-11-29T06:28:49.041963434Z" 
resource: {…} 
severity: "NOTICE" 
timestamp: "2019-11-29T06:28:48.170Z" 
}


What am I doing wrong?

Why don't I get a SQL prompt? Also, in the pod I am using the image:
gcr.io/cloudsql-docker/gce-proxy:1.14


David (Cloud Platform Support)

Dec 5, 2019, 10:11:27 AM
to Google Cloud SQL discuss

Hello,


As a starting point, I would review the official documentation about connecting to Cloud SQL from Kubernetes Engine using the Cloud SQL Proxy Docker image, if you haven't already, since there are some requirements your GKE cluster needs to meet, such as:


- It must be running version 1.2 or higher, with the kubectl command-line tool installed and configured to communicate with the cluster.

- It must have an application container in a pod on the GKE cluster.


On the Cloud SQL side:

- The Cloud SQL Admin API must be enabled.

- You must know the location of the key file associated with a service account that has the proper privileges for your Cloud SQL instance.


These are only the most important requirements that I can see causing such an error; for the complete requirements, you can review the documentation shared above.
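
As a quick sanity check on these requirements, something like the following should work (a sketch, assuming the gcloud CLI is authenticated against your project; the cluster name and zone are placeholders):

# Enable the Cloud SQL Admin API for the project.
$ gcloud services enable sqladmin.googleapis.com
# Confirm the cluster version meets the minimum.
$ gcloud container clusters describe [CLUSTER_NAME] --zone [ZONE] --format='value(currentMasterVersion)'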


You can also follow the connection overview part of the documentation, where you can see a properly configured pod configuration file; I see you are using 0.0.0.0:3306 in your connection string, but I can't see your pod configuration file.
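
For illustration, a minimal sidecar pod configuration might look like the sketch below. This mirrors the documented sidecar pattern rather than your actual file; it assumes a secret named cloudsql-instance-credentials holding the key as credentials.json, and "[YOUR_APP_IMAGE]" is a hypothetical placeholder:

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: app-with-cloudsql
spec:
  containers:
  - name: app
    image: "[YOUR_APP_IMAGE]"   # hypothetical image; the app connects to 127.0.0.1:3306
  - name: cloudsql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy:1.14
    command: ["/cloud_sql_proxy",
              "-instances=kubeflow-test-servers1:us-central1:bitgrit-db=tcp:3306",
              "-credential_file=/secrets/cloudsql/credentials.json"]
    volumeMounts:
    - name: cloudsql-instance-credentials
      mountPath: /secrets/cloudsql
      readOnly: true
  volumes:
  - name: cloudsql-instance-credentials
    secret:
      secretName: cloudsql-instance-credentials
EOF

Note that with tcp:3306 the proxy listens on 127.0.0.1, which other containers in the same pod can reach; binding to 0.0.0.0 is only needed when other pods must reach the proxy directly.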


If, after reviewing the documentation shared above and making sure that the requirements are fulfilled, you still have not been able to connect to your Cloud SQL instance, you can create an Issue Tracker report and we may be able to investigate further.

Shikha Agrawal

Dec 6, 2019, 8:47:42 AM
to Google Cloud SQL discuss
Hi David

Thank you for your reply. It was very helpful, and following your guidelines I was able to connect to a Cloud SQL instance set up in the same GCP project. The issue was that my cluster was not VPC-native. Is it possible to connect to a Cloud SQL instance from GKE using the Cloud SQL Proxy in a different project? Can you point me to documentation that would help me with this?

Warm Regards
Shikha

Olu

Dec 13, 2019, 12:07:16 PM
to Google Cloud SQL discuss
To answer your question directly about connecting to a Cloud SQL instance from GKE using the Cloud SQL Proxy in a different project: yes, it is possible.
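
For the Cloud SQL Proxy, the key cross-project requirement is IAM: the service account the proxy runs as must hold the Cloud SQL Client role in the project that owns the Cloud SQL instance. A minimal sketch, with placeholder project IDs and service account name:

$ gcloud projects add-iam-policy-binding [SQL_PROJECT_ID] \
    --member='serviceAccount:[SA_NAME]@[GKE_PROJECT_ID].iam.gserviceaccount.com' \
    --role='roles/cloudsql.client'

The -instances flag on the proxy then simply names the instance in the other project, i.e. [SQL_PROJECT_ID]:[REGION]:[INSTANCE_NAME].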

There is documentation [1] that explains how to connect to Cloud SQL from GKE; however, to connect from within a different project, there are a few changes that need to be made:

Firstly, ensure that the network port 5434 (or whichever port you are deploying the Cloud SQL Proxy on) is not in use, then refresh the GKE pod. To do this, run "$ netstat -tulpn" in your console. If the port is listed at the end of one of the entries in the "Local Address" column, kill that connection by running "$ kill [PID]", where [PID] is the corresponding PID for that row (see the example below). After this, reload the GKE pod that is trying to connect to Cloud SQL [2]. This is important because if the port is already occupied when the pod runs the Cloud SQL Proxy and the proxy attempts to bind to that port, the binding will fail and cause a connection error.
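
For example, a hypothetical check-and-kill session might look like this (the PID and process name are invented for illustration):

$ netstat -tulpn | grep 5434
tcp        0      0 0.0.0.0:5434        0.0.0.0:*        LISTEN      4321/cloud_sql_prox
$ kill 4321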

- Refresh the credentials for the service account cloud-sql@[project-ID].iam.gserviceaccount.com. Validate that the service account has the "roles/cloudsql.client" role [3]. Delete the service account's .json key and generate a new one [4]. Delete the GKE secret holding the service account credentials and redeploy a new one. For example, if your service account credentials file is called "key.json", the secret is named "cloudsql-instance-credentials" with target key "credentials.json", and your console is in the directory containing "key.json", run: "$ kubectl create secret generic cloudsql-instance-credentials --from-file=credentials.json=./key.json". Finally, reload the GKE pod that is trying to connect to Cloud SQL [2]; this reload is necessary to refresh the pod's consumption of the secret. The full sequence is sketched below.
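
Put together, the credential refresh might look like the following sketch (service account and file names as in the example above; the old key can be deleted in the Console or with "$ gcloud iam service-accounts keys delete"):

# Generate a fresh key file for the service account.
$ gcloud iam service-accounts keys create key.json --iam-account=cloud-sql@[project-ID].iam.gserviceaccount.com
# Replace the GKE secret with one built from the new key.
$ kubectl delete secret cloudsql-instance-credentials
$ kubectl create secret generic cloudsql-instance-credentials --from-file=credentials.json=./key.json
# Reload the pod so it picks up the new secret (see [2]).
$ kubectl delete pod [POD_NAME]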

[2] To restart your pods, go to Cloud Console > Kubernetes Engine > Workloads > select your workload > Overview tab. Then, under the "Managed pods" section, copy the pod name. In your console, first authenticate kubectl if you haven't already in this session ("$ gcloud container clusters get-credentials [CLUSTER_NAME] --zone [ZONE]"). Then run "$ kubectl delete pod [POD_NAME]" to delete the pod. GKE will automatically spawn a new one, which will be ready to serve in a few seconds.
