Start Cloud Database Proxy...
2019/09/01 05:40:13 Cloud SQL Proxy logging has been disabled by the -quiet flag. All messages (including errors) will be suppressed.
Sun Sep 1 05:40:13 UTC 2019 - waiting for 127.0.0.1:5432... 1/10
Sun Sep 1 05:40:18 UTC 2019 - waiting for 127.0.0.1:5432... 2/10
Sun Sep 1 05:40:23 UTC 2019 - waiting for 127.0.0.1:5432... 3/10
Sun Sep 1 05:40:28 UTC 2019 - waiting for 127.0.0.1:5432... 4/10
Sun Sep 1 05:40:33 UTC 2019 - waiting for 127.0.0.1:5432... 5/10
Sun Sep 1 05:40:38 UTC 2019 - waiting for 127.0.0.1:5432... 6/10
Sun Sep 1 05:40:43 UTC 2019 - waiting for 127.0.0.1:5432... 7/10
Sun Sep 1 05:40:48 UTC 2019 - waiting for 127.0.0.1:5432... 8/10
Sun Sep 1 05:40:53 UTC 2019 - waiting for 127.0.0.1:5432... 9/10
Sun Sep 1 05:40:58 UTC 2019 - 127.0.0.1:5432 still not reachable, giving up
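For reference, this is a minimal sketch of the kind of readiness loop that could produce the log above. The actual entrypoint script isn't shown in this report, so `wait_for_proxy` and its arguments are assumptions; it uses bash's `/dev/tcp` pseudo-device for the TCP check:

```shell
#!/bin/bash
# Hypothetical sketch of the startup wait loop behind the log above
# (the real entrypoint script is not part of this report).

wait_for_proxy() {
  host="$1"; port="$2"; max_tries="${3:-10}"
  i=1
  while [ "$i" -le "$max_tries" ]; do
    echo "$(date -u) - waiting for ${host}:${port}... ${i}/${max_tries}"
    # Attempt a TCP connect; success means the proxy socket is accepting connections.
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      return 0
    fi
    i=$((i + 1))
    sleep 5
  done
  echo "$(date -u) - ${host}:${port} still not reachable, giving up"
  return 1
}
```

With a 5-second sleep between attempts, ten tries roughly matches the ~50-second window in the log before the script gives up.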
We run Apache Airflow on Kubernetes with Cloud SQL; cloud_sql_proxy is started whenever a new pod starts. We've been seeing this issue every few hours, although the example above may be the worst case.
The database is a Second Generation Cloud SQL instance running MySQL 5.7.
We're using cloud_sql_proxy 1.15; we fetch whatever the latest release is during the Docker container build.
airflow@airflow-web-5d8756bfbd-bl2d5:~$ ./cloud_sql_proxy -version
Cloud SQL Proxy: version 1.15; sha d93c53a4824cdaf457656ab49a50001636cf3e36 built Wed Aug 28 23:14:31 UTC 2019
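As a side note on the build step, here's a sketch of pinning the proxy version at image build time instead of always fetching the latest. This is not our actual Dockerfile, and the download URL pattern is an assumption based on the proxy's published release naming; verify it for whatever version you pin:

```shell
# Sketch: pin the cloud_sql_proxy version during the Docker image build.
# The URL pattern below is assumed from the published release naming.
PROXY_VERSION="1.15"
curl -fsSL -o /usr/local/bin/cloud_sql_proxy \
  "https://storage.googleapis.com/cloudsql-proxy/v${PROXY_VERSION}/cloud_sql_proxy.linux.amd64"
chmod +x /usr/local/bin/cloud_sql_proxy
```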
We're definitely not hitting the request limit for the Cloud SQL Admin API or anything like that.
I also can't reproduce this issue when I run cloud_sql_proxy locally. There are no more than 30-50 connections to the MySQL database at any given time, and the connections are mostly idle.
Any idea what it could be? I can't find any useful information to diagnose this issue. As far as I can tell, this is cloud_sql_proxy behavior and not MySQL.