Hi,
We have a project named "en-development" that contains a cluster "essence-stage-small-central-cluster" with the following configuration: 24 nodes, n1-standard-16. After I resized the cluster down to 3 nodes and then back up to 24, only 19 of the 24 nodes came up. I cleaned up the cluster (deleted ingresses, services, and pods) and repeated the experiment, but the problem persisted. I then upgraded all nodes, including the master, to the next Kubernetes version, from 1.4.5 to 1.4.6. The problem still persists, and in the Google console I see the following message:
Cluster updated to 1.4.6 but failed health check: All cluster resources were brought up, but the cluster API is reporting that only 19 nodes out of 24 have registered. Cluster may be unhealthy.
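For reference, these are the kinds of commands involved, a sketch assuming the `gcloud` CLI and `kubectl`; the zone flag is a placeholder, since the cluster's actual zone isn't stated above. The last two commands are what I'd use to see which instances exist but never registered with the API server:

```shell
# Resize the cluster down and back up (zone is a placeholder).
gcloud container clusters resize essence-stage-small-central-cluster \
    --num-nodes=3 --zone=us-central1-a --project=en-development
gcloud container clusters resize essence-stage-small-central-cluster \
    --num-nodes=24 --zone=us-central1-a --project=en-development

# Compare what Compute Engine brought up vs. what registered with the master:
# instances in the cluster's managed instance group(s)...
gcloud compute instances list --project=en-development \
    --filter="name~essence-stage-small-central-cluster"
# ...vs. nodes the Kubernetes API actually knows about.
kubectl get nodes
```

If `gcloud compute instances list` shows 24 running VMs but `kubectl get nodes` shows only 19, the missing five exist but their kubelets never registered; the serial-port output of one of those VMs (`gcloud compute instances get-serial-port-output <instance-name>`) would be the next place to look.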