Here are more details on the problem.
The idea is to manually schedule pods onto a non-default node pool.
That node pool has autoscaling enabled. It is not the default node pool but an additional one created manually through the gcloud API.
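To make the setup concrete, here is a sketch of how such a node pool is typically created and how pods get pinned to it. The cluster and pool names (`my-cluster`, `extra-pool`) and the min/max node counts are hypothetical, not taken from my actual setup:

```shell
# Create an additional node pool with autoscaling enabled
# (hypothetical names and sizes, for illustration only).
gcloud container node-pools create extra-pool \
  --cluster my-cluster \
  --enable-autoscaling --min-nodes 1 --max-nodes 5 \
  --num-nodes 1

# Pods are then pinned to that pool via a nodeSelector on the
# label GKE puts on every node, e.g. in the pod spec:
#
#   nodeSelector:
#     cloud.google.com/gke-nodepool: extra-pool
```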
When a pod is scheduled into that node pool and there are no remaining resources, the autoscaler kicks in and a new node is created.
The problem is that the whole cluster hangs and its API is not accessible while the new node is being created for that node pool, which takes about 1-2 minutes.
It is as if Kubernetes stopped responding on the API while one of its non-default node pools is autoscaling.
This does not happen when autoscaling occurs on the default node pool.
I think it is an issue with GKE.
Any help would be appreciated :)
Thanks!