32 core compute engine not starting


Dave Greenly

May 21, 2015, 1:09:53 AM
to gce-dis...@googlegroups.com
I started a 32-core Compute Engine instance manually from the GCE Developers Console, and it fired up fine and all was good.

But I also have an automated environment where I start up compute engines using the gcloud utility.

The following command gave me an error stating that the resource was not found (see error below).

Is there something I need to do to allow my project to see the new beta 32-core machine types?

Thanks in advance!!
Dave

/usr/local/bin/gcloud compute instances create sn-esmo-string-vm --zone us-central1-a --machine-type n1-highmem-32 --address xxx.xx.xx.xx --disk name=xx-xxxx-string-vm mode=rw boot=yes device-name=sample-device-name auto-delete=no --scopes https://www.googleapis.com/auth/devstorage.full_control compute-rw

ERROR: (gcloud.compute.instances.create) Some requests did not succeed: - Invalid value for field 'resource.machineTypes': 'projects/xxxxxxxxx/zones/us-central1-a/machineTypes/n1-highmem-32'. Resource was not found.

Jesse Scherer (Google Cloud Support)

May 21, 2015, 2:33:34 PM
to gce-dis...@googlegroups.com, dgree...@gmail.com
Have you tried the same command, but with "gcloud beta compute..." instead of just "gcloud compute?"
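For illustration, the suggestion is to route the same create call through the beta command group; a minimal sketch (the instance name here is a placeholder, not from the original post) would look like:

```shell
# Hypothetical: same create call via the beta command group
# (instance name is a placeholder)
gcloud beta compute instances create my-instance \
  --zone us-central1-a \
  --machine-type n1-highmem-32
```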

Scott Van Woudenberg

May 21, 2015, 3:08:58 PM
to Jesse Scherer (Google Cloud Support), gce-discussion, dgree...@gmail.com
Hi Dave,

The 32-vCPU machine types are only available in our Ivy Bridge and Haswell zones. us-central1-a is a Sandy Bridge zone, which is why you are seeing that error (see Available regions & zones for the full list).

I could have sworn we documented machine type availability, but I cannot find it, so thank you for raising this; we will add that information shortly.

For now, you can see which machine types are available in a given zone as follows:

$ gcloud compute machine-types list --zone us-central1-a
NAME           ZONE          CPUS MEMORY_GB DEPRECATED
f1-micro       us-central1-a 1     0.60
g1-small       us-central1-a 1     1.70
n1-highcpu-16  us-central1-a 16   14.40
n1-highcpu-2   us-central1-a 2     1.80
n1-highcpu-4   us-central1-a 4     3.60
n1-highcpu-8   us-central1-a 8     7.20
n1-highmem-16  us-central1-a 16   104.00
n1-highmem-2   us-central1-a 2    13.00
n1-highmem-4   us-central1-a 4    26.00
n1-highmem-8   us-central1-a 8    52.00
n1-standard-1  us-central1-a 1     3.75
n1-standard-16 us-central1-a 16   60.00
n1-standard-2  us-central1-a 2     7.50
n1-standard-4  us-central1-a 4    15.00
n1-standard-8  us-central1-a 8    30.00

Regards,

-ScottVW

---
Scott Van Woudenberg
Product Manager
Google Compute Engine



Dave Greenly

May 22, 2015, 1:15:59 AM
to gce-dis...@googlegroups.com
Awesome Scott!!

Thank you very much. I have a 32-core compute engine running now. I used the same gcloud command as in my original post, but with the us-central1-b zone, and it worked.

I also had to move my persistent disk to the same us-central1-b zone. I did this by creating a snapshot of my original disk and then creating a new persistent disk from that snapshot in us-central1-b.
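For reference, the snapshot-based disk move described above can be sketched with commands like these (the disk and snapshot names are placeholders, not the redacted names from the thread):

```shell
# Snapshot the existing disk in the original zone
gcloud compute disks snapshot my-disk \
  --zone us-central1-a \
  --snapshot-names my-disk-snap

# Recreate the disk from that snapshot in the target zone
gcloud compute disks create my-disk-b \
  --zone us-central1-b \
  --source-snapshot my-disk-snap
```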

Thanks again!
Dave