Google Cloud Platform Status
ISSUE SUMMARY
On Tuesday 12 March 2019, Google's internal blob storage service
experienced a service disruption for a duration of 4 hours and 10 minutes.
We apologize to customers whose service or application was impacted by this
incident. We know that our customers depend on Google Cloud Platform
services and we are taking immediate steps to improve our availability and
prevent outages of this type from recurring.
DETAILED DESCRIPTION OF IMPACT
On Tuesday 12 March 2019 from 18:40 to 22:50 PDT, Google's internal blob
(large data object) storage service experienced elevated error rates,
averaging a 20% error rate with a brief peak of 31% during the incident.
User-visible Google services that make use of the blob storage service,
including Gmail, Photos, and Google Drive, also saw elevated error rates,
although (as was the case with GCS) the user impact was greatly reduced by
the caching and redundancy built into those services. There will be a
separate incident report for non-GCP services affected by this incident.
The Google Cloud Platform services that experienced the most significant
customer impact were the following:
- Google Cloud Storage experienced elevated long tail latency and an average
  error rate of 4.8%. All bucket locations and storage classes were impacted.
  GCP services that depend on Cloud Storage were also impacted.
- Stackdriver Monitoring experienced up to 5% errors retrieving historical
  time series data. Recent time series data was available. Alerting was not
  impacted.
- App Engine's Blobstore API experienced elevated latency and an error rate
  that peaked at 21% for fetching blob data. App Engine deployments
  experienced elevated errors that peaked at 90%. Serving of static files
  from App Engine also experienced elevated errors.
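For applications reading from GCS or the Blobstore API, the standard client-side mitigation for elevated error rates like these is retry with exponential backoff and jitter. The sketch below is a minimal illustration and not part of the incident report itself: fetch_blob is a hypothetical stand-in for whatever read call the application actually makes (for example, a GCS object download), and the retry-policy values are assumptions chosen for brevity.

    import random
    import time

    def fetch_with_backoff(fetch_blob, max_attempts=5, base_delay=0.5, max_delay=32.0):
        """Call fetch_blob(), retrying failures with exponential backoff.

        fetch_blob is a hypothetical placeholder for the application's read
        (e.g. a GCS object download or a Blobstore fetch). Treating every
        exception as retryable is a simplification for illustration.
        """
        for attempt in range(max_attempts):
            try:
                return fetch_blob()
            except Exception:
                if attempt == max_attempts - 1:
                    raise  # out of retries; surface the error to the caller
                # Exponential backoff with full jitter, so many clients do not
                # retry in lockstep against an already overloaded service.
                delay = min(max_delay, base_delay * (2 ** attempt))
                time.sleep(random.uniform(0, delay))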
ROOT CAUSE
On Monday 11 March 2019, Google SREs were alerted to a significant increase
in storage resources for metadata used by the internal blob service. On
Tuesday 12 March, to reduce resource usage, SREs made a configuration
change that had the side effect of overloading a key part of the system
responsible for looking up the location of blob data. The increased load
eventually led to a cascading failure.
REMEDIATION AND PREVENTION
SREs were alerted to the service disruption at 18:56 PDT and immediately
stopped the job that was making configuration changes. In order to recover
from the cascading failure, SREs manually reduced traffic levels to the
blob service to allow tasks to start up without crashing due to high load.
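As an illustration of what a controlled recovery ramp might look like, the sketch below steps allowed traffic back up only while the observed error rate stays within budget. set_traffic_fraction and current_error_rate are hypothetical stand-ins for whatever throttling and monitoring tooling the operators actually used, and the step sizes and thresholds are invented for the example.

    import time

    # Hypothetical stand-ins for real throttling and monitoring tooling;
    # the values printed and returned here are placeholders only.
    def set_traffic_fraction(fraction):
        print(f"allowing {fraction:.0%} of normal traffic")

    def current_error_rate():
        return 0.0

    def ramp_traffic(steps=(0.1, 0.25, 0.5, 0.75, 1.0),
                     error_budget=0.01, settle_secs=300):
        """Step traffic back up only while errors stay within budget."""
        for fraction in steps:
            set_traffic_fraction(fraction)
            time.sleep(settle_secs)  # let restarted tasks warm up at this level
            if current_error_rate() > error_budget:
                # Back off to a lower level and hold for investigation.
                set_traffic_fraction(fraction / 2)
                break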
To prevent service disruptions of this type, we will be improving the
isolation between regions of the storage service so that failures are less
likely to have global impact. We will be improving our ability to provision
resources quickly in order to recover from a cascading failure triggered by
high load. We will add software safeguards to prevent configuration changes
that overload key parts of the system. We will also improve the
load-shedding behavior of the metadata storage system so that it degrades
gracefully under overload.
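To make the last point concrete, graceful degradation under overload generally means rejecting excess work cheaply instead of queuing it until tasks crash and restart. The sketch below is a generic illustration of that idea, not Google's implementation; the fixed concurrency cap and the 503-style response are assumptions made for brevity.

    import threading

    class LoadShedder:
        """Cap in-flight requests; reject the excess immediately.

        Illustrative only: production systems typically shed load based on
        CPU, queue length, or request priority rather than a fixed cap.
        """
        def __init__(self, max_in_flight=100):
            self._slots = threading.Semaphore(max_in_flight)

        def handle(self, request_fn):
            # A non-blocking acquire returns at once, so an overloaded server
            # answers with a cheap error instead of piling up work until it
            # crashes, which is what turns overload into cascading failure.
            if not self._slots.acquire(blocking=False):
                return "503: overloaded, retry with backoff"
            try:
                return request_fn()
            finally:
                self._slots.release()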