Cloud Function execution could not start: Request too Large


Bruce Sandler

May 14, 2019, 12:52:00 PM
to gce-discussion
Hi there,

I have a Python Cloud Function that listens on a Pub/Sub topic. It had been working fine in production until recently, when it started reporting the following error:
Function execution could not start, status: 'request too large'

I looked in Stackdriver metrics and confirmed that no large messages have been published to that topic.
What I did find is that the number of unacknowledged messages has been growing steadily since that error first appeared.

The function does seem to process some of the messages, but apparently there is a "culprit" message (or messages) that keeps getting resent and creates a bottleneck.
There is no stack trace attached to that error, which means it does not appear in Stackdriver Error Reporting.

I suppose I could use the gcloud alpha pubsub subscriptions seek command to ACK all old messages, but I would like to know what triggers this error in the first place.


Thanks,

Boris


Sam (Google Cloud Support)

May 16, 2019, 1:04:11 PM
to gce-discussion
Hi Bruce,

From what I gather, those errors in Cloud Functions are due to issues encountered in Cloud Pub/Sub.

It seems some messages published on your Cloud Pub/Sub topic may have hit the 10MB size limit (this limit applies to the total size of a publish request, i.e. the sum of the sizes of all messages in it, including message data and attributes), as documented here [1]. To verify this, use Stackdriver's Metrics Explorer for that Pub/Sub topic; you can catch the errors in Stackdriver Monitoring [2].
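If you want to guard against this on the publishing side, here is a minimal sketch (plain Python, no client library; the 10MB figure and the "data plus attributes" accounting are taken from the quota above, and the function names are my own, not from any API) that estimates a batch's total size before publishing:

```python
MAX_REQUEST_BYTES = 10 * 1024 * 1024  # 10MB Pub/Sub publish request limit


def message_size(data: bytes, attributes: dict) -> int:
    """Approximate the size Pub/Sub counts: data plus attribute keys and values."""
    size = len(data)
    for key, value in attributes.items():
        size += len(key.encode("utf-8")) + len(value.encode("utf-8"))
    return size


def fits_in_one_request(messages) -> bool:
    """True if a batch of (data, attributes) pairs stays under the request limit."""
    total = sum(message_size(data, attrs) for data, attrs in messages)
    return total <= MAX_REQUEST_BYTES
```

Running a check like this before each publish call makes oversized batches fail fast in your own code, with a clear error, instead of surfacing later as a delivery failure.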

Note that Pub/Sub retains unacknowledged messages for 7 days, and during this period it keeps retrying delivery (which explains both the repeated error messages on the function and the 'culprit' message that kept getting retried). Once the retention period expires, those messages are deleted from the system.

As for the number of unacknowledged messages that has been growing since the error appeared, this behavior relates to at-least-once delivery: Cloud Pub/Sub will repeatedly attempt to deliver any message that has not been acknowledged. The subscriber has a configurable, limited amount of time, the ackDeadline, to acknowledge each message; once the deadline has passed, an outstanding message becomes unacknowledged and is redelivered [3]. Tuning this deadline to match your function's processing time is a good way to keep the backlog from growing.
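To illustrate the acknowledgement side: a background Cloud Function triggered by Pub/Sub receives the message base64-encoded in event['data']. Returning normally acknowledges the message, while raising an exception leaves it unacknowledged so Pub/Sub redelivers it. A minimal sketch (the handler signature and event field follow the standard background-function shape; process_payload is a hypothetical placeholder for your own logic):

```python
import base64


def handle_pubsub(event, context):
    """Background Cloud Function triggered by a Pub/Sub message.

    Returning normally acks the message; raising an exception leaves it
    unacknowledged, so Pub/Sub will redeliver it until the retention
    period expires.
    """
    payload = base64.b64decode(event["data"]).decode("utf-8")
    # Keep processing well under the subscription's ackDeadline so the
    # message is acked before the deadline passes.
    return process_payload(payload)


def process_payload(payload: str) -> str:
    # Hypothetical placeholder: just echo the decoded payload.
    return payload
```

If one particular message always makes the function raise, it will keep coming back on every retry, which matches the "culprit" behavior you described.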

I hope this helps you understand what triggers the error and to help you resolve it. Here's more on troubleshooting Pub/Sub [4].
