Receiving the Error REQUEST_SIZE_LIMIT_EXCEEDED when creating Partitions.


Eshaan Jayalath

Aug 28, 2014, 8:21:01 AM
to adwor...@googlegroups.com
Hi

We are receiving the error REQUEST_SIZE_LIMIT_EXCEEDED when creating new partitions. Is there a 5,000 limit on partitions as well? Does the limit apply per request, or to the entire partition tree? That is, can we have more than 5,000 sub-divisions within a single level?

If we cannot exceed 5,000 operations per request when creating a partition, we cannot adopt the method of dropping and re-creating the whole partition whenever we modify it. With more than 5,000 sub-divisions, we would first have to create the new partition with the first 5,000 items (sub-divisions), and then modify that same partition in further requests to attach the rest. Will this be the case for a partition with more than 5,000 sub-divisions (items)?
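For reference, the splitting described above can be sketched in a few lines. This is a generic, hypothetical Python sketch (no real AdWords client calls; `mutate` is a stand-in), and note that simple chunking alone does not address any validity requirements the API may impose on each individual request.

```python
# Hypothetical sketch: splitting a large list of partition operations into
# batches that stay under the 5,000-operation request limit.
# `operations` and `mutate` are stand-ins, not real AdWords client calls.

REQUEST_LIMIT = 5000

def chunked(operations, size=REQUEST_LIMIT):
    """Yield successive batches of at most `size` operations."""
    for start in range(0, len(operations), size):
        yield operations[start:start + size]

def apply_in_batches(operations, mutate):
    """Send `operations` to `mutate` in request-sized batches."""
    results = []
    for batch in chunked(operations):
        results.extend(mutate(batch))
    return results

# Example with a fake mutate that just records the batch sizes:
ops = list(range(12000))
sizes = []
apply_in_batches(ops, lambda batch: sizes.append(len(batch)) or batch)
# sizes == [5000, 5000, 2000]
```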


Regards

Eshaan


ERROR
An API exception has occurred. See ApiException and InnerException fields for more details. [SizeLimitError.REQUEST_SIZE_LIMIT_EXCEEDED @ ] (dbname=DB123 partitionId=XXX).
   at Google.Api.Ads.Common.Lib.AdsSoapClient.MakeApiCall(String methodName, Object[] parameters)
   at Google.Api.Ads.Common.Lib.AdsSoapClient.Invoke(String methodName, Object[] parameters)
   at Google.Api.Ads.AdWords.v201402.AdGroupCriterionService.mutate(AdGroupCriterionOperation[] operations)

Sérgio Gomes (Shopping API Team)

Aug 28, 2014, 11:47:37 AM
to adwor...@googlegroups.com
Hi Eshaan,

Yes, that's definitely an edge case that complicates things. You can, however, still delete a subtree of the entire partition tree in the same way (by simply deleting the root node of the subtree) and rebuild from there.

The details will of course be highly specific to your use case, but the idea would be to find the smallest subtree containing all your changes, delete its root (thus deleting the entire subtree), and reinsert the modified subtree with the changes.

If that's still over 5000 nodes, you will need a different approach. One option, assuming you're making small changes across the tree, is to find multiple change subtrees as described above.
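Finding the smallest subtree containing all the changes is a lowest-common-ancestor computation. A minimal sketch in plain Python, assuming you track each node's parent (node names here are illustrative, not real partition criteria):

```python
# Hypothetical sketch: find the root of the smallest subtree containing every
# changed node, given a map of node -> parent (None for the tree root).

def path_to_root(node, parent):
    """Return the path from `node` up to the root, node first."""
    path = []
    while node is not None:
        path.append(node)
        node = parent[node]
    return path

def smallest_containing_subtree(changed, parent):
    """Return the lowest common ancestor of all changed nodes."""
    paths = [path_to_root(n, parent) for n in changed]
    # Walk all paths from the root downwards while they still agree.
    lca = None
    for level in zip(*(reversed(p) for p in paths)):
        if len(set(level)) == 1:
            lca = level[0]
        else:
            break
    return lca

#      A          Changing D and E means only the B subtree
#     / \         needs to be deleted and rebuilt.
#    B   C
#   / \
#  D   E
parent = {"A": None, "B": "A", "C": "A", "D": "B", "E": "B"}
print(smallest_containing_subtree({"D", "E"}, parent))  # -> B
print(smallest_containing_subtree({"D", "C"}, parent))  # -> A
```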

However, if you're making a really big change to a really big tree (say, adding a new node very close to the root), your only option may be a staged approach: delete the whole thing (by deleting the root node), then rebuild the tree up to a given depth at each stage. The complication is that you need a valid tree at the end of every mutate call, so subdivision nodes that end up as leaves at your current stage must be replaced with placeholder unit nodes instead. At the next stage, you then delete the placeholders and insert the new nodes all in one go. E.g., to build the following tree one level at a time:

    A
   / \
  B   C
 / \
D   E

You would first build:

    A
   / \
  X   C

(where X is a unit placeholder)

and then both delete X and insert the B subtree in one single mutate request:

    A
   / \
  B   C
 / \
D   E
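The two stages above can be simulated with plain dictionaries. This is only an illustrative sketch of the bookkeeping, not real AdGroupCriterionService calls (a real mutate would carry actual partition operations), but it shows the delete-placeholder-and-insert-subtree step happening in a single batch:

```python
# Hypothetical simulation of the staged build: each "mutate" batch must leave
# a valid tree, so a subdivision whose children arrive in a later stage is
# temporarily represented by a placeholder unit node.

tree = {}  # node -> list of children; nodes absent from the dict are units (leaves)

def mutate(adds=(), removes=()):
    """Apply one batch of operations; the tree must be valid afterwards."""
    for node in removes:
        tree.pop(node, None)
        for children in tree.values():
            if node in children:
                children.remove(node)
    for parent_node, child in adds:
        tree.setdefault(parent_node, []).append(child)

# Stage 1: build the top level, with placeholder unit X standing in for subtree B.
mutate(adds=[("A", "X"), ("A", "C")])

# Stage 2: delete the placeholder and insert the real B subtree in one call.
mutate(adds=[("A", "B"), ("B", "D"), ("B", "E")], removes=["X"])

print(tree)  # {'A': ['C', 'B'], 'B': ['D', 'E']}
```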


A word of advice: you will save a ton of effort if you have any possibility at all of capturing the changes while they're being made (e.g. when a user is modifying the tree in a UI, or when an automated system decides to make a specific change to part of the partition tree), since you'll then know which nodes were modified and how, instead of having to implement a generic tree-diff algorithm.

Hope this helps!

Cheers,
Sérgio

---
Sérgio Gomes
Developer Relations

Google UK Limited
Registered Office: Belgrave House, 76 Buckingham Palace Road, London SW1W 9TQ
Registered in England Number: 3977902

Eshaan Jayalath

Sep 16, 2014, 9:51:23 AM
to adwor...@googlegroups.com

Thanks, Sérgio, for the information. However, could Google make this process simpler than what you've described, by adopting the approach we always use when creating a file with a large volume of data in it:

1. Create a file handle with a unique file name.

2. Use that file handle to write data multiple times, without exceeding the buffer limit.

3. Close the file handle to finalize the creation of the complete file.

In the context of partitions, we could adopt the same approach as follows:

1. Create the partition root node and obtain the root node ID (the handle).

2. Create subsequent levels using that root node ID, posting data multiple times without exceeding the 5,000 limit per request (Google would not validate the partition at this stage).

3. Signal the completion of partition creation (with a Google API call) using the same root node ID, so that Google can then validate the partition as a whole unit.
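To be concrete about the proposal, here is a purely hypothetical sketch; none of these calls exist in the AdWords API today, and the names are invented only to mirror the three steps above:

```python
# Purely hypothetical sketch of the proposed file-handle-style flow; these
# functions do NOT exist in the AdWords API -- they only mirror the steps above.

class PartitionBuilder:
    """Open a build session, write sub-divisions in chunks, close to validate."""

    def __init__(self):
        self.root_id = "ROOT-1"   # step 1: create the root node, obtain its ID
        self.pending = []
        self.finalized = False

    def add_subdivisions(self, nodes):
        # Step 2: post data in multiple requests; no validation at this stage.
        if len(nodes) > 5000:
            raise ValueError("each request must stay within the 5,000 limit")
        self.pending.extend(nodes)

    def close(self):
        # Step 3: signal completion; validation of the whole tree happens here.
        self.finalized = True
        return self.root_id, len(self.pending)

builder = PartitionBuilder()
builder.add_subdivisions([f"node-{i}" for i in range(5000)])
builder.add_subdivisions([f"node-{i}" for i in range(5000, 7000)])
print(builder.close())  # ('ROOT-1', 7000)
```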

I believe this sort of method would be much easier to adopt and not complicated at all; it also gives Google a chance to process the partition as a whole at the end. If the partition creation is never closed (step 3 above) within a time period set by Google, Google could discard it as an abandoned operation.

Is there any way we can do something like this for Google partitions? I have more ideas along these lines if you are interested.



Regards

Eshaan