Too many requests with multiple Azure accounts


Paul Cheyne

Nov 5, 2021, 10:08:00 AM
to Cloud Carbon Footprint
Hi all

I just started testing the Cloud Carbon Footprint tool. It is excellent. I have it working with AWS and am now testing with Azure. I have come across an issue when working with multiple Azure accounts; I receive the following error:

[ConsumptionManagement] warn: Azure ConsumptionManagementClient.usageDetails.listNext failed. Reason: Too many requests. Please retry after 60 seconds.

When I run it with just one Azure account, it completes without issue.

Any help appreciated

Dan Lewis-Toakley

Nov 15, 2021, 10:05:43 AM
to Paul Cheyne, Cloud Carbon Footprint
Hi Paul,

Apologies for the delay in getting back to you. 

This is an issue we also ran into with multiple Azure accounts at Thoughtworks, and it is expected behaviour given the rate limits of the Azure Consumption Management API. Because of this, we implemented retry logic based on the wait time the API tells us to use - in your case, 60 seconds.
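For anyone curious, here is a minimal sketch of the kind of retry logic described above, assuming the wait time is parsed out of the 429 error message shown in Paul's log. `listUsageDetails` is a hypothetical stand-in for the ConsumptionManagementClient.usageDetails calls, not the actual Cloud Carbon Footprint code:

```ts
// Sketch only: retry a paginated request, waiting however long the
// "Please retry after N seconds" message asks for.
const sleep = (ms: number) =>
  new Promise<void>((resolve) => {
    setTimeout(resolve, ms)
  })

async function listWithRetry<T>(
  listUsageDetails: () => Promise<T>, // hypothetical stand-in for usageDetails.list / listNext
  maxRetries = 5,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await listUsageDetails()
    } catch (err) {
      const message = err instanceof Error ? err.message : String(err)
      const match = message.match(/retry after (\d+) seconds/i)
      // Rethrow if this isn't a rate-limit error, or if we're out of retries.
      if (!match || attempt >= maxRetries) throw err
      await sleep(Number(match[1]) * 1000)
    }
  }
}
```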

Do the API requests end up completing eventually, or do they fail? It might take some time, but the requests should be retried and completed after the stated wait time - let us know if you aren't seeing this!

Best,
Dan



--
Dan Lewis-Toakley
Green Cloud Lead, North America
Pronouns He/Him
Email dan.lewi...@thoughtworks.com
Telephone +19172545068
ThoughtWorks

Paul Cheyne

Nov 17, 2021, 3:19:54 AM
to Cloud Carbon Footprint
Hi Dan

Thanks for getting back to me. I found that if I left it to run, the application would fail after around 20 minutes. I will try it again this week and post the errors it produces.

Paul Cheyne

Nov 17, 2021, 5:25:53 AM
to Cloud Carbon Footprint
Here is the error I received:

[1] <--- Last few GCs --->
[1]
[1] [158:0x7ffff3693340]  1376581 ms: Mark-sweep 2020.5 (2050.8) -> 2019.4 (2050.8) MB, 1168.3 / 0.0 ms  (average mu = 0.076, current mu = 0.003) allocation failure scavenge might not succeed
[1] [158:0x7ffff3693340]  1377670 ms: Mark-sweep 2020.4 (2050.8) -> 2019.5 (2050.8) MB, 1086.1 / 0.0 ms  (average mu = 0.041, current mu = 0.003) allocation failure scavenge might not succeed
[1]
[1]
[1] <--- JS stacktrace --->
[1]
[1] ==== JS stack trace =========================================
[1]
[1]     0: ExitFrame [pc: 0x7f5f416e2719]
[1]     1: StubFrame [pc: 0x7f5f4171d240]
[1] Security context: 0x29dcb129b9a1 <JSObject>
[1]     2: match [0x29dcb128a0d1](this=0x04a154a84561 <String[#6]: String>,0x257d771cd071 <JSRegExp <String[#47]: ^(String|Enum|Object|Stream|Uuid|TimeSpan|any)$>>)
[1]     3: /* anonymous */ [0x2b7c146708e9] [/home/x/cloud-carbon-footprint/node_modules/@azure/ms-rest-js/dist/msRest.node.js:~608] [pc=0x24767dc1eb88](this=0x10...
[1]
[1] FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
[1]  1: 0x7f5f40ad10e8 node::Abort() [/usr/lib/x86_64-linux-gnu/libnode.so.72]
[1]  2: 0x7f5f40a0d01a  [/usr/lib/x86_64-linux-gnu/libnode.so.72]
[1]  3: 0x7f5f40c7ecf2 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/usr/lib/x86_64-linux-gnu/libnode.so.72]
[1]  4: 0x7f5f40c7ef9f v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/usr/lib/x86_64-linux-gnu/libnode.so.72]
[1]  5: 0x7f5f40e17c15  [/usr/lib/x86_64-linux-gnu/libnode.so.72]
[1]  6: 0x7f5f40e2796d v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/usr/lib/x86_64-linux-gnu/libnode.so.72]
[1]  7: 0x7f5f40e28616 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/usr/lib/x86_64-linux-gnu/libnode.so.72]
[1]  8: 0x7f5f40e2a79c v8::internal::Heap::AllocateRawWithLightRetry(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/usr/lib/x86_64-linux-gnu/libnode.so.72]
[1]  9: 0x7f5f40e2a804 v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/usr/lib/x86_64-linux-gnu/libnode.so.72]
[1] 10: 0x7f5f40df0efb v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationType, v8::internal::AllocationOrigin) [/usr/lib/x86_64-linux-gnu/libnode.so.72]
[1] 11: 0x7f5f411223d5 v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [/usr/lib/x86_64-linux-gnu/libnode.so.72]
[1] 12: 0x7f5f416e2719  [/usr/lib/x86_64-linux-gnu/libnode.so.72]
[1] yarn start-api exited with code 0
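The ~2050 MB figures in the trace show the process hitting V8's heap ceiling. As a quick diagnostic (a sketch, not part of Cloud Carbon Footprint), you can log how close the process is to that limit, and raise it as a stopgap while the underlying memory growth is addressed:

```ts
// Diagnostic sketch: report current heap usage against V8's heap limit.
import * as v8 from 'v8'

const { used_heap_size, heap_size_limit } = v8.getHeapStatistics()
const toMb = (bytes: number) => (bytes / 1024 / 1024).toFixed(0)
console.log(`heap used: ${toMb(used_heap_size)} MB of ${toMb(heap_size_limit)} MB limit`)

// Stopgap, not a fix: raise the limit at startup (4096 is an arbitrary example):
//   NODE_OPTIONS="--max-old-space-size=4096" yarn start-api
```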


Cloud Carbon Footprint

Dec 3, 2021, 12:06:46 PM
to Cloud Carbon Footprint
Hi all,

Thank you for sharing the details of the issue you've been facing! After some investigation, we discovered that this is related to underlying performance issues in our approach to handling large requests with Azure. We are actively working on a solution that avoids the memory issue you are encountering, and we will send an update as soon as we are able to solve it!
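For context, one general pattern for keeping memory flat with a paginated API like this is to fold each page into an aggregate as it arrives rather than accumulating every row first. The sketch below illustrates that pattern only; it is not the project's eventual fix, and `fetchFirstPage` / `fetchNextPage` are hypothetical stand-ins for usageDetails.list / usageDetails.listNext:

```ts
// Sketch only: stream pages through a fold so at most one page of raw rows
// is held in memory at a time.
interface Page<Row> {
  rows: Row[]
  nextLink?: string
}

async function aggregatePages<Row, Acc>(
  fetchFirstPage: () => Promise<Page<Row>>,
  fetchNextPage: (nextLink: string) => Promise<Page<Row>>,
  fold: (acc: Acc, row: Row) => Acc,
  initial: Acc,
): Promise<Acc> {
  let acc = initial
  let page = await fetchFirstPage()
  for (;;) {
    for (const row of page.rows) acc = fold(acc, row)
    if (!page.nextLink) return acc
    // Reassigning `page` lets the previous page's rows be garbage collected.
    page = await fetchNextPage(page.nextLink)
  }
}
```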

Thanks again,
Arik
On Tuesday, November 30, 2021 at 11:26:34 AM UTC-6 Alban Boitier wrote:
I have the same error, among others.

Dan Lewis-Toakley

Dec 7, 2021, 1:43:24 PM
to Alban Boitier, Cloud Carbon Footprint
Hi folks, 

I've pushed a branch "infinitaslearning/trunk" (see the PR for the changes) that I'm hoping might help with the request limits for multiple Azure accounts, and maybe with the memory heap limits as well.

In that branch you'll see that back-off and retry logic has been added to the ConsumptionManagementClient.usageDetails.list function, which didn't have it before. It was already in place for ConsumptionManagementClient.usageDetails.listNext (note the function names with and without "Next").

If this doesn't help, you will also see commented-out code in AzureAccount.ts that sequentially requests the data in chunks of 10 (a rough sketch of that approach follows below). Would you be able to uncomment that code and let me know how it goes / which option is better (sequential groups vs all in parallel)? It also includes some console logging to benchmark the total request time.
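For readers following along, here is a rough sketch of the sequential-chunks idea; `getDataForSubscription` is a hypothetical stand-in for the per-account fetch in AzureAccount.ts, not the actual code in the branch:

```ts
// Sketch only: run per-subscription requests ten at a time, so fewer
// responses are in flight (and held in memory) at once.
async function getDataInChunks<T>(
  subscriptionIds: string[],
  getDataForSubscription: (id: string) => Promise<T>,
  chunkSize = 10,
): Promise<T[]> {
  const results: T[] = []
  for (let i = 0; i < subscriptionIds.length; i += chunkSize) {
    const chunk = subscriptionIds.slice(i, i + chunkSize)
    // Each chunk runs in parallel; the chunks themselves run one after another.
    results.push(...(await Promise.all(chunk.map(getDataForSubscription))))
  }
  return results
}
```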

Based on your feedback, we'll decide which approach to merge down. We're also looking into other APIs, or other scopes for the same API, to reduce the number of requests and the amount of data stored in memory, but we haven't yet found anything suitable. Will update if/when that changes though!

@Alban -- regarding the 504 error, could you share more details about where you are seeing that, and how you are able to replicate it? 

Many thanks,
Dan 



On Fri, Dec 3, 2021 at 6:19 PM 'Alban Boitier' via Cloud Carbon Footprint <cloud-carbo...@googlegroups.com> wrote:
I received the same "[1] FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory" error, but not when working with Azure accounts; this happened with roughly 250 AWS accounts, after the server had run for more than 20 minutes. Have a good time fixing those memory leaks!

I have also had a particularly cumbersome 504 error. Maybe you could add more information to the server debug output regarding the 504 error? I couldn't get the current GitHub version to work, so I reverted to an early-October version that still works fine for me. Keep up the good work; this dashboard could become the de facto standard for multi-cloud companies.


Zarul Zakuan

Jan 4, 2023, 11:42:43 PM
to Cloud Carbon Footprint
Hi Team. I am facing the exact same error that Paul reported. Is there a fix for this yet? Thank you.