No data to display, enable debug log


Jaffer Li

Oct 23, 2021, 4:03:45 AM
to Cloud Carbon Footprint
Hello,

I am currently deploying the Cloud Carbon Footprint code into a dedicated AWS account (which is not the AWS payer/master account).

The web client (and even the CLI) always shows "No data to display". I suspect this is a configuration problem that causes no data to be returned to the frontend. I would like to dig deeper into the "api" package, but before doing that I want to know whether there is any way to enable a yarn debug log that shows more of the API invocation tree, so I can find where the problem is.

Thanks,
Jaffer

Arik Smith

Oct 26, 2021, 5:27:35 PM
to Cloud Carbon Footprint
Hi Jaffer,

Thanks for reaching out! This may be a configuration problem like you suggested. Have you made sure that you have set the following variables in your .env file? (A filled-in example follows the list below.)

AWS_ATHENA_DB_NAME=
AWS_ATHENA_DB_TABLE=
AWS_ATHENA_REGION=
AWS_ATHENA_QUERY_RESULT_LOCATION=
AWS_BILLING_ACCOUNT_ID=
AWS_BILLING_ACCOUNT_NAME=
AWS_USE_BILLING_DATA=
AWS_AUTH_MODE=

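For reference, a hypothetical filled-in .env might look like this (every value below is a made-up placeholder; substitute your own Athena database/table, region, query result bucket, and billing account details):

    AWS_ATHENA_DB_NAME=athenacurcfn_my_cur_report
    AWS_ATHENA_DB_TABLE=my_cur_report
    AWS_ATHENA_REGION=us-west-2
    AWS_ATHENA_QUERY_RESULT_LOCATION=s3://my-athena-query-results-bucket
    AWS_BILLING_ACCOUNT_ID=123456789012
    AWS_BILLING_ACCOUNT_NAME=my-billing-account
    AWS_USE_BILLING_DATA=true
    AWS_AUTH_MODE=default
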
I would also make sure to delete the estimates.cache.json in the api directory each time you run to make sure you're not pulling empty caches of data from the previous attempts.
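If it helps, that cache file can be removed with something along these lines (assuming it lives in packages/api; adjust the path to wherever your api writes it):

    rm packages/api/estimates.cache.json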

You can also check out the configuration section of the documentation on our microsite for additional potential fixes. If none of these work, we're unfortunately unfamiliar with yarn debug; however, we'd be happy to have one of the members of our team hop on a Zoom call with you to help further debug the issue.

Let us know if any of that helps!

Thanks,
Arik

Jaffer Li

Oct 27, 2021, 11:25:51 AM
to Cloud Carbon Footprint

I updated a few lines of code in packages/aws/src/application/AWSAccount.ts to make the Athena query run against the linked account (instead of the payer account):

    // Jaffer's monkey patch: use instance-profile credentials from the EC2 metadata service
    AWS.config.credentials = new AWS.EC2MetadataCredentials({
      httpOptions: { timeout: 5000 }, // 5 second timeout
      maxRetries: 10, // retry 10 times
      retryDelayOptions: { base: 200 }, // see AWS.Config for information
    });
    AWS.config.update({ region: 'us-west-2' });
    this.ath = new Athena(); // query Athena in the linked account with the instance credentials

With this change, the query runs against the local (linked) account instead of the payer account. Our payer account is treated as a secured area and isn't allowed to run any applications inside it, so we ship the AWS CUR out of the payer account to another account (via S3 replication) and set up the Cloud Carbon Footprint web app on the linked account instead.

I ran into another challenge: the api package consumes all of the memory (32 GB) and then runs out of JavaScript heap:

terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc

I found that the result of the Athena query was too large to fit into memory (32 GB on an r6g instance); the result set blows up as (1 year of dates * account ID * AWS product code * region * ...). I think I'll try the CLI to run a smaller report first (rough command sketch below).
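For example, something like this from the packages/cli directory, narrowing the date range first (the flag names here are how I remember them from the CCF CLI docs, so they may differ in your version):

    yarn start --startDate 2021-10-01 --endDate 2021-10-07 --groupBy day --format table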

Thanks.

Arik Smith

Oct 28, 2021, 10:34:26 AM
to Cloud Carbon Footprint

Hi Jaffer,

Sorry that you're still having trouble viewing data. I wanted to bring up another potential solution that was brought to my attention and that should help with what you were trying to accomplish with yarn debug: it is possible to run the node process with the inspector enabled to assist with debugging.
We use the following command in our package.json file to run our app in dev mode:

    "start:web": "ts-node-dev src/server.ts",

You could try updating this start command to run ts-node-dev with the inspector:

    "start:web": "ts-node-dev --inspect -- src/server.ts",

This should allow you to accomplish the same thing as yarn debugging, since yarn is essentially just invoking this same command behind the scenes. More information can be found in the Node.js documentation.
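Once the process is running with --inspect, the Node.js inspector listens on port 9229 by default, so you should be able to attach Chrome DevTools (via chrome://inspect) or your editor's Node.js debugger and step through the api package from there.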

As for the api memory issues you've been having, it may be that you're using the app with a dataset that is too large for the default performance configuration. In your api's .env file there should be a variable called GROUP_BY, which defaults to grouping by day. This can be set to month, quarter, or year to help optimize grouping for larger data (see the example after this paragraph). You can find a deeper dive into these settings in our documentation on performance configurations.
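For instance, a minimal change in the api .env, assuming monthly grouping is coarse enough for your dataset:

    GROUP_BY=month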

If you're still having trouble after messing with those configurations, then it could be that your organization is a larger cloud customer than we have anticipated. In that case, our team would love to work with you as we're always looking for opportunities to support and continuously scale our app for larger cloud customers. Let us know if any of that helps and whether that's something you'd be interested in!

Best,
Arik