The data I am retrieving in the EG project has multiple rows per 'Vendor#'/'Invoice#' combination. Each of these rows has a date column, 'Last_updated'. For each Vendor#/Invoice#, I need to retrieve the row with the latest 'Last_updated' date. I am trying to build this using the EG query builder, but I do not see a function that accomplishes this. Any suggestions on how to do this in EG?
This works if I only include the Vendor & Invoice number, but I have many more columns of data that I need; the Vendor & Invoice number are just the keys I would group by. When I put the other columns on my Select tab, the query grouped by every column on the Select, so I did not get what I need. I should have mentioned that I have other data I need on the Select tab, and the 'grouping' used for finding the latest-dated row would be by Vendor# and Invoice#.
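One common pattern for this (a sketch, not EG-specific advice) is to build a second grouped query that finds the MAX of 'Last_updated' per Vendor#/Invoice#, then join it back to the detail table on all three columns. The runnable sketch below demonstrates the pattern with SQLite and hypothetical column names (vendor_no, invoice_no, last_updated, amount); in EG you would express the same join between two query-builder tasks.

```python
import sqlite3

# Hypothetical table with duplicate rows per vendor/invoice pair.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE invoices (vendor_no TEXT, invoice_no TEXT,
                           last_updated TEXT, amount REAL);
    INSERT INTO invoices VALUES
        ('V1', 'I1', '2023-01-05', 100.0),
        ('V1', 'I1', '2023-03-10', 110.0),   -- latest row for V1/I1
        ('V2', 'I9', '2023-02-01', 50.0);
""")

# Join the detail rows to a grouped subquery holding the max date per key,
# so every other column comes along without being part of the GROUP BY.
rows = conn.execute("""
    SELECT i.vendor_no, i.invoice_no, i.last_updated, i.amount
    FROM invoices i
    JOIN (SELECT vendor_no, invoice_no, MAX(last_updated) AS max_dt
          FROM invoices
          GROUP BY vendor_no, invoice_no) m
      ON i.vendor_no = m.vendor_no
     AND i.invoice_no = m.invoice_no
     AND i.last_updated = m.max_dt
""").fetchall()
```

The key point is that only the keys and the date participate in the grouping; the join carries the remaining Select-tab columns through untouched.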
Previously, with the automation in the screenshot below, every time you selected one department (multi-select option) it would create just the subtask for that department, even if other departments/options were already selected (we have one rule per department to create a specific subtask).
Hi @Vanessa_N - Just wanted to circle back to see if there have been any updates on this? We use this workflow multiple times a day, and every day that goes by it is creating frustration in our organization. Hopefully there is some workaround. Thanks!
Hi @Machteld_Vervaet, apologies for the confusion. Our developers clarified that the classic rules builder is still available, and they will maintain the option to switch between versions until this issue is fixed. You can switch to the classic rule builder by clicking the three dots at the top of the rule setup page, as illustrated here. I hope this helps!
The Amazon EC2 Image Builder service helps users to build and maintain server images. The images created by EC2 Image Builder can be used with Amazon Elastic Compute Cloud (EC2) and on-premises. Image Builder reduces the effort of keeping images up-to-date and secure by providing a graphical interface, built-in automation, and AWS-provided security settings. Customers have told us that they manage multiple server images and are looking for ways to track the latest server images created by the pipelines.
In this blog post, I walk through a solution that uses AWS Lambda and AWS Systems Manager (SSM) Parameter Store. It tracks and updates the latest Amazon Machine Image (AMI) IDs every time an Image Builder pipeline is run. With Lambda, you pay only for what you use. You are charged based on the number of requests for your functions and the time it takes for your code to run. In this case, the Lambda function is invoked upon the completion of the Image Builder pipeline. Standard SSM parameters are available at no additional charge.
Users can reference the SSM parameters in automation scripts and AWS CloudFormation templates, providing access to the latest AMI ID for their EC2 infrastructure. Consider the use case of updating Amazon Machine Image (AMI) IDs for the EC2 instances in your CloudFormation templates. Normally, you might map AMI IDs to specific instance types and Regions. Then, to update these, you would manually change them in each of your templates. With the SSM parameter integration, your code remains untouched and a CloudFormation stack update operation automatically fetches the latest Parameter Store value.
This solution uses a Lambda function written in Python that subscribes to an Amazon Simple Notification Service (SNS) topic. The Lambda function and the SNS topic are deployed using AWS SAM CLI. Once deployed, the SNS topic must be configured in an existing Image Builder pipeline. This results in the Lambda function being invoked at the completion of the Image Builder pipeline.
When a Lambda function subscribes to an SNS topic, it is invoked with the payload of the published messages. The Lambda function receives the message payload as an input parameter. The Lambda function first checks the message payload to see if the image state is 'available'. If so, it retrieves the AMI ID from the message payload and updates the SSM parameter.
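The handler logic described above can be sketched as follows. This is a minimal sketch, not the blog post's exact code: the parameter name is an assumption, and the message fields (`state.status`, `outputResources.amis[].image`) follow the shape of the Image Builder SNS payload.

```python
import json

PARAMETER_NAME = "/ec2-imagebuilder/latest-ami-id"  # assumed parameter name


def extract_ami_id(message: dict):
    """Return the AMI ID if the image state is AVAILABLE, else None."""
    if message.get("state", {}).get("status") != "AVAILABLE":
        return None
    amis = message.get("outputResources", {}).get("amis", [])
    return amis[0]["image"] if amis else None


def handler(event, context):
    # SNS delivers the published message as a JSON string inside the record.
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    ami_id = extract_ami_id(message)
    if ami_id is None:
        return {"updated": False}

    import boto3  # imported lazily so the module also loads without boto3

    # Overwrite the parameter so consumers always read the newest AMI ID.
    boto3.client("ssm").put_parameter(
        Name=PARAMETER_NAME,
        Value=ami_id,
        Type="String",
        Overwrite=True,
    )
    return {"updated": True, "ami_id": ami_id}
```

Keeping the payload check in a separate pure function makes the state/AMI parsing easy to unit test without mocking the SSM client.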
After deploying the application, note the ARN of the created SNS topic. Next, update the infrastructure settings of an existing Image Builder pipeline with this newly created SNS topic. This results in the Lambda function being invoked upon the completion of the Image Builder pipeline.
After the completion of the image builder pipeline, use the AWS CLI or check the AWS Management Console to verify the updated SSM parameter. To verify via AWS CLI, run the following commands to retrieve and list the tags attached to the SSM parameter:
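The verification can look something like the following. The parameter name here is an assumption; substitute the name your Lambda function writes to.

```shell
# Retrieve the current value of the SSM parameter (the latest AMI ID).
aws ssm get-parameter --name "/ec2-imagebuilder/latest-ami-id"

# List the tags attached to the parameter. Note that the resource type is
# "Parameter" and the resource ID is the parameter name itself.
aws ssm list-tags-for-resource \
    --resource-type "Parameter" \
    --resource-id "/ec2-imagebuilder/latest-ami-id"
```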
This sample code shows how to reference the SSM parameter in a CloudFormation template.
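A minimal sketch of such a template is shown below. The parameter name is an assumption; use the one your pipeline updates. Declaring the AMI as an SSM-parameter-typed CloudFormation parameter means a stack update re-reads the latest value automatically.

```yaml
Parameters:
  LatestAmiId:
    # CloudFormation resolves this from Parameter Store at deploy time.
    Type: 'AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>'
    Default: '/ec2-imagebuilder/latest-ami-id'

Resources:
  AppInstance:
    Type: 'AWS::EC2::Instance'
    Properties:
      InstanceType: t3.micro
      ImageId: !Ref LatestAmiId
```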
In this blog post, I demonstrate a solution that allows users to track and update the latest AMI ID created by the Image Builder pipelines. The Lambda function retrieves the AMI ID of the image created by a pipeline and updates an AWS Systems Manager parameter. This Lambda function is triggered via an SNS topic configured in an Image Builder pipeline.
The solution is deployed using the AWS SAM CLI. I also note how users can reference Systems Manager parameters in AWS CloudFormation templates, providing access to the latest AMI ID for their EC2 infrastructure.
The amazon-ec2-image-builder-samples GitHub repository provides a number of examples for getting started with EC2 Image Builder. Image Builder can make it easier for you to build virtual machine (VM) images.
This release includes plenty of in-builder improvements and fixes. We tackled builder lag issues when adding and removing classes. Duplicating, renaming, and reordering elements should also be much snappier now.
Until now, editing global data (global classes, CSS variables, theme styles, color palettes, etc.) in Bricks could lead to unwanted overwrites if another team member edited the same global data simultaneously.
With the new Global Data Sync enabled for classes under Bricks > Settings > Builder > Global data sync, all global class changes (add, delete, or modify classes) made in any other builder instance are automatically pulled into your builder instance whenever you perform a save.
After introducing the global class manager in Bricks 1.9.5, we now extend the global style management capabilities of Bricks to CSS variables with the introduction of the new global variables manager.
Previously, remote templates were requested every time you opened the template manager in the builder, which was a bit excessive. So now, all remote templates are cached locally on your machine for seven days via IndexedDB.
If you are using a lot of DD echo tags on your Bricks site, you can now take advantage of a more flexible way to whitelist and check the function names you want to call through the echo tag via regex patterns or 100% custom checks. Learn how, plus code examples, at -bricks-code-echo_function_names/#patterns
It also addresses a smaller, possible authenticated issue discovered while working on 1.9.7, which requires a contributor role or above, bad intentions, and certain additional steps to execute code. There is no need to panic or update in the next 5 minutes, but we recommend updating as soon as you have the chance.
One problem we encountered is that when we deploy changes to an org via sfdx force:source:deploy, sometimes a new inactive Process Builder version is spawned, even though the Process Builder has not changed. It doesn't happen for every Process Builder, and it's not consistent: sometimes a deployment spawns a new version of a PB on every run, sometimes it doesn't. The problem results in hitting the 'Maximum flow versions reached' error.
Ideally? Don't deploy Processes or Flows (same metadata type) to persistent orgs via Continuous Integration, because you will hit the limit sooner or later. If you use scratch orgs, they won't be affected, but the persistent org that is your metadata's ultimate destination will be.
If you are an ISV using a 1GP packaging org, you won't be able to delete Flow versions that have been packaged, making this issue extremely serious for users of CI/CD. Additionally, each installed managed package version containing a new Flow version accrues that version in the subscriber's org, so 50 total upgrades will result in a non-upgradeable subscriber org.
I've discussed this issue with the Flow PM and engineering team (I work on managed packages at Salesforce.org) and they are evaluating ways to address it in the future (safe harbor, no timeline available at present).
@ Curious Squirell, I am the Flow PM working with David & the .org team to find ways to resolve this. In most cases, if nothing's changed in the metadata, a deploy should not result in a new version creation. Are you still encountering this issue today? If so, please file a case & give me the case number so that we can investigate. Or, post in the Salesforce Automation group in the Trailblazer community and I can coordinate with you directly.
We had noticed this as well, and the issue for us was that the API version of the manifest or Flow didn't match the target environment. The environment was modifying the Flow each time it was deployed in order to update it to the latest API version, which caused it to appear as a new version.
Specifically, our Flow metadata was API 49.0, and we were deploying to an environment on API 50.0. The changes to Flow added in that version caused Salesforce to modify the metadata each time it was deployed in order to update it to 50.0, creating a new version.
Changing the Flow's API version will definitely create a new version, but I still see new Flow versions created from deploying identical files (same Flow API version that also matches the environment's current API version and the SFDX project JSON file's API version). I believe it's related to when the Flow/PB was created and the specific metadata tags from that era. For example, if a PB was created during v40, it won't matter if you manually update that Flow to v51, because the metadata tags from that older era are different from (but still compatible with) what's currently used.