Mulesoft Level 2


Mary Hargrove

Aug 5, 2024, 7:58:37 AM
to moonsharchamppu
If you enable egress for a domain that involves redirection, such as when salesforce.com redirects to www.salesforce.com, allow egress for both the original target (salesforce.com) and the redirected target (www.salesforce.com).

Given the extra layer of validation, application-level egress rules can add roughly 3 to 10 milliseconds of network latency to your applications' connections. To balance security and performance:


Unless otherwise configured, policies are applied to the entire API by default. However, you can implement an additional level of policy granularity, in which access is controlled based on specific criteria. Policies with this granularity are called resource-level policies.


In Mule 4, resource-level policies support HTTP-based APIs in addition to RAML APIs. You can apply multiple conditions to filter your resources and HTTP methods, using a URI template regular expression to target any number of methods in your API.


A resource-level policy supports Java regular expressions. For example, you can use a wildcard to apply a policy to multiple resources. When you apply the policy to the API, specify the resources to which it applies.


Do not use a placeholder, such as userid, in the regular expression. An expression that contains a placeholder fails to match because the placeholder does not correspond to the actual node. In the case of the example placeholder userid, the node actually looks something like this:
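A hedged illustration of the difference, assuming a hypothetical /users/{userid} resource (the numeric value is made up):

```
Placeholder form (does not match at runtime):
    .*/users/{userid}

What the node actually contains at runtime (illustrative value):
    .*/users/1337

A Java regular expression that matches the real values:
    .*/users/[^/]+
```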


You can add security to create, update, and delete operations while leaving read-only operations unsecured. For example, to apply an HTTP Basic Authentication policy to specific methods and resources, select the POST, PUT, PATCH, and DELETE methods and use the following expression to cover every resource URI of the API:
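A sketch of what that configuration might look like in API Manager (field labels vary by version; .* is the usual match-everything wildcard):

```
Apply to:                          Specific methods & resources
Methods:                           POST, PUT, PATCH, DELETE
URI template regular expression:   .*
Policy:                            HTTP Basic Authentication
```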


You can enforce rate limiting on user-specific operations, with different limits depending on the user action. For example, you can apply rate limiting multiple times, limiting requests more strictly for some resources than for others, such as setting different per-user limits for read, create, and delete operations on the following nodes:


Next, apply the rate limiting policy to specific methods and resources again, this time selecting the POST (create) method and using the same regular expression as before. Configure 50 requests per hour.


Finally, you can apply the rate limiting policy once more to specific methods and resources, selecting the DELETE method and using the same regular expression as before. For example, you can configure 25 requests per 2 hours.


Every app that you build in Studio comes with its own log4j2.xml file. The log contains information about any errors raised in the app (unless you have app logic to handle those errors). It also contains anything you want to explicitly log, if you build the logic in the app.


Mule automatically logs multiple messages and specific elements in your app flows to help you debug and keep track of events. You can also include the Logger component anywhere in a flow and set it up to output any message you want.
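For instance, a Logger component placed in a flow might look like this in the app's XML (the flow name and the orderId field are illustrative):

```xml
<flow name="orderFlow">
  <!-- Logs an INFO-level message each time an event passes through this point -->
  <logger level="INFO"
          message="#['Received order: ' ++ (payload.orderId default 'unknown')]"/>
</flow>
```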


Mule uses SLF4J, a logging facade that discovers and uses a logging strategy from the classpath, such as Apache Log4j 2 or the JDK logger. By default, Mule includes Log4j 2, which is configured with a file called log4j2.xml.


By default, Mule logs messages asynchronously. When logging synchronously, the execution of the thread that is processing your message is interrupted to wait for the log message to be fully handled before it can continue:
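In Log4j 2 terms, the difference shows up in the <Loggers> section of log4j2.xml; a minimal sketch (the appender name "file" is assumed to be defined elsewhere in the configuration):

```xml
<Loggers>
  <!-- Asynchronous (Mule's default): the processing thread hands the event to a
       background logging thread and continues immediately. To log synchronously
       instead, use a plain <Root> element here. -->
  <AsyncRoot level="INFO">
    <AppenderRef ref="file"/>
  </AsyncRoot>
</Loggers>
```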


When asynchronous logging is used, some actions might not be logged if there is a system crash. This situation occurs because log writing is performed on a separate thread that runs independently of other actions. See Exception Handling with Asynchronous Logging for information on mitigating this issue.


The chart below shows the performance difference between synchronous and asynchronous logging, and how much latency increased as more concurrent messages were added. In this test, an app logged about one million messages, using an increasing number of threads on each run. Each transaction resulted in 1000 messages.


The default configuration defines all loggers, including the root logger, as asynchronous. You can override this configuration at the domain or app level. To override this configuration at the app level, add a logConfigFile entry to the mule-artifact.json file. For example:
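A sketch of the app-level override in mule-artifact.json, assuming a custom configuration file named log4j2-custom.xml bundled with the app (both the file name and the minMuleVersion value are illustrative):

```json
{
  "minMuleVersion": "4.4.0",
  "logConfigFile": "log4j2-custom.xml"
}
```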


To log the Runtime Manager Agent state in a location other than the default mule_agent.log file, configure the $MULE_HOME/conf/log4j2.xml file to include a new Log4j 2 Appender called mule-agent-appender. If included, the Runtime Manager Agent plugin uses this appender to log its state.
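For example, a RollingFile appender with that exact name could be added to the <Appenders> section of $MULE_HOME/conf/log4j2.xml (paths, pattern, and rollover size are illustrative):

```xml
<Appenders>
  <!-- The Runtime Manager Agent plugin logs its state here if this appender exists -->
  <RollingFile name="mule-agent-appender"
               fileName="${sys:mule.home}/logs/mule_agent_custom.log"
               filePattern="${sys:mule.home}/logs/mule_agent_custom-%i.log">
    <PatternLayout pattern="%d [%t] %-5p %c - %m%n"/>
    <SizeBasedTriggeringPolicy size="10 MB"/>
    <DefaultRolloverStrategy max="5"/>
  </RollingFile>
</Appenders>
```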


There is a performance-reliability trade-off between asynchronous and synchronous logging. If the risk of losing log messages is a serious concern, configure your loggers to be synchronous. You can also mix synchronous and asynchronous logging.


This issue typically happens because another log4j2.xml file on your classpath is being picked up before your modified one. To find out which configuration file Log4j 2 is using, add the following switch when starting Mule (or add it to the container startup script if you are embedding Mule):
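If the switch in question is the standard Log4j 2 status-logger flag (an assumption; check your runtime's documentation), it can be passed through to the JVM with the -M prefix when using the standalone startup script:

```
$ ./bin/mule start -M-Dlog4j2.debug=true
```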


This switch writes the Log4j 2 startup information, including the location of the configuration file in use, to stdout. You must remove the conflicting configuration file before your modified configuration can take effect.


This is a really interesting topic you have raised. I am happy to be proved wrong, but in the Flexera FNMS content library, the ARL does have a sort of "hierarchy" (parent-child) logic, and within that logic (such as Title Precedence or SoftwareTitleSuite), the parent does cover the installations reported from the child app.


I don't recall the SKU library having any "parent-child" relationships, and the product would act differently if it did. What that means is that it purely depends on which "application" the Flexera SKU library team decides to link inside the SKU: whether, say, the parent SKU should contain the child application from the child SKU as well.


Personally, I am more interested to hear what you have found in the vendor's agreement for this scenario: whether the parent SKU (the parent entitlement) can indeed cover the child application/license consumption. If that is documented clearly on the vendor's site, I think it is reasonable to ask the SKU library team to update the SKU-to-ARL application linkage to reflect that accordingly.


We have no guidance from the vendor or reseller at this point. This is not an issue that is unique to Mulesoft/Salesforce. The worst offender is likely Cisco in my personal experience. I had occasion to look at the raw PO data for this SKU today: L-LIC-DNA-ADD. We had already submitted this SKU to the content team, and they have processed it as Maintenance. However, looking at the PO data I now see:


Thanks @dmathias, this won't be easy for the SKU library team to find out until users inform them, with the related information attached. At the product level, I believe you will need to include the child application along with the parent SKU's application in your license (so it becomes a bundle license) if that fits here: a manual process until the SKU update can be accepted by the library team, I hope.


Yeah. I imagine two scenarios: one where we know the BOM, as we do with the examples provided; and another where the BOM is not known, so submissions to the content teams get profiled incorrectly. This, of course, complicates purchase data processing. Automation may not be possible without verifying with the reseller or another authoritative source that the SKU data is at the "lowest" level of the BOM.


Notice that by using 1 as the index, the script returned the second item in the Array. This is because Arrays in DataWeave are zero-indexed; the item in the first position of the Array has an index of 0, the second has an index of 1, and so on.
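A minimal DataWeave sketch of zero-based indexing (the input Array is made up):

```dataweave
%dw 2.0
output application/json
var fruits = ["apple", "banana", "cherry"]
---
fruits[1]  // selects "banana", the second item (index 1)
```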


If you need multiple sequential values from an Array, DataWeave allows you to select a range of values with the range selector. Instead of returning a single value as the index selector does, it will return an Array of values.
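A sketch of the range selector on the same kind of input (values are illustrative):

```dataweave
%dw 2.0
output application/json
var fruits = ["apple", "banana", "cherry", "date"]
---
fruits[1 to 2]  // returns an Array of the second and third items: ["banana", "cherry"]
```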


There are two more commonly used selectors that are important to learn: the multi-value selector and the descendants selector. Both return multiple values for a given key, but they function in different, complementary ways.


The multi-value selector also works with Arrays. With Objects, the multi-value selector only matched keys on the first level of nesting. With Arrays, the multi-value selectors do the same thing for each top-level Object in the Array.


DataWeave goes through each top-level Object in the Array and gets the value of any key that matches. In this case, that key is number. Since the multi-value selector is inspecting keys, this only works when the Array in question contains Objects. When working with Arrays, you can think of the multi-value selector as doing payload[0].*number, then payload[1].*number, and so on, collecting all those values into the Array that gets returned.
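The behavior described above can be sketched in DataWeave (the input data is made up):

```dataweave
%dw 2.0
output application/json
var data = [
  { number: "111", name: "first" },
  { number: "222", name: "second" }
]
---
{
  // Multi-value selector: collects the value of every matching key
  // from each top-level Object in the Array
  multiValue: data.*number,
  // Descendants selector: searches every level of nesting for the key
  descendants: data..name
}
```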


Boeing Commercial Airplanes (BCA) is seeking a highly skilled and experienced Mid-Level or Senior level (Level 3, 4) MuleSoft Data System Engineer to join our dynamic Reliability Data Management (RDM) team in Seal Beach, CA. As a Data Engineer specializing in MuleSoft within our Reliability Data Management team, you will play a pivotal role in designing, developing, and maintaining data integration solutions that enable seamless data flow, transformation, and analysis. Your work will directly impact the quality and accuracy of data used for optimizing aircraft reliability and customer support operations. You will collaborate with cross-functional teams to ensure the success of data-driven initiatives and contribute to the continuous improvement of our data management processes.
