Best Free Apex Zen Script


Romilda Tiger

Aug 3, 2024, 4:20:58 PM
to chamroricul

Data Loader, Import Wizard, Batch Class, Queueable Class, Execute Anonymous, etc.: anything that can modify every record in the database in a single pass. Which approach is best depends on whether you have 5k records or 50m.
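For smaller volumes, a one-off Execute Anonymous block is often all you need. A minimal sketch, assuming hypothetical object and field names; the idea is just a "touch" update that lets the existing trigger/flow recalculate the values:

    // Hypothetical sketch: the object, field, and criteria are placeholders.
    // LIMIT 10000 keeps the update within the per-transaction DML row limit.
    List<Account> records = [SELECT Id FROM Account WHERE Needs_Recalc__c = true LIMIT 10000];
    update records;   // re-saving the records fires the existing trigger/flow again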

This addresses that option and any other ETL (Extract, Transform, Load) tool. You only need to export the ID values and then perform an update. You don't need to correct anything yourself (that's why you built the trigger/flow/etc., right?).

You could do that, or you could build a Swiss Army Knife Batch Apex class. That's what we use in production, and it's pretty much only used for one-off purposes. Having a plug-and-play Batchable boilerplate lets us run one-off commands and, in a more recent iteration, also lets us disable most triggers if we want to (our trigger code is designed to skip itself when a static variable is set). Obviously not in your case, though, since you want the records to be updated via the trigger.
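As a rough sketch of that kind of boilerplate (not the actual production class; the query, the TriggerControl class, and the bypass flag are hypothetical names):

    // Hypothetical static switch; in a real org this lives in its own class
    // and every trigger handler checks it before running.
    public class TriggerControl {
        public static Boolean bypass = false;
    }

    // "Swiss Army Knife" batch: run an arbitrary query and re-save the results.
    public class AdHocTouchBatch implements Database.Batchable<SObject> {
        private final String query;
        private final Boolean bypassTriggers;

        public AdHocTouchBatch(String query, Boolean bypassTriggers) {
            this.query = query;
            this.bypassTriggers = bypassTriggers;
        }

        public Database.QueryLocator start(Database.BatchableContext bc) {
            return Database.getQueryLocator(query);
        }

        public void execute(Database.BatchableContext bc, List<SObject> scope) {
            TriggerControl.bypass = bypassTriggers; // leave false when the triggers should fire
            update scope;                           // plain re-save; the triggers/flows do the real work
        }

        public void finish(Database.BatchableContext bc) {}
    }

    // One-off usage from Execute Anonymous:
    // Database.executeBatch(new AdHocTouchBatch('SELECT Id FROM Account WHERE Needs_Recalc__c = true', false), 200);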

That's usually what's meant, more 2 than 3, though other platforms, such as Informatica, also let you devise scripts of a sort for doing specific updates in bulk. Salesforce has been around long enough now that all kinds of people have built all kinds of solutions.

So we've basically moved to #1 in your list, using Data Loader, because it gives us the greatest flexibility in determining the criteria for which records are included. But instead of manually preparing the calculated values to load, we load a flag that causes the system to calculate them.

Batch Apex, on the other hand, requires you to specify your criteria in Apex (absent some clever custom metadata options, etc.), so while our org began there, we moved away from it. Our scheduled job for #3 runs a query against open fulfillment actions and publishes platform events to run each one. This gives us the ability to replay actions that fail due to transient errors, and if there is a hard failure (like a CPU timeout), it only stops a small subset of actions from running. When Batch Apex and Queueables hit a hard failure, they kill the entire process, including any good records that follow the bad one.
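A sketch of that platform-event pattern, with hypothetical object and field names (Fulfillment_Action__c, Fulfillment_Action__e, etc.) rather than the real implementation:

    // Sketch only: the custom object, the platform event, and their fields are
    // placeholder names standing in for the real metadata.
    public class FulfillmentActionScheduler implements Schedulable {
        public void execute(SchedulableContext sc) {
            List<Fulfillment_Action__e> events = new List<Fulfillment_Action__e>();
            for (Fulfillment_Action__c action :
                    [SELECT Id FROM Fulfillment_Action__c WHERE Status__c = 'Open']) {
                events.add(new Fulfillment_Action__e(Action_Id__c = action.Id));
            }
            // A platform event trigger subscribed to Fulfillment_Action__e runs each
            // action; a hard failure only affects that batch of events, and failed
            // events can be replayed rather than killing the whole run.
            EventBus.publish(events);
        }
    }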

My client used the Insert Headers and Footers plugin to add code for Google Tag Manager. For some reason, when you view the website on a mobile device, the code appears at the very top. View the website at

She does not need this code and did not use the plugin for anything else, so I had her deactivate the plugin and then delete it, hoping that would remove whatever code it had inserted, but nothing has changed.

Thanks for writing in! You have inserted the code incorrectly. The best way to do this is to use a 3rd party Insert Headers and Footers plugin. You can check out this thread as a reference: -pixels-header-code/19764/2

While I was viewing the source code to see if the Facebook script was there, I pasted a copy into Dreamweaver and found a block of CSS in the head section that includes the offending code. You can view a screenshot at
Tom

The script is output below the CSS, so editing the CSS or going to that file will not fix it. My assumption is that the code is attached to the wp_head hook, and that its hook priority happens to place it right after the CSS enqueue. Please provide your FTP login credentials.

I have investigated the issue, and it turned out to be coming from the Google Analytics plugin. I disabled that plugin and moved the Google Analytics code into the Insert Headers and Footers plugin, and the issue still exists. On closer inspection, it is coming from your code itself.

The line containing =UA-114047771-1 is displaying on the front end. I suspect the GA code is not correct. Please log in to the Google Analytics account and regenerate the embed code. This is to double-check that you are inserting the correct code from Google Analytics, because at the moment it appears the GA code has been modified.

Just to be sure: are you suggesting that I use the Insert Headers and Footers plugin to insert the JavaScript tracking snippet offered at the link you provided ( -analytic-plugin-strips-code/26196/21)?

APEX 21.2
Supporting objects are an under-utilized feature of APEX. We're starting to use them more.
Question - When an app with supporting object scripts is imported using SQL*Plus and the export was generated with p_supporting_objects = I (install on import), how does APEX determine whether the Install scripts are run or the Upgrade scripts? I see that the Upgrade scripts have a condition, a SQL statement that should return TRUE in order for the Upgrade scripts to execute.
But an import always deletes an application before importing it, and supporting object scripts are executed after the import, so when would the Install set of scripts run?
Application Version 1 is deployed with Install scripts to create tables, views, packages, etc. - Which scripts are used, Install or Upgrade?
Application Version 2 is deployed with changes to app metadata and database objects - Which scripts are used, Install or Upgrade?
...
In general, do we need to delete the supporting objects scripts from prior deployments and only retain the most recent version that goes with the app code? Or is there a way to retain all the history in the app export file and comment out scripts from prior versions?
The Install feature has an option to create a script based on a database object (table, view, package, etc.); the Upgrade feature does not. Why is this?
There is a Refresh button to refresh the source code when a script is based on a database object's DDL. Is there a way to keep this always refreshed? Clicking a button is a manual step that is best avoided.
Can someone please explain the lifecycle of how these scripts are supposed to be used?
Thanks

The only changes I needed to make to the PL/SQL to get it working in a database package were that bind variable references (e.g. :P1_CUSTOMER_NAME) had to be changed to use the V() (for strings and dates) or NV() (for numbers) functions, and the Conditions on the Processes had to be converted into the equivalent logic in PL/SQL. Generally, I would retrieve the values of page items into local variables before using them in a query.
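For example (the page items and the CUSTOMERS table are illustrative names, not from the original application):

    -- In the APEX page process the code referenced bind variables directly:
    --   UPDATE customers
    --      SET customer_name = :P1_CUSTOMER_NAME
    --    WHERE customer_id   = :P1_CUSTOMER_ID;

    -- The same logic inside a database package body:
    PROCEDURE save_customer IS
      l_name customers.customer_name%TYPE := v('P1_CUSTOMER_NAME');   -- strings and dates: V()
      l_id   customers.customer_id%TYPE   := nv('P1_CUSTOMER_ID');    -- numbers: NV()
    BEGIN
      -- the old Process Condition becomes ordinary PL/SQL logic
      IF l_id IS NOT NULL THEN
        UPDATE customers
           SET customer_name = l_name
         WHERE customer_id   = l_id;
      END IF;
    END save_customer;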

Once all that code is compiled on the database, I can now make a change to a schema object (e.g. drop a column from a table, or modify a view definition) and see immediately what impact it will have across the application. No more time bombs waiting to go off in the middle of a customer demo. I can also query ALL_DEPENDENCIES to see where an object is being used.
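For example, a query along these lines (the object name is a placeholder) lists everything that depends on a given table:

    SELECT owner, name, type
      FROM all_dependencies
     WHERE referenced_name = 'CUSTOMERS'   -- placeholder object name
       AND referenced_type = 'TABLE';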

Another change I made was to move most of the logic embedded in report queries into views on the database. This led to more efficiencies as logic used in a few pages here and there could now be consolidated in a single view.

My current client has a large number of APEX applications, one of which is a doozy. It is a mission-critical and complex application in APEX 4.0.2 used throughout the business, with an impressively long list of features, with an equally impressively long list of enhancement requests in the queue.

So we pushed back a bit, and the terms of the project were changed so that development of project A would be done first, with development of project B following straight after. So at least now we know that v1.2 can be built on top of v1.1 with no merge required. However, we still had the problem that production defect fixes needed to be made on a separate version of the application in dev, and that they needed to continue being deployed to sit/uat/prod without carrying any changes from our projects.

4. What would be really cool would be if the export scripts from APEX were structured in such a way that existing source code merge tools could merge different versions of the same APEX script and produce a usable APEX script. This already works quite well for our schema scripts (table scripts, views, packages, etc.), so why not?

That this situation is, in fact, not the best of all possible worlds is something that we can all learn and learn again. Have a look, and see what you think: dbdebunk.blogspot.com.au.

Alternatively, one could still just loop through all the indices from the first to the last index; but the problem with this approach is that if an index is not found in the array, it will raise the NO_DATA_FOUND exception. Well, Method B simply catches the exception:
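A minimal sketch of Method B, assuming an associative array a and a placeholder process procedure (the original code isn't shown in this excerpt):

    FOR i IN a.FIRST .. a.LAST LOOP
      BEGIN
        process(a(i));            -- raises NO_DATA_FOUND if index i was never assigned
      EXCEPTION
        WHEN NO_DATA_FOUND THEN
          NULL;                   -- gap in the array: just skip it
      END;
    END LOOP;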

This code effectively works the same (with one important proviso*) as Method A. The difference, however, is in terms of relative performance. This method is much faster than Method A, if the array is relatively dense. If the array is relatively sparse, Method A is faster.

The problem with this approach is that it effectively checks the existence of i in the array twice: once for the EXISTS check, and if found, again when actually referencing a(i). For a large array which is densely populated, depending on what processing is being done inside the loop, this could have a measurable impact on performance.
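The EXISTS-based loop described here would look something like this (again a sketch rather than the post's original code):

    FOR i IN a.FIRST .. a.LAST LOOP
      IF a.EXISTS(i) THEN         -- first lookup: does this index exist?
        process(a(i));            -- second lookup: actually fetch the element
      END IF;
    END LOOP;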

Once, by mistake, I copied all of the files from the temp directory to the work area, and even mistakenly added them all to the Git staging area. After this unintentional blunder, I typed git status. To my surprise, despite all the files having a later timestamp, Git had quietly skipped staging any file whose logical contents had not actually changed! This productivity gift meant I could simply bulk-copy all the exported files to my work area without manually cherry-picking the changed files myself. Bravo, Git!
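To illustrate (the directory names are hypothetical):

    cp /tmp/apex_export_temp/* ~/work/my-apex-app/
    cd ~/work/my-apex-app
    git add .
    git status   # only files whose content differs from HEAD show up as staged changes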

While exporting and importing APEX apps in the browser is straightforward, I researched how I could do it from the command line instead. This felt like an enabling ingredient for automating more of my common workflow. I learned that SQLcl supports an apex export command, which looked promising.
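A minimal example of the command; the application id and staging directory come from the result described below, and the exact invocation is my assumption rather than a quote from the post:

    -- inside SQLcl, connected as the application's parsing schema, with the
    -- current working directory set to /tmp/f1234_stage:
    apex export -applicationid 1234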

The result of running the apex export command above is the single file f1234.sql containing the entire application source for application 1234, and this file gets created in the /tmp/f1234_stage directory.
