Sfdx Force Source Deploy


Bertoldo Beyer

Aug 4, 2024, 10:54:09 PM8/4/24
to tonlasilqui
The file and directory structures are different. force:source refers to the new "source" format, while force:mdapi refers to the old "metadata" format. As an example of the changes, in metadata format a custom object is a single file, while in source format an object has a core metadata file for the object itself, a folder full of files that represent each field, another folder for list views, etc.
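As an illustration, here is how a hypothetical custom object (Invoice__c is just a made-up name) might look on disk in each format:

```
# Metadata format: one file per object
src/objects/Invoice__c.object

# Source format: the object is decomposed
force-app/main/default/objects/Invoice__c/
├── Invoice__c.object-meta.xml
├── fields/
│   ├── Amount__c.field-meta.xml
│   └── Status__c.field-meta.xml
└── listViews/
    └── All.listView-meta.xml
```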

The deploy and retrieve commands are for anything that is not a Scratch Org (including Production, Sandbox, and Developer Edition orgs). The difference here is that Scratch Orgs "track" changes, allowing developers to upload and download a delta (i.e. just the changes). Other orgs do not have this extra feature, so you can only download and upload specified files, regardless of whether anything has changed. This means that it will typically take longer to upload and download changes, and it is easier to miss changes.


The mdapi commands are to allow migration from existing source code repos to the newer format over time. You should prefer to use the force:source commands whenever possible, and use the force:mdapi commands only when dealing with legacy code repositories that have not yet been converted to "source" format.
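A minimal sketch of that migration path, assuming your legacy repo keeps its metadata-format sources in a ./src folder:

```shell
# Convert an old metadata-format project (in ./src) into source format
sfdx force:mdapi:convert --rootdir src --outputdir force-app

# The reverse also exists, if you ever need metadata format again:
sfdx force:source:convert --rootdir force-app --outputdir mdapi_output
```

After the one-time convert, you can stay in source format and use the force:source commands from then on.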


Migrating to source format also means you'll get the advantage of 2GP (Second generation packages), which allows you to create versions, dependencies, etc, so you can just install packages instead of uploading metadata every time. This greatly reduces deployment times and grants additional features, like the ability to delete an obsolete field by removing it from a package.
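A rough sketch of the 2GP workflow (package name, alias, and org alias here are all hypothetical):

```shell
# Create the package once
sfdx force:package:create --name "MyFeature" --packagetype Unlocked --path force-app

# Create a new version per release
sfdx force:package:version:create --package "MyFeature" --installationkeybypass --wait 10

# Install a specific version in a target org instead of re-deploying metadata
sfdx force:package:install --package "MyFeature@1.0.0-1" --targetusername MyOrg --wait 10
```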


You can get your goals done with nothing other than force:source (and optionally force:package) command(s). You can use a Developer Edition org, or you might even decide to eventually migrate to Scratch Orgs.


Thanks for the reply.

I was aware of the output from force:apex:test:run. However, in June the sfdx team added the ability to export reports from the force:source:deploy command. I was hoping to avoid another time-consuming command and just feed the output from that into SonarCloud. However, it seems that none of the files output from force:source:deploy match the format you require.


OK, let's start by saying that this is NOT the rule-book for how to do this properly, this is me, documenting what is working, and knowing I have a lot to learn. (Which is the same as every page on this Wiki, and the reason for a wiki rather than a blog). But if this helps one person avoid using Change Sets or having to learn ANT then it's worth it.


Copy ALL the metadata and folders required for just that feature, as if you were building a Change Set. Now, there is no functionality like dependencies in Change Sets (the only reasonably redeeming feature of Change Sets), so as you start to learn what is required, you may want to build a Change Set and then use that to double-check your folder has everything needed.


Now, this works if you are going to create the folder and deploy right away. It is a terrible idea as soon as something changes and you need to update the folder. This is where Packages help, but I haven't got the gist of packages yet, so start with Bonny Hinners' ideas on how to set all this up (it seems that you have to do retrieves from packages in metadata format, and I don't want to stuff around with converting, but maybe I need to just get over it).


The reason I like using a folder that I have created, rather than deploying directly from a package, is that I can then modify the folder easily without affecting the package. If something fails in my deployment, I can remove it from my deployment folder and try the deployment again. It's easier than trying to fix everything so you can deploy in one shot.


What does it do... well, it takes the Source format, converts it to Metadata format, zips it up, uploads it to Salesforce and starts a test deployment. WOW. Even ANT, which is much easier than Change Sets, requires you to stuff around with Zip files.
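The whole convert/zip/upload/test sequence above collapses into one command; a sketch, assuming a source-format folder called my-feature and an org alias MyOrg:

```shell
# Convert, zip, upload and run local tests in one shot
sfdx force:source:deploy --sourcepath my-feature --testlevel RunLocalTests --targetusername MyOrg
```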


Now, for larger packages, the status will be updated every few seconds and the errors will start appearing. It's up to you whether to cancel and fix those errors, or let it run until ALL the errors are displayed. I would let it run. I don't yet know how to get a list of errors that is formatted better than what is output.


There are many scenarios. For example, you could read this json file using a node.js script and do some additional processing. You could also store this file in an external application and create deployment metrics, like how many deployments succeed in a week.
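A minimal sketch of that processing idea, assuming jq is installed; the field names come from the deploy result's JSON shape:

```shell
# Capture the deploy result as JSON, then pull a couple of fields out of it
sfdx force:source:deploy --sourcepath my-feature --json > deploy-result.json
jq '.result.status, .result.numberComponentsDeployed' deploy-result.json
```

You could just as easily read deploy-result.json from a node.js script and ship the numbers to a dashboard.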


The idea is that before you run this command, you would have set up a default alias or default org. This tells the CLI to run any subsequent commands against that org unless another org is specified.
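A sketch of that setup (MyOrg is a hypothetical alias):

```shell
# Authorize an org and give it an alias
sfdx force:auth:web:login --setalias MyOrg

# Make it the default, so later commands can omit --targetusername
sfdx config:set defaultusername=MyOrg
```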


Basically, I run the Salesforce CLI command and process the result with JavaScript. Then I can do whatever I want with that result. This is a great example of how to combine the Salesforce CLI with JavaScript!


A validate-only deployment is where you send your changes to Salesforce, asking it to check if everything would deploy successfully, without actually applying the changes to the org. Think of it as a dry-run.
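A sketch of the validate-then-quick-deploy pair; the request id placeholder is whatever the validate step returns:

```shell
# Validate only (--checkonly): tests run, but nothing is applied to the org
sfdx force:source:deploy --sourcepath my-feature --checkonly --testlevel RunLocalTests

# Later, quick-deploy the already-validated request without re-running tests
sfdx force:source:deploy --validateddeployrequestid <validated-request-id>
```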


Using validate-only deployments is a best practice that can help you catch errors early and reduce the risk of deployment failures. Once validated, leveraging quick deployments can significantly speed up the deployment process. The SFDX CLI makes these tasks straightforward, providing a seamless experience for Salesforce DevOps.


Note: Always ensure you have backups and have conducted necessary tests in staging environments before deploying to production. Validate-only and quick deployments are tools to assist you, but thorough testing remains paramount.


The Salesforce CLI is a powerful command line interface that simplifies development and build automation when working with your Salesforce org.

Over the past few years I have been using many sfdx commands, from deploying metadata to running code snippets.


Pro tip: Make sure your pipeline works before implementing incremental deployments. Otherwise it will just make it harder to debug your pipeline. It's also important to implement a way to switch back to full deployment in case the incremental deployment does not behave as expected.


Node v16.20.0 or above is required. To check if Salesforce CLI runs under a supported node version for SGD, run sfdx --version. You should see a node version above v16.20.0 to use SGD.


Because this plugin is not signed, you will get a warning saying that "This plugin is not digitally signed and its authenticity cannot be verified". This is expected, and you will have to answer y (yes) to proceed with the installation.
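The installation step looks like this; the allowlist file is an optional way to pre-answer the unsigned-plugin prompt in CI (its path can vary by OS and CLI version, so treat it as an assumption to verify):

```shell
# Install the sfdx-git-delta plugin; answer "y" when warned it is unsigned
sfdx plugins:install sfdx-git-delta

# Optionally allowlist it so CI installs do not block on the prompt
echo '[ "sfdx-git-delta" ]' > "$HOME/.config/sfdx/unsignedPluginAllowList.json"
```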


In CI/CD pipelines, for most CI/CD providers, the checkout operation fetches only the last commit of the branch currently evaluated. You need to fetch all the needed commits, as the plugin needs to have the branch to compare from as well (see the example for the GitHub Actions checkout here). If you use -n (--include) with metadata contained inside files, you will need to have the full repo locally for the command to fully work.


In CI/CD pipelines, branches are not checked out locally when the repository is cloned, so you must specify the remote prefix. If you do not specify the remote in a CI context, the git pointer check will raise an error (as the branch is not created locally). This applies to both --from and --to parameters, as they both accept git pointers.


The plugin is compatible with git LFS. It will be able to read content from LFS locally. It is the user's responsibility to ensure LFS content is present when the plugin is executed. /!\ The plugin will not fetch content from the LFS server /!\


The --from parameter is the base commit (the first, the oldest, the closest); the --to parameter is the target commit (the last, the youngest, the farthest). If you want to deploy incrementally the content of a PR, the --from parameter will be the base branch the PR branch wants to merge into, and the --to parameter will be the PR branch.
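For the PR case above, a typical invocation looks like this (assuming the base branch is origin/main and the PR branch is checked out as HEAD):

```shell
# Build package.xml / destructiveChanges.xml from the diff between two git pointers
sfdx sgd:source:delta --from "origin/main" --to "HEAD" --output "."
```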


2) A destructiveChanges.xml file, inside a destructiveChanges folder. This destructiveChanges.xml file contains just the removed/renamed metadata to delete from the target org. Note: the destructiveChanges folder also contains a minimal package.xml file, because deploying destructive changes requires a package.xml (even an empty one).


Note: it is also possible to generate a source folder containing added/changed metadata with the --generate-delta (-d) parameter. See the "Advanced use-cases" section for more examples.


However, keep in mind that the above command will fail if the destructive change was supposed to be executed before the deployment (i.e. as --predestructivechanges), or if a warning occurs during deployment. Make sure to protect your CI/CD pipeline from those scenarios, so that it doesn't get stuck by a failed destructive change.
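Putting the two generated artifacts together, a deployment with destructive changes can be sketched as:

```shell
# Deploy the generated package and apply deletions after the deployment
sfdx force:source:deploy --manifest package/package.xml \
  --postdestructivechanges destructiveChanges/destructiveChanges.xml

# Use --predestructivechanges instead if deletions must happen before the deploy
```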


One example is to speed up object deployments: the package.xml approach will deploy the entire sub-folder for a given object. Having a copy of the actual sources added/modified allows you to deploy only those components.


The --ignore [-i] parameter allows you to specify an ignore file used to filter out elements of the diff. SGD ignores every diff line matching a pattern from the ignore file specified with --ignore [-i]: package.xml generation, destructiveChanges.xml generation, and --generate-delta will all skip those lines.
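A sketch of the ignore mechanism; the file name .sgdignore and the patterns inside it are made up for illustration (the format is .gitignore-style):

```shell
# Hypothetical .sgdignore contents:
#   force-app/main/default/profiles/**
#   force-app/main/default/classes/Legacy*.cls

sfdx sgd:source:delta --from "origin/main" --to "HEAD" --output "." --ignore .sgdignore
```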
