Cause: You created a patch policy in Quick Setup, and some of your managed nodes already had an instance profile attached (for EC2 instances) or a service role attached (for non-EC2 machines). However, you didn't select the Add required IAM policies to existing instance profiles attached to your instances check box, as shown in the following image.
When you create a patch policy, an Amazon S3 bucket is also created to store the policy's configuration baseline_overrides.json file. If you don't select the Add required IAM policies to existing instance profiles attached to your instances check box when creating the policy, the IAM policies and resource tags that are needed to access baseline_overrides.json in the S3 bucket are not automatically added to your existing IAM instance profiles and service roles.
Solution 1: Delete the existing patch policy configuration, then create a replacement, making sure to select the Add required IAM policies to existing instance profiles attached to your instances check box. This selection applies the IAM policies created by this Quick Setup configuration to nodes that already have an instance profile or service role attached. (By default, Quick Setup adds the required policies to instances and nodes that do not already have instance profiles or service roles.) For more information, see Automate organization-wide patching using a Quick Setup patch policy.
Solution 2: Manually add the required permissions and tags to each IAM instance profile and IAM service role that you use with Quick Setup. For instructions, see Permissions for the patch policy S3 bucket.
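As a sketch only, adding a policy and a tag to an existing role from the AWS CLI looks like the following. The role name, policy ARN, and tag key/value here are placeholders, not the exact names Quick Setup uses; substitute the values given in the patch policy S3 bucket permissions documentation.

```
# Illustrative only: replace the role name, policy ARN, and tag with the
# values required for your Quick Setup patch policy configuration.
aws iam attach-role-policy \
    --role-name MyInstanceProfileRole \
    --policy-arn arn:aws:iam::123456789012:policy/my-patch-policy-s3-access

aws iam tag-role \
    --role-name MyInstanceProfileRole \
    --tags Key=ExamplePatchPolicyTag,Value=example
```

Repeat for each instance profile role and service role that your patch policy targets.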
Possible cause: If more than one invocation of AWS-RunPatchBaseline occurs at a time, they can conflict with one another, causing patching tasks to fail. This might not be indicated in patching logs.
To check whether concurrent patching operations might have interrupted each other, review the command history in Run Command, a capability of AWS Systems Manager. For a managed node with a patching failure, check to see if multiple operations attempted to patch the machine within 2 minutes of one another. This scenario can sometimes cause a failure.
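For example, you can list recent AWS-RunPatchBaseline invocations from the AWS CLI and compare their timestamps. The query projection below is illustrative; the command requires credentials with Systems Manager read permissions.

```
# List recent invocations of AWS-RunPatchBaseline so you can check whether
# two of them ran against the same node within about 2 minutes of each other.
aws ssm list-commands \
    --filters key=DocumentName,value=AWS-RunPatchBaseline \
    --query 'Commands[].{Id:CommandId,Requested:RequestedDateTime,Status:Status}'
```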
Solution: If you determine that patching failed because of competing patching operations on the same managed node, adjust your patching configurations to avoid this occurring again. For example, if two maintenance windows specify overlapping patching times, remove or revise one of them. If a maintenance window specifies one patching operation but a patch policy specifies a different one for the same time, consider removing the task from the maintenance window.
Problem: When reviewing the patching compliance details generated after a Scan operation, the results include information that doesn't reflect the rules set up in your patch baseline. For example, an exception you added to the Rejected patches list in a patch baseline is reported as Missing. Or patches classified as Important are listed as missing even though your patch baseline specifies Critical patches only.
When a Scan operation runs, it overwrites the compliance details from the most recent scan. If you have more than one method set up to run a Scan operation, and they use different patch baselines with different rules, they will result in differing patch compliance results.
Solution: To avoid unexpected patch compliance results, we recommend using only one method at a time for running the Patch Manager Scan operation. For more information, see Avoiding unintentional patch compliance data overwrites.
Cause 1: Two commands to run AWS-RunPatchBaseline were running at the same time on the same managed node. This creates a race condition that results in the temporary file patch-baseline-operations* not being created or accessed properly.
Solution 1: Ensure that no maintenance window has two or more Run Command tasks that run AWS-RunPatchBaseline with the same Priority level and that run on the same target IDs. If this is the case, reorder the priority. Run Command is a capability of AWS Systems Manager.
Solution 2: Ensure that only one maintenance window at a time is running Run Command tasks that use AWS-RunPatchBaseline on the same targets and on the same schedule. If this is the case, change the schedule.
Solution 3: Ensure that only one State Manager association is running AWS-RunPatchBaseline on the same schedule and targeting the same managed nodes. State Manager is a capability of AWS Systems Manager.
Solution: Ensure that no State Manager associations, maintenance window tasks, or other configurations that run AWS-RunPatchBaseline on a schedule target the same managed node at around the same time.
Solution: Update your network configuration so that S3 endpoints are reachable. For more details, see information about required access to S3 buckets for Patch Manager in SSM Agent communications with AWS managed S3 buckets.
Problem: You have attempted to exclude certain packages by specifying them in the /etc/yum.conf file, in the format exclude=package-name, but they aren't excluded during the Patch Manager Install operation.
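For reference, the exclusion format described above belongs in the [main] section of /etc/yum.conf; the package names below are placeholders.

```ini
[main]
# Packages (glob patterns allowed) that yum should skip during updates
exclude=kernel* example-package
```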
Cause: This message doesn't indicate an error. Instead, it's a warning that the older version of Python distributed with the operating system doesn't support TLS Server Name Indication. The Systems Manager patch payload script issues this warning when connecting to AWS APIs that support SNI.
Solution: To troubleshoot any patching failures when this message is reported, review the contents of the stdout and stderr files. If you haven't configured the patch baseline to store these files in an S3 bucket or in Amazon CloudWatch Logs, you can locate the files directly on your Linux managed node.
Cause: The curl tool in use on your system lacks the permissions needed to write to the filesystem. This can occur when the package manager's default curl tool was replaced by a different version, such as one installed with snap.
If you need to keep multiple curl versions installed, ensure that the version associated with the package manager is in the first directory listed in the PATH variable. You can check this by running the command echo $PATH to see the current order of directories that are checked for executable files on your system.
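The following throwaway demonstration shows how PATH order decides which curl executable runs; the directory names and fake scripts are illustrative only.

```shell
# Create two fake 'curl' scripts in separate directories.
demo=$(mktemp -d)
mkdir -p "$demo/snap/bin" "$demo/usr/bin"
printf '#!/bin/sh\necho snap-curl\n' > "$demo/snap/bin/curl"
printf '#!/bin/sh\necho pkg-curl\n'  > "$demo/usr/bin/curl"
chmod +x "$demo/snap/bin/curl" "$demo/usr/bin/curl"

# The first directory in PATH that contains 'curl' wins:
first=$(PATH="$demo/snap/bin:$demo/usr/bin" command -v curl)

# Putting the package manager's directory first restores the expected tool:
second=$(PATH="$demo/usr/bin:$demo/snap/bin" command -v curl)

echo "$first"
echo "$second"
rm -rf "$demo"
```

In practice this means editing PATH (for example in /etc/environment or a shell profile) so the package manager's bin directory precedes the snap directory.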
If the error recurs after this, we recommend reporting the issue to the organization that maintains the repository. Until a fix is available, you can edit the /etc/apt/sources.list file to omit the repository during the patching process.
Solution: We recommend reporting the issue to the organization that maintains the repository. Until the error is fixed, you can disable the repository at the operating system level. To do so, run the following command, replacing the value for repo-name with your repository name:
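The exact command depends on your package manager. On yum-based systems, `yum-config-manager --disable repo-name` (from the yum-utils package) is the usual approach; the sketch below shows the equivalent effect of setting enabled=0 in a repository definition file. The file path and repository ID here are placeholders.

```shell
# Illustrative only: disable a yum repository by setting enabled=0 in its
# .repo definition file. Replace the path and repository ID with your own.
repo_file=$(mktemp)
cat > "$repo_file" <<'EOF'
[example-repo]
name=Example Repository
baseurl=https://repo.example.com/packages
enabled=1
EOF

# Flip the repository off.
sed -i 's/^enabled=1$/enabled=0/' "$repo_file"
state=$(grep '^enabled=' "$repo_file")
echo "$state"
rm -f "$repo_file"
```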
Solution: Verify that the managed node's route table provides connectivity to an S3 endpoint through a NAT gateway or internet gateway. If the instance doesn't have internet access, provide connectivity to the S3 endpoint by adding an S3 gateway endpoint in the VPC and associating it with the route table of the managed node.
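As a sketch, creating an S3 gateway endpoint and associating it with a route table from the AWS CLI looks like the following. The VPC ID, route table ID, and Region are placeholders; substitute your own values.

```
# Illustrative only: create an S3 gateway endpoint in the node's VPC and
# attach it to the route table used by the managed node's subnet.
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0abc1234567890def \
    --vpc-endpoint-type Gateway \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-0abc1234567890def
```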
Cause: The package manager is already running another process on a managed node at the operating system level. If that other process takes a long time to complete, the Patch Manager patching operation can time out and fail.
Problem: The native package manager on the managed node is unable to resolve a package dependency and patching fails. The following error message example indicates this type of failure on an operating system that uses yum as the package manager.
Cause: On Linux operating systems, Patch Manager uses the native package manager on the machine, such as yum, dnf, apt, or zypper, to run patching operations. These package managers automatically detect, install, update, or remove dependent packages as required. However, some conditions can result in the package manager being unable to complete a dependency operation, such as:
Items in the Obsolete or mismatched options sublist of the product list might have been entered in error through an SDK or an AWS Command Line Interface (AWS CLI) create-patch-baseline command. This could mean a typo was introduced or a product was assigned to the wrong product family. A product is also included in the Obsolete or mismatched options sublist if it was specified for a previous patch baseline but has no patches available from Microsoft.
Solution: Confirm that the managed node has connectivity to the Microsoft Update Catalog through an internet gateway, NAT gateway, or NAT instance. If you're using WSUS, confirm that the managed node has connectivity to the WSUS server in your environment. If connectivity is available to the intended destination, check the Microsoft documentation for other potential causes of HResult 0x80072EE2. This might indicate an operating system level issue.
Solution: Check the managed node's connectivity and permissions to Amazon Simple Storage Service (Amazon S3). The managed node's AWS Identity and Access Management (IAM) role must use the minimum permissions cited in SSM Agent communications with AWS managed S3 buckets. The node must communicate with the Amazon S3 endpoint through an Amazon S3 gateway endpoint, NAT gateway, or internet gateway. For more information about the VPC endpoint requirements for AWS Systems Manager Agent (SSM Agent), see Improve the security of EC2 instances by using VPC endpoints for Systems Manager.
If you can't find troubleshooting solutions in this section or in the Systems Manager issues in AWS re:Post, and you have a Developer, Business, or Enterprise AWS Support plan, you can create a technical support case at AWS Support.