The only problem is: "Node is Warning. Windows Scheduled Tasks 'Windows Scheduled Tasks' has state: Warning. "
I disabled the task with the error, but the node is still in a Warning state. Since this is not an alert, where can I configure this check so that it doesn't evaluate disabled tasks?
Lately I've been working with Python and Node on both Windows and WSL (WSL2, specifically), and I noticed a number of inconveniences when having Python and Node installed on both systems. For example, some Node packages were found when running Node on WSL, even though I never installed them there. When that happened, performance was horrible.
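One likely cause (an assumption based on WSL's default behavior, not stated in the post) is that WSL appends the Windows PATH to the Linux PATH, so `node` and `npm` inside WSL can resolve binaries and globally installed packages from the Windows side. This can be turned off in `/etc/wsl.conf`:

```ini
# /etc/wsl.conf inside the WSL distribution
[interop]
# Stop appending the Windows PATH to the Linux PATH, so node/python inside
# WSL no longer pick up binaries and packages installed on the Windows side.
appendWindowsPath = false
```

Restart WSL afterwards (e.g. `wsl --shutdown` from Windows) for the change to take effect.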
(Optional) If the AmazonEKS_CNI_Policy managed IAM policy (if you have an IPv4 cluster) or the AmazonEKS_CNI_IPv6_Policy (that you created yourself if you have an IPv6 cluster) is attached to your Amazon EKS node IAM role, we recommend assigning it to an IAM role that you associate to the Kubernetes aws-node service account instead. For more information, see Configuring the Amazon VPC CNI plugin for Kubernetes to use IAM roles for service accounts.
Create your node group with the following command. Replace region-code with the AWS Region that your cluster is in. Replace my-cluster with your cluster name. The name can contain only alphanumeric characters (case-sensitive) and hyphens. It must start with an alphabetic character and can't be longer than 100 characters. Replace ng-windows with a name for your node group. The node group name can't be longer than 63 characters. It must start with a letter or digit, but can also include hyphens and underscores for the remaining characters. For Kubernetes version 1.23 or later, you can replace 2019 with 2022 to use Windows Server 2022. Replace the rest of the example values with your own values.
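The command referenced above is not shown in this excerpt. A plausible sketch using eksctl's self-managed node group flags with the example values from the text (verify the flags against your eksctl version):

```shell
eksctl create nodegroup \
    --region region-code \
    --cluster my-cluster \
    --name ng-windows \
    --node-type t3.large \
    --nodes 3 \
    --nodes-min 1 \
    --nodes-max 4 \
    --managed=false \
    --node-ami-family WindowsServer2019FullContainer
```

For Kubernetes 1.23 or later, `WindowsServer2022FullContainer` selects Windows Server 2022, matching the 2019/2022 substitution described above.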
To deploy a node group to AWS Outposts, AWS Wavelength, or AWS Local Zone subnets, don't pass the AWS Outposts, Wavelength, or Local Zone subnets when you create the cluster. Create the node group with a config file, specifying the AWS Outposts, Wavelength, or Local Zone subnets. For more information, see Create a nodegroup from a config file and Config file schema in the eksctl documentation.
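A minimal config-file sketch of the approach above (the subnet ID and names are example values; see the eksctl config file schema for the authoritative field names):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-cluster
  region: region-code

nodeGroups:
  - name: ng-outpost
    instanceType: m5.large
    desiredCapacity: 3
    # Specify the Outpost, Wavelength, or Local Zone subnets on the node
    # group itself instead of passing them at cluster creation.
    subnets:
      - subnet-0123456789abcdef0
```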
Amazon EKS optimized Windows AMIs can be configured to use containerd as a runtime. When using eksctl for launching Windows nodes, specify containerRuntime as containerd in the node group configuration. For more information, see Enable the containerd runtime bootstrap flag in this user guide or Define container runtime in the eksctl documentation.
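In config-file form, that setting looks roughly like this (field names per the eksctl schema; treat this as a sketch, not a complete node group definition):

```yaml
nodeGroups:
  - name: ng-windows
    amiFamily: WindowsServer2019FullContainer
    instanceType: t3.large
    # Select containerd instead of Docker as the node's container runtime.
    containerRuntime: containerd
```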
An existing Amazon EKS cluster and a Linux node group. If you don't have these resources, we recommend that you follow one of our Getting started with Amazon EKS guides to create them. The guides describe how to create an Amazon EKS cluster with Linux nodes.
NodeGroupName: Enter a name for your node group. This name can be used later to identify the Auto Scaling node group that's created for your nodes. The node group name can't be longer than 63 characters. It must start with a letter or digit, but can also include hyphens and underscores for the remaining characters.
NodeImageId: (Optional) If you're using your own custom AMI (instead of the Amazon EKS optimized AMI), enter a node AMI ID for your AWS Region. If you specify a value for this field, it overrides any values in the NodeImageIdSSMParam field.
KeyName: Enter the name of an Amazon EC2 SSH key pair that you can use to connect to your nodes using SSH after they launch. If you don't already have an Amazon EC2 key pair, you can create one in the AWS Management Console. For more information, see Amazon EC2 key pairs in the Amazon EC2 User Guide for Windows Instances.
You can configure Amazon EKS optimized Windows AMIs to use containerd as a runtime. When using an AWS CloudFormation template to create Windows nodes, specify -ContainerRuntime containerd in a bootstrap argument to enable the containerd runtime. For more information, see Enable the containerd runtime bootstrap flag.
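On the node itself, the flag ends up being passed to the bootstrap script shipped with the EKS optimized Windows AMI. A sketch of how that invocation looks (the script path and parameter names are taken from the EKS Windows AMI; verify against your AMI version):

```powershell
# Run from the instance user data; selects containerd instead of Docker.
& "C:\Program Files\Amazon\EKS\Start-EKSBootstrap.ps1" `
    -EKSClusterName "my-cluster" `
    -ContainerRuntime "containerd"
```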
DisableIMDSv1: By default, each node supports the Instance Metadata Service Version 1 (IMDSv1) and IMDSv2. You can disable IMDSv1. To prevent future nodes and Pods in the node group from using IMDSv1, set DisableIMDSv1 to true. For more information about IMDS, see Configuring the instance metadata service.
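For an instance that is already running, the same effect can be achieved with the AWS CLI by requiring IMDSv2 session tokens (the instance ID below is an example value):

```shell
aws ec2 modify-instance-metadata-options \
    --instance-id i-0123456789abcdef0 \
    --http-tokens required \
    --http-endpoint enabled
```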
NodeSecurityGroups: Select the security group that was created for your Linux node group when you created your VPC. If your Linux nodes have more than one security group attached to them, specify all of them. This is the case, for example, if the Linux node group was created with eksctl.
Subnets: Choose the subnets that you created. If you created your VPC using the steps in Creating a VPC for your Amazon EKS cluster, then specify only the private subnets within the VPC for your nodes to launch into.
If any of the subnets are public subnets, then they must have the automatic public IP address assignment setting enabled. If the setting isn't enabled for the public subnet, then any nodes that you deploy to that public subnet won't be assigned a public IP address and won't be able to communicate with the cluster or other AWS services. If the subnet was deployed before March 26, 2020 using either of the Amazon EKS AWS CloudFormation VPC templates, or by using eksctl, then automatic public IP address assignment is disabled for public subnets. For information about how to enable public IP address assignment for a subnet, see Modifying the public IPv4 addressing attribute for your subnet. If the node is deployed to a private subnet, then it's able to communicate with the cluster and other AWS services through a NAT gateway.
In the aws-auth-cm-windows.yaml file, set the rolearn values to the applicable NodeInstanceRole values that you recorded in the previous procedures. You can do this with a text editor, or by replacing the example values and running the following command:
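A sketch of the replace-and-apply approach. The placeholder text and ARNs below are illustrative stand-ins, not the exact placeholders from the downloaded file; a minimal manifest is created here only so the substitution can be demonstrated end to end:

```shell
# Minimal stand-in for the downloaded aws-auth-cm-windows.yaml (illustrative).
cat > aws-auth-cm-windows.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of linux node instance role>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: <ARN of windows node instance role>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
        - eks:kube-proxy-windows
EOF

# Substitute the NodeInstanceRole ARNs recorded earlier (example values),
# keeping a .bak backup of the original file.
sed -i.bak \
    -e 's|<ARN of linux node instance role>|arn:aws:iam::111122223333:role/my-linux-NodeInstanceRole|' \
    -e 's|<ARN of windows node instance role>|arn:aws:iam::111122223333:role/my-windows-NodeInstanceRole|' \
    aws-auth-cm-windows.yaml
```

Then apply the updated ConfigMap with `kubectl apply -f aws-auth-cm-windows.yaml`.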
eksctl makes it easy to set up and manage an Amazon EKS cluster with Windows MNGs. Whether you are provisioning a new cluster or adding to an existing one, eksctl can help. eksctl automatically patches the ConfigMap to enable Windows IP address management when a Windows node group is created.
If you are already using Windows self-managed node groups and plan to switch to Windows MNGs, you can provision a new MNG with the following eksctl command. For detailed migration instructions, see Migrating to a new node group.
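The command referenced above isn't shown in this excerpt. A plausible sketch with eksctl (cluster, region, and node group names are example values; check the flags against your eksctl version):

```shell
eksctl create nodegroup \
    --cluster my-cluster \
    --region region-code \
    --name ng-windows-managed \
    --node-ami-family WindowsServer2019FullContainer \
    --managed
```

Once workloads are drained onto the new MNG, the old self-managed node group can be deleted.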
In this post, we showed you how to use Windows MNGs to remove the undifferentiated heavy lifting of managing the provisioning and lifecycle of Windows Kubernetes nodes, and how to align operations around the same tools for Linux and Windows. The service handles node updates and terminations, automatically draining nodes to ensure that your applications stay available. MNGs are free to use; you pay only for the resources you provision. With the added flexibility of custom launch templates, MNGs should be the default provisioning method for organizations going forward.
And it has been working great! So I do not believe that this is a line-protocol format issue. The issue seems to be that Windows adds \r\n at the end of each line to represent a newline, and InfluxDB is not able to parse that.
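If that diagnosis is right, stripping the carriage returns before writing is a simple workaround. A minimal local sketch (the measurement data is made up for illustration):

```shell
# Sample line-protocol file written with Windows CRLF line endings.
printf 'cpu,host=web value=0.5\r\ncpu,host=web value=0.7\r\n' > points.lp

# Delete every carriage return so each line ends in a bare \n,
# which the line-protocol parser expects.
tr -d '\r' < points.lp > points_unix.lp
```

The cleaned `points_unix.lp` can then be posted to the InfluxDB write endpoint as usual.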
If I shut down node1, then node2 becomes active. If I then shut down node2, the cluster stops (obviously). However, if I then start only node1, the cluster never recovers. Not only does it not recover without node2, but I don't see an easy way to bring the cluster into service with the cluster manager. The only way I can recover the cluster in this scenario is to start node2, which doesn't seem (to me) like real high availability. IMO, I should be able to set a policy, or have a reasonably easy way, to bring the cluster back online (perhaps after a waiting period), even if node2 never recovers.
However, the witness was available at that time, which makes me suspect that this is a permission issue, that is, the witness share is available to the cluster but not the cluster service accounts on each node. Is that possible?
As @stuka noted, this is by design. The file was locked by a live node before the whole cluster went down. There's no way for Node1 to know whether Node2 is actually down or merely inaccessible over the cluster network; it has to treat the locked file as authoritative. It would be far worse for Node1 to come online in that scenario: if only the cluster network went down, neither node would be able to break the quorum voting tie.
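For completeness, WSFC does ship a documented manual override for exactly this last-node-standing scenario; whether it's appropriate depends on being certain the other node really is down. A sketch (cmdlet and switch names from the FailoverClusters module; verify on your OS version):

```powershell
# Force the surviving node to form quorum despite the missing vote.
# Only do this when you are certain the other node is actually offline.
Start-ClusterNode -Name node1 -FixQuorum
# Legacy equivalent: net start clussvc /forcequorum
```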
Two node clusters will always have a compromise in terms of HA. The witness file share establishes quorum, but it cannot cover all scenarios. A 3-node (or other odd node) cluster would provide better fault tolerance.
If the quorum witness share is accessible to the online node, it should be able to bring the cluster online; this is standard WSFC behavior. If your cluster is not starting even though the witness share is online, something else must be preventing it. Check the cluster and event logs for errors.
Starting with version 1.14, Amazon EKS supports Windows nodes, which allow running Windows containers. In addition to the Windows nodes, at least one Linux node is required in the cluster to run CoreDNS, as Microsoft doesn't support host-networking mode yet. Thus, a Windows EKS cluster is a mixture of Windows nodes and at least one Linux node. The Linux nodes are critical to the functioning of the cluster, so for a production-grade cluster it's recommended to have at least two t2.large Linux nodes for HA.
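Because of that split, Linux-only system workloads such as CoreDNS must be pinned to the Linux nodes. EKS configures this by default; the relevant fragment of the CoreDNS Deployment looks roughly like this (shown for illustration only):

```yaml
spec:
  template:
    spec:
      # Keep CoreDNS off the Windows nodes.
      nodeSelector:
        kubernetes.io/os: linux
```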