See <https://jenkins.antrea-ci.rocks/job/cloud-antrea-eks-conformance-net-policy/655/display/redirect?page=changes>
Changes:
[noreply] Support LoadBalancerIPMode in AntreaProxy (#6102)
[noreply] Update CHANGELOG for v1.15.1 release (#6144)
[noreply] replacing os.Setenv with t.Setenv in all unit tests (#6139)
[noreply] Fix L7 NetworkPolicy e2e test failure (#6138)
[noreply] Update apt for Kubernetes in vagrant playbook (#6114)
------------------------------------------
Started by timer
Running as SYSTEM
Building remotely on antrea-cloud-ci-vm (antrea-cloud) in workspace <https://jenkins.antrea-ci.rocks/job/cloud-antrea-eks-conformance-net-policy/ws/>
No credentials specified
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository https://github.com/antrea-io/antrea/
> git init <https://jenkins.antrea-ci.rocks/job/cloud-antrea-eks-conformance-net-policy/ws/> # timeout=10
Fetching upstream changes from https://github.com/antrea-io/antrea/
> git --version # timeout=10
> git fetch --tags --progress -- https://github.com/antrea-io/antrea/ +refs/heads/*:refs/remotes/origin/* # timeout=10
> git config remote.origin.url https://github.com/antrea-io/antrea/ # timeout=10
> git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
> git config remote.origin.url https://github.com/antrea-io/antrea/ # timeout=10
Fetching upstream changes from https://github.com/antrea-io/antrea/
> git fetch --tags --progress -- https://github.com/antrea-io/antrea/ +refs/heads/*:refs/remotes/origin/* # timeout=10
> git rev-parse origin/main^{commit} # timeout=10
Checking out Revision c2f2459001397cd8583cfa61b4d138e79b6314fa (origin/main)
> git config core.sparsecheckout # timeout=10
> git checkout -f c2f2459001397cd8583cfa61b4d138e79b6314fa # timeout=10
Commit message: "Update apt for Kubernetes in vagrant playbook (#6114)"
> git rev-list --no-walk df82b76631a059d27b769fa0df5748bdac2d52e2 # timeout=10
[cloud-antrea-eks-conformance-net-policy] $ /bin/bash /tmp/jenkins18115881846314069598.sh
+ CLUSTER=
+ REGION=us-west-2
+ K8S_VERSION=1.27
+ AWS_NODE_TYPE=t3.medium
+ SSH_KEY_PATH=/home/ubuntu/.ssh/id_rsa.pub
+ SSH_PRIVATE_KEY_PATH=/home/ubuntu/.ssh/id_rsa
+ RUN_ALL=true
+ RUN_SETUP_ONLY=false
+ RUN_CLEANUP_ONLY=false
+ KUBECONFIG_PATH=/home/ubuntu/jenkins/out/eks
+ MODE=report
+ TEST_SCRIPT_RC=0
+ KUBE_CONFORMANCE_IMAGE_VERSION=auto
+ INSTALL_EKSCTL=true
+ AWS_SERVICE_USER_ROLE_ARN=
+ AWS_SERVICE_USER_NAME=
+ _usage='Usage: ./ci/test-conformance-eks.sh [--cluster-name <EKSClusterNameToUse>] [--kubeconfig <KubeconfigSavePath>] [--k8s-version <ClusterVersion>] [--aws-access-key <AccessKey>] [--aws-secret-key <SecretKey>] [--aws-region <Region>] [--aws-service-user <ServiceUserName>] [--aws-service-user-role-arn <ServiceUserRoleARN>] [--ssh-key <SSHKey>] [--ssh-private-key <SSHPrivateKey>] [--log-mode <SonobuoyResultLogLevel>] [--setup-only] [--cleanup-only]
Set up an EKS cluster to run the K8s e2e community tests (Conformance & Network Policy).
--cluster-name The cluster name to be used for the generated EKS cluster. Must be specified if not run in a Jenkins environment.
--kubeconfig Path to save the kubeconfig of the generated EKS cluster.
--k8s-version EKS K8s cluster version. Defaults to 1.27.
--aws-access-key AWS Access Key for logging in to awscli.
--aws-secret-key AWS Secret Key for logging in to awscli.
--aws-service-user-role-arn AWS Service User Role ARN for logging in to awscli.
--aws-service-user AWS Service User Name for logging in to awscli.
--aws-region The AWS region where the cluster will be created. Defaults to us-west-2.
--ssh-key The path of the key to be used for SSH access to worker nodes.
--log-mode Set either '\''report'\'', '\''detail'\'', or '\''dump'\'' level output for sonobuoy results.
--setup-only Only set up the cluster and run the tests.
--cleanup-only Only clean up the cluster.
--skip-eksctl-install Do not install the latest eksctl version; eksctl must already be installed.'
+ [[ 13 -gt 0 ]]
+ key=--aws-access-key
+ case $key in
+ AWS_ACCESS_KEY=****
+ shift 2
+ [[ 11 -gt 0 ]]
+ key=--aws-secret-key
+ case $key in
+ AWS_SECRET_KEY=****
+ shift 2
+ [[ 9 -gt 0 ]]
+ key=--aws-service-user-role-arn
+ case $key in
+ AWS_SERVICE_USER_ROLE_ARN=****
+ shift 2
+ [[ 7 -gt 0 ]]
+ key=--aws-service-user
+ case $key in
+ AWS_SERVICE_USER_NAME=****
+ shift 2
+ [[ 5 -gt 0 ]]
+ key=--cluster-name
+ case $key in
+ CLUSTER=cloud-antrea-eks-conformance-net-policy-655
+ shift 2
+ [[ 3 -gt 0 ]]
+ key=--log-mode
+ case $key in
+ MODE=detail
+ shift 2
+ [[ 1 -gt 0 ]]
+ key=--setup-only
+ case $key in
+ RUN_SETUP_ONLY=true
+ RUN_ALL=false
+ shift
+ [[ 0 -gt 0 ]]
+ [[ cloud-antrea-eks-conformance-net-policy-655 == '' ]]
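The "+ key=... / case $key in / shift" lines above are the xtrace of a conventional while/case option parser. A minimal sketch of the pattern, reconstructed from the trace and the usage text rather than taken from the literal source of ci/test-conformance-eks.sh:

# Hypothetical reconstruction of the option loop traced above; the real
# script may differ in detail.
while [[ $# -gt 0 ]]; do
    key="$1"
    case $key in
    --cluster-name)
        CLUSTER="$2"
        shift 2
        ;;
    --log-mode)
        MODE="$2"
        shift 2
        ;;
    --setup-only)
        RUN_SETUP_ONLY=true
        RUN_ALL=false
        shift
        ;;
    # ... the other options (--aws-access-key, --kubeconfig, etc.)
    # follow the same two-token pattern.
    *)
        echo "Unknown option $1" >&2
        exit 1
        ;;
    esac
done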
+++ dirname ./ci/test-conformance-eks.sh
++ cd ./ci
++ pwd
+ THIS_DIR=<https://jenkins.antrea-ci.rocks/job/cloud-antrea-eks-conformance-net-policy/ws/ci>
+ GIT_CHECKOUT_DIR=<https://jenkins.antrea-ci.rocks/job/cloud-antrea-eks-conformance-net-policy/ws/ci/..>
+ pushd <https://jenkins.antrea-ci.rocks/job/cloud-antrea-eks-conformance-net-policy/ws/ci>
+ source <https://jenkins.antrea-ci.rocks/job/cloud-antrea-eks-conformance-net-policy/ws/ci/jenkins/utils.sh>
+ [[ false == true ]]
+ [[ true == true ]]
+ setup_eks
+ echo '=== This cluster to be created is named: cloud-antrea-eks-conformance-net-policy-655 ==='
=== This cluster to be created is named: cloud-antrea-eks-conformance-net-policy-655 ===
+ echo CLUSTERNAME=cloud-antrea-eks-conformance-net-policy-655
+ [[ -n '' ]]
+ echo '=== Using the following awscli version ==='
=== Using the following awscli version ===
+ aws --version
aws-cli/2.11.20 Python/3.11.3 Linux/4.15.0-156-generic exe/x86_64.ubuntu.18 prompt/off
+ set +e
+ [[ **** != '' ]]
+ [[ **** != '' ]]
+ mkdir -p /home/ubuntu/.aws
+ cat
+ cat
+ [[ true == true ]]
+ echo '=== Installing latest version of eksctl ==='
=== Installing latest version of eksctl ===
+ tar xz -C /tmp
++ uname -s
+ curl --silent --location https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_Linux_amd64.tar.gz
+ sudo mv /tmp/eksctl /usr/local/bin
+ set -e
+ printf '\n'
+ echo '=== Using the following eksctl ==='
=== Using the following eksctl ===
+ which eksctl
/usr/local/bin/eksctl
+ echo '=== Using the following kubectl ==='
=== Using the following kubectl ===
+ which kubectl
/usr/bin/kubectl
+ echo '=== Creating a cluster in EKS ==='
=== Creating a cluster in EKS ===
++ generate_eksctl_config
+++ aws ssm get-parameter --name /aws/service/eks/optimized-ami/1.27/amazon-linux-2/recommended/image_id --query Parameter.Value --output text
++ AMI_ID=ami-0e32ce55e2e8e876d
++ cat
++ echo eksctl-containerd.yaml
+ config=eksctl-containerd.yaml
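The generated eksctl-containerd.yaml itself is never echoed to the log. A hedged reconstruction of generate_eksctl_config from the values visible in the trace (cluster name, region, K8s version 1.27, t3.medium nodes, the SSM-resolved AMI, and the SSH public key) might look like the sketch below; every field beyond those values is an assumption, and the real helper may differ:

# Hypothetical sketch of generate_eksctl_config based on the trace above;
# the real helper in ci/test-conformance-eks.sh may set additional fields.
generate_eksctl_config() {
    local ami_id
    ami_id=$(aws ssm get-parameter \
        --name "/aws/service/eks/optimized-ami/${K8S_VERSION}/amazon-linux-2/recommended/image_id" \
        --query Parameter.Value --output text)
    cat > eksctl-containerd.yaml <<EOF
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ${CLUSTER}
  region: ${REGION}
  version: "${K8S_VERSION}"
managedNodeGroups:
  - name: containerd
    amiFamily: AmazonLinux2
    ami: ${ami_id}
    # Note: managed nodegroups with a custom AMI may also require
    # overrideBootstrapCommand, omitted in this sketch.
    instanceType: ${AWS_NODE_TYPE}
    desiredCapacity: 2    # assumed; the log does not show the node count
    ssh:
      allow: true
      publicKeyPath: ${SSH_KEY_PATH}
EOF
    echo eksctl-containerd.yaml
}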
+ eksctl create cluster -f eksctl-containerd.yaml
2024-03-27 04:43:09 [ℹ] eksctl version 0.175.0
2024-03-27 04:43:09 [ℹ] using region us-west-2
2024-03-27 04:43:09 [ℹ] setting availability zones to [us-west-2b us-west-2d us-west-2a]
2024-03-27 04:43:09 [ℹ] subnets for us-west-2b - public:192.168.0.0/19 private:192.168.96.0/19
2024-03-27 04:43:09 [ℹ] subnets for us-west-2d - public:192.168.32.0/19 private:192.168.128.0/19
2024-03-27 04:43:09 [ℹ] subnets for us-west-2a - public:192.168.64.0/19 private:192.168.160.0/19
2024-03-27 04:43:09 [ℹ] nodegroup "containerd" will use "ami-0e32ce55e2e8e876d" [AmazonLinux2/1.27]
2024-03-27 04:43:09 [ℹ] using SSH public key "/home/ubuntu/.ssh/id_rsa.pub" as "eksctl-cloud-antrea-eks-conformance-net-policy-655-nodegroup-containerd-26:0c:39:fe:ae:63:53:74:1b:76:11:69:73:09:a7:80"
2024-03-27 04:43:09 [ℹ] using Kubernetes version 1.27
2024-03-27 04:43:09 [ℹ] creating EKS cluster "cloud-antrea-eks-conformance-net-policy-655" in "us-west-2" region with managed nodes
2024-03-27 04:43:09 [ℹ] 1 nodegroup (containerd) was included (based on the include/exclude rules)
2024-03-27 04:43:09 [ℹ] will create a CloudFormation stack for cluster itself and 0 nodegroup stack(s)
2024-03-27 04:43:09 [ℹ] will create a CloudFormation stack for cluster itself and 1 managed nodegroup stack(s)
2024-03-27 04:43:09 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --cluster=cloud-antrea-eks-conformance-net-policy-655'
2024-03-27 04:43:09 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "cloud-antrea-eks-conformance-net-policy-655" in "us-west-2"
2024-03-27 04:43:09 [ℹ] CloudWatch logging will not be enabled for cluster "cloud-antrea-eks-conformance-net-policy-655" in "us-west-2"
2024-03-27 04:43:09 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-west-2 --cluster=cloud-antrea-eks-conformance-net-policy-655'
2024-03-27 04:43:09 [ℹ]
2 sequential tasks: { create cluster control plane "cloud-antrea-eks-conformance-net-policy-655",
    2 sequential sub-tasks: {
        wait for control plane to become ready,
        create managed nodegroup "containerd",
    }
}
2024-03-27 04:43:09 [ℹ] building cluster stack "eksctl-cloud-antrea-eks-conformance-net-policy-655-cluster"
2024-03-27 04:43:10 [ℹ] deploying stack "eksctl-cloud-antrea-eks-conformance-net-policy-655-cluster"
2024-03-27 04:43:40 [ℹ] waiting for CloudFormation stack "eksctl-cloud-antrea-eks-conformance-net-policy-655-cluster"
2024-03-27 04:43:40 [✖] unexpected status "ROLLBACK_IN_PROGRESS" while waiting for CloudFormation stack "eksctl-cloud-antrea-eks-conformance-net-policy-655-cluster"
2024-03-27 04:43:40 [✖] unexpected status "ROLLBACK_IN_PROGRESS" while waiting for CloudFormation stack "eksctl-cloud-antrea-eks-conformance-net-policy-655-cluster"
2024-03-27 04:43:40 [ℹ] fetching stack events in attempt to troubleshoot the root cause of the failure
2024-03-27 04:43:40 [!] AWS::IAM::Role/ServiceRole: DELETE_IN_PROGRESS
2024-03-27 04:43:40 [!] AWS::EC2::EIP/NATIP: DELETE_IN_PROGRESS
2024-03-27 04:43:40 [!] AWS::EC2::InternetGateway/InternetGateway: DELETE_IN_PROGRESS
2024-03-27 04:43:40 [✖] AWS::EC2::EIP/NATIP: CREATE_FAILED – "Resource creation cancelled"
2024-03-27 04:43:40 [✖] AWS::EC2::InternetGateway/InternetGateway: CREATE_FAILED – "Resource creation cancelled"
2024-03-27 04:43:40 [✖] AWS::IAM::Role/ServiceRole: CREATE_FAILED – "Resource creation cancelled"
2024-03-27 04:43:40 [✖] AWS::EC2::VPC/VPC: CREATE_FAILED – "Resource handler returned message: \"The maximum number of VPCs has been reached. (Service: Ec2, Status Code: 400, Request ID: 0a6e6b14-a8e6-4efe-acab-01e25cd7de44)\" (RequestToken: 2efd4514-3bb8-5ca6-60ce-dbd3c8e5dec8, HandlerErrorCode: GeneralServiceException)"
2024-03-27 04:43:40 [!] 1 error(s) occurred and cluster hasn't been created properly, you may wish to check CloudFormation console
2024-03-27 04:43:40 [ℹ] to cleanup resources, run 'eksctl delete cluster --region=us-west-2 --name=cloud-antrea-eks-conformance-net-policy-655'
2024-03-27 04:43:40 [✖] ResourceNotReady: failed waiting for successful resource state
Error: failed to create cluster "cloud-antrea-eks-conformance-net-policy-655"
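The root cause is the AWS::EC2::VPC CREATE_FAILED above: the account has reached the per-region VPC limit in us-west-2, typically because VPCs from earlier failed runs were never cleaned up. A few stock AWS CLI commands to confirm the quota and find leftover eksctl stacks (L-F678F1CE is the published "VPCs per Region" quota code; the default limit is 5):

# Compare the current VPC count with the "VPCs per Region" quota.
aws ec2 describe-vpcs --region us-west-2 --query 'length(Vpcs)'
aws service-quotas get-service-quota --region us-west-2 \
    --service-code vpc --quota-code L-F678F1CE --query 'Quota.Value'

# List eksctl-owned CloudFormation stacks that may be holding VPCs.
aws cloudformation describe-stacks --region us-west-2 \
    --query "Stacks[?starts_with(StackName, 'eksctl-')].[StackName,StackStatus]" \
    --output table

# Then delete the leaked clusters, as eksctl itself suggests:
# eksctl delete cluster --region=us-west-2 --name=<leaked-cluster-name>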
Build step 'Execute shell' marked build as failure
Archiving artifacts