Build failed in Jenkins: cloud-antrea-eks-conformance-net-policy #672


antr...@gmail.com

Apr 29, 2024, 12:43:44 AM
to projecta...@googlegroups.com
See <https://jenkins.antrea-ci.rocks/job/cloud-antrea-eks-conformance-net-policy/672/display/redirect>

Changes:


------------------------------------------
Started by timer
Running as SYSTEM
Building remotely on antrea-cloud-ci-vm (antrea-cloud) in workspace <https://jenkins.antrea-ci.rocks/job/cloud-antrea-eks-conformance-net-policy/ws/>
No credentials specified
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository https://github.com/antrea-io/antrea/
> git init <https://jenkins.antrea-ci.rocks/job/cloud-antrea-eks-conformance-net-policy/ws/> # timeout=10
Fetching upstream changes from https://github.com/antrea-io/antrea/
> git --version # timeout=10
> git fetch --tags --progress -- https://github.com/antrea-io/antrea/ +refs/heads/*:refs/remotes/origin/* # timeout=10
> git config remote.origin.url https://github.com/antrea-io/antrea/ # timeout=10
> git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
> git config remote.origin.url https://github.com/antrea-io/antrea/ # timeout=10
Fetching upstream changes from https://github.com/antrea-io/antrea/
> git fetch --tags --progress -- https://github.com/antrea-io/antrea/ +refs/heads/*:refs/remotes/origin/* # timeout=10
> git rev-parse origin/main^{commit} # timeout=10
Checking out Revision 89363ac43777a5d0e1d81694b2816c9bced1c476 (origin/main)
> git config core.sparsecheckout # timeout=10
> git checkout -f 89363ac43777a5d0e1d81694b2816c9bced1c476 # timeout=10
Commit message: "Update CHANGELOG for v2.0.0 release (#6266)"
> git rev-list --no-walk 89363ac43777a5d0e1d81694b2816c9bced1c476 # timeout=10
[cloud-antrea-eks-conformance-net-policy] $ /bin/bash /tmp/jenkins9779857277930681467.sh
+ CLUSTER=
+ REGION=us-west-2
+ K8S_VERSION=1.27
+ AWS_NODE_TYPE=t3.medium
+ SSH_KEY_PATH=/home/ubuntu/.ssh/id_rsa.pub
+ SSH_PRIVATE_KEY_PATH=/home/ubuntu/.ssh/id_rsa
+ RUN_ALL=true
+ RUN_SETUP_ONLY=false
+ RUN_CLEANUP_ONLY=false
+ KUBECONFIG_PATH=/home/ubuntu/jenkins/out/eks
+ MODE=report
+ TEST_SCRIPT_RC=0
+ KUBE_CONFORMANCE_IMAGE_VERSION=auto
+ INSTALL_EKSCTL=true
+ AWS_SERVICE_USER_ROLE_ARN=
+ AWS_SERVICE_USER_NAME=
+ _usage='Usage: ./ci/test-conformance-eks.sh [--cluster-name <EKSClusterNameToUse>] [--kubeconfig <KubeconfigSavePath>] [--k8s-version <ClusterVersion>] [--aws-access-key <AccessKey>] [--aws-secret-key <SecretKey>] [--aws-region <Region>] [--aws-service-user <ServiceUserName>] [--aws-service-user-role-arn <ServiceUserRoleARN>] [--ssh-key <SSHKey>] [--ssh-private-key <SSHPrivateKey>] [--log-mode <SonobuoyResultLogLevel>] [--setup-only] [--cleanup-only]

Set up an EKS cluster to run the K8s e2e community tests (Conformance & Network Policy).

--cluster-name The cluster name to be used for the generated EKS cluster. Must be specified if not run in a Jenkins environment.
--kubeconfig Path to save the kubeconfig of the generated EKS cluster.
--k8s-version EKS K8s cluster version. Defaults to 1.27.
--aws-access-key AWS Access Key for logging in to awscli.
--aws-secret-key AWS Secret Key for logging in to awscli.
--aws-service-user-role-arn AWS Service User Role ARN for logging in to awscli.
--aws-service-user AWS Service User Name for logging in to awscli.
--aws-region The AWS region where the cluster will be created. Defaults to us-west-2.
--ssh-key The path of the public key to be used for SSH access to worker nodes.
--ssh-private-key The path of the private key matching --ssh-key.
--log-mode Set either '\''report'\'', '\''detail'\'', or '\''dump'\'' level data for sonobuoy results.
--setup-only Only set up the cluster and run the tests.
--cleanup-only Only clean up the cluster.
--skip-eksctl-install Do not install the latest eksctl version. eksctl must be installed already.'
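The `+ key=...` / `+ shift` lines that follow in the trace come from the script's option-parsing loop. A minimal reconstruction of that loop (hypothetical; the real implementation is in ci/test-conformance-eks.sh), exercised with the last three options this build received:

```shell
# Hypothetical reconstruction of the option-parsing loop traced below.
# Defaults mirror the assignments at the top of the trace.
RUN_ALL=true; RUN_SETUP_ONLY=false; CLUSTER=''; MODE=report
set -- --cluster-name cloud-antrea-eks-conformance-net-policy-672 --log-mode detail --setup-only
while [ "$#" -gt 0 ]; do
  key="$1"
  case "$key" in
    --cluster-name) CLUSTER="$2"; shift 2 ;;   # option with a value: consume two args
    --log-mode)     MODE="$2"; shift 2 ;;
    --setup-only)   RUN_SETUP_ONLY=true; RUN_ALL=false; shift ;;  # flag: consume one arg
    *) echo "Unknown option: $key" >&2; break ;;
  esac
done
echo "$CLUSTER $MODE $RUN_SETUP_ONLY $RUN_ALL"
```

The final `echo` reproduces the assignments visible in the xtrace output: CLUSTER set to the job-derived name, MODE=detail, RUN_SETUP_ONLY=true, RUN_ALL=false.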
+ [[ 13 -gt 0 ]]
+ key=--aws-access-key
+ case $key in
+ AWS_ACCESS_KEY=****
+ shift 2
+ [[ 11 -gt 0 ]]
+ key=--aws-secret-key
+ case $key in
+ AWS_SECRET_KEY=****
+ shift 2
+ [[ 9 -gt 0 ]]
+ key=--aws-service-user-role-arn
+ case $key in
+ AWS_SERVICE_USER_ROLE_ARN=****
+ shift 2
+ [[ 7 -gt 0 ]]
+ key=--aws-service-user
+ case $key in
+ AWS_SERVICE_USER_NAME=****
+ shift 2
+ [[ 5 -gt 0 ]]
+ key=--cluster-name
+ case $key in
+ CLUSTER=cloud-antrea-eks-conformance-net-policy-672
+ shift 2
+ [[ 3 -gt 0 ]]
+ key=--log-mode
+ case $key in
+ MODE=detail
+ shift 2
+ [[ 1 -gt 0 ]]
+ key=--setup-only
+ case $key in
+ RUN_SETUP_ONLY=true
+ RUN_ALL=false
+ shift
+ [[ 0 -gt 0 ]]
+ [[ cloud-antrea-eks-conformance-net-policy-672 == '' ]]
+++ dirname ./ci/test-conformance-eks.sh
++ cd ./ci
++ pwd
+ THIS_DIR=<https://jenkins.antrea-ci.rocks/job/cloud-antrea-eks-conformance-net-policy/ws/ci>
+ GIT_CHECKOUT_DIR=<https://jenkins.antrea-ci.rocks/job/cloud-antrea-eks-conformance-net-policy/ws/ci/..>
+ pushd <https://jenkins.antrea-ci.rocks/job/cloud-antrea-eks-conformance-net-policy/ws/ci>
+ source <https://jenkins.antrea-ci.rocks/job/cloud-antrea-eks-conformance-net-policy/ws/ci/jenkins/utils.sh>
+ [[ false == true ]]
+ [[ true == true ]]
+ setup_eks
+ echo '=== This cluster to be created is named: cloud-antrea-eks-conformance-net-policy-672 ==='
=== This cluster to be created is named: cloud-antrea-eks-conformance-net-policy-672 ===
+ echo CLUSTERNAME=cloud-antrea-eks-conformance-net-policy-672
+ [[ -n '' ]]
+ echo '=== Using the following awscli version ==='
=== Using the following awscli version ===
+ aws --version
aws-cli/2.11.20 Python/3.11.3 Linux/4.15.0-156-generic exe/x86_64.ubuntu.18 prompt/off
+ set +e
+ [[ **** != '' ]]
+ [[ **** != '' ]]
+ mkdir -p /home/ubuntu/.aws
+ cat
+ cat
+ [[ true == true ]]
+ echo '=== Installing latest version of eksctl ==='
=== Installing latest version of eksctl ===
+ tar xz -C /tmp
++ uname -s
+ curl --silent --location https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_Linux_amd64.tar.gz
+ sudo mv /tmp/eksctl /usr/local/bin
+ set -e
+ printf '\n'

+ echo '=== Using the following eksctl ==='
=== Using the following eksctl ===
+ which eksctl
/usr/local/bin/eksctl
+ echo '=== Using the following kubectl ==='
=== Using the following kubectl ===
+ which kubectl
/usr/bin/kubectl
+ echo '=== Creating a cluster in EKS ==='
=== Creating a cluster in EKS ===
++ generate_eksctl_config
+++ aws ssm get-parameter --name /aws/service/eks/optimized-ami/1.27/amazon-linux-2/recommended/image_id --query Parameter.Value --output text
++ AMI_ID=ami-05eb3a655e73909a1
++ cat
++ echo eksctl-containerd.yaml
+ config=eksctl-containerd.yaml
+ eksctl create cluster -f eksctl-containerd.yaml
2024-04-29 04:43:09 [ℹ] eksctl version 0.176.0
2024-04-29 04:43:09 [ℹ] using region us-west-2
2024-04-29 04:43:09 [ℹ] setting availability zones to [us-west-2d us-west-2b us-west-2a]
2024-04-29 04:43:09 [ℹ] subnets for us-west-2d - public:192.168.0.0/19 private:192.168.96.0/19
2024-04-29 04:43:09 [ℹ] subnets for us-west-2b - public:192.168.32.0/19 private:192.168.128.0/19
2024-04-29 04:43:09 [ℹ] subnets for us-west-2a - public:192.168.64.0/19 private:192.168.160.0/19
2024-04-29 04:43:09 [ℹ] nodegroup "containerd" will use "ami-05eb3a655e73909a1" [AmazonLinux2/1.27]
2024-04-29 04:43:09 [ℹ] using SSH public key "/home/ubuntu/.ssh/id_rsa.pub" as "eksctl-cloud-antrea-eks-conformance-net-policy-672-nodegroup-containerd-26:0c:39:fe:ae:63:53:74:1b:76:11:69:73:09:a7:80"
2024-04-29 04:43:09 [ℹ] using Kubernetes version 1.27
2024-04-29 04:43:09 [ℹ] creating EKS cluster "cloud-antrea-eks-conformance-net-policy-672" in "us-west-2" region with managed nodes
2024-04-29 04:43:09 [ℹ] 1 nodegroup (containerd) was included (based on the include/exclude rules)
2024-04-29 04:43:09 [ℹ] will create a CloudFormation stack for cluster itself and 0 nodegroup stack(s)
2024-04-29 04:43:09 [ℹ] will create a CloudFormation stack for cluster itself and 1 managed nodegroup stack(s)
2024-04-29 04:43:09 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --cluster=cloud-antrea-eks-conformance-net-policy-672'
2024-04-29 04:43:09 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "cloud-antrea-eks-conformance-net-policy-672" in "us-west-2"
2024-04-29 04:43:09 [ℹ] CloudWatch logging will not be enabled for cluster "cloud-antrea-eks-conformance-net-policy-672" in "us-west-2"
2024-04-29 04:43:09 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-west-2 --cluster=cloud-antrea-eks-conformance-net-policy-672'
2024-04-29 04:43:09 [ℹ]
2 sequential tasks: { create cluster control plane "cloud-antrea-eks-conformance-net-policy-672",
2 sequential sub-tasks: {
wait for control plane to become ready,
create managed nodegroup "containerd",
}
}
2024-04-29 04:43:09 [ℹ] building cluster stack "eksctl-cloud-antrea-eks-conformance-net-policy-672-cluster"
2024-04-29 04:43:09 [ℹ] deploying stack "eksctl-cloud-antrea-eks-conformance-net-policy-672-cluster"
2024-04-29 04:43:39 [ℹ] waiting for CloudFormation stack "eksctl-cloud-antrea-eks-conformance-net-policy-672-cluster"
2024-04-29 04:43:39 [✖] unexpected status "ROLLBACK_IN_PROGRESS" while waiting for CloudFormation stack "eksctl-cloud-antrea-eks-conformance-net-policy-672-cluster"
2024-04-29 04:43:39 [✖] unexpected status "ROLLBACK_IN_PROGRESS" while waiting for CloudFormation stack "eksctl-cloud-antrea-eks-conformance-net-policy-672-cluster"
2024-04-29 04:43:39 [ℹ] fetching stack events in attempt to troubleshoot the root cause of the failure
2024-04-29 04:43:40 [!] AWS::IAM::Role/ServiceRole: DELETE_IN_PROGRESS
2024-04-29 04:43:40 [!] AWS::EC2::InternetGateway/InternetGateway: DELETE_IN_PROGRESS
2024-04-29 04:43:40 [!] AWS::EC2::EIP/NATIP: DELETE_IN_PROGRESS
2024-04-29 04:43:40 [✖] AWS::IAM::Role/ServiceRole: CREATE_FAILED – "Resource creation cancelled"
2024-04-29 04:43:40 [✖] AWS::EC2::EIP/NATIP: CREATE_FAILED – "Resource creation cancelled"
2024-04-29 04:43:40 [✖] AWS::EC2::InternetGateway/InternetGateway: CREATE_FAILED – "Resource creation cancelled"
2024-04-29 04:43:40 [✖] AWS::EC2::VPC/VPC: CREATE_FAILED – "Resource handler returned message: \"The maximum number of VPCs has been reached. (Service: Ec2, Status Code: 400, Request ID: 38a06ada-7a26-4da7-a795-7617aea44ec7)\" (RequestToken: 01481569-913d-922e-f332-ec1dd6c87d9c, HandlerErrorCode: GeneralServiceException)"
2024-04-29 04:43:40 [!] 1 error(s) occurred and cluster hasn't been created properly, you may wish to check CloudFormation console
2024-04-29 04:43:40 [ℹ] to cleanup resources, run 'eksctl delete cluster --region=us-west-2 --name=cloud-antrea-eks-conformance-net-policy-672'
2024-04-29 04:43:40 [✖] ResourceNotReady: failed waiting for successful resource state
Error: failed to create cluster "cloud-antrea-eks-conformance-net-policy-672"
Build step 'Execute shell' marked build as failure
Archiving artifacts
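The root cause in the stack events above is the EC2 "VPCs per Region" quota, not anything in the Antrea change set. A minimal sketch of flagging this failure mechanically (the sample line is copied from this build; a real check would grep the archived console log instead):

```shell
# Hedged sketch: classify the recurring CloudFormation failure.
# 'log_line' is inlined sample data, not a live log read.
log_line='AWS::EC2::VPC/VPC: CREATE_FAILED - "The maximum number of VPCs has been reached."'
if printf '%s\n' "$log_line" | grep -q 'maximum number of VPCs has been reached'; then
  result='vpc-quota-exhausted'
else
  result='other-failure'
fi
echo "$result"
```

When this fires, the likely fix is freeing VPCs (e.g. via the `eksctl delete cluster` command the log itself suggests) or requesting a quota increase through AWS Service Quotas, rather than rerunning the job.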

antr...@gmail.com

May 1, 2024, 12:43:46 AM
to projecta...@googlegroups.com
See <https://jenkins.antrea-ci.rocks/job/cloud-antrea-eks-conformance-net-policy/673/display/redirect?page=changes>

Changes:

[noreply] Add `antctl check installation` to conduct connectivity checks (#6133)

[noreply] Set VERSION to v2.1.0-dev (#6267)

[noreply] Bump github.com/onsi/ginkgo/v2 from 2.17.1 to 2.17.2 (#6272)

[noreply] Bump github.com/onsi/gomega from 1.33.0 to 1.33.1 (#6276)


------------------------------------------
Started by timer
Running as SYSTEM
Building remotely on antrea-cloud-ci-vm (antrea-cloud) in workspace <https://jenkins.antrea-ci.rocks/job/cloud-antrea-eks-conformance-net-policy/ws/>
No credentials specified
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository https://github.com/antrea-io/antrea/
> git init <https://jenkins.antrea-ci.rocks/job/cloud-antrea-eks-conformance-net-policy/ws/> # timeout=10
Fetching upstream changes from https://github.com/antrea-io/antrea/
> git --version # timeout=10
> git fetch --tags --progress -- https://github.com/antrea-io/antrea/ +refs/heads/*:refs/remotes/origin/* # timeout=10
> git config remote.origin.url https://github.com/antrea-io/antrea/ # timeout=10
> git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
> git config remote.origin.url https://github.com/antrea-io/antrea/ # timeout=10
Fetching upstream changes from https://github.com/antrea-io/antrea/
> git fetch --tags --progress -- https://github.com/antrea-io/antrea/ +refs/heads/*:refs/remotes/origin/* # timeout=10
> git rev-parse origin/main^{commit} # timeout=10
Checking out Revision 572790d207ef8da205efb12b2c497809fdf96a2a (origin/main)
> git config core.sparsecheckout # timeout=10
> git checkout -f 572790d207ef8da205efb12b2c497809fdf96a2a # timeout=10
Commit message: "Bump github.com/onsi/gomega from 1.33.0 to 1.33.1 (#6276)"
> git rev-list --no-walk 89363ac43777a5d0e1d81694b2816c9bced1c476 # timeout=10
[cloud-antrea-eks-conformance-net-policy] $ /bin/bash /tmp/jenkins15423770920029458579.sh
+ CLUSTER=cloud-antrea-eks-conformance-net-policy-673
+ shift 2
+ [[ 3 -gt 0 ]]
+ key=--log-mode
+ case $key in
+ MODE=detail
+ shift 2
+ [[ 1 -gt 0 ]]
+ key=--setup-only
+ case $key in
+ RUN_SETUP_ONLY=true
+ RUN_ALL=false
+ shift
+ [[ 0 -gt 0 ]]
+ [[ cloud-antrea-eks-conformance-net-policy-673 == '' ]]
+ echo '=== This cluster to be created is named: cloud-antrea-eks-conformance-net-policy-673 ==='
=== This cluster to be created is named: cloud-antrea-eks-conformance-net-policy-673 ===
+ echo CLUSTERNAME=cloud-antrea-eks-conformance-net-policy-673
++ AMI_ID=ami-005a0cb0498b68da9
++ cat
++ echo eksctl-containerd.yaml
+ config=eksctl-containerd.yaml
+ eksctl create cluster -f eksctl-containerd.yaml
2024-05-01 04:43:09 [ℹ] eksctl version 0.176.0
2024-05-01 04:43:09 [ℹ] using region us-west-2
2024-05-01 04:43:09 [ℹ] setting availability zones to [us-west-2a us-west-2b us-west-2d]
2024-05-01 04:43:09 [ℹ] subnets for us-west-2a - public:192.168.0.0/19 private:192.168.96.0/19
2024-05-01 04:43:09 [ℹ] subnets for us-west-2b - public:192.168.32.0/19 private:192.168.128.0/19
2024-05-01 04:43:09 [ℹ] subnets for us-west-2d - public:192.168.64.0/19 private:192.168.160.0/19
2024-05-01 04:43:09 [ℹ] nodegroup "containerd" will use "ami-005a0cb0498b68da9" [AmazonLinux2/1.27]
2024-05-01 04:43:10 [ℹ] using SSH public key "/home/ubuntu/.ssh/id_rsa.pub" as "eksctl-cloud-antrea-eks-conformance-net-policy-673-nodegroup-containerd-26:0c:39:fe:ae:63:53:74:1b:76:11:69:73:09:a7:80"
2024-05-01 04:43:10 [ℹ] using Kubernetes version 1.27
2024-05-01 04:43:10 [ℹ] creating EKS cluster "cloud-antrea-eks-conformance-net-policy-673" in "us-west-2" region with managed nodes
2024-05-01 04:43:10 [ℹ] 1 nodegroup (containerd) was included (based on the include/exclude rules)
2024-05-01 04:43:10 [ℹ] will create a CloudFormation stack for cluster itself and 0 nodegroup stack(s)
2024-05-01 04:43:10 [ℹ] will create a CloudFormation stack for cluster itself and 1 managed nodegroup stack(s)
2024-05-01 04:43:10 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --cluster=cloud-antrea-eks-conformance-net-policy-673'
2024-05-01 04:43:10 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "cloud-antrea-eks-conformance-net-policy-673" in "us-west-2"
2024-05-01 04:43:10 [ℹ] CloudWatch logging will not be enabled for cluster "cloud-antrea-eks-conformance-net-policy-673" in "us-west-2"
2024-05-01 04:43:10 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-west-2 --cluster=cloud-antrea-eks-conformance-net-policy-673'
2024-05-01 04:43:10 [ℹ]
2 sequential tasks: { create cluster control plane "cloud-antrea-eks-conformance-net-policy-673",
2 sequential sub-tasks: {
wait for control plane to become ready,
create managed nodegroup "containerd",
}
}
2024-05-01 04:43:10 [ℹ] building cluster stack "eksctl-cloud-antrea-eks-conformance-net-policy-673-cluster"
2024-05-01 04:43:10 [ℹ] deploying stack "eksctl-cloud-antrea-eks-conformance-net-policy-673-cluster"
2024-05-01 04:43:40 [ℹ] waiting for CloudFormation stack "eksctl-cloud-antrea-eks-conformance-net-policy-673-cluster"
2024-05-01 04:43:40 [✖] unexpected status "ROLLBACK_IN_PROGRESS" while waiting for CloudFormation stack "eksctl-cloud-antrea-eks-conformance-net-policy-673-cluster"
2024-05-01 04:43:40 [✖] unexpected status "ROLLBACK_IN_PROGRESS" while waiting for CloudFormation stack "eksctl-cloud-antrea-eks-conformance-net-policy-673-cluster"
2024-05-01 04:43:40 [ℹ] fetching stack events in attempt to troubleshoot the root cause of the failure
2024-05-01 04:43:40 [!] AWS::EC2::EIP/NATIP: DELETE_IN_PROGRESS
2024-05-01 04:43:40 [!] AWS::IAM::Role/ServiceRole: DELETE_IN_PROGRESS
2024-05-01 04:43:40 [!] AWS::EC2::InternetGateway/InternetGateway: DELETE_IN_PROGRESS
2024-05-01 04:43:40 [✖] AWS::EC2::InternetGateway/InternetGateway: CREATE_FAILED – "Resource creation cancelled"
2024-05-01 04:43:40 [✖] AWS::IAM::Role/ServiceRole: CREATE_FAILED – "Resource creation cancelled"
2024-05-01 04:43:40 [✖] AWS::EC2::EIP/NATIP: CREATE_FAILED – "Resource creation cancelled"
2024-05-01 04:43:40 [✖] AWS::EC2::VPC/VPC: CREATE_FAILED – "Resource handler returned message: \"The maximum number of VPCs has been reached. (Service: Ec2, Status Code: 400, Request ID: d1635bc8-1322-4006-9742-15c5c92b3e7d)\" (RequestToken: a153f3a1-c977-aead-2189-81303345d8d7, HandlerErrorCode: GeneralServiceException)"
2024-05-01 04:43:40 [!] 1 error(s) occurred and cluster hasn't been created properly, you may wish to check CloudFormation console
2024-05-01 04:43:40 [ℹ] to cleanup resources, run 'eksctl delete cluster --region=us-west-2 --name=cloud-antrea-eks-conformance-net-policy-673'
2024-05-01 04:43:40 [✖] ResourceNotReady: failed waiting for successful resource state
Error: failed to create cluster "cloud-antrea-eks-conformance-net-policy-673"

antr...@gmail.com

May 3, 2024, 12:43:45 AM
to projecta...@googlegroups.com
See <https://jenkins.antrea-ci.rocks/job/cloud-antrea-eks-conformance-net-policy/674/display/redirect?page=changes>

Changes:

[noreply] Bump google.golang.org/protobuf from 1.33.0 to 1.34.0 (#6277)


------------------------------------------
Started by timer
Running as SYSTEM
Building remotely on antrea-cloud-ci-vm (antrea-cloud) in workspace <https://jenkins.antrea-ci.rocks/job/cloud-antrea-eks-conformance-net-policy/ws/>
No credentials specified
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository https://github.com/antrea-io/antrea/
> git init <https://jenkins.antrea-ci.rocks/job/cloud-antrea-eks-conformance-net-policy/ws/> # timeout=10
Fetching upstream changes from https://github.com/antrea-io/antrea/
> git --version # timeout=10
> git fetch --tags --progress -- https://github.com/antrea-io/antrea/ +refs/heads/*:refs/remotes/origin/* # timeout=10
> git config remote.origin.url https://github.com/antrea-io/antrea/ # timeout=10
> git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
> git config remote.origin.url https://github.com/antrea-io/antrea/ # timeout=10
Fetching upstream changes from https://github.com/antrea-io/antrea/
> git fetch --tags --progress -- https://github.com/antrea-io/antrea/ +refs/heads/*:refs/remotes/origin/* # timeout=10
> git rev-parse origin/main^{commit} # timeout=10
Checking out Revision f9345daedbce39e77debe9bf3116397da647e57c (origin/main)
> git config core.sparsecheckout # timeout=10
> git checkout -f f9345daedbce39e77debe9bf3116397da647e57c # timeout=10
Commit message: "Bump google.golang.org/protobuf from 1.33.0 to 1.34.0 (#6277)"
> git rev-list --no-walk 572790d207ef8da205efb12b2c497809fdf96a2a # timeout=10
[cloud-antrea-eks-conformance-net-policy] $ /bin/bash /tmp/jenkins10171053419909834909.sh
+ CLUSTER=cloud-antrea-eks-conformance-net-policy-674
+ shift 2
+ [[ 3 -gt 0 ]]
+ key=--log-mode
+ case $key in
+ MODE=detail
+ shift 2
+ [[ 1 -gt 0 ]]
+ key=--setup-only
+ case $key in
+ RUN_SETUP_ONLY=true
+ RUN_ALL=false
+ shift
+ [[ 0 -gt 0 ]]
+ [[ cloud-antrea-eks-conformance-net-policy-674 == '' ]]
+ echo '=== This cluster to be created is named: cloud-antrea-eks-conformance-net-policy-674 ==='
=== This cluster to be created is named: cloud-antrea-eks-conformance-net-policy-674 ===
+ echo CLUSTERNAME=cloud-antrea-eks-conformance-net-policy-674
2024-05-03 04:43:10 [ℹ] eksctl version 0.176.0
2024-05-03 04:43:10 [ℹ] using region us-west-2
2024-05-03 04:43:10 [ℹ] setting availability zones to [us-west-2a us-west-2d us-west-2b]
2024-05-03 04:43:10 [ℹ] subnets for us-west-2a - public:192.168.0.0/19 private:192.168.96.0/19
2024-05-03 04:43:10 [ℹ] subnets for us-west-2d - public:192.168.32.0/19 private:192.168.128.0/19
2024-05-03 04:43:10 [ℹ] subnets for us-west-2b - public:192.168.64.0/19 private:192.168.160.0/19
2024-05-03 04:43:10 [ℹ] nodegroup "containerd" will use "ami-005a0cb0498b68da9" [AmazonLinux2/1.27]
2024-05-03 04:43:10 [ℹ] using SSH public key "/home/ubuntu/.ssh/id_rsa.pub" as "eksctl-cloud-antrea-eks-conformance-net-policy-674-nodegroup-containerd-26:0c:39:fe:ae:63:53:74:1b:76:11:69:73:09:a7:80"
2024-05-03 04:43:10 [ℹ] using Kubernetes version 1.27
2024-05-03 04:43:10 [ℹ] creating EKS cluster "cloud-antrea-eks-conformance-net-policy-674" in "us-west-2" region with managed nodes
2024-05-03 04:43:10 [ℹ] 1 nodegroup (containerd) was included (based on the include/exclude rules)
2024-05-03 04:43:10 [ℹ] will create a CloudFormation stack for cluster itself and 0 nodegroup stack(s)
2024-05-03 04:43:10 [ℹ] will create a CloudFormation stack for cluster itself and 1 managed nodegroup stack(s)
2024-05-03 04:43:10 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --cluster=cloud-antrea-eks-conformance-net-policy-674'
2024-05-03 04:43:10 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "cloud-antrea-eks-conformance-net-policy-674" in "us-west-2"
2024-05-03 04:43:10 [ℹ] CloudWatch logging will not be enabled for cluster "cloud-antrea-eks-conformance-net-policy-674" in "us-west-2"
2024-05-03 04:43:10 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-west-2 --cluster=cloud-antrea-eks-conformance-net-policy-674'
2024-05-03 04:43:10 [ℹ]
2 sequential tasks: { create cluster control plane "cloud-antrea-eks-conformance-net-policy-674",
2 sequential sub-tasks: {
wait for control plane to become ready,
create managed nodegroup "containerd",
}
}
2024-05-03 04:43:10 [ℹ] building cluster stack "eksctl-cloud-antrea-eks-conformance-net-policy-674-cluster"
2024-05-03 04:43:10 [ℹ] deploying stack "eksctl-cloud-antrea-eks-conformance-net-policy-674-cluster"
2024-05-03 04:43:40 [ℹ] waiting for CloudFormation stack "eksctl-cloud-antrea-eks-conformance-net-policy-674-cluster"
2024-05-03 04:43:40 [✖] unexpected status "ROLLBACK_IN_PROGRESS" while waiting for CloudFormation stack "eksctl-cloud-antrea-eks-conformance-net-policy-674-cluster"
2024-05-03 04:43:41 [✖] unexpected status "ROLLBACK_IN_PROGRESS" while waiting for CloudFormation stack "eksctl-cloud-antrea-eks-conformance-net-policy-674-cluster"
2024-05-03 04:43:41 [ℹ] fetching stack events in attempt to troubleshoot the root cause of the failure
2024-05-03 04:43:41 [!] AWS::EC2::InternetGateway/InternetGateway: DELETE_IN_PROGRESS
2024-05-03 04:43:41 [!] AWS::EC2::EIP/NATIP: DELETE_IN_PROGRESS
2024-05-03 04:43:41 [!] AWS::IAM::Role/ServiceRole: DELETE_IN_PROGRESS
2024-05-03 04:43:41 [✖] AWS::IAM::Role/ServiceRole: CREATE_FAILED – "Resource creation cancelled"
2024-05-03 04:43:41 [✖] AWS::EC2::InternetGateway/InternetGateway: CREATE_FAILED – "Resource creation cancelled"
2024-05-03 04:43:41 [✖] AWS::EC2::EIP/NATIP: CREATE_FAILED – "Resource creation cancelled"
2024-05-03 04:43:41 [✖] AWS::EC2::VPC/VPC: CREATE_FAILED – "Resource handler returned message: \"The maximum number of VPCs has been reached. (Service: Ec2, Status Code: 400, Request ID: 554273d5-9fcb-4fb0-bc32-9f967f747e95)\" (RequestToken: dd5f0817-9c66-e60d-65f2-96f1153c0662, HandlerErrorCode: GeneralServiceException)"
2024-05-03 04:43:41 [!] 1 error(s) occurred and cluster hasn't been created properly, you may wish to check CloudFormation console
2024-05-03 04:43:41 [ℹ] to cleanup resources, run 'eksctl delete cluster --region=us-west-2 --name=cloud-antrea-eks-conformance-net-policy-674'
2024-05-03 04:43:41 [✖] ResourceNotReady: failed waiting for successful resource state
Error: failed to create cluster "cloud-antrea-eks-conformance-net-policy-674"

antr...@gmail.com

May 5, 2024, 12:43:44 AM
to projecta...@googlegroups.com
See <https://jenkins.antrea-ci.rocks/job/cloud-antrea-eks-conformance-net-policy/675/display/redirect?page=changes>

Changes:

[noreply] Document Kind CI trigger phrases (#6258)

[noreply] Add documentation for the sameLabels feature in ACNP (#6280)


------------------------------------------
Started by timer
Running as SYSTEM
Building remotely on antrea-cloud-ci-vm (antrea-cloud) in workspace <https://jenkins.antrea-ci.rocks/job/cloud-antrea-eks-conformance-net-policy/ws/>
No credentials specified
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository https://github.com/antrea-io/antrea/
> git init <https://jenkins.antrea-ci.rocks/job/cloud-antrea-eks-conformance-net-policy/ws/> # timeout=10
Fetching upstream changes from https://github.com/antrea-io/antrea/
> git --version # timeout=10
> git fetch --tags --progress -- https://github.com/antrea-io/antrea/ +refs/heads/*:refs/remotes/origin/* # timeout=10
> git config remote.origin.url https://github.com/antrea-io/antrea/ # timeout=10
> git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
> git config remote.origin.url https://github.com/antrea-io/antrea/ # timeout=10
Fetching upstream changes from https://github.com/antrea-io/antrea/
> git fetch --tags --progress -- https://github.com/antrea-io/antrea/ +refs/heads/*:refs/remotes/origin/* # timeout=10
> git rev-parse origin/main^{commit} # timeout=10
Checking out Revision ad7209da108c393b706385f4413c93971e5c03a7 (origin/main)
> git config core.sparsecheckout # timeout=10
> git checkout -f ad7209da108c393b706385f4413c93971e5c03a7 # timeout=10
Commit message: "Add documentation for the sameLabels feature in ACNP (#6280)"
> git rev-list --no-walk f9345daedbce39e77debe9bf3116397da647e57c # timeout=10
[cloud-antrea-eks-conformance-net-policy] $ /bin/bash /tmp/jenkins14505886516289223720.sh
+ CLUSTER=cloud-antrea-eks-conformance-net-policy-675
+ shift 2
+ [[ 3 -gt 0 ]]
+ key=--log-mode
+ case $key in
+ MODE=detail
+ shift 2
+ [[ 1 -gt 0 ]]
+ key=--setup-only
+ case $key in
+ RUN_SETUP_ONLY=true
+ RUN_ALL=false
+ shift
+ [[ 0 -gt 0 ]]
+ [[ cloud-antrea-eks-conformance-net-policy-675 == '' ]]
+ echo '=== This cluster to be created is named: cloud-antrea-eks-conformance-net-policy-675 ==='
=== This cluster to be created is named: cloud-antrea-eks-conformance-net-policy-675 ===
+ echo CLUSTERNAME=cloud-antrea-eks-conformance-net-policy-675
2024-05-05 04:43:09 [ℹ] eksctl version 0.176.0
2024-05-05 04:43:09 [ℹ] using region us-west-2
2024-05-05 04:43:09 [ℹ] setting availability zones to [us-west-2b us-west-2a us-west-2d]
2024-05-05 04:43:09 [ℹ] subnets for us-west-2b - public:192.168.0.0/19 private:192.168.96.0/19
2024-05-05 04:43:09 [ℹ] subnets for us-west-2a - public:192.168.32.0/19 private:192.168.128.0/19
2024-05-05 04:43:09 [ℹ] subnets for us-west-2d - public:192.168.64.0/19 private:192.168.160.0/19
2024-05-05 04:43:09 [ℹ] nodegroup "containerd" will use "ami-005a0cb0498b68da9" [AmazonLinux2/1.27]
2024-05-05 04:43:09 [ℹ] using SSH public key "/home/ubuntu/.ssh/id_rsa.pub" as "eksctl-cloud-antrea-eks-conformance-net-policy-675-nodegroup-containerd-26:0c:39:fe:ae:63:53:74:1b:76:11:69:73:09:a7:80"
2024-05-05 04:43:09 [ℹ] using Kubernetes version 1.27
2024-05-05 04:43:09 [ℹ] creating EKS cluster "cloud-antrea-eks-conformance-net-policy-675" in "us-west-2" region with managed nodes
2024-05-05 04:43:09 [ℹ] 1 nodegroup (containerd) was included (based on the include/exclude rules)
2024-05-05 04:43:09 [ℹ] will create a CloudFormation stack for cluster itself and 0 nodegroup stack(s)
2024-05-05 04:43:09 [ℹ] will create a CloudFormation stack for cluster itself and 1 managed nodegroup stack(s)
2024-05-05 04:43:09 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --cluster=cloud-antrea-eks-conformance-net-policy-675'
2024-05-05 04:43:09 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "cloud-antrea-eks-conformance-net-policy-675" in "us-west-2"
2024-05-05 04:43:09 [ℹ] CloudWatch logging will not be enabled for cluster "cloud-antrea-eks-conformance-net-policy-675" in "us-west-2"
2024-05-05 04:43:09 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-west-2 --cluster=cloud-antrea-eks-conformance-net-policy-675'
2024-05-05 04:43:09 [ℹ]
2 sequential tasks: { create cluster control plane "cloud-antrea-eks-conformance-net-policy-675",
2 sequential sub-tasks: {
wait for control plane to become ready,
create managed nodegroup "containerd",
}
}
2024-05-05 04:43:09 [ℹ] building cluster stack "eksctl-cloud-antrea-eks-conformance-net-policy-675-cluster"
2024-05-05 04:43:09 [ℹ] deploying stack "eksctl-cloud-antrea-eks-conformance-net-policy-675-cluster"
2024-05-05 04:43:39 [ℹ] waiting for CloudFormation stack "eksctl-cloud-antrea-eks-conformance-net-policy-675-cluster"
2024-05-05 04:43:39 [✖] unexpected status "ROLLBACK_IN_PROGRESS" while waiting for CloudFormation stack "eksctl-cloud-antrea-eks-conformance-net-policy-675-cluster"
2024-05-05 04:43:39 [✖] unexpected status "ROLLBACK_IN_PROGRESS" while waiting for CloudFormation stack "eksctl-cloud-antrea-eks-conformance-net-policy-675-cluster"
2024-05-05 04:43:39 [ℹ] fetching stack events in attempt to troubleshoot the root cause of the failure
2024-05-05 04:43:39 [!] AWS::IAM::Role/ServiceRole: DELETE_IN_PROGRESS
2024-05-05 04:43:39 [!] AWS::EC2::InternetGateway/InternetGateway: DELETE_IN_PROGRESS
2024-05-05 04:43:39 [!] AWS::EC2::EIP/NATIP: DELETE_IN_PROGRESS
2024-05-05 04:43:39 [✖] AWS::IAM::Role/ServiceRole: CREATE_FAILED – "Resource creation cancelled"
2024-05-05 04:43:39 [✖] AWS::EC2::EIP/NATIP: CREATE_FAILED – "Resource creation cancelled"
2024-05-05 04:43:39 [✖] AWS::EC2::InternetGateway/InternetGateway: CREATE_FAILED – "Resource creation cancelled"
2024-05-05 04:43:39 [✖] AWS::EC2::VPC/VPC: CREATE_FAILED – "Resource handler returned message: \"The maximum number of VPCs has been reached. (Service: Ec2, Status Code: 400, Request ID: 21af1e59-f792-49c7-9c5a-7da4d35d143e)\" (RequestToken: 53b6ec2d-f4f6-2e8b-52ac-48408dded2d2, HandlerErrorCode: GeneralServiceException)"
2024-05-05 04:43:39 [!] 1 error(s) occurred and cluster hasn't been created properly, you may wish to check CloudFormation console
2024-05-05 04:43:39 [ℹ] to cleanup resources, run 'eksctl delete cluster --region=us-west-2 --name=cloud-antrea-eks-conformance-net-policy-675'
2024-05-05 04:43:39 [✖] ResourceNotReady: failed waiting for successful resource state
Error: failed to create cluster "cloud-antrea-eks-conformance-net-policy-675"
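The same quota error recurred across builds 672 through 675, so leftover VPCs from earlier failed runs are a plausible cause. A sketch of picking out stale conformance clusters by name (the name pattern and the hard-coded sample list are assumptions; in practice the names would come from `eksctl get cluster --region us-west-2`):

```shell
# Filter stale conformance clusters from a cluster-name list; each match
# could then be deleted with:
#   eksctl delete cluster --region us-west-2 --name "$name"
# to free one VPC per cluster. Input is sample data for illustration.
names='cloud-antrea-eks-conformance-net-policy-672
my-production-cluster
cloud-antrea-eks-conformance-net-policy-673'
stale="$(printf '%s\n' "$names" | grep '^cloud-antrea-eks-conformance-net-policy-' || true)"
stale_count="$(printf '%s\n' "$stale" | grep -c . || true)"
echo "$stale_count stale cluster(s)"
```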

antr...@gmail.com

May 7, 2024, 1:38:19 AM
to projecta...@googlegroups.com