Amazon EC2 Container Service Plugin Waiting for Executor


Tez Macca

Jul 18, 2016, 12:46:41 AM
to Jenkins Users
Hi,

I have configured the "Amazon EC2 Container Service Plugin" with a single, statically associated EC2 instance. My expectation was that multiple slaves/containers would be started when multiple jobs are scheduled.
However, when the second job runs I get the following message:
Waiting for next available executor on aws-slaves-c50dc930a5d94
where 'aws-slaves-c50dc930a5d94' is the node started by the first job. The jobs themselves start and execute perfectly, though.

To get multiple executors, do I need to change something in:
1. how I have configured the AWS plugin,
2. the ECS cluster in AWS, or
3. my Jenkinsfile (e.g. setting concurrency on the stage)?

This also means that I am unable to run multiple parallel steps.
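For illustration, a simplified scripted pipeline of the kind described above might look like this (the label, branch names, and shell commands are made up for the example; they are not from the original setup):

    // Two branches, each requesting its own agent with the ECS label.
    def branches = [:]
    branches['unit'] = {
        node('aws-slaves') {                 // assumed label served by the ECS plugin
            sh './run-unit-tests.sh'
        }
    }
    branches['integration'] = {
        node('aws-slaves') {
            sh './run-integration-tests.sh'
        }
    }
    parallel branches                        // queues both node() requests at the same time

With the behaviour described in this thread, only one of the two node('aws-slaves') blocks gets an agent; the other sits in the queue with the "Waiting for next available executor" message until the first finishes.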

Mariusz

Apr 27, 2017, 3:03:41 AM
to Jenkins Users
Did you ever find out what the issue was here?

I've been experiencing the exact same problem. Only when the first job finishes will a new ECS task start, which then runs the next job.

Joshua Noble

May 1, 2017, 4:52:02 PM
to Jenkins Users
This plugin has an unfortunate known flaw: it will only spin up one executor/node per label. If you have two jobs waiting to build with the same label, it will build the first one, kill the node after it's done, and only then start the next one (instead of creating two containers/tasks in parallel). I've tried all the hacks (single build slave plugin, etc.) and they no longer work.

For this reason I've decided not to use it. You're much better off using the EC2 plugin (which is very well supported), registering those nodes as build agents, and then using Jenkins' native Docker workflow with pipelines (in essence, Jenkins will run Docker containers on the EC2 machines).
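A minimal sketch of that approach, assuming the EC2-plugin agents carry a label such as 'ec2' and that the Docker Pipeline (docker-workflow) plugin is installed (label, image, and command are illustrative):

    node('ec2') {                                // EC2-plugin agent with Docker installed
        checkout scm
        docker.image('maven:3-jdk-8').inside {   // run the build steps inside a container on that agent
            sh 'mvn -B verify'
        }
    }

Because the container only lives for the duration of the inside { } block, several such builds can run side by side on the same EC2 machine, limited by the agent's executor count rather than by cloud provisioning.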

Douglas Manley

Jun 8, 2017, 2:57:20 PM
to Jenkins Users
As far as I can tell, it's not actually the plugin that's the problem.  Rather, Jenkins only asks the "cloud" provider (in this case, ECS) to create a new node when there are zero nodes available.  Thus, as long as there is one job running, Jenkins sees that as a valid node and will politely wait until it finishes to run the next job.  Only once that node kills itself does Jenkins ask the cloud provider to create another one.

It seems like there just needs to be some kind of plugin that can read the queue and talk to the cloud providers based on the number of tasks outstanding.
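As a rough sketch of that idea, the queue is at least visible to code; a script-console snippet along these lines (the label name is an assumption) shows the outstanding demand per label:

    // Count queued items that are waiting for a particular label.
    import jenkins.model.Jenkins

    def label = Jenkins.instance.getLabel('aws-slaves')    // assumed ECS template label
    def waiting = Jenkins.instance.queue.items.findAll { item ->
        item.assignedLabel == label
    }
    println "Items waiting for ${label.name}: ${waiting.size()}"

A provisioning strategy that looked at this number, rather than only at whether any node with the label exists, could start one task per queued item.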

Douglas Manley

Jul 9, 2017, 10:44:30 PM
to Jenkins Users
Update: I believe that I have fixed the problem within the "amazon-ecs-plugin".  The short story is that it's possible to tell Jenkins that the automatically-created nodes, once they start a job, will never run another job again, causing Jenkins to spin up more nodes to handle the remaining items in the queue.  The responsiveness is reasonably good, and it totally meets my needs at work.

The pull request is here; it has not been merged in yet (as of 2017-07-09): https://github.com/jenkinsci/amazon-ecs-plugin/pull/48

Stephen Connolly

Jul 10, 2017, 2:35:19 AM
to jenkins...@googlegroups.com
On Mon 10 Jul 2017 at 03:44, Douglas Manley <doug....@gmail.com> wrote:
Update: I believe that I have fixed the problem within the "amazon-ecs-plugin".  The short story is that it's possible to tell Jenkins that the automatically-created nodes, once they start a job, will never run another job again, causing Jenkins to spin up more nodes to handle the remaining items in the queue.  The responsiveness is reasonably good, and it totally meets my needs at work.

The pull request is here; it has not been merged in yet (as of 2017-07-09): https://github.com/jenkinsci/amazon-ecs-plugin/pull/48

Oh so close.... sadly looks like the cigar is missing.

Durable tasks are not going to work, as they cannot be accepted again: when they requeue they will run on a different agent and think the shell step died.

If it were not for the fun of durable tasks you'd be totally on the money here.

There is a retention strategy that does what you want in the durable-task plugin. Try reworking the PR to use that.

Excellent work in perhaps the trickiest API of Jenkins to work with.
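For reference, the retention strategy Stephen seems to be pointing at is OnceRetentionStrategy from the durable-task plugin, which tears a cloud agent down once the job it accepted has completed. A schematic Groovy sketch of applying it to a freshly provisioned cloud agent (the helper name and idle timeout are invented for illustration; this is not the actual amazon-ecs-plugin code):

    // Schematic only: mark a cloud agent as one-shot so Jenkins provisions
    // a fresh agent for each queued job instead of waiting for this one.
    import hudson.slaves.AbstractCloudSlave
    import org.jenkinsci.plugins.durabletask.executors.OnceRetentionStrategy

    void makeOneShot(AbstractCloudSlave agent) {
        // Terminate the agent once its job has completed
        // (5-minute idle timeout before an unused agent is reaped).
        agent.setRetentionStrategy(new OnceRetentionStrategy(5))
    }

Per Stephen's comment, reusing this strategy avoids the requeue-onto-a-different-agent problem that a blunt "never run another job" flag can cause for durable sh steps.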



--
Sent from my phone