Thanks for the feedback. There is plenty of relevant work here to keep me
busy today.
> - I don't think there's a reason for you to subclass Plugin. It
> doesn't really do anything here. You could move the static stuff into
> the listener directly.
You are probably right. I will look into it.
> - You're missing the resource file that contains the plugin
> description for the plugin manager (src/main/resources/index.jelly).
Ok
> - Don't reinvent logging levels (especially if they cannot be changed
> once compiled), use java.util.logging consistently.
Ok
> - Don't write in the build log of jobs that you're not enabled for.
> That can break things like log parsing etc., as well as look weird.
> AFAIU this is added the moment another (upstream) job is aborted.
I don't. If the variable "HIERARCHY_KILLER_ENABLED" is not defined, the
plugin ignores the upstream job and writes nothing to its log.
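To make the opt-in behaviour concrete, here is a minimal sketch of the guard I mean (plain Java, hypothetical class and method names, not the actual plugin code): only builds whose environment defines the variable are processed at all, so nothing is ever written to the log of an unrelated job.

```java
import java.util.Map;

public class HierarchyGuard {
    // Hypothetical helper mirroring the plugin's opt-in check: a build
    // participates only if HIERARCHY_KILLER_ENABLED is defined in its
    // environment; everything else is skipped before any log output.
    static boolean isEnabled(Map<String, String> env) {
        return env.containsKey("HIERARCHY_KILLER_ENABLED");
    }
}
```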
> - Don't prefix instance variables with 'i'.
Ok
> - I doubt you really need the RunData, why not query all running jobs
> for what their (upstream) causes are when you get notified about a run
> abort? Not sure what's saved/loaded in the Plugin class, but I expect
> the current design with the static > HashMap doesn't handle a
> Jenkins restart well (the queue is persisted and restored on restart).
Because the plugin is not only supposed to kill upstream builds: if I kill
the upstream build, I want to kill its other downstream builds as well.
The problem is that I couldn't find a way to get a list of all downstream
builds for a running build.
Therefore I collect this information at runtime.
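Roughly, the runtime bookkeeping I have in mind looks like this (a hypothetical sketch with build ids as plain strings, not the plugin's actual code): each trigger is recorded as it happens, so on an abort both the upstream build and its sibling downstream builds can be looked up.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class HierarchyTracker {
    // Map from an upstream build id to the downstream builds it triggered.
    // Nothing is persisted: after a Jenkins restart the map starts out
    // empty, which matches the "no persistent data" requirement.
    private static final ConcurrentMap<String, List<String>> downstream =
            new ConcurrentHashMap<>();

    // Called when a downstream build starts with an upstream cause.
    static void recordTrigger(String upstreamId, String downstreamId) {
        downstream.computeIfAbsent(upstreamId,
                        k -> Collections.synchronizedList(new ArrayList<>()))
                .add(downstreamId);
    }

    // Called on abort to find the builds that should be killed as well.
    static List<String> downstreamOf(String upstreamId) {
        return downstream.getOrDefault(upstreamId, Collections.emptyList());
    }
}
```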
Regarding the Jenkins restart: I have to make sure that no persistent data
is stored, so the plugin simply starts from scratch.
> - Don't rely on the JENKINS_URL env var to be set.
Ok
> - Given how many env vars you check, I think at least
> ENABLE_HIERARCHY_KILLER can be replaced by a job config option (e.g. a
> JobProperty). This would also have the advantage that the plugin would
> have some UI, and could have an inline help text
> explaining how it works.
But it wouldn't automatically be propagated downstream, and even if it
were, the downstream job could not easily override it again.
> A non-library plugin without any UI is not a great idea as admins may
> easily forget it exists, and users don't even know it's there.
I agree, you have a point there. I'm not sure how to reconcile that with
my previous comment, though.
> - It likely doesn't work with Workflow plugin, whose Runs aren't
> AbstractBuilds. Build Flow plugin probably has the same problem.
Such jobs are ignored by my plugin. (I just updated it to ignore the
notifications for everything except AbstractBuilds.)
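The filter itself is just an early return; here is a sketch of the idea with stand-in classes (hypothetical names standing in for hudson.model.Run and hudson.model.AbstractBuild, since the real listener runs inside Jenkins):

```java
public class ListenerSketch {
    // Stand-ins for Jenkins' class hierarchy, for illustration only.
    static class Run {}
    static class AbstractBuild extends Run {}
    static class WorkflowRun extends Run {}

    // The listener callbacks bail out for anything that is not an
    // AbstractBuild, so Workflow / Build Flow runs are simply ignored.
    static boolean handles(Run run) {
        return run instanceof AbstractBuild;
    }
}
```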
> - While I understand that Parameterized Trigger allows having an
> upstream project that still runs, I doubt it's always only one,
> which seems to be the assumption in your code. While that feature may
> not consider quiet periods, the node may be missing, so two
> identically parameterized builds could be collapsed in the queue while
> waiting for the node to free up.
Isn't this something only the Workflow plugin does? I saw it with
Workflow, but never with Parameterized Trigger. If this really is an
issue for normal builds as well, I will replace iUpstream with a list
(vector) too.
Again, thanks for your feedback. Regarding the naming: "Downstream-killer"
is incomplete, because to fail early the plugin also needs to be able to
kill upstream builds. "abort-related-jobs" might work. However, I somehow
like the current name; in our company we often talk about job hierarchies.
Maybe some others can chime in with a comment?
Br,
Thorsten