Changing schedule of a running ejb timer task


Stephen Sill II

Apr 8, 2026, 12:43:26 PM
to WildFly
Hi,
I discovered something I hadn't seen before. I have an app running on WildFly 39.0.1.

This app has a scheduled task method that looks like
 @Schedule(second = "47", minute = "*/5", hour = "*", persistent = true)

I just upgraded the app, which runs in Kubernetes. The old version of the app had minute as `*/20` while the new one has `*/5`.

Since Kubernetes does a rolling update, with new members/replicas joining the WildFly cluster as old ones leave, I discovered that the task I intended to run every 5 min is still running every 20 min. I realize that because timer tasks are synced with Infinispan, the new version of the app probably saw this schedule method as already scheduled and so didn't replace it with the new one.

Is it possible via jboss-cli to have this pick up the every-5-min schedule, or will I have to scale to 0 pods and then back to 2 in order to pick up the new schedule?
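
One untested avenue, assuming the ejb3 subsystem still exposes deployed timers as management resources in the jboss-cli tree (the deployment and bean names below are placeholders, and the exact paths should be verified against your server's model):

```shell
# Placeholder names -- substitute your own deployment and bean.
# List the timers belonging to the bean:
/deployment=myapp.war/subsystem=ejb3/singleton-bean=MyTaskBean/service=timer-service:read-children-names(child-type=timer)

# Inspect a timer's persisted schedule and next timeout:
/deployment=myapp.war/subsystem=ejb3/singleton-bean=MyTaskBean/service=timer-service/timer=MYTIMERID:read-resource(include-runtime=true)

# Cancel the stale timer; the @Schedule annotation would then
# recreate it (with the new expression) on the next deploy:
/deployment=myapp.war/subsystem=ejb3/singleton-bean=MyTaskBean/service=timer-service/timer=MYTIMERID:cancel
```

Note that since automatic timers are only created at deployment time, canceling via the CLI alone leaves no timer at all until a member redeploys or restarts.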

Thanks,
Stephen

Stephen Sill II

Apr 9, 2026, 11:05:32 AM
to WildFly
I went ahead and scaled the deployment to 0 and back to 2, but I'm curious whether, for future deployments, there's a way to make sure the new schedule takes effect. Should this behavior be considered a bug?

Stephen Sill II

Apr 13, 2026, 11:12:26 AM
to WildFly
Just following up on this.

What should the behavior be when a node joins a WildFly cluster with a new version of an application (via helm upgrade in a Kubernetes cluster), when that application has an EJB with a method annotated like this:

@Schedule(second = "47", minute = "*/5", hour = "*", persistent = true)

In my case, the old version had `*/20` for the minute and the new version `*/5`. Because the timer task was persisted in Infinispan, the old schedule remained in effect and required a complete undeploy to pick up the new version of the schedule annotation. Is this expected behavior, or should it be considered a bug?

Thanks again,
Stephen

Paul Ferraro

Apr 14, 2026, 12:24:51 PM
to WildFly
The Infinispan-based TimerService treats the schedule expression of an automatic timer as immutable (in the same way that the schedule expression of a programmatic calendar-based persistent timer is immutable once created). Automatic persistent timers are identified by their associated method. When the application is first deployed, the timer is persisted along with its schedule expression. If an application is redeployed with a modified @Schedule, the timer is not recreated, as it already exists (per its identity), which also means that its schedule expression is not updated.

In the context of a cluster, it is questionable whether updating would even make sense, since the new schedule expression may conflict with the existing timer definition on other cluster members (in particular, the cluster member on which the timer is currently scheduled), which may still contain the version of the application with the original schedule expression. Since WildFly does not currently have any notion of application version, it has no mechanism to determine which cluster member contains the "current"/"correct" schedule expression.
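
The "already exists (per its identity)" point can be sketched with a plain map keyed by component/method name. This is a loose analogy for the persisted timer store, not the actual WildFly code:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class TimerIdentityDemo {
    public static void main(String[] args) {
        // Stand-in for the persisted timer store, keyed by the timer's
        // identity (its component/method name).
        ConcurrentMap<String, String> store = new ConcurrentHashMap<>();

        // First deployment persists the automatic timer with its schedule.
        store.putIfAbsent("MyBean#runTask", "second=47 minute=*/20");

        // Redeploy with a changed @Schedule: the identity already exists,
        // so the old expression is left in place.
        store.putIfAbsent("MyBean#runTask", "second=47 minute=*/5");

        System.out.println(store.get("MyBean#runTask"));
    }
}
```

Running this prints the original `*/20` expression, mirroring why the redeployed `*/5` schedule never takes effect.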

As far as I am aware, the Jakarta Enterprise Beans specification does not describe how an existing persistent timer should behave in the event that, upon redeploy on one JVM, its schedule expression has changed. The most relevant mention appears to be this:

> By default, each Schedule annotation corresponds to a single persistent timer, regardless of the number of JVMs across which the container is distributed.

In the absence of clear requirements, I have always interpreted this to mean that: if the annotation corresponds to the timer, then the first cluster member to create the timer determines the schedule expression.

That said, changing the schedule does seem like a perfectly reasonable thing to want, even if the specification does not describe how exactly this is meant to work.
Since we have a mechanism by which the schedule is identified, we could theoretically just update the value in the distributed cache.
If the timer is owned by the newly started server, all is fine.
If the timer is owned by a different server, and the new schedule would fire the event further in the future, all is fine.
However, if the timer is owned by another server, and the new schedule would fire the timer event sooner than the old schedule, the new schedule will not take effect until the subsequent invocation.

That all sounds reasonable, however...

What if, instead of modifying the schedule, the annotation were modified to be non-persistent? What would you expect to happen?
Given how this is implemented, that would effectively result in a new non-persistent timer being created. However, there is now a phantom persistent timer that will continue to execute. Timer entries are indexed by component/method name, so we technically have a relatively inexpensive means of identifying it; should these be auto-cancelled?
What if the schedule annotation was removed altogether? Scanning for phantom timers on potential methods that require cancellation would be relatively expensive.
What if the schedule annotation was removed *and* the method itself renamed?  This would require a full cache entry scan and would be very expensive to attempt an auto-cancel.
Thoughts?

Paul

Stephen Sill II

Apr 14, 2026, 2:28:12 PM
to WildFly
Hi Paul,

Thanks for your detailed reply! Now that I know that's the behavior, I can plan accordingly. I was just curious what the expected result should be. Changing this schedule isn't frequent, so I'm not concerned, and since I'm the one doing it I can plan around it.

My other app uses more dynamic scheduled tasks, so I've implemented a scheduling-service EJB that interacts with the TimerService and checks whether the tasks that should be running are running, and whether they match the scheduling data from the task configuration in the DB. The app I'm seeing this on just has a limited number of tasks that almost never change schedule, so I use the annotations for those methods.

I can see how a WildFly cluster with a lot of nodes could make updating an already established task expensive. I would think that, if possible, you'd want to cancel the existing invocation and schedule a new one, but if that task is currently running you'd want to make sure it completes before rescheduling.
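
A scheduling-service bean along the lines described might look roughly like this. It is only a sketch: `taskName` and `minuteExpr` stand in for the hypothetical DB-backed task configuration, and it relies on `getTimers()` returning only this bean's own programmatic timers:

```java
import jakarta.annotation.Resource;
import jakarta.ejb.ScheduleExpression;
import jakarta.ejb.Singleton;
import jakarta.ejb.Timeout;
import jakarta.ejb.Timer;
import jakarta.ejb.TimerConfig;
import jakarta.ejb.TimerService;

@Singleton
public class SchedulingService {

    @Resource
    private TimerService timerService;

    /**
     * Reconcile a named task against the desired minute expression from the
     * task configuration: cancel any calendar timer whose persisted schedule
     * no longer matches, then create a fresh one.
     */
    public void reconcile(String taskName, String minuteExpr) {
        for (Timer timer : timerService.getTimers()) {
            if (taskName.equals(timer.getInfo()) && timer.isCalendarTimer()) {
                if (minuteExpr.equals(timer.getSchedule().getMinute())) {
                    return; // persisted schedule already matches
                }
                timer.cancel(); // stale schedule: drop the persisted timer
            }
        }
        ScheduleExpression desired = new ScheduleExpression()
                .second("47").minute(minuteExpr).hour("*");
        timerService.createCalendarTimer(desired, new TimerConfig(taskName, true));
    }

    @Timeout
    public void run(Timer timer) {
        // Dispatch to the task identified by timer.getInfo().
    }
}
```

Cancel-then-recreate does not wait for an in-flight invocation, per the concern above; coordinating that would need additional state.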

I think at this time as long as the behavior in wildfly is documented it should be enough.  I don't plan to change this schedule again any time soon so I'm not super concerned about it.

Again thanks for your detailed explanation!
Stephen