WFLYEJB0017 error: what does it mean?


Dustin Kut Moy Cheung

Aug 15, 2024, 5:37:27 PM
to WildFly
Hi,

I've got an atypical question, since I'm using an old version of WildFly running KIE Server 7.12.

I'm seeing this error and I'm unsure what it means. Could you help me, please?
I know it comes from a KIE Server node that should trigger after x seconds. I believe KIE Server uses EJB timers in the background to do so, but the trigger never fires.

[2024-08-15T19:29:03.881Z] INFO [org.jboss.as.ejb3.timer] WFLYEJB0017: Next expiration is null. No tasks will be scheduled for timer [ID=9620811C-1019-4F7D-83F4-85F37ABB5478 TIMEDOBJECTID=ROOT.ROOT.EJBTIMERSCHEDULER AUTO-TIMER?:FALSE PERSISTENT?:TRUE TIMERSERVICE=ORG.JBOSS.AS.EJB3.TIMERSERVICE.TIMERSERVICEIMPL@5A398FFA PREVIOUSRUN=2024-08-15 19:29:02.0 INITIALEXPIRATION=2024-08-15 19:29:02.0 INTERVALDURATION(IN MILLI SEC)=0 NEXTEXPIRATION=NULL TIMERSTATE=ACTIVE INFO=EJBTIMERJOB [TIMERJOBINSTANCE=GLOBALJPATIMERJOBINSTANCE [TIMERSERVICEID=RHPAM_1.6.2-SNAPSHOT-TIMERSERVICEID, GETJOBHANDLE()=EJBGLOBALJOBHANDLE [UUID=13-11-1]]]]

I'm seeing this when we use an AWS Aurora PostgreSQL server, but the error goes away when I use a PostgreSQL pod running on OpenShift. I'm unsure whether it's caused by different database timeout settings or something else.

Thanks for any help!

Sincerely,
Dustin

Bartosz Baranowski

Aug 26, 2024, 2:10:28 AM
to WildFly
Well, the source is fairly clear on it: https://github.com/wildfly/wildfly/blob/main/ejb3/src/main/java/org/jboss/as/ejb3/timerservice/TimerServiceImpl.java#L643
The detail in your message also gives it away: NEXTEXPIRATION=NULL

As to why — at this point it's a guessing game. What you could do is set the "org.jboss.as.ejb3.timer" logger to DEBUG level and compare the two instances.
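For reference, the logger category can be raised to DEBUG with jboss-cli. This is a sketch assuming a default standalone configuration; handler names and subsystem paths may differ on older EAP/RHPAM builds:

```
# Connect to the running server
$JBOSS_HOME/bin/jboss-cli.sh --connect

# Add a DEBUG-level logger for the EJB timer category
/subsystem=logging/logger=org.jboss.as.ejb3.timer:add(level=DEBUG)

# Ensure the console handler also passes DEBUG messages through
/subsystem=logging/console-handler=CONSOLE:write-attribute(name=level, value=DEBUG)
```

Running the same change on both environments and diffing the timer-service log output should show where the two instances diverge.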

Dustin Kut Moy Cheung

Aug 26, 2024, 3:25:33 PM
to WildFly
Hi!

Thanks, Bartosz, for the hints! We were able to find the issue thanks to them.

It turns out we were running a second instance of our application, which we had forgotten about, in another OpenShift cluster connected to the same database as the first one. The two instances were interfering with each other, and shutting down the second one fixed it.

Special thanks also to Honza Brázdil, who investigated the issue further and found it! ❤️

Sincerely,
Dustin

Bartosz Baranowski

Aug 27, 2024, 7:18:32 AM
to WildFly
Good to hear.