There can also be a problem, even if you don't have long-running rules, if you have A LOT of events firing off lots and lots of rules. Only five can run at a time, so if you have a barrage of events occur all at once it could take some time to work them off, and in the meantime the backlog continues to build. If this is the case, you can increase the number of runtime threads; there is a setting somewhere in the userdata/etc config files.

---

Counting lines in events.log, I typically end up with about 1100 events/hr. But even then, how come the scheduler stops working without making any noise? Is there a way to see/measure/track/log the size of the scheduler backlog?

I do have quite some timers with long timeouts (a few minutes up to 1 hr). Are these timer objects using the Quartz scheduler as well, or is there another mechanism for executing the timer lambdas?

---

1100 means about one event every three seconds. With five threads you should be able to handle about five events every three seconds, so long as the rules take less than three seconds to complete. But keep in mind that over those seconds more events are coming in, so it is very easy to build up a backlog. It isn't always possible, but most of your rules should take on the order of half a second or less to execute, with a rare one that takes longer. If you have a motion sensor going off every three seconds, and the rule that triggers off of it takes three seconds or more, you can easily build up this backlog. I'm not saying this is your problem, but we need to eliminate it as a possibility before we can say there is a bug.

In the scenario I described above, the scheduler isn't stopping; you just run out of threads. If you have rules that don't log and that execute a lot, it looks like the rules have stopped, when in fact they are furiously working off events, just more slowly than new events come in. I don't know what algorithm is used to select which events get worked off first, but it doesn't appear to be FIFO. I can't say that for certain, though.

---

Looking into the threads via the karaf console, I found three Quartz-related threads. When the system is running fine, it looks like this:

```
"openHAB-job-scheduler_QuartzSchedulerThread" Id=85 in TIMED_WAITING on (Native Method)
    at org.quartz.core.QuartzSchedulerThread.run(QuartzSchedulerThread.java:311)
"openHAB-job-scheduler_Worker-1" Id=83 in TIMED_WAITING on (Native Method)
    at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:568)
"openHAB-job-scheduler_Worker-2" Id=84 in TIMED_WAITING on (Native Method)
```

A few minutes ago, cron jobs were stopped again, and now I see this:

```
    at org.quartz.simpl.SimpleThreadPool.blockForAvailableThreads(SimpleThreadPool.java:452)
    at org.quartz.core.QuartzSchedulerThread.run(QuartzSchedulerThread.java:263)
```

New: I just found out that one of my motion sensors is sending alerts every 3 seconds. I now put it in a drawer to see if this has any effect.
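The backlog arithmetic above can be sketched numerically. A minimal model assuming constant rates: the five-thread pool and the ~1100 events/hr figure come from this thread, while the function name and the 9-second rule duration are hypothetical illustrations.

```python
def backlog_after(seconds, arrival_rate, threads=5, rule_secs=3.0):
    """Rough queue size after `seconds`, assuming constant rates.

    With `threads` workers each busy for `rule_secs` per event, the pool
    drains at threads / rule_secs events per second; anything arriving
    faster than that piles up in the scheduler's queue.
    """
    drain_rate = threads / rule_secs          # events/sec the pool can finish
    return max(0.0, (arrival_rate - drain_rate) * seconds)

# ~1100 events/hr plus a motion sensor firing every 3 seconds:
rate = 1100 / 3600 + 1 / 3
print(backlog_after(600, rate))                 # pool keeps up: 0.0
print(backlog_after(600, rate, rule_secs=9.0))  # slow rules: backlog grows
```

The point of the sketch is that the system degrades silently: nothing errors out while the queue grows, which matches the "stopped without making noise" symptom described above.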