Task #1837
Task #724: Process long running task in more threads

Implement thread rejection policy for LRT pool

Added by Petr Fišer over 4 years ago. Updated about 4 years ago.

Status:
Closed
Priority:
Normal
Assignee:
Radek Tomiška
Category:
Long running task
Target version:
Start date:
09/05/2019
Due date:
% Done:

100%

Estimated time:
Owner:

Description

I am running IdM with this configuration:

...
scheduler.task.executor.corePoolSize=2
scheduler.task.executor.maxPoolSize=10
scheduler.task.executor.queueCapacity=2
...

When I create more than 2 LRTs, the pool is expanded and the additional LRTs become Running.
When I create more than maxPoolSize LRTs, all LRTs are enqueued. LRTs that were created after the number of LRTs reached maxPoolSize seem to be correct. However, when they are about to run, they enter the Failed state with the following error:
java.lang.IllegalArgumentException: [Assertion failed] - this argument is required; it must not be null
    at org.springframework.util.Assert.notNull(Assert.java:115)
    at org.springframework.util.Assert.notNull(Assert.java:126)
    at eu.bcvsolutions.idm.core.api.bulk.action.AbstractBulkAction.process(AbstractBulkAction.java:172)
    at eu.bcvsolutions.idm.core.api.bulk.action.AbstractBulkAction.process(AbstractBulkAction.java:47)
    at eu.bcvsolutions.idm.core.scheduler.api.service.AbstractLongRunningTaskExecutor.call(AbstractLongRunningTaskExecutor.java:197)
    at eu.bcvsolutions.idm.core.scheduler.api.service.AbstractLongRunningTaskExecutor$$FastClassBySpringCGLIB$$f9eae371.invoke(<generated>)
    at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
    at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:720)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
    at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
    at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:655)
    at eu.bcvsolutions.idm.acc.bulk.action.impl.IdentityAccountManagementBulkAction$$EnhancerBySpringCGLIB$$f9ddabf2.call(<generated>)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at org.springframework.security.concurrent.DelegatingSecurityContextRunnable.run(DelegatingSecurityContextRunnable.java:80)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

This is because a rejection policy on the task queue is not implemented yet.
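
For illustration, a minimal sketch with the plain JDK ThreadPoolExecutor (not IdM code) showing how a pool with the configuration above fills up: core threads first, then the bounded queue, then extra threads up to maxPoolSize; anything submitted beyond that hits the rejection handler (the JDK default AbortPolicy throws RejectedExecutionException):

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSaturationSketch {

    public static void main(String[] args) {
        // same numbers as the configuration above
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                2,                                      // corePoolSize
                10,                                     // maxPoolSize
                60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>(2),           // queueCapacity
                new ThreadPoolExecutor.AbortPolicy());  // default handler: throw on saturation

        for (int i = 0; i < 15; i++) {
            try {
                executor.execute(() -> {
                    try {
                        Thread.sleep(10_000); // simulate a long running task
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            } catch (RejectedExecutionException e) {
                // tasks 12-14 end up here: 10 busy threads + 2 queued = 12 accepted
                System.err.println("task " + i + " rejected: " + e);
            }
        }
        executor.shutdownNow();
    }
}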


Related issues

Related to IdStory Identity Manager - Feature #2040: Provisioning system timeout - Execute provisioning synchronously from long running task is stucked - Closed - Radek Tomiška - 02/05/2020
Related to IdStory Identity Manager - Task #2107: LRT: persist bulk action into long running task agenda - Closed - Radek Tomiška - 03/09/2020
#1

Updated by Radek Tomiška about 4 years ago

  • Category set to Long running task
  • Target version set to 10.2.0
#2

Updated by Radek Tomiška about 4 years ago

  • Related to Feature #2040: Provisioning system timeout - Execute provisioning synchronously from long running task is stucked added
#3

Updated by Radek Tomiška about 4 years ago

  • Status changed from New to In Progress
#4

Updated by Petr Fišer about 4 years ago

Hi, one thing to mention.
Some time ago I noticed that LRTs that are queued for running (but not yet started) tend to be started in random order.

What happened:
I had one task executor that was busy deleting entries and ran for about 4 hours. In the meantime, more and more RetryProvisioning tasks were waiting in the queue. That's expected and OK.
After the long LRT finished, the RetryProvisioning tasks were executed, and it looked like they ran in random order. This is not a problem for the retry mechanism, but it is still something we should check.

Consider the situation where nightly LRTs (HR synchronization, user enabling/disabling) get enqueued and wait to be run. Some earlier LRT could be running exceptionally slowly this time, etc. Executing synchronizations, HR processes and the like in a different order than intended could cause problems, e.g. HR processes running on only partially refreshed contract data.
This also won't be easy to manage when there are multiple task executor threads... it seems to me some kind of dependency mechanism may be needed.

Could you please look at this while you are implementing this ticket? It would be really helpful. :)

#5

Updated by Radek Tomiška about 4 years ago

  • Status changed from In Progress to Needs feedback
  • Assignee changed from Radek Tomiška to Vít Švanda
  • % Done changed from 0 to 90

The rejection policy is implemented. As part of this, the default product task executor configuration was changed; projects that do not use the default configuration should apply the same change (recommended, but not required):

scheduler.task.executor.queueCapacity=20

With the previously configured 'Integer.MAX_VALUE' queue capacity, the 'scheduler.task.executor.maxPoolSize' configuration was effectively ignored => only 'scheduler.task.executor.corePoolSize' threads were ever used.
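
For illustration only (the real wiring lives in the product, see the commit below), a sketch of how these properties map onto Spring's ThreadPoolTaskExecutor and where a rejection handler plugs in; everything except the Spring API itself is assumed here:

import java.util.concurrent.ThreadPoolExecutor;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

public class LrtExecutorConfigSketch {

    public ThreadPoolTaskExecutor lrtTaskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(2);    // scheduler.task.executor.corePoolSize
        executor.setMaxPoolSize(10);    // scheduler.task.executor.maxPoolSize
        executor.setQueueCapacity(20);  // scheduler.task.executor.queueCapacity (new default)
        // with an unbounded queue (Integer.MAX_VALUE) the queue never fills up,
        // so maxPoolSize never takes effect and only corePoolSize threads are used
        executor.setRejectedExecutionHandler(new ThreadPoolExecutor.AbortPolicy());
        executor.initialize();
        return executor;
    }
}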

Commit:
https://github.com/bcvsolutions/CzechIdMng/commit/84085a39f6995f4c0d8ea42b544bb7c2823eb025

Doc:
https://wiki.czechidm.com/devel/documentation/application_configuration/dev/backend#scheduler

Could you give me feedback, please?

Note to the comment above: this is related to the blocking queue mechanism (https://howtodoinjava.com/java/multi-threading/how-to-use-blockingqueue-and-threadpoolexecutor-in-java/) - when the task queue is full (queueCapacity), new threads are added up to maxPoolSize and these threads are used for newly created LRTs => an LRT already in the queue is processed only when some running task ends (blocking queue mechanism). LRT order was never guaranteed, which is the reason why the dependent task trigger was implemented.
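
A small sketch of the ordering effect described above (plain JDK, not IdM code): with the single core thread busy and the queue full, a newly submitted task gets a brand-new worker thread and starts immediately, while an older task keeps waiting in the queue:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueueOrderingSketch {

    public static void main(String[] args) throws InterruptedException {
        // 1 core thread, 2 threads max, queue capacity 1
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                1, 2, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>(1));

        executor.execute(() -> run("task A - core thread, starts first"));
        executor.execute(() -> run("task B - queued, starts last"));
        executor.execute(() -> run("task C - new thread, starts before B"));

        executor.shutdown();
        executor.awaitTermination(1, TimeUnit.MINUTES);
    }

    private static void run(String name) {
        System.out.println("started: " + name);
        try {
            Thread.sleep(2_000); // simulate work
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}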

#6

Updated by Petr Fišer about 4 years ago

Radek Tomiška wrote:

Note to the comment above: this is related to the blocking queue mechanism (https://howtodoinjava.com/java/multi-threading/how-to-use-blockingqueue-and-threadpoolexecutor-in-java/) - when the task queue is full (queueCapacity), new threads are added up to maxPoolSize and these threads are used for newly created LRTs => an LRT already in the queue is processed only when some running task ends (blocking queue mechanism). LRT order was never guaranteed, which is the reason why the dependent task trigger was implemented.

Oh, got it. Thanks for pointing me in the right direction. :)

#7

Updated by Radek Tomiška about 4 years ago

  • Related to Task #2107: LRT: persist bulk action into long running task agenda added
#8

Updated by Vít Švanda about 4 years ago

  • Status changed from Needs feedback to Resolved
  • Assignee changed from Vít Švanda to Radek Tomiška
  • % Done changed from 90 to 100

I did a review and test. Rejection works correctly, thanks for this.

Maybe the RejectedExecutionException could be logged as an error in the catch block, because the current warning is easy to overlook.
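
A hedged sketch of that suggestion (class and field names are illustrative, not the actual product code): catch the RejectedExecutionException where the task is submitted and log it at error level so the rejection stands out:

import java.util.concurrent.Executor;
import java.util.concurrent.RejectedExecutionException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LrtSubmitterSketch {

    private static final Logger LOG = LoggerFactory.getLogger(LrtSubmitterSketch.class);

    private final Executor taskExecutor;

    public LrtSubmitterSketch(Executor taskExecutor) {
        this.taskExecutor = taskExecutor;
    }

    public void submit(Runnable longRunningTask) {
        try {
            taskExecutor.execute(longRunningTask);
        } catch (RejectedExecutionException ex) {
            // error level instead of warning, as suggested in the review
            LOG.error("Long running task [{}] was rejected, executor queue is full.",
                    longRunningTask, ex);
        }
    }
}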

#9

Updated by Radek Tomiška about 4 years ago

  • Status changed from Resolved to Closed