Scope
- API jobs are jobs that make use of the JobScheduler API and are executed with a JVM.
- Such jobs can be optimized with respect to the requirements and capabilities of the specific environment in which they are operated.
- This article explains the impact of quantity structure, resource consumption and parallelism on the overall performance.
- The explanations given in this article are intended to help users understand the key factors that impact performance and the possible measures to take.
- Performance optimization for individual environments is part of the SOS Consulting Services.
Performance Goals
- Performance is about the effective use of resources such as memory and CPU. Resources are shared across jobs: you can speed up the execution of certain processes, but you will have to accept a performance degradation for other processes.
- Performance optimization requires a clear goal. There is no such thing as an "overall speed improvement"; instead, performance improvements are oriented towards balancing the use of resources for specific use cases.
Performance Factors
- To begin with, let's define the exact meaning of the following terms as used in this article:
- low number: 1 - 2000 objects
- medium number: 2000 - 4000 objects
- high number: 4000 - 20000 objects
- Keep in mind the difference between a JobScheduler single instance environment and a distributed environment:
- In a single instance environment all job related objects are managed and executed within the same JobScheduler Master instance on a computer.
- In a distributed environment all job related objects are managed by the same JobScheduler Master instance, but are executed by distributed JobScheduler Agent instances at run-time, i.e. resource consumption is distributed across different computers.
- Consider the key factors for performance that include quantity structure, resource consumption and concurrency.
Quantity Structure
The quantity structure is about the number of job related objects in use:
- Number of jobs, job chains and orders
- This is about the number of job-related objects that are available in the system, regardless of whether or not they are running.
- JobScheduler has to track events for jobs, e.g. when to start and to stop jobs. Therefore a high number of job related objects creates some performance impact. Common scenarios used in enterprise level environments include up to 15000 jobs and 8000 job chains in a single JobScheduler instance.
- Number of job nodes
- This is about the number of jobs that are used in job nodes for job chains. Jobs can be re-used for any number of job chains.
- You could, e.g., operate 1000 job chains, each using 5 job nodes with individual jobs, which results in a total of 5000 individual jobs.
- You could, e.g., operate 100 individual jobs that are re-used in 1000 job chains, each job chain using an individual sequence of 5 out of the 100 jobs (see the sketch at the end of this section).
- The length of a job chain, i.e. the number of job nodes, is important:
- In the most common scenarios job chains with up to 30 job nodes are used.
- You can operate a single job chain with a high number of e.g. 4000 job nodes. In fact this will have some effect on performance as JobScheduler has to check predecessor and successor nodes for each job node.
- See change management issue JS-1566.
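As an illustration of re-use, here is a minimal sketch of two job chains that share the same jobs in their job nodes; all job and job chain names are assumptions chosen for this example:
<!-- Sketch: the jobs "extract" and "load" are re-used by both job chains; names are chosen for illustration. -->
<job_chain name="orders_daily">
    <job_chain_node state="100" job="extract" next_state="200"     error_state="error"/>
    <job_chain_node state="200" job="load"    next_state="success" error_state="error"/>
    <job_chain_node state="success"/>
    <job_chain_node state="error"/>
</job_chain>

<job_chain name="orders_weekly">
    <job_chain_node state="100" job="extract" next_state="200"     error_state="error"/>
    <job_chain_node state="200" job="load"    next_state="success" error_state="error"/>
    <job_chain_node state="success"/>
    <job_chain_node state="error"/>
</job_chain>
In this sketch 2 individual jobs serve 2 job chains with 2 job nodes each, i.e. the quantity structure counts 2 jobs, 2 job chains and 4 job nodes.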
Resource Consumption
Consider what resources are consumed when running jobs:
- System Resources are consumed depending on the nature of your jobs and include resources such as
- an instance of a JVM,
- storage, memory and CPU as required by the job implementation,
- individual resources such as access to objects in a database or file system.
- JobScheduler Resources are provided by the JobScheduler instance and are shared by jobs, such as
- objects and methods of the JobScheduler API that are served by the JobScheduler Master,
- Locks that are accessed to prevent or to restrict concurrent access to resources.
Concurrency
Concurrent access to resources has the potential to slow down performance. The key factors are:
- Total number of running jobs
- This number has less impact on JobScheduler than you might expect, but it affects the available resources like memory and CPU.
- Consider the information from the article How to determine the sizing of a JobScheduler environment for memory and CPU consumption.
- A common observation is that a system performs well as long as its capacity is not exhausted. Exceeding e.g. the memory limit of a computer will result in the operating system swapping memory and will cause unacceptable performance penalties.
- Synchronicity
- Multiple jobs accessing the same resources, e.g. shared Locks or objects in a database or file system, tend to cause delays.
- Analyze the resources used by your jobs, e.g. the use of exclusive vs. shared Locks or access to database tables, to identify possible bottlenecks that would force serialized execution of processes that are assumed to run in parallel. A sketch of a Lock configuration is shown below.
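As an illustration, here is a minimal sketch of a Lock declaration and its use by two jobs; the Lock name, job names and Java classes are assumptions chosen for this example:
<!-- Sketch: a Lock that controls access to a shared database table; names are chosen for illustration. -->
<lock name="customer_table"/>

<!-- This job requires exclusive access and blocks other users of the Lock while it is running. -->
<job name="update_customers">
    <lock.use lock="customer_table" exclusive="yes"/>
    <script language="java" java_class="com.example.UpdateCustomersJob"/>
</job>

<!-- This job accepts non-exclusive access and can run in parallel with other non-exclusive users of the Lock. -->
<job name="report_customers">
    <lock.use lock="customer_table" exclusive="no"/>
    <script language="java" java_class="com.example.ReportCustomersJob"/>
</job>
Exclusive Locks of this kind serialize execution, so they should only be used where concurrent access would otherwise cause conflicts.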
API Usage
The number of API calls to the JobScheduler Master affects the overall performance.
Performance Optimization
Parallelism
JobScheduler is designed for parallelism as the most effective means to improve performance.
Order Queue
- When using the order queue setting <job_chain max_orders="number"> you restrict the number of parallel orders in a job chain to number. Any additional orders will be queued until a previous order has completed the job chain (see the sketch at the end of this list).
- By default this attribute is not effective, which allows an unlimited number of parallel orders in a job chain.
- When using the value <job_chain max_orders="1"> this results in strict serialization of orders: the next order will enter the job chain only after the previous order has completed the final node of the job chain.
- The recommendation is not to use the order queue setting. Better performance is achieved by enabling orders to be processed in parallel. Consider use of this setting if
- you have to restrict the number of tasks that are running in a job chain.
- your business requirements force orders to be serialized, which is a somewhat contradictory requirement with respect to performance.
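As an illustration, here is a minimal sketch of a job chain that limits the number of parallel orders to 10; the job chain and job names are assumptions chosen for this example:
<!-- Sketch: at most 10 orders run in this job chain in parallel; additional orders are queued. -->
<job_chain name="sample_chain" max_orders="10">
    <job_chain_node state="100" job="import"  next_state="200"     error_state="error"/>
    <job_chain_node state="200" job="process" next_state="success" error_state="error"/>
    <job_chain_node state="success"/>
    <job_chain_node state="error"/>
</job_chain>
Omitting the max_orders attribute allows an unlimited number of parallel orders, which is generally the preferable configuration with respect to performance.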
Parallel Tasks
By default JobScheduler executes a job in a single task. Consider enabling parallel tasks for the following scenarios:
- Parallelized Orders
- A job chain can hold a high number of orders. This number is limited by the above-mentioned max_orders attribute of the job chain order queue.
- Consider allowing e.g. 5 tasks to be started in parallel for a job in a job node. This enables JobScheduler to run 5 orders in parallel for the respective job node.
- By default only 1 task is started for a job node, with the result that orders waiting for that job node are queued for later execution.
- Re-used jobs
- If individual jobs are re-used in multiple job nodes of different job chains then multiple tasks can be started in parallel for instances of these job nodes.
- By default only 1 task is started and any additional orders in other job chains would have to wait for this task to become free.
The syntax to enable multiple tasks for a job is <job tasks="number"/> where number is an integer between 1 and the maximum number of allowed tasks. A sketch is shown at the end of this section.
- The effective number of tasks that are started is restricted by process classes, see below.
Consider use of resources for parallel tasks:
- For long-running jobs it might be preferable to have a higher number of tasks.
- For short-running jobs it might be more efficient to have a lower number of tasks as the orders would pass the job node quickly anyway and would not have to wait. However, if your system provides sufficient resources then high parallelism of tasks is the recommended measure.
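Here is a minimal sketch of a job that allows parallel tasks; the job name, the Java class and the value of 5 tasks are assumptions chosen for this example:
<!-- Sketch: up to 5 tasks can be started in parallel for this job, e.g. to process 5 orders in parallel in a job node. -->
<job name="process" tasks="5">
    <script language="java" java_class="com.example.ProcessJob"/>
</job>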
Process Classes
Process classes restrict the number of parallel tasks for jobs.
- A default process class that enforces a maximum of 30 parallel tasks is active with the JobScheduler installation. The same default value applies if no process class setting is used. The configuration item is available in ./config/scheduler.xml.
- Modify this value to a higher number of tasks for jobs that are not assigned an individual process class.
- Modifications to ./config/scheduler.xml become active after a JobScheduler restart.
- Consider use of individual process classes (see the sketch at the end of this section)
- You can create any number of process classes, e.g. by use of the JOE editor, and assign them individually to your jobs.
- The syntax for assignment is <job process_class="name"/> where name is the name or path of your process class.
- This allows groups of jobs to be managed individually, with each group being guaranteed the number of tasks specified by the assigned process class.
- Process limits from individual process classes are independent of the limit of the above-mentioned default process class.
- Individual process class configurations become active immediately when stored to a hot folder.
- Process class limit
- When the maximum number of processes of a process class is exceeded, jobs will have to wait until a process becomes free.
- Without the max_processes attribute in <process_class max_processes="number"/> being used, no limit applies to the number of parallel tasks. This is usually a bad idea as every system has some resource limit for the execution of parallel tasks. It might be preferable to queue tasks that exceed the process class limit than to have the operating system take measures under heavy load, e.g. by memory swapping.
- The same behavior applies to jobs that are executed with the JobScheduler Master and with Agents. Jobs that are shared for execution with multiple Agents respect the limit imposed by the process class, with the tasks being technically executed on the Agent computer. For use with Agents the max_processes attribute should therefore reflect the resources of the computer on which the Agent is operated.
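Here is a minimal sketch of an individual process class and its assignment to a job; the names, the Java class and the limits are assumptions chosen for this example:
<!-- Sketch: this process class limits the jobs assigned to it to 10 parallel tasks in total. -->
<process_class name="batch_processing" max_processes="10"/>

<!-- The job is assigned the process class; its tasks count towards the limit of 10 processes. -->
<job name="extract" process_class="batch_processing" tasks="5">
    <script language="java" java_class="com.example.ExtractJob"/>
</job>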
Pre-loading and Re-use of Tasks
API jobs make use of a JVM that is loaded for each task start. This is an expensive operation that can be optimized as follows:
- Pre-loading (see the sketch at the end of this section)
- Tasks can be pre-loaded by use of the <job min_tasks="number"/> attribute where number specifies the number of tasks that are pre-loaded.
- Such tasks are loaded and keep running during the lifetime of JobScheduler. They do not consume CPU resources when idle. The tasks are re-used immediately for an incoming order and execute the spooler_process() method of the job implementation.
- Tasks that are pre-loaded can be configured to restart after expiration of the idle timeout setting, see below. This is effected by use of the <job force_idle_timeout="yes/no"/> attribute that forces a restart after the timeout set by the <job idle_timeout="duration"/> attribute.
- Pre-loading comes with the disadvantage that the task log contains the log information of all orders that have been processed during the task's lifetime. Such logs might grow to a considerable size and are written to the database only when the task is terminated, which might slow down this operation.
- Re-use (see the sketch at the end of this section)
- Tasks can be configured for re-use by the <job idle_timeout="duration"/> attribute where duration can be specified in seconds or in the HH:MM or HH:MM:SS formats. The default value for the idle_timeout attribute is 5s.
- The idle timeout lets a task continue for the specified duration after the processing of an order. Should the next order enter the job node within the duration of the idle timeout then the task is re-used, otherwise the task is terminated.
- This setting is frequently used if pre-loading of a high number of tasks would consume too many system resources (memory) and the expected scenario is that multiple orders will arrive in parallel for the job node.
- It is not recommended to suppress the idle timeout by use of <job idle_timeout="0"/> as this would result in immediate termination of the task after processing of the current order should no additional orders be waiting for this job node. An order proceeds immediately to the next job node in a job chain regardless of whether the task is continued; therefore this setting does not affect the time consumption of an order that leaves the job node, but it improves the performance for the next orders that enter the job node.
- Resource Consumption
- Pre-loading and re-use of tasks saves the effort of loading the JVM, which can take 1s or more depending on the system performance.
- The memory allocated to the JVM will be retained during the lifetime of the task.
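Here is a minimal sketch combining pre-loading and re-use settings for two jobs; the job names, Java classes and values are assumptions chosen for this example:
<!-- Sketch: 3 tasks are pre-loaded and kept running; they are restarted after the idle timeout of 10 minutes expires. -->
<job name="import" min_tasks="3" tasks="3" idle_timeout="00:10" force_idle_timeout="yes">
    <script language="java" java_class="com.example.ImportJob"/>
</job>

<!-- Sketch: no tasks are pre-loaded, but a task is kept for 60 seconds after processing an order and is re-used for the next order entering the job node. -->
<job name="process" tasks="5" idle_timeout="60">
    <script language="java" java_class="com.example.ProcessJob"/>
</job>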
Memory Consumption
- Memory for API jobs is allocated on task start. The behavior of JVM memory allocation has changed between Java versions.
- Consider the setting <job java_options="options"/> where options can include memory settings such as -Xms24m -Xmx24m for 24MB of memory for the JVM. A sketch is shown below.
- See How to manage the Java heap space
- See How to increase or decrease the Java heap space
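Here is a minimal sketch of a job with individual Java options for its JVM; the job name, the Java class and the heap sizes are assumptions chosen for this example:
<!-- Sketch: the JVM of each task of this job is started with a 64MB heap. -->
<job name="report" java_options="-Xms64m -Xmx64m">
    <script language="java" java_class="com.example.ReportJob"/>
</job>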
Performance Measurement
- Measurement of Tasks vs. Orders
- The default behavior for jobs in a job chain is to continue the task for the respective job for another 5 seconds after the order has passed the job node (idle timeout).
- This behavior is intended as an optimization that allows the same task (system process) to be re-used for the next order entering the job node.
- Therefore it is pointless to measure the duration of individual tasks; instead, the time consumption of orders has to be considered, i.e. the time required to pass an individual job node or to complete the job chain.
- Use of Profilers
- For Java API jobs the use of profilers is a proper means to check time consumption of an individual job execution.
- However, such tools are often unable to cope with the complexity of parallel processes in a system.
- Last but not least such tools cause an impact of their own on performance.
- Therefore we recommend using profilers for frequency analysis of code in individual job implementations, but not for measurement of JobScheduler performance.
- Recommendations
- For performance measurement use the timestamps provided from the JobScheduler database for orders:
- table SCHEDULER_ORDER_HISTORY: columns START_TIME and END_TIME provide the duration required to complete the job chain
- table SCHEDULER_ORDER_STEP_HISTORY: columns START_TIME and END_TIME provide the duration required to complete the respective job node.
- In addition you can use your own logging by use of the spooler_log.info() method that is available from the JobScheduler API, or by logging timestamps to individual files.
- For performance measurement use the timestamps provided from the JobScheduler database for orders:
References
Change Management References
Product Knowledge Base References
- How to determine the sizing of a JobScheduler environment
- How to manage the Java heap space
- How to increase or decrease the Java heap space
Reference Documentation
- XML Element <job> (Configuration)
- XML Element <job_chain> (Configuration)
- XML Element <process_class> (Configuration)
More information on Consulting Services is available from the company web site.