Introduction
Consider the situation where files belonging to different categories (i.e. groups of files sharing characteristics such as type or source) can be processed in parallel, but all the files within any one category have to be processed sequentially.
For example, the centralised data processing of a company receives stock movement reports from subsidiaries at regular intervals. Whilst the reports from different subsidiaries can be processed simultaneously, the reports from each subsidiary have to be processed sequentially.
The 'Best Practice' Solution
Our recommended approach for this situation would be:
- Use a single 'load_files' File Order Source directory to which all files are delivered.
- JobScheduler uses regular expressions to identify the files arriving in this directory on the basis of their names, timestamps or file extensions and forwards them for processing accordingly.
- JobScheduler then sets a 'lock' for the subsidiary whose file is being processed, to prevent further files from this subsidiary being processed as long as processing continues. Should a 'new' file from this subsidiary arrive whilst its predecessor is being processed, the job 'receiving' the new file will be 'set back' by JobScheduler for as long as the lock for the subsidiary is set.
- This lock is released by JobScheduler once processing of the 'first' file has been completed.
- The job 'receiving' the new file is then able to forward the new file for processing.
The Solution in Detail
JobScheduler starts processing as soon as a file matching the regular expression is found in the directory.
This directory is set in the "File Order Sources" area in the "Steps/Nodes" view of the "load_files" job chain as shown in the screenshot below:
[Screenshot: the 'File Order Sources' configuration in the 'Steps/Nodes' view of the 'load_files' job chain]
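In the job chain XML itself this configuration corresponds to a file_order_source element. The following is a minimal sketch only: the directory and the regular expression shown here are illustrative assumptions, the actual values are those found in the demo's load_files.job_chain.xml:

    <job_chain name="load_files">
        <!-- All subsidiaries deliver into the same directory; the regex
             determines which file names create orders (values assumed) -->
        <file_order_source directory="C:\sandbox\in" regex="^(Berlin|Munich).*$"/>
        <!-- job_chain_node elements follow (see the sketch at the end
             of this article) -->
    </job_chain>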
The lock is released again by the 'release_lock' job, which removes the lock for the relevant category (here 'BERLIN_PROC') once processing has been completed:

    function spooler_process() {
        try {
            var lock_name = "BERLIN_PROC";
            spooler.locks().lock(lock_name).remove();
            return true;
        } catch (e) {
            spooler_log.warn("error occurred: " + String(e));
            return false;
        }
    }
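The 'aquire_lock' counterpart is not reproduced here; a minimal sketch of how such a job might look is shown below. The derivation of the category from the file name and the use of lock_or_null() are illustrative assumptions - the demo's actual implementation may differ:

    function spooler_process() {
        try {
            var order = spooler_task.order();
            // File orders carry the path of the triggering file in this parameter
            var file_path = String(order.params().value("scheduler_file_path"));
            // Assumption for illustration: the category is encoded in the file name
            var lock_name = (file_path.toUpperCase().indexOf("BERLIN") >= 0)
                ? "BERLIN_PROC" : "MUNICH_PROC";
            if (spooler.locks().lock_or_null(lock_name) != null) {
                // A file of this category is already being processed:
                // set the order back so that it is retried later
                order.setback();
                return false;
            }
            // Set the 'semaphore': create the lock that release_lock removes
            var lock = spooler.locks().create_lock();
            lock.set_name(lock_name);
            spooler.locks().add_lock(lock);
            return true;
        } catch (e) {
            spooler_log.warn("error occurred: " + String(e));
            return false;
        }
    }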
Limitations of this solution
The limitation of this approach becomes apparent if more than one file arrives from a subsidiary at the same time.
This is because there is no guarantee that files arriving together will be processed in any given order. This situation typically occurs after an unplanned loss of a file transfer connection. After the connection has been restored, there is a) no guarantee that the files arriving as a batch will be written to the file system in any particular order and b) no way for JobScheduler to know how many files will arrive as a batch.
One solution here would be to wait until a steady state in the incoming directory has been reached (i.e. no new file has been added and the size of all files remains constant over a suitable period of time) before starting to identify files. JobScheduler could then order files according to their names before forwarding them for processing.
The downside of this second approach is that it delays the start of processing, as JobScheduler has to keep checking for a steady state before it can be sure that all the members of a batch of files have arrived.
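The demo does not implement such a check. Purely as an illustration, a steady-state test along the following lines could be run by a pre-processing job. This is a minimal sketch: the function name, the fixed interval and the use of Java interop (which is available to JobScheduler's JavaScript jobs) are assumptions, not part of the demo:

    // Hypothetical helper, not part of the demo: returns true once the
    // directory content is 'steady', i.e. the same file names with the
    // same sizes were seen on two consecutive checks.
    function directory_is_steady(dir_path, interval_ms) {
        function snapshot() {
            var result = {};
            var files = new java.io.File(dir_path).listFiles();
            for (var i = 0; i < files.length; i++) {
                result[String(files[i].getName())] = files[i].length();
            }
            return result;
        }
        var before = snapshot();
        java.lang.Thread.sleep(interval_ms);   // wait before re-reading
        var after = snapshot();
        for (var name in after) {
            if (!(name in before) || before[name] != after[name]) {
                return false;    // a file is new or still growing
            }
        }
        return true;             // no additions, no size changes
    }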
Solution Demo
A demonstration of this solution is available for download from:
Demo Installation
- Unpack the zip file to a local directory.
- Copy the 'SQLLoaderProc' folder to your JobScheduler 'live' folder.
- Copy the 'Data' folder to a suitable local location. The default location for this folder, which is specified in the configurations of the demo jobs, is C:\sandbox.
Note that the following paths have to be modified if the location of the 'Data' folder is changed:
- The 'load_files' File Order Source directories in the load_files.job_chain.xml job chain object
- The 'source_file' and 'target_file' paths specified as parameters in the move_file_suc.job.xml and move_file_error.job.xml objects
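For orientation, the parameters concerned appear in the job XML roughly as follows. This is a sketch only: the values shown are the assumed defaults under C:\sandbox, not the verified contents of the demo files:

    <job name="move_file_suc">
        <params>
            <!-- Both paths have to be adjusted if the 'Data' folder
                 is moved (values below are assumptions) -->
            <param name="source_file" value="C:\sandbox\in"/>
            <param name="target_file" value="C:\sandbox\done"/>
        </params>
        <!-- remainder of the job definition omitted -->
    </job>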
Running the demo
- Just copy files from the 'Data/__test-files' folder to the 'in' folder.
- JobScheduler will automatically start processing within a few seconds.
- Once processing has been completed the file(s) added to the 'in' folder will be moved to the 'done' or 'failed' folders, depending on whether processing was successful or not.
- DO NOT attempt to start an order for the job chain. This will only cause an error in the 'aquire_lock' job.
How does the Demo Work?
[Diagram: the demo 'load_files' job chain]
- JobScheduler starts as soon as a file matching the regular expression is found in the File Order Source directory.
- JobScheduler's 'aquire_lock' job matches the file against a regular expression to decide the file's category, i.e. Berlin or Munich.
- Once 'aquire_lock' has found the matching category, it tries to set a semaphore (flag) using JobScheduler's built-in LOCK mechanism.
- Only one instance of the LOCK is allowed: once the LOCK has been assigned to the first file of the Berlin category, the next Berlin file has to wait (it is set back) until the LOCK is free.
- The same mechanism applies to files from the Munich category; as long as the Munich LOCK (semaphore) has not been acquired, a file from the Munich category is allowed to be processed in parallel.
- Once processing has finished, JobScheduler moves the file from the 'in' folder to either the 'done' folder (on success) or the 'failed' folder (on error).
- After the input file has been moved to the correct target directory, the 'release_lock' job is called; this removes the lock/semaphore so that the next file from the same category can be processed. A sketch of the complete chain is shown below.
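Putting these steps together, the demo job chain could be declared along the following lines. This is an assumption-based sketch: apart from the 'aquire_lock' and 'release_lock' jobs and the 'move_file_suc'/'move_file_error' objects named in this article, the node states, the 'load_file' job name and the error routing are illustrative:

    <job_chain name="load_files">
        <!-- file order source as shown earlier in this article -->
        <file_order_source directory="C:\sandbox\in" regex="^(Berlin|Munich).*$"/>
        <!-- acquire the category lock, or set the order back -->
        <job_chain_node state="aquire_lock" job="aquire_lock"     next_state="load"     error_state="error"/>
        <!-- process the file (job name assumed) -->
        <job_chain_node state="load"        job="load_file"       next_state="move_suc" error_state="move_err"/>
        <!-- move the file to the 'done' or 'failed' folder -->
        <job_chain_node state="move_suc"    job="move_file_suc"   next_state="release"  error_state="error"/>
        <job_chain_node state="move_err"    job="move_file_error" next_state="release"  error_state="error"/>
        <!-- remove the lock so the next file of the category can run -->
        <job_chain_node state="release"     job="release_lock"    next_state="success"  error_state="error"/>
        <job_chain_node state="success"/>
        <job_chain_node state="error"/>
    </job_chain>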
See also:
- Our Using locks FAQ.
- The Locks section in the JobScheduler reference documentation.
- Our Best Practice FAQ.