...
- Dockerfile
- Start Script start_jobscheduler_agent.sh
- JobScheduler Agent tarball as available from SOS for download
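The scripts shown below follow a naming convention: the name of the working directory doubles as the image name and a build sub-folder holds the Docker build context. A possible layout (the directory name agent-1-13-4445 is an example, taken from the image name used in this article):

```text
agent-1-13-4445/                    <- directory name doubles as the image name
├── build.sh                        <- build command
├── run.sh                          <- run command
├── stop.sh                         <- stop command
├── data/                           <- created by run.sh, mount point for the Agent's run-time directory
└── build/                          <- Docker build context
    ├── Dockerfile
    └── start_jobscheduler_agent.sh
```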
Dockerfile
Download: Dockerfile
```dockerfile
FROM openjdk:8

LABEL maintainer="Software- und Organisations-Service GmbH"
# default user id has to match later run-time user
ARG USER_ID=$UID

# provide build arguments for release information
ARG JS_MAJOR=1.13
ARG JS_RELEASE=1.13.3-SNAPSHOT

# setup working directory
RUN mkdir -p /var/sos-berlin.com
WORKDIR /var/sos-berlin.com

# add and extract tarball
ADD https://download.sos-berlin.com/JobScheduler.${JS_MAJOR}/jobscheduler_unix_universal_agent.${JS_RELEASE}.tar.gz /usr/local/src/
RUN test -e /usr/local/src/jobscheduler_unix_universal_agent.${JS_RELEASE}.tar.gz && \
    tar xfvz /usr/local/src/jobscheduler_unix_universal_agent.${JS_RELEASE}.tar.gz && \
    rm /usr/local/src/jobscheduler_unix_universal_agent.${JS_RELEASE}.tar.gz

# add https keystore and private configuration file
# COPY private-https.p12 /var/sos-berlin.com/jobscheduler_agent/var_4445/config/private/
# COPY private.conf /var/sos-berlin.com/jobscheduler_agent/var_4445/config/private/

# create directories for file watching
# RUN mkdir -p /var/sos-berlin.com/files/incoming && \
#     mkdir -p /var/sos-berlin.com/files/success && \
#     mkdir -p /var/sos-berlin.com/files/error

# make default user the owner of directories
RUN groupadd --gid ${USER_ID:-1000} jobscheduler && \
    useradd --uid ${USER_ID:-1000} --gid jobscheduler --home-dir /home/jobscheduler --no-create-home --shell /bin/bash jobscheduler && \
    chown -R jobscheduler:jobscheduler /var/sos-berlin.com

# copy and prepare start script
COPY start_jobscheduler_agent.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/start_jobscheduler_agent.sh

# expose volume for storage persistence
VOLUME /var/sos-berlin.com/jobscheduler_agent/var_4445
# expose volume for file watching
# VOLUME /var/sos-berlin.com/files

# allow incoming traffic to port
EXPOSE 4445

# run-time user, can be overwritten when running the container
USER jobscheduler

CMD ["/usr/local/bin/start_jobscheduler_agent.sh"]
```
- Explanations
- Line 1: We start from an OpenJDK image that includes JDK 8. Newer Java versions can be used, see Which Java versions is JobScheduler available for?
- Line 5: Consider that $UID provides the numeric ID of the account for which the JobScheduler Agent installation inside the Docker container is performed. This numeric ID typically starts at 1000 and should correspond to the account that is used on the Docker host, i.e. the account on the Docker host and the account inside the container should use the same numeric ID. This mechanism simplifies exposure of the Docker container's file system.
- Lines 8-9: Adjust the JobScheduler release number as required; the build command sketch after this list shows how to override these values with --build-arg.
- Lines 16-19: The Agent tarball is downloaded and extracted to the container.
- Lines 22-23: Optionally a keystore file with an SSL private key and public certificate can be provided for use of the Agent with the HTTPS protocol.
- Lines 26-28: Optionally directories are created that are later mounted from the Docker host for file watching by the Agent.
- Lines 31-33: An account and a group "jobscheduler" are created and handed ownership of the installed files.
- Lines 36-37: The start script is copied to the container, see the Start Script chapter below.
- Line 40: The Agent's run-time directory is exposed as a volume for a later mount to the Docker host.
- Line 42: Optionally the file watching directory is exposed as a volume for a later mount to the Docker host.
- Line 45: Port 4445 is exposed for later mapping. This port is used for the connection between JobScheduler Master and Agent.
- Line 48: The account "jobscheduler" that owns the installation is set as the run-time user. This account should be mapped at run-time to the account on the Docker host that will mount the exposed volume.
- Line 50: The start script is executed to launch the JobScheduler Agent daemon.
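The build arguments from lines 8-9 can also be overridden when building the image, without editing the Dockerfile. A minimal sketch, assuming the specified release number matches a tarball that is available from the SOS download server:

```bash
# override the default release information at build time; the release number
# used here is an example and has to match an available download
docker build \
    --tag=agent-1-13-4445 \
    --file=./build/Dockerfile \
    --build-arg="JS_MAJOR=1.13" \
    --build-arg="JS_RELEASE=1.13.3" \
    --build-arg="USER_ID=$UID" \
    ./build
```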
Start Script
The start script for Agents is straightforward.
Download: start_jobscheduler_agent.sh
./build/start_jobscheduler_agent.sh:

```bash
#!/bin/sh

JS_HOSTNAME="`hostname`"

/var/sos-berlin.com/jobscheduler_agent/bin/jobscheduler_agent.sh start -http-port=4445 && tail -f /dev/null

# start Agent for http port on localhost interface and for https port on network interface
# /var/sos-berlin.com/jobscheduler_agent/bin/jobscheduler_agent.sh start -http-port=localhost:4445 -https-port=$JS_HOSTNAME:4445 && tail -f /dev/null
```
- Explanations
- Line 5: The standard start script jobscheduler_agent.sh is used. The tail command prevents the start script from terminating in order to keep the container alive.
- Line 8: Optionally the Agent can be started for use with the HTTPS protocol on a network interface and the HTTP protocol on the localhost interface; see the variant sketch after this list.
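As a variant, the choice between the two start modes could be driven by an environment variable. A minimal sketch, assuming a hypothetical JS_HTTPS switch that is not part of the original start script:

```bash
#!/bin/sh
# variant start script; the JS_HTTPS switch is a hypothetical addition,
# not part of the original start_jobscheduler_agent.sh

JS_HOSTNAME="`hostname`"

if [ -n "$JS_HTTPS" ]
then
    # https port on the network interface, http port restricted to the localhost interface
    /var/sos-berlin.com/jobscheduler_agent/bin/jobscheduler_agent.sh start -http-port=localhost:4445 -https-port=$JS_HOSTNAME:4445
else
    /var/sos-berlin.com/jobscheduler_agent/bin/jobscheduler_agent.sh start -http-port=4445
fi

# prevent the script from terminating in order to keep the container alive
tail -f /dev/null
```

The JS_HTTPS switch could then be set with the --env option of the docker run command.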
Build Command
There are a number of ways to write a build command.
A typical build command could look like this:
./build.sh:

```bash
#!/bin/sh

set -e

SCRIPT_HOME=$(dirname "$0")
SCRIPT_HOME="`cd \"${SCRIPT_HOME}\" >/dev/null && pwd`"
IMAGE_NAME="$(basename "$SCRIPT_HOME")"

docker build --no-cache --rm --tag=$IMAGE_NAME --file=./build/Dockerfile --network=js --build-arg="USER_ID=$UID" ./build
```
- Explanations
- Line 7: The script applies the naming convention that the name of the current directory provides the image name.
- Line 9: The script assumes a sub-folder build to represent the Docker build context.
- Using a common network for JobScheduler components allows direct access to resources such as ports within the network; the sketch after this list shows how to create the network.
- Consider use of the --build-arg option that injects the USER_ID build argument into the image with the numeric ID of the account running the build command. This simplifies later access to the volume that can optionally be exposed by the Dockerfile, as the same numeric user ID and group ID are used inside and outside of the container.
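The user-defined network assumed by the scripts has to exist before it is referenced. A sketch, using the network name js from the build command above:

```bash
# create a common Docker network once, then build the image
docker network create js
./build.sh
```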
...
A typical run command could look like this:
./run.sh:

```bash
#!/bin/sh

set -e

SCRIPT_HOME=$(dirname "$0")
SCRIPT_HOME="`cd \"${SCRIPT_HOME}\" >/dev/null && pwd`"
IMAGE_NAME="$(basename "$SCRIPT_HOME")"
RUN_USER_ID="$(id -u $USER):$(id -g $USER)"

mkdir -p $SCRIPT_HOME/data

docker run -dit --rm --user=$RUN_USER_ID --hostname=$IMAGE_NAME --network=js --publish=5445:4445 --volume=$SCRIPT_HOME/data:/var/sos-berlin.com/jobscheduler_agent/var_4445:Z --name=$IMAGE_NAME $IMAGE_NAME
```
- Explanations
- Using a common Docker network with the --network option for JobScheduler components allows direct access to resources such as ports within the Docker network.
- The RUN_USER_ID variable is populated with the numeric IDs of the account and the group that execute the run command. This value is assigned to the --user option in order to inject the account information into the container (replacing the account specified with the USER jobscheduler instruction in the Dockerfile).
- Port 4445 for access to the JobScheduler Agent by a Master can optionally be mapped to some outside port. This is not required if a Docker network is used.
- Specify a data directory to be created that is referenced with the --volume option to expose the run-time directory of the JobScheduler Agent.
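After running the container it can be verified that the Agent is up, e.g. by checking the container status and by inspecting the Agent's log files in the mounted data directory. A sketch, assuming the run command above was executed from the image directory:

```bash
# start the container
./run.sh

# check that the container is up
docker container ps --filter "name=$(basename "$PWD")"

# the Agent's run-time directory is mounted to ./data, therefore the
# Agent's log files are directly visible on the Docker host
ls -l ./data/logs
```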
Stop
There are a number of ways to terminate an Agent and its container.
A typical stop command could look like this:
./stop.sh:

```bash
#!/bin/sh

set -e

SCRIPT_HOME=$(dirname "$0")
SCRIPT_HOME="`cd \"${SCRIPT_HOME}\" >/dev/null && pwd`"
IMAGE_NAME="$(basename "$SCRIPT_HOME")"
JS_ACTION="stop"

for option in "$@"
do
  case "$option" in
    -kill)   JS_ACTION="kill"
             ;;
    -abort)  JS_ACTION="abort"
             ;;
  esac
done

JS_CONTAINER=$(docker container ps -a -q --filter "ancestor=$IMAGE_NAME")

if [ -z "$JS_CONTAINER" ]
then
  echo ".. container not running: $IMAGE_NAME"
  exit 0
fi

# iterate over the whitespace-separated list of matching container ids
for f in $JS_CONTAINER; do
  echo ".. stopping Agent ($JS_ACTION): $f"
  docker container exec $f /bin/sh -c "/var/sos-berlin.com/jobscheduler_agent/bin/jobscheduler_agent.sh $JS_ACTION"

  echo ".. stopping container ($JS_ACTION): $f"
  if [ "$JS_ACTION" = "stop" ]
  then
    docker container stop $f
  else
    docker container kill $f
  fi
done
```
- Explanations
- Before stopping the container, the Agent daemon is terminated by use of its start script. This offers the following modes for termination:
  - stop: terminate the Agent normally and wait for any running tasks to complete.
  - abort: kill any running tasks and terminate the Agent abnormally.
  - kill: kill the Agent process and any running tasks immediately.
- Only after the Agent daemon has terminated can the container be stopped or killed. See the usage sketch after this list for the available switches.
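Usage of the stop script then looks like this; the switches map to the termination modes listed above:

```bash
# terminate the Agent normally, wait for running tasks, then stop the container
./stop.sh

# kill any running tasks, terminate the Agent abnormally, then kill the container
./stop.sh -abort

# kill the Agent process and any running tasks immediately, then kill the container
./stop.sh -kill
```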