Scope
- JOC Cockpit is operated in a Docker container.
- Prerequisites
- The JOC Cockpit requires a database that can be available on any physical or virtual host or in a Docker container, see the sketch below.
- Consider preparing the files indicated in the Build chapter.
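The later chapters assume a MySQL database that is reachable as host "mysql-5-7" in a Docker network "js". For orientation, the database container could be provided along the following lines; a minimal sketch, assuming the official mysql image, with placeholder credentials:
#!/bin/sh

# create a common network for JobScheduler components
docker network create js

# run the MySQL database container; the hostname "mysql-5-7" matches
# the database connection example in the Installer Response File chapter
docker run -d --network=js --hostname=mysql-5-7 --name=mysql-5-7 \
  -e MYSQL_ROOT_PASSWORD=secret \
  -e MYSQL_DATABASE=jobscheduler \
  -e MYSQL_USER=jobscheduler \
  -e MYSQL_PASSWORD=jobscheduler \
  mysql:5.7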
Build
The following files are required for the build context:
- Dockerfile
- Installer Response File joc_install.xml with individual installation settings. A template of this file is available when extracting the installer tarball.
- Start Script start_joc.sh
- JOC Cockpit installer tarball as available from SOS for download.
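Assuming the ./build directory that the build command below points to, the build context could be laid out as follows; the tarball file name is a placeholder that depends on the release:
./build
├── Dockerfile
├── joc_install.xml
├── start_joc.sh
└── joc_linux.1.13.2.tar.gz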
Dockerfile
Download: Dockerfile
- Explanations
- Line 1: We start from an Alpine image that includes JDK 8. Newer Java versions can be used, see Which Java versions is JobScheduler available for?
- Line 5: Consider that $UID provides the numeric ID of the account for which the JOC Cockpit installation inside the Docker container is performed. This numeric ID typically starts above 1000 and should correspond to the account that is used on the Docker host, i.e. the account on the Docker host and the account inside the container should use the same numeric ID. This mechanism simplifies exposure of the Docker container's file system.
- Line 8-9: Adjust the JobScheduler release number as required.
- Line 12-16: The installer tarball is copied to the container and extracted.
- Line 22-23: The installer response file is copied to the container; for details of this file see the next chapter. Then the installer is executed for the current user.
- Line 26-28: An account and group "jobscheduler" are created and handed over ownership of the installed files.
- Line 31-32: The start script is copied to the container, see the Start Script chapter below.
- Line 38: Port 4446 is exposed for later mapping. This port is used for the connection between user browsers and JOC Cockpit.
- Line 41: The account "jobscheduler" that owns the installation is set for later mapping. At run time this account should be mapped to the account on the Docker host that mounts an optionally exposed volume.
- Line 43: The start script is executed to launch the JOC Cockpit daemon.
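Putting the explained steps together, the Dockerfile is structured roughly as follows. This is a minimal sketch, not the downloadable file: the base image tag, the tarball and path names and the setup.sh invocation are assumptions, and the line numbers differ from the explanations above.
# sketch only: base image, file names and paths are assumptions
FROM openjdk:8-jdk-alpine

# numeric ID of the build account, injected via --build-arg (see Build Command)
ARG USER_ID=1001

# adjust the JobScheduler release number as required
ARG JS_RELEASE=1.13.2

# copy the installer tarball to the container and extract it (placeholder file name)
COPY joc_linux.${JS_RELEASE}.tar.gz /usr/local/src/
RUN cd /usr/local/src && tar xzf joc_linux.${JS_RELEASE}.tar.gz

# copy the installer response file and run the installer for the current user
COPY joc_install.xml /usr/local/src/joc.${JS_RELEASE}/
RUN cd /usr/local/src/joc.${JS_RELEASE} && ./setup.sh -u joc_install.xml

# create the "jobscheduler" account and group and hand over ownership
RUN addgroup -g ${USER_ID} jobscheduler && \
    adduser -u ${USER_ID} -G jobscheduler -D jobscheduler && \
    chown -R jobscheduler:jobscheduler /opt/sos-berlin.com /var/log/sos-berlin.com

# copy the start script, see the Start Script chapter
COPY start_joc.sh /usr/local/bin/start_joc.sh
RUN chmod 755 /usr/local/bin/start_joc.sh

# port for the connection between user browsers and JOC Cockpit
EXPOSE 4446

# account that owns the installation, replaceable at run time via --user
USER jobscheduler

# launch the JOC Cockpit daemon
CMD ["/usr/local/bin/start_joc.sh"]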
Installer Response File
Download: joc_install.xml
- Explanations
- The above installer response file works for release 1.13. Other releases ship with different versions of this file. You should pick up a template of this file that matches your JobScheduler release by extracting the installer tarball.
- Generally all defaults of the response file can be maintained.
- This includes use of port 4446 for the connection of user browsers to JOC Cockpit. At run-time this port can be mapped, see Dockerfile.
- Line 182-233: The database connection makes use of a hostname "mysql-5-7" that is assumed to be the hostname of a Docker container in the same Docker network running the MySQL database.
- Modify the database connection settings as required for use with your DBMS and access credentials, e.g. along the lines of the fragment below.
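For orientation, the database connection fragment of the response file could look similar to the following; a sketch with placeholder values matching the MySQL container example above. Verify the key names against the template of your release:
<!-- database connection settings (joc_install.xml, sketch with placeholder values) -->
<entry key="databaseDbms" value="mysql"/>
<entry key="databaseHost" value="mysql-5-7"/>
<entry key="databasePort" value="3306"/>
<entry key="databaseSchema" value="jobscheduler"/>
<entry key="databaseUser" value="jobscheduler"/>
<entry key="databasePassword" value="jobscheduler"/>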
Start Script
Download: start_joc.sh
#!/bin/sh

/opt/sos-berlin.com/joc/jetty/bin/jetty.sh start && tail -f /dev/null
- Explanations
- Line 3: The standard start script jetty.sh is used. The tail command prevents the start script from terminating in order to keep the container alive.
Build Command
There are a number of ways to write a build command; a typical example could look like this:
#!/bin/sh

IMAGE_NAME="joc-1-13"

docker build --no-cache --rm --tag=$IMAGE_NAME --file=./build/Dockerfile --network=js --build-arg="USER_ID=$UID" ./build
- Explanations
- Using a common network for JobScheduler components allows direct access to resources such as ports within the network. The network is required at build time to allow the installer to use the JobScheduler database.
- Consider use of the --build-arg option that injects the USER_ID environment variable into the image with the numeric ID of the account running the build command. This simplifies later access to the optionally exposed volume specified by the Dockerfile, as the same numeric user ID and group ID are used inside and outside of the container. See the portability note after this list.
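Note that $UID is provided by bash and some other shells but is not necessarily set in a plain POSIX sh. A portable variant derives the numeric ID explicitly; a sketch using the same image and network names as above:
#!/bin/sh

IMAGE_NAME="joc-1-13"

# id -u works in any POSIX shell, unlike the bash-specific $UID
docker build --no-cache --rm --tag=$IMAGE_NAME --file=./build/Dockerfile --network=js --build-arg="USER_ID=$(id -u)" ./build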
Run
There are a number of ways to write a run command; a typical example could look like this:
#!/bin/sh

IMAGE_NAME="joc-1-13"
RUN_USER_ID="$(id -u $USER):$(id -g $USER)"

mkdir -p /some/path/logs

docker run -dit --rm \
  --user=$RUN_USER_ID \
  --hostname=$IMAGE_NAME \
  --network=js \
  --publish=4446:4446 \
  --volume=/some/path/logs:/var/log/sos-berlin.com/joc:Z \
  --name=$IMAGE_NAME \
  $IMAGE_NAME
- Explanations
- Using a common network for JobScheduler components allows direct access to resources such as ports within the network.
- The RUN_USER_ID variable is populated with the numeric IDs of the account and the group that execute the run command. This value is assigned to the --user option to inject the account information into the container (replacing the account specified with the USER jobscheduler instruction in the Dockerfile).
- Port 4446 for access by user browsers to JOC Cockpit should be mapped to some outside port on the Docker host. Configure the firewall on the Docker host to allow incoming traffic to the mapped port.
- Specify a logs directory to be created that is referenced with the --volume option to expose the log directory of JOC Cockpit for reading, see the check below. Avoid modifying log files in this directory or adding new files.
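Once the container is up, JOC Cockpit is reachable through the mapped port and the logs can be read from the exposed volume; a brief check, assuming the names used above (the log file name may differ per release):
#!/bin/sh

# verify that the container is running
docker ps --filter name=joc-1-13

# read JOC Cockpit log files from the exposed volume on the Docker host
ls -l /some/path/logs
tail -f /some/path/logs/joc.log

# the GUI is available to user browsers via the mapped port, e.g.
#   http://<docker-host>:4446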