Introduction
- This article describes the build process for official JOC Cockpit images.
- Users can build their own container images for JOC Cockpit and adjust them to their needs.
Build Environment
The following directory hierarchy is assumed for the build environment:
joc
    build.sh
    build
        Dockerfile
        entrypoint.sh
        jetty.sh
        js7_install_joc.sh
        config
The joc root directory can have any name. The build files listed above are available for download. Note that the build script described below will, by default, use the directory name and release number to determine the resulting image name.
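For illustration, the build environment can be created and populated as sketched below. The Dockerfile, entrypoint.sh, jetty.sh and build.sh files are available from the download links in this article; the download URL pattern for the installer script and the tarball is taken from the comments in the Dockerfile of the next chapter, and release 2.5.0 is an example only:
# sketch only: create the build environment, the root directory name is arbitrary
mkdir -p joc/build/config
cd joc
# download the JOC Cockpit installer script to the build directory
curl -o build/js7_install_joc.sh \
    https://download.sos-berlin.com/JobScheduler.2.5/js7_install_joc.sh
# optionally store the JOC Cockpit tarball in the build directory
# should it be copied instead of downloaded at build-time
curl -o build/js7_joc_linux.2.5.0.tar.gz \
    https://download.sos-berlin.com/JobScheduler.2.5/js7_joc_linux.2.5.0.tar.gz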
Dockerfile
Download: Dockerfile
Container images for JS7 JOC Cockpit provided by SOS make use of the following Dockerfile:
# BUILD PRE-IMAGE
FROM alpine:3.20 AS js7-pre-image
# provide build arguments for release information
ARG JS_RELEASE
ARG JS_RELEASE_MAJOR
# image user id has to match later run-time user id
ARG JS_USER_ID=${JS_USER_ID:-1001}
ARG JS_HTTP_PORT=${JS_HTTP_PORT:-4446}
ARG JS_HTTPS_PORT=${JS_HTTPS_PORT:-4443}
ARG JS_JAVA_OPTIONS=${JS_JAVA_OPTIONS}
# add/copy installation tarball
# ADD https://download.sos-berlin.com/JobScheduler.${JS_RELEASE_MAJOR}/js7_joc_linux.${JS_RELEASE}.tar.gz /usr/local/src/
COPY js7_joc_linux.${JS_RELEASE}.tar.gz /usr/local/src/
# test installer tarball
RUN test -e /usr/local/src/js7_joc_linux.${JS_RELEASE}.tar.gz
# add/copy installer script
# ADD https://download.sos-berlin.com/JobScheduler.${JS_RELEASE_MAJOR}/js7_install_joc.sh /usr/local/bin/
COPY js7_install_joc.sh /usr/local/bin/
# copy configuration
COPY config/ /usr/local/src/resources
# install Java and JOC Cockpit
# create user account jobscheduler using root group
RUN apk upgrade --available && apk add --no-cache \
openjdk17-jre && \
sed -i 's/securerandom.source=file:\/dev\/random/securerandom.source=file:\/dev\/urandom/g' /usr/lib/jvm/java-17-openjdk/conf/security/java.security && \
adduser -u ${JS_USER_ID} -G root --disabled-password --home /home/jobscheduler --shell /bin/bash jobscheduler && \
chmod +x /usr/local/bin/js7_install_joc.sh && \
printf "http port %s https port %s \n", "${JS_HTTP_PORT}", "${JS_HTTPS_PORT}" && \
/usr/local/bin/js7_install_joc.sh \
--home=/opt/sos-berlin.com/js7/joc \
--data=/var/sos-berlin.com/js7/joc \
--setup-dir=/usr/local/src/joc.setup \
--tarball=/usr/local/src/js7_joc_linux.${JS_RELEASE}.tar.gz \
--http-port=${JS_HTTP_PORT} \
--https-port=${JS_HTTPS_PORT} \
--dbms-init=off \
--dbms-config=/usr/local/src/resources/hibernate.cfg.xml \
--keystore=/usr/local/src/resources/https-keystore.p12 \
--keystore-password=jobscheduler \
--truststore=/usr/local/src/resources/https-truststore.p12 \
--truststore-password=jobscheduler \
--user=jobscheduler \
--title="JOC Cockpit" \
--as-user \
--java-options="${JS_JAVA_OPTIONS}" \
--make-dirs && \
rm -f /usr/local/src/js7_joc_linux.${JS_RELEASE}.tar.gz
# BUILD IMAGE
FROM alpine:3.20 AS js7-image
LABEL maintainer="Software- und Organisations-Service GmbH"
# provide build arguments for release information
ARG JS_RELEASE
ARG JS_RELEASE_MAJOR
# image user id has to match later run-time user id
ARG JS_USER_ID=${JS_USER_ID:-1001}
ARG JS_HTTP_PORT=${JS_HTTP_PORT:-4446}
ARG JS_HTTPS_PORT=${JS_HTTPS_PORT:-4443}
ARG JS_JAVA_OPTIONS=${JS_JAVA_OPTIONS}
# JS7 user id, ports and Java options
ENV RUN_JS_USER_ID=${RUN_JS_USER_ID:-1001}
ENV RUN_JS_HTTP_PORT=${RUN_JS_HTTP_PORT:-$JS_HTTP_PORT}
ENV RUN_JS_HTTPS_PORT=${RUN_JS_HTTPS_PORT:-$JS_HTTPS_PORT}
ENV RUN_JS_JAVA_OPTIONS=${RUN_JS_JAVA_OPTIONS:-$JS_JAVA_OPTIONS}
COPY --from=js7-pre-image ["/opt/sos-berlin.com/js7", "/opt/sos-berlin.com/js7"]
COPY --from=js7-pre-image ["/var/sos-berlin.com/js7", "/var/sos-berlin.com/js7"]
# copy entrypoint script
COPY entrypoint.sh /usr/local/bin/
# install process tools, net tools, bash, openjdk
# add jobscheduler user account and group
# for JDK < 12, /dev/random does not provide sufficient entropy, see https://kb.sos-berlin.com/x/lIM3
RUN apk upgrade --available && apk add \
--no-cache \
--repository=http://dl-cdn.alpinelinux.org/alpine/edge/main \
procps \
net-tools \
bash \
su-exec \
shadow \
git \
openjdk17-jre && \
sed -i 's/securerandom.source=file:\/dev\/random/securerandom.source=file:\/dev\/urandom/g' /usr/lib/jvm/java-17-openjdk/conf/security/java.security && \
sed -i 's/jdk.tls.disabledAlgorithms=SSLv3, RC4, DES, MD5withRSA, DH keySize < 1024, \\/jdk.tls.disabledAlgorithms=SSLv3, RC4, DES, MD5withRSA, DH keySize < 1024, TLSv1, TLSv1.1, \\/g' /usr/lib/jvm/java-17-openjdk/conf/security/java.security && \
adduser -u ${JS_USER_ID} -G root --disabled-password --home /home/jobscheduler --shell /bin/bash jobscheduler && \
mkdir -p /var/log/sos-berlin.com/js7/joc && \
chown -R jobscheduler:root /opt/sos-berlin.com /var/sos-berlin.com /var/log/sos-berlin.com/js7/joc && \
chmod -R g=u /etc/passwd /opt/sos-berlin.com /var/sos-berlin.com /var/log/sos-berlin.com/js7/joc && \
chmod +x /usr/local/bin/entrypoint.sh
# START
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
Explanation:
- The Dockerfile implements two stages to exclude installer files from the resulting image.
- Line 3: The base image is the current Alpine image at build-time.
- Line 6 - 7: The release identification is injected by build arguments. This information is used to determine the tarball to be downloaded or copied.
- Line 10 - 13: Defaults for the user id running the JOC Cockpit inside the container as well as HTTP and HTTPS ports are provided. These values can be overwritten by providing the relevant build arguments.
- Line 16 - 17: Users can either download the JOC Cockpit tarball directly from the SOS web site or store the tarball in the build directory and copy it from that location.
- Line 20: The tarball integrity is tested.
- Line 23 - 24: The JOC Cockpit Installer Script is downloaded or copied, see JS7 - Unix Shell Installation Script - js7_install_joc.sh
- Line 27: The config folder available in the build directory is copied to the appropriate config folder in the image. This can be useful for creating an image with individual settings in configuration files, see the JS7 - JOC Cockpit Configuration Items article for more information.
- The hibernate.cfg.xml file specifies the database connection. This file is not used at build-time; it is provided as a sample for run-time configuration. You will find details in the JS7 - Database article.
- The default https-keystore.p12 and https-truststore.p12 files are copied; they would hold the private key and certificate required for server authentication with HTTPS. By default, empty keystore and truststore files are used to which users add their private keys and certificates at run-time.
- Line 32: A recent Java release is added to the pre-image.
- Line 33: The jobscheduler account is created.
- Line 35 - 52: The JOC Cockpit Installer Script is executed with arguments that perform the installation for the jobscheduler account. For the use of arguments see the headless installation of the JOC Cockpit explained in the JS7 - JOC Cockpit Installation On Premises article. In fact, a JOC Cockpit installation is performed when building the image.
- Line 72 - 75: Environment variables are provided at run-time, not at build-time. They can be used to specify ports and Java options when running the container, see the docker run sketch following this list.
- Line 81: The entrypoint.sh script is copied from the build directory to the image, see the next chapter.
- Line 82: The jetty.sh script is copied from the build directory to the image. This script ships with the Jetty Servlet Container and for on premises installations is available from the JETTY_HOME/bin directory. Users might have to adjust the script to strip off commands that require root permissions, for example chown, and commands that might not be applicable to their container environment, for example use of su.
- Line 87 - 93: The image OS is updated and additional packages are installed (ps, netstat, bash, su-exec, shadow, git).
- Line 94: The most recent Java 17 package available with Alpine is applied. JOC Cockpit can be operated with newer Java releases. However, stick to Oracle, OpenJDK or AdoptOpenJDK as the source for your Java LTS release. Alternatively, you can use your own base image and install Java on top of it. For details see Which Java versions is JobScheduler available for?
- Line 95: Java releases might make use of /dev/random for random number generation. This is a bottleneck as random number generation with this file is blocking. Instead, /dev/urandom should be used, which implements non-blocking behavior. The change of the random source is applied to the Java security file.
- Line 96: Users might want to disable certain TLS protocol versions or algorithms by applying changes to the Java security file.
- Line 97 - 100: The jobscheduler account is created and is assigned the user id handed over by the relevant build argument. This suggests that the account running the JOC Cockpit inside the container and the account that starts the container are assigned the same user id. This allows the account running the container to access any files created by the JOC Cockpit in mounted volumes with identical permissions.
- Consider that the account is assigned the root group. For environments in which the entrypoint script is executed with an arbitrary non-root user id, this allows access to files created by the JOC Cockpit for any accounts that are assigned the root group.
- Accordingly, any files owned by the jobscheduler account are made accessible to the root group with similar user permissions. Read access to /etc/passwd can be required in such environments.
- For details see JS7 - Running Containers for User Accounts.
- Line 105: The entrypoint script is executed and is dynamically parameterized from environment variables which are forwarded when starting the container.
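As a sketch of how the run-time settings described above are forwarded to a container, JOC Cockpit could be started from the resulting image like this. The container name, published port, volume name and Java options are examples only:
# example only: start a container from the image built with the build script below
docker run -d \
    --name js7-joc-primary \
    --user "$(id -u):0" \
    -p 17446:4446 \
    -e RUN_JS_HTTP_PORT=4446 \
    -e RUN_JS_JAVA_OPTIONS="-Xmx256m" \
    --mount type=volume,src=js7-joc-config,dst=/var/sos-berlin.com/js7/joc/resources/joc \
    sosberlin/js7:joc-2-5-0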
Entrypoint Script
Download: entrypoint.sh
The following entrypoint script is used to start JOC Cockpit containers.
#!/bin/bash
JETTY_BASE="/var/sos-berlin.com/js7/joc"
update_joc_properties() {
# update joc.properties file: ${JETTY_BASE}/resources/joc/joc.properties
rc=$(grep -E '^cluster_id' "${JETTY_BASE}"/resources/joc/joc.properties)
if [ -z "${rc}" ]
then
echo ".. update_joc_properties [INFO] updating cluster_id in ${JETTY_BASE}/resources/joc/joc.properties"
printf "cluster_id = joc\n" >> "${JETTY_BASE}"/resources/joc/joc.properties
fi
rc=$(grep -E '^ordering' "${JETTY_BASE}"/resources/joc/joc.properties)
if [ -z "${rc}" ]
then
echo ".. update_joc_properties [INFO] updating ordering in ${JETTY_BASE}/resources/joc/joc.properties"
printf "ordering=%1\n" "$(shuf -i 0-99 -n 1)" >> "${JETTY_BASE}"/resources/joc/joc.properties
fi
}
startini_to_startd() {
# convert once ${JETTY_BASE}/resources/joc/start.ini to ${JETTY_BASE}/resources/joc/start.d
if [ -d "${JETTY_BASE}"/start.d ]; then
if [ -f "${JETTY_BASE}"/resources/joc/start.ini ] && [ -d "${JETTY_BASE}"/resources/joc/start.d ]; then
echo ".. startini_to_startd [INFO] converting start.ini to start.d ini files"
for file in "${JETTY_BASE}"/resources/joc/start.d/*.ini; do
module="$(basename "$file" | cut -d. -f1)"
echo ".... [INFO] processing module ${module}"
while read -r line; do
modulevariablekeyprefix="$(echo "${line}" | cut -d. -f1,2)"
if [ "${modulevariablekeyprefix}" = "jetty.${module}" ] || [ "${modulevariablekeyprefix}" = "jetty.${module}Context" ]; then
modulevariablekey="$(echo "${line}" | cut -d= -f1 | sed 's/\s*$//g')"
echo ".... startini_to_startd [INFO] ${line}"
sed -i "s;.*${modulevariablekey}\s*=.*;${line};g" "${file}"
fi
done < "${JETTY_BASE}"/resources/joc/start.ini
done
mv -f "${JETTY_BASE}"/resources/joc/start.ini "${JETTY_BASE}"/resources/joc/start.in~
fi
fi
}
add_start_configuration() {
# overwrite ini files in start.d if available from config folder
if [ -d "${JETTY_BASE}"/start.d ]; then
if [ -d "${JETTY_BASE}"/resources/joc/start.d ]; then
for file in "${JETTY_BASE}"/resources/joc/start.d/*.ini; do
echo ".. add_start_configuration [INFO] copying ${file} -> ${JETTY_BASE}/start.d/"
cp -f "$file" "${JETTY_BASE}"/start.d/
done
fi
fi
}
add_jdbc_and_license() {
# if license folder not empty then copy js7-license.jar to Jetty's class path
if [ -d "${JETTY_BASE}"/resources/joc/license ]; then
if [ -f "${JETTY_BASE}"/resources/joc/lib/js7-license.jar ]; then
echo ".. add_jdbc_and_license [INFO] copying ${JETTY_BASE}/resources/joc/lib/js7-license.jar -> ${JETTY_BASE}/lib/ext/joc/"
cp -f "${JETTY_BASE}"/resources/joc/lib/js7-license.jar "${JETTY_BASE}"/lib/ext/joc/
fi
fi
# if a JDBC driver has been added then copy it to Jetty's class path and move existing JDBC drivers aside to avoid conflicts
if [ -d "${JETTY_BASE}"/resources/joc/lib ]; then
if [ -n "$(ls "${JETTY_BASE}"/resources/joc/lib/*.jar 2>/dev/null | grep -v "js7-license.jar")" ]; then
for file in "${JETTY_BASE}"/lib/ext/joc/*.jar; do
if [ "$(basename "$file")" != "js7-license.jar" ]; then
echo ".. add_jdbc_and_license [INFO] moving ${file} -> ${JETTY_BASE}/resources/joc/lib/$(basename "$file")~"
mv -f "$file" "${JETTY_BASE}"/resources/joc/lib/"$(basename "$file")"~
fi
done
for file in "${JETTY_BASE}"/resources/joc/lib/*.jar; do
echo ".. add_jdbc_and_license [INFO] copying ${file} -> ${JETTY_BASE}/lib/ext/joc/"
cp -f "$file" "${JETTY_BASE}"/lib/ext/joc/
done
fi
fi
}
add_custom_logo() {
# if image folder in the configuration directory is not empty then images are copied to the installation directory
if [ -d "${JETTY_BASE}"/resources/joc/image ];then
mkdir -p "${JETTY_BASE}"/webapps/root/ext/images
echo ".. add_custom_logo [INFO] copying ${JETTY_BASE}/resources/joc/image/* -> ${JETTY_BASE}/webapps/root/ext/images/"
cp "${JETTY_BASE}"/resources/joc/image/* "${JETTY_BASE}"/webapps/root/ext/images/
fi
}
patch_api() {
if [ ! -d "${JETTY_BASE}"/resources/joc/patches ]; then
echo ".. patch_api [INFO] API patch directory not found: ${JETTY_BASE}/resources/joc/patches"
return
fi
if [ ! -d "${JETTY_BASE}"/webapps/joc/WEB-INF/classes ]; then
echo ".. patch_api [WARN] JOC Cockpit API sub-directory not found: ${JETTY_BASE}/webapps/joc/WEB-INF/classes"
return
fi
jarfiles=$(ls "${JETTY_BASE}"/resources/joc/patches/js7_joc.*-PATCH.API-*.jar 2>/dev/null)
if [ -n "${jarfiles}" ]; then
cd "${JETTY_BASE}"/webapps/joc/WEB-INF/classes > /dev/null || return
for jarfile in "${JETTY_BASE}"/resources/joc/patches/js7_joc.*-PATCH.API-*.jar; do
echo ".. patch_api [INFO] extracting ${jarfile} -> ${JETTY_BASE}/webapps/joc/WEB-INF/classes"
unzip -o "${jarfile}" || return
# rm -f "${jarfile}" || return
done
cd - > /dev/null || return
else
echo ".. patch_api [INFO] no API patches available from .jar files in directory: ${JETTY_BASE}/resources/joc/patches"
fi
tarballs=$(ls "${JETTY_BASE}"/resources/joc/patches/js7_joc.*-PATCH.API-*.tar.gz 2>/dev/null)
if [ -n "${tarballs}" ]; then
if [ "$(echo "${tarball}" | wc -l)" -eq 1 ]; then
cd "${JETTY_BASE}"/webapps/joc/WEB-INF/classes > /dev/null || return
for tarfile in "${JETTY_BASE}"/resources/joc/patches/js7_joc.*-PATCH.API-*.tar.gz; do
echo ".. patch_api [INFO] extracting ${tarfile} -> ${JETTY_BASE}/webapps/joc/WEB-INF/classes"
tar -xpozf "${tarfile}" || return
# rm -f "${tarfile}" || return
for jarfile in "${JETTY_BASE}"/resources/joc/patches/js7_joc.*-PATCH.API-*.jar; do
echo ".. patch_api [INFO] extracting ${jarfile} -> ${JETTY_BASE}/webapps/joc/WEB-INF/classes"
unzip -o "${jarfile}"
# rm -f "${jarfile}" || return
done
# rm -f "${tarfile}" || return
done
cd - > /dev/null || return
else
echo ".. patch_api [WARN]: more than one tarball found for API patches. Please drop previous patch tarballs and use the latest API patch tarball only as it includes previous patches."
fi
else
echo ".. patch_api [INFO] no API patches available from .tar.gz files in directory: ${JETTY_BASE}/resources/joc/patches"
fi
}
patch_gui() {
if [ ! -d "${JETTY_BASE}"/resources/joc/patches ]; then
echo ".. patch_gui [INFO] GUI patch directory not found: ${JETTY_BASE}/resources/joc/patches"
return
fi
if [ ! -d "${JETTY_BASE}"/webapps/joc ]; then
echo ".. patch_gui [WARN] JOC Cockpit GUI sub-directory not found: ${JETTY_BASE}/webapps/joc"
return
fi
tarball=$(ls "${JETTY_BASE}"/resources/joc/patches/js7_joc.*-PATCH.GUI-*.tar.gz 2>/dev/null)
if [ -n "${tarball}" ]; then
if [ "$(echo "${tarball}" | wc -l)" -eq 1 ]; then
echo ".. patch_gui [INFO] applying GUI patch tarball: ${tarball}"
cd "${JETTY_BASE}"/webapps/joc > /dev/null || return
find "${JETTY_BASE}"/webapps/joc -maxdepth 1 -type f -delete || return
if [ -d "${JETTY_BASE}"/webapps/joc/assets ]; then
rm -fr "${JETTY_BASE}"/webapps/joc/assets || return
fi
if [ -d "${JETTY_BASE}"/webapps/joc/styles ]; then
rm -fr "${JETTY_BASE}"/webapps/joc/styles || return
fi
tar -xpozf "${tarball}" || return
cd - > /dev/null || return
else
echo ".. patch_gui [WARN]: more than one tarball found for GUI patches. Please drop previous patch tarballs and use the latest GUI patch tarball only as it includes previous patches."
fi
else
echo ".. patch_gui [INFO] no GUI patches available from .tar.gz files in directory: ${JETTY_BASE}/resources/joc/patches"
fi
}
# create JOC Cockpit start script
echo '#!/bin/sh' > "${JETTY_BASE}"/start-joc.sh
echo 'trap "/opt/sos-berlin.com/js7/joc/jetty/bin/jetty.sh stop; exit" TERM INT' >> "${JETTY_BASE}"/start-joc.sh
echo '/opt/sos-berlin.com/js7/joc/jetty/bin/jetty.sh start && tail -f /dev/null &' >> "${JETTY_BASE}"/start-joc.sh
echo 'wait' >> "${JETTY_BASE}"/start-joc.sh
chmod +x "${JETTY_BASE}"/start-joc.sh
echo "JS7 entrypoint script: updating image"
# update joc.properties file
update_joc_properties
# convert start.ini to start.d ini files
startini_to_startd
# copy start configuration
add_start_configuration
# copy custom logo
add_custom_logo
if [ -n "${RUN_JS_HTTP_PORT}" ]
then
if [ -f "${JETTY_BASE}"/start.d/http.in~ ] && [ ! -f "${JETTY_BASE}"/start.d/http.ini ]; then
# enable http access in start.d directory
mv "${JETTY_BASE}"/start.d/http.in~ "${JETTY_BASE}"/start.d/http.ini
fi
if [ -f "${JETTY_BASE}"/start.d/http.ini ]; then
# set port for http access in start.d directory
sed -i "s/.*jetty.http.port\s*=.*/jetty.http.port=$RUN_JS_HTTP_PORT/g" "${JETTY_BASE}"/start.d/http.ini
fi
else
if [ -f "${JETTY_BASE}"/start.d/http.ini ]; then
# disable http access in start.d directory
mv -f "${JETTY_BASE}"/start.d/http.ini "${JETTY_BASE}"/start.d/http.in~
fi
fi
if [ -n "${RUN_JS_HTTPS_PORT}" ]
then
if [ -f "${JETTY_BASE}"/start.d/https.in~ ] && [ ! -f "${JETTY_BASE}"/start.d/https.ini ]; then
# enable https access in start.d directory
mv "${JETTY_BASE}"/start.d/https.in~ "${JETTY_BASE}"/start.d/https.ini
fi
if [ -f "${JETTY_BASE}"/start.d/ssl.in~ ] && [ ! -f "${JETTY_BASE}"/start.d/ssl.ini ]; then
# enable https access in start.d directory
mv "${JETTY_BASE}"/start.d/ssl.in~ "${JETTY_BASE}"/start.d/ssl.ini
fi
if [ -f "${JETTY_BASE}"/start.d/ssl.ini ]; then
# set port for https access in start.d directory
sed -i "s/.*jetty.ssl.port\s*=.*/jetty.ssl.port=${RUN_JS_HTTPS_PORT}/g" "${JETTY_BASE}"/start.d/ssl.ini
fi
else
if [ -f "${JETTY_BASE}"/start.d/https.ini ]; then
# disable https access in start.d directory
mv -f "${JETTY_BASE}"/start.d/https.ini "${JETTY_BASE}"/start.d/https.in~
fi
if [ -f "${JETTY_BASE}"/start.d/ssl.ini ]; then
# disable https access in start.d directory
mv -f "${JETTY_BASE}"/start.d/ssl.ini "${JETTY_BASE}"/start.d/ssl.in~
fi
fi
if [ -n "${RUN_JS_JAVA_OPTIONS}" ]
then
export JAVA_OPTIONS="${JAVA_OPTIONS} ${RUN_JS_JAVA_OPTIONS}"
fi
JS_USER_ID=$(echo "${RUN_JS_USER_ID}" | cut -d ':' -f 1)
JS_GROUP_ID=$(echo "${RUN_JS_USER_ID}" | cut -d ':' -f 2)
JS_USER_ID=${JS_USER_ID:-$(id -u)}
JS_GROUP_ID=${JS_GROUP_ID:-$(id -g)}
BUILD_GROUP_ID=$(grep 'jobscheduler' /etc/group | head -1 | cut -d ':' -f 3)
BUILD_USER_ID=$(grep 'jobscheduler' /etc/passwd | head -1 | cut -d ':' -f 3)
add_jdbc_and_license
patch_api
patch_gui
if [ "$(id -u)" = "0" ]
then
if [ ! "${BUILD_USER_ID}" = "${JS_USER_ID}" ]
then
echo "JS7 entrypoint script: switching ownership of image user id '${BUILD_USER_ID}' -> '${JS_USER_ID}'"
usermod -u "${JS_USER_ID}" jobscheduler
find /var/sos-berlin.com/ -user "${BUILD_USER_ID}" -exec chown -h jobscheduler {} \;
find /var/log/sos-berlin.com/ -user "${BUILD_USER_ID}" -exec chown -h jobscheduler {} \;
fi
if [ ! "${BUILD_GROUP_ID}" = "${JS_GROUP_ID}" ]
then
if grep -q "${JS_GROUP_ID}" /etc/group
then
groupmod -g "${JS_GROUP_ID}" jobscheduler
else
addgroup -g "${JS_GROUP_ID}" -S jobscheduler
fi
echo "JS7 entrypoint script: switching ownership of image group id '${BUILD_GROUP_ID}' -> '${JS_GROUP_ID}'"
find /var/sos-berlin.com/ -group "${BUILD_GROUP_ID}" -exec chgrp -h jobscheduler {} \;
find /var/log/sos-berlin.com/ -group "${BUILD_GROUP_ID}" -exec chgrp -h jobscheduler {} \;
fi
echo "JS7 entrypoint script: switching to user account 'jobscheduler' to run start script"
echo "JS7 entrypoint script: starting JOC Cockpit: exec su-exec ${JS_USER_ID}:${JS_GROUP_ID} /opt/sos-berlin.com/js7/joc/jetty/bin/jetty.sh start"
exec su-exec "${JS_USER_ID}":0 "${JETTY_BASE}"/start-joc.sh
else
if [ "${BUILD_USER_ID}" = "${JS_USER_ID}" ]
then
if [ "$(id -u)" = "${JS_USER_ID}" ]
then
echo "JS7 entrypoint script: running for user id '$(id -u)'"
else
echo "JS7 entrypoint script: running for user id '$(id -u)' using user id '${JS_USER_ID}', group id '${JS_GROUP_ID}'"
echo "JS7 entrypoint script: missing permission to switch user id and group id, consider to omit the 'docker run --user' option"
fi
else
echo "JS7 entrypoint script: running for user id '$(id -u)', image user id '${BUILD_USER_ID}' -> '${JS_USER_ID}', image group id '${BUILD_GROUP_ID}' -> '${JS_GROUP_ID}'"
fi
echo "JS7 entrypoint script: starting JOC Cockpit: exec sh -c /opt/sos-berlin.com/js7/joc/jetty/bin/jetty.sh start"
exec sh -c "${JETTY_BASE}/start-joc.sh"
fi
Explanation:
- Note that the entrypoint script runs the JOC Cockpit start script using exec sh -c. This is required to run the JOC Cockpit inside the current process, which is assigned PID 1. A later docker stop <container> command will send a SIGTERM signal to the process with PID 1 only. If JOC Cockpit were started directly as a shell script without use of exec, a new process with a different PID would be created. This means that the docker stop command would not normally terminate JOC Cockpit, but would abort JOC Cockpit when killing the container. This can cause delays for fail-over between clustered JOC Cockpit containers.
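For example, a stop timeout can be passed to the docker stop command to allow Jetty an orderly shutdown before the container is killed; the container name is an example only:
# example only: SIGTERM is sent to PID 1 (the start script), which in turn stops Jetty;
# the container is killed only after the timeout of 30 seconds has expired
docker stop --time 30 js7-joc-primary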
Build Script
The build script offers a number of options to parameterize the Dockerfile:
#!/bin/sh
set -e
SCRIPT_HOME=$(dirname "$0")
SCRIPT_HOME="`cd "${SCRIPT_HOME}" >/dev/null && pwd`"
SCRIPT_FOLDER="`basename $(dirname "$SCRIPT_HOME")`"
# ----- modify default settings -----
JS_RELEASE="2.5.0"
JS_REPOSITORY="sosberlin/js7"
JS_IMAGE="$(basename "${SCRIPT_HOME}")-${JS_RELEASE//\./-}"
JS_USER_ID="$UID"
JS_HTTP_PORT="4446"
JS_HTTPS_PORT=
JS_JAVA_OPTIONS="-Xmx128m"
JS_BUILD_ARGS=
# ----- modify default settings -----
for option in "$@"
do
case "$option" in
--release=*) JS_RELEASE=`echo "$option" | sed 's/--release=//'`
;;
--repository=*) JS_REPOSITORY=`echo "$option" | sed 's/--repository=//'`
;;
--image=*) JS_IMAGE=`echo "$option" | sed 's/--image=//'`
;;
--user-id=*) JS_USER_ID=`echo "$option" | sed 's/--user-id=//'`
;;
--http-port=*) JS_HTTP_PORT=`echo "$option" | sed 's/--http-port=//'`
;;
--https-port=*) JS_HTTPS_PORT=`echo "$option" | sed 's/--https-port=//'`
;;
--java-options=*) JS_JAVA_OPTIONS=`echo "$option" | sed 's/--java-options=//'`
;;
--build-args=*) JS_BUILD_ARGS=`echo "$option" | sed 's/--build-args=//'`
;;
*) echo "unknown argument: $option"
exit 1
;;
esac
done
set -x
docker build --no-cache --rm \
--tag=$JS_REPOSITORY:$JS_IMAGE \
--file=$SCRIPT_HOME/build/Dockerfile \
--build-arg="JS_RELEASE=$JS_RELEASE" \
--build-arg="JS_RELEASE_MAJOR=$(echo $JS_RELEASE | cut -d . -f 1,2)" \
--build-arg="JS_USER_ID=$JS_USER_ID" \
--build-arg="JS_HTTP_PORT=$JS_HTTP_PORT" \
--build-arg="JS_HTTPS_PORT=$JS_HTTPS_PORT" \
--build-arg="JS_JAVA_OPTIONS=$JS_JAVA_OPTIONS" \
$JS_BUILD_ARGS $SCRIPT_HOME/build
set +x
Explanation:
- Line 12 - 22: Default values are specified that are used if no command line arguments are provided. This includes values for:
- the release number: adjust this value to a current release of JS7.
- the repository, which by default is sosberlin/js7.
- the image name, which is determined from the current folder name and the release number.
- the user id is by default the id of the user running the build script.
- the HTTP port and HTTPS port: if the relevant port is not specified then the JOC Cockpit will not listen to that port for the protocol in question. You can, for example, disable the HTTP protocol by specifying an empty value. The default ports should be fine as they are mapped by the run script to outside ports on the container's host. However, you can modify ports as you like.
- Java options: typically you would specify default values for, for example, Java memory consumption. The Java options can be overwritten by the run script when starting the container. However, you might want to create your own image with adjusted default values.
- Line 27 - 50: The above options can be overwritten by command line arguments like this:
./build.sh --http-port=14445 --https-port=14443 --java-options="-Xmx1G"
- Line 54 - 63: The effective docker build command is executed with arguments. The Dockerfile is assumed to be located in the build sub-directory of the current directory.
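A typical invocation might look like this, assuming the tarball has been stored in the build sub-directory; the image tag shown is derived from the joc directory name and the release number and is an example only:
# example only: build the image and verify the resulting tag
./build.sh --release=2.5.0 --java-options="-Xmx256m"
docker image ls sosberlin/js7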