Introduction
- This article describes the build process for official JOC Cockpit container images, which is extracted from the SOS build environment.
- Users can build their own Docker container images for JOC Cockpit and adjust them to their needs.
Build Environment
The following directory hierarchy is assumed for the build environment:
joc
    build.sh
    build
        Dockerfile
        entrypoint.sh
        jetty.sh
        js7_install_joc.sh
        joc_install.xml
        config
The joc root directory can have any name. The build files listed above are available for download. Note that the build script described below will, by default, use the directory name and release number to determine the resulting image name.
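For example, with the default settings of the build script shown below, a root directory named joc and release 2.15.0 result in the following image name; the sosberlin/js7 repository prefix is the build script's default and can be changed:

# image name produced from the directory name 'joc' and release 2.15.0
sosberlin/js7:joc-2-15-0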
Dockerfile
Download: Dockerfile
Docker Container images for JS7 JOC Cockpit provided by SOS make use of the following Dockerfile:
# BUILD PRE-IMAGE

FROM alpine:3.20 AS js7-pre-image

# provide build arguments for release information
ARG JS_RELEASE
ARG JS_RELEASE_MAJOR

# image user id has to match later run-time user id
ARG JS_USER_ID=${JS_USER_ID:-1001}
ARG JS_HTTP_PORT=${JS_HTTP_PORT:-4446}
ARG JS_HTTPS_PORT=${JS_HTTPS_PORT:-4443}
ARG JS_JAVA_OPTIONS=${JS_JAVA_OPTIONS}

# add/copy installation tarball
# ADD https://download.sos-berlin.com/JobScheduler.${JS_RELEASE_MAJOR}/js7_joc_linux.${JS_RELEASE}.tar.gz /usr/local/src/
COPY js7_joc_linux.${JS_RELEASE}.tar.gz /usr/local/src/

# test installer tarball
RUN test -e /usr/local/src/js7_joc_linux.${JS_RELEASE}.tar.gz

# add/copy installer script
# ADD https://download.sos-berlin.com/JobScheduler.${JS_RELEASE_MAJOR}/js7_install_joc.sh /usr/local/bin/
COPY js7_install_joc.sh /usr/local/bin/

# copy configuration
COPY config/ /usr/local/src/resources

# install Java and JOC Cockpit
# create user account jobscheduler using root group
RUN apk upgrade --available && apk add --no-cache \
    openjdk17-jre && \
    sed -i 's/securerandom.source=file:\/dev\/random/securerandom.source=file:\/dev\/urandom/g' /usr/lib/jvm/java-17-openjdk/conf/security/java.security && \
    adduser -u ${JS_USER_ID} -G root --disabled-password --home /home/jobscheduler --shell /bin/bash jobscheduler && \
    chmod +x /usr/local/bin/js7_install_joc.sh && \
    printf "http port %s https port %s \n" "${JS_HTTP_PORT}" "${JS_HTTPS_PORT}" && \
    /usr/local/bin/js7_install_joc.sh \
        --home=/opt/sos-berlin.com/js7/joc \
        --data=/var/sos-berlin.com/js7/joc \
        --setup-dir=/usr/local/src/joc.setup \
        --tarball=/usr/local/src/js7_joc_linux.${JS_RELEASE}.tar.gz \
        --http-port=${JS_HTTP_PORT} \
        --https-port=${JS_HTTPS_PORT} \
        --dbms-init=off \
        --dbms-config=/usr/local/src/resources/hibernate.cfg.xml \
        --keystore=/usr/local/src/resources/https-keystore.p12 \
        --keystore-password=jobscheduler \
        --truststore=/usr/local/src/resources/https-truststore.p12 \
        --truststore-password=jobscheduler \
        --user=jobscheduler \
        --title="JOC Cockpit" \
        --as-user \
        --java-options="${JS_JAVA_OPTIONS}" \
        --make-dirs && \
    rm -f /usr/local/src/js7_joc_linux.${JS_RELEASE}.tar.gz

# BUILD IMAGE

FROM alpine:3.20 AS js7-image

LABEL maintainer="Software- und Organisations-Service GmbH"

# provide build arguments for release information
ARG JS_RELEASE
ARG JS_RELEASE_MAJOR

# image user id has to match later run-time user id
ARG JS_USER_ID=${JS_USER_ID:-1001}
ARG JS_HTTP_PORT=${JS_HTTP_PORT:-4446}
ARG JS_HTTPS_PORT=${JS_HTTPS_PORT:-4443}
ARG JS_JAVA_OPTIONS=${JS_JAVA_OPTIONS}

# JS7 user id, ports and Java options
ENV RUN_JS_USER_ID=${RUN_JS_USER_ID:-1001}
ENV RUN_JS_HTTP_PORT=${RUN_JS_HTTP_PORT:-$JS_HTTP_PORT}
ENV RUN_JS_HTTPS_PORT=${RUN_JS_HTTPS_PORT:-$JS_HTTPS_PORT}
ENV RUN_JS_JAVA_OPTIONS=${RUN_JS_JAVA_OPTIONS:-$JS_JAVA_OPTIONS}

COPY --from=js7-pre-image ["/opt/sos-berlin.com/js7", "/opt/sos-berlin.com/js7"]
COPY --from=js7-pre-image ["/var/sos-berlin.com/js7", "/var/sos-berlin.com/js7"]

# copy entrypoint script
COPY entrypoint.sh /usr/local/bin/

# install process tools, net tools, bash, openjdk
# add jobscheduler user account and group
# for JDK < 12, /dev/random does not provide sufficient entropy, see https://kb.sos-berlin.com/x/lIM3
RUN apk upgrade --available && apk add \
    --no-cache \
    --repository=http://dl-cdn.alpinelinux.org/alpine/edge/main \
    procps \
    net-tools \
    bash \
    su-exec \
    shadow \
    git \
    openjdk17-jre && \
    sed -i 's/securerandom.source=file:\/dev\/random/securerandom.source=file:\/dev\/urandom/g' /usr/lib/jvm/java-17-openjdk/conf/security/java.security && \
    sed -i 's/jdk.tls.disabledAlgorithms=SSLv3, RC4, DES, MD5withRSA, DH keySize < 1024, \\/jdk.tls.disabledAlgorithms=SSLv3, RC4, DES, MD5withRSA, DH keySize < 1024, TLSv1, TLSv1.1, \\/g' /usr/lib/jvm/java-17-openjdk/conf/security/java.security && \
    adduser -u ${JS_USER_ID} -G root --disabled-password --home /home/jobscheduler --shell /bin/bash jobscheduler && \
    mkdir -p /var/log/sos-berlin.com/js7/joc && \
    chown -R jobscheduler:root /opt/sos-berlin.com /var/sos-berlin.com /var/log/sos-berlin.com/js7/joc && \
    chmod -R g=u /etc/passwd /opt/sos-berlin.com /var/sos-berlin.com /var/log/sos-berlin.com/js7/joc && \
    chmod +x /usr/local/bin/entrypoint.sh

# START
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
Explanation:
- The Dockerfile implements two stages to exclude installer files from the resulting image.
- Line 3: The base image is the current Alpine image at build-time.
- Line 6 - 7: The release identification is injected by build arguments. This information is used to determine the tarball to be downloaded or copied.
- Line 10 - 13: Defaults for the user id running the JOC Cockpit inside the container as well as HTTP and HTTPS ports are provided. These values can be overwritten by providing the relevant build arguments.
- Line 16 - 17: Users can either download the JOC Cockpit tarball directly from the SOS web site or store the tarball with the build directory and copy from this location.
- Line 20: The tarball integrity is tested.
- Line 23 - 24: The JOC Cockpit Installer Script is downloaded or copied, see JS7 - Unix Shell Installation Script - js7_install_joc.sh.
- Line 27: The config folder available in the build directory is copied to the appropriate config folder in the image. This can be useful for creating an image with individual settings in configuration files, see the JS7 - JOC Cockpit Configuration Items article for more information.
  - The hibernate.cfg.xml file specifies the database connection. This file is not used at build-time. However, it is provided as a sample for run-time configuration. You will find details in the JS7 - Database article.
  - The default https-keystore.p12 and https-truststore.p12 files are copied that would hold the private key and certificate required for server authentication with HTTPS. By default, empty keystore and truststore files are used to which users add their private keys and certificates at run-time.
- Line 32: A recent Java release is added to the pre-image.
- Line 33: The jobscheduler account is created.
- Line 35 - 52: The JOC Cockpit Installer Script is executed with arguments, performing the installation for the jobscheduler account. For use of the arguments see JS7 - Unix Shell Installation Script - js7_install_joc.sh.
- Line 72 - 75: Environment variables are provided at run-time, not at build-time. They can be used to specify ports and Java options when running the container.
- Line 81: The entrypoint.sh script is copied from the build directory to the image, see the next chapter.
- Line 82: The jetty.sh script is copied from the build directory to the image. This script ships with the Jetty Servlet Container and for on premises installations is available from the JETTY_HOME/bin directory. Users might have to adjust the script to strip off commands that require root permissions, for example chown, and commands that might not be applicable to their container environment, for example use of su.
- Line 87 - 93: The image OS is updated and additional packages are installed (ps, netstat, bash, git).
- Line 94: The most recent Java package available with Alpine (OpenJDK 17) is applied. JOC Cockpit can be operated with newer Java releases. However, stick to Oracle, OpenJDK or AdoptOpenJDK as the source for your Java LTS release. Alternatively, you can use your own base image and install Java on top of this. For details see Which Java versions is JobScheduler available for?
- Line 95: Java releases might make use of /dev/random for random number generation. This is a bottleneck as random number generation with this file is blocking. Instead, /dev/urandom should be used, which implements non-blocking behavior. The change of the random file is applied to the Java security file.
- Line 96: Users might want to disable certain TLS protocol versions or algorithms by applying changes to the Java security file.
- Line 97 - 100: The jobscheduler account is created and is assigned the user id handed over by the relevant build argument. This suggests that the account running the JOC Cockpit inside the container and the account that starts the container are assigned the same user id. This allows the account running the container to access any files created by the JOC Cockpit in mounted volumes with identical permissions.
  - Consider that the account is assigned the root group. For environments in which the entrypoint script is executed with an arbitrary non-root user id, this allows access to files created by the JOC Cockpit for any accounts that are assigned the root group.
  - Accordingly, any files owned by the jobscheduler account are made accessible to the root group with similar user permissions. Read access to /etc/passwd can be required in such environments.
  - For details see JS7 - Running Containers for User Accounts.
- Line 105: The entrypoint script is executed and is dynamically parameterized from environment variables which are forwarded when starting the container.
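For illustration only, a container could be started from the resulting image like this. The container name, port mapping and memory setting are examples and not taken from this article; the RUN_JS_* environment variables correspond to the ENV declarations described above:

docker run -d --name js7-joc-primary \
    -p 17446:4446 \
    -e RUN_JS_JAVA_OPTIONS="-Xmx256m" \
    sosberlin/js7:joc-2-15-0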
Entrypoint Script
Download: entrypoint.sh
The following entrypoint script is used to start JOC Cockpit containers.
#!/bin/bash

JETTY_BASE="/var/sos-berlin.com/js7/joc"

update_joc_properties() {
  # update joc.properties file: ${JETTY_BASE}/resources/joc/joc.properties
  rc=$(grep -E '^cluster_id' "${JETTY_BASE}"/resources/joc/joc.properties)
  if [ -z "${rc}" ]
  then
    echo ".. update_joc_properties [INFO] updating cluster_id in ${JETTY_BASE}/resources/joc/joc.properties"
    printf "cluster_id = joc\n" >> "${JETTY_BASE}"/resources/joc/joc.properties
  fi

  rc=$(grep -E '^ordering' "${JETTY_BASE}"/resources/joc/joc.properties)
  if [ -z "${rc}" ]
  then
    echo ".. update_joc_properties [INFO] updating ordering in ${JETTY_BASE}/resources/joc/joc.properties"
    printf "ordering=%s\n" "$(shuf -i 0-99 -n 1)" >> "${JETTY_BASE}"/resources/joc/joc.properties
  fi
}

startini_to_startd() {
  # convert once ${JETTY_BASE}/resources/joc/start.ini to ${JETTY_BASE}/resources/joc/start.d
  if [ -d "${JETTY_BASE}"/start.d ]; then
    if [ -f "${JETTY_BASE}"/resources/joc/start.ini ] && [ -d "${JETTY_BASE}"/resources/joc/start.d ]; then
      echo ".. startini_to_startd [INFO] converting start.ini to start.d ini files"
      for file in "${JETTY_BASE}"/resources/joc/start.d/*.ini; do
        module="$(basename "$file" | cut -d. -f1)"
        echo ".... [INFO] processing module ${module}"
        while read -r line; do
          modulevariablekeyprefix="$(echo "${line}" | cut -d. -f1,2)"
          if [ "${modulevariablekeyprefix}" = "jetty.${module}" ] || [ "${modulevariablekeyprefix}" = "jetty.${module}Context" ]; then
            modulevariablekey="$(echo "${line}" | cut -d= -f1 | sed 's/\s*$//g')"
            echo ".... startini_to_startd [INFO] ${line}"
            sed -i "s;.*${modulevariablekey}\s*=.*;${line};g" "${file}"
          fi
        done < "${JETTY_BASE}"/resources/joc/start.ini
      done
      mv -f "${JETTY_BASE}"/resources/joc/start.ini "${JETTY_BASE}"/resources/joc/start.in~
    fi
  fi
}

add_start_configuration() {
  # overwrite ini files in start.d if available from config folder
  if [ -d "${JETTY_BASE}"/start.d ]; then
    if [ -d "${JETTY_BASE}"/resources/joc/start.d ]; then
      for file in "${JETTY_BASE}"/resources/joc/start.d/*.ini; do
        echo ".. add_start_configuration [INFO] copying ${file} -> ${JETTY_BASE}/start.d/"
        cp -f "$file" "${JETTY_BASE}"/start.d/
      done
    fi
  fi
}

add_jdbc_and_license() {
  # if license folder not empty then copy js7-license.jar to Jetty's class path
  if [ -d "${JETTY_BASE}"/resources/joc/license ]; then
    if [ -f "${JETTY_BASE}"/resources/joc/lib/js7-license.jar ]; then
      echo ".. add_jdbc_and_license [INFO] copying ${JETTY_BASE}/resources/joc/lib/js7-license.jar -> ${JETTY_BASE}/lib/ext/joc/"
      cp -f "${JETTY_BASE}"/resources/joc/lib/js7-license.jar "${JETTY_BASE}"/lib/ext/joc/
    fi
  fi

  # if JDBC driver added then copy to Jetty's class path and move existing JDBC drivers back to avoid conflicts
  if [ -d "${JETTY_BASE}"/resources/joc/lib ]; then
    if [ -n "$(ls "${JETTY_BASE}"/resources/joc/lib/*.jar 2>/dev/null | grep -v "js7-license.jar")" ]; then
      for file in "${JETTY_BASE}"/lib/ext/joc/*.jar; do
        if [ "$(basename "$file")" != "js7-license.jar" ]; then
          echo ".. add_jdbc_and_license [INFO] moving ${file} -> ${JETTY_BASE}/resources/joc/lib/$(basename "$file")~"
          mv -f "$file" "${JETTY_BASE}"/resources/joc/lib/"$(basename "$file")"~
        fi
      done
      for file in "${JETTY_BASE}"/resources/joc/lib/*.jar; do
        echo ".. add_jdbc_and_license [INFO] copying ${file} -> ${JETTY_BASE}/lib/ext/joc/"
        cp -f "$file" "${JETTY_BASE}"/lib/ext/joc/
      done
    fi
  fi
}

add_custom_logo() {
  # if image folder in the configuration directory is not empty then images are copied to the installation directory
  if [ -d "${JETTY_BASE}"/resources/joc/image ]; then
    mkdir -p "${JETTY_BASE}"/webapps/root/ext/images
    echo ".. add_custom_logo [INFO] copying ${JETTY_BASE}/resources/joc/image/* -> ${JETTY_BASE}/webapps/root/ext/images/"
    cp "${JETTY_BASE}"/resources/joc/image/* "${JETTY_BASE}"/webapps/root/ext/images/
  fi
}

patch_api() {
  if [ ! -d "${JETTY_BASE}"/resources/joc/patches ]; then
    echo ".. patch_api [INFO] API patch directory not found: ${JETTY_BASE}/resources/joc/patches"
    return
  fi

  if [ ! -d "${JETTY_BASE}"/webapps/joc/WEB-INF/classes ]; then
    echo ".. patch_api [WARN] JOC Cockpit API sub-directory not found: ${JETTY_BASE}/webapps/joc/WEB-INF/classes"
    return
  fi

  jarfiles=$(ls "${JETTY_BASE}"/resources/joc/patches/js7_joc.*-PATCH.API-*.jar 2>/dev/null)
  if [ -n "${jarfiles}" ]; then
    cd "${JETTY_BASE}"/webapps/joc/WEB-INF/classes > /dev/null || return
    for jarfile in "${JETTY_BASE}"/resources/joc/patches/js7_joc.*-PATCH.API-*.jar; do
      echo ".. patch_api [INFO] extracting ${jarfile} -> ${JETTY_BASE}/webapps/joc/WEB-INF/classes"
      unzip -o "${jarfile}" || return
      # rm -f "${jarfile}" || return
    done
    cd - > /dev/null || return
  else
    echo ".. patch_api [INFO] no API patches available from .jar files in directory: ${JETTY_BASE}/resources/joc/patches"
  fi

  tarballs=$(ls "${JETTY_BASE}"/resources/joc/patches/js7_joc.*-PATCH.API-*.tar.gz 2>/dev/null)
  if [ -n "${tarballs}" ]; then
    if [ "$(echo "${tarball}" | wc -l)" -eq 1 ]; then
      cd "${JETTY_BASE}"/webapps/joc/WEB-INF/classes > /dev/null || return
      for tarfile in "${JETTY_BASE}"/resources/joc/patches/js7_joc.*-PATCH.API-*.tar.gz; do
        echo ".. patch_api [INFO] extracting ${tarfile} -> ${JETTY_BASE}/webapps/joc/WEB-INF/classes"
        tar -xpozf "${tarfile}" || return
        for jarfile in "${JETTY_BASE}"/resources/joc/patches/js7_joc.*-PATCH.API-*.jar; do
          echo ".. patch_api [INFO] extracting ${jarfile} -> ${JETTY_BASE}/webapps/joc/WEB-INF/classes"
          unzip -o "${jarfile}"
          # rm -f "${jarfile}" || return
        done
        # rm -f "${tarfile}" || return
      done
      cd - > /dev/null || return
    else
      echo ".. patch_api [WARN]: more than one tarball found for API patches. Please drop previous patch tarballs and use the latest API patch tarball only as it includes previous patches."
    fi
  else
    echo ".. patch_api [INFO] no API patches available from .tar.gz files in directory: ${JETTY_BASE}/resources/joc/patches"
  fi
}

patch_gui() {
  if [ ! -d "${JETTY_BASE}"/resources/joc/patches ]; then
    echo ".. patch_gui [INFO] GUI patch directory not found: ${JETTY_BASE}/resources/joc/patches"
    return
  fi

  if [ ! -d "${JETTY_BASE}"/webapps/joc ]; then
    echo ".. patch_gui [WARN] JOC Cockpit GUI sub-directory not found: ${JETTY_BASE}/webapps/joc"
    return
  fi

  tarball=$(ls "${JETTY_BASE}"/resources/joc/patches/js7_joc.*-PATCH.GUI-*.tar.gz 2>/dev/null)
  if [ -n "${tarball}" ]; then
    if [ "$(echo "${tarball}" | wc -l)" -eq 1 ]; then
      echo ".. patch_gui [INFO] applying GUI patch tarball: ${tarball}"
      cd "${JETTY_BASE}"/webapps/joc > /dev/null || return
      find "${JETTY_BASE}"/webapps/joc -maxdepth 1 -type f -delete || return
      if [ -d "${JETTY_BASE}"/webapps/joc/assets ]; then
        rm -fr "${JETTY_BASE}"/webapps/joc/assets || return
      fi
      if [ -d "${JETTY_BASE}"/webapps/joc/styles ]; then
        rm -fr "${JETTY_BASE}"/webapps/joc/styles || return
      fi
      tar -xpozf "${tarball}" || return
      cd - > /dev/null || return
    else
      echo ".. patch_gui [WARN]: more than one tarball found for GUI patches. Please drop previous patch tarballs and use the latest GUI patch tarball only as it includes previous patches."
    fi
  else
    echo ".. patch_gui [INFO] no GUI patches available from .tar.gz files in directory: ${JETTY_BASE}/resources/joc/patches"
  fi
}

# create JOC Cockpit start script
echo '#!/bin/sh' > "${JETTY_BASE}"/start-joc.sh
echo 'trap "/opt/sos-berlin.com/js7/joc/jetty/bin/jetty.sh stop; exit" TERM INT' >> "${JETTY_BASE}"/start-joc.sh
echo '/opt/sos-berlin.com/js7/joc/jetty/bin/jetty.sh start && tail -f /dev/null &' >> "${JETTY_BASE}"/start-joc.sh
echo 'wait' >> "${JETTY_BASE}"/start-joc.sh
chmod +x "${JETTY_BASE}"/start-joc.sh

echo "JS7 entrypoint script: updating image"

# update joc.properties file
update_joc_properties

# convert start.ini to start.d ini files
startini_to_startd
add_start_configuration

# copy custom logo
add_custom_logo

if [ -n "${RUN_JS_HTTP_PORT}" ]
then
  if [ -f "${JETTY_BASE}"/start.d/http.in~ ] && [ ! -f "${JETTY_BASE}"/start.d/http.ini ]; then
    # enable http access in start.d directory
    mv "${JETTY_BASE}"/start.d/http.in~ "${JETTY_BASE}"/start.d/http.ini
  fi
  if [ -f "${JETTY_BASE}"/start.d/http.ini ]; then
    # set port for http access in start.d directory
    sed -i "s/.*jetty.http.port\s*=.*/jetty.http.port=$RUN_JS_HTTP_PORT/g" "${JETTY_BASE}"/start.d/http.ini
  fi
else
  if [ -f "${JETTY_BASE}"/start.d/http.ini ]; then
    # disable http access in start.d directory
    mv -f "${JETTY_BASE}"/start.d/http.ini "${JETTY_BASE}"/start.d/http.in~
  fi
fi

if [ -n "${RUN_JS_HTTPS_PORT}" ]
then
  if [ -f "${JETTY_BASE}"/start.d/https.in~ ] && [ ! -f "${JETTY_BASE}"/start.d/https.ini ]; then
    # enable https access in start.d directory
    mv "${JETTY_BASE}"/start.d/https.in~ "${JETTY_BASE}"/start.d/https.ini
  fi
  if [ -f "${JETTY_BASE}"/start.d/ssl.in~ ] && [ ! -f "${JETTY_BASE}"/start.d/ssl.ini ]; then
    # enable https access in start.d directory
    mv "${JETTY_BASE}"/start.d/ssl.in~ "${JETTY_BASE}"/start.d/ssl.ini
  fi
  if [ -f "${JETTY_BASE}"/start.d/ssl.ini ]; then
    # set port for https access in start.d directory
    sed -i "s/.*jetty.ssl.port\s*=.*/jetty.ssl.port=${RUN_JS_HTTPS_PORT}/g" "${JETTY_BASE}"/start.d/ssl.ini
  fi
else
  if [ -f "${JETTY_BASE}"/start.d/https.ini ]; then
    # disable https access in start.d directory
    mv -f "${JETTY_BASE}"/start.d/https.ini "${JETTY_BASE}"/start.d/https.in~
  fi
  if [ -f "${JETTY_BASE}"/start.d/ssl.ini ]; then
    # disable https access in start.d directory
    mv -f "${JETTY_BASE}"/start.d/ssl.ini "${JETTY_BASE}"/start.d/ssl.in~
  fi
fi

if [ -n "${RUN_JS_JAVA_OPTIONS}" ]
then
  export JAVA_OPTIONS="${JAVA_OPTIONS} ${RUN_JS_JAVA_OPTIONS}"
fi

JS_USER_ID=$(echo "${RUN_JS_USER_ID}" | cut -d ':' -f 1)
JS_GROUP_ID=$(echo "${RUN_JS_USER_ID}" | cut -d ':' -f 2)
JS_USER_ID=${JS_USER_ID:-$(id -u)}
JS_GROUP_ID=${JS_GROUP_ID:-$(id -g)}

BUILD_GROUP_ID=$(grep 'jobscheduler' /etc/group | head -1 | cut -d ':' -f 3)
BUILD_USER_ID=$(grep 'jobscheduler' /etc/passwd | head -1 | cut -d ':' -f 3)

add_jdbc_and_license
patch_api
patch_gui

if [ "$(id -u)" = "0" ]
then
  if [ ! "${BUILD_USER_ID}" = "${JS_USER_ID}" ]
  then
    echo "JS7 entrypoint script: switching ownership of image user id '${BUILD_USER_ID}' -> '${JS_USER_ID}'"
    usermod -u "${JS_USER_ID}" jobscheduler
    find /var/sos-berlin.com/ -user "${BUILD_USER_ID}" -exec chown -h jobscheduler {} \;
    find /var/log/sos-berlin.com/ -user "${BUILD_USER_ID}" -exec chown -h jobscheduler {} \;
  fi

  if [ ! "${BUILD_GROUP_ID}" = "${JS_GROUP_ID}" ]
  then
    if grep -q "${JS_GROUP_ID}" /etc/group
    then
      groupmod -g "${JS_GROUP_ID}" jobscheduler
    else
      addgroup -g "${JS_GROUP_ID}" -S jobscheduler
    fi
    echo "JS7 entrypoint script: switching ownership of image group id '${BUILD_GROUP_ID}' -> '${JS_GROUP_ID}'"
    find /var/sos-berlin.com/ -group "${BUILD_GROUP_ID}" -exec chgrp -h jobscheduler {} \;
    find /var/log/sos-berlin.com/ -group "${BUILD_GROUP_ID}" -exec chgrp -h jobscheduler {} \;
  fi

  echo "JS7 entrypoint script: switching to user account 'jobscheduler' to run start script"
  echo "JS7 entrypoint script: starting JOC Cockpit: exec su-exec ${JS_USER_ID}:${JS_GROUP_ID} /opt/sos-berlin.com/js7/joc/jetty/bin/jetty.sh start"
  exec su-exec "${JS_USER_ID}":0 "${JETTY_BASE}"/start-joc.sh
else
  if [ "${BUILD_USER_ID}" = "${JS_USER_ID}" ]
  then
    if [ "$(id -u)" = "${JS_USER_ID}" ]
    then
      echo "JS7 entrypoint script: running for user id '$(id -u)'"
    else
      echo "JS7 entrypoint script: running for user id '$(id -u)' using user id '${JS_USER_ID}', group id '${JS_GROUP_ID}'"
      echo "JS7 entrypoint script: missing permission to switch user id and group id, consider to omit the 'docker run --user' option"
    fi
  else
    echo "JS7 entrypoint script: running for user id '$(id -u)', image user id '${BUILD_USER_ID}' -> '${JS_USER_ID}', image group id '${BUILD_GROUP_ID}' -> '${JS_GROUP_ID}'"
  fi

  echo "JS7 entrypoint script: starting JOC Cockpit: exec sh -c /opt/sos-berlin.com/js7/joc/jetty/bin/jetty.sh start"
  exec sh -c "${JETTY_BASE}/start-joc.sh"
fi
Explanation:
- Note that the entrypoint script runs the JOC Cockpit start script using exec sh -c. This is required to run JOC Cockpit inside the current process, which is assigned PID 1. A later docker stop <container> command sends a SIGTERM signal to the process with PID 1 only. If JOC Cockpit were started as a shell script without use of exec, a new process with a different PID would be created. The docker stop command would then not terminate JOC Cockpit normally, but would abort it when killing the container. This can cause delays for fail-over between clustered JOC Cockpit containers.
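As a sketch of how the add_jdbc_and_license() function can be used: a JDBC Driver .jar file placed in the resources/joc/lib folder of a running container is copied to Jetty's class path on the next start of the container. The container name and driver file name below are examples only:

# copy a JDBC Driver to the container's configuration folder (file name is an example)
docker cp ./postgresql.jar js7-joc-primary:/var/sos-berlin.com/js7/joc/resources/joc/lib/
# restart the container so that the entrypoint script copies the driver to Jetty's class path
docker restart js7-joc-primary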
Build Script
The build script offers a number of options to parameterize the Dockerfile:
#!/bin/sh

set -e

SCRIPT_HOME=$(dirname "$0")
SCRIPT_HOME="`cd "${SCRIPT_HOME}" >/dev/null && pwd`"
SCRIPT_FOLDER="`basename $(dirname "$SCRIPT_HOME")`"

# ----- modify default settings -----

JS_RELEASE="2.15.0"
JS_REPOSITORY="sosberlin/js7"
JS_IMAGE="$(basename "${SCRIPT_HOME}")-${JS_RELEASE//\./-}"
JS_USER_ID="$UID"
JS_HTTP_PORT="4446"
JS_HTTPS_PORT=
JS_JAVA_OPTIONS="-Xmx128m"
JS_BUILD_ARGS=

# ----- modify default settings -----

for option in "$@"
do
  case "$option" in
    --release=*)      JS_RELEASE=`echo "$option" | sed 's/--release=//'`
                      ;;
    --repository=*)   JS_REPOSITORY=`echo "$option" | sed 's/--repository=//'`
                      ;;
    --image=*)        JS_IMAGE=`echo "$option" | sed 's/--image=//'`
                      ;;
    --user-id=*)      JS_USER_ID=`echo "$option" | sed 's/--user-id=//'`
                      ;;
    --http-port=*)    JS_HTTP_PORT=`echo "$option" | sed 's/--http-port=//'`
                      ;;
    --https-port=*)   JS_HTTPS_PORT=`echo "$option" | sed 's/--https-port=//'`
                      ;;
    --java-options=*) JS_JAVA_OPTIONS=`echo "$option" | sed 's/--java-options=//'`
                      ;;
    --build-args=*)   JS_BUILD_ARGS=`echo "$option" | sed 's/--build-args=//'`
                      ;;
    *)                echo "unknown argument: $option"
                      exit 1
                      ;;
  esac
done

set -x

docker build --no-cache --rm \
      --tag=$JS_REPOSITORY:$JS_IMAGE \
      --file=$SCRIPT_HOME/build/Dockerfile \
      --build-arg="JS_RELEASE=$JS_RELEASE" \
      --build-arg="JS_RELEASE_MAJOR=$(echo $JS_RELEASE | cut -d . -f 1,2)" \
      --build-arg="JS_USER_ID=$JS_USER_ID" \
      --build-arg="JS_HTTP_PORT=$JS_HTTP_PORT" \
      --build-arg="JS_HTTPS_PORT=$JS_HTTPS_PORT" \
      --build-arg="JS_JAVA_OPTIONS=$JS_JAVA_OPTIONS" \
      $JS_BUILD_ARGS $SCRIPT_HOME/build

set +x
- Line 12 - 22: Default values are specified that are used if no command line arguments are provided. This includes values for:
- the release number: adjust this value to a current release of JS7.
- the repository, which by default is sosberlin/js7.
- the image name, which is determined from the current folder name and the release number.
- the user id is by default the id of the user running the build script.
- the HTTP port and HTTPS port: if the relevant port is not specified then the JOC Cockpit will not listen to that port for the protocol in question. You can, for example, disable the HTTP protocol by specifying an empty value. The default ports should be fine as they are mapped by the run script to outside ports on the Docker container's host. However, you can modify ports as you like.
- Java options: typically you would specify default values for, for example, Java memory consumption. The Java options can be overwritten by the run script when starting the container. However, you might want to create your own image with adjusted default values.
- Line 27 - 50: The above options can be overwritten by command line arguments like this:
./build.sh --http-port=14445 --https-port=14443 --java-options="-Xmx1G"
- Line 54 - 63: The effective docker build command is executed with arguments. The Dockerfile is assumed to be located in the build sub-directory of the current directory.
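As a further example, options can be combined on the command line; the values shown are illustrative only, and the --build-args option forwards additional arguments to the underlying docker build command:

./build.sh --release=2.15.0 --repository=myregistry/js7 --user-id=2001 --java-options="-Xmx256m"
./build.sh --release=2.15.0 --build-args="--network=host"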