Table of Contents

Introduction

  • This article describes the build process for official JOC Cockpit images.
  • Users can build their own Docker container images for JOC Cockpit. This article explains options for creating the JOC Cockpit image and adjusting it to their needs.

Build Environment

The following directory hierarchy is assumed for the build environment:

  • joc

    The root directory joc can have any name. The build files listed below are available for download. Note that the build script described below will, by default, use the directory name and release number to determine the resulting image name.
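    A typical layout of the build directory can be derived from the COPY statements in the Dockerfile below; the release number 2.7.1 and the individual configuration file names shown here are examples only:

    ```text
    joc/                                  build root directory (any name)
    ├── Dockerfile
    ├── js7_joc_linux.2.7.1.tar.gz        JOC Cockpit installer tarball (release number is an example)
    ├── js7_install_joc.sh                JOC Cockpit Installer Script
    ├── entrypoint.sh                     entrypoint script, see below
    └── config/                           configuration files copied to the image
        ├── hibernate.cfg.xml
        ├── https-keystore.p12
        └── https-truststore.p12
    ```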

    Dockerfile

    Download: Dockerfile

    Docker Container images for JS7 JOC Cockpit provided by SOS make use of the following Dockerfile:

    Code Block
    languagebash
    titleDockerfile for JOC Cockpit Image
    linenumberstrue
    collapsetrue
    # BUILD PRE-IMAGE

    FROM alpine:3.20 AS js7-pre-image

    # provide build arguments for release information
    ARG JS_RELEASE
    ARG JS_RELEASE_MAJOR

    # image user id has to match later run-time user id
    ARG JS_USER_ID=${JS_USER_ID:-1001}
    ARG JS_HTTP_PORT=${JS_HTTP_PORT:-4446}
    ARG JS_HTTPS_PORT=${JS_HTTPS_PORT:-4443}
    ARG JS_JAVA_OPTIONS=${JS_JAVA_OPTIONS}

    # add/copy installation tarball
    # ADD https://download.sos-berlin.com/JobScheduler.${JS_RELEASE_MAJOR}/js7_joc_linux.${JS_RELEASE}.tar.gz /usr/local/src/
    COPY js7_joc_linux.${JS_RELEASE}.tar.gz /usr/local/src/

    # test installer tarball
    RUN test -e /usr/local/src/js7_joc_linux.${JS_RELEASE}.tar.gz

    # add/copy installer script
    # ADD https://download.sos-berlin.com/JobScheduler.${JS_RELEASE_MAJOR}/js7_install_joc.sh /usr/local/bin/
    COPY js7_install_joc.sh /usr/local/bin/

    # copy configuration
    COPY config/ /usr/local/src/resources

    # install Java and JOC Cockpit
    # create user account jobscheduler using root group
    RUN apk upgrade --available && apk add --no-cache \
        openjdk17-jre && \
        adduser -u ${JS_USER_ID} -G root --disabled-password --home /home/jobscheduler --shell /bin/bash jobscheduler && \
        chmod +x /usr/local/bin/js7_install_joc.sh && \
        /usr/local/bin/js7_install_joc.sh \
            --home=/opt/sos-berlin.com/js7/joc \
            --data=/var/sos-berlin.com/js7/joc \
            --setup-dir=/usr/local/src/joc.setup \
            --tarball=/usr/local/src/js7_joc_linux.${JS_RELEASE}.tar.gz \
            --http-port=${JS_HTTP_PORT} \
            --https-port=${JS_HTTPS_PORT} \
            --dbms-init=off \
            --dbms-config=/usr/local/src/resources/hibernate.cfg.xml \
            --keystore=/usr/local/src/resources/https-keystore.p12 \
            --keystore-password=jobscheduler \
            --truststore=/usr/local/src/resources/https-truststore.p12 \
            --truststore-password=jobscheduler \
            --user=jobscheduler \
            --title="JOC Cockpit" \
            --as-user \
            --java-options="${JS_JAVA_OPTIONS}" \
            --make-dirs && \
        rm -f /usr/local/src/js7_joc_linux.${JS_RELEASE}.tar.gz

    # BUILD IMAGE

    FROM alpine:3.20 AS js7-image

    LABEL maintainer="Software- und Organisations-Service GmbH"

    # provide build arguments for release information
    ARG JS_RELEASE
    ARG JS_RELEASE_MAJOR

    # image user id has to match later run-time user id
    ARG JS_USER_ID=${JS_USER_ID:-1001}
    ARG JS_HTTP_PORT=${JS_HTTP_PORT:-4446}
    ARG JS_HTTPS_PORT=${JS_HTTPS_PORT:-4443}
    ARG JS_JAVA_OPTIONS=${JS_JAVA_OPTIONS}

    # JS7 user id, ports and Java options
    ENV RUN_JS_USER_ID=${RUN_JS_USER_ID:-1001}
    ENV RUN_JS_HTTP_PORT=${RUN_JS_HTTP_PORT:-$JS_HTTP_PORT}
    ENV RUN_JS_HTTPS_PORT=${RUN_JS_HTTPS_PORT:-$JS_HTTPS_PORT}
    ENV RUN_JS_JAVA_OPTIONS=${RUN_JS_JAVA_OPTIONS:-$JS_JAVA_OPTIONS}

    COPY --from=js7-pre-image ["/opt/sos-berlin.com/js7", "/opt/sos-berlin.com/js7"]
    COPY --from=js7-pre-image ["/var/sos-berlin.com/js7", "/var/sos-berlin.com/js7"]

    # copy entrypoint script
    COPY entrypoint.sh /usr/local/bin/

    # install process tools, net tools, bash, openjdk
    # for JDK < 12, /dev/random does not provide sufficient entropy, see https://kb.sos-berlin.com/x/lIM3
    RUN apk upgrade --available && apk add \
        --no-cache \
        --repository=http://dl-cdn.alpinelinux.org/alpine/edge/main \
        procps \
        net-tools \
        bash \
        su-exec \
        shadow \
        git \
        openjdk17-jre && \
        sed -i 's/securerandom.source=file:\/dev\/random/securerandom.source=file:\/dev\/urandom/g' /usr/lib/jvm/java-17-openjdk/conf/security/java.security && \
        sed -i 's/jdk.tls.disabledAlgorithms=SSLv3, RC4, DES, MD5withRSA, DH keySize < 1024, \\/jdk.tls.disabledAlgorithms=SSLv3, RC4, DES, MD5withRSA, DH keySize < 1024, TLSv1, TLSv1.1, \\/g' /usr/lib/jvm/java-17-openjdk/conf/security/java.security && \
        adduser -u ${JS_USER_ID} -G root --disabled-password --home /home/jobscheduler --shell /bin/bash jobscheduler && \
        mkdir -p /var/log/sos-berlin.com/js7/joc && \
        chown -R jobscheduler:root /opt/sos-berlin.com /var/sos-berlin.com /var/log/sos-berlin.com/js7/joc && \
        chmod -R g=u /etc/passwd /opt/sos-berlin.com /var/sos-berlin.com /var/log/sos-berlin.com/js7/joc && \
        chmod +x /usr/local/bin/entrypoint.sh

    # CONFIGURATION

    # copy configuration
    # COPY --chown=jobscheduler:jobscheduler config/ /var/sos-berlin.com/js7/joc/resources/joc/

    # CODA

    # run-time user, can be overwritten when running the container
    USER jobscheduler

    # START

    ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
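    With these build files in place the image can be built by forwarding the release identification with build arguments. The following is a minimal sketch; the image tag and release numbers are examples, not fixed values:

    ```bash
    # build the JOC Cockpit image; tag and release identification are examples
    docker build \
        --tag=joc:2.7.1 \
        --build-arg="JS_RELEASE=2.7.1" \
        --build-arg="JS_RELEASE_MAJOR=2.7" \
        --build-arg="JS_USER_ID=$(id -u)" \
        .
    ```

    Passing `JS_USER_ID=$(id -u)` makes the image user id match the id of the account that later starts the container, in line with the explanation below.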

    Explanation:

    • The Dockerfile implements two stages to exclude installer files from the resulting image.
    • Line 3: The base image is the current Alpine image at build-time.
    • Line 6 - 7: The release identification is injected by build arguments. This information is used to determine the tarball to be downloaded or copied.
    • Line 10 - 13: Defaults for the user id running the JOC Cockpit inside the container as well as for the HTTP and HTTPS ports are provided. These values can be overwritten by providing the relevant build arguments.
    • Line 16 - 17: Users can either download the JOC Cockpit tarball directly from the SOS web site or store the tarball with the build directory and copy it from this location.
    • Line 20: The tarball integrity is tested.
    • Line 23 - 24: The JOC Cockpit Installer Script is downloaded or copied, see JS7 - JOC Cockpit - Unix Shell Installation Script - js7_install_joc.sh. In fact a JOC Cockpit installation is performed when building the image.
    • Line 27: The config folder available in the build directory is copied to the appropriate config folder in the image. This can be useful for creating an image with individual settings in configuration files, see the JS7 - JOC Cockpit Configuration Items article for more information.
      • The hibernate.cfg.xml file specifies the database connection. This file is not used at build-time. However, it is provided as a sample for run-time configuration. You will find details in the JS7 - Database article.
      • The default https-keystore.p12 and https-truststore.p12 files are copied that hold the private key and certificate required for server authentication with HTTPS. By default empty keystore and truststore files are used that users would add their private keys and certificates to at run-time.
    • Line 32: A recent Java release is added to the pre-image.
    • Line 33: The jobscheduler account is created.
    • Line 35 - 52: The JOC Cockpit Installer Script is executed with arguments performing a headless installation for the jobscheduler account. For use of arguments see JS7 - JOC Cockpit Installation On Premises.
    • Line 72 - 75: Environment variables are provided at run-time, not at build-time. They can be used to specify ports and Java options when running the container.
    • Line 81: The entrypoint.sh script is copied from the build directory to the image, see the next chapter.
    • Line 87 - 93: The image OS is updated and additional packages are installed (ps, netstat, bash, git).
    • Line 94: A recent Java LTS package available with Alpine is applied. JOC Cockpit can be operated with newer Java releases. However, stick to Oracle, OpenJDK or AdoptOpenJDK as the source for your Java LTS release. Alternatively you can use your own base image and install Java on top of this. For details see Which Java versions is JobScheduler available for?
    • Line 95: Java releases might make use of /dev/random for random number generation. This is a bottleneck as random number generation with this file is blocking. Instead /dev/urandom should be used which implements non-blocking behavior. The change of the random source is applied to the Java security file.
    • Line 96: Users might want to disable certain TLS protocol versions or algorithms by applying changes to the Java security file.
    • Line 97 - 100: The jobscheduler account is created and is assigned the user id handed over by the relevant build argument. This suggests that the account running the JOC Cockpit inside the container and the account that starts the container are assigned the same user id. This allows the account running the container to access any files created by the JOC Cockpit in mounted volumes with identical permissions.
      • Consider that the account is assigned the root group. For environments in which the entrypoint script is executed with an arbitrary non-root user id this allows access to files created by the JOC Cockpit for any accounts that are assigned the root group.
      • Accordingly any files owned by the jobscheduler account are made accessible to the root group with similar user permissions. Read access to /etc/passwd can be required in such environments.
      • For details see JS7 - Running Containers for User Accounts.
    • Line 115: The entrypoint script is executed and is dynamically parameterized from environment variables which are forwarded when starting the container.
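    The environment variables described above are forwarded when starting a container from the image, for example (the image tag, container name and port mapping are illustrative):

    ```bash
    # start JOC Cockpit; environment variables parameterize the entrypoint script
    docker run -d --name=js7-joc \
        --publish=4446:4446 \
        --env=RUN_JS_HTTP_PORT=4446 \
        --env=RUN_JS_JAVA_OPTIONS="-Xmx256m" \
        joc:2.7.1
    ```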

    Entrypoint Script

    Download: entrypoint.sh

    The following entrypoint script is used to start JOC Cockpit containers.

    Code Block
    languagebash
    titleJOC Cockpit Entrypoint Script
    linenumberstrue
    collapsetrue
    #!/bin/bash
    
    JETTY_BASE="/var/sos-berlin.com/js7/joc"
    
    update_joc_properties() {
      # update joc.properties file: ${JETTY_BASE}/resources/joc/joc.properties
      rc=$(grep -E '^cluster_id' "${JETTY_BASE}"/resources/joc/joc.properties)
      if [ -z "${rc}" ]
      then
        echo ".. update_joc_properties [INFO] updating cluster_id in ${JETTY_BASE}/resources/joc/joc.properties"
        printf "cluster_id = joc\n" >> "${JETTY_BASE}"/resources/joc/joc.properties
      fi
    
      rc=$(grep -E '^ordering' "${JETTY_BASE}"/resources/joc/joc.properties)
      if [ -z "${rc}" ]
      then
        echo ".. update_joc_properties [INFO] updating ordering in ${JETTY_BASE}/resources/joc/joc.properties"
    printf "ordering=%s\n" "$(shuf -i 0-99 -n 1)" >> "${JETTY_BASE}"/resources/joc/joc.properties
      fi
    }
    
    startini_to_startd() {
      # convert once ${JETTY_BASE}/resources/joc/start.ini to ${JETTY_BASE}/resources/joc/start.d
      if [ -d "${JETTY_BASE}"/start.d ]; then
        if [ -f "${JETTY_BASE}"/resources/joc/start.ini ] && [ -d "${JETTY_BASE}"/resources/joc/start.d ]; then
          echo ".. startini_to_startd [INFO] converting start.ini to start.d ini files"
          for file in "${JETTY_BASE}"/resources/joc/start.d/*.ini; do
            module="$(basename "$file" | cut -d. -f1)"
            echo ".... [INFO] processing module ${module}"
            while read -r line; do
              modulevariablekeyprefix="$(echo "${line}" | cut -d. -f1,2)"
              if [ "${modulevariablekeyprefix}" = "jetty.${module}" ] || [ "${modulevariablekeyprefix}" = "jetty.${module}Context" ]; then
                modulevariablekey="$(echo "${line}" | cut -d= -f1 | sed 's/\s*$//g')"
                echo "....  startini_to_startd [INFO] ${line}"
                sed -i "s;.*${modulevariablekey}\s*=.*;${line};g" "${file}"
              fi
            done < "${JETTY_BASE}"/resources/joc/start.ini
          done
          mv -f "${JETTY_BASE}"/resources/joc/start.ini "${JETTY_BASE}"/resources/joc/start.in~
        fi
      fi
    }
    
    add_start_configuration() {
      # overwrite ini files in start.d if available from config folder
      if [ -d "${JETTY_BASE}"/start.d ]; then
        if [ -d "${JETTY_BASE}"/resources/joc/start.d ]; then
          for file in "${JETTY_BASE}"/resources/joc/start.d/*.ini; do
            echo ".. add_start_configuration [INFO] copying ${file} -> ${JETTY_BASE}/start.d/"
            cp -f "$file" "${JETTY_BASE}"/start.d/
          done
        fi
      fi
    }
    
    add_jdbc_and_license() {
      # if license folder not empty then copy js7-license.jar to Jetty's class path
      if [ -d "${JETTY_BASE}"/resources/joc/license ]; then
        if [ -f "${JETTY_BASE}"/resources/joc/lib/js7-license.jar ]; then
          echo ".. add_jdbc_and_license [INFO] copying ${JETTY_BASE}/resources/joc/lib/js7-license.jar -> ${JETTY_BASE}/lib/ext/joc/"
          cp -f "${JETTY_BASE}"/resources/joc/lib/js7-license.jar "${JETTY_BASE}"/lib/ext/joc/
        fi
      fi

      # if JDBC driver added then copy to Jetty's class path and move existing JDBC drivers back to avoid conflicts
      if [ -d "${JETTY_BASE}"/resources/joc/lib ]; then
        if [ -n "$(ls "${JETTY_BASE}"/resources/joc/lib/*.jar 2>/dev/null | grep -v "js7-license.jar")" ]; then
          for file in "${JETTY_BASE}"/lib/ext/joc/*.jar; do
            if [ "$(basename "$file")" != "js7-license.jar" ]; then
              echo ".. add_jdbc_and_license [INFO] moving ${file} -> ${JETTY_BASE}/resources/joc/lib/$(basename "$file")~"
              mv -f "$file" "${JETTY_BASE}"/resources/joc/lib/"$(basename "$file")"~
            fi
          done

          for file in "${JETTY_BASE}"/resources/joc/lib/*.jar; do
            echo ".. add_jdbc_and_license [INFO] copying ${file} -> ${JETTY_BASE}/lib/ext/joc/"
            cp -f "$file" "${JETTY_BASE}"/lib/ext/joc/
          done
        fi
      fi
    }
    
    add_custom_logo() {
      # if image folder in the configuration directory is not empty then images are copied to the installation directory
      if [ -d "${JETTY_BASE}"/resources/joc/image ]; then
        mkdir -p "${JETTY_BASE}"/webapps/root/ext/images
        echo ".. add_custom_logo [INFO] copying ${JETTY_BASE}/resources/joc/image/* -> ${JETTY_BASE}/webapps/root/ext/images/"
        cp "${JETTY_BASE}"/resources/joc/image/* "${JETTY_BASE}"/webapps/root/ext/images/
      fi
    }
    
    patch_api() {
      if [ ! -d "${JETTY_BASE}"/resources/joc/patches ]; then
        echo ".. patch_api [INFO] API patch directory not found: ${JETTY_BASE}/resources/joc/patches"
        return
      fi
    
      if [ ! -d "${JETTY_BASE}"/webapps/joc/WEB-INF/classes ]; then
        echo ".. patch_api [WARN] JOC Cockpit API sub-directory not found: ${JETTY_BASE}/webapps/joc/WEB-INF/classes" 
        return
      fi
    
      jarfiles=$(ls "${JETTY_BASE}"/resources/joc/patches/js7_joc.*-PATCH.API-*.jar 2>/dev/null)
      if [ -n "${jarfiles}" ]; then
        cd "${JETTY_BASE}"/webapps/joc/WEB-INF/classes > /dev/null || return
        for jarfile in "${JETTY_BASE}"/resources/joc/patches/js7_joc.*-PATCH.API-*.jar; do
          echo ".. patch_api [INFO] extracting ${jarfile} -> ${JETTY_BASE}/webapps/joc/WEB-INF/classes"
          unzip -o "${jarfile}" || return
          # rm -f "${jarfile}" || return
        done
        cd - > /dev/null || return
      else
        echo ".. patch_api [INFO] no API patches available from .jar files in directory: ${JETTY_BASE}/resources/joc/patches"
      fi
    
      tarballs=$(ls "${JETTY_BASE}"/resources/joc/patches/js7_joc.*-PATCH.API-*.tar.gz 2>/dev/null)
      if [ -n "${tarballs}" ]; then
        if [ "$(echo "${tarballs}" | wc -l)" -eq 1 ]; then
          cd "${JETTY_BASE}"/webapps/joc/WEB-INF/classes > /dev/null || return
          for tarfile in "${JETTY_BASE}"/resources/joc/patches/js7_joc.*-PATCH.API-*.tar.gz; do
            echo ".. patch_api [INFO] extracting ${tarfile} -> ${JETTY_BASE}/webapps/joc/WEB-INF/classes"
            tar -xpozf "${tarfile}" || return
            # rm -f "${tarfile}" || return

            for jarfile in "${JETTY_BASE}"/resources/joc/patches/js7_joc.*-PATCH.API-*.jar; do
              echo ".. patch_api [INFO] extracting ${jarfile} -> ${JETTY_BASE}/webapps/joc/WEB-INF/classes"
              unzip -o "${jarfile}" || return
              # rm -f "${jarfile}" || return
            done
          done
          cd - > /dev/null || return
        else
          echo ".. patch_api [WARN]: more than one tarball found for API patches. Please drop previous patch tarballs and use the latest API patch tarball only as it includes previous patches."
        fi
      else
        echo ".. patch_api [INFO] no API patches available from .tar.gz files in directory: ${JETTY_BASE}/resources/joc/patches"
      fi
    }
    
    patch_gui() {
      if [ ! -d "${JETTY_BASE}"/resources/joc/patches ]; then
        echo ".. patch_gui [INFO] GUI patch directory not found: ${JETTY_BASE}/resources/joc/patches"
        return
      fi

      if [ ! -d "${JETTY_BASE}"/webapps/joc ]; then
        echo ".. patch_gui [WARN] JOC Cockpit GUI sub-directory not found: ${JETTY_BASE}/webapps/joc"
        return
      fi

      tarball=$(ls "${JETTY_BASE}"/resources/joc/patches/js7_joc.*-PATCH.GUI-*.tar.gz 2>/dev/null)
      if [ -n "${tarball}" ]; then
        if [ "$(echo "${tarball}" | wc -l)" -eq 1 ]; then
          echo ".. patch_gui [INFO] applying GUI patch tarball: ${tarball}"
          cd "${JETTY_BASE}"/webapps/joc > /dev/null || return
          find "${JETTY_BASE}"/webapps/joc -maxdepth 1 -type f -delete || return

          if [ -d "${JETTY_BASE}"/webapps/joc/assets ]; then
            rm -fr "${JETTY_BASE}"/webapps/joc/assets || return
          fi

          if [ -d "${JETTY_BASE}"/webapps/joc/styles ]; then
            rm -fr "${JETTY_BASE}"/webapps/joc/styles || return
          fi

          tar -xpozf "${tarball}" || return
          cd - > /dev/null || return
        else
          echo ".. patch_gui [WARN]: more than one tarball found for GUI patches. Please drop previous patch tarballs and use the latest GUI patch tarball only as it includes previous patches."
        fi
      else
        echo ".. patch_gui [INFO] no GUI patches available from .tar.gz files in directory: ${JETTY_BASE}/resources/joc/patches"
      fi
    }
    
    
    # create JOC Cockpit start script
    echo '#!/bin/sh' > "${JETTY_BASE}"/start-joc.sh
    echo 'trap "/opt/sos-berlin.com/js7/joc/jetty/bin/jetty.sh stop; exit" TERM INT' >> "${JETTY_BASE}"/start-joc.sh
    echo '/opt/sos-berlin.com/js7/joc/jetty/bin/jetty.sh start && tail -f /dev/null &'   >> "${JETTY_BASE}"/start-joc.sh
    echo 'wait' >> "${JETTY_BASE}"/start-joc.sh
    chmod +x "${JETTY_BASE}"/start-joc.sh
    
    echo "JS7 entrypoint script: updating image"
    
    # update joc.properties file
    update_joc_properties
    
    # convert start.ini to start.d ini files
    startini_to_startd
    
    # overwrite start.d ini files from config folder
    add_start_configuration
    
    # copy custom logo
    add_custom_logo
    
    
    if [ -n "${RUN_JS_HTTP_PORT}" ]
    then
      if [ -f "${JETTY_BASE}"/start.d/http.in~ ] && [ ! -f "${JETTY_BASE}"/start.d/http.ini ]; then
        # enable http access in start.d directory
        mv "${JETTY_BASE}"/start.d/http.in~ "${JETTY_BASE}"/start.d/http.ini
      fi
      if [ -f "${JETTY_BASE}"/start.d/http.ini ]; then
        # set port for http access in start.d directory
        sed -i "s/.*jetty.http.port\s*=.*/jetty.http.port=$RUN_JS_HTTP_PORT/g" "${JETTY_BASE}"/start.d/http.ini
      fi
    else
      if [ -f "${JETTY_BASE}"/start.d/http.ini ]; then
        # disable http access in start.d directory
        mv -f "${JETTY_BASE}"/start.d/http.ini "${JETTY_BASE}"/start.d/http.in~
      fi
    fi
    
    if [ -n "${RUN_JS_HTTPS_PORT}" ]
    then
      if [ -f "${JETTY_BASE}"/start.d/https.in~ ] && [ ! -f "${JETTY_BASE}"/start.d/https.ini ]; then
        # enable https access in start.d directory
        mv "${JETTY_BASE}"/start.d/https.in~ "${JETTY_BASE}"/start.d/https.ini
      fi
      if [ -f "${JETTY_BASE}"/start.d/ssl.in~ ] && [ ! -f "${JETTY_BASE}"/start.d/ssl.ini ]; then
        # enable https access in start.d directory
        mv "${JETTY_BASE}"/start.d/ssl.in~ "${JETTY_BASE}"/start.d/ssl.ini
      fi
      if [ -f "${JETTY_BASE}"/start.d/ssl.ini ]; then
        # set port for https access in start.d directory
        sed -i "s/.*jetty.ssl.port\s*=.*/jetty.ssl.port=${RUN_JS_HTTPS_PORT}/g" "${JETTY_BASE}"/start.d/ssl.ini
      fi
    else
      if [ -f "${JETTY_BASE}"/start.d/https.ini ]; then
        # disable https access in start.d directory
        mv -f "${JETTY_BASE}"/start.d/https.ini "${JETTY_BASE}"/start.d/https.in~
      fi
      if [ -f "${JETTY_BASE}"/start.d/ssl.ini ]; then
        # disable https access in start.d directory
        mv -f "${JETTY_BASE}"/start.d/ssl.ini "${JETTY_BASE}"/start.d/ssl.in~
      fi
    fi
    
    if [ -n "${RUN_JS_JAVA_OPTIONS}" ]
    then
      export JAVA_OPTIONS="${JAVA_OPTIONS} ${RUN_JS_JAVA_OPTIONS}"
    fi
    
    JS_USER_ID=$(echo "${RUN_JS_USER_ID}" | cut -d ':' -f 1)
    JS_GROUP_ID=$(echo "${RUN_JS_USER_ID}" | cut -d ':' -f 2)
    
    JS_USER_ID=${JS_USER_ID:-$(id -u)}
    JS_GROUP_ID=${JS_GROUP_ID:-$(id -g)}
    
    BUILD_GROUP_ID=$(grep 'jobscheduler' /etc/group | head -1 | cut -d ':' -f 3)
    BUILD_USER_ID=$(grep 'jobscheduler' /etc/passwd | head -1 | cut -d ':' -f 3)
    
    add_jdbc_and_license
    patch_api
    patch_gui
    
    if [ "$(id -u)" = "0" ]
    then
      if [ ! "${BUILD_USER_ID}" = "${JS_USER_ID}" ]
      then
        echo "JS7 entrypoint script: switching ownership of image user id '${BUILD_USER_ID}' -> '${JS_USER_ID}'"
        usermod -u "${JS_USER_ID}" jobscheduler
        find /var/sos-berlin.com/ -user "${BUILD_USER_ID}" -exec chown -h jobscheduler {} \;
        find /var/log/sos-berlin.com/ -user "${BUILD_USER_ID}" -exec chown -h jobscheduler {} \;
      fi
    
      if [ ! "${BUILD_GROUP_ID}" = "${JS_GROUP_ID}" ]
      then
        if grep -q "${JS_GROUP_ID}" /etc/group
        then
          groupmod -g "${JS_GROUP_ID}" jobscheduler
        else
          addgroup -g "${JS_GROUP_ID}" -S jobscheduler
        fi
    
        echo "JS7 entrypoint script: switching ownership of image group id '${BUILD_GROUP_ID}' -> '${JS_GROUP_ID}'"
        find /var/sos-berlin.com/ -group "${BUILD_GROUP_ID}" -exec chgrp -h jobscheduler {} \;
        find /var/log/sos-berlin.com/ -group "${BUILD_GROUP_ID}" -exec chgrp -h jobscheduler {} \;
      fi
    
      echo "JS7 entrypoint script: switching to user account 'jobscheduler' to run start script"
      echo "JS7 entrypoint script: starting JOC Cockpit: exec su-exec ${JS_USER_ID}:0 ${JETTY_BASE}/start-joc.sh"
      exec su-exec "${JS_USER_ID}":0 "${JETTY_BASE}"/start-joc.sh
    else
      if [ "${BUILD_USER_ID}" = "${JS_USER_ID}" ]
      then
        if [ "$(id -u)" = "${JS_USER_ID}" ]
        then
          echo "JS7 entrypoint script: running for user id '$(id -u)'"
        else
          echo "JS7 entrypoint script: running for user id '$(id -u)' using user id '${JS_USER_ID}', group id '${JS_GROUP_ID}'"
          echo "JS7 entrypoint script: missing permission to switch user id and group id, consider omitting the 'docker run --user' option"
        fi
      else
        echo "JS7 entrypoint script: running for user id '$(id -u)', image user id '${BUILD_USER_ID}' -> '${JS_USER_ID}', image group id '${BUILD_GROUP_ID}' -> '${JS_GROUP_ID}'"
      fi
    
      echo "JS7 entrypoint script: starting JOC Cockpit: exec sh -c ${JETTY_BASE}/start-joc.sh"
      exec sh -c "${JETTY_BASE}/start-joc.sh"
    fi


    Explanation:

    • Note that the entrypoint script runs the JOC Cockpit start script using exec sh -c. This is required to run JOC Cockpit inside the current process, which is assigned PID 1. A later docker stop <container> command sends a SIGTERM signal to the process with PID 1 only. If JOC Cockpit were started as a shell script without use of exec, a new process with a different PID would be created. The docker stop command would then not terminate JOC Cockpit normally, but would abort it when killing the container. This can cause delays for fail-over between clustered JOC Cockpit containers.
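      The effect of exec can be illustrated with a small, self-contained sketch: the PID printed inside the exec'ed command equals the PID of the shell that called exec, i.e. the process image is replaced rather than a child process being forked:

```shell
# Sketch: 'exec' replaces the current shell process instead of forking
# a child, so the PID stays the same. This is why the entrypoint script
# uses 'exec' -- JOC Cockpit keeps running under PID 1 and receives the
# SIGTERM signal sent by 'docker stop'.
pid_pair=$(sh -c 'printf "%s " "$$"; exec sh -c "echo \$\$"')
set -- $pid_pair
if [ "$1" = "$2" ]; then
  echo "exec kept PID $1"
fi
```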
    • The jTDS JDBC Driver for MS SQL Server is provided with the image. An Oracle ojdbc6 JDBC Driver is also provided.
    • Line 38: The start-joc.sh script is copied from the build directory to the image. Users can apply their own version of the start script. The start script used by SOS looks like this:

      Code Block
      languagebash
      titleJOC Cockpit Start Script
      linenumberstrue
      collapsetrue
      #!/bin/sh
      
      js_http_port=""
      js_https_port=""
      js_java_options=""
      
      for option in "$@"
      do
        case "$option" in
               --http-port=*)    js_http_port=`echo "$option" | sed 's/--http-port=//'`
                                 ;;
               --https-port=*)   js_https_port=`echo "$option" | sed 's/--https-port=//'`
                                 ;;
               --java-options=*) js_java_options=`echo "$option" | sed 's/--java-options=//'`
                                 ;;
               *)                echo "unknown argument: $option"
                                 exit 1
                                 ;;
        esac
      done
      
      
      js_args=""
      
      if [ ! "$js_http_port" = "" ]
      then
        # enable http access
        sed -i "s/.*--module=http$/--module=http/g" /var/sos-berlin.com/js7/joc/start.ini
        # set port for http access
        sed -i "s/.*jetty.http.port\s*=.*/jetty.http.port=$js_http_port/g" /var/sos-berlin.com/js7/joc/start.ini
      else
        # disable http access
        sed -i "s/\s*--module=http$/# --module=http/g" /var/sos-berlin.com/js7/joc/start.ini
      fi
      
      if [ ! "$js_https_port" = "" ]
      then
        # enable https access
        sed -i "s/.*--module=https$/--module=https/g" /var/sos-berlin.com/js7/joc/start.ini
        # set port for https access
        sed -i "s/\s*jetty.ssl.port\s*=.*/jetty.ssl.port=$js_https_port/g" /var/sos-berlin.com/js7/joc/start.ini
      else
        # disable https access
        sed -i "s/\s*--module=https$/# --module=https/g" /var/sos-berlin.com/js7/joc/start.ini
      fi
      
      if [ ! -z "$js_java_options" ]
      then
        export JAVA_OPTIONS="${JAVA_OPTIONS} $js_java_options"
      fi
      
      echo "starting JOC Cockpit: /opt/sos-berlin.com/js7/joc/jetty/bin/jetty.sh start"
      /opt/sos-berlin.com/js7/joc/jetty/bin/jetty.sh start && tail -f /dev/null
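      The sed substitutions used by the start script can be tried in isolation. The following sketch operates on a temporary stand-in for start.ini (the real script edits /var/sos-berlin.com/js7/joc/start.ini in place):

```shell
# Sketch: enable the http module and set the port the way start-joc.sh
# does, but against a temporary copy instead of the real start.ini
ini=$(mktemp)
cat > "$ini" <<'EOF'
# --module=http
jetty.http.port=4446
EOF

js_http_port=14445
# enable http access
sed -i "s/.*--module=http$/--module=http/g" "$ini"
# set port for http access
sed -i "s/.*jetty.http.port\s*=.*/jetty.http.port=$js_http_port/g" "$ini"

result=$(cat "$ini")
rm -f "$ini"
echo "$result"
```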
    • Line 41 - 42: The default keystore and truststore files are copied to the image; they would hold the private key and certificate required for server authentication with HTTPS. By default, empty keystore and truststore files are used to which you would later add your private keys and certificates.
    • Line 64: Java releases before Java 12 make use of /dev/random for random number generation. This is a bottleneck as random number generation with this file is blocking. Instead, /dev/urandom should be used, which implements non-blocking behavior. The change of the random file is applied to the Java security file.
    • Line 66: The user account jobscheduler is created and is assigned the user id and group id handed over by the respective build arguments. As a result, the account running JOC Cockpit inside the container and the account that starts the container are assigned the same user id and group id. This allows the account running the container to access any files created by JOC Cockpit in mounted volumes with identical permissions.
    • Line 67: The JOC Cockpit setup is performed.
    • Line 68 - 70: The keystore and truststore locations are added to the Jetty start.ini file and joc.properties file respectively. 
      • start.ini.add is used for HTTPS access, e.g. by client browsers:

        Code Block
        languagebash
        titleJetty HTTPS Configuration File start.ini.add
        linenumberstrue
        collapsetrue
        ## Keystore file path (relative to $jetty.base)
        jetty.sslContext.keyStorePath=resources/joc/https-keystore.p12
        
        ## Truststore file path (relative to $jetty.base)
        jetty.sslContext.trustStorePath=resources/joc/https-truststore.p12
        
        ## Keystore password
        jetty.sslContext.keyStorePassword=jobscheduler
        
        ## KeyManager password (same as keystore password for pkcs12 keystore type)
        jetty.sslContext.keyManagerPassword=jobscheduler
        
        ## Truststore password
        jetty.sslContext.trustStorePassword=jobscheduler
        
        ## Connector port to listen on
        jetty.ssl.port=4443
      • joc.properties.add is used for connections to the Controller should such connections require HTTPS mutual authentication:

        Code Block
        languagebash
        titleJOC Cockpit configuration File joc.properties.add
        linenumberstrue
        collapsetrue
        ################################################################################
        ### Location, type and password of the Java truststore which contains the
        ### certificates of each JobScheduler Controller for HTTPS connections. Path can be
        ### absolute or relative to this file.
        
        keystore_path = ../../resources/joc/https-keystore.p12
        keystore_type = PKCS12
        keystore_password = jobscheduler
        key_password = jobscheduler
        
        truststore_path = ../../resources/joc/https-truststore.p12
        truststore_type = PKCS12
        truststore_password = jobscheduler
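
      The empty default keystore can be replaced by one holding a self-signed key pair for testing. The following sketch uses openssl; the file names, CN and password mirror the sample configuration above and are illustrative only (for production you would import a CA-signed certificate, e.g. with keytool):

```shell
# Sketch: create a throw-away self-signed key pair and package it as a
# PKCS12 keystore matching the sample path/password used above.
# CN, validity and password are illustrative values.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout https-key.pem -out https-cert.pem \
  -days 365 -subj "/CN=joc.example.com"

openssl pkcs12 -export -name joc-cockpit \
  -inkey https-key.pem -in https-cert.pem \
  -out https-keystore.p12 -passout pass:jobscheduler
```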
        
        
    • Line 71: The HTTPS module is added to the Jetty servlet container for use with JOC Cockpit.
    • Line 82: If a config folder is available in the build directory then its contents are copied to the respective config folder in the image. This can be useful to create an image with individual settings in configuration files, see JS7 - JOC Cockpit Configuration Items.
    • Line 89: The start script is executed and is dynamically parameterized from environment variables that are forwarded when starting the container.
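    As a minimal sketch of such an individual configuration, a config folder can be staged in the build context before running the build (the property shown is a placeholder, not a real JOC Cockpit setting):

```shell
# Sketch: stage an individual configuration file into the build context;
# the Dockerfile copies the contents of the config folder into the image.
# 'custom_log_level' is a placeholder property name for illustration.
mkdir -p joc/build/config
printf '%s\n' '# individual JOC Cockpit settings' \
  'custom_log_level = INFO' > joc/build/config/joc.properties
ls joc/build/config
```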

    Build Script

    The build script offers a number of options to parameterize the Dockerfile:

    Code Block
    languagebash
    titleBuild Script for JOC Cockpit Image
    linenumberstrue
    collapsetrue
    #!/bin/sh
    
    set -e
    
    SCRIPT_HOME=$(dirname "$0")
    SCRIPT_HOME="`cd "${SCRIPT_HOME}" >/dev/null && pwd`"
    SCRIPT_FOLDER="`basename $(dirname "$SCRIPT_HOME")`"
    
    
    # ----- modify default settings -----
    
    JS_RELEASE="2.05.0-SNAPSHOT"
    JS_REPOSITORY="sosberlin/js7"
    JS_IMAGE="$(basename "${SCRIPT_HOME}")-${JS_RELEASE//\./-}"
    
    JS_USER_ID="$UID"
    
    JS_HTTP_PORT="4446"
    JS_HTTPS_PORT=
    
    JS_JAVA_OPTIONS="-Xmx128m"
    JS_BUILD_ARGS=
    
    # ----- modify default settings -----
    
    
    for option in "$@"
    do
      case "$option" in
             --release=*)      JS_RELEASE=`echo "$option" | sed 's/--release=//'`
                               ;;
             --repository=*)   JS_REPOSITORY=`echo "$option" | sed 's/--repository=//'`
                               ;;
             --image=*)        JS_IMAGE=`echo "$option" | sed 's/--image=//'`
                               ;;
             --user-id=*)      JS_USER_ID=`echo "$option" | sed 's/--user-id=//'`
                               ;;
             --http-port=*)    JS_HTTP_PORT=`echo "$option" | sed 's/--http-port=//'`
                               ;;
             --https-port=*)   JS_HTTPS_PORT=`echo "$option" | sed 's/--https-port=//'`
                               ;;
             --java-options=*) JS_JAVA_OPTIONS=`echo "$option" | sed 's/--java-options=//'`
                               ;;
             --build-args=*)   JS_BUILD_ARGS=`echo "$option" | sed 's/--build-args=//'`
                               ;;
             *)                echo "unknown argument: $option"
                               exit 1
                               ;;
      esac
    done
    
    set -x
    
    docker build --no-cache --rm \
          --tag=$JS_REPOSITORY:$JS_IMAGE \
          --file=$SCRIPT_HOME/build/Dockerfile \
          --build-arg="JS_RELEASE=$JS_RELEASE" \
          --build-arg="JS_RELEASE_MAJOR=$(echo $JS_RELEASE | cut -d . -f 1,2)" \
          --build-arg="JS_USER_ID=$JS_USER_ID" \
          --build-arg="JS_HTTP_PORT=$JS_HTTP_PORT" \
          --build-arg="JS_HTTPS_PORT=$JS_HTTPS_PORT" \
          --build-arg="JS_JAVA_OPTIONS=$JS_JAVA_OPTIONS" \
          $JS_BUILD_ARGS $SCRIPT_HOME/build
    
    set +x

    ...

    • Line 12 - 22: Default values are specified that are used if no command line arguments are provided. This includes values for:
      • the release number: adjust this value to a current release of JS7.
      • the repository, which by default is sosberlin/js7.
      • the image name is determined from the current folder name and the release number.
      • the user id is by default the user id of the user running the build script.
      • the HTTP port and HTTPS port: if the relevant port is not specified then JOC Cockpit will not listen to that port for the protocol in question. For example, you can disable the HTTP protocol by specifying an empty value. The default ports should be fine as they are mapped by the run script to outside ports on the Docker container's host. However, you can modify ports as you like.
      • Java options: typically you would specify default values, for example for Java memory consumption. The Java options can be overwritten by the run script when starting the container. However, you might want to create your own image with adjusted default values.
    • Line 27 - 50: The above options can be overwritten by command line arguments like this:


      Code Block
      languagebash
      titleRunning the Build Script with Arguments
      linenumberstrue
      ./build.sh --http-port=14445 --https-port=14443 --java-options="-Xmx1G"
    • Line 54 - 63: The effective docker build command is executed with arguments. The Dockerfile is assumed to be located in the build sub-directory of the current directory.
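
    The resulting image tag follows from the defaults above: the directory name plus the release number with dots replaced by dashes. This can be sketched in POSIX sh (the build script itself uses the bash substitution ${JS_RELEASE//\./-}; tr is used here as a portable equivalent, and the directory name joc is an assumption):

```shell
# Sketch of the image tag derivation used by the build script,
# assuming the script lives in a directory named 'joc'
JS_RELEASE="2.05.0-SNAPSHOT"
JS_REPOSITORY="sosberlin/js7"
SCRIPT_DIR="joc"   # hypothetical directory name
JS_IMAGE="${SCRIPT_DIR}-$(echo "$JS_RELEASE" | tr '.' '-')"
echo "${JS_REPOSITORY}:${JS_IMAGE}"
```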

    ...