
...

  • JOC Cockpit is available preinstalled in a Docker® image.
  • Before running the JOC Cockpit container the following requirements should be met:
    • Either the embedded H2® database should be used or an external database should be made available and accessible - see the JS7 - Database article for more information.
    • Docker volumes are created for persistent JOC Cockpit configuration data and log files.
    • A Docker network or similar mechanism is made available to enable network access between JOC Cockpit, Controller instance(s) and Agents.
  • Initial operation for JOC Cockpit includes:
    • registering the Controller instance(s) and Agents that are used in the job scheduling environment.
    • optionally registering a JS7 Controller cluster. 

...

Video: https://www.youtube.com/watch?v=dCZXrju9lDE&ab_channel=JobScheduler

Prerequisites

Check that Docker 20.10 or newer is operated.
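The version check can be sketched in the shell with a version-aware sort; the installed value is hard-coded here for illustration and would normally be taken from the Docker CLI:

```shell
# Compare the installed Docker version against the 20.10 minimum.
# "installed" is hard-coded for illustration; normally it would come from:
#   docker version --format '{{.Server.Version}}'
required="20.10"
installed="24.0.7"
oldest="$(printf '%s\n' "$required" "$installed" | sort -V | head -n1)"
if [ "$oldest" = "$required" ]; then
  echo "OK: Docker $installed satisfies the $required minimum"
else
  echo "Too old: Docker $installed is below $required"
fi
```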

Pulling the JOC Cockpit Image

...

After pulling the JOC Cockpit image users can run the container with a number of options such as:

...

  • --network The above example makes use of a Docker network - created, for example, using the docker network create js7 command - to allow network sharing between containers. Note that any inside ports used by Docker containers are visible within a Docker network. Therefore a JOC Cockpit instance running for the inside port 4446 can be accessed with the container's hostname and the same port within the Docker network.
  • --publish The JOC Cockpit has been configured to listen to the HTTP port 4446. An outside port of the Docker host can be mapped to the JOC Cockpit inside HTTP port. This mapping is not required for use with a Docker network, see --network. However, it is required to allow direct access to the JOC Cockpit user interface from the Docker host via its outside port.
  • --env=RUN_JS_JAVA_OPTIONS This allows any Java options to be injected into the JOC Cockpit container. Preferably this is used to specify memory requirements of the JOC Cockpit, for example,  with -Xmx256m. For details see JS7 - FAQ - Which Java Options are recommended?
  • --env=RUN_JS_USER_ID Inside the container the JOC Cockpit is operated for the jobscheduler user account. In order to access, for example, log files created by the JOC Cockpit, which are mounted to the Docker host, it is recommended that users map the account that starts the container to the jobscheduler account inside the container. The RUN_JS_USER_ID environment variable accepts the user ID and group ID of the account that will be mapped. The example above makes use of the current user.
  • --mount The following volume mounts are suggested:
    • config: The optional configuration folder allows specification of individual settings for JOC Cockpit operation - see the sections below and the JS7 - JOC Cockpit Configuration Items article. Without this folder the default settings are used. This includes specifying the connection to the JS7 - Database.
    • logs: In order to make JOC Cockpit log files persistent they have to be written to a volume that is mounted for the container. Users are free to adjust the volume name from the src attribute. However, the value of the dst attribute should not be changed as it reflects the directory hierarchy inside the container.
    • Docker offers a number of ways of mounting or binding volumes to containers including, for example, creation of local directories and binding them to volumes like this:

      Example how to create Docker volumes:

      # example to map volumes to directories on the Docker host prior to running the JOC Cockpit container
      mkdir -p /home/sos/js7/js7-joc-primary/config /home/sos/js7/js7-joc-primary/logs
      docker volume create --driver local --opt o=bind --opt type=none --opt device="/home/sos/js7/js7-joc-primary/config" js7-joc-primary-config
      docker volume create --driver local --opt o=bind --opt type=none --opt device="/home/sos/js7/js7-joc-primary/logs" js7-joc-primary-logs

      There are alternative ways of achieving this. As a result users should have access to the directories /var/sos-berlin.com/js7/joc/resources/joc and /var/log/sos-berlin.com/js7/joc inside the container and data in both locations should be persistent. If volumes are not created before running the container then they will be mounted automatically. However, users should have access to data in the volumes, e.g. by access to /var/lib/docker/volumes/js7-joc-primary-config etc.
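Taken together, the options discussed above can be combined into a single start command. The following is a hedged sketch: the image tag, container name, network name and volume names are assumptions taken from the examples in this article and may need adjustment for the release in use.

```shell
# Hedged sketch of running the JOC Cockpit container with the options discussed above;
# image tag, container name, network and volume names are assumptions
docker network create js7 2>/dev/null || true

docker run --name js7-joc-primary \
    --detach \
    --network js7 \
    --publish 17446:4446 \
    --env RUN_JS_JAVA_OPTIONS="-Xmx256m" \
    --env RUN_JS_USER_ID="$(id -u):$(id -g)" \
    --mount type=volume,src=js7-joc-primary-config,dst=/var/sos-berlin.com/js7/joc/resources/joc \
    --mount type=volume,src=js7-joc-primary-logs,dst=/var/log/sos-berlin.com/js7/joc \
    sosberlin/js7:joc-2-x-x
```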

...

  • JDBC Drivers for use with MariaDB®, MySQL®, Oracle®, PostgreSQL® are included with JS7.
    • For details about JDBC Driver versions see the JS7 - Database article.
    • Should users have good reasons to use a different version of a JDBC Driver then they can apply the JDBC Driver version of their choice.
  • For use with H2®
    • The version of H2® successfully tested by SOS is h2-1.4.200.jar. At the time of writing later versions do not provide sufficient compatibility with MySQL to be applicable for JS7.
  • For use with Microsoft SQL Server®
    • The JDBC Driver has to be downloaded by the user as it cannot be bundled with open source software due to license conflicts.
  • Users can download JDBC Drivers from the vendors' sites and store the resulting *.jar file(s) in the following location:
    • Location in the container: /var/sos-berlin.com/js7/joc/resources/joc/lib
    • Consider accessing this directory from the volume that is mounted when running the container, for example, from a local folder /home/sos/js7/js7-joc-primary/config/lib.
    • Refer to the JS7 - Database article for details about the procedure.
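As a sketch, a downloaded driver can be placed into the mounted configuration folder like this; the .jar file name is a placeholder for whichever driver version was actually downloaded:

```shell
# Hypothetical example: the .jar file name stands for the driver that was downloaded
mkdir -p /home/sos/js7/js7-joc-primary/config/lib
cp ~/Downloads/mssql-jdbc.jar /home/sos/js7/js7-joc-primary/config/lib/
# the file then appears inside the container at /var/sos-berlin.com/js7/joc/resources/joc/lib
```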

...

  • Location in the container: /var/sos-berlin.com/js7/joc/resources/joc/hibernate.cfg.xml
  • Consider accessing the configuration file from the volume that is mounted when running the container, for example, from a local folder /home/sos/js7/js7-joc-primary/config.
  • Information about use of the hibernate.cfg.xml file for the respective DBMS can be found in the JS7 - Database article.
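For illustration, the connection settings in hibernate.cfg.xml take the shape of Hibernate properties such as the following sketch for MySQL; host, port, database name and credentials are placeholders, and the exact settings for each DBMS are described in the JS7 - Database article:

```xml
<!-- Sketch of hibernate.cfg.xml connection properties for MySQL;
     host, port, database name and credentials are placeholders -->
<property name="hibernate.connection.driver_class">com.mysql.cj.jdbc.Driver</property>
<property name="hibernate.connection.url">jdbc:mysql://mysql-host:3306/js7</property>
<property name="hibernate.connection.username">js7</property>
<property name="hibernate.connection.password">secret</property>
```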

Create Database Objects

During installation JOC Cockpit can be configured:

  • not to create database objects by the installer,
  • to create database objects by the installer,
  • to check on start-up if database objects exist and otherwise to create them on-the-fly.

Users can force creation of database objects by executing the following script inside the container:

...

Access to log files is essential to identify problems during installation and operation of containers.

By mounting a volume for log files as explained above, users have access to the files indicated in the JS7 - Log Files and Locations article.
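For example, assuming the volume mapping shown earlier, the main log file can be followed from the Docker host; the container name js7-joc-primary is an assumption from the examples above:

```shell
# Follow the main JOC Cockpit log from the volume mounted on the Docker host
tail -f /home/sos/js7/js7-joc-primary/logs/joc.log

# Alternatively inspect the container's console output
docker logs -f js7-joc-primary
```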

...

Info

For initial operation, the JOC Cockpit is used to make Controller instance(s) and Agent instances known to the job scheduling environment.

General information about initial operation can be found in the following article:

Additional information about initial operation with containers can be found below.

Accessing JOC Cockpit from the Browser

...

Explanations:

  • For the JOC Cockpit URL: in most situations users can use the host name of the Docker host and the port that was specified when starting the container.
    • From the example above this could be http://centostest_primary.sos:17446 if centostest_primary.sos is the Docker host and 17446 the outside HTTP port of the container.
    • Note that the Docker host has to allow incoming traffic to the port specified. This might require adjustment of the port or the creation of a firewall rule.
  • By default JOC Cockpit ships with the following credentials:
    • User Account: root
    • Password: root
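On a Docker host running firewalld, the firewall rule mentioned above could be sketched as follows; the port 17446 is the outside HTTP port from the example:

```shell
# Example: open the outside HTTP port on a Docker host running firewalld
sudo firewall-cmd --permanent --add-port=17446/tcp
sudo firewall-cmd --reload
```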

...

After logging in, a dialog window pops up that asks for the registration of a Controller. Users will find the same dialog later on in the User -> Manage Controllers/Agents menu.

Users have a choice of registering a Standalone Controller or registering a JS7 - Controller Cluster for high availability (requires a JS7 - License).

...

Use of the Standalone Controller is within the scope of the JS7 open source license.



Explanation:

  • Users can add a title to the Controller instance which will be shown in the JS7 - Dashboard View.
  • The URL of the Controller instance has to match the hostname and port used to operate the Controller instance.
    • Should a Docker network be used then all containers will "see" each other and all internal container ports will be accessible within the network.
      • In the above example a Docker network js7 is used and the Controller container is started with the hostname js7-controller-primary.
      • The port 4444 is the inside HTTP port of the Controller that is visible in the Docker network.
    • Should no Docker network be used then users are free to decide how to map hostnames:
      • The Controller container is accessible from the Docker host, i.e. the hostname of the Docker host is specified.
      • The outside HTTP port of the Controller container is used, which is specified with the --publish option when starting the Controller container.

Register Controller Cluster

A Controller cluster implements high availability for automated fail-over if a Controller instance is terminated or becomes unavailable.

Note that the high availability clustering feature is subject to the JS7 - License. Without a license, fail-over/switch-over will not take place between Controller cluster members.



Explanation:

  • The Primary Controller instance, Secondary Controller instance and Agent Cluster Watcher are specified in this dialog.
    • Users can add a title for each Controller instance which will be shown in the JS7 - Dashboard view.
    • Primary and Secondary Controller instances require a URL as seen from the JOC Cockpit.
    • In addition, a URL can be specified for each Controller instance to allow it to be accessed by its partner cluster member.
      • Typically the URL used between Controller instances is the same as the URL used by the JOC Cockpit and therefore this setting is not used.
      • Should users operate, for example, a proxy server between Primary and Secondary Controller instances then the URL for a given Controller instance to access its partner cluster member might be different from the URL used by the JOC Cockpit.
  • The URL of the Controller instance has to match the hostname and port that the Controller instance is operated on.
    • Should a Docker network be used then all containers will "see" each other and all inside container ports are accessible within the network.
      • In the above example a Docker network js7 is used and the Primary Controller container is started with the hostname js7-controller-primary. The Secondary Controller container is started with the hostname js7-controller-secondary.
      • The port 4444 is the inside HTTP port of the Controller instance that is visible in the Docker network.
    • Should no Docker network be used then users are free to decide how to map hostnames:
      • The Controller container is accessible from the Docker host, i.e. the hostname of the Docker host is specified.
      • The outside HTTP port of the Controller instance is used which was specified with the --publish option when starting the Controller container.
  • The Agent Cluster Watcher is required for operation of a Controller Cluster. The Agent is contacted by Controller Cluster members to verify the cluster status if direct connections between Controller Cluster members are not available.
    • Note that the example above makes use of an Agent that by default is configured for use with HTTP connections. 
    • For use of the Agent's hostname and port the same applies as for Controller instances.

Register Agents

Having established the connection between JOC Cockpit and the Controller, Agents can be added like this:


Explanation:

  • For each Agent a unique identifier is specified, the Agent ID. The identifier remains in place for the lifetime of an Agent and cannot be modified.
  • Users can add a name for the Agent that will be used when assigning jobs to be executed with this Agent. The Agent name can be modified later on.
  • In addition users can add alias names to make the same Agent available under different names.

...

Note that it is not necessary to configure the JOC Cockpit - it runs out-of-the-box. The default configuration specifies that HTTP connections are used, which expose unencrypted communication between clients and JOC Cockpit, and that authentication is performed with hashed passwords.

Users who intend to operate a compliant and secure job scheduling environment or who wish to operate JOC Cockpit as a cluster for high availability are recommended to familiarize themselves with the JS7 - JOC Cockpit Configuration for Docker Containers article series.

...

Users who wish to create their own individual images of the JOC Cockpit can find instructions in the JS7 - JOC Cockpit Build for Docker Image article.

...