
Containerising Aspire using Docker

If you have ever thought about packaging Aspire to make it easier for DevOps to manage, then containers could be the answer. Containers are an efficient way of packaging software, including all the elements needed to run it: code, system tools, libraries and settings.
With containers, the moving parts and dependencies across your infrastructure become less complex, enabling developers to work with identical development environments and stacks.
There are many benefits to using containers including:

  • DevOps teams benefit from a more efficient approach to software development
  • Containers allow engineering teams to be more agile by reducing wasted resources
  • Containers improve scalability through a more lightweight and resource-efficient approach.

If you need to containerise Aspire, there are various container technology options available to you including Docker, Microsoft Containers, Kubernetes etc.
For the purposes of this document, Docker and Kubernetes (K8s) were used to containerise Aspire within the Google Cloud Platform (GCP) and therefore some knowledge of the core Docker concepts is advisable. Please see the following links if you require more information on Docker and Kubernetes:

Quick Reference

  • Docker – an open source OS-level virtualisation software platform primarily designed for Linux and Windows. Docker uses resource isolation features of the OS kernel to run multiple independent containers on the same OS.
  • Image – a docker image is made up of multiple layers which include system libraries, tools and other files and dependencies for the executable code.
  • Container - is a standard unit of software that packages up code and all its dependencies, so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.
  • Kubernetes – started by Google in 2014, Kubernetes is an open-source project focused on building a robust orchestration system for running thousands of containers in production by automating manual tasks, reducing infrastructure complexity and helping with the deployment, scaling and management of containerised applications.
  • Pods – a group of one or more containers with shared network/storage and a specification for how to run the containers. Typically, one pod will contain one container (in which case you can think of a pod and a container as the same concept); however, there may be situations where a pod contains more than one container.
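To make the pod concept concrete, the following is a hedged sketch of a minimal pod manifest running a single container (the pod name is hypothetical, and the image path is illustrative only):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: aspire-pod                               # hypothetical name
spec:
  containers:
    - name: aspire
      image: eu.gcr.io/<PROJECT_ID>/aspire:latest
      ports:
        - containerPort: 50505                   # the Aspire admin port
```

A pod with more than one container would simply list additional entries under `containers`, all sharing the pod's network and storage.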

The following image from https://www.docker.com/resources/what-container shows the differences between virtual machines and containers.

The instructions outlined below describe how Aspire was containerised for one particular client where all applications (not only Aspire) were built utilising GCP.
NOTE:
The details shown below describe how Aspire was containerised for this project and it should be stressed that this is not the only method available to you. You should always liaise with your development and DevOps team to determine the right deployment method suitable for your project.
Scenario
Using GCP, containerise Aspire 3.2 using a standard Mongo Docker image, available either via the Google Cloud Marketplace or Docker Hub, together with a standard ZooKeeper image.
Prerequisites:

  • Install Docker Community Edition (CE) on either your local machine or VM. For the purposes of this document, Docker CE was downloaded from the Docker site (https://docs.docker.com) and installed onto an Alpine Linux VM (a very lightweight Linux distribution).

Objectives:

  • Package Aspire into a Docker image
  • Run the container locally on your machine (to test)
  • Upload the image to a container registry
  • Deploy the image as part of continuous integration/continuous deployment (CI/CD)

Package Aspire into a Docker Image
To build a Docker image, you need an application (Aspire) and a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using "docker build", users can create an automated build that executes several command-line instructions in succession.
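As a hedged illustration only (the file and image names here are hypothetical), a minimal Dockerfile is just such a sequence of instructions, each producing an image layer:

```dockerfile
# Minimal, hypothetical example; the full Aspire Dockerfile is shown later
FROM alpine:3.9            # base image to build on
WORKDIR /app               # working directory inside the image
ADD run.sh ./              # copy a file from the build context
RUN chmod +x ./run.sh      # execute a command while building
CMD ["sh", "/app/run.sh"]  # command run when a container starts
```

Such a file would be built with, for example, "docker build . -t myimage".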
Prebuilt Docker images can be downloaded from https://hub.docker.com; however, our scenario requires a custom Docker image containing Aspire 3.x. The following steps detail what is required to build the Aspire image, and they are shown in more detail in the Dockerfile described below.

  • Install OpenJDK Java 1.8.x
  • Download Aspire 3.x installation zip file as part of image build process (or separately download Aspire 3.x installation zip file to your working folder and then copy the installation zip file as part of the image build process)
  • Unzip the installation file as part of the image build process
  • Copy and overwrite the default Aspire config folder with a custom prebuilt config folder containing all the necessary updated folders and files i.e. settings.xml, content sources, workflow-library etc.
  • Copy and overwrite the bin/aspire.sh script since the container will start and then stop straight away with the default file.
  • Download prebuilt maven repository with all necessary dependencies (as we don't want it to download these when the container is running).
  • Copy the prebuilt maven repository to the image during the image build process

The diagram below shows a basic configuration structure (folders and files) which is used to build the Aspire image. Each resource is described in more detail below together with example configuration.
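As a sketch of that structure (the root folder name is hypothetical; the other names are the resources described below):

```text
aspire-docker/            # build context root (hypothetical name)
├── Dockerfile
├── run.sh                # run script that executes aspire.sh
├── aspire.sh             # updated start script with nohup removed
├── config/               # customised Aspire configuration
│   ├── settings.xml
│   ├── content-sources/
│   └── workflow-libraries/
└── maven-repository/     # prebuilt maven dependencies
```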

Configuration Details

  • config (folder)
    • Contains the default as well as the customised Aspire configuration files required for the Aspire project, i.e. updated Aspire folders such as "content sources" and "workflow-libraries", settings.xml, etc., needed for your Aspire solution. The default "config" folder that comes with the 3.2 installation will be deleted, and this folder will be copied to the image as its replacement. This config folder should contain all default as well as customised configuration files.
    • The settings.xml file was updated to reference several local variables including local maven repository and MongoDB location. These variables were set in the Dockerfile. An excerpt of the file is shown below.

      <?xml version="1.0" encoding="UTF-8"?>
      <settings>
        <repositories>
          <defaultVersion>3.2</defaultVersion>
          <offline>true</offline>
          <repository type="distribution">
            <directory>bundles/aspire</directory>
          </repository>
          <repository type="maven">
            <offline>false</offline>
            <localRepository>##MAVEN_REPOSITORY##</localRepository>
          </repository>
        </repositories>
        <configAdministration>
          <zookeeper enabled="true" libraryFolder="config/workflow-libraries" root="/aspire" updatesEnabled="true">
            <externalServer>##ZOOKEEPER_URI##</externalServer>
            <clientPort>1112</clientPort>
            <dataDir>config</dataDir>
            <maxConnections>60</maxConnections>
            <tickTime>2000</tickTime>
            <connectionTimeout>6000</connectionTimeout>
          </zookeeper>
        </configAdministration>
        <!-- noSql database provider for the 3.0 connector framework -->
        <noSQLConnectionProvider sslEnabled="false" sslInvalidHostNameAllowed="false">
          <servers>##MONGODB_URI##</servers>
        </noSQLConnectionProvider>
        <properties>
          <property name="aspireTemplatesDirectory">${aspire.home}/resources/templates</property>
          <property name="odrive.contentsource">aspire/fuse/share/o-drive</property>
          <property name="qdrive.contentsource">aspire/fuse/share/q-drive</property>
          <property name="sdrive.contentsource">aspire/fuse/share/s-drive</property>
          <property name="elastic.index">elastic-prod-elasticsearch-client.default.svc.cluster.local</property>
          <property name="stager.protocol">http</property>
          <property name="stager.host">stager-service.stager-app.svc.cluster.local</property>
          <property name="stager.port">3000</property>
          <external filename="${aspire.home}/config/template.properties" name="stagerServers"/>
        </properties>
        <!-- Auto-start applications -->
        <!-- These applications are loaded and started when Aspire starts -->
        <autoStart>
          <!-- Aspire 3.x UI -->
          <application config="com.searchtechnologies.aspire:app-admin-ui" id="0"/>
          <application config="com.searchtechnologies.aspire:app-workflow-manager" id="1">
            <properties>
              <property name="templateFile">${appbundle.home}/data/templates.xml</property>
              <property name="libraryPath">${aspire.config.dir}/workflow-libraries</property>
              <property name="planFile"/>
              <property name="disableInternalTemplates">false</property>
              <property name="allowCustomRule">true</property>
              <property name="debug">false</property>
            </properties>
          </application>
          <application config="config/initialize-invalid-titles.xml" id="2"/>
        </autoStart>
      </settings>


  • maven-repository (folder)
    • All required maven dependency files were downloaded and then copied to this folder which is then copied to the image – there should be no requirement to connect back to the Aspire maven repository for further updates
  • aspire.sh (file)
    • This updated file needs to overwrite the base bin/aspire.sh file that comes with the Aspire installation, since with the default file Aspire will start and then stop, and the container will end, due to the "nohup" command in the shell script.
    • The updated version replaces the following line (removing the 'nohup', the log redirection and the trailing '&'):
    • # nohup ${JAVA_EXE} ${ASPIRE_JAVA_OPTS} -Dcom.searchtechnologies.aspire.console.interactive=false -Dfelix.config.properties=file:config/felix.properties -Dcom.searchtechnologies.aspire.home=$ASPIRE_HOME $log_prop ${@:2} -jar ${SCRIPT_DIR}/felix.jar >> ${felix_log} 2>&1 &

With

    • ${JAVA_EXE} ${ASPIRE_JAVA_OPTS} -Dcom.searchtechnologies.aspire.console.interactive=false -Dfelix.config.properties=file:config/felix.properties -Dcom.searchtechnologies.aspire.home=$ASPIRE_HOME $log_prop ${@:2} -jar ${SCRIPT_DIR}/felix.jar
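Why this matters can be demonstrated with a minimal sketch, using "sleep" as a stand-in for the Aspire Java process: a container lives only as long as its main process, so a start script that backgrounds the process and exits will stop the container straight away.

```shell
# Backgrounded, as in the default aspire.sh: the shell returns at once,
# so a container running it would exit immediately.
start=$(date +%s)
sh -c 'sleep 2 &'
bg_elapsed=$(( $(date +%s) - start ))

# Foreground, as in the updated aspire.sh: the shell blocks until the
# process ends, keeping the container alive.
start=$(date +%s)
sh -c 'sleep 2'
fg_elapsed=$(( $(date +%s) - start ))

echo "background: ${bg_elapsed}s foreground: ${fg_elapsed}s"
# background returns in ~0s; foreground takes ~2s
```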
  • run.sh – the run script that executes aspire.sh

    #!/bin/sh
    export ASPIRE_SCRIPT_PATH=/aspire/aspire-distribution-3.2/bin/aspire.sh
    exec sh ${ASPIRE_SCRIPT_PATH}


  • Dockerfile
    • OpenJDK – OpenJDK is required for Aspire because, since 16 April 2019, Oracle's new licence model only allows the Oracle JDK to be used in personal or development environments.
    • Aspire 3.x – downloaded from the Aspire downloads repository using a valid username/password combination which was stored, for our requirements, using the Google Key Management Service (KMS). It is not advisable to store the username/password needed to download Aspire in the Dockerfile itself.
    • There are a number of "sed" commands which update the placeholder variables in the settings.xml file. This makes the Aspire image more flexible, i.e. you only need to update the Dockerfile instead of the settings.xml file.
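As an illustration of this substitution (the MongoDB address here is a hypothetical stand-in for the value set via ARG/ENV), the "sed" command swaps the placeholder token for the variable's value at build time:

```shell
# Hypothetical value standing in for the Dockerfile ARG/ENV setting
ASPIRE_MONGODB_URI="mongodb-service:27017"

# A sample line, as it appears in the settings.xml template
echo '<servers>##MONGODB_URI##</servers>' > settings-sample.xml

# The same substitution the Dockerfile performs at build time
sed -i "s|##MONGODB_URI##|$ASPIRE_MONGODB_URI|g" settings-sample.xml

cat settings-sample.xml
# -> <servers>mongodb-service:27017</servers>
```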

    FROM alpine:3.9

    # Default to UTF-8 file.encoding
    ENV LANG C.UTF-8

    # Add a simple script that can auto-detect the appropriate JAVA_HOME value
    # based on whether the JDK or only the JRE is installed
    RUN { \
    echo '#!/bin/sh'; \
    echo 'set -e'; \
    echo; \
    echo 'dirname "$(dirname "$(readlink -f "$(which javac || which java)")")"'; \
    } > /usr/local/bin/docker-java-home \
    && chmod +x /usr/local/bin/docker-java-home
    ENV JAVA_HOME /usr/lib/jvm/java-1.8-openjdk
    ENV PATH $PATH:/usr/lib/jvm/java-1.8-openjdk/jre/bin:/usr/lib/jvm/java-1.8-openjdk/bin

    ENV JAVA_VERSION 8u212
    ENV JAVA_ALPINE_VERSION 8.212.04-r0

    RUN set -x \
    && apk add --no-cache \
    openjdk8="$JAVA_ALPINE_VERSION" \
    && [ "$JAVA_HOME" = "$(docker-java-home)" ]

    # ASPIRE

    # Set environment variables
    ARG JAVA_MAX_MEMORY=16g
    ENV JAVA_MAX_MEMORY=$JAVA_MAX_MEMORY

    ARG JAVA_INITIAL_MEMORY=1g
    ENV JAVA_INITIAL_MEMORY=$JAVA_INITIAL_MEMORY

    ARG ASPIRE_HOME=/aspire/aspire-distribution-3.2
    ENV ASPIRE_HOME=$ASPIRE_HOME

    ARG ASPIRE_ADMIN_PORT=50505
    ENV ASPIRE_ADMIN_PORT=$ASPIRE_ADMIN_PORT

    ARG ASPIRE_MAVEN_REPOSITORY=/aspire/maven-repository
    ENV ASPIRE_MAVEN_REPOSITORY=$ASPIRE_MAVEN_REPOSITORY

    ARG ASPIRE_MONGODB_URI=xxxxx:27017
    ENV ASPIRE_MONGODB_URI=$ASPIRE_MONGODB_URI

    ARG ASPIRE_ZOOKEEPER_URI=xxxxx:2181
    ENV ASPIRE_ZOOKEEPER_URI=$ASPIRE_ZOOKEEPER_URI

    ARG ASPIRE_MAVEN_USERNAME=xxxxx
    ENV ASPIRE_MAVEN_USERNAME=$ASPIRE_MAVEN_USERNAME

    ARG ASPIRE_MAVEN_PASSWORD=xxxxx
    ENV ASPIRE_MAVEN_PASSWORD=$ASPIRE_MAVEN_PASSWORD

    ARG ASPIRE_LOG_DIR=/aspire/logs
    ENV ASPIRE_LOG_DIR=$ASPIRE_LOG_DIR

    # Update and upgrade packages, then install wget and unzip
    RUN apk update && apk upgrade && apk add wget && apk add unzip

    # Add aspire user and group
    RUN addgroup -g 1000 -S aspire && adduser -u 1000 -S aspire -G aspire

    # Set to run as root temporarily
    USER root

    # Set the working directory to /aspire
    WORKDIR /aspire

    # Download aspire from the repository
    RUN wget --user=${ASPIRE_MAVEN_USERNAME} --password=${ASPIRE_MAVEN_PASSWORD} https://repository.searchtechnologies.com/artifactory/public/com/searchtechnologies/aspire/binaries/3.2/aspire-distribution-3.2-binaries.zip -P /aspire

    # Add the run script to the /aspire directory
    ADD run.sh ./

    # Add the custom maven repository
    ADD maven-repository ./maven-repository

    # Unzip the aspire distribution
    RUN unzip ./aspire-distribution-3.2-binaries.zip

    # Copy over the custom aspire.sh script which needs to be run instead
    RUN rm ./aspire-distribution-3.2/bin/aspire.sh
    ADD aspire.sh ./aspire-distribution-3.2/bin

    # Delete the original aspire config folder
    RUN rm -r ./aspire-distribution-3.2/config

    # Add the custom Aspire configuration
    ADD config ./aspire-distribution-3.2/config

    # Set execute rights on the run script
    RUN chmod +x ./run.sh

    # Change ownership of the /aspire folder
    RUN chown -R aspire:aspire ./

    # Change ownership of the maven repository folder
    RUN chown -R aspire:aspire ./maven-repository

    # Remove the zip file
    RUN rm ./aspire-distribution-3.2-binaries.zip

    # Make the log folder
    RUN mkdir ./logs
    RUN chmod 755 ./logs

    EXPOSE $ASPIRE_ADMIN_PORT

    # Add execute rights on the Aspire start script
    RUN chmod +x /aspire/aspire-distribution-3.2/bin/aspire.sh

    # Update specific environment values in the Aspire settings.xml file
    RUN sed -i "s|##MONGODB_URI##|$ASPIRE_MONGODB_URI|g" /aspire/aspire-distribution-3.2/config/settings.xml
    RUN sed -i "s|##MAVEN_REPOSITORY##|$ASPIRE_MAVEN_REPOSITORY|g" /aspire/aspire-distribution-3.2/config/settings.xml
    RUN sed -i "s|##ZOOKEEPER_URI##|$ASPIRE_ZOOKEEPER_URI|g" /aspire/aspire-distribution-3.2/config/settings.xml
    RUN sed -i "s|##JAVA_MAX_MEMORY##|$JAVA_MAX_MEMORY|g" /aspire/aspire-distribution-3.2/bin/aspire.sh
    RUN sed -i "s|##JAVA_INITIAL_MEMORY##|$JAVA_INITIAL_MEMORY|g" /aspire/aspire-distribution-3.2/bin/aspire.sh

    # Change ownership of the /aspire folder again
    RUN chown -R aspire:aspire ./

    # Set to run as the aspire user
    USER aspire

    # The full path is required, otherwise the run.sh script cannot be found
    CMD ["sh","/aspire/run.sh"]
    NOTE: In the Dockerfile above, Aspire logs are stored locally within the container, which is fine when testing. However, should a K8s pod fail, any Aspire logs will be lost, and with them any evidence of why the pod failed. The alternative is to store log files on persistent storage (a K8s volume), which can be accessed whether the pod/container is running or not. See https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage for more information.
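As a hedged sketch of such a volume mount for the Aspire logs (names are hypothetical, and emptyDir is shown only for brevity; durable storage would use a PersistentVolumeClaim instead):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: aspire-pod
spec:
  containers:
    - name: aspire
      image: eu.gcr.io/<PROJECT_ID>/aspire:latest
      volumeMounts:
        - name: aspire-logs
          mountPath: /aspire/logs   # ASPIRE_LOG_DIR in the Dockerfile
  volumes:
    - name: aspire-logs
      emptyDir: {}                  # use a persistentVolumeClaim for durable storage
```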
Run the container locally on your machine (to test)
To test that your Dockerfile works as expected, you can run the series of Docker commands shown below:

  1. Build the Aspire image – this will create an image with the name "aspire"

    sudo docker build <path to folder containing Dockerfile> -t aspire


    Alternatively, you can pass arguments to the Dockerfile which will override any existing variables, as shown below, i.e. this will use a different username/password combination to download Aspire and also reference a local instance of MongoDB:

    docker build . -t aspire \
    --build-arg ASPIRE_MAVEN_USERNAME=<email_address> \
    --build-arg ASPIRE_MAVEN_PASSWORD=blahblah1234! \
    --build-arg ASPIRE_MONGODB_URI=192.168.1.207:27017


  2. Run the new container

This will run the latest version of the image named "aspire" in interactive mode, exposing port 50505 to the host interface:

sudo docker run -it -p 50505:50505 --name aspire aspire:latest

You should now be able to open a local browser window and access the Aspire Admin GUI on port 50505 i.e. http://<server_location>:50505

  3. To access a running container in interactive mode, you can run the following command:

    docker exec -ti aspire sh

    You can now log in to the container and navigate around as if it were a local instance. This will enable you to check any Aspire logs. For testing, Aspire can also be stopped and started, if required, using the normal commands.
  4. Once you are happy that the container is running as expected, you can upload the Aspire image to a container registry. This can be a cloud container registry, cloud storage or any private third-party repository.

Upload the image to a container registry
In order to use the image you have just built, it needs to be uploaded to a container registry. A container registry is a single place to manage Docker images, perform vulnerability analysis and decide who can access what with fine-grained control. For this scenario, the Google Container Registry was used, and the steps to upload the image to the registry are as follows:

  1. Configure Docker to use the "gcloud" command-line tool. This authenticates requests to the Container Registry.

    gcloud auth configure-docker


  2. Tag the image with a registry name

    docker tag aspire eu.gcr.io/<PROJECT_ID>/aspire:latest


  3. Push the image to the container registry

    docker push eu.gcr.io/<PROJECT_ID>/aspire:latest


Deploying as part of continuous integration/continuous deployment (CI/CD)
If your project incorporates CI/CD, then a technology such as, in our case, Google Cloud Build allows you complete control over defining custom workflows for building, testing and deploying across multiple environments.
In order to deploy the image from the container registry, a sample cloud build deployment file is required, as shown below (please note the reference to the username/password combination needed to download Aspire).
Please liaise with your own DevOps resources to confirm your requirements. The Aspire container can then be deployed manually or as part of an automated workflow, including, for example, setting the minimum/maximum number of pods and the pod machine types that get deployed.

    steps:
      - name: "gcr.io/cloud-builders/docker"
        args:
          [
            "build",
            "--build-arg",
            "ASPIRE_MAVEN_USERNAME",
            "--build-arg",
            "ASPIRE_MAVEN_PASSWORD",
            "-t",
            "gcr.io/<PROJECT_ID>/bitbucket.org/aspire",
            ".",
          ]
        secretEnv: ["ASPIRE_MAVEN_USERNAME", "ASPIRE_MAVEN_PASSWORD"]

      - name: "gcr.io/cloud-builders/docker"
        args: ["push", "gcr.io/<PROJECT_ID>/bitbucket.org/aspire"]

    secrets:
      - kmsKeyName: projects/<PROJECT_ID>/locations/europe-west1/keyRings/blahblabhblah/cryptoKeys/aspire-build
        secretEnv:
          ASPIRE_MAVEN_USERNAME: CiQAXBzYgnYHKVthDeg3irhd4IY5diW0MwmZnwd9z+dg9KxfNs8SOgBT5dkb9haWWWoqTYBa+7rJdkQ4hz8c4wrdwwRm61COL5sblwsOtQ4Va5VBBQEbRq86MsZOFDp09TU=
          ASPIRE_MAVEN_PASSWORD: CiQAXBzYgqU3l4x8R1EyygiQEtKHyq8CEsDncxmzAaDZtuus+sQSMQBT5dkbeNG8w5FbkuzqPqAFqVKjYQZAGxHtCc1Q3u7zO4qAEaI4RaWedv0feynP44g=