BLAM Docker Deployment
This guide goes through the steps for deploying the BLAM API, Workflow and other services in their Docker Containers on a Linux environment. It assumes you have already configured your servers, installed operating systems and set up network and SSH access to the target Linux host system. You should be able to use PuTTY or similar for shell access and an FTP client for file transfer.
These steps are based around running a complete BLAM Deployment on a single Linux server. Some steps may need to be carried out more than once for larger deployments, where BLAM Docker Containers may be distributed across a number of different machines.
Files to Prepare
Each deployment requires at least one ‘docker-compose.yml’ file defining the Docker Containers, one ‘init-letsencrypt.sh’ script, and a set of Nginx configuration files in a folder named ‘nginx’, all prepared in advance.
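For orientation only, a heavily abridged sketch of what such a ‘docker-compose.yml’ might contain is shown below. The service name ‘db-postgres’ and the ECR registry host appear elsewhere in this guide; the ‘blam-api’ service name, image tag, port and volume mappings are illustrative assumptions — Blue Lucy supplies the real file for your deployment.

```yaml
# Illustrative sketch only -- not the actual Blue Lucy file.
# "blam-api", the image tag and the volume paths are assumptions.
version: "3"
services:
  db-postgres:
    image: postgres:12
    environment:
      POSTGRES_PASSWORD: "#YOUR_DATABASE_PASSWORD#"
    ports:
      - "5432:5432"   # published so the DB Migration Tool can reach localhost
  blam-api:
    image: 365894649245.dkr.ecr.eu-west-1.amazonaws.com/blam-api:latest
    depends_on:
      - db-postgres
    volumes:
      - ./blam/logs:/logs
      - ./blam/dmz:/dmz
```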
Examples can be provided by Blue Lucy, along with support writing these files correctly for your particular deployment.
For some steps, example commands are given. These are to be executed directly from the /home/#user# directory where #user# corresponds to the User Account used to connect to the system and configure the BLAM Deployment (e.g. ec2-user for AWS EC2 or azureuser for Microsoft Azure). Please substitute the correct value for #user# to match your environment.
Install the Docker Environment following the appropriate installation instructions for your Linux environment. Please see Docker’s Install pages for details: https://docs.docker.com/engine/install/
Install the Docker Compose tools following the appropriate instructions for your Linux environment. Please see Docker’s Install Docker Compose page for details: https://docs.docker.com/compose/install/
Enable Docker on Startup
$ sudo systemctl enable docker
$ sudo service docker start
AWS Command Line Tools
*** IMPORTANT – DO THIS ON ALL SYSTEMS INCLUDING AMAZON LINUX 2 ***
Install the AWS CLI version 2. Further information can be found on the Installing the AWS CLI page here: https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-linux.html#cliv2-linux-install
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip
$ sudo ./aws/install
Configure AWS CLI
Configure the AWS CLI for BLAM Container access. The credentials will be provided by Blue Lucy for access to their Elastic Container Registry and the S3 bucket used for installation files.
$ aws configure
AWS Access Key ID = [ACCESS_KEY_ID provided by Blue Lucy]
AWS Secret Access Key = [SECRET_ACCESS_KEY provided by Blue Lucy]
Default region name = eu-west-1
Default output format = None
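After `aws configure` completes, the values are stored in plain-text files under ~/.aws. As a sketch with placeholder values, the resulting files look similar to this:

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = ACCESS_KEY_ID_PROVIDED_BY_BLUE_LUCY
aws_secret_access_key = SECRET_ACCESS_KEY_PROVIDED_BY_BLUE_LUCY

# ~/.aws/config
[default]
region = eu-west-1
```

Editing these files directly is an alternative to re-running `aws configure` if the credentials ever need to be changed.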
Configure Folders and Files
Mount Network Storage
Create local directories within /mnt and mount any remote storage locations by modifying /etc/fstab.
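As a sketch, an NFS mount added to /etc/fstab might look like the following; the server address, export path and mount point are placeholders to be replaced with your storage details:

```
nfs-server.example.com:/export/media  /mnt/media  nfs  defaults,_netdev  0  0
```

Create the mount point first (`sudo mkdir -p /mnt/media`), apply with `sudo mount -a`, and confirm the mount with `df -h`.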
Create Required Folders
Through SSH, or using your choice of FTP client (e.g. FileZilla from Windows), create the required BLAM Directories (to match those configured in docker-compose.yml).
As before, #user# corresponds to the User Account used to connect to the system and configure the BLAM Deployment (e.g. ec2-user for AWS EC2 or azureuser for Microsoft Azure); substitute the correct value for your environment.
These are example locations for the BLAM directories, please check against your specific ‘docker-compose.yml’ files to ensure they are created in the correct locations for your deployment.
Create BLAM directory
$ mkdir -p /home/#user#/blam
Create sub-directories inside the BLAM Directory as required (/dmz, /logs, /blidgets). Systems with Workflow Runner containers on more than one machine need a configured Blidgets folder on each machine.
$ mkdir -p /home/#user#/blam/logs
$ mkdir -p /home/#user#/blam/blidgets
$ mkdir -p /home/#user#/blam/dmz
$ mkdir -p /home/#user#/blam/streaming-server-temp
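The four mkdir commands above can also be run as a single command using bash brace expansion; the paths are examples and should be matched against the volume mappings in your ‘docker-compose.yml’:

```shell
# Create the BLAM directory tree in one command (bash brace expansion).
# Paths are examples -- match them to your docker-compose.yml volumes.
BLAM_HOME="$HOME/blam"
mkdir -p "$BLAM_HOME"/{logs,blidgets,dmz,streaming-server-temp}
```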
Copy up Prepared Files
Copy ‘docker-compose.yml’ file into
Copy ‘init-letsencrypt.sh’ file into
Copy whole ‘nginx’ folder into
Make the Let’s Encrypt initialisation script executable and run it:
$ sudo chmod +x init-letsencrypt.sh
$ sudo ./init-letsencrypt.sh
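The contents of the ‘nginx’ folder are deployment-specific and are provided by Blue Lucy. As a rough sketch, a reverse-proxy server block in there typically resembles the following; the domain name, upstream service name and port are placeholder assumptions:

```nginx
server {
    listen 443 ssl;
    server_name blam.example.com;  # placeholder domain

    # Certificates created by init-letsencrypt.sh / certbot
    ssl_certificate     /etc/letsencrypt/live/blam.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/blam.example.com/privkey.pem;

    location / {
        # "blam-api:5000" is an assumed docker-compose service name and port
        proxy_pass http://blam-api:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```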
Download BLAM Workflow Blidgets
Download and Extract all Blidget Files from S3 to the Blidgets folder set for the Workflow container. Systems with multiple Workflow containers on more than one machine need the Blidgets downloaded and extracted on each machine.
$ aws s3api get-object --bucket bl-installation-media --key BLAM3StandardBlidgets-latest.zip ./BLAM3StandardBlidgets-latest.zip
$ unzip BLAM3StandardBlidgets-latest.zip -d /home/#user#/blam/blidgets
BLAM Docker Containers
Log in to AWS for Docker (so that it can access the Docker Container images in the Blue Lucy Elastic Container Registry)
$ aws ecr get-login-password --region eu-west-1 | sudo docker login --username AWS --password-stdin 365894649245.dkr.ecr.eu-west-1.amazonaws.com
Run Docker Compose
$ docker-compose pull # (pull the Docker Container images specified in the Docker Compose file from the Elastic Container Registry)
First Time Database Setup
$ docker-compose up -d # (to run it as a daemon)
$ docker ps # (to check running services)
$ aws s3api get-object --bucket bl-installation-media --key BLAM3DbMigrationTool-linux-x64-v3-2106.zip ./BLAM3DbMigrationTool-linux-x64-v3-2106.zip
$ unzip BLAM3DbMigrationTool-linux-x64-v3-2106.zip -d ./blam-db-tool/
$ ./blam-db-tool/BLAM3DbMigrationTool # Run the DB Migration Tool and follow the prompts for Seeding the Database
Enter 1 – Seed Database
Enter 1 – Postgres Database type
Connection string: Host=localhost;Database=BLAM3;Username=postgres;Password=[#YOUR_DATABASE_PASSWORD as set in docker-compose.yml#]
Set your Organisation Name, Super Admin Username, Password and Email Address
Restart Database Container
$ sudo docker-compose up -d --no-deps --build --force-recreate db-postgres
Check Running Containers
$ sudo docker ps
If any of the containers show an issue, try restarting all containers.
$ sudo docker-compose down
$ sudo docker-compose up -d
With the Docker Containers running, check that you can reach the BLAM Login page, and log in.
To confirm the Workflow Container is running and correctly configured, check all Blidgets are displayed in the Workflow Builder.
Troubleshooting
Q: Using CentOS, there’s no DNS in the containers / the containers can’t see each other.