BLAM Docker Deployment


This guide walks through the steps for deploying the BLAM API, Workflow and other services in their Docker Containers on a Linux environment. It assumes you have already configured your servers, installed operating systems, and set up network and SSH access to the target Linux host system. You should be able to use PuTTY or similar for shell access and an FTP client for file transfer.

These steps are based around running a complete BLAM Deployment on a single Linux server. Some steps may need to be carried out more than once for larger deployments where BLAM Docker Containers are distributed across a number of different machines.

Files to Prepare

Each deployment requires at least one ‘docker-compose.yml’ file to define the Docker Containers, one ‘’ file, and a set of Nginx configuration files in a folder ‘nginx’, all to be prepared in advance.

Blue Lucy can provide examples, along with support in writing these files correctly for your particular deployment.

Example Commands

For some steps, example commands are given. These are to be executed directly from the /home/#user# directory, where #user# corresponds to the User Account used to connect to the system and configure the BLAM Deployment (e.g. ec2-user for AWS EC2 or azureuser for Microsoft Azure). Please substitute the correct value for #user# to match your environment.
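For example, the substitution can be captured once in a shell variable at the start of a session. The account name below is illustrative; use the one for your environment:

```shell
# Set this to the account you connect with (e.g. ec2-user, azureuser).
USER_ACCOUNT="ec2-user"

# All example commands in this guide are run from this home directory.
BLAM_HOME="/home/${USER_ACCOUNT}"
echo "${BLAM_HOME}"
```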


Docker Environment

Install the Docker Environment following the appropriate installation instructions for your Linux environment. Please see Docker’s Install pages for details:

Docker Compose

Install the Docker Compose tools following the appropriate instructions for your Linux environment. Please see Docker’s Install Docker Compose page for details:

Enable Docker on Startup

$ sudo systemctl enable docker

Start Docker

$ sudo service docker start

AWS Command Line Tools


Install the AWS CLI Version 2. Further information can be found on the Installing Amazon CLI page here:

$ curl "" -o ""
$ unzip
$ sudo ./aws/install

Configure AWS CLI

Configure AWS CLI for BLAM Container Access. The credentials will be provided by Blue Lucy for access to their Elastic Container Repository and an S3 Bucket location used for installation files.

$ aws configure

AWS Access Key ID = [ACCESS_KEY_ID provided by Blue Lucy]
AWS Secret Access Key = [SECRET_ACCESS_KEY provided by Blue Lucy]
Default region name = eu-west-1
Default output format = None
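Alternatively, the same values can be written directly to the AWS CLI configuration files rather than entered interactively. This is a sketch; the key values are placeholders for the credentials supplied by Blue Lucy:

```ini
# ~/.aws/credentials (placeholder values)
[default]
aws_access_key_id = ACCESS_KEY_ID_PROVIDED_BY_BLUE_LUCY
aws_secret_access_key = SECRET_ACCESS_KEY_PROVIDED_BY_BLUE_LUCY

# ~/.aws/config
[default]
region = eu-west-1
```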

Configure Folders and Files

Mount Network Storage

Create local directories within /mnt and mount any remote storage locations by modifying /etc/fstab.
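As a sketch, an NFS share could be added to /etc/fstab like this. The server address, export path and mount point are placeholders; use the values for your storage:

```
# /etc/fstab - example NFS entry (placeholder values)
10.0.0.50:/export/media  /mnt/media  nfs  defaults,_netdev  0  0
```

After editing /etc/fstab, create the mount point (mkdir -p /mnt/media) and run `sudo mount -a` to mount it.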

Create Required Folders

Through SSH, or using your choice of FTP client (e.g. FileZilla from Windows), create the required BLAM Directories (to match those configured in docker-compose.yml).

#user# corresponds to the User Account used to connect to the system and configure the BLAM Deployment (e.g. ec2-user for AWS EC2 or azureuser for Microsoft Azure). Please substitute the correct value for #user# to match your environment.

These are example locations for the BLAM directories, please check against your specific ‘docker-compose.yml’ files to ensure they are created in the correct locations for your deployment.
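For reference, the host directories you create must match the left-hand side of the volume mappings in your ‘docker-compose.yml’. A hypothetical service entry might look like this (the service name, image reference and container paths are illustrative only):

```yaml
services:
  blam-workflow:   # illustrative service name
    image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/blam-workflow:latest  # placeholder image
    volumes:
      - /home/ec2-user/blam/logs:/app/logs          # host path : container path
      - /home/ec2-user/blam/blidgets:/app/blidgets
```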

Create BLAM directory

$ mkdir -p /home/#user#/blam

Create sub-directories inside the BLAM Directory where required (/dmz, /logs, /blidgets). Systems with Workflow Runner containers on more than one machine need a configured Blidgets folder on each machine.

$ mkdir -p /home/#user#/blam/logs
$ mkdir -p /home/#user#/blam/blidgets
$ mkdir -p /home/#user#/blam/dmz
$ mkdir -p /home/#user#/blam/streaming-server-temp

Copy up Prepared Files

Copy ‘docker-compose.yml’ file into /home/#user#

Copy ‘’ file into /home/#user#

Copy whole ‘nginx’ folder into /home/#user#


$ sudo chmod +x
$ sudo ./

Download BLAM Workflow Blidgets

Download and Extract all Blidget Files from S3 to the Blidgets folder set for the Workflow container. Systems with multiple Workflow containers on more than one machine need the Blidgets downloaded and extracted on each machine.

$ aws s3api get-object --bucket bl-installation-media --key ./
$ unzip -d /home/#user#/blam/blidgets

BLAM Docker Containers

Log in to AWS for Docker (so that Docker can access the Docker Container images in our Elastic Container Repository)

$ aws ecr get-login-password --region eu-west-1 | sudo docker login --username AWS --password-stdin

Run Docker Compose

$ docker-compose pull # (get the Docker Container images specified in the Docker Compose file from our Elastic Container Repository)

First Time Database Setup

$ docker-compose up -d # (to run it as a daemon)
$ docker ps # (to check running services)

$ aws s3api get-object --bucket bl-installation-media --key ./
$ unzip -d ./blam-db-tool/
$ ./blam-db-tool/BLAM3DbMigrationTool # Run the DB Migration Tool and follow the prompts for Seeding the Database

Enter 1 – Seed Database

Enter 1 – Postgres Database type

Connection string: Host=localhost;Database=BLAM3;Username=postgres;Password=[#YOUR_DATABASE_PASSWORD as set in docker-compose.yml#]

Set your Organisation Name, Super Admin Username, Password and Email Address




Restart Database Container

$ sudo docker-compose up -d --no-deps --build --force-recreate db-postgres

Check Running Containers

$ sudo docker ps

If any of the containers show an issue, try to restart all containers.

$ sudo docker-compose down
$ sudo docker-compose up -d


With the Docker Containers running, check you can reach the BLAM Login page and log in.

To confirm the Workflow Container is running and correctly configured, check all Blidgets are displayed in the Workflow Builder.


Q: Using CentOS, there’s no DNS in the containers/the containers can’t see each other.