Handling multiple services in your development environment

Over the past year we have been switching to a service-oriented architecture (SOA). Our first service, Salk, was for accessing lab data. It provided an API consumed by ICIS Staff, one of our main applications. There was no user-facing UI, though there was one exposed to developers for managing the system.

Next, we were feeling some pain from the bloat in ICIS Staff, specifically around the appointment scheduling component, so we wanted to extract it into its own service (Cronos). As well as providing an API for managing patient appointments in ICIS Staff, Cronos would have a user-facing UI for managing staff member availability. Before the extraction we decided to build a centralized authentication service (Snowflake), an OmniAuth service providing single sign-on for ICIS Staff, Cronos, and any future services we might add. With Snowflake in place, adding new services became relatively easy. Right now we have the following applications/services in our SOA (with more in the pipeline).

  • ICIS Staff, our clinical application used by our coworkers in our practices
  • ICIS Patients, our clinical application used by our patients
  • Salk1, for electronic lab results
  • Snowflake2, for authentication
  • Cronos, for practice appointments and scheduling
  • Bouncah3, for patient eligibility
  • SecretService, for secret stuff

There have been plenty of blog posts about why to use a SOA or how to build one. Today, though, I want to talk about how to make developing with multiple dependent services as seamless as possible. Using Boxen, tmux, and tmuxinator, we were able to get down to a single command that launches our whole ICIS suite.

Launching the ICIS services with tmuxinator

The Problem

For the most part we can work on each application and service separately during development, but when working through an integration we often want to work on multiple systems at once. This is especially true when spiking on a problem.

To run our main applications, ICIS Staff & ICIS Patients, in this manner you need at least Snowflake and Cronos running as well. These are all Rails apps, so starting from scratch you need to:

  1. check out all the repos from GitHub
  2. run rake db:create db:migrate in each app
  3. seed data
  4. set the environment variables in each app/service (e.g. service URLs and ports, API tokens, etc.)
  5. foreman start each app
  6. register each application in Snowflake for OmniAuth
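Step 5 assumes each app has a Procfile for foreman to read. A minimal one might look like the following (this is an illustrative example, not one of our actual Procfiles):

```
# Procfile -- foreman starts one "web" process per app
web: bundle exec unicorn -c config/unicorn/development.rb
```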

And there may be more steps that I have missed. Not only is this difficult to manage, it can also be confusing when you pull in changes to a repo and it stops working because it suddenly depends on a new service.

In the beginning we found ourselves solving the same problems over and over: the developer who created a new service would have to go around explaining how to get it up and running. This wasted a huge amount of time. One solution is to document the hell out of the setup, but that takes time too, and we figured that automating the setup would not take much longer than documenting it (especially once we had a pattern in place).

Easing the pain

Checking out the repositories (problem 1)

We needed automation! Boxen, which we were already using to set up developer laptops with the correct tools, can also be used to install an organization's projects. By defining projects in modules/projects/manifests we could ensure all our services were deployed to the same location (~/src/) on each developer's machine.

    $ cd /opt/boxen/repo
    $ ls modules/projects/manifests
    all.pp             secret_service.pp  icisstaff.pp
    bouncah.pp         icispatients.pp    snowflake.pp
# modules/projects/manifests/snowflake.pp

class projects::snowflake {
  boxen::project { 'snowflake':
    postgresql    => true,
    nginx         => true,
    ruby          => '1.9.3-p392',
    source        => 'IoraHealth/snowflake',
  }
}

Having the location of our repos be consistent across developer laptops is good. It allows us to write simple shell scripts to launch services that we know will work across all laptops.

To simplify how services communicate we could hard-code default ports on localhost, but that gets confusing: it is hard to remember which port a service runs on. The name of a service, however, is easy to remember. Boxen can help here too. As long as your app is configured to listen on a socket at #{ENV['BOXEN_SOCKET_DIR']}/<project>, and you set nginx => true in the project manifest above, it will be available locally at http://<project>.dev

# ~/src/snowflake/config/unicorn/development.rb
listen "#{ENV['BOXEN_SOCKET_DIR']}/snowflake"
# ...

Creating and migrating the database (problem 2)

Boxen will also create the database for your application and migrate it.

Once Boxen is configured with your applications, you need to run boxen all to install them. For each project, boxen all will:

  • Check out the project’s repo into ~/src/<project>
  • Copy the dotenv file into the checked out project
  • Create the database for the project
  • Create an nginx config file for the project (/opt/boxen/config/nginx/sites/<project>)

Seed data (problem 3 & 6)

We have a staging setup that has acquired a lot of data over the years, so we seed our development apps/services from the staging database. We have a bash script that asks which development databases you want to replace with a copy from staging. It makes a local dump, seeds the local database, and then runs any migrations that are needed.

We also add development-specific data to our staging environment. In Snowflake we register applications for OmniAuth that point to .dev URLs. Then when we seed our dev environment with staging data, everything is set up and ready to go.

This script was simple to write since our projects are installed in identical locations across the team. The script itself could be improved, but it works for now. As we add more services we’ll likely DRY it up some more.

#!/bin/bash

dump_remote_database() {
  local dump_file=$1
  local remote_db_name=$2

  echo "Dumping ${remote_db_name} to ${dump_file}"
  # Port 9999 is the local end of the SSH tunnel opened below
  pg_dump -h localhost -p 9999 -U $remote_db_name -Fc -c -f $dump_file $remote_db_name
}

recreate_dev_database() {
  local app_name=$1

  echo "Bringing down ${app_name} and recreating its database"
  sudo pkill -SIGTERM -f ${app_name}_development

  bundle check
  if [ $? != 0 ]; then
    echo 'Running bundle install'
    bundle install
  fi
  bundle exec rake db:drop db:create
}

restore_dev_database() {
  local app_name=$1
  local dump_file=$2

  echo "Restoring ${app_name} database from ${dump_file}"
  pg_restore -O -x -n public -d ${app_name}_development $dump_file
}

migrate_and_prepare_test() {
  echo "Migrating the database and preparing test"
  bundle exec rake db:migrate db:test:prepare
}

restore_dev_from_staging() {
  local app_name=$1
  local remote_db_name=$2
  local dump_file=/tmp/${app_name}-${TIMESTAMP}.dmp
  cd $HOME/src/$app_name

  echo ''
  dump_remote_database $dump_file $remote_db_name
  recreate_dev_database $app_name
  restore_dev_database $app_name $dump_file
  migrate_and_prepare_test
}

display_options() {
  local index=0
  echo "Please select which applications you'd like to replace"
  echo ''
  echo "    ${index}: all"
  for app in "${APPS[@]}"; do
    index=$((index + 1))
    echo "    ${index}: $app"
  done
  echo ""
  echo "e.g. 1,2"
  echo ""
  read -e -p '> ' APP_NUMBERS
}

prompt_for_sudo_passwd() {
  echo "Enter your system password"
  sudo ls >/dev/null
}

APPS=('icispatients' 'icisstaff' 'snowflake' 'cronos' 'bouncah')

read -e -s -p 'Enter the pg password for staging:' pgpassword
export PGPASSWORD=$pgpassword

echo ''
echo ''
echo 'Please wait while we connect to staging'
source ~/.bashrc
ssh $BASTION_SERVER_IP -L 9999:$DB_SERVER:5432 -N &
sleep 8

prompt_for_sudo_passwd
display_options

TIMESTAMP=`date "+%Y-%m-%d---%H-%M"`

if [[ "${APP_NUMBERS}" =~ [10]+ ]]; then
  restore_dev_from_staging 'icispatients' 'patients_staging'
fi

if [[ "${APP_NUMBERS}" =~ [20]+ ]]; then
  restore_dev_from_staging 'icisstaff' 'icis_staging'
fi

if [[ "${APP_NUMBERS}" =~ [30]+ ]]; then
  restore_dev_from_staging 'snowflake' 'snowflake_staging'
fi

if [[ "${APP_NUMBERS}" =~ [40]+ ]]; then
  restore_dev_from_staging 'cronos' 'cronos_staging'
fi

if [[ "${APP_NUMBERS}" =~ [50]+ ]]; then
  restore_dev_from_staging 'bouncah' 'bouncah_staging'
fi

# Tear down the SSH tunnel
kill %1
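The selection logic in the script above relies on each app's regex also matching 0, so entering 0 replaces every database. A minimal sketch of that check (the matches helper is illustrative, not part of the real script):

```shell
#!/bin/bash
# Demonstrates the bracket-expression checks used above: each app's
# pattern includes 0, so "0" (meaning "all") selects every app.
matches() {
  local selection=$1 pattern=$2
  if [[ "$selection" =~ $pattern ]]; then echo yes; else echo no; fi
}

matches "1,3" '[10]+'   # app 1 selected -> yes
matches "0"   '[20]+'   # "all" selects app 2 too -> yes
matches "4,5" '[30]+'   # app 3 not selected -> no
```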

Environment files (problem 4)

We can ensure our team has the correct environment variables for each app/service by copying them from Boxen into the project’s .env file. If you create the file modules/projects/files/<project>/dotenv in your Boxen repo, Boxen will automatically copy it to ~/src/<project>/.env
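A dotenv file is just a list of KEY=VALUE lines. Here is a sketch of what one might contain; the variable names below are illustrative, not our actual configuration:

```shell
# Hypothetical contents of modules/projects/files/snowflake/dotenv,
# which Boxen copies to ~/src/snowflake/.env.
# Sibling services are reached by name thanks to the nginx/.dev setup above.
ICIS_STAFF_URL=http://icisstaff.dev
CRONOS_URL=http://cronos.dev
SESSION_SECRET=development-only-secret
```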

Bringing up the services (problem 5)

Now that our machines all have the services installed in the same location we can get to work on automating starting up the suite.

For this I use tmux and the tmuxinator gem. Tmux provides pane, window, and session management. Check out Thoughtbot’s A tmux Crash Course post for a quick intro to tmux. Tmuxinator provides an easy (and programmable) way to manage complex tmux sessions.

With tmuxinator installed (gem install tmuxinator) we can create different configurations for different setups. I have an ‘icis’ configuration to launch our suite; the image above shows it running. I simply type mux icis and it launches a tmux session named ‘icis’, creates the windows and panes I have defined, and runs any commands I have specified.

# ~/.tmuxinator/icis.yml
name: icis
root: ~/src/
windows:
  - staff:
      layout: main-horizontal
      panes:
        - cd ~/src/icisstaff; bundle check || bundle; foreman start
        - cd ~/src/icisstaff; tail -f ~/src/icisstaff/log/development.log
  - snowflake:
      layout: main-horizontal
      panes:
        - cd ~/src/snowflake; bundle check || bundle; foreman start
        - cd ~/src/snowflake; tail -f ~/src/snowflake/log/development.log
  - patients:
      layout: main-horizontal
      panes:
        - cd ~/src/icispatients; bundle check || bundle; foreman start
        - cd ~/src/icispatients; tail -f ~/src/icispatients/log/development.log
  - cronos:
      layout: main-horizontal
      panes:
        - cd ~/src/cronos; bundle check || bundle; foreman start
        - cd ~/src/cronos; tail -f ~/src/cronos/log/development.log
  - secret:
      layout: main-horizontal
      panes:
        - cd ~/src/secret_service; bundle check || bundle; PORT=5003 foreman start
        - cd ~/src/secret_service; tail -f ~/src/secret_service/log/development.log
  - dev:
      layout: main-horizontal
      panes:
        - cd ~/src/icispatients

If I detach from the tmux session, I can reattach by typing mux icis again. This does not launch everything again; it just brings me back to where I was. I have many tmuxinator setups; each one launches in its own session, and I have a nice key combo to list and switch between them.

$ ls ~/.tmuxinator/
bouncah.yml   html_css.yml  ptz.yml
go.yml        icis.yml      scripts/
haskell.yml   ir_ptz.yml

I have customized my tmux commands to make working within tmux even more awesome.
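For instance, the session-switching key combo mentioned above is just tmux’s built-in choose-session bound to a key. A sketch of the kind of ~/.tmux.conf customizations I mean (the exact bindings are personal preference, not a required setup):

```
# ~/.tmux.conf (illustrative bindings)
set -g prefix C-a            # C-a is easier to reach than the default C-b
unbind C-b
bind-key C-a send-prefix
bind-key s choose-session    # list sessions and jump between them
bind-key | split-window -h   # more memorable split bindings
bind-key - split-window -v
```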

In closing

I am sure your configuration is unique to your organization but I hope that you can take some of these ideas and make the development process friendlier for your team (or just for yourself).

There is definitely an upfront cost to moving to a system like Boxen, but it pays off in the long run. Similarly, ramping up on tmux can take a little time, but I’d be lost without it today.

I also think this is a valuable setup even if you do not have an SOA. If you have a number of different applications, you can simply have a tmuxinator configuration file for each one.

Note: I also use Powerline to make the tmux info bar a little prettier (it displays the session and window names, and highlights the active window).

Where our service names come from

1. Salk: in honor of Jonas Salk.

2. Snowflake: We are all unique individuals after all.

3. Bouncah: A Boston pronunciation of bouncer. Only allowing in the eligible.