Intensive Code Unit

Iora Health's Technical Blog.

Handling multiple services in your development environment


Over the past year we have been switching to a service-oriented architecture (SOA). Our first service, Salk, was for accessing lab data. It provided an API consumed by ICIS Staff, one of our main applications. There was no user-facing UI, but there was one exposed to developers for managing the system.

Next, we were feeling some pain from the bloat in ICIS Staff, specifically around the appointment scheduling component, so we wanted to extract it into its own service (Cronos). As well as providing an API to manage patient appointments in ICIS Staff, Cronos would have a user-facing UI for managing staff member availability. Before the extraction we decided to build a centralized authentication service (Snowflake), an omniauth service providing single sign-on for ICIS Staff, Cronos, and any future services we might add. With Snowflake in place, adding new services became relatively easy. Right now we have the following applications/services in our SOA (with more in the pipeline):

  • ICIS Staff, our clinical application used by our coworkers in our practices
  • ICIS Patients, our clinical application used by our patients
  • Salk1, for electronic lab results
  • Snowflake2, for authentication
  • Cronos, for practice appointments and scheduling
  • Bouncah3, for patient eligibility
  • SecretService, for secret stuff

There have been plenty of blog posts about why to use an SOA or how to do it, but today I want to talk about how to make developing with multiple dependent services as seamless as possible. By using Boxen, tmux, and tmuxinator we were able to get down to one command that launches our ICIS suite.

[GIF of services launching in tmux]

The Problem

For the most part we can work on each application and service separately during development, but when working through an integration we often want to be working on multiple systems at once. This is especially true when spiking on a problem.

To run our main applications, ICIS Staff & ICIS Patients, in this manner you need to have at least Snowflake and Cronos also running. These are all Rails apps, so starting from scratch you need to:

  1. check out all the repos from GitHub
  2. rake db:create db:migrate each app
  3. seed data
  4. set the environment variables in each app/service (i.e. service urls and ports, api tokens, etc.)
  5. foreman start each app
  6. register each application in Snowflake for omniauth

And there may be more steps that I have missed. Not only is this difficult to manage, but it can be confusing when you pull changes into a repo and it stops working because it suddenly depends on a new service.

In the beginning we found that we were solving the same problems over and over: the developer who created a new service would have to go around explaining how to get it up and running. This wasted a huge amount of time. One solution is to document the hell out of the setup, but that takes time too, and we figured that automating it would not take much longer than documenting it (especially once we had a pattern in place).

Easing the pain

Checking out the repositories (problem 1)

We needed automation! Boxen, which we were already using to set up developer laptops with the correct tools, can also be used to install an organization's projects. By defining projects in modules/projects/manifests we could ensure all our services were deployed to the same location (~/src/) for each developer.

$ cd /opt/boxen/repo
$ ls modules/projects/manifests
all.pp             secret_service.pp  icisstaff.pp
bouncah.pp         icispatients.pp    snowflake.pp
cronos.pp
# snowflake.pp

class projects::snowflake {
  boxen::project { 'snowflake':
    postgresql    => true,
    nginx         => true,
    ruby          => '1.9.3-p392',
    source        => 'IoraHealth/snowflake'
  }
}

Having the location of our repos be consistent across developer laptops is good. It allows us to write simple shell scripts to launch services that we know will work across all laptops.

To simplify how services communicate we could specify default ports on localhost, but that gets confusing: it is hard to remember which port a service runs on. It is not difficult, however, to remember the name of the service. Boxen can help here too. As long as your app is configured to listen on a socket at #{ENV['BOXEN_SOCKET_DIR']}/<project>, and you set nginx => true (as in snowflake.pp above), it will be available locally at http://<project>.dev.

# ~/src/snowflake/config/unicorn/development.rb
listen "#{ENV['BOXEN_SOCKET_DIR']}/snowflake"
...

Creating and migrating the database (problem 2)

Boxen will also create the database for your application and migrate it.

Once Boxen is configured with your applications you need to run boxen all to install them. boxen all will do the following for each project:

  • Checkout the project’s repo into ~/src/<project>
  • Copy the dotenv file into the checked out project
  • Create the database for the project
  • Create an nginx config file for the project (/opt/boxen/config/nginx/sites/<project>)
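If you want a single catch-all project like the all.pp in the listing above, a class that simply includes each individual project class does the job. A trimmed-down sketch of that pattern (illustrative, not a copy of our actual all.pp):

# modules/projects/manifests/all.pp (illustrative sketch)
class projects::all {
  include projects::icisstaff
  include projects::icispatients
  include projects::snowflake
  include projects::cronos
  include projects::bouncah
  include projects::secret_service
}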

Seed data (problem 3 & 6)

We have a staging setup that has acquired a lot of data over the years, so we seed our development apps/services from the staging database. We have a bash script that asks which development databases you want to replace with a copy from staging. It makes a local dump, seeds the local database, and then runs any migrations that are needed.

We also add development-specific data to our staging environment. In Snowflake we registered applications for omniauth that point to .dev URLs, so when we seed our dev environment with staging data everything is set up and ready to go.

This script was simple to write since our projects are installed in identical locations across the team. The script itself could well be improved, but it works for now. As we add more services we'll likely DRY it up some more.

dump_remote_database()
{
  local dump_file=$1
  local remote_db_name=$2

  echo "Dumping ${remote_db_name} to ${dump_file}"
  pg_dump -h 127.0.0.1 -p 9999 -U $remote_db_name -Fc -c -f $dump_file $remote_db_name
}

recreate_dev_database() {
  local app_name=$1

  echo "Bringing down ${app_name} and recreating its database"
  sudo pkill -SIGTERM -f ${app_name}_development

  bundle check
  if [ $? != 0 ]; then
    echo 'Running bundle install'
    bundle install
  fi
  bundle exec rake db:drop db:create
}

restore_dev_database()
{
  local app_name=$1
  local dump_file=$2

  echo "Restoring ${app_name} databse from ${dump_file}"
  pg_restore -O -x -n public -d ${app_name}_development $dump_file
}

migrate_and_prepare_test() {
  echo "Migrating the database and preparing test"
  bundle exec rake db:migrate db:test:prepare
}

restore_dev_from_staging()
{
  local app_name=$1
  local remote_db_name=$2
  local dump_file=/tmp/${app_name}-${TIMESTAMP}.dmp
  cd $HOME/src/$app_name

  echo ''
  dump_remote_database $dump_file $remote_db_name
  recreate_dev_database $app_name
  restore_dev_database $app_name $dump_file
  migrate_and_prepare_test
}

display_options() {
  local index=0
  echo "Please select which applications you'd like to replace"
  echo ''
  echo "    ${index}: all"
  for app in ${APPS[@]}; do
    index=$[index + 1]
    echo "    ${index}: $app"
  done
  echo ""
  echo "e.g. 1,2"
  echo ""
  read -e -p '> ' APP_NUMBERS
}

prompt_for_sudo_passwd() {
  echo "Enter your system password"
  sudo ls >/dev/null
}

APPS=('icispatients' 'icisstaff' 'snowflake' 'cronos' 'bouncah')
display_options

read -e -s -p 'Enter the pg password for staging:' pgpassword
export PGPASSWORD=$pgpassword

echo ''
echo ''
echo 'Please wait while we connect to staging'
source ~/.bashrc
ssh $BASTION_SERVER_IP -L 9999:$DB_SERVER:5432 -N &
sleep 8

prompt_for_sudo_passwd

TIMESTAMP=`date "+%Y-%m-%d---%H-%M"`

if [[ "${APP_NUMBERS}" =~ [10]+ ]]; then
  restore_dev_from_staging 'icispatients' 'patients_staging'
fi

if [[ "${APP_NUMBERS}" =~ [20]+ ]]; then
  restore_dev_from_staging 'icisstaff' 'icis_staging'
fi

if [[ "${APP_NUMBERS}" =~ [30]+ ]]; then
  restore_dev_from_staging 'snowflake' 'snowflake_staging'
fi

if [[ "${APP_NUMBERS}" =~ [40]+ ]]; then
  restore_dev_from_staging 'cronos' 'cronos_staging'
fi

if [[ "${APP_NUMBERS}" =~ [50]+ ]]; then
  restore_dev_from_staging 'bouncah' 'bouncah_staging'
fi

kill %1

Environment files (problem 4)

We can ensure the whole team has the correct environment variables for each app/service by copying them from Boxen into the project's .env file. If you create the file modules/projects/files/<project>/dotenv in your Boxen repo, Boxen will automatically copy it into ~/src/<project>/.env.
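As an illustration, the dotenv for a service mostly holds the URLs and tokens of its peers; the file below is a hypothetical example rather than our actual configuration:

# modules/projects/files/cronos/dotenv (hypothetical example)
SNOWFLAKE_URL=http://snowflake.dev
ICIS_STAFF_URL=http://icisstaff.dev
SNOWFLAKE_CLIENT_ID=dev-client-id
SNOWFLAKE_CLIENT_SECRET=dev-client-secret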

Bringing up the services (problem 5)

Now that our machines all have the services installed in the same location we can get to work on automating starting up the suite.

For this I use tmux and the tmuxinator gem. Tmux provides pane, window, and session management. Check out Thoughtbot's A tmux Crash Course post for a quick intro to tmux. Tmuxinator provides an easy (and programmable) way to manage complex tmux sessions.

With tmuxinator installed (gem install tmuxinator) we can create different configurations for different setups. I have an 'icis' configuration to launch our suite; the image above shows it running. I simply type mux icis and it launches a tmux session named 'icis', creates the windows and panes I have defined, and runs any commands I have specified.

# ~/.tmuxinator/icis.yml
name: icis
root: ~/src/
windows:
  - staff:
      layout: main-horizontal
      panes:
        - cd ~/src/icisstaff; bundle check || bundle; foreman start
        - cd ~/src/icisstaff; tail -f ~/src/icisstaff/log/development.log
  - snowflake:
      layout: main-horizontal
      panes:
        - cd ~/src/snowflake; bundle check || bundle; foreman start
        - cd ~/src/snowflake; tail -f ~/src/snowflake/log/development.log
  - patients:
      layout: main-horizontal
      panes:
        - cd ~/src/icispatients; bundle check || bundle; foreman start
        - cd ~/src/icispatients; tail -f ~/src/icispatients/log/development.log
  - cronos:
      layout: main-horizontal
      panes:
        - cd ~/src/cronos; bundle check || bundle; foreman start
        - cd ~/src/cronos; tail -f ~/src/cronos/log/development.log
  - secret:
      layout: main-horizontal
      panes:
        - cd ~/src/secret_service; bundle check || bundle; PORT=5003 foreman start
        - cd ~/src/secret_service; tail -f ~/src/secret_service/log/development.log
  - dev:
      layout: main-horizontal
      panes:
        - cd ~/src/icispatients;

If I detach from the tmux session, I can reattach by typing mux icis again. This will not relaunch everything; it just brings me back to where I was. I have many tmuxinator setups. Each one launches in its own session, and I have a nice key combo to list/switch between them.

ls ~/.tmuxinator/
bouncah.yml   html_css.yml  ptz.yml
go.yml        icis.yml      scripts/
haskell.yml   ir_ptz.yml

I have customized my tmux commands to make working within tmux even more awesome.
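The session list/switch combo is nothing fancy; a binding along these lines in ~/.tmux.conf does the trick (shown as an illustration rather than my exact config, and tmux binds prefix + s to a session chooser by default anyway):

# ~/.tmux.conf (illustrative)
# prefix + C-j pops up an interactive list of sessions to jump between
bind-key C-j choose-session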

In closing

I am sure your configuration is unique to your organization but I hope that you can take some of these ideas and make the development process friendlier for your team (or just for yourself).

There is definitely an upfront cost to moving to a system like Boxen, but it pays off in the long run. Similarly, ramping up on tmux can take a little time, but I'd be lost without it today.

I also think this is a valuable setup even if you do not have an SOA. If you have a number of different applications you can simply have a tmuxinator configuration file for each one.

Note: I also use Powerline to make the tmux info bar a little prettier (it displays the session and window names, and highlights the active window).

Where our service names come from

1. Salk: in honor of Jonas Salk.

2. Snowflake: We are all unique individuals after all.

3. Bouncah: A Boston pronunciation of bouncer. Only allowing in the eligible.

Speeding Up the Konacha JavaScript Testing Framework with Views


Konacha can only run as fast as Rails can serve it. Konacha's readme suggests including the JavaScript you need to run your tests in a JavaScript spec helper, but this can result in excessively expensive asset compilation. Replacing Konacha's iframe view with your own implementation can yield a substantial improvement in execution time for a large test suite.

Moving to Konacha

Recently the Iora engineering team decided to start using Konacha as our JavaScript test runner after a few of us had started using Konacha in extra-curricular projects and fallen in love with Mocha, Chai, and the bustling Chai ecosystem. In fact, we loved Konacha so much that we decided to forgo our usual, conservative approach to tool adoption (port it as you touch it) and instead embarked on “Chaimageddon”: an afternoon devoted to all hands on deck porting of Jasmine tests to Chai. Chaimageddon was a big success (we migrated over half of our very large JavaScript test suite), and pull requests full of ported tests came flooding in.

Everything looked great, until we merged…

bundle exec rake konacha:run

A few seconds pass and I decide to grab a cup of coffee; I return and it’s still running. Ten minutes later, it finishes. Oooof. We had only ported about half of our tests and already the Konacha run was more than twice as long as the Jasmine run had been.

Tailing log/development.log revealed the following:

Processing by Konacha::SpecsController#iframe as HTML
  Parameters: {"name"=>"OptionPresenterSpec"}
  Rendered /Users/myke/.rvm/gems/ruby-1.9.2-p320@icis/gems/konacha-2.6.0/app/views/konacha/specs/iframe.html.erb (1599.6ms)
  Completed 200 OK in 1657ms (Views: 1605.3ms | ActiveRecord: 0.0ms)

Since the iframe endpoint gets hit once per spec, there was a ≈ 1500ms request for every spec file.

After profiling the iframe action in the Konacha engine the source of the sluggishness became obvious: we were requiring our spec_helper.js in each spec file, which looked something like this (note that this spec_helper contains requires for all supporting JavaScript, as suggested in the Konacha readme):

#= require application
#= require sinon
#= require sinon-chai
#= require js-factories
#= require chai-backbone
#= require chai-jquery
#= require chai-as-promised
#= require chai-null

beforeEach ->
  # universal test setup

afterEach ->
  # universal test cleanup

Since we were running these tests in the Rails development environment, we would compile all of the assets shared across the entire suite for each spec file.

Defining a custom iframe view

When these shared assets are required through the spec helper, Sprockets is forced to compile them each time, as it follows the requires from the spec file itself, to the spec helper, and up the rest of the (potentially complex) require tree. An easy and cheap way to prevent needlessly re-compiling these assets each time the endpoint is hit is to require them in the view itself, rather than in the spec helper. Since Konacha is a Rails engine, this is as easy as defining your own iframe.html.erb view in app/views/konacha/specs/. We simply moved all of our required JavaScript from require statements in the spec helper to the view itself, so that:

<%= javascript_include_tag "chai", "konacha/iframe", debug: false %>

became:

<%= javascript_include_tag "chai", "konacha/iframe", "application", 'sinon', 'sinon-chai', 'js-factories', 'chai-backbone', 'chai-jquery', 'chai-as-promised', 'chai-null', debug: false %>

When all was said and done we had added the following app/views/konacha/specs/iframe.html.erb to our main app (almost identical to the one in konacha):

<!doctype html>
<html data-path="<%= @spec.path %>">
  <head>
    <meta http-equiv="content-type" content="text/html;charset=utf-8" />
    <title>Konacha Tests</title>

    <% @stylesheets.each do |file| %>
      <%# Use :debug => false for stylesheets to improve page load performance. %>
      <%= stylesheet_link_tag file, :debug => false %>
    <% end %>

    <%= javascript_include_tag "chai", "konacha/iframe", "application", 'sinon', 'sinon-chai', 'js-factories', 'chai-backbone', 'chai-jquery', 'chai-as-promised', 'chai-null', debug: false %>
    <%= javascript_include_tag @spec.asset_name %>
  </head>
  <body>
  </body>
</html>

And spec/konacha/spec_helper.js.coffee looked like:

# Notice there are no requires here!

beforeEach ->
  # universal test setup

afterEach ->
  # universal test cleanup

The Result

The benchmarks speak for themselves:

Run #   Time (m:ss)   Description
1       8:12.92       The existing implementation (Konacha 2.6.0)
2       10:47.82      The existing implementation (Konacha 2.6.0)
3       10:43.25      The existing implementation (Konacha 2.6.0)
4       9:58.47       The existing implementation (Konacha 2.6.0)
Average 9:55.615

1       2:40.99       Move requires from spec helper to iframe.html.erb (Konacha 2.6.0)
2       2:24.90       Move requires from spec helper to iframe.html.erb (Konacha 2.6.0)
3       1:09.45       Move requires from spec helper to iframe.html.erb (Konacha 2.6.0)
4       1:08.0        Move requires from spec helper to iframe.html.erb (Konacha 2.6.0)
Average 1:50.835

Benchmarks run using "time bundle exec rake konacha:run"

We plan on opening a pull request with a more elegant solution: require the spec_helper (or maybe konacha/manifest) from Konacha’s iframe.html.erb if they are defined, allowing the same performance boost without requiring a monkey patch of sorts (after all, as Konacha changes this view may change, or be eliminated entirely, leaving us in the dust!). But for now we’re happily running fast builds and back to loving Konacha.
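For the curious, the conditional include we have in mind would look roughly like this inside the engine's iframe view; the asset lookup and names are assumptions about how such a patch might be written, not code from Konacha itself:

<%= javascript_include_tag "chai", "konacha/iframe", debug: false %>
<% if Rails.application.assets.find_asset("spec_helper") %>
  <%= javascript_include_tag "spec_helper", debug: false %>
<% end %>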

Chazmine - a Vim Plugin for Jasmine-to-Chai Conversions


We’re converting our JavaScript tests from Jasmine to Chai.

object = {}

# a simple Jasmine assertion
expect(object).toBe object

# the equivalent Chai assertion
expect(object).to.equal object

But converting assertions for ~2k specs is a bummer.

[bummer GIF]

So I created Chazmine. It’s a Vim plugin that substitutes Jasmine assertions with Chai assertions. Install the Chazmine plugin, run the :Chaz command, and boom - your test file is Chai’d.
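Under the hood this kind of conversion is just a pile of substitutions. For the toBe example above, a hand-rolled command would look something like this (an illustration of the idea, not Chazmine's actual source):

" convert expect(x).toBe y into expect(x).to.equal y
" (the \> word boundary keeps toBeTruthy and friends untouched)
:%s/\.toBe\>/.to.equal/ge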

We’re adding new Jasmine-to-Chai substitutions to Chazmine as we need them. If you find a Jasmine-to-Chai substitution that’s not included there, or want to add something awesome, feel free to contribute.

Announcing Lift Off and Flight Plan


If you ever wanted to make new Rails apps just like a member of the Iora Health team, you are now in luck! We have released two new libraries on GitHub solely for provisioning new Rails applications.

Lift Off

Lift Off is a Ruby-based CLI that will create a new Rails application based upon our template. It's very simple to use. Just install it like so:

gem install lift_off

And then create a new application:

lift_off RapBattle
cd rap_battle
rails g scaffold MozillaZillYall

So easy!

Flight Plan

Flight Plan is a living, breathing Rails application that we use as our template. It has a nice collection of libraries that we use on every Rails app we make, and I believe it represents a very cutting-edge tech stack. It should leave you in a great place to start crafting code!

Why not a Rails Template?

I’ve used Rails templates in the past and I believe that they are extremely hard to maintain. I think an evolving Rails application ends up working out really well long term for keeping up to date. I want our curated choices of libraries to evolve over time – the barrier to modifying the template should be low.

Enjoy!

I hope you can get some enjoyment out of our curated Rails stack. Let me know how it goes for you if you build an app from Lift Off!

Eliminating Mock, Stub, and Spy Teardown with Sinon Collection Methods


If you bear with me, I’ll eventually get to demonstrating how you can eliminate test teardown. Before we can dive into code I need to provide you a little background on why test teardown is traditionally an important thing and how as Rubyists we’ve moved away from most of our teardown needs.

I have a strong love of the four phase unit test. I was first exposed to the delicious structure of a well defined test in xUnit and passed this knowledge down to the younger members of my coding tribe in jUnit, test-unit, and eventually RSpec. RSpec was actually a revolutionary testing framework for me because I came to the realization that I didn’t actually need to care about all four phases of the well structured unit test. As it turns out, computers are really good at handling mundane tasks such as destroying objects you just created in the scope of a test and returning the system to a sane state.

I became a three phase unit tester. It was glorious. Unfortunately, someone hired me to start writing well-tested JavaScript code and I was thrown back into the world of caring about tearing down my mocks, stubs, and spies. This is not the world I want to live in. Let’s look at how I returned to the blissful world of the three phase unit test through code:

I hate my four phase life - spec/models/patient.coffee
describe '#hasCareTeam', ->
  beforeEach ->
    @patient        = new ICIS.Models.Patient
    @hasDoctor      = sinon.stub @patient, 'hasDoctor'
    @hasHealthCoach = sinon.stub @patient, 'hasHealthCoach'

  describe 'has a doctor', ->
    beforeEach -> @hasDoctor.returns true

    it 'returns true', ->
      result = @patient.hasCareTeam()

      expect(result).toBeTruthy()

  describe 'does not have a doctor, has a health coach', ->
    beforeEach ->
      @hasDoctor.returns false
      @hasHealthCoach.returns true

    it 'returns false', ->
      result = @patient.hasCareTeam()

      expect(result).toBeFalsy()

  afterEach ->
    @hasDoctor.restore()
    @hasHealthCoach.restore()

I think the structure and readability of the test hold up really well until I hit that afterEach block. My first reaction to working with sinon spies was "Really? I have to restore them all?". The answer was yes. If you do not tear down your stubs/spies, you'll soon be in a world in which the order your tests run in matters. You may not feel this pain the first time you skip the restore method, but eventually you'll add a test that fails or behaves in an entirely bizarre manner.

We toyed around with keeping an array in the testing namespace, pushing sinon objects into it, and restoring each of them in a global afterEach block. That is, until we did a source dive on sinon and found sinon.collection. The hard part of the implementation was already done. You only need to perform two tasks.

First, add a global afterEach hook (we did ours in spec/support/lib/jasmine-sinon.coffee):

spec/support/lib/jasmine-sinon.coffee
afterEach ->
  sinon.collection.restore()

Second, create your mocks, stubs, and spies through sinon.collection:

Livin la vida loca - spec/models/patient.coffee
describe '#hasCareTeam', ->
  beforeEach ->
    @patient        = new ICIS.Models.Patient
    @hasDoctor      = sinon.collection.stub @patient, 'hasDoctor'
    @hasHealthCoach = sinon.collection.stub @patient, 'hasHealthCoach'

  describe 'has a doctor', ->
    beforeEach -> @hasDoctor.returns true

    it 'returns true', ->
      result = @patient.hasCareTeam()

      expect(result).toBeTruthy()

  describe 'does not have a doctor, has a health coach', ->
    beforeEach ->
      @hasDoctor.returns false
      @hasHealthCoach.returns true

    it 'returns false', ->
      result = @patient.hasCareTeam()

      expect(result).toBeFalsy()

Voila! No more need to restore each of your test spies in Jasmine, while you keep the ability to make assertions about the state of the system after declaring test spies.

Planning in the Clinic (Part 2 of 2)


Part 2 of 2 (for part 1, see http://icu.iorahealth.com/blog/2012/05/10/planning-in-the-clinic-1-of-2/)

In the earliest days of ICIS planning, stakeholders and members of the product and engineering teams noticed affinities between our ideal Electronic Medical Record (EMR) system and the project planning systems used in software and design.

We’d like to take some time to talk about these affinities with one of our stakeholders and co-designers of ICIS, Dr. Andrew Schutzbank, MD, MPH. Andrew blogs at www.schutzblog.com, and recently won a “Costs of Care” award for his story regarding how pharmaceutical cost-shifting prevented him from discharging a patient from the hospital.



Q. You’ve taken a special interest in gaming in medical software innovation. Where is that going to take us?

A. At the guidance of our awesome product manager, Jess Kadar, I have been reading Jane McGonigal’s book Reality is Broken. Everyone should read it (parent, doctor, gamer, non-gamer, patient …) because it portends the future. Much of my thinking about why I like games comes from her work and her collection of other’s work. Gaming is defined as voluntarily overcoming unnecessary obstacles. Involuntarily overcoming necessary obstacles is called work.

The key then is to overcome the drudgery of health care by making the work of health care fun. One of the best ways of doing this is socializing our health work. Whether through competition, leader boards, bragging, sharing, storytelling or just keeping each other up to date, there is tremendous power in games. We borrow this from Counterstrike and TF2 and Starcraft. However, what mostly fails is that the underlying game mechanics – taking pills, seeing doctors, getting mammograms – are not only not fun, but highly unpleasant. Starcraft was fun before it became the national sport of Korea. One could argue that repeatedly clicking buttons is not fun, but that is an oversimplification of what makes games games.

I think health care gaming is in its infancy because it is hard to design a game to motivate you to do something that you should already be motivated to do. Who doesn’t want to be in shape and healthy? And yet so few of us are. To try and solve that problem, early health game designers have given away a chance to win an iPad, or a trip to Aruba, or points towards purchases, effectively paying people via games to be helpful. While I think there are important ties between financial and health motivation (mostly that we give up our health for money in the form of poor/quick eating, deferred exercise and missed sleep), these games miss something important. Perhaps, getting back to the social nature of gaming, we need to reward people for getting each other healthy. I might eat a doughnut instead of winning game points, but there is no way in hell I am going to let you eat a doughnut instead of me winning game points.

Another component of non-medical games is that they are amazing at interface design. Many applications just have awful design, making you bend to the will of the designer. Not games. There is no other medium I can think of that so seamlessly sends so much data the way of the user, and has the user begging for more. There are plenty of games with bad interfaces, usually crowding the dustbins of Targets and Walmarts: I just cannot remember playing them (for very long). Contrast this with most medical software, which is just atrocious. I feel like EMR designers secretly hate doctors and wish to torture them, one check box click at a time. There is no way a game would be seriously released (and purchased!) in the sad interface state of so much medical software.

As a disclaimer I have always loved games. Since ColecoVision all the way through modern PC gaming, coming of age during the golden age of SNES RPGs, family board games (the brutal national sport of the Schutzbank household), endless chess matches with my dad (I only have 4 wins in ~26 years), pinball, etc., I have been hooked. I love the meditative state I enter while playing, the opportunity to overcome challenges, experience a story, improve skill, spend active time together with others and simply kick ass.

Q. In the O’Malley, et al., article “Are Electronic Medical Records Helpful for Care Coordination?” the authors write, “realizing EMRs’ potential for facilitating coordination requires evolution of practice operational processes” (Center for Studying Health System Change, 22 Dec. 2009). That’s a big claim. What do you think?

A. To quote Jess Kadar again: “You cannot create software to do your job for you. To write software to solve a problem, you have to know how to solve the problem.” Doctors are pretty bad at care coordination. Non-docs are even worse. Why would we think that people who either don’t coordinate care, or do it poorly, would be able to write software that makes it easy? Good words describing good processes precede good software. We have none of the above. How do you know? Just ask 4 doctors what care coordination is and expect 6 answers.

Q. They also claim “current fee-for-service reimbursement encourages EMR use for documentation of billable events (office visits, procedures) and not of care coordination (which is not a billable activity).” Can you describe some concrete cases where the EMR’s provision for what is billable has resulted in information loss in care coordination?

A. Every time something is documented? That is probably a glib answer to a serious question. Docs started writing notes to document interactions with patients, to communicate the day's meeting to colleagues, future selves and lawyers. Not terrible, but patients undergo diseases and care continuously. We have hard-coded a discrete method for dealing with continuous problems. Unfortunately the note is just not a good enough technology to handle reality.

Writing a note in a modern EMR is actually a game! Like a perverse version of high stakes Yahtzee, I have to satisfy a number of categories in a number of columns to increase my note score ($). This results in multiple ways of cheating the game – check boxes, templates, copy/pasted text carried from previous notes, words/sentences added as flourish like "All other Review of Systems negative" to score points in the game the easy way. Not that we didn't do the work, but it is hard enough to ask all of the damn questions, then write down that you did, then write down the answers, then try and make any sense of it. This really is not the EMR's fault; it is actually the fault of regulators/payors abstracting what was once a research tool (the E&M code sheet) and turning it into a torture device. I will try and withhold my comments on top-down bureaucratized medicine, but the billing sheet is a perfect example.

Care coordination is all about finesse. Calling a patient twice. Emailing a doctor buddy to get someone in earlier. Recognizing just after the patient left that what you just explained did not register with them despite their assurances, or even worse, remembering to do something extra. Care coordination is about communication, influence, relationship building, trust and follow through. It is really hard to build software to do that; software can only support the human activity. The more I think about medical records, the more they seem to be ideally a great combination of a CRM and a project management tool, where each patient is both a client and a project.

Q. Andrew, thanks. And game on!

Planning in the Clinic (Part 1 of 2)


In the earliest days of ICIS planning, stakeholders and members of the product and engineering teams noticed affinities between our ideal Electronic Medical Record (EMR) system and the project planning systems used in software and design.

We’d like to take some time to talk about these affinities with one of our stakeholders and co-designers of ICIS, Dr. Andrew Schutzbank, MD, MPH. Andrew blogs at www.schutzblog.com, and recently won a “Costs of Care” award for his story regarding how pharmaceutical cost-shifting prevented him from discharging a patient from the hospital.



Q. Andrew, one of the things that interested you in Trajectory, Pivotal Tracker, and Basecamp was the ability to manage tasks. How is that important in the clinical setting, and why isn’t it such a big feature in existing EMRs?

A. A task list (or scut list) is really at the heart of medical work. When caring for a patient there are myriad things that must be done, spanning routine clinical work such as ordering labs, more complex synchronous tasks such as communications with a family member or another doctor, and synthesizing complex clinical information to determine the next iteration of diagnosis, further testing and therapy for a given issue. And these tasks are generated by multiple members of a team and assigned to multiple members of a team. Given the stresses of clinical life, the performance pressure of having a patient in front of you, and little undistracted time to do work, the more complex tasks often get shifted to later, frequently never. Chaos and distraction increase the likelihood that these tasks are forgotten. But because the work we generate needs to get done, it felt natural to me that tracking, managing and thinking about tasks was of prime importance. In other words, a task list really represents the “work” part of clinical work. If we are going to take advantage of information and networking technologies, I can’t think of a better application of those technologies to a problem so horribly flawed by paper and verbal communication.

As interns, we each kept our own self-generated lists of things to do for each patient, our scut lists. An intern’s day consists of making and checking boxes. Easy ones first, asynchronous hard ones second, and the synchronous hard ones (meetings, calls) last. This method was inherently flawed because no one could check in on my progress, see if I needed help, or lighten my load. When I delegated a task, I had few ways to verify that it was done without interrupting the person that I asked for help. One stroke of brilliance had us photocopy our paper scut lists at the end of the “rounds” when all decisions were supposedly made. This allowed everyone to at least have the same starting conditions but was barely superior to individual lists. The closest successful implementation in the physical world was a large but portable white board in an Intensive Care Unit (ICU) visible to all, edited by all and serving as a single source of truth.

It is not entirely true that tasks are absent from current EMRs; it is just that the task feature, reflecting the culture of medicine that spawned it, fails to recognize that the doctor doesn't do everything. Other EMRs I've used manage tasks like a message system, where disenfranchised phone/secretarial staff can transfer work, untriaged and unstarted, to the appropriate physician. As a result, the task list/inbox feature of most EMRs is amongst the most dreaded, as physicians' days end with a long string of unprocessed work. By contrast, because we went about building our team and culture differently, our task list is (and I can say this now that I am practicing) a method of team communication, delegation and accountability. It is by no means perfect, but leaps and bounds beyond what I have seen before.

There is a more fundamental issue here, which is the notion of "magic time" in medicine. "Magic time" is the time when doctors are supposed to do all of the things they do not in front of patients, like think about clinical issues, research clinical problems, make calls to family & specialists, read and write correspondence, etc. The problem is that there are always more patients with pressing concerns, and so there never is any time without patients in front of the doctor. As a result, we promise to call/write/think/research but often fail to do so, not out of malice, but because no opportunity exists to do so. With a task list we can begin to enumerate our day's work, and do crazy things like assigning time to complete each task, and begin to actually recognize when we are overloaded, rather than waiting until the walls come crumbling down around us.

It is a joke that to make it through medical school you better have a good ToDo list. There are hundreds of forms, facts, meetings, certifications, etc. that must be met just to make it to doctor, almost none of which require the intelligence, creative thinking or compassion we hope make up the core of our physicians. I confess that I am a ToDo junkie, starting with beloved Palm Desktop and have now painstakingly migrated my ToDos to a platform that is iPhone, PC, iPad, and Mac OS friendly.

Q. In engineering nowadays, we tend to work off of a single backlog of tasks, with the highest priority task at the top, with the expectation that it will get done first. In a perfect world, how are medical tasks organized?

A. In a perfect world, each member of the team (patient included) would work off a common list and do the highest priority thing that they can do at any given time. Without sounding too naïve or presumptuous, I imagine there is more fungibility in who can do what on an engineering team. However, most of good primary care is paperwork and nearly anyone can do it.

Having said that, there are two major problems with what I just said. The first is that priority means different things to different people—this is not unique to medicine. Threats to life and limb usually make it to the top. But for most of office-based medicine, there are murkier things. Important tasks like “eat more vegetables” or “remember to recheck the creatinine” often get put aside in favor of more urgent tasks. Paperwork forms, especially related to employment and always due today by 5pm, medication refills on the last day of the prescription and requests from regulatory bodies always bubble to the top of the list. Much of this is our fault as a clinical team—our process to handle such requests is woefully inadequate. Some would be alleviated by advanced planning on behalf of our patients (like if they had a todo list?). However, so many problems in medicine are not caused by either party in the room, but rather the result of decisions by regulators, lawyers & payors, who seem to assume all patients and doctors are either frauds or morons, loading up lives with meaningless paperwork designed (but unable) to mitigate such putative abuse.

The second problem with the ordering of tasks is that we have more than one patient at a time. Always. Is Ms. Jones’s high blood sugar more or less of a problem than Mr. Smith’s high cholesterol? Can such a distinction even be made? Rather than looking at importance, a near impossible task even with infinite resources, we again turn to urgency as our guide. Ms. Jones is coming in tomorrow, Mr. Smith next week. Therefore, let us take care of her sugar first.

This is a long way of saying that I don’t know what an ideal task list would look like, other than visible to everyone on the team, and dynamic. Which is how we designed it for ICIS.

Q. How do you deal with information overload?

A. Caffeine, long nights, frustration, delegation, more frustration, more caffeine, refusing to do certain things (looking at you insurance company forms) and finally, redesigning primary care from the ground up.

I actually found that in my private and professional life, the ability to get everything down on one list, spend a little time researching/describing the task problem when needed, and then assigning a time, place and resources to solving the problem has positive effects on both my psyche and my productivity. In other words, I deal with information overload with a really good Task list. (Notice a theme yet?)

Q. How do you balance what is the software’s responsibility and what is the doctor’s responsibility?

A. That’s easy—in front of the patient it is always the software’s fault. More seriously, something I have learned building software is to remember that it is just words. What I mean by that is that we must have a really good description of our problem and the solutions we would like to try before committing them to software. Not to be all waterfall, as it is rare that any solution we plan will actually be the right one, but rather that if I cannot describe my process to handle a problem without software, there is no way in hell I can create software to solve my problems.

Software is good at repeatable steps, never forgetting, and churning numbers. It can either recognize patterns, or help me recognize patterns (which is a huge part of clinical decision making, and probably all of human endeavor). It doesn’t need much sleep and can be in many places at once, but cannot feed or care for itself. It also, short of Skynet, cannot learn like a doctor can. Therefore it is the job of software to do what it is good at: automate steps that require no intervention, recognize patterns when my brain cannot, forget nothing, and help display things for me so I can use my honed clinical brain.

I tell all of my students and residents on day 1 that the most important part of being on a clinical team is that you have to do what you say you are going to do. Without confident delegation, a team will grind to a halt, and melt into a dysfunctional and wasteful group. The corollary is that to be a good team member, you must raise alarm as soon as you cannot do what you said you will. I expect the same from my software. If it claims that it will warn me about drug interactions, it better do it every time. If it is going to remind me that I didn’t do something that I set out to do, it better do it every time. If it said that my prescription was delivered, it better have been delivered. And if not, it better get my attention and tell me otherwise.

The time will come when software can do more and more of the physician work. What that really means is that we have gotten really good at describing the problems we face, and how we go about finding a solution. With better ideas come better words; better words lead to better software. Part of why I like being in the software world is that I think I have pretty good words for my processes.



Coming Thursday, May 17: Part 2 of 2 (thoughts on gaming, operations, and billing)

Expressing PostgreSQL timestamps without zones in local time


TL;DR

To convert from UTC in a PostgreSQL database to a local time, convert twice. E.g., select starts_at at time zone 'UTC' at time zone 'US/Pacific';

Shoutout

If someone knows a better way to do this, we’re all ears.

The problem

You properly save your data in PostgreSQL in UTC (For Rails, the default data type for timestamps in PostgreSQL is: “timestamp without time zone”). But you want to write some SQL reports that express those UTC dates in the local time, taking into account Daylight Savings Time. One of the reasons you might do this is because you are very concerned about the date portion of the timestamp: You might want to aggregate by a date that the report reader will understand. Why else? You might want to produce a report that can be exported to CSV and then imported into Excel so that the times are in the zone of the stakeholder using the spreadsheet.

So how do you get your query right in PostgreSQL?

This turns out to be non-obvious.

Cases

In 2012, daylight savings time began at 2 AM on 11 March 2012. So let’s compare two timestamps, one in standard time, the other in daylight savings.

The first will be at 1 PM US/Pacific on 8 March 2012. Since this is before the 11 March switchover to DST, the zone is PST (UTC-8 hours). This will be recorded in our database as UTC (without time zone): 2012-03-08 21:00:00. (A nice tool for helping with these translations is timeanddate.com, for instance: http://www.timeanddate.com/worldclock/converted.html?day=8&month=3&year=2012&hour=13&min=0&sec=0&p1=127&p2=0)

The second will be at 11 PM US/Pacific on 14 March 2012. Since this is after the 11 March switchover to DST, the zone is PDT (UTC-7 hours). This will be recorded in our database as UTC: 2012-03-15 06:00:00 (http://www.timeanddate.com/worldclock/converted.html?day=14&month=3&year=2012&hour=23&min=0&sec=0&p1=127&p2=0).

Experiments

First let’s start with simply getting a timestamp without a time zone:

select
timestamp '2012-03-08 21:00:00';

That produces 2012-03-08 21:00:00: Yay.

Now let’s try and view that timestamp in the US/Pacific timezone. Consulting the PostgreSQL documentation (http://www.postgresql.org/docs/9.1/static/functions-datetime.html#FUNCTIONS-DATETIME-ZONECONVERT-TABLE), we might try the “at time zone” syntax:

select
timestamp '2012-03-08 21:00:00',
timestamp '2012-03-08 21:00:00' at time zone 'US/Pacific';

That’s funny. We were expecting 2012-03-08 13:00:00-08 (see above). But here’s what we got (rearranging the output into rows):

2012-03-08 21:00:00
2012-03-09 06:00:00+01

Huh? Well, it happens that I'm running my database in Paris, France on May 3, 2012 (UTC+01). Here's what the documentation says about the use of "at time zone" when applied to a timestamp without a timezone: "Treat given time stamp without time zone as located in the specified time zone." Hmm. Then why is it showing "+01"? Well, it's because when PostgreSQL displays a timestamp, it does it in your local time zone (here's how it's described for an example in the docs: "The first example takes a time stamp without time zone and interprets it as MST time (UTC-7), which is then converted to PST (UTC-8) for display"). So let's check: what is 2012-03-09 06:00:00+01 in PST? It's Thursday, 8 March 2012, 21:00:00 (http://www.timeanddate.com/worldclock/converted.html?day=9&month=3&year=2012&hour=6&min=0&sec=0&p1=195&p2=127). Well that's dumb. All PostgreSQL did was take the literal timestamp value, pretend that it's actually PST, and then display it in my local time zone.

This last paragraph is why you’re reading this blog post, isn’t it? PostgreSQL’s “at time zone” is surprising.

So how are we going to fix it?

Well, we know that our timestamps really are in UTC. Therefore, we are going to convert them to UTC, then we’re going to convert again to our target:

select
timestamp '2012-03-08 21:00:00',
timestamp '2012-03-08 21:00:00' at time zone 'UTC',
timestamp '2012-03-08 21:00:00' at time zone 'UTC' at time zone 'US/Pacific';

Now we get:

2012-03-08 21:00:00
2012-03-08 22:00:00+01
2012-03-08 13:00:00

Notice that the time zone appears for the first conversion and disappears for the second. This is what the docs say should happen.

Finally, let’s try this pattern with our DST example. This is where we want to see 2012-03-15 06:00:00 (UTC) converted to local time 2012-03-14 23:00:00:

select
timestamp '2012-03-15 06:00:00',
timestamp '2012-03-15 06:00:00' at time zone 'UTC',
timestamp '2012-03-15 06:00:00' at time zone 'UTC' at time zone 'US/Pacific';

Results:

2012-03-15 06:00:00
2012-03-15 07:00:00+01
2012-03-14 23:00:00

Happy now?

And about time zone names and Daylight Savings Time

You’ll notice that in these examples I’ve scrupulously used the time zone “US/Pacific” – This is because the three-letter time zone abbreviations are already encoded for standard or daylight savings time. If you want the automatic conversion, use the full name. You can get a full list of the names with select * from pg_timezone_names; (see http://www.postgresql.org/docs/9.1/static/view-pg-timezone-names.html). Our application is a Rails application, so we typically use a case statement to convert from the Rails-style name to the PostgreSQL-style name:

select
  patients.created_at as "UTC",
  patients.created_at
  at time zone 'UTC'
  at time zone
    case practices.time_zone
      when 'Eastern Time (US & Canada)' then 'US/Eastern'
      when 'Pacific Time (US & Canada)' then 'US/Pacific'
    end as "Local"
from  patients,
      practices
where practices.id = patients.practice_id

At some point we might create a helper table to manage the time zone name conversion.
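A sketch of what that helper table and join might look like (the table and column names here are made up for illustration):

-- hypothetical helper table mapping Rails zone names to PostgreSQL zone names
create table time_zone_names (
  rails_name text primary key,
  pg_name    text not null
);

insert into time_zone_names (rails_name, pg_name) values
  ('Eastern Time (US & Canada)', 'US/Eastern'),
  ('Pacific Time (US & Canada)', 'US/Pacific');

select
  patients.created_at as "UTC",
  patients.created_at at time zone 'UTC' at time zone tz.pg_name as "Local"
from patients
join practices on practices.id = patients.practice_id
join time_zone_names tz on tz.rails_name = practices.time_zone;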

Functional syntax for “at time zone”

One last thing. You might be more comfortable with the functional syntax for these conversions. Example:

select
timestamp '2012-03-15 06:00:00',
timezone('UTC', timestamp '2012-03-15 06:00:00'),
timezone('US/Pacific', timezone('UTC', timestamp '2012-03-15 06:00:00'));

When you Really, Truly, Need to Parameterize your Cucumber


Before writing this post I did a quick search for “Cucumber worst practices parameterization,” and didn’t come up with much. (Apparently people don’t tag their terrible scenarios with @worst or the like.)

But parameterizing your Cucumber stories is surely one of the worst things you can do. It obviously creates dependencies on some external resource (the value of a variable), and it arguably makes your scenario non-deterministic. A better place for parameterization is probably in your RSpec.

But … Recently I wrote a small gem to provide for turning Pingdom alerts off and on before and after Capistrano deployment (pingdom-cap). And as I considered the way a potential user would evaluate the software, it struck me that I had better have an easily readable and exercisable integration test. It would look something like this:

pingdom-cap_executable.feature
Scenario: Pause check
  When I successfully run `pingdom-cap icu pause`
  Then the output should contain "Pausing Pingdom 'icu'"

This scenario is leveraging the Aruba gem to run commands and test the output. That parameter "icu" is a Pingdom check name. When you type "pingdom-cap icu pause" you expect to be told that you've paused the check, so that the check won't run and you won't get those annoying emails and SMS's during a deploy.

But I don’t think everyone wants to integrate against a check named “icu,” and it would be too hard and expensive to get everyone to change.

Were I evaluating this gem, the first thing I’d want to do is try it out on my own Pingdom configuration. And I really wouldn’t want to write any code. I’d like to run the scenario against my own parameters. I’d like to define environment variables and have them get picked up in the scenario. Something like this:

pingdom-cap_executable.feature
Scenario: Pause check
  When I successfully run `pingdom-cap <%= ENV['PINGDOM_CHECK_NAME'] %>  pause`
  Then the output should contain "Pausing Pingdom '<%= ENV['PINGDOM_CHECK_NAME'] %>'"

Nice idea. But Cucumber doesn’t run through ERB, and, as I say above, it’s probably a worst practice. But I want it. What to do? Really, what I want is to write something like:

When I (ahem, pass this dang thing through erb) and successfully run `pingdom-cap <%= ENV['PINGDOM_CHECK_NAME'] %>  pause`

More politely,

pingdom-cap_executable.feature
Scenario: Pause check
  When (erb) I successfully run `pingdom-cap <%= ENV['PINGDOM_CHECK_NAME'] %>  pause`
  Then (erb) the output should contain "Pausing Pingdom '<%= ENV['PINGDOM_CHECK_NAME'] %>'"

What this implies is selectively passing entire steps through ERB when the step is flagged at the start with “(erb)”. One might think that this should be a Cucumber-style tag, but I don’t think that’s the appropriate level for a transformation of this kind.

Here’s what I came up with:

erbify_steps.rb
require 'erb'

When /\A\(erb\) (.*)\z/ do |*matches|
  erbs = matches.map { |match| ERB.new(match).result(binding) }
  step *erbs
end

In other words: When the step begins with “(erb)” capture all of the matches, and run ERB over each match. Pass the splatted result to the step method. That is all.

Parting shots

Another way to do this would have been to significantly constrain the scope of what gets parsed into the scenario. In my code, any Ruby code gets evaluated in the ERB block. That's just asking for trouble, and I apologize in advance for introducing this technique, which will probably worsen Cucumber scenarios everywhere. Perhaps a better strategy would have been to introduce some different syntax that would only transform environment variable names. Something like this:

pingdom-cap_executable.feature
Scenario: Pause check
  When (env) I successfully run `pingdom-cap <<<PINGDOM_CHECK_NAME>>>  pause`
  Then (env) the output should contain "Pausing Pingdom '<<<PINGDOM_CHECK_NAME>>>'"

That might be a good gem. I’d use it.
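A sketch of the step definition behind that syntax, following the same pattern as erbify_steps.rb above (the <<<VAR>>> delimiters and file name are just the ones I made up here):

envify_steps.rb
When /\A\(env\) (.*)\z/ do |*matches|
  substituted = matches.map do |match|
    # replace each <<<NAME>>> token with the value of ENV['NAME']
    match.gsub(/<<<(\w+)>>>/) { ENV.fetch(Regexp.last_match(1)) }
  end
  step *substituted
end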

Finally: As to whether this will work with substitution in scenario outlines … I’ll leave that as a problem for the reader.

Learning Test Strategy: BDD the Unknown


Using spikes is a crucial tool in the agile developer's arsenal. I've noticed that there is often a post-spike dip in a developer's speed when they begin to incorporate the concepts from the "throw away" code into the application and must resume writing tests. When asked why they didn't test-drive the spike, a developer will usually answer, "It's nearly impossible to test code when you have no idea what you're doing!"

I'd like to walk you through my approach to spike development that keeps BDD in the loop.

Enter the Learning Test Strategy

The learning test strategy is a pretty simple concept. Instead of just writing code you throw away, you begin by writing acceptance tests for what you’d like to implement during your spike and then write throw away code to satisfy the criteria set forth in the tests. Instead of just having throw away code you now have an integration test suite for the features you are about to develop.

It’s Cucumber, I know this!

The very existence of a spike hints that you have one or more feature stories that implement the systems you are exploring inside your codebase. To me, this sounds an awful lot like acceptance criteria that I can build into executable Cucumber tests during my spike story.

I’ve recently developed a document management library that allows our application to create folders and files on a cloud based document management service. No one on the team had developed against the API before, so we needed to spike against it. We knew that ultimately our system should:

  • Create a folder when a new clinic is added to our system
  • Create a folder relative to the clinic folder for a patient when they are registered

So now I know what I need to ask from the API. I could just start writing code to discover how the API works, or I could write a Cucumber feature instead:

create_box_folder.feature
@slow
Feature: Creating a folder on Box
  As a user,
  In order to organize documents on Box,
  I'd like to be able to create folders

  Scenario: Creating a folder in the root directory
    Given I have authenticated with Box
    When I create a folder named "Walrus Love"
    Then I should be able to retrieve the "Walrus Love" folder

  Scenario: Create a subfolder on Box
    Given I have authenticated with Box
    And the "Walrus Love" folder exists
    When I create a folder named "Stop clubbing, seals" within "Walrus Love"
    Then "Stop clubbing, seals" should be a child folder of "Walrus Love"

Do note the @slow tag: it indicates that I'm writing an integration test that interfaces directly with an external API, so it can take some time and can hit network connectivity issues. You could eliminate this by using VCR, which I will use elsewhere in the test suite, but there is something to be said for having a pure integration test set behind a controllable tag. I can skip the @slow tag while developing locally, but enforce running those tests for a production build if I so desire.
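Skipping those scenarios locally is then just a matter of tag negation on the command line (the ~ syntax is from the Cucumber versions current at the time of writing):

# run everything except the @slow integration scenarios
bundle exec cucumber --tags ~@slow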

Now I can feel free to write some terribly inefficient and hacky code in the step definitions to make these scenarios pass:

box_folder_steps.rb
require 'box-sdk'

Given /^I have authenticated with Box$/ do
  @account = Box::Account.new BOX_AUTH_TOKEN, BOX_AUTH_KEY
end

When /^I create a folder named "([^"]*)"$/ do |folder_name|
  @account.root.create folder_name
end

Then /^I should be able to retrieve the "([^"]*)" folder$/ do |folder_name|
  retrieved_folder = @account.root.at("#{folder_name}/")

  retrieved_folder.name.should == folder_name
end

Given /^the "([^"]*)" folder exists$/ do |folder_name|
  step %{I create a folder named "#{folder_name}"}
end

When /^I create a folder named "([^"]*)" within "([^"]*)"$/ do |subfolder_name, folder_name|
  parent_folder = @account.root.at("#{folder_name}/")
  parent_folder.create subfolder_name
end

Then /^"([^"]*)" should be a child folder of "([^"]*)"$/ do |subfolder_name, folder_name|
  child_folder = @account.root.at("#{folder_name}/#{subfolder_name}/")

  child_folder.name.should == subfolder_name
end

The tests aren’t DRY, the code isn’t DRY, but now I have an understanding of the API I’m implementing. I can already start to see pain points in the API implementation, and in future stories I can write a more effective wrapper to alleviate them.

Refactoring Your Learning Tests During Feature Development

Once I begin feature development, I need to ensure I clean up my learning tests as I implement a wrapper for the API. This ensures I have a fully functional suite of integration tests so that I can judiciously mock, stub, and VCR my unit tests where the application is concerned.

Here’s something closer to the actual result:

refactored_box_folder_steps.rb
When /^I create a folder named "([^"]*)"$/ do |folder_name|
  DocumentManager::Folder.create folder_name
end

Then /^I should be able to retrieve the "([^"]*)" folder$/ do |folder_name|
  retrieved_folder = DocumentManager::Folder.find_by_name folder_name

  retrieved_folder.name.should == folder_name
end

Given /^the "([^"]*)" folder exists$/ do |folder_name|
  step %{I create a folder named "#{folder_name}"}
end

When /^I create a folder named "([^"]*)" within "([^"]*)"$/ do |subfolder_name, folder_name|
  DocumentManager::Folder.create(subfolder_name, folder_name)
end

Then /^"([^"]*)" should be a child folder of "([^"]*)"$/ do |subfolder_name, folder_name|
  child_folder = DocumentManager::Folder.find_by_name subfolder_name

  child_folder.parent.should == DocumentManager::Folder.find_by_name folder_name
end

Wrapping it up

It's often easy to forget that Cucumber is more than just a way of testing how a user will interact with the UI of an application. It can really be a powerful tool to keep you focused on learning exactly what you need when you are in uncharted coding waters. Furthermore, it turns an exercise where you'd typically throw away code into one where you walk away with a basic framework for testing future features. I've found this to be extremely valuable when exploring third-party APIs and new development techniques.