I18n testing with Cucumber

Earlier this year we announced support for various languages in our products, but we haven’t discussed any of the engineering work that went into the effort.

In localizing the main Rails application powering lookout.com, we wanted to make sure we correctly covered the many user flows we already test with Cucumber and Capybara.

In this post I’ll detail some of the challenges we faced in our localization testing and the solutions we implemented.

1. Stringy Cucumber

It is not entirely uncommon to see Cucumber scenarios which reference very specific strings in the UI, e.g.:

Scenario: Log in to the site with valid credentials
  Given I am a registered user
  And my name is "Jonas"
  When I log in
  Then I should be greeted with "Welcome Jonas!"

The last step would be implemented with the following step definition:

Then /^I should be greeted with "([^"]*)"$/ do |greeting|
  page.should have_content(greeting)
end

This means the scenario is going to fail 100% of the time if the user’s configured locale is German. Instead of “Welcome Jonas!” the page will render “Willkommen Jonas!”, and the assertion will never match.

There are really two issues at hand here to be addressed:

  1. Hard-coding UI text into Cucumber scenarios

    When text is hard-coded into a scenario in this fashion, the test becomes brittle not just to localization changes, but also to marketing or product teams updating copy in the web application. If anybody updates a strings file and forgets about the Cucumber scenario, tests will suddenly start failing.

  2. Asserting page content based on hard-coded text

    It’s generally a good practice to check for specific CSS or XPath selectors instead of text, since asserting page content based on text can be slower and more brittle. If it can’t be avoided, say when you’re checking for a specific error message, there are ways to make the check localizable, which is covered below.
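As a minimal sketch of the idea: asserting on a selector survives both translation and copy rewording. In a real Capybara suite this would simply be page.should have_css('.flash-error'); the helper and the class name below are purely illustrative, stdlib-only stand-ins:

```ruby
# Crude stdlib-only stand-in for a CSS-class query: does any element on
# the page carry the given class? Real suites should use Capybara's
# have_css matcher; '.flash-error' is a hypothetical class name.
def has_css_class?(html, css_class)
  pattern = /class="[^"]*\b#{Regexp.escape(css_class)}\b[^"]*"/
  !!(html =~ pattern)
end
```

The check passes whether the flash message says “Error!” or “Fehler!”, because it never looks at the copy at all.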

2. Localized assertions

There are valid cases where you cannot avoid asserting that a specific message is displayed to the user. Since your Cucumber scenarios run in the same general environment as your Rails application, you can access the same I18n methods that your controllers and views can.

Let’s take the scenario above, and update it a bit to make it easier to test with localization:

Scenario: Log in to the site with valid credentials
  Given I am a registered user
  And my name is "Jonas"
  When I log in
  Then I should be greeted

Then we’ll change our step definition to check for a localized string:

Then /^I should be greeted$/ do
  page.should have_content(I18n.t('dashboard.welcome', :name => first_name))
end

Now we’re validating that the page is using the right localized string key, “dashboard.welcome”, with our first_name variable interpolated in. If the user has a German locale, we’ll be checking for the correct German welcome message.

3. Testing around the world

By far the hardest challenge we faced was testing across the various languages. The first step for us was running all of the scenarios using each of the different locales as the default.

Notice that in the example scenario above we never explicitly stated the user’s locale; we naturally assume “en” will be Jonas’ locale, but it’s never specified.

Whenever we create a user, we use the “default locale” of “en” with the ability to override the locale with an environment variable, e.g.

Given /^I am a registered user$/ do
  # Call a remote API to create a randomly generated user
  user = create_user(:type => :free,
                     :locale => ENV['CUCUMBER_LOCALE'] || 'en')

  # Hold onto this user object for future steps
  current_user = user
end

This allows us to run the entire test suite with German users by simply invoking Cucumber with:

% CUCUMBER_LOCALE=de cucumber

Now we’re running scenarios with German localizations (or so we hope), but how can we verify that the page has the correct translations? In some cases we can check specific strings as shown above, but it is impractical to do that for every string we render.

Instead, we just want to make sure we’re not missing translations, which required a custom Capybara driver to check the page after actions.

Since some content might be rendered after an onclick or other on-page event, we’ve hooked the Capybara Selenium driver to run a few checks after certain actions. The hooked Selenium driver itself can be found in this gist, and is configured with:

Capybara.register_driver :selenium do |app|
  # Using a custom http client for performance reasons
  http_client = Selenium::WebDriver::Remote::Http::Default.new
  http_client.timeout = 120

  # Create a new driver object
  HookedSelenium::Driver.new(app, :http_client => http_client)
end

To trigger the driver’s assertions, we use the CHECK_I18N environment variable, making our cucumber invocation look like:

% CUCUMBER_LOCALE=de CHECK_I18N=1 cucumber

During this run of Cucumber, all users will be created with the “de” locale by default, and the tests will raise an exception if we find anything that looks like a missing translation (see line 55 of the gist above).
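The core of that check can be sketched in plain Ruby (the names here are illustrative, not the actual gist code). Rails renders an untranslated key as “translation missing: &lt;locale&gt;.&lt;key&gt;”, so scanning the rendered page for that marker catches gaps in any language:

```ruby
# Hypothetical version of the post-action check the hooked driver runs.
class MissingTranslationError < StandardError; end

def check_for_missing_translations!(html)
  return unless ENV['CHECK_I18N']   # only enforced when the flag is set
  if html =~ /translation missing: ([\w.]+)/
    raise MissingTranslationError, "untranslated key on page: #{$1}"
  end
end
```

Because the marker format is locale-independent, the same check works unchanged whether CUCUMBER_LOCALE is “de”, “ja”, or anything else.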

A combination of the approaches detailed above has allowed us to ensure that the scenarios we write in Cucumber remain portable across locales, and that we continue to properly support a plethora of languages in our Rails applications.

- R. Tyler Croy


Grok code with Puppet

(The following is a cross-post from R. Tyler Croy’s blog)

As the Lookout code-base grows, both in individual repositories and in the sheer number of repos we maintain, I’ve often found it difficult to find what I’m looking for.

Some of that difficulty is due to ActiveRecord’s determination to implement the world through meta-programming, but for everything else I’m turning to OpenGrok.

OpenGrok itself is a Java-based code cross-reference and search engine, which works surprisingly well with Ruby. While not perfect, it certainly blew LXR away in both ease-of-use and in its cross-referencing of Ruby, Java, and C code.

Below is an example of searching for the AWS class in my Blimpy project.

Searching for Definitions in OpenGrok

You’ll notice in the search results that there is a green-colored annotation to the right of the code snippet denoting the actual definitions compared to other search results.

When I click on aws.rb and dive into the code itself, I can navigate based on method definitions, but also pull in git-annotate(1) style annotations (not pictured).

Navigating code itself

Since I’m no longer in the business of hand-crafting machines, I went ahead and created puppet-opengrok. This OpenGrok Puppet module, while lacking in rspec-puppet tests (see @glarizza, I’m a hypocrite), will allow you to stand up a simple OpenGrok instance on an Ubuntu Server of your choosing.

Take the following manifest for example:

node default {
  include opengrok

  opengrok::repo {
    "puppet" :
      git_url => 'git://github.com/puppetlabs/puppet.git';
  }
}

The module will handle installing tomcat6, git-core and a few other required packages; after you’ve run Puppet, you can navigate to http://yourhostname.lan:8080/source and you’ll find that OpenGrok has indexed the Puppet code base for you!

The module will also install a cronjob which updates the source trees and indexes every 10 minutes.
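That cron job might look something like the following (an illustrative sketch, not the module’s actual resource; the update script name is hypothetical):

```puppet
cron {
  'opengrok-reindex' :
    ensure  => present,
    command => '/usr/local/bin/opengrok-update-and-index.sh',
    user    => 'root',
    minute  => '*/10';
}
```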

Currently puppet-opengrok is quite rough around the edges, since I used TDD (Tinker Driven Development) to build it instead of TDD (Test Driven Development). It will most certainly work on Ubuntu 10.04 LTS, but anywhere else your mileage may vary :)

If you’re familiar with Puppet Forge, you can install the rtyler-opengrok module with the puppet-module tool.


Jasmine Testing with Sauce Labs

Over the past few months, members of our front-end team have been busy revamping much of our web application. While I can’t tell you about the specifics of what is coming down the pike, I can tell you about some of the technologies that we’re using along the way, most notably: Jasmine.

For the unfamiliar, Jasmine is a JavaScript-based unit testing library which can help make test-driving full applications in JavaScript feasible.

With some of the recent changes that I still can’t tell you about (sorry), Jasmine has become an important part of our application development stack. We now have a need to run our Jasmine unit tests as part of our “standard” continuous integration workflow and release process. This means we must run Jasmine tests when we run our normal unit tests, Rails functional tests and Selenium tests.

Since we’re already big users of the Selenium testing service provided by Sauce Labs, it made sense to build on top of their infrastructure to run our Jasmine tests.

Using the Sauce gem and the Jasmine gem from Pivotal Labs, we can effortlessly use either a local browser or a remote browser hosted by Sauce. The “trick” is some of the Rake tasks provided by the two gems:

-> % rake -T jasmine
rake jasmine                 # Run specs via server
rake jasmine:ci              # Run continuous integration tests
rake jasmine:sauce           # Execute Jasmine tests in a Chrome browser on Sauce Labs
rake jasmine:sauce:all       # Execute Jasmine tests in Chrome, Firefox and Internet Explorer on Sauce Labs
rake jasmine:sauce:chrome    # Execute Jasmine tests in chrome
rake jasmine:sauce:firefox   # Execute Jasmine tests in firefox
rake jasmine:sauce:iexplore  # Execute Jasmine tests in iexplore

The first two tasks (jasmine and jasmine:ci) are provided by the Jasmine gem, while the rest are provided by the Sauce gem (after requiring sauce/jasmine/rake in our Rakefile).
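Wiring this up is a small Rakefile change. A sketch, assuming the standard setup for the Pivotal jasmine gem (the load path below follows that gem’s documented installation; adjust to your project):

```ruby
# Rakefile (fragment): pull in the Jasmine tasks, then the Sauce ones
require 'jasmine'
load 'jasmine/tasks/jasmine.rake'

# Adds the jasmine:sauce* tasks shown in the listing above
require 'sauce/jasmine/rake'
```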

The only prerequisite to running the jasmine:sauce tasks is an open Sauce Connect tunnel; to open one, run sauce connect in a separate terminal.

Once the tunnel is up and running, invoking rake jasmine:sauce will automatically spin up a Jasmine test server, a Sauce Labs VM with Chrome, and run your tests!

-> % rake jasmine:sauce
Waiting for jasmine server on 3001...
jasmine server started.
Starting job named: Jasmine Test Run 1341870335
Waiting for suite to finish in browser ...

Finished in 19.95 seconds
26 examples, 0 failures
-> %

Jasmine on Jenkins

Running tests locally is fine, but nothing compares to running those tests in Jenkins. At Lookout, we use both Gerrit and Jenkins heavily throughout our development process, so being able to kick-off Jasmine tests from Jenkins is very important.

To accomplish this we have a dedicated slave for running nothing but Jasmine tests. The reason for dedicating a slave is so that we can keep a Sauce Connect tunnel open across multiple test runs in order to keep tests as fast as possible.

We use the sauce-connect puppet module to run the tunnel, and bind all our Jasmine jobs to the “jasmine” label so they only run on that slave. One thing we found is that the Sauce Connect tunnel can get a bit “wonky” if it isn’t regularly restarted, so we make sure the tunnel is restarted every night.

Our node (in Puppet) for the builder looks something like this:

node /^jasmine-builder$/ inherits server {
  class {
    'sauceconnect' :
      username => 'REDACTED',
      apikey   => 'EVENMOREREDACTED';
  }

  cron {
    'restart-sauceconnect' :
      ensure  => present,
      command => 'service sauce-connect restart',
      user    => 'root',
      minute  => 0,
      hour    => 3,
      require => Class['sauceconnect'];
  }
}

By splitting off our Jasmine tasks to run on this dedicated Jasmine builder, it only takes about 2 minutes to run all the tests.

With a little bit of elbow grease we brought together a lot of the existing tools we had in our developer toolchest and now we’re able to provide first-class support for building, testing and deploying JavaScript applications.

Not bad for a “mobile company” if you ask me!

- R. Tyler Croy
