Introducing Border Patrol, service authentication at the border

As Lookout has grown over the years our server infrastructure has continued to evolve. Migrating from one application service to many presents a number of design and implementation challenges. This post touches on one of those challenges: cross-service session composition and the tool we developed to solve it, Border Patrol.

Growing pains

Historically, Lookout has been a Ruby and Rails shop; the original Lookout Mobile Security back-end was built as a single Ruby on Rails application. It was a massive, sprawling application that did everything: served the JavaScript-based Lookout front-end applications, handled billing, housed asynchronous worker jobs, managed the various databases, held all the metadata for backed-up devices, acted as the interface for every device communicating with Lookout, and on and on and on.

As we set out building the Mobile Threat Network and began the transition from a monolithic Rails application to a service-oriented architecture, we quickly determined that the monorail couldn't own authentication for every service as well. We needed to fundamentally change the way we handled authentication.

Crawl before you walk

Since Lookout had only ever had one service, we'd never had to think about service-to-service authentication. That changed as we dipped our big toe into the ocean of SOA and realized we would need to build a new foundation service, which we named Keymaster.

Keymaster hands out short-lived authentication tokens to services and devices, allowing them to make authenticated calls to other services. It’s like Kerberos for RESTful API calls.

Keymaster is a whole other blog post, but there's one important point to cover here: Keymaster tokens are issued to a specific service for use with another specific service. For example, the LocationService gets a token to talk to the PushService in order to initiate a device locate.

This is great for back-end services communicating with other back-end services. But when you're dealing with JavaScript front-end applications that might need to speak to or pull in data from multiple services, Keymaster tokens break down. To make this work we'd have to do one of the following:

  1. Have the monorail proxy requests to the other services.
  2. Implement Keymaster token signing and encryption in JavaScript.
  3. Have the JavaScript applications use the same APIs that devices do to request tokens from Keymaster.

None of these were attractive options. The first meant adding more code to the sprawl. The second meant implementing/relying on JavaScript cryptographic libraries. O, that way madness lies; let me shun that.

The third option meant potentially complex token management code shared across multiple web front-ends. Doable with a solid library, but really more complexity than we wanted to push onto our front-end. This was certainly the least offensive of the three options, but then I attended a talk that gave me another idea.

Dangerously good ideas

At Ricon West 2012 I saw a talk by Dana Contreras of Twitter on how they decomposed their monorail into services (Rebuilding a Bird in Flight (video)). In that talk she briefly mentioned how Twitter had pushed authentication out to their proxy layer. The idea resonated with me and felt like the missing piece of our token management problem.

Lookout was gearing up to launch Lookout for Business (L4B), which had an entirely new stack separate from the monorail. Since the monorail still owned functionality like device Lock/Locate/Wipe, the L4B stack would need to make calls into the monorail to trigger those actions.

This seemed like the perfect use case to build a system like the one Dana had mentioned at Ricon. Lookout already had a service that generated authentication tokens for devices, and we had a variety of other services using those tokens for authentication via an HTTP header on RESTful API calls. All we needed was a service to sit between the web browser and the back-end services, manage the service-specific tokens internally, and speak HTTP session cookies to the JavaScript running in the browser.

Meet Border Patrol

Border Patrol is an nginx module implemented in Lua that performs authentication at the edge of the network. Border Patrol is basically a big session store whose values are the Keymaster tokens for the upstream services a browser wants to speak to.

Here’s an example:

  1. A client requests a protected resource from a service.
  2. Border Patrol determines there is no valid session for this request and simply lets the request pass through.
  3. The upstream service redirects the user to its own hosted login page.
  4. The user fills out their credentials and submits the form. This POSTs to Border Patrol, which validates the credentials via Keymaster and receives one or more service tokens.
  5. Border Patrol creates a session id and returns it to the client via an HTTP cookie; this session id tells Border Patrol how to retrieve the service tokens.
  6. Subsequent requests from the browser present the session id via the cookie, and Border Patrol injects the appropriate service token into the request headers.
[Figure: series of requests made through Border Patrol]
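
To make the flow concrete, here is a minimal sketch of that per-request logic in plain JavaScript. The real implementation is Lua running inside nginx, and the cookie name, header name and helper functions below are invented purely for illustration:

    // Not the actual Lua implementation -- a plain JavaScript sketch of what
    // Border Patrol does for each incoming request. The cookie name, header
    // name and helpers here are hypothetical.
    function handleRequest(req, sessionStore, proxyToUpstream) {
      var sessionId = req.cookies['bp_session'];   // step 6: cookie issued in step 5
      var tokens = sessionId && sessionStore.get(sessionId);
      if (!tokens) {
        return proxyToUpstream(req);               // step 2: no session, pass through
      }
      // Inject the Keymaster token issued for this specific upstream service
      req.headers['X-Service-Token'] = tokens[req.upstreamService];
      return proxyToUpstream(req);
    }

The browser only ever sees the session cookie, while the upstream services only ever see Keymaster tokens.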

Currently, Border Patrol relies on memcached for its session store. Additionally, we lean on Keymaster to do the actual user authentication and token generation. However, any auth-token system could be used. Which leads me to…

Future Direction

The current Lua implementation is messy, since the nginx Lua modules don't allow for the creation of new directives. Because of that, we've been forced to implement subrequest spaghetti inside the nginx module itself. One thought is to move to native C nginx extensions, which would allow us to add new directives or extend existing ones, making configuration simpler.

But given the direction we’re taking Border Patrol, we might need to do more than that. We’re already in the process of moving ownership of login and account management to a Rails service called Checkpoint. This lets us build services that never have to care about login or password management.

Furthermore, given the complexity of nginx, the fact that we don't actually use most of it, the features we want to add over the coming year, and the performance requirements inherent in fronting every request made to lookout.com, we're currently thinking about dropping nginx as the engine and moving to a JVM-based platform. This would allow us to build rate limiting, load shedding, session storage, and request routing components as services, if we so desired, relying heavily on evented IO.

Lookout is pleased to announce that we’re open-sourcing Border Patrol. If you’d like to know more, further details can be found in the project’s public GitHub repository.

Credit where credit is due

Border Patrol was conceived of by me, started by R. Tyler Croy, and has been worked on by both of us and a variety of people at Lookout, all of whom should be given credit for this blog post, where the service is today, and where it’s going. Those people are Dirk Koehler, Nathan Smith, William Kimeria, and Christopher Chong.

- Rob Wygand




Private Parts - A completely serious not at all joking project to change the privacy policy

At Lookout, the idea that security and privacy are important is shared not just by us engineers, but permeates our entire company culture. This is one of the main reasons why our recently open-sourced project, Private Parts, enjoyed the medley of team members it did. Our legal team pushed the project forward and worked closely with designers, developers, marketers and project managers to get us to launch. At any one time there were many helping hands all over Private Parts. ….(heh)….

Our position is simple:

#### Privacy is important and it shouldn’t be confusing

If you've ever taken a look at a privacy policy, possibly out of some morbid curiosity or maybe because you lost a bet with someone in legal, you've likely encountered a dense and intentionally perplexing document designed to make you give up on your silly quest to comprehend what's going on with your data by the second sentence.

We don’t get down like that.

We set out in our cross-functional team, not completely unlike another certain fellowship, to conquer the privacy policy and fling the jargon-filled behemoth into the fiery depths of Mordor (and not lose anyone to the Nazgul along the way).

Proto-prototypes

From a development standpoint, our first attempt was a simple prototype of a new visual design based on the categories and concepts developed by the NTIA. Our design team performed user testing and came back to the team with results and learnings.

[Screenshot: final design and layout]

Some of the things we learned resulted in a few design tweaks, but nothing yet on the development side. At this point, we hadn’t yet begun thinking about the process others might take to adopt our proposed solution. During Hacksgiving, our annual hackathon just before Thanksgiving, our team completed an advanced prototype which had the beginnings of a build process.

Our goal was to create a tool, based on all of our research, that other developers and legal teams could use to improve their users' experience.

This version was a big improvement over our first prototype and was well received internally, but it had some limitations. It was built on Backbone.js, which, while great for lots of things, didn't serve us as well in this instance. The output was a lot more bloated than we wanted the final product to be. Our goal was to build something as lightweight as we could get away with, something as many people as possible could use.

In the spirit of dogfooding, I did my best to act as a front-end developer and use this prototype to build our Privacy Policy from scratch. It was not as easy as we ultimately wanted it to be. I found that we hadn't done a great job of making customization options accessible and easy to change. In fact, it was a bit of a nightmare. All of the icons were image-based, and having everything written in vanilla CSS caused a lot of problems when attempting even trivial things like changing a typeface or background color for certain elements.

“Oh, I’m sorry, did you want to add your brand color and your special brand font? That’ll be 45 minutes of hunting through the css and making sure you’re replacing the correct things.”

This is not ideal.

The other issue was responsiveness. We’re a mobile-focused company and it would be very silly of us to build something that only looks good on a desktop. Our hacked version was a decent looker on mobile, but didn’t have the full responsive and reflexive behavior we required.

Build Process

#### Moving configuration from the client to a static site generator,
#### or: "Grunt all the things"

While our Backbone version did have a build process, it was much more complicated and involved than we wanted. A few weeks later the team got back together and started from scratch with a build process using node and grunt for the heavy lifting. The goal of this iteration was to simplify both the process and the output, making it easier to get started and get done.

In dogfooding our previous version, I found that updating design and layout was a challenge, but integration proved to be an even more unwieldy process. I had to focus on how CSS was being applied and ensure proper namespacing of all CSS classes and IDs.

Our new build process moved complexity away from the client. There would be no more dynamic rendering of templates from a data source; instead, we opted for static output and as few files as possible.

We went with Jade as our templating language. I had never used Jade before, but found it great to work with and have since used it in other projects. Jade also allowed us to stay within our JavaScript-based universe of node/npm and grunt.
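
As a rough illustration (not our actual Gruntfile), a grunt task that compiles the Jade templates against a JSON config might look something like the following. The grunt-contrib-jade plugin is real; the file names and paths are made up for the example:

// Gruntfile.js -- a minimal sketch, with hypothetical paths
module.exports = function (grunt) {
  grunt.initConfig({
    jade: {
      compile: {
        options: {
          // feed the privacy policy data into the templates as locals
          data: grunt.file.readJSON('config/policy.json')
        },
        files: {
          'build/index.html': ['src/index.jade']
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jade');
  grunt.registerTask('default', ['jade']);
};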

Jade supports mixins and includes, which helped us maintain a modular structure.

Below is the Jade mixin for each of the categories:


mixin module(category, type)
  - if (category.options.length > 0) {
      div(class=['module', 'sharedCategory', category.class])
        if(type == 'share')
          span.dingbats-misc-corner-flag.mobile-share-icon
          span.dingbats-misc-rounded-check.mobile-share-icon
        div.module-icon
          span(class=['dingbats-'+category.class, 'pp-icon'])
        - if (type == 'share') {
          h3.module-title.dingbats-misc-rounded-check.pp-desktop-show= category.name
            span.dingbats-misc-play.pp-triangle
          h3.module-title.mobile-title.pp-mobile-show= category.name
          div.module-definition.pp-hide-this
            p= category.description
        - }
        - else {
          h3.module-title.pp-desktop-show= category.name
          h3.module-title.mobile-title.pp-mobile-show= category.name
        - }
        div.module-options-list
          ul
            - if (category.options.length > 0){
                each item in category.options
                  li!= item
            - }
            - else{
                p!= category.emptyText
            - }
  - }
  - else {
      div(class=['module', 'nonShared', category.class])
        if(type == 'share')
          span.dingbats-misc-corner-flag.mobile-share-icon.not-sharing
          span.dingbats-misc-rounded-x.mobile-share-icon
        div.module-icon
          span(class=['dingbats-'+category.class, 'pp-icon'])
        - if (type == 'share') {
        h3.module-title.dingbats-misc-rounded-x.pp-desktop-show= category.name
          span.dingbats-misc-play.pp-triangle
        h3.module-title.mobile-title.pp-mobile-show= category.name
        div.module-definition.pp-hide-this
          p= category.description
        - }
        div.module-options-list
          ul
            - if (category.options.length > 0){
                each item in category.options
                  li!= item
            - }
            - else{
                p!= category.emptyText
            - }
  - }

As you can see, there is quite a bit of logic embedded in just this mixin (we have a couple others which manage different views).

Jade's functionality was essential in allowing us to easily cycle through our JSON config file, put the data where it needed to be, and keep our layout nicely structured no matter how many or how few categories were being used.

So that mixin, paired with the following bit of code, produces the complete “What do we collect?” section:


section#collected
 div.clearfix
  each option in pageOptions
   h2!= option.collectedSectionHeader
    div.module-row.clearfix
     - var count = 0;
      each category in collected
        - count++
        +module(category, 'collect')
        if count % 3 == 0
          div.module-row.clearfix

The Jade template engine supports arbitrary JavaScript by prefixing each JavaScript line with a hyphen ("-"). Other built-in constructs like "each … in" don't need the prefix. Both styles can be seen above. The following:


each option in pageOptions
 h2!= option.collectedSectionHeader

simply loops through the pageOptions hash in our config file and renders the contents in an h2 tag. The "!=" just after the h2 tells Jade NOT to escape the contents of this value into plain text, which allows HTML formatting to be added directly to the data source. The benefit is clear when you want to easily add a link or a line break to the rendered content without messing with the template.


- var count = 0;
 each category in collected
  - count++
   +module(category, 'collect')
   if count % 3 == 0
    div.module-row.clearfix

This code again loops through our config file (specifically, collected) and invokes the module mixin, passing the category object and the type 'collect' as arguments. "- count++" increments the 'count' variable we initialized two lines above. This lets us "dynamically" maintain the layout by adding a dividing module-row after every third module created, keeping a maximum of three modules per row regardless of how many items were added during customization.

Exposing our private par…errr…customization options

To deal with the issue of un-fun CSS, we again stayed within the JS family and went with Less as our CSS preprocessor.

With Less we were able to use variables, which helped overcome the issues I had run into while trying to use our previous version.

In our assets directory we included custom.js, variables.less and custom.less. variables.less exposed the standard configuration options we enabled for CSS, including fonts, colors and a few element sizing options, among others.

Our custom.less file:


// Typography
@font-family: 'Source Sans Pro', helvetica, sans-serif;
@base-font-size: 100%;

// Colors
@background-color: #ffffff;
@dark-background-color: #f1f1f1;
@branding-color: #3db249;

@text-color: darken(@dark-background-color, 50%);
@link-color: @branding-color;
@dark-background-headline-color: @branding-color;
@dark-background-text-color: @background-color;

// General display
@button-corner-radius: 5px;
@module-height: 5.3em;

This ease of configurability takes the required skill level of the prospective developer-user way down. And for the higher skilled developers, we’ve saved you some time! You’re welcome!

Now, as a developer, you can simply change the Less variable for @branding-color and have something that immediately feels more at home in your current website design.

Less color functions like darken(color, value) and lighten(color, value) enabled this high level of customization where we could simply set box-shadows, icons and hover events as percentages of the base colors.

Our design team produced icon fonts to replace all of the images used in the previous version. This greatly simplified the customization process: changing an icon is now a simple CSS hex value change instead of forcing people to crack open Photoshop to make their own versions. In addition, by Base64-encoding the icon fonts into our CSS, we significantly reduced the number of files needed.

The output of this build process is two files, but you only need one: a mostly standalone HTML page, or a version that is more easily dropped into an existing page or template.

In summation…

We’re very excited to have our private parts out in the open.

Okay, okay… that’s enough.

Really, we’re excited to be a part of the push for a better user experience and a more well-informed public. I imagine that if you’re reading this, you probably care about these things too. We’re looking forward to seeing what others do with this and would love to get feedback about your experiences with it. Email us at privacy@lookout.com.

That being said, we’re not done. We’re beginning work on the next iteration which will make it even easier for anyone, developer or not, to create a privacy policy that will benefit their users.

Check out Private Parts on GitHub

Jesse Gortarez




Factory JS: Dependency Injection Containers + AOP

Over the course of building many front-end projects, we've learned a few lessons. One big finding is that reusing code between projects can be difficult, tedious and error-prone. Interfaces for a given piece of reusable code are often intertwined with the current stack, so extracting that behavior means pulling in much larger chunks of code than the consumer actually needs. Testing that code becomes a chore, since the testing must be repeated in each new context.

Current alternatives generally require that you buy fully into a new technology stack (AngularJS) or simply don't provide any way to include new technologies at all (jQuery UI). None of these alternatives offer the feature set provided by Factory JS.

AMD Dependency Soup

Factory JS is a dependency injection container maker used to organize components (types) and behaviors (mixins). Built on Underscore, Backbone and jQuery, it can encapsulate and reuse any type of library without too much effort. One of the first benefits of this utility is that it can reduce the argument soup required for an AMD module definition. Consider the following:

    define(['a','b','c'], function(A, B, C){
      // we now have connascence of name and order
    });

We can leverage a factory to clean this up:

    define(['factory', 'a', 'b', 'c'], function(factory){
      // now it doesn't matter which order the dependencies are in
      // except for the factory, which must be first.
    });

To achieve this the modules ‘a’, ‘b’ and ‘c’ might have code looking something like the following:

    // a.js

    define(['factory'], function(factory){
      function A(){
        ...
      }

      factory.define('A', A);
      // return the constructor for legacy support
      return factory.getConstructor('A');
    });

As long as this code has executed, the factory will return an instance of A when invoked using the get method:

    var a = factory.get('A');

And if the A class takes arguments, no big deal, just pass those in as well:

    var a = factory.get('A', arg1, arg2, arg3);

You can define a constructed method on any class. Constructed methods execute after object creation and after applying all mixins to the instance. You can think of them as post-composition initializers. They will receive the arguments used in the constructor.
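
For example, here's a sketch; we're assuming the hook is simply a method named constructed on the class being defined:

    function Widget(name) {
      this.name = name;
    }

    // Runs after the instance exists and after all mixins have been applied,
    // receiving the same arguments that were passed to the constructor.
    Widget.prototype.constructed = function (name) {
      console.log('Widget ready:', name, this);
    };

    factory.define('Widget', Widget);
    var widget = factory.get('Widget', 'sidebar'); // logs "Widget ready: sidebar"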

Mixins

So we have a constructor container, whoopadidoo. The real value is that this allows you to abstract behavior code into reusable chunks that can be applied in other applications. We do that here by using mixins and the factory’s defineMixin method.

    defineMixin(name, mixinDefinition, mixinSettings)

Name is the string you will use to add the behavior to a definition. A mixinDefinition is an object containing the methods and properties to mix into instances using this behavior. A mixinitialize method can be defined on the mixinDefinition; it is invoked during the composition of an instance.

Mixins defined this way can be used by any object in the factory by adding the mixins: ['MixinName'] option to the define options:

    // Lion.js
    factory.defineMixin('Lion', {
      roar: function(){
        console.log("ROOARR!!!");
      },
      mixinitialize: function(){
        console.log('It behaves like a lion', this);
      }
    });

    // Tiger.js
    factory.defineMixin('Tiger', {
      pounce: function(){
        console.log("POUNCE!!!");
      },
      mixinitialize: function(){
        console.log('It behaves like a tiger', this);
      }
    });

    // Liger.js
    factory.defineMixin('Liger', {
      magic: function(){
        console.log("PHWOOOOSH!!!");
      },
      mixinitialize: function(){
        console.log('It is known for its skills in magick!', this);
      }
    }, {
      mixins: ['Lion', 'Tiger']
    });

    // FavoriteAnimal.js
    factory.define('FavoriteAnimal', Animal, {
      mixins: ['Liger']
    });

Mixins can also define a mixins array in their mixinSettings to depend on other mixins. You can compose behaviors together and apply them to objects at will.

Singletons

Factory JS supports singletons as a native concept. Singletons are reasonable in JavaScript because the application loads as the result of a single request and is destroyed when the page changes location or reloads; this alleviates much of the concern singletons raise in other languages. If you are writing a single-page application with a long life cycle, you may still want to use singletons judiciously.

To make a class a singleton, just pass the singleton: true flag in the definition options. Once defined, the first call to get will create the object and subsequent calls will return the same instance. In general, singleton constructors do not take arguments; this prevents you from initializing the object with the wrong arguments.

    factory.define('TheOne', A, { singleton: true });

    var theone = factory.get('TheOne');
    factory.get('TheOne') === theone // => true

Evented Factory

One of the more interesting features of the factory is that it supports an evented interface. Let’s take a look at the events you can bind on:

- define: Factory emits this anytime you define a type.
- defineMixin: Factory emits this when you define a mixin.
- create: Factory emits this when you create an instance. Used for tag support.
- dispose: Factory emits this when you dispose an instance.

These events support the AOP and mirroring features we will talk about later. You can also use them to depend on functionality or track object creation and disposal.
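
Since the factory is built on Backbone, binding to these events looks like any other Backbone-style event binding. A quick sketch (we're assuming the handlers receive the instance in question):

    factory.on('create', function (instance) {
      // e.g. keep a registry of live objects for debugging
      console.log('created', instance);
    });

    factory.on('dispose', function (instance) {
      console.log('disposed', instance);
    });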

Tags

Tags mark definitions as being part of a group. Tags can reach across type and mixin boundaries to apply behaviors to objects. This allows you to do aspect oriented things like logging, error handling and security in a consistent way across object types in your application. This is the effective equivalent of executing a callback against any instance in memory that has a tag, and binding that same callback to be applied to any instance that comes into memory with the same tag.

Let's imagine that we have a large group of models and non-model objects that rely on persistence strategies, where the default is Ajax. Let's also imagine that we have established an alternate strategy for when we are in maintenance mode, where writes are routed to local storage for later upload. We have tagged all of these model and non-model objects with 'Persists'.

    factory.extend('Model', 'PersistingModel', {
      save: function () {
        this.strategy.save();
      },
      // the following would be better implemented as a mixin
      strategy: function (command) {
        switch (command) {
          case 'online':
            this.strategy.save = function () {
              ... // normal strategy
            };
            break;
          case 'offline':
            this.strategy.save = function () {
              ... // local storage
            };
            break;
        }
      }
    }, {
      tags: ['Persists'] // this is how we set the tag
    });
    // let's say we get a socket call informing us of the maintenance mode
    socket.on('change:mode', function(data){
      if (data.mode === 'maintenance') {
        // we need to switch to the offline strategy
        factory.offTag('Persists');
        factory.onTag('Persists', function(instance){
          instance.strategy('offline');
        });
      } else {
        // we need to switch back to the online strategy
        factory.offTag('Persists');
        factory.onTag('Persists', function(instance){
          instance.strategy('online');
        });
      }
    });

Now we don't need a global singleton to maintain this state; the objects can be tested in isolation, as can the persistence behavior. Because onTag executes against all instances currently in memory and all instances created in the future, we don't have to worry about objects having the wrong state if they are created after maintenance mode is started or stopped.

Mirror

As you can see, Factory JS is designed to take core object domains and wrap them together into a single access container. This is great unless you want to use multiple domains of objects in a single project. Let's say that you want to use the BackboneFactory (included in Factory JS) as well as define your own factory of object definitions. You can easily do this by mirroring the BackboneFactory in your own factory; any definitions that are added to the BackboneFactory will then automatically be added to your factory.

    define(['Factory', 'BackboneFactory'], function(Factory, BackboneFactory){
      var MyFactory = new Factory(function(){...});

      MyFactory.mirror(BackboneFactory);
      MyFactory.hasDefinition('View'); // => true
      return MyFactory;
    });

This will allow you to compose factories from all over the place and utilize their mixins and definitions in your own factory without having to carry around multiple factories.

Summary

This is just the beginning of some of the great things you can do with Factory JS. Some of the other uses we have found are contextual dependency injection, late binding strategies and massive, virtually effortless, code reuse. In future articles we will cover more patterns and interesting ways to use this system.
