Autotagger: A Case Study for Lean UX


How do you create a great UI experience without slowing down your engineering team’s rapid development cycle? As the head of user experience (UX) at AppNexus, it’s my job to answer this question. In the past we’ve done most UX work in advance, then handed off a finalized prototype or functional spec to the UI team for development. This seemingly orderly waterfall approach meant the UX team often worked in a vacuum without much input from either engineers or users, which often created problems down the road. Recently we introduced a UI tool called Autotagger, and because we built this feature somewhat differently I’d like to share some cool things about the Lean UX process that helped create it.

What is Autotagger?

Many AppNexus clients host their ads on a separate ad serving system. To serve these ads, the AppNexus servers write a short HTML snippet, provided by the third-party system, into the browser. That HTML uses either an iframe or a JavaScript tag to place the ad unit on the page.

To upload third-party ad tags into the AppNexus system, the user pastes the tag code into our upload form and inserts our click-tracking and cache-busting macros in the correct location. These macros will be replaced with data from the AppNexus server when the ad is served in order to pass information to the third-party server.

For example, a user might receive a raw MediaMind tag without AppNexus macros:

<script src=";c=28&amp;pli=123456&amp;PluID=0&amp;w=300&amp;h=250&amp;ord=&amp;ucm=true&amp;z=999999999"></script>
<noscript>
<a href="" target="_blank"><img src="" border=0 width=300 height=250></a>
</noscript>

And they would need to manually insert AppNexus macros into the proper location:

<script src="$$${CLICK_URL_ENC}$$&amp;cn=rsb&amp;c=28&amp;pli=123456&amp;PluID=0&amp;w=300&amp;h=250&amp;ord=${CACHEBUSTER}&amp;ucm=true&amp;z=999999999"></script>
<noscript>
<a href="${CLICK_URL}" target="_blank"><img src="" border=0 width=300 height=250></a>
</noscript>

Because tag syntax varies widely among ad servers, and because a user might not be tech-savvy or familiar with all types of tags, the process of adding AppNexus macros to third-party ad tags is often slow, tedious and error-prone.

To alleviate the user’s macro-insertion headaches we built Autotagger, a new workflow tool that automatically recognizes and inserts macros into the most common third-party ad server tags.
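The post doesn't show how the recognition works under the hood, but the server-side step can be imagined along these lines: match the pasted tag against per-ad-server patterns, then splice the AppNexus macros into the expected positions. The function, the recognition check, and the splice rules below are all invented for illustration; the real Autotagger covers many more ad servers and edge cases.

```python
# Hypothetical sketch of a server-side recognizer for one ad server.
# Everything here is illustrative; the real Autotagger rules are not public.

def autotag_mediamind(tag):
    """Return (tagged, recognized) for a MediaMind-style script tag."""
    if "PluID=" not in tag or 'src="' not in tag:
        return tag, False  # not a tag shape this sketch recognizes
    # Prepend the encoded click macro to the script src URL, and fill the
    # empty ord= parameter with the cache-busting macro.
    tagged = tag.replace('src="', 'src="${CLICK_URL_ENC}', 1)
    tagged = tagged.replace("ord=&", "ord=${CACHEBUSTER}&", 1)
    return tagged, True
```

Run against a recognized tag, this inserts both macros; anything unrecognized is returned untouched so the UI can fall back to manual insertion.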

The Lean UX Process

Lean UX stems from the Lean Startup movement, which combines agile software development methodologies with Steve Blank’s theory of customer development. Waste reduction, the most compelling Lean tenet, is achieved by rapidly and repeatedly validating and refining a product hypothesis with customers to identify any faulty assumptions before going to market. In Lean UX, reducing waste means building the simplest prototype possible to validate a design idea, testing it quickly, and immediately incorporating what was learned into the next prototyping cycle. This reduces guesswork and helps ensure the right tool is built for users.

Several factors led us to build Autotagger using a Lean UX development process. Autotagger attempts to recognize and modify third-party tags mid-workflow, so this simple tool needed to be extremely unobtrusive. In addition, highly responsive, hair-trigger interactions can be challenging to design without experiencing them in action. And to make sure it was lightweight and fast, the tool needed to be optimized for UI performance, a consideration that often gets overlooked in the design stage. To tackle these requirements we brought together a cross-functional team comprising individuals from User Experience (me and Jes), UI (Eric), Product Management (David), and API (Jon).

David and I kicked off the Autotagger design process with an hour of brainstorming at the whiteboard. What steps does the user take to upload a third-party tag without Autotagger, and where would the new tool fit in? Once we’d identified the possible interaction points we handed off our scribbles to Eric to begin development without further design ado.

Including Eric early in the process was essential, because we knew his great engineering instincts would drive him to build the simplest, most lightweight functionality possible. Sure enough, Eric’s first cut provided a barebones foundation for the tool — just enough code to satisfy our requirements and not a line more. This starting point kept the UX team honest, because now we had to provide a good reason for any change we requested to this minimum viable code. The challenge was to take Eric’s skeletal tool and make it a living, breathing feature without adding any fat.

Once we had Eric’s initial prototype it was time to refine the interactions. The team worked iteratively to design, build, and user test the prototype over the course of a week or so. We probably went through a dozen rapid design-build-test cycles while Jon worked on the server-side scripts that would attempt to recognize and insert macros into a number of types of tags.

During the iteration process Jes and David performed internal user tests on several variations of the tool we thought were promising. Because Eric was building prototypes directly on a branch in our sandbox environment, we were able to perform user tests immediately without having to create any extra artifacts. One of the early prototypes provided a message to the user only if the tag was recognized, which we thought might reduce the sting of negative feedback for users who weren’t working with the common ad servers we could autotag. The other prototype gave clear feedback on every tag, even when the tag was not recognized or it wasn’t formatted in a way that allowed us to autotag, which users might perceive as more informative and consistent.

The results from that initial internal test were definitive — people vastly preferred the constant-feedback prototype. Now we had a clear and swift direction for the final design, which after a little final polish was ready to be tested externally with local users at several different client ad networks. “Oh! Nice! I love that!” was one user’s response when she tried out the tool for the first time. Hearing such high praise from actual users gave us the validation we needed to release our now fully functional new feature, and we shipped it later that afternoon, about three weeks after we had kicked off the process.

Autotagger In Action

When a tag is pasted into the text area, it is sent to an API that determines whether we can automatically insert macros. If we can, we offer to autotag:

Above, the pasted tag is recognized and autotagging is offered.

If autotagging is accepted, the AppNexus macros are inserted into the correct locations in the code:

Above, AppNexus macros have been successfully inserted into the tag.

It’s that simple! Of course, there were lots of other cases we had to account for, like unrecognized tags or recognized tags that were formatted in a way that prevented autotagging. We also had to address questions like, what should happen if the user edits the tag after macros have been inserted? What do the “No, thanks” and “Undo” links actually do? After we had nailed the most basic interaction, we worked through the alternate scenarios using the original case as a baseline for how simple and straightforward all the interactions should feel to the user.
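The branching the team worked through can be summarized as a small decision table. The status names and messages below are invented for illustration; the post doesn't show the real API contract or UI copy.

```python
# Simplified sketch of the Autotagger feedback flow. Status names
# ("autotaggable", "recognized_only", "unknown") and messages are invented.

def feedback_for(status):
    """Map the API's verdict on a pasted tag to the message shown to the user."""
    if status == "autotaggable":
        return "We recognize this tag. Insert AppNexus macros automatically?"
    if status == "recognized_only":
        return "We recognize this ad server, but this tag can't be autotagged."
    return "We don't recognize this tag; please insert macros manually."
```

Note that every branch returns a message: this mirrors the constant-feedback design that won the internal test, where even unrecognized tags get a clear response.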

Measuring Success

Now that Autotagger is in production we can continue learning how to make it better thanks to the metrics Eric built into the feature, which show us how it is being used and give us insight into what can be done to drive further adoption. The Autotagger metrics are currently showing that the majority of tags are not recognized, or recognized but not autotaggable. We’ll keep improving these numbers as David adds new tag types and increases Autotagger’s sophistication. The good news is that in the first few hours of usage about 30 of 60 recognized tags went on to be autotagged, which indicates that people began to adopt this great new feature as soon as it was released.
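As a back-of-the-envelope check, the quoted numbers work out to a 50% acceptance rate among recognized tags. The event names and the unrecognized count below are illustrative, shaped to match the figures above; they are not the real event log.

```python
from collections import Counter

# Illustrative tally of early Autotagger events. Only the 30/60 split is
# taken from the post; the event names and the unrecognized count are made up.
events = (["unrecognized"] * 150       # majority of tags: no autotag offer
          + ["offer_declined"] * 30    # recognized, user clicked "No, thanks"
          + ["autotagged"] * 30)       # recognized, user accepted the offer
tally = Counter(events)

offered = tally["offer_declined"] + tally["autotagged"]
acceptance_rate = tally["autotagged"] / offered  # 30 of 60 -> 0.5
```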

Working iteratively and testing Autotagger along the way really paid off. This process enabled the team to stay on the same page as the tool progressed and allowed us to continually validate our ideas first with each other, then with our internal test subjects and finally with our users. We were happy to find that we were able to design and build Autotagger in roughly the same amount of time as previous features. Not only had the Lean UX process preserved our rapid development cycle, it also helped us feel extremely confident that we were releasing the best possible feature because we had consistently seen people use it and be delighted.

The Lean UX process can work for just about any feature, and now that we’ve proved how valuable it is, we plan on practicing it as much as we can. Stay tuned for more Lean UX lessons learned!

About Suzanne

I lead the User Experience (UX) team at AppNexus. We're hiring!
