Enable Your Python Developers by Making “Code Investments”

Note: portions of this post appeared on my personal blog under the title “Supercharge Your Python Developers.”

I think it’s safe to say that a project’s inception is the best, indeed perhaps the only, opportunity to influence the quality of the code for years to come. Many (most?) projects are started without much direction; code simply springs into being and is put under version control. Making a series of thoughtful, upfront “investments,” however, can pay large dividends. In this post, I’ll describe the investments I made at the start of a project that allowed a Python novice to quickly write concise, idiomatic, and well-tested code.

Some Background

I was recently appointed the technical lead on a project to reimplement an existing PHP application. For a PHP project, it was a beast, consisting of hundreds of thousands of lines of code. Rewriting it is something every member of the Data team at AppNexus has wanted to do at some point. I was glad to be given the opportunity to finally do so.

My team of five decided that Python and the Flask web framework would be good implementation choices as they would allow us to see results quickly. Python also has an excellent testing ecosystem, and we knew beforehand (because the previous version lacked them) that quality unit and integration tests would be a high priority. The group ranged from complete Python novices to Python journeymen.

They quickly became rockstars.

How did this happen? How was it possible that, less than a month after the project’s inception, these Python beginners were writing beautiful, idiomatic code? The answer may surprise you.

The Legacy System

Before we arrive at how novices became code-ninjas, some background about the legacy system is in order. The system had outgrown its original purpose. As is unfortunately common, the system did one thing well but, over the years, expanded to be responsible for things the original authors could never have anticipated. As a result, it was so difficult to develop against that even small changes took two to three times as long as we felt they should have.

Testing was a nightmare. Simply trying to run the unit tests required a connection to a sandbox MySQL instance that was shared among 20 developers and was never in the delicate state required for testing. Other tests were out of date and no longer matched the database schema, failing immediately.

There was no build process or continuous integration for the old project. Even if the tests worked perfectly, no one would be running them. Bugs were found only after the buggy code had already been committed. And we had no sense of how comprehensive the tests were; code coverage reporting was non-existent.

The codebase was also horribly confusing. There was layer upon layer of abstraction and indirection, followed by giant functions spanning hundreds of lines with no documentation. The lack of coding conventions meant that each new file one opened was written in a completely different style than the previous one.

Learning From the Past

I told myself that the guiding principle in the design and development of the new system would be “the new system is as straightforward, enjoyable, and easy to develop against as the old system is frustrating.” I formalized a lot of best practices regarding starting a large Python project (many of which are detailed in my post Open Sourcing a Python Program the Right Way). I was determined to actually learn from the missteps in the design and maintenance of the old system and create a project that, above all else, made it easy to quickly write and ship quality code. Further, I wanted developers to always be able to answer the question, “Did I just write good code?” immediately through automation.

Enabling Your Developers

Before a line of code was ever written, I came up with a simple set of coding standards and project requirements. These requirements were not related to the product, but rather the code itself. Some examples:

  • All code is required to be compatible with both the latest release of Python 2.7.x and the latest release of Python 3.x (one way to automate that check is sketched just after this list)
  • All code must include tests with 100% code coverage
  • Code must conform to PEP-8 and PEP-257 guidelines
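
One common way to automate the two-interpreter check is tox, which runs the same test suite under each interpreter. A minimal tox.ini sketch (the environment names and the requirements file reference are illustrative, not necessarily what we used):

[tox]
envlist = py27, py34

[testenv]
deps = -rrequirements.txt
commands = py.test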

Many of these “requirements” are simply best practices that most developers would follow anyway. Making them explicit rules, however, reinforced our focus on testability and code quality. It also had implications for the design of the system: requiring 100% code coverage meant that the code had to be written in a modular, decoupled fashion.

The Setup

Now for the meat of the article: the specific steps I took that inadvertently turned novices into rockstars. As you read the steps I took in setting up our project, an overarching theme should become clear: automate every possible developer task that can be automated, even within the code itself. It removes the burden of slogging through tedious tasks and lets developers focus on the stuff they’re paid for.

Design For Simplicity

I spent a good deal of time thinking about the design of the system. My goal was to write the “scaffolding” code and then let my team’s other developers add functionality. As such, I strived for a design that would make adding new functionality as straightforward and error-free as possible.

My implementation of the scaffolding (or “skeleton”) code contained more than a simple class structure and set of interactions. It included a number of convenience functions and mechanisms to automate coding tasks that would be repeated often.

For example, service endpoints are written as classes, and endpoints that are POSTed to take their arguments as JSON data. Checking that required fields are present, and returning an error when they aren’t, would therefore be a common task. For that reason, I included a mechanism that allowed the developer to simply list the required and optional JSON fields in the class implementation; those fields would automagically be extracted and added as attributes to the class. That meant that one could write:

class MyEndpoint(BaseEndpoint):
    """Endpoint class implementing the '/foo' service endpoint."""

    __required_fields__ = ['date', 'time', 'event']
    __optional_fields__ = ['location']

    @extract_fields
    def post(self, request):
        if self.location:
            do_something(self.date, self.time, self.event, self.location)
        else:
            do_something_else(self.date, self.time, self.event)

rather than:

class MyEndpoint(BaseEndpoint):
    """Endpoint class implementing the '/foo' service endpoint."""

    def post(self, request):
        data = request.get_json(force=True, silent=True)
        if 'date' not in data:
            raise InvalidUsage("'date' is a required field")
        if 'time' not in data:
            raise InvalidUsage("'time' is a required field")
        if 'event' not in data:
            raise InvalidUsage("'event' is a required field")

        date = data['date']
        time = data['time']
        event = data['event']
        location = None

        if 'location' in data:
            location = data['location']
            do_something(date, time, event, location)
        else:
            do_something_else(date, time, event)

This type of convenience is important, as it allows developers to focus on what’s important rather than forcing them to perform mundane bookkeeping tasks. It also prevents errors, as the developer can no longer accidentally forget to check for an optional field or take the wrong action if a field isn’t present.
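
A minimal sketch of how an extract_fields-style decorator might be implemented (InvalidUsage and the dunder attribute names come from the examples above; the rest is illustrative, not our actual code):

import functools


def extract_fields(method):
    """Validate the request's JSON payload against the endpoint's
    __required_fields__/__optional_fields__ lists, attaching each
    field to the endpoint instance as an attribute."""
    @functools.wraps(method)
    def wrapper(self, request, *args, **kwargs):
        data = request.get_json(force=True, silent=True) or {}
        for field in getattr(self, '__required_fields__', []):
            if field not in data:
                raise InvalidUsage("'{0}' is a required field".format(field))
            setattr(self, field, data[field])
        for field in getattr(self, '__optional_fields__', []):
            # Optional fields default to None, so `if self.location:` works.
            setattr(self, field, data.get(field))
        return method(self, request, *args, **kwargs)
    return wrapper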

In short, reducing boilerplate should be a focus during the design phase. A design that, purposely or not, requires excess boilerplate code to add new features is, in my mind, a poor one. It frustrates developers, and frustrated developers don’t write the quality code they’re capable of producing. Boilerplate is a definite code smell.

Create a virtualenv

In terms of the actual project setup, first (as you would expect) I created a virtualenv and a requirements.txt file, pinning our dependencies to specific versions of third-party packages. This allowed new developers to get up and running immediately via mkvirtualenv <project_name> -r requirements.txt. It also ensured that whatever packages an individual developer had on their machine didn’t interfere with the packages required for the new system.
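
A pinned requirements.txt is nothing more than a list of exact versions, along these lines (the version numbers here are illustrative, not the ones we actually used):

Flask==0.10.1
pytest==2.6.4
pytest-cov==1.8.1
pylint==1.4.0
pep8==1.5.7
Sphinx==1.2.3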

Run unit tests on each commit via Jenkins

AppNexus uses Jenkins for continuous integration. I immediately set up py.test and wrote some quick unit tests against the scaffolding code I had written. Most importantly, I included a number of tests that mocked out the database connection and checked that the queries we expected to run actually ran. These tests would serve as examples for other developers who needed to test database interaction (oftentimes a somewhat tricky subject).
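
The pattern for such tests with the mock library is: patch the connection factory, exercise the endpoint, and assert on the SQL that reached the (fake) cursor. A sketch along those lines, where myproject.db.get_connection and the endpoint’s module path are hypothetical stand-ins for our actual helpers:

import mock

from myproject.endpoints import MyEndpoint  # hypothetical module path


class FakeRequest(object):
    """Minimal stand-in for Flask's request object."""

    def __init__(self, payload):
        self._payload = payload

    def get_json(self, force=False, silent=False):
        return self._payload


@mock.patch('myproject.db.get_connection')  # hypothetical connection factory
def test_post_inserts_event(get_connection):
    """A valid POST should issue exactly one INSERT statement."""
    cursor = get_connection.return_value.cursor.return_value
    endpoint = MyEndpoint()
    endpoint.post(FakeRequest(
        {'date': '2015-06-01', 'time': '12:00', 'event': 'signup'}))
    assert cursor.execute.call_count == 1
    sql = cursor.execute.call_args[0][0]
    assert sql.lstrip().upper().startswith('INSERT')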

I also installed pytest-cov, which gives py.test code coverage capabilities (based on the excellent coverage tool). I integrated this with Jenkins by using the --junitxml flag, producing test results in JUnit-style XML that Jenkins could interpret. Now, if coverage drops below 100%, the build fails, plain and simple. Some scoff at the idea of requiring 100% test coverage, but that’s likely because the application they’re working on was not designed with testability as one of its guiding principles. When one works on such a system, it is not difficult to achieve 100% coverage through normal amounts of test code.
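
Concretely, an invocation along these lines does the job (the package name is a placeholder); the JUnit-style XML feeds Jenkins’ test reports, while pytest-cov’s XML report feeds a coverage plugin:

py.test --cov=myproject --cov-report=xml --junitxml=test-results.xml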

Use make to simplify everything

Next I created a Makefile to automatically create and activate the virtualenv, run the tests, and clean the environment by deleting the virtualenv. Now, a simple make test installed the required packages and ran the unit tests straight from a fresh git clone. As a developer, that’s a nice convenience. Making it as simple as possible to run your automated tests is very important: if it’s too difficult or takes too long, developers won’t do it.
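
A stripped-down sketch of what such a Makefile might look like (it uses plain virtualenv rather than virtualenvwrapper so the targets have a predictable path to work with; the package name is a placeholder):

venv: requirements.txt
	virtualenv venv
	venv/bin/pip install -r requirements.txt

test: venv
	venv/bin/py.test --cov=myproject --junitxml=test-results.xml

clean:
	rm -rf venv

.PHONY: test clean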

Run pylint and pep8, CI style

After writing a quick coding conventions document, I installed pylint and pep8. For pylint, I generated a .pylintrc file to hold project-specific settings (using the awesome --generate-rcfile flag; seriously, why don’t more tools have this?). I set up pylint to run with the --rcfile=.pylintrc flag and followed a similar process for pep8. Then I promptly added both to the Makefile to run during the tests and produce output that Jenkins could use to create reports.
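
Extending the Makefile sketch above, the lint step boils down to two commands. Both tools exit non-zero when they find violations, so the trailing || true lets the build continue and leaves thresholding to Jenkins (the report filenames are illustrative):

lint: venv
	venv/bin/pylint --rcfile=.pylintrc myproject > pylint.out || true
	venv/bin/pep8 myproject > pep8.out || true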

I now had a project where unit tests were run on every commit and test results, test coverage, coding conventions, and “bad code” reports were generated. These reports are saved, and Jenkins produces graphs that track these metrics over time. More importantly, they impacted whether or not the build itself actually succeeded.

I set up Jenkins to fail the build if the number of pylint and pep8 violations passed some threshold. This was an important step, as it made it clear that writing idiomatic, properly formatted code was something to be taken seriously. More importantly, it took the burden of remembering to use the tools off of the developer. If a developer “forgets” to run pylint or pep8 before committing, the build process has their back.

Let your build process generate your documentation

Needless to say, documentation was a focus for the new system. I set up a Sphinx build to automatically generate documentation for the project (using sphinx-apidoc) and added it to the Makefile as a new target. I also enabled documentation coverage. The coding conventions mandated docstrings for all modules, classes, and functions. Sphinx (and pylint) now enforce this automatically and fail the build if coverage isn’t 100%.

Even better, since the project uses Flask and is interacted with via HTTP endpoints, I installed a plugin for Sphinx that automatically generates beautiful endpoint documentation (and understands/documents things like JSON parameters, HTTP verbs, example requests, etc.). This documentation lives in the code for each individual endpoint and is generated on each commit.

docs became a Makefile target, so generating our pretty documentation is both part of the build process and available to any developer simply by typing make docs. Here, again, it’s important to automate everything you can, as developers (by our very nature) hate manual processes and often “forget” to run things they think should be automated.
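
Under the hood, that target amounts to a couple of commands (the directory layout here is an assumption):

docs: venv
	venv/bin/sphinx-apidoc -o docs/source myproject
	$(MAKE) -C docs html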

Developer-friendly Scripts

Perhaps most important of all, I spent a good deal of time on an oft-neglected topic: writing scripts to make my developers’ lives easier. For this project, I created the following:

  • A schema file and script that created the database from scratch
  • A dump script to fill the database with test data
    • The script first cleaned up the database to make sure it was in a known, easily recreatable state
  • A script that chose sensible default configuration values and started the server, giving the developer the option to run against a real MySQL database or an SQLite in-memory database. It also took care of sending stdout and stderr to a log file (in addition to the syslog logging the system performs).
  • A script aware of the pre-populated database data that curled a request with JSON data to the server, then checked the database to make sure the expected changes were present.
  • A script called should_i_commit_this.sh. It runs pylint and pep8 with the project-specific configuration and determines if the code receives poor scores from either. If it does, the script says not to commit the code, gives the score assigned by the tool that complained, and prints that tool’s output (a Python sketch of this logic follows the list).
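
The real script is a shell script, but the heart of it fits in a few lines of Python. This sketch covers only the pylint half; the package name and the 8.0 threshold are illustrative:

import subprocess
import sys


def pylint_score(package):
    """Run pylint and parse the 'rated at X/10' line from its report."""
    proc = subprocess.Popen(['pylint', '--rcfile=.pylintrc', package],
                            stdout=subprocess.PIPE)
    output = proc.communicate()[0].decode('utf-8')
    for line in output.splitlines():
        if 'rated at' in line:
            return float(line.split('rated at')[1].split('/')[0])
    return 0.0


if __name__ == '__main__':
    score = pylint_score('myproject')  # hypothetical package name
    if score < 8.0:  # illustrative threshold
        print("Don't commit! pylint score: {0}/10".format(score))
        sys.exit(1)
    print('Looks good. pylint score: {0}/10'.format(score))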

Especially with the last script, my goal was to make it as easy as possible to answer the question, “Did I just write good code?”. Starting up the database and server, sending a test request, and performing static analysis on the code are each one command away. Another way of looking at it would be to say I tried to make it as difficult as possible to write (and commit) bad code.

The Results

With all of these tools and conveniences in place, my team’s developers took the reins. Within a week, each team member had written the code for a non-trivial endpoint. The code they produced was truly impressive. They made excellent use of the utility code I had written, wrote extensive unit tests, documented everything, and had code that precisely followed PEP-8. One of the most telling signs of success was that, after looking over the code, a member of another team thought it had all been written by a single person. “The style is identical,” they said (after hearing five people worked on it).

The clearest indicator of success, though, has been code reviews. In every other instance during my career, code reviews have been tedious time-sinks. Reviewers always focused on style rather than substance. Now, code reviews are a source of interest rather than frustration. We never have to say, “Please add a space after the colon on line 14.” Reviews are focused on the logic and soundness of the approach rather than nitpicking style issues.

Looking Back

So there you have it. My secret for making your team of Python developers produce great code, regardless of skill level: focus on catching as much as possible in your build process and afford developers convenience through automation in the form of scripts, Makefiles, and easy-to-create/use development environments. What you’ll get in return is a team that is able to effortlessly churn out quality code.