Building an Execution Engine, Part 3: Tools

Abhijeet Vijayakar
9 min read · Apr 17, 2017

This is the last in a series of four posts on building a high velocity startup.

In Startups and the Art of Asking Questions, I wrote about a basic framework within which most startups operate. Startups need to be built to ask questions of the market at high velocity, using three primary factors: people, process and tools.

I described ways to solve for the first factor, and get the best people into your startup, in Building an Execution Engine, Part 1: People.

In Building an Execution Engine, Part 2: Process, I wrote about how to build robust processes to help the people in your startup be highly productive.

This post will talk about the third factor: the tools and systems you should use in your startup to enable execution at high velocity and quality.

In this post, I won’t talk about the core tools that your team uses to do the actual work of development: code editors, source control systems, visual design tools, etc. In other words, this post won’t discuss whether your application should be written in Ruby or Python, take a position on emacs versus vi, or discuss whether to put your code into GitHub or Bitbucket. Those are often context-specific discussions, where many different decisions are reasonable, and depend as much on your team’s expertise as on any other factor.

Instead, I’ll talk about the tools that form the scaffolding around core development activities: enabling the right activities to occur, and helping those activities happen faster and with high quality. I’ll provide a categorization of the tools your startup should be using, and provide examples of some tools I’ve used successfully before.

Use the information below as a guide to the landscape of tools your startup should be using, as you explore your search space. The specific tools listed here will doubtless change and be superseded by newer products, but the general categories of tools that startups should use will likely not change that often.

The execution-related tools that teams use can typically be categorized into the following areas:

  1. Tools to plan development: idea management tools, idea dissemination tools, sprint planning tools
  2. Tools to make development go faster: automated testing tools, test servers, change deployment tools, application monitoring tools
  3. Tools to determine what to build: quantitative and qualitative data capture tools, product experimentation tools

Tools to plan development

These are the tools your team will use before a project is ready to start execution. Tools to plan development typically include the following types:

Idea Management Tools

These are typically idea organization and scheduling tools, for ideas that are not ready to put into execution. I’ve worked on teams that used Trello, Asana and JIRA Portfolio for idea tracking. The key features of these tools are the ability to quickly enter new ideas at various levels of refinement, flesh out ideas within the tool, and group ideas into larger projects or themes that can be pushed into execution as a single unit.

Some tools, such as JIRA Portfolio, offer the ability to integrate directly into your ticket tracking tool: when a project is ready to put into execution, tickets can be created from the items in JIRA Portfolio. This can be useful in theory, though I haven’t seen it work particularly well in practice.

Ideas should typically be open to the entire company to view; you may want to have “idea boards” for anyone in the company to add their own ideas, to be filtered later by product owners.

Idea Dissemination Tools

Wikis are useful as document repositories, or for providing detailed overviews of completed or in-flight projects. Many teams I’ve worked on have used wikis to list all documents related to a project in one place (for example, all feature design or technical design documents), or to provide a readable high level overview of a project (why it is a good idea, mockups of the final work product, and tickets in your ticket tracking system — see below — related to that project).

Confluence is a good, popular hosted wiki; other alternatives are hosting a wiki yourself, or using Google Sites if your company uses Google Apps.

Sprint Planning Tools

Ticket tracking tools (also called “bug trackers”) are often used for sprint planning. The key features you likely want from your ticket tracking tool are the ability to enter input in multiple formats (easily attaching media files in addition to text, for example), to specify that tasks depend on each other, and to unambiguously indicate what state a task is in.

I’ve used JIRA, Pivotal Tracker and Bugzilla for ticket tracking; JIRA is the most flexible but can be hard to configure, while Pivotal Tracker is easier to set up but is more opinionated about the workflows it supports.

Some ticket trackers offer development-related analytics that may be useful to your team, such as burndown charts, team velocity indicators, and other measures of developer productivity. These metrics can be somewhat instructive, though I would caution against using them as authoritative indicators of team performance; often, software development has nuances that are not accurately captured by quantitative data, and I’ve seen over-reliance on such metrics lead to developer unhappiness and low team morale.

Whichever tool you use, I’ve found it useful to require that almost all development work be linked to a ticket in the ticket tracking system; this makes it easy to map a commit in your source control system back to the ticket that necessitated the work, and from there to the larger project under which that work falls.
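One lightweight way to encourage this is a commit-msg hook that rejects commits without a ticket reference. Here’s a minimal sketch in Python; the JIRA-style key format (e.g. PROJ-123) is an assumption, so adapt the pattern to whatever your tracker uses.

```python
#!/usr/bin/env python3
# Minimal commit-msg hook: reject commits whose message has no ticket reference.
# Assumes a hypothetical JIRA-style key like PROJ-123; adapt the pattern to your tracker.
# Install by saving as .git/hooks/commit-msg and making it executable.
import re
import sys

TICKET_PATTERN = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")  # e.g. PROJ-123

def main() -> int:
    commit_msg_file = sys.argv[1]  # Git passes the path to the commit message file
    with open(commit_msg_file, encoding="utf-8") as f:
        message = f.read()
    if TICKET_PATTERN.search(message):
        return 0
    sys.stderr.write("Commit rejected: include a ticket reference (e.g. PROJ-123).\n")
    return 1

if __name__ == "__main__":
    sys.exit(main())
```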

Tools to make development go faster

Tools that speed up development typically fall into the following areas:

Automated Testing Tools

Tests are a key component of high velocity, high quality software development. Encourage your team to build tests into your codebase from an early stage.

Unit tests allow you to validate components of your system in isolation. All major programming languages have mature libraries for quickly and easily writing unit tests: PHP has PHPUnit, Ruby has RSpec, and NodeJS has Mocha.
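Those are PHP, Ruby and NodeJS libraries; to show the general shape of a unit test, here’s a minimal sketch using Python’s built-in unittest module, with a made-up apply_discount function standing in for whatever component you’re validating.

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test: reduce price by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

    def test_invalid_percent_raises(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```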

Integration and functional tests allow testing parts of your system together; in the broadest case, you can simulate user input on a user facing part of your product (mouse clicks or keyboard input), and programmatically verify that the product does the right thing (typically by verifying that the right output is displayed on the screen). Browser-based integration tests can be written with Selenium using a number of programming languages; you can then use a third party hosted service to run the tests (I’ve used Sauce Labs), or run them on your own infrastructure.
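As a rough sketch of what such a browser test looks like with Selenium’s Python bindings (the URL and element names are hypothetical placeholders for your own product):

```python
# Minimal Selenium sketch: drive a real browser, submit a form, check the result.
# The URL and element names are hypothetical; point them at your own test server.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # or webdriver.Firefox(), or a remote driver on Sauce Labs
try:
    driver.get("https://staging.example.com/login")
    driver.find_element(By.NAME, "email").send_keys("test@example.com")
    driver.find_element(By.NAME, "password").send_keys("not-a-real-password")
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
    # Verify the product did the right thing by checking the rendered output.
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```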

Build servers like Jenkins or CircleCI can help make sure that your automated tests run as frequently as you like: every time a PR is opened, every time a PR is merged, or multiple times a day on a schedule.

Test Servers

Test servers allow for faster iteration by not requiring product owners (product managers or visual designers) to be physically around engineers in order to collaborate with them. Engineers can push up work in progress to a test server, which can then be reviewed in isolation by a product owner. Making it easy to deploy to a test server, and having a large enough fleet of test servers, is key in speeding up turnaround time.

You should invest in making it easy to set up new test servers. Technologies like Ansible, Chef and Docker can make it easy to set up new servers in minutes, and it’s worth establishing a “configuration as code” culture early to discourage one-off server setups that are hard to replicate as you scale.

Change Deployment Tools

There should be as little friction as possible between an engineer getting a code change approved and that change appearing in the production version of your product. Examples of friction include app store approvals, manual QA, and complex deployment processes. While some of these (most prominently app store approval) are outside your control, eliminate every friction point that isn’t. The simplest deployment process is one where deployment happens automatically once the engineer merges their approved code change into your source control system.
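As a sketch of the “deploy on merge” idea, here’s a minimal webhook listener using only Python’s standard library; it assumes your source control host POSTs a JSON payload with a ref field on push events (as GitHub does), and deploy.sh is a placeholder for your real deployment step.

```python
# Minimal sketch of a deploy-on-merge webhook listener (standard library only).
# Assumes the source control host POSTs JSON with a "ref" field on push events;
# "deploy.sh" is a placeholder for your real deployment step.
# In production you would also verify the webhook signature before acting on it.
import json
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class DeployHook(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        if payload.get("ref") == "refs/heads/main":
            # A merged, approved change just landed on main: deploy it.
            subprocess.Popen(["./deploy.sh"])
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), DeployHook).serve_forever()
```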

Build systems like Jenkins can both run automated tests, and help with deployment.

Application Monitoring Tools

Once new code is deployed, you should have tools that allow you to monitor that your application is still behaving as expected, and alert the right people if it is not.

There is a large industry of monitoring and alerting tools. Some tools I’ve used and liked before include New Relic for performance monitoring, and Loggly and Sentry for log parsing and alerting based on log output. If your startup uses AWS, it’s worth getting to know AWS CloudWatch well; I’ve used CloudWatch alarms for robust monitoring of backend services.
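For example, a CloudWatch alarm on a backend metric can be created programmatically with boto3; the namespace, metric name and SNS topic below are hypothetical placeholders.

```python
# Sketch: create a CloudWatch alarm with boto3 that pages when API latency spikes.
# The namespace, metric name and SNS topic ARN are hypothetical placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_alarm(
    AlarmName="api-p95-latency-high",
    Namespace="MyApp/Backend",          # custom namespace your service publishes to
    MetricName="RequestLatencyMs",
    Statistic="Average",
    Period=300,                         # evaluate over 5-minute windows
    EvaluationPeriods=3,                # must breach for 3 consecutive periods
    Threshold=500.0,                    # milliseconds
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:oncall-alerts"],
)
```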

Tools to determine what to build

This is arguably the most important tool category: without the tools to tell you whether your team is building the right thing, you are flying blind.

Tools in this category typically help you understand how users are using your product, and put different versions of your product in front of users.

Quantitative Data Capture Tools

These are typically clickstream tracking tools that help you gather fine-grained data about how users are using your product. Google Analytics is an example of this type of tool, and very powerful if used well; other tools that teams I’ve worked on have used include Segment and Adobe Omniture. The key capabilities you’re looking for here are the ability to quickly instrument any part of your application, across web and mobile, and pipe the resulting data into a database that can be queried (preferably using generic SQL queries).
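As an illustration, here’s what server-side instrumentation looks like with Segment’s Python library (analytics-python); the write key, user ID and event properties are placeholders.

```python
# Sketch of server-side event instrumentation using Segment's analytics-python library.
# The write key, user ID and event properties are placeholders.
import analytics

analytics.write_key = "YOUR_SEGMENT_WRITE_KEY"

# Record who the user is...
analytics.identify("user_1234", {"plan": "free", "signup_source": "landing_page"})

# ...and what they did, with enough properties to slice the data later in SQL.
analytics.track("user_1234", "Project Created", {
    "template": "blank",
    "platform": "web",
})
```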

Some tools (notably Google Analytics) also include built-in data visualization; for most others, you will need to plug a visualization tool like Chartio or Tableau into the clickstream database. Stand-alone visualization tools are generally more powerful than the visualization built into a clickstream tracking product, so being able to query the clickstream data using SQL from an external app is important: it lets you build sophisticated dashboards in a dedicated visualization tool.
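To make the “query it with SQL” point concrete, here’s the kind of query a dashboard might run against a clickstream table; the table and column names are hypothetical, and sqlite3 stands in for whatever warehouse you actually use.

```python
# Illustration of the kind of SQL a dashboard runs against clickstream data.
# Table and column names are hypothetical; sqlite3 stands in for your warehouse.
import sqlite3

conn = sqlite3.connect("clickstream.db")
rows = conn.execute("""
    SELECT date(occurred_at) AS day,
           COUNT(DISTINCT CASE WHEN event = 'Signup Viewed'    THEN user_id END) AS viewed,
           COUNT(DISTINCT CASE WHEN event = 'Signup Completed' THEN user_id END) AS completed
    FROM events
    GROUP BY day
    ORDER BY day
""").fetchall()

for day, viewed, completed in rows:
    rate = completed / viewed if viewed else 0.0
    print(f"{day}: {completed}/{viewed} signups ({rate:.1%})")
```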

Qualitative Data Capture Tools

In the early days, your startup may not have enough users to get meaningful information from quantitative data gathering tools, such as those listed in the previous section. In that case, you may find it useful to integrate tools into your product to capture user behavior in a more qualitative way. For example, FullStory provides useful videos of actual users using your product, classified by time, location and other attributes. This can be very useful to see which parts of your user interface might confuse or frustrate users, or view reproduction steps for bugs that only happen “in the wild”.

Another useful way to visualize qualitatively how users use your product can be looking at “heatmaps”: a view of pages on your site with a color overlay of areas where users clicked most. Google Analytics has the ability to overlay click percentages on top of any page on your website; CrazyEgg generates more traditional color-coded heatmaps.

Don’t overlook the importance of integrating chat tools directly into your product; by putting a widget like an Olark or Intercom chat box on your website, or in your app, you can allow users to contact you directly when they need help, which gives you a high quality communication channel with them while you are figuring out product-market fit.

Product Experimentation Tools

When you have a sufficient number of users, you can iterate quickly on your product by running A/B tests: showing different versions of your product to different groups of users. I’ve used Optimizely as a useful and robust way to run experiments.

Because A/B tests show each variation of the product to a fraction of users, it may take a long time to get statistical significance on whether one variation is truly better than another. Resist the temptation to lower your confidence level (typically, you want 95% or higher confidence levels before you stop the A/B test); any lower, and you are making decisions based on noise rather than data.
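To make the arithmetic concrete, here’s a two-proportion z-test on conversion counts using only the Python standard library; the sample numbers are made up. Stop the test only when the p-value drops below 0.05 (i.e. 95% confidence).

```python
# Two-proportion z-test for an A/B test, using only the standard library.
# The conversion counts below are made-up illustration data.
from statistics import NormalDist

def ab_test_p_value(conversions_a, visitors_a, conversions_b, visitors_b):
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis that A and B are identical.
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = (p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = (p_b - p_a) / se
    # Two-sided p-value: probability of seeing a difference this large by chance.
    return 2 * (1 - NormalDist().cdf(abs(z)))

p = ab_test_p_value(conversions_a=200, visitors_a=5000, conversions_b=240, visitors_b=5000)
print(f"p-value: {p:.3f}")  # stop the test only if p < 0.05 (95% confidence)
```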

If you don’t have enough traffic to run true A/B tests, you can expose product variations sequentially to users; doing so means that you have 100% of your users being shown a single product variation at a time, which may help you get statistically valid data more quickly. The drawbacks with this approach are that users can get confused as the site experience changes significantly over a few days, and that time-based trends (e.g. weekly or seasonal changes in traffic or user behavior) can confound the results you get.

If you decide to go with the sequential variation approach, use feature flags to switch out parts of your product as you cycle through the variations. While there are tools like LaunchDarkly to help you do this, my experience has been that a simple in-house system is sufficient for most use cases.
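A minimal in-house version can be as simple as a flag lookup backed by a config file that you edit and redeploy (or reload) to switch variations; the flag names and file path here are hypothetical.

```python
# Minimal in-house feature flag sketch: flags live in a JSON file that can be
# edited and redeployed (or reloaded) to switch product variations without a code change.
# Flag names and the file path are hypothetical.
import json

FLAG_FILE = "feature_flags.json"   # e.g. {"new_checkout_flow": true, "dark_mode": false}

def load_flags(path=FLAG_FILE):
    try:
        with open(path, encoding="utf-8") as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

def is_enabled(flag_name, flags=None):
    """Return True if the named flag is on; unknown flags default to off."""
    flags = load_flags() if flags is None else flags
    return bool(flags.get(flag_name, False))

if is_enabled("new_checkout_flow"):
    print("Render the new checkout variation")
else:
    print("Render the existing checkout flow")
```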

In order to ask questions of the market at high velocity, startups need great people, great process, and great tools, all working well together. Hopefully, this series has given you ideas for how to put together the important pieces of your startup as you explore your search space.

Read the previous posts in this series: Startups and the Art of Asking Questions, Building an Execution Engine, Part 1: People, and Building an Execution Engine, Part 2: Process.

