Make Measurable Statements

Posted on: March 9, 2014

Have you ever been assigned a task with a vague note like “make it fast” or “it should be resilient”? When you finished, did the result match your boss’s expectations? Probably not. Quality statements like these are so vague that there is no real way to tell exactly what the author is after.

The most obvious solution is to have a conversation with the person requesting this quality. In the small, this is a perfectly acceptable practice (commendable, even!). What comes out of that conversation, however, stays between you and one other person. People get promoted, expectations change, or someone new comes along, and all of a sudden your system’s behaviour falls by the wayside.

What you really needed from the start was a precise, measurable statement about the qualities to achieve or the risks to avoid. Instead of statements like “it should be fast,” prefer ones like “the web server should respond within 100ms.” Vague statements are problems to prove, while precise statements are problems to find. A problem to prove is one with no reasonable end, while a problem to find has a clear win condition.

As a best practice, always press clients and stakeholders for clear statements about important requirements. (I might even go so far as to suggest you always get clear requirements, but that might be too much for your situation. You be the judge.)

With a clear requirement in hand, you can respond in a number of ways:

  1. Informally verify the requirement is met.

    This works when you’re moving fast, or when the requirement is not mission-critical or worth tracking long-term.

    Examples of informal verification:

    • Asking a boss or co-worker to take a look.
    • Writing a script that QA or a second set of eyes can run to verify the intended behaviour (a sketch of such a script follows this list).
  2. Write a unit or integration test.

    When you want to ensure behaviour stays within some bounds long-term, codify the requirement as a test.

    This is where precise requirements are especially helpful. If a page needs to respond within 100ms, you can write an integration test for that (see the test sketch after this list). If an application needs to withstand hundreds of concurrent users, you can run a load or simulation test against it.

  3. Track a relevant metric.

    It may be too much for your project to aggressively assert behaviour, or maybe you’d rather track behaviour over time. This approach is especially helpful because you can correlate commits with drastic changes in behaviour.

    If you don’t have a statistics aggregation service running, I would suggest taking a look at StatsD by Etsy.
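
For the first approach, a verification script can be as small as a single check that a second set of eyes can run on demand. Here is a minimal sketch in Python; the URL and the expected marker text are hypothetical placeholders, not taken from any real project:

```python
# Minimal verification script a QA person can run by hand.
# URL and EXPECTED are hypothetical placeholders for the system under test.
import sys
import urllib.request

URL = "http://localhost:8000/orders"  # assumed endpoint to check
EXPECTED = "Order history"            # assumed marker of correct behaviour

def main():
    with urllib.request.urlopen(URL, timeout=5) as resp:
        body = resp.read().decode("utf-8")
    if EXPECTED in body:
        print("PASS: page rendered the expected content")
        return 0
    print("FAIL: expected marker not found")
    return 1

if __name__ == "__main__":
    sys.exit(main())
```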
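
For the second, a precise statement like “the web server should respond within 100ms” translates almost directly into a test. A sketch using only Python’s standard library, with the server address assumed:

```python
# Integration test for the "respond within 100ms" requirement.
# The URL is an assumed address for the server under test.
import time
import unittest
import urllib.request

URL = "http://localhost:8000/"

class ResponseTimeTest(unittest.TestCase):
    def test_responds_within_100ms(self):
        start = time.monotonic()
        with urllib.request.urlopen(URL, timeout=1) as resp:
            resp.read()
        elapsed_ms = (time.monotonic() - start) * 1000
        self.assertLess(elapsed_ms, 100,
                        "response took %.1fms; requirement is 100ms" % elapsed_ms)

if __name__ == "__main__":
    unittest.main()
```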
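
And for the third, here is a sketch of emitting metrics with a Python StatsD client (the statsd package on PyPI) pointed at a daemon on its default UDP port; the prefix and stat names are my own inventions for illustration:

```python
# Emit timing and counter metrics to a local StatsD daemon over UDP.
# The prefix and stat names are assumptions for illustration.
import time
import statsd

client = statsd.StatsClient("localhost", 8125, prefix="webapp")

# Time a block of work; the daemon aggregates these into mean and
# percentile series you can graph and correlate with commits.
with client.timer("response_time"):
    time.sleep(0.05)  # stand-in for real request handling

# Count an event for the same graphs.
client.incr("requests")
```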