
Local setup aims

The aim of this chapter is to install and set up enough tooling to make development easier and hence quicker. This means we can ship updates to the customer earlier, and with greater confidence that the experience will be great.

Local development

The shorter the time between making a change to the code and being able to run it and see the effect of the change, the better, as this feedback loop is the main way we develop. For this reason we'll ensure that we can run all the code locally, ideally with it auto-reloading on any change.

Autoformatting

The style of code matters, as code written in a style different to the one you are used to takes longer to understand and may also lead to missed bugs. This is problematic as almost everyone has a different preferred style, and these preferences change over time.

In the past I've used tooling to check the styling and report on any inconsistencies. This is helpful but wasteful, as every inconsistency must be fixed manually. Fortunately most languages now have an official, or dominant, autoformatter that both defines a style and changes all the code to match it.

We'll aim to set up our tooling such that there are autoformatters for as much of the code as possible.
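
To make this concrete, here is a minimal sketch of what an autoformatter does, assuming a Python codebase and a formatter such as Black: however the code was originally laid out, it is rewritten into the one canonical style.

```python
# Before formatting: valid Python, but inconsistently spaced and quoted.
# def greet(name,greeting='hello'):
#     return greeting+ ', '+name

# After running the autoformatter, the same code reads:
def greet(name, greeting="hello"):
    return greeting + ", " + name
```

The inconsistencies disappear without anyone having to fix them by hand, which is exactly the saving over a style checker that only reports them.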

Linting

I think of linting in two parts: type checking and static analysis¹.

I type hint, or use typed languages, where possible, as this catches a large number of the errors I typically make. It also helps document the code, in that it makes clear what objects (types) are expected. Whilst typing costs more effort to write, I think it easily pays off in bugs avoided. Therefore checking the types should be the first aim of our linting.
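
As a small, hypothetical example of what this buys us (assuming a Python project checked with a tool such as mypy):

```python
def total_cost(prices: list[float], tax_rate: float) -> float:
    """Return the total of the prices with tax applied."""
    return sum(prices) * (1 + tax_rate)


total = total_cost([9.99, 4.50], 0.2)  # fine

# A type checker flags the following call without ever running the code,
# as a str is passed where a float is expected:
# total = total_cost([9.99, 4.50], "20%")
```

The annotations also tell the reader what the function expects and returns, which is the documentation benefit mentioned above.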

I also like to use linters to look for potential issues in naming, usage of functions, possible bugs, security issues, and unused code, and to flag code that is too complex or poorly constructed. These linters are a very low-cost sanity check, as they are quick and easy to run and raise few false positives.
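
For illustration, this is the kind of code such a linter (for example Ruff or Flake8, assuming Python) would typically flag:

```python
import json  # unused import: typically reported as F401


def is_admin(user):
    role = user.get("role")  # assigned but never used: typically F841
    if user.get("admin") == None:  # comparison to None: linters suggest "is None" (E711)
        return False
    return True
```

None of these is necessarily a bug, but each is cheap to fix and occasionally one of them is.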

Testing

Linting can only find some issues; notably, it cannot detect logic errors, where correctly written code in fact does the wrong thing. For these, writing tests is the best option.

Often when tests are discussed a coverage target is introduced, with coverage defined as the number of lines tested divided by the total number of lines of code. I think this is an unhelpful definition; instead, I'd encourage you to consider coverage in terms of what the user cares about, i.e. have you tested the use cases your users rely on?
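
As a sketch of what use-case driven coverage looks like (assuming pytest and a hypothetical register_member function), the tests below exercise the flows a user actually relies on rather than chasing a line count:

```python
import pytest

from members import register_member  # hypothetical module under test


def test_member_can_register_with_a_valid_email():
    member = register_member("pat@example.com", password="correct horse battery staple")
    assert member.email == "pat@example.com"


def test_registration_rejects_an_invalid_email():
    with pytest.raises(ValueError):
        register_member("not-an-email", password="correct horse battery staple")
```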


  1. Analysing the code without running it, i.e. statically.