Test Suites:
Table-driven tests, though lauded in the Go community, aren't always the most effective or maintainable choice.
The best case for a table-driven test is a single function with limited inputs and outputs that contains lots of logic and behavior. The best example I have ever seen, hands-down, is the table-driven test suite for fmt.Sprintf.
However, when tempted to use a table-driven test in situations involving more 'moving parts' (more than 1-2 inputs, more than 1-2 outputs, anything that makes use of test doubles, externals, or has multiple functions/structs involved), please, please, please just use an xUnit-style test fixture. It's an elegant set of patterns that has stood the test of time.
A few good options for libraries that facilitate test fixtures/suites for go test (a minimal sketch using one of them follows the list):
- github.com/mdwhatcott/testing/suite (full disclosure: I built this)
- github.com/smartystreets/gunit (full disclosure: I built this too)
- github.com/stretchr/testify/suite
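To give a sense of the shape, here's a minimal sketch of an xUnit-style fixture using testify/suite; the suite name, its field, and its test method are hypothetical names for illustration, and the other two libraries follow the same general pattern:
package example_test

import (
    "testing"

    "github.com/stretchr/testify/suite"
)

// ParserSuite groups related tests and holds shared fixture state.
type ParserSuite struct {
    suite.Suite
    input string // shared state, reset before every test
}

// SetupTest runs before each test method on the suite.
func (s *ParserSuite) SetupTest() {
    s.input = "default input"
}

// Test methods are ordinary methods whose names start with "Test".
func (s *ParserSuite) TestParsesDefaultInput() {
    s.Equal("default input", s.input)
}

// TestParserSuite hooks the suite into go test.
func TestParserSuite(t *testing.T) {
    suite.Run(t, new(ParserSuite))
}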
TestMain
When tempted to use TestMain (which is essentially an init func for tests) in order to set up a DB or MQ, reach instead for a test suite with suite-level setups/teardowns (mdwhatcott/testing or testify/suite).
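For example, here's a minimal sketch (using testify/suite; the suite name, DSN, and driver choice are hypothetical) of suite-level hooks doing the work a TestMain might otherwise do:
package repository_test

import (
    "database/sql"
    "testing"

    _ "github.com/lib/pq" // hypothetical driver choice
    "github.com/stretchr/testify/suite"
)

type RepositorySuite struct {
    suite.Suite
    db *sql.DB
}

// SetupSuite runs once, before any test in the suite.
func (s *RepositorySuite) SetupSuite() {
    db, err := sql.Open("postgres", "postgres://localhost/testdb?sslmode=disable")
    s.Require().NoError(err)
    s.db = db
}

// TearDownSuite runs once, after every test in the suite has finished.
func (s *RepositorySuite) TearDownSuite() {
    s.Require().NoError(s.db.Close())
}

func (s *RepositorySuite) TestSomethingAgainstTheDB() {
    s.Require().NoError(s.db.Ping())
}

func TestRepositorySuite(t *testing.T) {
    suite.Run(t, new(RepositorySuite))
}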
Assertions:
Tests shouldn't have complicated or conditional logic (loops/ifs). They should be straightforward to read and very declarative in nature.
In the spirit of that idea, when tempted to check an error with an if statement in a test, reach for an assertion method instead:
assert.Nil(t, err) (testify/assert)
So(err, should.BeNil) (gunit or mdwhatcott/testing)
If the presence of a non-nil error defeats the purpose of upcoming assertions, use FatalSo(...) or require.Nil(...).
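For instance, here's a minimal sketch with testify (ParseConfig and the asserted field are hypothetical names standing in for whatever you're testing):
import (
    "testing"

    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
)

func TestParseConfig(t *testing.T) {
    config, err := ParseConfig("testdata/config.json")
    require.Nil(t, err) // stop here; the assertion below means nothing if err != nil
    assert.Equal(t, "expected-name", config.Name)
}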
While we're on the topic of assertions, traditional xUnit-style assertions are of the form:
assertEqual(expected, actual)
I much prefer a more BDD-style approach, which reverses the ordering of the parameters:
So(actual, should.Equal, expected)
IMO, this style reads better, and has a nice flow to it:
So(actualResult, should.Equal, ComplicatedStruct{
    ASDF: "expected",
    QWER: "values",
})
Whenever possible, do a single equality assertion against an entire data structure rather than an assertion per field of said data structure (as in the example above).
Libraries which make use of this style:
- github.com/mdwhatcott/testing/should
- github.com/smartystreets/assertions
- github.com/luontola/gospec (thanks to Esko Luontola for the idea for the mechanics of this style!)
Test Output && Logging:
Tests should produce no output unless a test fails or -v is passed to go test. In general, no news is good news. The *testing.T itself adheres to this guideline. The absolute worst output to see in a test run (that actually passes!) is something like the following:
$ go test
ERROR 13:34:33 server.go:33: borg id or ferengi name is required
starfleet/inertial-dampers.(*ServerSuite).TestContainmentFields()
/Users/starfleet/go/src/github.com/enterprise/inertial-dampers/server_test.go:122
reflect.Value.call()
/usr/local/go/src/reflect/value.go:476
reflect.Value.Call()
/usr/local/go/src/reflect/value.go:337
github.com/stretchr/testify/suite.Run.func1()
/Users/starfleet/go/pkg/mod/github.com/stretchr/testify@v1.7.0/suite/suite.go:158
testing.tRunner()
/usr/local/go/src/testing/testing.go:1193
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1371
PASS
ok starfleet/inertial-dampers 0.263s
This test passed, but logged a scary-looking stack trace. This kind of thing always produces a double-take. It can and should be avoided.
So, when you want to log something in a test, prefer t.Log(...) over log.Print(...) or whatever log wrapper your org has declared to be the one true way to log stuff. Figure out how to override the log wrapper and capture or redirect those logs to t.Log.
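One way to do that (a minimal sketch for the standard library log package; org-specific wrappers usually expose a similar output or writer hook) is an io.Writer adapter that forwards everything to t.Log:
package example_test

import (
    "log"
    "os"
    "strings"
    "testing"
)

// testWriter forwards everything written to it to t.Log, so the output only
// appears when the test fails or when -v is passed.
type testWriter struct{ t *testing.T }

func (w testWriter) Write(p []byte) (int, error) {
    w.t.Log(strings.TrimSuffix(string(p), "\n"))
    return len(p), nil
}

func TestWithQuietLogging(t *testing.T) {
    log.SetOutput(testWriter{t: t})
    defer log.SetOutput(os.Stderr) // restore the default when this test finishes

    log.Print("this line stays hidden unless the test fails or -v is passed")
}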
Test Coverage
100% code coverage is an asymptotic goal, but it's also the only legitimate goal to strive for since it's our responsibility to prove that every element of our code works as it should. The last decade of my experience has shown that with judicious structuring of packages and the application of a few well-known design principles (like the dependency inversion principle, the open-closed principle, etc.) it's very reasonable for most code coverage reports to be in the 80-95% range.
Most good IDEs can show test coverage info, but I generally prefer the go CLI, using the following bash function (it opens a browser with test coverage info on every file in your project):
gocover() {
    go test -coverprofile=/tmp/coverage.out "$@" ./...
    go tool cover -html=/tmp/coverage.out
}
Package Structure
It usually doesn't make sense to cover a program's main function with unit tests. That's where all the dependencies come together concretely and the program (oftentimes a long-running server) gets rolling. So, I've found it very helpful to put the code that main calls in separate packages (folders). This means that go test -cover gives back actual coverage numbers you can trust.
Lots of projects (especially repos defining a single microservice) favor a flat, single-folder, project/repo structure, but this small change in structure (separating main from all other code with packages) can really help make test coverage percentages more meaningful. Often just knowing your coverage is low is motivation enough to increase it, but if main and other non-unit-testable stuff is mixed in then the coverage numbers are obscured.
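As a sketch of what that separation looks like (the import path and constructors here are hypothetical), main just wires concrete pieces together and everything worth unit-testing lives elsewhere:
package main

import (
    "log"
    "net/http"

    "github.com/example/service/core" // hypothetical package holding all the testable logic
)

func main() {
    // Concrete dependencies come together here and nowhere else.
    handler := core.NewHandler(core.NewStorage())
    log.Fatal(http.ListenAndServe(":8080", handler))
}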
Here's a bash function that I call all the time to check test coverage:
makego() {
    go version
    bash -c -x 'go mod tidy'
    bash -c -x 'go fmt ./...'
    bash -c -x "go test -cover $@ ./..."
}
Example
Here's a personal project of mine (which is the software that generated the HTML your browser is currently rendering) in which I've tried to apply all of the advice given above. Anyway, here's what the test run looks like:
~/go/src/github.com/mdwhatcott/huguinho (main)
$ makego
go version go1.16.5 darwin/amd64
+ go mod tidy
+ go fmt ./...
+ go test -cover -count 1 ./...
? github.com/mdwhatcott/huguinho/cmd/huguinho [no test files]
? github.com/mdwhatcott/huguinho/cmd/huguinho-dev [no test files]
? github.com/mdwhatcott/huguinho/contracts [no test files]
ok github.com/mdwhatcott/huguinho/core 0.130s coverage: 100.0% of statements
? github.com/mdwhatcott/huguinho/io [no test files]
Notice that the core package of the app (which is about 90% of the code) is at 100% test coverage. The cmd packages don't even have tests (those are the main functions), nor do the contracts and io packages (they only contain interfaces and data structures).
I've deliberately minimized code that isn't covered and separated it from code that should be covered. If the test coverage of the core package ever drops below 100%, I immediately fix it.