This repository aims to show some examples of how I've done different tests in Buffalo apps. I'm not saying that these are best-practice, just that I've found these techniques and patterns useful for validating what's happening.
Please feel free to raise issues and discussions where you have a comment or suggestion. And I'm always happy for someone to raise a PR to show off what they've discovered too.
Click on each of these named examples to see more details...
## Testing in GitHub Actions
Here you can see an example of what's necessary to test your app using GitHub Actions.
- Compose file - this puts your tests together with the database
- database.yml - defines your database details
- Tests script - this does the testing for you
- PR workflow - this runs the tests using docker compose
- Linting workflow - this enforces common linting rules
Let's start with the Docker Compose file. This sets up the DB and makes your app depend on it, setting the database details using environment variables. These variables are consumed in `database.yml`. The compose file also gives access to all of the files in the project and runs the tests script, which sets up the DB by running the migrations and then executes your tests.
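As a rough sketch of that arrangement (service names, images, and variable names here are illustrative, not the repository's actual file), the compose file looks something like:

```yaml
# Hypothetical docker-compose.yml sketch
version: "3"
services:
  db:
    image: postgres:14
    environment:
      POSTGRES_PASSWORD: postgres
  test:
    image: golang:1.21
    depends_on:
      - db                     # the tests wait on the database service
    environment:
      TEST_DB_HOST: db         # consumed by database.yml
      TEST_DB_USER: postgres
      TEST_DB_PASSWORD: postgres
    volumes:
      - .:/app                 # give the tests access to all project files
    working_dir: /app
    command: ./test.sh         # run migrations, then the test suite
```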
If you would like to run these tests from inside docker locally, just use the following command:
`docker-compose down && docker-compose up --remove-orphans --abort-on-container-exit test`
The PR workflow is the bit that triggers your tests when you open a PR and push new commits to it.
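A minimal version of such a workflow (file name, trigger branches, and action versions are assumptions, not the repository's actual workflow) might look like:

```yaml
# Hypothetical .github/workflows/pr.yml sketch
name: PR tests
on:
  pull_request:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run the test suite via Docker Compose
        run: docker-compose up --remove-orphans --abort-on-container-exit test
```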
Lastly, we have the linting workflow - this is just a good idea really.
## Little user testing tweaks
This is a really simple one: I've created a couple of helpers that have been useful when defining user interactions.
- actions_test.go - test helpers sit here
- fixture data - test data to be inserted in to the database
- example test - `Test_HomeHandler_LoggedIn` shows a helper being used
These are a couple of very simple helpers to:
- Set the current session using the email address as the key
- Get a user's details by ID
In both of these helpers, we just fail immediately if there's a problem. There's no point propagating the error back up, because a problem here is a fundamental failure of the test environment.
Also, don't forget to load the fixtures at the beginning of your tests!
## Workflow tests
Workflow tests allow you to put together several actions so that you can test more complex behaviours. These work by taking advantage of the session and other internal details to retain user state between actions.
The idea is that you can define behaviour that works across different endpoints in your app so that you can think about a feature from beginning to end. It's been particularly powerful when used with tests around user auth.
- Workflow tests - this is where the individual workflow tests live
A simple test has been created that describes a user attempting to view the home page, being refused because they aren't logged in, logging in, and then successfully viewing the home page.
This is a very simple example, so it duplicates some of the testing that is generated by the auth plugin. It does, however, have a few interesting features:
- Each stage in the workflow goes in its own block, avoiding accidentally shadowing variables and making it easier to read which assertions belong to which action
- We don't assert much for each action, just enough to prove the right thing happened
- Session state is maintained between actions
- A single 'request' has been created at the beginning of the test. For the purposes of your tests, you can pretend that this object represents the browser. It will keep track of headers and cookies between requests so that you don't have to.
## API tests
This technique allows you to simply and concisely validate your API endpoints. This is much lighter weight than traditional tests in Buffalo, allowing you to create them much more quickly and easily.
- API tests - API tests and surrounding assertion framework
For these tests, a mini framework has been created to help make sure that certain invariants about APIs hold (like the Content-Type being correct) and to make individual tests fit on one line.
There are some interesting features to be aware of:
- The function `ln` is used to create a name based on the line number for the test. I found this made it easier to see which test was failing: if you have dozens of tests, it becomes hard to see what a long string of text maps to, but a line number makes it super easy.
- Test assertion failures output a lot of detail about the individual request, response, session, etc. This makes it easy to see everything that's going on in your test at the same time, which I found makes it much quicker to diagnose failures.
- Finally, I chose to use an expression language (github.com/antonmedv/expr) to allow you to add arbitrary assertions for each test, with access to all of the request, response, and session information. This array of assertions allows you to check as much or as little as you want for each test. It's a bit of an opinionated way of approaching the problem, but I found that it's worked out really well in the testing I've done so far.