Effective Testing at Merit

It's rare to meet a developer who enjoys writing and maintaining test code. Testing is often seen as an additional (tedious) step to completing a story, a hindrance to the team's velocity, or yet another time sink required by the company's "Definition of Done." In my experience, that's what happens when teams chase a specific code coverage percentage. At Merit we don't chase code coverage; we want developers to enjoy writing tests, or at least to feel that the tests they write are effective.

This is one of the ways we write effective tests at Merit.

Note that the example in this post is specific to our front-end web app, which uses these technologies: TypeScript, React, Relay, React Router, Jest, and @testing-library/react.

You've probably heard the saying "test the interface, not the implementation." I'm a firm believer in this, and I'll show you how we do it. But what does "test the interface" mean in the context of a front-end web app? It's actually quite simple. Every developer does this, potentially hundreds of times, manually during the development of a feature: testing the interface is performing the actions you take as a user to complete a use case. Our goal at Merit is to automate those steps so the developer isn't manually clicking through the app to ensure nothing broke each time they make a change.

Let's look at a real use case we have at Merit: a user wants to accept a merit but needs to log in first. An organization sends the user a merit, and the user receives an email with a call to action to accept it. Upon clicking that link, the user is directed to the Merit app, where we determine that the request is legitimate and that the user needs to log in to complete acceptance of the new merit.

One of the most important factors in writing effective tests is defining hard boundaries. Here we have two: the client itself and the route. We do not want to test anything outside of the client or a specific route, so we mock any request to our API as well as any navigation to another page. The use case above involves a few steps with route transitions, each of which would require its own test. In this post we will focus purely on a simplified version of the login piece.

Let's start with our hard boundaries. We only want to test one route at a time, so we render a single route and write expectations against `history.location` to assert that navigation to another route actually happened.
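To make that concrete, here's a minimal sketch of such a test. The `renderRoute` helper is described below; `LoginPage`, the URL, and the test ids are illustrative stand-ins rather than our actual code:

```tsx
import { screen } from "@testing-library/react";
import userEvent from "@testing-library/user-event";

test("logging in sends the user on to accept their merit", async () => {
  // Hard boundary #1: mount a single route backed by an in-memory history.
  // renderRoute (described below) also wires up the mock Relay environment,
  // which is hard boundary #2.
  const { history, environment } = renderRoute(<LoginPage />, {
    initialUrl: "/login?redirect=/accept-merit",
  });

  // Do exactly what a user would do: fill in the form.
  await userEvent.type(screen.getByTestId("email-input"), "jane@example.com");
  await userEvent.type(screen.getByTestId("password-input"), "hunter2");

  // ...submit the form and resolve the login mutation (shown below)...

  // Assert against history.location instead of rendering the next route.
  expect(history.location.pathname).toBe("/accept-merit");
});
```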

The `history` parameter is an in-memory browser history from the `react-router` library; it provides a simple way to inspect the state of the browser's location information. Our `renderRoute` function is a wrapper around the `testing-library` render function that sets up the routing context as well as our mock Relay environment. That leads us to our other boundary: calls to our API.
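Continuing the test above, submitting the form and handling the API call looks something like this (the mutation's variable and response shapes, like `login.user.name`, are assumptions for illustration):

```tsx
await userEvent.click(screen.getByTestId("login-submit-button"));

// Resolve the pending login mutation with mock data (wrapper shown later).
const { operation, data } = await resolveMostRecentOperation(environment);

// The mutation was called with what the user typed...
expect(operation.request.variables).toMatchObject({ email: "jane@example.com" });
// ...and the app now shows the newly logged-in user from the mocked response.
expect(screen.getByText(data.login.user.name)).toBeInTheDocument();
```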

Those few lines of code seem rather simple, but there's a lot to unpack. As you'd expect, when a user submits our login form we make a call to our API. The `resolveMostRecentOperation` function is a wrapper around `relay-test-utils` that does two things: it returns a mocked response to our application code, and it returns the operation plus the mocked data to our test code so we can write assertions. The first expectation is that the mutation was called with the proper parameters. The second is that our application code has set the newly logged-in user with the information returned from the mocked API call. It's important to note that after awaiting the result of `resolveMostRecentOperation`, our application code has completed its submit function; it has seen the mocked data and taken the appropriate action. This little detail means we don't have to sleep our test code to free up the event loop for the application code to complete its work.
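For reference, here is a sketch of what such helpers might look like, assuming React Router v5's `Router` and the `history` package; the `relay-test-utils` calls (`createMockEnvironment`, `MockPayloadGenerator.generate`, `environment.mock.getMostRecentOperation`, `environment.mock.resolve`) are that library's real API, but the wrappers themselves are illustrative, not our exact implementation:

```tsx
import * as React from "react";
import { act, render } from "@testing-library/react";
import { Router } from "react-router-dom";
import { createMemoryHistory } from "history";
import { RelayEnvironmentProvider } from "react-relay";
import { createMockEnvironment, MockPayloadGenerator } from "relay-test-utils";

type MockEnvironment = ReturnType<typeof createMockEnvironment>;

// Render a single route inside a routing context (in-memory history) and a
// mock Relay environment, and hand both back for assertions.
export function renderRoute(
  ui: React.ReactElement,
  { initialUrl = "/" }: { initialUrl?: string } = {},
) {
  const history = createMemoryHistory({ initialEntries: [initialUrl] });
  const environment = createMockEnvironment();
  const result = render(
    <RelayEnvironmentProvider environment={environment}>
      <Router history={history}>{ui}</Router>
    </RelayEnvironmentProvider>,
  );
  return { ...result, history, environment };
}

// Resolve the most recent pending Relay operation with generated mock data,
// then return the operation and data so the test can assert on both. The
// act() wrapper flushes React updates, which is why the application code has
// finished handling the response by the time this promise settles.
export async function resolveMostRecentOperation(environment: MockEnvironment) {
  const operation = environment.mock.getMostRecentOperation();
  const payload = MockPayloadGenerator.generate(operation);
  await act(async () => {
    environment.mock.resolve(operation, payload);
  });
  // Loosely typed on purpose: tests assert on generated mock fields.
  return { operation, data: payload.data as any };
}
```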

It is important to note that this test knows nothing about the underlying components being used or their internal state. This allows us to refactor our implementation and rely on our tests to tell us if something broke. Our test is only concerned with a couple of inputs and outputs, and the only reason these would change is an enhancement to the feature. If so desired, these tests can even be written in a TDD fashion, which typically doesn't work well for UI code. Finally, we have no assertions about page structure or style (aside from some expected data test ids). This is intentional: structure and style change more frequently than business logic. Where possible we only want to test business logic, as this greatly reduces the maintenance our tests require. Structure and style are best tested with your eyes, and those types of bugs are easier to catch than business logic bugs, especially when users can take a number of actions or there are many edge cases to handle.

As you can imagine, our job isn't done yet; there are negative cases to write, as well as tests for the other routes in our use case, but we'll leave it here for this post. To recap: test within your boundaries, codify the actions a user would perform, and avoid expectations on structure and style.

But seriously, what about code coverage?

When I joined Merit there was zero front-end testing; we relied on manual testing and end-to-end smoke tests. Fast forward six months, and every new feature delivered has maintained 80% code coverage. This was achieved without excluding files, without failing CI/CD when a pull request missed a specific percentage, and without even telling developers to hit 80% code coverage. We also have an established practice of thinking about our test cases early in the development cycle and of proving a bug exists with a test case to prevent future regressions. I won't claim that every developer at Merit now loves testing. However, each developer finds value in the tests they write, and that improves both our developers' and our users' experience.

Thank you for reading; I hope this has given you some insight into how we write tests at Merit. If you too would like to write effective tests, consider joining us. You can find our careers page here.
