The Ruby community has largely embraced automated testing, but that means people are often expected to write tests without ever having learned what automated tests are for. It’s pretty common for me to encounter someone writing a test without thinking about what kind of test it is.
Depending on your app, whether you’re using Rails, the conventions you follow, and other factors, you might have many different kinds of tests in your application. If you’re using RSpec and Rails, you probably even have “type” metadata on your tests, but that’s not what I’m talking about here.
Let’s take a step back and look at the high-level purposes of different types of tests. Bear in mind that test naming practices vary from community to community, so don’t put too much weight on the names themselves.
Acceptance tests (which often take the form of “feature” specs that use Capybara in Rails apps) verify that the application does what it’s supposed to do.
These tests provide feedback on the quality of your implementation (“does the app serve its intended purpose”) and provide a safety net against regressions.
They should be as “end-to-end” as possible and simulate external interactions with your application. There should be no mocking or stubbing, though sometimes “fake” external services are required when real ones can’t be used. They should use domain language (the language of the people using the app), not jargon related to the implementation.
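To make that concrete, here is a minimal sketch of the acceptance-test mindset in plain Ruby. The `CheckoutApp` class and its interface are hypothetical, and in a real Rails app this test would drive the UI through Capybara; the point is that the test touches only the application’s public surface and speaks domain language.

```ruby
# A hypothetical application with a public surface.
class CheckoutApp
  def initialize
    @items = {}
  end

  # The public interface -- the only thing an acceptance test may touch.
  def add_to_cart(name, price)
    @items[name] = price
  end

  def cart_total
    @items.values.sum
  end
end

# The "test" speaks domain language (carts, totals) and knows nothing
# about how CheckoutApp stores its data internally.
app = CheckoutApp.new
app.add_to_cart("coffee", 12)
app.add_to_cart("mug", 8)
raise "expected cart total of 20" unless app.cart_total == 20
```

Notice there are no doubles and no references to implementation details: if `CheckoutApp` were rewritten from scratch, this test would still pass as long as the behaviour survived.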
Unit tests exercise a single component of the system. We won’t argue about what constitutes a component; in Ruby, most people write unit tests around individual classes or modules.
While these tests serve to ensure components serve their intended purpose, they do more than that. Unit tests also provide design feedback.
If testing a unit requires too much setup, the component is probably either not cohesive enough (it does too many things), too coupled (it depends on too many things around it), or some combination of the two. A unit test with a large number of examples is good feedback too: the component may simply be too complex and need to be broken up.
There’s a whole host of “test smells” that you can keep an eye out for, but at the end of the day, if your code is difficult to test, then it’s probably difficult to use and even harder to reuse.
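As a sketch of what that design feedback looks like when it’s positive, here is a hypothetical, cohesive component whose unit test needs almost no setup. In an RSpec suite this would be a spec file; plain assertions are used here to keep the example self-contained.

```ruby
# A small, cohesive unit: it does one thing and depends on nothing else.
class DiscountPolicy
  def initialize(threshold:, rate:)
    @threshold = threshold
    @rate = rate
  end

  # Returns the discount owed on an order total.
  def discount_for(total)
    total >= @threshold ? total * @rate : 0
  end
end

# One line of setup is a sign the component is easy to use in isolation.
policy = DiscountPolicy.new(threshold: 100, rate: 0.1)
raise unless policy.discount_for(150) == 15.0
raise unless policy.discount_for(50) == 0
```

If instead this test had needed a database record, three collaborators, and a stubbed clock just to compute a discount, that pain would be the design feedback.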
I have no interest in wading into the “should you use test doubles” debate here. You should, and you should be using them in your unit tests. If you’re against using mock objects, then your testing strategy is not compatible with mine. Test doubles/mock objects should be passed into components to verify that they interact correctly with their collaborators.
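Here is a sketch of that idea with a hand-rolled double; RSpec users would reach for `instance_double` instead, and `OrderProcessor` and `FakeMailer` are hypothetical names. The double is injected so the test can verify the interaction with the collaborator, not the collaborator’s implementation.

```ruby
# The component under test takes its collaborator as an argument.
class OrderProcessor
  def initialize(mailer)
    @mailer = mailer
  end

  def process(order_id)
    # ... charge the card, update inventory, etc. ...
    @mailer.send_receipt(order_id)
  end
end

# A hand-rolled test double that records how it was called.
class FakeMailer
  attr_reader :receipts_sent

  def initialize
    @receipts_sent = []
  end

  def send_receipt(order_id)
    @receipts_sent << order_id
  end
end

mailer = FakeMailer.new
OrderProcessor.new(mailer).process(42)
raise unless mailer.receipts_sent == [42]
```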
Integration tests are tests that span multiple components. They ensure that a given set of components work together.
These tests serve a similar purpose to acceptance tests, but at an intermediate level. They can be useful for testing your own APIs that wrap third-party code, or verifying many branches of a feature that might be too slow or cumbersome to test from the outside.
These straddle the line between unit and acceptance tests. There’s no hard and fast rule for where to draw the boundaries in your integration tests, as there is with unit and acceptance tests (a single component and the whole system, respectively). You must decide what it is you’re testing the integration of, and it’s best to be explicit about that in your tests.
Sometimes these kinds of tests require test doubles. Often they don’t. It depends on what you decide you’re testing.
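For illustration, here is a sketch of an integration test over two hypothetical components, exercised together through their real implementations with no doubles, because what’s being tested is explicitly their collaboration.

```ruby
# Two components with a real dependency between them.
class WordTokenizer
  def tokenize(text)
    text.downcase.scan(/[a-z]+/)
  end
end

class WordCounter
  def initialize(tokenizer)
    @tokenizer = tokenizer
  end

  def count(text)
    @tokenizer.tally
  end if false # (placeholder branch never defined; see real method below)

  def count(text)
    @tokenizer.tokenize(text).tally
  end
end

# The boundary under test is stated up front: tokenizer + counter together.
counter = WordCounter.new(WordTokenizer.new)
result = counter.count("To be, or not to be")
raise unless result == { "to" => 2, "be" => 2, "or" => 1, "not" => 1 }
```

A unit test for `WordCounter` alone would hand it a double for the tokenizer; this test deliberately does not, because the integration is the point.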
When you sit down to write a test, consider what kind of feedback you are trying to get from it. A unit test won’t help you assert an external behaviour. An acceptance test won’t give you feedback on whether your components are reusable. An integration test might do a little of both, but neither well.
Consider what you are testing. Write the test that gets you what you want out of the test.