Mike Clark posed the question, “How do you test controllers?” I found it really interesting that he said that testing controllers feels more challenging than testing models. Experience has shown it to be the complete opposite for me – testing controllers is generally child’s play, and the model is where I run into the most trouble.
I wanted to spend a couple days reflecting on this rather than immediately post a reaction. After all, if my experience is completely different from that of others, there’s probably something illuminating somewhere in the middle.
If you’re already a seasoned TDD/BDD/RSpec’er and don’t care about the steps, jump to the complete specs and my commentary. If you want to see a controller BDD’d from start to finish, continue on.
To start with, I’ll show you an example of how I would BDD a controller. To keep things simple, it’ll be the exact same controller that Mike used as an example. I’m not going to show the finished controller here, even for convenience, because it can’t yet exist if we’re doing BDD. In fact, I’ve got autotest running, waiting for me to get to work.
So the first thing I do is run “./script/generate rspec_controller menu_items” to create my controller along with some RSpec scaffolding. autotest informs me that two specs are already green. Nice. They’re worthless though, so I clear out menu_items_controller_spec.rb.
I don’t know much about this spec at first. I’m not quite sure what should happen, and I don’t know what the givens are. I do know what the action is though, which is to post to my controller. So I’ll write that skeleton.
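A sketch of what that skeleton might look like (the do_post helper name is just a convention, not anything RSpec requires):

```ruby
describe MenuItemsController, "POST /menu_items" do
  # The one thing we know so far: the action is a POST to create.
  def do_post
    post :create
  end
end
```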
Now that we’ve figured out what action will take place, we need to specify what actually happens. In BDD we like to start with the simplest thing that could possibly work. I’ve found that in testing controllers, it’s almost always the HTTP response. With an empty action method, the response should be simply 200. If that messes up, you know something’s wrong!
In our case we’re expecting a redirect. So we’ll set up an expectation that the response is a redirect.
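Using RSpec’s be_redirect matcher, that expectation might read:

```ruby
it "should be a redirect" do
  do_post
  response.should be_redirect
end
```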
autotest comes up red now, telling me “No action responded to create.” The compiler would have told us about that if we were using Java, but since Ruby is interpreted we don’t know until runtime. Nifty little trick: RSpec helps you figure out the difference between “compile errors” and unmet expectations by coloring them differently. The “No action responded to create” appears in pink, and when I add the code:
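The code in question is nothing more than an empty action:

```ruby
class MenuItemsController < ApplicationController
  def create
  end
end
```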
I get the failure message “expected redirect? to return true, got false” in red. If you’ve written an expectation and can’t quite figure out how to make it pass, use RSpec’s coloring to help guide the way.
So like I said, the redirect expectation failed. Again it’s an easy fix.
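The fix, sketched, is to redirect anywhere at all:

```ruby
def create
  redirect_to "foo"
end
```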
It may seem silly to redirect to “foo” here, but in reality we’ve learned a lot. Our controller is up and running, and responds to requests to the create action. Now we can start to implement the logic. Can you imagine how frustrating it would be to spend hours (okay, more than 5 minutes) implementing this action, only to find out that it’s not even hooked up properly?
Redirecting to “foo” isn’t very useful. Instead we’ll want to redirect to the index. We could expect a redirect to “:controller => :menu_items, :action => :index” but I want my application to be RESTful so we’ll use the REST URL helper.
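The expectation, using the redirect_to matcher and the RESTful URL helper, might look like:

```ruby
it "should redirect to the menu items index" do
  do_post
  response.should redirect_to(menu_items_url)
end
```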
RSpec complains that the menu_items_url method doesn’t exist. Looks like we have to define the menu_items resources in our routes.rb file.
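For the Rails of this era, that means declaring the resource in config/routes.rb, roughly:

```ruby
# config/routes.rb
ActionController::Routing::Routes.draw do |map|
  # Generates the RESTful routes and helpers, including menu_items_url
  map.resources :menu_items
end
```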
Now our spec fails with the message “expected redirect to ‘http://test.host/menu_items’, got redirect to ‘http://test.hostfoo’.” Quick change of the redirect call and we’re good.
It probably seems like we’re moving quite slowly. In reality this would take ~30 seconds to get to this point. I’m just a windbag.
Our entire spec now looks like this:
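Reconstructing it, the spec at this point would read roughly:

```ruby
describe MenuItemsController, "POST /menu_items" do
  def do_post
    post :create
  end

  it "should be a redirect" do
    do_post
    response.should be_redirect
  end

  it "should redirect to the menu items index" do
    do_post
    response.should redirect_to(menu_items_url)
  end
end
```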
The first example is redundant, because if the response redirects to a certain URL then it has to be a redirect. So we’ll take out the first, more general example since it is now nothing but noise.
Now we want to specify that a flash message is shown.
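Something like the following (the exact flash text is a placeholder), along with the one-line addition to the action that makes it pass:

```ruby
it "should show a flash message" do
  do_post
  flash[:notice].should == "Menu item saved"
end

# and in the action:
def create
  flash[:notice] = "Menu item saved"
  redirect_to menu_items_url
end
```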
Oh baby we’re rolling. Now it’s time for the hard part. Creating the model. We’ll set up an expectation that says an object should be saved. That seems easy enough.
RSpec complains that @menu_item doesn’t exist. Actually it says that NilClass expected save but didn’t receive it. It’s in pink though, which I know means that my code doesn’t run at all, not that an expectation failed. I need to define @menu_item somewhere, which brings us to RSpec’s before block. This will run before each example, and is where we set up the fixtures.
Our full spec is now
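With the before block in place, the full spec might look like:

```ruby
describe MenuItemsController, "POST /menu_items" do
  before(:each) do
    @menu_item = mock("menu item")
  end

  def do_post
    post :create
  end

  it "should save the menu item" do
    @menu_item.should_receive(:save)
    do_post
  end

  it "should show a flash message" do
    do_post
    flash[:notice].should == "Menu item saved"
  end

  it "should redirect to the menu items index" do
    do_post
    response.should redirect_to(menu_items_url)
  end
end
```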
autotest tells me the expectation isn’t met. “Mock ‘menu item’ expected :save with (any args) once, but received it 0 times.”
Our controller needs to create an object and save it. MenuItem is a reasonable class name, so we’ll just use that.
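The obvious implementation, sketched:

```ruby
def create
  menu_item = MenuItem.new
  menu_item.save
  flash[:notice] = "Menu item saved"
  redirect_to menu_items_url
end
```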
That looks like it should work, but we’re informed that MenuItem doesn’t exist. I’ll cheat and stick a dummy definition in the spec file.
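The cheat is just an empty class:

```ruby
# Temporary stand-in at the top of menu_items_controller_spec.rb
class MenuItem
end
```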
Now it says that our new menu item doesn’t have a save method. We could add save to the dummy class, but we’re going to toss that class very soon. Plus we can do better with mocks. As we saw when we started this example, we can create a mock that lets us know when the save message was called. So we’ll stub MenuItem to return our mock.
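In the before block, that means stubbing the class-level new to hand back the mock, something like:

```ruby
before(:each) do
  @menu_item = mock("menu item")
  MenuItem.stub!(:new).and_return(@menu_item)
end
```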
And now we see that our mock received an unexpected message in two examples. Mocks will complain if they receive any message that they aren’t expecting. This can make specs a bit more brittle, but on the other hand you don’t really want your objects responding to messages willy-nilly. If someone changes your code so it calls a method it shouldn’t, at least you have some early indication that maybe it shouldn’t do that, and you can talk it over.
Anyway we’ll tell our mock menu item to stub out calls to save.
Whew, our specs finally pass.
Again this may seem like it took a while to get here, but I’m really just trying to show all the steps. In reality that goes by very quickly.
Almost there. The last thing we want to know is that the object was created with the proper arguments. We can do this by setting an expectation on MenuItem.new.
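Sketching that expectation (the attribute hash here is hypothetical):

```ruby
it "should create a new menu item from the posted attributes" do
  MenuItem.should_receive(:new).with("name" => "Super Burrito")
  do_post
end
```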
The hash comes from the parameters since this is an HTTP request. We didn’t set them up in the do_post method because we didn’t need them up to this point. So first we’ll change that to pass in some parameters:
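Again with a made-up attribute:

```ruby
def do_post
  post :create, :menu_item => { :name => "Super Burrito" }
end
```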
and update the action:
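The action now builds the object from the request parameters:

```ruby
def create
  menu_item = MenuItem.new(params[:menu_item])
  menu_item.save
  flash[:notice] = "Menu item saved"
  redirect_to menu_items_url
end
```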
We run the specs and see that one of them fails. RSpec says that we tried to call save on a nil object. Oops, I forgot to return our mock in the MenuItem.new expectation.
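The fix is a one-liner on the expectation:

```ruby
MenuItem.should_receive(:new).with("name" => "Super Burrito").and_return(@menu_item)
```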
Now all of our examples run.
That covers the happy path. What about when the menu item doesn’t save? We’ll write a new spec for that.
We can borrow the setup from the first spec since it’s exactly the same. This violates DRY, but I’ve got a couple of views on this:
- Production code needs to be DRY, spec code can be moist
- The code is so simple there’s no real risk of screwing up
- We know from experience that the code won’t stay exactly the same. No point in removing duplication if we’re just going to split them back up in two minutes.
We start off with the basic skeleton:
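Roughly:

```ruby
describe MenuItemsController, "POST /menu_items (save fails)" do
  before(:each) do
    @menu_item = mock("menu item")
    @menu_item.stub!(:save)
    MenuItem.stub!(:new).and_return(@menu_item)
  end

  def do_post
    post :create, :menu_item => { :name => "Super Burrito" }
  end
end
```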
Instead of expecting a redirect, we want to display the form again when the item doesn’t save.
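Using the render_template matcher:

```ruby
it "should redisplay the form" do
  do_post
  response.should render_template(:new)
end
```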
That fails, which we know is because it’s getting a redirect. We could have actually taken a smaller step and expected the response to be a success, but I’m feeling bold on this one.
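The conditional that makes the new spec pass might look like:

```ruby
def create
  menu_item = MenuItem.new(params[:menu_item])
  if menu_item.save
    flash[:notice] = "Menu item saved"
    redirect_to menu_items_url
  else
    render :action => :new
  end
end
```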
Hrm. Our new spec passes but two of our old ones fail. It should be pretty obvious though. None of our mocks return anything when save is called. Since we introduced a conditional, the first spec treats the object as though the save failed. We’ll change its stub and expectation to return true.
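In the happy-path spec, both the stub and the save expectation now return true:

```ruby
@menu_item.stub!(:save).and_return(true)
# ...and in the example that expects the save:
@menu_item.should_receive(:save).and_return(true)
```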
For clarity I’ll change the second spec to explicitly return false.
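That is, in the failure spec’s before block:

```ruby
@menu_item.stub!(:save).and_return(false)
```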
Expressing Intent through Expectations
You’ll notice that the second spec only has one expectation. We had already tested most of the code so there’s no reason to do that again. At this point we only need to specify what changes, which in this case is whether saving the record succeeded or failed.
These specs are pretty good, but there’s a minor change I would make so they express intent more clearly. Specifically, they should clearly express what changes between the specs. Our most useful tool in this case is mock expectations. I would change the second spec to read:
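A reconstruction of that rewrite: the failing save moves out of the before block and into the example itself, where it reads as the reason for the behavior:

```ruby
describe MenuItemsController, "POST /menu_items (save fails)" do
  before(:each) do
    @menu_item = mock("menu item")
    MenuItem.stub!(:new).and_return(@menu_item)
  end

  def do_post
    post :create, :menu_item => { :name => "Super Burrito" }
  end

  it "should redisplay the form when the menu item fails to save" do
    @menu_item.should_receive(:save).and_return(false)
    do_post
    response.should render_template(:new)
  end
end
```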
It’s really easy for anyone reading my specs to figure out what’s important. In the very first spec (save succeeds), the reader knows that a new object should be created, saved, and then we flash a message and redirect. In the second spec we don’t really care about creating the object since that’s already tested. The only thing that differs is that saving fails.
In one glance, the reader knows immediately how the behavior of collaborating objects affects the behavior of the unit under test. Contrast that with using fixtures that require you to know why the collaborating objects behave the way they do. My way is not only good programming style – application logic doesn’t need to change because the underlying business logic does – but it makes your life easier because there’s less stuff to keep in your head at once!
There’s a guideline in TDD that says to stub queries and expect commands. It’s a very useful guideline, but I think the exception is when stubbing queries expresses your intent. Jay Fields is concerned that mocks create “so much noise in the test that it’s hard to determine what the intent of the test is.” This is certainly true at times. However you can decrease the noise by deliberate use of expectations to differentiate between what is important to understand the test and what is merely scaffolding to make it run.
Why controllers are easy for me and hard for you
I hope my example showed just how easy it is to spec controllers using RSpec. Once you’ve done it a couple times you just sort of write the whole thing at once without the seemingly silly baby steps. Most of the code we write though is a lot more complicated and suddenly those baby steps feel like a comfort blanket.
So let’s look at why controllers are so easy to specify when we use RSpec. If you’re a Test::Unit head, you’ll be happy to know that it has basically nothing to do with RSpec and everything to do with our use of mocks.
Using mock objects in this situation allows us to completely decouple application logic from business logic. Controllers shouldn’t be smart, instead simply coordinating the actions of truly smart objects. As I pointed out earlier, I don’t need to know what a menu item requires in order to be saved. I simply need to know that it can save itself and let me know whether it succeeded or failed.
That’s it in a nutshell. I can write application code that doesn’t depend on the business logic. To illustrate this, look at how we implemented MenuItem:
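Every spec ran against mocks, so the entire “implementation” amounted to the empty stand-in class:

```ruby
class MenuItem
end
```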
There’s no need to create a MenuItem table, whose fields we don’t even know at this point. In fact we don’t need a database at all yet.
We also learned quite a bit about the interface:
- We need a real class MenuItem
- It needs a constructor which can take a hash
- It has a method named #save which returns true on success and false on failure
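To make that interface concrete, here’s a hypothetical in-memory object that satisfies it, with no ActiveRecord and no database (the validation rule is invented purely for illustration):

```ruby
# A plain-Ruby stand-in honoring the interface the specs discovered:
# construct it with a hash, and #save returns true or false.
class MenuItem
  def initialize(attributes = {})
    @attributes = attributes
  end

  # Pretend validation: saving "succeeds" only when a name is present.
  def save
    !@attributes[:name].nil?
  end
end
```

Any object with this shape, whether it’s this toy, ActiveRecord, or ActiveResource, would satisfy the controller equally well.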
Hey, ActiveRecord gives us the last two, so we can just inherit from it and we’re good to go. Or we could use ActiveResource. Or anything else really. Our controller code doesn’t care how persistence is implemented, and it certainly doesn’t care what it means for domain objects to be valid.
Why models are easy for you and hard for me
Rails’ ActiveRecord is a fantastic implementation of the Active Record pattern found in Martin Fowler’s Patterns of Enterprise Application Architecture.
Active Record is a good choice for domain logic that isn’t too complex, such as creates, reads, updates, and deletes. Derivations and validations based on a single record work well in this structure.
It’s easy to build Active Records, and they are easy to understand. Their primary problem is that they work well only if the Active Record objects correspond directly to the database tables: an isomorphic schema. If your business logic is complex, you’ll soon want to use your object’s direct relationships, collections, inheritance, and so forth. These don’t map easily onto Active Record, and adding them piecemeal gets very messy.
It’s clear why AR is a great pattern for web apps. Our persisted objects have a generally flat inheritance hierarchy and don’t have particularly complex relationships or logic.
Testing at the beginning of an app is pretty easy. You can just write your specs using real AR objects and you’re good to go. However as you write more specs and evolve a deeper domain model, it becomes a lot harder. Tests start to slow down because of dependence on the database. You start having brittle tests because domain logic gets mixed with db logic (AR callbacks have been my biggest foe here, which I’ll discuss in a forthcoming post).
The problem we face is that AR promises huge productivity gains for the non TDD-er, and challenges the thinking of the die-hard TDD-er.
In our app at work we’ve got 46 AR classes that have quite a bit of behavior. I couldn’t tell you how that compares to other Rails apps, though I’m positive it’s not trivial and am even more positive it’s not enterprise (despite what my boss tells customers). The domain logic gets complex in places, and AR’s muddling of responsibilities means I have to be even more careful about coding myself into a corner (which, despite how careful I am, happens a lot. But this is Ruby, so it doesn’t take long to fix anyway).
I’m at a point where I’m considering leaving AR for at least a part of our model code. The domain logic and persistence logic is getting complex enough that ideally I could test them independently. Just like the application logic used mocks for the domain logic, our domain objects can mock out the persistence logic, leaving us with a nice layered architecture. Then we can let the domain objects interact with real implementations of other domain objects because they’re lightweight and easy to work with since we’re not mixing concerns.
There’s a lot of debate on whether you should design code for testability. I’m of the camp that says you should, because I’m also of the camp that says that untested code is worthless at best, harmful at worst.
My controllers are easy to spec because they should be easy to spec. They just take some objects and tell them to do stuff. It’s what those objects do that is complex. You can use mock objects to isolate that complexity. If you don’t use mock objects, then you end up with leaky abstractions that make your tests harder to understand and maintain.