Monthly Archives: June 2013

Unit testing model validation with MVC’s DataAnnotations

In a previous post, I mentioned that model validation should be tested separately from controller logic. I will demonstrate a way of unit testing the validation of models implemented with System.ComponentModel.DataAnnotations.

It is actually quite easy to unit test model validation. Models are inherently easy to test in isolation due to their POD (Plain Old Data) nature: we can instantiate them directly. Moreover, DataAnnotations provides the necessary API to run validation against a model object completely separately from the rest of the application.

A first model validation test class

Here is a basic model that we will unit test for the demonstration:
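A minimal sketch of such a model, assuming a CreatePersonModel with a couple of required strings and a range-constrained integer (the exact properties and rules are illustrative):

```csharp
using System.ComponentModel.DataAnnotations;

public class CreatePersonModel
{
    [Required]
    [StringLength(50)]
    public string FirstName { get; set; }

    [Required]
    [StringLength(50)]
    public string LastName { get; set; }

    [Range(0, 150)]
    public int Age { get; set; }
}
```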

As you can see, it is quite simple. We’ll go directly to the unit test implementation.

The strategy we are going to use is to base each test case on a valid model instance, then modify it in such a way that it triggers a single validation error. We end up with a skeleton unit test class like this:
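A sketch of that skeleton, using NUnit and Validator.TryValidateObject(); the member names follow the ones discussed below, everything else is illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;
using NUnit.Framework;

[TestFixture]
public class CreatePersonModelValidationTests
{
    // Describes one test case: how to break the model and which member should be reported.
    public class ValidateRuleSpec
    {
        public Action<CreatePersonModel> MakeInvalid { get; set; }
        public string MemberName { get; set; }
        public string Description { get; set; }

        // Rendered in the test report so each case is identifiable at a glance.
        public override string ToString()
        {
            return string.Format("{0} ({1})", Description, MemberName);
        }
    }

    private static CreatePersonModel CreateValidModel()
    {
        return new CreatePersonModel { FirstName = "John", LastName = "Doe", Age = 30 };
    }

    [Test]
    [TestCaseSource("ValidateRule_Source")]
    public void ValidateRule(ValidateRuleSpec spec)
    {
        // Start from a valid model and break exactly one rule.
        var model = CreateValidModel();
        spec.MakeInvalid(model);

        var results = new List<ValidationResult>();
        var context = new ValidationContext(model, null, null);
        var isValid = Validator.TryValidateObject(model, context, results, true /* validateAllProperties */);

        Assert.IsFalse(isValid);
        Assert.AreEqual(1, results.Count);
        Assert.IsTrue(results[0].MemberNames.Contains(spec.MemberName));
    }

    private static IEnumerable<ValidateRuleSpec> ValidateRule_Source()
    {
        yield break; // the actual specs are listed below
    }
}
```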

Some explanation:

  • ValidateRule() implements the test itself; it receives the specification for each test case via an argument. 
  • ValidateRule_Source() provides the specs for our test.
  • class ValidateRuleSpec holds the specifications. Its ToString() uses the spec values to render a distinct string per test. This makes unit test reports easy to read. In case of failure, you know exactly which spec failed.

And now the implementation of our ValidateRule_Source():
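Something along these lines, with one spec per validation rule of the model sketched above:

```csharp
private static IEnumerable<ValidateRuleSpec> ValidateRule_Source()
{
    yield return new ValidateRuleSpec
    {
        Description = "FirstName is required",
        MemberName = "FirstName",
        MakeInvalid = m => m.FirstName = null
    };
    yield return new ValidateRuleSpec
    {
        Description = "LastName is required",
        MemberName = "LastName",
        MakeInvalid = m => m.LastName = null
    };
    yield return new ValidateRuleSpec
    {
        Description = "Age must be between 0 and 150",
        MemberName = "Age",
        MakeInvalid = m => m.Age = -1
    };
}
```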

Refining the solution

This works but can be improved. Most of the functionality can be abstracted. The actual test need only provide the valid model object and the specifications. A bit of refactoring yields a nicer design for our test:
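One possible shape for the refactored tests: the base class owns the validation mechanics while the concrete fixture only supplies the valid model and the specs (details are illustrative):

```csharp
// Same usings as the previous listings.
public abstract class ModelValidationTestsBase<TModel> where TModel : class
{
    public class ValidateRuleSpec
    {
        public Action<TModel> MakeInvalid { get; set; }
        public string MemberName { get; set; }
        public string Description { get; set; }
        public override string ToString() { return string.Format("{0} ({1})", Description, MemberName); }
    }

    // Derived fixtures supply a model instance that passes every validation rule.
    protected abstract TModel CreateValidModel();

    // Runs one spec: start from a valid model, break one rule, expect exactly one error.
    protected void RunValidationSpec(ValidateRuleSpec spec)
    {
        var model = CreateValidModel();
        spec.MakeInvalid(model);

        var results = new List<ValidationResult>();
        var isValid = Validator.TryValidateObject(
            model, new ValidationContext(model, null, null), results, true);

        Assert.IsFalse(isValid, spec + ": the modified model should be invalid");
        Assert.AreEqual(1, results.Count, spec + ": exactly one validation error expected");
        Assert.IsTrue(results[0].MemberNames.Contains(spec.MemberName), spec + ": unexpected member name");
    }
}

[TestFixture]
public class CreatePersonModelValidationTests : ModelValidationTestsBase<CreatePersonModel>
{
    protected override CreatePersonModel CreateValidModel()
    {
        return new CreatePersonModel { FirstName = "John", LastName = "Doe", Age = 30 };
    }

    [Test]
    [TestCaseSource("ValidateRule_Source")]
    public void ValidateRule(ValidateRuleSpec spec)
    {
        RunValidationSpec(spec);
    }

    private static IEnumerable<ValidateRuleSpec> ValidateRule_Source()
    {
        yield return new ValidateRuleSpec { Description = "FirstName is required", MemberName = "FirstName", MakeInvalid = m => m.FirstName = null };
        yield return new ValidateRuleSpec { Description = "LastName is required", MemberName = "LastName", MakeInvalid = m => m.LastName = null };
        yield return new ValidateRuleSpec { Description = "Age must be between 0 and 150", MemberName = "Age", MakeInvalid = m => m.Age = -1 };
    }
}
```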

Notice how we trimmed CreatePersonModelValidationTests to a minimum. The class ModelValidationTestsBase can now be used for most of our model validation unit tests.

What do you think?

Unit test equality is not domain equality

I came across a question on Stack Overflow about comparing objects for equality in unit tests. The poster basically wants to get rid of a series of assertions like this:
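Something along these lines (the exact properties in the question differ):

```csharp
// Hand-written, property-by-property comparison.
Assert.AreEqual(expected.FirstName, actual.FirstName);
Assert.AreEqual(expected.LastName, actual.LastName);
Assert.AreEqual(expected.DateOfBirth, actual.DateOfBirth);
Assert.AreEqual(expected.PhoneNumber, actual.PhoneNumber);
Assert.AreEqual(expected.Email, actual.Email);
```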

I agree, this is ugly, all the more so if you have multiple tests that assert on the equality of these properties. You can always factor it out into a helper function, but you still end up writing the comparisons yourself.

The selected answer doesn’t feel right. The proposal is to implement Equals() in the class of the object under test. This is not always desirable or even possible. Consider the case where your use case actually makes use of Equals() in its logic: there may already be an implementation of Equals() that satisfies different needs than those of your test. Moreover, when overriding Equals(), there is more to it than that single function. GetHashCode() must be implemented too, and correctly! If you don’t implement GetHashCode(), you may end up with subtle or not-so-subtle bugs whenever your object gets stored as a dictionary key. In most cases it will not be an issue, because only very few classes are actually used as dictionary keys. However, if you get into the habit of overriding Equals() without GetHashCode(), you can be bitten hard!

One of the most favored answers is to use reflection to discover and compare the properties. This is the way to go: code that exists solely for testing purposes should be kept away from the classes you test. However, I find the proposed solution sub-optimal. For one thing, the method is dedicated to testing and directly calls Assert.AreEqual(). For another, I don’t like that it automatically recurses into IList properties, but that is a question of style.

I would propose a general-purpose utility method like this:
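Here is one way to write it. It compares the public properties of two instances via reflection and simply reports the differences, without asserting anything and without recursing into collections (the naming is arbitrary):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class ObjectComparer
{
    // Compares the public readable properties of two objects of the same type and
    // returns a human-readable description of each difference. Returns an empty
    // list when all properties are equal. Collections are compared by Equals(),
    // not recursed into.
    public static IList<string> FindPropertyDifferences<T>(T expected, T actual)
    {
        if (expected == null) throw new ArgumentNullException("expected");
        if (actual == null) throw new ArgumentNullException("actual");

        var differences = new List<string>();
        var properties = typeof(T).GetProperties()
            .Where(p => p.CanRead && p.GetIndexParameters().Length == 0);

        foreach (var property in properties)
        {
            var expectedValue = property.GetValue(expected, null);
            var actualValue = property.GetValue(actual, null);

            if (!Equals(expectedValue, actualValue))
            {
                differences.Add(string.Format(
                    "{0}.{1}: expected <{2}> but was <{3}>",
                    typeof(T).Name, property.Name, expectedValue ?? "null", actualValue ?? "null"));
            }
        }

        return differences;
    }
}
```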

And of course, the unit tests that go along with it:
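Two representative cases (a full suite would also cover nulls, collections and so on):

```csharp
[TestFixture]
public class ObjectComparerTests
{
    private class Sample
    {
        public string Name { get; set; }
        public int Count { get; set; }
    }

    [Test]
    public void FindPropertyDifferences_ReturnsNothing_WhenAllPropertiesAreEqual()
    {
        var left = new Sample { Name = "foo", Count = 3 };
        var right = new Sample { Name = "foo", Count = 3 };

        Assert.AreEqual(0, ObjectComparer.FindPropertyDifferences(left, right).Count);
    }

    [Test]
    public void FindPropertyDifferences_ReportsEachDifferingProperty()
    {
        var left = new Sample { Name = "foo", Count = 3 };
        var right = new Sample { Name = "bar", Count = 3 };

        var differences = ObjectComparer.FindPropertyDifferences(left, right);

        Assert.AreEqual(1, differences.Count);
        StringAssert.Contains("Name", differences[0]);
    }
}
```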

It can then be used in a unit test like this:
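For example (Person and repository are placeholders for whatever you are actually testing):

```csharp
[Test]
public void EditPerson_UpdatesThePersonRecord()
{
    var expected = new Person { FirstName = "John", LastName = "Doe" };

    var actual = repository.GetPerson("john-doe"); // hypothetical system under test

    var differences = ObjectComparer.FindPropertyDifferences(expected, actual);
    Assert.AreEqual(0, differences.Count, string.Join(Environment.NewLine, differences));
}
```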

You can wrap the method in a unit-test-friendly static assert method or any way you like. The above test fails in a quite explicit way; an error message looks like this:
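With the formatting used in the sketch above, the failure message contains one line per differing property, something like:

```
Person.LastName: expected <Doe> but was <Smith>
```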

What do you think?

Asp.Net MVC: Testing your controller actions

I can’t say I like every aspect of Microsoft ASP.NET MVC, but one aspect I do like is the ability to unit test most of the components of your application. Standard ASP.NET did not prevent you from testing your application; however, the framework and documentation did not encourage you to organize your application in a way that is testable.

ASP.NET MVC really pushes for loosely coupled components and, as such, encourages unit testing. Your controller actions are not much more than plain methods that take some input, execute some logic and return an output. Your controller is not responsible for much in the end. Even though it is a central part of any functionality, its responsibility is reduced to implementing the logic for responding to a certain type of request. It relies on the framework configuration and conventions to call its methods in an appropriate way.

Understanding the extent of a controller’s responsibility is key to writing good, concise unit tests. Sometimes, it’s easier to remember what it is not responsible for:

  • Model binding: turning the encoded form data or routing information into .NET objects. A controller action does not need to know, and does not care, whether the id of the object you are updating is part of your action’s path (e.g. /admin/tags/edit/my-tag ) or provided as an encoded form field (e.g. <input type="text" name="tagSlug" /> )
  • Model validation: it may seem surprising, but in most cases your controller should not implement the validation logic. From a controller action’s point of view, is there really a difference between a missing last name and an unknown phone number format? As far as your hypothetical EditPerson action is concerned, there is a validation error. Model validation should be tested separately.
  • Pure business logic: a controller action is meant to handle requests. Even though it does not directly cope with HTTP intricacies, it is still very close to the HTTP request/response life cycle. For example, an action ComputeLoanSchedule that needs to return a loan amortization schedule should probably delegate the actual computation to a business service class (ILoanService.GetAmortizationSchedule()) whose sole purpose is to handle such computation. In the future, if you need to expose the amortization schedule feature as a web API, you will only need to implement another controller and call the same business service.

In the end, your controller is only responsible for:

  • Delegating work to domain services: as stated above, the domain logic of your application should be logically separated from your UI layer. This also gives you the flexibility to scale your web application and your business domain services independently. 
  • Returning an appropriate response: a controller’s action responds to a request by returning a response. The standard MVC Controller class expects your actions to provide a result that drives the way the response is created, e.g. returning a ViewResult will trigger view rendering, and a RedirectToRouteResult may respond with an HTTP 302.

Unit testing a controller action

When it comes to unit testing, the less there is to test, the better. As such, the limited responsibility of controller actions is a boon when writing unit tests. As an example, I will be using a very simple controller, TagsController. It is part of a blog-like website. Its role is to allow for the management of tags to be applied to other components of the application, like articles.

TagsController exposes a CRUD-like set of functionality:

  • Index: provides the user with a list of existing tags
  • Create: enables the creation of new tags
  • Edit: enables editing existing tags

The simple case

We will focus on the index functionality for now. It is implemented as a single action on our TagsController:
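A sketch of the controller and the service it depends on; the Tag and ITagService shapes are assumptions based on what the tests below need:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;

public class Tag
{
    public string Name { get; set; }
    public string Slug { get; set; }
}

public interface ITagService
{
    IEnumerable<Tag> GetAll();
    void Save(Tag tag);
}

public class TagsController : Controller
{
    private readonly ITagService tagService;

    public TagsController(ITagService tagService)
    {
        this.tagService = tagService;
    }

    // GET: /tags – list all tags, sorted by name.
    public ActionResult Index()
    {
        var tags = tagService.GetAll().OrderBy(t => t.Name).ToList();
        return View("Index", tags);
    }
}
```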

The TagsController constructor takes an ITagService, which provides our controller access to the tags in our database. As you can see, the Index() action method performs only two distinct operations: first, it asks for the tags to display; then, it tells the framework to render the “Index” view using the retrieved tags as its model.

With such a simple implementation, the unit test will be quite simple also. I’ll follow the usual AAA (Arrange-Act-Assert) pattern.
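Here is what the test could look like, using NUnit and Moq (the mocking library is implied by the Verify() call discussed below):

```csharp
// usings: System.Collections.Generic, System.Linq, System.Web.Mvc, Moq, NUnit.Framework
[Test]
public void Index_RendersIndexViewWithTagsSortedByName()
{
    // Arrange: a mock ITagService returning tags in no particular order.
    var tags = new List<Tag>
    {
        new Tag { Name = "mvc" },
        new Tag { Name = "asp.net" },
        new Tag { Name = "testing" }
    };
    var mockService = new Mock<ITagService>();
    mockService.Setup(s => s.GetAll()).Returns(tags);
    var controller = new TagsController(mockService.Object);

    // Act
    var result = controller.Index() as ViewResult;

    // Assert
    Assert.IsNotNull(result);
    Assert.AreEqual("Index", result.ViewName);
    mockService.Verify(s => s.GetAll());

    var model = result.ViewData.Model as IEnumerable<Tag>;
    Assert.IsNotNull(model);
    CollectionAssert.AreEqual(
        tags.OrderBy(t => t.Name).Select(t => t.Name).ToList(),
        model.Select(t => t.Name).ToList());
}
```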

The first part sets up an ITagService mock for TagsController to consume. We then simply call the Index() method.

The assertions start with making sure we got a non-null result of the ViewResult type. Index() should ask for the “Index” view to be rendered. I prefer explicitly specifying the view name to render in my controller actions; I believe this reduces the mental gymnastics necessary when debugging action-view interactions.

mockService.Verify() ensures ITagService.GetAll() was called.

I then proceed with checking that the model provided to the view is consistent with the data from the mock service object. One of the requirements is that the tags are sorted by name.

As you can see in the Index unit test, there is a lot more test code than there is code in the method being tested. This is also one of the reasons why you should reduce the scope of your tests as much as you can.

A more complex test case

I’ll cover the “create a new tag” functionality. In this case, the functionality is implemented by a pair of actions. A parameter-less Create() action simply triggers the rendering of an empty tag editor (implemented by a view named “Save”). The other Create() action takes a SaveTagModel parameter and responds to form submissions. As such, it behaves differently when there is a validation error or when the tag already exists in the tag database.
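For reference, here is roughly what that pair of actions looks like; SaveTagModel and the duplicate-slug check are sketched from the behavior described below:

```csharp
// SaveTagModel needs System.ComponentModel.DataAnnotations; the actions live in TagsController.
public class SaveTagModel
{
    [Required]
    public string Name { get; set; }

    [Required]
    public string Slug { get; set; }
}

// GET: /tags/create – render an empty tag editor.
public ActionResult Create()
{
    return View("Save", new SaveTagModel());
}

// POST: /tags/create – handle the form submission.
[HttpPost]
public ActionResult Create(SaveTagModel tag)
{
    if (!ModelState.IsValid)
    {
        return View("Save", tag);
    }

    // Duplicate check against the service (see the discussion below).
    if (tagService.GetAll().Any(t => t.Slug == tag.Slug))
    {
        ModelState.AddModelError("Slug", "A tag with this slug already exists.");
        return View("Save", tag);
    }

    tagService.Save(new Tag { Name = tag.Name, Slug = tag.Slug });
    return RedirectToAction("Index");
}
```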

I’ll show the tests I put together to cover the functionality of TagsController.Create(). The first of those tests ensures that the initial call to Create() triggers the rendering of an empty tag editor. The tag editor is implemented by a view called “Save”, shared between the Create() and Edit() operations. As you can see, I make sure the view is specified by name. I also make sure the model is in a state consistent with an empty SaveTagModel:
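A possible version of that test:

```csharp
[Test]
public void Create_RendersSaveViewWithAnEmptyModel()
{
    var controller = new TagsController(new Mock<ITagService>().Object);

    var result = controller.Create() as ViewResult;

    Assert.IsNotNull(result);
    Assert.AreEqual("Save", result.ViewName);

    var model = result.ViewData.Model as SaveTagModel;
    Assert.IsNotNull(model);
    Assert.IsNull(model.Name);
    Assert.IsNull(model.Slug);
}
```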

The next tests cover the behavior of the Create(SaveTagModel tag) action, that is, the action that responds to form submissions from the tag editor. These tests need to cover the following:

  • What happens when invalid input is provided?
  • What happens when a tag exists with the same ‘slug’?
  • What happens upon success?

It will not come as a surprise that I wrote three tests, one for each of these points. The first test ensures Create() behaves correctly when provided with invalid data. This test demonstrates an important point: the controller is not aware of, and does not care, what the actual model errors are. Its ‘invalid input’ behavior is triggered by any model error. We want to reduce our controller logic as much as possible. There may be cases where a controller’s action behaves differently based on specific errors; if you can avoid it, do so, as it goes against the separation of concerns. A controller is not responsible for validation.
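A sketch of that first test; note that the model error added to ModelState is deliberately arbitrary:

```csharp
[Test]
public void Create_RedisplaysEditor_WhenModelStateIsInvalid()
{
    var mockService = new Mock<ITagService>();
    var controller = new TagsController(mockService.Object);

    // Any model error triggers the 'invalid input' path; which rule failed is irrelevant here.
    controller.ModelState.AddModelError("Name", "some validation error");
    var tag = new SaveTagModel();

    var result = controller.Create(tag) as ViewResult;

    Assert.IsNotNull(result);
    Assert.AreEqual("Save", result.ViewName);
    Assert.AreSame(tag, result.ViewData.Model);
    mockService.Verify(s => s.Save(It.IsAny<Tag>()), Times.Never());
}
```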

Considering what I have just said, there is an issue with the current implementation of Create(): it checks that a tag does not already exist. This is a form of validation and should probably be moved either to model validation (implemented by a custom validation attribute) or to the service (by adding a specific ITagService.Create() method). On the other hand, since the validation relies on a service component, one could argue that it is part of the orchestration the controller is responsible for. I will leave it here because it is such a trivial validation. Anything more complex and I would extract it and test it separately. My rule of thumb is: if it takes more than one unit test to cover a piece of validation in the controller, the validation should be moved to its own class.

Here is the test that covers that part.
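Along these lines:

```csharp
[Test]
public void Create_RedisplaysEditor_WhenSlugAlreadyExists()
{
    var mockService = new Mock<ITagService>();
    mockService.Setup(s => s.GetAll())
        .Returns(new List<Tag> { new Tag { Name = "My tag", Slug = "my-tag" } });
    var controller = new TagsController(mockService.Object);

    var result = controller.Create(new SaveTagModel { Name = "My tag", Slug = "my-tag" }) as ViewResult;

    Assert.IsNotNull(result);
    Assert.AreEqual("Save", result.ViewName);
    Assert.IsFalse(controller.ModelState.IsValid, "a duplicate slug should produce a model error");
    mockService.Verify(s => s.Save(It.IsAny<Tag>()), Times.Never());
}
```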

Last but not least, here is the test that covers the success path. Our action should have called ITagService.Save() with an appropriate argument, and it should redirect us to the index page for our tags. As you can see, the redirection is tested against route values, not against an actual URI; the routing configuration is covered by another set of tests, which will be the subject of another post in the next few days.
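A possible version of the success-path test:

```csharp
[Test]
public void Create_SavesTagAndRedirectsToIndex_OnSuccess()
{
    var mockService = new Mock<ITagService>();
    mockService.Setup(s => s.GetAll()).Returns(new List<Tag>());
    var controller = new TagsController(mockService.Object);

    var result = controller.Create(new SaveTagModel { Name = "My tag", Slug = "my-tag" }) as RedirectToRouteResult;

    Assert.IsNotNull(result);
    Assert.AreEqual("Index", result.RouteValues["action"]);
    mockService.Verify(s => s.Save(It.Is<Tag>(t => t.Name == "My tag" && t.Slug == "my-tag")));
}
```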

Conclusion

As you can see, even though our controller is quite simple (3 methods only, and simple at that), we had to write quite a few lines of unit test code to cover all the code paths. If you want to keep your unit tests to a minimum, make sure you respect the separation of concerns principle. Test each of the components involved separately and reduce the contract between them to a minimum. The less they know about each other, the easier it is to change one component without the change rippling through your whole application.

In the next few posts, I will cover testing model binding and validation as well as routes.

Don’t hesitate to hail me in the comments or on Twitter

Testing XPathNavigator

In my previous post about XPathNavigator, I explained in what circumstances the default implementation of XPathNavigator is troublesome. I went over the design of the class and highlighted how that design helps us re-implement XPathNavigator to address the issue.

Testing XPathNavigator

First things first: before attacking the new implementation proper, we want to make sure our implementation is compatible with the default one. To do so, we will write tests that will be run against both the Microsoft implementation and our own implementation once it exists. Our goal here is twofold. On the one hand, we want to ensure the existing implementation actually works as documented. On the other hand, we want to check our own implementation against the same specification tests.

What should we test?

XPathNavigator is a complex class, so we want to limit our tests to what actually matters for the new implementation. Otherwise, we might end up writing literally hundreds of tests.

It is obviously not necessary to test methods that will not be re-implemented. In the previous post, we identified a subset of methods that we will need to re-implement. All other methods use this basic subset, one way or another, to implement their functionality. The subset is the list of abstract members:
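For reference, here are the abstract members of System.Xml.XPath.XPathNavigator (abridged; Value is also abstract, inherited from XPathItem):

```csharp
// Abstract properties exposing information about the current node
public abstract XPathNodeType NodeType { get; }
public abstract string LocalName { get; }
public abstract string Name { get; }
public abstract string NamespaceURI { get; }
public abstract string Prefix { get; }
public abstract string BaseURI { get; }
public abstract bool IsEmptyElement { get; }
public abstract XmlNameTable NameTable { get; }

// Abstract methods moving the navigator to another node
public abstract bool MoveTo(XPathNavigator other);
public abstract bool MoveToId(string id);
public abstract bool MoveToFirstAttribute();
public abstract bool MoveToNextAttribute();
public abstract bool MoveToFirstNamespace(XPathNamespaceScope namespaceScope);
public abstract bool MoveToNextNamespace(XPathNamespaceScope namespaceScope);
public abstract bool MoveToFirstChild();
public abstract bool MoveToNext();
public abstract bool MoveToPrevious();
public abstract bool MoveToParent();

// Other abstract members
public abstract XPathNavigator Clone();
public abstract bool IsSamePosition(XPathNavigator other);
```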

As you can see, we have two distinct groups:

  • The abstract properties expose information about the current node. Our tests will ensure that we get consistent information for all types of node.
  • The abstract methods are all concerned about moving the navigator to another node. The tests need to check that the move operations result in the navigator pointing to the right node given a known starting position.

How should we test it?

We will test the properties by setting up an XPathNavigator that points to specific nodes of an XML document. Once set up, we simply check that the properties expose consistent values. We will test the Move() operations in a very similar way: we will set up the XPathNavigator instance on a specific node, execute the Move() operation we want to test, and then check that the XPathNavigator yields property values consistent with the navigator’s new position.

The two kinds of tests are actually very similar; the only difference is the Move() operation. The similarity lets us factor out most of the test code into a few utility functions.
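A sketch of those utilities. MoveTestArgs carries the starting position and the expected outcome; CanMoveImpl() does the common work. CreateNavigable() is the abstract factory method of the fixture, shown further down. The field names are illustrative:

```csharp
// usings: System, System.Collections.Generic, System.Xml.XPath, NUnit.Framework

// Describes one test case: where to start, whether the move should succeed,
// and what the navigator should report once it has moved.
public class MoveTestArgs
{
    public string Description;
    public string StartingXPath;      // null/empty means: stay on the document root
    public bool ExpectSuccess;
    public XPathNodeType ExpectedNodeType;
    public string ExpectedLocalName;
    public string ExpectedValue;

    public override string ToString() { return Description; }
}

// Shared test logic: position the navigator, run the move operation, check the outcome.
protected void CanMoveImpl(MoveTestArgs args, Func<XPathNavigator, bool> moveOperation)
{
    var navigator = CreateNavigable().CreateNavigator();
    if (!string.IsNullOrEmpty(args.StartingXPath))
    {
        navigator = navigator.SelectSingleNode(args.StartingXPath);
        Assert.IsNotNull(navigator, "test setup: starting node not found");
    }

    var moved = moveOperation(navigator);

    Assert.AreEqual(args.ExpectSuccess, moved, "unexpected move result");
    if (args.ExpectSuccess)
    {
        ExpectNodeProperties(navigator, args);
    }
}
```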

CanMoveImpl() acts as a parametrized test. It takes 2 arguments:

  • args: a MoveTestArgs instance. This argument describes the test’s original state and the resulting state we should test against.
  • moveOperation: a delegate to the Move() operation to test. Passing the operation to test as a parameter lets us also write non-Move() tests by simply passing a no-op callback.

NUnit: I am using NUnit to write the unit tests. It is only a matter of preference; you can adapt the tests to work with another testing framework such as the Microsoft Unit Testing Framework. I find NUnit simple to use, unobtrusive and very flexible. 

CanMoveImpl() is called by actual test methods like the following:
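For example, for MoveToNext():

```csharp
[Test]
[TestCaseSource("CanMoveToNext_Source")]
public void CanMoveToNext(MoveTestArgs args)
{
    CanMoveImpl(args, navigator => navigator.MoveToNext());
}
```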

It is a parametrized test. The TestCaseSource attribute tells NUnit which method to call to get the MoveTestArgs instance for each test.

Method CanMoveToNext_Source() returns the test cases for a given operation. In the above example, we have the test cases for “when positioned on the document root, MoveToNext() should fail”, “when positioned on an element with no next sibling, MoveToNext() should fail” and “when positioned on an element with a next sibling, MoveToNext() should succeed and point to the expected node”.

Each test case is defined by specifying values for the fields of the MoveTestArgs class:
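Something like the following, assuming the test document used by the fixtures is <root><first>first value</first><last>last value</last></root> (see the fixture further down):

```csharp
private static IEnumerable<MoveTestArgs> CanMoveToNext_Source()
{
    yield return new MoveTestArgs
    {
        Description = "when positioned on the document root, MoveToNext() should fail",
        StartingXPath = null,
        ExpectSuccess = false
    };
    yield return new MoveTestArgs
    {
        Description = "when positioned on an element with no next sibling, MoveToNext() should fail",
        StartingXPath = "/root/last",
        ExpectSuccess = false
    };
    yield return new MoveTestArgs
    {
        Description = "when positioned on an element with a next sibling, MoveToNext() should succeed",
        StartingXPath = "/root/first",
        ExpectSuccess = true,
        ExpectedNodeType = XPathNodeType.Element,
        ExpectedLocalName = "last",
        ExpectedValue = "last value"
    };
}
```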

Method ExpectNodeProperties() implements the assertions depending on the configuration of its MoveTestArgs instance:
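A sketch, skipping expectations the spec leaves unset:

```csharp
protected void ExpectNodeProperties(XPathNavigator navigator, MoveTestArgs args)
{
    Assert.AreEqual(args.ExpectedNodeType, navigator.NodeType);
    if (args.ExpectedLocalName != null)
    {
        Assert.AreEqual(args.ExpectedLocalName, navigator.LocalName);
    }
    if (args.ExpectedValue != null)
    {
        Assert.AreEqual(args.ExpectedValue, navigator.Value);
    }
}
```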

Executing our tests

We want our tests to be executed against the Microsoft implementation as well as our own implementation. The most straightforward way of achieving this is to implement our tests in an abstract test fixture. The abstract fixture has a factory method to create the instance of XPathNavigator to test against. For each implementation, we create a subclass of the fixture and override the factory method:
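In outline (class names are arbitrary; everything shown earlier — MoveTestArgs, CanMoveImpl(), the test methods and their sources — lives in the abstract fixture):

```csharp
// usings: System.IO, System.Xml.XPath, NUnit.Framework
public abstract class XPathNavigatorTestsBase
{
    // Factory method: returns the document whose CreateNavigator() yields the navigator under test.
    protected abstract IXPathNavigable CreateNavigable();

    // ... MoveTestArgs, CanMoveImpl(), ExpectNodeProperties(), test methods and sources shown above ...
}

// Runs the whole set of tests against the Microsoft implementation (XPathDocument).
[TestFixture]
public class DefaultXPathNavigatorTests : XPathNavigatorTestsBase
{
    protected override IXPathNavigable CreateNavigable()
    {
        const string xml = "<root><first>first value</first><last>last value</last></root>";
        return new XPathDocument(new StringReader(xml));
    }
}
```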

CreateNavigable() returns an IXPathNavigable. In turn, IXPathNavigable lets us create a navigator positioned on the document root thanks to its CreateNavigator() method.

We’ll add the test fixture for our own implementation when we have its skeleton available. In the meantime, this lets us verify our expectations against the actual implementation of XPathNavigator.

The next post on the topic will tackle the new implementation’s design. I’ll make the implementation and test available as a source code download at the end of this series of articles.