BDD ideas for structuring tests

Lately I've been thinking a lot about the best way to do BDD in C#, so when I saw Phil Haack's post about structuring unit tests, I had a joyful thought: I could use my experimental Behavioral NUnit project to hash out Haack's structuring idea with better BDD integration.

In short, his idea is to use nested classes. There is still the normal one-to-one mapping of class under test to test class, but each method under test gets its own inner class. To use his example:

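Something along these lines (this is my paraphrase from memory, so the names, assertions, and the stand-in subject under test are mine, not his actual code):

```csharp
using Xunit;

// Stand-in subject under test so the sketch compiles; Haack's real example differs.
public class Titleizer
{
    public string Titleify(string name) =>
        string.IsNullOrEmpty(name) ? "Unknown" : name + " the Great";

    public string Knightify(string name, bool male) =>
        (male ? "Sir " : "Dame ") + name;
}

// One nested class per method under test, one Fact per behavior.
public class TitleizerTests
{
    public class TheTitleifyMethod
    {
        [Fact]
        public void ReturnsDefaultTitleForNullName()
        {
            var titleizer = new Titleizer();

            var result = titleizer.Titleify(null);

            Assert.Equal("Unknown", result);
        }
    }

    public class TheKnightifyMethod
    {
        [Fact]
        public void PrependsSirToMaleNames()
        {
            var titleizer = new Titleizer();

            var result = titleizer.Knightify("Phil", male: true);

            Assert.Equal("Sir Phil", result);
        }
    }
}
```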

In this example, the Titleify and Knightify methods (IMO, two terrible uses of the -ify suffix) each get a test class dedicated to testing only that one method. Each method in those classes (or Fact, in the case of xUnit; I actually haven't used xUnit, but it seems to encourage a somewhat BDD-like readability) tests one aspect of the method under test, much like the it method is used in RSpec.

I generally like Haack's test structure. For example, he points out how it plays nicely with Visual Studio's class/method navigation, which makes the tests even more navigable. The only issue I have with it is that I dislike 1000+ SLOC classes, tests or otherwise. If I were to adopt this structure, I would probably break each of those inner classes into a separate file (and use partial classes to break up the top-level class).

For a long time, my practice was to have one whole namespace per class under test. Consider my tests for objectflow. I actually picked up this practice from Garfield Moore, objectflow's original developer. Each class (or significant concept) has a namespace (e.g. objectflow.stateful.tests.unit.PossibleTransitions or PossibleTransitionTests). Each class in that namespace is named according to, essentially, what its Setup does. Some examples: WhenGivenOnlyBranches, WhenGivenOnlyYields, etc.

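Roughly, the layout looks something like this (a simplified sketch with a stand-in subject, not the actual objectflow tests):

```csharp
using System.Collections.Generic;
using NUnit.Framework;

namespace objectflow.stateful.tests.unit.PossibleTransitionTests
{
    // The class name describes what the Setup arranges; each test then
    // asserts one thing about that scenario.
    [TestFixture]
    public class WhenGivenOnlyBranches
    {
        private List<string> _transitions;

        [SetUp]
        public void Given()
        {
            _transitions = new List<string> { "branch-a", "branch-b" };
        }

        [Test]
        public void ShouldFindOneTransitionPerBranch()
        {
            Assert.That(_transitions.Count, Is.EqualTo(2));
        }
    }
}
```
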
I like the way these tests read. It's very easy to find a particular test or to read up on how a particular method is supposed to operate. But in practice this has led to very deep hierarchies, often with single-class namespaces. Further, I find that creating a whole new class for each setup tends to create too much extra code. As a result, I have a hard time sticking closely to this practice.

More recently I've felt a little overwhelmed by my original practice, so I've evolved it slightly. Now I've started doing the common one-to-one class-to-test-class mapping, but each test has its own method that does its setup. For instance:

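Something like this, where Stack&lt;int&gt; is just a convenient stand-in for whatever class is actually under test:

```csharp
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class StackTests
{
    [Test]
    public void Should_be_empty_after_the_only_item_is_popped()
    {
        var stack = Given_a_stack_with_one_item();

        stack.Pop();

        Assert.That(stack.Count, Is.EqualTo(0));
    }

    [Test]
    public void Should_pop_the_most_recently_pushed_item_first()
    {
        var stack = Given_a_stack_with_two_items();

        Assert.That(stack.Pop(), Is.EqualTo(2));
    }

    // Each test calls its own named setup method instead of sharing one
    // [SetUp], so the "given" is explicit in the test body.
    private static Stack<int> Given_a_stack_with_one_item()
    {
        var stack = new Stack<int>();
        stack.Push(1);
        return stack;
    }

    private static Stack<int> Given_a_stack_with_two_items()
    {
        var stack = Given_a_stack_with_one_item();
        stack.Push(2);
        return stack;
    }
}
```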

I also sometimes use a small variation of that structure, where I keep the BDD sentence-style naming scheme but use TestCase attributes to quickly cover edge cases:

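For example (Math.Clamp stands in for the method under test):

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class ClampTests
{
    // The sentence-style name stays, and the [TestCase] rows sweep the
    // edge cases around the range boundaries.
    [TestCase(5, 5)]    // already inside the range
    [TestCase(0, 0)]    // exactly on the lower bound
    [TestCase(-1, 0)]   // below the lower bound
    [TestCase(11, 10)]  // above the upper bound
    public void Should_clamp_the_value_into_the_range_0_to_10(int value, int expected)
    {
        Assert.That(Math.Clamp(value, 0, 10), Is.EqualTo(expected));
    }
}
```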

I often use some hybrid of the last two approaches. In particular, if a TestCase attribute would break the BDD readability, I'll break the setup code out into one of those Given_* support methods and reuse it between two different test methods, as in the sketch below.

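A rough sketch of that hybrid (again with a BCL type standing in for the real subject):

```csharp
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class QueueTests
{
    // Two behaviors that deserve their own sentence-style names share one
    // Given_* helper instead of being squashed into a single [TestCase] list.
    [Test]
    public void Should_dequeue_items_in_the_order_they_were_enqueued()
    {
        var queue = Given_a_queue_with_two_items();

        Assert.That(queue.Dequeue(), Is.EqualTo("first"));
    }

    [Test]
    public void Should_report_one_remaining_item_after_a_dequeue()
    {
        var queue = Given_a_queue_with_two_items();

        queue.Dequeue();

        Assert.That(queue.Count, Is.EqualTo(1));
    }

    private static Queue<string> Given_a_queue_with_two_items()
    {
        var queue = new Queue<string>();
        queue.Enqueue("first");
        queue.Enqueue("second");
        return queue;
    }
}
```
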
I generally like my most recent ways of structuring tests because of their readability and the ability to get excellent edge-case coverage by adding more test cases. But I do really like Haack's structure, so I may find myself adopting part of his suggestion and further evolving my tests.

As far as this applies to Behavioral NUnit, I want to explore the possibility of a Describe attribute that mimics the usage of RSpec's describe method. One idea is to make the new attribute generate another hierarchical level of test cases.
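
Purely hypothetical usage; none of this exists in Behavioral NUnit today, and the placeholder attribute below is only there so the sketch compiles:

```csharp
using System;
using System.Collections.Generic;
using NUnit.Framework;

// Placeholder for the imagined attribute; Behavioral NUnit would have to
// teach the runner to turn it into another hierarchical level of test cases.
[AttributeUsage(AttributeTargets.Class)]
public class DescribeAttribute : Attribute
{
    public DescribeAttribute(string subject) { Subject = subject; }
    public string Subject { get; }
}

public class StackSpecs
{
    // Mirrors an RSpec-style describe "Stack#pop" grouping.
    [Describe("Stack.Pop")]
    public class ThePopMethod
    {
        [Test]
        public void Returns_the_most_recently_pushed_item()
        {
            var stack = new Stack<int>();
            stack.Push(1);
            stack.Push(2);

            Assert.That(stack.Pop(), Is.EqualTo(2));
        }
    }
}
```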