I've used the FIT framework on a few recent projects with both positive and negative results. Here are some rough notes to my future self about what worked, what didn't and what to beware of. Here "specification" means the FIT HTML document and "test" means the act of interpreting the HTML with fixture classes in a test-run.
FIT's reflectopornographic architecture introduces a great deal of overhead when writing and maintaining tests and diagnosing test failures. That overhead might be worth it if FIT helps you communicate with users and customers. However, if the FIT specifications are not read by anyone outside the development team (and I include business analysts in the development team) then the overhead is just not worth it. Instead choose a tool that works smoothly with your IDE, such as JUnit, and cooperate closely with non-technical team members when writing end-to-end JUnit tests. JUnit extensions like LiFT might help with this communication. I've found it pretty easy to refactor JUnit tests into a DSL-style like that of LiFT for apps not supported by LiFT itself.
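To show what that DSL-style refactoring can look like, here is a minimal self-contained sketch. None of the class or method names below come from LiFT or from any real system; they only illustrate the style of wrapping a plain end-to-end test behind a small fluent layer so it reads like the specification.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of refactoring a JUnit-style test into a LiFT-like DSL.
class FakeOrderSystem {                       // stands in for the real application
    private final List<String> orders = new ArrayList<>();
    void placeOrder(String product) { orders.add(product); }
    boolean hasOrderFor(String product) { return orders.contains(product); }
}

class OrderingDsl {
    private final FakeOrderSystem system = new FakeOrderSystem();

    OrderingDsl theUserOrders(String product) {
        system.placeOrder(product);
        return this;                          // fluent chaining reads like prose
    }

    OrderingDsl anOrderExistsFor(String product) {
        if (!system.hasOrderFor(product))
            throw new AssertionError("no order recorded for " + product);
        return this;
    }
}

public class DslExample {
    public static void main(String[] args) {
        new OrderingDsl()
            .theUserOrders("widgets")
            .anOrderExistsFor("widgets");     // fails loudly if the check fails
    }
}
```

The point of the fluent layer is that a non-technical reader can follow the test body even though it lives in the IDE rather than in an HTML document.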
Don't let any one role in the team have sole responsibility for the specifications. They exist to help customers, BAs, testers and developers collaborate. If a developer finds the spec hard to read when they come to write fixtures or implement the specified functionality, the team should work together to make the spec more comprehensible. If BAs or testers think that more scenarios should be tested, the team should work together to make the spec more comprehensive. If the developers need to change the spec to use their library of common fixtures, the team should work together to refactor the specifications. If the team can't collaborate over the specifications, the project has problems that can't be solved by a testing framework.
Write the specifications to focus on one aspect of system behaviour. Write multiple specifications that specify different slices of functionality. Separate happy and sad paths into different specifications. Don't include information in the specification that is also included in other specifications. Don't write specifications that contain a single enormous table that covers all possible combinations: it is very difficult to understand what is specified, which makes it much harder to diagnose test failures.
Intersperse explanatory text among the tables to explain what each table means. Don't write a paragraph or two of overview at the top of the document and then follow it with table after table of test data.
When writing a specification or its fixtures, watch the test fail for different reasons and make sure the fixture generates good diagnostics.
The default FIT fixtures have confusing names. Despite using FIT for quite some time I never could remember the difference between a RowFixture and a ColumnFixture. All tables have rows and columns! This is quite ironic when one considers that FIT was designed to aid communication.
In practice I found it easiest to extend DoFixture to write document-level fixtures and then use custom fixtures for each table in the document. When writing custom fixtures it was easier to extend the Fixture base class than to extend one of the generic, reusable fixture classes, or to wrap a generic fixture class around a "system under test" object.
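The shape I mean looks roughly like the sketch below. To keep it self-contained it does not use fit.jar at all: the interface and class names are my own illustration of the structure, not FIT's or FitLibrary's actual API. A document-level fixture dispatches each table to a small custom fixture written for that kind of table.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustration only: one custom fixture per kind of table in the document.
interface TableFixture {
    // rows holds the body of one HTML table, each row a list of cell strings
    void interpret(List<List<String>> rows);
}

class EnterOrderFixture implements TableFixture {
    final List<String> entered = new ArrayList<>();
    public void interpret(List<List<String>> rows) {
        for (List<String> row : rows)
            entered.add(String.join(",", row));   // pretend to enter each order
    }
}

class DocumentFixture {
    // the first cell of each table names the custom fixture that handles it
    private final Map<String, TableFixture> fixtures;
    DocumentFixture(Map<String, TableFixture> fixtures) { this.fixtures = fixtures; }

    void interpret(List<List<List<String>>> tables) {
        for (List<List<String>> table : tables) {
            TableFixture f = fixtures.get(table.get(0).get(0));
            if (f != null)
                f.interpret(table.subList(1, table.size()));
        }
    }
}
```

In real FIT the dispatch and cell parsing are done by the framework via reflection; the sketch only shows why a thin document-level fixture plus small per-table fixtures stays manageable.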
Each fixture type introduces maintenance overhead because of the difficulty of refactoring, so try to share fixture types between specifications.
If specifications concentrate on the differences between examples then it's quite easy to write a fixture that initialises properties to defaults, one or two of which are overridden from the HTML. That fixture can then be used to test many different specifications.
For example, if I wrote a fixture to enter an order into an order-processing system I'd default most of the order details in the fixture class but let the HTML override any of those defaults. I could then use the fixture to test specifications for happy path order processing (overriding different names or products), when the customer has not given their postcode, when the order contains multiple lines for the same product, or for products with different delivery schedules, etc. etc.
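A minimal sketch of that defaults-plus-overrides idea follows. All names are hypothetical; a real version would extend a FIT fixture class and be fed the parsed HTML cells by the framework.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: the fixture defaults every order field so each HTML
// specification only has to state the one or two cells it actually cares about.
class OrderEntryFixture {
    final Map<String, String> order = new HashMap<>();

    OrderEntryFixture() {
        order.put("customer", "Alice Smith");   // defaults chosen by the fixture
        order.put("product", "widget");
        order.put("quantity", "1");
        order.put("postcode", "AB1 2CD");
    }

    // called with each (column header, cell text) pair present in the HTML table
    void override(String column, String cellText) {
        order.put(column, cellText);
    }
}
```

The "customer has not given their postcode" specification then needs only one cell, equivalent to `fixture.override("postcode", "")`, and everything else stays at its default.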
Because FIT uses a lot of reflection and does not include the reflected method names in the documents themselves, refactoring FIT code is very time-consuming. However, if you don't refactor you end up with shed loads of fixture code that becomes an increasing maintenance burden. In situations like this, little and often is the best approach. The refactoring will be a continual drag on velocity, but it's better than putting the work off until the refactoring is too much work to even contemplate.
To build fixtures that can be used to test multiple specifications you will need to find and emphasise the commonality between the specifications themselves. You will discover this commonality as you develop the system and its FIT tests, so specifications will not start out using common table structures. Therefore you will have to refactor the specifications themselves to redesign their tables to work with the reusable fixtures you write.
This is a good thing: it will make the system description more consistent and easier to read.
FIT is used to communicate the capabilities of the system. When a customer sees a green FIT test, they understand that as meaning the system performs as specified, not that some objects, when poked in the right order, perform as specified. If you use FIT to drive business objects only, the actual end-to-end behaviour of the system can get forgotten. On one project, requirements had been signed off as complete because of green FIT tests, but I found that the main function of the program contained nothing but a "TODO: finish this" comment!
The only reason to drive business objects directly from FIT tests is as an optimisation, if there are too many combinations to reasonably test in slow-running end-to-end tests. But make sure that there is at least one end-to-end test that verifies that the specified behaviour hangs together system-wide.
Update: In the comments, Rex Madden wrote:
I agree with everything but the last point. We used to test everything end-to-end with FIT, but found it to be less useful than using it to test groups of business objects. We now use them to test some part of the domain in isolation. We tend to call them "business rule" tests. We still always have some sort of test (usually in Selenium) for testing the system end to end. With FIT, we can focus on the domain driven design stuff without worrying about the database or gui.
What advantage does FIT give over JUnit for testing objects? Do you show the FIT reports to customers or users?
Personally I prefer JUnit for driving domain-driven design because I get rapid feedback on whether the object model reflects the domain while FIT hides the object model altogether.
Rex replied to me by email:
With FIT, the customer can write the tests. It helps them be more exhaustive in the examples, as they can play with scenarios. And they can do it well before we even think about the object model. It may be a few days before we get that section of the code, but if we have the FIT specifications beforehand, we can better make decisions on how to implement. The customer can also refer to them later.
Also, domain driven design is a lot more than just the object model. It's about getting the language of the customer into the code base somehow. Having the customer write the FIT tests helps drive the ubiquitous language by introducing key words into the comments and tables.
That certainly coincides with my experience of FIT. In my experience, customers and end-users find it incredibly easy to write examples. The FIT way of writing specifications through concrete examples is very intuitive. I once showed some preliminary FIT tests to a customer and they grabbed a pencil and paper and started sketching out tables of what they wanted to see the software do, without any explanation of how FIT worked. That really sold me on the value of FIT.
However, if customers don't write the specifications I don't think the high maintenance overhead of FIT is worthwhile.
I also wholeheartedly agree with the idea that "domain driven design is a lot more than just the object model". Developing a ubiquitous language (or System of Names) is very important. On my current project we're trying to encourage the development of a ubiquitous language by reflectively mapping object field names and values into the GUI, thereby forcing the code to use the same language as the user interface and the users. More on that another time...