I do use unit tests - but not for everything, and always in addition to functional/user testing.
I find unit tests are very good for data access layer code, mappers, core classes, software factories, web services and that sort of thing.
When you get higher up the stack the tests start to mean less. I tend to write more unit tests in the lower layers, then shift toward functional/user testing in the layers where things are being combined together and in the presentation layer.
I have been thinking, though, that the usefulness of unit testing could be enhanced by introducing probability-based unit testing. I have worked on a number of risk models in my capacity as a geotech engineer, and while I don’t want to get into a detailed description of what that was all about, it was fascinating work.
Basically it involved constructing a model from a set of equations for mine subsidence. In any geotechnical modeling there are a large number of uncertainties, so deterministic models often aren’t as realistic as they could be.
So: what you do is replace values in the model that are somewhat ‘fuzzy’ with probability distributions, then run the model in a Monte Carlo approach.
The reason you do this is that there are often sensitivities in equation systems that are not apparent: interdependencies based on values that cannot easily be seen. For example, a slight change in the output of one equation may have a drastic effect on the overall system’s result, while a change in another equation may not have much effect.
Using probabilities and Monte Carlo allows thousands of realizations of a system to be made - which helps expose where the sensitivities in the model are.
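To make this concrete, here is a minimal sketch of the approach in Python. The `subsidence` function is a deliberately toy stand-in, not a real geotechnical model, and the distribution parameters are invented for illustration - the point is just how ‘fuzzy’ inputs become distributions and the model is run thousands of times:

```python
import random
import statistics

def subsidence(cohesion, depth, width):
    # Toy stand-in for a subsidence equation; not a real geotech model.
    return width ** 2 / (cohesion * depth)

def monte_carlo(runs=10_000, seed=7):
    rng = random.Random(seed)
    results = []
    for _ in range(runs):
        # Replace each 'fuzzy' input value with a probability distribution.
        cohesion = rng.gauss(50.0, 5.0)     # illustrative mean/sd
        depth = rng.uniform(80.0, 120.0)
        width = rng.gauss(30.0, 2.0)
        results.append(subsidence(cohesion, depth, width))
    # The spread of the realizations hints at how sensitive the system is.
    return statistics.mean(results), statistics.stdev(results)

mean, sd = monte_carlo()
```

Comparing the spread when you widen one input distribution at a time is what exposes which input the system is most sensitive to.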
The same approach could be applied to unit tests - rather than using static tests, the inputs to the tests could be replaced with probability distributions. The inputs would therefore have a random nature to them. Depending on the type of testing desired, the distribution could be set up to produce values that land outside the allowed bounds of the class - thus randomly producing invalid input into the class.
This would in effect be a closer model of what a ‘real’ person would do in testing - putting a range of values into classes and inspecting the results.
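A sketch of what such a test might look like, assuming a hypothetical `Thermostat` class with a documented valid range. The distribution deliberately straddles the bounds, so some draws are invalid input, and the test checks that the class behaves correctly on both sides:

```python
import random

class Thermostat:
    """Hypothetical class under test: accepts setpoints between 5 and 35 C."""
    def set_target(self, celsius):
        if not 5.0 <= celsius <= 35.0:
            raise ValueError("setpoint out of range")
        self.target = celsius
        return self.target

def probabilistic_test(runs=1_000, seed=42):
    rng = random.Random(seed)
    failures = []
    for _ in range(runs):
        # Wide distribution centred inside the valid range but with enough
        # spread that some draws fall outside it - i.e. invalid input.
        value = rng.gauss(20.0, 15.0)
        t = Thermostat()
        try:
            result = t.set_target(value)
            # A value the class accepted must be in range and round-trip.
            if result != value or not 5.0 <= value <= 35.0:
                failures.append(value)
        except ValueError:
            # A value the class rejected must actually be out of range.
            if 5.0 <= value <= 35.0:
                failures.append(value)
    return failures
```

Seeding the generator keeps the test reproducible while still sweeping a broad band of inputs; logging the failing values gives you the exact inputs that exposed a problem.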
This would likely be even more valuable if tests were applied against classes farther up in the layers - where you might feed probabilistic inputs into several classes that are used together, and see what the results are.
Monte Carlo runs on this would then help flush out sensitivities in class collections that are otherwise difficult to expose, and normally only found when the application is fired up and run by a user.
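One way this might look against a pair of collaborating classes - both hypothetical here, with a small nonlinearity planted in the second one so the sweep has something to find. Feeding distribution-drawn inputs through the whole pipeline lets the spread of the outputs flag a sensitive path that a handful of static test values would miss:

```python
import random
import statistics

# Hypothetical collaborating classes; names and formulas are illustrative.
class Normalizer:
    def apply(self, x):
        return (x - 50.0) / 10.0

class RiskScorer:
    def score(self, z):
        # Cubic term: sensitivity grows sharply for large |z|, which is
        # exactly the kind of hidden interdependency a sweep can expose.
        return z ** 3

def sensitivity_sweep(runs=5_000, seed=1):
    rng = random.Random(seed)
    norm, scorer = Normalizer(), RiskScorer()
    # Drive the composed pipeline with distribution-drawn inputs.
    outputs = [scorer.score(norm.apply(rng.gauss(50.0, 10.0)))
               for _ in range(runs)]
    # A large output spread relative to the input spread flags sensitivity.
    return statistics.stdev(outputs)
```

Running the same sweep with each input distribution widened in turn would show which class in the collection dominates the overall variability.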
How to implement this so that it is not bizarre and complex? I am not sure yet.
But that is only an issue of time and will… Would certainly make a great OOPSLA presentation topic!