I’ve been working lately with the many and varied build systems provided by Microsoft. Ok, so “many and varied” may be an overstatement, but any number more than one is far, far too many…
I’ve been trying to do two seemingly simple things. First, I want to automatically publish a web service to my local testing environment immediately after building it. And second, I want to extend the solution-level Clean target to delete additional cruft in the system (i.e. the published web services).
In both cases, my google questing led me to msbuild solutions. Which is fine, except for the fact that Visual Studio doesn’t use msbuild. Put another way, the methods and tools used to build a solution from the command-line (i.e. in automation) are not the same as those used to build during development.
Solutions to my problems that exist in the world of msbuild do not exist in Visual Studio, and vice versa.
Now I’m sure there are very understandable business, personal, and historic forces that led to this bifurcation of tools. But I feel reasonably safe in asserting that I speak for most developers when I say “I don’t care.” There is just no justification that supports building solutions differently interactively than in automation. I understand that automation may wrap additional activities or steps around the core act of building, but that core act must be the same!
Which brings me around to the Law of Parsimony, or the law that says “don’t have two or more ways to do the exact same thing”. This is just good sense, but in the world of software development it is even more important. In software, tiny differences in context, small variations in functionality, all lead to inevitable differences in behavior. Or, as we in the show like to call them, “bugs”.
In software, different == wrong.
To close, I’d like to mention something I’ve talked about before in other contexts, just to thoroughly maul the horse-corpse: unit tests are stronger when they are run in the context of the fully operational system, with an absolute minimum of shims and mocks. Violating the law of parsimony with unnecessary variations will result in serious and oftentimes extremely subtle errors later on…
You have been warned. Go forth and be parsimonious.
I ran into another case of this today. I have an ASP.NET Web API server for a web service. Normally this will run within IIS, but for unit testing — actually, just to get code coverage information — I need to self-host it. Which is fine; Web API has an HttpSelfHostServer to complement the HttpApplication class that is used with IIS.
However, the two classes are in no way related! Which means that I can’t simply self-host my service. I actually have to have a different top-level implementation.
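To make the difference concrete, here is a minimal self-hosting sketch. This assumes the Web API self-host assemblies are referenced (the Microsoft.AspNet.WebApi.SelfHost package); the base URL and route template are illustrative, and your controllers would come from the same project that IIS normally hosts:

```csharp
using System;
using System.Web.Http;
using System.Web.Http.SelfHost;

class Program
{
    static void Main()
    {
        // Illustrative base address; under IIS the site binding provides this instead.
        var config = new HttpSelfHostConfiguration("http://localhost:8080");

        // The same route registration that would live in Application_Start under IIS.
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });

        using (var server = new HttpSelfHostServer(config))
        {
            server.OpenAsync().Wait();
            Console.WriteLine("Listening; press Enter to stop.");
            Console.ReadLine();
        }
    }
}
```

Note that none of this top-level plumbing is shared with the IIS-hosted path, where the entry point is an HttpApplication subclass in Global.asax — which is exactly the duplication I’m complaining about.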
Again, I understand that there are reasons that this is the way it is. I just don’t think it is acceptable…
Yet another case: when you run an MSTest unit test from within the IDE, it is executed via a different mechanism than when you run it during a system build. As near as I can make out, when you run it as part of a build it is run with the “mstest” executable, but the IDE uses some other mechanism. Unless you specify a test-settings file; in that case the IDE runs it via “mstest” as well.
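For reference, the build-time invocation looks roughly like this (the container and settings file names here are illustrative, not from my actual build):

```shell
mstest /testcontainer:MyService.Tests.dll /testsettings:local.testsettings
```

Drop the /testsettings switch and, inside the IDE at least, you get the other execution path — with all the behavioral differences described below.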
Why is this a big deal? Well, I came across it while researching why global exception handling behaved differently in the two environments. There are subtle (way too subtle) differences between the two execution environments in how they isolate the test processes, what permissions they grant, and so on.
Evil, MSTest team: Evil!