Requirements are handled in a variety of ways by different projects. They might be written down formally, or stated informally, or not really written down at all. They might be determined before work begins, or be discovered during development, or discovered through intentional product iterations. They might be primarily based upon competitive needs, user needs, or be something so new there is no “need” context to fit it into.
And every combination thereof and more.
By all accounts, changing requirements have a huge impact on the development of the product — primarily, though not exclusively, on its schedule. As requirements change, developers have to reevaluate their fundamental assumptions, which might invalidate a feature design and sometimes an entire architecture. Not every requirement change will do that, of course — some are quite benign and can even simplify the project.
But regardless of the development paradigm or the way requirements are managed, it is inevitably true that from the project start to the project end the requirements will change. Sometimes they’ll simply wobble about and induce fear and uncertainty, but other times they’ll be charging and shifting direction like a lunatic halfback. (Ha! I pulled off a sports metaphor!)
Why, you ask? Why, why, why? Can’t this unruly beast be brought to sanity?
Here are just a few of the reasons:
- Requirements are almost always poorly understood at the start of the project. Through the course of the project we are building knowledge that we didn’t have at the start. This is not really something that can be avoided via big upfront requirements-gathering either; it is a necessary part of any product development process. The problem with software is that we jump straight into an implementation, which we then can’t throw away the way industrial designers can throw away prototypes.
- The thoroughness and correctness of requirements is heavily dependent on having domain or subject-matter experts of one kind or another. If you don’t have them, or they aren’t the right ones for the project, the requirements-understanding process suffers.
- Especially over product lifetimes, or for projects with very long development cycles, these people may come and go from the team, yielding an ever-shifting view of the real requirements.
- Implementing a given requirement can often turn out to be much more difficult than was originally imagined. When this happens there needs to be some sort of negotiation process where the now-understood cost of the capability is balanced against its value, or even the need for it.
- Organizational dysfunction can severely affect requirements understanding too. If the developers and/or subject-matter experts are not empowered to define the requirements, well, you get what you might expect.
In my opinion and experience, requirements churn is inevitable. Avoidance is not possible, and adhering blindly to your first guess at requirements is a recipe for product failure and teamicide.
Wise processes and managers will plan for requirements churn, but the truly enlightened will welcome it because it means that your product has just gotten better. Which is all well and good until you start talking about schedule; that’s where the rubber meets the road, slips, and skitters out of control into the ditch.
If you change a requirement (or discover a new, extremely important requirement) and need more time to complete it, you have three choices:
- Accept the work, and either:
  - Slip the schedule, or
  - Add more resources
- Reject the work, sticking with the old requirement and letting the product suffer
- Temporarily reject the work by postponing the release of the feature
Depending on the situation, any one of these might not be politically feasible. But adding more resources is often not technically feasible — some tasks simply can’t be effectively broken down further, or it may take too long to bring new resources up to speed.
So this is the requirements conundrum: Requirements will change, and you must deal with the changes, but you can’t really know how to deal with them until after the change has occurred.