IAM Myth: Identity Management Big Bang


Gather the requirements, analyse them, create an architecture, design the details, implement, test and deploy. Do it all in one big project. This is how a proper project is done. This is how an identity management solution should be deployed. This is the only professional way to do it.

In fact, it is a recipe for disaster. A very expensive disaster at that.

History Repeating

This analysis-design-implementation approach is in fact a well-known software development technique called the waterfall model. It was first documented in the 1970s and it is still being used. It is elegant, straightforward and easy to understand. The problem is that this method is fundamentally flawed. It just does not work. Any minor problem in the early project phases is amplified in the later phases, and the methodology has no correction mechanism to deal with it. This technique was the reason for a huge number of software project failures in the 20th century. Software engineers know this very well, and no reasonable software engineer would voluntarily use the waterfall technique any more. It was replaced by iterative and incremental development methods a long time ago.

The waterfall technique is flawed, and software engineers have abandoned it. Yet other segments of the industry are still using it, and it is even recommended as a best practice. This is simply insane.

IDM Waterfall

What exactly is wrong with this method in identity management? Let’s have a look at it step by step:

Requirements: Have you ever met anyone who knew exactly what they needed? Not what they wanted, but what they really needed. Identity management is one of the fields where requirements are especially vague. Existing processes are usually based on human judgement ("if it looks good I’ll approve it", "I choose an appropriate identifier"), which is extremely difficult to formalize. Customers also tend to want features which they do not need (100% automation of process "X") and neglect the aspects that they really need (reliability, consistency, …).

Analysis: Before an IDM system is deployed, the existing data and processes are usually handled manually. People are very bad at objective judgement, especially when it comes to data quality. People easily overlook errors in a data set and correct them without thinking when they process the data manually. But machines cannot do this. Any error in a data set can cause a major delay in the project. And it is almost impossible to see all the errors unless the data from different systems are analysed and correlated by a machine. There are also usually huge gaps in the input data for the analysis: system administrators often do not know the exact meaning of each account attribute, they are too busy (or unwilling) to respond at all, and so on.
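To illustrate what such machine-driven correlation might look like, here is a minimal sketch. The systems, attribute names and matching rule are purely hypothetical; a real analysis would correlate many more attributes across many more systems:

```python
# Hypothetical example: correlate HR records with directory accounts
# to surface orphaned accounts and attribute mismatches that a human
# reviewer would silently "fix" without ever noticing them.

hr_records = {
    "jdoe": {"full_name": "John Doe", "department": "Sales"},
    "asmith": {"full_name": "Alice Smith", "department": "IT"},
}

directory_accounts = {
    "jdoe": {"cn": "John Doe", "ou": "Sales"},
    "asmith": {"cn": "Alice Smith", "ou": "Engineering"},  # stale department
    "bbrown": {"cn": "Bob Brown", "ou": "IT"},              # no HR record
}

# Accounts with no matching HR record are likely orphans.
orphans = set(directory_accounts) - set(hr_records)
print("Orphaned accounts:", sorted(orphans))

# Accounts that exist in both systems but disagree on an attribute.
for login in set(hr_records) & set(directory_accounts):
    if hr_records[login]["department"] != directory_accounts[login]["ou"]:
        print(f"Mismatch for {login}: HR says "
              f"{hr_records[login]['department']!r}, "
              f"directory says {directory_accounts[login]['ou']!r}")
```

Even this toy version immediately finds the orphaned account and the stale attribute, which is exactly the kind of error that slips through manual analysis.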

Design: We engineers often tend to overestimate our intellectual abilities. The sad fact is that there is perhaps not a single person in the world who could design a complex system flawlessly on the first try. The old saying goes: garbage in, garbage out. Design is based on the requirements and the analysis. If these are wrong, then the design cannot be right. And everybody makes mistakes, so it has to be expected that the designers will simply make a bad decision from time to time.

Implementation: The system is implemented as designed. As it is extremely unlikely that the design was correct, the implementation is doomed to be flawed. What is worse, the small mistakes of analysis and design are significantly amplified during implementation. What was a simple mistake in a single paragraph of the design specification may easily turn into thousands of lines of useless code. A huge amount of money is wasted.

Testing: There are two ways to do testing in waterfall: the bad one and the worse one. Bad testing verifies whether the implemented system satisfies the requirements. But as the requirements are often very vague and cannot be easily formalized, this testing does not tell much about the quality of the solution. And because it is so difficult to transform requirements into testing scenarios, an alternative approach is often used. This is where it gets worse. The testing scenarios are based on the design, so the tests only verify whether the system was implemented as designed, which tells almost nothing about the system’s qualities. The design was wrong (we know it was) and the system was implemented as designed. If all the tests pass, it only indicates that the implementation is as good as the design: it is all wrong.

Deployment: Everybody expects success, applause and a big party at this stage. But this is actually the first time the developed system meets reality. And it is not a pleasant meeting. All the small mistakes have been hugely amplified, and the result just does not work. This is the part where everybody panics. Problems are quickly patched to somehow pass the minimum requirements for system acceptance. Deadlines are missed, budgets are blown, managers go crazy.

All of this usually takes at least a year, sometimes two or three. It produces a lot of documents, diagrams, project plans, meeting minutes and other kinds of very expensive but mostly meaningless paperwork. Yet the end result is almost invariably a very expensive but inferior IDM solution. If it works at all.

The Solution

The solution is the same one that worked for software engineering: iterative and incremental development methods. Do not even try to deploy an identity management solution in one huge project. Divide the project into several iterations, none of them longer than a couple of months. Each iteration should contain all the activities: a bit of requirements analysis, a bit of design, implementation, testing and deployment. The first iterations will naturally tend to be more focused on "papers", but they should also include prototyping and testing. The last iterations will focus much more on testing and deployment, but there should still be manpower available to rethink parts of the design if needed. It is crucial that each iteration ends with something tangible, that it confronts the theory (papers) with the implementation (code) and with reality (deployment). This is the way to identify mistakes early, before they are amplified in later phases.

It may seem that the extra development, testing and deployment activities in each iteration are expensive. But that’s relative. They are much less expensive than the work which is plainly wasted in waterfall-based methods. And the fact that this approach avoids the panic and severe damage caused by the waterfall deployment phase is just … priceless.

Iterative and incremental techniques are far superior to waterfall-based techniques. There is no doubt about that. But there is one business-related drawback: commercial identity management systems are expensive, and the cost is concentrated at the beginning of the project (licensing, support). Therefore the IDM project needs to bring substantial benefits to justify this cost. Otherwise the ROI calculation does not work out and the project won’t get approved. And it is just not realistic to bring a huge benefit in a short IDM project. Therefore deployment projects that include commercial IDM products need to be huge and long … and usually very risky.

But this is all very different with open source IDM products. Open source products do not have a huge initial cost. There is no licensing cost, and the cost of support can scale gradually as the project rolls out. Therefore open source software makes it possible to run a series of smaller iterative IDM projects that gradually develop an appropriate IDM solution. The cost of the project increases slowly and gradually, and it is proportional to the benefit that the result brings. Both financial and technological risks are significantly reduced. This is a recipe for a good IDM program that suits the customer’s needs, can adapt to changing requirements and is still economically feasible.
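A back-of-the-envelope sketch of this ROI argument, with all numbers invented purely for illustration:

```python
# Hypothetical numbers, purely to illustrate the cost-structure argument
# above: a large upfront cost versus a cost that scales with each iteration.

upfront_license = 200_000   # commercial product: paid before anything is delivered
iteration_cost = 50_000     # work invested per iteration (same in either model)
iteration_benefit = 80_000  # value each delivered iteration brings

for n in range(1, 5):
    # Commercial: the large upfront cost must be recovered first.
    commercial_net = n * (iteration_benefit - iteration_cost) - upfront_license
    # Open source: cost and benefit grow together, iteration by iteration.
    open_source_net = n * (iteration_benefit - iteration_cost)
    print(f"after {n} iteration(s): commercial {commercial_net:+,}, "
          f"open source {open_source_net:+,}")
```

With these made-up numbers, each small open source iteration pays for itself, while the commercial project remains deep in the red for years, which is exactly the pressure that pushes commercial deployments towards the big-bang approach.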
