What Startups Can Learn from Government Mistakes
We probably all agree that the authorities had enough information to keep Umar Farouk Abdulmutallab off the Northwest flight from Amsterdam to Detroit on Christmas Day. His father contacted the US Embassy nearly six weeks before the incident to report both his son's presence in Yemen and his son's extremist views; there was intelligence indicating that extremists in Yemen were discussing a plot with someone known as "the Nigerian"; and additional incriminating information will no doubt come to light over the coming weeks. Judging by the comments on CNN's blog, most of us also feel that the changes the TSA made to security procedures in reaction to this incident will do more to hassle travelers and lessen their productivity than to make them more secure. As unfortunate as this incident was, it provides startups with examples of how bad culture and flawed process can lead to an overall failure to reach our desired objectives. In this article we will use the December 25th events to illustrate the importance of culture and process to effective organizational learning. We will also look at how the results of a learning and corrective action process can be used to determine whether your process and culture are appropriate to your needs.
There is a wealth of knowledge and data telling us that we learn far more efficiently from our mistakes than from our successes. As such, we need to make effective use of each and every mistake and failure, not only to learn personally but to force our organizations to learn. Unfortunately, irrespective of the party in power, the government is unlikely to maximize its learning in the Abdulmutallab case. While the administration no doubt genuinely wants to increase the security of travelers and the US defense against terrorism, it is also motivated to reduce administration blame and culpability in order to increase the likelihood of reelection. These two goals are not mutually supportive, and as a result the outcome will be suboptimal compared to the pure goal of simply finding problems and resolving them quickly and permanently. In support of this statement, consider the President's order to review the system within five calendar days. Given the depth and complexity of the organizations and systems involved, could such a review truly be comprehensive? The organizations under review will no doubt believe that the administration is hunting for a fall guy, and as a result information will likely be hidden and learning reduced. Cultures of "covering your ass" and "finding a fall guy" are counterproductive to effective organizational learning.
The preceding isn't meant as an indictment of our government, but rather as an illustration of a barrier to learning for all organizations. To maximize organizational learning from failures, your goal must be to learn from and correct all of the associated failures, not to find a "fall guy". This isn't to say that people shouldn't be fired for lack of judgment, dereliction of duty, or repeated failures to perform, but rather that identifying these people isn't the initial goal of the investigation. To be successful, and to repeatedly maximize learning, an organization must have both a culture of openly learning from mistakes and a process to maximize that learning. Our book, "The Art of Scalability", describes and diagrams our favorite Post Mortem (aka "Root Cause and Corrective Action" or "After Action Review") process, and a recent blog post highlights a lightweight version of this process for smaller issues or smaller companies. One method of conducting a post mortem is to use the "5 Whys", initially developed by Sakichi Toyoda of the Toyota Motor Corporation. The process, as we often modify it for our clients, is to ask "Why" at least five times to ensure that you get close to the true root cause.
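To make the drill-down concrete, the iterative questioning at the heart of the 5 Whys can be sketched as a simple record: a chain of answers ending in a working root cause. The incident, the answers, and the class and method names below are hypothetical illustrations, a minimal sketch rather than a prescribed implementation of the process.

```python
# Hypothetical sketch of a "5 Whys" drill-down record. The incident
# details and field names here are illustrative only.

from dataclasses import dataclass, field


@dataclass
class FiveWhys:
    problem: str
    whys: list = field(default_factory=list)  # answers, in drill-down order

    def ask_why(self, answer: str) -> None:
        """Record the answer to one 'Why?' iteration."""
        self.whys.append(answer)

    def root_cause(self) -> str:
        """Treat the last answer as the working root cause -- but only
        after drilling down at least five times."""
        if len(self.whys) < 5:
            raise ValueError("Keep asking: fewer than 5 whys recorded")
        return self.whys[-1]


analysis = FiveWhys(problem="Site outage after Tuesday's deploy")
analysis.ask_why("A buggy patch was deployed to production")
analysis.ask_why("The deploy pipeline did not check the patch's test status")
analysis.ask_why("Test status lives in a spreadsheet, not in the pipeline")
analysis.ask_why("No one owns integrating QA sign-off into the pipeline")
analysis.ask_why("Post mortems never assign owners to corrective actions")
print(analysis.root_cause())
```

Note that the fifth answer points at the process itself, not at a person; stopping at the first answer ("a buggy patch") would produce exactly the "fall guy" outcome the post mortem is meant to avoid.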
Having discussed the culture of the organization and the process by which failures are reviewed, let's turn to the expected results. The TSA's response to this incident was to post new security guidelines within two days. They include the statement that passengers may notice "increased gate screening including pat-downs and bag searches" and may be asked to stow personal items, turn off electronic equipment, and remain seated during certain portions of the flight. Remember that this is in response to an individual who was reported by his own family as a suspect; the 5 Whys would indicate that the failure lies as much in the failure to react to information as in the detection of the substance. Testing these outcomes, we can determine whether our process led to the appropriate results. Where are we correcting the failure to act upon the appropriate information in a timely and effective manner? Are our new procedures increasing security, or are they simply an action to prove that we are doing SOMETHING? Will the pat-downs and stowage rules keep a terrorist from hiding explosives in his underwear? Many believe the answers to these questions are "No".
Admittedly, a technology crisis is nowhere near as critical as a terrorist attack, but we can still draw some parallels. In a technology organization, this incident would be akin to determining that an outage was caused by a patch that was applied even though it had been reported as buggy and not ready for deployment. The TSA's reaction would be like the organization's management responding by requiring that even thoroughly tested, quality-assurance-approved patches go through extra testing. Instead of punishing and delaying all patches, why not simply ensure that untested patches don't get deployed? You need the right process at the right time, with the right culture and organizational mindset, as we describe in detail in "The Art of Scalability".
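The narrower fix, blocking only unvetted patches rather than slowing every deploy, can be sketched as a simple gate in a deployment script. The patch metadata, flag names, and function below are hypothetical, a sketch of the idea rather than any particular deployment tool.

```python
# Hypothetical deployment gate: reject only patches lacking test and QA
# sign-off, rather than adding extra steps for every patch.

def can_deploy(patch: dict) -> bool:
    """A patch may ship only if it passed tests and has QA approval."""
    return patch.get("tests_passed", False) and patch.get("qa_approved", False)


patches = [
    {"id": "patch-101", "tests_passed": True, "qa_approved": True},
    {"id": "patch-102", "tests_passed": False, "qa_approved": False},  # reported buggy
]

# Only vetted patches make it into the deploy queue; everything else
# flows through unchanged.
deployable = [p["id"] for p in patches if can_deploy(p)]
print(deployable)
```

The design point is that the corrective action targets the actual root cause (an unvetted patch reached production) instead of taxing every patch, vetted or not, with extra process.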
None of this is intended to take away from the immense job our governmental agencies have in protecting us from terrorism, nor is our goal to make light of terrorist attacks by comparing them to technology crises. However, as we've seen in many instances, such as with the popular books "Freakonomics" and "Superfreakonomics", where the authors take an economic approach to non-economic problems, it is appropriate to attempt to extend learning from one discipline to another. From this unfortunate attempt at terrorism, we believe technology startups can better understand how to learn from mistakes and how to modify process when attempting to prevent future problems.