How to Deal With Unintended Consequences

Having just read Levitt and Dubner's SuperFreakonomics as well as Tenner's Why Things Bite Back, I've been thinking lately about unintended consequences, sometimes referred to as the "law of unintended consequences." As defined by Rob Norton, it holds "that actions of people – and especially of government – always have effects that are unanticipated or unintended." Norton traces the concept of unintended consequences to luminary economists and social scientists such as Adam Smith, with his "invisible hand" that self-regulates markets, and John Locke, who campaigned against 1692 legislation limiting the maximum rate of interest that could be charged.

In his book, Tenner covers a variety of topics, including environmental, medical, and technological advances that have unintentionally caused problems. He uses the term "revenge," as in these advances have taken revenge on those who initiated them. What interested me was not the "revenge" but rather our inability to accurately predict the outcomes of our actions. All of the examples appear to be complex systems, meaning they exhibit behavior that is not obvious from the properties of their individual parts. As much as we understand about the human body, there is at least an equal amount that we do not understand. The same goes for our environment. Argue with me if you will, but when the accuracy of a 7-day weather forecast is less than 50% (using the meteorologist standard of temperature predictions within 3 degrees as accurate) and precipitation forecasts are only about 85% accurate one day out, it is hard to believe that we have a great understanding of the earth's climate and weather systems. Perhaps you can make the case that we are inept only on a micro scale and that from a macro perspective we know what we are talking about.

When we add human customers to our Software as a Service or Web 2.0 sites, I believe our systems start to take on the attributes of complex systems. If we could accurately predict the human-computer interaction of our systems, we would find all the bugs in the next release during QA. Unfortunately, time and time again we learn of bugs or unintended consequences, such as a huge drop-off rate in sign-ups, only after a release goes live and real people start interacting with it.

Why we fail at predicting human behavior, weather patterns, or any number of other subjects is too broad a topic for this post. However, knowing that we will fail, we can take actions to minimize the impact of the results. The way to achieve this is through A/B testing, also known as split-testing. If you approach changes in your releases as if you are just as likely to reduce a desired behavior as you are to improve it, you will want a way to test the results in the real world before committing to them. In our practice we often tell clients that they should have an A/B testing framework or a wire-on/wire-off framework for a variety of reasons, including risk mitigation for new features. In today's world, where free services such as Google's Website Optimizer can help with this (there is a great case study of how to use it), there really is no excuse for not having this ability within your site or service.
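
To make the idea concrete, here is a minimal sketch of what such a framework might look like. Everything in it is hypothetical: the feature name, the flag structure, and the rollout percentage are illustrative and not any real framework's API. The sketch hashes each user into a bucket so a visitor always sees the same variant, and a single "enabled" flag acts as the wire-on/wire-off switch.

```python
import hashlib

# A minimal sketch of A/B assignment with a wire-on/wire-off switch.
# The flag store, feature name, and rollout percentage below are all
# hypothetical; a real site would load these from config or a database.
FEATURE_FLAGS = {
    "new_signup_flow": {"enabled": True, "rollout_percent": 50},
}

def bucket(user_id: str, feature: str) -> int:
    """Deterministically map a user to a bucket in [0, 100).

    Hashing the feature name together with the user id keeps bucket
    assignments independent across experiments.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def in_treatment(user_id: str, feature: str) -> bool:
    """Return True if this user should see the new variant.

    The 'enabled' flag is the wire-on/wire-off switch: setting it to
    False instantly reverts every user to the control experience.
    """
    flag = FEATURE_FLAGS.get(feature)
    if flag is None or not flag["enabled"]:
        return False
    return bucket(user_id, feature) < flag["rollout_percent"]

# Usage: route a visitor to the variant or control sign-up page, and
# log the assignment so drop-off rates can be compared per variant.
variant = "signup_v2" if in_treatment("user-12345", "new_signup_flow") else "signup_v1"
print(variant)
```

The properties that matter here are that the assignment is sticky and the change is reversible: a given user always lands in the same bucket, so their experience is consistent across visits, and if the new variant turns out to hurt sign-ups, the flag can be flipped off without a redeploy.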