AKF Partners

Abbott, Keeven & Fisher Partners | Partners In Hyper Growth

When Should You Split Services?

The Y axis of the AKF Scale Cube indicates that growing companies should consider splitting their products along service- (verb) or resource- (noun) oriented boundaries. A common question we receive is “how granular should one make a services split?” A similar question is “how many swim lanes should our application be split into?” To help answer these questions, we’ve put together a list of considerations based on developer throughput, availability, scalability, and cost. By weighing these, you can decide whether your application should be grouped into a large, monolithic codebase or split up into smaller individual services and swim lanes. Keep in mind that splitting too aggressively can be overly costly and return little for the effort involved. Companies with little to no growth will be better served by focusing their resources on developing a marketable product than by fine-tuning their service sizes using the considerations below.

 

Developer Throughput:

Frequency of Change – Services with a high rate of change in a monolithic codebase cause competition for code resources and can create a number of conflicts between teams, including merge conflicts, that impact time to market. Such high-change services should be split off into small, granular services and ideally placed in their own fault-isolative swim lane so that the frequent updates don’t impact other services. Services with low rates of change can be grouped together, as disaggregation creates little value for them and the risk of being impacted by updates is lower.

The diagram below illustrates the relationship we recommend between functionality, frequency of updates, and relative percentage of the codebase. Your high-risk, business-critical services should reside in the upper right portion, frequently updated by small, dedicated teams. The lower-risk functions that rarely change can be grouped together into larger, monolithic services, as shown in the bottom left.

[Figure: frequency of updates vs. functionality and relative percentage of the codebase]

Degree of Reuse – If libraries or services have a high level of reuse throughout the product, consider separating and maintaining them apart from code that is specialized for individual features or services. A service in this regard may be something that is linked at compile time, deployed as a shared, dynamically loadable library, or operated as an independent runtime service.

Team Size – Small, dedicated teams can handle microservices with limited functionality and high rates of change, or large functionality (monolithic solutions) with low rates of change. This gives them a better sense of ownership, increases specialization, and allows them to work autonomously. Team size also has an impact on whether a service should be split. The larger the team, the higher the coordination overhead inherent to the team and the greater the need to consider splitting the team to reduce codebase conflict. In this scenario, we are splitting the product primarily to reduce the size of the team and thereby reduce product conflicts. Ideally, splits would be made based on the availability increases they allow, the scalability they enable, or the reduction in development time to market they deliver.

Specialized Skills – Some services may need development skills that are distinct from those of the remainder of the team. You may, for instance, need some portion of your product to run very fast; that portion may require a compiled language and a great depth of knowledge in algorithms and asymptotic analysis. The engineers who build it may have a completely different skill set from the rest of your team, whose code may be interpreted and mostly focused on user interaction and experience. In other cases, you may have code that requires deep domain experience in a very specific area like payments. Each of these is an example of a consideration that may indicate a need to split into a service and that may inform the size of that service.

 

Availability and Fault Tolerance Considerations:

Desired Reliability – If other functions can afford to be impacted when the service fails, then you may be fine grouping them together into a larger service. Indeed, sometimes certain functions should NOT work if another function fails (e.g. one should not be able to trade in an equity trading platform if the solution that understands how many equities are available to trade is not available). However, if you require each function to be available independent of the others, then split them into individual services.

Criticality to the Business – Determine how important the service is to business value creation while also taking into account the service’s visibility. One way to view this is to measure the cost of one hour of downtime against a day’s total revenue. If the business can’t afford for the service to fail, split it up until the impact is more acceptable.

Risk of Failure – Determine the different failure modes for the service (e.g. a billing service charging the wrong amount), the likelihood and severity of each failure mode, and how likely you are to detect the failure should it happen. The higher the risk, the greater the segmentation should be.
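
One way to make this comparison concrete is an FMEA-style risk priority number (likelihood × severity × detectability). The sketch below is only an illustration of that scoring idea, not an AKF-prescribed formula; the 1-5 scales and the billing failure modes are hypothetical.

    # Illustrative FMEA-style scoring; the 1-5 scales and failure modes are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class FailureMode:
        name: str
        likelihood: int     # 1 = rare, 5 = frequent
        severity: int       # 1 = cosmetic, 5 = business-critical
        detectability: int  # 1 = caught immediately, 5 = likely to go unnoticed

        def risk_priority(self) -> int:
            return self.likelihood * self.severity * self.detectability

    billing_failure_modes = [
        FailureMode("charges the wrong amount", likelihood=2, severity=5, detectability=4),
        FailureMode("invoice email delayed", likelihood=3, severity=2, detectability=2),
    ]

    # The worst-case failure mode is what argues for (or against) deeper segmentation.
    print(max(mode.risk_priority() for mode in billing_failure_modes))  # -> 40

The higher that worst-case score, the stronger the argument for isolating the function in its own service and swim lane.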

 

Scalability Considerations:

Scalability of Data – A service may already be a small percentage of the codebase, but as the data that the service needs to operate on scales up, it may make sense to split again.

Scalability of Services – What is the volume of usage relative to the rest of the services? For example, one service may need to support short bursts during peak hours while another has steady, gradual growth. If you separate them, you can address their needs independently without having to over-engineer a solution to satisfy both.

Dependency on Another Service’s Data – If the dependency on another service’s data can’t be removed or handled with an asynchronous call, the benefits of disaggregating the service probably won’t outweigh the effort required to make the split.

 

Cost Considerations:

Effort to Split the Code – If the services are so tightly bound that it will take months to split them, you’ll have to decide whether the value created is worth the time spent. You’ll also need to take into account the effort required to develop the deployment scripts for the new service.

Shared Persistent Storage Tier – If you split off the new service, but it still relies on a shared database, you may not fully realize the benefits of disaggregation. Placing a read-only DB replica in the new service’s swim lane will increase performance and availability, but it can also raise the effort and cost required.

Network Configuration – Does the service need its own subdomain? Will you need to make changes to load balancer routing or firewall rules? Depending on the team’s expertise, some network changes require more effort than others. Ensure you consider these changes in the total cost of the split.

 

 

The illustration below can be used to quickly determine whether a service or function should be segmented into smaller microservices, be grouped together with similar or dependent services, or remain in a multifunctional, infrequently changing monolith.

[Figure: decision matrix for splitting, grouping, or keeping services in the monolith]
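
As a rough, code-shaped translation of the considerations above (not a reproduction of the actual matrix), the sketch below scores a hypothetical service; the thresholds and weights are invented purely for illustration.

    # Hypothetical scoring of a candidate service against the considerations above.
    # Thresholds and weights are illustrative, not AKF-published values.
    def split_recommendation(changes_per_month: float,
                             criticality: int,       # 1 (low) .. 5 (business critical)
                             team_size: int,
                             months_to_split: float) -> str:
        score = 0
        if changes_per_month >= 4:  # frequent change favors a small, isolated service
            score += 2
        if criticality >= 4:        # business-critical functions favor their own swim lane
            score += 2
        if team_size > 8:           # large teams create codebase conflict
            score += 1
        if months_to_split > 6:     # a very costly split argues against disaggregation
            score -= 2

        if score >= 3:
            return "split into a small service in its own swim lane"
        if score >= 1:
            return "group with similar or dependent services"
        return "leave in the infrequently changing monolith"

    print(split_recommendation(changes_per_month=8, criticality=5, team_size=12, months_to_split=2))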


The Downside of Stored Procedures

I started my engineering career as a developer at Sybase, the company producing a relational database going head-to-head with Oracle. This was in the late 80’s, and Sybase could lay claim to a very fast RDBMS with innovations like one of the earliest client/server architectures, procedural language extensions to SQL in the form of Transact-SQL, and yes, stored procedures. Fast forward a career, and now as an Associate Partner with AKF, I find myself encountering a number of companies that are unable to scale their data tier, primarily because of stored procedures. I guess it is true – you reap what you sow.

Why does the use of stored procedures (sprocs) inhibit scalability? To start with, a database filled with sprocs is burdened by the business logic contained in those sprocs. Rather than relatively straightforward CRUD statements, sprocs typically become the main application ‘tier’, containing hundreds of lines of SQL within one sproc. So, in addition to persisting and retrieving your precious data and maintaining referential integrity and transactional consistency, you are now asking your database to run the bulk of your application as well. Too many eggs in one basket.

And that basket will remain a single basket, as it is quite difficult to horizontally scale a database filled with sprocs. Of the three axes described in the AKF Scale Cube, the only one that readily provides horizontal scalability here is the Z axis, where you have divided or partitioned your data across multiple shards. The X axis, the classic horizontal approach, works only if you are able to replicate your database across multiple read-only replicas and segregate read-only activity out of your application, which is difficult to do if many of those reads come from sprocs that also write. Additionally, most applications built on top of a sproc-heavy system have no DAL, no data access layer that might control or route read access to a different location than writes. The Y axis, dividing your architecture up into services, is also difficult to achieve in a sproc-heavy environment, as these environments are often extremely monolithic, making it very difficult to separate sprocs by service.
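
For contrast, here is a minimal sketch of the kind of data access layer such applications are missing; it assumes generic DB-API style connections and routes reads to read-only replicas (the X axis) while sending writes to the primary.

    # Minimal DAL sketch: reads go to read-only replicas, writes go to the primary.
    # Connections are assumed to be ordinary DB-API objects (driver choice is up to you).
    class DataAccessLayer:
        def __init__(self, primary_conn, replica_conns):
            self.primary = primary_conn
            self.replicas = list(replica_conns)
            self._next = 0

        def read(self, sql, params=()):
            # Round-robin reads across replicas enables X-axis (horizontal) scaling.
            conn = self.replicas[self._next % len(self.replicas)]
            self._next += 1
            cur = conn.cursor()
            cur.execute(sql, params)
            return cur.fetchall()

        def write(self, sql, params=()):
            # All writes are routed to the single primary.
            cur = self.primary.cursor()
            cur.execute(sql, params)
            self.primary.commit()

With reads and writes separated in the application rather than buried inside sprocs, adding read replicas becomes a deployment decision instead of a rewrite.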

Your database hardware is typically the beefiest, most expensive box you will have. Running business logic on it, rather than on the smaller, commodity boxes typically used for an application tier, simply makes the cost of computation inordinately high. Also, many companies will deploy object caching (e.g. memcached) to offload their database. This works fine when the application is separate from the database, but when the reads are primarily done in sprocs? I’d love to see someone attempt to insert memcached into their database.
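
The object caching described above only works when reads happen in the application tier. Below is a minimal cache-aside sketch, assuming the pymemcache client and a stand-in load_user_from_db() helper; both are illustrative choices, not recommendations from the original post.

    # Cache-aside read, only possible when reads live in the application tier
    # rather than inside stored procedures.
    import json
    from pymemcache.client.base import Client

    cache = Client(("localhost", 11211))

    def load_user_from_db(user_id):
        # Stand-in for a plain SQL read executed by the application, not a sproc.
        return {"id": user_id, "name": "example"}

    def get_user(user_id):
        key = f"user:{user_id}"
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)          # cache hit: the database is never touched
        user = load_user_from_db(user_id)      # cache miss: read once from the database
        cache.set(key, json.dumps(user), expire=300)
        return user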

Sprocs seem to be something of a gateway drug, where once you have tasted the flavor of application logic living in the database, the next step is to utilize abilities such as CLR (common language runtime). Sure, why not have the database invoke C# routines? Why not ask your database machine to also perform functions such as faxing? If Microsoft allows it, it must be good, right?

Many of the companies we visit that are suffering from stored procedure bloat started out building a simple application that was deployed in a turnkey fashion for a single customer with a small number of users. In that world, there are certainly benefits to using sprocs, but that world was the 20th century, and it is no longer viable in today’s world of cloud and SaaS models.

Just say no to stored procedures!


Definition of MVP

We often use the term minimum viable product or MVP, but do we all agree on what it means? In the Scrum spirit of the Definition of Done, I believe the Definition of MVP is worth stating explicitly within your tech team. A quick search revealed these three similar yet different definitions:

  • A minimum viable product (MVP) is a development technique in which a new product or website is developed with sufficient features to satisfy early adopters. The final, complete set of features is only designed and developed after considering feedback from the product’s initial users. Source: Techopedia
  • In product development, the minimum viable product (MVP) is the product with the highest return on investment versus risk…A minimum viable product has just those core features that allow the product to be deployed, and no more. Source: Wikipedia
  • When Eric Ries used the term for the first time he described it as: A Minimum Viable Product is that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort. Source: Leanstack

I personally like a combination of these definitions. I would choose something along the lines of:

A minimum viable product (MVP) has sufficient features to solve a problem that exists for customers and provides the greatest return on investment while reducing the risk of what we don’t know about our customers.

Just like no two teams implement Agile the same way, we don’t all have to agree on the definition of MVP but all your team members should agree. Otherwise, what is an MVP to one person is a full featured product to another. Take a few minutes to discuss with your cross-functional agile team and come to a decision on your Definition of MVP.


Guest Post – Effective CTOs

The following guest post comes from longtime friend and former colleague Michael “Skippy” Wilson.  Michael was one of eBay’s first employees, and its first CTO.  He was responsible first for taking “eBay 1.0” (Pierre Omidyar’s Perl scripts) and converting it into the modern site written in C with which most people are familiar.  He and his architecture team subsequently redesigned the site with multiple service-oriented splits (circa 2000) to enable it to scale to what was then one of the largest transaction sites on the internet.  There was no “playbook” back then for how to scale something as large as eBay.  Michael was one of the early CTOs who had to earn his internet PhD from the school of hard knocks.

Effective CTOs

The excellent blog post This is How Effective CTOs Embrace Change talks about how Twilio CTO Evan Cooke views the evolution of a CTO through a company’s growth.

While this article does an excellent job of identifying the root causes of the problems a CTO can run into, especially when working with his fellow C-level colleagues, I think there is another problem it does not address.

Ironically, one of a CTO’s biggest challenges is actually technology itself and how the CTO manages the company’s relationship to it. These challenges manifest themselves in the following ways:

Keeping appropriate pace with technological change

When a company is young, it’s agile enough to adopt the latest technologies quickly; in fact many companies change technologies several times over as they grow.

Later, when (as the article says) the CTO starts saying “No” to their business partners, they may also react to the company’s need for stability and predictability by saying “No” too often to technological change.  It only takes one good outage or release slippage, triggered by some new technology, for the newly minted CTO to go from being the “Yes” man to the “No” man, sacrificing agility at the altar of stability and predictability.

The same CTO who advocated changing languages, database back ends, and even operating systems and hardware platforms while the company was growing finds himself saying “No” to even the most innocuous of upgrades. While the initial result may be more stability and predictability, the end result is far from that: the company’s platform ossifies, becomes brittle and is eventually unable to adapt to the changing needs of the organization.

For example, years ago, before the domination of open-source Unix variants like OpenBSD and Linux, many companies “grew up” on proprietary systems like Solaris, Microsoft Windows, and AIX, alongside database counterparts like Oracle and Sybase.  While they were “growing up,” open-source systems matured, offering cost, technology, and even stability benefits over the proprietary solutions. But in many cases these companies couldn’t take advantage of this because of the perceived “instability” of those crazy “Open Source” operating systems. The end result was that competitors who could remain agile gained a competitive advantage because they could advance their technology.

Another (glaring) example was the advent of mobile. Most established companies completely failed to get on the mobile bandwagon soon enough and often ended up ceding their positions to younger, more agile competitors because they failed to recognize and keep up with the shift in technology.

The problem is a lot like Real Life ™. How do you continue to have the fun you had in your 20s and 30s later on, when things like career and family take center stage? Ironically, the answer is the same: Moderation and Separation.

Moderation means just what it says: use common-sense release and deployment planning to introduce change at a predictable – and recoverable – rate. Changing to a new database, a new hardware platform, or even a new back-end OS isn’t a big change at all, as long as you find a way to do it in an incremental and recoverable manner. In other words, you can get out of it before anyone notices something went wrong.

Separation means you build into the organization the ability to experiment and advance all the time.  While your front-end production systems may advance at a mature pace, you still need to maintain Alpha and Beta systems where new technologies can be advanced, experimented with, and exposed to (willing) end users. By building this into your organization as a “first class” citizen, the CTO keeps the spirit of agility and technological advance alive, while still meeting the needs of stability and predictability.

Making sure Technology keeps pace with the Organization

The best way to describe this problem is through example: Lots of today’s start-ups are built on what are called “NoSQL” database platforms. NoSQL databases have the advantages of being very performant, in some cases very scalable, and, depending on who you ask, very developer friendly. It’s very clear that lots of companies wouldn’t be where they are without the likes of MongoDB and Couchbase.

But, as many companies have found out, not all NoSQL solutions are created equal, and the solution selected in the company’s early years may not be appropriate as the company grows and its needs change.

For example, as the organization matures and parts of it start asking for reports, they may find that while their NoSQL solution worked well as an OLTP solution, it doesn’t work as well for OLAP or data warehousing needs. Or a NoSQL solution that worked well for warehousing data doesn’t work as well when you give your customers the ability to query it online, in an OLTP-like fashion.

This can occur when the CTO isn’t able to help guide the company’s technology forward fast enough to keep pace with the changing organizational needs. Unfortunately, if the problem reaches the “critical” stage because, for example, the organization can’t get reports, the solution may become a politically charged hack job instead of a well-planned solution.

Every organization (engineering, database administration, operations, etc.) will want to “solve” the problem as quickly as possible – so while the end solution may solve the immediate problem, it probably won’t be the one you’d choose if you’d planned ahead.

The solution here is for the CTO to continuously engage with the Company to understand and anticipate its upcoming technology needs, and engage with Engineering organizations to plan for them. It’s actually better for the CTO to arrange to engage the stakeholders and engineering organizations together rather than serially: it encourages the stakeholders to work together instead of through the CTO.

Making sure Technology keeps pace with Security requirements

Today’s engineers have an amazing community of Open Source Developers and Tools to draw on to get their work done.

And when a company is growing, tools like NPM, grunt, brew, and Hadoop are amazing. In many cases, they enable development to move exponentially more quickly than it otherwise could.

Unfortunately, many of these tools are gigantic security risks.  When an engineer types “grunt install” (a node-related command) or “brew update”, do they really know exactly what it is doing? Do they know where it’s getting the update? Do they know if the updates can be trusted?

They don’t. Let’s not even go into the horrors they’re inadvertently inviting behind your firewall.
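
One concrete mitigation, offered here purely as an illustration rather than something prescribed in the original post, is to pin and verify what those tools download before anything is installed. The sketch below checks an artifact against a published SHA-256 digest; the path and expected digest are placeholders.

    # Illustrative only: verify a downloaded artifact against a pinned SHA-256
    # digest before installing it. The expected digest below is a placeholder.
    import hashlib
    import sys

    EXPECTED_SHA256 = "replace-with-the-publisher-documented-digest"

    def sha256_of(path: str) -> str:
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        if sha256_of(sys.argv[1]) != EXPECTED_SHA256:
            sys.exit("checksum mismatch - refusing to install")
        print("checksum verified")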

The problem for the CTO is that they probably had a hand in introducing these tools and behaviors to the organization, and will now be faced with reining them in. “Fixing” them will probably make them harder for engineers to use, and feel like a hassle instead of a common-sense security measure.

The CTO can expect the same situation when charged with guiding the organization towards things like always-on HTTPS, better encryption-in-place of customer data, etc. Not only will they have to convince their engineering organizations it’s important, they may also have to convince the rest of the organization that these measures deserve a place in line along with other product features.

One solution to this is a backward-looking one: security should be part of the product and the company’s culture from Day 1.  Not just protecting the company’s assets from unauthorized access, but protecting the customer’s assets (data) just as vigilantly. A culture like that would recognize the two examples above and have either solved them already, or be willing to solve them quickly and easily, without politics getting in the way.

If the CTO’s organization wasn’t built that way, their job becomes much harder, since they now need to change the company’s culture. Fortunately, recent events like the Sony hack and Snowden’s revelations make this much easier than it was two years ago, or even a year ago.

Conclusion

The effective CTO faces at least two challenges: becoming an effective part of the company’s executive team (not an “outside advisor”), and an effective part of the company’s technology and product teams. The CTO can’t forget that their job is to keep the company at the appropriate forefront of changing technology – far enough ahead to keep pace, but not so far ahead as to be “bleeding edge”. Almost universally, the way to do that is to make technological change part of the company’s culture – not a special or one-off event.

–Michael Wilson


Just for Fun – The Many Meanings of “I Don’t Disagree”

OK.  Who was the knucklehead who thought up the ridiculous corporatism of “I don’t disagree”?

Seriously.  What does this mouth turd even mean?

Does it mean that you simply don’t want to commit to a position?  Do you need more time to determine a position?  Is your ego too large to admit that someone else is right?  Do you really disagree but are too spineless to say it?  Were you not listening and needed to spout some statement to save face?

We owe it to each other to speak clearly, to engage in meaningful dialogue about important topics, and to convey our thoughts and positions.  When political correctness overrides fiduciary responsibility, we destroy stakeholder value.  Get rid of this piece of mouth trash from your repertoire.


DevOps – Not Good Enough

DevOps is a grassroots movement born from the passions of practitioners dissatisfied with the cultural and functional deficiencies endemic to functionally oriented (aka siloed) organizations. The tools these practitioners employed and developed, including Chef, Vagrant, Juju, Hudson, and Logstash (to name a few), codified their collective feelings about how development and operations could work better together. These pioneers in effect duct taped together the development and operations teams – covering a chasm characterized by differences in culture, goals, leadership, philosophies, and toolsets. This technological equivalent of a “redneck” fix has been successful, but it is also time to see it for what it really is: the treatment for a symptom of underlying, outdated organizational concepts. For all that DevOps provides, it cannot treat the many other symptoms of suboptimal organizational construction. These include low levels of innovation, higher than necessary levels of conflict, and slower than necessary time-to-market. Why continue to duct tape together a poorly designed system? It’s time to replace the broken system and its temporary repair with one that works.


A functionally aligned technology group is a biped with a foot planted in each of two disjointed pasts, neither of which has a future in modern online products. One “foot” is planted in the world of corporate IT, where hosting platforms and enterprise architectures were defined to allow heterogeneous third-party systems to coexist in a single, cost-effective, and easy-to-manage environment. Because IT is typically considered a cost center, the primary driver year-on-year is to reduce cost on a per-employee basis. One way to do this is to standardize on infrastructure and require that third-party providers conform to corporate standards – shifting the burden of integration cost to those third-party providers. Hence were born the traditional infrastructure and operations groups found within today’s technology organizations. These organizations and their solutions are awesome if you are integrating third-party solutions – they are the last nail in your coffin if you are developing a product.


The other “foot” is planted in the past of packaged or “on-premise” software providers. In this past (heavy emphasis on the past), the software developer was king. Job one was to produce software that customers would buy. Forget about running it – who cares – that’s someone else’s job. In this past there was no infrastructure team. We just eat requirements and poop software. Sales engineers and systems integrators were responsible for figuring out how to shoehorn generic software into a wide variety of customer environments. As archaic and dysfunctional as this approach sounds, it actually worked, because there was alignment between what the producing company created (software) and what the purchasing company bought (software). But it doesn’t work anymore because no one buys software. Get over it.


Today customers buy SERVICES, not software. The mindset shift necessary to be successful in this new world is that the raw materials comprising these services are equal parts software and hardware (infrastructure). If your team doesn’t combine these raw materials purposefully, the resulting service won’t be as good as it should be. Continuing to “produce” software and “host it” on infrastructure is a recipe for suboptimal results at best and possibly a disaster at worst.


It’s time to move on from both functional organizations (e.g. Operations, Infrastructure, Software, QA) and the DevOps solution that duct tapes them together to organizations that can more easily design, implement and deploy these services. Organize your teams into cross-functional, experientially diverse, autonomous teams that own distinct services within your product. Each team and service should be driven by business focused KPIs. These teams should be self-sufficient enough to develop, deploy, and support their service without having to rely on other teams.


We understand this leap isn’t easy, and we are here to help. To be successful you must evolve both your architecture and your organization such that the two are compatible. In so doing, our clients experience higher levels of innovation, faster time-to-market, and lower levels of conflict than those that adopt a DevOps-only duct tape solution. We call this approach to organizing the “Agile Organization”.  To help our clients evolve to experientially diverse teams, we often introduce as a starting point a concept that we call “GrowthOps”.  GrowthOps differs from the traditional DevOps approach in that it starts with an understanding of the desired outcomes of a product in terms of business KPIs (key drivers of growth, for instance).  Rather than trying to duct-tape organizations together, we start by aligning disparate organizational objectives and focusing on customer value creation.  We then add in all of the tools necessary to speed time to market and monitor core KPIs.  This unique approach establishes a firm foundation for a company to grow – be that growing from on-premise to SaaS, growing from colocation centers into the cloud, or growing because of customer demand – and allows for an easier subsequent transition to the Agile Organization.


We love duct tape as much as the next person, but it’s time organizations stopped using these types of solutions and got to the real root cause of the friction between development and operations. Only by embracing the true root causes and fixing the organizational issues can teams get out of their own way and grow effectively.  Implementing GrowthOps as a first step and ultimately transitioning to the Agile Organization is a proven recipe for success.



Product Management Excellence

While not technical problem solvers, Product Managers arguably have the most difficult job in modern software development. Their core mission involves taking feedback from a diversity of sources (market trends, the business owner’s vision, the competitive landscape, customer complaints, feature usage statistics, engineering effort estimates) and somehow synthesizing all these inputs into a single coherent vision (roadmap) and prioritized task list (product backlog). If that weren’t challenging enough by itself, they’re also on the hook for articulating feature implementations (through user stories and daily discussions with engineers) and for continually providing forecasts of feature release dates. (Btw, if anyone needs to know AKF’s position on whether or not you need dedicated, full-time PMs, read here.)

Given the difficulty of the task, it’s not surprising that many product owners fall short of excellence. In the worst cases, what was envisioned as a streamlined agile development process devolves into waterfall by another name. Product sees developers as “ticket-takers” who can’t seem to work hard enough or fast enough to satisfy the wants of the business. To prevent this sort of downward slide into mediocrity and to keep your product management practices on point, we’ve highlighted below some key ways to differentiate a Great Product Manager from just a “Good” one.

Good Product Managers prioritize their backlog and communicate feature requirements through user stories and face-to-face discussions with engineers. Great Product Managers go beyond these core tasks and participate in Product Discovery and Product Validation. Product Discovery requires conducting market research to determine what the existing product might need to be (more) successful in the target market. This means watching market trends, tracking competitors, and keeping an overall sense of where the competitive landscape is headed. Product Validation is quantifying the results of a feature release, asking whether those results met expectations, and learning how to make better predictions in the future. The very best PMs establish “success metrics” for each and every feature prior to implementation.

Good Product Managers interact daily with their agile team. Great Product Managers are part of the agile team. They’re not simply interacting on an “as needed” basis; they’re an integral part of the agile process. They attend daily stand-ups and retrospectives, offer ideas on how to change course, and communicate continually with engineers about tradeoffs. They don’t just write a user story, they discuss the story with developers and test engineers to ensure everyone has a common understanding of what’s being built. If the Agile Team fails to deliver on time — or worse yet, builds the wrong feature — they own a part of that failure. If the Agile Team delivers a great release, they share that success with their teammates.

Good Product Managers prioritize new features that generate the greatest value to the business. They understand technical debt and aim to pay it down over time. Great Product Managers do all this, but also recognize that product management isn’t just accretive. It’s not just about heaping feature after feature on your product until it’s saddled with bells and whistles. Great Product Managers put themselves in the end-user’s seat and choose to refactor, merge, or deprecate features when it makes sense to do so. Eliminating unused features helps to minimize technical debt. This feature selection process improves the maintainability of the code and gives end-users a clean interface that balances functionality and ease of use.

Good Product Managers avoid making mistakes. Great Product Managers recognize and retain a memory of mistakes and past failures. It’s all too easy to brush failed ideas under the carpet. Great Product Managers recognize this and choose to capitalize on their failures. They learn from mistakes and vow never to repeat them. They keep bad product ideas from being recycled and keep their teams focused on generating value.

Finally and most importantly:

Good Product Managers say “Yes” to ideas that create value to the business. Great Product Managers say “No” early and often to all but the most valuable ideas. You see, the problem most SaaS companies face isn’t a lack of ideas, but finding a way to identify the most promising ones and prioritizing them correctly. Keeping the team focused on delivering value requires the Product Manager to dish out a lot of ‘No’s to tasks that might steal from the team’s time.

It’s your engineering team’s job to build your product right, but your Product Manager is there to ensure that you build the right product. Building the Taj Mahal is no good if the customer really needed the Eiffel Tower! By no means is it an easy job, but by adopting these best practices your Product Managers can achieve excellence.

Steven Mason
Consultant, AKF Partners


The AKF Story

Tom Keeven came up with the idea for AKF Partners in December 2006. Each of the founding partners would spend a handful of days a month helping interesting companies with their technology challenges. It sounded like the perfect “lifestyle/retirement job” – help fun companies solve challenging problems while having lots of time on the side for personal projects. AKF was born in February 2007 with three founding partners. At the time, we had no engagement model and no unique or differentiating approach for the business. Essentially, our company violated everything we had learned in business school.

Our Defining Moment

One early client stands out as helping us to create our unique and differentiating approach as a growth consulting firm. This company was a post-Series A internet startup that had recently run into some fairly serious product problems. The company was the darling of the internet, and their CEO was being talked about in nearly every magazine. The executive team proudly proclaimed that they had a new management approach – one that would change the way companies were managed forever. But what the company thought was novel, we felt was contributing to their problems; the lack of management discipline was allowing product problems to linger unsolved and causing significant issues for the users of their product.

An example that everyone can probably relate to, whether you’re a weekend warrior or a seasoned athlete, is a strained or torn muscle. If a doctor prescribes pain medication, the meds will help treat the symptom of the problem (the pain), but they will neither address the cause of the incident (improper form) nor treat the cause of the pain (tear or inflammation). This early client made it clear that they only wanted our pain medication – technical fixes to the problems they had encountered to date. While we were happy to give them the advice they wanted, we also felt obligated to address the causes of their pain. We told the client that if they wouldn’t listen to all of our advice, including how they should manage their team and which processes they should implement, we would leave and not charge them. The client could have the technical recommendations we had already made for free.

AKF’s Focus and Approach

From that moment on, AKF focused solely on client value creation. We believe that creating value for our clients will result in the best possible outcomes for our company. We’ve since told many clients that we won’t work with them if we believe they won’t take our advice. We aren’t simply drug peddlers or professional consultants whose primary goal is to sell more consulting. We provide pain relief in the form of architectural advice and injury avoidance through advice on organizations and processes.

Realizing that our approach of evaluating architecture, organizations and processes was unique within the industry, we wrote our first book, The Art of Scalability, to help clients who couldn’t directly afford onsite services. The book was an Amazon bestseller and now has over ten thousand copies sold. We subsequently wrote Scalability Rules, a companion to The Art of Scalability and a third book, The Power of Customer Misbehavior, which explores the attributes of successful viral products and the companies that build them. Together, these books help teach companies how to drive growth within a market, and service that growth within their products.

We successfully followed this approach of treating both the symptoms (architectural/technology pain) and causes (people, organizations, management and processes) through hundreds of clients until the Healthcare.gov debacle of 2014.

Putting Our Values to the Test

Upon launch, the healthcare.gov website supporting the Affordable Care Act simply could not keep up with user demand and repeatedly crashed. People attempting to sign up for government mandated insurance could not comply with the law. For many companies in our business, this would represent an incredible revenue opportunity. We saw it as an opportunity to help and continue our service to our nation.

The founders of AKF are neither registered Democrats nor huge proponents of the Affordable Care Act. That said, we do believe that expensive initiatives should fail or succeed based on their merits and not die as a result of easily avoided technical failures. When Jeff Zients, who was appointed by the President to “fix” the ACA, called on AKF asking for help, we agreed, as long as we were allowed to contribute to both the technology symptoms and the organizational, management, and process causes. We suggested that we do all of these things on a pro bono basis so that taxpayers could sign up for healthcare in compliance with the law.

True to our past experiences with other clients, the technology failures were a result of poor management practices, a lack of coordination processes, and a failure to quickly address the root causes of early technical symptoms. And true to the principles and values of our company, we worked to help create client value (in this case, some return on taxpayer investment). To us, it was unfathomable to charge a fee for what was already an over-priced solution.

The Past and the Future

The early, unnamed startup (which has since gone out of business) and the ACA examples are extremes in our experience. Very few of our clients display the complete lack of management or absence of processes that these examples represent. For most of our clients, simple tweaks pay incredible dividends. Most clients are staffed by hard-working and focused people who suffer from the same tunnel vision we’ve personally experienced in our past jobs. Rising above the chaos of explosive growth is difficult. Having a partner can help force companies to make the time to consider their options. We approach every company as though we are extensions of that company’s team, helping to guide the team and leverage their and our combined experience to design the most appropriate solution. Most importantly, we never take on a client without believing we can help them create more value than what they’ve spent on our services. In eight years AKF has worked with hundreds of clients and grown from three individuals working part-time to twelve full-time people, with still more to be hired in 2015. We continue to grow because we provide value to clients by identifying the true root causes of issues, not just quick technical patches.


Selecting Metrics for Your Agile Teams

One of our favorite sayings is “you can’t improve that which you do not measure.” When working with clients, we often emphasize the need to select and track performance metrics. It’s quite surprising (disheartening really) to see how many companies are limping along with decision-making based entirely on intuition. Metrics-driven institutions demonstrably outperform those that rely on “gut feel” and are able to quickly refocus efforts on projects that offer the greatest ROI.

Just as your top-level business KPIs govern strategic decision making, your agile teams (and their respective services) need their own “tactical” metrics to focus efforts, guide decision making, and make performance measurable. The purpose of agile development is to deliver high-quality value to your customers in an iterative fashion. Agile facilitates rapid deployment, but it also allows you to garner feedback from your customers that will shape the product. Absent a set of KPIs, you will never truly understand the effectiveness of your process. Getting it right, however, isn’t an easy task. Poorly chosen metrics won’t reflect the quality of service, will be easily gamed or difficult to track, and may result in distorted incentives or undesirable outcomes.

In contrast, well-chosen metrics make it simple to track performance and shape team incentives. For example, a search service could be graded against the speed and accuracy of search results while the shopping cart service is measured on the percentage of abandoned shopping carts. These metrics are simple, easy to track, difficult to game, and directly reflect the quality of service.

Be sure to dedicate the time and the mental energy needed to pick the right metrics. Feedback from team members is essential, but the final selection isn’t something you can delegate. After all, if I’m allowed to pick my own performance metrics, I can assure you I’m always going to look awesome.

To keep you on the right track, below is a checklist of considerations to take into account before finalizing the selection of metrics for your agile teams:

  1. A handful of carefully chosen metrics should be preferred over a large volume of metrics. Ideally, each Agile team (and respective service) should be evaluated on and tasked with improving 2-3 metrics (no more than 5). We have witnessed at least one company that proposed a total of 20 different metrics to measure an agile team’s performance! Needless to say, being graded on that many metrics is disempowering at best, and likely to elicit either a panic attack or total apathy from your team. In contrast, having only a handful of metrics to be graded against is empowering and helps to focus efforts.
  2. Easy to collect and/or calculate. One startup suggested they would track “Engineering Hours Spent Bug-Fixing” as a way to determine code quality. The issue was quickly raised: who would be doing this tracking, and how much time and effort did they estimate it would take?  It became obvious that tracking the exact amount of time spent would add a heavy productivity tax to an already burdened engineering team.  While the measure was very granular, the cost of collecting the information simply outweighed the benefits.  Ultimately we helped them decide that “Number of Customer Service Tickets per Week” was the right metric. Sometimes a cruder measure is the right choice, especially if it is easier to collect and act upon.
  3. Directly Controllable by the Team. Choose metrics that your agile team has more or less direct control over. A metric they contribute towards indirectly is less empowering than something they know they own. For example, when measuring a search service the “Speed and Accuracy of Search” is preferable to “Overall Revenue” which the team only indirectly controls.
  4. Reflect the Quality of Service. Be sure to pick metrics that reflect the quality of that service. For instance, the number of abandoned shopping carts reflects the quality of a shopping cart service, whereas number of shopping cart views is an input metric but doesn’t necessarily reflect service quality.
  5. Difficult to Game. The innate human tendency to game any system should be held in check by selecting metrics that can’t easily be gamed. Simple velocity measures are easily (read: notoriously) gamed, while the number of “Severity 1” incidents your service invoked can’t be so easily massaged.
  6. Near Real Time Feedback. Metrics that can be collected and presented over short time intervals are the most actionable. Information is more valuable when fresh: providing availability data weekly (or even daily) will foster better results than a year-end update.
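
As an illustration of what a small, trackable set of per-team metrics might look like in code (the service, targets, and numbers below are hypothetical), consider:

    # Hypothetical per-team metric definitions: few in number, directly controllable,
    # reflective of service quality, and reviewed frequently.
    from dataclasses import dataclass

    @dataclass
    class TeamMetric:
        name: str
        target: float
        current: float
        lower_is_better: bool = True

        def on_track(self) -> bool:
            return self.current <= self.target if self.lower_is_better else self.current >= self.target

    shopping_cart_team = [
        TeamMetric("cart abandonment rate (%)", target=25.0, current=28.5),
        TeamMetric("p95 add-to-cart latency (ms)", target=250, current=180),
        TeamMetric("Severity 1 incidents this month", target=0, current=1),
    ]

    for metric in shopping_cart_team:
        print(f"{metric.name}: {'on track' if metric.on_track() else 'needs attention'}")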

Most importantly, well-chosen metrics tracked regularly should pervade all aspects and all levels of your business. If you want your business to become a lean, performance driven machine, you need to step on the scale every day. It can often be uncomfortable, but it’s necessary to get the returns of which you are capable.


5 Product Team Must Dos – the New (Old) Approach to Product

Want to create great products?  The path to success in this endeavor has more to do with how you think about value creation, how you think about your customers, and how you organize your team than it does with having brilliant ideas.  And here’s the kicker – while many of us think some of these concepts are brand new (including the founders of AKF, who contributed to the primary research on the topic), the fact is that great companies have known these secrets for quite some time.  Here are 5 “Must Dos” for product teams to create great products:

1) Focus on customer value creation first!

In Peter Drucker’s “The Practice of Management” (1954) he wrote that “There is only one valid definition of a business purpose:  to create a customer.”  In later works he expanded on that notion by saying businesses must create AND keep customers and that profit was a necessity to stay in business, but not the sole purpose of a business.

Profit and shareholder value maximization are of course important to businesses.  Without profits, you can’t stay in business long.  Without creating shareholder value, you will be locked out of equity markets.  But by and large these are dependent variables outside of a firm’s control.  While we can control costs directly to affect profits, doing so may constrain our future growth.

If we instead focus on delighting the customer with the products we create, we can create internal and external enthusiasm for the product.  In doing so we grow revenues (part of profits) and excitement within the company.

 

2) Eliminate the “IT Mindset” and develop as a Product team

In the first edition of The Art of Scalability (2009), we cautioned teams away from the “IT Mindset”.  The IT Mindset is cost and efficiency focused, versus the Product Mindset, which is customer, market, product, and revenue focused (in that order).  The IT Mindset envisions product development as a manufacturing plant, rather than the creative and innovative process it must be to be successful.  The IT Mindset has a purpose – to serve the needs of employee efficiencies – but should come with a “Do Not Use with Products” warning label.  This IT Mindset stifles innovation and kills product team morale.  Marty Cagan, one of the greatest product minds and consultants alive, and a longtime colleague and business partner, has more to say about this debilitating phenomenon here.

 

 3) Organize around the objective – not functions.

While performing academic research on why some teams in the same industry seem to have higher levels of innovation than others, we stumbled upon some interesting veins of tangential research.  All of it pointed to the same conclusion: multidisciplinary teams comprised of all the skills necessary to complete an objective, organized around outcomes, have higher levels of innovation and success than teams organized around functional boundaries (e.g. product management, software engineering, QA, operations, infrastructure, etc.).  While we contributed to this research, we found out we weren’t the first to identify this phenomenon!  In fact, this Harvard Business Review article (1986) hints at the same philosophy.

 

4) Don’t listen to what customers WANT – watch them and identify what they NEED

We’ve all heard Jobs’ famous saying that customers don’t know what they want until you show them.  Developing a customer “want” is costly and can miss the greater market opportunity entirely.  As we point out in this post, product teams need to focus on the hypothesis for a market need, start small (MVP), and iterate their way to success to maximize the potential of a product to meet the true need.

 

5) Eliminate the concept of insular innovation

Forget the concept of a single great innovator running a company and generating great product idea after great product idea.  That concept is a myth perpetuated by marketing teams and egotistical executives.  The existing research on this topic is clear:  The teams with the highest levels of innovation source innovation from a diverse network of individuals inside and outside the company.  See slide 10 on network diversity in our scalable organizations presentation.  But don’t just take our word on it, Walter Isaacson comes to the same conclusion in his NY Times Bestseller “The Innovators”.  And yes, he debunks the notion that Jobs was the sole reason for Apple’s product success as well.
