
Data Driven Product Development

When we bring up the idea of data driven product development, almost every product manager or engineering director will quickly say that of course they do this. Probing further, we usually hear about A/B testing – but only when they aren’t sure what the customer wants. To all of this we say “hogwash!” Testing one landing page against another is a good idea, but it’s a far cry from true data driven product development. The bottom line is that if your agile teams don’t have business metrics (e.g. increase the conversion rate by 5% or decrease the shopping cart abandonment rate by 2%) that they measure after every release to see whether they are getting closer to achieving them, then you are not data driven. Most likely you are building products based on the HiPPO – the Highest Paid Person’s Opinion. In case that person isn’t always around, there’s this service.

Let’s break this down. Traditional agile processes have product owners seeking input from multiple sources (customers, stakeholders, the marketplace). All these diverse inputs are considered and eventually combined into a product vision that helps prioritize the backlog. In the most advanced agile processes, the backlog is ranked based on effort and expected benefit. This is almost never done. And if it is done, the effort and benefit are never validated afterwards to see how accurate the estimates were. We even have an agile measurement for the effort side of this: velocity. Most agile teams don’t bother with the velocity measurement and have no hope of measuring the expected benefit.

Would we approach anything else like this? Would we find it acceptable if other disciplines worked this way? What if doctors practiced medicine with input from patients and pharmaceutical companies but without monitoring how their actions impacted the patients’ health? We would probably think that they got off to a good start, understanding the symptoms from the patient and the possible benefits of certain drugs from the pharmaceutical representatives. But they failed to take into account the most important part of the process: how their treatment actually performed against the ultimate goal, the improved health of the patient.

We argue that you’re wasting time and effort doing anything that isn’t measured by a success factor. Why code a single line if you don’t have a hypothesis that this feature, enhancement, story, etc. will benefit a customer and in turn provide greater revenue for the business? Without this feedback mechanism, your effort would be better spent taking out the trash, watering the plants, and cleaning up the office. Here’s what we know about most people’s gut instinct with regard to products: it stinks. Read Lund’s very first Usability Maxim: “Know thy user, and YOU are not thy user.” What does that mean? It means that you know more about your product, industry, service, etc. than any user ever will. Your perspective is skewed because you know too much. Real users don’t know where everything is on the page; they have to search for it. Real users lie when you ask them what they want. Real users have bosses, spouses, and children distracting them while they use your product. As Henry Ford supposedly said, “If I had asked customers what they wanted, they would have said a faster horse.” You can’t trust yourself and you can’t trust users. Trust the data. Form a hypothesis: “If I add a checkout button here, shoppers will convert more.” Then add the button and watch the data.
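
To make “watch the data” concrete, here’s a minimal sketch (our illustration, not part of the original post; the traffic and conversion numbers are made up) of how a team might validate the checkout-button hypothesis with a simple two-proportion z-test:

```python
from math import sqrt

def conversion_lift(before_conv, before_visits, after_conv, after_visits):
    """Two-proportion z-test: did conversion really improve after the release?"""
    p1 = before_conv / before_visits      # conversion rate before the release
    p2 = after_conv / after_visits        # conversion rate after the release
    pooled = (before_conv + after_conv) / (before_visits + after_visits)
    se = sqrt(pooled * (1 - pooled) * (1 / before_visits + 1 / after_visits))
    return p2 - p1, (p2 - p1) / se        # observed lift and its z-score

# Hypothesis: "If I add a checkout button here, shoppers will convert more."
lift, z = conversion_lift(before_conv=480, before_visits=12000,
                          after_conv=565, after_visits=12100)
print(f"lift = {lift:.4%}, z = {z:.2f}")  # z > 1.96 is significant at ~95%
```

If the z-score clears your significance bar, the hypothesis survives; if it doesn’t, the data has spoken and the feature goes back to the drawing board.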

This is not A/B or even multivariate testing. It is much more than that. It’s changing the way you develop products and services. Remember, Agile is a business process, not just a development methodology. Nothing gets shipped without an expected benefit that is measured. If you didn’t achieve the business benefit that was expected, you either try again next sprint or you deprecate the code and start over. Why leave code in a product that isn’t achieving business results? It only serves to bloat the code base and make maintenance harder. Data driven product development means we don’t celebrate shipping code. That’s supposed to happen every two weeks, or even daily if we’re practicing continuous delivery. You haven’t achieved anything by pushing code to production. When the customer interacts with that code and a business result is achieved, then we celebrate.

Data driven product development is how you explain the value of what the product development team is delivering. It’s also how you ensure your agile teams are empowered to make a meaningful impact for the business. When the team is measured against real business KPIs that directly drive revenue, it raises the bar on product development. Don’t get caught in the trap of measuring success with feature releases or sprint completions. The real measure of success is when the business metric shows the predicted improvement and the agile team becomes better informed about what customers really want and what makes the business better.


Guest Post: Three Mistakes in Scaling Non-Relational Databases

The following guest post comes from another long-time friend and business associate, Chris LaLonde. Chris was with us through our eBay/PayPal journey and joined us again at both Quigo and Bullhorn. After Bullhorn, Chris and co-founders Erik Beebe and Kenny Gorman (also eBay and Quigo alums) started Object Rocket to solve what they saw as a serious deficiency in cloud infrastructure – the database. Specifically, they created the first high-speed Mongo instance offered as a service. Without further ado, here’s Chris’ post – thanks Chris!

Three Mistakes in Scaling Non-Relational Databases

In the last few years I’ve been lucky enough to work with a number of high profile customers who use and abuse non-relational databases (mostly MongoDB) and I’ve seen the same problems repeated frequently enough to identify them as patterns. I thought it might be helpful to highlight those issues at a high level and talk about some of the solutions I’ve seen be successful.

Scale Direction:

At first everyone tries to scale things up instead of out.  Sadly that almost always stops working at some point. So generally speaking you have two alternatives:

  • Split your database – yes, it’s a pain, but everyone gets there eventually. You probably already know some of the data you can split out: users, billing, etc. Starting early will make your life much simpler and your application more scalable down the road. However, the number of people who never consider that they’ll need a 2nd or 3rd database is frightening. Oh, and one other thing: put your analytics someplace else; the fewer things placing load on your production database from the beginning, the better. Copying data off of a secondary is cheap.
  • Scale out – Can you offload heavy reads or writes? Most non-relational databases have horizontal scaling functionality built in (e.g. sharding in MongoDB). Don’t let all the negative articles fool you; these solutions do work. However, you’ll want an expert on your side to help pick the right variable or key to scale against (e.g. shard keys in MongoDB), as in the sketch below this list. Seek advice early, as these decisions will have a huge impact on future scalability.
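
For a sense of what that choice looks like in practice, here’s a minimal sketch (our illustration; the router address, database, and collection names are hypothetical) of sharding a MongoDB collection on a hashed key via pymongo:

```python
from pymongo import MongoClient

# Connect to the mongos router of an existing sharded cluster.
client = MongoClient("mongodb://mongos.example.com:27017")

# Enable sharding for the database, then shard the collection on a hashed
# user_id: a high-cardinality key that spreads reads and writes evenly.
client.admin.command("enableSharding", "appdb")
client.admin.command("shardCollection", "appdb.users",
                     key={"user_id": "hashed"})
```

Picking a low-cardinality or monotonically increasing key instead would funnel most traffic to a handful of shards, which is exactly the kind of mistake an expert can help you avoid.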

Pick the right tool:

Non-relational databases are very different by design, and most “suffer” from some of the “flaws” of the eventually consistent model. My grandpa always said “use the right tool for the right job”; in this case that means if you’re writing an application or product that requires your data to be highly consistent, you probably shouldn’t use a non-relational database. You can make most modern non-relational databases more consistent with configuration and management changes, but almost all lack any form of ACID compliance. Luckily, in the modern world databases are cheap; pick several and get someone to manage them for you, and always use the right tool for the right job. When you need consistency, use an ACID-compliant database; when you need raw speed, use an in-memory data store; and when you need flexibility, use a non-relational database.

Write and choose good code:

Unfortunately, not all database drivers are created equal. While I understand that you should write code in the language you’re strongest in, sometimes it pays to be flexible. There’s a reason why Facebook writes code in PHP, transpiles it to C++, and then compiles it into a binary for production. The driver is _the_ interface to your database, so if the interface is unreliable, difficult to deal with, or not updated frequently, you’re going to have a lot of bad days. If the interface is stable, well documented, and frequently updated, you’ll have a lot of time to work on features instead of fixes. Make sure to take a look at the communities around each driver, and look at the number of bugs reported and how quickly those bugs are being fixed. Another thing about connecting to your database: please remember nothing is perfect, so write code as if it’s going to fail. At some point, some component – the network, a NIC, a load balancer, a switch, or the database itself – is going to fail and leave your application unable to talk to your database. I can’t tell you how many times I’ve talked to or heard of a developer assuming that “the database is always up, it’s not my responsibility,” and that’s exactly the wrong assumption. Until the application knows that the data is written, assume that it isn’t. Always assume that there’s going to be a problem until you get an “I’ve written the data” confirmation from the database; to assume otherwise is asking for data loss.
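
Here’s a minimal sketch of that defensive posture (assuming MongoDB with pymongo; the connection string, collection names, and retry policy are illustrative only): treat a write as failed until the database acknowledges it, and expect the connection to drop.

```python
import time

from pymongo import MongoClient, WriteConcern
from pymongo.errors import AutoReconnect, ConnectionFailure

client = MongoClient("mongodb://db.example.com:27017")
# w="majority": insert_one() does not return until a majority of replica-set
# members have acknowledged the write.
orders = client.shop.get_collection(
    "orders", write_concern=WriteConcern(w="majority"))

def safe_insert(doc, retries=3):
    """Assume the data is not written until the database confirms it."""
    for attempt in range(1, retries + 1):
        try:
            return orders.insert_one(doc).inserted_id  # confirmed written
        except (AutoReconnect, ConnectionFailure):
            if attempt == retries:
                raise                  # surface the failure; never lose data silently
            time.sleep(2 ** attempt)   # back off, then retry
```

One design note: a blind retry can double-write if the first attempt succeeded just before the connection dropped, so production code should make inserts idempotent (e.g. by supplying a client-generated _id).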

These are just a few quick pointers to help guide folks in the right direction. As always the right answer is to get advice from an actual expert about your specific situation.


When Should You Split Services?

The Y axis of the AKF Scale Cube indicates that growing companies should consider splitting their products along services (verb) or resources (noun) oriented boundaries. A common question we receive is “how granular should one make a services split?” A similar question is “how many swim lanes should our application be split into?” To help answer these questions, we’ve put together a list of considerations based on developer throughput, availability, scalability, and cost. By considering these, you can decide whether your application should be grouped into a large, monolithic codebase or split up into smaller individual services and swim lanes. You must also keep in mind that splitting too aggressively can be overly costly and have little return for the effort involved. Companies with little to no growth will be better served focusing their resources on developing a marketable product than on fine-tuning their service sizes using the considerations below.


Developer Throughput:

Frequency of Change – Services with a high rate of change in a monolithic codebase cause competition for code resources and can create a number of time-to-market-impacting conflicts between teams, including product merge conflicts. Such high-change services should be split off into small, granular services and ideally placed in their own fault-isolative swim lane so that the frequent updates don’t impact other services. Services with low rates of change can be grouped together, as there is little value created from disaggregation and a lower risk of being impacted by updates.

The diagram below illustrates the relationship we recommend between functionality, frequency of updates, and relative percentage of the codebase. Your high-risk, business-critical services should reside in the upper right portion, being frequently updated by small, dedicated teams. The lower-risk functions that rarely change can be grouped together into larger, monolithic services, as shown in the bottom left.


Degree of Reuse – If libraries or services have a high level of reuse throughout the product, consider separating and maintaining them apart from code that is specialized for individual features or services. A service in this regard may be something that is linked at compile time, deployed as a shared dynamically loadable library, or operated as an independent runtime service.

Team Size – Small, dedicated teams can handle micro services with limited functionality and high rates of change, or large functionality (monolithic solutions) with low rates of change. This will give them a better sense of ownership, increase specialization, and allow them to work autonomously. Team size also has an impact on whether a service should be split. The larger the team, the higher the coordination overhead inherent to the team and the greater the need to consider splitting the team to reduce codebase conflict. In this scenario, we are splitting the product up primarily based on reducing the size of the team in order to reduce product conflicts. Ideally splits would be made based on evaluating the availability increases they allow, the scalability they enable or how they decrease the time to market of development.

Specialized Skills – Some services may need development skills that are distinct from those of the remainder of the team. You may, for instance, need some portion of your product to run very fast, which in turn may require a compiled language and a great depth of knowledge in algorithms and asymptotic analysis. The engineers with those skills may have a completely different skillset from those working on the remainder of your codebase, which may be written in an interpreted language and mostly focused on user interaction and experience. In other cases, you may have code that requires deep domain experience in a very specific area like payments. Each of these is an example of a consideration that may indicate a need to split off a service and that may inform the size of that service.


Availability and Fault Tolerance Considerations:

Desired Reliability – If other functions can afford to be impacted when the service fails, then you may be fine grouping them together into a larger service. Indeed, sometimes certain functions should NOT work if another function fails (e.g. one should not be able to trade in an equity trading platform if the solution that understands how many equities are available to trade is not available). However, if you require each function to be available independent of the others, then split them into individual services.

Criticality to the Business – Determine how important the service is to business value creation while also taking into account the service’s visibility. One way to view this is to measure the cost of one hour of downtime against a day’s total revenue. If the business can’t afford for the service to fail, split it up until the impact is more acceptable.

Risk of Failure – Determine the different failure modes for the service (e.g. a billing service charging the wrong amount), the likelihood and severity of each failure mode occurring, and how likely you are to detect the failure should it happen. The higher the risk, the greater the segmentation should be. A rough scoring approach is sketched below.
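
To make the comparison concrete, here’s a small sketch (our illustration, borrowing the risk-priority-number idea from FMEA; the failure modes and scores are hypothetical) of scoring failure modes so the riskiest services are segmented first:

```python
def risk_priority(likelihood, severity, detectability):
    """FMEA-style risk priority number; each factor scored 1 (best) to 10 (worst).
    detectability=10 means the failure would most likely go unnoticed."""
    return likelihood * severity * detectability

# Hypothetical failure modes for a billing service.
failure_modes = {
    "charges wrong amount": risk_priority(likelihood=3, severity=9, detectability=7),
    "charges a card twice": risk_priority(likelihood=2, severity=8, detectability=4),
    "billing page is down": risk_priority(likelihood=4, severity=6, detectability=1),
}

for mode, rpn in sorted(failure_modes.items(), key=lambda kv: -kv[1]):
    print(f"{mode}: RPN = {rpn}")  # higher RPN, stronger case for segmentation
```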


Scalability Considerations:

Scalability of Data – A service may already be a small percentage of the codebase, but as the data the service needs to operate on scales up, it may make sense to split again.

Scalability of Services – What is the volume of usage relative to the rest of the services? For example, one service may need to support short bursts during peak hours while another has steady, gradual growth. If you separate them, you can address their needs independently without having to over engineer a solution to satisfy both.

Dependency on Other Service’s Data – If the dependency on another service’s data can’t be removed or handled with an asynchronous call, the benefits of disaggregating the service probably won’t outweigh the effort required to make the split.


Cost Considerations:

Effort to Split the Code – If the services are so tightly bound that it will take months to split them, you’ll have to decide whether the value created is worth the time spent. You’ll also need to take into account the effort required to develop the deployment scripts for the new service.

Shared Persistent Storage Tier – If you split off the new service, but it still relies on a shared database, you may not fully realize the benefits of disaggregation. Placing a read-only DB replica in the new service’s swim lane will increase performance and availability, but it can also raise the effort and cost required.

Network Configuration – Does the service need its own subdomain? Will you need to make changes to load balancer routing or firewall rules? Depending on the team’s expertise, some network changes require more effort than others. Ensure you consider these changes in the total cost of the split.



The illustration below can be used to quickly determine whether a service or function should be segmented into smaller microservices, be grouped together with similar or dependent services, or remain in a multifunctional, infrequently changing monolith.



The Downside of Stored Procedures

I started my engineering career as a developer at Sybase, the company producing a relational database that went head-to-head with Oracle. This was in the late ’80s, and Sybase could lay claim to a very fast RDBMS with innovations like one of the earliest client/server architectures, procedural language extensions to SQL in the form of Transact-SQL, and yes, stored procedures. Fast forward a career, and now, as an Associate Partner with AKF, I find myself encountering a number of companies that are unable to scale their data tier, primarily because of stored procedures. I guess it is true – you reap what you sow.

Why does the use of stored procedures (sprocs) inhibit scalability? To start with, a database filled with sprocs is burdened by the business logic contained in those sprocs. Rather than relatively straightforward CRUD statements, sprocs typically become the main application ‘tier’, containing hundreds of lines of SQL within one sproc. So, in addition to persisting and retrieving your precious data and maintaining referential integrity and transactional consistency, you are now asking your database to run the bulk of your application as well. Too many eggs in one basket.

And that basket will remain a single basket, as it is quite difficult to horizontally scale a database filled with sprocs. Of the 3 axes described in the AKF Scale Cube, the only one that readily offers horizontal scalability here is the Z axis, where you have divided or partitioned your data across multiple shards. The X axis, the classic horizontal approach, works only if you are able to replicate your database across multiple read-only replicas and can segregate read-only activity out of your application – difficult to do if many of those reads are coming from sprocs that also write. Additionally, most applications built on top of a sproc-heavy system have no DAL, no data access layer that might control or route read access to a different location than writes. The Y axis, dividing your architecture up into services, is also difficult in a sproc-heavy environment, as these environments are often extremely monolithic, making it very difficult to separate sprocs by service.
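
To illustrate what a DAL buys you, here’s a minimal sketch (ours, not from the original post; connection setup is omitted) of routing reads to a read-only replica and writes to the primary – the kind of seam that is nearly impossible to retrofit when the logic lives in sprocs:

```python
class DataAccessLayer:
    """Routes reads to a read-only replica and writes to the primary.

    primary and replica are assumed to be DB-API 2.0 connections
    (e.g. from psycopg2 or mysqlclient)."""

    def __init__(self, primary, replica):
        self.primary = primary
        self.replica = replica

    def query(self, sql, params=()):
        # Read-only traffic goes to the replica (the AKF X axis).
        cur = self.replica.cursor()
        cur.execute(sql, params)
        return cur.fetchall()

    def execute(self, sql, params=()):
        # Anything that mutates state goes to the primary.
        cur = self.primary.cursor()
        cur.execute(sql, params)
        self.primary.commit()
        return cur.rowcount
```

With business logic behind a seam like this, X-axis read scaling becomes a routing decision; with the same logic buried in sprocs, it becomes a rewrite.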

Your database hardware is typically the beefiest, most expensive box you will have. Running business logic on it, versus on the smaller commodity boxes typically used for an application tier, makes the cost of computation inordinately high. Also, many companies will deploy object caching (e.g. memcached) to offload their database. This works fine when the application is separate from the database, but when the reads are primarily done in sprocs? I’d love to see someone attempt to insert memcached into their database.
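
For contrast, here’s what that offload looks like when the logic lives in the application tier (a sketch assuming the pymemcache client and a DAL like the one above; the cache host, key scheme, and TTL are illustrative):

```python
from pymemcache.client.base import Client

cache = Client(("memcache.example.com", 11211))  # hypothetical cache host

def get_user(user_id, dal):
    """Read-through cache: serve from memcached, fall back to the database."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached.decode()                    # DB never sees this read
    row = str(dal.query("SELECT * FROM users WHERE id = %s", (user_id,)))
    cache.set(key, row, expire=300)               # offload repeat reads for 5 min
    return row
```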

Sprocs seem to be something of a gateway drug, where once you have tasted the flavor of application logic living in the database, the next step is to utilize abilities such as CLR (common language runtime). Sure, why not have the database invoke C# routines? Why not ask your database machine to also perform functions such as faxing? If Microsoft allows it, it must be good, right?

Many of the companies we visit that are suffering from stored procedure bloat started out building a simple application that was deployed in a turnkey fashion for a single customer with a small number of users. In that world, there are certainly benefits to using sprocs, but that world is a 20th-century one, no longer viable in today’s world of cloud and SaaS models.

Just say no to stored procedures!


Definition of MVP

We often use the term minimum viable product, or MVP, but do we all agree on what it means? In the Scrum spirit of the Definition of Done, I believe the Definition of MVP is worth stating explicitly within your tech team. A quick search revealed these three similar yet different definitions:

  • A minimum viable product (MVP) is a development technique in which a new product or website is developed with sufficient features to satisfy early adopters. The final, complete set of features is only designed and developed after considering feedback from the product’s initial users. Source: Techopedia
  • In product development, the minimum viable product (MVP) is the product with the highest return on investment versus risk…A minimum viable product has just those core features that allow the product to be deployed, and no more. Source: Wikipedia
  • When Eric Ries used the term for the first time he described it as: A Minimum Viable Product is that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort. Source: Leanstack

I personally like a combination of these definitions. I would choose something along the lines of:

A minimum viable product (MVP) has sufficient features to solve a problem that exists for customers and provides the greatest return on investment while reducing the risk of what we don’t know about our customers.

Just like no two teams implement Agile the same way, we don’t all have to agree on the definition of MVP but all your team members should agree. Otherwise, what is an MVP to one person is a full featured product to another. Take a few minutes to discuss with your cross-functional agile team and come to a decision on your Definition of MVP.


Guest Post – Effective CTOs

The following guest post comes from long-time friend and former colleague Michael “Skippy” Wilson. Michael was one of eBay’s first employees, and its first CTO. He was responsible first for taking “eBay 1.0” (Pierre Omidyar’s Perl scripts) and converting them into the modern site written in C with which most people are familiar. He and his architecture team subsequently redesigned the site with multiple service-oriented splits (circa 2000) to enable it to scale to what was then one of the largest transaction sites on the internet. There was no “playbook” back then for how to scale something as large as eBay. Michael was one of the early CTOs who had to earn his internet PhD from the school of hard knocks.

Effective CTOs

The excellent blog post This is How Effective CTOs Embrace Change talks about how Twilio CTO Evan Cooke views the evolution of a CTO through a company’s growth.

While this article does an excellent job of identifying the root causes and problems a CTO can run into – especially working with his fellow C-level colleagues – I think there is another problem it does not address.

Ironically, one of a CTO’s biggest challenges is actually technology itself and how the CTO manages the company’s relationship to it. These challenges manifest themselves in the following ways:

Keeping appropriate pace with technological change

When a company is young, it’s agile enough to adopt the latest technologies quickly; in fact many companies change technologies several times over as they grow.

Later, when (as the article says) the CTO starts saying “No” to their business partners, they may also react to the company’s need for stability and predictability by saying “No” too often to technological change. It only takes a good outage or release slippage, triggered by some new technology, for the newly minted CTO to go from being the “Yes” man to the “No” man, sacrificing agility at the altar of stability and predictability.

The same CTO who advocated changing languages, database back ends, and even operating systems and hardware platforms while the company was growing finds himself saying “No” to even the most innocuous of upgrades. While the initial result may be more stability and predictability, the end result is far from that: the company’s platform ossifies, becomes brittle and is eventually unable to adapt to the changing needs of the organization.

For example, years ago, before the domination of open-source Unix variants like OpenBSD and Linux, many companies “grew up” on proprietary systems like Solaris, Microsoft, and AIX, alongside database counterparts like Oracle and Sybase. While they were “growing up”, open-source systems matured, offering cost, technology, and even stability benefits over the proprietary solutions. But in many cases these companies couldn’t take advantage of this because of the perceived “instability” of these crazy “Open Source” operating systems. The end result was that competitors who could remain agile gained a competitive advantage because they could advance their technology.

Another (glaring) example was the advent of mobile. Most established companies completely failed to get on the mobile bandwagon soon enough and often ended up ceding their positions to younger and more agile competitors because they failed to recognize and keep up with the shift in technology.

The problem is a lot like Real Life ™. How do you continue to have the fun you had in your 20s and 30s later on, when things like career and family take center stage? Ironically, the answer is the same: Moderation and Separation.

Moderation means just what it says: use common-sense release and deployment planning to introduce change at a predictable – and recoverable – rate. Changing to a new database, a new hardware platform, or even a new back-end OS isn’t a big change at all, as long as you find a way to do it in an incremental and recoverable manner. In other words, you can get out of it before anyone notices something went wrong.

Separation means you build into the organization the ability to experiment and advance all the time. While your front-end production systems may advance at a mature pace, you still need to maintain Alpha and Beta systems where new technologies can be advanced, experimented with, and exposed to (willing) end users. By building this into your organization as a “first class” citizen, the CTO keeps the spirit of agility and technological advancement alive, while still meeting the needs of stability and predictability.

Making sure Technology keeps pace with the Organization

The best way to describe this problem is through example: Lots of today’s start-ups are built on what are called “NoSQL” database platforms. NoSQL databases have the advantages of being very performant, in some cases very scalable, and, depending on who you ask, very developer friendly. It’s very clear that lots of companies wouldn’t be where they are without the likes of MongoDB and Couchbase.

But, as many companies have found out, not all NoSQL solutions are created equal, and the solution selected in the company’s early years may not be appropriate as the company grows and its needs change.

For example, as the organization matures and parts of it start asking for reports, they may find that while their NoSQL solution worked well as an OLTP solution, it doesn’t work as well for OLAP or data warehousing needs. Or a NoSQL solution that worked well for warehousing data doesn’t work as well when you give your customers the ability to query it online, in an OLTP-like fashion.

This can occur when the CTO isn’t able to help guide the company’s technology forward fast enough to keep pace with the changing organizational needs. Unfortunately, if the problem reaches the “critical” stage because, for example, the organization can’t get reports, the solution may become a politically charged hack job instead of a well-planned solution.

Every organization – engineering, database administration, operations, etc. – will want to “solve” the problem as quickly as possible, so while the end solution may solve the immediate problem, it probably won’t be the one you’d choose if you’d planned ahead.

The solution here is for the CTO to continuously engage with the Company to understand and anticipate its upcoming technology needs, and engage with Engineering organizations to plan for them. It’s actually better for the CTO to arrange to engage the stakeholders and engineering organizations together rather than serially: it encourages the stakeholders to work together instead of through the CTO.

Making sure Technology keeps pace with Security requirements

Today’s engineers have an amazing community of Open Source Developers and Tools to draw on to get their work done.

And when a company is growing, tools like NPM, grunt, brew, and Hadoop are amazing. In many cases, they have enabled development to move exponentially more quickly than it could have otherwise.

Unfortunately, many of these tools are gigantic security risks. When an engineer types “grunt install” (a node-related command) or “brew update”, do they really know exactly what it is doing? Do they know where it’s getting the update? Do they know if the updates can be trusted?

They don’t. Let’s not even go into the horrors they’re inadvertently inviting behind your firewall.
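
One small, concrete mitigation (our illustration; the URL and hash are placeholders, not a real artifact) is to pin exactly what you download and verify it before trusting it, rather than letting a tool fetch whatever happens to be current:

```python
import hashlib
import urllib.request

# Hypothetical artifact and its published SHA-256, pinned in your repo.
URL = "https://example.com/tool-1.2.3.tar.gz"
EXPECTED_SHA256 = "d4735e3a265e16eee03f59718b9b5d03019c07d8b6c51f90da3a666eec13ab35"

data = urllib.request.urlopen(URL).read()
digest = hashlib.sha256(data).hexdigest()
if digest != EXPECTED_SHA256:
    raise RuntimeError(f"checksum mismatch: got {digest}")  # refuse to install
with open("tool-1.2.3.tar.gz", "wb") as f:
    f.write(data)  # written only after the artifact is verified
```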

The problem for the CTO is that they probably had a hand in introducing these tools and behaviors to the organization, and will now be faced with reining them in. “Fixing” them will probably make them harder for engineers to use, and feel like a hassle instead of a common-sense security measure.

The CTO can expect the same situation when charged with guiding the organization towards things like always-on HTTPS, better encryption-in-place of customer data, etc. Not only will they have to convince their engineering organizations it’s important, they may also have to convince the rest of the organization that these measures deserve a place in line along with other product features.

One solution to this is a backward-looking one: security should be part of the product and company’s culture from Day 1. Not just protecting the company’s assets from unauthorized access, but protecting the customers’ assets (data) just as vigilantly. A culture like that would recognize the two examples above and have either solved them already or be willing to solve them quickly and easily, without politics getting in the way.

If the CTO’s organization wasn’t built that way, their job becomes much harder, since they now need to change the company’s culture. Fortunately, recent events like the Sony hack and Snowden’s revelations make this much easier than it was 2 years, or even a year ago.


The effective CTO faces at least two challenges: becoming an effective part of the company’s executive team (not an “outside advisor”), and an effective part of the company’s technology and product teams. The CTO can’t forget that their job is to keep the company on the appropriate forefront of changing technology – far enough ahead to keep pace, but not so far ahead as to be “bleeding edge”. Almost universally, the way to do that is to make technological change part of the company’s culture – not a special or one-off event.

–Michael Wilson


Just for Fun – The Many Meanings of “I Don’t Disagree”

OK.  Who was the knucklehead who thought up the ridiculous corporatism of “I don’t disagree”?

Seriously.  What does this mouth turd even mean?

Does it mean that you simply don’t want to commit to a position?  Do you need more time to determine a position?  Is your ego too large to admit that someone else is right?  Do you really disagree but are too spineless to say it?  Were you not listening and needed to spout some statement to save face?

We owe it to each other to speak clearly, to engage in meaningful dialogue about important topics, and to convey our thoughts and positions.  When political correctness overrides fiduciary responsibility, we destroy stakeholder value.  Get rid of this piece of mouth trash from your repertoire.


DevOps – Not Good Enough

DevOps is a grassroots movement born from the passions of practitioners dissatisfied with the cultural and functional deficiencies endemic to functionally oriented (aka silo’d) organizations. The tools these practitioners employed and developed including Chef, Vagrant, Juju, Hudson and Logstash (to name a few) codified their collective feelings of how development and operations could work better together. These pioneers in effect duct taped together the development and operations teams – covering a chasm characterized by differences in culture, goals, leadership, philosophies, and toolsets. This technological equivalent of a “redneck” fix has been successful, but it is also time to see it for what it really is: the treatment for a symptom of the underlying outdated organizational concepts. For all that DevOps provides it cannot treat the many other symptoms of suboptimal organization construction. These include low levels of innovation, higher than necessary levels of conflict, and slower than necessary time-to-market. Why continue to duct tape together a poorly designed system? It’s time to replace the broken system and its temporary repair with one that works.


A functionally aligned technology group is a biped with a foot planted in each of two disjointed pasts, neither of which has a future in modern online products. One “foot” is planted in the world of corporate IT, where hosting platforms and enterprise architectures were defined to allow heterogeneous third-party systems to coexist in a single, cost-effective, and easy-to-manage environment. Because IT is typically considered a cost center, the primary driver year-on-year is to reduce cost on a per-employee basis. One way to do this is to standardize on infrastructure and require that third-party providers conform to corporate standards – shifting the burden of cost of integration to those third-party providers. Hence were born the traditional infrastructure and operations groups found within today’s technology organizations. These organizations and their solutions are awesome if you are integrating third-party solutions – they are the last nail in your coffin if you are developing a product.


The other “foot” is planted in the past of packaged or “on-premise” software providers. In this past (heavy emphasis on the past), the software developer was king. Job one was to produce software that customers would buy. Forget about running it – who cares – that’s someone else’s job. In this past there was no infrastructure team. We just eat requirements and poop software. Sales engineers and systems integrators are responsible for figuring out how to shoehorn generic software into a wide variety of customer environments. As archaic and dysfunctional as this approach sounds, it actually worked, because there was alignment between what the producing company created (software) and what the purchasing company bought (software). But it doesn’t work anymore because no one buys software. Get over it.


Today customers buy SERVICES, not software. The mindset shift necessary to be successful in this new world is that the raw materials comprising these services are equal parts software and hardware (infrastructure). If your team doesn’t combine these raw materials purposefully, the resulting service won’t be as good as it should be. Continuing to “produce” software and “host it” on infrastructure is a recipe for suboptimal results at best, and possibly a disaster.


It’s time to move on from both functional organizations (e.g. Operations, Infrastructure, Software, QA) and the DevOps solution that duct tapes them together to organizations that can more easily design, implement and deploy these services. Organize your teams into cross-functional, experientially diverse, autonomous teams that own distinct services within your product. Each team and service should be driven by business focused KPIs. These teams should be self-sufficient enough to develop, deploy, and support their service without having to rely on other teams.


We understand this leap isn’t easy, and we are here to help. To be successful you must evolve both your architecture and your organization such that the two are compatible. In so doing, our clients experience higher levels of innovation, faster time-to-market, and lower levels of conflict than those that adopt a DevOps only duct tape solution. We call this approach to organizing the “Agile Organization”.  To help our clients evolve to experientially diverse teams, we often introduce as a starting point a concept that we call “GrowthOps”.  GrowthOps differs from the traditional DevOps approach in that it starts with an understanding of the desired outcomes of a product in terms of business KPIs (key drivers of growth for instance).  Rather than trying to duct-tape organizations together, we start by aligning disparate organizational objectives and focusing on customer value creation.  We then add in all of the tools necessary to speed time to market and monitor core KPIs.  This unique approach establishes a firm foundation for a company to grow – be that growing from on-premise to SaaS, growing from collocation centers into the cloud, or growing because of customer demand – and allows for an easier subsequent transition to the Agile Organization.


We love duct tape as much as the next person, but it’s time organizations stopped using these types of solutions and got to the real root cause of the friction between development and operations. Only by embracing the true root causes and fixing the organizational issues can teams get out of their own way and grow effectively. Implementing GrowthOps as a first step and ultimately transitioning to the Agile Organization is a proven recipe for success.



Product Management Excellence

While not technical problem solvers, Product Managers arguably have the most difficult job in modern software development. Their core mission involves taking feedback from a diversity of sources (market trends, the business owner’s vision, the competitive landscape, customer complaints, feature usage statistics, engineering effort estimates) and somehow synthesizing all these inputs into a single coherent vision (roadmap) and prioritized task list (product backlog). If that weren’t challenging enough by itself, they’re also on the hook for articulating feature implementations (through user stories and daily discussions with engineers) and for continually providing forecasts of feature release dates. (Btw, if anyone needs to know AKF’s position on whether or not you need dedicated full-time PMs, read here.)

Given the difficulty of the task, it’s not surprising that many product owners fall short of excellence. In the worst cases, what was envisioned as a streamlined agile development process devolves into waterfall by another name. Product sees developers as “ticket-takers” who can’t seem to work hard enough or fast enough to satisfy the wants of the business. To prevent this sort of downward slide into mediocrity and to keep your product management practices on point, we’ve highlighted below some key ways to differentiate a Great Product Manager from a merely “Good” one.

Good Product Managers prioritize their backlog and communicate feature requirements through user stories and face-to-face discussions with engineers. Great Product Managers go beyond these core tasks and participate in Product Discovery and Product Validation. Product Discovery requires conducting market research to determine what the existing product might need to be (more) successful in the target market. This means watching market trends, tracking competitors, and keeping an overall sense of where the competitive landscape is headed. Product Validation is quantifying the results of a feature release, asking whether they met expectations, and learning how to make better predictions in the future. The very best PMs establish “success metrics” for each and every feature prior to implementation.

Good Product Managers interact daily with their agile team. Great Product Managers are part of the agile team. They’re not simply interacting on an “as needed” basis; they’re an integral part of the agile process. They attend daily stand-ups and retrospectives, offer ideas on how to change course, and communicate continually with engineers about tradeoffs. They don’t just write a user story, they discuss the story with developers and test engineers to ensure everyone has a common understanding of what’s being built. If the Agile Team fails to deliver on time — or worse yet, builds the wrong feature — they own a part of that failure. If the Agile Team delivers a great release, they share that success with their teammates.

Good Product Managers prioritize new features that generate the greatest value to the business. They understand technical debt and aim to pay it down over time. Great Product Managers do all this, but also recognize that product management isn’t just accretive. It’s not just about heaping feature after feature on your product until it’s saddled with bells and whistles. Great Product Managers put themselves in the end-user’s seat and choose to refactor, merge, or deprecate features when it makes sense to do so. Eliminating unused features helps to minimize technical debt. This feature selection process improves the maintainability of the code and gives end-users a clean interface that balances functionality and ease of use.

Good Product Managers avoid making mistakes. Great Product Managers recognize and retain a memory of mistakes and past failures. It’s all too easy to brush failed ideas under the carpet. Great Product Managers recognize this and choose to capitalize on their failures. They learn from mistakes and vow never to repeat them. They keep bad product ideas from being recycled and keep their teams focused on generating value.

Finally and most importantly:

Good Product Managers say “Yes” to ideas that create value to the business. Great Product Managers say “No” early and often to all but the most valuable ideas. You see, the problem most SaaS companies face isn’t a lack of ideas, but finding a way to identify the most promising ones and prioritizing them correctly. Keeping the team focused on delivering value requires the Product Manager to dish out a lot of ‘No’s to tasks that might steal from the team’s time.

It’s your engineering team’s job to build your product right, but your Product Manager is there to ensure that you build the right product. Building the Taj Mahal is no good if the customer really needed the Eiffel Tower! By no means is it an easy job, but by adopting these best practices your Product Managers can achieve excellence.

Steven Mason
Consultant, AKF Partners


The AKF Story

Tom Keeven came up with the idea for AKF Partners in December 2006. Each of the founding partners would spend a handful of days a month helping interesting companies with their technology challenges. It sounded like the perfect “lifestyle/retirement job”: help fun companies solve challenging problems while having lots of time on the side for personal projects. AKF was born in February 2007 with three founding partners. At the time, we had no engagement model and no unique or differentiating approach for the business. Essentially, our company violated everything we had learned in business school.

Our Defining Moment

One early client stands out as helping us to create our unique and differentiating approach as a growth consulting firm. This company was a post-A series internet startup that had recently run into some fairly serious product problems. The company was the darling of the internet, and their CEO was being talked about in nearly every magazine. The executive team proudly proclaimed that they had a new management approach – one that would change the way companies were managed forever. But what the company thought was novel, we felt was contributing to their problems; the lack of management discipline was allowing product problems to linger unsolved and causing significant issues for the users of their product.

An example that everyone can probably relate to, whether you’re a weekend warrior or a seasoned athlete, is a strained or torn muscle. If a doctor prescribes pain medication, the meds will help treat the symptom of the problem (the pain), but they will neither address the cause of the incident (improper form) nor treat the cause of the pain (tear or inflammation). This early client made it clear that they only wanted our pain medication – technical fixes to the problems they had encountered to date. While we were happy to give them the advice they wanted, we also felt obligated to address the causes of their pain. We told the client that if they wouldn’t listen to all of our advice, including how they should manage their team and which processes they should implement, we would leave and not charge them. The client could have the technical recommendations we had already made for free.

AKF’s Focus and Approach

From that moment on, AKF focused solely on client value creation. We believe that creating value for our clients will result in the best possible outcomes for our company. We’ve since told many clients that we won’t work with them if we believe they won’t take our advice. We aren’t simply drug peddlers or professional consultants whose primary goal is to sell more consulting. We provide pain relief in the form of architectural advice and injury avoidance through advice on organizations and processes.

Realizing that our approach of evaluating architecture, organizations and processes was unique within the industry, we wrote our first book, The Art of Scalability, to help clients who couldn’t directly afford onsite services. The book was an Amazon bestseller and now has over ten thousand copies sold. We subsequently wrote Scalability Rules, a companion to The Art of Scalability and a third book, The Power of Customer Misbehavior, which explores the attributes of successful viral products and the companies that build them. Together, these books help teach companies how to drive growth within a market, and service that growth within their products.

We successfully followed this approach of treating both the symptoms (architectural/technology pain) and causes (people, organizations, management and processes) through hundreds of clients until the Healthcare.gov debacle of 2014.

Putting Our Values to the Test

Upon launch, the healthcare.gov website supporting the Affordable Care Act simply could not keep up with user demand and repeatedly crashed. People attempting to sign up for government mandated insurance could not comply with the law. For many companies in our business, this would represent an incredible revenue opportunity. We saw it as an opportunity to help and continue our service to our nation.

The founders of AKF are neither registered Democrats nor huge proponents of the Affordable Care Act. That said, we do believe that expensive initiatives should fail or succeed based on their merits and not die as a result of easily avoided technical failures. When Jeff Zients, who was appointed by the President to “fix” the ACA, called on AKF asking for help, we agreed, as long as we were allowed to address both the technology symptoms and the organizational, management, and process causes. We suggested that we do all of these things on a pro-bono basis so that taxpayers could sign up for healthcare in compliance with the law.

True to our past experiences with other clients, the technology failures were a result of poor management practices, a lack of coordination processes, and a failure to quickly address the root causes of early technical symptoms. And true to the principles and values of our company, we worked to help create client value (in this case some return on tax payer investment). To us, it was unfathomable to charge a fee for what was already an over-priced solution.

The Past and the Future

The early, unnamed startup (which has since gone out of business) and the ACA example are extremes in our experience. Very few of our clients display the complete lack of management or absence of processes that these examples represent. For most of our clients, simple tweaks pay incredible dividends. Most clients are staffed by hard-working and focused people who suffer from the same tunnel vision we’ve personally experienced in our past jobs. Rising above the chaos of explosive growth is difficult. Having a partner can help force companies to make the time to consider their options. We approach every company as though we are extensions of that company’s team, helping to guide the team and leveraging their and our combined experience to design the most appropriate solution. Most importantly, we never take on a client without believing we can help them create more value than what they’ve spent on our services. In eight years AKF has worked with hundreds of clients and grown from three individuals working part-time to twelve full-time people, with still more to be hired in 2015. We continue to grow because we provide value to clients by identifying the true root causes of issues, not just quick technical patches.
