
Growth Blog

Scalability and Technology Consulting Advice for SaaS and Technology Companies

What's the difference between VMs & Containers?

May 29, 2019  |  Posted By: Robin McGlothin


Inefficiency and downtime have traditionally kept CTOs and IT decision makers up at night.  Now new challenges are emerging, driven by infrastructure inflexibility and vendor lock-in, that constrain technology organizations and make strategic decisions more complex than ever.  Both VMs and containers can help you get the most out of available hardware and software resources while reducing the risk of vendor lock-in.

Containers are the new kids on the block, but VMs have been, and continue to be, tremendously popular in data centers of all sizes.  Having said that, the first lesson to learn is that containers are not virtual machines.  When I was first introduced to containers, I thought of them as lightweight or trimmed-down virtual instances.  This comparison made sense, since most advertising material leaned on the idea that containers use less memory and start much faster than virtual machines – basically marketing containers as if they were VMs.  Everywhere I looked, Docker was comparing itself to VMs.  No wonder I was a bit confused when I started to dig into the benefits of, and differences between, the two.

As containers have evolved, they have brought abstraction capabilities that are now being broadly applied to make enterprise IT more flexible. Thanks to the rise of Docker containers, it’s now possible to more easily move workloads between different versions of Linux as well as orchestrate containers to create microservices.  Much like containers, a microservice is not a new idea either: the concept harkens back to service-oriented architecture (SOA). What is different is that microservices based on containers are more granular and simpler to manage. More on this topic in a blog post for another day!
If you’re looking for the best solution for running your own services in the cloud, you need to understand these virtualization technologies, how they compare to each other, and the best uses for each. Here’s our quick read.

VMs vs. Containers – What’s the real scoop?

One way to think of containers vs. VMs is that while VMs run several different operating systems on one server, container technology offers the opportunity to virtualize the operating system itself.


                                                               
Figure 1 – Virtual Machine.  Figure 2 – Container.

VMs help reduce expenses. Instead of running an application on a single dedicated server, a virtual machine enables one physical resource to do the job of many, so you do not have to buy, maintain and service several servers.  Because there is one host machine, you can efficiently manage all the virtual environments with a centralized tool: the hypervisor. The decision to use VMs is typically made by the DevOps/infrastructure team.  Containers help reduce expenses as well, and they are remarkably lightweight and fast to launch.  Because of their small size, you can quickly scale containers in and out, adding identical containers as needed.

Containers are excellent for Continuous Integration and Continuous Deployment (CI/CD) implementation. They foster collaborative development by letting developers distribute and merge images among themselves, so developers tend to favor containers over VMs.  Most importantly, if the two teams (DevOps and development) work together, the decision on which technology to apply (VMs or containers) can be made collaboratively, with the best overall benefit to the product, client and company.

What are VMs?

With VMs, multiple operating systems and their applications share hardware resources from a single host server, or from a pool of host servers. Each VM requires its own underlying OS, and the hardware is virtualized. A hypervisor, or virtual machine monitor, is the software, firmware, or hardware that creates and runs VMs. It sits between the hardware and the virtual machines and is necessary to virtualize the server.

IT departments, both large and small, have embraced virtual machines to lower costs and increase efficiencies.  However, VMs can consume a lot of system resources because each VM needs a full copy of an operating system AND a virtual copy of all the hardware that the OS needs to run.  This quickly adds up to a lot of RAM and CPU cycles. While this is still more economical than bare metal for many applications, for others it is overkill; and thus, containers enter the scene.
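
To make this concrete, below is a minimal sketch of creating a VM with KVM/libvirt on a Linux host using the virt-install CLI.  The VM name, sizes, and ISO path are illustrative; note that the guest gets its own complete operating system.

    # Create a small Ubuntu guest under KVM/libvirt (name, sizes, ISO path illustrative)
    virt-install \
      --name demo-vm \
      --memory 2048 \
      --vcpus 2 \
      --disk size=10 \
      --cdrom /var/lib/libvirt/images/ubuntu-18.04.iso \
      --os-variant ubuntu18.04

    # Ask the hypervisor for all the guests it manages
    virsh list --all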

Benefits of VMs
• Reduced hardware costs from server virtualization
• Multiple OS environments can exist simultaneously on the same machine, isolated from each other.
• Easy maintenance, application provisioning, availability and convenient recovery.
• Perhaps the greatest benefit of server virtualization is the capability to move a virtual machine from one server to another quickly and safely. Backing up critical data is done quickly and effectively because you can effortlessly create a replication site.
Popular VM Providers
• VMware vSphere/ESXi – VMware has been active in the virtualization space since 1998 and is an industry leader, setting standards for reliability, performance, and support.
• Oracle VM VirtualBox - Not sure what operating systems you are likely to use? Then VirtualBox is a good choice because it supports an amazingly wide selection of host and client combinations. VirtualBox is powerful, comes with terrific features and, best of all, it’s free.
• Xen - Xen is an open source bare-metal hypervisor; support for running Xen hosts and guests is built into the Linux kernel, so it is available in most Linux distributions. The Xen Project is one of the many open source projects managed by the Linux Foundation.
• Hyper-V - Hyper-V is Microsoft’s virtualization platform, or hypervisor, which enables administrators to make better use of their hardware by running multiple virtualized operating systems on the same physical server simultaneously.
• KVM - Kernel-based Virtual Machine (KVM) is an open source virtualization technology built into Linux. Specifically, KVM lets you turn Linux into a hypervisor that allows a host machine to run multiple, isolated virtual environments called guests or virtual machines (VMs).

What are Containers?

Containers are a way to wrap up an application into its own isolated “box”. The application in its container has no knowledge of any other applications or processes that exist outside of its box. Everything the application depends on to run successfully also lives inside this container. Wherever the box may move, the application will always be satisfied because it is bundled up with everything it needs to run.

Containers virtualize the OS instead of virtualizing the underlying hardware the way a virtual machine does.  They sit on top of a physical server and its host OS, typically Linux or Windows. Each container shares the host OS kernel and, usually, the binaries and libraries, too. Shared components are read-only. Sharing OS resources such as libraries significantly reduces the need to reproduce operating system code and means a server can run multiple workloads with a single operating system installation. Containers are thus exceptionally light: they are only megabytes in size and take just seconds to start. VMs, by comparison, take minutes to boot and are an order of magnitude larger than an equivalent container.

In contrast to VMs, all that a container requires is enough of an operating system, supporting programs and libraries, and system resources to run a specific program. In practice this means you can often put two to three times as many applications on a single server with containers as you can with VMs. In addition, with containers you can create a portable, consistent operating environment for development, testing, and deployment, and keeping those environments consistent is a huge benefit.
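
As an illustration, here is a minimal Dockerfile sketch that bundles a hypothetical Python application with exactly the dependencies it needs and nothing more (file names and base image are illustrative):

    # Small base image: just enough OS userland to run the app
    FROM python:3.7-slim
    WORKDIR /app

    # Everything the application depends on lives inside the container
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY app.py .

    # One container, one process
    CMD ["python", "app.py"]

Building the image (docker build -t myapp .) and running it (docker run -d myapp) produces the same environment on a laptop, a test server, or production.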

Containers isolate processes using operating system namespaces and storage isolation.  Leveraging native operating system capabilities, a container isolates process space, can create temporary file systems, can relocate a process’s “root” file system, and so on.
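
These primitives are visible directly on a Linux host. As a rough sketch, the unshare utility (from util-linux) starts a shell in its own PID namespace, and chroot relocates a process’s root file system (the rootfs path below is hypothetical):

    # Shell in new PID and mount namespaces: it sees itself as PID 1
    # and cannot see the host's other processes
    sudo unshare --pid --fork --mount-proc bash
    ps aux    # run inside the namespace: only a handful of processes listed

    # The classic way to relocate a process's root file system
    sudo chroot /srv/minimal-rootfs /bin/sh

Container runtimes such as Docker combine these same kernel features (namespaces, cgroups, union file systems) into a convenient package.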

Benefits of Containers

One of the biggest advantages of containers is that you can set aside fewer resources per container than you might per virtual machine. Keep in mind, a container essentially hosts a single application, while a virtual machine needs resources to run an entire operating system. For example, if you need to run multiple instances of MySQL, NGINX, or other services, using containers makes a lot of sense (see the sketch below). If, however, you need a full web server (LAMP) stack running on its own server, there is a lot to be said for running a virtual machine: a VM gives you greater flexibility to choose your operating system and to upgrade it as you see fit. A container, by contrast, is isolated from OS upgrades on the host.
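
A hedged sketch of that multiple-NGINX scenario, running three identical containers side by side on one Docker host (container names and ports are illustrative):

    # Three identical NGINX instances sharing one host kernel
    docker run -d --name web1 -p 8081:80 nginx
    docker run -d --name web2 -p 8082:80 nginx
    docker run -d --name web3 -p 8083:80 nginx

    # Each starts in seconds and consumes only what NGINX itself needs
    docker stats --no-stream web1 web2 web3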

Popular Container Providers

1. Docker - Nearly synonymous with containerization, Docker is the name of both the world’s leading containerization platform and the company that is the primary sponsor of the Docker open source project.
2. Kubernetes - Google’s most significant contribution to the containerization trend is the open source containerization orchestration platform it created.
3. Microsoft Azure - Although much of the early work on containers was done on the Linux platform, Microsoft has fully embraced Docker, Kubernetes, and containerization in general.  Azure offers two container orchestrators: Azure Kubernetes Service (AKS) and Azure Service Fabric.  Service Fabric represents the next-generation platform for building and managing enterprise-class, tier-1 applications running in containers.
4. Amazon Web Services (AWS) - Of course, Microsoft and Google aren’t the only vendors offering a cloud-based container service. AWS has its own EC2 Container Service (ECS).
5. IBM - Like the other major public cloud vendors, IBM Bluemix also offers a Docker-based container service.
6. Red Hat - One of the early proponents of container technology, Red Hat claims to be “the second largest contributor to the Docker and Kubernetes codebases,” and it is also part of the Open Container Initiative and the Cloud Native Computing Foundation. Its flagship container product is its OpenShift platform as a service (PaaS), which is based on Docker and Kubernetes.

Uses for VMs vs Uses for Containers

Both containers and VMs have benefits and drawbacks, and the ultimate decision will depend on your specific needs, but there are some general rules of thumb.
• VMs are a better choice for running applications that require all of the operating system’s resources and functionality, when you need to run multiple applications on the same servers, or when you have a wide variety of operating systems to manage.
• Containers are a better choice when your biggest priority is maximizing the number of applications running on a minimal number of servers.

Container orchestrators

Because of their small size and application orientation, containers are well suited for agile delivery environments and microservice-based architectures. When you use containers and microservices, however, you can easily have hundreds or thousands of components in your environment. You may be able to manually manage a few dozen virtual machines or physical servers, but there is no way you can manage a production-scale container environment without automation. The task of automating and managing a large number of containers and how they interact is known as orchestration.
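
To give a feel for what orchestration automates, here is a minimal Kubernetes Deployment sketch; the orchestrator, not a human operator, is responsible for keeping three replicas of this container running (names and image are illustrative):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3            # Kubernetes keeps exactly 3 copies running
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:1.17   # illustrative image
            ports:
            - containerPort: 80

If a container crashes or a node dies, the orchestrator notices that the running state has drifted from the declared state and starts a replacement automatically.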

Scaling Workloads

Scaling containerized workloads is a completely different process from scaling VM workloads. Modern containers include only the basic services their functions require, though one of those can be a web server such as NGINX, which can also act as a load balancer. An orchestration system such as Google’s Kubernetes can determine, based upon traffic patterns, when the number of containers needs to scale out; it can replicate container instances automatically and remove them from the system when demand subsides. A minimal sketch of delegating that decision to Kubernetes follows.
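
Assuming a Deployment like the one sketched earlier, scale-out can be delegated to the orchestrator with one command (thresholds are illustrative):

    # Let Kubernetes add or remove replicas based on CPU utilization
    kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80

    # Watch the horizontal pod autoscaler react to traffic
    kubectl get hpa web --watch
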
For most, the ideal setup is likely to include both. With the current state of virtualization technology, the flexibility of VMs and the minimal resource requirements of containers work together to provide environments with maximum functionality.

If your organization is running many instances of the same operating system, then you should look into whether containers are a good fit. They just might save you significant time and money over VMs.


Why is launching an MVP so difficult?

April 29, 2019  |  Posted By: Robin McGlothin


Have you ever had that feeling of not knowing where to start?  For writers, it’s called writer’s block.  Painters call it blank-canvas syndrome.  Entrepreneurs and technologists refer to this phenomenon as analysis paralysis, an affliction experienced by all at one point or another.  It’s like having a stroke of genius for the next big idea, but not knowing where to start.

So let’s start by clearly defining the MVP:

A minimum viable product (MVP) is the version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least amount of effort.

Sounds simple … so what’s the issue?

MVP is one of the most misunderstood terms in our product jargon today.  We’ve heard from many a client that an MVP is really just a crappy version of a product that is an embarrassment to show customers.  Over and over, the dialog goes like this: “let’s just remove these features and call it the MVP version.”  Just last week, we heard, “just make it good enough to launch!”

But the purpose of the MVP is to LEARN about customer demand and usability before over-committing resources.  To make sure that we are only building what customers want.  An MVP is NOT a fully usable product that will delight customers.  It is simply a learning vehicle.

So, what’s the problem?

 
In The Power of Customer Mis-Behavior, Marty Abbott talks about the need to stay competitive: firms need to build great products, but they also need to lend those products to the uses and misuses of their customers and learn extensively from them.  He’s basically telling us to focus on discovery and on learning what customers really need, not what they say they want.

The point of an MVP is to validate or invalidate a specific hypothesis.  This is why we recommend starting discovery as soon as possible and relying heavily on user testing of prototypes.  But for some reason, most people hear the word Product and assume that it means the first version of a product.  And so, they build that version, release it and guess what…no one likes it. Well…no Duh!

But where to start?

 
Our advice is this: Don’t wait for the perfect product. Create an MVP and start discovery immediately.

Discovery Happens Along Two Dimensions:

  1. Discovery of “what” something should do – product discovery that defines or expands an MVP, discovering what feature set (stories) it needs to be successful.
  2. Discovery of “how” something should work to accomplish the best outcome of the “what” – a hybrid of technical and product discovery meant to find the fastest or best path to a result.

Seems straightforward, but many clients have trouble keeping it simple.

One common way companies have overbuilt MVPs is by applying an old prioritization technique used for requirements. Each requirement is tagged one of the following:

  • Must-have
  • Should-have
  • Could-have
  • Would like to have/won’t-have

“Must-haves” are the essentials, “should-haves” are really important, “could-haves” might be sacrificed, and “would-likes” probably aren’t going to happen.

The problem with this is that at least 60% of any requirements list gets classified as “musts.”  Stakeholders insist their requests are “musts” and fight like wild dogs to avoid “could” or “would” status.

A vicious cycle is created as stakeholders realize that nothing except “musts” will get done.  And once more than 60% of requirements are classified as “musts,” even some of the “musts” won’t get done. In some organizations, a stakeholder stampede is triggered every time someone says “MVP,” leading to a bloated first release, unless someone steps in to put stricter limitations on the requirements.

At the same time, other MVP creation pitfalls we commonly see and warn our clients about include:

  1. Making a poor product. The word “minimum” in MVP does not mean bad, buggy or barely usable. “Minimum” means that the scope should be stripped of anything extra, but whatever features remain should be implemented in an intuitive and user-friendly way. The product should be scoped to what the customer is actually likely to buy.
  2. Building a product to sell. Change the sales approach for future customers and sell on risk shift, not features.  Move to discovery-driven rather than sales-led, contract-based product development.
  3. Difficulty in defining the minimum. Often, you want your first product to be as beautiful as it can be, and you are reluctant to throw away all the nice features you’ve thought of. As a result, you spend too much time and money and, even more damaging, lose focus on the core features. The rule of thumb when defining the scope of your MVP is “can we launch without this or not?”  Make that your main criterion, and add all the bells and whistles later, once the idea is validated.

Our recommended approach to avoid these pitfalls and launch a successful MVP is based on market-driven analysis, with a minimum set of features identified for the go-to-market strategy.

The need for speed

Speed is everything when testing an MVP.  Many clients resist launching until a product is “perfect,” but here’s the news flash – it will never be perfect, and holding out for that status could ruin your product going forward.

According to Openview, 50% of SaaS companies fail in their first year due to misunderstanding their market, while 95% close up shop within five years for the same reason.  But strong, early MVP assessments allow you to determine whether you’re onto something (or not) in a low-risk environment.

When it comes to launching an MVP, progress is better than perfection.  The only goal is to put together a scaled-down version of your product or service and see whether clients are willing to buy in.

If your company is struggling to get its MVP to market, AKF Partners can help you implement a product strategy consistent with your innovation needs.



The AKF Difference

December 4, 2018  |  Posted By: Marty Abbott


During the last 12 years, many prospective clients have asked us some variation of the following questions: “What makes you different?”, “Why should we consider hiring you?”, or “How are you differentiated as a firm?”.

The answer has many components.  Sometimes our answers are clear indications that we are NOT the right firm for you.  Here are the reasons you should, or should not, hire AKF Partners:

Operators and Executives – Not Consultants

Most technology consulting firms are largely comprised of employees who have only been consultants or have only run consulting companies.  We’ve been in your shoes as engineers, managers and executives.  We make decisions and provide advice based on the practical experience of living with the decisions we’ve made in the past.

Engineers – Not Technicians

Educational institutions haven’t graduated enough engineers to keep up with demand within the United States for at least forty years.  To make up for the delta between supply and demand, technical training services have sprung up throughout the US to teach people technical skills in a handful of weeks or months.  These technicians understand how to put building blocks together, but they are not especially skilled in architecting solutions that are highly available, low latency, and inexpensive to develop and operate.

The largest technology consulting companies are built around programs that hire employees with non-technical college degrees.  These companies then teach these employees internally using “boot camps” – creating their own technicians.

Our company is comprised almost entirely of “engineers”; employees with highly technical backgrounds who understand both how and why the “building blocks” work as well as how to put those blocks together.

Product – Not “IT”

Most technology consulting firms are comprised of consultants who have a deep understanding of employee-facing “Information Technology” solutions.  These companies are great at helping you implement packaged software solutions or SaaS solutions such as Enterprise Resource Management systems, Customer Relationship Management Systems and the like.  Put bluntly, these companies help you with solutions that you see as a cost center in your business.  While we’ve helped some partners who refuse to use anyone else with these systems, it’s not our focus and not where we consider ourselves to be differentiated.

Very few firms have experience building complex product (revenue generating) services and platforms online.  Products (not IT) represent nearly all of AKF’s work and most of AKF’s collective experience as engineers, managers and executives within companies.  If you want back-office IT consulting help focused on employee productivity there are likely better firms with which you can work.  If you are building a product, you do not want to hire the firms that specialize in back office IT work.

Business First – Not Technology First

Products only exist to further the needs of customers and, through that relationship, further the needs of the business.  We take a business-first approach in all our engagements, seeking to answer these questions:  Can we find a way to build it faster, better, or cheaper?  Can we find ways to make it respond to customers faster, be more highly available or be more scalable?  We are technology agnostic and believe that of the several “right” solutions for a company, a small handful will emerge displaying comparatively low cost, fast time to market, appropriate availability, scalability, appropriate quality, and low cost of operations.

Cure the Disease – Don’t Just Treat the Symptoms

Most consulting firms will gladly help you with your technology needs but stop short of solving the underlying causes creating those needs:  the skills, focus, processes, or organizational construction of your product team.  The reason for this is obvious: most consulting companies are betting that if the causes aren’t fixed, you will need them back again in the future.

At AKF Partners, we approach things differently.  We believe that we have failed if we haven’t helped you solve the reasons why you called us in the first place.  To that end, we try to find the source of any problem you may have.  Whether that be missing skillsets, the need for additional leadership, organization related work impediments, or processes that stand in the way of your success – we will bring these causes to your attention in a clear and concise manner.  Moreover, we will help you understand how to fix them.  If necessary, we will stay until they are fixed.

We recognize that in taking the above approach, you may not need us back.  Our hope is that you will instead refer us to other clients in the future.

Are We “Right” for You?

That’s a question for you, not for us, to answer.  We don’t employ salespeople who help “close deals” or “shape demand”.  We won’t pressure you into making a decision or hound you with multiple calls.  We want to work with clients who “want” us to partner with them – partners with whom we can join forces to create an even better product solution.

 


The Importance of QA

November 20, 2018  |  Posted By: Robin McGlothin

“Quality in a service or product is not what you put into it.  It’s what the customer gets out of it.”  Peter Drucker



High levels of quality are essential to achieving company business objectives. Quality can be a competitive advantage and in many cases will be table stakes for success. High quality is not just an added value, it is an essential basic requirement. With high market competition, quality has become the market differentiator for almost all products and services.

There are many methods followed by organizations to achieve and maintain the required level of quality. So, let’s review how world-class product organizations make the most out of their QA roles. But first, let’s define QA. 

According to Wikipedia, quality assurance is “a way of preventing mistakes or defects in products and avoiding problems when delivering solutions or services to customers.”  But there’s much more to quality assurance.

There are numerous benefits of having a QA team in place:

  1. Helps increase productivity while decreasing costs (QA headcount typically costs less)
  2. Effective for saving costs by detecting and fixing issues and flaws before they reach the client
  3. Shifts focus from detecting issues to issue prevention

Teams and organizations looking to get serious about (or to further improve) their software testing efforts can learn something from looking at how the industry leaders organize their testing and quality assurance activities. It stands to reason that companies such as Google, Microsoft, and Amazon would not be as successful as they are without paying proper attention to the quality of the products they’re releasing into the world.  Taking a look at these software giants reveals that there is no one single recipe for success. Here is how five of the world’s best-known product companies organize their QA and what we can learn from them.

Google: Searching for best practices

How does the company responsible for the world’s most widely used search engine organize its testing efforts? It depends on the product. The team responsible for the Google search engine, for example, maintains a large and rigorous testing framework. Since search is Google’s core business, the team wants to make sure that it keeps delivering the highest possible quality, and that it doesn’t screw it up.

To that end, Google employs a four-stage testing process for changes to the search engine, consisting of:

  1. Testing by dedicated, internal testers (Google employees)
  2. Further testing on a crowdtesting platform
  3. “Dogfooding,” which involves having Google employees use the product in their daily work
  4. Beta testing, which involves releasing the product to a small group of Google product end users

Even though this seems like a solid testing process, there is room for improvement, if only because communication between the different stages and the people responsible for them is suboptimal (leading to things being tested either twice over or not at all).

But the teams responsible for Google products that are further away from the company’s core business employ a much less strict QA process. In some cases, the only testing is done by the developer responsible for a specific product, with no dedicated testers providing a safety net.

In any case, Google takes testing very seriously. In fact, testers’ and developers’ salaries are equal, something you don’t see very often in the industry.

Facebook: Developer-driven testing

Facebook does not employ any dedicated testers at all. Instead, the social media giant relies on its developers to test their own (as well as one another’s) work. Facebook employs a wide variety of automated testing solutions. The tools that are used range from PHPUnit for back-end unit testing to Jest (a JavaScript test tool developed internally at Facebook) to Watir for end-to-end testing efforts.

Like Google, Facebook uses dogfooding to make sure its software is usable. Furthermore, it is somewhat notorious for shaming developers who mess things up (breaking a build or causing the site to go down by accident, for example) by posting a picture of the culprit wearing a clown nose on an internal Facebook group. No one wants to be seen on the wall-of-shame!

Facebook recognizes that there are significant flaws in its testing process, but rather than going to great lengths to improve, it simply accepts the flaws, since, as they say, “social media is nonessential.” Also, focusing less on testing means that more resources are available to focus on other, more valuable things.

Rather than testing its software through and through, Facebook tends to use “canary” releases and an incremental rollout strategy to test fixes, updates, and new features in production. For example, a new feature might first be made available only to a small percentage of the total number of users.

Figure: Canary (incremental) rollout of features

By tracking the usage of the feature and the feedback received, the company decides either to increase the rollout or to disable the feature, either improving it or discarding it altogether.
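
A common way to implement this kind of percentage rollout is deterministic bucketing on a user identifier. Here is a minimal sketch in Python; the function and feature names are hypothetical, not Facebook’s actual implementation:

    import hashlib

    def in_rollout(user_id: str, feature: str, percent: float) -> bool:
        """Deterministically place a user in the first `percent` of rollout buckets."""
        digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
        bucket = int(digest[:8], 16) / 0xFFFFFFFF  # stable value in [0, 1]
        return bucket < percent / 100.0

    # Serve the new feature to ~5% of users; raise the percentage as confidence grows
    print(in_rollout("user-42", "new-news-feed", 5.0))

Because the bucketing is deterministic, a given user consistently sees the same variant while the rollout percentage ramps up.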


Amazon: Deployment comes first

Like Facebook, Amazon does not have a large QA infrastructure in place. It has even been suggested (at least in the past) that Amazon does not value the QA profession. Its ratio of about one test engineer to every seven developers also suggests that testing is not considered an essential activity at Amazon.

The company itself, though, takes a different view of this. To Amazon, the ratio of testers to developers is an output variable, not an input variable. In other words, as soon as it notices that revenue is decreasing or customers are moving away due to anomalies on the website, Amazon increases its testing efforts.

The feeling at Amazon is that its development and deployment processes are so mature (the company famously deploys software every 11.6 seconds!) that there is no need for elaborate and extensive testing efforts. It is all about making software easy to deploy, and, equally if not more important, easy to roll back in case of a failure.

Spotify: Squads, tribes and chapters

Spotify does employ dedicated testers. They are part of cross-functional teams, each with a specific mission. At Spotify, employees are organized according to what’s become known as the Spotify model, constructed of:

  1. Squads. A squad is basically the Spotify take on a Scrum team, with less focus on practices and more on principles. A Spotify dictum says, “Rules are a good start, but break them when needed.” Some squads might have one or more testers, and others might have no testers at all, depending on the mission.
  2. Tribes are groups of squads that belong together based on their business domain. Any tester that’s part of a squad automatically belongs to the overarching tribe of that squad.
  3. Chapters. Across different squads and tribes, Spotify also uses chapters to group people that have the same skillset, in order to promote learning and sharing experiences. For example, all testers from different squads are grouped together in a testing chapter.
  4. Guilds. Finally, there is the concept of a guild. A guild is a community of members with shared interests. These are a group of people across the organization who want to share knowledge, tools, code and practices.

Figure: Spotify team structure – squads, tribes, chapters, and guilds

Testing at Spotify is taken very seriously. Just like programming, testing is considered a creative process, and something that cannot be (fully) automated. Contrary to most other companies mentioned, Spotify heavily relies on dedicated testers that explore and evaluate the product, instead of trying to automate as much as possible.  One final fact: In order to minimize the efforts and costs associated with spinning up and maintaining test environments, Spotify does a lot of testing in its production environment.

Microsoft: Engineers and testers are one


Microsoft’s ratio of testers to developers is currently around 2:3, and like Google, Microsoft pays testers and developers equally—except they aren’t called testers; they’re software development engineers in test (or SDETs).

The high ratio of testers to developers at Microsoft is explained by the fact that a very large chunk of the company’s revenue comes from shippable products that are installed on client computers & desktops, rather than websites and online services. Since it’s much harder (or at least much more annoying) to update these products in case of bugs or new features, Microsoft invests a lot of time, effort, and money in making sure that the quality of its products is of a high standard before shipping.

What can you learn from world-class product organizations?  If the culture, views, and processes around testing and QA can vary so greatly at five of the biggest tech companies, then it may be true that there is no one right way of organizing testing efforts. All five have crafted their testing processes, choosing what fits best for them, and all five are highly successful. They must be doing something right, right?

Still, there are a few takeaways that can be derived from the stories above to apply to your testing strategy:

  1. There’s a “testing responsibility spectrum,” ranging from “We have dedicated testers that are primarily responsible for executing tests” to “Everybody is responsible for performing testing activities.” You should choose the one that best fits the skillset of your team.
  2. There is also a “testing importance spectrum,” ranging from “Nothing goes to production untested” to “We put everything in production, and then we test there, if at all.” Where your product and organization belong on this spectrum depends on the risks that will come with failure and how easy it is for you to roll back and fix problems when they emerge.
  3. Test automation has a significant presence in all five companies. The extent to which it is implemented differs, but all five employ tools to optimize their testing efforts. You probably should too.

Bottom line: QA is relevant and critical to the success of your product strategy. If you’ve tried to implement a new QA process but failed, we can help.

 

 


Are you compromised?

September 14, 2018  |  Posted By: Larry Steinberg

It’s important to acknowledge that a core competency for hackers is hiding their tracks and maintaining dormancy for long periods after they’ve infiltrated an environment. They could also be utilizing exploits you have not protected against.  So, given all of this, how do you know that you are not currently compromised?  Hackers are skilled hidden operators and have many ‘customers’ to prey on. They will focus on a customer or two at a time and then shut down activities to move on to another unsuspecting victim. It’s in their best interest to keep a low profile; you might not know that they are operating (or waiting) in your environment with access to your key resources.

Most international hackers are well organized, well educated, and have development skills that most engineering managers would admire if not for the malevolent subject matter. Rarely are these hacks performed by bots; most are carried out by humans setting up a chain of software elements across unsuspecting entities, enabling both inbound and outbound access.

What can you do? To start, don’t get complacent about your security. Even if you have never been compromised, or have been compromised and eradicated what you found, you’ll never know for sure whether you are compromised right now. As a practice, it’s best to assume that you are, and to keep looking for the evidence while identifying ways to keep intruders out. Hacking is dynamic and threats are constantly evolving.

There are standard catalogs of good security practices to follow, such as the NIST Cybersecurity Framework and the OWASP Top 10. Further, for your highest value environments, here are some questions you should consider: Would you know if these systems had configuration changes? Would you be aware of unexpected processes running? If you have interesting information in your operating or IT environment and the bad guys get in, it’s of no value to them unless they can get that information back out of the environment; so where is your traffic going? Can you model expected outbound traffic and monitor it? The answer should be yes. Then you can look for abnormalities and even correlate this traffic with other activities in your environment.
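
On a Linux host, even simple tooling can start building that outbound-traffic baseline. Below is a minimal sketch using ss (from the iproute2 package); file paths and dates are illustrative:

    # Snapshot established TCP connections: destination, port, owning process
    ss -tnp state established > /var/tmp/outbound-$(date +%F).txt

    # Diff today's snapshot against yesterday's to spot new destinations
    diff /var/tmp/outbound-2018-09-13.txt /var/tmp/outbound-2018-09-14.txt

Production-grade approaches do the same thing continuously, with flow logs or network monitoring feeding anomaly detection, but the principle is identical: know what “normal” outbound traffic looks like so the abnormal stands out.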

Just as you and your business are constantly evolving to service your customers and to attract new ones, the bad guys are evolving their practices too. Some of their approaches are rudimentary because we allow them to be; when we buckle down, they have to get more innovative. Ensure that you are constantly identifying all the entry points, and close them. Then remain diligent to new approaches they might take.

Don’t forget the most common attack vector - humans. Continue evolving your training and keep the awareness high within your staff - technical and non-technical alike.

Your default mental model should be that you don’t know what you don’t know. Utilize best practices for security and continue to evolve. Utilize external or build internal expertise in the security space and ensure that those skills are dynamic and expanding. Utilize recurring testing practices to identify vulnerabilities in your environment and to prepare against emerging attack patterns. 

We commonly help organizations identify and prioritize security concerns through technical due diligence assessments. Contact us today.


Open Source Software as a malware “on ramp”

August 21, 2018  |  Posted By: Larry Steinberg

Open Source Software (OSS) is an efficient means of building out solutions rapidly and with high quality.  You utilize crowdsourced design, development and validation to conveniently speed your engineering.  OSS also fosters a sense of sharing and building together - across company boundaries or in one’s free time.

So just pull a library down off the web, build your project, and your company is ready to go.  Or is that the best approach?  What could be in this library you’re including in your solution that might not be wanted?  This code will be running in critical environments - like your SaaS servers, internal systems, or on customer systems.  Convenience comes at a price, and there are some well-known cases of malicious code embedded in popular open source libraries.

What is the best approach to getting the benefits of OSS and maintaining the integrity of your solution? 

Good practices are a necessity to ensure a high level of security.  Just as you test OSS for functionality, scale, and load, you should validate it for vulnerabilities.  Pre-production vulnerability and penetration testing is a good start.  Also, utilize good internal processes and reviews.  Keep the process simple to maintain speed, but establish internal accountability and vigilance over code that’s entering your environment.  You already practice good coding techniques with code reviews and/or pair programming - build an equivalent practice for OSS.

Always utilize known good repositories and validate the project sponsors.  Perform diligence on the committers just as you would for your own employees.  You likely perform some type of background check on your employees before making an offer - whether going to a third party or simply looking them up on LinkedIn and asking around.  OSS committers present the same risk to your company - why not do the same for them?  Understandably, you probably wouldn’t do this for a third-party purchased solution, but there your contract or expectation is that the company is already doing this and abiding by minimum security standards.  That may not be true for your OSS solutions, and as such your responsibility for validation is at least slightly higher.  There are plenty of projects coming from reputable sources that you can rely on.

Ensure that your path to production only includes artifacts built from internal sources which were either developed or reviewed by your team.  Also, be intentional about OSS library upgrades; they should be planned and part of the process.
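
As one concrete, hedged example of “known artifacts only,” a Python shop can pin dependencies to exact versions and hashes (package and version here are illustrative):

    # Download and fingerprint a reviewed release
    pip download requests==2.20.0
    pip hash requests-2.20.0-py2.py3-none-any.whl
    # -> add the printed sha256 to requirements.txt:
    #    requests==2.20.0 --hash=sha256:<value from above>

    # From now on, installs fail if any artifact differs from what was reviewed
    pip install --require-hashes -r requirements.txt

Most package ecosystems have an equivalent (lock files with integrity hashes); the goal is the same: an upgrade becomes a deliberate, reviewable change rather than a silent one.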

OSS is highly leveraged in today’s software solutions and provides many benefits.  Be diligent in your approach to ensure you only see the upside of open source.

Need additional help?  Contact Us!


SaaS Risk and Value Shift

August 2, 2018  |  Posted By: Marty Abbott

The movement to SaaS specifically, and more broadly to “Anything” (X) as a Service (XaaS), is driven by demand side (buyer) forces.  In early cases within any industry, the buyer seeks competitive advantage over competitors.  The move to SaaS allows the buyer to focus on core competencies, increasing investments in the areas that create true differentiation.  Why spend payroll on an IT staff to support ERP solutions, mail solutions, CRM solutions, etc., when that same payroll could otherwise be spent on engineers to build product-differentiating features or enlarge a sales staff to generate more revenue?

As time moves on and as the technology adoption lifecycle advances, the remaining buyers for any product feel they have no choice; the talent and capabilities to run a compelling solution for the company simply do not exist.  As such, the late majority and laggard adopters are almost “forced” into renting a service over purchasing software.

Whether for competitive reasons, as in the case of early adopters through the early majority, or for lack of alternatives as in the case of late majority and laggards, the movement to SaaS and XaaS represents a shift in risk as compared to the existing purchased product options.  This shift in risk is very much like the shift that happens between purchasing and leasing a home.
 
Renting a home or an apartment is almost always more expensive than owning the same dwelling.  The reason for this should be clear: the person owning the property expects to make a profit beyond the costs of carrying a mortgage and performing upkeep on the property over the life of the owner’s investment.  There are special “inversion” cases where renting is less expensive, such as in a low rental demand market, but these cases tend to reset the market ownership prices (house prices fall) as rents no longer cover mortgages or ownership does not make sense.

Conversely, ownership is almost always less expensive than renting or leasing.  But owners take on more risk: the risk of maintenance activities; the risk of market prices; the risk and costs associated with remodeling to make the property attractive, etc. 

The matrix below helps put the shift described above into context.

Figure: Risk and cost shift inherent to SaaS transitions

A customer who “owns” an on-premise solution also “owns” a great deal of risk for all of the components necessary to achieve their desired outcomes: equipment, security, power, licenses, the “-ilities” (like availability), disaster recovery, release management, and monitoring of the solution.  The primary components of this risk include fluctuation in asset valuation, useful life of the asset, and most importantly – the risk that they do not have the right skills to maximize the value creation predicated on these components.

A customer who “rents” a SaaS solution transfers most of these risks to a provider who specializes in the solution and therefore should be better able to manage the risk and optimize outcomes.  In exchange, the customer typically pays a risk premium relative to ownership.  However, given that the provider can likely operate the solution more cost effectively, especially if it is a multi-tenant solution, the risk premium may be small.  Indeed, in extreme cases where the company can eliminate headcount, say after eliminating all on-premise solutions, the lessee may experience an overall reduction in cost.

But what about the provider of the service?  After all, the “old world” of simply slinging code and letting customers take all the risk was mighty appealing; the provider enjoyed low cost of goods sold (and high gross margins) and revenue streams associated with both licensing and customization.  In the new model, the provider expects to achieve higher revenue from the risk premium charged for services.  The provider also expects better overall margins through greater efficiencies in running solutions with significant demand concentration at scale.  The risk premium more than compensates the provider for the increased cost of goods sold relative to the on-premise business.  Overall, the provider assumes risk in exchange for greater value creation.  Both the customer and the provider win.

Architecture and product financial decisions are key to achieving the margins above. 

Figure: Cost models of various cloud implementations

Gross margins are directly correlated with the level of tenancy of any XaaS provider (Y axis).  As such, while we want to avoid “all tenancy” for availability reasons, we desire a high level of tenancy to maximize equipment utilization and increase gross margins.  Other drivers of gross margins include the level of demand upon shared components and the level of automation on all components – the latter driving down cost of labor.

The X axis of the chart above shows the operating expense associated with various business models.  Multi-tenant XaaS offerings collapse the number of “releases supported in the wild” – reducing the operating expense (and increasing gross margins) associated with managing a code base. 

Another way of viewing this is to look at the relative costs of software maintenance and administration costs for various business models.

Figure: Cloud cost models – operating expense and cost of goods sold

Plotted in the “low COGS” (X axis), “low maintenance” quadrant of the figure above is “True XaaS”.  Few versions of a release reduce our cost to maintain a code base, and high equipment utilization and automation reduce our cost to provision a service.

In the upper right and unattractive quadrant is the ASP (Application Service Provider) model, where we have less control over equipment utilization (it is typically provisioned for individual customers) and less control over defining the number of releases. 

Hosting a solution on-premise to the customer may reduce our maintenance fees, if we are successful in reducing releases, but significantly increases our costs.  This is different than the on-premise model (upper left) in which the customer bears the cost of equipment but for which we have a high number of releases to maintain.  The XaaS solution is clearly beneficial overall to maximize margins.

AKF Partners helps companies transition on-premise, licensed software products to SaaS and XaaS solutions.  Let us help you on your journey.


The Phases of SaaS Grief

July 24, 2018  |  Posted By: Marty Abbott

Over a decade of helping on-premise and licensed software companies through the transition to “Something as a Service” – whether that be Software (SaaS), Platform (PaaS), Infrastructure (IaaS), or Analytics (AaaS) – has given us a rather unique perspective on the various phases through which these companies transition.  Very often we find ourselves in the position of a counselor, helping them recognize their current phase and making recommendations to deal with the cultural and operational roadblocks inherent to that phase.

While rather macabre, the phases somewhat resemble those of grieving after the loss of a loved one.  The similarities make some sense here, as very often we work with companies who have had a very successful on-premise and/or licensed software business; they dominated their respective markets and “genetically” evolved to be the “alphas” in their respective areas.  The relationship is strong, and their past successes have been very compelling.  Why would we expect anything other than grieving?

But to continue to evolve, succeed, and survive in light of secular forces these companies must let their loved one (the past business) go and move on with a new and different life.  To be successful, this life will require new behaviors that are often diametrically opposed to those that made the company successful in their past life.

It’s important to note that, unlike the grieving process with a loved one, these phases need not all be completed.  The most successful companies, through pure willpower of the management team, power through each of these quickly and even bypass some of them to accelerate to the healing phase.  The most important thing to note here is that you absolutely can move quickly – but it requires decisive action on the part of the executive team.


Phase 1: Denial
This phase is characterized by the licensed/on-premise software provider completely denying a need to move to an “X” (something) as a Service (XaaS, e.g. SaaS, PaaS) model.

Commonly heard quotes inside the company:

  • “Our customers will never move to a SaaS model.”
  • “Our customers are concerned about security.  IaaS, SaaS and PaaS simply aren’t an option.”
  • “Our industry won’t move to a Cloud model – they are too concerned about ownership of their data.”
  • “To serve this market, a solution needs to be flexible and customizable.  Proprietary customer processes are a competitive advantage in this space – and our solution needs to map exactly to them.”

Reinforcing this misconceived belief is an executive team, a sales force, and a professional services team trapped in a prison of cognitive biases.  Hypothesis myopia and asymmetric attention (both forms of confirmation bias) lead to psychological myopia.  In our experience, companies with a predisposed bias will latch on to anything any customer says that supports the notion that XaaS just won’t work.  These companies discard any evidence, such as pesky little startups picking up small deals, as aberrant data points. 

The inherent lack of paranoia blinds the company to the smaller companies starting to nibble away at the portions of the market that the successful company’s products do not serve well.  Think Siebel in the early days of Salesforce.  The incumbent’s product is too expensive and too complex to adequately serve the smaller companies beneath it.  The cost of sales is simply too high, and the sales cycle too long, to address the needs of the companies adopting XaaS solutions.  In this phase, the company isn’t yet subject to the Innovator’s Dilemma, because its blinders will not let it see it.

Ignorance is bliss…for a while…

How to Avoid or Limit This Phase

Successful executive teams identify denial early and simply shut it down.  They establish a clear vision and timeline to move to the delivery of a product as a service.  As with any successful change initiative, the executive team creates, as a minimum:

  1. The compelling reason and need for change.  This visionary element describes the financial and operational benefits in clear terms that everyone can understand.  It is the “pulling” force necessary to motivate people through difficult times.
  2. A sense of fear for not making the change.  This fear becomes the “stick” to the compelling “carrot” above.  Often given the secular forces, this fear is quite simply the slow demise of the company.
  3. A “villain” or competitor.  As is the case in nearly any athletic competition, where people perform better when they have a competitor of equivalent caliber (vs say running against a clock), so do companies perform better when competing against someone else.
  4. A “no excuses” cultural element.  Everyone on the team is either committed to the result, or they get removed from the team.  There is no room for passive-aggressive behavior, or behaviors inconsistent with the desired outcome.  People clinging to the past simply prolong or doom the change initiative.  Fast success, and even survival, requires that everyone be committed.

Phase 2: Reluctant but Only Partial Acceptance
This phase typically starts when a new executive, or sometimes a new and prominent product manager, is brought into the company.  This person understands at least some of – and potentially all of – the demand side forces “pulling” XaaS across the curve of adoption, and notices the competition from below.  Many times, the Innovator’s Dilemma keeps the company from attempting to go after the lower level competitors. 

Commonly heard quotes inside the company:

  • “Those guys (referring to the upstarts) don’t understand the complexities of our primary market – the large enterprise.”
  • “There’s a place for those products in the SMB and SME space – but ‘real’ companies want the security of being on-premise.”
  • “Sure, there are some companies entertaining SaaS, but it represents a tiny portion of our existing market.”
  • “We are not diluting our margins by going down market.”

The company embarks upon taking all their on-premise solutions and hosting them, nearly exactly as implemented on-premise, as a “service”. 

Many of the company’s existing customers aren’t yet ready to migrate to XaaS, but discussions are happening inside customer companies to move several solutions off-premise including email, CRM and ERP.  These customers see the possibility of moving everything – they are just uncertain as to when.

How to Avoid or Limit This Phase

The answer for how to speed through or skip this phase is not significantly different than that of the prior phase.  Vision, fear of death, a compelling adversary, and a “no excuses” culture are all necessary components. 

Secular forces drive customers to seek a shift of risk.  This shift is analogous to why one would rent instead of owning a home.  The customer no longer wants the hassle of maintenance, nor are they truly qualified to perform that maintenance.  They are willing to accept some potential increase in costs as capex shifts to opex, to relieve themselves of the burden of specializing in an area for which they are ill-equipped.

Risk shift from on premise to SaaS products

If not performed during the Denial phase, now is the time to remove executives who display behaviors inconsistent with the new desired SaaS principles.


Phase 3: Pretending to Have an Answer
The pretending phase starts with the company implementing essentially an on-premise solution as a “SaaS” solution.  With small modifications, it is exactly what was shipped to customers before but presented online and with a recurring revenue subscription model.  We often like to call this the “ASP” or “Application Service Provider” model.  While the revenue model of the company shifts for a small portion of its revenue to recurring services fees, the solution itself has not changed much.  In fact, for older client-server products Citrix or the like is often deployed to allow the solution to be hosted.

The product soon displays clear faults including lower than desired availability, higher than necessary cost of operations, higher than expected operating expenses, and lower margins than competitors overall.  Often the company will successfully hide these “SaaS” results as a majority of their income from operations still come from on-premise solutions.

The company will often use nebulous terms like “Cloud” when describing the service offering to steer clear of direct comparisons with other “born SaaS” or “true SaaS” solutions.  Sometimes, in extreme cases, the company will lie to itself about what they’ve accomplished, and it will almost always direct the conversation to topics seen as differentiating in the on-premise world rather than address SaaS Principles.

Commonly heard quotes inside the company:

  • “Our ‘Cloud’ solution is world class – it’s the Mercedes of all solutions in the space with more features and functionality than any other solution.”
  • “The smaller guys don’t have a chance.  Look at how long it will take them to reach feature parity.  The major players in the industry simply won’t wait for that.”
  • “We are the Burger King of SaaS providers – you can still have it your way.  And we know you need it your way.”

Meanwhile, customers are starting to look at true SaaS solutions.  They tire of availability problems, response time issues, the customization necessary to get the solution to work in a suitable fashion and the lack of configurability.  The lead time to implementation is still too long.

Salespeople continue to sell the product the same way, promising whatever a customer wants in order to close a sale.  Engineers still develop the same way, using the same principles that made the company successful on-premise, ignorant of the principles necessary to be successful in the SaaS world. 

How to Avoid or Limit This Phase

It’s not completely a bad thing to launch, as a first step, a “hosted” version of a company’s licensed product.  But the company must understand internally that it is only an interim step.

In addition to the visionary and behavioral components of the previous phases, the company now must accept and plan for a smaller-functionality solution that will more likely be adopted by “innovator” and “early adopter” companies.  The concept of MVP relative to the Technology Adoption Lifecycle is important here.

Further, the company must aggressively weed out product, sales, and technology executives who lack the behaviors or skills to be successful, and seed the team with people who have “done this before”.  Sales teams must act like used car salespeople in that they can only sell “what is on the lot” that fits customer “need”, as compared to new car salespeople who can promise options and colors from the factory (“It will take more time”) that more precisely fit a customer’s “want”. 


Phase 4: Fear
The company loses its first major deal or two to a rival product that appears to be truly SaaS and abides by SaaS Principles.  It realizes that its ASP product simply isn’t going to cut it, and salespeople are finally “getting” that the solution they have won’t work.  The question is:  Is the company too late?  The answer depends on how long it took the company to get to this position.  A true SaaS solution is at the very least months away and potentially years away.  If the company moves directly to the “Fear” stage from the start and properly understands the concepts behind the TALC, it has a chance.

Commonly heard quotes inside the company:

  • “We’re screwed unless we get this thing re-architected in order to properly compete in the XaaS space.”
  • “Stop behaving like we used to – stop promising customizations.  That’s not who we are anymore.”
  • “The new product needs to do everything the old product did.” [Incorrect and likely to lead to failure]
  • “Think smaller, and faster.  Think some of today’s customers – not all of them – for the first release.  Understand MVP is relative to the TALC.” [Correct and will help drive success]

How to Avoid or Limit This Phase
This is the most easily avoided phase.  With proper planning and execution in prior phases, a company can completely skip the fear stage.

When companies find themselves here, it’s typically because they have not addressed the talents and approach of their sales, product, and engineering teams.  Sales behaviors must change to only sell what’s “on the car lot”.  Engineers must understand how to build the “-ilities” into products from day 1.  Product managers must switch to identifying market need, rather than fulfilling customer want.  Executives must now run an entirely different company.


Final Phase: Healing or Marginalization
Companies successful enough to achieve this phase do so only by launching a true XaaS product – one that abides by XaaS principles and is built and run by executives, managers and individual contributors who truly understand, or are completely wedded to learning, what it means to be successful in the XaaS world. 


Summary
The phases of grief are common among many of our customers.  But unlike grieving for a loved one, they are not necessary.  Quick progression, or better yet avoidance, of these phases can be accomplished by:

  1. Establishing a clear and compelling vision based on market and secular force analysis, and an understanding of the technology adoption lifecycle.  As with any vision, it should not only explain the reason for change, but the positive long-term financial impact of change.
  2. Ensuring that everyone understands the cost of failure, helping to instill some small level of fear that should help drive cultural change.
  3. Ensuring that a known villain or competitor exists against which the company is competing, helping to boost performance and the speed of transition.
  4. Aggressively addressing the cultural and behavioral changes necessary to be successful.  Anyone who is not committed and displaying the appropriate changes in behavior needs to be weeded from the garden.

This shift often results in a significant portion of the company departing – sometimes willingly and sometimes forcefully.  Some people don’t want to change and can find other companies (for a while) where their skills and behaviors are relevant.  Some people have the desire, but may not be capable of changing in the time necessary to be successful. 

Image Credit: Tim Gouw from Pexels


Factors Driving SaaS and XaaS Adoption

July 12, 2018  |  Posted By: Marty Abbott

XaaS (e.g. SaaS, PaaS) mass-market adoption is largely a demand-side “pull” of a product through the Technology Adoption Lifecycle (TALC).  The best-known description of this phenomenon is the book that made the TALC truly accessible to business executives in technology companies: Crossing the Chasm.

Technology Adoption Life Cycle Adopter Characteristics

An interesting side note here is that Crossing the Chasm was originally published in 1991.  While many people associate the TALC with Geoffrey Moore, its actual origin is Everett Rogers’ research on the diffusion of innovations.  Rogers’ initial research focused on the adoption of hybrid corn seed by rural farmers.  Rogers and his team expanded upon and generalized the research between 1957 and 1962, when he released his book Diffusion of Innovations.  While the 1962 book was a critical success, I find it interesting that it took nearly 30 years to bring the work to most technology business executives.

Several forces combine to incent companies within each phase of the TALC to consider, and ultimately adopt, XaaS solutions.  Progressing from left to right through the TALC, companies within each phase are progressively less aware of these forces and more resistant to them, until the forces become overwhelming for that segment or phase.  Innovators, therefore, are the most aware of emerging forces, seek to use them for competitive advantage and, as a result, are least resistant to them.  Laggards are least aware (think heads in the sand) and most resistant to change, preferring the status quo over differentiation. 

While the list below is incomplete, these forces occur with high frequency across industries and products.  Our experience is that even when they are encountered in isolation (without other demand-side XaaS forces), they are sufficient to pull new XaaS products across the TALC.


Demand Side Forces

Factors Driving SaaS Adoption

Talent Forces:  Opportunity, Supply and Demand

The first two forces concern the talent available to the traditional, internally focused corporate IT organization.  The rise of XaaS offerings, and the equity compensation opportunities inherent to them, has been a vacuum sucking away what little great talent exists within corporate IT.  This first force adds insult to injury given the second force: the low supply of talented engineers available in the US. 

Think about this:  per-capita (normalized for population growth) engineering graduation rates in the US have remained relatively constant since 1945.  While there has been absolute growth, until very recently that growth has slightly underperformed relative to the growth in population.  Contrast this with the fact that college graduation rates in 2012 were higher than high school graduation rates in 1940.  Add in that most engineering fields were at or near economic full employment during the Great Recession, and we have a clear indication that we simply do not produce enough engineers for our needs.  The same is true on a global level.

This constrained supply has led to alternative means of educating programmers, systems administrators, database administrators and data analysts.  “Boot camps”, or short courses meant to teach people basic skills to contribute to the product and IT workforces, have sprung up around the nation to help fulfill this need.  Once trained, these people fill many lower-level needs in the product and IT space, but they are not typically equivalent to engineers with undergraduate degrees.

Constrained supply, growing demand and a clear differentiation in employment for engineers working in product organizations compared to IT organizations means that IT groups simply aren’t going to get great talent.  Without great talent, there can be no great development or operations teams to create or run solutions.  As such, the talent forces alone mandate that many IT teams look to outsource as much as they can:  everything from infrastructure to the platforms that help make employees more efficient.

Core vs. Context

Why would a company spend time or capital on running third party solutions or developing bespoke solutions unless those activities result in competitive differentiation?  Doing so is a waste of precious time and focus.  Companies simply can’t waste time running email servers or building CRM solutions, and soon will find themselves not spending time on anything that doesn’t directly lend itself to competitive advantage.

Factors Driving SaaS Adoption

Margins

Put simply, anytime a company can’t maximize profit margins by keeping in-house systems highly utilized, it should (and will) seek to outsource the workload to an IaaS provider.  Why purchase a server to do work 8 hours a day and sit idle 16?  Why purchase a server capable of handling a peak period of 10 minutes of traffic, when a server one fifth its size is all that’s needed for the remainder of the day? 

Part of staying competitive is focusing on profit maximization to invest those profits in other, higher return opportunities.  Companies that don’t at least leverage Infrastructure as a Service to optimize their costs of operations and costs of goods sold (if doing so results in lowered COGS) are arguably violating their fiduciary responsibility.
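To make the utilization math concrete, here is a minimal sketch of the comparison.  Every price and hour count below is hypothetical, purely for illustration – not a vendor quote:

```python
# Illustrative utilization math only -- all prices here are hypothetical.

HOURS_PER_DAY = 24
BUSY_HOURS = 8                  # hours per day the workload actually runs
OWNED_COST_PER_HOUR = 1.00      # hourly cost of owning a server sized for peak
ON_DEMAND_COST_PER_HOUR = 1.25  # IaaS hourly premium, billed only while running

owned_daily = OWNED_COST_PER_HOUR * HOURS_PER_DAY       # paid 24x7, busy or idle
on_demand_daily = ON_DEMAND_COST_PER_HOUR * BUSY_HOURS  # paid only for busy hours

print(f"Owned, sized for peak:  ${owned_daily:.2f}/day at "
      f"{BUSY_HOURS / HOURS_PER_DAY:.0%} utilization")   # $24.00/day at 33%
print(f"On-demand, pay per use: ${on_demand_daily:.2f}/day")  # $10.00/day
```

Even paying a per-hour premium for on-demand capacity, the idle 16 hours dominate the economics – which is exactly the point about keeping in-house systems highly utilized.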

Risk

Risk is the probability that an event occurs, multiplied by the impact of that event should it occur.  Companies that specialize in providing a service, whether it be infrastructure as a service or software as a service typically have better skills and more experience in providing that service than other companies.  This means that they can both reduce the probability of occurrence and likely the impact of the occurrence should it happen.  Anytime such a risk shift is possible, where company A can shift its risk of operating component B to a company C specializing in managing that risk at an affordable price, it should be taken.  The XaaS movement provides many such opportunities.
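As a back-of-the-envelope sketch of this calculus – the probabilities, impacts, and fee below are invented for illustration:

```python
# Risk = probability of an event x impact of that event.
# All probabilities, impacts, and fees below are hypothetical.

def annualized_risk(probability_per_year: float, impact_dollars: float) -> float:
    """Expected annual loss from an adverse event."""
    return probability_per_year * impact_dollars

# Company A runs component B itself: higher odds of failure, bigger blast radius.
in_house = annualized_risk(0.10, 2_000_000)

# Company C specializes in operating B: lower probability and lower impact.
shifted = annualized_risk(0.02, 500_000)

fee = 100_000  # what company C charges per year for the service

print(f"In-house expected loss: ${in_house:,.0f}/yr")        # $200,000/yr
print(f"Shifted expected loss:  ${shifted:,.0f}/yr")         # $10,000/yr
print(f"Risk shift worth it?    {shifted + fee < in_house}") # True
```

Whenever the specialist’s fee plus the residual expected loss is below the in-house expected loss, the shift pays for itself.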

Time to Market

One of the most significant reasons why companies choose either SaaS or IaaS solutions is the time to market benefit that their usage provides.  In the SaaS world, it’s faster and easier to launch a solution that provides compelling business efficiencies than purchasing, implementing and running that system in-house.  In the IaaS world, infrastructure on demand is possible and companies need not wait for onerous lead times associated with ordering and provisioning equipment.

Factors Driving SaaS Adoption

Focus on Returns

Most projects, including packaged software purchased and implemented on site, are delivered late and fail to produce the returns on which the purchase was predicated.  Further, these projects are “hard to kill”: with millions invested and long depreciation cycles, companies are financially motivated to “ride” their investments to the bitter end in the hopes of minimizing the loss.  In the extreme, XaaS solutions offer a simple way to flee a sinking ship:  leave.  Because the solution is leased, you can (with some work) switch to a competing provider.  Capital is freed up in the company for higher-return investments associated with revenue rather than costs.

Factors Driving SaaS Adoption

Factor Outcomes

The outcome of these factors is clear.  In virtually every industry, and for nearly every product, companies will move from on-premise or built-in-house solutions to XaaS solutions.  CEOs, COOs, CTOs and CIOs will all seek to “Get this stuff out of my shop!” 

Companies that believe it will never happen in their industry are simply kidding themselves and ignoring the outcomes predicted by the technology adoption life-cycle.  Every company needs to focus on fewer, higher-value things.  Every company should outsource risky operational activities if there is a better manager of that risk available.  Every company is obligated to seek better margins and greater profitability – even companies in the not-for-profit sector need to maximize the benefit of every donated dollar.  Every company seeks opportunities to bring its value to market faster.  And finally, few companies can find the great talent in today’s market necessary to be successful in their corporate IT and product operations endeavors. 

If you produce on-premise, licensed software products and have not yet considered the movement to a SaaS solution, you need to do so now.  Failing to do so could mean the demise of your company.

We are experts in helping companies migrate products to the XaaS model.  Contact us - we would love to help you.


Agile and Dealing With The Cone of Uncertainty

July 8, 2018  |  Posted By: Dave Berardi

The Leap of Faith

When we embark on building a SaaS product that will delight customers, we are taking a leap of faith. We often don’t even know whether the outcomes targeted are possible. Investing in and building software is risky for several reasons:

  • We don’t know what the market wants.
  • The market is changing around us.
  • Competitors are always improving their time to market (TTM), releasing competitive products and services.

We have to assume that some project assumptions will be wrong, and that the underlying development technology we use to build products is constantly changing and evolving. One thing is clear on the SaaS journey – the future is always murky!

This uncertainty-plagued journey of developing SaaS is seen throughout the industry, evidenced by successes and failures at companies big and small – from Facebook to Apple to Salesforce to Google. Google is one of many innovative B2C companies that have used the cone of uncertainty to help inform how to go to market and whether or not to sunset a service. The company realizes that in addition to innovating, it needs to reduce uncertainty quickly.

For example, Google Notebook, a browser-based note-taking and information-sharing service, was killed and resurrected as part of Google Docs, with a mobile derivative called Keep. Google Buzz, Google’s first attempt at a social network, was killed in 2011 after a little over a year. These are just a few B2C examples from Google.  All are examples of investments that faced the cone of uncertainty. Predicting outcomes over the long term and locking in product specifics early is wasteful and risky.

The cone of uncertainty describes the uncertainty and risk that exist when an investment is made in a software project. The cone depicts how much risk, and how little precision, exist at its wide opening, and how both improve as we move through the funnel. The further out we try to forecast features, capabilities, and adoption, the more risk and uncertainty we must assume. This is true both for what we attempt to define as a product to be delivered and for the timing of when we will deliver it to market. Over time, firms must adjust the planned path along the way to capture and embrace changing market needs.

In today’s market we must quickly test our hypothesis and drive innovation to be competitive. An Agile product development life cycle (PDLC) and appropriately aligned organization helps us to do just that. To address the challenge the cone represents, we must understand what an Agile PDLC can do for the firm and what it cannot do for the firm.


Address the Uncertainty of the Cone

When we use an Agile approach, we fix time and cost for development and delivery of a product, but allow scope to adjust and change in order to meet fixed dates. Scope can be traded later in the project, but the committed delivery date does not change. We also do not add people, since Brooks’ Law teaches us that adding human resources to a late software project only delays it further.  Instead, we accelerate our ability to learn with frequent deployments to market, resulting in a reduction in uncertainty. Throughout this process, we discover both what the feature set needs to be for a successful outcome and how the solution should work.

Agile allows for frequent iterations that can keep us close to the market through data. After a deployment, if our system is designed to be monitored, we can capture rich information that helps inform future prioritization, new feature ideas and modifications needed to the existing feature set. Agile forces us to estimate frequently and, as such, produces valuable data for our business. The resulting velocity of our sprints can be used to revise future delivery range forecasts for both what will be delivered and when it will be delivered. Data will also be produced throughout our sprints that helps identify what may be slowing us down, ultimately impacting our time to market. Positive morale will be injected into the teams as results can be observed and felt in the short term.
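As a sketch of how sprint velocity might feed a delivery range forecast – the velocity figures and backlog size below are invented:

```python
# Hypothetical sprint history -- a simple range forecast from observed velocity.
import math

recent_velocities = [21, 34, 28, 25, 30]  # story points completed per sprint
remaining_backlog = 180                   # story points left toward the outcome

best, worst = max(recent_velocities), min(recent_velocities)

optimistic = math.ceil(remaining_backlog / best)    # 180 / 34 -> 6 sprints
pessimistic = math.ceil(remaining_backlog / worst)  # 180 / 21 -> 9 sprints

print(f"Forecast: {optimistic} to {pessimistic} sprints remaining")
# A range, revised every sprint as new velocity data arrives -- not a single date.
```

The point is not the specific arithmetic but that the forecast is a range that narrows sprint by sprint, consistent with the cone.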

What Agile is not, and how we must adjust

While using an Agile method can help address the cone of uncertainty, it’s not the answer to all challenges. Agile does not provide a specific date when a feature or scope will be delivered; instead, we work towards ranges. Nor does it improve TTM just because our teams started practicing it. Company philosophies, principles, and rules are not defined through an Agile PDLC. Those are up to the company to define; once defined, teams can operate within those boundaries to innovate.

Part of this boundary definition needs to start at the top. Executives need to paint a vivid picture of the desired outcome – one that stirs up emotion and is measurable. The vision sits at the opening of the cone. Measurable Key Results, defined by executives to achieve outcomes, allow teams to innovate and make tradeoffs as they progress towards the vision.

Agile alone does not empower teams or drive innovation. Outcomes and Key Results (OKRs) cascaded into the organization, coupled with an Agile PDLC, can be a great combination that empowers teams, giving us a better chance to innovate and achieve the desired time to market. Implementing an OKR framework helps remove the focus from cranking out code to hit a date and redirects attention to innovation and the tradeoffs needed to achieve the desired outcome.

Agile also does not align well with annual budget cycles. While an annual perspective is often required by shareholders, an Agile approach conflicts with annual budgeting. Because Agile responds to changing market demands, frequent budget iterations are needed, as teams may request additional funding to go after an opportunity. It’s key that finance leaders embrace adjusting the budgeting approach to align with an Agile PDLC. Otherwise, the conflict created can be destructive and become a barrier to the firm’s desired outcome.

Applying Agile properly benefits a firm by addressing the cone and reducing uncertainty, empowering teams to deliver on an outcome, and ultimately making the firm more competitive in the global marketplace. Agile is on the verge of becoming table stakes for companies that want to be world class. And, as the budgeting discussion above shows, it’s not just for software – it’s for the entire business.

Let Us Help

AKF has helped many companies of all sizes transition to an Agile organization, redefine their PDLC to align with desired speed-to-market outcomes, and migrate to SaaS. All three are closely tied and, if done right, can help firms compete more effectively. Contact us for a free consultation. We would love to help!



SaaS Principles Part 2

June 29, 2018  |  Posted By: Marty Abbott

Following our first article on the conflict between licensed products and SaaS solutions, we now present a necessary (but not always sufficient) list of SaaS operating principles.  These principles are important whether one is building a new XaaS (PaaS, SaaS, IaaS) solution, or migrating to an XaaS solution from an on-premise, licensed product.

These principles are developed from the perspective of the product and engineering organization, but with business value (e.g. margins) in mind.  They do not address the financial or other business operations needs within a SaaS product company.

1. Build to Market Need – Not Customer Want
Reason:  Smaller product (less bloat).  Lower Cost of Goods (COGS).  Lower R&D cost.
Customer “wants” now help inform and validate professional product management analysis as to what true market “need” is for a product.  Products are based on the minimum viable product concept and iterated through a scientific method of developing a hypothesis, testing that hypothesis, correcting where necessary and expanding it when appropriate.  Sales teams need to learn to sell “the cars that are on the lot”, not sell something that is not available.  Smaller, less bloated, and more configurable products are cheaper to operate, and less costly to maintain.

2. Build “-ilities” First
Reason:  Availability, Customer Retention, and high Revenue Retention.
Availability, scalability, reliability, and nearly-always-on “utility” are now must-haves, as the risk of failure moves from the customer in the on-premise world to the service provider.  No longer can product managers ignore what was once known as NFRs or “Non-Functional Requirements”.  The solution must always meet, as a bare minimum, the availability and response times necessary to be successful while scaling in a multi-tenant way under (hopefully significant) demand.

3. Design to be Monitored
Reason:  Availability, Customer Retention, and high Revenue Retention.
Sometimes considered one of the “-ilities” needed to achieve high availability, we call this principle out separately because it requires engineers to completely change behavior.  Like the notion of test-driven development, this principle requires that engineers think about how to log and create data to validate that the solution works as expected before starting development.  Gone are the days of closing a defect as “working as designed” – we now promote solutions to production that can easily validate usage and value creation, and we concern ourselves only with “Does it work as expected for customer benefit?”
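A minimal sketch of the idea – the feature, field names, and thresholds are hypothetical: instrument the business outcome itself, not just errors, so production data can confirm the solution works as expected:

```python
# Hypothetical feature instrumented for monitoring from day one.
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("checkout")

def apply_discount(cart_total: float, discount_pct: float) -> float:
    """Apply a discount and emit the data needed to prove customer benefit."""
    start = time.monotonic()
    final_total = round(cart_total * (1 - discount_pct / 100), 2)
    elapsed_ms = (time.monotonic() - start) * 1000

    # Log the business outcome, not merely "no exception was thrown", so a
    # dashboard can answer: does it work as expected for customer benefit?
    logger.info(
        "discount_applied total=%.2f pct=%.1f final=%.2f latency_ms=%.3f",
        cart_total, discount_pct, final_total, elapsed_ms,
    )
    return final_total

apply_discount(100.00, 15.0)
```

Because the log line is designed before the feature ships, usage, correctness and latency can be validated in production from the first deployment.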

4. Design for Easy and Efficient Operations – Automate Always
Reason:  Lower COGS.  Availability.
Everything we do to develop product and deliver a customer experience needs to be enabled through automation:  environment creation, product releases, system analysis, database upgrades, etc.  Whether this happens within a traditional operations team, a DevOps group, or product engineering, automation is part of our “whole product” and “ships” with our service solution upgrades.

5. Design for Low Cost of Operations
Reason:  Lower COGS.
Automation helps us lower the cost of operations, but we must also be cognizant of infrastructure related costs.  How do we limit network utilization overall, such that we can lower our costs associated with transit fees?  How do we use less memory footprint and fewer compute cycles to perform the same activity, such that we can reduce server infrastructure related costs?  What do we really need to keep in terms of data to reduce storage related costs?  Few if any of these things are our concerns on-premise, but they all affect gross margin when we run a SaaS business.

6. Engage Developers in Incident Resolution and Post Mortems
Reason:  Faster Time to Resolution.  Availability.  Better Learning Processes.
On premise companies value developer time because new feature creation trumps everything else.  SaaS companies know that restoring services is more important than anything else.  Further, developers must understand and “feel” customer pain.  There is no better motivation for ensuring that problems do not recur, and that we create a learning organization, than ensuring engineers understand the pain and cost of failure.

7. Configuration over Customization
Reason:  Smaller Product.  Lower COGS.  Lower R&D Cost.  Higher Quality.
One “core”, lots of configuration, no customization is the mantra of every great SaaS company.  This principle enables others, and aids in creating a smaller code base with lower development costs and lower costs of operations.  Lower cyclomatic complexity means higher quality, lower cost of future augmentation, and lower overall maintenance costs.
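As a sketch of what one core with per-tenant configuration might look like – the tenant names, settings, and defaults below are invented:

```python
# Hypothetical per-tenant configuration: one codebase, no forked code.

DEFAULT_CONFIG = {"invoice_net_days": 30, "require_approval": False, "locale": "en-US"}

TENANT_OVERRIDES = {  # lives in data (e.g., a database table), never in code
    "acme":   {"invoice_net_days": 45, "require_approval": True},
    "globex": {"locale": "de-DE"},
}

def config_for(tenant: str) -> dict:
    """Merge a tenant's overrides onto the single shared default."""
    return {**DEFAULT_CONFIG, **TENANT_OVERRIDES.get(tenant, {})}

def create_invoice(tenant: str, amount: float) -> dict:
    cfg = config_for(tenant)
    return {
        "tenant": tenant,
        "amount": amount,
        "net_days": cfg["invoice_net_days"],
        # The same code path serves every tenant; data toggles the behavior.
        "status": "pending_approval" if cfg["require_approval"] else "issued",
    }

print(create_invoice("acme", 1200.00))
print(create_invoice("globex", 80.00))
```

Every tenant runs the same code; only data differs – which is what keeps cyclomatic complexity, maintenance cost, and COGS down.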

8. Create and Maintain a Homogeneous Environment
Reason:  Lower COGS.
Just as the software that enables our product should not be unique per customer, similarly the infrastructure and configuration of our infrastructure should not be unique per customer.  Everyone orders off the menu, and the chef does not create something special for you.  The menu offers opportunities for configurations – sometimes at costs (e.g. bacon on your burger) but you cannot substitute if the menu does not indicate it (e.g. no chicken breast).

9. Publish One Single Release for All Customers
Reason:  Decreased COGS.  Decreased R&D Cost (low cost of maintenance).
The licensed software world is saddled with the large engineering burden of supporting multiple releases.  The SaaS world attempts to increase operating margins by significantly decreasing the number of versions supported for any product – ideally to one.

10. Evolve Your Services, Don’t Revolutionize Them
Reason:  Easier Upgrades.  Availability.  Lower COGS.
No customer wants downtime associated with an upgrade.  The notion is just ridiculous.  How often does your utility company take your service (power, for instance) offline because it needs to perform an “upgrade”?  Infrequently (nearly never) – and if/when it does, it is a giant pain for all customers.  Our upgrades need to be simple and small, capable of being reversed, and to “boil the frog”, as the rather morbid saying goes.  No more large upgrades with significant changes to data models requiring hours of downtime and months of preparation on the part of a customer.

11. Provide Frequent Updates
Reason:  Smaller product (less bloat).  Lower COGS.  Lower R&D cost.  Faster Time to Market (TTM).
Pursuant to the evolutionary principle above, updates need to happen frequently.  These two principles are really virtuous (or if not performed properly, vicious) when related to each other.  Doing small upgrades, as solutions are ready to ship, means that customers (and our company) benefit from the value the upgrades engender sooner.  Further, small upgrades mean incremental (evolutionary) changes.  Faster value, smaller impact.  It cures all ills and is both a dessert topping and floor wax.

12. Hire and Promote Experienced SaaS Talent
Reason:  Ability to Achieve Goals and SaaS Principles.
Running SaaS businesses, and developing SaaS products require different skills, knowledge and behaviors than licensed, on premise products.  While many of these can be learned or trained, attempting to be successful in the XaaS world without ensuring that you have the right knowledge, skills, and abilities on the team early on is equivalent to assuming that all athletes can play all sports.  Assembling a football team out of basketball players is unlikely to land you in the Super Bowl.

13. Restrict Access
Reason:  Lower Risk.  Higher Availability.
Licensed product engineers rarely have access to customer production environments.  But in our new world, it’s easy to grant such access, and in many ways it can be beneficial.  Unfortunately, unfettered access increases the risk of security breaches.  As such, we need both to restrict access to production environments and to ensure that the engineering team has access to appropriate troubleshooting data outside of the production environment for rapid problem and incident resolution.

14. Implement Multi-Tenancy
Reason:  Lower COGS.
Solutions should be multi-tenant to enable better resource utilization, but never all-tenant to ensure that we have proper fault isolation and appropriate availability.
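One way to be multi-tenant without being all-tenant – a sketch, where the pod count and hashing scheme are illustrative choices – is to deterministically assign tenants to a small set of fault-isolated pods:

```python
# Hypothetical tenant-to-pod assignment: multi-tenant, never all-tenant.
import hashlib

PODS = ["pod-1", "pod-2", "pod-3", "pod-4"]  # each pod: its own app and data stores

def pod_for(tenant_id: str) -> str:
    """Deterministically map each tenant to exactly one fault-isolated pod."""
    digest = hashlib.sha256(tenant_id.encode("utf-8")).hexdigest()
    return PODS[int(digest, 16) % len(PODS)]

for tenant in ["acme", "globex", "initech", "umbrella"]:
    print(tenant, "->", pod_for(tenant))

# A failure in any one pod affects roughly 1/len(PODS) of tenants: resource
# utilization stays high, but no single fault takes down every customer.
```

Each pod stays densely multi-tenant for utilization, while the pod boundary provides the fault isolation that an all-tenant design gives up.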


How is your SaaS product performing?  Need a checkup?  We can help.

Need help with your SaaS migration?


The Many Unfortunate Meanings of Cloud

June 5, 2018  |  Posted By: Marty Abbott

Enough is enough already – stop using the term “Cloud”.  Somewhere during the last 20 years, the term “Cloud” started to mean to product offerings what Sriracha and Tabasco mean to food:  everything’s better if we can just find a way to incorporate it.  Just as Sriracha makes barely edible items palatable and further enhances the flavor of delicacies, so evidently does “Cloud” confuse the unsophisticated buyer or investor and enhance the value for more sophisticated buyers and investors. That’s a nice analogy, but it’s also bullshit.

The term cloud just means too many things – some of which are shown below:


various meanings of the word cloud and the confusion it causes


The real world of cloud offerings can be roughly separated into two groups:

  1. Pretenders:  This group of companies knows, at some level, that it hasn’t turned the corner and started truly offering “services”.  These companies support heavy customization, are addicted to maintenance revenue streams, and offer low levels of tenancy.  They simply can’t escape the sins of their past.  Instead, they slap the term “cloud” on their product in the hopes of being seen as relevant.  At worst, it’s an outright lie.  At best, it’s slightly misleading relative to the intended meaning of the term.  Unless, of course, anything that’s accessible through a browser is “Cloud”.  These companies should stop using the term because deep down, when they are alone with a glass of bourbon, they know they aren’t a “cloud company”.
  2. Contenders:  This group of companies either blazed a path for the move to services offerings (think rentable instead of purchasable products) or quickly recognized the services revolution; they were “born cloud” or are truly embracing the cloud model.  They prefer configuration over customization, and stick to a small number of production releases (countable on one hand) across their entire customer base.  They embrace physical and logical multi-tenancy both to increase margins and decrease customer costs.  These are the companies that pay the tax for the term “cloud” – a tax that funds the welfare checks for the “pretenders”.

The graph below plots Cloud Pretenders, Contenders and Not Cloud products along the axes of gross margin and operating margin:

Various models of cloud and on-premise plotted against cost of goods sold and operating expense

Consider one type of “Pretender” – a company hosting a single-tenant, client-customized software release for each of its many customers.  This is an ASP (Application Service Provider) model.  But there is a reason the provider of the service won’t call itself an ASP:  the margins of an ASP stink relative to those of a true “SaaS” company.  The term ASP is old and antiquated.  The fix?  Just pour a bit of “cloud sauce” on it and everything will be fine.
Contrast the above case with that of a “Contender”:  physical and logical multi-tenancy at every layer of the architecture and a small number of production releases (one to three) across the entire customer base.  Both operating and gross margins increase as maintenance costs and hosting costs decrease when allocated across the entire customer base. 
Confused?  So are we.  Here are a few key takeaways:

  1. “Cloud” should mean more than just being accessed through the internet via a browser.  Unfortunately, it no longer does as anyone who can figure out how to replace their clients with a browser and host their product will call themselves a “Cloud” provider.
  2. Contenders should stop using the term “Cloud” because it invites comparison with companies to which they are clearly superior:  Superior in terms of margins, market needs, architecture and strategic advantage.
  3. Pretenders should stop using the term “Cloud” for both ethical reasons and reasons related to survivability.  Ethically, the term is somewhere between an outright lie and a contentious quibble or half-truth.  Survivability comes into play when the company believes its own lie and stops seeing a reason to change and become more competitive.

AKF Partners helps companies create “Cloud” (XaaS) transition plans to transform their business.  We help with financial models, product approach, market fit, product and technology architecture, business strategy and help companies ensure they organize properly to maximize their opportunity in XaaS.


The Top Five Most Common Agile PDLC Failures

April 27, 2018  |  Posted By: Dave Swenson
Top Five Agile Failures

Agile Software Development is a widely adopted methodology, and for good reason. When implemented properly, Agile can bring tremendous efficiencies, enabling your teams to move at their own pace, bringing your engineers closer to your customers, and delivering customer value quicker with less risk. Yet, many companies fall short of realizing the full potential of Agile, treating it merely as a project management paradigm by picking and choosing a few Agile structural elements, such as standups or retrospectives, without actually changing the manner in which product delivery occurs. Managers in an Agile culture often forget that they are indeed still managers who need to measure and drive improvements across teams.

All too often, Agile is treated solely as an SDLC (Software Development Lifecycle), focused only upon the manner in which software is developed versus a PDLC (Product Development Lifecycle) that leads to incremental product discovery and spans the entire company, not just the Engineering department.


Here are the five most common Agile failures that we see with our clients:

     
  1. Technology Executives Abdicate Responsibility for their Team’s Effectiveness

Management in an Agile organization is certainly different than in, say, a Waterfall-driven one. More autonomy is provided to Agile teams. Leadership within each team typically comes without a ‘Manager’ title. Often, this shift from a top-down, autocratic, “Do it this way” approach to a grass-roots, bottom-up one swings well beyond desired autonomy towards anarchy, where teams have been given full freedom to pick their technologies, architecture, and even outcomes with no guardrails or constraints in place. See our Autonomy and Anarchy article for more on this. 


Executives often become focused solely on the removal of barriers the team calls out, rather than leading teams towards desired outcomes. They forget that their primary role in the company isn’t to keep their teams happy and content, but instead to ensure their teams are effectively achieving desired business-related outcomes.


The Agile technology executive is still responsible for their teams’ effectiveness in reaching specified outcomes (e.g.: achieve 2% lift in metric Y). She can allow a team to determine how they feel best to reach the outcome, within shared standards (e.g.: unit tests must be created, code reviews are required). She can encourage teams to experiment with new technologies on a limited basis, then apply those learnings or best practices across all teams. She must be able to compare the productivity and efficiencies from one team to another, ensuring all teams are reaching their full potential.

     
  2. No Metrics Are Used

The age-old saying “If you can’t measure it, you can’t improve it” still applies in an Agile organization. Yet, frequently Agile teams drop this basic tenet, perhaps believing that teams are self-aware and critical enough to know where improvements are required. Unfortunately, even the most transparent and aware individuals are biased, fall back on subjective characterizations (“The team is really working hard”), and need the grounding that quantifiable metrics provide. We are continually surprised at how many companies aren’t even measuring velocity – not necessarily to compare one team with another, but to compare a team’s sprint output vs. their prior ones. Other metrics still applicable in an Agile world include quality, estimation accuracy, predictability, percent of time spent coding, and the ratio of enhancements vs. maintenance vs. tech debt paydown.

These metrics, their definitions and the means of measuring them should be standardized across the organization, with regular focus on results vs. desired goals. They should be designed to reveal structural hazards that are impeding team performance as well as best practices that should be adopted by all teams.
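As a sketch of how such standardized metrics might be computed from sprint records – the schema and numbers below are invented:

```python
# Hypothetical sprint records -- one schema, standardized across all teams.

sprints = [
    {"committed": 30, "completed": 24},
    {"committed": 28, "completed": 27},
    {"committed": 32, "completed": 30},
]

velocities = [s["completed"] for s in sprints]
avg_velocity = sum(velocities) / len(velocities)

# Estimation accuracy: points completed vs. points committed across sprints.
accuracy = sum(s["completed"] for s in sprints) / sum(s["committed"] for s in sprints)

print(f"Average velocity:    {avg_velocity:.1f} points/sprint")            # 27.0
print(f"Estimation accuracy: {accuracy:.0%}")                              # 90%
print(f"Velocity trend:      {velocities[-1] - velocities[0]:+d} points")  # +6
```

With one definition and one schema organization-wide, these numbers become comparable across teams and across a single team’s history.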

     
  3. Your Velocity is a Lie

Is your definition of velocity an honest one? Does it truly measure outcomes, or only effort? Are you consistent with your definition of ‘done’? Take a good look at how your teams are defining and measuring velocity. Is velocity only counted for true ‘ready to release’ tasks? If QA hasn’t been completed within a sprint, are the associated velocity points still counted or deferred?

Velocity should not be a measurement of how hard your teams are working, but instead an indicator of whether outcomes (again, e.g.: achieve 2% lift in metric Y) are likely to be realized - take credit for completion only when in the hands of customers.

     
  4. Failure to Leverage Agile for Product Discovery

From the Agile manifesto: “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software”. Many companies work hard to get an Agile structure and its artifacts in place, but ignore the biggest benefit Agile can bring: iterative and continuous product discovery. Don’t break down a six-month waterfall project plan into two week sprints with standups and velocity measurements and declare Agile victory.


Work to create and deliver MVPs to your customers that allow you to test expected value and customer satisfaction without huge investment.

     
  5. Treating Agile as an SDLC vs. a PDLC

As explained in our article PDLC or SDLC, SDLC (Software Development Lifecycle) lives within PDLC (Product Development Lifecycle). Again, Agile should not be treated as a project management methodology, nor merely as a means of developing software. It should focus on your product, and on the customer success your product provides. This means that Agile should permeate well beyond your developers and include product and business personnel.


Business owners or their delegates (product owners) must be involved at every step of the PDLC process. POs need to be embedded within each Agile team, ideally colocated alongside team members. In order to provide product focus, POs should first bring to the team the targeted customer problem to be solved, rather than dictating only a solution, then work together with the team to implement the most effective solution to that problem.



AKF Partners helps companies transition to Agile as well as fine-tune their existing Agile processes. We can readily assess your PDLC, organization structure, metrics and personnel to provide a roadmap for you to reach the full value and benefits Agile can provide. Contact us to discuss how we can help.



SaaS Migration Challenges

March 12, 2018  |  Posted By: Dave Swenson

AKF scale cube cloud computing SaaS conversion

More and more companies are waking up from the 20th century, realizing that their on-premise, packaged, waterfall paradigms no longer play in today’s SaaS, agile world. SaaS (Software as a Service) has taken over, and for good reason. Companies (and investors) long for the higher valuation and increased margins that SaaS’ economies of scale provide. Many of these same companies realize that in order to fully benefit from a SaaS model, they need to release far more frequently, enhancing their products through frequent iterative cycles rather than massive upgrades occurring only 4 times a year. So, they not only perform a ‘lift and shift’ into the cloud, they also move to an Agile PDLC. Customers, tired of incurring on-premise IT costs and risks, are also pushing their software vendors towards SaaS.

SaaS Migration is About More Than Just Technology – It is An Organization Reboot
But, what many of the companies migrating to SaaS don’t realize is that migrating to SaaS is not just a technology exercise.  Successful SaaS migrations require a ‘reboot’ of the entire company. Certainly, the technology organization will be most affected, but almost every department in a company will need to change. Sales teams need to pitch the product differently, selling a leased service vs. a purchased product, and must learn to address customers’ typical concerns around security. The role of professional services teams in SaaS drastically changes, and in most cases, shrinks. Customer support personnel should have far greater insight into reported problems. Product management in a SaaS world requires small, nimble enhancements vs. massive, ‘big-bang’ upgrades. Your marketing organization will potentially need to target a different type of customer for your initial SaaS releases - leveraging the Technology Adoption Lifecycle to identify early adopters of your product in order to inform a small initial release (Minimum Viable Product).

It is important to recognize the risks that will shift from your customers to you. In an on-premise (“on-prem”) product, your customer carries the burden of capacity planning, security, availability, and disaster recovery. SaaS companies sell a service (we like to say an outcome), not just a bundle of software.  That service represents a shift of the risks once held by a customer to the company provisioning the service.  In most cases, understanding and properly addressing these risks are new undertakings for the company in question, and not something for which it has the proper mindset or skills to be successful.

This company-wide reboot can certainly be a daunting challenge, but if approached carefully and honestly, addressing key questions up front, communicating, educating, and transparently addressing likely organizational and personnel changes along the way, it is an accomplishment that transforms, even reignites, a company.

This is the first in a series of articles that captures AKF’s observations and first-hand experiences in guiding companies through this process.


Don’t treat this as a simple rewrite of your existing product – answer these questions first…


Any company about to launch into a SaaS migration should first take a long, hard look at their current product, determining what out of the legacy product is not worth carrying forward. Is all of that existing functionality really being used, and still relevant? Prior to any move towards SaaS, the following questions and issues need to be addressed:

Customization or Configuration?
SaaS efficiencies come from many angles, but certainly one of those is having a single codebase for all customers. If your product today is highly customized, where code has been written and is in use for specific customers, you’ve got a tough question to address. Most product variances can likely be handled through configuration, a data-driven mechanism that enables/disables or otherwise shapes functionality for each customer. No customer-specific code from the legacy product should be carried forward unless it is expected to be used by multiple clients. Note that this shift has implications on how a sales force promotes the product (they can no longer promise to build whatever a potential customer wants, but must sell the current, existing functionality) as well as professional services (no customizations means less work for them).

Single/Multi/All-Tenancy?
Many customers, even those who accept the improved security posture a cloud-hosted product provides over their own on-premise infrastructure, absolutely freak when they hear that their data will coexist with other customers’ data in a single multi-tenant instance, no matter what access management mechanisms exist. Multi-tenancy is another key to achieving economies of scale that bring greater SaaS efficiencies. Don’t let go of it easily, but if you must, price extra for it.

Who Owns the Data?
Many products focus only on the transactional set of functionality, leaving the analytics side to their customers. In an on-premise scenario, where the data resides in the customers’ facilities, ownership of the data is clear. Customers are free to slice & dice the data as they please. When that data is hosted, particularly in a multi-tenant scenario where multiple customers’ data lives in the same database, direct customer access presents significant challenges. Beyond the obvious security issues is the need to keep your customers abreast of the more frequent updates that occur with SaaS product iterations. The decision: do you replicate customer data into read-only instances, provide bulk export into customers’ own hosted databases, or build analytics into your product?

All of these have costs - ensure you’re passing those on to your customers who need this functionality.

May I Upgrade Now?
Today, do your customers require that you obtain permission to upgrade their installation? You’ll need to change that behavior to realize another SaaS efficiency - supporting as few versions as possible. Ideally, you’ll support only a single version (other than during deployment). If your customers need to ‘bless’ a release before migrating on to it, you’re doing it wrong. Your releases should be small, incremental enhancements, potentially even reaching continuous deployment. The changes should therefore be far easier to accept and learn than the big-bang, huge upgrades of the past. If absolutely necessary, create a sandbox for customers to access new releases, but be prepared to deal with the potentially unwanted, non-representative feedback from the select few who try it out in that sandbox.

Wait? Who Are We Targeting?
All of the questions above lead to this fundamental issue: Are tomorrow’s SaaS customers the same as today’s? The answer? Not necessarily. First, in order to migrate existing customers on to your bright, shiny new SaaS platform, you’ll need to have functional parity with the legacy product. Reaching that parity will take significant effort and lead to a big-bang approach. Instead, pick a subset or an MVP of existing functionality, and find new customers who will be satisfied with that. Then, after proving out the SaaS architecture and related processes, gradually migrate more and more functionality, and once functional parity is close, move existing customers on to your SaaS platform.

To find those new customers interested in placing their bets on your initial SaaS MVP, you’ll need to shift your current focus from the right side of the Technology Adoption Lifecycle (TALC) to the left - from your current ‘Late Majority’ or ‘Laggards’ to ‘Early Adopters’ or ‘Early Majority’. Ideally, those customers on the left side of the TALC will be slightly more forgiving of the ‘learnings’ you’ll face along the way, and will prove to be far more valuable partners as you further enhance your MVP.

The key is to think out of the existing box your customers are in, to reset your TALC targeting and to consider a new breed of customer, one that doesn’t need all that you’ve built, is willing to be an early adopter, and will be a cooperative partner throughout the process.


Our next article on SaaS migration will touch on organizational approaches, particularly during the build-out of the SaaS product, and the paradigm shifts your product and engineering teams need to embrace in order to be successful.

AKF has led many companies on their journey to SaaS, often getting called in after that journey has been derailed. We’ve seen the many potholes and pitfalls and have learned how to avoid them. Let us help you move your product into the 21st century.  See our SaaS Migration service.



Technical Due Diligence Best Practices

January 23, 2018  |  Posted By: Marty Abbott

Technical due diligence of products is about more than the solution architecture and the technologies employed.  Performing diligence correctly requires that companies evaluate the solution against the investment thesis, and evaluate the performance and relationship of the engineering and product management teams.  Here we present the best practices for technology due diligence in the format of things to do, and things not to do:


The Dos

1. Understand the Investment/Acquisition Thesis

One cannot perform any type of diligence without understanding the investment/acquisition thesis and equally as important, the desired outcomes.  Diligence is meant to not only uncover “what is” or “what exists”, but also identify the obstacles to achieve “what may or can be”.  The thesis becomes the standard by which the diligence is performed.

2. Evaluate the Team against the Desired Outcomes

The technology product landscape is littered with the carcasses of great ideas run into the ground by the wrong leadership or the wrong team.  Disagree?  We ask you to consider the Facebook and Friendster battle.  We often joke that the robot apocalypse hasn’t happened yet, and technology isn’t building itself.  Great teams are the reason solutions succeed; substandard teams are behind the solutions that fail technically.  Make sure your diligence identifies whether you are getting the right team along with the product/company you acquire.

3. Understand the Tech/Product Relationship

Product Management teams are the engines of products, and engineering teams are the transmission.  Evaluating these teams in isolation is a mistake – as regardless of the PDLC (product development lifecycle) these teams must have an effective working relationship to build great products.  Make sure your diligence encompasses an evaluation of how these teams work together and the lifecycle they use to maximize product value and minimize time to market.

4. Evaluate the Security Posture

Cyber-crime and fraud are going to increase at a rate higher than the adoption of online solutions, pursuant to a number of secular forces that we will enumerate in a future post.  As such, it is in your best interest as an investor to understand the degree to which the company is focused on increasing the perceived cost of malicious activity and decreasing the perceived value of that activity.  Ensure that your diligence includes evaluating the security focus, spending, approach and mindset of the target company, and that it properly evaluates the risk of the target solution.  This need not be a separate diligence for small investments – just ensure that you are comfortable with the spend, attention and approach.

5. Prepare Yourself and the Target

Any diligence will go better if you give the acquisition/investment target an opportunity to prepare documents.  Requesting materials in advance allows the investment target an opportunity to prepare for a deep discussion and ensures that you can familiarize yourself with the product architecture and product development processes ahead of time.  Check out our article on due diligence checklists which includes a list of items to request in advance.

6. Be Dynamic and Probe Constantly

While a thorough list of items to discuss is important, it is equally important to abide by the “2 ears and one mouth” rule:  Spend more time listening than talking.  Look for subtle clues as to the target’s comfort with particular answers.  Are there things with which they are uncomfortable?  Are they stressing certain words for a reason?  Don’t accept an answer at face value, dig into the answer to find the information that supports a claim.

7. Evaluate Debt

Part of the investment in your target could well be ongoing payments against past technical debt.  Ensure that you properly evaluate what debt the company has accrued, and how it is paying the interest and principal on that debt.


The Don’ts

1. Don’t Waste Too Much Time (or money) on Code Reviews
The one thing I know from years of running engineering teams is that anytime an engineer reviews code for the first time she is going to say, “This code is crap and needs to be rewritten.”  Code reviews are great to find potential defects and to ensure that code conforms to the standards set forth by the company.  But you are unlikely to have the time or resources to review everything.  The company is also unlikely to give you unfettered access to all of their code (Google “Sybase Microsoft SQLServer” for reasons why).  That leaves you at the whims of the company to cherry-pick what you review, which in turn means you aren’t getting a good representative sample. 
Further, your standards likely differ from those of the target company.  As such, a review of the software is simply going to indicate that you have different standards. 
Lastly, we’ve seen great architecture and terrible code succeed whereas terrible architecture and great code rarely is successful.  You may find small code reviews enlightening, but we urge you to spend a majority of your time on the architecture, people and process of the acquisition or investment.

2. Don’t Start a Fight
Far too often technology diligence sessions start in discussion and end in a fight.  The people performing the diligence start asking questions in a way that may seem judgmental to the target company.  Then the investing/acquiring team shifts from questions to absolute statements that can only be taken as judgmental.  There’s simply no room for this.  Diligence is clinical – not personal.  It’s not a place to prove who is smarter than whom.  This dynamic is one of the many reasons it is often a good idea to have a third party perform your diligence:  The target company is less likely to feel threatened by the acquiring product team, and the third party is oftentimes more experienced with establishing a non-threatening environment.

3. Don’t Be Religious
In a services-oriented world, it really doesn’t matter what code or data persistence platform comprises a service you may be calling.  Assuming that you are acquiring a solution and its engineers, you need not worry about supporting the solution with your existing skillsets.  Debates around technology implementations too often come from a place of what one knows (“I know Java, Java rocks, and everything else is substandard”) rather than what one can prove.  There are certainly exceptions, like aging and unsupported technology – but stay focused on the architecture of a solution, not the technology that implements that architecture.

4. Don’t Do Diligence Remotely
As we’ve indicated before, diligence is as much about teams as it is about the technology itself.  Performing diligence remotely, without face-to-face interaction, makes it difficult to identify cues that might otherwise be indicators that you should dig more deeply into a certain space or set of questions.  Examples include a CTO giving an authoritative answer to a question while members of her team roll their eyes or slightly shake or bow their heads.

You may also want to read about the necessary components of technical due diligence in our article on optimizing technical diligence.


AKF Partners performs diligence on behalf of a number of venture capital and private equity firms, as well as on behalf of strategic acquirers.  Whether for a third-party view, or because your team has too much on its plate, we can help.  Read more about our technical due diligence services here.


Definition of MVP

April 3, 2017  |  Posted By: AKF

We often use the term minimum viable product, or MVP, but do we all agree on what it means? In the Scrum spirit of the Definition of Done, I believe the Definition of MVP is worth stating explicitly within your tech team. A quick search revealed these three similar yet different definitions:

     
  • A minimum viable product (MVP) is a development technique in which a new product or website is developed with sufficient features to satisfy early adopters. The final, complete set of features is only designed and developed after considering feedback from the product’s initial users. (Source: Techopedia)

  • In product development, the minimum viable product (MVP) is the product with the highest return on investment versus risk… A minimum viable product has just those core features that allow the product to be deployed, and no more. (Source: Wikipedia)

  • When Eric Ries used the term for the first time he described it as: “A Minimum Viable Product is that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.” (Source: Leanstack)

I personally like a combination of these definitions. I would choose something along the lines of:

A minimum viable product (MVP) has sufficient features to solve a problem that exists for customers and provides the greatest return on investment while reducing the risk of what we don’t know about our customers.

Just like no two teams implement Agile the same way, we don’t all have to agree on the definition of MVP – but all your team members should agree. Otherwise, what is an MVP to one person is a full-featured product to another. Take a few minutes to discuss with your cross-functional agile team and come to a decision on your Definition of MVP.


Build v. Buy

April 3, 2017  |  Posted By: AKF

In many of our engagements, we find ourselves helping our clients understand when it’s appropriate to build and when they should buy.

If you perform a simple web search for “build v. buy” you will find hundreds of articles, process flows, and decision trees on when to build and when to buy. Many of these are cost-centric decisions, including discounted cash flows for the maintenance of internal development; others are focused on strategy. Some of the articles blend the two.

Here is a simple set of questions that we often ask our customers to help them with the build v. buy decision:

1. Does this “thing” (product / architectural component / function) create strategic differentiation in our business?

Here we are talking about whether you are creating switching costs, lowering barriers to exit, increasing barriers to entry, etc., that would give you a competitive advantage relative to your competition. See Porter’s Five Forces for more information on this topic. If the answer to this question is “No – it does not create competitive differentiation”, then 99% of the time you should just stop there and attempt to find a packaged product, open source solution, or outsourcing vendor to build what you need. If the answer is “Yes”, proceed to question 2.

2. Are we the best company to create this “thing”?

This question helps inform whether you can effectively build it and achieve the value you need. This is a “core v. context” question; it asks both whether your business model supports building the item in question and whether you have the appropriate skills to build it better than anyone else. For instance, if you are a social networking site, you *probably* have no business building relational databases for your own use. If you can answer “Yes” to this question, go to question (3); if the answer is “No”, stop here and find an outside solution. And please, don’t fool yourselves – if you answer “Yes” because you believe you have the smartest people in the world (and you may), do you really need to dilute their efforts by focusing on more than just the things that will guarantee your success?

3. Are there few or no competing products to this “thing” that you want to create?

We know the question is awkwardly worded – but the intent is to be able to exit these four questions by answering “yes” everywhere in order to get to a “build” decision. If there are many providers of the “thing” to be created, it is a potential indication that the space might become a commodity. Commodity products differ little in feature sets over time and ultimately compete on price, which also declines over time. As a result, a “build” decision today will look bad tomorrow as features converge and pricing declines. If you answer “Yes” (i.e., “Yes, there are few or no competing products”), proceed to question (4).

4. Can we build this “thing” cost effectively?

Is it cheaper to build than buy when considering the total lifecycle (implementation through end-of-life) of the “thing” in question? Many companies use cost as a justification, but all too often they miss the key points of how much it costs to maintain a proprietary “thing”, “widget”, “function”, etc. If your business REALLY grows and is extremely successful, do you really want to continue supporting internally developed load balancers, databases, etc. through the life of your product? Don’t fool yourself into answering this affirmatively just because you want to work on something neat. Your job is to create shareholder value – not work on “neat things” – unless your “neat thing” creates shareholder value. A back-of-the-envelope discounted cash flow comparison, like the sketch below, can help keep this answer honest.
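
To make the lifecycle-cost comparison concrete, here is a minimal sketch of such a comparison in Python. Every figure in it – the upfront build cost, annual maintenance, license fees, discount rate, and lifecycle length – is a hypothetical placeholder for illustration, not a benchmark; substitute your own estimates.

```python
# Minimal sketch: compare the net present value (NPV) of lifecycle costs
# for "build" vs. "buy". All figures below are hypothetical placeholders.

def npv(cash_flows, discount_rate):
    """NPV of annual cash flows, where cash_flows[0] is the year-0 outlay."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows))

DISCOUNT_RATE = 0.10   # assumed cost of capital
LIFECYCLE_YEARS = 5    # assumed years from implementation through end-of-life

# Build: large upfront engineering effort, then ongoing maintenance each year.
build_costs = [500_000] + [150_000] * LIFECYCLE_YEARS

# Buy: smaller implementation cost, then annual license and support fees.
buy_costs = [100_000] + [120_000] * LIFECYCLE_YEARS

print(f"Build lifecycle cost (NPV): ${npv(build_costs, DISCOUNT_RATE):,.0f}")
print(f"Buy lifecycle cost (NPV):   ${npv(buy_costs, DISCOUNT_RATE):,.0f}")
```

With these placeholder numbers, buying is cheaper; the point is that the comparison must include every year of maintenance or licensing, discounted, rather than the upfront cost alone.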

There are many more complex questions that can be asked and may justify the building rather than purchasing of your “thing”, but we feel these four questions are sufficient for most cases.

A “build” decision is indicated only when the answers to all four questions are “Yes”.

We suggest seriously considering buying or outsourcing (with appropriate contractual protection when intellectual property is a concern) anytime you answer “No” to any question above. The full decision flow is summarized in the sketch below.
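
For teams that want the flow as an explicit checklist, here is a minimal sketch encoding the four questions above. The function and parameter names are our own illustration, not a formal model; any single “No” short-circuits to a buy/outsource recommendation.

```python
# Minimal sketch of the four-question build v. buy flow described above.
# A "build" recommendation requires a "Yes" to all four questions; any
# "No" indicates buying or outsourcing instead.

def build_or_buy(creates_strategic_differentiation: bool,
                 best_company_to_build_it: bool,
                 few_or_no_competing_products: bool,
                 cost_effective_over_lifecycle: bool) -> str:
    if all([creates_strategic_differentiation,
            best_company_to_build_it,
            few_or_no_competing_products,
            cost_effective_over_lifecycle]):
        return "build"
    return "buy or outsource"

# Example: differentiating and well-suited to build, but the space is
# commoditizing (question 3 is "No") -- so buy or outsource.
print(build_or_buy(True, True, False, True))  # -> buy or outsource
```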
