
Growth Blog

Scalability and Technology Consulting Advice for SaaS and Technology Companies

Top 3 Failures in Digital Transformations

July 11, 2019  |  Posted By: Marty Abbott

Attempting to transform a company to compete effectively in the digital economy is difficult, to say the least.  In the experience of AKF Partners, it is easier to be “born digital” than to transform a successful, long-tenured business to compete effectively in the digital age.

There is no single guaranteed fail-safe path to transformation.  There are, however, 10 principles by which you should abide and 3 guaranteed paths to failure. 

Avoid these 3 common mistakes at all costs or suffer a failed transformation.

Top 3 Digital Transformation Failures

Having the Wrong Team and the Wrong Structure

If you have a successful business, you very likely have a very bright and engaged team.  But unless a good portion of your existing team has run a successful “born digital” business, or better yet transformed a business in the digital age, they don’t have the experience necessary to complete your transformation in the timeframe necessary for you to compete.  If you needed lifesaving surgery, you wouldn’t bet your life on a doctor learning “on the job”.  At the very least, you’d ensure that doctor was alongside a veteran and more than likely you would find a doctor with a successful track record of the surgery in question.  You should take the same approach with your transformation.

This does not mean that you need to completely replace your team.  Companies have been successful with organization strategies that augment the current team with veterans.  But you need new, experienced help as employees on your team.

Further, to meet the need for speed in the new digital world, you need to think differently about how you organize.  The best, fastest-performing digital teams organize themselves around the outcomes they hope to achieve, not the functions that they perform.  High-performing digital teams are cross-functional, dedicated to a business outcome, and empowered to act autonomously.

It also helps to hire a firm that has helped guide companies through a transformation.  AKF Partners can help. 

Planning Instead of Doing

The digital world is ever evolving.  Plans that you make today will be incorrect within 6 months.  In the digital world, no plan survives first contact with the enemy.  In the old days of packaged software and brick and mortar retail, we had to put great effort into planning to reduce the risk associated with being incorrect after rather long lead times to project completion.  In the new world, we can iterate nearly at the speed of thought.  Whereas being incorrect in the old world may have meant project failure, in the new world we strive to be incorrect early such that we can iterate and make the final solution correct with respect to the needs of the market.  Speed kills the enemy.

Eschew waterfall models, prescriptive financial models, and static planning in favor of Agile methodologies, near-term adaptive financial plans, and OKRs.  Spend 5 percent of your time planning and 95 percent of your time doing.  While in the doing phase, learn to adapt quickly to failures and adjust your approach to market feedback and available data.

The successful transformation starts with a compelling vision that is outcome based, followed by a clear near-term path of multiple small steps.  The remainder of the path is unclear as we want the results of our first few steps to inform what we should do in the next iteration of steps to our final outcome.  Transformation isn’t one large investment, but a series of small investments, each having a measurable return to the business.

Knowing Instead of Discovering

Few companies thrive by repeatedly being smarter than the market.  In fact, the opposite is true – the digital landscape is strewn with the corpses of companies whose hubris prevented them from developing the real-time feedback mechanisms necessary to sense and respond to changing market dynamics.  Yesterday’s approaches to success at best have diminishing returns today and at worst put you at a competitive disadvantage.

Begin your journey as a campaign of exploration.  You are finding the best path to success, and you will do it by ensuring that every solution you deploy is instrumented with sensors that help you identify the efficacy of the solution in real time.  Real-time data allows us to inductively identify patterns that form specific hypotheses.  We then deductively test these hypotheses through comparatively low-cost solutions, the results of which inform further induction.  This cycle of induction and deduction propels us through our journey to success.
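
As a concrete illustration, here is a minimal sketch (in Python) of what such a “sensor” can look like in practice: a feature emits success, failure, and latency metrics to whatever monitoring system you already run.  The statsd client, the metric names, and the place_order() call are illustrative assumptions, not a prescription for any particular stack.

import time
import statsd  # the "statsd" PyPI client; any metrics library works similarly

metrics = statsd.StatsClient("localhost", 8125, prefix="checkout")  # assumed endpoint

def one_click_checkout(cart):
    start = time.monotonic()
    try:
        order = place_order(cart)          # existing business logic (assumed to exist)
        metrics.incr("one_click.success")  # sensor: count successful uses
        return order
    except Exception:
        metrics.incr("one_click.failure")  # sensor: count failures of the same feature
        raise
    finally:
        # latency feeds the same real-time dashboards used to judge efficacy
        metrics.timing("one_click.latency_ms", (time.monotonic() - start) * 1000)

The counters then feed the inductive side of the loop: patterns in the dashboards suggest hypotheses, and cheap variants of the feature test them deductively.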


What is Digital Transformation?

July 3, 2019  |  Posted By: James Fritz


For many years after the introduction of the automobile, most industries benefited from somewhat static and predictable secular forces. Nascent technology forces primarily aided companies in increasing gross margins and operating margins by lowering the cost of labor, decreasing the cost of warehousing and decreasing logistical costs. Consumer behavior followed somewhat predictable and unchanged patterns, varying mostly with economic conditions and seasonal needs.

However, the advent of eCommerce and the “anything as a service” movement in the late 1990s through early 2000s started to significantly change the behavior of individual and business consumers. Layer on logistics integrations that allowed near-immediate gratification for durable goods, and consumers started to shift to purchasing from home. Similarly, businesses enjoy comparatively easy near-term gratification with the implementation of services; gone are the days of multi-year on-premise ERP implementations in favor of short-duration leasing of software as a service.

Late majority and laggard businesses within the technology life cycle were as late to identify these shifts in business and individual consumer behavior as they were to adopt new technical solutions. While they may have thrown up digital storefronts or offered digital downloads of their solutions, they failed to envision how the nascent forces begged for new integrations and tighter business cycles that would benefit not only the buyer but the producing company as well.

So, what is Digital Transformation? It is taking digital technologies and using them to provide new and creative ways to conduct business. This isn’t just an update to a technology stack or developing new code. It is a transformation, via digitization, of how business is done.

reCAPTCHA and the NY Times
In the mid-2000s, CAPTCHA was taking the internet by storm. It was designed to weed out bots and allow only humans through to websites. It started out rather simply, with a challenge of words or letters requiring a response that, at the time, would be difficult for a computer to produce. It met with great success, logging over half a million hours per day of people filling out CAPTCHAs to access sites or submit information. Although not a digital transformation itself, what was coupled with CAPTCHA became a transformation.

The NY Times had digitized all of their old issues and used computer software to recognize the scanned images and convert them to text. However, 10 to 30 percent of the text was still missed. This then required two humans to go through the unrecognized text and come to the same conclusion about what was shown – a lengthy and expensive process. Partnering with the CAPTCHA team, the NY Times put those unrecognized words at the end of CAPTCHAs. There would be a normal CAPTCHA question (still designed to determine whether the user was a bot or not) followed by a picture of text from the NY Times. Once the picture had been verified by enough people, that picture would then be associated with that text. Using this resource of over half a million man-hours per day, the NY Times was able to quickly close the gap on unrecognized text from older issues. This generated an increase in business for CAPTCHA and solved a manual problem digitally.

Who uses Digital Transformation and Why?

Startups are created digitally. They have no infrastructure, code, or business model that existed previously. Instead of transforming, startups are born digital. This gives them the opportunity to carve out a piece of the total addressable market currently not being serviced, or to siphon off business from other companies already operating in the market. By looking at what is already being done in the market and identifying the dissatisfaction within it, smart engineers can create a new product and a new business (if well executed and well funded) that dramatically transforms a technology sector. Digitization comes easily to startups: the underlying business model already exists, and trying to compete by doing the same thing will lead to failure. Taking an antiquated method and using digital prowess to transform it is what gives startups their edge.

If the current competitors in the market are paying attention, they can use this disruption to their advantage. Just like startups were able to glean information about where and how to target from larger corporations, those same corporations can now identify how to transform from the startups. It is not always easy creating a disruptive technology that requires a transformation for how business is done. By being a close follower, large corporations can avoid a lot of the pitfalls and ensure that they can keep their market share, while also now targeting new consumers. Conversely, if unwilling to adapt to changing forces, or unwilling to see the future for what it is, these corporations will be abandoned by their consumers.

Cartoon: “The second mouse gets the cheese” (nicartoons.com)

Digital Transformation is Not…

...just taking a current generation technology and updating it.
...version control or the introduction of new hardware or software.

Digital Transformation requires a complete re-tooling of how you conduct business. It will affect how you code, how you scale, how you sell, how you brand and even how you interact as a company. A caterpillar doesn’t become a butterfly simply by slapping on some wings. Inside the cocoon it completely dissolves itself and rebuilds from the ground up. That is Digital Transformation.

Digital Transformation isn’t easy. Marty Abbott has summed up 10 Principles that will assist you with your transformation. If you need further assistance with identifying Digital Transformation pitfalls and goals, AKF can help!


Technology Consulting Services: Key Differences between Due Diligence for IT and Due Diligence for Product Technology

June 28, 2019  |  Posted By: Roger Andelin

The two most common types of technology due diligence requests we see at AKF are:

  • Product Technology Due Diligence
  • Information Technology, or IT, Due Diligence.

Both types are very different from each other but often get confused. This article will explain the differences between Product Technology and Information Technology – and why understanding the difference is critical to a company’s success and profitability.

IT Department

An IT department is typically led by a Chief Information Officer (CIO). The focus of the CIO is on information technology that supports the ongoing operations of the business. The CIO and the IT team’s key outcomes are typically around employee productivity and efficiency, applying technology to improve productivity and to lower costs. This includes technologies that run the financial and accounting systems, sales and operations systems, customer support systems,  and the networks, servers, and storage underlying these systems. The CIO is also responsible for the technologies that are used in the office such as email, chat, video conferencing systems, printers, and employees’ desktop computers.

Product Technology

Conversely, Product Technology (or Digital Product Technology) is typically led by a Chief Technology Officer (CTO). The focus of the CTO is building a product or service for customers out of software and running that product or service on cloud systems or company-owned systems, although the latter is becoming less common. Put another way, CTOs build and run software as revenue-generating products and services.

Whereas the CIO runs a cost center and is responsible for employee productivity, the CTO is responsible for revenue and cash-flow. Sales growth, time to market, costs of goods sold, and R&D spend are some of the factors included within key outcomes for the CTO.

For example, if you were running a newspaper business, your primary product is the news. However, you also must build applications to read news like mobile apps and web apps. It is the job of your CTO to build, maintain, and run these apps for your customers. Your CTO would be accountable for business metrics – such as the number of downloads, users, and revenue. If your CTO is distracted by CIO issues of running the day-to-day business of the office, they are being taken away from their work to build and implement the revenue generating products and services your company is trying to create.

Differences

Product Technology and IT are very different, and CIOs and CTOs have very different skills and competencies to manage those differences. For example, a CIO often possesses deep knowledge of back-office applications such as accounting systems, finance systems, and warehouse management systems. In many cases they rose up through the technology ranks writing, maintaining, and running those systems for other departments in the company. They are excellent at business analysis, collecting requirements from company users and translating those requirements into project plans. CIOs are often proficient in the waterfall development methodologies commonly used to implement back-office applications.

The IT team is often largely staffed with people who know how to integrate and configure third-party products, with a small amount of custom development. The opposite is true for most product technology teams typically staffed with software engineers who are building new solutions and a smaller number of engineers integrating infrastructure components.

CTOs possess entirely different technology skills needed to build and maintain software as a service (SaaS) for the company’s customers. CTOs possess the skills to architect software applications that are scalable as the company grows its customer base. The AKF Scale Cube is an invaluable reference tool for the CTO building a scalable software solution based on scalable microservices. CTOs must have the skills to run product teams, including user experience design, along with software development. CTOs are more likely to be proficient in Agile development methodologies, such as Scrum. CTOs are expected to know the product development lifecycle (PDLC), mitigate technical debt liability, and know how to build software release pipelines to support continuous integration and delivery (CI/CD).

What We See In The Wild

When AKF is called to perform technology due diligence, we often find that Product and IT are combined!  This has been especially true for older, established companies with traditional IT departments. CIOs took on the responsibility of building and running customer-facing internet and mobile software products and services rather than creating a separate product development team under a CTO. The results are often not positive, because the technical, product, and process skills required by the two are very different.

This mistake is not exclusive to older companies. We see startup CEOs making the same mistake, often under the rationale of reducing burn. However, when a startup CEO looks to the CTO to help set up new employees’ desktop computers or to fix a problem with email, it is a huge distraction for the CTO, who should be focused on building and improving the company’s revenue-generating products.

AKF Recommendations

AKF recommends that CEOs not combine Product Development and IT departments. Understanding this distinction and why these two very different departments need to function separately is critically important.  Our primary expertise at AKF is to help successful companies become more successful at delivering digital products. AKF focuses its expertise on helping product development teams succeed. We have developed intellectual property, including the AKF Scale Cube, used to evaluate product architecture and to guide successful product development teams. 

We also help CEOs, CTOs, CIOs and IT departments who are looking to improve performance and deliver more business value by creating efficient product development processes and architectures. Give us a call, we can help!


What's the difference between VMs & Containers?

May 29, 2019  |  Posted By: Robin McGlothin


Inefficiency and downtime have traditionally kept CTOs and IT decision makers up at night.  Now new challenges are emerging, driven by infrastructure inflexibility and vendor lock-in, limiting technology organizations and making strategic decisions more complex than ever.  Both VMs and containers can help you get the most out of available hardware and software resources while easing the risk of vendor lock-in.

Containers are the new kids on the block, but VMs have been, and continue to be, tremendously popular in data centers of all sizes.  Having said that, the first lesson to learn is that containers are not virtual machines.  When I was first introduced to containers, I thought of them as lightweight or trimmed-down virtual instances.  This comparison made sense, since most advertising material leaned on the idea that containers use less memory and start much faster than virtual machines – basically marketing themselves against VMs.  Everywhere I looked, Docker was comparing itself to VMs.  No wonder I was a bit confused when I started to dig into the benefits and differences between the two.

As containers have evolved, they have brought forth abstraction capabilities that are now being broadly applied to make enterprise IT more flexible. Thanks to the rise of Docker containers, it’s now possible to more easily move workloads between different versions of Linux as well as orchestrate containers to create microservices.  Much like containers, a microservice is not a new idea either. The concept harkens back to service-oriented architecture (SOA). What is different is that microservices based on containers are more granular and simpler to manage. More on this topic in a blog post for another day!

If you’re looking for the best solution for running your own services in the cloud, you need to understand these virtualization technologies, how they compare to each other, and the best uses for each. Here’s our quick read.

VM’s vs. Containers – What’s the real scoop?

One way to think of containers vs. VMs is that while VMs run several different operating systems on one server, container technology offers the opportunity to virtualize the operating system itself.


                                                               
Figure 1 – Virtual Machine | Figure 2 – Container

VMs help reduce expenses. Instead of running an application on a single server, a virtual machine enables one physical resource to do the job of many, so you do not have to buy, maintain, and service several servers.  Because there is one host machine, you can efficiently manage all the virtual environments with a centralized tool – the hypervisor. The decision to use VMs is typically made by the DevOps/infrastructure team.  Containers help reduce expenses as well, and they are remarkably lightweight and fast to launch.  Because of their small size, you can quickly scale containers in and out and add identical containers as needed.

Containers are excellent for Continuous Integration and Continuous Deployment (CI/CD) implementation. They foster collaborative development by distributing and merging images among developers.  Therefore, developers tend to favor Containers over VMs.  Most importantly, if the two teams work together (DevOps & Development) the decision on which technology to apply (VMs or Containers) can be made collaboratively with the best overall benefit to the product, client and company.

What are VMs?

The operating systems and their applications share hardware resources from a single host server, or from a pool of host servers. Each VM requires its own underlying OS, and the hardware is virtualized. A hypervisor, or a virtual machine monitor, is software, firmware, or hardware that creates and runs VMs. It sits between the hardware and the virtual machine and is necessary to virtualize the server.

IT departments, both large and small, have embraced virtual machines to lower costs and increase efficiency.  However, VMs can take up a lot of system resources, because each VM needs a full copy of an operating system AND a virtual copy of all the hardware the OS needs to run.  This quickly adds up to a lot of RAM and CPU cycles. While this is still more economical than bare metal for many applications, it is overkill for others – and thus containers enter the scene.

Benefits of VMs
• Reduced hardware costs from server virtualization
• Multiple OS environments can exist simultaneously on the same machine, isolated from each other.
• Easy maintenance, application provisioning, availability and convenient recovery.
• Perhaps the greatest benefit of server virtualization is the capability to move a virtual machine from one server to another quickly and safely. Backing up critical data is done quickly and effectively because you can effortlessly create a replication site.

Popular VM Providers
• VMware vSphere ESXi - VMware has been active in the virtual space since 1998 and is an industry leader, setting standards for reliability, performance, and support.
• Oracle VM VirtualBox - Not sure what operating systems you are likely to use? Then VirtualBox is a good choice because it supports an amazingly wide selection of host and client combinations. VirtualBox is powerful, comes with terrific features and, best of all, it’s free.
• Xen - Xen is the open source hypervisor included in the Linux kernel and, as such, it is available in all Linux distributions. The Xen Project is one of the many open source projects managed by the Linux Foundation.
• Hyper-V - Microsoft’s virtualization platform, or ‘hypervisor’, which enables administrators to make better use of their hardware by virtualizing multiple operating systems to run off the same physical server simultaneously.
• KVM - Kernel-based Virtual Machine (KVM) is an open source virtualization technology built into Linux. Specifically, KVM lets you turn Linux into a hypervisor that allows a host machine to run multiple, isolated virtual environments called guests or virtual machines (VMs).

What are Containers?

Containers are a way to wrap up an application into its own isolated “box”. The application in its container has no knowledge of any other applications or processes that exist outside of its box. Everything the application depends on to run successfully also lives inside this container. Wherever the box may move, the application will always be satisfied because it is bundled up with everything it needs to run.

Containers virtualize the OS instead of virtualizing the underlying computer like a virtual machine does.  They sit on top of a physical server and its host OS – typically Linux or Windows. Each container shares the host OS kernel and, usually, the binaries and libraries, too. Shared components are read-only. Sharing OS resources such as libraries significantly reduces the need to reproduce operating system code and means that a server can run multiple workloads with a single operating system installation. Containers are thus exceptionally light – they are only megabytes in size and take just seconds to start. By comparison, VMs take minutes to boot and are an order of magnitude larger than an equivalent container.

In contrast to VMs, all that a container requires is enough of an operating system, supporting programs and libraries, and system resources to run a specific program. This means you can put two to three times as many applications on a single server with containers than you can with VMs. In addition, with containers, you can create a portable, consistent operating environment for development, testing, and deployment. This is a huge benefit to keep the environments consistent.
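
For a sense of how lightweight this is in practice, the sketch below uses the Docker SDK for Python to start a service in a container and time it; the image name and port mapping are arbitrary choices for illustration, and a local Docker daemon is assumed.

import time
import docker  # Docker SDK for Python ("docker" on PyPI)

client = docker.from_env()

start = time.monotonic()
web = client.containers.run("nginx:alpine", detach=True, ports={"80/tcp": 8080})
print(f"Container {web.short_id} running after {time.monotonic() - start:.2f}s")

# ...exercise the service, run tests, etc...

web.stop()
web.remove()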

Containers isolate processes through differentiation in the operating system namespace and storage.  Leveraging operating-system-native capabilities, the container isolates process space, may create temporary file systems, and can relocate the process’s “root” file system.
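
Under the hood these are ordinary operating system primitives. The sketch below (Linux-only, requires root, and assumes a pre-built minimal root file system at a hypothetical path) shows the “relocated root” idea with nothing more than fork and chroot; real container runtimes layer namespaces, cgroups, and copy-on-write storage on top of the same idea.

import os

ROOTFS = "/srv/minimal-rootfs"  # hypothetical, pre-built tree containing /bin/sh and its libraries

pid = os.fork()
if pid == 0:
    # Child process: from its point of view, ROOTFS is now "/"
    os.chroot(ROOTFS)   # relocate the process "root" file system (requires root privileges)
    os.chdir("/")
    os.execv("/bin/sh", ["/bin/sh", "-c", "echo hello from inside the jail"])
else:
    os.waitpid(pid, 0)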

Benefits of Containers

One of the biggest advantages of a container is that you can set aside fewer resources per container than you might per virtual machine. Keep in mind, containers are essentially for a single application, while virtual machines need resources to run an entire operating system. For example, if you need to run multiple instances of MySQL, NGINX, or other services, using containers makes a lot of sense. If, however, you need a full web server (LAMP) stack running on its own server, there is a lot to be said for running a virtual machine. A virtual machine gives you greater flexibility to choose your operating system and upgrade it as you see fit. A container, by contrast, means the configured application is isolated from OS upgrades on the host.

Popular Container Providers

1. Docker - Nearly synonymous with containerization, Docker is the name of both the world’s leading containerization platform and the company that is the primary sponsor of the Docker open source project.
2. Kubernetes - Google’s most significant contribution to the containerization trend is this open source container orchestration platform it created.
3. Microsoft Azure - Although much of the early work on containers was done on the Linux platform, Microsoft has fully embraced Docker, Kubernetes, and containerization in general.  Azure offers two container orchestrators: Azure Kubernetes Service (AKS) and Azure Service Fabric.  Service Fabric represents the next-generation platform for building and managing these enterprise-class, tier-1 applications running in containers.
4. Amazon Web Services (AWS) - Of course, Microsoft and Google aren’t the only vendors offering a cloud-based container service. AWS has its own EC2 Container Service (ECS).
5. IBM - Like the other major public cloud vendors, IBM Bluemix also offers a Docker-based container service.
6. Red Hat - One of the early proponents of container technology, Red Hat claims to be “the second largest contributor to the Docker and Kubernetes codebases,” and it is also part of the Open Container Initiative and the Cloud Native Computing Foundation. Its flagship container product is its OpenShift platform as a service (PaaS), which is based on Docker and Kubernetes.

Uses for VMs vs Uses for Containers

Both containers and VMs have benefits and drawbacks, and the ultimate decision will depend on your specific needs, but there are some general rules of thumb.
• VMs are a better choice for running applications that require all of the operating system’s resources and functionality, when you need to run multiple applications on the same servers, or when you have a wide variety of operating systems to manage.
• Containers are a better choice when your biggest priority is maximizing the number of applications running on a minimal number of servers.

Container orchestrators

Because of their small size and application orientation, containers are well suited for agile delivery environments and microservice-based architectures. When you use containers and microservices, however, you can easily have hundreds or thousands of components in your environment. You may be able to manually manage a few dozen virtual machines or physical servers, but there is no way you can manage a production-scale container environment without automation. The task of automating and managing a large number of containers and how they interact is known as orchestration.

Scaling Workloads

Scaling containerized workloads is a completely different process from scaling VM workloads. Modern containers include only the basic services their functions require, but one of them can be a web server, such as NGINX, which also acts as a load balancer. An orchestration system, such as Kubernetes, can determine, based upon traffic patterns, when the quantity of containers needs to scale out; it can replicate container images automatically and then remove them from the system when load subsides.
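
The core decision an orchestrator makes can be sketched in a few lines. In the hypothetical loop below, read_request_rate() and scale_to() stand in for the metric source and the orchestrator API; production systems such as Kubernetes’ Horizontal Pod Autoscaler apply the same idea with richer signals and safeguards.

import time

TARGET_RPS_PER_REPLICA = 200          # assumed capacity of one container
MIN_REPLICAS, MAX_REPLICAS = 2, 50    # assumed guard rails

def desired_replicas(current_rps: float) -> int:
    wanted = max(MIN_REPLICAS, round(current_rps / TARGET_RPS_PER_REPLICA))
    return min(wanted, MAX_REPLICAS)

while True:
    rps = read_request_rate()        # assumed metric source (e.g. load balancer stats)
    scale_to(desired_replicas(rps))  # assumed call into the orchestrator's API
    time.sleep(30)                   # re-evaluate on a short interval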
For most, the ideal setup is likely to include both. With the current state of virtualization technology, the flexibility of VMs and the minimal resource requirements of containers work together to provide environments with maximum functionality.

If your organization is running many instances of the same operating system, then you should look into whether containers are a good fit. They just might save you significant time and money over VMs.


Why is launching an MVP so difficult?

April 29, 2019  |  Posted By: Robin McGlothin


Have you ever had that feeling of not knowing where to start?  For writers, it’s called writer’s block.  Painters call it blank-canvas syndrome.  Entrepreneurs and technologists refer to this phenomenon as analysis paralysis, an affliction experienced by all at one point or another.  It’s like having a stroke of genius for the next big idea, but not knowing where to start.

So let’s start by clearly defining the MVP:

A minimum viable product (MVP) is the version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least amount of effort.

Sounds simple … so what’s the issue?

MVP is one of the most misunderstood terms in our product jargon today.  We’ve heard from many a client that MVP is really just a crappy version of a product that is an embarrassment to show to customers.  Over and over, the dialog goes like this, “let’s just remove these features and call it the MVP version.”  Just last week, we heard, “just make it good enough to launch!”

But the purpose of the MVP is to LEARN about customer demand and usability before over-committing resources.  To make sure that we are only building what customers want.  An MVP is NOT a fully usable product that will delight customers.  It is simply a learning vehicle.

So, what’s the problem?

 
In The Power of Customer Mis-Behavior, Marty Abbott talks about the need to stay competitive: firms need to build great products, but they also need to lend those products to the uses and misuses of their customers and learn extensively from them.  He’s basically telling us to focus on discovery and on learning what customers really need, not what they say they want.

The point of an MVP is to validate or invalidate a specific hypothesis.  This is why we recommend starting discovery as soon as possible and relying heavily on user testing of prototypes.  But for some reason, most people hear the word Product and assume that it means the first version of a product.  And so, they build that version, release it and guess what…no one likes it. Well…no Duh!
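
When the MVP is instrumented, “validate or invalidate a specific hypothesis” can be as simple as comparing two conversion rates. The sketch below uses statsmodels for a two-proportion test; the counts are placeholders, not real data, and the 0.05 threshold is just the conventional choice.

from statsmodels.stats.proportion import proportions_ztest

conversions = [42, 57]    # converted users: [existing flow, MVP flow]  (placeholder numbers)
exposures   = [500, 480]  # users shown each experience                 (placeholder numbers)

stat, p_value = proportions_ztest(conversions, exposures)
if p_value < 0.05:
    print(f"Difference is significant (p={p_value:.3f}) - act on the result")
else:
    print(f"No significant difference yet (p={p_value:.3f}) - keep learning")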

But where to start?

 
Our advice is this: Don’t wait for the perfect product. Create an MVP and start discovery immediately.

Discovery Happens Along Two Dimensions:

  1. Discovery of the “What” – what something should do.  This is product discovery in defining or expanding an MVP: discovering what feature set (stories) it needs to be successful.
  2. Discovery of the “How” – how something should work to accomplish the best outcome of the “What.”  This is a hybrid of technical and product discovery meant to find the fastest or best path to a result.

Seems straightforward, but many clients have challenges keeping it simple.

One common way companies have overbuilt MVPs is by applying an old prioritization technique used for requirements. Each requirement is tagged one of the following:

  • Must-have
  • Should-have
  • Could-have
  • Would like to have/won’t-have

“Must-haves” are the essentials, “should-haves” are really important, “could-haves” might be sacrificed, and “would-likes” probably aren’t going to happen.

The problem with this is at least 60% of any requirements list gets classified as “musts.”  Several stakeholders demand their request is a “must” and fight like wild dogs to avoid “could” or “would” status.

A vicious cycle is created as stakeholders realize that nothing except “musts” will get done.  If it gets to the point where more than 60% of requirements get classified as “musts,” there may even be some “musts” that don’t get done. In some organizations, a stakeholder stampede is triggered every time someone says “MVP,” leading to a bloated first release, unless someone steps in to put stricter limitations on the requirements.

At the same time, other MVP creation pitfalls we commonly see and warn our clients about include:

  1. Making a poor product. The word “minimum” in MVP does not mean bad, buggy, or barely usable. “Minimum” means that the scope should be stripped of anything extra, but whatever features remain should be done in an intuitive and user-friendly way. The product should stay focused on what the customer is actually likely to buy.
  2. Building a product to sell. Change the sales approach for future customers: sell on risk shift, not features.  Move to discovery-driven rather than sales-led, contract-based product development.
  3. Difficulty in defining the minimum. Often, you want your first product to be as beautiful as it can be, and you are reluctant to throw away all the nice features you’ve thought of. As a result, you spend too much time and money and, even more damaging, lose focus on the core features. The rule of thumb when defining the scope of your MVP is “can we launch without this or not?” This should be your main criterion; add all the bells and whistles later, once the idea is validated.

Our recommended approach to avoid these pitfalls and launch a successful MVP is based on market-driven analysis with a minimum set of features identified for the go-to-market strategy.

The need for speed

Speed is everything when testing an MVP.  Many clients resist launching until a product is “perfect,” but here’s the news flash – it will never be perfect, and holding out for that status could ruin your product going forward.

According to Openview, 50% of SaaS companies fail in their first year due to misunderstanding their market, while 95% close up shop within five years for the same reason.  But strong, early MVP assessments allow you to determine whether you’re onto something (or not) in a low-risk environment.

When it comes to launching an MVP, progress is better than perfection.  The only goal is to put together a scaled-down version of your product or service and see whether clients are willing to buy in.

If your company is struggling to get its MVP to market, AKF Partners can help you implement a product strategy consistent with your innovation needs.

Photo by Kun Fotografi from Pexels


The AKF Difference

December 4, 2018  |  Posted By: Marty Abbott


During the last 12 years, many prospective clients have asked us some variation of the following questions: “What makes you different?”, “Why should we consider hiring you?”, or “How are you differentiated as a firm?”.

The answer has many components.  Sometimes our answers are clear indications that we are NOT the right firm for you.  Here are the reasons you should, or should not, hire AKF Partners:

Operators and Executives – Not Consultants

Most technology consulting firms are largely comprised of employees who have only been consultants or have only run consulting companies.  We’ve been in your shoes as engineers, managers and executives.  We make decisions and provide advice based on practical experience, having lived with the consequences of the decisions we’ve made in the past.

Engineers – Not Technicians

Educational institutions haven’t graduated enough engineers to keep up with demand within the United States for at least forty years.  To make up for the delta between supply and demand, technical training services have sprung up throughout the US to teach people technical skills in a handful of weeks or months.  These technicians understand how to put building blocks together, but they are not especially skilled at architecting solutions that are highly available, low latency, and low cost to develop and operate.

The largest technology consulting companies are built around programs that hire employees with non-technical college degrees.  These companies then teach these employees internally using “boot camps” – creating their own technicians.

Our company is comprised almost entirely of “engineers”; employees with highly technical backgrounds who understand both how and why the “building blocks” work as well as how to put those blocks together.

Product – Not “IT”

Most technology consulting firms are comprised of consultants who have a deep understanding of employee-facing “Information Technology” solutions.  These companies are great at helping you implement packaged software solutions or SaaS solutions such as Enterprise Resource Planning (ERP) systems, Customer Relationship Management (CRM) systems, and the like.  Put bluntly, these companies help you with solutions that you see as a cost center in your business.  While we’ve helped some partners who refuse to use anyone else with these systems, it’s not our focus and not where we consider ourselves to be differentiated.

Very few firms have experience building complex product (revenue generating) services and platforms online.  Products (not IT) represent nearly all of AKF’s work and most of AKF’s collective experience as engineers, managers and executives within companies.  If you want back-office IT consulting help focused on employee productivity there are likely better firms with which you can work.  If you are building a product, you do not want to hire the firms that specialize in back office IT work.

Business First – Not Technology First

Products only exist to further the needs of customers and, through that relationship, further the needs of the business.  We take a business-first approach in all our engagements, seeking to answer questions such as:  Can we find a way to build it faster, better, or cheaper?  Can we find ways to make it respond to customers faster, be more highly available, or be more scalable?  We are technology agnostic and believe that of the several “right” solutions for a company, a small handful will emerge displaying comparatively low cost, fast time to market, appropriate availability, scalability, appropriate quality, and low cost of operations.

Cure the Disease – Don’t Just Treat the Symptoms

Most consulting firms will gladly help you with your technology needs but stop short of solving the underlying causes creating those needs:  the skills, focus, processes, or organizational construction of your product team.  The reason for this is obvious: most consulting companies are betting that if the causes aren’t fixed, you will need them back again in the future.

At AKF Partners, we approach things differently.  We believe that we have failed if we haven’t helped you solve the reasons why you called us in the first place.  To that end, we try to find the source of any problem you may have.  Whether that be missing skillsets, the need for additional leadership, organization related work impediments, or processes that stand in the way of your success – we will bring these causes to your attention in a clear and concise manner.  Moreover, we will help you understand how to fix them.  If necessary, we will stay until they are fixed.

We recognize that in taking the above approach, you may not need us back.  Our hope is that you will instead refer us to other clients in the future.

Are We “Right” for You?

That’s a question for you, not for us, to answer.  We don’t employ sales people who help “close deals” or “shape demand”.  We won’t pressure you into making a decision or hound you with multiple calls.  We want to work with clients who “want” us to partner with them – partners with whom we can join forces to create an even better product solution.

 


The Importance of QA

November 20, 2018  |  Posted By: Robin McGlothin

“Quality in a service or product is not what you put into it.  It’s what the customer gets out of it.” – Peter Drucker




High levels of quality are essential to achieving company business objectives. Quality can be a competitive advantage and in many cases will be table stakes for success. High quality is not just an added value, it is an essential basic requirement. With high market competition, quality has become the market differentiator for almost all products and services.

There are many methods followed by organizations to achieve and maintain the required level of quality. So, let’s review how world-class product organizations make the most out of their QA roles. But first, let’s define QA. 

According to Wikipedia, quality assurance is “a way of preventing mistakes or defects in products and avoiding problems when delivering solutions or services to customers.” But there’s much more to quality assurance.

There are numerous benefits of having a QA team in place:

  1. Helps increase productivity while decreasing costs (QA HC typically costs less)
  2. Effective for saving costs by detecting and fixing issues and flaws before they reach the client
  3. Shifts focus from detecting issues to issue prevention

Teams and organizations looking to get serious about (or to further improve) their software testing efforts can learn something from looking at how the industry leaders organize their testing and quality assurance activities. It stands to reason that companies such as Google, Microsoft, and Amazon would not be as successful as they are without paying proper attention to the quality of the products they’re releasing into the world.  Taking a look at these software giants reveals that there is no one single recipe for success. Here is how five of the world’s best-known product companies organize their QA and what we can learn from them.

Google: Searching for best practices

How does the company responsible for the world’s most widely used search engine organize its testing efforts? It depends on the product. The team responsible for the Google search engine, for example, maintains a large and rigorous testing framework. Since search is Google’s core business, the team wants to make sure that it keeps delivering the highest possible quality, and that it doesn’t screw it up.

To that end, Google employs a four-stage testing process for changes to the search engine, consisting of:

  1. Testing by dedicated, internal testers (Google employees)
  2. Further testing on a crowdtesting platform
  3. “Dogfooding,” which involves having Google employees use the product in their daily work
  4. Beta testing, which involves releasing the product to a small group of Google product end users

Even though this seems like a solid testing process, there is room for improvement, if only because communication between the different stages and the people responsible for them is suboptimal (leading to things being tested either twice over or not at all).

But the teams responsible for Google products that are further away from the company’s core business employ a much less strict QA process. In some cases, the only testing is done by the developer responsible for a specific product, with no dedicated testers providing a safety net.

In any case, Google takes testing very seriously. In fact, testers’ and developers’ salaries are equal, something you don’t see very often in the industry.

Facebook: Developer-driven testing

Facebook does not employ any dedicated testers at all. Instead, the social media giant relies on its developers to test their own (as well as one another’s) work. Facebook employs a wide variety of automated testing solutions. The tools that are used range from PHPUnit for back-end unit testing to Jest (a JavaScript test tool developed internally at Facebook) to Watir for end-to-end testing efforts.

Like Google, Facebook uses dogfooding to make sure its software is usable. Furthermore, it is somewhat notorious for shaming developers who mess things up (breaking a build or causing the site to go down by accident, for example) by posting a picture of the culprit wearing a clown nose on an internal Facebook group. No one wants to be seen on the wall-of-shame!

Facebook recognizes that there are significant flaws in its testing process, but rather than going to great lengths to improve, it simply accepts the flaws, since, as they say, “social media is nonessential.” Also, focusing less on testing means that more resources are available to focus on other, more valuable things.

Rather than testing its software through and through, Facebook tends to use “canary” releases and an incremental rollout strategy to test fixes, updates, and new features in production. For example, a new feature might first be made available only to a small percentage of the total number of users.

Figure – Canary releases: incremental rollout of features

By tracking the usage of the feature and the feedback received, the company decides either to increase the rollout or to disable the feature, either improving it or discarding it altogether.
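
A minimal sketch of how such an incremental rollout can be implemented: hash the user id into a stable bucket so the same user always gets the same experience, and ramp the percentage up or down from configuration. The feature name and user id below are placeholders, not anything Facebook actually uses.

import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100     # deterministic bucket 0-99 per user and feature
    return bucket < percent

# Ramp from 1% to 100% (or back to 0) by changing a single configuration value.
if in_rollout(user_id="12345", feature="new_reactions", percent=5):
    print("show the canary experience")
else:
    print("show the existing experience")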


Amazon: Deployment comes first

Like Facebook, Amazon does not have a large QA infrastructure in place. It has even been suggested (at least in the past) that Amazon does not value the QA profession. Its ratio of about one test engineer to every seven developers also suggests that testing is not considered an essential activity at Amazon.

The company itself, though, takes a different view of this. To Amazon, the ratio of testers to developers is an output variable, not an input variable. In other words, as soon as it notices that revenue is decreasing or customers are moving away due to anomalies on the website, Amazon increases its testing efforts.

The feeling at Amazon is that its development and deployment processes are so mature (the company famously deploys software every 11.6 seconds!) that there is no need for elaborate and extensive testing efforts. It is all about making software easy to deploy, and, equally if not more important, easy to roll back in case of a failure.

Spotify: Squads, tribes and chapters

Spotify does employ dedicated testers. They are part of cross-functional teams, each with a specific mission. At Spotify, employees are organized according to what’s become known as the Spotify model, constructed of:

  1. Squads. A squad is basically the Spotify take on a Scrum team, with less focus on practices and more on principles. A Spotify dictum says, “Rules are a good start, but break them when needed.” Some squads might have one or more testers, and others might have no testers at all, depending on the mission.
  2. Tribes are groups of squads that belong together based on their business domain. Any tester that’s part of a squad automatically belongs to the overarching tribe of that squad.
  3. Chapters. Across different squads and tribes, Spotify also uses chapters to group people that have the same skillset, in order to promote learning and sharing experiences. For example, all testers from different squads are grouped together in a testing chapter.
  4. Guilds. Finally, there is the concept of a guild. A guild is a community of members with shared interests. These are a group of people across the organization who want to share knowledge, tools, code and practices.

Figure – Spotify team structure: squads, tribes, chapters, and guilds

Testing at Spotify is taken very seriously. Just like programming, testing is considered a creative process, and something that cannot be (fully) automated. Contrary to most other companies mentioned, Spotify heavily relies on dedicated testers that explore and evaluate the product, instead of trying to automate as much as possible.  One final fact: In order to minimize the efforts and costs associated with spinning up and maintaining test environments, Spotify does a lot of testing in its production environment.

Microsoft: Engineers and testers are one


Microsoft’s ratio of testers to developers is currently around 2:3, and like Google, Microsoft pays testers and developers equally—except they aren’t called testers; they’re software development engineers in test (or SDETs).

The high ratio of testers to developers at Microsoft is explained by the fact that a very large chunk of the company’s revenue comes from shippable products that are installed on client computers & desktops, rather than websites and online services. Since it’s much harder (or at least much more annoying) to update these products in case of bugs or new features, Microsoft invests a lot of time, effort, and money in making sure that the quality of its products is of a high standard before shipping.

What can you learn from world-class product organizations?  If the culture, views, and processes around testing and QA can vary so greatly at five of the biggest tech companies, then it may be true that there is no one right way of organizing testing efforts. All five have crafted their testing processes, choosing what fits best for them, and all five are highly successful. They must be doing something right, right?

Still, there are a few takeaways that can be derived from the stories above to apply to your testing strategy:

  1. There’s a “testing responsibility spectrum,” ranging from “We have dedicated testers that are primarily responsible for executing tests” to “Everybody is responsible for performing testing activities.” You should choose the one that best fits the skillset of your team.
  2. There is also a “testing importance spectrum,” ranging from “Nothing goes to production untested” to “We put everything in production, and then we test there, if at all.” Where your product and organization belong on this spectrum depends on the risks that will come with failure and how easy it is for you to roll back and fix problems when they emerge.
  3. Test automation has a significant presence in all five companies. The extent to which it is implemented differs, but all five employ tools to optimize their testing efforts. You probably should too – a minimal example follows this list.
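
As a tiny illustration of that third point, here is what a first automated test can look like using pytest (one common Python choice); the discount function is a stand-in for your own production code.

import pytest

def apply_discount(price: float, percent: float) -> float:
    # stand-in for your own production code
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_happy_path():
    assert apply_discount(100.0, 20) == 80.0

def test_apply_discount_rejects_bad_input():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)

Running “pytest” against a file like this on every commit is the smallest version of the automation all five companies rely on.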

Bottom line, QA is relevant and critical to the success of your product strategy. If you’d tried to implement a new QA process but failed, we can help.

 

 


Are you compromised?

September 14, 2018  |  Posted By: Larry Steinberg

It’s important to acknowledge that a core competency of hackers is hiding their tracks and maintaining dormancy for long periods after they’ve infiltrated an environment. They could also be using exploits you have not protected against – so, given all of this, how do you know you are not currently compromised? Hackers are skilled hidden operators with many ‘customers’ to prey on. They will focus on a customer or two at a time and then shut down activities to move on to another unsuspecting victim. It’s in their best interest to keep a low profile, and you might not know they are operating (or waiting) in your environment with access to your key resources.

Most international hackers are well organized, well educated, and have development skills that most engineering managers would admire if not for the malevolent subject matter. Rarely are these hacks performed by bots, most occur by humans setting up a chain of software elements across unsuspecting entities enabling inbound and outbound access. 

What can you do? To start, don’t get complacent about your security. Even if you have never been compromised – or have been compromised and eradicated what you know about – you’ll never know for sure whether you are currently compromised. As a practice, it’s best to assume that you are, to look for evidence of compromise, and to keep identifying ways to keep attackers out. Hacking is dynamic and threats are constantly evolving.

There are standard practices of good security hygiene to follow – the NIST Cybersecurity Framework and the OWASP Top 10. Further, for your highest-value environments, here are some questions you should consider: Would you know if these systems had configuration changes? Would you be aware of unexpected processes running? If you have interesting information in your operating or IT environment and the bad guys get in, it is of no value to them unless they can get that information back out of the environment – so where is your traffic going? Can you model expected outbound traffic and monitor it? The answer should be yes. Then you can look for abnormalities and even correlate this traffic with other activities in your environment.
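
A minimal sketch of that outbound-traffic question: compare aggregated flows against an allowlist and a recorded baseline and flag anything unexpected. The destinations, baselines, and the 3x threshold are illustrative assumptions; a real implementation would read NetFlow, VPC flow logs, or egress proxy logs.

# Assumed inputs: per-destination baselines and daily flow aggregates.
BASELINE_BYTES = {
    "payments.example.com": 5_000_000,
    "api.partner.example.com": 2_000_000,
}
ALLOWED = set(BASELINE_BYTES)

def review_outbound(flows):
    """flows: iterable of (destination, bytes_sent) aggregated per day."""
    alerts = []
    for dest, sent in flows:
        if dest not in ALLOWED:
            alerts.append(f"unexpected destination: {dest} ({sent} bytes)")
        elif sent > 3 * BASELINE_BYTES[dest]:   # arbitrary 3x-baseline threshold
            alerts.append(f"volume anomaly to {dest}: {sent} vs baseline {BASELINE_BYTES[dest]}")
    return alerts

for alert in review_outbound([("payments.example.com", 40_000_000), ("203.0.113.9", 120_000)]):
    print(alert)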

Just as you and your business are constantly evolving to service your customers and to attract new ones, the bad guys are evolving their practices too. Some of their approaches are rudimentary because we allow it but when we buckle down they have to get more innovative. Ensure that you are constantly identifying all the entry points and close them. Then remain diligent to new approaches they might take. 

Don’t forget the most common attack vector - humans. Continue evolving your training and keep the awareness high within your staff - technical and non-technical alike.

Your default mental model should be that you don’t know what you don’t know. Utilize best practices for security and continue to evolve. Utilize external or build internal expertise in the security space and ensure that those skills are dynamic and expanding. Utilize recurring testing practices to identify vulnerabilities in your environment and to prepare against emerging attack patterns. 

We commonly help organizations identify and prioritize security concerns through technical due diligence assessments. Contact us today.


Open Source Software as a malware “on ramp”

August 21, 2018  |  Posted By: Larry Steinberg

Open Source Software (OSS) is an efficient means to building out solutions rapidly with high quality.  You utilize crowdsourced design, development and validation to conveniently speed your engineering.  OSS also fosters a sense of sharing and building together - across company boundaries or in one’s free time.

So just pull a library down off the web, build your project, and your company is ready to go.  Or is that the best approach? What could be in this library you’re including in your solution that might not be wanted?  This code will be running in critical environments – your SaaS servers, internal systems, or customer systems.  Convenience comes at a price, and there are some well-known cases of hacks embedded in popular open source libraries.

What is the best approach to getting the benefits of OSS and maintaining the integrity of your solution? 

Good practices are a necessity to ensure a high level of security.  Just like when you utilize OSS and then test functionality, scale, and load - you should be validating against vulnerabilities.  Pre-production vulnerability and penetration testing is a good start.  Also, utilize good internal process and reviews.  Keep the process simple to maintain speed but establish internal accountability and vigilance on code that’s entering your environment.  You are practicing good coding techniques already with reviews and/or peer coding - build an equivalency with OSS.

Always utilize known-good repositories and validate the project sponsors.  Perform diligence on the committers just like you would for your own employees.  You likely perform some type of background check on your employees before making an offer – whether going to a third party or simply looking them up on LinkedIn and asking around.  OSS committers carry the same risk to your company – why not do the same for them?  Understandably, you probably wouldn’t do this for a third-party purchased solution, but your contract or expectation is that the vendor is already doing this and abiding by minimum security standards.  That may not be true for your OSS solutions, and as such your responsibility for validation is at least slightly higher.  There are plenty of projects coming from reputable sources that you can rely on.

Ensure that your path to production only includes artifacts built from internal sources which were either developed or reviewed by your team.  Also, be intentional about OSS library upgrades; these should be planned and part of the process.
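
A simple way to enforce that only internally built and reviewed artifacts move toward production is to check each candidate against an internally published digest manifest.  The sketch below is illustrative only; the manifest format and file names are hypothetical.

```python
# Minimal sketch of a promotion gate: only artifacts whose SHA-256 digest
# appears in an internally published manifest (built from your own reviewed
# sources) are allowed to proceed toward production.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(artifact: Path, manifest: Path) -> bool:
    """Return True only if the artifact's digest matches the internal manifest."""
    approved = json.loads(manifest.read_text())  # e.g. {"app-1.4.2.tar.gz": "<sha256>"}
    expected = approved.get(artifact.name)
    return expected is not None and expected == sha256_of(artifact)
```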

OSS is highly leveraged in today’s software solutions and provides many benefits.  Be diligent in your approach to ensure you only see the upside of open source.

Need additional help?  Contact Us!


SaaS Risk and Value Shift

August 2, 2018  |  Posted By: Marty Abbott

The movement to SaaS specifically, and more broadly to “Anything as a Service” (XaaS), is driven by demand-side (buyer) forces.  In early cases within any industry, the buyer seeks competitive advantage over competitors.  The move to SaaS allows the buyer to focus on core competencies, increasing investment in the areas that create true differentiation.  Why spend payroll on an IT staff to support ERP, mail, and CRM solutions when that same payroll could be spent on engineers to build differentiating product features or on a larger sales staff to generate more revenue?

As time moves on and as the technology adoption lifecycle advances, the remaining buyers for any product feel they have no choice; the talent and capabilities to run a compelling solution for the company simply do not exist.  As such, the late majority and laggard adopters are almost “forced” into renting a service over purchasing software.

Whether for competitive reasons, as in the case of early adopters through the early majority, or for lack of alternatives as in the case of late majority and laggards, the movement to SaaS and XaaS represents a shift in risk as compared to the existing purchased product options.  This shift in risk is very much like the shift that happens between purchasing and leasing a home.
 
Renting a home or an apartment is almost always more expensive than owning the same dwelling.  The reason should be clear: the person owning the property expects to make a profit beyond the costs of carrying a mortgage and performing upkeep over the life of the investment.  There are special “inversion” cases where renting is less expensive, such as in a low-demand rental market, but these cases tend to reset market ownership prices (house prices fall) because rents no longer cover mortgages and ownership no longer makes sense.

Ownership, while almost always less expensive, comes with more risk: the risk of maintenance activities, the risk of market prices, and the risk and costs associated with remodeling to keep the property attractive. 

The matrix below helps put the shift described above into context.

Risk and Cost Shift Inherent to SaaS Transitions

A customer who “owns” an on-premise solution also “owns” a great deal of risk for all of the components necessary to achieve their desired outcomes: equipment, security, power, licenses, the “-ilities” (like availability), disaster recovery, release management, and monitoring of the solution.  The primary components of this risk include fluctuation in asset valuation, the useful life of the asset, and, most importantly, the risk that the customer does not have the right skills to maximize the value these components can create.

A customer who “rents” a SaaS solution transfers most of these risks to a provider who specializes in the solution and therefore should be better able to manage the risk and optimize outcomes.  In exchange, the customer typically pays a risk premium relative to ownership.  However, given that the provider can likely operate the solution more cost effectively, especially if it is a multi-tenant solution, the risk premium may be small.  Indeed, in extreme cases where the company can eliminate headcount, say after eliminating all on-premise solutions, the lessee may experience an overall reduction in cost.

But what about the provider of the service?  After all, the “old world” of simply slinging code and letting customers take all the risk was mighty appealing; the provider enjoyed low cost of goods sold (and high gross margins) plus revenue streams from both licensing and customization.  In the new model, the provider expects higher revenue from the risk premium charged for the service.  The provider also expects better overall margins through greater efficiencies in running solutions with significant demand concentration at scale.  The risk premium more than compensates the provider for the increased cost of goods sold relative to the on-premise business.  Overall, the provider assumes risk in exchange for greater value creation.  Both the customer and the provider win.
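
A small worked example helps show why both sides can win.  The numbers below are purely hypothetical and chosen only to illustrate the mechanics of the risk premium, not to represent any real engagement.

```python
# Illustrative arithmetic only - hypothetical numbers showing how the customer
# and the provider can both come out ahead when the provider runs the service
# more efficiently than each customer could on its own.
customer_on_prem_cost = 1_000_000   # customer's annual cost to own: staff, licenses, equipment
provider_cost_to_serve = 600_000    # provider's annual cost for the same workload (multi-tenant, automated)
risk_premium = 0.25                 # premium the provider charges over its own cost

saas_subscription = provider_cost_to_serve * (1 + risk_premium)          # 750,000
customer_savings = customer_on_prem_cost - saas_subscription             # 250,000
provider_gross_margin = 1 - provider_cost_to_serve / saas_subscription   # 20%

print(f"subscription: {saas_subscription:,.0f}")
print(f"customer saves: {customer_savings:,.0f}")
print(f"provider gross margin: {provider_gross_margin:.0%}")
```

The customer pays a premium over the provider's cost yet still spends less than running the solution itself, while the provider earns a margin for taking on the operational risk.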

Architecture and product financial decisions are key to achieving the margins above. 

Cost Models of Various Cloud Implementations

Gross margins are directly correlated with the level of tenancy of any XaaS provider (Y axis).  As such, while we want to avoid “all tenancy” for availability reasons, we desire a high level of tenancy to maximize equipment utilization and increase gross margins.  Other drivers of gross margins include the level of demand upon shared components and the level of automation on all components – the latter driving down cost of labor.

The X axis of the chart above shows the operating expense associated with various business models.  Multi-tenant XaaS offerings collapse the number of “releases supported in the wild” – reducing the operating expense (and increasing gross margins) associated with managing a code base. 
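
To make the two levers concrete, the toy model below (assumed relationships, not a published AKF formula) computes margin as a function of equipment utilization and the number of releases supported in the wild.

```python
# Toy model with assumed relationships: higher tenancy drives up equipment
# utilization (lower COGS), and fewer supported releases drives down the
# operating expense of maintaining the code base.
def annual_margin(revenue, hardware_cost, utilization, releases_supported,
                  cost_per_release=50_000):
    cogs = hardware_cost / max(utilization, 0.01)   # poor utilization inflates COGS
    opex = releases_supported * cost_per_release    # each live release carries a support cost
    return (revenue - cogs - opex) / revenue

# single-tenant, many releases vs. multi-tenant, one release
print(f"{annual_margin(2_000_000, 400_000, utilization=0.25, releases_supported=8):.0%}")
print(f"{annual_margin(2_000_000, 400_000, utilization=0.70, releases_supported=1):.0%}")
```

Even with identical revenue, the multi-tenant, single-release case produces a dramatically better margin than the poorly utilized, many-release case.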

Another way of viewing this is to look at the relative costs of software maintenance and administration costs for various business models.

Cloud Cost Models - Operating Expense and Cost of Goods Sold

Plotted in the “low COGS” (X axis), “low maintenance” quadrant of the figure above is “True XaaS”.  Fewer versions of a release reduce our cost to maintain a code base, and high equipment utilization and automation reduce our cost to provision the service. 

In the upper right and unattractive quadrant is the ASP (Application Service Provider) model, where we have less control over equipment utilization (it is typically provisioned for individual customers) and less control over defining the number of releases. 

Hosting a solution for the customer, rather than having them run it on-premise, may reduce our maintenance costs if we are successful in reducing the number of releases, but it significantly increases our cost of goods sold.  This differs from the on-premise model (upper left), in which the customer bears the cost of equipment but we have a high number of releases to maintain.  The XaaS solution is clearly the best overall position for maximizing margins.

AKF Partners helps companies transition on-premise, licensed software products to SaaS and XaaS solutions.  Let us help you on your journey.

