
Growth Blog

Scalability and Technology Consulting Advice for SaaS and Technology Companies

The Importance of QA

November 20, 2018  |  Posted By: Robin McGlothin

“Quality in a service or product is not what you put into it.  It’s what the customer gets out of it.”  Peter Drucker



High levels of quality are essential to achieving company business objectives. Quality can be a competitive advantage and in many cases will be table stakes for success. High quality is not just an added value, it is an essential basic requirement. With high market competition, quality has become the market differentiator for almost all products and services.

There are many methods followed by organizations to achieve and maintain the required level of quality. So, let’s review how world-class product organizations make the most out of their QA roles. But first, let’s define QA. 

According to Wikipedia, quality assurance is “a way of preventing mistakes or defects in products and avoiding problems when delivering solutions or services to customers.” But there’s much more to quality assurance than that.

There are numerous benefits of having a QA team in place:

  1. Helps increase productivity while decreasing costs (QA headcount typically costs less)
  2. Effective for saving costs by detecting and fixing issues and flaws before they reach the client
  3. Shifts focus from detecting issues to issue prevention

Teams and organizations looking to get serious about (or to further improve) their software testing efforts can learn something from looking at how the industry leaders organize their testing and quality assurance activities. It stands to reason that companies such as Google, Microsoft, and Amazon would not be as successful as they are without paying proper attention to the quality of the products they’re releasing into the world.  Taking a look at these software giants reveals that there is no one single recipe for success. Here is how five of the world’s best-known product companies organize their QA and what we can learn from them.

Google: Searching for best practices

How does the company responsible for the world’s most widely used search engine organize its testing efforts? It depends on the product. The team responsible for the Google search engine, for example, maintains a large and rigorous testing framework. Since search is Google’s core business, the team wants to make sure that it keeps delivering the highest possible quality, and that it doesn’t screw it up.

To that end, Google employs a four-stage testing process for changes to the search engine, consisting of:

  1. Testing by dedicated, internal testers (Google employees)
  2. Further testing on a crowdtesting platform
  3. “Dogfooding,” which involves having Google employees use the product in their daily work
  4. Beta testing, which involves releasing the product to a small group of Google product end users

Even though this seems like a solid testing process, there is room for improvement, if only because communication between the different stages and the people responsible for them is suboptimal (leading to things being tested either twice over or not at all).

But the teams responsible for Google products that are further away from the company’s core business employ a much less strict QA process. In some cases, the only testing is done by the developer responsible for a specific product, with no dedicated testers providing a safety net.

In any case, Google takes testing very seriously. In fact, testers’ and developers’ salaries are equal, something you don’t see very often in the industry.

Facebook: Developer-driven testing

Facebook does not employ any dedicated testers at all. Instead, the social media giant relies on its developers to test their own (as well as one another’s) work. Facebook employs a wide variety of automated testing solutions. The tools that are used range from PHPUnit for back-end unit testing to Jest (a JavaScript test tool developed internally at Facebook) to Watir for end-to-end testing efforts.
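
For readers unfamiliar with Jest, a unit test written against it is deliberately terse. The sketch below is illustrative only; the `add` function and file name are hypothetical and not anything from Facebook’s codebase:

```typescript
// math.test.ts – a minimal Jest unit test for a hypothetical add() function
import { add } from "./math";

describe("add", () => {
  test("sums two numbers", () => {
    expect(add(2, 3)).toBe(5);
  });

  test("handles negative values", () => {
    expect(add(2, -5)).toBe(-3);
  });
});
```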

Like Google, Facebook uses dogfooding to make sure its software is usable. Furthermore, it is somewhat notorious for shaming developers who mess things up (breaking a build or causing the site to go down by accident, for example) by posting a picture of the culprit wearing a clown nose on an internal Facebook group. No one wants to be seen on the wall-of-shame!

Facebook recognizes that there are significant flaws in its testing process, but rather than going to great lengths to improve, it simply accepts the flaws, since, as they say, “social media is nonessential.” Also, focusing less on testing means that more resources are available to focus on other, more valuable things.

Rather than testing its software through and through, Facebook tends to use “canary” releases and an incremental rollout strategy to test fixes, updates, and new features in production. For example, a new feature might first be made available only to a small percentage of the total number of users.

[Figure: Canary incremental rollout of features]

By tracking the usage of the feature and the feedback received, the company decides either to increase the rollout or to disable the feature, either improving it or discarding it altogether.
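
One common way to implement this kind of incremental rollout is a deterministic percentage gate: hash the user’s ID so that each user consistently falls in or out of the rollout bucket, and drive the percentage from configuration so it can be raised (or dropped to zero) without a deploy. The sketch below is illustrative, not Facebook’s implementation; the feature name and hashing choice are assumptions:

```typescript
import { createHash } from "crypto";

// Deterministically map a user to a bucket in [0, 100) so the same user
// always gets the same answer for a given feature.
function bucketFor(featureName: string, userId: string): number {
  const digest = createHash("sha256").update(`${featureName}:${userId}`).digest();
  return digest.readUInt32BE(0) % 100;
}

// rolloutPercent comes from a config service and can change without a code deploy.
function isFeatureEnabled(featureName: string, userId: string, rolloutPercent: number): boolean {
  return bucketFor(featureName, userId) < rolloutPercent;
}

// Example: expose the hypothetical "new-checkout" flow to 5% of users.
console.log(isFeatureEnabled("new-checkout", "user-42", 5));
```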


Amazon: Deployment comes first

Like Facebook, Amazon does not have a large QA infrastructure in place. It has even been suggested (at least in the past) that Amazon does not value the QA profession. Its ratio of about one test engineer to every seven developers also suggests that testing is not considered an essential activity at Amazon.

The company itself, though, takes a different view of this. To Amazon, the ratio of testers to developers is an output variable, not an input variable. In other words, as soon as it notices that revenue is decreasing or customers are moving away due to anomalies on the website, Amazon increases its testing efforts.

The feeling at Amazon is that its development and deployment processes are so mature (the company famously deploys software every 11.6 seconds!) that there is no need for elaborate and extensive testing efforts. It is all about making software easy to deploy, and, equally if not more important, easy to roll back in case of a failure.

Spotify: Squads, tribes and chapters

Spotify does employ dedicated testers. They are part of cross-functional teams, each with a specific mission. At Spotify, employees are organized according to what’s become known as the Spotify model, constructed of:

  1. Squads. A squad is basically the Spotify take on a Scrum team, with less focus on practices and more on principles. A Spotify dictum says, “Rules are a good start, but break them when needed.” Some squads might have one or more testers, and others might have no testers at all, depending on the mission.
  2. Tribes. Tribes are groups of squads that belong together based on their business domain. Any tester that’s part of a squad automatically belongs to the overarching tribe of that squad.
  3. Chapters. Across different squads and tribes, Spotify also uses chapters to group people that have the same skillset, in order to promote learning and sharing experiences. For example, all testers from different squads are grouped together in a testing chapter.
  4. Guilds. Finally, there is the concept of a guild. A guild is a community of members with shared interests – a group of people across the organization who want to share knowledge, tools, code, and practices.

[Figure: Spotify team structure – squads, tribes, chapters, and guilds, with experts distributed across teams]

Testing at Spotify is taken very seriously. Just like programming, testing is considered a creative process, and something that cannot be (fully) automated. Contrary to most other companies mentioned, Spotify heavily relies on dedicated testers that explore and evaluate the product, instead of trying to automate as much as possible.  One final fact: In order to minimize the efforts and costs associated with spinning up and maintaining test environments, Spotify does a lot of testing in its production environment.

Microsoft: Engineers and testers are one


Microsoft’s ratio of testers to developers is currently around 2:3, and like Google, Microsoft pays testers and developers equally—except they aren’t called testers; they’re software development engineers in test (or SDETs).

The high ratio of testers to developers at Microsoft is explained by the fact that a very large chunk of the company’s revenue comes from shippable products that are installed on client computers & desktops, rather than websites and online services. Since it’s much harder (or at least much more annoying) to update these products in case of bugs or new features, Microsoft invests a lot of time, effort, and money in making sure that the quality of its products is of a high standard before shipping.

What can you learn from world-class product organizations?  If the culture, views, and processes around testing and QA can vary so greatly at five of the biggest tech companies, then it may be true that there is no one right way of organizing testing efforts. All five have crafted their testing processes, choosing what fits best for them, and all five are highly successful. They must be doing something right, right?

Still, there are a few takeaways that can be derived from the stories above to apply to your testing strategy:

  1. There’s a “testing responsibility spectrum,” ranging from “We have dedicated testers that are primarily responsible for executing tests” to “Everybody is responsible for performing testing activities.” You should choose the one that best fits the skillset of your team.
  2. There is also a “testing importance spectrum,” ranging from “Nothing goes to production untested” to “We put everything in production, and then we test there, if at all.” Where your product and organization belong on this spectrum depends on the risks that will come with failure and how easy it is for you to roll back and fix problems when they emerge.
  3. Test automation has a significant presence in all five companies. The extent to which it is implemented differs, but all five employ tools to optimize their testing efforts. You probably should too.

Bottom line, QA is relevant and critical to the success of your product strategy. If you’ve tried to implement a new QA process but failed, we can help.

 

 


Diagnosing and Fixing Software Development Performance

November 20, 2018  |  Posted By: Roger Andelin


Diagnosing the cause of poor performance from your engineering team is difficult and can be costly for the organization if done incorrectly.  Most everyone will agree that a high performing team is more desirable than a low performing team.  However, there is rarely agreement as to why teams are not performing well and how to help them improve.  For example, your CFO may believe the team does not have good project management and that more project management will improve the team’s performance.  Alternatively, the CEO may believe engineers are not working hard enough because they arrive at the office late.  The CMO may believe the team is simply bad and everyone needs to be replaced.

Oftentimes, your CTO may not even know the root causes of poor performance, or even recognize there is a performance problem, until peers begin to complain.  However, there are steps an organization can take to quickly uncover the root cause of poor performance, present those findings to stakeholders for greater understanding, and properly remove the impediments to higher performance.  Those steps may include some of the solutions suggested by others, but without a complete understanding of the problem, performance will not improve and incorrect remedies will often make the situation worse.  In other words, adding more project management does not always solve a problem with on-time delivery, but it will add more cost and overhead.  Requiring engineers to start each day at 8 AM sharp may give the appearance that work is getting done, but it may not directly improve velocity.  Firing good engineers who face legitimate challenges to their performance may do irreversible harm to the organization: it may appear arbitrary to others and create more fear in the department, resulting in unwanted attrition.  Taking improper action makes things worse rather than better.

How can you know what action to take to fix an engineering performance problem?  The first step in that process is to correctly define and agree upon what good performance looks like.  Good performance is comprised of two factors: velocity and value.

Velocity is defined as the speed at which the team works, and value is defined as the achievement of business goals.  Velocity is measured in story points, which represent the amount of work completed.  Value is measured in business terms such as revenue, customer satisfaction, or conversion.  High performing engineering teams work quickly, and their work has a measurable impact on business goals.  High performing teams put as much focus on delivering a timely release as they do on delivering the right release to achieve a business goal.

Once you have agreement on the definition of good engineering performance, rate each of your engineering teams against the two criteria: velocity and value.  You may use a chart like the one below:

[Chart: Rating teams on velocity and business value – teams high on both are the 10x performers]

Once each team has been rated, write down a narrative that justifies the rating.  Here are a few examples:

Bottom Left: Velocity and Value are Low

“My requests always seem to take a long time.  Even the most simple of requests takes forever.  And, when the team finally gets around to completing the request, often times there are problems in production once the release is completed.  These problems have negatively impacted customers’ confidence in us so not only are engineers not delivering value – they are eroding it!”

Upper/Middle Left: Velocity is Good and Value is Low

“The team does get stuff done.  Of course I’d like them to go faster, but generally speaking they are able to get things done in a reasonable amount of time.  However, I can’t say if they are delivering value – when we release something we are not tracking any business metrics so I have no way of knowing!”

Upper Right: Velocity is High and Value is High

“The team is really good.  They are tracking their velocity in story points and have goals to improve velocity.  They are already up 10% over last year.  Also, they instrument all their releases to measure business value.  They are actively working with product management to understand what value needs to be delivered and hypothesize with the stakeholders as to what features will be best to deliver the intended business goal.  This team is a pleasure to work with.”

Unknown Velocity and Unknown Value

“I don’t know how to rate this team.  I don’t know their velocity; it’s always changing and seems meaningless.  I think the team does deliver business value, but they are not measuring it so I cannot say if it is low or high.”

With narratives in hand it’s time to begin digging for more data to support or invalidate the ratings.

Diagnosing Velocity Problems

Engineering velocity is a function of time spent developing.  Therefore, the first question to answer is “what is the maximum amount of time my team is able to spend on engineering work under ideal conditions?”

This is a calculated value.  For example, start with a 40 hour work week.  Next, assuming your teams are following an Agile software development process, for each engineering role subtract out the time needed each week for meetings and other non-development work.  For individual contributors working in an Agile process that number is about 5 hours per week (for stand up, review, planning and retro).  For managers the number may be larger.  For each role on the team sum up the hours.  This is your ideal maximum.

Next, with the ideal maximum in hand, compare that to the actual achievement.  If your teams are not logging hours against their engineering tasks, they will need to do so in order to complete this exercise.  Evaluate the gap between the ideal maximum and the actual.  For example, if the ideal number is 280 hours and the team is logging 200 hours, then the gap is 80 hours.  You need to determine where those 80 hours are going and why.  Here are some potential problems to consider (a quick calculation sketch follows the list):

  1. Teams are spending extra time in planning meetings to refine requirements and evaluating effort.
  2. Team members are being interrupted by customer incidents which they are required to support.
  3. The team must support the weekly release process in addition to their other engineering tasks.
  4. Miscellaneous meetings are being called by stakeholders, including project status meetings and updates.
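
To make the arithmetic concrete, here is a small sketch of the ideal-maximum and gap calculation; the team size and hours are the same illustrative numbers used above:

```typescript
interface TeamMember {
  name: string;
  weeklyHours: number;   // nominal work week, e.g. 40
  ceremonyHours: number; // stand-up, planning, review, retro (~5 for individual contributors)
}

// Ideal maximum: hours available for engineering work under ideal conditions.
function idealMaximum(team: TeamMember[]): number {
  return team.reduce((sum, m) => sum + (m.weeklyHours - m.ceremonyHours), 0);
}

// Hypothetical eight-person team: 8 * (40 - 5) = 280 ideal hours.
const team: TeamMember[] = Array.from({ length: 8 }, (_, i) => ({
  name: `engineer-${i + 1}`,
  weeklyHours: 40,
  ceremonyHours: 5,
}));

const ideal = idealMaximum(team); // 280
const logged = 200;               // actual hours logged against engineering tasks
console.log(`gap to investigate: ${ideal - logged} hours`); // 80
```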

As you dig into this gap it will become clear what needs to be fixed.  The results will probably surprise you.  For example, one client was faced with a software quality problem.  Determined to improve their software quality, the client added more quality engineers, built more unit tests, and built more automated system tests.  While there is nothing inherently wrong with this, it did not address the root cause of their poor quality: Rushing.  Engineers were spending about 3-4 hours per day on their engineering tasks.  Context switching, interruptions and unnecessary meetings eroded quality engineering time each day.  As a result, engineers rushing to complete their work tasks made novice mistakes.  Improving engineering performance required a plan for reducing engineering interruptions, unnecessary meetings, and enabling engineers to spend more uninterrupted time on their development tasks.

At another client, the frequency of production support incidents was impacting team velocity.  Engineers were being pulled away from their daily engineering tasks to work on problems in production.  This had gone on so long that, while nobody liked it, they accepted it as normal.  It’s not normal!  Digging into the issue, the root cause was uncovered: the process for managing production incidents was ineffective.  Every incident was urgent, and nearly every incident disrupted the engineering team.  To improve this, a triage process was introduced whereby each incident was classified and either assigned an urgent status (which would create an interruption for the team) or something lower, which was then placed on the product backlog (no interruption for the team).  We also learned the old process (every incident was urgent) was in part a response to another velocity problem: stakeholders believed that unless something was considered urgent it would never get fixed by the engineering team.  By having an incident triage process – a procedure for when something would get fixed based on its urgency – the engineering team and the stakeholders solved this problem.
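
The triage rule itself can be captured in a few lines. The severity thresholds and routing below are illustrative assumptions, not the client’s actual policy:

```typescript
type Severity = "urgent" | "high" | "low";

interface Incident {
  id: string;
  description: string;
  customersImpacted: number;
  revenueImpacting: boolean;
}

// Classify the incident, then route it: only truly urgent work interrupts the
// sprint; everything else goes to the product backlog to be prioritized.
function triage(incident: Incident): { severity: Severity; route: "interrupt-team" | "product-backlog" } {
  const severity: Severity =
    incident.revenueImpacting ? "urgent" :
    incident.customersImpacted > 100 ? "high" : "low";
  return { severity, route: severity === "urgent" ? "interrupt-team" : "product-backlog" };
}

console.log(triage({ id: "INC-17", description: "checkout errors", customersImpacted: 12, revenueImpacting: true }));
```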

At AKF, we are experts at helping engineering teams improve efficiency and performance, fix velocity problems, and increase the value they deliver.  In many cases, the prescription for the team is not obvious.  Our consultants help company leaders uncover the root causes of their performance problems, establish vision, and execute prescriptions that result in meaningful change.  Let us help you with your performance problems so your teams can perform at their best!

 


The Elusive Data Breach

November 15, 2018  |  Posted By: James Fritz

“but in this world nothing can be said to be certain, except death and taxes.”

-Benjamin Franklin

...and data breaches.  Given the era in which Benjamin Franklin lived, the concept of a data breach was far from any possibility.  But in the world we live in, it is a certainty.  Death, taxes, and data breaches.  Welcome to the 21st Century.


So how did we get here?  Death and taxes are for someone else to explain, but the data breach I will help flesh out.  This article begins to explore a world in which a data breach is not something you hope never happens, but something you prepare for.  In the coming weeks I will explore what can be done when the inevitable does occur.

Data Breaches in the 21st Century

“My system is completely secure,” says the guy who is already breached and just doesn’t know it.
Why is a data breach such a certainty these days?  It comes down to four areas: similarity, interconnections, users, and motive.

Similarity
In 2015 Microsoft had a great marketing plan to upgrade as many older OSes as possible to the current release: offer the upgrade for free.  Issues with upgrading (or tricking users into upgrading against their will) aside, this built a quick base for Windows 10 and allowed it to overtake version 7 in December 2017 as the most adopted version of Windows.  Couple this with the fact that Windows is one of the most widely used OSes and you now have a nicely populated target base. 
This isn’t to say that Windows machines are more susceptible than other machines, but that given their popularity and the scheduled release of updates, malicious people are able to identify the weaknesses being patched and target machines that are slower to update.  In an ideal world patches would be applied in a timely manner, but there are always extenuating circumstances that keep this from happening.  So now if your POS (Point of Sale) system is several patches behind, there is an exploit that can target its weakened state.

Interconnections
Don’t feel like being breached and exploited via the internet?  Never get on the internet.  Simple answer, but not a feasible one given the world in which we live. 
At AKF we have a tenet of Build vs. Buy.  From a cost perspective it doesn’t make sense to build something that you know very little about if a 3rd party already offers it for a reasonable price.  But cost isn’t the only factor to weigh when it comes to connecting to a 3rd party.  Risk is another major one.  Is the interaction between your system and the 3rd party limited enough to help insulate you from their potential compromise?  Integrations through APIs usually help solve this issue, but sometimes a more thorough coupling of the software is necessary.
So being reliant on an additional entity (or several) in that 3rd party creates another vector through which to be compromised.  And to top it off, you usually don’t have any insight into their security posture.  They may be obligated to provide you with quarterly security scans, but that doesn’t mean they don’t turn off their highly vulnerable machines prior to each scan.

Users
Congratulations on having an extremely secure system that doesn’t rely on 3rd parties being secure as well.  You are now brought down by an employee who thinks they won a raffle.
If this all sounds like a horrible “Choose Your Own Adventure,” then you are in the right mindset.  It doesn’t matter what you do to protect your systems because you have Users.  This isn’t to say that all Users can’t be trusted, but there are degrees to how much trust they should have.  And whether inadvertent or purposeful, they are an extremely susceptible entry point for a breach to occur.  Advanced threats are getting smarter and smarter at crafting emails that get past basic email filters and, once opened, create a backdoor for them to access the system.  Once persistent callbacks are established, all traffic looks like the User is generating it internally, and most security allows User-initiated traffic a higher degree of freedom. 

Motive
Have you ever had a bone to pick with a company and didn’t care about the legal ramifications of what you did?  Hopefully not.  But that segment does exist.  Whether you inadvertently wronged a former customer (at least according to them), or you have something that someone else covets, they are going to move heaven and earth to get it.  The only thing worse than a malicious actor casting a wide net in the hopes of getting a compromise to stick is someone specifically targeting your business.  It can become an obsession for them.
Maybe they want access to the banking records you protect, or maybe they would just like to see you embarrassed; either way, this is a worst-case scenario for a business.  Someone who refuses to stop until they compromise your system will use everything available, leveraging similarity, interconnections, and your Users to gain access.

You’ve Been Breached
Congratulations!  You’ve been breached?!?  Not really the accolade you were looking for, but one you need to accept.  They say the first step is Acceptance, so if you’ve made it this far, you’re where you need to be.  Don’t believe you are breached, or will be in the future?  Feel free to read the article again and really ask yourself if you are as secure as you think you are.  The above are just small snippets of the overall vulnerability you may have. 

-Don’t use Windows?  Well, Linux doesn’t guarantee you won’t be compromised. 
-Not connected to anyone?  Possible if you are a brick-and-mortar store that only accepts cash.
-You employ the savviest Users?  Everyone makes a mistake from time to time.
-Never upset someone or owned something they could want?  You aren’t a business then.


So what comes next?  Well, for that there are a lot of articles out there explaining how to help shore up your system.  One such article comes from our very own Larry Steinberg: Are You Compromised?  The important thing is to pick the area where you feel you are weakest and go from there.  More often than not this revolves around user training.  But maybe checking off some items from the Australian Cyber Security Centre’s Essential Eight will help. 

Ultimately you should have the best ideas on how to secure your system, but if you find that you need some assistance looking at your product holistically, with security in mind, AKF can help.


Putting Customer Interaction First in Software Development

November 1, 2018  |  Posted By: Pete Ferguson

[Image: Despair.com “Apathy” demotivator – “If we don’t take care of the customer, maybe they’ll stop bugging us.”]

Eat Your Own Dog Food

Eating your own dog food is a common phrase that is cynical from the start – unless you like eating dog food! A more positive, but often overused, cliche is “Be the Customer.” Regardless of how you want to phrase it, the goal is to create solutions that win over your customers (who, by the way, hopefully include your engineers!).

We recently had an opportunity to walk in the shoes of one of our clients’ customers, and it was painfully obvious within the first minute that user input had not been considered in the ultimate design of the product. The methodology for feedback involved Post-it notes and paper forms as opposed to a simple feedback button on the application appliance. The end users were frustrated, and recurring problems required creative manipulation by the person closest to the customer while software developers remained insulated from valuable input.

The rise and fall of companies from top dog to B player (or worse) occurs at an ever quickening pace. Some companies get a second chance, but it takes a lot of effort to get even close to catching up. The best scenario of course is to never lose the number one spot, and the key to staying ahead lies in understanding your customers and innovating and providing appealing solutions for them to be successful.

[Image: Despair.com “Disservice” demotivator – “It takes months to find a customer, but only seconds to lose one. The good news is that we should run out of them in no time.”]

Combat Complacency

Tenure, stock options, and other “Golden Handcuffs” are meant for retention, but can often backfire into lulling employees into a comfortable complacency.

So, how do you combat complacency or customer disservice? Many companies take creative routes to hold hackathons and other contests to improve user experience. The best companies allow their customers to vote on which products should be prioritized in the pipeline. But the most effective path to success is to ensure an open dialogue between engineers and those on the front lines using your products.

When I was at eBay there was an issue that had been dogging (no pun intended) customers for a long time and was frustrating customer service agents to no end. John Donahoe, then CEO, was visiting the customer service location in Utah, and at a lunch Q&A the frustration came out. During the luncheon, John had someone contact the responsible California-based engineers and arranged to have them all fly to the CS center the next morning, if not that evening. It was communicated to John that the flights home for the engineers at the end of the week were sold out, so he rearranged his travel and took them back on the corporate jet.

I was a driver to get them to the executive terminal at the airport. John raced out of the car and stood at the foot of the red carpet to salute and shake the engineers’ hands after a few grueling days of eating their dog food by sitting on customer calls and meeting with very frustrated agents.

The message was clear to all involved, “no more finger pointing” – engineers were tied at the hip with customer service reps on the front lines with the customers.

Avoid complacency by ensuring your developers and product managers walk in the customer’s shoes and hear from customers regularly. The better informed they are, the better the solutions they will develop.

Redefine the Definition of “Done”

An important aspect of incorporating customers into the development process is an OKRs (Objectives [at AKF we prefer “Outcomes”] and Key Results) focus. What is the desired customer behavior for new features, fixing existing features, etc.? It’s the big “so what” question that needs to be asked often.

If you are trying to increase customer engagement by 10% for a new product or service, then the project is not “done” because of a code release for a new product or features – it is “done” when customer engagement is increased by 10%! So that is when you have the party, not when code is released.

Word usage may seem trivial, but it is our experience that clarity must be uniform and consistent with actions. Team members pay attention to the little things and make the correlations between the words a CEO speaks at an All Hands and what behaviors are observed day in and day out. Having goals around work performed instead of changed customer behavior will likely result in a lot of code being released with very little “so what” for your customers which provides space for an upstart or competitor to edge their way in. If AOL, WebCrawler, Yahoo, AskJeeves, or Excite had provided great consistent search for their customers, Google wouldn’t have been able to take over and dominate. And if Google doesn’t continue to provide great search, the next “Google” will find a space to wiggle in and dominate in the future.

Customers have a balance of wanting their current needs filled with a look to the future. Be clear on what “done” means for your customers on each project and hold off celebrating success – not just effort exerted – until your customers can see that you are done.

Stay Focused on the Bigger Picture

The famous quote attributed to Henry Ford is “If I had asked people what they wanted, they would have said faster horses.”

Often we see teams getting micro-focused on a fringe case and creating solutions for a minority of end users. While your product will demonstrate value for a few, you may miss the boat for the majority of your customers, and they may move on.

I recall touring a facility in the US for a security software integrator. I walked by a set of cubicles with hundreds of security cameras set up and asked what they were doing. I was told the team of 30+ engineers were testing each camera to ensure it works with their software.

[Image: Despair.com “Tradition” demotivator – “Just because you’ve always done it that way doesn’t mean it’s not incredibly stupid.”]

Meanwhile, their software was not capable of two-factor authentication, which was fast becoming a major blind spot for their organization. If making sure every brand of camera worked was really a profit center, they should have outsourced that testing offshore for the same result at a fraction of the cost and put their top engineers on something with customer value and profitability. I doubt supporting hundreds of cameras was a major differentiator – certainly not enough to tie up top-paid engineers. As the consumer, if I knew they supported 30-60 cameras perfectly, I’d be fine with picking one from the list. And the lack of two-factor authentication was causing major roadblocks for my team in getting infosec approval to continue using their software.

It is important to step back often and look at where teams are exerting the most effort and to ask the simple question “why are we doing this?” If your answer is “because we’ve always done it that way,” you are likely not maximizing customer value. If your reasoning aligns with maximizing longer-term customer value, then you are on track.

Stay Two Steps Ahead

One of my first jobs was as an apprentice for a building contractor. Chris had a very small crew and specialized in high-end home remodeling and additions. My first day on the job he said “see that pile of trash?” Yes, I replied. “See those dumpsters out that window?” Yes, I said again. “Stop standing around and get to work!” Over the summer I learned that when we showed up on a new job site the saws and compressor and hoses and nail guns needed to be set up as soon as his truck slowed to a stop. My job was to anticipate what we would be doing next and make sure the right equipment was set up and ready to go before we needed it.

Steve Jobs’ famous modernization of Henry Ford’s faster horses quote: “It’s really hard to design products by focus groups. A lot of times, people don’t know what they want until you show it to them.”

Much to the chagrin of many customers, Jobs dragged everyone into USB with the original iMac (while eliminating the floppy disk), and Apple has continued to drag customers into faster adoption of Bluetooth, SSDs, and now USB-C. For the most part, the gambles have paid off and customers adopt and enjoy single-cable interfaces, faster transfer speeds, etc. (and Apple generates a lot of revenue on dongles for themselves and others …).

Agile is all about quickly adapting and changing. Similarly, as you look at where your customers are today, don’t lose sight of where you need to take them tomorrow. Provide the vision two steps ahead of where you are today. 

Conclusions

Look, it is easy – and entertaining – to get lost in pet projects that provide a challenge and are good career development, but if they aren’t providing customer value, save these projects for when there is overflow time. In any large corporation it can be easy to lose sight of customer and end user needs and get lost in endless meetings, pet projects, and other seemingly urgent, but not important, activities.

The main thing is to keep the main thing the main thing … and your customers/end users are THE Main Thing! Agile development provides the ongoing opportunity to comb the backlog and prioritize projects that have the most customer (shareholder, and hopefully employee) value. So make sure the efforts of your team and your organization are laser focused on what will immediately provide the maximum customer value. This will provide the needed profits to hire more staff and provide room to add in non-functional requirements and R&D projects as part of the larger, ongoing development process.

Let us help you improve your main thing!

Image Credits

For a good laugh, visit demotivators.com.


A Financial Analogy for Technical Debt

October 12, 2018  |  Posted By: Bill Armelin


Understanding Technical Debt

During the course of our client engagements, there are a few common topics or themes that are always discussed, and the clients themselves usually introduce them. One such area is technical debt. Every team has it, every team believes they have too much of it, and every team struggles to explain why it’s important to address it.

Let’s start by defining what technical debt means. It is the difference between doing something the “desired” or “best” way and doing something quickly (i.e., to reduce time to market). The difference results in the company taking on “debt” within the solution. Technical debt requires acting with forethought.  In other words, you only assume technical debt knowingly and by commission.  Acts of omission (forgetting to plan or do something) do not count as debt.  Our business partners may think we are hiding the truth if we do not clearly delineate the difference between debt (known assumptions) and mistakes, failures, or other issues related to maintenance.

The following list provides examples of things that are not tech debt:

  • Software defects (unless we decide to NOT fix them for an extended period of time – but defects are still human failures – not debt.)
  • Failures in design that are not previously tagged as debt.
  • Failures to identify scalability bottlenecks.
  • Poor choices in technology components that fail to scale.
  • Failure to properly identify infrastructure failures, or high failure rates of vendors in infrastructure.

A Financial Analogy for Tech Debt

When you hear the words “technical debt”, it invokes a negative connotation. However, the judicious use of tech debt is a valuable addition to your product development process. Tech debt is analogous to financial debt. Companies can raise capital to grow their business by either issuing equity or issuing debt. Issuing equity means giving up a percentage of ownership in the company and dilutes current shareholder value. Issuing debt requires the payment of interest but does not give up ownership or dilute shareholder value. Issuing debt is good, until you can’t service it. Once you have too much debt and cannot pay the interest, you are in trouble.


Tech debt operates in the same manner. Companies use tech debt to defer performing work on a product. As we develop our minimum viable product, we build a prototype, gather feedback from the market, and iterate. The parts of the product that didn’t meet the definition of minimum or the decisions/shortcuts made during development represent the tech debt that was taken on to get to the MVP. This is the debt that we must service in later iterations. In fact, our definition of done must include the servicing of the resulting tech debt. Taking on tech debt early can pay big dividends by getting your product to market faster. However, like financial debt, you must service the interest. If you don’t, you will begin to see scalability and availability issues. At that point, refactoring the debt becomes more difficult and time critical. It begins to affect your customers’ experience.


Many development teams have a hard time convincing leadership that technical debt is a worthy use of their time. Why spend time refactoring something that already “works” when you could use that time to build new features customers and markets are demanding now? The danger with this philosophy is that by the time technical debt manifests itself as a noticeable customer problem, it’s often too late to address it without a major undertaking. It’s akin to not having a disaster recovery plan when a major availability outage strikes. To get the business on board, you must make the case using language business leaders understand – again, this is often financial in nature.  Be clear about the cost of such efforts and quantify the business value they will bring by calculating their ROI. Demonstrate the cost avoidance that is achieved by addressing critical debt sooner rather than later – calculate how much the cost will be in the future if the debt is not addressed now. The best practice is to get leadership to agree and commit to a certain percentage of development time that can be allocated to addressing technical debt on an ongoing basis. If they do, it’s important not to abuse this responsibility.  Do not let engineers alone determine what technical debt should be paid down and at what rate – it must have true business value that is greater than or equal to spending that time on other activities.

Just as with debt that a company assumes, technical debt in and of itself is not bad.  It can be looked at as a leveraging tool to optimize technology resources in the short term – delaying a hardware tech refresh or the release date for HTML 5, for example. Delaying attention to technical issues allows greater resources to be focused on higher-priority endeavors. The absence of technical debt probably means missed business opportunities – use technical debt as a tool to best meet the needs of the business. However, excessive technical debt will cause availability and scalability issues, and can choke business innovation (too much engineering time dealing with debt rather than focusing on the product).

Develop a technology balance sheet and profit and loss (income) statement to discuss tech debt with the business in a manner they understand – finance. Let’s first look at the balance sheet, where Assets = Liabilities + Equity. Our assets are the engineering time spent creating the product. Liabilities are the principal of the tech debt (i.e., the difference between “desired” and “actual”). Equity is the remainder, or the engineering resources spent creating the product that did not contribute to tech debt.

[Figure: Technology balance sheet – Assets = Liabilities + Equity]

Here is an example of a technology balance sheet:

[Figure: Example of a technology balance sheet]

To further the financial analogy, we need to have a technology P&L statement. Here, the interest on tech debt is the difficulty or increased level of effort in modifying something in subsequent releases. This manifests as a reduction in developer productivity per unit of value created. The more debt you take on, or the less principal you pay down, the higher your interest payment becomes – and the higher the cost to the organization.

[Figure: Technology profit and loss statement]

Dedicating resources on an ongoing basis to service technical debt can be a challenging discussion with the business.  Resources are always limited and employing them in the manner which best benefits the business is a critical business priority decision.  Similar to the notion of debt within business, you should never take on technical debt without a plan to pay the interest (increased future cost of development) and principal (fixing the difference between appropriate and as-is). Relating technical debt to financial debt can help those outside of your technology organization grasp the concept and understand the need to keep technical debt under control.

One way to make the concept of debt real is to estimate, for any debt item, the amount of “interest” one will need to pay in the future to modify the solution in question.

    Example:

    • For the benefit of time to market, you decide to “hard code” a number of “display strings” that you’d rather set aside in a resource file to modify and translate later.
    • You save 2 weeks of development time, creating a 2-week liability on your balance sheet.  You have a 2-week principal to fix.
    • You estimate that for all future string modifications (or translations) it will take an additional day of development.  Your interest is 1 day, payable for each modification.
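
In code (or spreadsheet) form, the same hard-coded-strings example looks like the sketch below; the numbers simply mirror the illustration above and are purely illustrative:

```typescript
interface DebtItem {
  name: string;
  principalDays: number;         // development time saved when the shortcut was taken
  interestDaysPerChange: number; // extra effort every time the affected code is touched
}

// Total "interest paid" so far, plus the outstanding principal to retire.
function debtCost(item: DebtItem, changesSoFar: number): { interestPaid: number; principalOutstanding: number } {
  return {
    interestPaid: item.interestDaysPerChange * changesSoFar,
    principalOutstanding: item.principalDays,
  };
}

const hardCodedStrings: DebtItem = {
  name: "hard-coded display strings",
  principalDays: 10,        // ~2 weeks of development time saved (the liability)
  interestDaysPerChange: 1, // one extra day per string modification or translation
};

// After 8 string changes, 8 days of interest have been paid and 10 days of principal remain.
console.log(debtCost(hardCodedStrings, 8));
```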

Just as retiring all financial liabilities at once does not make good business sense, trying to wipe out technical debt in one fell swoop is a bad idea. Continuous service to the technical debt is required to prevent technical liabilities from wiping out technical equity.  An informed decision to increase debt service to reduce the principal will result in more productive product development time (smaller debt requires less on-going service).  A short-term decision to reduce tech debt service in favor of a critical product launch may be viable if not used often. Keep track of both your principal (balance sheet) and your interest payments (income statement).  Use these to help your business partners with debt related decisions.

Do NOT mix the cost of defects, or other infrastructure and software mistakes with tech debt.  Doing so creates two very big problems:

  • It becomes harder for the technology team to learn from past mistakes.  Mistakes are mistakes and we should use them as learning opportunities.  Debt is taken thoughtfully. Track them separately and treat them differently.
  • Using the debt term for non-debt related items will lower the level of trust between you and the business.  Businesses don’t, for instance, “mistakenly” take on debt.  Mixing these terms can cause relationship problems.

Additionally, be clear about how you define technical debt, so time spent paying it down is not commingled with other activities. Bugs in your code are not technical debt. Refactoring your code base to make it more scalable, however, would be.  A good test is to ask whether the path you chose was a conscious or unconscious decision – that is, whether you decided to go in one direction knowing that you would later need to refactor. With debt, you are making a specific decision to do or not to do something knowing that you will need to address it later. Bugs are found in sloppy code, and that is not tech debt; it is just bad code.

Prioritizing Tech Debt

So how do you decide what tech debt should be addressed, and how do you prioritize? If you have been tracking work with Agile storyboards and product backlogs, you should have an idea of where to begin. Also, if you track your problems and incidents as we recommend, this will show elements of tech debt that have begun to manifest themselves as scalability and availability concerns. Set a budget and begin paying down the debt.  If you are spending less than 12% of development time on tech debt, you are not spending enough effort. If you are spending over 25%, you are probably fixing issues that have already manifested themselves, and you are trying to catch up. Setting an appropriate budget and maintaining it over the course of your development efforts will keep the debt serviced and help prevent issues from arising.

Taking on technical debt to fund your product development efforts is an effective method to get your product to market quicker. But, just like financial debt, you need to take on an appropriate amount of tech debt that you can service by making the necessary interest and principal payments to reduce the outstanding balance. Failing to set an appropriate budget will result in a technical “bankruptcy” that will be much harder to dig yourself out of later.

Tech Debt Takeaways

Here is a list of our tech debt takeaways:

[Figure: Tech debt takeaways]

Want help reducing your tech debt? We can help.

 


Cloud Security Misconceptions

September 19, 2018  |  Posted By: Greg Fennewald

Cloud hosting is growing rapidly, with many companies leveraging the cloud to deliver all or a portion of their products and services.  This trend is unlikely to change any time soon as cloud hosting has commoditized digital infrastructure.

One of the concerns with cloud hosting we often hear from our clients is security – security of data stored in the cloud, access controls for the compute resources, and even physical access concerns.  While these concerns are valid to a certain extent, they are all rooted in misconceptions about cloud hosting.

Stripped of all marketing glitz, buzzword bingo points, and misconceptions, cloud hosting is a passel of servers, switches, and storage devices living in a large data center.  Who owns and maintains the hardware and facility is really the primary difference between cloud hosting and company owned data centers or traditional colocation services.

Let’s look at some of the common cloud security misconceptions:

Data Security and System Access - there is a fear that energy-drink-guzzling teenagers will steal your sensitive data if you store it in the cloud.  Your sensitive data is encrypted at rest, right?  If not, you’re right in thinking that cloud is not for you.  Neither is technology.  Polish up that resume.  Encrypting data is an industry best practice that is rapidly becoming a base expectation, but it does not alleviate you from notifying those potentially impacted by a breach. 
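
For reference, the mechanics of encrypting data at rest are the easy part on most platforms. The sketch below uses Node’s built-in crypto module with AES-256-GCM; key management (a KMS or secrets vault rather than a hard-coded key) is the part that takes real work and is only hinted at here:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

// In practice the key comes from a KMS or secrets vault, never from source code.
const key = randomBytes(32); // 256-bit key, stand-in for a managed key

function encryptAtRest(plaintext: string): { iv: Buffer; ciphertext: Buffer; tag: Buffer } {
  const iv = randomBytes(12); // fresh 96-bit nonce per record
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decryptAtRest(record: { iv: Buffer; ciphertext: Buffer; tag: Buffer }): string {
  const decipher = createDecipheriv("aes-256-gcm", key, record.iv);
  decipher.setAuthTag(record.tag);
  return Buffer.concat([decipher.update(record.ciphertext), decipher.final()]).toString("utf8");
}

console.log(decryptAtRest(encryptAtRest("4111-1111-1111-1111"))); // round-trips the sensitive value
```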

The appropriate risk management approach is the set of policies and procedures controlling system access and thus access to data.  In addition to your own policies, the major players in cloud hosting have proven policies and procedures that comply with multiple regulatory requirements and have been repeatedly audited.  They are most likely better at it than you.  The security certifications of major cloud hosting providers can be found here and here.  How does that compare to your program?  How much would it cost for your company to achieve and maintain the same level of certification?  Are your security requirements drastically different from other companies already using cloud hosting?  Chances are that the cloud provider’s capabilities and your own security program can meet your security needs.

Physical Security - concerns about physical security at cloud hosting locations are typically the result of a lack of topical knowledge.  Cloud data centers have fewer people entering them each day as compared to a traditional colocation data center, where customers bring in their own hardware and work on it inside the shared data center.  Cloud hosting customers do not have physical access to the cloud data centers.  Those entering a cloud data center on a daily basis are either provider employees or service partners - people who have undergone mature access control procedures.

Major cloud hosting providers operate dozens of data centers.  Physical security policies and safeguards have evolved over time and are thoroughly tested.  Just as with system access controls, cloud providers are most likely better at physical security than you.

Economies of Scale

A key reason behind cloud providers being good at logical access control, regulatory compliance, and physical security is the scale at which the major players operate.  They can afford the talent, technology, tools, and oversight. 

The economies of scale that enable cloud providers to deliver the capacity and service quality the market demands are at work in the security arena as well.  Combined with the broad regulatory compliance needs of their customers, these economies of scale enable cloud providers to be better than most across the board in security.

In Conclusion

Regardless of where the infrastructure is hosted, a sound security program should include practices such as:

  • Secure coding standards
  • Role based access control
  • Multi-factor authentication
  • Logged access to systems and data
  • Data encryption at rest
  • Data classification procedure
  • Network segmentation
  • Data egress monitoring
  • Security threat matrix
  • Incident response plan

Combined with the security capabilities of cloud providers, a sound security program should enable nearly any company to make use of cloud hosting in a manner that benefits the business.

Interested in cloud options, but unsure how to proceed?  AKF Partners has helped many clients with cloud strategy and SaaS transition.  More about our services can be found here.


The Domino or Multiplicative Effect of Failure

September 18, 2018  |  Posted By: Pete Ferguson

[Figure: AKF Partners swim lanes]
As part of our Technical Due Diligence and Architectural reviews, we always want to see a company’s system architecture, understand their process, and review their org chart.  Without ever setting foot at a client, we can begin to see the forensic evidence of potential problems.

Like that ugly couch you bought when you were in college and still have in your front room, inefficiencies in architecture, process, and organization are often nostalgic holdovers that have long since outlived their purpose.  While you have become used to the ugly couch, outsiders look in and immediately recognize it as the eyesore it is – and customers often feel the inefficiencies through slow page loads and shopping cart issues.  “That’s how it has always been” is never a good motto when designing systems, processes, and organizations for flexibility, availability, and scalability.

It is always interesting to hear companies talk with the pride of a parent describing an unruly kid when they use phrases like “our architecture/organization is very complex” or “our systems/organization has a lot of interdependent components” – as if either of these things were something special or desirable!  Great architectures are sketched out on a napkin in seconds, not hours.


All systems fail.  Complex systems fail miserably, and – like dominoes – take down neighboring systems as well, resulting in latency, downtime, and/or flat-out failure.

ARCHITECTURE & SOFTWARE

Some common observations in hardware/software we repeatedly see:

Encrypt Everything

Problem: Overloaded F5s or other similar firewalls are trying to encrypt all data because Personally Identifiable Information (PII) is stored in plain text, usually as the result of a business decision made long ago that no one can quite recall and an auditor who once said “encrypt everything” to protect it.  Because no one person is responsible for a 30,000-foot view of the architecture, each team happily works in its silo, and the decision to encrypt is held up like a trophy without anyone seeing that the F5 is often running hot, causing latency, and is now a bottleneck (resulting in costly requests for more F5s) doing something it has no business doing in the first place.

Solution: Segregate all PII, tokenize it and only encrypt the data that needs to be encrypted, speeding up throughput and better isolating and protecting PII.
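
A tokenization approach can be sketched in a few lines: swap the sensitive value for an opaque token at the edge, keep the real value only in an isolated, encrypted vault, and let the rest of the system pass tokens around freely. The in-memory map below is a stand-in assumption for a real vault service:

```typescript
import { randomUUID } from "crypto";

// Isolated PII vault: the only place real values live (encrypted at rest there).
const vault = new Map<string, string>();

// Replace a sensitive value with an opaque token the rest of the system can use freely.
function tokenize(piiValue: string): string {
  const token = `tok_${randomUUID()}`;
  vault.set(token, piiValue);
  return token;
}

// Only the few services with vault access ever detokenize.
function detokenize(token: string): string | undefined {
  return vault.get(token);
}

const ssnToken = tokenize("123-45-6789");
// Downstream services log, store, and route ssnToken; no bulk encryption load is needed for it.
console.log(ssnToken, detokenize(ssnToken));
```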

[Figure: AKF Scale Cube – sensitive data segregation]

Integration (or Rather Lack Thereof) Of Mergers & Acquisitions

Problem: A recent (and often not-so-recent) flurry of acquisitions is resulting in cross-data-center calls in and out of firewalls.  Purchased companies are still in their own data center or public cloud, and the entire workflow of a customer request crisscrosses the country multiple times, causing latency – and if one thing goes wrong (remember, everything fails …), timeouts result in customer frustration and lost transactions. 

Solution: Integrate services within one isolated stack or swim lane – either hosted or public cloud – to avoid cross data center calls.  Replicate services so that each datacenter or cloud instance has everything it needs.

Monolithic Databases

Problem: As the company grew and gained more market share, the search for bigger and better has resulted in a monolithic database that is slow, requires specialized hardware, specialized support, ongoing expensive software licenses, and maintenance fees.  As a result, during peak times the database slows everyone and everything down.  The temptation is to buy bigger and better hardware and pay higher monthly fees for more bandwidth.

Solution: Break down databases by customer, region, or other Z-Axis splits on the AKF Scale Cube.  This has multiple wins – you can use commodity servers instead of large, complex file storage, failure of one database will not affect the others, you can place customer data closest to the customer by region, and adding additional servers does not require a long lead time or a request for substantial capital expenditure.
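
At its core, a Z-axis split is a routing decision made before a query is issued. A minimal sketch follows; the shard list and modulo scheme are illustrative, and region- or lookup-based routing works the same way:

```typescript
// Connection strings for small commodity databases, one per shard.
const shards = [
  "postgres://db-na-1/orders",
  "postgres://db-na-2/orders",
  "postgres://db-eu-1/orders",
  "postgres://db-eu-2/orders",
];

// Z-axis routing: a deterministic function of the customer decides which
// database holds that customer's data, so no single monolithic DB serves everyone.
function shardFor(customerId: number): string {
  return shards[customerId % shards.length];
}

console.log(shardFor(1042)); // every request for customer 1042 lands on the same shard
```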

[Figure: AKF Scale Cube – swim lanes]


PROCESSES & ORGANIZATION

What sets AKF apart is that we don’t just look at systems, we always want to understand the people and organization supporting the system architecture as well and here there are additional multiplicative effects of failure.  We have considerable expertise working for and with Fortune 100 companies, startups, and agencies in many different competencies.  The common mistakes we see on the organization side of the equation:

Lack of Cross Functional Teams

Problem: Agile Scrum teams do not have all the resources needed within the team to be self-sufficient and autonomous.  As a result, teams wait on other internal resources for approvals or answers to questions in order to complete a Sprint – or keep these items on the backlog because the effort estimate is too high.  This results in slower time to market, the loss of what could have been a competitive advantage, and lower revenue.

Solution: Create cross-functional teams so that each Sprint can be completed with the necessary access to security, architecture, QA, and other resources.  This doesn’t mean each team needs a dedicated resource from each discipline – one resource can support multiple teams.  The information needed can be greatly augmented by creating guilds, where the subject matter expert (SME) can “deputize” multiple people on what is required to meet policy.  Guilds utilize published standards and provide a dedicated channel of communication to the SME, greatly simplifying and speeding up the approval process.

Lack of Automation

Problem: It isn’t done enough!  As a result, people wait on other people for needed approvals.  Often the excuse is that there isn’t enough time or there aren’t enough resources.  In most cases, when we do the math, the cost of not automating far outweighs the short-term investment, and automation brings a continuous long-term payout.  We often see that the individual with the deployment knowledge feels insecure and doesn’t want automation because they feel their job is threatened.  This is a very short-sighted approach that requires coaching so they can see how much more valuable they are to the organization by stepping out of the way rather than stifling progress.

Solution: Automate everything possible: testing, quality assurance, security compliance, code compliance (which means you need a good architectural review board and standards), and more.  Automation is the gift that keeps on giving and is part of the “secret sauce” of the top companies who are our clients.
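As one small example of what this can look like in practice, here is a sketch of an automated pre-deploy gate that runs the checks a human would otherwise sign off on. The specific tools invoked (pytest, flake8, bandit) are assumptions; substitute your own test, lint, and security tooling.

```python
# Minimal sketch of "automate the approvals": a pre-deploy gate that blocks the
# release if any required check fails, removing the wait for a human sign-off.
import subprocess
import sys

CHECKS = [
    ("unit tests",      ["pytest", "-q"]),
    ("code compliance", ["flake8", "."]),
    ("security scan",   ["bandit", "-r", "src"]),
]

def gate() -> bool:
    for name, cmd in CHECKS:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"BLOCKED: {name} failed - fix before deploying")
            return False
        print(f"PASSED: {name}")
    return True

if __name__ == "__main__":
    sys.exit(0 if gate() else 1)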

Not Empowering Teams to Get Stuff Done!

Problem: Often teams work in silos, focused only on their own tasks, and are quick to blame others for their lack of success.  They have been delegated tasks, but not the ability to get stuff done.

Solution: Similar to cross-functional teams, each team must also be given the authority to make decisions (which is why you want the right people from a variety of dependencies on the team) and get stuff done.  An empowered team will iterate much faster and likely with a lot more innovation.

CONCLUSION

While each organization will have many variables both enabling and hindering success, the items listed here are common denominators we see time and time again, often needing an outside perspective to identify.  Back to the ugly couch analogy, it is often easy to walk into someone else’s house and immediately spot their ugly couch! 

Pay attention to those you have hired away from the competition in their early days and seek their opinions and input, as your organization’s old bad habits likely look ridiculous to them.  Of course, only do this with an intent to listen and learn – getting defensive or stubbornly explaining why things are the way they are will not only dead-end your learning, but will also abruptly stop any budding trust with your new hire. 

And of course, we are always more than happy to pop the hood and take a look at your organization just as we have been doing for the top banks, Fortune 100, healthcare, and many other organizations.  Put our experience to work for you!

Permalink

Effective Incident Communications

September 17, 2018  |  Posted By: Bill Armelin

military communications

Everything fails! This is a mantra that we are always espousing at AKF. At some point, these failures will manifest themselves as an outage. In a SaaS world, restoring service as quickly as possible is critical. It requires having the right people available and being able to communicate with them effectively. A lack of good communications can cause an incident to drag on.

For startups and smaller companies, communication during incidents is less of an issue. Systems tend to be smaller or monolithic. Teams supporting these systems also tend to be small. When something happens, everyone jumps on a call to figure out the problem. As companies grow, the number of people needed to resolve an incident grows. Coordinating communications among a large group of people becomes difficult. Adding to the chaos are executives joining the conference bridges demanding updates about service restoration.

In order to minimize the time to restore a system during an incident, companies need the right people on the call. For large, complex systems, identifying the right resources to solve a problem can be difficult. We recommend swarming an issue with everyone that could be needed to resolve an incident, and then releasing those that are no longer needed. But with such a large number of people, it can be difficult to coordinate communications, especially on a single conference call bridge.

Managing the communications of a large group of people working an incident is critical to minimizing the restoration time. We recommend a communication method that many of us at AKF learned in the military. It involves using multiple voice and chat channels to coordinate work and the flow of information. Before we get into the details of managing communications, we need to first look at the leadership required to effectively work the incident.

Technical Incident Manager and Incident Communications Manager

Managing a large incident is usually too much for a single individual. She cannot both coordinate the work to resolve the incident and report status to (and answer questions from) executives eager to know what is going on. We recommend that companies manage incidents with two people. The first is the individual responsible for directing all activities geared toward restoration of service. We call this person the Technical Incident Manager. This individual’s main job is to reduce the mean time to restoration. She needs overall architectural knowledge of the product and systems to direct the work. She is responsible for leading the call and for de-escalating (releasing people) once diagnosis determines who needs to be involved. She identifies and diagnoses the service issues and engages the appropriate subject matter experts to assist in restoration.

The second individual is the Incident Communications Manager. He is responsible for supporting the Technical Incident Manager by listening to the technical resolution chatter and summarizing it for a non-technical audience. His focus is on communications speed, quality, and accuracy. He is the primary communications channel for both internal and external messaging. He owns the incident communications process.

Incident Communications Process

This process involves using multiple communication channels to control information and work performed. The first channel established is the Control Channel. This is in the form of a conference bridge and a chat channel. The Technical Incident Manager controls both of these channels. The second channel created is the Status Channel. This also has a voice bridge and a chat channel. The Incident Communication Manager is responsible for managing this channel.

akf effective incident communications

The Control Channel is used for all communication related to the restoration of service. People only use the voice channel for immediate communication and to announce work that is occurring or address immediate questions that need to be answered. Detailed work conducted is placed in the chat channel. This reduces the chatter on the voice channel to command and control messages. It also serves as a record of actions taken that can be referenced in the post mortem/RCA process. If specific teams need to discuss the work they are performing, separate voice and chat breakout channels are created for them. They move off the main channel into their breakout channels to perform the work. The leader of these teams periodically communicates status back up to the control channel.

As the work is progressing, the Incident Communications Manager monitors the Control Channel to provide the basis for his messaging. He formulates updates that he delivers over the Status bridge and chat channel. He keeps executives and customers informed of progress and status, keeping the control channel free of requests for frequent updates and dedicated to restoring service.
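For teams building tooling around this process, the channel layout above can be modeled very simply. The sketch below captures the structure in plain Python data structures; the actual bridge and chat tooling (Slack, Teams, a dial-in number) is left abstract, and all names are illustrative.

```python
# A minimal model of the Control/Status/breakout channel layout described above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Channel:
    name: str
    voice_bridge: str      # conference bridge identifier
    chat_channel: str      # persistent chat room; doubles as the post-mortem record
    owner: str

@dataclass
class Incident:
    control: Channel       # owned by the Technical Incident Manager
    status: Channel        # owned by the Incident Communications Manager
    breakouts: List[Channel] = field(default_factory=list)

    def open_breakout(self, team: str) -> Channel:
        """Spin up a team-specific breakout; its lead reports back to the control channel."""
        ch = Channel(f"breakout-{team}", f"bridge-{team}", f"#inc-{team}", owner=team)
        self.breakouts.append(ch)
        return ch

incident = Incident(
    control=Channel("control", "bridge-1", "#inc-control", owner="Technical Incident Manager"),
    status=Channel("status", "bridge-2", "#inc-status", owner="Incident Communications Manager"),
)
incident.open_breakout("database")
```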

akf effective incident communications

This method of communications has worked well in the military for years and has been adopted by many large companies to manage their incident communications. While it is overkill for small companies, it becomes an effective process as companies grow and systems become more complex.

Let us help your organization with incident and crisis management process.

Permalink

Are you compromised?

September 14, 2018  |  Posted By: Larry Steinberg

It’s important to acknowledge that a core competency for hackers is hiding their tracks and remaining dormant for long periods of time after they’ve infiltrated an environment. They could also be utilizing exploits you have not protected against - so, given all of this, how do you know that you are not currently compromised by the bad guys? Hackers are skilled hidden operators with many ‘customers’ to prey on. They will focus on a customer or two at a time and then shut down activities to move on to another unsuspecting victim. It’s in their best interest to keep a low profile, and you might not know that they are operating (or waiting) in your environment with access to your key resources.

Most international hackers are well organized, well educated, and have development skills that most engineering managers would admire if not for the malevolent subject matter. Rarely are these hacks performed by bots; most are carried out by humans setting up a chain of software elements across unsuspecting entities to enable inbound and outbound access. 

What can you do? To start, don’t get complacent with your security. Even if you have never been compromised – or have been and eradicated what you found – you’ll never know for sure whether you are currently compromised. As a practice, it’s best to assume that you are, and to keep looking for evidence while identifying ways to keep attackers out. Hacking is dynamic and threats are constantly evolving.

There are standard frameworks for good security practice to follow, such as the NIST Cybersecurity Framework and the OWASP Top 10. Further, for your highest-value environments, here are some questions to consider: Would you know if these systems had configuration changes? Would you be aware of unexpected processes running? If the bad guys get in, the interesting information in your operating or IT environment is of no value to them unless they can get it back out – so where is your traffic going? Can you model expected outbound traffic and monitor it? The answer should be yes. Then you can look for abnormalities and even correlate this traffic with other activities in your environment.
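As a sketch of what modeling outbound traffic can look like, the following compares today's egress per destination against a rolling baseline and flags outliers and never-before-seen destinations. The data source (flow logs, proxy logs) and the three-sigma threshold are assumptions to tune for your environment.

```python
# Minimal sketch: flag abnormal outbound (egress) traffic against a per-destination baseline.
from statistics import mean, stdev

def flag_anomalies(baseline_mb, today_mb, sigma=3.0):
    """baseline_mb: dest -> list of daily egress MB; today_mb: dest -> today's egress MB."""
    alerts = []
    for dest, history in baseline_mb.items():
        mu = mean(history)
        sd = stdev(history) if len(history) > 1 else 0.0
        if today_mb.get(dest, 0.0) > mu + sigma * max(sd, 1.0):
            alerts.append((dest, today_mb[dest], mu))
    # Destinations never seen before deserve a look on their own
    alerts += [(d, v, 0.0) for d, v in today_mb.items() if d not in baseline_mb]
    return alerts

history = {"api.partner.com": [120, 130, 125, 118], "backup.internal": [900, 950, 910, 940]}
today = {"api.partner.com": 122, "backup.internal": 930, "203.0.113.7": 4100}  # unknown host
for dest, volume, usual in flag_anomalies(history, today):
    print(f"Investigate {dest}: {volume} MB out (usual ~{usual:.0f} MB)")
```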

Just as you and your business are constantly evolving to service your customers and to attract new ones, the bad guys are evolving their practices too. Some of their approaches are rudimentary because we allow them to be; when we buckle down, they have to get more innovative. Ensure that you are constantly identifying all of the entry points and closing them. Then remain vigilant to new approaches they might take. 

Don’t forget the most common attack vector - humans. Continue evolving your training and keep the awareness high within your staff - technical and non-technical alike.

Your default mental model should be that you don’t know what you don’t know. Utilize best practices for security and continue to evolve. Utilize external expertise or build internal expertise in the security space, and ensure that those skills stay dynamic and expanding. Utilize recurring testing practices to identify vulnerabilities in your environment and to prepare against emerging attack patterns. 

We commonly help organizations identify and prioritize security concerns through technical due diligence assessments. Contact us today.

Permalink

Scalability

September 10, 2018  |  Posted By: Robin McGlothin

The Scalability Cube – Your Guide to Evaluating Scalability



Perhaps the most common question we get at AKF Partners when performing technical due diligence on a company is, “Will this thing scale?” After all, investors want to see a return on their investment in a company, and a common way to achieve that is to grow the number of users on an application or platform. How do they ensure that the technology can support that growth? By evaluating scalability.

Let’s start by defining scalability from the technical perspective. The Wikipedia definition of “scalability” is the capability of a system, network, or process to handle a growing amount of work, or its potential to be enlarged to accommodate that growth. That definition is accurate when applied to common investment objectives.  The question is, what are the key attributes of software that allow it to scale, along with the anti-patterns that prevent scaling? Or, in other words, what do we look for at AKF Partners when determining scalability?

While an exhaustive list is beyond the scope of this blog post, we can use the Scale Cube and apply its analytical methodology to quickly determine where an application will experience issues. 

AKF Partners introduced the Scale Cube, a design model for building resilient, scalable application architectures using patterns and practices that apply broadly to any application.  It is a best-practices model that describes all of the scale dimensions from the book “The Art of Scalability” (AKF Partners – Abbott, Keeven & Fisher). 

The “Scale Cube” is composed of an X-Axis, Y-Axis, and Z-Axis:

1. Technical Architectural Layering (X-Axis) – No single points of failure.  Duplicate everything. 
2. Functional Decomposition Segmentation – Componentization to Modules & Microservices (Y-Axis).  Split Report, Message, Locate, Forms, and Calendar into fault-isolated swim lanes. 
3. Horizontal Data Partitioning – Shards (Z-Axis).  Beginning with pilot users, start by “podding” users for scalability and availability.

Figure 1: The AKF Scale Cube

The Scale Cube helps teams keep critical dimensions of system scale in mind when solutions are designed.  Scalability is all about the capability of a design to support ever growing client traffic without compromising performance. It is important to understand there are no “silver bullets” in designing scalable solutions.

An architecture is scalable if each layer in the multi-layered architecture is scalable. For example, a well-designed application should be able to scale seamlessly as demand increases and decreases and be resilient enough to withstand the loss of one or more computer resources.

Let’s start by looking at the typical monolithic application.  A large system that must be deployed holistically is difficult to scale. If your application was designed to be stateless, you can scale by adding more machines, virtual or physical. However, each new instance must run the entire application, typically on powerful machines that are not cost-effective to scale. Additionally, you carry the added risk of extensive regression testing because you cannot update small components on their own. Instead, we recommend a microservices-based architecture using containers (e.g. Docker) that allows for independent deployment of small pieces and the scaling of individual services instead of one big application.

Monolithic applications have other negative effects, such as development complexity. What is “development complexity”? As more developers are added to the team, be aware of the effects of Brooks’ Law, which states that adding more software developers to a late project makes the project even later. For example, one large solution loaded into the development environment slows a developer down, and it gets worse as more developers add components. This causes slower and slower load times on development machines, and developers stomping on each other’s changes (or creating complex merges) as they modify the same files. 

Another example of a development complexity issue is a large, outdated piece of the architecture or database where only one person is the expert. That person becomes a bottleneck for changes in that specific part of the system, and is also a SPOF (single point of failure) if they are the only resource who understands the monolithic beast.  The monolithic complexity and the rate of code change make it hard for any developer to know all the idiosyncrasies of the system, so more defects are introduced.  A decoupled system with small components helps prevent this problem.

When validating database design for appropriate scale, there are some key anti-patterns to check. For example:
• Do synchronous database accesses block other connections to the database when retrieving or writing data? This design can end up blocking queries and holding up the application.
• Are queries written efficiently? Large data footprints, with significant locking, can quickly slow database performance to a crawl.
• Is there a heavy report function in the application that relies on a single transactional database? Report generation can severely hamper the performance of critical user scenarios. Separating out read-only data from read-write data can positively improve scale (see the sketch after this list).
• Can the data be partitioned across different databases and/or database servers (sharding)? For example, customers in different geographies may be partitioned to servers closer to their locations. In turn, separating out the data allows for enhanced scale since requests can be split out.
• Is the right database technology being used for the problem? Storing BLOBs in a relational database has negative effects – instead, use the right technology for the job, such as a NoSQL document store. Forcing less structured data into a relational database can also lead to waste and performance issues, and here, a NoSQL solution may be more suitable.
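Here is a minimal sketch of the read/write split mentioned above: writes go to the primary and read-only queries are spread across replicas. The connection strings and the routing rule are illustrative assumptions; a real implementation must also account for replica lag.

```python
# Minimal sketch of read/write splitting to keep reporting reads off the transactional primary.
import random

WRITE_DB = "postgres://primary.internal/app"
READ_DBS = ["postgres://replica-1.internal/app", "postgres://replica-2.internal/app"]

def route(sql: str) -> str:
    """Send writes to the primary and spread read-only queries across replicas."""
    is_read = sql.lstrip().lower().startswith("select")
    return random.choice(READ_DBS) if is_read else WRITE_DB

print(route("SELECT * FROM orders WHERE region = 'eu'"))   # goes to a replica
print(route("UPDATE orders SET status = 'shipped'"))        # goes to the primary
```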

We also look for mixed presentation and business logic. A software anti-pattern that can be prevalent in legacy code is not separating out the UI code from the underlying logic. This practice makes it impossible to scale individual layers of the application and takes away the capability to easily do A/B testing to validate different UI changes. Layer separation allows putting just enough hardware against each layer for more minimal resource usage and overall cost efficiency. The separation of the business logic from SPROCs (stored procedures) also improves the maintainability and scalability of the system.

Another key area we dig into is stateful application servers. Designing an application that stores state on an individual server is problematic for scalability. For example, if some business logic runs on one server and stores user session information (or other data) in a cache on only that server, all of that user’s requests must go to the same server instead of any generic machine in the cluster. This prevents adding new machine instances that can field any request a load balancer passes their way. Caching is a great practice for performance, but it must not interfere with horizontal scale.
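A common remedy is to move session state into a shared store so any instance can serve any request. The sketch below assumes Redis via the 'redis' package; host names, TTLs, and the session shape are illustrative.

```python
# Minimal sketch of keeping application servers stateless: session data lives in a
# shared store so any instance behind the load balancer can serve any request.
import json
import redis

sessions = redis.Redis(host="session-cache.internal", port=6379)

def save_session(session_id: str, data: dict, ttl_seconds: int = 1800) -> None:
    sessions.setex(session_id, ttl_seconds, json.dumps(data))

def load_session(session_id: str) -> dict:
    raw = sessions.get(session_id)
    return json.loads(raw) if raw else {}

# Any app server can handle the next request for this user, not just the one
# that created the session.
save_session("abc123", {"user_id": 42, "cart": [101, 205]})
print(load_session("abc123"))
```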

Finally, long-running jobs and/or synchronous dependencies are key areas for scalability issues.  Actions on the system that trigger processing times of minutes or more can affect scalability (e.g. execution of a report that requires large amounts of data to generate). Continuing to add machines doesn’t help, as the system can never keep up in the presence of many such requests. Blocking operations exacerbate the problem. Look for solutions that queue up long-running requests, execute them in the background, send events when they are complete (asynchronous communication), and do not tie up key application and database servers. Communication with dependent systems for long-running requests using synchronous methods also affects performance, scale, and reliability. Common solutions for intersystem communication and asynchronous messaging include RabbitMQ and Kafka.
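As a sketch of the queue-and-work-asynchronously pattern, the following uses RabbitMQ via the 'pika' package (an assumption; Kafka or another broker follows the same idea). The web tier enqueues the request and returns immediately, while background workers generate the report and emit a completion event. Queue names, hosts, and the report logic are illustrative.

```python
# Minimal sketch of queueing a long-running request instead of blocking the web tier.
import json
import pika

def enqueue_report(report_params: dict) -> None:
    """Called by the web tier: accept the request, queue it, return immediately."""
    conn = pika.BlockingConnection(pika.ConnectionParameters("mq.internal"))
    channel = conn.channel()
    channel.queue_declare(queue="reports", durable=True)
    channel.basic_publish(exchange="", routing_key="reports",
                          body=json.dumps(report_params))
    conn.close()

def worker() -> None:
    """Runs on background workers: consumes jobs and notifies the user when finished."""
    conn = pika.BlockingConnection(pika.ConnectionParameters("mq.internal"))
    channel = conn.channel()
    channel.queue_declare(queue="reports", durable=True)

    def handle(ch, method, properties, body):
        params = json.loads(body)
        # ... generate the report here, then send a "done" event asynchronously ...
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="reports", on_message_callback=handle)
    channel.start_consuming()
```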

Again, the list above is not exhaustive, but it outlines some key areas that AKF Partners looks for when evaluating an architecture for scalability.  If you’re looking for a checklist to help you perform your own diligence, feel free to use ours.  If you’re wondering more about our diligence practice, you may be interested in our thoughts on best practices, or our beliefs around diligence and how to get it right.  We’ve performed technical diligence for seed rounds, A-series and beyond, carve-outs, strategic investments, and taking public companies private.  From $5 million invested to over $1 billion. No matter the size of the company or the size of the investment, we can help.
 

Permalink
