AKF Partners

Abbott, Keeven & Fisher – Partners in Technology

Growth Blog

Scalability and Technology Consulting Advice for SaaS and Technology Companies

Cloud Security Misconceptions

September 19, 2018  |  Posted By: Greg Fennewald

Cloud hosting is growing rapidly, with many companies leveraging the cloud to deliver all or a portion of their products and services.  This trend is unlikely to change any time soon as cloud hosting has commoditized digital infrastructure.

One of the concerns with cloud hosting we often hear from our clients is security – security of data stored in the cloud, access controls for the compute resources, and even physical access concerns.  While these concerns are valid to a certain extent, they are all rooted in misconceptions about what cloud hosting actually is.

Stripped of all marketing glitz, buzzword bingo points, and misconceptions, cloud hosting is a passel of servers, switches, and storage devices living in a large data center.  Who owns and maintains the hardware and facility is really the primary difference between cloud hosting and company owned data centers or traditional colocation services.

Let’s look at some of the common cloud security misconceptions:

Data Security and System Access - there is a fear that energy drink guzzling teenagers will steal your sensitive data if you store it in the cloud.  Your sensitive data is encrypted at rest, right?  If not, you’re right in thinking that cloud is not for you.  Neither is technology.  Polish up that resume.  Encrypting data at rest is an industry best practice that is rapidly becoming a base expectation, but it does not relieve you of the obligation to notify those potentially impacted by a breach.

The appropriate risk management approach is a set of policies and procedures controlling system access, and thus access to data.  In addition to your own policies, the major players in cloud hosting have proven policies and procedures that comply with multiple regulatory requirements and have been repeatedly audited.  They are most likely better at it than you.  The security certifications of major cloud hosting providers can be found here and here.  How does that compare to your program?  How much would it cost for your company to achieve and maintain the same level of certification?  Are your security requirements drastically different from those of other companies already using cloud hosting?  Chances are that the cloud provider capabilities combined with your own security program can meet your security needs.

Physical Security - concerns about physical security at cloud hosting locations are typically the result of a lack of topical knowledge.  Cloud data centers have fewer people entering them each day as compared to a traditional colocation data center, where customers bring in their own hardware and work on it inside the shared data center.  Cloud hosting customers do not have physical access to the cloud data centers.  Those entering a cloud data center on a daily basis are either provider employees or service partners - people who have undergone mature access control procedures.

Major cloud hosting providers operate dozens of data centers.  Physical security policies and safeguards have evolved over time and are thoroughly tested.  Just as with system access controls, cloud providers are most likely better at physical security than you.

Economies of Scale

A key reason behind cloud providers being good at logical access control, regulatory compliance, and physical security is the scale at which the major players operate.  They can afford the talent, technology, tools, and oversight. 

The economies of scale that enable cloud providers to deliver the capacity and service quality the market demands are at work in the security arena as well.  Combined with the broad regulatory compliance needs of their customers, these economies of scale enable cloud providers to be better than most across the board in security.

In Conclusion

Regardless of where the infrastructure is hosted, a sound security program should include practices such as:

  • Secure coding standards
  • Role based access control
  • Multi-factor authentication
  • Logged access to systems and data
  • Data encryption at rest
  • Data classification procedure
  • Network segmentation
  • Data egress monitoring
  • Security threat matrix
  • Incident response plan
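To make one of these practices concrete, here is a minimal sketch of role-based access control in Python.  The roles and permissions are invented for illustration – a real program would map them to your own policies:

```python
# Minimal role-based access control (RBAC) sketch.
# Roles map to sets of permissions; a check helper gates sensitive actions.
# All role and permission names here are illustrative.

ROLE_PERMISSIONS = {
    "engineer": {"read_logs", "deploy_staging"},
    "sre": {"read_logs", "deploy_staging", "deploy_prod"},
    "analyst": {"read_reports"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Return True if the given role grants the requested permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("engineer", "deploy_prod"))  # engineers cannot deploy to prod
print(is_allowed("sre", "deploy_prod"))       # SREs can
```

Unknown roles simply get an empty permission set, so access defaults to denied – a fail-safe default being the point of the exercise.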

Combined with the security capabilities of cloud providers, a sound security program should enable nearly any company to make use of cloud hosting in a manner that benefits the business.

Interested in cloud options, but unsure how to proceed?  AKF Partners has helped many clients with cloud strategy and SaaS transition.  More about our services can be found here.



The Domino or Multiplicative Effect of Failure

September 18, 2018  |  Posted By: Pete Ferguson

akf scale cube avoiding domino failure
As part of our Technical Due Diligence and Architectural reviews, we always want to see a company’s system architecture, understand their process, and review their org chart.  Without ever stepping foot at a client we can begin to see the forensic evidence of potential problems.

Like that ugly couch you bought in college and still have in your front room, inefficiencies in architecture, process, and organization are often nostalgic holdovers that have long outlived their purpose.  While you have grown used to the ugly couch, outsiders immediately recognize it for the eyesore it is – and customers often feel the inefficiencies through slow page loads and shopping cart issues.  “That’s how it has always been” is never a good motto when designing systems, processes, and organizations for flexibility, availability, and scalability.

It is always interesting to hear companies talk with the pride of a parent about their unruly kid when they use phrases like “our architecture/organization is very complex” or “our systems/organization has a lot of interdependent components” – as if either of these things were something special or desirable!

All systems fail.  Complex systems fail miserably and, like dominoes, take down neighboring systems as well, resulting in latency, downtime, and/or outright failure.

ARCHITECTURE & SOFTWARE

Some common observations in hardware/software we repeatedly see:

Encrypt Everything

Problem: Overloaded F5s or similar devices are trying to encrypt all data because Personally Identifiable Information (PII) is stored in plain text – usually the result of a long-forgotten business decision and an auditor who once said “encrypt everything” to protect it.  Because no one person is responsible for a 30,000-foot view of the architecture, each team happily works in its silo and the decision to encrypt is held up like a trophy.  Meanwhile the F5 runs hot, causes latency, and has become a bottleneck (resulting in costly requests for more F5s) doing something it has no business doing in the first place.

Solution: Segregate all PII, tokenize it, and encrypt only the data that needs to be encrypted, speeding up throughput and better isolating and protecting PII.
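A tokenization approach along these lines can be sketched in a few lines of Python.  The in-memory vault is purely illustrative – a production system would use a hardened, encrypted token service:

```python
import secrets

# PII tokenization sketch: swap sensitive values for opaque tokens and
# keep the real values in a segregated (and, in practice, encrypted)
# vault. This dict stands in for that vault purely for illustration.

_vault = {}

def tokenize(pii_value: str) -> str:
    """Replace a sensitive value with a random, meaningless token."""
    token = secrets.token_urlsafe(16)
    _vault[token] = pii_value
    return token

def detokenize(token: str) -> str:
    """Look up the original value; only the vault service can do this."""
    return _vault[token]

token = tokenize("123-45-6789")   # e.g. a social security number
assert token != "123-45-6789"     # the token itself carries no PII
```

Downstream systems handle only tokens, so they no longer need blanket encryption – only the vault does.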

AKF Scale Cube - Segregate Sensitive Data

Integration (or Rather Lack Thereof) Of Mergers & Acquisitions

Problem: A recent (and often not so recent) flurry of acquisitions is resulting in cross data center calls in and out of firewalls.  Purchased companies are still in their own data center or public cloud, and the entire workflow of a customer request crisscrosses the country multiple times.  This not only causes latency; if one thing goes wrong (remember, everything fails…), timeouts result in customer frustration and lost transactions.

Solution: Integrate services within one isolated stack or swim lane – either hosted or public cloud – to avoid cross data center calls.  Replicate services so that each datacenter or cloud instance has everything it needs.

Monolithic Databases

Problem: As the company grew and gained more market share, the search for bigger and better has resulted in a monolithic database that is slow, requires specialized hardware, specialized support, ongoing expensive software licenses, and maintenance fees.  As a result, during peak times the database slows everyone and everything down.  The temptation is to buy bigger and better hardware and pay higher monthly fees for more bandwidth.

Solution: Break down databases by customer, region, or other Z-Axis splits on the AKF Scale Cube.  This has multiple wins – you can use commodity servers instead of large complex file storage, failure of one database will not affect the others, you can place customer data closest to the customer by region, and adding additional servers does not require a long lead time or a request for substantial capital expenditure.
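A Z-Axis split of this kind can be sketched as deterministic shard routing.  The shard names and hashing scheme below are illustrative, not a prescription:

```python
import hashlib

# Z-Axis sketch: route each customer to one of several smaller databases
# instead of a single monolith. Shard names are invented for illustration.

SHARDS = ["db-us-east", "db-us-west", "db-eu"]

def shard_for(customer_id: str) -> str:
    """Deterministically map a customer to a shard via a stable hash."""
    digest = hashlib.sha256(customer_id.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# The same customer always lands on the same shard:
assert shard_for("cust-42") == shard_for("cust-42")
print(shard_for("cust-42"))
```

Note that simple hash-mod routing forces data movement when shards are added; routing by region or customer range (as suggested above), or consistent hashing, limits that cost.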

AKF Scale Cube - Swim Lanes


PROCESSES & ORGANIZATION

What sets AKF apart is that we don’t just look at systems – we always want to understand the people and organization supporting the system architecture as well, and here there are additional multiplicative effects of failure.  We have considerable expertise working for and with Fortune 100 companies, startups, and agencies across many different competencies.  The common mistakes we see on the organization side of the equation:

Lack of Cross Functional Teams

Problem: Agile Scrum teams do not have all the resources needed within the team to be self-sufficient and autonomous.  As a result, teams are waiting on other internal resources for approvals or answers to questions in order to complete a Sprint – or keep these items on the backlog because effort estimation is too high.  This results in increased time to market, losing what could have been a competitive advantage, and lower revenue.

Solution: Create cross-functional teams so that each Sprint can be completed with necessary access to security, architecture, QA, and other resources.  This doesn’t mean each team needs a dedicated resource from each discipline – one resource can support multiple teams.  Coverage can be greatly augmented by creating guilds in which the subject matter expert (SME) “deputizes” multiple people on what is required to meet policy and published standards, along with a dedicated channel of communication back to the SME, greatly simplifying and speeding up the approval process.

Lack of Automation

Problem: It isn’t done enough!  As a result, people are waiting on other people for needed approvals.  Often the excuse is that there isn’t enough time or resources.  In most cases when we do the math, the cost of not automating far outweighs the short-term investment automation requires – an investment with a continuous long-term payout.  We often see that the individual with the deployment knowledge is insecure and doesn’t want automation, feeling their job is threatened.  This is a short-sighted approach that requires coaching to help them see how much more valuable they can be to the organization by enabling progress rather than stifling it!

Solution: Automate everything possible from testing, quality assurance, security compliance, code compliance (which means you need a good architectural review board and standards), etc! Automation is the gift that keeps on giving and is part of the “secret sauce” of top companies who are our clients.

Not Empowering Teams to Get Stuff Done!

Problem: Often teams work in a silo, only focused on their own tasks and are quick to blame others for their lack of success.  They have been delegated tasks, but do not have the ability to get stuff done.

Solution: Similar to cross functional teams, each team must also be given the authority to make decisions (hence why you want the right people from a variety of dependencies on the team) and get stuff done.  An empowered team will iterate much faster and likely with a lot more innovation.

CONCLUSION

While each organization will have many variables both enabling and hindering success, the items listed here are common denominators we see time and time again often needing an outside perspective to identify.  Back to the ugly couch analogy, it is often easy to walk into someone else’s house and immediately spot their ugly couch! 

Pay attention to those you have hired away from the competition in their early days and seek their opinions and input, as your organization’s old bad habits likely look ridiculous to them.  Of course, only do this with an intent to listen and to learn – getting defensive or stubbornly trying to explain why things are the way they are will not only dead-end your learning, but will also abruptly stop any budding trust with your new hire.

And of course, we are always more than happy to pop the hood and take a look at your organization just as we have been doing for the top banks, Fortune 100, healthcare, and many other organizations.  Put our experience to work for you!



Effective Incident Communications

September 17, 2018  |  Posted By: Bill Armelin

military communications

Everything fails! This is a mantra that we are always espousing at AKF. At some point, these failures will manifest themselves as an outage. In a SaaS world, restoring service as quickly as possible is critical. It requires having the right people available and being able to communicate with them effectively. A lack of good communications can cause an incident to drag on.

For startups and smaller companies, problems with communications during incidents are less of an issue. Systems tend to be smaller or monolithic. Teams supporting these systems also tend to be small. When something happens, everyone jumps on a call to figure out the problem. As companies grow, the number of people needed to resolve an incident grows. Coordinating communications between a large group of people becomes difficult. Adding to the chaos are executives joining the conference bridges demanding updates about service restoration.

In order to minimize the time to restore a system during an incident, companies need the right people on the call. For large, complex systems, identifying the right resources to solve a problem can be difficult. We recommend swarming an issue with everyone who could be needed to resolve it, then releasing those who are no longer needed. But with such a large number of people, it can be difficult to coordinate communications, especially on a single conference call bridge.

Managing the communications of a large group of people working an incident is critical to minimizing the restoration time. We recommend a communication method that many of us at AKF learned in the military. It involves using multiple voice and chat channels to coordinate work and the flow of information. Before we get into the details of managing communications, we need to first look at the leadership required to effectively work the incident.

Technical Incident Manager and Incident Communications Manager

Managing a large incident is usually too much for a single individual. She cannot manage coordinating the work occurring to resolve the incident, as well as reporting status to and answering questions from executives eager to know what is going on. We recommend that companies manage incidents with two people. The first person is the individual responsible for directing all activities geared towards restoration of service. We call this person the Technical Incident Manager. This individual’s main job is to reduce the mean time to restoration. She needs an overall architectural knowledge of the product and systems to direct the work. She is responsible for leading the call and de-escalating once diagnosis determines who needs to be involved. She identifies and diagnoses the service issues and engages the appropriate subject matter experts to assist in restoration.

The second individual is the Incident Communications Manager. He is responsible for supporting the Technical Incident Manager by listening to the technical resolution chatter and summarizing it for a non-technical audience. His focus is on communications speed, quality, and accuracy. He is the primary communications channel for both internal and external messaging. He owns the incident communications process.

Incident Communications Process

This process involves using multiple communication channels to control information and work performed. The first channel established is the Control Channel. This is in the form of a conference bridge and a chat channel. The Technical Incident Manager controls both of these channels. The second channel created is the Status Channel. This also has a voice bridge and a chat channel. The Incident Communication Manager is responsible for managing this channel.

akf effective incident communications

The Control Channel is used for all communication related to the restoration of service. People only use the voice channel for immediate communication and to announce work that is occurring or address immediate questions that need to be answered. Detailed work conducted is placed in the chat channel. This reduces the chatter on the voice channel to command and control messages. It also serves as a record of actions taken that can be referenced in the post mortem/RCA process. If specific teams need to discuss the work they are performing, separate voice and chat breakout channels are created for them. They move off the main channel into their breakout channels to perform the work. The leader of these teams periodically communicates status back up to the control channel.

As the work is progressing, the Incident Communications Manager monitors the Control Channel to provide the basis for his messaging. He formulates updates that he delivers over the Status bridge and chat channel. He keeps executives and customers informed of progress and status, keeping the control channel free of requests for frequent updates and dedicated to restoring service.

akf effective incident communications

This method of communications has worked well in the military for years and has been adopted by many large companies to manage their incident communications. While it is overkill for small companies, it becomes an effective process as companies grow and systems become more complex.


Are you compromised?

September 14, 2018  |  Posted By: Larry Steinberg

It’s important to acknowledge that a core competency for hackers is hiding their tracks and maintaining dormancy for long periods after they’ve infiltrated an environment. They may also be utilizing exploits you have not protected against – so given all of this, how do you know that you are not currently compromised by the bad guys? Hackers are great hidden operators and have many ‘customers’ to prey on. They will focus on a customer or two at a time and then shut down activities to move on to another unsuspecting victim. It’s in their best interest to keep a low profile, and you might not know that they are operating (or waiting) in your environment with access to your key resources.

Most international hackers are well organized, well educated, and have development skills that most engineering managers would admire if not for the malevolent subject matter. Rarely are these hacks performed by bots; most are carried out by humans setting up a chain of software elements across unsuspecting entities, enabling inbound and outbound access.

What can you do? To start, don’t get complacent with your security. Even if you have never been compromised – or have been and eradicated what you found – you’ll never know for sure whether you are currently compromised. As a practice, it’s best to always assume that you are, and to look for this evidence while also identifying ways to keep them out. Hacking is dynamic and threats are constantly evolving.

There are standard practices of good security habits to follow – the NIST Cybersecurity Framework and the OWASP Top 10. Further, for your highest-value environments, here are some questions to consider: Would you know if these systems had configuration changes? Would you be aware of unexpected processes running? If you have interesting information in your operating or IT environment and the bad guys get in, it’s of no value to them unless they get that information back out of the environment – so where is your traffic going? Can you model expected outbound traffic and monitor it? The answer should be yes. Then you can look for abnormalities and even correlate this traffic with other activities in your environment.
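Modeling expected outbound traffic and flagging abnormalities can be sketched as a simple statistical baseline.  The threshold and sample values below are invented; a real deployment would feed this from flow logs and use a more robust model:

```python
import statistics

# Sketch: flag outbound-traffic samples that deviate sharply from a
# historical baseline. In practice the history would come from flow logs.

def is_anomalous(history_mb, sample_mb, z_threshold=3.0):
    """Return True if sample_mb sits more than z_threshold standard
    deviations above the mean of the historical samples."""
    mean = statistics.mean(history_mb)
    stdev = statistics.stdev(history_mb)
    if stdev == 0:
        return sample_mb != mean
    return (sample_mb - mean) / stdev > z_threshold

baseline = [100, 110, 95, 105, 102, 98, 107]   # typical egress, MB/hour
print(is_anomalous(baseline, 104))   # within the normal range
print(is_anomalous(baseline, 900))   # an exfiltration-sized spike
```

The point is less the statistics than the habit: once outbound traffic has a model, correlation with other environment activity becomes possible.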

Just as you and your business are constantly evolving to service your customers and to attract new ones, the bad guys are evolving their practices too. Some of their approaches are rudimentary because we allow it but when we buckle down they have to get more innovative. Ensure that you are constantly identifying all the entry points and close them. Then remain diligent to new approaches they might take. 

Don’t forget the most common attack vector - humans. Continue evolving your training and keep the awareness high within your staff - technical and non-technical alike.

Your default mental model should be that you don’t know what you don’t know. Utilize best practices for security and continue to evolve. Utilize external or build internal expertise in the security space and ensure that those skills are dynamic and expanding. Utilize recurring testing practices to identify vulnerabilities in your environment and to prepare against emerging attack patterns. 

RELATED CONTENT

Open Source Software as a malware on ramp

5 Focuses for a Better Security Culture

3 Practices Your Security Program Needs

Security Considerations for Technical Due Diligence


The Importance of a Post Mortem

September 6, 2018  |  Posted By: James Fritz

“An incident is a terrible thing to waste” is a mantra that AKF repeats during its engagements – and rightfully so, as many companies have an incident response plan in place but stop there.  Why are incidents so important?  What is the true value in doing a proper Post Mortem and actually learning from an incident?

Incidents identify issues in your product.  But if that is all you take out of an incident, you are missing out on much more information that an incident can provide.  An incident is the first step to identifying a problem that exists in your product, infrastructure, processes, and perhaps people.  “But aren’t incidents and problems the same thing?”  Not necessarily.  An incident is a one-time event.  It can occur repeatedly if you never address the underlying problem, but each occurrence is still just an event, not the cause.

Conducting a Post Mortem

Gather as many data points as possible shortly after an incident concludes and schedule a Post Mortem review meeting.

Start with the incident timeline.  Sufficiently logging events over time provides ready access to the needed data for forensic analysis.  From this information you can then start to identify what went wrong, when it went wrong and how quickly you were able to respond to it.  The below definitions are all factors that need to be identified:

     
  • Time To Detect: How quickly did you identify that an incident had occurred?
  • Time To Escalate: How quickly did you get everyone necessary to fix the incident involved?
  • Time To Isolate: How quickly did you stop the incident from affecting other portions of the system?
  • Time To Restore: How quickly did you bring the system back up?
  • Time To Repair: How quickly did you fix the underlying fault?
This all leads to the Incident Timeline Analysis. 

AKF Incident Timeline Analysis
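These interval metrics fall directly out of a well-logged event timeline.  A minimal sketch in Python – the event names and timestamps are invented for illustration:

```python
from datetime import datetime

# Sketch: derive the incident metrics above from a logged event timeline.
# Event names and times are illustrative, not a real incident record.

timeline = {
    "occurred":  datetime(2018, 9, 6, 10, 0),
    "detected":  datetime(2018, 9, 6, 10, 25),
    "escalated": datetime(2018, 9, 6, 10, 40),
    "isolated":  datetime(2018, 9, 6, 10, 55),
    "restored":  datetime(2018, 9, 6, 11, 30),
}

def minutes_between(start: str, end: str) -> float:
    """Elapsed minutes between two logged events."""
    return (timeline[end] - timeline[start]).total_seconds() / 60

time_to_detect  = minutes_between("occurred", "detected")
time_to_restore = minutes_between("occurred", "restored")
print(f"Time to detect:  {time_to_detect:.0f} min")
print(f"Time to restore: {time_to_restore:.0f} min")
```

With timelines from several incidents in hand, aggregating these intervals shows exactly where the response process loses the most time.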

If you can gather information from several incidents and look at them in your Post Mortem review, then you can figure out where your biggest issues are when it comes to incidents and getting the system back up and running.  It is not uncommon for us to see that it often takes longer to detect an incident than to restore from it.  This could be mitigated with more monitoring at more appropriate positions than you currently have.

Or maybe the time to escalate is an issue.  Why does it take so long to get the proper engineers involved?  Maybe a real-time alert system or a phone tree is required.  And it is important to measure the total time of an incident as beginning when it occurred (not when it was reported) all the way through to when customers were back up at 100% (not just when your systems were restored).

Problem vs. Incident

How do you know if your incident is also a problem?  It’s actually fairly easy to determine: if you have an incident, you have a problem.  The scale of the problem may vary by incident, but every incident is caused by something larger than itself.

During our Technical Due Diligences we always want to know how companies categorize incidents vs. problems.  If the company properly categorizes problems related to incidents, they will be able to answer “Can you rank your problems to show which cause the most customer impact?”  Many times, they can’t - but that ranking is critical to show which problems to attack first.

An incident, at its core, is caused by a problem.  If your product crashes anytime someone attempts to access it via an unapproved protocol, the incident is the attempted access.  The problem may be an improper review of your architecture.  Or it may be lack of QA.  Identifying the problem is much more difficult than identifying the incident.  Imagine you find a termite on your deck.  This small pest could be considered an incident.  If you deal with the incident and get rid of the termite everything is good, right?  If you don’t look any further than the incident you can’t identify the problem.  And in this case the problem could be exposed, untreated wood allowing termites to slowly eat away at the inside of your house.

If you keep proper documentation each time you conduct a Post Mortem review, you will build a history that starts to paint a picture of the ongoing and recurring problems that exist.  Remedying a symptom only stops the incident from occurring in the same exact way in the future; small variations of the incident can still occur.  Fixing the underlying problem is what stops future iterations of that incident from happening again.



Expanding Agile Throughout

September 6, 2018  |  Posted By: James Fritz

In our experience, Agile practices provide organizations within successful companies many benefits, which is leading more and more companies to adopt Agile frameworks outside of software development.  Whether they are looking for reduced risk, higher product quality, or even the capability to “fail fast” and rectify mistakes, Agile provides many benefits, particularly in management.

While effort has been expended to identify how to create Agile product delivery teams (Organizing Product Teams for Innovation) and conversely why they fail (The Top Five Most Common Agile PDLC Failures) – a lot of the focus is on the successes and failures of the delivery teams themselves.  But the delivery is only as good as the group that surrounds that team. 

So how does Agile work beyond your delivery teams?  An essay published in 1970 by Robert K. Greenleaf, The Servant as Leader, is credited with introducing the idea of a Servant-Leader, someone who puts their employees’ needs ahead of their own.  This is counter-intuitive to a normal management style where management has a list of needs that require completion. 

Looking at an Agile team, the concept of waiting for management to drive needs is not conducive to meeting the requirements of the market.  A highly competent Agile team has all the necessary tools and authority to get the job done that is required of them.  If normal management tactics sit over an Agile team, failure is going to occur.

This is where the philosophy of Servant-Leadership comes into play.  If managers, all the way to the C-Suite, understand that they work for their employees, while their employees are accountable to them, then everyone is working toward one goal: the needs of the market.  Management needs to be focused on securing the resources product delivery teams need to meet market demands – whether at the high level of the CEO and CFO securing additional funding, or further down, ensuring that technical debt and other tasks are assigned appropriately to meet delivery goals.  This empowerment of teams may seem risky, but the morale improvement and greater innovation that can be achieved far exceed the risk involved.

Embracing Agile throughout a company is key to the company surviving beyond the first couple of sprints.  Small changes in management can play a huge role in that.  Asking simple questions like “What do you need to meet your goals?” or “What factors stand in your way?” helps to enable employees instead of limiting them.  Asking yourself why you are successful as a company also helps to identify which segment is responsible for your success.

If the delivery of your services is what customers buy, then identifying ways to enable employees who create those services is vital.  This isn’t to say that other roles in the company aren’t important.  Without support from the entire company, no one particular segment can succeed.  This is why it is so vital for Agile to permeate throughout your entire organization.  If you need assistance in identifying gaps in Agile and figuring out how to employ it, feel free to reach out to AKF.



Scaling Your Systems in the Cloud - AKF Scale Cube Explained

September 5, 2018  |  Posted By: Pete Ferguson

AKF Scale Cube Diagram


Scalability doesn’t somehow magically appear when you trust a cloud provider to host your systems.  While Amazon, Google, Microsoft, and others likely will be able to provide a lot more redundancy in power, network, cooling, and expertise in infrastructure than hosting yourself – how you are set up using their tools is still very much up to your budget and which tools you choose to utilize.  Additionally, how well your code is written to take advantage of additional resources will affect scalability and availability.

We see more and more new startups in AWS or Azure – in addition to assisting well-established companies make the transition to the cloud.  Regardless of the hosting platform, in our technical due diligence reviews we often see the same scalability gaps common to hosted solutions written about in our first edition of “Scalability Rules.” (Abbott, Martin L. Scalability Rules: Principles for Scaling Web Sites. Pearson Education.)

This blog is a summary recap of the AKF Scale Cube (much of the content contains direct quotes from the original text), an explanation of each axis, and how you can be better prepared to scale within the cloud.


Scalability Rules – Chapter 2: Distribute Your Work

Using ServiceNow as an early example of designing, implementing, and deploying for scale early in its life, we outlined how building in fault tolerance helped it scale through early development – and a decade-plus later, the once little-known company has kept up with rapid growth, with over $2B in revenue and some forecasts expecting that number to climb to $15B in the coming years.

So how did they do it?  ServiceNow contracted with AKF Partners over a number of engagements to help them think through their future architectural needs and ultimately hired one of the founding partners to augment their already-talented engineering staff.

“The AKF Scale Cube was helpful in offsetting both the increasing size of our customers and the increased demands of rapid functionality extensions and value creation.”
~ Tom Keevan (Founding Partner, AKF Partners)

AKF Scale Cube

The original scale cube has stood the test of time and we have used the same three-dimensional model with security, people development, and many other crucial organizational areas needing to rapidly expand with high availability.

At the heart of the AKF Scale Cube are three simple axes, each with an associated rule for scalability.  The cube is a great way to represent the path from minimal scale (lower left front of the cube) to near-infinite scalability (upper right back corner of the cube).  Sometimes, it’s easier to see these three axes without the confined space of the cube.

AKF Scale Cube Simplified


X Axis – Horizontal Duplication

[Infographic: AKF Scale Cube – X Axis (Scalability Rules, Rule 7)]

The X Axis allows transaction volumes to increase easily and quickly.  If data is starting to become unwieldy on databases, a distributed architecture allows you to reduce the degree of multi-tenancy (Z Axis) or split discrete services off (Y Axis) onto similarly sized hardware.

A simple example of X Axis splits is cloning web servers and application servers and placing them behind a load balancer.  This cloning distributes transactions evenly across systems for horizontal scale.  Cloning of application or web services tends to be relatively easy to perform and allows us to scale the number of transactions processed.  Unfortunately, it doesn’t really help us when trying to scale the data we must manipulate to perform these transactions: memory caching of data unique to several customers or unique to disparate functions can create a bottleneck that keeps us from scaling these services without significant impact on customer response time.  To solve these memory constraints we’ll look to the Y and Z Axes of our scale cube.
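A minimal sketch of the cloning approach described above, assuming a pool of identical, stateless application servers behind a round-robin balancer (the hostnames are hypothetical):

```python
from itertools import cycle

# Hypothetical pool of cloned application servers behind a load balancer.
# Because the clones are identical and stateless, capacity scales by
# simply adding more clones to the pool (an X Axis split).
CLONES = ["app-01.internal", "app-02.internal", "app-03.internal"]

class RoundRobinBalancer:
    """Distributes requests evenly across identical clones."""
    def __init__(self, hosts):
        self._pool = cycle(hosts)

    def route(self, request):
        # Every clone can serve any request, so routing needs no
        # knowledge of the request contents -- just pick the next host.
        return next(self._pool)

balancer = RoundRobinBalancer(CLONES)
hosts = [balancer.route({"path": "/checkout"}) for _ in range(6)]
# Six requests land evenly: two on each of the three clones.
```

Real deployments would use a hardware or cloud load balancer rather than application code, but the even distribution across interchangeable clones is the essence of the X Axis.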


Y Axis – Split by Function, Service, or Resource

[Infographic: AKF Scale Cube – Y Axis (Scalability Rules, Rule 8)]

Looking at a relatively simple e-commerce site, Y Axis splits resources by the verbs of signup, login, search, browse, view, add to cart, and purchase/buy.  The data necessary to perform any one of these transactions can vary significantly from the data necessary for the other transactions.

In terms of security, using the Y Axis to segregate and encrypt Personally Identifiable Information (PII) to a separate database provides the required security without requiring all other services to go through a firewall and encryption.  This decreases cost, puts less load on your firewall, and ensures greater availability and uptime.
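As a rough illustration of this kind of PII segregation, each record can be split so that sensitive fields are routed to a dedicated, encrypted-at-rest store while everything else stays in the main store; the field names below are hypothetical:

```python
# Hypothetical set of fields classified as PII; in practice this would
# come from a data-classification policy, not a hard-coded set.
PII_FIELDS = {"name", "email", "ssn"}

def segregate(record: dict):
    """Split a record into (pii, non_pii) destined for separate stores.

    The PII portion would be written to the segregated, encrypted
    database behind its own access controls; the rest goes to the
    main store without the firewall/encryption overhead.
    """
    pii = {k: v for k, v in record.items() if k in PII_FIELDS}
    rest = {k: v for k, v in record.items() if k not in PII_FIELDS}
    return pii, rest
```

Encryption itself is omitted here; the point is that only the segregated store needs to pay that cost.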

Y Axis splits also apply to a noun approach.  Within a simple e-commerce site data can be split by product catalog, product inventory, user account information, marketing information, and so on.

While Y axis splits are most useful in scaling data sets, they are also useful in scaling code bases.  Because services or resources are now split, the actions performed and the code necessary to perform them are split up as well.  This works very well for small Agile development teams as each team can become experts in subsets of larger systems and don’t need to worry about or become experts on every other part of the system.
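The verb- and noun-based splits above can be sketched as a simple routing table in which each function owns its own fleet and data store; the service, host, and database names here are all hypothetical:

```python
# Hypothetical Y Axis routing table for a simple e-commerce site: each
# verb (service) gets its own fleet and its own data store, so both the
# code base and the data set are split by function.
SERVICES = {
    "login":  {"hosts": ["login-01"],  "db": "auth_db"},
    "search": {"hosts": ["search-01"], "db": "catalog_db"},
    "buy":    {"hosts": ["buy-01"],    "db": "orders_db"},
}

def route_by_function(verb: str) -> dict:
    """Return the dedicated service fleet for a given transaction verb."""
    try:
        return SERVICES[verb]
    except KeyError:
        raise ValueError(f"No Y Axis split defined for verb: {verb}")
```

Each team can own one entry in this table end to end, which is what lets small Agile teams become experts in their subset of the system.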


Z Axis – Separate Similar Things

[Infographic: AKF Scale Cube – Z Axis (Scalability Rules, Rule 9)]

Z Axis splits are effective at helping you to scale customer bases but can also be applied to other very large data sets that can’t be pulled apart using the Y Axis methodology.  Z Axis separation is useful for containerizing customers or geographically replicating data.  If Y Axis splits are the layers in a cake, with each verb or noun having its own separate layer, a Z Axis split is having a separate cake (shard) for each customer, geography, or other subset of data.

This means that each larger customer or geography could have its own dedicated Web, application, and database servers.  Given that we also want to leverage the cost efficiencies enabled by multitenancy, we also want to have multiple small customers exist within a single shard which can later be isolated when one of the customers grows to a predetermined size that makes financial or contractual sense.
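One common way to implement this separation is a shard lookup that pins large customers to dedicated shards while hashing smaller customers across a shared pool; the shard and customer names in this sketch are hypothetical:

```python
import hashlib

# Hypothetical Z Axis sharding: large customers get dedicated shards,
# small customers are hashed across a shared pool for cost efficiency.
DEDICATED = {"megacorp": "shard-megacorp"}  # isolated once they outgrow a pod
SHARDS = ["shard-0", "shard-1", "shard-2", "shard-3"]

def shard_for(customer_id: str) -> str:
    """Return the shard (web/app/db pod) serving this customer."""
    if customer_id in DEDICATED:
        return DEDICATED[customer_id]
    # A stable hash keeps a customer on the same shard across requests,
    # which also keeps their data warm in that shard's caches.
    digest = hashlib.sha256(customer_id.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]
```

Promoting a growing customer is then a data migration plus one new entry in the dedicated map, without touching the other shards.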

For hyper-growth companies, the speed with which any request can be answered is at least partially determined by the cache hit ratio of near and distant caches.  That speed in turn determines how many transactions any given system can process, which dictates how many systems are needed to handle a given volume of requests.

Splitting up data by geography or customer gives each segment higher availability, scalability, and reliability, as problems within one subset will not affect other subsets.  In continuous deployment environments, it also allows fragmented code rollouts and testing of new features a little at a time instead of an all-or-nothing approach.


Conclusions

This is a quick and dirty breakdown of Scalability Rules that have been applied at thousands of successful companies and have provided near-infinite scalability when properly implemented.  We love helping companies of all shapes and sizes (we have experience with development teams ranging from 2-3 engineers to thousands).  Contact us to explore how we can help guide your company to scale your organization, processes, and technology for hyper growth!



Permalink

Migrating from a legacy product to a SaaS service? Don't make these mistakes!!

September 4, 2018  |  Posted By: Dave Swenson

AKF has been kept quite busy over the last decade helping companies move from an on-premise product to a SaaS service - often one of the most difficult transitions a company can face. We have found that the following are the top mistakes made during that migration.

With apologies to David Letterman…


The Top 5 SaaS Migration Mistakes

5. Treat Your SaaS Migration Only as a Marketing Exercise

Wall Street values SaaS companies significantly higher than traditional software companies, typically double the revenue multiples. A key reason for this is the improved margins the economies of scale true SaaS companies gain. If you are primarily addressing your customer’s desires to move their IT infrastructure out of their shop, an ASP model hosted in the cloud is fine. However, if your investors or Wall Street are viewing you as a SaaS company, ASP gross margins will not be accepted. A SaaS company is expected to produce in excess of 80% gross margin, whereas an ASP model typically caps out at around 60 or 70%.

How to Avoid?

Make a decision up front on what you want your gross and operating margins to be. This decision will guide how you sell your product (e.g.: highest margins require no code-level customizations), how you architect your systems (multi-tenancy provides greater economies of scale), and even how you release (you, not your customers, control release timing and frequencies). A note of warning: without SaaS margins, you will likely face pricing pressure from an existing or entrant competitor who has achieved SaaS margins.

4. Tack the word ‘cloud’ on to your existing on-prem product and host it

Often a direct result of the above mistake, this is an ASP (Application Service Provider) model, not a SaaS one. While this exercise might be useful in exploring some hosting aspects, it won’t truly inform you about what your product and organization needs to become in order to successfully migrate to SaaS. It will result in nowhere near the gross and operating margins true SaaS provides, and your Board and Wall Street expect to see. As discussed in The Many Meanings of Cloud, the danger of tacking the word “Cloud” onto your product offering is that your company will start believing you are a “Contender”, and will stop pushing for true SaaS.

How to Avoid?

Again, if an ASP model is ‘good enough’, fine - just don’t label or market yourselves as SaaS. If you start the SaaS journey with an ASP model, make sure all within your company recognize your ASP implementation is a dead-end, a short-term solution, and that the real endpoint is a true SaaS offering.

3. Target Your Existing Customer Base

Many companies are so focused upon their existing customer base that they forget about entire markets that might be better suited for SaaS. The mistaken perception is

“We need to take customers along with us”,

when in fact, the SaaS reality is

“We need to use the Technology Adoption Lifecycle, compete with ourselves and address a different or smaller customer base first”.

How to Avoid?

Ignore your current customers and find early adopters to target, even if your top customers are pushing you to move to SaaS. A move to true SaaS from your on-premise product almost always requires significant architectural changes. Putting your entire product and code base through these changes will take time.

Instead, apply the Crossing the Chasm Bowling Alley strategy to grow your SaaS offering into your current customer base rather than fork-lifting your current solution in an ASP fashion into the cloud…

Carve out key slices of functionality from your product that can provide value in a standalone fashion, and find early adopters to help shake out your new offering. Even if you are in a ‘laggard’ industry (banking, healthcare, education, insurance), you will find early adopters within your targeted customer base. Seek them out; they will likely be far better partners in your SaaS migration than your existing ones.

2. Ignore Risk Shifts

It may come as a surprise to some within your company to find out how much risk your customers bear in order to host and use your product on-premise. These risks include security, availability, capacity, scalability, and disaster recovery. They also include costs such as software licensing that have been passed through to your customers, but are now yours, part of your operating margins.

How to Avoid?

Many of these “-ilities” are likely to be new disciplines that now must be instilled in your company, some by hiring key individuals (e.g.: a CISO), some through additional focus and rigor during your PDLC. Part of the process of rearchitecting for SaaS includes ensuring you have adequate scalability, are designed with availability in mind, and a production topology that enables disaster recovery. Where vertical scalability might be acceptable in your on-prem world (“just buy a bigger machine”), you now need to ensure you have horizontal scalability, ideally in an elastic form. The cost of proprietary software (e.g.: database licenses) is now yours to carry, and a shift to open source software can significantly improve your margins. These “-ilities” are also known as non-functional requirements (NFRs), and need to be considered with at least as much weight as your functional requirements during your backlog planning and prioritization.

And now, the biggest mistake we see made in SaaS migrations…

1. Underestimate Inertia

Inertia is a powerful force. Over the years spent building up your on-premise capabilities, you’ve almost certainly developed tremendous inertia - defined for our purposes as “a tendency to do nothing or remain unchanged”. In order to achieve true SaaS, in order to satisfy your investors and reach SaaS-like multiples, nearly every part of your company needs to act differently.

How to Avoid?

First, ensure your entire company is ready to embrace change. For many companies, the move to SaaS is the only answer to an existential threat that a known (or unknown) competitor presents – one who is listening to your customers say “Get this IT stuff out of my shop!”. Examples of SaaS disruptors include:

  • Salesforce destroying Siebel
  • ServiceNow vs. Remedy
  • Workday taking on Oracle/Peoplesoft

Is there a disruptor waiting to take over your business?

Many companies choose to disrupt themselves, and after switching to SaaS, drive their stock price through the roof. Look at Adobe’s stock price once they fully embraced (or made their customers embrace) SaaS over packaged software.

Regardless of how you position the importance of SaaS to your employees, there will still be some that are stuck by inertia in the on-prem ways. Either relegate them to stay with the on-prem effort, or ‘weed’ them out.

Once you’ve built up momentum and desire within your company to make the move to SaaS, make a concerted examination department-by-department to determine how each will need to change. All involved need to recognize the risk shifts mentioned above and the mindset changes they require. While you likely have excellent, seasoned on-prem employees, do you have enough SaaS experience across each team? The SaaS migration should in no way be treated solely as an engineering exercise.

It always comes as a shock how many departments need to break out of their existing inertia, and act differently. Some examples:

Sales:

  • Can no longer promise code customizations.
  • Need to address cloud security concerns.
  • Must ensure existing customers know they can no longer dictate when releases occur.

Finance:

  • Learn to speak ARR (annual recurring revenue).
  • Should look at alternative revenue schemes (seat vs. utility).
  • SaaS presents changes in revenue recognition. Be prepared.

Product:

  • Should focus on iterative releases that enable product discovery.
  • Must learn to balance the NFRs/”-ilities” along with new features.
  • Need to consider alternative customers and markets for your new SaaS offering.

Customer Support:

  • Will likely need to become continually engaged in the PDLC process in order to stay abreast of releases occurring at far greater frequency than before.
  • Must develop (along with engineering) incident management processes that deal with multiple customers simultaneously having issues.

Security:

  • Better spin up this department in a hurry!

Professional Services:

  • As no code-level customizations should be happening, you might end up reducing this team, focusing them more on integrations with your customers’ IT or 3rd party products.

Engineering:

Hmm, so much change for this department. Where to start? Start by bringing AKF on board to examine your SaaS migration effort. Here is where we can help you the most.


Permalink

Open Source Software as a malware “on ramp”

August 21, 2018  |  Posted By: Larry Steinberg

Open Source Software (OSS) is an efficient means to building out solutions rapidly with high quality. You utilize crowdsourced design, development and validation to conveniently speed your engineering.  OSS also fosters a sense of sharing and building together - across company boundaries or in one’s free time.

So just pull a library down off the web, build your project, and your company is ready to go. Or is that the best approach? What could be in this library you’re including in your solution that might not be wanted? This code will be running in critical environments - your SaaS servers, internal systems, or customer systems.  Convenience comes at a price, and there are some well-known cases of malicious code embedded in popular open source libraries.

What is the best approach to getting the benefits of OSS and maintaining the integrity of your solution? 

Good practices are a necessity to ensure a high level of security. Just as you test OSS for functionality, scale, and load, you should also validate it against vulnerabilities.  Pre-production vulnerability and penetration testing is a good start.  Also, utilize good internal process and reviews. Keep the process simple to maintain speed, but establish internal accountability and vigilance over code entering your environment. You are already practicing good coding techniques with reviews and/or peer coding - build an equivalent practice for OSS.

Always utilize known good repositories and validate the project sponsors. Perform diligence on the committers just like you would for your own employees. You likely perform some type of background check on your employees before making an offer - whether going to a third party or simply looking them up on linkedin and asking around.  OSS committers have the same risk to your company - why not do the same for them? Understandably, you probably wouldn’t do this for a third party purchased solution, but your contract or expectation is that the company is already doing this and abiding by minimum security standards. That may not be true for your OSS solutions, and as such your responsibility for validation is at least slightly higher. There are plenty of projects coming from reputable sources that you can rely on. 

Ensure that your path to production only includes artifacts built from internal sources that were either developed or reviewed by your team.  Also, be intentional about OSS library upgrades; these should be planned and made part of the process.
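One way to enforce an internal-artifacts-only path to production is to check artifact digests against an allow-list emitted by your own build system. This is a hedged sketch of that gate, not a prescription for any particular CI tool:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Digest of a build artifact, computed incrementally for large files."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: Path, approved_digests: set) -> bool:
    """Allow promotion only for artifacts whose digest is on the
    allow-list produced when the internal, reviewed build ran."""
    return sha256_of(path) in approved_digests
```

The allow-list itself would be generated by the trusted build pipeline, so an artifact pulled straight off the web can never match.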

OSS is highly leveraged in today’s software solutions and provides many benefits. Be diligent in your approach to ensure you only see the upside of open source.

RELATED CONTENT


5 Focuses for a Better Security Culture

3 Practices Your Security Program Needs

Security Considerations for Technical Due Diligence

 

Permalink

3 Practices Your Security Program Needs

August 9, 2018  |  Posted By: Greg Fennewald


AKF Partners has worked with over 400 companies in our history and we’ve seen a wide variety of both good and bad things.  The rise of server virtualization, the spread of NoSQL in the persistence tier, and the growing prevalence of cloud hosting are some of the technology developments in recent years.  In the information security arena, there are several practices that are a good indicator of overall security program efficacy.  Do them well and your security program is probably in good shape.  Do them poorly – or not at all – and your security program might be headed for trouble.

1.  Annual Security Training and Testing

Everyone loathes mandated training topics, especially those that require a defined amount of time be spent on the training (many of which are legislative requirements).  There’s no reliable method to make security training fun or enjoyable, so let’s hold our noses and focus on why it is important:

  • Testing establishes accountability – people do not want to fail, and there should be consequences for failure.
  • Security threats change over time – annually recurring training provides a vehicle for updating awareness of current threats.  Look through the OWASP Top 10 across several years to see how threats change.
  • Recurring training and testing are becoming table stakes – any audit is going to start by asking about your training and awareness program.

2.  Security Incident Response Plan

An IRP is not amongst the first few security policies a company needs, but when it is needed, it is needed urgently.

  • A security incident is virtually a certainty over a sufficiently large time horizon.
  • Similar to parachutes and fire extinguishers, planning and practice dramatically improve results.
  • Evolving data privacy regulations, GDPR for instance, are likely to heighten incident disclosure requirements – a solid IRP will address disclosure.

3.  Open Source Software Inventory



Open source software inventory?  How is that related to security?  Many consider OSS inventory as a compliance requirement – ensuring the company complies with the licensing requirements of the open source components used, particularly important if the business redistributes the software.  OSS inventory also has security applicability.

  • Provides the ability to identify risks when open source component vulnerabilities and exploits are disclosed – what’s in your stack, and is the latest exploit a risk to your business?
  • Most effective when coupled with a policy on how new open source components can be safely utilized.
  • Lends itself well to automation and tooling with security resource oversight.
  • Efficient, serving two purposes – open source license compliance and security vulnerability tracking.
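As a rough illustration, an OSS inventory can start as simply as parsing a pinned manifest and cross-checking it against advisory data; the component names and advisory feed below are hypothetical:

```python
# Hypothetical advisory feed: component -> versions known to be vulnerable.
# In practice this data would come from a vulnerability database, not code.
ADVISORIES = {
    "examplelib": {"1.0.0", "1.0.1"},
}

def parse_manifest(lines):
    """Turn 'name==version' manifest lines into an inventory dict."""
    inventory = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        name, _, version = line.partition("==")
        inventory[name.lower()] = version
    return inventory

def at_risk(inventory, advisories):
    """Return components whose pinned version appears in an advisory."""
    return {name: ver for name, ver in inventory.items()
            if ver in advisories.get(name, set())}
```

When a new exploit is disclosed, answering “is this a risk to our business?” becomes a lookup instead of a scramble.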

What do these three security practices have in common?  People – not technology.  Firewall rules and the latest intrusion detection tools are not on this list.  Many security breaches occur as the result of a human error leading to a compromised account or improper system access.  Training and testing your people on the basics, having a plan on how to respond should an incident occur, and being able to know if an open source disclosure affects your risk profile are three human-focused practices that help establish a security-minded culture.  Without the proper culture, tools and automation are less likely to succeed.

RELATED CONTENT


5 Focuses for a Better Security Culture

Tech Due Diligence - 5 Common Security Mistakes

Security Considerations for Technical Due Diligence

 

Permalink
