AKF Partners

Abbott, Keeven & Fisher Partners: Partners in Technology

Growth Blog

Scalability and Technology Consulting Advice for SaaS and Technology Companies

Eight Reasons To Avoid Stored Procedures

June 18, 2018  |  Posted By: Pete Ferguson

In my short tenure at AKF, I have found the topic of stored procedures (SPROCs) to be provocatively polarizing.  When we conduct a technical due diligence on a fairly new startup for an investment firm and ask whether they use stored procedures in their database, we often get a puzzled look, as though we had just accused them of dating their sister, and a resounding answer: "NO!"

However, when conducting assessments of companies that have been around awhile and are struggling to quickly scale, move to a SaaS model, and/or migrate from hosted servers to the cloud, we find “server huggers” who love to keep their stored procedures on their database.

At two different clients earlier this year, we found thousands of stored procedures in their databases.  What was once seen as a time-saving efficiency is now one of several major obstacles to SaaS and cloud migration.

AKF Scalability Rules Why Stored Procedures Shouldn't be Saved on the Database

In our book, Scalability Rules: Principles for Scaling Web Sites, Marty outlines many reasons why stored procedures should not be kept in the database.  Here are the top eight:

  1. Cost: Databases tend to be among the most expensive systems or services in the architecture, and each additional SPROC increases the cost of every transaction.  Increasing the cost of scale (for example, by making a synchronous call to an ERP system for each transaction) while also reducing the availability of the product platform by adding yet another system in series doesn't make good business sense.
  2. Creates a Monolith: SPROCs on a database create a monolithic system that cannot be easily scaled.
  3. Limits Scalability: The database is a governor of scale; SPROCs steal capacity by running non-relational work on the database.
  4. Limits Automated Testing: SPROCs limit the automation of code testing (in many cases it is not as easy to test stored procedures as it is the other code developers write), slowing time to market and increasing cost while decreasing quality.
  5. Creates Lock-in: Changing to an open-source or NoSQL solution requires a plan to migrate SPROCs or replace the logic in the application.  It also makes it more difficult to switch to new and compelling technologies, negotiate better pricing, etc.
  6. Adds Unneeded Complexity to Sharding Databases: Using SPROCs and business logic on the database makes sharding and replacement of the underlying database much more challenging.
  7. Limits Speed to the Weakest Link: Systems should scale independently relative to individual needs.  When business logic is tied to the database, each must scale at the same rate as the system making requests of it, which means growth is tied to the slowest system.
  8. Limits Team Composition Flexibility: By separating product and business intelligence in your platform, you can also separate the teams that build and support those systems.  If a product team is required to understand how its changes impact all related business intelligence systems, its pace of innovation slows, as the scope of implementing and testing product changes and enhancements broadens significantly.
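To make the separation concrete, here is a minimal sketch (in Python with SQLite, using a hypothetical order_items table and discount rule, not any real client's schema) of keeping business logic in the application tier while the database does only relational work:

```python
import sqlite3

# Hypothetical example: an order-total calculation that might otherwise
# live in a stored procedure, kept in the application tier instead.

def order_total(conn, order_id, discount_rate=0.0):
    # The database does only relational work: a simple parameterized query.
    row = conn.execute(
        "SELECT SUM(quantity * unit_price) FROM order_items WHERE order_id = ?",
        (order_id,),
    ).fetchone()
    subtotal = row[0] or 0.0
    # Business logic (discounting) stays in application code, where it can be
    # unit tested and scaled independently of the database.
    return round(subtotal * (1.0 - discount_rate), 2)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE order_items (order_id INT, quantity INT, unit_price REAL)")
conn.executemany(
    "INSERT INTO order_items VALUES (?, ?, ?)",
    [(1, 2, 10.0), (1, 1, 5.0), (2, 3, 4.0)],
)
print(order_total(conn, 1, discount_rate=0.10))  # 22.5
```

Because the discount rule lives in ordinary application code, it can be covered by automated tests and deployed independently of any database change.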

Per the AKF Scale Cube, we want to separate dissimilar services; keeping stored procedures on the database means the database cannot be split easily.

Need help migrating from hosted hardware to the cloud or migrating your installed software to a SaaS solution?  We have helped hundreds of companies, from small startups to well-established Fortune 50 companies, better architect, scale, and deliver their products.  We offer a host of services, from technical due diligence and onsite workshops to mentoring and interim staffing for your company.

 

Subscribe to the AKF Newsletter

Contact Us

Multi-Tenant Defined

June 11, 2018  |  Posted By: Marty Abbott

Of the many SaaS operating principles, perhaps one of the most misunderstood is the principle of tenancy.

Most people have a definition in their mind for the term "multi-tenant".  Unfortunately, because the term has so many valid interpretations, its usage can sometimes be confusing.  Does multi-tenant refer to the physical or logical implementation of our product?  What does multi-tenant mean when it comes to an implementation in a database?

This article first covers the goals of increasing tenancy within solutions, then delves into the various meanings of tenancy.

Multi-Tenant (Multitenant) Solutions and Cost

One of the primary reasons why companies that present products as a service strive for higher levels of tenancy is the cost reduction it affords in presenting a service.  With multiple customers sharing applications and infrastructure, system utilization goes up: we get more value production out of each server that we use, or alternatively we get greater asset utilization.  Because most companies view the cost of serving customers as a "Cost of Goods Sold" (COGS), multitenant solutions have better gross margins than single-tenant solutions.  The X axis of the figure below shows the effect of increasing tenancy on the cost of goods sold on a per-customer basis:

On Prem vs ASP vs SaaS models and cost implications

Interestingly, multitenant solutions often "force" another SaaS principle to be true: no more than one to three versions of software for the entire customer base.  This is especially true if the database is shared at a logical (row-level) basis (more on that later).  Lowering the number of product versions decreases the operating expense necessary to maintain multiple versions and therefore also increases operating margins.

Single Tenant, Multi-Tenant and All-Tenant

An important point to keep in mind is that "tenancy" occurs along a spectrum moving from single-tenant to all-tenant.  Multitenant is any solution where the number of tenants, from a physical or logical perspective, is greater than one, including all-tenant implementations.  As tenancy increases, Cost of Goods Sold (COGS in the figure above) decreases and Gross Margins increase.
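A toy calculation illustrates the margin effect; the dollar figures and customer counts below are made up for illustration, not AKF benchmarks:

```python
# Toy arithmetic only: per-customer COGS falls as more tenants share an
# instance, so gross margin rises with tenancy.

def gross_margin(revenue_per_customer, instance_cost, customers_per_instance):
    # COGS per customer is the shared instance cost divided among tenants.
    cogs_per_customer = instance_cost / customers_per_instance
    return (revenue_per_customer - cogs_per_customer) / revenue_per_customer

# A $1,000/month instance and $100/month of revenue per customer:
for tenants in (1, 10, 100):
    print(tenants, gross_margin(100.0, 1000.0, tenants))
```

At one tenant the hypothetical service loses money on every customer; at one hundred tenants the same instance yields a 90% gross margin.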

The problem with all-tenant solutions, while attractive from a cost perspective, is that they create a single failure domain (see https://akfpartners.com/growth-blog/fault-isolation), thereby decreasing overall availability.  When something goes poorly with our product, everything is offline.  For that reason, we differentiate between solutions that enable multi-tenancy for cost reasons and all-tenant solutions.

Multi-tenancy compared to single-tenant and all tenant

The Many Meanings and Implementations of Tenancy

Multitenant solutions can be implemented in many ways and at many tiers.

Physical and Logical

Physical multi-tenancy is having multiple customers share a number of servers.  This helps increase the overall utilization of these servers and therefore reduce costs of goods sold.  Customers need not share the application for a solution to be physically multitenant.  One could, for instance, run a webserver, application server or database per customer.  Many customers with concerns over data separation and privacy are fine with physical multitenancy as long as their data is logically separated.

Logical multi-tenancy is having multiple customers share the same application.  The same webserver instances, application server instances, and database are used for every customer.  The situation becomes a bit murkier, however, when it comes to databases.

Different relational databases use different terms for similar implementations.  A SQL Server database, for instance, looks very much like an Oracle schema.  Within databases, a solution can be logically multitenant by implementing tenancy either in a table (we call that row-level multitenancy) or within a schema/database (we call that schema multitenancy).  In either case, a single instance of the relational database management system (RDBMS) is used, while customer transactions are separated by a customer id inside a table, or by database/schema id if separated as such.
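A minimal sketch of the two styles, using Python and SQLite (the table names are illustrative, and SQLite's ATTACH stands in here for an Oracle schema or SQL Server database per tenant):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Row-level multitenancy: one shared table, rows separated by customer_id.
conn.execute("CREATE TABLE invoices (customer_id INT, amount REAL)")
conn.executemany("INSERT INTO invoices VALUES (?, ?)",
                 [(1, 50.0), (2, 75.0), (1, 25.0)])
# Every query must filter by tenant.
total = conn.execute(
    "SELECT SUM(amount) FROM invoices WHERE customer_id = ?", (1,)
).fetchone()[0]
print(total)  # 75.0

# Schema-style multitenancy: one RDBMS instance, a separate namespace per
# tenant; queries are scoped by the schema name rather than a column filter.
conn.execute("ATTACH ':memory:' AS tenant_2")
conn.execute("CREATE TABLE tenant_2.invoices (amount REAL)")
conn.execute("INSERT INTO tenant_2.invoices VALUES (75.0)")
```

In both cases a single RDBMS instance serves all customers; the difference is whether isolation is enforced by a column in every query or by the schema boundary itself.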

While physical multitenancy provides cost benefits, logical multitenancy often provides significantly greater cost benefits.  Because applications are shared, we need less system overhead to run an application for each customer and thus can get even greater throughput and efficiencies out of our physical or virtualized servers.

Depth of Multi-Tenancy

The diagram below helps to illustrate that every layer in our service architecture has an impact to multi-tenancy.  We can be physically or logically multi-tenant at the network layer, the web server layer, the application layer and the persistence or database layer.

The deeper into the stack our tenancy goes, the greater the beneficial impact (cost savings) to costs of goods sold and the higher our gross margins.

Review of tenancy options in the traditional deployment stack

The AKF Multi-Tenant Cube

To further the understanding of tenancy, we introduce the AKF Multi-Tenant Cube.

Multi-tenancy and cost implications mapped by degree, mode and type of multi-tenancy

The X axis describes the "mode" of tenancy, moving from shared nothing, to physical, to logical.  As we progress from sharing nothing to sharing everything, utilization goes up and cost of goods sold goes down.

The Y axis describes the depth of tenancy, from shared nothing through the network, web, app, and finally persistence or database tier.  Again, as the depth of tenancy increases, so do Gross Margins.

The Z axis describes the degree of tenancy, or the number of tenants.  Higher levels of tenancy decrease costs of goods sold, but architecturally we never want a failure domain that encompasses all tenants. 

When running an XaaS (SaaS, etc.) business, we are best off implementing logical multitenancy through every layer of our architecture.  While we want tenancy to be high per instance, we also do not want all tenants to be in a single implementation.

AKF Partners helps companies of all sizes achieve their availability, time to market, scalability, cost and business goals.


Technical Due Diligence Checklists

June 6, 2018  |  Posted By: Greg Fennewald


What should one examine during a technical due diligence?  Are there industry best practices that should be used as a guide?  Are there any particular items to request in advance?  The answers to these questions are both simple and complex.  Yes, there are industry best practices that can be used as a guide.  Where to find them, and how good they are, is the more complex question.

AKF Partners has conducted countless diligence sessions spanning more than a decade, and we've developed lists of information to request in advance and items to evaluate during the due diligence session.  These engagements range from very small pre-product, seed-round companies to very large investments in companies with more than $2Bn in annual recurring revenue.  We share those lists with you below, with a note of caution: a technical due diligence is far more than a list of "yes" or "no" answers; evaluating the responses into a comprehensive measure of technical merit and risk is the secret sauce.

10 Things to Request in Advance

1. Architectural overview of the platform and software.
2. Organizational chart of the technology team, including headcount, responsibilities, and any outsourced development or support.
3. Technology team budget and actuals for the previous 3 years.  Ideally these should be separated by Gross Margin activities and Operating Margin activities.
4. 3rd party audit reports – vulnerability scans, penetration tests, and PCI, HIPAA, and ISO certification reports.
5. 3rd party software in use, including open source software.
6. Current product and technology roadmaps, previous 2 years of major enhancements and releases.
7. Hosting plan – data centers, colocation, and cloud plus disaster recovery and business continuity plans.
8. Previous 3 years of material technology failures.
9. Previous 5 years of security breaches/incidents/investigations.
10. Description of the software development and delivery process.


Areas to Cover During the Due Diligence

The below represent a necessary, but often insufficient, set of technical due diligence questions.  These questions are intended to jump-start a conversation leading to deeper levels of understanding of a service offering and the maintenance/operations of that service offering.  Feel free to use them to start your own diligence program or to augment your existing program.  But be careful: having a checklist does not make one a successful pilot.

Scalability

Are incoming requests load balanced?  If so, how?  Is any persistence required for the purposes of session affinity?  If so, why?
Are the sessions being stored? If yes, where?  How?  Why?
Are services separated across different servers?  For what purpose? 
Are services sized appropriately?  What are the bounds and constraints?
Are databases dedicated to a subset of services/data?
Are users segmented into independent pods?
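As one example of what a good answer to the pod question can look like, here is a hypothetical sketch (the pod names are made up) of routing each user deterministically to an independent pod:

```python
import hashlib

# Hypothetical sketch of user segmentation into independent pods:
# each user hashes deterministically to one pod, so a failure in one
# pod affects only the users routed to it.
PODS = ["pod-a", "pod-b", "pod-c"]  # made-up pod names

def pod_for_user(user_id):
    # A stable hash keeps a user on the same pod across requests.
    digest = hashlib.sha256(str(user_id).encode()).hexdigest()
    return PODS[int(digest, 16) % len(PODS)]

print(pod_for_user("alice") == pod_for_user("alice"))  # True: routing is stable
```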

Fault Tolerance

Do services communicate asynchronously?
Do services have dedicated databases? Is any database shared between services?
Are single points of failure eliminated?
Is infrastructure designed for graceful failure?  Is software designed for graceful failure?  What is the evidence of both?
Is N+1 an architectural requirement at all levels?
Are active-active or multiple AZs utilized?  Tested?
Are data centers located in low risk areas?
To what does the company design in terms of both RPO and RTO?

Session and State

Are the solutions completely stateless?
Where, if anywhere, is session stored and why?
Is session required to be replicated between services? 
Is session stored in browsers?
Is session stored in databases?
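One common answer to these questions is to keep session out of the service tier entirely, for example in a signed browser cookie. A minimal sketch, with a placeholder secret key:

```python
import base64
import hashlib
import hmac
import json

# Sketch of a stateless session: the session data lives in the browser as a
# signed token, so no server-side replication is needed.  The key below is a
# placeholder, not a real secret.
SECRET = b"not-a-real-key"

def sign_session(data):
    payload = base64.urlsafe_b64encode(json.dumps(data).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    return payload + b"." + sig

def verify_session(token):
    payload, sig = token.rsplit(b".", 1)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or forged token
    return json.loads(base64.urlsafe_b64decode(payload))

token = sign_session({"user": "alice"})
print(verify_session(token))  # {'user': 'alice'}
```

Any server can verify the token, so requests need no session affinity and the session tier scales horizontally.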

Cost Effectiveness

Is auto-scaling enabled?
Is reliance on costly 3rd party software mitigated?
Are stored procedures eliminated in the architecture?
Are servers distributed across different physical or virtual tiers?
Can cloud providers be easily switched?
Is the amount of technical debt quantified?
Is only necessary traffic routed through the firewall?
Are data centers located in low cost areas?

Processes

Can the product management team make decisions to alter features?
Are business goals owned by those who enact them?
Are success metrics used to determine when a goal is reached?
Is velocity measured?
Are coding standards documented and applied?
Are unit tests required?
Are feature flags standard?
Is continuous integration used?
Is continuous deployment utilized?
Are payloads small and frequent rather than large and infrequent?
Can the product be rolled back if issues arise?
Is automated testing coverage greater than 75%?
Are changes being logged and made readily available to engineers?
Is load and performance testing being conducted prior to release?

Operations

Are incidents logged with enough detail to ascertain potential problems?
Are alerts sent in real time?
Are systems designed for monitoring?
Are user behaviors (logins, downloads, checkouts) used to create business metric monitors?
Is remaining infrastructure headroom known?
Are post mortems conducted and fed back into the system?

Organization

Are teams seeded, fed and weeded?
Are teams aligned to the services or products they create?
Are teams cross-functional?
Are team goals aligned to top-level business goals?
Do teams sit together?

Security

Are there approved and published security policies?
Are security responsibilities clearly defined?
Does the organization adhere to legal/regulatory requirements as necessary (PCI, HIPAA, SOX, etc)?
Has an inventory of all data assets been conducted and maintained?
Is multi-factor authentication in place?
Are vulnerability scans conducted?
Is a security risk matrix maintained?
Is production system access role based and logged?


AKF has conducted countless due diligence engagements over the last decade.  We can take a checklist and add to it context and real world experience to create a finished product that captures the technical risks and merits of a company, improving the quality of your diligence process.


Crossing the People Chasm Within Your Organization

June 6, 2018  |  Posted By: Pete Ferguson


In Geoffrey Moore's book "Crossing the Chasm," he argues there is a chasm between the early adopters of a product (the technology enthusiasts and visionaries) and the early majority (the pragmatists).  He illustrates well the differences in their self-interests and their very different needs for security versus willingness to take on risk.

People’s talents, attitudes, and skills similarly must cross the rapid growth chasm within your organization if your company is to remain viable and competitive.

As AKF Partners assesses fast-growing companies in technical due diligence engagements, we often observe Moore's chasm principle in play with an organization's people and the ability of legacy employees to make the jump to the "next big thing" and keep up with explosive growth.  Conversely, we have also seen a "why change?" attitude greatly hinder, and even blindside, the scalability of a fledgling company.

The Chasm From Startup to Established Company

Young, well-funded startups have a lot of flair that Millennials and corporate escapees love: free food, eccentric workplaces, schedule flexibility, and very little bureaucracy, policy, or procedure.  This works very well for small, talented teams during a scrappy period of rapid growth, where the common goals of the organization are well known and lived and breathed daily, and personal and group conversations with the CEO and CTO occur regularly, sometimes daily.  Often there is minimal rigor around Agile rituals; during startup and rapid growth there is likely very little time to formalize processes, and the outcomes (100-200% growth in customer acquisition and profits) can be mistaken for a "full steam ahead" mandate to not change anything.

Recently we worked with a company that was a decade old and fairly large compared to many startups we see in our technical due diligence work for investors.  The founders had seen the need to bring in experienced and open-minded senior leadership, and it was inspiring to see the vigor, enthusiasm, growth, and speed of a young fledgling company paired with defined metrics and compliance with set ground rules.

There was no observable bureaucracy.  There was clear direction.

Because this is unfortunately more of an outlier than it should be, I was impressed and wanted to know what set them apart from other, more mature companies I have worked with or for.  I found several differentiators.

My observations of companies who bridge the organizational chasm of growth:

  • Successful companies do not confine themselves to one segment of the market; they are thoughtful and disciplined when taking on new segments.  They follow Moore's observations well and saturate one small subset of a new market with marketing, sales, and customer service, and provide steep discounts to get a foothold.  Once established, they expand horizontally within the subset and rinse and repeat until they are the market leader.  This allows them to fail forward fast through constant innovation and iteration.  It requires the people in their organizations to have an Agile mindset and not rest on their laurels.
  • Successful companies have teams with a good diversity of opinions but are unified in how they execute on their plan.  Senior leadership is very successful in constantly communicating the vision of the company through desired outcomes and allows teams the autonomy to get there however they can.  Because the focus is on the outcome rather than the process, there is very little bureaucratic red tape, trust is very high, and teams are not afraid to fail fast, learn, and iterate more successfully.
  • Successful companies keep things simple and team members are onboard with the company philosophy and understand how their role fits into the larger scheme of things.  Google pioneered OKRs - “Objectives and Key Results” - as how they measure success within their organization.  OKRs allow for nested outcomes to be defined, aligning teams with the broader company goal and successful companies have the common thread of how success is defined through outcomes from the top to the bottom of the org chart.

While some of the companies highlighted in Moore's 1999 edition of the book eventually could not cross the chasm with newer products (Blackberry, 3Com/Palm), the principles he outlined are common to companies that endure today (Apple: desktop to MP3 player to mobile phone/tablet to watch to … [insert Apple's next category DOMINATOR here]).

When looking at products, according to Moore, the marketer should focus on one group of customers at a time, using each group as a base for marketing to the next group to create a bandwagon effect with momentum that spreads to others in the next marketing segment.  The focus on each segment is intense and an “all hands on deck” blitz approach to include marketing, software engineering, product, customer service, sales, and others.

Similarly when it comes to what is going on inside of organizations, it is important to ensure your people cross the chasm of change required for your products to remain viable and enduring.  Successful companies know that either their people have to make the transition to new skill sets/mindsets or they will need to be transitioned out of the company.  Either way it’s important to inject new people with the experience needed into the organization.  At AKF, we refer to this as Seed, Feed, and Weed.

What we see in successful companies is an early focus on standardization but with freedom for exploration.  Allowing each team to use their own communication devices (Slack, Hive, Spark) and Agile methods is something that does not scale well.  But seeking input from team members and having each team follow a standard software development cycle with similar Agile methodology does scale well and allows teams to interoperate without administrative and communicative friction.

Conclusions

Successful companies endure because individuals are allowed autonomy to reach shared outcomes.  Tools are provided to help individuals succeed, fail forward fast, learn and share their learning, and automate mundane tasks; the tools are an aid, not a bureaucratic bottleneck.  To remain successful, companies must constantly focus on taking their team members with them through the chasms of growth into new and emerging markets by continually upgrading their skills and their contribution to the company's desired outcomes.

Measuring success is not just about the stock price (many failing companies, e.g. Palm, had a good stock price while internal decay had been going on for several quarters); it must be a thorough measurement of all aspects of the company's technical abilities: architecture, process, organization, and security.

RELATED CONTENT

Technical Due Diligence Checklists

Do You Know What is Negatively Affecting Your Engineer’s Productivity? Shouldn’t You?

SaaS Migration Challenges

The No Surprises Rule

We’d love your feedback and an opportunity to see how AKF Partners can help your organization maximize outcomes. Contact us now.


The Many Unfortunate Meanings of Cloud

June 5, 2018  |  Posted By: Marty Abbott

Enough is enough already – stop using the term “Cloud”.  Somewhere during the last 20 years, the term “Cloud” started to mean to product offerings what Sriracha and Tabasco mean to food:  everything’s better if we can just find a way to incorporate it.  Just as Sriracha makes barely edible items palatable and further enhances the flavor of delicacies, so evidently does “Cloud” confuse the unsophisticated buyer or investor and enhance the value for more sophisticated buyers and investors. That’s a nice analogy, but it’s also bullshit.

The term cloud just means too many things – some of which are shown below:


various meanings of the word cloud and the confusion it causes


The real world of cloud offerings can be roughly separated into two groups:

  1. Pretenders: This group of companies knows, at some level, that it hasn't turned the corner and started truly offering "services".  They support heavy customization, are addicted to maintenance revenue streams, and offer low levels of tenancy.  These companies simply can't escape the sins of their past.  Instead, they slap the term "cloud" on their product in the hopes of being seen as relevant.  At worst, it's an outright lie.  At best, it's slightly misleading relative to the intended meaning of the term.  Unless, of course, anything that's accessible through a browser is "Cloud".  These companies should stop using the term because deep down, when they are alone with a glass of bourbon, they know they aren't a "cloud company".
  2. Contenders: This group of companies either blazed a path for the move to services offerings (think rentable instead of purchasable products) or quickly recognized the services revolution; they were "born cloud" or are truly embracing the cloud model.  They prefer configuration over customization and stick to the notion of a small number of releases (countable on one hand) in production across their entire customer base.  They embrace physical and logical multi-tenancy both to increase margins and decrease customer costs.  These are the companies that pay the tax for the term "cloud" – a tax that funds the welfare checks for the "pretenders".

The graph below plots Cloud Pretenders, Contenders and Not Cloud products along the axes of gross margin and operating margin:

Various models of cloud and on-premise plotted against cost of goods sold and operating expense

Consider one type of "Pretender": a company hosting a single-tenant, client-customized software release for each of its many customers.  This is an ASP (Application Service Provider) model.  But there is a reason the provider of the service won't call itself an ASP: the margins of an ASP stink relative to those of a true "SaaS" company.  The term ASP is old and antiquated.  The fix?  Just pour a bit of "cloud sauce" on it and everything will be fine.

Contrast the above case with that of a "Contender": physical and logical multi-tenancy at every layer of the architecture and a small number of production releases (one to three) across the entire customer base.  Both operating and gross margins increase as maintenance costs and hosting costs decrease when allocated across the entire customer base.

Confused?  So are we.  Here are a few key takeaways:

  1. “Cloud” should mean more than just being accessed through the internet via a browser.  Unfortunately, it no longer does as anyone who can figure out how to replace their clients with a browser and host their product will call themselves a “Cloud” provider.
  2. Contenders should stop using the term “Cloud” because it invites comparison with companies to which they are clearly superior:  Superior in terms of margins, market needs, architecture and strategic advantage.
  3. Pretenders should stop using the term “Cloud” for both ethical reasons and reasons related to survivability.  Ethically the term is somewhere between an outright lie and an ethically contentious quibble or half-truth.  Survivability comes into play when the company believes its own lie and stops seeing a reason to change to become more competitive.

AKF Partners helps companies create “Cloud” (XaaS) transition plans to transform their business.  We help with financial models, product approach, market fit, product and technology architecture, business strategy and help companies ensure they organize properly to maximize their opportunity in XaaS.

RELATED CONTENT

The Scale Cube - Architecting for Scale

Microservices for Breadth, Libraries for Depth

SaaS Migration Challenges

When Should You Split Services?


The No Surprises Rule

May 23, 2018  |  Posted By: Geoffrey Weber

No Surprises

We blogged recently about how to write precisely and concisely, highlighting how important it was to learn the “Three Sentence Rule” early in our careers so that when we communicated with other executives, we communicated with extreme brevity and clarity.  We might think of this as the “what” of executive communication.  Today, we’d like to quickly describe a few ground rules with respect to the “when” and “how” of communicating as executives.

Ten or fifteen years ago, a fad swept through technology: executives everywhere were writing "How to Communicate with Me" articles for their teams and co-workers.  In the most positive light, these were serious attempts by quirky executives to help their teams learn to conform to their own bizarre communications requirements.  We would argue that a modern technology executive with a reasonably non-quirky personality need not pen such narcissistic claptrap.  Communication is so basic that we should not over-think the process.

In today’s world, we have a variety of communications channels available: face-to-face, email, text message, internal communications tools (e.g. Slack) and the good old telephone.  When an unexpected issue occurs on our watch, our primary duty is to inform our superior, by any means necessary as quickly as possible.  

Whether we work in a large corporate environment with thousands of employees or in a small team of 10 people, immediate communication is an absolute requirement.  If we fail at it, our superiors may hear the unexpected news before we have a chance to tell them.  Think of a major system outage: while we work to determine root cause, the VP of Marketing sends a quick text to our boss (let's say the CEO in this case).  Now the CEO is in possession of bad news about something we are responsible for.  Our phone will ring immediately, and we'll be on our back foot explaining why we hadn't taken a moment to call.

A worse example might be a system outage that we, as CTO, were not aware of, and the very same VP of Marketing texts the CEO again.  Now when the phone rings, we are surprised, just as the CEO was surprised by the VP of Marketing.  Our team has failed at a very fundamental level.

There’s an informal rule that states: No Surprises.  The corollary is, communicate as early as possible and as often as possible.  A site outage demands an immediate upward missive with frequent updates.  The leaders who work under us must also live by this rule.  We can never be left out in the cold when it comes to significant information.  Furthermore, we are solely accountable for the communication of negative news up to our bosses.

The idea of communicating early and communicating often has a number of uses beyond crisis communications.  In the early days of eBay, Marty Abbott (managing partner of AKF Partners) set 4 objectives for the site operations teams: Availability (99.9%), Scalability, Cost and Operational Excellence.  Every member of the operations teams knew the current availability as it was communicated nearly continuously.  The other 3 objectives were communicated with equal frequency.  It would be a significant surprise if a colleague was working on a project that was not associated with Availability, Scalability, Cost or Operational excellence.  A few years later, we borrowed Marty’s objectives at Shutterfly and simplified: Up, Fast, Cheap and Easy. All 50 operations team members knew those goals and we repeated them like a mantra.

The quickest path to failure as a technology executive is non-communication, the opposite of communicating clearly and frequently.  Worse, those executives who don’t stay ahead of the surprises technology throws at us every day will find themselves working in a different industry.

To summarize how to communicate:

When: early and often

How: any means available

What: 3 sentences.

We don’t need to write five-page essays on how to communicate unless we are quite peculiar.

 

Subscribe to the AKF Newsletter

Contact Us

4 Landmines When Using Serverless Architecture

May 20, 2018  |  Posted By: Dave Berardi

Physical bare metal, virtualization, cloud compute, containers, and now Serverless in your SaaS? We are hearing more and more about Serverless computing, sometimes called Function as a Service (FaaS). In this next iteration of Infrastructure as a Service, users can execute a task or function without having to provision a server, virtual machine, or any other underlying resource. The word Serverless is a misnomer: provisioning of the underlying resources is abstracted away from the user, but the servers still exist under the covers. It’s just that Amazon, Microsoft, and Google manage them for you with their code. AWS Lambda, Azure Functions, and Google Cloud Functions are becoming more common in the architecture of a SaaS product. As technology leaders responsible for architectural decisions around scale and availability, we must understand the pros and cons and take the right actions to apply the technology well.
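To make the programming model concrete: an AWS Lambda function in Python is nothing more than a handler that receives an event and a context – there is no server to provision.  A minimal sketch (the event shape here is illustrative; real event payloads depend on the configured trigger):

```python
import json

# A minimal AWS Lambda-style handler. The platform provisions and scales
# the underlying compute; the engineer supplies only this function.
def handler(event, context):
    # The event's shape is defined by whatever triggers the function
    # (API gateway, queue, scheduled event, etc.).
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

The absence of any server, process, or framework code in that example is precisely the appeal – and, as the landmines below show, also the source of the risk.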

Several advantages of serverless computing include:

• Software engineers can deploy and run code without having to manage any underlying infrastructure, effectively creating a No-Ops environment.
• Auto-scaling is easier and requires less orchestration as compared to a containerized environment running services.
• True On-Demand capacity – no orphaned containers or other resources that might be idling.
• They are cost effective IF we are running the right size workloads.
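That last point – cost effectiveness depends on workload size – deserves some arithmetic.  A rough breakeven sketch comparing per-invocation FaaS cost to an always-on instance (all prices below are illustrative assumptions, not current published rates; check your provider's pricing):

```python
# Illustrative (assumed) prices -- NOT current published rates.
PRICE_PER_GB_SECOND = 0.0000167    # FaaS compute price per GB-second
PRICE_PER_MILLION_REQUESTS = 0.20  # FaaS per-request price
INSTANCE_PER_MONTH = 35.00         # small always-on instance, monthly

def faas_monthly_cost(invocations: int, avg_ms: float, memory_gb: float) -> float:
    """Monthly FaaS cost for a given invocation volume and function size."""
    gb_seconds = invocations * (avg_ms / 1000) * memory_gb
    compute = gb_seconds * PRICE_PER_GB_SECOND
    requests = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    return compute + requests

# Low, spiky traffic strongly favors FaaS; sustained heavy traffic favors
# the always-on instance.
light = faas_monthly_cost(invocations=100_000, avg_ms=200, memory_gb=0.5)
heavy = faas_monthly_cost(invocations=50_000_000, avg_ms=200, memory_gb=0.5)
print(f"light: ${light:.2f}/mo  heavy: ${heavy:.2f}/mo  instance: ${INSTANCE_PER_MONTH:.2f}/mo")
```

Under these assumed prices, the light workload costs pennies while the heavy workload costs several times the instance – which is exactly why "right size workloads" matters.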

Disadvantages and potential landmines to watch out for:

• Landmine #1 - No control over the execution environment, meaning you are unable to isolate your operational environment. Compute and networking resources are virtualized, with no visibility into either. Availability is in the hands of your cloud provider, and uptime is not guaranteed.
• Landmine #2 - SLAs cannot guarantee uptime. Cold-start time can take a second or more, causing latency that might not be acceptable.
• Landmine #3 - It becomes much easier for engineers to create code, host it rapidly, and forget about it, leading to unnecessary compute spend and additional attack vectors that create security risk.
• Landmine #4 - You will create vendor lock-in with your cloud provider as you set up your event-driven functions to trigger from other AWS or Azure services, or from your own services running on compute instances.

AKF is often asked about our position on serverless computing. Considering the advantages and the landmines outlined above, we recommend four key rules:

1) Gradually introduce it into your architecture and use it only for the right use cases.
2) Establish architectural principles that guide its use in your organization and minimize availability impact – you will tie your availability to the FaaS offering of your cloud provider.
3) Watch out for a false sense of security among your engineering teams. Understand how serverless works before you use it so that you can monitor it for performance and availability.
4) Manage how and what it’s used for – monitor it (e.g. with AWS CloudWatch) to avoid neglect, misuse, and cost inefficiencies.

AWS, Azure, or Google Cloud serverless platforms can provide an effective computing abstraction in your architecture if they are used for the right use cases, good monitoring is in place, and architectural principles are established.

AKF Partners has helped many companies create highly available and scalable systems that are designed to be monitored. Contact us for a free consultation.


Do you know what is negatively affecting your engineers' productivity? Shouldn't you?

May 13, 2018  |  Posted By: Dave Swenson
The Impact of Meetings on Engineers

Meetings, meetings, meetings. How many times have we said that? Visiting dozens and dozens of clients per year, we see a number of customers whose culture seems to be extremely meeting-centric, as if the only way any decision can be made or information communicated is via a meeting.

Paul Graham, co-founder of Y Combinator and Hacker News, wrote back in 2009 of the impact of meetings upon engineers. Coding typically is best performed in multi-hour solid chunks of time, with no interruptions. It takes a while to get into the ‘zone’, and any context switch will disrupt that zone – in Graham’s words, “like throwing an exception”. He even suggests that the impact of a meeting goes far beyond the actual time spent in the meeting: simply knowing you are going to be disrupted prevents you from reaching that zone – something like when you know you have to get up early in the morning, say for a flight, and you toss and turn all night long, unable to get into that deep REM state.

Many companies recognize the disruptive impact of meetings and put ‘no-meeting’ rules in place for afternoons, or perhaps a full day. Pinterest’s recent blog post recounts their somewhat extreme move along these lines – putting a three-day no-meeting block in place for engineers, who were not to be invited to meetings three days a week. The blog post is worth a read, covering some of the challenges and objections of eliminating engineer-attended meetings three days a week, but overall it touts the success of the approach, citing a 92% positive response rate to a survey question asking “Are you more productive…?”.

Really, Pinterest? Really??

I’m all for the reduction of meetings, though I do wonder if three days a week with no meetings is a bit overboard. What I’m disappointed by is that Pinterest has no (or at least did not cite any) quantifiable evidence that their engineers were actually more productive. Now, I’m not suggesting they should have a before and after count of, say, lines of code. But, assuming that Pinterest is at least something of an Agile shop, did they not see an increase in velocity, in story points being delivered?

In our visits to our clients, just as we see a wide variation in the dependency upon meetings to get anything done, we see some clients living and breathing by their team-by-team velocity numbers, while other clients totally disregard that key productivity metric. To you technology leaders out there, how better can you measure your teams’ efficiencies?

And, even more so, do you know why your teams’ current velocity is what it is? Are you actively seeking out the context switches, delays, and disruptions that are throwing exceptions in your engineers’ brains?

We’ve been pulled in many times to analyze a team’s efficiency (or lack thereof), only to find out that, yes, meetings are a negative influence, but beyond that:

  • Interviews (worthwhile, but hiring should be a highly optimized process)
  • Environmental issues (are you measuring your dev environments’ availability?)
  • Waiting for pull request approvals (do you have an SLA around this?)
  • Long build times due to weak hardware or poor dependency management (compare the cost of faster build machines or build optimization vs. the value of your engineers’ wait time or downtime)
  • Waiting to receive clarification from a product owner on a feature (again, do you have an SLA around this? Is your team colocated, so a question can be asked and answered quickly?)
  • Other surprising items ranging from having to feed a parking meter to miserable network latency for those remote engineers.

Yet again, the mantra of “If you can’t measure it, you can’t improve it” applies. We view metrics such as actual hours spent coding vs. expected hours spent coding as not only a measurement of your teams’ productivity, but as a management effectiveness gauge. Are you as a manager effectively protecting your engineers?
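One simple way to operationalize that measurement is to track, per engineer per week, focused coding hours against expected coding hours after meetings, interviews, and environment waits are subtracted.  A minimal sketch (names and numbers are hypothetical):

```python
# Hypothetical per-engineer weekly data: hours actually spent in focused
# coding vs. hours expected once scheduled overhead is accounted for.
week = {
    "alice": {"coding": 22, "expected": 30},
    "bob":   {"coding": 12, "expected": 30},
}

def coding_ratio(entry: dict) -> float:
    """Fraction of expected coding time actually achieved."""
    return entry["coding"] / entry["expected"]

# Team-level ratio weights each engineer by their expected hours.
team_ratio = (sum(e["coding"] for e in week.values())
              / sum(e["expected"] for e in week.values()))
print(f"team focus ratio: {team_ratio:.0%}")
```

A falling team ratio – or an outlier like "bob" above – is the signal to go hunting for the interviews, environment issues, and approval waits listed earlier.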

Are you able to see the impact of ‘no-meeting’ days, or the factors today that negatively affect your developers’ coding efficiencies?

If not, AKF can more than help. We have run productivity surveys at many clients, and always enjoy the look on technology leaders’ faces when we present the results. Let us help you.


Three Reasons Your Software Engineers May Not Be Successful

May 10, 2018  |  Posted By: Pete Ferguson


At AKF Partners, we have the unique opportunity to see trends among startups and well-established companies in the dozens of technical due diligence and more in-depth technology assessments we regularly perform, in addition to filling interim leadership roles within organizations.  Because we often talk with a variety of folks from the CEO, investors, business leadership, and technical talent, we get a unique top-to-bottom perspective of an organization.

Three common observations

  • People mostly identify with their job title, not the service they perform.
  • Software engineers can be siloed in their own code instead of contributing to the greater outcome.
  • The CEO’s vision often differs from the frontline perception of things as they really are.

Job Titles Vs. Services

The programmer who identifies herself as “a search engineer” is likely not going to be as engaged as her counterpart who describes herself as someone who “helps improve our search platform for our customers.”

Shifting focus from a job title to a desired outcome is a best practice of top organizations.  We like to describe this as separating nouns and verbs – “I am a software engineer” focuses on the noun without an action (software engineer), while “I simplify search” focuses on the verb of the desired outcome (simplify).  It may seem trivial, but this shift can meaningfully change how team members understand their contribution to your overall organization.

Removing this barrier to the customer puts team members on the front line of accountability for customer needs – and, hopefully, for the vision and purpose of the company at large.  In our experience with successful companies, instilling a customer-experience, outcome-based approach often requires reworking product teams.  Creating a diverse product team (containing members of the Architecture, Product, QA, and Service teams, for example) that owns the outcomes of what it produces promotes:

  • Motivation
  • Quality
  • Creating products customers love

If you have had experience in a Ford vehicle with the first version of Sync (bluetooth connectivity and onscreen menus), then you are well aware of the frustration of scrolling through three layers of menus to select “bluetooth audio” ([Menu] -> [OK] -> [OK] -> [Down Arrow] -> [OK] -> [Down Arrow] -> [OK]) each time you get into your car.  The novelty of wireless streaming was a key differentiator when Sync was first introduced – but it is now table stakes in the auto industry – and it quickly wears off when having to navigate a confusing UI likely designed by product engineers each focused on a specific task but devoid of anyone designing for a great user experience.  What was missing is someone with the vision and job description: “I design wireless streaming to be seamless and awesome – like a button that says ‘Bluetooth Audio!!!’”

Hire for – and encourage – people who believe and practice “my real job is to make things simple for our customers.”

Avoiding Siloed Approach

Creating great products requires engineers to look outside of their current project specific tasks and focus on creating great customer experiences.  Moving from reactively responding to customer reported problems to proactively identifying issues with service delivery in real time goes well beyond just writing software.  It moves to creating solutions.

Long gone are the “fire and forget” days of writing software, burning it to a CD, and pushing off tech debt until the next version.  To Millennials, this Waterfall approach is foreign, but unfortunately we still see this mentality ingrained in many company cultures.

Today it is all about services.  A release is one of many in a very long evolution of continual improvement and progression.  There isn’t a Facebook V1 to be followed by V2 … there is a continual rollout of upgrades and bug fixes, done in the background with minimal to no downtime.  Engineers can’t afford to lag in their approach to continual evolution, addressing tech debt, and contributing to internal libraries for the greater good.

Ensure your technical team understands and is very closely connected to the evolving customer experience and has skin in the game.  Among your customers, there is likely very little patience for “wait until our next release.”  They expect immediate resolution or they will start shopping the competition.

Translating the Vision of the CEO to the Front Lines

During our more in-depth technology review engagements, we interview many people from different layers of management and different functions within the organization.  This gives us a unique opportunity to see how the vision of the CEO migrates down through layers of management to the front-line programmers who are responsible for translating that vision into reality.

Usually - although not always - the larger the company, the larger the divide between what is being promised to investors/Wall Street and what is understood as the company vision by those who are actually doing the work.  Best practices at larger companies include regular all-hands where the CEO and other leaders share their vision and are held accountable to deliverables and leadership checks that the vision is conveyed in product roadmaps and daily stand up meetings.  When incentive plans focus directly on how well a team and individual understand and produce products to accomplish the company vision, communication gaps close considerably.

Creating and sustaining successful teams requires a diverse mix of individuals with a service mindset.  This is why we stress that product teams need to be inclusive of multiple functions.  Architecture, Product, Service, QA, Customer Service, Sales, and others need to be included in stand-up meetings and take ownership in the outcome of the product.

The Dev Team shouldn’t be the garbage disposal for what Sales has promised in the most recent contract or what other teams have ideated without giving much thought to how it will actually be implemented. 

When your team understands the vision of the company - and how customers are interacting with the services of your company - they are in a much better position to implement it into reality.

As a CTO or CIO, it is your responsibility to ensure what is promised to Wall Street, private investors, and customers is translated correctly into the services you ultimately create, improve, and publish.

Conclusions

As we look at new start-ups facing explosive 100-200% year-over-year growth, our question is always “how will the current laser-focused vision and culture scale?”  Standardization, good Agile practices, understanding technical debt, and creating a scalable onboarding and mentoring process all contribute to the best answers to this question.

When your development teams are each appropriately sized, include good representation of functional groups, each team member identifies with verbs vs. nouns (“I improve search” vs. “I’m a software engineer”), and understand how their efforts tie into company success, your opportunities for success, scalability, and adaptability are maximized.

RELATED CONTENT

Do You Know What is Negatively Affecting Your Engineers’ Productivity? Shouldn’t You?

Enabling Time to Market (TTM) With Contributor Model Teams

—-

Experiencing growing or scaling pains?  AKF is here to help!  We are industry experts in technology scalability, due diligence, and helping to fill leadership gaps with interim CIO/CTO and other positions, in addition to helping you in your search for technical leaders.  Put our 200+ years of combined experience to work for you today!


Enabling TTM With Contributor Model Teams

May 6, 2018  |  Posted By: Dave Berardi


We often speak about the benefits of aligning agile teams with the system’s architecture.  As Conway’s Law describes, product/solution architectures and organizations cannot be developed in isolation.  (See https://akfpartners.com/growth-blog/conways-law) Agile autonomous teams are able to act more efficiently, with faster time to market (TTM).  Ideally, each team should be able to behave like a startup with the skills and tools needed to iterate until they reach the desired outcome.

Many of our clients are under pressure to achieve both effective TTM and reduce the risk of redundant services that produce the same results. During due diligence, we will sometimes discover redundant services that individual teams develop within their own silo for a TTM benefit.  Rather than competing with priorities and waiting for a shared service team to deliver code, the team will build their own flavor of a common service to get to market faster.

Instead, we recommend a shared service team own common services. In this type of team alignment, the team has a shared service or feature on which other autonomous teams depend. For example, many teams within a product may require email delivery as a feature.  Asking each team to develop and operate its own email capability would be wasteful, resulting in engineers designing redundant functionality leading to cost inefficiencies and unneeded complexity.  Rather than wasting time on duplicative services, we recommend that organizations create a team that would focus on email and be used by other teams.

Teams make requests for product enhancements in the form of stories that are deposited in the shared services team’s backlog (email, in this case).  To mitigate the risk of having each of these requesting teams wait for requests to be fulfilled by the shared services team, we suggest thinking of the shared service as an open source project – or, as some call it, the contributor model.

Open sourcing our solution (at least internally) doesn’t mean opening up the email code base to all engineers and letting them have at it.  It does mean establishing mechanisms to help control quality and design for the business.  An open source project often has its own repo and typically only allows trusted engineers, called Committers, to commit.  Committers follow Contribution Standards defined by the project-owning team.  In our email example, the team should designate trusted and experienced engineers from other Agile teams who can code and commit to the email repo.  Engineers on the email team can focus on making sure new functionality aligns with the architectural and design principles that have been established.  Code reviews are conducted before code is accepted.  Allowing outside contribution will help mitigate the potential bottleneck such a team could create.

Now that the development of email has been spread out across contributors on different teams, who really owns it?

Remember, ownership by many is ownership by none.  In our example, the email team ultimately owns the services and code base. As other developers commit new code to the repo, the email team should conduct code, design, and architectural reviews and ultimately deployments and operations.  They should also confirm that the contributions align with the strategic direction of the email mission.  Whatever mechanisms are put in place, teams that adopt a contributor model should be a gas pedal and not a brake for TTM.
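The merge gate described above – trusted committers from other teams, with the owning team's review required before acceptance – can be sketched as a simple policy check (all names here are hypothetical, not an actual tool's API):

```python
# Toy model of the contributor-model merge gate: outside contributors may
# land changes in the shared (email) repo only if they are designated
# committers AND the owning team has reviewed the change.
COMMITTERS = {"ana", "raj"}           # trusted engineers from other teams
OWNING_TEAM = {"email-team-lead"}     # the shared-service (email) team

def can_merge(author: str, approvers: list[str]) -> bool:
    is_trusted = author in COMMITTERS or author in OWNING_TEAM
    reviewed_by_owner = bool(OWNING_TEAM & set(approvers))
    return is_trusted and reviewed_by_owner

print(can_merge("ana", ["email-team-lead"]))  # trusted committer + owner review
print(can_merge("sam", ["email-team-lead"]))  # not a designated committer
```

In practice this policy lives in repo tooling (branch protection, required reviewers) rather than application code, but the rule itself is exactly this simple: contribution is open, acceptance is owned.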

If your organization needs help with building an Agile organization that can innovate and achieve competitive TTM, we would love to partner with you. Contact us for a free consultation.

