
It’s Not About the Technology

Perhaps it’s because we’re technologists that we love shiny new technologies. However, for years now AKF has been telling anyone who will listen or read that “scaling is not about the technology.” Scale comes from system-level or deployment-focused architecture, which is the intersection of software and hardware. No doubt we have some amazingly scalable technologies available to us today, like Hadoop and MongoDB, but when your entire datacenter goes down (as happened to Amazon, Calgary, GoDaddy, Sears, and the list goes on…), these scalable technologies don’t keep your service available. And if you think your customers care whether it was your fault or your vendor’s fault…you’re wrong. They pay you for your service and expect it to be available when they need it.

No doubt, technology decisions are important. Whether you use Backbone or Knockout, or choose Memcached or Redis, each of these decisions has pros and cons that can affect your team for a long time. But at the end of the day, these are not the decisions that determine whether your application and organization can scale with growth. These technology decisions affect the speed and cost factors of your organization and technology. Some technologies your team knows well or can learn quickly; these cost you less. Other technologies are very popular (PHP), so engineers’ salaries are lower because supply is greater. Still other technologies (assembly language) are complex, appeal to a select group of engineers, and are very costly to develop in, but might cost very little to process transactions because of their efficiency.

Technology decisions are important, but for different reasons than scaling. Relying on a technology or a single vendor to scale is risky. To the vendor or open source project, you are one of many customers, and the longevity of their business or project doesn’t depend on keeping your service available. Your business, however, does depend on it. Take control of the future of your business by scaling your service and organization through system-level or deployment-focused architecture, and keep the technology decisions outside of that architecture.




The End of Scalability?

If you received any sort of liberal arts education in the past twenty years, you’ve probably read, or at least been assigned to read, Francis Fukuyama’s 1989 essay “The End of History?”[1] If you haven’t read the article or the follow-on book, Fukuyama argues that the advent of Western liberal democracy is the final form of human government and therefore the end point of humanity’s sociocultural evolution. He isn’t arguing that events will stop happening in the future, but rather that democracy will become more and more prevalent in the long term, despite possible setbacks such as totalitarian governments for periods of time.

I have been involved, in some form or another, in scaling technology systems for nearly two decades, not counting the decade before when I was hacking on Commodore PETs and Apple IIs learning how to program. Over that time there have been noticeable trends, such as the centralization/decentralization cycle within both technology and organizations. With regard to technology, think about the transitions from mainframe (centralized) to client/server (decentralized) to web 1.0 (centralized) to web 2.0/Ajax (decentralized) as an example of the cycle. The trend that has lately attracted my attention is in scaling. I’m proposing that we’re approaching the end of scalability.

As a scalability consultant who travels almost every week to clients in order to help them scale their systems, I don’t make this end-of-scalability statement lightly. However, before we jump into my reasoning, we first need to define scalability. To some, scalability means that a system must scale infinitely, no matter what the load, over any period of time. While certainly ideal, the challenge with this definition is that it doesn’t take into account the underlying business needs. Investing too much in scalability before it’s necessary isn’t wise when there are other great projects in which to invest, such as more customer-facing features. A definition that takes this into account defines scalability as “the ability of a system to maintain the satisfaction of its quality goals to levels that are acceptable to its stakeholders when characteristics of the application domain and the system design vary over expected operational ranges.” [2:119]

The most difficult problem in scaling a system is typically the database or persistent data storage. AKF Partners teaches a general theory of database and application scaling in terms of a three-dimensional cube, where the X-axis represents replication of identical code or data, the Y-axis represents a split by dissimilar functions or services, and the Z-axis represents a split across similar transactions or data.[3] Having taught this scaling technique and seen it implemented in hundreds of systems, we know that by combining all three axes a system can scale infinitely. However, the cost of this scalability is increased complexity in the development, deployment, and management of the system. But is this really the only option?
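To make the axes concrete, here is a minimal sketch of how a request router might apply each axis. All names are hypothetical; the cube itself is technology-agnostic, so treat this as illustration rather than prescription.

```python
import random

READ_REPLICAS = ["db-r1", "db-r2", "db-r3"]  # X-axis: identical copies of the same data
SERVICES = {"checkout": "svc-checkout", "search": "svc-search"}  # Y-axis: dissimilar functions
NUM_SHARDS = 4  # Z-axis: similar data split across shards

def route_read():
    # X-axis: any replica can serve an identical read.
    return random.choice(READ_REPLICAS)

def route_service(feature):
    # Y-axis: dissimilar functions live in separate services.
    return SERVICES[feature]

def route_shard(customer_id):
    # Z-axis: similar data split by customer (simple hash-mod sharding).
    return "shard-%d" % (customer_id % NUM_SHARDS)
```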

NoSQL
The NoSQL and NewSQL movement has produced a host of new persistent storage solutions that attempt to solve the scalability challenges without the increased complexity. Solutions such as MongoDB, a self-proclaimed “scalable, high-performance, open source NoSQL database,” attempt to solve scaling by combining replica sets (X-axis splits) with sharded clusters (Y- and Z-axis splits) to provide high levels of redundancy for large data sets, transparently to applications. Undoubtedly, these technologies have advanced many systems’ scalability and reduced the complexity of requiring developers to address replica sets and sharding themselves.
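As an illustration of that transparency, assuming a running mongos query router and the pymongo driver (the hostname, database, and shard key below are hypothetical), turning on sharding takes only a couple of admin commands:

```python
from pymongo import MongoClient

# Connect to the mongos query router (hypothetical host).
client = MongoClient("mongodb://my-mongos.example.com:27017")

# Enable sharding for a database, then shard a collection by key.
client.admin.command("enableSharding", "mydb")
client.admin.command("shardCollection", "mydb.users", key={"user_id": 1})

# The application simply reads and writes; mongos routes each
# operation to the correct shard and replica set behind the scenes.
client.mydb.users.insert_one({"user_id": 42, "name": "Ada"})
```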

But the problem is that hosting MongoDB, or any other persistent storage solution, requires keeping hardware capacity on hand for any expected increase in traffic. The obvious answer is to host it in the cloud, where we can use someone else’s hardware capacity to satisfy our demand. However, unless you are utilizing a hybrid cloud with physical hardware, you are not getting direct-attached storage, and I/O in the cloud is very unpredictable, primarily because it must traverse the cloud provider’s network. Enter solid-state drives (SSDs).

SSD
Chris Lalonde, CEO of ObjectRocket, a MongoDB cloud provider hosted entirely on SSDs, states that “Developers have been thinking that they need to keep their data set size the same size as memory because of poor I/O in the cloud and prior to 2.2.x MongoDB had a single lock, both making it unfeasible to go to disk in systems that require high performance. With SSDs the I/O performance gains are so large that it effectively negates this and people need to re-examine how their apps/platforms are architected.”

Lots of forward-thinking technology organizations are moving toward SSDs. Facebook’s appetite for solid-state storage has made it the largest customer of Fusion-io, putting NAND flash memory products in its new data centers in Oregon and North Carolina. Lalonde says, “When I/O becomes cheap and fast it drastically changes how developers think about architecting their application e.g. a flat file might be just fine for storing some data types vs. the heavy overhead of any kind of structured data.” ObjectRocket’s service also offers some other nice features, such as “instant sharding,” where the click of a button provides an entire three-node shard on demand.

GAE
Besides the advances in leveraging NoSQL and SSDs to let applications scale on Infrastructure as a Service (IaaS), there are advances in Platform as a Service (PaaS) offerings, such as Google App Engine (GAE), that also help systems scale with little to no burden on developers. GAE lets applications take advantage of the same scalable technologies, like BigTable, that Google’s own applications use, allowing Google to claim, “Automatic scaling is built in with App Engine, all you have to do is write your application code and we’ll do the rest. No matter how many users you have or how much data your application stores, App Engine can scale to meet your needs.” While GAE doesn’t have customers as large as Netflix, which runs exclusively on Amazon Web Services (AWS), its customers do include companies like the Khan Academy, which draws over 3.8 million monthly unique visitors to its growing collection of over 2,000 videos.
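To give a sense of how little scaling code a GAE developer actually writes, here is a minimal sketch of a handler on GAE’s Python runtime of that era (using the webapp2 framework); App Engine supplies the scaling around it:

```python
import webapp2

class MainPage(webapp2.RequestHandler):
    def get(self):
        # Ordinary request-handling code; no replication or
        # sharding logic anywhere in the application.
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.write('Hello, automatically scaled world!')

# App Engine spins handler instances up and down with traffic.
app = webapp2.WSGIApplication([('/', MainPage)])
```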

So with solutions like ObjectRocket, GAE, and the myriad of others that make it easy to scale to significant numbers of users and customers without having to worry about data replication (X-axis splits) or sharding (Y- and Z-axis splits), are we at the end of scalability? If we’re not there yet, we soon will be. “But hold on,” you say, “our systems are producing and consuming more data…much more.” No doubt the amount of data we process is rapidly expanding. In fact, according to the EMC-sponsored IDC study in 2009, the amount of digital data in the world was almost 500 exabytes and doubling every 18 months. But when we combine the benefits of such advances as increasing transistor density on circuits (Moore’s Law), NoSQL technologies, cheaper and faster storage (e.g. SSDs), and IaaS and PaaS offerings, we are likely to see the end of the need for most application developers to care about manually scaling their applications themselves. At some point in the future this will all be done for them in “the cloud.”

So What?
Where does this leave us as experts in scalability? Do we close up shop and go home? Fortunately, no. There are still reasons that application developers and technologists need to be concerned with splitting data replica sets and sharding data across nodes. Two of these reasons are 1) reducing risk and 2) improving developer efficiency.

Reducing Risk
As we’ve written about before, risk has several high-level components: the probability of an incident, its duration, and the percentage of customers impacted. Google “GAE outage” or “AWS outage” or any other IaaS or PaaS provider’s name along with the word “outage” and see what you find. Every hosting provider that I’m aware of has had outages in its not-so-distant past. GAE had a major four-hour outage on October 26, 2012, yet proudly states at the bottom of its outage post, “Since launching the High Replication Datastore in January 2011, App Engine has not experienced a widespread system outage.” That sounds impressive until you do the math and realize that this one outage dropped availability to 99.975% over the entire year and a half the service had been available. And that is not counting the much more frequent local outages or issues that affect some percentage of their customers. We have been at clients when they’ve experienced incidents caused by GAE.
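A quick back-of-the-envelope check of that figure (the dates are approximate):

```python
# High Replication Datastore launched January 2011; a 4-hour
# outage occurred on October 26, 2012 -- roughly 665 days later.
hours_in_service = 665 * 24            # ~15,960 hours
availability = 1 - 4.0 / hours_in_service
print(round(availability * 100, 3))    # -> 99.975
```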

The point here is not to call out GAE; trust me, all other providers have the exact same issue. The point is that when you rely on a 3rd party for 100% of your availability, their availability is by definition your ceiling. Now add on your own availability issues from maintenance, code releases, bugs in your code, etc. Why is this? Incidents almost always have multiple root causes spanning architecture, people, and process. Everything eventually fails, including our people and our processes.

Given that you cannot reduce the probability of an incident to 0%, whether you run the datacenter or a 3rd-party provider does, you must focus on the other risk factors: reducing the duration and reducing the percentage of customers impacted. The way you achieve this is by splitting services (Y-axis splits) and by separating customers (Z-axis splits). While leveraging AWS’s RDS or GAE’s HRD provides redundancy across availability zones and datacenters, your application can only take advantage of it if you do the work to split it. And if you want even higher redundancy (across vendors), you definitely have to do the work to split applications or customers between IaaS or PaaS providers, as in the sketch below.
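Here is a minimal, entirely hypothetical sketch of such a Z-axis split: each customer is pinned to one of two independent providers, so a single vendor outage impacts only a subset of customers rather than all of them.

```python
# Hypothetical provider endpoints; in practice these would be
# deployments of your service at two different IaaS/PaaS vendors.
PROVIDERS = [
    "https://app.vendor-a.example.com",
    "https://app.vendor-b.example.com",
]

def provider_for(customer_id):
    # Deterministic placement: a given customer always lands on
    # the same provider, which keeps their data in one place.
    return PROVIDERS[customer_id % len(PROVIDERS)]

# If vendor A goes down, only customers mapped to index 0 are
# impacted -- roughly half, instead of 100%.
```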

Improving Efficiency
Let’s say you’re happy with GAE’s 99.95% SLA, which no doubt is pretty good, especially when you don’t have to worry about scaling. But don’t throw away the AKF Scalability Cube just yet. One of the major reasons we get called in to clients is that their board or CEO isn’t happy with how the development team is delivering new products. They recall the days when there were 10 developers and features flew out the door. Now that they have 100 developers, everything seems to take forever. The reason for this loss of efficiency is that with a monolithic code base (no splits for different services), all 100 developers are trying to make changes and add functionality on top of one another. There needs to be much more coordination, more communication, more integration, etc. By splitting the application into separate services (Y-axis splits) with separate code bases, the developers can split into independent teams or pods, which makes them much more efficient.

Conclusion
We are likely to see continued improvement in IaaS and PaaS solutions that auto-scale and perform to such a degree that most applications will not need to worry about scaling for user traffic. However, this does not obviate the need to consider scaling for greater availability and vendor independence, or to improve a development team’s efficiency. All great technologists, CTOs, and application developers will continue to care about scalability for these and other reasons.


REFERENCES
1. Fukuyama, F., The End of History? The National Interest, 1989. 16(4).
2. Duboc, L., E. Letier, and D. Rosenblum, Systematic Elaboration of Scalability Requirements through Goal-Obstacle Analysis. 2012.
3. Abbott, M.L. and M.T. Fisher, The Art of Scalability: Scalable Web Architecture, Processes, and Organizations for the Modern Enterprise. 2009: Addison-Wesley Professional.



Signs That You May Be Disconnected From Your Business

We as engineers love to problem-solve. We love to explore new technologies and debate with our colleagues about which may be the best to use. Should we use Python? Should we use PHP? Should we use Java? Should we try Google App Engine and code in Go? Should we use Amazon Web Services exclusively for our product? What database technology should we use? All of these decisions are important, and factors such as skill sets, security, performance, and cost should be considered. We love to code and see the product in action because it provides a sense of accomplishment. Once we are done with a project and it’s deployed in production, we typically celebrate all of the hard work that went into the solution and move on to the next project. Many times, after diving deep into the technical aspects of the solution for weeks or even months, we wake up and discover we are not in touch with the business like we should be. All of us have seen this in practice, and many of our clients face this challenge.

What are some of the signs that you may not be aligned closely enough with the business and its performance, and what should you do about them?

1) Not understanding feature impact – New features are introduced into your product without an understanding of the business impact.

You should never introduce new features without understanding the impact they are supposed to have on your business. In other words, establish a business goal for each new feature. For example, a new feature that allows one-click purchase might be expected to improve conversion rates by 15% within 2 months. Remember, all goals should be SMART goals (per Chapter 1 of The Art of Scalability: Specific, Measurable, Attainable, Realistic, and Time-constrained).

2) Celebrating launch and not success – Upon deployment of your product and confirmation that everything is working as expected, fireworks go off, confetti falls from the ceiling, and the gourmet lunch is ordered.

While recognizing your team for its efforts to launch a feature can be important, you really should celebrate when you have reached the business goal associated with that feature (see item #1 above). This might be hitting a revenue target, increasing the number of active accounts, or reaching a conversion target. If you deploy a feature or a product and your business targets are not met as expected, you have more work to do. This should not be a surprise: agile development’s basic premise is that we do not know what the final state of the feature will be, and thus we must iterate.

3) No business metric monitoring – Your DevOps team is alerted that something is wrong with one of your services but you rely on your customer support or service desk to tell you if customers are impacted.

This is something that many of our clients struggle with. We believe it’s critical to detect a problem before your customers have to tell you or you have to ask them. You should be able to determine whether your business is being impacted without asking your customers.

By monitoring real-time business metrics and comparing the actual data to a historical curve, you can more quickly detect a problem and avoid sifting through the alerting and monitoring white noise that your systems will inevitably produce. For more details on our preferred strategy, visit one of our earlier blog posts. You can also read more about the Design to Be Monitored principle in our book Scalability Rules.
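As a minimal sketch of that comparison (the thresholds and numbers are purely illustrative), assume you record a business metric such as orders per minute and keep the same time slot from prior weeks as the historical curve:

```python
from statistics import mean, stdev

def is_anomalous(current_value, historical_values, threshold=3.0):
    """True if current_value falls more than `threshold` standard
    deviations below the historical mean for this time slot."""
    mu = mean(historical_values)
    sigma = stdev(historical_values)
    if sigma == 0:
        return current_value < mu
    return (mu - current_value) / sigma > threshold

# Orders/min right now vs. the same minute on the last six Tuesdays:
print(is_anomalous(12, [95, 102, 98, 91, 105, 99]))  # -> True: investigate
```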

As your company grows, make sure that your product, your engineering team, and your technical operations stay closely aligned with your business. We firmly believe that we as technologists must stay aligned with the business to succeed. In fact, we really are the business: our solutions enable the company to earn revenue.



Don’t Interrupt the Doers

We get called in occasionally because a company’s leaders don’t feel that product development is happening rapidly enough. They recall how fast the product evolved when the company was first started, and they want that pace again. There are many reasons for the pace of development to slow. Certainly one of the more popular catchphrases people use is technical debt, a metaphor for the eventual consequences of fast-paced development: as you incur technical debt, your pace of development slows.

I think there is another factor that is equally or possibly more responsible for slowing the pace of development: interruptions. Engineers need large blocks of uninterrupted time to think, design, plan, code, and test. Disrupting an engineer during these tasks often requires a wholesale reset of their thought process. There have been many studies that support this; one such study found that when tasks were interrupted, people required upwards of 27% more time to complete them, committed twice the number of errors, and experienced twice the increase in anxiety compared to uninterrupted tasks. And, as a recent CNN article explained, this problem of disruptions affecting our productivity gets worse as we get older.

So what is interrupting engineers? I’d wager it’s predominantly meetings. While communicating, coordinating, interviewing, etc. are all very important for engineers to participate in, doing so in a haphazard manner can be devastating to productivity. In this competitive hiring environment, interruptions might just be driving your engineers out the door. Try a few of these suggestions to reduce interruptions for engineers:

  • Have at least one day per week where meetings are not allowed
  • Only allow meetings at the beginning or end of the day
  • Require all meetings to have agendas and goals
  • Question standing meetings to ensure all participants are necessary

While measuring productivity is incredibly difficult, most organizations can feel when the pace of development has slowed. Reduce the interruptions of your engineers and see if this doesn’t help increase the pace again.



Scalability Rules TOC

We’ve completed the first draft of our new book, “Scalability Rules: 50 Principles for Scaling Web Sites,” and wanted to share the table of contents with everyone. We have a terrific team of technical editors reviewing every rule in detail, but we would also like to offer this opportunity to anyone else so inclined. Our publisher, Addison-Wesley Professional, has posted draft versions of Chapters 1–5 (Rules 1–19) online at Safari Rough Cuts and should have a couple more chapters available soon. If you’re interested in a sneak preview or would like to provide feedback, sign up and take a look. Below is the book’s table of contents.

Chapter 1 – Reduce the Equation

  • Rule 1 Don’t Over Engineer The Solution
  • Rule 2 Design Scale Into the Solution (D-I-D Process)
  • Rule 3 Simplify the Solution 3 Times Over
  • Rule 4 Reduce DNS Lookups
  • Rule 5 Reduce Objects Where Possible
  • Rule 6 Use Homogenous Networks

Chapter 2 – Distribute Your Work

  • Rule 7 Design to Split Reads and Writes (X axis)
  • Rule 8 Design to Split Different Things (Y axis)
  • Rule 9 Design to Split Similar Things (Z axis)

Chapter 3 – Design to Scale Out Horizontally

  • Rule 10 Design Your Solution to Scale Out – Not Just Up
  • Rule 11 Use Commodity Systems (Goldfish not Thoroughbreds)
  • Rule 12 Scale Out Your Data Centers
  • Rule 13 Design to Leverage the Cloud

Chapter 4 – Use The Right Tools

  • Rule 14 Use Databases Appropriately
  • Rule 15 Firewalls, Firewalls, Everywhere!
  • Rule 16 Actively Use Log Files

Chapter 5 – Don’t Duplicate Your Work

  • Rule 17 Don’t Check Your Work
  • Rule 18 Stop Redirecting Traffic
  • Rule 19 Relax Temporal Constraints

Chapter 6 – Use Caching Aggressively

  • Rule 20 Leverage CDNs
  • Rule 21 Use Expires Headers
  • Rule 22 Cache Ajax Calls
  • Rule 23 Leverage Page Caches
  • Rule 24 Utilize Application Caches
  • Rule 25 Make Use of Object Caches
  • Rule 26 Put Object Caches on Their Own “Tier”

Chapter 7 – Learn From Your Mistakes

  • Rule 27 Learn Aggressively
  • Rule 28 Don’t Rely on QA to Find Mistakes
  • Rule 29 Failing to Design for Rollback Is Designing to Fail
  • Rule 30 Discuss and Learn from Failures

Chapter 8 – Database Rules

  • Rule 31 Be Aware of Costly Relationships
  • Rule 32 Use the Right Type of Database Locks
  • Rule 33 Pass on Using Multi-phase Commits
  • Rule 34 Try Not to Use “Select For Update”
  • Rule 35 Don’t Select Everything

Chapter 9 – Design for Fault Tolerance and Graceful Failure

  • Rule 36 Design Using Fault Isolative “Swim Lanes”
  • Rule 37 Never Trust Single Points of Failure
  • Rule 38 Avoid Putting Systems in Series
  • Rule 39 Ensure You Can Wire On and Off Functions

Chapter 10 – Avoid or Distribute State

  • Rule 40 Strive For Statelessness
  • Rule 41 Maintain Sessions in the Browser When Possible
  • Rule 42 Make Use of a Distributed Cache For States

Chapter 11 – Asynchronous Communication and Message Buses

  • Rule 43 Communicate Asynchronously As Much As Possible
  • Rule 44 Ensure Your Message Bus Can Scale
  • Rule 45 Avoid Overcrowding Your Message Bus

Chapter 12 – Miscellaneous Rules

  • Rule 46 Be Wary of Scaling Through 3rd Parties
  • Rule 47 Purge, Archive, and Cost-justify Storage
  • Rule 48 Remove Business Intelligence from Transaction Processing
  • Rule 49 Design Your Application to Be Monitored

Chapter 13 – Rule Review and Prioritization



Simultaneous Discovery

The Paleolithic Era (Old Stone Age) lasted roughly from 2.5 million to 10,000 years ago. During this time humans moved around in small bands as hunter-gatherers. Sometime around the Neolithic Age (New Stone Age), humans invented or discovered farming. While turning inedible crops like wheat into food is impressive, what’s even more impressive is that humans separately invented farming at least three times and possibly as many as seven times. Different civilizations, from the Eastern Mediterranean to China to Mexico, all came up with the idea of farming, presumably without sharing this knowledge in any way.

While the discovery of farming might seem an evolutionary necessity for long-term survival, such coincidental simultaneous invention by disparate individuals is apparently not uncommon at all. In 1611, sunspots were discovered at least four different times; in 1869, both Cros and du Hauron invented color photography; and, in a case you might be more familiar with, the telephone was invented by Bell, Gray, and la Cour, to name a few of the individuals involved. Napier and Briggs are credited with logarithms, but Burgi also invented them a few years earlier. Another popular example is the theory of natural selection, developed independently but simultaneously by Wallace and Darwin. There are so many of these simultaneous discoveries or inventions that William F. Ogburn and Dorothy Thomas published a paper in 1922, “Are Inventions Inevitable? A Note on Social Evolution,” that documented 148 of them.

No one is really sure why this happens. Some believe in a sort of efficient-market hypothesis, which in financial markets means that information is ubiquitous and therefore you cannot consistently beat the market, because everyone knows the same information almost simultaneously. Ogburn and Thomas postulated in their paper that because there are very few completely new discoveries, most inventions are inevitable. Inventions are built on top of other inventions, such as the steamboat depending on the prior invention of boats and steam engines.

This is a curiosity, but you’re probably wondering how it applies to hyper-growth startups. The key takeaway is that while you’re coming up with a great idea, so is everyone else. The ability to iterate quickly on ideas is more critical than ever. Combine this absolute need for quick iterations with the requirement to measure the results of the effort, lest it be completely wasted, and you have A/B testing on features launched in weekly sprints. SaaS companies have no excuse for not releasing in very short sprints (if not continuously), watching user behavior to learn what works and what doesn’t, then iterating again.
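On the measuring side, a two-proportion z-test is one simple way to judge an A/B result; this sketch uses made-up conversion counts purely for illustration:

```python
from math import sqrt

def ab_z_score(conv_a, n_a, conv_b, n_b):
    # Two-proportion z-test: |z| > 1.96 is roughly significant at 95%.
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    return (p_b - p_a) / se

# Variant B converts 260/2000 vs. A's 200/2000:
print(ab_z_score(200, 2000, 260, 2000))  # ~2.97: B's lift looks real
```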

Despite the plethora of articles and books to the contrary, there are very few million-dollar ideas, just million-dollar executions of ideas. If investors are looking for key attributes that make a team more likely to succeed, I’d suggest looking for a team that can deliver quickly and knows the importance of measuring success.



97 Things Every Programmer Should Know – Book Review

97 Things Every Programmer Should Know is the third book in O’Reilly’s 97 Things series. Editor and contributor Kevlin Henney has done a nice job bringing together insights from some of today’s most experienced and respected practitioners. The book is a compilation of short essays on topics as diverse as Bugs, Error Handling, Customers, Refactoring, and Expertise.

Having tossed my hat into the ring of the debate over whether software development is a craft or an engineering discipline, I was thrilled to see essays such as Neal Ford’s “Testing Is the Engineering Rigor of Software Development,” which states, “Compared to ‘hard’ engineering, the software development world is at about the same place the bridge builders were when the common strategy was to build a bridge and then roll something heavy over it.” Along this vein, I was intrigued by how many times the authors used terms such as ‘simple’ or ‘beautiful.’ Peter Sommerlad suggests that we treat our code like “a poem, an essay, a public blog…” while Linda Rising uses the phrase “…a beautiful piece of code this is.” In fact, there is an entire collection of essays around the theme of “Simplicity.” This brings up my one and only annoyance with the book: the organization of the essays by title. There is an index of the contributions by category, but the actual essays are laid out in alphabetical order. I would have much preferred to read all of the essays on a particular theme together without having to flip through to find them.

The purpose of the short-essay format is not to answer all your questions or be a definitive guide to programming. Rather, the purpose is to provide a starting point for a conversation. To this end, I think a practical way to use this book, whether in academia or on a development team, would be to assign groups of essays to be read ahead of time to stimulate classroom or team-meeting discussions.

To highlight a couple of my other favorite essays: I particularly enjoyed Marcus Baker’s whimsical treatment of convincing someone to install your software in “Install Me,” and, being a script junkie, I found myself nodding in agreement with Cay Horstmann’s “Step Back and Automate, Automate, Automate.” As a conversation starter, thought provoker, or small collection of wisdom, you will likely enjoy many of these essays yourself.



Principles of War as Applied to Business Leadership – Part 1


Many authors have previously described the relationship between business and war, and we believe that the most successful businesses approach their operations as General Douglas MacArthur approached war when he claimed that “in war, there is no substitute for victory.”

Carl von Clausewitz offered several tenets of war in his essay “Principles of War” and later expanded upon them in his book “On War.” Many armed forces throughout the world have taken portions of these tenets and adopted them for their own use. This post is the first in a two-part series relating the 9 US Armed Forces Principles of War to your everyday business activities, strategy, and tactics. The 9 US Principles of War are Objective, Offensive, Mass, Economy of Force, Maneuver, Unity of Command, Security, Surprise, and Simplicity. We will discuss the first five in this post and the remaining four in a subsequent post.

Objective. The US Armed Forces definition is to direct every military operation toward a clearly defined, decisive, and attainable objective. We think this is pretty self-explanatory and includes concepts about which we’ve previously blogged, such as the need to set aggressive but achievable goals. The most important aspects of “Objective” as applied to your business are for your goals to be clearly defined, well understood, measurable, and attainable.

Offensive. The military definition is to seize, retain, and exploit the initiative. The business definition here is found by looking at what Offensive implies: it’s all about time to market and getting the right features, products, and services out and adopted first. Being first offers the best chance at achieving virality within the market, and creating a viral marketplace or product is the military equivalent of seizing the high ground.

Mass. The military definition is to mass the overwhelming effects of combat power at the decisive place and time. Mass in military terms is different from the concentration of forces, which may not be desirable. Combat power refers to all aspects of military power, from infantry and armor to field artillery and other combat multipliers. The business equivalent is to ensure that your business units are aligned with your greater business objective and are contributing to it properly. Your technology, product, marketing, and finance teams should all understand and contribute to the core objectives necessary to win your business battle. If you wish to win quickly, they cannot be marching to separate agendas, and they should not be fighting with each other.

Economy of Force. This one can be confusing, but within the military definition is a reference to “no part of the force should be left without a purpose.” The military definition also implies that every part of the force should be used in the most effective way possible. Goals and objectives are again part of this, but more importantly you should be able to answer the question of whether you are using the right team for the job at hand. Not only should you ensure that every organization has a purpose directly related to your most important initiatives, you need to ensure that each is the best team to have those specific goals and objectives. Client Services and Customer Support teams might be useful in helping to QA new products, but allocating them 100% to such an endeavor is probably not the most leveraged use of their time. Conversely, forgetting to include Customer Support or Client Services in a product rollout is a failure to employ a very important part of your “combat power” in achieving product success. While it’s useful for engineers to understand customer needs and complaints, allowing more than 5 to 10% of their time to be taken up by such activities is a costly endeavor relative to your future product needs.

Maneuver. The military definition is to place the enemy in a position of disadvantage through the flexible application of combat power. This one relates to how flexible you are in your product delivery lifecycle and whether you are set up to respond to your competitors’ actions in the marketplace. This IS NOT an argument that you should abandon products in flight and constantly change your strategy. Constant change in strategy is a clear indication of a management team incapable of defining a winning path, and it’s an early indication of likely future failure. You should be flexible, and changing features or making course corrections a few times a year is appropriate. Ensuring that your product delivery processes allow you the flexibility to change (with the additional cost that implies) is critical to success. But constant change is not a strategy; it’s a recipe for disaster.

