AKF Partners

Abbott, Keeven & Fisher Partners | Partners in Hyper Growth

Archives from author » smason

AKF Interim Leadership Case Study

Read about one of our success stories in which we filled an interim CTO role at a marketing subscription company in New York.

AKF Long-term Case Study



Big Data Anti-Patterns

Recent research by Gartner indicates that while Big Data investment continues to grow, the intent to invest in Big Data projects is finally showing signs of tapering. While this is a natural part of the hype cycle, poor ROI on Big Data projects continues to impact the industry. Gartner cites a “lack of business leadership” around Big Data initiatives as a primary cause. We’ve kept a watchful eye on these Big Data trends to better serve our clients.

In a recent presentation, Marty Abbott addresses the struggles of getting Big Data analytics right. He identifies seven anti-patterns that hamper the proper implementation of Big Data initiatives and provides suggested remedies for each.

Anti-Pattern #1 — Assuming you know the answers.

By setting out to confirm a particular hypothesis, you end up building an analytics system hard-wired to answer a narrow set of questions. This lack of extensibility blinds you to deeper insights.

Gain greater insights by focusing on correlations without presumption and adopting an analytics process that follows a continuous cycle of induction, hypothesis, deduction, and data creation (see the hermeneutic cycle).

Anti-Pattern #2 — No QA in analytics.

Big Data analytics systems are just as prone to bugs and defects as production systems, if not more so. All too often, analytics systems are designed as though every run will execute perfectly.

Improve QA by building a pre-processing and cleaning stage into your data flow while retaining the raw, immutable data elsewhere. Maintain prior results and implement the ability to re-run analytics over your entire dataset to re-verify results.
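
As a rough illustration (not a prescription), the Python sketch below assumes a hypothetical file-based landing zone and a trivial cleaning rule; the point is simply that raw data stays immutable so the entire dataset can be re-processed and results re-verified at any time.

```python
import json
from pathlib import Path
from typing import Optional

RAW_DIR = Path("data/raw")      # immutable landing zone (illustrative path)
CLEAN_DIR = Path("data/clean")  # re-buildable, cleaned copy (illustrative path)

def land_raw_event(event: dict, name: str) -> None:
    """Write the event exactly as received; the raw copy is never modified."""
    RAW_DIR.mkdir(parents=True, exist_ok=True)
    (RAW_DIR / f"{name}.json").write_text(json.dumps(event))

def clean(event: dict) -> Optional[dict]:
    """Example cleaning rule: drop records without a user_id, normalize casing."""
    if not event.get("user_id"):
        return None
    event["country"] = event.get("country", "").strip().upper()
    return event

def rebuild_clean_dataset() -> int:
    """Re-run cleaning over the entire raw dataset so results can be re-verified."""
    CLEAN_DIR.mkdir(parents=True, exist_ok=True)
    kept = 0
    for raw_file in sorted(RAW_DIR.glob("*.json")):
        cleaned = clean(json.loads(raw_file.read_text()))
        if cleaned is not None:
            (CLEAN_DIR / raw_file.name).write_text(json.dumps(cleaned))
            kept += 1
    return kept
```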

Anti-Pattern #3 — Using production Engineers to run your analytics.

Big Data involves different skill sets, different technologies, and, perhaps most importantly, a different mindset from production operations. Data scientists and analytics experts are explorers who theorize correlations, while production developers are operationalists, making pragmatic decisions to keep systems running.

Improve effectiveness by creating a separate team focused on implementing analytics architecture.

Anti-Pattern #4 — Using an OLTP Datastore for Analytics

Using your transactional datastore for analytics will impact the performance of your operational systems. Furthermore, transactional and analytics data have different requirements for normalization and response time latency.

Keep analytics away from transactional systems by using a separate Operational Datastore and ingesting data from your application asynchronously.
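
A minimal Python sketch of the asynchronous hand-off, using an in-process queue and a background worker purely as stand-ins for whatever queueing technology and operational datastore you actually choose (the save_order function and the list acting as the datastore are hypothetical):

```python
import queue
import threading

# Stand-in for a real message queue (SQS, RabbitMQ, Kafka, etc.)
event_queue = queue.Queue()

def save_order(order: dict) -> None:
    """Transactional write path: persist the order, then enqueue an event.

    The enqueue is non-blocking, so analytics ingestion never slows the OLTP path.
    """
    # ... write to the transactional datastore here ...
    event_queue.put({"type": "order_created", "payload": order})

def ingest_worker(analytics_store: list) -> None:
    """Drains events into the operational datastore used for analytics."""
    while True:
        event = event_queue.get()
        if event is None:  # shutdown sentinel
            break
        analytics_store.append(event)  # stand-in for an insert into the ODS
        event_queue.task_done()

ods: list = []
threading.Thread(target=ingest_worker, args=(ods,), daemon=True).start()
```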

Anti-Pattern #5 — Ignoring Performance Requirements

Performance matters. The longer you take to produce results, the more out of date they become. Furthermore, no one can leverage results if they can’t quickly and easily access them.

Improve analytics performance by using cluster computing technologies such as Apache Spark and Hadoop. Store frequently queried results in a Datamart for quick access.
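
As a sketch only, assuming order events already land in Parquet at an illustrative S3 path, a PySpark job to pre-aggregate a frequently queried result before loading it into a Datamart might look like this:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_revenue_rollup").getOrCreate()

# Hypothetical location of raw order events in Parquet format
orders = spark.read.parquet("s3://example-bucket/events/orders/")

# Pre-aggregate a frequently queried result: revenue per day per product
daily_revenue = (
    orders
    .groupBy(F.to_date("created_at").alias("day"), "product_id")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)

# Write the rollup somewhere cheap to query (a Datamart table, or Parquet
# that the mart loads); the output path here is illustrative only.
daily_revenue.write.mode("overwrite").parquet("s3://example-bucket/marts/daily_revenue/")
```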

Anti-Pattern #6 — OLTP is the only input to your Big Data system

If transactional data is the only input to your analytics you’re missing out on the much larger picture.

Customer interactions, social media scrapes, stock performance, Google analytics data, and transaction logs should be brought together to form a comprehensive picture.

Anti-Pattern #7 — Anyone can run their own Analytics

A well-built analytics system can become the victim of its own success. As it becomes more popular among stakeholders, overall performance may tank.

To preserve performance, move aggregated data out of analytics processing into OLAP/Datamarts and create a chargeback system allocating resources to users/departments.



Three Strategies for Migrating to the Cloud

Cloud deployments are becoming increasingly popular across all industries. Despite initial concerns around vendor lock-in and the need for increased security in multi-tenant environments, many companies have found a strong business case for migration. Overall, the economics aren’t surprising. Cloud providers can leverage huge economies of scale, discounts on power consumption, and demand diversification to drive down costs, which are then passed on to a fast-growing customer base.

As a technology leader, you’ve done your own research, and you’re confident that a cloud deployment can both save money and promote faster innovation amongst your development team. The next challenge is to develop a clear migration plan; however, what you’ll quickly learn is that no two tech companies share the same technical and business realities, and no single cloud migration strategy can be relied upon to fit every circumstance.

We’ve laid out three common scenarios that you might consider for your product. Each describes a different business case and an accompanying cloud migration pattern.

Service-by-Service Migration

In the first scenario you already have a well-architected solution running in a private (company-owned or managed host) datacenter. Your main production application is separated into different services and unburdened by large amounts of technical debt. Here a service-by-service transition into the cloud is the clear choice for a smooth transition: it carries nowhere near the risk of a “big-bang” approach and doesn’t trigger a request for proposal (RFP) event for customers. If you make your customers live through a massive migration, they will often take the opportunity to look around for alternative services – thus the RFP.

You’ll want to start with the smallest or least heavily coupled service first. The goal is to build your team’s confidence and experience with cloud deployments and tools before tackling larger challenges.

In most cases your engineers will likely need to get their hands dirty with a little re-architecting. (Depending on how closely you’ve followed our recommendations regarding fault isolation zones or “swim lanes”, many of these steps may have already been completed.) Your team’s first goal should be to rewrite (or eliminate) any cross-service calls to be asynchronous. Once this is complete, except in rare cases, virtualizing the web & app tiers should be a straightforward process. Often, the greater challenge is disentangling the persistent data-tier. If you already have a separate data store for each service, it should be easy to complete service migration by virtualizing or importing the data to a PaaS solution (RDS, Azure’s SQL, etc.). If that’s not the case, you’ll want to either:

(1) Separate the service’s data (choosing an appropriate SQL or NoSQL technology). Sharding the persistent data-tier has the added benefit of reducing load on the primary datastore, reducing size and I/OPS requirements, thereby further easing the transition to the cloud.

or

(2) Eliminate the service’s need for data storage entirely by passing data back to another service. (Less desirable, but suitable for smaller services.)

Once you’ve deployed your first cloud service, redirect a small portion of the customer base to it and monitor for correct functionality. As monitoring improves confidence in the service’s operation, you can continue a slow rollout over your entire customer base, minimizing both risk and operational downtime. Repeat this process for each service to complete the migration.
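
One simple way to implement that gradual redirect (a sketch only, with hypothetical endpoints and an arbitrary starting percentage) is to hash a customer identifier into a bucket and compare it against a rollout threshold, so each customer consistently lands on the same stack:

```python
import hashlib

LEGACY_ENDPOINT = "https://search.dc.example.com"     # hypothetical: existing datacenter
CLOUD_ENDPOINT = "https://search.cloud.example.com"   # hypothetical: new cloud deployment
CLOUD_ROLLOUT_PERCENT = 5  # start small; raise as monitoring builds confidence

def endpoint_for(customer_id: str) -> str:
    """Deterministically bucket each customer into 0-99 and route accordingly."""
    digest = hashlib.sha256(customer_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100
    return CLOUD_ENDPOINT if bucket < CLOUD_ROLLOUT_PERCENT else LEGACY_ENDPOINT

# The hash is deterministic, so a given customer always gets the same answer,
# and raising CLOUD_ROLLOUT_PERCENT only ever moves additional customers forward.
print(endpoint_for("customer-1234"))
```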

Down Market MVP

Another common scenario is that your company runs an older mature product, but one that is loaded with technical debt. Perhaps it’s poorly architected, written using older frameworks or languages, or contains undocumented or “black box” code that even your seasoned engineers are afraid to touch. The sheer complexity screams re-write, but at the same time there’s a pressing business appetite to move into the cloud quickly.

In this case, a possible strategy is to enter the cloud with a down-market, minimum viable product (MVP). Have your team start from scratch and develop an MVP in the cloud that encompasses the core features of your main offering. By targeting the down-market, you’re not only entering quickly (with a minimal feature set) but you’re also blocking other upstart competitors from overtaking you.

During development, you’ll need to balance the business demands of time-to-market vs. the technical risk of vendor lock-in. Using PaaS offerings such as Amazon’s SQS or Azure’s DocumentDB can help get your new product up and running quickly, but at the expense of greater vendor lock-in. There’s no silver bullet with these decisions; just be sure to make these choices consciously and recognize that you’re accumulating what could become tech debt should you decide to change cloud providers down the road.
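
One way to soften that lock-in is to hide the PaaS dependency behind a thin interface of your own, so a later provider change touches one adapter rather than every call site. The sketch below assumes boto3 is installed and configured with credentials; the queue URL is a placeholder.

```python
from abc import ABC, abstractmethod

import boto3  # AWS SDK for Python

class MessageQueue(ABC):
    """The contract your application codes against, not the vendor API."""
    @abstractmethod
    def send(self, body: str) -> None: ...

class SqsQueue(MessageQueue):
    """Amazon SQS adapter; a RabbitMQ or Azure adapter could replace it later."""
    def __init__(self, queue_url: str):
        self._client = boto3.client("sqs")
        self._queue_url = queue_url

    def send(self, body: str) -> None:
        self._client.send_message(QueueUrl=self._queue_url, MessageBody=body)

# Application code depends only on MessageQueue:
def publish_signup_event(q: MessageQueue, user_id: str) -> None:
    q.send(f'{{"event": "signup", "user_id": "{user_id}"}}')

# The queue URL below is a placeholder, not a real resource:
# publish_signup_event(SqsQueue("https://sqs.us-east-1.amazonaws.com/123456789012/signups"), "u-42")
```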

Once you’ve established this MVP with a small customer base, continue to articulate it with feature additions, possibly in a tiered pricing model (e.g. standard, professional, premium). As the new cloud product matures, development efforts should slowly pivot: developers move from the older legacy product to further articulate features on the new one. At the same time, customers will be increasingly incentivized to migrate to the new platform as its feature set grows.

Private to Hybrid to Public

In this scenario your company has large capital infrastructure investments that you can’t immediately walk away from. Perhaps you have one or more privately owned data centers, a long-term contract with a managed host provider, or expensive new servers purchased during a recent tech refresh. Cloud migration is a long-term goal for eventual cost savings, but you need to realize the value of infrastructure that has already been purchased.

The pattern in this case is a crawl-walk-run strategy from private, to hybrid, to public cloud. Initial focus will be on changing your internal processes to run your datacenter(s) as a mini-cloud of your own. By adopting technology and process changes such as virtualization, resource pooling, automated provisioning (and possibly even chargeback accounting), you’re readying your organization for an eventual transition to the public cloud.

Once you’ve established your own private cloud, you’ll want to leverage a public cloud provider for your data backups and DR/business continuity plan. This is where you’re likely to see your first reduction in CAPEX. Quarterly DR drills will help you test the viability and compatibility of your systems with your cloud provider.

As demand from your customer base increases, you’ll avoid making further infrastructure investments. Instead, you’ll want to adopt a hybrid cloud strategy, moving a handful of customers to the cloud or bursting into the cloud to accommodate usage spikes.

Finally, as servers are progressively retired or when your managed provider contracts run out, you’ll be ready to complete a smooth migration fully into the public cloud.

Conclusion

While each of these patterns is vastly different, they all share one thing in common — there are no “Big Bang” moments, no high-risk events that’ll have your stakeholders nail-biting on the edge of their seats. The best technologists are always looking to meet business objectives while reducing risk.

The business and technical realities for every company are different and it’s likely you’ll need to combine one or more of these migration patterns in your own strategy. While we’re confident that many of our customers will benefit from migrating to the cloud, it’s important to tailor your strategy to the needs of your business. We continue to develop new migration patterns that minimize risk and generate business value for our clients.



Product Discovery (the Right Way)

Product Discovery: Sales Driven vs. Market Driven

Product discovery is a core process to any successful software company. While your engineers need to focus on building the product right (making it highly scalable and available), your product owners need to ensure they’re building the right product.

All too often, we find product owners serving essentially as “ticket takers” that run requests from sales to engineering. While the product owners may add a little design and elaboration, the ideas originate solely from the “asks” of the sales department. This “sales-driven” approach is limiting. It tends to stifle innovation and frequently allocates engineering effort to one-off features requested by a single customer. Overall, it is indicative of an immature product management process.

In contrast, mature product organizations take a “market-driven” approach, receive input from a variety of sources, and focus engineering efforts on penetrating new market segments. These differences translate into measurable business outcomes. Market-driven companies (e.g. Salesforce, Apple, Amazon, WorkDay) are routinely valued at 10-15 times revenue, while sales-driven companies (e.g. Dell, Samsung, Oracle, SAP, Rackspace, WalMart.com) typically trade for only 1.5-6 times revenue.

Highlighting the differences between these two approaches will demonstrate how organizations focused on innovation and faster time to market are better served by a market-driven approach.

Sales Led Product Discovery

In a sales-driven product organization, the product ideation process begins with sales obtaining feedback from customers (or potential customers) on desired features and functionality. These requests are then translated into requirements and added to the product backlog. Prioritization — if it occurs at all — is based on the size of the contract and/or relative importance of the customer. The process itself is simple and straightforward, but unfortunately it often leads to poor outcomes.

Frequently, features not yet implemented are promised and assigned deadlines as part of the sales contract. These binding commitments tie the hands of product and engineering for anywhere from several weeks to months, forcing them onto a treadmill that never quite allows enough breathing room to experiment with innovative features or new products. It becomes impossible for the product organization to pivot without months of lead time or executive intervention.

Additionally, the particular needs of each customer lead them to request features and functionality that only they will use. Product evolution therefore becomes haphazard, with the engineering team jumping from one custom feature to the next.

Furthermore, these feature commitments are often granted in an all-or-nothing approach with highly articulated use cases. A consequence is that releases to production are infrequent, perhaps quarterly or less, making it difficult to acquire feedback from the larger customer base. Fewer releases mean less feedback from customers overall and fewer opportunities to pivot away from a poorly chosen feature set.

Consequently, the product itself becomes bloated with a large number of features that only one or maybe two customers use, and these one-off features add complexity both to the interface (making it difficult to use) and to the code (making it difficult to maintain). This over-development makes the product unsatisfying to use and a drain on engineering resources to maintain.

The bottom line is that sales-driven products simply lack vision. Your sales department (quite rightly) is too focused on selling and your customers (quite naturally) are too focused on solving the business problems of today to steer the future vision of your product. The limitations of sales-driven approaches are best summarized by the quote often attributed to Henry Ford: “If I had asked my customers what they wanted, they would have said a faster horse.”

Market Led Product Discovery

If a sales-driven approach is a dead end, what is the alternative?

In a market-driven approach, product ideas aren’t limited to sales and customer asks but come from a variety of sources (e.g. charter customer feedback, actual usage metrics, customer service, engineering, sales, executive team). This generates a surplus of ideas — the majority of which the product owners will have to say “no” to. In fact, the most important function of product owners is to say “no” to ideas that would divert engineering effort away from creating maximum value.

When a new product is launched, an MVP concept is prototyped and tested among a small set of charter customers. Prototypes (interactive mockups) are created with tools like Balsamiq, Axure, etc. and iterated over multiple times, allowing a huge number of ideas to be tested and refined with almost no coding. This process also identifies the minimum feature set required to solve a problem in a particular market segment, thereby minimizing the engineering effort required to implement it.

Once in production, these MVPs are enhanced by further feature development. However, features aren’t selected to satisfy the needs of individual customers but to meet the collective needs of the market segment (i.e. Keep the aggregate customer mostly happy). Features themselves are initially released in minimum viable fashion and further articulated in later releases, allowing for a frequent release schedule and ready feedback from the customer base. Overall, this strategy avoids over-development while capturing as much of the market segment as possible. Specific customer one-off product needs can be pushed to a professional services group (internally or externally) and built as add-ons through a mechanism like an API framework, but not incorporated into the core product.

Once the current market segment has been saturated, focus shifts to conquering another market segment. This is the expansion strategy advocated by Geoffrey Moore in Crossing the Chasm: “[target] a very specific niche market where you can dominate from the outset, drive your competitors out of that market niche, and then use it as a base for broader operations.”[1]

In contrast to the sales-driven approach above, this expansion is a deliberate process. Market research is conducted to identify the needs of these new customers. In some cases this means adding a collection of features to the existing product, but often it requires the launch of an entirely new product. Again, extensive prototyping and feedback from charter customers play a key role.

This quick advance – penetrating and saturating market segments in rapid succession – explains the stark differences in investor valuations between market-driven (10-15x revenue) and sales-driven (1.5-6x revenue) companies.

Conclusion

While more complex, the market-driven product discovery process helps your business focus on long-term growth vs. near-term individual customer wins. It draws upon a variety of sources for ideas, allows for early testing and feedback (for both MVP products and features), minimizes wasted engineering effort, and opens new market segments more quickly than sales-driven approaches.

Many of our product management ideas are influenced by Marty Cagan at the SVPG. Click here to visit his website and learn more.

[1] Moore, Geoffrey A. (2014-01-28). Crossing the Chasm, 3rd Edition (Collins Business Essentials) (p. 79). HarperCollins. Kindle Edition.


Communicating Across Swim-lanes

We often emphasize the need to break apart complex applications into separate swim-laned services. Doing so not only improves architectural scalability but also permits the organization to scale and innovate faster as each separate team can operate almost as its own independent startup.

Ideally, there would be no need to pass information between these services, and they could be easily stitched together on the client side with a little HTML for a seamless user experience. However, in the real world, you’re likely to find that some message passing between services is needed (if only to save your users the trouble of entering information twice).

Every cross-service call adds complexity to your application. Oftentimes, teams will implement internal APIs when cross-service communication is needed. This is a good start and solves part of the problem by formalizing communication and discouraging unnecessary cross-service calls.

However, APIs alone don’t address the more important issue with cross-service communication — the reduction in availability. Anytime one service synchronously communicates with another you’ve effectively connected them in series and reduced overall availability. If Service-A calls Service-B synchronously, and Service-B crashes or slows to a crawl, Service-A will do the same. To avoid this trap, communication needs to take place asynchronously between services. This, in fact, is how we define “swim-laning”.
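
To put numbers on the series-availability point: if Service-A and Service-B each achieve 99.9% availability and Service-A depends on Service-B synchronously, the availability of the combined path is roughly 0.999 × 0.999 ≈ 99.8%, which is about twice the annual downtime (roughly 17.5 hours instead of 8.8). Asynchronous communication lets Service-A keep serving, perhaps in a degraded fashion, while Service-B recovers.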

So what are the best means to facilitate asynchronous cross-service communication, and what are the pros and cons of each method?

Client Side

Information that needs to be shared between services can be passed via JavaScript & JSON in the browser. This has the advantage of keeping the backend services completely separate and gives you the option to move business logic to the client side, thus reducing load on your transactional systems.

The downside, however, is the increased security risk. It doesn’t take an expert hacker to manipulate variables in JavaScript. (Just think: $price_var gets set to $0 somewhere between the shopping cart and checkout services.) Furthermore, when data is passed back to the server side, these cross-service calls will suffer from the same latency and reliability issues as any TCP call over the internet.

Message Bus / Enterprise Service Bus

Message buses and enterprise service buses provide a ready means to transmit messages between services asynchronously in a pub/sub model. Advantages include a rich set of features for tracking, manipulating, and delivering messages, as well as the ability to centralize logging and monitoring. However, the potential for congestion as well as occasional message loss makes them less desirable in many cases than asynchronous point-to-point calls (discussed below).

To limit congestion, it’s best to implement several independent channels and limit message bus traffic to events that will have multiple subscribers.
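
To make the model concrete, here is a bare-bones, in-process publish/subscribe dispatcher in Python; it is purely illustrative, since a real bus (RabbitMQ, Kafka, or a vendor ESB) adds the durability, delivery tracking, and monitoring discussed above.

```python
from collections import defaultdict
from typing import Callable

class SimpleBus:
    """Toy pub/sub dispatcher: topics map to subscriber callbacks."""
    def __init__(self) -> None:
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        # Events with multiple subscribers are where a bus earns its keep
        for handler in self._subscribers[topic]:
            handler(message)

bus = SimpleBus()
bus.subscribe("user.created", lambda m: print("email service saw", m))
bus.subscribe("user.created", lambda m: print("analytics service saw", m))
bus.publish("user.created", {"user_id": 42})
```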

Asynchronous Point-to-Point

Point-to-point communication (i.e. one service directly calling another) is another effective means of message passing. Advantages include simplicity, speed, and reliability. However, be sure to implement this asynchronously through a queuing mechanism, with timeouts, retries (if needed), and exceptions handled in the event of service failure. This will prevent failures from propagating across service boundaries.

If properly implemented, asynchronous point-to-point communication is excellent for invoking a function in another service or transmitting a small amount of information. This method can be implemented for the majority of cross-service communication needs. However, for larger data transfers, you’ll need to consider one of the methods below.
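
A minimal sketch of that pattern in Python, assuming a hypothetical downstream endpoint: the business logic enqueues the message and returns immediately, while a background worker makes the HTTP call with a timeout, a bounded number of retries, and an exception path so a failing remote service never stalls the caller.

```python
import queue
import threading
import time

import requests  # third-party HTTP client (pip install requests)

NOTIFY_URL = "https://notifications.internal.example.com/send"  # hypothetical endpoint
outbox = queue.Queue()

def notify_async(payload: dict) -> None:
    """Called from business logic; enqueues and returns immediately."""
    outbox.put(payload)

def _worker() -> None:
    while True:
        payload = outbox.get()
        for attempt in range(3):  # bounded retries
            try:
                resp = requests.post(NOTIFY_URL, json=payload, timeout=2)
                resp.raise_for_status()
                break
            except requests.RequestException:
                if attempt == 2:
                    # Give up: log or park the message rather than propagate the failure
                    print("notification dropped after retries:", payload)
                else:
                    time.sleep(2 ** attempt)  # simple backoff before retrying
        outbox.task_done()

threading.Thread(target=_worker, daemon=True).start()
```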

ETL

ETL jobs can be used to move a large amount of data from one datastore to another. Since these are generally implemented as separate batch processes, they won’t bring down the availability of your services. However, drawbacks include increased load on transactional databases (unless a read replica of the source DB is used) and the poor timeliness/consistency of data resulting from a periodic batch process.
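
As a toy illustration of the extract/transform/load shape of such a batch job, the sketch below uses SQLite purely as a stand-in for the source replica and the warehouse; the table and column names are invented.

```python
import sqlite3

def run_etl(source_path: str = "replica.db", warehouse_path: str = "warehouse.db") -> None:
    src = sqlite3.connect(source_path)   # ideally a read replica, not the primary
    dst = sqlite3.connect(warehouse_path)

    # Extract: pull yesterday's orders from the transactional copy
    rows = src.execute(
        "SELECT product_id, amount FROM orders WHERE created_at >= date('now', '-1 day')"
    ).fetchall()

    # Transform: aggregate into the denormalized shape the OLAP side wants
    totals = {}
    for product_id, amount in rows:
        totals[product_id] = totals.get(product_id, 0.0) + amount

    # Load: upsert into the warehouse table
    dst.execute(
        "CREATE TABLE IF NOT EXISTS daily_product_revenue (product_id TEXT PRIMARY KEY, revenue REAL)"
    )
    dst.executemany(
        "INSERT INTO daily_product_revenue (product_id, revenue) VALUES (?, ?) "
        "ON CONFLICT(product_id) DO UPDATE SET revenue = excluded.revenue",
        list(totals.items()),
    )
    dst.commit()
    src.close()
    dst.close()
```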

ETL processes are likely best reserved for transferring data from your OLTP to OLAP systems, where immediate consistency isn’t required. If you need both a large amount of data transferred and up-to-the-second consistency, consider a DB read replica.

DB Read-Replica

Most common databases (Oracle, MySQL, PostgreSQL) support native replication of DB clones with replication lag measured in milliseconds. By placing a read replica of one service’s DB into another service’s swim-lane, you can successfully transfer a large amount of data in near real-time. The downsides are increased IOPS requirements and a lack of abstraction between services that — in contrast to the abstraction provided by an API — fosters greater service interdependency.

Conclusion

In conclusion, you can see there are a variety of cross-service communication methods that permit fault-isolation between services. The trick is knowing the strengths and weaknesses of each and implementing the right method where appropriate.



Product Management Excellence

While not technical problem solvers, Product Managers arguably have the most difficult job in modern software development. Their core mission involves taking feedback from a diversity of sources (market trends, the business owner’s vision, the competitive landscape, customer complaints, feature usage statistics, engineering effort estimates) and somehow synthesizing all these inputs into a single coherent vision (roadmap) and prioritized task list (product backlog). If that weren’t challenging enough by itself, they’re also on the hook for articulating feature implementations (through user stories and daily discussions with engineers) and for continually providing forecasts of feature release dates. (By the way, if anyone needs to know AKF’s position on whether or not you need dedicated full-time PMs, read here.)

Given the difficulty of the task, it’s not surprising that many product owners fall short of excellence. In the worst cases, what was envisioned as a streamlined agile development process devolves into waterfall by another name. Product sees developers as “ticket-takers” who can’t seem to work hard enough or fast enough to satisfy the wants of the business. To prevent this sort of downward slide into mediocrity and to keep your product management practices on point, we’ve highlighted below some key ways to differentiate a Great Product Manager from just a “Good” one.

Good Product Managers prioritize their backlog and communicate feature requirements through user stories and face-to-face discussions with engineers. Great Product Managers go beyond these core tasks and participate in Product Discovery and Product Validation. Product Discovery requires conducting market research to determine what the existing product might need to be (more) successful in the target market. This means watching market trends, tracking competitors, and keeping an overall sense of where the competitive landscape is headed. Product Validation is quantifying the results of a feature release, asking whether these met expectations, and learning how to make better predictions in the future. The very best PMs establish “success metrics” for each and every feature prior to implementation.

Good Product Managers interact daily with their agile team. Great Product Managers are part of the agile team. They’re not simply interacting on an “as needed” basis; they’re an integral part of the agile process. They attend daily stand-ups and retrospectives, offer ideas on how to change course, and communicate continually with engineers about tradeoffs. They don’t just write a user story, they discuss the story with developers and test engineers to ensure everyone has a common understanding of what’s being built. If the Agile Team fails to deliver on time — or worse yet, builds the wrong feature — they own a part of that failure. If the Agile Team delivers a great release, they share that success with their teammates.

Good Product Managers prioritize new features that generate the greatest value to the business. They understand technical debt and aim to pay it down over time. Great Product Managers do all this, but also recognize that product management isn’t just accretive. It’s not just about heaping feature after feature on your product until it’s saddled with bells and whistles. Great Product Managers put themselves in the end-user’s seat and choose to refactor, merge, or deprecate features when it makes sense to do so. Eliminating unused features helps to minimize technical debt. This feature selection process improves the maintainability of the code and gives end-users a clean interface that balances functionality and ease of use.

Good Product Managers avoid making mistakes. Great Product Managers recognize and retain a memory of mistakes and past failures. It’s all too easy to brush failed ideas under the carpet. Great Product Managers recognize this and choose to capitalize on their failures. They learn from mistakes and vow never to repeat them. They keep bad product ideas from being recycled and keep their teams focused on generating value.

Finally and most importantly:

Good Product Managers say “Yes” to ideas that create value for the business. Great Product Managers say “No” early and often to all but the most valuable ideas. You see, the problem most SaaS companies face isn’t a lack of ideas, but finding a way to identify the most promising ones and prioritize them correctly. Keeping the team focused on delivering value requires the Product Manager to dish out a lot of ‘No’s to tasks that would steal the team’s time.

It’s your engineering team’s job to build your product right, but your Product Manager is there to ensure that you build the right product. Building the Taj Mahal is no good if the customer really needed the Eiffel Tower! By no means is it an easy job, but by adopting these best practices your Product Managers can achieve excellence.

Steven Mason
Consultant, AKF Partners



Selecting Metrics for Your Agile Teams

One of our favorite sayings is “you can’t improve that which you do not measure.” When working with clients, we often emphasize the need to select and track performance metrics. It’s quite surprising (disheartening really) to see how many companies are limping along with decision-making based entirely on intuition. Metrics-driven institutions demonstrably outperform those that rely on “gut feel” and are able to quickly refocus efforts on projects that offer the greatest ROI.

Just as your top-level business KPIs govern strategic decision making, your agile teams (and their respective services) need their own “tactical” metrics to focus efforts, guide decision making, and make performance measurable. The purpose of agile development is to deliver high-quality value to your customers in an iterative fashion. Agile facilitates rapid deployment, but also allows you to garner feedback from your customers that will shape the product. Absent a set of KPIs, you will never truly understand the effectiveness of your process. Getting it right, however, isn’t an easy task. Poorly chosen metrics won’t reflect the quality of service, will be easily gamed or difficult to track, and may result in distorted incentives or undesirable outcomes.

In contrast, well-chosen metrics make it simple to track performance and shape team incentives. For example, a search service could be graded against the speed and accuracy of search results, while the shopping cart service is measured on the percentage of abandoned shopping carts. These metrics are simple, easy to track, difficult to game, and directly reflect the quality of service.
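
As an illustration, a metric like cart abandonment falls out of two counts your system almost certainly already records; the numbers below are invented.

```python
def cart_abandonment_rate(carts_created: int, checkouts_completed: int) -> float:
    """Percentage of carts that never reached checkout for a given period."""
    if carts_created == 0:
        return 0.0
    return 100.0 * (carts_created - checkouts_completed) / carts_created

# Example: 1,200 carts created yesterday, 780 completed checkout
print(f"{cart_abandonment_rate(1200, 780):.1f}% abandoned")  # -> 35.0% abandoned
```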

Be sure to dedicate the time and the mental energy needed to pick the right metrics. Feedback from team members is essential, but the final selection isn’t something you can delegate. After all, if I’m allowed to pick my own performance metrics — I can assure you I’m always going to look awesome.

To keep you on the right track, below is a checklist of considerations to take into account before finalizing the selection of metrics for your agile teams:

  1. A handful of carefully chosen metrics should be preferred over a large volume of metrics. Ideally, each Agile team (and respective service) should be evaluated on and tasked with improving 2-3 metrics (no more than 5). We have witnessed at least one company that proposed a total of 20 different metrics to measure an agile team’s performance! Needless to say, being graded on that many metrics is disempowering at best, and likely to elicit either a panic attack or total apathy from your team. In contrast, having only a handful of metrics to be graded against is empowering and helps to focus efforts.
  2. Easy to collect and/or calculate. One startup suggested they would track “Engineering Hours Spent Bug-Fixing” as a way to determine code quality. The issue was quickly raised: who would be doing this tracking, and how much time/effort did they estimate it would take? It became obvious that tracking the exact amount of time spent would add a heavy productivity tax to an already burdened engineering team. While providing a very granular measure, the cost of collecting this information simply outweighed the benefits. Ultimately we helped them decide that the “Number of Customer Service Tickets per Week” was the right metric. Sometimes a cruder measure is the right choice, especially if it is easier to collect and act upon.
  3. Directly Controllable by the Team. Choose metrics that your agile team has more or less direct control over. A metric they contribute towards indirectly is less empowering than something they know they own. For example, when measuring a search service the “Speed and Accuracy of Search” is preferable to “Overall Revenue” which the team only indirectly controls.
  4. Reflect the Quality of Service. Be sure to pick metrics that reflect the quality of that service. For instance, the number of abandoned shopping carts reflects the quality of a shopping cart service, whereas number of shopping cart views is an input metric but doesn’t necessarily reflect service quality.
  5. Difficult to Game. The innate human tendency to game any system should be held in check by selecting metrics that can’t easily be gamed. Simple velocity measures are easily (read: notoriously) gamed while the number of “Severity 1” incidents your service invoked can’t be so easily massaged.
  6. Near Real Time Feedback. Metrics that can be collected and presented over short time intervals are the most actionable. Information is more valuable when fresh — providing availability data weekly (or even daily) will foster better results than a year-end update.

Most importantly, well-chosen metrics tracked regularly should pervade all aspects and all levels of your business. If you want your business to become a lean, performance driven machine, you need to step on the scale every day. It can often be uncomfortable, but it’s necessary to get the returns of which you are capable.



WebScaleSQL

Just over a month ago, the WebScaleSQL collaboration project was launched.  This project aims to create a community-developed branch of the popular MySQL DBMS that incorporates features to make large-scale deployments easier.  As many of our clients run large clusters of MySQL on commodity hardware (as a means to reduce costs and improve scalability), the WebScaleSQL project naturally drew our attention.

The project is currently developed by a small collaboration of engineers from Facebook, Google, Twitter, and LinkedIn.  Certainly no strangers to scalability challenges, the developers from these web giants all face similar challenges and must work continuously to improve performance and scalability of their MySQL deployments to remain competitive.  Aimed at minimizing the duplication of effort across these engineering teams, WebScaleSQL’s development is run as an open collaboration between these major contributors.  Contributions aren’t limited to major companies, however, and participation from outside engineers is encouraged.

Despite having only been around a few weeks, the WebScaleSQL project boasts some significant improvements over its upstream parent (Oracle’s MySQL ver 5.6).  These advances include an automated testing framework, a stress testing suite, query optimizations, and a host of other changes that promise to improve the performance, testing, and deployment of large-scale DBs.

As WebScaleSQL matures, we’ll continue to track its development and report on our clients’  experiences (both good and bad) working with this new “scale friendly” branch of MySQL.

WebScaleSQL source is currently hosted on GitHub (https://github.com/webscalesql/webscalesql-5.6) and released under version 2 of the GNU public license.

