AKF Partners

Abbott, Keeven & Fisher Partners | Partners In Hyper Growth


How to Choose a Development Methodology

One of our most viewed blog entries is PDLC or SDLC. While we don’t know definitively why, we suspect that technology leaders are looking for ways to improve their organization’s performance, e.g. better quality, faster development, etc. Additionally, we often get asked “what is the best software development methodology?” or “should we change PDLCs?” The simple answer is that there is no “best”, and changing methodologies is not likely to fix organizational or process problems. What I’d like to do in this post is 1) give you a very brief overview of the different methodologies (consider this a primer or a refresher, or feel free to skip it) and 2) provide a framework for considering which methodology is right for your organization.

The waterfall model is an often-used software development process that occurs in a sequential set of steps. Progress is seen as flowing steadily downwards (like a waterfall) through phases such as Idea, Analysis, Design, Development, Testing, Implementation and Maintenance. The waterfall development model originated in the hardware manufacturing arena, where after-the-fact changes are prohibitively costly. Since no formal software development methodologies existed at the time, this hardware-oriented model was adapted for software development. There are many variations on the waterfall methodology that change the phases, but all share the same quality of a sequential set of steps. Some waterfall variations include those from Six Sigma (DMAIC, DMADV).

Agile software development is a group of software development methodologies based on iterative and incremental development. The requirements and the ultimate products or services that get delivered evolve through collaboration between self-organizing, cross-functional teams. This methodology promotes adaptive planning and evolutionary development and delivery. The Agile Manifesto, published in 2001, introduced a conceptual framework that promotes frequent interaction throughout the development cycle. The time-boxed iterative approach encourages rapid and flexible response to change. There are many variations of Agile, including XP, Scrum, FDD, DSDM, and RUP.

With so many different methodologies available, how do you decide which is right for your team? Here are a few questions that will help guide you through the decision; a rough sketch pulling the answers together into code follows the list.

1) Is the business willing to be involved in the entire product development cycle? This involvement takes the form of dedicated resources (no split roles such as running a P&L by day and being a product manager by night), training for everyone involved, and joint ownership of the product / service (no blaming technology for slow delivery or quality problems).
YES – Consider any of the agile methodologies.
NO – Bypass all forms of agile. All of these require commitment and involvement by the business in order to be successful.

2) What is the size of your development teams? Is the entire development team fewer than 10 people, or have you divided the code into small components / services that teams of fewer than 10 people own and support?
YES – Consider XP or Scrum flavors of the agile methodology.
NO – Consider FDD and DSDM, which are capable of scaling up to 100 developers. If your team is even larger, consider RUP. Note that with agile, when the development team gets larger so does the amount of documentation and communication, and this tends to make the project less agile.

3) Are your teams located in the same location?
YES – Consider any flavor of agile.
NO – While remote teams can and do follow agile methodologies, it is much more difficult. If the business owners are not co-located with the developers, I would highly recommend sticking with a waterfall methodology.

4) Are you hiring a lot of developers?
YES – Consider the more popular forms of agile or waterfall to minimize the ramp-up time of new developers coming on board. If you really want an agile methodology, consider XP, which includes pair programming as a core practice and is a good way to bring new developers up to speed quickly.
NO – Any methodology is fine.
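
For those who like to see the flow spelled out, here is a minimal sketch of the four questions above expressed as code. Consider it an illustration only: the function name, parameters and return strings are our own assumptions, and as noted below there are counterexamples to every one of these rules.

```python
def suggest_methodology(business_involved: bool, team_size: int,
                        colocated: bool, hiring_heavily: bool) -> str:
    """Illustrative encoding of the four questions above; not a rule book."""
    # Q1: without dedicated business involvement, bypass all forms of agile.
    if not business_involved:
        return "Waterfall variant (agile requires business commitment)"
    # Q3: business owners and developers in different locations make agile much harder.
    if not colocated:
        return "Waterfall variant (teams are not co-located)"
    # Q2: small teams, or small components owned by teams of fewer than 10, fit XP or Scrum.
    if team_size < 10:
        # Q4: XP's pair programming helps bring a wave of new hires up to speed.
        return "XP" if hiring_heavily else "XP or Scrum"
    if team_size <= 100:
        return "FDD or DSDM (these scale to roughly 100 developers)"
    return "RUP (expect more documentation and communication overhead)"


print(suggest_methodology(business_involved=True, team_size=8,
                          colocated=True, hiring_heavily=False))
# -> XP or Scrum
```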

One last important point: it isn’t necessary to follow a pure flavor of any methodology. Purists or zealots of process or technology are dangerous because a single tool in your toolbox doesn’t provide the flexibility needed in the real world. Feel free to mix or alter concepts of any methodology to make it fit better in your organization or the service being provided.

There are of course counterexamples to every one of these questions; in fact, I can probably give examples from our client list. These questions and answers are not definitive, but they should provide a starting point or framework for how you can determine your team’s development methodology.



The Agile Executive

In this third installment of our “Agile Organization” series we discuss the qualities and attributes necessary for someone to lead a group of cross-functional Agile teams in the development of a web-centric product.  For the purposes of this discussion, the Agile Executive is the person responsible for leading a group of agile teams in the development of a product.

In a world with less focus on functional organizations, such as the one we’ve described in our Agile Organization articles, it is imperative that the leadership have a good understanding of all domains, from the overall business through product management to each of the technical domains.  Clearly such an individual can’t be an expert in every one of these areas, but they should be deep in at least one and broad across all of them.  Ideally this would have been the case in the functional world as well, but alas functional organizations exist to support deep rather than broad domain knowledge.  In the Agile world we need deep in at least one area and broad in all areas.

Such a deep yet broad individual could come from any of the functional domains.  The former head of product management may be one such candidate, assuming that he or she has a good understanding of engineering and operations.  The former heads of engineering or operations may likewise be candidates to lead agile teams, assuming that they have high business acumen and good product understanding.  In fact, it should not matter whence the individual comes, but rather whether he or she has the business acumen, product savvy and technical chops to lead teams.

In our view of the world, such an individual will likely have a strong education consisting of an undergraduate STEM (science, technology, engineering or math) degree.  This gives them the fundamentals necessary to interact effectively with engineers and add value to the engineering process.  They will also likely have attended graduate school in a business-focused program such as an MBA, with a curriculum that covers finance, accounting, product and strategy.  This background helps them understand the language of business.  The person will hopefully have served, at least for a short time, in one of the engineering disciplines as an individual contributor, to help bridge the communication chasm that can sometimes exist between those who “do” and those who “dream”.  As they progress in their career, they will have taken on roles within product, or worked closely with product, not only in the identification of near-term product elements but also in the strategic evaluation of longer-term product needs.

From the perspective of philosophy, the ideal candidates are those who understand that innovation is more closely correlated with collaboration across wide networks than with the intelligence of any one individual or small group of people.  This understanding helps drive beneficial cognitive conflict and increased contribution to the process of innovation, rather than the closed-minded approach and affective conflict associated with small groups of contributors.

In summary, it’s not about whence the person comes but rather “who the person is”.  Leading cross disciplinary teams requires cross disciplinary knowledge.  As we can’t possibly experience enough personally to be effective in all areas, we must broaden ourselves through education and exposure and deepen ourselves through specific past experiences.  Most importantly, for a leader to succeed in such an environment he or she must understand that “it’s not about them” – that success is most highly correlated with teams that contribute and not with just being “wickedly smart”.



Book Review: The Lords of Strategy

OK, this isn’t a book review as much as it is a comparison of how the iterative and rapid “productization” of strategy closely parallels Agile methods of software development.  But first, here’s an overview of a particularly good book – Walter Kiechel’s The Lords of Strategy.   I flew through the book – not because I wanted it to end but because I couldn’t put it down.  It’s an incredible history of the people, organizations and ideas that developed the concept of corporate strategy, and it’s full of incredible facts and observations.  Take, for instance, that the notion of strategy consulting as purveyed by the likes of BCG, Bain and McKinsey is only roughly 30 to 35 years old, and, perhaps even more interestingly, that the notion that a company exists to create shareholder wealth is only about 30 years old.  The book does a great job of explaining not only the history of the ideas behind strategy consulting, sometimes told alongside the biographies of their inventors, but also how those ideas affected industry for better and for worse.  Ultimately it describes how these ideas “quickened the pace of capitalism”, though the reader is left to figure out whether we are better or worse off for the change.

What struck me as particularly interesting in this book is the parallels one can draw between how corporate strategy (including the products and services surrounding it) developed and how Agile methods of solution development should work.  The germinating idea behind strategy was the identification of the “experience curve” by the Boston Consulting Group.  This curve showed, through trend analysis, that the more experienced a company became at producing a certain good or service, the lower its cost of production.  This notion, though flawed (experience alone isn’t what drives cost), came quickly and was brought to market quickly by BCG.  In rapid fashion, the company built upon this to develop the growth-share matrix as its second “product” offering.  Both of these ideas together led to a grouping of offerings that suggested companies take on debt, reduce costs, differentiate themselves on price and expand shareholder value.  The success of BCG led to McKinsey joining the ranks of strategy consultants, to Harvard Business School changing its curriculum (via Michael Porter, who had by then built his own strategy framework – the famous Five Forces analysis), and to the creation of Bain and Company.

Key here is the evolutionary nature of strategy as a product.  In the very early phases, the offerings were quite frankly wrong.  We know now that the notion that companies differentiate themselves on price alone in every industry is flawed.  But the firms and institutions that supported strategy as a product and intellectual endeavor did not try to offer the absolute best solution – they attempted to bring an appropriate solution to the market and then modify it from there.  In effect, for the time, their solution was the minimum viable product.  Did their approach work?  Billions of dollars of consulting revenue and profits and billions in market value would argue it was an effective approach.

While these companies didn’t realize it at the time, they were in fact practicing agile development.  They didn’t know end user requirements – how could they?  The market wasn’t created yet.  They created a quick offering and iterated upon it, simultaneously changing the market demand and adapting to both the shifting demand and their growing understanding of what strategy needed to become.

Where else might agile methods apply?



Revisiting the 1:10:100 Rule

Has the 1:10:100 rule changed? We think so, though the principles still hold true.

If you have any gray in your hair, you likely remember the 1:10:100 rule.  Put simply, the rule indicates that the cost of defect identification goes up exponentially with each phase of development: it costs a factor of 1 in requirements, 10 in development, 100 in QA and 1000 in production. The increasing cost reflects the need to go back through the earlier phases, the lost opportunity associated with other work, the number of people and systems involved in identifying the problem, and the end user (or customer) impact in a production environment. A 2002 study by the National Institute of Standards and Technology estimated the cost of software bugs at $59.5 billion annually, with half the cost borne by users and the other half by developers.

While there is an argument to be made that Agile development methods reduce this exponential rise in cost, Agile alone simply can’t break the fact that the later you find defects, the more it costs you or your customers.   But I also believe it’s our job as managers and leaders to continue to reduce this cost between phases – especially in production environments.  If the impact in the production environment is partially a function of 1) the duration of impact, 2) the degree of functionality impacted, and 3) the number of customers impacted, then reducing any of these should reduce the cost of defect identification in production.  What can we do besides considering Agile methods?
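
Before answering that, it helps to make the three factors concrete. The sketch below is a back-of-the-envelope toy model only; the per-customer-per-minute cost is a made-up placeholder, not a figure we are recommending.

```python
def production_incident_cost(duration_minutes: float,
                             functionality_fraction: float,
                             customers_affected: int,
                             cost_per_customer_minute: float = 0.05) -> float:
    """Toy model: impact scales with how long, how much, and how many.

    cost_per_customer_minute is an illustrative placeholder; substitute your
    own per-customer, per-minute revenue or goodwill figure.
    """
    return (duration_minutes * functionality_fraction
            * customers_affected * cost_per_customer_minute)


# A 30-minute outage of the whole site for 100,000 customers...
print(production_incident_cost(30, 1.0, 100_000))   # 150000.0
# ...versus the same defect confined to a swimlane holding 1% of customers.
print(production_incident_cost(30, 1.0, 1_000))     # 1500.0
```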

There are at least three approaches that significantly reduce the cost of finding production problems: “swimlaning”, having the ability to roll back code in XaaS environments (our term for anything as a service), and real-time monitoring of business metrics.  Swimlaning reduces the number of customers impacted, while rollback and monitoring reduce the duration of the impact.

Swim Lanes

We think we might have coined the term “swimlaning” as it applies to technology architectures.  Swimlaning, as we’ve written about on this blog as well as in the book, is the extreme application of the “shard” or “pod” concept to create strict fault isolation within architectures.  Each service or customer segment gets its own dedicated set of systems, from the point of accepting a request (usually the web server) to the data storage tier that contains the data necessary to fulfill that request (a database, file system or other storage system).  No synchronous communication is allowed across the “swimlanes” that separate these fault isolation zones.  If you swimlane by the Z axis of scale (customers), you can perform phased rollouts to subsets of your customers and minimize the percentage of your customer base that a rollout impacts.  An issue that would otherwise impact 100% of your customers now impacts 1%, 5% or whatever the smallest customer swimlane is.  If you swimlane by functionality, you only lose that functionality and the rest of your site remains functioning.  The 1000x impact might now be 1/10th or 1/100th of what it would otherwise have been.  Obviously the cost can’t drop below that of the previous phase, since you still need to perform rework, but it goes down dramatically.
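
As a concrete illustration of the Z-axis case, the sketch below routes each customer to one of a handful of isolated lanes. The lane names, host names and hashing scheme are our own assumptions for the example, not a required implementation.

```python
import hashlib

# Each swimlane is a fully isolated stack: its own web servers, application
# servers and database. Nothing below the routing layer is shared across lanes.
SWIMLANES = {
    "lane-0": {"web": "web0.example.com", "db": "db0.example.com"},
    "lane-1": {"web": "web1.example.com", "db": "db1.example.com"},
    "lane-2": {"web": "web2.example.com", "db": "db2.example.com"},
    "lane-3": {"web": "web3.example.com", "db": "db3.example.com"},
}


def lane_for_customer(customer_id: str) -> str:
    """Deterministically map a customer to a single swimlane (Z-axis split)."""
    digest = hashlib.sha256(customer_id.encode()).hexdigest()
    return f"lane-{int(digest, 16) % len(SWIMLANES)}"


# A bad release rolled out to lane-0 only touches ~25% of customers here;
# with more (or smaller) lanes the blast radius shrinks accordingly.
lane = lane_for_customer("customer-42")
print(lane, SWIMLANES[lane])
```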

Rollback

Ensuring that you can always roll back recently released code reduces the duration of customer impact.  While there is absolutely an upfront cost in developing code and schemas to be backwards compatible, you should consider it an insurance policy that helps ensure you never harm your customers.  If asked, most customers will probably tell you they expect that you can always roll back from major issues.   One thing is for certain – if you lose customers you have INCREASED rather than decreased the cost of production issue identification.  If you can isolate issues to minutes or fractions of an hour, in many cases the impact becomes nearly imperceptible.
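
One common way to keep a release roll-back-safe is to make schema changes additive and gate the new code path behind a flag, so the previous release (or the flagged-off path) keeps working against the new schema. The sketch below is illustrative only; the table, column and flag names are assumptions, not part of any particular system.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")

# Release N+1 adds a nullable column rather than renaming or dropping one,
# so release N can still read and write the table if we have to roll back.
conn.execute("ALTER TABLE orders ADD COLUMN shipping_method TEXT")

NEW_CHECKOUT_ENABLED = False  # flip on only after the new release has soaked


def record_order(order: dict) -> None:
    if NEW_CHECKOUT_ENABLED:
        # The new code path populates the new column...
        conn.execute(
            "INSERT INTO orders (id, total, shipping_method) VALUES (?, ?, ?)",
            (order["id"], order["total"], order.get("shipping_method")),
        )
    else:
        # ...while the old path still works untouched, so rolling back the
        # code (or simply the flag) never strands data or schema.
        conn.execute("INSERT INTO orders (id, total) VALUES (?, ?)",
                     (order["id"], order["total"]))


record_order({"id": 1, "total": 19.99, "shipping_method": "ground"})
```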

Monitoring Business Metrics

Monitoring the CPU, memory, and disk space on servers is important, but ensuring that you understand how the system is performing from the customer’s perspective is crucial. It’s not uncommon for a system to respond normally to an internal health check yet be unresponsive to customers; network issues can often produce this type of failure. The way to ensure you catch these and other failures quickly is to monitor a business metric such as logins/sec or orders/min. Comparing these week-over-week (e.g. this Monday at 3pm against last Monday at 3pm) will allow you to spot issues quickly and roll back or fix the problem, reducing the impact on customers.
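
A minimal sketch of that week-over-week check follows; the metric, the 30% threshold and the alert text are illustrative assumptions, and in practice the values would come from your monitoring system rather than being hard-coded.

```python
def weekly_deviation(current: float, same_time_last_week: float) -> float:
    """Fractional change of a business metric versus the same time last week."""
    if same_time_last_week == 0:
        return float("inf")
    return (current - same_time_last_week) / same_time_last_week


def check_logins_per_sec(current_rate: float, last_week_rate: float,
                         threshold: float = 0.30) -> str:
    """Alert if logins/sec has dropped more than `threshold` week-over-week."""
    deviation = weekly_deviation(current_rate, last_week_rate)
    if deviation < -threshold:
        return (f"ALERT: logins/sec down {abs(deviation):.0%} vs. last week "
                f"- investigate or roll back")
    return "OK"


# This Monday at 3pm versus last Monday at 3pm.
print(check_logins_per_sec(current_rate=42.0, last_week_rate=118.0))
# -> ALERT: logins/sec down 64% vs. last week - investigate or roll back
```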



PDLC or SDLC

As a frequent technology writer I often find myself referring to the method or process that teams use to produce software. The two terms that are usually given for this are software development life cycle (SDLC) and product development life cycle (PDLC). The question I have is: are these really interchangeable? I don’t think so, and here’s why.

Wikipedia, our collective intelligence, doesn’t have an entry for PDLC, but explains that the product life cycle has to do with the life of a product in the market and involves many professional disciplines. According to this definition the stages include market introduction, growth, maturity, and saturation. This really isn’t the PDLC that I’m interested in. Under new product development (NPD) we find a definition more akin to PDLC that includes the complete process of bringing a new product to market, with the following steps: idea generation, idea screening, concept development, business analysis, beta testing, technical implementation, commercialization, and pricing.

Under SDLC, Wikipedia doesn’t let us down and explains it as a structure imposed on the development of software products. The article references multiple different models, including the classic waterfall as well as agile, RAD, Scrum, and others.

In my mind the PDLC is the overarching process of product development that includes the business units. The SDLC is the specific steps within the PDLC that are completed by the technical organization (product managers included). An image on HBSC’s site that doesn’t seem to have any accompanying explanation depicts this very well graphically.

Another way to explain how I think of them: to me, all professional software projects are products, but not all product development includes software development.  See the Venn diagram below. The upfront work (business analysis, competitive analysis, etc.) and the back-end work (infrastructure, support, depreciation, etc.) are part of the PDLC and are essential to get the software project created in the SDLC out the door successfully.  There are non-software products that still require a PDLC to develop.

Do you use them interchangeably?  What do you think the differences are?

