AKF Partners

Abbott, Keeven & Fisher Partners | Partners In Hyper Growth


Agile Communication

In the Agile software development methodology, communication between team members is critical. Two of the twelve principles deal directly with this issue:

  • Business people and developers must work together daily throughout the project.
  • The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.

Despite its importance, time and time again we see teams spread out across floors and even buildings trying to work in an Agile fashion. Every foot of separation between two people decreases the likelihood that they communicate directly or overhear something they can provide input on. In physics, an inverse-square law is one in which a quantity is inversely proportional to the square of the distance from the source; Newton's law of universal gravitation is an example. I don't believe anyone has formally applied such a law to verbal communication, but I'm convinced it holds:

communication ∝ 1 / distance²
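As a tongue-in-cheek illustration of that relationship, here is a minimal sketch in Python. The five-foot baseline and the sample distances are assumptions chosen for illustration only; the point is how quickly the likelihood falls off.

```python
def communication_likelihood(distance_ft: float, baseline_ft: float = 5.0) -> float:
    """Relative likelihood of ad-hoc communication, normalized to 1.0 for two
    people sitting baseline_ft apart, assuming the inverse-square relationship."""
    return (baseline_ft / distance_ft) ** 2

# Same pod, across the room, across the floor, another building.
for d in (5, 10, 25, 100):
    print(f"{d:>4} ft: {communication_likelihood(d):.4f}")
```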

This isn't to say that remote teams can't work. In fact, I'm a proponent of remote workers, but precisely because they are not in the building or across the street, special arrangements get made, such as an open Skype call with the remote office. People might worry about being seen as lazy if they Skype someone across the room, but across the country that's perfectly acceptable.

The takeaway is to put people who work with each other as close together as possible. If you need to move people's desks, do it. The temporary disruption is worth the improved communication over the length of the project.



Growing Too Fast

When you're waiting for your startup to achieve hockey-stick growth so you can ring the NASDAQ bell in a hoodie, it might seem impossible to grow too quickly. However, growing too rapidly based on the wrong metrics, i.e. vanity metrics, can have serious negative consequences.

A household brand that we can point to for growing too quickly is Starbucks. Schultz, during his 8-year hiatus from the CEO role, wrote a letter criticizing the Seattle-based company for growing its global chain of 13,000 coffee shops too quickly. He stated that as a result the company was commoditizing itself and losing much of its soul. Upon his return to the CEO role he took action to close over 600 of these stores. Not only are situations like this harmful to the brand, but all of those stores' employees were negatively impacted as well.

At AKF, we deal with hyper-growth companies every week, and sometimes we get to follow them over years. The growth of most companies is not steady; rather, it is sporadic. When a company has grown too quickly based on the wrong metrics, trouble ensues.

At Quigo (our previous company, acquired by AOL) traffic growth would move along steadily until we brought on a large customer, and then it might double overnight. But traffic isn't revenue; making money on that traffic lagged by weeks or months. Had we celebrated the increase in traffic rather than the truly important business metrics (revenue and profit), we might have grown too rapidly. Some of the bad things we've personally seen at companies that grow too rapidly based on vanity metrics include:

  • Organization – bringing employees up to speed takes time, and it is easy to get behind or ahead of the demand curve. Hiring too slowly might impact customers, but hiring too quickly means either too high a burn rate or having to lay people off. Growing rapidly in year one and then laying people off in year two causes credibility issues. This in turn might give potential new customers pause and will certainly create hiring difficulties in the future.
  • Valuation – interest, from an investment perspective, in particular types of technology companies waxes and wanes over time. Online advertising companies were hot in 2007, but now it is a much more competitive environment. Social networking was hot before the FB IPO, but now people are concerned. Raising money at too high a valuation, unless it's the last round, will make future rounds more difficult and painful.
  • Office Space – along with hiring too many employees comes building out or leasing too much office space. Obviously this causes excessive burn rates but it also can have morale implications. With too few employees and too much space the team is reminded every day that they haven’t grown fast enough to fill the office. They also might gravitate away from each other (often heading for the available offices) and lose the power of sitting beside each other.
  • Hardware – while you might have to buy or lease hardware based on a vanity metric like web traffic, this doesn't mean you should forget how much revenue and profit that traffic is generating. Traffic that doesn't bring in the expected revenue should be provisioned for accordingly. We've argued for an R-F-M analysis for storage costs, which can be extended to processing as well (a rough sketch follows this list).
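To make the R-F-M idea concrete, here is a minimal sketch that scores a data segment on recency, frequency, and monetary value and suggests a storage tier. The tier names, thresholds, and weights are assumptions made up for illustration, not a formula from the post.

```python
from dataclasses import dataclass

@dataclass
class DataSegment:
    name: str
    days_since_access: int   # Recency
    accesses_per_month: int  # Frequency
    monthly_revenue: float   # Monetary

def storage_tier(seg: DataSegment) -> str:
    """Pick a storage tier from a simple R-F-M score (illustrative weights and cutoffs)."""
    recency = 1.0 if seg.days_since_access <= 30 else 0.2
    frequency = min(seg.accesses_per_month / 1000.0, 1.0)
    monetary = min(seg.monthly_revenue / 10_000.0, 1.0)
    score = 0.3 * recency + 0.3 * frequency + 0.4 * monetary
    if score > 0.6:
        return "fast, expensive storage"
    if score > 0.3:
        return "mid-tier storage"
    return "cheap, archival storage"

print(storage_tier(DataSegment("active customer reports", 2, 5000, 25000.0)))  # fast, expensive storage
print(storage_tier(DataSegment("dormant trial accounts", 400, 10, 0.0)))       # cheap, archival storage
```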

Are you growing too fast, or have you seen companies grow too fast?



The Agile Organization

We’ve done a lot of work with organizations attempting to become more Agile by implementing Agile development practices.  One common problem we see time and time again is that the “old school” way of defining organization structure starts to lose its value in an Agile world.  Here I am specifically talking about organization structures developed around functional roles such as development (or engineering), QA, Operations, Infrastructure, etc.

This old method of organizing, which resembles a Y-axis split within the AKF Scale Cube, served our industry well for a long time.  And, for many organizations, it can continue to work well.  It works particularly well in organizations that follow waterfall models, as the organization structure mimics the flow of work through certain gates.  The structure is also comforting in its familiarity, as most long-tenured managers and individual contributors have worked within similarly structured organizations their entire professional careers.

But in the Agile world, this organization structure doesn’t add as much value as in the Waterfall world.  In fact, I argue that it’s counter-productive in many ways.  The first and perhaps most benign issue is that the actual structure of the organization does not foster work-flow.  Unlike waterfall development where one group hands off a project to another in phases (development to QA, QA to operations, etc), Agile methods seek to develop and deploy seamlessly.  To be successful the Agile team needs representation from multiple stakeholders within functional groups.  As individuals now spend most of their time in cross functional teams, what value does the functional group offer?  In essence, these functional organizations become the analog to the “home room” in school.

The next problem is the inherent conflict created between the Agile team and the functional organization.  To be truly effective, the team must be empowered to some degree.  What power or responsibility does the functional leader then have?  If he or she isn’t responsible for a specific product, are they to be given some sort of veto power?  Such a notion has meaning in the waterfall world but really runs counter to the time to market and discovery objectives of Agile methods.  The resulting affective conflict simply doesn’t add value to the overall product.  In fact, as research shows, it destroys value.  Some proponents of continuing with functional organizations might indicate that the functional groups allow for more effective management and mentoring of individuals within their domain.  Given how little time managers truly spend on mentoring relative to other tasks, I highly doubt this is the case for most organizations.  Our experience is that the functional groups spend more time arguing over ownership of certain decisions (affective conflict) rather than mentoring, training and evaluating individual contributors.

Perhaps the largest problem – larger than the lack of support for work flow and the creation of conflict – is that implementing agile processes across functional organizations sub-optimizes innovation.  Research indicates that innovation happens most frequently and beneficially within groups of individuals with diverse and non-overlapping experience across a number of domains (functional and experiential diversity) and with non-redundant links to individuals outside of their organization.  By engaging in beneficial debate (cognitive conflict) on approaches to a certain problem or opportunity, the perspective and networks brought by each person widens the potential solution set.  Alas, this is the true unheralded value of agile development teams that properly incorporate multiple disciplines within the team (QA, dev, product, infrastructure).  Each of these individuals not only brings new and valuable expertise in how to develop a product, they also have contacts outside the organization unlikely to be matched by each of their peers.  Innovation quality and frequency therefore increases.  But the inherent conflict in multiple competing organizational affiliations will dampen this innovation.  So not only is there conflict and a lack of workflow, the potential major benefit is removed or at the very least diminished.

Having discussed the problems inherent to functional organizations and agile processes, we’ll next discuss how a company might “organize” around agile to be more effective.



5 Things Agile is NOT

Agile processes can help fix several issues, but not all of them. Here are the top 5 misconceptions about the process that we see in our practice.

It seems that everyone is moving to an Agile approach in their product (or software) development lifecycle.  Some move with great success, some with great fanfare and for some it’s one of the last moves their company and engineering organizations make before failing miserably and shutting doors permanently.  As often as not, companies just move because they believe it will cure all of their problems.  As we’ve written before, no new PDLC will cure all of your problems.  Agile may in fact be best for you, but there are always tradeoffs.

We've compiled the top 5 misconceptions about Agile from our experience working with companies to solve problems.  Be careful of these pitfalls, as they can cause you to fail miserably in your efforts.

1)      Agile is NOT a reason to NOT manage engineers or programmers

Engineering organizations should measure themselves.  In fact, many Agile methods include a number of metrics, such as burn down charts to measure progress and velocity to measure the rate at which teams deliver business value (a minimal example of both is sketched below).  As with any other engineering organization, you should seek meaningful metrics to understand and improve developer and organizational quality and productivity.  Don't let your team or your managers tell you not to do your job!
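As a minimal sketch of those two metrics: the sprint history, the 150-point release backlog, and the field names below are made up for illustration; the computation simply averages delivered story points (velocity) and tracks the points remaining in the backlog (burn down).

```python
# Hypothetical sprint history: story points committed and completed per sprint.
sprints = [
    {"name": "Sprint 1", "committed": 30, "completed": 24},
    {"name": "Sprint 2", "committed": 28, "completed": 27},
    {"name": "Sprint 3", "committed": 32, "completed": 25},
]

# Velocity: average story points actually delivered per sprint.
velocity = sum(s["completed"] for s in sprints) / len(sprints)
print(f"Average velocity: {velocity:.1f} points/sprint")

# Burn down: points remaining in a hypothetical 150-point release backlog.
remaining = 150
for s in sprints:
    remaining -= s["completed"]
    print(f"{s['name']}: {remaining} points remaining")

# Rough forecast of sprints left, assuming velocity holds.
print(f"Estimated sprints to finish: {remaining / velocity:.1f}")
```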

2)      Agile is NOT a reason to have engineering alone make product decisions

You still have to run your business, which means aligning the product vision with the vision of the company.   More than likely you are paying business or product people to define what you need to do to fight and win in your market.  Someone still has to set the broad strategic vision within which products will be developed.  Engineers can and should contribute to this vision, and that's where Agile methods can help.

3)      Agile alone is NOT a cure for all of your product delivery problems

As we’ve blogged before, there simply are no silver bullets.  With the right management and oversight, you can make any development lifecycle work.  There are just cases where Agile methods work better.   But don’t look to have a PDLC fix your business, people or management issues.

4)      Agile is NOT an excuse to NOT put in the appropriate processes

There is nothing in the Agile manifesto that states that process is evil.  In fact, it recognizes processes are important by stating that there is value within them.  Agile simply believes that individuals and interactions are more important – a concept with which we completely agree.   Don’t argue that all process is evil and cite Agile as the proof as so many organizations seem to do.  Certain processes (e.g. code reviews) are necessary for hyper growth environments to be successful.

5)      Agile is NOT an excuse not to create SOME documentation

Another often misinterpreted point is that Agile eliminates documentation.  Not only is this not true, it is ridiculous.  The Agile manifesto again recognizes the need for documentation and simply prefers working software over COMPREHENSIVE documentation.  Go ahead and try to figure out how to use or troubleshoot something for the very first time without documentation and see how easy it is…  Programming before designing, coupled with not creating useful documentation, makes one a hack instead of an engineer.  Document – just don't go overboard in doing it.



PDLC or SDLC

As a frequent technology writer I often find myself referring to the method or process that teams use to produce software. The two terms usually given for this are software development life cycle (SDLC) and product development life cycle (PDLC). The question I have is: are these really interchangeable? I don't think so, and here's why.

Wikipedia, our collective intelligence, doesn't have an entry for PDLC, but explains that the product life cycle has to do with the life of a product in the market and involves many professional disciplines. According to this definition the stages include market introduction, growth, maturity, and saturation. This really isn't the PDLC that I'm interested in. Under new product development (NPD) we find a definition more akin to PDLC that covers the complete process of bringing a new product to market and includes the following steps: idea generation, idea screening, concept development, business analysis, beta testing, technical implementation, commercialization, and pricing.

Under SDLC, Wikipedia doesn't let us down and explains it as a structure imposed on the development of software products. The article references multiple models, including the classic waterfall as well as agile, RAD, Scrum, and others.

In my mind the PDLC is the overarching process of product development that includes the business units. The SDLC is the specific steps within the PDLC that are completed by the technical organization (product managers included). An image on HBSC’s site that doesn’t seem to have any accompanying explanation depicts this very well graphically.

Another way to explain how I think of them: all professional software projects are products, but not all product development includes software development.  See the Venn diagram below. The upfront work (business analysis, competitive analysis, etc.) and the back-end work (infrastructure, support, depreciation, etc.) are part of the PDLC and are essential to getting the software project created in the SDLC out the door successfully.  There are non-software products that still require a PDLC to develop.

Do you use them interchangeably?  What do you think the differences are?



No Such Thing As a Software Engineer

Mike Fisher recently blogged about all the recent activity decrying the death of software engineering in his post “Engineering or Craftsmanship”.  The two terms should never have been stuck together in the first place.  Compared to the “true” engineering disciplines, the construct is as ridiculous as the term “sanitation engineer”.

Most other engineering disciplines require school-trained engineers with deep technical and scientific knowledge to accomplish their associated tasks.  There probably aren't many groundbreaking airplanes designed by people who do not understand lift and drag, few groundbreaking electronic devices designed by people who don't understand the principles of electromotive force, and few skyscrapers designed by people who do not understand the principles of statics and dynamics.  This isn't to say that such things haven't happened (e.g. the bicycle manufacturers turned airplane pioneers known as the Wright brothers), but rather that these exceptions are examples of contributions by geniuses and savants rather than the norm.

The development of software is simply different from the work performed within true engineering disciplines.  With just a little bit of time and very little training or deep knowledge, one can create a website or product that is disruptive within any given market segment.  You don't need to learn a great deal about science or technology to begin being successful, and you need not be a genius.  The barrier to entry to develop a business-changing service on the internet simply isn't the same as the knowledge necessary to send a man to the moon.  Software, as it turns out, simply isn't "rocket science".   To develop it we don't need a great deal of scientific or technical experience, and it's really not the application of a "real science" (one with immutable laws, etc.) as is, say, electrical engineering.

Sure, there are some people who as a result of training are better than others, and there is still incredible value in going to school to learn the classic components of computer science such as asymptotic analysis.  Experience increases one's ability to create efficient programs that reduce the cost of operations, increase scalability and decrease the cost of development.  But consider this: many people with classical engineering backgrounds simply walk into software development jobs and are incredibly successful.  Seldom is it the case that a software engineer without an appropriate undergraduate engineering background will walk into a chemical, electrical or mechanical engineering position and start kicking ass.

The "laws" that developers refer to (Brooks' Law, Moore's Law, etc.) aren't really laws so much as observations that have held true for some time.  It's entirely possible that at some point Moore's Law won't even be a "law" anymore.  They just aren't the same as Faraday's Law or Bernoulli's Principle.  It's a heck of a lot easier to understand an observation than it is to understand, "prove" or derive the equations within the other engineering disciplines.  Reading a Wikipedia page and applying the knowledge to your work is not as difficult as spending months learning calculus so that one can use differential equations.

All of this goes to say that software developers rarely “engineer” anything – at least not anymore and not in the sense defined by other engineering disciplines.  Most software developers are closer to the technicians within other engineering disciplines; they have a knowledge that certain approaches work but do not necessarily understand the “physics” behind the approach.  In fact, such “physics” (or the equivalent) rarely exist.  Many no longer even understand how compilers and interpreters do their jobs (or care to do so).

None of this goes to say that we should give up managing our people or projects.  Many articles decry the end of management in software, claiming that it just runs up costs.  I doubt this is the case as the articles I have read do not indicate the cost of developing software without attempting to manage its quality or cost.  Rather they point to the failures of past measurement and containment strategies as a reason to get rid of them.  To me, it’s a reason to refine them and get better.  Agile methods may be a better way to develop software over time, or it may be the next coming of the “iterative or cyclic method of software development”.  Either way, we owe it to ourselves to run the scientific experiment appropriately and measure it against previous models to determine if there are true benefits in our goal of maximizing shareholder value.



Engineering or Craftsmanship

Having gone through a computer science program at a school that required many engineering courses, such as mechanical engineering, fluid dynamics, and electrical engineering, as part of the core curriculum, I have a good appreciation of the differences between classic engineering work and computer science. One of the other AKF partners attended this same program along with me, and we often debate whether our field should be considered an engineering discipline.

Jeff Atwood posted recently about how floored he was reading Tom DeMarco's article in IEEE Software, where Tom stated that he has come to the conclusion that "software engineering is an idea whose time has come and gone." Tom DeMarco is one of the leading contributors on software engineering practices and has written such books as Controlling Software Projects: Management, Measurement, and Estimation, whose first line is the famously quoted "You can't control what you can't measure." Tom has come to the conclusion that:

For the past 40 years, for example, we’ve tortured ourselves over our inability to finish a software project on time and on budget. But as I hinted earlier, this never should have been the supreme goal. The more important goal is transformation, creating software that changes the world or that transforms a company or how it does business….Software development is and always will be somewhat experimental.

Jeff concludes his post with this statement, “…control is ultimately illusory on software development projects. If you want to move your project forward, the only reliable way to do that is to cultivate a deep sense of software craftsmanship and professionalism around it.”

All this reminded me of a post that Jeffrey Zeldman made about design management in which he states:

The trick to great projects, I have found, is (a.) landing clients with whom you are sympatico, and who understand language, time, and money the same way you do, and (b.) assembling teams you don’t have to manage, because everyone instinctively knows what to do.

There seems to be a theme among these thought leaders that you cannot manage your way into building great software; rather, you must hone software much like a brew-master does a micro-brew or a furniture-maker does a piece of furniture. I suspect the real driver behind this notion of software craftsmanship is that if you don't want to have to actively manage projects and people, you need to be highly selective about who joins the team and limit the size of the team. You must have management and process in larger organizations, no matter how professional and disciplined the team. There is likely some ratio of professionalism to team size below which your projects break down without additional process and active management. As in the graph below, if all your engineers are apprentices or journeymen and not master craftsmen, they would be lower on the craftsmanship axis, and you could have fewer of them on your team before you required increased process or control.

Continuing with the microbrewery example, you cannot provide the volume and consistency of product for the entire beer-drinking population of the US with four brewers in a 1,000 sq ft shop. You need thousands of people, along with management and process. The same goes for large software projects: eventually you cannot develop and support the application with a small team.

But wait, you say, what about large open source projects? Let's take Wikipedia, perhaps the largest open project that exists. Jimbo Wales, the co-founder of Wikipedia, states "…it turns out over 50% of all the edits are done by just .7% of the users – 524 people. And in fact the most active 2%, which is 1400 people, have done 73.4% of all the edits." Considering there are almost 3 million English articles in Wikipedia, this means each team working on an article is very small, possibly a single person.

Speaking of Wikipedia, one of those 524 people defines a software engineer as "a person who applies the principles of software engineering to the design, development, testing, and evaluation of the software and systems that make computers or anything containing software, such as chips, work." To me this is too formulaic and doesn't accurately describe the application of style, aesthetics, and pride in one's work. I for one like the notion that software development is as much craftsmanship as it is engineering, and if that acknowledgement requires us to give up the coveted title of engineer, so be it. But I can't let this desire to be seen as a craftsman obscure the fact that the technology organization has to support the business as best as possible. If the business requires 750 engineers, then there is no amount of careful selection or craftsmanship that is going to replace control, management, measurement, and process.

Perhaps it's not much of a prophecy, but I predict a continuing divergence among software development organizations and professionals. Some will insist that the only way to make quality software is in small teams that require neither management nor measurement, while others will fall squarely in the camp of control, insisting that any sizable project requires too many developers not to also require measurement. The reality is yes to both: microbrews are great, but consumers still demand that a Michelob purchased in Wichita taste the same as it does in San Jose. Our technological world will be changed dramatically by small teams of developers, but at the same time we will continue to run software on our desktops created by armies of engineers.

Which side of the argument do you fall on: engineer or craftsman?



Scaling & Availability Anti-patterns

Most of you are familiar with patterns in software development. If you are not, a great reference is Patterns of Enterprise Application Architecture by Martin Fowler or Code Complete by Steve McConnell. The concept of a pattern is a reusable design that solves a particular problem; no sense in having every software engineer reinvent the wheel. There is also the concept of an anti-pattern, which, as the name implies, is a design or behavior that appears to be useful but produces less than optimal results and is thus something you do not want engineers to follow. We've decided to put a spin on the anti-pattern concept by coming up with a list of anti-patterns for scaling and availability. These are practices or actions that will lead to pitfalls for your application or, ultimately, your business.

  1. SPOF – Lots of people still insist on deploying single devices.  The rationale is either that it costs too much to deploy in pairs or that the software running on the device is standalone.  The reality is that hardware will always fail; it's just a matter of time. And while the software might have been standalone when originally deployed, other features may have come to depend on it in the releases since.  Having a single point of failure is asking for that failure to impact your customer experience or, worse, bring down the entire site.
  2. Synchronous calls – Synchronous calls are unavoidable, but engineers should be aware of the potential problems they can cause.  Daisy-chaining applications together in a serial fashion decreases availability due to the multiplicative effect of failure.  If two independent devices each have 99.9% expected availability, connecting them through synchronous calls causes the overall system to have 99.9% x 99.9% = 99.8% expected availability (see the sketch after this list).
  3. No ability to rollback – Yes, building and testing for the ability to rollback every release can be costly but eventually you will have a release that causes significant problems for your customers. You’ve probably heard the mantra “Failing to plan is planning to fail.”  Take it one step further and actually plan to fail and then plan a way out of that failure back to a known good state.
  4. Not logging – We’ve written a lot about the importance of logging and reviewing these logs.  If you are not logging and not periodically going through those logs you don’t know how your application is behaving.
  5. Testing quality into the product – Testing is important but quality is designed into a product. Testing validates that the functionality works as predicted and that you didn’t break other things while adding the new feature. Expecting to find performance flaws, scalability issues, or terrible user experiences in testing and have them resolved is a total waste of time and resources as well as likely to fail at least 50% of the time.
  6. Monolithic databases – If you get big enough you will need the ability to split your database on at least one axis as described in the AKF Scale Cube. Planning on Moore’s law to save you is planning to fail.
  7. Monolithic application – Ditto for your application. You will eventually need to scale one or more of your application tiers in order to continue growing. 
  8. Scaling through 3rd parties – If you rely on a vendor for your ability to scale, such as with a database cluster, you are asking for problems. Use clustering and other vendor features for availability; plan on scaling by dividing your users onto separate devices, i.e. sharding.
  9. No culture of excellence – Restoring the site is only half the solution. If your site goes down and you don’t determine the root cause of the failure (after you get the site back up not while the site is down) then you are destined to repeat that failure. Establish a culture of not allowing the same mistake or failure to happen twice and introduce processes such as root cause analysis and postmortems for this. 
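As a quick illustration of the availability math in anti-pattern 2, here is a minimal sketch; the component availabilities are made-up inputs, and the helper name is ours, not something from the post.

```python
def serial_availability(*availabilities: float) -> float:
    """Expected availability of components chained through synchronous calls:
    every component must be up at once, so the probabilities multiply."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

# Two components at 99.9% each, as in the example above.
print(f"{serial_availability(0.999, 0.999):.4%}")   # ~99.80%

# Daisy-chaining more services in series degrades availability quickly.
print(f"{serial_availability(*[0.999] * 6):.4%}")   # ~99.40%
```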

Anti-patterns can be incorporated into design and architecture reviews to complement a set of architectural principles.  Together, they form the "must not's" and "must do's" against which all designs and architectures can be evaluated.

