AKF Partners

Abbott, Keeven & Fisher Partners: Partners In Hyper Growth


No Such Thing As a Software Engineer

Mike Fisher recently blogged about all the recent activity decrying the death of software engineering in his post “Engineering or Craftsmanship”.  The two terms should never have been stuck together in the first place.  Compared to the “true” engineering disciplines, the construct is as ridiculous as the term “sanitation engineer”.

Most other engineering disciplines require school-trained engineers with deep technical and scientific knowledge to accomplish their associated tasks.  There probably aren’t many groundbreaking airplanes designed by people who do not understand lift and drag, few groundbreaking electronic devices designed by people who don’t understand the principles of electromotive force, and few skyscrapers designed by people who do not understand the principles of statics and dynamics.  This isn’t to say that such things haven’t happened (e.g. the Wright brothers, bicycle manufacturers turned airplane pioneers), but rather that these exceptions are examples of contributions by geniuses and savants rather than the norm.

The development of software is simply different from the work performed within true engineering disciplines.  With just a little bit of time and very little training or deep knowledge, one can create a website or product that is disruptive within any given market segment.  You don’t need to learn a great deal about science or technology to begin being successful, and you need not be a genius.  The barrier to entry to develop a business-changing service on the internet simply isn’t the same as the knowledge necessary to send a man to the moon.  Software, as it turns out, simply isn’t “rocket science”.  To develop it we don’t need a great deal of scientific or technical experience, and it isn’t really the application of a “real science” (one with immutable laws, etc.) as is, say, electrical engineering.

Sure, some people are better than others as a result of training, and there is still incredible value in going to school to learn the classic components of computer science such as asymptotic analysis.  Experience increases one’s ability to create efficient programs that reduce the cost of operations, increase scalability and decrease the cost of development.  But consider this: many people with classical engineering backgrounds simply walk into software development jobs and are incredibly successful.  Seldom is it the case that a software engineer without an appropriate undergraduate engineering background will walk into a chemical, electrical or mechanical engineering position and start kicking ass.

The “laws” that developers refer to (Brooks’s Law, Moore’s Law, etc.) aren’t really laws so much as observations that have held true for some time.  It’s entirely possible that at some point Moore’s Law won’t even be a “law” anymore.  They just aren’t the same as Faraday’s Law or Bernoulli’s Principle.  It’s a heck of a lot easier to understand an observation than it is to understand, “prove” or derive the equations within the other engineering disciplines.  Reading a Wikipedia page and applying the knowledge to your work is not as difficult as spending months learning calculus so that one can use differential equations.

All of this goes to say that software developers rarely “engineer” anything – at least not anymore and not in the sense defined by other engineering disciplines.  Most software developers are closer to the technicians within other engineering disciplines; they have a knowledge that certain approaches work but do not necessarily understand the “physics” behind the approach.  In fact, such “physics” (or the equivalent) rarely exist.  Many no longer even understand how compilers and interpreters do their jobs (or care to do so).

None of this goes to say that we should give up managing our people or projects.  Many articles decry the end of management in software, claiming that it just runs up costs.  I doubt this is the case as the articles I have read do not indicate the cost of developing software without attempting to manage its quality or cost.  Rather they point to the failures of past measurement and containment strategies as a reason to get rid of them.  To me, it’s a reason to refine them and get better.  Agile methods may be a better way to develop software over time, or it may be the next coming of the “iterative or cyclic method of software development”.  Either way, we owe it to ourselves to run the scientific experiment appropriately and measure it against previous models to determine if there are true benefits in our goal of maximizing shareholder value.


Engineering or Craftsmanship

Having gone through a computer science program at a school that required many engineering courses, such as mechanical engineering, fluid dynamics, and electrical engineering, as part of the core curriculum, I have a good appreciation of the differences between classic engineering work and computer science. One of the other AKF partners attended this same program along with me, and we often debate whether our field should be considered an engineering discipline or not.

Jeff Atwood posted recently about how floored he was reading Tom DeMarco’s article in IEEE Software, in which Tom stated that he has come to the conclusion that “software engineering is an idea whose time has come and gone.” Tom DeMarco is one of the leading contributors on software engineering practices and has written such books as Controlling Software Projects: Management, Measurement, and Estimation, whose first line is the famously quoted “You can’t control what you can’t measure.” He writes:

For the past 40 years, for example, we’ve tortured ourselves over our inability to finish a software project on time and on budget. But as I hinted earlier, this never should have been the supreme goal. The more important goal is transformation, creating software that changes the world or that transforms a company or how it does business….Software development is and always will be somewhat experimental.

Jeff concludes his post with this statement, “…control is ultimately illusory on software development projects. If you want to move your project forward, the only reliable way to do that is to cultivate a deep sense of software craftsmanship and professionalism around it.”

All this reminded me of a post that Jeffrey Zeldman made about design management in which he states:

The trick to great projects, I have found, is (a.) landing clients with whom you are sympatico, and who understand language, time, and money the same way you do, and (b.) assembling teams you don’t have to manage, because everyone instinctively knows what to do.

There seems to be a theme among these thought leaders that you cannot manage your way into building great software; rather, you must hone software much like a brewmaster does a microbrew or a furniture maker does a piece of furniture. I suspect the real driver behind this notion of software craftsmanship is that if you don’t want to have to actively manage projects and people, you need to be highly selective about who joins the team and limit the size of the team. You must have management and process in larger organizations, no matter how professional and disciplined the team. There is likely some ratio of professionalism to size of team below which your projects break down without additional process and active management. As in the graph below, if all your engineers are apprentices or journeymen and not master craftsmen, they would be lower on the craftsmanship axis and you could have fewer of them on your team before you required increased process or control.

Continuing with the microbrewery example, you cannot provide the volume and consistency of product for the entire beer-drinking population of the US with four brewers in a 1,000 sq ft shop. You need thousands of people with management and process. The same goes for large software projects: you eventually cannot develop and support the application with a small team.

But wait, you say, what about large open source projects? Let’s take Wikipedia, perhaps the largest open project that exists. Jimbo Wales, the co-founder of Wikipedia, states “…it turns out over 50% of all the edits are done by just .7% of the users – 524 people. And in fact the most active 2%, which is 1400 people, have done 73.4% of all the edits.” Considering there are almost 3 million English articles in Wikipedia, this means each team working on an article is very small, possibly a single person.

Speaking of Wikipedia, one of those 524 people defines software engineer as “a person who applies the principles of software engineering to the design, development, testing, and evaluation of the software and systems that make computers or anything containing software, such as chips, work.” To me this is too formulaic and doesn’t accurately describe the application of style, aesthetics, and pride in one’s work. I for one like the notion that software development is as much craftsmanship as it is engineering, and if that acknowledgement requires us to give up the coveted title of engineer, so be it. But I can’t let this desire to be seen as a craftsman obscure the fact that the technology organization has to support the business as best as possible. If the business requires 750 engineers, then no amount of careful selection or craftsmanship is going to replace control, management, measurement, and process.

Perhaps not much of a prophecy, but I predict a continuing divergence among software development organizations and professionals. Some will insist that the only way to make quality software is in small teams that require neither management nor measurement, while others will fall squarely in the camp of control, insisting that any sizable project requires too many developers not to also require measurement. The reality is yes to both: microbrews are great, but consumers still demand that a Michelob purchased in Wichita taste the same as it does in San Jose. Our technological world will be changed dramatically by small teams of developers, but at the same time we will continue to run software on our desktops created by armies of engineers.

Which side of the argument do you fall on: engineer or craftsman?


Scaling & Availability Anti-patterns

Most of you are familiar with patterns in software development. If you are not, a great reference is Patterns of Enterprise Application Architecture by Martin Fowler or Code Complete by Steve McConnell. The concept of a pattern is a reusable design that solves a particular problem; no sense in having every software engineer reinvent the wheel. There is also the concept of an anti-pattern, which, as the name implies, is a design or behavior that appears useful but produces less than optimal results, and is thus something you do not want engineers to follow. We’ve decided to put a spin on the anti-pattern concept by coming up with a list of anti-patterns for scaling and availability. These are practices or actions that will lead to pitfalls for your application and ultimately your business.

  1. SPOF – Lots of people still insist on deploying single devices.  The rationale is either that it costs too much to deploy in pairs or that the software running on the device is standalone.  The reality is that hardware will always fail; it’s just a matter of time.  And while the software might have been standalone when originally deployed, other features may have come to depend on it in the releases since.  Having a single point of failure is asking for that failure to impact your customer experience or, worse, bring down the entire site.
  2. Synchronous calls – Synchronous calls are sometimes unavoidable, but engineers should be aware of the problems they can cause.  Daisy-chaining applications together in a serial fashion decreases availability due to the multiplicative effect of failure.  If two independent devices each have 99.9% expected availability, connecting them through synchronous calls gives the overall system 99.9% × 99.9% ≈ 99.8% expected availability.
  3. No ability to rollback – Yes, building and testing for the ability to rollback every release can be costly but eventually you will have a release that causes significant problems for your customers. You’ve probably heard the mantra “Failing to plan is planning to fail.”  Take it one step further and actually plan to fail and then plan a way out of that failure back to a known good state.
  4. Not logging – We’ve written a lot about the importance of logging and reviewing these logs.  If you are not logging and not periodically going through those logs you don’t know how your application is behaving.
  5. Testing quality into the product – Testing is important but quality is designed into a product. Testing validates that the functionality works as predicted and that you didn’t break other things while adding the new feature. Expecting to find performance flaws, scalability issues, or terrible user experiences in testing and have them resolved is a total waste of time and resources as well as likely to fail at least 50% of the time.
  6. Monolithic databases – If you get big enough you will need the ability to split your database on at least one axis as described in the AKF Scale Cube. Planning on Moore’s law to save you is planning to fail.
  7. Monolithic application – Ditto for your application. You will eventually need to scale one or more of your application tiers in order to continue growing. 
  8. Scaling through 3rd parties – If you are relying on a vendor for your ability to scale, such as with a database cluster, you are asking for problems. Use clustering and other vendor features for availability; plan on scaling by dividing your users onto separate devices, i.e. sharding.
  9. No culture of excellence – Restoring the site is only half the solution. If your site goes down and you don’t determine the root cause of the failure (after you get the site back up, not while the site is down), then you are destined to repeat that failure. Establish a culture of not allowing the same mistake or failure to happen twice, and introduce processes such as root cause analysis and postmortems to support it.
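The availability math in the synchronous-calls item above generalizes to any number of serially connected components: the expected availabilities simply multiply. A minimal sketch (the function name is ours, not from the post):

```python
def serial_availability(*component_availabilities: float) -> float:
    """Expected availability of components chained by synchronous calls.

    With serial (synchronous) calls, the system is up only when every
    component is up, so the individual availabilities multiply.
    """
    total = 1.0
    for a in component_availabilities:
        total *= a
    return total

# Two devices at 99.9% each: 0.999 * 0.999 = 0.998001, i.e. ~99.8%
print(f"{serial_availability(0.999, 0.999):.4%}")  # prints 99.8001%
```

Note how quickly this compounds: a chain of ten "three nines" components drops to roughly 99.0% expected availability, which is why long synchronous call chains are called out as an anti-pattern.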
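The advice in item 8, dividing users onto separate devices, can start as simply as routing each user to a shard computed from the user id (a customer-based split along one axis of the AKF Scale Cube). The sketch below is purely illustrative; the function name and shard count are our assumptions, not anything prescribed by the post:

```python
def shard_for_user(user_id: int, num_shards: int) -> int:
    """Map a user to a database shard by user id (illustrative modulus scheme)."""
    if num_shards <= 0:
        raise ValueError("num_shards must be positive")
    # Every request for a given user deterministically lands on the same shard.
    return user_id % num_shards

print(shard_for_user(524, 4))  # prints 0 (524 % 4)
```

Modulus sharding is the simplest scheme to reason about; its known weakness is that changing `num_shards` remaps most users, which is why schemes such as consistent hashing or directory-based lookup are often preferred once rebalancing matters.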

Anti-patterns can be incorporated into design and architecture reviews to complement a set of architectural principles.  Together, they form the “must nots” and “must dos” against which all designs and architectures can be evaluated.