AKF Partners

Abbott, Keeven & Fisher Partners – Partners In Hyper Growth


Newsletter – New Associate Partners

Below is part of our recent newsletter. If you haven’t subscribed yet, click here to do so.

New Associate Partners
We are very excited to announce that Geoff Kershner and Dave Berardi are joining AKF Partners.

For those who have worked with AKF in the past, you might recognize Geoff, as he was part of our extended team a couple of years ago. He most recently spent time at LiveOps running a Program Management Office, with efforts including Agile development and PCI-DSS certification. He also spent six years at eBay and PayPal.

Dave is a long-time friend and colleague as well, who recently spent a number of years helping Progressive with incident and problem management processes. He has led various software development teams and spent eight years at GE working on data warehousing and Six Sigma projects.

Check out Dave and Geoff’s full profiles.

Trends
We’re seeing three interesting areas continue to pop up this summer:

  1. Cloud Computing
  2. Becoming a SaaS Company
  3. Agile Organizations

Here are some highlights:

Cloud Computing
One of our latest blog posts discusses the current trends in cloud computing. A common definition of cloud computing is the delivery of computing and storage capacity as a service. A distinct characteristic is that users are charged based on usage rather than a more conventional licensing or upfront purchase. There are three basic types of cloud computing: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).

The major trend is that we are seeing more and more companies becoming comfortable with cloud offerings. In a 2012 survey of 785 respondents by North Bridge Venture Partners, only 3% of respondents considered cloud services too risky, and 50% now say they have “complete confidence” in the cloud. The demand for SaaS offerings is growing rapidly: Gartner predicts that SaaS revenue will hit $14.5 billion in 2012 and continue growing through 2015, when it will reach $22.1 billion. This means your company should be leveraging the cloud as infrastructure where possible (burst capacity, rapid provisioning, etc.), and if you’re not already a SaaS provider you need to be thinking about moving there rapidly. Which brings us to the second trend…

Becoming a SaaS Company
Given the comfort with cloud offerings that is now pervasive in industry, if you don’t already provide a SaaS offering you need to start thinking about it now. One of the hardest things to do is to move from being an on-premise or shrink-wrapped software company to being a Software-as-a-Service company. It is not enough to simply bundle up an application in a hosted fashion and label yourself a “SaaS” company. If you don’t work aggressively to increase availability and decrease your cost of operations, someone with greater experience will come along and simply put you out of business. After all, your business is now about SERVICE – not SOFTWARE. We have pulled together some advice, such as thinking about offering two different products (on-premise and SaaS), or deciding which business you want and killing the other one off. Attempting to satisfy both with a single architecture will likely result in failing at both.

Our belief is that scalability and ultimately business success require architecture, organization, and process to all be aligned and functioning well. This brings up our last trend, how to organize for success as a SaaS company.

Agile Organizations
We’ve written lots of posts about this issue, starting with how “old school” organizations are aligned functionally and then how new Agile organizations are aligned by delivery of services. The organizations that align this way have remained remarkably nimble and agile despite sometimes having thousands of software developers. As a SaaS provider you need to rethink the traditional organizational structure and start thinking about how to organize to provide the best service possible at the lowest cost.



Scalability Rules Videos and Reviews

Here are a few videos that we did to explain Scalability Rules…ignore the scary faces:

You can find more videos by following this link.

Here are a couple reviews of the book:
  • InfoQ
  • Code Ranch
  • LinkedIn Reviews



Alternative Solutions to Old Problems

Are you like @devops_borat and not a fan of DevOps? Or maybe you think deploying dozens of times each day to production is ludicrous. I’m actually a fan of both DevOps and continuous deployment, but if you’re not, don’t worry: these are just new solutions to old problems, and there are alternatives.

[Image: @devops_borat tweet]

The Problems
As long as people have been divided into separate organizations, there has been strife and competition between teams. In the technology field, nowhere is this more apparent than between development and operations. In at least half of the companies we meet with, these teams have problems working together. If you’ve been around for a few years you’ve surely heard one team pointing to the other as the problem, whether that problem is an outage or slow product development.

A solution to this problem is DevOps. Wikipedia states that DevOps “relates to the emerging understanding of the interdependence of development and operations in meeting a business’ goal to producing timely software products and services.”

Another common tech problem is that large changes are risky. It’s called a “Big Bang” for a reason…things go bang! If you’ve been part of an ERP implementation that took months if not years to prepare for, you know how risky these large changes are.

A solution to this problem is to make small changes more frequently. According to Eric Ries, co-founder and former CTO of IMVU, continuous deployment is a method of improving software quality because of the discipline, automation, and rigorous standards it requires.
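To make that concrete, here is a minimal sketch (ours, not Ries’s) of the kind of automated gate that continuous deployment relies on: every change must pass the full build and test pipeline before it reaches production, with no manual step in between. The make test, make package, and make deploy commands are hypothetical placeholders for your own pipeline steps.

    import subprocess
    import sys

    def run(cmd):
        """Run a shell command and return True if it exited cleanly."""
        return subprocess.call(cmd, shell=True) == 0

    def continuous_deploy():
        # Hypothetical commands; substitute your own build, test, and deploy steps.
        if not run("make test"):      # rigorous standards: every change passes the suite
            sys.exit("Tests failed - change not deployed.")
        if not run("make package"):   # automation: no manual build step
            sys.exit("Packaging failed - change not deployed.")
        if not run("make deploy"):    # the small change goes straight to production
            sys.exit("Deploy failed - roll back and investigate.")
        print("Change deployed.")

    if __name__ == "__main__":
        continuous_deploy()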

Alternative Solutions
Admittedly, DevOps and continuous deployment are somewhat extreme for some teams. For those teams, or for teams that simply don’t believe these are the right solutions, don’t fret: there are alternatives.

JAD / ARB – For improving the coordination between development and operations, we recommend the JAD (Joint Architecture Design) and ARB (Architecture Review Board) processes. These are very lightweight processes that force the teams to work together to produce better-architected and better-supported solutions.

Progressive Rollout – For reducing risk by making smaller changes, we recommend progressive rollout. This is a simple concept: first push code to a very small set of servers, monitor for issues, and then progressively increase the percentage of servers that receive the new code. The time between rollout steps can be anywhere from 30 minutes to 24 hours, depending on how quickly you are likely to detect problems. We often suggest stepping through 1%, 5%, 20%, 50%, and 100% of servers.
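To make the mechanics concrete, here is a minimal sketch of the rollout loop just described; push_to and healthy are hypothetical helpers standing in for your own deployment tooling and monitoring checks.

    import time

    # The stages and soak time mirror the suggestion above; tune the soak time to
    # how quickly your monitoring is likely to surface problems (30 min to 24 hours).
    STAGES = [0.01, 0.05, 0.20, 0.50, 1.00]
    SOAK_SECONDS = 30 * 60

    def push_to(fraction, version):
        """Hypothetical helper: deploy `version` to `fraction` of the servers."""
        print(f"Deploying {version} to {fraction:.0%} of servers")

    def healthy():
        """Hypothetical helper: check error rates, latency, etc. in your monitoring."""
        return True

    def progressive_rollout(version):
        for fraction in STAGES:
            push_to(fraction, version)
            time.sleep(SOAK_SECONDS)                 # watch the small slice before expanding
            if not healthy():
                push_to(fraction, "last-known-good") # roll the affected slice back
                raise RuntimeError(f"Rollout of {version} halted at {fraction:.0%}")
        print(f"{version} is now on 100% of servers")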

The bottom line is something technologists know: there are almost always multiple ways to solve a problem. If you don’t like the current or new solution, look for an alternative.



Enterprise Cloud Computing – Book Review

A topic of particular interest to us is cloud computing, so I picked up a copy of Gautam Shroff’s Enterprise Cloud Computing: Technology, Architecture, Applications, published in 2010 by Cambridge University Press. Overall I enjoyed the book and thought it covered some great topics, but there were a few I wanted the author to cover in more depth.

[Image: Enterprise Cloud Computing book cover]

The publisher states that the book is “intended primarily for practicing software architects who need to assess the impact of such a transformation.” I would recommend it for architects, engineers, and managers who are not currently well versed in cloud computing. For individuals already familiar with these subjects, it will not be in-depth enough, nor will it offer enough practical advice on when to consider the different applications.

A minor issue for me is that the book spends a good deal of time upfront covering the evolution of the internet into a cloud computing platform. A bigger issue is that topics are covered very well at an academic or theoretical level but not followed through on the practical side. For example, Shroff’s coverage of MapReduce in Chapter 11 is thorough in describing the internal functionality but falls short on when, how, or why to actually implement it in an enterprise architecture. In this 13-page chapter he unfortunately gives only one page to the practical application of batch processing using MapReduce. He revisits the topic in other chapters, such as Chapter 16, “Enterprise analytics and search,” and does an excellent job explaining how it works, but the when, how, and why of implementation is again not given enough attention.

He picks up the practical advice in the final chapter, Chapter 18, “Roadmap for enterprise cloud computing.” Here he suggests several ways companies should consider using cloud and Dev 2.0 (Force.com and TCS InstantApps). I would like to have seen this practical side carried throughout the book.

I really enjoyed Shroff’s coverage of the economics of cloud computing in Chapter 6. He addresses the issue by comparing in-house (colocation center) costs against cloud costs. Readers can adopt his approach, using their own numbers to produce a similar comparison.

The book does a great job covering the fundamentals of enterprise computing, including a technical introduction to enterprise architecture. It will be of interest to programmers and software architects who are not yet familiar with these topics. The publisher suggests that the book could serve as a reference for a graduate-level course in software architecture or software engineering, and I agree.



Scalability Warning Signs

Is your system trying to tell you that you're going to have scalability problems? We like to think we couldn’t have predicted problems at 10x last year’s traffic, but there are often warning signs we can heed if we know what to look for.

Unless you’re one of the incredibly lucky sites where traffic spikes 100x overnight, scalability problems don’t sneak up on you. They give you warning signs that, if you recognize and react to them appropriately, allow you to stay ahead of the issues. However, we’re often so heads-down getting the next release out the door that we don’t take the time to notice the warning signs until they become huge problems staring us in the face. Here are a few of the warnings that we’ve heard teams talk about in the past couple of months that were clearly signs of problems on the horizon.

Not wanting to make changes – If you find yourself denying requests for changes to certain parts of your system, this might be a warning sign that you have scalability issues with that component. A completely scalable system has components that can fail without catastrophic impact to customers. If you’re avoiding changes to a component because of the risk of problems, that’s a sign you need to re-architect to eliminate, or at least mitigate, the risk.

Performance creep – If after each release you need to add hardware to a subsystem, or you accept a performance degradation in a service, you could have a scaling issue approaching quickly. Consistently increasing CPU or memory consumption in a service with each release will lead you into an unsustainable situation. If today you’re comfortably sitting at 40% CPU utilization and you allow a modest 10% relative degradation with each release, compounding leaves you roughly ten releases from exceeding 100%, and in reality you’ll hit significant problems well before that.
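A quick back-of-the-envelope calculation shows how fast this compounds, assuming the 10% is a relative increase in utilization with each release:

    # Start at 40% CPU and let each release cost a "modest" 10% more.
    utilization = 0.40
    releases = 0
    while utilization < 1.0:
        utilization *= 1.10   # 10% relative degradation per release
        releases += 1
    print(f"Release {releases}: {utilization:.0%} CPU utilization")
    # Prints "Release 10: 104% CPU utilization" - and a server is in serious
    # trouble long before it actually reaches that point.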

Investigating larger hardware – If you’ve started asking your vendors or VAR about bigger hardware, you’re heading down the path of scalability problems. The cost of computational resources does not scale linearly with capacity; it’s closer to cubic or even exponential. Purchasing more expensive hardware might seem like the economical way out when you compare the cost of the first hardware upgrade against developer time, but run the calculation out several iterations. When you get to a Sun Fire™ E25K server with Oracle Database 10g at a $6M price tag, you might feel differently about the decision.
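To see why running the calculation out matters, here is a rough sketch comparing successive scale-up purchases against simply adding commodity servers; every number in it is a hypothetical placeholder rather than a real price.

    # Illustrative only: every price below is a hypothetical placeholder, not a quote.
    COMMODITY_SERVER = 5_000                                   # one more commodity box
    SCALE_UP_TIERS = [50_000, 250_000, 1_500_000, 6_000_000]   # ever-bigger single boxes

    capacity_needed = 1
    for iteration, big_box in enumerate(SCALE_UP_TIERS, start=1):
        capacity_needed *= 2                                   # assume demand doubles each iteration
        scale_out_cost = capacity_needed * COMMODITY_SERVER
        print(f"Iteration {iteration}: scale-up ${big_box:>9,} vs scale-out ${scale_out_cost:>7,}")
    # The first upgrade can look cheap next to developer time, but by the time you
    # reach the multi-million-dollar box, the commodity approach wins by orders of magnitude.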

Asking vendors for advanced features – When you start exploring the advanced options of your vendor’s software, you’re likely heading down the path of increased complexity, and this is a clear warning sign of scalability problems in your future. Besides potentially locking you into a vendor, which lowers your negotiating power, it puts the fate of your company in someone else’s hands, and that wouldn’t make me sleep very well at night. See our post on using vendor features to scale for more information.

Watch out for these or similar warning signs that scalability problems are looming on the horizon. Dealing with the problems today, while you have time to plan properly, might not earn you an award for firefighting, but you will be more likely to deliver a quality product without costly interruption.



UC Berkeley's take on cloud computing

Researchers at UC Berkeley have outlined their take on cloud computing in a paper, “Above the Clouds: A Berkeley View of Cloud Computing.” They cover a lot of material and it’s well worth reading. Section 7 was particularly interesting to us because it covers the top ten obstacles that companies must overcome in order to utilize the cloud. According to the authors, these are:

  • Availability of service
  • Data lock-in
  • Data confidentiality and auditability
  • Data transfer bottlenecks
  • Performance unpredictability
  • Scalable storage
  • Bugs in large distributed systems
  • Scaling quickly
  • Reputation fate sharing
  • Software licensing

 

These look very similar to the top five concerns that we outlined in our article on VentureBeat.com. Our list was:

  • Security
  • Non-portability
  • Control (availability)
  • Limitations (non-persistent storage) 
  • Performance

Their paper concludes, “Although Cloud Computing providers may run afoul of the obstacles …we believe that over the long run providers will successfully navigate these challenges…” They continue: “Hence, developers would be wise to design their next generation of systems to be deployed into Cloud Computing.”

We agree, and reiterate our conclusion from “The Cloud Isn’t For Everyone”:

“Of course, most importantly, we should all keep an eye on how cloud computing evolves over the coming months and years. This technology has the potential to change the fundamental cost and organization structures of most SaaS companies. And as cloud providers mature, we’re sure they’ll address our top five concerns, becoming more viable companies in their own right.”

