One of the most viewed articles on the AKF Partners website is Technology Due Diligence Checklists. That post is based on a tool we use during technical due diligence engagements. It covers a broad array of topics, from architecture to product management, development process, and security. What makes the checklist valuable? How is it best employed? Come along as we go a step deeper into the checklist.

The secret sauce in applying the checklist has several key ingredients:

  • A conversation, not an audit. The checklist should be applied during a conversation, guided by leading questions. It is not something to be emailed and filled out in isolation – a conversation draws out contextual richness that an isolated questionnaire approach would miss.
  • A ranged evaluation, not a Boolean. Despite being called a checklist, the responses are not limited to a yes or a no. There are degrees of compliance, and our evaluation process uses a Likert scale to capture the situation more accurately (a minimal scoring sketch follows this list).
  • Applicability varies. There are situations where a question is not applicable; it is then removed from the process and not scored. Identifying the conditions under which a question should not be applied is a matter of experience and contextual knowledge. This is where you need experienced technologists on the task. Everyone is familiar with paying taxes, but would you hire just anyone to represent you at an IRS audit?
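
To make the scoring mechanics concrete, here is a minimal sketch of how Likert-scale responses with not-applicable items might be rolled up. The item names, the 1–5 scale, and the percentage normalization are illustrative assumptions, not our actual scoring model:

```python
# Illustrative sketch: aggregate Likert-scale checklist responses,
# excluding items marked not applicable. The 1-5 scale and the
# normalization to a percentage are assumptions for illustration.

NOT_APPLICABLE = None  # an item removed from the process and not scored

responses = {
    "Load balancers distribute requests across end points": 4,
    "Data is isolated with respect to domains/services": 2,
    "Business metrics are used for monitoring": 3,
    "Batch pipeline uses load balancing": NOT_APPLICABLE,  # excluded
}

# Keep only the items that apply in this context.
scored = {q: s for q, s in responses.items() if s is not NOT_APPLICABLE}

# Normalize: each applicable item scores 1 (worst) to 5 (best).
max_points = 5 * len(scored)
percentage = 100 * sum(scored.values()) / max_points

print(f"Scored {len(scored)} of {len(responses)} items: {percentage:.0f}%")
```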

Let us look at some of the questions a bit more closely.

From the X Axis Scalability section, we have:

“Are load balancers used to distribute requests across multiple end points?”

The conversation: Where possible, avoid asking a question that can be answered with a simple “yes” or “no” – open-ended questions better reveal how things work and why they were built that way.

An open-ended question to ask here would be “What happens when a user initiates a session with your website?”

The evaluation: Not all load balancing solutions and deployments are the same; this is where experienced technologists apply the Likert scale. A top score here would involve load balancing used with stateless systems or sessions that are not pinned – any end point can service the initial request and any node can respond with the requested information. A valid but lower-scoring situation would be load balancing with sticky or pinned sessions. Sticky or pinned sessions hold onto some system resources and connections, making those resources unavailable for other uses and increasing the negative impact of an end point failure. The lowest score would be no load balancing at all, usually in conjunction with a single end point – a single point of failure.
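
To illustrate the trade-off behind that scoring, here is a minimal sketch contrasting stateless round-robin routing with session pinning. The class and method names are hypothetical – real load balancers (nginx, HAProxy, cloud load balancers) implement these policies with far more sophistication:

```python
import itertools

class RoundRobinBalancer:
    """Stateless routing: any node can serve any request."""
    def __init__(self, nodes):
        self._cycle = itertools.cycle(nodes)

    def route(self, session_id):
        # Session state lives outside the node (or there is none),
        # so a node failure merely removes capacity.
        return next(self._cycle)

class StickyBalancer:
    """Pinned routing: a session is bound to one node."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.pins = {}  # session_id -> node

    def route(self, session_id):
        # The pinned node holds session state; if it fails, every
        # session bound to it is disrupted, not just in-flight requests.
        if session_id not in self.pins:
            self.pins[session_id] = self.nodes[hash(session_id) % len(self.nodes)]
        return self.pins[session_id]

rr = RoundRobinBalancer(["web1", "web2", "web3"])
sticky = StickyBalancer(["web1", "web2", "web3"])
for _ in range(3):
    print("round-robin:", rr.route("alice"), " sticky:", sticky.route("alice"))
```

With round-robin, losing a node merely reduces capacity; with pinning, every session bound to the failed node is disrupted.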

Applicability: Most interactive websites would be expected to use load balancing. An example of a use case where load balancing would not be needed or recommended is a batch service for processing video clips – not a real-time, customer-interactive use case.

From the Fault Isolation section, we have:

“Do you isolate data with respect to domains/services?”

The conversation: Again, avoid questions that could be answered with a simple yes or no. During a due diligence project, I would ask this question a bit differently, such as “Where is the data from that service stored?” That leads to follow-up questions to discover whether the service is truly independent with its own data store, or something less architecturally desirable.

The evaluation: Many companies are in the process of morphing a monolith into multiple services for several valid reasons. Monoliths are an excellent choice for a startup trying to get a product to market quickly. Over time, with growth, it makes business and technical sense to decompose the monolith into independent services. Asking about data isolation helps identify whether the services are truly independent. An independent service with its own data store can evolve at whatever pace is needed without waiting on other teams making changes to the monolith, helping the team deliver features faster. A service that shares a data store with another service or the monolith from which it came is not truly independent and thus has lower availability (shared failure domains) and slower velocity (coordinating with other teams sharing the data store).
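
As a minimal sketch of the distinction we are probing for, consider the following. The service and table names are hypothetical, and an in-memory SQLite database stands in for a dedicated data store; the point is that an isolated service exposes an API over data it owns, while a leaky one reaches directly into another service's tables:

```python
import sqlite3

class OrderService:
    """Owns its data store outright - the desirable pattern."""
    def __init__(self):
        self.db = sqlite3.connect(":memory:")  # stands in for a dedicated DB
        self.db.execute("CREATE TABLE orders (id INTEGER, customer TEXT)")
        self.db.execute("INSERT INTO orders VALUES (1, 'alice')")

    def get_order(self, order_id):
        # The only sanctioned way for outsiders to read order data.
        row = self.db.execute(
            "SELECT id, customer FROM orders WHERE id = ?", (order_id,)
        ).fetchone()
        return {"id": row[0], "customer": row[1]} if row else None

class BillingService:
    """Depends on OrderService via its API, not its tables."""
    def __init__(self, order_service):
        self.orders = order_service  # stands in for an HTTP/RPC client

    def invoice(self, order_id):
        order = self.orders.get_order(order_id)  # isolated: an API call
        # Anti-pattern (shared failure domain, coupled schemas):
        # self.orders.db.execute("SELECT * FROM orders ...")
        return f"Invoice for order {order['id']} ({order['customer']})"

print(BillingService(OrderService()).invoice(1))
```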

Applicability: There is not a lot of grey area here – if a company is undertaking the effort to create services (new or carved out of a monolith), those services need dedicated data stores. Sharing data stores defeats most of the benefits of building the service in the first place. More often than we would like, we have seen an architecture touted as service-oriented turn out to be nothing of the sort. That leads to additional questions about why the incomplete service separation exists and whether the team has the technical know-how to do it correctly.

From the Operations Process section:

“Are customer experience (business) metrics used for monitoring to determine if a problem exists?”

The conversation: An open-ended question here could be “How do you know if your product/service is working correctly?” Responses would tend to cover several questions in this section, including logging, system resource monitoring, and synthetic monitoring.

The evaluation: Evaluating logging and monitoring is not very complex. Logs should be centralized, searchable, and access controlled, and should never contain PII. Customer experience and synthetic monitors are far more effective at providing early notification of an incident than traditional system resource monitors, though system resource monitors remain useful for infrastructure capacity planning.
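
To illustrate why business metrics surface problems earlier, here is a minimal sketch of a customer-experience monitor; the metric, window, and threshold are illustrative assumptions. CPU and memory graphs can look perfectly healthy while checkouts quietly fall off a cliff – a check against the business metric catches that directly:

```python
# Illustrative sketch of a business-metric monitor: alert when the
# checkout rate drops well below its recent baseline. The metric,
# window, and threshold are assumptions for illustration.

from statistics import mean

def checkout_rate_alarm(recent_rates, current_rate, threshold=0.5):
    """Alert if current checkouts/minute falls below half of the
    trailing baseline - system resources may still look normal."""
    baseline = mean(recent_rates)
    return current_rate < threshold * baseline

# Checkouts per minute over the trailing window, then a sudden drop:
history = [120, 118, 125, 122, 119, 121]
print(checkout_rate_alarm(history, current_rate=117))  # False: healthy
print(checkout_rate_alarm(history, current_rate=30))   # True: investigate
```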

Applicability: Applicability can vary widely here as the set of questions is intended for customer interactive transactional sites. A system that receives only API calls from other sites would have different monitoring needs. This is where the experienced technologist applying the checklist with contextual awareness adds value to the process.

As you can see from the examples above, there is some art in using the checklist effectively. Having the technical experience and contextual awareness to understand how to appropriately score responses is paramount to maximizing the value of the technical due diligence effort.

There is an additional benefit of conducting the diligence session as a conversation with open-ended questions – we get a feel for the team. Typical observations include:

  • Does the team have sufficient architectural skill to do what is needed as the business scales, or could they use some outside assistance?
  • Are engineering and operations processes mature enough for the business stage, or are heroic efforts with hair on fire the norm?
  • Is the organization ready to scale with the business, or are some adjustments needed?

A feel for the team is important, as our clients often ask how much cost and/or time is needed to address issues discovered. Our response accounts for the quality and experience of the team – similar architectural and process situations will call for different remediation plans depending on the team involved.

Would you like to know more? Sell side or buy side, AKF Partners can help optimize the outcome of your technical due diligence project.