In 2018, AKF Partners shared a list of questions we used to conduct technical due diligence engagements for our clients. It proved to be popular and continues to be one of the most viewed blog posts on our site.

We continually review and improve our models and tools, and we have made some changes to our technical due diligence question list. Some of the terminology refers to AKF models, and a quick visit to the AKF Scale Cube will help orient you for the scalability questions. Without further ado, here it is.


Scalability

  1. How do you manage the routing of requests to various endpoints?
  2. Is session state stored in the browser or separate tier?
  3. Have you employed any strategies to increase database / storage performance and scalability?
  4. Have you implemented any caching strategies?
  5. Are authentication and authorization services synchronized and available to all environments / regions?
  6. Does the client/target leverage edge caching (CDN/browser caching)?


Service Splits (Y-Axis)

  1. Are services (e.g., login, signup, checkout) separated across different servers?
  2. Does each service have its own separate datastore?
  3. Are services sized appropriately with consideration of needs (e.g. availability, frequency of change, dependencies, skill sets)?


Data Splits (Z-Axis)

  1. Are endpoints (web, app servers) dedicated to a subset of similar data (e.g., users, SKUs, content)?
  2. Are databases dedicated to only a subset of similar data?
  3. How do you handle multi-tenant or all-tenant arrangements in your implementation (services and data)?
  4. How do you handle data residency requirements?
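Questions 1 and 2 above are about Z-axis splits: routing a customer to the same subset of infrastructure every time. A minimal sketch of deterministic shard selection (the shard names are hypothetical; each entry stands in for a separate database cluster) looks like this:

```python
import hashlib

# Hypothetical shard map: each entry would be a separate database cluster.
SHARDS = ["users-db-0", "users-db-1", "users-db-2", "users-db-3"]


def shard_for(customer_id: str) -> str:
    """Route a customer to the same shard every time (Z-axis split)."""
    digest = hashlib.sha256(customer_id.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(SHARDS)
    return SHARDS[index]
```

A diligence follow-up on any modulo scheme like this one: ask how the team reshards, since changing the shard count moves most keys unless something like consistent hashing or a lookup service is in place.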


Fault Isolation

  1. Are only asynchronous calls being made across services?
  2. Do you isolate data with respect to domains / services?
  3. Does logical architecture enforce separation of function / areas of concern?
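The point of question 1 above is that a slow downstream service should not stall its callers. As a hedged illustration (the service and timeout values are made up for this sketch), an asynchronous call with a deadline and a graceful fallback keeps the failure contained:

```python
import asyncio


async def fetch_recommendations(user_id: str) -> list[str]:
    """Stand-in for a downstream service call; here it simulates a degraded dependency."""
    await asyncio.sleep(5)
    return ["item-1", "item-2"]


async def render_page(user_id: str) -> dict:
    """Call the dependency with a deadline and degrade gracefully on timeout."""
    try:
        recs = await asyncio.wait_for(fetch_recommendations(user_id), timeout=0.1)
    except asyncio.TimeoutError:
        recs = []  # serve the page without recommendations rather than block
    return {"user": user_id, "recommendations": recs}


page = asyncio.run(render_page("u-42"))
```

The diligence question behind the code: when the dependency times out, does the caller have a defined degraded behavior, or does the outage propagate?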


Availability and Disaster Recovery

  1. Is there any specific area or areas where a failure will cause an outage?
  2. Is your data tier resilient to logical corruption (durable snapshots, intra database rollback) or being unavailable?
  3. How do you use multiple data centers (regions) to implement disaster recovery strategies?
  4. Are data centers (if colo or owned DCs) located in geographically low-risk areas?
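Question 3 above asks how multiple regions are actually used. One hedged sketch of the simplest pattern, ordered failover for reads (the region endpoints are hypothetical, and the fetch function is injected so the routing logic stays testable), looks like this:

```python
# Hypothetical region endpoints, ordered by preference.
REGIONS = ["https://api.us-east.example.com", "https://api.eu-west.example.com"]


def read_with_failover(path: str, fetch) -> tuple[str, bytes]:
    """Try each region in order; a healthy secondary serves reads during an outage.

    `fetch` is injected for testability; in production it might wrap an HTTP
    client configured with a short timeout.
    """
    last_error = None
    for base in REGIONS:
        try:
            return base, fetch(base + path)
        except OSError as exc:  # connection refused, timeout, DNS failure, ...
            last_error = exc
    raise RuntimeError("all regions unavailable") from last_error
```

In diligence, the harder follow-ups are about writes: replication lag, failback, and how often the failover path is actually exercised.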


Infrastructure

  1. Does the application use Stored Procedures?
  2. Are the web, application, and database servers on physically separate tiers or separate virtual instances?
  3. Are auto-scaling techniques used?
  4. What third-party purchased technologies are part of the system?
  5. If self-hosted, describe the hardware sizes across the major components/tiers of the application.
    If cloud-hosted, describe the instance sizes used for workloads, including the DB(s).
Cost and Efficiency

  1. Do you use virtualization, and if so, what are your objectives in using it?
  2. How expensive would it be for you to use a different public cloud provider? How do you determine which PaaS services you incorporate into your product?
  3. Are data centers located in low-cost areas (if colo or owned DCs)?
  4. What traffic is routed through a Firewall/WAF?
  5. Are there any customer-specific code/data structures in your codebase?


Product Management

  1. Is there a product management team or product owner that can make decisions to add, delay, or deprecate features?
  2. Does the product management team have ownership of business goals?
  3. Does the team develop success criteria, OKRs, KPIs or customer feedback loops that help to inform feature decisions?
  4. Does the team use an iterative discovery process to validate market value and achieve goals?
  5. Are leading and lagging indicators defined for each major project, and what is the process to track, validate, and rebase or revisit the metrics?
  6. Is everyone in the Product and Engineering organization skilled in using the product in the role of a customer?


Process and Metrics

  1. Are leading and lagging indicators defined for each major project, and what is the process to track, validate, and rebase or revisit the metrics?
  2. Does the team use any relative sizing method to estimate effort for features/story cards?
  3. Does the team utilize a velocity measure to improve team time to market?
  4. Does the team use burn down, metrics, retrospectives to measure the progress and performance of iterations?
  5. How does the team measure engineering efficiency and who owns and drives these improvements?
  6. Does the team perform estimates and measure actual against predicted?
  7. Is there a Definition of Done in place?


Engineering Practices

  1. Does the team use an approach that allows for easy identification and use of a prod bug fix branch for rapid deployment?
  2. Does the team use a feature-branch approach? Can a single feature/engineer block a release?
  3. In your public cloud architecture, how do you build and provision a new server / environment / region?
  4. Does the team have documented coding standards that are applied?
  5. Are engineers conducting code reviews against defined standards, or is there automation in the dev pipeline that validates against those standards?
  6. Is open-source licensing actively managed and tracked?
  7. Are engineers writing unit tests (code coverage)?
  8. Is the automated testing coverage greater than 75%?
  9. Does the team utilize continuous integration?
  10. Is load and performance testing conducted before releasing to a significant portion of users or is the testing built into the development pipeline?
  11. Does the team deploy small payloads frequently versus larger payloads seldomly?
  12. Does the team utilize continuous deployment?
  13. Is any containerization (e.g. Docker) and orchestration (Kubernetes) in place?
  14. Are feature flags, where a feature is enabled or disabled outside of a code release, in use?
  15. Does the team have a mechanism that can be used to roll back (wire on/off, DDL/DML scripted and tested, additive persistence tier, no "select *")?
  16. How does the team define technical debt?
  17. Is Tech Debt tracked and prioritized on an ongoing basis?
  18. How are issues found in production, backlogged bugs, as well as technical debt addressed in the product lifecycle?
  19. Does the team utilize a Joint Application Design process for large features that brings together engineering and ops for a solution or do they have experientially diverse teams?
  20. Does the team have documented architectural principles that are followed?
  21. Does the team utilize an Architecture Review Board where large features are reviewed to uphold architectural principles?
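Question 14 above concerns feature flags, where a feature is turned on or off outside of a code release. As one hedged sketch (the flag store below is an in-memory stand-in; real implementations serve this configuration from a database or config service), a deterministic percentage rollout keeps each user consistently in or out of the cohort:

```python
import hashlib

# Hypothetical flag store; in practice this configuration lives outside the
# code release (database, config service, environment).
FLAGS = {"new-checkout": {"enabled": True, "rollout_percent": 25}}


def is_enabled(flag: str, user_id: str) -> bool:
    """Deterministic percentage rollout: a given user stays in or out of the cohort."""
    cfg = FLAGS.get(flag)
    if not cfg or not cfg["enabled"]:
        return False
    bucket = int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < cfg["rollout_percent"]
```

Flags like this also answer the rollback question two items up: flipping `enabled` to false reverts behavior without a deployment.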


Operations and Monitoring

  1. Describe application logging in place. Where are the logs stored? Are they centralized and searchable? Who has access to the logs?
  2. Are customer experience (business) metrics used for monitoring to determine if a problem exists?
  3. Are system level monitors and metrics used to determine where and what the problem may be?
  4. Are synthetic monitors in use against your key transaction flows?
  5. Are incidents centrally logged with appropriate details?
  6. Are problems separated from incidents and centrally logged?
  7. Is there any differentiation on the severity of issues? How do you escalate appropriate severity incidents to teams/leaders/the business?
  8. Are alerts sent in real time to the appropriate owners and subject matter experts for diagnosis and resolution?
  9. Is there a single location where all production changes (code and infrastructure) are logged and available when diagnosing a problem?
  10. Are postmortems conducted on significant problems and are actions identified and assigned and driven to completion?
  11. Do you measure time to fully close problems?
  12. Is availability measured in terms of true customer impact?
  13. Are Quality of Service meetings held where customer complaints, incidents, SLA reports, postmortem scheduling, and other necessary information are reviewed/updated regularly?
  14. Do operational look-back meetings occur either monthly or quarterly where themes are identified for architectural improvements?
  15. Does the team know how much headroom is left in the infrastructure or how much time until capacity is exceeded?
  16. How do you simulate faults in the system?
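Question 4 above asks about synthetic monitors against key transaction flows. A minimal sketch of one probe (the thresholds are illustrative, not recommendations, and the transaction itself is injected as a callable, e.g. "log in and load the account page"):

```python
import time


def run_synthetic_check(check, warn_ms=500, crit_ms=2000):
    """Time one key transaction and classify the result for alert routing.

    `check` is the probe itself; thresholds here are illustrative assumptions.
    """
    start = time.monotonic()
    try:
        check()
    except Exception as exc:
        return {"status": "critical", "error": str(exc)}
    elapsed_ms = (time.monotonic() - start) * 1000
    if elapsed_ms >= crit_ms:
        return {"status": "critical", "elapsed_ms": elapsed_ms}
    if elapsed_ms >= warn_ms:
        return {"status": "warning", "elapsed_ms": elapsed_ms}
    return {"status": "ok", "elapsed_ms": elapsed_ms}
```

The result feeds the severity and escalation questions above: a "critical" from a synthetic probe should page an owner in real time, not wait for a customer complaint.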


Organization

  1. Do architects have both engineering/development and infrastructure experience?
  2. Are teams perpetually seeded, fed, and weeded?
  3. Are teams aligned with services or features that are in pods or swim lanes?
  4. Are Agile teams able to act autonomously with a satisfactory time to market (TTM)?
  5. Are measurable business goals, OKRs and KPIs visible and commercialized with teams?
  6. Are teams composed of members with all of the skills necessary to achieve their goals?
  7. Have architects designed for graceful failures by thinking about scale cube concepts?
  8. Does leadership think about availability as a feature by setting aside investment for debt and scaling?
  9. Does the client have a satisfactory engineer to QA tester ratio?


Security

  1. Is there a set of approved and published information security policies used by the organization?
  2. Has an individual who has final responsibility for information security been designated?
  3. Are security responsibilities clearly defined across teams (i.e., distributed vs completely centralized)?
  4. Are the organization's security objectives and goals shared and aligned across the organization?
  5. Has an ongoing security awareness and training program for all employees been implemented?
  6. Is a complete inventory of all data assets maintained with owners designated?
  7. Has a data classification system been established, with data categorized in terms of legal/regulatory requirements (PCI, HIPAA, SOX, etc.), value, sensitivity, and so on?
  8. Has an access control policy been established which allows users access only to network and network services required to perform their job duties?
  9. Are the access rights of all employees and external party users to information and information processing facilities removed upon termination of their employment, contract or agreement?
  10. Is multi-factor authentication used for access to systems where the confidentiality, integrity or availability of data stored has been deemed critical or essential?
  11. Is access to source code restricted to only those who require access to perform their job duties?
  12. Are the development and testing environments separate from the production/operational environment (i.e., they don't share servers, are on separate network segments, etc.)?
  13. Are network vulnerability scans run frequently (at least quarterly) and vulnerabilities assessed and addressed based on risk to the business?
  14. Are application vulnerability scans (penetration tests) run frequently (at least annually or after significant code changes) and vulnerabilities assessed and addressed based on risk to the business?
  15. Are all data classified as sensitive, confidential or required by law/regulation (i.e., PCI, PHI, PII, etc.) encrypted in transit?
  16. Is testing of security functionality carried out during development?
  17. Are rules regarding information security included and documented in code development standards?
  18. Has an incident response plan been documented and tested at least annually?
  19. Are encryption controls being used in compliance with all relevant agreements, legislation and regulations? (i.e., data in use, in transit and at rest)
  20. Do you have a process for ranking and prioritizing security risks?
  21. Do you have an IDS (Intrusion Detection System) solution implemented? How about an IPS (Intrusion Prevention System)?
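Questions 15 and 19 above concern encryption in transit. As a hedged illustration of what "enforced" can mean on the client side (a starting point under common defaults, not a compliance guarantee), a TLS context that refuses legacy protocols and unverified certificates:

```python
import ssl


def strict_client_context() -> ssl.SSLContext:
    """A client-side TLS context that refuses legacy protocols and
    unverified certificates."""
    ctx = ssl.create_default_context()  # verifies certs and hostnames by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.0/1.1
    return ctx
```

In diligence, the matching question is whether settings like these are centrally enforced (and server-side equivalents configured) rather than left to each service's defaults.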

Why the Changes?

Technology evolves, engineers innovate, entrepreneurs create. A static checklist will not improve with age like wine. Keep your eyes out for future blog posts discussing more details on the changes we made.

Want to learn more?

Contact us, we would be happy to discuss how we have helped hundreds of clients over the years.