Skip the Code: How AI Prompts Now Handle Tech Due Diligence Better Than Custom Scripts

A follow-up reflection on how generative AI has simplified tech supplier assessments.

The Evolution of AI-Powered Due Diligence

Six months ago, I published Transformative AI Solutions for Supplier Due Diligence, showcasing a Python-based solution that used Jupyter notebooks to automate supplier assessments. The approach worked well but required technical setup, API configuration, and custom code to process questionnaires.

Fast forward to today, and I’ve realized something remarkable: modern generative AI systems such as Claude, ChatGPT, and Perplexity have become sophisticated enough to handle complex tech due diligence assessments directly through conversational interfaces—no code required.

The Game-Changing Simplicity

What once required a GitHub repository, environment setup, and Python scripting can now be accomplished with a well-crafted prompt. The AI systems have evolved to:

  • Systematically analyze multiple URLs for comprehensive information gathering
  • Maintain structured output formats with proper sourcing and confidence levels
  • Handle complex questionnaires with dozens or hundreds of questions
  • Provide audit-ready documentation with source citations

The Modern Approach: A Proven Prompt Framework

Instead of writing custom code, CISO teams can now use this comprehensive prompt to conduct thorough supplier assessments:


Tech Due Diligence Assessment Prompt

Task Overview

You are conducting a security and compliance assessment for a technology supplier. Your role is to answer technical due diligence questions that will help CISO team members determine if this supplier meets organizational security and compliance requirements.

Input Requirements

Supplier Information:

  • Service/Product Name: [INSERT SERVICE NAME]
  • Primary URL: [INSERT MAIN URL]
  • Assessment Date: [INSERT DATE]

Review ALL of the following resources for comprehensive information:

  • Security Center: [INSERT URL]
  • Privacy/Legal Center: [INSERT URL]
  • Terms of Service: [INSERT URL]
  • Compliance Center: [INSERT URL]
  • Additional Resources: [LIST ANY OTHER RELEVANT URLS]

Instructions

  1. Thoroughly analyze each provided URL for relevant security, privacy, and compliance information
  2. Answer each question in the uploaded CSV file based on your analysis
  3. Be specific and detailed – avoid generic responses
  4. Cite your sources – reference the exact URLs where you found information
  5. Indicate confidence levels – be transparent about answer reliability

Required Output Format

Create a comprehensive table with these exact columns:

| Question | Answer | Confidence Level | Source URL(s) | Additional Notes |
|---|---|---|---|---|
| [Question text] | [Detailed answer] | [High/Medium/Low] | [Specific URL] | [Any caveats or clarifications] |

Answer Guidelines

  • High Confidence (90-100%): Information explicitly stated in official documentation
  • Medium Confidence (60-89%): Information reasonably inferred from available sources
  • Low Confidence (30-59%): Limited information available, answer based on industry standards or partial evidence
  • Unable to Determine (<30%): Insufficient information to provide reliable answer

Quality Standards

  • Provide factual, evidence-based responses only
  • If information is not available, state “Information not found” rather than guessing
  • Include specific page sections or document names when possible
  • Flag any contradictory information found across sources
  • Note if information appears outdated

Final Deliverable

A complete assessment table covering all questions from the CSV file, with each answer properly sourced and confidence-rated for CISO decision-making.
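Filling in the bracketed placeholders by hand works fine for one-off assessments, but it is easy to leave a `[INSERT ...]` token behind. A minimal sketch of filling the template programmatically (the supplier values and the shortened template below are made-up examples, not part of the original prompt):

```python
# Fill the prompt's [INSERT ...] placeholders from a dict and fail
# loudly if any placeholder is left unfilled.
import re

# Abbreviated stand-in for the full assessment prompt above.
PROMPT_TEMPLATE = """Supplier Information:
- Service/Product Name: [INSERT SERVICE NAME]
- Primary URL: [INSERT MAIN URL]
- Assessment Date: [INSERT DATE]
"""

def fill_prompt(template: str, values: dict) -> str:
    """Replace each [INSERT KEY] placeholder; raise if any remain."""
    filled = template
    for key, value in values.items():
        filled = filled.replace(f"[INSERT {key}]", value)
    leftovers = re.findall(r"\[INSERT [^\]]+\]", filled)
    if leftovers:
        raise ValueError(f"Unfilled placeholders: {leftovers}")
    return filled

# Example supplier values (fictional).
prompt = fill_prompt(PROMPT_TEMPLATE, {
    "SERVICE NAME": "ExampleCloud CRM",
    "MAIN URL": "https://example.com",
    "DATE": "2025-07-01",
})
print(prompt)
```

The leftover check is the point: a half-filled prompt silently degrades answer quality, so it is better to fail before the prompt ever reaches the model.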


Real-World Implementation

This prompt-based approach delivers the same comprehensive results as the original coded solution, but with significant advantages:

Immediate Usability

  • No environment setup or dependencies
  • Works across multiple AI platforms
  • Accessible to non-technical team members

Enhanced Flexibility

  • Easy to modify prompt for different assessment frameworks and resources
  • Adaptable to various supplier types and industries
  • Can incorporate new compliance requirements instantly

Superior User Experience

  • Natural language interaction
  • Real-time clarifications possible
  • Iterative refinement of results

Comparison: Then vs. Now

| Aspect | Original Jupyter Approach | Modern AI Prompt Approach |
|---|---|---|
| Setup Time | 30+ minutes | < 2 minutes |
| Technical Skills Required | Python, API keys, environment | Basic copy-paste |
| Customization | Code modifications needed | Natural language adjustments |
| Team Accessibility | Technical users only | Anyone can use |
| Maintenance | Code updates, dependency management | Prompt refinements only |

When to Still Consider Custom Code

While the prompt-based approach handles most use cases excellently, custom solutions might still be valuable for:

  • Bulk processing of hundreds of suppliers simultaneously
  • Inputs or outputs that exceed chat interface size limits
  • Integration with existing enterprise systems
  • Automated workflows requiring no human intervention
  • Specific compliance frameworks with complex scoring algorithms
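For the bulk-processing case, a thin orchestration layer around the same prompt is usually enough. A minimal sketch, where `ask_model` is a stand-in for whatever LLM API client you actually use (it is not a real library call), and the supplier list is fictional:

```python
# Run the same assessment prompt across many suppliers concurrently.
from concurrent.futures import ThreadPoolExecutor

def ask_model(prompt: str) -> str:
    # Placeholder: swap in your provider's chat-completion call here.
    return f"assessment for: {prompt[:40]}"

def assess_supplier(name: str, url: str) -> tuple:
    """Build the per-supplier prompt and collect the model's answer."""
    prompt = f"Assess supplier {name} using {url} per the due diligence prompt."
    return name, ask_model(prompt)

# Fictional example suppliers.
suppliers = [("Acme SaaS", "https://acme.example"),
             ("Beta Cloud", "https://beta.example")]

# Fan out the assessments; a thread pool suffices for I/O-bound API calls.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(lambda s: assess_supplier(*s), suppliers))

for name, answer in results.items():
    print(name, "->", answer)
```

This keeps the prompt itself as the single source of truth while the code only handles fan-out, which is exactly the split the section above argues for.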

Critical Security Consideration: Human Verification Required

Important: While this prompt-based approach offers remarkable efficiency, human verification of AI output remains absolutely essential. Generative AI systems can hallucinate and produce incorrect results, as these systems are primarily focused on maintaining conversational flow rather than guaranteeing factual accuracy.

The original Jupyter script-based approach provided some mitigation against this risk because answers were systematically mapped to specific spreadsheet columns with structured data validation. The conversational approach, while more accessible, requires heightened vigilance.
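Some of that lost structural validation can be reintroduced cheaply by checking the AI-produced table before anyone acts on it. A minimal sketch, assuming the output was exported as CSV with the column names defined in the prompt above (the sample rows are made up):

```python
# Validate each row of the AI-produced assessment table against the
# output format the prompt demands.
import csv
import io

ALLOWED_CONFIDENCE = {"High", "Medium", "Low", "Unable to Determine"}
REQUIRED_COLUMNS = {"Question", "Answer", "Confidence Level", "Source URL(s)"}

def validate_rows(csv_text: str) -> list:
    """Return a list of human-readable problems found in the AI output."""
    problems = []
    reader = csv.DictReader(io.StringIO(csv_text))
    missing = REQUIRED_COLUMNS - set(reader.fieldnames or [])
    if missing:
        return [f"missing columns: {sorted(missing)}"]
    for line_no, row in enumerate(reader, start=2):
        if row["Confidence Level"] not in ALLOWED_CONFIDENCE:
            problems.append(f"row {line_no}: bad confidence {row['Confidence Level']!r}")
        if row["Confidence Level"] != "Unable to Determine" \
                and not row["Source URL(s)"].startswith("http"):
            problems.append(f"row {line_no}: answer lacks a source URL")
    return problems

# Fictional sample output: the second row has an invalid confidence
# label and no source URL, so both checks should flag it.
sample = (
    "Question,Answer,Confidence Level,Source URL(s)\n"
    "Is data encrypted at rest?,Yes (AES-256),High,https://example.com/security\n"
    "Is SOC 2 report available?,Unclear,Maybe,\n"
)
problems = validate_rows(sample)
print(problems)
```

Checks like these do not catch hallucinated facts, only malformed output; the human review described below is still the actual safeguard.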

Key recommendation: Never trust generative AI output blindly (see also: The hidden poison in AI-generated code: How vibecoding enables slopsquatting attacks), especially for critical security decisions. Always review responses with appropriate caution, cross-reference critical findings with original sources, and apply professional judgment to validate AI-generated assessments before making supplier decisions.

The Bottom Line

The rapid advancement of generative AI has fundamentally changed how we approach technical due diligence. What required custom development six months ago can now be accomplished through thoughtful prompting and conversation.

For most CISO teams conducting supplier assessments, the barrier to AI-powered due diligence has essentially disappeared. The technology is ready—the question is whether your processes are ready to embrace this simplification.

Ready to try it yourself? Copy the prompt above, gather your supplier’s key URLs, prepare your questionnaire, and experience how conversational AI can transform your due diligence workflow.

6 Comments

    • Super interesting shift. What I’d love to see is how prompt-based due diligence stacks up on precision and repeatability. With code, we had clear control over parsing schemas, validating API policies, checking SBOMs against CVE databases, and tracing IAM role sprawl across cloud accounts. Prompts abstract that but at the cost of deterministic logic. Curious how you ensure completeness across supplier answers, especially in SOC 2-style multi-domain checks or when chaining evidence from different doc formats. Is there a fallback orchestration layer for edge cases, or are you betting entirely on prompt generalization?

      • Thank you for raising such important considerations about precision and repeatability, Tarak. You’re absolutely right to highlight these critical aspects of due diligence processes.

        I completely agree that generative AI output should never be trusted blindly, particularly for security-critical decisions. My approach always emphasizes the importance of reviewing AI responses with appropriate caution and applying professional judgment to validate any AI-generated assessments before making supplier decisions.

        When reviews surface potential red flags or require deeper validation, I recommend falling back to the more deterministic code-based approach ( https://www.lotharschulz.info/2025/01/06/transformative-ai-solutions-for-supplier-due-diligence-save-time-ensure-precision/ ) and cross-referencing findings with original source documents. This hybrid approach allows us to leverage AI’s efficiency for initial screening while maintaining the rigor and control you mentioned for critical validations.

        The goal isn’t to replace human expertise and systematic validation, but rather to augment our capabilities where appropriate.

        • Appreciate the thoughtful reply!
          I completely agree, blind trust in LLM output, especially in supplier due diligence, creates a false sense of certainty. What matters most isn’t just the generation of responses, but the traceability of how we got there. Prompted pipelines can be dangerously non-deterministic if they don’t support structured memory, versioned context inputs, or enforceable audit checkpoints.

          I really liked your hybrid approach: using AI for triage, then pivoting to code-based or source-anchored validation for flagged risks. This split lets us harness AI’s summarization and pattern detection strengths without surrendering epistemic control where regulatory or financial consequences are at stake.

          The next evolution, in my view, is integrating prompt agents with config-as-code systems, so every output has reproducible provenance and every critical decision path is reviewable across teams.

          • Tarak, excellent points about traceability and epistemic control – those are fundamental to maintaining trust in any due diligence process. Your vision of integrating prompt agents with config-as-code systems for reproducible provenance is particularly compelling and addresses the audit trail challenges we face in regulated environments.

            Building on your thoughts, I believe organizations should also leverage the data they already have to make due diligence assessments truly comparable. This is where technologies like MCP become invaluable – they provide excellent options for connecting internal data stores and sources to AI setups, enabling AI to make recommendations based on organizational context and historical patterns rather than operating in isolation. See https://www.lotharschulz.info/2025/04/09/rust-mcp-local-server-bridging-rust-logic-with-ai-frontends/

            How can I disagree with mentioning AI agents in 2025 😉 The trend was established at CES in January this year and has gained significant momentum across industries since then. What’s particularly exciting is that AI solutions can now already be used for interpreting due diligence results in the context of organizational risk tolerance and compliance requirements.

          • Thanks Lothar, really appreciate the thoughtful expansion, especially around how traceability and context from internal systems can elevate due diligence from checkbox compliance to actionable intelligence.
            Totally agree on MCP’s role. What’s technically interesting is that MCP isn’t just about standardizing context, it allows for structured, auditable memory in AI pipelines. When paired with config-as-code (e.g. tracked YAML config snapshots or Git-traced context providers), it gives us the equivalent of an infrastructure diff for decision-making: you can answer not just what the AI said, but why it said it, using which historical priors and prompts.
            Also, your mention of historical patterns is key. Most enterprise data isn’t raw signal, it’s embedded in unstructured logs, approvals, and documents that require parsing and justification before reuse. A local Rust-MCP layer acting as an orchestrator for agents (especially in secure air-gapped setups) lets us cache, gate, and version these steps in ways that align with audit and epistemic integrity.
