At a Glance

AI Vendor Risk: 5 Key Checks Before You Deploy

Before adopting an AI vendor’s solution, ensure your prompt data—your inputs and AI-generated outputs—is handled securely. Ask these 5 critical questions:

  1. Do they use your prompts to train their models?

    • Training on your data could leak sensitive info.

  2. Can you opt out of model improvement programs?

    • Ensure data isn’t used unless you explicitly allow it.

  3. Is prompt logging disabled by default?

    • Avoid default storage of sensitive inputs and outputs.

  4. What are their data retention policies?

    • Confirm how long your data is stored and if it’s anonymized.

  5. Do they hold certifications like SOC 2 or GDPR?

    • Validate their security and compliance posture.

✅ Use our AI Vendor Risk Checklist and best practices to protect data, update contracts, and test settings before going live.

When integrating an AI vendor’s solution into your organization, managing prompt data risks is critical to protect sensitive information, maintain compliance, and build trust. Prompt data—your inputs and AI-generated outputs—can expose proprietary workflows, customer data, or trade secrets if mishandled. Before deploying, here’s a comprehensive guide to assessing AI vendors, complete with a checklist, real-world examples, and best practices to ensure secure AI adoption.

👉 “For a broader framework, read our AI Risk Management Guide Overview.”

1. Does the Vendor Use Your Prompts to Train Their Model?

Why It Matters: If your interactions feed into vendor model training, sensitive information could leak into future outputs.

Case in point: A Google-led team extracted over 10,000 unique memorized training examples—including personal data—from ChatGPT using a simple prompt that asked the model to repeat a word indefinitely, at a cost of only about $200.

What to Ask:

  • “Do you use client prompts or outputs for model training?”

  • “Is customer data segmented by default (e.g., enterprise vs. consumer accounts)?”

Example Response: OpenAI’s Enterprise API does not use prompts or responses for model training unless explicitly opted in—unlike consumer accounts.

2. Can You Opt Out of Model Improvement Programs?

Why It Matters: Opt‑out ensures your data remains private and excluded from enhancement pipelines.

Example: OpenAI users can disable “Improve the model for everyone” in their data controls; business accounts often default to no‑train settings.

Vendor Check:

  • Is no‑train mode enabled by default for business users?

  • Are data-segregation protocols in place unless opt‑in is given?

3. Is Prompt Logging Disabled by Default?

Why It Matters: Automatic logging increases your exposure to breaches unless it’s opt‑in and anonymized.

Best Practice: Logging should be opt‑in only, with automated deletion (e.g., after 30 days).

Vendor Check:

  • Is logging off by default?

  • Ask for details: where logs are stored, what retention period applies, and whether logs are anonymized.

4. What Are the Vendor’s Data Retention Policies?

Why It Matters: Clear retention limits reduce risks by preventing indefinite storage.

Anthropic Example:

  • API inputs/outputs auto-delete within 30 days, unless extended by contract.

  • Organization accounts can set custom timelines.

  • Logged legal or policy-related prompts may be retained for up to 2 years.

What to Request:

  • A written retention policy.

  • Confirmation of anonymization or auto-purge mechanisms.
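During due diligence, retention windows like the 30-day auto-delete above can be tracked programmatically so purge deadlines don’t slip. A minimal sketch in Python (the policy values below are illustrative examples, not vendor commitments):

```python
from datetime import date, timedelta

# Illustrative retention windows, in days. Confirm these against the
# vendor's written policy -- they are examples, not commitments.
RETENTION_POLICY = {
    "api_io": 30,     # API inputs/outputs auto-delete after 30 days
    "flagged": 730,   # policy-flagged prompts may be kept up to 2 years
}

def purge_deadline(ingested_on: date, category: str) -> date:
    """Latest date by which data in `category` should be deleted."""
    return ingested_on + timedelta(days=RETENTION_POLICY[category])

print(purge_deadline(date(2025, 1, 1), "api_io"))  # 2025-01-31
```

Comparing these computed deadlines against the vendor’s actual purge logs is a concrete way to verify that auto-purge mechanisms work as promised.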

5. Do They Hold Recognized Certifications?

Why It Matters: Certifications like SOC 2, ISO 27001, or GDPR demonstrate compliance and accountability.

Relevant Reminder: Clearview AI was fined €20 million under the GDPR—highlighting the real risks of non-compliance.

Ask Your Vendor:

  • Can they share audit reports or official compliance certifications?

📋 AI Vendor Risk Checklist: 5 Must-Ask Questions

  1. Do you use my prompts & outputs for model training?

    • Why it matters: Ensures your data stays private and isn’t used to improve vendor models.

    • How to verify: Request a copy of the vendor’s data usage policy.

  2. Can we opt out of model improvement programs?

    • Why it matters: Validates flexibility to protect sensitive data.

    • How to verify: Confirm opt-out options in the vendor’s privacy portal.

  3. Are prompt logs stored? If so, where and for how long?

    • Why it matters: Clarifies data retention and security practices.

    • How to verify: Ask for a data retention policy and anonymization details.

  4. Is prompt logging disabled by default?

    • Why it matters: Ensures secure-by-default settings.

    • How to verify: Test in a staging environment to confirm no logs are stored.

  5. Do you hold certifications (e.g., SOC 2, GDPR)?

    • Why it matters: Verifies compliance with industry standards.

    • How to verify: Request third-party audit reports or certifications.

📥 Download the AI Vendor Risk Checklist

🧩 Case Study Highlights

  • OpenAI Enterprise API

    • Enterprise API: No training on enterprise prompts by default.

    • Consumer accounts: Users must manually opt out via Data Controls.

    • Takeaway: Confirm your account type defaults to secure settings.

  • Anthropic Claude (Business/Enterprise)

    • API inputs: auto-deleted after 30 days.

    • Enterprise agreements allow extended retention or zero‑retention options.

    • Takeaway: Signal to vendors that secure-by-default and configurable retention are high priorities.

🔑 Best Practices Before Go-Live

  1. Update Contracts: Include clear clauses that forbid prompt training/logging without explicit, documented consent. Define auto-purge timelines (e.g., 30 days).

  2. Due Diligence: Request privacy certifications and audits. Leverage platforms like OneTrust or ServiceNow for structured vendor assessments.

  3. Test in Staging: Confirm log behavior—verify no logs are kept unless enabled, and that purge is effective.

  4. Continuous Monitoring: Conduct annual vendor risk reviews, especially as regulations (like the EU AI Act) evolve in 2025.
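Step 3 above (testing in staging) can be automated as a simple smoke test: send a uniquely tagged prompt, then check whether the tag surfaces in whatever log export the vendor exposes. In the sketch below, `send_prompt` and `export_logs` are placeholders you would wire to your actual vendor integration; no real vendor API is assumed:

```python
import uuid

def verify_no_logging(send_prompt, export_logs) -> bool:
    """Return True if a tagged staging prompt does NOT appear in exported logs.

    `send_prompt(text)` and `export_logs()` are placeholder callables for
    your vendor integration and log-export mechanism.
    """
    canary = f"canary-{uuid.uuid4()}"  # unique, searchable marker
    send_prompt(f"Ignore this staging test input: {canary}")
    logs = export_logs()  # e.g. vendor log export or SIEM pull
    return canary not in logs

# Stub demo: a "vendor" that logs every prompt fails the check,
# while one that stores nothing passes it.
captured = []
assert verify_no_logging(captured.append, lambda: " ".join(captured)) is False
assert verify_no_logging(lambda text: None, lambda: "") is True
```

Running this check both before go-live and after the purge window elapses verifies not just that logging is off, but that any retained logs are actually deleted on schedule.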

FAQ: Common AI Vendor Questions

  • What is “prompt data”?
    It’s everything you send and receive via the AI tool—inputs, outputs, and metadata.

  • How do I verify GDPR compliance?
    Request Data Processing Agreements and GDPR-specific third-party audit reports.

  • Which tools help vendor reviews?
    OneTrust, ServiceNow, or manual checklists for smaller teams.

🔍 Final Takeaway

Secure AI deployment relies on asking the right vendor questions—about prompt usage, opt-out controls, logging settings, retention policies, and certifications. Use our checklist to verify practices, build protections into contracts, and test systems before production. Want to streamline the process? 

Download our free AI Vendor Risk Checklist and share it with your team. (Coming Soon)

👉 “Explore the full series in our AI Risk Management Content Hub.”

