Security and Privacy for AI features in Cotiss

Privacy, Control, and Transparency Built In

Written by Rochelle Sanderson
Updated over 3 months ago

Our Privacy & Data Commitments

At Cotiss, trust is at the heart of everything we do. As we roll out new AI functionality, we want to be clear and transparent about how your data is handled, stored, and protected. Below you’ll find guidance on our approach to privacy, security, and your control over AI features.

  • You own your data. We do not use customer data to train AI models.

  • Enterprise-grade protection. We use OpenAI's Enterprise offering. By default, OpenAI Enterprise does not use any data you send to train its models. You can learn more in OpenAI’s Enterprise Privacy commitments.

  • Data residency. All data remains within our secure infrastructure and region, ensuring compliance with applicable regulations.

Trust, Safety & Quality

We are committed to making AI a safe and reliable partner in your work.

  • Quality monitoring. We review model outputs for accuracy to reduce the risk of hallucinations and to prevent offensive or inappropriate content.

  • Feedback loops. You can flag poor or incorrect responses, which directly helps us improve the AI experience for all customers.

  • Independent security assurance. We have successfully completed a SOC 2 Type II audit, confirming that our security and confidentiality controls meet industry best practices.

  • Data security.

    • Documents are encrypted at rest (AES-256) and all data is encrypted in transit (TLS 1.2+).

    • Audit evidence is stored segregated by client and by control type.

Your Control & Access

We believe you should always stay in control of how AI is used in your organisation.

  • Opt-out available. Administrators can disable AI functionality for their organisation at any time via settings.

  • Secure data exchange. All data exchanged between Cotiss and OpenAI is encrypted at rest and in transit.

  • Clear disclosure. Cotiss will always indicate when AI is in use. No “silent AI” functions are permitted.

  • Authentication. Cotiss supports SAML/SSO and MFA for enterprise customers.

Your Responsibilities

  • Customer responsibility. Customers are responsible for reviewing and validating AI outputs before using them in business-critical, legal, financial, or safety-related contexts. Avoid submitting sensitive personal data unless it is necessary. Please notify Cotiss of any suspected misuse or vulnerabilities.

  • Disclaimer. AI outputs are provided “as-is.” Cotiss does not indemnify customers against reliance on incorrect AI outputs.

Cotiss AI Subprocessors List

  • OpenAI

Support

If you have questions, concerns, or feedback about our AI functionality, please reach out to our support team via the chat widget in the bottom right of your screen. We’re here to help.

Updates

This policy may be updated as Cotiss AI evolves. Customers will receive at least 30 days’ advance notice of any material changes. Continued use of AI features after the effective date constitutes acceptance of the updated policy.
