Security & privacy for AI in Tines

Understand why you can trust the AI-powered features in Tines.

Written by Kelli Hinteregger
Updated this week

All AI in Tines is powered by large language models, running on our infrastructure.

These features introduce no changes to our pre-existing terms and policies. Because we run the models in this way, they introduce no new risks related to data transport, storage, or sub-processing.

Language models

Language models (used by automatic mode, the AI action, and Workbench) run directly within our infrastructure provider, AWS. Language model authors (such as Anthropic or Meta) have no access to, or visibility into, the running model in AWS, and Tines does not maintain a direct relationship with these providers.

AWS does not perform any training on prompt data or usage metadata, nor does it log model inputs or outputs.
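To make the deployment model concrete, here is a minimal, purely illustrative sketch of an in-region model invocation. The article does not name the specific AWS service Tines uses, so the Amazon Bedrock client, region, and model identifier below are assumptions for illustration, not a description of Tines' implementation.

import json

import boto3

# Hypothetical region: "in-region" means the request is served from the same
# AWS region that hosts the tenant's data.
client = boto3.client("bedrock-runtime", region_name="eu-west-1")

response = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model identifier
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        # Stateless: the full prompt travels with each request; nothing is
        # retained by the provider between calls.
        "messages": [{"role": "user", "content": "Summarize this alert payload."}],
    }),
)

print(json.loads(response["body"].read())["content"][0]["text"])

In a pattern like this, the model author never receives the request: the call stays inside the cloud account and region where the model is hosted.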

Private and secure by design

Because the language model runs within Tines’ infrastructure, we achieve a very high standard of privacy and security:

✓ Stateless

✓ No public networking

✓ Private

✓ No training

✓ In-region

✓ No storage

✓ Tenant-scoped

✓ No queries or output logging
