All AI in Tines is powered by large language models, running on our infrastructure.
There are no changes to our pre-existing terms and policies arising from these features. Because we run the models this way, we introduce no new risks related to data transport, storage, or sub-processing.
Language models
Language models (used by automatic mode, the AI action, and Workbench) run directly within our infrastructure provider, AWS. Language model authors (such as Anthropic or Meta) have no access to or visibility of the running model in AWS, and Tines does not maintain a direct relationship with these entities.
AWS does not perform any training based on prompt data or usage metadata, nor does it log any model input or output data.
Private and secure by design
Because the language model runs within Tines’ infrastructure, we achieve a very high standard of privacy and security:
✓ Stateless | ✗ No public networking
✓ Private | ✗ No training
✓ In-region | ✗ No storage
✓ Tenant-scoped | ✗ No queries or output logging
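To make the properties above concrete, the sketch below shows what a stateless, in-region, tenant-scoped invocation can look like in principle. Every name here (the endpoint, headers, and payload fields) is an illustrative assumption, not Tines' actual API or implementation.

```python
import json

# Hypothetical request builder, for illustration only: all identifiers
# below are assumptions, not Tines' real endpoints or parameters.
def build_invocation(region: str, tenant_id: str, prompt: str) -> dict:
    # In-region: the request targets an endpoint in the caller's own
    # AWS region, so prompt data never leaves that region.
    endpoint = f"https://model-runtime.{region}.example.internal/invoke"
    return {
        "endpoint": endpoint,
        # Tenant-scoped: every request is tagged with the tenant it
        # belongs to, so one tenant's data never mixes with another's.
        "headers": {"X-Tenant-Id": tenant_id},
        # Stateless: the full context travels with each request; the
        # flags reflect that nothing is stored or logged server-side.
        "body": json.dumps({"prompt": prompt, "store": False, "log": False}),
    }

request = build_invocation("eu-west-1", "tenant-42", "Summarize this alert")
print(request["endpoint"])
# → https://model-runtime.eu-west-1.example.internal/invoke
```

The point of the sketch is the shape, not the names: each call carries everything it needs, is bound to one tenant, and targets infrastructure in a single region.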