Ensuring Data Security with LLMs: Our Commitment to Safe AI Practices
To ensure the security and integrity of our systems while utilizing Large Language Models (LLMs), we adhere to stringent data handling and operational protocols. Below is an outline of our key practices:
All data is processed in data centers within the EU. We use the Microsoft Azure OpenAI Service, which is fully controlled and operated by Microsoft: Microsoft hosts the OpenAI models in its own Azure environment, and the service does not interact with any services operated by OpenAI.
We allow our customers to opt out of any of our AI services.
To protect our systems from common LLM vulnerabilities and risks, we adhere to the following practices:
All operations executed as a result of an interaction with an LLM are non-destructive and can be undone.
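As a minimal sketch of what such a reversible operation might look like in application code (the class and field names below are illustrative, not our actual implementation), every LLM-suggested change can capture the previous state before it is applied, so it can always be rolled back:

```python
from dataclasses import dataclass

@dataclass
class ReversibleEdit:
    """One LLM-suggested change, stored with enough state to undo it."""
    record: dict
    key: str
    new_value: str
    old_value: object = None
    had_old_value: bool = False

    def apply(self):
        # Capture the previous state before writing the suggested value.
        self.had_old_value = self.key in self.record
        self.old_value = self.record.get(self.key)
        self.record[self.key] = self.new_value

    def undo(self):
        # Restore the exact prior state, including "key was absent".
        if self.had_old_value:
            self.record[self.key] = self.old_value
        else:
            self.record.pop(self.key, None)

record = {"title": "Draft"}
edit = ReversibleEdit(record, "title", "LLM-suggested title")
edit.apply()
edit.undo()
```

After `apply()` the record carries the suggested value; after `undo()` it is byte-for-byte back in its original state.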
The data we share with LLMs is never influenced by the LLM's output: the shared data is fixed and included in the first prompt, and subsequent interactions with the LLM do not change the amount of data we share. LLMs are not directly connected to any of our services or databases.
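A minimal sketch of this pattern, assuming a generic chat-message format (the function names and message structure are illustrative): the shared data enters the conversation exactly once, in the first prompt, and follow-up turns append only the model's own reply and the next question.

```python
def build_conversation(shared_data: str, question: str) -> list:
    """Start a conversation whose only internal data is fixed up front."""
    return [
        {"role": "system", "content": f"Context (fixed): {shared_data}"},
        {"role": "user", "content": question},
    ]

def follow_up(conversation: list, model_reply: str, next_question: str) -> list:
    """Continue the conversation without adding any further internal data.

    Only the model's reply and the new user question are appended, so the
    model's output can never pull additional data into the prompt.
    """
    return conversation + [
        {"role": "assistant", "content": model_reply},
        {"role": "user", "content": next_question},
    ]
```

However many turns follow, the fixed context appears exactly once, and no code path lets the model's output expand it.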
We limit the data shared with the LLM to an absolute minimum and document it here.
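One common way to enforce such minimization, sketched here with hypothetical field names, is an explicit allow-list: only fields that are documented as shareable ever reach the prompt, and everything else is dropped by default.

```python
# Hypothetical allow-list of fields documented as shareable with the LLM.
ALLOWED_FIELDS = {"subject", "body_excerpt"}

def minimize(record: dict) -> dict:
    """Keep only explicitly allow-listed fields before sharing with an LLM.

    Unknown or newly added fields are excluded by default, so the shared
    data can only shrink, never silently grow.
    """
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
```

The allow-list approach fails closed: a new field added to the record is never shared until it is deliberately added to the documented list.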
Last updated