shared/responsible-ai-faqs-includes/copilot-data-security-privacy.md

---
author: sericks007
ms.author: sericks
ms.date: 07/15/2024
ms.topic: include
ms.contributors:
- ywanjari
- deepabansal
- traliil
- mikebc
contributors:
- edupont04
---
<!--Any changes to this article must be reviewed by RAI Champ Leads and CELA-->
Copilot features aren't available in all Azure geographies and languages. Depending on where your environment is hosted, you might need to allow data movement across geographies to use them. For more information, see the articles listed under [Data movement across geographies](#data-movement-across-geographies).
## What happens to my data when I use Copilot?
You are in control of your data. Microsoft doesn't share your data with a third party unless you've granted permission to do so. Further, we don't use your customer data to train Copilot or its AI features, unless you provide consent for us to do so. Copilot adheres to existing data permissions and policies, and its responses to you are based only on data that you personally can access. For more information about how you can control your data and how your data is handled, see the articles listed under [Copilot in Dynamics 365 apps and Power Platform](#copilot-in-dynamics-365-apps-and-power-platform).
Copilot monitors for abusive or harmful uses of the service with transient processing. We don't store or conduct eyes-on review of Copilot inputs and outputs for abuse monitoring purposes.
## How does Copilot use my data?
Each service or feature uses Copilot based on the data that you provide or set up for Copilot to process.
- Are NOT used to train or improve any third-party products or services (such as OpenAI models).
- Are NOT used to train or improve Microsoft AI models, unless your tenant admin opts in to sharing data with us. Learn more at [FAQ for optional data sharing for Copilot AI features in Dynamics 365 and Power Platform](/power-platform/faqs-copilot-data-sharing).
[Learn more about Azure OpenAI Service data privacy and security](/legal/cognitive-services/openai/data-privacy?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext). To learn more about how Microsoft protects and uses your data more generally, read our [Privacy Statement](https://go.microsoft.com/fwlink/?LinkId=521839).
## Where does my data go?
Microsoft runs on trust. We're committed to security, privacy, and compliance in everything we do, and our approach to AI is no different. Customer data, including Copilot inputs and outputs, is stored within the Microsoft Cloud trust boundary.
**Architected to protect your data at both the tenant and the environment level.** We know that data leakage is a concern for customers. Microsoft AI models are not trained on and don't learn from your tenant data or your prompts, unless your tenant admin has opted in to sharing data with us. Within your environments, you can control access through permissions that you set up. Authentication and authorization mechanisms segregate requests to the shared model among tenants. Copilot utilizes data that only you can access, using the same technology that we've been using for years to secure customer data.
## Are Copilot's responses always factual?
As with any generative AI, Copilot responses aren't guaranteed to be 100% factual. While we continue to improve responses to fact-based inquiries, you should still use your judgment and review the output before you send it to others. Copilot provides helpful drafts and summaries to help you do more, but it isn't fully automatic. You always have a chance to review the AI-generated content.
[Learn more about Azure OpenAI content filtering](/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cpython#harm-categories).
## Does Copilot block prompt injections (jailbreak attacks)?
[Jailbreak attacks](/azure/ai-services/openai/whats-new#responsible-ai) are user prompts that are designed to provoke the generative AI model into behaving in ways it was trained not to or breaking the rules it's been told to follow. Services across Dynamics 365 and Power Platform are required to protect against prompt injections. [Learn more about jailbreak attacks and how to use Azure AI Content Safety to detect them](/azure/ai-services/content-safety/concepts/jailbreak-detection).
## Does Copilot block indirect prompt injections (indirect attacks)?
Indirect attacks, also referred to as *indirect prompt attacks* or *cross-domain prompt injection attacks*, are a potential vulnerability where third parties place malicious instructions inside of documents that the generative AI system can access and process. Services across Dynamics 365 and Power Platform are required to protect against indirect prompt injections. [Learn more about indirect attacks and how to use Azure AI Content Safety to detect them](/azure/ai-services/content-safety/concepts/jailbreak-detection).
## How does Microsoft test and validate Copilot quality, including prompt injection protection and grounded responses?
Every new Copilot product and language model iteration must pass an internal responsible AI review before it can be launched. Before release, we use a process called "red teaming" (in which a team simulates an enemy attack, finding and exploiting weaknesses to help the organization improve its defenses) to assess potential risks in harmful content, jailbreak scenarios, and grounded responses. After release, we use automated testing and manual and automated evaluation tools to assess the quality of Copilot responses.
## How does Microsoft enhance the foundation model and measure improvements in grounded responses?
In the context of AI, especially AI that deals with language models like the one that Copilot is based on, *grounding* helps the AI generate responses that are more relevant and make sense in the real world. Grounding helps ensure that the AI's responses are based on reliable information and are as accurate and relevant as possible. Grounded response metrics assess how accurately the facts stated in the grounding content that's provided to the model are represented in the final response.
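A grounded-response metric of the kind described above can be illustrated, very roughly, as term recall against the grounding content. This is a deliberately simplistic sketch for intuition only; it is not the metric Microsoft uses, and production evaluation typically relies on model-based judges rather than term overlap.

```python
import re

def groundedness(response, source):
    """Fraction of the response's content terms that also appear in the
    grounding source. A crude illustrative stand-in for real grounded-response
    metrics, which assess whether stated facts are supported by the source."""
    terms = lambda text: set(re.findall(r"[a-z0-9]+", text.lower()))
    resp, src = terms(response), terms(source)
    return len(resp & src) / len(resp) if resp else 1.0

source = "The store opens at 9 AM and closes at 5 PM."
print(groundedness("The store opens at 9 AM.", source))  # every term supported
print(groundedness("The store opens at 8 AM.", source))  # "8" isn't in the source
```

A fully supported response scores 1.0; a response that introduces an unsupported fact scores lower, which is the behavior a grounded-response metric is meant to capture.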
Foundation models like GPT-4 are enhanced by Retrieval Augmented Generation (RAG) techniques. These techniques allow the models to use more information than they were trained on to understand a user's scenario. RAG works by first identifying data that is relevant for the scenario, similar to how a search engine identifies web pages that are relevant for the user's search terms. It uses multiple approaches to identify what content is relevant to the user prompt and should be used to ground the response. Approaches include searching against different types of indexes, such as inverted indexes using information retrieval techniques like term matching, or vector indexes using vector distance comparisons for semantic similarity. After it identifies the relevant documents, RAG passes the data to the model along with the current conversation, giving the model more context to better understand the information it already has and generate a response that's grounded in the real world. Finally, RAG checks the response to make sure that it's supported by the source content it provided to the model. Copilot generative AI features incorporate RAG in multiple ways. One example is chat with data, where a chatbot is grounded with the customer's own data sources.
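The retrieve-then-ground flow described above can be sketched in a few lines. This is a minimal illustration under invented data, not Copilot's actual implementation: the bag-of-words "embedding" stands in for the learned embeddings and inverted/vector indexes that real RAG systems use.

```python
import re
from collections import Counter
from math import sqrt

def embed(text):
    """Toy "embedding": a bag-of-words term-frequency vector.
    Real systems use learned dense embeddings or inverted indexes."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Vector distance comparison used to rank candidates by similarity."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(prompt, corpus, k=1):
    """Step 1: identify the data that's relevant for the user's scenario."""
    return sorted(corpus, key=lambda doc: cosine(embed(prompt), embed(doc)),
                  reverse=True)[:k]

def build_grounded_prompt(user_prompt, corpus):
    """Step 2: pass the retrieved data to the model along with the
    conversation, so the response is grounded in that content."""
    context = "\n".join(retrieve(user_prompt, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {user_prompt}"

corpus = [
    "Refund requests must be filed within 30 days of purchase.",
    "Our office is closed on public holidays.",
]
print(build_grounded_prompt("How long do I have to request a refund?", corpus))
```

The final verification step RAG performs (checking that the response is supported by the retrieved content) is omitted here; in practice it's another model-based check.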
Another method for enhancing foundational models is known as *fine-tuning*. A large dataset of query-response pairs is shown to a foundational model to augment its original training with new samples that are targeted to a specific scenario. The model can then be deployed as a separate model—one that's fine-tuned for that scenario. While grounding is about making the AI's knowledge relevant to the real world, fine-tuning is about making the AI's knowledge more specific to a particular task or domain. Microsoft uses fine-tuning in multiple ways. One example is Power Automate flow creation from natural language descriptions provided by the user.
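The query-response pairs that fine-tuning consumes are commonly prepared as chat-style JSONL, one example per line. The sketch below is illustrative only: the sample descriptions and flow outlines are invented, and the exact schema a given fine-tuning service accepts may differ.

```python
import json

# Hypothetical query-response pairs targeting one scenario:
# turning natural-language descriptions into flow outlines.
samples = [
    ("When I receive an email with an attachment, save it to OneDrive",
     "Trigger: new email with attachment -> Action: save attachment to OneDrive"),
    ("Every Monday at 9 AM, post a reminder in Teams",
     "Trigger: weekly schedule (Mon 09:00) -> Action: post message in Teams"),
]

def to_jsonl(pairs):
    """Serialize query-response pairs in the chat-style JSONL format that
    fine-tuning jobs typically consume (one training example per line)."""
    lines = []
    for prompt, completion in pairs:
        lines.append(json.dumps({
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": completion},
            ]
        }))
    return "\n".join(lines)

print(to_jsonl(samples))
```

At scale, a large dataset of such pairs augments the model's original training, producing a separate model specialized for the scenario.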
## Does Copilot meet requirements for regulatory compliance?
Microsoft Copilot is part of the Dynamics 365 and Power Platform ecosystem and meets the same requirements for regulatory compliance. For more information about the regulatory certifications of Microsoft services, go to [Service Trust Portal](https://servicetrust.microsoft.com/). Additionally, Copilot adheres to our commitment to responsible AI, which is put into action through our [Responsible AI Standard](https://www.microsoft.com/ai/responsible-ai). As regulation in AI evolves, Microsoft continues to adapt and respond to new requirements.
| Power Automate | Copilot in cloud flows on the Home page and in the designer (See [Get started with Copilot in cloud flows](/power-automate/get-started-with-copilot) for details.) | No | Contact support to run a PowerShell script. |
| Power Pages | All (See [Copilot overview in Power Pages](/power-pages/configure/ai-copilot-overview) for details.) | No | [Turn off Copilot in Power Pages](/power-pages/configure/ai-copilot-overview#turn-off-copilot-in-power-pages) |
Learn more at [FAQ for optional data sharing for Copilot AI features in Dynamics 365 and Power Platform](/power-platform/faqs-copilot-data-sharing).