Skip to content

Commit 9feb54d

Browse files
authored
Merge pull request #10249 from MicrosoftDocs/Sally-NewQuestion
Sally - New question - Update copilot-data-security-privacy.md
2 parents 157b5bf + 615f53f commit 9feb54d

File tree

1 file changed

+4
-1
lines changed

1 file changed

+4
-1
lines changed

shared/responsible-ai-faqs-includes/copilot-data-security-privacy.md

Lines changed: 4 additions & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -1,7 +1,7 @@
11
---
22
author: sericks007
33
ms.author: sericks
4-
ms.date: 06/04/2024
4+
ms.date: 06/24/2024
55
ms.topic: include
66
---
77

@@ -79,6 +79,9 @@ Hate and fairness-related harms refer to any content that uses pejorative or dis
7979

8080
[Jailbreak attacks](/azure/ai-services/openai/whats-new#responsible-ai) are user prompts that are designed to provoke the generative AI model into behaving in ways it was trained not to or breaking the rules it's been told to follow. Services across Dynamics 365 and Power Platform are required to protect against prompt injections. [Learn more about jailbreak attacks and how to use Azure AI Content Safety to detect them](/azure/ai-services/content-safety/concepts/jailbreak-detection).
8181

82+
## Does Copilot block indirect prompt injections (indirect attacks)?
83+
Indirect attacks, also referred to as _indirect prompt attacks_ or _cross-___domain prompt injection attacks_, are a potential vulnerability where third parties place malicious instructions inside of documents that the generative AI system can access and process. Services across Dynamics 365 and Power Platform are required to protect against indirect prompt injections. [Learn more about indirect attacks and how to use Azure AI Content Safety to detect them](/azure/ai-services/content-safety/concepts/jailbreak-detection).
84+
8285
## How does Microsoft test and validate Copilot quality, including prompt injection protection and grounded responses?
8386

8487
Every new Copilot product and language model iteration must pass an internal responsible AI review before it can be launched. Before release, we use a process called "red teaming" (in which a team simulates an enemy attack, finding and exploiting weaknesses to help the organization improve its defenses) to assess potential risks in harmful content, jailbreak scenarios, and grounded responses. After release, we use automated testing and manual and automated evaluation tools to assess the quality of Copilot responses.

0 commit comments

Comments
 (0)