Microsoft Copilot (including Microsoft 365 Copilot and Security Copilot, included in Microsoft 365 E5 as of late 2025) follows existing Microsoft 365 permissions: Copilot only accesses data the signed-in user can already see via Microsoft Graph. To ensure AI only sees what it is supposed to see, put Zero Trust guardrails in place before rollout. This prevents oversharing, data leakage, and compliance exposure (e.g., GDPR, HIPAA).


Follow Microsoft's Zero Trust for Copilot guidance (learn.microsoft.com/security/zero-trust/copilots) alongside Purview DSPM for AI:

1. Data Mapping (Discover & Classify Sensitive Data)

  • Use Microsoft Purview Data Security Posture Management (DSPM) for AI (in the Microsoft Purview portal) to scan SharePoint, OneDrive, Exchange, and Teams for sensitive data (PII, PHI, financials) that Copilot can reach.
    • Run the default AI risk assessment to get reports on oversharing, unlabeled data, and guest access.
    • Auto-classify with 100+ built-in sensitive information types (SITs); apply sensitivity labels (e.g., "Confidential" labels whose encryption withholds the VIEW/EXTRACT usage rights keep that content out of Copilot results).
  • Steps:
    1. Purview portal → Data security posture → AI hub → Run assessment.
    2. Review recommendations (e.g., "Fix 500 overshared files").
    3. Enable labels for SharePoint/OneDrive (learn.microsoft.com/purview/ai-microsoft-purview).
  • Tools: Purview Content Explorer and the DSPM for AI dashboard for Copilot-specific risks; a quick Graph-based spot check for overshared files is sketched below.
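Before (or alongside) the DSPM assessment, you can spot-check a single site for broad sharing links via Microsoft Graph. This is a minimal sketch, assuming an access token with Sites.Read.All in a GRAPH_TOKEN environment variable and a placeholder SITE_ID; it only inspects the site's default document library and is no substitute for the full Purview assessment.

```python
# Minimal spot check: flag items in a site's default document library that are
# shared via anonymous or organization-wide links (a common oversharing signal).
# Assumes GRAPH_TOKEN holds a token with Sites.Read.All and SITE_ID points at
# the SharePoint site to inspect (both placeholders).
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}
SITE_ID = "<your-site-id>"  # placeholder

def broad_link_scopes(item_id: str) -> list[str]:
    """Return scopes of sharing links that are broader than named users."""
    url = f"{GRAPH}/sites/{SITE_ID}/drive/items/{item_id}/permissions"
    perms = requests.get(url, headers=HEADERS, timeout=30).json().get("value", [])
    return [
        p["link"]["scope"]
        for p in perms
        if p.get("link", {}).get("scope") in ("anonymous", "organization")
    ]

resp = requests.get(f"{GRAPH}/sites/{SITE_ID}/drive/root/children", headers=HEADERS, timeout=30)
for item in resp.json().get("value", []):
    scopes = broad_link_scopes(item["id"])
    if scopes:
        print(f"Overshared: {item['name']} -> link scopes {scopes}")
```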


2. Access Cleanup (Enforce Least Privilege)

  • Audit and clean up permissions before assigning Copilot licenses; Copilot amplifies any existing oversharing.
    • SharePoint/OneDrive: remove unneeded external sharing and guest access; scope access to groups (avoid "Everyone").
    • Entra ID: run access reviews (quarterly); revoke unused permissions; use PIM for just-in-time elevation.
    • Oversharing controls: in Purview, block download/print and withhold the EXTRACT right on labeled files (keeps the content out of Copilot answers and out of copy/paste into unsanctioned AI).
  • Steps:
    1. Purview → Oversharing insights → Remediate (loose permissions are the most common source of oversharing findings).
    2. Entra ID → Identity Governance → Access reviews for high-risk users and groups (a Graph-based sketch for scheduling these reviews follows this list).
    3. Pilot Copilot with a small group (M365 admin center → Copilot settings → Targeted release).
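As a complement to the portal steps, recurring access reviews can also be created programmatically. The following is a hypothetical sketch against the Microsoft Graph access reviews API, assuming an app token with AccessReview.ReadWrite.All in GRAPH_TOKEN and a placeholder GROUP_ID; verify the accessReviewScheduleDefinition schema against current Graph docs before relying on it.

```python
# Hypothetical sketch: schedule a quarterly Entra ID access review for a
# high-risk group via the Microsoft Graph access reviews API.
# GRAPH_TOKEN (AccessReview.ReadWrite.All) and GROUP_ID are placeholders.
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}",
    "Content-Type": "application/json",
}
GROUP_ID = "<high-risk-group-id>"  # placeholder

definition = {
    "displayName": "Quarterly review: Copilot high-risk group",
    "descriptionForAdmins": "Recertify membership before/after Copilot rollout.",
    # Review the group's transitive members; group owners act as reviewers.
    "scope": {
        "@odata.type": "#microsoft.graph.accessReviewQueryScope",
        "query": f"/groups/{GROUP_ID}/transitiveMembers",
        "queryType": "MicrosoftGraph",
    },
    "reviewers": [
        {"query": f"/groups/{GROUP_ID}/owners", "queryType": "MicrosoftGraph"}
    ],
    "settings": {
        "mailNotificationsEnabled": True,
        "instanceDurationInDays": 14,
        "autoApplyDecisionsEnabled": True,
        "defaultDecisionEnabled": True,
        "defaultDecision": "Deny",  # remove access if the reviewer does not respond
        "recurrence": {
            "pattern": {"type": "absoluteMonthly", "interval": 3, "dayOfMonth": 1},
            "range": {"type": "noEnd", "startDate": "2026-01-01"},
        },
    },
}

resp = requests.post(
    f"{GRAPH}/identityGovernance/accessReviews/definitions",
    headers=HEADERS,
    json=definition,
    timeout=30,
)
resp.raise_for_status()
print("Created access review definition:", resp.json().get("id"))
```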


3. Secure Configuration (Set Policies & Labels)

  • Sensitivity labels + DLP: Copilot honors labels, so block AI access to "Highly Confidential" content.
    • Enable sensitivity labels in SharePoint, OneDrive, Exchange, and Teams.
    • DLP policies (Purview → DLP) scoped to the Microsoft 365 Copilot location: block prompts/responses containing sensitive information types (e.g., SSNs, credit card numbers).
  • Retention: retention policies also apply to Copilot-generated content (the longest retention period wins).
  • Encryption: use MIP/IRM protection (with Customer Key/BYOK as needed); Copilot does not return protected content unless the user holds the VIEW and EXTRACT usage rights.
  • Steps (M365 admin center/Purview):
    1. Compliance → Sensitivity labels → Publish (with default/auto-labeling so new files inherit labels); a label-coverage spot check is sketched after these steps.
    2. DLP → Create policy → Locations: "Microsoft 365 Copilot" → Block sensitive SITs.
    3. Endpoint DLP (Intune-managed Windows): block copying or uploading sensitive content to third-party AI sites in the browser (e.g., ChatGPT).
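To see how much unlabeled content a library still holds before publishing default labels, you can query labels per file. This is a hypothetical sketch using the Graph driveItem extractSensitivityLabels action; GRAPH_TOKEN (Files.Read.All) and DRIVE_ID are placeholders, and the action is a metered Graph API, so confirm availability and billing setup for your tenant first.

```python
# Hypothetical sketch: flag unlabeled files in a document library by calling
# the driveItem extractSensitivityLabels action (metered Graph API).
# GRAPH_TOKEN (Files.Read.All) and DRIVE_ID are placeholders.
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}
DRIVE_ID = "<document-library-drive-id>"  # placeholder

items = requests.get(
    f"{GRAPH}/drives/{DRIVE_ID}/root/children", headers=HEADERS, timeout=30
).json().get("value", [])

for item in items:
    if "file" not in item:  # skip folders
        continue
    r = requests.post(
        f"{GRAPH}/drives/{DRIVE_ID}/items/{item['id']}/extractSensitivityLabels",
        headers=HEADERS,
        timeout=30,
    )
    labels = r.json().get("labels", []) if r.ok else []
    if not labels:
        print(f"Unlabeled (candidate for default labeling): {item['name']}")
```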


4. Ongoing Monitoring (Visibility & Response)

  • Audit logs: unified logging in Purview Audit covers Copilot prompts/responses (180-day retention with Audit Standard; one year or more with E5 Audit Premium).
    • Activity explorer: filter on Copilot events; alert on anomalies.
  • DSPM for AI: real-time dashboard for AI interactions, DLP hits, and agent risks (public preview, December 2025).
  • Defender XDR / Defender for Cloud Apps: monitor shadow AI (e.g., ChatGPT sign-ins); correlate in Sentinel (SIEM).
  • Copilot analytics: M365 admin center → Usage reports; SCU consumption (Security Copilot).
  • Steps:
    1. Purview → Audit → search for Copilot interaction events (prompts/responses); a Graph-based audit query sketch follows this list.
    2. Defender for Cloud Apps → Cloud app discovery → unsanction risky AI apps (risk-score based).
    3. Sentinel → analytics rules for prompt injection attempts and DLP blocks.
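The same Copilot audit data can be pulled programmatically through the Graph Audit Log Query API. This is a hypothetical sketch, assuming GRAPH_TOKEN carries the AuditLogsQuery.Read.All permission; the "CopilotInteraction" operation name and the record field names are assumptions to verify against your own audit log and the current Graph documentation.

```python
# Hypothetical sketch: pull the last 7 days of Copilot interaction events via
# the Graph Audit Log Query API (asynchronous: create query, poll, read records).
# GRAPH_TOKEN (AuditLogsQuery.Read.All) and the operation name are assumptions.
import os
import time
from datetime import datetime, timedelta, timezone

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}",
    "Content-Type": "application/json",
}

now = datetime.now(timezone.utc)
query = {
    "displayName": "Copilot interactions - last 7 days",
    "filterStartDateTime": (now - timedelta(days=7)).strftime("%Y-%m-%dT%H:%M:%SZ"),
    "filterEndDateTime": now.strftime("%Y-%m-%dT%H:%M:%SZ"),
    "operationFilters": ["CopilotInteraction"],  # assumed operation name; verify in your tenant
}

q = requests.post(f"{GRAPH}/security/auditLog/queries", headers=HEADERS, json=query, timeout=30)
q.raise_for_status()
query_id = q.json()["id"]

status = "running"
for _ in range(60):  # poll for up to ~30 minutes
    status = requests.get(
        f"{GRAPH}/security/auditLog/queries/{query_id}", headers=HEADERS, timeout=30
    ).json().get("status")
    if status in ("succeeded", "failed"):
        break
    time.sleep(30)

if status == "succeeded":
    records = requests.get(
        f"{GRAPH}/security/auditLog/queries/{query_id}/records", headers=HEADERS, timeout=30
    ).json().get("value", [])
    for rec in records:
        print(rec.get("userPrincipalName"), rec.get("operation"), rec.get("createdDateTime"))
```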


5. Block Unsafe AI Apps (Prevent Shadow AI)

  • Intune MAM/endpoint management: block unmanaged AI apps (ChatGPT, Gemini) on mobile and desktop.
    • Android/iOS: app configuration/protection policies to block unmanaged AI apps by package or bundle ID (ChatGPT, consumer Copilot, Perplexity).
    • Windows: Settings catalog → "Turn off Windows Copilot" (WindowsAI CSP); WDAC for desktop apps.
  • Defender for Cloud Apps: session policies to block or monitor AI SaaS (e.g., claude.ai).
  • Conditional Access (Entra): block high-risk sign-ins to Copilot apps; require compliant devices.
  • Browser: Edge policies (via Intune) to block AI extensions and sites; DLP in Edge.
  • Steps (Intune):
    1. Apps → App protection policies (and device restrictions) → block AI apps by package ID (e.g., com.openai.chatgpt).
    2. Endpoint security → block exfiltration to unsafe AI via Endpoint DLP (e.g., browser paste to AI sites).
    3. Purview Endpoint DLP → "Generative AI" policy location → Block/Warn. A Conditional Access sketch via Microsoft Graph follows this list.
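Conditional Access policies can also be staged programmatically. The sketch below is hypothetical: it creates a report-only policy requiring a compliant device for the Office 365 app group (which the Microsoft 365 Copilot experiences depend on), assuming GRAPH_TOKEN holds Policy.ReadWrite.ConditionalAccess and BREAK_GLASS_ID is a placeholder for your emergency-access account. Review sign-in logs in report-only mode before enforcing.

```python
# Hypothetical sketch: create a report-only Conditional Access policy requiring
# a compliant device for the Office 365 app group.
# GRAPH_TOKEN (Policy.ReadWrite.ConditionalAccess) and BREAK_GLASS_ID are placeholders.
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}",
    "Content-Type": "application/json",
}
BREAK_GLASS_ID = "<emergency-access-account-object-id>"  # placeholder

policy = {
    "displayName": "Require compliant device for Copilot (report-only)",
    "state": "enabledForReportingButNotEnforced",  # evaluate first, enforce later
    "conditions": {
        "applications": {"includeApplications": ["Office365"]},
        "users": {"includeUsers": ["All"], "excludeUsers": [BREAK_GLASS_ID]},
        "clientAppTypes": ["all"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["compliantDevice"]},
}

resp = requests.post(
    f"{GRAPH}/identity/conditionalAccess/policies", headers=HEADERS, json=policy, timeout=30
)
resp.raise_for_status()
print("Created Conditional Access policy:", resp.json().get("id"))
```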


Phased Rollout & Best Practices (2026 Updates)

  • Pilot: run DLP in audit-only mode first; 10-20% of users; measure adoption via Copilot analytics / Viva Insights (a usage-report sketch follows this list).
  • E5 perks: Security Copilot agents (Defender, Intune, Purview) auto-embedded; SCU dashboard.
  • Red teaming: test jailbreaks and cross-prompt injection attacks (XPIA); Microsoft blocks many via classifiers, and Purview DSPM for AI surfaces risky interactions.
  • User training: safe prompting; no screenshots or retyping of labeled data into consumer AI tools.
  • Tool stack: Purview (DSPM/DLP), Entra (Conditional Access/PIM), Intune (devices), Defender XDR (threats), Sentinel (SIEM).
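Pilot adoption can be checked from the Microsoft 365 Copilot usage reports in Microsoft Graph. This is a heavily hedged sketch: at the time of writing the report appears on the Graph beta endpoint (getMicrosoft365CopilotUsageUserDetail) and needs Reports.Read.All; treat the endpoint name and response shape as assumptions to verify against current Graph documentation.

```python
# Hypothetical sketch: fetch the 7-day Microsoft 365 Copilot usage user detail
# report. Endpoint name, beta availability, and response format are assumptions.
import os
import requests

HEADERS = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}
url = (
    "https://graph.microsoft.com/beta/reports/"
    "getMicrosoft365CopilotUsageUserDetail(period='D7')"
)

resp = requests.get(url, headers=HEADERS, timeout=30)
resp.raise_for_status()
# Reports endpoints may return JSON or redirect to a CSV download depending on
# the report; print the raw payload here and parse as needed.
print(resp.text[:2000])
```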

Step             | Key tool                             | Outcome
Data mapping     | Purview DSPM for AI                  | Oversharing identified before launch
Access cleanup   | Entra reviews + Purview oversharing  | Least privilege enforced
Secure config    | Sensitivity labels + DLP             | Copilot blocked from sensitive data
Monitoring       | Audit + DSPM dashboard               | Real-time alerts and DLP hits
Block unsafe AI  | Intune + Defender for Cloud Apps     | No shadow-AI data exfiltration

Start here: M365 admin center → Copilot → Settings (pilot mode). Run the Purview DSPM for AI assessment (included with E5). Review monthly. For agents built in Copilot Studio, add runtime guardrails in the Power Platform admin center.


This aligns with Microsoft's Responsible AI principles and Secure Future Initiative; early adopters report substantial reductions in oversharing and data-leak risk.

Docs: learn.microsoft.com/copilot/microsoft-365 (Privacy/Security). Customize via Copilot Control System (Ignite 2025).


Questions? Contact us here...