โ† Back to articles Security

Security Dashboard for AI Is Now Generally Available: What CISOs and AI Risk Leaders Need to Know

Microsoft has officially moved the Security Dashboard for AI into General Availability, giving security teams a single pane of glass to assess, monitor, and govern AI-related risks across the enterprise. If your organization is deploying Copilot, Azure OpenAI, or any third-party AI workloads, this dashboard is no longer optional reading: it is the operational backbone your governance program has been waiting for. In this post, we break down what the dashboard does, how to get it configured, and what you should be validating from day one.

Context and Background: Why AI Security Is a Board-Level Problem Right Now

Enterprise AI adoption has moved faster than most security programs anticipated. In the span of 18 months, organizations have gone from piloting Microsoft 365 Copilot to running it alongside Azure OpenAI Service integrations, custom GPT plugins, and a sprawl of third-party AI SaaS tools. The attack surface has grown in ways that traditional CSPM and DLP tooling was never designed to address.

Simultaneously, regulatory pressure is mounting. The EU AI Act is in phased enforcement, the NIST AI RMF has been adopted as a baseline by several US federal agencies, and internal AI governance committees are popping up in almost every Fortune 1000 organization. These committees need data. They need to answer questions like:

  • Which AI applications are in use across our tenant, sanctioned or not?
  • What sensitive data has been exposed to AI workloads?
  • Are our AI models and pipelines configured securely?
  • Where are the highest-risk interactions happening and by whom?

Before this GA release, answering those questions required stitching together data from Microsoft Defender for Cloud Apps, Microsoft Purview, Defender for Cloud CSPM, and Entra ID logs, manually. The Security Dashboard for AI changes that by surfacing a consolidated risk view directly in the Microsoft Defender portal.

Problem Statement: The Gap Between AI Deployment Speed and Security Visibility

The core problem is one of asymmetry. AI tools are deployed at business unit speed; security governance catches up at committee speed. By the time a CISO gets a risk briefing on a new AI deployment, the tool has already processed months of sensitive documents, email threads, and customer data.

More specifically, the gaps tend to cluster around three areas:

  1. Discovery: Security teams don't have a reliable inventory of which AI apps are being used, including shadow AI that bypasses procurement.
  2. Data exposure: Sensitive labels, overshared SharePoint content, and unclassified data are being ingested by AI workloads with no visibility into what left the boundary.
  3. Posture: AI infrastructure (Azure OpenAI deployments, Semantic Kernel integrations, AI Search indexes) is being provisioned without security baselines being checked at scale.

The Security Dashboard for AI is Microsoft's answer to all three of these gaps in a single, policy-driven interface.

What the Security Dashboard for AI Actually Covers

AI App Discovery and Risk Scoring

Pulling from Microsoft Defender for Cloud Apps (MDA), the dashboard surfaces all AI applications detected in your environment โ€” including unsanctioned tools. Each app receives a risk score based on factors like data handling policies, security certifications, and data residency. You can pivot from the dashboard directly into MDA to apply governance actions like blocking, monitoring, or conditional access enforcement.
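To put the inventory data to work outside the dashboard, a short advanced hunting query in the Defender portal can summarize recent AI app activity by user. This is a sketch assuming the CloudAppEvents table is populated for your tenant; the app names in the filter are examples, not an exhaustive list.

// KQL: Summarize AI app activity per user over the last 7 days
CloudAppEvents
| where Timestamp > ago(7d)
| where Application has_any ("Microsoft Copilot", "Azure OpenAI", "ChatGPT")
| summarize Actions = count(), LastSeen = max(Timestamp) by AccountDisplayName, Application
| order by Actions desc

Sorting by action count gives a quick shortlist of the heaviest AI users to review against your sanctioned-app policy.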

Sensitive Data Exposure to AI

Integrated with Microsoft Purview Data Security Posture Management (DSPM), the dashboard shows which sensitivity-labeled files, SharePoint sites, and data stores have been accessed by or shared with AI workloads. This is critical for organizations where M365 Copilot is running over content that wasn't properly classified before deployment.
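If you want to spot-check this outside the dashboard, the unified audit log records Copilot interactions under the CopilotInteraction record type. The following is a hedged sketch; it assumes an active Connect-ExchangeOnline session, and the shape of each event's AuditData payload varies by workload, so verify the details in your own tenant.

# PowerShell: Sample recent Copilot interaction audit events
# Assumes Connect-ExchangeOnline has already been run
$events = Search-UnifiedAuditLog `
    -StartDate (Get-Date).AddDays(-7) `
    -EndDate (Get-Date) `
    -RecordType CopilotInteraction `
    -ResultSize 100

$events | Select-Object CreationDate, UserIds, Operations | Format-Table -AutoSize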

AI Security Posture (CSPM)

For Azure-hosted AI workloads, Defender for Cloud's AI security posture recommendations are surfaced here. This includes misconfiguration findings for Azure OpenAI Service, AI Search, and Machine Learning workspaces, mapped to frameworks like NIST AI RMF and the Microsoft Security Benchmark.

User Interaction Risk

The dashboard also aggregates signals around risky AI interactions: prompt injection attempts, DLP policy violations triggered during Copilot sessions, and high-risk user behavior patterns detected through Insider Risk Management correlations.

Step-by-Step: Configuring Your Environment to Feed the Security Dashboard for AI

The dashboard itself is available in the Microsoft Defender portal under AI Security in the left navigation. However, the quality of data it surfaces depends entirely on what you have enabled and configured upstream. Here is the configuration checklist and supporting automation.

Step 1: Enable Microsoft Defender for Cloud Apps and Connect Microsoft 365

Ensure MDA is active and your Microsoft 365 connector is enabled. Without this, AI app discovery data will not populate.

# List cloud app security profiles via the Graph API to confirm MDA data is flowing
# Requires an app registration with the SecurityEvents.Read.All application permission

$tenantId = ""
$clientId = ""
$clientSecret = ""

$body = @{
    grant_type    = "client_credentials"
    scope         = "https://graph.microsoft.com/.default"
    client_id     = $clientId
    client_secret = $clientSecret
}

$tokenResponse = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" `
    -Body $body

$headers = @{ Authorization = "Bearer $($tokenResponse.access_token)" }

# Check connected apps in MDA
# Note: cloudAppSecurityProfiles is a beta endpoint and may change
$mdaApps = Invoke-RestMethod -Method Get `
    -Uri "https://graph.microsoft.com/beta/security/cloudAppSecurityProfiles" `
    -Headers $headers

$mdaApps.value | Select-Object displayName, riskScore, appId | Format-Table -AutoSize

Step 2: Deploy Purview Sensitivity Labels and Enable DSPM for AI

Data exposure insights require that your sensitivity labels are published and that DSPM for AI is enabled in the Microsoft Purview portal under Settings > Data Security Posture Management.

# PowerShell: Verify sensitivity label publication scope includes all users
# Requires: ExchangeOnlineManagement module

Import-Module ExchangeOnlineManagement
Connect-IPPSSession -UserPrincipalName admin@yourtenant.onmicrosoft.com

# Get all label policies and their scope
$labelPolicies = Get-LabelPolicy

foreach ($policy in $labelPolicies) {
    Write-Output "Policy: $($policy.Name)"
    Write-Output "  Mode: $($policy.Mode)"
    Write-Output "  ExchangeLocation: $($policy.ExchangeLocation)"
    Write-Output "  SharePointLocation: $($policy.SharePointLocation)"
    Write-Output "  ModernGroupLocation: $($policy.ModernGroupLocation)"
    Write-Output "---"
}

Step 3: Enable Defender for Cloud and AI Security Posture Recommendations

For Azure AI workloads to appear in the dashboard's posture section, you need the Defender CSPM plan enabled on subscriptions hosting Azure OpenAI, AI Search, or Azure Machine Learning resources.

# Enable Defender CSPM on a subscription via Azure CLI
az security pricing create \
  --name CloudPosture \
  --tier standard \
  --subscription ""

# Verify AI-specific recommendations are visible
az security assessment list \
  --subscription "" \
  --query "[?contains(displayName, 'AI') || contains(displayName, 'OpenAI')].{Name:displayName, Status:status.code}" \
  --output table

Step 4: Configure Microsoft Purview Communication Compliance and IRM for AI Interaction Signals

To get risky interaction data (prompt injection, sensitive data in prompts), you need Insider Risk Management policies scoped to AI activity and Communication Compliance policies that capture Copilot interactions.

// Example: Insider Risk Management policy scoped to AI-related indicators
// IRM policies are typically configured in the Microsoft Purview portal; treat
// this JSON as a design sketch of the indicator configuration, and the
// indicator names below as illustrative rather than a documented API schema

{
  "policyName": "AI-Interaction-Risk-Monitor",
  "policyType": "GeneralDataLeaks",
  "indicators": [
    "AiAppUsage",
    "SensitiveLabelDowngrade",
    "CopilotSensitiveDataExfil",
    "CloudServiceEgressAnomaly"
  ],
  "policyScope": {
    "includeAllUsers": true,
    "excludeGroups": ["sg-executive-exemptions"]
  },
  "alertThreshold": "Medium",
  "retentionPeriod": 90
}

Step 5: Assign the AI Security Reader Role

Governance committee members and AI risk leads need access to the dashboard without full security admin rights. Use the built-in role scoping in Defender.

# Assign Security Reader role scoped to AI Security workload via Graph
# Note: Use PIM for just-in-time access in production
# Reuses the $headers auth object created in Step 1

$roleDefinitionId = ""  # Get from Entra ID role definitions
$userId = ""

$body = @{
    "@odata.type" = "#microsoft.graph.unifiedRoleAssignment"
    roleDefinitionId = $roleDefinitionId
    principalId = $userId
    directoryScopeId = "/"
} | ConvertTo-Json

Invoke-RestMethod -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments" `
    -Headers $headers `
    -Body $body `
    -ContentType "application/json"

Result and Verification: What a Healthy Dashboard Looks Like

Once the above configurations are in place and data has had 24โ€“48 hours to populate, navigate to Microsoft Defender portal > AI Security. You should expect to see:

  • An AI app inventory with risk scores for all discovered apps, including shadow AI tools picked up via MDA log ingestion
  • A data exposure widget showing the count and trend of sensitivity-labeled files accessed by AI workloads in the last 30 days
  • A posture score for Azure AI resources with actionable recommendations ranked by severity
  • A user interaction risk panel showing anomalous or policy-violating AI sessions

To validate data is flowing correctly, run the following quick check against your SIEM or Log Analytics workspace:

// KQL: Validate AI interaction signals are arriving in Microsoft Sentinel
CloudAppEvents
| where Timestamp > ago(24h)
| where Application has_any ("Microsoft Copilot", "Azure OpenAI", "ChatGPT")
| summarize EventCount = count(), UniqueUsers = dcount(AccountObjectId) by Application, ActionType
| order by EventCount desc

Key Takeaways

  • The Security Dashboard for AI is now GA and available in the Microsoft Defender portal; no additional license SKU is required beyond existing Defender and Purview entitlements (though DSPM for AI requires specific Purview licensing).
  • The dashboard is only as good as the upstream integrations: MDA, Purview DSPM, Defender CSPM, and IRM must all be configured correctly to get full signal coverage.
  • AI app discovery via MDA will surface shadow AI tools your procurement team doesn't know about; expect to find more than you're comfortable with on day one.
  • Sensitive data exposure reporting is the fastest way to demonstrate AI governance value to a board or audit committee, so use it proactively, not reactively.
  • The Azure AI posture recommendations map to NIST AI RMF and Microsoft Security Benchmark, making them directly useful for compliance reporting without manual crosswalking.
  • Scope access to the dashboard appropriately: AI governance committee members need visibility without being granted full security admin rights.
  • Use Microsoft Sentinel KQL queries alongside the dashboard to build custom workbooks and automated alerts that extend beyond what the built-in UI surfaces.
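As a starting point for that last bullet, here is a sketch of a Sentinel scheduled-rule query that extends the earlier validation check into a simple volume alert. The daily threshold of 50 events and the app names are arbitrary placeholders to tune for your environment.

// KQL: Flag users whose AI app activity spikes above a tuned daily threshold
CloudAppEvents
| where Timestamp > ago(1d)
| where Application has_any ("Microsoft Copilot", "Azure OpenAI", "ChatGPT")
| summarize EventCount = count() by AccountObjectId, Application
| where EventCount > 50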

The Security Dashboard for AI is a significant step toward making AI governance operationally tractable rather than purely a policy exercise. If you haven't walked your CISO through this yet, block 30 minutes, pull up the Defender portal, and show them the data. The conversation tends to get very focused, very quickly.

Souhaiel Morhag
Microsoft Endpoint & Modern Workplace Engineer

Souhaiel is a Microsoft Intune and endpoint management specialist with hands-on experience deploying and securing enterprise environments across Microsoft 365. He founded MSEndpoint.com to share practical, real-world guides for IT admins navigating Microsoft technologies, and built the MSEndpoint Academy at app.msendpoint.com/academy, a dedicated learning platform for professionals preparing for the MD-102 (Microsoft 365 Endpoint Administrator) certification. Through in-depth articles and AI-powered practice exams, Souhaiel helps IT teams move faster and certify with confidence.
