Audit Logging for AI Compliance
When an AI agent accesses customer data, processes a refund, or sends an email on behalf of your organization, you need to know exactly what happened. Not approximately. Not eventually. Exactly, and immediately.
Regulatory frameworks like GDPR, HIPAA, SOX, and PCI-DSS all require complete audit trails for data access and processing. But traditional audit logging was designed for human users performing predictable actions. AI agents are different—they operate at machine speed, generate actions dynamically, and can process thousands of requests per hour. Your audit logging needs to keep up.
This guide covers what to log, how to structure audit events, and how to use logs for debugging, compliance, and security analysis in AI agent deployments.
Why AI Agent Audit Logging Is Different
Human Audit Logging
Traditional audit logging captures:
- User logged in at 9:00 AM
- User viewed customer record #12345
- User updated order status to "shipped"
- User logged out at 5:00 PM
Volume: Dozens to hundreds of events per user per day.
Pattern: Predictable, sequential, human-speed.
AI Agent Audit Logging
AI agent audit logging must capture:
- Agent received request at 14:30:00.123
- Agent generated action: read_customer (customer_id: 12345)
- Policy "support-v2" evaluated: ALLOWED (3ms)
- Agent executed action: returned 1 row, 245 bytes
- Agent generated action: send_email (to: [email protected])
- Policy "support-v2" evaluated: ALLOWED (2ms)
- Agent executed action: email sent
- Agent generated action: process_refund (order: 67890, amount: $150)
- Policy "support-v2" evaluated: ALLOWED with condition (requires approval > $100)
- Approval request sent to [email protected]
- Agent session ended: 5 actions, 3 allowed, 0 blocked, 1 pending approval
Volume: Hundreds to thousands of events per agent per hour.
Pattern: Dynamic, parallel, millisecond-speed, unpredictable.
The Key Differences
| Aspect | Human Logging | AI Agent Logging |
|--------|--------------|------------------|
| Volume | Low (100s/day) | High (1000s/hour) |
| Speed | Seconds between events | Milliseconds between events |
| Predictability | High | Low (dynamic actions) |
| Context needed | Basic (user, action) | Rich (policy, risk, parameters) |
| Real-time needs | Nice to have | Essential for security |
| Correlation | Simple (user session) | Complex (multi-agent, delegation chains) |
What to Log: The Complete Audit Event
Every AI agent action should generate an audit event with these fields:
Core Fields (Required)
{
"eventId": "evt_8f3a2b1c-4d5e-6f7a-8b9c-0d1e2f3a4b5c",
"timestamp": "2026-02-08T14:30:00.123Z",
"eventType": "action_executed",
"agent": {
"id": "support-bot-01",
"type": "customer_support",
"version": "2.3.1",
"framework": "langchain"
},
"action": {
"type": "read_customer",
"resource": "customer://id:12345",
"parameters": {
"customer_id": "12345",
"fields": ["name", "email", "order_history"]
}
},
"authorization": {
"result": "allowed",
"policyId": "support-policy-v2",
"policyVersion": "2.1.0",
"evaluationTime": "3ms",
"tokenId": "tok_abc123",
"tokenExpires": "2026-02-08T22:30:00Z"
},
"context": {
"userId": "[email protected]",
"sessionId": "sess_xyz789",
"requestId": "req_def456",
"ipAddress": "10.0.1.50",
"userAgent": "ACT-Agent/2.3.1"
}
}
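As a sketch, the required fields above can be assembled and checked before emission. Note that `make_audit_event` and its required-field list are illustrative helpers, not part of the ACT SDK:

```python
import uuid
from datetime import datetime, timezone

# Top-level fields every core audit event must carry (per the schema above)
REQUIRED_TOP_LEVEL = ("eventType", "agent", "action", "authorization", "context")

def make_audit_event(event_type, agent, action, authorization, context):
    """Assemble a core audit event and reject it if required fields are empty."""
    event = {
        "eventId": f"evt_{uuid.uuid4()}",
        "timestamp": datetime.now(timezone.utc).isoformat(timespec="milliseconds"),
        "eventType": event_type,
        "agent": agent,
        "action": action,
        "authorization": authorization,
        "context": context,
    }
    missing = [f for f in REQUIRED_TOP_LEVEL if not event.get(f)]
    if missing:
        raise ValueError(f"audit event missing required fields: {missing}")
    return event
```

Building the event in one place makes it hard for a new code path to emit a partial record.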
Security Fields (For Blocked Actions)
{
"eventType": "action_blocked",
"security": {
"riskScore": 8.5,
"riskFactors": [
"external_email_domain",
"sensitive_data_detected",
"outside_business_hours"
],
"blockReason": "Email domain not in allowlist",
"recommendedAction": "Review agent configuration",
"alertSent": true,
"alertRecipients": ["[email protected]"],
"circuitBreakerStatus": "2_of_3_violations"
}
}
Execution Fields (For Completed Actions)
{
"eventType": "action_executed",
"execution": {
"duration": "45ms",
"responseSize": "2.3KB",
"rowsReturned": 1,
"dataClassification": "internal",
"sensitiveDataAccessed": false
}
}
Delegation Fields (For Multi-Agent Systems)
{
"eventType": "capability_delegated",
"delegation": {
"parentAgent": "manager-bot-01",
"parentTokenId": "tok_parent_123",
"childAgent": "worker-bot-03",
"childTokenId": "tok_child_456",
"delegatedCapabilities": ["read_customer"],
"attenuations": {
"maxRows": "reduced from 1000 to 100",
"ttl": "10 minutes"
},
"delegationDepth": 2,
"delegationChain": ["user:alice", "manager-bot-01", "worker-bot-03"]
}
}
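Delegation events also lend themselves to automated sanity checks. A sketch that validates depth and chain consistency — the `MAX_DELEGATION_DEPTH` limit and the `check_delegation` helper are assumptions for illustration, not ACT features:

```python
MAX_DELEGATION_DEPTH = 3  # assumed policy limit

def check_delegation(event):
    """Validate a capability_delegated event: depth within the limit,
    and the chain's tail matching the child agent."""
    d = event["delegation"]
    if d["delegationDepth"] > MAX_DELEGATION_DEPTH:
        return False, "delegation chain too deep"
    if d["delegationChain"][-1] != d["childAgent"]:
        return False, "chain tail does not match child agent"
    return True, "ok"
```

Running a check like this at log time catches broken delegation chains before they show up in an audit.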
Structuring Audit Events for Different Use Cases
Use Case 1: Regulatory Compliance (GDPR)
GDPR requires you to demonstrate:
- What personal data was accessed (data subject rights)
- Why it was accessed (lawful basis)
- Who accessed it (processor/controller)
- When it was accessed (timing)
- What happened to it (processing activity)
ACT audit event for GDPR:
{
"compliance": {
"framework": "GDPR",
"dataSubjectId": "customer_12345",
"personalDataCategories": ["name", "email", "order_history"],
"processingPurpose": "customer_support_inquiry",
"lawfulBasis": "legitimate_interest",
"dataProcessor": "support-bot-01",
"dataController": "company_name",
"retentionPeriod": "90_days",
"crossBorderTransfer": false
}
}
Responding to a Data Subject Access Request (DSAR):
# Find all data accessed for a specific customer
def generate_dsar_report(customer_id):
    events = act.audit.query(
        filters={
            "compliance.dataSubjectId": customer_id,
            "action.type": ["read_*", "export_*"],
            "authorization.result": "allowed"
        },
        time_range="last_12_months"
    )
    report = {
        "dataSubject": customer_id,
        "generatedAt": datetime.now().isoformat(),
        "totalAccessEvents": len(events),
        "dataCategories": extract_categories(events),
        "accessingAgents": extract_agents(events),
        "purposes": extract_purposes(events),
        "events": events
    }
    return report
Use Case 2: Healthcare Compliance (HIPAA)
HIPAA requires tracking all access to Protected Health Information (PHI):
{
"compliance": {
"framework": "HIPAA",
"phiAccessed": true,
"phiCategories": ["patient_name", "diagnosis_code"],
"minimumNecessary": true,
"accessPurpose": "treatment_support",
"coveredEntity": "hospital_name",
"businessAssociate": "ai_agent_provider",
"breakTheGlass": false,
"patientConsent": "on_file"
}
}
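The `minimumNecessary` flag above is only meaningful if something enforces it. A minimal sketch, assuming a hypothetical purpose-to-fields mapping, that trims a PHI request before access happens:

```python
# Assumed mapping of access purpose to permitted PHI categories (illustrative)
ALLOWED_PHI = {
    "treatment_support": {"patient_name", "diagnosis_code"},
    "billing": {"patient_name", "insurance_id"},
}

def apply_minimum_necessary(purpose, requested_fields):
    """Return only the PHI fields the purpose permits, plus a flag
    recording whether the request already complied."""
    allowed = ALLOWED_PHI.get(purpose, set())
    granted = [f for f in requested_fields if f in allowed]
    return granted, len(granted) == len(requested_fields)
```

The returned flag can populate `minimumNecessary` in the audit event, so violations are queryable later.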
Use Case 3: Financial Compliance (SOX)
SOX requires audit trails for financial data and controls:
{
"compliance": {
"framework": "SOX",
"financialDataAccessed": true,
"controlId": "CTRL-2026-001",
"transactionType": "refund",
"amount": 150.00,
"currency": "USD",
"approvalRequired": true,
"approvalStatus": "pending",
"approver": "[email protected]",
"segregationOfDuties": true
}
}
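The `approvalRequired` and `approvalStatus` fields can be derived from the transaction itself. A sketch assuming the $100 refund threshold from the session example earlier — `sox_compliance_fields` is an illustrative helper, not an SDK function:

```python
APPROVAL_THRESHOLD = 100.00  # assumed refund threshold from the policy example

def sox_compliance_fields(transaction_type, amount, approver):
    """Build the SOX compliance block, requiring approval above the threshold."""
    needs_approval = amount > APPROVAL_THRESHOLD
    return {
        "framework": "SOX",
        "financialDataAccessed": True,
        "transactionType": transaction_type,
        "amount": amount,
        "currency": "USD",
        "approvalRequired": needs_approval,
        "approvalStatus": "pending" if needs_approval else "not_required",
        "approver": approver if needs_approval else None,
    }
```

Deriving these fields in code, rather than trusting the agent to set them, keeps the control auditable.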
Implementing Audit Logging with ACT
Basic Integration
import os

from act_sdk import ACT, AuditLogger

act = ACT(api_key=os.getenv("ACT_API_KEY"))
audit = AuditLogger(act)

def execute_agent_action(agent_token, action, resource, params):
    # ACT automatically logs the validation
    validation = act.validate(
        token=agent_token,
        action=action,
        resource=resource,
        context=params
    )
    if validation.allowed:
        # Execute and log the result
        result = perform_action(action, resource, params)
        audit.log_execution(
            validation_id=validation.id,
            duration=result.duration,
            response_size=result.size,
            success=True
        )
        return result
    else:
        # Blocked actions are automatically logged by ACT
        # Additional custom logging if needed:
        audit.log_custom(
            event_type="agent_action_blocked",
            agent=agent_token.agent_id,
            action=action,
            reason=validation.reason,
            risk_score=validation.risk_score
        )
        raise SecurityError(validation.reason)
Advanced: Custom Audit Enrichment
class EnrichedAuditLogger:
    def __init__(self, act_client):
        self.act = act_client

    def log_with_context(self, event, enrichments):
        """Add business context to audit events."""
        enriched_event = {
            **event,
            "business_context": {
                "department": enrichments.get("department"),
                "cost_center": enrichments.get("cost_center"),
                "project": enrichments.get("project"),
                "classification": enrichments.get("data_classification"),
                "regulation": enrichments.get("applicable_regulation")
            }
        }
        self.act.audit.log(enriched_event)

    def log_sensitive_access(self, event, data_categories):
        """Special logging for sensitive data access."""
        enriched_event = {
            **event,
            "sensitive_data": {
                "categories": data_categories,
                "justification": event.get("purpose"),
                "minimumNecessary": True,
                "encrypted": True,
                "retentionPolicy": "30_days"
            }
        }
        self.act.audit.log(enriched_event)
        # Also send to compliance team
        if "pii" in data_categories or "phi" in data_categories:
            notify_compliance_team(enriched_event)
Real-Time Streaming
# Stream audit events in real-time for monitoring
def start_audit_stream():
    stream = act.audit.stream(
        filters={
            "security.riskScore": {">=": 7.0}
        }
    )
    for event in stream:
        # Process high-risk events in real-time
        if event["security"]["riskScore"] >= 9.0:
            trigger_incident_response(event)
        elif event["security"]["riskScore"] >= 7.0:
            send_alert(event)
        # Update dashboards
        update_security_dashboard(event)
        # Feed into SIEM
        forward_to_siem(event)
Using Audit Logs: Four Key Applications
1. Compliance Reporting
Generate reports that satisfy auditor requirements:
def generate_compliance_report(framework, time_range):
    """Generate a compliance report for a specific framework."""
    if framework == "GDPR":
        return {
            "report_type": "GDPR Data Processing Activity Report",
            "period": time_range,
            "total_data_access_events": act.audit.count(
                filters={"compliance.framework": "GDPR"},
                time_range=time_range
            ),
            "data_subjects_affected": act.audit.distinct_count(
                field="compliance.dataSubjectId",
                time_range=time_range
            ),
            "processing_purposes": act.audit.group_by(
                field="compliance.processingPurpose",
                time_range=time_range
            ),
            "blocked_access_attempts": act.audit.count(
                filters={
                    "authorization.result": "denied",
                    "compliance.framework": "GDPR"
                },
                time_range=time_range
            ),
            "data_categories_accessed": act.audit.distinct_values(
                field="compliance.personalDataCategories",
                time_range=time_range
            )
        }
    elif framework == "HIPAA":
        return {
            "report_type": "HIPAA Access Audit Report",
            "phi_access_events": act.audit.count(
                filters={"compliance.phiAccessed": True},
                time_range=time_range
            ),
            "minimum_necessary_violations": act.audit.count(
                filters={"compliance.minimumNecessary": False},
                time_range=time_range
            ),
            "break_the_glass_events": act.audit.count(
                filters={"compliance.breakTheGlass": True},
                time_range=time_range
            )
        }
    else:
        raise ValueError(f"Unsupported compliance framework: {framework}")
2. Security Analysis and Threat Detection
Use audit logs to detect attack patterns:
def detect_security_anomalies(time_window="1h"):
    """Analyze audit logs for security anomalies."""
    anomalies = []

    # Pattern 1: Unusual volume of blocked actions
    blocked_counts = act.audit.group_by(
        field="agent.id",
        filters={"authorization.result": "denied"},
        time_range=time_window
    )
    for agent_id, count in blocked_counts.items():
        if count > BLOCKED_THRESHOLD:
            anomalies.append({
                "type": "excessive_blocked_actions",
                "agent": agent_id,
                "count": count,
                "severity": "high"
            })

    # Pattern 2: Access outside normal patterns
    off_hours_events = act.audit.query(
        filters={
            "timestamp": {"hour_range": "22:00-06:00"},
            "authorization.result": "allowed"
        },
        time_range=time_window
    )
    if len(off_hours_events) > OFF_HOURS_THRESHOLD:
        anomalies.append({
            "type": "off_hours_activity",
            "count": len(off_hours_events),
            "severity": "medium"
        })

    # Pattern 3: Data access volume spikes
    data_volume = act.audit.aggregate(
        field="execution.responseSize",
        operation="sum",
        group_by="agent.id",
        time_range=time_window
    )
    for agent_id, volume in data_volume.items():
        avg_volume = get_historical_average(agent_id)
        if volume > avg_volume * 5:
            anomalies.append({
                "type": "data_volume_spike",
                "agent": agent_id,
                "volume": volume,
                "average": avg_volume,
                "severity": "high"
            })

    return anomalies
3. Debugging Agent Behavior
When an agent does something unexpected, audit logs tell you exactly what happened:
def debug_agent_session(session_id):
    """Reconstruct an agent's session from audit logs."""
    events = act.audit.query(
        filters={"context.sessionId": session_id},
        sort="timestamp:asc"
    )
    timeline = []
    for event in events:
        timeline.append({
            "time": event["timestamp"],
            "action": event["action"]["type"],
            "resource": event["action"]["resource"],
            "result": event["authorization"]["result"],
            "policy": event["authorization"]["policyId"],
            "risk_score": event.get("security", {}).get("riskScore", 0),
            "duration": event.get("execution", {}).get("duration"),
            "parameters": event["action"]["parameters"]
        })
    return {
        "session_id": session_id,
        "total_actions": len(timeline),
        "allowed": sum(1 for e in timeline if e["result"] == "allowed"),
        "blocked": sum(1 for e in timeline if e["result"] == "denied"),
        "timeline": timeline
    }
4. Policy Optimization
Use audit logs to identify policies that are too restrictive or too permissive:
def analyze_policy_effectiveness(policy_id, time_range="30d"):
    """Analyze how well a policy is working."""
    events = act.audit.query(
        filters={"authorization.policyId": policy_id},
        time_range=time_range
    )
    total = len(events)
    if total == 0:
        return {"policy_id": policy_id, "total_evaluations": 0}
    allowed = sum(1 for e in events if e["authorization"]["result"] == "allowed")
    blocked = total - allowed

    # Find most commonly blocked actions (might be too restrictive)
    blocked_actions = {}
    for event in events:
        if event["authorization"]["result"] == "denied":
            action = event["action"]["type"]
            blocked_actions[action] = blocked_actions.get(action, 0) + 1

    # Find high-risk allowed actions (might be too permissive)
    risky_allowed = [e for e in events
                     if e["authorization"]["result"] == "allowed"
                     and e.get("security", {}).get("riskScore", 0) > 5.0]

    return {
        "policy_id": policy_id,
        "total_evaluations": total,
        "allow_rate": f"{(allowed / total) * 100:.1f}%",
        "block_rate": f"{(blocked / total) * 100:.1f}%",
        "most_blocked_actions": sorted(
            blocked_actions.items(), key=lambda x: x[1], reverse=True
        )[:10],
        "risky_allowed_count": len(risky_allowed),
        "recommendations": generate_recommendations(
            blocked_actions, risky_allowed, total
        )
    }
Log Retention and Storage
Retention Policies by Regulation
retention_policies:
  gdpr:
    default: 90 days
    sensitive_data_access: 1 year
    security_events: 2 years
    data_breach_events: 5 years
  hipaa:
    default: 6 years
    phi_access: 6 years
    security_incidents: 6 years
  sox:
    default: 7 years
    financial_transactions: 7 years
    access_controls: 7 years
  pci_dss:
    default: 1 year
    cardholder_data_access: 1 year
    security_events: 1 year
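A retention policy like this is easy to encode as a lookup with a per-framework fallback. A sketch with the periods above transcribed to days (the helper and table names are illustrative):

```python
# Retention periods in days, transcribed from the policy above (partial)
RETENTION_DAYS = {
    ("gdpr", "default"): 90,
    ("gdpr", "security_events"): 2 * 365,
    ("hipaa", "default"): 6 * 365,
    ("sox", "default"): 7 * 365,
    ("pci_dss", "default"): 365,
}

def retention_days(framework, event_class="default"):
    """Look up retention for an event class, falling back to the
    framework's default period."""
    return RETENTION_DAYS.get(
        (framework, event_class),
        RETENTION_DAYS[(framework, "default")],
    )
```

When an event matches several frameworks, the safest choice is the longest applicable period.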
Storage Architecture
storage:
  hot_storage:
    duration: 30 days
    type: real-time queryable
    use_case: active monitoring, debugging, alerts
  warm_storage:
    duration: 1 year
    type: indexed, searchable
    use_case: compliance queries, investigations
  cold_storage:
    duration: 7 years
    type: compressed, archived
    use_case: regulatory requirements, legal holds
  immutability:
    enabled: true
    method: write-once-read-many (WORM)
    tamper_detection: cryptographic hash chain
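Tier routing follows directly from the durations above. A minimal sketch (real systems would migrate data between tiers asynchronously, not on read):

```python
def storage_tier(age_days):
    """Route an audit event to a storage tier by age, per the
    hot/warm/cold durations in the architecture above."""
    if age_days <= 30:
        return "hot"
    if age_days <= 365:
        return "warm"
    return "cold"
```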
Best Practices for AI Audit Logging
1. Log Everything, Filter Later
# ❌ BAD: Selectively logging
if action.risk_score > 5:
    audit.log(event)  # Misses important low-risk patterns

# ✓ GOOD: Log everything, use filters for analysis
audit.log(event)  # Always log

# Use queries to filter what you need
high_risk = audit.query(filters={"risk_score": {">=": 5}})
2. Make Logs Immutable
Once written, audit logs should never be modified or deleted (except by retention policies).
# ACT provides immutable audit logs by default
# Each event is cryptographically signed
# Hash chain prevents tampering
# Tamper detection alerts on any modification attempt
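The hash chain works by making each record's hash depend on the previous one, so modifying any event invalidates every later hash. A plain-Python sketch of the idea (illustrative only; it does not reproduce ACT's actual signing scheme):

```python
import hashlib
import json

GENESIS = "0" * 64  # hash seed for the first record

def chain_events(events):
    """Link events into a hash chain: each record's hash covers its
    content plus the previous record's hash."""
    chained, prev_hash = [], GENESIS
    for event in events:
        payload = json.dumps(event, sort_keys=True) + prev_hash
        prev_hash = hashlib.sha256(payload.encode()).hexdigest()
        chained.append({"event": event, "hash": prev_hash})
    return chained

def verify_chain(chained):
    """Recompute every link; any modified event breaks verification."""
    prev_hash = GENESIS
    for record in chained:
        payload = json.dumps(record["event"], sort_keys=True) + prev_hash
        if hashlib.sha256(payload.encode()).hexdigest() != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True
```

Because each hash incorporates its predecessor, an attacker who edits one event would have to recompute every subsequent hash, which signing or WORM storage prevents.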
3. Include Sufficient Context
// ❌ BAD: Minimal context
{"action": "read", "result": "allowed"}
// ✓ GOOD: Rich context
{
"action": {"type": "read_customer", "resource": "customer://12345"},
"authorization": {"result": "allowed", "policy": "support-v2"},
"context": {"user": "[email protected]", "session": "sess_123"},
"execution": {"duration": "45ms", "rows": 1, "size": "2.3KB"}
}
4. Correlate Across Agents
In multi-agent systems, use correlation IDs to trace actions across agents:
{
  "correlationId": "corr_master_abc123",
  "delegationChain": ["user:alice", "manager-bot", "worker-bot-03"],
  "rootCause": "user_request:sess_xyz789"
}
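One way to propagate a correlation ID through nested agent calls is a context variable set once at the chain root. A sketch with hypothetical helpers (`with_correlation`, `annotate` are not SDK functions):

```python
import contextvars
import uuid

# One correlation id per top-level request, inherited by sub-agent calls
correlation_id = contextvars.ContextVar("correlation_id", default=None)

def with_correlation(fn, *args, **kwargs):
    """Run fn under a correlation id, minting one at the chain root."""
    if correlation_id.get() is None:
        correlation_id.set(f"corr_{uuid.uuid4().hex[:12]}")
    return fn(*args, **kwargs)

def annotate(event):
    """Attach the current correlation id to an audit event."""
    return {**event, "correlationId": correlation_id.get()}
```

Every event logged anywhere in the call tree then carries the same `correlationId`, which makes delegation-chain queries a single filter.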
5. Automate Compliance Reporting
Don't wait for auditors to ask—generate reports automatically:
automated_reports:
  - name: "Weekly GDPR Summary"
    schedule: "every Monday 9:00 AM"
    recipients: ["[email protected]"]
    template: "gdpr_weekly"
  - name: "Monthly SOX Audit"
    schedule: "1st of month"
    recipients: ["[email protected]", "[email protected]"]
    template: "sox_monthly"
  - name: "Daily Security Digest"
    schedule: "every day 8:00 AM"
    recipients: ["[email protected]"]
    template: "security_daily"
Conclusion
Audit logging for AI agents isn't optional—it's a fundamental requirement for compliance, security, and operational excellence.
Traditional logging approaches fall short because AI agents operate at a fundamentally different scale and pattern than human users. You need:
- ✅ Rich, structured events with full context for every action
- ✅ Real-time streaming for immediate threat detection
- ✅ Compliance-aware logging that maps to regulatory frameworks
- ✅ Immutable storage with tamper detection
- ✅ Powerful query capabilities for investigations and reporting
- ✅ Automated compliance reports for auditors and regulators
ACT provides all of this out of the box. Every action validated through ACT is automatically logged with full context, stored immutably, and available for real-time analysis and compliance reporting.
Implement comprehensive audit logging for your AI agents: Get Started with ACT →