The Missing Security Layer in AI Ecosystems

AI Frameworks Β· Security Gap

The AI ecosystem is booming. Frameworks like LangChain, Autogen, Semantic Kernel, and OpenAI Assistants make it remarkably easy to build powerful AI agents that can browse the web, query databases, send emails, and execute code. But there's a critical problem hiding in plain sight: none of these frameworks include fine-grained authorization.

They give your agents the keys to the kingdomβ€”and then hope nothing goes wrong.

This isn't a minor oversight. It's a fundamental gap that puts every production AI deployment at risk. And it's the gap that ACT was built to fill.

The Current State of AI Framework Security

Let's look at what the most popular AI frameworks actually provide for security and authorization.

LangChain

LangChain is the most widely adopted framework for building LLM-powered applications. It provides:

  • βœ… Tool definitions (what functions the agent can call)
  • βœ… Memory management (conversation history)
  • βœ… Chain composition (multi-step workflows)
  • ❌ No action-level authorization
  • ❌ No resource-level permissions
  • ❌ No runtime constraint enforcement
  • ❌ No audit logging of agent actions

What this means in practice:

# LangChain tool definition - no authorization
@tool
def query_database(sql: str) -> str:
    """Execute a SQL query against the database."""
    # No validation of WHAT query is being run
    # No check on WHO is running it
    # No constraint on HOW MUCH data is returned
    return db.execute(sql)

# The agent can run ANY SQL query:
# SELECT * FROM customers              ← OK
# DELETE FROM customers                 ← Catastrophic
# SELECT * FROM admin_credentials       ← Security breach

LangChain trusts that the LLM will only generate "good" tool calls. But as we've seen with hallucinated tool calls, that trust is misplaced.
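To see the exposure concretely, here's a self-contained sketch of what the unguarded tool above does with whatever SQL the model emits. It uses an in-memory SQLite database as a stand-in for production, outside any LangChain runtime:

```python
import sqlite3

# In-memory stand-in for the production database behind the tool.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
db.execute("INSERT INTO customers VALUES (1, 'Alice'), (2, 'Bob')")

def query_database(sql):
    """Execute a SQL query against the database -- no validation at all."""
    return str(db.execute(sql).fetchall())

# A legitimate tool call:
print(query_database("SELECT * FROM customers"))  # [(1, 'Alice'), (2, 'Bob')]

# A hallucinated or injected tool call executes just as readily:
query_database("DELETE FROM customers")
print(query_database("SELECT * FROM customers"))  # []
```

Nothing between the model's output and the database asks whether the statement should run; the only difference between the safe call and the catastrophic one is the string the LLM happened to generate.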

Microsoft Autogen

Autogen enables multi-agent conversations where agents collaborate to solve problems. It provides:

  • βœ… Multi-agent orchestration
  • βœ… Code execution capabilities
  • βœ… Human-in-the-loop options
  • ❌ No permission controls between agents
  • ❌ No resource access restrictions
  • ❌ No action validation

What this means in practice:

# Autogen agent - no permission boundaries
assistant = autogen.AssistantAgent(
    name="code_executor",
    llm_config=llm_config,
    # No permission config
    # No resource restrictions
    # No action limits
)

# This agent can:
# - Execute arbitrary code
# - Access any file on the system
# - Make network requests anywhere
# - Modify or delete data

When multiple Autogen agents collaborate, any agent can potentially access resources intended for other agents. There are no trust boundaries.

OpenAI Assistants API

OpenAI's Assistants API provides a managed experience for building AI agents. It offers:

  • βœ… Function calling
  • βœ… File retrieval
  • βœ… Code interpreter
  • ❌ API key is the only security boundary
  • ❌ No per-action authorization
  • ❌ No resource-level controls

What this means in practice:

# OpenAI Assistant - API key is all-or-nothing
client = OpenAI(api_key="sk-...")

assistant = client.beta.assistants.create(
    name="Customer Support",
    tools=[
        {"type": "function", "function": send_email_schema},
        {"type": "function", "function": query_database_schema},
    ]
    # No way to restrict:
    # - Which customers the agent can access
    # - What email domains it can send to
    # - How many queries it can run per hour
    # - What data it can read vs write
)

The API key grants access. After that, the agent can do anything the functions allow.

Semantic Kernel

Microsoft's Semantic Kernel provides:

  • βœ… Plugin architecture
  • βœ… Planner for multi-step tasks
  • βœ… Memory and embeddings
  • ❌ No built-in authorization for plugin actions
  • ❌ No runtime policy enforcement

CrewAI

CrewAI focuses on multi-agent role-playing:

  • βœ… Role-based agent definitions
  • βœ… Task delegation
  • ❌ Roles are descriptive, not enforced
  • ❌ No actual permission boundaries

The Common Pattern: Security Is "Your Problem"

Every AI framework follows the same pattern:

  1. Provide powerful tools - Database access, API calls, email, code execution
  2. Trust the LLM - Assume the model will use tools responsibly
  3. Punt on security - "Implement your own authorization"

This is like giving someone a car with no brakes and saying, "Drive carefully."

What Teams Actually Do (The Scary Part)

When frameworks don't provide authorization, teams typically do one of four things:

Option 1: Nothing (Most Common)

# "It works in testing, ship it"
result = agent.run(user_input)
# No validation, no constraints, no logging

Option 2: Prompt Engineering (Easily Bypassed)

system_prompt = """
You are a helpful assistant. 
NEVER delete data. NEVER access admin endpoints.
Only help with customer support queries.
"""
# Prompt injection bypasses this trivially

Option 3: Application-Level Checks (Incomplete)

def execute_tool(tool_name, params):
    if tool_name == "delete_customer":
        raise Exception("Not allowed")
    # But what about:
    # - delete_order?
    # - update_customer with malicious data?
    # - query_database with DELETE SQL?
    # Can't anticipate every attack vector
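A quick sketch (with hypothetical tool names mirroring the check above) shows why the denylist approach leaks: blocking one tool name does nothing about a destructive payload routed through an allowed tool.

```python
def execute_tool(tool_name, params):
    """Hypothetical dispatcher mirroring the denylist check above."""
    if tool_name == "delete_customer":
        raise Exception("Not allowed")
    return "executed {} with {}".format(tool_name, params)

# The denylist catches the obvious case...
try:
    execute_tool("delete_customer", {"id": 42})
except Exception as exc:
    print(exc)  # Not allowed

# ...but a destructive payload inside an allowed tool sails through:
print(execute_tool("query_database", {"sql": "DELETE FROM customers"}))
```

The denylist grows one entry per attack you've already thought of; an allowlist-plus-constraint model inverts that default, which is the approach described later in this post.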

Option 4: API Keys (All-or-Nothing)

# Read-only API key for database
db = Database(api_key="read-only-key")
# But the agent can still:
# - Read ALL data (including sensitive records)
# - Query without rate limits
# - Access any table
# - No audit trail of what was accessed

None of these approaches provide real security.

Why This Gap Exists

AI Frameworks Optimize for Developer Experience

Framework authors want adoption. Security adds friction. So they optimize for:

  • Easy setup ("Get started in 5 minutes!")
  • Flexible tool definitions
  • Minimal configuration
  • Quick demos

Authorization is complex, context-dependent, and hard to make "easy." So it gets deprioritized.

The "Responsible AI" Illusion

Many frameworks include "responsible AI" features like:

  • Content filtering (block toxic outputs)
  • Safety classifiers (detect harmful intent)
  • Guardrails libraries (validate outputs)

But these address content safety, not action authorization.

Content safety: "Don't generate offensive text"
Action authorization: "Don't delete the production database"

These are fundamentally different problems.
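The difference is easy to demonstrate. Below, a toy keyword filter stands in for a real safety classifier (purely an illustration, not any particular guardrails library); it flags offensive text but sees nothing wrong with a catastrophic action:

```python
# Toy stand-in for a safety classifier -- NOT a real guardrails library.
TOXIC_TERMS = {"idiot", "hate"}

def content_is_safe(text):
    """Content safety: flag offensive language, and nothing else."""
    return not any(term in text.lower() for term in TOXIC_TERMS)

print(content_is_safe("You are an idiot"))      # False -- content filter fires
print(content_is_safe("DROP TABLE customers"))  # True  -- perfectly "polite"
```

A dropped table is polite, grammatical, and entirely on-topic. Only an authorization layer that reasons about actions and resources, not wording, can catch it.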

The Assumption of Controlled Environments

Framework documentation often assumes:

  • Agents run in sandboxed environments
  • Tools have limited capabilities
  • Human review catches problems

Production reality:

  • Agents access real databases with real customer data
  • Tools can send emails, process payments, modify records
  • Actions happen in millisecondsβ€”no time for human review

The Real-World Impact

Case Study 1: The Runaway Support Bot

Scenario: A company deploys a LangChain-based support bot with database access.

What happened: A prompt injection attack caused the bot to run UPDATE customers SET status = 'deleted' WHERE 1=1, marking all customers as deleted.

Root cause: No action-level authorization. The bot had full write access to the database.

With ACT: The UPDATE action would have been blocked by a read-only policy. The attempt would have been logged, and the security team alerted.

Case Study 2: The Data Leak

Scenario: An Autogen multi-agent system processes customer inquiries. One agent has email capabilities.

What happened: A hallucination caused the email agent to send a customer's full order history (including payment details) to an unrelated email address.

Root cause: No resource-level restrictions on the email tool. No validation of recipient addresses.

With ACT: The email domain would have been validated against an allowlist. Sensitive data patterns would have been detected. The action would have been blocked.

Case Study 3: The Infinite Loop

Scenario: An AI agent with API access enters a retry loop, making thousands of calls per minute.

What happened: The target API's rate limits were exceeded, causing cascading failures across multiple services. Monthly API costs spiked by $50,000.

Root cause: No rate limiting on agent actions. No circuit breaker.

With ACT: Rate limits would have capped API calls. A circuit breaker would have suspended the agent after detecting anomalous behavior.
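The rate limit and circuit breaker described here can be sketched in a few lines. This is illustrative only (ACT's actual enforcement is server-side and policy-driven): cap calls per rolling window, and suspend the agent once the cap is breached.

```python
import time

class CircuitBreaker:
    """Cap calls per rolling window; suspend the caller once breached.

    Minimal sketch only -- ACT's real enforcement is server-side.
    """

    def __init__(self, max_calls, window_seconds):
        self.max_calls = max_calls
        self.window = window_seconds
        self.calls = []          # timestamps of recent allowed calls
        self.tripped = False     # once True, the agent stays suspended

    def allow(self):
        if self.tripped:
            return False
        now = time.monotonic()
        # Keep only timestamps still inside the rolling window.
        self.calls = [t for t in self.calls if now - t < self.window]
        if len(self.calls) >= self.max_calls:
            self.tripped = True  # anomalous burst: suspend, don't just delay
            return False
        self.calls.append(now)
        return True

breaker = CircuitBreaker(max_calls=100, window_seconds=60)
allowed = sum(breaker.allow() for _ in range(1000))
print(allowed)  # 100 -- the retry storm stops at the cap instead of running away
```

The key design choice is that breaching the limit trips the breaker rather than merely delaying calls: a retry loop that hits the ceiling is a signal worth escalating, not a queue worth draining.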

How ACT Fills the Gap

ACT provides the missing authorization layer that sits between your AI framework and your resources:

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚       AI Framework       β”‚
β”‚   (LangChain/Autogen)    β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
             β”‚ Agent generates action
             β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚        ACT Layer         β”‚
β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚
β”‚ β”‚  Action Validation   β”‚ β”‚  ← Is this action allowed?
β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚
β”‚ β”‚ Resource Validation  β”‚ β”‚  ← Can it access this resource?
β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚
β”‚ β”‚   Constraint Check   β”‚ β”‚  ← Are limits respected?
β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚
β”‚ β”‚    Audit Logging     β”‚ β”‚  ← Log everything
β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
             β”‚ Only if allowed
             β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚      Your APIs/Data      β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Integration with LangChain

import os

from langchain.tools import tool
from act_sdk import ACTValidator

act = ACTValidator(api_key=os.getenv("ACT_API_KEY"))

@tool
def query_database(sql: str) -> str:
    """Execute a validated SQL query."""
    # ACT validates BEFORE execution
    validation = act.validate(
        token=agent_token,
        action="database_query",
        resource="db://customers",
        context={"sql": sql}
    )

    if not validation.allowed:
        return f"Query blocked: {validation.reason}"

    result = db.execute(sql)
    return str(result)

# Now the agent is constrained by policy:
# βœ… SELECT queries on allowed tables
# ❌ DELETE, UPDATE, DROP blocked
# ❌ Admin tables inaccessible
# βœ… Rate limited to 100 queries/hour
# βœ… Every query logged for audit
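For intuition, a naive version of the statement check such a read-only policy implies might look like the following. This is an illustration, not ACT's actual SQL analysis; a production check needs a real SQL parser, since first-keyword checks miss cases like data-modifying CTEs.

```python
# Statements a read-only policy would permit (illustrative only).
READ_ONLY_STATEMENTS = {"select"}

def sql_allowed(sql):
    """Naive first-keyword check; real enforcement needs a SQL parser."""
    parts = sql.strip().split(None, 1)
    return bool(parts) and parts[0].lower() in READ_ONLY_STATEMENTS

print(sql_allowed("SELECT * FROM customers"))         # True
print(sql_allowed("DELETE FROM customers"))           # False
print(sql_allowed("  DROP TABLE admin_credentials"))  # False
```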

Integration with Autogen

import os

import autogen
from act_sdk import ACTValidator

act = ACTValidator(api_key=os.getenv("ACT_API_KEY"))

def act_protected_execute(agent_name, action, params):
    """Wrap Autogen tool execution with ACT validation."""
    validation = act.validate(
        token=get_agent_token(agent_name),
        action=action,
        resource=extract_resource(params),
        context=params
    )

    if validation.allowed:
        return execute_action(action, params)
    else:
        log_blocked(agent_name, action, validation.reason)
        return f"Action not permitted: {validation.reason}"

# Each Autogen agent gets its own ACT token
# with specific permissions and constraints
researcher_token = act.issue_token(
    agent="researcher",
    policy={
        "actions": ["web_search", "read_document"],
        "constraints": {"maxResults": 50, "noExternalAPIs": True}
    }
)

writer_token = act.issue_token(
    agent="writer",
    policy={
        "actions": ["create_document", "edit_document"],
        "constraints": {"maxLength": 5000}
    }
)

Integration with OpenAI Assistants

import json
import os

from openai import OpenAI
from act_sdk import ACTValidator

act = ACTValidator(api_key=os.getenv("ACT_API_KEY"))

def handle_tool_call(tool_call, agent_token):
    """Validate tool calls from OpenAI Assistants."""
    validation = act.validate(
        token=agent_token,
        action=tool_call.function.name,
        resource=extract_resource(tool_call.function.arguments),
        context=json.loads(tool_call.function.arguments)
    )

    if validation.allowed:
        result = execute_function(
            tool_call.function.name,
            json.loads(tool_call.function.arguments)
        )
        return {"tool_call_id": tool_call.id, "output": str(result)}
    else:
        return {
            "tool_call_id": tool_call.id,
            "output": f"Action blocked by policy: {validation.reason}"
        }

What ACT Provides That Frameworks Don't

| Capability | AI Frameworks | ACT |
|-----------|--------------|-----|
| Action Authorization | ❌ None | βœ… Per-action validation |
| Resource Permissions | ❌ None | βœ… Fine-grained resource control |
| Rate Limiting | ❌ None | βœ… Configurable per action |
| Constraint Enforcement | ❌ None | βœ… Amount limits, time windows, conditions |
| Audit Logging | ❌ Basic at best | βœ… Complete action-level audit trail |
| Instant Revocation | ❌ Restart required | βœ… Real-time token revocation |
| Circuit Breakers | ❌ None | βœ… Automatic suspension on anomalies |
| Multi-Agent Isolation | ❌ Shared context | βœ… Per-agent tokens and boundaries |

Getting Started: Adding ACT to Your AI Stack

Step 1: Audit Your Current Setup

For each AI agent, document:

  • What tools/functions does it have access to?
  • What resources can those tools access?
  • What's the worst thing the agent could do?
  • How would you know if something went wrong?

Step 2: Define Policies

# Example: Customer support agent
agent: support-bot
policy:
  actions:
    - read_customer
    - read_order
    - create_ticket
    - send_email
  resources:
    - "customer://id:{{session.customer_id}}"
    - "order://customer:{{session.customer_id}}/*"
    - "ticket://agent:{{agent.id}}/*"
    - "email://domain:@company.com"
  constraints:
    queries:
      maxRows: 100
      rateLimit: "500/hour"
    emails:
      allowedDomains: ["@company.com"]
      maxPerHour: 50
    tickets:
      maxOpen: 10

Step 3: Integrate

Add ACT validation to your tool execution pipeline. It takes minutes, not weeks:

# Before (no security)
def execute_tool(name, params):
    return tools[name](**params)

# After (ACT-secured)
def execute_tool(name, params):
    validation = act.validate(agent_token, name, extract_resource(params), params)
    if validation.allowed:
        return tools[name](**params)
    else:
        raise SecurityError(validation.reason)

Step 4: Monitor

Review ACT audit logs to understand agent behavior:

  • Which actions are most common?
  • Are any actions being blocked frequently? (policy too restrictive?)
  • Are there suspicious patterns? (possible attacks)
  • Are rate limits being hit? (agent optimization needed)
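Assuming a simple audit-entry schema (hypothetical; ACT's real log format may differ), the review above is a few lines of aggregation:

```python
from collections import Counter

# Hypothetical audit-entry schema; ACT's real log format may differ.
audit_log = [
    {"agent": "support-bot", "action": "read_customer", "allowed": True},
    {"agent": "support-bot", "action": "read_customer", "allowed": True},
    {"agent": "support-bot", "action": "send_email",    "allowed": False},
    {"agent": "support-bot", "action": "send_email",    "allowed": False},
    {"agent": "support-bot", "action": "create_ticket", "allowed": True},
]

# Which actions are most common?
action_counts = Counter(entry["action"] for entry in audit_log)

# Which actions are being blocked? (policy too strict, or an attack?)
blocked_counts = Counter(
    entry["action"] for entry in audit_log if not entry["allowed"]
)

print(action_counts)   # every action with its frequency
print(blocked_counts)  # send_email blocked twice -- worth investigating
```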

The Cost of Inaction

Every day you run AI agents without proper authorization is a day you're exposed to:

  • Data breaches from hallucinated or injected tool calls
  • Financial losses from unauthorized transactions
  • Compliance violations from unaudited data access
  • Reputation damage from security incidents
  • Service disruptions from runaway agents

The frameworks you're using were built for capability, not security. That's not a criticismβ€”it's their design choice. But it means security is your responsibility.

ACT makes that responsibility manageable.

Conclusion

The AI framework ecosystem has a gaping security hole. LangChain, Autogen, OpenAI Assistants, Semantic Kernelβ€”none of them provide the fine-grained authorization that production AI agents need.

This isn't something you can fix with prompt engineering or API keys. You need a dedicated authorization layer that:

  • βœ… Validates every action before execution
  • βœ… Enforces resource-level permissions
  • βœ… Applies runtime constraints
  • βœ… Provides complete audit trails
  • βœ… Enables instant revocation

ACT is that layer.

Don't wait for a security incident to prove you need it. The gap is real, the risks are documented, and the solution is available today.


Add the missing security layer to your AI stack Get Started with ACT β†’
