
The Agent–User Interaction (AG-UI) Protocol

AG-UI is an open, lightweight, event-based protocol that standardizes how AI agents connect to user-facing applications. Built for simplicity and flexibility, it defines how agent state, UI intents, and user interactions flow between your model/agent runtime and your frontend, so application developers can ship reliable, debuggable, user-friendly agentic features fast while focusing on application needs instead of complex ad-hoc wiring.
[Diagram: AG-UI Overview]

Building blocks (today & upcoming)

Streaming chat
Live token and event streaming for responsive multi-turn sessions, with cancel and resume.
Multimodality
Typed attachments and real-time media (files, images, audio, transcripts); supports voice, previews, annotations, provenance.
Generative UI, static
Render model output as stable, typed components under app control.
Generative UI, declarative
Small declarative language for constrained yet open-ended agent UIs; agents propose trees and constraints, the app validates and mounts.
Shared state (read-only & read-write)
Typed store shared between agent and app, with streamed event-sourced diffs and conflict resolution for snappy collaboration (see the sketch after this list).
Thinking steps
Visualize intermediate reasoning from traces and tool events; no raw chain of thought.
Frontend tool calls
Typed handoffs from agent to frontend-executed actions, and back.
Backend tool rendering & side effects
Visualize backend tool outputs in app and chat, emit side effects as first-class events.
Interrupts (human in the loop)
Pause, approve, edit, retry, or escalate mid-flow without losing state.
Sub-agents and composition
Nested delegation with scoped state, tracing, and cancellation.
Agent steering
Dynamically redirect agent execution with real-time user input to guide behavior and outcomes.
Tool output streaming
Stream tool results and logs so UIs can render long-running effects in real time.
Custom events
Open-ended data exchange for needs not covered by the protocol.
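
To make the shared-state building block concrete, here is a minimal TypeScript sketch of a typed store that folds streamed, event-sourced diffs into its state. The event names (STATE_SNAPSHOT, STATE_DELTA) and the JSON Patch-style payload shape are assumptions for illustration, not a definitive wire format.

```typescript
// Minimal sketch: applying event-sourced state diffs to a shared store.
// Event names and payload shapes here are illustrative assumptions.

type JsonPatchOp =
  | { op: "replace" | "add"; path: string; value: unknown }
  | { op: "remove"; path: string };

type SharedStateEvent =
  | { type: "STATE_SNAPSHOT"; snapshot: Record<string, unknown> }
  | { type: "STATE_DELTA"; delta: JsonPatchOp[] };

class SharedStore {
  private state: Record<string, unknown> = {};

  // Replace the whole store on a snapshot, or fold a streamed delta into it.
  apply(event: SharedStateEvent): void {
    if (event.type === "STATE_SNAPSHOT") {
      this.state = structuredClone(event.snapshot);
      return;
    }
    for (const op of event.delta) {
      const keys = op.path.split("/").filter(Boolean);
      const last = keys.pop()!;
      // Walk to the parent object, creating intermediate objects as needed.
      let parent: any = this.state;
      for (const key of keys) {
        parent[key] ??= {};
        parent = parent[key];
      }
      if (op.op === "remove") delete parent[last];
      else parent[last] = op.value;
    }
  }

  get snapshot(): Record<string, unknown> {
    return this.state;
  }
}

// Usage: the agent streams a snapshot once, then small diffs as it works.
const store = new SharedStore();
store.apply({ type: "STATE_SNAPSHOT", snapshot: { plan: { steps: [] }, status: "idle" } });
store.apply({ type: "STATE_DELTA", delta: [{ op: "replace", path: "/status", value: "running" }] });
console.log(store.snapshot); // { plan: { steps: [] }, status: "running" }
```

Because each change arrives as a small diff rather than a full payload, the app can keep its UI in sync with long-running agent work without re-fetching or re-rendering everything.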

Why Agentic Apps need AG-UI

Agentic applications break the simple request/response model that dominated frontend-backend development in the pre-agentic era: a client makes a request, the server returns data, the client renders it, and the interaction ends.

The requirements of user‑facing agents

While agents are just software, they exhibit characteristics that make them challenging to serve behind traditional REST/GraphQL APIs:
  • Agents are long‑running and stream intermediate work—often across multi‑turn sessions.
  • Agents are nondeterministic, and that nondeterminism extends to how they drive application UI.
  • Agents simultaneously mix structured and unstructured I/O (e.g. text and voice, alongside tool calls and state updates).
  • Agents need user-interactive composition: e.g. they may call sub‑agents, often recursively.
  • And more…
AG-UI is an event-based protocol that enables dynamic communication between agentic frontends and backends. It builds on top of the foundational protocols of the web (HTTP, WebSockets) as an abstraction layer designed for the agentic age—bridging the gap between traditional client-server architectures and the dynamic, stateful nature of AI agents.
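
To make the event-based model concrete, here is a minimal Node-style TypeScript sketch of a client that reads a streamed run and dispatches on event types. The endpoint URL, the newline-delimited JSON framing, and the exact event names are assumptions for illustration; the authoritative transport and event schema live in the protocol spec and SDKs.

```typescript
// Minimal sketch: consuming an AG-UI-style event stream from an agent run.
// URL, framing, and event names are illustrative assumptions, not the spec.

type AgentEvent =
  | { type: "RUN_STARTED"; runId: string }
  | { type: "TEXT_MESSAGE_CONTENT"; delta: string }
  | { type: "TOOL_CALL_START"; toolName: string }
  | { type: "STATE_DELTA"; delta: unknown[] }
  | { type: "RUN_FINISHED"; runId: string };

async function runAgent(prompt: string): Promise<void> {
  const response = await fetch("https://example.com/agent/run", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages: [{ role: "user", content: prompt }] }),
  });

  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";

  // Read the stream chunk by chunk and handle each complete JSON line as an event.
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep any partial line for the next chunk

    for (const line of lines) {
      if (!line.trim()) continue;
      const event = JSON.parse(line) as AgentEvent;
      switch (event.type) {
        case "TEXT_MESSAGE_CONTENT":
          process.stdout.write(event.delta); // stream tokens into the UI as they arrive
          break;
        case "TOOL_CALL_START":
          console.log(`\n[tool] ${event.toolName}`);
          break;
        case "RUN_FINISHED":
          console.log("\n[run complete]");
          break;
      }
    }
  }
}
```

The same loop structure works whether the events carry text deltas, tool activity, or state updates; the client simply switches on the event type and updates the UI accordingly.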

The AI protocol landscape

AG-UI has emerged as the third leg of the AI protocol landscape:
[Diagram: The AI protocol stack]
  • MCP: Connects agents to tools and context.
  • A2A: Connects agents to other agents.
  • AG-UI: Connects agents to users (through user-facing applications).
These protocols are complementary and have distinct technical goals; a single agent can and often does use all three simultaneously. Where these protocols intersect, there are opportunities for seamless handshakes that facilitate interoperability, and work on these integration points is actively ongoing. AG-UI’s mandate is to support the full set of building blocks required by modern agentic applications.

AG-UI in action

You can see demos of AG-UI features with the framework of your choice, including previews, code, and walkthrough docs, in the AG-UI Dojo.

Supported Integrations

AG-UI was born from CopilotKit’s initial partnership with LangGraph and CrewAI, and brings that popular agent-user interactivity infrastructure to the wider agentic ecosystem.

Direct to LLM

Framework | Status | AG-UI Resources
Direct to LLM | Supported | Docs

Agent Framework - Partnerships

Framework | Status | AG-UI Resources
LangGraph | Supported | Docs, Demos
Google ADK | Supported | Docs, Demos
CrewAI | Supported | Docs, Demos

Agent Framework - 1st Party

Framework | Status | AG-UI Resources
Mastra | Supported | Docs, Demos
Pydantic AI | Supported | Docs, Demos
Agno | Supported | Docs, Demos
LlamaIndex | Supported | Docs, Demos
AG2 | Supported | Docs
AWS Bedrock Agents | In Progress
AWS Strands Agents | In Progress
Microsoft Agent Framework | In Progress

Agent Framework - Community

Framework | Status | AG-UI Resources
Vercel AI SDK | Supported | Docs
OpenAI Agent SDK | In Progress
Cloudflare Agents | In Progress

Agent Interaction Protocols

Protocol | Status | AG-UI Resources | Integrations
A2A Middleware | Supported | Docs | Partnership

SDKs

SDK | Status | AG-UI Resources | Integrations
Kotlin | Supported | Getting Started | Community
Golang | Supported | Getting Started | Community
Java | Supported | Getting Started | Community
Rust | Supported | Getting Started | Community
.NET | In Progress | PR | Community
Nim | In Progress | PR | Community
Dart | In Progress | PR | Community
Flowise | In Progress | GitHub Source | Community
Langflow | In Progress | GitHub Source | Community

Clients

Client | Status | AG-UI Resources | Integrations
CopilotKit | Supported | Getting Started | 1st Party
Terminal + Agent | Supported | Getting Started | Community
React Native | Help Wanted | GitHub Source | Community

Quick Start

Choose the path that fits your needs:

Explore AG-UI

Dive deeper into AG-UI’s core concepts and capabilities:

Resources

Explore guides, tools, and integrations to help you build, optimize, and extend your AG-UI implementation. These resources cover everything from practical development workflows to debugging techniques.

Contributing

Want to contribute? Check out our Contributing Guide to learn how you can help improve AG-UI.

Support and Feedback

Here’s how to get help or provide feedback: