Guest post by Magnus Samuelsen, Open Source Contributor and AI Engineer at Jyske Bank
Pydantic AI Meets OpenBB Workspace
Right now, OpenBB Workspace users can either use the built-in Copilot with OpenAI or build an integration from scratch. The OpenBB AI SDK gives you the protocol, but you're still left wiring up LLM calls, tool orchestration, and streaming events.
This is where openbb-pydantic-ai comes in. I built this library to handle all of that for you. It's a bridge that lets you write Pydantic AI agents that speak native OpenBB. It translates QueryRequest payloads into Pydantic AI runs and streams native OpenBB events back to the client, so you can focus on your agent's logic, not the integration plumbing.
This post will walk you through the architecture of the adapter that makes this possible, the key technical challenges I solved, and how you can get your own custom agent running in minutes.
The Challenge: Not Just a Chatbot
Integrating an AI agent into OpenBB Workspace is more complex than connecting to a chat interface. The Workspace is a dynamic environment where users and AI interact with data widgets, visualizations, and complex analysis dashboards.
An effective agent must:
"See" the Workspace: Understand the current state of the Workspace, including available widgets, data sources, and user interactions.
Access Remote Resources: Fetch data from widgets that live in the client's browser and call external tools from MCP servers, whether they run locally on your machine or are connected through the OpenBB Workspace.
Stream Rich Events: Stream reasoning steps, tool use, artifacts like tables and charts, and provide citations that the Workspace UI can render appropriately.
The Solution: A UI Event Stream Adapter
Fortunately, Pydantic AI has a flexible adapter system built for exactly this kind of job.
Its UIAdapter and UIEventStream abstract classes are designed for translating between agent events and different UI protocols. By implementing OpenBBAIAdapter and OpenBBAIEventStream, we can map OpenBB's QueryRequest payloads into Pydantic AI event streams, and vice versa.
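As a rough mental model, the bridge boils down to two subclasses. The sketch below is conceptual only: the base-class import path is an assumption, and you never write these yourself because openbb-pydantic-ai ships the concrete implementations.

```python
# Conceptual sketch only: the base classes come from Pydantic AI's UI adapter
# machinery (import path assumed here for illustration), and the concrete
# implementations ship with openbb-pydantic-ai.
from pydantic_ai.ui import UIAdapter, UIEventStream  # assumed import path


class OpenBBAIEventStream(UIEventStream):
    """Translates Pydantic AI run events into OpenBB SSE events."""


class OpenBBAIAdapter(UIAdapter):
    """Maps OpenBB QueryRequest payloads onto Pydantic AI runs and back."""
```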
As mentioned earlier, though, integration with OpenBB Workspace is more than just event translation, so there are some extra features in addition to basic request handling.
Workspace Context Injection
The first challenge is making the agent aware of the user's Workspace. The adapter extracts Workspace context (available widgets, URLs, defaults) from the QueryRequest, builds an OpenBBDeps dependency object, and injects it into the agent run as RunContext. This context updates the system prompt in the Pydantic AI message stack, giving the agent the necessary awareness to make informed decisions.
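The adapter performs this injection for you, but you can also tap into the same dependency object from your own agent code. Here is a minimal sketch of a deps-aware instructions function; the widgets attribute name is an assumption for illustration, since the real fields are defined by the OpenBBDeps model in openbb-pydantic-ai.

```python
from pydantic_ai import Agent, RunContext

from openbb_pydantic_ai import OpenBBDeps

agent = Agent("openai:gpt-4o", deps_type=OpenBBDeps)


@agent.instructions
def describe_workspace(ctx: RunContext[OpenBBDeps]) -> str:
    # `widgets` is an assumed attribute name used here for illustration only;
    # check the OpenBBDeps model in openbb-pydantic-ai for the real fields.
    widgets = getattr(ctx.deps, "widgets", None) if ctx.deps else None
    if not widgets:
        return "No Workspace widgets are currently available."
    names = ", ".join(str(w) for w in widgets)
    return f"The user's OpenBB Workspace currently exposes these widgets: {names}."
```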
The Deferred Call Handshake
Next, we need to enable the agent to access client-side widget data. For this we build an ExternalToolset on the fly based on the widgets present in the OpenBBDeps object. We expose all available widgets as function tools to the agent. In addition, we build a separate toolset for MCP tools that are passed in the tools field of the QueryRequest.
When an agent needs widget data, we can't block the server waiting for the browser. Instead, we use a deferred execution pattern:
- Agent requests data: The agent calls a tool, which raises `CallDeferred` and ends the run with a `DeferredToolRequests` payload
- Connection closes: The adapter emits a final `get_widget_data` SSE event containing the widget IDs and closes the stream. We include the `tool_call_id` in `extra_state.tool_calls`, a pass-through object the frontend will return unchanged
- Frontend executes: The browser fetches the requested widget data or MCP tool results
- New request: The frontend sends a fresh `POST /query` request with the tool results and the original `extra_state` attached
- Agent resumes: It extracts `tool_call_id` values from `extra_state.tool_calls` to match each result back to its corresponding tool call, rebuilds the proper `ToolReturnPart` messages, and injects them into a new agent run to continue streaming
This explicit handshake keeps the server stateless, with no long-lived connections or session storage required, while giving the agent real-time access to client-side data. Each widget call also generates a citation, so users can track where data comes from.
| Widget Type | Agent Can Parse | Status |
|---|---|---|
| Tables, JSON | Yes | Full support |
| Charts, Text | Yes | Full support |
| PDFs, Images | No | Tool available; agent receives raw bytes but cannot yet process |
| Custom widgets | Partial | Depends on output schema |
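To make the handshake more concrete, here is a minimal sketch of the underlying deferred-tool pattern in recent Pydantic AI versions, using a hypothetical fetch_widget_data tool. The exact import locations of CallDeferred and DeferredToolRequests may differ by version, and the adapter builds the real widget tools for you.

```python
from pydantic_ai import Agent, CallDeferred, DeferredToolRequests

# Allow runs to end in a deferred state instead of only plain text.
agent = Agent("openai:gpt-4o", output_type=[str, DeferredToolRequests])


@agent.tool_plain
def fetch_widget_data(widget_id: str) -> str:
    # Hypothetical widget tool: the data lives in the user's browser,
    # so it cannot be resolved here on the server.
    raise CallDeferred


result = agent.run_sync("Summarise the earnings widget")
if isinstance(result.output, DeferredToolRequests):
    # result.output.calls holds the pending tool calls (with their tool_call_id),
    # which the adapter forwards to the Workspace as a get_widget_data SSE event.
    for call in result.output.calls:
        print(call.tool_name, call.tool_call_id)
```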
Server-Side Visualization Tools
Not everything needs a round trip to the client. I also include two visualization tools, openbb_create_chart and openbb_create_table, that run as normal tools on the server side. When the agent uses either of them, it produces an OpenBB-ready artifact, so you can stream charts and tables correctly formatted for the Workspace UI without any extra effort.
If you instruct the agent to use placeholders (e.g. {{place_chart_here}}) in its response, the adapter will replace these with the correct artifact references automatically. If not, the artifacts are emitted at the end of the run.
By using Pydantic AI's ToolReturn object, we can send the heavy chart configuration as a side-channel artifact to the UI, while returning a simple "Chart created successfully" message to the agent. This keeps the agent's context window clean and focused on the conversation.
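Here is a rough sketch of that side-channel pattern with a hypothetical chart tool and payload; the real openbb_create_chart builds a proper OpenBB chart artifact with the correct rendering parameters, and the ToolReturn import path is taken from recent Pydantic AI versions.

```python
from pydantic_ai import Agent
from pydantic_ai.messages import ToolReturn

agent = Agent("openai:gpt-4o")


@agent.tool_plain
def create_chart(x: list[float], y: list[float]) -> ToolReturn:
    # Hypothetical chart payload; the real openbb_create_chart tool emits an
    # OpenBB-ready chart artifact with proper rendering parameters.
    chart_config = {"type": "line", "x": x, "y": y}
    return ToolReturn(
        return_value="Chart created successfully",  # all the model sees
        metadata=chart_config,  # side-channel payload the adapter can turn into a UI artifact
    )
```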
Finally, the adapter translates every remaining Pydantic AI event into OpenBB SSE events. Text chunks are streamed as MessageChunkSSE, reasoning steps (including thinking tokens) are grouped under a "Step-by-step reasoning" dropdown sent as StatusUpdateSSE, and any tool invocation that stays on the server is surfaced as StatusUpdateSSE details, so users can observe a complex, multi-tool workflow in real time. Tables and charts (whether produced by the visualization helpers or another tool) are emitted as MessageArtifactSSE events with the correct rendering parameters, and citations are buffered and emitted at the end of the run.
The end result is that you can build a fully-featured AI agent for OpenBB Workspace using Pydantic AI's high-level abstractions, while still taking advantage of the Workspace's dynamic environment and rich UI capabilities. Because the adapter handles all the translation and streaming logic, you can focus on designing your agent's behavior, tools, and prompts without worrying about the underlying protocol details.
In practice, this means you only need a single line of code to turn any Pydantic AI agent into an OpenBB Workspace agent:
```python
OpenBBAIAdapter.dispatch_request(request, agent=agent)
```

Simply call dispatch_request with the incoming QueryRequest and your Pydantic AI agent instance, and the adapter takes care of the rest. (Full example below.)
What Does This Unlock?
By bridging Pydantic AI with OpenBB Workspace, you can now take full control over the AI agent's behavior, tools, and prompts while seamlessly integrating with the user's workspace environment. This means using any model provider supported by Pydantic AI, custom tools for analysis, and optimized prompts for your specific use case.
For example:
- You could build an agent for a domain other than finance, such as sports statistics or personal health data, while still leveraging the powerful Workspace UI for visualizations and interactivity.
- You could power your agent with a custom model you run locally or on a private server.
- You could build custom agent workflows that follow a specific logic or sequence of tool calls tailored to your use case.
Whether you have a specific analysis workflow in mind or want to experiment with different model providers and prompt strategies, this adapter gets you there faster. And if you build something useful, share it with the community!
Bringing It All Together: A Complete Example
Here's a complete example of how to set up a custom Pydantic AI agent using the openbb-pydantic-ai adapter, running it inside OpenBB Workspace with OpenRouter as the model provider and a local MCP server:
```python
from anyio import BrokenResourceError
from fastapi import FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import JSONResponse
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStreamableHTTP
from pydantic_ai.models.openai import OpenAIChatModel

from openbb_pydantic_ai import OpenBBAIAdapter, OpenBBDeps

# Connect to a local MCP server for additional tools
mcp = MCPServerStreamableHTTP(
    url="http://localhost:8001/mcp/",
    max_retries=3,
)

# Use any OpenRouter model you prefer
model = OpenAIChatModel(
    provider="openrouter",
    model_name="prime-intellect/intellect-3",
)

# Define the agent with workspace-aware instructions
agent = Agent(
    model,
    instructions=(
        """
        You have access to the OpenBB Workspace and you can use the widgets to get data.
        To get data, use the widget tools.
        Other data sources and analysis tools are available via the MCP toolset.
        When you want to visualize data, use the `openbb_create_chart` tool.
        This tool supports line, bar, scatter, pie, and donut charts.
        Always use this tool when you want to display visualizations.
        For tables, simply output markdown tables and they will be rendered nicely.
        Or use the `openbb_create_table` tool for large tables.
        Use the placeholder {{place_chart_here}} in your response to indicate where charts should go.
        Highlight key insights and suggest actionable next steps when helpful.
        If the next step is obvious, you can just do it without asking the user.
        """
    ),
    deps_type=OpenBBDeps,
    retries=3,
    toolsets=[mcp],
)

app = FastAPI()

AGENT_BASE_URL = "http://localhost:8003"


# OpenBB Workspace discovers agents via this endpoint
@app.get("/agents.json")
async def agents_json():
    return JSONResponse(
        content={
            "<agent-id>": {
                "name": "My Custom Agent",
                "description": "This is my custom agent",
                "image": f"{AGENT_BASE_URL}/my-custom-agent/logo.png",
                "endpoints": {
                    "query": f"{AGENT_BASE_URL}/query",
                },
                "features": {
                    "streaming": True,
                    "widget-dashboard-select": True,  # Access priority widgets
                    "widget-dashboard-search": True,  # Access non-priority widgets
                    "mcp-tools": True,  # Use MCP tools
                },
            }
        }
    )


# Main query endpoint that handles SSE streaming
@app.post("/query")
async def query(request: Request):
    """
    OpenBB Workspace sends POST requests with a QueryRequest payload.
    The adapter handles SSE streaming automatically.
    """
    try:
        return await OpenBBAIAdapter.dispatch_request(request, agent=agent)
    except BrokenResourceError:
        # Client disconnected; we expect this sometimes
        pass


# CORS configuration for the OpenBB Workspace domain
app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://pro.openbb.co"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
```

Future Roadmap: Smarter Context Management
While the current adapter solves the connectivity problem, the next step is context efficiency.
Financial datasets can be massive, sometimes hundreds of thousands of tokens for a single widget. Feeding raw JSON into an LLM's context window is inefficient and makes hallucinations more likely.
Get Started Today
Install openbb-pydantic-ai from PyPI, plug in your model, and deploy your agent in no time. Build that niche earnings-call analyzer, ESG scoring agent, or custom quant workflow and share it with the OpenBB community.
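Assuming you save the full example above as main.py, serving it locally might look like the sketch below; once it is running, point OpenBB Workspace at the /agents.json endpoint so it can discover your agent.

```python
# Assumes the example above is saved as main.py and that openbb-pydantic-ai,
# fastapi, and uvicorn are installed.
import uvicorn

if __name__ == "__main__":
    # Port 8003 matches AGENT_BASE_URL in the example above.
    uvicorn.run("main:app", host="0.0.0.0", port=8003)
```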
If you have ideas on how to improve this or just want to discuss agents, reach me on LinkedIn or GitHub.
Magnus Samuelsen is an AI Engineer at Jyske Bank, working at the intersection of AI, finance and banking. He holds an MSc in Business Administration and Data Science from Copenhagen Business School. Magnus is also an active open-source contributor around AI in finance, having led the addition of MCP (Model Context Protocol) support to the Open Data Platform. You can find him on LinkedIn.