Connecting Open WebUI to MCP Servers — Your AI Gets Superpowers

Here is the problem: your locally-hosted LLM is brilliant at generating text, but it is blind to the world. It cannot check your GitHub repos, query your database, scrape the web for fresh information, or update your Notion pages. It is a genius sitting in an empty room.

MCP (Model Context Protocol) solves this. It is the universal wiring that plugs your AI into your tools. And as of Open WebUI v0.6.31, that wiring is native, built right in, and takes about three minutes to configure.

The numbers back this up. The public MCP server registry has grown 7.8 times year-over-year — from 1,200 servers in Q1 2025 to over 9,400 public MCP servers by April 2026. 78% of enterprise AI teams now report at least one MCP-backed agent in production. And 67% of CTOs surveyed name MCP their default agent-integration standard (Digital Applied, 2026). Every major frontier lab — Anthropic, OpenAI, Google — ships client support for it. The protocol has won.

This post walks you through connecting Open WebUI to MCP servers, highlights the best available servers you can plug in right now, and shows you how to start putting your AI to work.

What Is MCP and Why Does Your Local AI Need It?

MCP is an open specification that defines how AI clients communicate with external tools and data sources. Think of it as a universal USB-C port for AI. One standard. Any tool. Any client.

The protocol defines three primitives:

  • Tools — actions the AI can invoke (create a GitHub issue, send a Slack message, run a database query)
  • Resources — readable data sources the AI can reference (a live dashboard, a document, an API response)
  • Prompts — reusable message templates for common workflows (code review, incident response, onboarding)
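To make the tool primitive concrete, here is roughly what a tool invocation looks like on the wire. MCP messages are JSON-RPC 2.0; the `tools/call` method and envelope come from the spec, while the `create_issue` tool name and its arguments are hypothetical examples:

```python
# A sketch of the JSON-RPC 2.0 message an MCP client sends to invoke a tool.
# The envelope and "tools/call" method are defined by the MCP spec;
# "create_issue" and its arguments are made-up examples.
import json

tool_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_issue",            # which tool to invoke
        "arguments": {                     # tool-specific inputs
            "repo": "acme/widgets",
            "title": "Login page 500s on Safari",
        },
    },
}

print(json.dumps(tool_call, indent=2))
```

The server replies with a result payload the client feeds back to the model, which is what lets the LLM chain tool calls inside a conversation.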

Before MCP, every AI tool needed its own custom integration. You connected to GitHub APIs here, Notion APIs there, and Slack somewhere else. Each integration was fragile, required separate authentication, and only worked with one AI client.

MCP changes the economics. You write one MCP server and it works across every compliant AI client — Open WebUI, Claude Desktop, Cursor, Windsurf, VS Code with Copilot, and 300+ others. The median time to wire a new SaaS tool into an AI agent has dropped from 18 hours of custom code to 4.2 hours with MCP (Digital Applied, 2026). That is a 4.3x productivity multiplier on the slowest, most error-prone stage of agent development.

The implications for your setup are straightforward. Once you connect Open WebUI to an MCP server, your local LLM stops being a text generator and starts being a doer. It queries your Postgres database. It searches the web with structured results. It automates multi-step workflows — check a Jira ticket, update a doc, notify your Slack channel — all within the chat interface.

Prerequisites: What You Need Before Connecting

Before you connect an MCP server to Open WebUI, you need three things in place:

  1. Open WebUI v0.6.31 or later — MCP support was introduced in this release. Check your version at /api/version in your browser or in your admin panel.
    See my Open WebUI Docker deployment guide for how to set this up.
  2. A running model with Function Calling set to Native — Navigate to your model’s Advanced Params and set Function Calling to Native. Without this, the AI cannot invoke MCP tools.
  3. A WEBUI_SECRET_KEY environment variable — This is required for OAuth-connected tools and is a security best practice for any production deployment. Without it, OAuth tokens break every time you restart the container.
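If you need a value for `WEBUI_SECRET_KEY`, any long random string works. A quick way to generate one with Python's standard library (a minimal sketch; pass the result to your container however you normally set environment variables):

```python
# Generate a random hex string suitable for WEBUI_SECRET_KEY.
# 32 bytes of entropy -> 64 hex characters.
import secrets

secret_key = secrets.token_hex(32)
print(secret_key)  # e.g. pass as -e WEBUI_SECRET_KEY=... when starting the container
```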

Connecting Your First MCP Server (Step by Step)

Open WebUI natively supports the Streamable HTTP transport, which is the recommended method. Here is the exact process:

Step 1: Open External Tools in Admin Settings

Navigate to Admin Settings → External Tools in your Open WebUI admin panel. Click the + (Add Server) button to create a new connection.

Step 2: Configure the Server

Set the following fields:

  • Type — Select MCP (Streamable HTTP). Do not select OpenAPI — entering MCP-style configuration into an OpenAPI connection will crash the UI with an infinite loading screen.
  • Server URL — Enter the URL of your MCP server (e.g., http://localhost:3001)
  • Auth — Choose your authentication mode:
    • None — Use this for local or internal servers that do not require authentication. Important: Default to “None” unless your server strictly requires a token. Selecting “Bearer” without providing a key sends an empty Authorization header, which causes many servers to reject the connection immediately.
    • Bearer — Use this for servers that require an API token. Enter your token in the credentials field.
    • OAuth 2.1 — Dynamic Client Registration for servers that support OAuth flows.
    • OAuth 2.1 (Static) — Use this if you have pre-created a client ID and secret with the MCP server provider.
  • Function Name Filter (optional) — Enter a comma-separated pattern to restrict which tools from the server are exposed to the AI. Use this when a server has many tools and you only want specific ones available.

Click Save to establish the connection.

Step 3: Activate Tools in Chat

Once the server is saved, the tools become available in your chat. To enable them:

  1. Open a chat with your connected model
  2. Click the + button in the chat input area
  3. Navigate to Integrations → Tools
  4. Enable the MCP tools you want to use for that conversation

Your LLM can now invoke tools from the connected MCP server. It will use them in real time as you converse.

Docker Note

If your MCP server runs on the host machine and Open WebUI is running inside Docker, use http://host.docker.internal:<port> instead of localhost. This special DNS name resolves to the host machine from inside the container, where localhost would point at the container itself.
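The distinction can be wrapped in a tiny helper if you script your configuration. This is a hypothetical convenience function for illustration; the `in_docker` flag and port are assumptions, not anything Open WebUI requires:

```python
# Hypothetical helper: pick the right MCP server base URL depending on
# whether the caller runs inside a Docker container or directly on the host.
def mcp_server_url(port: int, in_docker: bool) -> str:
    # Inside Docker, "localhost" is the container itself, so use the
    # special DNS name that resolves to the host machine instead.
    host = "host.docker.internal" if in_docker else "localhost"
    return f"http://{host}:{port}"

print(mcp_server_url(3001, in_docker=True))   # http://host.docker.internal:3001
```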

What About Stdio and SSE MCP Servers?

Not all MCP servers use Streamable HTTP. Many popular ones — especially community-built servers — still use stdio transport. According to usage data, 67% of MCP servers run over local stdio, 28% use Streamable HTTP, and 5% remain on the deprecated SSE transport (Digital Applied, 2026). Open WebUI cannot connect to these directly.

The solution is mcpo — an open-source bridge from the Open WebUI team that translates stdio MCP servers into OpenAPI-compatible endpoints. You deploy mcpo as a proxy, point it at your stdio server, and then connect Open WebUI to mcpo’s HTTP endpoint the same way described above.

Install and run mcpo with:

Using uvx (recommended):

uvx mcpo --port 8000 -- your_mcp_server_command

Or using pip:

pip install mcpo
mcpo --port 8000 -- your_mcp_server_command

What this command does:

  • --port 8000 – Tells mcpo to listen on port 8000. This is the port you point Open WebUI’s External Tools connection at.
  • -- your_mcp_server_command – Everything after the double-dash is passed through to the MCP server. For example, uvx mcp-server-time --local-timezone=America/New_York.

Repository: open-webui/mcpo on GitHub. It has over 4,200 stars and is actively maintained by the Open WebUI team.
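Once mcpo is running, each proxied MCP tool is exposed as its own HTTP endpoint, with interactive OpenAPI docs served at /docs. As a hedged sketch of calling one from Python: the `get_current_time` tool name and `timezone` parameter below are assumptions based on the mcp-server-time example above; check /docs on your own instance for the real paths and request schemas:

```python
# Sketch: building a request to a tool that mcpo has proxied as an
# OpenAPI endpoint. The tool name and payload are assumptions -- the
# /docs page on your mcpo instance shows the actual names and schemas.
import json
import urllib.request

def build_tool_request(base_url: str, tool: str, payload: dict) -> urllib.request.Request:
    """Construct a POST request for a single mcpo-proxied tool."""
    return urllib.request.Request(
        f"{base_url}/{tool}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending it requires a running mcpo instance on port 8000:
# with urllib.request.urlopen(build_tool_request(
#         "http://localhost:8000", "get_current_time",
#         {"timezone": "America/New_York"})) as resp:
#     print(json.load(resp))
```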

Available MCP Servers You Can Plug In Right Now

With over 9,400 public MCP servers available, choosing where to start can be overwhelming. Here are the best ones to begin with, organized by category. All of these connect to Open WebUI natively via Streamable HTTP (or through mcpo for stdio-based servers).

Development Tools

GitHub MCP — Interact with repositories, issues, pull requests, and code search. This is the first-party server from GitHub with over 240,000 weekly downloads. Your AI can create issues, review PRs, search codebases, and manage milestones — all from within your chat.

Linear MCP — Read, create, and update issues; manage sprints and cycles. If your team uses Linear for project management, this turns your LLM into a project manager that operates inside the chat interface.

Context7 MCP — Fetches version-specific documentation for thousands of libraries at query time. Instead of your AI guessing at API usage from its training data, it pulls the exact docs for the version you’re using. This dramatically reduces hallucination on technical questions.

Sentry MCP — Error tracking, stack traces, and frequency trends. The killer query: “Are there any new errors since yesterday’s deploy?” — answered without opening a dashboard.

Web and Data Access

Exa MCP — Semantic web search with structured JSON results. Built specifically for AI agents, it returns clean, query-ready data rather than raw search engine pages.

Postgres MCP — Query PostgreSQL databases directly from AI clients with safe query validation. Over 119,000 weekly downloads. Your LLM can run read queries, explore your schema, and return structured results — without anyone needing to touch SQL. This is the MCP server people set up and never go back from.

Playwright MCP — Browser automation via accessibility tree. Navigate pages, click elements, fill forms, and take screenshots. Your AI can perform end-to-end tests, scrape dynamic content, and interact with web applications that other tools cannot reach.

Communication and Productivity

Slack MCP — Search channels, send messages, and manage threads within Slack workspaces. Over 142,000 weekly downloads. Your AI can answer questions based on channel history, post summaries, or trigger alerts when conditions are met.

Notion MCP — Full CRUD operations on Notion pages, databases, and blocks. Your AI can create meeting notes, update project trackers, and query your workspace databases — all through natural language.

Multi-Integration Platforms

Composio MCP — 250+ app integrations including GitHub, Slack, Gmail, Jira, Salesforce, and more, all in a single server. If you want the broadest possible tool access with minimal setup, this is the highest-leverage server to start with.

Other Notable Servers

The ecosystem is growing rapidly. Other servers worth exploring include Vercel MCP (deployment management), Brave Search MCP (privacy-respecting web search, 98,000+ weekly downloads), Salesforce MCP (CRM data access), Google Drive MCP (first-party server, 168,000 weekly downloads), and Google Maps MCP (location services and place details) (Digital Applied, 2026). The full registry at registry.modelcontextprotocol.io and curated lists like awesome-mcp-servers cover every available server.

You Can Write Your Own (And You Should)

This is where it gets interesting. The MCP spec is open, and building your own server is far simpler than building a full integration from scratch. The MCP SDK is available in TypeScript, Python, Java, Kotlin, C#, and Swift — with 97 million monthly downloads across all languages combined (Digital Applied, 2026).

If you have a proprietary data source, an internal API, or a workflow that no existing server covers, you can write a custom MCP server in under an hour. Your AI then gains access to exactly the tools your business needs. One MCP server. Every AI client. Forever reusable.

For the details on writing a custom server, check the official MCP documentation. The pattern is consistent: define your tools, expose them via your chosen transport (Streamable HTTP, stdio, or SSE), and connect Open WebUI to the endpoint. The MCP SDK handles the JSON-RPC plumbing. You focus on the business logic.
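As a sketch of what that pattern looks like with the Python SDK: the example below assumes the official `mcp` package is installed, and the `get_weather` tool with its canned reply is a placeholder for your real business logic:

```python
# Minimal custom MCP server sketch. The tool body is a placeholder --
# swap in your real business logic (database query, internal API call, etc.).
def get_weather(city: str) -> str:
    """Return a (canned) weather report for a city."""
    return f"Sunny in {city}"

if __name__ == "__main__":
    # Assumes the official MCP Python SDK is installed (the `mcp` package).
    from mcp.server.fastmcp import FastMCP

    server = FastMCP("weather-demo")
    server.tool()(get_weather)                 # register the function as a tool
    server.run(transport="streamable-http")    # serve over Streamable HTTP
```

You would then point Open WebUI's External Tools connection at the server's URL, exactly as in the steps earlier in this post.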

This is the real competitive advantage. While everyone else is using the same public tools, you are wiring your AI directly to your systems. You trade volatile, recurring API fees for a single, owned integration. You keep sensitive data inside your own infrastructure instead of leaking it to third parties. You shift from consumer to creator.

Key Takeaway

Financial: Connect Open WebUI to MCP servers and stop paying for redundant tool integrations. One MCP server serves every AI client you use, and the median time to integrate drops from 18 hours to 4.2 hours.

Security: Keep your AI interactions within your own infrastructure. MCP servers with OAuth 2.1 scoped permissions mean your data never leaves your controlled environment — and 81% of remote MCP servers now use OAuth 2.1 for authentication (Digital Applied, 2026).

Strategic: The MCP ecosystem is the fastest-growing protocol in AI history — 7.8x server growth in 12 months. 78% of enterprise AI teams have MCP in production (Digital Applied, 2026). Building your AI workflow on an open, universally-supported standard today means you are not locked into any single vendor tomorrow.

Next Steps

Already have Open WebUI running? If you want to add web search on top of MCP, check out our guide on connecting Open WebUI to Brave Search and Playwright — it complements MCP perfectly by giving your AI a live web search pipeline.

Want to explore more of Open WebUI’s capabilities? Our Open WebUI Docker deployment guide walks you through the complete setup from scratch.
