🛣️ Roadmap / Open WebUI

Open WebUI is building the AI interface that replaces everything else on your desktop. Here's where we're going.


🔨 In Progress

🖥️ Desktop App

A dedicated desktop application with system-level integration. Global hotkeys, menu bar access, notification support, and instant launch. One download, double-click, running. No Docker, no Python, no terminal. Your AI assistant, always one keystroke away.

📈 Usage Tracking & Cost Management

Know exactly who's using what, how many tokens are flowing through each model, and what it's costing you. Per-user breakdowns, per-model analytics, and budget controls for teams that need to scale AI usage without surprises.
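The core of this kind of tracking is simple aggregation over usage events. A minimal sketch, with a hypothetical event shape and made-up per-million-token prices (real rates depend on your provider):

```python
from collections import defaultdict

# Hypothetical pricing table: USD per 1M tokens. Not real provider rates.
PRICING = {"gpt-4o": {"input": 2.50, "output": 10.00}}

def aggregate_costs(events):
    """Sum token usage and cost per (user, model) from raw usage events."""
    totals = defaultdict(lambda: {"input": 0, "output": 0, "cost": 0.0})
    for e in events:
        key = (e["user"], e["model"])
        rates = PRICING[e["model"]]
        totals[key]["input"] += e["input_tokens"]
        totals[key]["output"] += e["output_tokens"]
        totals[key]["cost"] += (e["input_tokens"] * rates["input"]
                                + e["output_tokens"] * rates["output"]) / 1_000_000
    return dict(totals)

events = [
    {"user": "alice", "model": "gpt-4o", "input_tokens": 1000, "output_tokens": 500},
    {"user": "alice", "model": "gpt-4o", "input_tokens": 2000, "output_tokens": 1000},
]
report = aggregate_costs(events)
```

Budget controls then become a comparison of each user's accumulated `cost` against a configured limit.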

♿ Accessibility

Every release gets closer to an interface everyone can use. Screen reader support, full keyboard navigation, ARIA compliance, high-contrast modes. AI should be for everyone, and we take that literally.


🔭 Planned

🧠 AI Workflow Builder

Drag, drop, and wire together models, tools, knowledge bases, and logic gates into multi-step AI pipelines. Think: "pull data from this API, run it through GPT-4, check the output against these rules, then post the result." All visual. No code required.
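Under the visual surface, a pipeline like that is just steps wired output-to-input. A rough sketch of the semantics, with stand-in functions for the API pull, model call, and rule check (none of these are the builder's real interfaces):

```python
def fetch(_):
    """Stand-in for 'pull data from this API'."""
    return {"revenue": 1200}

def summarize(data):
    """Stand-in for a model call on the fetched data."""
    return f"Revenue was {data['revenue']}."

def check(text):
    """Rule gate: reject model output that misses a required field."""
    if "revenue" not in text.lower():
        raise ValueError("summary missing required field")
    return text

def run_pipeline(steps, payload=None):
    """Feed each step's output into the next; fail fast on rule violations."""
    for step in steps:
        payload = step(payload)
    return payload

result = run_pipeline([fetch, summarize, check])
```

The builder's job is to let you assemble that `steps` list by dragging nodes instead of writing code.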

⏰ Scheduled Tasks & Automations

Set your AI to work while you sleep. Daily report generation, weekly data pulls, automated monitoring, recurring analysis. Define a workflow once and let it run on autopilot. Your AI doesn't need to wait for you to ask.

🔧 Integrated Fine-tuning

Your conversations and ratings become training data. Open WebUI will build fine-tuning datasets from your actual usage, preprocess them, and hand you a ready-to-train package. Personalized models, built from how you already work. Your AI gets better the more you use it.
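The essential transformation is filtering rated exchanges and emitting them in a chat-style training format. A sketch under assumed data shapes (the conversation structure and JSONL output here are illustrative, not Open WebUI's actual export format):

```python
import json

def build_dataset(conversations, min_rating=1):
    """Keep positively rated exchanges and emit chat-format training records."""
    records = []
    for conv in conversations:
        for turn in conv["turns"]:
            if turn.get("rating", 0) >= min_rating:
                records.append({"messages": [
                    {"role": "user", "content": turn["prompt"]},
                    {"role": "assistant", "content": turn["response"]},
                ]})
    return records

conversations = [{"turns": [
    {"prompt": "Hi", "response": "Hello!", "rating": 1},
    {"prompt": "Bad question", "response": "Bad answer", "rating": -1},
]}]
dataset = build_dataset(conversations)
jsonl = "\n".join(json.dumps(r) for r in dataset)  # one record per line
```

Filtering on ratings is what turns passive usage into a curated dataset: thumbs-down exchanges never reach the training set.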

🧑‍💻 Enhanced Collaboration

Multiple users in the same conversation, watching responses stream in together, annotating, branching, and building on each other's prompts. Brainstorming with your team and your AI in the same room.

📣 Wakeword Detection

Say the word and start talking. No keyboard, no click, no tab switching. Walk into a room, speak, and your AI is already listening. This is what the future of human-computer interaction looks like.

🌐 Modular RAG Framework

Swap out every piece of the retrieval pipeline from the UI. Different chunking strategies, different embedding models, different rerankers, all configurable with drag-and-drop. One size doesn't fit all, and your RAG setup shouldn't pretend it does.
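What makes a retrieval pipeline modular is dependency injection: the indexer takes a chunking strategy as a parameter instead of hard-coding one. A minimal sketch with two toy strategies (both hypothetical, for illustration):

```python
from typing import Callable

def fixed_size_chunks(text: str, size: int = 20) -> list[str]:
    """Naive strategy: split into fixed-length character windows."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def sentence_chunks(text: str) -> list[str]:
    """Alternative strategy: split on sentence boundaries."""
    return [s.strip() + "." for s in text.split(".") if s.strip()]

def index_document(text: str, chunker: Callable[[str], list[str]]) -> list[str]:
    """The chunker is injected, so the rest of the pipeline never changes."""
    return chunker(text)

doc = "First sentence. Second sentence."
by_sentence = index_document(doc, sentence_chunks)
by_size = index_document(doc, fixed_size_chunks)
```

Embedding models and rerankers slot in the same way: each is an interface with interchangeable implementations, which is what lets the UI offer them as drag-and-drop choices.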

👤 User Profiles & Sharing

Publish your model configurations, prompts, and skills. Follow other users. Build on what they've shared. A growing ecosystem of ready-to-use AI setups that anyone can import and make their own.

Real-world model rankings based on actual usage and feedback, not synthetic benchmarks. See which models perform best for code, for writing, for analysis, ranked by the people who use them every day.


The Vision

AI is consolidating the tools we use every day. Search, writing, analysis, code, and project management are all converging into a single interface. Open WebUI is built to be that interface: self-hosted, extensible, and designed to grow with the models it connects to.

Everything on this page is a step toward that goal.


Want to help us ship faster? Run the dev branch, test upcoming features, and report what you find. Every bug caught early frees us to build more.

For discussion and real-time progress, join us on Discord or follow development on GitHub.