Open WebUI — Making Local Models Actually Useful

Running Ollama from the terminal is fine, but Open WebUI gives you a ChatGPT-like interface for your local models. The real trick is setting up custom system prompts so each model behaves like a specialized tool instead of a generic chatbot.
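Under the hood, a system prompt is nothing magic — it's just the first message in the chat payload that Open WebUI sends to Ollama's `/api/chat` endpoint. A minimal sketch of that payload (the model name and prompt text are placeholders; adapt to your own profile):

```python
import json

def build_chat_payload(model: str, system_prompt: str, user_message: str) -> dict:
    """Build an Ollama /api/chat request body with a system prompt.

    The system message goes first; the model treats it as standing
    instructions for the rest of the conversation.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "stream": False,
    }

# This is roughly what gets POSTed to http://localhost:11434/api/chat
payload = build_chat_payload(
    "qwen2.5-coder:7b",
    "You are a Python and bash scripting expert.",
    "Write a script that renames files.",
)
print(json.dumps(payload, indent=2))
```

Knowing the shape of this payload makes it obvious why per-model profiles work: the UI simply prepends your saved prompt to every conversation with that model.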

My Model Profiles

Code Specialist — qwen2.5-coder:7b

Open Workspace → Models → Edit and paste this as the system prompt:

You are a Python and bash scripting expert. Your primary role is writing 
clean, production-ready code.

- Always use type hints in Python
- Include Google-style docstrings
- Follow PEP 8 strictly
- For CLI tools: use the click library
- For bash: include set -euo pipefail
- Always include a shebang line

Response format:
1. Brief explanation of approach
2. Complete, runnable code
3. Usage example
4. Edge case notes

Be concise. Don't apologize. Just deliver great code.

That last line matters more than you’d think. Without it, local models love to start every response with “Of course! I’d be happy to help you with that!” — three sentences of nothing before the actual answer.
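For reference, this is the style the prompt is steering toward — a tiny, hypothetical helper with a shebang, type hints, and a Google-style docstring (click is omitted here to keep the sketch dependency-free):

```python
#!/usr/bin/env python3
"""Example of the output style the code-specialist prompt enforces."""

def slugify(name: str) -> str:
    """Convert a filename to a lowercase, dash-separated slug.

    Args:
        name: The original filename, without extension.

    Returns:
        A slug safe for URLs and shell scripts.
    """
    return "-".join(name.lower().split())

print(slugify("My Report Draft"))  # -> my-report-draft
```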

Writing & Docs — qwen2.5:14b

You are a senior technical writer. Use Markdown for everything.
Start with an overview, then go deep. Include diagrams when helpful 
(Mermaid syntax). Always provide practical examples alongside theory.
Be direct — no filler phrases.

Translator

This one surprised me. You can turn any decent model into a solid translator:

You are a professional translator. Provide ONLY the translation unless 
explicitly asked for explanation. Maintain the original tone. Preserve 
formatting. For ambiguous terms, choose the most contextually appropriate 
translation. Never add commentary unless requested.

When no target language is specified, ask for clarification.

Works great with qwen2.5:14b — its multilingual support handles Arabic ↔ English really well.
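The same translator prompt drops straight into an API call if you want this behavior outside the UI. A sketch, assuming a local Ollama instance — the helper name and the way the target language is wrapped into the user message are my own, not anything Open WebUI does:

```python
TRANSLATOR_PROMPT = (
    "You are a professional translator. Provide ONLY the translation "
    "unless explicitly asked for explanation. Maintain the original tone."
)

def translation_request(text: str, target_lang: str) -> dict:
    """Build an Ollama /api/chat body asking for a translation.

    The system prompt sets the translator persona; the user message
    names the target language explicitly so the model never has to guess.
    """
    return {
        "model": "qwen2.5:14b",
        "messages": [
            {"role": "system", "content": TRANSLATOR_PROMPT},
            {"role": "user", "content": f"Translate into {target_lang}:\n\n{text}"},
        ],
        "stream": False,
    }

req = translation_request("Good morning", "Arabic")
print(req["messages"][1]["content"])
```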

Things I Learned

  • Separate profiles per role beat switching prompts mid-conversation. The model stays focused.
  • The instruction “don’t apologize, don’t over-explain” genuinely improves output from local models. They’re trained to be chatty — you have to push back on that.
  • The Knowledge Base feature lets you feed reference docs to models. Useful for project-specific context.

See Also