I left ChatGPT + the Gemini app for RikkaHub (full control on Android)

Basanta Sapkota

You know that feeling when you can see the exact workflow you want in your head… and then the app hits you with a polite little “nope”? That was me with the consumer versions of ChatGPT and Gemini. Clean UI. Smooth experience. But the second you want full control (your own providers, your own models, your own tools), you start bumping into invisible walls. And after the tenth time, it stops being “minor friction” and starts being “why am I doing this to myself?”

So yeah, I moved over to RikkaHub. Not because ChatGPT or Gemini are “bad.” They’re not. RikkaHub just feels like it was made by someone who actually enjoys fiddling with the stack until it purrs.

Key takeaways

  • RikkaHub is a native Android LLM chat client, and you can hop between multiple providers. No single-vendor handcuffs.
  • It lets you plug in custom API / URL / models, and it explicitly calls out compatibility with OpenAI-, Google-, and Anthropic-style APIs.
  • It handles multimodal input like images, plus docs such as PDF and DOCX. Rendering is strong too, with Markdown, LaTeX, and Mermaid in the mix.
  • MCP is one of the cleanest ways to wire “tools” into an assistant, and RikkaHub advertises integrated MCP.
  • The public Gemini web app still doesn’t appear to offer user-facing settings for adding your own custom remote MCP server URL, at least based on community reports.
  • Want MCP inside web chat apps anyway? There’s a workaround: MCP SuperAssistant.

What is RikkaHub, really? And why does it feel different?

RikkaHub is a native Android LLM chat client that lets you run conversations through different providers. The project is public on GitHub, and honestly, the feature list reads like somebody got tired of “almost” being able to do what they wanted and just built the thing. A few highlights pulled straight from the repo and the official site:

  • Multiple provider support, with custom API / URL / models, plus explicit OpenAI/Google/Anthropic-style compatibility
  • Multimodal input: images, and documents like PDF and DOCX
  • Solid Markdown rendering with code highlighting, LaTeX, tables, and Mermaid diagrams
  • Built-in search integrations like Exa, Tavily, Brave, Perplexity, and others
  • Agent customization, prompt variables, and a “ChatGPT-like memory feature”
  • Handy import/export bits, including QR code export/import for providers
  • Extra knobs like custom HTTP headers and request bodies

The nice part is it’s all spelled out pretty plainly in the README and on the site. No mystery meat. Links are down in Sources. 

Why I left ChatGPT and the Gemini app: RikkaHub hands me the keys

1) Custom models + custom endpoints, so vendor lock-in doesn’t run my life

The biggest shift is weirdly psychological. With RikkaHub, I’m not “using ChatGPT.” I’m using my setup. My endpoints. My models. My rules. RikkaHub’s README explicitly mentions custom API/URL/models, including compatibility with OpenAI-, Google-, and Anthropic-style APIs. In real terms, that often means you can point it at things like:

  • a hosted provider you pay directly
  • an OpenAI-compatible gateway you run yourself
  • a team’s internal model endpoint

And when you want to compare answers, switching providers isn’t a hack. It’s the whole idea.

2) It’s picky in the way devs are picky about output

If you paste code, you want code blocks that behave. Period. If you write math, you want LaTeX to actually render instead of turning your screen into soup. And if you’re mapping a system, Mermaid diagrams save time and brain cells. RikkaHub supports Markdown rendering with code highlighting, plus Mermaid. That alone cuts out a bunch of small annoyances that add up fast.

3) It fits power-user habits instead of fighting them

Prompt variables, agent customization, and memory are those “small big” features. The ones you don’t care about… until you’ve had them. Then going back feels like typing with mittens on. Also, the “custom headers and request bodies” thing sounds nerdy until you need it. Weird auth header? Org/project header? Proxy tag? Been there. It matters.

RikkaHub + MCP: custom tools without duct tape and wishful thinking

Let’s talk MCP, because this is where “chat app” turns into “assistant that can actually do stuff.”

Anthropic describes MCP as an open standard for building secure, two-way connections between AI assistants and the systems where your data lives. Think content repos, dev tools, business apps, internal services. The MCP docs keep it practical too: you connect to local or remote MCP servers, and suddenly tool-style capabilities show up, like calendar, Notion, databases, and so on. And the spec lays out the structure pretty clearly: hosts connect to servers via clients. It also defines things like Tools, Resources, and Prompts, plus security expectations around user consent and tool safety. If you want the official reads, here you go:

  • Anthropic’s announcement: [Introducing the Model Context Protocol]
  • MCP intro docs: [What is MCP?]
  • MCP spec, including security notes: [MCP Specification]
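To make the “servers” part concrete, here is the JSON shape some MCP hosts (Claude Desktop, for example) use to register a local server. The filesystem server is one of the official reference servers, but the exact keys and the path are illustrative, and RikkaHub may expose this through its own settings UI rather than a raw config file:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/user/notes"]
    }
  }
}
```

Once a host has an entry like this, the server’s tools (here, file read/list operations) show up to the model as callable capabilities, gated by the consent rules the spec describes.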

Why not just use MCP directly in Gemini or ChatGPT?

Because consumer apps tend to drag their feet on “bring your own toolchain.” That’s the annoying truth. There’s a community thread pointing out that custom MCP server configuration seems supported in some Google/Gemini products like Gemini Code Assist, Gemini CLI, and Gemini Enterprise. But it doesn’t appear as a user-facing setting in the public Gemini web app, which seems to focus on Google-built extensions. That matches what a lot of people feel: the consumer UI is polished, but it’s not “plug in your own stack” polished. Source is linked down below in the Sources section.

A practical workaround: MCP SuperAssistant

If you want MCP-style tool execution inside web chat interfaces right now, MCP SuperAssistant is an option. It’s a Chrome extension that detects MCP tool calls on supported platforms and routes them through a local proxy server, which then talks to actual MCP servers. The repo explicitly claims support for platforms including ChatGPT and Google Gemini, among others. Source is also in Sources. 

How I set up RikkaHub with a custom model endpoint

Since RikkaHub supports custom API/URL/models, I stick to one rule that has saved me from a lot of dumb troubleshooting: test the endpoint outside the app first.

Step 1: Verify your OpenAI-compatible endpoint with curl
If your provider exposes an OpenAI-style /v1/chat/completions, this quick check catches auth and routing problems early:

export OPENAI_API_KEY="your_key_here"
export OPENAI_BASE_URL="https://your-endpoint.example.com"

curl "$OPENAI_BASE_URL/v1/chat/completions" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "your-model-id",
    "messages": [{"role":"user","content":"Say hello in one sentence."}]
  }'

Once that works, RikkaHub is usually “just” configuration. Paste the base URL, paste the key, pick the model name, done.
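If you want to sanity-check that you’re reading the response shape correctly, you can practice on a canned OpenAI-style completion. The JSON below is a stand-in I wrote for this example, not real provider output, but the extraction path is the same one you’d use on a live reply:

```shell
# Canned OpenAI-style response standing in for the live endpoint's output.
RESPONSE='{"choices":[{"message":{"role":"assistant","content":"Hello!"}}]}'

# Pull out just the assistant text: choices[0].message.content.
echo "$RESPONSE" | python3 -c \
  'import json, sys; print(json.load(sys.stdin)["choices"][0]["message"]["content"])'
# prints: Hello!
```

If that path errors out against your real endpoint, the provider is probably not fully OpenAI-compatible, which is exactly the kind of thing you want to discover before blaming the app.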

Step 2: Use custom headers when your provider requires them

RikkaHub explicitly supports custom HTTP request headers and request bodies, which is exactly what you need when a provider wants something nonstandard:

X-Project-ID: mobile-lab
X-Trace-Tag: rikkahub

I keep headers minimal. If it feels sketchy to send, I don’t send it. Simple.
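For the curious: those extra headers ride along on the same curl check from Step 1. Everything here is a placeholder (endpoint, model id, and the X-* header names are made up for illustration), so treat this as a template rather than a runnable command:

```
curl "$OPENAI_BASE_URL/v1/chat/completions" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -H "X-Project-ID: mobile-lab" \
  -H "X-Trace-Tag: rikkahub" \
  -d '{"model": "your-model-id", "messages": [{"role":"user","content":"ping"}]}'
```

If the request works with the headers in curl but not in the app, you’ve narrowed the problem to the client config instead of the provider.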

Best practices for a sane setup (keys, privacy, cost… the boring stuff bites later)

A few habits I learned the hard way:

  • Use separate API keys for mobile clients. If something leaks, the blast radius is smaller.
  • Prefer a proxy/gateway if you need unified logging, rate limits, or provider failover.
  • Cap max tokens and set sane defaults. Mobile usage can get expensive fast.
  • Be picky with tools, MCP or otherwise. Tool execution is basically code execution wearing a disguise, and MCP’s spec is clear about tool safety and user consent.
  • Don’t over-trust memory. Handy, sure. I still treat it like a convenience layer, not a source of truth.
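On the “cap max tokens” point: in OpenAI-style APIs that’s the max_tokens field in the request body. A quick sketch of a capped payload (model id and the 512 limit are example values), with a JSON sanity check before you paste anything into a client config:

```shell
# Sketch of a capped request body; "your-model-id" and 512 are placeholders.
PAYLOAD='{
  "model": "your-model-id",
  "max_tokens": 512,
  "messages": [{"role": "user", "content": "Summarize this in two sentences."}]
}'

# Validate the JSON and confirm the cap is what you think it is.
echo "$PAYLOAD" | python3 -c 'import json, sys; print(json.load(sys.stdin)["max_tokens"])'
# prints: 512
```

A hard cap like this won’t stop every runaway bill, but it turns “oops” into a bounded number instead of an open-ended one.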

Why RikkaHub stuck for me

For everyday chatting, ChatGPT and Gemini are totally fine. But when you care about control (custom endpoints, custom models, and MCP-style tool wiring), RikkaHub feels like the Android client that actually respects the way power users work. If you try it, go small. Add one provider. Verify one model. Then iterate. And if you’re in the mood for more tooling talk, my related post on agentic workflows is here: Top 10 agentic coding tools in 2026 (dev guide). Already moved off the big default apps? What did you switch to… and what broke first?

Sources

  • RikkaHub GitHub repository (feature list, provider support, rendering, memory, customization): https://github.com/rikkahub/rikkahub
  • RikkaHub official site (overview + “Integrated Model Context Protocol” claim): https://rikka-ai.com/en
  • Anthropic announcement (what MCP is and why it exists): https://www.anthropic.com/news/model-context-protocol
  • Model Context Protocol docs (intro, local/remote servers, examples): https://modelcontextprotocol.io/docs/getting-started/intro
  • Model Context Protocol specification (hosts/clients/servers, tools/resources, security guidance): https://modelcontextprotocol.io/specification/2025-06-18
  • Community discussion on Gemini public app + custom MCP connectors (limitations/roadmap question): https://www.reddit.com/r/mcp/comments/1p5ixpa/any_plans_for_custom_mcp_model_context_protocol/
  • MCP SuperAssistant (extension brings MCP tool execution into web AI chats via a proxy): https://github.com/srbhptl39/MCP-SuperAssistant
