
AI Providers

Vibe supports two ways to connect to AI models: Vibe API (managed subscription) or Bring Your Own Key (BYOK).

Vibe API Subscription

The easiest way to get started. Sign in with Google, GitHub, LinkedIn, or email.

Plans

| Plan | Price | Budget | Reset | Models |
| ---- | ----- | ------ | ----- | ------ |
| Free | $0/month | $1 | Daily | gpt-5-mini, gpt-5-nano |
| Pro | $25/month | $50 | Monthly | gpt-5-mini, gpt-5, gemini-2.0 |
| Max | $99/month | $300 | Monthly | All Pro models + gpt-5.2, gpt-5.4, claude-opus-4.5 |

How to Subscribe

  1. Open Vibe settings (click gear icon)
  2. Click Connect next to Vibe API
  3. Sign in with your preferred method
  4. Your account is created with Free tier
  5. Click Manage Subscription to upgrade

Budget Usage

  • Free: $1 budget resets every 24 hours
  • Pro/Max: Monthly budget resets on billing date
  • Progress bar shows current spend vs. budget
  • Color changes: green → yellow (70%) → red (90%)
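The color thresholds above can be sketched as a small helper. This is illustrative only; the function name and exact comparison behavior are assumptions, not Vibe's actual implementation:

```python
def budget_color(spent: float, budget: float) -> str:
    """Map spend ratio to a progress-bar color.

    Thresholds from the docs: yellow at 70% of budget, red at 90%.
    """
    ratio = spent / budget
    if ratio >= 0.90:
        return "red"
    if ratio >= 0.70:
        return "yellow"
    return "green"
```

For example, on the Free plan, $0.75 spent of the $1 daily budget would show yellow.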

Model Selection with Vibe API

Use the vibe: prefix for Vibe API models:

vibe:gpt-5-mini      # Fast, cost-effective (recommended)
vibe:gpt-5           # Advanced reasoning
vibe:gpt-5-nano      # Ultra-fast, minimal cost

Pro and Max plans unlock additional models like vibe:gpt-5.4 and vibe:claude-opus-4.5.

Bring Your Own Key (BYOK)

Connect directly to providers using your own API keys.

Supported Providers

| Provider | Model Format | API Key Source |
| -------- | ------------ | -------------- |
| OpenAI | openai:gpt-5-mini | platform.openai.com |
| Anthropic | anthropic:claude-3-5-sonnet | console.anthropic.com |
| Google Gemini | gemini:gemini-2.0-flash | aistudio.google.com |
| OpenRouter | openrouter:meta-llama/llama-3.1-70b | openrouter.ai |
| Azure OpenAI | azure:gpt-5-mini | Azure Portal |
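Every model identifier above follows a provider:model pattern. A minimal parser sketch (a hypothetical helper, not part of Vibe) must split on the first colon only, since model names can themselves contain colons or slashes:

```python
def parse_model(spec: str) -> tuple[str, str]:
    """Split 'provider:model' on the first colon only.

    Model names may contain colons (ollama:llama3.1:8b) or
    slashes (openrouter:meta-llama/llama-3.1-70b).
    """
    provider, _, model = spec.partition(":")
    return provider, model
```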

Setting Up BYOK

  1. Open Vibe settings
  2. Select your model (e.g., openai:gpt-5-mini)
  3. Enter your API key in the API Key field
  4. Click Save Settings
  5. Click Test Settings to verify
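What Test Settings does internally is not documented here, but a typical key check against an OpenAI-compatible endpoint lists the available models with the key in a Bearer header. A hedged sketch (the /models path and the helper name are assumptions):

```python
import urllib.request

def build_key_check(base_url: str, api_key: str) -> urllib.request.Request:
    # Hypothetical check: GET /models on an OpenAI-compatible API.
    # A 200 response would indicate the key is valid.
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = build_key_check("https://api.openai.com/v1", "sk-...")
```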

Ollama — Self-Hosted (Free & Private)

Run AI models locally with Ollama. No API key needed — your data never leaves your machine.

  1. Install Ollama (setup guide)
  2. Pull a model: ollama pull qwen3.5
  3. In Vibe Settings → select Ollama (Self-Hosted) as your provider
  4. Selector shows installed models plus installable models from Ollama library
  5. Selecting an uninstalled model auto-installs it via Ollama API (/api/pull)

Model format: ollama:qwen3.5, ollama:llama3.1:8b
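Step 5's auto-install can be sketched as a POST to Ollama's /api/pull endpoint. Treat this as an illustration rather than Vibe's exact code (current Ollama releases use a "model" payload key; older releases used "name"):

```python
import json
import urllib.request

def build_pull_request(model: str, host: str = "http://localhost:11434"):
    # Ollama's pull endpoint downloads a model if it is not installed.
    body = json.dumps({"model": model}).encode()
    return urllib.request.Request(
        f"{host}/api/pull",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_pull_request("qwen3.5")
```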

Privacy

When using Ollama, all prompts and responses stay on your machine. Ideal for sensitive data, enterprise compliance, or offline use.

Custom Base URL

For other self-hosted servers or proxies, enter a custom base URL:

http://localhost:3001/v1     # Local LLM
https://your-proxy.com/v1    # Custom proxy

Recommended Models

| Use Case | Model | Why |
| -------- | ----- | --- |
| Daily tasks | gpt-5-mini | Fast, accurate, cost-effective |
| Complex research | gpt-5 | Better reasoning, slower |
| Quick lookups | gpt-5-nano | Ultra-fast responses |
| Vision tasks | gpt-5-mini | Supports screenshots |

Vision Support

Models with vision capability can see and analyze screenshots. Look for the eye icon next to model names in the dropdown.

Vision-capable models:

  • openai:gpt-5-mini, openai:gpt-5
  • anthropic:claude-3-5-sonnet
  • gemini:gemini-2.0-flash
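A capability lookup like the dropdown's eye icon could be sketched as a simple set membership check. The set below just mirrors the list above; it is not an exhaustive registry:

```python
# Vision-capable models listed in these docs (illustrative, not complete).
VISION_MODELS = {
    "openai:gpt-5-mini",
    "openai:gpt-5",
    "anthropic:claude-3-5-sonnet",
    "gemini:gemini-2.0-flash",
}

def supports_vision(model: str) -> bool:
    return model in VISION_MODELS
```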