Settings Reference

Complete reference for all Vibe configuration options.

Model Configuration

Model

The AI model to use, in provider:model-name format.

Examples:

  • vibe:gpt-5-mini - Vibe API (subscription)
  • openai:gpt-5-mini - Direct OpenAI
  • anthropic:claude-3-5-sonnet - Anthropic Claude
  • gemini:gemini-2.0-flash - Google Gemini
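The provider:model-name split can be sketched as a small parser. This is a hypothetical helper for illustration; the function name is not part of Vibe's API:

```typescript
// Hypothetical sketch: splits a "provider:model-name" string at the
// first colon. Not Vibe's actual implementation.
function parseModelString(model: string): { provider: string; modelName: string } {
  const separator = model.indexOf(":");
  if (separator === -1) {
    throw new Error(`Invalid model "${model}": expected provider:model-name`);
  }
  return {
    provider: model.slice(0, separator),
    // Everything after the first colon is the model name, so names that
    // themselves contain ":" survive intact.
    modelName: model.slice(separator + 1),
  };
}
```

For example, parseModelString("anthropic:claude-3-5-sonnet") yields provider "anthropic" and model name "claude-3-5-sonnet".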

API Key

Your provider's API key. Required for BYOK (bring-your-own-key) providers; not needed for the Vibe API.

  • Keys are stored encrypted in browser storage
  • Only sent to the selected provider
  • Separate keys stored per provider
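Per-provider storage can be pictured as a keyed store. A minimal sketch, assuming a simple in-memory map; the key naming and the encrypt-before-write step in the real extension are not shown here:

```typescript
// Hypothetical sketch of per-provider API key storage. The real extension
// encrypts values before writing to browser storage; this sketch stores
// them as-is purely to illustrate the one-key-per-provider model.
const keyStore = new Map<string, string>();

function setApiKey(provider: string, apiKey: string): void {
  keyStore.set(`apiKey:${provider}`, apiKey);
}

function getApiKey(provider: string): string | undefined {
  // Each provider resolves to its own entry, so switching providers
  // never leaks a key across providers.
  return keyStore.get(`apiKey:${provider}`);
}
```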

Base URL (Optional)

Custom API endpoint for self-hosted models or proxies.

Use cases:

  • Local LLM: http://localhost:3001/v1
  • Azure OpenAI: https://your-resource.openai.azure.com
  • Custom proxy: https://your-proxy.com/v1

Leave blank to use the provider's default endpoint.
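The override behavior can be sketched as a small resolver. The default endpoints below are the providers' public ones as an assumption for illustration; Vibe's actual endpoint table may differ:

```typescript
// Hypothetical sketch of Base URL resolution; defaults are assumed
// public provider endpoints, not necessarily Vibe's internal table.
const DEFAULT_ENDPOINTS: Record<string, string> = {
  openai: "https://api.openai.com/v1",
  anthropic: "https://api.anthropic.com",
};

function resolveEndpoint(provider: string, baseUrl?: string): string | undefined {
  // A non-empty Base URL overrides the provider default; blank or
  // missing falls through to the default endpoint.
  return baseUrl?.trim() ? baseUrl : DEFAULT_ENDPOINTS[provider];
}
```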

Debugging Options

Show Thoughts

Default: Enabled

Displays the agent's chain-of-thought reasoning in the chat, showing what the agent is thinking before and after each action.

When to enable:

  • Understanding why the agent made certain decisions
  • Debugging unexpected behavior
  • Learning how the agent approaches tasks

When to disable:

  • Cleaner, minimal output
  • Faster perceived response time

Verbose Tool Calls

Default: Disabled

Shows detailed input and output for each tool call in expanded view.

When to enable:

  • Debugging tool behavior
  • Understanding exactly what parameters were sent
  • Verifying tool responses

Show Element Highlights

Default: Disabled

Displays red highlight boxes with index numbers on interactive elements when capturing page content.

When to enable:

  • Debugging element selection issues
  • Understanding what elements the agent sees
  • Visual verification of page parsing

Log LLM Queries

Default: Disabled

Enables an HTTP interceptor that logs all LLM API requests and responses to the browser console.

When to enable:

  • Debugging API issues
  • Monitoring token usage
  • Analyzing request/response timing

How to view: Open DevTools (F12) → Console tab
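The interceptor can be pictured as a thin wrapper around fetch. A minimal sketch, assuming a fetch-like function; the names and log format are illustrative, not Vibe's actual code:

```typescript
// Hypothetical sketch of an LLM request logger. Vibe's real interceptor
// may capture more detail (headers, token usage, response bodies).
type FetchLike = (url: string, init?: { method?: string; body?: string }) => Promise<unknown>;

function withLogging(fetchFn: FetchLike, log: (line: string) => void): FetchLike {
  return async (url, init) => {
    const started = Date.now();
    // Log the outgoing request before it is sent...
    log(`[LLM] -> ${init?.method ?? "GET"} ${url}`);
    const response = await fetchFn(url, init);
    // ...and the response with elapsed time for latency analysis.
    log(`[LLM] <- ${url} (${Date.now() - started} ms)`);
    return response;
  };
}
```

In the extension, the wrapped function would stand in for the global fetch, so every request/response pair reaches the console.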

Enable Langfuse

Default: Disabled

Enables Langfuse integration for LLM observability and tracing.

Requirements:

  • Langfuse account
  • Environment variables configured:
    • LANGFUSE_PUBLIC_KEY
    • LANGFUSE_SECRET_KEY
    • LANGFUSE_HOST (optional)
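As a sketch, the environment configuration might look like this (placeholder values, not real keys):

```shell
LANGFUSE_PUBLIC_KEY=pk-lf-your-public-key
LANGFUSE_SECRET_KEY=sk-lf-your-secret-key
# Optional; typically defaults to Langfuse Cloud when unset
LANGFUSE_HOST=https://cloud.langfuse.com
```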

Features:

  • Trace visualization
  • Cost tracking
  • Latency analysis
  • Session replay

Vision Settings

Enable Screenshot (Vision)

Default: Enabled (for vision-capable models)

Allows the agent to capture and analyze screenshots of web pages.

Auto-disabled: When a non-vision model is selected, this setting is automatically turned off.

When to enable:

  • Visual verification of actions
  • Complex UI navigation
  • Image-based content extraction

When to disable:

  • Reduce API costs (images use more tokens)
  • Text-only tasks
  • Privacy-sensitive contexts
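The auto-disable rule combines the user's toggle with the model's capability. A minimal sketch; the vision-capable model list here is an assumption for illustration, not Vibe's actual capability table:

```typescript
// Hypothetical sketch of the screenshot auto-disable rule. The set of
// vision-capable models is assumed, not taken from Vibe's source.
const VISION_CAPABLE = new Set(["gpt-5-mini", "claude-3-5-sonnet", "gemini-2.0-flash"]);

function screenshotsEnabled(modelName: string, userEnabled: boolean): boolean {
  // Screenshots stay off whenever the selected model lacks vision
  // support, regardless of the user's toggle.
  return userEnabled && VISION_CAPABLE.has(modelName);
}
```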

Saving Settings

Click Save Settings to persist your configuration. Settings are saved to browser storage and applied immediately.

Click Test Settings to verify your API key and model configuration work correctly.
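Save-and-apply can be sketched against a simple key/value store abstraction. The interface and key name below are assumptions for illustration; the real extension writes to browser storage:

```typescript
// Hypothetical sketch of persisting settings; "vibe.settings" and the
// SettingsStore shape are illustrative, not Vibe's actual storage layer.
interface SettingsStore {
  set(key: string, value: string): void;
  get(key: string): string | undefined;
}

interface VibeSettings {
  model: string;
  baseUrl?: string;
  showThoughts: boolean;
}

function saveSettings(store: SettingsStore, settings: VibeSettings): VibeSettings {
  // Persist the settings, then read back the stored copy -- the parsed
  // result is what the running agent would pick up immediately.
  store.set("vibe.settings", JSON.stringify(settings));
  return JSON.parse(store.get("vibe.settings")!);
}
```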