Guide to Using OpenClaw with Infron

OpenClaw (formerly Moltbot, originally Clawdbot) is a powerful AI messaging gateway that connects multiple messaging platforms (WhatsApp, Telegram, Discord, Slack, Signal, iMessage, and more) to AI models. By integrating with Infron, you can access a wide range of models including GPT-5.2, Claude-4.5, Gemini-3, DeepSeek, and more.

Account & API Keys Setup

The first step is to create an Infron account and generate an API key.

Setup Guide

Infron powers model access via its OpenAI-compatible API.
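As a quick sanity check of that compatibility, you can call the endpoint directly. The sketch below assumes the standard OpenAI-style /chat/completions path on the base URL used later in this guide; replace <API_KEY> with your Infron key.

```shell
# Minimal sketch: send a chat completion request to Infron's
# OpenAI-compatible endpoint (standard /chat/completions path assumed).
curl https://llm.onerouter.pro/v1/chat/completions \
  -H "Authorization: Bearer <API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-5.2",
    "messages": [{"role": "user", "content": "Hello from OpenClaw!"}]
  }'
```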

Step 1: Install OpenClaw

Install OpenClaw globally via npm:

npm install -g openclaw@latest

Or run the onboarding wizard to set up OpenClaw:

openclaw onboard --install-daemon

Step 2: Configure Infron Provider

Add the Infron provider configuration to your ~/.openclaw/openclaw.json file:

{
  "models": {
    "mode": "merge",
    "providers": {
      "infron": {
        "baseUrl": "https://llm.onerouter.pro/v1",
        "apiKey": "<API_KEY>",
        "api": "openai-completions",
        "models": [
          {
            "id": "deepseek/deepseek-v3.2",
            "name": "DeepSeek Chat via Infron",
            "reasoning": false,
            "input": ["text"],
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
            "contextWindow": 64000,
            "maxTokens": 8192
          },
          {
            "id": "openai/gpt-5.2",
            "name": "GPT-5.2 via Infron",
            "reasoning": false,
            "input": ["text", "image"],
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
            "contextWindow": 200000,
            "maxTokens": 8192
          },
          {
            "id": "google/gemini-3-pro-preview",
            "name": "Gemini 3 Pro via Infron",
            "reasoning": false,
            "input": ["text", "image"],
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
            "contextWindow": 200000,
            "maxTokens": 8192
          },
          {
            "id": "anthropic/claude-sonnet-4.5",
            "name": "Claude Sonnet 4.5 via Infron",
            "reasoning": false,
            "input": ["text", "image"],
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
            "contextWindow": 200000,
            "maxTokens": 8192
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "openai/gpt-5.2"
      },
      "models": {
        "deepseek/deepseek-v3.2": {},
        "openai/gpt-5.2": {},
        "google/gemini-3-pro-preview": {},
        "anthropic/claude-sonnet-4.5": {}
      }
    }
  }
}

Step 3: Add More Models (Optional)

You can add more models to the models array. Check the model list for available models and their capabilities.
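A new entry follows the same shape as the ones in Step 2. The model id, context window, and token limit below are illustrative placeholders; take the real values from the model list.

```json
{
  "id": "moonshotai/kimi-k2",
  "name": "Kimi K2 via Infron",
  "reasoning": false,
  "input": ["text"],
  "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
  "contextWindow": 128000,
  "maxTokens": 8192
}
```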

Step 4: Verify the Configuration

List the available models:
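The subcommand below is an assumption about the openclaw CLI surface; if your installed version differs, check openclaw --help for the exact form.

```shell
# List the models the gateway knows about
# (subcommand name assumed; verify with `openclaw --help`).
openclaw models list
```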

You should see the models you configured in Step 2 listed in the output.

Step 5: Set the Default Model

Set your preferred default model:
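The default is controlled by the agents.defaults.model.primary field in the same ~/.openclaw/openclaw.json shown in Step 2. For example, to make Claude Sonnet 4.5 the default:

```json
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "anthropic/claude-sonnet-4.5"
      }
    }
  }
}
```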

Use Cases

Once configured, you can use Infron models in various ways:

Via CLI Agent
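A one-off prompt from the terminal might look like the following. The agent subcommand and --message flag are assumptions about the CLI surface, so check openclaw --help for the exact invocation:

```shell
# Ask the configured primary model a question from the terminal
# (`agent` subcommand and --message flag assumed; verify with --help).
openclaw agent --message "Summarize the key points of this guide"
```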

Via Messaging Channels

Configure your messaging channels (WhatsApp, Telegram, Discord, etc.) and the gateway will automatically use your configured model:
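As an illustration, a Telegram channel entry in ~/.openclaw/openclaw.json might look like the sketch below. The key names here are assumptions, so follow the OpenClaw channel documentation for the exact schema:

```json
{
  "channels": {
    "telegram": {
      "botToken": "<TELEGRAM_BOT_TOKEN>"
    }
  }
}
```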

Switching Models

You can switch models at any time:
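You can change agents.defaults.model.primary in the config (as in Step 5), or, if your OpenClaw version supports in-chat slash commands (an assumption here), switch per conversation by typing a command like this into a connected channel:

```
/model anthropic/claude-sonnet-4.5
```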
