# Overview

### What Is Batch Processing?

Batch processing is a powerful method for handling large volumes of requests efficiently. Instead of sending and processing each request individually with an immediate response, batch processing lets you submit many requests together for asynchronous execution. This approach is especially useful when:

* You need to process large datasets
* Real-time responses are not required
* You want to maximize cost efficiency
* You are running large-scale evaluations or analyses

Batch processing (batching) enables you to send multiple message requests in a single batch and retrieve the results later, within a 24-hour processing window. The key benefits are significant cost reduction (up to 50%) and higher throughput for analytical or offline workloads.

### How to Use the Batches API

A Batch consists of a list of individual requests. Each request contains:

* A unique `custom_id` to identify the message request
* A `params` object containing the standard parameters used in the Messages API

To create a batch, pass this list of requests into the `requests` parameter. A single entry looks like the sketch below.
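
For reference, here is one entry from the `requests` list, pairing a `custom_id` with a `params` object (the model name and values are taken from the full example below; all other Messages API parameters are optional):

```json
{
  "custom_id": "my-request-01",
  "params": {
    "model": "gpt-4o-mini-batch",
    "max_tokens": 1024,
    "messages": [
      { "role": "user", "content": "How to learn nestjs?" }
    ]
  }
}
```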

#### Create a message batch <a href="#create-a-message-batch" id="create-a-message-batch"></a>

> Create a batch of messages for asynchronous processing. All usage is charged at 50% of the standard API prices.

{% tabs %}
{% tab title="Python" %}

```python
import requests
import json

headers = {
    "Authorization": "Bearer <<API_KEY>>",
    "Content-Type": "application/json"
}

# Standard Messages API parameters shared by all three requests.
# The "text" values are placeholders; replace or drop the optional
# fields (metadata, stop_sequences, system, tools, thinking, ...)
# as needed.
common_params = {
    "model": "gpt-4o-mini-batch",
    "max_tokens": 1024,
    "metadata": {"ANY_ADDITIONAL_PROPERTY": "text"},
    "stop_sequences": ["text"],
    "system": "text",
    "temperature": 1,
    "tool_choice": None,
    "tools": [],
    "top_k": 1,
    "top_p": 1,
    "thinking": {"budget_tokens": 1024, "type": "enabled"}
}

# Each request pairs a unique custom_id with its own messages.
prompts = [
    ("my-request-01", "How to learn nestjs?"),
    ("my-request-02", "How to learn Reactjs?"),
    ("my-request-03", "How to learn Nextjs?")
]

data = {
    "requests": [
        {
            "custom_id": custom_id,
            "params": {
                **common_params,
                "messages": [{"role": "user", "content": content}]
            }
        }
        for custom_id, content in prompts
    ]
}

response = requests.post(
    "https://llm.onerouter.pro/v1/batches",
    headers=headers,
    json=data
)

result = response.json()
print("Batch created:", json.dumps(result, indent=2, ensure_ascii=False))
```

{% endtab %}
{% endtabs %}

In this example, three separate requests are batched together for asynchronous processing. Each request has a unique `custom_id` and contains the standard parameters you'd use for a Messages API call. A successful call returns the new batch object along with the uploaded input file:

```json
{
  "batch": {
    "cancelled_at": null,
    "cancelling_at": null,
    "completed_at": null,
    "completion_window": "24h",
    "created_at": 1765972352,
    "endpoint": "",
    "error_file_id": "",
    "errors": null,
    "expired_at": null,
    "expires_at": 1766058749,
    "failed_at": null,
    "finalizing_at": null,
    "id": "batch_a34c321b-ed4b-4e91-ae29-7f02939d8962",
    "in_progress_at": null,
    "input_file_id": "file-142b17fbff7d4a06a88ec9205ae143c9",
    "metadata": null,
    "object": "batch",
    "output_file_id": "",
    "request_counts": {
      "completed": 0,
      "failed": 0,
      "total": 0
    },
    "status": "validating"
  },
  "batch_id": "batch_a34c321b-ed4b-4e91-ae29-7f02939d8962",
  "file": {
    "bytes": 802,
    "created_at": 1765972347,
    "filename": "batch.jsonl",
    "id": "file-142b17fbff7d4a06a88ec9205ae143c9",
    "object": "file",
    "purpose": "batch",
    "status": "processed"
  },
  "file_id": "file-142b17fbff7d4a06a88ec9205ae143c9",
  "task_id": 2,
  "task_status": "NOT_START"
}
```
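
The top-level `batch_id` is the handle you pass to the status and cancel endpoints. Continuing from the create example above (where the parsed response is named `result`):

```python
# The same id also appears at result["batch"]["id"] in the response above.
batch_id = result["batch_id"]
print(batch_id)  # batch_a34c321b-ed4b-4e91-ae29-7f02939d8962
```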

#### Get status or results of a specific message batch <a href="#get-status-or-results-of-a-specific-message-batch" id="get-status-or-results-of-a-specific-message-batch"></a>

> Get the batch's status while it is in progress, or stream its results as JSONL once it has completed.

{% tabs %}
{% tab title="Python" %}

```python
import requests
import json

# Insert your batch_id here
batch_id = "batch_a34c321b-ed4b-4e91-ae29-7f02939d8962"

headers = {
    "Authorization": "Bearer <<API_KEY>>",
    "Content-Type": "application/json"
}

response = requests.get(
    f"https://llm.onerouter.pro/v1/batches/{batch_id}",
    headers=headers
)

print("Raw response:\n", response.text[:500])

# A completed batch streams results as JSONL (one JSON object per line);
# an in-progress batch returns a single JSON status object.
try:
    data = [json.loads(line) for line in response.text.splitlines() if line.strip()]
    print("\n✅ Parsed JSONL:")
    print(json.dumps(data, indent=2))
except json.JSONDecodeError:
    try:
        data = response.json()
        print("\n✅ Parsed JSON:")
        print(json.dumps(data, indent=2))
    except Exception as e:
        print("\n⚠️ Could not parse response:", e)
```

{% endtab %}
{% endtabs %}
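
Because batches complete asynchronously, a common pattern is to poll this endpoint until the batch leaves an in-progress state. Below is a minimal sketch; the set of in-progress status values (`validating`, `in_progress`, `finalizing`) is an assumption based on the batch object shown above, so adjust it to the values your responses actually contain:

```python
import time
import requests

API_KEY = "<<API_KEY>>"

def wait_for_batch(batch_id: str, interval: float = 30.0):
    """Poll the batch endpoint until it stops reporting an in-progress status."""
    url = f"https://llm.onerouter.pro/v1/batches/{batch_id}"
    headers = {"Authorization": f"Bearer {API_KEY}"}
    while True:
        resp = requests.get(url, headers=headers)
        try:
            payload = resp.json()
        except ValueError:
            # Completed batches stream JSONL results rather than a single JSON object.
            return resp.text
        status = (payload.get("batch") or {}).get("status") or payload.get("status")
        # Assumed in-progress values; adjust to match your account's responses.
        if status not in ("validating", "in_progress", "finalizing"):
            return payload
        time.sleep(interval)
```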

#### Cancel a specific batch <a href="#cancel-a-specific-batch" id="cancel-a-specific-batch"></a>

You can cancel a batch that is currently processing using the cancel endpoint. Immediately after cancellation, the batch's `status` becomes `cancelling`. Once cancellation completes, the batch is marked as cancelled and may contain partial results for any requests that were processed before the cancellation took effect.

{% tabs %}
{% tab title="Python" %}

```python
import requests
import json

batch_id = "batch_a34c321b-ed4b-4e91-ae29-7f02939d8962"
headers = {
    "Authorization": "Bearer <<API_KEY>>",
    "Content-Type": "application/json"
}

response = requests.post(
    f"https://llm.onerouter.pro/v1/batches/{batch_id}/cancel",
    headers=headers
)

if response.status_code == 200:
    print("Batch canceled successfully:")
else:
    print(f"Failed to cancel batch ({response.status_code}):")
print(json.dumps(response.json(), indent=2, ensure_ascii=False))
```

{% endtab %}
{% endtabs %}
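
To see how far the batch got before cancellation, you can inspect `request_counts` on the returned batch object. This sketch assumes the cancel endpoint echoes the same batch shape shown in the create response above:

```python
# Assumes the response contains the batch object shown earlier,
# either at the top level or under a "batch" key.
data = response.json()
counts = data.get("batch", data).get("request_counts", {})
print(f"completed={counts.get('completed')} "
      f"failed={counts.get('failed')} "
      f"total={counts.get('total')}")
```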

