# Text

This quickstart walks you through making your first text generation request with Infron.

### Using the Infron API directly

{% tabs %}
{% tab title="Curl" %}

```sh
curl https://llm.onerouter.pro/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_KEY" \
  -d '{
  "model": "deepseek/deepseek-v3.2",
  "messages": [
    {
      "role": "user",
      "content": "What is the meaning of life?"
    }
  ]
}'
```

{% endtab %}

{% tab title="Python" %}

```python
import requests

response = requests.post(
  url="https://llm.onerouter.pro/v1/chat/completions",
  headers={
    "Authorization": "Bearer <API_KEY>",
    "Content-Type": "application/json"
  },
  # requests serializes the dict and sets the JSON content type for us
  json={
    "model": "deepseek/deepseek-v3.2",
    "messages": [
      {
        "role": "user",
        "content": "What is the meaning of life?"
      }
    ]
  }
)
print(response.json()["choices"][0]["message"]["content"])
```

{% endtab %}

{% tab title="TypeScript" %}

```typescript
fetch('https://llm.onerouter.pro/v1/chat/completions', {
  method: 'POST',
  headers: {
    Authorization: 'Bearer <API_KEY>',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'deepseek/deepseek-v3.2',
    messages: [
      {
        role: 'user',
        content: 'What is the meaning of life?',
      },
    ],
  }),
})
  .then((response) => response.json())
  .then((data) => console.log(data.choices[0].message.content));
```

{% endtab %}
{% endtabs %}
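
A successful request returns an OpenAI-compatible completion object; the generated text lives at `choices[0].message.content`, which is what the examples above print. An abridged sketch of the response shape (field values are illustrative, not actual API output):

```json
{
  "id": "chatcmpl-...",
  "model": "deepseek/deepseek-v3.2",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The meaning of life is..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 14,
    "completion_tokens": 42,
    "total_tokens": 56
  }
}
```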

The API also supports [streaming](https://infronai.gitbook.io/docs/api-reference/llm-model-api/streaming).
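
With `"stream": true` in the request body, the response arrives incrementally. Assuming the API emits OpenAI-style server-sent events (`data:` lines ending with `data: [DONE]` — see the streaming docs linked above for the authoritative format), a minimal sketch of extracting the content deltas looks like this:

```python
import json

def iter_stream_content(lines):
    """Yield content deltas from OpenAI-style SSE lines."""
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines
        payload = line[len("data: "):]
        if payload.strip() == "[DONE]":
            break  # sentinel marking the end of the stream
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if delta.get("content"):
            yield delta["content"]

# Canned SSE lines for illustration; in practice you would iterate
# over response.iter_lines(decode_unicode=True) from a streaming request.
sse = [
    'data: {"choices":[{"delta":{"content":"Hello"}}]}',
    'data: {"choices":[{"delta":{"content":" world"}}]}',
    'data: [DONE]',
]
print("".join(iter_stream_content(sse)))  # prints "Hello world"
```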

### Using the OpenAI SDK

Get started with just a few lines of code using your preferred SDK or framework.

{% tabs %}
{% tab title="Python" %}

```python
from openai import OpenAI

client = OpenAI(
  base_url="https://llm.onerouter.pro/v1",
  api_key="<API_KEY>",
)

completion = client.chat.completions.create(
  model="deepseek/deepseek-v3.2",
  messages=[
    {
      "role": "user",
      "content": "What is the meaning of life?"
    }
  ]
)

print(completion.choices[0].message.content)
```

{% endtab %}

{% tab title="TypeScript" %}

```typescript
import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'https://llm.onerouter.pro/v1',
  apiKey: '<API_KEY>',
});

async function main() {
  const completion = await openai.chat.completions.create({
    model: 'deepseek/deepseek-v3.2',
    messages: [
      {
        role: 'user',
        content: 'What is the meaning of life?',
      },
    ],
  });

  console.log(completion.choices[0].message.content);
}

main();
```

{% endtab %}
{% endtabs %}

### Using third-party SDKs

For information about using third-party SDKs and frameworks with Infron, please see our [frameworks documentation](https://infronai.gitbook.io/docs/frameworks-and-integrations/overview).

### Next steps

* Learn about [provider and model routing with fallbacks](https://infronai.gitbook.io/docs/routing-and-gateway/inference-provider-routing)
* Try other APIs: [OpenAI-compatible](https://app.gitbook.com/s/oWo5LeOZTLqTSLX7mP7F/openai-compatible-api/overview), [Anthropic-compatible](https://app.gitbook.com/s/oWo5LeOZTLqTSLX7mP7F/anthropic-compatible-api/overview), or [OpenResponses](https://app.gitbook.com/s/oWo5LeOZTLqTSLX7mP7F/openresponses-api/overview)
