Reasoning

Advanced reasoning capabilities with the Responses API

The Responses API supports advanced reasoning capabilities, allowing models to show their internal reasoning process with configurable effort levels.

Reasoning Configuration

Configure reasoning behavior using the reasoning parameter:

const response = await fetch('https://llm.onerouter.pro/v1/responses', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer <<API_KEY>>',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'o4-mini',
    input: 'What is the meaning of life?',
    reasoning: {
      effort: 'high'
    },
    max_output_tokens: 9000,
  }),
});

const result = await response.json();
console.log(result);

Reasoning Effort Levels

The effort parameter controls how much computational effort the model puts into reasoning:

Effort Level | Description
------------ | -----------
minimal      | Basic reasoning with minimal computational effort
low          | Light reasoning for simple problems
medium       | Balanced reasoning for moderate complexity
high         | Deep reasoning for complex problems

Complex Reasoning Example

For complex mathematical or logical problems:
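A sketch of such a request, reusing the endpoint and `reasoning` parameter from the configuration example above; the multi-step word problem in the prompt is purely illustrative:

```javascript
// Build a request body for a complex problem with high reasoning effort.
// The endpoint and parameters mirror the configuration example above.
function buildReasoningRequest(input, effort) {
  return {
    model: 'o4-mini',
    input,
    reasoning: { effort },
    max_output_tokens: 9000,
  };
}

const complexBody = buildReasoningRequest(
  'A train leaves station A at 60 km/h; another leaves station B, 300 km away, ' +
  'heading toward A at 90 km/h. When and where do they meet? Show your reasoning.',
  'high'
);

// Send it exactly as in the basic example:
async function solveComplexProblem() {
  const response = await fetch('https://llm.onerouter.pro/v1/responses', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer <<API_KEY>>',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(complexBody),
  });
  return response.json();
}
```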

Reasoning in Conversation Context

Include reasoning in multi-turn conversations:
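One way to carry reasoning across turns is to pass prior turns back in the request. The array-of-messages shape for `input` below (objects with `role` and `content`) is an assumption about this API's multi-turn input format; check the API reference for the exact shape:

```javascript
// Multi-turn input passed as an array of messages. The role/content
// message shape here is an assumption about the Responses API's array
// form of `input` — verify against the API reference.
const conversation = [
  { role: 'user', content: 'I have 3 apples and buy 2 more. How many do I have?' },
  { role: 'assistant', content: 'You have 5 apples.' },
  { role: 'user', content: 'If I give half of them away, how many remain?' },
];

const conversationBody = {
  model: 'o4-mini',
  input: conversation,
  reasoning: { effort: 'medium' },
  max_output_tokens: 9000,
};

async function continueConversation() {
  const response = await fetch('https://llm.onerouter.pro/v1/responses', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer <<API_KEY>>',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(conversationBody),
  });
  return response.json();
}
```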

Streaming Reasoning

Enable streaming to see reasoning develop in real-time:
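A minimal streaming sketch follows. Both `stream: true` and the server-sent-events framing (`data: {...}` lines terminated by `data: [DONE]`) are assumptions about this endpoint's streaming format; verify them against the API reference:

```javascript
// Pull the JSON payloads out of SSE `data:` lines, skipping the
// `[DONE]` terminator and any comment/keep-alive lines.
function parseSseLines(lines) {
  return lines
    .filter((line) => line.startsWith('data: ') && !line.includes('[DONE]'))
    .map((line) => JSON.parse(line.slice('data: '.length)));
}

// Stream a response and log each parsed event as it arrives.
// `stream: true` and the SSE framing are assumptions — see above.
async function streamReasoning(prompt) {
  const response = await fetch('https://llm.onerouter.pro/v1/responses', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer <<API_KEY>>',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'o4-mini',
      input: prompt,
      reasoning: { effort: 'high' },
      stream: true,
      max_output_tokens: 9000,
    }),
  });

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop(); // keep any partial line for the next chunk
    for (const event of parseSseLines(lines)) {
      console.log(event); // each event is one parsed JSON chunk
    }
  }
}
```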

Best Practices

  1. Choose appropriate effort levels: Use high for complex problems, low for simple tasks

  2. Consider token usage: Reasoning increases token consumption

  3. Use streaming: For long reasoning chains, streaming provides better user experience

  4. Include context: Provide sufficient context for the model to reason effectively
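To keep an eye on the extra token cost that reasoning incurs (point 2 above), you can inspect the usage information on the response. The field names below, in particular the nested `output_tokens_details.reasoning_tokens` count, are assumptions about the response shape; adjust them to what your responses actually contain:

```javascript
// Summarize token usage from a response object. The `usage` field
// names, especially `output_tokens_details.reasoning_tokens`, are
// assumptions about this API's response shape.
function summarizeUsage(result) {
  const usage = result.usage ?? {};
  return {
    input: usage.input_tokens ?? 0,
    output: usage.output_tokens ?? 0,
    reasoning: usage.output_tokens_details?.reasoning_tokens ?? 0,
  };
}

// Example with a mock response object:
const mockResult = {
  usage: {
    input_tokens: 12,
    output_tokens: 850,
    output_tokens_details: { reasoning_tokens: 640 },
  },
};
console.log(summarizeUsage(mockResult));
```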

Next Steps

  • Explore Tool Calling with reasoning

  • Review Basic Usage fundamentals

