Supported Providers and Models

HardWire supports multiple AI providers and models for chat and workflow execution.


Available Providers

Provider    Description
Google      Google's Gemini model family
OpenAI      OpenAI's GPT model family
Anthropic   Anthropic's Claude model family

Supported Models

The following models are currently available for use:

Model ID          Display Name      Provider    Input Cost (per 1M tokens)    Output Cost (per 1M tokens)
gemini-2.5-pro    gemini-2.5-pro    Google      $1.25                         $10.00
gpt-5.2           GPT-5.2           OpenAI      $1.75                         $14.00
claude-opus-4.5   Claude Opus 4.5   Anthropic   $5.00                         $25.00

Selecting a Model

Users can set their preferred model using the Update Model Preference endpoint. The selected model will be used for all chat conversations and workflow executions.

Example: Get Available Models

curl -X GET "https://api.hardwire.dev/api/models" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"

Example: Update Model Preference

curl -X PUT "https://api.hardwire.dev/api/users/model-preference" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"modelId": "claude-opus-4.5"}'

Model Capabilities

All supported models provide:

  • Chat completion - Conversational responses to user messages
  • Tool use - Ability to invoke MCP tools during workflow execution
  • Streaming - Real-time response streaming for both chat and workflows (a client sketch follows this list)
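
Example: Consuming a Streamed Response

Streamed output can be read incrementally as it arrives rather than waiting for the full reply. The chat endpoint and wire format used below are hypothetical and not documented in this section; the sketch only illustrates the generic pattern of reading a streamed HTTP body with fetch.

// Hypothetical example: /api/chat and its streaming format are NOT documented
// in this section and are assumed here purely for illustration.
async function streamChat(message: string, token: string): Promise<void> {
  const res = await fetch("https://api.hardwire.dev/api/chat", { // hypothetical endpoint
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ message }),
  });
  if (!res.body) throw new Error("No response body to stream");

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // Print each chunk as it arrives instead of buffering the full response.
    process.stdout.write(decoder.decode(value, { stream: true }));
  }
}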

Cost Considerations

Model pricing is based on token usage (a worked cost estimate follows this list):

  • Input tokens - Charged for the text sent to the model (user messages, system prompts, context)
  • Output tokens - Charged for the text generated by the model (assistant responses)
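
Example: Estimating Request Cost

The cost of a single request can be estimated from the Supported Models table above: price per 1M tokens multiplied by tokens used, divided by 1,000,000. The helper below is just that arithmetic, with the pricing numbers copied from the table.

// Per-1M-token prices from the Supported Models table (USD).
const PRICING: Record<string, { input: number; output: number }> = {
  "gemini-2.5-pro": { input: 1.25, output: 10.0 },
  "gpt-5.2": { input: 1.75, output: 14.0 },
  "claude-opus-4.5": { input: 5.0, output: 25.0 },
};

// Estimated cost in USD for one request.
function estimateCost(modelId: string, inputTokens: number, outputTokens: number): number {
  const p = PRICING[modelId];
  if (!p) throw new Error(`Unknown model: ${modelId}`);
  return (inputTokens / 1_000_000) * p.input + (outputTokens / 1_000_000) * p.output;
}

// Example: 10,000 input tokens and 2,000 output tokens on claude-opus-4.5
// costs (0.01 * $5.00) + (0.002 * $25.00) ≈ $0.10.
console.log(estimateCost("claude-opus-4.5", 10_000, 2_000)); // ≈ 0.10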

Choose the appropriate model based on your use case:

  • gemini-2.5-pro - Best value for general-purpose tasks
  • gpt-5.2 - Good balance of capability and cost
  • claude-opus-4.5 - Premium model for complex reasoning tasks