Use Carapace as a local OpenAI-compatible endpoint

Outcome

Expose /v1/chat/completions and /v1/responses on your local Carapace service and call them with standard OpenAI-style HTTP requests.

Prerequisites

- Carapace installed, with the cara command on your PATH
- An Anthropic API key exported as ANTHROPIC_API_KEY (the config below references it)
- curl for the example requests, plus openssl or PowerShell to generate a token

1) Create config

Export a gateway token:

export CARAPACE_GATEWAY_TOKEN="$(openssl rand -hex 32)"

Windows (PowerShell) alternative:

$bytes = [byte[]]::new(32)
[System.Security.Cryptography.RandomNumberGenerator]::Fill($bytes)
$env:CARAPACE_GATEWAY_TOKEN = [System.BitConverter]::ToString($bytes).Replace('-', '').ToLower()
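If neither openssl nor PowerShell is handy, Python's standard secrets module produces an equivalent token; this is a sketch, and any 32 bytes of cryptographically secure randomness will do:

```python
import secrets

# 32 random bytes rendered as 64 lowercase hex characters,
# matching the `openssl rand -hex 32` output format above.
token = secrets.token_hex(32)
print(token)
```

Export the printed value as CARAPACE_GATEWAY_TOKEN before starting the service.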
Save the following as ./carapace.json5 (the run command in step 2 reads it from this path):

{
  "gateway": {
    "bind": "loopback",
    "port": 18789,
    "auth": {
      "mode": "token",
      "token": "${CARAPACE_GATEWAY_TOKEN}"
    },
    "openai": {
      "chatCompletions": true,
      "responses": true
    }
  },
  "anthropic": {
    "apiKey": "${ANTHROPIC_API_KEY}"
  }
}
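How Carapace itself resolves the ${VAR} placeholders in this file is an assumption here; the sketch below only illustrates the intended substitution, replacing each placeholder with the matching environment variable and leaving unknown names untouched:

```python
import json
import re

# Matches placeholders of the form ${SOME_ENV_VAR} as used in the config above.
PLACEHOLDER = re.compile(r"\$\{([A-Z0-9_]+)\}")

def resolve_placeholders(text, env):
    """Substitute ${VAR} with env[VAR]; leave unknown placeholders as-is."""
    return PLACEHOLDER.sub(lambda m: env.get(m.group(1), m.group(0)), text)

raw = '{"auth": {"mode": "token", "token": "${CARAPACE_GATEWAY_TOKEN}"}}'
resolved = json.loads(resolve_placeholders(raw, {"CARAPACE_GATEWAY_TOKEN": "abc123"}))
```

If the literal string ${CARAPACE_GATEWAY_TOKEN} shows up in errors or logs, the variable was not set in the shell that launched the service.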

2) Run commands

Start the service, pointing it at the config file:

CARAPACE_CONFIG_PATH=./carapace.json5 cara

Call chat completions:

curl -sS \
  -H "Authorization: Bearer ${CARAPACE_GATEWAY_TOKEN}" \
  -H "Content-Type: application/json" \
  -X POST http://127.0.0.1:18789/v1/chat/completions \
  -d '{
    "model": "carapace",
    "messages": [
      {"role":"user","content":"Say hello in one sentence."}
    ]
  }'
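The same chat completions call can be made from Python's standard library. This is a sketch assuming the defaults above (port 18789, token exported as CARAPACE_GATEWAY_TOKEN), not an official client:

```python
import json
import os
import urllib.request

BASE = "http://127.0.0.1:18789/v1"

def build_chat_request(prompt, model="carapace"):
    """Build the same POST the curl example sends, without sending it."""
    token = os.environ.get("CARAPACE_GATEWAY_TOKEN", "")
    body = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        f"{BASE}/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def chat(prompt):
    """Send the request and return the assistant's reply text."""
    with urllib.request.urlopen(build_chat_request(prompt)) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

The choices[0].message.content path follows the standard chat completions response shape.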

Call responses:

curl -sS \
  -H "Authorization: Bearer ${CARAPACE_GATEWAY_TOKEN}" \
  -H "Content-Type: application/json" \
  -X POST http://127.0.0.1:18789/v1/responses \
  -d '{
    "model": "carapace",
    "input": "List three secure defaults for a local AI assistant."
  }'
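The responses endpoint can be called the same way; another sketch under the same assumptions, mirroring the input field from the curl body above:

```python
import json
import os
import urllib.request

def build_responses_request(input_text, model="carapace",
                            base="http://127.0.0.1:18789/v1"):
    """Build the POST for /v1/responses shown in the curl example."""
    token = os.environ.get("CARAPACE_GATEWAY_TOKEN", "")
    body = {"model": model, "input": input_text}
    return urllib.request.Request(
        f"{base}/responses",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def create_response(input_text):
    """Send the request and return the parsed JSON body."""
    with urllib.request.urlopen(build_responses_request(input_text)) as resp:
        return json.loads(resp.read())
```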

3) Verify

A successful call returns HTTP 200 with a JSON body: the chat completions response carries the reply under choices, and the responses endpoint returns the generated output in its body. An HTTP 401 means the bearer token in the request does not match the token the gateway was configured with.

Common failures and fixes

- 401 Unauthorized: the Authorization header does not match gateway.auth.token. Re-export CARAPACE_GATEWAY_TOKEN and restart the service so the config picks it up.
- Connection refused: the service is not running, or the request targets the wrong port. Confirm gateway.port (18789 above) and call 127.0.0.1, since bind is loopback.
- A literal ${CARAPACE_GATEWAY_TOKEN} appearing in errors: the environment variable was not set in the shell that started cara.

Need a recipe for your use case?

Tell us what outcome you want and we can prioritize a walkthrough.