Q AI — API Reference

The Q AI API provides HTTP endpoints for chat (synchronous and streaming), tool discovery, and action confirmation.

Endpoints

| Method | Path | Auth | Description |
|--------|------|------|-------------|
| GET | /healthz | None | Health check |
| GET | /metrics | None | Prometheus metrics |
| POST | /api/v1/chat | Bearer + Role | Synchronous chat |
| POST | /api/v1/chat/stream | Bearer + Role | Streaming chat (SSE) |
| POST | /api/v1/chat/confirm | Bearer + Role | Confirm a mutating action |
| GET | /api/v1/tools | Bearer + Role | List available tools for role |

Authentication

All /api/v1/* endpoints require:

  • Authorization: Bearer <api-key> -- API key for authentication.
  • X-MEV-Role: <role> -- user role for RBAC tool filtering.

curl -H "Authorization: Bearer your-api-key" \
     -H "X-MEV-Role: operator" \
     http://localhost:9100/api/v1/tools

POST /api/v1/chat

Synchronous chat endpoint. Sends a message, executes the full agentic loop, and returns the complete response.

Request

{
  "message": "What's the profit from the last 24 hours?",
  "conversation_id": "conv_abc123",
  "model": "claude-sonnet-4-20250514",
  "max_tokens": 4096,
  "temperature": 0.1
}

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| message | string | yes | User message |
| conversation_id | string | no | Conversation ID for context continuity |
| model | string | no | Override model selection |
| max_tokens | integer | no | Max response tokens (default: 4096) |
| temperature | float | no | LLM temperature (default: 0.1) |

Response

{
  "id": "msg_xyz789",
  "conversation_id": "conv_abc123",
  "role": "assistant",
  "content": "Total profit in the last 24 hours: **2.847 ETH** across 142 landed bundles.\n\n- Triangular arb: 1.23 ETH (43%)\n- Sandwich: 0.89 ETH (31%)\n- Binary arb: 0.727 ETH (26%)\n\nTop chain: Ethereum mainnet (2.1 ETH).",
  "tools_used": ["analyst_profit", "analyst_chains"],
  "model": "claude-sonnet-4-20250514",
  "usage": {
    "input_tokens": 1240,
    "output_tokens": 312
  },
  "created_at": "2026-03-15T14:30:00Z"
}

Confirmation Required Response

When Q AI needs to execute a mutating action, it returns a message with type confirmation_required instead of executing immediately. The action runs only after the client approves it via /api/v1/chat/confirm:

{
  "id": "msg_xyz790",
  "conversation_id": "conv_abc123",
  "role": "assistant",
  "type": "confirmation_required",
  "content": "I'll submit this 2-tx bundle targeting block 19,482,300 on Ethereum. Proceed?",
  "confirm_id": "conf_8a7b6c",
  "action": {
    "tool": "bundle_submit",
    "params": {
      "txs": ["0xabc...", "0xdef..."],
      "chain": "ethereum",
      "blockNumber": 19482300
    }
  },
  "created_at": "2026-03-15T14:30:05Z"
}
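The round trip can be sketched in Python using only the standard library. This is a minimal sketch, not a definitive client: the base URL, API key, and helper names (`_post`, `confirm_body`, `chat`) are illustrative, and error handling is omitted.

```python
# Sketch of the two-step confirmation flow against the endpoints above.
# Base URL and API key are placeholders; error handling is omitted.
import json
import urllib.request

BASE = "http://localhost:9100"
HEADERS = {
    "Content-Type": "application/json",
    "Authorization": "Bearer your-api-key",
    "X-MEV-Role": "operator",
}

def _post(path: str, body: dict) -> dict:
    """POST a JSON body and decode the JSON response."""
    req = urllib.request.Request(
        BASE + path, data=json.dumps(body).encode(), headers=HEADERS, method="POST"
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def confirm_body(confirm_id: str, approve: bool) -> dict:
    """Request body for POST /api/v1/chat/confirm."""
    return {"confirm_id": confirm_id, "confirmed": approve}

def chat(message: str, approve: bool = True) -> dict:
    reply = _post("/api/v1/chat", {"message": message})
    # Mutating actions come back as type=confirmation_required and must be
    # approved (or rejected) via /api/v1/chat/confirm before they execute.
    if reply.get("type") == "confirmation_required":
        reply = _post("/api/v1/chat/confirm",
                      confirm_body(reply["confirm_id"], approve))
    return reply
```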

POST /api/v1/chat/stream

Streaming chat endpoint using Server-Sent Events (SSE). Returns tokens as they are generated.

Request

Same body as /api/v1/chat.

SSE Response Format

event: message_start
data: {"id":"msg_xyz789","conversation_id":"conv_abc123","model":"claude-sonnet-4-20250514"}

event: content_delta
data: {"delta":"Total profit"}

event: content_delta
data: {"delta":" in the last 24 hours: "}

event: content_delta
data: {"delta":"**2.847 ETH**"}

event: tool_use
data: {"tool":"analyst_profit","status":"executing"}

event: tool_result
data: {"tool":"analyst_profit","result":{"total_eth":"2.847","bundles":142}}

event: content_delta
data: {"delta":" across 142 landed bundles."}

event: message_end
data: {"usage":{"input_tokens":1240,"output_tokens":312},"tools_used":["analyst_profit"]}
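A small Python parser for the frames above, shown as a sketch: it follows the standard SSE framing (an `event:` line naming the type, one or more `data:` lines, and a blank line ending each frame; `message` is the default event type). The function name `parse_sse` is illustrative.

```python
# Minimal parser for the SSE frames shown above: collects "event:" and
# "data:" lines and yields (event, payload) at each blank-line frame boundary.
import json
from typing import Iterator

def parse_sse(lines) -> Iterator[tuple[str, dict]]:
    event, data = "message", []  # "message" is the SSE default event type
    for line in lines:
        if line.startswith("event: "):
            event = line[7:]
        elif line.startswith("data: "):
            data.append(line[6:])
        elif line == "" and data:  # blank line terminates a frame
            yield event, json.loads("\n".join(data))
            event, data = "message", []
    if data:  # flush a trailing frame with no final blank line
        yield event, json.loads("\n".join(data))

frames = list(parse_sse([
    "event: content_delta",
    'data: {"delta":"Total profit"}',
    "",
    "event: message_end",
    'data: {"usage":{"input_tokens":1240,"output_tokens":312}}',
]))
# frames[0] == ("content_delta", {"delta": "Total profit"})
```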

SSE Event Types

| Event | Description |
|-------|-------------|
| message_start | Start of response; includes message ID and model |
| content_delta | Incremental text content |
| tool_use | Tool execution started |
| tool_result | Tool execution completed with result |
| confirmation_required | Mutating action needs user confirmation |
| error | Error occurred during processing |
| message_end | End of response; includes usage stats |

Consuming SSE in TypeScript

const response = await fetch("http://localhost:9100/api/v1/chat/stream", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": "Bearer your-api-key",
    "X-MEV-Role": "operator",
  },
  body: JSON.stringify({ message: "Show me relay stats" }),
});

const reader = response.body!.getReader();
const decoder = new TextDecoder();
let buffer = "";

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  // SSE lines can be split across network chunks, so buffer the trailing
  // partial line instead of parsing each chunk in isolation.
  buffer += decoder.decode(value, { stream: true });
  const lines = buffer.split("\n");
  buffer = lines.pop() ?? "";
  for (const line of lines) {
    if (line.startsWith("data: ")) {
      const data = JSON.parse(line.slice(6));
      if (data.delta) process.stdout.write(data.delta);
    }
  }
}

POST /api/v1/chat/confirm

Confirm or reject a mutating action requested by Q AI.

Request

{
  "confirm_id": "conf_8a7b6c",
  "confirmed": true
}

Response (confirmed)

{
  "id": "msg_xyz791",
  "conversation_id": "conv_abc123",
  "role": "assistant",
  "content": "Bundle submitted successfully.\n\nBundle hash: `0x9f8e7d...`\nTarget block: 19,482,300\nStatus: pending",
  "tools_used": ["bundle_submit"],
  "created_at": "2026-03-15T14:30:10Z"
}

Response (rejected)

{
  "id": "msg_xyz791",
  "conversation_id": "conv_abc123",
  "role": "assistant",
  "content": "Bundle submission cancelled.",
  "created_at": "2026-03-15T14:30:10Z"
}

GET /api/v1/tools

List all tools available to the authenticated user's role.

Response

{
  "tools": [
    {
      "name": "engine_health",
      "description": "Check engine health and uptime",
      "category": "engine",
      "mutating": false,
      "requires_confirmation": false,
      "parameters": {
        "type": "object",
        "properties": {},
        "required": []
      }
    },
    {
      "name": "bundle_submit",
      "description": "Submit a bundle to the MEV engine",
      "category": "bundle",
      "mutating": true,
      "requires_confirmation": true,
      "parameters": {
        "type": "object",
        "properties": {
          "txs": { "type": "array", "items": { "type": "string" } },
          "chain": { "type": "string" },
          "blockNumber": { "type": "integer" }
        },
        "required": ["txs", "chain"]
      }
    }
  ],
  "total": 42,
  "role": "operator"
}
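Clients can use the `category`, `mutating`, and `requires_confirmation` flags to decide how to present each tool, e.g. which ones need a confirmation dialog. A small sketch operating purely on the JSON payload above (the helper name `summarize_tools` is illustrative):

```python
# Group a /api/v1/tools response by category and collect the tools that
# will trigger a confirmation round-trip. Pure function over the payload.
from collections import defaultdict

def summarize_tools(payload: dict) -> tuple[dict[str, list[str]], list[str]]:
    by_category: dict[str, list[str]] = defaultdict(list)
    needs_confirmation: list[str] = []
    for tool in payload["tools"]:
        by_category[tool["category"]].append(tool["name"])
        if tool.get("requires_confirmation"):
            needs_confirmation.append(tool["name"])
    return dict(by_category), needs_confirmation

# With a trimmed-down version of the sample response above:
payload = {
    "tools": [
        {"name": "engine_health", "category": "engine", "requires_confirmation": False},
        {"name": "bundle_submit", "category": "bundle", "requires_confirmation": True},
    ],
}
categories, confirm = summarize_tools(payload)
# categories == {"engine": ["engine_health"], "bundle": ["bundle_submit"]}
# confirm == ["bundle_submit"]
```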

Roles

| Role | Accessible Tool Categories |
|------|----------------------------|
| admin | All tools (engine, bundle, builder, validator, ops, analyst, mcp) |
| operator | engine, bundle, builder, validator, ops |
| searcher | bundle (submit, status, simulate, cancel, list, profit) |
| analyst | engine (read-only), analyst, mcp |
| viewer | engine_health, engine_status |

Configuration

| Variable | Description | Default |
|----------|-------------|---------|
| Q_AI_PORT | API listen port | 9100 |
| Q_AI_ENGINE_URL | MEV engine URL | http://localhost:8080 |
| Q_AI_GATEWAY_URL | Gateway WebSocket URL | ws://localhost:9099 |
| Q_AI_MCP_URL | MCP server URL | http://localhost:9101 |
| Q_AI_REDIS_URL | Redis URL for conversation cache | redis://localhost:6379 |
| ANTHROPIC_API_KEY | Anthropic API key | (required) |
| OPENAI_API_KEY | OpenAI API key (fallback) | (optional) |
| Q_AI_DEFAULT_MODEL | Default LLM model | claude-sonnet-4-20250514 |
| Q_AI_MAX_TOKENS | Default max tokens | 4096 |
| Q_AI_TEMPERATURE | Default temperature | 0.1 |
| Q_AI_CONVERSATION_TTL | Conversation cache TTL | 24h |
| Q_AI_RATE_LIMIT | Requests per minute per key | 60 |
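For a local deployment, the table above might translate into an environment like the following. This is only a sketch: the key values are placeholders, and any variable left unset falls back to its default.

```shell
# Example environment for a local deployment; values mirror the defaults above.
export Q_AI_PORT=9100
export Q_AI_ENGINE_URL=http://localhost:8080
export Q_AI_REDIS_URL=redis://localhost:6379
export ANTHROPIC_API_KEY=your-anthropic-key   # required
export OPENAI_API_KEY=your-openai-key         # optional fallback
export Q_AI_DEFAULT_MODEL=claude-sonnet-4-20250514
export Q_AI_RATE_LIMIT=60
```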

Code Examples

curl

# Synchronous chat
curl -X POST http://localhost:9100/api/v1/chat \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key" \
  -H "X-MEV-Role: operator" \
  -d '{"message": "What is the engine health?"}'

# List tools
curl -H "Authorization: Bearer your-api-key" \
     -H "X-MEV-Role: operator" \
     http://localhost:9100/api/v1/tools

# Streaming chat
curl -N -X POST http://localhost:9100/api/v1/chat/stream \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key" \
  -H "X-MEV-Role: operator" \
  -d '{"message": "Show me profit by chain"}'

Python

import json

import httpx

client = httpx.Client(
    base_url="http://localhost:9100",
    headers={
        "Authorization": "Bearer your-api-key",
        "X-MEV-Role": "operator",
    },
)

# Chat
response = client.post("/api/v1/chat", json={
    "message": "What's the profit from the last 24 hours?"
})
print(response.json()["content"])

# Streaming: decode each data line and print content deltas as they arrive
with client.stream("POST", "/api/v1/chat/stream", json={
    "message": "Analyze relay performance"
}) as stream:
    for line in stream.iter_lines():
        if line.startswith("data: "):
            event = json.loads(line[6:])
            if "delta" in event:
                print(event["delta"], end="", flush=True)