Q AI — MEV Intelligence Platform

Q AI is the intelligence layer of the YoorQuezt MEV platform. It provides natural language access to the MEV engine, agentic tool-use for automated operations, forensic analysis, and multi-provider LLM support. Q understands MEV strategies, blockchain state, and engine operations -- enabling operators, searchers, and partners to interact with the platform through conversation.

Architecture

┌─────────────────────────────────────────────────────────────────┐
│                         Clients                                  │
│   Web Dashboard  |  TUI (yqtui)  |  CLI (yqctl)  |  SDKs       │
└────────┬──────────────┬───────────────┬───────────────┬──────────┘
         │              │               │               │
         ▼              ▼               ▼               ▼
┌─────────────────────────────────────────────────────────────────┐
│                      Q AI API (:9100)                            │
│   /chat  |  /chat/stream  |  /chat/confirm  |  /tools          │
├─────────────────────────────────────────────────────────────────┤
│                   Agentic Tool-Use Loop                          │
│   NL Input → Provider → Tool Calls → Execution → Response      │
├──────────────┬──────────────────────────────────────────────────┤
│  MCP Server  │              Tool Registry (40+)                  │
│  (16 tools)  │  Engine | Bundle | Builder | Validator | Ops     │
└──────┬───────┴──────────────┬───────────────────────────────────┘
       │                      │
       ▼                      ▼
┌──────────────┐    ┌─────────────────┐    ┌──────────────────┐
│ MEV Engine   │    │ WebSocket GW    │    │  External APIs   │
│ (:8080)      │    │ (:9099)         │    │  (chains, relays)│
└──────────────┘    └─────────────────┘    └──────────────────┘

Agentic Tool-Use Loop

Q AI uses an agentic loop where the LLM decides which tools to call based on the user's natural language query:

  1. User input -- natural language query arrives via API, TUI, CLI, or SDK.
  2. Context assembly -- Q gathers conversation history, user role, and available tools.
  3. LLM reasoning -- the provider (Claude, GPT, etc.) analyzes the query and selects tools.
  4. Tool execution -- Q executes the selected tools against the MEV engine and blockchain APIs.
  5. Result synthesis -- the LLM synthesizes tool results into a natural language response.
  6. Confirmation gate -- mutating actions require explicit user confirmation before execution.
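
The loop above can be sketched in Python. The `ToolCall` shape, provider interface, and tool registry here are illustrative assumptions, not Q AI's internal types:

```python
from dataclasses import dataclass, field

# Hypothetical shapes -- the real provider and tool interfaces are internal to Q AI.
@dataclass
class ToolCall:
    name: str
    args: dict = field(default_factory=dict)

MUTATING_TOOLS = {"bundle_submit", "bundle_cancel"}  # stop at the confirmation gate

def run_agent_turn(query, provider, tools, confirmed=False):
    """One pass of the loop: the LLM plans, tools execute, the LLM synthesizes."""
    tool_calls = provider.plan(query, sorted(tools))          # step 3: select tools
    results = {}
    for call in tool_calls:
        if call.name in MUTATING_TOOLS and not confirmed:
            return {"status": "needs_confirmation", "tool": call.name}  # step 6
        results[call.name] = tools[call.name](**call.args)              # step 4
    return {"status": "ok", "answer": provider.synthesize(query, results)}  # step 5
```

A read-only query flows straight through; a mutating tool call short-circuits with `needs_confirmation` until the client confirms the turn.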

Q AI Agents

Agent    | Role         | Capabilities
Sentinel | Monitoring   | Real-time MEV detection, mempool watching, anomaly alerts
Bundler  | Execution    | Bundle construction, simulation, submission, status tracking
Solver   | Optimization | Intent solving, route optimization, gas estimation
Ops      | Operations   | Health monitoring, relay management, configuration, diagnostics

RBAC (Role-Based Access Control)

Role     | Description                      | Tool Access
admin    | Full platform access             | All tools
operator | Engine operations                | Engine, builder, validator, ops tools
searcher | Bundle submission and monitoring | Bundle, simulator tools
analyst  | Read-only analytics              | Read-only engine, analytics tools
viewer   | Basic read access                | Health, status, public metrics
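
As an illustration, a role-to-tool mapping like the one above could be enforced with a simple prefix check. The prefix sets below are inferred from the table, not the platform's actual RBAC configuration:

```python
# Illustrative RBAC check: each role maps to the tool-name prefixes it may call.
# Prefix sets inferred from the role table above (assumption, not real config).
ROLE_PREFIXES = {
    "admin":    {"engine_", "bundle_", "builder_", "validator_", "ops_", "analyst_"},
    "operator": {"engine_", "builder_", "validator_", "ops_"},
    "searcher": {"bundle_"},
    "analyst":  {"engine_", "analyst_"},
    "viewer":   {"engine_health", "engine_status"},
}

def can_call(role: str, tool: str) -> bool:
    """Return True if the role is allowed to invoke the named tool."""
    return any(tool.startswith(prefix) for prefix in ROLE_PREFIXES.get(role, ()))
```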

Tools (40+)

Tools are organized by category:

Engine (5 tools)

  • engine_health -- Check engine health and uptime
  • engine_metrics -- Get Prometheus metrics
  • engine_config -- View engine configuration
  • engine_chains -- List connected chains
  • engine_status -- Comprehensive engine status

Bundle (10 tools)

  • bundle_submit -- Submit a bundle (mutating, requires confirmation)
  • bundle_status -- Check bundle status
  • bundle_simulate -- Simulate a bundle
  • bundle_cancel -- Cancel a pending bundle (mutating)
  • bundle_list -- List recent bundles
  • bundle_profit -- Bundle profit breakdown
  • bundle_history -- Historical bundle data
  • bundle_gas -- Gas estimation for bundle
  • bundle_validate -- Validate bundle before submission
  • bundle_decode -- Decode bundle transactions
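
The confirmation requirement on mutating tools such as bundle_submit and bundle_cancel can be sketched as a two-phase token exchange. The token format and in-memory store are assumptions for illustration, not the actual /chat/confirm implementation:

```python
import secrets

# Hypothetical confirmation gate: phase 1 parks the mutation and hands the
# client a token; phase 2 (/chat/confirm) redeems it exactly once.
_pending: dict[str, dict] = {}

def request_mutation(tool: str, args: dict) -> str:
    """Park a mutating tool call and return a one-time confirmation token."""
    token = secrets.token_hex(8)
    _pending[token] = {"tool": tool, "args": args}
    return token

def confirm_mutation(token: str) -> dict:
    """Redeem a token; a token can be used only once."""
    action = _pending.pop(token, None)
    if action is None:
        raise KeyError("unknown or already-used confirmation token")
    return action  # handed off to the tool executor
```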

Builder (6 tools)

  • builder_blocks -- List recent blocks
  • builder_block -- Get block details
  • builder_pending -- View pending block template
  • builder_stats -- Builder statistics
  • builder_relays -- Connected relay status
  • builder_bids -- Recent builder bids

Validator (6 tools)

  • validator_auctions -- List auctions
  • validator_auction -- Auction details
  • validator_bids -- Bids in an auction
  • validator_settlements -- Settlement history
  • validator_payouts -- Payout history
  • validator_stats -- Validator statistics

Operator (8 tools)

  • ops_health -- System health overview
  • ops_relays -- Relay management
  • ops_mempool -- Mempool inspection
  • ops_peers -- Connected peers
  • ops_config -- Runtime configuration
  • ops_logs -- Recent log entries
  • ops_alerts -- Active alerts
  • ops_diagnostics -- Run diagnostics

Analyst (7 tools)

  • analyst_profit -- Profit analytics
  • analyst_strategies -- Strategy performance
  • analyst_chains -- Per-chain analytics
  • analyst_trends -- MEV trend analysis
  • analyst_competition -- Competitive analysis
  • analyst_gas -- Gas market analysis
  • analyst_forensics -- MEV forensic reports

MCP Ops (16 tools)

See Q AI -- MCP Server for the complete MCP tool reference.

MCP Server

Q AI includes a Model Context Protocol (MCP) server with 16 read-only tools for platform observability. See MCP Server.

Interfaces

Interface      | Description        | Q AI Access
Web Dashboard  | Browser-based UI   | Chat widget, streaming responses
TUI (yqtui)    | Terminal dashboard | Tab 6 - integrated chat
CLI (yqctl)    | Command-line tool  | Pipe output to Q for analysis
TypeScript SDK | @yoorquezt/sdk-mev | QMEVClient.chat() / chatStream()
Python SDK     | yoorquezt-sdk-mev  | QMEVClient.chat() / chat_stream_iter()
REST API       | HTTP endpoints     | /api/v1/chat, /api/v1/chat/stream
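
A client consuming /api/v1/chat/stream might parse events as below. The SSE-style `data:` framing and `[DONE]` sentinel are assumptions about the wire format, not documented behavior:

```python
import json

def iter_stream_events(lines):
    """Yield parsed events from an assumed SSE-framed chat stream: each event
    arrives as a `data: {...}` line, and `data: [DONE]` ends the stream."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            return
        yield json.loads(payload)
```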

Provider Architecture

Q AI supports multiple LLM providers with automatic fallback:

Provider  | Models                | Best For
Anthropic | Claude Opus, Sonnet   | Complex reasoning, tool-use, long context
OpenAI    | GPT-4o, GPT-4-turbo   | General purpose, fast responses
Meta      | Llama 3 70B, 8B       | Self-hosted, cost-effective
Mistral   | Mistral Large, Medium | European compliance, fast
DeepSeek  | DeepSeek V3, Coder    | Code analysis, math-heavy queries

Model selection strategy:

  • Default: Claude Sonnet for balance of speed and quality.
  • Complex queries: Automatic escalation to Claude Opus or GPT-4o for multi-step reasoning.
  • High-throughput: Llama 3 8B or Mistral Medium for simple status queries.
  • Fallback chain: Primary provider timeout triggers fallback to next provider.
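
The fallback chain can be sketched as trying each provider in order; the provider callables and error handling here are illustrative:

```python
# Minimal sketch of the provider fallback chain: try providers in priority
# order and fall through on timeout or error (callables are assumptions).
def chat_with_fallback(query, providers, timeout_s=30.0):
    """providers: ordered list of (name, callable) pairs."""
    errors = []
    for name, call in providers:
        try:
            return name, call(query, timeout_s)
        except Exception as exc:  # timeout or provider failure -> next provider
            errors.append((name, exc))
    raise RuntimeError(f"all providers failed: {errors}")
```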

Environment Variables

Variable           | Description           | Default
Q_AI_PORT          | API listen port       | 9100
Q_AI_ENGINE_URL    | MEV engine URL        | http://localhost:8080
Q_AI_GATEWAY_URL   | Gateway WebSocket URL | ws://localhost:9099
ANTHROPIC_API_KEY  | Anthropic API key     | (required)
OPENAI_API_KEY     | OpenAI API key        | (optional fallback)
Q_AI_DEFAULT_MODEL | Default model         | claude-sonnet-4-20250514
Q_AI_MAX_TOKENS    | Max response tokens   | 4096
Q_AI_TEMPERATURE   | LLM temperature       | 0.1
Q_AI_LOG_LEVEL     | Log level             | info
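
A minimal sketch of loading this configuration with its documented defaults (the function name and dict shape are illustrative):

```python
import os

def load_config(env=None):
    """Read Q AI settings from the environment, using the documented defaults."""
    env = os.environ if env is None else env
    if "ANTHROPIC_API_KEY" not in env:
        raise RuntimeError("ANTHROPIC_API_KEY is required")
    return {
        "port": int(env.get("Q_AI_PORT", "9100")),
        "engine_url": env.get("Q_AI_ENGINE_URL", "http://localhost:8080"),
        "gateway_url": env.get("Q_AI_GATEWAY_URL", "ws://localhost:9099"),
        "default_model": env.get("Q_AI_DEFAULT_MODEL", "claude-sonnet-4-20250514"),
        "max_tokens": int(env.get("Q_AI_MAX_TOKENS", "4096")),
        "temperature": float(env.get("Q_AI_TEMPERATURE", "0.1")),
        "log_level": env.get("Q_AI_LOG_LEVEL", "info"),
    }
```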

Partner Integration Tiers

Tier       | Rate Limit     | Features
Free       | 100 req/day    | Chat, read-only tools, basic analytics
Pro        | 10,000 req/day | All tools, streaming, forensics, webhooks
Enterprise | Unlimited      | Custom agents, dedicated instance, SLA, SSO

See Partner Integration for details.
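
The tier rate limits can be sketched as a per-partner daily counter. The in-memory store and reset policy are assumptions; a real deployment would use a shared store with daily resets:

```python
# Illustrative daily rate-limit check per partner tier (None = unlimited).
TIER_LIMITS = {"free": 100, "pro": 10_000, "enterprise": None}

_counts: dict[str, int] = {}  # partner_id -> requests used today (assumed store)

def allow_request(partner_id: str, tier: str) -> bool:
    """Admit the request if the partner is under its tier's daily limit."""
    limit = TIER_LIMITS[tier]
    used = _counts.get(partner_id, 0)
    if limit is not None and used >= limit:
        return False
    _counts[partner_id] = used + 1
    return True
```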

Example Queries

"What's the engine health?"
"Show me profit from the last 24 hours by chain"
"Simulate this bundle: [0xabc..., 0xdef...]"
"What MEV was extracted from block 19482300?"
"Show me the top 5 strategies by profit this week"
"Is there a sandwich attack pattern on Uniswap V3 WETH/USDC?"
"Submit this bundle to Flashbots relay"  (requires confirmation)
"Compare relay performance over the last 7 days"
"What's the current gas market look like?"
"Generate a forensic report for wallet 0x1234..."