Mirra
SDK Reference

AI Services

SDK reference for AI services operations

Overview

Access AI services for chat, decisions, and streaming with multi-provider support

  • Category: internal
  • Auth Required: No
  • Supported Modes: standard, delegated, service

Operations

chat

Have a conversation with an AI assistant. Supports multi-turn conversations with system prompts, user messages, and assistant responses.

PROVIDER: Uses Anthropic (Claude) as the AI provider.

BEST PRACTICES:

  • Use system messages to set AI behavior and constraints
  • Keep conversations focused; avoid unnecessary context

MESSAGE STRUCTURE: Each message has:

  • role: "system" | "user" | "assistant"
  • content: string (the message text)

TYPICAL PATTERNS:

  1. Simple query: [{ role: "user", content: "question" }]
  2. With system prompt: [{ role: "system", content: "instructions" }, { role: "user", content: "question" }]
  3. Multi-turn: [system, user, assistant, user, assistant, ...]
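The three patterns above can be sketched in TypeScript. The `ChatRole` and `ChatMessage` type names here are illustrative, not exported by the SDK:

```typescript
// Illustrative types matching the message structure above; the SDK's own
// exported types may differ.
type ChatRole = "system" | "user" | "assistant";
interface ChatMessage {
  role: ChatRole;
  content: string;
}

// 1. Simple query
const simple: ChatMessage[] = [{ role: "user", content: "question" }];

// 2. With system prompt
const withSystem: ChatMessage[] = [
  { role: "system", content: "instructions" },
  { role: "user", content: "question" },
];

// 3. Multi-turn: user and assistant messages alternate after the
// optional system message.
const multiTurn: ChatMessage[] = [
  { role: "system", content: "instructions" },
  { role: "user", content: "first question" },
  { role: "assistant", content: "first answer" },
  { role: "user", content: "follow-up" },
];
```

Any of these arrays can be passed as the `messages` argument to `chat`.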

Arguments:

  • message (string, optional): Simple string shorthand for single-turn queries. Auto-wrapped into messages array. Use "messages" for multi-turn conversations.
  • messages (array, optional): Array of message objects with role ("system" | "user" | "assistant") and content (string). System messages set AI behavior, user messages are queries, assistant messages are previous AI responses.
  • model (string, optional): Specific model to use. Default: "claude-3-haiku-20240307". Use Anthropic Claude model names.
  • temperature (number, optional): Creativity level 0.0-1.0. Lower values give more factual, consistent output; higher values give more creative, varied output. Default: 0.7
  • maxTokens (number, optional): Maximum tokens in response. Default: 1000. Increase for longer responses (costs more tokens).

Returns:

NormalizedChatResponse - Returns FLAT structure with: content (AI response text), model (model used), inputTokens, outputTokens, totalTokens. No nested objects.

Response Fields:

| Field | Type | Description |
| --- | --- | --- |
| content | string | AI response text content |
| model | string | Model used for generation |
| inputTokens | number | Number of input tokens consumed |
| outputTokens | number | Number of output tokens generated |
| totalTokens | number | Total tokens (input + output) |

Example:

const result = await mirra.ai.chat({
  messages: [
    { role: "system", content: "You are a concise assistant." },
    { role: "user", content: "Summarize what a webhook is in one sentence." }
  ],
  maxTokens: 500
});

console.log(result.content, result.totalTokens);

decide

Use AI to make a decision from a list of options. The AI analyzes your prompt, considers the context, and selects the most appropriate option with reasoning.

USE CASES:

  • Route messages to correct handlers
  • Classify user intents
  • Select appropriate tools or actions
  • Prioritize tasks
  • Choose templates or responses
  • Determine sentiment or category

HOW IT WORKS:

  1. Provide a prompt (the decision context)
  2. List available options (each with id and label)
  3. Optionally add extra context
  4. AI returns selected option ID and reasoning

BEST PRACTICES:

  • Make option labels clear and descriptive
  • Use unique IDs for options
  • Add context when decision needs background info
  • Keep prompt focused on the decision criteria
  • Use metadata field for additional option data

Arguments:

  • prompt (string, required): The decision prompt - what needs to be decided and why
  • options (array, required): Array of options to choose from. Each option must have: id (unique identifier), label (descriptive name), and optional metadata (additional data)
  • context (string, optional): Additional context to help the AI make a better decision
  • model (string, optional): Specific model to use. Defaults to system default.

Returns:

NormalizedDecideResponse - Returns FLAT structure with: selectedOption (chosen option ID), reasoning (explanation of why chosen). No nested objects.

Response Fields:

| Field | Type | Description |
| --- | --- | --- |
| selectedOption | string | ID of the selected option |
| reasoning | string | Explanation of why this option was chosen |

Example:

const result = await mirra.ai.decide({
  prompt: "Route this support message to the right team: 'My invoice shows the wrong amount.'",
  options: [
    { id: "billing", label: "Billing team" },
    { id: "tech", label: "Technical support" },
    { id: "sales", label: "Sales inquiries" }
  ]
});

console.log(result.selectedOption, result.reasoning);

agent

Run an AI agent that can call tools across multiple rounds. The agent receives a conversation, decides which tools to use, executes them, and continues until the task is complete or max rounds are reached.

TOOL ACCESS:

  • Specify adapter names in "tools" array to limit which adapters the agent can use
  • Omit "tools" to give the agent access to ALL connected adapters
  • Tools are referenced by camelCase SDK name (e.g., "memory", "googleCalendar", "telegram")

USE CASES:

  • Multi-step research: agent searches memory, reads documents, synthesizes answer
  • Automated workflows: agent creates calendar events, sends messages, updates records
  • Data processing: agent queries data, analyzes results, stores findings

Arguments:

  • messages (array, required): Conversation messages array with role and content
  • tools (array, optional): Adapter names to give the agent access to. Omit for all adapters.
  • systemPrompt (string, optional): System prompt to guide agent behavior
  • model (string, optional): Model to use. Default: claude-sonnet-4-20250514
  • temperature (number, optional): Temperature 0.0-1.0. Default: 0.5
  • maxTokens (number, optional): Max tokens per LLM call. Default: 4096
  • maxRounds (number, optional): Max tool-calling rounds. Default: 10, max: 25

Returns:

AgentResponse - Final text, token usage, round count, and full tool call history

Response Fields:

| Field | Type | Description |
| --- | --- | --- |
| content | string | Final text from the agent |
| model | string | Model used for generation |
| inputTokens | number | Total input tokens across all rounds |
| outputTokens | number | Total output tokens across all rounds |
| totalTokens | number | Total tokens (input + output) |
| rounds | number | Number of tool-calling rounds executed |
| toolCalls | array | Full history of tool calls made |
| stopReason | string | Why the agent stopped: end_turn, max_rounds, error, or abort |

Example:

const result = await mirra.ai.agent({
  messages: [
    { role: "user", content: "Find my notes about the Q3 roadmap and summarize them." }
  ],
  tools: ["memory"],
  maxRounds: 5
});

console.log(result.content, result.rounds, result.stopReason);

computerUse

Proxy for the Anthropic Computer Use API. Forwards requests to Anthropic's Messages API with computer use beta headers and returns the raw response. You handle the tool execution loop (screenshots, clicks, typing) on your side — Mirra handles auth and billing.

HOW IT WORKS:

  1. Send messages with computer use tool definitions
  2. Receive response with tool_use blocks (screenshot, click, type, etc.)
  3. Execute the actions on your machine/VM
  4. Send tool_result back (including base64 screenshots) in the next request
  5. Loop until stopReason is "end_turn"

BILLING:

  • Tokens are charged at a 6x multiplier (same as Sonnet pricing tier)
  • Screenshots consume image input tokens (the main cost driver)
  • tokensCharged field shows actual tokens deducted from your balance

MODEL:

  • Only Sonnet models are supported (claude-sonnet-4-6 default)
  • Opus models are not available for computer use via this endpoint

TOOL TYPES:

  • computer_20251124: Mouse, keyboard, and screenshot actions
  • text_editor_20250728: File editing tool
  • bash_20250124: Shell command execution

Arguments:

  • messages (array, required): Anthropic-format messages array. Include tool_result blocks with base64 screenshots when responding to tool_use requests.
  • tools (array, optional): Anthropic computer use tool definitions. Defaults to computer tool with 1024x768 display if omitted.
  • model (string, optional): Model to use. Default: claude-sonnet-4-6. Only Sonnet models are supported.
  • maxTokens (number, optional): Maximum tokens in response. Default: 4096.
  • system (string, optional): System prompt to guide computer use behavior.
  • temperature (number, optional): Temperature 0.0-1.0. Default: 1.0 (Anthropic recommended for computer use).

Returns:

ComputerUseResponse - Raw Anthropic response with content blocks (text + tool_use), token usage, and tokensCharged (after 6x multiplier).

Response Fields:

| Field | Type | Description |
| --- | --- | --- |
| content | array | Raw Anthropic content blocks (text + tool_use) |
| model | string | Model used |
| stopReason | string | end_turn or tool_use |
| inputTokens | number | Raw input tokens from Anthropic |
| outputTokens | number | Raw output tokens from Anthropic |
| totalTokens | number | Total raw tokens (input + output) |
| tokensCharged | number | Actual tokens deducted from balance (after 6x multiplier) |

Example:

const result = await mirra.ai.computerUse({
  messages: [
    { role: "user", content: "Take a screenshot and describe what you see." }
  ],
  system: "You are operating a 1024x768 desktop."
});

console.log(result.stopReason, result.tokensCharged);
