AI Services
SDK reference for AI Services operations
Overview
Access AI services for chat, decisions, and streaming with multi-provider support
- Category: internal
- Auth Required: No
- Supported Modes: standard, delegated, service
Operations
chat
Have a conversation with an AI assistant. Supports multi-turn conversations with system prompts, user messages, and assistant responses.
PROVIDER: Uses Anthropic (Claude) as the AI provider.
BEST PRACTICES:
- Use system messages to set AI behavior and constraints
- Keep conversations focused - avoid unnecessary context
MESSAGE STRUCTURE: Each message has:
- role: "system" | "user" | "assistant"
- content: string (the message text)
TYPICAL PATTERNS:
- Simple query: [{ role: "user", content: "question" }]
- With system prompt: [{ role: "system", content: "instructions" }, { role: "user", content: "question" }]
- Multi-turn: [system, user, assistant, user, assistant, ...]
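The three patterns above can be sketched as typed data. The `ChatMessage` type is an assumption inferred from the message structure documented here, not a name exported by the SDK:

```typescript
// Assumed shape of a chat message, based on the fields documented above.
type ChatMessage = {
  role: "system" | "user" | "assistant";
  content: string;
};

// Simple query: a single user message
const simple: ChatMessage[] = [
  { role: "user", content: "What is a mutex?" },
];

// With system prompt: instructions first, then the question
const withSystem: ChatMessage[] = [
  { role: "system", content: "Answer in one sentence." },
  { role: "user", content: "What is a mutex?" },
];

// Multi-turn: alternate user and assistant messages after the system prompt
const multiTurn: ChatMessage[] = [
  { role: "system", content: "You are a concise assistant." },
  { role: "user", content: "Name a statically typed language." },
  { role: "assistant", content: "TypeScript." },
  { role: "user", content: "Name another." },
];
```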
Arguments:
- message (string, optional): Simple string shorthand for single-turn queries. Auto-wrapped into a messages array. Use "messages" for multi-turn conversations.
- messages (array, optional): Array of message objects with role ("system" | "user" | "assistant") and content (string). System messages set AI behavior, user messages are queries, assistant messages are previous AI responses.
- model (string, optional): Specific model to use. Default: "claude-3-haiku-20240307". Use Anthropic Claude model names.
- temperature (number, optional): Creativity level 0.0-1.0. Lower = factual/consistent, higher = creative/varied. Default: 0.7
- maxTokens (number, optional): Maximum tokens in the response. Default: 1000. Increase for longer responses (costs more tokens).
Returns:
NormalizedChatResponse - Returns FLAT structure with: content (AI response text), model (model used), inputTokens, outputTokens, totalTokens. No nested objects.
Response Fields:
| Field | Type | Description |
|---|---|---|
| content | string | AI response text content |
| model | string | Model used for generation |
| inputTokens | number | Number of input tokens consumed |
| outputTokens | number | Number of output tokens generated |
| totalTokens | number | Total tokens (input + output) |
Example:
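A minimal request sketch. The `client.ai.chat` call path shown in the comment is an assumption, so check your generated SDK for the exact method name; the request fields come from the argument list above:

```typescript
// Hypothetical chat request: field names and defaults follow the
// argument documentation above.
const chatRequest = {
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Summarize the benefits of caching." },
  ],
  model: "claude-3-haiku-20240307", // documented default
  temperature: 0.3,                 // lower = more factual/consistent
  maxTokens: 500,
};

// const res = await client.ai.chat(chatRequest);  // method name assumed
// res.content     -> AI response text (flat structure, no nesting)
// res.totalTokens -> inputTokens + outputTokens
```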
decide
Use AI to make a decision from a list of options. The AI analyzes your prompt, considers the context, and selects the most appropriate option with reasoning.
USE CASES:
- Route messages to correct handlers
- Classify user intents
- Select appropriate tools or actions
- Prioritize tasks
- Choose templates or responses
- Determine sentiment or category
HOW IT WORKS:
- Provide a prompt (the decision context)
- List available options (each with id and label)
- Optionally add extra context
- AI returns selected option ID and reasoning
BEST PRACTICES:
- Make option labels clear and descriptive
- Use unique IDs for options
- Add context when decision needs background info
- Keep prompt focused on the decision criteria
- Use metadata field for additional option data
Arguments:
- prompt (string, required): The decision prompt - what needs to be decided and why
- options (array, required): Array of options to choose from. Each option must have: id (unique identifier), label (descriptive name), and optional metadata (additional data)
- context (string, optional): Additional context to help the AI make a better decision
- model (string, optional): Specific model to use. Defaults to the system default.
Returns:
NormalizedDecideResponse - Returns FLAT structure with: selectedOption (chosen option ID), reasoning (explanation of why chosen). No nested objects.
Response Fields:
| Field | Type | Description |
|---|---|---|
| selectedOption | string | ID of the selected option |
| reasoning | string | Explanation of why this option was chosen |
Example:
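A request sketch for a routing decision. The `client.ai.decide` call in the comment and the option ids are illustrative assumptions; the `DecideOption` type mirrors the option fields documented above:

```typescript
// Assumed option shape, based on the argument documentation above.
type DecideOption = {
  id: string;
  label: string;
  metadata?: Record<string, unknown>;
};

const decideRequest = {
  prompt: "Route this support ticket to the right team.",
  options: [
    { id: "billing", label: "Billing and payments" },
    { id: "tech", label: "Technical support", metadata: { sla: "4h" } },
    { id: "sales", label: "Sales inquiries" },
  ] as DecideOption[],
  context: "Ticket text: 'I was charged twice for my subscription.'",
};

// const res = await client.ai.decide(decideRequest);  // method name assumed
// res.selectedOption -> e.g. "billing"
// res.reasoning      -> explanation of the choice
```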
agent
Run an AI agent that can call tools across multiple rounds. The agent receives a conversation, decides which tools to use, executes them, and continues until the task is complete or max rounds are reached.
TOOL ACCESS:
- Specify adapter names in "tools" array to limit which adapters the agent can use
- Omit "tools" to give the agent access to ALL connected adapters
- Tools are referenced by camelCase SDK name (e.g., "memory", "googleCalendar", "telegram")
USE CASES:
- Multi-step research: agent searches memory, reads documents, synthesizes answer
- Automated workflows: agent creates calendar events, sends messages, updates records
- Data processing: agent queries data, analyzes results, stores findings
Arguments:
- messages (array, required): Conversation messages array with role and content
- tools (array, optional): Adapter names to give the agent access to. Omit for all adapters.
- systemPrompt (string, optional): System prompt to guide agent behavior
- model (string, optional): Model to use. Default: claude-sonnet-4-20250514
- temperature (number, optional): Temperature 0.0-1.0. Default: 0.5
- maxTokens (number, optional): Max tokens per LLM call. Default: 4096
- maxRounds (number, optional): Max tool-calling rounds. Default: 10, max: 25
Returns:
AgentResponse - Final text, token usage, round count, and full tool call history
Response Fields:
| Field | Type | Description |
|---|---|---|
| content | string | Final text from the agent |
| model | string | Model used for generation |
| inputTokens | number | Total input tokens across all rounds |
| outputTokens | number | Total output tokens across all rounds |
| totalTokens | number | Total tokens (input + output) |
| rounds | number | Number of tool-calling rounds executed |
| toolCalls | array | Full history of tool calls made |
| stopReason | string | Why the agent stopped: end_turn, max_rounds, error, or abort |
Example:
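An agent request sketch. The adapter names follow the camelCase convention documented under TOOL ACCESS; the `client.ai.agent` call in the comment is an assumed method name:

```typescript
// Hypothetical agent request: limits the agent to two adapters and
// caps the tool-calling rounds below the default of 10.
const agentRequest = {
  messages: [
    {
      role: "user",
      content: "Find my notes about Q3 planning and send a summary to Telegram.",
    },
  ],
  tools: ["memory", "telegram"], // omit this field to allow ALL connected adapters
  systemPrompt: "Be brief. Confirm before sending messages.",
  maxRounds: 5, // default 10, max 25
};

// const res = await client.ai.agent(agentRequest);  // method name assumed
// res.stopReason -> "end_turn" | "max_rounds" | "error" | "abort"
// res.toolCalls  -> full history of tool calls across res.rounds rounds
```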
computerUse
Proxy for the Anthropic Computer Use API. Forwards requests to Anthropic's Messages API with computer use beta headers and returns the raw response. You handle the tool execution loop (screenshots, clicks, typing) on your side — Mirra handles auth and billing.
HOW IT WORKS:
- Send messages with computer use tool definitions
- Receive response with tool_use blocks (screenshot, click, type, etc.)
- Execute the actions on your machine/VM
- Send tool_result back (including base64 screenshots) in the next request
- Loop until stopReason is "end_turn"
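The loop above can be sketched with the transport and action executor injected, since you implement both against your own machine/VM. Every function and type name here is a placeholder, not an SDK export; the `tool_result` block shape follows Anthropic's Messages API format:

```typescript
// Placeholder types approximating the relevant parts of the response.
type CUBlock = { type: string; id?: string; input?: unknown };
type CUResponse = { stopReason: string; content: CUBlock[] };

// Client-side tool execution loop. `call` forwards to the computerUse
// proxy; `execute` performs the action locally and returns a base64
// screenshot of the result.
async function runComputerUseLoop(
  initialMessages: unknown[],
  call: (messages: unknown[]) => Promise<CUResponse>,
  execute: (input: unknown) => Promise<string>,
): Promise<CUResponse> {
  const messages = [...initialMessages];
  for (;;) {
    const res = await call(messages);
    if (res.stopReason === "end_turn") return res; // task complete
    const results: unknown[] = [];
    for (const block of res.content) {
      if (block.type !== "tool_use") continue;
      const screenshot = await execute(block.input); // click, type, etc.
      results.push({
        type: "tool_result",
        tool_use_id: block.id,
        content: [
          {
            type: "image",
            source: { type: "base64", media_type: "image/png", data: screenshot },
          },
        ],
      });
    }
    // Echo the assistant turn, then answer with the tool results.
    messages.push({ role: "assistant", content: res.content });
    messages.push({ role: "user", content: results });
  }
}
```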
BILLING:
- Tokens are charged at a 6x multiplier (same as Sonnet pricing tier)
- Screenshots consume image input tokens (the main cost driver)
- tokensCharged field shows actual tokens deducted from your balance
MODEL:
- Only Sonnet models are supported (claude-sonnet-4-6 default)
- Opus models are not available for computer use via this endpoint
TOOL TYPES:
- computer_20251124: Mouse, keyboard, and screenshot actions
- text_editor_20250728: File editing tool
- bash_20250124: Shell command execution
Arguments:
- messages (array, required): Anthropic-format messages array. Include tool_result blocks with base64 screenshots when responding to tool_use requests.
- tools (array, optional): Anthropic computer use tool definitions. Defaults to the computer tool with a 1024x768 display if omitted.
- model (string, optional): Model to use. Default: claude-sonnet-4-6. Only Sonnet models are supported.
- maxTokens (number, optional): Maximum tokens in response. Default: 4096.
- system (string, optional): System prompt to guide computer use behavior.
- temperature (number, optional): Temperature 0.0-1.0. Default: 1.0 (Anthropic's recommendation for computer use).
Returns:
ComputerUseResponse - Raw Anthropic response with content blocks (text + tool_use), token usage, and tokensCharged (after 6x multiplier).
Response Fields:
| Field | Type | Description |
|---|---|---|
| content | array | Raw Anthropic content blocks (text + tool_use) |
| model | string | Model used |
| stopReason | string | end_turn or tool_use |
| inputTokens | number | Raw input tokens from Anthropic |
| outputTokens | number | Raw output tokens from Anthropic |
| totalTokens | number | Total raw tokens (input + output) |
| tokensCharged | number | Actual tokens deducted from balance (after 6x multiplier) |
Example:
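A first-request sketch. The `client.ai.computerUse` call in the comment is an assumed method name; the tool type string comes from the TOOL TYPES list above, and the display dimension fields follow Anthropic's computer use tool definition (check the Anthropic docs for the exact parameter names):

```typescript
// Hypothetical initial computer use request: no tool_result blocks yet,
// since no actions have been executed.
const cuRequest = {
  model: "claude-sonnet-4-6", // only Sonnet models are supported
  maxTokens: 4096,
  system: "You are operating a 1024x768 virtual desktop.",
  tools: [
    {
      type: "computer_20251124",
      name: "computer",
      display_width_px: 1024,
      display_height_px: 768,
    },
  ],
  messages: [
    { role: "user", content: "Open the browser and take a screenshot." },
  ],
};

// const res = await client.ai.computerUse(cuRequest);  // method name assumed
// res.content       -> raw Anthropic blocks (text + tool_use)
// res.tokensCharged -> tokens deducted after the 6x multiplier
```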