Mirra
SDK Reference

AI Services

SDK reference for AI services operations

Overview

Access AI services for chat, decisions, and streaming with multi-provider support

  • Category: internal
  • Auth Required: No
  • Supported Modes: standard, delegated, service

Operations

chat

Have a conversation with an AI assistant. Supports multi-turn conversations with system prompts, user messages, and assistant responses.

PROVIDER: Uses Anthropic (Claude) as the AI provider.

BEST PRACTICES:

  • Use system messages to set AI behavior and constraints
  • Keep conversations focused - avoid unnecessary context

MESSAGE STRUCTURE: Each message has:

  • role: "system" | "user" | "assistant"
  • content: string (the message text)

TYPICAL PATTERNS:

  1. Simple query: [{ role: "user", content: "question" }]
  2. With system prompt: [{ role: "system", content: "instructions" }, { role: "user", content: "question" }]
  3. Multi-turn: [system, user, assistant, user, assistant, ...]
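
The three patterns above, written out as message arrays (the conversation text here is illustrative):

// Pattern 1: simple query
const simple = [{ role: "user", content: "What is a webhook?" }];

// Pattern 2: a system prompt constrains behavior before the user question
const withSystem = [
  { role: "system", content: "Answer in one short paragraph." },
  { role: "user", content: "What is a webhook?" }
];

// Pattern 3: multi-turn, replaying earlier assistant responses as context
const multiTurn = [
  { role: "system", content: "You are a support assistant." },
  { role: "user", content: "My webhook stopped firing." },
  { role: "assistant", content: "Has the endpoint URL changed recently?" },
  { role: "user", content: "Yes, we moved domains last week." }
];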

Arguments:

  • messages (array, required): Array of message objects with role ("system" | "user" | "assistant") and content (string). System messages set AI behavior, user messages are queries, assistant messages are previous AI responses.
  • model (string, optional): Specific model to use. Default: "claude-3-haiku-20240307". Use Anthropic Claude model names.
  • temperature (number, optional): Creativity level from 0.0 to 1.0. Lower values give more factual, consistent output; higher values give more creative, varied output. Default: 0.7
  • maxTokens (number, optional): Maximum tokens in response. Default: 1000. Increase for longer responses (costs more tokens).
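
As a rough TypeScript sketch, the argument list above implies shapes like the following (hypothetical type names; the SDK's actual exported types are not shown in this reference):

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string; // the message text
}

interface ChatArgs {
  messages: ChatMessage[]; // required
  model?: string;          // default: "claude-3-haiku-20240307"
  temperature?: number;    // 0.0-1.0, default: 0.7
  maxTokens?: number;      // default: 1000
}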

Returns:

NormalizedChatResponse - Returns a flat structure with: content (AI response text), model (model used), inputTokens, outputTokens, totalTokens. No nested objects.

Response Fields:

  • content (string): AI response text content
  • model (string): Model used for generation
  • inputTokens (number): Number of input tokens consumed
  • outputTokens (number): Number of output tokens generated
  • totalTokens (number): Total tokens (input + output)

Example:

const result = await mirra.ai.chat({
  messages: [{ role: "user", content: "What is a webhook?" }]
});
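
A fuller call that sets the optional arguments and then reads the flat response fields (values are illustrative; the model name is the documented default, written out explicitly):

const detailed = await mirra.ai.chat({
  messages: [
    { role: "system", content: "Answer in one sentence." },
    { role: "user", content: "What does the temperature argument control?" }
  ],
  model: "claude-3-haiku-20240307", // documented default
  temperature: 0.2,                 // low: factual, consistent output
  maxTokens: 500                    // cap response length (and cost)
});

console.log(detailed.content);     // AI response text
console.log(detailed.totalTokens); // inputTokens + outputTokens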

decide

Use AI to make a decision from a list of options. The AI analyzes your prompt, considers the context, and selects the most appropriate option with reasoning.

USE CASES:

  • Route messages to correct handlers
  • Classify user intents
  • Select appropriate tools or actions
  • Prioritize tasks
  • Choose templates or responses
  • Determine sentiment or category

HOW IT WORKS:

  1. Provide a prompt (the decision context)
  2. List available options (each with id and label)
  3. Optionally add extra context
  4. AI returns selected option ID and reasoning
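
The four steps map directly onto a single call (option ids, labels, and metadata here are illustrative):

const decision = await mirra.ai.decide({
  // 1. The decision context
  prompt: "Classify the intent of: 'Cancel my subscription immediately.'",
  // 2. Available options, each with an id and label (metadata is optional)
  options: [
    { id: "cancel", label: "Cancellation request", metadata: { priority: "high" } },
    { id: "question", label: "General question" }
  ],
  // 3. Optional extra context
  context: "Messages arrive from the in-app chat widget."
});

// 4. The AI returns the selected option ID and its reasoning
console.log(decision.selectedOption, decision.reasoning);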

BEST PRACTICES:

  • Make option labels clear and descriptive
  • Use unique IDs for options
  • Add context when decision needs background info
  • Keep prompt focused on the decision criteria
  • Use metadata field for additional option data

Arguments:

  • prompt (string, required): The decision prompt - what needs to be decided and why
  • options (array, required): Array of options to choose from. Each option must have: id (unique identifier), label (descriptive name), and optional metadata (additional data)
  • context (string, optional): Additional context to help the AI make a better decision
  • model (string, optional): Specific model to use. Defaults to system default.
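
As with chat, the argument list implies shapes along these lines (hypothetical type names, sketched from the descriptions above):

interface DecideOption {
  id: string;                          // unique identifier
  label: string;                       // descriptive name
  metadata?: Record<string, unknown>;  // optional additional data
}

interface DecideArgs {
  prompt: string;           // what needs to be decided and why
  options: DecideOption[];
  context?: string;         // background info for a better decision
  model?: string;           // defaults to the system default
}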

Returns:

NormalizedDecideResponse - Returns a flat structure with: selectedOption (the chosen option ID) and reasoning (an explanation of why it was chosen). No nested objects.

Response Fields:

  • selectedOption (string): ID of the selected option
  • reasoning (string): Explanation of why this option was chosen

Example:

const result = await mirra.ai.decide({
  prompt: "Which category best fits: 'My invoice shows the wrong amount.'?",
  options: [
    { id: "billing", label: "Billing issue" },
    { id: "technical", label: "Technical issue" }
  ]
});
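
Because the response is flat, selectedOption can be mapped straight back to the option object that was passed in (a sketch; ids and labels are illustrative):

const options = [
  { id: "billing", label: "Billing issue" },
  { id: "technical", label: "Technical issue" }
];

const decision = await mirra.ai.decide({
  prompt: "Which category best fits: 'My invoice shows the wrong amount.'?",
  options
});

// Look up the chosen option by its returned ID
const chosen = options.find(o => o.id === decision.selectedOption);
console.log(`Chose ${chosen?.label}: ${decision.reasoning}`);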
