AI Services
SDK reference for AI Services operations
Overview
Access AI services for chat, decisions, and streaming with multi-provider support
- Category: internal
- Auth Required: No
- Supported Modes: standard, delegated, service
Operations
chat
Have a conversation with an AI assistant. Supports multi-turn conversations with system prompts, user messages, and assistant responses.
PROVIDER: Uses Anthropic (Claude) as the AI provider.
BEST PRACTICES:
- Use system messages to set AI behavior and constraints
- Keep conversations focused - avoid unnecessary context
MESSAGE STRUCTURE: Each message has:
- role: "system" | "user" | "assistant"
- content: string (the message text)
TYPICAL PATTERNS:
- Simple query: [{ role: "user", content: "question" }]
- With system prompt: [{ role: "system", content: "instructions" }, { role: "user", content: "question" }]
- Multi-turn: [system, user, assistant, user, assistant, ...]
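The patterns above can be written out as plain message arrays. The sketch below assumes a TypeScript SDK; the `ChatMessage` type name is illustrative and may not match the SDK's exported types.

```typescript
// Illustrative types only - the SDK's exported names may differ.
type Role = "system" | "user" | "assistant";

interface ChatMessage {
  role: Role;
  content: string; // the message text
}

// Simple query: a single user message.
const simpleQuery: ChatMessage[] = [
  { role: "user", content: "What is the capital of France?" },
];

// With system prompt: instructions first, then the question.
const withSystemPrompt: ChatMessage[] = [
  { role: "system", content: "Answer in one concise sentence." },
  { role: "user", content: "What is the capital of France?" },
];

// Multi-turn: alternate user and assistant messages after the system prompt.
const multiTurn: ChatMessage[] = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Recommend a science fiction novel." },
  { role: "assistant", content: "You might enjoy \"A Memory Called Empire\"." },
  { role: "user", content: "Something shorter, please." },
];
```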
Arguments:
- messages (array, required): Array of message objects with role ("system" | "user" | "assistant") and content (string). System messages set AI behavior, user messages are queries, assistant messages are previous AI responses.
- model (string, optional): Specific model to use. Default: "claude-3-haiku-20240307". Use Anthropic Claude model names.
- temperature (number, optional): Creativity level, 0.0-1.0. Lower = factual/consistent, higher = creative/varied. Default: 0.7.
- maxTokens (number, optional): Maximum tokens in the response. Default: 1000. Increase for longer responses (costs more tokens).
Returns:
NormalizedChatResponse - Returns a FLAT structure with: content (AI response text), model (model used), inputTokens, outputTokens, totalTokens. No nested objects.
Response Fields:
| Field | Type | Description |
|---|---|---|
| content | string | AI response text content |
| model | string | Model used for generation |
| inputTokens | number | Number of input tokens consumed |
| outputTokens | number | Number of output tokens generated |
| totalTokens | number | Total tokens (input + output) |
Example:
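A minimal sketch of a chat call, assuming a hypothetical `client.ai.chat` entry point; the real SDK may expose this operation under a different name, but the argument and response shapes follow this reference.

```typescript
// Response shape from the table above (flat, no nested objects).
interface NormalizedChatResponse {
  content: string;
  model: string;
  inputTokens: number;
  outputTokens: number;
  totalTokens: number;
}

// Hypothetical client surface - only the pieces this example needs.
interface AiClient {
  ai: {
    chat(args: {
      messages: { role: "system" | "user" | "assistant"; content: string }[];
      model?: string;
      temperature?: number;
      maxTokens?: number;
    }): Promise<NormalizedChatResponse>;
  };
}

async function summarize(client: AiClient, text: string): Promise<string> {
  const response = await client.ai.chat({
    messages: [
      { role: "system", content: "Summarize the user's text in two sentences." },
      { role: "user", content: text },
    ],
    temperature: 0.3, // lower = more factual and consistent
    maxTokens: 300,
  });

  // Flat response: fields are read directly, no nested objects.
  console.log(`Model ${response.model} used ${response.totalTokens} tokens`);
  return response.content;
}
```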
decide
Use AI to make a decision from a list of options. The AI analyzes your prompt, considers the context, and selects the most appropriate option with reasoning.
USE CASES:
- Route messages to correct handlers
- Classify user intents
- Select appropriate tools or actions
- Prioritize tasks
- Choose templates or responses
- Determine sentiment or category
HOW IT WORKS:
- Provide a prompt (the decision context)
- List available options (each with id and label)
- Optionally add extra context
- AI returns selected option ID and reasoning
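As a sketch of the inputs described above, the option shape might look like this in TypeScript; the type name is illustrative and not part of the SDK.

```typescript
// Illustrative option shape - the id is what comes back as selectedOption.
interface DecisionOption {
  id: string;                          // unique identifier
  label: string;                       // clear, descriptive name
  metadata?: Record<string, unknown>;  // optional extra data for your own use
}

const routingOptions: DecisionOption[] = [
  { id: "billing", label: "Billing or payment question", metadata: { queue: "finance" } },
  { id: "technical", label: "Technical problem or bug report", metadata: { queue: "support" } },
  { id: "sales", label: "Pricing or purchase inquiry", metadata: { queue: "sales" } },
];
```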
BEST PRACTICES:
- Make option labels clear and descriptive
- Use unique IDs for options
- Add context when decision needs background info
- Keep prompt focused on the decision criteria
- Use metadata field for additional option data
Arguments:
- prompt (string, required): The decision prompt - what needs to be decided and why.
- options (array, required): Array of options to choose from. Each option must have: id (unique identifier), label (descriptive name), and optional metadata (additional data).
- context (string, optional): Additional context to help the AI make a better decision.
- model (string, optional): Specific model to use. Defaults to the system default.
Returns:
NormalizedDecideResponse - Returns a FLAT structure with: selectedOption (chosen option ID), reasoning (explanation of why it was chosen). No nested objects.
Response Fields:
| Field | Type | Description |
|---|---|---|
| selectedOption | string | ID of the selected option |
| reasoning | string | Explanation of why this option was chosen |
Example:
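A minimal sketch of a decide call used for intent routing, assuming a hypothetical `client.ai.decide` entry point; the argument and response shapes follow this reference, but the entry point name is an assumption.

```typescript
// Response shape from the table above (flat, no nested objects).
interface NormalizedDecideResponse {
  selectedOption: string; // ID of the chosen option
  reasoning: string;      // why the AI chose it
}

// Hypothetical client surface - only the pieces this example needs.
interface AiClient {
  ai: {
    decide(args: {
      prompt: string;
      options: { id: string; label: string; metadata?: Record<string, unknown> }[];
      context?: string;
      model?: string;
    }): Promise<NormalizedDecideResponse>;
  };
}

async function routeMessage(client: AiClient, userMessage: string): Promise<string> {
  const result = await client.ai.decide({
    prompt: "Route this customer message to the correct support queue.",
    options: [
      { id: "billing", label: "Billing or payment question" },
      { id: "technical", label: "Technical problem or bug report" },
      { id: "sales", label: "Pricing or purchase inquiry" },
    ],
    context: `Customer message: ${userMessage}`, // extra background for the decision
  });

  console.log(`Selected ${result.selectedOption}: ${result.reasoning}`);
  return result.selectedOption;
}
```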