Chat Plugin
Overview
The Chat plugin provides AI chat capabilities for the Token Ring ecosystem. It enables AI-powered conversations with advanced context management, tool integration, and interactive command-line controls. The plugin integrates seamlessly with the Token Ring application framework and supports multiple AI providers through a provider registry system.
Key Features
- AI Chat Interface: Seamless integration with multiple AI models and providers
- Context Management: Intelligent context handling with automatic compaction and customizable context sources
- Tool Integration: Extensible tool system with plugin architecture and wildcard matching
- Interactive Commands: Rich command set for chat management including `/chat`, `/model`, `/tools`, and `/compact`
- State Preservation: Persistent chat history with message history management and a message stack for undo operations
- Multi-Provider Support: Works with various AI model providers (OpenAI, Anthropic, etc.)
- Interactive Selection: Tree-based UI for model and tool selection
- Feature Management: Advanced model feature flags and capabilities
- Context Debugging: Display and inspect chat context for transparency
- Parallel/Sequential Tool Execution: Configurable tool execution mode with queue-based processing
- Token Usage Analytics: Detailed breakdown of input/output tokens, costs, and timing
- RPC Endpoints: Remote procedure call support for chat management
- Tool Call Artifacts: Automatic output of tool call requests and responses as artifacts
Core Components
ChatService
The main chat service class that manages AI chat functionality. The service implements TokenRingService and provides:
- AI model configuration and selection with auto-selection capabilities
- Chat message history and state management
- Tool registration and management with wildcard support
- Context handlers for building chat requests
- Interactive command handling through the agent system
- State persistence and serialization
- RPC endpoints for remote management
Context Handlers
Context handlers build the AI chat request by gathering relevant information. Each handler is an async generator that yields `ChatInputMessage` items:
- `system-message`: Adds system prompts (supports dynamic system prompts via functions)
- `prior-messages`: Includes previous conversation history with intelligent truncation
- `current-message`: Adds the current user input
- `tool-context`: Includes context from enabled tools based on their required context handlers
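The exact handler signature lives in the package's schema; as a rough self-contained sketch (the type shapes and names here are illustrative, not the plugin's actual API), a handler is an async generator that yields messages:

```typescript
// Illustrative types -- the real ChatInputMessage/ContextHandler shapes
// are defined in @tokenring-ai/chat and may differ.
type ChatInputMessage = { role: "system" | "user" | "assistant"; content: string };
type ContextHandler = (input: string) => AsyncGenerator<ChatInputMessage>;

// A hypothetical handler that yields a timestamp note before the user input.
const timestampHandler: ContextHandler = async function* (input: string) {
  yield { role: "system", content: `Current time: ${new Date().toISOString()}` };
  yield { role: "user", content: input };
};

// Drain a handler into a flat message list, preserving yield order.
async function collect(handler: ContextHandler, input: string): Promise<ChatInputMessage[]> {
  const messages: ChatInputMessage[] = [];
  for await (const message of handler(input)) messages.push(message);
  return messages;
}
```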
Chat Commands
Interactive commands for chat management:
- `/chat`: Send messages and manage chat settings with subcommands (`send`, `settings`, `feature`, `context`)
- `/model`: Set or show the target AI model with interactive selection
- `/compact`: Compact conversation context by summarizing prior messages
- `/tools`: List, enable, disable, or set enabled tools with a tree-based UI
ChatServiceState
The state management class that tracks:
- Current configuration (model, tools, context settings, etc.)
- Chat message history with timestamps
- Tool execution queue (sequential by default)
- Parallel/sequential tool execution mode
- Message stack for undo operations
- Initial config for reset operations
API Reference
ChatService Methods
Model Management
| Method | Description |
|---|---|
| `setModel(model: string, agent: Agent)` | Set the AI model for the agent |
| `getModel(agent: Agent)` | Get the current model name or `null` |
| `requireModel(agent: Agent)` | Get the current model or throw an error |
Configuration Management
| Method | Description |
|---|---|
| `getChatConfig(agent: Agent)` | Get the current chat configuration |
| `updateChatConfig(aiConfig: Partial<ParsedChatConfig>, agent: Agent)` | Apply partial updates to the configuration |
Message History Management
| Method | Description |
|---|---|
| `getChatMessages(agent: Agent)` | Get all chat messages |
| `getLastMessage(agent: Agent)` | Get the last message or `null` |
| `pushChatMessage(message: StoredChatMessage, agent: Agent)` | Add a message to history |
| `clearChatMessages(agent: Agent)` | Clear all messages |
| `popMessage(agent: Agent)` | Remove the last message (undo) |
Tool Management
| Method | Description |
|---|---|
| `addTools(tools: Record<string, TokenRingToolDefinition>)` | Register tools from a package |
| `getAvailableToolNames()` | Get all available tool names |
| `getAvailableTools()` | Get all available tool definitions |
| `getToolNamesLike(pattern: string)` | Get tool names matching a pattern |
| `ensureToolNamesLike(pattern: string)` | Expand wildcard patterns to tool names |
| `getEnabledTools(agent: Agent)` | Get enabled tool names |
| `setEnabledTools(toolNames: string[], agent: Agent)` | Set the exact list of enabled tools |
| `enableTools(toolNames: string[], agent: Agent)` | Enable additional tools |
| `disableTools(toolNames: string[], agent: Agent)` | Disable tools |
| `requireTool(toolName)` | Get a tool by name |
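The table mentions wildcard support but not its semantics. A minimal self-contained sketch of glob-style expansion, assuming only `*` is special and matches any substring (the plugin's real matcher may differ):

```typescript
// Convert a glob-style pattern like "web-*" into an anchored RegExp.
// Assumption: only "*" is special; everything else matches literally.
function toolPatternToRegExp(pattern: string): RegExp {
  const escaped = pattern
    .replace(/[.*+?^${}()|[\]\\]/g, "\\$&") // escape regex metacharacters
    .replace(/\\\*/g, ".*");                // re-enable "*" as a wildcard
  return new RegExp(`^${escaped}$`);
}

// Filter a list of available tool names down to those matching the pattern.
function getToolNamesLike(available: string[], pattern: string): string[] {
  const re = toolPatternToRegExp(pattern);
  return available.filter((name) => re.test(name));
}
```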
Context Handler Management
| Method | Description |
|---|---|
| `getContextHandlerByName(name: string)` | Get a context handler by name |
| `requireContextHandlerByName(name: string)` | Get a context handler or throw |
| `registerContextHandler(name: string, handler: ContextHandler)` | Register a single context handler |
| `registerContextHandlers(handlers: Record<string, ContextHandler>)` | Register multiple context handlers |
Message Building
| Method | Description |
|---|---|
| `buildChatMessages(input: string, chatConfig: ParsedChatConfig, agent: Agent)` | Build chat request messages from context handlers |
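How the configured context sources are combined is not documented here; one plausible self-contained sketch (illustrative names and types, not the plugin's implementation) walks the items in order and concatenates each handler's output:

```typescript
// Illustrative types; the real ones live in @tokenring-ai/chat.
type ChatInputMessage = { role: string; content: string };
type ContextHandler = (input: string) => AsyncGenerator<ChatInputMessage>;
type ContextItem = { type: string };

// Hypothetical composition: for each configured context item, look up the
// registered handler by type and append everything it yields, in order.
async function buildMessages(
  items: ContextItem[],
  handlers: Record<string, ContextHandler>,
  input: string,
): Promise<ChatInputMessage[]> {
  const messages: ChatInputMessage[] = [];
  for (const item of items) {
    const handler = handlers[item.type];
    if (!handler) throw new Error(`Unknown context source: ${item.type}`);
    for await (const message of handler(input)) messages.push(message);
  }
  return messages;
}
```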
runChat Function
The core chat execution function that handles streaming responses, tool calls, and context compaction.
```typescript
import runChat from "@tokenring-ai/chat/runChat";

async function runChat(
  input: string,
  chatConfig: ParsedChatConfig,
  agent: Agent,
): Promise<AIResponse>
```
Parameters:
- `input`: The user input message
- `chatConfig`: Chat configuration including model, tools, and context settings
- `agent`: The agent instance
Returns: A promise resolving to the AI response object
Key Features:
- Automatic context compaction when approaching token limits
- Tool call execution with parallel/sequential mode
- Max steps limit with user confirmation option
- Streaming responses with detailed analytics
- Automatic artifact output for tool calls
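The compaction trigger is not specified beyond "approaching token limits". One plausible self-contained sketch, using a crude characters-per-token estimate and a configurable threshold (both are assumptions, not the plugin's actual heuristic):

```typescript
// Rough token estimate: assume ~4 characters per token (a common heuristic,
// not what the plugin necessarily uses).
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Trigger compaction once estimated prompt tokens exceed a fraction of the
// model's context window (threshold is a made-up default for illustration).
function shouldCompact(messages: string[], contextWindow: number, threshold = 0.8): boolean {
  const total = messages.reduce((sum, m) => sum + estimateTokens(m), 0);
  return total > contextWindow * threshold;
}
```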
Utility Functions
tokenRingTool
Converts a tool definition to TokenRing format:
```typescript
import {z} from "zod";
import {tokenRingTool} from "@tokenring-ai/chat";

const tool = tokenRingTool({
  name: "my-tool",
  displayName: "My Tool",
  description: "Does something useful",
  inputSchema: z.object({
    param: z.string()
  }),
  async execute(input, agent) {
    return "result";
  }
});
```
Tool Result Types:
- `text`: Simple string result or text object with `type` and `content`
- `media`: Media result with `type`, `mediaType`, and `data` (base64 encoded)
- `json`: JSON result with `type` and `data` (automatically stringified)
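As a self-contained illustration of these three shapes (the type names below are assumptions based on the list above, not the package's exports), a normalizer might flatten any result into transcript text:

```typescript
// Illustrative result union mirroring the three documented shapes.
type ToolResult =
  | string
  | { type: "text"; content: string }
  | { type: "media"; mediaType: string; data: string } // data is base64
  | { type: "json"; data: unknown };

// Flatten a result to plain text: strings pass through, json is stringified,
// media is summarized by type (a placeholder choice for this sketch).
function resultToText(result: ToolResult): string {
  if (typeof result === "string") return result;
  switch (result.type) {
    case "text":
      return result.content;
    case "json":
      return JSON.stringify(result.data);
    case "media":
      return `[media: ${result.mediaType}, ${result.data.length} base64 chars]`;
  }
}
```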
outputChatAnalytics
Outputs token usage and cost analytics:
```typescript
import {outputChatAnalytics} from "@tokenring-ai/chat";

outputChatAnalytics(response, agent, "Chat Complete");
```
Output Includes:
- Input/Output/Total token counts
- Cached token information
- Reasoning tokens (if applicable)
- Cost breakdown (input, output, total)
- Timing information (elapsed time, tokens/sec)
compactContext
Manually compacts conversation context:
```typescript
import {compactContext} from "@tokenring-ai/chat/util/compactContext.ts";

await compactContext("focus topic", agent);
```
Parameters:
- `focus`: Optional string to guide the summary focus
Usage Examples
Basic Chat Setup
```typescript
import {TokenRingApp} from "@tokenring-ai/app";
import ChatService from "@tokenring-ai/chat";

const app = new TokenRingApp();

// Add chat service with configuration
app.addServices(new ChatService(app, {
  defaultModels: ["auto"],
  agentDefaults: {
    model: "auto",
    autoCompact: true,
    maxSteps: 30,
    enabledTools: [],
    context: {
      initial: [
        {type: "system-message"},
        {type: "tool-context"},
        {type: "prior-messages"},
        {type: "current-message"}
      ],
      followUp: [
        {type: "prior-messages"},
        {type: "current-message"}
      ]
    }
  }
}));

await app.start();
```
Sending Messages Programmatically
```typescript
import runChat from "@tokenring-ai/chat/runChat";

// Build chat configuration
const chatConfig = {
  model: "auto",
  systemPrompt: "You are a helpful assistant",
  maxSteps: 30,
  autoCompact: true,
  enabledTools: [],
  context: {
    initial: [
      {type: "system-message"},
      {type: "tool-context"},
      {type: "prior-messages"},
      {type: "current-message"}
    ],
    followUp: [
      {type: "prior-messages"},
      {type: "current-message"}
    ]
  }
};

// Run a chat message
const response = await runChat(
  "Hello, how are you?",
  chatConfig,
  agent
);
```
Managing Tools
```typescript
import ChatService from "@tokenring-ai/chat";

const chatService = agent.requireServiceByType(ChatService);

// Get available tools
const availableTools = chatService.getAvailableToolNames();

// Enable specific tools
chatService.enableTools(["web-search", "calculator"], agent);

// Disable specific tools
chatService.disableTools(["file-system"], agent);

// Set exact tool list
chatService.setEnabledTools(["web-search", "calculator"], agent);

// Use wildcard patterns
chatService.ensureToolNamesLike("web-*"); // Expands to all web-* tools
```
Managing Chat History
```typescript
import ChatService from "@tokenring-ai/chat";

const chatService = agent.requireServiceByType(ChatService);

// Get all messages
const allMessages = chatService.getChatMessages(agent);

// Get the last message
const lastMessage = chatService.getLastMessage(agent);

// Clear all messages
chatService.clearChatMessages(agent);

// Undo last message (remove from history)
chatService.popMessage(agent);

// Add new message
chatService.pushChatMessage({
  request: { messages: [] },
  response: { content: [] },
  createdAt: Date.now(),
  updatedAt: Date.now()
}, agent);
```
Interactive Chat Commands
Send a Message
```
/chat send Hello, how are you?
```
Configure AI Settings
```
/chat settings temperature=0.5 maxTokens=2000
```
Manage Model Features
```
/chat feature list               # List features
/chat feature enable reasoning   # Enable a feature
/chat feature disable reasoning  # Disable a feature
```
Show Context
```
/chat context
```
Compact Context
```
/compact
/compact focus on the main task details
```
Manage Tools
```
/tools                               # Interactive tool selection
/tools enable web-search calculator
/tools disable file-system
/tools set web-search calculator
```
Select Model
```
/model                  # Interactive model selection
/model get              # Show current model
/model set gpt-4-turbo  # Set specific model
/model reset            # Reset to default
```
Configuration
Plugin Configuration Schema
The chat plugin is configured through the application's plugin configuration:
```typescript
import {z} from "zod";

const configSchema = z.object({
  chat: z.object({
    defaultModels: z.array(z.string()),
    agentDefaults: z.object({
      model: z.string().default("auto"),
      autoCompact: z.boolean().default(true),
      enabledTools: z.array(z.string()).default([]),
      maxSteps: z.number().default(0),
      context: z.object({
        initial: z.array(ContextSourceSchema).default(initialContextItems),
        followUp: z.array(ContextSourceSchema).default(followUpContextItems),
      }).optional(),
    }),
  }),
});
```
Chat Configuration Properties
| Property | Type | Default | Description |
|---|---|---|---|
| `model` | string | `"auto"` | AI model identifier (supports `"auto"`, `"auto:reasoning"`, `"auto:frontier"`, or specific model names) |
| `systemPrompt` | string or function | - | System instructions for the AI (a function enables dynamic prompts) |
| `maxSteps` | number | `30` | Maximum processing steps before prompting for continuation |
| `autoCompact` | boolean | `true` | Enable automatic context compaction when approaching token limits |
| `enabledTools` | string[] | `[]` | List of enabled tool names (supports wildcards) |
| `context.initial` | ContextItem[] | system-message, tool-context, prior-messages, current-message | Context items for initial messages |
| `context.followUp` | ContextItem[] | prior-messages, current-message | Context items for follow-up messages |
Context Source Types
| Type | Description |
|---|---|
| `system-message` | Adds the system prompt |
| `prior-messages` | Adds previous conversation history |
| `current-message` | Adds the current user input |
| `tool-context` | Adds context from enabled tools |
Available Settings
Settings can be configured via `/chat settings`:
- `temperature`: Controls randomness (0.0-2.0)
- `maxTokens`: Maximum response length
- `topP`: Nucleus sampling threshold (0.0-1.0)
- `frequencyPenalty`: Reduces repetition (-2.0 to 2.0)
- `presencePenalty`: Encourages new topics (-2.0 to 2.0)
- `stopSequences`: Sequences at which generation stops
- `autoCompact`: Enable automatic context compaction
Feature Flags
Features can be managed via `/chat feature`:
- `list`: Show current and available features
- `enable key[=value]`: Enable or set a feature flag
- `disable key`: Disable a feature flag
Feature Flag Types:
- Boolean: `true`, `false`, `1`, `0`
- Number: Numeric values
- String: Text values
Format: Features can be specified as query parameters in the model name, e.g., `gpt-4?reasoning=1&temperature=0.7`
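Parsing a model string of that form can be sketched with the standard `URLSearchParams` API. This helper is hypothetical; the plugin's actual parser and its exact boolean/number coercion rules are not shown in this document:

```typescript
// Split "gpt-4?reasoning=1&temperature=0.7" into a base model name and
// typed feature flags. Assumed coercion: true/false/1/0 become booleans,
// numeric strings become numbers, everything else stays a string.
function parseModelSpec(
  spec: string,
): { model: string; features: Record<string, boolean | number | string> } {
  const [model, query = ""] = spec.split("?");
  const features: Record<string, boolean | number | string> = {};
  for (const [key, value] of new URLSearchParams(query)) {
    if (value === "true" || value === "1") features[key] = true;
    else if (value === "false" || value === "0") features[key] = false;
    else if (value !== "" && !Number.isNaN(Number(value))) features[key] = Number(value);
    else features[key] = value;
  }
  return { model, features };
}
```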
Integration
Plugin Registration
The chat plugin integrates with the Token Ring application framework:
```typescript
import chatPlugin from "@tokenring-ai/chat";

// Register the plugin
app.use(chatPlugin);
```
Service Registration
The plugin automatically registers:
- ChatService: Main chat service with all methods
- Context handlers for building chat requests
- Interactive chat commands (`/chat`, `/model`, `/tools`, `/compact`)
- Model feature management
- Context debugging tools
- State management via ChatServiceState
- RPC endpoints for remote management
Agent Configuration
Agents can have their own chat configuration merged with service defaults:
```typescript
const agentConfig = {
  chat: {
    model: "gpt-4",
    systemPrompt: "You are a helpful assistant",
    maxSteps: 50,
    autoCompact: true,
    enabledTools: ["web-search", "calculator"],
    context: {
      initial: [
        {type: "system-message"},
        {type: "tool-context"},
        {type: "prior-messages"},
        {type: "current-message"}
      ],
      followUp: [
        {type: "prior-messages"},
        {type: "current-message"}
      ]
    }
  }
};
```
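The merge semantics between agent configuration and service defaults are not spelled out here; a shallow merge where agent values win is one plausible reading (illustrative sketch only, using a simplified config type):

```typescript
// Simplified stand-in for the plugin's chat config shape.
type ChatConfig = {
  model: string;
  maxSteps: number;
  autoCompact: boolean;
  enabledTools: string[];
  systemPrompt?: string;
};

// Assumption: a plain shallow merge -- any key the agent sets overrides the
// service-level agentDefaults; unset keys fall back to the defaults.
function mergeChatConfig(defaults: ChatConfig, agentOverrides: Partial<ChatConfig>): ChatConfig {
  return { ...defaults, ...agentOverrides };
}
```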
Tool Integration
Tools are registered through packages using the `addTools` method:
```typescript
import {z} from "zod";

chatService.addTools({
  "my-tool": {
    name: "my-tool",
    displayName: "My Tool",
    description: "Does something useful",
    inputSchema: z.object({
      param: z.string()
    }),
    async execute(input, agent) {
      // Tool implementation
      return "result";
    }
  }
});
```
RPC Integration
The chat plugin provides RPC endpoints for remote management:
```typescript
// Get available tools
const tools = await rpc.call("getAvailableTools", {});

// Get current model
const model = await rpc.call("getModel", { agentId: "agent-1" });

// Set model
await rpc.call("setModel", { agentId: "agent-1", model: "gpt-4" });

// Get enabled tools
const enabled = await rpc.call("getEnabledTools", { agentId: "agent-1" });

// Set enabled tools
await rpc.call("setEnabledTools", { agentId: "agent-1", tools: ["web-search"] });
```
RPC Schema:
```typescript
// Request schemas
type GetAvailableToolsRequest = {};
type GetModelRequest = { agentId: string };
type SetModelRequest = { agentId: string; model: string };
type GetEnabledToolsRequest = { agentId: string };
type SetEnabledToolsRequest = { agentId: string; tools: string[] };
type EnableToolsRequest = { agentId: string; tools: string[] };
type DisableToolsRequest = { agentId: string; tools: string[] };
type GetChatMessagesRequest = { agentId: string };
type ClearChatMessagesRequest = { agentId: string };

// Response schemas
type GetAvailableToolsResponse = { tools: { [toolName: string]: { displayName: string } } };
type GetModelResponse = { model: string | null };
type SetModelResponse = { success: boolean };
type GetEnabledToolsResponse = { tools: string[] };
type SetEnabledToolsResponse = { tools: string[] };
type EnableToolsResponse = { tools: string[] };
type DisableToolsResponse = { tools: string[] };
type GetChatMessagesResponse = { messages: StoredChatMessage[] };
type ClearChatMessagesResponse = { success: boolean };
```
Monitoring and Debugging
Context Debugging
Use `/chat context` to view the current context structure:

```
/chat context
```
This shows all context items that would be included in a chat request.
Compact Context
Use `/compact` to summarize messages and reduce token usage:

```
/compact
/compact focus on the main task details
```
Analytics
The `outputChatAnalytics` function provides detailed token usage and cost information:
```typescript
import {outputChatAnalytics} from "@tokenring-ai/chat";

outputChatAnalytics(response, agent, "Chat Complete");
```
Output includes:
- Input/Output/Total token counts
- Cached token information
- Reasoning tokens (if applicable)
- Cost breakdown (input, output, total)
- Timing information (elapsed time, tokens/sec)
Tool Call Debugging
Tool calls automatically generate artifacts showing:
- Request JSON with input parameters
- Response content
- Media data (if applicable)
These artifacts are displayed in the agent interface for debugging and transparency.
Development
Testing
```bash
# Run tests
bun test

# Run tests with coverage
bun test:coverage
```
Build
```bash
# Build the package
bun run build
```
Package Structure
```
pkg/chat/
├── index.ts                   # Main exports
├── ChatService.ts             # Core chat service class
├── runChat.ts                 # Core chat execution function
├── schema.ts                  # Type definitions and Zod schemas
├── plugin.ts                  # Plugin registration
├── chatCommands.ts            # Command exports
├── contextHandlers.ts         # Context handler exports
├── contextHandlers/
│   ├── currentMessage.ts      # Current message handler
│   ├── priorMessages.ts       # Prior messages handler
│   ├── systemMessage.ts       # System message handler
│   └── toolContext.ts         # Tool context handler
├── commands/
│   ├── chat.ts                # Chat command with subcommands
│   ├── model.ts               # Model command with subcommands
│   ├── tool.ts                # Tool management command
│   ├── compact.ts             # Context compaction command
│   ├── chat/
│   │   ├── send.ts            # Send message implementation
│   │   ├── settings.ts        # Settings configuration
│   │   ├── feature.ts         # Feature management
│   │   └── context.ts         # Context display
│   └── model/
│       ├── set.ts             # Set model implementation
│       ├── get.ts             # Get model implementation
│       ├── select.ts          # Interactive selection
│       ├── reset.ts           # Reset to default
│       └── default.ts         # Show current and select
├── util/
│   ├── tokenRingTool.ts       # Tool wrapper utility
│   ├── compactContext.ts      # Context compaction
│   └── outputChatAnalytics.ts # Analytics output
├── state/
│   └── chatServiceState.ts    # State management class
├── rpc/
│   ├── chat.ts                # RPC endpoints
│   └── schema.ts              # RPC schema definitions
└── vitest.config.ts           # Test configuration
```
Related Components
- `@tokenring-ai/ai-client`: AI model registry and client management
- `@tokenring-ai/agent`: Agent system integration
- `@tokenring-ai/app`: Application framework
- `@tokenring-ai/scheduler`: Task scheduling integration
License
MIT License