The Sidecar pattern lets you extract structured intents from LLM responses entirely on the client side. This is useful when your agent needs to parse tool calls, user intents, or structured data from free-form LLM output without making additional API calls.

How it works

  1. Generate instructions — build a prompt instruction block that tells your LLM how to format its output.
  2. Parse locally — extract the structured intent from the LLM response using a local parser.
All sidecar utilities are exported from the main minns-sdk entry point.
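The Usage example below passes a `spec` object describing the intents your agent supports. The exact spec type is defined by minns-sdk; as a rough illustration only, a spec might look something like this (the `actions`/`fields` names here are hypothetical, not the SDK's actual schema):

```typescript
// Hypothetical shape of an intent spec, for illustration only.
// It lists the actions the LLM may emit and the fields each carries.
interface IntentField {
  name: string;
  type: "string" | "number";
  required: boolean;
}

interface IntentSpec {
  actions: { name: string; fields: IntentField[] }[];
}

const spec: IntentSpec = {
  actions: [
    {
      name: "book_ticket",
      fields: [{ name: "movie", type: "string", required: true }],
    },
  ],
};
```

Consult the minns-sdk type definitions for the real spec shape; the point is that the spec is plain data you define once and reuse for both instruction generation and parsing.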

Usage

import { buildSidecarInstruction, extractIntentAndResponse } from 'minns-sdk';

// 1. Generate prompt instructions for your LLM
const instruction = buildSidecarInstruction(spec);
// Append `instruction` to your system prompt

// 2. After receiving the LLM response, parse it locally
const { intent, assistantResponse } = extractIntentAndResponse(
  modelOutput,
  userMsg,
  spec
);

// 3. Use the structured intent
console.log(intent);           // { action: "book_ticket", movie: "Interstellar", ... }
console.log(assistantResponse); // "I've booked your ticket for Interstellar!"
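To make the mechanics concrete, here is a minimal, self-contained sketch of the kind of parsing such a sidecar performs. This is not minns-sdk's actual implementation; it assumes (purely for illustration) that the instruction block tells the LLM to wrap its intent as JSON inside `<intent>…</intent>` tags, which the parser then splits from the conversational text locally:

```typescript
// Illustrative sketch, not minns-sdk internals: split a delimited
// JSON intent from the assistant's conversational reply.
function parseSidecarOutput(modelOutput: string): {
  intent: unknown;
  assistantResponse: string;
} {
  const match = modelOutput.match(/<intent>([\s\S]*?)<\/intent>/);
  let intent: unknown = null;
  if (match) {
    try {
      intent = JSON.parse(match[1]);
    } catch {
      intent = null; // malformed JSON from the LLM: degrade gracefully
    }
  }
  // Everything outside the intent tags is the user-facing reply.
  const assistantResponse = modelOutput
    .replace(/<intent>[\s\S]*?<\/intent>/, "")
    .trim();
  return { intent, assistantResponse };
}

const sample = `<intent>{"action":"book_ticket","movie":"Interstellar"}</intent>
I've booked your ticket for Interstellar!`;

const parsed = parseSidecarOutput(sample);
console.log(parsed.intent);            // { action: "book_ticket", movie: "Interstellar" }
console.log(parsed.assistantResponse); // "I've booked your ticket for Interstellar!"
```

Because the split is a local string operation, no extra model call is needed, which is the core trade the Sidecar pattern makes.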

Benefits

Zero added latency

No network round-trips — parsing happens entirely in your process.

LLM agnostic

Works with any LLM that can follow formatting instructions (GPT-4, Claude, Llama, etc.).

Type safe

Full TypeScript support with generics for your intent schema.
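As a sketch of what the generics buy you (the exact minns-sdk signatures may differ), declaring your intent shape once means the parsed result is typed, not `any`. The `parseIntent` helper below is hypothetical and stands in for the SDK's typed extraction:

```typescript
// Your intent schema, declared once.
interface BookTicketIntent {
  action: "book_ticket";
  movie: string;
  seats?: number;
}

// Hypothetical helper illustrating the generics idea: the caller
// names the expected shape and gets a typed result (or null).
function parseIntent<T>(json: string): T | null {
  try {
    return JSON.parse(json) as T;
  } catch {
    return null;
  }
}

const intent = parseIntent<BookTicketIntent>(
  '{"action":"book_ticket","movie":"Interstellar"}'
);
// `intent` is BookTicketIntent | null, so field access is checked
// by the compiler rather than failing at runtime.
```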

Resilient

Built-in protection against malformed LLM output.

Next steps