The SDK is designed to be resilient against erratic LLM outputs and large data structures. Every event passes through multiple safety checks before submission.

Circular reference protection

LLM-generated objects can accidentally contain circular references. The SDK automatically detects these and replaces them with "[Circular]" during serialization.
const obj: any = { name: "test" };
obj.self = obj; // Circular!

// SDK handles this gracefully
await client.event("agent")
  .action("process", obj)
  .send(); // Serialized with "[Circular]" placeholder
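For reference, the standard technique for this is a JSON.stringify replacer that tracks visited objects in a WeakSet. The sketch below illustrates the idea; the SDK's actual internals may differ, and note that this simple version also flags repeated (non-circular) references:

// Sketch of WeakSet-based circular detection (not the SDK's exact code).
function safeStringify(value: unknown): string {
  const seen = new WeakSet<object>();
  return JSON.stringify(value, (_key, val) => {
    if (typeof val === "object" && val !== null) {
      // A previously visited object indicates a cycle (or a shared reference).
      if (seen.has(val)) return "[Circular]";
      seen.add(val);
    }
    return val;
  });
}

safeStringify(obj); // '{"name":"test","self":"[Circular]"}'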

Prototype pollution protection

The SDK strips dangerous JavaScript keys during serialization to prevent prototype pollution attacks:
  • __proto__
  • constructor
  • prototype
This is especially important when processing untrusted LLM output that could contain adversarial payloads.
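As an illustration, key stripping can also be expressed as a JSON.stringify replacer, since returning undefined from the replacer omits a property. This is a sketch of the general technique, not the SDK's implementation, and payload is a placeholder:

// Sketch: drop prototype-pollution keys during serialization.
const DANGEROUS_KEYS = new Set(["__proto__", "constructor", "prototype"]);

const sanitized = JSON.stringify(payload, (key, value) =>
  DANGEROUS_KEYS.has(key) ? undefined : value // undefined omits the property
);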

Size guarding

Payloads exceeding maxPayloadSize (default: 1 MB) are rejected after serialization. This prevents accidental submission of oversized events that could degrade server performance.
The size check runs after JSON serialization, so what is measured is the byte size of the serialized payload, not the in-memory object size.
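A sketch of that kind of check follows; the constant and payload variable are illustrative. Measuring encoded bytes rather than string length matters because JSON strings can contain multi-byte UTF-8 characters:

// Sketch of a post-serialization size guard (illustrative, not SDK code).
const MAX_PAYLOAD_SIZE = 1024 * 1024; // 1 MB, mirroring the default

const json = JSON.stringify(payload);
const bytes = new TextEncoder().encode(json).length; // UTF-8 byte count
if (bytes > MAX_PAYLOAD_SIZE) {
  throw new Error(`Payload is ${bytes} bytes, exceeding maxPayloadSize`);
}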

Queue management

The local auto-batch queue has a configurable maximum depth (maxQueueSize, default: 1000). If the queue is full, enqueue() throws an error rather than silently dropping events.
const client = createClient({
  baseUrl: "https://your-instance.minns.ai",
  autoBatch: true,
  maxQueueSize: 5000, // Increase for high-throughput agents
});
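If a burst of events can outpace the background flush, callers may want to catch the queue-full error and apply backpressure instead of raising the limit. A sketch, assuming the error propagates out of send() (the exact error shape is not specified here):

// Sketch: retry with a short delay when the auto-batch queue is full.
// Assumes the queue-full error propagates from send().
async function sendWithBackpressure(event: { send(): Promise<void> }) {
  for (let attempt = 1; attempt <= 3; attempt++) {
    try {
      await event.send();
      return;
    } catch (err) {
      // Give the background flush time to drain the queue, then retry.
      await new Promise((resolve) => setTimeout(resolve, 100 * attempt));
    }
  }
  throw new Error("Queue remained full after retries");
}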

Telemetry

The SDK includes lightweight, fire-and-forget telemetry to monitor performance and LLM token usage.
What’s collected:
  • Latency metrics
  • HTTP status codes
  • Estimated token counts
  • Error messages

What’s NOT collected:
  • Request bodies
  • Raw event content
  • User data
  • Metadata values
Telemetry is enabled by default. To opt out:
const client = createClient({
  baseUrl: "https://your-instance.minns.ai",
  enableDefaultTelemetry: false,
});

Next steps