llmverify
Local-first LLM output monitoring, risk scoring, and classification for Node.js.
Built for teams shipping AI features who need guardrails that work without sending data to third parties.
The Problem
You're shipping AI features. Your LLM outputs need validation before they reach users—prompt injection detection, PII redaction, hallucination risk scoring. The standard options are:
- Cloud APIs that require sending your data to third parties
- Complex ML pipelines that add latency and infrastructure overhead
- Manual regex that's brittle and doesn't scale
Most teams end up with a patchwork of solutions—or skip validation entirely and hope for the best.
What llmverify Does
A single npm package that handles the common LLM safety checks. No cloud dependencies. No ML infrastructure. Just import and use.
Prompt Injection Detection
Pattern-based detection for 9 attack categories including jailbreaks, system prompt exfiltration, and tool abuse. OWASP LLM Top 10 aligned.
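For example, the isInputSafe helper shown in the Quick Start below can gate user input before it ever reaches your model. The guardUserPrompt wrapper is illustrative, not a library export:

import { isInputSafe } from 'llmverify';

// Illustrative wrapper: refuse suspicious input instead of forwarding it to the model.
function guardUserPrompt(userInput) {
  if (!isInputSafe(userInput)) {
    throw new Error('Potential prompt injection detected');
  }
  return userInput;
}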
PII Redaction
25+ patterns including emails, SSNs, credit cards, API keys (AWS, GitHub, Stripe), JWT tokens, and private keys. Regex-based, ~90% accuracy.
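As a usage sketch, redactPII (see the Quick Start below) returns a redacted copy that is safe to log or display. The safeLog helper is illustrative, not a library export:

import { redactPII } from 'llmverify';

// Illustrative helper: strip emails, keys, and other sensitive patterns before logging.
function safeLog(aiOutput) {
  const { redacted } = redactPII(aiOutput);
  console.log(redacted); // e.g. "Contact [REDACTED] at [REDACTED]"
}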
Hallucination Risk Scoring
Heuristic-based risk indicators that flag overconfident language, fabricated entities, and contradictions. Returns confidence intervals, not false certainties.
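A minimal sketch using verify from the Quick Start below. The sample text is fabricated for illustration, and only the { content } argument and result.risk.level are shown in the Quick Start, so treat anything else as an assumption:

import { verify } from 'llmverify';

// Fabricated sample output containing an obvious factual error.
const aiOutput = 'The Eiffel Tower was completed in 1887 by Thomas Edison.';

const result = await verify({ content: aiOutput });
// Heuristic signal, not ground truth: use it to gate review, not to auto-approve.
if (result.risk.level === 'critical') {
  console.log('Route to human review before publishing');
}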
Runtime Health Monitoring
Wrap any LLM client to track latency, token rate, and behavioral drift. Get alerted to degradation before your users notice it.
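This section doesn't show the wrapper API itself, so the snippet below is only a conceptual sketch of the wrap-and-measure pattern it describes. wrapWithTiming, client.complete, and the sample callback are assumptions for illustration, not llmverify's real interface:

// Conceptual sketch: wrap a client method and record per-call latency samples.
function wrapWithTiming(client, onSample) {
  return {
    async complete(prompt) {
      const start = Date.now();
      const response = await client.complete(prompt);
      onSample({ latencyMs: Date.now() - start, timestamp: Date.now() });
      return response;
    },
  };
}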
JSON Repair & Validation
Auto-fix common JSON formatting errors from LLM outputs. Trailing commas, unquoted keys, truncated responses—handled automatically.
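The section above doesn't name the export for this, so repairJson and its return shape below are hypothetical, used only to show the kind of malformed output the feature targets; check the package docs for the real API:

import { repairJson } from 'llmverify'; // hypothetical export name

// Typical LLM mistakes: an unquoted key and a trailing comma.
const raw = '{ name: "Ada", "tags": ["math", "computing",] }';

const { repaired } = repairJson(raw); // assumed return shape
console.log(JSON.parse(repaired)); // { name: 'Ada', tags: ['math', 'computing'] }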
Model-Agnostic Adapters
Unified interface for OpenAI, Anthropic, Groq, Google AI, DeepSeek, Mistral, Cohere, and local models. Switch providers without changing application code.
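No adapter API is shown in this section, so the snippet below is only an illustration of what a provider-agnostic call site tends to look like; createAdapter, its options, and llm.complete are assumptions, not the documented interface:

import { createAdapter } from 'llmverify'; // hypothetical export name

// Illustrative only: swap the provider string without touching the call site.
const llm = createAdapter({ provider: 'openai', model: 'gpt-4o-mini' }); // assumed options
const reply = await llm.complete('Summarize this ticket in one sentence.');
console.log(reply);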
Quick Start
npm install llmverify

import { verify, isInputSafe, redactPII } from 'llmverify';
// Verify AI output safety
const result = await verify({ content: aiOutput });
if (result.risk.level === 'critical') {
  console.log('Block this content');
}
// Check user input for prompt injection
if (!isInputSafe(userInput)) {
  throw new Error('Potential attack detected');
}
// Redact PII before displaying
const { redacted } = redactPII(aiOutput);
console.log(redacted); // "Contact [REDACTED] at [REDACTED]"
Limitations
llmverify uses heuristics, not AI. It provides risk indicators, not ground truth.
All results include confidence intervals, methodology explanations, and explicit limitations. This is a guardrail layer, not a replacement for human review on high-stakes decisions.
Compliance Alignment
Built on the CSM6 framework, implementing baseline checks for:
- OWASP LLM Top 10 (Security)
- NIST AI RMF (Risk Management)
- EU AI Act (Compliance)
- ISO 42001 (AI Management)
Privacy Guarantee
What We Do
- Zero network requests
- Zero telemetry
- Zero data collection
- Open source—verify yourself
What We Never Do
- Train on your data
- Share with third parties
- Track without consent
- Phone home for any reason
Run tcpdump while using it—you'll see zero network traffic.
Why I Built This
I've spent years building AI governance frameworks for large-scale enterprise deployments. I've seen what happens when teams ship AI features without proper guardrails—and I've seen the compliance overhead that comes with enterprise-grade solutions.
Most teams don't need a full ML pipeline for basic safety checks. They need something that works out of the box, runs locally, and doesn't require a PhD to configure.
llmverify is that tool. It's not perfect—the limitations section is honest about that—but it covers the 80% case for teams who need to ship safely without overengineering.