Most guardrails check for similarity. Tensalis checks for truth. Our Ensemble Engine detects and corrects hallucinations in LLM outputs with deterministic precision — no LLM dependency, sub-200ms latency, tamper-proof audit trails.
Watch our deterministic GenAI firewall in action or explore the enterprise observability console. The sandbox runs live on Google Cloud Run; the observability console is hosted on Firebase.
Inject adversarial payloads and watch the Tensalis engine detect contradictions and apply surgical auto-corrections in real time.
Explore our enterprise dashboard showing real-time Semantic Trajectory Physics (CRF Layer) and hash-chained audit trails.
Production Live Mode available; Demo Mode includes features under development.
[ Walkthrough Video Embed Here ]
Standard vector search sees keywords. The Tensalis Ensemble sees logic.
Ground Truth: "Returns accepted within 30 days."
AI Response: "You can return items within 90 days."
SILENT FAILURE: Keywords match, but the number is wrong.
Detected: DURATION contradiction (30 ≠ 90)
Corrected: "within 30 days" (from context)
SUCCESS: Atomic fact verification caught and corrected the hallucination.
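The 30-vs-90-day example above can be sketched in a few lines. This is an illustrative toy, not the Tensalis engine: it extracts atomic duration facts with a regex and compares them against ground truth, proposing a correction from context when the units match but the values diverge.

```python
import re

# Hypothetical sketch of atomic duration verification (illustration only):
# pull "<number> <unit>" facts out of both texts and compare them pairwise.
DURATION_RE = re.compile(r"(\d+)\s*(day|week|month|year)s?", re.IGNORECASE)

def extract_durations(text):
    """Return (value, unit) duration facts found in the text."""
    return [(int(n), unit.lower()) for n, unit in DURATION_RE.findall(text)]

def check_duration(ground_truth, response):
    """Flag a DURATION contradiction and propose a correction from context."""
    truth = extract_durations(ground_truth)
    claims = extract_durations(response)
    for (tv, tu), (cv, cu) in zip(truth, claims):
        if tu == cu and tv != cv:
            return {
                "detected": f"DURATION contradiction ({tv} != {cv})",
                "corrected": response.replace(f"{cv} {cu}s", f"{tv} {tu}s"),
            }
    return None  # no contradiction found

result = check_duration(
    "Returns accepted within 30 days.",
    "You can return items within 90 days.",
)
# result["corrected"] -> "You can return items within 30 days."
```

A keyword or embedding match would score these two sentences as near-identical; the atomic fact check is what surfaces the numeric contradiction.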
Complementary detection methods combined with OR-gate logic: if any single method flags a contradiction, the ensemble flags it. No single point of failure.
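The OR-gate combination can be sketched as follows. The two detectors here are simplified stand-ins, not the actual Tensalis layers; the point is the combination rule, where any one detector firing is enough to flag the output.

```python
import re

# Illustrative detectors (stand-ins for the real ensemble layers).
def numeric_check(truth, claim):
    """Fire if the numbers in the claim differ from the ground truth."""
    nums = lambda t: re.findall(r"\d+(?:\.\d+)?", t)
    return nums(truth) != nums(claim)

def negation_check(truth, claim):
    """Fire if one text negates and the other does not."""
    negators = {"not", "no", "never"}
    return (negators & set(truth.lower().split())) != (negators & set(claim.lower().split()))

DETECTORS = [numeric_check, negation_check]

def ensemble_flags(truth, claim):
    """OR-gate: flag if ANY detector fires, so no single method is a point of failure."""
    return any(detector(truth, claim) for detector in DETECTORS)

assert ensemble_flags("Returns accepted within 30 days.", "Return within 90 days.")
```

The OR-gate deliberately trades a higher false-positive rate for coverage: a hallucination only slips through if every method misses it.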
Building the foundational trust layer for enterprise GenAI adoption.
Where Google Cloud expertise accelerates our roadmap.
Our hash-chained audit ledger currently stores to local filesystem, which resets on Cloud Run redeployment. Enterprise customers require persistent, queryable audit trails with multi-year retention.
Migration to Cloud SQL for structured queries with BigQuery for long-term analytics and compliance reporting.
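The hash-chaining idea behind the audit ledger can be shown in miniature. This is a conceptual sketch under stated assumptions (SHA-256 over a canonical JSON payload), not the production ledger format: each entry's hash covers the previous entry's hash, so editing any historical record breaks verification of every entry after it.

```python
import hashlib
import json

GENESIS = "0" * 64  # assumed sentinel hash for the first entry

def append_entry(ledger, record):
    """Append a record whose hash commits to the previous entry's hash."""
    prev_hash = ledger[-1]["hash"] if ledger else GENESIS
    payload = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    entry = {"prev": prev_hash, "record": record,
             "hash": hashlib.sha256(payload.encode()).hexdigest()}
    ledger.append(entry)
    return entry

def verify_chain(ledger):
    """Recompute every hash; any tampering breaks the chain from that point on."""
    prev = GENESIS
    for entry in ledger:
        payload = json.dumps({"prev": prev, "record": entry["record"]}, sort_keys=True)
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

ledger = []
append_entry(ledger, {"event": "DURATION contradiction", "corrected": True})
append_entry(ledger, {"event": "pass"})
assert verify_chain(ledger)
ledger[0]["record"]["corrected"] = False  # tamper with history
assert not verify_chain(ledger)
```

Moving the entries into Cloud SQL keeps this property: the chain hashes travel with the rows, so persistence and queryability do not weaken tamper evidence.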
Our deterministic engine achieves 96.9% on current benchmarks. Reaching higher accuracy on nuanced cases (unit conversion, implicit inference, modal logic) requires hybrid approaches combining our extractors with inference capabilities.
Exploring Vertex AI for targeted hybrid verification on edge cases while maintaining deterministic fast-path for clear contradictions.
The engine loads 3 ML models on startup (~10-30s cold start). For always-warm production deployment, we need optimized container strategies and model serving infrastructure.
Cloud Run min-instances for warm pools, with potential GKE migration for dedicated model serving at scale.
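On Cloud Run, the warm-pool approach is a one-flag change. A minimal deployment fragment (service name and region are placeholders, and min-instance count would be tuned to traffic):

```shell
# Keep at least one instance warm so the ~10-30s model-loading cold start
# is never user-visible. SERVICE_NAME and --region are placeholders.
gcloud run services update SERVICE_NAME \
  --region=europe-west2 \
  --min-instances=1
```

The trade-off is paying for idle capacity; GKE with dedicated model-serving nodes becomes attractive once sustained traffic justifies it.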
Where factual accuracy is a compliance requirement, not a nice-to-have.
Catch dosage errors, contraindication hallucinations, and fabricated clinical guidelines before they reach patients.
Ensure investment summaries match prospectuses. Detect "4.5%" vs "45%" numeric drift and fabricated terms.
Prevent AI from flipping "mandatory" to "optional" in policy summaries. Per-clause evidence chains for audit.
Drop-in verification layer for any RAG pipeline. Works with LangChain, LlamaIndex, and custom orchestrations.
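A drop-in integration might look like the sketch below. Everything here is hypothetical for illustration: `verify` is a stand-in for whatever the Tensalis API exposes (its name, signature, and return shape are assumptions), and the pipeline step is faked rather than a real LangChain or LlamaIndex chain.

```python
from typing import Callable, Tuple
import re

def verify(context: str, answer: str) -> dict:
    """Stand-in verifier: flag answers whose numbers don't appear in the context."""
    nums = lambda t: re.findall(r"\d+(?:\.\d+)?", t)
    drift = [n for n in nums(answer) if n not in nums(context)]
    return {"verified": not drift, "drift": drift}

def guarded_answer(rag_answer: Callable[[str], Tuple[str, str]], question: str) -> dict:
    """Run the pipeline's answer step, then verify the answer against its context."""
    answer, context = rag_answer(question)
    report = verify(context, answer)
    return {"answer": answer, **report}

# Usage with a fake pipeline step returning (answer, retrieved_context):
fake_rag = lambda q: ("Yield is 45%.", "The prospectus states a 4.5% yield.")
print(guarded_answer(fake_rag, "What is the yield?"))
# {'answer': 'Yield is 45%.', 'verified': False, 'drift': ['45']}
```

Because the verifier only needs the answer and the retrieved context, it slots in after any orchestrator's generation step without changing the pipeline itself.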
Multi-Cloud Solution Architect (AWS Pro, Azure Expert) and MSc DSP (Lancaster University). Designed and built the five-layer Ensemble Engine and patent-pending detection algorithms.
View LinkedIn Profile →
Enterprise technology expert focused on scaling and modernizing large IT ecosystems. Leads US commercial strategy and enterprise design partnerships.
View LinkedIn Profile →