What I Built
Built a self-learning AI customer support system that continuously reads interactions, extracts operational knowledge into a living knowledge base, and guides agents in real time. Features a RAG-powered copilot with confidence scores and provenance citations, automated knowledge gap detection across resolved Tier 3 tickets, human-in-the-loop KB article generation, OWASP compliance scanning (PCI, PII, prompt injection, XSS), and an interactive knowledge provenance graph. Indexes 4,300+ documents across 3 ChromaDB collections.
What I Learned
The best AI systems get smarter with every interaction. The closed-loop architecture — detect gap → generate KB draft → human review → publish → immediately retrievable — means the copilot’s answer quality improves continuously without retraining. Low-confidence responses (<50%) automatically trigger gap reports, creating a self-healing knowledge base. Also learned that combining OWASP compliance scanning with QA scoring in parallel catches security issues (PII leaks, prompt injection) that traditional QA rubrics miss entirely.
Key Results
| Feature | Detail |
|---|---|
| Knowledge base | 4,300+ documents indexed across 3 collections |
| Self-learning loop | Detect → Generate → Review → Publish → Retrieve |
| Copilot | RAG with confidence scores + provenance citations |
| Compliance | OWASP scanning (PCI, PII, prompt injection, XSS) |
| Gap detection | Batch scan + low-confidence auto-reporting |
Achievement
🏆 4th Hack Nation — RealPage Speare AI Challenge
Project
Tech Stack: FastAPI, Next.js 15, ChromaDB, GPT-4o, Tailwind CSS, Canvas 2D Graph Viz | Architecture: Self-learning RAG with human-in-the-loop governance
Citation
@online{prasanna_koppolu,
author = {Prasanna Koppolu, Bhanu},
title = {Speare {AI}},
url = {https://bhanuprasanna2001.github.io/projects/speare_ai.html},
langid = {en}
}