RAG Context Poisoning Detector

Detect retrieval chunks poisoned with prompt-injection patterns and generate a cleaned context set.

The dashboard summarizes each analysis with five counters: Poisoning Score, Keep, Review, Block, and Avg Risk.

Chunk risk decisions

Provide a query and chunks to analyze poisoning risk.

Cleaned context output

About This Tool

RAG Context Poisoning Detector highlights suspicious retrieval chunks that contain injection-style instructions, exfiltration phrases, and secret-like patterns before they enter the generation context.

Frequently Asked Questions

Is this embedding-based detection?

No. It is deterministic lexical and pattern-based analysis for fast browser-side pre-filtering.
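This kind of lexical check can be sketched as follows. The pattern lists and the equal-weight scoring are illustrative assumptions for this sketch, not the tool's actual rules:

```typescript
// Hypothetical pattern groups (illustrative only, not the tool's real rule set).
const INJECTION_PATTERNS: RegExp[] = [
  /ignore (all )?previous instructions/i,
  /you are now/i,
  /system prompt/i,
];
const EXFILTRATION_PATTERNS: RegExp[] = [
  /send .* to https?:\/\//i,
  /email .* (password|token|key)/i,
];
const SECRET_PATTERNS: RegExp[] = [
  /\bAKIA[0-9A-Z]{16}\b/,                  // AWS-style access key ID shape
  /-----BEGIN (RSA )?PRIVATE KEY-----/,    // PEM private key header
];

// Score a chunk as the fraction of pattern groups that fire:
// 0.0 means clean, 1.0 means every group matched.
function poisoningScore(chunk: string): number {
  const groups = [INJECTION_PATTERNS, EXFILTRATION_PATTERNS, SECRET_PATTERNS];
  let hits = 0;
  for (const group of groups) {
    if (group.some((re) => re.test(chunk))) hits += 1;
  }
  return hits / groups.length;
}
```

Because the analysis is pure regex matching over strings, it runs synchronously in the browser with no model or network dependency, which is what makes it suitable as a fast pre-filter.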

Should review chunks be kept?

Keep them only if you apply manual review or additional safeguards. High-risk chunks should be blocked.

Is chunk content uploaded?

No. Analysis runs fully in your browser.