Privacy and Security AI Tools
This hub collects tools for privacy-first AI usage: masking sensitive data, scanning for secrets, and flagging risky prompt patterns before anything leaves your machine.
Focus Areas
- Prompt policy and injection checks
- PII masking and reversible pseudonymization
- Secret/token detection in snippets
- Browser privacy and tracking audits
Recommended Tools
Prompt Policy Firewall
Scan prompts for PII, secrets, and injection patterns before sending data to AI models.
Prompt Security Scanner
Scan prompts for secret leakage, PII, and injection-style phrases before sending to AI.
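The core of an injection-style check is matching a prompt against known manipulation phrasings. A minimal sketch, assuming a small hand-picked pattern list (a real scanner would use a larger, maintained ruleset):

```python
import re

# Illustrative patterns only; not the tool's actual ruleset.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"you are now\b",
    r"reveal (your|the) (system )?prompt",
]

def scan_prompt(text: str) -> list[str]:
    """Return the injection-style patterns that match the prompt."""
    lowered = text.lower()
    return [p for p in INJECTION_PATTERNS if re.search(p, lowered)]
```

Matching on the lowercased prompt keeps the patterns simple; case-insensitive flags would work equally well.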
Prompt Injection Simulator
Simulate prompt-injection attacks and score guardrail resilience before release.
Prompt Red-Team Generator
Generate adversarial prompt test cases for jailbreak, leakage, and policy-bypass evaluation.
Jailbreak Replay Lab
Replay jailbreak scenarios, score model defenses, and export deterministic safety reports.
Prompt Guardrail Pack Composer
Compose reusable refusal, citation, uncertainty, and output guardrail packs for system prompts.
Sensitive Data Pseudonymizer
Replace sensitive identifiers with reversible placeholders before sending text to AI.
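Reversible pseudonymization boils down to substituting each distinct identifier with a placeholder while keeping a local mapping for later restoration. A sketch for emails only (real tools cover many identifier types):

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def pseudonymize(text: str) -> tuple[str, dict[str, str]]:
    """Replace each distinct email with a placeholder; return masked text and the mapping."""
    mapping: dict[str, str] = {}
    def repl(m: re.Match) -> str:
        value = m.group(0)
        for placeholder, seen in mapping.items():
            if seen == value:
                return placeholder  # reuse placeholder for repeated values
        placeholder = f"<EMAIL_{len(mapping) + 1}>"
        mapping[placeholder] = value
        return placeholder
    return EMAIL_RE.sub(repl, text), mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    """Reverse the substitution locally using the saved mapping."""
    for placeholder, value in mapping.items():
        text = text.replace(placeholder, value)
    return text
```

Because the mapping never leaves the client, the AI model only ever sees placeholders.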
PII Redactor
Detect and redact emails, phones, cards, IBAN, IPs, and tokens from text.
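Pattern-based redaction replaces each match with a category label. A minimal sketch covering three of the listed categories (illustrative patterns, not the tool's actual ones; production redaction needs broader coverage):

```python
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace each match with its category label, e.g. [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Order matters: IP addresses are redacted before the broad phone pattern runs, so digit runs are not mislabeled.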
Secret Detector for Code Snippets
Detect hardcoded keys, tokens, and credential-like strings before sharing code snippets.
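Secret detectors commonly flag long strings whose character distribution looks random, since keys and tokens have much higher entropy than ordinary identifiers. A sketch of that heuristic (the threshold and candidate pattern are assumptions, not the tool's tuned values):

```python
import math
import re

CANDIDATE_RE = re.compile(r"[A-Za-z0-9+/_=-]{20,}")

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character, estimated from the string itself."""
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def find_secrets(snippet: str, threshold: float = 4.0) -> list[str]:
    """Flag long high-entropy strings that look like keys or tokens."""
    return [t for t in CANDIDATE_RE.findall(snippet)
            if shannon_entropy(t) >= threshold]
```

Entropy filtering avoids flagging long but repetitive strings such as padding or separator runs.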
Cookie Audit Parser
Parse Cookie/Set-Cookie headers and audit Secure, HttpOnly, and SameSite flags.
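Auditing a Set-Cookie header means splitting off the attribute list and checking for the three security flags. A minimal sketch of that check:

```python
def audit_set_cookie(header: str) -> dict[str, bool]:
    """Check a Set-Cookie header value for the common security attributes."""
    # First segment is the name=value pair; the rest are attributes.
    attrs = [part.strip().lower() for part in header.split(";")[1:]]
    return {
        "secure": "secure" in attrs,
        "httponly": "httponly" in attrs,
        "samesite": any(a.startswith("samesite=") for a in attrs),
    }
```

Attribute names are case-insensitive per the spec, so the comparison is done on lowercased segments.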
CSP Header Builder
Generate Content-Security-Policy headers with practical defaults and risk warnings.
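A CSP header is just a semicolon-joined list of directives, each a name followed by its source values. A sketch of the serialization, with a conservative starting policy (the defaults shown are an assumption, not the tool's actual defaults):

```python
def build_csp(directives: dict[str, list[str]]) -> str:
    """Serialize a directive map into a Content-Security-Policy header value."""
    return "; ".join(f"{name} {' '.join(values)}" for name, values in directives.items())

# A conservative starting point; tighten or extend per application.
DEFAULTS = {
    "default-src": ["'self'"],
    "object-src": ["'none'"],
    "base-uri": ["'self'"],
}
```

Keyword sources such as 'self' and 'none' must keep their single quotes in the header value.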
Browser Fingerprint Checker
Inspect browser/device fingerprint signals like canvas hash, WebGL, timezone, and platform.
Browser Permissions Auditor
Check browser permission states for camera, microphone, geolocation, notifications, and more.
WebRTC Leak Test
Check WebRTC ICE candidates for local/public IP exposure in your browser.
URL Tracker Cleaner
Remove UTM and tracking parameters from links in bulk for cleaner, privacy-friendly URLs.
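Cleaning a URL means parsing its query string, dropping known tracking keys, and reassembling the rest. A sketch with a small illustrative blocklist (real cleaners track far more parameters):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Illustrative subset, not an exhaustive list.
TRACKING_PREFIXES = ("utm_",)
TRACKING_KEYS = {"fbclid", "gclid", "mc_eid"}

def clean_url(url: str) -> str:
    """Strip tracking parameters while preserving the rest of the query string."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in TRACKING_KEYS and not k.startswith(TRACKING_PREFIXES)]
    return urlunsplit(parts._replace(query=urlencode(kept)))
```

Rebuilding via `urlencode` keeps legitimate parameters intact and in their original order.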
Passphrase Generator
Generate memorable high-entropy passphrases with browser-side secure randomness.
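The core idea is sampling words with a cryptographically secure RNG rather than a seedable one. A sketch using Python's `secrets` module and a tiny demo wordlist (a real generator would draw from a large list such as EFF's diceware lists, giving roughly log2(list size) bits per word):

```python
import secrets

# Tiny demo wordlist; far too small for real use.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "lantern", "mossy", "quartz"]

def passphrase(n_words: int = 5, sep: str = "-") -> str:
    """Join randomly chosen words using a cryptographically secure RNG."""
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))
```

`secrets.choice` is the drop-in secure counterpart to `random.choice`; the browser-side equivalent is `crypto.getRandomValues`.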
FAQ
How do I safely share text with AI models?
Run the Prompt Policy Firewall and the Sensitive Data Pseudonymizer over your text first; together they reduce the risk of leaking PII and secrets.
Can I reverse pseudonymized text later?
Yes. Sensitive Data Pseudonymizer provides mapping data so placeholders can be restored locally.
Do these tools upload my content?
No. These tools run client-side only and do not send prompt or file data to the server.