3. Set guardrails around LLM use and prompt injection
Exfiltration doesn’t just happen through networks. It happens through prompts. Employees can overshare by pasting sensitive material into AI assistants or asking overly broad questions, and attackers can go further with prompt injection: embedding malicious instructions in the content a model processes so that it leaks proprietary code, secrets, or customer data through its response.
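As one illustration, here is a minimal sketch of an output guardrail in Python: it scans a model response for patterns that resemble secrets before the response is returned to the user. The pattern list and the `guard_response` function are illustrative assumptions, not a complete data loss prevention solution.

```python
import re

# Illustrative patterns for data that should never leave in a model response.
# A real deployment would pair this with a dedicated secrets scanner or DLP service.
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def guard_response(response: str) -> str:
    """Block a model response that appears to contain sensitive data."""
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(response)]
    if findings:
        # Log the incident and return a safe refusal instead of the raw output.
        print(f"Blocked response: matched {', '.join(findings)}")
        return "This response was withheld because it may contain sensitive data."
    return response

if __name__ == "__main__":
    print(guard_response("The deployment uses three replicas behind a load balancer."))
    print(guard_response("Sure, the key is AKIA1234567890ABCDEF."))
```

The same idea applies on the way in: screening prompts and retrieved documents for injected instructions before they reach the model narrows the window for both accidental oversharing and deliberate exfiltration.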