UUID-based prompt injection protection for shell command output in LLM agents
LLM agents that execute shell commands are vulnerable to prompt injection via command output. An attacker controlling API responses, log files, or any external data can embed fake closing markers and instructions that the model may follow.