Quantcast
Channel: Positive Security | IT Security research and happy coincidences
Viewing all articles
Browse latest Browse all 11

Hacking Auto-GPT and escaping its docker container

$
0
0
We leverage indirect prompt injection to trick Auto-GPT (GPT-4) into executing arbitrary code when it is asked to perform a seemingly harmless task such as text summarization on a malicious website, and discovered vulnerabilities that allow escaping its sandboxed execution environment.

Viewing all articles
Browse latest Browse all 11

Latest Images

Trending Articles





Latest Images