From bb7147dba410e467ae05aedac63e4ffcc51fc57b Mon Sep 17 00:00:00 2001
From: wunderwuzzi23 <35349594+wunderwuzzi23@users.noreply.github.com>
Date: Mon, 6 Jan 2025 20:59:11 -0800
Subject: [PATCH] ai domination

---
 ...chatgpt-command-and-control-via-prompt-injection-zombai.md | 4 ++++
 .../index.html                                                | 1 +
 2 files changed, 5 insertions(+)

diff --git a/content/posts/2025/spaiware-and-chatgpt-command-and-control-via-prompt-injection-zombai.md b/content/posts/2025/spaiware-and-chatgpt-command-and-control-via-prompt-injection-zombai.md
index 90eea2b7..0c6c3d43 100644
--- a/content/posts/2025/spaiware-and-chatgpt-command-and-control-via-prompt-injection-zombai.md
+++ b/content/posts/2025/spaiware-and-chatgpt-command-and-control-via-prompt-injection-zombai.md
@@ -25,8 +25,12 @@ However, there is one part that I want to highlight explicitly:
 An adversary can compromise ChatGPT instances and have them join a central Command and Control system which provides updated instructions for all the remote controlled ChatGPT instances to follow over-time.
 
+[![ai domination](/blog/images/2025/chatgpt-zombai-tn.png)](/blog/images/2025/chatgpt-zombai-tn.png)
+
 **This research and proof-of-concept demonstrate that it is possible to compromise and remotely control ChatGPT instances through prompt injection, effectively establishing the foundational elements of a novel kind of botnet.**
 
+
+
 Let me explain how ChatGPT is turned into a "ZombAI".
 
 ## Compromising a ChatGPT Instance
diff --git a/docs/posts/2025/spaiware-and-chatgpt-command-and-control-via-prompt-injection-zombai/index.html b/docs/posts/2025/spaiware-and-chatgpt-command-and-control-via-prompt-injection-zombai/index.html
index 621f2b43..1e02d044 100644
--- a/docs/posts/2025/spaiware-and-chatgpt-command-and-control-via-prompt-injection-zombai/index.html
+++ b/docs/posts/2025/spaiware-and-chatgpt-command-and-control-via-prompt-injection-zombai/index.html
@@ -153,6 +153,7 @@
 A Command and Control system (C2) that uses prompt injection to remote control ChatGPT instances.
 An adversary can compromise ChatGPT instances and have them join a central Command and Control system which provides updated instructions for all the remote controlled ChatGPT instances to follow over-time.
+This research and proof-of-concept demonstrate that it is possible to compromise and remotely control ChatGPT instances through prompt injection, effectively establishing the foundational elements of a novel kind of botnet.
 Let me explain how ChatGPT is turned into a “ZombAI”.