From 223d846329fc9ffec6555914c4669deddfc05a6a Mon Sep 17 00:00:00 2001
From: wunderwuzzi23 <35349594+wunderwuzzi23@users.noreply.github.com>
Date: Wed, 8 Jan 2025 16:03:52 -0800
Subject: [PATCH] fix typos

---
 ...d-chatgpt-command-and-control-via-prompt-injection-zombai.md | 2 +-
 .../index.html                                                  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/content/posts/2025/spaiware-and-chatgpt-command-and-control-via-prompt-injection-zombai.md b/content/posts/2025/spaiware-and-chatgpt-command-and-control-via-prompt-injection-zombai.md
index 0c6c3d43..19bfd4d2 100644
--- a/content/posts/2025/spaiware-and-chatgpt-command-and-control-via-prompt-injection-zombai.md
+++ b/content/posts/2025/spaiware-and-chatgpt-command-and-control-via-prompt-injection-zombai.md
@@ -144,7 +144,7 @@ That's it. As you can see the owner of the storage account can inspect the infor
 
 ## Responsible Disclosure
 
-The information in this blog post was disclosed to OpenAI early October 2024, although some issues date back further a lot further. Some of the recommendations provided included asking for the user for confirmation before storing memories, the need to revisit the `url_safe` feature (as there are bypasses), and to disable automatic tool invocation once untrusted data is in the chat context.
+The information in this blog post was disclosed to OpenAI early October 2024, although some issues date back a lot further. Recommendations provided included asking the user for confirmation before storing memories, the need to revisit the `url_safe` feature to mitigate bypasses, and to disable automatic tool invocation once untrusted data is in the chat context.
 
 ## Conclusion
 
diff --git a/docs/posts/2025/spaiware-and-chatgpt-command-and-control-via-prompt-injection-zombai/index.html b/docs/posts/2025/spaiware-and-chatgpt-command-and-control-via-prompt-injection-zombai/index.html
index 1e02d044..b0065e45 100644
--- a/docs/posts/2025/spaiware-and-chatgpt-command-and-control-via-prompt-injection-zombai/index.html
+++ b/docs/posts/2025/spaiware-and-chatgpt-command-and-control-via-prompt-injection-zombai/index.html
@@ -217,7 +217,7 @@

[Image: Azure Blob Storage]

[Image: azure log]

That’s it. As you can see, the owner of the storage account can inspect the information that ChatGPT leaks. This is not the only domain I found a bypass for, but it is the easiest and most straightforward to demonstrate.
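For illustration, here is a minimal sketch (not the exact tooling used for this research) of how the storage account owner could pull the leaked request URLs out of Azure Storage Analytics logs with the azure-storage-blob Python SDK. It assumes classic analytics logging is enabled on the account and that the placeholder connection string is filled in.

```python
# Minimal sketch: list the Storage Analytics logs in the special "$logs"
# container and print GetBlob entries. When ChatGPT fetches an attacker-hosted
# image, the requested blob path / query string (i.e. the exfiltrated data)
# shows up in these log entries.
from azure.storage.blob import BlobServiceClient

# Placeholder connection string for the attacker-controlled storage account.
CONN_STR = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net"

service = BlobServiceClient.from_connection_string(CONN_STR)
logs = service.get_container_client("$logs")

for blob in logs.list_blobs():
    entries = logs.download_blob(blob.name).readall().decode("utf-8", errors="replace")
    for line in entries.splitlines():
        # Storage Analytics log lines are semicolon-delimited; a simple
        # substring match on the operation type is enough for a demo.
        if "GetBlob" in line:
            print(line)
```

Whatever the injected prompt squeezed into the requested URL ends up readable in these entries, which is what the log screenshot above shows.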

Responsible Disclosure

-The information in this blog post was disclosed to OpenAI early October 2024, although some issues date back further a lot further. Some of the recommendations provided included asking for the user for confirmation before storing memories, the need to revisit the url_safe feature (as there are bypasses), and to disable automatic tool invocation once untrusted data is in the chat context.

+The information in this blog post was disclosed to OpenAI early October 2024, although some issues date back a lot further. Recommendations provided included asking the user for confirmation before storing memories, the need to revisit the url_safe feature to mitigate bypasses, and to disable automatic tool invocation once untrusted data is in the chat context.
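As a rough illustration of the last recommendation, the sketch below shows how automatic invocation of sensitive tools (memory writes, URL fetches) could be gated on whether untrusted data has already entered the chat context. The tool names and data structures are hypothetical, not OpenAI's actual implementation.

```python
# Hypothetical sketch of the "no automatic tool invocation after untrusted
# data" recommendation; names and message layout are illustrative only.
from dataclasses import dataclass, field

UNTRUSTED_SOURCES = {"web", "file_upload", "tool_output"}
SENSITIVE_TOOLS = {"memory.write", "browser.open_url"}

@dataclass
class Conversation:
    # Each message: {"role": ..., "source": ..., "content": ...}
    messages: list = field(default_factory=list)

    def has_untrusted_content(self) -> bool:
        return any(m.get("source") in UNTRUSTED_SOURCES for m in self.messages)

def should_auto_invoke(conversation: Conversation, tool_name: str) -> bool:
    """Allow a tool to run automatically only while the context is still trusted."""
    if tool_name in SENSITIVE_TOOLS and conversation.has_untrusted_content():
        return False  # fall back to asking the user for explicit confirmation
    return True
```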

Conclusion

This research demonstrates the potential for advanced prompt injection exploits to compromise AI systems in unprecedented ways, enabling long-term remote control and exposing significant gaps in existing safeguards.

Demonstrating the feasibility of compromising individual ChatGPT instances via prompt injection and maintaining continuous remote control highlights the need for stronger defenses against threats to long-term storage, prompt injection attacks, and data exfiltration.