
Commit

fix typos
wunderwuzzi23 committed Jan 9, 2025
1 parent bb7147d commit 223d846
Showing 2 changed files with 2 additions and 2 deletions.
@@ -144,7 +144,7 @@ That's it. As you can see the owner of the storage account can inspect the infor
 
 ## Responsible Disclosure
 
-The information in this blog post was disclosed to OpenAI early October 2024, although some issues date back further a lot further. Some of the recommendations provided included asking for the user for confirmation before storing memories, the need to revisit the `url_safe` feature (as there are bypasses), and to disable automatic tool invocation once untrusted data is in the chat context.
+The information in this blog post was disclosed to OpenAI early October 2024, although some issues date back a lot further. Recommendations provided included asking the user for confirmation before storing memories, the need to revisit the `url_safe` feature to mitigate bypasses, and to disable automatic tool invocation once untrusted data is in the chat context.
 
 ## Conclusion
 
@@ -217,7 +217,7 @@ <h3 id="azure-blob-storage-logs-as-data-exfiltration-vector">Azure Blob Storage
 <p><a href="/blog/images/2024/chatgpt-c2-azure-log.png"><img src="/blog/images/2024/chatgpt-c2-azure-log.png" alt="azure log"></a></p>
 <p>That&rsquo;s it. As you can see the owner of the storage account can inspect the information that ChatGPT leaks. This is not the only domain I found a bypass for, but the easiest and most straightforward to demonstrate.</p>
 <h2 id="responsible-disclosure">Responsible Disclosure</h2>
-<p>The information in this blog post was disclosed to OpenAI early October 2024, although some issues date back further a lot further. Some of the recommendations provided included asking for the user for confirmation before storing memories, the need to revisit the <code>url_safe</code> feature (as there are bypasses), and to disable automatic tool invocation once untrusted data is in the chat context.</p>
+<p>The information in this blog post was disclosed to OpenAI early October 2024, although some issues date back a lot further. Recommendations provided included asking the user for confirmation before storing memories, the need to revisit the <code>url_safe</code> feature to mitigate bypasses, and to disable automatic tool invocation once untrusted data is in the chat context.</p>
 <h2 id="conclusion">Conclusion</h2>
 <p>This research demonstrates the potential for advanced prompt injection exploits to compromise AI systems in unprecedented ways, for long-term remote control, exposing significant gaps in existing safeguards.</p>
 <p>By demonstrating the feasibility of compromising individual ChatGPT instances via prompt injection and maintaining continuous remote control, highlights the need for stronger defenses against threats to long-term storage, prompt injection attacks, data exfiltration.</p>
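
Note on the context lines above: the post states that the owner of the storage account can inspect what ChatGPT leaks via Azure Blob Storage access logs. The sketch below is not from the post itself; it is a minimal illustration of how that inspection might look from the account owner's side, assuming classic Storage Analytics logging is enabled (those logs land in the `$logs` container) and the `azure-storage-blob` package is available. The `AZURE_STORAGE_CONNECTION_STRING` variable name is an illustrative choice, not something defined in the original.

```python
# Sketch: read classic Storage Analytics logs from the $logs container and
# print GetBlob requests; the logged request URL would carry any data encoded
# into the blob path by the exfiltration technique described in the post.
import os

from azure.storage.blob import BlobServiceClient

# Assumption: the connection string is supplied via an environment variable.
service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)
logs = service.get_container_client("$logs")  # classic analytics log container

for blob in logs.list_blobs():
    text = logs.download_blob(blob.name).readall().decode("utf-8", errors="replace")
    for line in text.splitlines():
        # Log entries are semicolon-delimited; the requested URL is one of the
        # fields, so filtering on the operation type is enough for a first pass.
        if ";GetBlob;" in line:
            print(line)
```

From there, parsing out the request-url field shows exactly which blob paths were fetched, and therefore what text was smuggled out.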
