This policy was adapted from the Ghostty project. The original version can be found in Ghostty's PR #10412.
The NetExec project has strict rules for AI usage:
- All AI usage in any form must be disclosed. You must state the tool(s) and model(s) you used (e.g. Claude Code, Cursor, Opus 4.6, Codex 5.2, etc.) along with the extent to which the work was AI-assisted.
- Pull requests created in any way by AI can only be for accepted issues. Drive-by pull requests that do not reference an accepted issue may be rejected and closed. If AI isn't disclosed but a maintainer suspects its use, the PR may be rejected and closed. If you want to share code for a non-accepted issue, open a discussion or attach it to the existing issue.
- Pull requests created by AI must have been fully verified with human use. AI must not create hypothetically correct code that hasn't been tested. Importantly, you must not allow AI to write code for platforms or environments you don't have access to manually test on.
- Issues and discussions can use AI assistance but must have a full human-in-the-loop. This means that any content generated with AI must have been reviewed and edited by a human before submission. AI is very good at being overly verbose and including noise that distracts from the main point. Humans must do their research and trim this down.
- No AI-generated media is allowed (art, images, videos, audio, etc.). Text and code are the only acceptable AI-generated content, per the other rules in this policy.
- Bad AI drivers will be banned. You've been warned. We love to help junior developers learn and grow, but if that's what you're after, don't use AI, and we'll help you ourselves.
- Official maintainers have the final say. We always strive to be helpful, but there are limits. If you submit egregiously terrible AI-generated code with no review, we may ban you without warning. We do not want to waste our time reviewing slop if the contributor can't be bothered to review the work themselves.
These rules apply only to outside contributions to NetExec. Maintainers and trusted contributors are exempt from these rules and may use AI tools at their discretion; they have proven they can be trusted to apply good judgment.
Please remember that NetExec is maintained by humans.
Every discussion, issue, and pull request is read and reviewed by humans (and sometimes machines, too). It is a boundary point at which people interact with each other and with the work being done. It is rude and disrespectful to approach this boundary with low-effort, unqualified work, since it puts the burden of validation on the maintainers.
In a perfect world, AI would produce high-quality, accurate work every time, but today that is simply not true. This is compounded by the fact that AI is highly accessible, leading low-skilled individuals to believe they are contributing useful code. Even many skilled programmers do not understand how to use it effectively. This has unleashed a flood of low-quality contributions across the open-source community, wasting maintainers' resources.
NetExec maintainers acknowledge AI as a productive tool for some workflows and are open to leveraging this technology to improve NetExec; however, there are many low-quality AI tools whose use results in pure slop being generated. The security community is not immune to AI psychosis, over-hype, or FOMO. As with any new technology, it is important to understand how it works and how best to use it, not to blindly apply it to every use case in the hope that it will fix all your issues.
We include this section to be transparent about the project's use of AI for people who may disagree with it.