
docs: update with AI Contribution Policy#621

Open
sdiazlor wants to merge 1 commit into main from ai-contribution-policy

Conversation

@sdiazlor
Collaborator

@sdiazlor sdiazlor commented Apr 7, 2026

Description

Add the AI Contribution Policy to the documentation. It is a shorter version of CodeCarbon's policy.

Related Issue

Fixes #(issue number)

Type of Change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Refactor (no functional change)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Documentation update

Testing

  • I added or updated tests covering my changes
  • Existing tests pass locally (uv run pytest -m "cpu and not slow")

For full setup and testing instructions, see the Contributing Guide.

Checklist

  • My code follows the style guidelines of this project
  • I have performed a self-review of my code, especially for agent-assisted changes
  • I updated the documentation where necessary

Thanks for contributing to Pruna! We're excited to review your work.

New to contributing? Check out our Contributing Guide for everything you need to get started.

Note:

  • Draft PRs or PRs without a clear and detailed overview may be delayed.
  • Please mark your PR as Ready for Review and ensure the sections above are filled out.
  • Contributions that are entirely AI-generated without meaningful human review are discouraged.

First Prune (1-year OSS anniversary)

First Prune marks one year of Pruna’s open-source work. During the initiative window, qualifying merged contributions count toward First Prune. You can earn credits for our performance models via our API.

If you’d like your contribution to count toward First Prune, here’s how it works:

  • Initiative window: First Prune starts on March 31.
  • Issue assignment: For your PR to count toward First Prune, the related issue must be assigned to the contributor opening the PR. Issues are labeled with first-prune.
  • Open for review: Please open your PR and mark it ready for review by April 30 (end of April).
  • Review priority: We’ll make our best effort to quickly review any PR that is open and has a review request before April 30.
  • Credits: Each qualifying merged PR earns 30 credits. We’ll be in touch after all qualifying PRs for First Prune have been merged.
  • To get started: Have a look at all models. You’ll need to sign up on the dashboard before you can redeem your credits.

@codacy-production

Up to standards ✅

🟢 Issues 0 issues

Results:
0 new issues

View in Codacy

TIP: This summary will be updated as you push new changes.

@sdiazlor sdiazlor marked this pull request as ready for review April 8, 2026 22:44
@sdiazlor sdiazlor requested a review from minettekaum April 8, 2026 22:44
Contributor

@minettekaum minettekaum left a comment


Thanks for creating this, Sara 😍

I had a few comments on the wording, mostly to make the policy clearer and a bit easier for us to follow. Let me know if you have any questions about my comments :)

@@ -0,0 +1,58 @@
# AI contribution policy

At Pruna, we very much welcome any contributions from the community. However, we want to ensure they are high quality and aligned with our guidelines. To prevent unwanted behavior by the community, we have created this AI contribution policy. Please read it carefully before contributing.

change 'prevent unwanted behavior by the community' to 'ensure AI-assisted contributions remain high quality, reviewable, and aligned with project standards,...'


## 1. Core Philosophy

Pruna accepts AI-assisted code (e.g., using Copilot, Cursor, etc.), but strictly rejects AI-generated contributions where the submitter acts merely as a proxy. The submitter is the **Sole Responsible Author** for every line of code, comment, and design decision.

I agree with the accountability principle, but 'Sole Responsible Author' may be too absolute for AI-assisted contributions. I’d suggest wording like 'The human contributor remains fully responsible for every submitted change' instead. That keeps the accountability point without creating tension with the fact that AI assistance is permitted.


> **Accountability lies with the human contributor, not the AI agent**

Coding agents (e.g., Copilot, Claude Code) are not conscious entities and cannot be held accountable for their outputs. They can produce code that looks correct and plausible but contains subtle bugs, security vulnerabilities, or design flaws. So, maintainers and reviewers are ultimately responsible for catching these issues. The following rules ensure that all contributions are carefully vetted and that there is a human submitter behind the agent, taking full responsibility for the submitted code.

This rationale is mostly strong, but I’d tweak the responsibility framing. Right now it sounds like maintainers/reviewers are ultimately on the hook for catching AI mistakes. I think the contributor should be framed as primarily responsible for reviewing and validating the submission, with maintainers providing an additional review layer.


### Law 1: Proof of Verification

AI tools frequently write code that looks correct but fails execution. Therefore, "vibe checks" are insufficient.

I agree with the intent here. Optional wording tweak: replace ‘vibe checks are insufficient’ with something slightly more formal like ‘superficial review is insufficient’.


AI tools frequently write code that looks correct but fails execution. Therefore, "vibe checks" are insufficient.

**Requirement:** Every PR introducing functional changes must be carefully tested locally by the human contributor before submission.

Should 'tested locally' be broadened a bit to something like ‘validated through appropriate local and/or CI checks before submission’?


**Failure Condition:**

- Answering a review question with "That's what the AI outputted" or "I don't know, it works" leads to immediate closure.

I agree with this, but 'immediate closure' may be unnecessarily strict. Maybe say the PR may be closed if the contributor cannot explain or revise the code after reviewer feedback. That keeps the standard high while allowing room for good-faith iteration.


### Law 4: Transparency in AI Usage Disclosure

**Requirement:** If you used AI tools for coding, but manually reviewed and tested every line following the guidelines, you must mark the PR as "AI-assisted".

I would suggest something like this 'If a non-trivial portion of this PR was generated or substantially shaped by AI tools, the PR must be marked as "AI-assisted". Trivial autocomplete, formatting, or minor line completions do not require disclosure.'


**Failure Condition:**

- Lack of transparency about AI tool usage may result in PR closure, especially if the code contains hallucinations or cannot be explained during review.

The disclosure principle makes sense, but I’d tighten the wording here. 'Hallucinations' is understandable informally, though 'fabricated or inaccurate code/comments' may be clearer in policy language.


## 3. Cases where Human must stay in control

In some cases, such as boilerplate code outside the logic of the product, we could accept AI-generated code reviewed by another AI agent.

This paragraph seems inconsistent with the earlier accountability principle. If the core message is that humans must remain responsible, I don’t think 'reviewed by another AI agent' should be the accepted standard, even for boilerplate. I’d suggest framing AI review as optional assistance, while still requiring human review and approval.


In some cases, such as boilerplate code outside the logic of the product, we could accept AI-generated code reviewed by another AI agent.

But for the core logic of the product, we want to ensure that humans fully understand the code and the design decisions. This is to ensure that the code is maintainable, secure, and aligned with the project's goals.

I agree with the principle here, but 'core logic of the product' may be too subjective to enforce consistently. It may help to define this more concretely or provide examples of the kinds of changes that require especially strong human understanding and validation.

