Generative AI documentation Added #3932
Conversation
One suggestion: Instead of adding a section to the PR template, which is text that both contributors and reviewers need to parse through, could we add an entry to the table at the top (along with Related issue, Has Unit Test, etc.) that has the checkbox and a link to GEN_AI.md, which would contain all this information?
Svc/OsTime/OsTime.cpp
Outdated
        return;
    }
    Os::ScopeLock lock(m_epoch_lock);
    m_epoch_fw_time = Fw::Time(seconds_now, 0);
    m_epoch_os_time = time_now;
    m_epoch_valid = true;
-   this->cmdResponse_out(opCode,cmdSeq,Fw::CmdResponse::OK);
+   // Send success response after setting epoch
+   this->cmdResponse_out(opCode, cmdSeq, Fw::CmdResponse::OK);
Check warning: Code scanning / CodeQL: Unchecked function argument
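For context, that finding flags a command argument being used without prior validation. Below is a minimal, hypothetical sketch of the kind of guard that usually addresses it; the handler name, signature, and the specific validity check are illustrative assumptions, not the actual Svc::OsTime interface (only the member names and the response call are taken from the diff above):

```cpp
// Hypothetical sketch -- handler name/signature and the validity bound are
// assumptions for illustration; member names mirror the diff above.
void OsTime::SET_EPOCH_cmdHandler(FwOpcodeType opCode, U32 cmdSeq, U32 seconds_now) {
    // Validate the caller-supplied argument before it touches shared state,
    // which is what an "unchecked function argument" finding asks for.
    if (seconds_now == 0) {  // assumed invalid value, for the sake of the example
        this->cmdResponse_out(opCode, cmdSeq, Fw::CmdResponse::VALIDATION_ERROR);
        return;
    }
    // ... obtain time_now from the OS and return early on failure, as in the diff above ...
    Os::ScopeLock lock(m_epoch_lock);
    m_epoch_fw_time = Fw::Time(seconds_now, 0);
    // m_epoch_os_time = time_now;  // as in the original diff
    m_epoch_valid = true;
    // Send success response after setting epoch
    this->cmdResponse_out(opCode, cmdSeq, Fw::CmdResponse::OK);
}
```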
resolving comments
resolving comment
I think this policy is missing an important requirement: Users of LLMs need to verify not just that output is correct, but also that every piece of it is necessary. LLM output is often overly verbose, which requires the reader to sort through irrelevant information to understand what someone was trying to say. Good communication requires being concise and "to the point." Since this policy was (at least partially) generated using an LLM, I'll use it as an example. Why is the following line included under "security"?
What does this have to do with security in particular? And what does it add to the policy that hasn't already been said previously in the policy? There are a number of lines like this one that I believe could be deleted from the file without changing the meaning. I would also suggest that, if someone uses generative AI in a contribution, it should be mandatory to explain how it was used. I don't see why this should be optional.
GENERATIVE_AI.md
Outdated

## Disclosure

To maintain transparency and enable effective code review, contributors must disclose generative AI usage in pull requests:
I suspect we have also seen an uptick in GenAI used to generate comments and discussions. I think we'd benefit from saying that all GenAI usage must also be disclosed (not just PRs).
Should we also reserve the right to handle content that we think violates the policy (GenAI usage without being disclosed) as we see fit?
The goal of this would be to save maintainers time. For example, unsolicited PRs that add not-so-useful comments and rename some variables... (we've had that in the past).
I think that disclosing all GenAI usage would be beneficial!
Would this look like:
- Updating the GENERATIVE_AI.md policy to say that all GenAI usage must be disclosed
- Making updates to the md files in the ISSUE_TEMPLATE directory stating that the maintainers have the right to handle content that violates the GenAI usage policy?
If it's something that would be nice to have, I can make these changes and add them to this PR.
No need to update the Issue template, but yes, if you could add a mention of that in GENERATIVE_AI.md, that'd be great!
Change Description
Added GENERATIVE_AI.md describing how F' values Generative AI and best practices for using it.
Modified pull_request_template.md to add an "AI Usage" section where contributors can describe whether and how AI was used in their PR.
Rationale
F' development has seen an increase in the use of AI. It is best practice to create documentation on how we should go about integrating it into development.
Also, when reviewing PRs, if reviewers can see whether and how contributors used generative AI, they can evaluate the contribution with the context that it is AI generated.
Issue #3897 was assigned earlier.
Testing/Review Recommendations
It would be best if reviewers could go through the `F' Generative AI Usage Guidelines` and check whether what was written reflects what the development team really wants.
Also, I committed the fix for issue #3897 on the same branch as the Generative AI documentation additions. I figured both could be done in the same PR since neither commit is too intensive.
Future Work
Currently, there is little to no documentation or policy on generative AI for open-source projects, and what exists is hard to find. In the future, this may need to be updated to fit such standards if and when they come out.
AI Usage
Disclosure: Generative AI usage is allowed and encouraged where appropriate. For effective code review, please indicate where AI was used so reviewers can evaluate contributions with this context.
(optional) If AI was used, please describe how it was utilized:
CONTRIBUTING.md and GENERATIVE_AI.md used generative AI.