Secret Masking #1643
💡 Auto-generated documentation-based answer: This feature request is already addressed in PR-Agent/Qodo Merge through the "ignore" configuration options, which allow you to exclude specific files or patterns from analysis. You can ignore files or folders containing sensitive information using either glob patterns or regex patterns:
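A minimal sketch of such a configuration, assuming the `[ignore]` section of PR-Agent's `configuration.toml` (the patterns below are placeholders, not recommended defaults):

```toml
[ignore]
# Exclude files by glob pattern (placeholder patterns)
glob = ['*.pem', 'secrets/**']
# Exclude files by regex pattern (placeholder pattern)
regex = ['.*credentials.*']
```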
These configurations can be set in your configuration file or passed as command-line arguments. For example:
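A hedged command-line example follows; it assumes PR-Agent's generic `--section.key=value` override convention and its `python -m pr_agent.cli` entry point, and the pattern is again a placeholder:

```bash
# Review a PR while excluding key files from the data sent to the model
python -m pr_agent.cli --pr_url=<PR_URL> review --ignore.glob="['*.pem']"
```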
This ensures that sensitive files, or files matching specific patterns, are excluded from the data sent to AI models, effectively masking secrets from the analysis process.
Thanks, while this helps, can we also look into integrating tools like LLM Guard?
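For context, a minimal sketch of what such an integration might look like, using LLM Guard's `Secrets` input scanner to redact detected secrets from a diff before it reaches the model. This is an illustration, not PR-Agent code: the `mask_secrets` helper is hypothetical, and the `redact_mode="all"` value is an assumption about the scanner's options.

```python
# Hypothetical helper, not part of PR-Agent: redact secrets from a PR
# diff with LLM Guard before the text is sent to an AI model.
from llm_guard.input_scanners import Secrets

def mask_secrets(diff_text: str) -> str:
    # The Secrets scanner detects credentials (API keys, tokens, etc.);
    # redact_mode="all" (assumed option) replaces each match entirely.
    scanner = Secrets(redact_mode="all")
    sanitized, is_valid, risk_score = scanner.scan(diff_text)
    if not is_valid:
        # Secrets were found and redacted; a real integration could
        # raise an alert here instead of masking silently.
        print(f"Secrets detected (risk score {risk_score}); redacted.")
    return sanitized
```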
Sensitive data should never be committed to git. Never. For 'chat-gpt-in-IDE' tools, I can understand why a masking feature might be needed - you are working in an intermediate state, and you might have local uncommitted files. But for PRs, secrets should not appear. If they do, PR-Agent should give an alert. In addition, most AI providers today support zero data retention, so the harm of sending "sensitive" data (in the very rare cases it might occur) is low.
Feature request
Secrets should be masked before data is sent to AI models, for security.
Motivation
Some repositories may contain sensitive information that should not be shared with AI models, so it should be masked before data is sent to the model.