
[Feature Request] Automated Adversarial Bug fixing #1

jazir555 opened this issue Mar 3, 2025 · 2 comments
jazir555 commented Mar 3, 2025

Overview

I’d love to see an automated adversarial bug testing system implemented—essentially a "code by consensus" approach, where multiple LLMs collaborate to generate, review, and refine code in an iterative loop.

Proposed Process

1. Code Generation – The process starts with an initial LLM generating code from a given prompt (e.g., "Build me a recipe maker and generate a recipe for a sandwich.").

2. Adversarial Bug Checking – The generated code is automatically routed to multiple other LLMs (e.g., Claude and Gemini) for bug detection.

3. Bug Aggregation & Fixes – The detected bugs are collected, and another LLM (or the same one) implements the necessary fixes.

4. Iterative Improvement – This loop continues until all participating LLMs agree that the code is bug-free.

5. Final Debugging & Logging – The refined code is run through a debugger, with errors logged and any remaining issues resolved through a further automated pass.

This approach essentially enables bug fixing by committee, leveraging the strengths of different models to improve reliability and robustness.
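A minimal sketch of that loop, assuming a hypothetical `call_llm(model, prompt)` helper (a stand-in for whatever client routes prompts to each provider) and naive "NO BUGS" string matching as the consensus signal:

```python
# Hypothetical sketch of the consensus loop; call_llm() and the model
# names are placeholders, not a real API.
def call_llm(model: str, prompt: str) -> str:
    """Placeholder: send `prompt` to `model` and return its text reply."""
    raise NotImplementedError("wire this up to your LLM client of choice")

def consensus_bug_fixing(task: str,
                         generator: str,
                         reviewers: list[str],
                         fixer: str,
                         max_rounds: int = 5) -> str:
    # Step 1: initial code generation.
    code = call_llm(generator, f"Write code for this task:\n{task}")
    for _ in range(max_rounds):
        # Step 2: adversarial bug checking by each reviewer model.
        reports = [call_llm(r, f"List bugs in this code, or reply NO BUGS:\n{code}")
                   for r in reviewers]
        # Step 4: stop once every reviewer agrees the code is bug-free.
        if all(rep.strip() == "NO BUGS" for rep in reports):
            break
        # Step 3: aggregate the bug reports and have the fixer apply them.
        bugs = "\n\n".join(reports)
        code = call_llm(fixer, f"Fix these reported bugs:\n{bugs}\n\nCode:\n{code}")
    return code
```

Step 5 (running the result through a real debugger and logging errors) would sit after this loop, applied to the returned code.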

Configurable Parameters

To enhance flexibility, users should be able to configure:

  • Choice of LLMs – Select which model generates the initial code and which ones review for bugs.
  • Bug-Fixing Implementation – Define which model applies fixes, with options for fixed selection, random rotation, or round-based rotation.
  • Iteration Limit – Set a maximum number of bug-checking rounds (e.g., 3 or 5).
  • Feature Expansion Rules – Introduce additional features at predefined steps (e.g., at round 5, request and implement a new feature before continuing the bug-fixing process).
  • Time & Resource Constraints – Limit the process by time (e.g., run for X minutes/hours) or API usage (e.g., cap at Y code generations).
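One way these knobs might surface in a configuration object; every field name and default below is illustrative, not an existing setting:

```python
import random
from dataclasses import dataclass

@dataclass
class ConsensusConfig:
    generator: str = "model-a"                           # writes the initial code
    reviewers: tuple[str, ...] = ("model-b", "model-c")  # review for bugs
    fixer_strategy: str = "fixed"                        # "fixed" | "random" | "round_robin"
    max_rounds: int = 5                                  # cap on bug-checking rounds
    feature_rounds: tuple[int, ...] = (5,)               # rounds that trigger feature expansion
    max_minutes: float | None = None                     # wall-clock limit, if any
    max_generations: int | None = None                   # cap on total code generations

    def pick_fixer(self, round_no: int) -> str:
        """Choose which model applies fixes this round."""
        pool = (self.generator, *self.reviewers)
        if self.fixer_strategy == "random":
            return random.choice(pool)
        if self.fixer_strategy == "round_robin":
            return pool[round_no % len(pool)]
        return pool[0]  # "fixed": always the same model
```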

Enhanced Workflow with Feature Expansion

To take this further, the system could incorporate stepwise feature addition:

  • Round 1: Initial code generation
  • Rounds 2-4: Bug checking and fixes
  • Round 5: Request and implement additional feature ideas
  • Rounds 6-8: Bug checking and refining the new feature
  • Repeat until conditions are met

This structure enables automated iterative development, where new functionality is incrementally added and tested without manual intervention.
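A sketch of a driver for that schedule, reusing the hypothetical `call_llm` and `ConsensusConfig` from the earlier sketches; for brevity the feature cadence is hard-coded to every fourth round (5, 9, 13, ...) rather than read from `feature_rounds`:

```python
import time

def phase_for_round(round_no: int) -> str:
    """Map a round onto the schedule above: generate on round 1,
    add a feature every fourth round after that, fix bugs otherwise."""
    if round_no == 1:
        return "generate"
    if (round_no - 1) % 4 == 0:  # rounds 5, 9, 13, ...
        return "add_feature"
    return "bug_fix"

def run_schedule(task: str, cfg: ConsensusConfig) -> str:
    start = time.monotonic()
    code, generations = "", 0
    for round_no in range(1, cfg.max_rounds + 1):
        # Respect the time and API-usage budgets, if configured.
        if cfg.max_minutes and (time.monotonic() - start) / 60 >= cfg.max_minutes:
            break
        if cfg.max_generations and generations >= cfg.max_generations:
            break
        phase = phase_for_round(round_no)
        if phase == "generate":
            code = call_llm(cfg.generator, f"Write code for this task:\n{task}")
        elif phase == "add_feature":
            # Request a new feature idea, then implement it before resuming fixes.
            idea = call_llm(cfg.generator, f"Suggest one new feature for:\n{code}")
            code = call_llm(cfg.generator,
                            f"Implement this feature:\n{idea}\n\nCode:\n{code}")
        else:  # bug_fix: one adversarial check-and-fix pass
            reports = [call_llm(r, f"List bugs, or reply NO BUGS:\n{code}")
                       for r in cfg.reviewers]
            if all(rep.strip() == "NO BUGS" for rep in reports):
                continue  # reviewers agree; move on to the next scheduled round
            fixer = cfg.pick_fixer(round_no)
            code = call_llm(fixer, "Fix these bugs:\n" + "\n\n".join(reports)
                            + f"\n\nCode:\n{code}")
        generations += 1
    return code
```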

Conclusion

This approach would provide an efficient, scalable method for automated bug fixing and feature expansion, leveraging multiple LLMs in a structured workflow. It would also give users precise control over the debugging and development process while optimizing API usage.

Would love to hear your thoughts! 🚀

jazir555 changed the title from "[Feature Request] Automated Adversarial Bug Testing" to "[Feature Request] Automated Adversarial Bug fixing" on Mar 3, 2025

jazir555 commented Mar 3, 2025

Cleaned up the formatting and expanded/clarified what I mean.


jazir555 commented Mar 3, 2025

The intent is to automate the development process as much as possible.
