# FactAgent

A multi-agent fact-checking system that combines evidence retrieval with multiple reasoning strategies to verify factual claims across diverse domains.
## Team

- Tam Trinh
- Manh Nguyen
- Hy Truong Son (Correspondent / PI)
## Requirements

- Python 3.11 or higher

## Installation

1. Clone the repository:

```bash
git clone https://github.com/your-username/FactAgent.git
cd FactAgent
```

2. Create a virtual environment:

```bash
python3.11 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```

3. Install dependencies:

```bash
pip install -r requirements.txt
```

4. Set up environment variables:

```bash
cp .env.example .env
# Edit .env with your API keys and configuration
```
## Project Structure

```
FactAgent/
├── src/
│   ├── main_agent.py          # Main FactAgent implementation
│   ├── run_experiments.py     # Experiment runner
│   ├── evaluate.py            # Evaluation utilities
│   ├── utils.py               # Utility functions
│   ├── experiments/           # Different reasoning methods
│   │   ├── cot.py             # Chain-of-Thought reasoning
│   │   ├── direct.py          # Direct reasoning
│   │   ├── folk.py            # FOLK reasoning
│   │   └── sase.py            # SASE reasoning
│   ├── prompts/               # Prompt templates
│   │   ├── evidence_seeking.py
│   │   ├── input_ingestion.py
│   │   ├── query_generation.py
│   │   └── verdict_prediction.py
│   └── tools/                 # Tools and utilities
│       ├── retrieve.py        # Evidence retrieval
│       └── media_bias_data.json
├── data/                      # Test datasets
│   ├── FeverousDev/
│   ├── HoVerDev/
│   └── SciFact-Open/
├── requirements.txt
└── README.md
```
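The internals of the evidence retrieval tool (`src/tools/retrieve.py`) are not shown here; the `SERPER_API_KEY` in the configuration suggests it queries Serper's Google Search API. A minimal sketch under that assumption (the function names `build_request`, `extract_snippets`, and `retrieve` are illustrative, not the project's actual API):

```python
import json
import os
import urllib.request

# Serper's search endpoint (assumption: retrieval goes through Serper)
SERPER_URL = "https://google.serper.dev/search"

def build_request(query: str, num_results: int = 5) -> dict:
    """Build the JSON payload Serper expects for a search query."""
    return {"q": query, "num": num_results}

def extract_snippets(response: dict) -> list[str]:
    """Pull the text snippets out of a Serper search response."""
    return [item["snippet"] for item in response.get("organic", []) if "snippet" in item]

def retrieve(query: str) -> list[str]:
    """Query Serper and return evidence snippets (requires SERPER_API_KEY)."""
    req = urllib.request.Request(
        SERPER_URL,
        data=json.dumps(build_request(query)).encode(),
        headers={
            "X-API-KEY": os.environ["SERPER_API_KEY"],
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return extract_snippets(json.load(resp))
```

The request/response helpers are pure functions, so the parsing logic can be tested without a network call or API key.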
## Usage

```python
from src.main_agent import FactAgent

# Initialize the agent
agent = FactAgent(dataset="fever")

# Verify a claim
claim = "The Earth is round."
result = agent.verify_claim(claim)
print(f"Label: {result['label']}")
print(f"Explanation: {result['explanation']}")
```
## Running Experiments

```bash
# Run all experiments with different models
python src/run_experiments.py

# Evaluate results
python src/evaluate.py
```
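The internals of `src/evaluate.py` are not shown; claim-verification results are conventionally scored with label accuracy and macro-F1 over the verdict labels. A minimal sketch under that assumption (not the project's actual evaluation code):

```python
def accuracy(gold: list[str], pred: list[str]) -> float:
    """Fraction of claims whose predicted label matches the gold label."""
    assert len(gold) == len(pred)
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def macro_f1(gold: list[str], pred: list[str]) -> float:
    """Unweighted mean of per-label F1 scores."""
    labels = sorted(set(gold) | set(pred))
    f1s = []
    for label in labels:
        tp = sum(g == p == label for g, p in zip(gold, pred))
        fp = sum(p == label and g != label for g, p in zip(gold, pred))
        fn = sum(g == label and p != label for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

Macro-F1 weights every label equally, which matters for FEVER-style data where `NOT ENOUGH INFO` can be much rarer than `SUPPORTS`.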
## Configuration

Create a `.env` file in the project root with:

```
OPENAI_API_KEY=your_openai_api_key
GOOGLE_API_KEY=your_google_api_key
SERPER_API_KEY=your_serper_api_key
```
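How the project loads these variables isn't shown; a common choice is the `python-dotenv` package's `load_dotenv()`. A stdlib-only sketch that handles the same basic `KEY=value` format (an illustration, not the project's loader):

```python
import os

def load_env(path: str = ".env") -> dict[str, str]:
    """Parse a simple KEY=value .env file and export it into os.environ.

    Blank lines and '#' comments are skipped. This mimics the basic
    behaviour of python-dotenv for the format shown above (assumption:
    no quoting, multiline values, or variable interpolation).
    """
    loaded = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            loaded[key.strip()] = value.strip()
    os.environ.update(loaded)
    return loaded
```

Keeping keys in `.env` (and out of version control) lets the same code run locally and in CI by swapping the file rather than editing source.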