El Matador is an AI-powered news credibility analysis tool that helps users evaluate the trustworthiness of news articles. It detects misinformation patterns, analyzes emotional language, highlights suspicious claims, and produces an overall credibility score, all through an interactive Streamlit web interface.
- Credibility Scoring — Assigns a trustworthiness score to news articles using a trained ML model
- Emotional Tone Analysis — Detects emotionally charged or manipulative language that may signal bias
- Claim Highlighting — Identifies and highlights specific claims within articles that warrant scrutiny
- Pattern Detection — Flags common misinformation patterns and rhetorical techniques
- Interactive Web UI — Clean, browser-based interface built with Streamlit
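To give a rough feel for how these stages combine, here is a toy sketch of a scoring pipeline. Everything below is illustrative: the function name and the cheap heuristics are hypothetical stand-ins, while the real analyzers use a trained ML model.

```python
def analyze_article(text: str) -> dict:
    """Toy stand-in for the real pipeline: a few cheap heuristics
    aggregated into a 0-100 credibility score. The actual project
    uses a trained scikit-learn model instead of these rules."""
    words = text.split()
    # Count emotionally loaded words (tiny illustrative lexicon)
    loaded = sum(w.strip(".,!?").lower() in {"shocking", "outrageous", "unbelievable"}
                 for w in words)
    # Count exclamation marks and ALL-CAPS words as manipulation signals
    exclamations = text.count("!")
    all_caps = sum(w.isupper() and len(w) > 2 for w in words)
    penalty = 10 * loaded + 5 * exclamations + 5 * all_caps
    return {
        "credibility": max(0, 100 - penalty),
        "loaded_words": loaded,
        "exclamations": exclamations,
        "all_caps_words": all_caps,
    }
```

A neutral sentence keeps a full score, while sensational phrasing is penalized, which mirrors the idea behind the real emotional-tone and pattern checks.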
| Layer | Technology |
|---|---|
| Language | Python 3.9+ |
| Web UI | Streamlit |
| ML Model | Scikit-learn (trained via `train_model.py`) |
| Frontend Assets | HTML, CSS, JavaScript (in /templates and /static) |
| Config | .streamlit/config.toml |
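For reference, a minimal `.streamlit/config.toml` could look like the fragment below. The values shown are illustrative defaults, not the project's actual settings:

```toml
[server]
port = 8501
headless = true

[theme]
base = "light"
primaryColor = "#d33"
```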
```
El_Matador/
├── streamlit_app.py          # Main application entry point
├── credibility_analyzer.py   # Core credibility scoring logic
├── emotional_analyzer.py     # Emotional language detection
├── claim_highlighter.py      # Claim identification and highlighting
├── pattern_detector.py       # Misinformation pattern detection
├── train_model.py            # ML model training script
├── utils.py                  # Shared utility functions
├── models/                   # Saved ML model files
├── static/                   # CSS, JS, and other static assets
├── templates/                # HTML templates
├── .streamlit/               # Streamlit configuration
│   └── config.toml
├── .kiro/specs/              # Project specs (news-credibility-analysis)
├── requirements.txt          # Python dependencies
└── .gitignore
```
- Python 3.9 or higher
- pip package manager
```bash
git clone https://github.com/parthz-13/El_Matador.git
cd El_Matador
```

Create and activate a virtual environment:

```bash
python -m venv venv

# Activate on macOS/Linux
source venv/bin/activate

# Activate on Windows
venv\Scripts\activate
```

Install the dependencies:

```bash
pip install -r requirements.txt
```

Download the WELFake dataset from Kaggle:
- Link: https://www.kaggle.com/datasets/saurabhshahane/fake-news-classification
- Place the downloaded `WELFake_Dataset.csv` file in the `dataset/` directory: `dataset/WELFake_Dataset.csv`
- Note: This file is large (~150 MB) and is excluded from version control via `.gitignore`
If the `models/` directory is empty or no pre-trained model is present, run:

```bash
python train_model.py
```

This will generate and save the ML model used for credibility analysis.
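For orientation, a training script of this kind can be sketched as a TF-IDF plus logistic-regression pipeline, as below. This is a minimal sketch on toy data; the features, model choice, and output file name here are assumptions, and `train_model.py` itself (which trains on the WELFake dataset) may differ on all three.

```python
import os
import joblib
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy stand-in data; the real script trains on the WELFake dataset.
texts = [
    "shocking secret the government does not want you to know",
    "you will not believe this one weird miracle cure",
    "city council approves budget for road repairs next year",
    "university researchers publish peer-reviewed climate study",
]
labels = [0, 0, 1, 1]  # 0 = unreliable, 1 = credible

# Vectorize the text and fit a linear classifier in one pipeline
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(texts, labels)

# Persist the fitted pipeline for the Streamlit app to load
os.makedirs("models", exist_ok=True)
joblib.dump(model, "models/credibility_model.joblib")
```

Persisting the whole pipeline (vectorizer plus classifier) means the app can call `predict` on raw article text without re-implementing the feature extraction.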
```bash
streamlit run streamlit_app.py
```

The app will open in your browser at http://localhost:8501.
- Open the app in your browser after running the Streamlit command above.
- Paste the text of a news article into the input field.
- Click Analyze to run the full credibility pipeline.
- Review the results:
- Overall credibility score
- Emotional tone breakdown
- Highlighted claims within the article
- Detected misinformation patterns
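To give a flavor of what claim highlighting involves, a naive version could flag sentences containing common claim markers. The marker list and function below are a hypothetical heuristic; the real logic in `claim_highlighter.py` may be quite different.

```python
import re

# Illustrative markers; a real highlighter would use a much richer set.
CLAIM_MARKERS = re.compile(
    r"according to|studies show|scientists|experts|\d+\s*%",
    re.IGNORECASE,
)

def highlight_claims(text: str) -> list[str]:
    """Return the sentences that contain a claim marker."""
    # Split on sentence-ending punctuation followed by whitespace
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [s for s in sentences if CLAIM_MARKERS.search(s)]
```

Each returned sentence is a candidate for highlighting in the UI, where a reader can then judge whether the claim is supported.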