✅ Logistic Regression churn prediction
✅ FastAPI REST endpoint
✅ OpenAI summarisation for human-readable explanations
- Install requirements:

```bash
pip install -r requirements.txt
```

- Run locally:

```bash
uvicorn app:app --reload
```

`app:app` loads the FastAPI app from `app.py`. `--reload` enables auto-reload for development (useful for code changes, not for production).
- Test with:
```bash
curl -X POST http://localhost:8000/predict \
  -H "Content-Type: application/json" \
  -d '{
    "age": 45.0,
    "tenure": 24.0,
    "monthly_charges": 79.85,
    "total_charges": 1800.0,
    "contract_type": "Month-to-month",
    "payment_method": "Electronic check"
  }'
```
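If you prefer Python over curl, the same request can be sketched with the standard library. The URL and field names are taken from the curl example above; the shape of the `/predict` response is an assumption here:

```python
import json
from urllib import request

# Example payload matching the curl request above
payload = {
    "age": 45.0,
    "tenure": 24.0,
    "monthly_charges": 79.85,
    "total_charges": 1800.0,
    "contract_type": "Month-to-month",
    "payment_method": "Electronic check",
}

def predict(url: str = "http://localhost:8000/predict") -> dict:
    """POST the payload as JSON and return the decoded JSON response."""
    req = request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    # Requires the API to be running locally (uvicorn app:app --reload)
    print(predict())
```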
Set your OpenAI API key:

```bash
export OPENAI_API_KEY="your_actual_openai_api_key"
```
```
Python_GML_ML_Pipeline/
├── app.py              # FastAPI application
├── requirements.txt    # Python dependencies
├── logistic_model.pkl  # Trained ML model (placeholder)
├── scaler.pkl          # Feature scaler (placeholder)
├── Dockerfile          # Container configuration
├── .gitignore          # Git ignore rules
└── README.md           # This file
```
- Build the Docker image:

```bash
docker build -t python-gml-ml-pipeline .
```

- Run the Docker container:

```bash
docker run -p 8000:8000 -e OPENAI_API_KEY="your_actual_openai_api_key" python-gml-ml-pipeline
```
- Access the FastAPI app:
- API: http://localhost:8000
- Health Check: http://localhost:8000/health
- Interactive Docs: http://localhost:8000/docs
- OpenAPI Schema: http://localhost:8000/openapi.json
Execution Flow:

- Load the trained model and scaler (using `joblib`).
- The API endpoint receives JSON data (a new user or other input data).
- A DataFrame is created and scaled for consistency with training.
- The model predicts the churn probability (or other target).
- A JSON response with the prediction is returned for integration into apps or dashboards.
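The flow above can be sketched end-to-end with scikit-learn and pandas. The tiny inline training set and the feature names here are purely illustrative stand-ins for the project's real churn data:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Illustrative training data (stand-in for the real churn dataset)
train = pd.DataFrame({
    "age": [25.0, 45.0, 33.0, 52.0],
    "tenure": [2.0, 24.0, 12.0, 48.0],
    "monthly_charges": [70.0, 79.85, 55.0, 90.0],
})
labels = [1, 0, 1, 0]  # 1 = churned

scaler = StandardScaler().fit(train)
model = LogisticRegression().fit(scaler.transform(train), labels)

def predict_churn(payload: dict) -> dict:
    """JSON payload -> DataFrame -> scale -> predict -> JSON-able dict."""
    df = pd.DataFrame([payload])             # DataFrame creation
    X = scaler.transform(df[train.columns])  # same scaling as training
    proba = model.predict_proba(X)[0, 1]     # churn probability
    return {"churn_probability": round(float(proba), 4)}

result = predict_churn({"age": 45.0, "tenure": 24.0, "monthly_charges": 79.85})
```

In `app.py`, the same steps run inside the FastAPI route handler, with the model and scaler loaded once at startup rather than retrained.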
✅ Run locally:

```bash
uvicorn app:app --reload
```

✅ Run in Docker:

```bash
docker build -t python-gml-ml-pipeline .
docker run -p 8000:8000 -e OPENAI_API_KEY="your_actual_openai_api_key" python-gml-ml-pipeline
```
Key reasons to use FastAPI:
- Modern async Python framework
- Automatic OpenAPI schema & Swagger docs
- Production-grade performance
- Can be integrated into microservices/SaaS
Made with ❤️ by Pierre-Henry Soria. A super passionate & enthusiastic Problem-Solver / Senior Software Engineer. Also a true cheese 🧀, ristretto ☕️, and dark chocolate lover! 😋
- `logistic_model.pkl` and `scaler.pkl` are placeholders. Train and export your own models using `joblib.dump`.
- This project is a modern, production-ready ML pipeline, showcasing deployment and explainability best practices for 2025 and beyond.
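To replace the placeholders, a minimal export might look like the sketch below. The synthetic data is illustrative; the file names match the project tree above:

```python
import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the real churn training data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))  # e.g. age, tenure, monthly/total charges
y = (X[:, 1] + rng.normal(size=200) < 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

# Export with joblib.dump so app.py can load them at startup
joblib.dump(model, "logistic_model.pkl")
joblib.dump(scaler, "scaler.pkl")

# Sanity check: reloaded artifacts reproduce the same predictions
m2 = joblib.load("logistic_model.pkl")
s2 = joblib.load("scaler.pkl")
preds_match = (m2.predict(s2.transform(X)) == model.predict(scaler.transform(X))).all()
```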
“AI models become valuable when they’re deployable, explainable, and integrated into real products that create business value.”