Prototype rebuilt with Mistral LLM, LangChain, and a modern UI.
This is an enhanced, more efficient version of the Patient Health Report Analyzer project I initially created as a prototype. The earlier version relied on basic PDF parsing and keyword matching; this release is rebuilt on:
- Mistral LLM (via Ollama): local, privacy-preserving AI inference
- LangChain: prompt management and chaining logic
- PDF parsing: clean extraction of health report text using pdfplumber
- Streamlit UI: an intuitive and interactive frontend
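As a rough sketch of how the pdfplumber extraction step might look (the helper name `extract_report_text` is illustrative, not necessarily what the repo uses):

```python
import pdfplumber

def extract_report_text(pdf_path: str) -> str:
    """Pull the plain text out of every page of a health report PDF."""
    pages = []
    with pdfplumber.open(pdf_path) as pdf:
        for page in pdf.pages:
            # extract_text() returns None for pages without an extractable text layer
            pages.append(page.extract_text() or "")
    return "\n".join(pages)
```

Scanned reports with no embedded text layer come back empty from `extract_text()`, which is why OCR support appears in the roadmap below.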
Key features:
- Upload and analyze health/lab PDF reports
- Extracts and summarizes (see the prompt sketch after this list):
  - Key medical observations
  - Critical abnormalities or conditions
  - Suggested follow-ups or tests
  - Missing or ambiguous information
- All AI processing runs locally via Mistral (no cloud model dependency)
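A minimal sketch of how that summarization could be wired with LangChain and the local Mistral model; the prompt wording and the `analyze_report` helper are illustrative assumptions using the langchain-community Ollama wrapper, not necessarily the exact chain in the repo:

```python
from langchain_community.llms import Ollama
from langchain_core.prompts import PromptTemplate

# Talks to the Mistral model served locally by `ollama run mistral`
llm = Ollama(model="mistral")

prompt = PromptTemplate.from_template(
    "You are a medical report assistant. From the health/lab report below, "
    "summarize: (1) key medical observations, (2) critical abnormalities or "
    "conditions, (3) suggested follow-ups or tests, and (4) missing or "
    "ambiguous information.\n\nReport:\n{report_text}"
)

# LCEL chain: the filled-in prompt is piped straight into the local model
chain = prompt | llm

def analyze_report(report_text: str) -> str:
    return chain.invoke({"report_text": report_text})
```

Because the model is served by Ollama on localhost, the report text never leaves your machine.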
To run it locally:
- Clone the repository:
git clone https://github.com/rahulprajapati08/PHR-Analyzer.git
cd PHR-Analyzer
- Install dependencies:
pip install -r requirements.txt
- Install Ollama if not already installed, then pull and run the Mistral model:
ollama run mistral
- Launch the app:
streamlit run app.py
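For orientation, a bare-bones `app.py` along these lines would tie the pieces together; the actual file in the repo may differ, and the prompt text here is an assumption:

```python
import streamlit as st
import pdfplumber
from langchain_community.llms import Ollama

llm = Ollama(model="mistral")  # served locally by `ollama run mistral`

st.title("Patient Health Report Analyzer")

uploaded = st.file_uploader("Upload a health/lab report (PDF)", type=["pdf"])
if uploaded is not None:
    # pdfplumber accepts file-like objects, so the upload can be read directly
    with pdfplumber.open(uploaded) as pdf:
        text = "\n".join(page.extract_text() or "" for page in pdf.pages)

    with st.spinner("Analyzing with local Mistral..."):
        summary = llm.invoke(
            "Summarize the key observations, critical abnormalities, suggested "
            "follow-ups, and any missing or ambiguous information in this "
            "health report:\n\n" + text
        )

    st.subheader("Summary")
    st.write(summary)
```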
- This project is fully local and private: no external LLM APIs are used.
- You can still deploy the Streamlit app online by exposing your local backend with ngrok or localtunnel (example below).
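For example, with the app running on Streamlit's default port 8501 (and assuming the tunneling tool is installed):
ngrok http 8501
or
npx localtunnel --port 8501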
Planned enhancements:
- Add PDF export of the summary
- Build patient history dashboard
- Integrate OCR for scanned reports

