WE TELL YOU WHICH NUTRIENTS ARE GOOD FOR YOU!
Explore the docs »
In today's fast-paced world, individuals are increasingly concerned about their diet and overall health. They often struggle to monitor and understand the nutritional content of their daily food, leading to suboptimal dietary habits and potential health issues. To address this problem, our "Nutrition Counter" project aims to develop a comprehensive solution for tracking and analyzing the nutritional content of food items to provide users with meaningful recommendations for maintaining a balanced and healthy diet.
MERN Technology Stack:
- M - MongoDB
- E - Express
- R - React
- N - Node.js
To get a local copy up and running, follow these simple steps.
The following need to be pre-installed:
- Python
- Node.js
- npm
npm install npm@latest -g
- Clone the repo
git clone https://github.com/IAmOnkarSawant/30_NutritionCounter.git
- Initialise the local repository
- Go to the folder where you cloned the project
git init
git remote -v
git remote add origin https://github.com/IAmOnkarSawant/SSD_PROJECT.git
git remote -v
git fetch origin
Enter your credentials to continue
- Install NPM packages
- Backend
cd backend
npm install
npm run dev
- Frontend
cd frontend
npm install
npm start
- Dietary Recommendations: Providing personalized dietary recommendations based on the analysis of food images and their nutritional content. Recommending suitable food items for specific health conditions, such as for pregnant women, cardiac patients, diabetic patients, and children.
- Macro-Nutrient Analysis: Analyzing macro-nutrients in food items based on text extracted from images. Calculating and presenting Daily Value (DV%) information to users for a better understanding of nutritional content (a DV% sketch follows this feature list).
- Health Goal Planning: Calculating BMI (Body Mass Index) and BMR (Basal Metabolic Rate) for users. Estimating Total Daily Energy Expenditure (TDEE) based on BMR and activity levels. Providing sample personalized calorie-based diet plans aligned with users' fitness goals (a BMI/BMR/TDEE sketch follows this feature list).
- Image Processing and Text Extraction: Accepting images of labelled food items, either uploaded or captured using the camera. Processing images through an OCR tool (PyTesseract) to extract text information. Differentiating between "Ingredient text-based" and "Table text-based" images for relevant recommendations (an extraction sketch follows this feature list).
- Category-Specific Recommendations: Offering category-specific recommendations for different types of input images. Tailoring recommendations based on the nature of the text extracted, such as ingredient lists or table formats.
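How the extraction step works is not detailed further in this README, so the following is only a minimal sketch, assuming a PyTesseract OCR pass followed by a simple keyword heuristic to tell ingredient-style text from table-style text. The file name label.jpg, the keyword list, and the classify_label_text helper are illustrative assumptions, not the project's actual code.

```python
# Minimal sketch (assumption): OCR a food-label image with PyTesseract and
# roughly guess whether it is an ingredient list or a nutrition table.
from PIL import Image
import pytesseract

# Keywords typically found on a nutrition-facts table (illustrative list).
TABLE_KEYWORDS = {"calories", "total fat", "sodium", "carbohydrate", "protein", "% daily value"}

def extract_text(image_path):
    """Run OCR on the uploaded or camera-captured label image."""
    return pytesseract.image_to_string(Image.open(image_path))

def classify_label_text(text):
    """Rough heuristic: ingredient-style labels usually contain 'Ingredients',
    table-style labels mention several nutrition-table keywords."""
    lowered = text.lower()
    if "ingredients" in lowered:
        return "ingredient"
    if sum(keyword in lowered for keyword in TABLE_KEYWORDS) >= 2:
        return "table"
    return "unknown"

if __name__ == "__main__":
    text = extract_text("label.jpg")  # hypothetical input file
    print(classify_label_text(text))
```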
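The README does not spell out how DV% is calculated. A common approach, assumed in the sketch below, is to divide each nutrient amount per serving by its FDA reference daily value (2,000-calorie diet) and express the result as a percentage; the reference table and the sample per-serving values are assumptions for illustration.

```python
# Minimal sketch (assumption): compute Daily Value (DV%) for macro-nutrients
# extracted from a label, using FDA reference daily values (2,000-calorie diet).
DAILY_VALUES = {  # grams or milligrams per day (assumed reference table)
    "total_fat_g": 78,
    "saturated_fat_g": 20,
    "sodium_mg": 2300,
    "total_carbohydrate_g": 275,
    "dietary_fiber_g": 28,
    "added_sugars_g": 50,
    "protein_g": 50,
}

def daily_value_percent(nutrients):
    """Map each nutrient amount per serving to its DV%."""
    return {
        name: round(100 * amount / DAILY_VALUES[name], 1)
        for name, amount in nutrients.items()
        if name in DAILY_VALUES
    }

# Hypothetical values parsed from an OCR'd nutrition table.
per_serving = {"total_fat_g": 8, "sodium_mg": 160, "total_carbohydrate_g": 37, "protein_g": 3}
print(daily_value_percent(per_serving))
# {'total_fat_g': 10.3, 'sodium_mg': 7.0, 'total_carbohydrate_g': 13.5, 'protein_g': 6.0}
```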
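The exact equations behind the health-goal planner are likewise not stated here. The sketch below uses the textbook BMI formula, the Mifflin-St Jeor equation for BMR, and common activity multipliers for TDEE, which are reasonable assumptions rather than confirmed project behaviour.

```python
# Minimal sketch (assumption): BMI, BMR (Mifflin-St Jeor) and TDEE estimation.
ACTIVITY_MULTIPLIERS = {
    "sedentary": 1.2,
    "light": 1.375,
    "moderate": 1.55,
    "active": 1.725,
    "very_active": 1.9,
}

def bmi(weight_kg, height_cm):
    """Body Mass Index = weight (kg) / height (m) squared."""
    height_m = height_cm / 100
    return round(weight_kg / height_m ** 2, 1)

def bmr_mifflin_st_jeor(weight_kg, height_cm, age_years, sex):
    """Basal Metabolic Rate in kcal/day (Mifflin-St Jeor equation)."""
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age_years
    return base + 5 if sex == "male" else base - 161

def tdee(bmr, activity_level):
    """Total Daily Energy Expenditure = BMR x activity multiplier."""
    return round(bmr * ACTIVITY_MULTIPLIERS[activity_level])

# Example: 70 kg, 175 cm, 30-year-old male with moderate activity.
b = bmr_mifflin_st_jeor(70, 175, 30, "male")
print(bmi(70, 175), b, tdee(b, "moderate"))  # BMI ~22.9, BMR ~1648.75 kcal, TDEE ~2556 kcal
```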
See the open issues for a list of proposed features (and known issues).
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
- If you have suggestions for adding or removing projects, feel free to open an issue to discuss it, or directly create a pull request after editing the README.md file with the necessary changes.
- Please make sure you check your spelling and grammar.
- Create an individual PR for each suggestion.
- Please also read through the Code of Conduct before posting your first idea.
- Fork the Project
- Create your Feature Branch (git checkout -b feature/AmazingFeature)
- Commit your Changes (git commit -m 'Add some AmazingFeature')
- Push to the Branch (git push origin feature/AmazingFeature)
- Open a Pull Request
- Onkar Sawant - MTech CSE (IIIT Hyderabad)
- Shubham Jaiswal - MTech CSE (IIIT Hyderabad)
- Nithin Venugopal - MTech CSE (IIIT Hyderabad)
- Rhitesh Singh - MTech CSE (IIIT Hyderabad)