This deep learning group project was part of the TechLabs "Digital Shaper Program" in Münster (winter term 2022).

## Abstract (TL;DR)

While many people want to live sustainably, it is often difficult to judge the environmental impact of everyday choices. The "avocado recognizer" project set out to build an image recognition tool that identifies fruit and vegetables and provides information about their environmental impact. The tool was created with transfer learning: a pre-trained convolutional neural network was re-trained on an existing Kaggle dataset of fruit and vegetable images. The climate score is based on the CO2 output per kilogram of the product. Further development of the tool should include contextual information (e.g., seasonality and origin of the product) in the climate score computation, as well as suggestions for similar but more sustainable alternatives.

## The One And Only "Avocado Recognizer With Cucumber Recommendation" 🥑 🥒

At first, there was Lisa. You know her. We all know her. Lisa is a young student, about 20 years old; she goes to yoga classes, loves to travel, and likes getting to know new places and people. She cares for herself, but she also cares about others. Some people would say she's naïve, but she believes in a better future and wants to be part of the change she wants to see in the world. Lisa thinks that one of the biggest challenges in today's world is climate change. She demands political change and agendas to face that challenge, to make a difference, to ensure a livable future for everyone. But as political change is slow, she wants to contribute her part and live a sustainable life. She already went vegan. But what should you actually buy for a sustainable diet? There's one challenge left: Lisa really loves avocados…

There's a little bit of Lisa in many of us: we want to be good people, we want to live sustainably, we want to save the world (if it's not too much effort). That's where we come in! We, that is Stefan, Lea, Julia, and Kevin (although Kevin left the team before the finish line), decided to give Lisa, and you, some decision-making aid during grocery shopping. We set out to build software that recognizes a product, provides a climate score to inform you about its environmental impact, and recommends more sustainable alternatives where possible. How about a cucumber instead of an avocado? You read that right: we decided to build an avocado recognizer with cucumber recommendation.

## The Long And Winding Road To Fruit Image Recognition 🔍

Let's talk about the methods of our project. After learning more about neural networks, gradient descent, and backpropagation, and simultaneously exploring deep learning Python libraries, we decided to apply transfer learning in our project. More specifically, fine-tuning seemed a great way to achieve our ambitious goal of saving the world (one avocado at a time). That means we used a pre-trained convolutional neural network, removed its last fully connected layer, and re-trained it on an existing dataset of fruit and vegetables.

Let us lay out this approach in more practical detail; the sketches below illustrate the individual steps. First, our code imports the necessary libraries, such as PyTorch and torchvision, and connects to the GPU device. Then, the images from the Fruits and Vegetables Image Recognition Dataset from Kaggle are loaded. The training data is augmented (to increase the amount of data for training) and normalized, while the validation data is only normalized. Normalization distorts the colors when the images are displayed, but it brings the inputs in line with the statistics the pre-trained network expects. We visualized a few images and their associated labels to check that the data loading had worked so far.

We then used the pretrained model resnet18, which is included in torchvision and was pre-trained on images labelled with 1,000 categories. We "froze" the lower layers of the convolutional network and re-trained the last layer on the images of the 36 fruit and vegetable categories in our dataset. "Freezing" the first layers means that no gradient descent is applied to their parameters during the re-training; only the last layer is adjusted. For this step, we defined a training function, which trains the model by looping through the data in batches and adjusting the weights of the model to minimize the loss function. This includes an evaluation phase to assess the accuracy of the model on the validation data. Re-training the model on our data required too much computational power for the Google Colab GPU, but Thomas supported us with computing power on top of his advice.

Subsequently, a visualization function was defined to display a few predictions with their predicted class and probability score. This was used to test the model performance. Though the model seemed overly confident in the predicted classes, we considered it a great success! For our implementation up to this point, we relied heavily on the Transfer Learning for Computer Vision Tutorial by PyTorch, which we adjusted for our own goals; in particular, the connected device, the dataset, and the number of classes had to be changed.

We added an upload button to the code, which allows us to select an image from the computer. This image is then evaluated by the trained network, and the class is returned. Finally, fruit and vegetables are recognized and classified based on a photograph! This classification is only the first step towards informing Lisa, and anyone else, about the environmental impact of their consumer choices. Therefore, we added a dictionary to the code that maps each class to the CO2 output per kilogram of the product, using data from the Dutch National Institute for Public Health and the Environment. The evaluation function for the input image was then adjusted to also report this CO2 output, along with some context on whether it is low, average, or high compared to other fruit and vegetables.
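The sketches that follow are minimal, illustrative versions of these steps rather than our exact notebook code. First, the data pipeline, closely following the PyTorch transfer learning tutorial; the folder name `fruits-and-vegetables` and the train/validation layout are assumptions about how the Kaggle dataset is organized locally:

```python
import torch
from torchvision import datasets, transforms

# Assumed local layout: one folder per split, one sub-folder per class.
data_dir = "fruits-and-vegetables"  # hypothetical path

# Channel means/stds of ImageNet, which resnet18 was pre-trained on.
normalize = transforms.Normalize([0.485, 0.456, 0.406],
                                 [0.229, 0.224, 0.225])

data_transforms = {
    # Training images are randomly cropped and flipped (augmentation),
    # then normalized.
    "train": transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        normalize,
    ]),
    # Validation images are only resized and normalized.
    "validation": transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        normalize,
    ]),
}

image_datasets = {
    split: datasets.ImageFolder(f"{data_dir}/{split}", data_transforms[split])
    for split in ("train", "validation")
}
dataloaders = {
    split: torch.utils.data.DataLoader(image_datasets[split],
                                       batch_size=4, shuffle=True)
    for split in ("train", "validation")
}

# Connect to the GPU if one is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```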
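Freezing the pre-trained layers and swapping the last fully connected layer of resnet18 can then look like this; the optimizer settings follow the tutorial and may differ from our notebook:

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import models

# Load resnet18 with its ImageNet weights.
model = models.resnet18(pretrained=True)

# "Freeze" the pre-trained layers: with requires_grad disabled, no
# gradients are computed for them, so gradient descent leaves their
# parameters untouched during re-training.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer (1,000 ImageNet classes)
# with a fresh layer for the 36 fruit and vegetable classes.
model.fc = nn.Linear(model.fc.in_features, 36)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
# Only the new last layer's parameters are handed to the optimizer.
optimizer = optim.SGD(model.fc.parameters(), lr=0.001, momentum=0.9)
```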
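The training function loops through the data in batches, adjusts the weights of the last layer to minimize the loss, and evaluates the accuracy on the validation data. A condensed sketch (the epoch count is illustrative):

```python
def train_model(model, criterion, optimizer, num_epochs=25):
    """Train on the training split and evaluate on the validation split."""
    for epoch in range(num_epochs):
        for phase in ("train", "validation"):
            if phase == "train":
                model.train()
            else:
                model.eval()
            running_loss, running_corrects = 0.0, 0
            for inputs, labels in dataloaders[phase]:
                inputs, labels = inputs.to(device), labels.to(device)
                optimizer.zero_grad()
                # Gradients are only needed in the training phase.
                with torch.set_grad_enabled(phase == "train"):
                    outputs = model(inputs)
                    loss = criterion(outputs, labels)
                    if phase == "train":
                        loss.backward()
                        optimizer.step()
                running_loss += loss.item() * inputs.size(0)
                running_corrects += (outputs.argmax(1) == labels).sum().item()
            n = len(image_datasets[phase])
            print(f"epoch {epoch}: {phase} loss {running_loss / n:.4f}, "
                  f"accuracy {running_corrects / n:.3f}")
    return model

model = train_model(model, criterion, optimizer)
```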
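In Colab, the upload button comes from `google.colab.files`; a sketch of how an uploaded image is preprocessed like the validation data and passed through the re-trained network:

```python
from PIL import Image
from google.colab import files  # Colab-only upload widget

# files.upload() opens a file picker and also writes the chosen file
# to the working directory under its original name.
uploaded = files.upload()
filename = next(iter(uploaded))

# Preprocess the image exactly like the validation data.
image = data_transforms["validation"](Image.open(filename).convert("RGB"))

model.eval()
with torch.no_grad():
    logits = model(image.unsqueeze(0).to(device))
    probability, index = torch.softmax(logits, dim=1).max(dim=1)

predicted_class = image_datasets["train"].classes[index.item()]
print(f"{predicted_class} ({probability.item():.0%})")
```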
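Finally, a sketch of the CO2 lookup. The numbers below are placeholders rather than the actual values from the Dutch institute, and the helper names are hypothetical:

```python
# Maps each predicted class to its CO2 output per kilogram of product.
co2_per_kg = {
    "cucumber": 1.0,  # placeholder value, not the real data
    "tomato": 1.5,    # placeholder value, not the real data
    "banana": 0.9,    # placeholder value, not the real data
    # ... one entry per class in the dataset
}

def co2_context(value, table=co2_per_kg):
    """Rough low/average/high label relative to the other products."""
    values = sorted(table.values())
    if value <= values[len(values) // 3]:
        return "low"
    if value >= values[(2 * len(values)) // 3]:
        return "high"
    return "average"

def climate_report(predicted_class):
    """Print the CO2 output and some context for a recognized product."""
    value = co2_per_kg[predicted_class]
    print(f"{predicted_class}: about {value} kg CO2 per kg "
          f"({co2_context(value)} compared to other fruit and vegetables)")
```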

## Here it is: a (not so) user-friendly solution 🤔 🤩

The final "product" (if we can call it that) is a wonderful Google Colab notebook: it has an upload button to feed a local image into the network, and it returns the class (fruit or vegetable name) together with minimal information about the climate impact (the average CO2 output per kilogram, independent of the time of year or origin of the product, and minimal context). We think this is a wonderful result, but it is not very practical for an average user, and it will probably not change many people's consumer behavior yet. To make the idea practical, it would need to be implemented as a smartphone application that is readily usable during grocery shopping. The information about the environmental impact would have to be season- and origin-specific and be put into context for interpretability, and recommendations for alternatives would need to be added. The greatest constraint of our solution might be that the training dataset did not include avocados: Lisa will never learn from us that a cucumber would be a great alternative. Maybe the greatest impact of this project will be our own learning process and our reflection on the environmental impact of personal diets. Even so, we think this is a great result given where we started this journey.

## What's left to say? 📣

You might have heard that people with coding problems sometimes talk to rubber ducks. While that might sound eccentric, putting a problem into words actually helps when searching for a solution. We didn't have a rubber duck, but a much more helpful alternative: a mentor who talked back to us and solved our problems when we struggled. We thank Thomas for his valuable support, for pushing us, and for connecting the strings where we couldn't. We are also grateful to all the TechLabs Münster team members who volunteered their time to make this learning experience possible for us.