Commit fb99c35

Merge pull request #130 from GDSC-Delft-Dev/dev
Draft: Sprint 7 merge
2 parents da5ef6d + 8146971 commit fb99c35

File tree

93 files changed: +2782 -812 lines changed


.gitignore

Lines changed: 1 addition & 0 deletions

@@ -10,5 +10,6 @@ src/backend/pipeline/data
 .vscode
 *json
 nutrient_masks.npy
+PlantVillage
 .vscode
 .idea

README.md

Lines changed: 120 additions & 49 deletions

@@ -3,97 +3,168 @@
 
 ![example workflow](https://github.com/GDSC-Delft-Dev/apa/actions/workflows/pipeline.yml/badge.svg)
 
-## About
+Terrafarm is an autonomous farming solution that provides a comprehensive way to monitor crops at any scale. We give farmers the ability to scrutinize every square inch of their fields for a wide range of issues. By detecting crop diseases before they spread, Terrafarm can reduce the usage of harmful chemicals by up to 90% and eradicate invasive species regionally. As the application provides health reports, farmers can optimize fertilizer use and reduce preventive pesticide, herbicide, and fungicide use.
 
-### Problem we solving
-Growing (high-quality) crops sustainably for an ever-increasing population is one of the biggest challenges we face today, as farmers all over the world are faced with complex decision making problems for a vast amount of crops. To this end, a variety of parameters need to be traced - think of application of fertilizer, soil humidity or availability of nutrients.
+Want to know more? Read our wiki [**here**](../../wiki).
 
-In traditional agriculture, fields are treated as homogeneous entities, which generally leads to sub-optimal treatment due to lack of (localized) traceability. This is problematic, as oversupply of agricultural inputs leads to environmental pollution. Moreover, unnecessary large quantities can go to waste if produce are not harvested at their optimal time. Finally, this clearly leads to low yield density and hence missed profits for farmers.
+## Getting Started
 
-[Precision agriculture](https://en.wikipedia.org/wiki/Precision_agriculture) on the other hand, aims to produce more crops with fewer resources while maintaining quality. This sustainable agricultural model utilizes IT solutions to allow for localized treatment to a much finer degree. This paradigm shift is becoming increasingly urgent because of the worldwide increase in food demands for example: the number of people who will require food in 2050 is estimated at nine billion.
+Our code can be found in the `src` directory. Read below to learn how to explore, run, and modify the backend and frontend, or play with the notebooks in the `notebooks` directory.
 
-### Our solution
-Our **mobile app Terrafarm** allows farmers to perform **smart monitoring, analysis and planning** in an intuitive and affordable manner. In fact, our system uses **image processing and deep learning** to extact **actionable insights** from multispectral drone images. These insights - think of pest infestations, moisture content or nutrient deficiencies - are visualized to users, thereby providing full transparancy. We aim to target both small- and medium-scale farmers. Detailed information about our image processing pipeline and Flutter mobile app can be found under `apa/src/backend` and `apa/src/frontend` respectively.
+### Backend
 
-<div>
-<img src="assets/Terrafarm-poster-0.jpg" alt="Image 1" width="500" style="display:inline-block;">
-<img src="assets/Terrafarm-poster-1.jpg" alt="Image 2" width="500" style="display:inline-block;">
-</div>
+The backend comprises the image processing pipeline that processes multispectral images from farms. You can run it locally, or remotely on GCP (in a container). If you'd like to know more about the pipeline, read our wiki [**here**](../../wiki/Pipeline).
 
-<p style="text-align:center;">Figure: Information poster presenting Terrafarm</p>
+#### Local setup
+Run the image processing pipeline locally. Tested on Linux (`Ubuntu 20`) and Mac (`Ventura 13`). Components that do not involve ML training can also be run on `Windows 10`.
 
+1. Install [**Python 3.10**](https://www.python.org/downloads/)
 
+2. Clone the repo
 
-# Build Tools
+```
+git clone https://github.com/GDSC-Delft-Dev/apa.git
+```
 
-![image](https://img.shields.io/badge/Flutter-02569B?style=for-the-badge&logo=flutter&logoColor=white)
-</br>
-![image](https://img.shields.io/badge/Dart-0175C2?style=for-the-badge&logo=dart&logoColor=white)
-</br>
-![image](https://img.shields.io/badge/Python-FFD43B?style=for-the-badge&logo=python&logoColor=blue)
-</br>
-![image](https://img.shields.io/badge/firebase-ffca28?style=for-the-badge&logo=firebase&logoColor=black)
-</br>
-![image](https://img.shields.io/badge/Google_Cloud-4285F4?style=for-the-badge&logo=google-cloud&logoColor=white)
-</br>
-![image](https://img.shields.io/badge/TensorFlow-FF6F00?style=for-the-badge&logo=tensorflow&logoColor=white)
-</br>
-![image](https://img.shields.io/badge/OpenCV-27338e?style=for-the-badge&logo=OpenCV&logoColor=white)
-</br>
-![image](https://img.shields.io/badge/GitHub_Actions-2088FF?style=for-the-badge&logo=github-actions&logoColor=white)
+Note that this might take a while.
 
-# Getting Started
+3. Set up the Python virtual environment
+```
+pip install virtualenv
+virtualenv env
+source env/bin/activate (linux, mac)
+source env/Scripts/activate (windows)
+```
+
+4. Install Python requirements
+```
+cd src/backend
+pip install -r requirements.txt
+```
+
+5. Run the pipeline
+```
+py main.py
+```
+
+The supported arguments for `main.py` are:
+- `mode` (`local`/`cloud`) - specify if the input images are already in the cloud or need to be uploaded first from the local filesystem
+- `path` - path to the input images, relative to the local/cloud root
+- `name` - a unique name for the created job
+
+Run the pipeline with images already in the cloud:
+```
+py main.py --path path/to/images --mode cloud
+```
+
+Run the pipeline with images on your local filesystem:
+```
+py main.py --path path/to/images --mode local
+```
 
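The `main.py` arguments listed above can be sketched with `argparse`. This is a minimal illustration rather than the project's actual implementation; the `local` default and the help strings are assumptions:

```python
import argparse

def parse_args(argv=None):
    """Parse the CLI flags documented above (sketch only)."""
    parser = argparse.ArgumentParser(description="Run the image processing pipeline")
    parser.add_argument("--mode", choices=["local", "cloud"], default="local",
                        help="whether the input images are already in the cloud "
                             "(default 'local' is an assumption)")
    parser.add_argument("--path", required=True,
                        help="path to the input images, relative to the local/cloud root")
    parser.add_argument("--name", default=None,
                        help="a unique name for the created job")
    return parser.parse_args(argv)

# Mirrors: py main.py --path path/to/images --mode cloud
args = parse_args(["--path", "path/to/images", "--mode", "cloud"])
print(args.mode, args.path)  # cloud path/to/images
```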
-Follow these steps to set up your project locally.
+#### Cloud setup
+To use our infrastructure, please request the GCP service account key at `pawel.mist@gmail.com`.
 
-Clone the repo
+1. Clone the repo
 ```
 git clone https://github.com/GDSC-Delft-Dev/apa.git
 ```
 
-## Setup backend
+Note that this might take a while.
 
-Setup virtual python environment
+2. Set the GCP service account environment variable
 ```
-pip install virtualenv
-virtualenv env
+export GCP_FA_PRIVATE_KEY=<key> (linux, mac)
+set GCP_FA_PRIVATE_KEY=<key> (windows)
 ```
 
-Activate on MacOS or Linux
+3. Trigger the pipeline
+
+Manual triggers allow you to run the latest pipeline builds from the Artifact Registry with custom input data using Cloud Run. You can run a job with either input data from your local filesystem or input data that already resides in the cloud.
+
+```bash
+cd src/backend
+sudo chmod +x trigger.sh
+./trigger.sh
+```
+
+The supported arguments for `trigger.sh` are:
+- `l` - path to the local images
+- `c` - path to the images on the cloud (Cloud Storage)
+- `n` - a unique name for the pipeline job
+
+Note that local inputs are first copied to a staging directory in Cloud Storage, and will only be removed if the job succeeds.
+
+Provide input data from a local filesystem:
+
+```bash
+./trigger.sh -l /path/to/data/ -n name-of-the-job
 ```
-source env/bin/activate
+
+Provide input data from Cloud Storage:
+
+```bash
+./trigger.sh -c /path/to/data/ -n name-of-the-job
 ```
 
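The `trigger.sh` flags above can be illustrated with a small, hypothetical Python helper that assembles the command line; treating `-l` and `-c` as mutually exclusive is an assumption inferred from the description, not something the script documents:

```python
from typing import Optional

def build_trigger_cmd(name: str, local: Optional[str] = None,
                      cloud: Optional[str] = None) -> list[str]:
    """Assemble a ./trigger.sh invocation (hypothetical helper)."""
    if (local is None) == (cloud is None):
        # assumed rule: exactly one input source must be given
        raise ValueError("provide exactly one of a local (-l) or cloud (-c) path")
    cmd = ["./trigger.sh"]
    cmd += ["-l", local] if local is not None else ["-c", cloud]
    cmd += ["-n", name]
    return cmd

print(build_trigger_cmd("name-of-the-job", local="/path/to/data/"))
# ['./trigger.sh', '-l', '/path/to/data/', '-n', 'name-of-the-job']
```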
-Activate on Windows
+### Testing
+To execute the automated tests, run the `pytest` unit tests:
+
 ```
-source env/Scripts/activate
+python -m pytest
 ```
-Install Python requirements
+
+You can find our tests in `src/backend/pipeline/test/unit`.
+
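For readers new to `pytest`, a unit test is just a function whose name starts with `test_` containing plain assertions. A hypothetical example in the style of the tests under `src/backend/pipeline/test/unit` (not one of the project's actual tests):

```python
# Hypothetical code under test plus its pytest-style unit tests.
def normalize(values):
    """Scale a list of numbers so they sum to 1."""
    total = sum(values)
    return [v / total for v in values]

def test_normalize_sums_to_one():
    result = normalize([1, 2, 3])
    assert abs(sum(result) - 1.0) < 1e-9

def test_normalize_preserves_order():
    assert normalize([1, 1]) == [0.5, 0.5]
```

Running `python -m pytest` discovers files named `test_*.py` and executes every `test_*` function in them.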
+### Static analysis
+Our project uses `mypy` and `pylint` to assert the quality of the code. You can run these with:
+
 ```
-pip install -r requirements.txt
+python -m mypy . --explicit-package-bases
+python -m pylint ./pipeline
 ```
-Please refer to `apa/src/backend/README.md` for detailed information on the image processing pipeline.
-<!-- TODO: Perhaps more info? -->
 
-## Setup frontend
-Please refer to `apa/src/frontend/README.md`.
+### CI/CD
+The CI/CD pipeline pushes the build from the latest commit to the `pipelines-dev` repository in the Google Artifact Registry. Note that only the backend is covered.
+
+You can find the pipeline declaration in `.github/workflows/pipeline.yml`.
 
+## Frontend setup
+Please refer to `apa/src/frontend/README.md`.
 
-# Contributing
+## Contributing
 Anyone who is eager to contribute to this project is very welcome to do so. Simply take the following steps:
 1. Fork the project
 2. Create your own feature branch
 3. Commit your changes
 4. Push to the `dev` branch and open a PR
 
-# Datasets
+## Datasets
 You can play with the datasets in the `notebooks` folder.
 
+## Build Tools
+
+![image](https://img.shields.io/badge/Flutter-02569B?style=for-the-badge&logo=flutter&logoColor=white)
+<br>
+![image](https://img.shields.io/badge/Dart-0175C2?style=for-the-badge&logo=dart&logoColor=white)
+<br>
+![image](https://img.shields.io/badge/Python-FFD43B?style=for-the-badge&logo=python&logoColor=blue)
+<br>
+![image](https://img.shields.io/badge/firebase-ffca28?style=for-the-badge&logo=firebase&logoColor=black)
+<br>
+![image](https://img.shields.io/badge/Google_Cloud-4285F4?style=for-the-badge&logo=google-cloud&logoColor=white)
+<br>
+![image](https://img.shields.io/badge/TensorFlow-FF6F00?style=for-the-badge&logo=tensorflow&logoColor=white)
+<br>
+![image](https://img.shields.io/badge/OpenCV-27338e?style=for-the-badge&logo=OpenCV&logoColor=white)
+<br>
+![image](https://img.shields.io/badge/GitHub_Actions-2088FF?style=for-the-badge&logo=github-actions&logoColor=white)
+
 
-# License
+## License
 Distributed under the MIT License. See `LICENSE.txt` for more information.
 
-# Contact
+## Contact
 - Google Developers Student Club Delft - dsc.delft@gmail.com
 - Paul Misterka - pawel.mist@gmail.com
 - Mircea Lica - mirceatlx@gmail.com

assets/logo.png

242 KB

assets/logo_with_background.png

699 KB

src/backend/README.md

Lines changed: 0 additions & 127 deletions
This file was deleted.

src/backend/main.py

Lines changed: 2 additions & 2 deletions

@@ -4,7 +4,7 @@
 import argparse
 from pipeline.mat import Mat
 from google.cloud import storage
-from pipeline.templates import full_pipeline, default_pipeline, training_pipeline, nutrient_pipeline
+from pipeline.templates import full_pipeline, default_pipeline, training_pipeline, nutrient_pipeline, disease_pipeline
 import asyncio
 from pipeline.config import CloudConfig
 from typing import Any
@@ -55,7 +55,7 @@ def main(args: Any):
     imgs = [Mat.read(file) for file in sorted(glob.glob("pipeline/data/D*.JPG"))]
 
     # Get test data
-    imgs = imgs[:3]
+    imgs = imgs[:min(len(imgs), 3)]
 
     # Run the pipeline
     pipeline = nutrient_pipeline(cloud=cloud_config)
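A side note on this hunk: Python slice bounds are already clamped to the sequence length, so the old and new expressions return the same list; the rewrite only spells out the intent:

```python
# Slicing never raises for an out-of-range stop, so [:3] already
# handles the case of fewer than 3 images.
imgs = ["D1.JPG", "D2.JPG"]
assert imgs[:3] == imgs[:min(len(imgs), 3)] == ["D1.JPG", "D2.JPG"]
print(imgs[:3])  # ['D1.JPG', 'D2.JPG']
```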

0 commit comments