## About

Terrafarm is an autonomous farming solution that provides a comprehensive way to monitor crops at any scale. We give farmers the ability to scrutinize every square inch of their fields for a wide range of issues. By detecting crop diseases before they spread, Terrafarm can reduce the use of harmful chemicals by up to 90% and help eradicate invasive species regionally. Because the application provides health reports, farmers can optimize fertilizer use and reduce preventive pesticide, herbicide, and fungicide use.
Want to know more? Read our wiki [**here**](../../wiki).
## Getting Started
Our code can be found in the `src` directory. Read below to learn how to explore, run, and modify the backend and frontend, or play with the notebooks in the `notebooks` directory.
### Backend
The backend comprises the image processing pipeline that processes multispectral images from farms. You can run it locally, or remotely on GCP (in a container). If you'd like to know more about the pipeline, read our wiki [**here**](../../wiki/Pipeline).
#### Local setup

Run the image processing pipeline locally. Tested on Linux (`Ubuntu 20`) and macOS (`Ventura 13`). Components that do not involve ML training can also be run on `Windows 10`.
1. Install [**Python 3.10**](https://www.python.org/downloads/)

2. Clone the repo
```
git clone https://github.com/GDSC-Delft-Dev/apa.git
```

Note that this might take a while.
3. Set up the Python virtual environment
```
pip install virtualenv
virtualenv env
source env/bin/activate (linux, mac)
source env/Scripts/activate (windows)
```

4. Install Python requirements
```
cd src/backend
pip install -r requirements.txt
```

5. Run the pipeline
```
py main.py
```

The supported arguments for `main.py` are:
- `mode` (`local`/`cloud`) - specify whether the input images are already in the cloud or need to be uploaded first from the local filesystem
- `path` - path to the input images, relative to the local/cloud root
- `name` - a unique name for the created job

Run the pipeline with images already in the cloud:
```
py main.py --path path/to/images --mode cloud
```

Run the pipeline with images on your local filesystem:
```
py main.py --path path/to/images --mode local
```
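As a rough illustration of the interface described above, the three arguments could be declared with `argparse` along these lines. This is a hedged sketch only: the real definitions live in `src/backend/main.py` and may differ in names, defaults, and help text.

```python
import argparse

# Hypothetical sketch of main.py's argument declarations; the flags mirror
# the list above, but the actual code may differ.
parser = argparse.ArgumentParser(description="Run the image processing pipeline")
parser.add_argument("--mode", choices=["local", "cloud"], default="local",
                    help="whether inputs are already in the cloud or must be uploaded first")
parser.add_argument("--path", required=True,
                    help="path to the input images, relative to the local/cloud root")
parser.add_argument("--name", default=None,
                    help="a unique name for the created job")

# Example invocation mirroring the cloud-mode command above
args = parser.parse_args(["--path", "path/to/images", "--mode", "cloud"])
print(args.mode, args.path)
```

Passing an unknown `--mode` value would make `argparse` exit with an error, which is one reason a `choices` list is a natural fit for a two-mode flag like this.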
#### Cloud setup

To use the cloud infrastructure, please request the GCP service account key at `pawel.mist@gmail.com`.
1. Clone the repo
```
git clone https://github.com/GDSC-Delft-Dev/apa.git
```

Note that this might take a while.
2. Set the GCP service account environment variable
```
export GCP_FA_PRIVATE_KEY=<key> (linux, mac)
set GCP_FA_PRIVATE_KEY=<key> (windows)
```
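For context, a program consuming this variable would typically look it up from the process environment at startup. The check below is illustrative only: the variable name comes from the step above, but the lookup code is not from the repository, and the placeholder value merely keeps the snippet runnable without real credentials.

```python
import os

# Illustrative check only; the real key handling lives in the backend's GCP setup.
# The placeholder value stands in for a real service account key.
os.environ.setdefault("GCP_FA_PRIVATE_KEY", "<key>")

key = os.environ.get("GCP_FA_PRIVATE_KEY")
if not key:
    raise RuntimeError("GCP_FA_PRIVATE_KEY is not set; see the cloud setup step above")
print("GCP service account key is configured")
```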
3. Trigger the pipeline

Manual triggers allow you to run the latest pipeline builds from the Artifact Registry with custom input data using Cloud Run. You can run a job with input data either from your local filesystem or already residing in the cloud.

```bash
cd src/backend
sudo chmod +x trigger.sh
./trigger.sh
```

The supported arguments for `trigger.sh` are:
- `l` - path to the local images
- `c` - path to the images on the cloud (Cloud Storage)
- `n` - a unique name for the pipeline job

Note that local inputs are first copied to a staging directory in Cloud Storage, and are removed only if the job succeeds.

Provide input data from the local filesystem:

```bash
./trigger.sh -l /path/to/data/ -n name-of-the-job
```
Provide input data from Cloud Storage:

```bash
./trigger.sh -c /path/to/data/ -n name-of-the-job
```
### Testing

To execute the automated tests, run the `pytest` unit tests:

```
python -m pytest
```
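Tests in a `pytest` suite are plain `test_*` functions with bare `assert`s. The example below is illustrative only; `normalize` is a made-up helper, not a function from the repository.

```python
# Illustrative pytest-style unit test; `normalize` is a hypothetical helper,
# not code from the repository.
def normalize(band):
    """Scale raw band values linearly into [0, 1]."""
    lo, hi = min(band), max(band)
    return [(v - lo) / (hi - lo) for v in band]

def test_normalize_bounds():
    result = normalize([10, 20, 30])
    assert result == [0.0, 0.5, 1.0]
```

`pytest` discovers any `test_*` function in files matching `test_*.py` and runs it automatically, so no test runner boilerplate is needed.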
You can find our tests in `src/backend/pipeline/test/unit`.

### Static analysis

Our project uses `mypy` and `pylint` to ensure code quality. You can run them with:

```
python -m mypy . --explicit-package-bases
python -m pylint ./pipeline
```
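The kind of code `mypy` checks is ordinary annotated Python. For instance, a fully annotated function like the one below (an illustrative example relevant to multispectral imagery, not taken from the pipeline source) passes type checking, while calling it with a string argument would be flagged.

```python
# Illustrative annotated function of the sort mypy type-checks;
# not taken from the pipeline source.
def ndvi(nir: float, red: float) -> float:
    """Normalized difference vegetation index from near-infrared and red reflectance."""
    return (nir - red) / (nir + red)

value = ndvi(0.8, 0.2)
```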
### CI/CD

The CI/CD pipeline pushes the build from the latest commit to the `pipelines-dev` repository in the Google Artifact Registry. Note that only the backend is covered.

You can find the workflow declaration in `.github/workflows/pipeline.yml`.

## Frontend setup

Please refer to `apa/src/frontend/README.md`.
## Contributing

Anyone who is eager to contribute to this project is very welcome to do so. Simply take the following steps:
1. Fork the project
2. Create your own feature branch
3. Commit your changes
4. Push your branch and open a PR against `dev`
## Datasets

You can play with the datasets in the `notebooks` folder.
## Build Tools
## License

Distributed under the MIT License. See `LICENSE.txt` for more information.
## Contact

- Google Developers Student Club Delft - dsc.delft@gmail.com
- Paul Misterka - pawel.mist@gmail.com
- Mircea Lica - mirceatlx@gmail.com