
Precision agriculture image processing suite for UAS-acquired multispectral imagery on GCP


Terrafarm


Terrafarm is an autonomous farming solution that provides a comprehensive way to monitor crops at any scale. We give farmers the ability to scrutinize every square inch of their fields for a wide range of issues. By detecting crop diseases before they spread, Terrafarm can reduce the use of harmful chemicals by up to 90% and help eradicate invasive species regionally. Because the application provides health reports, farmers can optimize fertilizer use and reduce preventive pesticide, herbicide, and fungicide use.

Want to know more? Read our wiki here.

Getting Started

Our code can be found in the src directory. Read below to learn how to explore, run, and modify the backend and frontend, or play with the notebooks in the notebooks directory.

Backend

The backend comprises the image processing pipeline that processes multispectral images from farms. You can run it locally, or remotely on GCP (in a container). If you'd like to know more about the pipeline, read our wiki here.
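To give a flavor of what such processing involves, here is a minimal sketch of a common multispectral computation, the NDVI vegetation index. This is an illustration only, not necessarily a stage of the actual pipeline:

import numpy as np

# Illustration only: NDVI is a standard vegetation index computed from
# the near-infrared and red bands of a multispectral image.
def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index, in [-1, 1]."""
    return (nir - red) / (nir + red + 1e-8)  # epsilon avoids division by zero

Healthy vegetation reflects strongly in near-infrared, so higher NDVI values generally indicate healthier crops.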

Local setup

Run the image processing pipeline locally. Tested on Linux (Ubuntu 20) and macOS (Ventura 13). Components that do not involve ML training can also be run on Windows 10.

  1. Install Python 3.10

  2. Clone the repo

git clone https://github.com/GDSC-Delft-Dev/apa.git

Note that this might take a while.

  3. Set up the Python virtual environment
pip install virtualenv
virtualenv env
source env/bin/activate (Linux, macOS)
env\Scripts\activate (Windows)
  4. Install the Python requirements
cd src/backend
pip install -r requirements.txt
  5. Run the pipeline
python main.py

The supported arguments for main.py are:

  • mode (local/cloud) - specify whether the input images are already in the cloud or need to be uploaded first from the local filesystem
  • path - path to the input images, relative to the local/cloud root
  • name - a unique name for the created job

Run the pipeline with images already in the cloud:

python main.py --path path/to/images --mode cloud

Run the pipeline with images on your local filesystem:

python main.py --path path/to/images --mode local
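For reference, here is a minimal sketch of how these arguments could be declared with Python's argparse. The names match the list above, but the actual main.py may declare them differently:

import argparse

# Sketch of the CLI described above; the default value is an assumption.
parser = argparse.ArgumentParser(description="Run the image processing pipeline")
parser.add_argument("--mode", choices=["local", "cloud"], default="local",
                    help="whether the input images are in the cloud or local")
parser.add_argument("--path", required=True,
                    help="path to the input images, relative to the local/cloud root")
parser.add_argument("--name", help="a unique name for the created job")
args = parser.parse_args()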

Cloud setup

To use the cloud infrastructure, please request the GCP service account key from pawel.mist@gmail.com.

  1. Clone the repo
git clone https://github.com/GDSC-Delft-Dev/apa.git

Note that this might take a while.

  2. Set the GCP service account environment variable
export GCP_FA_PRIVATE_KEY=<key> (Linux, macOS)
set GCP_FA_PRIVATE_KEY=<key> (Windows)
  3. Trigger the pipeline

Manual triggers allow you to run the latest pipeline build from the Artifact Registry on Cloud Run with custom input data. You can run a job with input data either from your local file system or already residing in the cloud.

cd src/backend
sudo chmod +x trigger.sh
./trigger.sh

The supported arguments for trigger.sh are:

  • -l - path to the local images
  • -c - path to the images on the cloud (Cloud Storage)
  • -n - a unique name for the pipeline job

Note that local inputs are first copied to a staging directory in Cloud Storage, and will only be removed if the job succeeds (see the sketch after the examples below).

Provide input data from a local filesystem

./trigger.sh -l /path/to/data/ -n name-of-the-job

Provide input data from Cloud Storage

./trigger.sh -c /path/to/data/ -n name-of-the-job
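For illustration, here is a minimal sketch of the staging step described above, using the google-cloud-storage Python client. The bucket and object names are hypothetical, not the ones the pipeline actually uses:

from google.cloud import storage

# Hypothetical sketch of staging a local input file to Cloud Storage.
# The bucket name and object prefix are made up for illustration.
client = storage.Client()
bucket = client.bucket("terrafarm-staging")
blob = bucket.blob("jobs/name-of-the-job/image_0.tif")
blob.upload_from_filename("/path/to/data/image_0.tif")
# The staged objects are removed only after the job succeeds.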

Testing

To execute the automated tests, run the pytest unit tests:

python -m pytest

You can find our tests in src/backend/pipeline/test/unit.
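As an illustration of the style, a unit test in that directory might look like the following; the function under test here is invented for the example and is not part of the pipeline:

# Hypothetical example; the real tests exercise the pipeline modules.
def normalize(band: list[float]) -> list[float]:
    """Scale raw band values to the [0, 1] range."""
    lo, hi = min(band), max(band)
    return [(v - lo) / (hi - lo) for v in band]

def test_normalize_scales_to_unit_range():
    result = normalize([100.0, 150.0, 200.0])
    assert result[0] == 0.0
    assert result[-1] == 1.0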

Static analysis

Our project uses mypy and pylint to check code quality. You can run these with:

python -m mypy . --explicit-package-bases
python -m pylint ./pipeline

CI/CD

The CI/CD pipeline pushes the build from the latest commit to the pipelines-dev repository in the Google Artifact Registry. Note that only the backend is covered.

You can find the pipeline declaration in .github/workflows/pipeline.yml.

Frontend setup

Please refer to apa/src/frontend/README.md.

Contributing

Anyone who is eager to contribute to this project is very welcome to do so. Simply take the following steps:

  1. Fork the project
  2. Create your own feature branch
  3. Commit your changes
  4. Push to the dev branch and open a PR

Datasets

You can play with the datasets in the notebooks folder.


License

Distributed under the MIT License. See LICENSE.txt for more information.

Contact