# Snow Water Equivalent Machine Learning (SWEML): Using Machine Learning to Advance Snow State Modeling

[Deploy workflow](https://github.com/geo-smart/use_case_template/actions/workflows/deploy.yaml)
[Jupyter Book](https://geo-smart.github.io/use_case_template)
[Binder](https://mybinder.org/v2/gh/geo-smart/use_case_template/HEAD?urlpath=lab)
[GeoSMART Use Cases](https://geo-smart.github.io/usecases)

# Getting Started:
The first step is to identify a folder location where you would like to work in a development environment.
We suggest a location with easy access to your model's predictions, to make evaluating the model straightforward.
Using the command prompt, change your working directory to this folder and clone the [Snow-Extrapolation](https://github.com/geo-smart/Snow-Extrapolation) repository:

    git clone https://github.com/geo-smart/Snow-Extrapolation

## Virtual Environment
It is best practice to create a virtual environment when starting a new project, as a virtual environment creates an isolated working copy of Python for that project.
That is, each environment can have its own dependencies and even its own Python version.
A Python virtual environment is useful if you need different versions of Python or of packages for different projects.
Lastly, a virtual environment keeps things tidy, keeps your main Python installation stable, and supports reproducible, open science.

## Creating a Stable Conda Environment on CIROH Cloud or Other 2i2c Cloud Computing Platforms
Go to your home directory:
```
cd ~
```
Create an envs directory:
```
mkdir envs
```
Create a .condarc file and link it to a text file:
```
touch .condarc
ln -s .condarc condarc.txt
```
Add the lines below to the condarc.txt file:
```
# .condarc
envs_dirs:
  - ~/envs
```
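The steps above can be sketched end-to-end in a throwaway directory to see what the finished file should look like (the `/tmp` path is for illustration only; on the cloud platform you run the original commands from your home directory):

```shell
# Illustrative dry run of the .condarc setup in a scratch directory;
# on CIROH Cloud the same commands are run from ~ instead of /tmp.
mkdir -p /tmp/condarc_demo
cd /tmp/condarc_demo
touch .condarc
ln -sf .condarc condarc.txt          # condarc.txt is a symlink to .condarc
printf '# .condarc\nenvs_dirs:\n  - ~/envs\n' > condarc.txt
cat .condarc                         # both names show the same content
```

Because condarc.txt is a symlink, editing it through a text editor updates the real .condarc that conda reads.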
Restart your server.

### Creating your SWEML_env Python Virtual Environment
Since we will be using Jupyter Notebooks for this exercise, we will use the Anaconda command prompt to create our virtual environment.
In the command line, type:

    conda create -n SWEML_env python=3.9.12

For this example we will be using Python version 3.9.12, so specify this version when setting up your new virtual environment.
After Anaconda finishes setting up your SWEML_env, activate it using the activate function:

    conda activate SWEML_env

You should now be working in your new SWEML_env within the command prompt.
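A quick way to confirm the activation worked is to list your environments; the active one is marked with an asterisk. The guard below keeps the snippet safe to run on machines where conda is not on the PATH:

```shell
# List conda environments; the active environment is starred.
# Guarded so the snippet exits cleanly even without conda installed.
if command -v conda >/dev/null 2>&1; then
    conda env list
else
    echo "conda not found on PATH"
fi
```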
However, we will want to work in this environment within our Jupyter Notebooks, so we need to create a kernel to connect them.
We begin by installing the **ipykernel** python package:

    pip install --user ipykernel

With the package installed, we can connect the SWEML_env to our Jupyter Notebooks:

    python -m ipykernel install --user --name=SWEML_env

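To confirm the registration, list the kernelspecs Jupyter knows about; `SWEML_env` should appear in the output (guarded in case `jupyter` is not on the PATH):

```shell
# List registered Jupyter kernels; SWEML_env should be among them.
if command -v jupyter >/dev/null 2>&1; then
    jupyter kernelspec list
else
    echo "jupyter not found on PATH"
fi
```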
Under the contributors folder, there is a start-to-finish example to get participants up to speed on the modeling workflow.
To double-check that you have the correct working environment, open the [Methods](./contributors/NSM_Example/methods.ipynb) notebook, click the Kernel tab on the top toolbar, and select SWEML_env.
SWEML_env should now show up on the top right of the Jupyter Notebook.

### Loading other Python dependencies
Load the Ulmo package:

    mamba install ulmo

We will now install the packages needed to use SWEML_env, as well as other tools to accomplish data science tasks.
Enter the following command in your Anaconda Command Prompt to get the required dependencies with the appropriate versions (note: you must be in the correct working directory):

    pip install -r requirements.txt

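As a quick smoke test after the install, try importing a couple of packages from the command line. The package names below are illustrative guesses at typical entries in requirements.txt, not taken from the file itself:

```shell
# Smoke-test the installed environment; numpy and pandas are assumed,
# illustrative package names - substitute entries from requirements.txt.
python3 -c "import numpy, pandas" 2>/dev/null \
    && echo "imports OK" \
    || echo "some packages missing; re-run pip install -r requirements.txt"
```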
### Connect to AWS
All of the data for the project is in a publicly accessible AWS S3 bucket (national-snow-model); however, some methods require credentials.
Please request credentials by opening an issue, and put the credentials file in the head of the repo (e.g., SWEML) as AWSaccessKeys.csv.
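One way to make the credentials available to AWS tooling is to export them as the standard AWS environment variables, which boto3 and the AWS CLI pick up automatically. The CSV layout below (a header row followed by one key/secret row) mirrors the usual AWS-issued key file but is an assumption here, as are the obviously fake values and the `/tmp` path:

```shell
# Sketch: load AWSaccessKeys.csv into the environment variables that
# boto3 and the AWS CLI read automatically. File path, column layout,
# and the key values are all illustrative assumptions.
cat > /tmp/AWSaccessKeys.csv <<'EOF'
Access key ID,Secret access key
AKIAEXAMPLEKEY,exampleSecret123
EOF
export AWS_ACCESS_KEY_ID=$(tail -n 1 /tmp/AWSaccessKeys.csv | cut -d, -f1)
export AWS_SECRET_ACCESS_KEY=$(tail -n 1 /tmp/AWSaccessKeys.csv | cut -d, -f2)
echo "loaded key id: $AWS_ACCESS_KEY_ID"
```

In practice, point the `tail` commands at the AWSaccessKeys.csv you placed at the head of the repo rather than creating a dummy file.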

### Explore the model through an example

The objective of the project is to optimize the NSM, or in this case the SSM.
To do so, the next step is to explore the [NSM Example](./contributors/NSM_Example/methods.ipynb).