
Commit c5686c1

Merge pull request #55 from whitelightning450/nsmv2ASOupdate
Nsmv2 ASO update
2 parents 9c59d4c + f6ee941 commit c5686c1

File tree

14 files changed: +117170 -1 lines changed


.gitignore

Lines changed: 5 additions & 1 deletion
@@ -12,4 +12,8 @@ Data/WBD/
 *.pkl
 *.parquet
 /Figures/*
-*.pyc
+*.pyc
+*.tif
+*.tif.xml
+*.tif.aux.xml
+*.zip
Getting Started.md

Lines changed: 98 additions & 0 deletions
![NSM_Cover](./Images/ML_SWE.jpg)

# Snow Water Equivalent Machine Learning (SWEML): Using Machine Learning to Advance Snow State Modeling

[![Deploy](https://github.com/geo-smart/use_case_template/actions/workflows/deploy.yaml/badge.svg)](https://github.com/geo-smart/use_case_template/actions/workflows/deploy.yaml)
[![Jupyter Book Badge](https://jupyterbook.org/badge.svg)](https://geo-smart.github.io/use_case_template)
[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/geo-smart/use_case_template/HEAD?urlpath=lab)
[![GeoSMART Use Case](./book/img/use_case_badge.svg)](https://geo-smart.github.io/usecases)
![GitHub](https://img.shields.io/github/license/whitelightning450/National-ML-Snow-Prediction-Mod?logo=GitHub&style=flat-square)
![GitHub top language](https://img.shields.io/github/languages/top/whitelightning450/National-ML-Snow-Prediction-Mod?logo=Jupyter&style=flat-square)
![GitHub repo size](https://img.shields.io/github/repo-size/whitelightning450/National-ML-Snow-Prediction-Mod?logo=Github&style=flat-square)
# Getting Started:

The first step is to identify a folder location where you would like to work in a development environment.
We suggest a location from which you can easily access the model predictions, to make evaluating your model easy.
Using the command prompt, change your working directory to this folder and git clone [Snow-Extrapolation](https://github.com/geo-smart/Snow-Extrapolation):

    git clone https://github.com/geo-smart/Snow-Extrapolation

## Virtual Environment

It is a best practice to create a virtual environment when starting a new project: a virtual environment is essentially an isolated working copy of Python for a particular project.
That is, each environment can have its own dependencies and even its own Python version.
Creating a Python virtual environment is useful if you need different versions of Python or packages for different projects.
Lastly, a virtual environment keeps things tidy, keeps your main Python installation healthy, and supports reproducible and open science.

## Creating a Stable CONDA Environment on CIROH Cloud or Other 2i2c Cloud Computing Platforms

Go to the home directory:
```
cd ~
```
Create an envs directory:
```
mkdir envs
```
Create a .condarc file and link it to a text file:
```
touch .condarc

ln -s .condarc condarc.txt
```
Add the lines below to the condarc.txt file:
```
# .condarc
envs_dirs:
  - ~/envs
```
Restart your server.

### Creating your SWEML_env Python Virtual Environment

Since we will be using Jupyter Notebooks for this exercise, we will use the Anaconda command prompt to create our virtual environment.
In the command line, type:

    conda create -n SWEML_env python=3.9

For this example we use Python version 3.9.12; specify this version when setting up your new virtual environment.
After Anaconda finishes setting up your SWEML_env, activate it using the activate function:

    conda activate SWEML_env

You should now be working in your new SWEML_env within the command prompt.
However, we will want to work in this environment within our Jupyter Notebook, so we need to create a kernel to connect them.
We begin by installing the **ipykernel** python package:

    pip install --user ipykernel

With the package installed, we can connect the SWEML_env to our Jupyter Notebook:

    python -m ipykernel install --user --name=SWEML_env

Under contributors, there is a start-to-finish example to get participants up to speed on the modeling workflow.
To double-check that you have the correct working environment, open the [Methods](./contributors/NSM_Example/methods.ipynb) file, click the Kernel tab on the top toolbar, and select the SWEML_env.
The SWEML_env should show up in the top right of the Jupyter Notebook.

![Notebook_env](./contributors/NSM_Example/Images/NSM-Kernel.JPG)

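You can also confirm from inside a notebook cell that the active kernel is running the expected environment; a minimal check is shown below (the exact path depends on where your envs directory lives):
```
# Confirm the notebook kernel is using the SWEML_env interpreter.
import sys
print(sys.executable)   # the path should point into the SWEML_env environment
print(sys.version)      # should report a 3.9.x interpreter
```
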
### Loading other Python dependencies

Load the Ulmo package:

    mamba install ulmo

We will now install the packages needed to use the SWEML_env, as well as other tools for accomplishing data science tasks.
Enter the following command in your Anaconda Command Prompt to get the required dependencies with the appropriate versions (note: you must be in the correct working directory):

    pip install -r requirements.txt

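If you want to double-check that the pinned versions from requirements.txt were picked up, a short notebook cell like the following (package names taken from requirements.txt) confirms a few of the key ones:
```
# Spot-check a few pinned dependency versions from requirements.txt.
import pandas, sklearn, tensorflow, xarray
print("pandas      ", pandas.__version__)       # expect 1.4.3
print("scikit-learn", sklearn.__version__)      # expect 1.1.1
print("tensorflow  ", tensorflow.__version__)   # expect 2.9.1
print("xarray      ", xarray.__version__)       # expect 2022.6.0
```
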
### Connect to AWS

All of the data for the project is on a publicly accessible AWS S3 bucket (national-snow-model); however, some methods require credentials.
Please request credentials as an issue and put the credentials file in the head of the repo (e.g., SWEML) as AWSaccessKeys.csv.

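As a minimal sketch of how those credentials can be loaded into a boto3 session (the AWSaccessKeys.csv column names below are assumptions based on the standard AWS access-key CSV export; adjust them to match the file you receive):
```
# Minimal sketch: build an S3 client from AWSaccessKeys.csv.
# The column names 'Access key ID' and 'Secret access key' are assumptions;
# edit them if your credentials file uses different headers.
import pandas as pd
import boto3

keys = pd.read_csv("AWSaccessKeys.csv")
s3 = boto3.client(
    "s3",
    aws_access_key_id=keys["Access key ID"].iloc[0],
    aws_secret_access_key=keys["Secret access key"].iloc[0],
)

# List a few objects in the national-snow-model bucket to confirm access.
response = s3.list_objects_v2(Bucket="national-snow-model", MaxKeys=5)
for obj in response.get("Contents", []):
    print(obj["Key"])
```
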
### Explore the model through an example

The objective of the project is to optimize the NSM (or, in this case, the SSM).
To do so, the next step is to explore the [NSM Example](./contributors/NSM_Example/methods.ipynb).
Lines changed: 28 additions & 0 deletions

![NSM_Cover](./Images/ML_SWE.jpg)

# Snow Water Equivalent Machine Learning (SWEML): Using Machine Learning to Advance Snow State Modeling

[![Deploy](https://github.com/geo-smart/use_case_template/actions/workflows/deploy.yaml/badge.svg)](https://github.com/geo-smart/use_case_template/actions/workflows/deploy.yaml)
[![Jupyter Book Badge](https://jupyterbook.org/badge.svg)](https://geo-smart.github.io/use_case_template)
[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/geo-smart/use_case_template/HEAD?urlpath=lab)
[![GeoSMART Use Case](./book/img/use_case_badge.svg)](https://geo-smart.github.io/usecases)
![GitHub](https://img.shields.io/github/license/whitelightning450/National-ML-Snow-Prediction-Mod?logo=GitHub&style=flat-square)
![GitHub top language](https://img.shields.io/github/languages/top/whitelightning450/National-ML-Snow-Prediction-Mod?logo=Jupyter&style=flat-square)
![GitHub repo size](https://img.shields.io/github/repo-size/whitelightning450/National-ML-Snow-Prediction-Mod?logo=Github&style=flat-square)

## Model Running Instructions: Making a Model Inference

SWEML supports a flexible ML framework that allows the use and exploration of many ML algorithms.
The [Model](https://github.com/whitelightning450/SWEML/tree/main/Model) folder exemplifies the model-agnostic structure and the variety of ML algorithms explored during the development of the model.
We recommend using the [Neural Network](https://github.com/whitelightning450/SWEML/tree/main/Model/Neural_Network) model, as it has consistently proven to be the best-performing model.
After completing the [Getting Started](https://github.com/whitelightning450/SWEML/blob/main/Getting%20Started.md) steps to set up the correct packages and versioning, one can begin to explore the model.

Begin with the [training](Model/Neural_Network/training.ipynb) file and run the first and second blocks of code.
Note that first-time users will not have all of the required folders, because GitHub does not track empty directories.
When running the second code block, add the missing folders indicated in the error message.

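If you prefer to create the missing folders programmatically rather than by hand, a short cell like the following works; the folder names below are placeholders, so substitute the paths reported in the error message:
```
# Create missing output folders before re-running the second code block.
# The names here are illustrative placeholders, not the repository's actual list.
import os

for folder in ["Data", "Predictions", "Figures"]:
    os.makedirs(folder, exist_ok=True)   # safe to run even if the folder exists
```
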
Most files are linked to the CIROH AWS S3 folder but can also be made using the files within each directory.
The hindcast simulation is set to the 2019 water year [here](https://github.com/whitelightning450/SWEML/blob/main/Model/Neural_Network/SSM_Hindcast_2019.ipynb), as we pre-compiled all of the necessary information to run and evaluate the model for this year.
The model framework fully supports the use of other years in the 2013-2018 period, but this requires the user to turn on the Get_Monitoring_Data_Threaded(), Data_Processing(), and augmentPredictionDFs() functions.
The 2019 simulation at a weekly temporal resolution takes approximately 90 seconds on a quality laptop and can quickly exceed 1 hour when run for a different year due to the data acquisition and processing.
requirements.txt

Lines changed: 40 additions & 0 deletions
basemap==1.3.3
basemap-data==1.3.2
branca==0.5.0
boto3
botocore
earthaccess
earthpy==0.9.4
ee==0.2
folium==0.12.1.post1
geojson==2.5.0
geopy==2.2.0
graphviz==0.20.1
hvplot==0.8.0
h5py==3.7.0
hydroeval==0.1.0
joblib==1.2.0
keras==2.9.0
matplotlib==3.5.0
matplotlib-inline==0.1.3
plotly==5.11.0
progressbar==2.5
pygeos==0.13
pyproj==3.3.1
rioxarray==0.15.0
rasterstats==0.19.0
s3fs==2023.10.0
scikit-learn==1.1.1
scipy==1.9.0
seaborn==0.11.2
tables==3.7.0
tensorflow==2.9.1
tqdm==4.64.0
vincent==0.4.4
xarray==2022.6.0
pandas==1.4.3
pyarrow
contextily
mercantile
nbformat
netCDF4
