Merge pull request #493 from MPAS-Dev/develop

Merge develop into master for v1.1

xylar authored Nov 20, 2018
2 parents aec2e82 + dcec8e7 commit 87786ff
Showing 106 changed files with 6,650 additions and 1,615 deletions.
2 changes: 2 additions & 0 deletions .gitignore
@@ -91,3 +91,5 @@ ENV/

# Rope project settings
.ropeproject

.DS_Store
4 changes: 3 additions & 1 deletion .travis.yml
@@ -8,8 +8,10 @@ matrix:
fast_finish: true
include:
- python: 2.7
- python: 3.5
- python: 3.6
- python: 3.7
dist: xenial
sudo: true

before_install:
- if [[ "$TRAVIS_PYTHON_VERSION" == "2.7" ]]; then
64 changes: 36 additions & 28 deletions LICENSE
@@ -7,36 +7,44 @@ Copyright (c) 2018 UT-Battelle, LLC. All rights reserved.
Copyright 2018. Los Alamos National Security, LLC. This software was produced
under U.S. Government contract DE-AC52-06NA25396 for Los Alamos National
Laboratory (LANL), which is operated by Los Alamos National Security, LLC for
the U.S. Department of Energy. Contributions to this code were also made by
UT-Battelle, LLC ("UT-Battelle"), management and operations contractor for the
Oak Ridge National Laboratory ("ORNL") under the authority of its DOE Prime
Contract No. DE-AC05-00OR22725 and Lawrence Livermore National Security, LLC,
management and operations contractor for Lawrence Livermore National
Laboratory (“LLNL”) under the authority of its U.S. Department of Energy
(“DOE”) Prime Contract No. DE-AC52-07NA27344.
the U.S. Department of Energy. The U.S. Government has rights to use,
reproduce, and distribute this software. NEITHER THE GOVERNMENT NOR LOS ALAMOS
NATIONAL SECURITY, LLC MAKES ANY WARRANTY, EXPRESS OR IMPLIED, OR ASSUMES ANY
LIABILITY FOR THE USE OF THIS SOFTWARE. If software is modified to produce
derivative works, such modified software should be clearly marked, so as not to
confuse it with the version available from LANL.

The U.S. Government has rights to use, reproduce, and distribute this software.
NEITHER THE GOVERNMENT NOR LOS ALAMOS NATIONAL SECURITY, LLC MAKES ANY
WARRANTY, EXPRESS OR IMPLIED, OR ASSUMES ANY LIABILITY FOR THE USE OF THIS
SOFTWARE. If software is modified to produce derivative works, such modified
software should be clearly marked, so as not to confuse it with the version
available from LANL.
Copyright (c) 2018 Lawrence Livermore National Security, LLC. This work was
produced under the auspices of the U.S. Department of Energy by Lawrence
Livermore National Laboratory under Contract DE-AC52-07NA27344. This work was
prepared as an account of work sponsored by an agency of the United States
Government. Neither the United States Government nor Lawrence Livermore
National Security, LLC, nor any of their employees makes any warranty,
expressed or implied, or assumes any legal liability or responsibility for the
accuracy, completeness, or usefulness of any information, apparatus, product,
or process disclosed, or represents that its use would not infringe privately
owned rights. Reference herein to any specific commercial product, process, or
service by trade name, trademark, manufacturer, or otherwise does not
necessarily constitute or imply its endorsement, recommendation, or favoring by
the United States Government or Lawrence Livermore National Security, LLC. The
views and opinions of authors expressed herein do not necessarily state or
reflect those of the United States Government or Lawrence Livermore National
Security, LLC, and shall not be used for advertising or product endorsement
purposes.

Copyright 2018. Lawrence Livermore National Security, LLC. All rights reserved.
This work was prepared as an account of work sponsored by an agency of the
United States Government. Neither the United States Government nor Lawrence
Livermore National Security, LLC, nor any of their employees makes any
warranty, expressed or implied, or assumes any legal liability or
responsibility for the accuracy, completeness, or usefulness of any
information, apparatus, product, or process disclosed, or represents that its
use would not infringe privately owned rights. Reference herein to any specific
commercial product, process, or service by trade name, trademark, manufacturer,
or otherwise does not necessarily constitute or imply its endorsement,
recommendation, or favoring by the United States Government or Lawrence
Livermore National Security, LLC. The views and opinions of authors expressed
herein do not necessarily state or reflect those of the United States
Government or Lawrence Livermore National Security, LLC, and shall not be used
for advertising or product endorsement purposes.
Copyright 2018 UT-Battelle, LLC. This technology was acquired under license from
UT-Battelle, LLC, the management and operating contractor of the Oak Ridge
National Laboratory acting on behalf of the U.S. Department of Energy under
Contract No. DE-AC05-00OR22725. The United States Government and UT-Battelle,
LLC make no representations and disclaim all warranties, both expressed and
implied. There are no express or implied warranties of merchantability or
fitness for a particular purpose, or that the use of the software will not
infringe any patent, copyright, trademark, or other proprietary rights, or that
the software will accomplish the intended results or that the software or its
use will not result in injury or damage. The user assumes responsibility for
all liabilities, penalties, fines, claims, causes of action, and costs and
expenses, caused by, resulting from or arising out of, in whole or in part the
use, storage or disposal of the software.

Additionally, redistribution and use in source and binary forms, with or
without modification, are permitted provided that the following conditions are
134 changes: 88 additions & 46 deletions README.md
@@ -18,7 +18,7 @@ used those components.

MPAS-Analysis is available as an anaconda package via the `e3sm` channel:

```
``` bash
conda install -c conda-forge -c e3sm mpas_analysis
```

@@ -40,18 +40,32 @@ environment with the following packages:
* cmocean
* progressbar2
* requests
* setuptools
* shapely

These can be installed via the conda command:
```
``` bash
conda install -c conda-forge numpy scipy matplotlib netCDF4 xarray dask \
bottleneck basemap lxml nco pyproj pillow cmocean progressbar2 requests
bottleneck basemap lxml nco pyproj pillow cmocean progressbar2 requests \
setuptools shapely
```

Then, get the code from:
[https://github.com/MPAS-Dev/MPAS-Analysis](https://github.com/MPAS-Dev/MPAS-Analysis)


## Download analysis input data

To download the data that is necessary for MPAS-Analysis, run:
If you installed the `mpas_analysis` package, download the data that is
necessary for MPAS-Analysis by running:

``` bash
download_analysis_data -o /path/to/output/directory
```

If you are using the git repository, run:

``` bash
./download_analysis_data.py -o /path/to/output/directory
```

@@ -66,42 +80,55 @@ two subdirectories:

## List Analysis

To list the available analysis tasks, run:
If you installed the `mpas_analysis` package, list the available analysis tasks
by running:

``` bash
mpas_analysis --list
```
./run_mpas_analysis --list

If using a git repository, run:
``` bash
python -m mpas_analysis --list
```

This lists all tasks and their tags. These can be used in the `generate`
command-line option or config option. See `mpas_analysis/config.default`
for more details.

## Running the analysis

1. Create an empty config file (say `config.myrun`), copy `config.example`,
or copy one of the example files in the `configs` directory.
or copy one of the example files in the `configs` directory (if using a
git repo) or download one from the
[example configs directory](https://github.com/MPAS-Dev/MPAS-Analysis/tree/develop/configs).
2. Either modify config options in your new file or copy and modify config
options from `mpas_analysis/config.default`.

**Requirements for custom config files:**
* At minimum you should set `baseDirectory` under `[output]` to the folder
where output is stored. **NOTE** this value should be a unique
directory for each run being analyzed. If multiple runs are analyzed in
the same directory, cached results from a previous analysis will not be
updated correctly.
* Any options you copy into the config file **must** include the
appropriate section header (e.g. '[run]' or '[output]')
* You do not need to copy all options from `mpas_analysis/config.default`.
This file will automatically be used for any options you do not include
in your custom config file.
* You should **not** modify `mpas_analysis/config.default` directly.
3. run: `./run_mpas_analysis config.myrun`. This will read the configuration
options from `mpas_analysis/config.default` (in a git repo) or directly
from GitHub:
[config.default](https://github.com/MPAS-Dev/MPAS-Analysis/tree/develop/mpas_analysis/config.default).
3. If you installed the `mpas_analysis` package, run:
`mpas_analysis config.myrun`. If using a git checkout, run:
`python -m mpas_analysis config.myrun`. This will read the configuration
first from `mpas_analysis/config.default` and then replace that
configuration with any changes from `config.myrun`
4. If you want to run a subset of the analysis, you can either set the
`generate` option under `[output]` in your config file or use the
`--generate` flag on the command line. See the comments in
`mpas_analysis/config.default` for more details on this option.

**Requirements for custom config files:**
* At minimum you should set `baseDirectory` under `[output]` to the folder
where output is stored. **NOTE** this value should be a unique
directory for each run being analyzed. If multiple runs are analyzed in
the same directory, cached results from a previous analysis will not be
updated correctly.
* Any options you copy into the config file **must** include the
appropriate section header (e.g. '[run]' or '[output]')
* You do not need to copy all options from `mpas_analysis/config.default`.
This file will automatically be used for any options you do not include
in your custom config file.
* You should **not** modify `mpas_analysis/config.default` directly.
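
The layering described in step 3 (defaults read first, then overridden by the custom config file) can be sketched with Python's built-in `configparser`; the option values below are illustrative stand-ins, not the real contents of `config.default`:

```python
from configparser import ConfigParser

# MPAS-Analysis reads mpas_analysis/config.default first, then the user's
# custom config file; options set in the user file override the defaults,
# and any option the user omits falls through to config.default.
config = ConfigParser()

default_text = """
[output]
baseDirectory = /dir/to/analysis/output
htmlSubdirectory = html
"""

user_text = """
[output]
baseDirectory = /home/user/analysis/my_run
"""

config.read_string(default_text)  # stand-in for config.default
config.read_string(user_text)     # stand-in for config.myrun; later reads win

print(config.get('output', 'baseDirectory'))     # user override
print(config.get('output', 'htmlSubdirectory'))  # default survives
```

This is why a custom config file only needs the options being changed, and why each copied option must sit under its section header.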

## List of MPAS output files that are needed by MPAS-Analysis:

* mpas-o files:
@@ -141,11 +168,19 @@ Note: for older runs, mpas-seaice files will be named:
## Purge Old Analysis

To purge old analysis (delete the whole output directory) before re-running
the analysis, add the `--purge` flag:
the analysis, add the `--purge` flag. If you installed `mpas_analysis` as
a package, run:

``` bash
mpas_analysis --purge <config.file>
```
./run_mpas_analysis --purge <config.file>
```

If you are running in the repo, use:

``` bash
python -m mpas_analysis --purge <config.file>
```

All of the subdirectories listed in `output` will be deleted along with the
climatology subdirectories in `oceanObservations` and `seaIceObservations`.

@@ -164,23 +199,27 @@ final website with `--html_only`, and re-running after the simulation has
progressed to extend time series (however, not recommended for changing the
bounds on climatologies, see above).

## Running in parallel
## Running in parallel via a queueing system

If you are running from a git repo:

1. Copy the appropriate job script file from `configs/<machine_name>` to
the same directory as `run_mpas_analysis` (or another directory if
preferred). The default script, `configs/job_script.default.bash`, is
1. If you are running from a git repo, copy the appropriate job script file
from `configs/<machine_name>` to the root directory (or another directory
if preferred). The default script, `configs/job_script.default.bash`, is
appropriate for a laptop or desktop computer with multiple cores.
2. Modify the number of nodes (equal to the number of parallel tasks), the
run name and optionally the output directory and the path to the config
file for the run (default: `./configs/<machine_name>/config.<run_name>`)
Note: in `job_script.default.bash`, the number of parallel tasks is set
manually, since a laptop or desktop has no compute nodes to count.
2. If using the `mpas_analysis` conda package, download the job script and/or
sample config file from the
[example configs directory](https://github.com/MPAS-Dev/MPAS-Analysis/tree/develop/configs).
2. Modify the number of parallel tasks, the run name, the output directory
and the path to the config file for the run.
3. Note: the number of parallel tasks can be anything between 1 and the
number of analysis tasks to be performed. If there are more tasks than
parallel tasks, later tasks will simply wait until earlier tasks have
finished.
4. Submit the job using the modified job script
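
The scheduling behaviour in step 3 can be sketched with a thread pool; the task names are invented, and MPAS-Analysis itself manages subprocesses rather than threads, so this is only an illustration of the waiting behaviour:

```python
from concurrent.futures import ThreadPoolExecutor

# With fewer parallel tasks (workers) than analysis tasks, later tasks
# simply wait until an earlier task finishes and frees a worker.
def run_task(name):
    return name + ' finished'

tasks = ['taskA', 'taskB', 'taskC', 'taskD']  # hypothetical task names
with ThreadPoolExecutor(max_workers=2) as pool:  # 2 parallel tasks
    results = list(pool.map(run_task, tasks))
print(results)  # all four tasks complete, at most two at a time
```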



If a job script for your machine is not available, try modifying the default
job script in `configs/job_script.default.bash` or one of the job scripts for
another machine to fit your needs.
@@ -200,23 +239,26 @@ within the `mpas_analysis/shared` directory.
3. modify `mpas_analysis/config.default` (and possibly any machine-specific
config files in `configs/<machine>`)
4. import new analysis task in `mpas_analysis/<component>/__init__.py`
5. add new analysis task to `run_mpas_analysis` under `build_analysis_list`:
```python
analyses.append(<component>.MyTask(config, myArg='argValue'))
```
This will add a new object of the `MyTask` class to a list of analysis tasks
created in `build_analysis_list`. Later on in `run_analysis`, it will first
go through the list to make sure each task needs to be generated
(by calling `check_generate`, which is defined in `AnalysisTask`), then,
will call `setup_and_check` on each task (to make sure the appropriate AM is
on and files are present), and will finally call `run` on each task that is
to be generated and is set up properly.
5. add new analysis task to `mpas_analysis/__main__.py` under
`build_analysis_list`, see below.

A new analysis task can be added with:
```python
analyses.append(<component>.MyTask(config, myArg='argValue'))
```
This will add a new object of the `MyTask` class to a list of analysis tasks
created in `build_analysis_list`. Later on in `run_analysis`, it will first
go through the list to make sure each task needs to be generated
(by calling `check_generate`, which is defined in `AnalysisTask`), then,
will call `setup_and_check` on each task (to make sure the appropriate AM is
on and files are present), and will finally call `run` on each task that is
to be generated and is set up properly.
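
The lifecycle described above can be sketched as follows. Only `AnalysisTask`, `MyTask`, `build_analysis_list`, `run_analysis`, `check_generate`, `setup_and_check`, and `run` come from the text; the method bodies here are placeholders, not the real implementations:

```python
class AnalysisTask:
    def __init__(self, config, taskName, tags):
        self.config = config
        self.taskName = taskName
        self.tags = tags

    def check_generate(self):
        # the real method parses the 'generate' option; here, generate all
        return True

    def setup_and_check(self):
        # the real method checks that the needed analysis member (AM) is
        # on and that the required files are present
        return True

    def run(self):
        raise NotImplementedError


class MyTask(AnalysisTask):
    def __init__(self, config, myArg='argValue'):
        super().__init__(config, taskName='myTask', tags=['example'])
        self.myArg = myArg

    def run(self):
        return 'ran {} with {}'.format(self.taskName, self.myArg)


def build_analysis_list(config):
    analyses = []
    analyses.append(MyTask(config, myArg='argValue'))
    return analyses


def run_analysis(config, analyses):
    # filter, set up, then run each task, as described above
    results = []
    for task in analyses:
        if task.check_generate() and task.setup_and_check():
            results.append(task.run())
    return results


print(run_analysis({}, build_analysis_list({})))
```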

## Generating Documentation

To generate the `sphinx` documentation, run:
```bash
conda install sphinx sphinx_rtd_theme numpydoc recommonmark tabulate
conda install sphinx sphinx_rtd_theme numpydoc m2r tabulate
cd docs
make html
```
1 change: 1 addition & 0 deletions ci/requirements.yml
@@ -18,3 +18,4 @@ dependencies:
- cmocean
- progressbar2
- requests
- shapely
12 changes: 8 additions & 4 deletions conda/recipe/meta.yaml
@@ -1,4 +1,4 @@
{% set version = "1.0" %}
{% set version = "1.1" %}

package:
name: mpas_analysis
@@ -12,7 +12,8 @@ build:
script: $PYTHON setup.py install --single-version-externally-managed
--record=record.txt
noarch: python

entry_points:
- mpas_analysis = mpas_analysis.__main__:main
test:
requires:
- pytest
@@ -30,18 +31,21 @@ requirements:
- python
- numpy
- scipy
- matplotlib
- matplotlib <3.0.0|>=3.0.2
- netcdf4
- xarray >=0.10.0
- dask
- bottleneck
- basemap
- basemap <1.2.0|>1.2.0
- lxml
- nco >=4.7.0
- pillow
- cmocean
- progressbar2
- requests
- pyproj
- setuptools
- shapely

about:
home: http://github.com/MPAS-Dev/MPAS-Analysis
8 changes: 4 additions & 4 deletions config.example
@@ -17,7 +17,7 @@
## sea ice observations
## * [regions]/regionMaskDirectory -- a directory containing MOC and
## ice shelf region masks
## 3. run: ./run_mpas_analysis config.myrun. This will read the configuration
## 3. run: mpas_analysis config.myrun. This will read the configuration
## first from config.default and then replace that configuration with any
## changes from config.myrun
## 4. If you want to run a subset of the analysis, you can either set the
@@ -112,7 +112,7 @@ timeSeriesSubdirectory = timeseries
htmlSubdirectory = html

# a list of analyses to generate. Valid names can be seen by running:
# ./run_mpas_analysis --list
# mpas_analysis --list
# This command also lists tags for each analysis.
# Shortcuts exist to generate (or not generate) several types of analysis.
# These include:
@@ -130,11 +130,11 @@ htmlSubdirectory = html
# 'no_<component>', 'no_<tag>' -- in analogy to 'all_*', skip all analysis
tasks from the given component or with
# the given tag. Do
# ./run_mpas_analysis --list
# mpas_analysis --list
# to list all task names and their tags
# an equivalent syntax can be used on the command line to override this
# option:
# ./run_mpas_analysis config.analysis --generate \
# mpas_analysis config.analysis --generate \
# only_ocean,no_timeSeries,timeSeriesSST
generate = ['all_publicObs']
