From 188d074a234c7b77862814890cb5f4f82b1907d6 Mon Sep 17 00:00:00 2001
From: Wei Huang
Date: Tue, 12 Nov 2024 22:52:31 -0600
Subject: [PATCH] use auto numerated list and code-block

---
 docs/source/noaa_csp.rst | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/docs/source/noaa_csp.rst b/docs/source/noaa_csp.rst
index a4c25ae89b..9b9858e2f4 100644
--- a/docs/source/noaa_csp.rst
+++ b/docs/source/noaa_csp.rst
@@ -333,9 +333,8 @@ or access the cluster from your web terminal, one can start clone, compile, and
 
       cd /contrib/$USER   #you should have a username, and have a directory at /contrib where we save our permanent files.
       git clone --recursive git@github.com:NOAA-EMC/global-workflow.git global-workflow
-      #or the develop form at EPIC:
+      #or the develop fork at EPIC:
       git clone --recursive git@github.com:NOAA-EPIC/global-workflow-cloud.git global-workflow-cloud
-
 #. compile global-workflow:
 
    .. code-block:: console
@@ -344,7 +343,6 @@ or access the cluster from your web terminal, one can start clone, compile, and
       cd sorc
       build_all.sh   # or similar command to compile for gefs, or others.
       link_workflow.sh   # after build_all.sh finished successfully
-
 #. As users may define a very small cluster as controller, one may use the script below to compile in compute node. Save the this script in a file, say, com.slurm, and submit this job with command "sbatch com.slurm":
 
    .. code-block:: console
@@ -366,7 +364,6 @@ or access the cluster from your web terminal, one can start clone, compile, and
      #build_all.sh
      build_all.sh -w
      link_workflow.sh
-
 #. run global-workflow C48 ATM test case (assume user has /lustre filesystem attached):
 
    .. code-block:: console
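
The patched documentation refers to a "com.slurm" batch script for compiling global-workflow on a compute node, but only the last few lines of that script appear in the diff context above. Below is a minimal sketch of what such a script might look like, assuming a standard Slurm setup: the job name, node count, wall time, and log file name are placeholders, and the source path follows the clone location from the earlier step; only ``build_all.sh -w`` and ``link_workflow.sh`` come from the patch itself.

.. code-block:: bash

   #!/bin/bash
   #SBATCH --job-name=compile-gw        # placeholder job name
   #SBATCH --nodes=1                    # compile on a single compute node
   #SBATCH --time=02:00:00              # placeholder wall-clock limit
   #SBATCH --output=compile-gw.%j.log   # capture the build log for review

   # Build from the workflow source directory (assumes the clone location used above).
   cd /contrib/$USER/global-workflow/sorc
   build_all.sh -w      # build option shown in the patch context
   link_workflow.sh     # link the workflow after a successful build

As the patched text instructs, this file would be submitted with ``sbatch com.slurm`` so the compile runs on a compute node rather than on a small controller instance.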