tutorials/basic/methods_comparison/methods_comparison.ipynb (1 addition, 1 deletion)

@@ -187,7 +187,7 @@
     "cell_type": "markdown",
     "metadata": {},
     "source": [
-     "Once we have loaded the data and defined the required and optional parameters, we can perform the analysis. This step is accomplished by calling the `PySPOD` constructor, `SPOD_streaming(X=X, params=params, data_handler=False, variables=variables)` and the `fit` method, `SPOD_analysis.fit()`. \n",
+     "Once we have loaded the data and defined the required and optional parameters, we can perform the analysis. This step is accomplished by calling the `PySPOD` constructor, `SPOD_streaming(data=X, params=params, data_handler=False, variables=variables)` and the `fit` method, `SPOD_analysis.fit()`. \n",
     "\n",
     "The `PySPOD` constructor takes `X`, that can either be a `numpy.ndarray` containing the data or the path to the data file , the parameters `params`, a parameter called `data_handler` that can be either `False` or a function to read the data, and `variables` that is the list containing the names of our variables. If, as `data_handler`, we pass `False`, then we need to load the entire matrix of data into RAM, and that must comply with the **PySPOD** input data requirements (i.e. the dimension of the data matrix must correspond to (time $\\times$ spatial dimension shape $\\times$ number of variables). \n",
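The shape requirement quoted in the context line above (time $\times$ spatial dimension shape $\times$ number of variables) can be sketched with plain `numpy`; the sizes below are hypothetical and not taken from the tutorial:

```python
import numpy as np

# PySPOD expects the in-RAM data array shaped (time, *spatial_shape, n_variables);
# here we build a hypothetical 2D, two-variable dataset.
nt, nx, ny, nv = 100, 32, 16, 2     # hypothetical sizes
X = np.random.rand(nt, nx, ny, nv)

# time must be the leading axis, variables the trailing one
assert X.shape[0] == nt and X.shape[-1] == nv
print(X.shape)  # (100, 32, 16, 2)
```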
tutorials/basic/methods_comparison_file/methods_comparison_file.ipynb (1 addition, 1 deletion)

@@ -240,7 +240,7 @@
     "cell_type": "markdown",
     "metadata": {},
     "source": [
-     "Once we have loaded the data and defined the required and optional parameters, we can perform the analysis. This step is accomplished by calling the `PySPOD` constructor, `SPOD_streaming(X=X, params=params, data_handler=False, variables=variables)` and the `fit` method, `SPOD_analysis.fit()`. \n",
+     "Once we have loaded the data and defined the required and optional parameters, we can perform the analysis. This step is accomplished by calling the `PySPOD` constructor, `SPOD_streaming(data=X, params=params, data_handler=False, variables=variables)` and the `fit` method, `SPOD_analysis.fit()`. \n",
     "\n",
     "The `PySPOD` constructor takes `X`, that can either be a `numpy.ndarray` containing the data or the path to the data file , the parameters `params`, a parameter called `data_handler` that can be either `False` or a function to read the data, and `variables` that is the list containing the names of our variables. If, as `data_handler`, we pass `False`, then we need to load the entire matrix of data into RAM, and that must comply with the **PySPOD** input data requirements (i.e. the dimension of the data matrix must correspond to (time $\\times$ spatial dimension shape $\\times$ number of variables). \n",
tutorials/climate/ERA20C_MEI_2D/ERA20C_MEI_2D.ipynb (5 additions, 5 deletions)

@@ -369,7 +369,7 @@
     "if params['normalize']:\n",
     "\tparams['weights'] = \\\n",
     " weights.apply_normalization(\\\n",
-    " X=X, \n",
+    " data=X, \n",
     " n_variables=params['nv'], \n",
     " weights=params['weights'], \n",
     " method='variance')"
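As a rough, self-contained sketch of what a variance-based weight normalization might do: the helper below divides each variable's weights by that variable's variance. `scale_by_variance` is a hypothetical stand-in inferred from the `method='variance'` argument, not the library's actual `weights.apply_normalization`:

```python
import numpy as np

def scale_by_variance(data, weights, n_variables):
    """Hypothetical stand-in for weights.apply_normalization(method='variance'):
    divide each variable's weights by that variable's variance.
    PySPOD's real implementation may differ."""
    flat = data.reshape(data.shape[0], -1, n_variables)  # (time, space, nv)
    w = weights.reshape(-1, n_variables).astype(float).copy()
    for v in range(n_variables):
        var = flat[..., v].var()
        if var > 0:
            w[:, v] /= var
    return w.reshape(weights.shape)

X = np.random.rand(50, 10, 10, 2)                 # hypothetical data
w = scale_by_variance(X, np.ones((100, 2)), n_variables=2)
print(w.shape)  # (100, 2)
```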
@@ -386,7 +386,7 @@
     "cell_type": "markdown",
     "metadata": {},
     "source": [
-     "Once we have loaded the data and defined the required and optional parameters, we can perform the analysis. This step is accomplished by calling the `PySPOD` constructor, `SPOD_low_storage(X=X, params=params, data_handler=False, variables=variables)` and the `fit` method, `SPOD_analysis.fit()`. \n",
+     "Once we have loaded the data and defined the required and optional parameters, we can perform the analysis. This step is accomplished by calling the `PySPOD` constructor, `SPOD_low_storage(data=X, params=params, data_handler=False, variables=variables)` and the `fit` method, `SPOD_analysis.fit()`. \n",
     "\n",
     "The `PySPOD` constructor takes `X`, that can either be a `numpy.ndarray` containing the data or the path to the data file , the parameters `params`, a parameter called `data_handler` that can be either `False` or a function to read the data, and `variables` that is the list containing the names of our variables. If, as `data_handler`, we pass `False`, then we need to load the entire matrix of data into RAM, and that must comply with the **PySPOD** input data requirements (i.e. the dimension of the data matrix must correspond to (time $\\times$ spatial dimension shape $\\times$ number of variables). \n",
     "\n",

@@ -503,7 +503,7 @@
     ],
     "source": [
     "# Perform SPOD analysis using low storage module\n",
…
     "The results are stored in the results folder defined in the parameter you specified under `params[savedir]`. We can load the results for both modes and eigenvalues, and use any other postprocessing tool that is more suitable to your application. The files are stored in `numpy` binary format `.npy`. There exists several tools to convert them in `netCDF`, `MATLAB` and several other formats that can be better suited to you specific post-processing pipeline.\n",
     "\n",
     "This tutorial was intended to help you setup your own multivariate case. You can play with the parameters we explored above to gain more insights into the capabilities of the library. You can also run on the same data the other two SPOD algorithms implemented as part of this library by simply calling:\n",
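Loading the `.npy` outputs mentioned in the context above needs nothing beyond `numpy`; the folder and file name below are hypothetical stand-ins, since the real names depend on `params['savedir']` and the library's own conventions:

```python
import numpy as np
import os, tempfile

# hypothetical stand-in for a PySPOD results folder
savedir = tempfile.mkdtemp()
modes = np.random.rand(8, 8, 3) + 1j * np.random.rand(8, 8, 3)  # fake modes
np.save(os.path.join(savedir, "modes.npy"), modes)

# load the binary file back for custom post-processing (or conversion
# to netCDF/MATLAB with third-party tools)
loaded = np.load(os.path.join(savedir, "modes.npy"))
assert np.allclose(loaded, modes)
print(loaded.shape, loaded.dtype)  # (8, 8, 3) complex128
```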
tutorials/climate/ERA20C_QBO_3D/ERA20C_QBO_3D.ipynb (4 additions, 4 deletions)

@@ -357,7 +357,7 @@
     "cell_type": "markdown",
     "metadata": {},
     "source": [
-     "Once we have loaded the data and defined the required and optional parameters, we can perform the analysis. This step is accomplished by calling the `PySPOD` constructor, `SPOD_low_storage(X=X, params=params, file_handler=False)` and the `fit` method, `SPOD_analysis.fit()`. \n",
+     "Once we have loaded the data and defined the required and optional parameters, we can perform the analysis. This step is accomplished by calling the `PySPOD` constructor, `SPOD_low_storage(data=X, params=params, file_handler=False)` and the `fit` method, `SPOD_analysis.fit()`. \n",
     "\n",
     "The `PySPOD` constructor takes `X`, that can either be a `numpy.ndarray` containing the data or the path to the data file , the parameters `params`, a parameter called `data_handler` that can be either `False` or a function to read the data, and `variables` that is the list containing the names of our variables. If, as `data_handler`, we pass `False`, then we need to load the entire matrix of data into RAM, and that must comply with the **PySPOD** input data requirements (i.e. the dimension of the data matrix must correspond to (time $\\times$ spatial dimension shape $\\times$ number of variables). \n",
     "\n",

@@ -466,7 +466,7 @@
     ],
     "source": [
     "# Perform SPOD analysis using low storage module\n",
…
     "The results are stored in the results folder defined in the parameter you specified under `params[savedir]`. You can load the results for both modes and eigenvalues, and use any other postprocessing tool that is more suitable to your application. The files are stored in `numpy` binary format `.npy`. There exists several tools to convert them in `netCDF`, `MATLAB` and several other formats that can be better suited to you specific post-processing pipeline.\n",
     "\n",
     "This tutorial was intended to help you setup your own three-dimensional case. You can play with the parameters we explored above to gain more insights into the capabilities of the library. You can also run on the same data the other two SPOD algorithms implemented as part of this library by simply calling:\n",