diff --git a/.nojekyll b/.nojekyll new file mode 100644 index 0000000000..e69de29bb2 diff --git a/README.html b/README.html new file mode 100644 index 0000000000..c5702b17a1 --- /dev/null +++ b/README.html @@ -0,0 +1,127 @@ + + + + + + + <no title> — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
See the rendered version of TOPSAIL’s documentation at this address:
> https://openshift-psap.github.io/topsail/index.html
+ + + + \ No newline at end of file diff --git a/_sources/README.rst.txt new file mode 100644 index 0000000000..44325cd023 --- /dev/null +++ b/_sources/README.rst.txt @@ -0,0 +1,5 @@ +:orphan: + +See the rendered version of TOPSAIL's documentation at this address: + +> https://openshift-psap.github.io/topsail/index.html diff --git a/_sources/contributing.rst.txt new file mode 100644 index 0000000000..e61580fd02 --- /dev/null +++ b/_sources/contributing.rst.txt @@ -0,0 +1,135 @@ +Contributing +============ + +Thanks for taking the time to contribute! + +The following is a set of guidelines for contributing to ``TOPSAIL``. +These are mostly guidelines; feel free to propose changes to this +document in a pull request. + +--- + +The primary goal of the repository is to serve as a central repository +of the PSAP team's performance and scale test automation. + +The secondary goal of the repository is to offer a toolbox for setting +up and configuring clusters, in preparation for performance and scale test execution. + + +Pull Request Guidelines +----------------------- + +- Pull Requests (PRs) need to be approved (``/approve``) and reviewed (``/lgtm``) by + PSAP team members before being merged. + +- PRs should have a proper description explaining the problem being + solved, or the new feature being introduced. + + +Review Guidelines +----------------- + +- Reviews can be performed by anyone interested in the good health of + the repository; but approval and/or ``/lgtm`` is reserved for PSAP + team members at the moment. + +- The main merging criterion is a successful test run that + executes the modified code. Because of the nature of the repository, + we can't test all the code paths for all PRs. + + - To avoid spending unnecessary AWS cloud time, the testing is not + automatically executed by Prow; it must be manually triggered.
+ + +Style Guidelines +---------------- + +YAML style +^^^^^^^^^^ + +* Align nested lists with their parent's label + +.. code-block:: yaml + + - block: + - name: ... + block: + - name: ... + +* YAML files use the ``.yml`` extension + +Ansible style +^^^^^^^^^^^^^ + +We strive to follow Ansible best practices in the different playbooks. + +The following command is executed as a GitHub Action hook on all new PRs, +to help keep a consistent code style: + +.. code-block:: shell + + ansible-lint -v --force-color -c config/ansible-lint.yml playbooks roles + +* Try to avoid using ``shell`` tasks as much as possible + + - Make sure that ``set -o pipefail;`` is part of the shell command + whenever a ``|`` is involved (``ansible-lint`` misses some of + them) + + - Redirection into a ``{{ artifact_extra_logs_dir }}`` file is a + common exception + +* Use the inline stanza for ``debug`` and ``fail`` tasks, eg: + +.. code-block:: yaml + + - name: The GFD did not label the nodes + fail: msg="The GFD did not label the nodes" + +Coding guidelines +----------------- + +* Keep the main log file clean when everything goes right, and store + all the relevant information in the ``{{ artifact_extra_logs_dir + }}`` directory, eg: + +.. code-block:: yaml + + - name: Inspect the Subscriptions status (debug) + shell: + oc describe subscriptions.operators.coreos.com/gpu-operator-certified + -n openshift-operators + > {{ artifact_extra_logs_dir }}/gpu_operator_Subscription.log + failed_when: false + +* Include troubleshooting inspection commands whenever + possible/relevant (see above for an example) + + - mark them as ``failed_when: false`` to ensure that their execution + doesn't affect the testing + - add ``(debug)`` in the task name to make it clear that the command + is not part of the proper testing. + +* Use ``ignore_errors: true`` **only** for tracking **known + failures**.
+ + - use ``failed_when: false`` to ignore the task return code + - but whenever possible, write tasks that do not fail, eg: + +.. code-block:: yaml + + oc delete --ignore-not-found=true $MY_RESOURCE + +* Try to group related modifications in a dedicated commit, and stack + commits in logical order (eg, 1/ add the role, 2/ add the toolbox script, 3/ + integrate the toolbox script in the nightly CI) + + - Commits are not squashed, so please avoid commits "fixing" another + commit of the PR. + - Hints: `git revise `_ + + * use ``git revise `` to modify an older commit (not + older than ``master`` ;-) + * use ``git revise --cut `` to split a commit into two + logical commits + * or simply use ``git commit --amend`` to modify the most recent commit diff --git a/_sources/extending/orchestration.rst.txt new file mode 100644 index 0000000000..b21d0cb223 --- /dev/null +++ b/_sources/extending/orchestration.rst.txt @@ -0,0 +1,539 @@ +Creating a New Orchestration +============================ + +You're working on a new perf&scale test project, and you want to have +it automated and running in the CI? Good! Do you already have your test +architecture in mind? And your toolbox is ready? Perfect, so we can +start building the orchestration! + +Prepare the environment +----------------------- + +To create an orchestration, go to ``projects/PROJECT_NAME/testing`` +and prepare the following boilerplate code. + +Mind that the ``PROJECT_NAME`` should be compatible with Python +packages (no ``-``) to keep things simple. + + +Prepare the ``test.py``, ``config.yaml`` and ``command_args.yaml.j2`` +""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""" + +These files are all that is mandatory to have a configurable +orchestration layer.
+ +* ``test.py`` should contain these entrypoints, for interacting with the CI: + +:: + + @entrypoint() + def prepare_ci(): + """ + Prepares the cluster and the namespace for running the tests + """ + + pass + + + @entrypoint() + def test_ci(): + """ + Runs the test from the CI + """ + + pass + + + @entrypoint() + def cleanup_cluster(mute=False): + """ + Restores the cluster to its original state + """ + # _Not_ executed in OpenShift CI cluster (running on AWS). Only required for running in bare-metal environments. + + common.cleanup_cluster() + + pass + + + @entrypoint(ignore_secret_path=True, apply_preset_from_pr_args=False) + def generate_plots_from_pr_args(): + """ + Generates the visualization reports from the PR arguments + """ + + visualize.download_and_generate_visualizations() + + export.export_artifacts(env.ARTIFACT_DIR, test_step="plot") + + + class Entrypoint: + """ + Commands for launching the CI tests + """ + + def __init__(self): + + self.prepare_ci = prepare_ci + self.test_ci = test_ci + self.cleanup_cluster_ci = cleanup_cluster + self.export_artifacts = export_artifacts + + self.generate_plots_from_pr_args = generate_plots_from_pr_args + + def main(): + # Print help rather than opening a pager + fire.core.Display = lambda lines, out: print(*lines, file=out) + + fire.Fire(Entrypoint()) + + + if __name__ == "__main__": + try: + sys.exit(main()) + except subprocess.CalledProcessError as e: + logging.error(f"Command '{e.cmd}' failed --> {e.returncode}") + sys.exit(1) + except KeyboardInterrupt: + print() # empty line after ^C + logging.error(f"Interrupted.") + sys.exit(1) + +* ``config.yaml`` should contain + +:: + + ci_presets: + # name of the presets to apply, or null if no preset + name: null + # list of names of the presets to apply, or a single name, or null if no preset + names: null + + + single: + clusters.create.type: single + + keep: + clusters.create.keep: true + clusters.create.ocp.tags.Project: PSAP/Project/... 
+ # clusters.create.ocp.tags.TicketId: + + light_cluster: + clusters.create.ocp.deploy_cluster.target: cluster_light + + light: + extends: [light_cluster] + ... + + ... + + secrets: + dir: + name: psap-ods-secret + env_key: PSAP_ODS_SECRET_PATH + # name of the file containing the properties of LDAP secrets + s3_ldap_password_file: s3_ldap.passwords + keep_cluster_password_file: get_cluster.password + brew_registry_redhat_io_token_file: brew.registry.redhat.io.token + opensearch_instances: opensearch.yaml + aws_credentials: .awscred + git_credentials: git-credentials + + clusters: + metal_profiles: + ...: ... + create: + type: single # can be: single, ocp, managed + keep: false + name_prefix: fine-tuning-ci + ocp: + # list of tags to apply to the machineset when creating the cluster + tags: + # TicketId: "..." + Project: PSAP/Project/... + deploy_cluster: + target: cluster + base_domain: psap.aws.rhperfscale.org + version: 4.15.9 + region: us-west-2 + control_plane: + type: m6a.xlarge + workers: + type: m6a.2xlarge + count: 2 + + sutest: + is_metal: false + lab: + name: null + compute: + dedicated: true + machineset: + name: workload-pods + type: m6i.2xlarge + count: null + taint: + key: only-workload-pods + value: "yes" + effect: NoSchedule + driver: + is_metal: false + compute: + dedicated: true + machineset: + name: test-pods + count: null + type: m6i.2xlarge + taint: + key: only-test-pods + value: "yes" + effect: NoSchedule + cleanup_on_exit: false + + matbench: + preset: null + workload: projects....visualizations... + prom_workload: projects....visualizations.... + config_file: plots.yaml + download: + mode: prefer_cache + url: + url_file: + # if true, copy the results downloaded by `matbench download` into the artifacts directory + save_to_artifacts: false + # directory to plot. 
Set by testing/common/visualize.py before launching the visualization + test_directory: null + lts: + generate: true + horreum: + test_name: null + opensearch: + export: + enabled: false + enabled_on_replot: false + fail_test_on_fail: true + instance: smoke + index: ... + index_prefix: "" + prom_index_suffix: -prom + regression_analyses: + enabled: false + # if the regression analyses fail, mark the test as failed + fail_test_on_regression: false + export_artifacts: + enabled: false + bucket: rhoai-cpt-artifacts + path_prefix: cpt/fine-tuning + dest: null # will be set by the export code + +* ``command_args.yml.j2`` should start with: + +:: + + {% set secrets_location = false | or_env(secrets.dir.env_key) %} + {% if not secrets_location %} + {{ ("ERROR: secrets_location must be defined (secrets.dir.name="+ secrets.dir.name|string +" or env(secrets.dir.env_key=" + secrets.dir.env_key|string + ")) ") | raise_exception }} + {% endif %} + {% set s3_ldap_password_location = secrets_location + "/" + secrets.s3_ldap_password_file %} + + # --- + + +Copy the ``clusters.sh`` and ``configure.sh`` +""""""""""""""""""""""""""""""""""""""""""""" + +These files are necessary to create clusters on +OpenShift CI (``/test rhoai-e2e``). They shouldn't be modified. + +And now, the boilerplate code is in place, and we can start building +the test orchestration. + +Create ``test_....py`` and ``prepare_....py`` +""""""""""""""""""""""""""""""""""""""""""""" + +At this step, the development of the test orchestration +starts, and you "just" have to fill the gaps :) + + +In the ``prepare_ci`` method, prepare your cluster, according to the +configuration. In the ``test_ci`` method, run your test and collect +its artifacts. In the ``cleanup_cluster_ci`` method, clean up your cluster, so +that it can be used again for another test.
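The split between these three entrypoints can be sketched with a small, dependency-free example. Mind that the helper bodies and the artifact layout below are purely illustrative (they are not TOPSAIL's actual API); a real project would call toolbox commands instead, but the ``settings.yaml`` and ``exit_code`` files are the ones the ``simple`` store parsers later look for:

```python
# Minimal sketch of the prepare/test/cleanup split (illustrative only).
import json
import pathlib
import tempfile

ARTIFACT_DIR = pathlib.Path(tempfile.mkdtemp())  # stand-in for env.ARTIFACT_DIR

def prepare_ci():
    # real code would scale up the cluster, install operators, etc.
    (ARTIFACT_DIR / "prepare.done").write_text("ok")

def test_ci():
    # run the test and store its artifacts, including the `settings.yaml`
    # and `exit_code` files expected by the parsers
    (ARTIFACT_DIR / "settings.yaml").write_text("test: demo\n")
    (ARTIFACT_DIR / "results.json").write_text(json.dumps({"finish_reason": "completed"}))
    (ARTIFACT_DIR / "exit_code").write_text("0")

def cleanup_cluster():
    # real code would delete the test namespaces, scale down the cluster, etc.
    pass

prepare_ci()
test_ci()
cleanup_cluster()
print((ARTIFACT_DIR / "exit_code").read_text())  # -> 0
```
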
+ +Start building your test orchestration +-------------------------------------- + +Once the boilerplate code is in place, we can start building the test +orchestration. TOPSAIL provides some "low level" helper modules: + +:: + + from projects.core.library import env, config, run, configure_logging, export + +as well as libraries of common orchestration bits: + +:: + + from projects.rhods.library import prepare_rhoai as prepare_rhoai_mod + from projects.gpu_operator.library import prepare_gpu_operator + from projects.matrix_benchmarking.library import visualize + + +These libraries are illustrated below. They are not formally described +at the moment. They come from project code blocks that were noticed to +be used identically across projects, so they have been moved to +library directories to be easier to reuse. + +Sharing code across projects increases the risk of unnoticed +bugs when updating the library. With this in mind, the question of +code sharing vs code duplication takes another direction, as extensive +testing is not easy in such a rapidly evolving project. + + +Core helper modules +""""""""""""""""""" + +The ``run`` module +'''''''''''''''''' + +* helper functions to run system commands, toolbox commands, and + ``from_config`` toolbox commands: + +:: + + def run(command, capture_stdout=False, capture_stderr=False, check=True, protect_shell=True, cwd=None, stdin_file=None, log_command=True) + +This method allows running a command, capturing (or not) its +stdout/stderr, checking its return code, changing its working +directory, protecting it with bash safety flags (``set -o +errexit;set -o pipefail;set -o nounset;set -o errtrace``), passing a +file as stdin, logging (or not) the command, ... + +:: + + def run_toolbox(group, command, artifact_dir_suffix=None, run_kwargs=None, mute_stdout=None, check=None, **kwargs) + +This command allows running a toolbox command. ``group, command, +kwargs`` are the CLI toolbox command arguments.
``run_kwargs`` allows +passing arguments directly to the ``run`` command described +above. ``mute_stdout`` allows muting (capturing) the stdout +text. ``check`` allows disabling the exception on error +check. ``artifact_dir_suffix`` allows appending a suffix to the +toolbox directory name (eg, to distinguish two identical calls in the +artifacts). + +:: + + def run_toolbox_from_config(group, command, prefix=None, suffix=None, show_args=None, extra=None, artifact_dir_suffix=None, mute_stdout=False, check=True, run_kwargs=None) + +This command allows running a toolbox command with the ``from_config`` +helper (see the description of the ``command_args.yaml.j2`` +file). ``prefix`` and ``suffix`` allow distinguishing commands in the +``command_args.yaml.j2`` file. ``extra`` allows passing extra +arguments that override what is in the template file. ``show_args`` +only displays the arguments that would be passed to ``run_toolbox.py``. + +* ``run_and_catch`` is a helper function for chaining multiple + functions without swallowing exceptions: + +:: + + exc = None + exc = run.run_and_catch( + exc, + run.run_toolbox, "kserve", "capture_operators_state", run_kwargs=dict(capture_stdout=True), + ) + + exc = run.run_and_catch( + exc, + run.run_toolbox, "cluster", "capture_environment", run_kwargs=dict(capture_stdout=True), + ) + + if exc: raise exc + +* helper context to run functions in parallel. If + ``exit_on_exception`` is set, the code will exit the process when an + exception is caught. Otherwise it will simply raise it. If + ``dedicated_dir`` is set, a dedicated directory, based on the + ``name`` parameter, will be created.
+ +:: + + class Parallel(object): + def __init__(self, name, exit_on_exception=True, dedicated_dir=True): + +Example: + +:: + + def prepare(): + with run.Parallel("prepare1") as parallel: + parallel.delayed(prepare_rhoai) + parallel.delayed(scale_up_sutest) + + + test_settings = config.project.get_config("tests.fine_tuning.test_settings") + with run.Parallel("prepare2") as parallel: + parallel.delayed(prepare_gpu) + parallel.delayed(prepare_namespace, test_settings) + + with run.Parallel("prepare3") as parallel: + parallel.delayed(preload_image_yyy) + parallel.delayed(preload_image_xxx) + parallel.delayed(preload_image_zzz) + + +The ``env`` module +'''''''''''''''''' + +* ``ARTIFACT_DIR`` thread-safe access to the storage directory. Prefer + using this over ``$ARTIFACT_DIR``, which isn't thread safe. + +* helper context to create a dedicated artifact directory. Based on + OpenShift CI, TOPSAIL relies on the ``ARTIFACT_DIR`` environment + variable to store its artifacts. Each toolbox command creates a new + directory named ``nnn__group__command``, which keeps the directories + ordered and easy to follow. However, when many commands are executed, + sometimes in parallel, the number of directories increases and becomes + hard to understand. This command allows creating subdirectories, to + group things logically: + +Example: + +:: + + with env.NextArtifactDir("prepare_namespace"): + set_namespace_annotations() + download_data_sources(test_settings) + +The ``config`` module +''''''''''''''''''''' + +* the ``config.project.get_config()`` helper command to + access the configuration. Uses the inline JSON format. This object + holds the main project configuration. + +* the ``config.project.set_config(, )`` helper + command to update the configuration. Sometimes, it is convenient to + store values in the configuration (eg, coming from the + command-line). Mind that this is not thread-safe (an error is raised + if this command is called in a ``run.Parallel`` context).
Mind that + this command does not allow creating new configuration fields in the + document. Only existing fields can be updated. + + +The ``projects.rhods.library.prepare_rhoai`` library module +""""""""""""""""""""""""""""""""""""""""""""""""""""""""""" + +This library helps with the deployment of RHOAI pre-builds on OpenShift. + +* ``install_servicemesh()`` installs the ServiceMesh Operator, if not + already installed in the cluster (this is a dependency of RHOAI) + +* ``uninstall_servicemesh(mute=True)`` uninstalls the ServiceMesh + Operator, if it is installed + +* ``is_rhoai_installed()`` tells if RHOAI is currently installed or + not. + +* ``install(token_file=None, force=False)`` installs RHOAI, if it is + not already installed (unless ``force`` is passed). Mind that the + current deployment code only works with the pre-builds of RHOAI, + which require a Brew ``token_file``. If the token isn't passed, it + is assumed that the cluster already has access to Brew. + + +The ``projects.gpu_operator.library.prepare_gpu_operator`` library module +""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""" + +This library helps with the deployment of the GPU stack on OpenShift. + +* ``prepare_gpu_operator()`` deploys the NFD Operator and the GPU + Operator, if they are not already installed. + +* ``wait_ready(...)`` waits for the GPU Operator stack to be deployed, + and optionally enables additional GPU Operator features: + + * ``enable_time_sharing`` enables the time-sharing capability of + the GPU Operator (configured via the ``command_args.yaml.j2`` + file). + * ``extend_metrics=True, wait_metrics=True`` enables extra metrics + to be captured by the GPU Operator DCGM component (the + "well-known" metrics set). If ``wait_metrics`` is enabled, the + automation will wait for the DCGM to start reporting these + metrics. + * ``wait_stack_deployed`` allows disabling the final wait, and + only enabling the components above.
+ +* ``cleanup_gpu_operator()`` undeploys the GPU Operator and the NFD + Operator, if they are deployed. + +* ``add_toleration(effect, key)`` adds a toleration to the GPU + Operator DaemonSet Pods. This allows the GPU Operator Pods to be + deployed on nodes with specific taints. Mind that this command + overrides any toleration previously set. + +The ``projects.local_ci.library.prepare_user_pods`` library module +"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""" + +This library helps with the execution of multi-user TOPSAIL tests. + +Multi-user tests consist of Pods running inside the cluster, +all executing a TOPSAIL command. Their initialization is synchronized +with a barrier, then they wait a configurable delay before starting +their script. When they terminate, their file artifacts are collected via an +S3 server, and stored locally for post-processing. + +* ``prepare_base_image_container(namespace)`` builds a TOPSAIL image + in a given namespace. The image must be consistent with the commit + of TOPSAIL being tested, so the ``BuildConfig`` relies on the PR + number to fetch the right commit. The ``apply_prefer_pr`` function + provides the helper code to update the configuration with the number + of the PR being tested. + +* ``apply_prefer_pr(pr_number=None)`` inspects the environment to + detect the PR number. When running locally, export + ``HOMELAB_CI=true`` and ``PULL_NUMBER=...`` for this function to + automatically detect the PR number. Mind that this function updates + the configuration file, so it cannot run inside a parallel context. + +* ``delete_istags(namespace)`` cleans up the istags used by TOPSAIL + User Pods. + +* ``rebuild_driver_image(namespace, pr_number)`` helps refresh the + image when running locally.
+ +:: + + @entrypoint() + def rebuild_driver_image(pr_number): + namespace = config.project.get_config("base_image.namespace") + prepare_user_pods.rebuild_driver_image(namespace, pr_number) + +* ``cluster_scale_up(user_count)`` scales up the cluster with the + right number of nodes (when not running in a bare-metal cluster). + +* ``prepare_user_pods(user_count)`` prepares the cluster for running a + multi-user scale test. Deploys the dependency tools (minio, redis), + builds the image, prepares the ServiceAccount that TOPSAIL will use, + prepares the secrets that TOPSAIL will have access to ... + +* ``cleanup_cluster()`` cleans up the cluster by deleting the User + Pod namespace. + +The ``projects.matrix_benchmarking.library.visualize`` library module +""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""" + +This module helps with the post-processing of TOPSAIL results. + +* ``prepare_matbench()`` is called from the ContainerFile. It + installs the ``pip`` dependencies of MatrixBenchmarking. + +* ``download_and_generate_visualizations(results_dirname)`` is called + from the CIs, when replotting. It downloads the test results and runs the + post-processing steps against them. + +* ``generate_from_dir(results_dirname, generate_lts=None)`` is the + main entrypoint of this library. It accepts a directory as argument, + and runs the post-processing steps against it. The expected + configuration should be further documented ... diff --git a/_sources/extending/toolbox.rst.txt new file mode 100644 index 0000000000..b07943debb --- /dev/null +++ b/_sources/extending/toolbox.rst.txt @@ -0,0 +1,152 @@ +How roles are organized +----------------------- + +Roles in TOPSAIL are standard Ansible roles that are wired into the +``run_toolbox.py`` command line interface. + +In TOPSAIL, the roles are organized by projects, in the +``projects/PROJECT_NAME/roles`` directories. Their structure follows +Ansible standard role guidelines: + +.. code:: bash + + toolbox/ + ├── .py + └── / + ├── defaults + │ └── main.yml + ├── files + │ └── .keep + ├── README.md + ├── tasks + │ └── main.yml + ├── templates + │ └── example.yml.j2 + └── vars + └── main.yml + +How default parameters are generated +------------------------------------ + +TOPSAIL automatically generates all the default parameters in the +``/defaults/main.yml`` file, to make sure all the role +parameters are consistent with what the CLI supports +(``run_toolbox.py``). The file ``/defaults/main.yml`` is +rendered automatically when executing this command from the project's root folder: + +:: + + ./run_toolbox.py repo generate_ansible_default_settings + + +Including new roles in TOPSAIL's CLI +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +1. Creating a Python class for the new role +""""""""""""""""""""""""""""""""""""""""""" + +Create a class file to reference the new role and define the default +parameters that can be referenced from the CLI as parameters. + +In the project's ``toolbox`` directory, create or edit the +``.py`` file with the following code: + +:: + + import sys + + from projects.core.library.ansible_toolbox import ( + RunAnsibleRole, AnsibleRole, + AnsibleMappedParams, AnsibleConstant, + AnsibleSkipConfigGeneration + ) + + class : + """ + Commands relating to + """ + + @AnsibleRole("") + @AnsibleMappedParams + def run(self, + , + , + ): + """ + Run + + Args: + : First parameter + : Nth parameter + """ + + # if needed, perform simple parameters validation here + + return RunAnsibleRole(locals()) + +Description of the decorators +''''''''''''''''''''''''''''' + +* ``@AnsibleRole(role_name)`` tells which Ansible role implements the command +* ``@AnsibleMappedParams`` specifies that the Python arguments should + be mapped into the Ansible arguments (that's the most common) +* ``@AnsibleSkipConfigGeneration`` specifies that no configuration + should be generated for this command (usually, it means that another + command already specifies the arguments,
and this one reuses the + same role with different settings) +* ``@AnsibleConstant(description, name, value)`` specifies an Ansible + argument without Python equivalent. Can be used to pass flags + embedded in the function name. Eg: ``dump_prometheus`` and + ``reset_prometheus``. + + + + +2. Including the new toolbox class in the Toolbox +""""""""""""""""""""""""""""""""""""""""""""""""" + +This step is not necessary anymore. The ``run_toolbox.py`` command +from the root directory loads the toolbox with this generic call: + +:: + + projects.core.library.ansible_toolbox.Toolbox() + +This class traverses all the ``projects/*/toolbox/*.py`` Python files, +and loads the class with the titled name of the file (simplified code): + +:: + + for toolbox_file in (TOPSAIL_DIR / "projects").glob("*/toolbox/*.py"): + toolbox_module = __import__(toolbox_file) + toolbox_name = name of the file, without extension + toolbox_class = getattr(toolbox_module, toolbox_name.title()) + + +3. Rendering the default parameters +""""""""""""""""""""""""""""""""""" + +Once the new toolbox command is created, the role class added to the +project's ``toolbox`` directory and the CLI entrypoint included in the +``Toolbox`` class, it is possible to render the role default parameters +from the ``run_toolbox.py`` CLI. To render the default parameters for +all roles, execute: + +:: + + ./run_toolbox.py repo generate_ansible_default_settings + + +The TOPSAIL GitHub repository will refuse to merge the PR if this command +has not been called after the Python entrypoint has been modified. + +4. 
Executing the new toolbox command +"""""""""""""""""""""""""""""""""""" + +Once the role is in the correct folder and the ``Toolbox`` entrypoints +are up to date, this new role can be executed directly from ``run_toolbox.py`` +like: + +:: + + ./run_toolbox.py diff --git a/_sources/extending/visualization.rst.txt new file mode 100644 index 0000000000..3805562c3a --- /dev/null +++ b/_sources/extending/visualization.rst.txt @@ -0,0 +1,686 @@ +Creating a new visualization module +=================================== + +TOPSAIL post-processing/visualization relies on MatrixBenchmarking +modules. The post-processing steps are configured within the +``matbench`` field of the configuration file: + +:: + + matbench: + preset: null + workload: projects.fine_tuning.visualizations.fine_tuning + config_file: plots.yaml + download: + mode: prefer_cache + url: + url_file: + # if true, copy the results downloaded by `matbench download` into the artifacts directory + save_to_artifacts: false + # directory to plot. Set by testing/common/visualize.py before launching the visualization + test_directory: null + lts: + generate: true + horreum: + test_name: null + opensearch: + export: + enabled: false + enabled_on_replot: false + fail_test_on_fail: true + instance: smoke + index: topsail-fine-tuning + index_prefix: "" + prom_index_suffix: -prom + regression_analyses: + enabled: false + # if the regression analyses fail, mark the test as failed + fail_test_on_regression: false + +The visualization modules are split into several sub-modules, which are +described below. + +The ``store`` module +-------------------- + +The ``store`` module is built as an extension of +``projects.matrix_benchmarking.visualizations.helpers.store``, which +defines the ``store`` architecture usually used in TOPSAIL.
+ +:: + + local_store = helpers_store.BaseStore( + cache_filename=CACHE_FILENAME, important_files=IMPORTANT_FILES, + + artifact_dirnames=parsers.artifact_dirnames, + artifact_paths=parsers.artifact_paths, + + parse_always=parsers.parse_always, + parse_once=parsers.parse_once, + + # --- + + lts_payload_model=models_lts.Payload, + generate_lts_payload=lts_parser.generate_lts_payload, + + # --- + + models_kpis=models_kpi.KPIs, + get_kpi_labels=lts_parser.get_kpi_labels, + ) + +The upper part defines the core of the ``store`` module. It is +mandatory. + +The lower parts define the LTS payload and KPIs. This part is +optional, and only required to push KPIs to OpenSearch. + +The store parsers +~~~~~~~~~~~~~~~~~ + +The goal of the ``store.parsers`` module is to turn TOPSAIL test +artifacts directories into a Python object, which can be plotted or +turned into LTS KPIs. + +The parsers of the main workload components rely on the ``simple`` +store. + +:: + + store_simple.register_custom_parse_results(local_store.parse_directory) + +The ``simple`` store searches for a ``settings.yaml`` file and an +``exit_code`` file. + +When these two files are found, the parsing of a test begins, and the +current directory is considered a test root directory. + +The parsing is done this way: + +:: + + if exists(CACHE_FILE) and not MATBENCH_STORE_IGNORE_CACHE == true: + results = reload(CACHE_FILE) + else: + results = parse_once() + + parse_always(results) + results.lts = parse_lts(results) + return results + +This organization improves the flexibility of the parsers, with respect to what +takes time (should be in ``parse_once``) vs what depends on the +current execution environment (should be in ``parse_always``). + +Mind that if you are working on the parsers, you should disable the +cache, or your modifications will not be taken into account.
+ +:: + + export MATBENCH_STORE_IGNORE_CACHE=true + +You can re-enable it afterwards with: + +:: + + unset MATBENCH_STORE_IGNORE_CACHE + +The result of the main parser is a ``types.SimpleNamespace`` +object. By choice, it is weakly (on the fly) defined, so the +developers must take care to properly propagate any modification of +the structure. We tested having a Pydantic model, but that turned out +to be too cumbersome to maintain. Could be retested. + +The important part of the parser is triggered by the execution of this +method: + +:: + + def parse_once(results, dirname): + results.test_config = helpers_store_parsers.parse_test_config(dirname) + results.test_uuid = helpers_store_parsers.parse_test_uuid(dirname) + ... + +This ``parse_once`` method is in charge of transforming a directory +(``dirname``) into a Python object (``results``). The parsing heavily +relies on ``obj = types.SimpleNamespace()`` objects, which are +dictionary-like objects whose fields can be accessed as attributes. The inner +dictionary can be accessed with ``obj.__dict__`` for programmatic +traversal. + +The ``parse_once`` method should delegate the parsing to submethods, +which typically look like this (safety checks have been removed for +readability): + + +:: + + def parse_once(results, dirname): + ... + results.finish_reason = _parse_finish_reason(dirname) + .... + + @helpers_store_parsers.ignore_file_not_found + def _parse_finish_reason(dirname): + finish_reason = types.SimpleNamespace() + finish_reason.exit_code = None + + with open(register_important_file(dirname, artifact_paths.FINE_TUNING_RUN_FINE_TUNING_DIR / "artifacts/pod.json")) as f: + pod_def = json.load(f) + + container_terminated_state = pod_def["status"]["containerStatuses"][0]["state"]["terminated"] + finish_reason.exit_code = container_terminated_state["exitCode"] + + return finish_reason + +Note that: + +* for efficiency, JSON parsing should be preferred to YAML parsing, + which is much slower.
+* for grep-ability, the ``results.xxx`` field name should match the
+  variable defined in the method (``xxx = types.SimpleNamespace()``)
+* the ``ignore_file_not_found`` decorator will catch
+  ``FileNotFoundError`` exceptions and return ``None`` instead. This
+  makes the code resilient against not-generated artifacts. This
+  happens "often" while performing investigations in TOPSAIL, because
+  the test failed in an unexpected way. The visualization is expected
+  to perform as well as possible when this happens (graceful
+  degradation), so that the rest of the artifacts can be exploited to
+  understand what happened and caused the failure.
+
+The difference between these two methods:
+
+::
+
+    def parse_once(results, dirname): ...
+
+    def parse_always(results, dirname, import_settings): ...
+
+is that ``parse_once`` is called once, after which the results are
+saved into a cache file and reloaded from there, unless the
+environment variable ``MATBENCH_STORE_IGNORE_CACHE=y`` is set.
+
+The ``parse_always`` method is always called, even after reloading the
+cache file. This can be used to parse information about the
+environment in which the post-processing is executed.
+
+::
+
+    artifact_dirnames = types.SimpleNamespace()
+    artifact_dirnames.CLUSTER_CAPTURE_ENV_DIR = "*__cluster__capture_environment"
+    artifact_dirnames.FINE_TUNING_RUN_FINE_TUNING_DIR = "*__fine_tuning__run_fine_tuning_job"
+    artifact_dirnames.RHODS_CAPTURE_STATE = "*__rhods__capture_state"
+    artifact_paths = types.SimpleNamespace() # will be dynamically populated
+
+This block is used to look up the directories where the files to be
+parsed are stored (the prefix ``nnn__`` can change easily, so it
+shouldn't be hardcoded).
+
+During the initialization of the store module, the directories listed
+in ``artifact_dirnames`` are resolved and stored in the
+``artifact_paths`` namespace. They can be used in the parser with,
+e.g.: ``artifact_paths.FINE_TUNING_RUN_FINE_TUNING_DIR /
+"artifacts/pod.log"``.
+
+If the directory glob does not resolve, its value is ``None``.
+
+::
+
+    IMPORTANT_FILES = [
+        ".uuid",
+        "config.yaml",
+        f"{artifact_dirnames.CLUSTER_CAPTURE_ENV_DIR}/_ansible.log",
+        f"{artifact_dirnames.CLUSTER_CAPTURE_ENV_DIR}/nodes.json",
+        f"{artifact_dirnames.CLUSTER_CAPTURE_ENV_DIR}/ocp_version.yml",
+        f"{artifact_dirnames.FINE_TUNING_RUN_FINE_TUNING_DIR}/src/config_final.json",
+        f"{artifact_dirnames.FINE_TUNING_RUN_FINE_TUNING_DIR}/artifacts/pod.log",
+        f"{artifact_dirnames.FINE_TUNING_RUN_FINE_TUNING_DIR}/artifacts/pod.json",
+        f"{artifact_dirnames.FINE_TUNING_RUN_FINE_TUNING_DIR}/_ansible.play.yaml",
+        f"{artifact_dirnames.RHODS_CAPTURE_STATE}/rhods.createdAt",
+        f"{artifact_dirnames.RHODS_CAPTURE_STATE}/rhods.version",
+    ]
+
+
+This block defines the files important for the parsing. They are
+"important" and not "mandatory", as the parsing should be able to
+proceed even if some of the files are missing.
+
+The list of "important files" is used when downloading results for
+re-processing. The download command can either look up the cache file,
+or download all the important files. A warning is issued during the
+parsing if a file opened with ``register_important_file`` is not part
+of the important files list.
+
+The ``store`` and ``models`` LTS and KPI modules
+------------------------------------------------
+
+The Long-Term Storage (LTS) payload and the Key Performance Indicators
+(KPIs) are TOPSAIL/MatrixBenchmarking features for Continuous
+Performance Testing (CPT).
+
+* The LTS payload is a "complex" object, with ``metadata``,
+  ``results`` and ``kpis`` fields. The ``metadata`` and ``results``
+  fields are defined with Pydantic models, which enforce their
+  structure. This was the first attempt of TOPSAIL/MatrixBenchmarking
+  to go towards long-term stability of the test results and metadata.
+  This attempt has not been convincing, but it is still part of the
+  pipeline for historical reasons. Any metadata or result can be
+  stored in these two objects, provided that you correctly add the
+  fields in the models.
+* The KPIs are our current working solution for continuous performance
+  testing. A KPI is a simple object, which consists of a value, a help
+  text, a timestamp, a unit, and a set of labels. The KPIs follow the
+  OpenMetrics idea.
+
+::
+
+    # HELP kserve_container_cpu_usage_max Max CPU usage of the Kserve container | container_cpu_usage_seconds_total
+    # UNIT kserve_container_cpu_usage_max cores
+    kserve_container_cpu_usage_max{instance_type="g5.2xlarge", accelerator_name="NVIDIA-A10G", ocp_version="4.16.0-rc.6", rhoai_version="2.13.0-rc1+2024-09-02", model_name="flan-t5-small", ...} 1.964734477279039
+
+Currently, the KPIs are part of the LTS payload, and the labels are
+duplicated for each of the KPIs. This design will be reconsidered in
+the near future.
+
+Definition of KPI labels and values
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The KPIs are a set of performance indicators and labels.
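To make the OpenMetrics structure above concrete, here is a small hypothetical sketch (not part of TOPSAIL) of how a KPI's value, help text, unit, and labels could be rendered into that exposition format; the ``render_kpi`` helper name is illustrative only:

```python
# Hypothetical helper, for illustration only: a KPI is a value plus
# a help text, a unit, and a set of labels.
def render_kpi(name, value, help_text, unit, labels):
    # labels are rendered as key="value" pairs inside the braces
    label_str = ", ".join(f'{key}="{val}"' for key, val in labels.items())
    return (f"# HELP {name} {help_text}\n"
            f"# UNIT {name} {unit}\n"
            f"{name}{{{label_str}}} {value}")

print(render_kpi("kserve_container_cpu_usage_max", 1.964734477279039,
                 "Max CPU usage of the Kserve container", "cores",
                 {"instance_type": "g5.2xlarge", "model_name": "flan-t5-small"}))
```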
+
+The KPIs are defined by functions which extract the KPI value by
+inspecting the LTS payload:
+
+::
+
+    @matbench_models.HigherBetter
+    @matbench_models.KPIMetadata(help="Number of dataset tokens processed per seconds per GPU", unit="tokens/s")
+    def dataset_tokens_per_second_per_gpu(lts_payload):
+        return lts_payload.results.dataset_tokens_per_second_per_gpu
+
+The name of the function is the name of the KPI, and the annotations
+define the metadata and some formatting properties:
+
+::
+
+    # mandatory
+    @matbench_models.KPIMetadata(help="Number of train tokens processed per GPU per seconds", unit="tokens/s")
+
+    # one of these two is mandatory
+    @matbench_models.LowerBetter
+    # or
+    @matbench_models.HigherBetter
+
+    # ignore this KPI in the regression analysis
+    @matbench_models.IgnoredForRegression
+
+    # simple value formatter
+    @matbench_models.Format("{:.2f}")
+
+    # formatter with a divisor (and a new unit)
+    @matbench_models.FormatDivisor(1024, unit="GB", format="{:.2f}")
+
+The KPI labels are defined via a Pydantic model:
+
+::
+
+    KPI_SETTINGS_VERSION = "1.0"
+    class Settings(matbench_models.ExclusiveModel):
+        kpi_settings_version: str
+        ocp_version: matbench_models.SemVer
+        rhoai_version: matbench_models.SemVer
+        instance_type: str
+
+        accelerator_type: str
+        accelerator_count: int
+
+        model_name: str
+        tuning_method: str
+        per_device_train_batch_size: int
+        batch_size: int
+        max_seq_length: int
+        container_image: str
+
+        replicas: int
+        accelerators_per_replica: int
+
+        lora_rank: Optional[int]
+        lora_dropout: Optional[float]
+        lora_alpha: Optional[int]
+        lora_modules: Optional[str]
+
+        ci_engine: str
+        run_id: str
+        test_path: str
+        urls: Optional[dict[str, str]]
+
+So eventually, the KPIs are the combination of the generic part
+(``matbench_models.KPI``) and project-specific labels (``Settings``):
+
+::
+
+    class KPI(matbench_models.KPI, Settings): pass
+    KPIs = matbench_models.getKPIsModel("KPIs", __name__, kpi.KPIs, KPI)
+
+
+Definition of the LTS payload
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The LTS payload was the original form of the document saved for
+continuous performance testing. KPIs have replaced it in this
+endeavor, but in the current state of the project, the LTS payload
+includes the KPIs. The LTS payload is the object actually sent to the
+OpenSearch database.
+
+The LTS payload is composed of three objects:
+
+- the metadata (replaced by the KPI labels)
+- the results (replaced by the KPI values)
+- the KPIs
+
+::
+
+    LTS_SCHEMA_VERSION = "1.0"
+    class Metadata(matbench_models.Metadata):
+        lts_schema_version: str
+        settings: Settings
+
+        presets: List[str]
+        config: str
+        ocp_version: matbench_models.SemVer
+
+    class Results(matbench_models.ExclusiveModel):
+        train_tokens_per_second: float
+        dataset_tokens_per_second: float
+        gpu_hours_per_million_tokens: float
+        dataset_tokens_per_second_per_gpu: float
+        train_tokens_per_gpu_per_second: float
+        train_samples_per_second: float
+        train_runtime: float
+        train_steps_per_second: float
+        avg_tokens_per_sample: float
+
+    class Payload(matbench_models.ExclusiveModel):
+        metadata: Metadata
+        results: Results
+        kpis: KPIs
+
+Generation of the LTS payload
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The generation of the LTS payload is done after the parsing of the
+main artifacts.
+
+::
+
+    def generate_lts_payload(results, import_settings):
+        lts_payload = types.SimpleNamespace()
+
+        lts_payload.metadata = generate_lts_metadata(results, import_settings)
+        lts_payload.results = generate_lts_results(results)
+        # lts_payload.kpis is generated in the helper store
+
+        return lts_payload
+
+On purpose, the parser does *not* use the Pydantic model when creating
+the LTS payload. The reason is that the Pydantic model is strict: if a
+field is missing, the object will not be created and an exception will
+be raised. When TOPSAIL is used for running performance investigations
+(in particular scale tests), we do not want this, because the test
+might terminate with some artifacts missing. Hence, the parsing will
+be incomplete, and we do *not* want that to abort the visualization
+process.
+
+However, when running in continuous performance testing mode, we do
+want to guarantee that everything is correctly populated.
+
+So TOPSAIL will run the parsing twice. First, without checking the LTS
+conformity:
+
+::
+
+    matbench parse \
+        --output-matrix='.../internal_matrix.json' \
+        --pretty='True' \
+        --results-dirname='...' \
+        --workload='projects.kserve.visualizations.kserve-llm'
+
+Then, when LTS generation is enabled, with the LTS checkup:
+
+::
+
+    matbench parse \
+        --output-lts='.../lts_payload.json' \
+        --pretty='True' \
+        --results-dirname='...' \
+        --workload='projects.kserve.visualizations.kserve-llm'
+
+This step (which reloads from the cache file) will be recorded as a
+failure if the parsing is incomplete.
+
+
+Generation of the KPI values
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The KPI values are generated in two steps:
+
+First, the ``KPIs`` dictionary is populated when the ``KPIMetadata``
+decorator is applied to a function (``function name --> dict with the
+function, metadata, format, etc``)
+
+::
+
+    KPIs = {} # populated by the @matbench_models.KPIMetadata decorator
+    # ...
+    @matbench_models.KPIMetadata(help="Number of train tokens processed per seconds", unit="tokens/s")
+    def train_tokens_per_second(lts_payload):
+        return lts_payload.results.train_tokens_per_second
+
+Second, when the LTS payload is generated via the ``helpers_store``
+
+::
+
+    import projects.matrix_benchmarking.visualizations.helpers.store as helpers_store
+
+the LTS payload is passed to the KPI function, and the full KPI is
+generated.
+
+The ``plotting`` visualization module
+-------------------------------------
+
+The ``plotting`` module contains two kinds of classes: the "actual"
+plotting classes, which generate Plotly plots, and the report classes,
+which generate HTML pages, based on Plotly's Dash framework.
+
+The ``plotting`` plot classes
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The ``plotting`` plot classes generate Plotly plots. They receive a
+set of parameters about what should be plotted:
+
+::
+
+    def do_plot(self, ordered_vars, settings, setting_lists, variables, cfg):
+        ...
+
+and they return a Plotly figure, and optionally some text to write
+below the plot:
+
+::
+
+    return fig, msg
+
+The parameters are mostly useful when multiple experiments have been
+captured:
+
+- ``setting_lists`` and ``settings`` should not be touched. They
+  should be passed to ``common.Matrix.all_records``, which will return
+  a filtered list of all the entries to include in the plot.
+
+::
+
+    for entry in common.Matrix.all_records(settings, setting_lists):
+        # extract plot data from entry
+        pass
+
+Some plotting classes may be written to display the results of only
+one experiment. A fail-safe exit can be written this way:
+
+::
+
+    if common.Matrix.count_records(settings, setting_lists) != 1:
+        return {}, "ERROR: only one experiment must be selected"
+
+- the ``variables`` dictionary tells which settings have multiple
+  values. E.g., we may have 6 experiments, all with
+  ``model_name=llama3``, but with ``virtual_users=[4, 16, 32]`` and
+  ``deployment_type=[raw, knative]``. In this case, the
+  ``virtual_users`` and ``deployment_type`` will be listed in the
+  ``variables``. This is useful to give a name to each entry. E.g.,
+  here, ``entry.get_name(variables)`` may return ``virtual_users=16,
+  deployment_type=raw``.
+
+- the ``ordered_vars`` list tells the preferred ordering for
+  processing the experiments. With the example above and
+  ``ordered_vars=[virtual_users, deployment_type]``, we may want to
+  use the ``virtual_users`` setting as legend. With
+  ``ordered_vars=[deployment_type, virtual_users]``, we may want to
+  use the ``deployment_type`` instead. This gives flexibility in the
+  way the plots are rendered. This order can be set in the GUI, or via
+  the reporting calls.
+
+Note that using these parameters is optional. They make no sense when
+only one experiment should be plotted, and ``ordered_vars`` is useful
+only when using the GUI, or when generating reports. They help the
+generic processing of the results.
+
+- the ``cfg`` dictionary provides some dynamic configuration flags to
+  perform the visualization. They can be passed either via the GUI, or
+  by the report classes (e.g., to highlight a particular aspect of the
+  plot).
+
+
+Guideline for writing the plotting classes
+""""""""""""""""""""""""""""""""""""""""""
+
+Writing a plotting class is often messy and dirty, with a lot of
+``if`` this ``else`` that. With Plotly's initial framework
+(``plotly.graph_objs``), it was easy and tempting to mix the data
+preparation (traversing the data structures) with the data
+visualization (adding elements like lines to the plot), and do both
+parts in the same loops.
+
+Plotly Express (``plotly.express``) introduced a new way to generate
+the plots, based on Pandas DataFrames:
+
+::
+
+    df = pd.DataFrame(generateThroughputData(entries, variables, ordered_vars, cfg__model_name))
+    fig = px.line(df, hover_data=df.columns,
+                  x="throughput", y="tpot_mean", color="model_testname", text="test_name",)
+
+This pattern, where the first phase shapes the data to plot into a
+DataFrame, and the second phase turns the DataFrame into a figure, is
+the preferred way to organize the code of the plotting classes.
+
+The ``plotting`` reports
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+
+The report classes are similar to the plotting classes, except that
+they generate ... reports, instead of plots (!).
+
+A report is an HTML document, based on the Dash framework HTML tags
+(that is, Python objects):
+
+::
+
+    args = ordered_vars, settings, setting_lists, variables, cfg
+
+    header += [html.H1("Latency per token during the load test")]
+
+    header += Plot_and_Text(f"Latency details", args)
+    header += html.Br()
+    header += html.Br()
+
+    header += Plot_and_Text(f"Latency distribution", args)
+
+    header += html.Br()
+    header += html.Br()
+
+The configuration dictionary, mentioned above, can be used to generate
+different flavors of the plot:
+
+::
+
+    header += Plot_and_Text(f"Latency distribution", set_config(dict(box_plot=False, show_text=False), args))
+
+    for entry in common.Matrix.all_records(settings, setting_lists):
+        header += [html.H2(entry.get_name(reversed(sorted(set(list(variables.keys()) + ['model_name'])))))]
+        header += Plot_and_Text(f"Latency details", set_config(dict(entry=entry), args))
+
+Defining the plots and reports to generate
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When TOPSAIL has successfully run the parsing step, it calls the
+``visualization`` component with a predefined list of reports
+(preferred) and plots (not recommended) to generate. This is stored in
+``data/plots.yaml``:
+
+::
+
+    visualize:
+    - id: llm_test
+      generate:
+      - "report: Error report"
+      - "report: Latency per token"
+      - "report: Throughput"
+
+The ``analyze`` regression analysis module
+------------------------------------------
+
+The last part of TOPSAIL/MatrixBenchmarking post-processing is the
+automated regression analysis. The workflow required to enable
+performance analyses will be described in the orchestration
+section. In the workload module, only a few keys need to be defined.
+
+
+::
+
+    # the setting (kpi labels) keys against which the historical regression should be performed
+    COMPARISON_KEYS = ["rhoai_version"]
+
+The setting keys listed in ``COMPARISON_KEYS`` will be used to
+distinguish the entries to consider as "history" for a given test
+from everything else. In this example, we see that we compare against
+historical OpenShift AI versions.
+
+::
+
+    COMPARISON_KEYS = ["rhoai_version", "image_tag"]
+
+Here, we compare against the historical RHOAI version and image tag.
+
+::
+
+    # the setting (kpi labels) keys that should be ignored when searching for historical results
+    IGNORED_KEYS = ["runtime_image", "ocp_version"]
+
+Then we define the settings to ignore when searching for historical
+records. Here, we ignore the runtime image name and the OpenShift
+version.
+
+::
+
+    # the setting (kpi labels) keys *preferred* for sorting the entries in the regression report
+    SORTING_KEYS = ["model_name", "virtual_users"]
+
+Finally, for readability purposes, we define how the entries should be
+sorted, so that the tables have a consistent ordering.
+
+::
+
+    IGNORED_ENTRIES = {
+        "virtual_users": [4, 8, 32, 128]
+    }
+
+Lastly, we can define some setting values to ignore while traversing
+the entries that have been tested.
diff --git a/_sources/index.rst.txt b/_sources/index.rst.txt
new file mode 100644
index 0000000000..d6738d6b63
--- /dev/null
+++ b/_sources/index.rst.txt
@@ -0,0 +1,47 @@
+=================================================
+Red Hat PSAP TOPSAIL test orchestration framework
+=================================================
+
+.. toctree::
+   :maxdepth: 3
+   :caption: General
+
+   intro
+   contributing
+
+
+.. _understanding_topsail:
+
+.. toctree::
+   :maxdepth: 3
+   :caption: Understanding The Architecture
+
+   understanding/orchestration
+   understanding/toolbox
+   understanding/visualization
+
+.. _extending_topsail:
+
+.. toctree::
+   :maxdepth: 3
+   :caption: Extending The Architecture
+
+   extending/orchestration
+   extending/toolbox
+   extending/visualization
+
+.. _topsail_toolbox:
+
+.. toctree::
+   :maxdepth: 4
+   :caption: TOPSAIL's Toolbox
+
+   toolbox.generated/index
+
+.. _topsail_orchetrations:
+
+.. toctree::
+   :maxdepth: 3
+   :caption: Test Orchestrations
+
+Documentation generated on |today| from |release|.
diff --git a/_sources/intro.rst.txt b/_sources/intro.rst.txt
new file mode 100644
index 0000000000..6b2b3ec68c
--- /dev/null
+++ b/_sources/intro.rst.txt
@@ -0,0 +1 @@
+.. include:: ../README.rst
\ No newline at end of file
diff --git a/_sources/toolbox.generated/Busy_Cluster.cleanup.rst.txt b/_sources/toolbox.generated/Busy_Cluster.cleanup.rst.txt
new file mode 100644
index 0000000000..5971a564c5
--- /dev/null
+++ b/_sources/toolbox.generated/Busy_Cluster.cleanup.rst.txt
@@ -0,0 +1,33 @@
+:orphan:
+
+..
+    _Auto-generated file, do not edit manually ...
+    _Toolbox generate command: repo generate_toolbox_rst_documentation
+    _ Source component: Busy_Cluster.cleanup
+
+
+busy_cluster cleanup
+====================
+
+Cleans up namespaces to make a cluster un-busy
+
+
+
+
+Parameters
+----------
+
+
+``namespace_label_key``
+
+* The label key to use to locate the namespaces to cleanup
+
+* default value: ``busy-cluster.topsail``
+
+
+``namespace_label_value``
+
+* The label value to use to locate the namespaces to cleanup
+
+* default value: ``yes``
+
diff --git a/_sources/toolbox.generated/Busy_Cluster.create_configmaps.rst.txt b/_sources/toolbox.generated/Busy_Cluster.create_configmaps.rst.txt
new file mode 100644
index 0000000000..dc592ecc10
--- /dev/null
+++ b/_sources/toolbox.generated/Busy_Cluster.create_configmaps.rst.txt
@@ -0,0 +1,78 @@
+:orphan:
+
+..
+    _Auto-generated file, do not edit manually ...
+    _Toolbox generate command: repo generate_toolbox_rst_documentation
+    _ Source component: Busy_Cluster.create_configmaps
+
+
+busy_cluster create_configmaps
+==============================
+
+Creates configmaps and secrets to make a cluster busy
+
+
+
+
+Parameters
+----------
+
+
+``namespace_label_key``
+
+* The label key to use to locate the namespaces to populate
+
+* default value: ``busy-cluster.topsail``
+
+
+``namespace_label_value``
+
+* The label value to use to locate the namespaces to populate
+
+* default value: ``yes``
+
+
+``prefix``
+
+* Prefix to give to the configmaps/secrets to create
+
+* default value: ``busy``
+
+
+``count``
+
+* Number of configmaps/secrets to create
+
+* default value: ``10``
+
+
+``labels``
+
+* Dict of the key/value labels to set for the configmap/secrets
+
+
+``as_secrets``
+
+* If True, creates secrets instead of configmaps
+
+
+``entries``
+
+* Number of entries to create
+
+* default value: ``10``
+
+
+``entry_values_length``
+
+* Length of an entry value
+
+* default value: ``1024``
+
+
+``entry_keys_prefix``
+
+* The prefix to use to create the entry keys
+
+* default value: ``entry-``
+
diff --git a/_sources/toolbox.generated/Busy_Cluster.create_deployments.rst.txt b/_sources/toolbox.generated/Busy_Cluster.create_deployments.rst.txt
new file mode 100644
index 0000000000..b5383bf3df
--- /dev/null
+++ b/_sources/toolbox.generated/Busy_Cluster.create_deployments.rst.txt
@@ -0,0 +1,76 @@
+:orphan:
+
+..
+    _Auto-generated file, do not edit manually ...
+    _Toolbox generate command: repo generate_toolbox_rst_documentation
+    _ Source component: Busy_Cluster.create_deployments
+
+
+busy_cluster create_deployments
+===============================
+
+Creates deployments to make a cluster busy
+
+
+
+
+Parameters
+----------
+
+
+``namespace_label_key``
+
+* The label key to use to locate the namespaces to populate
+
+* default value: ``busy-cluster.topsail``
+
+
+``namespace_label_value``
+
+* The label value to use to locate the namespaces to populate
+
+* default value: ``yes``
+
+
+``prefix``
+
+* Prefix to give to the deployments to create
+
+* default value: ``busy``
+
+
+``count``
+
+* Number of deployments to create
+
+* default value: ``1``
+
+
+``labels``
+
+* Dict of the key/value labels to set for the deployments
+
+
+``replicas``
+
+* Number of replicas to set for the deployments
+
+* default value: ``1``
+
+
+``services``
+
+* Number of services to create for each of the deployments
+
+* default value: ``1``
+
+
+``image_pull_back_off``
+
+* If True, makes the containers' image pull fail.
+
+
+``crash_loop_back_off``
+
+* If True, makes the containers fail. If an integer value, wait this many seconds before failing.
+
diff --git a/_sources/toolbox.generated/Busy_Cluster.create_jobs.rst.txt b/_sources/toolbox.generated/Busy_Cluster.create_jobs.rst.txt
new file mode 100644
index 0000000000..10db1e0791
--- /dev/null
+++ b/_sources/toolbox.generated/Busy_Cluster.create_jobs.rst.txt
@@ -0,0 +1,66 @@
+:orphan:
+
+..
+    _Auto-generated file, do not edit manually ...
+    _Toolbox generate command: repo generate_toolbox_rst_documentation
+    _ Source component: Busy_Cluster.create_jobs
+
+
+busy_cluster create_jobs
+========================
+
+Creates jobs to make a cluster busy
+
+
+
+
+Parameters
+----------
+
+
+``namespace_label_key``
+
+* The label key to use to locate the namespaces to populate
+
+* default value: ``busy-cluster.topsail``
+
+
+``namespace_label_value``
+
+* The label value to use to locate the namespaces to populate
+
+* default value: ``yes``
+
+
+``prefix``
+
+* Prefix to give to the jobs to create
+
+* default value: ``busy``
+
+
+``count``
+
+* Number of jobs to create
+
+* default value: ``10``
+
+
+``labels``
+
+* Dict of the key/value labels to set for the jobs
+
+
+``replicas``
+
+* The number of parallel tasks to execute
+
+* default value: ``2``
+
+
+``runtime``
+
+* The runtime of the Job Pods in seconds, or ``inf``
+
+* default value: ``120``
+
diff --git a/_sources/toolbox.generated/Busy_Cluster.create_namespaces.rst.txt b/_sources/toolbox.generated/Busy_Cluster.create_namespaces.rst.txt
new file mode 100644
index 0000000000..3e3c360e6b
--- /dev/null
+++ b/_sources/toolbox.generated/Busy_Cluster.create_namespaces.rst.txt
@@ -0,0 +1,38 @@
+:orphan:
+
+..
+    _Auto-generated file, do not edit manually ...
+ _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Busy_Cluster.create_namespaces + + +busy_cluster create_namespaces +============================== + +Creates namespaces to make a cluster busy + + + + +Parameters +---------- + + +``prefix`` + +* Prefix to give to the namespaces to create + +* default value: ``busy-namespace`` + + +``count`` + +* Number of namespaces to create + +* default value: ``10`` + + +``labels`` + +* Dict of the key/value labels to set for the namespace + diff --git a/_sources/toolbox.generated/Busy_Cluster.status.rst.txt b/_sources/toolbox.generated/Busy_Cluster.status.rst.txt new file mode 100644 index 0000000000..49298b41b6 --- /dev/null +++ b/_sources/toolbox.generated/Busy_Cluster.status.rst.txt @@ -0,0 +1,33 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Busy_Cluster.status + + +busy_cluster status +=================== + +Shows the busyness of the cluster + + + + +Parameters +---------- + + +``namespace_label_key`` + +* The label key to use to locate the namespaces to cleanup + +* default value: ``busy-cluster.topsail`` + + +``namespace_label_value`` + +* The label value to use to locate the namespaces to cleanup + +* default value: ``yes`` + diff --git a/_sources/toolbox.generated/Cluster.build_push_image.rst.txt b/_sources/toolbox.generated/Cluster.build_push_image.rst.txt new file mode 100644 index 0000000000..e3a20fc9b5 --- /dev/null +++ b/_sources/toolbox.generated/Cluster.build_push_image.rst.txt @@ -0,0 +1,84 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Cluster.build_push_image + + +cluster build_push_image +======================== + +Build and publish an image to quay using either a Dockerfile or git repo. 
+ + + + +Parameters +---------- + + +``image_local_name`` + +* Name of locally built image. + + +``tag`` + +* Tag for the image to build. + + +``namespace`` + +* Namespace where the local image will be built. + + +``remote_repo`` + +* Remote image repo to push to. If undefined, the image will not be pushed. + + +``remote_auth_file`` + +* Auth file for the remote repository. + + +``git_repo`` + +* Git repo containing Dockerfile if used as source. If undefined, the local path of 'dockerfile_path' will be used. + + +``git_ref`` + +* Git commit ref (branch, tag, commit hash) in the git repository. + + +``dockerfile_path`` + +* Path/Name of Dockerfile if used as source. If 'git_repo' is undefined, this path will be resolved locally, and the Dockerfile will be injected in the image BuildConfig. + +* default value: ``Dockerfile`` + + +``context_dir`` + +* Context dir inside the git repository. + +* default value: ``/`` + + +``memory`` + +* Flag to specify the required memory to build the image (in Gb). +* type: Float + + +``from_image`` + +* Base image to use, instead of the FROM image specified in the Dockerfile. + + +``from_imagetag`` + +* Base imagestreamtag to use, instead of the FROM image specified in the Dockerfile. + diff --git a/_sources/toolbox.generated/Cluster.capture_environment.rst.txt b/_sources/toolbox.generated/Cluster.capture_environment.rst.txt new file mode 100644 index 0000000000..ba6a19328c --- /dev/null +++ b/_sources/toolbox.generated/Cluster.capture_environment.rst.txt @@ -0,0 +1,15 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... 
+ _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Cluster.capture_environment + + +cluster capture_environment +=========================== + +Captures the cluster environment + + + diff --git a/_sources/toolbox.generated/Cluster.create_htpasswd_adminuser.rst.txt b/_sources/toolbox.generated/Cluster.create_htpasswd_adminuser.rst.txt new file mode 100644 index 0000000000..10bacc472d --- /dev/null +++ b/_sources/toolbox.generated/Cluster.create_htpasswd_adminuser.rst.txt @@ -0,0 +1,54 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Cluster.create_htpasswd_adminuser + + +cluster create_htpasswd_adminuser +================================= + +Create an htpasswd admin user. + +Will remove any other existing OAuth. + +Example of password file: +password=my-strong-password + + +Parameters +---------- + + +``username`` + +* Username of the htpasswd user. + + +``passwordfile`` + +* Password file where the user's password is stored. Will be sourced. + + +``wait`` + +* If True, waits for the user to be able to login into the cluster. 
+ + +# Constants +# Name of the secret that will contain the htpasswd passwords +# Defined as a constant in Cluster.create_htpasswd_adminuser +cluster_create_htpasswd_user_secret_name: htpasswd-secret + +# Name of the htpasswd IDP being created +# Defined as a constant in Cluster.create_htpasswd_adminuser +cluster_create_htpasswd_user_htpasswd_idp_name: htpasswd + +# Role that will be given to the user group +# Defined as a constant in Cluster.create_htpasswd_adminuser +cluster_create_htpasswd_user_role: cluster-admin + +# Name of the group that will be created for the user +# Defined as a constant in Cluster.create_htpasswd_adminuser +cluster_create_htpasswd_user_groupname: local-admins diff --git a/_sources/toolbox.generated/Cluster.create_osd.rst.txt b/_sources/toolbox.generated/Cluster.create_osd.rst.txt new file mode 100644 index 0000000000..7e0adb9698 --- /dev/null +++ b/_sources/toolbox.generated/Cluster.create_osd.rst.txt @@ -0,0 +1,87 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Cluster.create_osd + + +cluster create_osd +================== + +Create an OpenShift Dedicated cluster. + +Secret_file: + KUBEADMIN_PASS: password of the default kubeadmin user. + AWS_ACCOUNT_ID + AWS_ACCESS_KEY + AWS_SECRET_KEY: Credentials to access AWS. + + +Parameters +---------- + + +``cluster_name`` + +* The name to give to the cluster. + + +``secret_file`` + +* The file containing the cluster creation credentials. + + +``kubeconfig`` + +* The KUBECONFIG file to populate with the access to the cluster. + + +``version`` + +* OpenShift version to deploy. + +* default value: ``4.10.15`` + + +``region`` + +* AWS region where the cluster will be deployed. + +* default value: ``us-east-1`` + + +``htaccess_idp_name`` + +* Name of the Identity provider that will be created for the admin account. 
+ +* default value: ``htpasswd`` + + +``compute_machine_type`` + +* Name of the AWS machine instance type that will be used for the compute nodes. + +* default value: ``m5.xlarge`` + + +``compute_nodes`` + +* The number of compute nodes to create. A minimum of 2 is required by OSD. +* type: Int + +* default value: ``2`` + + +# Constants +# Name of the worker node machinepool +# Defined as a constant in Cluster.create_osd +cluster_create_osd_machinepool_name: default + +# Group that the admin account will be part of. +# Defined as a constant in Cluster.create_osd +cluster_create_osd_kubeadmin_group: cluster-admins + +# Name of the admin account that will be created. +# Defined as a constant in Cluster.create_osd +cluster_create_osd_kubeadmin_name: kubeadmin diff --git a/_sources/toolbox.generated/Cluster.deploy_aws_efs.rst.txt b/_sources/toolbox.generated/Cluster.deploy_aws_efs.rst.txt new file mode 100644 index 0000000000..0525382a67 --- /dev/null +++ b/_sources/toolbox.generated/Cluster.deploy_aws_efs.rst.txt @@ -0,0 +1,15 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Cluster.deploy_aws_efs + + +cluster deploy_aws_efs +====================== + +Deploy AWS EFS CSI driver and configure AWS accordingly. + +Assumes that AWS (credentials, Ansible module, Python module) is properly configured in the system. + diff --git a/_sources/toolbox.generated/Cluster.deploy_ldap.rst.txt b/_sources/toolbox.generated/Cluster.deploy_ldap.rst.txt new file mode 100644 index 0000000000..4c92bea1a1 --- /dev/null +++ b/_sources/toolbox.generated/Cluster.deploy_ldap.rst.txt @@ -0,0 +1,67 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... 
+ _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Cluster.deploy_ldap + + +cluster deploy_ldap +=================== + +Deploy OpenLDAP and LDAP Oauth + +Example of secret properties file: + +admin_password=adminpasswd + + +Parameters +---------- + + +``idp_name`` + +* Name of the LDAP identity provider. + + +``username_prefix`` + +* Prefix for the creation of the users (suffix is 0..username_count) + + +``username_count`` + +* Number of users to create. +* type: Int + + +``secret_properties_file`` + +* Path of a file containing the properties of LDAP secrets. + + +``use_ocm`` + +* If true, use `ocm create idp` to deploy the LDAP identity provider. + + +``use_rosa`` + +* If true, use `rosa create idp` to deploy the LDAP identity provider. + + +``cluster_name`` + +* Cluster to use when using OCM or ROSA. + + +``wait`` + +* If True, waits for the first user (0) to be able to login into the cluster. + + +# Constants +# Name of the admin user +# Defined as a constant in Cluster.deploy_ldap +cluster_deploy_ldap_admin_user: admin diff --git a/_sources/toolbox.generated/Cluster.deploy_minio_s3_server.rst.txt b/_sources/toolbox.generated/Cluster.deploy_minio_s3_server.rst.txt new file mode 100644 index 0000000000..11c0d8e2c0 --- /dev/null +++ b/_sources/toolbox.generated/Cluster.deploy_minio_s3_server.rst.txt @@ -0,0 +1,50 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Cluster.deploy_minio_s3_server + + +cluster deploy_minio_s3_server +============================== + +Deploy Minio S3 server + +Example of secret properties file: + +user_password=passwd +admin_password=adminpasswd + + +Parameters +---------- + + +``secret_properties_file`` + +* Path of a file containing the properties of S3 secrets. + + +``namespace`` + +* Namespace in which Minio should be deployed. 
+ +* default value: ``minio`` + + +``bucket_name`` + +* The name of the default bucket to create in Minio. + +* default value: ``myBucket`` + + +# Constants +# Name of the Minio admin user +# Defined as a constant in Cluster.deploy_minio_s3_server +cluster_deploy_minio_s3_server_root_user: admin + +# Name of the user/access key to use to connect to the Minio server +# Defined as a constant in Cluster.deploy_minio_s3_server +cluster_deploy_minio_s3_server_access_key: minio diff --git a/_sources/toolbox.generated/Cluster.deploy_nfs_provisioner.rst.txt b/_sources/toolbox.generated/Cluster.deploy_nfs_provisioner.rst.txt new file mode 100644 index 0000000000..7546da90c9 --- /dev/null +++ b/_sources/toolbox.generated/Cluster.deploy_nfs_provisioner.rst.txt @@ -0,0 +1,52 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Cluster.deploy_nfs_provisioner + + +cluster deploy_nfs_provisioner +============================== + +Deploy NFS Provisioner + + + + +Parameters +---------- + + +``namespace`` + +* The namespace where the resources will be deployed + +* default value: ``nfs-provisioner`` + + +``pvc_sc`` + +* The name of the storage class to use for the NFS-provisioner PVC + +* default value: ``gp3-csi`` + + +``pvc_size`` + +* The size of the PVC to give to the NFS-provisioner + +* default value: ``10Gi`` + + +``storage_class_name`` + +* The name of the storage class that will be created + +* default value: ``nfs-provisioner`` + + +``default_sc`` + +* Set to true to mark the storage class as default in the cluster + diff --git a/_sources/toolbox.generated/Cluster.deploy_nginx_server.rst.txt b/_sources/toolbox.generated/Cluster.deploy_nginx_server.rst.txt new file mode 100644 index 0000000000..de31f40504 --- /dev/null +++ b/_sources/toolbox.generated/Cluster.deploy_nginx_server.rst.txt @@ -0,0 +1,29 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... 
+ _Toolbox generate command: repo generate_toolbox_rst_documentation
+ _ Source component: Cluster.deploy_nginx_server
+
+
+cluster deploy_nginx_server
+===========================
+
+Deploy an NGINX HTTP server
+
+
+
+
+Parameters
+----------
+
+
+``namespace``
+
+* Namespace where the server will be deployed. Will be created if it doesn't exist.
+
+
+``directory``
+
+* Directory containing the files to serve on the HTTP server.
+
diff --git a/_sources/toolbox.generated/Cluster.deploy_opensearch.rst.txt b/_sources/toolbox.generated/Cluster.deploy_opensearch.rst.txt
new file mode 100644
index 0000000000..a6ee19a091
--- /dev/null
+++ b/_sources/toolbox.generated/Cluster.deploy_opensearch.rst.txt
@@ -0,0 +1,41 @@
+:orphan:
+
+..
+ _Auto-generated file, do not edit manually ...
+ _Toolbox generate command: repo generate_toolbox_rst_documentation
+ _ Source component: Cluster.deploy_opensearch
+
+
+cluster deploy_opensearch
+=========================
+
+Deploy OpenSearch and OpenSearch-Dashboards
+
+Example of secret properties file:
+
+user_password=passwd
+admin_password=adminpasswd
+
+
+Parameters
+----------
+
+
+``secret_properties_file``
+
+* Path of a file containing the properties of the OpenSearch secrets.
+
+
+``namespace``
+
+* Namespace in which the application will be deployed
+
+* default value: ``opensearch``
+
+
+``name``
+
+* Name to give to the OpenSearch instance
+
+* default value: ``opensearch``
+
diff --git a/_sources/toolbox.generated/Cluster.deploy_operator.rst.txt b/_sources/toolbox.generated/Cluster.deploy_operator.rst.txt
new file mode 100644
index 0000000000..08e2f7a6b0
--- /dev/null
+++ b/_sources/toolbox.generated/Cluster.deploy_operator.rst.txt
@@ -0,0 +1,87 @@
+:orphan:
+
+..
+ _Auto-generated file, do not edit manually ...
+ _Toolbox generate command: repo generate_toolbox_rst_documentation
+ _ Source component: Cluster.deploy_operator
+
+
+cluster deploy_operator
+=======================
+
+Deploy an operator from an OperatorHub catalog entry.
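+
+A hypothetical invocation, first listing the channels of a package, then deploying it (the package, namespace and channel names below are illustrative, not verified):
+
+::
+
+    ./run_toolbox.py cluster deploy_operator redhat-operators my-operator my-namespace --channel='?'
+    ./run_toolbox.py cluster deploy_operator redhat-operators my-operator my-namespace --channel=stable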
+ + + + +Parameters +---------- + + +``catalog`` + +* Name of the catalog containing the operator. + + +``manifest_name`` + +* Name of the operator package manifest. + + +``namespace`` + +* Namespace in which the operator will be deployed, or 'all' to deploy in all the namespaces. + + +``version`` + +* Version to deploy. If unspecified, deploys the latest version available in the selected channel. + + +``channel`` + +* Channel to deploy from. If unspecified, deploys the CSV's default channel. Use '?' to list the available channels for the given package manifest. + + +``installplan_approval`` + +* InstallPlan approval mode (Automatic or Manual). + +* default value: ``Manual`` + + +``catalog_namespace`` + +* Namespace in which the CatalogSource will be deployed + +* default value: ``openshift-marketplace`` + + +``deploy_cr`` + +* If set, deploy the first example CR found in the CSV. +* type: Bool + + +``namespace_monitoring`` + +* If set, enable OpenShift namespace monitoring. +* type: Bool + + +``all_namespaces`` + +* If set, deploy the CSV in all the namespaces. +* type: Bool + + +``config_env_names`` + +* If not empty, a list of config env names to pass to the subscription +* type: List + + +``csv_base_name`` + +* If not empty, base name of the CSV. If empty, use the manifest_name. + diff --git a/_sources/toolbox.generated/Cluster.deploy_redis_server.rst.txt b/_sources/toolbox.generated/Cluster.deploy_redis_server.rst.txt new file mode 100644 index 0000000000..3cb0089ac3 --- /dev/null +++ b/_sources/toolbox.generated/Cluster.deploy_redis_server.rst.txt @@ -0,0 +1,24 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Cluster.deploy_redis_server + + +cluster deploy_redis_server +=========================== + +Deploy a redis server + + + + +Parameters +---------- + + +``namespace`` + +* Namespace where the server will be deployed. 
Will be created if it doesn't exist.
+
diff --git a/_sources/toolbox.generated/Cluster.destroy_ocp.rst.txt b/_sources/toolbox.generated/Cluster.destroy_ocp.rst.txt
new file mode 100644
index 0000000000..5426499a05
--- /dev/null
+++ b/_sources/toolbox.generated/Cluster.destroy_ocp.rst.txt
@@ -0,0 +1,48 @@
+:orphan:
+
+..
+ _Auto-generated file, do not edit manually ...
+ _Toolbox generate command: repo generate_toolbox_rst_documentation
+ _ Source component: Cluster.destroy_ocp
+
+
+cluster destroy_ocp
+===================
+
+Destroy an OpenShift cluster
+
+
+
+
+Parameters
+----------
+
+
+``region``
+
+* The AWS region where the cluster lives. If empty and --confirm is passed, look it up from the cluster.
+
+
+``tag``
+
+* The resource tag key. If empty and --confirm is passed, look it up from the cluster.
+
+
+``confirm``
+
+* If the region/tag are not set and --confirm is passed, destroy the current cluster.
+
+
+``tag_value``
+
+* The resource tag value.
+
+* default value: ``owned``
+
+
+``openshift_install``
+
+* The path to the `openshift-install` binary to use to destroy the cluster. If empty, pick it up from the `deploy-cluster` subproject.
+
+* default value: ``openshift-install``
+
diff --git a/_sources/toolbox.generated/Cluster.destroy_osd.rst.txt b/_sources/toolbox.generated/Cluster.destroy_osd.rst.txt
new file mode 100644
index 0000000000..261be45b85
--- /dev/null
+++ b/_sources/toolbox.generated/Cluster.destroy_osd.rst.txt
@@ -0,0 +1,24 @@
+:orphan:
+
+..
+ _Auto-generated file, do not edit manually ...
+ _Toolbox generate command: repo generate_toolbox_rst_documentation
+ _ Source component: Cluster.destroy_osd
+
+
+cluster destroy_osd
+===================
+
+Destroy an OpenShift Dedicated cluster.
+
+
+
+
+Parameters
+----------
+
+
+``cluster_name``
+
+* The name of the cluster to destroy.
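+
+A hypothetical invocation (the cluster name is illustrative):
+
+::
+
+    ./run_toolbox.py cluster destroy_osd my-osd-cluster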
+
diff --git a/_sources/toolbox.generated/Cluster.download_to_pvc.rst.txt b/_sources/toolbox.generated/Cluster.download_to_pvc.rst.txt
new file mode 100644
index 0000000000..7954734b11
--- /dev/null
+++ b/_sources/toolbox.generated/Cluster.download_to_pvc.rst.txt
@@ -0,0 +1,70 @@
+:orphan:
+
+..
+ _Auto-generated file, do not edit manually ...
+ _Toolbox generate command: repo generate_toolbox_rst_documentation
+ _ Source component: Cluster.download_to_pvc
+
+
+cluster download_to_pvc
+=======================
+
+Downloads a dataset into a PVC of the cluster
+
+
+
+
+Parameters
+----------
+
+
+``name``
+
+* Name of the data source
+
+
+``source``
+
+* URL of the source data
+
+
+``pvc_name``
+
+* Name of the PVC that will be created to store the dataset files.
+
+
+``namespace``
+
+* Name of the namespace in which the PVC will be created
+
+
+``creds``
+
+* Path to credentials to use for accessing the dataset.
+
+
+``storage_dir``
+
+* The path where to store the downloaded files, in the PVC
+
+* default value: ``/``
+
+
+``clean_first``
+
+* If True, clears the storage directory before downloading.
+
+
+``pvc_access_mode``
+
+* The access mode to request when creating the PVC
+
+* default value: ``ReadWriteOnce``
+
+
+``pvc_size``
+
+* The size of the PVC to request, when creating it
+
+* default value: ``80Gi``
+
diff --git a/_sources/toolbox.generated/Cluster.dump_prometheus_db.rst.txt b/_sources/toolbox.generated/Cluster.dump_prometheus_db.rst.txt
new file mode 100644
index 0000000000..dcf4d3026c
--- /dev/null
+++ b/_sources/toolbox.generated/Cluster.dump_prometheus_db.rst.txt
@@ -0,0 +1,45 @@
+:orphan:
+
+..
+ _Auto-generated file, do not edit manually ...
+ _Toolbox generate command: repo generate_toolbox_rst_documentation
+ _ Source component: Cluster.dump_prometheus_db
+
+
+cluster dump_prometheus_db
+==========================
+
+Dump the Prometheus database into a file
+
+By default, targets the OpenShift Prometheus Pod.
+
+
+Parameters
+----------
+
+
+``label``
+
+* Label to use to identify the Prometheus Pod.
+
+* default value: ``app.kubernetes.io/component=prometheus``
+
+
+``namespace``
+
+* Namespace in which to search for the Prometheus Pod.
+
+* default value: ``openshift-monitoring``
+
+
+``dump_name_prefix``
+
+* Name prefix for the archive that will be stored.
+
+* default value: ``prometheus``
+
+
+# Constants
+#
+# Defined as a constant in Cluster.dump_prometheus_db
+cluster_prometheus_db_mode: dump
diff --git a/_sources/toolbox.generated/Cluster.fill_workernodes.rst.txt b/_sources/toolbox.generated/Cluster.fill_workernodes.rst.txt
new file mode 100644
index 0000000000..50b586ff39
--- /dev/null
+++ b/_sources/toolbox.generated/Cluster.fill_workernodes.rst.txt
@@ -0,0 +1,40 @@
+:orphan:
+
+..
+ _Auto-generated file, do not edit manually ...
+ _Toolbox generate command: repo generate_toolbox_rst_documentation
+ _ Source component: Cluster.fill_workernodes
+
+
+cluster fill_workernodes
+========================
+
+Fills the worker nodes with place-holder Pods requesting the maximum available amount of a given resource name.
+
+
+
+
+Parameters
+----------
+
+
+``namespace``
+
+* Namespace in which the place-holder Pods should be deployed
+
+* default value: ``default``
+
+
+``name``
+
+* Name prefix to use for the place-holder Pods
+
+* default value: ``resource-placeholder``
+
+
+``label_selector``
+
+* Label to use to select the nodes to fill
+
+* default value: ``node-role.kubernetes.io/worker``
+
diff --git a/_sources/toolbox.generated/Cluster.preload_image.rst.txt b/_sources/toolbox.generated/Cluster.preload_image.rst.txt
new file mode 100644
index 0000000000..b39b3652b2
--- /dev/null
+++ b/_sources/toolbox.generated/Cluster.preload_image.rst.txt
@@ -0,0 +1,56 @@
+:orphan:
+
+..
+ _Auto-generated file, do not edit manually ...
+ _Toolbox generate command: repo generate_toolbox_rst_documentation
+ _ Source component: Cluster.preload_image
+
+
+cluster preload_image
+=====================
+
+Preload a container image on all the nodes of a cluster.
+
+
+
+
+Parameters
+----------
+
+
+``name``
+
+* Name to give to the DaemonSet used for preloading the image.
+
+
+``image``
+
+* Container image to preload on the nodes.
+
+
+``namespace``
+
+* Namespace in which the DaemonSet will be created.
+
+* default value: ``default``
+
+
+``node_selector_key``
+
+* NodeSelector key to apply to the DaemonSet.
+
+
+``node_selector_value``
+
+* NodeSelector value to apply to the DaemonSet.
+
+
+``pod_toleration_key``
+
+* Pod toleration key to apply to the DaemonSet.
+
+
+``pod_toleration_effect``
+
+* Pod toleration effect to apply to the DaemonSet.
+
diff --git a/_sources/toolbox.generated/Cluster.query_prometheus_db.rst.txt b/_sources/toolbox.generated/Cluster.query_prometheus_db.rst.txt
new file mode 100644
index 0000000000..2691adfd15
--- /dev/null
+++ b/_sources/toolbox.generated/Cluster.query_prometheus_db.rst.txt
@@ -0,0 +1,71 @@
+:orphan:
+
+..
+ _Auto-generated file, do not edit manually ...
+ _Toolbox generate command: repo generate_toolbox_rst_documentation
+ _ Source component: Cluster.query_prometheus_db
+
+
+cluster query_prometheus_db
+===========================
+
+Query Prometheus with a list of PromQL queries read from a file
+
+The promquery_file is a multi-line list: first comes the name of the metric, prefixed with '#',
+then the definition of the metric, which can spread over multiple lines, until the next '#' is found.
+
+Example:
+::
+
+    promquery_file:
+    # sutest__cluster_cpu_capacity
+    sum(cluster:capacity_cpu_cores:sum)
+    # sutest__cluster_memory_requests
+    sum(
+        kube_pod_resource_request{resource="memory"}
+        *
+        on(node) group_left(role) (
+            max by (node) (kube_node_role{role=~".+"})
+        )
+    )
+    # openshift-operators CPU request
+    sum(kube_pod_container_resource_requests{namespace=~'openshift-operators',resource='cpu'})
+    # openshift-operators CPU limit
+    sum(kube_pod_container_resource_limits{namespace=~'openshift-operators',resource='cpu'})
+    # openshift-operators CPU usage
+    sum(rate(container_cpu_usage_seconds_total{namespace=~'openshift-operators'}[5m]))
+
+
+Parameters
+----------
+
+
+``promquery_file``
+
+* File where the Prometheus queries are stored. See the example above to understand the format.
+
+
+``dest_dir``
+
+* Directory where the metrics should be stored
+
+
+``namespace``
+
+* The namespace where the metrics should be searched for
+
+
+``duration_s``
+
+* The duration of the history to query
+
+
+``start_ts``
+
+* The start timestamp of the history to query. Incompatible with the duration_s flag.
+
+
+``end_ts``
+
+* The end timestamp of the history to query. Incompatible with the duration_s flag.
+
diff --git a/_sources/toolbox.generated/Cluster.reset_prometheus_db.rst.txt b/_sources/toolbox.generated/Cluster.reset_prometheus_db.rst.txt
new file mode 100644
index 0000000000..e846e32f44
--- /dev/null
+++ b/_sources/toolbox.generated/Cluster.reset_prometheus_db.rst.txt
@@ -0,0 +1,49 @@
+:orphan:
+
+..
+ _Auto-generated file, do not edit manually ...
+ _Toolbox generate command: repo generate_toolbox_rst_documentation
+ _ Source component: Cluster.reset_prometheus_db
+
+
+cluster reset_prometheus_db
+===========================
+
+Resets the Prometheus database, by destroying its Pod
+
+By default, targets the OpenShift Prometheus Pod.
+
+
+Parameters
+----------
+
+
+``mode``
+
+* Mode in which the role will run. Can be 'reset' or 'dump'.
+
+* default value: ``reset``
+
+
+``label``
+
+* Label to use to identify the Prometheus Pod.
+
+* default value: ``app.kubernetes.io/component=prometheus``
+
+
+``namespace``
+
+* Namespace in which to search for the Prometheus Pod.
+
+* default value: ``openshift-monitoring``
+
+
+# Constants
+# Prefix to apply to the db name in 'dump' mode
+# Defined as a constant in Cluster.reset_prometheus_db
+cluster_prometheus_db_dump_name_prefix: prometheus
+
+# Directory to dump on the Prometheus Pod
+# Defined as a constant in Cluster.reset_prometheus_db
+cluster_prometheus_db_directory: /prometheus
diff --git a/_sources/toolbox.generated/Cluster.set_project_annotation.rst.txt b/_sources/toolbox.generated/Cluster.set_project_annotation.rst.txt
new file mode 100644
index 0000000000..c041012d68
--- /dev/null
+++ b/_sources/toolbox.generated/Cluster.set_project_annotation.rst.txt
@@ -0,0 +1,39 @@
+:orphan:
+
+..
+ _Auto-generated file, do not edit manually ...
+ _Toolbox generate command: repo generate_toolbox_rst_documentation
+ _ Source component: Cluster.set_project_annotation
+
+
+cluster set_project_annotation
+==============================
+
+Set an annotation on a given project, or on any new project.
+
+
+
+
+Parameters
+----------
+
+
+``key``
+
+* The annotation key
+
+
+``value``
+
+* The annotation value. If the value is omitted, the annotation is removed.
+
+
+``project``
+
+* The project to annotate. Must be set unless --all is passed.
+
+
+``all``
+
+* If set, the annotation will be applied to any new project.
+
diff --git a/_sources/toolbox.generated/Cluster.set_scale.rst.txt b/_sources/toolbox.generated/Cluster.set_scale.rst.txt
new file mode 100644
index 0000000000..803247dcd7
--- /dev/null
+++ b/_sources/toolbox.generated/Cluster.set_scale.rst.txt
@@ -0,0 +1,70 @@
+:orphan:
+
+..
+ _Auto-generated file, do not edit manually ...
+ _Toolbox generate command: repo generate_toolbox_rst_documentation
+ _ Source component: Cluster.set_scale
+
+
+cluster set_scale
+=================
+
+Ensures that the cluster has exactly ``scale`` nodes with the instance type ``instance_type``
+
+If the machinesets of the given instance type already have the required total number of replicas,
+their replica parameters will not be modified.
+Otherwise,
+- If there is only one machineset with the given instance type, its replicas will be set to the value of this parameter.
+- If there are other machinesets with non-zero replicas, the playbook will fail, unless the `force` parameter is
+set to true. In that case, the number of replicas of the other machinesets will be zeroed before setting the replicas
+of the first machineset to the value of this parameter.
+- If the `--base-machineset=machineset` flag is passed, the `machineset` machineset will be used to derive the new
+machineset (otherwise, the first machineset of the listing will be used). This is useful if the desired `instance_type`
+is only available in some specific regions, controlled by different machinesets.
+
+Example: ./run_toolbox.py cluster set_scale g4dn.xlarge 1 # ensure that the cluster has 1 GPU node
+
+
+Parameters
+----------
+
+
+``instance_type``
+
+* The instance type to use, for example, g4dn.xlarge
+
+
+``scale``
+
+* The number of required nodes with the given instance type
+
+
+``base_machineset``
+
+* Name of a machineset to use to derive the new one. Default: pick up the first machineset found in `oc get machinesets -n openshift-machine-api`.
+
+
+``force``
+
+* If true, zero the replicas of the other machinesets before scaling, instead of failing (see the description above).
+
+
+``taint``
+
+* Taint to apply to the machineset.
+
+
+``name``
+
+* Name to give to the new machineset.
+
+
+``spot``
+
+* Set to true to request spot instances from AWS. Set to false (default) to request on-demand instances.
+ + +``disk_size`` + +* Size of the EBS volume to request for the root partition + diff --git a/_sources/toolbox.generated/Cluster.undeploy_ldap.rst.txt b/_sources/toolbox.generated/Cluster.undeploy_ldap.rst.txt new file mode 100644 index 0000000000..ce47495b05 --- /dev/null +++ b/_sources/toolbox.generated/Cluster.undeploy_ldap.rst.txt @@ -0,0 +1,39 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Cluster.undeploy_ldap + + +cluster undeploy_ldap +===================== + +Undeploy OpenLDAP and LDAP Oauth + + + + +Parameters +---------- + + +``idp_name`` + +* Name of the LDAP identity provider. + + +``use_ocm`` + +* If true, use `ocm delete idp` to delete the LDAP identity provider. + + +``use_rosa`` + +* If true, use `rosa delete idp` to delete the LDAP identity provider. + + +``cluster_name`` + +* Cluster to use when using OCM or ROSA. + diff --git a/_sources/toolbox.generated/Cluster.update_pods_per_node.rst.txt b/_sources/toolbox.generated/Cluster.update_pods_per_node.rst.txt new file mode 100644 index 0000000000..330c8a4901 --- /dev/null +++ b/_sources/toolbox.generated/Cluster.update_pods_per_node.rst.txt @@ -0,0 +1,52 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... 
+ _Toolbox generate command: repo generate_toolbox_rst_documentation
+ _ Source component: Cluster.update_pods_per_node
+
+
+cluster update_pods_per_node
+============================
+
+Update the maximum number of Pods per Node, and Pods per Core.
+See also: https://docs.openshift.com/container-platform/4.14/nodes/nodes/nodes-nodes-managing-max-pods.html
+
+
+
+
+Parameters
+----------
+
+
+``max_pods``
+
+* The maximum number of Pods per node
+
+* default value: ``250``
+
+
+``pods_per_core``
+
+* The maximum number of Pods per core
+
+* default value: ``10``
+
+
+``name``
+
+* The name to give to the KubeletConfig object
+
+* default value: ``set-max-pods``
+
+
+``label``
+
+* The label selector for the nodes to update
+
+* default value: ``pools.operator.machineconfiguration.openshift.io/worker``
+
+
+``label_value``
+
+* The expected value for the label selector
+
diff --git a/_sources/toolbox.generated/Cluster.upgrade_to_image.rst.txt b/_sources/toolbox.generated/Cluster.upgrade_to_image.rst.txt
new file mode 100644
index 0000000000..e39719e70d
--- /dev/null
+++ b/_sources/toolbox.generated/Cluster.upgrade_to_image.rst.txt
@@ -0,0 +1,24 @@
+:orphan:
+
+..
+ _Auto-generated file, do not edit manually ...
+ _Toolbox generate command: repo generate_toolbox_rst_documentation
+ _ Source component: Cluster.upgrade_to_image
+
+
+cluster upgrade_to_image
+========================
+
+Upgrades the cluster to the given image
+
+
+
+
+Parameters
+----------
+
+
+``image``
+
+* The image to upgrade the cluster to
+
diff --git a/_sources/toolbox.generated/Cluster.wait_fully_awake.rst.txt b/_sources/toolbox.generated/Cluster.wait_fully_awake.rst.txt
new file mode 100644
index 0000000000..93b35e88eb
--- /dev/null
+++ b/_sources/toolbox.generated/Cluster.wait_fully_awake.rst.txt
@@ -0,0 +1,15 @@
+:orphan:
+
+..
+ _Auto-generated file, do not edit manually ...
+ _Toolbox generate command: repo generate_toolbox_rst_documentation
+ _ Source component: Cluster.wait_fully_awake
+
+
+cluster wait_fully_awake
+========================
+
+Waits for the cluster to be fully awake after a Hive restart
+
+
+
diff --git a/_sources/toolbox.generated/Configure.apply.rst.txt b/_sources/toolbox.generated/Configure.apply.rst.txt
new file mode 100644
index 0000000000..4c11ee1d3b
--- /dev/null
+++ b/_sources/toolbox.generated/Configure.apply.rst.txt
@@ -0,0 +1,29 @@
+:orphan:
+
+..
+ _Auto-generated file, do not edit manually ...
+ _Toolbox generate command: repo generate_toolbox_rst_documentation
+ _ Source component: Configure.apply
+
+
+configure apply
+===============
+
+Applies a preset (or a list of presets) to the current configuration file
+
+
+
+
+Parameters
+----------
+
+
+``preset``
+
+* A preset to apply
+
+
+``presets``
+
+* A list of presets to apply
+
diff --git a/_sources/toolbox.generated/Configure.enter.rst.txt b/_sources/toolbox.generated/Configure.enter.rst.txt
new file mode 100644
index 0000000000..55301da88d
--- /dev/null
+++ b/_sources/toolbox.generated/Configure.enter.rst.txt
@@ -0,0 +1,46 @@
+:orphan:
+
+..
+ _Auto-generated file, do not edit manually ...
+ _Toolbox generate command: repo generate_toolbox_rst_documentation
+ _ Source component: Configure.enter
+
+
+configure enter
+===============
+
+Enter into a custom configuration file for a TOPSAIL project
+
+
+
+
+Parameters
+----------
+
+
+``project``
+
+* The name of the project to configure
+
+
+``show_export``
+
+* Show the export command
+
+
+``shell``
+
+* If False, do nothing. If True, exec the default shell. Any other value is executed.
+ +* default value: ``True`` + + +``preset`` + +* A preset to apply + + +``presets`` + +* A list of presets to apply + diff --git a/_sources/toolbox.generated/Configure.get.rst.txt b/_sources/toolbox.generated/Configure.get.rst.txt new file mode 100644 index 0000000000..6a05714e08 --- /dev/null +++ b/_sources/toolbox.generated/Configure.get.rst.txt @@ -0,0 +1,24 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Configure.get + + +configure get +============= + +Gives the value of a given key, in the current configuration file + + + + +Parameters +---------- + + +``key`` + +* The key to lookup in the configuration file + diff --git a/_sources/toolbox.generated/Configure.name.rst.txt b/_sources/toolbox.generated/Configure.name.rst.txt new file mode 100644 index 0000000000..a9ed1d36a6 --- /dev/null +++ b/_sources/toolbox.generated/Configure.name.rst.txt @@ -0,0 +1,15 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Configure.name + + +configure name +============== + +Gives the name of the current configuration + + + diff --git a/_sources/toolbox.generated/Cpt.deploy_cpt_dashboard.rst.txt b/_sources/toolbox.generated/Cpt.deploy_cpt_dashboard.rst.txt new file mode 100644 index 0000000000..df55f8a684 --- /dev/null +++ b/_sources/toolbox.generated/Cpt.deploy_cpt_dashboard.rst.txt @@ -0,0 +1,63 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... 
+ _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Cpt.deploy_cpt_dashboard + + +cpt deploy_cpt_dashboard +======================== + +Deploy and configure the CPT Dashboard + +Example of secret properties file: + +admin_password=adminpasswd + + +Parameters +---------- + + +``frontend_istag`` + +* Imagestream tag to use for the frontend container + + +``backend_istag`` + +* Imagestream tag to use for the backend container + + +``plugin_name`` + +* Name of the CPT Dashboard plugin to configure + + +``es_url`` + +* URL of the OpenSearch backend + + +``es_indice`` + +* Indice of the OpenSearch backend + + +``es_username`` + +* Username to use to login into OpenSearch + + +``secret_properties_file`` + +* Path of a file containing the OpenSearch user credentials + + +``namespace`` + +* Namespace in which the application will be deployed + +* default value: ``topsail-cpt-dashboard`` + diff --git a/_sources/toolbox.generated/Fine_Tuning.ray_fine_tuning_job.rst.txt b/_sources/toolbox.generated/Fine_Tuning.ray_fine_tuning_job.rst.txt new file mode 100644 index 0000000000..7d5123c02e --- /dev/null +++ b/_sources/toolbox.generated/Fine_Tuning.ray_fine_tuning_job.rst.txt @@ -0,0 +1,152 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Fine_Tuning.ray_fine_tuning_job + + +fine_tuning ray_fine_tuning_job +=============================== + +Run a simple Ray fine-tuning Job. 
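+
+A hypothetical invocation (all names are illustrative, and the positional arguments follow the parameter order below):
+
+::
+
+    ./run_toolbox.py fine_tuning ray_fine_tuning_job my-job my-namespace my-pvc my-model --gpu=1 --pod_count=2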
+
+
+
+Parameters
+----------
+
+
+``name``
+
+* The name of the fine-tuning job to create
+
+
+``namespace``
+
+* The name of the namespace where the scheduler load will be generated
+
+
+``pvc_name``
+
+* The name of the PVC where the model and dataset are stored
+
+
+``model_name``
+
+* The name of the model to use inside the /dataset directory of the PVC
+
+
+``workload``
+
+* The name of the workload job to run (see the role's workload directory)
+
+* default value: ``ray-finetune-llm-deepspeed``
+
+
+``dataset_name``
+
+* The name of the dataset to use inside the /model directory of the PVC
+
+
+``dataset_replication``
+
+* Number of replications of the dataset to use, to artificially extend or reduce the fine-tuning effort
+
+* default value: ``1``
+
+
+``dataset_transform``
+
+* Name of the transformation to apply to the dataset
+
+
+``dataset_prefer_cache``
+
+* If True, and the dataset has to be transformed/duplicated, save and/or load it from the PVC
+
+* default value: ``True``
+
+
+``dataset_prepare_cache_only``
+
+* If True, only prepare the dataset cache file and do not run the fine-tuning.
+
+
+``container_image``
+
+* The image to use for the fine-tuning container
+
+* default value: ``quay.io/rhoai/ray:2.35.0-py39-cu121-torch24-fa26``
+
+
+``ray_version``
+
+* The version identifier passed to the RayCluster object
+
+* default value: ``2.35.0``
+
+
+``gpu``
+
+* The number of GPUs to request for the fine-tuning job
+
+
+``memory``
+
+* The amount of RAM to request for the fine-tuning job (in GiB)
+
+* default value: ``10``
+
+
+``cpu``
+
+* The number of CPU cores to request for the fine-tuning job (in cores)
+
+* default value: ``1``
+
+
+``request_equals_limits``
+
+* If True, sets the 'limits' of the job with the same value as the request.
+
+
+``prepare_only``
+
+* If True, only prepare the environment but do not run the fine-tuning job.
+
+
+``delete_other``
+
+* If True, delete the other PyTorchJobs before running
+
+
+``pod_count``
+
+* Number of Pods to include in the job
+
+* default value: ``1``
+
+
+``hyper_parameters``
+
+* Dictionary of hyper-parameters to pass to sft-trainer
+
+
+``sleep_forever``
+
+* If true, sleeps forever instead of running the fine-tuning command.
+
+
+``capture_artifacts``
+
+* If enabled, captures the artifacts that will help post-mortem analyses
+
+* default value: ``True``
+
+
+``shutdown_cluster``
+
+* If True, let the RayJob shut down the RayCluster when the job terminates
+
diff --git a/_sources/toolbox.generated/Fine_Tuning.run_fine_tuning_job.rst.txt b/_sources/toolbox.generated/Fine_Tuning.run_fine_tuning_job.rst.txt
new file mode 100644
index 0000000000..e91c4b08dc
--- /dev/null
+++ b/_sources/toolbox.generated/Fine_Tuning.run_fine_tuning_job.rst.txt
@@ -0,0 +1,153 @@
+:orphan:
+
+..
+ _Auto-generated file, do not edit manually ...
+ _Toolbox generate command: repo generate_toolbox_rst_documentation
+ _ Source component: Fine_Tuning.run_fine_tuning_job
+
+
+fine_tuning run_fine_tuning_job
+===============================
+
+Run a simple fine-tuning Job.
+
+
+
+Parameters
+----------
+
+
+``name``
+
+* The name of the fine-tuning job to create
+
+
+``namespace``
+
+* The name of the namespace where the scheduler load will be generated
+
+
+``pvc_name``
+
+* The name of the PVC where the model and dataset are stored
+
+
+``workload``
+
+* The name of the workload to run inside the container (fms or ilab)
+
+
+``model_name``
+
+* The name of the model to use inside the /dataset directory of the PVC
+
+
+``dataset_name``
+
+* The name of the dataset to use inside the /model directory of the PVC
+
+
+``dataset_replication``
+
+* Number of replications of the dataset to use, to artificially extend or reduce the fine-tuning effort
+
+* default value: ``1``
+
+
+``dataset_transform``
+
+* Name of the transformation to apply to the dataset
+
+
+``dataset_prefer_cache``
+
+* If True, and the dataset has to be transformed/duplicated, save and/or load it from the PVC
+
+* default value: ``True``
+
+
+``dataset_prepare_cache_only``
+
+* If True, only prepare the dataset cache file and do not run the fine-tuning.
+
+
+``dataset_response_template``
+
+* The delimiter marking the beginning of the response in the dataset samples
+
+
+``container_image``
+
+* The image to use for the fine-tuning container
+
+* default value: ``quay.io/modh/fms-hf-tuning:release-7a8ff0f4114ba43398d34fd976f6b17bb1f665f3``
+
+
+``gpu``
+
+* The number of GPUs to request for the fine-tuning job
+
+
+``memory``
+
+* The amount of RAM to request for the fine-tuning job (in GiB)
+
+* default value: ``10``
+
+
+``cpu``
+
+* The number of CPU cores to request for the fine-tuning job (in cores)
+
+* default value: ``1``
+
+
+``request_equals_limits``
+
+* If True, sets the 'limits' of the job with the same value as the request.
+
+
+``prepare_only``
+
+* If True, only prepare the environment but do not run the fine-tuning job.
+ + +``delete_other`` + +* If True, delete the other PyTorchJobs before running + + +``pod_count`` + +* Number of Pods to include in the job + +* default value: ``1`` + + +``hyper_parameters`` + +* Dictionary of hyper-parameters to pass to sft-trainer + + +``capture_artifacts`` + +* If enabled, captures the artifacts that will help with post-mortem analyses + +* default value: ``True`` + + +``sleep_forever`` + +* If True, sleeps forever instead of running the fine-tuning command. + + +``ephemeral_output_pvc_size`` + +* If a size (with units) is passed, use an ephemeral volume claim for storing the fine-tuning output. Otherwise, use an emptyDir. + + +``use_roce`` + +* If enabled, activates the flags required to use the RoCE fast network + diff --git a/_sources/toolbox.generated/Fine_Tuning.run_quality_evaluation.rst.txt b/_sources/toolbox.generated/Fine_Tuning.run_quality_evaluation.rst.txt new file mode 100644 index 0000000000..1c3e9a153d --- /dev/null +++ b/_sources/toolbox.generated/Fine_Tuning.run_quality_evaluation.rst.txt @@ -0,0 +1,82 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Fine_Tuning.run_quality_evaluation + + +fine_tuning run_quality_evaluation +================================== + +Run a simple quality-evaluation Job. 
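The quality-evaluation job follows the same calling convention as the fine-tuning job above. The sketch below assumes the repository's ``run_toolbox.py`` entry point; all parameter values are illustrative placeholders:

```shell
# Hypothetical invocation sketch of the quality-evaluation job.
# The namespace, PVC and model names are placeholders, not real defaults.
cmd="./run_toolbox.py fine_tuning run_quality_evaluation"
cmd="$cmd --name=my-quality-eval"           # placeholder job name
cmd="$cmd --namespace=fine-tuning-testing"  # placeholder namespace
cmd="$cmd --pvc_name=fine-tuning-storage"   # placeholder PVC name
cmd="$cmd --model_name=my-model"            # placeholder model name
cmd="$cmd --gpu=1"
echo "$cmd"
```

Unset parameters fall back to the defaults documented in the parameter list below.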
+ + + + +Parameters +---------- + + +``name`` + +* The name of the quality-evaluation job to create + + +``namespace`` + +* The name of the namespace where the scheduler load will be generated + + +``pvc_name`` + +* The name of the PVC where the model and dataset are stored + + +``model_name`` + +* The name of the model to use inside the /model directory of the PVC + + +``container_image`` + +* The image to use for the quality-evaluation container + +* default value: ``registry.redhat.io/ubi9`` + + +``gpu`` + +* The number of GPUs to request for the quality-evaluation job + + +``memory`` + +* The amount of RAM to request for the quality-evaluation job (in GiB) + +* default value: ``10`` + + +``cpu`` + +* The number of CPU cores to request for the quality-evaluation job + +* default value: ``1`` + + +``pod_count`` + +* Number of pods to deploy in the job + +* default value: ``1`` + + +``hyper_parameters`` + +* Dictionary of hyper-parameters to pass to sft-trainer + + +``sleep_forever`` + +* If True, sleeps forever instead of running the quality-evaluation command. + diff --git a/_sources/toolbox.generated/From_Config.run.rst.txt b/_sources/toolbox.generated/From_Config.run.rst.txt new file mode 100644 index 0000000000..d40786f76c --- /dev/null +++ b/_sources/toolbox.generated/From_Config.run.rst.txt @@ -0,0 +1,60 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: From_Config.run + + +from_config run +=============== + +Run ``topsail`` toolbox commands from a single config file. + + + + +Parameters +---------- + + +``group`` + +* Group to which the command belongs. + + +``command`` + +* Command to call, within the group. + + +``config_file`` + +* Configuration file from which the parameters will be looked up. Can be passed via the TOPSAIL_FROM_CONFIG_FILE environment variable. + + +``command_args_file`` + +* Command argument configuration file. 
Can be passed via the TOPSAIL_FROM_COMMAND_ARGS_FILE environment variable. + + +``prefix`` + +* Prefix to apply to the role name to look up the command options. + + +``suffix`` + +* Suffix to apply to the role name to look up the command options. + + +``extra`` + +* Extra arguments to pass to the commands. Use the dictionary notation: '{arg1: val1, arg2: val2}'. +* type: Dict + + +``show_args`` + +* Print the generated arguments on stdout and exit, or print only a given argument if a value is passed. + diff --git a/_sources/toolbox.generated/Gpu_Operator.capture_deployment_state.rst.txt b/_sources/toolbox.generated/Gpu_Operator.capture_deployment_state.rst.txt new file mode 100644 index 0000000000..852805a4dd --- /dev/null +++ b/_sources/toolbox.generated/Gpu_Operator.capture_deployment_state.rst.txt @@ -0,0 +1,15 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Gpu_Operator.capture_deployment_state + + +gpu_operator capture_deployment_state +===================================== + +Captures the GPU Operator deployment state + + + diff --git a/_sources/toolbox.generated/Gpu_Operator.deploy_cluster_policy.rst.txt b/_sources/toolbox.generated/Gpu_Operator.deploy_cluster_policy.rst.txt new file mode 100644 index 0000000000..3416a1f739 --- /dev/null +++ b/_sources/toolbox.generated/Gpu_Operator.deploy_cluster_policy.rst.txt @@ -0,0 +1,15 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... 
+ _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Gpu_Operator.deploy_cluster_policy + + +gpu_operator deploy_cluster_policy +================================== + +Creates the ClusterPolicy from the OLM ClusterServiceVersion + + + diff --git a/_sources/toolbox.generated/Gpu_Operator.deploy_from_bundle.rst.txt b/_sources/toolbox.generated/Gpu_Operator.deploy_from_bundle.rst.txt new file mode 100644 index 0000000000..31df9badee --- /dev/null +++ b/_sources/toolbox.generated/Gpu_Operator.deploy_from_bundle.rst.txt @@ -0,0 +1,31 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Gpu_Operator.deploy_from_bundle + + +gpu_operator deploy_from_bundle +=============================== + +Deploys the GPU Operator from a bundle + + + + +Parameters +---------- + + +``bundle`` + +* Either a bundle OCI image or "master" to deploy the latest bundle + + +``namespace`` + +* Optional namespace in which the GPU Operator will be deployed. Before v1.9, the value must be "openshift-operators". With >=v1.9, the namespace can be freely chosen (except 'openshift-operators'). Default: nvidia-gpu-operator. + +* default value: ``nvidia-gpu-operator`` + diff --git a/_sources/toolbox.generated/Gpu_Operator.deploy_from_operatorhub.rst.txt b/_sources/toolbox.generated/Gpu_Operator.deploy_from_operatorhub.rst.txt new file mode 100644 index 0000000000..3edf8671d7 --- /dev/null +++ b/_sources/toolbox.generated/Gpu_Operator.deploy_from_operatorhub.rst.txt @@ -0,0 +1,43 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... 
+ _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Gpu_Operator.deploy_from_operatorhub + + +gpu_operator deploy_from_operatorhub +==================================== + +Deploys the GPU Operator from OperatorHub + + + + +Parameters +---------- + + +``namespace`` + +* Optional namespace in which the GPU Operator will be deployed. Before v1.9, the value must be "openshift-operators". With >=v1.9, the namespace can be freely chosen. Default: nvidia-gpu-operator. + +* default value: ``nvidia-gpu-operator`` + + +``version`` + +* Optional version to deploy. If unspecified, deploys the latest version available in the selected channel. Run the toolbox gpu_operator list_version_from_operator_hub subcommand to see the available versions. + + +``channel`` + +* Optional channel to deploy from. If unspecified, deploys the CSV's default channel. + + +``installPlan`` + +* Optional InstallPlan approval mode (Automatic or Manual [default]) + +* default value: ``Manual`` + diff --git a/_sources/toolbox.generated/Gpu_Operator.enable_time_sharing.rst.txt b/_sources/toolbox.generated/Gpu_Operator.enable_time_sharing.rst.txt new file mode 100644 index 0000000000..bc313beea2 --- /dev/null +++ b/_sources/toolbox.generated/Gpu_Operator.enable_time_sharing.rst.txt @@ -0,0 +1,38 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... 
+ _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Gpu_Operator.enable_time_sharing + + +gpu_operator enable_time_sharing +================================ + +Enable time-sharing in the GPU Operator ClusterPolicy + + + + +Parameters +---------- + + +``replicas`` + +* Number of slices available for each of the GPUs + + +``namespace`` + +* Namespace in which the GPU Operator is deployed + +* default value: ``nvidia-gpu-operator`` + + +``configmap_name`` + +* Name of the ConfigMap where the configuration will be stored + +* default value: ``time-slicing-config-all`` + diff --git a/_sources/toolbox.generated/Gpu_Operator.extend_metrics.rst.txt b/_sources/toolbox.generated/Gpu_Operator.extend_metrics.rst.txt new file mode 100644 index 0000000000..c532afd77f --- /dev/null +++ b/_sources/toolbox.generated/Gpu_Operator.extend_metrics.rst.txt @@ -0,0 +1,58 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Gpu_Operator.extend_metrics + + +gpu_operator extend_metrics +=========================== + +Extends the DCGM metrics collected by the GPU Operator + + + + +Parameters +---------- + + +``include_defaults`` + +* If True, include the default DCGM metrics in the custom config + +* default value: ``True`` + + +``include_well_known`` + +* If True, include well-known interesting DCGM metrics in the custom config + + +``namespace`` + +* Namespace in which the GPU Operator is deployed + +* default value: ``nvidia-gpu-operator`` + + +``configmap_name`` + +* Name of the ConfigMap where the configuration will be stored + +* default value: ``metrics-config`` + + +``extra_metrics`` + +* If not None, a [{name,type,description}*] list of dictionaries with the extra metrics to include in the custom config +* type: List + + +``wait_refresh`` + +* If True, wait for the DCGM components to take into account the new configuration + +* default 
value: ``True`` + diff --git a/_sources/toolbox.generated/Gpu_Operator.get_csv_version.rst.txt b/_sources/toolbox.generated/Gpu_Operator.get_csv_version.rst.txt new file mode 100644 index 0000000000..0cec9989ae --- /dev/null +++ b/_sources/toolbox.generated/Gpu_Operator.get_csv_version.rst.txt @@ -0,0 +1,15 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Gpu_Operator.get_csv_version + + +gpu_operator get_csv_version +============================ + +Get the version of the GPU Operator currently installed from OLM. Stores the version in the 'ARTIFACT_EXTRA_LOGS_DIR' artifacts directory. + + + diff --git a/_sources/toolbox.generated/Gpu_Operator.run_gpu_burn.rst.txt b/_sources/toolbox.generated/Gpu_Operator.run_gpu_burn.rst.txt new file mode 100644 index 0000000000..535ba6cd5d --- /dev/null +++ b/_sources/toolbox.generated/Gpu_Operator.run_gpu_burn.rst.txt @@ -0,0 +1,48 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Gpu_Operator.run_gpu_burn + + +gpu_operator run_gpu_burn +========================= + +Runs the GPU burn on the cluster + + + + +Parameters +---------- + + +``namespace`` + +* Namespace in which GPU-burn will be executed + +* default value: ``default`` + + +``runtime`` + +* How long to run the GPU burn, in seconds +* type: Int + +* default value: ``30`` + + +``keep_resources`` + +* If true, do not delete the GPU-burn ConfigMaps +* type: Bool + + +``ensure_has_gpu`` + +* If true, fails if no GPU is available in the cluster. 
+* type: Bool + +* default value: ``True`` + diff --git a/_sources/toolbox.generated/Gpu_Operator.undeploy_from_operatorhub.rst.txt b/_sources/toolbox.generated/Gpu_Operator.undeploy_from_operatorhub.rst.txt new file mode 100644 index 0000000000..ed33640a4a --- /dev/null +++ b/_sources/toolbox.generated/Gpu_Operator.undeploy_from_operatorhub.rst.txt @@ -0,0 +1,15 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Gpu_Operator.undeploy_from_operatorhub + + +gpu_operator undeploy_from_operatorhub +====================================== + +Undeploys a GPU-operator that was deployed from OperatorHub + + + diff --git a/_sources/toolbox.generated/Gpu_Operator.wait_deployment.rst.txt b/_sources/toolbox.generated/Gpu_Operator.wait_deployment.rst.txt new file mode 100644 index 0000000000..128a469f5d --- /dev/null +++ b/_sources/toolbox.generated/Gpu_Operator.wait_deployment.rst.txt @@ -0,0 +1,15 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Gpu_Operator.wait_deployment + + +gpu_operator wait_deployment +============================ + +Waits for the GPU operator to deploy + + + diff --git a/_sources/toolbox.generated/Gpu_Operator.wait_stack_deployed.rst.txt b/_sources/toolbox.generated/Gpu_Operator.wait_stack_deployed.rst.txt new file mode 100644 index 0000000000..a8d4c235d8 --- /dev/null +++ b/_sources/toolbox.generated/Gpu_Operator.wait_stack_deployed.rst.txt @@ -0,0 +1,26 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... 
+ _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Gpu_Operator.wait_stack_deployed + + +gpu_operator wait_stack_deployed +================================ + +Waits for the GPU Operator stack to be deployed on the GPU nodes + + + + +Parameters +---------- + + +``namespace`` + +* Namespace in which the GPU Operator is deployed + +* default value: ``nvidia-gpu-operator`` + diff --git a/_sources/toolbox.generated/Kepler.deploy_kepler.rst.txt b/_sources/toolbox.generated/Kepler.deploy_kepler.rst.txt new file mode 100644 index 0000000000..d4591c876d --- /dev/null +++ b/_sources/toolbox.generated/Kepler.deploy_kepler.rst.txt @@ -0,0 +1,15 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Kepler.deploy_kepler + + +kepler deploy_kepler +==================== + +Deploy the Kepler operator and monitor to track energy consumption + + + diff --git a/_sources/toolbox.generated/Kepler.undeploy_kepler.rst.txt b/_sources/toolbox.generated/Kepler.undeploy_kepler.rst.txt new file mode 100644 index 0000000000..d824cdd23c --- /dev/null +++ b/_sources/toolbox.generated/Kepler.undeploy_kepler.rst.txt @@ -0,0 +1,15 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Kepler.undeploy_kepler + + +kepler undeploy_kepler +====================== + +Cleanup the Kepler operator and associated resources + + + diff --git a/_sources/toolbox.generated/Kserve.capture_operators_state.rst.txt b/_sources/toolbox.generated/Kserve.capture_operators_state.rst.txt new file mode 100644 index 0000000000..09d6b07fe0 --- /dev/null +++ b/_sources/toolbox.generated/Kserve.capture_operators_state.rst.txt @@ -0,0 +1,24 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... 
+ _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Kserve.capture_operators_state + + +kserve capture_operators_state +============================== + +Captures the state of the operators of the KServe serving stack + + + + +Parameters +---------- + + +``raw_deployment`` + +* If True, do not try to capture any Serverless related resource + diff --git a/_sources/toolbox.generated/Kserve.capture_state.rst.txt b/_sources/toolbox.generated/Kserve.capture_state.rst.txt new file mode 100644 index 0000000000..f80051b0ff --- /dev/null +++ b/_sources/toolbox.generated/Kserve.capture_state.rst.txt @@ -0,0 +1,24 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Kserve.capture_state + + +kserve capture_state +==================== + +Captures the state of the KServe stack in a given namespace + + + + +Parameters +---------- + + +``namespace`` + +* The namespace in which the Serving stack was deployed. If empty, use the current project. + diff --git a/_sources/toolbox.generated/Kserve.deploy_model.rst.txt b/_sources/toolbox.generated/Kserve.deploy_model.rst.txt new file mode 100644 index 0000000000..d7da24db5b --- /dev/null +++ b/_sources/toolbox.generated/Kserve.deploy_model.rst.txt @@ -0,0 +1,67 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... 
+ _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Kserve.deploy_model + + +kserve deploy_model +=================== + +Deploy a KServe model + + + + +Parameters +---------- + + +``namespace`` + +* The namespace in which the model should be deployed + + +``runtime`` + +* Name of the runtime (standalone-tgis or vllm) + + +``model_name`` + +* The name to give to the serving runtime + + +``sr_name`` + +* The name of the ServingRuntime object + + +``sr_kserve_image`` + +* The image of the Kserve serving runtime container + + +``inference_service_name`` + +* The name to give to the inference service + + +``inference_service_min_replicas`` + +* The minimum number of replicas. If none, the field is left unset. +* type: Int + + +``delete_others`` + +* If True, deletes the other serving runtime/inference services of the namespace + +* default value: ``True`` + + +``raw_deployment`` + +* If True, do not try to configure anything related to Serverless. + diff --git a/_sources/toolbox.generated/Kserve.extract_protos.rst.txt b/_sources/toolbox.generated/Kserve.extract_protos.rst.txt new file mode 100644 index 0000000000..889cbcee36 --- /dev/null +++ b/_sources/toolbox.generated/Kserve.extract_protos.rst.txt @@ -0,0 +1,41 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Kserve.extract_protos + + +kserve extract_protos +===================== + +Extracts the protos of an inference service + + + + +Parameters +---------- + + +``namespace`` + +* The namespace in which the model was deployed + + +``inference_service_name`` + +* The name of the inference service + + +``dest_dir`` + +* The directory where the protos should be stored + + +``copy_to_artifacts`` + +* If True, copy the protos to the command artifacts. If False, don't. 
+ +* default value: ``True`` + diff --git a/_sources/toolbox.generated/Kserve.extract_protos_grpcurl.rst.txt b/_sources/toolbox.generated/Kserve.extract_protos_grpcurl.rst.txt new file mode 100644 index 0000000000..5e2b9b643a --- /dev/null +++ b/_sources/toolbox.generated/Kserve.extract_protos_grpcurl.rst.txt @@ -0,0 +1,47 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Kserve.extract_protos_grpcurl + + +kserve extract_protos_grpcurl +============================= + +Extracts the protos of an inference service, using grpcurl + + + + +Parameters +---------- + + +``namespace`` + +* The namespace in which the model was deployed + + +``inference_service_name`` + +* The name of the inference service + + +``dest_file`` + +* The path where the proto file will be stored + + +``methods`` + +* The list of methods to extract +* type: List + + +``copy_to_artifacts`` + +* If True, copy the protos to the command artifacts. If False, don't. + +* default value: ``True`` + diff --git a/_sources/toolbox.generated/Kserve.undeploy_model.rst.txt b/_sources/toolbox.generated/Kserve.undeploy_model.rst.txt new file mode 100644 index 0000000000..e08f6b388a --- /dev/null +++ b/_sources/toolbox.generated/Kserve.undeploy_model.rst.txt @@ -0,0 +1,39 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... 
+ _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Kserve.undeploy_model + + +kserve undeploy_model +===================== + +Undeploy a KServe model + + + + +Parameters +---------- + + +``namespace`` + +* The namespace in which the model was deployed + + +``sr_name`` + +* The name of the serving runtime + + +``inference_service_name`` + +* The name of the inference service + + +``all`` + +* Delete all the inference services/serving runtimes of the namespace + diff --git a/_sources/toolbox.generated/Kserve.validate_model.rst.txt b/_sources/toolbox.generated/Kserve.validate_model.rst.txt new file mode 100644 index 0000000000..1c19c70cb0 --- /dev/null +++ b/_sources/toolbox.generated/Kserve.validate_model.rst.txt @@ -0,0 +1,62 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Kserve.validate_model + + +kserve validate_model +===================== + +Validate the proper deployment of a KServe model + +Warning: + This command requires `grpcurl` to be available in the PATH. + + +Parameters +---------- + + +``inference_service_names`` + +* A list of names of the inference services to validate + + +``query_count`` + +* Number of queries to perform + + +``runtime`` + +* Name of the runtime used (standalone-tgis or vllm) + + +``model_id`` + +* The model-id to pass to the inference service + +* default value: ``not-used`` + + +``namespace`` + +* The namespace in which the Serving stack was deployed. If empty, use the current project. + + +``raw_deployment`` + +* If True, do not try to configure anything related to Serverless. Works only in-cluster at the moment. + + +``method`` + +* The gRPC method to call #TODO remove? 
+ + +``proto`` + +* If not empty, the proto file to pass to grpcurl + diff --git a/_sources/toolbox.generated/Kubemark.deploy_capi_provider.rst.txt b/_sources/toolbox.generated/Kubemark.deploy_capi_provider.rst.txt new file mode 100644 index 0000000000..72927f51d1 --- /dev/null +++ b/_sources/toolbox.generated/Kubemark.deploy_capi_provider.rst.txt @@ -0,0 +1,15 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Kubemark.deploy_capi_provider + + +kubemark deploy_capi_provider +============================= + +Deploy the Kubemark Cluster-API provider + + + diff --git a/_sources/toolbox.generated/Kubemark.deploy_nodes.rst.txt b/_sources/toolbox.generated/Kubemark.deploy_nodes.rst.txt new file mode 100644 index 0000000000..c72b37ed79 --- /dev/null +++ b/_sources/toolbox.generated/Kubemark.deploy_nodes.rst.txt @@ -0,0 +1,40 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Kubemark.deploy_nodes + + +kubemark deploy_nodes +===================== + +Deploy a set of Kubemark nodes + + + + +Parameters +---------- + + +``namespace`` + +* The namespace in which the MachineDeployment will be created + +* default value: ``openshift-cluster-api`` + + +``deployment_name`` + +* The name of the MachineDeployment + +* default value: ``kubemark-md`` + + +``count`` + +* The number of nodes to deploy + +* default value: ``4`` + diff --git a/_sources/toolbox.generated/Kwok.deploy_kwok_controller.rst.txt b/_sources/toolbox.generated/Kwok.deploy_kwok_controller.rst.txt new file mode 100644 index 0000000000..c14364c504 --- /dev/null +++ b/_sources/toolbox.generated/Kwok.deploy_kwok_controller.rst.txt @@ -0,0 +1,31 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... 
+ _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Kwok.deploy_kwok_controller + + +kwok deploy_kwok_controller +=========================== + +Deploy the KWOK hollow node provider + + + + +Parameters +---------- + + +``namespace`` + +* Namespace where KWOK will be deployed. Cannot be changed at the moment. + +* default value: ``kube-system`` + + +``undeploy`` + +* If true, undeploys KWOK instead of deploying it. + diff --git a/_sources/toolbox.generated/Kwok.set_scale.rst.txt b/_sources/toolbox.generated/Kwok.set_scale.rst.txt new file mode 100644 index 0000000000..c966c0ba87 --- /dev/null +++ b/_sources/toolbox.generated/Kwok.set_scale.rst.txt @@ -0,0 +1,69 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Kwok.set_scale + + +kwok set_scale +============== + +Deploy a set of KWOK nodes + + + + +Parameters +---------- + + +``scale`` + +* The number of required nodes with given instance type + + +``taint`` + +* Taint to apply to the machineset. + + +``name`` + +* Name to give to the new machineset. + +* default value: ``kwok-machine`` + + +``role`` + +* Role of the new nodes + +* default value: ``worker`` + + +``cpu`` + +* Number of CPU allocatable + +* default value: ``32`` + + +``memory`` + +* Number of Gi of memory allocatable + +* default value: ``256`` + + +``gpu`` + +* Number of nvidia.com/gpu allocatable + + +``pods`` + +* Number of Pods allocatable + +* default value: ``250`` + diff --git a/_sources/toolbox.generated/Llm_Load_Test.run.rst.txt b/_sources/toolbox.generated/Llm_Load_Test.run.rst.txt new file mode 100644 index 0000000000..90c80c17e2 --- /dev/null +++ b/_sources/toolbox.generated/Llm_Load_Test.run.rst.txt @@ -0,0 +1,109 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... 
+ _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Llm_Load_Test.run + + +llm_load_test run +================= + +Load test the wisdom model + + + + +Parameters +---------- + + +``host`` + +* The host endpoint of the gRPC call + + +``port`` + +* The gRPC port on the specified host + + +``duration`` + +* The duration of the load testing + + +``plugin`` + +* The llm-load-test plugin to use (tgis_grpc_plugin or caikit_client_plugin for now) + +* default value: ``tgis_grpc_plugin`` + + +``interface`` + +* (http or grpc) the interface to use for llm-load-test-plugins that support both + +* default value: ``grpc`` + + +``model_id`` + +* The ID of the model to pass along with the GRPC call + +* default value: ``not-used`` + + +``src_path`` + +* Path where llm-load-test has been cloned + +* default value: ``projects/llm_load_test/subprojects/llm-load-test/`` + + +``streaming`` + +* Whether to stream the llm-load-test requests + +* default value: ``True`` + + +``use_tls`` + +* Whether to set use_tls: True (grpc in Serverless mode) + + +``concurrency`` + +* Number of concurrent simulated users sending requests + +* default value: ``16`` + + +``max_input_tokens`` + +* Max input tokens in llm load test to filter the dataset + +* default value: ``1024`` + + +``max_output_tokens`` + +* Max output tokens in llm load test to filter the dataset + +* default value: ``512`` + + +``max_sequence_tokens`` + +* Max sequence tokens in llm load test to filter the dataset + +* default value: ``1536`` + + +``endpoint`` + +* Name of the endpoint to query (for openai plugin only) + +* default value: ``/v1/completions`` + diff --git a/_sources/toolbox.generated/Local_Ci.run.rst.txt b/_sources/toolbox.generated/Local_Ci.run.rst.txt new file mode 100644 index 0000000000..cda6457ef4 --- /dev/null +++ b/_sources/toolbox.generated/Local_Ci.run.rst.txt @@ -0,0 +1,136 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... 
+ _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Local_Ci.run + + +local_ci run +============ + +Runs a given CI command + + + + +Parameters +---------- + + +``ci_command`` + +* The CI command to run. + + +``pr_number`` + +* The ID of the PR to use for the repository. + + +``git_repo`` + +* The GitHub repo to use. + +* default value: ``https://github.com/openshift-psap/topsail`` + + +``git_ref`` + +* The GitHub ref to use. + +* default value: ``main`` + + +``namespace`` + +* The namespace in which the image is located. + +* default value: ``topsail`` + + +``istag`` + +* The imagestream tag to use. + +* default value: ``topsail:main`` + + +``pod_name`` + +* The name to give to the Pod running the CI command. + +* default value: ``topsail`` + + +``service_account`` + +* Name of the ServiceAccount to use for running the Pod. + +* default value: ``default`` + + +``secret_name`` + +* Name of the Secret to mount in the Pod. + + +``secret_env_key`` + +* Name of the environment variable with which the secret path will be exposed in the Pod. + + +``test_name`` + +* Name of the test being executed. + +* default value: ``local-ci-test`` + + +``test_args`` + +* List of arguments to give to the test. + + +``init_command`` + +* Command to run in the container before running anything else. + + +``export_bucket_name`` + +* Name of the S3 bucket where the artifacts should be exported. + + +``export_test_run_identifier`` + +* Identifier of the test being executed (will be a dirname). + +* default value: ``default`` + + +``export`` + +* If True, exports the artifacts to the S3 bucket. If False, do not run the export command. + +* default value: ``True`` + + +``retrieve_artifacts`` + +* If False, do not retrieve the test artifacts locally. + +* default value: ``True`` + + +``pr_config`` + +* Optional path to a PR config file (avoids fetching the GitHub PR JSON). 
+ + +``update_git`` + +* If True, updates the git repo with the latest main/PR before running the test. + +* default value: ``True`` + diff --git a/_sources/toolbox.generated/Local_Ci.run_multi.rst.txt b/_sources/toolbox.generated/Local_Ci.run_multi.rst.txt new file mode 100644 index 0000000000..7b2f12bd10 --- /dev/null +++ b/_sources/toolbox.generated/Local_Ci.run_multi.rst.txt @@ -0,0 +1,148 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Local_Ci.run_multi + + +local_ci run_multi +================== + +Runs a given CI command in parallel from multiple Pods + + + + +Parameters +---------- + + +``ci_command`` + +* The CI command to run. + + +``user_count`` + +* Batch job parallelism count. +* type: Int + +* default value: ``1`` + + +``namespace`` + +* The namespace in which the image is located. + +* default value: ``topsail`` + + +``istag`` + +* The imagestream tag to use. + +* default value: ``topsail:main`` + + +``job_name`` + +* The name to give to the Job running the CI command. + +* default value: ``topsail`` + + +``service_account`` + +* Name of the ServiceAccount to use for running the Pod. + +* default value: ``default`` + + +``secret_name`` + +* Name of the Secret to mount in the Pod. + + +``secret_env_key`` + +* Name of the environment variable with which the secret path will be exposed in the Pod. + + +``retrieve_artifacts`` + +* If False, do not retrieve the test artifacts locally. + + +``minio_namespace`` + +* Namespace where the Minio server is located. + + +``minio_bucket_name`` + +* Name of the bucket in the Minio server. + + +``minio_secret_key_key`` + +* Key inside 'secret_env_key' containing the secret to access the Minio bucket. Must be in the form 'user_password=SECRET_KEY'. + + +``variable_overrides`` + +* Optional path to the variable_overrides config file (avoids fetching the GitHub PR JSON). 
+ + +``use_local_config`` + +* If true, gives the local configuration file ($TOPSAIL_FROM_CONFIG_FILE) to the Pods. + +* default value: ``True`` + + +``capture_prom_db`` + +* If True, captures the Prometheus DB of the systems. +* type: Bool + +* default value: ``True`` + + +``git_pull`` + +* If True, update the repo in the image with the latest version of the build ref before running the command in the Pods. +* type: Bool + + +``state_signal_redis_server`` + +* Optional address of the Redis server to pass to StateSignal synchronization. If empty, do not perform any synchronization. + + +``sleep_factor`` + +* Delay (in seconds) between the start of each of the users. + + +``user_batch_size`` + +* Number of users to launch after the sleep delay. + +* default value: ``1`` + + +``abort_on_failure`` + +* If true, let the Job abort the parallel execution on the first Pod failure. If false, ignore the process failure and track the overall failure count with a flag. + + +``need_all_success`` + +* If true, fails the execution if any of the Pods failed. If false, fails it if none of the Pods succeed. + + +``launch_as_daemon`` + +* If true, do not wait for the job to complete. Most of the options above become irrelevant + diff --git a/_sources/toolbox.generated/Nfd.has_gpu_nodes.rst.txt b/_sources/toolbox.generated/Nfd.has_gpu_nodes.rst.txt new file mode 100644 index 0000000000..6d2e575829 --- /dev/null +++ b/_sources/toolbox.generated/Nfd.has_gpu_nodes.rst.txt @@ -0,0 +1,15 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... 
+ _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Nfd.has_gpu_nodes + + +nfd has_gpu_nodes +================= + +Checks if the cluster has GPU nodes + + + diff --git a/_sources/toolbox.generated/Nfd.has_labels.rst.txt b/_sources/toolbox.generated/Nfd.has_labels.rst.txt new file mode 100644 index 0000000000..af11cd8f61 --- /dev/null +++ b/_sources/toolbox.generated/Nfd.has_labels.rst.txt @@ -0,0 +1,15 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Nfd.has_labels + + +nfd has_labels +============== + +Checks if the cluster has NFD labels + + + diff --git a/_sources/toolbox.generated/Nfd.wait_gpu_nodes.rst.txt b/_sources/toolbox.generated/Nfd.wait_gpu_nodes.rst.txt new file mode 100644 index 0000000000..3a65a40d07 --- /dev/null +++ b/_sources/toolbox.generated/Nfd.wait_gpu_nodes.rst.txt @@ -0,0 +1,15 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Nfd.wait_gpu_nodes + + +nfd wait_gpu_nodes +================== + +Wait until NFD finds GPU nodes + + + diff --git a/_sources/toolbox.generated/Nfd.wait_labels.rst.txt b/_sources/toolbox.generated/Nfd.wait_labels.rst.txt new file mode 100644 index 0000000000..f2679a3cf7 --- /dev/null +++ b/_sources/toolbox.generated/Nfd.wait_labels.rst.txt @@ -0,0 +1,15 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... 
+ _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Nfd.wait_labels + + +nfd wait_labels +=============== + +Wait until NFD labels the nodes + + + diff --git a/_sources/toolbox.generated/Nfd_Operator.deploy_from_operatorhub.rst.txt b/_sources/toolbox.generated/Nfd_Operator.deploy_from_operatorhub.rst.txt new file mode 100644 index 0000000000..a3a750b46f --- /dev/null +++ b/_sources/toolbox.generated/Nfd_Operator.deploy_from_operatorhub.rst.txt @@ -0,0 +1,41 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Nfd_Operator.deploy_from_operatorhub + + +nfd_operator deploy_from_operatorhub +==================================== + +Deploys the NFD Operator from OperatorHub + + + + +Parameters +---------- + + +``channel`` + +* The OperatorHub channel to deploy from, e.g. 4.7 + + +# Constants +# +# Defined as a constant in Nfd_Operator.deploy_from_operatorhub +cluster_deploy_operator_deploy_cr: true + +# +# Defined as a constant in Nfd_Operator.deploy_from_operatorhub +cluster_deploy_operator_namespace: openshift-nfd + +# +# Defined as a constant in Nfd_Operator.deploy_from_operatorhub +cluster_deploy_operator_manifest_name: nfd + +# +# Defined as a constant in Nfd_Operator.deploy_from_operatorhub +cluster_deploy_operator_catalog: redhat-operators diff --git a/_sources/toolbox.generated/Nfd_Operator.undeploy_from_operatorhub.rst.txt b/_sources/toolbox.generated/Nfd_Operator.undeploy_from_operatorhub.rst.txt new file mode 100644 index 0000000000..f16ebc6a50 --- /dev/null +++ b/_sources/toolbox.generated/Nfd_Operator.undeploy_from_operatorhub.rst.txt @@ -0,0 +1,15 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... 
+ _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Nfd_Operator.undeploy_from_operatorhub + + +nfd_operator undeploy_from_operatorhub +====================================== + +Undeploys an NFD-operator that was deployed from OperatorHub + + + diff --git a/_sources/toolbox.generated/Notebooks.benchmark_performance.rst.txt b/_sources/toolbox.generated/Notebooks.benchmark_performance.rst.txt new file mode 100644 index 0000000000..398312e709 --- /dev/null +++ b/_sources/toolbox.generated/Notebooks.benchmark_performance.rst.txt @@ -0,0 +1,75 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Notebooks.benchmark_performance + + +notebooks benchmark_performance +=============================== + +Benchmark the performance of a notebook image. + + + + +Parameters +---------- + + +``namespace`` + +* Namespace in which the notebook will be deployed, if not deploying with RHODS. + +* default value: ``rhods-notebooks`` + + +``imagestream`` + +* Imagestream to use to look up the notebook Pod image. + +* default value: ``s2i-generic-data-science-notebook`` + + +``imagestream_tag`` + +* Imagestream tag to use to look up the notebook Pod image. If empty and the image stream has only one tag, use it. Fails otherwise. + + +``notebook_directory`` + +* Directory containing the files to mount in the notebook. + +* default value: ``projects/notebooks/testing/notebooks/`` + + +``notebook_filename`` + +* Name of the ipynb notebook file to execute with JupyterLab. + +* default value: ``benchmark_entrypoint.ipynb`` + + +``benchmark_name`` + +* Name of the benchmark to execute in the notebook. + +* default value: ``pyperf_bm_go.py`` + + +``benchmark_repeat`` + +* Number of repeats of the benchmark to perform for one time measurement. 
+* type: Int + +* default value: ``1`` + + +``benchmark_number`` + +* Number of times the benchmark time measurement should be done. +* type: Int + +* default value: ``1`` + diff --git a/_sources/toolbox.generated/Notebooks.capture_state.rst.txt b/_sources/toolbox.generated/Notebooks.capture_state.rst.txt new file mode 100644 index 0000000000..0e03f0e230 --- /dev/null +++ b/_sources/toolbox.generated/Notebooks.capture_state.rst.txt @@ -0,0 +1,15 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Notebooks.capture_state + + +notebooks capture_state +======================= + +Capture information about the cluster and the RHODS notebooks deployment + + + diff --git a/_sources/toolbox.generated/Notebooks.cleanup.rst.txt b/_sources/toolbox.generated/Notebooks.cleanup.rst.txt new file mode 100644 index 0000000000..df6a2f196c --- /dev/null +++ b/_sources/toolbox.generated/Notebooks.cleanup.rst.txt @@ -0,0 +1,24 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Notebooks.cleanup + + +notebooks cleanup +================= + +Clean up the resources created along with the notebooks, during the scale tests. + + + + +Parameters +---------- + + +``username_prefix`` + +* Prefix of the usernames who created the resources. + diff --git a/_sources/toolbox.generated/Notebooks.dashboard_scale_test.rst.txt b/_sources/toolbox.generated/Notebooks.dashboard_scale_test.rst.txt new file mode 100644 index 0000000000..1d9c837b6a --- /dev/null +++ b/_sources/toolbox.generated/Notebooks.dashboard_scale_test.rst.txt @@ -0,0 +1,118 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... 
+ _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Notebooks.dashboard_scale_test + + +notebooks dashboard_scale_test +============================== + +End-to-end scale testing of the RHOAI dashboard, at the user level. + + + + +Parameters +---------- + + +``namespace`` + +* Namespace in which the scale test should be deployed. + + +``idp_name`` + +* Name of the identity provider to use. + + +``username_prefix`` + +* Prefix of the usernames to use to run the scale test. + + +``user_count`` + +* Number of users to run in parallel. +* type: Int + + +``secret_properties_file`` + +* Path of a file containing the properties of LDAP secrets. (See 'deploy_ldap' command) + + +``minio_namespace`` + +* Namespace where the Minio server is located. + + +``minio_bucket_name`` + +* Name of the bucket in the Minio server. + + +``user_index_offset`` + +* Offset to add to the user index to compute the user name. +* type: Int + + +``artifacts_collected`` + +* One of: 'all', 'no-screenshot', 'no-screenshot-except-zero', 'no-screenshot-except-failed', 'no-screenshot-except-failed-and-zero', 'none' + +* default value: ``all`` + + +``user_sleep_factor`` + +* Delay to sleep between users + +* default value: ``1.0`` + + +``user_batch_size`` + +* Number of users to launch at the same time. +* type: Int + +* default value: ``1`` + + +``ods_ci_istag`` + +* Imagestream tag of the ODS-CI container image. + + +``ods_ci_test_case`` + +* ODS-CI test case to execute. + +* default value: ``notebook_dsg_test.robot`` + + +``artifacts_exporter_istag`` + +* Imagestream tag of the artifacts exporter side-car container image. + + +``state_signal_redis_server`` + +* Hostname and port of the Redis server for StateSignal synchronization (for the synchronization of the beginning of the user simulation) + + +``toleration_key`` + +* Toleration key to use for the test Pods. + + +``capture_prom_db`` + +* If True, captures the Prometheus DB of the systems. 
+* type: Bool + +* default value: ``True`` + diff --git a/_sources/toolbox.generated/Notebooks.locust_scale_test.rst.txt b/_sources/toolbox.generated/Notebooks.locust_scale_test.rst.txt new file mode 100644 index 0000000000..dba521fcef --- /dev/null +++ b/_sources/toolbox.generated/Notebooks.locust_scale_test.rst.txt @@ -0,0 +1,138 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Notebooks.locust_scale_test + + +notebooks locust_scale_test +=========================== + +End-to-end testing of RHOAI notebooks at scale, at the API level + + + + +Parameters +---------- + + +``namespace`` + +* Namespace where the test will run + + +``idp_name`` + +* Name of the identity provider to use. + + +``secret_properties_file`` + +* Path of a file containing the properties of LDAP secrets. (See 'deploy_ldap' command). + + +``test_name`` + +* Test to perform. + + +``minio_namespace`` + +* Namespace where the Minio server is located. + + +``minio_bucket_name`` + +* Name of the bucket in the Minio server. + + +``username_prefix`` + +* Prefix of the RHODS users. + + +``user_count`` + +* Number of users to run in parallel. +* type: Int + + +``user_index_offset`` + +* Offset to add to the user index to compute the user name. +* type: Int + + +``locust_istag`` + +* Imagestream tag of the Locust container. + + +``artifacts_exporter_istag`` + +* Imagestream tag of the artifacts exporter side-car container. + + +``run_time`` + +* Test run time (e.g. 300s, 20m, 3h, 1h30m) + +* default value: ``1m`` + + +``spawn_rate`` + +* Rate to spawn users at (users per second) + +* default value: ``1`` + + +``sut_cluster_kubeconfig`` + +* Path of the system-under-test cluster's Kubeconfig. If provided, the RHODS endpoints will be looked up in this cluster. + + +``notebook_image_name`` + +* Name of the RHODS image to use when launching the notebooks. 
+ +* default value: ``s2i-generic-data-science-notebook`` + + +``notebook_size_name`` + +* Size name of the notebook. + +* default value: ``Small`` + + +``toleration_key`` + +* Toleration key to use for the test Pods. + + +``cpu_count`` + +* Number of Locust processes to launch (one per Pod, with 1 CPU each). +* type: Int + +* default value: ``1`` + + +``user_sleep_factor`` + +* Delay to sleep between users +* type: Float + +* default value: ``1.0`` + + +``capture_prom_db`` + +* If True, captures the Prometheus DB of the systems. +* type: Bool + +* default value: ``True`` + diff --git a/_sources/toolbox.generated/Notebooks.ods_ci_scale_test.rst.txt b/_sources/toolbox.generated/Notebooks.ods_ci_scale_test.rst.txt new file mode 100644 index 0000000000..fa34c7a898 --- /dev/null +++ b/_sources/toolbox.generated/Notebooks.ods_ci_scale_test.rst.txt @@ -0,0 +1,190 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Notebooks.ods_ci_scale_test + + +notebooks ods_ci_scale_test +=========================== + +End-to-end scale testing of RHOAI notebooks, at the user level. + + + + +Parameters +---------- + + +``namespace`` + +* Namespace in which the scale test should be deployed. + + +``idp_name`` + +* Name of the identity provider to use. + + +``username_prefix`` + +* Prefix of the usernames to use to run the scale test. + + +``user_count`` + +* Number of users to run in parallel. +* type: Int + + +``secret_properties_file`` + +* Path of a file containing the properties of LDAP secrets. (See 'deploy_ldap' command) + + +``notebook_url`` + +* URL from which the notebook will be downloaded. + + +``minio_namespace`` + +* Namespace where the Minio server is located. + + +``minio_bucket_name`` + +* Name of the bucket in the Minio server. + + +``user_index_offset`` + +* Offset to add to the user index to compute the user name. 
+* type: Int + + +``sut_cluster_kubeconfig`` + +* Path of the system-under-test cluster's Kubeconfig. If provided, the RHODS endpoints will be looked up in this cluster. + + +``artifacts_collected`` + +* One of: 'all', 'no-screenshot', 'no-screenshot-except-zero', 'no-screenshot-except-failed', 'no-screenshot-except-failed-and-zero', 'none' + +* default value: ``all`` + + +``user_sleep_factor`` + +* Delay to sleep between users + +* default value: ``1.0`` + + +``user_batch_size`` + +* Number of users to launch at the same time. +* type: Int + +* default value: ``1`` + + +``ods_ci_istag`` + +* Imagestream tag of the ODS-CI container image. + + +``ods_ci_exclude_tags`` + +* Tags to exclude in the ODS-CI test case. + +* default value: ``None`` + + +``ods_ci_test_case`` + +* Robot test case name. + +* default value: ``notebook_dsg_test.robot`` + + +``artifacts_exporter_istag`` + +* Imagestream tag of the artifacts exporter side-car container image. + + +``notebook_image_name`` + +* Notebook image name. + +* default value: ``s2i-generic-data-science-notebook`` + + +``notebook_size_name`` + +* Notebook size. + +* default value: ``Small`` + + +``notebook_benchmark_name`` + +* Benchmark script file name to execute in the notebook. + +* default value: ``pyperf_bm_go.py`` + + +``notebook_benchmark_number`` + +* Number of benchmark executions per repeat. + +* default value: ``20`` + + +``notebook_benchmark_repeat`` + +* Number of benchmark repeats to execute. + +* default value: ``2`` + + +``state_signal_redis_server`` + +* Hostname and port of the Redis server for StateSignal synchronization (for the synchronization of the beginning of the user simulation) + + +``toleration_key`` + +* Toleration key to use for the test Pods. + + +``capture_prom_db`` + +* If True, captures the Prometheus DB of the systems. +* type: Bool + +* default value: ``True`` + + +``stop_notebooks_on_exit`` + +* If False, keep the user notebooks running at the end of the test. 
+* type: Bool + +* default value: ``True`` + + +``only_create_notebooks`` + +* If True, only create the notebooks, but don't start them. This will overwrite the value of 'ods_ci_exclude_tags'. +* type: Bool + + +``driver_running_on_spot`` + +* If True, consider that the driver Pods are running on Spot instances and can disappear at any time. +* type: Bool + diff --git a/_sources/toolbox.generated/Pipelines.capture_state.rst.txt b/_sources/toolbox.generated/Pipelines.capture_state.rst.txt new file mode 100644 index 0000000000..993d9a4ac8 --- /dev/null +++ b/_sources/toolbox.generated/Pipelines.capture_state.rst.txt @@ -0,0 +1,41 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Pipelines.capture_state + + +pipelines capture_state +======================= + +Captures the state of a Data Science Pipeline Application in a given namespace. + + + + +Parameters +---------- + + +``dsp_application_name`` + +* The name of the application + + +``namespace`` + +* The namespace in which the application was deployed + + +``user_id`` + +* Identifier of the user to capture + + +``capture_extra_artifacts`` + +* Whether to capture extra descriptions and YAMLs + +* default value: ``True`` + diff --git a/_sources/toolbox.generated/Pipelines.deploy_application.rst.txt b/_sources/toolbox.generated/Pipelines.deploy_application.rst.txt new file mode 100644 index 0000000000..200382de62 --- /dev/null +++ b/_sources/toolbox.generated/Pipelines.deploy_application.rst.txt @@ -0,0 +1,29 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Pipelines.deploy_application + + +pipelines deploy_application +============================ + +Deploy a Data Science Pipeline Application in a given namespace. 
+ + + +Parameters +---------- + + +``name`` + +* The name of the application to deploy + + +``namespace`` + +* The namespace in which the application should be deployed + diff --git a/_sources/toolbox.generated/Pipelines.run_kfp_notebook.rst.txt b/_sources/toolbox.generated/Pipelines.run_kfp_notebook.rst.txt new file mode 100644 index 0000000000..ff0a4afd86 --- /dev/null +++ b/_sources/toolbox.generated/Pipelines.run_kfp_notebook.rst.txt @@ -0,0 +1,101 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Pipelines.run_kfp_notebook + + +pipelines run_kfp_notebook +========================== + +Run a notebook in a given notebook image. + + + + +Parameters +---------- + + +``namespace`` + +* Namespace in which the notebook will be deployed, if not deploying with RHODS. If empty, use the project returned by 'oc project --short'. + + +``dsp_application_name`` + +* The name of the DSPipelines Application to use. If empty, look up the application name in the namespace. + + +``imagestream`` + +* Imagestream to use to look up the notebook Pod image. + +* default value: ``s2i-generic-data-science-notebook`` + + +``imagestream_tag`` + +* Imagestream tag to use to look up the notebook Pod image. If empty and the image stream has only one tag, use it. Fails otherwise. + + +``notebook_name`` + +* A prefix to add to the name of the notebook, to differentiate notebooks in the same project + + +``notebook_directory`` + +* Directory containing the files to mount in the notebook. + +* default value: ``testing/pipelines/notebooks/hello-world`` + + +``notebook_filename`` + +* Name of the ipynb notebook file to execute with JupyterLab. 
+ +* default value: ``kfp_hello_world.ipynb`` + + +``run_count`` + +* Number of times to run the pipeline + + +``run_delay`` + +* Number of seconds to wait before triggering the next run from the notebook + + +``stop_on_exit`` + +* If False, keep the notebook running after the test. + +* default value: ``True`` + + +``capture_artifacts`` + +* If False, disable the post-test artifact collection. + +* default value: ``True`` + + +``capture_prom_db`` + +* If True, captures the Prometheus DB of the systems. + + +``capture_extra_artifacts`` + +* Whether to capture extra descriptions and YAMLs + +* default value: ``True`` + + +``wait_for_run_completion`` + +* Whether to wait for one run's completion before starting the next + diff --git a/_sources/toolbox.generated/Repo.generate_ansible_default_settings.rst.txt b/_sources/toolbox.generated/Repo.generate_ansible_default_settings.rst.txt new file mode 100644 index 0000000000..eb0b42fd4f --- /dev/null +++ b/_sources/toolbox.generated/Repo.generate_ansible_default_settings.rst.txt @@ -0,0 +1,15 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Repo.generate_ansible_default_settings + + +repo generate_ansible_default_settings +====================================== + +Generate the ``defaults/main/config.yml`` file of the Ansible roles, based on the Python definition. + + + diff --git a/_sources/toolbox.generated/Repo.generate_middleware_ci_secret_boilerplate.rst.txt b/_sources/toolbox.generated/Repo.generate_middleware_ci_secret_boilerplate.rst.txt new file mode 100644 index 0000000000..d40a3512b4 --- /dev/null +++ b/_sources/toolbox.generated/Repo.generate_middleware_ci_secret_boilerplate.rst.txt @@ -0,0 +1,34 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... 
+ _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Repo.generate_middleware_ci_secret_boilerplate + + +repo generate_middleware_ci_secret_boilerplate +============================================== + +Generate the boilerplate code to include a new secret in the Middleware CI configuration + + + + +Parameters +---------- + + +``name`` + +* Name of the new secret to include + + +``description`` + +* Description of the secret to include + + +``varname`` + +* Optional short name of the file + diff --git a/_sources/toolbox.generated/Repo.generate_toolbox_related_files.rst.txt b/_sources/toolbox.generated/Repo.generate_toolbox_related_files.rst.txt new file mode 100644 index 0000000000..778b4a444f --- /dev/null +++ b/_sources/toolbox.generated/Repo.generate_toolbox_related_files.rst.txt @@ -0,0 +1,15 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Repo.generate_toolbox_related_files + + +repo generate_toolbox_related_files +=================================== + +Generate the RST documents and Ansible default settings, based on the Toolbox Python definition. + + + diff --git a/_sources/toolbox.generated/Repo.generate_toolbox_rst_documentation.rst.txt b/_sources/toolbox.generated/Repo.generate_toolbox_rst_documentation.rst.txt new file mode 100644 index 0000000000..5089a36505 --- /dev/null +++ b/_sources/toolbox.generated/Repo.generate_toolbox_rst_documentation.rst.txt @@ -0,0 +1,15 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Repo.generate_toolbox_rst_documentation + + +repo generate_toolbox_rst_documentation +======================================= + +Generate the ``doc/toolbox.generated/*.rst`` files, based on the Toolbox Python definition. 
+ + + diff --git a/_sources/toolbox.generated/Repo.send_job_completion_notification.rst.txt b/_sources/toolbox.generated/Repo.send_job_completion_notification.rst.txt new file mode 100644 index 0000000000..1426bfe080 --- /dev/null +++ b/_sources/toolbox.generated/Repo.send_job_completion_notification.rst.txt @@ -0,0 +1,50 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Repo.send_job_completion_notification + + +repo send_job_completion_notification +===================================== + +Send a *job completion* notification to GitHub and/or Slack about the completion of a test job. + +A *job completion* notification is the message sent at the end of a CI job. + + +Parameters +---------- + + +``reason`` + +* Reason of the job completion. Can be ERR or EXIT. +* type: Str + + +``status`` + +* A status message to write at the top of the notification. +* type: Str + + +``github`` + +* Enable or disable sending the *job completion* notification to GitHub + +* default value: ``True`` + + +``slack`` + +* Enable or disable sending the *job completion* notification to Slack + +* default value: ``True`` + + +``dry_run`` + +* If enabled, don't send any notification, just show the message in the logs + diff --git a/_sources/toolbox.generated/Repo.validate_no_broken_link.rst.txt b/_sources/toolbox.generated/Repo.validate_no_broken_link.rst.txt new file mode 100644 index 0000000000..f5926b6a8b --- /dev/null +++ b/_sources/toolbox.generated/Repo.validate_no_broken_link.rst.txt @@ -0,0 +1,15 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... 
+ _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Repo.validate_no_broken_link + + +repo validate_no_broken_link +============================ + +Ensure that all the symlinks point to a file + + + diff --git a/_sources/toolbox.generated/Repo.validate_no_wip.rst.txt b/_sources/toolbox.generated/Repo.validate_no_wip.rst.txt new file mode 100644 index 0000000000..d9e4a2de34 --- /dev/null +++ b/_sources/toolbox.generated/Repo.validate_no_wip.rst.txt @@ -0,0 +1,15 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Repo.validate_no_wip + + +repo validate_no_wip +==================== + +Ensures that none of the commits have the WIP flag in their message title. + + + diff --git a/_sources/toolbox.generated/Repo.validate_role_files.rst.txt b/_sources/toolbox.generated/Repo.validate_role_files.rst.txt new file mode 100644 index 0000000000..8a70cb5f85 --- /dev/null +++ b/_sources/toolbox.generated/Repo.validate_role_files.rst.txt @@ -0,0 +1,15 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Repo.validate_role_files + + +repo validate_role_files +======================== + +Ensures that all the Ansible variables defining a filepath (``project/*/toolbox/``) do point to an existing file. + + + diff --git a/_sources/toolbox.generated/Repo.validate_role_vars_used.rst.txt b/_sources/toolbox.generated/Repo.validate_role_vars_used.rst.txt new file mode 100644 index 0000000000..499c0b8cbc --- /dev/null +++ b/_sources/toolbox.generated/Repo.validate_role_vars_used.rst.txt @@ -0,0 +1,15 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... 
+ _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Repo.validate_role_vars_used + + +repo validate_role_vars_used +============================ + +Ensure that all the Ansible variables defined are actually used in their role (with an exception for symlinks) + + + diff --git a/_sources/toolbox.generated/Rhods.capture_state.rst.txt b/_sources/toolbox.generated/Rhods.capture_state.rst.txt new file mode 100644 index 0000000000..1f1098c482 --- /dev/null +++ b/_sources/toolbox.generated/Rhods.capture_state.rst.txt @@ -0,0 +1,15 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Rhods.capture_state + + +rhods capture_state +=================== + +Captures the state of the RHOAI deployment + + + diff --git a/_sources/toolbox.generated/Rhods.delete_ods.rst.txt b/_sources/toolbox.generated/Rhods.delete_ods.rst.txt new file mode 100644 index 0000000000..3a5e69f0c8 --- /dev/null +++ b/_sources/toolbox.generated/Rhods.delete_ods.rst.txt @@ -0,0 +1,26 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Rhods.delete_ods + + +rhods delete_ods +================ + +Forces ODS operator deletion + + + + +Parameters +---------- + + +``namespace`` + +* Namespace where RHODS is installed. + +* default value: ``redhat-ods-operator`` + diff --git a/_sources/toolbox.generated/Rhods.deploy_addon.rst.txt b/_sources/toolbox.generated/Rhods.deploy_addon.rst.txt new file mode 100644 index 0000000000..a8f6140150 --- /dev/null +++ b/_sources/toolbox.generated/Rhods.deploy_addon.rst.txt @@ -0,0 +1,41 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... 
+ _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Rhods.deploy_addon + + +rhods deploy_addon +================== + +Installs the RHODS OCM addon + + + + +Parameters +---------- + + +``cluster_name`` + +* The name of the cluster where RHODS should be deployed. + + +``notification_email`` + +* The email to register for RHODS addon deployment. + + +``wait_for_ready_state`` + +* If true (default), will cause the role to wait until the addon reports a ready state. (Can time out) + +* default value: ``True`` + + +# Constants +# Identifier of the addon that should be deployed +# Defined as a constant in Rhods.deploy_addon +ocm_deploy_addon_ocm_deploy_addon_id: managed-odh diff --git a/_sources/toolbox.generated/Rhods.deploy_ods.rst.txt b/_sources/toolbox.generated/Rhods.deploy_ods.rst.txt new file mode 100644 index 0000000000..58ed412049 --- /dev/null +++ b/_sources/toolbox.generated/Rhods.deploy_ods.rst.txt @@ -0,0 +1,56 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Rhods.deploy_ods + + +rhods deploy_ods +================ + +Deploy ODS operator from its custom catalog + + + + +Parameters +---------- + + +``catalog_image`` + +* Container image containing the RHODS bundle. + + +``tag`` + +* Catalog image tag to use to deploy RHODS. + + +``channel`` + +* The channel to use for the deployment. Leave empty to use the default channel. + + +``version`` + +* The version to deploy. Leave empty to install the latest available version. + + +``disable_dsc_config`` + +* If True, pass the flag to disable the DSC configuration + + +``opendatahub`` + +* If True, deploys an OpenDataHub manifest instead of RHOAI + + +``managed_rhoai`` + +* If True, deploys RHOAI with the Managed Service flag. If False, deploys it as Self-Managed. 
+ +* default value: ``True`` + diff --git a/_sources/toolbox.generated/Rhods.dump_prometheus_db.rst.txt b/_sources/toolbox.generated/Rhods.dump_prometheus_db.rst.txt new file mode 100644 index 0000000000..62897097f9 --- /dev/null +++ b/_sources/toolbox.generated/Rhods.dump_prometheus_db.rst.txt @@ -0,0 +1,43 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Rhods.dump_prometheus_db + + +rhods dump_prometheus_db +======================== + +Dump Prometheus database into a file + + + + +Parameters +---------- + + +``dump_name_prefix`` + +* Prefix to give to the name of the Prometheus DB dump + +* default value: ``prometheus`` + + +# Constants +# +# Defined as a constant in Rhods.dump_prometheus_db +cluster_prometheus_db_cluster_prometheus_db_directory: /prometheus/data + +# +# Defined as a constant in Rhods.dump_prometheus_db +cluster_prometheus_db_cluster_prometheus_db_namespace: redhat-ods-monitoring + +# +# Defined as a constant in Rhods.dump_prometheus_db
cluster_prometheus_db_cluster_prometheus_db_label: deployment=prometheus + +# +# Defined as a constant in Rhods.dump_prometheus_db +cluster_prometheus_db_cluster_prometheus_db_mode: dump diff --git a/_sources/toolbox.generated/Rhods.reset_prometheus_db.rst.txt b/_sources/toolbox.generated/Rhods.reset_prometheus_db.rst.txt new file mode 100644 index 0000000000..1218ad62fb --- /dev/null +++ b/_sources/toolbox.generated/Rhods.reset_prometheus_db.rst.txt @@ -0,0 +1,28 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Rhods.reset_prometheus_db + + +rhods reset_prometheus_db +========================= + +Resets the RHODS Prometheus database by destroying its Pod. 
+ + + + +# Constants +# +# Defined as a constant in Rhods.reset_prometheus_db +cluster_prometheus_db_cluster_prometheus_db_namespace: redhat-ods-monitoring + +# +# Defined as a constant in Rhods.reset_prometheus_db +cluster_prometheus_db_cluster_prometheus_db_label: deployment=prometheus + +# +# Defined as a constant in Rhods.reset_prometheus_db +cluster_prometheus_db_cluster_prometheus_db_mode: reset diff --git a/_sources/toolbox.generated/Rhods.undeploy_ods.rst.txt b/_sources/toolbox.generated/Rhods.undeploy_ods.rst.txt new file mode 100644 index 0000000000..c036bb508e --- /dev/null +++ b/_sources/toolbox.generated/Rhods.undeploy_ods.rst.txt @@ -0,0 +1,26 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Rhods.undeploy_ods + + +rhods undeploy_ods +================== + +Undeploy ODS operator + + + + +Parameters +---------- + + +``namespace`` + +* Namespace where RHODS is installed. + +* default value: ``redhat-ods-operator`` + diff --git a/_sources/toolbox.generated/Rhods.update_datasciencecluster.rst.txt b/_sources/toolbox.generated/Rhods.update_datasciencecluster.rst.txt new file mode 100644 index 0000000000..fea7031469 --- /dev/null +++ b/_sources/toolbox.generated/Rhods.update_datasciencecluster.rst.txt @@ -0,0 +1,43 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Rhods.update_datasciencecluster + + +rhods update_datasciencecluster +=============================== + +Update RHOAI datasciencecluster resource + + + + +Parameters +---------- + + +``name`` + +* Name of the resource to update. If none, update the first (and only) one found. + + +``enable`` + +* List of all the components to enable +* type: List + + +``show_all`` + +* If enabled, show all the available components and exit. 
+ + +``extra_settings`` + +* Dict of key:value to set manually in the DSC, using JSON dot notation. +* type: Dict + +* default value: ``{'spec.components.kserve.serving.managementState': 'Removed'}`` + diff --git a/_sources/toolbox.generated/Rhods.wait_odh.rst.txt b/_sources/toolbox.generated/Rhods.wait_odh.rst.txt new file mode 100644 index 0000000000..e6b0ffddbd --- /dev/null +++ b/_sources/toolbox.generated/Rhods.wait_odh.rst.txt @@ -0,0 +1,26 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Rhods.wait_odh + + +rhods wait_odh +============== + +Wait for ODH to finish its deployment + + + + +Parameters +---------- + + +``namespace`` + +* Namespace in which ODH is deployed + +* default value: ``opendatahub`` + diff --git a/_sources/toolbox.generated/Rhods.wait_ods.rst.txt b/_sources/toolbox.generated/Rhods.wait_ods.rst.txt new file mode 100644 index 0000000000..a8dd0c6fb4 --- /dev/null +++ b/_sources/toolbox.generated/Rhods.wait_ods.rst.txt @@ -0,0 +1,20 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Rhods.wait_ods + + +rhods wait_ods +============== + +Wait for ODS to finish its deployment + + + + +# Constants +# Comma-separated list of the RHODS images that should be awaited +# Defined as a constant in Rhods.wait_ods +rhods_wait_ods_images: s2i-minimal-notebook,s2i-generic-data-science-notebook diff --git a/_sources/toolbox.generated/Scheduler.cleanup.rst.txt b/_sources/toolbox.generated/Scheduler.cleanup.rst.txt new file mode 100644 index 0000000000..495cee125e --- /dev/null +++ b/_sources/toolbox.generated/Scheduler.cleanup.rst.txt @@ -0,0 +1,24 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... 
+ _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Scheduler.cleanup + + +scheduler cleanup +================= + +Clean up the scheduler load namespace + + + + +Parameters +---------- + + +``namespace`` + +* Name of the namespace where the scheduler load was generated + diff --git a/_sources/toolbox.generated/Scheduler.create_mcad_canary.rst.txt b/_sources/toolbox.generated/Scheduler.create_mcad_canary.rst.txt new file mode 100644 index 0000000000..ab55309eed --- /dev/null +++ b/_sources/toolbox.generated/Scheduler.create_mcad_canary.rst.txt @@ -0,0 +1,24 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Scheduler.create_mcad_canary + + +scheduler create_mcad_canary +============================ + +Create a canary for MCAD Appwrappers and track the time it takes to be scheduled + + + + +Parameters +---------- + + +``namespace`` + +* Name of the namespace where the canary should be generated + diff --git a/_sources/toolbox.generated/Scheduler.deploy_mcad_from_helm.rst.txt b/_sources/toolbox.generated/Scheduler.deploy_mcad_from_helm.rst.txt new file mode 100644 index 0000000000..20774418ab --- /dev/null +++ b/_sources/toolbox.generated/Scheduler.deploy_mcad_from_helm.rst.txt @@ -0,0 +1,52 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... 
+ _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Scheduler.deploy_mcad_from_helm + + +scheduler deploy_mcad_from_helm +=============================== + +Deploys MCAD from helm + + + + +Parameters +---------- + + +``namespace`` + +* Name of the namespace where MCAD should be deployed + + +``git_repo`` + +* Name of the GIT repo to clone + +* default value: ``https://github.com/project-codeflare/multi-cluster-app-dispatcher`` + + +``git_ref`` + +* Name of the GIT branch to fetch + +* default value: ``main`` + + +``image_repo`` + +* Name of the image registry where the image is stored + +* default value: ``quay.io/project-codeflare/mcad-controller`` + + +``image_tag`` + +* Tag of the image to use + +* default value: ``stable`` + diff --git a/_sources/toolbox.generated/Scheduler.generate_load.rst.txt b/_sources/toolbox.generated/Scheduler.generate_load.rst.txt new file mode 100644 index 0000000000..5523ffaa6e --- /dev/null +++ b/_sources/toolbox.generated/Scheduler.generate_load.rst.txt @@ -0,0 +1,116 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... 
+ _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Scheduler.generate_load + + +scheduler generate_load +======================= + +Generate scheduler load + + + + +Parameters +---------- + + +``namespace`` + +* Name of the namespace where the scheduler load will be generated + + +``base_name`` + +* Name prefix for the scheduler resources + +* default value: ``sched-test-`` + + +``job_template_name`` + +* Name of the job template to use inside the AppWrapper + +* default value: ``sleeper`` + + +``aw_states_target`` + +* List of expected AppWrapper target states + + +``aw_states_unexpected`` + +* List of AppWrapper states that fail the test + + +``mode`` + +* Mcad, kueue, coscheduling or job + +* default value: ``job`` + + +``count`` + +* Number of resources to create + +* default value: ``3`` + + +``pod_count`` + +* Number of Pods to create in each of the AppWrappers + +* default value: ``1`` + + +``pod_runtime`` + +* Run time parameter to pass to the Pod + +* default value: ``30`` + + +``pod_requests`` + +* Requests to pass to the Pod definition + +* default value: ``{'cpu': '100m'}`` + + +``timespan`` + +* Number of minutes over which the resources should be created + + +``distribution`` + +* The distribution method to use to spread the resource creation over the requested timespan + +* default value: ``poisson`` + + +``scheduler_load_generator`` + +* The path of the scheduler load generator to launch + +* default value: ``projects/scheduler/subprojects/scheduler-load-generator/generator.py`` + + +``kueue_queue`` + +* The name of the Kueue queue to use + +* default value: ``local-queue`` + + +``resource_kind`` + +* The kind of resource created by the load generator + +* default value: ``job`` + diff --git a/_sources/toolbox.generated/Server.deploy_ldap.rst.txt b/_sources/toolbox.generated/Server.deploy_ldap.rst.txt new file mode 100644 index 0000000000..218230d3e5 --- /dev/null +++ 
b/_sources/toolbox.generated/Server.deploy_ldap.rst.txt @@ -0,0 +1,67 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Server.deploy_ldap + + +server deploy_ldap +================== + +Deploy OpenLDAP and LDAP Oauth + +Example of secret properties file: + +admin_password=adminpasswd + + +Parameters +---------- + + +``idp_name`` + +* Name of the LDAP identity provider. + + +``username_prefix`` + +* Prefix for the creation of the users (suffix is 0..username_count) + + +``username_count`` + +* Number of users to create. +* type: Int + + +``secret_properties_file`` + +* Path of a file containing the properties of LDAP secrets. + + +``use_ocm`` + +* If true, use `ocm create idp` to deploy the LDAP identity provider. + + +``use_rosa`` + +* If true, use `rosa create idp` to deploy the LDAP identity provider. + + +``cluster_name`` + +* Cluster to use when using OCM or ROSA. + + +``wait`` + +* If True, waits for the first user (0) to be able to login into the cluster. + + +# Constants +# Name of the admin user +# Defined as a constant in Server.deploy_ldap +server_deploy_ldap_admin_user: admin diff --git a/_sources/toolbox.generated/Server.deploy_minio_s3_server.rst.txt b/_sources/toolbox.generated/Server.deploy_minio_s3_server.rst.txt new file mode 100644 index 0000000000..c25accb252 --- /dev/null +++ b/_sources/toolbox.generated/Server.deploy_minio_s3_server.rst.txt @@ -0,0 +1,50 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Server.deploy_minio_s3_server + + +server deploy_minio_s3_server +============================= + +Deploy Minio S3 server + +Example of secret properties file: + +user_password=passwd +admin_password=adminpasswd + + +Parameters +---------- + + +``secret_properties_file`` + +* Path of a file containing the properties of S3 secrets. 
+ + +``namespace`` + +* Namespace in which Minio should be deployed. + +* default value: ``minio`` + + +``bucket_name`` + +* The name of the default bucket to create in Minio. + +* default value: ``myBucket`` + + +# Constants +# Name of the Minio admin user +# Defined as a constant in Server.deploy_minio_s3_server +server_deploy_minio_s3_server_root_user: admin + +# Name of the user/access key to use to connect to the Minio server +# Defined as a constant in Server.deploy_minio_s3_server +server_deploy_minio_s3_server_access_key: minio diff --git a/_sources/toolbox.generated/Server.deploy_nginx_server.rst.txt b/_sources/toolbox.generated/Server.deploy_nginx_server.rst.txt new file mode 100644 index 0000000000..a3f7f22b87 --- /dev/null +++ b/_sources/toolbox.generated/Server.deploy_nginx_server.rst.txt @@ -0,0 +1,29 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Server.deploy_nginx_server + + +server deploy_nginx_server +========================== + +Deploy an NGINX HTTP server + + + + +Parameters +---------- + + +``namespace`` + +* Namespace where the server will be deployed. Will be created if it doesn't exist. + + +``directory`` + +* Directory containing the files to serve on the HTTP server. + diff --git a/_sources/toolbox.generated/Server.deploy_opensearch.rst.txt b/_sources/toolbox.generated/Server.deploy_opensearch.rst.txt new file mode 100644 index 0000000000..f1fba5b3f2 --- /dev/null +++ b/_sources/toolbox.generated/Server.deploy_opensearch.rst.txt @@ -0,0 +1,41 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ...
+ _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Server.deploy_opensearch + + +server deploy_opensearch +======================== + +Deploy OpenSearch and OpenSearch-Dashboards + +Example of secret properties file: + +user_password=passwd +admin_password=adminpasswd + + +Parameters +---------- + + +``secret_properties_file`` + +* Path of a file containing the properties of the OpenSearch secrets. + + +``namespace`` + +* Namespace in which the application will be deployed + +* default value: ``opensearch`` + + +``name`` + +* Name to give to the OpenSearch instance + +* default value: ``opensearch`` + diff --git a/_sources/toolbox.generated/Server.deploy_redis_server.rst.txt b/_sources/toolbox.generated/Server.deploy_redis_server.rst.txt new file mode 100644 index 0000000000..4f1e71192f --- /dev/null +++ b/_sources/toolbox.generated/Server.deploy_redis_server.rst.txt @@ -0,0 +1,24 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Server.deploy_redis_server + + +server deploy_redis_server +========================== + +Deploy a Redis server + + + + +Parameters
----------
+ + +``namespace`` + +* Namespace where the server will be deployed. Will be created if it doesn't exist. + diff --git a/_sources/toolbox.generated/Server.undeploy_ldap.rst.txt b/_sources/toolbox.generated/Server.undeploy_ldap.rst.txt new file mode 100644 index 0000000000..6682caa5d8 --- /dev/null +++ b/_sources/toolbox.generated/Server.undeploy_ldap.rst.txt @@ -0,0 +1,39 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Server.undeploy_ldap + + +server undeploy_ldap +==================== + +Undeploy OpenLDAP and LDAP Oauth + + + + +Parameters
----------
+ + +``idp_name`` + +* Name of the LDAP identity provider.
+ + +``use_ocm`` + +* If true, use `ocm delete idp` to delete the LDAP identity provider. + + +``use_rosa`` + +* If true, use `rosa delete idp` to delete the LDAP identity provider. + + +``cluster_name`` + +* Cluster to use when using OCM or ROSA. + diff --git a/_sources/toolbox.generated/Storage.deploy_aws_efs.rst.txt b/_sources/toolbox.generated/Storage.deploy_aws_efs.rst.txt new file mode 100644 index 0000000000..085d869d46 --- /dev/null +++ b/_sources/toolbox.generated/Storage.deploy_aws_efs.rst.txt @@ -0,0 +1,15 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Storage.deploy_aws_efs + + +storage deploy_aws_efs +====================== + +Deploy AWS EFS CSI driver and configure AWS accordingly. + +Assumes that AWS (credentials, Ansible module, Python module) is properly configured in the system. + diff --git a/_sources/toolbox.generated/Storage.deploy_nfs_provisioner.rst.txt b/_sources/toolbox.generated/Storage.deploy_nfs_provisioner.rst.txt new file mode 100644 index 0000000000..18a6c70863 --- /dev/null +++ b/_sources/toolbox.generated/Storage.deploy_nfs_provisioner.rst.txt @@ -0,0 +1,52 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... 
+ _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Storage.deploy_nfs_provisioner + + +storage deploy_nfs_provisioner +============================== + +Deploy NFS Provisioner + + + + +Parameters +---------- + + +``namespace`` + +* The namespace where the resources will be deployed + +* default value: ``nfs-provisioner`` + + +``pvc_sc`` + +* The name of the storage class to use for the NFS-provisioner PVC + +* default value: ``gp3-csi`` + + +``pvc_size`` + +* The size of the PVC to give to the NFS-provisioner + +* default value: ``10Gi`` + + +``storage_class_name`` + +* The name of the storage class that will be created + +* default value: ``nfs-provisioner`` + + +``default_sc`` + +* Set to true to mark the storage class as default in the cluster + diff --git a/_sources/toolbox.generated/Storage.download_to_pvc.rst.txt b/_sources/toolbox.generated/Storage.download_to_pvc.rst.txt new file mode 100644 index 0000000000..a25d7b595e --- /dev/null +++ b/_sources/toolbox.generated/Storage.download_to_pvc.rst.txt @@ -0,0 +1,82 @@ +:orphan: + +.. + _Auto-generated file, do not edit manually ... + _Toolbox generate command: repo generate_toolbox_rst_documentation + _ Source component: Storage.download_to_pvc + + +storage download_to_pvc +======================= + +Downloads a dataset into a PVC of the cluster + + + + +Parameters +---------- + + +``name`` + +* Name of the data source + + +``source`` + +* URL of the source data + + +``pvc_name`` + +* Name of the PVC that will be created to store the dataset files. + + +``namespace`` + +* Name of the namespace in which the PVC will be created + + +``creds`` + +* Path to credentials to use for accessing the dataset. + + +``storage_dir`` + +* The path where to store the downloaded files, in the PVC + +* default value: ``/`` + + +``clean_first`` + +* If True, clears the storage directory before downloading.
+ + +``pvc_access_mode`` + +* The access mode to request when creating the PVC + +* default value: ``ReadWriteOnce`` + + +``pvc_size`` + +* The size of the PVC to request, when creating the PVC + +* default value: ``80Gi`` + + +``pvc_storage_class_name`` + +* The name of the storage class to pass when creating the PVC + + +``image`` + +* The image to use for running the download Pod + +* default value: ``registry.access.redhat.com/ubi9/ubi`` + diff --git a/_sources/toolbox.generated/index.rst.txt b/_sources/toolbox.generated/index.rst.txt new file mode 100644 index 0000000000..b0c8f7ada3 --- /dev/null +++ b/_sources/toolbox.generated/index.rst.txt @@ -0,0 +1,329 @@ + +Toolbox Documentation +===================== + + +``busy_cluster`` +**************** + +:: + + Commands relating to making a cluster busy with a lot of resources + + + +* :doc:`cleanup ` Cleans up namespaces to make a cluster un-busy +* :doc:`create_configmaps ` Creates configmaps and secrets to make a cluster busy +* :doc:`create_deployments ` Creates deployments to make a cluster busy +* :doc:`create_jobs ` Creates jobs to make a cluster busy +* :doc:`create_namespaces ` Creates namespaces to make a cluster busy +* :doc:`status ` Shows the busyness of the cluster + +``cluster`` +*********** + +:: + + Commands relating to cluster scaling, upgrading and environment capture + + + +* :doc:`build_push_image ` Build and publish an image to quay using either a Dockerfile or git repo. +* :doc:`capture_environment ` Captures the cluster environment +* :doc:`create_htpasswd_adminuser ` Create an htpasswd admin user. +* :doc:`create_osd ` Create an OpenShift Dedicated cluster. +* :doc:`deploy_operator ` Deploy an operator from OperatorHub catalog entry. +* :doc:`destroy_ocp ` Destroy an OpenShift cluster +* :doc:`destroy_osd ` Destroy an OpenShift Dedicated cluster.
+* :doc:`dump_prometheus_db ` Dump Prometheus database into a file +* :doc:`fill_workernodes ` Fills the worker nodes with place-holder Pods with the maximum available amount of a given resource name. +* :doc:`preload_image ` Preload a container image on all the nodes of a cluster. +* :doc:`query_prometheus_db ` Query Prometheus with a list of PromQueries read in a file +* :doc:`reset_prometheus_db ` Resets Prometheus database, by destroying its Pod +* :doc:`set_project_annotation ` Set an annotation on a given project, or for any new projects. +* :doc:`set_scale ` Ensures that the cluster has exactly `scale` nodes with instance_type `instance_type` +* :doc:`update_pods_per_node ` Update the maximum number of Pods per Node and Pods per Core. See also: https://docs.openshift.com/container-platform/4.14/nodes/nodes/nodes-nodes-managing-max-pods.html +* :doc:`upgrade_to_image ` Upgrades the cluster to the given image +* :doc:`wait_fully_awake ` Waits for the cluster to be fully awake after Hive restart + +``configure`` +************* + +:: + + Commands relating to TOPSAIL testing configuration + + + +* :doc:`apply ` Applies a preset (or a list of presets) to the current configuration file +* :doc:`enter ` Enter into a custom configuration file for a TOPSAIL project +* :doc:`get ` Gives the value of a given key, in the current configuration file +* :doc:`name ` Gives the name of the current configuration + +``cpt`` +******* + +:: + + Commands relating to continuous performance testing management + + + +* :doc:`deploy_cpt_dashboard ` Deploy and configure the CPT Dashboard + +``fine_tuning`` +*************** + +:: + + Commands relating to RHOAI fine-tuning testing + + + +* :doc:`ray_fine_tuning_job ` Run a simple Ray fine-tuning Job. +* :doc:`run_fine_tuning_job ` Run a simple fine-tuning Job. +* :doc:`run_quality_evaluation ` Run a simple quality evaluation Job. + +``run`` +******* + +:: + + Run `topsail` toolbox commands from a single config file.
+ + + + +``gpu_operator`` +**************** + +:: + + Commands for deploying, building and testing the GPU operator in various ways + + + +* :doc:`capture_deployment_state ` Captures the GPU operator deployment state +* :doc:`deploy_cluster_policy ` Creates the ClusterPolicy from the OLM ClusterServiceVersion +* :doc:`deploy_from_bundle ` Deploys the GPU Operator from a bundle +* :doc:`deploy_from_operatorhub ` Deploys the GPU operator from OperatorHub +* :doc:`enable_time_sharing ` Enable time-sharing in the GPU Operator ClusterPolicy +* :doc:`extend_metrics ` Enable time-sharing in the GPU Operator ClusterPolicy +* :doc:`get_csv_version ` Get the version of the GPU Operator currently installed from OLM Stores the version in the 'ARTIFACT_EXTRA_LOGS_DIR' artifacts directory. +* :doc:`run_gpu_burn ` Runs the GPU burn on the cluster +* :doc:`undeploy_from_operatorhub ` Undeploys a GPU-operator that was deployed from OperatorHub +* :doc:`wait_deployment ` Waits for the GPU operator to deploy +* :doc:`wait_stack_deployed ` Waits for the GPU Operator stack to be deployed on the GPU nodes + +``kepler`` +********** + +:: + + Commands relating to kepler deployment + + + +* :doc:`deploy_kepler ` Deploy the Kepler operator and monitor to track energy consumption +* :doc:`undeploy_kepler ` Cleanup the Kepler operator and associated resources + +``kserve`` +********** + +:: + + Commands relating to RHOAI KServe component + + + +* :doc:`capture_operators_state ` Captures the state of the operators of the KServe serving stack +* :doc:`capture_state ` Captures the state of the KServe stack in a given namespace +* :doc:`deploy_model ` Deploy a KServe model +* :doc:`extract_protos ` Extracts the protos of an inference service +* :doc:`extract_protos_grpcurl ` Extracts the protos of an inference service, with GRPCurl observe +* :doc:`undeploy_model ` Undeploy a KServe model +* :doc:`validate_model ` Validate the proper deployment of a KServe model + +``kubemark`` +************ + 
+:: + + Commands relating to kubemark deployment + + + +* :doc:`deploy_capi_provider ` Deploy the Kubemark Cluster-API provider +* :doc:`deploy_nodes ` Deploy a set of Kubemark nodes + +``kwok`` +******** + +:: + + Commands relating to KWOK deployment + + + +* :doc:`deploy_kwok_controller ` Deploy the KWOK hollow node provider +* :doc:`set_scale ` Deploy a set of KWOK nodes + +``llm_load_test`` +***************** + +:: + + Commands relating to llm-load-test + + + +* :doc:`run ` Load test the wisdom model + +``local_ci`` +************ + +:: + + Commands to run the CI scripts in a container environment similar to the one used by the CI + + + +* :doc:`run ` Runs a given CI command +* :doc:`run_multi ` Runs a given CI command in parallel from multiple Pods + +``nfd`` +******* + +:: + + Commands for NFD related tasks + + + +* :doc:`has_gpu_nodes ` Checks if the cluster has GPU nodes +* :doc:`has_labels ` Checks if the cluster has NFD labels +* :doc:`wait_gpu_nodes ` Wait until NFD finds GPU nodes +* :doc:`wait_labels ` Wait until NFD labels the nodes + +``nfd_operator`` +**************** + +:: + + Commands for deploying, building and testing the NFD operator in various ways + + + +* :doc:`deploy_from_operatorhub ` Deploys the NFD Operator from OperatorHub +* :doc:`undeploy_from_operatorhub ` Undeploys an NFD-operator that was deployed from OperatorHub + +``notebooks`` +************* + +:: + + Commands relating to RHOAI Notebooks + + + +* :doc:`benchmark_performance ` Benchmark the performance of a notebook image. +* :doc:`capture_state ` Capture information about the cluster and the RHODS notebooks deployment +* :doc:`cleanup ` Clean up the resources created along with the notebooks, during the scale tests. +* :doc:`dashboard_scale_test ` End-to-end scale testing of the RHOAI dashboard, at user level.
+* :doc:`locust_scale_test ` End-to-end testing of RHOAI notebooks at scale, at API level +* :doc:`ods_ci_scale_test ` End-to-end scale testing of RHOAI notebooks, at user level. + +``pipelines`` +************* + +:: + + Commands relating to RHOAI Data Science Pipelines + + + +* :doc:`capture_state ` Captures the state of a Data Science Pipeline Application in a given namespace. +* :doc:`deploy_application ` Deploy a Data Science Pipeline Application in a given namespace. +* :doc:`run_kfp_notebook ` Run a notebook in a given notebook image. + +``repo`` +******** + +:: + + Commands to perform consistency validations on this repo itself + + + +* :doc:`generate_ansible_default_settings ` Generate the `defaults/main/config.yml` file of the Ansible roles, based on the Python definition. +* :doc:`generate_middleware_ci_secret_boilerplate ` Generate the boilerplate code to include a new secret in the Middleware CI configuration +* :doc:`generate_toolbox_related_files ` Generate the rst document and Ansible default settings, based on the Toolbox Python definition. +* :doc:`generate_toolbox_rst_documentation ` Generate the `doc/toolbox.generated/*.rst` file, based on the Toolbox Python definition. +* :doc:`send_job_completion_notification ` Send a *job completion* notification to github and/or slack about the completion of a test job. +* :doc:`validate_no_broken_link ` Ensure that all the symlinks point to a file +* :doc:`validate_no_wip ` Ensures that none of the commits have the WIP flag in their message title. +* :doc:`validate_role_files ` Ensures that all the Ansible variables defining a filepath (`project/*/toolbox/`) do point to an existing file.
+* :doc:`validate_role_vars_used ` Ensure that all the Ansible variables defined are actually used in their role (with an exception for symlinks) + +``rhods`` +********* + +:: + + Commands relating to RHODS + + + +* :doc:`capture_state ` Captures the state of the RHOAI deployment +* :doc:`delete_ods ` Forces ODS operator deletion +* :doc:`deploy_addon ` Installs the RHODS OCM addon +* :doc:`deploy_ods ` Deploy ODS operator from its custom catalog +* :doc:`dump_prometheus_db ` Dump Prometheus database into a file +* :doc:`reset_prometheus_db ` Resets RHODS Prometheus database, by destroying its Pod. +* :doc:`undeploy_ods ` Undeploy ODS operator +* :doc:`update_datasciencecluster ` Update RHOAI datasciencecluster resource +* :doc:`wait_odh ` Wait for ODH to finish its deployment +* :doc:`wait_ods ` Wait for ODS to finish its deployment + +``scheduler`` +************* + +:: + + Commands relating to RHOAI scheduler testing + + + +* :doc:`cleanup ` Clean up the scheduler load namespace +* :doc:`create_mcad_canary ` Create a canary for MCAD Appwrappers and track the time it takes to be scheduled +* :doc:`deploy_mcad_from_helm ` Deploys MCAD from helm +* :doc:`generate_load ` Generate scheduler load + +``server`` +********** + +:: + + Commands relating to the deployment of servers on OpenShift + + + +* :doc:`deploy_ldap ` Deploy OpenLDAP and LDAP Oauth +* :doc:`deploy_minio_s3_server ` Deploy Minio S3 server +* :doc:`deploy_nginx_server ` Deploy an NGINX HTTP server +* :doc:`deploy_opensearch ` Deploy OpenSearch and OpenSearch-Dashboards +* :doc:`deploy_redis_server ` Deploy a redis server +* :doc:`undeploy_ldap ` Undeploy OpenLDAP and LDAP Oauth + +``storage`` +*********** + +:: + + Commands relating to OpenShift file storage + + + +* :doc:`deploy_aws_efs ` Deploy AWS EFS CSI driver and configure AWS accordingly. 
+* :doc:`deploy_nfs_provisioner ` Deploy NFS Provisioner +* :doc:`download_to_pvc ` Downloads a dataset into a PVC of the cluster diff --git a/_sources/understanding/orchestration.rst.txt b/_sources/understanding/orchestration.rst.txt new file mode 100644 index 0000000000..aff1b8ac52 --- /dev/null +++ b/_sources/understanding/orchestration.rst.txt @@ -0,0 +1,405 @@ +The Test Orchestration Layer +============================= + +The test orchestration layer is the crux of TOPSAIL. It binds +everything else together: +- the CI job launchers +- the configuration +- the toolbox commands +- the post-mortem visualizations and automated regression analyses. + +Historically, this layer has been first and foremost triggered by CI +jobs, with clean clusters and kube-admin privileges. This is still the +primary target of TOPSAIL test automation. A side effect of that is +that TOPSAIL may not seem very user-friendly when used +interactively from a terminal. + +In this section, we'll try to cover these different aspects that +TOPSAIL binds together. + +The CI job launchers +==================== + +TOPSAIL test orchestrations are focused on reproducibility and +end-to-end testing. These two ideas are directly linked, and in the +OpenShift world, the easiest way to ensure that the tests are reproducible +and end-to-end automated is to start from scratch (or from a fresh and +clean cluster). + +Cluster creation +^^^^^^^^^^^^^^^^ + +In OpenShift CI, TOPSAIL has the ability to create a dedicated cluster +(even two: one for RHOAI, one for simulating users). This mode is +launched with the ``rhoai-e2e`` test. It is particularly useful when +launching cloud scale tests. The cluster creation is handled by the +`deploy-cluster subproject +`_. +This part of TOPSAIL is old and mostly written in Bash, but it has +proved to be robust and reliable, although we haven't been using it +much since we got access to bare-metal clusters.
+ +By default, these clusters are destroyed after the test. +A ``keep`` flag can be set in the configuration to avoid destroying +it, and to create a kube-admin user with a predefined password. (Ask +in PM for how to access the cluster.) + +Cluster from pool +^^^^^^^^^^^^^^^^^ + +In OpenShift CI, TOPSAIL has a pool of pre-deployed clusters. These +clusters are controlled by the `Hive +`_ +tool, managed by the OpenShift CI team. In the current configuration, +the pool has 2 single-node OpenShift systems. + +These clusters are always destroyed at the end of the run. This is +outside of TOPSAIL's control. + +Bare-metal clusters +^^^^^^^^^^^^^^^^^^^ + +In the Middleware Jenkins CI, TOPSAIL can be launched against two +bare-metal clusters. These clusters have long-running OpenShift +deployments, and they are "never" reinstalled (at least, there is no +reinstall automation in place at the moment). Hence, the test +orchestrations are in charge of cleaning up the cluster before the test (to ensure +that no garbage is left) and after it (to leave the cluster clean +for the following users). So the complete test sequence is: + +1. cleanup +2. prepare +3. test +4. cleanup + +This is the theory at least. In practice, the clusters are dedicated +to the team, and after mutual agreement, the cleanup and prepare +steps may be skipped to save time. Or the test and final cleanup, to +have a cluster ready for development. + +Before launching a test, check the state of the cluster. Is RHOAI +installed? Is the DSC configured as you expected? If not, make sure +you tick the cleanup and prepare steps. + +Is someone else's job already running on the same cluster? If yes, your job +will be queued and will start only after the first job completes. Make +sure you tick the cleanup and prepare steps.
+ +Launching TOPSAIL jobs on the CI engines +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +See this Google doc for all the details about launching TOPSAIL jobs +on the CI engines: + +* `How to launch TOPSAIL tests `_ + +TOPSAIL Configuration System +============================ + +The configuration system is (yet another) key element of TOPSAIL. It +has been designed to be flexible, modular, and (an important point to +understand some of its implementation choices) configurable from +OpenShift CI and other CI engines. + +A bit of history +^^^^^^^^^^^^^^^^ + +OpenShift CI is a great tool, but a strong limitation of it is that it +can only be statically configured (from the `openshift/release +`_ +repository). TOPSAIL had to find a way to enable dynamic +configuration without touching the source code. Long story short (see a +small `slide deck +`_ +illustrating it), TOPSAIL can be configured from GitHub. (See `How +to launch TOPSAIL tests +`_ +for all the details.) + +:: + + /test rhoai-light fine_tuning ibm_40gb_models + /var tests.fine_tuning.test_settings.gpu: [2, 4] + + +A bit of apology +^^^^^^^^^^^^^^^^ + +A TOPSAIL project's configuration is a YAML document. On one side, each +project is free to define its own configuration. But on the other side, +some code is shared between different projects (the ``library`` files, +defined in some of the projects). + +This aspect (the full flexibility plus the code reuse in the libraries) +makes the configuration structure hard to track. A refactoring might +be envisaged to have a more strongly defined configuration format, at +least for the reusable libraries (e.g., a library could say: this +configuration block does not follow my model, I refuse to +process it). + +How it actually works +^^^^^^^^^^^^^^^^^^^^^ + +So, a TOPSAIL project's configuration is a YAML document, and the test +orchestration reads it to alter its behavior. It's as simple as that.
+
+::
+
+    tests:
+      capture_prom: true
+      capture_state: true
+
+::
+
+    capture_prom = config.project.get_config("tests.capture_prom")
+    if not capture_prom:
+        logging.info("tests.capture_prom is disabled, skipping Prometheus DB reset")
+        return
+
+Sometimes, the test orchestration doesn't need to handle some
+configuration flags, but only to pass them to the toolbox layer.
+TOPSAIL provides a helper toolbox command for that: ``from_config``.
+
+Example:
+
+::
+
+    rhods:
+      catalog:
+        image: brew.registry.redhat.io/rh-osbs/iib
+        tag: 804339
+        channel: fast
+        version: 2.13.0
+        version_name: rc1
+        opendatahub: false
+        managed_rhoai: true
+
+These configuration flags should be passed directly to the ``rhods
+deploy_ods`` toolbox command:
+
+::
+
+    def deploy_ods(self, catalog_image, tag, channel="", version="",
+                   disable_dsc_config=False, opendatahub=False, managed_rhoai=True):
+        """
+        Deploy ODS operator from its custom catalog
+
+        Args:
+          catalog_image: Container image containing the RHODS bundle.
+          tag: Catalog image tag to use to deploy RHODS.
+          channel: The channel to use for the deployment. Leave empty to use the default channel.
+          ...
+        """
+
+So the verbose way to launch the RHOAI deployment would be:
+
+::
+
+    run.run_toolbox("rhods", "deploy_ods",
+                    catalog_image=config.project.get_config("rhods.catalog.image"),
+                    tag=config.project.get_config("rhods.catalog.tag"),
+                    channel=config.project.get_config("rhods.catalog.channel"),
+                    ...)
+
+Instead, the orchestration can use the ``command_args.yaml.j2`` file:
+
+::
+
+    rhods deploy_ods:
+      catalog_image: {{ rhods.catalog.image }}
+      tag: {{ rhods.catalog.tag }}
+      channel: {{ rhods.catalog.channel }}
+      ...
+
+where the template will be generated from the configuration file.
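Conceptually, the ``{{ ... }}`` placeholders of ``command_args.yaml.j2`` are resolved against the configuration document. Here is a minimal sketch of that resolution; TOPSAIL's actual implementation relies on a full Jinja2 rendering, this only illustrates the idea:

```python
import re

# Minimal sketch of how the command_args template placeholders can be
# resolved against the configuration document. TOPSAIL's actual
# implementation uses a full Jinja2 rendering; this only shows the idea.

config = {
    "rhods": {
        "catalog": {
            "image": "brew.registry.redhat.io/rh-osbs/iib",
            "tag": "804339",
            "channel": "fast",
        },
    },
}

def get_config(cfg, dotted_key):
    """Walk the configuration document with a dotted key."""
    value = cfg
    for part in dotted_key.split("."):
        value = value[part]
    return value

def render(template, cfg):
    """Replace every {{ dotted.key }} placeholder with its config value."""
    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}",
                  lambda m: str(get_config(cfg, m.group(1))),
                  template)

template = "rhods deploy_ods:\n  catalog_image: {{ rhods.catalog.image }}\n  tag: {{ rhods.catalog.tag }}"
print(render(template, config))
```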
And
+this command will trigger it:
+
+::
+
+    run.run_toolbox_from_config("rhods", "deploy_ods")
+
+
+or this equivalent, from the command line:
+
+::
+
+    source ./projects/fine_tuning/testing/configure.sh
+    ./run_toolbox.py from_config rhods deploy_ods
+
+Configuring the configuration with presets
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The TOPSAIL configuration can be updated through presets. This allows
+storing multiple different test flavors side by side, and deciding at
+launch time which one to execute.
+
+The presets, stored in the configuration in the ``ci_presets``
+field, define how to update the main configuration blocks before
+running the test.
+
+Here is an example, which will test multiple dataset replication
+factors:
+
+::
+
+    dgx_single_model_multi_dataset:
+      extends: [dgx_single_model]
+      tests.fine_tuning.matbenchmarking.enabled: true
+      tests.fine_tuning.test_settings.gpu: 1
+      tests.fine_tuning.test_settings.dataset_replication: [1, 2, 4, 8]
+
+We see that three fields are "simply" updated. The ``extends`` keyword
+means that first of all (because it is in the first position), we need
+to apply the ``dgx_single_model`` preset, and only afterwards modify
+the three fields.
+
+The presets are applied with a simple recursive algorithm (which will
+dirtily crash if there is a loop in the presets ^.^). If multiple
+presets are defined and they touch the same values, only the last
+change will be visible. The same goes for the ``extends`` keyword: it
+is applied at its position in the dictionary.
+
+Last important point: the presets **cannot** create new fields. This
+can be worked around by having placeholders in the main
+configuration. E.g.:
+
+::
+
+    tests:
+      fine_tuning:
+        test_settings:
+          hyper_parameters:
+            per_device_train_batch_size: null
+            gradient_accumulation_steps: null
+
+And everything is YAML. So the preset values can be YAML dictionaries
+(or lists).
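The preset mechanism described above (recursive ``extends``, last change wins, no new fields) can be sketched in a few lines of Python. This is an illustration only, not TOPSAIL's actual implementation:

```python
# Illustrative sketch of the preset mechanism described above;
# not TOPSAIL's actual implementation.

def set_config(cfg, dotted_key, value):
    """Assign an *existing* field designated by a dotted key."""
    *parents, leaf = dotted_key.split(".")
    node = cfg
    for part in parents:
        node = node[part]
    if leaf not in node:
        raise KeyError(f"presets cannot create new fields: {dotted_key}")
    node[leaf] = value

def apply_preset(cfg, presets, name):
    for key, value in presets[name].items():
        if key == "extends":
            # applied at its position in the dictionary;
            # a loop in the presets would crash this naive recursion
            for parent in value:
                apply_preset(cfg, presets, parent)
        else:
            set_config(cfg, key, value)

presets = {
    "dgx_single_model": {"tests.fine_tuning.test_settings.gpu": 8},
    "dgx_single_model_multi_dataset": {
        "extends": ["dgx_single_model"],
        "tests.fine_tuning.matbenchmarking.enabled": True,
        "tests.fine_tuning.test_settings.gpu": 1,
        "tests.fine_tuning.test_settings.dataset_replication": [1, 2, 4, 8],
    },
}
config = {"tests": {"fine_tuning": {
    "matbenchmarking": {"enabled": False},
    "test_settings": {"gpu": None, "dataset_replication": None},
}}}

apply_preset(config, presets, "dgx_single_model_multi_dataset")
# `extends` set gpu to 8 first, then the preset itself set it back to 1:
print(config["tests"]["fine_tuning"]["test_settings"]["gpu"])  # 1
```

Note how the ``gpu`` value illustrates the last-change-wins rule, and how the placeholders (here the ``None`` values) are required for ``set_config`` to accept the assignment.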
+
+::
+
+    tests.fine_tuning.test_settings.hyper_parameters: {r: 4, lora_alpha: 16}
+
+This would work even if no placeholder has been set for ``r`` and
+``lora_alpha``, because the ``hyper_parameters`` field is assigned as
+a whole (and everything it contained before is erased).
+
+
+Calling the toolbox commands
+============================
+
+The "orchestration" layer orchestrates the toolbox commands. That is,
+it calls them, in the right order, according to configuration flags,
+and with the right parameters.
+
+The Python code can call the toolbox directly, by passing all the
+necessary arguments:
+
+::
+
+    has_dsc = run.run("oc get dsc -oname", capture_stdout=True).stdout
+    run.run_toolbox(
+        "rhods", "update_datasciencecluster",
+        enable=["kueue", "codeflare", "trainingoperator"],
+        name=None if has_dsc else "default-dsc",
+    )
+
+or from the configuration:
+
+::
+
+    run.run_toolbox_from_config("rhods", "deploy_ods")
+
+But it can also use a "mix" of both, via the ``extra`` arguments of
+the ``from_config`` call:
+
+::
+
+    extra = dict(source=source, storage_dir=storage_dir, name=source_name)
+    run.run_toolbox_from_config("cluster", "download_to_pvc", extra=extra)
+
+This way, ``cluster download_to_pvc`` will have parameters received
+from the configuration, plus extra settings (which take precedence)
+prepared directly in Python.
+
+The ``from_config`` command also accepts a prefix and/or a
+suffix. Indeed, one command might be called with different parameters
+in the same workflow.
+
+A simple example is the ``cluster set_scale`` command, which is used,
+in cloud environments, to control the number of nodes dedicated to a
+given task.
+
+::
+
+    sutest/cluster set_scale:
+      name: {{ clusters.sutest.compute.machineset.name }}
+      instance_type: {{ clusters.sutest.compute.machineset.type }}
+      scale: SET_AT_RUNTIME
+
+    driver/cluster set_scale:
+      instance_type: {{ clusters.driver.compute.machineset.type }}
+      name: {{ clusters.driver.compute.machineset.name }}
+      scale: SET_AT_RUNTIME
+
+This will be called with the ``prefix`` parameter:
+
+::
+
+    run.run_toolbox_from_config("cluster", "set_scale", prefix="sutest", extra=dict(scale=...))
+    run.run_toolbox_from_config("cluster", "set_scale", prefix="driver", extra=dict(scale=...))
+
+and the same works for the suffix:
+
+::
+
+    prefix/command sub-command/suffix: ...
+
+
+Creating dedicated directories
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The artifacts are a critical element for TOPSAIL post-mortem
+processing and troubleshooting. But when the orchestration starts to
+involve multiple commands, it gets complicated to understand what is
+done at which step.
+
+So TOPSAIL provides the ``env.NextArtifactDir`` context, which creates
+a dedicated directory (with a ``nnn__`` prefix to enforce the correct
+ordering).
+
+Inside this directory, ``env.ARTIFACT_DIR`` will be set accordingly,
+so that the code can write its artifact files in a dedicated
+directory.
+
+::
+
+    with env.NextArtifactDir("multi_model_test_sequentially"):
+
+This is mostly used in the ``test`` part, to group the multiple
+commands related to a test together.
+
+Running toolbox commands in parallel
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When the orchestration preparation starts to involve multiple
+commands, running all of them sequentially may take forever.
+
+So TOPSAIL provides the ``run.Parallel`` context and the
+``parallel.delayed`` function to allow running multiple commands in
+parallel:
+
+::
+
+    with run.Parallel("prepare_scale") as parallel:
+        parallel.delayed(prepare_kserve.prepare)
+        parallel.delayed(scale_up_sutest)
+
+        parallel.delayed(prepare_user_pods.prepare_user_pods, user_count)
+        parallel.delayed(prepare_user_pods.cluster_scale_up, user_count)
+
+This will create a dedicated directory, and at the end of the block it
+will execute the 4 functions in dedicated threads.
+
+Mind that the configuration **cannot** be updated inside a parallel
+region (e.g.,
+``config.project.set_config("tests.scale.model.consolidated", True)``).
diff --git a/_sources/understanding/toolbox.rst.txt b/_sources/understanding/toolbox.rst.txt
new file mode 100644
index 0000000000..32e50b983c
--- /dev/null
+++ b/_sources/understanding/toolbox.rst.txt
@@ -0,0 +1,109 @@
+The Reusable Toolbox Layer
+==========================
+
+TOPSAIL's toolbox provides an extensive set of reusable
+functionalities. It is a critical part of the test orchestration, as
+the toolbox commands are in charge of the majority of the operations
+affecting the state of the cluster.
+
+The Ansible-based design of the toolbox has proved over the last
+years to be a key element in the efficiency of TOPSAIL-based
+performance and scale investigations. The Ansible roles are always
+executed locally, with a custom stdout callback for easy log reading.
+
+
+In the design of the toolbox framework, post-mortem troubleshooting
+is one of the key concerns. The roles are always executed with a
+dedicated artifact directory (``{{ artifact_extra_logs_dir }}``),
+where the tasks are expected to store the source artifacts they
+generate (``src`` directory) and the state of the resources they have
+changed (``artifacts`` directory).
The role should also store any other
+information helpful to understand why the role execution failed, as
+well as any "proof" that it executed its task correctly. These
+artifacts will be reviewed after the test execution, to understand
+what went wrong, whether the cluster was in the right state, etc. The
+artifacts can also be parsed by the post-mortem visualization engine,
+to extract test results, timing information, etc.:
+
+::
+
+    - name: Create the src artifacts directory
+      file:
+        path: "{{ artifact_extra_logs_dir }}/src/"
+        state: directory
+        mode: '0755'
+
+    - name: Create the nginx HTTPS route
+      shell:
+        set -o pipefail;
+        oc create route passthrough nginx-secure
+           --service=nginx --port=https
+           -n "{{ cluster_deploy_nginx_server_namespace }}"
+           --dry-run=client -oyaml
+        | yq -y '.apiVersion = "route.openshift.io/v1"'
+        | tee "{{ artifact_extra_logs_dir }}/src/route_nginx-secure.yaml"
+        | oc apply -f -
+
+
+    - name: Create the artifacts artifacts directory
+      file:
+        path: "{{ artifact_extra_logs_dir }}/artifacts/"
+        state: directory
+        mode: '0755'
+
+    - name: Get the status of the Deployment and Pod
+      shell:
+        oc get deploy/nginx-deployment
+           -owide
+           -n "{{ cluster_deploy_nginx_server_namespace }}"
+           > "{{ artifact_extra_logs_dir }}/artifacts/deployment.status";
+
+        oc get pods -l app=nginx
+           -owide
+           -n "{{ cluster_deploy_nginx_server_namespace }}"
+           > "{{ artifact_extra_logs_dir }}/artifacts/pod.status";
+
+        oc describe pods -l app=nginx
+           -n "{{ cluster_deploy_nginx_server_namespace }}"
+           > "{{ artifact_extra_logs_dir }}/artifacts/pod.descr";
+
+The commands are coded with Ansible roles, with a Python API and CLI
+interface on top of them.
+
+So this entrypoint:
+
+::
+
+    @AnsibleRole("cluster_deploy_nginx_server")
+    @AnsibleMappedParams
+    def deploy_nginx_server(self, namespace, directory):
+        """
+        Deploy an NGINX HTTP server
+
+        Args:
+          namespace: namespace where the server will be deployed. Will be created if it doesn't exist.
+
+          directory: directory containing the files to serve on the HTTP server.
+        """
+
+will be translated into this CLI:
+
+::
+
+    $ ./run_toolbox.py cluster deploy_nginx_server --help
+
+    INFO: Showing help with the command 'run_toolbox.py cluster deploy_nginx_server -- --help'.
+
+    NAME
+        run_toolbox.py cluster deploy_nginx_server - Deploy an NGINX HTTP server
+
+    SYNOPSIS
+        run_toolbox.py cluster deploy_nginx_server VALUE | NAMESPACE DIRECTORY
+
+    DESCRIPTION
+        Deploy an NGINX HTTP server
+
+    POSITIONAL ARGUMENTS
+        NAMESPACE
+            namespace where the server will be deployed. Will be created if it doesn't exist.
+        DIRECTORY
+            directory containing the files to serve on the HTTP server.
diff --git a/_sources/understanding/visualization.rst.txt b/_sources/understanding/visualization.rst.txt
new file mode 100644
index 0000000000..88921e1599
--- /dev/null
+++ b/_sources/understanding/visualization.rst.txt
@@ -0,0 +1,55 @@
+The Post-mortem Processing & Visualization Layer
+================================================
+
+TOPSAIL post-mortem visualization relies on the `MatrixBenchmarking
+`_ package.
+
+MatrixBenchmarking consists of multiple components:
+
+- the ``benchmark`` component is in charge of running various test
+  configurations. MatrixBenchmarking/benchmark is configured with a
+  set of settings, with one or multiple values. The execution engine
+  will go through each of the possible configurations and execute it
+  to capture its performance.
+- the ``visualize`` component is in charge of the generation of plots
+  and reports, based on the Dash and Plotly
+  packages. MatrixBenchmarking/visualize is launched either against a
+  single result directory, or against a directory with multiple
+  results.
The result directories may have been generated by TOPSAIL,
+  which directly writes the relevant files (often the case when
+  there's only one test executed, or when the test list is a simple
+  iteration over a list of configurations), or via
+  MatrixBenchmarking/benchmark (when the test list has to iterate
+  over various, dynamically defined settings). This component is
+  further described below.
+- the ``download`` component is in charge of downloading artifacts
+  from S3, OpenShift CI or the Middleware Jenkins. Using this
+  component instead of a simple scraper allows downloading only the
+  files important for the post-processing, or even only the cache
+  file. This component is used when "re-plotting", that is, when
+  regenerating the visualization in the CI without re-running the
+  tests.
+- the ``upload_lts`` component is used to upload the LTS (long term
+  storage) payload and KPIs (key performance indicators) to
+  OpenSearch. It is triggered at the end of a gating test.
+- the ``download_lts`` component is used to download the historical
+  LTS payloads and KPIs from OpenSearch. It is used in gating tests
+  before running the regression analysis.
+- the ``analyze_lts`` component is used to check the results of a test
+  against "similar" historical results. "Similar" here means that the
+  test results should have been obtained with the same settings,
+  except for the so-called "comparison settings" (e.g., the RHOAI
+  version, the OCP version, etc.). The regression analysis is done
+  with the help of the `datastax-labs/hunter
+  `_ package.
+
+  In this document, we'll focus on the ``visualize`` component, which
+  is a key part of TOPSAIL test pipelines. (So are ``analyze_lts``,
+  ``download_lts`` and ``upload_lts`` for continuous performance
+  testing, but they don't require much per-project customization.)
+
+TOPSAIL/MatrixBenchmarking visualization modules are split into
+two main components: the parsers (in the ``store`` module) and the
+plotters (in the ``plotting`` module).
In addition to that, the continuous
+performance testing (CPT) requires two extra components: the models
+(in the ``models`` module) and the regression analysis preparation
+(in the ``analyze`` module).
diff --git a/_static/_sphinx_javascript_frameworks_compat.js b/_static/_sphinx_javascript_frameworks_compat.js
new file mode 100644
index 0000000000..81415803ec
--- /dev/null
+++ b/_static/_sphinx_javascript_frameworks_compat.js
@@ -0,0 +1,123 @@
+/* Compatability shim for jQuery and underscores.js.
+ *
+ * Copyright Sphinx contributors
+ * Released under the two clause BSD licence
+ */
+
+/**
+ * small helper function to urldecode strings
+ *
+ * See https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/decodeURIComponent#Decoding_query_parameters_from_a_URL
+ */
+jQuery.urldecode = function(x) {
+  if (!x) {
+    return x
+  }
+  return decodeURIComponent(x.replace(/\+/g, ' '));
+};
+
+/**
+ * small helper function to urlencode strings
+ */
+jQuery.urlencode = encodeURIComponent;
+
+/**
+ * This function returns the parsed url parameters of the
+ * current request. Multiple values per key are supported,
+ * it will always return arrays of strings for the value parts.
+ */
+jQuery.getQueryParameters = function(s) {
+  if (typeof s === 'undefined')
+    s = document.location.search;
+  var parts = s.substr(s.indexOf('?') + 1).split('&');
+  var result = {};
+  for (var i = 0; i < parts.length; i++) {
+    var tmp = parts[i].split('=', 2);
+    var key = jQuery.urldecode(tmp[0]);
+    var value = jQuery.urldecode(tmp[1]);
+    if (key in result)
+      result[key].push(value);
+    else
+      result[key] = [value];
+  }
+  return result;
+};
+
+/**
+ * highlight a given string on a jquery object by wrapping it in
+ * span elements with the given class name.
+ */ +jQuery.fn.highlightText = function(text, className) { + function highlight(node, addItems) { + if (node.nodeType === 3) { + var val = node.nodeValue; + var pos = val.toLowerCase().indexOf(text); + if (pos >= 0 && + !jQuery(node.parentNode).hasClass(className) && + !jQuery(node.parentNode).hasClass("nohighlight")) { + var span; + var isInSVG = jQuery(node).closest("body, svg, foreignObject").is("svg"); + if (isInSVG) { + span = document.createElementNS("http://www.w3.org/2000/svg", "tspan"); + } else { + span = document.createElement("span"); + span.className = className; + } + span.appendChild(document.createTextNode(val.substr(pos, text.length))); + node.parentNode.insertBefore(span, node.parentNode.insertBefore( + document.createTextNode(val.substr(pos + text.length)), + node.nextSibling)); + node.nodeValue = val.substr(0, pos); + if (isInSVG) { + var rect = document.createElementNS("http://www.w3.org/2000/svg", "rect"); + var bbox = node.parentElement.getBBox(); + rect.x.baseVal.value = bbox.x; + rect.y.baseVal.value = bbox.y; + rect.width.baseVal.value = bbox.width; + rect.height.baseVal.value = bbox.height; + rect.setAttribute('class', className); + addItems.push({ + "parent": node.parentNode, + "target": rect}); + } + } + } + else if (!jQuery(node).is("button, select, textarea")) { + jQuery.each(node.childNodes, function() { + highlight(this, addItems); + }); + } + } + var addItems = []; + var result = this.each(function() { + highlight(this, addItems); + }); + for (var i = 0; i < addItems.length; ++i) { + jQuery(addItems[i].parent).before(addItems[i].target); + } + return result; +}; + +/* + * backward compatibility for jQuery.browser + * This will be supported until firefox bug is fixed. 
+ */ +if (!jQuery.browser) { + jQuery.uaMatch = function(ua) { + ua = ua.toLowerCase(); + + var match = /(chrome)[ \/]([\w.]+)/.exec(ua) || + /(webkit)[ \/]([\w.]+)/.exec(ua) || + /(opera)(?:.*version|)[ \/]([\w.]+)/.exec(ua) || + /(msie) ([\w.]+)/.exec(ua) || + ua.indexOf("compatible") < 0 && /(mozilla)(?:.*? rv:([\w.]+)|)/.exec(ua) || + []; + + return { + browser: match[ 1 ] || "", + version: match[ 2 ] || "0" + }; + }; + jQuery.browser = {}; + jQuery.browser[jQuery.uaMatch(navigator.userAgent).browser] = true; +} diff --git a/_static/basic.css b/_static/basic.css new file mode 100644 index 0000000000..7577acb1ad --- /dev/null +++ b/_static/basic.css @@ -0,0 +1,903 @@ +/* + * basic.css + * ~~~~~~~~~ + * + * Sphinx stylesheet -- basic theme. + * + * :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. + * + */ + +/* -- main layout ----------------------------------------------------------- */ + +div.clearer { + clear: both; +} + +div.section::after { + display: block; + content: ''; + clear: left; +} + +/* -- relbar ---------------------------------------------------------------- */ + +div.related { + width: 100%; + font-size: 90%; +} + +div.related h3 { + display: none; +} + +div.related ul { + margin: 0; + padding: 0 0 0 10px; + list-style: none; +} + +div.related li { + display: inline; +} + +div.related li.right { + float: right; + margin-right: 5px; +} + +/* -- sidebar --------------------------------------------------------------- */ + +div.sphinxsidebarwrapper { + padding: 10px 5px 0 10px; +} + +div.sphinxsidebar { + float: left; + width: 230px; + margin-left: -100%; + font-size: 90%; + word-wrap: break-word; + overflow-wrap : break-word; +} + +div.sphinxsidebar ul { + list-style: none; +} + +div.sphinxsidebar ul ul, +div.sphinxsidebar ul.want-points { + margin-left: 20px; + list-style: square; +} + +div.sphinxsidebar ul ul { + margin-top: 0; + margin-bottom: 0; +} + +div.sphinxsidebar form { + 
margin-top: 10px; +} + +div.sphinxsidebar input { + border: 1px solid #98dbcc; + font-family: sans-serif; + font-size: 1em; +} + +div.sphinxsidebar #searchbox form.search { + overflow: hidden; +} + +div.sphinxsidebar #searchbox input[type="text"] { + float: left; + width: 80%; + padding: 0.25em; + box-sizing: border-box; +} + +div.sphinxsidebar #searchbox input[type="submit"] { + float: left; + width: 20%; + border-left: none; + padding: 0.25em; + box-sizing: border-box; +} + + +img { + border: 0; + max-width: 100%; +} + +/* -- search page ----------------------------------------------------------- */ + +ul.search { + margin: 10px 0 0 20px; + padding: 0; +} + +ul.search li { + padding: 5px 0 5px 20px; + background-image: url(file.png); + background-repeat: no-repeat; + background-position: 0 7px; +} + +ul.search li a { + font-weight: bold; +} + +ul.search li p.context { + color: #888; + margin: 2px 0 0 30px; + text-align: left; +} + +ul.keywordmatches li.goodmatch a { + font-weight: bold; +} + +/* -- index page ------------------------------------------------------------ */ + +table.contentstable { + width: 90%; + margin-left: auto; + margin-right: auto; +} + +table.contentstable p.biglink { + line-height: 150%; +} + +a.biglink { + font-size: 1.3em; +} + +span.linkdescr { + font-style: italic; + padding-top: 5px; + font-size: 90%; +} + +/* -- general index --------------------------------------------------------- */ + +table.indextable { + width: 100%; +} + +table.indextable td { + text-align: left; + vertical-align: top; +} + +table.indextable ul { + margin-top: 0; + margin-bottom: 0; + list-style-type: none; +} + +table.indextable > tbody > tr > td > ul { + padding-left: 0em; +} + +table.indextable tr.pcap { + height: 10px; +} + +table.indextable tr.cap { + margin-top: 10px; + background-color: #f2f2f2; +} + +img.toggler { + margin-right: 3px; + margin-top: 3px; + cursor: pointer; +} + +div.modindex-jumpbox { + border-top: 1px solid #ddd; + border-bottom: 1px 
solid #ddd; + margin: 1em 0 1em 0; + padding: 0.4em; +} + +div.genindex-jumpbox { + border-top: 1px solid #ddd; + border-bottom: 1px solid #ddd; + margin: 1em 0 1em 0; + padding: 0.4em; +} + +/* -- domain module index --------------------------------------------------- */ + +table.modindextable td { + padding: 2px; + border-collapse: collapse; +} + +/* -- general body styles --------------------------------------------------- */ + +div.body { + min-width: 360px; + max-width: 800px; +} + +div.body p, div.body dd, div.body li, div.body blockquote { + -moz-hyphens: auto; + -ms-hyphens: auto; + -webkit-hyphens: auto; + hyphens: auto; +} + +a.headerlink { + visibility: hidden; +} + +h1:hover > a.headerlink, +h2:hover > a.headerlink, +h3:hover > a.headerlink, +h4:hover > a.headerlink, +h5:hover > a.headerlink, +h6:hover > a.headerlink, +dt:hover > a.headerlink, +caption:hover > a.headerlink, +p.caption:hover > a.headerlink, +div.code-block-caption:hover > a.headerlink { + visibility: visible; +} + +div.body p.caption { + text-align: inherit; +} + +div.body td { + text-align: left; +} + +.first { + margin-top: 0 !important; +} + +p.rubric { + margin-top: 30px; + font-weight: bold; +} + +img.align-left, figure.align-left, .figure.align-left, object.align-left { + clear: left; + float: left; + margin-right: 1em; +} + +img.align-right, figure.align-right, .figure.align-right, object.align-right { + clear: right; + float: right; + margin-left: 1em; +} + +img.align-center, figure.align-center, .figure.align-center, object.align-center { + display: block; + margin-left: auto; + margin-right: auto; +} + +img.align-default, figure.align-default, .figure.align-default { + display: block; + margin-left: auto; + margin-right: auto; +} + +.align-left { + text-align: left; +} + +.align-center { + text-align: center; +} + +.align-default { + text-align: center; +} + +.align-right { + text-align: right; +} + +/* -- sidebars -------------------------------------------------------------- 
*/ + +div.sidebar, +aside.sidebar { + margin: 0 0 0.5em 1em; + border: 1px solid #ddb; + padding: 7px; + background-color: #ffe; + width: 40%; + float: right; + clear: right; + overflow-x: auto; +} + +p.sidebar-title { + font-weight: bold; +} + +nav.contents, +aside.topic, +div.admonition, div.topic, blockquote { + clear: left; +} + +/* -- topics ---------------------------------------------------------------- */ + +nav.contents, +aside.topic, +div.topic { + border: 1px solid #ccc; + padding: 7px; + margin: 10px 0 10px 0; +} + +p.topic-title { + font-size: 1.1em; + font-weight: bold; + margin-top: 10px; +} + +/* -- admonitions ----------------------------------------------------------- */ + +div.admonition { + margin-top: 10px; + margin-bottom: 10px; + padding: 7px; +} + +div.admonition dt { + font-weight: bold; +} + +p.admonition-title { + margin: 0px 10px 5px 0px; + font-weight: bold; +} + +div.body p.centered { + text-align: center; + margin-top: 25px; +} + +/* -- content of sidebars/topics/admonitions -------------------------------- */ + +div.sidebar > :last-child, +aside.sidebar > :last-child, +nav.contents > :last-child, +aside.topic > :last-child, +div.topic > :last-child, +div.admonition > :last-child { + margin-bottom: 0; +} + +div.sidebar::after, +aside.sidebar::after, +nav.contents::after, +aside.topic::after, +div.topic::after, +div.admonition::after, +blockquote::after { + display: block; + content: ''; + clear: both; +} + +/* -- tables ---------------------------------------------------------------- */ + +table.docutils { + margin-top: 10px; + margin-bottom: 10px; + border: 0; + border-collapse: collapse; +} + +table.align-center { + margin-left: auto; + margin-right: auto; +} + +table.align-default { + margin-left: auto; + margin-right: auto; +} + +table caption span.caption-number { + font-style: italic; +} + +table caption span.caption-text { +} + +table.docutils td, table.docutils th { + padding: 1px 8px 1px 5px; + border-top: 0; + border-left: 
0; + border-right: 0; + border-bottom: 1px solid #aaa; +} + +th { + text-align: left; + padding-right: 5px; +} + +table.citation { + border-left: solid 1px gray; + margin-left: 1px; +} + +table.citation td { + border-bottom: none; +} + +th > :first-child, +td > :first-child { + margin-top: 0px; +} + +th > :last-child, +td > :last-child { + margin-bottom: 0px; +} + +/* -- figures --------------------------------------------------------------- */ + +div.figure, figure { + margin: 0.5em; + padding: 0.5em; +} + +div.figure p.caption, figcaption { + padding: 0.3em; +} + +div.figure p.caption span.caption-number, +figcaption span.caption-number { + font-style: italic; +} + +div.figure p.caption span.caption-text, +figcaption span.caption-text { +} + +/* -- field list styles ----------------------------------------------------- */ + +table.field-list td, table.field-list th { + border: 0 !important; +} + +.field-list ul { + margin: 0; + padding-left: 1em; +} + +.field-list p { + margin: 0; +} + +.field-name { + -moz-hyphens: manual; + -ms-hyphens: manual; + -webkit-hyphens: manual; + hyphens: manual; +} + +/* -- hlist styles ---------------------------------------------------------- */ + +table.hlist { + margin: 1em 0; +} + +table.hlist td { + vertical-align: top; +} + +/* -- object description styles --------------------------------------------- */ + +.sig { + font-family: 'Consolas', 'Menlo', 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', monospace; +} + +.sig-name, code.descname { + background-color: transparent; + font-weight: bold; +} + +.sig-name { + font-size: 1.1em; +} + +code.descname { + font-size: 1.2em; +} + +.sig-prename, code.descclassname { + background-color: transparent; +} + +.optional { + font-size: 1.3em; +} + +.sig-paren { + font-size: larger; +} + +.sig-param.n { + font-style: italic; +} + +/* C++ specific styling */ + +.sig-inline.c-texpr, +.sig-inline.cpp-texpr { + font-family: unset; +} + +.sig.c .k, .sig.c .kt, +.sig.cpp .k, .sig.cpp .kt { + 
color: #0033B3; +} + +.sig.c .m, +.sig.cpp .m { + color: #1750EB; +} + +.sig.c .s, .sig.c .sc, +.sig.cpp .s, .sig.cpp .sc { + color: #067D17; +} + + +/* -- other body styles ----------------------------------------------------- */ + +ol.arabic { + list-style: decimal; +} + +ol.loweralpha { + list-style: lower-alpha; +} + +ol.upperalpha { + list-style: upper-alpha; +} + +ol.lowerroman { + list-style: lower-roman; +} + +ol.upperroman { + list-style: upper-roman; +} + +:not(li) > ol > li:first-child > :first-child, +:not(li) > ul > li:first-child > :first-child { + margin-top: 0px; +} + +:not(li) > ol > li:last-child > :last-child, +:not(li) > ul > li:last-child > :last-child { + margin-bottom: 0px; +} + +ol.simple ol p, +ol.simple ul p, +ul.simple ol p, +ul.simple ul p { + margin-top: 0; +} + +ol.simple > li:not(:first-child) > p, +ul.simple > li:not(:first-child) > p { + margin-top: 0; +} + +ol.simple p, +ul.simple p { + margin-bottom: 0; +} + +aside.footnote > span, +div.citation > span { + float: left; +} +aside.footnote > span:last-of-type, +div.citation > span:last-of-type { + padding-right: 0.5em; +} +aside.footnote > p { + margin-left: 2em; +} +div.citation > p { + margin-left: 4em; +} +aside.footnote > p:last-of-type, +div.citation > p:last-of-type { + margin-bottom: 0em; +} +aside.footnote > p:last-of-type:after, +div.citation > p:last-of-type:after { + content: ""; + clear: both; +} + +dl.field-list { + display: grid; + grid-template-columns: fit-content(30%) auto; +} + +dl.field-list > dt { + font-weight: bold; + word-break: break-word; + padding-left: 0.5em; + padding-right: 5px; +} + +dl.field-list > dd { + padding-left: 0.5em; + margin-top: 0em; + margin-left: 0em; + margin-bottom: 0em; +} + +dl { + margin-bottom: 15px; +} + +dd > :first-child { + margin-top: 0px; +} + +dd ul, dd table { + margin-bottom: 10px; +} + +dd { + margin-top: 3px; + margin-bottom: 10px; + margin-left: 30px; +} + +dl > dd:last-child, +dl > dd:last-child > :last-child { + 
margin-bottom: 0; +} + +dt:target, span.highlighted { + background-color: #fbe54e; +} + +rect.highlighted { + fill: #fbe54e; +} + +dl.glossary dt { + font-weight: bold; + font-size: 1.1em; +} + +.versionmodified { + font-style: italic; +} + +.system-message { + background-color: #fda; + padding: 5px; + border: 3px solid red; +} + +.footnote:target { + background-color: #ffa; +} + +.line-block { + display: block; + margin-top: 1em; + margin-bottom: 1em; +} + +.line-block .line-block { + margin-top: 0; + margin-bottom: 0; + margin-left: 1.5em; +} + +.guilabel, .menuselection { + font-family: sans-serif; +} + +.accelerator { + text-decoration: underline; +} + +.classifier { + font-style: oblique; +} + +.classifier:before { + font-style: normal; + margin: 0 0.5em; + content: ":"; + display: inline-block; +} + +abbr, acronym { + border-bottom: dotted 1px; + cursor: help; +} + +/* -- code displays --------------------------------------------------------- */ + +pre { + overflow: auto; + overflow-y: hidden; /* fixes display issues on Chrome browsers */ +} + +pre, div[class*="highlight-"] { + clear: both; +} + +span.pre { + -moz-hyphens: none; + -ms-hyphens: none; + -webkit-hyphens: none; + hyphens: none; + white-space: nowrap; +} + +div[class*="highlight-"] { + margin: 1em 0; +} + +td.linenos pre { + border: 0; + background-color: transparent; + color: #aaa; +} + +table.highlighttable { + display: block; +} + +table.highlighttable tbody { + display: block; +} + +table.highlighttable tr { + display: flex; +} + +table.highlighttable td { + margin: 0; + padding: 0; +} + +table.highlighttable td.linenos { + padding-right: 0.5em; +} + +table.highlighttable td.code { + flex: 1; + overflow: hidden; +} + +.highlight .hll { + display: block; +} + +div.highlight pre, +table.highlighttable pre { + margin: 0; +} + +div.code-block-caption + div { + margin-top: 0; +} + +div.code-block-caption { + margin-top: 1em; + padding: 2px 5px; + font-size: small; +} + +div.code-block-caption code 
{ + background-color: transparent; +} + +table.highlighttable td.linenos, +span.linenos, +div.highlight span.gp { /* gp: Generic.Prompt */ + user-select: none; + -webkit-user-select: text; /* Safari fallback only */ + -webkit-user-select: none; /* Chrome/Safari */ + -moz-user-select: none; /* Firefox */ + -ms-user-select: none; /* IE10+ */ +} + +div.code-block-caption span.caption-number { + padding: 0.1em 0.3em; + font-style: italic; +} + +div.code-block-caption span.caption-text { +} + +div.literal-block-wrapper { + margin: 1em 0; +} + +code.xref, a code { + background-color: transparent; + font-weight: bold; +} + +h1 code, h2 code, h3 code, h4 code, h5 code, h6 code { + background-color: transparent; +} + +.viewcode-link { + float: right; +} + +.viewcode-back { + float: right; + font-family: sans-serif; +} + +div.viewcode-block:target { + margin: -1px -10px; + padding: 0 10px; +} + +/* -- math display ---------------------------------------------------------- */ + +img.math { + vertical-align: middle; +} + +div.body div.math p { + text-align: center; +} + +span.eqno { + float: right; +} + +span.eqno a.headerlink { + position: absolute; + z-index: 1; +} + +div.math:hover a.headerlink { + visibility: visible; +} + +/* -- printout stylesheet --------------------------------------------------- */ + +@media print { + div.document, + div.documentwrapper, + div.bodywrapper { + margin: 0 !important; + width: 100%; + } + + div.sphinxsidebar, + div.related, + div.footer, + #top-link { + display: none; + } +} \ No newline at end of file diff --git a/_static/ci-dashboard.png b/_static/ci-dashboard.png new file mode 100644 index 0000000000..a1ed5a622d Binary files /dev/null and b/_static/ci-dashboard.png differ diff --git a/_static/css/badge_only.css b/_static/css/badge_only.css new file mode 100644 index 0000000000..c718cee441 --- /dev/null +++ b/_static/css/badge_only.css @@ -0,0 +1 @@ 
+.clearfix{*zoom:1}.clearfix:after,.clearfix:before{display:table;content:""}.clearfix:after{clear:both}@font-face{font-family:FontAwesome;font-style:normal;font-weight:400;src:url(fonts/fontawesome-webfont.eot?674f50d287a8c48dc19ba404d20fe713?#iefix) format("embedded-opentype"),url(fonts/fontawesome-webfont.woff2?af7ae505a9eed503f8b8e6982036873e) format("woff2"),url(fonts/fontawesome-webfont.woff?fee66e712a8a08eef5805a46892932ad) format("woff"),url(fonts/fontawesome-webfont.ttf?b06871f281fee6b241d60582ae9369b9) format("truetype"),url(fonts/fontawesome-webfont.svg?912ec66d7572ff821749319396470bde#FontAwesome) format("svg")}.fa:before{font-family:FontAwesome;font-style:normal;font-weight:400;line-height:1}.fa:before,a .fa{text-decoration:inherit}.fa:before,a .fa,li .fa{display:inline-block}li .fa-large:before{width:1.875em}ul.fas{list-style-type:none;margin-left:2em;text-indent:-.8em}ul.fas li .fa{width:.8em}ul.fas li .fa-large:before{vertical-align:baseline}.fa-book:before,.icon-book:before{content:"\f02d"}.fa-caret-down:before,.icon-caret-down:before{content:"\f0d7"}.fa-caret-up:before,.icon-caret-up:before{content:"\f0d8"}.fa-caret-left:before,.icon-caret-left:before{content:"\f0d9"}.fa-caret-right:before,.icon-caret-right:before{content:"\f0da"}.rst-versions{position:fixed;bottom:0;left:0;width:300px;color:#fcfcfc;background:#1f1d1d;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;z-index:400}.rst-versions a{color:#2980b9;text-decoration:none}.rst-versions .rst-badge-small{display:none}.rst-versions .rst-current-version{padding:12px;background-color:#272525;display:block;text-align:right;font-size:90%;cursor:pointer;color:#27ae60}.rst-versions .rst-current-version:after{clear:both;content:"";display:block}.rst-versions .rst-current-version .fa{color:#fcfcfc}.rst-versions .rst-current-version .fa-book,.rst-versions .rst-current-version .icon-book{float:left}.rst-versions 
.rst-current-version.rst-out-of-date{background-color:#e74c3c;color:#fff}.rst-versions .rst-current-version.rst-active-old-version{background-color:#f1c40f;color:#000}.rst-versions.shift-up{height:auto;max-height:100%;overflow-y:scroll}.rst-versions.shift-up .rst-other-versions{display:block}.rst-versions .rst-other-versions{font-size:90%;padding:12px;color:grey;display:none}.rst-versions .rst-other-versions hr{display:block;height:1px;border:0;margin:20px 0;padding:0;border-top:1px solid #413d3d}.rst-versions .rst-other-versions dd{display:inline-block;margin:0}.rst-versions .rst-other-versions dd a{display:inline-block;padding:6px;color:#fcfcfc}.rst-versions.rst-badge{width:auto;bottom:20px;right:20px;left:auto;border:none;max-width:300px;max-height:90%}.rst-versions.rst-badge .fa-book,.rst-versions.rst-badge .icon-book{float:none;line-height:30px}.rst-versions.rst-badge.shift-up .rst-current-version{text-align:right}.rst-versions.rst-badge.shift-up .rst-current-version .fa-book,.rst-versions.rst-badge.shift-up .rst-current-version .icon-book{float:left}.rst-versions.rst-badge>.rst-current-version{width:auto;height:30px;line-height:30px;padding:0 6px;display:block;text-align:center}@media screen and (max-width:768px){.rst-versions{width:85%;display:none}.rst-versions.shift{display:block}} \ No newline at end of file diff --git a/_static/css/fonts/Roboto-Slab-Bold.woff b/_static/css/fonts/Roboto-Slab-Bold.woff new file mode 100644 index 0000000000..6cb6000018 Binary files /dev/null and b/_static/css/fonts/Roboto-Slab-Bold.woff differ diff --git a/_static/css/fonts/Roboto-Slab-Bold.woff2 b/_static/css/fonts/Roboto-Slab-Bold.woff2 new file mode 100644 index 0000000000..7059e23142 Binary files /dev/null and b/_static/css/fonts/Roboto-Slab-Bold.woff2 differ diff --git a/_static/css/fonts/Roboto-Slab-Regular.woff b/_static/css/fonts/Roboto-Slab-Regular.woff new file mode 100644 index 0000000000..f815f63f99 Binary files /dev/null and 
b/_static/css/fonts/Roboto-Slab-Regular.woff differ diff --git a/_static/css/fonts/Roboto-Slab-Regular.woff2 b/_static/css/fonts/Roboto-Slab-Regular.woff2 new file mode 100644 index 0000000000..f2c76e5bda Binary files /dev/null and b/_static/css/fonts/Roboto-Slab-Regular.woff2 differ diff --git a/_static/css/fonts/fontawesome-webfont.eot b/_static/css/fonts/fontawesome-webfont.eot new file mode 100644 index 0000000000..e9f60ca953 Binary files /dev/null and b/_static/css/fonts/fontawesome-webfont.eot differ diff --git a/_static/css/fonts/fontawesome-webfont.svg b/_static/css/fonts/fontawesome-webfont.svg new file mode 100644 index 0000000000..855c845e53 --- /dev/null +++ b/_static/css/fonts/fontawesome-webfont.svg @@ -0,0 +1,2671 @@ + + + + +Created by FontForge 20120731 at Mon Oct 24 17:37:40 2016 + By ,,, +Copyright Dave Gandy 2016. All rights reserved. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/_static/css/fonts/fontawesome-webfont.ttf b/_static/css/fonts/fontawesome-webfont.ttf new file mode 100644 index 0000000000..35acda2fa1 Binary files /dev/null and b/_static/css/fonts/fontawesome-webfont.ttf differ diff --git a/_static/css/fonts/fontawesome-webfont.woff b/_static/css/fonts/fontawesome-webfont.woff new file mode 100644 index 0000000000..400014a4b0 Binary files /dev/null and b/_static/css/fonts/fontawesome-webfont.woff differ diff --git a/_static/css/fonts/fontawesome-webfont.woff2 b/_static/css/fonts/fontawesome-webfont.woff2 new file mode 100644 index 0000000000..4d13fc6040 Binary files /dev/null and b/_static/css/fonts/fontawesome-webfont.woff2 differ diff --git a/_static/css/fonts/lato-bold-italic.woff b/_static/css/fonts/lato-bold-italic.woff new file mode 100644 index 0000000000..88ad05b9ff Binary files /dev/null and b/_static/css/fonts/lato-bold-italic.woff differ diff --git a/_static/css/fonts/lato-bold-italic.woff2 b/_static/css/fonts/lato-bold-italic.woff2 new file mode 100644 index 0000000000..c4e3d804b5 Binary files /dev/null and b/_static/css/fonts/lato-bold-italic.woff2 differ diff --git a/_static/css/fonts/lato-bold.woff b/_static/css/fonts/lato-bold.woff new file mode 100644 index 0000000000..c6dff51f06 Binary files /dev/null and b/_static/css/fonts/lato-bold.woff differ diff --git a/_static/css/fonts/lato-bold.woff2 b/_static/css/fonts/lato-bold.woff2 new file mode 100644 index 0000000000..bb195043cf Binary files /dev/null and b/_static/css/fonts/lato-bold.woff2 differ diff --git a/_static/css/fonts/lato-normal-italic.woff b/_static/css/fonts/lato-normal-italic.woff new file mode 100644 index 0000000000..76114bc033 Binary files 
/dev/null and b/_static/css/fonts/lato-normal-italic.woff differ diff --git a/_static/css/fonts/lato-normal-italic.woff2 b/_static/css/fonts/lato-normal-italic.woff2 new file mode 100644 index 0000000000..3404f37e2e Binary files /dev/null and b/_static/css/fonts/lato-normal-italic.woff2 differ diff --git a/_static/css/fonts/lato-normal.woff b/_static/css/fonts/lato-normal.woff new file mode 100644 index 0000000000..ae1307ff5f Binary files /dev/null and b/_static/css/fonts/lato-normal.woff differ diff --git a/_static/css/fonts/lato-normal.woff2 b/_static/css/fonts/lato-normal.woff2 new file mode 100644 index 0000000000..3bf9843328 Binary files /dev/null and b/_static/css/fonts/lato-normal.woff2 differ diff --git a/_static/css/theme.css b/_static/css/theme.css new file mode 100644 index 0000000000..19a446a0e7 --- /dev/null +++ b/_static/css/theme.css @@ -0,0 +1,4 @@ +html{box-sizing:border-box}*,:after,:before{box-sizing:inherit}article,aside,details,figcaption,figure,footer,header,hgroup,nav,section{display:block}audio,canvas,video{display:inline-block;*display:inline;*zoom:1}[hidden],audio:not([controls]){display:none}*{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box}html{font-size:100%;-webkit-text-size-adjust:100%;-ms-text-size-adjust:100%}body{margin:0}a:active,a:hover{outline:0}abbr[title]{border-bottom:1px dotted}b,strong{font-weight:700}blockquote{margin:0}dfn{font-style:italic}ins{background:#ff9;text-decoration:none}ins,mark{color:#000}mark{background:#ff0;font-style:italic;font-weight:700}.rst-content code,.rst-content tt,code,kbd,pre,samp{font-family:monospace,serif;_font-family:courier 
new,monospace;font-size:1em}pre{white-space:pre}q{quotes:none}q:after,q:before{content:"";content:none}small{font-size:85%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sup{top:-.5em}sub{bottom:-.25em}dl,ol,ul{margin:0;padding:0;list-style:none;list-style-image:none}li{list-style:none}dd{margin:0}img{border:0;-ms-interpolation-mode:bicubic;vertical-align:middle;max-width:100%}svg:not(:root){overflow:hidden}figure,form{margin:0}label{cursor:pointer}button,input,select,textarea{font-size:100%;margin:0;vertical-align:baseline;*vertical-align:middle}button,input{line-height:normal}button,input[type=button],input[type=reset],input[type=submit]{cursor:pointer;-webkit-appearance:button;*overflow:visible}button[disabled],input[disabled]{cursor:default}input[type=search]{-webkit-appearance:textfield;-moz-box-sizing:content-box;-webkit-box-sizing:content-box;box-sizing:content-box}textarea{resize:vertical}table{border-collapse:collapse;border-spacing:0}td{vertical-align:top}.chromeframe{margin:.2em 0;background:#ccc;color:#000;padding:.2em 0}.ir{display:block;border:0;text-indent:-999em;overflow:hidden;background-color:transparent;background-repeat:no-repeat;text-align:left;direction:ltr;*line-height:0}.ir br{display:none}.hidden{display:none!important;visibility:hidden}.visuallyhidden{border:0;clip:rect(0 0 0 0);height:1px;margin:-1px;overflow:hidden;padding:0;position:absolute;width:1px}.visuallyhidden.focusable:active,.visuallyhidden.focusable:focus{clip:auto;height:auto;margin:0;overflow:visible;position:static;width:auto}.invisible{visibility:hidden}.relative{position:relative}big,small{font-size:100%}@media print{body,html,section{background:none!important}*{box-shadow:none!important;text-shadow:none!important;filter:none!important;-ms-filter:none!important}a,a:visited{text-decoration:underline}.ir 
a:after,a[href^="#"]:after,a[href^="javascript:"]:after{content:""}blockquote,pre{page-break-inside:avoid}thead{display:table-header-group}img,tr{page-break-inside:avoid}img{max-width:100%!important}@page{margin:.5cm}.rst-content .toctree-wrapper>p.caption,h2,h3,p{orphans:3;widows:3}.rst-content .toctree-wrapper>p.caption,h2,h3{page-break-after:avoid}}.btn,.fa:before,.icon:before,.rst-content .admonition,.rst-content .admonition-title:before,.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .code-block-caption .headerlink:before,.rst-content .danger,.rst-content .eqno .headerlink:before,.rst-content .error,.rst-content .hint,.rst-content .important,.rst-content .note,.rst-content .seealso,.rst-content .tip,.rst-content .warning,.rst-content code.download span:first-child:before,.rst-content dl dt .headerlink:before,.rst-content h1 .headerlink:before,.rst-content h2 .headerlink:before,.rst-content h3 .headerlink:before,.rst-content h4 .headerlink:before,.rst-content h5 .headerlink:before,.rst-content h6 .headerlink:before,.rst-content p.caption .headerlink:before,.rst-content p .headerlink:before,.rst-content table>caption .headerlink:before,.rst-content tt.download span:first-child:before,.wy-alert,.wy-dropdown .caret:before,.wy-inline-validate.wy-inline-validate-danger .wy-input-context:before,.wy-inline-validate.wy-inline-validate-info .wy-input-context:before,.wy-inline-validate.wy-inline-validate-success .wy-input-context:before,.wy-inline-validate.wy-inline-validate-warning .wy-input-context:before,.wy-menu-vertical li.current>a button.toctree-expand:before,.wy-menu-vertical li.on a button.toctree-expand:before,.wy-menu-vertical li 
button.toctree-expand:before,input[type=color],input[type=date],input[type=datetime-local],input[type=datetime],input[type=email],input[type=month],input[type=number],input[type=password],input[type=search],input[type=tel],input[type=text],input[type=time],input[type=url],input[type=week],select,textarea{-webkit-font-smoothing:antialiased}.clearfix{*zoom:1}.clearfix:after,.clearfix:before{display:table;content:""}.clearfix:after{clear:both}/*! + * Font Awesome 4.7.0 by @davegandy - http://fontawesome.io - @fontawesome + * License - http://fontawesome.io/license (Font: SIL OFL 1.1, CSS: MIT License) + */@font-face{font-family:FontAwesome;src:url(fonts/fontawesome-webfont.eot?674f50d287a8c48dc19ba404d20fe713);src:url(fonts/fontawesome-webfont.eot?674f50d287a8c48dc19ba404d20fe713?#iefix&v=4.7.0) format("embedded-opentype"),url(fonts/fontawesome-webfont.woff2?af7ae505a9eed503f8b8e6982036873e) format("woff2"),url(fonts/fontawesome-webfont.woff?fee66e712a8a08eef5805a46892932ad) format("woff"),url(fonts/fontawesome-webfont.ttf?b06871f281fee6b241d60582ae9369b9) format("truetype"),url(fonts/fontawesome-webfont.svg?912ec66d7572ff821749319396470bde#fontawesomeregular) format("svg");font-weight:400;font-style:normal}.fa,.icon,.rst-content .admonition-title,.rst-content .code-block-caption .headerlink,.rst-content .eqno .headerlink,.rst-content code.download span:first-child,.rst-content dl dt .headerlink,.rst-content h1 .headerlink,.rst-content h2 .headerlink,.rst-content h3 .headerlink,.rst-content h4 .headerlink,.rst-content h5 .headerlink,.rst-content h6 .headerlink,.rst-content p.caption .headerlink,.rst-content p .headerlink,.rst-content table>caption .headerlink,.rst-content tt.download span:first-child,.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a button.toctree-expand,.wy-menu-vertical li button.toctree-expand{display:inline-block;font:normal normal normal 14px/1 
FontAwesome;font-size:inherit;text-rendering:auto;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale}.fa-lg{font-size:1.33333em;line-height:.75em;vertical-align:-15%}.fa-2x{font-size:2em}.fa-3x{font-size:3em}.fa-4x{font-size:4em}.fa-5x{font-size:5em}.fa-fw{width:1.28571em;text-align:center}.fa-ul{padding-left:0;margin-left:2.14286em;list-style-type:none}.fa-ul>li{position:relative}.fa-li{position:absolute;left:-2.14286em;width:2.14286em;top:.14286em;text-align:center}.fa-li.fa-lg{left:-1.85714em}.fa-border{padding:.2em .25em .15em;border:.08em solid #eee;border-radius:.1em}.fa-pull-left{float:left}.fa-pull-right{float:right}.fa-pull-left.icon,.fa.fa-pull-left,.rst-content .code-block-caption .fa-pull-left.headerlink,.rst-content .eqno .fa-pull-left.headerlink,.rst-content .fa-pull-left.admonition-title,.rst-content code.download span.fa-pull-left:first-child,.rst-content dl dt .fa-pull-left.headerlink,.rst-content h1 .fa-pull-left.headerlink,.rst-content h2 .fa-pull-left.headerlink,.rst-content h3 .fa-pull-left.headerlink,.rst-content h4 .fa-pull-left.headerlink,.rst-content h5 .fa-pull-left.headerlink,.rst-content h6 .fa-pull-left.headerlink,.rst-content p .fa-pull-left.headerlink,.rst-content table>caption .fa-pull-left.headerlink,.rst-content tt.download span.fa-pull-left:first-child,.wy-menu-vertical li.current>a button.fa-pull-left.toctree-expand,.wy-menu-vertical li.on a button.fa-pull-left.toctree-expand,.wy-menu-vertical li button.fa-pull-left.toctree-expand{margin-right:.3em}.fa-pull-right.icon,.fa.fa-pull-right,.rst-content .code-block-caption .fa-pull-right.headerlink,.rst-content .eqno .fa-pull-right.headerlink,.rst-content .fa-pull-right.admonition-title,.rst-content code.download span.fa-pull-right:first-child,.rst-content dl dt .fa-pull-right.headerlink,.rst-content h1 .fa-pull-right.headerlink,.rst-content h2 .fa-pull-right.headerlink,.rst-content h3 .fa-pull-right.headerlink,.rst-content h4 .fa-pull-right.headerlink,.rst-content 
h5 .fa-pull-right.headerlink,.rst-content h6 .fa-pull-right.headerlink,.rst-content p .fa-pull-right.headerlink,.rst-content table>caption .fa-pull-right.headerlink,.rst-content tt.download span.fa-pull-right:first-child,.wy-menu-vertical li.current>a button.fa-pull-right.toctree-expand,.wy-menu-vertical li.on a button.fa-pull-right.toctree-expand,.wy-menu-vertical li button.fa-pull-right.toctree-expand{margin-left:.3em}.pull-right{float:right}.pull-left{float:left}.fa.pull-left,.pull-left.icon,.rst-content .code-block-caption .pull-left.headerlink,.rst-content .eqno .pull-left.headerlink,.rst-content .pull-left.admonition-title,.rst-content code.download span.pull-left:first-child,.rst-content dl dt .pull-left.headerlink,.rst-content h1 .pull-left.headerlink,.rst-content h2 .pull-left.headerlink,.rst-content h3 .pull-left.headerlink,.rst-content h4 .pull-left.headerlink,.rst-content h5 .pull-left.headerlink,.rst-content h6 .pull-left.headerlink,.rst-content p .pull-left.headerlink,.rst-content table>caption .pull-left.headerlink,.rst-content tt.download span.pull-left:first-child,.wy-menu-vertical li.current>a button.pull-left.toctree-expand,.wy-menu-vertical li.on a button.pull-left.toctree-expand,.wy-menu-vertical li button.pull-left.toctree-expand{margin-right:.3em}.fa.pull-right,.pull-right.icon,.rst-content .code-block-caption .pull-right.headerlink,.rst-content .eqno .pull-right.headerlink,.rst-content .pull-right.admonition-title,.rst-content code.download span.pull-right:first-child,.rst-content dl dt .pull-right.headerlink,.rst-content h1 .pull-right.headerlink,.rst-content h2 .pull-right.headerlink,.rst-content h3 .pull-right.headerlink,.rst-content h4 .pull-right.headerlink,.rst-content h5 .pull-right.headerlink,.rst-content h6 .pull-right.headerlink,.rst-content p .pull-right.headerlink,.rst-content table>caption .pull-right.headerlink,.rst-content tt.download span.pull-right:first-child,.wy-menu-vertical li.current>a 
button.pull-right.toctree-expand,.wy-menu-vertical li.on a button.pull-right.toctree-expand,.wy-menu-vertical li button.pull-right.toctree-expand{margin-left:.3em}.fa-spin{-webkit-animation:fa-spin 2s linear infinite;animation:fa-spin 2s linear infinite}.fa-pulse{-webkit-animation:fa-spin 1s steps(8) infinite;animation:fa-spin 1s steps(8) infinite}@-webkit-keyframes fa-spin{0%{-webkit-transform:rotate(0deg);transform:rotate(0deg)}to{-webkit-transform:rotate(359deg);transform:rotate(359deg)}}@keyframes fa-spin{0%{-webkit-transform:rotate(0deg);transform:rotate(0deg)}to{-webkit-transform:rotate(359deg);transform:rotate(359deg)}}.fa-rotate-90{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=1)";-webkit-transform:rotate(90deg);-ms-transform:rotate(90deg);transform:rotate(90deg)}.fa-rotate-180{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=2)";-webkit-transform:rotate(180deg);-ms-transform:rotate(180deg);transform:rotate(180deg)}.fa-rotate-270{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=3)";-webkit-transform:rotate(270deg);-ms-transform:rotate(270deg);transform:rotate(270deg)}.fa-flip-horizontal{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=0, mirror=1)";-webkit-transform:scaleX(-1);-ms-transform:scaleX(-1);transform:scaleX(-1)}.fa-flip-vertical{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=2, mirror=1)";-webkit-transform:scaleY(-1);-ms-transform:scaleY(-1);transform:scaleY(-1)}:root .fa-flip-horizontal,:root .fa-flip-vertical,:root .fa-rotate-90,:root .fa-rotate-180,:root 
.fa-rotate-270{filter:none}.fa-stack{position:relative;display:inline-block;width:2em;height:2em;line-height:2em;vertical-align:middle}.fa-stack-1x,.fa-stack-2x{position:absolute;left:0;width:100%;text-align:center}.fa-stack-1x{line-height:inherit}.fa-stack-2x{font-size:2em}.fa-inverse{color:#fff}.fa-glass:before{content:""}.fa-music:before{content:""}.fa-search:before,.icon-search:before{content:""}.fa-envelope-o:before{content:""}.fa-heart:before{content:""}.fa-star:before{content:""}.fa-star-o:before{content:""}.fa-user:before{content:""}.fa-film:before{content:""}.fa-th-large:before{content:""}.fa-th:before{content:""}.fa-th-list:before{content:""}.fa-check:before{content:""}.fa-close:before,.fa-remove:before,.fa-times:before{content:""}.fa-search-plus:before{content:""}.fa-search-minus:before{content:""}.fa-power-off:before{content:""}.fa-signal:before{content:""}.fa-cog:before,.fa-gear:before{content:""}.fa-trash-o:before{content:""}.fa-home:before,.icon-home:before{content:""}.fa-file-o:before{content:""}.fa-clock-o:before{content:""}.fa-road:before{content:""}.fa-download:before,.rst-content code.download span:first-child:before,.rst-content tt.download 
span:first-child:before{content:""}.fa-arrow-circle-o-down:before{content:""}.fa-arrow-circle-o-up:before{content:""}.fa-inbox:before{content:""}.fa-play-circle-o:before{content:""}.fa-repeat:before,.fa-rotate-right:before{content:""}.fa-refresh:before{content:""}.fa-list-alt:before{content:""}.fa-lock:before{content:""}.fa-flag:before{content:""}.fa-headphones:before{content:""}.fa-volume-off:before{content:""}.fa-volume-down:before{content:""}.fa-volume-up:before{content:""}.fa-qrcode:before{content:""}.fa-barcode:before{content:""}.fa-tag:before{content:""}.fa-tags:before{content:""}.fa-book:before,.icon-book:before{content:""}.fa-bookmark:before{content:""}.fa-print:before{content:""}.fa-camera:before{content:""}.fa-font:before{content:""}.fa-bold:before{content:""}.fa-italic:before{content:""}.fa-text-height:before{content:""}.fa-text-width:before{content:""}.fa-align-left:before{content:""}.fa-align-center:before{content:""}.fa-align-right:before{content:""}.fa-align-justify:before{content:""}.fa-list:before{content:""}.fa-dedent:before,.fa-outdent:before{content:""}.fa-indent:before{content:""}.fa-video-camera:before{content:""}.fa-image:before,.fa-photo:before,.fa-picture-o:before{content:""}.fa-pencil:before{content:""}.fa-map-marker:before{content:""}.fa-adjust:before{content:""}.fa-tint:before{content:""}.fa-edit:before,.fa-pencil-square-o:before{content:""}.fa-share-square-o:before{content:""}.fa-check-square-o:before{content:""}.fa-arrows:before{content:""}.fa-step-backward:before{content:""}.fa-fast-backward:before{content:""}.fa-backward:before{content:""}.fa-play:before{content:""}.fa-pause:before{content:""}.fa-stop:before{content:""}.fa-forward:before{content:""}.fa-fast-forward:before{content:""}.fa-step-forward:before{content:""}.fa-eject:before{content:""}.fa-chevron-left:before{content:""}.fa-chevron-right:before{content:""}.fa-plus-circle:before{content:""}.fa-minus-circle:before{content
:""}.fa-times-circle:before,.wy-inline-validate.wy-inline-validate-danger .wy-input-context:before{content:""}.fa-check-circle:before,.wy-inline-validate.wy-inline-validate-success .wy-input-context:before{content:""}.fa-question-circle:before{content:""}.fa-info-circle:before{content:""}.fa-crosshairs:before{content:""}.fa-times-circle-o:before{content:""}.fa-check-circle-o:before{content:""}.fa-ban:before{content:""}.fa-arrow-left:before{content:""}.fa-arrow-right:before{content:""}.fa-arrow-up:before{content:""}.fa-arrow-down:before{content:""}.fa-mail-forward:before,.fa-share:before{content:""}.fa-expand:before{content:""}.fa-compress:before{content:""}.fa-plus:before{content:""}.fa-minus:before{content:""}.fa-asterisk:before{content:""}.fa-exclamation-circle:before,.rst-content .admonition-title:before,.wy-inline-validate.wy-inline-validate-info .wy-input-context:before,.wy-inline-validate.wy-inline-validate-warning .wy-input-context:before{content:""}.fa-gift:before{content:""}.fa-leaf:before{content:""}.fa-fire:before,.icon-fire:before{content:""}.fa-eye:before{content:""}.fa-eye-slash:before{content:""}.fa-exclamation-triangle:before,.fa-warning:before{content:""}.fa-plane:before{content:""}.fa-calendar:before{content:""}.fa-random:before{content:""}.fa-comment:before{content:""}.fa-magnet:before{content:""}.fa-chevron-up:before{content:""}.fa-chevron-down:before{content:""}.fa-retweet:before{content:""}.fa-shopping-cart:before{content:""}.fa-folder:before{content:""}.fa-folder-open:before{content:""}.fa-arrows-v:before{content:""}.fa-arrows-h:before{content:""}.fa-bar-chart-o:before,.fa-bar-chart:before{content:""}.fa-twitter-square:before{content:""}.fa-facebook-square:before{content:""}.fa-camera-retro:before{content:""}.fa-key:before{content:""}.fa-cogs:before,.fa-gears:before{content:""}.fa-comments:before{content:""}.fa-thumbs-o-up:before{content:""}.fa-thumbs-o-down:before{content:""}.fa-star-half:before
{content:""}.fa-heart-o:before{content:""}.fa-sign-out:before{content:""}.fa-linkedin-square:before{content:""}.fa-thumb-tack:before{content:""}.fa-external-link:before{content:""}.fa-sign-in:before{content:""}.fa-trophy:before{content:""}.fa-github-square:before{content:""}.fa-upload:before{content:""}.fa-lemon-o:before{content:""}.fa-phone:before{content:""}.fa-square-o:before{content:""}.fa-bookmark-o:before{content:""}.fa-phone-square:before{content:""}.fa-twitter:before{content:""}.fa-facebook-f:before,.fa-facebook:before{content:""}.fa-github:before,.icon-github:before{content:""}.fa-unlock:before{content:""}.fa-credit-card:before{content:""}.fa-feed:before,.fa-rss:before{content:""}.fa-hdd-o:before{content:""}.fa-bullhorn:before{content:""}.fa-bell:before{content:""}.fa-certificate:before{content:""}.fa-hand-o-right:before{content:""}.fa-hand-o-left:before{content:""}.fa-hand-o-up:before{content:""}.fa-hand-o-down:before{content:""}.fa-arrow-circle-left:before,.icon-circle-arrow-left:before{content:""}.fa-arrow-circle-right:before,.icon-circle-arrow-right:before{content:""}.fa-arrow-circle-up:before{content:""}.fa-arrow-circle-down:before{content:""}.fa-globe:before{content:""}.fa-wrench:before{content:""}.fa-tasks:before{content:""}.fa-filter:before{content:""}.fa-briefcase:before{content:""}.fa-arrows-alt:before{content:""}.fa-group:before,.fa-users:before{content:""}.fa-chain:before,.fa-link:before,.icon-link:before{content:""}.fa-cloud:before{content:""}.fa-flask:before{content:""}.fa-cut:before,.fa-scissors:before{content:""}.fa-copy:before,.fa-files-o:before{content:""}.fa-paperclip:before{content:""}.fa-floppy-o:before,.fa-save:before{content:""}.fa-square:before{content:""}.fa-bars:before,.fa-navicon:before,.fa-reorder:before{content:""}.fa-list-ul:before{content:""}.fa-list-ol:before{content:""}.fa-strikethrough:before{content:""}.fa-underline:before{content:""}.fa-table:before{content:""}.fa-magi
c:before{content:""}.fa-truck:before{content:""}.fa-pinterest:before{content:""}.fa-pinterest-square:before{content:""}.fa-google-plus-square:before{content:""}.fa-google-plus:before{content:""}.fa-money:before{content:""}.fa-caret-down:before,.icon-caret-down:before,.wy-dropdown .caret:before{content:""}.fa-caret-up:before{content:""}.fa-caret-left:before{content:""}.fa-caret-right:before{content:""}.fa-columns:before{content:""}.fa-sort:before,.fa-unsorted:before{content:""}.fa-sort-desc:before,.fa-sort-down:before{content:""}.fa-sort-asc:before,.fa-sort-up:before{content:""}.fa-envelope:before{content:""}.fa-linkedin:before{content:""}.fa-rotate-left:before,.fa-undo:before{content:""}.fa-gavel:before,.fa-legal:before{content:""}.fa-dashboard:before,.fa-tachometer:before{content:""}.fa-comment-o:before{content:""}.fa-comments-o:before{content:""}.fa-bolt:before,.fa-flash:before{content:""}.fa-sitemap:before{content:""}.fa-umbrella:before{content:""}.fa-clipboard:before,.fa-paste:before{content:""}.fa-lightbulb-o:before{content:""}.fa-exchange:before{content:""}.fa-cloud-download:before{content:""}.fa-cloud-upload:before{content:""}.fa-user-md:before{content:""}.fa-stethoscope:before{content:""}.fa-suitcase:before{content:""}.fa-bell-o:before{content:""}.fa-coffee:before{content:""}.fa-cutlery:before{content:""}.fa-file-text-o:before{content:""}.fa-building-o:before{content:""}.fa-hospital-o:before{content:""}.fa-ambulance:before{content:""}.fa-medkit:before{content:""}.fa-fighter-jet:before{content:""}.fa-beer:before{content:""}.fa-h-square:before{content:""}.fa-plus-square:before{content:""}.fa-angle-double-left:before{content:""}.fa-angle-double-right:before{content:""}.fa-angle-double-up:before{content:""}.fa-angle-double-down:before{content:""}.fa-angle-left:before{content:""}.fa-angle-right:before{content:""}.fa-angle-up:before{content:""}.fa-angle-down:before{content:""}.fa-desktop:before{content:""}.fa-l
aptop:before{content:""}.fa-tablet:before{content:""}.fa-mobile-phone:before,.fa-mobile:before{content:""}.fa-circle-o:before{content:""}.fa-quote-left:before{content:""}.fa-quote-right:before{content:""}.fa-spinner:before{content:""}.fa-circle:before{content:""}.fa-mail-reply:before,.fa-reply:before{content:""}.fa-github-alt:before{content:""}.fa-folder-o:before{content:""}.fa-folder-open-o:before{content:""}.fa-smile-o:before{content:""}.fa-frown-o:before{content:""}.fa-meh-o:before{content:""}.fa-gamepad:before{content:""}.fa-keyboard-o:before{content:""}.fa-flag-o:before{content:""}.fa-flag-checkered:before{content:""}.fa-terminal:before{content:""}.fa-code:before{content:""}.fa-mail-reply-all:before,.fa-reply-all:before{content:""}.fa-star-half-empty:before,.fa-star-half-full:before,.fa-star-half-o:before{content:""}.fa-location-arrow:before{content:""}.fa-crop:before{content:""}.fa-code-fork:before{content:""}.fa-chain-broken:before,.fa-unlink:before{content:""}.fa-question:before{content:""}.fa-info:before{content:""}.fa-exclamation:before{content:""}.fa-superscript:before{content:""}.fa-subscript:before{content:""}.fa-eraser:before{content:""}.fa-puzzle-piece:before{content:""}.fa-microphone:before{content:""}.fa-microphone-slash:before{content:""}.fa-shield:before{content:""}.fa-calendar-o:before{content:""}.fa-fire-extinguisher:before{content:""}.fa-rocket:before{content:""}.fa-maxcdn:before{content:""}.fa-chevron-circle-left:before{content:""}.fa-chevron-circle-right:before{content:""}.fa-chevron-circle-up:before{content:""}.fa-chevron-circle-down:before{content:""}.fa-html5:before{content:""}.fa-css3:before{content:""}.fa-anchor:before{content:""}.fa-unlock-alt:before{content:""}.fa-bullseye:before{content:""}.fa-ellipsis-h:before{content:""}.fa-ellipsis-v:before{content:""}.fa-rss-square:before{content:""}.fa-play-circle:before{content:""}.fa-ticket:before{content:""}.fa-minus-square:before{content:
""}.fa-minus-square-o:before,.wy-menu-vertical li.current>a button.toctree-expand:before,.wy-menu-vertical li.on a button.toctree-expand:before{content:""}.fa-level-up:before{content:""}.fa-level-down:before{content:""}.fa-check-square:before{content:""}.fa-pencil-square:before{content:""}.fa-external-link-square:before{content:""}.fa-share-square:before{content:""}.fa-compass:before{content:""}.fa-caret-square-o-down:before,.fa-toggle-down:before{content:""}.fa-caret-square-o-up:before,.fa-toggle-up:before{content:""}.fa-caret-square-o-right:before,.fa-toggle-right:before{content:""}.fa-eur:before,.fa-euro:before{content:""}.fa-gbp:before{content:""}.fa-dollar:before,.fa-usd:before{content:""}.fa-inr:before,.fa-rupee:before{content:""}.fa-cny:before,.fa-jpy:before,.fa-rmb:before,.fa-yen:before{content:""}.fa-rouble:before,.fa-rub:before,.fa-ruble:before{content:""}.fa-krw:before,.fa-won:before{content:""}.fa-bitcoin:before,.fa-btc:before{content:""}.fa-file:before{content:""}.fa-file-text:before{content:""}.fa-sort-alpha-asc:before{content:""}.fa-sort-alpha-desc:before{content:""}.fa-sort-amount-asc:before{content:""}.fa-sort-amount-desc:before{content:""}.fa-sort-numeric-asc:before{content:""}.fa-sort-numeric-desc:before{content:""}.fa-thumbs-up:before{content:""}.fa-thumbs-down:before{content:""}.fa-youtube-square:before{content:""}.fa-youtube:before{content:""}.fa-xing:before{content:""}.fa-xing-square:before{content:""}.fa-youtube-play:before{content:""}.fa-dropbox:before{content:""}.fa-stack-overflow:before{content:""}.fa-instagram:before{content:""}.fa-flickr:before{content:""}.fa-adn:before{content:""}.fa-bitbucket:before,.icon-bitbucket:before{content:""}.fa-bitbucket-square:before{content:""}.fa-tumblr:before{content:""}.fa-tumblr-square:before{content:""}.fa-long-arrow-down:before{content:""}.fa-long-arrow-up:before{content:""}.fa-long-arrow-left:before{content:""}.fa-long-arrow-right:before{content:""}.fa-a
pple:before{content:""}.fa-windows:before{content:""}.fa-android:before{content:""}.fa-linux:before{content:""}.fa-dribbble:before{content:""}.fa-skype:before{content:""}.fa-foursquare:before{content:""}.fa-trello:before{content:""}.fa-female:before{content:""}.fa-male:before{content:""}.fa-gittip:before,.fa-gratipay:before{content:""}.fa-sun-o:before{content:""}.fa-moon-o:before{content:""}.fa-archive:before{content:""}.fa-bug:before{content:""}.fa-vk:before{content:""}.fa-weibo:before{content:""}.fa-renren:before{content:""}.fa-pagelines:before{content:""}.fa-stack-exchange:before{content:""}.fa-arrow-circle-o-right:before{content:""}.fa-arrow-circle-o-left:before{content:""}.fa-caret-square-o-left:before,.fa-toggle-left:before{content:""}.fa-dot-circle-o:before{content:""}.fa-wheelchair:before{content:""}.fa-vimeo-square:before{content:""}.fa-try:before,.fa-turkish-lira:before{content:""}.fa-plus-square-o:before,.wy-menu-vertical li button.toctree-expand:before{content:""}.fa-space-shuttle:before{content:""}.fa-slack:before{content:""}.fa-envelope-square:before{content:""}.fa-wordpress:before{content:""}.fa-openid:before{content:""}.fa-bank:before,.fa-institution:before,.fa-university:before{content:""}.fa-graduation-cap:before,.fa-mortar-board:before{content:""}.fa-yahoo:before{content:""}.fa-google:before{content:""}.fa-reddit:before{content:""}.fa-reddit-square:before{content:""}.fa-stumbleupon-circle:before{content:""}.fa-stumbleupon:before{content:""}.fa-delicious:before{content:""}.fa-digg:before{content:""}.fa-pied-piper-pp:before{content:""}.fa-pied-piper-alt:before{content:""}.fa-drupal:before{content:""}.fa-joomla:before{content:""}.fa-language:before{content:""}.fa-fax:before{content:""}.fa-building:before{content:""}.fa-child:before{content:""}.fa-paw:before{content:""}.fa-spoon:before{content:""}.fa-cube:before{content:""}.fa-cubes:before{content:""}.fa-behance:before{content:""}.fa-behance-squa
re:before{content:""}.fa-steam:before{content:""}.fa-steam-square:before{content:""}.fa-recycle:before{content:""}.fa-automobile:before,.fa-car:before{content:""}.fa-cab:before,.fa-taxi:before{content:""}.fa-tree:before{content:""}.fa-spotify:before{content:""}.fa-deviantart:before{content:""}.fa-soundcloud:before{content:""}.fa-database:before{content:""}.fa-file-pdf-o:before{content:""}.fa-file-word-o:before{content:""}.fa-file-excel-o:before{content:""}.fa-file-powerpoint-o:before{content:""}.fa-file-image-o:before,.fa-file-photo-o:before,.fa-file-picture-o:before{content:""}.fa-file-archive-o:before,.fa-file-zip-o:before{content:""}.fa-file-audio-o:before,.fa-file-sound-o:before{content:""}.fa-file-movie-o:before,.fa-file-video-o:before{content:""}.fa-file-code-o:before{content:""}.fa-vine:before{content:""}.fa-codepen:before{content:""}.fa-jsfiddle:before{content:""}.fa-life-bouy:before,.fa-life-buoy:before,.fa-life-ring:before,.fa-life-saver:before,.fa-support:before{content:""}.fa-circle-o-notch:before{content:""}.fa-ra:before,.fa-rebel:before,.fa-resistance:before{content:""}.fa-empire:before,.fa-ge:before{content:""}.fa-git-square:before{content:""}.fa-git:before{content:""}.fa-hacker-news:before,.fa-y-combinator-square:before,.fa-yc-square:before{content:""}.fa-tencent-weibo:before{content:""}.fa-qq:before{content:""}.fa-wechat:before,.fa-weixin:before{content:""}.fa-paper-plane:before,.fa-send:before{content:""}.fa-paper-plane-o:before,.fa-send-o:before{content:""}.fa-history:before{content:""}.fa-circle-thin:before{content:""}.fa-header:before{content:""}.fa-paragraph:before{content:""}.fa-sliders:before{content:""}.fa-share-alt:before{content:""}.fa-share-alt-square:before{content:""}.fa-bomb:before{content:""}.fa-futbol-o:before,.fa-soccer-ball-o:before{content:""}.fa-tty:before{content:""}.fa-binoculars:before{content:""}.fa-plug:before{content:""}.fa-slideshare:before{content:""}.fa-twitch:before{conten
t:""}.fa-yelp:before{content:""}.fa-newspaper-o:before{content:""}.fa-wifi:before{content:""}.fa-calculator:before{content:""}.fa-paypal:before{content:""}.fa-google-wallet:before{content:""}.fa-cc-visa:before{content:""}.fa-cc-mastercard:before{content:""}.fa-cc-discover:before{content:""}.fa-cc-amex:before{content:""}.fa-cc-paypal:before{content:""}.fa-cc-stripe:before{content:""}.fa-bell-slash:before{content:""}.fa-bell-slash-o:before{content:""}.fa-trash:before{content:""}.fa-copyright:before{content:""}.fa-at:before{content:""}.fa-eyedropper:before{content:""}.fa-paint-brush:before{content:""}.fa-birthday-cake:before{content:""}.fa-area-chart:before{content:""}.fa-pie-chart:before{content:""}.fa-line-chart:before{content:""}.fa-lastfm:before{content:""}.fa-lastfm-square:before{content:""}.fa-toggle-off:before{content:""}.fa-toggle-on:before{content:""}.fa-bicycle:before{content:""}.fa-bus:before{content:""}.fa-ioxhost:before{content:""}.fa-angellist:before{content:""}.fa-cc:before{content:""}.fa-ils:before,.fa-shekel:before,.fa-sheqel:before{content:""}.fa-meanpath:before{content:""}.fa-buysellads:before{content:""}.fa-connectdevelop:before{content:""}.fa-dashcube:before{content:""}.fa-forumbee:before{content:""}.fa-leanpub:before{content:""}.fa-sellsy:before{content:""}.fa-shirtsinbulk:before{content:""}.fa-simplybuilt:before{content:""}.fa-skyatlas:before{content:""}.fa-cart-plus:before{content:""}.fa-cart-arrow-down:before{content:""}.fa-diamond:before{content:""}.fa-ship:before{content:""}.fa-user-secret:before{content:""}.fa-motorcycle:before{content:""}.fa-street-view:before{content:""}.fa-heartbeat:before{content:""}.fa-venus:before{content:""}.fa-mars:before{content:""}.fa-mercury:before{content:""}.fa-intersex:before,.fa-transgender:before{content:""}.fa-transgender-alt:before{content:""}.fa-venus-double:before{content:""}.fa-mars-double:before{content:""}.fa-venus-mars:before{content:""}.fa-m
ars-stroke:before{content:""}.fa-mars-stroke-v:before{content:""}.fa-mars-stroke-h:before{content:""}.fa-neuter:before{content:""}.fa-genderless:before{content:""}.fa-facebook-official:before{content:""}.fa-pinterest-p:before{content:""}.fa-whatsapp:before{content:""}.fa-server:before{content:""}.fa-user-plus:before{content:""}.fa-user-times:before{content:""}.fa-bed:before,.fa-hotel:before{content:""}.fa-viacoin:before{content:""}.fa-train:before{content:""}.fa-subway:before{content:""}.fa-medium:before{content:""}.fa-y-combinator:before,.fa-yc:before{content:""}.fa-optin-monster:before{content:""}.fa-opencart:before{content:""}.fa-expeditedssl:before{content:""}.fa-battery-4:before,.fa-battery-full:before,.fa-battery:before{content:""}.fa-battery-3:before,.fa-battery-three-quarters:before{content:""}.fa-battery-2:before,.fa-battery-half:before{content:""}.fa-battery-1:before,.fa-battery-quarter:before{content:""}.fa-battery-0:before,.fa-battery-empty:before{content:""}.fa-mouse-pointer:before{content:""}.fa-i-cursor:before{content:""}.fa-object-group:before{content:""}.fa-object-ungroup:before{content:""}.fa-sticky-note:before{content:""}.fa-sticky-note-o:before{content:""}.fa-cc-jcb:before{content:""}.fa-cc-diners-club:before{content:""}.fa-clone:before{content:""}.fa-balance-scale:before{content:""}.fa-hourglass-o:before{content:""}.fa-hourglass-1:before,.fa-hourglass-start:before{content:""}.fa-hourglass-2:before,.fa-hourglass-half:before{content:""}.fa-hourglass-3:before,.fa-hourglass-end:before{content:""}.fa-hourglass:before{content:""}.fa-hand-grab-o:before,.fa-hand-rock-o:before{content:""}.fa-hand-paper-o:before,.fa-hand-stop-o:before{content:""}.fa-hand-scissors-o:before{content:""}.fa-hand-lizard-o:before{content:""}.fa-hand-spock-o:before{content:""}.fa-hand-pointer-o:before{content:""}.fa-hand-peace-o:before{content:""}.fa-trademark:before{content:""}.fa-registered:before{content:""}.fa-creative-commons
:before{content:""}.fa-gg:before{content:""}.fa-gg-circle:before{content:""}.fa-tripadvisor:before{content:""}.fa-odnoklassniki:before{content:""}.fa-odnoklassniki-square:before{content:""}.fa-get-pocket:before{content:""}.fa-wikipedia-w:before{content:""}.fa-safari:before{content:""}.fa-chrome:before{content:""}.fa-firefox:before{content:""}.fa-opera:before{content:""}.fa-internet-explorer:before{content:""}.fa-television:before,.fa-tv:before{content:""}.fa-contao:before{content:""}.fa-500px:before{content:""}.fa-amazon:before{content:""}.fa-calendar-plus-o:before{content:""}.fa-calendar-minus-o:before{content:""}.fa-calendar-times-o:before{content:""}.fa-calendar-check-o:before{content:""}.fa-industry:before{content:""}.fa-map-pin:before{content:""}.fa-map-signs:before{content:""}.fa-map-o:before{content:""}.fa-map:before{content:""}.fa-commenting:before{content:""}.fa-commenting-o:before{content:""}.fa-houzz:before{content:""}.fa-vimeo:before{content:""}.fa-black-tie:before{content:""}.fa-fonticons:before{content:""}.fa-reddit-alien:before{content:""}.fa-edge:before{content:""}.fa-credit-card-alt:before{content:""}.fa-codiepie:before{content:""}.fa-modx:before{content:""}.fa-fort-awesome:before{content:""}.fa-usb:before{content:""}.fa-product-hunt:before{content:""}.fa-mixcloud:before{content:""}.fa-scribd:before{content:""}.fa-pause-circle:before{content:""}.fa-pause-circle-o:before{content:""}.fa-stop-circle:before{content:""}.fa-stop-circle-o:before{content:""}.fa-shopping-bag:before{content:""}.fa-shopping-basket:before{content:""}.fa-hashtag:before{content:""}.fa-bluetooth:before{content:""}.fa-bluetooth-b:before{content:""}.fa-percent:before{content:""}.fa-gitlab:before,.icon-gitlab:before{content:""}.fa-wpbeginner:before{content:""}.fa-wpforms:before{content:""}.fa-envira:before{content:""}.fa-universal-access:before{content:""}.fa-wheelchair-alt:before{content:""}.fa-question-circle-o:before{conten
t:""}.fa-blind:before{content:""}.fa-audio-description:before{content:""}.fa-volume-control-phone:before{content:""}.fa-braille:before{content:""}.fa-assistive-listening-systems:before{content:""}.fa-american-sign-language-interpreting:before,.fa-asl-interpreting:before{content:""}.fa-deaf:before,.fa-deafness:before,.fa-hard-of-hearing:before{content:""}.fa-glide:before{content:""}.fa-glide-g:before{content:""}.fa-sign-language:before,.fa-signing:before{content:""}.fa-low-vision:before{content:""}.fa-viadeo:before{content:""}.fa-viadeo-square:before{content:""}.fa-snapchat:before{content:""}.fa-snapchat-ghost:before{content:""}.fa-snapchat-square:before{content:""}.fa-pied-piper:before{content:""}.fa-first-order:before{content:""}.fa-yoast:before{content:""}.fa-themeisle:before{content:""}.fa-google-plus-circle:before,.fa-google-plus-official:before{content:""}.fa-fa:before,.fa-font-awesome:before{content:""}.fa-handshake-o:before{content:""}.fa-envelope-open:before{content:""}.fa-envelope-open-o:before{content:""}.fa-linode:before{content:""}.fa-address-book:before{content:""}.fa-address-book-o:before{content:""}.fa-address-card:before,.fa-vcard:before{content:""}.fa-address-card-o:before,.fa-vcard-o:before{content:""}.fa-user-circle:before{content:""}.fa-user-circle-o:before{content:""}.fa-user-o:before{content:""}.fa-id-badge:before{content:""}.fa-drivers-license:before,.fa-id-card:before{content:""}.fa-drivers-license-o:before,.fa-id-card-o:before{content:""}.fa-quora:before{content:""}.fa-free-code-camp:before{content:""}.fa-telegram:before{content:""}.fa-thermometer-4:before,.fa-thermometer-full:before,.fa-thermometer:before{content:""}.fa-thermometer-3:before,.fa-thermometer-three-quarters:before{content:""}.fa-thermometer-2:before,.fa-thermometer-half:before{content:""}.fa-thermometer-1:before,.fa-thermometer-quarter:before{content:""}.fa-thermometer-0:before,.fa-thermometer-empty:before{content:""}.fa-shower:befo
re{content:""}.fa-bath:before,.fa-bathtub:before,.fa-s15:before{content:""}.fa-podcast:before{content:""}.fa-window-maximize:before{content:""}.fa-window-minimize:before{content:""}.fa-window-restore:before{content:""}.fa-times-rectangle:before,.fa-window-close:before{content:""}.fa-times-rectangle-o:before,.fa-window-close-o:before{content:""}.fa-bandcamp:before{content:""}.fa-grav:before{content:""}.fa-etsy:before{content:""}.fa-imdb:before{content:""}.fa-ravelry:before{content:""}.fa-eercast:before{content:""}.fa-microchip:before{content:""}.fa-snowflake-o:before{content:""}.fa-superpowers:before{content:""}.fa-wpexplorer:before{content:""}.fa-meetup:before{content:""}.sr-only{position:absolute;width:1px;height:1px;padding:0;margin:-1px;overflow:hidden;clip:rect(0,0,0,0);border:0}.sr-only-focusable:active,.sr-only-focusable:focus{position:static;width:auto;height:auto;margin:0;overflow:visible;clip:auto}.fa,.icon,.rst-content .admonition-title,.rst-content .code-block-caption .headerlink,.rst-content .eqno .headerlink,.rst-content code.download span:first-child,.rst-content dl dt .headerlink,.rst-content h1 .headerlink,.rst-content h2 .headerlink,.rst-content h3 .headerlink,.rst-content h4 .headerlink,.rst-content h5 .headerlink,.rst-content h6 .headerlink,.rst-content p.caption .headerlink,.rst-content p .headerlink,.rst-content table>caption .headerlink,.rst-content tt.download span:first-child,.wy-dropdown .caret,.wy-inline-validate.wy-inline-validate-danger .wy-input-context,.wy-inline-validate.wy-inline-validate-info .wy-input-context,.wy-inline-validate.wy-inline-validate-success .wy-input-context,.wy-inline-validate.wy-inline-validate-warning .wy-input-context,.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a button.toctree-expand,.wy-menu-vertical li button.toctree-expand{font-family:inherit}.fa:before,.icon:before,.rst-content .admonition-title:before,.rst-content .code-block-caption 
.headerlink:before,.rst-content .eqno .headerlink:before,.rst-content code.download span:first-child:before,.rst-content dl dt .headerlink:before,.rst-content h1 .headerlink:before,.rst-content h2 .headerlink:before,.rst-content h3 .headerlink:before,.rst-content h4 .headerlink:before,.rst-content h5 .headerlink:before,.rst-content h6 .headerlink:before,.rst-content p.caption .headerlink:before,.rst-content p .headerlink:before,.rst-content table>caption .headerlink:before,.rst-content tt.download span:first-child:before,.wy-dropdown .caret:before,.wy-inline-validate.wy-inline-validate-danger .wy-input-context:before,.wy-inline-validate.wy-inline-validate-info .wy-input-context:before,.wy-inline-validate.wy-inline-validate-success .wy-input-context:before,.wy-inline-validate.wy-inline-validate-warning .wy-input-context:before,.wy-menu-vertical li.current>a button.toctree-expand:before,.wy-menu-vertical li.on a button.toctree-expand:before,.wy-menu-vertical li button.toctree-expand:before{font-family:FontAwesome;display:inline-block;font-style:normal;font-weight:400;line-height:1;text-decoration:inherit}.rst-content .code-block-caption a .headerlink,.rst-content .eqno a .headerlink,.rst-content a .admonition-title,.rst-content code.download a span:first-child,.rst-content dl dt a .headerlink,.rst-content h1 a .headerlink,.rst-content h2 a .headerlink,.rst-content h3 a .headerlink,.rst-content h4 a .headerlink,.rst-content h5 a .headerlink,.rst-content h6 a .headerlink,.rst-content p.caption a .headerlink,.rst-content p a .headerlink,.rst-content table>caption a .headerlink,.rst-content tt.download a span:first-child,.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a button.toctree-expand,.wy-menu-vertical li a button.toctree-expand,a .fa,a .icon,a .rst-content .admonition-title,a .rst-content .code-block-caption .headerlink,a .rst-content .eqno .headerlink,a .rst-content code.download span:first-child,a .rst-content dl dt .headerlink,a 
.rst-content h1 .headerlink,a .rst-content h2 .headerlink,a .rst-content h3 .headerlink,a .rst-content h4 .headerlink,a .rst-content h5 .headerlink,a .rst-content h6 .headerlink,a .rst-content p.caption .headerlink,a .rst-content p .headerlink,a .rst-content table>caption .headerlink,a .rst-content tt.download span:first-child,a .wy-menu-vertical li button.toctree-expand{display:inline-block;text-decoration:inherit}.btn .fa,.btn .icon,.btn .rst-content .admonition-title,.btn .rst-content .code-block-caption .headerlink,.btn .rst-content .eqno .headerlink,.btn .rst-content code.download span:first-child,.btn .rst-content dl dt .headerlink,.btn .rst-content h1 .headerlink,.btn .rst-content h2 .headerlink,.btn .rst-content h3 .headerlink,.btn .rst-content h4 .headerlink,.btn .rst-content h5 .headerlink,.btn .rst-content h6 .headerlink,.btn .rst-content p .headerlink,.btn .rst-content table>caption .headerlink,.btn .rst-content tt.download span:first-child,.btn .wy-menu-vertical li.current>a button.toctree-expand,.btn .wy-menu-vertical li.on a button.toctree-expand,.btn .wy-menu-vertical li button.toctree-expand,.nav .fa,.nav .icon,.nav .rst-content .admonition-title,.nav .rst-content .code-block-caption .headerlink,.nav .rst-content .eqno .headerlink,.nav .rst-content code.download span:first-child,.nav .rst-content dl dt .headerlink,.nav .rst-content h1 .headerlink,.nav .rst-content h2 .headerlink,.nav .rst-content h3 .headerlink,.nav .rst-content h4 .headerlink,.nav .rst-content h5 .headerlink,.nav .rst-content h6 .headerlink,.nav .rst-content p .headerlink,.nav .rst-content table>caption .headerlink,.nav .rst-content tt.download span:first-child,.nav .wy-menu-vertical li.current>a button.toctree-expand,.nav .wy-menu-vertical li.on a button.toctree-expand,.nav .wy-menu-vertical li button.toctree-expand,.rst-content .btn .admonition-title,.rst-content .code-block-caption .btn .headerlink,.rst-content .code-block-caption .nav .headerlink,.rst-content .eqno .btn 
.headerlink,.rst-content .eqno .nav .headerlink,.rst-content .nav .admonition-title,.rst-content code.download .btn span:first-child,.rst-content code.download .nav span:first-child,.rst-content dl dt .btn .headerlink,.rst-content dl dt .nav .headerlink,.rst-content h1 .btn .headerlink,.rst-content h1 .nav .headerlink,.rst-content h2 .btn .headerlink,.rst-content h2 .nav .headerlink,.rst-content h3 .btn .headerlink,.rst-content h3 .nav .headerlink,.rst-content h4 .btn .headerlink,.rst-content h4 .nav .headerlink,.rst-content h5 .btn .headerlink,.rst-content h5 .nav .headerlink,.rst-content h6 .btn .headerlink,.rst-content h6 .nav .headerlink,.rst-content p .btn .headerlink,.rst-content p .nav .headerlink,.rst-content table>caption .btn .headerlink,.rst-content table>caption .nav .headerlink,.rst-content tt.download .btn span:first-child,.rst-content tt.download .nav span:first-child,.wy-menu-vertical li .btn button.toctree-expand,.wy-menu-vertical li.current>a .btn button.toctree-expand,.wy-menu-vertical li.current>a .nav button.toctree-expand,.wy-menu-vertical li .nav button.toctree-expand,.wy-menu-vertical li.on a .btn button.toctree-expand,.wy-menu-vertical li.on a .nav button.toctree-expand{display:inline}.btn .fa-large.icon,.btn .fa.fa-large,.btn .rst-content .code-block-caption .fa-large.headerlink,.btn .rst-content .eqno .fa-large.headerlink,.btn .rst-content .fa-large.admonition-title,.btn .rst-content code.download span.fa-large:first-child,.btn .rst-content dl dt .fa-large.headerlink,.btn .rst-content h1 .fa-large.headerlink,.btn .rst-content h2 .fa-large.headerlink,.btn .rst-content h3 .fa-large.headerlink,.btn .rst-content h4 .fa-large.headerlink,.btn .rst-content h5 .fa-large.headerlink,.btn .rst-content h6 .fa-large.headerlink,.btn .rst-content p .fa-large.headerlink,.btn .rst-content table>caption .fa-large.headerlink,.btn .rst-content tt.download span.fa-large:first-child,.btn .wy-menu-vertical li button.fa-large.toctree-expand,.nav 
.fa-large.icon,.nav .fa.fa-large,.nav .rst-content .code-block-caption .fa-large.headerlink,.nav .rst-content .eqno .fa-large.headerlink,.nav .rst-content .fa-large.admonition-title,.nav .rst-content code.download span.fa-large:first-child,.nav .rst-content dl dt .fa-large.headerlink,.nav .rst-content h1 .fa-large.headerlink,.nav .rst-content h2 .fa-large.headerlink,.nav .rst-content h3 .fa-large.headerlink,.nav .rst-content h4 .fa-large.headerlink,.nav .rst-content h5 .fa-large.headerlink,.nav .rst-content h6 .fa-large.headerlink,.nav .rst-content p .fa-large.headerlink,.nav .rst-content table>caption .fa-large.headerlink,.nav .rst-content tt.download span.fa-large:first-child,.nav .wy-menu-vertical li button.fa-large.toctree-expand,.rst-content .btn .fa-large.admonition-title,.rst-content .code-block-caption .btn .fa-large.headerlink,.rst-content .code-block-caption .nav .fa-large.headerlink,.rst-content .eqno .btn .fa-large.headerlink,.rst-content .eqno .nav .fa-large.headerlink,.rst-content .nav .fa-large.admonition-title,.rst-content code.download .btn span.fa-large:first-child,.rst-content code.download .nav span.fa-large:first-child,.rst-content dl dt .btn .fa-large.headerlink,.rst-content dl dt .nav .fa-large.headerlink,.rst-content h1 .btn .fa-large.headerlink,.rst-content h1 .nav .fa-large.headerlink,.rst-content h2 .btn .fa-large.headerlink,.rst-content h2 .nav .fa-large.headerlink,.rst-content h3 .btn .fa-large.headerlink,.rst-content h3 .nav .fa-large.headerlink,.rst-content h4 .btn .fa-large.headerlink,.rst-content h4 .nav .fa-large.headerlink,.rst-content h5 .btn .fa-large.headerlink,.rst-content h5 .nav .fa-large.headerlink,.rst-content h6 .btn .fa-large.headerlink,.rst-content h6 .nav .fa-large.headerlink,.rst-content p .btn .fa-large.headerlink,.rst-content p .nav .fa-large.headerlink,.rst-content table>caption .btn .fa-large.headerlink,.rst-content table>caption .nav .fa-large.headerlink,.rst-content tt.download .btn 
span.fa-large:first-child,.rst-content tt.download .nav span.fa-large:first-child,.wy-menu-vertical li .btn button.fa-large.toctree-expand,.wy-menu-vertical li .nav button.fa-large.toctree-expand{line-height:.9em}.btn .fa-spin.icon,.btn .fa.fa-spin,.btn .rst-content .code-block-caption .fa-spin.headerlink,.btn .rst-content .eqno .fa-spin.headerlink,.btn .rst-content .fa-spin.admonition-title,.btn .rst-content code.download span.fa-spin:first-child,.btn .rst-content dl dt .fa-spin.headerlink,.btn .rst-content h1 .fa-spin.headerlink,.btn .rst-content h2 .fa-spin.headerlink,.btn .rst-content h3 .fa-spin.headerlink,.btn .rst-content h4 .fa-spin.headerlink,.btn .rst-content h5 .fa-spin.headerlink,.btn .rst-content h6 .fa-spin.headerlink,.btn .rst-content p .fa-spin.headerlink,.btn .rst-content table>caption .fa-spin.headerlink,.btn .rst-content tt.download span.fa-spin:first-child,.btn .wy-menu-vertical li button.fa-spin.toctree-expand,.nav .fa-spin.icon,.nav .fa.fa-spin,.nav .rst-content .code-block-caption .fa-spin.headerlink,.nav .rst-content .eqno .fa-spin.headerlink,.nav .rst-content .fa-spin.admonition-title,.nav .rst-content code.download span.fa-spin:first-child,.nav .rst-content dl dt .fa-spin.headerlink,.nav .rst-content h1 .fa-spin.headerlink,.nav .rst-content h2 .fa-spin.headerlink,.nav .rst-content h3 .fa-spin.headerlink,.nav .rst-content h4 .fa-spin.headerlink,.nav .rst-content h5 .fa-spin.headerlink,.nav .rst-content h6 .fa-spin.headerlink,.nav .rst-content p .fa-spin.headerlink,.nav .rst-content table>caption .fa-spin.headerlink,.nav .rst-content tt.download span.fa-spin:first-child,.nav .wy-menu-vertical li button.fa-spin.toctree-expand,.rst-content .btn .fa-spin.admonition-title,.rst-content .code-block-caption .btn .fa-spin.headerlink,.rst-content .code-block-caption .nav .fa-spin.headerlink,.rst-content .eqno .btn .fa-spin.headerlink,.rst-content .eqno .nav .fa-spin.headerlink,.rst-content .nav .fa-spin.admonition-title,.rst-content code.download 
.btn span.fa-spin:first-child,.rst-content code.download .nav span.fa-spin:first-child,.rst-content dl dt .btn .fa-spin.headerlink,.rst-content dl dt .nav .fa-spin.headerlink,.rst-content h1 .btn .fa-spin.headerlink,.rst-content h1 .nav .fa-spin.headerlink,.rst-content h2 .btn .fa-spin.headerlink,.rst-content h2 .nav .fa-spin.headerlink,.rst-content h3 .btn .fa-spin.headerlink,.rst-content h3 .nav .fa-spin.headerlink,.rst-content h4 .btn .fa-spin.headerlink,.rst-content h4 .nav .fa-spin.headerlink,.rst-content h5 .btn .fa-spin.headerlink,.rst-content h5 .nav .fa-spin.headerlink,.rst-content h6 .btn .fa-spin.headerlink,.rst-content h6 .nav .fa-spin.headerlink,.rst-content p .btn .fa-spin.headerlink,.rst-content p .nav .fa-spin.headerlink,.rst-content table>caption .btn .fa-spin.headerlink,.rst-content table>caption .nav .fa-spin.headerlink,.rst-content tt.download .btn span.fa-spin:first-child,.rst-content tt.download .nav span.fa-spin:first-child,.wy-menu-vertical li .btn button.fa-spin.toctree-expand,.wy-menu-vertical li .nav button.fa-spin.toctree-expand{display:inline-block}.btn.fa:before,.btn.icon:before,.rst-content .btn.admonition-title:before,.rst-content .code-block-caption .btn.headerlink:before,.rst-content .eqno .btn.headerlink:before,.rst-content code.download span.btn:first-child:before,.rst-content dl dt .btn.headerlink:before,.rst-content h1 .btn.headerlink:before,.rst-content h2 .btn.headerlink:before,.rst-content h3 .btn.headerlink:before,.rst-content h4 .btn.headerlink:before,.rst-content h5 .btn.headerlink:before,.rst-content h6 .btn.headerlink:before,.rst-content p .btn.headerlink:before,.rst-content table>caption .btn.headerlink:before,.rst-content tt.download span.btn:first-child:before,.wy-menu-vertical li button.btn.toctree-expand:before{opacity:.5;-webkit-transition:opacity .05s ease-in;-moz-transition:opacity .05s ease-in;transition:opacity .05s ease-in}.btn.fa:hover:before,.btn.icon:hover:before,.rst-content 
.btn.admonition-title:hover:before,.rst-content .code-block-caption .btn.headerlink:hover:before,.rst-content .eqno .btn.headerlink:hover:before,.rst-content code.download span.btn:first-child:hover:before,.rst-content dl dt .btn.headerlink:hover:before,.rst-content h1 .btn.headerlink:hover:before,.rst-content h2 .btn.headerlink:hover:before,.rst-content h3 .btn.headerlink:hover:before,.rst-content h4 .btn.headerlink:hover:before,.rst-content h5 .btn.headerlink:hover:before,.rst-content h6 .btn.headerlink:hover:before,.rst-content p .btn.headerlink:hover:before,.rst-content table>caption .btn.headerlink:hover:before,.rst-content tt.download span.btn:first-child:hover:before,.wy-menu-vertical li button.btn.toctree-expand:hover:before{opacity:1}.btn-mini .fa:before,.btn-mini .icon:before,.btn-mini .rst-content .admonition-title:before,.btn-mini .rst-content .code-block-caption .headerlink:before,.btn-mini .rst-content .eqno .headerlink:before,.btn-mini .rst-content code.download span:first-child:before,.btn-mini .rst-content dl dt .headerlink:before,.btn-mini .rst-content h1 .headerlink:before,.btn-mini .rst-content h2 .headerlink:before,.btn-mini .rst-content h3 .headerlink:before,.btn-mini .rst-content h4 .headerlink:before,.btn-mini .rst-content h5 .headerlink:before,.btn-mini .rst-content h6 .headerlink:before,.btn-mini .rst-content p .headerlink:before,.btn-mini .rst-content table>caption .headerlink:before,.btn-mini .rst-content tt.download span:first-child:before,.btn-mini .wy-menu-vertical li button.toctree-expand:before,.rst-content .btn-mini .admonition-title:before,.rst-content .code-block-caption .btn-mini .headerlink:before,.rst-content .eqno .btn-mini .headerlink:before,.rst-content code.download .btn-mini span:first-child:before,.rst-content dl dt .btn-mini .headerlink:before,.rst-content h1 .btn-mini .headerlink:before,.rst-content h2 .btn-mini .headerlink:before,.rst-content h3 .btn-mini .headerlink:before,.rst-content h4 .btn-mini 
.headerlink:before,.rst-content h5 .btn-mini .headerlink:before,.rst-content h6 .btn-mini .headerlink:before,.rst-content p .btn-mini .headerlink:before,.rst-content table>caption .btn-mini .headerlink:before,.rst-content tt.download .btn-mini span:first-child:before,.wy-menu-vertical li .btn-mini button.toctree-expand:before{font-size:14px;vertical-align:-15%}.rst-content .admonition,.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .danger,.rst-content .error,.rst-content .hint,.rst-content .important,.rst-content .note,.rst-content .seealso,.rst-content .tip,.rst-content .warning,.wy-alert{padding:12px;line-height:24px;margin-bottom:24px;background:#e7f2fa}.rst-content .admonition-title,.wy-alert-title{font-weight:700;display:block;color:#fff;background:#6ab0de;padding:6px 12px;margin:-12px -12px 12px}.rst-content .danger,.rst-content .error,.rst-content .wy-alert-danger.admonition,.rst-content .wy-alert-danger.admonition-todo,.rst-content .wy-alert-danger.attention,.rst-content .wy-alert-danger.caution,.rst-content .wy-alert-danger.hint,.rst-content .wy-alert-danger.important,.rst-content .wy-alert-danger.note,.rst-content .wy-alert-danger.seealso,.rst-content .wy-alert-danger.tip,.rst-content .wy-alert-danger.warning,.wy-alert.wy-alert-danger{background:#fdf3f2}.rst-content .danger .admonition-title,.rst-content .danger .wy-alert-title,.rst-content .error .admonition-title,.rst-content .error .wy-alert-title,.rst-content .wy-alert-danger.admonition-todo .admonition-title,.rst-content .wy-alert-danger.admonition-todo .wy-alert-title,.rst-content .wy-alert-danger.admonition .admonition-title,.rst-content .wy-alert-danger.admonition .wy-alert-title,.rst-content .wy-alert-danger.attention .admonition-title,.rst-content .wy-alert-danger.attention .wy-alert-title,.rst-content .wy-alert-danger.caution .admonition-title,.rst-content .wy-alert-danger.caution .wy-alert-title,.rst-content .wy-alert-danger.hint 
.admonition-title,.rst-content .wy-alert-danger.hint .wy-alert-title,.rst-content .wy-alert-danger.important .admonition-title,.rst-content .wy-alert-danger.important .wy-alert-title,.rst-content .wy-alert-danger.note .admonition-title,.rst-content .wy-alert-danger.note .wy-alert-title,.rst-content .wy-alert-danger.seealso .admonition-title,.rst-content .wy-alert-danger.seealso .wy-alert-title,.rst-content .wy-alert-danger.tip .admonition-title,.rst-content .wy-alert-danger.tip .wy-alert-title,.rst-content .wy-alert-danger.warning .admonition-title,.rst-content .wy-alert-danger.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-danger .admonition-title,.wy-alert.wy-alert-danger .rst-content .admonition-title,.wy-alert.wy-alert-danger .wy-alert-title{background:#f29f97}.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .warning,.rst-content .wy-alert-warning.admonition,.rst-content .wy-alert-warning.danger,.rst-content .wy-alert-warning.error,.rst-content .wy-alert-warning.hint,.rst-content .wy-alert-warning.important,.rst-content .wy-alert-warning.note,.rst-content .wy-alert-warning.seealso,.rst-content .wy-alert-warning.tip,.wy-alert.wy-alert-warning{background:#ffedcc}.rst-content .admonition-todo .admonition-title,.rst-content .admonition-todo .wy-alert-title,.rst-content .attention .admonition-title,.rst-content .attention .wy-alert-title,.rst-content .caution .admonition-title,.rst-content .caution .wy-alert-title,.rst-content .warning .admonition-title,.rst-content .warning .wy-alert-title,.rst-content .wy-alert-warning.admonition .admonition-title,.rst-content .wy-alert-warning.admonition .wy-alert-title,.rst-content .wy-alert-warning.danger .admonition-title,.rst-content .wy-alert-warning.danger .wy-alert-title,.rst-content .wy-alert-warning.error .admonition-title,.rst-content .wy-alert-warning.error .wy-alert-title,.rst-content .wy-alert-warning.hint .admonition-title,.rst-content .wy-alert-warning.hint 
.wy-alert-title,.rst-content .wy-alert-warning.important .admonition-title,.rst-content .wy-alert-warning.important .wy-alert-title,.rst-content .wy-alert-warning.note .admonition-title,.rst-content .wy-alert-warning.note .wy-alert-title,.rst-content .wy-alert-warning.seealso .admonition-title,.rst-content .wy-alert-warning.seealso .wy-alert-title,.rst-content .wy-alert-warning.tip .admonition-title,.rst-content .wy-alert-warning.tip .wy-alert-title,.rst-content .wy-alert.wy-alert-warning .admonition-title,.wy-alert.wy-alert-warning .rst-content .admonition-title,.wy-alert.wy-alert-warning .wy-alert-title{background:#f0b37e}.rst-content .note,.rst-content .seealso,.rst-content .wy-alert-info.admonition,.rst-content .wy-alert-info.admonition-todo,.rst-content .wy-alert-info.attention,.rst-content .wy-alert-info.caution,.rst-content .wy-alert-info.danger,.rst-content .wy-alert-info.error,.rst-content .wy-alert-info.hint,.rst-content .wy-alert-info.important,.rst-content .wy-alert-info.tip,.rst-content .wy-alert-info.warning,.wy-alert.wy-alert-info{background:#e7f2fa}.rst-content .note .admonition-title,.rst-content .note .wy-alert-title,.rst-content .seealso .admonition-title,.rst-content .seealso .wy-alert-title,.rst-content .wy-alert-info.admonition-todo .admonition-title,.rst-content .wy-alert-info.admonition-todo .wy-alert-title,.rst-content .wy-alert-info.admonition .admonition-title,.rst-content .wy-alert-info.admonition .wy-alert-title,.rst-content .wy-alert-info.attention .admonition-title,.rst-content .wy-alert-info.attention .wy-alert-title,.rst-content .wy-alert-info.caution .admonition-title,.rst-content .wy-alert-info.caution .wy-alert-title,.rst-content .wy-alert-info.danger .admonition-title,.rst-content .wy-alert-info.danger .wy-alert-title,.rst-content .wy-alert-info.error .admonition-title,.rst-content .wy-alert-info.error .wy-alert-title,.rst-content .wy-alert-info.hint .admonition-title,.rst-content .wy-alert-info.hint .wy-alert-title,.rst-content 
.wy-alert-info.important .admonition-title,.rst-content .wy-alert-info.important .wy-alert-title,.rst-content .wy-alert-info.tip .admonition-title,.rst-content .wy-alert-info.tip .wy-alert-title,.rst-content .wy-alert-info.warning .admonition-title,.rst-content .wy-alert-info.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-info .admonition-title,.wy-alert.wy-alert-info .rst-content .admonition-title,.wy-alert.wy-alert-info .wy-alert-title{background:#6ab0de}.rst-content .hint,.rst-content .important,.rst-content .tip,.rst-content .wy-alert-success.admonition,.rst-content .wy-alert-success.admonition-todo,.rst-content .wy-alert-success.attention,.rst-content .wy-alert-success.caution,.rst-content .wy-alert-success.danger,.rst-content .wy-alert-success.error,.rst-content .wy-alert-success.note,.rst-content .wy-alert-success.seealso,.rst-content .wy-alert-success.warning,.wy-alert.wy-alert-success{background:#dbfaf4}.rst-content .hint .admonition-title,.rst-content .hint .wy-alert-title,.rst-content .important .admonition-title,.rst-content .important .wy-alert-title,.rst-content .tip .admonition-title,.rst-content .tip .wy-alert-title,.rst-content .wy-alert-success.admonition-todo .admonition-title,.rst-content .wy-alert-success.admonition-todo .wy-alert-title,.rst-content .wy-alert-success.admonition .admonition-title,.rst-content .wy-alert-success.admonition .wy-alert-title,.rst-content .wy-alert-success.attention .admonition-title,.rst-content .wy-alert-success.attention .wy-alert-title,.rst-content .wy-alert-success.caution .admonition-title,.rst-content .wy-alert-success.caution .wy-alert-title,.rst-content .wy-alert-success.danger .admonition-title,.rst-content .wy-alert-success.danger .wy-alert-title,.rst-content .wy-alert-success.error .admonition-title,.rst-content .wy-alert-success.error .wy-alert-title,.rst-content .wy-alert-success.note .admonition-title,.rst-content .wy-alert-success.note .wy-alert-title,.rst-content .wy-alert-success.seealso 
.admonition-title,.rst-content .wy-alert-success.seealso .wy-alert-title,.rst-content .wy-alert-success.warning .admonition-title,.rst-content .wy-alert-success.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-success .admonition-title,.wy-alert.wy-alert-success .rst-content .admonition-title,.wy-alert.wy-alert-success .wy-alert-title{background:#1abc9c}.rst-content .wy-alert-neutral.admonition,.rst-content .wy-alert-neutral.admonition-todo,.rst-content .wy-alert-neutral.attention,.rst-content .wy-alert-neutral.caution,.rst-content .wy-alert-neutral.danger,.rst-content .wy-alert-neutral.error,.rst-content .wy-alert-neutral.hint,.rst-content .wy-alert-neutral.important,.rst-content .wy-alert-neutral.note,.rst-content .wy-alert-neutral.seealso,.rst-content .wy-alert-neutral.tip,.rst-content .wy-alert-neutral.warning,.wy-alert.wy-alert-neutral{background:#f3f6f6}.rst-content .wy-alert-neutral.admonition-todo .admonition-title,.rst-content .wy-alert-neutral.admonition-todo .wy-alert-title,.rst-content .wy-alert-neutral.admonition .admonition-title,.rst-content .wy-alert-neutral.admonition .wy-alert-title,.rst-content .wy-alert-neutral.attention .admonition-title,.rst-content .wy-alert-neutral.attention .wy-alert-title,.rst-content .wy-alert-neutral.caution .admonition-title,.rst-content .wy-alert-neutral.caution .wy-alert-title,.rst-content .wy-alert-neutral.danger .admonition-title,.rst-content .wy-alert-neutral.danger .wy-alert-title,.rst-content .wy-alert-neutral.error .admonition-title,.rst-content .wy-alert-neutral.error .wy-alert-title,.rst-content .wy-alert-neutral.hint .admonition-title,.rst-content .wy-alert-neutral.hint .wy-alert-title,.rst-content .wy-alert-neutral.important .admonition-title,.rst-content .wy-alert-neutral.important .wy-alert-title,.rst-content .wy-alert-neutral.note .admonition-title,.rst-content .wy-alert-neutral.note .wy-alert-title,.rst-content .wy-alert-neutral.seealso .admonition-title,.rst-content .wy-alert-neutral.seealso 
.wy-alert-title,.rst-content .wy-alert-neutral.tip .admonition-title,.rst-content .wy-alert-neutral.tip .wy-alert-title,.rst-content .wy-alert-neutral.warning .admonition-title,.rst-content .wy-alert-neutral.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-neutral .admonition-title,.wy-alert.wy-alert-neutral .rst-content .admonition-title,.wy-alert.wy-alert-neutral .wy-alert-title{color:#404040;background:#e1e4e5}.rst-content .wy-alert-neutral.admonition-todo a,.rst-content .wy-alert-neutral.admonition a,.rst-content .wy-alert-neutral.attention a,.rst-content .wy-alert-neutral.caution a,.rst-content .wy-alert-neutral.danger a,.rst-content .wy-alert-neutral.error a,.rst-content .wy-alert-neutral.hint a,.rst-content .wy-alert-neutral.important a,.rst-content .wy-alert-neutral.note a,.rst-content .wy-alert-neutral.seealso a,.rst-content .wy-alert-neutral.tip a,.rst-content .wy-alert-neutral.warning a,.wy-alert.wy-alert-neutral a{color:#2980b9}.rst-content .admonition-todo p:last-child,.rst-content .admonition p:last-child,.rst-content .attention p:last-child,.rst-content .caution p:last-child,.rst-content .danger p:last-child,.rst-content .error p:last-child,.rst-content .hint p:last-child,.rst-content .important p:last-child,.rst-content .note p:last-child,.rst-content .seealso p:last-child,.rst-content .tip p:last-child,.rst-content .warning p:last-child,.wy-alert p:last-child{margin-bottom:0}.wy-tray-container{position:fixed;bottom:0;left:0;z-index:600}.wy-tray-container li{display:block;width:300px;background:transparent;color:#fff;text-align:center;box-shadow:0 5px 5px 0 rgba(0,0,0,.1);padding:0 24px;min-width:20%;opacity:0;height:0;line-height:56px;overflow:hidden;-webkit-transition:all .3s ease-in;-moz-transition:all .3s ease-in;transition:all .3s ease-in}.wy-tray-container li.wy-tray-item-success{background:#27ae60}.wy-tray-container li.wy-tray-item-info{background:#2980b9}.wy-tray-container li.wy-tray-item-warning{background:#e67e22}.wy-tray-container 
li.wy-tray-item-danger{background:#e74c3c}.wy-tray-container li.on{opacity:1;height:56px}@media screen and (max-width:768px){.wy-tray-container{bottom:auto;top:0;width:100%}.wy-tray-container li{width:100%}}button{font-size:100%;margin:0;vertical-align:baseline;*vertical-align:middle;cursor:pointer;line-height:normal;-webkit-appearance:button;*overflow:visible}button::-moz-focus-inner,input::-moz-focus-inner{border:0;padding:0}button[disabled]{cursor:default}.btn{display:inline-block;border-radius:2px;line-height:normal;white-space:nowrap;text-align:center;cursor:pointer;font-size:100%;padding:6px 12px 8px;color:#fff;border:1px solid rgba(0,0,0,.1);background-color:#27ae60;text-decoration:none;font-weight:400;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;box-shadow:inset 0 1px 2px -1px hsla(0,0%,100%,.5),inset 0 -2px 0 0 rgba(0,0,0,.1);outline-none:false;vertical-align:middle;*display:inline;zoom:1;-webkit-user-drag:none;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none;-webkit-transition:all .1s linear;-moz-transition:all .1s linear;transition:all .1s linear}.btn-hover{background:#2e8ece;color:#fff}.btn:hover{background:#2cc36b;color:#fff}.btn:focus{background:#2cc36b;outline:0}.btn:active{box-shadow:inset 0 -1px 0 0 rgba(0,0,0,.05),inset 0 2px 0 0 rgba(0,0,0,.1);padding:8px 12px 6px}.btn:visited{color:#fff}.btn-disabled,.btn-disabled:active,.btn-disabled:focus,.btn-disabled:hover,.btn:disabled{background-image:none;filter:progid:DXImageTransform.Microsoft.gradient(enabled = 
false);filter:alpha(opacity=40);opacity:.4;cursor:not-allowed;box-shadow:none}.btn::-moz-focus-inner{padding:0;border:0}.btn-small{font-size:80%}.btn-info{background-color:#2980b9!important}.btn-info:hover{background-color:#2e8ece!important}.btn-neutral{background-color:#f3f6f6!important;color:#404040!important}.btn-neutral:hover{background-color:#e5ebeb!important;color:#404040}.btn-neutral:visited{color:#404040!important}.btn-success{background-color:#27ae60!important}.btn-success:hover{background-color:#295!important}.btn-danger{background-color:#e74c3c!important}.btn-danger:hover{background-color:#ea6153!important}.btn-warning{background-color:#e67e22!important}.btn-warning:hover{background-color:#e98b39!important}.btn-invert{background-color:#222}.btn-invert:hover{background-color:#2f2f2f!important}.btn-link{background-color:transparent!important;color:#2980b9;box-shadow:none;border-color:transparent!important}.btn-link:active,.btn-link:hover{background-color:transparent!important;color:#409ad5!important;box-shadow:none}.btn-link:visited{color:#9b59b6}.wy-btn-group .btn,.wy-control .btn{vertical-align:middle}.wy-btn-group{margin-bottom:24px;*zoom:1}.wy-btn-group:after,.wy-btn-group:before{display:table;content:""}.wy-btn-group:after{clear:both}.wy-dropdown{position:relative;display:inline-block}.wy-dropdown-active .wy-dropdown-menu{display:block}.wy-dropdown-menu{position:absolute;left:0;display:none;float:left;top:100%;min-width:100%;background:#fcfcfc;z-index:100;border:1px solid #cfd7dd;box-shadow:0 2px 2px 0 rgba(0,0,0,.1);padding:12px}.wy-dropdown-menu>dd>a{display:block;clear:both;color:#404040;white-space:nowrap;font-size:90%;padding:0 12px;cursor:pointer}.wy-dropdown-menu>dd>a:hover{background:#2980b9;color:#fff}.wy-dropdown-menu>dd.divider{border-top:1px solid #cfd7dd;margin:6px 0}.wy-dropdown-menu>dd.search{padding-bottom:12px}.wy-dropdown-menu>dd.search 
input[type=search]{width:100%}.wy-dropdown-menu>dd.call-to-action{background:#e3e3e3;text-transform:uppercase;font-weight:500;font-size:80%}.wy-dropdown-menu>dd.call-to-action:hover{background:#e3e3e3}.wy-dropdown-menu>dd.call-to-action .btn{color:#fff}.wy-dropdown.wy-dropdown-up .wy-dropdown-menu{bottom:100%;top:auto;left:auto;right:0}.wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu{background:#fcfcfc;margin-top:2px}.wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu a{padding:6px 12px}.wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu a:hover{background:#2980b9;color:#fff}.wy-dropdown.wy-dropdown-left .wy-dropdown-menu{right:0;left:auto;text-align:right}.wy-dropdown-arrow:before{content:" ";border-bottom:5px solid #f5f5f5;border-left:5px solid transparent;border-right:5px solid transparent;position:absolute;display:block;top:-4px;left:50%;margin-left:-3px}.wy-dropdown-arrow.wy-dropdown-arrow-left:before{left:11px}.wy-form-stacked select{display:block}.wy-form-aligned .wy-help-inline,.wy-form-aligned input,.wy-form-aligned label,.wy-form-aligned select,.wy-form-aligned textarea{display:inline-block;*display:inline;*zoom:1;vertical-align:middle}.wy-form-aligned .wy-control-group>label{display:inline-block;vertical-align:middle;width:10em;margin:6px 12px 0 0;float:left}.wy-form-aligned .wy-control{float:left}.wy-form-aligned .wy-control label{display:block}.wy-form-aligned .wy-control select{margin-top:6px}fieldset{margin:0}fieldset,legend{border:0;padding:0}legend{width:100%;white-space:normal;margin-bottom:24px;font-size:150%;*margin-left:-7px}label,legend{display:block}label{margin:0 0 
.3125em;color:#333;font-size:90%}input,select,textarea{font-size:100%;margin:0;vertical-align:baseline;*vertical-align:middle}.wy-control-group{margin-bottom:24px;max-width:1200px;margin-left:auto;margin-right:auto;*zoom:1}.wy-control-group:after,.wy-control-group:before{display:table;content:""}.wy-control-group:after{clear:both}.wy-control-group.wy-control-group-required>label:after{content:" *";color:#e74c3c}.wy-control-group .wy-form-full,.wy-control-group .wy-form-halves,.wy-control-group .wy-form-thirds{padding-bottom:12px}.wy-control-group .wy-form-full input[type=color],.wy-control-group .wy-form-full input[type=date],.wy-control-group .wy-form-full input[type=datetime-local],.wy-control-group .wy-form-full input[type=datetime],.wy-control-group .wy-form-full input[type=email],.wy-control-group .wy-form-full input[type=month],.wy-control-group .wy-form-full input[type=number],.wy-control-group .wy-form-full input[type=password],.wy-control-group .wy-form-full input[type=search],.wy-control-group .wy-form-full input[type=tel],.wy-control-group .wy-form-full input[type=text],.wy-control-group .wy-form-full input[type=time],.wy-control-group .wy-form-full input[type=url],.wy-control-group .wy-form-full input[type=week],.wy-control-group .wy-form-full select,.wy-control-group .wy-form-halves input[type=color],.wy-control-group .wy-form-halves input[type=date],.wy-control-group .wy-form-halves input[type=datetime-local],.wy-control-group .wy-form-halves input[type=datetime],.wy-control-group .wy-form-halves input[type=email],.wy-control-group .wy-form-halves input[type=month],.wy-control-group .wy-form-halves input[type=number],.wy-control-group .wy-form-halves input[type=password],.wy-control-group .wy-form-halves input[type=search],.wy-control-group .wy-form-halves input[type=tel],.wy-control-group .wy-form-halves input[type=text],.wy-control-group .wy-form-halves input[type=time],.wy-control-group .wy-form-halves input[type=url],.wy-control-group 
.wy-form-halves input[type=week],.wy-control-group .wy-form-halves select,.wy-control-group .wy-form-thirds input[type=color],.wy-control-group .wy-form-thirds input[type=date],.wy-control-group .wy-form-thirds input[type=datetime-local],.wy-control-group .wy-form-thirds input[type=datetime],.wy-control-group .wy-form-thirds input[type=email],.wy-control-group .wy-form-thirds input[type=month],.wy-control-group .wy-form-thirds input[type=number],.wy-control-group .wy-form-thirds input[type=password],.wy-control-group .wy-form-thirds input[type=search],.wy-control-group .wy-form-thirds input[type=tel],.wy-control-group .wy-form-thirds input[type=text],.wy-control-group .wy-form-thirds input[type=time],.wy-control-group .wy-form-thirds input[type=url],.wy-control-group .wy-form-thirds input[type=week],.wy-control-group .wy-form-thirds select{width:100%}.wy-control-group .wy-form-full{float:left;display:block;width:100%;margin-right:0}.wy-control-group .wy-form-full:last-child{margin-right:0}.wy-control-group .wy-form-halves{float:left;display:block;margin-right:2.35765%;width:48.82117%}.wy-control-group .wy-form-halves:last-child,.wy-control-group .wy-form-halves:nth-of-type(2n){margin-right:0}.wy-control-group .wy-form-halves:nth-of-type(odd){clear:left}.wy-control-group .wy-form-thirds{float:left;display:block;margin-right:2.35765%;width:31.76157%}.wy-control-group .wy-form-thirds:last-child,.wy-control-group .wy-form-thirds:nth-of-type(3n){margin-right:0}.wy-control-group .wy-form-thirds:nth-of-type(3n+1){clear:left}.wy-control-group.wy-control-group-no-input .wy-control,.wy-control-no-input{margin:6px 0 0;font-size:90%}.wy-control-no-input{display:inline-block}.wy-control-group.fluid-input input[type=color],.wy-control-group.fluid-input input[type=date],.wy-control-group.fluid-input input[type=datetime-local],.wy-control-group.fluid-input input[type=datetime],.wy-control-group.fluid-input input[type=email],.wy-control-group.fluid-input 
input[type=month],.wy-control-group.fluid-input input[type=number],.wy-control-group.fluid-input input[type=password],.wy-control-group.fluid-input input[type=search],.wy-control-group.fluid-input input[type=tel],.wy-control-group.fluid-input input[type=text],.wy-control-group.fluid-input input[type=time],.wy-control-group.fluid-input input[type=url],.wy-control-group.fluid-input input[type=week]{width:100%}.wy-form-message-inline{padding-left:.3em;color:#666;font-size:90%}.wy-form-message{display:block;color:#999;font-size:70%;margin-top:.3125em;font-style:italic}.wy-form-message p{font-size:inherit;font-style:italic;margin-bottom:6px}.wy-form-message p:last-child{margin-bottom:0}input{line-height:normal}input[type=button],input[type=reset],input[type=submit]{-webkit-appearance:button;cursor:pointer;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;*overflow:visible}input[type=color],input[type=date],input[type=datetime-local],input[type=datetime],input[type=email],input[type=month],input[type=number],input[type=password],input[type=search],input[type=tel],input[type=text],input[type=time],input[type=url],input[type=week]{-webkit-appearance:none;padding:6px;display:inline-block;border:1px solid #ccc;font-size:80%;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;box-shadow:inset 0 1px 3px #ddd;border-radius:0;-webkit-transition:border .3s linear;-moz-transition:border .3s linear;transition:border .3s linear}input[type=datetime-local]{padding:.34375em 
.625em}input[disabled]{cursor:default}input[type=checkbox],input[type=radio]{padding:0;margin-right:.3125em;*height:13px;*width:13px}input[type=checkbox],input[type=radio],input[type=search]{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box}input[type=search]::-webkit-search-cancel-button,input[type=search]::-webkit-search-decoration{-webkit-appearance:none}input[type=color]:focus,input[type=date]:focus,input[type=datetime-local]:focus,input[type=datetime]:focus,input[type=email]:focus,input[type=month]:focus,input[type=number]:focus,input[type=password]:focus,input[type=search]:focus,input[type=tel]:focus,input[type=text]:focus,input[type=time]:focus,input[type=url]:focus,input[type=week]:focus{outline:0;outline:thin dotted\9;border-color:#333}input.no-focus:focus{border-color:#ccc!important}input[type=checkbox]:focus,input[type=file]:focus,input[type=radio]:focus{outline:thin dotted #333;outline:1px auto #129fea}input[type=color][disabled],input[type=date][disabled],input[type=datetime-local][disabled],input[type=datetime][disabled],input[type=email][disabled],input[type=month][disabled],input[type=number][disabled],input[type=password][disabled],input[type=search][disabled],input[type=tel][disabled],input[type=text][disabled],input[type=time][disabled],input[type=url][disabled],input[type=week][disabled]{cursor:not-allowed;background-color:#fafafa}input:focus:invalid,select:focus:invalid,textarea:focus:invalid{color:#e74c3c;border:1px solid #e74c3c}input:focus:invalid:focus,select:focus:invalid:focus,textarea:focus:invalid:focus{border-color:#e74c3c}input[type=checkbox]:focus:invalid:focus,input[type=file]:focus:invalid:focus,input[type=radio]:focus:invalid:focus{outline-color:#e74c3c}input.wy-input-large{padding:12px;font-size:100%}textarea{overflow:auto;vertical-align:top;width:100%;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif}select,textarea{padding:.5em .625em;display:inline-block;border:1px solid 
#ccc;font-size:80%;box-shadow:inset 0 1px 3px #ddd;-webkit-transition:border .3s linear;-moz-transition:border .3s linear;transition:border .3s linear}select{border:1px solid #ccc;background-color:#fff}select[multiple]{height:auto}select:focus,textarea:focus{outline:0}input[readonly],select[disabled],select[readonly],textarea[disabled],textarea[readonly]{cursor:not-allowed;background-color:#fafafa}input[type=checkbox][disabled],input[type=radio][disabled]{cursor:not-allowed}.wy-checkbox,.wy-radio{margin:6px 0;color:#404040;display:block}.wy-checkbox input,.wy-radio input{vertical-align:baseline}.wy-form-message-inline{display:inline-block;*display:inline;*zoom:1;vertical-align:middle}.wy-input-prefix,.wy-input-suffix{white-space:nowrap;padding:6px}.wy-input-prefix .wy-input-context,.wy-input-suffix .wy-input-context{line-height:27px;padding:0 8px;display:inline-block;font-size:80%;background-color:#f3f6f6;border:1px solid #ccc;color:#999}.wy-input-suffix .wy-input-context{border-left:0}.wy-input-prefix .wy-input-context{border-right:0}.wy-switch{position:relative;display:block;height:24px;margin-top:12px;cursor:pointer}.wy-switch:before{left:0;top:0;width:36px;height:12px;background:#ccc}.wy-switch:after,.wy-switch:before{position:absolute;content:"";display:block;border-radius:4px;-webkit-transition:all .2s ease-in-out;-moz-transition:all .2s ease-in-out;transition:all .2s ease-in-out}.wy-switch:after{width:18px;height:18px;background:#999;left:-3px;top:-3px}.wy-switch span{position:absolute;left:48px;display:block;font-size:12px;color:#ccc;line-height:1}.wy-switch.active:before{background:#1e8449}.wy-switch.active:after{left:24px;background:#27ae60}.wy-switch.disabled{cursor:not-allowed;opacity:.8}.wy-control-group.wy-control-group-error .wy-form-message,.wy-control-group.wy-control-group-error>label{color:#e74c3c}.wy-control-group.wy-control-group-error input[type=color],.wy-control-group.wy-control-group-error 
input[type=date],.wy-control-group.wy-control-group-error input[type=datetime-local],.wy-control-group.wy-control-group-error input[type=datetime],.wy-control-group.wy-control-group-error input[type=email],.wy-control-group.wy-control-group-error input[type=month],.wy-control-group.wy-control-group-error input[type=number],.wy-control-group.wy-control-group-error input[type=password],.wy-control-group.wy-control-group-error input[type=search],.wy-control-group.wy-control-group-error input[type=tel],.wy-control-group.wy-control-group-error input[type=text],.wy-control-group.wy-control-group-error input[type=time],.wy-control-group.wy-control-group-error input[type=url],.wy-control-group.wy-control-group-error input[type=week],.wy-control-group.wy-control-group-error textarea{border:1px solid #e74c3c}.wy-inline-validate{white-space:nowrap}.wy-inline-validate .wy-input-context{padding:.5em .625em;display:inline-block;font-size:80%}.wy-inline-validate.wy-inline-validate-success .wy-input-context{color:#27ae60}.wy-inline-validate.wy-inline-validate-danger .wy-input-context{color:#e74c3c}.wy-inline-validate.wy-inline-validate-warning .wy-input-context{color:#e67e22}.wy-inline-validate.wy-inline-validate-info .wy-input-context{color:#2980b9}.rotate-90{-webkit-transform:rotate(90deg);-moz-transform:rotate(90deg);-ms-transform:rotate(90deg);-o-transform:rotate(90deg);transform:rotate(90deg)}.rotate-180{-webkit-transform:rotate(180deg);-moz-transform:rotate(180deg);-ms-transform:rotate(180deg);-o-transform:rotate(180deg);transform:rotate(180deg)}.rotate-270{-webkit-transform:rotate(270deg);-moz-transform:rotate(270deg);-ms-transform:rotate(270deg);-o-transform:rotate(270deg);transform:rotate(270deg)}.mirror{-webkit-transform:scaleX(-1);-moz-transform:scaleX(-1);-ms-transform:scaleX(-1);-o-transform:scaleX(-1);transform:scaleX(-1)}.mirror.rotate-90{-webkit-transform:scaleX(-1) rotate(90deg);-moz-transform:scaleX(-1) rotate(90deg);-ms-transform:scaleX(-1) 
rotate(90deg);-o-transform:scaleX(-1) rotate(90deg);transform:scaleX(-1) rotate(90deg)}.mirror.rotate-180{-webkit-transform:scaleX(-1) rotate(180deg);-moz-transform:scaleX(-1) rotate(180deg);-ms-transform:scaleX(-1) rotate(180deg);-o-transform:scaleX(-1) rotate(180deg);transform:scaleX(-1) rotate(180deg)}.mirror.rotate-270{-webkit-transform:scaleX(-1) rotate(270deg);-moz-transform:scaleX(-1) rotate(270deg);-ms-transform:scaleX(-1) rotate(270deg);-o-transform:scaleX(-1) rotate(270deg);transform:scaleX(-1) rotate(270deg)}@media only screen and (max-width:480px){.wy-form button[type=submit]{margin:.7em 0 0}.wy-form input[type=color],.wy-form input[type=date],.wy-form input[type=datetime-local],.wy-form input[type=datetime],.wy-form input[type=email],.wy-form input[type=month],.wy-form input[type=number],.wy-form input[type=password],.wy-form input[type=search],.wy-form input[type=tel],.wy-form input[type=text],.wy-form input[type=time],.wy-form input[type=url],.wy-form input[type=week],.wy-form label{margin-bottom:.3em;display:block}.wy-form input[type=color],.wy-form input[type=date],.wy-form input[type=datetime-local],.wy-form input[type=datetime],.wy-form input[type=email],.wy-form input[type=month],.wy-form input[type=number],.wy-form input[type=password],.wy-form input[type=search],.wy-form input[type=tel],.wy-form input[type=time],.wy-form input[type=url],.wy-form input[type=week]{margin-bottom:0}.wy-form-aligned .wy-control-group label{margin-bottom:.3em;text-align:left;display:block;width:100%}.wy-form-aligned .wy-control{margin:1.5em 0 0}.wy-form-message,.wy-form-message-inline,.wy-form .wy-help-inline{display:block;font-size:80%;padding:6px 0}}@media screen and (max-width:768px){.tablet-hide{display:none}}@media screen and (max-width:480px){.mobile-hide{display:none}}.float-left{float:left}.float-right{float:right}.full-width{width:100%}.rst-content table.docutils,.rst-content 
table.field-list,.wy-table{border-collapse:collapse;border-spacing:0;empty-cells:show;margin-bottom:24px}.rst-content table.docutils caption,.rst-content table.field-list caption,.wy-table caption{color:#000;font:italic 85%/1 arial,sans-serif;padding:1em 0;text-align:center}.rst-content table.docutils td,.rst-content table.docutils th,.rst-content table.field-list td,.rst-content table.field-list th,.wy-table td,.wy-table th{font-size:90%;margin:0;overflow:visible;padding:8px 16px}.rst-content table.docutils td:first-child,.rst-content table.docutils th:first-child,.rst-content table.field-list td:first-child,.rst-content table.field-list th:first-child,.wy-table td:first-child,.wy-table th:first-child{border-left-width:0}.rst-content table.docutils thead,.rst-content table.field-list thead,.wy-table thead{color:#000;text-align:left;vertical-align:bottom;white-space:nowrap}.rst-content table.docutils thead th,.rst-content table.field-list thead th,.wy-table thead th{font-weight:700;border-bottom:2px solid #e1e4e5}.rst-content table.docutils td,.rst-content table.field-list td,.wy-table td{background-color:transparent;vertical-align:middle}.rst-content table.docutils td p,.rst-content table.field-list td p,.wy-table td p{line-height:18px}.rst-content table.docutils td p:last-child,.rst-content table.field-list td p:last-child,.wy-table td p:last-child{margin-bottom:0}.rst-content table.docutils .wy-table-cell-min,.rst-content table.field-list .wy-table-cell-min,.wy-table .wy-table-cell-min{width:1%;padding-right:0}.rst-content table.docutils .wy-table-cell-min input[type=checkbox],.rst-content table.field-list .wy-table-cell-min input[type=checkbox],.wy-table .wy-table-cell-min input[type=checkbox]{margin:0}.wy-table-secondary{color:grey;font-size:90%}.wy-table-tertiary{color:grey;font-size:80%}.rst-content table.docutils:not(.field-list) tr:nth-child(2n-1) td,.wy-table-backed,.wy-table-odd td,.wy-table-striped tr:nth-child(2n-1) 
td{background-color:#f3f6f6}.rst-content table.docutils,.wy-table-bordered-all{border:1px solid #e1e4e5}.rst-content table.docutils td,.wy-table-bordered-all td{border-bottom:1px solid #e1e4e5;border-left:1px solid #e1e4e5}.rst-content table.docutils tbody>tr:last-child td,.wy-table-bordered-all tbody>tr:last-child td{border-bottom-width:0}.wy-table-bordered{border:1px solid #e1e4e5}.wy-table-bordered-rows td{border-bottom:1px solid #e1e4e5}.wy-table-bordered-rows tbody>tr:last-child td{border-bottom-width:0}.wy-table-horizontal td,.wy-table-horizontal th{border-width:0 0 1px;border-bottom:1px solid #e1e4e5}.wy-table-horizontal tbody>tr:last-child td{border-bottom-width:0}.wy-table-responsive{margin-bottom:24px;max-width:100%;overflow:auto}.wy-table-responsive table{margin-bottom:0!important}.wy-table-responsive table td,.wy-table-responsive table th{white-space:nowrap}a{color:#2980b9;text-decoration:none;cursor:pointer}a:hover{color:#3091d1}a:visited{color:#9b59b6}html{height:100%}body,html{overflow-x:hidden}body{font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;font-weight:400;color:#404040;min-height:100%;background:#edf0f2}.wy-text-left{text-align:left}.wy-text-center{text-align:center}.wy-text-right{text-align:right}.wy-text-large{font-size:120%}.wy-text-normal{font-size:100%}.wy-text-small,small{font-size:80%}.wy-text-strike{text-decoration:line-through}.wy-text-warning{color:#e67e22!important}a.wy-text-warning:hover{color:#eb9950!important}.wy-text-info{color:#2980b9!important}a.wy-text-info:hover{color:#409ad5!important}.wy-text-success{color:#27ae60!important}a.wy-text-success:hover{color:#36d278!important}.wy-text-danger{color:#e74c3c!important}a.wy-text-danger:hover{color:#ed7669!important}.wy-text-neutral{color:#404040!important}a.wy-text-neutral:hover{color:#595959!important}.rst-content .toctree-wrapper>p.caption,h1,h2,h3,h4,h5,h6,legend{margin-top:0;font-weight:700;font-family:Roboto 
Slab,ff-tisa-web-pro,Georgia,Arial,sans-serif}p{line-height:24px;font-size:16px;margin:0 0 24px}h1{font-size:175%}.rst-content .toctree-wrapper>p.caption,h2{font-size:150%}h3{font-size:125%}h4{font-size:115%}h5{font-size:110%}h6{font-size:100%}hr{display:block;height:1px;border:0;border-top:1px solid #e1e4e5;margin:24px 0;padding:0}.rst-content code,.rst-content tt,code{white-space:nowrap;max-width:100%;background:#fff;border:1px solid #e1e4e5;font-size:75%;padding:0 5px;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;color:#e74c3c;overflow-x:auto}.rst-content tt.code-large,code.code-large{font-size:90%}.rst-content .section ul,.rst-content .toctree-wrapper ul,.rst-content section ul,.wy-plain-list-disc,article ul{list-style:disc;line-height:24px;margin-bottom:24px}.rst-content .section ul li,.rst-content .toctree-wrapper ul li,.rst-content section ul li,.wy-plain-list-disc li,article ul li{list-style:disc;margin-left:24px}.rst-content .section ul li p:last-child,.rst-content .section ul li ul,.rst-content .toctree-wrapper ul li p:last-child,.rst-content .toctree-wrapper ul li ul,.rst-content section ul li p:last-child,.rst-content section ul li ul,.wy-plain-list-disc li p:last-child,.wy-plain-list-disc li ul,article ul li p:last-child,article ul li ul{margin-bottom:0}.rst-content .section ul li li,.rst-content .toctree-wrapper ul li li,.rst-content section ul li li,.wy-plain-list-disc li li,article ul li li{list-style:circle}.rst-content .section ul li li li,.rst-content .toctree-wrapper ul li li li,.rst-content section ul li li li,.wy-plain-list-disc li li li,article ul li li li{list-style:square}.rst-content .section ul li ol li,.rst-content .toctree-wrapper ul li ol li,.rst-content section ul li ol li,.wy-plain-list-disc li ol li,article ul li ol li{list-style:decimal}.rst-content .section ol,.rst-content .section ol.arabic,.rst-content .toctree-wrapper ol,.rst-content .toctree-wrapper ol.arabic,.rst-content 
section ol,.rst-content section ol.arabic,.wy-plain-list-decimal,article ol{list-style:decimal;line-height:24px;margin-bottom:24px}.rst-content .section ol.arabic li,.rst-content .section ol li,.rst-content .toctree-wrapper ol.arabic li,.rst-content .toctree-wrapper ol li,.rst-content section ol.arabic li,.rst-content section ol li,.wy-plain-list-decimal li,article ol li{list-style:decimal;margin-left:24px}.rst-content .section ol.arabic li ul,.rst-content .section ol li p:last-child,.rst-content .section ol li ul,.rst-content .toctree-wrapper ol.arabic li ul,.rst-content .toctree-wrapper ol li p:last-child,.rst-content .toctree-wrapper ol li ul,.rst-content section ol.arabic li ul,.rst-content section ol li p:last-child,.rst-content section ol li ul,.wy-plain-list-decimal li p:last-child,.wy-plain-list-decimal li ul,article ol li p:last-child,article ol li ul{margin-bottom:0}.rst-content .section ol.arabic li ul li,.rst-content .section ol li ul li,.rst-content .toctree-wrapper ol.arabic li ul li,.rst-content .toctree-wrapper ol li ul li,.rst-content section ol.arabic li ul li,.rst-content section ol li ul li,.wy-plain-list-decimal li ul li,article ol li ul li{list-style:disc}.wy-breadcrumbs{*zoom:1}.wy-breadcrumbs:after,.wy-breadcrumbs:before{display:table;content:""}.wy-breadcrumbs:after{clear:both}.wy-breadcrumbs>li{display:inline-block;padding-top:5px}.wy-breadcrumbs>li.wy-breadcrumbs-aside{float:right}.rst-content .wy-breadcrumbs>li code,.rst-content .wy-breadcrumbs>li tt,.wy-breadcrumbs>li .rst-content tt,.wy-breadcrumbs>li code{all:inherit;color:inherit}.breadcrumb-item:before{content:"/";color:#bbb;font-size:13px;padding:0 6px 0 3px}.wy-breadcrumbs-extra{margin-bottom:0;color:#b3b3b3;font-size:80%;display:inline-block}@media screen and (max-width:480px){.wy-breadcrumbs-extra,.wy-breadcrumbs li.wy-breadcrumbs-aside{display:none}}@media print{.wy-breadcrumbs 
li.wy-breadcrumbs-aside{display:none}}html{font-size:16px}.wy-affix{position:fixed;top:1.618em}.wy-menu a:hover{text-decoration:none}.wy-menu-horiz{*zoom:1}.wy-menu-horiz:after,.wy-menu-horiz:before{display:table;content:""}.wy-menu-horiz:after{clear:both}.wy-menu-horiz li,.wy-menu-horiz ul{display:inline-block}.wy-menu-horiz li:hover{background:hsla(0,0%,100%,.1)}.wy-menu-horiz li.divide-left{border-left:1px solid #404040}.wy-menu-horiz li.divide-right{border-right:1px solid #404040}.wy-menu-horiz a{height:32px;display:inline-block;line-height:32px;padding:0 16px}.wy-menu-vertical{width:300px}.wy-menu-vertical header,.wy-menu-vertical p.caption{color:#55a5d9;height:32px;line-height:32px;padding:0 1.618em;margin:12px 0 0;display:block;font-weight:700;text-transform:uppercase;font-size:85%;white-space:nowrap}.wy-menu-vertical ul{margin-bottom:0}.wy-menu-vertical li.divide-top{border-top:1px solid #404040}.wy-menu-vertical li.divide-bottom{border-bottom:1px solid #404040}.wy-menu-vertical li.current{background:#e3e3e3}.wy-menu-vertical li.current a{color:grey;border-right:1px solid #c9c9c9;padding:.4045em 2.427em}.wy-menu-vertical li.current a:hover{background:#d6d6d6}.rst-content .wy-menu-vertical li tt,.wy-menu-vertical li .rst-content tt,.wy-menu-vertical li code{border:none;background:inherit;color:inherit;padding-left:0;padding-right:0}.wy-menu-vertical li button.toctree-expand{display:block;float:left;margin-left:-1.2em;line-height:18px;color:#4d4d4d;border:none;background:none;padding:0}.wy-menu-vertical li.current>a,.wy-menu-vertical li.on a{color:#404040;font-weight:700;position:relative;background:#fcfcfc;border:none;padding:.4045em 1.618em}.wy-menu-vertical li.current>a:hover,.wy-menu-vertical li.on a:hover{background:#fcfcfc}.wy-menu-vertical li.current>a:hover button.toctree-expand,.wy-menu-vertical li.on a:hover button.toctree-expand{color:grey}.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a 
button.toctree-expand{display:block;line-height:18px;color:#333}.wy-menu-vertical li.toctree-l1.current>a{border-bottom:1px solid #c9c9c9;border-top:1px solid #c9c9c9}.wy-menu-vertical .toctree-l1.current .toctree-l2>ul,.wy-menu-vertical .toctree-l2.current .toctree-l3>ul,.wy-menu-vertical .toctree-l3.current .toctree-l4>ul,.wy-menu-vertical .toctree-l4.current .toctree-l5>ul,.wy-menu-vertical .toctree-l5.current .toctree-l6>ul,.wy-menu-vertical .toctree-l6.current .toctree-l7>ul,.wy-menu-vertical .toctree-l7.current .toctree-l8>ul,.wy-menu-vertical .toctree-l8.current .toctree-l9>ul,.wy-menu-vertical .toctree-l9.current .toctree-l10>ul,.wy-menu-vertical .toctree-l10.current .toctree-l11>ul{display:none}.wy-menu-vertical .toctree-l1.current .current.toctree-l2>ul,.wy-menu-vertical .toctree-l2.current .current.toctree-l3>ul,.wy-menu-vertical .toctree-l3.current .current.toctree-l4>ul,.wy-menu-vertical .toctree-l4.current .current.toctree-l5>ul,.wy-menu-vertical .toctree-l5.current .current.toctree-l6>ul,.wy-menu-vertical .toctree-l6.current .current.toctree-l7>ul,.wy-menu-vertical .toctree-l7.current .current.toctree-l8>ul,.wy-menu-vertical .toctree-l8.current .current.toctree-l9>ul,.wy-menu-vertical .toctree-l9.current .current.toctree-l10>ul,.wy-menu-vertical .toctree-l10.current .current.toctree-l11>ul{display:block}.wy-menu-vertical li.toctree-l3,.wy-menu-vertical li.toctree-l4{font-size:.9em}.wy-menu-vertical li.toctree-l2 a,.wy-menu-vertical li.toctree-l3 a,.wy-menu-vertical li.toctree-l4 a,.wy-menu-vertical li.toctree-l5 a,.wy-menu-vertical li.toctree-l6 a,.wy-menu-vertical li.toctree-l7 a,.wy-menu-vertical li.toctree-l8 a,.wy-menu-vertical li.toctree-l9 a,.wy-menu-vertical li.toctree-l10 a{color:#404040}.wy-menu-vertical li.toctree-l2 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l3 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l4 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l5 a:hover 
button.toctree-expand,.wy-menu-vertical li.toctree-l6 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l7 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l8 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l9 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l10 a:hover button.toctree-expand{color:grey}.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a,.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a,.wy-menu-vertical li.toctree-l4.current li.toctree-l5>a,.wy-menu-vertical li.toctree-l5.current li.toctree-l6>a,.wy-menu-vertical li.toctree-l6.current li.toctree-l7>a,.wy-menu-vertical li.toctree-l7.current li.toctree-l8>a,.wy-menu-vertical li.toctree-l8.current li.toctree-l9>a,.wy-menu-vertical li.toctree-l9.current li.toctree-l10>a,.wy-menu-vertical li.toctree-l10.current li.toctree-l11>a{display:block}.wy-menu-vertical li.toctree-l2.current>a{padding:.4045em 2.427em}.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a{padding:.4045em 1.618em .4045em 4.045em}.wy-menu-vertical li.toctree-l3.current>a{padding:.4045em 4.045em}.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a{padding:.4045em 1.618em .4045em 5.663em}.wy-menu-vertical li.toctree-l4.current>a{padding:.4045em 5.663em}.wy-menu-vertical li.toctree-l4.current li.toctree-l5>a{padding:.4045em 1.618em .4045em 7.281em}.wy-menu-vertical li.toctree-l5.current>a{padding:.4045em 7.281em}.wy-menu-vertical li.toctree-l5.current li.toctree-l6>a{padding:.4045em 1.618em .4045em 8.899em}.wy-menu-vertical li.toctree-l6.current>a{padding:.4045em 8.899em}.wy-menu-vertical li.toctree-l6.current li.toctree-l7>a{padding:.4045em 1.618em .4045em 10.517em}.wy-menu-vertical li.toctree-l7.current>a{padding:.4045em 10.517em}.wy-menu-vertical li.toctree-l7.current li.toctree-l8>a{padding:.4045em 1.618em .4045em 12.135em}.wy-menu-vertical li.toctree-l8.current>a{padding:.4045em 12.135em}.wy-menu-vertical li.toctree-l8.current li.toctree-l9>a{padding:.4045em 1.618em .4045em 
13.753em}.wy-menu-vertical li.toctree-l9.current>a{padding:.4045em 13.753em}.wy-menu-vertical li.toctree-l9.current li.toctree-l10>a{padding:.4045em 1.618em .4045em 15.371em}.wy-menu-vertical li.toctree-l10.current>a{padding:.4045em 15.371em}.wy-menu-vertical li.toctree-l10.current li.toctree-l11>a{padding:.4045em 1.618em .4045em 16.989em}.wy-menu-vertical li.toctree-l2.current>a,.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a{background:#c9c9c9}.wy-menu-vertical li.toctree-l2 button.toctree-expand{color:#a3a3a3}.wy-menu-vertical li.toctree-l3.current>a,.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a{background:#bdbdbd}.wy-menu-vertical li.toctree-l3 button.toctree-expand{color:#969696}.wy-menu-vertical li.current ul{display:block}.wy-menu-vertical li ul{margin-bottom:0;display:none}.wy-menu-vertical li ul li a{margin-bottom:0;color:#d9d9d9;font-weight:400}.wy-menu-vertical a{line-height:18px;padding:.4045em 1.618em;display:block;position:relative;font-size:90%;color:#d9d9d9}.wy-menu-vertical a:hover{background-color:#4e4a4a;cursor:pointer}.wy-menu-vertical a:hover button.toctree-expand{color:#d9d9d9}.wy-menu-vertical a:active{background-color:#2980b9;cursor:pointer;color:#fff}.wy-menu-vertical a:active button.toctree-expand{color:#fff}.wy-side-nav-search{display:block;width:300px;padding:.809em;margin-bottom:.809em;z-index:200;background-color:#2980b9;text-align:center;color:#fcfcfc}.wy-side-nav-search input[type=text]{width:100%;border-radius:50px;padding:6px 12px;border-color:#2472a4}.wy-side-nav-search img{display:block;margin:auto auto .809em;height:45px;width:45px;background-color:#2980b9;padding:5px;border-radius:100%}.wy-side-nav-search .wy-dropdown>a,.wy-side-nav-search>a{color:#fcfcfc;font-size:100%;font-weight:700;display:inline-block;padding:4px 6px;margin-bottom:.809em;max-width:100%}.wy-side-nav-search .wy-dropdown>a:hover,.wy-side-nav-search>a:hover{background:hsla(0,0%,100%,.1)}.wy-side-nav-search .wy-dropdown>a 
img.logo,.wy-side-nav-search>a img.logo{display:block;margin:0 auto;height:auto;width:auto;border-radius:0;max-width:100%;background:transparent}.wy-side-nav-search .wy-dropdown>a.icon img.logo,.wy-side-nav-search>a.icon img.logo{margin-top:.85em}.wy-side-nav-search>div.version{margin-top:-.4045em;margin-bottom:.809em;font-weight:400;color:hsla(0,0%,100%,.3)}.wy-nav .wy-menu-vertical header{color:#2980b9}.wy-nav .wy-menu-vertical a{color:#b3b3b3}.wy-nav .wy-menu-vertical a:hover{background-color:#2980b9;color:#fff}[data-menu-wrap]{-webkit-transition:all .2s ease-in;-moz-transition:all .2s ease-in;transition:all .2s ease-in;position:absolute;opacity:1;width:100%;opacity:0}[data-menu-wrap].move-center{left:0;right:auto;opacity:1}[data-menu-wrap].move-left{right:auto;left:-100%;opacity:0}[data-menu-wrap].move-right{right:-100%;left:auto;opacity:0}.wy-body-for-nav{background:#fcfcfc}.wy-grid-for-nav{position:absolute;width:100%;height:100%}.wy-nav-side{position:fixed;top:0;bottom:0;left:0;padding-bottom:2em;width:300px;overflow-x:hidden;overflow-y:hidden;min-height:100%;color:#9b9b9b;background:#343131;z-index:200}.wy-side-scroll{width:320px;position:relative;overflow-x:hidden;overflow-y:scroll;height:100%}.wy-nav-top{display:none;background:#2980b9;color:#fff;padding:.4045em .809em;position:relative;line-height:50px;text-align:center;font-size:100%;*zoom:1}.wy-nav-top:after,.wy-nav-top:before{display:table;content:""}.wy-nav-top:after{clear:both}.wy-nav-top a{color:#fff;font-weight:700}.wy-nav-top img{margin-right:12px;height:45px;width:45px;background-color:#2980b9;padding:5px;border-radius:100%}.wy-nav-top i{font-size:30px;float:left;cursor:pointer;padding-top:inherit}.wy-nav-content-wrap{margin-left:300px;background:#fcfcfc;min-height:100%}.wy-nav-content{padding:1.618em 
3.236em;height:100%;max-width:800px;margin:auto}.wy-body-mask{position:fixed;width:100%;height:100%;background:rgba(0,0,0,.2);display:none;z-index:499}.wy-body-mask.on{display:block}footer{color:grey}footer p{margin-bottom:12px}.rst-content footer span.commit tt,footer span.commit .rst-content tt,footer span.commit code{padding:0;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;font-size:1em;background:none;border:none;color:grey}.rst-footer-buttons{*zoom:1}.rst-footer-buttons:after,.rst-footer-buttons:before{width:100%;display:table;content:""}.rst-footer-buttons:after{clear:both}.rst-breadcrumbs-buttons{margin-top:12px;*zoom:1}.rst-breadcrumbs-buttons:after,.rst-breadcrumbs-buttons:before{display:table;content:""}.rst-breadcrumbs-buttons:after{clear:both}#search-results .search li{margin-bottom:24px;border-bottom:1px solid #e1e4e5;padding-bottom:24px}#search-results .search li:first-child{border-top:1px solid #e1e4e5;padding-top:24px}#search-results .search li a{font-size:120%;margin-bottom:12px;display:inline-block}#search-results .context{color:grey;font-size:90%}.genindextable li>ul{margin-left:24px}@media screen and (max-width:768px){.wy-body-for-nav{background:#fcfcfc}.wy-nav-top{display:block}.wy-nav-side{left:-300px}.wy-nav-side.shift{width:85%;left:0}.wy-menu.wy-menu-vertical,.wy-side-nav-search,.wy-side-scroll{width:auto}.wy-nav-content-wrap{margin-left:0}.wy-nav-content-wrap .wy-nav-content{padding:1.618em}.wy-nav-content-wrap.shift{position:fixed;min-width:100%;left:85%;top:0;height:100%;overflow:hidden}}@media screen and (min-width:1100px){.wy-nav-content-wrap{background:rgba(0,0,0,.05)}.wy-nav-content{margin:0;background:#fcfcfc}}@media print{.rst-versions,.wy-nav-side,footer{display:none}.wy-nav-content-wrap{margin-left:0}}.rst-versions{position:fixed;bottom:0;left:0;width:300px;color:#fcfcfc;background:#1f1d1d;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;z-index:400}.rst-versions 
a{color:#2980b9;text-decoration:none}.rst-versions .rst-badge-small{display:none}.rst-versions .rst-current-version{padding:12px;background-color:#272525;display:block;text-align:right;font-size:90%;cursor:pointer;color:#27ae60;*zoom:1}.rst-versions .rst-current-version:after,.rst-versions .rst-current-version:before{display:table;content:""}.rst-versions .rst-current-version:after{clear:both}.rst-content .code-block-caption .rst-versions .rst-current-version .headerlink,.rst-content .eqno .rst-versions .rst-current-version .headerlink,.rst-content .rst-versions .rst-current-version .admonition-title,.rst-content code.download .rst-versions .rst-current-version span:first-child,.rst-content dl dt .rst-versions .rst-current-version .headerlink,.rst-content h1 .rst-versions .rst-current-version .headerlink,.rst-content h2 .rst-versions .rst-current-version .headerlink,.rst-content h3 .rst-versions .rst-current-version .headerlink,.rst-content h4 .rst-versions .rst-current-version .headerlink,.rst-content h5 .rst-versions .rst-current-version .headerlink,.rst-content h6 .rst-versions .rst-current-version .headerlink,.rst-content p .rst-versions .rst-current-version .headerlink,.rst-content table>caption .rst-versions .rst-current-version .headerlink,.rst-content tt.download .rst-versions .rst-current-version span:first-child,.rst-versions .rst-current-version .fa,.rst-versions .rst-current-version .icon,.rst-versions .rst-current-version .rst-content .admonition-title,.rst-versions .rst-current-version .rst-content .code-block-caption .headerlink,.rst-versions .rst-current-version .rst-content .eqno .headerlink,.rst-versions .rst-current-version .rst-content code.download span:first-child,.rst-versions .rst-current-version .rst-content dl dt .headerlink,.rst-versions .rst-current-version .rst-content h1 .headerlink,.rst-versions .rst-current-version .rst-content h2 .headerlink,.rst-versions .rst-current-version .rst-content h3 .headerlink,.rst-versions 
.rst-current-version .rst-content h4 .headerlink,.rst-versions .rst-current-version .rst-content h5 .headerlink,.rst-versions .rst-current-version .rst-content h6 .headerlink,.rst-versions .rst-current-version .rst-content p .headerlink,.rst-versions .rst-current-version .rst-content table>caption .headerlink,.rst-versions .rst-current-version .rst-content tt.download span:first-child,.rst-versions .rst-current-version .wy-menu-vertical li button.toctree-expand,.wy-menu-vertical li .rst-versions .rst-current-version button.toctree-expand{color:#fcfcfc}.rst-versions .rst-current-version .fa-book,.rst-versions .rst-current-version .icon-book{float:left}.rst-versions .rst-current-version.rst-out-of-date{background-color:#e74c3c;color:#fff}.rst-versions .rst-current-version.rst-active-old-version{background-color:#f1c40f;color:#000}.rst-versions.shift-up{height:auto;max-height:100%;overflow-y:scroll}.rst-versions.shift-up .rst-other-versions{display:block}.rst-versions .rst-other-versions{font-size:90%;padding:12px;color:grey;display:none}.rst-versions .rst-other-versions hr{display:block;height:1px;border:0;margin:20px 0;padding:0;border-top:1px solid #413d3d}.rst-versions .rst-other-versions dd{display:inline-block;margin:0}.rst-versions .rst-other-versions dd a{display:inline-block;padding:6px;color:#fcfcfc}.rst-versions.rst-badge{width:auto;bottom:20px;right:20px;left:auto;border:none;max-width:300px;max-height:90%}.rst-versions.rst-badge .fa-book,.rst-versions.rst-badge .icon-book{float:none;line-height:30px}.rst-versions.rst-badge.shift-up .rst-current-version{text-align:right}.rst-versions.rst-badge.shift-up .rst-current-version .fa-book,.rst-versions.rst-badge.shift-up .rst-current-version .icon-book{float:left}.rst-versions.rst-badge>.rst-current-version{width:auto;height:30px;line-height:30px;padding:0 6px;display:block;text-align:center}@media screen and (max-width:768px){.rst-versions{width:85%;display:none}.rst-versions.shift{display:block}}.rst-content 
.toctree-wrapper>p.caption,.rst-content h1,.rst-content h2,.rst-content h3,.rst-content h4,.rst-content h5,.rst-content h6{margin-bottom:24px}.rst-content img{max-width:100%;height:auto}.rst-content div.figure,.rst-content figure{margin-bottom:24px}.rst-content div.figure .caption-text,.rst-content figure .caption-text{font-style:italic}.rst-content div.figure p:last-child.caption,.rst-content figure p:last-child.caption{margin-bottom:0}.rst-content div.figure.align-center,.rst-content figure.align-center{text-align:center}.rst-content .section>a>img,.rst-content .section>img,.rst-content section>a>img,.rst-content section>img{margin-bottom:24px}.rst-content abbr[title]{text-decoration:none}.rst-content.style-external-links a.reference.external:after{font-family:FontAwesome;content:"\f08e";color:#b3b3b3;vertical-align:super;font-size:60%;margin:0 .2em}.rst-content blockquote{margin-left:24px;line-height:24px;margin-bottom:24px}.rst-content pre.literal-block{white-space:pre;margin:0;padding:12px;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;display:block;overflow:auto}.rst-content div[class^=highlight],.rst-content pre.literal-block{border:1px solid #e1e4e5;overflow-x:auto;margin:1px 0 24px}.rst-content div[class^=highlight] div[class^=highlight],.rst-content pre.literal-block div[class^=highlight]{padding:0;border:none;margin:0}.rst-content div[class^=highlight] td.code{width:100%}.rst-content .linenodiv pre{border-right:1px solid #e6e9ea;margin:0;padding:12px;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;user-select:none;pointer-events:none}.rst-content div[class^=highlight] pre{white-space:pre;margin:0;padding:12px;display:block;overflow:auto}.rst-content div[class^=highlight] pre .hll{display:block;margin:0 -12px;padding:0 12px}.rst-content .linenodiv pre,.rst-content div[class^=highlight] pre,.rst-content 
pre.literal-block{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;font-size:12px;line-height:1.4}.rst-content div.highlight .gp,.rst-content div.highlight span.linenos{user-select:none;pointer-events:none}.rst-content div.highlight span.linenos{display:inline-block;padding-left:0;padding-right:12px;margin-right:12px;border-right:1px solid #e6e9ea}.rst-content .code-block-caption{font-style:italic;font-size:85%;line-height:1;padding:1em 0;text-align:center}@media print{.rst-content .codeblock,.rst-content div[class^=highlight],.rst-content div[class^=highlight] pre{white-space:pre-wrap}}.rst-content .admonition,.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .danger,.rst-content .error,.rst-content .hint,.rst-content .important,.rst-content .note,.rst-content .seealso,.rst-content .tip,.rst-content .warning{clear:both}.rst-content .admonition-todo .last,.rst-content .admonition-todo>:last-child,.rst-content .admonition .last,.rst-content .admonition>:last-child,.rst-content .attention .last,.rst-content .attention>:last-child,.rst-content .caution .last,.rst-content .caution>:last-child,.rst-content .danger .last,.rst-content .danger>:last-child,.rst-content .error .last,.rst-content .error>:last-child,.rst-content .hint .last,.rst-content .hint>:last-child,.rst-content .important .last,.rst-content .important>:last-child,.rst-content .note .last,.rst-content .note>:last-child,.rst-content .seealso .last,.rst-content .seealso>:last-child,.rst-content .tip .last,.rst-content .tip>:last-child,.rst-content .warning .last,.rst-content .warning>:last-child{margin-bottom:0}.rst-content .admonition-title:before{margin-right:4px}.rst-content .admonition table{border-color:rgba(0,0,0,.1)}.rst-content .admonition table td,.rst-content .admonition table th{background:transparent!important;border-color:rgba(0,0,0,.1)!important}.rst-content .section ol.loweralpha,.rst-content .section 
ol.loweralpha>li,.rst-content .toctree-wrapper ol.loweralpha,.rst-content .toctree-wrapper ol.loweralpha>li,.rst-content section ol.loweralpha,.rst-content section ol.loweralpha>li{list-style:lower-alpha}.rst-content .section ol.upperalpha,.rst-content .section ol.upperalpha>li,.rst-content .toctree-wrapper ol.upperalpha,.rst-content .toctree-wrapper ol.upperalpha>li,.rst-content section ol.upperalpha,.rst-content section ol.upperalpha>li{list-style:upper-alpha}.rst-content .section ol li>*,.rst-content .section ul li>*,.rst-content .toctree-wrapper ol li>*,.rst-content .toctree-wrapper ul li>*,.rst-content section ol li>*,.rst-content section ul li>*{margin-top:12px;margin-bottom:12px}.rst-content .section ol li>:first-child,.rst-content .section ul li>:first-child,.rst-content .toctree-wrapper ol li>:first-child,.rst-content .toctree-wrapper ul li>:first-child,.rst-content section ol li>:first-child,.rst-content section ul li>:first-child{margin-top:0}.rst-content .section ol li>p,.rst-content .section ol li>p:last-child,.rst-content .section ul li>p,.rst-content .section ul li>p:last-child,.rst-content .toctree-wrapper ol li>p,.rst-content .toctree-wrapper ol li>p:last-child,.rst-content .toctree-wrapper ul li>p,.rst-content .toctree-wrapper ul li>p:last-child,.rst-content section ol li>p,.rst-content section ol li>p:last-child,.rst-content section ul li>p,.rst-content section ul li>p:last-child{margin-bottom:12px}.rst-content .section ol li>p:only-child,.rst-content .section ol li>p:only-child:last-child,.rst-content .section ul li>p:only-child,.rst-content .section ul li>p:only-child:last-child,.rst-content .toctree-wrapper ol li>p:only-child,.rst-content .toctree-wrapper ol li>p:only-child:last-child,.rst-content .toctree-wrapper ul li>p:only-child,.rst-content .toctree-wrapper ul li>p:only-child:last-child,.rst-content section ol li>p:only-child,.rst-content section ol li>p:only-child:last-child,.rst-content section ul li>p:only-child,.rst-content section ul 
li>p:only-child:last-child{margin-bottom:0}.rst-content .section ol li>ol,.rst-content .section ol li>ul,.rst-content .section ul li>ol,.rst-content .section ul li>ul,.rst-content .toctree-wrapper ol li>ol,.rst-content .toctree-wrapper ol li>ul,.rst-content .toctree-wrapper ul li>ol,.rst-content .toctree-wrapper ul li>ul,.rst-content section ol li>ol,.rst-content section ol li>ul,.rst-content section ul li>ol,.rst-content section ul li>ul{margin-bottom:12px}.rst-content .section ol.simple li>*,.rst-content .section ol.simple li ol,.rst-content .section ol.simple li ul,.rst-content .section ul.simple li>*,.rst-content .section ul.simple li ol,.rst-content .section ul.simple li ul,.rst-content .toctree-wrapper ol.simple li>*,.rst-content .toctree-wrapper ol.simple li ol,.rst-content .toctree-wrapper ol.simple li ul,.rst-content .toctree-wrapper ul.simple li>*,.rst-content .toctree-wrapper ul.simple li ol,.rst-content .toctree-wrapper ul.simple li ul,.rst-content section ol.simple li>*,.rst-content section ol.simple li ol,.rst-content section ol.simple li ul,.rst-content section ul.simple li>*,.rst-content section ul.simple li ol,.rst-content section ul.simple li ul{margin-top:0;margin-bottom:0}.rst-content .line-block{margin-left:0;margin-bottom:24px;line-height:24px}.rst-content .line-block .line-block{margin-left:24px;margin-bottom:0}.rst-content .topic-title{font-weight:700;margin-bottom:12px}.rst-content .toc-backref{color:#404040}.rst-content .align-right{float:right;margin:0 0 24px 24px}.rst-content .align-left{float:left;margin:0 24px 24px 0}.rst-content .align-center{margin:auto}.rst-content .align-center:not(table){display:block}.rst-content .code-block-caption .headerlink,.rst-content .eqno .headerlink,.rst-content .toctree-wrapper>p.caption .headerlink,.rst-content dl dt .headerlink,.rst-content h1 .headerlink,.rst-content h2 .headerlink,.rst-content h3 .headerlink,.rst-content h4 .headerlink,.rst-content h5 .headerlink,.rst-content h6 
.headerlink,.rst-content p.caption .headerlink,.rst-content p .headerlink,.rst-content table>caption .headerlink{opacity:0;font-size:14px;font-family:FontAwesome;margin-left:.5em}.rst-content .code-block-caption .headerlink:focus,.rst-content .code-block-caption:hover .headerlink,.rst-content .eqno .headerlink:focus,.rst-content .eqno:hover .headerlink,.rst-content .toctree-wrapper>p.caption .headerlink:focus,.rst-content .toctree-wrapper>p.caption:hover .headerlink,.rst-content dl dt .headerlink:focus,.rst-content dl dt:hover .headerlink,.rst-content h1 .headerlink:focus,.rst-content h1:hover .headerlink,.rst-content h2 .headerlink:focus,.rst-content h2:hover .headerlink,.rst-content h3 .headerlink:focus,.rst-content h3:hover .headerlink,.rst-content h4 .headerlink:focus,.rst-content h4:hover .headerlink,.rst-content h5 .headerlink:focus,.rst-content h5:hover .headerlink,.rst-content h6 .headerlink:focus,.rst-content h6:hover .headerlink,.rst-content p.caption .headerlink:focus,.rst-content p.caption:hover .headerlink,.rst-content p .headerlink:focus,.rst-content p:hover .headerlink,.rst-content table>caption .headerlink:focus,.rst-content table>caption:hover .headerlink{opacity:1}.rst-content p a{overflow-wrap:anywhere}.rst-content .wy-table td p,.rst-content .wy-table td ul,.rst-content .wy-table th p,.rst-content .wy-table th ul,.rst-content table.docutils td p,.rst-content table.docutils td ul,.rst-content table.docutils th p,.rst-content table.docutils th ul,.rst-content table.field-list td p,.rst-content table.field-list td ul,.rst-content table.field-list th p,.rst-content table.field-list th ul{font-size:inherit}.rst-content .btn:focus{outline:2px solid}.rst-content table>caption .headerlink:after{font-size:12px}.rst-content .centered{text-align:center}.rst-content .sidebar{float:right;width:40%;display:block;margin:0 0 24px 24px;padding:24px;background:#f3f6f6;border:1px solid #e1e4e5}.rst-content .sidebar dl,.rst-content .sidebar p,.rst-content .sidebar 
ul{font-size:90%}.rst-content .sidebar .last,.rst-content .sidebar>:last-child{margin-bottom:0}.rst-content .sidebar .sidebar-title{display:block;font-family:Roboto Slab,ff-tisa-web-pro,Georgia,Arial,sans-serif;font-weight:700;background:#e1e4e5;padding:6px 12px;margin:-24px -24px 24px;font-size:100%}.rst-content .highlighted{background:#f1c40f;box-shadow:0 0 0 2px #f1c40f;display:inline;font-weight:700}.rst-content .citation-reference,.rst-content .footnote-reference{vertical-align:baseline;position:relative;top:-.4em;line-height:0;font-size:90%}.rst-content .citation-reference>span.fn-bracket,.rst-content .footnote-reference>span.fn-bracket{display:none}.rst-content .hlist{width:100%}.rst-content dl dt span.classifier:before{content:" : "}.rst-content dl dt span.classifier-delimiter{display:none!important}html.writer-html4 .rst-content table.docutils.citation,html.writer-html4 .rst-content table.docutils.footnote{background:none;border:none}html.writer-html4 .rst-content table.docutils.citation td,html.writer-html4 .rst-content table.docutils.citation tr,html.writer-html4 .rst-content table.docutils.footnote td,html.writer-html4 .rst-content table.docutils.footnote tr{border:none;background-color:transparent!important;white-space:normal}html.writer-html4 .rst-content table.docutils.citation td.label,html.writer-html4 .rst-content table.docutils.footnote td.label{padding-left:0;padding-right:0;vertical-align:top}html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.field-list,html.writer-html5 .rst-content dl.footnote{display:grid;grid-template-columns:auto minmax(80%,95%)}html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.field-list>dt,html.writer-html5 .rst-content dl.footnote>dt{display:inline-grid;grid-template-columns:max-content auto}html.writer-html5 .rst-content aside.citation,html.writer-html5 .rst-content aside.footnote,html.writer-html5 .rst-content div.citation{display:grid;grid-template-columns:auto 
auto minmax(.65rem,auto) minmax(40%,95%)}html.writer-html5 .rst-content aside.citation>span.label,html.writer-html5 .rst-content aside.footnote>span.label,html.writer-html5 .rst-content div.citation>span.label{grid-column-start:1;grid-column-end:2}html.writer-html5 .rst-content aside.citation>span.backrefs,html.writer-html5 .rst-content aside.footnote>span.backrefs,html.writer-html5 .rst-content div.citation>span.backrefs{grid-column-start:2;grid-column-end:3;grid-row-start:1;grid-row-end:3}html.writer-html5 .rst-content aside.citation>p,html.writer-html5 .rst-content aside.footnote>p,html.writer-html5 .rst-content div.citation>p{grid-column-start:4;grid-column-end:5}html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.field-list,html.writer-html5 .rst-content dl.footnote{margin-bottom:24px}html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.field-list>dt,html.writer-html5 .rst-content dl.footnote>dt{padding-left:1rem}html.writer-html5 .rst-content dl.citation>dd,html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.field-list>dd,html.writer-html5 .rst-content dl.field-list>dt,html.writer-html5 .rst-content dl.footnote>dd,html.writer-html5 .rst-content dl.footnote>dt{margin-bottom:0}html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.footnote{font-size:.9rem}html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.footnote>dt{margin:0 .5rem .5rem 0;line-height:1.2rem;word-break:break-all;font-weight:400}html.writer-html5 .rst-content dl.citation>dt>span.brackets:before,html.writer-html5 .rst-content dl.footnote>dt>span.brackets:before{content:"["}html.writer-html5 .rst-content dl.citation>dt>span.brackets:after,html.writer-html5 .rst-content dl.footnote>dt>span.brackets:after{content:"]"}html.writer-html5 .rst-content dl.citation>dt>span.fn-backref,html.writer-html5 .rst-content 
dl.footnote>dt>span.fn-backref{text-align:left;font-style:italic;margin-left:.65rem;word-break:break-word;word-spacing:-.1rem;max-width:5rem}html.writer-html5 .rst-content dl.citation>dt>span.fn-backref>a,html.writer-html5 .rst-content dl.footnote>dt>span.fn-backref>a{word-break:keep-all}html.writer-html5 .rst-content dl.citation>dt>span.fn-backref>a:not(:first-child):before,html.writer-html5 .rst-content dl.footnote>dt>span.fn-backref>a:not(:first-child):before{content:" "}html.writer-html5 .rst-content dl.citation>dd,html.writer-html5 .rst-content dl.footnote>dd{margin:0 0 .5rem;line-height:1.2rem}html.writer-html5 .rst-content dl.citation>dd p,html.writer-html5 .rst-content dl.footnote>dd p{font-size:.9rem}html.writer-html5 .rst-content aside.citation,html.writer-html5 .rst-content aside.footnote,html.writer-html5 .rst-content div.citation{padding-left:1rem;padding-right:1rem;font-size:.9rem;line-height:1.2rem}html.writer-html5 .rst-content aside.citation p,html.writer-html5 .rst-content aside.footnote p,html.writer-html5 .rst-content div.citation p{font-size:.9rem;line-height:1.2rem;margin-bottom:12px}html.writer-html5 .rst-content aside.citation span.backrefs,html.writer-html5 .rst-content aside.footnote span.backrefs,html.writer-html5 .rst-content div.citation span.backrefs{text-align:left;font-style:italic;margin-left:.65rem;word-break:break-word;word-spacing:-.1rem;max-width:5rem}html.writer-html5 .rst-content aside.citation span.backrefs>a,html.writer-html5 .rst-content aside.footnote span.backrefs>a,html.writer-html5 .rst-content div.citation span.backrefs>a{word-break:keep-all}html.writer-html5 .rst-content aside.citation span.backrefs>a:not(:first-child):before,html.writer-html5 .rst-content aside.footnote span.backrefs>a:not(:first-child):before,html.writer-html5 .rst-content div.citation span.backrefs>a:not(:first-child):before{content:" "}html.writer-html5 .rst-content aside.citation span.label,html.writer-html5 .rst-content aside.footnote 
span.label,html.writer-html5 .rst-content div.citation span.label{line-height:1.2rem}html.writer-html5 .rst-content aside.citation-list,html.writer-html5 .rst-content aside.footnote-list,html.writer-html5 .rst-content div.citation-list{margin-bottom:24px}html.writer-html5 .rst-content dl.option-list kbd{font-size:.9rem}.rst-content table.docutils.footnote,html.writer-html4 .rst-content table.docutils.citation,html.writer-html5 .rst-content aside.footnote,html.writer-html5 .rst-content aside.footnote-list aside.footnote,html.writer-html5 .rst-content div.citation-list>div.citation,html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.footnote{color:grey}.rst-content table.docutils.footnote code,.rst-content table.docutils.footnote tt,html.writer-html4 .rst-content table.docutils.citation code,html.writer-html4 .rst-content table.docutils.citation tt,html.writer-html5 .rst-content aside.footnote-list aside.footnote code,html.writer-html5 .rst-content aside.footnote-list aside.footnote tt,html.writer-html5 .rst-content aside.footnote code,html.writer-html5 .rst-content aside.footnote tt,html.writer-html5 .rst-content div.citation-list>div.citation code,html.writer-html5 .rst-content div.citation-list>div.citation tt,html.writer-html5 .rst-content dl.citation code,html.writer-html5 .rst-content dl.citation tt,html.writer-html5 .rst-content dl.footnote code,html.writer-html5 .rst-content dl.footnote tt{color:#555}.rst-content .wy-table-responsive.citation,.rst-content .wy-table-responsive.footnote{margin-bottom:0}.rst-content .wy-table-responsive.citation+:not(.citation),.rst-content .wy-table-responsive.footnote+:not(.footnote){margin-top:24px}.rst-content .wy-table-responsive.citation:last-child,.rst-content .wy-table-responsive.footnote:last-child{margin-bottom:24px}.rst-content table.docutils th{border-color:#e1e4e5}html.writer-html5 .rst-content table.docutils th{border:1px solid #e1e4e5}html.writer-html5 .rst-content table.docutils 
td>p,html.writer-html5 .rst-content table.docutils th>p{line-height:1rem;margin-bottom:0;font-size:.9rem}.rst-content table.docutils td .last,.rst-content table.docutils td .last>:last-child{margin-bottom:0}.rst-content table.field-list,.rst-content table.field-list td{border:none}.rst-content table.field-list td p{line-height:inherit}.rst-content table.field-list td>strong{display:inline-block}.rst-content table.field-list .field-name{padding-right:10px;text-align:left;white-space:nowrap}.rst-content table.field-list .field-body{text-align:left}.rst-content code,.rst-content tt{color:#000;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;padding:2px 5px}.rst-content code big,.rst-content code em,.rst-content tt big,.rst-content tt em{font-size:100%!important;line-height:normal}.rst-content code.literal,.rst-content tt.literal{color:#e74c3c;white-space:normal}.rst-content code.xref,.rst-content tt.xref,a .rst-content code,a .rst-content tt{font-weight:700;color:#404040;overflow-wrap:normal}.rst-content kbd,.rst-content pre,.rst-content samp{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace}.rst-content a code,.rst-content a tt{color:#2980b9}.rst-content dl{margin-bottom:24px}.rst-content dl dt{font-weight:700;margin-bottom:12px}.rst-content dl ol,.rst-content dl p,.rst-content dl table,.rst-content dl ul{margin-bottom:12px}.rst-content dl dd{margin:0 0 12px 24px;line-height:24px}.rst-content dl dd>ol:last-child,.rst-content dl dd>p:last-child,.rst-content dl dd>table:last-child,.rst-content dl dd>ul:last-child{margin-bottom:0}html.writer-html4 .rst-content dl:not(.docutils),html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple){margin-bottom:24px}html.writer-html4 .rst-content dl:not(.docutils)>dt,html.writer-html5 .rst-content 
dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt{display:table;margin:6px 0;font-size:90%;line-height:normal;background:#e7f2fa;color:#2980b9;border-top:3px solid #6ab0de;padding:6px;position:relative}html.writer-html4 .rst-content dl:not(.docutils)>dt:before,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt:before{color:#6ab0de}html.writer-html4 .rst-content dl:not(.docutils)>dt .headerlink,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink{color:#404040;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt{margin-bottom:6px;border:none;border-left:3px solid #ccc;background:#f0f0f0;color:#555}html.writer-html4 .rst-content dl:not(.docutils) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink{color:#404040;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils)>dt:first-child,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt:first-child{margin-top:0}html.writer-html4 .rst-content dl:not(.docutils) code.descclassname,html.writer-html4 .rst-content dl:not(.docutils) 
code.descname,html.writer-html4 .rst-content dl:not(.docutils) tt.descclassname,html.writer-html4 .rst-content dl:not(.docutils) tt.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descname{background-color:transparent;border:none;padding:0;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils) code.descname,html.writer-html4 .rst-content dl:not(.docutils) tt.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descname{font-weight:700}html.writer-html4 .rst-content dl:not(.docutils) .optional,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .optional{display:inline-block;padding:0 4px;color:#000;font-weight:700}html.writer-html4 .rst-content dl:not(.docutils) .property,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .property{display:inline-block;padding-right:8px;max-width:100%}html.writer-html4 .rst-content dl:not(.docutils) .k,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .k{font-style:italic}html.writer-html4 
.rst-content dl:not(.docutils) .descclassname,html.writer-html4 .rst-content dl:not(.docutils) .descname,html.writer-html4 .rst-content dl:not(.docutils) .sig-name,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .sig-name{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;color:#000}.rst-content .viewcode-back,.rst-content .viewcode-link{display:inline-block;color:#27ae60;font-size:80%;padding-left:24px}.rst-content .viewcode-back{display:block;float:right}.rst-content p.rubric{margin-bottom:12px;font-weight:700}.rst-content code.download,.rst-content tt.download{background:inherit;padding:inherit;font-weight:400;font-family:inherit;font-size:inherit;color:inherit;border:inherit;white-space:inherit}.rst-content code.download span:first-child,.rst-content tt.download span:first-child{-webkit-font-smoothing:subpixel-antialiased}.rst-content code.download span:first-child:before,.rst-content tt.download span:first-child:before{margin-right:4px}.rst-content .guilabel,.rst-content .menuselection{font-size:80%;font-weight:700;border-radius:4px;padding:2.4px 6px;margin:auto 2px}.rst-content .guilabel,.rst-content .menuselection{border:1px solid #7fbbe3;background:#e7f2fa}.rst-content :not(dl.option-list)>:not(dt):not(kbd):not(.kbd)>.kbd,.rst-content :not(dl.option-list)>:not(dt):not(kbd):not(.kbd)>kbd{color:inherit;font-size:80%;background-color:#fff;border:1px solid #a6a6a6;border-radius:4px;box-shadow:0 2px grey;padding:2.4px 6px;margin:auto 0}.rst-content .versionmodified{font-style:italic}@media screen and (max-width:480px){.rst-content 
.sidebar{width:100%}}span[id*=MathJax-Span]{color:#404040}.math{text-align:center}@font-face{font-family:Lato;src:url(fonts/lato-normal.woff2?bd03a2cc277bbbc338d464e679fe9942) format("woff2"),url(fonts/lato-normal.woff?27bd77b9162d388cb8d4c4217c7c5e2a) format("woff");font-weight:400;font-style:normal;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-bold.woff2?cccb897485813c7c256901dbca54ecf2) format("woff2"),url(fonts/lato-bold.woff?d878b6c29b10beca227e9eef4246111b) format("woff");font-weight:700;font-style:normal;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-bold-italic.woff2?0b6bb6725576b072c5d0b02ecdd1900d) format("woff2"),url(fonts/lato-bold-italic.woff?9c7e4e9eb485b4a121c760e61bc3707c) format("woff");font-weight:700;font-style:italic;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-normal-italic.woff2?4eb103b4d12be57cb1d040ed5e162e9d) format("woff2"),url(fonts/lato-normal-italic.woff?f28f2d6482446544ef1ea1ccc6dd5892) format("woff");font-weight:400;font-style:italic;font-display:block}@font-face{font-family:Roboto Slab;font-style:normal;font-weight:400;src:url(fonts/Roboto-Slab-Regular.woff2?7abf5b8d04d26a2cafea937019bca958) format("woff2"),url(fonts/Roboto-Slab-Regular.woff?c1be9284088d487c5e3ff0a10a92e58c) format("woff");font-display:block}@font-face{font-family:Roboto Slab;font-style:normal;font-weight:700;src:url(fonts/Roboto-Slab-Bold.woff2?9984f4a9bda09be08e83f2506954adbe) format("woff2"),url(fonts/Roboto-Slab-Bold.woff?bed5564a116b05148e3b3bea6fb1162a) format("woff");font-display:block} \ No newline at end of file diff --git a/_static/doctools.js b/_static/doctools.js new file mode 100644 index 0000000000..d06a71d751 --- /dev/null +++ b/_static/doctools.js @@ -0,0 +1,156 @@ +/* + * doctools.js + * ~~~~~~~~~~~ + * + * Base JavaScript utilities for all Sphinx HTML documentation. + * + * :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. 
+ * + */ +"use strict"; + +const BLACKLISTED_KEY_CONTROL_ELEMENTS = new Set([ + "TEXTAREA", + "INPUT", + "SELECT", + "BUTTON", +]); + +const _ready = (callback) => { + if (document.readyState !== "loading") { + callback(); + } else { + document.addEventListener("DOMContentLoaded", callback); + } +}; + +/** + * Small JavaScript module for the documentation. + */ +const Documentation = { + init: () => { + Documentation.initDomainIndexTable(); + Documentation.initOnKeyListeners(); + }, + + /** + * i18n support + */ + TRANSLATIONS: {}, + PLURAL_EXPR: (n) => (n === 1 ? 0 : 1), + LOCALE: "unknown", + + // gettext and ngettext don't access this so that the functions + // can safely bound to a different name (_ = Documentation.gettext) + gettext: (string) => { + const translated = Documentation.TRANSLATIONS[string]; + switch (typeof translated) { + case "undefined": + return string; // no translation + case "string": + return translated; // translation exists + default: + return translated[0]; // (singular, plural) translation tuple exists + } + }, + + ngettext: (singular, plural, n) => { + const translated = Documentation.TRANSLATIONS[singular]; + if (typeof translated !== "undefined") + return translated[Documentation.PLURAL_EXPR(n)]; + return n === 1 ? 
singular : plural; + }, + + addTranslations: (catalog) => { + Object.assign(Documentation.TRANSLATIONS, catalog.messages); + Documentation.PLURAL_EXPR = new Function( + "n", + `return (${catalog.plural_expr})` + ); + Documentation.LOCALE = catalog.locale; + }, + + /** + * helper function to focus on search bar + */ + focusSearchBar: () => { + document.querySelectorAll("input[name=q]")[0]?.focus(); + }, + + /** + * Initialise the domain index toggle buttons + */ + initDomainIndexTable: () => { + const toggler = (el) => { + const idNumber = el.id.substr(7); + const toggledRows = document.querySelectorAll(`tr.cg-${idNumber}`); + if (el.src.substr(-9) === "minus.png") { + el.src = `${el.src.substr(0, el.src.length - 9)}plus.png`; + toggledRows.forEach((el) => (el.style.display = "none")); + } else { + el.src = `${el.src.substr(0, el.src.length - 8)}minus.png`; + toggledRows.forEach((el) => (el.style.display = "")); + } + }; + + const togglerElements = document.querySelectorAll("img.toggler"); + togglerElements.forEach((el) => + el.addEventListener("click", (event) => toggler(event.currentTarget)) + ); + togglerElements.forEach((el) => (el.style.display = "")); + if (DOCUMENTATION_OPTIONS.COLLAPSE_INDEX) togglerElements.forEach(toggler); + }, + + initOnKeyListeners: () => { + // only install a listener if it is really needed + if ( + !DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS && + !DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS + ) + return; + + document.addEventListener("keydown", (event) => { + // bail for input elements + if (BLACKLISTED_KEY_CONTROL_ELEMENTS.has(document.activeElement.tagName)) return; + // bail with special keys + if (event.altKey || event.ctrlKey || event.metaKey) return; + + if (!event.shiftKey) { + switch (event.key) { + case "ArrowLeft": + if (!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) break; + + const prevLink = document.querySelector('link[rel="prev"]'); + if (prevLink && prevLink.href) { + window.location.href = prevLink.href; + 
event.preventDefault(); + } + break; + case "ArrowRight": + if (!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) break; + + const nextLink = document.querySelector('link[rel="next"]'); + if (nextLink && nextLink.href) { + window.location.href = nextLink.href; + event.preventDefault(); + } + break; + } + } + + // some keyboard layouts may need Shift to get / + switch (event.key) { + case "/": + if (!DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS) break; + Documentation.focusSearchBar(); + event.preventDefault(); + } + }); + }, +}; + +// quick alias for translations +const _ = Documentation.gettext; + +_ready(Documentation.init); diff --git a/_static/documentation_options.js b/_static/documentation_options.js new file mode 100644 index 0000000000..80a731b511 --- /dev/null +++ b/_static/documentation_options.js @@ -0,0 +1,14 @@ +var DOCUMENTATION_OPTIONS = { + URL_ROOT: document.getElementById("documentation_options").getAttribute('data-url_root'), + VERSION: 'git-main/c8e4b1e9', + LANGUAGE: 'en', + COLLAPSE_INDEX: false, + BUILDER: 'html', + FILE_SUFFIX: '.html', + LINK_SUFFIX: '.html', + HAS_SOURCE: true, + SOURCELINK_SUFFIX: '.txt', + NAVIGATION_WITH_KEYS: false, + SHOW_SEARCH_SUMMARY: true, + ENABLE_SEARCH_SHORTCUTS: true, +}; \ No newline at end of file diff --git a/_static/file.png b/_static/file.png new file mode 100644 index 0000000000..a858a410e4 Binary files /dev/null and b/_static/file.png differ diff --git a/_static/jquery.js b/_static/jquery.js new file mode 100644 index 0000000000..c4c6022f29 --- /dev/null +++ b/_static/jquery.js @@ -0,0 +1,2 @@ +/*! 
jQuery v3.6.0 | (c) OpenJS Foundation and other contributors | jquery.org/license */ +!function(e,t){"use strict";"object"==typeof module&&"object"==typeof module.exports?module.exports=e.document?t(e,!0):function(e){if(!e.document)throw new Error("jQuery requires a window with a document");return t(e)}:t(e)}("undefined"!=typeof window?window:this,function(C,e){"use strict";var t=[],r=Object.getPrototypeOf,s=t.slice,g=t.flat?function(e){return t.flat.call(e)}:function(e){return t.concat.apply([],e)},u=t.push,i=t.indexOf,n={},o=n.toString,v=n.hasOwnProperty,a=v.toString,l=a.call(Object),y={},m=function(e){return"function"==typeof e&&"number"!=typeof e.nodeType&&"function"!=typeof e.item},x=function(e){return null!=e&&e===e.window},E=C.document,c={type:!0,src:!0,nonce:!0,noModule:!0};function b(e,t,n){var r,i,o=(n=n||E).createElement("script");if(o.text=e,t)for(r in c)(i=t[r]||t.getAttribute&&t.getAttribute(r))&&o.setAttribute(r,i);n.head.appendChild(o).parentNode.removeChild(o)}function w(e){return null==e?e+"":"object"==typeof e||"function"==typeof e?n[o.call(e)]||"object":typeof e}var f="3.6.0",S=function(e,t){return new S.fn.init(e,t)};function p(e){var t=!!e&&"length"in e&&e.length,n=w(e);return!m(e)&&!x(e)&&("array"===n||0===t||"number"==typeof t&&0+~]|"+M+")"+M+"*"),U=new RegExp(M+"|>"),X=new RegExp(F),V=new RegExp("^"+I+"$"),G={ID:new RegExp("^#("+I+")"),CLASS:new RegExp("^\\.("+I+")"),TAG:new RegExp("^("+I+"|[*])"),ATTR:new RegExp("^"+W),PSEUDO:new RegExp("^"+F),CHILD:new RegExp("^:(only|first|last|nth|nth-last)-(child|of-type)(?:\\("+M+"*(even|odd|(([+-]|)(\\d*)n|)"+M+"*(?:([+-]|)"+M+"*(\\d+)|))"+M+"*\\)|)","i"),bool:new RegExp("^(?:"+R+")$","i"),needsContext:new RegExp("^"+M+"*[>+~]|:(even|odd|eq|gt|lt|nth|first|last)(?:\\("+M+"*((?:-\\d)?\\d*)"+M+"*\\)|)(?=[^-]|$)","i")},Y=/HTML$/i,Q=/^(?:input|select|textarea|button)$/i,J=/^h\d$/i,K=/^[^{]+\{\s*\[native \w/,Z=/^(?:#([\w-]+)|(\w+)|\.([\w-]+))$/,ee=/[+~]/,te=new 
RegExp("\\\\[\\da-fA-F]{1,6}"+M+"?|\\\\([^\\r\\n\\f])","g"),ne=function(e,t){var n="0x"+e.slice(1)-65536;return t||(n<0?String.fromCharCode(n+65536):String.fromCharCode(n>>10|55296,1023&n|56320))},re=/([\0-\x1f\x7f]|^-?\d)|^-$|[^\0-\x1f\x7f-\uFFFF\w-]/g,ie=function(e,t){return t?"\0"===e?"\ufffd":e.slice(0,-1)+"\\"+e.charCodeAt(e.length-1).toString(16)+" ":"\\"+e},oe=function(){T()},ae=be(function(e){return!0===e.disabled&&"fieldset"===e.nodeName.toLowerCase()},{dir:"parentNode",next:"legend"});try{H.apply(t=O.call(p.childNodes),p.childNodes),t[p.childNodes.length].nodeType}catch(e){H={apply:t.length?function(e,t){L.apply(e,O.call(t))}:function(e,t){var n=e.length,r=0;while(e[n++]=t[r++]);e.length=n-1}}}function se(t,e,n,r){var i,o,a,s,u,l,c,f=e&&e.ownerDocument,p=e?e.nodeType:9;if(n=n||[],"string"!=typeof t||!t||1!==p&&9!==p&&11!==p)return n;if(!r&&(T(e),e=e||C,E)){if(11!==p&&(u=Z.exec(t)))if(i=u[1]){if(9===p){if(!(a=e.getElementById(i)))return n;if(a.id===i)return n.push(a),n}else if(f&&(a=f.getElementById(i))&&y(e,a)&&a.id===i)return n.push(a),n}else{if(u[2])return H.apply(n,e.getElementsByTagName(t)),n;if((i=u[3])&&d.getElementsByClassName&&e.getElementsByClassName)return H.apply(n,e.getElementsByClassName(i)),n}if(d.qsa&&!N[t+" "]&&(!v||!v.test(t))&&(1!==p||"object"!==e.nodeName.toLowerCase())){if(c=t,f=e,1===p&&(U.test(t)||z.test(t))){(f=ee.test(t)&&ye(e.parentNode)||e)===e&&d.scope||((s=e.getAttribute("id"))?s=s.replace(re,ie):e.setAttribute("id",s=S)),o=(l=h(t)).length;while(o--)l[o]=(s?"#"+s:":scope")+" "+xe(l[o]);c=l.join(",")}try{return H.apply(n,f.querySelectorAll(c)),n}catch(e){N(t,!0)}finally{s===S&&e.removeAttribute("id")}}}return g(t.replace($,"$1"),e,n,r)}function ue(){var r=[];return function e(t,n){return r.push(t+" ")>b.cacheLength&&delete e[r.shift()],e[t+" "]=n}}function le(e){return e[S]=!0,e}function ce(e){var t=C.createElement("fieldset");try{return!!e(t)}catch(e){return!1}finally{t.parentNode&&t.parentNode.removeChild(t),t=null}}function 
fe(e,t){var n=e.split("|"),r=n.length;while(r--)b.attrHandle[n[r]]=t}function pe(e,t){var n=t&&e,r=n&&1===e.nodeType&&1===t.nodeType&&e.sourceIndex-t.sourceIndex;if(r)return r;if(n)while(n=n.nextSibling)if(n===t)return-1;return e?1:-1}function de(t){return function(e){return"input"===e.nodeName.toLowerCase()&&e.type===t}}function he(n){return function(e){var t=e.nodeName.toLowerCase();return("input"===t||"button"===t)&&e.type===n}}function ge(t){return function(e){return"form"in e?e.parentNode&&!1===e.disabled?"label"in e?"label"in e.parentNode?e.parentNode.disabled===t:e.disabled===t:e.isDisabled===t||e.isDisabled!==!t&&ae(e)===t:e.disabled===t:"label"in e&&e.disabled===t}}function ve(a){return le(function(o){return o=+o,le(function(e,t){var n,r=a([],e.length,o),i=r.length;while(i--)e[n=r[i]]&&(e[n]=!(t[n]=e[n]))})})}function ye(e){return e&&"undefined"!=typeof e.getElementsByTagName&&e}for(e in d=se.support={},i=se.isXML=function(e){var t=e&&e.namespaceURI,n=e&&(e.ownerDocument||e).documentElement;return!Y.test(t||n&&n.nodeName||"HTML")},T=se.setDocument=function(e){var t,n,r=e?e.ownerDocument||e:p;return r!=C&&9===r.nodeType&&r.documentElement&&(a=(C=r).documentElement,E=!i(C),p!=C&&(n=C.defaultView)&&n.top!==n&&(n.addEventListener?n.addEventListener("unload",oe,!1):n.attachEvent&&n.attachEvent("onunload",oe)),d.scope=ce(function(e){return a.appendChild(e).appendChild(C.createElement("div")),"undefined"!=typeof e.querySelectorAll&&!e.querySelectorAll(":scope fieldset div").length}),d.attributes=ce(function(e){return e.className="i",!e.getAttribute("className")}),d.getElementsByTagName=ce(function(e){return e.appendChild(C.createComment("")),!e.getElementsByTagName("*").length}),d.getElementsByClassName=K.test(C.getElementsByClassName),d.getById=ce(function(e){return a.appendChild(e).id=S,!C.getElementsByName||!C.getElementsByName(S).length}),d.getById?(b.filter.ID=function(e){var t=e.replace(te,ne);return function(e){return 
e.getAttribute("id")===t}},b.find.ID=function(e,t){if("undefined"!=typeof t.getElementById&&E){var n=t.getElementById(e);return n?[n]:[]}}):(b.filter.ID=function(e){var n=e.replace(te,ne);return function(e){var t="undefined"!=typeof e.getAttributeNode&&e.getAttributeNode("id");return t&&t.value===n}},b.find.ID=function(e,t){if("undefined"!=typeof t.getElementById&&E){var n,r,i,o=t.getElementById(e);if(o){if((n=o.getAttributeNode("id"))&&n.value===e)return[o];i=t.getElementsByName(e),r=0;while(o=i[r++])if((n=o.getAttributeNode("id"))&&n.value===e)return[o]}return[]}}),b.find.TAG=d.getElementsByTagName?function(e,t){return"undefined"!=typeof t.getElementsByTagName?t.getElementsByTagName(e):d.qsa?t.querySelectorAll(e):void 0}:function(e,t){var n,r=[],i=0,o=t.getElementsByTagName(e);if("*"===e){while(n=o[i++])1===n.nodeType&&r.push(n);return r}return o},b.find.CLASS=d.getElementsByClassName&&function(e,t){if("undefined"!=typeof t.getElementsByClassName&&E)return t.getElementsByClassName(e)},s=[],v=[],(d.qsa=K.test(C.querySelectorAll))&&(ce(function(e){var t;a.appendChild(e).innerHTML="",e.querySelectorAll("[msallowcapture^='']").length&&v.push("[*^$]="+M+"*(?:''|\"\")"),e.querySelectorAll("[selected]").length||v.push("\\["+M+"*(?:value|"+R+")"),e.querySelectorAll("[id~="+S+"-]").length||v.push("~="),(t=C.createElement("input")).setAttribute("name",""),e.appendChild(t),e.querySelectorAll("[name='']").length||v.push("\\["+M+"*name"+M+"*="+M+"*(?:''|\"\")"),e.querySelectorAll(":checked").length||v.push(":checked"),e.querySelectorAll("a#"+S+"+*").length||v.push(".#.+[+~]"),e.querySelectorAll("\\\f"),v.push("[\\r\\n\\f]")}),ce(function(e){e.innerHTML="";var 
t=C.createElement("input");t.setAttribute("type","hidden"),e.appendChild(t).setAttribute("name","D"),e.querySelectorAll("[name=d]").length&&v.push("name"+M+"*[*^$|!~]?="),2!==e.querySelectorAll(":enabled").length&&v.push(":enabled",":disabled"),a.appendChild(e).disabled=!0,2!==e.querySelectorAll(":disabled").length&&v.push(":enabled",":disabled"),e.querySelectorAll("*,:x"),v.push(",.*:")})),(d.matchesSelector=K.test(c=a.matches||a.webkitMatchesSelector||a.mozMatchesSelector||a.oMatchesSelector||a.msMatchesSelector))&&ce(function(e){d.disconnectedMatch=c.call(e,"*"),c.call(e,"[s!='']:x"),s.push("!=",F)}),v=v.length&&new RegExp(v.join("|")),s=s.length&&new RegExp(s.join("|")),t=K.test(a.compareDocumentPosition),y=t||K.test(a.contains)?function(e,t){var n=9===e.nodeType?e.documentElement:e,r=t&&t.parentNode;return e===r||!(!r||1!==r.nodeType||!(n.contains?n.contains(r):e.compareDocumentPosition&&16&e.compareDocumentPosition(r)))}:function(e,t){if(t)while(t=t.parentNode)if(t===e)return!0;return!1},j=t?function(e,t){if(e===t)return l=!0,0;var n=!e.compareDocumentPosition-!t.compareDocumentPosition;return n||(1&(n=(e.ownerDocument||e)==(t.ownerDocument||t)?e.compareDocumentPosition(t):1)||!d.sortDetached&&t.compareDocumentPosition(e)===n?e==C||e.ownerDocument==p&&y(p,e)?-1:t==C||t.ownerDocument==p&&y(p,t)?1:u?P(u,e)-P(u,t):0:4&n?-1:1)}:function(e,t){if(e===t)return l=!0,0;var n,r=0,i=e.parentNode,o=t.parentNode,a=[e],s=[t];if(!i||!o)return e==C?-1:t==C?1:i?-1:o?1:u?P(u,e)-P(u,t):0;if(i===o)return pe(e,t);n=e;while(n=n.parentNode)a.unshift(n);n=t;while(n=n.parentNode)s.unshift(n);while(a[r]===s[r])r++;return r?pe(a[r],s[r]):a[r]==p?-1:s[r]==p?1:0}),C},se.matches=function(e,t){return se(e,null,null,t)},se.matchesSelector=function(e,t){if(T(e),d.matchesSelector&&E&&!N[t+" "]&&(!s||!s.test(t))&&(!v||!v.test(t)))try{var n=c.call(e,t);if(n||d.disconnectedMatch||e.document&&11!==e.document.nodeType)return n}catch(e){N(t,!0)}return 0":{dir:"parentNode",first:!0}," 
":{dir:"parentNode"},"+":{dir:"previousSibling",first:!0},"~":{dir:"previousSibling"}},preFilter:{ATTR:function(e){return e[1]=e[1].replace(te,ne),e[3]=(e[3]||e[4]||e[5]||"").replace(te,ne),"~="===e[2]&&(e[3]=" "+e[3]+" "),e.slice(0,4)},CHILD:function(e){return e[1]=e[1].toLowerCase(),"nth"===e[1].slice(0,3)?(e[3]||se.error(e[0]),e[4]=+(e[4]?e[5]+(e[6]||1):2*("even"===e[3]||"odd"===e[3])),e[5]=+(e[7]+e[8]||"odd"===e[3])):e[3]&&se.error(e[0]),e},PSEUDO:function(e){var t,n=!e[6]&&e[2];return G.CHILD.test(e[0])?null:(e[3]?e[2]=e[4]||e[5]||"":n&&X.test(n)&&(t=h(n,!0))&&(t=n.indexOf(")",n.length-t)-n.length)&&(e[0]=e[0].slice(0,t),e[2]=n.slice(0,t)),e.slice(0,3))}},filter:{TAG:function(e){var t=e.replace(te,ne).toLowerCase();return"*"===e?function(){return!0}:function(e){return e.nodeName&&e.nodeName.toLowerCase()===t}},CLASS:function(e){var t=m[e+" "];return t||(t=new RegExp("(^|"+M+")"+e+"("+M+"|$)"))&&m(e,function(e){return t.test("string"==typeof e.className&&e.className||"undefined"!=typeof e.getAttribute&&e.getAttribute("class")||"")})},ATTR:function(n,r,i){return function(e){var t=se.attr(e,n);return null==t?"!="===r:!r||(t+="","="===r?t===i:"!="===r?t!==i:"^="===r?i&&0===t.indexOf(i):"*="===r?i&&-1:\x20\t\r\n\f]*)[\x20\t\r\n\f]*\/?>(?:<\/\1>|)$/i;function j(e,n,r){return m(n)?S.grep(e,function(e,t){return!!n.call(e,t,e)!==r}):n.nodeType?S.grep(e,function(e){return e===n!==r}):"string"!=typeof n?S.grep(e,function(e){return-1)[^>]*|#([\w-]+))$/;(S.fn.init=function(e,t,n){var r,i;if(!e)return this;if(n=n||D,"string"==typeof e){if(!(r="<"===e[0]&&">"===e[e.length-1]&&3<=e.length?[null,e,null]:q.exec(e))||!r[1]&&t)return!t||t.jquery?(t||n).find(e):this.constructor(t).find(e);if(r[1]){if(t=t instanceof S?t[0]:t,S.merge(this,S.parseHTML(r[1],t&&t.nodeType?t.ownerDocument||t:E,!0)),N.test(r[1])&&S.isPlainObject(t))for(r in t)m(this[r])?this[r](t[r]):this.attr(r,t[r]);return this}return(i=E.getElementById(r[2]))&&(this[0]=i,this.length=1),this}return 
e.nodeType?(this[0]=e,this.length=1,this):m(e)?void 0!==n.ready?n.ready(e):e(S):S.makeArray(e,this)}).prototype=S.fn,D=S(E);var L=/^(?:parents|prev(?:Until|All))/,H={children:!0,contents:!0,next:!0,prev:!0};function O(e,t){while((e=e[t])&&1!==e.nodeType);return e}S.fn.extend({has:function(e){var t=S(e,this),n=t.length;return this.filter(function(){for(var e=0;e\x20\t\r\n\f]*)/i,he=/^$|^module$|\/(?:java|ecma)script/i;ce=E.createDocumentFragment().appendChild(E.createElement("div")),(fe=E.createElement("input")).setAttribute("type","radio"),fe.setAttribute("checked","checked"),fe.setAttribute("name","t"),ce.appendChild(fe),y.checkClone=ce.cloneNode(!0).cloneNode(!0).lastChild.checked,ce.innerHTML="",y.noCloneChecked=!!ce.cloneNode(!0).lastChild.defaultValue,ce.innerHTML="",y.option=!!ce.lastChild;var ge={thead:[1,"","
"],col:[2,"","
"],tr:[2,"","
"],td:[3,"","
"],_default:[0,"",""]};function ve(e,t){var n;return n="undefined"!=typeof e.getElementsByTagName?e.getElementsByTagName(t||"*"):"undefined"!=typeof e.querySelectorAll?e.querySelectorAll(t||"*"):[],void 0===t||t&&A(e,t)?S.merge([e],n):n}function ye(e,t){for(var n=0,r=e.length;n",""]);var me=/<|&#?\w+;/;function xe(e,t,n,r,i){for(var o,a,s,u,l,c,f=t.createDocumentFragment(),p=[],d=0,h=e.length;d\s*$/g;function je(e,t){return A(e,"table")&&A(11!==t.nodeType?t:t.firstChild,"tr")&&S(e).children("tbody")[0]||e}function De(e){return e.type=(null!==e.getAttribute("type"))+"/"+e.type,e}function qe(e){return"true/"===(e.type||"").slice(0,5)?e.type=e.type.slice(5):e.removeAttribute("type"),e}function Le(e,t){var n,r,i,o,a,s;if(1===t.nodeType){if(Y.hasData(e)&&(s=Y.get(e).events))for(i in Y.remove(t,"handle events"),s)for(n=0,r=s[i].length;n").attr(n.scriptAttrs||{}).prop({charset:n.scriptCharset,src:n.url}).on("load error",i=function(e){r.remove(),i=null,e&&t("error"===e.type?404:200,e.type)}),E.head.appendChild(r[0])},abort:function(){i&&i()}}});var _t,zt=[],Ut=/(=)\?(?=&|$)|\?\?/;S.ajaxSetup({jsonp:"callback",jsonpCallback:function(){var e=zt.pop()||S.expando+"_"+wt.guid++;return this[e]=!0,e}}),S.ajaxPrefilter("json jsonp",function(e,t,n){var r,i,o,a=!1!==e.jsonp&&(Ut.test(e.url)?"url":"string"==typeof e.data&&0===(e.contentType||"").indexOf("application/x-www-form-urlencoded")&&Ut.test(e.data)&&"data");if(a||"jsonp"===e.dataTypes[0])return r=e.jsonpCallback=m(e.jsonpCallback)?e.jsonpCallback():e.jsonpCallback,a?e[a]=e[a].replace(Ut,"$1"+r):!1!==e.jsonp&&(e.url+=(Tt.test(e.url)?"&":"?")+e.jsonp+"="+r),e.converters["script json"]=function(){return o||S.error(r+" was not called"),o[0]},e.dataTypes[0]="json",i=C[r],C[r]=function(){o=arguments},n.always(function(){void 0===i?S(C).removeProp(r):C[r]=i,e[r]&&(e.jsonpCallback=t.jsonpCallback,zt.push(r)),o&&m(i)&&i(o[0]),o=i=void 0}),"script"}),y.createHTMLDocument=((_t=E.implementation.createHTMLDocument("").body).innerHTML="
",2===_t.childNodes.length),S.parseHTML=function(e,t,n){return"string"!=typeof e?[]:("boolean"==typeof t&&(n=t,t=!1),t||(y.createHTMLDocument?((r=(t=E.implementation.createHTMLDocument("")).createElement("base")).href=E.location.href,t.head.appendChild(r)):t=E),o=!n&&[],(i=N.exec(e))?[t.createElement(i[1])]:(i=xe([e],t,o),o&&o.length&&S(o).remove(),S.merge([],i.childNodes)));var r,i,o},S.fn.load=function(e,t,n){var r,i,o,a=this,s=e.indexOf(" ");return-1").append(S.parseHTML(e)).find(r):e)}).always(n&&function(e,t){a.each(function(){n.apply(this,o||[e.responseText,t,e])})}),this},S.expr.pseudos.animated=function(t){return S.grep(S.timers,function(e){return t===e.elem}).length},S.offset={setOffset:function(e,t,n){var r,i,o,a,s,u,l=S.css(e,"position"),c=S(e),f={};"static"===l&&(e.style.position="relative"),s=c.offset(),o=S.css(e,"top"),u=S.css(e,"left"),("absolute"===l||"fixed"===l)&&-1<(o+u).indexOf("auto")?(a=(r=c.position()).top,i=r.left):(a=parseFloat(o)||0,i=parseFloat(u)||0),m(t)&&(t=t.call(e,n,S.extend({},s))),null!=t.top&&(f.top=t.top-s.top+a),null!=t.left&&(f.left=t.left-s.left+i),"using"in t?t.using.call(e,f):c.css(f)}},S.fn.extend({offset:function(t){if(arguments.length)return void 0===t?this:this.each(function(e){S.offset.setOffset(this,t,e)});var e,n,r=this[0];return r?r.getClientRects().length?(e=r.getBoundingClientRect(),n=r.ownerDocument.defaultView,{top:e.top+n.pageYOffset,left:e.left+n.pageXOffset}):{top:0,left:0}:void 0},position:function(){if(this[0]){var 
e,t,n,r=this[0],i={top:0,left:0};if("fixed"===S.css(r,"position"))t=r.getBoundingClientRect();else{t=this.offset(),n=r.ownerDocument,e=r.offsetParent||n.documentElement;while(e&&(e===n.body||e===n.documentElement)&&"static"===S.css(e,"position"))e=e.parentNode;e&&e!==r&&1===e.nodeType&&((i=S(e).offset()).top+=S.css(e,"borderTopWidth",!0),i.left+=S.css(e,"borderLeftWidth",!0))}return{top:t.top-i.top-S.css(r,"marginTop",!0),left:t.left-i.left-S.css(r,"marginLeft",!0)}}},offsetParent:function(){return this.map(function(){var e=this.offsetParent;while(e&&"static"===S.css(e,"position"))e=e.offsetParent;return e||re})}}),S.each({scrollLeft:"pageXOffset",scrollTop:"pageYOffset"},function(t,i){var o="pageYOffset"===i;S.fn[t]=function(e){return $(this,function(e,t,n){var r;if(x(e)?r=e:9===e.nodeType&&(r=e.defaultView),void 0===n)return r?r[i]:e[t];r?r.scrollTo(o?r.pageXOffset:n,o?n:r.pageYOffset):e[t]=n},t,e,arguments.length)}}),S.each(["top","left"],function(e,n){S.cssHooks[n]=Fe(y.pixelPosition,function(e,t){if(t)return t=We(e,n),Pe.test(t)?S(e).position()[n]+"px":t})}),S.each({Height:"height",Width:"width"},function(a,s){S.each({padding:"inner"+a,content:s,"":"outer"+a},function(r,o){S.fn[o]=function(e,t){var n=arguments.length&&(r||"boolean"!=typeof e),i=r||(!0===e||!0===t?"margin":"border");return $(this,function(e,t,n){var r;return x(e)?0===o.indexOf("outer")?e["inner"+a]:e.document.documentElement["client"+a]:9===e.nodeType?(r=e.documentElement,Math.max(e.body["scroll"+a],r["scroll"+a],e.body["offset"+a],r["offset"+a],r["client"+a])):void 0===n?S.css(e,t,i):S.style(e,t,n,i)},s,n?e:void 0,n)}})}),S.each(["ajaxStart","ajaxStop","ajaxComplete","ajaxError","ajaxSuccess","ajaxSend"],function(e,t){S.fn[t]=function(e){return this.on(t,e)}}),S.fn.extend({bind:function(e,t,n){return this.on(e,null,t,n)},unbind:function(e,t){return this.off(e,null,t)},delegate:function(e,t,n,r){return this.on(t,e,n,r)},undelegate:function(e,t,n){return 
1===arguments.length?this.off(e,"**"):this.off(t,e||"**",n)},hover:function(e,t){return this.mouseenter(e).mouseleave(t||e)}}),S.each("blur focus focusin focusout resize scroll click dblclick mousedown mouseup mousemove mouseover mouseout mouseenter mouseleave change select submit keydown keypress keyup contextmenu".split(" "),function(e,n){S.fn[n]=function(e,t){return 0",d.insertBefore(c.lastChild,d.firstChild)}function d(){var a=y.elements;return"string"==typeof a?a.split(" "):a}function e(a,b){var c=y.elements;"string"!=typeof c&&(c=c.join(" ")),"string"!=typeof a&&(a=a.join(" ")),y.elements=c+" "+a,j(b)}function f(a){var b=x[a[v]];return b||(b={},w++,a[v]=w,x[w]=b),b}function g(a,c,d){if(c||(c=b),q)return c.createElement(a);d||(d=f(c));var e;return e=d.cache[a]?d.cache[a].cloneNode():u.test(a)?(d.cache[a]=d.createElem(a)).cloneNode():d.createElem(a),!e.canHaveChildren||t.test(a)||e.tagUrn?e:d.frag.appendChild(e)}function h(a,c){if(a||(a=b),q)return a.createDocumentFragment();c=c||f(a);for(var e=c.frag.cloneNode(),g=0,h=d(),i=h.length;i>g;g++)e.createElement(h[g]);return e}function i(a,b){b.cache||(b.cache={},b.createElem=a.createElement,b.createFrag=a.createDocumentFragment,b.frag=b.createFrag()),a.createElement=function(c){return y.shivMethods?g(c,a,b):b.createElem(c)},a.createDocumentFragment=Function("h,f","return function(){var n=f.cloneNode(),c=n.createElement;h.shivMethods&&("+d().join().replace(/[\w\-:]+/g,function(a){return b.createElem(a),b.frag.createElement(a),'c("'+a+'")'})+");return n}")(y,b.frag)}function j(a){a||(a=b);var d=f(a);return!y.shivCSS||p||d.hasCSS||(d.hasCSS=!!c(a,"article,aside,dialog,figcaption,figure,footer,header,hgroup,main,nav,section{display:block}mark{background:#FF0;color:#000}template{display:none}")),q||i(a,d),a}function k(a){for(var b,c=a.getElementsByTagName("*"),e=c.length,f=RegExp("^(?:"+d().join("|")+")$","i"),g=[];e--;)b=c[e],f.test(b.nodeName)&&g.push(b.applyElement(l(b)));return g}function l(a){for(var 
b,c=a.attributes,d=c.length,e=a.ownerDocument.createElement(A+":"+a.nodeName);d--;)b=c[d],b.specified&&e.setAttribute(b.nodeName,b.nodeValue);return e.style.cssText=a.style.cssText,e}function m(a){for(var b,c=a.split("{"),e=c.length,f=RegExp("(^|[\\s,>+~])("+d().join("|")+")(?=[[\\s,>+~#.:]|$)","gi"),g="$1"+A+"\\:$2";e--;)b=c[e]=c[e].split("}"),b[b.length-1]=b[b.length-1].replace(f,g),c[e]=b.join("}");return c.join("{")}function n(a){for(var b=a.length;b--;)a[b].removeNode()}function o(a){function b(){clearTimeout(g._removeSheetTimer),d&&d.removeNode(!0),d=null}var d,e,g=f(a),h=a.namespaces,i=a.parentWindow;return!B||a.printShived?a:("undefined"==typeof h[A]&&h.add(A),i.attachEvent("onbeforeprint",function(){b();for(var f,g,h,i=a.styleSheets,j=[],l=i.length,n=Array(l);l--;)n[l]=i[l];for(;h=n.pop();)if(!h.disabled&&z.test(h.media)){try{f=h.imports,g=f.length}catch(o){g=0}for(l=0;g>l;l++)n.push(f[l]);try{j.push(h.cssText)}catch(o){}}j=m(j.reverse().join("")),e=k(a),d=c(a,j)}),i.attachEvent("onafterprint",function(){n(e),clearTimeout(g._removeSheetTimer),g._removeSheetTimer=setTimeout(b,500)}),a.printShived=!0,a)}var p,q,r="3.7.3",s=a.html5||{},t=/^<|^(?:button|map|select|textarea|object|iframe|option|optgroup)$/i,u=/^(?:a|b|code|div|fieldset|h1|h2|h3|h4|h5|h6|i|label|li|ol|p|q|span|strong|style|table|tbody|td|th|tr|ul)$/i,v="_html5shiv",w=0,x={};!function(){try{var a=b.createElement("a");a.innerHTML="",p="hidden"in a,q=1==a.childNodes.length||function(){b.createElement("a");var a=b.createDocumentFragment();return"undefined"==typeof a.cloneNode||"undefined"==typeof a.createDocumentFragment||"undefined"==typeof a.createElement}()}catch(c){p=!0,q=!0}}();var y={elements:s.elements||"abbr article aside audio bdi canvas data datalist details dialog figcaption figure footer header hgroup main mark meter nav output picture progress section summary template time 
video",version:r,shivCSS:s.shivCSS!==!1,supportsUnknownElements:q,shivMethods:s.shivMethods!==!1,type:"default",shivDocument:j,createElement:g,createDocumentFragment:h,addElements:e};a.html5=y,j(b);var z=/^$|\b(?:all|print)\b/,A="html5shiv",B=!q&&function(){var c=b.documentElement;return!("undefined"==typeof b.namespaces||"undefined"==typeof b.parentWindow||"undefined"==typeof c.applyElement||"undefined"==typeof c.removeNode||"undefined"==typeof a.attachEvent)}();y.type+=" print",y.shivPrint=o,o(b),"object"==typeof module&&module.exports&&(module.exports=y)}("undefined"!=typeof window?window:this,document); \ No newline at end of file diff --git a/_static/js/html5shiv.min.js b/_static/js/html5shiv.min.js new file mode 100644 index 0000000000..cd1c674f5e --- /dev/null +++ b/_static/js/html5shiv.min.js @@ -0,0 +1,4 @@ +/** +* @preserve HTML5 Shiv 3.7.3 | @afarkas @jdalton @jon_neal @rem | MIT/GPL2 Licensed +*/ +!function(a,b){function c(a,b){var c=a.createElement("p"),d=a.getElementsByTagName("head")[0]||a.documentElement;return c.innerHTML="x",d.insertBefore(c.lastChild,d.firstChild)}function d(){var a=t.elements;return"string"==typeof a?a.split(" "):a}function e(a,b){var c=t.elements;"string"!=typeof c&&(c=c.join(" ")),"string"!=typeof a&&(a=a.join(" ")),t.elements=c+" "+a,j(b)}function f(a){var b=s[a[q]];return b||(b={},r++,a[q]=r,s[r]=b),b}function g(a,c,d){if(c||(c=b),l)return c.createElement(a);d||(d=f(c));var e;return e=d.cache[a]?d.cache[a].cloneNode():p.test(a)?(d.cache[a]=d.createElem(a)).cloneNode():d.createElem(a),!e.canHaveChildren||o.test(a)||e.tagUrn?e:d.frag.appendChild(e)}function h(a,c){if(a||(a=b),l)return a.createDocumentFragment();c=c||f(a);for(var e=c.frag.cloneNode(),g=0,h=d(),i=h.length;i>g;g++)e.createElement(h[g]);return e}function i(a,b){b.cache||(b.cache={},b.createElem=a.createElement,b.createFrag=a.createDocumentFragment,b.frag=b.createFrag()),a.createElement=function(c){return 
t.shivMethods?g(c,a,b):b.createElem(c)},a.createDocumentFragment=Function("h,f","return function(){var n=f.cloneNode(),c=n.createElement;h.shivMethods&&("+d().join().replace(/[\w\-:]+/g,function(a){return b.createElem(a),b.frag.createElement(a),'c("'+a+'")'})+");return n}")(t,b.frag)}function j(a){a||(a=b);var d=f(a);return!t.shivCSS||k||d.hasCSS||(d.hasCSS=!!c(a,"article,aside,dialog,figcaption,figure,footer,header,hgroup,main,nav,section{display:block}mark{background:#FF0;color:#000}template{display:none}")),l||i(a,d),a}var k,l,m="3.7.3-pre",n=a.html5||{},o=/^<|^(?:button|map|select|textarea|object|iframe|option|optgroup)$/i,p=/^(?:a|b|code|div|fieldset|h1|h2|h3|h4|h5|h6|i|label|li|ol|p|q|span|strong|style|table|tbody|td|th|tr|ul)$/i,q="_html5shiv",r=0,s={};!function(){try{var a=b.createElement("a");a.innerHTML="",k="hidden"in a,l=1==a.childNodes.length||function(){b.createElement("a");var a=b.createDocumentFragment();return"undefined"==typeof a.cloneNode||"undefined"==typeof a.createDocumentFragment||"undefined"==typeof a.createElement}()}catch(c){k=!0,l=!0}}();var t={elements:n.elements||"abbr article aside audio bdi canvas data datalist details dialog figcaption figure footer header hgroup main mark meter nav output picture progress section summary template time video",version:m,shivCSS:n.shivCSS!==!1,supportsUnknownElements:l,shivMethods:n.shivMethods!==!1,type:"default",shivDocument:j,createElement:g,createDocumentFragment:h,addElements:e};a.html5=t,j(b),"object"==typeof module&&module.exports&&(module.exports=t)}("undefined"!=typeof window?window:this,document); \ No newline at end of file diff --git a/_static/js/theme.js b/_static/js/theme.js new file mode 100644 index 0000000000..1fddb6ee4a --- /dev/null +++ b/_static/js/theme.js @@ -0,0 +1 @@ +!function(n){var e={};function t(i){if(e[i])return e[i].exports;var o=e[i]={i:i,l:!1,exports:{}};return 
n[i].call(o.exports,o,o.exports,t),o.l=!0,o.exports}t.m=n,t.c=e,t.d=function(n,e,i){t.o(n,e)||Object.defineProperty(n,e,{enumerable:!0,get:i})},t.r=function(n){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(n,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(n,"__esModule",{value:!0})},t.t=function(n,e){if(1&e&&(n=t(n)),8&e)return n;if(4&e&&"object"==typeof n&&n&&n.__esModule)return n;var i=Object.create(null);if(t.r(i),Object.defineProperty(i,"default",{enumerable:!0,value:n}),2&e&&"string"!=typeof n)for(var o in n)t.d(i,o,function(e){return n[e]}.bind(null,o));return i},t.n=function(n){var e=n&&n.__esModule?function(){return n.default}:function(){return n};return t.d(e,"a",e),e},t.o=function(n,e){return Object.prototype.hasOwnProperty.call(n,e)},t.p="",t(t.s=0)}([function(n,e,t){t(1),n.exports=t(3)},function(n,e,t){(function(){var e="undefined"!=typeof window?window.jQuery:t(2);n.exports.ThemeNav={navBar:null,win:null,winScroll:!1,winResize:!1,linkScroll:!1,winPosition:0,winHeight:null,docHeight:null,isRunning:!1,enable:function(n){var t=this;void 0===n&&(n=!0),t.isRunning||(t.isRunning=!0,e((function(e){t.init(e),t.reset(),t.win.on("hashchange",t.reset),n&&t.win.on("scroll",(function(){t.linkScroll||t.winScroll||(t.winScroll=!0,requestAnimationFrame((function(){t.onScroll()})))})),t.win.on("resize",(function(){t.winResize||(t.winResize=!0,requestAnimationFrame((function(){t.onResize()})))})),t.onResize()})))},enableSticky:function(){this.enable(!0)},init:function(n){n(document);var e=this;this.navBar=n("div.wy-side-scroll:first"),this.win=n(window),n(document).on("click","[data-toggle='wy-nav-top']",(function(){n("[data-toggle='wy-nav-shift']").toggleClass("shift"),n("[data-toggle='rst-versions']").toggleClass("shift")})).on("click",".wy-menu-vertical .current ul li a",(function(){var 
t=n(this);n("[data-toggle='wy-nav-shift']").removeClass("shift"),n("[data-toggle='rst-versions']").toggleClass("shift"),e.toggleCurrent(t),e.hashChange()})).on("click","[data-toggle='rst-current-version']",(function(){n("[data-toggle='rst-versions']").toggleClass("shift-up")})),n("table.docutils:not(.field-list,.footnote,.citation)").wrap("
"),n("table.docutils.footnote").wrap("
"),n("table.docutils.citation").wrap("
"),n(".wy-menu-vertical ul").not(".simple").siblings("a").each((function(){var t=n(this);expand=n(''),expand.on("click",(function(n){return e.toggleCurrent(t),n.stopPropagation(),!1})),t.prepend(expand)}))},reset:function(){var n=encodeURI(window.location.hash)||"#";try{var e=$(".wy-menu-vertical"),t=e.find('[href="'+n+'"]');if(0===t.length){var i=$('.document [id="'+n.substring(1)+'"]').closest("div.section");0===(t=e.find('[href="#'+i.attr("id")+'"]')).length&&(t=e.find('[href="#"]'))}if(t.length>0){$(".wy-menu-vertical .current").removeClass("current").attr("aria-expanded","false"),t.addClass("current").attr("aria-expanded","true"),t.closest("li.toctree-l1").parent().addClass("current").attr("aria-expanded","true");for(let n=1;n<=10;n++)t.closest("li.toctree-l"+n).addClass("current").attr("aria-expanded","true");t[0].scrollIntoView()}}catch(n){console.log("Error expanding nav for anchor",n)}},onScroll:function(){this.winScroll=!1;var n=this.win.scrollTop(),e=n+this.winHeight,t=this.navBar.scrollTop()+(n-this.winPosition);n<0||e>this.docHeight||(this.navBar.scrollTop(t),this.winPosition=n)},onResize:function(){this.winResize=!1,this.winHeight=this.win.height(),this.docHeight=$(document).height()},hashChange:function(){this.linkScroll=!0,this.win.one("hashchange",(function(){this.linkScroll=!1}))},toggleCurrent:function(n){var e=n.closest("li");e.siblings("li.current").removeClass("current").attr("aria-expanded","false"),e.siblings().find("li.current").removeClass("current").attr("aria-expanded","false");var t=e.find("> ul li");t.length&&(t.removeClass("current").attr("aria-expanded","false"),e.toggleClass("current").attr("aria-expanded",(function(n,e){return"true"==e?"false":"true"})))}},"undefined"!=typeof window&&(window.SphinxRtdTheme={Navigation:n.exports.ThemeNav,StickyNav:n.exports.ThemeNav}),function(){for(var n=0,e=["ms","moz","webkit","o"],t=0;t0 + var meq1 = "^(" + C + ")?" + V + C + "(" + V + ")?$"; // [C]VC[V] is m=1 + var mgr1 = "^(" + C + ")?" 
+ V + C + V + C; // [C]VCVC... is m>1 + var s_v = "^(" + C + ")?" + v; // vowel in stem + + this.stemWord = function (w) { + var stem; + var suffix; + var firstch; + var origword = w; + + if (w.length < 3) + return w; + + var re; + var re2; + var re3; + var re4; + + firstch = w.substr(0,1); + if (firstch == "y") + w = firstch.toUpperCase() + w.substr(1); + + // Step 1a + re = /^(.+?)(ss|i)es$/; + re2 = /^(.+?)([^s])s$/; + + if (re.test(w)) + w = w.replace(re,"$1$2"); + else if (re2.test(w)) + w = w.replace(re2,"$1$2"); + + // Step 1b + re = /^(.+?)eed$/; + re2 = /^(.+?)(ed|ing)$/; + if (re.test(w)) { + var fp = re.exec(w); + re = new RegExp(mgr0); + if (re.test(fp[1])) { + re = /.$/; + w = w.replace(re,""); + } + } + else if (re2.test(w)) { + var fp = re2.exec(w); + stem = fp[1]; + re2 = new RegExp(s_v); + if (re2.test(stem)) { + w = stem; + re2 = /(at|bl|iz)$/; + re3 = new RegExp("([^aeiouylsz])\\1$"); + re4 = new RegExp("^" + C + v + "[^aeiouwxy]$"); + if (re2.test(w)) + w = w + "e"; + else if (re3.test(w)) { + re = /.$/; + w = w.replace(re,""); + } + else if (re4.test(w)) + w = w + "e"; + } + } + + // Step 1c + re = /^(.+?)y$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = new RegExp(s_v); + if (re.test(stem)) + w = stem + "i"; + } + + // Step 2 + re = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + suffix = fp[2]; + re = new RegExp(mgr0); + if (re.test(stem)) + w = stem + step2list[suffix]; + } + + // Step 3 + re = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + suffix = fp[2]; + re = new RegExp(mgr0); + if (re.test(stem)) + w = stem + step3list[suffix]; + } + + // Step 4 + re = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/; + re2 = /^(.+?)(s|t)(ion)$/; + if (re.test(w)) { + var fp = 
re.exec(w); + stem = fp[1]; + re = new RegExp(mgr1); + if (re.test(stem)) + w = stem; + } + else if (re2.test(w)) { + var fp = re2.exec(w); + stem = fp[1] + fp[2]; + re2 = new RegExp(mgr1); + if (re2.test(stem)) + w = stem; + } + + // Step 5 + re = /^(.+?)e$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = new RegExp(mgr1); + re2 = new RegExp(meq1); + re3 = new RegExp("^" + C + v + "[^aeiouwxy]$"); + if (re.test(stem) || (re2.test(stem) && !(re3.test(stem)))) + w = stem; + } + re = /ll$/; + re2 = new RegExp(mgr1); + if (re.test(w) && re2.test(w)) { + re = /.$/; + w = w.replace(re,""); + } + + // and turn initial Y back to y + if (firstch == "y") + w = firstch.toLowerCase() + w.substr(1); + return w; + } +} + diff --git a/_static/minus.png b/_static/minus.png new file mode 100644 index 0000000000..d96755fdaf Binary files /dev/null and b/_static/minus.png differ diff --git a/_static/plus.png b/_static/plus.png new file mode 100644 index 0000000000..7107cec93a Binary files /dev/null and b/_static/plus.png differ diff --git a/_static/pygments.css b/_static/pygments.css new file mode 100644 index 0000000000..84ab3030a9 --- /dev/null +++ b/_static/pygments.css @@ -0,0 +1,75 @@ +pre { line-height: 125%; } +td.linenos .normal { color: inherit; background-color: transparent; padding-left: 5px; padding-right: 5px; } +span.linenos { color: inherit; background-color: transparent; padding-left: 5px; padding-right: 5px; } +td.linenos .special { color: #000000; background-color: #ffffc0; padding-left: 5px; padding-right: 5px; } +span.linenos.special { color: #000000; background-color: #ffffc0; padding-left: 5px; padding-right: 5px; } +.highlight .hll { background-color: #ffffcc } +.highlight { background: #f8f8f8; } +.highlight .c { color: #3D7B7B; font-style: italic } /* Comment */ +.highlight .err { border: 1px solid #FF0000 } /* Error */ +.highlight .k { color: #008000; font-weight: bold } /* Keyword */ +.highlight .o { color: #666666 } /* Operator */ 
+.highlight .ch { color: #3D7B7B; font-style: italic } /* Comment.Hashbang */ +.highlight .cm { color: #3D7B7B; font-style: italic } /* Comment.Multiline */ +.highlight .cp { color: #9C6500 } /* Comment.Preproc */ +.highlight .cpf { color: #3D7B7B; font-style: italic } /* Comment.PreprocFile */ +.highlight .c1 { color: #3D7B7B; font-style: italic } /* Comment.Single */ +.highlight .cs { color: #3D7B7B; font-style: italic } /* Comment.Special */ +.highlight .gd { color: #A00000 } /* Generic.Deleted */ +.highlight .ge { font-style: italic } /* Generic.Emph */ +.highlight .ges { font-weight: bold; font-style: italic } /* Generic.EmphStrong */ +.highlight .gr { color: #E40000 } /* Generic.Error */ +.highlight .gh { color: #000080; font-weight: bold } /* Generic.Heading */ +.highlight .gi { color: #008400 } /* Generic.Inserted */ +.highlight .go { color: #717171 } /* Generic.Output */ +.highlight .gp { color: #000080; font-weight: bold } /* Generic.Prompt */ +.highlight .gs { font-weight: bold } /* Generic.Strong */ +.highlight .gu { color: #800080; font-weight: bold } /* Generic.Subheading */ +.highlight .gt { color: #0044DD } /* Generic.Traceback */ +.highlight .kc { color: #008000; font-weight: bold } /* Keyword.Constant */ +.highlight .kd { color: #008000; font-weight: bold } /* Keyword.Declaration */ +.highlight .kn { color: #008000; font-weight: bold } /* Keyword.Namespace */ +.highlight .kp { color: #008000 } /* Keyword.Pseudo */ +.highlight .kr { color: #008000; font-weight: bold } /* Keyword.Reserved */ +.highlight .kt { color: #B00040 } /* Keyword.Type */ +.highlight .m { color: #666666 } /* Literal.Number */ +.highlight .s { color: #BA2121 } /* Literal.String */ +.highlight .na { color: #687822 } /* Name.Attribute */ +.highlight .nb { color: #008000 } /* Name.Builtin */ +.highlight .nc { color: #0000FF; font-weight: bold } /* Name.Class */ +.highlight .no { color: #880000 } /* Name.Constant */ +.highlight .nd { color: #AA22FF } /* Name.Decorator */ 
+.highlight .ni { color: #717171; font-weight: bold } /* Name.Entity */ +.highlight .ne { color: #CB3F38; font-weight: bold } /* Name.Exception */ +.highlight .nf { color: #0000FF } /* Name.Function */ +.highlight .nl { color: #767600 } /* Name.Label */ +.highlight .nn { color: #0000FF; font-weight: bold } /* Name.Namespace */ +.highlight .nt { color: #008000; font-weight: bold } /* Name.Tag */ +.highlight .nv { color: #19177C } /* Name.Variable */ +.highlight .ow { color: #AA22FF; font-weight: bold } /* Operator.Word */ +.highlight .w { color: #bbbbbb } /* Text.Whitespace */ +.highlight .mb { color: #666666 } /* Literal.Number.Bin */ +.highlight .mf { color: #666666 } /* Literal.Number.Float */ +.highlight .mh { color: #666666 } /* Literal.Number.Hex */ +.highlight .mi { color: #666666 } /* Literal.Number.Integer */ +.highlight .mo { color: #666666 } /* Literal.Number.Oct */ +.highlight .sa { color: #BA2121 } /* Literal.String.Affix */ +.highlight .sb { color: #BA2121 } /* Literal.String.Backtick */ +.highlight .sc { color: #BA2121 } /* Literal.String.Char */ +.highlight .dl { color: #BA2121 } /* Literal.String.Delimiter */ +.highlight .sd { color: #BA2121; font-style: italic } /* Literal.String.Doc */ +.highlight .s2 { color: #BA2121 } /* Literal.String.Double */ +.highlight .se { color: #AA5D1F; font-weight: bold } /* Literal.String.Escape */ +.highlight .sh { color: #BA2121 } /* Literal.String.Heredoc */ +.highlight .si { color: #A45A77; font-weight: bold } /* Literal.String.Interpol */ +.highlight .sx { color: #008000 } /* Literal.String.Other */ +.highlight .sr { color: #A45A77 } /* Literal.String.Regex */ +.highlight .s1 { color: #BA2121 } /* Literal.String.Single */ +.highlight .ss { color: #19177C } /* Literal.String.Symbol */ +.highlight .bp { color: #008000 } /* Name.Builtin.Pseudo */ +.highlight .fm { color: #0000FF } /* Name.Function.Magic */ +.highlight .vc { color: #19177C } /* Name.Variable.Class */ +.highlight .vg { color: #19177C } /* 
Name.Variable.Global */ +.highlight .vi { color: #19177C } /* Name.Variable.Instance */ +.highlight .vm { color: #19177C } /* Name.Variable.Magic */ +.highlight .il { color: #666666 } /* Literal.Number.Integer.Long */ \ No newline at end of file diff --git a/_static/searchtools.js b/_static/searchtools.js new file mode 100644 index 0000000000..97d56a74d8 --- /dev/null +++ b/_static/searchtools.js @@ -0,0 +1,566 @@ +/* + * searchtools.js + * ~~~~~~~~~~~~~~~~ + * + * Sphinx JavaScript utilities for the full-text search. + * + * :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. + * + */ +"use strict"; + +/** + * Simple result scoring code. + */ +if (typeof Scorer === "undefined") { + var Scorer = { + // Implement the following function to further tweak the score for each result + // The function takes a result array [docname, title, anchor, descr, score, filename] + // and returns the new score. + /* + score: result => { + const [docname, title, anchor, descr, score, filename] = result + return score + }, + */ + + // query matches the full name of an object + objNameMatch: 11, + // or matches in the last dotted part of the object name + objPartialMatch: 6, + // Additive scores depending on the priority of the object + objPrio: { + 0: 15, // used to be importantResults + 1: 5, // used to be objectResults + 2: -5, // used to be unimportantResults + }, + // Used when the priority is not in the mapping. 
+ objPrioDefault: 0, + + // query found in title + title: 15, + partialTitle: 7, + // query found in terms + term: 5, + partialTerm: 2, + }; +} + +const _removeChildren = (element) => { + while (element && element.lastChild) element.removeChild(element.lastChild); +}; + +/** + * See https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions#escaping + */ +const _escapeRegExp = (string) => + string.replace(/[.*+\-?^${}()|[\]\\]/g, "\\$&"); // $& means the whole matched string + +const _displayItem = (item, searchTerms) => { + const docBuilder = DOCUMENTATION_OPTIONS.BUILDER; + const docUrlRoot = DOCUMENTATION_OPTIONS.URL_ROOT; + const docFileSuffix = DOCUMENTATION_OPTIONS.FILE_SUFFIX; + const docLinkSuffix = DOCUMENTATION_OPTIONS.LINK_SUFFIX; + const showSearchSummary = DOCUMENTATION_OPTIONS.SHOW_SEARCH_SUMMARY; + + const [docName, title, anchor, descr, score, _filename] = item; + + let listItem = document.createElement("li"); + let requestUrl; + let linkUrl; + if (docBuilder === "dirhtml") { + // dirhtml builder + let dirname = docName + "/"; + if (dirname.match(/\/index\/$/)) + dirname = dirname.substring(0, dirname.length - 6); + else if (dirname === "index/") dirname = ""; + requestUrl = docUrlRoot + dirname; + linkUrl = requestUrl; + } else { + // normal html builders + requestUrl = docUrlRoot + docName + docFileSuffix; + linkUrl = docName + docLinkSuffix; + } + let linkEl = listItem.appendChild(document.createElement("a")); + linkEl.href = linkUrl + anchor; + linkEl.dataset.score = score; + linkEl.innerHTML = title; + if (descr) + listItem.appendChild(document.createElement("span")).innerHTML = + " (" + descr + ")"; + else if (showSearchSummary) + fetch(requestUrl) + .then((responseData) => responseData.text()) + .then((data) => { + if (data) + listItem.appendChild( + Search.makeSearchSummary(data, searchTerms) + ); + }); + Search.output.appendChild(listItem); +}; +const _finishSearch = (resultCount) => { + Search.stopPulse(); + 
Search.title.innerText = _("Search Results"); + if (!resultCount) + Search.status.innerText = Documentation.gettext( + "Your search did not match any documents. Please make sure that all words are spelled correctly and that you've selected enough categories." + ); + else + Search.status.innerText = _( + `Search finished, found ${resultCount} page(s) matching the search query.` + ); +}; +const _displayNextItem = ( + results, + resultCount, + searchTerms +) => { + // results left, load the summary and display it + // this is intended to be dynamic (don't sub resultsCount) + if (results.length) { + _displayItem(results.pop(), searchTerms); + setTimeout( + () => _displayNextItem(results, resultCount, searchTerms), + 5 + ); + } + // search finished, update title and status message + else _finishSearch(resultCount); +}; + +/** + * Default splitQuery function. Can be overridden in ``sphinx.search`` with a + * custom function per language. + * + * The regular expression works by splitting the string on consecutive characters + * that are not Unicode letters, numbers, underscores, or emoji characters. + * This is the same as ``\W+`` in Python, preserving the surrogate pair area. + */ +if (typeof splitQuery === "undefined") { + var splitQuery = (query) => query + .split(/[^\p{Letter}\p{Number}_\p{Emoji_Presentation}]+/gu) + .filter(term => term) // remove remaining empty strings +} + +/** + * Search Module + */ +const Search = { + _index: null, + _queued_query: null, + _pulse_status: -1, + + htmlToText: (htmlString) => { + const htmlElement = new DOMParser().parseFromString(htmlString, 'text/html'); + htmlElement.querySelectorAll(".headerlink").forEach((el) => { el.remove() }); + const docContent = htmlElement.querySelector('[role="main"]'); + if (docContent !== undefined) return docContent.textContent; + console.warn( + "Content block not found. Sphinx search tries to obtain it via '[role=main]'. Could you check your theme or template." 
+ ); + return ""; + }, + + init: () => { + const query = new URLSearchParams(window.location.search).get("q"); + document + .querySelectorAll('input[name="q"]') + .forEach((el) => (el.value = query)); + if (query) Search.performSearch(query); + }, + + loadIndex: (url) => + (document.body.appendChild(document.createElement("script")).src = url), + + setIndex: (index) => { + Search._index = index; + if (Search._queued_query !== null) { + const query = Search._queued_query; + Search._queued_query = null; + Search.query(query); + } + }, + + hasIndex: () => Search._index !== null, + + deferQuery: (query) => (Search._queued_query = query), + + stopPulse: () => (Search._pulse_status = -1), + + startPulse: () => { + if (Search._pulse_status >= 0) return; + + const pulse = () => { + Search._pulse_status = (Search._pulse_status + 1) % 4; + Search.dots.innerText = ".".repeat(Search._pulse_status); + if (Search._pulse_status >= 0) window.setTimeout(pulse, 500); + }; + pulse(); + }, + + /** + * perform a search for something (or wait until index is loaded) + */ + performSearch: (query) => { + // create the required interface elements + const searchText = document.createElement("h2"); + searchText.textContent = _("Searching"); + const searchSummary = document.createElement("p"); + searchSummary.classList.add("search-summary"); + searchSummary.innerText = ""; + const searchList = document.createElement("ul"); + searchList.classList.add("search"); + + const out = document.getElementById("search-results"); + Search.title = out.appendChild(searchText); + Search.dots = Search.title.appendChild(document.createElement("span")); + Search.status = out.appendChild(searchSummary); + Search.output = out.appendChild(searchList); + + const searchProgress = document.getElementById("search-progress"); + // Some themes don't use the search progress node + if (searchProgress) { + searchProgress.innerText = _("Preparing search..."); + } + Search.startPulse(); + + // index already loaded, the 
browser was quick! + if (Search.hasIndex()) Search.query(query); + else Search.deferQuery(query); + }, + + /** + * execute search (requires search index to be loaded) + */ + query: (query) => { + const filenames = Search._index.filenames; + const docNames = Search._index.docnames; + const titles = Search._index.titles; + const allTitles = Search._index.alltitles; + const indexEntries = Search._index.indexentries; + + // stem the search terms and add them to the correct list + const stemmer = new Stemmer(); + const searchTerms = new Set(); + const excludedTerms = new Set(); + const highlightTerms = new Set(); + const objectTerms = new Set(splitQuery(query.toLowerCase().trim())); + splitQuery(query.trim()).forEach((queryTerm) => { + const queryTermLower = queryTerm.toLowerCase(); + + // maybe skip this "word" + // stopwords array is from language_data.js + if ( + stopwords.indexOf(queryTermLower) !== -1 || + queryTerm.match(/^\d+$/) + ) + return; + + // stem the word + let word = stemmer.stemWord(queryTermLower); + // select the correct list + if (word[0] === "-") excludedTerms.add(word.substr(1)); + else { + searchTerms.add(word); + highlightTerms.add(queryTermLower); + } + }); + + if (SPHINX_HIGHLIGHT_ENABLED) { // set in sphinx_highlight.js + localStorage.setItem("sphinx_highlight_terms", [...highlightTerms].join(" ")) + } + + // console.debug("SEARCH: searching for:"); + // console.info("required: ", [...searchTerms]); + // console.info("excluded: ", [...excludedTerms]); + + // array of [docname, title, anchor, descr, score, filename] + let results = []; + _removeChildren(document.getElementById("search-progress")); + + const queryLower = query.toLowerCase(); + for (const [title, foundTitles] of Object.entries(allTitles)) { + if (title.toLowerCase().includes(queryLower) && (queryLower.length >= title.length/2)) { + for (const [file, id] of foundTitles) { + let score = Math.round(100 * queryLower.length / title.length) + results.push([ + docNames[file], + 
titles[file] !== title ? `${titles[file]} > ${title}` : title, + id !== null ? "#" + id : "", + null, + score, + filenames[file], + ]); + } + } + } + + // search for explicit entries in index directives + for (const [entry, foundEntries] of Object.entries(indexEntries)) { + if (entry.includes(queryLower) && (queryLower.length >= entry.length/2)) { + for (const [file, id] of foundEntries) { + let score = Math.round(100 * queryLower.length / entry.length) + results.push([ + docNames[file], + titles[file], + id ? "#" + id : "", + null, + score, + filenames[file], + ]); + } + } + } + + // lookup as object + objectTerms.forEach((term) => + results.push(...Search.performObjectSearch(term, objectTerms)) + ); + + // lookup as search terms in fulltext + results.push(...Search.performTermsSearch(searchTerms, excludedTerms)); + + // let the scorer override scores with a custom scoring function + if (Scorer.score) results.forEach((item) => (item[4] = Scorer.score(item))); + + // now sort the results by score (in opposite order of appearance, since the + // display function below uses pop() to retrieve items) and then + // alphabetically + results.sort((a, b) => { + const leftScore = a[4]; + const rightScore = b[4]; + if (leftScore === rightScore) { + // same score: sort alphabetically + const leftTitle = a[1].toLowerCase(); + const rightTitle = b[1].toLowerCase(); + if (leftTitle === rightTitle) return 0; + return leftTitle > rightTitle ? -1 : 1; // inverted is intentional + } + return leftScore > rightScore ? 
1 : -1; + }); + + // remove duplicate search results + // note the reversing of results, so that in the case of duplicates, the highest-scoring entry is kept + let seen = new Set(); + results = results.reverse().reduce((acc, result) => { + let resultStr = result.slice(0, 4).concat([result[5]]).map(v => String(v)).join(','); + if (!seen.has(resultStr)) { + acc.push(result); + seen.add(resultStr); + } + return acc; + }, []); + + results = results.reverse(); + + // for debugging + //Search.lastresults = results.slice(); // a copy + // console.info("search results:", Search.lastresults); + + // print the results + _displayNextItem(results, results.length, searchTerms); + }, + + /** + * search for object names + */ + performObjectSearch: (object, objectTerms) => { + const filenames = Search._index.filenames; + const docNames = Search._index.docnames; + const objects = Search._index.objects; + const objNames = Search._index.objnames; + const titles = Search._index.titles; + + const results = []; + + const objectSearchCallback = (prefix, match) => { + const name = match[4] + const fullname = (prefix ? prefix + "." : "") + name; + const fullnameLower = fullname.toLowerCase(); + if (fullnameLower.indexOf(object) < 0) return; + + let score = 0; + const parts = fullnameLower.split("."); + + // check for different match types: exact matches of full name or + // "last name" (i.e. 
last dotted part) + if (fullnameLower === object || parts.slice(-1)[0] === object) + score += Scorer.objNameMatch; + else if (parts.slice(-1)[0].indexOf(object) > -1) + score += Scorer.objPartialMatch; // matches in last name + + const objName = objNames[match[1]][2]; + const title = titles[match[0]]; + + // If more than one term searched for, we require other words to be + // found in the name/title/description + const otherTerms = new Set(objectTerms); + otherTerms.delete(object); + if (otherTerms.size > 0) { + const haystack = `${prefix} ${name} ${objName} ${title}`.toLowerCase(); + if ( + [...otherTerms].some((otherTerm) => haystack.indexOf(otherTerm) < 0) + ) + return; + } + + let anchor = match[3]; + if (anchor === "") anchor = fullname; + else if (anchor === "-") anchor = objNames[match[1]][1] + "-" + fullname; + + const descr = objName + _(", in ") + title; + + // add custom score for some objects according to scorer + if (Scorer.objPrio.hasOwnProperty(match[2])) + score += Scorer.objPrio[match[2]]; + else score += Scorer.objPrioDefault; + + results.push([ + docNames[match[0]], + fullname, + "#" + anchor, + descr, + score, + filenames[match[0]], + ]); + }; + Object.keys(objects).forEach((prefix) => + objects[prefix].forEach((array) => + objectSearchCallback(prefix, array) + ) + ); + return results; + }, + + /** + * search for full-text terms in the index + */ + performTermsSearch: (searchTerms, excludedTerms) => { + // prepare search + const terms = Search._index.terms; + const titleTerms = Search._index.titleterms; + const filenames = Search._index.filenames; + const docNames = Search._index.docnames; + const titles = Search._index.titles; + + const scoreMap = new Map(); + const fileMap = new Map(); + + // perform the search on the required terms + searchTerms.forEach((word) => { + const files = []; + const arr = [ + { files: terms[word], score: Scorer.term }, + { files: titleTerms[word], score: Scorer.title }, + ]; + // add support for partial matches + 
if (word.length > 2) { + const escapedWord = _escapeRegExp(word); + Object.keys(terms).forEach((term) => { + if (term.match(escapedWord) && !terms[word]) + arr.push({ files: terms[term], score: Scorer.partialTerm }); + }); + Object.keys(titleTerms).forEach((term) => { + if (term.match(escapedWord) && !titleTerms[word]) + arr.push({ files: titleTerms[word], score: Scorer.partialTitle }); + }); + } + + // no match but word was a required one + if (arr.every((record) => record.files === undefined)) return; + + // found search word in contents + arr.forEach((record) => { + if (record.files === undefined) return; + + let recordFiles = record.files; + if (recordFiles.length === undefined) recordFiles = [recordFiles]; + files.push(...recordFiles); + + // set score for the word in each file + recordFiles.forEach((file) => { + if (!scoreMap.has(file)) scoreMap.set(file, {}); + scoreMap.get(file)[word] = record.score; + }); + }); + + // create the mapping + files.forEach((file) => { + if (fileMap.has(file) && fileMap.get(file).indexOf(word) === -1) + fileMap.get(file).push(word); + else fileMap.set(file, [word]); + }); + }); + + // now check if the files don't contain excluded terms + const results = []; + for (const [file, wordList] of fileMap) { + // check if all requirements are matched + + // as search terms with length < 3 are discarded + const filteredTermCount = [...searchTerms].filter( + (term) => term.length > 2 + ).length; + if ( + wordList.length !== searchTerms.size && + wordList.length !== filteredTermCount + ) + continue; + + // ensure that none of the excluded terms is in the search result + if ( + [...excludedTerms].some( + (term) => + terms[term] === file || + titleTerms[term] === file || + (terms[term] || []).includes(file) || + (titleTerms[term] || []).includes(file) + ) + ) + break; + + // select one (max) score for the file. 
+ const score = Math.max(...wordList.map((w) => scoreMap.get(file)[w])); + // add result to the result list + results.push([ + docNames[file], + titles[file], + "", + null, + score, + filenames[file], + ]); + } + return results; + }, + + /** + * helper function to return a node containing the + * search summary for a given text. keywords is a list + * of stemmed words. + */ + makeSearchSummary: (htmlText, keywords) => { + const text = Search.htmlToText(htmlText); + if (text === "") return null; + + const textLower = text.toLowerCase(); + const actualStartPosition = [...keywords] + .map((k) => textLower.indexOf(k.toLowerCase())) + .filter((i) => i > -1) + .slice(-1)[0]; + const startWithContext = Math.max(actualStartPosition - 120, 0); + + const top = startWithContext === 0 ? "" : "..."; + const tail = startWithContext + 240 < text.length ? "..." : ""; + + let summary = document.createElement("p"); + summary.classList.add("context"); + summary.textContent = top + text.substr(startWithContext, 240).trim() + tail; + + return summary; + }, +}; + +_ready(Search.init); diff --git a/_static/sphinx_highlight.js b/_static/sphinx_highlight.js new file mode 100644 index 0000000000..aae669d7ea --- /dev/null +++ b/_static/sphinx_highlight.js @@ -0,0 +1,144 @@ +/* Highlighting utilities for Sphinx HTML documentation. */ +"use strict"; + +const SPHINX_HIGHLIGHT_ENABLED = true + +/** + * highlight a given string on a node by wrapping it in + * span elements with the given class name. 
+ */ +const _highlight = (node, addItems, text, className) => { + if (node.nodeType === Node.TEXT_NODE) { + const val = node.nodeValue; + const parent = node.parentNode; + const pos = val.toLowerCase().indexOf(text); + if ( + pos >= 0 && + !parent.classList.contains(className) && + !parent.classList.contains("nohighlight") + ) { + let span; + + const closestNode = parent.closest("body, svg, foreignObject"); + const isInSVG = closestNode && closestNode.matches("svg"); + if (isInSVG) { + span = document.createElementNS("http://www.w3.org/2000/svg", "tspan"); + } else { + span = document.createElement("span"); + span.classList.add(className); + } + + span.appendChild(document.createTextNode(val.substr(pos, text.length))); + parent.insertBefore( + span, + parent.insertBefore( + document.createTextNode(val.substr(pos + text.length)), + node.nextSibling + ) + ); + node.nodeValue = val.substr(0, pos); + + if (isInSVG) { + const rect = document.createElementNS( + "http://www.w3.org/2000/svg", + "rect" + ); + const bbox = parent.getBBox(); + rect.x.baseVal.value = bbox.x; + rect.y.baseVal.value = bbox.y; + rect.width.baseVal.value = bbox.width; + rect.height.baseVal.value = bbox.height; + rect.setAttribute("class", className); + addItems.push({ parent: parent, target: rect }); + } + } + } else if (node.matches && !node.matches("button, select, textarea")) { + node.childNodes.forEach((el) => _highlight(el, addItems, text, className)); + } +}; +const _highlightText = (thisNode, text, className) => { + let addItems = []; + _highlight(thisNode, addItems, text, className); + addItems.forEach((obj) => + obj.parent.insertAdjacentElement("beforebegin", obj.target) + ); +}; + +/** + * Small JavaScript module for the documentation. 
+ */ +const SphinxHighlight = { + + /** + * highlight the search words provided in localstorage in the text + */ + highlightSearchWords: () => { + if (!SPHINX_HIGHLIGHT_ENABLED) return; // bail if no highlight + + // get and clear terms from localstorage + const url = new URL(window.location); + const highlight = + localStorage.getItem("sphinx_highlight_terms") + || url.searchParams.get("highlight") + || ""; + localStorage.removeItem("sphinx_highlight_terms") + url.searchParams.delete("highlight"); + window.history.replaceState({}, "", url); + + // get individual terms from highlight string + const terms = highlight.toLowerCase().split(/\s+/).filter(x => x); + if (terms.length === 0) return; // nothing to do + + // There should never be more than one element matching "div.body" + const divBody = document.querySelectorAll("div.body"); + const body = divBody.length ? divBody[0] : document.querySelector("body"); + window.setTimeout(() => { + terms.forEach((term) => _highlightText(body, term, "highlighted")); + }, 10); + + const searchBox = document.getElementById("searchbox"); + if (searchBox === null) return; + searchBox.appendChild( + document + .createRange() + .createContextualFragment( + '" + ) + ); + }, + + /** + * helper function to hide the search marks again + */ + hideSearchWords: () => { + document + .querySelectorAll("#searchbox .highlight-link") + .forEach((el) => el.remove()); + document + .querySelectorAll("span.highlighted") + .forEach((el) => el.classList.remove("highlighted")); + localStorage.removeItem("sphinx_highlight_terms") + }, + + initEscapeListener: () => { + // only install a listener if it is really needed + if (!DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS) return; + + document.addEventListener("keydown", (event) => { + // bail for input elements + if (BLACKLISTED_KEY_CONTROL_ELEMENTS.has(document.activeElement.tagName)) return; + // bail with special keys + if (event.shiftKey || event.altKey || event.ctrlKey || event.metaKey) return; + 
if (DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS && (event.key === "Escape")) { + SphinxHighlight.hideSearchWords(); + event.preventDefault(); + } + }); + }, +}; + +_ready(SphinxHighlight.highlightSearchWords); +_ready(SphinxHighlight.initEscapeListener); diff --git a/contributing.html b/contributing.html new file mode 100644 index 0000000000..7ef6222504 --- /dev/null +++ b/contributing.html @@ -0,0 +1,276 @@ + + + + + + + Contributing — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Contributing

+

Thanks for taking the time to contribute!

+

The following is a set of guidelines for contributing to TOPSAIL. +These are mostly guidelines, feel free to propose changes to this +document in a pull request.

+

+

The primary goal of the repository is to serve as a central repository +of the PSAP team’s performance and scale test automation.

+

The secondary goal of the repository is to offer a toolbox for setting +up and configuring clusters, in preparation of performance and scale test execution.

+
+

Pull Request Guidelines

+
    +
  • Pull Requests (PRs) need to be /approve and reviewed /lgtm by +PSAP team members before being merged.

  • +
  • PRs should have a proper description explaining the problem being +solved, or the new feature being introduced.

  • +
+
+
+

Review Guidelines

+
    +
  • Reviews can be performed by anyone interested in the good health of +the repository; but approval and/or /lgtm is reserved to PSAP +team members at the moment.

  • +
  • The main merging criteria is to have a successful test run that +executes the modified code. Because of the nature of the repository, +we can’t test all the code paths for all PRs.

    +
      +
    • In order to save unnecessary AWS cloud time, the testing is not +automatically executed by Prow; it must be manually triggered.

    • +
    +
  • +
+
+
+

Style Guidelines

+
+

YAML style

+
    +
  • Align nested lists with their parent’s label

  • +
+
- block:
+  - name: ...
+    block:
+    - name: ...
+
+
+
    +
  • YAML files use the .yml extension

  • +
+
+
+

Ansible style

+

We strive to follow Ansible best practices in the different playbooks.

+

This command is executed as a GitHub Actions hook on all the new PRs, +to help keep a consistent code style:

+
ansible-lint -v --force-color -c config/ansible-lint.yml playbooks roles
+
+
+
    +
  • Try to avoid using shell tasks as much as possible

    +
      +
  • Make sure that set -o pipefail; is part of the shell command +whenever a | is involved (ansible-lint misses some of +them)

    • +
    • Redirection into a {{ artifact_extra_logs_dir }} file is a +common exception

    • +
    +
  • +
  • Use inline stanza for debug and fail tasks, eg:

  • +
+
- name: The GFD did not label the nodes
+  fail: msg="The GFD did not label the nodes"
+
+
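As an illustration of the pipefail guideline above, a shell task involving a pipe could look like this (a hypothetical task sketched for this guide, not one from the repository; the label and command are made up):

```yaml
# Illustrative only: the pipe requires `set -o pipefail;` so that a
# failure in `oc get nodes` is not masked by the trailing `wc -l`
- name: Count the labeled nodes (debug)
  shell:
    set -o pipefail;
    oc get nodes -oname | wc -l
  args:
    executable: /bin/bash
  failed_when: false
```

The `executable: /bin/bash` argument is needed because `pipefail` is a bash option, not available in plain `sh`.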
+
+
+
+

Coding guidelines

+
    +
  • Keep the main log file clean when everything goes right, and store +all the relevant information in the {{ artifact_extra_logs_dir +}} directory, eg:

  • +
+
- name: Inspect the Subscriptions status (debug)
+  shell:
+    oc describe subscriptions.operators.coreos.com/gpu-operator-certified
+       -n openshift-operators
+       > {{ artifact_extra_logs_dir }}/gpu_operator_Subscription.log
+  failed_when: false
+
+
+
    +
  • Include troubleshooting inspection commands whenever +possible/relevant (see above for an example)

    +
      +
    • mark them as failed_when: false to ensure that their execution +doesn’t affect the testing

    • +
    • add (debug) in the task name to make it clear that the command +is not part of the proper testing.

    • +
    +
  • +
  • Use ignore_errors: true only for tracking known +failures.

    +
      +
    • use failed_when: false to ignore the task return code

    • +
    • but whenever possible, write tasks that do not fail, eg:

    • +
    +
  • +
+
oc delete --ignore-not-found=true $MY_RESOURCE
+
+
+
    +
  • Try to group related modifications in a dedicated commit, and stack +commits in logical order (eg, 1/ add role, 2/ add toolbox script, 3/ +integrate the toolbox script in the nightly CI)

    +
      +
    • Commits are not squashed, so please avoid commits “fixing” another +commit of the PR.

    • +
    • Hints: git revise

      +
        +
      • use git revise <commit> to modify an older commit (not +older than master ;-)

      • +
      • use git revise --cut <commit> to split a commit in two +logical commits

      • +
      • or simply use git commit --amend to modify the most recent commit

      • +
      +
    • +
    +
  • +
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/extending/orchestration.html b/extending/orchestration.html new file mode 100644 index 0000000000..bcf6be337d --- /dev/null +++ b/extending/orchestration.html @@ -0,0 +1,650 @@ + + + + + + + Creating a New Orchestration — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Creating a New Orchestration

+

You’re working on a new perf&scale test project, and you want to have +it automated and running in the CI? Good! Do you already have your test +architecture in mind? And is your toolbox ready? Perfect, so we can +start building the orchestration!

+
+

Prepare the environment

+

To create an orchestration, go to projects/PROJECT_NAME/testing +and prepare the following boilerplate code.

+

Mind that the PROJECT_NAME should be a valid Python +package name (no -) to keep things simple.

+
+

Prepare the test.py, config.yaml and command_args.yaml.j2

+

These files are all that is mandatory to have a configurable +orchestration layer.

+
    +
  • test.py should contain these entrypoints, for interacting with the CI:

  • +
+
@entrypoint()
+def prepare_ci():
+    """
+    Prepares the cluster and the namespace for running the tests
+    """
+
+    pass
+
+
+@entrypoint()
+def test_ci():
+    """
+    Runs the test from the CI
+    """
+
+    pass
+
+
+@entrypoint()
+def cleanup_cluster(mute=False):
+    """
+    Restores the cluster to its original state
+    """
+    # _Not_ executed in OpenShift CI cluster (running on AWS). Only required for running in bare-metal environments.
+
+    common.cleanup_cluster()
+
+    pass
+
+
+@entrypoint(ignore_secret_path=True, apply_preset_from_pr_args=False)
+def generate_plots_from_pr_args():
+    """
+    Generates the visualization reports from the PR arguments
+    """
+
+    visualize.download_and_generate_visualizations()
+
+    export.export_artifacts(env.ARTIFACT_DIR, test_step="plot")
+
+
+class Entrypoint:
+    """
+    Commands for launching the CI tests
+    """
+
+    def __init__(self):
+
+        self.prepare_ci = prepare_ci
+        self.test_ci = test_ci
+        self.cleanup_cluster_ci = cleanup_cluster
+        self.export_artifacts = export_artifacts
+
+        self.generate_plots_from_pr_args = generate_plots_from_pr_args
+
+def main():
+    # Print help rather than opening a pager
+    fire.core.Display = lambda lines, out: print(*lines, file=out)
+
+    fire.Fire(Entrypoint())
+
+
+if __name__ == "__main__":
+    try:
+        sys.exit(main())
+    except subprocess.CalledProcessError as e:
+        logging.error(f"Command '{e.cmd}' failed --> {e.returncode}")
+        sys.exit(1)
+    except KeyboardInterrupt:
+        print() # empty line after ^C
+        logging.error("Interrupted.")
+        sys.exit(1)
+
+
+
    +
  • config.yaml should contain

  • +
+
ci_presets:
+  # name of the presets to apply, or null if no preset
+  name: null
+  # list of names of the presets to apply, or a single name, or null if no preset
+  names: null
+
+
+  single:
+    clusters.create.type: single
+
+  keep:
+    clusters.create.keep: true
+    clusters.create.ocp.tags.Project: PSAP/Project/...
+    # clusters.create.ocp.tags.TicketId:
+
+  light_cluster:
+    clusters.create.ocp.deploy_cluster.target: cluster_light
+
+  light:
+    extends: [light_cluster]
+    ...
+
+  ...
+
+secrets:
+  dir:
+    name: psap-ods-secret
+    env_key: PSAP_ODS_SECRET_PATH
+  # name of the file containing the properties of LDAP secrets
+  s3_ldap_password_file: s3_ldap.passwords
+  keep_cluster_password_file: get_cluster.password
+  brew_registry_redhat_io_token_file: brew.registry.redhat.io.token
+  opensearch_instances: opensearch.yaml
+  aws_credentials: .awscred
+  git_credentials: git-credentials
+
+clusters:
+  metal_profiles:
+    ...: ...
+  create:
+    type: single # can be: single, ocp, managed
+    keep: false
+    name_prefix: fine-tuning-ci
+    ocp:
+      # list of tags to apply to the machineset when creating the cluster
+      tags:
+        # TicketId: "..."
+        Project: PSAP/Project/...
+      deploy_cluster:
+        target: cluster
+      base_domain: psap.aws.rhperfscale.org
+      version: 4.15.9
+      region: us-west-2
+      control_plane:
+        type: m6a.xlarge
+      workers:
+        type: m6a.2xlarge
+        count: 2
+
+  sutest:
+    is_metal: false
+    lab:
+      name: null
+    compute:
+      dedicated: true
+      machineset:
+        name: workload-pods
+        type: m6i.2xlarge
+        count: null
+        taint:
+          key: only-workload-pods
+          value: "yes"
+          effect: NoSchedule
+  driver:
+    is_metal: false
+    compute:
+      dedicated: true
+      machineset:
+        name: test-pods
+        count: null
+        type: m6i.2xlarge
+        taint:
+          key: only-test-pods
+          value: "yes"
+          effect: NoSchedule
+  cleanup_on_exit: false
+
+matbench:
+  preset: null
+  workload: projects....visualizations...
+  prom_workload: projects....visualizations....
+  config_file: plots.yaml
+  download:
+    mode: prefer_cache
+    url:
+    url_file:
+    # if true, copy the results downloaded by `matbench download` into the artifacts directory
+    save_to_artifacts: false
+  # directory to plot. Set by testing/common/visualize.py before launching the visualization
+  test_directory: null
+  lts:
+    generate: true
+    horreum:
+      test_name: null
+    opensearch:
+      export:
+        enabled: false
+        enabled_on_replot: false
+        fail_test_on_fail: true
+      instance: smoke
+      index: ...
+      index_prefix: ""
+      prom_index_suffix: -prom
+    regression_analyses:
+      enabled: false
+      # if the regression analyses fail, mark the test as failed
+      fail_test_on_regression: false
+export_artifacts:
+  enabled: false
+  bucket: rhoai-cpt-artifacts
+  path_prefix: cpt/fine-tuning
+  dest: null # will be set by the export code
+
+
+
    +
  • command_args.yml.j2 should start with:

  • +
+
{% set secrets_location = false | or_env(secrets.dir.env_key) %}
+{% if not secrets_location %}
+  {{ ("ERROR: secrets_location must be defined (secrets.dir.name="+ secrets.dir.name|string +" or env(secrets.dir.env_key=" + secrets.dir.env_key|string + ")) ") | raise_exception }}
+{% endif %}
+{% set s3_ldap_password_location = secrets_location + "/" + secrets.s3_ldap_password_file %}
+
+# ---
+
+
+
+
+

Copy the clusters.sh and configure.sh

+

These files are necessary to create clusters on +OpenShift CI (/test rhoai-e2e). They shouldn’t be modified.

+

Now the boilerplate code is in place, and we can start building +the test orchestration.

+
+
+

Create test_....py and prepare_....py

+

At this step, the development of the test orchestration +starts, and you “just” have to fill in the gaps :)

+

In the prepare_ci method, prepare your cluster, according to the +configuration. In the test_ci method, run your test and collect +its artifacts. In the cleanup_cluster_ci method, clean up your cluster, so +that it can be used again for another test.

+
+
+
+

Start building your test orchestration

+

Once the boilerplate code is in place, we can start building the test +orchestration. TOPSAIL provides some “low level” helper modules:

+
from projects.core.library import env, config, run, configure_logging, export
+
+
+

as well as libraries of common orchestration bits:

+
from projects.rhods.library import prepare_rhoai as prepare_rhoai_mod
+from projects.gpu_operator.library import prepare_gpu_operator
+from projects.matrix_benchmarking.library import visualize
+
+
+

These libraries are illustrated below. They are not formally described +at the moment. They come from project code blocks that were noticed to +be used identically across projects, so they have been moved to +library directories to make them easier to reuse.

+

Sharing code across projects increases the risk of unnoticed +bugs when updating the library. With this in mind, the question of +code sharing vs code duplication takes another direction, as extensive +testing is not easy in such a rapidly evolving project.

+
+

Core helper modules

+
+

The run module

+
    +
  • helper functions to run system commands, toolbox commands, and +from_config toolbox commands:

  • +
+
def run(command, capture_stdout=False, capture_stderr=False, check=True, protect_shell=True, cwd=None, stdin_file=None, log_command=True)
+
+
+

This method allows running a command, capturing or not its +stdout/stderr, checking its return code, changing its working +directory, protecting it with bash safety flags (set -o +errexit;set -o pipefail;set -o nounset;set -o errtrace), passing a +file as stdin, logging or not the command, …

+
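To make the effect of the bash safety flags concrete, here is a minimal, self-contained sketch of what such a protected run looks like (illustrative code only, not the actual projects.core.library.run implementation, which supports more options):

```python
import subprocess

# Illustrative sketch: the real run() also handles capture_stderr, cwd,
# stdin_file, log_command, ...
def run(command, capture_stdout=False, check=True, protect_shell=True):
    if protect_shell:
        # the bash safety flags described above
        command = ("set -o errexit; set -o pipefail; "
                   "set -o nounset; set -o errtrace; ") + command
    return subprocess.run(
        command, shell=True, executable="/bin/bash", check=check,
        stdout=subprocess.PIPE if capture_stdout else None,
        universal_newlines=True)

# With pipefail, a failure anywhere in the pipe fails the whole command
result = run("echo hello | tr a-z A-Z", capture_stdout=True)
print(result.stdout.strip())  # HELLO
```

With `check=True` (the default), a non-zero return code raises `subprocess.CalledProcessError`, mirroring the exception-on-error behavior described here.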
def run_toolbox(group, command, artifact_dir_suffix=None, run_kwargs=None, mute_stdout=None, check=None, **kwargs)
+
+
+

This command allows running a toolbox command. group, command, +kwargs are the CLI toolbox command arguments. run_kwargs allows +passing arguments directly to the run command described +above. mute_stdout allows muting (capturing) the stdout +text. check allows disabling the exception on error +check. artifact_dir_suffix allows appending a suffix to the +toolbox directory name (eg, to distinguish two identical calls in the +artifacts).

+
def run_toolbox_from_config(group, command, prefix=None, suffix=None, show_args=None, extra=None, artifact_dir_suffix=None, mute_stdout=False, check=True, run_kwargs=None)
+
+
+

This command allows running a toolbox command with the from_config +helper (see the description of the command_args.yaml.j2 +file). prefix and suffix allow distinguishing commands in the +command_args.yaml.j2 file. extra allows passing extra +arguments that override what is in the template file. show_args +only displays the arguments that would be passed to run_toolbox.py.

+
    +
  • run_and_catch is a helper function for chaining multiple +functions without swallowing exceptions:

  • +
+
exc = None
+exc = run.run_and_catch(
+  exc,
+  run.run_toolbox, "kserve", "capture_operators_state", run_kwargs=dict(capture_stdout=True),
+)
+
+exc = run.run_and_catch(
+  exc,
+  run.run_toolbox, "cluster", "capture_environment", run_kwargs=dict(capture_stdout=True),
+)
+
+if exc: raise exc
+
+
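The semantics can be sketched with a toy stand-in (illustrative only; the real run.run_and_catch may differ in details such as which exception is kept): run the function, remember the first exception, and let the chain continue.

```python
# Toy sketch of the run_and_catch pattern shown above (not TOPSAIL code)
def run_and_catch(exc, fn, *args, **kwargs):
    try:
        fn(*args, **kwargs)
    except Exception as e:
        if exc is None:
            exc = e          # keep the first exception, don't swallow it
    return exc

calls = []

def ok(tag):
    calls.append(tag)

def boom():
    raise RuntimeError("boom")

exc = None
exc = run_and_catch(exc, ok, "first")
exc = run_and_catch(exc, boom)
exc = run_and_catch(exc, ok, "second")   # still executed after the failure
assert calls == ["first", "second"]
assert isinstance(exc, RuntimeError)
```

The final `if exc: raise exc` in the example above then re-raises the remembered exception once every step has had a chance to run.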
+
    +
  • helper context to run functions in parallel. If +exit_on_exception is set, the code will exit the process when an +exception is caught. Otherwise it will simply raise it. If +dedicated_dir is set, a dedicated directory, based on the +name parameter, will be created.

  • +
+
class Parallel(object):
+    def __init__(self, name, exit_on_exception=True, dedicated_dir=True):
+
+
+

Example:

+
def prepare():
+  with run.Parallel("prepare1") as parallel:
+      parallel.delayed(prepare_rhoai)
+      parallel.delayed(scale_up_sutest)
+
+
+  test_settings = config.project.get_config("tests.fine_tuning.test_settings")
+  with run.Parallel("prepare2") as parallel:
+      parallel.delayed(prepare_gpu)
+      parallel.delayed(prepare_namespace, test_settings)
+
+  with run.Parallel("prepare3") as parallel:
+      parallel.delayed(preload_image_yyy)
+      parallel.delayed(preload_image_xxx)
+      parallel.delayed(preload_image_zzz)
+
+
+
+
+

The env module

+
    +
  • ARTIFACT_DIR provides thread-safe access to the storage directory. Prefer +using it over $ARTIFACT_DIR, which isn’t thread-safe.

  • +
  • helper context to create a dedicated artifact directory. Based on +OpenShift CI, TOPSAIL relies on the ARTIFACT_DIR environment +variable to store its artifacts. Each toolbox command creates a new +directory named nnn__group__command, which keeps the directories +ordered and easy to follow. However, when many commands are executed, +sometimes in parallel, the number of directories increases and becomes +hard to navigate. This command allows creating subdirectories, to +group things logically:

  • +
+

Example:

+
with env.NextArtifactDir("prepare_namespace"):
+    set_namespace_annotations()
+    download_data_sources(test_settings)
+
+
+
+
+

The config module

+
    +
  • the config.project.get_config(<config key>) helper command to +access the configuration. Uses the inline JSON format. This object +holds the main project configuration.

  • +
  • the config.project.set_config(<config key>, <value>) helper +command to update the configuration. Sometimes, it is convenient to +store values in the configuration (eg, coming from the +command-line). Mind that this is not thread-safe (an error is raised +if this command is called in a run.Parallel context). Mind that +this command does not allow creating new configuration fields in the +document. Only existing fields can be updated.

  • +
+
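As an illustration of the dotted “inline” key format, here is a toy sketch of how such a lookup could resolve against the nested configuration document (a hypothetical helper written for this guide, not the actual config module code):

```python
from functools import reduce

# Hypothetical sketch: "clusters.create.type" walks the nested
# config dicts one level per dot
def get_config(config, key):
    return reduce(lambda level, part: level[part], key.split("."), config)

config = {"clusters": {"create": {"type": "single", "keep": False}}}
print(get_config(config, "clusters.create.type"))  # single
```

The same dotted keys appear throughout config.yaml presets (eg, `clusters.create.keep: true`), which is what makes preset overlays straightforward to apply.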
+
+
+

The projects.rhods.library.prepare_rhoai library module

+

This library helps with the deployment of RHOAI pre-builds on OpenShift.

+
    +
  • install_servicemesh() installs the ServiceMesh Operator, if not +already installed in the cluster (this is a dependency of RHOAI)

  • +
  • uninstall_servicemesh(mute=True) uninstalls the ServiceMesh +Operator, if it is installed

  • +
  • is_rhoai_installed() tells if RHOAI is currently installed or +not.

  • +
  • install(token_file=None, force=False) installs RHOAI, if it is +not already installed (unless force is passed). Mind that the +current deployment code only works with the pre-builds of RHOAI, +which require a Brew token_file. If the token isn’t passed, it +is assumed that the cluster already has access to Brew.

  • +
+
+
+

The projects.gpu_operator.library.prepare_gpu_operator library module

+

This library helps with the deployment of the GPU stack on OpenShift.

+
    +
  • prepare_gpu_operator() deploys the NFD Operator and the GPU +Operator, if they are not already installed.

  • +
  • wait_ready(...) waits for the GPU Operator stack to be deployed, +and optionally enable additional GPU Operator features:

    +
    +
      +
    • enable_time_sharing enables the time-sharing capability of +the GPU Operator, (configured via the command_args.yaml.j2 +file).

    • +
    • extend_metrics=True, wait_metrics=True enables extra metrics +to be captured by the GPU Operator DCGM component (the +“well-known” metrics set). If wait_metrics is enabled, the +automation will wait for the DCGM to start reporting these +metrics.

    • +
    • wait_stack_deployed allows disabling the final wait, and +only enable the components above.

    • +
    +
    +
  • +
  • cleanup_gpu_operator() undeploys the GPU Operator and the NFD +Operator, if they are deployed.

  • +
  • add_toleration(effect, key) adds a toleration to the GPU +Operator DaemonSet Pods. This allows the GPU Operator Pods to be +deployed on nodes with specific taints. Mind that this command +overrides any toleration previously set.

  • +
+
+
+

The projects.local_ci.library.prepare_user_pods library module

+

This library helps with the execution of multi-user TOPSAIL tests.

+

Multi-user tests consist of Pods running inside the cluster, +all executing a TOPSAIL command. Their initialization is synchronized +with a barrier, then they wait a configurable delay before starting +their script. When they terminate, their file artifacts are collected via an +S3 server, and stored locally for post-processing.

+
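The synchronization pattern described above (a shared barrier, then a per-user delay) can be sketched with plain Python threads (a toy illustration written for this guide, not TOPSAIL code):

```python
import threading
import time

results = []
barrier = threading.Barrier(3)   # one party per simulated user Pod

def user_pod(user_idx, delay_s):
    barrier.wait()               # all users start together
    time.sleep(delay_s)          # configurable per-user delay
    results.append(user_idx)     # stands in for "run the user script"

threads = [threading.Thread(target=user_pod, args=(i, i * 0.01))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert sorted(results) == [0, 1, 2]
```

In the real tests the "users" are Pods synchronized across the cluster rather than threads, but the start-together-then-stagger behavior is the same.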
    +
  • prepare_base_image_container(namespace) builds a TOPSAIL image +in a given namespace. The image must be consistent with the commit +of TOPSAIL being tested, so the BuildConfig relies on the PR +number to fetch the right commit. The apply_prefer_pr function +provides the helper code to update the configuration with the number +of the PR being tested.

  • +
  • apply_prefer_pr(pr_number=None) inspects the environment to +detect the PR number. When running locally, export +HOMELAB_CI=true and PULL_NUMBER=... for this function to +automatically detect the PR number. Mind that this function updates +the configuration file, so it cannot run inside a parallel context.

  • +
  • delete_istags(namespace) cleans up the istags used by TOPSAIL +User Pods.

  • +
  • rebuild_driver_image(namespace, pr_number) helps with refreshing the +image when running locally.

  • +
+
@entrypoint()
+def rebuild_driver_image(pr_number):
+    namespace = config.project.get_config("base_image.namespace")
+    prepare_user_pods.rebuild_driver_image(namespace, pr_number)
+
+
+
    +
  • cluster_scale_up(user_count) scales up the cluster with the +right number of nodes (when not running in a bare-metal cluster).

  • +
  • prepare_user_pods(user_count) prepares the cluster for running a +multi-user scale test. Deploys the dependency tools (minio, redis), +builds the image, prepares the ServiceAccount that TOPSAIL will use, +and prepares the secrets that TOPSAIL will have access to …

  • +
  • cleanup_cluster() cleans up the cluster by deleting the User +Pod namespace.

  • +
+
+
+

The projects.matrix_benchmarking.library.visualize library module

+

This module helps with the post-processing of TOPSAIL results.

+
    +
  • prepare_matbench() is called from the ContainerFile. It +installs the pip dependencies of MatrixBenchmarking.

  • +
  • download_and_generate_visualizations(results_dirname) is called +from the CIs, when replotting. It downloads the test results and runs the +post-processing steps against them.

  • +
  • generate_from_dir(results_dirname, generate_lts=None) is the +main entrypoint of this library. It accepts a directory as argument, +and runs the post-processing steps against it. The expected +configuration should be further documented …

  • +
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/extending/toolbox.html b/extending/toolbox.html new file mode 100644 index 0000000000..243066f3ff --- /dev/null +++ b/extending/toolbox.html @@ -0,0 +1,273 @@ + + + + + + + How roles are organized — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

How roles are organized

+

Roles in TOPSAIL are standard Ansible roles that are wired into the +run_toolbox.py command line interface.

+

In TOPSAIL, the roles are organized by projects, in the +projects/PROJECT_NAME/roles directories. Their structure follows +Ansible standard role guidelines:

+
toolbox/
+├── <group name>.py
+└── <new_role_name>/
+    ├── defaults
+       └── main.yml
+    ├── files
+       └── .keep
+    ├── README.md
+    ├── tasks
+       └── main.yml
+    ├── templates
+       └── example.yml.j2
+    └── vars
+        └── main.yml
+
+
+
+
+

How default parameters are generated

+

TOPSAIL automatically generates all the default parameters in the <role>/defaults/main.yml file, to make sure that the role parameters are consistent with what the CLI supports (run_toolbox.py). The file <role>/defaults/main.yml is rendered automatically when executing, from the project’s root folder:

+
./run_toolbox.py repo generate_ansible_default_settings
+
+
+
+

Including new roles in Topsail’s CLI

+
+

1. Creating a Python class for the new role

+

Create a class file to reference the new role and to define the default parameters that can be set from the CLI.

+

In the project’s toolbox directory, create or edit the +<project_name>.py file with the following code:

+
 import sys
+
+ from projects.core.library.ansible_toolbox import (
+     RunAnsibleRole, AnsibleRole,
+     AnsibleMappedParams, AnsibleConstant,
+     AnsibleSkipConfigGeneration
+ )
+
+class <project_name>:
+    """
+    Commands relating to <project_name>
+    """
+
+    @AnsibleRole("<new_role_name>")
+    @AnsibleMappedParams
+    def run(self,
+            <new_role_parameter_1>,
+            <new_role_parameter_n>,
+            ):
+        """
+        Run <new_role_name>
+
+        Args:
+          <new_role_parameter_1>: First parameter
+          <new_role_parameter_n>: Nth parameter
+        """
+
+        # if needed, perform simple parameters validation here
+
+        return RunAnsibleRole(locals())
+
+
+
+

Description of the decorators

+
    +
  • @AnsibleRole(role_name) tells which Ansible role implements the command

  • +
  • @AnsibleMappedParams specifies that the Python arguments should +be mapped into the Ansible arguments (that’s the most common)

  • +
  • @AnsibleSkipConfigGeneration specifies that no configuration +should be generated for this command (usually, it means that another +command already specifies the arguments, and this one reuses the +same role with different settings)

  • +
  • @AnsibleConstant(description, name, value) specifies an Ansible argument without a Python equivalent. Can be used to pass flags embedded in the function name. Eg: dump_prometheus and reset_prometheus.

  • +
+
+
+
+

2. Including the new toolbox class in the Toolbox

+

This step is not necessary anymore. The run_toolbox.py command from the root directory loads the toolbox with this generic call:

+
projects.core.library.ansible_toolbox.Toolbox()
+
+
+

This class traverses all the projects/*/toolbox/*.py Python files, and loads the class named after the title-cased file name (simplified code):

+
for toolbox_file in (TOPSAIL_DIR / "projects").glob("*/toolbox/*.py"):
+    toolbox_module = __import__(toolbox_file)
+    toolbox_name = name of <toolbox_file> without extension
+    toolbox_class = getattr(toolbox_module, toolbox_name.title())
+
+
+
+
+
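The title-casing step can be checked with plain Python: the file stem is title-cased per underscore-separated word, which yields class names like Fine_Tuning. A minimal sketch (illustration only, not TOPSAIL's actual loader, which relies on importlib):

```python
from pathlib import Path

def toolbox_class_name(toolbox_file: str) -> str:
    # "fine_tuning.py" -> stem "fine_tuning" -> "Fine_Tuning";
    # str.title() capitalizes each underscore-separated word,
    # matching the toolbox class naming convention
    return Path(toolbox_file).stem.title()

print(toolbox_class_name("projects/fine_tuning/toolbox/fine_tuning.py"))
# -> Fine_Tuning
```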

3. Rendering the default parameters

+

Now that the new toolbox command is created, the role class is stored in the project’s toolbox directory, and the CLI entrypoint is loaded by the Toolbox class, it is possible to render the role default parameters from the run_toolbox.py CLI. To render the default parameters for all the roles, execute:

+
./run_toolbox.py repo generate_ansible_default_settings
+
+
+

The TOPSAIL GitHub repository will refuse to merge a PR if this command has not been called after the Python entrypoint has been modified.

+
+
+

4. Executing the new toolbox command

+

Once the role is in the correct folder and the Toolbox entrypoints +are up to date, this new role can be executed directly from run_toolbox.py +like:

+
./run_toolbox.py <project_name> <new_role_name> <new_role_parameter_1> <new_role_parameter_n>
+
+
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/extending/visualization.html b/extending/visualization.html new file mode 100644 index 0000000000..863ea9b9d5 --- /dev/null +++ b/extending/visualization.html @@ -0,0 +1,701 @@ + + + + + + + Creating a new visualization module — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Creating a new visualization module

+

TOPSAIL’s post-processing/visualization relies on MatrixBenchmarking modules. The post-processing steps are configured within the matbench field of the configuration file:

+
matbench:
+  preset: null
+  workload: projects.fine_tuning.visualizations.fine_tuning
+  config_file: plots.yaml
+  download:
+    mode: prefer_cache
+    url:
+    url_file:
+    # if true, copy the results downloaded by `matbench download` into the artifacts directory
+    save_to_artifacts: false
+  # directory to plot. Set by testing/common/visualize.py before launching the visualization
+  test_directory: null
+  lts:
+    generate: true
+    horreum:
+      test_name: null
+    opensearch:
+      export:
+        enabled: false
+        enabled_on_replot: false
+        fail_test_on_fail: true
+      instance: smoke
+      index: topsail-fine-tuning
+      index_prefix: ""
+      prom_index_suffix: -prom
+    regression_analyses:
+      enabled: false
+      # if the regression analyses fail, mark the test as failed
+      fail_test_on_regression: false
+
+
+

The visualization modules are split into several sub-modules, that are +described below.

+
+

The store module

+

The store module is built as an extension of +projects.matrix_benchmarking.visualizations.helpers.store, which +defines the store architecture usually used in TOPSAIL.

+
local_store = helpers_store.BaseStore(
+    cache_filename=CACHE_FILENAME, important_files=IMPORTANT_FILES,
+
+    artifact_dirnames=parsers.artifact_dirnames,
+    artifact_paths=parsers.artifact_paths,
+
+    parse_always=parsers.parse_always,
+    parse_once=parsers.parse_once,
+
+    # ---
+
+    lts_payload_model=models_lts.Payload,
+    generate_lts_payload=lts_parser.generate_lts_payload,
+
+    # ---
+
+    models_kpis=models_kpi.KPIs,
+    get_kpi_labels=lts_parser.get_kpi_labels,
+)
+
+
+

The upper part defines the core of the store module. It is +mandatory.

+

The lower parts define the LTS payload and KPIs. This part is optional, and only required to push KPIs to OpenSearch.

+
+

The store parsers

+

The goal of the store.parsers module is to turn TOPSAIL test artifact directories into a Python object that can be plotted or turned into LTS KPIs.

+

The parsers of the main workload components rely on the simple +store.

+
store_simple.register_custom_parse_results(local_store.parse_directory)
+
+
+

The simple store searches for a settings.yaml file and an +exit_code file.

+

When these two files are found, the parsing of a test begins, and the +current directory is considered a test root directory.

+

The parsing is done this way:

+
if exists(CACHE_FILE) and not MATBENCH_STORE_IGNORE_CACHE == true:
+  results = reload(CACHE_FILE)
+else:
+  results = parse_once()
+
+parse_always(results)
+results.lts = parse_lts(results)
+return results
+
+
+

This organization improves the flexibility of the parsers, with respect to what takes time (should be in parse_once) vs what depends on the current execution environment (should be in parse_always).

+
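The caching logic above can be sketched in plain Python (a minimal illustration with a hypothetical cache file name; MatrixBenchmarking's actual implementation differs):

```python
import json
import os
import platform
from pathlib import Path

CACHE_FILE = Path("cache.json")  # hypothetical cache file name

def parse_once():
    # expensive: parse the artifacts directory from scratch
    return {"exit_code": 0}

def parse_always(results):
    # cheap, environment-dependent: re-run even when the cache is reloaded
    results["postprocessing_host"] = platform.node()

def load_results():
    ignore_cache = os.environ.get("MATBENCH_STORE_IGNORE_CACHE") == "true"
    if CACHE_FILE.exists() and not ignore_cache:
        results = json.loads(CACHE_FILE.read_text())
    else:
        results = parse_once()
        CACHE_FILE.write_text(json.dumps(results))
    parse_always(results)
    return results
```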

Mind that if you are working on the parsers, you should disable the +cache, or your modifications will not be taken into account.

+
export MATBENCH_STORE_IGNORE_CACHE=true
+
+
+

You can re-enable it afterwards with:

+
unset MATBENCH_STORE_IGNORE_CACHE
+
+
+

The result of the main parser is a types.SimpleNamespace object. By choice, it is weakly (on the fly) defined, so the developers must take care to properly propagate any modification of the structure. We tested having a Pydantic model, but that turned out to be too cumbersome to maintain; this could be revisited.

+

The important part of the parser is triggered by the execution of this +method:

+
def parse_once(results, dirname):
+    results.test_config = helpers_store_parsers.parse_test_config(dirname)
+    results.test_uuid = helpers_store_parsers.parse_test_uuid(dirname)
+    ...
+
+
+

This parse_once method is in charge of transforming a directory (dirname) into a Python object (results). The parser heavily relies on obj = types.SimpleNamespace() objects, which are dictionary-like objects whose fields can be accessed as attributes. The inner dictionary can be accessed with obj.__dict__ for programmatic traversal.

+
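The attribute/dictionary duality of types.SimpleNamespace works like this (hypothetical field values):

```python
import types

results = types.SimpleNamespace()
results.test_uuid = "0000-1111"  # hypothetical values
results.finish_reason = types.SimpleNamespace(exit_code=0)

# fields are accessed as attributes ...
exit_code = results.finish_reason.exit_code

# ... and the inner dictionary allows programmatic traversal
field_names = list(results.__dict__.keys())
```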

The parse_once method should delegate the parsing to submethods, which typically look like this (safety checks have been removed for readability):

+
def parse_once(results, dirname):
+    ...
+    results.finish_reason = _parse_finish_reason(dirname)
+    ....
+
+@helpers_store_parsers.ignore_file_not_found
+def _parse_finish_reason(dirname):
+    finish_reason = types.SimpleNamespace()
+    finish_reason.exit_code = None
+
+    with open(register_important_file(dirname, artifact_paths.FINE_TUNING_RUN_FINE_TUNING_DIR / "artifacts/pod.json")) as f:
+        pod_def = json.load(f)
+
+    # (safety checks removed) extract the terminated state of the first container
+    container_terminated_state = pod_def["status"]["containerStatuses"][0]["state"]["terminated"]
+    finish_reason.exit_code = container_terminated_state["exitCode"]
+
+    return finish_reason
+
+
+

Note that:

+
    +
  • for efficiency, JSON parsing should be preferred to YAML parsing, +which is much slower.

  • +
  • for grep-ability, the results.xxx field name should match the +variable defined in the method (xxx = types.SimpleNamespace())

  • +
  • the ignore_file_not_found decorator will catch +FileNotFoundError exceptions and return None instead. This +makes the code resilient against not-generated artifacts. This +happens “often” while performing investigations in TOPSAIL, because +the test failed in an unexpected way. The visualization is expected +to perform as best as possible when this happens (graceful +degradation), so that the rest of the artifacts can be exploited to +understand what happened and caused the failure.

  • +
+

The difference between these two methods:

+
def parse_once(results, dirname): ...
+
+def parse_always(results, dirname, import_settings): ..
+
+
+

is that parse_once is called once; the results are then saved into a cache file, and reloaded from there, unless the environment variable MATBENCH_STORE_IGNORE_CACHE=y is set.

+

Method parse_always is always called, even after reloading the +cache file. This can be used to parse information about the +environment in which the post-processing is executed.

+
artifact_dirnames = types.SimpleNamespace()
+artifact_dirnames.CLUSTER_CAPTURE_ENV_DIR = "*__cluster__capture_environment"
+artifact_dirnames.FINE_TUNING_RUN_FINE_TUNING_DIR = "*__fine_tuning__run_fine_tuning_job"
+artifact_dirnames.RHODS_CAPTURE_STATE = "*__rhods__capture_state"
+artifact_paths = types.SimpleNamespace() # will be dynamically populated
+
+
+

This block is used to lookup the directories where the files to be +parsed are stored (the prefix nnn__ can change easily, so it +shouldn’t be hardcoded).

+

During the initialization of the store module, the directories listed by artifact_dirnames are resolved and stored in the artifact_paths namespace. They can be used in the parser with, eg: artifact_paths.FINE_TUNING_RUN_FINE_TUNING_DIR / "artifacts/pod.log".

+

If the directory glob does not resolve, its value is None.

+
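The glob resolution can be sketched with pathlib (a simplified illustration, with a hypothetical helper name; the real store initialization is more involved):

```python
import types
from pathlib import Path

artifact_dirnames = types.SimpleNamespace()
artifact_dirnames.CLUSTER_CAPTURE_ENV_DIR = "*__cluster__capture_environment"

def resolve_artifact_paths(dirname, dirnames):
    # map each glob pattern to the first matching path, or None
    paths = types.SimpleNamespace()
    for key, pattern in dirnames.__dict__.items():
        matches = sorted(Path(dirname).glob(pattern))
        setattr(paths, key, matches[0] if matches else None)
    return paths
```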
IMPORTANT_FILES = [
+    ".uuid",
+    "config.yaml",
+    f"{artifact_dirnames.CLUSTER_CAPTURE_ENV_DIR}/_ansible.log",
+    f"{artifact_dirnames.CLUSTER_CAPTURE_ENV_DIR}/nodes.json",
+    f"{artifact_dirnames.CLUSTER_CAPTURE_ENV_DIR}/ocp_version.yml",
+    f"{artifact_dirnames.FINE_TUNING_RUN_FINE_TUNING_DIR}/src/config_final.json",
+    f"{artifact_dirnames.FINE_TUNING_RUN_FINE_TUNING_DIR}/artifacts/pod.log",
+    f"{artifact_dirnames.FINE_TUNING_RUN_FINE_TUNING_DIR}/artifacts/pod.json",
+    f"{artifact_dirnames.FINE_TUNING_RUN_FINE_TUNING_DIR}/_ansible.play.yaml",
+    f"{artifact_dirnames.RHODS_CAPTURE_STATE}/rhods.createdAt",
+    f"{artifact_dirnames.RHODS_CAPTURE_STATE}/rhods.version",
+]
+
+
+

This block defines the files important for the parsing. They are +“important” and not “mandatory” as the parsing should be able to +proceed even if the files are missing.

+

The list of “important files” is used when downloading results for re-processing. The download command can either lookup the cache file, or download all the important files. A warning is issued during the parsing if a file opened with register_important_file is not part of the important files list.

+
+
+
+

The store and models LTS and KPI modules

+

The Long-Term Storage (LTS) payload and the Key Performance Indicators +(KPIs) are TOPSAIL/MatrixBenchmarking features for Continuous +Performance Testing (CPT).

+
    +
  • The LTS payload is a “complex” object, with metadata, results and kpis fields. The metadata and results fields are defined with Pydantic models, which enforce their structure. This was the first attempt of TOPSAIL/MatrixBenchmarking to move towards long-term stability of the test results and metadata. This attempt has not been convincing, but it is still part of the pipeline for historical reasons. Any metadata or result can be stored in these two objects, provided that the fields are correctly added to the models.

  • +
  • The KPIs are our current working solution for continuous performance testing. A KPI is a simple object, which consists of a value, a help text, a timestamp, a unit, and a set of labels. The KPIs follow the OpenMetrics idea.

  • +
+
# HELP kserve_container_cpu_usage_max Max CPU usage of the Kserve container | container_cpu_usage_seconds_total
+# UNIT kserve_container_cpu_usage_max cores
+kserve_container_cpu_usage_max{instance_type="g5.2xlarge", accelerator_name="NVIDIA-A10G", ocp_version="4.16.0-rc.6", rhoai_version="2.13.0-rc1+2024-09-02", model_name="flan-t5-small", ...} 1.964734477279039
+
+
+

Currently, the KPIs are part of the LTS payload, and the labels are duplicated for each of the KPIs. This design will be reconsidered in the near future.

+

The KPIs are a set of performance indicators and labels.

+

The KPIs are defined by functions which extract the KPI value by +inspecting the LTS payload:

+
@matbench_models.HigherBetter
+@matbench_models.KPIMetadata(help="Number of dataset tokens processed per seconds per GPU", unit="tokens/s")
+def dataset_tokens_per_second_per_gpu(lts_payload):
+   return lts_payload.results.dataset_tokens_per_second_per_gpu
+
+
+

The name of the function is the name of the KPI, and the annotations define the metadata and some formatting properties:

+
# mandatory
+@matbench_models.KPIMetadata(help="Number of train tokens processed per GPU per seconds", unit="tokens/s")
+
+# one of these two is mandatory
+@matbench_models.LowerBetter
+# or
+@matbench_models.HigherBetter
+
+# ignore this KPI in the regression analysis
+@matbench_models.IgnoredForRegression
+
+# simple value formatter
+@matbench_models.Format("{:.2f}")
+
+# formatter with a divisor (and a new unit)
+@matbench_models.FormatDivisor(1024, unit="GB", format="{:.2f}")
+
+
+

The KPI labels are defined via a Pydantic model:

+
KPI_SETTINGS_VERSION = "1.0"
+class Settings(matbench_models.ExclusiveModel):
+   kpi_settings_version: str
+   ocp_version: matbench_models.SemVer
+   rhoai_version: matbench_models.SemVer
+   instance_type: str
+
+   accelerator_type: str
+   accelerator_count: int
+
+   model_name: str
+   tuning_method: str
+   per_device_train_batch_size: int
+   batch_size: int
+   max_seq_length: int
+   container_image: str
+
+   replicas: int
+   accelerators_per_replica: int
+
+   lora_rank: Optional[int]
+   lora_dropout: Optional[float]
+   lora_alpha: Optional[int]
+   lora_modules: Optional[str]
+
+   ci_engine: str
+   run_id: str
+   test_path: str
+   urls: Optional[dict[str, str]]
+
+
+

So eventually, the KPIs are the combination of the generic part (matbench_models.KPI) and the project-specific labels (Settings):

+
class KPI(matbench_models.KPI, Settings): pass
+KPIs = matbench_models.getKPIsModel("KPIs", __name__, kpi.KPIs, KPI)
+
+
+

The LTS payload was the original document that TOPSAIL saved for continuous performance testing. The KPIs have replaced it in this endeavor, but in the current state of the project, the LTS payload includes the KPIs. The LTS payload is the object actually sent to the OpenSearch database.

+

The LTS Payload is composed of three objects:

+
    +
  • the metadata (replaced by the KPI labels)

  • +
  • the results (replaced by the KPI values)

  • +
  • the KPIs

  • +
+
LTS_SCHEMA_VERSION = "1.0"
+class Metadata(matbench_models.Metadata):
+    lts_schema_version: str
+    settings: Settings
+
+    presets: List[str]
+    config: str
+    ocp_version: matbench_models.SemVer
+
+ class Results(matbench_models.ExclusiveModel):
+    train_tokens_per_second: float
+    dataset_tokens_per_second: float
+    gpu_hours_per_million_tokens: float
+    dataset_tokens_per_second_per_gpu: float
+    train_tokens_per_gpu_per_second: float
+    train_samples_per_second: float
+    train_runtime: float
+    train_steps_per_second: float
+    avg_tokens_per_sample: float
+
+ class Payload(matbench_models.ExclusiveModel):
+    metadata: Metadata
+    results: Results
+    kpis: KPIs
+
+
+

The generation of the LTS payload is done after the parsing of main +artifacts.

+
def generate_lts_payload(results, import_settings):
+    lts_payload = types.SimpleNamespace()
+
+    lts_payload.metadata = generate_lts_metadata(results, import_settings)
+    lts_payload.results = generate_lts_results(results)
+    # lts_payload.kpis is generated in the helper store
+
+    return lts_payload
+
+
+

On purpose, the parser does not use the Pydantic model when creating the LTS payload. The reason is that the model validation is strict: if a field is missing, the object will not be created and an exception will be raised. When TOPSAIL is used for running performance investigations (in particular scale tests), we do not want this, because the test might terminate with some artifacts missing. Hence, the parsing will be incomplete, and we do not want that to abort the visualization process.

+

However, when running in continuous performance testing mode, we do +want to guarantee that everything is correctly populated.

+

So TOPSAIL will run the parsing twice. First, without checking the LTS +conformity:

+
matbench parse
+     --output-matrix='.../internal_matrix.json' \
+     --pretty='True' \
+     --results-dirname='...' \
+     --workload='projects.kserve.visualizations.kserve-llm'
+
+
+

Then, when LTS generation is enabled, with the LTS conformity check:

+
matbench parse \
+     --output-lts='.../lts_payload.json' \
+     --pretty='True' \
+     --results-dirname='...' \
+     --workload='projects.kserve.visualizations.kserve-llm'
+
+
+

This step (which reloads from the cache file) will be recorded as a failure if the parsing is incomplete.

+

The KPI values are generated in two steps:

+

First, the KPIs dictionary is populated when the KPIMetadata decorator is applied to a function (mapping the function name to a dict with the function, metadata, format, etc.):

+
KPIs = {} # populated by the @matbench_models.KPIMetadata decorator
+# ...
+@matbench_models.KPIMetadata(help="Number of train tokens processed per seconds", unit="tokens/s")
+def train_tokens_per_second(lts_payload):
+  return lts_payload.results.train_tokens_per_second
+
+
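Such a registering decorator can be sketched like this (a minimal stand-in for matbench_models.KPIMetadata, for illustration only):

```python
KPIs = {}  # populated by the decorator at import time

def KPIMetadata(help, unit):
    def decorator(fn):
        # register the KPI function under its own name, with its metadata
        KPIs[fn.__name__] = dict(fn=fn, help=help, unit=unit)
        return fn
    return decorator

@KPIMetadata(help="Number of train tokens processed per second", unit="tokens/s")
def train_tokens_per_second(lts_payload):
    return lts_payload.results.train_tokens_per_second
```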
+

Second, when the LTS payload is generated via the helpers_store

+
import projects.matrix_benchmarking.visualizations.helpers.store as helpers_store
+
+
+

the LTS payload is passed to the KPI function, and the full KPI is +generated.

+
+
+

The plotting visualization module

+

The plotting module contains two kinds of classes: the “actual” plotting classes, which generate Plotly plots, and the report classes, which generate HTML pages, based on Plotly’s Dash framework.

+

The plotting plot classes generate Plotly plots. They receive a +set of parameters about what should be plotted:

+
def do_plot(self, ordered_vars, settings, setting_lists, variables, cfg):
+    ...
+
+
+

and they return a Plotly figure, and optionally some text to write +below the plot:

+
return fig, msg
+
+
+

The parameters are mostly useful when multiple experiments have been +captured:

+
    +
  • setting_lists and settings should not be touched. They should be passed to common.Matrix.all_records, which returns a filtered list of all the entries to include in the plot.

  • +
+
for entry in common.Matrix.all_records(settings, setting_lists):
+    # extract plot data from entry
+    pass
+
+
+

Some plotting classes may be written to display only one experiment’s results. A fail-safe exit can be written this way:

+
if common.Matrix.count_records(settings, setting_lists) != 1:
+    return {}, "ERROR: only one experiment must be selected"
+
+
+
    +
  • the variables dictionary tells which settings have multiple +values. Eg, we may have 6 experiments, all with +model_name=llama3, but with virtual_users=[4, 16, 32] and +deployment_type=[raw, knative]. In this case, the +virtual_users and deployment_type will be listed in the +variables. This is useful to give a name to each entry. Eg, +here, entry.get_name(variables) may return virtual_users=16, +deployment_type=raw.

  • +
  • the ordered_vars list tells the preferred ordering for processing the experiments. With the example above and ordered_vars=[virtual_users, deployment_type], we may want to use the virtual_users setting as the legend. With ordered_vars=[deployment_type, virtual_users], we may want to use the deployment_type instead. This gives flexibility in the way the plots are rendered. This order can be set in the GUI, or via the reporting calls.

  • +
+
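The entry naming described above can be sketched as follows (a hypothetical helper for illustration; the real entry.get_name comes from MatrixBenchmarking):

```python
def get_name(settings, variables):
    # keep only the settings that vary across the selected experiments
    return ", ".join(f"{key}={settings[key]}" for key in sorted(variables))

settings = dict(model_name="llama3", virtual_users=16, deployment_type="raw")
variables = ["virtual_users", "deployment_type"]
print(get_name(settings, variables))
# -> deployment_type=raw, virtual_users=16
```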

Note that using these parameters is optional. They are not meaningful when only one experiment should be plotted, and ordered_vars is useful only when using the GUI, or when generating reports. They help the generic processing of the results.

+
    +
  • the cfg dictionary provides some dynamic configuration flags to +perform the visualization. They can be passed either via the GUI, or +by the report classes (eg, to highlight a particular aspect of the +plot).

  • +
+

Writing a plotting class is often messy and dirty, with a lot of +if this else that. With Plotly’s initial framework +plotly.graph_objs, it was easy and tempting to mix the data +preparation (traversing the data structures) with the data +visualization (adding elements like lines to the plot), and do both +parts in the same loops.

+

Plotly express (plotly.express) introduced a new way to generate +the plots, based on Pandas DataFrames:

+
df = pd.DataFrame(generateThroughputData(entries, variables, ordered_vars, cfg__model_name))
+fig = px.line(df, hover_data=df.columns,
+              x="throughput", y="tpot_mean", color="model_testname", text="test_name",)
+
+
+

This pattern, where the first phase shapes the data to plot into +DataFrame, and the second phase turns the DataFrame into a figure, is +the preferred way to organize the code of the plotting classes.

+
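A minimal illustration of this two-phase pattern (with hypothetical entry fields and column names; the Plotly Express call is left as a comment so the sketch stays self-contained):

```python
import pandas as pd

def generate_throughput_data(entries):
    # phase 1: shape the data to plot, one dict per data point
    for entry in entries:
        yield dict(test_name=entry["name"],
                   throughput=entry["throughput"],
                   tpot_mean=entry["tpot_mean"])

entries = [dict(name="run-a", throughput=10.5, tpot_mean=0.045),
           dict(name="run-b", throughput=21.0, tpot_mean=0.080)]
df = pd.DataFrame(generate_throughput_data(entries))

# phase 2: turn the DataFrame into a figure, e.g.:
# fig = px.line(df, x="throughput", y="tpot_mean", text="test_name")
```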

The report classes are similar to the plotting classes, except that +they generate … reports, instead of plots (!).

+

A report is an HTML document, based on the Dash framework HTML tags +(that is, Python objects):

+
args = ordered_vars, settings, setting_lists, variables, cfg
+
+header += [html.H1("Latency per token during the load test")]
+
+header += Plot_and_Text(f"Latency details", args)
+header += html.Br()
+header += html.Br()
+
+header += Plot_and_Text(f"Latency distribution", args)
+
+header += html.Br()
+header += html.Br()
+
+
+

The configuration dictionary, mentioned above, can be used to generate +different flavors of the plot:

+
header += Plot_and_Text(f"Latency distribution", set_config(dict(box_plot=False, show_text=False), args))
+
+for entry in common.Matrix.all_records(settings, setting_lists):
+    header += [html.H2(entry.get_name(reversed(sorted(set(list(variables.keys()) + ['model_name'])))))]
+    header += Plot_and_Text(f"Latency details", set_config(dict(entry=entry), args))
+
+
+

When TOPSAIL has successfully run the parsing step, it calls the +visualization component with a predefined list of reports +(preferred) and plots (not recommended) to generate. This is stored in +data/plots.yaml:

+
visualize:
+- id: llm_test
+  generate:
+  - "report: Error report"
+  - "report: Latency per token"
+  - "report: Throughput"
+
+
+
+
+

The analyze regression analysis module

+

The last part of TOPSAIL/MatrixBenchmarking post-processing is the automated regression analysis. The workflow required to enable performance analyses will be described in the orchestration section. In the workload module, only a few keys need to be defined.

+
# the setting (kpi labels) keys against which the historical regression should be performed
+COMPARISON_KEYS = ["rhoai_version"]
+
+
+

The setting keys listed in COMPARISON_KEYS will be used to distinguish the entries to consider as “history” for a given test, from everything else. In this example, we see that we compare against historical OpenShift AI versions.

+
COMPARISON_KEYS = ["rhoai_version", "image_tag"]
+
+
+

Here, we compare against the historical RHOAI version and image tag.

+
# the setting (kpi labels) keys that should be ignored when searching for historical results
+IGNORED_KEYS = ["runtime_image", "ocp_version"]
+
+
+

Then we define the settings to ignore when searching for historical +records. Here, we ignore the runtime image name, and the OpenShift +version.

+
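The history-selection logic these keys drive can be sketched as follows (a simplification of what MatrixBenchmarking's regression analysis does, with made-up setting values):

```python
COMPARISON_KEYS = ["rhoai_version"]
IGNORED_KEYS = ["runtime_image", "ocp_version"]

def select_history(current, entries):
    # an entry belongs to the history of `current` if all its settings match,
    # except the comparison keys (the axis we compare across) and the ignored keys
    skip = set(COMPARISON_KEYS) | set(IGNORED_KEYS)
    reference = {k: v for k, v in current.items() if k not in skip}
    return [e for e in entries
            if {k: v for k, v in e.items() if k not in skip} == reference]

current = dict(rhoai_version="2.13", model_name="flan-t5", ocp_version="4.16")
entries = [
    dict(rhoai_version="2.12", model_name="flan-t5", ocp_version="4.15"),  # history
    dict(rhoai_version="2.12", model_name="llama3", ocp_version="4.15"),   # another test
]
print(select_history(current, entries))  # keeps only the first entry
```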
# the setting (kpi labels) keys *prefered* for sorting the entries in the regression report
+SORTING_KEYS = ["model_name", "virtual_users"]
+
+
+

Finally, for readability purposes, we define how the entries should be sorted, so that the tables have a consistent ordering.

+
IGNORED_ENTRIES = {
+    "virtual_users": [4, 8, 32, 128]
+}
+
+
+

Last, we can define some setting values for which the entries should be skipped while traversing the entries that have been tested.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/genindex.html b/genindex.html new file mode 100644 index 0000000000..dd586893c7 --- /dev/null +++ b/genindex.html @@ -0,0 +1,129 @@ + + + + + + Index — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + +
  • +
  • +
+
+
+
+
+ + +

Index

+ +
+ +
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/index.html b/index.html new file mode 100644 index 0000000000..f0cb80afb0 --- /dev/null +++ b/index.html @@ -0,0 +1,256 @@ + + + + + + + Red Hat PSAP TOPSAIL test orchestration framework — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Red Hat PSAP TOPSAIL test orchestration framework

+ + + + +
+
+

Documentation generated on Dec 01, 2024 from git-main/c8e4b1e9.

+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/intro.html b/intro.html new file mode 100644 index 0000000000..27ef56da26 --- /dev/null +++ b/intro.html @@ -0,0 +1,211 @@ + + + + + + + TOPSAIL — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

TOPSAIL

+

Red Hat/PSAP’s Test Orchestrator for Performance and Scalability of AI +pLatforms

+

Linters build status Consistency build status Render Ansible build status Render docs build status

+

This repository provides an extensive toolbox for performance and scale testing of the Red Hat OpenShift AI (RHOAI) platform.

+

The automation relies on:

+
    +
  • Python scripts for the orchestration (the testing directories)

  • +
  • Ansible roles for the cluster control (the toolbox and roles +directories)

  • +
  • MatrixBenchmarking for the +post-processing (the visualization directories)

  • +
+
+

Dependencies

+

The recommended way to run TOPSAIL is either via a CI environment, or within the TOPSAIL container via its Toolbx launcher.

+

Requirements:

+
    +
  • All the software requirements should be provided by the container +image, built by the topsail_build command.

  • +
  • A reachable OpenShift cluster

  • +
+
oc version # fails if the cluster is not reachable
+
+
+

Note that TOPSAIL assumes that it has cluster-admin privileges to the +cluster.

+
+
+

TOPSAIL orchestration and toolbox

+

TOPSAIL provides multiple levels of functionalities:

+
    +
  1. the test orchestrations are top level. Most of the time, they are triggered via a CI engine, for end-to-end testing of a given RHOAI component. The test orchestration Python code and configuration is stored in the projects/*/testing directory.

  2. the toolbox commands operate between the orchestration code and the cluster. They are Ansible roles (projects/*/toolbox), in charge of a specific task to prepare the cluster, run a given test, capture the state of the cluster … The Ansible roles have a thin Python layer on top of them (based on the Google Fire package) which provides a well-defined command-line interface (CLI). This CLI documents the parameters of the command, allows its discovery via the ./run_toolbox.py entrypoint, and generates artifacts for post-mortem troubleshooting.

  3. the post-processing visualization, provided via MatrixBenchmarking workload modules (projects/*/visualization). The modules are in charge of parsing the test artifacts, generating visualization reports, uploading KPIs to OpenSearch, and performing regression analyses.
+
+
+

TOPSAIL projects organization

+

TOPSAIL projects +directories are organized following the different levels described +above.

+
    +
  • the testing directory provides the Python scripts with CI +entrypoints (test.py prepare_ci and test.py run_ci) and possibly +extra entrypoints for local interactions. It also contains the +project configuration file (config.yaml)

  • +
  • the toolbox directory contains the Ansible roles that control and mutate the cluster during the cluster preparation and the test

  • +
  • the toolbox directory also contains the Python wrapper which +provides a well-defined CLI over the Ansible roles

  • +
  • the visualization directory contains the MatrixBenchmarking workload modules, which perform the post-processing step of the test (parsing, visualization, regression analysis)

  • +
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/objects.inv b/objects.inv new file mode 100644 index 0000000000..bdadd61838 Binary files /dev/null and b/objects.inv differ diff --git a/search.html b/search.html new file mode 100644 index 0000000000..5bdc8bee22 --- /dev/null +++ b/search.html @@ -0,0 +1,144 @@ + + + + + + Search — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + +
  • +
  • +
+
+
+
+
+ + + + +
+ +
+ +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/searchindex.js b/searchindex.js new file mode 100644 index 0000000000..4c270ed929 --- /dev/null +++ b/searchindex.js @@ -0,0 +1 @@ +Search.setIndex({"docnames": ["README", "contributing", "extending/orchestration", "extending/toolbox", "extending/visualization", "index", "intro", "toolbox.generated/Busy_Cluster.cleanup", "toolbox.generated/Busy_Cluster.create_configmaps", "toolbox.generated/Busy_Cluster.create_deployments", "toolbox.generated/Busy_Cluster.create_jobs", "toolbox.generated/Busy_Cluster.create_namespaces", "toolbox.generated/Busy_Cluster.status", "toolbox.generated/Cluster.build_push_image", "toolbox.generated/Cluster.capture_environment", "toolbox.generated/Cluster.create_htpasswd_adminuser", "toolbox.generated/Cluster.create_osd", "toolbox.generated/Cluster.deploy_aws_efs", "toolbox.generated/Cluster.deploy_ldap", "toolbox.generated/Cluster.deploy_minio_s3_server", "toolbox.generated/Cluster.deploy_nfs_provisioner", "toolbox.generated/Cluster.deploy_nginx_server", "toolbox.generated/Cluster.deploy_opensearch", "toolbox.generated/Cluster.deploy_operator", "toolbox.generated/Cluster.deploy_redis_server", "toolbox.generated/Cluster.destroy_ocp", "toolbox.generated/Cluster.destroy_osd", "toolbox.generated/Cluster.download_to_pvc", "toolbox.generated/Cluster.dump_prometheus_db", "toolbox.generated/Cluster.fill_workernodes", "toolbox.generated/Cluster.preload_image", "toolbox.generated/Cluster.query_prometheus_db", "toolbox.generated/Cluster.reset_prometheus_db", "toolbox.generated/Cluster.set_project_annotation", "toolbox.generated/Cluster.set_scale", "toolbox.generated/Cluster.undeploy_ldap", "toolbox.generated/Cluster.update_pods_per_node", "toolbox.generated/Cluster.upgrade_to_image", "toolbox.generated/Cluster.wait_fully_awake", "toolbox.generated/Configure.apply", "toolbox.generated/Configure.enter", "toolbox.generated/Configure.get", "toolbox.generated/Configure.name", 
"toolbox.generated/Cpt.deploy_cpt_dashboard", "toolbox.generated/Fine_Tuning.ray_fine_tuning_job", "toolbox.generated/Fine_Tuning.run_fine_tuning_job", "toolbox.generated/Fine_Tuning.run_quality_evaluation", "toolbox.generated/From_Config.run", "toolbox.generated/Gpu_Operator.capture_deployment_state", "toolbox.generated/Gpu_Operator.deploy_cluster_policy", "toolbox.generated/Gpu_Operator.deploy_from_bundle", "toolbox.generated/Gpu_Operator.deploy_from_operatorhub", "toolbox.generated/Gpu_Operator.enable_time_sharing", "toolbox.generated/Gpu_Operator.extend_metrics", "toolbox.generated/Gpu_Operator.get_csv_version", "toolbox.generated/Gpu_Operator.run_gpu_burn", "toolbox.generated/Gpu_Operator.undeploy_from_operatorhub", "toolbox.generated/Gpu_Operator.wait_deployment", "toolbox.generated/Gpu_Operator.wait_stack_deployed", "toolbox.generated/Kepler.deploy_kepler", "toolbox.generated/Kepler.undeploy_kepler", "toolbox.generated/Kserve.capture_operators_state", "toolbox.generated/Kserve.capture_state", "toolbox.generated/Kserve.deploy_model", "toolbox.generated/Kserve.extract_protos", "toolbox.generated/Kserve.extract_protos_grpcurl", "toolbox.generated/Kserve.undeploy_model", "toolbox.generated/Kserve.validate_model", "toolbox.generated/Kubemark.deploy_capi_provider", "toolbox.generated/Kubemark.deploy_nodes", "toolbox.generated/Kwok.deploy_kwok_controller", "toolbox.generated/Kwok.set_scale", "toolbox.generated/Llm_Load_Test.run", "toolbox.generated/Local_Ci.run", "toolbox.generated/Local_Ci.run_multi", "toolbox.generated/Nfd.has_gpu_nodes", "toolbox.generated/Nfd.has_labels", "toolbox.generated/Nfd.wait_gpu_nodes", "toolbox.generated/Nfd.wait_labels", "toolbox.generated/Nfd_Operator.deploy_from_operatorhub", "toolbox.generated/Nfd_Operator.undeploy_from_operatorhub", "toolbox.generated/Notebooks.benchmark_performance", "toolbox.generated/Notebooks.capture_state", "toolbox.generated/Notebooks.cleanup", "toolbox.generated/Notebooks.dashboard_scale_test", 
"toolbox.generated/Notebooks.locust_scale_test", "toolbox.generated/Notebooks.ods_ci_scale_test", "toolbox.generated/Pipelines.capture_state", "toolbox.generated/Pipelines.deploy_application", "toolbox.generated/Pipelines.run_kfp_notebook", "toolbox.generated/Repo.generate_ansible_default_settings", "toolbox.generated/Repo.generate_middleware_ci_secret_boilerplate", "toolbox.generated/Repo.generate_toolbox_related_files", "toolbox.generated/Repo.generate_toolbox_rst_documentation", "toolbox.generated/Repo.send_job_completion_notification", "toolbox.generated/Repo.validate_no_broken_link", "toolbox.generated/Repo.validate_no_wip", "toolbox.generated/Repo.validate_role_files", "toolbox.generated/Repo.validate_role_vars_used", "toolbox.generated/Rhods.capture_state", "toolbox.generated/Rhods.delete_ods", "toolbox.generated/Rhods.deploy_addon", "toolbox.generated/Rhods.deploy_ods", "toolbox.generated/Rhods.dump_prometheus_db", "toolbox.generated/Rhods.reset_prometheus_db", "toolbox.generated/Rhods.undeploy_ods", "toolbox.generated/Rhods.update_datasciencecluster", "toolbox.generated/Rhods.wait_odh", "toolbox.generated/Rhods.wait_ods", "toolbox.generated/Scheduler.cleanup", "toolbox.generated/Scheduler.create_mcad_canary", "toolbox.generated/Scheduler.deploy_mcad_from_helm", "toolbox.generated/Scheduler.generate_load", "toolbox.generated/Server.deploy_ldap", "toolbox.generated/Server.deploy_minio_s3_server", "toolbox.generated/Server.deploy_nginx_server", "toolbox.generated/Server.deploy_opensearch", "toolbox.generated/Server.deploy_redis_server", "toolbox.generated/Server.undeploy_ldap", "toolbox.generated/Storage.deploy_aws_efs", "toolbox.generated/Storage.deploy_nfs_provisioner", "toolbox.generated/Storage.download_to_pvc", "toolbox.generated/index", "understanding/orchestration", "understanding/toolbox", "understanding/visualization"], "filenames": ["README.rst", "contributing.rst", "extending/orchestration.rst", "extending/toolbox.rst", 
"extending/visualization.rst", "index.rst", "intro.rst", "toolbox.generated/Busy_Cluster.cleanup.rst", "toolbox.generated/Busy_Cluster.create_configmaps.rst", "toolbox.generated/Busy_Cluster.create_deployments.rst", "toolbox.generated/Busy_Cluster.create_jobs.rst", "toolbox.generated/Busy_Cluster.create_namespaces.rst", "toolbox.generated/Busy_Cluster.status.rst", "toolbox.generated/Cluster.build_push_image.rst", "toolbox.generated/Cluster.capture_environment.rst", "toolbox.generated/Cluster.create_htpasswd_adminuser.rst", "toolbox.generated/Cluster.create_osd.rst", "toolbox.generated/Cluster.deploy_aws_efs.rst", "toolbox.generated/Cluster.deploy_ldap.rst", "toolbox.generated/Cluster.deploy_minio_s3_server.rst", "toolbox.generated/Cluster.deploy_nfs_provisioner.rst", "toolbox.generated/Cluster.deploy_nginx_server.rst", "toolbox.generated/Cluster.deploy_opensearch.rst", "toolbox.generated/Cluster.deploy_operator.rst", "toolbox.generated/Cluster.deploy_redis_server.rst", "toolbox.generated/Cluster.destroy_ocp.rst", "toolbox.generated/Cluster.destroy_osd.rst", "toolbox.generated/Cluster.download_to_pvc.rst", "toolbox.generated/Cluster.dump_prometheus_db.rst", "toolbox.generated/Cluster.fill_workernodes.rst", "toolbox.generated/Cluster.preload_image.rst", "toolbox.generated/Cluster.query_prometheus_db.rst", "toolbox.generated/Cluster.reset_prometheus_db.rst", "toolbox.generated/Cluster.set_project_annotation.rst", "toolbox.generated/Cluster.set_scale.rst", "toolbox.generated/Cluster.undeploy_ldap.rst", "toolbox.generated/Cluster.update_pods_per_node.rst", "toolbox.generated/Cluster.upgrade_to_image.rst", "toolbox.generated/Cluster.wait_fully_awake.rst", "toolbox.generated/Configure.apply.rst", "toolbox.generated/Configure.enter.rst", "toolbox.generated/Configure.get.rst", "toolbox.generated/Configure.name.rst", "toolbox.generated/Cpt.deploy_cpt_dashboard.rst", "toolbox.generated/Fine_Tuning.ray_fine_tuning_job.rst", 
"toolbox.generated/Fine_Tuning.run_fine_tuning_job.rst", "toolbox.generated/Fine_Tuning.run_quality_evaluation.rst", "toolbox.generated/From_Config.run.rst", "toolbox.generated/Gpu_Operator.capture_deployment_state.rst", "toolbox.generated/Gpu_Operator.deploy_cluster_policy.rst", "toolbox.generated/Gpu_Operator.deploy_from_bundle.rst", "toolbox.generated/Gpu_Operator.deploy_from_operatorhub.rst", "toolbox.generated/Gpu_Operator.enable_time_sharing.rst", "toolbox.generated/Gpu_Operator.extend_metrics.rst", "toolbox.generated/Gpu_Operator.get_csv_version.rst", "toolbox.generated/Gpu_Operator.run_gpu_burn.rst", "toolbox.generated/Gpu_Operator.undeploy_from_operatorhub.rst", "toolbox.generated/Gpu_Operator.wait_deployment.rst", "toolbox.generated/Gpu_Operator.wait_stack_deployed.rst", "toolbox.generated/Kepler.deploy_kepler.rst", "toolbox.generated/Kepler.undeploy_kepler.rst", "toolbox.generated/Kserve.capture_operators_state.rst", "toolbox.generated/Kserve.capture_state.rst", "toolbox.generated/Kserve.deploy_model.rst", "toolbox.generated/Kserve.extract_protos.rst", "toolbox.generated/Kserve.extract_protos_grpcurl.rst", "toolbox.generated/Kserve.undeploy_model.rst", "toolbox.generated/Kserve.validate_model.rst", "toolbox.generated/Kubemark.deploy_capi_provider.rst", "toolbox.generated/Kubemark.deploy_nodes.rst", "toolbox.generated/Kwok.deploy_kwok_controller.rst", "toolbox.generated/Kwok.set_scale.rst", "toolbox.generated/Llm_Load_Test.run.rst", "toolbox.generated/Local_Ci.run.rst", "toolbox.generated/Local_Ci.run_multi.rst", "toolbox.generated/Nfd.has_gpu_nodes.rst", "toolbox.generated/Nfd.has_labels.rst", "toolbox.generated/Nfd.wait_gpu_nodes.rst", "toolbox.generated/Nfd.wait_labels.rst", "toolbox.generated/Nfd_Operator.deploy_from_operatorhub.rst", "toolbox.generated/Nfd_Operator.undeploy_from_operatorhub.rst", "toolbox.generated/Notebooks.benchmark_performance.rst", "toolbox.generated/Notebooks.capture_state.rst", "toolbox.generated/Notebooks.cleanup.rst", 
"toolbox.generated/Notebooks.dashboard_scale_test.rst", "toolbox.generated/Notebooks.locust_scale_test.rst", "toolbox.generated/Notebooks.ods_ci_scale_test.rst", "toolbox.generated/Pipelines.capture_state.rst", "toolbox.generated/Pipelines.deploy_application.rst", "toolbox.generated/Pipelines.run_kfp_notebook.rst", "toolbox.generated/Repo.generate_ansible_default_settings.rst", "toolbox.generated/Repo.generate_middleware_ci_secret_boilerplate.rst", "toolbox.generated/Repo.generate_toolbox_related_files.rst", "toolbox.generated/Repo.generate_toolbox_rst_documentation.rst", "toolbox.generated/Repo.send_job_completion_notification.rst", "toolbox.generated/Repo.validate_no_broken_link.rst", "toolbox.generated/Repo.validate_no_wip.rst", "toolbox.generated/Repo.validate_role_files.rst", "toolbox.generated/Repo.validate_role_vars_used.rst", "toolbox.generated/Rhods.capture_state.rst", "toolbox.generated/Rhods.delete_ods.rst", "toolbox.generated/Rhods.deploy_addon.rst", "toolbox.generated/Rhods.deploy_ods.rst", "toolbox.generated/Rhods.dump_prometheus_db.rst", "toolbox.generated/Rhods.reset_prometheus_db.rst", "toolbox.generated/Rhods.undeploy_ods.rst", "toolbox.generated/Rhods.update_datasciencecluster.rst", "toolbox.generated/Rhods.wait_odh.rst", "toolbox.generated/Rhods.wait_ods.rst", "toolbox.generated/Scheduler.cleanup.rst", "toolbox.generated/Scheduler.create_mcad_canary.rst", "toolbox.generated/Scheduler.deploy_mcad_from_helm.rst", "toolbox.generated/Scheduler.generate_load.rst", "toolbox.generated/Server.deploy_ldap.rst", "toolbox.generated/Server.deploy_minio_s3_server.rst", "toolbox.generated/Server.deploy_nginx_server.rst", "toolbox.generated/Server.deploy_opensearch.rst", "toolbox.generated/Server.deploy_redis_server.rst", "toolbox.generated/Server.undeploy_ldap.rst", "toolbox.generated/Storage.deploy_aws_efs.rst", "toolbox.generated/Storage.deploy_nfs_provisioner.rst", "toolbox.generated/Storage.download_to_pvc.rst", "toolbox.generated/index.rst", 
"understanding/orchestration.rst", "understanding/toolbox.rst", "understanding/visualization.rst"], "titles": ["<no title>", "Contributing", "Creating a New Orchestration", "How roles are organized", "Creating a new visualization module", "Red Hat PSAP TOPSAIL test orchestration framework", "TOPSAIL", "busy_cluster cleanup", "busy_cluster create_configmaps", "busy_cluster create_deployments", "busy_cluster create_jobs", "busy_cluster create_namespaces", "busy_cluster status", "cluster build_push_image", "cluster capture_environment", "cluster create_htpasswd_adminuser", "cluster create_osd", "cluster deploy_aws_efs", "cluster deploy_ldap", "cluster deploy_minio_s3_server", "cluster deploy_nfs_provisioner", "cluster deploy_nginx_server", "cluster deploy_opensearch", "cluster deploy_operator", "cluster deploy_redis_server", "cluster destroy_ocp", "cluster destroy_osd", "cluster download_to_pvc", "cluster dump_prometheus_db", "cluster fill_workernodes", "cluster preload_image", "cluster query_prometheus_db", "cluster reset_prometheus_db", "cluster set_project_annotation", "cluster set_scale", "cluster undeploy_ldap", "cluster update_pods_per_node", "cluster upgrade_to_image", "cluster wait_fully_awake", "configure apply", "configure enter", "configure get", "configure name", "cpt deploy_cpt_dashboard", "fine_tuning ray_fine_tuning_job", "fine_tuning run_fine_tuning_job", "fine_tuning run_quality_evaluation", "from_config run", "gpu_operator capture_deployment_state", "gpu_operator deploy_cluster_policy", "gpu_operator deploy_from_bundle", "gpu_operator deploy_from_operatorhub", "gpu_operator enable_time_sharing", "gpu_operator extend_metrics", "gpu_operator get_csv_version", "gpu_operator run_gpu_burn", "gpu_operator undeploy_from_operatorhub", "gpu_operator wait_deployment", "gpu_operator wait_stack_deployed", "kepler deploy_kepler", "kepler undeploy_kepler", "kserve capture_operators_state", "kserve capture_state", "kserve deploy_model", "kserve extract_protos", 
"kserve extract_protos_grpcurl", "kserve undeploy_model", "kserve validate_model", "kubemark deploy_capi_provider", "kubemark deploy_nodes", "kwok deploy_kwok_controller", "kwok set_scale", "llm_load_test run", "local_ci run", "local_ci run_multi", "nfd has_gpu_nodes", "nfd has_labels", "nfd wait_gpu_nodes", "nfd wait_labels", "nfd_operator deploy_from_operatorhub", "nfd_operator undeploy_from_operatorhub", "notebooks benchmark_performance", "notebooks capture_state", "notebooks cleanup", "notebooks dashboard_scale_test", "notebooks locust_scale_test", "notebooks ods_ci_scale_test", "pipelines capture_state", "pipelines deploy_application", "pipelines run_kfp_notebook", "repo generate_ansible_default_settings", "repo generate_middleware_ci_secret_boilerplate", "repo generate_toolbox_related_files", "repo generate_toolbox_rst_documentation", "repo send_job_completion_notification", "repo validate_no_broken_link", "repo validate_no_wip", "repo validate_role_files", "repo validate_role_vars_used", "rhods capture_state", "rhods delete_ods", "rhods deploy_addon", "rhods deploy_ods", "rhods dump_prometheus_db", "rhods reset_prometheus_db", "rhods undeploy_ods", "rhods update_datasciencecluster", "rhods wait_odh", "rhods wait_ods", "scheduler cleanup", "scheduler create_mcad_canary", "scheduler deploy_mcad_from_helm", "scheduler generate_load", "server deploy_ldap", "server deploy_minio_s3_server", "server deploy_nginx_server", "server deploy_opensearch", "server deploy_redis_server", "server undeploy_ldap", "storage deploy_aws_efs", "storage deploy_nfs_provisioner", "storage download_to_pvc", "Toolbox Documentation", "The Test Orchestrations Layer", "The Reusable Toolbox Layer", "The Post-mortem Processing & Visualization Layer"], "terms": {"see": [0, 1, 2, 4, 31, 36, 44, 51, 84, 85, 86, 122, 123], "render": [0, 4, 5], "version": [0, 2, 4, 6, 16, 23, 44, 51, 54, 74, 102, 122, 123, 125], "topsail": [0, 1, 2, 4, 7, 8, 9, 10, 12, 40, 43, 47, 73, 74, 122, 124, 125], "": [0, 
1, 2, 4, 6, 15, 23, 34, 44, 51, 85, 86, 87, 89, 123, 124, 125], "document": [0, 1, 2, 4, 5, 6, 34, 92, 103, 123, 125], "thi": [0, 1, 2, 3, 4, 6, 9, 13, 34, 67, 85, 86, 122, 123, 124, 125], "address": [0, 74], "http": [0, 21, 36, 72, 73, 111, 115, 122, 124], "openshift": [0, 1, 2, 4, 6, 16, 23, 25, 26, 28, 31, 32, 34, 36, 50, 51, 69, 73, 79, 122, 123, 124, 125], "psap": [0, 1, 2, 6, 73], "github": [0, 1, 3, 73, 74, 94, 111, 122, 123], "io": [0, 2, 28, 29, 32, 36, 44, 45, 46, 111, 123, 124], "index": [0, 2, 4, 84, 85, 86], "html": [0, 4, 36, 122], "thank": 1, "take": [1, 2, 4, 53, 110, 122, 123], "time": [1, 2, 4, 6, 52, 53, 81, 84, 85, 86, 89, 101, 110, 112, 122, 123, 124], "The": [1, 3, 6, 7, 8, 9, 10, 12, 16, 19, 20, 25, 26, 27, 31, 33, 34, 36, 37, 40, 41, 44, 45, 46, 62, 63, 64, 65, 66, 67, 69, 71, 72, 73, 74, 79, 87, 88, 89, 101, 102, 112, 114, 120, 121], "follow": [1, 2, 3, 4, 6, 123], "i": [1, 2, 3, 4, 6, 13, 15, 16, 17, 18, 25, 31, 33, 34, 40, 45, 47, 52, 53, 55, 58, 63, 74, 84, 85, 86, 94, 100, 105, 107, 111, 113, 119, 123, 124, 125], "set": [1, 2, 3, 4, 8, 9, 10, 11, 20, 23, 25, 33, 34, 36, 44, 45, 69, 71, 72, 92, 106, 120, 122, 123, 124, 125], "These": [1, 2, 123, 124], "ar": [1, 2, 4, 5, 6, 25, 31, 34, 44, 45, 46, 86, 98, 122, 123, 124, 125], "mostli": [1, 4, 123], "feel": 1, "free": [1, 123], "propos": 1, "chang": [1, 4, 70, 123, 124], "primari": 1, "goal": [1, 4], "repositori": [1, 3, 6, 13, 73, 123], "serv": [1, 21, 61, 62, 63, 66, 67, 106, 115, 122, 124], "central": 1, "team": [1, 123], "perform": [1, 3, 4, 6, 67, 74, 81, 85, 122, 124, 125], "scale": [1, 2, 4, 6, 34, 71, 83, 84, 85, 86, 122, 123, 124], "test": [1, 4, 6, 72, 73, 74, 81, 83, 84, 85, 86, 89, 94, 112, 122, 124, 125], "autom": [1, 2, 4, 6, 123], "secondari": 1, "offer": 1, "toolbox": [1, 2, 47, 51, 92, 93, 97], "up": [1, 2, 3, 25, 47, 81, 83, 85, 86, 89, 109, 122], "configur": [1, 3, 4, 5, 6, 17, 43, 47, 52, 53, 63, 67, 74, 91, 102, 119, 125], "cluster": [1, 5, 6, 7, 8, 9, 10, 11, 12, 55, 
67, 68, 69, 75, 76, 82, 85, 86, 101, 111, 113, 118, 120, 121, 124], "prepar": [1, 4, 5, 6, 44, 45, 123], "execut": [1, 2, 4, 5, 10, 40, 55, 73, 74, 81, 84, 86, 89, 123, 124, 125], "pr": [1, 2, 3, 73, 74], "need": [1, 3, 123], "approv": [1, 23, 51], "lgtm": 1, "member": 1, "befor": [1, 2, 4, 9, 27, 34, 44, 45, 50, 51, 73, 74, 89, 121, 123, 125], "being": [1, 2, 15, 73, 123], "merg": [1, 3], "should": [1, 2, 3, 4, 6, 19, 29, 31, 63, 64, 66, 73, 81, 84, 86, 88, 101, 108, 110, 111, 112, 114, 123, 124, 125], "have": [1, 2, 4, 6, 34, 96, 122, 123, 124, 125], "proper": [1, 67, 122], "descript": [1, 2, 53, 87, 89, 91, 124], "explain": 1, "problem": 1, "solv": 1, "new": [1, 5, 33, 34, 53, 71, 91, 122, 123], "featur": [1, 2, 4], "introduc": [1, 4], "can": [1, 2, 3, 4, 31, 32, 47, 50, 51, 86, 94, 101, 123, 124, 125], "anyon": 1, "interest": [1, 53], "good": [1, 2], "health": 1, "reserv": 1, "moment": [1, 2, 67, 70, 123], "main": [1, 2, 3, 4, 5, 73, 74, 90, 111, 122, 123, 125], "criteria": 1, "success": 1, "run": [1, 3, 4, 5, 6, 32, 44, 45, 46, 51, 55, 74, 84, 85, 86, 89, 112, 121, 124, 125], "modifi": [1, 2, 3, 34, 123], "becaus": [1, 4, 123], "natur": 1, "we": [1, 2, 4, 123, 125], "t": [1, 2, 4, 21, 24, 64, 65, 86, 94, 115, 117, 123, 124, 125], "all": [1, 2, 3, 4, 6, 23, 30, 33, 52, 66, 84, 86, 95, 97, 98, 106, 122, 123], "path": [1, 13, 18, 19, 22, 25, 27, 43, 65, 67, 72, 73, 74, 84, 85, 86, 112, 113, 114, 116, 121, 124], "In": [1, 2, 3, 4, 34, 123, 124, 125], "order": [1, 2, 4, 123], "save": [1, 4, 44, 45, 123], "unnecessari": 1, "aw": [1, 2, 16, 17, 25, 34, 119, 122], "cloud": [1, 123], "automat": [1, 2, 3, 23, 51], "prow": 1, "must": [1, 2, 4, 33, 50, 51, 74], "manual": [1, 23, 51, 106], "trigger": [1, 4, 6, 89, 123, 125], "align": 1, "nest": 1, "list": [1, 2, 4, 23, 31, 34, 39, 40, 53, 65, 67, 73, 106, 108, 112, 122, 123, 125], "parent": 1, "label": [1, 4, 7, 8, 9, 10, 11, 12, 25, 28, 29, 32, 36, 76, 78, 122], "block": [1, 2, 4, 123], "name": [1, 2, 3, 4, 13, 15, 16, 
18, 19, 20, 22, 23, 26, 27, 28, 29, 30, 31, 32, 34, 35, 36, 40, 43, 44, 45, 46, 47, 52, 53, 63, 64, 65, 66, 67, 69, 71, 72, 73, 74, 81, 84, 85, 86, 87, 88, 89, 91, 101, 106, 109, 110, 111, 112, 113, 114, 116, 118, 120, 121, 122, 123, 124], "file": [1, 2, 3, 4, 6, 13, 15, 16, 18, 19, 21, 22, 27, 28, 31, 39, 40, 41, 43, 44, 45, 47, 65, 67, 73, 74, 81, 84, 85, 86, 89, 90, 91, 93, 95, 97, 103, 113, 114, 115, 116, 121, 122, 123, 124, 125], "us": [1, 2, 3, 4, 7, 8, 9, 10, 12, 13, 16, 18, 19, 20, 23, 25, 27, 28, 29, 30, 32, 34, 35, 43, 44, 45, 46, 47, 62, 67, 72, 73, 74, 81, 84, 85, 86, 89, 98, 102, 106, 111, 112, 113, 114, 118, 120, 121, 122, 123, 125], "yml": [1, 2, 3, 4, 90, 122], "extens": [1, 2, 3, 4, 6, 124], "strive": 1, "best": [1, 4], "practic": [1, 123], "differ": [1, 3, 4, 6, 34, 123], "playbook": [1, 34], "command": [1, 2, 4, 5, 6, 40, 44, 45, 46, 47, 64, 65, 67, 73, 74, 84, 85, 86, 122, 124], "action": 1, "hook": 1, "help": [1, 2, 4, 44, 45, 124, 125], "keep": [1, 2, 3, 86, 89, 123], "consist": [1, 2, 3, 4, 122, 125], "lint": 1, "v": [1, 2, 4], "forc": [1, 2, 34, 100, 122], "color": [1, 4], "c": [1, 2], "config": [1, 4, 5, 6, 23, 47, 52, 53, 73, 74, 90, 122, 123], "role": [1, 5, 6, 15, 29, 31, 32, 44, 47, 71, 90, 98, 101, 122, 124], "try": [1, 2, 61, 63, 67, 123], "avoid": [1, 73, 74, 123], "shell": [1, 40, 124], "task": [1, 3, 6, 10, 122, 123, 124], "much": [1, 4, 123, 125], "possibl": [1, 3, 4, 125], "make": [1, 3, 4, 7, 8, 9, 10, 11, 122, 123], "sure": [1, 3, 123], "o": [1, 2, 124], "pipefail": [1, 2, 124], "part": [1, 4, 16, 123, 124, 125], "whenev": 1, "involv": [1, 123], "forget": 1, "some": [1, 2, 4, 34, 123], "them": [1, 4, 6, 86, 123], "redirect": 1, "artifact_extra_logs_dir": [1, 54, 122, 124], "common": [1, 2, 3, 4], "except": [1, 2, 4, 50, 84, 86, 98, 122, 125], "inlin": [1, 2], "stanza": 1, "debug": 1, "fail": [1, 2, 4, 6, 9, 34, 55, 74, 81, 84, 86, 89, 112, 124], "eg": [1, 2, 3, 4, 85, 123, 125], "gfd": 1, "did": 1, "node": [1, 2, 4, 16, 29, 30, 
31, 34, 36, 58, 69, 70, 71, 75, 77, 78, 122, 123], "msg": [1, 4], "log": [1, 2, 4, 94, 123, 124], "clean": [1, 83, 109, 122, 123], "when": [1, 2, 3, 4, 18, 27, 35, 44, 85, 113, 118, 121, 123, 124, 125], "everyth": [1, 4, 123], "goe": 1, "right": [1, 2, 123, 124], "store": [1, 2, 5, 6, 15, 27, 28, 31, 44, 45, 46, 52, 53, 54, 64, 65, 111, 121, 122, 123, 124, 125], "relev": [1, 125], "inform": [1, 4, 82, 122, 124], "directori": [1, 2, 3, 4, 5, 6, 21, 27, 31, 32, 44, 45, 46, 54, 64, 81, 89, 115, 121, 122, 124, 125], "inspect": [1, 2, 4], "subscript": [1, 23], "statu": [1, 94, 122, 124], "oc": [1, 6, 34, 89, 123, 124], "describ": [1, 2, 4, 6, 124, 125], "oper": [1, 2, 6, 23, 31, 36, 48, 50, 51, 52, 53, 54, 56, 57, 58, 59, 60, 61, 79, 80, 100, 102, 105, 122, 123, 124], "coreo": 1, "com": [1, 36, 71, 73, 111, 121, 122], "gpu": [1, 2, 4, 34, 44, 45, 46, 48, 50, 51, 52, 53, 54, 55, 56, 57, 58, 71, 75, 77, 122, 123], "certifi": 1, "n": [1, 34, 124], "gpu_operator_subscript": 1, "failed_when": 1, "fals": [1, 2, 4, 34, 40, 64, 65, 73, 74, 86, 89, 102, 123], "includ": [1, 4, 5, 44, 45, 53, 91, 122], "troubleshoot": [1, 6, 123, 124], "abov": [1, 2, 4, 6, 31, 74], "an": [1, 2, 4, 6, 8, 13, 15, 16, 21, 23, 25, 26, 33, 45, 64, 65, 80, 97, 98, 115, 122, 123, 124], "exampl": [1, 2, 3, 4, 15, 18, 19, 22, 23, 31, 34, 43, 113, 114, 116, 123], "mark": [1, 2, 4, 20, 45, 120], "ensur": [1, 34, 95, 96, 97, 98, 122, 123], "doesn": [1, 21, 24, 115, 117, 123, 124], "affect": [1, 124], "add": [1, 2, 4, 84, 85, 86, 89], "clear": [1, 27, 121], "ignore_error": 1, "true": [1, 2, 4, 8, 9, 15, 18, 20, 27, 34, 35, 40, 44, 45, 46, 53, 55, 61, 63, 64, 65, 67, 70, 72, 73, 74, 79, 84, 85, 86, 87, 89, 94, 101, 102, 113, 118, 120, 121, 123], "onli": [1, 2, 4, 34, 44, 45, 47, 67, 72, 81, 86, 89, 106, 123, 125], "track": [1, 59, 74, 110, 122, 123], "known": [1, 2, 53], "failur": [1, 4, 74], "ignor": [1, 4, 74], "return": [1, 2, 3, 4, 89, 123], "write": [1, 4, 94, 123, 125], "do": [1, 2, 4, 40, 44, 45, 55, 61, 
63, 67, 73, 74, 97, 122, 123], "delet": [1, 2, 35, 44, 45, 55, 63, 66, 100, 118, 122], "found": [1, 4, 23, 31, 34, 106], "my_resourc": 1, "group": [1, 2, 3, 15, 16, 47, 123], "relat": [1, 3, 61, 63, 67, 122, 123], "modif": [1, 4], "dedic": [1, 2, 5, 16, 26, 122, 124], "commit": [1, 2, 13, 96, 122], "stack": [1, 2, 58, 61, 62, 67, 122], "logic": [1, 2], "1": [1, 2, 4, 5, 9, 16, 34, 44, 45, 46, 74, 81, 84, 85, 86, 112, 123], "2": [1, 2, 4, 5, 10, 16, 44, 86, 123], "script": [1, 2, 6, 86, 122], "3": [1, 5, 112], "integr": 1, "scrip": 1, "nightli": 1, "ci": [1, 2, 5, 6, 73, 74, 84, 86, 91, 94, 122, 125], "squash": 1, "so": [1, 2, 4, 123, 124, 125], "pleas": 1, "fix": 1, "anoth": [1, 2, 3, 123], "hint": 1, "git": [1, 2, 5, 13, 73, 111, 122], "revis": 1, "older": 1, "master": [1, 50], "cut": 1, "split": [1, 4, 125], "two": [1, 2, 4, 123, 125], "simpli": [1, 2, 123], "amend": 1, "most": [1, 3, 6, 74], "recent": 1, "you": [2, 4, 123], "re": [2, 4, 125], "work": [2, 4, 5, 67], "perf": 2, "want": [2, 4], "alreadi": [2, 3, 34, 123], "architectur": [2, 4], "mind": [2, 4, 123], "And": [2, 123], "readi": [2, 101, 123], "perfect": 2, "To": [2, 3], "go": [2, 4, 125], "project_nam": [2, 3], "boilerpl": 2, "code": [2, 3, 4, 5, 6, 91, 122, 123, 124], "compat": 2, "python": [2, 4, 5, 6, 17, 90, 92, 93, 119, 122, 123, 124], "packag": [2, 6, 23, 125], "thing": 2, "simpl": [2, 3, 4, 44, 45, 46, 122, 123, 125], "what": [2, 3, 4, 123, 124], "mandatori": [2, 4], "layer": [2, 5, 6], "contain": [2, 4, 6, 9, 13, 15, 16, 18, 19, 21, 22, 23, 30, 36, 43, 44, 45, 46, 63, 73, 74, 81, 84, 85, 86, 89, 102, 113, 114, 115, 116, 122, 123, 124], "entrypoint": [2, 3, 6, 124], "interact": [2, 6, 123], "def": [2, 3, 4, 123, 124], "prepare_ci": [2, 6], "namespac": [2, 4, 7, 8, 9, 10, 11, 12, 13, 19, 20, 21, 22, 23, 24, 27, 28, 29, 30, 31, 32, 43, 44, 45, 46, 50, 51, 52, 53, 55, 58, 62, 63, 64, 65, 66, 67, 69, 70, 73, 74, 81, 84, 85, 86, 87, 88, 89, 100, 105, 107, 109, 110, 111, 112, 114, 115, 116, 117, 120, 
121, 122, 124], "pass": [2, 3, 4, 23, 25, 33, 34, 44, 45, 46, 47, 67, 72, 74, 102, 112, 121, 123], "test_ci": 2, "from": [2, 3, 4, 5, 13, 23, 25, 34, 44, 45, 47, 49, 50, 51, 54, 56, 74, 79, 80, 86, 89, 102, 111, 122, 125], "cleanup_clust": 2, "mute": 2, "restor": 2, "its": [2, 4, 6, 32, 34, 102, 104, 107, 108, 122, 123, 124, 125], "origin": [2, 4], "state": [2, 4, 6, 48, 61, 62, 87, 99, 101, 112, 122, 123, 124], "_not_": 2, "requir": [2, 4, 6, 13, 16, 34, 45, 67, 71, 125], "bare": [2, 5], "metal": [2, 5], "ignore_secret_path": 2, "apply_preset_from_pr_arg": 2, "generate_plots_from_pr_arg": 2, "gener": [2, 4, 6, 44, 45, 46, 47, 81, 85, 86, 89, 90, 91, 92, 93, 108, 109, 110, 112, 122, 123, 124, 125], "report": [2, 4, 6, 101, 125], "argument": [2, 3, 47, 73, 123, 124], "download_and_generate_visu": 2, "export": [2, 4, 40, 73, 84, 85, 86], "export_artifact": 2, "artifact_dir": [2, 123], "test_step": 2, "plot": [2, 5, 125], "class": [2, 4, 5, 20, 120, 121], "launch": [2, 4, 5, 74, 84, 85, 86, 112, 125], "__init__": 2, "self": [2, 3, 4, 102, 123, 124], "cleanup_cluster_ci": 2, "print": [2, 47], "rather": 2, "than": [2, 31], "open": [2, 4], "pager": 2, "fire": [2, 6], "displai": [2, 4], "lambda": 2, "line": [2, 3, 4, 6, 31, 123], "out": [2, 4, 101], "__name__": [2, 4], "__main__": 2, "sy": [2, 3], "exit": [2, 4, 47, 94, 106], "subprocess": 2, "calledprocesserror": 2, "e": [2, 79], "error": [2, 4], "f": [2, 4, 124], "cmd": 2, "returncod": 2, "keyboardinterrupt": 2, "empti": [2, 23, 25, 62, 67, 74, 89, 102, 123], "after": [2, 3, 4, 38, 74, 89, 122, 123, 124], "interrupt": 2, "ci_preset": [2, 123], "preset": [2, 4, 5, 39, 40, 122], "appli": [2, 4, 30, 32, 34, 40, 44, 45, 47, 71, 122, 123, 124], "null": [2, 4, 123], "singl": [2, 47, 122, 123, 125], "type": [2, 4, 13, 16, 18, 23, 34, 47, 53, 55, 63, 65, 71, 74, 81, 84, 85, 86, 94, 106, 113, 123], "ocp": [2, 125], "tag": [2, 4, 13, 25, 43, 73, 74, 81, 84, 85, 86, 89, 102, 111, 123], "ticketid": 2, "light_clust": 2, 
"deploy_clust": 2, "target": [2, 28, 32, 112, 123], "cluster_light": 2, "light": [2, 123], "extend": [2, 44, 45, 123], "secret": [2, 8, 9, 15, 18, 19, 22, 43, 73, 74, 84, 85, 86, 91, 113, 114, 116, 122], "dir": [2, 13], "od": [2, 84, 86, 100, 102, 103, 104, 105, 108, 122, 123], "env_kei": 2, "psap_ods_secret_path": 2, "properti": [2, 4, 18, 19, 22, 43, 84, 85, 86, 113, 114, 116], "ldap": [2, 18, 22, 35, 84, 85, 86, 113, 116, 118, 122], "s3_ldap_password_fil": 2, "s3_ldap": 2, "password": [2, 15, 16, 123], "keep_cluster_password_fil": 2, "get_clust": 2, "brew_registry_redhat_io_token_fil": 2, "brew": [2, 123], "registri": [2, 46, 111, 121, 123], "redhat": [2, 46, 79, 100, 103, 104, 105, 121, 123], "token": [2, 4, 72], "opensearch_inst": 2, "opensearch": [2, 4, 6, 22, 43, 116, 122, 125], "aws_credenti": 2, "awscr": 2, "git_credenti": 2, "credenti": [2, 16, 17, 27, 43, 119, 121], "metal_profil": 2, "manag": [2, 36, 101, 102, 122, 123], "name_prefix": 2, "fine": [2, 4, 44, 45, 46, 122], "tune": [2, 4, 44, 45, 46, 122], "machineset": [2, 34, 71, 123], "base_domain": 2, "rhperfscal": 2, "org": 2, "4": [2, 4, 5, 16, 36, 69, 79, 122, 123], "15": [2, 16], "9": [2, 50, 51], "region": [2, 16, 25, 34, 123], "u": [2, 16], "west": 2, "control_plan": 2, "m6a": 2, "xlarg": [2, 16, 34], "worker": [2, 16, 29, 36, 71, 122], "2xlarg": [2, 4], "count": [2, 8, 9, 10, 11, 69, 74, 112], "sutest": [2, 123], "is_met": 2, "lab": [2, 125], "comput": [2, 16, 84, 85, 86, 123], "workload": [2, 4, 6, 44, 45], "pod": [2, 4, 10, 28, 29, 30, 32, 36, 44, 45, 46, 71, 73, 74, 81, 84, 85, 86, 89, 104, 112, 121, 122, 124], "m6i": 2, "taint": [2, 34, 71], "kei": [2, 4, 7, 8, 9, 10, 11, 12, 19, 25, 30, 33, 41, 74, 84, 85, 86, 106, 114, 122, 123, 124, 125], "valu": [2, 3, 4, 7, 8, 9, 10, 11, 12, 13, 16, 19, 20, 22, 23, 25, 27, 28, 29, 30, 32, 33, 34, 36, 40, 41, 43, 44, 45, 46, 47, 50, 51, 52, 53, 55, 58, 63, 64, 65, 67, 69, 70, 71, 72, 73, 74, 81, 84, 85, 86, 87, 89, 94, 100, 101, 102, 103, 105, 106, 107, 
111, 112, 114, 116, 120, 121, 122, 123, 124, 125], "ye": [2, 7, 8, 9, 10, 12, 123], "effect": [2, 123], "noschedul": 2, "driver": [2, 17, 86, 119, 122, 123], "cleanup_on_exit": 2, "matbench": [2, 4], "prom_workload": 2, "config_fil": [2, 4, 47], "download": [2, 4, 27, 86, 121, 122, 125], "mode": [2, 4, 23, 27, 32, 51, 72, 112, 121, 123, 124], "prefer_cach": [2, 4], "url": [2, 4, 27, 43, 86, 121], "url_fil": [2, 4], "result": [2, 4, 124, 125], "artifact": [2, 4, 6, 44, 45, 54, 64, 65, 73, 74, 84, 85, 86, 89, 122, 123, 124, 125], "save_to_artifact": [2, 4], "test_directori": [2, 4], "lt": [2, 5, 125], "horreum": [2, 4], "test_nam": [2, 4, 73, 85], "enabl": [2, 4, 23, 44, 45, 52, 53, 94, 106, 122, 123], "enabled_on_replot": [2, 4], "fail_test_on_fail": [2, 4], "instanc": [2, 4, 16, 22, 34, 71, 86, 116], "smoke": [2, 4], "index_prefix": [2, 4], "prom_index_suffix": [2, 4], "prom": [2, 4], "regression_analys": [2, 4], "regress": [2, 5, 6, 123, 125], "analys": [2, 4, 6, 44, 45, 123], "fail_test_on_regress": [2, 4], "bucket": [2, 19, 73, 74, 84, 85, 86, 114], "rhoai": [2, 4, 6, 44, 85, 99, 102, 106, 122, 123, 125], "cpt": [2, 4, 5, 125], "path_prefix": 2, "dest": 2, "secrets_loc": 2, "or_env": 2, "defin": [2, 3, 4, 6, 15, 16, 18, 19, 28, 32, 79, 97, 98, 101, 103, 104, 108, 113, 114, 122, 123, 125], "string": 2, "raise_except": 2, "endif": 2, "s3_ldap_password_loc": 2, "necessari": [2, 3, 123], "abl": [2, 4, 15, 18, 113], "e2": [2, 123], "thei": [2, 4, 6, 123, 124, 125], "shouldn": [2, 4], "now": [2, 3, 72], "boiler": 2, "plate": 2, "place": [2, 29, 122, 123], "step": [2, 3, 4, 6, 123], "develop": [2, 4, 123], "just": [2, 94], "fill": [2, 29, 122], "gap": 2, "method": [2, 4, 65, 67, 112], "accord": [2, 123], "collect": [2, 89], "cleanup": [2, 12, 60, 122, 123], "again": 2, "One": 2, "provid": [2, 4, 6, 16, 18, 35, 68, 70, 84, 85, 86, 113, 118, 122, 123, 124], "low": 2, "level": [2, 6, 84, 85, 86, 122], "import": [2, 3, 4, 123, 125], "configure_log": 2, "well": [2, 6, 53, 
124], "bit": [2, 5], "prepare_rhoai_mod": 2, "illustr": [2, 123], "below": [2, 4, 125], "formal": 2, "come": 2, "notic": 2, "ident": [2, 16, 18, 35, 84, 85, 86, 113, 118], "across": 2, "been": [2, 3, 4, 72, 123, 125], "move": 2, "easier": 2, "reus": [2, 3, 123], "share": [2, 52, 53, 122, 123], "mean": [2, 3, 123, 125], "risk": 2, "unnot": 2, "bug": 2, "updat": [2, 36, 73, 74, 106, 122, 123], "With": [2, 4, 50, 51], "question": 2, "duplic": [2, 4, 44, 45], "direct": 2, "easi": [2, 4, 124], "rapidli": 2, "evolv": 2, "function": [2, 3, 4, 6, 123, 124], "system": [2, 5, 17, 70, 74, 84, 85, 86, 89, 119], "from_config": [2, 123], "capture_stdout": [2, 123], "capture_stderr": 2, "check": [2, 4, 75, 76, 122, 123, 125], "protect_shel": 2, "cwd": 2, "none": [2, 4, 53, 63, 74, 84, 86, 96, 106, 122, 123], "stdin_fil": 2, "log_command": 2, "allow": [2, 6, 123, 125], "captur": [2, 4, 6, 14, 44, 45, 48, 61, 62, 74, 82, 84, 85, 86, 87, 89, 99, 122, 125], "stdout": [2, 47, 123, 124], "stderr": 2, "chane": 2, "protect": 2, "bash": [2, 123], "safeti": [2, 4], "flag": [2, 3, 4, 13, 31, 34, 45, 74, 96, 102, 122, 123], "errexit": 2, "nounset": 2, "errtrac": 2, "stdin": 2, "run_toolbox": [2, 3, 6, 34, 123, 124], "artifact_dir_suffix": 2, "run_kwarg": 2, "mute_stdout": 2, "kwarg": 2, "cli": [2, 5, 6, 124], "text": [2, 4], "disabl": [2, 4, 89, 94, 102, 123], "append": 2, "suffix": [2, 18, 47, 113, 123], "distinguish": [2, 4], "call": [2, 3, 4, 5, 47, 67, 72, 125], "run_toolbox_from_config": [2, 123], "prefix": [2, 4, 8, 9, 10, 11, 18, 28, 29, 31, 32, 47, 83, 84, 85, 86, 89, 112, 113, 123], "show_arg": [2, 47], "extra": [2, 6, 47, 53, 87, 89, 123, 125], "overrid": 2, "templat": [2, 3, 112, 123], "would": [2, 123], "z": 2, "run_and_catch": 2, "chain": 2, "multipl": [2, 4, 6, 31, 74, 122, 123, 125], "without": [2, 3, 4, 123, 125], "swallow": 2, "exc": 2, "kserv": [2, 4, 5, 106], "capture_operators_st": [2, 122], "dict": [2, 4, 8, 9, 10, 11, 47, 106, 123], "capture_environ": [2, 122], "rais": 
[2, 4], "context": [2, 13, 123], "parallel": [2, 5, 10, 74, 84, 85, 86, 122], "If": [2, 4, 8, 9, 13, 15, 18, 23, 25, 27, 33, 34, 35, 40, 44, 45, 46, 51, 53, 55, 61, 62, 63, 64, 65, 67, 70, 73, 74, 81, 84, 85, 86, 89, 94, 101, 102, 106, 113, 118, 121, 123], "exit_on_except": 2, "process": [2, 4, 5, 6, 74, 85, 123], "catch": [2, 4], "otherwis": [2, 34, 45, 81, 89], "dedicated_dir": 2, "directli": [2, 3, 123, 125], "base": [2, 4, 6, 13, 23, 34, 90, 92, 93, 122, 124, 125], "paramet": [2, 4, 5, 6, 123], "object": [2, 4, 36, 44, 63], "prepare1": 2, "delai": [2, 74, 84, 85, 86, 123], "scale_up_sutest": [2, 123], "test_set": [2, 123], "get_config": [2, 123], "fine_tun": [2, 4, 5, 123], "prepare2": 2, "prepare_gpu": 2, "prepare_namespac": 2, "prepare3": 2, "preload_image_yyi": 2, "preload_image_xxx": 2, "preload_image_zzz": 2, "thread": [2, 123], "safe": [2, 4], "access": [2, 4, 16, 19, 27, 74, 114, 121, 123], "storag": [2, 4, 5, 20, 27, 125], "prefer": [2, 4], "which": [2, 4, 6, 19, 22, 23, 27, 29, 30, 32, 43, 47, 50, 51, 52, 53, 55, 58, 62, 63, 64, 65, 66, 67, 69, 73, 74, 81, 84, 86, 87, 88, 89, 107, 112, 114, 116, 121, 123, 125], "isn": 2, "reli": [2, 4, 6, 125], "variabl": [2, 4, 47, 73, 74, 97, 98, 122], "each": [2, 4, 9, 52, 74, 112, 123, 125], "nnn__group__command": 2, "howev": [2, 4], "mani": [2, 9], "sometim": [2, 123], "number": [2, 4, 8, 9, 10, 11, 16, 18, 34, 36, 44, 45, 46, 52, 63, 67, 69, 71, 72, 74, 81, 84, 85, 86, 89, 112, 113, 122, 123], "increas": 2, "becom": [2, 74], "hard": [2, 123], "understand": [2, 4, 31, 123, 124], "subdirectori": 2, "nextartifactdir": [2, 123], "set_namespace_annot": 2, "download_data_sourc": 2, "json": [2, 4, 73, 74, 106], "format": [2, 4, 31, 123], "hold": 2, "set_config": [2, 4, 123], "conveni": 2, "doe": [2, 4, 123], "field": [2, 4, 63, 123], "exist": [2, 4, 15, 21, 24, 97, 115, 117, 122, 124], "deploy": [2, 9, 10, 48, 67, 82, 99, 101, 102, 103, 104, 107, 108, 122, 123, 124], "pre": [2, 123], "install_servicemesh": 2, "instal": 
[2, 25, 54, 100, 101, 102, 105, 122, 123], "servicemesh": 2, "depend": [2, 4, 5], "uninstall_servicemesh": 2, "uninstal": 2, "is_rhoai_instal": 2, "tell": [2, 3, 4, 123], "current": [2, 4, 25, 39, 41, 42, 54, 62, 67, 122, 123], "token_fil": 2, "unless": [2, 33, 34], "assum": [2, 6, 17, 119], "ha": [2, 3, 4, 6, 34, 44, 45, 72, 75, 76, 81, 89, 122, 123, 124, 125], "deploi": [2, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 29, 43, 46, 50, 51, 52, 53, 56, 57, 58, 59, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 79, 80, 81, 84, 86, 87, 88, 89, 101, 102, 107, 111, 113, 114, 115, 116, 117, 119, 120, 122, 123, 124], "nfd": [2, 5, 79, 80], "wait_readi": 2, "wait": [2, 9, 15, 18, 38, 53, 57, 58, 74, 77, 78, 89, 101, 107, 108, 113, 122], "option": [2, 4, 47, 50, 51, 73, 74, 91], "addit": [2, 125], "enable_time_shar": [2, 122], "capabl": 2, "via": [2, 4, 6, 47, 123, 125], "extend_metr": [2, 122], "wait_metr": 2, "metric": [2, 31, 53], "dcgm": [2, 53], "compon": [2, 4, 6, 28, 32, 53, 106, 122, 125], "wait_stack_deploi": [2, 122], "final": [2, 4, 123], "cleanup_gpu_oper": 2, "undeploi": [2, 35, 56, 66, 70, 80, 105, 118, 122], "add_toler": 2, "toler": [2, 30, 84, 85, 86], "daemonset": [2, 30], "specif": [2, 4, 6, 34], "ani": [2, 4, 15, 33, 40, 61, 74, 86, 94, 122, 124], "previous": 2, "multi": [2, 31, 111], "user": [2, 15, 16, 18, 19, 43, 72, 74, 84, 85, 86, 87, 113, 114, 122, 123], "insid": [2, 13, 44, 45, 46, 74, 112, 123], "Their": [2, 3], "initi": [2, 4], "synchron": [2, 74, 84, 86], "barrier": 2, "termin": [2, 4, 44, 123], "s3": [2, 19, 73, 114, 122, 125], "server": [2, 5, 19, 21, 24, 74, 84, 85, 86, 124], "local": [2, 3, 6, 13, 15, 73, 74, 112, 124], "post": [2, 4, 5, 6, 44, 45, 89, 123, 124], "prepare_base_image_contain": 2, "imag": [2, 4, 6, 9, 13, 30, 37, 44, 45, 46, 50, 63, 73, 74, 81, 84, 85, 86, 89, 102, 108, 111, 121, 122, 123], "given": [2, 4, 6, 15, 23, 29, 33, 34, 37, 41, 47, 62, 71, 73, 74, 87, 88, 89, 122, 123], "buildconfig": [2, 13], "fetch": [2, 73, 74, 111], 
"apply_prefer_pr": 2, "pr_number": [2, 73], "detect": 2, "homelab_ci": 2, "pull_numb": 2, "cannot": [2, 70, 123], "delete_istag": 2, "istag": [2, 73, 74], "rebuild_driver_imag": 2, "refresh": 2, "base_imag": 2, "cluster_scale_up": [2, 123], "user_count": [2, 74, 84, 85, 86, 123], "tool": [2, 123], "minio": [2, 19, 74, 84, 85, 86, 114, 122], "redi": [2, 24, 74, 84, 86, 117, 122], "serviceaccount": [2, 73, 74], "prepare_matbench": 2, "containerfil": 2, "It": [2, 4, 6, 123, 124, 125], "pip": 2, "matrixbenchmark": [2, 4, 6, 125], "results_dirnam": 2, "replot": 2, "against": [2, 4, 123, 125], "generate_from_dir": 2, "generate_lt": 2, "accept": [2, 123], "expect": [2, 4, 36, 112, 123, 124], "further": [2, 125], "standard": 3, "ansibl": [3, 5, 6, 17, 90, 92, 97, 98, 119, 122, 124], "wire": 3, "py": [3, 4, 5, 6, 34, 81, 86, 112, 123, 124], "interfac": [3, 6, 72, 124], "project": [3, 4, 5, 33, 40, 62, 67, 72, 81, 89, 97, 111, 112, 122, 123, 125], "structur": [3, 4, 123], "guidelin": [3, 5], "new_role_nam": 3, "readm": 3, "md": [3, 69], "j2": [3, 5, 123], "var": [3, 123], "support": [3, 72], "root": [3, 4, 34], "folder": 3, "repo": [3, 5, 13, 73, 74, 111], "generate_ansible_default_set": [3, 122], "refer": 3, "referenc": 3, "edit": 3, "core": [3, 4, 5, 36, 44, 45, 46, 122], "librari": [3, 5, 123], "ansible_toolbox": 3, "runansiblerol": 3, "ansiblerol": [3, 124], "ansiblemappedparam": [3, 124], "ansibleconst": 3, "ansibleskipconfiggener": 3, "new_role_parameter_1": 3, "new_role_parameter_n": 3, "arg": [3, 4, 123, 124], "first": [3, 4, 18, 23, 31, 34, 74, 106, 113, 123], "nth": 3, "valid": [3, 67, 122], "here": [3, 4, 123, 125], "role_nam": 3, "where": [3, 4, 13, 15, 16, 20, 21, 24, 25, 27, 28, 31, 32, 44, 45, 46, 52, 53, 64, 65, 70, 72, 73, 74, 84, 85, 86, 100, 101, 105, 109, 110, 111, 112, 115, 117, 120, 121, 123, 124], "implement": [3, 123], "specifi": [3, 13, 72], "map": 3, "usual": [3, 4], "one": [3, 4, 34, 81, 85, 89, 106, 122, 123, 124, 125], "same": [3, 4, 44, 45, 84, 
86, 89, 123, 125], "equival": [3, 123], "embed": 3, "dump_prometheu": 3, "reset_prometheu": 3, "anymor": 3, "load": [3, 4, 44, 45, 46, 72, 109, 112, 122], "travers": [3, 4], "titl": [3, 96, 122], "simplifi": 3, "toolbox_fil": 3, "topsail_dir": 3, "glob": 3, "toolbox_modul": 3, "__import__": 3, "toolbox_nam": 3, "toolbox_class": 3, "getattr": 3, "onc": [3, 4], "ad": [3, 4], "refus": 3, "correct": [3, 123], "date": 3, "like": [3, 4], "within": [4, 6, 47], "yaml": [4, 5, 6, 87, 89, 123, 124], "copi": [4, 5, 64, 65], "sever": 4, "sub": [4, 123], "built": [4, 6, 13], "matrix_benchmark": [4, 5], "helper": [4, 5, 123], "local_stor": 4, "helpers_stor": 4, "basestor": 4, "cache_filenam": 4, "important_fil": 4, "artifact_dirnam": 4, "artifact_path": 4, "parse_alwai": 4, "parse_onc": 4, "lts_payload_model": 4, "models_lt": 4, "payload": [4, 125], "generate_lts_payload": 4, "lts_parser": 4, "models_kpi": 4, "get_kpi_label": 4, "upper": 4, "lower": 4, "push": [4, 13], "turn": 4, "store_simpl": 4, "register_custom_parse_result": 4, "parse_directori": 4, "search": [4, 28, 31, 32], "exit_cod": 4, "pars": [4, 6, 124], "begin": [4, 45, 84, 86], "consid": [4, 86], "done": [4, 81, 123, 125], "wai": [4, 6, 122, 123], "cache_fil": 4, "matbench_store_ignore_cach": 4, "reload": 4, "els": [4, 73, 123], "parse_lt": 4, "organ": [4, 5], "improv": 4, "flexibl": [4, 123], "wrt": 4, "environ": [4, 5, 6, 14, 44, 45, 47, 73, 74, 122, 123], "cach": [4, 44, 45, 125], "your": [4, 5, 123], "taken": 4, "account": [4, 16, 53], "afterward": 4, "unset": [4, 63], "simplenamespac": 4, "By": [4, 28, 32, 123], "choic": [4, 123], "weakli": 4, "fly": 4, "care": 4, "properli": [4, 17, 119], "propag": 4, "pydant": 4, "cumbersom": 4, "maintain": 4, "could": [4, 123], "retest": 4, "dirnam": [4, 73], "test_config": 4, "helpers_store_pars": 4, "parse_test_config": 4, "test_uuid": 4, "parse_test_uuid": 4, "charg": [4, 6, 123, 124, 125], "transform": [4, 44, 45], "heavili": 4, "obj": 4, "dictionari": [4, 123], 
"attribut": 4, "inner": 4, "__dict__": 4, "programmat": 4, "deleg": 4, "submethod": 4, "typic": 4, "look": [4, 25, 47, 81, 85, 86, 89], "remov": [4, 15, 33, 67, 106], "readabl": 4, "finish_reason": 4, "_parse_finish_reason": 4, "ignore_file_not_found": 4, "register_important_fil": 4, "fine_tuning_run_fine_tuning_dir": 4, "pod_def": 4, "container_terminated_st": 4, "exitcod": 4, "note": [4, 6], "effici": [4, 124], "slower": 4, "grep": 4, "abil": [4, 123], "xxx": 4, "match": 4, "decor": 4, "filenotfounderror": 4, "instead": [4, 8, 13, 44, 45, 46, 70, 102, 123, 125], "resili": 4, "happen": 4, "often": [4, 125], "while": 4, "investig": [4, 124], "unexpect": 4, "grace": 4, "degrad": 4, "rest": [4, 123], "exploit": 4, "caus": [4, 101], "between": [4, 6, 74, 84, 85, 86, 123], "import_set": 4, "y": [4, 124], "alwai": [4, 123, 124], "even": [4, 123, 125], "about": [4, 82, 94, 122, 123], "cluster_capture_env_dir": 4, "__cluster__capture_environ": 4, "__fine_tuning__run_fine_tuning_job": 4, "rhods_capture_st": 4, "__rhods__capture_st": 4, "dynam": [4, 123, 125], "popul": [4, 8, 9, 10, 16], "lookup": [4, 41, 47, 89], "nnn__": [4, 123], "easili": 4, "hardcod": 4, "dure": [4, 6, 83, 122], "artifacts_dirnam": 4, "resolv": [4, 13], "artifacts_path": 4, "blob": 4, "uuid": 4, "_ansibl": 4, "ocp_vers": 4, "src": [4, 124], "config_fin": 4, "plai": 4, "rhod": [4, 5, 81, 82, 85, 86, 89, 123], "createdat": 4, "proce": 4, "miss": [4, 34, 103], "either": [4, 6, 13, 50, 122, 125], "A": [4, 5, 6, 16, 39, 40, 67, 89, 94], "warn": [4, 67], "issu": 4, "long": [4, 55, 123, 125], "term": [4, 125], "indic": [4, 43, 125], "continu": [4, 122, 125], "complex": 4, "metadata": 4, "enforc": [4, 123], "wa": [4, 56, 62, 64, 65, 67, 80, 87, 109, 122, 124], "attempt": 4, "toward": 4, "stabil": 4, "convinc": 4, "still": [4, 123], "pipelin": [4, 5, 125], "histor": [4, 123, 125], "reason": [4, 94], "correctli": [4, 123, 124], "our": 4, "solut": 4, "timestamp": [4, 31], "unit": [4, 45], "openmetr": 4, "idea": 
[4, 123], "kserve_container_cpu_usage_max": 4, "max": [4, 31, 36, 72, 122], "cpu": [4, 31, 44, 45, 46, 71, 112], "usag": [4, 31], "container_cpu_usage_seconds_tot": [4, 31], "instance_typ": [4, 34, 122, 123], "g5": 4, "accelerator_nam": 4, "nvidia": [4, 50, 51, 52, 53, 58, 71], "a10g": 4, "16": [4, 72, 123], "0": [4, 18, 44, 84, 85, 86, 113, 123], "rc": 4, "6": 4, "rhoai_vers": 4, "13": [4, 123], "rc1": [4, 123], "2024": [4, 5], "09": 4, "02": 4, "model_nam": [4, 44, 45, 46, 63], "flan": 4, "t5": 4, "small": [4, 85, 86, 123], "964734477279039": 4, "design": [4, 123, 124], "reconsid": 4, "futur": 4, "extract": [4, 64, 65, 122, 124], "matbench_model": 4, "higherbett": 4, "kpimetadata": 4, "dataset": [4, 27, 44, 45, 46, 72, 121, 122, 123], "per": [4, 36, 85, 86, 122, 125], "second": [4, 9, 10, 55, 74, 85, 89], "dataset_tokens_per_second_per_gpu": 4, "lts_payload": 4, "annot": [4, 33, 122], "train": 4, "lowerbett": 4, "ignoredforregress": 4, "formatt": 4, "2f": 4, "divisor": 4, "formatdivisor": 4, "1024": [4, 8, 72], "gb": [4, 13], "kpi_settings_vers": 4, "exclusivemodel": 4, "str": [4, 94], "semver": 4, "accelerator_typ": 4, "accelerator_count": 4, "int": [4, 16, 18, 55, 63, 74, 81, 84, 85, 86, 113], "tuning_method": 4, "per_device_train_batch_s": [4, 123], "batch_siz": 4, "max_seq_length": 4, "container_imag": [4, 44, 45, 46], "replica": [4, 9, 10, 34, 52, 63], "accelerators_per_replica": 4, "lora_rank": 4, "lora_dropout": 4, "float": [4, 13, 85], "lora_alpha": [4, 123], "lora_modul": 4, "ci_engin": 4, "run_id": 4, "test_path": 4, "eventu": 4, "combin": 4, "getkpismodel": 4, "replac": 4, "endeavor": 4, "actual": [4, 5, 98, 122], "sent": [4, 94], "databas": [4, 28, 32, 103, 104, 122], "compos": 4, "three": [4, 123], "lts_schema_vers": 4, "train_tokens_per_second": 4, "dataset_tokens_per_second": 4, "gpu_hours_per_million_token": 4, "train_tokens_per_gpu_per_second": 4, "train_samples_per_second": 4, "train_runtim": 4, "train_steps_per_second": 4, 
"avg_tokens_per_sampl": 4, "generate_lts_metadata": 4, "generate_lts_result": 4, "On": [4, 123], "purpos": 4, "strict": 4, "particular": 4, "might": [4, 123], "henc": [4, 123], "incomplet": 4, "abort": [4, 74], "guarante": 4, "twice": 4, "conform": 4, "output": [4, 45, 72], "matrix": 4, "internal_matrix": 4, "pretti": 4, "llm": [4, 44, 72, 122], "Then": [4, 31], "checkup": 4, "record": 4, "etc": [4, 85, 124, 125], "full": [4, 123], "kind": [4, 112], "plotli": [4, 125], "page": 4, "dash": [4, 125], "framework": [4, 124], "receiv": [4, 123], "do_plot": 4, "ordered_var": 4, "setting_list": 4, "cfg": 4, "figur": 4, "fig": 4, "experi": 4, "touch": [4, 123], "all_record": 4, "filter": [4, 72], "entri": [4, 8, 23, 122], "data": [4, 27, 81, 85, 86, 87, 88, 89, 103, 108, 121, 122], "mai": [4, 123], "written": [4, 123], "count_record": 4, "select": [4, 23, 29, 51], "llama3": 4, "virtual_us": 4, "32": [4, 71], "deployment_typ": 4, "raw": 4, "knativ": 4, "case": [4, 34, 84, 86, 125], "give": [4, 8, 9, 10, 11, 16, 20, 22, 30, 34, 36, 41, 42, 63, 66, 71, 73, 74, 116, 120, 122], "get_nam": 4, "legend": 4, "gui": 4, "sens": 4, "highlight": 4, "aspect": [4, 123], "messi": 4, "dirti": 4, "lot": [4, 122], "graph_obj": 4, "tempt": 4, "mix": [4, 123], "element": [4, 123, 124], "both": [4, 72, 123], "loop": [4, 123], "express": 4, "panda": 4, "datafram": 4, "df": 4, "pd": 4, "generatethroughputdata": 4, "cfg__model_nam": 4, "px": 4, "hover_data": 4, "column": 4, "x": 4, "throughput": 4, "tpot_mean": 4, "model_testnam": 4, "pattern": 4, "phase": 4, "shape": 4, "similar": [4, 122, 125], "header": 4, "h1": 4, "latenc": 4, "plot_and_text": 4, "detail": [4, 123], "br": 4, "distribut": [4, 112], "mention": 4, "flavor": [4, 123], "box_plot": 4, "show_text": 4, "h2": 4, "revers": 4, "sort": 4, "successfulli": 4, "predefin": [4, 123], "recommend": [4, 6], "id": [4, 67, 72, 73], "llm_test": 4, "last": [4, 102, 123, 124], "workflow": [4, 123], "orchestr": [4, 124], "section": [4, 123], "few": 4, 
"comparison_kei": 4, "histori": [4, 5, 31], "compar": 4, "ai": [4, 6], "image_tag": [4, 111], "ignored_kei": 4, "runtime_imag": 4, "runtim": [4, 10, 55, 63, 66, 67], "sorting_kei": 4, "how": [4, 5, 55], "tabl": 4, "ignored_entri": 4, "8": [4, 123], "128": 4, "contribut": 5, "pull": [5, 9], "request": [5, 27, 31, 34, 44, 45, 46, 72, 112, 121], "review": [5, 124], "style": 5, "job": [5, 10, 44, 45, 46, 74, 94, 112, 122], "launcher": [5, 6], "creation": [5, 16, 18, 112, 113], "pool": [5, 36], "engin": [5, 6, 124, 125], "apologi": 5, "creat": [5, 8, 9, 10, 11, 15, 16, 18, 19, 20, 21, 24, 27, 30, 44, 45, 46, 49, 69, 83, 86, 110, 112, 113, 114, 115, 117, 120, 121, 122, 124], "reusabl": [5, 123], "mortem": [5, 6, 44, 45, 123, 124], "visual": [5, 6, 123, 124], "command_arg": [5, 123], "sh": [5, 123], "test_": 5, "prepare_": 5, "start": [5, 31, 74, 86, 89, 123], "build": [5, 13, 74, 122], "modul": [5, 6, 17, 119, 125], "prepare_rhoai": 5, "gpu_oper": 5, "prepare_gpu_oper": 5, "local_ci": 5, "prepare_user_pod": [5, 123], "default": [5, 7, 8, 9, 10, 11, 12, 13, 16, 19, 20, 22, 23, 25, 27, 28, 29, 30, 32, 34, 36, 40, 43, 44, 45, 46, 50, 51, 52, 53, 55, 58, 63, 64, 65, 67, 69, 70, 71, 72, 73, 74, 81, 84, 85, 86, 87, 89, 90, 92, 94, 100, 101, 102, 103, 105, 106, 107, 111, 112, 114, 116, 120, 121, 122, 123], "parser": [5, 125], "model": [5, 44, 45, 46, 63, 64, 65, 66, 67, 72, 122, 123, 125], "kpi": [5, 6, 125], "analyz": [5, 6, 125], "busy_clust": 5, "kepler": 5, "kubemark": 5, "kwok": 5, "llm_load_test": 5, "nfd_oper": 5, "notebook": [5, 89, 108], "schedul": [5, 44, 45, 46], "dec": 5, "01": 5, "c8e4b1e9": 5, "red": 6, "hat": 6, "scalabl": 6, "platform": [6, 36, 122], "control": [6, 34, 111, 123], "toolbx": 6, "softwar": 6, "topsail_build": 6, "reachabl": 6, "admin": [6, 15, 16, 18, 19, 113, 114, 122, 123], "privileg": [6, 123], "top": [6, 94, 124], "end": [6, 31, 84, 85, 86, 94, 122, 123, 125], "thin": 6, "googl": [6, 123], "discoveri": 6, "upload": [6, 125], "run_ci": 6, 
"possibli": 6, "also": [6, 123, 124], "mutat": 6, "wrapper": 6, "over": [6, 112, 125], "un": [7, 122], "busi": [7, 8, 9, 10, 11, 12, 122], "namespace_label_kei": [7, 8, 9, 10, 12], "locat": [7, 8, 9, 10, 12, 74, 84, 85, 86], "namespace_label_valu": [7, 8, 9, 10, 12], "configmap": [8, 9, 52, 53, 55, 122], "10": [8, 10, 11, 16, 36, 44, 45, 46], "as_secret": 8, "entry_values_length": 8, "length": 8, "entry_keys_prefix": 8, "servic": [9, 63, 64, 65, 66, 67, 102, 122, 124], "image_pull_back_off": 9, "crash_loop_back_off": 9, "integ": 9, "inf": 10, "120": 10, "show": [12, 40, 94, 106, 122, 124], "busy": [12, 122], "publish": [13, 122], "quai": [13, 44, 45, 111, 122], "dockerfil": [13, 122], "image_local_nam": 13, "remote_repo": 13, "remot": 13, "undefin": 13, "remote_auth_fil": 13, "auth": 13, "git_repo": [13, 73, 111], "sourc": [13, 15, 27, 121, 123, 124], "dockerfile_path": 13, "git_ref": [13, 73, 111], "ref": [13, 73, 74], "branch": [13, 111], "hash": 13, "inject": 13, "context_dir": 13, "memori": [13, 31, 44, 45, 46, 71], "from_imag": 13, "from_imagetag": 13, "imagestreamtag": 13, "htpasswd": [15, 16, 122], "Will": [15, 21, 24, 115, 117, 124], "other": [15, 34, 40, 44, 45, 63, 123, 124], "oauth": [15, 18, 35, 113, 118, 122], "my": [15, 123], "strong": [15, 123], "usernam": [15, 43, 83, 84, 86], "passwordfil": 15, "login": [15, 18, 43, 113], "constant": [15, 16, 18, 19, 28, 32, 79, 101, 103, 104, 108, 113, 114], "cluster_create_htpasswd_user_secret_nam": 15, "idp": [15, 18, 35, 113, 118], "cluster_create_htpasswd_user_htpasswd_idp_nam": 15, "cluster_create_htpasswd_user_rol": 15, "cluster_create_htpasswd_user_groupnam": 15, "secret_fil": 16, "kubeadmin_pass": 16, "kubeadmin": 16, "aws_account_id": 16, "aws_access_kei": 16, "aws_secret_kei": 16, "cluster_nam": [16, 18, 26, 35, 101, 113, 118], "kubeconfig": [16, 85, 86], "east": 16, "htaccess_idp_nam": 16, "compute_machine_typ": 16, "machin": [16, 34, 71], "m5": 16, "compute_nod": 16, "minimum": [16, 63], "osd": 16, 
"machinepool": 16, "cluster_create_osd_machinepool_nam": 16, "cluster_create_osd_kubeadmin_group": 16, "cluster_create_osd_kubeadmin_nam": 16, "ef": [17, 119, 122], "csi": [17, 20, 119, 120, 122], "accordingli": [17, 119, 122], "openldap": [18, 35, 113, 118, 122], "admin_password": [18, 19, 22, 43, 113, 114, 116], "adminpasswd": [18, 19, 22, 43, 113, 114, 116], "idp_nam": [18, 35, 84, 85, 86, 113, 118], "username_prefix": [18, 83, 84, 85, 86, 113], "username_count": [18, 113], "secret_properties_fil": [18, 19, 22, 43, 84, 85, 86, 113, 114, 116], "use_ocm": [18, 35, 113, 118], "ocm": [18, 35, 101, 113, 118, 122], "use_rosa": [18, 35, 113, 118], "rosa": [18, 35, 113, 118], "cluster_deploy_ldap_admin_us": 18, "user_password": [19, 22, 74, 114, 116], "passwd": [19, 22, 114, 116], "bucket_nam": [19, 114], "mybucket": [19, 114], "cluster_deploy_minio_s3_server_root_us": 19, "connect": [19, 114], "cluster_deploy_minio_s3_server_access_kei": 19, "nf": [20, 120, 122], "provision": [20, 120, 122], "resourc": [20, 25, 29, 31, 60, 61, 83, 106, 112, 120, 122, 124], "pvc_sc": [20, 120], "pvc": [20, 27, 44, 45, 46, 120, 121, 122], "gp3": [20, 120], "pvc_size": [20, 27, 120, 121], "size": [20, 27, 34, 45, 85, 86, 120, 121], "10gi": [20, 120], "storage_class_nam": [20, 120], "default_sc": [20, 120], "nginx": [21, 115, 122, 124], "dashboard": [22, 43, 84, 116, 122], "applic": [22, 43, 87, 88, 89, 116, 122], "operatorhub": [23, 51, 56, 79, 80, 122], "catalog": [23, 102, 122, 123], "manifest_nam": 23, "manifest": [23, 102], "unspecifi": [23, 51], "latest": [23, 50, 51, 73, 74], "avail": [23, 29, 34, 51, 52, 55, 67, 102, 106, 122], "channel": [23, 51, 79, 102, 123], "csv": [23, 51], "installplan_approv": 23, "installplan": [23, 51], "catalog_namespac": 23, "catalogsourc": 23, "marketplac": 23, "deploy_cr": 23, "cr": 23, "bool": [23, 55, 74, 84, 85, 86], "namespace_monitor": 23, "monitor": [23, 28, 32, 59, 103, 104, 122], "all_namespac": 23, "config_env_nam": 23, "env": [23, 123], 
"csv_base_nam": 23, "destroi": [25, 26, 32, 104, 122, 123], "live": 25, "confirm": 25, "tag_valu": 25, "own": [25, 123], "openshift_instal": 25, "pick": 25, "subproject": [25, 72, 112, 123], "pvc_name": [27, 44, 45, 46, 121], "cred": [27, 121], "storage_dir": [27, 121, 123], "clean_first": [27, 121], "pvc_access_mod": [27, 121], "readwriteonc": [27, 121], "80gi": [27, 121], "dump": [28, 32, 103, 122], "prometheu": [28, 31, 32, 74, 84, 85, 86, 89, 103, 104, 122, 123], "identifi": [28, 32, 44, 73, 87, 101], "app": [28, 32, 111, 124], "kubernet": [28, 29, 32], "promtheu": [28, 32], "dump_name_prefix": [28, 103], "archiv": 28, "cluster_prometheus_db_mod": 28, "holder": [29, 122], "maximum": [29, 36, 122], "amount": [29, 122], "placehold": [29, 123], "label_selector": 29, "preload": [30, 122], "node_selector_kei": 30, "nodeselector": 30, "node_selector_valu": 30, "pod_toleration_kei": 30, "pod_toleration_effect": 30, "queri": [31, 67, 72, 122], "promqueri": [31, 122], "read": [31, 122, 123, 124], "metrics_fil": 31, "definit": [31, 90, 92, 93, 112, 122], "spread": [31, 112], "until": [31, 77, 78, 101, 122], "next": [31, 89], "promquery_fil": 31, "sutest__cluster_cpu_capac": 31, "sum": 31, "capacity_cpu_cor": 31, "sutest__cluster_memory_request": 31, "kube_pod_resource_request": 31, "group_left": 31, "kube_node_rol": 31, "kube_pod_container_resource_request": 31, "limit": [31, 44, 45, 123], "kube_pod_container_resource_limit": 31, "rate": [31, 85], "5m": 31, "dest_dir": [31, 64], "duration_": 31, "durat": [31, 72], "start_t": 31, "incompat": 31, "end_t": 31, "reset": [32, 104, 122, 123], "db": [32, 74, 84, 85, 86, 89, 123], "cluster_prometheus_db_dump_name_prefix": 32, "cluster_prometheus_db_directori": 32, "omit": 33, "exactli": [34, 122], "total": 34, "non": 34, "zero": [34, 84, 86], "deriv": 34, "machinetset": 34, "desir": 34, "g4dn": 34, "base_machineset": 34, "pickup": 34, "get": [34, 54, 122, 123, 124], "api": [34, 68, 69, 85, 122, 124], "spot": [34, 86], "demand": 
34, "disk_siz": 34, "eb": 34, "volum": [34, 45], "partit": 34, "als": [36, 122], "doc": [36, 93, 122, 123], "14": [36, 122], "max_pod": 36, "250": [36, 71], "pods_per_cor": 36, "kubeletconfig": 36, "selector": 36, "machineconfigur": 36, "label_valu": 36, "upgrad": [37, 122], "fulli": [38, 122], "awak": [38, 122], "hive": [38, 122, 123], "restart": [38, 122], "custom": [40, 53, 102, 122, 123, 124, 125], "projec": 40, "show_export": 40, "noth": 40, "exec": 40, "frontend_istag": 43, "imagestream": [43, 73, 74, 81, 84, 85, 86, 89], "frontend": 43, "backend_istag": 43, "backend": 43, "plugin_nam": 43, "plugin": [43, 72], "es_url": 43, "es_indic": 43, "es_usernam": 43, "rai": [44, 122], "finetun": 44, "deepspe": 44, "dataset_nam": [44, 45], "dataset_repl": [44, 45, 123], "replic": [44, 45, 123], "artifici": [44, 45], "reduc": [44, 45], "effort": [44, 45], "dataset_transform": [44, 45], "dataset_prefer_cach": [44, 45], "dataset_prepare_cache_onli": [44, 45], "35": 44, "py39": 44, "cu121": 44, "torch24": 44, "fa26": 44, "ray_vers": 44, "rayclust": 44, "ram": [44, 45, 46], "gig": [44, 45, 46], "request_equals_limit": [44, 45], "prepare_onli": [44, 45], "delete_oth": [44, 45, 63], "pytorchjob": [44, 45], "pod_count": [44, 45, 46, 112], "hyper_paramet": [44, 45, 46, 123], "dictionnari": [44, 45, 46, 47, 53], "hyper": [44, 45, 46], "sft": [44, 45, 46], "trainer": [44, 45, 46], "sleep_forev": [44, 45, 46], "sleep": [44, 45, 46, 74, 84, 85, 86], "forev": [44, 45, 46, 123], "capture_artifact": [44, 45, 89], "shutdown_clust": 44, "let": [44, 74, 102, 123], "rayjob": 44, "shutdown": 44, "fm": 45, "ilab": 45, "dataset_response_templ": 45, "delimit": 45, "respons": 45, "sampl": 45, "modh": 45, "hf": 45, "releas": [45, 123], "7a8ff0f4114ba43398d34fd976f6b17bb1f665f3": 45, "ephemeral_output_pvc_s": 45, "ephemer": 45, "claim": 45, "emptydir": 45, "use_roc": 45, "activ": 45, "roce": 45, "fast": [45, 123], "network": 45, "ubi9": [46, 121], "belong": 47, "topsail_from_config_fil": [47, 
74], "command_args_fil": 47, "topsail_from_command_args_fil": 47, "notat": [47, 106], "arg1": 47, "val1": 47, "arg2": 47, "val2": 47, "clusterpolici": [49, 52, 53, 122], "olm": [49, 54, 122], "clusterservicevers": [49, 122], "bundl": [50, 102, 122, 123], "oci": 50, "v1": [50, 51, 72, 124], "freeli": [50, 51], "chosen": [50, 51], "list_version_from_operator_hub": 51, "subcommand": 51, "slice": 52, "configmap_nam": [52, 53], "include_default": 53, "include_well_known": 53, "extra_metr": 53, "wait_refresh": 53, "burn": [55, 122], "30": [55, 112], "keep_resourc": 55, "ensure_has_gpu": 55, "energi": [59, 122], "consumpt": [59, 122], "associ": [60, 122], "raw_deploy": [61, 63, 67], "serverless": [61, 63, 67, 72], "standalon": [63, 67], "tgi": [63, 67], "vllm": [63, 67], "sr_name": [63, 66], "servingruntim": [63, 66], "sr_kserve_imag": 63, "inference_service_nam": [63, 64, 65, 66, 67], "infer": [63, 64, 65, 66, 67, 122], "inference_service_min_replica": 63, "left": [63, 123], "anyth": [63, 67, 73], "proto": [64, 65, 67, 122], "copy_to_artifact": [64, 65], "don": [64, 65, 86, 94, 125], "grpcurl": [65, 67, 122], "observ": [65, 122], "dest_fil": 65, "query_count": 67, "model_id": [67, 72], "grpc": [67, 72], "todo": 67, "machinedeploy": 69, "deployment_nam": 69, "hollow": [70, 122], "kube": [70, 123], "allocat": 71, "gi": 71, "256": 71, "wisdom": [72, 122], "host": 72, "endpoint": [72, 85, 86], "port": [72, 84, 86, 124], "tgis_grpc_plugin": 72, "caikit_client_plugin": 72, "along": [72, 83, 122, 124], "src_path": 72, "clone": [72, 111], "stream": [72, 81, 89], "whether": [72, 87, 89], "use_tl": 72, "concurr": 72, "simul": [72, 84, 86, 123], "send": [72, 94, 122], "max_input_token": 72, "input": 72, "max_output_token": 72, "512": 72, "max_sequence_token": 72, "sequenc": [72, 123], "1536": 72, "openai": 72, "complet": [72, 74, 89, 94, 122, 123], "ci_command": [73, 74], "pod_nam": 73, "service_account": [73, 74], "secret_nam": [73, 74], "mount": [73, 74, 81, 89], 
"secret_env_kei": [73, 74], "expos": [73, 74], "test_arg": 73, "init_command": 73, "export_bucket_nam": 73, "export_test_run_identifi": 73, "retrieve_artifact": [73, 74], "retriev": [73, 74], "pr_config": 73, "update_git": 73, "batch": 74, "job_nam": 74, "minio_namespac": [74, 84, 85, 86], "minio_bucket_nam": [74, 84, 85, 86], "minio_secret_key_kei": 74, "form": 74, "secret_kei": 74, "variable_overrid": 74, "use_local_config": 74, "capture_prom_db": [74, 84, 85, 86, 89], "git_pul": 74, "state_signal_redis_serv": [74, 84, 86], "statesign": [74, 84, 86], "sleep_factor": 74, "user_batch_s": [74, 84, 86], "abort_on_failur": 74, "overal": 74, "need_all_success": 74, "succe": 74, "launch_as_daemon": 74, "irrelev": 74, "find": [77, 122, 123], "hub": 79, "g": 79, "7": 79, "cluster_deploy_operator_deploy_cr": 79, "cluster_deploy_operator_namespac": 79, "cluster_deploy_operator_manifest_nam": 79, "cluster_deploy_operator_catalog": 79, "benchmark": [81, 86, 122, 125], "s2i": [81, 85, 86, 89, 108], "scienc": [81, 85, 86, 87, 88, 89, 108, 122], "imagestream_tag": [81, 89], "emtpi": [81, 89], "notebook_directori": [81, 89], "notebook_filenam": [81, 89], "ipynb": [81, 89], "jupyterlab": [81, 89], "benchmark_entrypoint": 81, "benchmark_nam": 81, "pyperf_bm_go": [81, 86], "benchmark_repeat": 81, "repeat": [81, 86], "measur": 81, "benchmark_numb": 81, "who": 83, "roai": [84, 86, 122], "deploy_ldap": [84, 85, 86, 122], "user_index_offset": [84, 85, 86], "offset": [84, 85, 86], "artifacts_collect": [84, 86], "screenshot": [84, 86], "user_sleep_factor": [84, 85, 86], "ods_ci_istag": [84, 86], "ods_ci_test_cas": [84, 86], "notebook_dsg_test": [84, 86], "robot": [84, 86], "artifacts_exporter_istag": [84, 85, 86], "side": [84, 85, 86, 123], "car": [84, 85, 86], "hostnam": [84, 86], "toleration_kei": [84, 85, 86], "locust_istag": 85, "locust": 85, "run_tim": 85, "300": 85, "20m": 85, "3h": 85, "1h30m": 85, "1m": 85, "spawn_rat": 85, "spawn": 85, "sut_cluster_kubeconfig": [85, 86], "under": 
[85, 86], "notebook_image_nam": [85, 86], "notebook_size_nam": [85, 86], "cpu_count": 85, "1cpu": 85, "notebook_url": 86, "ods_ci_exclude_tag": 86, "exclud": 86, "notebook_benchmark_nam": 86, "notebook_benchmark_numb": 86, "20": 86, "notebook_benchmark_repeat": 86, "stop_notebooks_on_exit": 86, "only_create_notebook": 86, "overwrit": 86, "driver_running_on_spot": 86, "disappear": 86, "dsp_application_nam": [87, 89], "user_id": 87, "capture_extra_artifact": [87, 89], "short": [89, 91, 123], "dspipelin": 89, "notebook_nam": 89, "differenti": 89, "hello": 89, "world": [89, 123], "kfp_hello_world": 89, "run_count": 89, "run_delai": 89, "stop_on_exit": 89, "wait_for_run_complet": 89, "boilerplac": [91, 122], "middlewar": [91, 122, 123, 125], "varnam": 91, "rst": [92, 93, 122], "notif": [94, 122], "slack": [94, 122], "messag": [94, 96, 122], "err": 94, "dry_run": 94, "symlink": [95, 98, 122], "point": [95, 97, 122, 123], "wip": [96, 122], "filepath": [97, 122], "addon": [101, 122], "notification_email": 101, "email": 101, "regist": 101, "wait_for_ready_st": 101, "ocm_deploy_addon_ocm_deploy_addon_id": 101, "odh": [101, 107, 122], "catalog_imag": [102, 123], "disable_dsc_config": [102, 123], "dsc": [102, 106, 123], "opendatahub": [102, 107, 123], "managed_rhoai": [102, 123], "cluster_prometheus_db_cluster_prometheus_db_directori": 103, "cluster_prometheus_db_cluster_prometheus_db_namespac": [103, 104], "cluster_prometheus_db_cluster_prometheus_db_label": [103, 104], "cluster_prometheus_db_cluster_prometheus_db_mod": [103, 104], "datascienceclust": [106, 122], "show_al": 106, "extra_set": 106, "dot": 106, "spec": 106, "managementst": 106, "finish": [107, 108, 122], "comma": 108, "separ": 108, "await": 108, "rhods_wait_ods_imag": 108, "minim": 108, "canari": [110, 122], "mcad": [110, 111, 112, 122], "appwrapp": [110, 112, 122], "helm": [111, 122], "codeflar": [111, 123], "dispatch": 111, "image_repo": 111, "stabl": 111, "base_nam": 112, "sched": 112, "job_template_nam": 
112, "sleeper": 112, "aw_states_target": 112, "aw_states_unexpect": 112, "kueue": [112, 123], "coschedul": 112, "pod_runtim": 112, "pod_request": 112, "100m": 112, "timespan": 112, "minut": 112, "poisson": 112, "scheduler_load_gener": 112, "kueue_queu": 112, "queue": 112, "resource_kind": 112, "server_deploy_ldap_admin_us": 113, "server_deploy_minio_s3_server_root_us": 114, "server_deploy_minio_s3_server_access_kei": 114, "pvc_storage_class_nam": 121, "ubi": 121, "create_configmap": 122, "create_deploy": 122, "create_job": 122, "create_namespac": 122, "build_push_imag": 122, "create_htpasswd_adminus": 122, "create_osd": 122, "deploy_oper": 122, "destroy_ocp": 122, "destroy_osd": 122, "dump_prometheus_db": 122, "fill_workernod": 122, "preload_imag": 122, "query_prometheus_db": 122, "reset_prometheus_db": 122, "set_project_annot": 122, "set_scal": [122, 123], "update_pods_per_nod": 122, "upgrade_to_imag": 122, "wait_fully_awak": 122, "enter": 122, "deploy_cpt_dashboard": 122, "ray_fine_tuning_job": 122, "run_fine_tuning_job": 122, "run_quality_evalu": 122, "variou": [122, 125], "capture_deployment_st": 122, "deploy_cluster_polici": 122, "deploy_from_bundl": 122, "deploy_from_operatorhub": 122, "get_csv_vers": 122, "run_gpu_burn": 122, "undeploy_from_operatorhub": 122, "wait_deploy": 122, "deploy_kepl": 122, "undeploy_kepl": 122, "capture_st": [122, 123], "deploy_model": 122, "extract_proto": 122, "extract_protos_grpcurl": 122, "undeploy_model": 122, "validate_model": 122, "deploy_capi_provid": 122, "deploy_nod": 122, "deploy_kwok_control": 122, "run_multi": 122, "has_gpu_nod": 122, "has_label": 122, "wait_gpu_nod": 122, "wait_label": 122, "benchmark_perform": 122, "dashboard_scale_test": 122, "locust_scale_test": 122, "ods_ci_scale_test": 122, "deploy_appl": 122, "run_kfp_notebook": 122, "itself": 122, "generate_middleware_ci_secret_boilerpl": 122, "generate_toolbox_related_fil": 122, "generate_toolbox_rst_document": 122, "send_job_completion_notif": 122, 
"validate_no_broken_link": 122, "validate_no_wip": 122, "validate_role_fil": 122, "validate_role_vars_us": 122, "delete_od": 122, "deploy_addon": 122, "deploy_od": [122, 123], "undeploy_od": 122, "update_datascienceclust": [122, 123], "wait_odh": 122, "wait_od": 122, "create_mcad_canari": 122, "deploy_mcad_from_helm": 122, "generate_load": 122, "deploy_minio_s3_serv": 122, "deploy_nginx_serv": [122, 124], "deploy_opensearch": 122, "deploy_redis_serv": 122, "undeploy_ldap": 122, "deploy_aws_ef": 122, "deploy_nfs_provision": 122, "download_to_pvc": [122, 123], "crux": 123, "bind": 123, "togeth": 123, "foremost": 123, "seem": 123, "veri": 123, "friendli": 123, "ll": [123, 125], "cover": 123, "focus": 123, "reproduc": 123, "link": 123, "easiest": 123, "scratch": 123, "fresh": 123, "particularli": 123, "handl": 123, "old": 123, "But": 123, "prove": [123, 124], "robust": 123, "reliabl": 123, "although": 123, "haven": 123, "sinc": 123, "got": 123, "ask": 123, "pm": 123, "outsid": 123, "jenkin": [123, 125], "never": 123, "reinstal": 123, "least": 123, "garbag": 123, "theori": 123, "mutual": 123, "agreement": 123, "skip": 123, "Or": 123, "tick": 123, "someon": 123, "queu": 123, "yet": 123, "modular": 123, "great": 123, "static": 123, "had": 123, "stori": 123, "slide": 123, "deck": 123, "ibm_40gb_model": 123, "refactor": 123, "envisag": 123, "more": 123, "strongli": 123, "alter": 123, "behavior": 123, "capture_prom": 123, "info": [123, 124], "rh": 123, "osb": 123, "iib": 123, "804339": 123, "version_nam": 123, "managed_rhoi": 123, "through": [123, 125], "decid": 123, "factor": 123, "dgx_single_model_multi_dataset": 123, "dgx_single_model": 123, "matbenchmark": 123, "keyword": 123, "posit": [123, 124], "recurs": 123, "algorithm": 123, "dirtili": 123, "crash": 123, "visibl": 123, "around": 123, "gradient_accumulation_step": 123, "r": 123, "assign": 123, "eras": 123, "That": 123, "has_dsc": 123, "onam": 123, "trainingoper": 123, "source_nam": 123, "preced": 123, "inde": 123, 
"set_at_runtim": 123, "critic": [123, 124], "complic": 123, "multi_model_test_sequenti": 123, "sequenti": 123, "prepare_scal": 123, "prepare_kserv": 123, "consolid": 123, "major": 124, "year": 124, "callback": 124, "concern": 124, "why": 124, "proof": 124, "went": 124, "wrong": 124, "0755": 124, "rout": 124, "passthrough": 124, "secur": 124, "cluster_deploy_nginx_server_namespac": 124, "dry": 124, "client": 124, "oyaml": 124, "yq": 124, "apivers": 124, "tee": 124, "route_nginx": 124, "owid": 124, "l": 124, "descr": 124, "cluster_deploy_nginx_serv": 124, "translat": 124, "synopsi": 124, "iter": 125, "scrapper": 125, "regener": 125, "upload_lt": 125, "gate": 125, "download_lt": 125, "analyze_lt": 125, "comparison": 125, "datastax": 125, "hunter": 125, "focu": 125, "plotter": 125, "prep": 125}, "objects": {}, "objtypes": {}, "objnames": {}, "titleterms": {"contribut": 1, "pull": 1, "request": 1, "guidelin": 1, "review": 1, "style": 1, "yaml": [1, 2], "ansibl": 1, "code": 1, "creat": [2, 3, 4, 123], "new": [2, 3, 4], "orchestr": [2, 5, 6, 123], "prepar": 2, "environ": 2, "test": [2, 5, 123], "py": 2, "config": 2, "command_arg": 2, "j2": 2, "copi": 2, "cluster": [2, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 122, 123], "sh": 2, "configur": [2, 39, 40, 41, 42, 122, 123], "test_": 2, "prepare_": 2, "start": 2, "build": 2, "your": 2, "core": 2, "helper": 2, "modul": [2, 4], "The": [2, 4, 5, 123, 124, 125], "run": [2, 47, 72, 73, 122, 123], "env": 2, "project": [2, 6], "rhod": [2, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 122], "librari": 2, "prepare_rhoai": 2, "gpu_oper": [2, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 122], "prepare_gpu_oper": 2, "local_ci": [2, 73, 74, 122], "prepare_user_pod": 2, "matrix_benchmark": 2, "visual": [2, 4, 125], "how": [3, 123], "role": 3, "ar": 3, "organ": [3, 6], "default": 3, "paramet": [3, 7, 8, 9, 10, 11, 12, 13, 15, 16, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 
30, 31, 32, 33, 34, 35, 36, 37, 39, 40, 41, 43, 44, 45, 46, 47, 50, 51, 52, 53, 55, 58, 61, 62, 63, 64, 65, 66, 67, 69, 70, 71, 72, 73, 74, 79, 81, 83, 84, 85, 86, 87, 88, 89, 91, 94, 100, 101, 102, 103, 105, 106, 107, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 120, 121], "gener": [3, 5], "includ": 3, "topsail": [3, 5, 6, 123], "": [3, 5], "cli": 3, "1": 3, "python": 3, "class": 3, "descript": 3, "decor": 3, "2": 3, "toolbox": [3, 5, 6, 122, 123, 124], "3": 3, "render": 3, "4": 3, "execut": 3, "command": [3, 123], "store": 4, "parser": 4, "model": 4, "lt": 4, "kpi": 4, "plot": 4, "analyz": 4, "regress": 4, "red": 5, "hat": 5, "psap": 5, "framework": 5, "understand": 5, "architectur": 5, "extend": 5, "depend": 6, "busy_clust": [7, 8, 9, 10, 11, 12, 122], "cleanup": [7, 83, 109], "create_configmap": 8, "create_deploy": 9, "create_job": 10, "create_namespac": 11, "statu": 12, "build_push_imag": 13, "capture_environ": 14, "create_htpasswd_adminus": 15, "create_osd": 16, "deploy_aws_ef": [17, 119], "deploy_ldap": [18, 113], "deploy_minio_s3_serv": [19, 114], "deploy_nfs_provision": [20, 120], "deploy_nginx_serv": [21, 115], "deploy_opensearch": [22, 116], "deploy_oper": 23, "deploy_redis_serv": [24, 117], "destroy_ocp": 25, "destroy_osd": 26, "download_to_pvc": [27, 121], "dump_prometheus_db": [28, 103], "fill_workernod": 29, "preload_imag": 30, "query_prometheus_db": 31, "reset_prometheus_db": [32, 104], "set_project_annot": 33, "set_scal": [34, 71], "undeploy_ldap": [35, 118], "update_pods_per_nod": 36, "upgrade_to_imag": 37, "wait_fully_awak": 38, "appli": 39, "enter": 40, "get": 41, "name": 42, "cpt": [43, 122], "deploy_cpt_dashboard": 43, "fine_tun": [44, 45, 46, 122], "ray_fine_tuning_job": 44, "run_fine_tuning_job": 45, "run_quality_evalu": 46, "from_config": 47, "capture_deployment_st": 48, "deploy_cluster_polici": 49, "deploy_from_bundl": 50, "deploy_from_operatorhub": [51, 79], "enable_time_shar": 52, "extend_metr": 53, "get_csv_vers": 54, 
"run_gpu_burn": 55, "undeploy_from_operatorhub": [56, 80], "wait_deploy": 57, "wait_stack_deploi": 58, "kepler": [59, 60, 122], "deploy_kepl": 59, "undeploy_kepl": 60, "kserv": [61, 62, 63, 64, 65, 66, 67, 122], "capture_operators_st": 61, "capture_st": [62, 82, 87, 99], "deploy_model": 63, "extract_proto": 64, "extract_protos_grpcurl": 65, "undeploy_model": 66, "validate_model": 67, "kubemark": [68, 69, 122], "deploy_capi_provid": 68, "deploy_nod": 69, "kwok": [70, 71, 122], "deploy_kwok_control": 70, "llm_load_test": [72, 122], "run_multi": 74, "nfd": [75, 76, 77, 78, 122], "has_gpu_nod": 75, "has_label": 76, "wait_gpu_nod": 77, "wait_label": 78, "nfd_oper": [79, 80, 122], "notebook": [81, 82, 83, 84, 85, 86, 122], "benchmark_perform": 81, "dashboard_scale_test": 84, "locust_scale_test": 85, "ods_ci_scale_test": 86, "pipelin": [87, 88, 89, 122], "deploy_appl": 88, "run_kfp_notebook": 89, "repo": [90, 91, 92, 93, 94, 95, 96, 97, 98, 122], "generate_ansible_default_set": 90, "generate_middleware_ci_secret_boilerpl": 91, "generate_toolbox_related_fil": 92, "generate_toolbox_rst_document": 93, "send_job_completion_notif": 94, "validate_no_broken_link": 95, "validate_no_wip": 96, "validate_role_fil": 97, "validate_role_vars_us": 98, "delete_od": 100, "deploy_addon": 101, "deploy_od": 102, "undeploy_od": 105, "update_datascienceclust": 106, "wait_odh": 107, "wait_od": 108, "schedul": [109, 110, 111, 112, 122], "create_mcad_canari": 110, "deploy_mcad_from_helm": 111, "generate_load": 112, "server": [113, 114, 115, 116, 117, 118, 122], "storag": [119, 120, 121, 122], "document": 122, "layer": [123, 124, 125], "ci": 123, "job": 123, "launcher": 123, "creation": 123, "from": 123, "pool": 123, "bare": 123, "metal": 123, "launch": 123, "engin": 123, "system": 123, "A": 123, "bit": 123, "histori": 123, "apologi": 123, "actual": 123, "work": 123, "preset": 123, "call": 123, "dedic": 123, "directori": 123, "parallel": 123, "reusabl": 124, "post": 125, "mortem": 125, "process": 
125}, "envversion": {"sphinx.domains.c": 2, "sphinx.domains.changeset": 1, "sphinx.domains.citation": 1, "sphinx.domains.cpp": 8, "sphinx.domains.index": 1, "sphinx.domains.javascript": 2, "sphinx.domains.math": 2, "sphinx.domains.python": 3, "sphinx.domains.rst": 2, "sphinx.domains.std": 2, "sphinx": 57}, "alltitles": {"Contributing": [[1, "contributing"]], "Pull Request Guidelines": [[1, "pull-request-guidelines"]], "Review Guidelines": [[1, "review-guidelines"]], "Style Guidelines": [[1, "style-guidelines"]], "YAML style": [[1, "yaml-style"]], "Ansible style": [[1, "ansible-style"]], "Coding guidelines": [[1, "coding-guidelines"]], "Creating a New Orchestration": [[2, "creating-a-new-orchestration"]], "Prepare the environment": [[2, "prepare-the-environment"]], "Prepare the test.py, config.yaml and command_args.yaml.j2": [[2, "prepare-the-test-py-config-yaml-and-command-args-yaml-j2"]], "Copy the clusters.sh and configure.sh": [[2, "copy-the-clusters-sh-and-configure-sh"]], "Create test_....py and prepare_....py": [[2, "create-test-py-and-prepare-py"]], "Start building your test orchestration": [[2, "start-building-your-test-orchestration"]], "Core helper modules": [[2, "core-helper-modules"]], "The run module": [[2, "the-run-module"]], "The env module": [[2, "the-env-module"]], "The config module": [[2, "the-config-module"]], "The projects.rhods.library.prepare_rhoai library module": [[2, "the-projects-rhods-library-prepare-rhoai-library-module"]], "The projects.gpu_operator.library.prepare_gpu_operator library module": [[2, "the-projects-gpu-operator-library-prepare-gpu-operator-library-module"]], "The projects.local_ci.library.prepare_user_pods library module": [[2, "the-projects-local-ci-library-prepare-user-pods-library-module"]], "The projects.matrix_benchmarking.library.visualize library module": [[2, "the-projects-matrix-benchmarking-library-visualize-library-module"]], "How roles are organized": [[3, "how-roles-are-organized"]], "How default parameters 
are generated": [[3, "how-default-parameters-are-generated"]], "Including new roles in Topsail\u2019s CLI": [[3, "including-new-roles-in-topsails-cli"]], "1. Creating a Python class for the new role": [[3, "creating-a-python-class-for-the-new-role"]], "Description of the decorators": [[3, "description-of-the-decorators"]], "2. Including the new toolbox class in the Toolbox": [[3, "including-the-new-toolbox-class-in-the-toolbox"]], "3. Rendering the default parameters": [[3, "rendering-the-default-parameters"]], "4. Executing the new toolbox command": [[3, "executing-the-new-toolbox-command"]], "Creating a new visualization module": [[4, "creating-a-new-visualization-module"]], "The store module": [[4, "the-store-module"]], "The store parsers": [[4, "the-store-parsers"]], "The store and models LTS and KPI modules": [[4, "the-store-and-models-lts-and-kpi-modules"]], "The plotting visualization module": [[4, "the-plotting-visualization-module"]], "The analyze regression analyze module": [[4, "the-analyze-regression-analyze-module"]], "Red Hat PSAP TOPSAIL test orchestration framework": [[5, "red-hat-psap-topsail-test-orchestration-framework"]], "General": [[5, null]], "Understanding The Architecture": [[5, null]], "Extending The Architecture": [[5, null]], "TOPSAIL's Toolbox": [[5, null]], "TOPSAIL": [[6, "topsail"]], "Dependencies": [[6, "dependencies"]], "TOPSAIL orchestration and toolbox": [[6, "topsail-orchestration-and-toolbox"]], "TOPSAIL projects organization": [[6, "topsail-projects-organization"]], "busy_cluster cleanup": [[7, "busy-cluster-cleanup"]], "Parameters": [[7, "parameters"], [8, "parameters"], [9, "parameters"], [10, "parameters"], [11, "parameters"], [12, "parameters"], [13, "parameters"], [15, "parameters"], [16, "parameters"], [18, "parameters"], [19, "parameters"], [20, "parameters"], [21, "parameters"], [22, "parameters"], [23, "parameters"], [24, "parameters"], [25, "parameters"], [26, "parameters"], [27, "parameters"], [28, "parameters"], 
[29, "parameters"], [30, "parameters"], [31, "parameters"], [32, "parameters"], [33, "parameters"], [34, "parameters"], [35, "parameters"], [36, "parameters"], [37, "parameters"], [39, "parameters"], [40, "parameters"], [41, "parameters"], [43, "parameters"], [44, "parameters"], [45, "parameters"], [46, "parameters"], [47, "parameters"], [50, "parameters"], [51, "parameters"], [52, "parameters"], [53, "parameters"], [55, "parameters"], [58, "parameters"], [61, "parameters"], [62, "parameters"], [63, "parameters"], [64, "parameters"], [65, "parameters"], [66, "parameters"], [67, "parameters"], [69, "parameters"], [70, "parameters"], [71, "parameters"], [72, "parameters"], [73, "parameters"], [74, "parameters"], [79, "parameters"], [81, "parameters"], [83, "parameters"], [84, "parameters"], [85, "parameters"], [86, "parameters"], [87, "parameters"], [88, "parameters"], [89, "parameters"], [91, "parameters"], [94, "parameters"], [100, "parameters"], [101, "parameters"], [102, "parameters"], [103, "parameters"], [105, "parameters"], [106, "parameters"], [107, "parameters"], [109, "parameters"], [110, "parameters"], [111, "parameters"], [112, "parameters"], [113, "parameters"], [114, "parameters"], [115, "parameters"], [116, "parameters"], [117, "parameters"], [118, "parameters"], [120, "parameters"], [121, "parameters"]], "busy_cluster create_configmaps": [[8, "busy-cluster-create-configmaps"]], "busy_cluster create_deployments": [[9, "busy-cluster-create-deployments"]], "busy_cluster create_jobs": [[10, "busy-cluster-create-jobs"]], "busy_cluster create_namespaces": [[11, "busy-cluster-create-namespaces"]], "busy_cluster status": [[12, "busy-cluster-status"]], "cluster build_push_image": [[13, "cluster-build-push-image"]], "cluster capture_environment": [[14, "cluster-capture-environment"]], "cluster create_htpasswd_adminuser": [[15, "cluster-create-htpasswd-adminuser"]], "cluster create_osd": [[16, "cluster-create-osd"]], "cluster deploy_aws_efs": [[17, 
"cluster-deploy-aws-efs"]], "cluster deploy_ldap": [[18, "cluster-deploy-ldap"]], "cluster deploy_minio_s3_server": [[19, "cluster-deploy-minio-s3-server"]], "cluster deploy_nfs_provisioner": [[20, "cluster-deploy-nfs-provisioner"]], "cluster deploy_nginx_server": [[21, "cluster-deploy-nginx-server"]], "cluster deploy_opensearch": [[22, "cluster-deploy-opensearch"]], "cluster deploy_operator": [[23, "cluster-deploy-operator"]], "cluster deploy_redis_server": [[24, "cluster-deploy-redis-server"]], "cluster destroy_ocp": [[25, "cluster-destroy-ocp"]], "cluster destroy_osd": [[26, "cluster-destroy-osd"]], "cluster download_to_pvc": [[27, "cluster-download-to-pvc"]], "cluster dump_prometheus_db": [[28, "cluster-dump-prometheus-db"]], "cluster fill_workernodes": [[29, "cluster-fill-workernodes"]], "cluster preload_image": [[30, "cluster-preload-image"]], "cluster query_prometheus_db": [[31, "cluster-query-prometheus-db"]], "cluster reset_prometheus_db": [[32, "cluster-reset-prometheus-db"]], "cluster set_project_annotation": [[33, "cluster-set-project-annotation"]], "cluster set_scale": [[34, "cluster-set-scale"]], "cluster undeploy_ldap": [[35, "cluster-undeploy-ldap"]], "cluster update_pods_per_node": [[36, "cluster-update-pods-per-node"]], "cluster upgrade_to_image": [[37, "cluster-upgrade-to-image"]], "cluster wait_fully_awake": [[38, "cluster-wait-fully-awake"]], "configure apply": [[39, "configure-apply"]], "configure enter": [[40, "configure-enter"]], "configure get": [[41, "configure-get"]], "configure name": [[42, "configure-name"]], "cpt deploy_cpt_dashboard": [[43, "cpt-deploy-cpt-dashboard"]], "fine_tuning ray_fine_tuning_job": [[44, "fine-tuning-ray-fine-tuning-job"]], "fine_tuning run_fine_tuning_job": [[45, "fine-tuning-run-fine-tuning-job"]], "fine_tuning run_quality_evaluation": [[46, "fine-tuning-run-quality-evaluation"]], "from_config run": [[47, "from-config-run"]], "gpu_operator capture_deployment_state": [[48, 
"gpu-operator-capture-deployment-state"]], "gpu_operator deploy_cluster_policy": [[49, "gpu-operator-deploy-cluster-policy"]], "gpu_operator deploy_from_bundle": [[50, "gpu-operator-deploy-from-bundle"]], "gpu_operator deploy_from_operatorhub": [[51, "gpu-operator-deploy-from-operatorhub"]], "gpu_operator enable_time_sharing": [[52, "gpu-operator-enable-time-sharing"]], "gpu_operator extend_metrics": [[53, "gpu-operator-extend-metrics"]], "gpu_operator get_csv_version": [[54, "gpu-operator-get-csv-version"]], "gpu_operator run_gpu_burn": [[55, "gpu-operator-run-gpu-burn"]], "gpu_operator undeploy_from_operatorhub": [[56, "gpu-operator-undeploy-from-operatorhub"]], "gpu_operator wait_deployment": [[57, "gpu-operator-wait-deployment"]], "gpu_operator wait_stack_deployed": [[58, "gpu-operator-wait-stack-deployed"]], "kepler deploy_kepler": [[59, "kepler-deploy-kepler"]], "kepler undeploy_kepler": [[60, "kepler-undeploy-kepler"]], "kserve capture_operators_state": [[61, "kserve-capture-operators-state"]], "kserve capture_state": [[62, "kserve-capture-state"]], "kserve deploy_model": [[63, "kserve-deploy-model"]], "kserve extract_protos": [[64, "kserve-extract-protos"]], "kserve extract_protos_grpcurl": [[65, "kserve-extract-protos-grpcurl"]], "kserve undeploy_model": [[66, "kserve-undeploy-model"]], "kserve validate_model": [[67, "kserve-validate-model"]], "kubemark deploy_capi_provider": [[68, "kubemark-deploy-capi-provider"]], "kubemark deploy_nodes": [[69, "kubemark-deploy-nodes"]], "kwok deploy_kwok_controller": [[70, "kwok-deploy-kwok-controller"]], "kwok set_scale": [[71, "kwok-set-scale"]], "llm_load_test run": [[72, "llm-load-test-run"]], "local_ci run": [[73, "local-ci-run"]], "local_ci run_multi": [[74, "local-ci-run-multi"]], "nfd has_gpu_nodes": [[75, "nfd-has-gpu-nodes"]], "nfd has_labels": [[76, "nfd-has-labels"]], "nfd wait_gpu_nodes": [[77, "nfd-wait-gpu-nodes"]], "nfd wait_labels": [[78, "nfd-wait-labels"]], "nfd_operator deploy_from_operatorhub": 
[[79, "nfd-operator-deploy-from-operatorhub"]], "nfd_operator undeploy_from_operatorhub": [[80, "nfd-operator-undeploy-from-operatorhub"]], "notebooks benchmark_performance": [[81, "notebooks-benchmark-performance"]], "notebooks capture_state": [[82, "notebooks-capture-state"]], "notebooks cleanup": [[83, "notebooks-cleanup"]], "notebooks dashboard_scale_test": [[84, "notebooks-dashboard-scale-test"]], "notebooks locust_scale_test": [[85, "notebooks-locust-scale-test"]], "notebooks ods_ci_scale_test": [[86, "notebooks-ods-ci-scale-test"]], "pipelines capture_state": [[87, "pipelines-capture-state"]], "pipelines deploy_application": [[88, "pipelines-deploy-application"]], "pipelines run_kfp_notebook": [[89, "pipelines-run-kfp-notebook"]], "repo generate_ansible_default_settings": [[90, "repo-generate-ansible-default-settings"]], "repo generate_middleware_ci_secret_boilerplate": [[91, "repo-generate-middleware-ci-secret-boilerplate"]], "repo generate_toolbox_related_files": [[92, "repo-generate-toolbox-related-files"]], "repo generate_toolbox_rst_documentation": [[93, "repo-generate-toolbox-rst-documentation"]], "repo send_job_completion_notification": [[94, "repo-send-job-completion-notification"]], "repo validate_no_broken_link": [[95, "repo-validate-no-broken-link"]], "repo validate_no_wip": [[96, "repo-validate-no-wip"]], "repo validate_role_files": [[97, "repo-validate-role-files"]], "repo validate_role_vars_used": [[98, "repo-validate-role-vars-used"]], "rhods capture_state": [[99, "rhods-capture-state"]], "rhods delete_ods": [[100, "rhods-delete-ods"]], "rhods deploy_addon": [[101, "rhods-deploy-addon"]], "rhods deploy_ods": [[102, "rhods-deploy-ods"]], "rhods dump_prometheus_db": [[103, "rhods-dump-prometheus-db"]], "rhods reset_prometheus_db": [[104, "rhods-reset-prometheus-db"]], "rhods undeploy_ods": [[105, "rhods-undeploy-ods"]], "rhods update_datasciencecluster": [[106, "rhods-update-datasciencecluster"]], "rhods wait_odh": [[107, "rhods-wait-odh"]], 
"rhods wait_ods": [[108, "rhods-wait-ods"]], "scheduler cleanup": [[109, "scheduler-cleanup"]], "scheduler create_mcad_canary": [[110, "scheduler-create-mcad-canary"]], "scheduler deploy_mcad_from_helm": [[111, "scheduler-deploy-mcad-from-helm"]], "scheduler generate_load": [[112, "scheduler-generate-load"]], "server deploy_ldap": [[113, "server-deploy-ldap"]], "server deploy_minio_s3_server": [[114, "server-deploy-minio-s3-server"]], "server deploy_nginx_server": [[115, "server-deploy-nginx-server"]], "server deploy_opensearch": [[116, "server-deploy-opensearch"]], "server deploy_redis_server": [[117, "server-deploy-redis-server"]], "server undeploy_ldap": [[118, "server-undeploy-ldap"]], "storage deploy_aws_efs": [[119, "storage-deploy-aws-efs"]], "storage deploy_nfs_provisioner": [[120, "storage-deploy-nfs-provisioner"]], "storage download_to_pvc": [[121, "storage-download-to-pvc"]], "Toolbox Documentation": [[122, "toolbox-documentation"]], "busy_cluster": [[122, "busy-cluster"]], "cluster": [[122, "cluster"]], "configure": [[122, "configure"]], "cpt": [[122, "cpt"]], "fine_tuning": [[122, "fine-tuning"]], "run": [[122, "run"]], "gpu_operator": [[122, "gpu-operator"]], "kepler": [[122, "kepler"]], "kserve": [[122, "kserve"]], "kubemark": [[122, "kubemark"]], "kwok": [[122, "kwok"]], "llm_load_test": [[122, "llm-load-test"]], "local_ci": [[122, "local-ci"]], "nfd": [[122, "nfd"]], "nfd_operator": [[122, "nfd-operator"]], "notebooks": [[122, "notebooks"]], "pipelines": [[122, "pipelines"]], "repo": [[122, "repo"]], "rhods": [[122, "rhods"]], "scheduler": [[122, "scheduler"]], "server": [[122, "server"]], "storage": [[122, "storage"]], "The Test Orchestrations Layer": [[123, "the-test-orchestrations-layer"]], "The CI job launchers": [[123, "the-ci-job-launchers"]], "Cluster creation": [[123, "cluster-creation"]], "Cluster from pool": [[123, "cluster-from-pool"]], "Bare-metal clusters": [[123, "bare-metal-clusters"]], "Launching TOPSAIL jobs on the CI engines": 
[[123, "launching-topsail-jobs-on-the-ci-engines"]], "TOPSAIL Configuration System": [[123, "topsail-configuration-system"]], "A bit of history": [[123, "a-bit-of-history"]], "A bit of apology": [[123, "a-bit-of-apology"]], "How it actually works": [[123, "how-it-actually-works"]], "Configuring the configuration with presets": [[123, "configuring-the-configuration-with-presets"]], "Calling the toolbox commands": [[123, "calling-the-toolbox-commands"]], "Creating dedicated directories": [[123, "creating-dedicated-directories"]], "Running toolbox commands in parallel": [[123, "running-toolbox-commands-in-parallel"]], "The Reusable Toolbox Layer": [[124, "the-reusable-toolbox-layer"]], "The Post-mortem Processing & Visualization Layer": [[125, "the-post-mortem-processing-visualization-layer"]]}, "indexentries": {}}) \ No newline at end of file diff --git a/toolbox.generated/Busy_Cluster.cleanup.html b/toolbox.generated/Busy_Cluster.cleanup.html new file mode 100644 index 0000000000..72a0d90b2b --- /dev/null +++ b/toolbox.generated/Busy_Cluster.cleanup.html @@ -0,0 +1,142 @@ + + + + + + + busy_cluster cleanup — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

busy_cluster cleanup

+

Cleans up namespaces to make a cluster un-busy

+
+

Parameters

+

namespace_label_key

+
    +
  • The label key to use to locate the namespaces to cleanup

  • +
  • default value: busy-cluster.topsail

  • +
+

namespace_label_value

+
    +
  • The label value to use to locate the namespaces to cleanup

  • +
  • default value: yes

  • +
+
+
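As a usage sketch (assuming TOPSAIL's usual ``./run_toolbox.py`` entry point; the label key and value shown are the defaults listed above):

```shell
# Clean up the namespaces previously created by the busy_cluster commands,
# locating them via the default TOPSAIL label:
./run_toolbox.py busy_cluster cleanup \
    --namespace_label_key=busy-cluster.topsail \
    --namespace_label_value=yes
```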
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Busy_Cluster.create_configmaps.html b/toolbox.generated/Busy_Cluster.create_configmaps.html new file mode 100644 index 0000000000..0eb3e786fd --- /dev/null +++ b/toolbox.generated/Busy_Cluster.create_configmaps.html @@ -0,0 +1,175 @@ + + + + + + + busy_cluster create_configmaps — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

busy_cluster create_configmaps

+

Creates configmaps and secrets to make a cluster busy

+
+

Parameters

+

namespace_label_key

+
    +
  • The label key to use to locate the namespaces to populate

  • +
  • default value: busy-cluster.topsail

  • +
+

namespace_label_value

+
    +
  • The label value to use to locate the namespaces to populate

  • +
  • default value: yes

  • +
+

prefix

+
    +
  • Prefix to give to the configmaps/secrets to create

  • +
  • default value: busy

  • +
+

count

+
    +
  • Number of configmaps/secrets to create

  • +
  • default value: 10

  • +
+

labels

+
    +
  • Dict of the key/value labels to set for the configmap/secrets

  • +
+

as_secrets

+
    +
  • If True, creates secrets instead of configmaps

  • +
+

entries

+
    +
  • Number of entries to create

  • +
  • default value: 10

  • +
+

entry_values_length

+
    +
  • Length of an entry value

  • +
  • default value: 1024

  • +
+

entry_keys_prefix

+
    +
  • The prefix to use to create the entry keys

  • +
  • default value: entry-

  • +
+
+
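A possible invocation, assuming TOPSAIL's usual ``./run_toolbox.py`` entry point (the parameter values below are illustrative, not defaults):

```shell
# Populate the labeled namespaces with 5 secrets of 20 entries each,
# instead of the default 10 configmaps:
./run_toolbox.py busy_cluster create_configmaps \
    --prefix=busy \
    --count=5 \
    --entries=20 \
    --as_secrets=True
```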
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Busy_Cluster.create_deployments.html b/toolbox.generated/Busy_Cluster.create_deployments.html new file mode 100644 index 0000000000..668d5972ed --- /dev/null +++ b/toolbox.generated/Busy_Cluster.create_deployments.html @@ -0,0 +1,174 @@ + + + + + + + busy_cluster create_deployments — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

busy_cluster create_deployments

+

Creates deployments to make a cluster busy

+
+

Parameters

+

namespace_label_key

+
    +
  • The label key to use to locate the namespaces to populate

  • +
  • default value: busy-cluster.topsail

  • +
+

namespace_label_value

+
    +
  • The label value to use to locate the namespaces to populate

  • +
  • default value: yes

  • +
+

prefix

+
    +
  • Prefix to give to the deployments to create

  • +
  • default value: busy

  • +
+

count

+
    +
  • Number of deployments to create

  • +
  • default value: 1

  • +
+

labels

+
    +
  • Dict of the key/value labels to set for the deployments

  • +
+

replicas

+
    +
  • Number of replicas to set for the deployments

  • +
  • default value: 1

  • +
+

services

+
    +
  • Number of services to create for each of the deployments

  • +
  • default value: 1

  • +
+

image_pull_back_off

+
    +
  • If True, makes the containers' image pull fail.

  • +
+

crash_loop_back_off

+
    +
  • If True, makes the containers fail. If an integer value, waits this many seconds before failing.

  • +
+
+
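A usage sketch, assuming TOPSAIL's usual ``./run_toolbox.py`` entry point (the counts and delay below are illustrative):

```shell
# Create deployments whose containers crash 30 seconds after starting,
# simulating a cluster busy with CrashLoopBackOff Pods:
./run_toolbox.py busy_cluster create_deployments \
    --count=2 \
    --replicas=3 \
    --crash_loop_back_off=30
```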
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Busy_Cluster.create_jobs.html b/toolbox.generated/Busy_Cluster.create_jobs.html new file mode 100644 index 0000000000..545ecc0423 --- /dev/null +++ b/toolbox.generated/Busy_Cluster.create_jobs.html @@ -0,0 +1,166 @@ + + + + + + + busy_cluster create_jobs — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

busy_cluster create_jobs

+

Creates jobs to make a cluster busy

+
+

Parameters

+

namespace_label_key

+
    +
  • The label key to use to locate the namespaces to populate

  • +
  • default value: busy-cluster.topsail

  • +
+

namespace_label_value

+
    +
  • The label value to use to locate the namespaces to populate

  • +
  • default value: yes

  • +
+

prefix

+
    +
  • Prefix to give to the jobs to create

  • +
  • default value: busy

  • +
+

count

+
    +
  • Number of jobs to create

  • +
  • default value: 10

  • +
+

labels

+
    +
  • Dict of the key/value labels to set for the jobs

  • +
+

replicas

+
    +
  • The number of parallel tasks to execute

  • +
  • default value: 2

  • +
+

runtime

+
    +
  • The runtime of the Job Pods in seconds, or inf

  • +
  • default value: 120

  • +
+
+
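A usage sketch, assuming TOPSAIL's usual ``./run_toolbox.py`` entry point (the values shown are the documented defaults):

```shell
# Create 10 Jobs in the labeled namespaces, each running
# 2 parallel Pods for 120 seconds:
./run_toolbox.py busy_cluster create_jobs \
    --count=10 \
    --replicas=2 \
    --runtime=120
```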
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Busy_Cluster.create_namespaces.html b/toolbox.generated/Busy_Cluster.create_namespaces.html new file mode 100644 index 0000000000..2ee2fd9ef6 --- /dev/null +++ b/toolbox.generated/Busy_Cluster.create_namespaces.html @@ -0,0 +1,146 @@ + + + + + + + busy_cluster create_namespaces — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

busy_cluster create_namespaces

+

Creates namespaces to make a cluster busy

+
+

Parameters

+

prefix

+
    +
  • Prefix to give to the namespaces to create

  • +
  • default value: busy-namespace

  • +
+

count

+
    +
  • Number of namespaces to create

  • +
  • default value: 10

  • +
+

labels

+
    +
  • Dict of the key/value labels to set for the namespace

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Busy_Cluster.status.html b/toolbox.generated/Busy_Cluster.status.html new file mode 100644 index 0000000000..3f90c6d8d2 --- /dev/null +++ b/toolbox.generated/Busy_Cluster.status.html @@ -0,0 +1,142 @@ + + + + + + + busy_cluster status — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

busy_cluster status

+

Shows the busyness of the cluster

+
+

Parameters

+

namespace_label_key

+
    +
  • The label key to use to locate the namespaces to cleanup

  • +
  • default value: busy-cluster.topsail

  • +
+

namespace_label_value

+
    +
  • The label value to use to locate the namespaces to cleanup

  • +
  • default value: yes

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Cluster.build_push_image.html b/toolbox.generated/Cluster.build_push_image.html new file mode 100644 index 0000000000..3935491a75 --- /dev/null +++ b/toolbox.generated/Cluster.build_push_image.html @@ -0,0 +1,183 @@ + + + + + + + cluster build_push_image — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

cluster build_push_image

+

Build and publish an image to quay using either a Dockerfile or git repo.

+
+

Parameters

+

image_local_name

+
    +
  • Name of locally built image.

  • +
+

tag

+
    +
  • Tag for the image to build.

  • +
+

namespace

+
    +
  • Namespace where the local image will be built.

  • +
+

remote_repo

+
    +
  • Remote image repo to push to. If undefined, the image will not be pushed.

  • +
+

remote_auth_file

+
    +
  • Auth file for the remote repository.

  • +
+

git_repo

+
    +
  • Git repo containing Dockerfile if used as source. If undefined, the local path of ‘dockerfile_path’ will be used.

  • +
+

git_ref

+
    +
  • Git commit ref (branch, tag, commit hash) in the git repository.

  • +
+

dockerfile_path

+
    +
  • Path/Name of Dockerfile if used as source. If ‘git_repo’ is undefined, this path will be resolved locally, and the Dockerfile will be injected in the image BuildConfig.

  • +
  • default value: Dockerfile

  • +
+

context_dir

+
    +
  • Context dir inside the git repository.

  • +
  • default value: /

  • +
+

memory

+
    +
  • Flag to specify the required memory to build the image (in GB).

  • +
  • type: Float

  • +
+

from_image

+
    +
  • Base image to use, instead of the FROM image specified in the Dockerfile.

  • +
+

from_imagetag

+
    +
  • Base imagestreamtag to use, instead of the FROM image specified in the Dockerfile.

  • +
+
+
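A possible invocation, assuming TOPSAIL's usual ``./run_toolbox.py`` entry point (the repository URLs, names, and paths below are illustrative placeholders):

```shell
# Build an image from a Dockerfile hosted in a git repository,
# then push it to a remote registry:
./run_toolbox.py cluster build_push_image \
    --image_local_name=my-image \
    --tag=v1.0 \
    --namespace=my-build-namespace \
    --git_repo=https://github.com/example/repo \
    --git_ref=main \
    --dockerfile_path=Dockerfile \
    --remote_repo=quay.io/example/my-image \
    --remote_auth_file=/path/to/auth.json
```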
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Cluster.capture_environment.html b/toolbox.generated/Cluster.capture_environment.html new file mode 100644 index 0000000000..512d0e4d87 --- /dev/null +++ b/toolbox.generated/Cluster.capture_environment.html @@ -0,0 +1,129 @@ + + + + + + + cluster capture_environment — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

cluster capture_environment

+

Captures the cluster environment

+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Cluster.create_htpasswd_adminuser.html b/toolbox.generated/Cluster.create_htpasswd_adminuser.html new file mode 100644 index 0000000000..fb3aceff30 --- /dev/null +++ b/toolbox.generated/Cluster.create_htpasswd_adminuser.html @@ -0,0 +1,160 @@ + + + + + + + cluster create_htpasswd_adminuser — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

cluster create_htpasswd_adminuser

+

Create an htpasswd admin user.

+

Will remove any other existing OAuth.

+

Example of password file: +password=my-strong-password

+
+

Parameters

+

username

+
    +
  • Username of the htpasswd user.

  • +
+

passwordfile

+
    +
  • Password file where the user’s password is stored. Will be sourced.

  • +
+

wait

+
    +
  • If True, waits for the user to be able to login into the cluster.

  • +
+

# Constants +# Name of the secret that will contain the htpasswd passwords +# Defined as a constant in Cluster.create_htpasswd_adminuser +cluster_create_htpasswd_user_secret_name: htpasswd-secret

+

# Name of the htpasswd IDP being created +# Defined as a constant in Cluster.create_htpasswd_adminuser +cluster_create_htpasswd_user_htpasswd_idp_name: htpasswd

+

# Role that will be given to the user group +# Defined as a constant in Cluster.create_htpasswd_adminuser +cluster_create_htpasswd_user_role: cluster-admin

+

# Name of the group that will be created for the user +# Defined as a constant in Cluster.create_htpasswd_adminuser +cluster_create_htpasswd_user_groupname: local-admins

+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Cluster.create_osd.html b/toolbox.generated/Cluster.create_osd.html new file mode 100644 index 0000000000..e8029e81ec --- /dev/null +++ b/toolbox.generated/Cluster.create_osd.html @@ -0,0 +1,187 @@ + + + + + + + cluster create_osd — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

cluster create_osd
==================

Create an OpenShift Dedicated cluster.

Secret file content:

* ``KUBEADMIN_PASS``: password of the default kubeadmin user.
* ``AWS_ACCOUNT_ID``, ``AWS_ACCESS_KEY``, ``AWS_SECRET_KEY``: credentials to access AWS.

Parameters
----------

``cluster_name``

* The name to give to the cluster.

``secret_file``

* The file containing the cluster creation credentials.

``kubeconfig``

* The KUBECONFIG file to populate with the access to the cluster.

``version``

* OpenShift version to deploy.
* default value: ``4.10.15``

``region``

* AWS region where the cluster will be deployed.
* default value: ``us-east-1``

``htaccess_idp_name``

* Name of the identity provider that will be created for the admin account.
* default value: ``htpasswd``

``compute_machine_type``

* Name of the AWS machine instance type that will be used for the compute nodes.
* default value: ``m5.xlarge``

``compute_nodes``

* The number of compute nodes to create. A minimum of 2 is required by OSD.
* type: Int
* default value: ``2``

Constants::

  # Name of the worker node machinepool
  # Defined as a constant in Cluster.create_osd
  cluster_create_osd_machinepool_name: default

  # Group that the admin account will be part of.
  # Defined as a constant in Cluster.create_osd
  cluster_create_osd_kubeadmin_group: cluster-admins

  # Name of the admin account that will be created.
  # Defined as a constant in Cluster.create_osd
  cluster_create_osd_kubeadmin_name: kubeadmin


cluster deploy_aws_efs
======================

Deploy the AWS EFS CSI driver and configure AWS accordingly.

Assumes that AWS (credentials, Ansible module, Python module) is properly configured on the system.


cluster deploy_ldap
===================

Deploy OpenLDAP and LDAP OAuth.

Example of secret properties file::

  admin_password=adminpasswd

Parameters
----------

``idp_name``

* Name of the LDAP identity provider.

``username_prefix``

* Prefix for the creation of the users (the suffix ranges over 0..username_count).

``username_count``

* Number of users to create.
* type: Int

``secret_properties_file``

* Path of a file containing the properties of the LDAP secrets.

``use_ocm``

* If true, use ``ocm create idp`` to deploy the LDAP identity provider.

``use_rosa``

* If true, use ``rosa create idp`` to deploy the LDAP identity provider.

``cluster_name``

* Cluster to use when using OCM or ROSA.

``wait``

* If True, waits for the first user (0) to be able to log in to the cluster.

Constants::

  # Name of the admin user
  # Defined as a constant in Cluster.deploy_ldap
  cluster_deploy_ldap_admin_user: admin
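The generated user names are the prefix followed by a numeric suffix. A minimal sketch of the naming scheme, assuming the ``0..username_count`` range excludes the upper bound (the helper name is hypothetical, not part of TOPSAIL):

```python
def ldap_usernames(prefix: str, count: int) -> list[str]:
    # Suffixes run from 0 up to (but not including) username_count.
    return [f"{prefix}{idx}" for idx in range(count)]

# With username_prefix=psapuser and username_count=3:
print(ldap_usernames("psapuser", 3))  # -> ['psapuser0', 'psapuser1', 'psapuser2']
```

The ``wait`` parameter above checks the first of these users, i.e. the one with suffix ``0``.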


cluster deploy_minio_s3_server
==============================

Deploy a Minio S3 server.

Example of secret properties file::

  user_password=passwd
  admin_password=adminpasswd

Parameters
----------

``secret_properties_file``

* Path of a file containing the properties of the S3 secrets.

``namespace``

* Namespace in which Minio should be deployed.
* default value: ``minio``

``bucket_name``

* The name of the default bucket to create in Minio.
* default value: ``myBucket``

Constants::

  # Name of the Minio admin user
  # Defined as a constant in Cluster.deploy_minio_s3_server
  cluster_deploy_minio_s3_server_root_user: admin

  # Name of the user/access key to use to connect to the Minio server
  # Defined as a constant in Cluster.deploy_minio_s3_server
  cluster_deploy_minio_s3_server_access_key: minio
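The secret properties file is a plain ``key=value`` list, one entry per line. A minimal parser sketch for this format (illustrative only; the function name is made up, not TOPSAIL's code):

```python
def parse_properties(text: str) -> dict[str, str]:
    """Parse 'key=value' lines; blank lines and '#' comments are ignored."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

example = "user_password=passwd\nadmin_password=adminpasswd\n"
print(parse_properties(example))  # -> {'user_password': 'passwd', 'admin_password': 'adminpasswd'}
```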


cluster deploy_nfs_provisioner
==============================

Deploy an NFS provisioner.

Parameters
----------

``namespace``

* The namespace where the resources will be deployed.
* default value: ``nfs-provisioner``

``pvc_sc``

* The name of the storage class to use for the NFS-provisioner PVC.
* default value: ``gp3-csi``

``pvc_size``

* The size of the PVC to give to the NFS-provisioner.
* default value: ``10Gi``

``storage_class_name``

* The name of the storage class that will be created.
* default value: ``nfs-provisioner``

``default_sc``

* Set to true to mark the storage class as the default one in the cluster.


cluster deploy_nginx_server
===========================

Deploy an NGINX HTTP server.

Parameters
----------

``namespace``

* Namespace where the server will be deployed. Will be created if it doesn't exist.

``directory``

* Directory containing the files to serve on the HTTP server.


cluster deploy_opensearch
=========================

Deploy OpenSearch and OpenSearch-Dashboards.

Example of secret properties file::

  user_password=passwd
  admin_password=adminpasswd

Parameters
----------

``secret_properties_file``

* Path of a file containing the properties of the OpenSearch secrets.

``namespace``

* Namespace in which the application will be deployed.
* default value: ``opensearch``

``name``

* Name to give to the OpenSearch instance.
* default value: ``opensearch``


cluster deploy_operator
=======================

Deploy an operator from an OperatorHub catalog entry.

Parameters
----------

``catalog``

* Name of the catalog containing the operator.

``manifest_name``

* Name of the operator package manifest.

``namespace``

* Namespace in which the operator will be deployed, or ``all`` to deploy it in all the namespaces.

``version``

* Version to deploy. If unspecified, deploys the latest version available in the selected channel.

``channel``

* Channel to deploy from. If unspecified, deploys the CSV's default channel. Use ``?`` to list the available channels for the given package manifest.

``installplan_approval``

* InstallPlan approval mode (Automatic or Manual).
* default value: ``Manual``

``catalog_namespace``

* Namespace in which the CatalogSource will be deployed.
* default value: ``openshift-marketplace``

``deploy_cr``

* If set, deploy the first example CR found in the CSV.
* type: Bool

``namespace_monitoring``

* If set, enable OpenShift namespace monitoring.
* type: Bool

``all_namespaces``

* If set, deploy the CSV in all the namespaces.
* type: Bool

``config_env_names``

* If not empty, a list of config env names to pass to the subscription.
* type: List

``csv_base_name``

* If not empty, base name of the CSV. If empty, use the manifest_name.


cluster deploy_redis_server
===========================

Deploy a Redis server.

Parameters
----------

``namespace``

* Namespace where the server will be deployed. Will be created if it doesn't exist.


cluster destroy_ocp
===================

Destroy an OpenShift cluster.

Parameters
----------

``region``

* The AWS region where the cluster lives. If empty and ``--confirm`` is passed, look it up from the cluster.

``tag``

* The resource tag key. If empty and ``--confirm`` is passed, look it up from the cluster.

``confirm``

* If the region/tag are not set and ``--confirm`` is passed, destroy the current cluster.

``tag_value``

* The resource tag value.
* default value: ``owned``

``openshift_install``

* The path of the ``openshift-install`` binary to use to destroy the cluster. If empty, pick it up from the ``deploy-cluster`` subproject.
* default value: ``openshift-install``


cluster destroy_osd
===================

Destroy an OpenShift Dedicated cluster.

Parameters
----------

``cluster_name``

* The name of the cluster to destroy.


cluster download_to_pvc
=======================

Downloads a dataset into a PVC of the cluster.

Parameters
----------

``name``

* Name of the data source.

``source``

* URL of the source data.

``pvc_name``

* Name of the PVC that will be created to store the dataset files.

``namespace``

* Name of the namespace in which the PVC will be created.

``creds``

* Path to the credentials to use for accessing the dataset.

``storage_dir``

* The path where to store the downloaded files, inside the PVC.
* default value: ``/``

``clean_first``

* If True, clears the storage directory before downloading.

``pvc_access_mode``

* The access mode to request when creating the PVC.
* default value: ``ReadWriteOnce``

``pvc_size``

* The size of the PVC to request, when creating it.
* default value: ``80Gi``


cluster dump_prometheus_db
==========================

Dump the Prometheus database into a file.

By default, targets the OpenShift Prometheus Pod.

Parameters
----------

``label``

* Label to use to identify the Prometheus Pod.
* default value: ``app.kubernetes.io/component=prometheus``

``namespace``

* Namespace where to search for the Prometheus Pod.
* default value: ``openshift-monitoring``

``dump_name_prefix``

* Name prefix for the archive that will be stored.
* default value: ``prometheus``

Constants::

  # Defined as a constant in Cluster.dump_prometheus_db
  cluster_prometheus_db_mode: dump


cluster fill_workernodes
========================

Fills the worker nodes with place-holder Pods with the maximum available amount of a given resource name.

Parameters
----------

``namespace``

* Namespace in which the place-holder Pods should be deployed.
* default value: ``default``

``name``

* Name prefix to use for the place-holder Pods.
* default value: ``resource-placeholder``

``label_selector``

* Label to use to select the nodes to fill.
* default value: ``node-role.kubernetes.io/worker``
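To fill a node, a place-holder Pod has to request whatever amount of the resource is still unclaimed on that node. A minimal sketch of that computation (illustrative only; the helper name is hypothetical, not TOPSAIL's code):

```python
def placeholder_request(allocatable: int, existing_requests: list[int]) -> int:
    """Amount of a resource a place-holder Pod must request to saturate the node."""
    remaining = allocatable - sum(existing_requests)
    return max(remaining, 0)  # never request a negative amount

# A node with 16 allocatable CPU cores, where running Pods already request 3 + 5 cores:
print(placeholder_request(16, [3, 5]))  # -> 8
```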


cluster preload_image
=====================

Preload a container image on all the nodes of a cluster.

Parameters
----------

``name``

* Name to give to the DaemonSet used for preloading the image.

``image``

* Container image to preload on the nodes.

``namespace``

* Namespace in which the DaemonSet will be created.
* default value: ``default``

``node_selector_key``

* NodeSelector key to apply to the DaemonSet.

``node_selector_value``

* NodeSelector value to apply to the DaemonSet.

``pod_toleration_key``

* Pod toleration key to apply to the DaemonSet.

``pod_toleration_effect``

* Pod toleration effect to apply to the DaemonSet.


cluster query_prometheus_db
===========================

Query Prometheus with a list of PromQL queries read from a file.

The metrics file is a multi-line list: first the name of the metric, prefixed with ``#``, then the definition of the metric, which can spread over multiple lines, until the next ``#`` is found.

Example::

  # sutest__cluster_cpu_capacity
  sum(cluster:capacity_cpu_cores:sum)
  # sutest__cluster_memory_requests
     sum(
          kube_pod_resource_request{resource="memory"}
          *
          on(node) group_left(role) (
            max by (node) (kube_node_role{role=~".+"})
          )
        )
  # openshift-operators CPU request
  sum(kube_pod_container_resource_requests{namespace=~'openshift-operators',resource='cpu'})
  # openshift-operators CPU limit
  sum(kube_pod_container_resource_limits{namespace=~'openshift-operators',resource='cpu'})
  # openshift-operators CPU usage
  sum(rate(container_cpu_usage_seconds_total{namespace=~'openshift-operators'}[5m]))

Parameters
----------

``promquery_file``

* File where the Prometheus queries are stored. See the example above for the format.

``dest_dir``

* Directory where the metrics should be stored.

``namespace``

* The namespace where the metrics should be searched for.

``duration_s``

* The duration of the history to query.

``start_ts``

* The start timestamp of the history to query. Incompatible with the duration_s flag.

``end_ts``

* The end timestamp of the history to query. Incompatible with the duration_s flag.
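The parsing rule described above (a ``#`` line starts a new metric; everything until the next ``#`` belongs to that metric's PromQL expression) can be sketched in a few lines of Python. This is an illustration of the file format, not TOPSAIL's actual parser; the function name is made up:

```python
def parse_promquery_file(text: str) -> dict[str, str]:
    """Map each '# name' header to the (possibly multi-line) PromQL expression below it."""
    queries: dict[str, str] = {}
    name = None
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("#"):
            name = stripped.lstrip("#").strip()   # a new metric starts here
            queries[name] = ""
        elif name is not None and stripped:
            queries[name] += stripped             # continuation of the current definition
    return queries

content = """\
# sutest__cluster_cpu_capacity
sum(cluster:capacity_cpu_cores:sum)
# openshift-operators CPU usage
sum(rate(container_cpu_usage_seconds_total{namespace=~'openshift-operators'}[5m]))
"""
print(parse_promquery_file(content)["sutest__cluster_cpu_capacity"])  # -> sum(cluster:capacity_cpu_cores:sum)
```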


cluster reset_prometheus_db
===========================

Resets the Prometheus database by destroying its Pod.

By default, targets the OpenShift Prometheus Pod.

Parameters
----------

``mode``

* Mode in which the role will run. Can be ``reset`` or ``dump``.
* default value: ``reset``

``label``

* Label to use to identify the Prometheus Pod.
* default value: ``app.kubernetes.io/component=prometheus``

``namespace``

* Namespace where to search for the Prometheus Pod.
* default value: ``openshift-monitoring``

Constants::

  # Prefix to apply to the db name in 'dump' mode
  # Defined as a constant in Cluster.reset_prometheus_db
  cluster_prometheus_db_dump_name_prefix: prometheus

  # Directory to dump on the Prometheus Pod
  # Defined as a constant in Cluster.reset_prometheus_db
  cluster_prometheus_db_directory: /prometheus


cluster set_project_annotation
==============================

Set an annotation on a given project, or for any new projects.

Parameters
----------

``key``

* The annotation key.

``value``

* The annotation value. If the value is omitted, the annotation is removed.

``project``

* The project to annotate. Must be set unless ``--all`` is passed.

``all``

* If set, the annotation will be set for any new project.


cluster set_scale
=================

Ensures that the cluster has exactly ``scale`` nodes of instance type ``instance_type``.

If the machinesets of the given instance type already have the required total number of replicas, their replica counts are not modified. Otherwise:

- If there is only one machineset with the given instance type, its replicas will be set to the value of this parameter.
- If there are other machinesets with non-zero replicas, the playbook will fail, unless the ``force`` parameter is set to true. In that case, the number of replicas of the other machinesets will be zeroed before setting the replicas of the first machineset to the value of this parameter.
- If the ``--base-machineset=machineset`` flag is passed, machineset ``machineset`` will be used to derive the new machineset (otherwise, the first machineset of the listing will be used). This is useful if the desired instance type is only available in some specific regions and controlled by different machinesets.

Example: ``./run_toolbox.py cluster set_scale g4dn.xlarge 1`` ensures that the cluster has exactly one GPU node.

Parameters
----------

``instance_type``

* The instance type to use, for example, ``g4dn.xlarge``.

``scale``

* The number of required nodes with the given instance type.

``base_machineset``

* Name of a machineset to use to derive the new one. Default: pick the first machineset found in ``oc get machinesets -n openshift-machine-api``.

``force``

* Missing documentation for force.

``taint``

* Taint to apply to the machineset.

``name``

* Name to give to the new machineset.

``spot``

* Set to true to request spot instances from AWS. Set to false (default) to request on-demand instances.

``disk_size``

* Size of the EBS volume to request for the root partition.
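The replica-planning rules described above can be sketched as follows. This is an illustrative model only: the function name and the dict-based representation of the machinesets (name to replica count) are hypothetical, not TOPSAIL's code:

```python
def plan_machineset_replicas(machinesets: dict[str, int], scale: int,
                             force: bool = False) -> dict[str, int]:
    """Decide the replica count per machineset of a given instance type."""
    if sum(machinesets.values()) == scale:
        return dict(machinesets)        # already at the required total: leave untouched
    names = list(machinesets)
    if len(names) == 1:
        return {names[0]: scale}        # a single machineset: just set its replicas
    if any(machinesets[n] > 0 for n in names[1:]) and not force:
        raise RuntimeError("other machinesets have non-zero replicas; pass force=true")
    plan = {n: 0 for n in names}        # zero the others, scale the first one
    plan[names[0]] = scale
    return plan

print(plan_machineset_replicas({"gpu-a": 0, "gpu-b": 2}, 1, force=True))  # -> {'gpu-a': 1, 'gpu-b': 0}
```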


cluster undeploy_ldap
=====================

Undeploy OpenLDAP and LDAP OAuth.

Parameters
----------

``idp_name``

* Name of the LDAP identity provider.

``use_ocm``

* If true, use ``ocm delete idp`` to delete the LDAP identity provider.

``use_rosa``

* If true, use ``rosa delete idp`` to delete the LDAP identity provider.

``cluster_name``

* Cluster to use when using OCM or ROSA.


cluster update_pods_per_node
============================

Update the maximum number of Pods per node, and of Pods per core. See also: https://docs.openshift.com/container-platform/4.14/nodes/nodes/nodes-nodes-managing-max-pods.html

Parameters
----------

``max_pods``

* The maximum number of Pods per node.
* default value: ``250``

``pods_per_core``

* The maximum number of Pods per core.
* default value: ``10``

``name``

* The name to give to the KubeletConfig object.
* default value: ``set-max-pods``

``label``

* The label selector for the nodes to update.
* default value: ``pools.operator.machineconfiguration.openshift.io/worker``

``label_value``

* The expected value for the label selector.
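The two limits interact: on a given node, the kubelet enforces the lower of ``maxPods`` and ``podsPerCore x cores`` (per the OpenShift documentation linked above). A small sketch of the effective cap, with a hypothetical helper name:

```python
def effective_max_pods(max_pods: int, pods_per_core: int, node_cores: int) -> int:
    """Effective Pod cap on a node: the kubelet honors the lower of the two limits."""
    if pods_per_core <= 0:
        return max_pods                       # pods-per-core limit disabled
    return min(max_pods, pods_per_core * node_cores)

# With the defaults (max_pods=250, pods_per_core=10) on a 16-core node:
print(effective_max_pods(250, 10, 16))  # -> 160: pods-per-core is the binding limit
```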


cluster upgrade_to_image
========================

Upgrades the cluster to the given image.

Parameters
----------

``image``

* The image to upgrade the cluster to.


cluster wait_fully_awake
========================

Waits for the cluster to be fully awake after a Hive restart.


configure apply
===============

Applies a preset (or a list of presets) to the current configuration file.

Parameters
----------

``preset``

* A preset to apply.

``presets``

* A list of presets to apply.


configure enter
===============

Enter a custom configuration file for a TOPSAIL project.

Parameters
----------

``project``

* The name of the project to configure.

``show_export``

* Show the export command.

``shell``

* If False, do nothing. If True, exec the default shell. Any other value is executed.
* default value: ``True``

``preset``

* A preset to apply.

``presets``

* A list of presets to apply.


configure get
=============

Gives the value of a given key in the current configuration file.

Parameters
----------

``key``

* The key to look up in the configuration file.


configure name

+

Gives the name of the current configuration

+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Cpt.deploy_cpt_dashboard.html b/toolbox.generated/Cpt.deploy_cpt_dashboard.html new file mode 100644 index 0000000000..1c72bda04a --- /dev/null +++ b/toolbox.generated/Cpt.deploy_cpt_dashboard.html @@ -0,0 +1,167 @@ + + + + + + + cpt deploy_cpt_dashboard — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

cpt deploy_cpt_dashboard

+

Deploy and configure the CPT Dashboard

+

Example of secret properties file:

+

admin_password=adminpasswd

+
+

Parameters

+

frontend_istag

+
    +
  • Imagestream tag to use for the frontend container

  • +
+

backend_istag

+
    +
  • Imagestream tag to use for the backend container

  • +
+

plugin_name

+
    +
  • Name of the CPT Dashboard plugin to configure

  • +
+

es_url

+
    +
  • URL of the OpenSearch backend

  • +
+

es_indice

+
    +
  • Index of the OpenSearch backend

  • +
+

es_username

+
    +
  • Username to use to log in to OpenSearch

  • +
+

secret_properties_file

+
    +
  • Path of a file containing the OpenSearch user credentials

  • +
+

namespace

+
    +
  • Namespace in which the application will be deployed

  • +
  • default value: topsail-cpt-dashboard

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Fine_Tuning.ray_fine_tuning_job.html b/toolbox.generated/Fine_Tuning.ray_fine_tuning_job.html new file mode 100644 index 0000000000..e2573c286e --- /dev/null +++ b/toolbox.generated/Fine_Tuning.ray_fine_tuning_job.html @@ -0,0 +1,233 @@ + + + + + + + fine_tuning ray_fine_tuning_job — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

fine_tuning ray_fine_tuning_job

+

Run a simple Ray fine-tuning Job.

+
+

Parameters

+

name

+
    +
  • The name of the fine-tuning job to create

  • +
+

namespace

+
    +
  • The name of the namespace where the scheduler load will be generated

  • +
+

pvc_name

+
    +
  • The name of the PVC where the model and dataset are stored

  • +
+

model_name

+
    +
  • The name of the model to use inside the /model directory of the PVC

  • +
+

workload

+
    +
  • The name of the workload job to run (see the role’s workload directory)

  • +
  • default value: ray-finetune-llm-deepspeed

  • +
+

dataset_name

+
    +
  • The name of the dataset to use inside the /dataset directory of the PVC

  • +
+

dataset_replication

+
    +
  • Number of replications of the dataset to use, to artificially extend or reduce the fine-tuning effort

  • +
  • default value: 1

  • +
+

dataset_transform

+
    +
  • Name of the transformation to apply to the dataset

  • +
+

dataset_prefer_cache

+
    +
  • If True, and the dataset has to be transformed/duplicated, save and/or load it from the PVC

  • +
  • default value: True

  • +
+

dataset_prepare_cache_only

+
    +
  • If True, only prepare the dataset cache file and do not run the fine-tuning.

  • +
+

container_image

+
    +
  • The image to use for the fine-tuning container

  • +
  • default value: quay.io/rhoai/ray:2.35.0-py39-cu121-torch24-fa26

  • +
+

ray_version

+
    +
  • The version identifier passed to the RayCluster object

  • +
  • default value: 2.35.0

  • +
+

gpu

+
    +
  • The number of GPUs to request for the fine-tuning job

  • +
+

memory

+
    +
  • The amount of RAM to request for the fine-tuning job (in GiB)

  • +
  • default value: 10

  • +
+

cpu

+
    +
  • The number of CPU cores to request for the fine-tuning job (in cores)

  • +
  • default value: 1

  • +
+

request_equals_limits

+
    +
  • If True, sets the ‘limits’ of the job with the same value as the request.

  • +
+

prepare_only

+
    +
  • If True, only prepare the environment but do not run the fine-tuning job.

  • +
+

delete_other

+
    +
  • If True, delete the other PyTorchJobs before running

  • +
+

pod_count

+
    +
  • Number of Pods to include in the job

  • +
  • default value: 1

  • +
+

hyper_parameters

+
    +
  • Dictionary of hyper-parameters to pass to sft-trainer

  • +
+

sleep_forever

+
    +
  • If true, sleeps forever instead of running the fine-tuning command.

  • +
+

capture_artifacts

+
    +
  • If enabled, captures the artifacts that will help post-mortem analyses

  • +
  • default value: True

  • +
+

shutdown_cluster

+
    +
  • If True, let the RayJob shutdown the RayCluster when the job terminates

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Fine_Tuning.run_fine_tuning_job.html b/toolbox.generated/Fine_Tuning.run_fine_tuning_job.html new file mode 100644 index 0000000000..bf4d1814cd --- /dev/null +++ b/toolbox.generated/Fine_Tuning.run_fine_tuning_job.html @@ -0,0 +1,235 @@ + + + + + + + fine_tuning run_fine_tuning_job — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

fine_tuning run_fine_tuning_job

+

Run a simple fine-tuning Job.

+
+

Parameters

+

name

+
    +
  • The name of the fine-tuning job to create

  • +
+

namespace

+
    +
  • The name of the namespace where the scheduler load will be generated

  • +
+

pvc_name

+
    +
  • The name of the PVC where the model and dataset are stored

  • +
+

workload

+
    +
  • The name of the workload to run inside the container (fms or ilab)

  • +
+

model_name

+
    +
  • The name of the model to use inside the /model directory of the PVC

  • +
+

dataset_name

+
    +
  • The name of the dataset to use inside the /dataset directory of the PVC

  • +
+

dataset_replication

+
    +
  • Number of replications of the dataset to use, to artificially extend or reduce the fine-tuning effort

  • +
  • default value: 1

  • +
+

dataset_transform

+
    +
  • Name of the transformation to apply to the dataset

  • +
+

dataset_prefer_cache

+
    +
  • If True, and the dataset has to be transformed/duplicated, save and/or load it from the PVC

  • +
  • default value: True

  • +
+

dataset_prepare_cache_only

+
    +
  • If True, only prepare the dataset cache file and do not run the fine-tuning.

  • +
+

dataset_response_template

+
    +
  • The delimiter marking the beginning of the response in the dataset samples

  • +
+

container_image

+
    +
  • The image to use for the fine-tuning container

  • +
  • default value: quay.io/modh/fms-hf-tuning:release-7a8ff0f4114ba43398d34fd976f6b17bb1f665f3

  • +
+

gpu

+
    +
  • The number of GPUs to request for the fine-tuning job

  • +
+

memory

+
    +
  • The amount of RAM to request for the fine-tuning job (in GiB)

  • +
  • default value: 10

  • +
+

cpu

+
    +
  • The number of CPU cores to request for the fine-tuning job (in cores)

  • +
  • default value: 1

  • +
+

request_equals_limits

+
    +
  • If True, sets the ‘limits’ of the job with the same value as the request.

  • +
+

prepare_only

+
    +
  • If True, only prepare the environment but do not run the fine-tuning job.

  • +
+

delete_other

+
    +
  • If True, delete the other PyTorchJobs before running

  • +
+

pod_count

+
    +
  • Number of Pods to include in the job

  • +
  • default value: 1

  • +
+

hyper_parameters

+
    +
  • Dictionary of hyper-parameters to pass to sft-trainer

  • +
+

capture_artifacts

+
    +
  • If enabled, captures the artifacts that will help post-mortem analyses

  • +
  • default value: True

  • +
+

sleep_forever

+
    +
  • If true, sleeps forever instead of running the fine-tuning command.

  • +
+

ephemeral_output_pvc_size

+
    +
  • If a size (with units) is passed, use an ephemeral volume claim for storing the fine-tuning output. Otherwise, use an emptyDir.

  • +
+

use_roce

+
    +
  • If enabled, activates the flags required to use RoCE fast network

  • +
+
+
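The parameters above can be sketched as a single toolbox call. Everything below (the entrypoint path, the namespace, PVC, model, and dataset names) is a hypothetical placeholder rather than a value taken from this page:

```shell
# Sketch only: prints the would-be command instead of running it, since
# execution requires a cluster with the PVC already populated.
cmd="./run_toolbox.py fine_tuning run_fine_tuning_job \
  --name=ft-job --namespace=ft-test --pvc_name=ft-pvc \
  --workload=fms --model_name=my-model --dataset_name=my-dataset --gpu=1"
echo "$cmd"
```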
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Fine_Tuning.run_quality_evaluation.html b/toolbox.generated/Fine_Tuning.run_quality_evaluation.html new file mode 100644 index 0000000000..ebcfbfa9be --- /dev/null +++ b/toolbox.generated/Fine_Tuning.run_quality_evaluation.html @@ -0,0 +1,180 @@ + + + + + + + fine_tuning run_quality_evaluation — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

fine_tuning run_quality_evaluation

+

Run a simple quality-evaluation Job.

+
+

Parameters

+

name

+
    +
  • The name of the fine-tuning job to create

  • +
+

namespace

+
    +
  • The name of the namespace where the scheduler load will be generated

  • +
+

pvc_name

+
    +
  • The name of the PVC where the model and dataset are stored

  • +
+

model_name

+
    +
  • The name of the model to use inside the /model directory of the PVC

  • +
+

container_image

+
    +
  • The image to use for the fine-tuning container

  • +
  • default value: registry.redhat.io/ubi9

  • +
+

gpu

+
    +
  • The number of GPUs to request for the fine-tuning job

  • +
+

memory

+
    +
  • The amount of RAM to request for the fine-tuning job (in GiB)

  • +
  • default value: 10

  • +
+

cpu

+
    +
  • The number of CPU cores to request for the fine-tuning job (in cores)

  • +
  • default value: 1

  • +
+

pod_count

+
    +
  • Number of pods to deploy in the job

  • +
  • default value: 1

  • +
+

hyper_parameters

+
    +
  • Dictionary of hyper-parameters to pass to sft-trainer

  • +
+

sleep_forever

+
    +
  • If true, sleeps forever instead of running the fine-tuning command.

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/From_Config.run.html b/toolbox.generated/From_Config.run.html new file mode 100644 index 0000000000..5c552e5812 --- /dev/null +++ b/toolbox.generated/From_Config.run.html @@ -0,0 +1,165 @@ + + + + + + + from_config run — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

from_config run

+

Run topsail toolbox commands from a single config file.

+
+

Parameters

+

group

+
    +
  • Group to which the command belongs.

  • +
+

command

+
    +
  • Command to call, within the group.

  • +
+

config_file

+
    +
  • Configuration file from which the parameters will be looked up. Can be passed via the TOPSAIL_FROM_CONFIG_FILE environment variable.

  • +
+

command_args_file

+
    +
  • Command argument configuration file. Can be passed via the TOPSAIL_FROM_COMMAND_ARGS_FILE environment variable.

  • +
+

prefix

+
    +
  • Prefix to apply to the role name to lookup the command options.

  • +
+

suffix

+
    +
  • Suffix to apply to the role name to lookup the command options.

  • +
+

extra

+
    +
  • Extra arguments to pass to the commands. Use the dictionary notation: ‘{arg1: val1, arg2: val2}’.

  • +
  • type: Dict

  • +
+

show_args

+
    +
  • Print the generated arguments on stdout and exit, or only a given argument if a value is passed.

  • +
+
+
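A minimal sketch of driving a toolbox command from a config file, using the `TOPSAIL_FROM_CONFIG_FILE` environment variable documented above; the config path and the group/command pair below are placeholders, and the flag spelling is an assumption:

```shell
# Sketch only (printed, not executed): requires a TOPSAIL checkout and a
# valid configuration file to actually run.
export TOPSAIL_FROM_CONFIG_FILE=./config.yaml
cmd="./run_toolbox.py from_config run --group=gpu_operator --command=run_gpu_burn --extra='{runtime: 60}'"
echo "$cmd"
```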
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Gpu_Operator.capture_deployment_state.html b/toolbox.generated/Gpu_Operator.capture_deployment_state.html new file mode 100644 index 0000000000..a337b9573d --- /dev/null +++ b/toolbox.generated/Gpu_Operator.capture_deployment_state.html @@ -0,0 +1,129 @@ + + + + + + + gpu_operator capture_deployment_state — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

gpu_operator capture_deployment_state

+

Captures the GPU operator deployment state

+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Gpu_Operator.deploy_cluster_policy.html b/toolbox.generated/Gpu_Operator.deploy_cluster_policy.html new file mode 100644 index 0000000000..c3a5b9fd35 --- /dev/null +++ b/toolbox.generated/Gpu_Operator.deploy_cluster_policy.html @@ -0,0 +1,129 @@ + + + + + + + gpu_operator deploy_cluster_policy — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

gpu_operator deploy_cluster_policy

+

Creates the ClusterPolicy from the OLM ClusterServiceVersion

+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Gpu_Operator.deploy_from_bundle.html b/toolbox.generated/Gpu_Operator.deploy_from_bundle.html new file mode 100644 index 0000000000..b282fd9127 --- /dev/null +++ b/toolbox.generated/Gpu_Operator.deploy_from_bundle.html @@ -0,0 +1,141 @@ + + + + + + + gpu_operator deploy_from_bundle — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

gpu_operator deploy_from_bundle

+

Deploys the GPU Operator from a bundle

+
+

Parameters

+

bundle

+
    +
  • Either a bundle OCI image or “master” to deploy the latest bundle

  • +
+

namespace

+
    +
  • Optional namespace in which the GPU Operator will be deployed. Before v1.9, the value must be “openshift-operators”. With >=v1.9, the namespace can be freely chosen (except ‘openshift-operators’). Default: nvidia-gpu-operator.

  • +
  • default value: nvidia-gpu-operator

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Gpu_Operator.deploy_from_operatorhub.html b/toolbox.generated/Gpu_Operator.deploy_from_operatorhub.html new file mode 100644 index 0000000000..c4fb60f189 --- /dev/null +++ b/toolbox.generated/Gpu_Operator.deploy_from_operatorhub.html @@ -0,0 +1,150 @@ + + + + + + + gpu_operator deploy_from_operatorhub — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

gpu_operator deploy_from_operatorhub

+

Deploys the GPU operator from OperatorHub

+
+

Parameters

+

namespace

+
    +
  • Optional namespace in which the GPU Operator will be deployed. Before v1.9, the value must be “openshift-operators”. With >=v1.9, the namespace can be freely chosen. Default: nvidia-gpu-operator.

  • +
  • default value: nvidia-gpu-operator

  • +
+

version

+
    +
  • Optional version to deploy. If unspecified, deploys the latest version available in the selected channel. Run the toolbox gpu_operator list_version_from_operator_hub subcommand to see the available versions.

  • +
+

channel

+
    +
  • Optional channel to deploy from. If unspecified, deploys the CSV’s default channel.

  • +
+

installPlan

+
    +
  • Optional InstallPlan approval mode (Automatic or Manual [default])

  • +
  • default value: Manual

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Gpu_Operator.enable_time_sharing.html b/toolbox.generated/Gpu_Operator.enable_time_sharing.html new file mode 100644 index 0000000000..0228ae44d7 --- /dev/null +++ b/toolbox.generated/Gpu_Operator.enable_time_sharing.html @@ -0,0 +1,146 @@ + + + + + + + gpu_operator enable_time_sharing — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

gpu_operator enable_time_sharing

+

Enable time-sharing in the GPU Operator ClusterPolicy

+
+

Parameters

+

replicas

+
    +
  • Number of slices available for each of the GPUs

  • +
+

namespace

+
    +
  • Namespace in which the GPU Operator is deployed

  • +
  • default value: nvidia-gpu-operator

  • +
+

configmap_name

+
    +
  • Name of the ConfigMap where the configuration will be stored

  • +
  • default value: time-slicing-config-all

  • +
+
+
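Enabling time-sharing can be sketched as the call below; the replica count is a placeholder and the flag convention is an assumption:

```shell
# Sketch only: prints the command line rather than executing it, since it
# needs a cluster with the GPU Operator deployed.
cmd="./run_toolbox.py gpu_operator enable_time_sharing --replicas=4"
echo "$cmd"
```

Each physical GPU would then advertise 4 schedulable slices via the ConfigMap named in `configmap_name`.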
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Gpu_Operator.extend_metrics.html b/toolbox.generated/Gpu_Operator.extend_metrics.html new file mode 100644 index 0000000000..2bdc402d85 --- /dev/null +++ b/toolbox.generated/Gpu_Operator.extend_metrics.html @@ -0,0 +1,161 @@ + + + + + + + gpu_operator extend_metrics — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

gpu_operator extend_metrics

+

Extends the DCGM metrics collected by the GPU Operator

+
+

Parameters

+

include_defaults

+
    +
  • If True, include the default DCGM metrics in the custom config

  • +
  • default value: True

  • +
+

include_well_known

+
    +
  • If True, include well-known interesting DCGM metrics in the custom config

  • +
+

namespace

+
    +
  • Namespace in which the GPU Operator is deployed

  • +
  • default value: nvidia-gpu-operator

  • +
+

configmap_name

+
    +
  • Name of the ConfigMap where the configuration will be stored

  • +
  • default value: metrics-config

  • +
+

extra_metrics

+
    +
  • If not None, a [{name,type,description}*] list of dictionaries with the extra metrics to include in the custom config

  • +
  • type: List

  • +
+

wait_refresh

+
    +
  • If True, wait for the DCGM components to take into account the new configuration

  • +
  • default value: True

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Gpu_Operator.get_csv_version.html b/toolbox.generated/Gpu_Operator.get_csv_version.html new file mode 100644 index 0000000000..f32a078d61 --- /dev/null +++ b/toolbox.generated/Gpu_Operator.get_csv_version.html @@ -0,0 +1,129 @@ + + + + + + + gpu_operator get_csv_version — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

gpu_operator get_csv_version

+

Get the version of the GPU Operator currently installed from OLM. Stores the version in the ‘ARTIFACT_EXTRA_LOGS_DIR’ artifacts directory.

+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Gpu_Operator.run_gpu_burn.html b/toolbox.generated/Gpu_Operator.run_gpu_burn.html new file mode 100644 index 0000000000..c066bd63f9 --- /dev/null +++ b/toolbox.generated/Gpu_Operator.run_gpu_burn.html @@ -0,0 +1,154 @@ + + + + + + + gpu_operator run_gpu_burn — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

gpu_operator run_gpu_burn

+

Runs the GPU burn on the cluster

+
+

Parameters

+

namespace

+
    +
  • Namespace in which GPU-burn will be executed

  • +
  • default value: default

  • +
+

runtime

+
    +
  • How long to run the GPU burn, in seconds

  • +
  • type: Int

  • +
  • default value: 30

  • +
+

keep_resources

+
    +
  • If true, do not delete the GPU-burn ConfigMaps

  • +
  • type: Bool

  • +
+

ensure_has_gpu

+
    +
  • If true, fails if no GPU is available in the cluster.

  • +
  • type: Bool

  • +
  • default value: True

  • +
+
+
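The GPU burn described above can be sketched as follows; the namespace and runtime values are placeholders, and the `--key=value` flag style is an assumption:

```shell
# Sketch only: prints the command line instead of executing it, since the
# burn requires a cluster with GPU nodes available.
cmd="./run_toolbox.py gpu_operator run_gpu_burn --namespace=default --runtime=300"
echo "$cmd"
```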
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Gpu_Operator.undeploy_from_operatorhub.html b/toolbox.generated/Gpu_Operator.undeploy_from_operatorhub.html new file mode 100644 index 0000000000..ff18c012b8 --- /dev/null +++ b/toolbox.generated/Gpu_Operator.undeploy_from_operatorhub.html @@ -0,0 +1,129 @@ + + + + + + + gpu_operator undeploy_from_operatorhub — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

gpu_operator undeploy_from_operatorhub

+

Undeploys a GPU-operator that was deployed from OperatorHub

+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Gpu_Operator.wait_deployment.html b/toolbox.generated/Gpu_Operator.wait_deployment.html new file mode 100644 index 0000000000..0d78fe07de --- /dev/null +++ b/toolbox.generated/Gpu_Operator.wait_deployment.html @@ -0,0 +1,129 @@ + + + + + + + gpu_operator wait_deployment — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

gpu_operator wait_deployment

+

Waits for the GPU operator to deploy

+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Gpu_Operator.wait_stack_deployed.html b/toolbox.generated/Gpu_Operator.wait_stack_deployed.html new file mode 100644 index 0000000000..102562b005 --- /dev/null +++ b/toolbox.generated/Gpu_Operator.wait_stack_deployed.html @@ -0,0 +1,137 @@ + + + + + + + gpu_operator wait_stack_deployed — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

gpu_operator wait_stack_deployed

+

Waits for the GPU Operator stack to be deployed on the GPU nodes

+
+

Parameters

+

namespace

+
    +
  • Namespace in which the GPU Operator is deployed

  • +
  • default value: nvidia-gpu-operator

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Kepler.deploy_kepler.html b/toolbox.generated/Kepler.deploy_kepler.html new file mode 100644 index 0000000000..43b7be3144 --- /dev/null +++ b/toolbox.generated/Kepler.deploy_kepler.html @@ -0,0 +1,129 @@ + + + + + + + kepler deploy_kepler — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

kepler deploy_kepler

+

Deploy the Kepler operator and monitor to track energy consumption

+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Kepler.undeploy_kepler.html b/toolbox.generated/Kepler.undeploy_kepler.html new file mode 100644 index 0000000000..23dfc9197b --- /dev/null +++ b/toolbox.generated/Kepler.undeploy_kepler.html @@ -0,0 +1,129 @@ + + + + + + + kepler undeploy_kepler — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

kepler undeploy_kepler

+

Cleanup the Kepler operator and associated resources

+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Kserve.capture_operators_state.html b/toolbox.generated/Kserve.capture_operators_state.html new file mode 100644 index 0000000000..763eedd29e --- /dev/null +++ b/toolbox.generated/Kserve.capture_operators_state.html @@ -0,0 +1,136 @@ + + + + + + + kserve capture_operators_state — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

kserve capture_operators_state

+

Captures the state of the operators of the KServe serving stack

+
+

Parameters

+

raw_deployment

+
    +
  • If True, do not try to capture any Serverless related resource

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Kserve.capture_state.html b/toolbox.generated/Kserve.capture_state.html new file mode 100644 index 0000000000..01e726a0fa --- /dev/null +++ b/toolbox.generated/Kserve.capture_state.html @@ -0,0 +1,136 @@ + + + + + + + kserve capture_state — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

kserve capture_state

+

Captures the state of the KServe stack in a given namespace

+
+

Parameters

+

namespace

+
    +
  • The namespace in which the Serving stack was deployed. If empty, use the current project.

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Kserve.deploy_model.html b/toolbox.generated/Kserve.deploy_model.html new file mode 100644 index 0000000000..306016d243 --- /dev/null +++ b/toolbox.generated/Kserve.deploy_model.html @@ -0,0 +1,170 @@ + + + + + + + kserve deploy_model — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

kserve deploy_model

+

Deploy a KServe model

+
+

Parameters

+

namespace

+
    +
  • The namespace in which the model should be deployed

  • +
+

runtime

+
    +
  • Name of the runtime (standalone-tgis or vllm)

  • +
+

model_name

+
    +
  • The name to give to the model

  • +
+

sr_name

+
    +
  • The name of the ServingRuntime object

  • +
+

sr_kserve_image

+
    +
  • The image of the Kserve serving runtime container

  • +
+

inference_service_name

+
    +
  • The name to give to the inference service

  • +
+

inference_service_min_replicas

+
    +
  • The minimum number of replicas. If none, the field is left unset.

  • +
  • type: Int

  • +
+

delete_others

+
    +
  • If True, deletes the other serving runtime/inference services of the namespace

  • +
  • default value: True

  • +
+

raw_deployment

+
    +
  • If True, do not try to configure anything related to Serverless.

  • +
+
+
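Deploying a model with the parameters above can be sketched as one call; every value below (namespace, names, image) is a hypothetical placeholder, and the flag spelling is an assumption:

```shell
# Sketch only: prints the would-be command, since deployment needs a
# cluster with the KServe stack installed.
cmd="./run_toolbox.py kserve deploy_model \
  --namespace=kserve-demo --runtime=vllm --model_name=my-model \
  --sr_name=my-runtime --sr_kserve_image=quay.io/example/runtime:latest \
  --inference_service_name=my-isvc"
echo "$cmd"
```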
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Kserve.extract_protos.html b/toolbox.generated/Kserve.extract_protos.html new file mode 100644 index 0000000000..4bf1f85c32 --- /dev/null +++ b/toolbox.generated/Kserve.extract_protos.html @@ -0,0 +1,149 @@ + + + + + + + kserve extract_protos — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

kserve extract_protos

+

Extracts the protos of an inference service

+
+

Parameters

+

namespace

+
    +
  • The namespace in which the model was deployed

  • +
+

inference_service_name

+
    +
  • The name of the inference service

  • +
+

dest_dir

+
    +
  • The directory where the protos should be stored

  • +
+

copy_to_artifacts

+
    +
  • If True, copy the protos to the command artifacts. If False, don’t.

  • +
  • default value: True

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Kserve.extract_protos_grpcurl.html b/toolbox.generated/Kserve.extract_protos_grpcurl.html new file mode 100644 index 0000000000..eb6e95a0be --- /dev/null +++ b/toolbox.generated/Kserve.extract_protos_grpcurl.html @@ -0,0 +1,154 @@ + + + + + + + kserve extract_protos_grpcurl — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

kserve extract_protos_grpcurl

+

Extracts the protos of an inference service, with grpcurl

+
+

Parameters

+

namespace

+
    +
  • The namespace in which the model was deployed

  • +
+

inference_service_name

+
    +
  • The name of the inference service

  • +
+

dest_file

+
    +
  • The path where the proto file will be stored

  • +
+

methods

+
    +
  • The list of methods to extract

  • +
  • type: List

  • +
+

copy_to_artifacts

+
    +
  • If True, copy the protos to the command artifacts. If False, don’t.

  • +
  • default value: True

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Kserve.undeploy_model.html b/toolbox.generated/Kserve.undeploy_model.html new file mode 100644 index 0000000000..adae71bde8 --- /dev/null +++ b/toolbox.generated/Kserve.undeploy_model.html @@ -0,0 +1,148 @@ + + + + + + + kserve undeploy_model — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

kserve undeploy_model

+

Undeploy a KServe model

+
+

Parameters

+

namespace

+
    +
  • The namespace in which the model was deployed

  • +
+

sr_name

+
    +
  • The name of the serving runtime

  • +
+

inference_service_name

+
    +
  • The name of the inference service

  • +
+

all

+
    +
  • If True, delete all the inference services/serving runtimes of the namespace

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Kserve.validate_model.html b/toolbox.generated/Kserve.validate_model.html new file mode 100644 index 0000000000..b9fd131253 --- /dev/null +++ b/toolbox.generated/Kserve.validate_model.html @@ -0,0 +1,169 @@ + + + + + + + kserve validate_model — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

kserve validate_model

+

Validate the proper deployment of a KServe model

+
+
Warning:

This command requires grpcurl to be available in the PATH.

+
+
+
+

Parameters

+

inference_service_names

+
    +
  • A list of names of the inference services to validate

  • +
+

query_count

+
    +
  • Number of queries to perform

  • +
+

runtime

+
    +
  • Name of the runtime used (standalone-tgis or vllm)

  • +
+

model_id

+
    +
  • The model-id to pass to the inference service

  • +
  • default value: not-used

  • +
+

namespace

+
    +
  • The namespace in which the Serving stack was deployed. If empty, use the current project.

  • +
+

raw_deployment

+
    +
  • If True, do not try to configure anything related to Serverless. Works only in-cluster at the moment.

  • +
+

method

+
    +
  • The gRPC method to call

  • +
+

proto

+
    +
  • If not empty, the proto file to pass to grpcurl

  • +
+
+
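As an illustration, the parameters above map to command-line flags. The sketch below assumes TOPSAIL's ``./run_toolbox.py`` entrypoint; the service name ``my-model`` and namespace ``kserve-demo`` are placeholders:

```shell
# Hypothetical invocation: validate one inference service with 10 queries.
# 'my-model' and 'kserve-demo' are placeholders, and grpcurl must be
# available in the PATH.
./run_toolbox.py kserve validate_model \
    --inference_service_names='[my-model]' \
    --runtime=vllm \
    --query_count=10 \
    --namespace=kserve-demo
```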
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Kubemark.deploy_capi_provider.html b/toolbox.generated/Kubemark.deploy_capi_provider.html new file mode 100644 index 0000000000..30ec07acab --- /dev/null +++ b/toolbox.generated/Kubemark.deploy_capi_provider.html @@ -0,0 +1,129 @@ + + + + + + + kubemark deploy_capi_provider — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

kubemark deploy_capi_provider

+

Deploy the Kubemark Cluster-API provider

+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Kubemark.deploy_nodes.html b/toolbox.generated/Kubemark.deploy_nodes.html new file mode 100644 index 0000000000..b18b2d5a61 --- /dev/null +++ b/toolbox.generated/Kubemark.deploy_nodes.html @@ -0,0 +1,147 @@ + + + + + + + kubemark deploy_nodes — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

kubemark deploy_nodes

+

Deploy a set of Kubemark nodes

+
+

Parameters

+

namespace

+
    +
  • The namespace in which the MachineDeployment will be created

  • +
  • default value: openshift-cluster-api

  • +
+

deployment_name

+
    +
  • The name of the MachineDeployment

  • +
  • default value: kubemark-md

  • +
+

count

+
    +
  • The number of nodes to deploy

  • +
  • default value: 4

  • +
+
+
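For instance, relying on the default values listed above (``namespace=openshift-cluster-api``, ``deployment_name=kubemark-md``), a hypothetical invocation via TOPSAIL's ``./run_toolbox.py`` entrypoint could look like:

```shell
# Hypothetical invocation: deploy 10 Kubemark hollow nodes,
# keeping the default namespace and MachineDeployment name.
./run_toolbox.py kubemark deploy_nodes --count=10
```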
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Kwok.deploy_kwok_controller.html b/toolbox.generated/Kwok.deploy_kwok_controller.html new file mode 100644 index 0000000000..57e476382a --- /dev/null +++ b/toolbox.generated/Kwok.deploy_kwok_controller.html @@ -0,0 +1,141 @@ + + + + + + + kwok deploy_kwok_controller — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

kwok deploy_kwok_controller

+

Deploy the KWOK hollow node provider

+
+

Parameters

+

namespace

+
    +
  • Namespace where KWOK will be deployed. Cannot be changed at the moment.

  • +
  • default value: kube-system

  • +
+

undeploy

+
    +
  • If true, undeploys KWOK instead of deploying it.

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Kwok.set_scale.html b/toolbox.generated/Kwok.set_scale.html new file mode 100644 index 0000000000..235afc77db --- /dev/null +++ b/toolbox.generated/Kwok.set_scale.html @@ -0,0 +1,169 @@ + + + + + + + kwok set_scale — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

kwok set_scale

+

Deploy a set of KWOK nodes

+
+

Parameters

+

scale

+
    +
  • The number of required nodes with the given instance type

  • +
+

taint

+
    +
  • Taint to apply to the machineset.

  • +
+

name

+
    +
  • Name to give to the new machineset.

  • +
  • default value: kwok-machine

  • +
+

role

+
    +
  • Role of the new nodes

  • +
  • default value: worker

  • +
+

cpu

+
    +
  • Number of allocatable CPUs

  • +
  • default value: 32

  • +
+

memory

+
    +
  • Amount of allocatable memory, in Gi

  • +
  • default value: 256

  • +
+

gpu

+
    +
  • Number of nvidia.com/gpu allocatable

  • +
+

pods

+
    +
  • Number of Pods allocatable

  • +
  • default value: 250

  • +
+
+
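Putting the parameters above together, a hypothetical invocation (assuming TOPSAIL's ``./run_toolbox.py`` entrypoint) to simulate a mid-size GPU fleet could be:

```shell
# Hypothetical invocation: 50 hollow worker nodes, each advertising
# 16 CPUs, 128 Gi of memory and 8 nvidia.com/gpu resources.
./run_toolbox.py kwok set_scale \
    --scale=50 \
    --cpu=16 \
    --memory=128 \
    --gpu=8
```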
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Llm_Load_Test.run.html b/toolbox.generated/Llm_Load_Test.run.html new file mode 100644 index 0000000000..c34f4f7b10 --- /dev/null +++ b/toolbox.generated/Llm_Load_Test.run.html @@ -0,0 +1,198 @@ + + + + + + + llm_load_test run — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

llm_load_test run

+

Load test the wisdom model

+
+

Parameters

+

host

+
    +
  • The host endpoint of the gRPC call

  • +
+

port

+
    +
  • The gRPC port on the specified host

  • +
+

duration

+
    +
  • The duration of the load testing

  • +
+

plugin

+
    +
  • The llm-load-test plugin to use (tgis_grpc_plugin or caikit_client_plugin for now)

  • +
  • default value: tgis_grpc_plugin

  • +
+

interface

+
    +
  • The interface (http or grpc) to use, for llm-load-test plugins that support both

  • +
  • default value: grpc

  • +
+

model_id

+
    +
  • The ID of the model to pass along with the gRPC call

  • +
  • default value: not-used

  • +
+

src_path

+
    +
  • Path where llm-load-test has been cloned

  • +
  • default value: projects/llm_load_test/subprojects/llm-load-test/

  • +
+

streaming

+
    +
  • Whether to stream the llm-load-test requests

  • +
  • default value: True

  • +
+

use_tls

+
    +
  • Whether to set use_tls: True (grpc in Serverless mode)

  • +
+

concurrency

+
    +
  • Number of concurrent simulated users sending requests

  • +
  • default value: 16

  • +
+

max_input_tokens

+
    +
  • Max input tokens in llm load test to filter the dataset

  • +
  • default value: 1024

  • +
+

max_output_tokens

+
    +
  • Max output tokens in llm load test to filter the dataset

  • +
  • default value: 512

  • +
+

max_sequence_tokens

+
    +
  • Max sequence tokens in llm load test to filter the dataset

  • +
  • default value: 1536

  • +
+

endpoint

+
    +
  • Name of the endpoint to query (for openai plugin only)

  • +
  • default value: /v1/completions

  • +
+
+
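To tie the parameters together, the sketch below shows a hypothetical invocation via TOPSAIL's ``./run_toolbox.py`` entrypoint; the host is a placeholder, and the other flags override the defaults listed above:

```shell
# Hypothetical invocation: 5-minute gRPC load test with 32 concurrent
# users against a TLS endpoint. 'model.example.com' is a placeholder.
./run_toolbox.py llm_load_test run \
    --host=model.example.com \
    --port=443 \
    --duration=300 \
    --concurrency=32 \
    --plugin=tgis_grpc_plugin \
    --use_tls=True
```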
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Local_Ci.run.html b/toolbox.generated/Local_Ci.run.html new file mode 100644 index 0000000000..59f9d97d40 --- /dev/null +++ b/toolbox.generated/Local_Ci.run.html @@ -0,0 +1,219 @@ + + + + + + + local_ci run — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

local_ci run

+

Runs a given CI command

+
+

Parameters

+

ci_command

+
    +
  • The CI command to run.

  • +
+

pr_number

+
    +
  • The ID of the PR to use for the repository.

  • +
+

git_repo

+
    +
  • The Github repo to use.

  • +
  • default value: https://github.com/openshift-psap/topsail

  • +
+

git_ref

+
    +
  • The Github ref to use.

  • +
  • default value: main

  • +
+

namespace

+
    +
  • The namespace in which the image is located.

  • +
  • default value: topsail

  • +
+

istag

+
    +
  • The imagestream tag to use.

  • +
  • default value: topsail:main

  • +
+

pod_name

+
    +
  • The name to give to the Pod running the CI command.

  • +
  • default value: topsail

  • +
+

service_account

+
    +
  • Name of the ServiceAccount to use for running the Pod.

  • +
  • default value: default

  • +
+

secret_name

+
    +
  • Name of the Secret to mount in the Pod.

  • +
+

secret_env_key

+
    +
  • Name of the environment variable with which the secret path will be exposed in the Pod.

  • +
+

test_name

+
    +
  • Name of the test being executed.

  • +
  • default value: local-ci-test

  • +
+

test_args

+
    +
  • List of arguments to give to the test.

  • +
+

init_command

+
    +
  • Command to run in the container before running anything else.

  • +
+

export_bucket_name

+
    +
  • Name of the S3 bucket where the artifacts should be exported.

  • +
+

export_test_run_identifier

+
    +
  • Identifier of the test being executed (will be a dirname).

  • +
  • default value: default

  • +
+

export

+
    +
  • If True, exports the artifacts to the S3 bucket. If False, do not run the export command.

  • +
  • default value: True

  • +
+

retrieve_artifacts

+
    +
  • If False, do not retrieve the test artifacts locally.

  • +
  • default value: True

  • +
+

pr_config

+
    +
  • Optional path to a PR config file (avoids fetching Github PR json).

  • +
+

update_git

+
    +
  • If True, updates the git repo with the latest main/PR before running the test.

  • +
  • default value: True

  • +
+
+
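As a minimal illustration (assuming TOPSAIL's ``./run_toolbox.py`` entrypoint), the command below would run a CI command against a PR; the PR number and CI command string are placeholders:

```shell
# Hypothetical invocation; '123' and the ci_command value are placeholders.
./run_toolbox.py local_ci run \
    --ci_command="run notebooks test" \
    --pr_number=123 \
    --namespace=topsail
```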
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Local_Ci.run_multi.html b/toolbox.generated/Local_Ci.run_multi.html new file mode 100644 index 0000000000..c88225d983 --- /dev/null +++ b/toolbox.generated/Local_Ci.run_multi.html @@ -0,0 +1,231 @@ + + + + + + + local_ci run_multi — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

local_ci run_multi

+

Runs a given CI command in parallel from multiple Pods

+
+

Parameters

+

ci_command

+
    +
  • The CI command to run.

  • +
+

user_count

+
    +
  • Batch job parallelism count.

  • +
  • type: Int

  • +
  • default value: 1

  • +
+

namespace

+
    +
  • The namespace in which the image is located.

  • +
  • default value: topsail

  • +
+

istag

+
    +
  • The imagestream tag to use.

  • +
  • default value: topsail:main

  • +
+

job_name

+
    +
  • The name to give to the Job running the CI command.

  • +
  • default value: topsail

  • +
+

service_account

+
    +
  • Name of the ServiceAccount to use for running the Pod.

  • +
  • default value: default

  • +
+

secret_name

+
    +
  • Name of the Secret to mount in the Pod.

  • +
+

secret_env_key

+
    +
  • Name of the environment variable with which the secret path will be exposed in the Pod.

  • +
+

retrieve_artifacts

+
    +
  • If False, do not retrieve the test artifacts locally.

  • +
+

minio_namespace

+
    +
  • Namespace where the Minio server is located.

  • +
+

minio_bucket_name

+
    +
  • Name of the bucket in the Minio server.

  • +
+

minio_secret_key_key

+
    +
  • Key inside ‘secret_env_key’ containing the secret to access the Minio bucket. Must be in the form ‘user_password=SECRET_KEY’.

  • +
+

variable_overrides

+
    +
  • Optional path to the variable_overrides config file (avoids fetching Github PR json).

  • +
+

use_local_config

+
    +
  • If true, gives the local configuration file ($TOPSAIL_FROM_CONFIG_FILE) to the Pods.

  • +
  • default value: True

  • +
+

capture_prom_db

+
    +
  • If True, captures the Prometheus DB of the systems.

  • +
  • type: Bool

  • +
  • default value: True

  • +
+

git_pull

+
    +
  • If True, update the repo in the image with the latest version of the build ref before running the command in the Pods.

  • +
  • type: Bool

  • +
+

state_signal_redis_server

+
    +
  • Optional address of the Redis server to pass to StateSignal synchronization. If empty, do not perform any synchronization.

  • +
+

sleep_factor

+
    +
  • Delay (in seconds) between the start of each of the users.

  • +
+

user_batch_size

+
    +
  • Number of users to launch after the sleep delay.

  • +
  • default value: 1

  • +
+

abort_on_failure

+
    +
  • If true, let the Job abort the parallel execution on the first Pod failure. If false, ignore the process failure and track the overall failure count with a flag.

  • +
+

need_all_success

+
    +
  • If true, fails the execution if any of the Pods failed. If false, fails it if none of the Pods succeed.

  • +
+

launch_as_daemon

+
    +
  • If true, do not wait for the Job to complete. Most of the options above become irrelevant.

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Nfd.has_gpu_nodes.html b/toolbox.generated/Nfd.has_gpu_nodes.html new file mode 100644 index 0000000000..4e6adb4efc --- /dev/null +++ b/toolbox.generated/Nfd.has_gpu_nodes.html @@ -0,0 +1,129 @@ + + + + + + + nfd has_gpu_nodes — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

nfd has_gpu_nodes

+

Checks if the cluster has GPU nodes

+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Nfd.has_labels.html b/toolbox.generated/Nfd.has_labels.html new file mode 100644 index 0000000000..11a240176c --- /dev/null +++ b/toolbox.generated/Nfd.has_labels.html @@ -0,0 +1,129 @@ + + + + + + + nfd has_labels — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

nfd has_labels

+

Checks if the cluster has NFD labels

+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Nfd.wait_gpu_nodes.html b/toolbox.generated/Nfd.wait_gpu_nodes.html new file mode 100644 index 0000000000..013bc43a3c --- /dev/null +++ b/toolbox.generated/Nfd.wait_gpu_nodes.html @@ -0,0 +1,129 @@ + + + + + + + nfd wait_gpu_nodes — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

nfd wait_gpu_nodes

+

Wait until NFD finds GPU nodes

+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Nfd.wait_labels.html b/toolbox.generated/Nfd.wait_labels.html new file mode 100644 index 0000000000..f12171ef1f --- /dev/null +++ b/toolbox.generated/Nfd.wait_labels.html @@ -0,0 +1,129 @@ + + + + + + + nfd wait_labels — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

nfd wait_labels

+

Wait until NFD labels the nodes

+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Nfd_Operator.deploy_from_operatorhub.html b/toolbox.generated/Nfd_Operator.deploy_from_operatorhub.html new file mode 100644 index 0000000000..b626a34b92 --- /dev/null +++ b/toolbox.generated/Nfd_Operator.deploy_from_operatorhub.html @@ -0,0 +1,149 @@ + + + + + + + nfd_operator deploy_from_operatorhub — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

nfd_operator deploy_from_operatorhub

+

Deploys the NFD Operator from OperatorHub

+
+

Parameters

+

channel

+
    +
  • The OperatorHub channel to deploy from, e.g. 4.7

  • +
+

# Constants
#
# Defined as a constant in Nfd_Operator.deploy_from_operatorhub
cluster_deploy_operator_deploy_cr: true

+

#
# Defined as a constant in Nfd_Operator.deploy_from_operatorhub
cluster_deploy_operator_namespace: openshift-nfd

+

#
# Defined as a constant in Nfd_Operator.deploy_from_operatorhub
cluster_deploy_operator_manifest_name: nfd

+

#
# Defined as a constant in Nfd_Operator.deploy_from_operatorhub
cluster_deploy_operator_catalog: redhat-operators

+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Nfd_Operator.undeploy_from_operatorhub.html b/toolbox.generated/Nfd_Operator.undeploy_from_operatorhub.html new file mode 100644 index 0000000000..1f8c9791b6 --- /dev/null +++ b/toolbox.generated/Nfd_Operator.undeploy_from_operatorhub.html @@ -0,0 +1,129 @@ + + + + + + + nfd_operator undeploy_from_operatorhub — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

nfd_operator undeploy_from_operatorhub

+

Undeploys an NFD-operator that was deployed from OperatorHub

+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Notebooks.benchmark_performance.html b/toolbox.generated/Notebooks.benchmark_performance.html new file mode 100644 index 0000000000..1517031d06 --- /dev/null +++ b/toolbox.generated/Notebooks.benchmark_performance.html @@ -0,0 +1,173 @@ + + + + + + + notebooks benchmark_performance — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

notebooks benchmark_performance

+

Benchmark the performance of a notebook image.

+
+

Parameters

+

namespace

+
    +
  • Namespace in which the notebook will be deployed, if not deploying with RHODS.

  • +
  • default value: rhods-notebooks

  • +
+

imagestream

+
    +
  • Imagestream to use to look up the notebook Pod image.

  • +
  • default value: s2i-generic-data-science-notebook

  • +
+

imagestream_tag

+
    +
  • Imagestream tag to use to look up the notebook Pod image. If empty and the image stream has only one tag, use it. Fails otherwise.

  • +
+

notebook_directory

+
    +
  • Directory containing the files to mount in the notebook.

  • +
  • default value: projects/notebooks/testing/notebooks/

  • +
+

notebook_filename

+
    +
  • Name of the ipynb notebook file to execute with JupyterLab.

  • +
  • default value: benchmark_entrypoint.ipynb

  • +
+

benchmark_name

+
    +
  • Name of the benchmark to execute in the notebook.

  • +
  • default value: pyperf_bm_go.py

  • +
+

benchmark_repeat

+
    +
  • Number of repeats of the benchmark to perform for one time measurement.

  • +
  • type: Int

  • +
  • default value: 1

  • +
+

benchmark_number

+
    +
  • Number of times the benchmark time measurement should be done.

  • +
  • type: Int

  • +
  • default value: 1

  • +
+
+
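For example, to collect 5 time measurements of 3 benchmark repeats each, keeping the default imagestream and notebook, a hypothetical invocation (assuming TOPSAIL's ``./run_toolbox.py`` entrypoint) would be:

```shell
# Hypothetical invocation: 5 measurements of 3 repeats each,
# using the default imagestream and benchmark notebook.
./run_toolbox.py notebooks benchmark_performance \
    --benchmark_repeat=3 \
    --benchmark_number=5
```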
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Notebooks.capture_state.html b/toolbox.generated/Notebooks.capture_state.html new file mode 100644 index 0000000000..6244853b04 --- /dev/null +++ b/toolbox.generated/Notebooks.capture_state.html @@ -0,0 +1,129 @@ + + + + + + + notebooks capture_state — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

notebooks capture_state

+

Capture information about the cluster and the RHODS notebooks deployment

+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Notebooks.cleanup.html b/toolbox.generated/Notebooks.cleanup.html new file mode 100644 index 0000000000..c3cd2a7216 --- /dev/null +++ b/toolbox.generated/Notebooks.cleanup.html @@ -0,0 +1,136 @@ + + + + + + + notebooks cleanup — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

notebooks cleanup

+

Clean up the resources created along with the notebooks, during the scale tests.

+
+

Parameters

+

username_prefix

+
    +
  • Prefix of the usernames who created the resources.

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Notebooks.dashboard_scale_test.html b/toolbox.generated/Notebooks.dashboard_scale_test.html new file mode 100644 index 0000000000..4be6185218 --- /dev/null +++ b/toolbox.generated/Notebooks.dashboard_scale_test.html @@ -0,0 +1,212 @@ + + + + + + + notebooks dashboard_scale_test — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

notebooks dashboard_scale_test

+

End-to-end scale testing of the RHOAI dashboard, at user level.

+
+

Parameters

+

namespace

+
    +
  • Namespace in which the scale test should be deployed.

  • +
+

idp_name

+
    +
  • Name of the identity provider to use.

  • +
+

username_prefix

+
    +
  • Prefix of the usernames to use to run the scale test.

  • +
+

user_count

+
    +
  • Number of users to run in parallel.

  • +
  • type: Int

  • +
+

secret_properties_file

+
    +
  • Path of a file containing the properties of LDAP secrets. (See ‘deploy_ldap’ command)

  • +
+

minio_namespace

+
    +
  • Namespace where the Minio server is located.

  • +
+

minio_bucket_name

+
    +
  • Name of the bucket in the Minio server.

  • +
+

user_index_offset

+
    +
  • Offset to add to the user index to compute the user name.

  • +
  • type: Int

  • +
+

artifacts_collected

+
    +
    • +
    • ‘all’ - ‘no-screenshot’ - ‘no-screenshot-except-zero’ - ‘no-screenshot-except-failed’ - ‘no-screenshot-except-failed-and-zero’ - ‘none’

    • +
    +
  • +
  • default value: all

  • +
+

user_sleep_factor

+
    +
  • Delay to sleep between users

  • +
  • default value: 1.0

  • +
+

user_batch_size

+
    +
  • Number of users to launch at the same time.

  • +
  • type: Int

  • +
  • default value: 1

  • +
+

ods_ci_istag

+
    +
  • Imagestream tag of the ODS-CI container image.

  • +
+

ods_ci_test_case

+
    +
  • ODS-CI test case to execute.

  • +
  • default value: notebook_dsg_test.robot

  • +
+

artifacts_exporter_istag

+
    +
  • Imagestream tag of the artifacts exporter side-car container image.

  • +
+

state_signal_redis_server

+
    +
  • Hostname and port of the Redis server for StateSignal synchronization (used to synchronize the beginning of the user simulation)

  • +
+

toleration_key

+
    +
  • Toleration key to use for the test Pods.

  • +
+

capture_prom_db

+
    +
  • If True, captures the Prometheus DB of the systems.

  • +
  • type: Bool

  • +
  • default value: True

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Notebooks.locust_scale_test.html b/toolbox.generated/Notebooks.locust_scale_test.html new file mode 100644 index 0000000000..54a074a0ba --- /dev/null +++ b/toolbox.generated/Notebooks.locust_scale_test.html @@ -0,0 +1,224 @@ + + + + + + + notebooks locust_scale_test — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

notebooks locust_scale_test

+

End-to-end testing of RHOAI notebooks at scale, at API level

+
+

Parameters

+

namespace

+
    +
  • Namespace where the test will run

  • +
+

idp_name

+
    +
  • Name of the identity provider to use.

  • +
+

secret_properties_file

+
    +
  • Path of a file containing the properties of LDAP secrets. (See ‘deploy_ldap’ command).

  • +
+

test_name

+
    +
  • Test to perform.

  • +
+

minio_namespace

+
    +
  • Namespace where the Minio server is located.

  • +
+

minio_bucket_name

+
    +
  • Name of the bucket in the Minio server.

  • +
+

username_prefix

+
    +
  • Prefix of the RHODS users.

  • +
+

user_count

+
    +
  • Number of users to run in parallel.

  • +
  • type: Int

  • +
+

user_index_offset

+
    +
  • Offset to add to the user index to compute the user name.

  • +
  • type: Int

  • +
+

locust_istag

+
    +
  • Imagestream tag of the locust container.

  • +
+

artifacts_exporter_istag

+
    +
  • Imagestream tag of the artifacts exporter side-car container.

  • +
+

run_time

+
    +
  • Test run time (e.g. 300s, 20m, 3h, 1h30m)

  • +
  • default value: 1m

  • +
+

spawn_rate

+
    +
  • Rate to spawn users at (users per second)

  • +
  • default value: 1

  • +
+

sut_cluster_kubeconfig

+
    +
  • Path of the system-under-test cluster’s Kubeconfig. If provided, the RHODS endpoints will be looked up in this cluster.

  • +
+

notebook_image_name

+
    +
  • Name of the RHODS image to use when launching the notebooks.

  • +
  • default value: s2i-generic-data-science-notebook

  • +
+

notebook_size_name

+
    +
  • Size name of the notebook.

  • +
  • default value: Small

  • +
+

toleration_key

+
    +
  • Toleration key to use for the test Pods.

  • +
+

cpu_count

+
    +
  • Number of Locust processes to launch (one per Pod, with 1 CPU each).

  • +
  • type: Int

  • +
  • default value: 1

  • +
+

user_sleep_factor

+
    +
  • Delay to sleep between users

  • +
  • type: Float

  • +
  • default value: 1.0

  • +
+

capture_prom_db

+
    +
  • If True, captures the Prometheus DB of the systems.

  • +
  • type: Bool

  • +
  • default value: True

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Notebooks.ods_ci_scale_test.html b/toolbox.generated/Notebooks.ods_ci_scale_test.html new file mode 100644 index 0000000000..fd3ac6ae72 --- /dev/null +++ b/toolbox.generated/Notebooks.ods_ci_scale_test.html @@ -0,0 +1,266 @@ + + + + + + + notebooks ods_ci_scale_test — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

notebooks ods_ci_scale_test

+

End-to-end scale testing of RHOAI notebooks, at user level.

+
+

Parameters

+

namespace

+
    +
  • Namespace in which the scale test should be deployed.

  • +
+

idp_name

+
    +
  • Name of the identity provider to use.

  • +
+

username_prefix

+
    +
  • Prefix of the usernames to use to run the scale test.

  • +
+

user_count

+
    +
  • Number of users to run in parallel.

  • +
  • type: Int

  • +
+

secret_properties_file

+
    +
  • Path of a file containing the properties of LDAP secrets. (See ‘deploy_ldap’ command)

  • +
+

notebook_url

+
    +
  • URL from which the notebook will be downloaded.

  • +
+

minio_namespace

+
    +
  • Namespace where the Minio server is located.

  • +
+

minio_bucket_name

+
    +
  • Name of the bucket in the Minio server.

  • +
+

user_index_offset

+
    +
  • Offset to add to the user index to compute the user name.

  • +
  • type: Int

  • +
+

sut_cluster_kubeconfig

+
    +
  • Path of the system-under-test cluster’s Kubeconfig. If provided, the RHODS endpoints will be looked up in this cluster.

  • +
+

artifacts_collected

+
    +
    • +
    • ‘all’ - ‘no-screenshot’ - ‘no-screenshot-except-zero’ - ‘no-screenshot-except-failed’ - ‘no-screenshot-except-failed-and-zero’ - ‘none’

    • +
    +
  • +
  • default value: all

  • +
+

user_sleep_factor

+
    +
  • Delay to sleep between users

  • +
  • default value: 1.0

  • +
+

user_batch_size

+
    +
  • Number of users to launch at the same time.

  • +
  • type: Int

  • +
  • default value: 1

  • +
+

ods_ci_istag

+
    +
  • Imagestream tag of the ODS-CI container image.

  • +
+

ods_ci_exclude_tags

+
    +
  • Tags to exclude in the ODS-CI test case.

  • +
  • default value: None

  • +
+

ods_ci_test_case

+
    +
  • Robot test case name.

  • +
  • default value: notebook_dsg_test.robot

  • +
+

artifacts_exporter_istag

+
    +
  • Imagestream tag of the artifacts exporter side-car container image.

  • +
+

notebook_image_name

+
    +
  • Notebook image name.

  • +
  • default value: s2i-generic-data-science-notebook

  • +
+

notebook_size_name

+
    +
  • Notebook size.

  • +
  • default value: Small

  • +
+

notebook_benchmark_name

+
    +
  • Benchmark script file name to execute in the notebook.

  • +
  • default value: pyperf_bm_go.py

  • +
+

notebook_benchmark_number

+
    +
  • Number of benchmark executions per repeat.

  • +
  • default value: 20

  • +
+

notebook_benchmark_repeat

+
    +
  • Number of benchmark repeats to execute.

  • +
  • default value: 2

  • +
+

state_signal_redis_server

+
    +
  • Hostname and port of the Redis server for StateSignal synchronization (used to synchronize the beginning of the user simulation)

  • +
+

toleration_key

+
    +
  • Toleration key to use for the test Pods.

  • +
+

capture_prom_db

+
    +
  • If True, captures the Prometheus DB of the systems.

  • +
  • type: Bool

  • +
  • default value: True

  • +
+

stop_notebooks_on_exit

+
    +
  • If False, keep the user notebooks running at the end of the test.

  • +
  • type: Bool

  • +
  • default value: True

  • +
+

only_create_notebooks

+
    +
  • If True, only create the notebooks, but don’t start them. This will overwrite the value of ‘ods_ci_exclude_tags’.

  • +
  • type: Bool

  • +
+

driver_running_on_spot

+
    +
  • If True, consider that the driver Pods are running on Spot instances and can disappear at any time.

  • +
  • type: Bool

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Pipelines.capture_state.html b/toolbox.generated/Pipelines.capture_state.html new file mode 100644 index 0000000000..e62cf2cee8 --- /dev/null +++ b/toolbox.generated/Pipelines.capture_state.html @@ -0,0 +1,149 @@ + + + + + + + pipelines capture_state — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

pipelines capture_state

+

Captures the state of a Data Science Pipeline Application in a given namespace.

+
+

Parameters

+

dsp_application_name

+
    +
  • The name of the application

  • +
+

namespace

+
    +
  • The namespace in which the application was deployed

  • +
+

user_id

+
    +
  • Identifier of the user to capture

  • +
+

capture_extra_artifacts

+
    +
  • Whether to capture extra descriptions and YAMLs

  • +
  • default value: True

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Pipelines.deploy_application.html b/toolbox.generated/Pipelines.deploy_application.html new file mode 100644 index 0000000000..8ab50db59a --- /dev/null +++ b/toolbox.generated/Pipelines.deploy_application.html @@ -0,0 +1,140 @@ + + + + + + + pipelines deploy_application — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

pipelines deploy_application

+

Deploy a Data Science Pipeline Application in a given namespace.

+
+

Parameters

+

name

+
    +
  • The name of the application to deploy

  • +
+

namespace

+
    +
  • The namespace in which the application should be deployed

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Pipelines.run_kfp_notebook.html b/toolbox.generated/Pipelines.run_kfp_notebook.html new file mode 100644 index 0000000000..71b3900065 --- /dev/null +++ b/toolbox.generated/Pipelines.run_kfp_notebook.html @@ -0,0 +1,194 @@ + + + + + + + pipelines run_kfp_notebook — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

pipelines run_kfp_notebook

+

Run a notebook in a given notebook image.

+
+

Parameters

+

namespace

+
    +
  • Namespace in which the notebook will be deployed, if not deploying with RHODS. If empty, use the project returned by 'oc project --short'.

  • +
+

dsp_application_name

+
    +
  • The name of the DSPipelines Application to use. If empty, lookup the application name in the namespace.

  • +
+

imagestream

+
    +
  • Imagestream to use to look up the notebook Pod image.

  • +
  • default value: s2i-generic-data-science-notebook

  • +
+

imagestream_tag

+
    +
  • Imagestream tag to use to look up the notebook Pod image. If empty and the image stream has only one tag, use it. Fails otherwise.

  • +
+

notebook_name

+
    +
  • A prefix to add to the name of the notebook, to differentiate notebooks in the same project

  • +
+

notebook_directory

+
    +
  • Directory containing the files to mount in the notebook.

  • +
  • default value: testing/pipelines/notebooks/hello-world

  • +
+

notebook_filename

+
    +
  • Name of the ipynb notebook file to execute with JupyterLab.

  • +
  • default value: kfp_hello_world.ipynb

  • +
+

run_count

+
    +
  • Number of times to run the pipeline

  • +
+

run_delay

+
    +
  • Number of seconds to wait before triggering the next run from the notebook

  • +
+

stop_on_exit

+
    +
  • If False, keep the notebook running after the test.

  • +
  • default value: True

  • +
+

capture_artifacts

+
    +
  • If False, disable the post-test artifact collection.

  • +
  • default value: True

  • +
+

capture_prom_db

+
    +
  • If True, captures the Prometheus DB of the systems.

  • +
+

capture_extra_artifacts

+
    +
  • Whether to capture extra descriptions and YAMLs

  • +
  • default value: True

  • +
+

wait_for_run_completion

+
    +
  • Whether to wait for one run's completion before starting the next

  • +
+
+
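As a sketch of how the parameters above combine, a hypothetical `run_toolbox.py` call (namespace and values are illustrative assumptions) could be:

```shell
# Run the default hello-world KFP notebook 5 times, 30 seconds apart,
# waiting for each run to complete before starting the next.
# The namespace and counts are placeholder values.
./run_toolbox.py pipelines run_kfp_notebook \
    --namespace=dsp-test \
    --run_count=5 \
    --run_delay=30 \
    --wait_for_run_completion=True
```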
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Repo.generate_ansible_default_settings.html b/toolbox.generated/Repo.generate_ansible_default_settings.html new file mode 100644 index 0000000000..22a8324213 --- /dev/null +++ b/toolbox.generated/Repo.generate_ansible_default_settings.html @@ -0,0 +1,129 @@ + + + + + + + repo generate_ansible_default_settings — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

repo generate_ansible_default_settings

+

Generate the defaults/main/config.yml file of the Ansible roles, based on the Python definition.

+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Repo.generate_middleware_ci_secret_boilerplate.html b/toolbox.generated/Repo.generate_middleware_ci_secret_boilerplate.html new file mode 100644 index 0000000000..016304f174 --- /dev/null +++ b/toolbox.generated/Repo.generate_middleware_ci_secret_boilerplate.html @@ -0,0 +1,144 @@ + + + + + + + repo generate_middleware_ci_secret_boilerplate — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

repo generate_middleware_ci_secret_boilerplate

+

Generate the boilerplate code to include a new secret in the Middleware CI configuration

+
+

Parameters

+

name

+
    +
  • Name of the new secret to include

  • +
+

description

+
    +
  • Description of the secret to include

  • +
+

varname

+
    +
  • Optional short name of the file

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Repo.generate_toolbox_related_files.html b/toolbox.generated/Repo.generate_toolbox_related_files.html new file mode 100644 index 0000000000..fe9aad7eaa --- /dev/null +++ b/toolbox.generated/Repo.generate_toolbox_related_files.html @@ -0,0 +1,129 @@ + + + + + + + repo generate_toolbox_related_files — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ + + + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Repo.generate_toolbox_rst_documentation.html b/toolbox.generated/Repo.generate_toolbox_rst_documentation.html new file mode 100644 index 0000000000..856f4998df --- /dev/null +++ b/toolbox.generated/Repo.generate_toolbox_rst_documentation.html @@ -0,0 +1,129 @@ + + + + + + + repo generate_toolbox_rst_documentation — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

repo generate_toolbox_rst_documentation

+

Generate the doc/toolbox.generated/*.rst file, based on the Toolbox Python definition.

+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Repo.send_job_completion_notification.html b/toolbox.generated/Repo.send_job_completion_notification.html new file mode 100644 index 0000000000..1f11bbeb74 --- /dev/null +++ b/toolbox.generated/Repo.send_job_completion_notification.html @@ -0,0 +1,157 @@ + + + + + + + repo send_job_completion_notification — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

repo send_job_completion_notification

+

Send a notification to GitHub and/or Slack about the completion of a test job.

+

A job completion notification is the message sent at the end of a CI job.

+
+

Parameters

+

reason

+
    +
  • Reason of the job completion. Can be ERR or EXIT.

  • +
  • type: Str

  • +
+

status

+
    +
  • A status message to write at the top of the notification.

  • +
  • type: Str

  • +
+

github

+
    +
  • Enable or disable sending the job completion notification to Github

  • +
  • default value: True

  • +
+

slack

+
    +
  • Enable or disable sending the job completion notification to Slack

  • +
  • default value: True

  • +
+

dry_run

+
    +
  • If enabled, don’t send any notification, just show the message in the logs

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Repo.validate_no_broken_link.html b/toolbox.generated/Repo.validate_no_broken_link.html new file mode 100644 index 0000000000..97e7c55689 --- /dev/null +++ b/toolbox.generated/Repo.validate_no_broken_link.html @@ -0,0 +1,129 @@ + + + + + + + repo validate_no_broken_link — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ + + + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Repo.validate_no_wip.html b/toolbox.generated/Repo.validate_no_wip.html new file mode 100644 index 0000000000..ade03c9d0a --- /dev/null +++ b/toolbox.generated/Repo.validate_no_wip.html @@ -0,0 +1,129 @@ + + + + + + + repo validate_no_wip — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

repo validate_no_wip

+

Ensures that none of the commits have the WIP flag in their message title.

+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Repo.validate_role_files.html b/toolbox.generated/Repo.validate_role_files.html new file mode 100644 index 0000000000..4f5beb7a3d --- /dev/null +++ b/toolbox.generated/Repo.validate_role_files.html @@ -0,0 +1,129 @@ + + + + + + + repo validate_role_files — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

repo validate_role_files

+

Ensures that all the Ansible variables defining a filepath (project/*/toolbox/) point to an existing file.

+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Repo.validate_role_vars_used.html b/toolbox.generated/Repo.validate_role_vars_used.html new file mode 100644 index 0000000000..69b1e91563 --- /dev/null +++ b/toolbox.generated/Repo.validate_role_vars_used.html @@ -0,0 +1,129 @@ + + + + + + + repo validate_role_vars_used — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

repo validate_role_vars_used

+

Ensure that all the Ansible variables defined are actually used in their role (with an exception for symlinks)

+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Rhods.capture_state.html b/toolbox.generated/Rhods.capture_state.html new file mode 100644 index 0000000000..e6dd27a2c7 --- /dev/null +++ b/toolbox.generated/Rhods.capture_state.html @@ -0,0 +1,129 @@ + + + + + + + rhods capture_state — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

rhods capture_state

+

Captures the state of the RHOAI deployment

+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Rhods.delete_ods.html b/toolbox.generated/Rhods.delete_ods.html new file mode 100644 index 0000000000..4486bdacc9 --- /dev/null +++ b/toolbox.generated/Rhods.delete_ods.html @@ -0,0 +1,137 @@ + + + + + + + rhods delete_ods — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

rhods delete_ods

+

Forces ODS operator deletion

+
+

Parameters

+

namespace

+
    +
  • Namespace where RHODS is installed.

  • +
  • default value: redhat-ods-operator

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Rhods.deploy_addon.html b/toolbox.generated/Rhods.deploy_addon.html new file mode 100644 index 0000000000..075c7d40ff --- /dev/null +++ b/toolbox.generated/Rhods.deploy_addon.html @@ -0,0 +1,149 @@ + + + + + + + rhods deploy_addon — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

rhods deploy_addon

+

Installs the RHODS OCM addon

+
+

Parameters

+

cluster_name

+
    +
  • The name of the cluster where RHODS should be deployed.

  • +
+

notification_email

+
    +
  • The email to register for RHODS addon deployment.

  • +
+

wait_for_ready_state

+
    +
  • If true (default), wait until the addon reports the ready state (this can time out).

  • +
  • default value: True

  • +
+

# Constants +# Identifier of the addon that should be deployed +# Defined as a constant in Rhods.deploy_addon +ocm_deploy_addon_ocm_deploy_addon_id: managed-odh

+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Rhods.deploy_ods.html b/toolbox.generated/Rhods.deploy_ods.html new file mode 100644 index 0000000000..e4833f11a2 --- /dev/null +++ b/toolbox.generated/Rhods.deploy_ods.html @@ -0,0 +1,161 @@ + + + + + + + rhods deploy_ods — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

rhods deploy_ods

+

Deploy ODS operator from its custom catalog

+
+

Parameters

+

catalog_image

+
    +
  • Container image containing the RHODS bundle.

  • +
+

tag

+
    +
  • Catalog image tag to use to deploy RHODS.

  • +
+

channel

+
    +
  • The channel to use for the deployment. Leave empty to use the default channel.

  • +
+

version

+
    +
  • The version to deploy. Leave empty to install the latest available version.

  • +
+

disable_dsc_config

+
    +
  • If True, pass the flag to disable DSC configuration

  • +
+

opendatahub

+
    +
  • If True, deploys an OpenDataHub manifest instead of RHOAI

  • +
+

managed_rhoai

+
    +
  • If True, deploys RHOAI with the Managed Service flag. If False, deploys it as Self-Managed.

  • +
  • default value: True

  • +
+
+
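A hypothetical deployment from a custom catalog (the catalog image and tag below are placeholders, not real artifacts) might be invoked as:

```shell
# Deploy RHOAI from an illustrative custom catalog image,
# as Self-Managed rather than as a Managed Service.
./run_toolbox.py rhods deploy_ods \
    --catalog_image=quay.io/example/rhods-catalog \
    --tag=1.2.3 \
    --managed_rhoai=False
```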
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Rhods.dump_prometheus_db.html b/toolbox.generated/Rhods.dump_prometheus_db.html new file mode 100644 index 0000000000..c101f0f107 --- /dev/null +++ b/toolbox.generated/Rhods.dump_prometheus_db.html @@ -0,0 +1,150 @@ + + + + + + + rhods dump_prometheus_db — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

rhods dump_prometheus_db

+

Dump Prometheus database into a file

+
+

Parameters

+

dump_name_prefix

+
    +
  • Missing documentation for dump_name_prefix

  • +
  • default value: prometheus

  • +
+

# Constants +# +# Defined as a constant in Rhods.dump_prometheus_db +cluster_prometheus_db_cluster_prometheus_db_directory: /prometheus/data

+

# +# Defined as a constant in Rhods.dump_prometheus_db +cluster_prometheus_db_cluster_prometheus_db_namespace: redhat-ods-monitoring

+

# +# Defined as a constant in Rhods.dump_prometheus_db +cluster_prometheus_db_cluster_prometheus_db_label: deployment=prometheus

+

# +# Defined as a constant in Rhods.dump_prometheus_db +cluster_prometheus_db_cluster_prometheus_db_mode: dump

+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Rhods.reset_prometheus_db.html b/toolbox.generated/Rhods.reset_prometheus_db.html new file mode 100644 index 0000000000..71b80ae08f --- /dev/null +++ b/toolbox.generated/Rhods.reset_prometheus_db.html @@ -0,0 +1,139 @@ + + + + + + + rhods reset_prometheus_db — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

rhods reset_prometheus_db

+

Resets the RHODS Prometheus database by destroying its Pod.

+

# Constants +# +# Defined as a constant in Rhods.reset_prometheus_db +cluster_prometheus_db_cluster_prometheus_db_namespace: redhat-ods-monitoring

+

# +# Defined as a constant in Rhods.reset_prometheus_db +cluster_prometheus_db_cluster_prometheus_db_label: deployment=prometheus

+

# +# Defined as a constant in Rhods.reset_prometheus_db +cluster_prometheus_db_cluster_prometheus_db_mode: reset

+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Rhods.undeploy_ods.html b/toolbox.generated/Rhods.undeploy_ods.html new file mode 100644 index 0000000000..e22503bf87 --- /dev/null +++ b/toolbox.generated/Rhods.undeploy_ods.html @@ -0,0 +1,137 @@ + + + + + + + rhods undeploy_ods — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

rhods undeploy_ods

+

Undeploy ODS operator

+
+

Parameters

+

namespace

+
    +
  • Namespace where RHODS is installed.

  • +
  • default value: redhat-ods-operator

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Rhods.update_datasciencecluster.html b/toolbox.generated/Rhods.update_datasciencecluster.html new file mode 100644 index 0000000000..08e2663a4a --- /dev/null +++ b/toolbox.generated/Rhods.update_datasciencecluster.html @@ -0,0 +1,151 @@ + + + + + + + rhods update_datasciencecluster — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

rhods update_datasciencecluster

+

Update RHOAI datasciencecluster resource

+
+

Parameters

+

name

+
    +
  • Name of the resource to update. If none, update the first (and only) one found.

  • +
+

enable

+
    +
  • List of all the components to enable

  • +
  • type: List

  • +
+

show_all

+
    +
  • If enabled, show all the available components and exit.

  • +
+

extra_settings

+
    +
  • Dict of key:value to set manually in the DSC, using JSON dot notation.

  • +
  • type: Dict

  • +
  • default value: {'spec.components.kserve.serving.managementState': 'Removed'}

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Rhods.wait_odh.html b/toolbox.generated/Rhods.wait_odh.html new file mode 100644 index 0000000000..6afe22d64c --- /dev/null +++ b/toolbox.generated/Rhods.wait_odh.html @@ -0,0 +1,137 @@ + + + + + + + rhods wait_odh — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

rhods wait_odh

+

Wait for ODH to finish its deployment

+
+

Parameters

+

namespace

+
    +
  • Namespace in which ODH is deployed

  • +
  • default value: opendatahub

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Rhods.wait_ods.html b/toolbox.generated/Rhods.wait_ods.html new file mode 100644 index 0000000000..ccae80e48b --- /dev/null +++ b/toolbox.generated/Rhods.wait_ods.html @@ -0,0 +1,133 @@ + + + + + + + rhods wait_ods — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

rhods wait_ods

+

Wait for ODS to finish its deployment

+

# Constants +# Comma-separated list of the RHODS images that should be awaited +# Defined as a constant in Rhods.wait_ods +rhods_wait_ods_images: s2i-minimal-notebook,s2i-generic-data-science-notebook

+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Scheduler.cleanup.html b/toolbox.generated/Scheduler.cleanup.html new file mode 100644 index 0000000000..ae55a07a53 --- /dev/null +++ b/toolbox.generated/Scheduler.cleanup.html @@ -0,0 +1,136 @@ + + + + + + + scheduler cleanup — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

scheduler cleanup

+

Clean up the scheduler load namespace

+
+

Parameters

+

namespace

+
    +
  • Name of the namespace where the scheduler load was generated

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Scheduler.create_mcad_canary.html b/toolbox.generated/Scheduler.create_mcad_canary.html new file mode 100644 index 0000000000..538b6af6b8 --- /dev/null +++ b/toolbox.generated/Scheduler.create_mcad_canary.html @@ -0,0 +1,136 @@ + + + + + + + scheduler create_mcad_canary — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

scheduler create_mcad_canary

+

Create a canary for MCAD AppWrappers and track the time it takes to be scheduled

+
+

Parameters

+

namespace

+
    +
  • Name of the namespace where the canary should be generated

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Scheduler.deploy_mcad_from_helm.html b/toolbox.generated/Scheduler.deploy_mcad_from_helm.html new file mode 100644 index 0000000000..d2964f381b --- /dev/null +++ b/toolbox.generated/Scheduler.deploy_mcad_from_helm.html @@ -0,0 +1,156 @@ + + + + + + + scheduler deploy_mcad_from_helm — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

scheduler deploy_mcad_from_helm

+

Deploys MCAD from helm

+
+

Parameters

+

namespace

+
    +
  • Name of the namespace where MCAD should be deployed

  • +
+

git_repo

+
    +
  • Name of the GIT repo to clone

  • +
  • default value: https://github.com/project-codeflare/multi-cluster-app-dispatcher

  • +
+

git_ref

+
    +
  • Name of the GIT branch to fetch

  • +
  • default value: main

  • +
+

image_repo

+
    +
  • Name of the image registry where the image is stored

  • +
  • default value: quay.io/project-codeflare/mcad-controller

  • +
+

image_tag

+
    +
  • Tag of the image to use

  • +
  • default value: stable

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Scheduler.generate_load.html b/toolbox.generated/Scheduler.generate_load.html new file mode 100644 index 0000000000..aa78e0790f --- /dev/null +++ b/toolbox.generated/Scheduler.generate_load.html @@ -0,0 +1,203 @@ + + + + + + + scheduler generate_load — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

scheduler generate_load

+

Generate scheduler load

+
+

Parameters

+

namespace

+
    +
  • Name of the namespace where the scheduler load will be generated

  • +
+

base_name

+
    +
  • Name prefix for the scheduler resources

  • +
  • default value: sched-test-

  • +
+

job_template_name

+
    +
  • Name of the job template to use inside the AppWrapper

  • +
  • default value: sleeper

  • +
+

aw_states_target

+
    +
  • List of expected AppWrapper target states

  • +
+

aw_states_unexpected

+
    +
  • List of AppWrapper states that fail the test

  • +
+

mode

+
    +
  • The scheduling mode to use: mcad, kueue, coscheduling, or job

  • +
  • default value: job

  • +
+

count

+
    +
  • Number of resources to create

  • +
  • default value: 3

  • +
+

pod_count

+
    +
  • Number of Pods to create in each of the AppWrappers

  • +
  • default value: 1

  • +
+

pod_runtime

+
    +
  • Runtime parameter to pass to the Pod

  • +
  • default value: 30

  • +
+

pod_requests

+
    +
  • Requests to pass to the Pod definition

  • +
  • default value: {'cpu': '100m'}

  • +
+

timespan

+
    +
  • Number of minutes over which the resources should be created

  • +
+

distribution

+
    +
  • The distribution method to use to spread the resource creation over the requested timespan

  • +
  • default value: poisson

  • +
+

scheduler_load_generator

+
    +
  • The path of the scheduler load generator to launch

  • +
  • default value: projects/scheduler/subprojects/scheduler-load-generator/generator.py

  • +
+

kueue_queue

+
    +
  • The name of the Kueue queue to use

  • +
  • default value: local-queue

  • +
+

resource_kind

+
    +
  • The kind of resource created by the load generator

  • +
  • default value: job

  • +
+
+
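To illustrate how these parameters interact, a hypothetical load-generation call (namespace and sizing values are illustrative assumptions) could look like:

```shell
# Generate a Kueue scheduler load: 50 jobs of 2 Pods each, spread
# over 10 minutes following a Poisson distribution.
# All values are placeholders for this sketch.
./run_toolbox.py scheduler generate_load \
    --namespace=sched-load \
    --mode=kueue \
    --count=50 \
    --pod_count=2 \
    --timespan=10 \
    --distribution=poisson
```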
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Server.deploy_ldap.html b/toolbox.generated/Server.deploy_ldap.html new file mode 100644 index 0000000000..1ec0099c31 --- /dev/null +++ b/toolbox.generated/Server.deploy_ldap.html @@ -0,0 +1,171 @@ + + + + + + + server deploy_ldap — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

server deploy_ldap

+

Deploy OpenLDAP and LDAP Oauth

+

Example of secret properties file:

+

admin_password=adminpasswd

+
+

Parameters

+

idp_name

+
    +
  • Name of the LDAP identity provider.

  • +
+

username_prefix

+
    +
  • Prefix for the creation of the users (suffix is 0..username_count)

  • +
+

username_count

+
    +
  • Number of users to create.

  • +
  • type: Int

  • +
+

secret_properties_file

+
    +
  • Path of a file containing the properties of LDAP secrets.

  • +
+

use_ocm

+
    +
  • If true, use ocm create idp to deploy the LDAP identity provider.

  • +
+

use_rosa

+
    +
  • If true, use rosa create idp to deploy the LDAP identity provider.

  • +
+

cluster_name

+
    +
  • Cluster to use when using OCM or ROSA.

  • +
+

wait

+
    +
  • If True, waits for the first user (0) to be able to log in to the cluster.

  • +
+

# Constants +# Name of the admin user +# Defined as a constant in Server.deploy_ldap +server_deploy_ldap_admin_user: admin

+
+
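Combining the parameters above with a secret properties file of the form shown earlier, a hypothetical invocation (IdP name, user prefix, and file path are illustrative) might be:

```shell
# Deploy the LDAP identity provider with 20 users (psapuser0..psapuser19),
# reading the admin password from a local properties file.
# All names and paths are placeholders for this sketch.
./run_toolbox.py server deploy_ldap \
    --idp_name=ldap-idp \
    --username_prefix=psapuser \
    --username_count=20 \
    --secret_properties_file=./ldap-secret.properties \
    --wait=True
```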
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Server.deploy_minio_s3_server.html b/toolbox.generated/Server.deploy_minio_s3_server.html new file mode 100644 index 0000000000..971213502b --- /dev/null +++ b/toolbox.generated/Server.deploy_minio_s3_server.html @@ -0,0 +1,156 @@ + + + + + + + server deploy_minio_s3_server — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

server deploy_minio_s3_server

+

Deploy Minio S3 server

+

Example of secret properties file:

+

user_password=passwd +admin_password=adminpasswd

+
+

Parameters

+

secret_properties_file

+
    +
  • Path of a file containing the properties of S3 secrets.

  • +
+

namespace

+
    +
  • Namespace in which Minio should be deployed.

  • +
  • default value: minio

  • +
+

bucket_name

+
    +
  • The name of the default bucket to create in Minio.

  • +
  • default value: myBucket

  • +
+

# Constants +# Name of the Minio admin user +# Defined as a constant in Server.deploy_minio_s3_server +server_deploy_minio_s3_server_root_user: admin

+

# Name of the user/access key to use to connect to the Minio server +# Defined as a constant in Server.deploy_minio_s3_server +server_deploy_minio_s3_server_access_key: minio

+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Server.deploy_nginx_server.html b/toolbox.generated/Server.deploy_nginx_server.html new file mode 100644 index 0000000000..1e69fb954c --- /dev/null +++ b/toolbox.generated/Server.deploy_nginx_server.html @@ -0,0 +1,140 @@ + + + + + + + server deploy_nginx_server — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

server deploy_nginx_server

+

Deploy an NGINX HTTP server

+
+

Parameters

+

namespace

+
    +
  • Namespace where the server will be deployed. Will be created if it doesn't exist.

  • +
+

directory

+
    +
  • Directory containing the files to serve on the HTTP server.

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Server.deploy_opensearch.html b/toolbox.generated/Server.deploy_opensearch.html new file mode 100644 index 0000000000..c33a445dd4 --- /dev/null +++ b/toolbox.generated/Server.deploy_opensearch.html @@ -0,0 +1,149 @@ + + + + + + + server deploy_opensearch — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

server deploy_opensearch

+

Deploy OpenSearch and OpenSearch-Dashboards

+

Example of secret properties file:

+

user_password=passwd +admin_password=adminpasswd

+
+

Parameters

+

secret_properties_file

+
    +
  • Path of a file containing the properties of LDAP secrets.

  • +
+

namespace

+
    +
  • Namespace in which the application will be deployed

  • +
  • default value: opensearch

  • +
+

name

+
    +
  • Name to give to the opensearch instance

  • +
  • default value: opensearch

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Server.deploy_redis_server.html b/toolbox.generated/Server.deploy_redis_server.html new file mode 100644 index 0000000000..2aaad8c0c3 --- /dev/null +++ b/toolbox.generated/Server.deploy_redis_server.html @@ -0,0 +1,136 @@ + + + + + + + server deploy_redis_server — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

server deploy_redis_server

+

Deploy a Redis server

+
+

Parameters

+

namespace

+
    +
  • Namespace where the server will be deployed. Will be created if it doesn't exist.

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Server.undeploy_ldap.html b/toolbox.generated/Server.undeploy_ldap.html new file mode 100644 index 0000000000..b3e390be92 --- /dev/null +++ b/toolbox.generated/Server.undeploy_ldap.html @@ -0,0 +1,148 @@ + + + + + + + server undeploy_ldap — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

server undeploy_ldap

+

Undeploy OpenLDAP and LDAP Oauth

+
+

Parameters

+

idp_name

+
    +
  • Name of the LDAP identity provider.

  • +
+

use_ocm

+
    +
  • If true, use ocm delete idp to delete the LDAP identity provider.

  • +
+

use_rosa

+
    +
  • If true, use rosa delete idp to delete the LDAP identity provider.

  • +
+

cluster_name

+
    +
  • Cluster to use when using OCM or ROSA.

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Storage.deploy_aws_efs.html b/toolbox.generated/Storage.deploy_aws_efs.html new file mode 100644 index 0000000000..b7fe001e4a --- /dev/null +++ b/toolbox.generated/Storage.deploy_aws_efs.html @@ -0,0 +1,130 @@ + + + + + + + storage deploy_aws_efs — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

storage deploy_aws_efs

+

Deploy AWS EFS CSI driver and configure AWS accordingly.

+

Assumes that AWS (credentials, Ansible module, Python module) is properly configured in the system.

+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Storage.deploy_nfs_provisioner.html b/toolbox.generated/Storage.deploy_nfs_provisioner.html new file mode 100644 index 0000000000..7750ad4cdc --- /dev/null +++ b/toolbox.generated/Storage.deploy_nfs_provisioner.html @@ -0,0 +1,156 @@ + + + + + + + storage deploy_nfs_provisioner — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

storage deploy_nfs_provisioner

+

Deploy NFS Provisioner

+
+

Parameters

+

namespace

+
    +
  • The namespace where the resources will be deployed

  • +
  • default value: nfs-provisioner

  • +
+

pvc_sc

+
    +
  • The name of the storage class to use for the NFS-provisioner PVC

  • +
  • default value: gp3-csi

  • +
+

pvc_size

+
    +
  • The size of the PVC to give to the NFS-provisioner

  • +
  • default value: 10Gi

  • +
+

storage_class_name

+
    +
  • The name of the storage class that will be created

  • +
  • default value: nfs-provisioner

  • +
+

default_sc

+
    +
  • Set to true to mark the storage class as default in the cluster

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/Storage.download_to_pvc.html b/toolbox.generated/Storage.download_to_pvc.html new file mode 100644 index 0000000000..5ef6f7f788 --- /dev/null +++ b/toolbox.generated/Storage.download_to_pvc.html @@ -0,0 +1,180 @@ + + + + + + + storage download_to_pvc — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

storage download_to_pvc

+

Downloads a dataset into a PVC of the cluster

+
+

Parameters

+

name

+
    +
  • Name of the data source

  • +
+

source

+
    +
  • URL of the source data

  • +
+

pvc_name

+
    +
  • Name of the PVC that will be created to store the dataset files.

  • +
+

namespace

+
    +
  • Name of the namespace in which the PVC will be created

  • +
+

creds

+
    +
  • Path to credentials to use for accessing the dataset.

  • +
+

storage_dir

+
    +
  • The path where to store the downloaded files, in the PVC

  • +
  • default value: /

  • +
+

clean_first

+
    +
  • If True, clears the storage directory before downloading.

  • +
+

pvc_access_mode

+
    +
  • The access mode to request when creating the PVC

  • +
  • default value: ReadWriteOnce

  • +
+

pvc_size

+
    +
  • The size of the PVC to request, when creating the PVC

  • +
  • default value: 80Gi

  • +
+

pvc_storage_class_name

+
    +
  • The name of the storage class to pass when creating the PVC

  • +
+

image

+
    +
  • The image to use for running the download Pod

  • +
  • default value: registry.access.redhat.com/ubi9/ubi

  • +
+
+
+ + +
+
+
+ +
+ +
+

© Copyright 2021, Red Hat PSAP team.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/toolbox.generated/index.html b/toolbox.generated/index.html new file mode 100644 index 0000000000..5f8c0e1673 --- /dev/null +++ b/toolbox.generated/index.html @@ -0,0 +1,434 @@ + + + + + + + Toolbox Documentation — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Toolbox Documentation

+
+

busy_cluster

+
Commands for making a cluster busy with a lot of resources
+
+
+ +
+
+

cluster

+
Commands relating to cluster scaling, upgrading and environment capture
+
+
+ +
+
+

configure

+
Commands relating to TOPSAIL testing configuration
+
+
+
    +
  • apply Applies a preset (or a list of presets) to the current configuration file

  • +
  • enter Enter into a custom configuration file for a TOPSAIL project

  • +
  • get Gives the value of a given key, in the current configuration file

  • +
  • name Gives the name of the current configuration

  • +
+
+
+

cpt

+
Commands relating to continuous performance testing management
+
+
+ +
+
+

fine_tuning

+
Commands relating to RHOAI fine-tuning testing
+
+
+ +
+
+

run

+
Run `topsail` toolbox commands from a single config file.
+
+
+
+
+

gpu_operator

+
Commands for deploying, building and testing the GPU operator in various ways
+
+
+ +
+
+

kepler

+
Commands relating to kepler deployment
+
+
+
    +
  • deploy_kepler Deploy the Kepler operator and monitor to track energy consumption

  • +
  • undeploy_kepler Cleanup the Kepler operator and associated resources

  • +
+
+
+

kserve

+
Commands relating to RHOAI KServe component
+
+
+ +
+
+

kubemark

+
Commands relating to kubemark deployment
+
+
+ +
+
+

kwok

+
Commands relating to KWOK deployment
+
+
+ +
+
+

llm_load_test

+
Commands relating to llm-load-test
+
+
+
    +
  • run Load test the wisdom model

  • +
+
+
+

local_ci

+
Commands to run the CI scripts in a container environment similar to the one used by the CI
+
+
+
    +
  • run Runs a given CI command

  • +
  • run_multi Runs a given CI command in parallel from multiple Pods

  • +
+
+
+

nfd

+
Commands for NFD related tasks
+
+
+ +
+
+

nfd_operator

+
Commands for deploying, building and testing the NFD operator in various ways
+
+
+ +
+
+

notebooks

+
Commands relating to RHOAI Notebooks
+
+
+
    +
  • benchmark_performance Benchmark the performance of a notebook image.

  • +
  • capture_state Capture information about the cluster and the RHODS notebooks deployment

  • +
  • cleanup Clean up the resources created along with the notebooks, during the scale tests.

  • +
  • dashboard_scale_test End-to-end scale testing of the RHOAI dashboard, at user level.

  • +
  • locust_scale_test End-to-end testing of RHOAI notebooks at scale, at API level

  • +
  • ods_ci_scale_test End-to-end scale testing of RHOAI notebooks, at user level.

  • +
+
+
+

pipelines

+
Commands relating to RHOAI Data Science Pipelines
+
+
+
    +
  • capture_state Captures the state of a Data Science Pipeline Application in a given namespace.

  • +
  • deploy_application Deploy a Data Science Pipeline Application in a given namespace.

  • +
  • run_kfp_notebook Run a notebook in a given notebook image.

  • +
+
+
+

repo

+
Commands to perform consistency validations on this repo itself
+
+
+ +
+
+

rhods

+
Commands relating to RHODS
+
+
+ +
+
+

scheduler

+
Commands relating to RHOAI scheduler testing
+
+
+ +
+
+

server

+
Commands relating to the deployment of servers on OpenShift
+
+
+ +
+
+

storage

+
Commands relating to OpenShift file storage
+
+
+ +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/understanding/orchestration.html b/understanding/orchestration.html new file mode 100644 index 0000000000..7471ee3c1e --- /dev/null +++ b/understanding/orchestration.html @@ -0,0 +1,474 @@ + + + + + + + The Test Orchestrations Layer — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

The Test Orchestrations Layer

+

The test orchestration layer is the crux of TOPSAIL. It binds +everything else together: +- the CI job launchers +- the configuration +- the toolbox commands +- the post-mortem visualizations and automated regression analyses.

+

Historically, this layer has been first and foremost triggered by CI +jobs, with clean clusters and kube-admin privileges. This is still the +first target of TOPSAIL test automation. The side effect of that is +that TOPSAIL may seem not very user-friendly when trying to use it +interactively from a terminal.

+

In this section, we’ll try to cover these different aspects that +TOPSAIL binds together.

+
+
+

The CI job launchers

+

TOPSAIL test orchestrations are focused on reproducibility and +end-to-end testing. These two ideas are directly linked, and in the +OpenShift world, the easiest way to ensure that the tests are reproducible +and end-to-end automated is to start from scratch (or from a fresh and +clean cluster).

+
+

Cluster creation

+

In OpenShift CI, TOPSAIL has the ability to create a dedicated cluster +(even two, one for RHOAI, one for simulating users). This mode is +launched with the rhoai-e2e test. It is particularly useful when +launching cloud scale tests. The cluster creation is handled by the +deploy-cluster subproject. +This part of TOPSAIL is old, and mostly written in Bash. But it has +proved to be robust and reliable, although we haven’t been using it +much since we got access to bare-metal clusters.

+

By default, these clusters are destroyed after the test. +A keep flag can be set in the configuration to avoid destroying +them, and to create a kube-admin user with a predefined password. (Ask +in a PM how to access the cluster.)

+
+
+

Cluster from pool

+

In OpenShift CI, TOPSAIL has a pool of pre-deployed clusters. These +clusters are controlled by the Hive +tool, managed by the OpenShift CI team. In the current configuration, +the pool has two single-node OpenShift systems.

+

These clusters are always destroyed at the end of the run. This is +outside of TOPSAIL’s control.

+
+
+

Bare-metal clusters

+

In the Middleware Jenkins CI, TOPSAIL can be launched against two +bare-metal clusters. These clusters have long-running OpenShift +deployments, and they are “never” reinstalled (at least, there is no +reinstall automation in place at the moment). Hence, the test +orchestrations are in charge of cleaning up the cluster before the test (to ensure +that no garbage is left) and after it (to leave the cluster clean +for the following users). So the complete test sequence is:

+
    +
  1. cleanup

  2. prepare

  3. test

  4. cleanup
+

This is the theory at least. In practice, the clusters are dedicated +to the team, and after mutual agreement, the cleanup and prepare +steps may be skipped to save time; or the test and final cleanup steps, to +keep a cluster ready for development.

+

Before launching a test, check the state of the cluster. Is RHOAI +installed? Is the DSC configured as you expect? If not, make sure +you tick the cleanup and prepare steps.

+

Is someone else’s job already running on the same cluster? If so, your job +will be queued and will only start after the first job completes. Make +sure you tick the cleanup and prepare steps.

+
+
+

Launching TOPSAIL jobs on the CI engines

+

See this google doc for all the details about launching TOPSAIL jobs +on the CI engines:

+ +
+
+
+

TOPSAIL Configuration System

+

The configuration system is (yet another) key element of TOPSAIL. It +has been designed to be flexible, modular, and (an important point for +understanding some of its implementation choices) configurable from +OpenShift CI and other CI engines.

+
+

A bit of history

+

OpenShift CI is a great tool, but a strong limitation of it is that it +can only be statically configured (from the openshift/release +repository). TOPSAIL had to find a way to enable dynamic +configuration, without touching the source code. Long story short (see a +small slide deck +illustrating it), TOPSAIL can be configured from GitHub. (See How +to launch TOPSAIL tests +for all the details).

+
/test rhoai-light fine_tuning ibm_40gb_models
+/var tests.fine_tuning.test_settings.gpu: [2, 4]
+
+
+
+
+

A bit of apology

+

TOPSAIL project’s configuration is a YAML document. On one hand, each +project is free to define its own configuration. On the other hand, +some code is shared between different projects (the library files, +defined in some of the projects).

+

This aspect (the full flexibility plus the code reuse in the libraries) +makes the configuration structure hard to track. A refactoring might +be envisaged to have a more strongly defined configuration format, at +least for the reusable libraries (e.g., the library could say: this +configuration block does not follow my model, I refuse to +process it).

+
+
+

How it actually works

+

So, TOPSAIL project’s configuration is a YAML document. And the test +orchestration reads it to alter its behavior. It’s as simple as that.

+
tests:
+  capture_prom: true
+  capture_state: true
+
+
+
capture_prom = config.project.get_config("tests.capture_prom")
+if not capture_prom:
+    logging.info("tests.capture_prom is disabled, skipping Prometheus DB reset")
+    return
+
+
+
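The get_config call above walks the YAML document along a dotted path. A minimal sketch of such a lookup (a hypothetical helper, not TOPSAIL's actual implementation) could look like this:

```python
# Hypothetical sketch of a dotted-path lookup over the configuration
# document, in the spirit of config.project.get_config(...).
def get_config(config: dict, dotted_key: str):
    node = config
    for part in dotted_key.split("."):
        node = node[part]  # raises KeyError if the path doesn't exist
    return node

# Usage, mirroring the YAML block above:
config = {"tests": {"capture_prom": True, "capture_state": True}}
capture_prom = get_config(config, "tests.capture_prom")
```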

Sometimes, the test orchestration doesn’t need to act on some +configuration flags, but only to pass them to the toolbox layer. TOPSAIL +provides a helper toolbox command for that: from_config.

+

Example:

+
rhods:
+  catalog:
+    image: brew.registry.redhat.io/rh-osbs/iib
+    tag: 804339
+    channel: fast
+    version: 2.13.0
+    version_name: rc1
+    opendatahub: false
+    managed_rhoai: true
+
+
+

These configuration flags should be passed directly to the rhods +deploy_ods toolbox command

+
def deploy_ods(self, catalog_image, tag, channel="", version="",
+               disable_dsc_config=False, opendatahub=False, managed_rhoai=True):
+    """
+    Deploy ODS operator from its custom catalog
+
+    Args:
+      catalog_image: Container image containing the RHODS bundle.
+      tag: Catalog image tag to use to deploy RHODS.
+      channel: The channel to use for the deployment. Let empty to use the default channel.
+      ...
+    """
+
+
+

So a naive way to launch the RHOAI deployment would be:

+
run.run_toolbox("rhods", "deploy_ods",
+                catalog_image=config.project.get_config("rhods.catalog.image"),
+                tag=config.project.get_config("rhods.catalog.tag"),
+                channel=config.project.get_config("rhods.catalog.channel"),
+                ...)
+
+
+

Instead, the orchestration can use the command_args.yaml.j2 file:

+
rhods deploy_ods:
+  catalog_image: {{ rhods.catalog.image }}
+  tag: {{ rhods.catalog.tag }}
+  channel: {{ rhods.catalog.channel }}
+  ...
+
+
+

where the template will be generated from the configuration file. And +this command will trigger it:

+
run.run_toolbox_from_config("rhods", "deploy_ods")
+
+
+

or this equivalent, from the command-line:

+
source ./projects/fine_tuning/testing/configure.sh
+./run_toolbox.py from_config rhods deploy_ods
+
+
+
+
+

Configuring the configuration with presets

+

TOPSAIL configuration can be updated through the presets. This allows +storing multiple different test flavors side by side, and deciding at +launch time which one to execute.

+

The presets, stored in the configuration under the ci_presets +field, define how to update the main configuration blocks before +running the test.

+

Here is an example, which will test multiple dataset replication +factors:

+
dgx_single_model_multi_dataset:
+  extends: [dgx_single_model]
+  tests.fine_tuning.matbenchmarking.enabled: true
+  tests.fine_tuning.test_settings.gpu: 1
+  tests.fine_tuning.test_settings.dataset_replication: [1, 2, 4, 8]
+
+
+

We see that three fields are “simply” updated. The extends keyword +means that first of all (because it is in the first position), we need +to apply the dgx_single_model preset, and only afterwards modify the +three fields.

+

The presets are applied with a simple recursive algorithm (which will +dirtily crash if there is a loop in the presets ^.^). If multiple +presets are defined, and they touch the same values, only the last +change will be visible. The same goes for the extends keyword: it is applied +at its position in the dictionary.

+
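The preset mechanism described above can be sketched as follows. This is a hypothetical, simplified re-implementation (not TOPSAIL's actual code): dotted keys update existing fields only, and an extends entry recursively applies other presets at its position, with no loop detection, mirroring the caveat above.

```python
# Hypothetical sketch of TOPSAIL-style preset application.
def set_dotted(config, dotted_key, value):
    *path, leaf = dotted_key.split(".")
    node = config
    for part in path:
        node = node[part]
    if leaf not in node:
        # presets cannot create new fields; a placeholder must exist
        raise KeyError(f"preset cannot create new field: {dotted_key}")
    node[leaf] = value

def apply_preset(config, presets, name):
    for key, value in presets[name].items():
        if key == "extends":
            for parent in value:  # applied at its position in the dict
                apply_preset(config, presets, parent)
        else:
            set_dotted(config, key, value)
```

With the dgx_single_model_multi_dataset example above, applying the preset would first apply dgx_single_model, then overwrite the three dotted fields.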

Last important point: the presets cannot create new fields. This +can be worked around by having placeholders in the main +configuration. Eg:

+
tests:
+  fine_tuning:
+    test_settings:
+        hyper_parameters:
+          per_device_train_batch_size: null
+          gradient_accumulation_steps: null
+
+
+

And everything is YAML. So the preset values can be YAML dictionaries +(or lists).

+
tests.fine_tuning.test_settings.hyper_parameters: {r: 4, lora_alpha: 16}
+
+
+

This would work even if no placeholder has been set for r and +lora_alpha, because the hyper_parameters field is assigned as a whole +(and everything it contained before is erased).

+
+
+
+

Calling the toolbox commands

+

The “orchestration” layer orchestrates the toolbox commands. That is, +it calls them, in the right order, according to configuration flags, +and with the right parameters.

+

The Python code can call the toolbox directly, by passing all the +necessary arguments:

+
has_dsc = run.run("oc get dsc -oname", capture_stdout=True).stdout
+run.run_toolbox(
+    "rhods", "update_datasciencecluster",
+    enable=["kueue", "codeflare", "trainingoperator"],
+    name=None if has_dsc else "default-dsc",
+)
+
+
+

or from the configuration:

+
run.run_toolbox_from_config("rhods", "deploy_ods")
+
+
+

But it can also have a “mix” of both, via the extra arguments of +the from_config call:

+
extra = dict(source=source, storage_dir=storage_dir, name=source_name)
+run.run_toolbox_from_config("cluster", "download_to_pvc", extra=extra)
+
+
+

This way, cluster download_to_pvc will have parameters received +from the configuration, and extra settings (which take precedence), +prepared directly in Python.

+

The from_config command also accepts a prefix and/or a +suffix. Indeed, one command might be called with different parameters +in the same workflow.

+

A simple example is the cluster set_scale command, which is used, +in cloud environments, to control the number of nodes dedicated to a +given task.

+
sutest/cluster set_scale:
+  name: {{ clusters.sutest.compute.machineset.name }}
+  instance_type: {{ clusters.sutest.compute.machineset.type }}
+  scale: SET_AT_RUNTIME
+
+driver/cluster set_scale:
+  instance_type: {{ clusters.driver.compute.machineset.type }}
+  name: {{ clusters.driver.compute.machineset.name }}
+  scale: SET_AT_RUNTIME
+
+
+

This will be called with the prefix parameter:

+
run.run_toolbox_from_config("cluster", "set_scale", prefix="sutest", extra=dict(scale=...))
+run.run_toolbox_from_config("cluster", "set_scale", prefix="driver", extra=dict(scale=...))
+
+
+

and the same works for the suffix:

+
prefix/command sub-command/suffix: ...
+
+
+
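For illustration, the naming scheme for a command's block in command_args.yaml.j2 could be computed as follows. This is a hypothetical sketch of the scheme shown above, not TOPSAIL's actual code:

```python
# Hypothetical sketch: build the command_args.yaml.j2 lookup key for a
# toolbox command, taking the optional prefix and suffix into account.
def command_args_key(group, command, prefix=None, suffix=None):
    key = f"{group} {command}"      # e.g. "cluster set_scale"
    if prefix:
        key = f"{prefix}/{key}"     # e.g. "sutest/cluster set_scale"
    if suffix:
        key = f"{key}/{suffix}"     # e.g. "cluster set_scale/teardown"
    return key
```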
+

Creating dedicated directories

+

The artifacts are a critical element for TOPSAIL post-mortem +processing and troubleshooting. But when the orchestration starts to +involve multiple commands, it gets complicated to understand what is +done at which step.

+

So TOPSAIL provides the env.NextArtifactDir context, which creates +a dedicated directory (with a nnn__ prefix to enforce the correct +ordering).

+

Inside this directory, env.ARTIFACT_DIR will be correctly set, so that +the code can write its artifact files in a dedicated directory.

+
with env.NextArtifactDir("multi_model_test_sequentially"):
+
+
+

This is mostly used in the test part, to group the multiple +commands related to a test together.

+
+
+

Running toolbox commands in parallel

+

When the orchestration preparation starts to involve multiple +commands, running all of them sequentially may take forever.

+

So TOPSAIL provides the run.Parallel context and the +parallel.delayed function to allow running multiple commands in +parallel:

+
with run.Parallel("prepare_scale") as parallel:
+    parallel.delayed(prepare_kserve.prepare)
+    parallel.delayed(scale_up_sutest)
+
+    parallel.delayed(prepare_user_pods.prepare_user_pods, user_count)
+    parallel.delayed(prepare_user_pods.cluster_scale_up, user_count)
+
+
+

This will create a dedicated directory, and at the end of the block it +will execute the 4 functions in dedicated threads.

+
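Under stated assumptions (a hypothetical simplified re-implementation, not TOPSAIL's actual run module), the Parallel context can be sketched with threads: delayed() records the calls, and exiting the block runs them all and waits for completion.

```python
# Hypothetical sketch of a run.Parallel-style helper.
import threading

class Parallel:
    def __init__(self, name):
        self.name = name
        self.calls = []

    def delayed(self, fn, *args, **kwargs):
        # only record the call; nothing runs until the block exits
        self.calls.append((fn, args, kwargs))

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type:
            return False  # don't launch anything if the block failed
        threads = [threading.Thread(target=fn, args=args, kwargs=kwargs)
                   for fn, args, kwargs in self.calls]
        for t in threads:
            t.start()
        for t in threads:
            t.join()      # wait for all the delayed calls to complete
        return False
```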

Mind that the configuration cannot be updated inside a parallel +region (eg, +config.project.set_config("tests.scale.model.consolidated", True)).

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/understanding/toolbox.html b/understanding/toolbox.html new file mode 100644 index 0000000000..c856044d73 --- /dev/null +++ b/understanding/toolbox.html @@ -0,0 +1,230 @@ + + + + + + + The Reusable Toolbox Layer — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

The Reusable Toolbox Layer

+

TOPSAIL’s toolbox provides an extensive set of reusable +functionalities. It is a critical part of the test orchestration, as +the toolbox commands are in charge of the majority of the operations +affecting the state of the cluster.

+

The Ansible-based design of the toolbox has proved over the last +years to be a key element in the efficiency of TOPSAIL-based +performance and scale investigations. The Ansible roles are always +executed locally, with a custom stdout callback for easy log reading.

+

In the design of the toolbox framework, post-mortem troubleshooting is one +of the key concerns. The roles are always executed with a dedicated +artifact directory ({{ artifact_extra_logs_dir }}), where the tasks +are expected to store their generated source artifacts (in the src +directory) and the state of the resources they have changed +(in the artifacts directory). The role should also store any other +information helpful for understanding why the role execution failed, as +well as any “proof” that it executed its task correctly. These +artifacts will be reviewed after the test execution, to understand +what went wrong, if the cluster was in the right state, etc. The +artifacts can also be parsed by the post-mortem visualization engine, +to extract test results, timing information, etc:

+
- name: Create the src artifacts directory
+  file:
+    path: "{{ artifact_extra_logs_dir }}/src/"
+    state: directory
+    mode: '0755'
+
+- name: Create the nginx HTTPS route
+  shell:
+    set -o pipefail;
+    oc create route passthrough nginx-secure
+       --service=nginx --port=https
+       -n "{{ cluster_deploy_nginx_server_namespace }}"
+       --dry-run=client -oyaml
+      | yq -y '.apiVersion = "route.openshift.io/v1"'
+      | tee "{{ artifact_extra_logs_dir }}/src/route_nginx-secure.yaml"
+      | oc apply -f -
+
+
+- name: Create the artifacts artifacts directory
+  file:
+    path: "{{ artifact_extra_logs_dir }}/artifacts/"
+    state: directory
+    mode: '0755'
+
+- name: Get the status of the Deployment and Pod
+  shell:
+    oc get deploy/nginx-deployment
+       -owide
+       -n "{{ cluster_deploy_nginx_server_namespace }}"
+       > "{{ artifact_extra_logs_dir }}/artifacts/deployment.status";
+
+    oc get pods -l app=nginx
+       -owide
+       -n "{{ cluster_deploy_nginx_server_namespace }}"
+       > "{{ artifact_extra_logs_dir }}/artifacts/pod.status";
+
+    oc describe pods -l app=nginx
+       -n "{{ cluster_deploy_nginx_server_namespace }}"
+       > "{{ artifact_extra_logs_dir }}/artifacts/pod.descr";
+
+
+

The commands are implemented as Ansible roles, with a Python API and CLI +interface on top of them.

+

So this entrypoint:

+
@AnsibleRole("cluster_deploy_nginx_server")
+@AnsibleMappedParams
+def deploy_nginx_server(self, namespace, directory):
+    """
+    Deploy an NGINX HTTP server
+
+    Args:
+        namespace: namespace where the server will be deployed. Will be created if it doesn't exist.
+        directory: directory containing the files to serve on the HTTP server.
+    """
+
+
+

will be translated into this CLI:

+
$ ./run_toolbox.py cluster deploy_nginx_server --help
+
+INFO: Showing help with the command 'run_toolbox.py cluster deploy_nginx_server -- --help'.
+
+NAME
+    run_toolbox.py cluster deploy_nginx_server - Deploy an NGINX HTTP server
+
+SYNOPSIS
+    run_toolbox.py cluster deploy_nginx_server VALUE | NAMESPACE DIRECTORY
+
+DESCRIPTION
+    Deploy an NGINX HTTP server
+
+POSITIONAL ARGUMENTS
+    NAMESPACE
+        namespace where the server will be deployed. Will be created if it doesn't exist.
+    DIRECTORY
+        directory containing the files to serve on the HTTP server.
+
+
+
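The Python-to-Ansible mapping can be illustrated with a hypothetical sketch of an @AnsibleMappedParams-style decorator. This is a simplified illustration, not TOPSAIL's actual implementation: it only shows how a method's parameters can be turned into role variables prefixed with the role name (as in cluster_deploy_nginx_server_namespace, seen in the Ansible role above).

```python
# Hypothetical sketch: map a function's arguments to Ansible role
# variables named "<role_name>_<param>".
import inspect

def ansible_mapped_params(role_name):
    def wrap(fn):
        def call(*args, **kwargs):
            bound = inspect.signature(fn).bind(*args, **kwargs)
            bound.apply_defaults()
            # each parameter becomes a role variable, e.g.
            # namespace -> cluster_deploy_nginx_server_namespace
            return {f"{role_name}_{name}": value
                    for name, value in bound.arguments.items()}
        return call
    return wrap

@ansible_mapped_params("cluster_deploy_nginx_server")
def deploy_nginx_server(namespace, directory):
    """Deploy an NGINX HTTP server."""
```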
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/understanding/visualization.html b/understanding/visualization.html new file mode 100644 index 0000000000..33d47c32e3 --- /dev/null +++ b/understanding/visualization.html @@ -0,0 +1,182 @@ + + + + + + + The Post-mortem Processing & Visualization Layer — Red Hat PSAP topsail toolbox git-main/c8e4b1e9 documentation + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

The Post-mortem Processing & Visualization Layer

+

TOPSAIL post-mortem visualization relies on MatrixBenchmarking.

+

MatrixBenchmarking consists of multiple components:

+
    +
  • the benchmark component is in charge of running various test +configurations. MatrixBenchmarking/benchmark is configured with a +set of settings, with one or multiple values. The execution engine +will go through each of the possible configurations and execute it +to capture its performance.

  • +
  • the visualize component is in charge of the generation of plots +and reports, based on the Dash and Plotly +packages. MatrixBenchmarking/visualize is launched either against a +single result directory, or against a directory with multiple +results. The result directories may have been generated by TOPSAIL, +which directly writes the relevant files (often the case when there’s +only one test executed, or when the test list is a simple iteration +over a list of configurations), or via MatrixBenchmarking/benchmark +(when the test list has to iterate over various, dynamically defined +settings). This component is further described below.

  • +
  • the download component is in charge of downloading artifacts +from S3, OpenShift CI or the Middleware Jenkins. Using this +component instead of a simple scraper allows downloading only the +files important for the post-processing, or even only the cache +file. This component is used when “re-plotting”, that is, when +regenerating the visualization in the CI without re-running the +tests.

  • +
  • the upload_lts component is used to upload the LTS (long term +storage) payload and KPIs (key performance indicators) to +OpenSearch. It is triggered at the end of a gating test.

  • +
  • the download_lts component is used to download the historical +LTS payloads and KPIs from OpenSearch. It is used in gating tests +before running the regression analysis.

  • +
  • the analyze_lts component is used to check the results of a test +against “similar” historical results. “Similar” here means that the +tests should have been executed with the same settings, +except for the so-called “comparison settings” (eg, the RHOAI version, +the OCP version, etc). The regression analysis is done with the help +of the datastax-labs/hunter package.

    +

    In this document, we’ll focus on the visualize component, which +is a key part of TOPSAIL test pipelines. (So are analyze_lts, +download_lts and upload_lts for continuous performance +testing, but they don’t require much per-project customization.)

    +
  • +
+

TOPSAIL/MatrixBenchmarking visualization modules are split into +two main components: the parsers (in the store module) and the plotters +(in the plotting module). In addition to that, the continuous +performance testing (CPT) requires two extra components: the models +(in the models module) and the regression analysis preparation (in the +analyze module).

+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file