diff --git a/archetypes/releases.md b/archetypes/releases.md index 80b08c222..f2898b39d 100644 --- a/archetypes/releases.md +++ b/archetypes/releases.md @@ -15,23 +15,18 @@ _Release date:_ The following sections document the changes this release brings to each service. -### Admin - -### BPMN engine - -### Schema -### BAAS - -### Core +### Admin ### Agent -### Audit +### BaaS + +### ISA-95 -### Keycloak Theme +### Typescript host service -### Router +### Workflow ## Compatibility diff --git a/content/_index.md b/content/_index.md index 0b9cd02f0..1dbac1089 100644 --- a/content/_index.md +++ b/content/_index.md @@ -1,10 +1,10 @@ --- -title: ##Leave only home page without title +title: v4.0.0 description: User guides, deploy docs, references, and deep dives about the Rhize manufacturing data hub. cascade: - type: docs - v: "3.2.1" + type: versions + v: "4.2.0" --- diff --git a/content/deploy/cluster-sizing.md b/content/deploy/cluster-sizing.md index e657e6b1c..3f5ad022d 100644 --- a/content/deploy/cluster-sizing.md +++ b/content/deploy/cluster-sizing.md @@ -16,19 +16,18 @@ The following tables are the minimum recommended sizes to provision your cluster For high availability, Rhize recommends a **minimum of three nodes** with the following specifications. - | Property | Value | |-----------------------|-------------------| | Number of nodes | 3 | | CPU Speed (GHz) | 3.3 | | vCPU per Node | 16 | | Memory per node (GiB) | 32 (64 is better) | -| Persisted volumes | 12 | +| Persisted volumes | 16 | | Persisted Volume IOPS | 5000 | | PV Throughput (MBps) | 500 | | Total Disk Space (TB) | 3 | | Disk IOPS | 5000 | -| Disk MBps | 500MBps | +| Disk MBps | 500 | ### Rhize agent @@ -47,24 +46,25 @@ For the Rhize Agent, the minimum recommended specifications are as follows: The following table lists the **minimum** recommended specifications for the main services. Services with stateful PV have a persistent volume per pod. +>![Warn] +> Avoid NFS or SMB filesystems. These are known to lead to file corruption in BaaS and do not work at all with various other services. 
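Before you provision volumes, it can help to confirm that the cluster's storage classes are backed by block storage rather than an NFS or SMB provisioner. A minimal check with `kubectl` (the class name `gp3` is only an example):

```bash
# List storage classes and their provisioners; the default class is marked "(default)".
kubectl get storageclass

# Inspect the provisioner behind a specific class, for example one named "gp3".
kubectl get storageclass gp3 -o jsonpath='{.provisioner}{"\n"}'
```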
+ | Service | Pods for HA (replica count) | vCPU per Pod | Memory Per Pod | Stateful PV | DiskSize (GiB) | Comments | |------------------------|-----------------------------|--------------|----------------|-------------|----------------|----------------------------------------------------------------------| | `baas-alpha` | 3 | 8 | 16 (at least) | Yes | 750 | High throughput and IOPS | -| `baas-zero` | 3 | 2 | 2 | Yes | 350 | High throughput and IOPS | -| `libre-core` | 3 | 1 | 2 | No | N/A | HA requires 2 pods, but 3 is to avoid hotkey issues and balance load | -| `bpmn-engine` | 3 | 1 | 2 | No | N/A | HA requires 2 pods, but 3 is to avoid hotkey issues and balance load | -| `nats` | 3 | 1 | 2 | Yes | 100 | High IOPS | -| `nats-box` | 1 | 0.25 | 0.25 | No | N/A | | -| `libre-audit` | 2 | 1 | 1 | No | N/A | | +| `baas-zero` | 3 | 2 | 2 | Yes | 300 | High throughput and IOPS | +| `workflow` | 3 | 1 | 2 | No | N/A | HA requires 2 pods, but 3 is to avoid hotkey issues and balance load | +| `isa95` | 2 | 2 | 1 | NO | N/A | | +| `keycloak-postgres` | 2 | 1 | 2 | No | 200 | Runs in pod with `keycloak` | +| `keycloak` | 2 | 1 | 2 | No | N/A | | | `libre-audit-postgres` | 2 | 1 | 2 | Yes | 250 | Runs in pod with `libre-audit` | | `libre-ui` | 3 | 0.25 | 0.25 | No | N/A | | -| `keycloak` | 2 | 1 | 2 | No | N/A | | -| `keycloak-postgres` | 2 | 1 | 2 | No | 200 | Runs in pod with `keycloak` | -| `router` | 2 | 1 | 2 | Yes | <1 | Requires volume to compose supergraph | -| `grafana`* | 3 | 0.5 | 2 | No | 20-50 | Storage can be in host or in object bucket. | +| `quest-db` | 1 | 4 | 8 | Yes | 250 | High Throughput and IPOS | +| `redpanda` | 3 | | | Yes | 100 | High IOPS | +| `restate` | 3 | | | Yes | 50 | High Throughput and IPOS | +| `appsmith` | 3 | 4 | | Yes | 50 | High Throughput and IPOS | - * May run [in separate cluster](#monitoring-stack) ### Monitoring stack @@ -90,3 +90,9 @@ However, some deployments prefer to separate monitoring to its own cluster. | `tempo-distributor` | 1 | 0.25 | 0.5 | 0.25 | | `tempo-query-frontend` | 1 | 0.25 | 0.5 | 0.25 | | `temp-memcache` | 1 | 0.25 | 0.1 | 0.25 | + +## Back up + +You can [back up Rhize to S3](/deploy/backup/binary/) . +Consider including an S3 bucket as part of your deployment. + diff --git a/content/deploy/install/keycloak.md b/content/deploy/install/keycloak.md index bb97b5d84..052835991 100644 --- a/content/deploy/install/keycloak.md +++ b/content/deploy/install/keycloak.md @@ -49,7 +49,7 @@ To create your Rhize realm, follow these steps. 1. In the side menu, select **Realm Settings**. 1. Enter the following values: | Field | value | - |--------------|-----------------------| + | ------------ | --------------------- | | Frontend URL | Keycloak frontend URL | | Require SSL | External requests | @@ -141,9 +141,9 @@ Create a client for the UI as follows: 1. Configure the **Access Settings**: - - **Root URL**: `.` without trailing slashes - - **Home URL**: `.` without trailing slashes - - **Web Origins**: `.` without trailing slashes + - **Root URL**: `` without trailing slashes + - **Home URL**: `` without trailing slashes + - **Web Origins**: `` without trailing slashes 1. Select **Next**, then **Save**. @@ -168,8 +168,8 @@ Create a client for the UI as follows: 1. 
Configure the **Access Settings**: - - **Root URL**: `.` without trailing slashes - - **Home URL**: `.` without trailing slashes + - **Root URL**: `` without trailing slashes + - **Home URL**: `` without trailing slashes - **Valid redirect URIs**: `/login/generic_oauth` without trailing slashes - **Valid post logout redirect URIs**: `+` without trailing slashes - **Web origins**: `.` without trailing slashes @@ -181,22 +181,26 @@ Create a client for the UI as follows: The other services do not need authorization but do need client authentication. By default you need to add only the client ID. -For example, to create the BPMN engine client: +For example, to create the Workflow client: 1. In the side menu, select **Clients > create client**. -1. For **Client ID**, enter `{{< param application_name >}}Bpmn` +1. For **Client ID**, enter `{{< param application_name >}}Workflow` +1. **Name**: `{{< param brand_name >}} Workflow Engine` +1. **Description**: `{{< param brand_name >}} Workflow Engine` 1. Configure the **Capability config**: - **Client Authentication**: On 1. Select **Next**, then **Save**. -**Repeat this process for each of the following services:** +Repeat the preceding process for each of the following services with the corresponding values in the table. -| Client ID | Description | -|----------------------------------------|-----------------------| -| `{{< param application_name >}}Audit` | The audit log service | -| `{{< param application_name >}}Core` | The edge agent | -| `{{< param application_name >}}Router` | API router | +| Client ID | Name | Description | +| --------------------------------------- | --------------------------------------- | --------------------------- | +| `{{< param application_name >}}Agent` | {{< param brand_name >}} Agent | The agent data service | +| `{{< param application_name >}}Audit`* | {{< param brand_name >}} Audit Log | The audit log service | +| `{{< param application_name >}}ISA95` | {{< param brand_name >}} ISA-95 Model | The ISA-95 model service | +| `{{< param application_name >}}KPI`* | {{< param brand_name >}} KPI Calculator | The ISO22400 KPI calculator | +| `{{< param application_name >}}Router`* | {{< param brand_name >}} API Router | The API router | -Based on your architecture, repeat for any Libre Edge Agents, `{{< param application_name >}}Agent`. +*- Optional based on your architecture. ### Scope services @@ -216,31 +220,28 @@ To create a scope for your Rhize services, follow these steps: - **Display on consent screen**: `On` - **Include in token scope**: `On` 1. **Create**. -1. Select the **Mappers** tab, then **Configure new mapper**. Add an audience mapper for the DB client: - - **Mapper Type**: `Audience` - - **Name**: `{{< param db >}}AudienceMapper` - - **Include Client Audience**: `{{< param db >}}` - - **Add to ID Token**: `On` - - **Add to access token**: `On` -1. Repeat the preceding step for a mapper for the UI client: - - **Mapper Type**: `Audience` - - **Name**: `{{< param application_name >}}UIAudienceMapper` - - **Include Client Audience**: `{{< param application_name >}}UI` - - **Add to ID Token**: `On` - - **Add to access token**: `Off` -1. Repeat the preceding step for a mapper for the BPMN client: - - **Mapper Type**: `Audience` - - **Name**: `{{< param application_name >}}BpmnAudienceMapper` - - **Include Client Audience**: `{{< param application_name >}}Bpmn` - - **Add to ID Token**: `On` - - **Add to access token**: `On` -1. 
If using the Rhize Audit microservice, repeat the preceding step for an Audit scope and audience mapper: - - **Mapper Type**: `Audience` - - **Name**: `{{< param application_name >}}AuditAudienceMapper` - - **Include Client Audience**: - - **Included Custom Audience**: `audit` - - **Add to ID Token**: `On` - - **Add to access token**: `On` + +#### Create audience mappers +Select the **Mappers** tab, then **Configure new mapper**. Add an audience mapper for the DB client: + - **Mapper Type**: `Audience` + - **Name**: `{{< param db >}}AudienceMapper` + - **Include Client Audience**: `{{< param db >}}` + - **Add to ID Token**: `On` + - **Add to access token**: `On` + +Repeat the preceding process for each of the following services with the corresponding values in the table. + +| Name | Include Client Audience | ID Token | Access Token | +| ------------------------------------------------------ | ---------------------------------------- | :------: | :----------: | +| `{{< param application_name >}}AuditAudienceMapper`* | `audit`** | `On` | `On` | +| `{{< param application_name >}}AgentAudienceMapper` | `{{< param application_name >}}Agent` | `On` | `On` | +| `{{< param application_name >}}ISA95AudienceMapper` | `{{< param application_name >}}ISA95` | `On` | `On` | +| `{{< param application_name >}}KPIAudienceMapper`* | `{{< param application_name >}}KPI` | `On` | `On` | +| `{{< param application_name >}}UIAudienceMapper` | `{{< param application_name >}}UI` | `On` | `Off` | +| `{{< param application_name >}}WorkflowAudienceMapper` | `{{< param application_name >}}Workflow` | `On` | `On` | + +*- Optional based on your architecture.
+**- Included as a Custom Audience. #### Add services to the scope @@ -250,14 +251,24 @@ To create a scope for your Rhize services, follow these steps: 1. Select `{{< param application_name >}}ClientScope` from the list. 1. **Add > Default**. -Repeat this process for the `dashboard`, `{{< param application_name >}}UI`, `{{< param application_name >}}Bpmn`, `{{< param application_name >}}Core`, `{{< param application_name >}}Router`, `{{< param application_name >}}Audit` (if applicable). Based on your architecture repeat for any Libre Edge Agent clients. +Repeat the preceding process above for each of the following services: + +- `dashboard` +- `{{< param application_name >}}Audit`* +- `{{< param application_name >}}Agent` +- `{{< param application_name >}}ISA95` +- `{{< param application_name >}}KPI`* +- `{{< param application_name >}}Router`* +- `{{< param application_name >}}UI` +- `{{< param application_name >}}Workflow` + +*- Optional based on your architecture. ### Create roles and groups In Keycloak, _roles_ identify a category or type of user. _Groups_ are a common set of attributes for a set of users. - #### Add the Admin Group 1. In the left hand menu, select **Groups > Create group**. @@ -305,7 +316,7 @@ Now map the scope: 1. Select the **Client scopes** tab. 1. **Add client scope**. 1. Select `groups`. -1. **Add > Default**. +1. **Add Default**. ### Add Client Policy @@ -314,7 +325,7 @@ Rhize requires authorization for the database service. 1. In the left hand menu, select **Clients**, and then `{{< param db >}}`. 1. Select the **Authorization** tab. -1. Select the **Policies** sub-tab. +1. Select the **Policies** subtab. 1. Select **Create Policy > Group**. 1. Name the policy `{{< param application_name >}}AdminGroupPolicy`. 1. Select **Add Groups**. @@ -342,43 +353,18 @@ Now create a user password: 1. For **Temporary**, choose `Off`. 1. **Save**. 
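If you prefer to script service-user creation instead of working in the console, the Keycloak admin CLI (`kcadm.sh`) can perform the same steps. A minimal sketch, assuming a realm named `libre` and example user values; substitute the client IDs and domain used in this guide:

```bash
# Authenticate the admin CLI against your Keycloak server (URL and admin user are examples).
kcadm.sh config credentials --server https://keycloak.example.com --realm master --user admin

# Create the service user with a verified email.
kcadm.sh create users -r libre \
  -s username=libreWorkflow@example.com \
  -s email=libreWorkflow@example.com \
  -s emailVerified=true \
  -s firstName=Workflow \
  -s enabled=true

# Set a permanent (non-temporary) password for the user.
kcadm.sh set-password -r libre --username libreWorkflow@example.com --new-password '<password>'
```

Adding the user to the `{{< param application_name >}}AdminGroup` can still be done in the console as in the steps above.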
-Repeat this process for the following accounts: - -- Audit: - - **Username**: `{{< param application_name >}}Audit@{{< param domain_name >}}` - - **Email**: `{{< param application_name >}}Audit@{{< param domain_name >}}` - - **Email Verified**: `On` - - **First name**: `Audit` - - **Last name**: `{{< param brand_name >}}` - - **Join Groups**: `{{< param application_name >}}AdminGroup` -- Core: - - **Username**: `{{< param application_name >}}Core@{{< param domain_name >}}` - - **Email**: `{{< param application_name >}}Core@{{< param domain_name >}}` - - **Email Verified**: `On` - - **First name**: `Core` - - **Last name**: `{{< param brand_name >}}` - - **Join Groups**: `{{< param application_name >}}AdminGroup` -- BPMN - - **Username**: `{{< param application_name >}}Bpmn@{{< param domain_name >}}` - - **Email**: `{{< param application_name >}}Bpmn@{{< param domain_name >}}` - - **Email Verified**: `On` - - **First name**: `Bpmn` - - **Last name**: `{{< param brand_name >}}` - - **Join Groups**: `{{< param application_name >}}AdminGroup` -- Router - - **Username**: `{{< param application_name >}}Router@{{< param domain_name >}}` - - **Email**: `{{< param application_name >}}Router@{{< param domain_name >}}` - - **Email Verified**: `On` - - **First name**: `Router` - - **Last name**: `{{< param brand_name >}}` - - **Join Groups**: `{{< param application_name >}}AdminGroup` -- Agent - - **Username**: `{{< param application_name >}}Agent@{{< param domain_name >}}` - - **Email**: `{{< param application_name >}}Agent@{{< param domain_name >}}` - - **Email Verified**: `On` - - **First name**: `Agent` - - **Last name**: `{{< param brand_name >}}` - - **Join Groups**: `{{< param application_name >}}AdminGroup` +Repeat the preceding process for each of the following services with the corresponding values in the table. + +| Username | First name | +| ------------------------------------------------------------------ | ---------- | +| `{{< param application_name >}}Audit@{{< param domain_name >}}`* | Audit | +| `{{< param application_name >}}Agent@{{< param domain_name >}}` | Agent | +| `{{< param application_name >}}ISA95@{{< param domain_name >}}` | ISA95 | +| `{{< param application_name >}}KPI@{{< param domain_name >}}`* | KPI | +| `{{< param application_name >}}Router@{{< param domain_name >}}`* | Router | +| `{{< param application_name >}}Workflow@{{< param domain_name >}}` | Workflow | + +*- Optional based on your architecture. {{% /steps %}} diff --git a/content/deploy/install/row-level-access-control.md b/content/deploy/install/row-level-access-control.md index 9c52208e7..84091fe2f 100644 --- a/content/deploy/install/row-level-access-control.md +++ b/content/deploy/install/row-level-access-control.md @@ -23,7 +23,7 @@ Consider the following scenario: Acme Inc. contracts part of its supply chain to 1. Create an OIDC Role: Define a role called `cmoAccess` in your OIDC provider (e.g., Keycloak). 2. Define a Hierarchy Scope. Create a hierarchy scope in Rhize called `CMO`. This scope is applied to objects or nodes in the graph that relate to the CMO. -3. Add a Rule to the Scope Map: Define a rule in the `scopemap.json` file as follows: +3. Add a Rule to the Scope Map. 
Define a rule in the `scopemap.scopemap.json` file as follows: ```json { diff --git a/content/deploy/install/services.md b/content/deploy/install/services.md index 6cc3fad43..ab1729a04 100644 --- a/content/deploy/install/services.md +++ b/content/deploy/install/services.md @@ -8,9 +8,6 @@ categories: "how-to" The final installation step is to install the Rhize services in your Kubernetes cluster. -> [!NOTE] -> For the recommended compute per pod for each service, refer to [Cluster sizing]({{< relref "../cluster-sizing" >}}) - ## Prerequisites This topic assumes you have done the following: @@ -37,6 +34,7 @@ Common values that are changed include: Client secrets are necessary for Rhize services to authenticate with Keycloak. These secrets are stored with Kubernetes secrets. 1. Go to Keycloak and get the secrets for each client you've created. + 1. Create Kubernetes secrets for each service. You can either create a secret file, or pass raw data from the command line. {{< callout type="caution" >}} @@ -47,20 +45,30 @@ Client secrets are necessary for Rhize services to authenticate with Keycloak. T With raw data, the command might look something like this: ```bash - kubectl create secret generic {{< param application_name >}}-client-secrets -n {{< param application_name >}} \ - --from-literal=dashboard=VKIZ6zkQYyPutDzWqIZ9uIEnQRviyqsS \ - --from-literal={{< param application_name >}}Audit=q8JBjuEefWTmhv9IX4KKYxNtXXnYtDPD \ - --from-literal={{< param application_name >}}Baas=KYbMHlRLhXwiDNFuDCl3qtPj1cNdeMSl \ - --from-literal={{< param application_name >}}Bpmn=7OrjB7FhOdsNeb819xzEDBbMyVb6kNdr \ - --from-literal={{< param application_name >}}Core=SH28Wlx2uEXcgf1NffStbmSuruxvrpi6 \ - --from-literal={{< param application_name >}}UI=0qQ7c1EtOKvwsAcpd0xYIvle4zsMcGRq \ - --from-literal={{< param application_name >}}Router=0qQ7c1EtOKvwsAcpd0xYIvle4zsMcGRq + kubectl create secret generic {{< param application_name >}}-client-secrets \ + -n {{< param application_name >}} \ + --from-literal=dashboard=G4hoxIL37F5S9DQgeDYGQejcJ6oJhOPA \ + --from-literal={{< param application_name >}}Workflow=GTy1x64U0IHAUTWizugEAnN47a9kWgX8 \ + --from-literal={{< param application_name >}}ISA95=Yvtx1tZWCPFayvDCzHTTInEz9gnuLyLc \ + --from-literal={{< param application_name >}}Baas=KYbMHlRLhXwiDNFuDCl3qtPj1cNdeMSl \ + --from-literal={{< param application_name >}}UI=54yUQqmvgcxoKPaIbPZTQGlEs8Xu2qH0 ``` 1. Create secrets for login passwords. Each service with its own user in Keycloak can have its password supplied through Kubernetes secrets. As you install services through Helm, their respective YAML files reference these secrets. +## Add the Rhize Helm Chart Repository + +You must add the helm chart repository for Rhize. + +1. Add the Helm Chart Repository + + ```bash + helm repo add {{< param application_name >}} https://gitlab.com/api/v4/projects/42214456/packages/helm/stable + helm repo update + ``` + ## Install and add roles for the DB {#db} You must install the {{< param db >}} database service first. @@ -87,23 +95,21 @@ If you need Row Level Access Control, [configure your scope map]({{< relref "row All statuses should be `RUNNING`. - 1. Return to the Keycloak UI and add all `{{< param application_name >}}` roles to the admin group. 1. Proxy the `http:8080` port on `{{< param application_name >}}-baas-dgraph-alpha`. - ``` + ```bash kubectl port-forward -n {{< param application_name >}} pod/baas-baas-alpha-0 8080:8080 ``` 1. Get a token using the credentials. 
With `curl`, it looks like this: ```bash - curl --location --request POST 'https://- - auth.{{< param application_name >}}/realms/{{< param application_name >}}/protocol/openid-connect/token' \ + curl --location --request POST '/realms/{{< param application_name >}}/protocol/openid-connect/token' \ --header 'Content-Type: application/x-www-form-urlencoded' \ --data-urlencode 'grant_type=password' \ - --data-urlencode 'username=system@{{< param application_name >}}.com' \ + --data-urlencode 'username=' \ --data-urlencode 'password=' \ --data-urlencode 'client_id={{< param application_name >}}Baas' \ --data-urlencode 'client_secret=' @@ -112,7 +118,7 @@ If you need Row Level Access Control, [configure your scope map]({{< relref "row 1. Post the schema: ```bash - curl --location --request POST 'http://localhost:/admin/schema' \ + curl --location --request POST '/admin/schema' \ --header 'Authorization: Bearer ' \ --header 'Content-Type: application/octet-stream' \ --data-binary '@' @@ -120,7 +126,7 @@ If you need Row Level Access Control, [configure your scope map]({{< relref "row This creates more roles. -1. Go to Keycloak UI and add all new {{< param db >}} roles to the `ADMIN` group. +1. Go to Keycloak UI and add all new {{< param db >}} roles to the `libreAdminGroup`. If the install is successful, the Keycloak UI is available on its [default port]({{< relref "../../reference/default-ports" >}}). @@ -148,193 +154,250 @@ helm install \ For the full configuration options, read the official [Helm `install` reference](https://helm.sh/docs/helm/helm_install/). - -### NATS {#nats} - - +### Redpanda -[NATS](https://nats.io) is the message broker that powers Rhize's event-driven architecture. +Rhize uses Redpanda to buffer requests to Restate and connect to Agent. -Install NATS with these steps: +Install Redpanda with these steps: -1. If it doesn't exist, add the NATS repository: +1. If the Redpanda repository doesn't exist, add it: ```bash - helm repo add nats https://nats-io.github.io/k8s/helm/charts/ + helm repo add redpanda https://charts.redpanda.com + helm repo update ``` -1. Modify the NATS Helm file with your code editor. Edit any necessary overrides. +1. Modify the Redpanda Helm overrides as needed. + 1. Install with Helm: - ``` - helm install nats -f nats.yaml nats/nats -n {{< param application_name >}} + ```bash + helm install redpanda -f redpanda.yaml redpanda/redpanda -n {{< param application_name >}} ``` +### Alloy -### Tempo +Install Alloy with these steps: -Rhize uses [Tempo](https://grafana.com/oss/tempo/) to trace BPMN processes. +1. If the Grafana repository doesn't exist, add it: -Install Tempo with these steps: + ```bash + helm repo add grafana https://grafana.github.io/helm-charts + helm repo update + ``` + +1. Modify the Alloy Helm overrides as needed. -1. If it doesn't exist, add the Tempo repository: +1. Install with Helm: ```bash - helm repo add grafana https://grafana.github.io/helm-charts + helm install alloy -f alloy.yaml grafana/alloy -n {{< param application_name >}} ``` -1. Modify the Helm file as needed. +### Grafana LGTM + +Grafana LGTM includes Tempo and Grafana. Rhize uses [Tempo](https://grafana.com/oss/tempo/) to trace BPMN processes. + +Install Grafana LGTM with these steps: + +1. Modify the Grafana LGTM Helm overrides as needed. + 1. 
Install with Helm: ```bash - helm install tempo -f tempo.yaml grafana/tempo -n {{< param application_name >}} + helm install lgtm-distributed -f lgtm-distributed.yaml grafana/lgtm-distributed -n {{< param application_name >}} ``` -### Core +If the install is successful, the Grafana service is available on its +[default port]({{< relref "../../reference/default-ports" >}}). + +### Restate -The {{< param brand_name >}} Core service is the custom edge agent that monitors data sources, like OPC-UA servers, and publishes and subscribes topics to NATS. +Rhize uses Restate as a platform for orchestrating other services. -> **Requirements**: Core requires the [{{< param db >}}](#db) and [NATS](#nats) services. +Install Restate with these steps: -Install the Core agent with these steps: +1. Modify the Restate Helm overrides as needed. -1. In the `core.yaml` Helm file, edit the `clientSecret` and `password` with settings from the Keycloak client. -1. Override any other values, as needed. 1. Install with Helm: ```bash - helm install core -f core.yaml {{< param application_name >}}/core -n {{< param application_name >}} + helm install restate -f restate.yaml oci://ghcr.io/restatedev/restate-helm -n {{< param application_name >}} ``` -### BPMN +So that you can register certain services with Restate, proxy the Restate port: + + ```bash + kubectl port-forward -n {{< param application_name >}} pod/restate-0 9070:9070 + ``` -The BPMN service is the custom engine Rhize uses to process low-code workflows modeled in the BPMN UI. +### Workflow -> **Requirements**: The BPMN service requires the [{{< param db >}}](#db), [NATS](#nats), and [Tempo](#tempo) services. +The Workflow service is the custom engine Rhize uses to process low-code workflows modeled in the Workflow UI. -Install the BPMN engine with these steps: +> **Requirements**: The Workflow service requires the [{{< param db >}}](#db), [Restate](#restate), and [Tempo](#tempo) services. + +Install Workflow with these steps: + +1. Modify the Workflow Helm overrides as needed. -1. Open `bpmn.yaml` Update the `clientSecret` and `password` for your BPMN Keycloak credentials. -1. Modify any other values, as needed. 1. Install with Helm: ```bash - helm install bpmn -f bpmn.yaml {{< param application_name >}}/bpmn -n {{< param application_name >}} + helm install workflow -f workflow.yaml {{< param application_name >}}/workflow -n {{< param application_name >}} ``` -### Router +1. When the Workflow service starts, it should register with Restate. Verify this with: + + ```bash + curl localhost:9070/deployments | jq '.deployments[].uri' + ``` -Rhize uses the [Apollo router](https://www.apollographql.com/docs/router) to unite queries for different services in a single endpoint. + This will show the URL of each registered service. If Workflow's URL is not present, register it with: -> **Requirements:** Router requires the [GraphDB](#db), [BPMN](#bpmn), and [Core](#core) services. + ```bash + curl --location 'http://localhost:9070/deployments' \ + --header 'Content-Type: application/json' \ + --data '{"uri":"http://workflow.{{< param application_name >}}.svc.cluster.local:29080", "force":true}' + ``` -Install the router with these steps: +### Typescript Host Service + +Install Typescript Host Service with these steps: + +1. Modify the Typescript Host Service Helm overrides as needed. -1. Modify the router Helm YAML file as needed. 1. 
Install with Helm: + ```bash + helm install typescript-host-service -f typescript-host-service.yaml {{< param application_name >}}/typescript-host-service -n {{< param application_name >}} + ``` + +1. When the Typescript Host Service starts, it should register with Restate. Verify this with: + ```bash - helm install router -f router.yaml {{< param application_name >}}/router -n {{< param application_name >}} + curl localhost:9070/deployments | jq '.deployments[].uri' ``` -If the install is successful, the Router explorer is available on its -[default port]({{< relref "../../reference/default-ports" >}}). + This will show the URL of each registered service. If Typescript Host Service's URL is not present, register it with: + + ```bash + curl --location 'http://localhost:9070/deployments' \ + --header 'Content-Type: application/json' \ + --data '{"uri":"http://typescript-host-service.{{< param application_name >}}.svc.cluster.local:9081", "force":true}' + ``` -### Grafana +### QuestDB -Rhize uses [Grafana](https://grafana.com) for its dashboard to monitor real time data. +QuestDB is used by Rhize to store timeseries data, however it can be substitude for another historian. -Install Grafana with these steps: +Install QuestDB with these steps: -1. Modify the Grafana Helm YAML file as needed. +1. If it doesn't exist, add the QuestDB repository: -1. Add the Helm repository ```bash - helm repo add grafana https://grafana.github.io/helm-charts + helm repo add questdb https://helm.questdb.io/ + helm repo update ``` +1. Modify the QuestDB Helm overrides as needed. + 1. Install with Helm: ```bash - helm install grafana -f grafana.yaml grafana/grafana -n {{< param application_name >}} + helm install questdb -f questdb.yaml questdb/questdb -n {{< param application_name >}} ``` -If the install is successful, the Grafana service is available on its -[default port]({{< relref "../../reference/default-ports" >}}). +### ISA-95 -### Agent +Install ISA-95 with these steps: -The Rhize agent bridges your plant processes with the Rhize data hub. -It collects data emitted from the plant and publishes it to the NATS message broker. +1. Modify the ISA-95 Helm overrides as needed. -> **Requirements:** Agent requires the [Graph DB](#db), [Nats](#nats), and [Tempo](#tempo) services. +1. Install with Helm: -Install the agent with these steps: + ```bash + helm install isa95 -f isa95.yaml {{< param application_name >}}/isa95 -n {{< param application_name >}} + ``` -1. Modify the Agent Helm file as needed. +1. When the ISA-95 service starts, it should register with Restate. Verify this with: -1. In the Rhize UI, add a Data Source for Agent to interact with: - - In the lefthand menu, open **Master Data > Data Sources > + Create Data Source**. - - Input a name for the Data Source. - - Add a Connection String and Create. - - Add any relevant Topics. - - Activate the Data Source. + ```bash + curl localhost:9070/deployments | jq '.deployments[].uri' + ``` -1. Install with Helm: + This will show the URL of each registered service. 
If ISA-95's URL is not present, register it with: ```bash - helm install agent -f agent.yaml {{< param application_name >}}/agent -n {{< param application_name >}} + curl --location 'http://localhost:9070/deployments' \ + --header 'Content-Type: application/json' \ + --data '{"uri":"http://isa95.{{< param application_name >}}.svc.cluster.local:29080", "force":true}' ``` ## Install Admin UI -The Admin UI is the graphical frontend to [handle events]({{< relref "../../how-to/bpmn" >}}) and [define work masters]({{< relref "../../how-to/model" >}}). +The Rhize agent bridges your plant processes with the Rhize data hub. + +The Admin UI is the graphical frontend to [handle events]({{< relref "/how-to/bpmn" >}}) and [define work masters]({{< relref "/how-to/model" >}}). -> **Requirements:** The UI requires the [GraphDB](#db), [BPMN](#bpmn), [Core](#core), and [Router](#router) services. +> **Requirements:** The Admin UI requires the [Workflow](#workflow) services. After installing all other services, install the UI with these steps: -1. Forward the port from the Router API. In the example, this forwards Router traffic to port `4000` on `localhost`. +1. Modify the UI Helm overrides as needed. + +1. Install with Helm: ```bash - kubectl port-forward svc/router 4000:4000 -n {{< param application_name >}} + helm install admin-ui -f admin-ui.yaml {{< param application_name >}}/admin-ui -n {{< param application_name >}} ``` -1. Open the Admin UI Helm file. Update the `envVars` object to reflect the URL for Router and Keycloak. If following the prior examples for port-forwarding, it will look something like this: +If the install is successful, the UI is available on its +[default port]({{< relref "../../reference/default-ports" >}}). - ```yaml - envVars: - APP_APOLLO_CLIENT: "http://localhost:4000" - APP_APOLLO_CLIENT_ADMIN: "http://localhost:4000" - APP_AUTH_KEYCLOAK_SERVER_URL: "http://localhost:8080" - ``` +### Agent + +The Rhize agent bridges your plant processes with the Rhize data hub. +It collects data emitted from the plant and publishes it to the message broker. + +> **Requirements:** Agent requires the [Graph DB](#db), [Tempo](#tempo), Redpanda, and an event broker service to communicate with. + +Install Agent with these steps: + +1. Modify the Agent Helm overrides as needed. + +1. In the Rhize UI, add a Data Source for Agent to interact with: + - In the lefthand menu, open **Master Data > Data Sources > + Create Data Source**. + - Input a name for the Data Source. + - Add a Connection String and Create. + - Add any relevant Topics. + - Activate the Data Source. -1. Modify any other values, as needed. 1. Install with Helm: ```bash - helm install admin-ui -f admin-ui.yaml {{< param application_name >}}/admin-ui -n {{< param application_name >}} + helm install agent -f agent.yaml {{< param application_name >}}/agent -n {{< param application_name >}} ``` -If the install is successful, Admin UI is available on its -[default port]({{< relref "../../reference/default-ports" >}}). +To verify that Agent is working, check the Redpanda UI. -## Optional: Audit Trail service +## Optional Services +### Audit Trail -The Rhize [Audit]({{< relref "../../how-to/audit" >}}) service provides an audit trail for database changes to install. The Audit service uses PostgreSQL for storage. +The Rhize [Audit]({{< relref "/how-to/audit" >}}) service provides an audit trail for database changes. The Audit service uses PostgreSQL for storage. -Install Audit Service with these steps: +Install Audit with these steps: 1. 
Modify the Audit trail Helm YAML file. It is *recommended* to change the PostgreSQL username and password values. -2. Install with Helm: +1. Install with Helm: ```bash helm install audit -f audit.yaml {{< param application_name >}}/audit -n {{< param application_name >}} ``` -3. Create partition tables in the PostgreSQL database: +1. Create partition tables in the PostgreSQL database: ```sql create table public.audit_log_partition( like public.audit_log ); @@ -343,7 +406,7 @@ Install Audit Service with these steps: For details about maintaining the Audit trail, read [Archive the PostgresQL Audit trail]({{< relref "../maintain/audit/" >}}). -### Enable change data capture +#### Enable change data capture The Audit trail requires [change data capture (CDC)]({{< relref "../../how-to/publish-subscribe/track-changes" >}}) to function. To enable CDC in {{< param application_name >}} BAAS, include the following values for the Helm chart overrides: @@ -359,123 +422,61 @@ alpha: replicas: 1 ``` -### Enable Audit subgraph +### KPI -To use the Audit trail in the UI, you must add the Audit trail subgraph into the router. To enable router to use and compose the subgraph: +The Rhize KPI service is a GraphQL service which calcualtes ISO22400 KPIs using timseries tables. -1. Update the Router Helm chart overrides, `router.yaml`, to include: +Install KPI with these steps: - ```yaml - # Add Audit to the router subgraph url override - router: - configuration: - override_subgraph_url: - AUDIT: http://audit.{{< param application_name >}}.svc.cluster.local:8084/query +1. Modify the KPI Helm overrides as needed. - # If supergraph compose is enabled - supergraphCompose: - supergraphConfig: - subgraphs: - AUDIT: - routing_url: http://audit.{{< param application_name >}}.svc.cluster.local:8084/query - schema: - subgraph_url: http://audit.{{< param application_name >}}.svc.cluster.local:8084/query - ``` +1. Install with Helm: -2. Update the Router deployment + ```bash + helm install kpi -f kpi.yaml {{< param application_name >}}/kpi -n {{< param application_name >}} + ``` -```shell -$ helm upgrade --install router -f router.yaml {{< param application_name >}}/router -n {{< param application_name >}} -``` +### Solace -## Optional: calendar service +Solace is an event broker that can be used alongside Agent, though it can be substituted for any other event broker. -The [{{< param brand_name >}} calendar service]({{< relref "../../how-to/work-calendars">}}) monitors work calendar definitions and creates work calendar entries in real time, both in the [Graph](#db) and time-series databases. +1. Add the Solace Charts Helm repo. -> **Requirements:** The calendar service requires the [GraphDB](#db), [Keycloak](#keycloak), and [NATS](#nats) services. + ```bash + helm repo add solacecharts https://solaceproducts.github.io/pubsubplus-kubernetes-helm-quickstart/helm-charts + helm repo update + ``` -{{% callout type="info" %}} -The work calendar requires a time-series DB installed such as [InfluxDB](https://influxdata.com/), [QuestDB](https://questdb.io) or [TimescaleDB](https://www.timescale.com/). The following instructions are specific to QuestDB. -{{% /callout %}} +1. Modify the Solace Helm overrides as needed. -Install the calendar service with these steps: +1. Install with Helm: -1. Create tables in the time series. 
For example: + ```bash + helm install solace -f solace.yaml solacecharts/pubsubplus -n {{< param application_name >}} + ``` +> [!NOTE] +> Solace can be installed in high availability by using `pubsubplus-ha` instead of `pubsubplus`. +> See detailed instructions on [github](https://github.com/SolaceProducts/pubsubplus-kubernetes-helm-quickstart). - ```sql - CREATE TABLE IF NOT EXISTS PSDT_POT( - EquipmentId SYMBOL, - EquipmentVersion STRING, - WorkCalendarId STRING, - WorkCalendarIid STRING, - WorkCalendarDefinitionId STRING, - WorkCalendarDefinitionEntryId STRING, - WorkCalendarDefinitionEntryIid STRING, - WorkCalendarEntryId STRING, - WorkCalendarEntryIid SYMBOL, - HierarchyScopeId STRING, - EntryType STRING, - ISO22400CalendarState STRING, - isDeleted boolean, - updatedAt TIMESTAMP, - time TIMESTAMP, - lockerCount INT, - lockers STRING - ) TIMESTAMP(time) PARTITION BY month - DEDUP UPSERT KEYS(time, EquipmentId, WorkCalendarEntryIid); - - CREATE TABLE IF NOT EXISTS PDOT_PBT( - EquipmentId SYMBOL, - EquipmentVersion STRING, - WorkCalendarId STRING, - WorkCalendarIid STRING, - WorkCalendarDefinitionId STRING, - WorkCalendarDefinitionEntryId STRING, - WorkCalendarDefinitionEntryIid STRING, - WorkCalendarEntryId STRING, - WorkCalendarEntryIid SYMBOL, - HierarchyScopeId STRING, - EntryType STRING, - ISO22400CalendarState STRING, - isDeleted boolean, - updatedAt TIMESTAMP, - time TIMESTAMP, - lockerCount INT, - lockers STRING - ) TIMESTAMP(time) PARTITION BY month - DEDUP UPSERT KEYS(time, EquipmentId, WorkCalendarEntryIid); - - CREATE TABLE IF NOT EXISTS Calendar_AdHoc( - EquipmentId SYMBOL, - EquipmentVersion STRING, - WorkCalendarId STRING, - WorkCalendarIid STRING, - WorkCalendarDefinitionId STRING, - WorkCalendarDefinitionEntryId STRING, - WorkCalendarDefinitionEntryIid STRING, - WorkCalendarEntryId STRING, - WorkCalendarEntryIid SYMBOL, - HierarchyScopeId STRING, - EntryType STRING, - ISO22400CalendarState STRING, - isDeleted boolean, - updatedAt TIMESTAMP, - time TIMESTAMP, - lockerCount INT, - lockers STRING - ) TIMESTAMP(time) PARTITION BY month - DEDUP UPSERT KEYS(time, EquipmentId, WorkCalendarEntryIid); - ``` +### Apollo Router + +While Rhize provides a built in GraphQL Playground using Apollo's Sandobx, [Apollo Router](https://www.apollographql.com/docs/router) can be installed to unite queries for different services in a single endpoint outside of Rhize's interface. -1. Modify the calendar YAML file as needed. +> **Requirements:** Router requires the [GraphDB](#db) service. -1. Deploy with helm +Install Router with these steps: + +1. Modify the Router Helm overrides as needed. + +1. Install with Helm: ```bash - helm install calendar-service -f calendar-service.yaml {{< param application_name >}}/calendar-service -n {{< param application_name >}} + helm install router -f router.yaml {{< param application_name >}}/router -n {{< param application_name >}} ``` +If the install is successful, the Router explorer is available on its [default port]({{< relref "../../reference/default-ports" >}}). + ## Optional: change service configuration The services installed in the previous step have many parameters that you can configure for your performance and deployment requirements. diff --git a/content/deploy/install/setup-kubernetes.md b/content/deploy/install/setup-kubernetes.md index 6e4048e83..248335eba 100644 --- a/content/deploy/install/setup-kubernetes.md +++ b/content/deploy/install/setup-kubernetes.md @@ -100,15 +100,13 @@ Then, follow these steps. 1. 
Update overrides to `keycloak.yaml`. Then install with this command:

    ```bash
-   helm install keycloak -f ./keycloak.yaml bitnami/keycloak -n libre
+   helm install keycloak -f ./keycloak.yaml bitnami/keycloak -n {{< param application_name >}}
    ```

-> Note: Version may have to be specified by appending on `--version` and the desired chart version.
-
-1. Set up port forwarding from Keycloak. For example, this forwards traffic to port `8080` on `localhost`.
+1. Set up port forwarding from Keycloak. For example, this forwards traffic to port `5101` on `localhost`:

    ```bash
-   kubectl port-forward svc/keycloak 8080:80 -n libre
+   kubectl port-forward svc/keycloak 5101:80 -n {{< param application_name >}}
    ```

 ## Next steps
diff --git a/content/how-to/seeq.md b/content/how-to/seeq.md
new file mode 100644
index 000000000..20f18ba09
--- /dev/null
+++ b/content/how-to/seeq.md
@@ -0,0 +1,83 @@
---
title: 'Connecting to Seeq'
date: '2025-08-24T21:46:50-07:00'
categories: ["how-to"]
description: How to use Seeq with Rhize
weight: 600
icon: search
---

The Rhize Connector enables Seeq to access data from Rhize.

> [!NOTE]
> Requires Rhize v4.1.0+

## Download
Zip files containing the Rhize Connector are distributed at a dedicated [repository](https://github.com/libremfg/rhize-seeq-connector/releases).

## Rhize Compatibility
Each release of the Rhize Connector is designed to align with the corresponding version of the Rhize platform for full compatibility.

## Prerequisites
You must gather some information and set up a Keycloak client to configure a connection to your Rhize instance.

### Keycloak
The Rhize Connector requires a client configured for it in order to communicate with other Rhize services.

1. In the side menu, select **Clients > create client.**
2. Configure the **General Settings:**
   - **Client Type:** OpenID Connect
   - **Client ID:** seeq
   **Name** and **Description** can be anything.
3. Configure the **Capability config:**
   - **Client Authentication:** On
   - **Authorization:** On
   - For **Authentication flow,** enable:
     - Direct access grants
     - Implicit flow

   > Ensure that **Standard flow** in **Authentication flow** is disabled.
4. Select **Next**, then **Save**.
5. Select the **Service accounts roles** tab, then **Assign role:**
   - Change the filter to **Filter by clients.**
   - Assign roles as relevant in your **scopemap**.
   - Alternatively, assign **libreBaas** query roles:
     - `resources:query`
     - `work-schedule:query`
     - `work-performance:query`
     - `operations-schedule:query`
     - `operations-performance:query`

   Roles can be filtered to show only **libreBaas** roles by using the search.
6. Select the **Credentials** tab and copy the **Client secret**.

> The Client ID and Client Secret are both necessary for authenticating the Rhize Connector.

### API URL
The API URL defines how to connect to Rhize's database. Commonly, this is a domain with the `/graphql` path.

## Configuration
This is an example configuration.

```json
{
    "ApiUrl" : "http://localhost:8080/graphql",
    "AuthUrl" : "https://localhost:8090/",
    "ClientId" : "seeq",
    "ClientSecret" : "Dh8tdWmsBi9MB830Zmarj89yrC95mVSX",
    "Realm" : "libre"
}
```

### Standard Rhize Additional Configuration
| Property Name | Default Value | Data Type | Description |
|:---|:---|:---|:---|
| ApiUrl | null | String | The URL to Rhize's backend. |
| AuthUrl | null | String | The URL to Keycloak. |
| ClientId | null | String | The ID for the configured Seeq client. |
| +| ClientSecret | null | String | The secret for the configured Seeq client. | +| Realm | null | String | The realm for Rhize's Keycloak configuration. | + +## Known Issues +There are no known issues for the Rhize Connector. Please report any issues you find to our [support portal](https://libremfg.atlassian.net/servicedesk/customer/portal/1) or to our support email: support@libremfg.atlassian.net. + diff --git a/content/releases/4-2-0.md b/content/releases/4-2-0.md new file mode 100644 index 000000000..d32725b50 --- /dev/null +++ b/content/releases/4-2-0.md @@ -0,0 +1,506 @@ +--- +title: 4.2.0 +date: '2025-12-04T13:18:53-03:00' +description: Release notes for v4.2.0 of the Rhize application +categories: ["releases"] +weight: 1653202285 ## auto-generated, don't change +--- + +Release notes for version 4.2.0 of the Rhize application. + +_Release date:_ +4 Dec 2025 + +## Changes by service + +### Admin + +#### Add + + - Add action payload expression to start message event + - Add AG Grid Transaction Support for WorkMaster UI + - Add amplitude auto-capture for UI usage research in docker + - Add asset mapping tab to equipment + - Add BPMN task for SQL query + - Add bulk version management to Work Master grid + - Add class/instance filtering to selection to work master specifications + - Add copy functionality to work master grid + - Add copy/paste functionality in material specifications grid + - Add datasource topic import from excel + - Add default Mapbox API key + - Add default work type of Production to newly created work masters + - Add dynamic vars to CI/CD pipeline + - Add equipment property mutation and refactor properties component + - Add event tracking to amplitude + - Add import / export of Work Masters to support project lifecycles enabled with environmental variable + - Add inactive filter to data access in specification editors + - Add inline property type editing for equipment class properties + - Add hierarchical view for work masters to improve visual representation + - Add missing description field for Physical Assets + - Add nested properties to Equipment properties page + - Add option to create a work master instance from a pattern + - Add pages for defining Workflow Connection types and Workflow Node Types + - Add persistence to work master editor react flow node locations + - Add restate admin page for subscriptions + - Add sensible defaults for new specifications + - Add storage location to material specifications + - Add support for adding nested properties to Equipment properties + - Add task template for google BigQuery + - Add test specifications to specification + - Add timerange to audit tag query + - Add tooltip to options in select where tooltip data is available + - Add Work Master Editor user interface + - Add work master versioning support + +#### Change + + - Change @bpmn-io/element-template-chooser to ^2.0.0 from ^0.1.0 + - Change a work master's related Operations Segment to be mutually exclusive with Process Segment and Operations Definition + - Change authentication flow to remove client secret from Libre-UI + - Change BPMN view instance list default to last 1 day + - Change builds to use yarn cache for faster pipeline builds + - Change color scheme of work master library to align with site + - Change css for better layout and responsiveness + - Change default work master grid to active work masters + - Change download BPMN to allow for foreign characters + - Change Equipment Asset Mapping to use label as primary display and show ID as subtitle + - 
Change equipment management page to align with existing resource pages + - Change equipment properties table to use AG-Grid + - Change equipment property binding modal to limit search to the first 100 data source topics + - Change equipment property binding to use the propertyPath instead of propertyLabel + - Change equipment tree component to improve loading of large equipment trees in a reasonable time + - Change equipment tree query to default sort ascending + - + - Change grid component from a Rhize Grid to AG Grid for Equipment, Material Specifications & Material Specification Properties + - Change grid component from a Rhize Grid to AG Grid for Equipment Specification & Equipment Specification Properties + - Change grid component from a Rhize Grid to AG Grid for Personnel Specification & Personnel Specification Properties + - Change grid component from a Rhize Grid to AG Grid for Physical Asset Specification & Physical Asset Specification Properties + - Change hierarchy scope to optional field in workflow specifications + - Change location of the save/reset buttons to the top of specification editors + - Change name to mandatory when creating a new Person by disabling create button when no name provided + - Change node selection dropdowns specification editors + - Change page page title based on the screen + - Change parameter specification to use icons instead of text + - Change Physical Asset to allow update when optional fields are left blank + - Change preact to version 10.19.3 + - Change property type custom cell type and use in other class and operations event definition pages + - Change property type to mandatory in the user interface + - Change project to typescript to 5.8.3 + - Change schema to align with latest ISA-95 Schema + - Change sort order of process segments to show the newly created one at the top of the list instead of bottom + - Change specification property components to use apollo client directly + - Change specification table to synchronize with unified editor tab access + - Change specification tables to show edit control on left + - Change state management to persist selected workflow nodes across sessions + - Change titles from resource name to resource name Specification + - Change to the confirm type modal for cancel and delete material specifications actions + - Change Work Master Editor to reset tabs on initial navigation to page + - Change Work Master Editor to show select popup if selected node or workflow has no link + - Change Work Master Editor page by refactoring tab management logic into dedicated component for more maintainable interface + - Change Work Master grid to allow orphaned Work Master records to be edited + - Change Work Master Editor to support Workflows and Specifications + +#### Fix + - Fix adding new equipment class properties not adding parent property relationship + - Fix asset mapping selection defaulting to the top item instead of the selected equipment object + - Fix assets disappear from equipment asset mapping table when changing version state + - Fix async handling in Workflow Specification updates + - Fix attempts to add equipment parent even if none required + - Fix automatic navigation on newly created version instead of having to manually select it + - Fix BPMN Instance list showing the start-time minute as the month + - Fix BPMN SaveNewVersion not updating the selected version + - Fix display of no data available in Restate Admin page of Services + - Fix equipment general tab drop downs cache values and not updating on 
page load with most recent + - Fix caching of existing fields when creating new equipment for the second time + - Fix caching of Workflow Connection Types requiring refresh to see newly created in the list + - Fix caching of Workflow Connection Types requiring refresh to see newly created in the list + - Fix changing equipment state hiding child equipment with equipment tree refactor + - Fix clipping of datasource names in equipment property binding modal + - Fix clone of work master without a workflow specification + - Fix deployment bugs + - Fix dirty check on Spatial Definition & Operational Location if undefined + - Fix disable operational location property disable + - Fix disabled equipment showing when searching using the search bar when inactive toggle is off +- Fix double pagination of dataSourceTopics + - Fix duplicate storage or operational locations in specification selection + - Fix editing operations event definition version + - Fix equipment asset mapping being added on deprecated Equipment Versions + - Fix equipment asset mapping asset dropdown pre-selection option not saving + - Fix equipment class caching when adding a new equipment class and immediately want to select it in equipment + - Fix equipment class property change throwing 422 error + - Fix equipment class rule race condition when adding an equipment class rule immediately after creating equipment class + - Fix equipment hierarchy scrolling down halfway through equipment when only a small equipment set present + - Fix equipment hierarchy unknown scrolling depth with large equipment hierarchies + - Fix equipment sidebar search is case sensitive when searching + - Fix equipment tree sorting not persisting upon reload + - Fix for Physical Asset page issue with using 'Enter' to submit variable + - Fix icon clipping in select component by padding + - Fix inability to close expanded view in restate subscription admin page + - Fix inability to set infinite date on equipment asset mapping + - Fix infinite scroll loading/spinner issue in Material Definition sidebar + - Fix inherited equipment class properties not nesting + - Fix issue in Work Master UI where editors were not always showing existing object values + - Fix issue where a workflow specification connection is not refreshed + - Fix issue where Add Workflow button causes loss of node selection in WorkMaster UI + - Fix long data source topic names truncating in equipment property binding modal + - Fix Material Class Property Select all button causing enable/disable function to fail + - Fix mismatch between Equipment Asset Mapping fields when editing to what was persisted + - Fix missing meta data in operational location property meta data detail panel + - Fix missing newly added unit of measures in selection for material class properties after being added + - Fix multiple select placeholder text in Unit Of Measure page showing wrong text string + - Fix Operations Event Definitions failing to load due to introduction of Work Master Versions + - Fix pagination across all table cell components using LibreTable component + - Fix person version optional fields not maintained after version creation + - Fix Physical Asset Fixed assed it being limited to numbers only + - Fix popup elements rendering too narrow of an aspect ratio + - Fix Process Segment page inverted 'See inactive' toggle + - Fix route parameter for work master to iid in route slugs and navigation + - Fix save not persisting on equipment class version + - Fix SaveAsNewVersion failing for secret variables + 
- Fix sidebar add equipment
 - Fix situation where a user could link a work master to an existing node and append to the list of work masters for a node instead of replacing it
 - Fix specification table's version stub causing navbar failure on other pages
 - Fix three dot menu obfuscating version number and status, disabling the save and change version state modal
 - Fix tree expansion in Equipment hierarchy page
 - Fix unlinking of a datasource topic from a bound equipment property
 - Fix view instance restate OOM by adding limit to BPMN view instance query
 - Fix work master specification table performance issue related to effect loop use
 - Fix work master definition values not persisting when edited and saved
 - Fix workflow auto-save updating a draft version in the backend

#### Remove
 - Remove ability to edit a physical asset version when the version failed to create in the first place
 - Remove reset button from resource specification tree
 - Remove unused queries
 - Remove unused specification search boxes

### Agent

#### Add

 - Add OPC UA adapter configuration parameters
 - Add check for non-empty password before setting password presence flag
 - Add check for non-empty username before setting username presence flag
 - Add govulncheck
 - Add kafka egress topic patterns to route data source topics to specific kafka topics
 - Add restate handlers
 - Add support for custom certificate authorities with Keycloak
 - Add MQTT QoS as a configuration parameter

#### Change

 - Change build binary to rhize-agent from main
 - Change deprecated golang.org/x/exp/rand library to math/rand/v2
 - Change go version to 1.23.9 from 1.23.4
 - Change golang libraries to align with release
 - Change library github.com/eclipse/paho.golang to v0.22.0 from v0.12.0
 - Change library github.com/gopcua/opcua to v0.8.0 from v0.5.3
 - Change library gitlab.com/libremfg/rhize-go/v4 to v4.0.1-rc.3
 - Change NATS to optional instead of hard requirement
 - Change Rhize service versions to align with release

#### Fix

 - Fix golangci-lint errors
 - Fix kafka message egress memory leak
 - Fix parsing OPC-UA topics into NATS subjects
 - Fix vulnerabilities
 - Fix race conditions in CI/CD MQTT Tests

### BaaS

#### Add

 - Add @local directive for dgraph @Remote type resolution
 - Add `skipReplacement` option to `@custom` directive to allow for custom resolvers not implemented by Runtime
 - Add additional error message context to Transaction Too Big errors
 - Add admin auth to DQL & debug paths
 - Add admin resolver for query:lookup, mutation:rollup, mutation:recoverSplitList & mutation:indexRebuild
 - Add authentication token propagation for websocket subscriptions
 - Add BAAS console to facilitate easier administration of BAAS
 - Add custom timeout support to the GraphQL Superflag used for restate calls (default: 1m)
 - Add dgraphtest package
 - Add getPostingAndLengthNoSort for performance improvements when no-sort is required
 - Add graphql subflag flag federation [apollo, restate] to swap federation types (default: restate)
 - Add GraphiQL playground to console
 - Add kafka producer maximum message size
 - Add HTTP change-data-capture sink
 - Add ISO8601DateTime data type support
 - Add logging to badger ErrTooBig
 - Add option for custom CA certs to be used when connecting to Keycloak
 - Add resource cleanup
 - Add support for defining single entities in rules
 - Add support for function macros within authorization rules to
+ - Add support for Regexp Comparison [no Indexes] in GraphQL Queries and Filters
+ - Add websocket transport to allow for GraphQL Subscriptions
+
+#### Change
+
+ - Change @local field resolution to use a query's alias over the field name where present
+ - Change badger to v4 from v3
+ - Change benchmark files to use hypermode repo (was dgraph)
+ - Change default federation to restate and _Any type to JSONObject
+ - Change default scopemap to align with latest ISA-95 structure
+ - Change Docker images to 28.4.0 in pipeline
+ - Change error message for txn too big to give more context
+ - Change gqlgen and gqlparser to latest versions
+ - Change ioutil to os library equivalents due to library deprecation
+ - Change log level of auth rule evaluation to require higher logging level
+ - Change NATS Sink handler to support new CDC Format
+ - Change postings cache to align with generic declaration in ristretto v2
+ - Change postinglistCountAndLength function to improve performance
+ - Change protobuf for badger and regenerate (CH-29)
+ - Change resource evaluator not expanding and matching wildcards under all scenarios
+ - Change ristretto to v2 from v1
+ - Change scalar _Any to JSONObject
+ - Change span trace library to use OpenTelemetry (was OpenCensus)
+
+#### Fix
+
+ - Fix auth query variable name conflicts with user-defined variable names in mutations
+ - Fix cascade directive field arguments not being coerced to lists
+ - Fix compatibility issues with Rhize OIDC authentication and dgraphtest package
+ - Fix CSRF vulnerability in the Apollo playground fetch/render
+ - Fix deadlock that happens due to timeout during proposal
+ - Fix debug tool for schema keys
+ - Fix debug tool to read WAL entries correctly
+ - Fix deleteBelowTs rollup issue
+ - Fix export for any S3 endpoint
+ - Fix golangci-lint, go-vet & go-vuln issues
+ - Fix inconsistent time units and prevent erroneous cleanup in incrRollupi process
+ - Fix leaking transactions and file descriptors
+ - Fix memory leak in readMIMEHeader by no longer storing call info, context or complexity into an append-only in-memory data structure
+ - Fix performance issue in type filter
+ - Fix raft join failure introduced in raft/v3, RestartNode used instead of StartNode
+ - Fix resolution of _Any scalar type by moving from apolloSchemaExtrase to schemaInputs
+ - Fix RLAC resources not evaluated correctly
+ - Fix search operation by list intersection not subset
+ - Fix snapshot to use updated confstate before sending to prevent stale configuration causing errors
+ - Fix the conflict in accessing split parts during a rollUp
+ - Fix validation panic on type check
+ - Fix WAL replay issue during rollup
+ - Fix wget URLs for large datasets in testing pipeline
+
+#### Remove
+
+ - Remove ACL and legacy login requirement from dgraphtest package
+ - Remove Ludicrous mode from postings
+
+### ISA-95
+
+#### Add
+
+ - Add `/debug/ingress/cache` handler to get and delete cache when running with debug on
+ - Add `overfetch` option to history query that allows querying a certain number of records outside the given time range while still respecting the `limit`
+ - Add Audit handlers to ISA-95
+ - Add bypass for Ingress to go directly to Kafka
+ - Add build information to startup
+ - Add check for mandatory fields when creating a new version of an object
+ - Add comment to syncEquipmentDBtoKV to indicate cache update
+ - Add concurrency to Kafka.ValueDirect Consumer
+ - Add configuration option `RHIZE_ISA95_KAFKA_CONSUMER_COUNT` to run ingress with multiple consumers per topic
+ - Add configuration option `RHIZE_ISA95_LUDICROUS_MODE` to run ingress with a common committer goroutine
+ - Add configuration option `RHIZE_ISA95_VALUECHANGE_TOPIC` to run multiple goroutines per ingress topic
+ - Add context to rule evaluation call
+ - Add default order by when none provided and overfetching
+ - Add default timeouts to restate http2 client to prevent running indefinitely
+ - Add default timeouts to server http handler to prevent running indefinitely
+ - Add Equipment IID to historical records
+ - Add golang pprof port enabled with `RHIZE_ISA95_PPROF_LISTEN`
+ - Add goroutine labels to assist debugging
+ - Add ISO8601DateTime data type scalar to schema definition
+ - Add histogram metric for time spent waiting for ingress to read and commit messages
+ - Add ludicrous mode to ingress that uses channels and a goroutine to commit multiple messages in the background
+ - Add metrics to IngressValueChange
+ - Add migration for Equipment.equipmentAssetMapping
+ - Add missing permission to MutationStatus
+ - Add move/rename mutations
+ - Add nested equipment property inheritance and bindings
+ - Add OIDC Token to restate calls when bypassing restate
+ - Add Operations Parameter, Operations Data, Work Parameter and Work Data to schema
+ - Add option to bypass saving restate state on every BPMN task execution when calling a BPMN
+ - Add otel tracing to ISA-95 microservice
+ - Add resolver to bypass restate for equipment history queries to increase performance
+ - Add restate handler metrics for observability
+ - Add search by equipment level to graph
+ - Add search to equipment asset mapping date times
+ - Add syncEquipment mutation
+ - Add Test Specification Fields to PhysicalAssetSpecification
+ - Add TTL to cloud event ingress Value Bindings and Equipment Version cache (10min)
+ - Add `update` auth rule to schema
+ - Add update bpmn method
+ - Add version to schema generation
+ - Add work master version references across multiple schemas
+ - Add Work Master versioning as per other master data entities
+ - Add workflow synchronization mutation
+ - Add WorkMasterVersion Service Implementation
+
+#### Change
+
+ - Change BAAS container to v4.2.0-rc2
+ - Change cached values to allow nil to prevent constant lookups on value changes without bindings
+ - Change CDC Processing to batch changes for performance enhancement
+ - Change CI/CD pipeline Keycloak to sslRequired=NONE
+ - Change CI/CD to use golangci-lint v2
+ - Change CreateMaterialDefinitionVersionInput to allow creating with base UoM, properties and material classes
+ - Change default token to work with the new Keycloak v26.4 seeded database
+ - Change equipment property history resolver to use ISO8601 datetime (ns support)
+ - Change exportJSON to use JSONObject instead of string
+ - Change getInformationObjectData to use JSONObject instead of string
+ - Change GetInformationObjectDataRequest to return restate terminal error for better restate handling
+ - Change golang to v1.24
+ - Change Grafana LGTM to 0.11.17
+ - Change ingress handler to allow subscribing to multiple topics
+ - Change strategy to only get equipment state when required and from tsClient
+ - Change Keycloak to v26.4
+ - Change Keycloak seed database to align with Keycloak v26.4
+ - Change libraries to latest versions
+ - Change metric counter library to otel from prometheus
+ - Change OIDC to use rhize-go library
+ - Change queryInstances endpoint for workflow instance log list
+ - Change QuestDB equipment property data schema to use SYMBOL type to reduce storage requirements and improve performance
+ - Change processing of agent data to restate service rather than virtual objects by default
+ - Change pipeline services to latest versions
+ - Change QuestDB to v9.1.1
+ - Change Redpanda to v25.2.10
+ - Change Restate to v1.21.1
+ - Change to restate.UUID from restate.Rand.UUID to align with client upgrade
+ - Change schema resolution of dgraph types in inherited properties to only return IIDs and allow dgraph to resolve
+ - Change service context span to return new context to allow for nested spans
+ - Change typescript-host to v4.2.0-rc1
+ - Change various golang libraries to latest minor/bugfix release
+
+#### Fix
+
+ - Fix cache population on map value
+ - Fix Calendar generation not scheduling itself
+ - Fix concurrent map read/write
+ - Fix equipment property data schema for QuestDB to use SYMBOL type
+ - Fix goroutine leak by upgrading restate sdk-go to v0.18.1
+ - Fix inconsistent isa95.equipment restate key usage
+ - Fix inconsistent key usage in topic bindings
+ - Fix memory usage of inherited properties by using @local directive on local return types
+ - Fix metrics endpoint resolver
+ - Fix to not query state unless a service_key is provided in the WHERE clause
+ - Fix non-pointer binding and ensure correct content-type is passed to restate based on body content
+ - Fix optional fields value missing after version creation of a Person
+ - Fix restate OOM by changing default schema to use int limits and require datetimes for queryInstances
+ - Fix several security vulnerabilities in event catalog
+
+#### Remove
+ - Remove excess information being stored in equipment active version
+ - Remove over-fetched data for inherited equipment properties
+ - Remove tests that are specific to old combined WorkMaster header/version structure
+
+### Typescript host service
+
+#### Change
+ - Change pipeline release step to use docker v29.0
+ - Change restate sdk version to 1.9.0
+
+
+### Workflow
+
+#### Add
+ - Add a struct for type Rule Evaluation Payload Context
+ - Add action payload expressions to workflow definition outputs to allow UI to display the existing value
+ - Add blank rule expression just to evaluate the struct instead to increase rule evaluation performance (this skips the need for communicating with the typescript service for eval)
+ - Add cache of workflow specifications
+ - Add debug message for rule evaluation
+ - Add documentation on `CreateAndRun` function for the `BPMN` restate service
+ - Add error logging for binding failures
+ - Add expand env vars to JSONata service task transform & input
+ - Add foreign character support to BPMNs
+ - Add function to rebind message start event triggers
+ - Add instance node logging
+ - Add instance node log delete function for cleanup of logging
+ - Add Intermediate catch via message start back in
+ - Add option to specify BPMN version on CreateAndRun handler
+ - Add option to run workflow without restate
+ - Add process documentation as description when marshalling and the inverse when unmarshalling to XML
+ - Add restate metrics to support debugging and performance analysis
+ - Add round-robin load balancing for typescript host in HA environments
+ - Add schema validation function back in
+ - Add SQL Task Handler for PostgreSQL
+ - Add support for saving bpmn extension properties as workflow spec properties
+ - Add task Data Source Method Call back in
+ - Add task handler for Google BigQuery
+ - Add Update BPMN restate handler
+ - Add Update Workflow Specification Property handler
+ - Add validation logic to update BPMN timers
+ - Add validation to the SQL Playground SQL query to prevent full table scans of value or value_utf8 fields
+ - Add workflow ID search in instance query
+ - Add workflow instance log list as queryInstances endpoint
+
+#### Change
+
+ - Change /healthz definition to be defined inline for readability
+ - Change BPMN loading for SQL tasks to accept inputs for `url`, `query`, `args`, and `responseTransformExpression`
+ - Change BPMNService to initialize with a query/mutation adapter instead of creating one every time
+ - Change `buildActionPayload` to always treat expressions as non-constant, due to issues with the UI
+ - Change completed workflow buffer when equal to or greater than 10
+ - Change connection string logging to use slog for consistency
+ - Change `createAndRunBpmn` strategy to fall back to a database query when the workflow specification is not found
+ - Change datasource task to route via restate instead of NATS
+ - Change dependencies to align with release
+ - Change DQL, GraphQL & RestAPI Task HTTP Clients to use common client with shared Certificate Authority pool to allow for external certificate use
+ - Change error messages to restate terminalize errors
+ - Change event message start to trigger off a rule-definition bound to a datasource topic in restate instead of a Kafka topic
+ - Change golang libraries to align with release
+ - Change instance filter to filter out un-versioned, un-started or un-ended instances
+ - Change JSONata-go to v1.8.8
+ - Change JSONata errors to terminalize back to restate instead of panicking
+ - Change JSONata resolution to use typescript microservice implementation instead of Golang's
+ - Change Kafka implementation to allow use of Redpanda
+ - Change log usage to slog in main.go for consistency
+ - Change pipeline variables to work with v4 app-config-local
+ - Change RestateSDK to 0.18.1
+ - Change rule evaluation error messages to wrap additional context before erroring
+ - Change versions of filippo/age
+ - Change versions of golang.org/x/exp
+ - Change versions of golang-jwt and golang.org/x/net
+ - Change workflow to clear token on abort to prevent a loop
+ - Change workflow to only handle Message/Timer starts on start nodes (i.e. filter out the others)
+
+#### Fix
+
+ - Fix an issue where `buildActionPayload` would overwrite inputs before sending them to typescript-host-service
+ - Fix bpmn timer disable
+ - (sts/99) Fix CI pipeline failure due to creating a datasource with curl compressed
+ - Fix duplicate subscriptions being created on message start events
+ - Fix error where an end node would be displayed as a failed node
+ - Fix HTTP protocols served by Workflow
+ - (sts/92) Fix lint issues
+ - Fix metrics complete not finding any instances due to incorrect key from complete token method
+ - Fix node errors shown in View Instance UI
+ - Fix platform state SQL playground error
+ - Fix sync bpmn service to include case where workflow does not have a virtual object
+ - Fix workflownode automated delete to use node.NodeID instead of the restate key & NodeID
+
+#### Remove
+
+ - Remove Ingress BAAS CDC handler from Workflow
+ - Remove unused duplicated fromBpmnXML
+ - Remove unused/commented out code
+ - Remove Azure client information in agent test configuration
+
+
+ + +## Compatibility + +{{< compatible "4.2.0" >}} + +## Checksums + +{{% checksums "v4.2.0-checksums.txt" %}} + +## Upgrade + +To upgrade to v4.2.0, follow the [Upgrade instructions](/deploy/upgrade). diff --git a/content/versions/v3.2.1/_index.md b/content/versions/v3.2.1/_index.md new file mode 100644 index 000000000..0b9cd02f0 --- /dev/null +++ b/content/versions/v3.2.1/_index.md @@ -0,0 +1,30 @@ +--- +title: ##Leave only home page without title +description: User guides, deploy docs, references, and deep dives about the + Rhize manufacturing data hub. +cascade: + type: docs + v: "3.2.1" +--- + + + +

+The Rhize Manufacturing Data Hub +

+ +Rhize is a real-time, event-driven manufacturing data hub. + +Rhize unites all events from your manufacturing processes, relates these events as a single graph structure, +and provides access to any combination of them through a single API endpoint. +The tight integration of all levels of manufacturing data, from real-time sensor data to operations orders, serves a wide variety of business needs, including as: + +- **A manufacturing knowledge graph.** Help humans and algorithms analyze plant processes and discover places to optimize. +- **An integrator of systems.** Orchestrate processes across applications to standardize, coordinate, and transform data flows. +- **A backend for {{< abbr "MES" >}} applications.** Rapidly build frontends on top of the database and workflow engine. + Design the MES system that makes sense for your processes and people. + + +{{< card-list >}} + + diff --git a/content/versions/v3.2.1/deploy/_index.md b/content/versions/v3.2.1/deploy/_index.md new file mode 100644 index 000000000..de92f8929 --- /dev/null +++ b/content/versions/v3.2.1/deploy/_index.md @@ -0,0 +1,38 @@ +--- +title: Deploy +description: >- + A collection of pages to administrate Rhize: install, upgrade, back up, and more. +weight: 100 +icon: server +identifier: deploy +cascade: + icon: server + domain_name: libremfg.ai + brand_name: Libre + application_name: libre + db: libreBaas + pre_reqs: |- + - Optional: [kubectx](https://github.com/ahmetb/kubectx) utilities + - `kubectx` to manage multiple clusters + - `kubens` to switch between and configure namespaces easily + - Optional: the [k8 Lens IDE](https://k8slens.dev), if you prefer to use Kubernetes graphically + k8s_cluster_ns: |- + ```bash + ## context + kubectl config current-context + ## namespace + kubectl get namespace + ``` + + To change the namespace for all subsequent [`kubectl` commands](https://kubernetes.io/docs/reference/kubectl/cheatsheet/) to `libre`, run this command: + + ```bash + kubectl config set-context --current --namespace=libre + ``` + +--- + +A collection of pages to administrate Rhize: install, upgrade, back up, and more. + + +{{< card-list >}} diff --git a/content/versions/v3.2.1/deploy/backup/_index.md b/content/versions/v3.2.1/deploy/backup/_index.md new file mode 100644 index 000000000..b69b7a7bf --- /dev/null +++ b/content/versions/v3.2.1/deploy/backup/_index.md @@ -0,0 +1,21 @@ +--- +date: "2023-09-12T19:35:35+11:00" +title: Back up +description: Guides to back up your data on Rhize +categories: ["how-to"] +weight: 200 +cascade: + icon: database +--- + +Backup is critical to ensure reliability and recovery. + +These guides show you how to back up different services and data on Rhize. +They also serve as blueprints for automation. + +Your organization must determine how frequently you backup services, and how long you store them for. +The correct practice here is highly contextual, +depending on the size of the data, the importance of the data, and the general regulatory and governance demands of your industry. 
+ + +{{< card-list >}} diff --git a/content/versions/v3.2.1/deploy/backup/audit.md b/content/versions/v3.2.1/deploy/backup/audit.md new file mode 100644 index 000000000..0b0bd9d7e --- /dev/null +++ b/content/versions/v3.2.1/deploy/backup/audit.md @@ -0,0 +1,52 @@ +--- +title: 'Back up Audit PostgreSQL' +date: '2024-03-26T11:20:56-03:00' +categories: ["how-to"] +description: How to backup Audit PostgreSQL on your Rhize deployment +weight: 300 +--- + +This guide shows you the procedure to backup your Audit PostgreSQL database on your Rhize Kubernetes deployment. + +## Prerequisites + +Before you start, ensure you have the following: + +- A designated backup location, for example `~/rhize-backups/libre-audit`. +- Access to the [Rhize Kubernetes Environment](/deploy/install/setup-kubernetes) +{{% param pre_reqs %}} + + +Also, before you start, confirm you are in the right context and namespace. + +{{% param k8s_cluster_ns %}} + +## Steps + +To back up Audit PostgreSQL, follow these steps: + +1. Check the logs for the Audit pods, either in Lens or with [`kubectl logs`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs). + Ensure there are no errors. + +1. Retrieve the Audit user password using the following command: + + + ```bash + kubectl get secret -o jsonpath="{.data.}" | base64 --decode + ``` + +1. Execute a command on the Audit Postgres pod to perform a full backup: + + ```bash + kubectl exec -i audit-postgres-0 -- pg_dumpall -U | gzip > audit-postgres-backup-$(date +"%Y%m%dT%I%M%p").sql.gz + ``` + +On success, the backup creates a GZIP file, `audit-postgres-backup-YYYYMMDDTHHMMSS.sql.gz`. +To check that the backup succeeded, unzip the files and inspect the data. + +## Next Steps + +- To back up other Rhize services, read how to backup: + - [Keycloak]({{< relref "keycloak" >}}). + - [Grafana]({{< relref "grafana" >}}). + - [The Graph Database]({{< relref "graphdb" >}}). diff --git a/content/versions/v3.2.1/deploy/backup/binary.md b/content/versions/v3.2.1/deploy/backup/binary.md new file mode 100644 index 000000000..a81db4be3 --- /dev/null +++ b/content/versions/v3.2.1/deploy/backup/binary.md @@ -0,0 +1,79 @@ +--- +title: 'Back up the Graph DB to S3' +date: '2024-11-04T11:01:46-03:00' +categories: ["how-to"] +description: How to back up the Rhize graph database to Amazon S3 storage. +weight: 100 +--- + +This guide shows you how to back up the Rhize Graph database to Amazon S3 and S3-compatible storage. + +## Prerequisites + +Before you start, ensure you have the following: + + +- A designated S3 backup location, for example `s3://s3..amazonaws.com/`. +- Access to your [Rhize Kubernetes Environment]({{< relref "../install" >}}) +{{% param pre_reqs %}}. + + +Before you start, confirm you are in the right context and namespace: + +{{% param "k8s_cluster_ns" %}} + +## Steps + +To back up the database, follow these steps: + +1. Check the logs for the alpha and zero pods, either in Lens or with [`kubectl logs`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs). + Ensure there are no errors. + + ```bash + kubectl logs {{< param application_name >}}-baas-baas-alpha-0 --tail=80 + ``` +1. Set the following environmental variables: + - `AWS_ACCESS_KEY_ID`. Your AWS access key with permissions to write to the destination bucket + - `AWS_SECRET_ACCESS_KEY`. Your AWS access key with permissions to write to the destination bucket + - `AWS_SESSION_TOKEN`. Your AWS session token (if required) + +1. 
Make a POST request to your Keycloak `/token` endpoint to get an `access_token` value. +For example, with `curl` and `jq`: + + ```bash + ## replace USERNAME and PASSWORD with your credentials + USERNAME=backups@libremfg.com \ + && PASSWORD=password \ + && curl --location \ + --request POST "${BAAS_OIDC_URL}/realms/libre/protocol/openid-connect/token" \ + --header 'Content-Type\ application/x-www-form-urlencoded' \ + --data-urlencode 'grant_type=password' \ + --data-urlencode "username=" \ + --data-urlencode "password=" \ + --data-urlencode "client_id=" \ + --data-urlencode "client_secret=" | jq .access_token + ``` + +1. Using the token from the previous step, send a POST to `:8080/admin` to create a backup of the node to your S3 bucket. +For example, with `curl`: + + ```bash + curl --location 'http://alpha:8080/admin' \ + --header 'Content-Type: application/json' \ + --header 'Authorization: Bearer ' \ + --data '{"query":"mutation {\n backup(input: {destination: \"s3://s3..amazonaws.com/\"}) {\n response {\n message\n code\n }\n taskId\n }\n}","variables":{}}' + ``` + +1. List available backups to confirm your backup succeeded: + + ```bash + curl --location 'http://alpha:8080/admin' \ + --header 'Content-Type: application/json' \ + --header 'Authorization: Bearer ' \ + --data '{"query":"query backup {\n\tlistBackups(input: {location: \"s3://s3.>.amazonaws.com/\"}) {\n\t\tbackupId\n\t\tbackupNum\n\t\tpath\n\t\tsince\n\t\ttype\n\t}\n}","variables":{}}' + ``` + +## Next Steps + +- Test the [Restore Graph Database From S3]({{< relref "../restore/binary" >}}) procedure to ensure you can recover data from Amazon S3 in case of an emergency. +- To back up other Rhize services, read how to backup [Grafana]({{< relref "grafana" >}}). diff --git a/content/versions/v3.2.1/deploy/backup/grafana.md b/content/versions/v3.2.1/deploy/backup/grafana.md new file mode 100644 index 000000000..c5d47f98e --- /dev/null +++ b/content/versions/v3.2.1/deploy/backup/grafana.md @@ -0,0 +1,105 @@ +--- +title: 'Back up Grafana' +date: '2023-10-18T11:01:56-03:00' +categories: ["how-to"] +description: How to backup Grafana on your Rhize deployment +weight: 300 +--- + +This guide shows you the procedure to back up Grafana on your Rhize Kubernetes deployment. +For general instructions, refer to the official [Back up Grafana](https://grafana.com/docs/grafana/latest/administration/back-up-grafana/) documentation. + +## Prerequisites + +Before you start, ensure you have the following: + +- A designated backup location, for example `~/rhize-backups/grafana`. +- Access to the [Rhize Kubernetes Environment](/deploy/install/setup-kubernetes) +{{% param pre_reqs %}} + + +Also, before you start, confirm you are in the right context and namespace. + +{{% param k8s_cluster_ns %}} + +## Steps + +To back up the Grafana, follow these steps: + +1. Check the logs for the Grafana pods, either in Lens or with [`kubectl logs`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs). + Ensure there are no errors. + +1. Open a pod shell for one of the Grafana pods: + + ```bash + kubectl exec --stdin --tty -- /bin/bash + ``` + + For details, read the Kubernetes topic [Get Shell to a Running Container](https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/). + +1. 
Use `tar` to backup the Grafana data and `conf` directories: + + ```bash + ## Data Directory Backup Command + tar -v -c -f /home/grafana/grafana-data-$(date +"%Y-%m-%dT%H.%M.%S").tar.gz /var/lib/grafana + ## Conf Directory Backup Command + tar -v -c -f /home/grafana/grafana-conf-$(date +"%Y-%m-%dT%H.%M.%S").tar.gz /usr/share/grafana/conf + ``` + +1. Change to the backup directory. For example: + + ```bash + cd /home/grafana/ + ``` + +1. Check for the latest `.gz` files (for example, with `ls -lt`). + There should be new backup `data` and `conf` files whose names include timestamps from when you ran the preceding `tar` commands. + +1. Create a checksum file for the latest backups: + + ```bash + sha256sum .tar.gz .tar.gz > backup.sums + ``` + + +1. Exit the container shell, and then copy files out of the container to your backup location: + + ```bash + ## exit shell + exit + ## copy container files to backup + kubectl cp :/home/grafana/ \ + ./ -c grafana + + kubectl cp :/home/grafana/ \ + ./ -c grafana + kubectl cp :/home/grafana/backup.sums \ + ./backup.sums -c grafana + ``` + +## Confirm success + + +To confirm the backup, check their sha256 sums and their content. + +To check the sums: + +1. Change to the directory where you sent the backups: + + ```bash + cd // + ``` + +1. Confirm the checksums match: + + ```bash + sha256sum -c backup.sums \ + .tar.gz .tar.gz + ``` + +To check that the content is correct, unzip the files and inspect the data. + +## Next steps + +- Test the [Restore Grafana]({{< relref "../restore" >}}) procedure to ensure you can recover data in case of an emergency. +- To back up other Rhize services, read how to backup [the Graph Database]({{< relref "graphdb" >}}). diff --git a/content/versions/v3.2.1/deploy/backup/graphdb.md b/content/versions/v3.2.1/deploy/backup/graphdb.md new file mode 100644 index 000000000..10ace29cf --- /dev/null +++ b/content/versions/v3.2.1/deploy/backup/graphdb.md @@ -0,0 +1,134 @@ +--- +title: 'Back up the Graph DB' +date: '2023-10-18T11:01:46-03:00' +categories: ["how-to"] +description: How to back up the Rhize graph database +weight: 100 +--- + +This guide shows you how to back up the Rhize Graph database. +You can also use it to model an automation workflow. + +## Prerequisites + +Before you start, ensure you have the following: + + +- A designated backup location, for example `~/rhize-backups/database`. +- Access to your [Rhize Kubernetes Environment]({{< relref "../install" >}}) +{{% param pre_reqs %}}. + + +Before you start, confirm you are in the right context and namespace: + +{{% param "k8s_cluster_ns" %}} + +## Steps + +To back up the database, follow these steps: + +1. Check the logs for the alpha and zero pods, either in Lens or with [`kubectl logs`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs). + Ensure there are no errors. + + ```bash + kubectl logs {{< param application_name >}}-baas-baas-alpha-0 --tail=80 + ``` + +1. Open a pod shell for one of the alpha pods. If you are using the terminal, run this command: + + ```bash + kubectl exec --stdin --tty {{< param application_name >}}-baas-baas-alpha-0 \ + -n {{< param "application_name" >}} -- /bin/bash + ``` + + For details, read the Kubernetes topic [Get Shell to a Running Container](https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/). + +1. Make a POST request to your Keycloak `/token` endpoint to get an `access_token` value. 
+For example, with `curl` and `jq`: + + ```bash + ## replace USERNAME and PASSWORD with your credentials + USERNAME=backups@libremfg.com \ + && PASSWORD=password \ + && curl --location \ + --request POST "${BAAS_OIDC_URL}/realms/libre/protocol/openid-connect/token" \ + --header 'Content-Type\ application/x-www-form-urlencoded' \ + --data-urlencode 'grant_type=password' \ + --data-urlencode "username=${USERNAME}" \ + --data-urlencode "password=${PASSWORD}" \ + --data-urlencode "client_id=${BAAS_OIDC_CLIENT_ID}" \ + --data-urlencode "client_secret=${OIDC_SECRET}" | jq .access_token + ``` + +1. Using the token from the previous step, send a POST to `localhost:8080/admin` to create a backup of the node. +For example, with `curl`: + + ```bash + curl --location --request POST 'http://localhost:8080/admin' \ + --header 'Authorization: Bearer ' \ + --header 'Content-Type: application/json' \ + --data-raw '{"query":"mutation {\r\n export(input: {format: \"json\", destination: \"/dgraph/backups/'"$(date +"%Y-%m-%dT%H.%M.%SZ")"'\"}) {\r\n response {\r\n message\r\n code\r\n }\r\n}\r\n}","variables":{}}' + ``` + +1. Change to the backup directory (the `destination` parameter in the preceding `curl` command). For example: + + ```bash + cd /dgraph/backups + ``` + +1. Check for the latest directory. Its name should be the timestamp of when you sent the preceding `curl` request. For example: + + ```bash + ls -lt + ``` + + With these flags, the first listed directory should be the latest backup, named something like `2023-10-31T16.55.56Z` + +1. Create a file that holds the sha256 checksums of the latest backup files. You'll use this file to confirm the copy is identical. + + ```bash + sha256sum /dgraph./*.gz > /backup.sums + ``` + +1. Exit the container shell, then copy files out of the container to your backup location: + + ```bash + ## exit shell + exit + ## copy container files to backup + kubectl cp --retries=10 /:backups/ \ + .// + ``` + +1. Use the checksum to confirm that the pod files and the local files are the same. +If you are using Windows, you can run an equivalent check with the [`CertUtil`](https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/certutil) utility: + + {{< tabs items="bash,cmd">}} + {{% tab "bash" %}} + ```bash + ## Change to the directory + cd ./// + ## Check sums + sha256sum -c backup.sums *.gz + ``` + {{% /tab %}} + {{% tab "cmd" %}} + ```cmd + CertUtil -hashfile C:\\\backup.sums sha256 + ``` + {{% /tab %}} + {{< /tabs >}} + +## Confirm success + +On success, the backup creates three zipped files: +- The GraphQL schema +- The DB schema +- A JSON file with the real database data. + +To check that the backup succeeded, unzip the files and inspect the data. + +## Next Steps + +- Test the [Restore Graph Database]({{< relref "../restore/graphdb" >}}) procedure to ensure you can recover data in case of an emergency. +- To back up other Rhize services, read how to backup [Grafana]({{< relref "grafana" >}}). diff --git a/content/versions/v3.2.1/deploy/backup/influx.md b/content/versions/v3.2.1/deploy/backup/influx.md new file mode 100644 index 000000000..aecdb39bf --- /dev/null +++ b/content/versions/v3.2.1/deploy/backup/influx.md @@ -0,0 +1,67 @@ +--- +title: 'Back up Influx' +date: '2023-10-18T11:01:56-03:00' +categories: ["how-to"] +description: How to backup InfluxDB on your Rhize deployment +draft: true +weight: 300 +--- + +This guide shows you the procedure to back up the InfluxDB on your Rhize Kubernetes deployment. 
+For general instructions, refer to the official [Backup Grafana](https://grafana.com/docs/grafana/latest/administration/back-up-grafana/) documentation. + +## Prerequisites + +Before you start, ensure you have the following: + +- A designated backup location, for example `~/rhize-backups/influx`. +- Access to the [Rhize Kubernetes Environment](/deploy/install/setup-kubernetes) +{{% param pre_reqs %}} + + +Also, before you start, confirm you are in the right context and namespace. + +```bash +## context +kubectl config current-context +## namespace +kubectl get namespace +``` + + +## Steps + +Then, to back up the Influx, follow these steps: + +1. Check the logs for the Influx pods, either in Lens or with [`kubectl logs`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs). + Ensure there are no errors. + +1. Open a pod shell for one of the Influx pods: + + ```bash + kubectl exec --stdin --tty -- /bin/bash + ``` + + For details, read the Kubernetes topic [Get Shell to a Running Container](https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/). + +1. Use the `influx backup` command to backup the data. + + ```bash + influx backup --org {{< param brand_name >}} --bucket {{< param brand_name >}} --token /backups/$(date +"%Y-%m-%dT%H.%M.%S") + ``` + +1. Open the backup directory. Check the latest directory (for example with `ls -lt`) for the latest `.gz` files. Its name should be a timestamp from when you ran the preceding backup command. + +1. Leave the container shell. Copy files out of the container to your backup location: + + ```bash + kubectl cp /:backups/ \ + ./ + ``` + +To check that the backup succeeded, unzip the files and inspect the data. + +## Next steps + +- Test the [Restore Influxdb]({{< relref "../restore/influxdb" >}}) procedure to ensure you can recover data in case of an emergency. +- To back up other Rhize services, read how to backup [the Graph Database]({{< relref "graphdb" >}}) and [Grafana]({{< relref "grafana" >}}). diff --git a/content/versions/v3.2.1/deploy/backup/keycloak.md b/content/versions/v3.2.1/deploy/backup/keycloak.md new file mode 100644 index 000000000..1c0b1ba0f --- /dev/null +++ b/content/versions/v3.2.1/deploy/backup/keycloak.md @@ -0,0 +1,60 @@ +--- +title: 'Back up Keycloak' +date: '2024-01-08T14:30:15-05:00' +categories: ["how-to"] +description: How to backup Keycloak on your Rhize deployment +weight: 300 +--- + +This guide shows you how to back up Keycloak on your Rhize Kubernetes deployment. + +## Prerequisites + +Before you start, ensure you have the following: + +- A designated backup location, for example `~/rhize-backups/keycloak`. +- Access to the [Rhize Kubernetes Environment](/deploy/install/setup-kubernetes) +{{% param pre_reqs %}} + +Also, before you start, confirm you are in the right context and namespace. + +{{% param k8s_cluster_ns %}} + +## Steps + +To back up Keycloak, follow these steps: + +1. Check the logs for the Keycloak pods, either in Lens or with [`kubectl logs`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs). + Ensure there are no errors. + +1. Retrieve the Keycloak user password using the following command, replacing with your namespace: + + + ```bash + kubectl get secret keycloak--postgresql -o jsonpath="{.data.postgres-password}" | base64 --decode + ``` + +1. 
Execute a command on the Keycloak Postgres pod to perform a full backup, replacing with your namespace: + + ```bash + kubectl exec -i keycloak--postgresql-0 -- pg_dumpall -U postgres | gzip > keycloak-postgres-backup-$(date +"%Y%m%dT%I%M%p").sql.gz + ``` + + +1. When prompted, use the password from the previous step. Expect the prompt multiple times for each database. + +1. Check the logs for the Keycloak Postgres pods, either in Lens or with [`kubectl logs`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs). + Ensure there are no errors relating to the backup. + +## Confirm success + +On success, the backup creates a gzip file, `keycloak-postgres-backup-YYYYMMDDTHHMMSS.sql.gz`. + +To check that the backup succeeded, unzip the files and inspect the data. + +## Next Steps + +- Test the [Restore Keycloak]({{< relref "../restore/keycloak" >}}) procedure to ensure you can recover data in case of an emergency. +- To back up other Rhize services, read how to backup: + - [Grafana]({{< relref "grafana" >}}). + - [The Graph Database]({{< relref "graphdb" >}}). diff --git a/content/versions/v3.2.1/deploy/cluster-sizing.md b/content/versions/v3.2.1/deploy/cluster-sizing.md new file mode 100644 index 000000000..e657e6b1c --- /dev/null +++ b/content/versions/v3.2.1/deploy/cluster-sizing.md @@ -0,0 +1,92 @@ +--- +title: Recommended Kubernetes cluster sizing +description: The recommended number of nodes and compute per pod in your Rhize Kubernetes cluster +--- + +Rhize runs on Kubernetes. + +This document provides compute recommendations for the nodes, pods services of your [Rhize Install]({{< relref "install" >}}). +Some services also have recommended replication factors to increase reliability. + +## Node recommendations + +The following tables are the minimum recommended sizes to provision your cluster for Rhize {{% param v %}}. + +### Rhize nodes + +For high availability, Rhize recommends a **minimum of three nodes** with the following specifications. + + +| Property | Value | +|-----------------------|-------------------| +| Number of nodes | 3 | +| CPU Speed (GHz) | 3.3 | +| vCPU per Node | 16 | +| Memory per node (GiB) | 32 (64 is better) | +| Persisted volumes | 12 | +| Persisted Volume IOPS | 5000 | +| PV Throughput (MBps) | 500 | +| Total Disk Space (TB) | 3 | +| Disk IOPS | 5000 | +| Disk MBps | 500MBps | + +### Rhize agent + +The Rhize agent typically runs on the edge, outside of the cluster entirely. +For the Rhize Agent, the minimum recommended specifications are as follows: + +| Property | Value | +|-----------------------|-------| +| CPU Speed (GHz) | 2.8 | +| vCPU per Node | 2 | +| Memory per node (GiB) | 1 | +| Persisted volumes | 1 | + +## Service-level recommendations + +The following table lists the **minimum** recommended specifications for the main services. +Services with stateful PV have a persistent volume per pod. 
+ + +| Service | Pods for HA (replica count) | vCPU per Pod | Memory Per Pod | Stateful PV | DiskSize (GiB) | Comments | +|------------------------|-----------------------------|--------------|----------------|-------------|----------------|----------------------------------------------------------------------| +| `baas-alpha` | 3 | 8 | 16 (at least) | Yes | 750 | High throughput and IOPS | +| `baas-zero` | 3 | 2 | 2 | Yes | 350 | High throughput and IOPS | +| `libre-core` | 3 | 1 | 2 | No | N/A | HA requires 2 pods, but 3 is to avoid hotkey issues and balance load | +| `bpmn-engine` | 3 | 1 | 2 | No | N/A | HA requires 2 pods, but 3 is to avoid hotkey issues and balance load | +| `nats` | 3 | 1 | 2 | Yes | 100 | High IOPS | +| `nats-box` | 1 | 0.25 | 0.25 | No | N/A | | +| `libre-audit` | 2 | 1 | 1 | No | N/A | | +| `libre-audit-postgres` | 2 | 1 | 2 | Yes | 250 | Runs in pod with `libre-audit` | +| `libre-ui` | 3 | 0.25 | 0.25 | No | N/A | | +| `keycloak` | 2 | 1 | 2 | No | N/A | | +| `keycloak-postgres` | 2 | 1 | 2 | No | 200 | Runs in pod with `keycloak` | +| `router` | 2 | 1 | 2 | Yes | <1 | Requires volume to compose supergraph | +| `grafana`* | 3 | 0.5 | 2 | No | 20-50 | Storage can be in host or in object bucket. | + + * May run [in separate cluster](#monitoring-stack) + +### Monitoring stack + +The following table provides minimal compute recommendations for the monitoring stack. + +The default recommendation is to run your Rhize observability stack in the nodes that also run the Rhize application. +However, some deployments prefer to separate monitoring to its own cluster. + +| Service | Pods for HA (replica count) | vCPU cores per pod | Memory per pod | DiskSize (GiB) | +|-------------------------|-----------------------------|--------------------|----------------|----------------| +| `grafana` | 3 | 0.5 | 2 | 50GB | +| `prometheus-node` | 4 | 0.25 | 0.05 | N/A | +| `prometheus-server` | 1 per pod | 1 | 2 | 1 | +| `promtail` | 4 | 0.25 | 0.2 | N/A | +| `loki` | 1 | 1 | 1 | 1 | +| `loki-logs` | 1 per pod | 0.25 | 0.1 | N/A | +| `loki-canary` | 4 | 0.25 | 0.1 | N/A | +| `loki-gateway` | 1 | 0.25 | 0.05 | 0.25 | +| `loki-grafana-operator` | 1 | 0.25 | 0.1 | 0.25 | +| `tempo-compactor` | 1 | 0.25 | 2 | 0.25 | +| `tempo-ingester` | 3 | 0.5 | 0.75 | 1.5 | +| `tempo-querier` | 1 | 0.25 | 0.5 | 0.25 | +| `tempo-distributor` | 1 | 0.25 | 0.5 | 0.25 | +| `tempo-query-frontend` | 1 | 0.25 | 0.5 | 0.25 | +| `temp-memcache` | 1 | 0.25 | 0.1 | 0.25 | diff --git a/content/versions/v3.2.1/deploy/get-keycloak-token.md b/content/versions/v3.2.1/deploy/get-keycloak-token.md new file mode 100644 index 000000000..006bb674e --- /dev/null +++ b/content/versions/v3.2.1/deploy/get-keycloak-token.md @@ -0,0 +1,46 @@ +--- +title: Get Keycloak Token +description: Get a Keycloak bearer token +--- + +When build applications on Rhize, your clients to need authenticate requests. +To do that, they need to periodically request a bearer token from the Keycloak service. 
+ +## Get token + +With `grant_type` set for client credentials, you can get a bearer token with the following request: + +```shell +curl -X POST "https:///realms//protocol/openid-connect/token" \ + -H "Content-Type: application/x-www-form-urlencoded" \ + -d "grant_type=client_credentials" \ + -d "client_id=" \ + -d "client_secret=" +``` + +If the `grant_type` is set for password authentication the request will also require a username and password: + +```shell +curl -X POST "https:///realms//protocol/openid-connect/token" \ + -H "Content-Type: application/x-www-form-urlencoded" \ + -d "grant_type=password" \ + -d "client_id=" \ + -d "client_secret=" \ + -d "username=" \ + -d "password=" +``` + +## Response + +An example response returns a JSON object with the following structure. +The `access_token` property has the token value. + +```json +{ + "access_token": "eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldU...", + "expires_in": 300, + "token_type": "Bearer", + "scope": "email profile" +} +``` + diff --git a/content/versions/v3.2.1/deploy/install/_index.md b/content/versions/v3.2.1/deploy/install/_index.md new file mode 100644 index 000000000..73587e378 --- /dev/null +++ b/content/versions/v3.2.1/deploy/install/_index.md @@ -0,0 +1,26 @@ +--- +title: 'Install' +date: '2023-09-22T13:54:26-03:00' +category: how-to +description: >- + A guide to install Rhize services on your Kubernetes cluster. +weight: 100 +cascade: + domain_name: libremfg.ai + brand_name: Libre + application_name: libre + icon: terminal +--- + +This guide shows you how to install Rhize services on your Kubernetes cluster. + + + +{{< callout type="info">}} +This procedure aims to be as generic and vendor-neutral as possible. +Some configuration depends on where and how you run your IT infrastructure—what cloud provider you use, preferred auxiliary tools, and so on---so your team must adapt the process for its particular use cases. +{{< /callout >}} + + + +{{< card-list >}} diff --git a/content/versions/v3.2.1/deploy/install/image.png b/content/versions/v3.2.1/deploy/install/image.png new file mode 100644 index 000000000..f86a2c0d1 Binary files /dev/null and b/content/versions/v3.2.1/deploy/install/image.png differ diff --git a/content/versions/v3.2.1/deploy/install/keycloak.md b/content/versions/v3.2.1/deploy/install/keycloak.md new file mode 100644 index 000000000..bb97b5d84 --- /dev/null +++ b/content/versions/v3.2.1/deploy/install/keycloak.md @@ -0,0 +1,387 @@ +--- +title: Configure Keycloak +description: The Rhize GraphQL implementation uses OpenIDConnect for + Authentication and role-based access control. This section describes how to + set up Keycloak +weight: 100 +icon: key +categories: "how-to" +--- + +Rhize uses [Keycloak](https://keycloak.org) as an OpenID provider. +In your cluster, the Keycloak server to authenticate users, services, and manage Role-based access controls. + +This topic describes how to set up Keycloak in your Rhize cluster. +For a conceptual overview of the authentication flow, +read [About OpenID Connect](/explanations/about-openidconnect) + +## Prerequisites + +First, ensure that you have followed the instructions from [Set up Kubernetes](/deploy/install/setup-kubernetes). +All prerequisites for that step apply here. + +## Steps + +Follow these steps to configure a Keycloak realm and associate Rhize services to Keycloak clients, groups, roles, and policies. + +{{% steps %}} + +### Log in + +1. Go to `localhost` on the port where you forwarded the URL. 
If you used the example values from the last step, that's `localhost:5101`. +1. Use the container credentials to log in. + + To find this, look in the `keycloak.yaml` file. + +### Create a realm + +A Keycloak _realm_ is like a tenant that contains all configuration. + +To create your Rhize realm, follow these steps. + +1. In the side menu, select **Master** then **Create Realm**. +1. For the **Realm Name**, enter `{{< param application_name >}}`. **Create.** + +### Configure realm settings + +#### Configure frontend URL and SSL + +1. In the side menu, select **Realm Settings**. +1. Enter the following values: + | Field | value | + |--------------|-----------------------| + | Frontend URL | Keycloak frontend URL | + | Require SSL | External requests | + +#### Enable Keycloak Audit Trail + +1. Select **Realm Settings**, and then **Events**. +1. Select the tab **User event settings**. +1. Enable **Save Events** and set an expiration. +1. **Save**. +1. Repeat the process for the **Admin event settings** tab. + +#### Configure password policy + +1. Select **Authentication** and then the **Policies** tab. +1. Select the **Password policy** tab. +1. Add your organisation's password policy. + +#### Configure brute-force protections + +1. Select **Realm settings** and then the **Security defenses** tab. +1. In **Brute force detection**, enable the feature and configure it to your requirements. + +#### Configure theme (Optional) +If created with the Libre Theme `init` container, configure the **Login Theme** in **Realm settings** for `libre`. + +### Create clients + +In Keycloak, _clients_ are entities that request Keycloak to authenticate a user. +You need to create a client for each service. + +The DB client requires additional configuration of flows and grants. +Other clients, such as the UI and Dashboard, use the standard flow to coordinate authorization between the browser and Keycloak to simplify security and improve user convenience. + +{{< callout type="info" >}} +Each standard-flow client has its own subdomain. +Refer to [Default URLs and Ports]({{< relref "../../reference/default-ports" >}}) for our recommended conventions. +{{< /callout >}} + +#### Create DB client + +Create a client for the DB as follows: +1. In the side menu, select **Clients > create client**. +1. Configure the **General Settings**: + + - **Client Type**: `OpenID Connect` + - **Client ID**: `{{< param db >}}` + - **Name**: `{{< param brand_name >}} Backend as a Service` + - **Description**: `{{< param brand_name >}} Backend as a Service` + + When finished, select **Next.** + +1. Configure the **Capability config**: + - **Client Authentication**: On + - **Authorization**: On + - For **Authentication flow**, enable: + - 🗸 Standard flow + - 🗸 Direct access grants + - 🗸 Implicit flow + +1. Select **Next**, then **Save**. + + On success, this opens the **Client details** page for the newly created client. + +1. Select the **Service accounts roles** tab and assign the following roles to the `{{< param db >}}` service account. To locate roles, change the filter to **Filter by clients**: + - `manage-clients` + - `manage-account` + - `manage-users` + +#### Create UI client + +Create a client for the UI as follows: +1. In the side menu, select **Clients > create client**. +1. 
Configure the **General Settings**: + + - **Client Type**: `OpenID Connect` + - **Client ID**: `{{< param application_name >}}UI` + - **Name**: `{{< param brand_name >}} User Interface` + - **Description**: `{{< param brand_name >}} User Interface` + + When finished, select **Next.** + +1. Configure the **Capability config**: + - **Client Authentication**: On + - **Authorization**: On + - For **Authentication flow**, enable: + - 🗸 Standard flow + - 🗸 Direct access grants + - 🗸 Implicit flow + +1. Configure the **Access Settings**: + + - **Root URL**: `.` without trailing slashes + - **Home URL**: `.` without trailing slashes + - **Web Origins**: `.` without trailing slashes + +1. Select **Next**, then **Save**. + +#### Create dashboard client + +1. In the side menu, select **Clients > create client**. +1. Configure the **General Settings**: + + - **Client Type**: `OpenID Connect` + - **Client ID**: `dashboard` + - **Name**: `{{< param brand_name >}} Dashboard` + - **Description**: `{{< param brand_name >}} Dashboard` + +1. Configure the **Capability config**: + + - **Client Authentication**: On + - **Authorization**: On + - For **Authentication flow**, enable: + - 🗸 Standard flow + - 🗸 Direct access grants + - 🗸 Implicit flow + +1. Configure the **Access Settings**: + + - **Root URL**: `.` without trailing slashes + - **Home URL**: `.` without trailing slashes + - **Valid redirect URIs**: `/login/generic_oauth` without trailing slashes + - **Valid post logout redirect URIs**: `+` without trailing slashes + - **Web origins**: `.` without trailing slashes + +1. Select **Next**, then **Save**. + +#### Create other service clients + +The other services do not need authorization but do need client authentication. +By default you need to add only the client ID. + +For example, to create the BPMN engine client: +1. In the side menu, select **Clients > create client**. +1. For **Client ID**, enter `{{< param application_name >}}Bpmn` +1. Configure the **Capability config**: + - **Client Authentication**: On +1. Select **Next**, then **Save**. + +**Repeat this process for each of the following services:** + +| Client ID | Description | +|----------------------------------------|-----------------------| +| `{{< param application_name >}}Audit` | The audit log service | +| `{{< param application_name >}}Core` | The edge agent | +| `{{< param application_name >}}Router` | API router | + +Based on your architecture, repeat for any Libre Edge Agents, `{{< param application_name >}}Agent`. + +### Scope services + +In Keycloak, a _scope_ bounds the access a service has. +Rhize creates a default client scope, then binds services to that scope. + +#### Create a client scope + +To create a scope for your Rhize services, follow these steps: + + +1. Select **Client Scopes > Create client scope**. +1. Fill in the following values: + - **Name**: `{{< param application_name >}}ClientScope` + - **Description**: `{{< param brand_name >}} Client Scope` + - **Type**: `None` + - **Display on consent screen**: `On` + - **Include in token scope**: `On` +1. **Create**. +1. Select the **Mappers** tab, then **Configure new mapper**. Add an audience mapper for the DB client: + - **Mapper Type**: `Audience` + - **Name**: `{{< param db >}}AudienceMapper` + - **Include Client Audience**: `{{< param db >}}` + - **Add to ID Token**: `On` + - **Add to access token**: `On` +1. 
Repeat the preceding step for a mapper for the UI client: + - **Mapper Type**: `Audience` + - **Name**: `{{< param application_name >}}UIAudienceMapper` + - **Include Client Audience**: `{{< param application_name >}}UI` + - **Add to ID Token**: `On` + - **Add to access token**: `Off` +1. Repeat the preceding step for a mapper for the BPMN client: + - **Mapper Type**: `Audience` + - **Name**: `{{< param application_name >}}BpmnAudienceMapper` + - **Include Client Audience**: `{{< param application_name >}}Bpmn` + - **Add to ID Token**: `On` + - **Add to access token**: `On` +1. If using the Rhize Audit microservice, repeat the preceding step for an Audit scope and audience mapper: + - **Mapper Type**: `Audience` + - **Name**: `{{< param application_name >}}AuditAudienceMapper` + - **Include Client Audience**: + - **Included Custom Audience**: `audit` + - **Add to ID Token**: `On` + - **Add to access token**: `On` + +#### Add services to the scope + +1. Go to **Clients**. Select `{{< param db >}}`. +1. Select the **Client Scopes** tab. +1. Select **Add Client scope** +1. Select `{{< param application_name >}}ClientScope` from the list. +1. **Add > Default**. + +Repeat this process for the `dashboard`, `{{< param application_name >}}UI`, `{{< param application_name >}}Bpmn`, `{{< param application_name >}}Core`, `{{< param application_name >}}Router`, `{{< param application_name >}}Audit` (if applicable). Based on your architecture repeat for any Libre Edge Agent clients. + +### Create roles and groups + +In Keycloak, _roles_ identify a category or type of user. +_Groups_ are a common set of attributes for a set of users. + + +#### Add the Admin Group + +1. In the left hand menu, select **Groups > Create group**. +1. Give the group a name like `{{< param application_name >}}AdminGroup`. +1. **Create**. + +#### Add the dashboard realm roles + +1. Select **Realm Roles**, and then **Create role**. +1. Name the role `dashboard-admin`. +1. **Save**. +1. Repeat the process to create a role `dashboard-dev`. + +#### Add the dashboard groups + +1. In the left hand menu, select **Groups**, and then **Create Group**. +1. Name the group `dashboard-admin` +1. **Create.** +1. Repeat the process to create `dashboard-dev` and `dashboard-user` groups. + +Now map the group to a role: +1. Select dashboard-admin from the list +1. Select the **Role mapping** tab. +1. Select **Assign Role.** +1. Select **`dashboard-admin`** +1. **Assign.** +1. Repeat the process for `dashboard-dev` + + +#### Add the group client scope + +1. In the left hand menu, select **Client scopes** and **Create client scope**. +1. Name it `groups` and provide a description. +1. **Save**. + +Now map the scope: +1. Select the **Mappers** tab. +1. **Add predefined mappers.** +1. Select `groups`. +1. **Add**. + +#### Add new client scopes to dashboard client + +1. In the left hand menu, select **Clients**, and then `dashboard`. +1. Select the **Client scopes** tab. +1. **Add client scope**. +1. Select `groups`. +1. **Add > Default**. + +### Add Client Policy + +In Keycloak, _policies_ define authorization. +Rhize requires authorization for the database service. + +1. In the left hand menu, select **Clients**, and then `{{< param db >}}`. +1. Select the **Authorization** tab. +1. Select the **Policies** sub-tab. +1. Select **Create Policy > Group**. +1. Name the policy `{{< param application_name >}}AdminGroupPolicy`. +1. Select **Add Groups**. +1. Select `{{< param application_name >}}AdminGroup`. +1. **Add**. +1. 
For **Logic**, choose `Positive`. +1. **Save**. + +### Add users + +1. In the left hand menu, select **Users**, and **Add User**. +1. Fill in the following values: + - **Username**: `system@{{< param domain_name >}}`. + - **Email**: `system@{{< param domain_name >}}`. + - **Email Verified**: `On` + - **First name**: `system` + - **Last name**: `{{< param brand_name >}}` + - **Join Groups**: `{{< param application_name >}}AdminGroup` +1. **Create**. + +Now create a user password: +1. Select the **Credentials** tab. +1. **Set Password**. +1. Enter a strong password. +1. For **Temporary**, choose `Off`. +1. **Save**. + +Repeat this process for the following accounts: + +- Audit: + - **Username**: `{{< param application_name >}}Audit@{{< param domain_name >}}` + - **Email**: `{{< param application_name >}}Audit@{{< param domain_name >}}` + - **Email Verified**: `On` + - **First name**: `Audit` + - **Last name**: `{{< param brand_name >}}` + - **Join Groups**: `{{< param application_name >}}AdminGroup` +- Core: + - **Username**: `{{< param application_name >}}Core@{{< param domain_name >}}` + - **Email**: `{{< param application_name >}}Core@{{< param domain_name >}}` + - **Email Verified**: `On` + - **First name**: `Core` + - **Last name**: `{{< param brand_name >}}` + - **Join Groups**: `{{< param application_name >}}AdminGroup` +- BPMN + - **Username**: `{{< param application_name >}}Bpmn@{{< param domain_name >}}` + - **Email**: `{{< param application_name >}}Bpmn@{{< param domain_name >}}` + - **Email Verified**: `On` + - **First name**: `Bpmn` + - **Last name**: `{{< param brand_name >}}` + - **Join Groups**: `{{< param application_name >}}AdminGroup` +- Router + - **Username**: `{{< param application_name >}}Router@{{< param domain_name >}}` + - **Email**: `{{< param application_name >}}Router@{{< param domain_name >}}` + - **Email Verified**: `On` + - **First name**: `Router` + - **Last name**: `{{< param brand_name >}}` + - **Join Groups**: `{{< param application_name >}}AdminGroup` +- Agent + - **Username**: `{{< param application_name >}}Agent@{{< param domain_name >}}` + - **Email**: `{{< param application_name >}}Agent@{{< param domain_name >}}` + - **Email Verified**: `On` + - **First name**: `Agent` + - **Last name**: `{{< param brand_name >}}` + - **Join Groups**: `{{< param application_name >}}AdminGroup` + +{{% /steps %}} + +## Next steps + +[Install services]({{< relref "services" >}}). diff --git a/content/versions/v3.2.1/deploy/install/overview.md b/content/versions/v3.2.1/deploy/install/overview.md new file mode 100644 index 000000000..8e50c08bd --- /dev/null +++ b/content/versions/v3.2.1/deploy/install/overview.md @@ -0,0 +1,46 @@ +--- +title: 'Overview' +categories: ["how-to"] +description: >- + A high-level overview of the Rhize install process. +weight: 010 +--- + +This guide walks you through how to Install Rhize and its services in a Kubernetes environment. +You can also use these docs to model automation workflows in your CI. + +> This procedure aims to be as generic and vendor-neutral as possible. +> Some configuration depends on where and how you run your IT infrastructure—what cloud provider you use, preferred auxiliary tools, and so on---so your team must adapt the process for its particular use cases. + +## Condensed instructions + +This guide has three steps, each of which has its own page. +The essential procedure is as follows: + +1. **[Set up the Kubernetes environment](/deploy/install/setup-kubernetes)**. + + 1. 
Within a Kubernetes cluster, create a new namespace.
+    1. In this namespace, add the Rhize container images and Helm repositories.
+    1. In the cluster, create passwords.
+    1. Use Helm to install Keycloak.
+
+1. **[Configure Keycloak]({{< relref "keycloak" >}})**.
+
+    1. In Keycloak, create a realm and clients for each service.
+    1. In the cluster, create secrets for the Keycloak clients.
+
+1. **[Install services]({{< relref "services" >}})**.
+
+    1. Use Helm to install {{< param db >}}.
+    1. In Keycloak, give {{< param db >}} admin permissions.
+    1. Use these admin permissions to POST the database schema.
+    1. Return to Keycloak and add the newly created permissions to the {{< param db >}} group.
+    1. Use Helm to install all other services in this sequence:
+        1. Install the service dependencies.
+        1. Edit its YAML files to override defaults as needed.
+        1. Install through Helm.
+
diff --git a/content/versions/v3.2.1/deploy/install/row-level-access-control.md b/content/versions/v3.2.1/deploy/install/row-level-access-control.md
new file mode 100644
index 000000000..9c52208e7
--- /dev/null
+++ b/content/versions/v3.2.1/deploy/install/row-level-access-control.md
@@ -0,0 +1,50 @@
+---
+title: Row Level Access Control
+description: >-
+  Instructions to configure the Rhize BAAS scope map for Row Level Access Control.
+weight: 200
+categories: "how-to"
+---
+
+Row Level Access Control (RLAC) restricts access to specific rows of data based on user roles and permissions. This provides a way to enforce fine-grained access policies and ensure that users can access only the data they are authorized to see.
+
+For example, in a contract manufacturing organization (CMO), RLAC enables the CMO to access and manage its specific data while allowing the parent company to view all data across the organization.
+
+## Configure Row Level Access Control in Rhize BAAS
+
+Configure RLAC in Rhize BAAS through the `alpha.scopemap.scopemap.json` property in the `values.yaml` file of the BAAS Helm chart (see the values sketch at the end of this section).
+The scope map is a JSON file that defines rules, actions, jurisdictions, and resources. These rules combine OpenID Connect (OIDC) roles with resources and actions across jurisdictions (modeled as {{< abbr "hierarchy scope" >}}s in ISA-95).
+
+### Example Configuration
+
+Consider the following scenario: Acme Inc. contracts part of its supply chain to a CMO. To implement RLAC:
+
+1. **Create an OIDC Role**: Define a role called `cmoAccess` in your OIDC provider (for example, Keycloak).
+2. **Define a Hierarchy Scope**: Create a hierarchy scope in Rhize called `CMO`. Apply this scope to objects or nodes in the graph that relate to the CMO.
+3. **Add a Rule to the Scope Map**: Define a rule in the `scopemap.json` file as follows:
+
+```json
+{
+  "rules": [
+    {
+      "id": "cmo-data-access",
+      "description": "CMO data access to CMO hierarchy scoped resources and entities",
+      "roles": ["cmoAccess"],
+      "actions": ["query", "mutation"],
+      "jurisdictions": ["CMO"]
+    },
+    ...other rules here...
+  ],
+  "actions": [
+    "delete",
+    "mutation",
+    "query"
+  ],
+  ...
+}
+```
+
+4. **Assign Roles to Users**: As users sign up in Keycloak, assign them the `cmoAccess` role. This grants them permission to access equipment and data within the `CMO` hierarchy scope, while restricting access to data outside this scope.
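+
+As a sketch only, the corresponding override in the BAAS Helm values might look like the following. This assumes the chart accepts the scope map as an inline JSON string under the `alpha.scopemap.scopemap.json` property named above; check your chart's `values.yaml` for the exact structure your version expects.
+
+```yaml
+# baas.yaml: hypothetical override sketch, not a definitive layout.
+# The property path alpha.scopemap.scopemap.json comes from the section above;
+# how the JSON is embedded (inline string shown here) depends on your chart version.
+alpha:
+  scopemap:
+    scopemap.json: |
+      {
+        "rules": [
+          {
+            "id": "cmo-data-access",
+            "description": "CMO data access to CMO hierarchy scoped resources and entities",
+            "roles": ["cmoAccess"],
+            "actions": ["query", "mutation"],
+            "jurisdictions": ["CMO"]
+          }
+        ],
+        "actions": ["delete", "mutation", "query"]
+      }
+```
+
+After updating the overrides, apply them with the same `helm upgrade --install -f baas.yaml` command used when you [install the database]({{< relref "services" >}}).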
diff --git a/content/versions/v3.2.1/deploy/install/services.md b/content/versions/v3.2.1/deploy/install/services.md new file mode 100644 index 000000000..6cc3fad43 --- /dev/null +++ b/content/versions/v3.2.1/deploy/install/services.md @@ -0,0 +1,512 @@ +--- +title: Install Rhize services +description: >- + Instructions to install services in the Rhize Kubernetes cluster. +weight: 100 +categories: "how-to" +--- + +The final installation step is to install the Rhize services in your Kubernetes cluster. + +> [!NOTE] +> For the recommended compute per pod for each service, refer to [Cluster sizing]({{< relref "../cluster-sizing" >}}) + +## Prerequisites + +This topic assumes you have done the following: +- [Set up Kubernetes]({{< relref "setup-kubernetes" >}}) and [Configured Keycloak]({{< relref "keycloak" >}}). All the prerequisites for those topics apply here. +- Configured load balancing for the following DNS records: + + {{< reusable/default-urls >}} + + _Note that `rhize-` is only the recommended prefix of the subdomain. Your organization may use something else._ + + +### Overrides + +Each service is installed through a Helm YAML file. +For some of these services, you might need to edit this file to add credential information and modify defaults. + +Common values that are changed include: +- URLs and URL links +- The number of replicas running for each pod +- Ingress values for services exposed on the internet + +## Get client secrets + +Client secrets are necessary for Rhize services to authenticate with Keycloak. These secrets are stored with Kubernetes secrets. + +1. Go to Keycloak and get the secrets for each client you've created. +1. Create Kubernetes secrets for each service. You can either create a secret file, or pass raw data from the command line. + + {{< callout type="caution" >}} + How you create Kubernetes secrets **depends on your implementation details and security procedures.** + For guidance, refer to the official Kubernetes topic, [Managing Secrets using `kubectl`](https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/). + {{< /callout >}} + + With raw data, the command might look something like this: + + ```bash + kubectl create secret generic {{< param application_name >}}-client-secrets -n {{< param application_name >}} \ + --from-literal=dashboard=VKIZ6zkQYyPutDzWqIZ9uIEnQRviyqsS \ + --from-literal={{< param application_name >}}Audit=q8JBjuEefWTmhv9IX4KKYxNtXXnYtDPD \ + --from-literal={{< param application_name >}}Baas=KYbMHlRLhXwiDNFuDCl3qtPj1cNdeMSl \ + --from-literal={{< param application_name >}}Bpmn=7OrjB7FhOdsNeb819xzEDBbMyVb6kNdr \ + --from-literal={{< param application_name >}}Core=SH28Wlx2uEXcgf1NffStbmSuruxvrpi6 \ + --from-literal={{< param application_name >}}UI=0qQ7c1EtOKvwsAcpd0xYIvle4zsMcGRq \ + --from-literal={{< param application_name >}}Router=0qQ7c1EtOKvwsAcpd0xYIvle4zsMcGRq + ``` + +1. Create secrets for login passwords. Each service with its own user in Keycloak can have its password supplied through Kubernetes secrets. + + As you install services through Helm, their respective YAML files reference these secrets. + +## Install and add roles for the DB {#db} + +You must install the {{< param db >}} database service first. +You also need to configure the {{< param db >}} service to have roles in Keycloak. + +If enabling the Audit Trail, also the include the configuration in [Enable change data capture](#enable-change-data-capture). 
+ +If you need Row Level Access Control, [configure your scope map]({{< relref "row-level-access-control.md" >}}). + +1. Modify the DB Helm file with your code editor. Edit any necessary overrides. + + +1. Use Helm to install the database: + + ```bash + helm install -f baas.yaml {{< param application_name >}}-baas {{< param application_name >}}/baas -n {{< param application_name >}} + ``` + + To confirm it works, run the following command: + + ```bash + kubectl get pods + ``` + + All statuses should be `RUNNING`. + + +1. Return to the Keycloak UI and add all `{{< param application_name >}}` roles to the admin group. + +1. Proxy the `http:8080` port on `{{< param application_name >}}-baas-dgraph-alpha`. + + ``` + kubectl port-forward -n {{< param application_name >}} pod/baas-baas-alpha-0 8080:8080 + ``` + +1. Get a token using the credentials. With `curl`, it looks like this: + + ```bash + curl --location --request POST 'https://- + auth.{{< param application_name >}}/realms/{{< param application_name >}}/protocol/openid-connect/token' \ + --header 'Content-Type: application/x-www-form-urlencoded' \ + --data-urlencode 'grant_type=password' \ + --data-urlencode 'username=system@{{< param application_name >}}.com' \ + --data-urlencode 'password=' \ + --data-urlencode 'client_id={{< param application_name >}}Baas' \ + --data-urlencode 'client_secret=' + ``` + +1. Post the schema: + + ```bash + curl --location --request POST 'http://localhost:/admin/schema' \ + --header 'Authorization: Bearer ' \ + --header 'Content-Type: application/octet-stream' \ + --data-binary '@' + ``` + + This creates more roles. + +1. Go to Keycloak UI and add all new {{< param db >}} roles to the `ADMIN` group. + +If the install is successful, the Keycloak UI is available on its +[default port]({{< relref "../../reference/default-ports" >}}). + + +## Install services + +Each of the following procedures installs a service through Helm. + +The syntax to install a Rhize service must have arguments for the following: +- The chart YAML file +- The packaged chart +- The path to the unpackaged chart or directory + +Additionally, use the `-n` flag to ensure that the install is scoped to the correct namespace: + + +``` +helm install \ + -f .yaml \ + \ + -n +``` + +For the full configuration options, +read the official [Helm `install` reference](https://helm.sh/docs/helm/helm_install/). + + +### NATS {#nats} + + + +[NATS](https://nats.io) is the message broker that powers Rhize's event-driven architecture. + +Install NATS with these steps: + +1. If it doesn't exist, add the NATS repository: + + ```bash + helm repo add nats https://nats-io.github.io/k8s/helm/charts/ + ``` + +1. Modify the NATS Helm file with your code editor. Edit any necessary overrides. +1. Install with Helm: + + ``` + helm install nats -f nats.yaml nats/nats -n {{< param application_name >}} + ``` + + +### Tempo + +Rhize uses [Tempo](https://grafana.com/oss/tempo/) to trace BPMN processes. + +Install Tempo with these steps: + +1. If it doesn't exist, add the Tempo repository: + + ```bash + helm repo add grafana https://grafana.github.io/helm-charts + ``` + +1. Modify the Helm file as needed. +1. Install with Helm: + + ```bash + helm install tempo -f tempo.yaml grafana/tempo -n {{< param application_name >}} + ``` + +### Core + +The {{< param brand_name >}} Core service is the custom edge agent that monitors data sources, like OPC-UA servers, and publishes and subscribes topics to NATS. 
+ +> **Requirements**: Core requires the [{{< param db >}}](#db) and [NATS](#nats) services. + +Install the Core agent with these steps: + +1. In the `core.yaml` Helm file, edit the `clientSecret` and `password` with settings from the Keycloak client. +1. Override any other values, as needed. +1. Install with Helm: + + ```bash + helm install core -f core.yaml {{< param application_name >}}/core -n {{< param application_name >}} + ``` + +### BPMN + +The BPMN service is the custom engine Rhize uses to process low-code workflows modeled in the BPMN UI. + +> **Requirements**: The BPMN service requires the [{{< param db >}}](#db), [NATS](#nats), and [Tempo](#tempo) services. + +Install the BPMN engine with these steps: + +1. Open `bpmn.yaml` Update the `clientSecret` and `password` for your BPMN Keycloak credentials. +1. Modify any other values, as needed. +1. Install with Helm: + + ```bash + helm install bpmn -f bpmn.yaml {{< param application_name >}}/bpmn -n {{< param application_name >}} + ``` + +### Router + +Rhize uses the [Apollo router](https://www.apollographql.com/docs/router) to unite queries for different services in a single endpoint. + +> **Requirements:** Router requires the [GraphDB](#db), [BPMN](#bpmn), and [Core](#core) services. + +Install the router with these steps: + +1. Modify the router Helm YAML file as needed. +1. Install with Helm: + + ```bash + helm install router -f router.yaml {{< param application_name >}}/router -n {{< param application_name >}} + ``` + +If the install is successful, the Router explorer is available on its +[default port]({{< relref "../../reference/default-ports" >}}). + +### Grafana + +Rhize uses [Grafana](https://grafana.com) for its dashboard to monitor real time data. + +Install Grafana with these steps: + +1. Modify the Grafana Helm YAML file as needed. + +1. Add the Helm repository + ```bash + helm repo add grafana https://grafana.github.io/helm-charts + ``` + +1. Install with Helm: + + ```bash + helm install grafana -f grafana.yaml grafana/grafana -n {{< param application_name >}} + ``` + +If the install is successful, the Grafana service is available on its +[default port]({{< relref "../../reference/default-ports" >}}). + +### Agent + +The Rhize agent bridges your plant processes with the Rhize data hub. +It collects data emitted from the plant and publishes it to the NATS message broker. + +> **Requirements:** Agent requires the [Graph DB](#db), [Nats](#nats), and [Tempo](#tempo) services. + +Install the agent with these steps: + +1. Modify the Agent Helm file as needed. + +1. In the Rhize UI, add a Data Source for Agent to interact with: + - In the lefthand menu, open **Master Data > Data Sources > + Create Data Source**. + - Input a name for the Data Source. + - Add a Connection String and Create. + - Add any relevant Topics. + - Activate the Data Source. + +1. Install with Helm: + + ```bash + helm install agent -f agent.yaml {{< param application_name >}}/agent -n {{< param application_name >}} + ``` + +## Install Admin UI + +The Admin UI is the graphical frontend to [handle events]({{< relref "../../how-to/bpmn" >}}) and [define work masters]({{< relref "../../how-to/model" >}}). + +> **Requirements:** The UI requires the [GraphDB](#db), [BPMN](#bpmn), [Core](#core), and [Router](#router) services. + +After installing all other services, install the UI with these steps: + +1. Forward the port from the Router API. In the example, this forwards Router traffic to port `4000` on `localhost`. 
+ + ```bash + kubectl port-forward svc/router 4000:4000 -n {{< param application_name >}} + ``` + +1. Open the Admin UI Helm file. Update the `envVars` object to reflect the URL for Router and Keycloak. If following the prior examples for port-forwarding, it will look something like this: + + ```yaml + envVars: + APP_APOLLO_CLIENT: "http://localhost:4000" + APP_APOLLO_CLIENT_ADMIN: "http://localhost:4000" + APP_AUTH_KEYCLOAK_SERVER_URL: "http://localhost:8080" + ``` + +1. Modify any other values, as needed. +1. Install with Helm: + + ```bash + helm install admin-ui -f admin-ui.yaml {{< param application_name >}}/admin-ui -n {{< param application_name >}} + ``` + +If the install is successful, Admin UI is available on its +[default port]({{< relref "../../reference/default-ports" >}}). + +## Optional: Audit Trail service + + +The Rhize [Audit]({{< relref "../../how-to/audit" >}}) service provides an audit trail for database changes to install. The Audit service uses PostgreSQL for storage. + +Install Audit Service with these steps: + +1. Modify the Audit trail Helm YAML file. It is *recommended* to change the PostgreSQL username and password values. + +2. Install with Helm: + + ```bash + helm install audit -f audit.yaml {{< param application_name >}}/audit -n {{< param application_name >}} + ``` + +3. Create partition tables in the PostgreSQL database: + + ```sql + create table public.audit_log_partition( like public.audit_log ); + select partman.create_parent( p_parent_table := 'public.audit_log', p_control := 'time', p_interval := '1 Month', p_template_table := 'public.audit_log_partition'); + ``` + +For details about maintaining the Audit trail, read [Archive the PostgresQL Audit trail]({{< relref "../maintain/audit/" >}}). + +### Enable change data capture + +The Audit trail requires [change data capture (CDC)]({{< relref "../../how-to/publish-subscribe/track-changes" >}}) to function. To enable CDC in {{< param application_name >}} BAAS, include the following values for the Helm chart overrides: + +```yaml +alpha: + # Change Data Capture (CDC) + cdc: + # Enable + enabled: true + # If configured for security, configure in NATS url. For example `nats://username:password@nats:4222` + nats: nats://nats:4222 + # Adjust based on high-availability requirements and cluster size. + replicas: 1 +``` + +### Enable Audit subgraph + +To use the Audit trail in the UI, you must add the Audit trail subgraph into the router. To enable router to use and compose the subgraph: + +1. Update the Router Helm chart overrides, `router.yaml`, to include: + + ```yaml + # Add Audit to the router subgraph url override + router: + configuration: + override_subgraph_url: + AUDIT: http://audit.{{< param application_name >}}.svc.cluster.local:8084/query + + # If supergraph compose is enabled + supergraphCompose: + supergraphConfig: + subgraphs: + AUDIT: + routing_url: http://audit.{{< param application_name >}}.svc.cluster.local:8084/query + schema: + subgraph_url: http://audit.{{< param application_name >}}.svc.cluster.local:8084/query + ``` + +2. Update the Router deployment + +```shell +$ helm upgrade --install router -f router.yaml {{< param application_name >}}/router -n {{< param application_name >}} +``` + +## Optional: calendar service + +The [{{< param brand_name >}} calendar service]({{< relref "../../how-to/work-calendars">}}) monitors work calendar definitions and creates work calendar entries in real time, both in the [Graph](#db) and time-series databases. 
+ +> **Requirements:** The calendar service requires the [GraphDB](#db), [Keycloak](#keycloak), and [NATS](#nats) services. + +{{% callout type="info" %}} +The work calendar requires a time-series DB installed such as [InfluxDB](https://influxdata.com/), [QuestDB](https://questdb.io) or [TimescaleDB](https://www.timescale.com/). The following instructions are specific to QuestDB. +{{% /callout %}} + +Install the calendar service with these steps: + +1. Create tables in the time series. For example: + + + ```sql + CREATE TABLE IF NOT EXISTS PSDT_POT( + EquipmentId SYMBOL, + EquipmentVersion STRING, + WorkCalendarId STRING, + WorkCalendarIid STRING, + WorkCalendarDefinitionId STRING, + WorkCalendarDefinitionEntryId STRING, + WorkCalendarDefinitionEntryIid STRING, + WorkCalendarEntryId STRING, + WorkCalendarEntryIid SYMBOL, + HierarchyScopeId STRING, + EntryType STRING, + ISO22400CalendarState STRING, + isDeleted boolean, + updatedAt TIMESTAMP, + time TIMESTAMP, + lockerCount INT, + lockers STRING + ) TIMESTAMP(time) PARTITION BY month + DEDUP UPSERT KEYS(time, EquipmentId, WorkCalendarEntryIid); + + CREATE TABLE IF NOT EXISTS PDOT_PBT( + EquipmentId SYMBOL, + EquipmentVersion STRING, + WorkCalendarId STRING, + WorkCalendarIid STRING, + WorkCalendarDefinitionId STRING, + WorkCalendarDefinitionEntryId STRING, + WorkCalendarDefinitionEntryIid STRING, + WorkCalendarEntryId STRING, + WorkCalendarEntryIid SYMBOL, + HierarchyScopeId STRING, + EntryType STRING, + ISO22400CalendarState STRING, + isDeleted boolean, + updatedAt TIMESTAMP, + time TIMESTAMP, + lockerCount INT, + lockers STRING + ) TIMESTAMP(time) PARTITION BY month + DEDUP UPSERT KEYS(time, EquipmentId, WorkCalendarEntryIid); + + CREATE TABLE IF NOT EXISTS Calendar_AdHoc( + EquipmentId SYMBOL, + EquipmentVersion STRING, + WorkCalendarId STRING, + WorkCalendarIid STRING, + WorkCalendarDefinitionId STRING, + WorkCalendarDefinitionEntryId STRING, + WorkCalendarDefinitionEntryIid STRING, + WorkCalendarEntryId STRING, + WorkCalendarEntryIid SYMBOL, + HierarchyScopeId STRING, + EntryType STRING, + ISO22400CalendarState STRING, + isDeleted boolean, + updatedAt TIMESTAMP, + time TIMESTAMP, + lockerCount INT, + lockers STRING + ) TIMESTAMP(time) PARTITION BY month + DEDUP UPSERT KEYS(time, EquipmentId, WorkCalendarEntryIid); + ``` + +1. Modify the calendar YAML file as needed. + +1. Deploy with helm + + ```bash + helm install calendar-service -f calendar-service.yaml {{< param application_name >}}/calendar-service -n {{< param application_name >}} + ``` + +## Optional: change service configuration + +The services installed in the previous step have many parameters that you can configure for your performance and deployment requirements. +Review the full list in the [Service configuration]({{< relref "../../reference/service-config" >}}) reference. + +## Troubleshoot + +For general Kubernetes issues, the [Kubernetes dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) is great for troubleshooting, and you can configure it to be accessible through the browser. + +For particular problems, try these commands: + +- **Is my service running?** + + To check deployment status, use this command: + + ```bash + kubectl get deployments + ``` + + Look for the pod name and its status. + +- **Access service through browser** + + Some services are accessible through the browser. + To access them, visit local host on the service's [default port]({{< relref "../../reference/default-ports" >}}). 
+ +- **I installed a service too early**. + If you installed a service too early, use Helm to uninstall: + + ```bash + helm uninstall {{< param db >}} + ``` + + Then perform the steps you need and reinstall when ready. diff --git a/content/versions/v3.2.1/deploy/install/setup-kubernetes.md b/content/versions/v3.2.1/deploy/install/setup-kubernetes.md new file mode 100644 index 000000000..6e4048e83 --- /dev/null +++ b/content/versions/v3.2.1/deploy/install/setup-kubernetes.md @@ -0,0 +1,117 @@ +--- +title: 'Set up Kubernetes' +date: '2023-09-22T14:49:53-03:00' +categories: ["how-to"] +description: + How to install Rhize services on your Kubernetes cluster. +weight: 050 +--- + +This guide shows you how to install Rhize services on your Kubernetes cluster. +You can also use this procedure as the model for an automation workflow in your CI. + + +## Prerequisites {#prereqs} + +Before starting, ensure that you have the following technical requirements. + +**Software requirements**: +- [`kubectl`](https://kubernetes.io/docs/tasks/tools/) +- [Helm](https://helm.sh) +- Curl, or some similar program to make HTTP requests from the command line + +**Access requirements**: +- Administrative privileges for a running Kubernetes cluster in your environment. + Your organization must set this up. +- Access to Rhize Helm charts and its build repository. + Rhize provides these to all customers. + +**Optional utilities.** +For manual installs, the following auxiliary tools might make +the experience a little more human friendly: +{{% param pre_reqs %}} + + Again, these are helpers, not requirements. + You can install everything with only the `kubectl` and `helm` commands. + + +## Steps to set up Kubernetes + +First, record your site and environment. +Then, follow these steps. + +1. Create a namespace called {{< param application_name >}}. + + ```bash + kubectl create ns {{< param application_name >}} + ``` + + Confirm it works with `kubectl get ns`. + + On success, the output shows an active `{{< param application_name >}}` namespace. + +1. Set this namespace as a default with + + ```bash + kubectl config set-context --current --namespace={{< param application_name >}} + ``` + + Alternatively, you can modify the kube `config` file or use the `kubens` tool. + +1. Add the Rhize Helm Chart Repository: + + ```bash + helm repo add \ + --username \ + --password \ + {{< param application_name >}} \ + https://gitlab.com/api/v4/projects/42214456/packages/helm/stable + ``` + +1. Create the container image pull secret: + + ```bash + kubectl create secret docker-registry {{< param application_name >}}-registry-credential \ + --docker-server= \ + --docker-password= \ + --docker-email= + ``` + + Confirm the secrets with this command: + + ```bash + kubectl get secrets + ``` + +1. Add the Bitnami Helm repository: + + ```bash + helm repo add bitnami https://charts.bitnami.com/bitnami + ``` + + And update repositories with: + + ```bash + helm repo update + ``` + +1. Pull the build template repository (we will supply this). + +1. Update overrides to `keycloak.yaml`. Then install with this command: + + ```bash + helm install keycloak -f ./keycloak.yaml bitnami/keycloak -n libre + ``` + +> Note: Version may have to be specified by appending on `--version` and the desired chart version. + +1. Set up port forwarding from Keycloak. For example, this forwards traffic to port `8080` on `localhost`. + + ```bash + kubectl port-forward svc/keycloak 8080:80 -n libre + ``` + +## Next steps + +1. 
[Configure Keycloak]({{< relref "keycloak.md" >}}) +1. [Install services]({{< relref "services.md" >}}). diff --git a/content/versions/v3.2.1/deploy/maintain/_index.md b/content/versions/v3.2.1/deploy/maintain/_index.md new file mode 100644 index 000000000..6d090b4c5 --- /dev/null +++ b/content/versions/v3.2.1/deploy/maintain/_index.md @@ -0,0 +1,19 @@ +--- +date: "2024-03-26T19:35:35+11:00" +title: Maintain +description: Guides to maintain your data on Rhize +categories: ["how-to"] +weight: 250 +--- + +Maintenance is critical to ensure reliability over time. + +These guides show you how to maintain different services and data on Rhize. +They also serve as blueprints for automation. + +Your organization must determine how you maintain your services, and how often you archive or remove data. +The correct practice here is highly contextual, depending on the size of the data, the importance of the data, and the general regulatory and governance demands of your industry. + + + +{{< card-list >}} diff --git a/content/versions/v3.2.1/deploy/maintain/audit.md b/content/versions/v3.2.1/deploy/maintain/audit.md new file mode 100644 index 000000000..3bb7d26b8 --- /dev/null +++ b/content/versions/v3.2.1/deploy/maintain/audit.md @@ -0,0 +1,74 @@ +--- +title: 'Archive the PostgreSQL Audit trail' +date: '2024-03-26T11:20:56-03:00' +categories: ["how-to"] +description: How to archive a partition of the Audit trail on your Rhize deployment +weight: 100 +--- + +The [audit trail]({{< relref "../../how-to/audit" >}}) can generate a high volume of data, so it is a good practice to periodically _archive_ portions of it. +An archive separates a portion of the data from the database and keeps it for long-term storage. This process involves the use of PostgreSQL [Table Partitions](https://www.postgresql.org/docs/current/ddl-partitioning.html). + +Archiving a partition improves query speed for current data, while providing a cost-effective way to store older data. + + +## Prerequisites + +Before you start, ensure you have the following: + +- A designated backup location, for example `~/rhize-archives/libre-audit`. +- Access to the [Rhize Kubernetes Environment]({{< relref "../install/setup-kubernetes" >}}) + +{{% param pre_reqs %}} + +Also, before you start, confirm you are in the right context and namespace. + +{{% param k8s_cluster_ns %}} + +## Steps + +To archive the PostgreSQL Audit trail, follow these steps: + +1. Record the `` of the partition you wish to detach and archive. + This is based on the retention-period query for the names of the existing partitions: + + ```bash + kubectl exec -i audit-postgres-0 -- psql -h localhost \ + -d audit -U \ + -c "select * from partman.show_partitions('public.audit_log')" + ``` + +1. Detach the target partitions from the main table: + + ```bash + + kubectl exec -i audit-postgres-0 -- psql -h localhost \ + -d audit -U \ + -c 'alter table audit_log detach partition ;' + + ``` + +1. Backup the partition table: + + ```bash + pg_dump -U -h audit-postgres-0 -p5433 \ + --file ./audit-p20240101.sql --table public.audit_log_p20240101 audit + ``` + + On success, the backup creates a GZIP file, `.sql`. + To check that the backup succeeded, unzip the files and inspect the data. + +1. 
Drop the partition table to remove it from the database: + + ```bash + kubectl exec -i audit-postgres-0 -- psql -h localhost -d audit \ + -U -c 'drop table ;' + ``` + +## Next Steps + +- For full backups or Rhize services, read how to back up: + - [Keycloak]({{< relref "../backup/keycloak" >}}) + - [The Audit trail]({{< relref "../backup/audit" >}}) + - [Grafana]({{< relref "../backup/grafana" >}}) + - [The Graph Database]({{< relref "../backup/graphdb" >}}) diff --git a/content/versions/v3.2.1/deploy/maintain/bpmn-nodes.md b/content/versions/v3.2.1/deploy/maintain/bpmn-nodes.md new file mode 100644 index 000000000..e6cfbf47f --- /dev/null +++ b/content/versions/v3.2.1/deploy/maintain/bpmn-nodes.md @@ -0,0 +1,45 @@ +--- +title: "BPMN execution recovery" +weight: 200 +description: >- + If a BPMN node suddenly fails, Rhize has a number of recovery methods to ensure that the workflow finishes executing. +categories: ["concepts"] +--- + +[{{< abbr "BPMN" >}} processes]({{< relref "../../how-to/bpmn" >}}) often have longer execution durations and many steps. +If a BPMN node suddenly fails (for example through a panic or loss of power), +Rhize needs to ensure that the workflow completes. + +To achieve high availability and resiliency, Rhize services run in [Kubernetes nodes](https://kubernetes.io/docs/concepts/architecture/nodes/), and the NATS message broker typically has [data replication](https://docs.nats.io/running-a-nats-service/nats_admin/jetstream_admin/replication). +As long as the remaining BPMN nodes are not already at full processing capacity, +if a BPMN node fails while executing a process, +the Rhize system recovers and finishes the workflow. + +This recovery is automatic, though users may experience an execution gap of up to 30 seconds. + +## BPMN failure and recovery modes + +How Rhize recovers from a halted process depends on where the system failed. + +### BPMN node failure + +If a BPMN container suddenly fails, the process that was currently executing times out after 30 seconds. +As long as the node had not been running for [longer than 10 minutes](#bpmn-age-out), +NATS re-sends the message to another BPMN node and the process finishes. + +### NATS node unavailable + +If the NATS node fails, recovery depends on your replication and backup strategy. + +- If the stream has R3 replication or greater, a new NATS node picks up the process. No noticeable performance issues should occur. + +- If the stream has no replication, everything in the node is lost. However, if you took a snapshot of a stream with `nats stream backup` before the node became unavailable, and the `WorkflowSpecifications` KV is the same at backup and restore sites, then you can use the `nats stream restore` command to replay the stream from when the backup was made. + +To learn more, read the NATS topic on [Disaster recovery](https://docs.nats.io/running-a-nats-service/nats_admin/jetstream_admin/disaster_recovery). + +## All BPMN elements age out after ten minutes {#bpmn-age-out} + +If an element in a BPMN workflow takes longer than 10 minutes, NATS ages the workflow out of the queue. The process continues, but if the pod executing the element dies or is interrupted, that workflow is permanently dropped. + +This ten-minute execution limit should be sufficient for any element in a BPMN process. 
+
+Processes that take longer, such as cooling or fermentation periods, should be implemented as [BPMN event triggers]({{< relref "../../how-to/bpmn/bpmn-elements" >}}) or as polls that periodically check data sources between intervals of sleep.
diff --git a/content/versions/v3.2.1/deploy/maintain/keycloak-events.md b/content/versions/v3.2.1/deploy/maintain/keycloak-events.md
new file mode 100644
index 000000000..f3429e3ce
--- /dev/null
+++ b/content/versions/v3.2.1/deploy/maintain/keycloak-events.md
@@ -0,0 +1,77 @@
+---
+title: Export Keycloak events
+description: Guide to export events from Keycloak
+---
+
+Keycloak stores User and Admin event data in its database. This information can be valuable for your audits.
+
+This guide shows you how to export your Keycloak events to a file.
+To read Keycloak event data, use its [Admin CLI](https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_administration_guide/admin_cli). You can access the CLI from within the Keycloak container.
+
+## Prerequisites
+
+Ensure you have the following:
+- The ability to run commands in a Keycloak container or pod.
+- A Keycloak admin username and password.
+
+## Procedure
+
+To export Keycloak events, first open a shell in your Keycloak container or pod. For example, in Kubernetes or Docker:
+
+{{< tabs items="Kubernetes,Docker" >}}
+
+{{% tab "Kubernetes" %}}
+```sh
+kubectl exec -it keycloak_pod_name -n namespace_name -- /bin/sh
+```
+{{% /tab %}}
+
+{{% tab "Docker" %}}
+```sh
+docker exec -it keycloak_container_name /bin/sh
+```
+{{% /tab %}}
+
+{{< /tabs >}}
+
+Then follow these steps:
+
+1. Change to the directory that contains the Admin CLI script. By default, this is `/opt/bitnami/keycloak/bin`.
+2. Run `./kcadm.sh get realms/libre/events --server http://localhost:8080 --realm master --user <admin-username>`. Replace `<admin-username>` with the Keycloak admin username.
+   If the Keycloak port differs from the default, replace `:8080` with the configured port number.
+3. When prompted, enter the Keycloak admin password.
+
+On success, event data prints to the console.
+
+## Write event data to file
+
+The event output can be long.
+You can use the following commands to write the data to a file (replacing `<admin-password>` with the Keycloak admin password).
+
+{{< tabs items="Kubernetes,Docker" >}}
+{{% tab "Kubernetes" %}}
+
+```shell
+kubectl exec -it keycloak_pod_name -n namespace_name -- \
+  /bin/sh -c "cd /opt/bitnami/keycloak/bin && (echo \"<admin-password>\" \
+  | ./kcadm.sh get realms/libre/events --server http://localhost:8080 \
+  --realm master --user admin)" \
+  | sed '1,2d' > output.json
+```
+
+{{% /tab %}}
+
+{{% tab "Docker" %}}
+
+```shell
+docker exec -it keycloak_container_name \
+  /bin/sh -c "cd /opt/bitnami/keycloak/bin && (echo \"<admin-password>\" \
+  | ./kcadm.sh get realms/libre/events --server http://localhost:8080 \
+  --realm master --user admin)" \
+  | sed '1,2d' > output.json
+```
+
+{{% /tab %}}
+{{< /tabs >}}
diff --git a/content/versions/v3.2.1/deploy/restore/_index.md b/content/versions/v3.2.1/deploy/restore/_index.md
new file mode 100644
index 000000000..faae7da82
--- /dev/null
+++ b/content/versions/v3.2.1/deploy/restore/_index.md
@@ -0,0 +1,17 @@
+---
+date: "2023-09-12T19:35:35+11:00"
+title: Restore
+description: Guides to restore your data on Rhize
+categories: ["how-to"]
+cascade:
+  icon: database
+weight: 200
+---
+
+These guides show you how to restore data from [backup]({{< relref "../backup" >}}).
+They also serve as blueprints for automation.
+
+Even if you don't need to restore data, it's a good practice to test restoration periodically.
+
+{{< card-list >}}
diff --git a/content/versions/v3.2.1/deploy/restore/audit.md b/content/versions/v3.2.1/deploy/restore/audit.md
new file mode 100644
index 000000000..d94e189cf
--- /dev/null
+++ b/content/versions/v3.2.1/deploy/restore/audit.md
@@ -0,0 +1,54 @@
+---
+title: 'Restore Audit backup'
+date: '2024-03-26T11:20:56-03:00'
+categories: ["how-to"]
+description: How to restore the backup of the Audit PostgreSQL database on your Rhize deployment
+weight: 300
+---
+
+This guide shows you the procedure to restore your Audit PostgreSQL database in your Rhize Kubernetes deployment.
+
+## Prerequisites
+
+Before you start, ensure you have the following:
+
+- [`kubectl`](https://kubernetes.io/docs/tasks/tools/)
+- An [Audit PostgreSQL backup]({{< relref "../backup/audit" >}})
+
+Also, before you start, confirm you are in the right context and namespace.
+
+{{% param k8s_cluster_ns %}}
+
+## Steps
+
+To restore Audit PostgreSQL, follow these steps:
+
+1. Confirm the cluster and namespace are correct:
+
+    {{% param "k8s_cluster_ns" %}}
+
+1. Retrieve the Audit user password using the following command:
+
+    ```bash
+    kubectl get secret -o jsonpath="{.data.}" | base64 --decode
+    ```
+
+1. Extract your backup file:
+
+    ```bash
+    gzip -d audit-postgres-backup-YYYYMMDDTHHMMAA.sql
+    ```
+
+1. Restore the backup:
+
+    ```bash
+    cat audit-postgres-backup-YYYYMMDDTHHMMAA.sql | kubectl exec -i audit-postgres-0 -- psql postgresql://postgres:@localhost:5432 -U
+    ```
+
+## Next Steps
+
+- Test the [Backup Audit]({{< relref "../backup/audit" >}}) procedure.
+- Plan and execute a [Maintenance Strategy]({{< relref "../maintain/audit" >}}) to handle your audit data.
diff --git a/content/versions/v3.2.1/deploy/restore/binary.md b/content/versions/v3.2.1/deploy/restore/binary.md
new file mode 100644
index 000000000..63fe383d6
--- /dev/null
+++ b/content/versions/v3.2.1/deploy/restore/binary.md
@@ -0,0 +1,74 @@
+---
+title: 'Restore the GraphDB from S3'
+date: '2023-10-19T13:52:23-03:00'
+categories: ["how-to"]
+description: How to restore a backup of the Rhize Graph DB from Amazon S3.
+weight: 200
+---
+
+This guide shows you how to restore the Graph database from Amazon S3 to your Rhize environment.
+
+## Prerequisites
+
+Before you start, ensure you have the following:
+
+- The GraphDB Helm chart
+- [`kubectl`](https://kubernetes.io/docs/tasks/tools/)
+- A [Database backup]({{< relref "../backup/binary" >}})
+
+## Steps
+
+1. Set the following environment variables:
+    - `AWS_ACCESS_KEY_ID`: your AWS access key ID, with permissions to access the backup bucket
+    - `AWS_SECRET_ACCESS_KEY`: your AWS secret access key
+    - `AWS_SESSION_TOKEN`: your AWS session token (if required)
+
+1. Confirm the cluster and namespace are correct.
+
+    {{% param k8s_cluster_ns %}}
+
+1. Upgrade or install the Helm chart.
+
+    ```bash
+    helm upgrade --install -f baas.yaml {{< param application_name >}}-baas {{< param application_name >}}/baas -n {{< param application_name >}}
+    ```
+
+1. Wait for `{{< param application_name >}}-baas-alpha-0` to start serving the GraphQL API.
+
+1. Make a POST request to your Keycloak `/token` endpoint to get an `access_token` value.
+ For example, with `curl` and `jq`: + + ```bash + ## replace USERNAME and PASSWORD with your credentials + USERNAME=backups@libremfg.com \ + && PASSWORD=password \ + && curl --location \ + --request POST "${BAAS_OIDC_URL}/realms/libre/protocol/openid-connect/token" \ + --header 'Content-Type\ application/x-www-form-urlencoded' \ + --data-urlencode 'grant_type=password' \ + --data-urlencode "username=" \ + --data-urlencode "password=" \ + --data-urlencode "client_id=" \ + --data-urlencode "client_secret=" | jq .access_token + ``` + +1. Using the token from the previous step, send a POST to `:8080/admin` to retrieve a list of available backups from the s3 bucket. + + ```bash + curl --location 'http://alpha-0:8080/admin' \ + --header 'Content-Type: application/json' \ + --header 'Authorization: Bearer ' \ + --data '{"query":"query {\n\tlistBackups(input: {location: \"s3://s3..amazonaws.com/\"}) {\n\t\tbackupId\n\t\tbackupNum\n\t\tencrypted\n\t\tpath\n\t\tsince\n\t\ttype\n readTs\n\t}\n}","variables":{}}' + ``` + +1. Using the backup id and token from the previous step, send a POST to `:8080/admin` to start the restore from the s3 bucket to the alpha node. + For example, with `curl`: + + ```bash + curl --location 'http://alpha-0:8080/admin' \ + --header 'Content-Type: application/json' \ + --header 'Authorization: Bearer ' \ + --data '{"query":"mutation{\n restore(input:{\n location: \"s3://s3..amazonaws.com/\",\n backupId: \"\"\n }){\n message\n code\n }\n}","variables":{}}' + ``` diff --git a/content/versions/v3.2.1/deploy/restore/grafana.md b/content/versions/v3.2.1/deploy/restore/grafana.md new file mode 100644 index 000000000..55c045f9f --- /dev/null +++ b/content/versions/v3.2.1/deploy/restore/grafana.md @@ -0,0 +1,103 @@ +--- +title: 'Restore Grafana' +date: '2023-10-19T13:52:23-03:00' +categories: ["how-to"] +description: How to restore a Grafana backup on Rhize +weight: 300 +--- + +This guide shows you how to restore Grafana in your Rhize environment. + +## Prerequisites + +Before you start, ensure you have the following: + +- [`kubectl`](https://kubernetes.io/docs/tasks/tools/) +- A [Grafana backup]({{< relref "../backup/grafana" >}}) + +## Steps + +1. Confirm the cluster and namespace are correct: + + {{% param "k8s_cluster_ns" %}} + +1. If a checksum file does not exist for the latest backups, create one: + + ```bash + sha256sum .tar.gz .tar.gz > backup.sums + ``` +1. Copy the checksum file into the new Grafana Pod within the `/home/grafana` directory: + + ```bash + kubectl cp ./backup.sums \ + :/home/grafana + ``` + +1. Copy the Grafana data tar file into the new Grafana Pod within the `/home/grafana` directory: + + ```bash + kubectl cp ./.tar.gz \ + :/home/grafana + ``` + +1. Copy the Grafana configuration tar file into the new Grafana Pod within the `/home/grafana` directory: + + + ```bash + kubectl cp ./.tar.gz \ + :/home/grafana + ``` + +1. Confirm that the checksums match: + + ```bash + kubectl exec -it -- /bin/bash + + :~$ cd /home/grafana + :~$ sha256sum -c backup.sums + ./.tar.gz: OK + ./.tar.gz: OK + + ``` + + + +1. Untar the data file: + + ```bash + tar -xvf .tar.gz --directory / + ``` + +1. Untar the configuration file: + + ```bash + tar -xvf .tar.gz --directory /home/grafana/ + ``` + + + +1. Move over the top of current configuration. + + {{< callout type="info" >}} +Typically some files are configured as a Kubernetes [`ConfigMap`](https://kubernetes.io/docs/concepts/configuration/configmap/) and may need to be configured as part of installation. 
The following command prompts when it is going to overwrite a file, and if it has the permissions to do so. + {{< /callout >}} + + + ```bash + mv /home/grafana/usr/share/grafana/conf/* /usr/share/grafana/conf/ + ``` + +1. Remove restore files and directory + + ```bash + rm /home/grafana/.tar.gz + rm /home/grafana/.tar.gz + rm /home/grafana/backup.sums + rm -r /home/grafana/usr + ``` + +1. Restart the Grafana Deployment. + + ```bash + kubectl rollout restart deployment grafana -n libre + ``` diff --git a/content/versions/v3.2.1/deploy/restore/graphdb.md b/content/versions/v3.2.1/deploy/restore/graphdb.md new file mode 100644 index 000000000..1a41a8284 --- /dev/null +++ b/content/versions/v3.2.1/deploy/restore/graphdb.md @@ -0,0 +1,137 @@ +--- +title: 'Restore the GraphDB' +date: '2023-10-19T13:52:23-03:00' +ategories: ["how-to"] +description: How to restore a backup of the Rhize Graph DB. +weight: 200 +--- + +This guide shows you how to restore the Graph database in your Rhize environment. + +## Prerequisites + +Before you start, ensure you have the following: + +- The GraphDB Helm chart +- [`kubectl`](https://kubernetes.io/docs/tasks/tools/) +- A [Database backup]({{< relref "../backup/graphdb" >}}) + +## Steps + + + +1. Confirm the cluster and namespace are correct. + + {{% param k8s_cluster_ns %}} + +1. Change to the {{< param application_name >}}-baas helm chart overrides, `baas.yaml`. + Set `alpha.initContainers.init.enable` to `true`. + +1. Upgrade or install the Helm chart. + + ```bash + helm upgrade --install -f baas.yaml {{< param application_name >}}-baas {{< param application_name >}}/baas -n {{< param application_name >}} + ``` + +1. In the Alpha 0 initialization container, create the backup directory. + + + + ```bash + kubectl exec -t {{< param application_name >}}-baas-alpha-0 -c {{< param application_name >}}-baas-alpha-init -- \ + mkdir -p /dgraph/backups + ``` + + +1. If the backup directory does not have a checksums file, create one. + + ```bash + sha256sum .//*.gz > .//backup.sums + ``` + +1. Copy the backup into the initialization container. + + ```bash + kubectl cp --retries=10 ./ \ + {{< param application_name >}}-baas-alpha-0:/dgraph/backups/ \ + -c {{< param application_name >}}-baas-alpha-init + ``` + + After the process finishes, confirm that the checksums match: + + ```bash + kubectl exec -it {{< param application_name >}}-baas-alpha-0 -c {{< param application_name >}}-baas-alpha-init -- \ + 'sha256sum -c /dgraph/backups//backup.sums /dgraph/backups//*.gz' + ``` + +1. Restore the backup to the restore directory. + Replace the `` and `` in the arguments for the following command: + + + ```bash + kubectl exec -t {{< param application_name >}}-baas-alpha-0 -c {{< param application_name >}}-baas-alpha-init -- \ + dgraph bulk -f /dgraph/backups//g01.json.gz \ + -g /dgraph/backups//g01.gql_schema.gz \ + -s /dgraph/backups//g01.schema.gz \ + --zero={{< param application_name >}}-baas-zero-0.{{< param application_name >}}-baas-zero-headless..svc.cluster.local:5080 \ + --out /dgraph/restore --replace_out + ``` +1. Copy the backup to the correct directory: + + + + ```bash + kubectl exec -t {{< param application_name >}}-baas-alpha-0 -c {{< param application_name >}}-baas-alpha-init -- \ + mv /dgraph/restore/0/p /dgraph/p + ``` + + +1. Complete the initialization container for alpha 0. + + ```bash + kubectl exec -t {{< param application_name >}}-baas-alpha-0 -c {{< param application_name >}}-baas-alpha-init -- touch /dgraph/doneinit + ``` + +1. 
Wait for `{{< param application_name >}}-baas-alpha-0` to start serving the GraphQL API. + +1. Make a database mutation to force a snapshot to be taken. +For example, create a `UnitOfMeasure` then delete it: + + ```bash + kubectl exec -t {{< param application_name >}}-baas-alpha-0 -c {{< param application_name >}}-baas-alpha -- \ + curl --location --request POST 'http://localhost:8080/graphql' \ + --header 'Content-Type: application/json' \ + --data-raw '{"query":"mutation RestoringDatabase($input:[AddUnitOfMeasureInput!]!){\r\n addUnitOfMeasure(input:$input){\r\n unitOfMeasure{\r\n id\r\n dataType\r\n code\r\n }\r\n}\r\n}","variables":{"input":[{"code":"Restoring","isActive":true,"dataType":"BOOL"}]}}' + ``` + Wait until you see {{< param application_name >}}-baas creating a snapshot in the logs. For example: + + ```bash + $ kubectl logs {{< param application_name >}}-baas-alpha-0 + ++ hostname -f + ++ awk '{gsub(/\.$/,""); print $0}' + ... + I0314 20:32:21.282271 19 draft.go:805] Creating snapshot at Index: 16, ReadTs: 9 + ``` + + Revert any database mutations: + + ```bash + kubectl exec -t {{< param application_name >}}-baas-alpha-0 -c {{< param application_name >}}-baas-alpha -- \ + curl --location --request POST 'http://localhost:8080/graphql' \ + --header 'Content-Type: application/json' \ + --data-raw '{"query":"mutation {\r\n deleteUnitOfMeasure(filter:{code:{eq:\"Restoring\"}}){\r\n unitOfMeasure{\r\n id\r\n }\r\n }\r\n}","variables":{"input":[{"code":"Restoring","isActive":true,"dataType":"BOOL"}]}}' + ``` + +1. Complete the initialization container for alpha 1: + + ```bash + kubectl exec -t {{< param application_name >}}-baas-alpha-1 -c {{< param application_name >}}-baas-alpha-init -- \ + touch /dgraph/doneinit + ``` + + And alpha 2: + + ```bash + kubectl exec -t {{< param application_name >}}-baas-alpha-2 -c {{< param application_name >}}-baas-alpha-init -- \ + touch /dgraph/doneinit + ``` diff --git a/content/versions/v3.2.1/deploy/restore/influxdb.md b/content/versions/v3.2.1/deploy/restore/influxdb.md new file mode 100644 index 000000000..37917c082 --- /dev/null +++ b/content/versions/v3.2.1/deploy/restore/influxdb.md @@ -0,0 +1,113 @@ +--- +title: 'Restore InfluxDB' +date: '2023-10-19T13:52:23-03:00' +categories: ["how-to"] +description: How to restore an InfluxDB backup on Rhize +draft: true +weight: 400 +--- + + +This guide shows you how to restore InfluxDB in your Rhize environment. + +## Prerequisites + +Before you start, ensure you have the following: + +- [`kubectl`](https://kubernetes.io/docs/tasks/tools/) +- An [InfluxDB backup]({{< relref "../backup/" >}}) + +## Steps + +1. Confirm the cluster and namespace are correct. + + {{% param k8s_cluster_ns %}} + +1. Create a `PersistentVolumeClaim` for the InfluxDB backup file +(adjust size as needed): + + ```yaml + kind: PersistentVolumeClaim + apiVersion: v1 + metadata: + name: influxdb-backup + spec: + accessModes: + - ReadWriteOnce + resources + requests: + storage: 1Gi + ``` + +1. Modify the Influx deployment: + + ```yaml + apiVersion: extensions/v1beta1 + kind: Deployment + metadata: + name: influxdb + labels: + name: influxdb + ... + volumes: + - name: influx + persistentVolumeClaim: + claimName: influxdb + - name: influx-backup + persistentVolumeClaim: + claimName: influxdb-backup + containers: + - name: influxdb + image: "influxdb:alpine" + volumeMounts: + - mountPath: /var/lib/influxdb + name: influx + - mountPath: /tmp/backup + name: influx-backup + ``` + + +1. 
Copy the backup file in the Kubernetes backup destination created in the preceding step: + + ```bash + kubectl cp /:/tmp/backup/ + ``` + +1. Delete the InfluxDB deployment, as it needs to be stopped for the backup import. +1. Create a job that uses the same container image and volume. Modify the command: + + ```yaml + kind: Job + metadata: + name: influx-restore + spec: + template: + metadata: + name: influx-restore + labels: + task: influx-restore + spec: + volumes: + - name: influx + persistentVolumeClaim: + claimName: influxdb + - name: backup + persistentVolumeClaim: + claimName: influx-backup + containers: + - name: influx + image: "influxdb:alpine" + command: ["/bin/sh"] + args: ["-c", "influxd restore - metadir /var/lib/influxdb/meta -database -datadir /var/lib/influxdb/data /tmp/backup/"] + volumeMounts: + - mountPath: /var/lib/influxdb + name: influx + - mountPath: /tmp/backup + name: backup + restartPolicy: Never + ``` + +1. Apply the job config. Check that it ran successfully + +1. Re-create your InfluxDB deployment. Use the CLI or HTTP to test that it's available. +1. Remove the backup persistent claim and remove its use from the deployment config. diff --git a/content/versions/v3.2.1/deploy/restore/keycloak.md b/content/versions/v3.2.1/deploy/restore/keycloak.md new file mode 100644 index 000000000..c61eecc86 --- /dev/null +++ b/content/versions/v3.2.1/deploy/restore/keycloak.md @@ -0,0 +1,108 @@ +--- +title: 'Restore Keycloak' +date: '2024-01-08T13:26:23-05:00' +categories: ["how-to"] +description: How to restore a Keycloak backup on Rhize +icon: key +weight: 300 +--- + +This guide shows you how to restore Keycloak in your Rhize environment. + +{{% callout type="caution" %}} + +Restoring Keycloak to a running instance involves downtime. + +Typically, this downtime lasts less than a minute. The exact duration needed depends on network constraints, backup size, and the performance of the Kubernetes cluster. + +{{% /callout %}} + +## Prerequisites + +Before you start, ensure you have the following: + +- [`kubectl`](https://kubernetes.io/docs/tasks/tools/) +- A [Keycloak backup]({{< relref "../backup/keycloak" >}}) + +## Steps + +1. Confirm the cluster and namespace are correct: + + {{% param "k8s_cluster_ns" %}} + +1. Retrieve the Keycloak user password using the following command, replacing `` with your namespace: + + ```bash + kubectl get secret keycloak--postgresql -o jsonpath="{.data.postgres-password}" | base64 --decode + ``` + +1. Extract your backup file: + + ```bash + gzip -d keycloak-postgres-backup-YYYYMMDDTHHMMAA.sql + ``` + +1. To prevent new records from being created while the backup is restored, scale down the Keycloak replicas to `0`. Keycloak will be unavailable after this command. + + ```bash + kubectl scale statefulsets keycloak --replicas=0 + ``` + +1. Scale down the replicas of PostgreSQL to 0, so that existing persistent volume claims and persistent volumes can be removed: + + ```bash + kubectl scale statefulsets keycloak-postgresql --replicas=0 + ``` + +1. Remove the Postgres persistent volume claim: + + ```bash + kubectl delete pvc data-keycloak-postgresql-0 + ``` + +1. Identify the Keycloak Postgres volumes: + + ```bash + kubectl get pv | grep keycloak + ``` + + This displays a list of persistent volume claims related to Keycloak. For example: + + ``` + pvc-95176bc4-88f4-4178-83ab-ee7b256991bc 10Gi RWO Delete Terminating libre/data-keycloak-postgresql-0 hostpath 48d + ``` + + Note the names of the ´pvc-*` items. You'll need them for the next step. + +1. 
Remove the persistent volumes with this command, replacing `` with the `pvc-*` name from the previous step: + + ``` + $ kubectl delete pv + ``` + +1. Scale up the replicas of PostgreSQL to 1: + + ```bash + kubectl scale statefulsets keycloak-postgresql --replicas=1 + ``` + +1. Restore the backup: + + ```bash + cat keycloak-postgres-backup-YYYYMMDDTHHMMAA.sql | kubectl exec -i keycloak-postgresql-0 -- psql postgresql://postgres:@localhost:5432 -U postgres + ``` + +1. Scale up the replicas of Keycloak to `1`: + + ```bash + kubectl scale statefulsets keycloak --replicas=1 + ``` + +1. Proxy the web portal of Keycloak: + + ```bash + kubectl port-forward svc/keycloak 5101:80 + ``` + + +Confirm access by checking `http://localhost:80`. diff --git a/content/versions/v3.2.1/deploy/upgrade.md b/content/versions/v3.2.1/deploy/upgrade.md new file mode 100644 index 000000000..8ef8008af --- /dev/null +++ b/content/versions/v3.2.1/deploy/upgrade.md @@ -0,0 +1,88 @@ +--- +title: 'Upgrade' +date: '2023-10-18T15:02:24-03:00' +categories: ["how-to"] +description: How to upgrade Rhize +weight: 500 +--- + +This guide shows you how to upgrade Rhize. + +{{< reusable/backup >}} + +## Prerequisites + +Before you start, ensure you have the following: + +- Access to the [Rhize Kubernetes Environment]({{< relref ".." >}}) +- [helm](https://helm.sh/docs/helm/helm_install/) +{{% param pre_reqs %}} + +Be sure that you notify relevant parties of the coming upgrade. + +## Procedure + +First, record the old and new versions, their context, and namespaces. + +1. Check the logs for the {{< param application_name >}} pods, either in Lens or with [`kubectl logs`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs). + Ensure there are no errors. + +1. Use Git to pull your Rhize customer build directory. +1. Change to the `kubernetes/charts/{{< param application_name >}}` directory. +1. Check your Kubernetes context and namespace. + + {{% param k8s_cluster_ns %}} + +1. Update Helm repositories with the following command: + + ```bash + helm repo update + ``` + +1. Use the `helm list` command to check for {{< param application_name >}} services. +1. Upgrade with the following command: + + + ```bash + helm upgrade {{< param application_name >}} -f .yaml -n namespace + ``` + +1. Get a token using your credentials. + With `curl`, it looks like this: + + ```bash + curl --location --request POST 'https://- + auth.{{< param application_name >}}/realms/{{< param application_name >}}/protocol/openid-connect/token' \ + --header 'Content-Type: application/x-www-form-urlencoded' \ + --data-urlencode 'grant_type=password' \ + --data-urlencode 'username=system@{{< param application_name >}}.com' \ + --data-urlencode 'password=' \ + --data-urlencode 'client_id={{< param application_name >}}Baas' \ + --data-urlencode 'client_secret=' + ``` + +1. Redeploy the schema. To do so, you need to interact with the `alpha` service on port `8080`. You can do this in multiple ways. Either enter the alpha shell with a command such as `kubectl exec --stdin baas-alpha-0 -- sh`, or forward the port to your local instance using a command such as `kubectl port-forward baas-alpha-0 8080:8080`. + + For example, using port forwarding, a `curl` command to deploy the schema looks like this: + + ```bash + curl --location -X POST 'http://localhost:/admin/schema' \ + -H "Authorization: Bearer $" \ + -H "content-Type: application/octet-stream" \ + --data-binary .sdl + ``` + + The schema file is likely called something like `schema.sdl`. + + +1. 
Restart the Apollo Router Statefulset so that the Supergraph is composed with all the latest changes. For example: + +```bash +kubectl rollout restart statefulset router +``` + +## Verify success + +Verify success in Kubernetes by checking that the version upgraded properly and that the logs are correct. + +Inform your team that the upgrade was successful. diff --git a/content/versions/v3.2.1/how-to/_index.md b/content/versions/v3.2.1/how-to/_index.md new file mode 100644 index 000000000..660f57a5b --- /dev/null +++ b/content/versions/v3.2.1/how-to/_index.md @@ -0,0 +1,29 @@ +--- +title: User guides +description: Topics about how to use Rhize to query data, build and run workflows, and build frontends. +weight: 200 +identifier: how-to +icon: clipboard-list +cascade: + domain_name: libremfg.ai + brand_name: Libre + application_name: libre + pre_reqs: |- + - Permissions to access the [Rhize Kubernetes Environment](/how-to/install/configure-kubernetes") + - [kubectl](https://kubernetes.io/docs/tasks/tools/) + - Optional: [kubectx](https://github.com/ahmetb/kubectx) utilities + - `kubectx` to manage multiple clusters + - `kubens` to switch between and configure namespaces easily + - Optional: the [k8 Lens IDE](https://k8lens.dev), if you prefer to manage Kubernetes graphically + k8s_cluster_ns: |- + ```bash + ## context + kubectl config current-context + ## namespace + kubectl get namespace + ``` +--- + +Topics about how to use Rhize to query data, build and run workflows, and build frontends. + +{{< card-list >}} diff --git a/content/versions/v3.2.1/how-to/audit.md b/content/versions/v3.2.1/how-to/audit.md new file mode 100644 index 000000000..083caabd7 --- /dev/null +++ b/content/versions/v3.2.1/how-to/audit.md @@ -0,0 +1,75 @@ +--- +title: 'Audit' +date: '2023-12-20T12:47:09-03:00' +categories: ["how-to"] +description: How to use the Audit log to inspect all events in the Rhize system +weight: 600 +icon: search +--- + +The _Audit Log_ provides a tamper-proof and immutable audit trail of all events that occur in the Rhize system. +Users with appropriate permissions can access the audit log either through the UI menu or the GraphQL API. + +## Prerequisites + +To use the audit log, ensure you have the following: + +- If accessing to your Rhize UI environment, a user account with appropriate permissions +- If accessing through GraphQL, you also need: + - The ability to [Use the Rhize GraphQL API]({{< relref "gql" >}}) + - A token configured so that `audience` includes `audit`, and the scopes contain `audit:query`. + + This scope should be created by BaaS, not manually. For details, refer to [Set up Keycloak]({{< relref "../deploy/install/keycloak/" >}}). + + +## Audit through the UI + +To inspect the audit log through the Rhize UI, follow these steps: + +1. From the UI menu, select **Audit**. +1. Select the users that you want to include in the audit. +1. Use the time filters to select the pre-defined or custom range that you want to return. + +On success, a log of events appears for the users and time ranges specified. +For a description of the fields returned, refer to the [Audit fields](#audit-fields) section. 
+ + +### Audit fields + +In the audit UI, each record in the audit has the following fields: + +| Field | Description | +|--------------------|-------------------------------------------------------------------------------------------------------------------------| +| Timestamp | The time when the event occurred | +| User | The user who performed the operation | +| Operation | The [GraphQL operation]({{< relref "gql/call-the-graphql-api#operations" >}}) involved | +| Entity Internal ID | The ID of the resource that was changed | +| Attribute | What changed in the resource. This corresponds to the object properties as defined by the API and its underlying schema | +| Value | The new value of the updated attribute | + + +## Audit through GraphQL + +The audit log is also exposed through the GraphQL API. +To access it, use the `queryAuditLog` operation, and add [filters]({{< relref "gql/call-the-graphql-api#filters" >}}) for the time range and users. + + +Here's an example query: + +```gql +query { + queryAuditLog(filter: {start: "2023-01-01T00:00:00Z", end:"2023-12-31T00:00:00Z", tagFilter:[{id:"user", in: ["admin@libremfg.com"]},{id:"operation", in: ["set"]}]}, order: {}) { + operationType + meta { + time + user + } + event { + operation + uid + attribute + value + } + } +} +``` diff --git a/content/versions/v3.2.1/how-to/bpmn/_index.md b/content/versions/v3.2.1/how-to/bpmn/_index.md new file mode 100644 index 000000000..65323fbf9 --- /dev/null +++ b/content/versions/v3.2.1/how-to/bpmn/_index.md @@ -0,0 +1,16 @@ +--- +title: 'Write BPMN workflows' +date: '2023-09-22T14:50:39-03:00' +draft: false +categories: "how-to" +cascade: + icon: decision-node +description: Create BPMN workflows to handle inputs, listen for events, and throw triggers. +weight: 200 +--- + +In the following topics, learn how to use Rhize's BPMN engine to orchestrate processes. +Coordinate tasks between different systems, transform and calculate data, and set triggers to run workflows automatically. + + +{{< card-list >}} diff --git a/content/versions/v3.2.1/how-to/bpmn/bpmn-elements.md b/content/versions/v3.2.1/how-to/bpmn/bpmn-elements.md new file mode 100644 index 000000000..9af3c3e0e --- /dev/null +++ b/content/versions/v3.2.1/how-to/bpmn/bpmn-elements.md @@ -0,0 +1,404 @@ +--- +title: 'BPMN elements' +date: '2023-09-26T11:10:37-03:00' +draft: false +categories: [reference] +description: >- + A reference of all BPMN elements used in the Rhize BPMN engine. +weight: 1000 +boilerplate: + jsonata_response: >- + Optional [JSONata](https://docs.jsonata.org/1.7.0/overview) expression to map to the [process variable context](#process-variable-context) + max_payload: >- + Number. If the response length exceeds this number of characters, Rhize throws an error. + connection_timeout: >- + Number. Time in milliseconds to establish a connection. + graph_vars: >- + JSON. Variables for the GraphQL query. + data_id: >- + The ID of the datasource + data_expression: >- + JSON or JSONata expression. Topics and values to write to + headers: >- + Additional headers to send in the request +--- + +This document describes the parameters available to each BPMN element in the Rhize UI. +These parameters control how users set conditions, transform data, access variables, call services, and so on. 
+ + + + +## Common parameters + + +Every BPMN workflow, and every element it contains, has the following parameters: + +| Parameter | Description | +|----------------------|-----------------------------------------------------------------------------------------------------------------| +| ID | Mandatory unique ID. For guidance, follow the [BPMN naming conventions]({{< relref "./naming-conventions" >}}). | +| Name | Optional human-readable name. If empty, the ID value is used. | +| Documentation | Optional freeform text for additional information. | +| Extension properties | Optional metadata to add to the workflow or node. | + + +## Events + +An _event_ is something that happens in the course of a process. +In BPMN, events are drawn with circles. +Events have a _type_ and a _dimension_. + +{{< tabs items="Events,Message type,Timer type" >}} +{{% tab "Events" %}} +![A simplified model of events with no activities](/images/bpmn/rhize-bpmn-events.png) +{{% /tab %}} +{{% tab "Message type" %}} +Message events subscribe or publish to the Rhize broker.
+![A message event](/images/bpmn/bpmn-message-event.svg) +{{% /tab %}} + +{{% tab "Timer type" %}} +Timer events start according to some interval or date, or wait for some duration.
+![Timer event](/images/bpmn/bpmn-timer-event.svg ) +{{% /tab %}} +{{< /tabs >}} + + +In event-driven models, events can happen in one of three _dimensions_: + +Start +: All processes begin with some trigger that starts an event. Start events are drawn with a single thin circle. + +Intermediate. +: Possible events between the start and end. Intermediate events might start from some trigger, or create some result. They are drawn with a double thin line. + +End +: All processes end with some result. End events are drawn with a single thick line. + +Besides these dimensions, BPMN also classifies events by whether they _catch_ a trigger or _throw_ a result. +All start events are catch events; that is, they react to some trigger. +All end events are throw events; that is, they terminate with some output—even an error. +Intermediate events may throw or catch. + +Rhize supports various event types to categorize an event, as described in the following sections. +As with [Gateways](#gateways) and [Activities](#activities), event types are marked by their icons. +Throwing events are represented with icons that are filled in. + + + +### Start events + + +Start events are triggered by the `CreateAndRunBPMN` and `CreateAndRunBPMNSync` {{< abbr "mutation" >}} operations. +The parameters for a start event are as follows: + +| Parameter | Description | +|-----------|------------------------------------------------------------------------------------------------------------------------| +| Outputs | Optional variables to add to the {{< abbr "process variable context" >}}. JSON or JSONata. | + + +### Message start events + +Message events are triggered from a message published to the Rhize broker. +The parameters for a message event are as follows: + +| Parameter | Description | +|-----------|---------------------------------------------------------------------------------------------------| +| Message | The topic the message subscribes to on the Rhize Broker. The topic structure follows MQTT syntax. | +| Outputs | Optional variables to add to the {{< abbr "process variable context" >}}. JSON or JSONata. | + +### Timer start events + +Timer start events are triggered either at a specific date or recurring intervals. +The parameters for a timer start event are as follows: + +| Parameter | Description | +|-----------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| Timer | One of
  • `Cycle`, to begin at recurring intervals. For example, `R5/2024-05-09T08:12:55/PT10S` starts on `2024-05-09` and executes every 10 seconds for 5 repetitions. If `` is not set, Rhize uses `2023-01-01T00:00:00Z`.
  • `Date`, to happen at a certain time, for example, `2024-05-09T08:12:55`
Enter values in [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) format. | +| Outputs | Optional variables to add to the {{< abbr "process variable context" >}}. JSON or JSONata. | + +### Intermediate message events + +Intermediate message events throw a message to the Rhize NATS broker. +This may provide info for a subscribing third-party client, or initiate another BPMN workflow. + +The parameters for an intermediate message event are as follows: + +| Parameter | Description | +|-----------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| Message | The topic the message publishes to on the Rhize Broker. The topic structure follows MQTT syntax | +| Inputs | Variables to send in the body. For messages to the Rhize broker, use the [special variable]({{< relref "./variables">}}) `BODY`. Value can be JSON or JSONata. | +| Headers | {{< param boilerplate.headers >}} | +| Outputs | JSON or JSONata. Optional variables to add to the {{< abbr "process variable context" >}}. | + +### Intermediate timer events + +An intermediate message pauses for some duration. +The parameters for an intermediate timer event are as follows: + +| Parameter | Description | +|-----------|------------------------------------------------------------------------------------------------------------------------| +| Timer| A duration to pause. Enter values in [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) format.| +| Outputs | Optional variables to add to the {{< abbr "process variable context" >}}. The assignment value can be JSON or JSONata. | + + +## Service tasks + +In BPMN, an _activity_ is work performed within a business process. + +On the Rhize platform, most activities are _tasks_, work that cannot be broken down into smaller levels of detail. +Tasks are drawn with rectangles with rounded corners. + +{{< callout type="info" >}} +Besides tasks, you can also use [_call activities_](#call-activities), processes which call and invoke other processes. +{{< /callout >}} + +A service task uses some service. +In Rhize workflows, service tasks include [Calls to the GraphQL API]({{< relref "../gql/call-the-graphql-api" >}}) (and REST APIs), data source reads and writes, and JSON manipulation. +These service tasks come with templates. + +As with [Gateways](#gateways) and [events](#events), service task are marked by their icons. + + +{{< figure +caption="Service tasks have a gear icon marker" +alt="An empty service task" +src="/images/bpmn/bpmn-service-task.svg" +width="100" +>}} + + +To add a service task, select the change icon ("wrench"), then select **Service Task**. +Use **Templates** to structure the service task call and response. + +The service task templates are as follows + +### JSONata transform + + +Transform JSON data with a JSONata expression. +For detailed examples, read [The Rhize Guide to JSONata]({{< relref "use-jsonata" >}}). 
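For example, a minimal transform expression might reshape an incoming payload before later nodes store it. This is only a sketch: the `order` input and its field names are hypothetical and not part of any Rhize schema, and the leading `=` follows the expression convention shown elsewhere in these docs:

```jsonata
=
{
  "id": $.order.number,
  "material": $.order.materialId,
  "quantity": $number($.order.qty)
}
```

The value of the final expression becomes the task output, which you can store in the {{< abbr "process variable context" >}} through the **Input response** field.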
+ + +| Call parameters | Description | +|------------------|-------------------------------------------------------------------------------| +| Input | Input data for the transform | +| Transform | The transform expression | +| Max Payload size | {{< param boilerplate.max_payload >}} | + + +Besides the call parameters, the JSONata task has following additional fields: + +| Parameter | Description | +|----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| Input response | The name of the variable to add to the {{< abbr "process variable context" >}}| + + +### GraphQL Query + +Run a [GraphQL query]({{< relref "../gql/query" >}}) + +| Call parameters | Description | +|--------------------|----------------------------------------------| +| Query body | GraphQL query expression | +| Variables | {{< param boilerplate.graph_vars >}} | +| Connection Timeout | {{< param boilerplate.connection_timeout >}} | +| Max Payload size | {{< param boilerplate.max_payload >}} | + +Besides the call parameters, the Query task has following additional fields: + +| Parameter | Description | +|----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| Input response | {{% param boilerplate.jsonata_response %}}. For GraphQL operations, use this only to map values. Rely on [GQL filters]({{< relref "../gql/filter" >}}) to limit the payload. | +| Headers | {{< param boilerplate.headers >}} | + +### GraphQL Mutation + +Run a [GraphQL mutation]({{< relref "../gql/mutate" >}}) + +| Call parameters | description | +|--------------------|----------------------------------------------| +| Mutation body | GraphQL Mutation expression | +| Variables | {{< param boilerplate.graph_vars >}} | +| Connection Timeout | {{< param boilerplate.connection_timeout >}} | +| Max Payload size | {{< param boilerplate.max_payload >}} | + +Besides the call parameters, the mutation task has following additional fields: + +| Parameter | Description | +|----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| Input response | {{% param boilerplate.jsonata_response %}}. For mutations, use this only to map values. Use the mutation call to limit the payload. | +| Headers | {{% param boilerplate.headers %}} | + +### Call REST API + +HTTP call to a REST API service. + +| Call parameters | Description | +|--------------------|------------------------------------------------------------------------------------| +| Method Type | One of `GET`, `POST`, `PATCH`, `PUT`, `DELETE` | +| Verification | Boolean. Whether verify the Certificate Authority provided in the TLS certificate. | +| URL | The target URL | +| URL Parameters | JSON. The key-value pairs to be used as query parameters in the URL | +| HTTP Headers | JSON. 
The key-value pairs to be used as request headers | +| Connection Timeout | {{% param boilerplate.connection_timeout %}} | +| Max Payload size | {{< param boilerplate.max_payload >}} | + +Besides the call parameters, the REST task has following additional fields: + +| Parameter | Description | +|----------------|--------------------------------------------| +| Input response | {{% param boilerplate.jsonata_response %}} | +| Headers | {{% param boilerplate.headers %}} | + +### JSON schema + +Validate that a payload conforms to a configured [JSON schema](https://json-schema.org/). +For example, you can validate that `data.arr` contains an array of numbers +and that `userID` contains a string of certain length. + +| Call Parameters | Description | +|------------------|------------------------------------------------------------------------------------------------------------------------------------------------| +| Schema | A JSON schema. You can also create one from a JSON file with a tool such as [JSON to JSON schema](https://transform.tools/json-to-json-schema) | +| Variable | Optional. Key of specific variable to validate (default checks all variables in {{< abbr "process-variable-context" >}} | + +The Schema task has the following output that you can define as a variable: + +| Response mapping | Description | +|------------------|-------------------------------------------------------------------------| +| Valid | The boolean output of the schema evaluation. `True` if schema is valid. | +| Validation error | A string that reports the validation errors if the schema is invalid. | + +### Read Datasource + +Read values from topics of a datasource (for example, an OPC-UA server) + +| Call parameters | Description | +|------------------|-------------------------------------------| +| Data source | {{< param boilerplate.data_id >}} | +| Data | {{< param boilerplate.data_expression >}} | +| Max Payload size | {{< param boilerplate.max_payload >}} | + +Besides the call parameters, the data source task has following additional fields: + +| Parameter | Description | +|----------------|--------------------------------------------------------------------------------| +| Input response | The variable name to store the response in {{< abbr "process variable context" >}} | +| Headers | {{< param boilerplate.headers >}} | + +### Write Datasource + +Write values to topics of a datasource. + +| Call parameters | Description | +|------------------|-------------------------------------------| +| Data source | {{< param boilerplate.data_id >}} | +| Data | {{< param boilerplate.data_expression >}} | +| Max Payload size | {{< param boilerplate.max_payload >}} | + + +Besides the call parameters, the data source task has following additional fields: + +| Parameter | Description | +|----------------|--------------------------------------------------------------------------------| +| Input response | The variable name to store the response in {{< abbr "process variable context" >}} | +| Headers | {{< param boilerplate.headers >}} | + + +## Call activities + +![Call activities have a task with an icon to expand](/images/bpmn/bpmn-call-activity.svg) + +A _call activity_ invokes another workflow. +In this flow, the process that contains the call is the _parent_, and the process that is called is the _child_. 
+ +Call activities have the following parameters: + +| Parameters | Description | +|--------------------|-----------------------------------------------------------------------| +| Called element | The ID of the called process | + +The inputs have the following parameters: + +| Parameters | Description | +|--------------------|-----------------------------------------------------------------------| +|Local variable name | The name of the variable as it will be accessed in the child process (that is, the key name) | +|assignment value | The value to pass from the parent variable context| + +The outputs have the following parameters: + +| Parameters | Description | +|---------------------|-------------------------------------------------------------------------------------------------------| +|Local variable name | What to name the incoming data, as it will be accessed in the parent process (that is, the key name) | +| assignment value | The value to pass from the child variable context | + +For a guide to reusing functions, read the [Reuse workflows section]({{< relref "./create-workflow/#reuse-workflows" >}}) in the "Create workflow" guide. + +## Gateways + +_Gateways_ control how sequence flows interact as they converge and diverge within a process. +They represent mechanisms that either allow or disallow a passage. + +BPMN notation represents gateways as diamonds with single thin lines, as is common in many diagrams with decision flows. +Besides decisions, however, Rhize's BPMN notation also includes parallel gateways. + +As with [Events](#events) and [Activities](#activities), gateway types are marked by their icons. + +{{< figure +alt="Gateway with two branches" +caption="Drawn as diamonds, gateways represent branches in a sequence flow." +src="/images/bpmn/bpmn-gateway-overview.svg" +>}} + +### Exclusive gateway + +![exclusive gateways are marked by an "x" icon](/images/bpmn/bpmn-gateway-exclusive.svg) + +Marked by an "X" icon, an _exclusive gateway_ represents a point in a process where only one path is followed. +In some conversations, an exclusive gateway is also called an _XOR_. + +If a gateway has multiple sequence flows, all flows except one must have a conditional [JSONata expression](https://docs.jsonata.org/1.7.0/overview) that the engine can evaluate. +To designate a default, leave one flow without an expression. + +{{< figure +alt="An exclusive gateway that has a condition and a default" +src="/images/bpmn/screenshot-rhize-bpmn-exclusive-gateway.png" +width="50%" +caption="An exclusive gateway with a condition and default. Configure conditions as JSONata expressions" +>}} + +Exclusive gateways can only branch. That is, they cannot join multiple flows. + +### Parallel gateway + +![Parallel gateways are marked by a "+" icon](/images/bpmn/bpmn-gateway-parallel.svg) + +Marked by a "+" icon, _parallel gateways_ indicate a point where parallel tasks are run. + +{{< figure +alt="A parallel gateway that branches and rejoins" +src="/images/bpmn/screenshot-rhize-bpmn-parallel-gateway.png" +width="50%" +caption="Parallel gateways run jobs in parallel." +>}} + +{{% details title="Parallel joins" %}} + +You can join parallel tasks with another parallel gateway. +This joins the variables from both branches to [process variable context](#process-variable-context). +Note that parallel joins have performance costs, so be mindful of using them, especially in large tasks. +To learn more, read [Tune BPMN performance]({{< relref "tune-performance" >}}). 
+ +{{< figure +alt="A parallel gateway that branches and rejoins" +src="/images/bpmn/screenshot-rhize-bpmn-parallel-join.png" +width="50%" +caption="Parallel joins join variable context, but have performance costs." +>}} + +{{% /details %}} + +## Variables and expressions + +As data passes and transforms from one element to another, variables remain in the _process variable context_. +You can access these variables through JSONata expressions. diff --git a/content/versions/v3.2.1/how-to/bpmn/create-workflow.md b/content/versions/v3.2.1/how-to/bpmn/create-workflow.md new file mode 100644 index 000000000..a0a50fc7a --- /dev/null +++ b/content/versions/v3.2.1/how-to/bpmn/create-workflow.md @@ -0,0 +1,240 @@ +--- +title: "Overview: orchestrate processes" +categories: "how-to" +description: > + An overview of how to use Rhize's custom BPMN engine and UI to orchestrate workflows. +weight: 10 +aliases: + - "/how-to/bpmn/orchestration-overview" +--- + +This guide provides a quick overview of the major features of the Rhize {{< abbr "BPMN">}} engine and interface, with links to detailed guides for specific topics. +For a reference of all BPMN elements and their parameters, refer to [BPMN elements]({{< relref "./bpmn-elements" >}}). + +The Rhize BPMN UI provides a graphical interface to transform and standardize data flows across systems. +Such _process orchestration_ has many uses for manufacturing. +For example, you can write a BPMN workflow to do any of the following: +- Automatically ingest data from ERP and SCADA systems, then transform and store the payloads in the standardized ISA-95 representation +- Coordinate tasks across various systems, creating a layer for different data and protocols to pass through +- Calculate derived values from the data that is exchanged to perform functions such as waste calculation and process control. + + +{{< bigFigure +src="/images/bpmn/rhize-bpmn-coordination-between-multiple-systems.png.webp" +alt="An example of a workflow that transforms, calculates, stores, and sends to external systems" +width="70%" +>}} + + +{{< callout type="info" >}} +Rhize BPMN workflows conform to the visual grammar described in the OMG standard for [Business Process Model and Notation](https://www.omg.org/spec/BPMN/2.0/). +Each process is made of _events_ (circles), _activities_ (rectangles), _gateways_ (diamonds), and _flows_ (arrows). +Some elements are extended for Rhize-specific features, such as service tasks that call the GraphQL API. +Some elements from the standard are unused and thus do not appear in the UI. +{{< /callout >}} + +## Request and send data + +Workflows often exchange data between Rhize and one or more external systems. +The BPMN activity _task templates_ provide multiple ways to communicate with internal and external systems, +and pass data over different protocols. +Additionally, _message_ events provide templates to publish and subscribe to the Rhize broker. + +Each template has a set of parameters to configure it. +**To use a template**: +1. Select the _activity_ (rectangle) element. +1. Select **Template** and then choose the template you want. +1. Configure the template according to its [Task parameters]({{< relref "./bpmn-elements#jsonata-transform" >}}). + +### Interact with the Rhize API + +Use GraphQL tasks to query and change data in your manufacturing knowledge graph. +For example: +- A scheduling workflow could use the [Query task]({{< relref "./bpmn-elements/#graphql-query" >}}) to find all `JobResponses` whose state is `COMPLETED`. 
+- An ingestion workflow might use a [Mutation task]({{< relref "./bpmn-elements/#graphql-mutation" >}}) to update new `jobResponse` data that was published from a SCADA system. + +You can also use [JSONata]({{< relref "./bpmn-elements#jsonata-transform" >}}) in your GraphQL payloads to dynamically add values at runtime. +For details about how to use the Rhize API, read the [Guide to GraphQL]({{< relref "../gql" >}}). + +### Interact with external systems + +To make HTTP requests to external systems, use the [REST task]({{< relref "./bpmn-elements#call-rest-api" >}}). +For example, you might send a `POST` with performance values to an ERP system, or use a `GET` operation to query test results. + +{{< callout type="info" >}} +Besides REST, you can use this template to interact with any HTTP API. +{{< /callout >}} + +### Publish and subscribe + +Besides HTTP, workflows can also publish and subscribe messages over MQTT, NATS, and OPC UA. + +{{< bigFigure +alt="A workflow that listens to a message and throws a message" +src="/images/bpmn/rhize-bpmn-message-start-throw-conditional.png" +caption="A workflow that evaluates a message and throws a if the payload meets a certain condition message" +width="60%" +>}} + +**To publish and subscribe to the Rhize broker:** +1. Select a start (thin circle) or intermediate (double-line) circle. +1. Select the wrench icon. +1. Select the message event (circle with an envelope). +1. Configure the message topic and body according to the [Event parameters]({{< relref "./bpmn-elements/#events" >}}). +1. If using an [Intermediate throw event]({{< relref "./bpmn-elements#service-tasks" >}}), name the variable `BODY`. + +**To listen and publish to an edge device:** +1. [Create a data source]({{< relref "../publish-subscribe/connect-datasource/" >}}). +1. In your workflow, select the task. And choose the **Data source** template. +1. Configure the [Data Source task parameters]({{< relref "./bpmn-elements#service-tasks" >}}). + +The strategy you choose to send and receive message data depends on your architectural setup. +Generally, data-source messages come from level-1 and level-2 devices on the edge, +and messages published to the Rhize broker come from any NATS, MQTT, or OPC UA client. +The following diagram shows some common ways to interact with messages through BPMN. + +{{< bigFigure +src="/images/bpmn/diagram-rhize-bpmn-control-message-flow.svg" +alt="Diagram providing decision of control flows" +width="50%" +>}} + +## Transform and calculate + +As the data in a workflow passes from start node to end node, it often undergoes some secondary processing. +Mew properties might be added, some data might be filtered, or a node might create a set of derived values from the original input. +For example, you might use calculate statistics, or transform a message payload into a format to be received by the Rhize API or an external system. + + +{{< bigFigure +src="/images/bpmn/screenshot-rhize-jsonata-map.png" +alt="Annotated and truncated version of an example transformation to the operationEvent definition" +caption="Annotated and truncated version of mapping an external event to the `operationEvent` definition" +width=" 70%" +>}} + +To calculate and transform data, BPMN nodes can interpret the JSONata expression language. +For details, read the complete [Rhize guide to JSONata](/how-to/bpmn/use-jsonata). + +## Control flows + +As data passes through a workflow, you might need to conditionally direct it to specific tasks, transformations, and events. 
+For this, Rhize has _gateways_, represented as diamonds. + +### Exclusive gateway. + +Represented by an `X` symbol, exclusive gateways create decision branches based on whether a condition is true. +While you can use JSONata conditionals to control flows within tasks, +exclusive gateways are the most common and visually understandable way to execute conditional steps. + +To use an exclusive gateway: +1. Create an arrow from the control start to an exclusive gateway (diamond with `X` symbol). +1. Use arrows to create outgoing conditions to new tasks or events. +1. Leave the default condition blank. +1. In all other arrows, use the **Condition** field to write a boolean expression. + +{{< bigFigure +src="/images/bpmn/screenshot-rhize-bpmn-exclusive-gateway.png" +alt="Screenshot showing how gateways create a Job order only if the material state is ready." +caption="This gateway creates a job order only if its material state is ready." +width="70%" +>}} + +### Parallel gateways + +Represented by a `+` (plus sign) symbol, _parallel gateways_ execute multiple tasks at the same time. +To use a parallel gateway: +1. Select a gateway. +1. Use the wrench sign to change its condition to parallel. +1. Use arrows to create parallel conditions. + +When the workflow runs, each parallel branch executes at the same time. + +{{< bigFigure +src="/images/bpmn/screenshot-rhize-bpmn-parallel-gateway.png" +alt="Simultaneously add a record to the database and send an alert" +caption="Simultaneously add a record to the database and send an alert" +width="50%" +>}} + + +## Trigger workflows + +You have multiple ways to trigger a start condition. + +{{% reusable/bpmn-triggers %}} + +To learn more, read [Trigger workflows]({{< relref "./trigger-workflows" >}}). + +## Reuse workflows + +BPMN workflows are _composable_, where each element can be reused by others. +For example, you might create a workflow that calculates common statistics, or one that makes a specified call to an external system. +Using _call activities_ other workflows can reuse the workflow. + + +{{< bigFigure +alt="A call activity" +src="/images/bpmn/diagram-rhize-bpmn-call-activity.png" +width="55%" +caption="An example of a main workflow calling a function. [Template](https://github.com/libremfg/rhize-templates/tree/main/bpmn/call-activity-calculate-stats)" +>}} + +To reuse a workflow: +1. Drag the task element (rectangle) into the workflow. +1. Select the wrench icon. +1. Select **Call Activity**. +1. Configure it according to the [call activity parameters]({{< relref "./bpmn-elements#call-activities" >}}). + +## Access process variable context + +As data passes through the nodes of a workflow, the nodes share access to a variable space. +Nodes can access these variables, create new variables, and mutate existing ones. +This overall variable object is called _process variable context_. + +When working with variables, keep the following in mind: +- **Access the root variable context through `$.`**. + + This follows the conventions of JSONata. + For details and examples, read [Use JSONata]({{< relref "./use-jsonata" >}}). + +- **Access nested properties with dot notation.** + + For example, the following is a reference to the first item in the `orders` object in the variable context: + ``` + $.orders[] + ``` + +- **You can store a node's output in a variable.** + + Many output fields offer a way to create a variable. 
+ For example, the JSON schema field has two variables that you can name, + one that outputs a boolean based on whether the input is valid, and another that outputs + the error string if the variable is invalid. + + You can access these variables in later nodes (unless you mutate them). + +- **Variables.** + + If you direct output to a variable that already exists, the new value overwrites the old one. + This behavior can be used to manage the overall memory footprint of a workflow. + +- **The maximum context size is configurable.** + + By default, the process variable context has a maximum size of 1MB. + When an activity outputs data, the output is added to the process variable context. + When variable size gets large, you have multiple strategies to reduce its size (besides mutating variables). + For ideas, refer to [Tune BPMN performance]({{< relref "./tune-performance" >}}). + +- **You can trace variable context.** + + For details, refer to the [Debug guide]({{< relref "./debug-workflows" >}}). + + +## Examples + +Rhize has a repository of templates that you can import and use in your system. +Use these to explore how the key functionality works. +[Rhize BPMN templates](https://github.com/libremfg/bpmn-templates) + + diff --git a/content/versions/v3.2.1/how-to/bpmn/debug-workflows.md b/content/versions/v3.2.1/how-to/bpmn/debug-workflows.md new file mode 100644 index 000000000..05b9ec486 --- /dev/null +++ b/content/versions/v3.2.1/how-to/bpmn/debug-workflows.md @@ -0,0 +1,248 @@ +--- +title: Handle errors and debug +date: '2024-04-24T19:35:09+03:00' +categories: ["how-to"] +description: Strategies to handle errors in your BPMN workflows, and ways to debug workflows when things don't work as expected. +weight: 250 +--- + +Errors come in two categories: expected and unexpected. +The Rhize BPMN engine has ways to handle both. + +A robust workflow should have built-in logic to anticipate errors. +For unexpected issues, Rhize also creates a _trace_ for each workflow, +which you can use to observe the behavior and performance of the workflow at each element as it executes sequentially. +You can also use debug flags and variables to trace variable context as it transforms across the workflow. + +## Strategies to handle errors + +All error handling likely uses some conditional logic. +The workflow author anticipates the error and then writes some logic to conditionally handle it. +However, you have many ways to handle conditions. When deciding how to direct flows, consider both the context of the error and overall readability of your diagram. +This section describes some key strategies. + +### Gateways + +Use [exclusive gateways]({{< relref "./bpmn-elements#gateways" >}}) for any type of error handling. +For example, you might define a normal range for a value, then send alerts for when the value falls outside of this range. +If it makes sense, these error branches also might flow into an early end event. + + +{{< bigFigure +src="/images/bpmn/screenshot-rhize-bpmn-error-handling-custom-response.png" +alt="A BPMN workflow with customResponse in the output of the end node" +caption="Download this workflow from [BPMN templates](https://github.com/libremfg/rhize-templates/tree/main/bpmn/custom-response-error-events)" +width="80%" +>}} + + +### JSON schema validation + +Validation can greatly limit the scope of possible errors. +To validate your JSON payloads, use the [JSON schema task]({{< relref "./bpmn-elements#json-schema" >}}). 
+ +The JSON schema task outputs a boolean value that indicates whether the input conforms to the schema that you set. +You can then set a condition based on whether this `valid` variable is true, and create logic to handle errors accordingly. +For example, this schema requires that the input variables include a property `arr` whose +value is an array of numbers. + + +```json +{ + "$schema": "http://json-schema.org/draft-07/schema#", + "title": "Generated schema for Root", + "type": "object", + "properties": { "arr": { "type": "array", "items": { "type": "number" } } }, + "required": ["arr"] +} +``` + +In a production workflow, you might use this exact schema to validate the input for a function that calculates statistics (perhaps choosing a different variable name). + + +{{< bigFigure +src="/images/bpmn/screenshot-rhize-bpmn-json-schema.png" +alt="Screenshot of a conditional that branches when the JSON schema task receives invalid input." +caption="A conditional that branches when the JSON schema task receives invalid input. [Download the template](https://github.com/libremfg/rhize-templates/tree/main/bpmn/call-activity-calculate-stats)" +width="35%" +>}} + + + + +### JSONata conditions + +Besides the logical gateway, it may make sense to use JSONata ternary expressions in one of the many parameters that accepts JSONata expressions. +For example, this expression creates one message body if `valid` is `true` and another if not: + +```jsonata += +{ + "message": $.valid ? "payload is valid" : "Invalid payload" +} +``` + +### Check JSONata output + +If a field has no value, JSONata outputs nothing. +For example, the following expression outputs only `{"name": "Rhize"}`, +because no `$err` field exists. + +{{< tabs items="Expression,Output" >}} +{{% tab "Expression" %}} +```js +=( + $name := "Rhize"; + + + { + "name": $name, + "error": $err + } +) +``` +{{% /tab %}} +{{% tab "Output" %}} +``` +{ + "name": "Rhize" +} +``` +{{% /tab %}} +{{% /tabs %}} + +You can use this behavior to direct flows. +For example, an exclusive gateway may have a condition such as `$exists(err)` that flows into an error-handling condition. + + + +### Create event logging + +To persist error handling, you can set gateways that flow to mutation tasks that use the `addEvent` operation. +The added event may be a successful operation, an error, or both, +creating a record of events emitted in the workflow that are stored in your manufacturing knowledge graph. +This strategy increases the observability of errors and facilitates future analysis. +It may also be useful when combined with the debugging strategies described in the next section. + +## Strategies to debug + +For detailed debugging, +you can use an embedded instance of [Grafana Tempo](https://github.com/grafana/tempo) to inspect each step of the workflow, node by node. +To debug on the fly, you may also find it useful to use `customResponse` and intermediate message throws to print variables and output at different checkpoints. + +### Debug from the API calls + +When you first test or run a workflow, consider starting the testing and debugging process from an API trigger. +All API triggers return information about the workflow state (for example `COMPLETED` or `ABORTED`). +With the `createAndRunBpmnSync` operation, you can also use the `customResponse` to provide information from the workflow's variable context. +For details of how this works, read the guide to [triggering workflows]({{< relref "trigger-workflows" >}}). 
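As a concrete sketch, a synchronous trigger that also requests these reporting fields might look like the following. The workflow ID is only a placeholder (the same example ID used in the trigger guide):

```gql
mutation debugRun {
  createAndRunBpmnSync(id: "API_demo_custom_response") {
    jobState
    customResponse
    traceID
  }
}
```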
+ +For example, consider a workflow that has two nodes, a Message throw event and a REST task. +1. When the message completes, the user writes `Message sent` into `customResponse` as an output variable. +1. When the REST task completes, the response is saved into `customResponse`. + +So the `jobState` property reports on the overall workflow status, and `customResponse` serves as a checkpoint to report the state of each node execution. +You can also request the `dataJSON` field, which reports the entire variable context at the last node. +Now imagine that the user has started the workflow from the API and receives this response: + +``` +{ + "data": { + "createAndRunBpmnSync": { + "jobState": "ABORTED", + "customResponse": "message sent", + "traceID": "993ee32af9522f5b35b4ec80f4ff58a8" + } + } +} +``` + + +Note how `ABORTED` indicates the workflow failed somewhere. +Yet, the value of `customResponse` must have been set after the message event executed. +So the problem is likely with the REST node. + +You could also use a similar strategy with intermediate message events. +However, while `customResponse` and messages are undoubtedly useful debugging methods, they are also limited— the BPMN equivalents of `printf()` debugging. +For full-featured debugging, use the `traceID` to explore the workflow through Tempo. + +### Debug in Tempo + +{{< callout type="info" >}} +The instructions here provide the minimum about using Tempo as a tool. +To discover the many ways you can filter your BPMN traces for debugging and analysis, +refer to the [official documentation](https://grafana.com/docs/tempo/latest/). +{{< /callout >}} + +Rhize creates a unique ID and trace for each workflow that runs. +This ID is reported as the `traceID` in the `createAndRunBPMN` mutation operation. +Within this trace, each node is _instrumented_, with spans emitted at every input and output along each node of execution. +With the trace ID, you can find the workflow run in Tempo and follow the behavior. + +To inspect a workflow in Tempo: +1. Go to your Grafana instance. +2. Select **Explore** and then Tempo. +3. From the **TraceQL** tab, enter the `traceID` and query. +Alternatively, use the **Search** tab with the `bpmn-engine` to find traces for all workflows. + +{{< bigFigure +src="/images/bpmn/screenshot-rhize-bpmn-spans-in-tempo-compact.png" +alt="Screenshot of a compact view of spans for a BPMN process in Tempo" +caption="Screenshot of a compact view of spans for a BPMN process in Tempo" +width="55%" +>}} + + +Each workflow instance displays _spans_ that trace the state of each node at its start, execution, and end states. +When debugging, you are likely interested in the spans that result in `ABORTED`. +To inspect the errors: +1. Select the nodes with errors. +1. Use the `events` property to inspect for exceptions. + +For example, this REST task failed because the URL was invalid. + +{{< bigFigure +src="/images/bpmn/screenshot-rhize-bpmn-tempo-error-node.png" +alt="Screenshot of a detailed view of an error for a BPMN process in Tempo" +caption="A detailed view of an error for a BPMN process in Tempo" +width="55%" +>}} + +Also note the names of the spans in the previous two screenshots. +Names that convey semantic information it easier to find specific nodes and easier to understand and follow the overall workflow. +Well-named nodes make debugging easier. +This is one of the reasons we recommend always following a set of [naming conventions]({{< relref "naming-conventions" >}}) when you author BPMN workflows. 
+ +### Adding the debug flag + +For granular debugging, it also helps to trace the variable context as it passes from node to node. +To facilitate this, Rhize provides a debugging option that you can pass in multiple ways: +- From an API call with the `debug:true` [argument]({{< relref "../gql/call-the-graphql-api/#request-body" >}}). +- In the process variable context, by setting `__traceDebug: true` +- In the [BPMN service configuration]({{< relref "../../reference/service-config/bpmn-configuration/" >}}) by setting `OpenTelemetry.defaultDebug` to `true` + +When the debugging variable is set, Tempo reports the entire variable context in the **Span Attributes** at the end of each node. + +{{< bigFigure +src="/images/bpmn/screenshot-rhize-variable-spans.png" +alt="Screenshot showing the process variable context at the end of a node in a BPMN workflow." +caption="The process variable context at the end of a node in a BPMN workflow." +width="55%" +>}} diff --git a/content/versions/v3.2.1/how-to/bpmn/learning-resources.md b/content/versions/v3.2.1/how-to/bpmn/learning-resources.md new file mode 100644 index 000000000..fd01b72df --- /dev/null +++ b/content/versions/v3.2.1/how-to/bpmn/learning-resources.md @@ -0,0 +1,15 @@ +--- +title: 'BPMN learning resources' +categories: ["reference"] +description: Links to supplemental tools and material to learn BPMN +weight: 999 +--- + + +Here are some links to supplemental tools and material to help you build BPMN workflows in Rhize: + +- [BPMN templates](https://github.com/libremfg/bpmn-templates). A repository of BPMN workflows that you can download and run yourself. +- [ bpmn.io](https://github.com/bpmn-io/bpmn-js). Open source rendering toolkits and editors for BPMN 2.0. You can use the `bpmn-js` for local offline building. +- [Rhize Youtube channel](https://www.youtube.com/@rhizemanufacturingdatahub). Includes demos of BPMN. +- 📝 [OMG BPMN standard](https://www.omg.org/spec/BPMN/2.0.2/). The standard on which the Rhize BPMN engine and UI is based. +- [`vscode-language-jsonata`](https://marketplace.visualstudio.com/items?itemName=bigbug.vscode-language-jsonata). A VS code extension to interactively pipe JSONata expressions together, in the literate-programming style of Jupyter notebooks. diff --git a/content/versions/v3.2.1/how-to/bpmn/naming-conventions.md b/content/versions/v3.2.1/how-to/bpmn/naming-conventions.md new file mode 100644 index 000000000..354038d53 --- /dev/null +++ b/content/versions/v3.2.1/how-to/bpmn/naming-conventions.md @@ -0,0 +1,204 @@ +--- +title: 'Naming conventions' +categories: ["reference"] +description: Recommended naming conventions for BPMN processes and their nodes +weight: 950 +--- + +{{< callout type="info" >}} +These are recommendations. Your organization may adapt the conventions to its needs. +{{< /callout >}} + +Each BPMN workflow has an ID, as does each node in the workflow. +Rhize recommends adopting a set of conventions about how you name these elements. +Standardizing BPMN names across an environment has multiple benefits: +- Consistent workflow naming conventions help you filter and find workflows in the process list. +- Well-named nodes make the workflow behavior easier to understand at a glance. +- These names provide discovery and context when debugging and tracing a workflow. + +The following list describes our default naming conventions. 
+ + +## BPMN processes + +Name BPMN Processes according to the following convention: + +`__` + +Where: +- `` describes how the BPMN is expected to be triggered + +- `` Links to the ID supplied in the sequence diagram (if applicable) +- `` describes what the workflow does + +| InvocationTypes | Description | +|-----------------|------------------------------------------------------------------------------------------------| +| `NATS` | Invoked when a message is received in the Rhize NATS broker | +| `API` | Expects to be called from the API using `createAndRunBPMNSync` or `createAndRunBPMN` mutations | +| `RULE` | Invocation is expected from the [rule engine]({{< relref "../publish-subscribe/create-equipment-class-rule" >}}) | +| `FUNC` | Internal functions to be invoked as helpers | + +Examples: +- `NATS_ProcessOrderV1TransformAndPersist` +- `NATS_PLMMaterialMasterV2TransformPersistAndPublish` +- `RULE_ChangeOEEStatusOfCNCEquipment` +- `API_WST04_GetNextLibreBatchNumber` +- `API_WAT01_CloseOpenJobOrders` + +## BPMN Nodes + +Name nodes in a workflow according to the following convention: +- `__` + +Where: +- `` + is the type of node +- `` further categorizes the node +- `` describes the node behavior. + +### Start Events + +For [start events]({{< relref "./bpmn-elements">}}), +use the name to describe each trigger according to the following convention: + +`START__` + +For message starts, include the topics. +For timers, indicate the frequency. + +| SubType | Description | +| --- | --- | +| `API` | Manual start via API | +| `MSG` | Message Start | +| `TIMER` | Timer start | + +Examples: +- `START_MSG_InboundOrders` +- `START_API` +- `START_TIMER_EveryTenMinutes` + +### Query task + +Runs a GraphQL [{{< abbr "query" >}}]({{< relref "../gql/query/" >}}). + +Prefix: `Q`. + +| SubTypes | Description | +|----------|-----------------------------------------------------------------------------------------| +| `GET` | [Get query]({{< relref "../gql/query/#get" >}}). Expected to return one item | +| `QUERY` | [Query operation]({{< relref "../gql/query/#query" >}}). May return multiple items | +| `AGG` | [Aggregate query]({{< relref "../gql/query/#aggregate" >}}) | + +Examples: + +- `Q_GET_OperationsScheduleByOperationId` +- `Q_QUERY_JobOrdersByOperationsRequestId` + +### Mutation task + +Runs a GraphQL [{{< abbr "mutation" >}}]({{< relref "../gql/mutate/" >}}). 
+ + +Prefix: `M` + +| SubTypes | Description | +|----------|------------------------------------------------------------------------------------------| +| `ADD` | [Adds]({{< relref "../gql/mutate/#add" >}}) a new record | +| `UPDATE` | [Updates]({{< relref "../gql/mutate/#update" >}}) existing | +| `UPSERT` | Updates existing [or adds new]({{< relref "../gql/mutate/#upsert" >}}) if not found | +| `DELETE` | [Deletes]({{< relref "../gql/mutate/#delete" >}}) a record | +| | | + +Examples: + +- `M_UPSERT_ProcessOrder` +- `M_ADD_UnitOfMeasure` + +### JSONata transform task + +Prefix: `J` + +| SubType | Description | +|-------------|---------------------------------------------| +| `INIT` | Initialising some new variables | +| `INDEX` | Updating an index variable | +| `MAP` | Mapping one entity to another | +| `TRANSFORM` | Updates existing or adds new if not found | +| `DIFF` | Calculating the difference between entities | +| `CLEANUP` | Deletes the record | +| `CALC` | Performing some calculation | + +Examples: + +- `J_INDEX_CreateLoop1` +- `J_TRANSFORM_PO_to_JO` +- `J_INIT_ProcessingLimits` + +### REST task + + +REST nodes should also indicate the target system and endpoints, +according to the following naming convention: + +`REST____` + +Where: + +- `SYSTEM` abbreviates the system being called, for example `SAP`. + + +| SubType | Description | +|----------|---------------------------------------------| +| `GET` | Initialising some new variables | +| `POST` | Updating an index variable | +| `PUT` | Mapping one entity to another | +| `PATCH` | Updates existing or adds new if not found | +| `DELETE` | Calculating the difference between entities | + +Examples: + +- `REST_SAP_POST_bill-material-bom` +- `REST_EWM_GET_serial-number-by-object-v1_RingSerialNumbers` + +### Gateway + +Prefix = `GW` + +| SubType | Description | +|-----------|-------------------------| +| `X_SPLIT` | Exclusive gateway split | +| `X_JOIN` | Parallel gateway join | +| `P_SPLIT` | Parallel gateway split | +| `P_JOIN` | Parallel gateway join | + +Examples: + +- `GW_X_SPLIT_DifferenceLoop01` +- `GW_P_JOIN_DifferenceLoop01` + +### Sequence Flows + +Only name sequence flows that have conditions. +If a sequence flow carries a condition, indicate the condition in the naming as follows: + +`F_` + +Examples: + +- `F_NoNewEquipment` +- `F_AbandonFlagTrue` + +### End Events + + If a workflow has multiple end events, indicate their number according to this convention: +- `END_` + +Examples: + +- `END_01` +- `END_02` + +## Response Field Naming + +When nodes generate a response, give the response field the same name as its node. + diff --git a/content/versions/v3.2.1/how-to/bpmn/screenshot-rhize-flamegraph-json.png b/content/versions/v3.2.1/how-to/bpmn/screenshot-rhize-flamegraph-json.png new file mode 100644 index 000000000..594b763c1 Binary files /dev/null and b/content/versions/v3.2.1/how-to/bpmn/screenshot-rhize-flamegraph-json.png differ diff --git a/content/versions/v3.2.1/how-to/bpmn/trigger-workflows.md b/content/versions/v3.2.1/how-to/bpmn/trigger-workflows.md new file mode 100644 index 000000000..16e82a04b --- /dev/null +++ b/content/versions/v3.2.1/how-to/bpmn/trigger-workflows.md @@ -0,0 +1,151 @@ +--- +title: 'Trigger workflows' +date: '2024-04-24T19:35:09+03:00' +categories: ["how-to"] +description: How to trigger a workflow in Rhize. Use the API, publish a message to the broker, listen to a data source, or set timers. +weight: 150 +--- + + +You also have multiple ways to start, or _trigger_, a BPMN workflow. 
+The best choice of trigger depends on the context of the event and the system that initiates the workflow. + +{{% reusable/bpmn-triggers %}} + + +## Start a workflow from an API + +No matter the start event, **all workflows can be triggered by an API call**. +However, if the workflow uses the default blank start event, you must trigger it through API call. +For example, an API trigger may originate from a custom frontend application or a developer's local machine. + +The Rhize API has two mutations to start workflows. +Both must specify the workflow ID as an [argument in the request body]({{< relref "../gql/call-the-graphql-api/#request-body" >}}). +Each run for a workflow returns a unique `ID` that you can use for debugging. + + +### Synchronous and asynchronous API triggers + +To start BPMN workflows, Rhize has two API operations: +- `createAndRunBPMNSync` starts a workflow and waits for the process to complete or abort (synchronous). +- `createAndRunBpmn` starts a workflow and does not wait for the response (asynchronous). + +Use the synchronous operation if you want to receive information about the result of the workflow in the call response. +On the other hand, the asynchronous operation frees up the call system to do more work, no matter whether the workflow runs correctly. + +Both operations have similar call syntax. +For example, compare the syntax for these calls: + +{{% tabs items="Synchronous,Async" %}} +{{% tab "Synchronous" %}} +```gql +mutation sychronousCall{ + createAndRunBpmnSync(id: "API_demo_custom_response") { + id + jobState + customResponse + } +} + +``` +{{% /tab %}} + +{{% tab "Async" %}} +```gql +mutation asyncCall{ + createAndRunBpmn(id: "API_demo_custom_response") { + id + jobState + customResponse + } +} +``` +{{% /tab %}} +{{% /tabs %}} + +The responses for these calls have two differences: +- For synchronous calls, the returned `JobState` should be a finished value (such as `COMPLETED` or `ABORTED`). Asynchronous calls likely return an in-progress status, such as `RUNNING`. +- Synchronous calls can request the `dataJSON` field to report the entire variable context at the final node. +- Only the synchronous call receives data in the `customResponse`. For details, refer to the next section. + +### `customResponse` + +The `customResponse` is a special variable to return data in the response to clients that run `createAndRunBPMNSync` operations. + +You can set the `customResponse` in any [element]({{< relref "./bpmn-elements" >}}) that has an `Output` or `Input response` parameter. +It can use any data from the {{< abbr "process variable context" >}}, including variables added on the fly. + +Functionally, only the last value of the `customResponse` is returned to the client that sent the response. +However, you can use conditional branches and different end nodes to add error handling. +For example, this workflow returns `Workflow ran correctly` if the call variables include the message `CORRECT` and an error message in all other cases. 
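A sketch of the end-node output that could produce this behavior, assuming the caller passed its variables under `$.input` as in the `variables` example later on this page:

```jsonata
=
{
  "customResponse": ($.input.message = "CORRECT")
    ? "Workflow ran correctly"
    : "Error: unexpected message"
}
```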
+ + +{{< bigFigure +src="/images/bpmn/screenshot-rhize-bpmn-error-handling-custom-response.png" +alt="A BPMN workflow with customResponse in the output of the end node" +caption="Download this workflow from [BPMN templates](https://github.com/libremfg/rhize-templates/tree/main/bpmn/custom-response-error-events)" +width="80%" +>}} + + +### Additional properties for workflow calls {#variables-versions} + +API operations can also include parameters to add variables to the {{< abbr "process variable context" >}} and to specify a workflow version. + +To add variables, use the `variables` input argument. +Note that the variables are accessible from their root object. +For example, a workflow would access the following string value at `$.input.message`: + +```gql{ +"variables": "{\"input\":{\"message\":\"CORRECT\"}}" +} +``` + +To specify a version, use the `version` property. For example, this input instructs Rhize to run version `3` of the `API_demoCallToRemoteAPI` workflow: + +```json +{ + "createAndRunBpmnSyncId": "API_demoCallToRemoteAPI", + "version": "3" +} +``` + + +If the `version` property is empty, Rhize runs the active version of the workflow (if an active version exists). + +## Start from a message + +The [message start event]({{< relref "./bpmn-elements#message-start-event" >}}) subscribes to a topic on the Rhize broker. +Whenever a message is published to this topic, the workflow is triggered. +The Rhize broker can receive messages published over MQTT, NATS, and OPC UA. + +For example, this workflow subscribes to the topic `material/stuff`. +Whenever a message is published to the topic, it evaluates whether the quantity is in the correct threshold. +If the quantity is correct, it uses the [mutation service task]({{< relref "./bpmn-elements#graphql-mutation/" >}}) to add a material lot. +If incorrect, it sends an alert back to the broker. + + +{{% bigFigure +alt="BPMN message start with conditional evaluation" +src="/images/bpmn/rhize-bpmn-message-start-throw-conditional.png" +width="65%" +caption="Download this workflow as a [BPMN template](https://github.com/libremfg/rhize-templates/tree/main/bpmn/msg-start-and-throw)." + %}} + + +Note that, for a workflow to run from a message start event, the workflow **must be enabled.** + +## Rule-based triggers + +Rule-based triggers subscribe to tag changes from a data source and trigger when the rule change condition is met. +Typically, users choose this workflow trigger when they want to orchestrate processes originating from level-1 and level-2 systems. + +To add a data source and create a rule based trigger, refer to [Turn values into events]({{< relref "../publish-subscribe/create-equipment-class-rule" >}}). + +## Timer triggers + +Timer triggers run according to a configured time or interval. +For example, a timer trigger may start a workflow each day, or once at a certain time. + +To use timer triggers, use the [timer start event]({{< relref "./bpmn-elements#timer-start-event" >}}). As with message start events, the workflow **must be enabled** for it to run. 
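For illustration only (these values are not from a shipped template), a daily recurring **Cycle** and a one-shot **Date** might be entered as:

```
Cycle: R/2024-01-01T06:00:00Z/P1D
Date:  2024-12-31T23:59:00Z
```

The cycle value repeats every day (`P1D`) from the given start timestamp; the date value fires once at that moment.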
+
diff --git a/content/versions/v3.2.1/how-to/bpmn/tune-performance.md b/content/versions/v3.2.1/how-to/bpmn/tune-performance.md
new file mode 100644
index 000000000..7ce160a73
--- /dev/null
+++ b/content/versions/v3.2.1/how-to/bpmn/tune-performance.md
@@ -0,0 +1,93 @@
+---
+title: 'Tune BPMN performance'
+date: '2024-02-09T09:47:47-03:00'
+categories: ["how-to"]
+description: Tips to debug and improve the performance of your BPMN process
+weight: 300
+---
+
+This page documents some tips to debug [BPMN workflows]({{< relref "./create-workflow" >}}) and improve their performance.
+
+Manufacturing events can generate a vast amount of data.
+And a BPMN workflow can have any number of logical flows and data transformations.
+So an inefficient BPMN process can introduce performance degradations.
+
+## Manage the process context size
+
+{{< callout type="info" >}}
+The max size of the process variable context comes from the default max payload size of NATS JetStream.
+To increase this size, change your NATS configuration.
+{{< /callout >}}
+
+By default, the size of the {{< abbr "process variable context" >}} is 1MB.
+If the total size of all variables exceeds this limit, the BPMN process fails to execute.
+
+### Be mindful of variable output
+
+Pay attention to the overall size of your variables, especially when outputting to new variables.
+For example, imagine an initial JSON payload, `data`, that is 600KB.
+If a JSONata task slightly modifies and outputs it to a new variable, `data2`, the process variable context will exceed 1MB and the BPMN process will exit.
+
+To work around this constraint, you can save memory by mutating variables.
+That is, instead of outputting a new variable, you can output the transformed payload to the original variable name.
+
+### Discard unneeded data from API responses
+
+Additionally, in service tasks that call APIs, use the **Response Transform Expression** to minimize the returned data to only the necessary fields.
+Rhize stores only the output of the expression and discards the rest of the response. This is especially useful in service tasks that [Call a REST API](https://docs.rhize.com/how-to/bpmn/bpmn-elements/#call-rest-api), since you cannot precisely specify the fields in the response (as you can with a GraphQL query).
+
+If you still struggle to find what objects create memory bottlenecks, use a tool to observe their footprint, as documented in the next section.
+
+### Observe payload size
+
+Each element in a BPMN workflow passes, evaluates, or transforms a JSON body.
+Any unnecessary fields occupy unnecessary space in the {{< abbr "process variable context" >}}.
+However, it's hard for a human to reason about the size of the in-flight payload without a tool to provide measurements and context.
+
+It's easier to find places to reduce the in-flight payload size if you can visualize its memory footprint.
+We recommend the [JSON size analyzer](https://www.debugbear.com/json-size-analyzer), which presents a flame graph of the memory used by the objects in a JSON data structure.
+
+
+{{< figure
+src="/how-to/bpmn/screenshot-rhize-flamegraph-json.png"
+alt="A flame graph showing which objects dominate the size of a JSON payload"
+width="70%"
+caption="The material lot object is dominating the size of this JSON payload. This is a good place to start looking for optimizations."
+>}}
+
+
+## Look for inefficient execution logic
+
+When you first write a workflow, you may use logical flows that slow down execution time.
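+One way to confirm whether the variable context, rather than the logic, is the bottleneck is to temporarily add a JSONata task that records the approximate size of the context. The following is a minimal sketch, assuming the whole variable context is available at the root; the output key name is illustrative:
+
+```js
+=(
+  { "approxContextChars": $length($string($)) }
+)
+```
+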
+If a process seems slow, look for these places to refactor performance. + +### Avoid parallel joins + +Running processes in [parallel]({{< relref "./bpmn-elements#parallel-gateway" >}}) can increase the workflow's complexity. +Parallel joins in particular can also increase memory usage of the NATS service. + +Where possible, prefer exclusive branching and sequential execution. +When a task requires concurrency, keep the amount of data processed and the complexity of the tasks to the minimum necessary. + +### Control wildcards in message start events + +BPMN message start tasks can start on any topic or data source. However, performance varies with the events that the start task subscribes to. + +Subscribing to multiple wildcards can especially drag performance. +To avoid a possible performance hit, try to subscribe to an exact topic, or limit subscriptions to a single wildcard. + +### Avoid loops + +A BPMN process can loop back to a previous task node to repeat execution. +This process can also increase execution time. +If a process with a loop is taking too long to execute, consider refactoring the loop to process the variables as a batch in JSONata tasks. + +## Use the JSONata book extension + +A BPMN process performs better when the JSONata transformations are precise. +A strategy to debug and minimize necessary computation is to break transformations into smaller steps. + +If you use Visual Studio Code, consider the [`jsonata-language`](https://marketplace.visualstudio.com/items?itemName=bigbug.vscode-language-jsonata) extension. +Similar to a Jupyter notebook, the extension provides an interactive environment to write JSONata expressions and pass the output from one expression into the input of another. +Besides its benefit for monitoring performance, we have used it to incrementally build complex JSONata in a way that we can document and share (in the style of literate programming). + diff --git a/content/versions/v3.2.1/how-to/bpmn/use-jsonata.md b/content/versions/v3.2.1/how-to/bpmn/use-jsonata.md new file mode 100644 index 000000000..0bc9f1080 --- /dev/null +++ b/content/versions/v3.2.1/how-to/bpmn/use-jsonata.md @@ -0,0 +1,839 @@ +--- +title: 'Use JSONata' +date: '2024-05-10T16:20:35-03:00' +categories: ["how-to"] +description: The Rhize guide to JSONata, with example transformations and calculations that are relevant to manufacturing. +weight: 200 +--- + + +[JSONata](https://jsonata.org/) +is a query language to filter, transform, and create JSON objects. +Rhize BPMN workflows use JSONata expressions to +transform JSON payloads as they pass through workflow nodes and across integrated systems. +In BPMN workflows, JSONata expressions have some essential functions: +- **Map data.** Moving values from one data structure to another. +- **Calculate data.** Receiving values as input and create new data from them. +- **Create logical conditions.** Generate values to feed to gateways to direct the flow of the BPMN. + +This guide details how to use JSONata in your Rhize environment and provides some examples relevant to manufacturing workflows. +For the full details of the JSONata expression language, read the [Official JSONata documentation](https://docs.jsonata.org/overview.html). + +## Use JSONata in Rhize + +JSONata returns the final value of its expression as output. +This output can be of any data type that JSON supports. +Generally, we recommend outputting a JSON object with the keys and values of the data you want to subsequently work with. 
+ +In practice, creating an expression usually follows these steps: +1. Begin with an `=`. +1. Embed the expression in parenthesis. +1. At the top of expression, write your logic, variables, and functions. +1. At the bottom of the expression, create a JSON object whose keys are names you configure and whose values are derived from your logic. + +For example: + +{{< tabs items="expression,output" >}} +{{< tab >}} + +```js +=( + $logic := "Hello" & " " & "World"; + + { + "output": $logic + } +) + +``` +{{< /tab >}} +{{< tab >}} + +```JSON +{ + "output": "Hello World" +} +``` + +{{< /tab >}} +{{< /tabs >}} + + + +### Begin each expression with a `=` + +Note that the previous expression begins with the equals sign, `=`. +This character instructs Rhize to parse the subsequent data as JSONata (as opposed to raw JSON or some other data structure). + +### Access root variable context with `$.` + +To access the root of the entire BPMN variable space, +use the dollar character followed by a dot, `$.`. +For example, this expression accesses all IDs for an `equipmentClass` object from the root variable context, `$.`. + +```js +$.equipmentClass.id +``` + +{{% tabs items="Input,Output" %}} + +{{% tab %}} + +```json +{ + "equipmentClass": [ + { + "id": "Vessel-A012", + "description": "Stock Solution Vessel", + "effectiveStart": "2023-05-24T09:58:00Z", + "equipmentClassProperties": [ + { + "id": "Volume", + "description": "Vessel Volume" + } + ] + }, + { + "id": "Vessel-A013", + "description": "Stock Solution Vessel" + } + ] +} + +``` +{{% /tab%}} + +{{% tab %}} + +``` +[ + "Vessel-A012", + "Vessel-A013" +] +``` +{{% /tab %}} + + +{{% /tabs %}} + +### JSONata in BPMN elements + +JSONata can be used in many Rhize BPMN elements +Particularly, the [JSONata service task]({{< relref "./bpmn-elements/#jsonata-transform" >}}) exists to receive input and pass it to another element or system. + +Though JSONata tasks are the most common use of JSONata, +you can use the `=` prefix to declare an expression in many other fields. +Parameters that accept expressions include API payloads, message payloads, and flow conditions. + +To review the full list of elements and fields that accept JSONata, read the [BPMN element reference]({{< relref "bpmn-elements" >}}). + +### JSONata version + +Many implementations of JSONata exist. +Rhize uses a custom Go implementation for high performance and safe calculation. + +## JSONata examples + +These snippets provide some examples of JSONata from manufacturing workflows. +To experiment with how they work, copy the data and expression into a [JSONata exerciser](https://try.jsonata.org/) and try changing values. + +### Filter for items that contain + +This expression returns the ID of all `equipmentActual` items that are associated with a specified job response `JR-4`. +It outputs the IDs as an array of strings in a new custom object. + +This is a minimal example of how you can use JSONata to transform data into new representations. +Such transformation is a common prerequisite step for post-processing and service interoperability. 
+ +```js +$.data.queryJobResponse[`id`="JR-4"].( + {"associatedEquipment": equipmentActual.id} +) +``` + +{{% tabs items="Input,Output" %}} + +{{% tab %}} +```json +{ + "data": { + "queryJobResponse": [ + { + "id": "JR-1", + "data": [ + { + "value": 100 + } + ], + "equipmentActual": [ + { + "id": "hauler" + }, + { + "id": "actuator-121" + } + ] + }, + { + "id": "JR-4", + "data": [ + { + "value": "101.8" + } + ], + "equipmentActual": [ + { + "id": "actuator-132" + }, + { + "id": "actuator-133" + } + ] + } + ] + } +} +``` + +{{% /tab %}} + +{{% tab %}} + +```json +{ + "associatedEquipment": [ + "actuator-132", + "actuator-133" + ] +} +``` +{{% /tab %}} +{{% /tabs %}} + +### Find actual associated with high values + +This expression finds all job responses whose `value` exceeds `100`. +It outputs the matching job response IDs along with the associated equipment actual used in the job. + +In production, you may use a similar analysis to isolate all {{< abbr "resource actual" >}}s associated with an abnormal production outcome. + +```js +$map($.data.queryJobResponse, function($v){ + $number($v.data.value) > 102 + ? {"jobResponseId": $v.id, "EquipmentActual": $v.equipmentActual} + } + ) + +``` + +{{% tabs items="Input,Output" %}} +{{% tab "input" %}} +```json +{ + "data": { + "queryJobResponse": [ + { + "id": "JR-1", + "data": [ + { + "value": 100 + } + ], + "equipmentActual": [ + { + "id": "hauler" + }, + { + "id": "actuator-121" + } + ] + }, + { + "id": "JR-5", + "data": [ + { + "value": 103.2 + } + ], + "equipmentActual": [ + { + "id": "actuator-122" + }, + { + "id": "actuator-13" + } + ] + }, + { + "id": "JR-2", + "data": [], + "equipmentActual": [ + { + "id": "actuator-13" + } + ] + }, + { + "id": "JR-4", + "data": [ + { + "value": "101.8" + } + ], + "equipmentActual": [ + { + "id": "actuator-132" + }, + { + "id": "actuator-133" + } + ] + }, + { + "id": "JR-3", + "data": [], + "equipmentActual": [ + { + "id": "actuator-091" + } + ] + }, + { + "id": "JR-12", + "data": [], + "equipmentActual": [] + }, + { + "id": "JR-123", + "data": [], + "equipmentActual": [ + { + "id": "actuator-121" + } + ] + }, + { + "id": "JR-6", + "data": [], + "equipmentActual": [] + }, + { + "id": "JR-8", + "data": [ + { + "value": "96.7" + } + ], + "equipmentActual": [ + { + "id": "actuator-091" + } + ] + }, + { + "id": "JR-9", + "data": [], + "equipmentActual": [] + }, + { + "id": "JR-10", + "data": [ + { + "value": "105.0" + } + ], + "equipmentActual": [ + { + "id": "actuator-12" + } + ] + }, + { + "id": "JR-7", + "data": [ + { + "value": "103.2" + } + ], + "equipmentActual": [ + { + "id": "actuator-12" + } + ] + } + ] + } +} +``` + +{{% /tab %}} +{{% tab %}} +```json +[ + { + "jobResponseId": "JR-5", + "EquipmentActual": [ + { + "id": "actuator-122" + }, + { + "id": "actuator-13" + } + ] + }, + { + "jobResponseId": "JR-10", + "EquipmentActual": [ + { + "id": "actuator-12" + } + ] + }, + { + "jobResponseId": "JR-7", + "EquipmentActual": [ + { + "id": "actuator-12" + } + ] + } +] + +``` + +{{% /tab %}} +{{% /tabs %}} + + + +### Map event to operations event + +This function takes data from an external weather API +and maps it onto the `operationsEvent` ISA-95 object. +It takes the earliest value from the event time data as the start, and last value as the end. +If no event data exists, it outputs a message. + +Although this example uses data that is unlikely to be a source of a real manufacturing event, the practice of receiving data from a remote API and mapping it to ISA-95 representation is quite common. 
+In production, you may perform a similar operation to map an SAP schedule order to an `operationsSchedule`, or the results from a QA service to the `testResults` object. + + +```js +( + +$count(events[0]) > 0 + + ? events.{ + "id":id, + "description":title, + "hierarchyScope":{ + "id":"Earth", + "label": Earth, + "effectiveStart": $sort(geometry.date)[0] + }, + "category":categories.title, + "recordTimestamp": $sort(geometry.date)[0], + "effectiveStart": $sort(geometry.date)[0], + "effectiveEnd": $sort(geometry.date)[$count(geometries.date)-1], + "source": sources.id & " " & sources.url, + "operationsEventDefinition": { + "id": "Earth event", + "label": "Earth event" + } + } + + : {"message":"No earth events lately"} + +) +``` + +{{% tabs items="Input,Output"%}} +{{% tab "Input" %}} + +{{% details title="Long JSON" closed="false" %}} +```json +{ + "title": "EONET Events", + "description": "Natural events from EONET.", + "link": "https://eonet.gsfc.nasa.gov/api/v3/events", + "events": [ + { + "id": "EONET_6516", + "title": "Ubinas Volcano, Peru", + "description": null, + "link": "https://eonet.gsfc.nasa.gov/api/v3/events/EONET_6516", + "closed": null, + "categories": [ + { + "id": "volcanoes", + "title": "Volcanoes" + } + + ], + "sources": [ + { + "id": "SIVolcano", + "url": "https://volcano.si.edu/volcano.cfm?vn=354020" + } + + + ], + "geometry": [ + { + "magnitudeValue": null, + "magnitudeUnit": null, + "date": "2024-05-06T00:00:00Z", + "type": "Point", + "coordinates": [ -70.8972, -16.345 ] + } + + + ] + }, + + { + "id": "EONET_6513", + "title": "Iceberg D28A", + "description": null, + "link": "https://eonet.gsfc.nasa.gov/api/v3/events/EONET_6513", + "closed": null, + "categories": [ + { + "id": "seaLakeIce", + "title": "Sea and Lake Ice" + } + + ], + "sources": [ + { + "id": "NATICE", + "url": "https://usicecenter.gov/pub/Iceberg_Tabular.csv" + } + + + ], + "geometry": [ + { + "magnitudeValue": 208.00, + "magnitudeUnit": "NM^2", + "date": "2024-02-16T00:00:00Z", + "type": "Point", + "coordinates": [ -33.27, -51.88 ] + }, + + { + "magnitudeValue": 208.00, + "magnitudeUnit": "NM^2", + "date": "2024-03-01T00:00:00Z", + "type": "Point", + "coordinates": [ -32.82, -51.09 ] + }, + + { + "magnitudeValue": 208.00, + "magnitudeUnit": "NM^2", + "date": "2024-03-07T00:00:00Z", + "type": "Point", + "coordinates": [ -30.95, -51.21 ] + } + ] + }, + + { + "id": "EONET_6515", + "title": "Sheveluch Volcano, Russia", + "description": null, + "link": "https://eonet.gsfc.nasa.gov/api/v3/events/EONET_6515", + "closed": null, + "categories": [ + { + "id": "volcanoes", + "title": "Volcanoes" + } + + ], + "sources": [ + { + "id": "SIVolcano", + "url": "https://volcano.si.edu/volcano.cfm?vn=300270" + } + + + ], + "geometry": [ + { + "magnitudeValue": null, + "magnitudeUnit": null, + "date": "2024-04-28T00:00:00Z", + "type": "Point", + "coordinates": [ 161.36, 56.653 ] + } + + + ] + } + + ] +} +``` +{{% /details %}} +{{% /tab %}} + +{{% tab "Output" %}} +```json +[ + { + "id": "EONET_6516", + "description": "Ubinas Volcano, Peru", + "hierarchyScope": { + "id": "Earth", + "effectiveStart": "2024-05-06T00:00:00Z" + }, + "category": "Volcanoes", + "recordTimestamp": "2024-05-06T00:00:00Z", + "effectiveStart": "2024-05-06T00:00:00Z", + "effectiveEnd": "2024-05-06T00:00:00Z", + "source": "SIVolcano https://volcano.si.edu/volcano.cfm?vn=354020", + "operationsEventDefinition": { + "id": "Earth event", + "label": "Earth event" + } + }, + { + "id": "EONET_6513", + "description": "Iceberg D28A", + "hierarchyScope": { + 
"id": "Earth", + "effectiveStart": "2024-02-16T00:00:00Z" + }, + "category": "Sea and Lake Ice", + "recordTimestamp": "2024-02-16T00:00:00Z", + "effectiveStart": "2024-02-16T00:00:00Z", + "effectiveEnd": "2024-03-07T00:00:00Z", + "source": "NATICE https://usicecenter.gov/pub/Iceberg_Tabular.csv", + "operationsEventDefinition": { + "id": "Earth event", + "label": "Earth event" + } + }, + { + "id": "EONET_6515", + "description": "Sheveluch Volcano, Russia", + "hierarchyScope": { + "id": "Earth", + "effectiveStart": "2024-04-28T00:00:00Z" + }, + "category": "Volcanoes", + "recordTimestamp": "2024-04-28T00:00:00Z", + "effectiveStart": "2024-04-28T00:00:00Z", + "effectiveEnd": "2024-04-28T00:00:00Z", + "source": "SIVolcano https://volcano.si.edu/volcano.cfm?vn=300270", + "operationsEventDefinition": { + "id": "Earth event", + "label": "Earth event" + } + } +] +``` + +{{% /tab %}} +{{% /tabs %}} + +### Calculate summary statistics + +These functions calculate statistics for an array of numbers. +Some of the output uses built-in JSONata functions, such as `$max()`. +Others, such as the ones for median and standard deviation, +are created in the expression. + +You might use statistics such as these to calculate metrics on historical or streamed data. + +```js +( + $mode := function($arr) { + ( + $uniq := $distinct($arr); + $counted := $map($uniq, function($v){ + { "value": $v, "count": $count($filter($arr, function($item) { $item = $v })) } + }); + $modes := $filter($counted, function($item) { + $item.count = $max($counted.count) + }); + $sort($modes.value) + ) + }; + $stdPop := function($arr) { + ( + $variance := $map($arr, function($v, $i, $a) { $power($v - $average($a), 2) }); + $sum($variance) / $count($arr) ~> $sqrt() + ) + }; + $median := function($arr) { + ( + $sorted := $sort($arr); + $length := $count($arr); + $mid := $floor($length / 2); + $length % 2 = 0 ? $median := ($sorted[$mid - 1] + $sorted[$mid]) / 2 : $median := $sorted[$mid] + ) + }; + { + "std_population": $stdPop($.data.arr), + "mean": $average($.data.arr), + "median": $median($.data.arr), + "mode": $mode($.data.arr), + "max": $max($.data.arr), + "min": $min($.data.arr) + } +) +``` +{{% tabs items="Input,Output" %}} +{{% tab "input" %}} +```json +{ + "data": { + "arr": [ + 1, + 1, + 6, + 2, + 3, + 32, + 4, + 5, + 5, + 3, + 3, + 6, + 6 + ] + } +} +``` +{{% /tab %}} +{{% tab "output" %}} +```json +{ + "std_population": 7.72071677084591, + "mean": 5.923076923076923, + "median": 4, + "mode": [ + 3, + 6 + ], + "max": 32, + "min": 1 +} +``` +{{% /tab %}} +{{% /tabs %}} + +### Select random item + +This expression randomly selects an item from the plant's array of available equipment, and then adds that item as the `equipmentRequirement` for a segment associated with a specific job order. + +You might use randomizing functions for scheduling, quality control, and simulation. 
+ +```js +( + +$randomChoice := function($a) { + ( + $selection := + $random() * ($count($a)+1) ~> $floor(); + $a[$selection] + + )}; + +{ +"segmentRequirement": { + "workRequirement": {"id": $.PO}, + "equipmentRequirements":[$randomChoice($.available)], + "id": "Make widget" + } +} + +) +``` +{{% tabs items="Input,Output" %}} +{{% tab "Input" %}} +```json +{ + "available":["line_1","line_2","line_3","line_4","line_5"], + "PO":"po-123" + } +``` +{{% /tab %}} + +{{% tab "Output" %}} +```json +{ + "segmentRequirement": { + "workRequirement": { + "id": "po-123" + }, + "equipmentRequirements": [ + "line_2" + ], + "id": "Make widget" + } +} +``` +{{% /tab %}} +{{% /tabs %}} + +### Recursively find child IDs + +This function uses recursion and a predefined set of naming rules +to find (or generate) a set of child IDs for an entity. +The `n` value determines how many times it's called. + +Many payloads in manufacturing have nested data. +Recursive functions such as the following provide a concise means of traversing a set of subproperties. + +``` +( + $next := function($x, $y) {$x > 1 ? + ( + $namingRules := "123456789ABCDFGHJKLMNOPQRSTUVWXYZ"; + $substring($y[-1],-1) = "Z" ? + $next($x - 1, $append($y, $y[-1] & '1')) : + $next($x - 1, $append( + $y, + $substring($y[-1],0,$length($y[-1])-1) & $substring($substringAfter($namingRules,$substring($y[-1],-1)),0,1) + )) + ) + : $y}; + { + "children": $next(n, [nextId]) + } +) +``` + +{{% tabs items="Input,Output" %}} +{{% tab "Input" %}} +```json +{ +"n":10, +"nextId": "molten-widet-X2FCS" +} +``` +{{% /tab %}} + +{{% tab "output" %}} +```json +{ + "children": [ + "molten-widet-X2FCS", + "molten-widet-X2FCT", + "molten-widet-X2FCU", + "molten-widet-X2FCV", + "molten-widet-X2FCW", + "molten-widet-X2FCX", + "molten-widet-X2FCY", + "molten-widet-X2FCZ", + "molten-widet-X2FCZ1", + "molten-widet-X2FCZ2" + ] +} +``` +{{% /tab %}} +{{% /tabs %}} diff --git a/content/versions/v3.2.1/how-to/bpmn/variables.md b/content/versions/v3.2.1/how-to/bpmn/variables.md new file mode 100644 index 000000000..913599a43 --- /dev/null +++ b/content/versions/v3.2.1/how-to/bpmn/variables.md @@ -0,0 +1,17 @@ +--- +title: 'Special variables' +categories: ["reference"] +description: Special variables used by Rhize BPMN workflows +aliases: + - "/how-to/bpmn/special-variables" +weight: 900 +--- + +Rhize designates some variable names for a special purpose in BPMN workflow. +This list these special variables is as follows: + +| Variable | Purpose | +|------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `BODY` | The name of the variable as **Input** in [Intermediate message throws]({{< relref "./bpmn-elements.md#intermediate-message-events" >}}). The value of this variable is the payload sent to the Rhize message broker. | +| `customResponse` | A value to report at the end of a synchronous API call to trigger a workflow. On completion, the call reports whatever the value was in the `customResponse` field of the GraphQL response. For details, read [Trigger workflows]({{< relref "./trigger-workflows.md" >}}). | +| `__traceDebug` | If `true` at the start of the workflow, the BPMN workflow reports the variable context at each node as spans in Tempo. 
| diff --git a/content/versions/v3.2.1/how-to/gql/_index.md b/content/versions/v3.2.1/how-to/gql/_index.md new file mode 100644 index 000000000..9115fc057 --- /dev/null +++ b/content/versions/v3.2.1/how-to/gql/_index.md @@ -0,0 +1,9 @@ +--- +title: Use GraphQL +description: Guides to use the GraphQL interface to query information, add records, and build custom UIs. +weight: 100 +cascade: + icon: gql +--- + +{{< card-list >}} diff --git a/content/versions/v3.2.1/how-to/gql/call-the-graphql-api.md b/content/versions/v3.2.1/how-to/gql/call-the-graphql-api.md new file mode 100644 index 000000000..5f22a41b8 --- /dev/null +++ b/content/versions/v3.2.1/how-to/gql/call-the-graphql-api.md @@ -0,0 +1,371 @@ +--- +title: >- + Overview: the Rhize API +date: '2023-11-22T09:43:30-03:00' +categories: ["how-to"] +description: How to query your manufacturing knowledge graph +weight: 100 +--- + +In a manufacturing operation, all event data is interrelated. +To make these relations explorable, Rhize stores data in a special-purpose graph database designed to represent all levels of the manufacturing process. +This database is enforced by our ISA-95 schema, the most comprehensive data representation of ISA-95 in the world. + +Rhize exposes this database through a [GraphQL API](https://graphql.org/). +Unlike REST, GraphQL requires only one endpoint, and you can define exactly the data that you return for each operation. + + +If you are a customer, the best way to learn both GraphQL and ISA-95 modelling is to use the [Apollo Explorer](https://www.apollographql.com/) for our schema. +However, for newcomers to GraphQL, the flexibility may look overwhelming. +These topics introduce the basics of how to use GraphQL with Rhize's custom database. + +Once you learn how to explore the API, you'll find that the interface is more comfortable and discoverable than a comparable OpenAPI (Swagger) document—and that's before considering the improvements GraphQL brings to precision, performance, and developer experience. + +## Operation types {#operations} + +In GraphQL, an _operation_ is a request to the server. +Rhize supports three types of operations: + +- **[Queries]({{< relref "query" >}})** return data and subsets of data. +- **[Mutations]({{< relref "mutate" >}})** change the data on the server side. +- **[Subscriptions]({{< relref "subscribe" >}})** notify about data changes in real time. + +For details and examples, refer to their specific documentation pages. + +## Call syntax + +The following sections show you the essential features to make a query. + +### Authenticate + +To authenticate your requests, pass a bearer token as an `Authorization` header. +Be sure to preface the value with the word `Bearer `: + + +{{< figure +alt="Example of how it looks in Apollo explorer" +src="/images/screenshot-rhize-apollo-auth-headers.png" +width="50%" +>}} + +For an overview of how Rhize handles token exchange, read [About OpenID connect](/explanations/about-openidconnect). + +### Request body + +By default, all GraphQL operations have the following structure: +1. Define the operation type (one of, `query`, `mutation`, or `subscription`). +1. Name the query anything you want. This example builds a `query` called `myCustomName`: + ```graphql + query myCustomName { + #operations will go here + } + + ``` +1. In curly brackets, define the _operation_ you want to query. +1. In parenthesis next to the operation, add the _arguments_ to pass. 
This example uses the `getEquipment` operation, and its arguments specify which item of equipment to get. + + ```graphql + query myCustomName { + getEquipment(id: "Kitchen_mixer_b_01") { + #Fields go here + } + } + ``` + +1. Within the operation, define the fields you want to return. This example queries for the equipment ID and the person who created the entity. + + ```graphql + query myCustomName { + getEquipment(id: "Kitchen_mixer_b_01") { + id + _createdBy + } + } + ``` + + As you might expect, the request returns only these fields for the equipment named `Kitchen_mixer_b_01`. + + ```json + { + "data": { + "getEquipment": { + "id": "Kitchen_mixer_b_01", + "_createdBy": "john_snow" + } + } + } + ``` + +### Request exactly the data you want + +A major benefit of GraphQL is that you can modify queries to return only the fields you want. + You can join data entities in a single query and query for entity relationships in the same way that you would for entity attributes. + +Unlike calls to a REST API, where the server-side code defines what a response looks like, GraphQL calls instruct the server to return only what is specified. +Furthermore, you can query diverse sets of data in one call, so you can get exactly the entities you want without calling multiple endpoints, as you would in REST, or composing queries with complex recursive joins, as you would in SQL. +Besides precision, this also brings performance benefits to minimize network calls and their payloads. + +For example, this expands the fields requested from the previous example. +Besides `id` and `_createdBy`, it now returns the `description`, unique ID, and version information about the requested equipment item: + +{{< tabs items="Request,Response" >}} +{{% tab "request" %}} +```graphql + +query ExampleQuery { + queryEquipment(filter: { id: { eq: "Kitchen_mixer_b_0 1" } }) { + id + _createdBy + versions { + iid + description + } + activeVersion { + iid + description + + } + } +} +``` +{{% /tab %}} +{{% tab "response" %}} +```json +{ + "data": { + "queryEquipment": [ + { + "id": "Kitchen_mixer_b_01", + "_createdBy": "john_snow", + "versions": [ + { + "iid": "0xcc701", + "description": "First generation of the mixer 9000" + }, + { + "iid": "0xcc71a", + "description": "Second generation (in testing)" + } + ], + "activeVersion": { + "iid": "0xcc701", + "description": "First generation of the mixer 9000" + } + } + ] + } +} +``` +{{% /tab %}} +{{< /tabs >}} + +You can also add multiple operations to one call. +For example, this query requests all data sources and all persons: + + +{{< tabs items="Request,Response" >}} +{{% tab "request" %}} +```graphql +query peopleAndDataSources { + queryPerson { + id + label + } + queryDataSource { + id + } +} +``` +{{% /tab %}} +{{% tab "response" %}} +```json +{ + "data": { + "queryPerson": [ + { + "id": "235", + "label": "John Ramirez" + }, + { + "id": "234", + "label": "Jan Smith" + } + ], + "queryDataSource": [ + { + "id": "x44_mqtt" + }, + { + "id": "x45_opcUA" + } + + ] + } +} +``` +{{% /tab %}} +{{< /tabs >}} + +## Shortcuts for more expressive requests + +The following sections provide some common ways to reduce boilerplate and shorten the necessary coding for a call. + +### Make input dynamic with variables {#variables} + +The preceding examples place the query input as _inline_ arguments. +Often, calls to production systems separate these arguments out as JSON _variables_. + +Variables add dynamism to your requests, which serves to make them more reusable. 
+For example: +- If you build a low-code reporting application, you could use variables to change the arguments based on user input. +- In a BPMN event orchestration, you could use variables to make a GraphQL call based on a previous JSONata filter. Refer to the example, [Write ERP material definition to DB]({{< relref "../bpmn/create-workflow/#write-erp-material-definition-to-database" >}}). + + +For example, this query places the ID of the resource that it requests as an inline variable: + +```graphql +query myCustomName { + getEquipment(id: "Kitchen_mixer_b_01") { + _createdBy + } +} +``` + +Instead, you can pass this argument as a variable. +This requires the following changes: + +1. In the argument for your query, name the variable and state its type. +This instructs the query to receive data from outside of its context: + + ```graphql + ## Name variable and type + query myCustomName ($getEquipmentId: String) { + ## operations go here. + } + ``` +1. In the operation, pass the variable as a value in the argument. +In this example, add the variable as a value to the `id` key like this: + + ```graphql + query GetEquipment($getEquipmentId: String) { + + ## pass variable to one or more operations + getEquipment(id: $getEquipmentId) { + ## fields go here + } + } + ``` + +1. In a separate `variables` section of the query, define the JSON object that is your variable: + + ```json + { + "getEquipmentId": "Kitchen_mixer_b_01" + } + ``` + + +{{< tabs items="Query,Mutation" >}} +{{% tab "Query" %}} + +```graphql +query GetEquipment($getEquipmentId: String) { + getEquipment(id: $getEquipmentId) { + _createdBy + } +} +``` +**Variables**: +```json +{ + "getEquipmentId": "Kitchen_mixer_b_01" +} +``` +{{% /tab %}} +{{% /tab %}} + +The preceding example is minimal, but the use of variables to _parameterize_ arguments also applies to complex object creation and filtering. +For example, this _mutation_ uses variables to create an array of Persons: + +```graphql +mutation AddPerson($input: [AddPersonInput!]!) { + addPerson(input: $input) { + person { + id + } + } +} +``` + +**Variables**: + +```json +{ + "input": [ + {"id": "234", "label":"Jan Smith"}, + {"id": "235", "label": "John Ramirez"} + ] +} +``` + +To learn more, read the official GraphQL documentation on [Variables](https://graphql.org/learn/queries/#variables). + +### Template requested fields with fragments + +Along with [variables](#variables), you can use _fragments_ to reduce repetitive writing. + +Fragments are common fields that you use when querying an object. +For example, imagine you wanted to make queries to different equipment objects for their `id`, `label`, `_createdBy`, and `versions[]` properties. +Instead of writing these fields in each operation, you could define them in a fragment, and then refer to that fragment in each specific operation or query. + +To use a fragment: +1. Define them with the `fragment` keyword, declaring its name and object. + ```graphql + ## name ## object + fragment CommonFields on Equipment + ``` +1. Include the fragment in the fields for your operation by prefacing its name with three dots: + ``` + ...CommonFields + ``` + +For example: + +{{< tabs items="Query,Response">}} +{{% tab "query" %}} +```graphql +## Define common fields +fragment CommonFields on Equipment{ + id + label + _createdBy + versions { + id + } +} + +## Use them in your query. 
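+## The three dots before CommonFields are GraphQL's fragment spread syntax:
+## they expand the fragment's fields into the selection set below.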
+query kitchenEquipment { + getEquipment(id: "Kitchen_mixer_b_02") { + ...CommonFields + } +} +``` +{{% /tab %}} +{{% tab %}} +**Variables:** +```json +{ + "data": { + "getEquipment": { + "id": "Kitchen_mixer_b_02", + "label": "Kitchen mixer B02", + "_createdBy": "admin@rhize.com", + "versions": [] + } + } +} +``` +{{% /tab %}} +{{% /tabs %}} + diff --git a/content/versions/v3.2.1/how-to/gql/default.md b/content/versions/v3.2.1/how-to/gql/default.md new file mode 100644 index 000000000..6dba903e4 --- /dev/null +++ b/content/versions/v3.2.1/how-to/gql/default.md @@ -0,0 +1,83 @@ ++++ +title = "Use the @default Directive" +description = "The @default directive specifies which GraphQL APIs are generated for a given type. Without it, all queries & mutations are generated except subscription." +weight = 210 +draft = true +[menu.main] + parent = "how-to-query" ++++ + + + +The `@default` directive provides default values to be stored when not supplied in a mutation (`add`/`update`). + +Here's the GraphQL definition of the directives: + +```graphql +directive @default(add: DgraphDefault, update: DgraphDefault) on FIELD_DEFINITION +input DgraphDefault { + value: String +} +``` +Syntax: +```graphql +type Type { + field: FieldType @default( + add: {value: "value"} + update: { value: "value"} + ) +} +``` +Where a value is not provided as input for a mutation, the add value will be used if the node is being created, and the update value will be used if the node exists and is being updated. Values are provided as strings, parsed into the correct field type by Dgraph. + +The string $now is replaced by the current DateTime string on the server, ie: +```graphql +type Type { + createdAt: DateTime! @default( + add: { value: "$now" } + ) + updatedAt: DateTime! @default( + add: { value: "$now" } + update: { value: "$now" } + ) +} +``` + +The string $token.email is replaced by the email claim from the authorization bearer token used for the mutation, ie: +```graphql +type Type { + createdBy: String! @default( + add: { value: "$token.email" } + ) + updatedBy: String! @default( + add: { value: "$token.email" } + update: { value: "$token.email" } + ) +} +``` + +Schema validation will check that: + +Int field values can be parsed strconv.ParseInt +Float field values can be parsed by strconv.ParseFloat +Boolean field values are true or false +$now can only be used with fields of type DateTime (could be extended to include String?) +Schema validation does not currently ensure that @default values for enums are a valid member of the enum, so this is allowed: +```graphql +enum State { + HOT + NOT +} + +type Type { + state: State @default(add: { value: "FOO"}) +} +``` + +## Restrictions / Roadmap + +Our default directive is still in beta and we are improving it quickly. Here's a few points that we plan to work on soon: + +* adding the ability to specify a query to get the default value +* adding additional expressions to default times other than ${now} +--- diff --git a/content/versions/v3.2.1/how-to/gql/directives.md b/content/versions/v3.2.1/how-to/gql/directives.md new file mode 100644 index 000000000..cc7fe1d29 --- /dev/null +++ b/content/versions/v3.2.1/how-to/gql/directives.md @@ -0,0 +1,137 @@ ++++ +title = "GraphQL Directives" +description = "The list of all directives supported by Rhize's GraphQL implementation. Full details linked within for all directives available with GraphQL." 
+categories = "reference" +weight = 200 +draft = true +[menu.main] + name = "Directives" + identifier = "directives" + parent = "how-to-query" ++++ + + + +The list of all [directives](https://www.apollographql.com/docs/apollo-server/schema/directives/) supported by Rhize's implementation of Dgraph. + +### @auth + +`@auth` allows you to define how to apply authorization rules on the queries/mutation for a type. + +Reference: [Auth directive](/graphql/authorization/directive) + +### @cascade + +`@cascade` allows you to filter out certain nodes within a query. + +Reference: [Cascade](/graphql/queries/cascade) + +### @custom + +`@custom` directive is used to define custom queries, mutations and fields. + +Reference: [Custom directive](/graphql/custom/directive) + +### @default + +The `@default` directive allows you to specify values that should be used when nil values are received for either `add` mutations or `update` mutations + +Reference: [Default directive](/graphql/schema/default) + +### @deprecated + +The `@deprecated` directive lets you mark the schema definition of a field or `enum` value as deprecated, and also lets you provide an optional reason for the deprecation. + + +### @dgraph + +`@dgraph` directive tells us how to map fields within a type to existing predicates inside Dgraph. + + +### @generate + +The `@generate` directive is used to specify which GraphQL APIs are generated for a type. + +Reference: [Generate directive](/graphql/schema/generate) + +### @hasInverse + +`@hasInverse` is used to setup up two way edges such that adding a edge in +one direction automically adds the one in the inverse direction. + +Reference: [Linking nodes in the graph](/graphql/schema/graph-links) + +### @id + +`@id` directive is used to annotate a field which represents a unique identifier coming from outside + of Dgraph. + +Reference: [Identity](/graphql/schema/ids) + +### @include + +The `@include` directive can be used to include a field based on the value of an `if` argument. + +Reference: [Include directive](/graphql/queries/skip-include) + +### @lambda + +The `@lambda` directive allows you to call custom JavaScript resolvers. The `@lambda` queries, mutations, and fields are resolved through the lambda functions implemented on a given lambda server. + +Reference: [Lambda directive](/graphql/lambda/overview) + +### @primary-key + +The `@primary-key` allows you to specify a list of fields where the concatenation of values of those fields must be unique in the database + +Reference: [Primary Key](/graphql/schema/primarykey) + +### @remote + +`@remote` directive is used to annotate types for which data is not stored in Dgraph. These types +are typically used with custom queries and mutations. + +Reference: [Remote directive](/graphql/custom/directive/#remote-types) + +### @remoteResponse + +The `@remoteResponse` directive allows you to annotate the fields of a `@remote` type in order to map a custom query's JSON key response to a GraphQL field. + +Reference: [Remote directive](/graphql/custom/directive/#remote-response) + +### @search + +`@search` allows you to perform filtering on a field while querying for nodes. + +Reference: [Search](/graphql/schema/search) + +### @secret + +`@secret` directive is used to store secret information, it gets encrypted and then stored in Dgraph. + +Reference: [Password Type](/graphql/schema/types/#password-type) + +### @skip + +The `@skip` directive can be used to fetch a field based on the value of a user-defined GraphQL variable. 
+ +Reference: [Skip directive](/graphql/queries/skip-include) + +### @withSubscription + +`@withSubscription` directive when applied on a type, generates subsciption queries for it. + +Reference: [Subscriptions](/graphql/subscriptions) + +### @lambdaOnMutate + +The `@lambdaOnMutate` directive allows you to listen to mutation events(`add`/`update`/`delete`). Depending on the defined events and the occurrence of a mutation event, `@lambdaOnMutate` triggers the appropriate lambda function implemented on a given lambda server. + +Reference: [LambdaOnMutate directive](/graphql/lambda/webhook) + + +### @default + +The `@default` directive provides default values to be stored when not supplied in a mutation (`add`/`update`). The directive can be used with the current DateTime (via `$now') to allow timestamping of mutation events. + +Reference: [Default directive](/graphql/schema/default) diff --git a/content/versions/v3.2.1/how-to/gql/filter.md b/content/versions/v3.2.1/how-to/gql/filter.md new file mode 100644 index 000000000..c5d8a59b5 --- /dev/null +++ b/content/versions/v3.2.1/how-to/gql/filter.md @@ -0,0 +1,361 @@ +--- +title: 'Filter' +categories: ["how-to"] +description: How to filter a GraphQL call to a subset of manufacturing items. +weight: 210 +--- + + +_Filters_ limit an operation to a subset of resources. +You can use filters to make operations more precise, remove unneeded items from a payload, and reduce the need for secondary processing. + +To use a filter, specify it in the operation's argument. +Most fields in an object can serve as a filter. +{{< callout type="info" >}} +This page provides a detailed guide of how to use the filters, with examples. +For a bare reference of filters and data types, refer to the [GraphQL type reference]({{< relref "../../reference/gql-types" >}}). +{{< /callout >}} + + +## Filter by property + +The following sections show some common [scalar filters]({{< relref "../../reference/gql-types#scalar-filters" >}}), filters that work on `string`, `dateTime`, and numeric values. +These filters return only the resources that have some specified property or property range. + +### `between` dates + +The `between` property returns items within time ranges for a specific property. +This query returns job responses that started between January 01, 2023 and January 05, 2023. + +```graphql +query { + queryJobResponse( + filter: { + effectiveStart: { between: { min: "2023-01-01", max: "2023-01-05" } } + } + ) { + id + effectiveStart + } +} +``` + +### `has` property + +The `has` keyword returns results only if an item has the specified field. +For example, this query returns only equipment items that have been modified at least once. + +{{< tabs items="Query,Response" >}} +{{% tab "query" %}} +```graphql +query QueryEquipment { + queryEquipment(filter: {has: _modifiedOn}) { + id + _createdOn + _modifiedOn + } +} +``` +{{% /tab %}} +{{% tab "Response" %}} +```json +{ + "data": { + "queryEquipment": [ + { + "id": "AN-19670-Equipment-1", + "_createdOn": "2023-12-24T16:50:45Z", + "_modifiedOn": "2024-01-23T20:06:30Z" + }, + { + "id": "AN-19670-Equipment-2", + "_createdOn": "2023-12-24T18:16:35Z", + "_modifiedOn": "2024-01-23T20:06:35Z" + }, + // more items + ] + } + } +``` +{{% /tab %}} +{{< /tabs >}} + +To filter for items that have multiple properties, include the fields in an array. 
+This query returns equipment objects that both have been modified and have next versions: + +```gql +query QueryEquipment { + queryEquipment(filter: {has: [_modifiedOn, nextVersion]}) { + nextVersion + _modifiedOn + } +} +``` + +### `in` this subset + +The `in` keyword filters for objects that have properties with specified values. +For example, this query returns material lots that have material definitions that are either `dough` or `cookie_unit`. + +```graphQL +query{ + queryMaterialLot @cascade { + id + materialDefinition(filter: {id: {in: ["dough", "cookie_unit"]}}) { + id + } + } + } +} +``` + +### `regexp` + +The `regexp` keyword searches for matches using the [RE2](https://github.com/google/re2) regular expression engine. + +For example, +this query uses a regular expression in its variables to filter for items that begin with either `Kitchen_` or `Cooling_` (case insensitive): + +```graphql +query getEquipment($filter: EquipmentFilter) { + aggregateEquipment(filter: $filter) { + count + } + queryEquipment(filter: $filter) { + id + } +} +``` + +**Variables** + +```json +{ + "EquipmentFilter": { + "id": { + "regexp": "/|Kitchen_.*|Cooling_.*/i" + } + }, +} +``` + +{{< callout type="warning" >}} + +The `regexp` filters can have performance costs. +After you refine a query filter to return exactly what you need, consider ways to simplify the regular expression +or, if possible, use a different filter. + +{{< /callout >}} + +## Combine filters with `and`, `or`, `not` + +To filter by multiple properties, use the `and`, `or`, and `not`, operators. +GraphQL syntax uses [infix notation](https://en.wikipedia.org/wiki/Infix_notation), so: "a and b" is `a, and: { b }`, “a or b or c” is `a, or: { b, or: c }`, and “not” is a prefix (`not:`). + +### this `and` that property + +The `and` operator filters for objects that include all specified properties. + +For example, this query returns equipment objects that match two properties: +- The `effectiveStart` must be the 1st and the 10th of January, 2024. +- It must have a non-null `nextVersion`. + +The `and` function is implicit unless you are searching on the same field. +So this filter has an implied `and`: + +```gql +query{ + queryEquipment(filter: { + effectiveStart: { + between: {min: "2024-01-01", max: "2024-01-10"} + + } + has: nextVersion + + } + ) + { + effectiveStart + id + nextVersion + } +} +``` + +{{< callout type="info" >}} + +This preceding filter syntax is a shorter equivalent to `and: {has: nextVersion}`. + +{{< /callout >}} + +### One `or` more properties + +The `or` operator filters for objects that have at least one of the specified properties. +For example, you can take the preceding query and modify it so that it returns objects that have an effective start between the specified range or a `nextVersion` property (inclusive). + +```graphql +queryEquipment(filter: { + effectiveStart: { + between: {min: "2024-01-01", max: "2024-01-10"} + + } + or: {has: nextVersion} + + } + ) +``` + + +### `not` these properties + +The `not` operator filters for objects that do not contain the specified property. 
+For example, you can take the preceding query and modify it so that it returns objects that have an effective start between the specified range and _do not_ have a `nextVersion` property: + +``` +queryEquipment(filter: { + effectiveStart: { + between: {min: "2024-01-01", max: "2024-01-10"} + + } + not: {has: nextVersion} + + } + ) +``` + +To modify this to include both objects within the range and objects that do not have a `nextVersion`, use `or` with `not`: + +```graphql +or: { not: {has: nextVersion} } +``` + +### This list of filters + +The `and` and `or` operators accept lists of filters. +For example, this query filters for equipment objects whose `id` matches `A`, `B`, or `C`: + +```graphql +queryEquipment (filter: { + or: [ + { id: { eq: "A" } }, + { id: { eq: "B" } }, + { id: { eq: "C" } }, + ] + }) +``` + +## Use directives + +Rhize offers [_query directives_](https://the-guild.dev/graphql/tools/docs/schema-directives#what-about-query-directives), special instructions about how to look up and return values in a query. +These directives can extend your filtering to look at nested properties or to conditionally display a field. + +All directives begin with the `@` sign. + +### Cascade + +The `@cascade` directive filters for certain nodes within a query. +Use it to filter requested resources by a nested sub-property, similar to a `WHERE` clause in SQL. + +{{< callout type="caution" >}} + +`@cascade` is not as performant as flatter queries. +Consider using it only after you've exhausted other query structures to return the data you want. + +{{< /callout >}} + +For example, this query filters for job responses with an ID of `12341`, and then filters that set for only the items that have a `data.properyLabel` field with a value of `INSTANCE ID`. + +{{< tabs items="Query,Response" >}} + +{{% tab "Query" %}} + +```graphql +query QueryJobResponse($filter: JobResponseFilter, $propertyLabel: String) { + queryJobResponse(filter: $filter) @cascade(fields:["data"]){ + id + iid + data(filter: { label: { anyoftext: $propertyLabel } }) { + id + iid + label + value + } + } +} +``` +**Variables**: +```json +{ + "filter": { + "id": { + "alloftext": "12341" + } + }, + "propertyLabel": "INSTANCE ID" +} + +``` +{{% /tab %}} +{{< /tabs >}} + +#### Avoid using @cascade with the [`order`]({{< relref "./query#order" >}}) argument + +The `order` argument returns only the first 1000 records of the query. +If a record matches the `@cascade` filter but comes after these first 1000 records, the API does not return it. + +For example, this query logic works as follows: +1. Return the first 1000 records of equipment as ordered by `effectiveStart`. +1. From these 1000 records, return only the equipment items that are part of `parentEquipment1`. + +```graphql +query($filter: EquipmentFilter){ + queryEquipment (filter: { order: {desc:effectiveStart}) @cascade{ + id + isPartOf (filter: {id:{eq:"parentEquipment1"}}) { + id + } + } +} +``` + +This behavior can be surprising and undesirable, so avoid `@cascade` with the `order` argument. + +### Include + +The `@include` directive returns a field only if its variable is `true`. + +For example, when `includeIf` is `true`, this query omits specified values for `versions`. + + +{{< tabs items="Query,Response" >}} + +{{% tab "query" %}} + +```graphql +query($includeIf: Boolean!) { + queryEquipment { + id + versions @include(if: $includeIf) { + id + } + } +} +``` + +{{% /tab %}} + +{{% tab "variables" %}} + +Change to `true` to include `versions` fields. 
+ +```json +{ + "includeIf": false +} +``` + +{{% /tab %}} + +{{< /tabs >}} + + diff --git a/content/versions/v3.2.1/how-to/gql/generate.md b/content/versions/v3.2.1/how-to/gql/generate.md new file mode 100644 index 000000000..f5a38c978 --- /dev/null +++ b/content/versions/v3.2.1/how-to/gql/generate.md @@ -0,0 +1,62 @@ ++++ +title = "The @generate Directive" +description = "The @generate directive specifies which GraphQL APIs are generated for a given type. Without it, all queries & mutations are generated except subscription." +weight = 220 +draft = true +[menu.main] + identifier = "schema-generate" + parent = "how-to-query" ++++ + + + +The `@generate` directive is used to specify which GraphQL APIs are generated for a given type. + +Here's the GraphQL definition of the directive +```graphql +input GenerateQueryParams { + get: Boolean + query: Boolean + password: Boolean + aggregate: Boolean +} + +input GenerateMutationParams { + add: Boolean + update: Boolean + delete: Boolean +} +directive @generate( + query: GenerateQueryParams, + mutation: GenerateMutationParams, + subscription: Boolean) on OBJECT | INTERFACE + +``` + +The corresponding APIs are generated by setting the `Boolean` variables inside the `@generate` directive to `true`. Passing `false` forbids the generation of the corresponding APIs. + +The default value of the `subscription` variable is `false` while the default value of all +other variables is `true`. Therefore, if no `@generate` directive is specified for a type, all queries and mutations except `subscription` are generated. + +## Example of @generate directive + +```graphql +type Person @generate( + query: { + get: false, + query: true, + aggregate: false + }, + mutation: { + add: true, + delete: false + }, + subscription: false +) { + id: ID! + name: String! +} +``` + +The GraphQL schema above will generate a `queryPerson` query and `addPerson`, `updatePerson` mutations. It won't generate `getPerson`, `aggregatePerson` queries nor a `deletePerson` mutation as these have been marked as `false` using the `@generate` directive. +Note that the `updatePerson` mutation is generated because the default value of the `update` variable is `true`. diff --git a/content/versions/v3.2.1/how-to/gql/mutate.md b/content/versions/v3.2.1/how-to/gql/mutate.md new file mode 100644 index 000000000..5b68a961a --- /dev/null +++ b/content/versions/v3.2.1/how-to/gql/mutate.md @@ -0,0 +1,239 @@ +--- +title: 'Mutate' +categories: ["how-to"] +description: A guide to adding, creating, and deleting data in the Rhize DB +weight: 250 +--- + +{{< watch +text="Add manufacturing data through GraphQL" +src="https://www.youtube.com/watch?v=zQ5X0mg3i_w&t=217s" +>}} + +_Mutations_ change the database in someway by creating, updating, or deleting a resource. +You might use a mutation to update a personnel class, or in to a [{{< abbr "BPMN" >}}]({{< relref "../bpmn" >}}) workflow that automatically creates records of incoming material lots. + +Rhize supports the following ways to change the API. + +## `add` {#add} + +{{< callout type="info" >}} +The `add` operation corresponds to the `Process` verb defined in [Part 5](https://www.isa.org/products/ansi-isa-95-00-05-2018-enterprise-control-system-i) of the ISA-95 standard. +{{< /callout >}} + +Mutations that start with `add` create a resource on the server. + +For example, this mutation adds one more items of equipment. +To add multiple, send the variable as an array of objects, rather than a single object. 
+The `numUids` property reports how many new objects were created. + +{{% tabs items="Mutation,create 1, Create many"%}} +{{% tab "mutation" %}} + +```graphql +mutation AddEquipment($input: [AddEquipmentInput!]!) { + addEquipment(input: $input) { + equipment { + id + label + } + numUids + } +} +``` + +{{% /tab %}} +{{% tab "Vars: create one object" %}} + +```json +{ + + "input": { + "id": "Kitchen_mixer_a_20", + "label": "Kitchen mixer A11" + } +} +``` +{{% /tab %}} +{{% tab "Vars: Create many" %}} + +```json +{ + + "input": [{ + "id": "Kitchen_mixer_b_01", + "label": "Kitchen mixer A11" + },{ + "id": "Kitchen_mixer_b_02", + "label": "Kitchen mixer A12" + }, + ] +} +``` +{{% /tab %}}{{< /tabs >}} + +### `upsert` + +Many `add` operations support _upserting_, which _update_ or _insert_ (create). +That is, if the object already exists, the operation will update it with the additional fields. +If the object doesn't exist, the operation will create it. + +Besides general UX convenience, upsert is useful when data comes from multiple sources and in no guaranteed order, like from multiple streams from the message broker. + +To enable upsert, set the `upsert:` argument to true: + +```graphql +addEquipment(input: $input, upsert: true) +``` + +## `update` {#update} + +Mutations that start with `update` change something in an object that already exists. +The `update` operations can use [filters]({{< relref "./filter" >}}). + +{{< callout type="info" >}} +The `update` operation corresponds to the `Change` verb defined in [Part 5](https://www.isa.org/products/ansi-isa-95-00-05-2018-enterprise-control-system-i) of the ISA-95 standard. +{{< /callout >}} + +For example, this operation updates the description for a specific version of an equipment item. + + +```graphql +mutation updateMixerVersion( $updateEquipmentVersionInput2: UpdateEquipmentVersionInput!){ + updateEquipmentVersion(input: $updateEquipmentVersionInput2) { + equipmentVersion { + description + id + } + } +} +``` + +**Variables**: +```json +{ + "updateEquipmentVersionInput2": { + "filter": {"iid":"0xcc701"}, + "set": { + "description": "Second generation of the mixer 9000" + } + } +} +``` + +## `delete` {#delete} + +{{< callout type="warning" >}} +Be careful! Without a [Database backup]({{< relref "../../deploy/backup/graphdb" >}}), deleted items cannot be recovered. +{{< /callout >}} + + +Mutations that start with `delete` remove a resource from the database. +The `delete` operations can use [filters]({{< relref "./filter" >}}). + + +{{< callout type="info" >}} +The `delete` operation corresponds to the `Cancel` verb defined in [Part 5](https://www.isa.org/products/ansi-isa-95-00-05-2018-enterprise-control-system-i) of the ISA-95 standard. +{{< /callout >}} + +For example, this operation deletes a unit of measure: + + +```graphql +mutation deleteUoM($filter: UnitOfMeasureFilter!){ + deleteUnitOfMeasure(filter: $filter) { + numUids + } +} +``` + +**Variables:** +```json +{ + "filter": { + "id": { + "eq": "example unit of measure" + } + } +} + +``` + +## Deep mutations + +You can perform deep mutations at multiple levels. +Deep mutations don't alter linked objects but can add nested new objects or link to existing objects. + +For example, this mutation creates a new version of equipment, and associates a new item of equipment with it. Both the `equipmentVersion` and the `equipment` did not exist in the database. + +```graphql + mutation AddEquipmentVersion($addEquipmentVersionInput2: [AddEquipmentVersionInput!]!) 
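+# One request creates the version and, through the nested `equipment` input, the new equipment item (see the variables below).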
{ + addEquipmentVersion(input: $addEquipmentVersionInput2) { + equipmentVersion { + id + equipment { + id + } + } + } +} +``` + +**Variables:** + +```json + "addEquipmentVersionInput2": { + "id": "widget_machine_version_1", + "version": "1", + "versionStatus": "DRAFT", + "equipment": { + "id": "widget_maker_1", + "label": "Widget maker 1" + + } + } +} +``` + +You can confirm that the record and its nested property exists with a `get` query. +If the preceding operation succeeded, this query returns both the new `Widget Maker` and +its corresponding version: + +{{< tabs items="Query,Response" >}} +{{% tab "query" %}} +```graphql +query{ + getEquipment(id: "widget_maker_1") { + id + versions{ + id + version + } + } +} +``` +{{% /tab %}} + + +{{% tab "result" %}} + +```json +{ + "addEquipmentVersionInput2": { + "id": "widget_machine_version_1", + "version": "1", + "versionStatus": "DRAFT", + "equipment": { + "id": "widget_maker_1", + "label": "Widget maker 1" + + } + } +} +``` + +{{% /tab %}} + +{{< /tabs >}} + +To update an existing nested object, use the update mutation for its type. diff --git a/content/versions/v3.2.1/how-to/gql/query.md b/content/versions/v3.2.1/how-to/gql/query.md new file mode 100644 index 000000000..2d5382fe0 --- /dev/null +++ b/content/versions/v3.2.1/how-to/gql/query.md @@ -0,0 +1,161 @@ +--- +title: 'Query' +categories: ["how-to"] +description: A guide to the three GraphQL operations in Rhize +weight: 200 +--- + +A _query_ returns one or more resources from the database. +Whether you want to investigate manufacturing processes or build a custom report, +a good query is likely the foundation of your workflow. + +Most queries start with these three verbs, each of which indicates the resources to return. + +- `get` for a single resource +- `query` for multiple resources +- `aggregate` for calculations on arrays + + +{{< callout type="info" >}} + +These operations correspond to the `Get` verb defined in [Part 5](https://www.isa.org/products/ansi-isa-95-00-05-2018-enterprise-control-system-i) of the ISA-95 standard. + +{{< /callout >}} + +## `query` multiple resources {#query} + +Queries that start with `query` return an array of objects. +For example, a custom dashboard may use `queryEquipmentVersion` to create a page that displays all active versions of equipment that are running in a certain {{< abbr "hierarchy scope" >}}. + +For example, this query returns the ID of all pieces of equipment. + +```graphql +query allEquipment{ + queryEquipment { + id + } +} +``` + +### Query specified IDs + +To filter your query to a specific set of items, use the `filter` argument with the requested IDs. + +The least verbose way to filter is to specify the requested items' `iid` (their unique database addresses) in an array: +For example, this query returns only equipment with an `iid` of `0xf9b49` or `0x102aa5`. + +```graphql +query ExampleQuery { + queryEquipment(filter: { iid: ["0xf9b49", "0x102aa5"] }) { + iid + id + } +} +``` + +If you don't have the precise `iid`, you can use one of the string [filters]({{< relref "filter" >}}). + + +## `get` single resource {#get} + +Queries that start with `get` return one object. +A common use of `get` is to explore all data related to a particular object. +For example, in a custom dashboard, you may use `getDataSource` to make a custom page that reports a specified data source. + +Typically, the argument specifies the resource by either its human-readable ID (`id`) or its unique address in the database (`iid`). 
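+
+If you only have the database address, you can pass `iid` instead of `id`. The following is a minimal sketch, assuming the generated `getEquipment` query also exposes an `iid` argument (the address below is hypothetical):
+
+```graphql
+# Sketch only: look up equipment by internal database address (iid) rather than human-readable id
+query equipmentByAddress {
+  getEquipment(iid: "0xf9b49") {
+    iid
+    id
+  }
+}
+```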
+ +For example, this query gets the `iid`, `_createdBy`, and `versions` for the equipment item `Kitchen_mixer_b_01`: + +```graphql +query mixerCheck { + getEquipment(id: "Kitchen_mixer_b_01") { + iid + _createdBy + versions{ + id + } + } +} +``` + +## `Aggregate` data from multiple resources {#aggregate} + +Operations that start with `aggregate` provide aggregated statistics for a specified set of items. + +The syntax and filtering for an `aggregate` operation is the same as for a `query` operation. +However, rather than returning items, the aggregate operation returns one or more computed statistics about these items. +For example, you might use an `aggregate` query to create a summary report about a set of process segments within a certain time frame. + +This request returns the count of all Equipment items that match a certain filter: + +```graphql +query countItems($filter: EquipmentFilter) { + aggregateEquipment(filter: $filter) { + count + } +} +``` + +## Sort and paginate + +A query can take arguments to order and paginate your results. + +{{< callout type="info" >}} +Without an `order` parameter, a query returns items without any default or guaranteed order. +{{< /callout >}} + +### Order + +{{< callout type="caution" >}} + +Ordered queries **return only the first 1000 records of the ordered field.** +This behavior might exclude records that you expect, especially if you [combine `order` with a `@cascade`]({{< relref "filter#avoid-using-cascade-with-the-orderhahahugoshortcode50s8hbhb-argument" >}}) filter in a nested field. + +{{< /callout >}} + +The `order` argument works with any property whose type is `Int`, `Float`, `String`, or `DateTime`. +For example, this query sorts Person objects by ID in ascending alphabetical order: + +```graphql +query{ + queryPerson(order:{ asc: id}) { + id + } +} +``` + +And this orders by the Person's `effectiveStart` date in descending chronological order. + +``` +query{ + queryPerson(order:{ desc: effectiveStart}) { + id + effectiveStart + } +} +``` + +### Paginate with `offset` + +The `offset` argument specifies what item to start displaying results from, and the `first` argument specifies how many items to show. + +For example, this skips the five most recent Person items (as measured by `effectiveStart`), and then displays the next 10: + +```graphql +query{ + queryPerson(order:{ + desc: effectiveStart + }, + offset: 5, + first: 10 + ) { + id + effectiveStart + } +} +``` + +## Filter queries + +Rhize also has many queries to filter or return subsets of items. +To learn how to filter, read [Use query filters]({{< relref "./filter" >}}). diff --git a/content/versions/v3.2.1/how-to/gql/subscribe.md b/content/versions/v3.2.1/how-to/gql/subscribe.md new file mode 100644 index 000000000..d09f5f7b7 --- /dev/null +++ b/content/versions/v3.2.1/how-to/gql/subscribe.md @@ -0,0 +1,34 @@ +--- +title: 'Subscribe' +categories: ["how-to"] +description: A guide to using GraphQL to subscribe to changes in the database. +weight: 280 +--- + +The operations for a `subscription` are similar to the operations for a [`query`]({{< relref "./query" >}}). +But rather than providing information about the entire item, the purpose of subscriptions is to notify about real-time changes to a manufacturing resource. + + +{{< callout type="info" >}} + +These operations correspond to the `SyncGet` verb defined in [Part 5](https://www.isa.org/products/ansi-isa-95-00-05-2018-enterprise-control-system-i) of the ISA-95 standard. 
+ +{{< /callout >}} + + +This example query subscribes to changes in a specified set of `workResponses`, reporting only their `id` and effective end time. + +```graphql + +subscription GetWorkResponse($getWorkResponseId: String) { + getWorkResponse(id: $getWorkResponseId){ + jobResponses { + effectiveEnd + } + } +} +``` + +Try to minimize the payload for subscription operations. +Additionally, you need to subscribe only to changes that persist to the knowledge graph. +For general event handling, it's often better to use a [BPMN workflow]({{< relref "../bpmn" >}}) that subscribes to a NATS, MQTT, or OPC UA topic. diff --git a/content/versions/v3.2.1/how-to/kpi-service/_index.md b/content/versions/v3.2.1/how-to/kpi-service/_index.md new file mode 100644 index 000000000..7b0d3bec4 --- /dev/null +++ b/content/versions/v3.2.1/how-to/kpi-service/_index.md @@ -0,0 +1,17 @@ +--- +title: 'Use the KPI service' +categories: "how-to" +description: How to configure KPI Service to record key ISO22400 OEE Metrics. +weight: 500 +cascade: + experimental: true + icon: oui-stats +--- + +{{< experimental-kpi >}} + +The KPI service records {{< abbr "equipment" >}}-centric metrics related to the manufacturing operation. +To use it, you must: +1. Record machine state data using the [rule pipeline]({{< relref "../publish-subscribe/create-equipment-class-rule/" >}}). +1. Persist this data to a time-series database. + diff --git a/content/versions/v3.2.1/how-to/kpi-service/about-kpi-service.md b/content/versions/v3.2.1/how-to/kpi-service/about-kpi-service.md new file mode 100644 index 000000000..d6473182c --- /dev/null +++ b/content/versions/v3.2.1/how-to/kpi-service/about-kpi-service.md @@ -0,0 +1,69 @@ +--- +title: About KPI Service and overrides +description: >- + An explanation of how the Rhize KPI service works +weight: 200 +--- + +{{< experimental-kpi >}} + +Key Performance Indicators (KPIs) in manufacturing are metrics to help monitor, assess, and optimize the performance of various aspects of your production process. + +Rhize has an optional `KPI` service that queries process values persisted to a time-series database and then calculates various KPIs. +Rhize's implementation of work calendars is inspired by ISO/TR [22400-10](https://www.iso.org/obp/ui/?_escaped_fragment_=iso:std:71283:en), a standard on KPIs in operations management. + +## What the service does + +```mermaid +sequenceDiagram + actor U as User + participant K as KPI Service + participant TSDB as Time Series Database + + U->>K: Query KPI in certain interval + K->>TSDB: Query State Records + TSDB->>K: Response: State records + K->>TSDB: Query Quantity Records + TSDB->>K: Response: Quantity records + K->>TSDB: Query JobResponse Records + TSDB->>K: Response: JobResponse records + K-->>TSDB: (Optional:) Query Planned Downtime Records + TSDB-->>K: Response: Downtime Records + K-->>TSDB: (Optional:) Query Shift Records + TSDB-->>K: Response: Downtime Records + K->>K: Calculate KPIs + K->U: Response: KPI Result +``` + +The KPI service provides an interface in the graph database for the user to query a list of pre-defined KPIs on a piece of equipment in the `equipmentHierarchy` within a certain time interval. +The service then queries the time-series database for all state changes, produced quantities, and job response data. +With the returned data, the service calculates the KPI value and returns it to the user. 
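+
+For example, a minimal sketch of such a query, using the `GetKPI` operation documented in [Query the KPI service]({{< ref "query-kpi-service" >}}):
+
+```graphql
+# Sketch only: request one KPI for one equipment item over a fixed interval
+query GetOeeForMachineA($filterInput: KPIFilter!, $startDateTime: DateTime!, $endDateTime: DateTime!, $kpi: [KPI!]) {
+  GetKPI(filterInput: $filterInput, startDateTime: $startDateTime, endDateTime: $endDateTime, kpi: $kpi) {
+    name
+    value
+    units
+  }
+}
+```
+
+With variables such as:
+
+```json
+{
+  "filterInput": { "equipmentIds": ["Machine A"] },
+  "startDateTime": "2024-09-03T09:00:00Z",
+  "endDateTime": "2024-09-03T12:00:00Z",
+  "kpi": ["OverallEquipmentEffectiveness"]
+}
+```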
+ +## Supported KPIs + +The service supports all KPIs described by the ISO/TR 22400-10, +along with some other useful KPIs: + +- `ActualProductionTime` +- `ActualUnitSetupTime` +- `ActualSetupTime` +- `ActualUnitDelayTime` +- `ActualUnitDownTime` +- `TimeToRepair` +- `ActualUnitProcessingTime` +- `PlannedShutdownTime` +- `PlannedDownTime` +- `PlannedBusyTime` +- `Availability` +- `GoodQuantity` +- `ScrapQuantity` +- `ReworkQuantity` +- `ProducedQuantityMachineOrigin` +- `ProducedQuantity` +- `Effectiveness` +- `EffectivenessMachineOrigin` +- `QualityRatio` +- `OverallEquipmentEffectiveness` +- `ActualCycleTime` +- `ActualCycleTimeMachineOrigin` + diff --git a/content/versions/v3.2.1/how-to/kpi-service/configure-kpi-service.md b/content/versions/v3.2.1/how-to/kpi-service/configure-kpi-service.md new file mode 100644 index 000000000..b4a16906c --- /dev/null +++ b/content/versions/v3.2.1/how-to/kpi-service/configure-kpi-service.md @@ -0,0 +1,198 @@ +--- +title: Configure the KPI service +description: >- + An explanation of how to configure the KPI service to feed it with process data +weight: 200 +--- + +{{< experimental-kpi >}} + +This guide shows you how to configure the time-series you need for the KPI service. +It does not suggest how to persist these values. + +To learn how the KPI service works, read [About KPI service]({{< ref "about-kpi-service" >}}). +Example use cases include {{< abbr "OEE" >}} and various performance metrics. + +## Prerequisites + +Before you start, ensure you have the following: +- The KPI service installed +- An `equipmentHierarchy` is configured + +## Procedure + +In short, to configure the KPI Service, the procedure works as follows: + +1. Persist machine state records to the `EquipmentState` table +1. Persist quantity records to the `QuantityLog` table +1. Persist job response data to the `JobOrderState` table +1. (Optional) Configure the calendar service to record planned downtime events and shift records to time series. Refer to [Use work calendars]({{< relref "../work-calendars" >}}) + +## Record machine states + +Every time an equipment changes state, it is persisted to the time-series table `EquipmentState`. + +### `EquipmentState` table schema + +{{< tabs items="Schema,Example">}} +{{% tab "schema" %}} + +```sql +CREATE TABLE IF NOT EXISTS EquipmentState( + EquipmentId SYMBOL, + ISO22400State VARCHAR, -- ADOT, AUST, ADET, APT + time TIMESTAMP +) TIMESTAMP(time) PARTITION BY MONTH DEDUP UPSERT KEYS(time, EquipmentId); +``` + +{{< callout type="info" >}} +This table shows a QuestDB specific schema. +You may also add additional columns as required. + +To use the service for another time-series DB, get in touch. +{{< /callout >}} +{{% /tab %}} +{{% tab "example" %}} + +```json +[ + { + "EquipmentId": "Machine A", + "ISO22400State": "ADET", + "PackMLState": "Held", + "time": "2024-03-28T13:13:47.814086Z", + } +] +``` + +{{< callout type="info" >}} +This record includes an additional field, `PackMLState`, to show that additional data can also be recorded. +{{< /callout >}} +{{% /tab %}} +{{< /tabs >}} + +## Record quantity records + +You can persist two categories of quantity records: + +1. (Optional) Values generated by the machine. +1. Final produced quantities (these should be categorised into `Good`, `Scrap`, and `Rework`). 
+ +### QuantityLog table schema + +{{< tabs items="Schema, Machine example, User Example" >}} +{{% tab "schema" %}} + +```sql +CREATE TABLE IF NOT EXISTS QuantityLog( + EquipmentId SYMBOL, + Origin SYMBOL, -- Machine, User + QtyType SYMBOL, -- Delta, RunningTotal (running total not currently supported) + ProductionType SYMBOL, -- Good, Scrap, Rework + Qty FLOAT, + time TIMESTAMP +) TIMESTAMP(time) PARTITION BY MONTH DEDUP UPSERT KEYS(time, EquipmentId, Origin, QtyType, ProductionType); +``` + +{{% /tab %}} +{{% tab "machine example" %}} + +```json +[ + { + "EquipmentId": "Machine A", + "Origin": "Machine", + "QtyType": "Delta", + "ProductionType": "Unknown", + "Qty": 6, + "time": "2024-03-28T09:30:34.000325Z" + } +] +``` + +{{% /tab %}} +{{% tab "user example" %}} + +```json +[ + { + "EquipmentId": "Machine A", + "Origin": "User", + "QtyType": "Delta", + "ProductionType": "Good", + "Qty": 10, + "time": "2024-03-28T09:30:34.000325Z" + }, +{ + "EquipmentId": "Machine A", + "Origin": "User", + "QtyType": "Delta", + "ProductionType": "Scrap", + "Qty": 2, + "time": "2024-03-28T09:30:34.000325Z" + }, +{ + "EquipmentId": "Machine A", + "Origin": "User", + "QtyType": "Delta", + "ProductionType": "Rework", + "Qty": 1, + "time": "2024-03-28T09:30:34.000325Z" + } +] +``` + +{{% /tab %}} +{{< /tabs >}} + +## Record job response records + +Job response records persist to `JobOrderState` and are used to identify the current planned cycle time of each part produced from the machine. +When an operation starts, a record is created setting the planned cycle time. +When the operation is finished, another record is created to reset the planned cycle time to 0. + +### JobOrderState table schema + +{{< tabs items="Schema,Start operation,End operation" >}} +{{% tab "schema" %}} + +```sql +CREATE TABLE IF NOT EXISTS JobOrderState( + EquipmentId SYMBOL, + JobOrderId SYMBOL, + PlanningCycleTime FLOAT, -- Number of seconds per produced part + time TIMESTAMP +) TIMESTAMP(time) PARTITION BY MONTH DEDUP UPSERT KEYS(time, EquipmentId, JobOrderId); +``` + +{{% /tab %}} +{{% tab "start operation" %}} + +```json +[ + { + "EquipmentId": "Machine A", + "JobOrderId": "Order001", + "PlanningCycleTime": 100, + "time": "2024-04-02T14:32:21.947000Z" + } +] +``` + +{{% /tab %}} +{{% tab "end operation" %}} + +```json +[ + { + "EquipmentId": "Machine A", + "JobOrderId": "Order001", + "PlanningCycleTime": 0, + "time": "2024-04-02T14:59:58.947000Z" + } +] +``` + +{{% /tab %}} +{{< /tabs >}} + diff --git a/content/versions/v3.2.1/how-to/kpi-service/query-kpi-service.md b/content/versions/v3.2.1/how-to/kpi-service/query-kpi-service.md new file mode 100644 index 000000000..015e97714 --- /dev/null +++ b/content/versions/v3.2.1/how-to/kpi-service/query-kpi-service.md @@ -0,0 +1,1125 @@ +--- +title: Query the KPI service +description: >- + An explanation of how to query the KPI service to obtain OEE values +weight: 200 +--- + +{{< experimental-kpi >}} + +The KPI service offers a federated GraphQL interface to query KPI values. +This guide provides information on the different querying interfaces. + +## Root level queries + +The KPI service offers two root-level queries: + +- `GetKPI()` +- `GetKPIByShift()` + +### `GetKPI()` + +The `GetKPI()` query is the base-level KPI Query. +You can use it to input an equipment ID or hierarchy-scope ID, a time range, and a list of desired KPIs. +The result is a single KPI object per requested KPI. 
+ +#### GetKPI() - Definition + +{{< tabs items="query,response" >}} +{{% tab "query" %}} +query: + +```graphql +query GetKPI($filterInput: KPIFilter!, $startDateTime: DateTime!, $endDateTime: DateTime!, $kpi: [KPI!], $ignorePlannedDownTime: Boolean, $ignorePlannedShutdownTime: Boolean) { + GetKPI(filterInput: $filterInput, startDateTime: $startDateTime, endDateTime: $endDateTime, kpi: $kpi, ignorePlannedDownTime: $ignorePlannedDownTime, ignorePlannedShutdownTime: $ignorePlannedShutdownTime) { + name + to + from + error + value + units + } +} +``` + +input: + +```json +{ + "filterInput": { + "equipmentIds": ["MachineA", "MachineB"], + "hierarchyScopeId": "Enterprise1.SiteA.Line1" + }, + "startDateTime": "2024-09-01T00:00:00Z", + "endDateTime": "2024-09-01T18:00:00Z", + "kpi": ["ActualProductionTime","Availability", "GoodQuantity", "ProducedQuantity", "Effectiveness", "QualityRatio", "ActualCycleTime", "OverallEquipmentEffectiveness"], + "ignorePlannedDownTime": false, + "ignorePlannedShutdownTime": false, + "onlyIncludeActiveJobResponses": false +} +``` + +{{% /tab %}} +{{% tab "response" %}} + +```json +{ + "data": { + "GetKPI": [ + { + "name": "ActualProductionTime", + "to": "2024-09-01T18:00:00Z", + "from": "2024-09-01T00:00:00Z", + "error": null, + "value": 0, + "units": "seconds" + }, + { + "name": "ActualUnitDelayTime", + "to": "2024-09-01T18:00:00Z", + "from": "2024-09-01T00:00:00Z", + "error": null, + "value": 0, + "units": "seconds" + }, + { + "name": "PlannedDownTime", + "to": "2024-09-01T18:00:00Z", + "from": "2024-09-01T00:00:00Z", + "error": null, + "value": 0, + "units": "seconds" + }, + { + "name": "Availability", + "to": "2024-09-01T18:00:00Z", + "from": "2024-09-01T00:00:00Z", + "error": null, + "value": 0, + "units": "%" + }, + { + "name": "GoodQuantity", + "to": "2024-09-01T18:00:00Z", + "from": "2024-09-01T00:00:00Z", + "error": null, + "value": 0, + "units": "units" + }, + { + "name": "ProducedQuantity", + "to": "2024-09-01T18:00:00Z", + "from": "2024-09-01T00:00:00Z", + "error": null, + "value": 0, + "units": "units" + }, + { + "name": "Effectiveness", + "to": "2024-09-01T18:00:00Z", + "from": "2024-09-01T00:00:00Z", + "error": null, + "value": 100, + "units": "%" + }, + { + "name": "QualityRatio", + "to": "2024-09-01T18:00:00Z", + "from": "2024-09-01T00:00:00Z", + "error": null, + "value": 100, + "units": "%" + }, + { + "name": "ActualCycleTime", + "to": "2024-09-01T18:00:00Z", + "from": "2024-09-01T00:00:00Z", + "error": null, + "value": 0, + "units": "seconds per unit" + }, + { + "name": "OverallEquipmentEffectiveness", + "to": "2024-09-01T18:00:00Z", + "from": "2024-09-01T00:00:00Z", + "error": null, + "value": 0, + "units": "%" + } + ] + } +} +``` + +{{% /tab %}} +{{< /tabs >}} + +#### Example 1. + +Imagine a scenario where `Machine A` produces parts at a planned cycle time of 10-seconds per part. +The order starts at 09:00 and finishes at 12:00 with 30 minutes of unplanned downtime in between (this could be from loading materials, unplanned maintenance, switching tools, and so on). +After the operation finishes, the user has registered 800 Good parts and 200 scrap parts. 
+The tables in time series appear as follows: + +{{< tabs items="Equipmentstate,QuantityLog,JobOrderState" >}} +{{% tab "EquipmentState" %}} + +| EquipmentId | ISO22400State | time | +|-------------|---------------|----------------------| +| Machine A | APT | 2024-09-03T09:00:00Z | +| Machine A | ADET | 2024-09-03T10:30:00Z | +| Machine A | APT | 2024-09-03T11:00:00Z | +| Machine A | ADOT | 2024-09-03T12:00:00Z | + +{{% /tab %}} +{{% tab "QuantityLog" %}} + +| EquipmentId | Origin | QtyType | ProductionType | Qty | time | +|-------------|--------|---------|----------------|-----|----------------------| +| Machine A | User | Delta | Good | 800 | 2024-09-03T12:00:00Z | +| Machine A | User | Delta | Scrap | 200 | 2024-09-03T12:00:00Z | + +{{% /tab %}} +{{% tab "JobOrderState" %}} + +| EquipmentId | JobOrderId | PlanningCyleTime | time | +|-------------|------------|------------------|----------------------| +| Machine A | Order A | 10 | 2024-09-03T09:00:00Z | +| Machine A | NONE | 0 | 2024-09-03T12:00:00Z | + +{{% /tab %}} +{{< /tabs >}} + +Calling this KPI Query appears as follows: + +{{< tabs items="Query,Response">}} +{{% tab "query" %}} +query: + +```graphql +query GetKPI($filterInput: KPIFilter!, $startDateTime: DateTime!, $endDateTime: DateTime!, $kpi: [KPI!], $ignorePlannedDownTime: Boolean, $ignorePlannedShutdownTime: Boolean) { + GetKPI(filterInput: $filterInput, startDateTime: $startDateTime, endDateTime: $endDateTime, kpi: $kpi, ignorePlannedDownTime: $ignorePlannedDownTime, ignorePlannedShutdownTime: $ignorePlannedShutdownTime) { + name + to + from + error + value + units + } +} +``` + +input: + +```json +{ + "filterInput": { + "equipmentIds": ["MachineA"] + }, + "startDateTime": "2024-09-03T09:00:00Z", + "endDateTime": "2024-09-03T12:00:00Z", + "kpi": ["ActualProductionTime","Availability", "GoodQuantity", "ProducedQuantity", "Effectiveness", "QualityRatio", "ActualCycleTime", "OverallEquipmentEffectiveness"] +} +``` + +{{% /tab %}} +{{% tab "response" %}} + +```json +{ + "data": { + "GetKPI": [ + { + "_comment": "This is the total time spent in APT", + "name": "ActualProductionTime", + "to": "2024-09-01T18:00:00Z", + "from": "2024-09-01T00:00:00Z", + "error": null, + "value": 9000, + "units": "seconds" + }, + { + "_comment": "This is the total time spent in ADET", + "name": "ActualUnitDelayTime", + "to": "2024-09-01T18:00:00Z", + "from": "2024-09-01T00:00:00Z", + "error": null, + "value": 1800, + "units": "seconds" + }, + { + "_comment": "This is the total time spent in PDOT", + "name": "PlannedDownTime", + "to": "2024-09-01T18:00:00Z", + "from": "2024-09-01T00:00:00Z", + "error": null, + "value": 0, + "units": "seconds" + }, + { + "_comment": "This is APT/PBT", + "name": "Availability", + "to": "2024-09-01T18:00:00Z", + "from": "2024-09-01T00:00:00Z", + "error": null, + "value": 83.3333333, + "units": "%" + }, + { + "_comment": "This is the total recorded good quantity", + "name": "GoodQuantity", + "to": "2024-09-01T18:00:00Z", + "from": "2024-09-01T00:00:00Z", + "error": null, + "value": 800, + "units": "units" + }, + { + "_comment": "This is the total quantity produced in the order", + "name": "ProducedQuantity", + "to": "2024-09-01T18:00:00Z", + "from": "2024-09-01T00:00:00Z", + "error": null, + "value": 1000, + "units": "units" + }, + {"_comment": "This is (ProducedQuantity * PlannedCycleTime)/APT", + "name": "Effectiveness", + "to": "2024-09-01T18:00:00Z", + "from": "2024-09-01T00:00:00Z", + "error": null, + "value": 111.111111, + "units": "%" + }, + { + "_comment": "This 
is GoodQuantity/ProducedQuantity", + "name": "QualityRatio", + "to": "2024-09-01T18:00:00Z", + "from": "2024-09-01T00:00:00Z", + "error": null, + "value": 80, + "units": "%" + }, + { + "_comment": "This is APT/ProducedQuantity", + "name": "ActualCycleTime", + "to": "2024-09-01T18:00:00Z", + "from": "2024-09-01T00:00:00Z", + "error": null, + "value": 10.8, + "units": "seconds per unit" + }, + { + "_comment": "This is Availability * Effectiveness * QualityRatio", + "name": "OverallEquipmentEffectiveness", + "to": "2024-09-01T18:00:00Z", + "from": "2024-09-01T00:00:00Z", + "error": null, + "value": 74.074, + "units": "%" + } + ] + } +} +``` + +{{% /tab %}} +{{< /tabs >}} + +### `GetKPIByShift()` + +The `GetKPIByShift()` query is another base-level KPI Query. +It is similar to GetKPI(), but rather than returning a single result per KPI query, it also accepts `WorkCalendarEntryProperty IDs` to filter against and return a result for each instance of a shift. + +#### GetKPIByShift() - Definition + +{{< tabs items="Query,Response" >}} +{{% tab "query" %}} +query: + +```graphql +query GetKPIByShift($filterInput: GetKPIByShiftFilter!, $startDateTime: DateTime!, $endDateTime: DateTime!, $kpi: [KPI!], $ignorePlannedDownTime: Boolean, $ignorePlannedShutdownTime: Boolean, $groupByShift: Boolean, $groupByEquipment: Boolean, $onlyIncludeActiveJobResponses: Boolean) { + GetKPIByShift(filterInput: $filterInput, startDateTime: $startDateTime, endDateTime: $endDateTime, kpi: $kpi, ignorePlannedDownTime: $ignorePlannedDownTime, ignorePlannedShutdownTime: $ignorePlannedShutdownTime, groupByShift: $groupByShift, groupByEquipment: $groupByEquipment, OnlyIncludeActiveJobResponses: $onlyIncludeActiveJobResponses) { + name + equipmentIds + shiftsContained + from + to + error + value + units + } +} +``` + +input: + +```json +{ + "filterInput": { + "shiftFilter": [ + { + "propertyName": "Shift Name", + "eq": "Morning" + } + ], + "equipmentIds": ["Machine A", "Machine B"], + "hierarchyScopeId": "Enterprise1.SiteA.Line1" + }, + "startDateTime": "2024-09-01T00:00:00Z", + "endDateTime": "2024-09-03T18:00:00Z", + "kpi": ["ActualProductionTime", "OverallEquipmentEffectiveness"], + "ignorePlannedDownTime": false, + "ignorePlannedShutdownTime": false, + "onlyIncludeActiveJobResponses": false, + "groupByShift": false, + "groupByEquipment": true +} +``` + +{{% /tab %}} +{{% tab "response" %}} + +```json +{ + "data": { + "GetKPIByShift": [ + { + "name": "ActualProductionTime", + "equipmentIds": ["Machine A", "Machine B"], + "shiftsContained": ["Shift.Sunday.Morning"], + "from": "2024-09-01T09:00:00Z", + "to": "2024-09-01T17:00:00Z", + "error": null, + "value": 0, + "units": "seconds" + }, + { + "name": "ActualProductionTime", + "equipmentIds": ["Machine A", "Machine B"], + "shiftsContained": ["Shift.Monday.Morning"], + "from": "2024-09-02T00:00:00Z", + "to": "2024-09-02T17:00:00Z", + "error": null, + "value": 0, + "units": "seconds" + }, + { + "name": "ActualProductionTime", + "equipmentIds": ["Machine A", "Machine B"], + "shiftsContained": ["Shift.Tuesday.Morning"], + "from": "2024-09-03T00:00:00Z", + "to": "2024-09-03T17:00:00Z", + "error": null, + "value": 0, + "units": "seconds" + }, + { + "name": "OverallEquipmentEffectiveness", + "equipmentIds": ["Machine A", "Machine B"], + "shiftsContained": ["Shift.Sunday.Morning"], + "from": "2024-09-01T09:00:00Z", + "to": "2024-09-01T17:00:00Z", + "error": null, + "value": 0, + "units": "%" + }, + { + "name": "OverallEquipmentEffectiveness", + "equipmentIds": ["Machine A", "Machine 
B"], + "shiftsContained": ["Shift.Monday.Morning"], + "from": "2024-09-02T00:00:00Z", + "to": "2024-09-02T17:00:00Z", + "error": null, + "value": 0, + "units": "%" + }, + { + "name": "OverallEquipmentEffectiveness", + "equipmentIds": ["Machine A", "Machine B"], + "shiftsContained": ["Shift.Tuesday.Morning"], + "from": "2024-09-03T00:00:00Z", + "to": "2024-09-03T17:00:00Z", + "error": null, + "value": 0, + "units": "%" + }, + ] + } +} +``` + +{{% /tab %}} +{{< /tabs >}} + +#### Example 2 + +Following on from Example 1. `Machine A` exists on a production line alongside `Machine B`, they both produce parts with a planned cycle time of 10 seconds per part and runs on the same shift pattern. The [work calendar service]({{< relref "../work-calendars" >}}) is configured with 3 distinct daily shifts: + +- Morning (06:00-14:00) +- Afternoon (14:00 - 22:00) +- Night (22:00-06:00) + +Which results in the following tables: + +{{< tabs items="EquipmentState,QuantityLog,Calendar" >}} +{{% tab "EquipmentState" %}} + +| EquipmentId | ISO22400State | time | +|-------------|---------------|----------------------| +| Machine A | APT | 2024-09-01T06:00:00Z | +| Machine B | APT | 2024-09-01T06:00:00Z | +| Machine A | ADET | 2024-09-01T10:30:00Z | +| Machine B | ADET | 2024-09-01T10:30:00Z | +| Machine A | APT | 2024-09-01T11:00:00Z | +| Machine B | APT | 2024-09-01T11:00:00Z | +| Machine A | ADOT | 2024-09-01T14:00:00Z | +| Machine B | ADOT | 2024-09-01T14:00:00Z | +| Machine A | APT | 2024-09-01T14:00:00Z | +| Machine B | APT | 2024-09-01T14:00:00Z | +| Machine A | ADET | 2024-09-01T17:30:00Z | +| Machine B | ADET | 2024-09-01T17:30:00Z | +| Machine A | APT | 2024-09-01T18:00:00Z | +| Machine B | APT | 2024-09-01T18:00:00Z | +| Machine A | ADOT | 2024-09-01T22:00:00Z | +| Machine B | ADOT | 2024-09-01T22:00:00Z | +| Machine A | APT | 2024-09-01T22:00:00Z | +| Machine B | APT | 2024-09-01T22:00:00Z | +| Machine A | ADET | 2024-09-02T04:00:00Z | +| Machine B | ADET | 2024-09-02T04:00:00Z | +| Machine A | APT | 2024-09-02T04:30:00Z | +| Machine B | APT | 2024-09-02T04:30:00Z | +| Machine A | ADOT | 2024-09-02T06:00:00Z | +| Machine B | ADOT | 2024-09-02T06:00:00Z | +| Machine A | APT | 2024-09-02T06:00:00Z | +| Machine B | APT | 2024-09-02T06:00:00Z | +| Machine A | ADET | 2024-09-02T10:30:00Z | +| Machine B | ADET | 2024-09-02T10:30:00Z | +| Machine A | APT | 2024-09-02T11:00:00Z | +| Machine B | APT | 2024-09-02T11:00:00Z | +| Machine A | ADOT | 2024-09-02T14:00:00Z | +| Machine B | ADOT | 2024-09-02T14:00:00Z | +| Machine A | APT | 2024-09-02T14:00:00Z | +| Machine B | APT | 2024-09-02T14:00:00Z | +| Machine A | ADET | 2024-09-02T18:30:00Z | +| Machine B | ADET | 2024-09-02T18:30:00Z | +| Machine A | APT | 2024-09-02T19:00:00Z | +| Machine B | APT | 2024-09-02T19:00:00Z | +| Machine A | ADOT | 2024-09-02T22:00:00Z | +| Machine B | ADOT | 2024-09-02T22:00:00Z | +| Machine A | APT | 2024-09-02T22:00:00Z | +| Machine B | APT | 2024-09-02T22:00:00Z | +| Machine A | ADET | 2024-09-03T04:30:00Z | +| Machine B | ADET | 2024-09-03T04:30:00Z | +| Machine A | APT | 2024-09-03T05:00:00Z | +| Machine B | APT | 2024-09-03T05:00:00Z | +| Machine A | ADOT | 2024-09-03T06:00:00Z | +| Machine B | ADOT | 2024-09-03T06:00:00Z | + +{{% /tab %}} +{{% tab "QuantityLog" %}} + +| EquipmentId | Origin | QtyType | ProductionType | Qty | time | +|-------------|--------|---------|----------------|-----|----------------------| +| Machine A | User | Delta | Good | 800 | 2024-09-01T14:00:00Z | +| Machine A | User | Delta | Scrap | 200 | 
2024-09-01T14:00:00Z | +| Machine B | User | Delta | Good | 700 | 2024-09-01T14:00:00Z | +| Machine B | User | Delta | Scrap | 300 | 2024-09-01T14:00:00Z | +| Machine A | User | Delta | Good | 900 | 2024-09-01T22:00:00Z | +| Machine A | User | Delta | Scrap | 100 | 2024-09-01T22:00:00Z | +| Machine B | User | Delta | Good | 950 | 2024-09-01T22:00:00Z | +| Machine B | User | Delta | Scrap | 50 | 2024-09-01T22:00:00Z | +| Machine A | User | Delta | Good | 999 | 2024-09-01T06:00:00Z | +| Machine A | User | Delta | Scrap | 1 | 2024-09-01T06:00:00Z | +| Machine B | User | Delta | Good | 900 | 2024-09-01T06:00:00Z | +| Machine B | User | Delta | Scrap | 100 | 2024-09-01T06:00:00Z | +| Machine A | User | Delta | Good | 850 | 2024-09-02T14:00:00Z | +| Machine A | User | Delta | Scrap | 150 | 2024-09-02T14:00:00Z | +| Machine B | User | Delta | Good | 800 | 2024-09-02T14:00:00Z | +| Machine B | User | Delta | Scrap | 200 | 2024-09-02T14:00:00Z | +| Machine A | User | Delta | Good | 700 | 2024-09-02T22:00:00Z | +| Machine A | User | Delta | Scrap | 300 | 2024-09-02T22:00:00Z | +| Machine B | User | Delta | Good | 750 | 2024-09-02T22:00:00Z | +| Machine B | User | Delta | Scrap | 250 | 2024-09-02T22:00:00Z | +| Machine A | User | Delta | Good | 600 | 2024-09-02T06:00:00Z | +| Machine A | User | Delta | Scrap | 400 | 2024-09-02T06:00:00Z | +| Machine B | User | Delta | Good | 750 | 2024-09-02T06:00:00Z | +| Machine B | User | Delta | Scrap | 250 | 2024-09-02T06:00:00Z | + +{{% /tab %}} +{{% tab "JobOrderState" %}} + +| EquipmentId | JobOrderId | PlanningCyleTime | time | +|-------------|-------------|------------------|----------------------| +| Machine A | Order A1 | 10 | 2024-09-01T06:00:00Z | +| Machine B | Order A2 | 10 | 2024-09-01T06:00:00Z | +| Machine A | NONE | 0 | 2024-09-01T14:00:00Z | +| Machine B | NONE | 0 | 2024-09-01T14:00:00Z | +| Machine A | Order B1 | 10 | 2024-09-01T14:00:00Z | +| Machine B | Order B2 | 10 | 2024-09-01T14:00:00Z | +| Machine A | NONE | 0 | 2024-09-01T22:00:00Z | +| Machine B | NONE | 0 | 2024-09-01T22:00:00Z | +| Machine A | Order C1 | 10 | 2024-09-01T22:00:00Z | +| Machine B | Order C2 | 10 | 2024-09-01T22:00:00Z | +| Machine A | NONE | 0 | 2024-09-02T06:00:00Z | +| Machine B | NONE | 0 | 2024-09-02T06:00:00Z | +| Machine A | Order D1 | 10 | 2024-09-02T06:00:00Z | +| Machine B | Order D2 | 10 | 2024-09-02T06:00:00Z | +| Machine A | NONE | 0 | 2024-09-02T14:00:00Z | +| Machine B | NONE | 0 | 2024-09-02T14:00:00Z | +| Machine A | Order E1 | 10 | 2024-09-02T14:00:00Z | +| Machine B | Order E2 | 10 | 2024-09-02T14:00:00Z | +| Machine A | NONE | 0 | 2024-09-02T22:00:00Z | +| Machine B | NONE | 0 | 2024-09-02T22:00:00Z | +| Machine A | Order F1 | 10 | 2024-09-02T22:00:00Z | +| Machine B | Order F2 | 10 | 2024-09-02T22:00:00Z | +| Machine A | NONE | 0 | 2024-09-03T06:00:00Z | +| Machine B | NONE | 0 | 2024-09-03T06:00:00Z | + +{{% /tab %}} +{{% tab "Calendar_AdHoc" %}} + +| EquipmentId | WorkCalendarDefinitionID | WorkCalendarDefinitionEntryId | EntryType | time | +|-------------|--------------------------|----------------------------------|-----------|----------------------| +| Machine A | ShiftCalendar | ShiftCalendar.Sunday.Morning | START | 2024-09-01T06:00:00Z | +| Machine B | ShiftCalendar | ShiftCalendar.Sunday.Morning | START | 2024-09-01T06:00:00Z | +| Machine A | ShiftCalendar | ShiftCalendar.Sunday.Morning | END | 2024-09-01T14:00:00Z | +| Machine B | ShiftCalendar | ShiftCalendar.Sunday.Morning | END | 2024-09-01T14:00:00Z | +| Machine A | ShiftCalendar | 
ShiftCalendar.Sunday.Afternoon | START | 2024-09-01T14:00:00Z | +| Machine B | ShiftCalendar | ShiftCalendar.Sunday.Afternoon | START | 2024-09-01T14:00:00Z | +| Machine A | ShiftCalendar | ShiftCalendar.Sunday.Afternoon | END | 2024-09-01T22:00:00Z | +| Machine B | ShiftCalendar | ShiftCalendar.Sunday.Afternoon | END | 2024-09-01T22:00:00Z | +| Machine A | ShiftCalendar | ShiftCalendar.Sunday.Night | START | 2024-09-01T22:00:00Z | +| Machine B | ShiftCalendar | ShiftCalendar.Sunday.Night | START | 2024-09-01T22:00:00Z | +| Machine A | ShiftCalendar | ShiftCalendar.Sunday.Night | END | 2024-09-02T06:00:00Z | +| Machine B | ShiftCalendar | ShiftCalendar.Sunday.Night | END | 2024-09-02T06:00:00Z | +| Machine A | ShiftCalendar | ShiftCalendar.Monday.Morning | START | 2024-09-02T06:00:00Z | +| Machine B | ShiftCalendar | ShiftCalendar.Monday.Morning | START | 2024-09-02T06:00:00Z | +| Machine A | ShiftCalendar | ShiftCalendar.Monday.Morning | END | 2024-09-02T14:00:00Z | +| Machine B | ShiftCalendar | ShiftCalendar.Monday.Morning | END | 2024-09-02T14:00:00Z | +| Machine A | ShiftCalendar | ShiftCalendar.Monday.Afternoon | START | 2024-09-02T14:00:00Z | +| Machine B | ShiftCalendar | ShiftCalendar.Monday.Afternoon | START | 2024-09-02T14:00:00Z | +| Machine A | ShiftCalendar | ShiftCalendar.Monday.Afternoon | END | 2024-09-02T22:00:00Z | +| Machine B | ShiftCalendar | ShiftCalendar.Monday.Afternoon | END | 2024-09-02T22:00:00Z | +| Machine A | ShiftCalendar | ShiftCalendar.Monday.Night | START | 2024-09-02T22:00:00Z | +| Machine B | ShiftCalendar | ShiftCalendar.Monday.Night | START | 2024-09-02T22:00:00Z | +| Machine A | ShiftCalendar | ShiftCalendar.Monday.Night | END | 2024-09-03T06:00:00Z | +| Machine B | ShiftCalendar | ShiftCalendar.Monday.Night | END | 2024-09-03T06:00:00Z | + +{{% /tab %}} +{{< /tabs >}} + +You can run this query in multiple ways: + +- **`groupByEquipment = false and groupByShift = false` -** returns a separate result per shift instance per equipment + +{{< tabs items="Query,Response" >}} +{{% tab "query" %}} +query: + +```graphql +query GetKPIByShift($filterInput: GetKPIByShiftFilter!, $startDateTime: DateTime!, $endDateTime: DateTime!, $kpi: [KPI!], $ignorePlannedDownTime: Boolean, $ignorePlannedShutdownTime: Boolean, $groupByShift: Boolean, $groupByEquipment: Boolean, $onlyIncludeActiveJobResponses: Boolean) { + GetKPIByShift(filterInput: $filterInput, startDateTime: $startDateTime, endDateTime: $endDateTime, kpi: $kpi, ignorePlannedDownTime: $ignorePlannedDownTime, ignorePlannedShutdownTime: $ignorePlannedShutdownTime, groupByShift: $groupByShift, groupByEquipment: $groupByEquipment, OnlyIncludeActiveJobResponses: $onlyIncludeActiveJobResponses) { + name + equipmentIds + shiftsContained + from + to + error + value + units + } +} +``` + +input: + +```json +{ + "filterInput": { + "shiftFilter": [ + { + "propertyName": "Shift Name", + "eq": "Morning" + } + ], + "equipmentIds": ["Machine A", "Machine B"], + }, + "startDateTime": "2024-09-01T00:00:00Z", + "endDateTime": "2024-09-03T18:00:00Z", + "kpi": ["ActualProductionTime"], + "ignorePlannedDownTime": false, + "ignorePlannedShutdownTime": false, + "onlyIncludeActiveJobResponses": false, + "groupByShift": false, + "groupByEquipment": false +} +``` + +{{% /tab %}} +{{% tab "response" %}} + +```json +{ + "data": { + "GetKPIByShift": [ + { + "name": "ActualProductionTime", + "equipmentIds": ["Machine A"], + "shiftsContained": ["Shift.Sunday.Morning"], + "from": "2024-09-01T06:00:00Z", + "to": "2024-09-01T14:00:00Z", + 
"error": null, + "value": 27000, + "units": "seconds" + }, + { + "name": "ActualProductionTime", + "equipmentIds": ["Machine B"], + "shiftsContained": ["Shift.Sunday.Morning"], + "from": "2024-09-01T06:00:00Z", + "to": "2024-09-01T14:00:00Z", + "error": null, + "value": 27000, + "units": "seconds" + }, + { + "name": "ActualProductionTime", + "equipmentIds": ["Machine A"], + "shiftsContained": ["Shift.Monday.Morning"], + "from": "2024-09-02T06:00:00Z", + "to": "2024-09-02T14:00:00Z", + "error": null, + "value": 27000, + "units": "seconds" + }, + { + "name": "ActualProductionTime", + "equipmentIds": ["Machine B"], + "shiftsContained": ["Shift.Monday.Morning"], + "from": "2024-09-02T06:00:00Z", + "to": "2024-09-02T14:00:00Z", + "error": null, + "value": 27000, + "units": "seconds" + } + ] + } +} +``` + +{{% /tab %}} +{{< /tabs >}} + +- **`groupByEquipment = true and groupByShift = false` -** returns a separate result per shift instance containing all equipment + +{{< tabs items="Query,Response" >}} +{{% tab "query" %}} +query: + +```graphql +query GetKPIByShift($filterInput: GetKPIByShiftFilter!, $startDateTime: DateTime!, $endDateTime: DateTime!, $kpi: [KPI!], $ignorePlannedDownTime: Boolean, $ignorePlannedShutdownTime: Boolean, $groupByShift: Boolean, $groupByEquipment: Boolean, $onlyIncludeActiveJobResponses: Boolean) { + GetKPIByShift(filterInput: $filterInput, startDateTime: $startDateTime, endDateTime: $endDateTime, kpi: $kpi, ignorePlannedDownTime: $ignorePlannedDownTime, ignorePlannedShutdownTime: $ignorePlannedShutdownTime, groupByShift: $groupByShift, groupByEquipment: $groupByEquipment, OnlyIncludeActiveJobResponses: $onlyIncludeActiveJobResponses) { + name + equipmentIds + shiftsContained + from + to + error + value + units + } +} +``` + +input: + +```json +{ + "filterInput": { + "shiftFilter": [ + { + "propertyName": "Shift Name", + "eq": "Morning" + } + ], + "equipmentIds": ["Machine A", "Machine B"], + }, + "startDateTime": "2024-09-01T00:00:00Z", + "endDateTime": "2024-09-03T18:00:00Z", + "kpi": ["ActualProductionTime"], + "ignorePlannedDownTime": false, + "ignorePlannedShutdownTime": false, + "onlyIncludeActiveJobResponses": false, + "groupByShift": false, + "groupByEquipment": true +} +``` + +{{% /tab %}} +{{% tab "response" %}} + +```json +{ + "data": { + "GetKPIByShift": [ + { + "name": "ActualProductionTime", + "equipmentIds": ["Machine A", "Machine B"], + "shiftsContained": ["Shift.Sunday.Morning"], + "from": "2024-09-01T06:00:00Z", + "to": "2024-09-01T14:00:00Z", + "error": null, + "value": 54000, + "units": "seconds" + }, + { + "name": "ActualProductionTime", + "equipmentIds": ["Machine A", "Machine B"], + "shiftsContained": ["Shift.Monday.Morning"], + "from": "2024-09-02T06:00:00Z", + "to": "2024-09-02T14:00:00Z", + "error": null, + "value": 54000, + "units": "seconds" + } + + ] + } +} +``` + +{{% /tab %}} +{{< /tabs >}} + +- **groupByEquipment = true and groupByShift = true -** groups shifts and equipment together + +{{< tabs items="Query,Response" >}} +{{% tab "query" %}} +query: + +```graphql +query GetKPIByShift($filterInput: GetKPIByShiftFilter!, $startDateTime: DateTime!, $endDateTime: DateTime!, $kpi: [KPI!], $ignorePlannedDownTime: Boolean, $ignorePlannedShutdownTime: Boolean, $groupByShift: Boolean, $groupByEquipment: Boolean, $onlyIncludeActiveJobResponses: Boolean) { + GetKPIByShift(filterInput: $filterInput, startDateTime: $startDateTime, endDateTime: $endDateTime, kpi: $kpi, ignorePlannedDownTime: $ignorePlannedDownTime, ignorePlannedShutdownTime: 
$ignorePlannedShutdownTime, groupByShift: $groupByShift, groupByEquipment: $groupByEquipment, OnlyIncludeActiveJobResponses: $onlyIncludeActiveJobResponses) { + name + equipmentIds + shiftsContained + from + to + error + value + units + } +} +``` + +input: + +```json +{ + "filterInput": { + "shiftFilter": [ + { + "propertyName": "Shift Name", + "eq": "Morning" + } + ], + "equipmentIds": ["Machine A", "Machine B"], + }, + "startDateTime": "2024-09-01T00:00:00Z", + "endDateTime": "2024-09-03T18:00:00Z", + "kpi": ["ActualProductionTime"], + "ignorePlannedDownTime": false, + "ignorePlannedShutdownTime": false, + "onlyIncludeActiveJobResponses": false, + "groupByShift": true, + "groupByEquipment": true +} +``` + +{{% /tab %}} +{{% tab "response" %}} + +```json +{ + "data": { + "GetKPIByShift": [ + { + "name": "ActualProductionTime", + "equipmentIds": ["Machine A", "Machine B"], + "shiftsContained": ["Shift.Sunday.Morning","Shift.Monday.Morning"], + "from": "2024-09-01T06:00:00Z", + "to": "2024-09-01T14:00:00Z", + "error": null, + "value": 108000, + "units": "seconds" + } + + ] + } +} +``` + +{{% /tab %}} +{{< /tabs >}} + +## Federated Queries + +The KPI service extends the equipment, work schedule, work request, job order, and job response GraphQL entities with a KPI object. +This makes KPIs easier to query. + +### Query Equipment + +Extending the equipment type allows the equipment ID to be inferred from parent equipment type + +{{< tabs items="Query,Response" >}} +{{% tab "query" %}} +query: + +```graphql +query QueryEquipment($startDateTime: DateTime!, $endDateTime: DateTime!, $kpi: [KPI!], $ignorePlannedDownTime: Boolean, $ignorePlannedShutdownTime: Boolean) { + queryEquipment { + id + kpi(startDateTime: $startDateTime, endDateTime: $endDateTime, kpi: $kpi, ignorePlannedDownTime: $ignorePlannedDownTime, ignorePlannedShutdownTime: $ignorePlannedShutdownTime) { + name + from + to + error + value + units + } + } +} +``` + +input: + +```json +{ + "startDateTime": "2024-09-01T06:00:00Z", + "endDateTime": "2024-09-01T14:00:00Z", + "kpi": ["ActualProductionTime"], + "ignorePlannedDownTime": false, + "ignorePlannedShutdownTime": false +} +``` + +{{% /tab %}} +{{% tab "response" %}} + +```json +{ + "data": { + "queryEquipment": [ + { + "id": "Machine A", + "kpi": [ + { + "name": "ActualProductionTime", + "from": "2024-09-01T06:00:00Z", + "to": "2024-09-01T14:00:00Z", + "error": null, + "value": 27000, + "units": "seconds" + } + ] + }, + { + "id": "Machine B", + "kpi": [ + { + "name": "ActualProductionTime", + "from": "2024-09-01T06:00:00Z", + "to": "2024-09-01T14:00:00Z", + "error": null, + "value": 27000, + "units": "seconds" + } + ] + } + ] + } +} +``` + +{{% /tab %}} +{{< /tabs >}} + +### Query JobResponse + +Extending the job response type allows: + +- `startDateTime` to be inferred from `jobResponse.startDateTime` +- `endDateTime` to be inferred from `jobResponse.endDateTime` +- `equipmentIds` to be inferred from `jobResponse.equipmentActual.EquipmentVersion.id` + +{{< tabs items="Query,Response" >}} +{{% tab "query" %}} +query: + +```graphql +query QueryJobResponse($kpi: [KPI!], $ignorePlannedDownTime: Boolean, $ignorePlannedShutdownTime: Boolean, $filter: KPIFilter) { + queryJobResponse { + id + startDateTime + endDateTime + equipmentActual { + id + equipmentVersion { + id + } + } + kpi(kpi: $kpi, ignorePlannedDownTime: $ignorePlannedDownTime, ignorePlannedShutdownTime: $ignorePlannedShutdownTime, filter: $filter) { + name + from + to + error + value + units + } + } +} +``` + +input: + 
+```json +{ + "kpi": [ + "ActualProductionTime" + ], + "ignorePlannedDownTime": false, + "ignorePlannedShutdownTime": false +} +``` + +{{% /tab %}} +{{% tab "response" %}} + +```json +{ + "data": { + "queryJobResponse": [ + { + "id": "Order A1.JobResponse 1", + "startDateTime": "2024-09-01T08:00:00Z", + "endDateTime": "2024-09-01T14:00:00Z", + "equipmentActual": [ + { + "id": "Machine A.2024-09-01T08:00:00Z", + "equipmentVersion": { + "id": "Machine A" + } + } + ], + "kpi": [ + { + "name": "ActualProductionTime", + "from": "2024-09-01T08:00:00Z", + "to": "2024-09-01T14:00:00Z", + "error": null, + "value": 27000, + "units": "seconds" + } + ] + } + ] + } +} +``` + +{{% /tab %}} +{{< /tabs >}} + +### Query Job Order, Work Request, and Work Schedule + +Extending the Job order, Work Request, and Work Schedule entities makes it possible to recursively query all of the attached job responses: + +```mermaid +flowchart TD + WorkSchedule --> WorkRequests + WorkRequests --> JobOrders + JobOrders --> JobResponses +``` + +Imagine that from the data from example 2 has this hierarchy: + +```mermaid +flowchart TD + WorkScheduleA --> WorkRequestA + WorkScheduleA --> WorkRequestB + WorkRequestA --> OrderA1 + WorkRequestA --> OrderA2 + WorkRequestB --> OrderB1 + WorkRequestB --> OrderC1 +``` + +Querying KPI on `workSchedule A` combines all results for order A1, A2, B1 and C1: + +{{< tabs items="Query,Response">}} +{{% tab "query" %}} +query: + +```graphql +query QueryWorkSchedule($kpi: [KPI!], $ignorePlannedDownTime: Boolean, $ignorePlannedShutdownTime: Boolean, $filter: KPIFilter) { + queryWorkSchedule { + id + kpi(kpi: $kpi, ignorePlannedDownTime: $ignorePlannedDownTime, ignorePlannedShutdownTime: $ignorePlannedShutdownTime, filter: $filter) { + name + from + to + error + value + units + } + } +} +``` + +input: + +```json +{ + "kpi": [ + "ActualProductionTime" + ], + "ignorePlannedDownTime": false, + "ignorePlannedShutdownTime": false +} +``` + +{{% /tab %}} +{{% tab "response" %}} + +```json +{ + "data": { + "queryWorkSchedule": [ + { + "id": "WorkScheduleA", + "kpi": [ + { + "name": "ActualProductionTime", + "from": "2024-09-01T08:00:00Z", + "to": "2024-09-02T06:00:00Z", + "error": null, + "value": 108000, + "units": "seconds" + } + ] + } + ] + } +} +``` + +{{% /tab %}} +{{< /tabs >}} + +## Additional Filters + +Some KPI Queries provide additional filters that are not mentioned in the preceding examples: + +- `ignorePlannedDownTime` (default: `false`) - Ignores planned down time events. For example if a state change happens while in the planned downtime calendar state, by default it is ignored. If `ignorePlannedDowntime = true`, the underlying state change is still returned. +- `ignorePlannedShutdownTime` (default: `false`). Similar to `ignorePlannedDowntime` except with planned shutdown calendar events. +- `onlyIncludeActiveJobResponses` (default: `false`) - if set to true will adjust the time interval of the KPI query to only be whilst a job response is active. For example if a user queries a KPI between 00:00 - 23:59 but there are only active job responses from 08:00-19:00, the query time range would be adjusted to 08:00-19:00. 
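+
+For example, a sketch of input variables that enables all three filters for the `GetKPI` query shown earlier (whether a given filter is accepted depends on the query, as noted above):
+
+```json
+{
+  "filterInput": { "equipmentIds": ["Machine A"] },
+  "startDateTime": "2024-09-03T00:00:00Z",
+  "endDateTime": "2024-09-03T23:59:59Z",
+  "kpi": ["OverallEquipmentEffectiveness"],
+  "ignorePlannedDownTime": true,
+  "ignorePlannedShutdownTime": true,
+  "onlyIncludeActiveJobResponses": true
+}
+```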
diff --git a/content/versions/v3.2.1/how-to/model/_index.md b/content/versions/v3.2.1/how-to/model/_index.md new file mode 100644 index 000000000..132c3faab --- /dev/null +++ b/content/versions/v3.2.1/how-to/model/_index.md @@ -0,0 +1,22 @@ +--- +title: 'Define production models' +description: Create models for equipment, data sources, operations definitions, work definitions, and so on. +date: '2023-09-22T14:50:39-03:00' +draft: false +weight: 300 +cascade: + icon: model-cubes +--- + +You have multiple ways to update your production models: + +- Use the UI to define it for an individual unit or class +- POST a batch over the GraphQL API +- Use BPMN as a filter receive an incoming ERP document and map into the system + +The trade offs are usually upfront configuration time, number of items to add, and level of automation. +Adding an item over the UI requires no programming skill, but you can only add only one unit at a time. +Creating a BPMN process to listen for an event and automatically map new units brings the highest automation, but it requires upfront investment to write and test the BPMN workflow. + + +{{< card-list >}} diff --git a/content/versions/v3.2.1/how-to/model/create-objects-ui.md b/content/versions/v3.2.1/how-to/model/create-objects-ui.md new file mode 100644 index 000000000..b37714d22 --- /dev/null +++ b/content/versions/v3.2.1/how-to/model/create-objects-ui.md @@ -0,0 +1,90 @@ +--- +title: 'Create objects from the UI' +date: '2023-11-20T15:36:03-03:00' +draft: false +categories: ["how-to"] +description: How to create manufacturing objects from the Rhize UI. +weight: 010 +--- + +To make a production object visible to the Rhize data hub, you must define it as a data model. +Along with its API, Rhize also has a graphical interface to create and update objects in your role-based equipment hierarchy. + +Often, one object references another: for example, a piece of equipment may belong to an equipment class, have a unit of measure as a property, and be associated with process segment. +These associations form nodes and edges in your knowledge graph, so the more information relationships that you accurately create, the better. + +## Prerequisites + +Ensure that you have the following: + +- Access to the Rhize UI +- Information about the equipment that you want to model + +## General procedure + +1. From the UI, select the menu in the top corner. +1. Select **Master Data**, then the object you want to configure. +1. **Create new** (or the plus sign). +1. Name the object according to your naming conventions. +1. Fill in the other fields. For details, refer to the [Master definitions and fields]({{< relref "master-definitions" >}}). + +You can create many different objects, all with their own parameters and associations. +For that reason, a general procedure such as the preceding lacks any interesting detail. + +To make the action more concrete, +the next section provides an example to create plausible group of objects. + +## Example: create oven class and instance + +AG holdings is a fictional enterprise that makes product called `Alleman Brownies`. +These brownies are produced in its UK site, `AG_House`, specifically in the `brownie_kitchen_1` work center of the `south_wing` area. + +The `brownie_kitchen_1` kitchen has `oven_123`, an instance of the `Oven` class. +This equipment item also has a data source that gives temperature readings, which are published to a dashboard. + +Here's how you could use the Rhize UI to model this. 
+ +{{< callout type="info" >}} +If you are actively following to learn, make sure to use names that will easily identify the objects as example objects for testing. +{{< /callout >}} + +Model the equipment levels: + +1. From **Master Data**, select **Equipment** and enter `AG_house` as the name. +1. Give it a description. Then for **Equipment Level**, choose `Site`. +1. From the new `AG_House` object, create a sub-object with the **+** button. +1. Name the object `south_wing` and choose `Area` as its level. +1. Repeat the preceding steps to make `brownie_kitchen1` a work center in the `south_wing`. + + Once complete, the hierarchy should look like this: + + ![Screenshot of three equipment levels](/images/screenshot-rhize-equipment-levels.png) + + +Model the `Oven` equipment class: + +1. From **Master Data**, select **Equipment Class**. +1. Give it a name that makes sense for your organization. Give it a description, such as `Oven for baking`. +1. Add any additional properties. +1. **Create** . +1. Make it active by changing its version state. + +Create the associated data source: +1. From **Master Data**, select **Data Source**. +1. Add the source's connection string and protocol, along with any credentials (to configure authentication, refer to [Agent configuration]({{< relref "../../reference/service-config/agent-configuration" >}}). +1. Select the **Topics** tab and add the label and data type. +1. **Create** and make the version active. + +Now, create an instance of the Oven. + +1. From **Master Data**, select **Equipment.** Then create a sub-object for the `brownie_kitchen1` work center. +1. Add its unique, globally identifiable ID and give it a description. +1. For **Equipment Class**, add the `Oven` class you just created. +1. For **Equipment Level**, select `WorkUnit`. +1. **Create.** + + After the object is successfully created, you can add the data source. +1. From the **Data Sources** tab, select **Link Data Sources**. Select the data source you just created. + +On success, your UI should now have an item equipment that is associated with an equipment level, equipment class, and data source. +For a complete reference of all objects and properties that you can add through the UI, refer to the Master definitions and Fields]({{< relref "master-definitions" >}}). diff --git a/content/versions/v3.2.1/how-to/model/master-definitions.md b/content/versions/v3.2.1/how-to/model/master-definitions.md new file mode 100644 index 000000000..ee24f6acf --- /dev/null +++ b/content/versions/v3.2.1/how-to/model/master-definitions.md @@ -0,0 +1,345 @@ +--- +title: 'Master definitions and fields' +date: '2023-11-15T16:29:21-03:00' +draft: false +categories: ["reference"] +description: >- + A reference of all manufacturing data objects and properties that you can create in the Rhize UI +weight: 100 +--- + +To make a production object visible to the Rhize data hub, you must define it as a data model. + +These sections document all the objects that you can add through the UI, and the fields and properties that you can associate with them. +All these models are based on the ISA-95 standard, mostly from [Part 2](https://www.isa.org/products/ansi-isa-95-00-02-2018-enterprise-control-system-i), which describes the role-based equipment hierarchy. 
+ +{{< callout type="info" >}} +- For an introduction to the language of ISA-95, read [How to speak ISA-95](/isa-95/how-to-speak-isa-95) +- For visual examples of how some of these models relate, +look at our page of [ISA-95 Diagrams]({{< relref "../../isa-95/isa-95-diagrams" >}}). +{{< /callout >}} + +## Global object fields + +All objects that you define must have a unique name. +Additionally, most objects have the following fields: + +| Global field | Description | +|--------------|----------------------------------------------------------------------------------------| +| Version | The version of the object (and each version has a [state](#version-states)) | +| Description | Freeform text to describe what the object does and help colleagues understand its role | + +### Version states + +Each version of an object can have the following states: +- `Draft` +- `Active` +- `For review` +- `Deprecated` + + +{{< callout type="info" >}} +When recording actual execution, what matters is version of the object, not its general definition. +Thus, **to add a class to an object, you must give that object a version first.** +{{< /callout >}} + + +## Common models + +_Common models_ are data objects that can apply to different resources in your manufacturing process + +### Units of Measure {#uom} + +A _Unit of measure_ is a defined unit to consistently compare values, duration or quantities. + +You can create units of measure in the UI and give them the following parameters: +- Name +- Data type + +### Data Sources + +A _data source_ is a source of real-time data that is collected by the Rhize agent. +For example, in a baking process, a data source might be an OPC UA server that sends readings from an oven thermometer. + +The general fields for a data source are as follows: + +| General fields | Description | +|--------------------------|-------------------------------------------------------------------------------------| +| Connection string | A string to specify information about the data source and the way to connect to it | +| The Data source protocol | Either `MQTT` or `OPCUA` | +| username | If needed, username for [Agent authentication]({{< relref "../../reference/service-config/agent-configuration" >}}) | +| password | If needed, password for [Agent authentication]({{< relref "../../reference/service-config/agent-configuration" >}}) | +| certificate | If needed, certificate for [Agent authentication]({{< relref "../../reference/service-config/agent-configuration" >}}) | + +Additionally, each data source can have _topics_ that Rhize should be able to subscribe to. +Each topic has the following fields: + +| Topic field | Description | +|-------------------|-------------------------------------------------------------------------------| +| Data type | The data type Rhize expects to find when it receives data from that topic | +| Deduplication key | The field that NATS uses to de-duplicate messages from multiple data sources. | +| Label | The name of the topic on the side of the data source | +| Description | A freeform text field to add context | + +Some data sources, such as OPC UA, have methods for RPC calls. + +### Hierarchy Scope + +The _hierarchy scope_ represents the scope within which data information is exchanged. +It provides a flexible way to group entities and data outside of the scope defined by the role-based equipment hierarchy. + +## Resource models + +_Resource models_ are data objects that have a specific role in your role-based equipment hierarchy. 
+ +### Equipment class + +An _equipment class_ is a grouping of [equipment](#equipment) for a definite purpose. +For example, in a baking process, an equipment class might be the category of all ovens, with properties such as `maximum temperature` and `number of shelves`. + + + +Along with the [Global properties](#global-object-fields), an equipment class can include an indefinite number of properties with the following fields: + +| Properties | Description | +|-----------------|------------------------------------------| +| Name | Name of the property | +| Description | A freeform text to describe the property | +| Unit of measure | The property [unit of measure](#uom) | + + +### Equipment + +A piece of _equipment_ is a tool with a defined role in a [process segment](#process-segment). +For example, in a baking process, equipment might be a specific brownie oven. + +Equipment also might be part of hierarchy of levels, starting with Enterprise and ending with granular levels such as `WorkUnit`. + +Along with the following fields, you can also connect an equipment item to a [data source](#data-source), add additional properties, and toggle it to be active or inactive. + +{{% introTable.inline "equipment" %}} +{{ $term := (.Get 0) }} +{{ $vowels := slice "a" "e" "i" "o" "u" }} +Along with the [global object fields](#global-object-fields), +{{cond (in $vowels (index (split (lower $term) "") 0 )) "an" "a" }} +{{ $term }} object has the following fields: +{{% /introTable.inline %}} + +| General equipment fields | Description | +|-----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| Equipment class | The [class of equipment](#equipment-class) that it belongs to. | +| Equipment level | Associated level for the equipment. One of: `Enterprise`, `Site`, `Area`, `ProcessCell`, `Unit`, `ProductionLine`, `WorkCell`, `ProductionUnit`, `Warehouse`, `StorageZone`, `StorageUnit`, `WorkCenter`, `WorkUnit`, `EquipmentModule`, `ControlModule`, `Other` | + +### Material Class + +A _material class_ is a group of material with a shared purpose in the manufacturing process. + +{{% introTable.inline "material-class" /%}} + +| General fields | Description | +|------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| Assembly type | Can be one of:
  • `Logical`: the components of the material are not necessarily physically connected
  • `Physical`: the components of the material are physically connected or in the same location |
| Relationship | Can be one of:
      • `Permanent`, for material that can't be split from the production process
      • `Transient`, for temporary material in assembly, such as a pallet
      +| Hierarchy scope | The [hierarchy scope](#hierarchy-scope) that material class belongs to | +| Includes properties of | One or more material classes that a version inherits properties from | +| Is assembled from | Material classes that make this material | + +Material classes may have an indefinite number of properties with parameters for the following fields: +- Value +- [Unit of measure](#uom) + +### Material definition + +_Materials_ are everything required to produce a finished good. +They include raw materials, intermediate materials, and collections of parts. + +{{% introTable.inline "material" /%}} + +| General | Description | +|----------------|----------------------------------------------------------------------------------| +| Material class | One or more [material classes](#material-class) that a version inherits properties from | + + +Materials may have an indefinite number of properties with parameters for the following fields: +- Value +- [Unit of measure](#uom) + +### Personnel Class + +A _personnel class_ is a grouping of persons whose work shares a definite purpose in the manufacturing process. +In a baking process, an example of a personnel class may be `oven_operators`. + +{{% introTable.inline "personnel-class" /%}} + +| General fields | Description | +|-----------------|------------------------------------------------------------------------------------| +| Hierarchy scope | The [hierarchy scope](#hierarchy-scope) within which this personnel exchanges data | + +### Person + +A _person_ is a unique member of [personnel class](#personnel-class). + +{{% introTable.inline "person" /%}} + +| General fields | Description | +|---------------------------|------------------------------------------------------------| +| Name | The name of the person | +| Hierarchy scope | The [hierarchy scope](#hierarchy-scope) that the person belongs to | +| Inherit personnel classes | One or more personnel classes that a version inherits properties from | +| Operational location | The associated [Operational location](#operational-location) | + +### Physical asset class + +A _physical asset class_ is a class of [physical assets](#physical-assets). + +The physical asset class has properties for: +- ClassType +- Value +- Unit of measure + +### Physical Asset + +A _physical asset_ is portable or swappable equipment. +In a baking process, a physical asset might be the laser jet printer which adds labels to the boxes (and could be used in many segments across the plant). + +In many cases, your process may need to model only [equipment](#equipment), not physical assets. + + +### Operational Location Class + +An _operational location_ class is a grouping of [operational locations](#operational-location) for a defined purpose. +For example, in a baking process, an operational location class may be `Kitchens` + +{{% introTable.inline "operational-location-class" /%}} + +| General fields | Description | +|------------------------------------|-----------------------------------------------------------------------------------| +| Hierarchy scope | The [hierarchy scope](#hierarchy-scope) within which this location exchanges data | +| Inherit Operational location class | One or more Operational location classes that a version inherits properties from | + +### Operational Location + +An _operational location_ is where resources are expected to be located in a plant. 
+For example, in a baking process, an operational location class may be `northwing_kitchen_A` + +{{% introTable.inline "operational-location" /%}} + +| General fields | Description | +|------------------------------|-----------------------------------------------------------------------------------| +| Hierarchy scope | The [hierarchy scope](#hierarchy-scope) within which this location exchanges data | +| Operational location classes | Zero or more [operational location classes](#operational-location-class) that a version inherits properties from | +| Map view | Where the location is on the map | + +## Operation models + +_Operation models_ are data objects that describe manufacturing processes from the perspective of the level-4 (ERP) systems. + +### Process segment + +A _process segment_ is a step in a manufacturing activity that is visible to a business process, grouping the necessary personnel, material, equipment, and physical assets. +In a baking process, an example segment might be `mixing`. + +You can associate specifications for: +- Equipment, Material, Personnel, and Physical Assets + +{{% introTable.inline "process-segment" /%}} + +| General fields | Description | +|--------------------------|-------------------------------------------------------------------------------| +| Operations type | One of: ` Inventory`, `maintenance`, `mixed`, `production`, `quality` | +| Definition type | One of: `Instance`, `Pattern` | +| Duration | The expected duration | +| Duration unit of measure | The time [unit of measure](#uom) | +| Hierarchy scope | The [hierarchy scope](#hierarchy-scope) within which data is exchanged for this process segment | + +You can add additional parameters for: +- Name +- Value +- Unit of measure + +### Operations Definition + +_Operations Definitions_ describe how resources come together to manufacture product from the perspective of +the level-4 (ERP) systems. + +The operation model carries enough detail to plan the work at resolutions of hours and days. For more granularity, refer to [work models](#work-models). + +{{% introTable.inline "operations-definition" /%}} + +| General fields | Description | +|-----------------|-------------------------------------------------------------------------------------------------------| +| Operation type | One of: ` Inventory`, `maintenance`, `mixed`, `production`, `quality` | +| Hierarchy scope | The [hierarchy scope](#hierarchy-scope) within which data is exchanged for this operations definition | + +### Operations event class + +An _operations event class_ defines a class of operations events within some hierarchy. + +The class has the following properties: +- **Version** +- **Operations event classes,** defining one or more operations event classes that a version inherits properties from + +### Operations event definition + +An _operations event definition_ defines the properties that pertain to an _event_ from the perspective of the level-4 (ERP) systems. +Along with the event itself, it may have associated resources, such as material lots or physical assets received. 
+ +{{% introTable.inline "operations-event-definition" /%}} + + +| Field | Description | +|--------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| Category | A string that can be used to group the event | +| Source | The activity, function, task or phase that generated the event. | +| Event type | One of:
      • `Alert`, a potentially significant event, such as a workflow trigger, that does not require notification
      • `Alarm`, an event that requires notification
      • `Event`, any other event that is not at the level of alarm or alert
      | +| Operations event classes | One or more [operations event classes](#operations-event-class) that a version definition inherits properties from. | + +## Work models + +_Work models_ describe how the resources come together to manufacture product from the perspective of level-3 (MES) systems. +As with [Operations models](#operations-models), +the steps in the process are called _segments_. + +The work model carries enough detail to plan the work at resolutions of hours and minutes. For less granularity, refer to [operations definitions](#operations-definitions). + +### Work Master + +A _work master_ is a template for a job order from the perspective of the level-3 (MES/MOM) systems. +In a baking process, an example work master might be `Brownie Recipe`. + +{{% introTable.inline "work-master" /%}} + +| General fields | Description | +|--------------------------|-------------------------------------------------------------------------------| +| Workflow type | One of: ` Inventory`, `maintenance`, `mixed`, `production`, `quality` | +| Workflow specification | An associated BPMN workflow | + + +### Work calendar + +_Work calendars_ describe a set of rules for specific calendar entries, including duration, start and end dates, and times. + +The general fields for a calendar duration are as follows: + +| General fields | Description | +|-----------------|-----------------------------------------| +| Description | A description of the work calendar | +| Hierarchy scope | The [hierarchy scope](#hierarchy-scope) that defines scope of data exchanged for the calendar entries | + +The work calendar can have properties with a `description`, `value`, and [`unit of measure`](#uom). + +The work calendar object can have one or more _entries_, which define the start, end, duration, and recurrence of a rule. +The duration and recurrence attributes for a time-based rule are represented by the [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) standard. +The attributes for an entry are as follows: + +| Entry fields | Description | +|--------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------| +| Description | Freeform text that describes the entry | +| Type | One of: `PlannedBusyTime`, `PlannedDownTime`, and `PlannedShutdown` | +| Start date and time | When the entry starts | +| End date and time | When the entry finishes | +| Recurrence time interval | How often the entry repeats according to the [repeating interval representation](https://en.wikipedia.org/wiki/ISO_8601#Repeating_intervals) of IS0 8601 | +| Duration rule | How long the work calendar lasts, according to the [Duration representation](https://en.wikipedia.org/wiki/ISO_8601) of IS0 8601. | + diff --git a/content/versions/v3.2.1/how-to/publish-subscribe/_index.md b/content/versions/v3.2.1/how-to/publish-subscribe/_index.md new file mode 100644 index 000000000..b35ef06cc --- /dev/null +++ b/content/versions/v3.2.1/how-to/publish-subscribe/_index.md @@ -0,0 +1,16 @@ +--- +title: 'Connect event data' +date: '2023-09-22T14:50:39-03:00' +draft: false +categories: "how-to" +description: Set up event-driven messaging for Rhize +weight: 400 +cascade: + icon: mqtt +--- + +For Rhize to listen to and [handle]({{< relref "../BPMN" >}}) manufacturing events, +you need to connect a {{< abbr "data source" >}}. 
+ + +{{< card-list >}} diff --git a/content/versions/v3.2.1/how-to/publish-subscribe/connect-datasource.md b/content/versions/v3.2.1/how-to/publish-subscribe/connect-datasource.md new file mode 100644 index 000000000..535306a51 --- /dev/null +++ b/content/versions/v3.2.1/how-to/publish-subscribe/connect-datasource.md @@ -0,0 +1,45 @@ +--- +title: 'Connect data source' +date: '2023-09-22T14:50:39-03:00' +categories: "how-to" +description: Configure a data source to publish topics for the Rhize platform. +weight: 10 +--- + +For Rhize to listen to and handle manufacturing events, +you need to connect a {{< abbr "data source" >}}. + +## Prerequisites + +To add a data source, you need the following: +- Access to an MQTT or OPC UA server +- Credentials for this server, if necessary +- The URL and connection string for this server (Rhize will point to this) + +## Steps to connect + +The process has two sides: +- Sending topics from your MQTT, OPCUA, or NATS server to Rhize. +- In Rhize, [defining the data source]({{< relref "../model/create-objects-ui" >}}) and its associated objects. + + To do this, you can create entities in the Rhize UI or through its [GraphQL API]({{< relref "../gql" >}}). + +### Model the data source in the Rhize UI + +1. Enter the Rhize UI and go to **Master Data > Data sources**. +1. Add the connection string, topics, and other necessary parameters. For details of what these fields mean, review the [Data source object reference]({{< relref "../model/master-definitions/#data-sources" >}}). +1. **Create** and then change version state to `Active`. + +Now add the data source to its equipment (or, if it doesn't exist [model new equipment]({{< relref "../model/master-definitions#equipment" >}})): +1. Select the equipment, then **Data sources**. +1. If the equipment has properties bound to this data source, create the properties, then configure them as `BOUND` to the data source. + + +Once active, Rhize reaches out to this data source and synchronizes the equipment properties to the bound topics. + +## Next steps + +Now that you have the data source sending data you can: +- Write a rule to [Turn data into events]({{< ref "create-equipment-class-rule" >}}) that trigger workflows. +- [Create a BPMN workflow]({{< relref "../bpmn" >}}) to run on this trigger. + You can also write a workflow that subscribes to data source directly through a [message start event]({{< relref "../bpmn/bpmn-elements/#message" >}}). diff --git a/content/versions/v3.2.1/how-to/publish-subscribe/create-equipment-class-rule.md b/content/versions/v3.2.1/how-to/publish-subscribe/create-equipment-class-rule.md new file mode 100644 index 000000000..fa66e956b --- /dev/null +++ b/content/versions/v3.2.1/how-to/publish-subscribe/create-equipment-class-rule.md @@ -0,0 +1,412 @@ +--- +title: >- + Tutorial: Trigger a workflow from a rule +date: "2024-04-29T11:39:29+03:00" +draft: false +categories: ["tutorial"] +description: Follow this tutorial to create a rule to run a workflow every time a data source changes. +aliases: + - "/how-to/publish-subscribe/tutorial-create-equipment-class-rule/" + - "/how-to/publish-subscribe/turn-value-into-event/" +weight: 10 +--- + +An equipment class rule [triggers a BPMN]({{< relref "../bpmn/trigger-workflows/" >}}) workflow whenever a data source publishes a value that meets a specified threshold. + +Imagine a scenario when an oven must be preheated every time a new order number is published to an MQTT edge device. 
+You could automate this workflow with a rule that listens to messages published and evaluates a condition. +If the condition evaluates to `true`, the rule triggers a {{< abbr "BPMN" >}} workflow to preheat the oven. + + +```mermaid +--- +title: Rules trigger workflows from data-source changes +--- +flowchart LR + A(Property changed?) -->|yes| B{"rule evaluates to true?"} + B -->|no| C(do nothing) + B -->|"yes (optional: pass variables)"| D(Run BPMN workflow) +``` + +The broad procedure to create a rule is as follows: +1. In the Rhize UI or through GraphQL, create models for the data source and its associated unit of measure, equipment, and equipment class. +1. In the Rhize UI, write a BPMN workflow that is triggered when this data source changes and executes some business logic. +1. In the equipment class, create a rule that triggers the workflow. + +The following sections describe how to do these steps in more detail. + + +{{% callout type="info" %}} + +This tutorial assumes a data source that exchanges messages over the MQTT protocol. + +{{% /callout %}} + +## Prerequisites + +Before you start, ensure you have the following: +- Access your Rhize customer environment +- The [Agent configured]({{< relref "../../reference/service-config/agent-configuration" >}}) to listen for your data-source ID + +## Set up: configure equipment and workflows + +The setup involves modeling the objects associated with the rule. +- Data source +- Data source topic +- Unit of measure +- BPMN +- Equipment class with bound properties + +Once you have these, you can create a rule and associate it with an actual equipment item. + +### Create a data source + +1. From the main menu, navigate to **Master Data > Data Sources**. +2. Create a new data source. The label (ID) must match the one specified in the configuration file for the `libre-agent` microservice. +3. From the **General** tab, add a draft data source version. +4. Select `MQTT` as the data-source protocol. +5. Optionally, enter a connection string, such as `mqtt://:1883`, that matches the one specified in the configuration file for the `libre-agent` microservice. +6. Save the data source version to create it. + +{{< bigFigure +width="100%" +alt="A new data source and version created in the UI." +src="/images/equipment-class-rules/screenshot-rhize-Creating_a_Data_Source_and_Version.png" +caption="A new data source and version created in the UI." +>}} + +### Create a data source topic + +1. Navigate to the **Topics** tab. +2. Add a new property (that is, a topic). +3. Select `STRING` for the property data type (this assumes an order number is a string such as `Order1`). +4. Select your preferred deduplication key. The default option, `Message Value`, is most appropriate for this scenario. +5. For label, enter the exact topic name as it appears in the data source. Use a slash to access nested topics. For this example, all new order numbers are published to `Oven/OrderNumber`. +6. Confirm by clicking the green tick icon. +7. Navigate to the **General** tab and change the version state to active. + +{{< bigFigure +width="100%" +alt="A new data source topic created in the UI." +src="/images/equipment-class-rules/screenshot-rhize-Creating_a_Data_Source_Topic.png" +caption="A new data source topic created in the UI." +>}} + +### Create a unit of measure + +1. From the Main Menu, navigate to **Master Data > Units of Measure**. +2. Add a new unit of measure. +3. Enter `Order Number` for the unit name. +4. Select `STRING` for the data type. 
+ +{{< bigFigure +width="100%" +alt="A new unit of measure created in the UI." +src="/images/equipment-class-rules/screenshot-rhize-Creating_a_Unit_of_Measure.png" +caption="A new unit of measure created in the UI." +>}} + +### Creating a BPMN workflow + +A rule must trigger a BPMN workflow. +Before setting up a rule, create its workflow. +For this example, this 3-node BPMN is enough: + + +1. Navigate to **Workflows > Process List**. +2. Import the BPMN. +3. Save it. +4. Set the version as active. + +{{< bigFigure +width="100%" +alt="A BPMN created in the UI." +src="/images/equipment-class-rules/screenshot-rhize-Creating_a_BPMN.png" +caption="A BPMN created in the UI." +>}} + +The BPMN has a `Libre Jsonata Transform` task that contains an expression `"Preheating oven for order number " & $.orderNumber"` . +The rule engine triggers this BPMN with a payload that includes the order number value, as follows: + +```json +{ + "orderNumber": "Order1" +} +``` + +### Create an equipment class with bound properties + +#### Equipment class and version + +1. Navigate to **Master Data > Equipment Class**. +2. Create a new equipment class from the sidebar. The label might be `Pizza Line`, for example. +3. From the **General** tab, **Create** a new Draft version. + +{{< bigFigure +width="100%" +alt="A new equipment class and version created in the UI." +src="/images/equipment-class-rules/screenshot-rhize-Creating_an_Equipment_Class_and_Version.png" +caption="A new equipment class and version created in the UI." +>}} + +#### Equipment class property + +1. From the properties tab, create a new property. +1. For type, select `BOUND`. +1. For name, enter `orderNumber`. +1. For UoM, select the unit of measure created earlier (`Order Number`). +1. Confirm by clicking the green tick icon. + +{{< bigFigure +width="100%" +alt="A new equipment class property created in the UI." +src="/images/equipment-class-rules/screenshot-rhize-Creating_an_Equipment_Class_Property.png" +caption="A new equipment class property created in the UI." +>}} + +## Create a rule + +### Add a rule to an existing equipment class + +1. From the Rules tab of an equipment class version, create a new rule. +1. Enter `Run BPMN on Order Number` for the name and confirm. +1. Select `rules_example_bpmn` for the workflow specification. +1. Add `orderNumber` as a trigger property. +1. Add a trigger expression that evaluates to true or false. + +{{% callout type="info" %}} + +The rule runs the preceding workflow only if the expression evaluates to `true`. +It’s common to compare the new value with the previous. + +{{% /callout %}} + +In this case, we can compare the new order number to the previous by adding `OrderNumber.current.value != OrderNumber.previous.value`. +Note that the root of the object path must match the ID of the equipment class property we set up earlier and all evaluations are case-sensitive. 
+ +The entire information that becomes available to the rule engine looks like this: + +{{% tabs items="JSON input,Expression,Output" %}} +{{% tab "JSON input" %}} + +```javascript +{ + orderNumber: { + current: { + bindingType: "BOUND", + description: "bound prop", + equipmentClassProperty: { + id: "EQCLASS1.10.orderNumber", + iid: "0x2a78", + label: "orderNumber" + }, + equipmentVersion: { + equipment: { + id: "EQ1", + iid: "0x1b", + label: "EQ1" + }, + id: "EQ1", + iid: "0x22", + version: "2" + }, + id: "EQCLASS1.10.orderNumber", + label: "orderNumber", + messageKey: "ns=3;i=1003.1695170450000000000", + propertyType: "DefaultType", + serverPicoseconds: 0, + serverTimestamp: "2023-09-20T00:40:50.028Z", + sourcePicoseconds: 0, + sourceTimestamp: "2023-09-20T00:40:50Z", + value: "Order2", + valueUnitOfMeasure: { + dataType: "FLOAT", + id: "FLOAT", + iid: "0x28" + } + }, + previous: { + bindingType: "BOUND", + description: "bound prop", + equipmentClassProperty: { + id: "EQCLASS1.10.orderNumber", + iid: "0x2a78", + label: "orderNumber" + }, + equipmentVersion: { + equipment: { + id: "EQ1", + iid: "0x1b", + label: "EQ1" + }, + id: "EQ1", + iid: "0x22", + version: "2" + }, + id: "EQCLASS1.10.orderNumber", + label: "orderNumber", + messageKey: "ns=3;i=1003.1695170440000000000", + propertyType: "DefaultType", + serverPicoseconds: 0, + serverTimestamp: "2023-09-20T00:40:40.003Z", + sourcePicoseconds: 0, + sourceTimestamp: "2023-09-20T00:40:40Z", + value: "Order1", + valueUnitOfMeasure: { + dataType: "FLOAT", + id: "FLOAT", + iid: "0x28" + } + } + } +} +``` +{{% /tab %}} +{{% tab "Expression" %}} +```javascript +`OrderNumber.current.value != OrderNumber.previous.value` +``` +{{% /tab %}} + +{{% tab "output" %}} +``` +True +``` + +The expression evaluates to `false`, because the `current` and `previous` values differ. + +{{% /tab %}} +{{% /tabs %}} + +Optionally, pass information to the BPMN by adding a payload message. The message is an object with multiple keys. +1. Enter `orderNumber` for the field name. +1. Enter `orderNumber.current.value` for the JSON expression. +1. Confirm by clicking the green tick icon. +1. **Create**. +1. From the **General** tab, change the equipment class version state to active. + +{{< bigFigure +width="100%" +alt="Creating an equipment class rule in the UI." +src="/images/equipment-class-rules/screenshot-rhize-Creating_a_Rule.png" +caption="Creating an equipment class rule in the UI." +>}} + +### Associate an equipment with a bound property + +The final steps to setting up a rule are to: + +1. Create a new equipment version. +2. Link it to a data source. +3. Set up bound properties. + +#### Create an equipment and version + +1. From the Main Menu, navigate to **Master Data > Equipment**. +2. Select a piece of equipment. If none, create one called `Line 1`. +3. From the **General** tab, **Create**. +4. Link the version to the equipment class you created earlier (`Pizza Line`). +5. Save the version to create it. + +{{< bigFigure +width="100%" +alt="A new equipment class and version created in the UI." +src="/images/equipment-class-rules/screenshot-rhize-Creating_an_Equipment_and_Version.png" +caption="A new equipment class and version created in the UI." +>}} + +#### Link a data source + +1. From the **Data Sources** tab, link the equipment version to the data source you created in the previous section. + +{{< bigFigure +width="100%" +alt="An equipment linked to a data source in the UI." 
+src="/images/equipment-class-rules/screenshot-rhize-Link_Equipment_to_Data_Source.png" +caption="An equipment linked to a data source in the UI." +>}} + +#### Set up the bound property + +1. From the **Properties** tab, find a property that you want this equipment to inherit and select the binding icon. +2. If you chose the property `orderNumber`, add the topic `Oven/OrderNumber` you added previously. + +{{< bigFigure +width="100%" +alt="An equipment property bound to a data source topic in the UI." +src="/images/equipment-class-rules/screenshot-rhize-Binding_an_Equipment_Property_to_a_Topic.png" +caption="An equipment property bound to a data source topic in the UI." +>}} + +## Test the binding and the rule + +Send a message to test that the value of the property `orderNumber` of the equipment `Line 1` is bound to the topic `Oven/OrderNumber`. + +### Test using an MQTT client + +For example, using MQTT Explorer: + +1. Open MQTT Explorer and connect to the broker. + +The microservice Libre Agent (`libre-agent`) should immediately publish a message to indicate the data source topic `Oven/OrderNumber` has been set up successfully. + +{{< bigFigure +width="65%" +alt="The Libre Agent has connected to the data source." +src="/images/equipment-class-rules/screenshot-rhize-Libre_Agent_has_Connected_to_the_Data_Source.png" +caption="The Libre Agent has connected to the data source." +>}} + +2. Publish the string `Order1` to the topic `Oven/OrderNumber`. + +{{< bigFigure +width="65%" +alt="A new order number was published to the data source." +src="/images/equipment-class-rules/screenshot-rhize-Publish_Order_Number_to_NATS.png" +caption="A new order number was published to the data source." +>}} + + +If the message has been received, +a new topic, `Oven`, appears with its subtopic `OrderNumber`. + +If there is an equipment property bound to this topic, +a topic called `MQTT//ValueChanged` also appears. +In addition, the published value should show in the column `Expression` of the equipment property `orderNumber`. + +{{< bigFigure +width="100%" +alt="The bound property assumes the last value published to the data source." +src="/images/equipment-class-rules/screenshot-rhize-New_orderNumber_in_the_Admin_UI.png" +caption="The bound property assumes the last value published to the data source." +>}} + +{{% callout type="info" %}} + +If this is the first message published to the topic, the rule will not be triggered because Rhize has no previous value to compare it the message value to. However, if you publish another order number, a new topic called `Core` will show up containing a subtopic called `RuleTriggered` to indicate that the rule has indeed been triggered. + +{{% /callout %}} + +{{< bigFigure +width="65%" +alt="The rule engine has published a message to indicate that the equipment class rule has indeed been triggered." +src="/images/equipment-class-rules/screenshot-rhize-Rule_Triggered_in_broker.png" +caption="The rule engine has published a message to indicate that the equipment class rule has indeed been triggered." +>}} + +### Confirm in execution in Tempo + +To confirm the intended BPMN was executed, navigate to Grafana (Tempo) and look for a trace containing the expected BPMN ID. + +{{< bigFigure +width="100%" +alt="Grafana shows a recent trace with the id of the target BPMN." +src="/images/equipment-class-rules/screenshot-rhize-Executed_BPMNs_in_Grafana.png" +caption="Grafana shows a recent trace with the id of the target BPMN." 
+>}} + + + +## Video example + +- :movie_camera: [Trigger BPMN]( https://www.youtube.com/watch?v=y5lr9JRmxDA). This video provides an example of creating a rule based on values for an OPC UA server in a baking process. diff --git a/content/versions/v3.2.1/how-to/publish-subscribe/screenshot-rhize-rules-engine.png b/content/versions/v3.2.1/how-to/publish-subscribe/screenshot-rhize-rules-engine.png new file mode 100644 index 000000000..6122830df Binary files /dev/null and b/content/versions/v3.2.1/how-to/publish-subscribe/screenshot-rhize-rules-engine.png differ diff --git a/content/versions/v3.2.1/how-to/publish-subscribe/track-changes.md b/content/versions/v3.2.1/how-to/publish-subscribe/track-changes.md new file mode 100644 index 000000000..258746d83 --- /dev/null +++ b/content/versions/v3.2.1/how-to/publish-subscribe/track-changes.md @@ -0,0 +1,135 @@ +--- +title: Track changes (CDC) +description: Streaming data in and out of RHIZE +--- + + +You can use _change data capture_ (CDC) to track data changes over time, including +a {{< abbr "mutation" >}} or drop in your database. +Rhize's CDC implementation can use +Kafka, NATS, or a local file as a *{{< abbr "sink" >}}* to store CDC updates streamed by Rhize's Alpha +leader nodes. + +When CDC is enabled, Rhize streams events for: +- All `set` and `delete` mutations, except those that affect password fields +- Drop events. + +Live Loader events are recorded by CDC, but Bulk Loader events aren't. + +CDC events are based on changes to Raft logs. So, if the sink is unreachable +by the Alpha leader node, then Raft logs expand as events are collected on +that node until the sink is available again. + +You should enable CDC on all Rhize +Alpha nodes to avoid interruptions in the stream of CDC events. + +## Enable CDC with Kafka sink + +Kafka records CDC events under the `libre-cdc` topic. The topic must be created before events +are sent to the broker. To enable CDC and sink events to Kafka, start Dgraph Alpha with the `--cdc` +command and the sub-options shown below, as follows: + +```bash +dgraph alpha --cdc "kafka=kafka-hostname:port; sasl-user=tstark; sasl-password=m3Ta11ic" +``` + +If you use Kafka on the localhost without SASL authentication, you can just +specify the hostname and port used by Kafka, as follows: + +```bash +dgraph alpha --cdc "localhost:9092" +``` + +If the Kafka cluster to which you are connecting requires TLS, the `ca-cert` option is required. +Note that this certificate can be self-signed. 
+ +## Enable CDC with file sink + +To enable CDC and sink results to a local unencrypted file, start Dgraph Alpha +with the `--cdc` command and the sub-option shown below, as follows: + +```bash +dgraph alpha --cdc "file=local-file-path" +``` + +## Enable CDC with NATS JetStream KV store sink + +To enable CDC and sink results to a NATS JetStream KV store, start Dgraph Alpha +with the `--cdc` command and the sub-option shown below, as follows: + +```bash +dgraph alpha --cdc "nats=nats://system:system@localhost:4222" +``` + + +## CDC command reference + +The `--cdc` option includes several sub-options that you can use to configure +CDC when running the `dgraph alpha` command: + + + +| Sub-option | Example `dgraph alpha` command option | Notes | +|------------------|--------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------| +| `ca-cert` | `--cdc "ca-cert=/cert-dir/ca.crt"` | Path and filename of the CA root certificate used for TLS encryption, required if Kafka endpoint requires TLS | +| `client-cert` | `--cdc "client-cert=/c-certs/client.crt"` | Path and filename of the client certificate used for TLS encryption | +| `client-key` | `--cdc "client-cert=/c-certs/client.key"` | Path and filename of the client certificate private key | +| `file` | `--cdc "file=/sink-dir/cdc-file"` | Path and filename of a local file sink (alternative to Kafka sink) | +| `nats` | `--cdc "nats=nats://user:password@localhost:4222"` | URL connection string to nats sink (alternative to Kafka sink) | +| `kafka` | `--cdc "kafka=kafka-hostname; sasl-user=tstark; sasl-password=m3Ta11ic"` | Hostname(s) of the Kafka hosts. May require authentication using the `sasl-user` and `sasl-password` sub-options | +| `sasl-user` | `--cdc "kafka=kafka-hostname; sasl-user=tstark; sasl-password=m3Ta11ic"` | SASL username for Kafka. Requires the `kafka` and `sasl-password` sub-options | +| `sasl-password` | `--cdc "kafka=kafka-hostname; sasl-user=tstark; sasl-password=m3Ta11ic"` | SASL password for Kafka. Requires the `kafka` and `sasl-username` sub-options | +| `sasl-mechanism` | `--cdc "kafka=kafka-hostname; sasl-mechanism=PLAIN"` | The SASL mechanism for Kafka (PLAIN, SCRAM-SHA-256 or SCRAM-SHA-512). The default is PLAIN | + + + +## CDC data format + + +CDC events are in JSON format. Most CDC events look like the following example: + +```json +{ "key": "0", "value": {"meta":{"commit_ts":5},"type":"mutation","event":{"operation":"set","uid":2,"attr":"counter.val","value":1,"value_type":"int"}}} +``` + +The `Meta.Commit_Ts` value (shown above as `"meta":{"commit_ts":5}`) will increase +with each CDC event, so you can use this value to find duplicate events if those +occur due to Raft leadership changes in your Dgraph Alpha group. 
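+
+As a rough illustration, if events are written to a local file sink (one JSON event per line), duplicates can be filtered out afterwards with a tool such as `jq`, keeping one event per `commit_ts`. This is a hedged sketch: it assumes each line is a JSON object with a top-level `meta` field, as in the mutation examples below.
+
+```bash
+# Sketch: drop duplicate CDC events from a file sink, one event per commit_ts.
+# Assumes one JSON object per line with a top-level "meta" field.
+jq -s -c 'unique_by(.meta.commit_ts) | .[]' cdc-file > cdc-deduped.jsonl
+```
+
+In a streaming consumer, you would instead track the last `commit_ts` you processed and skip any event at or below it.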
+ +### Mutation event examples + +A set mutation event updating `counter.val` to 10 would look like the following: + +```json +{"meta":{"commit_ts":29},"type":"mutation","event":{"operation":"set","uid":3,"attr":"counter.val","value":10,"value_type":"int"}} +``` + +Similarly, a delete mutation event that removes all values for the `Author.name` +field for a specified node would look like the following: + +```json +{"meta":{"commit_ts":44},"type":"mutation","event":{"operation":"del","uid":7,"attr":"Author.name","value":"_STAR_ALL","value_type":"default"}} +``` + +### Drop event examples + +CDC drop events look like the following example event for "drop all": + +```json +{"meta":{"commit_ts":13},"type":"drop","event":{"operation":"all"}} +``` + +The `operation` field specifies which drop operation (`attribute`, `type`, +specified `data`, or `all` data) is tracked by the CDC event. + +## Known limitations + +CDC has the following known limitations: + +* CDC events do not track old values that are updated or removed by mutation or + drop operations; only new values are tracked +* CDC does not currently track schema updates +* You can only configure or enable CDC when starting Alpha nodes using the + `dgraph alpha` command +* If a node crashes or the leadership of a Raft group changes, CDC might have + duplicate events, but no data loss diff --git a/content/versions/v3.2.1/how-to/work-calendars/_index.md b/content/versions/v3.2.1/how-to/work-calendars/_index.md new file mode 100644 index 000000000..eb46f8297 --- /dev/null +++ b/content/versions/v3.2.1/how-to/work-calendars/_index.md @@ -0,0 +1,25 @@ +--- +title: 'Use work calendars' +date: '2023-09-22T14:50:39-03:00' +draft: false +categories: "how-to" +description: How to configure work calendars to account for planned and unplanned downtime in your operation. +weight: 500 +cascade: + icon: calendar +--- + +Work calendars represent planned periods of time in your operation, +including shifts, planned shutdowns, or recurring stops for maintenance. +The Rhize API represents calendars through the `workCalendar` entity, +which has close associations with the {{< abbr "equipment" >}} and {{< abbr "hierarchy scope" >}} models. + +Rhize also has a `calendar` service that periodically queries the Rhize DB for workCalendarDefinitions. +If it finds active definitions for that period, the service creates work calendar entries and persists the data to a time-series database. + +{{< callout type="info" >}} +Rhize's implementation of work calendars was inspired by ISO/TR +22400-10, a standard on KPIs in operations management. +{{< /callout >}} + + diff --git a/content/versions/v3.2.1/how-to/work-calendars/about-calendars-and-overrides.md b/content/versions/v3.2.1/how-to/work-calendars/about-calendars-and-overrides.md new file mode 100644 index 000000000..f942b50b7 --- /dev/null +++ b/content/versions/v3.2.1/how-to/work-calendars/about-calendars-and-overrides.md @@ -0,0 +1,111 @@ +--- +title: About calendars and overrides +description: >- + An explanation of how the Rhize calendar service works, and how it handles planned shutdowns across hierarchies. +weight: 200 +--- + +Work calendars represent planned periods of time in your operation, +including shifts, planned shutdowns, or recurring stops for maintenance. +The Rhize API represents calendars through a `workCalendar` entity and this calendar's associated definitions and entries. +They provide helpful abstractions for activities such as scheduling and performance analysis. 
+ +Rhize has an optional `calendar` service that periodically queries the Rhize DB for `workCalendarDefinitions`. +If it finds active definitions and equipment for that period, the service creates work calendar entries and persists the data to a time-series database. +This topic explains how that calendar service works. + +{{< callout type="info" >}} +Rhize's implementation of work calendars was inspired by ISO/TR +22400-10, a standard on KPIs in operations management. +{{< /callout >}} + +## What the service does + +{{< bigFigure +src="/images/work-calendars/diagram-rhize-calendar-service-swimlane.png" +alt="A simplified view of how the calendar service coordinates and exchanges data with the Rhize DB and a time-series DB" +caption="A simplified view of how the calendar service coordinates and exchanges data with the Rhize DB and a time-series DB" +width="80%" +>}} + + +The calendar service queries all active work calendar definitions at an interval designated in your [service configuration]({{< relref "../../reference/service-config" >}}). +The service then checks for any active `workCalendarDefinitionEntry` items that start or end within that interval. +If any exist, Rhize creates a `workCalendarEntry` with the start and end time. + +The service then traverses all the calendar `hierarchyScope` entities (designated by the prefix `WorkCalendar`) and their `equipmentHierarchy` properties. +Rhize checks each equipment item for any `workCalendarEntries`. +If the scope includes active equipment, Rhize persists the entry to a time-series database. + +### The relationship between hierarchy scope, equipment, and calendars + +```mermaid +flowchart TD +hs(Hierarchy scope) -->|provides scope for| wc(Work calendar) +e(Equipment) -->|physical structure maps to| hs +hs --> |calendar structure maps to| e +wc -->|create calendar entries for active|e +``` + +As the behavior in the previous section describes, the work calendar service coordinates data between +three entities in your knowledge graph: {{< abbr "hierarchy scope" >}}, {{< abbr "equipment" >}}, and {{< abbr "work calendar" >}}. +These entities work together to configure work calendars and automate changes of equipment state. + +- **Equipment** provides the physical hierarchy of the plant's equipment, at levels that can be as small as a `workUnit` or as broad as the entire enterprise. +- **Hierarchy scope** creates a calendar hierarchy that the equipment hierarchy maps to. +- **Work calendars** and their associated definitions and entries have a hierarchy scope property, which Rhize uses to determine what the equipment state is. + + +The Rhize service uses the hierarchy scope to establish calendar precedence. +Then, it uses scope's associated calendar states to automatically set the state of the equipment for each hierarchy. +So, when you [create a calendar]({{< relref "create-work-calendar" >}}), ensure that you configure these three objects. + + + +### Calendar states + +The Rhize database and service has three calendar types: + +- `PlannedDowntime` +- `PlannedShutdown` +- `None`, for events that are not considered in OEE calculations. + +## Calendar precedence + +You can use calendar entries to set different calendar states at different levels of a hierarchy. +It is also possible for multiple shutdown periods to overlap in the same scope. +If an equipment belongs to multiple scopes, the service needs a way to handle this ambiguity. + +To prevent conflicts in these situations, Rhize has logic to determine _calendar precedence_. 
+ +### The lowest hierarchy scope has precedence + +The lowest level of the hierarchy scope defines the calendar state for the equipment in this hierarchy. +For example, imagine two scopes: +- `Scope A` corresponds to an equipment line. +- `Scope B`, the child of `Scope A`, corresponds to equipment items in the line. + +If `Scope A` has planned downtime and `Scope B` does not. Then all the equipment in `Scope B` takes state defined by its associated work calendar entries. As `Scope B` is at a lower level, it has precedence. + + + +### The first start time, the last end time + + +It might occur that multiple active work calendars overlap with the same state. +For example, consider three scopes at the same hierarchy level. +- `Scope A` has a planned downtime starting at 00:00 and ending at 12:00, +- `Scope A2` has a planned downtime that starts at 01:00 and ends at 13:00. +- `Scope A3` has a planned busy time that starts at 05:00 and ends at 06:00. + +If an equipment item belonged to all these scopes, Rhize would calculate its planned downtime as being from 00:00 to 13:00. +The planned busy time is locked out, since another active entry type has already taken effect. +For a technical overview of how this locking and unlocking of states works, read about the [Semaphore pattern](https://en.wikipedia.org/wiki/Semaphore_(programming)) in computer science. + diff --git a/content/versions/v3.2.1/how-to/work-calendars/create-work-calendar.md b/content/versions/v3.2.1/how-to/work-calendars/create-work-calendar.md new file mode 100644 index 000000000..e183b1d84 --- /dev/null +++ b/content/versions/v3.2.1/how-to/work-calendars/create-work-calendar.md @@ -0,0 +1,491 @@ +--- +title: Create work calendars +description: >- + A guide to creating work calendars. Control, configure, and calculate planned downtime for your manufacturing equipment. +weight: 200 +--- + +This guide shows you how to create a work calendar using the Rhize GraphQL API. +As a calendar has associations with multiple other entities, +he process involves a series of [mutations]({{< relref "../gql/mutate" >}}) to create +associated data. + +To learn how work calendars work, +read [About work calendars]({{< ref "about-calendars-and-overrides" >}}). + +## Prerequisites + + +To use the work calendar service, ensure you have the following: +- The [calendar service installed]({{< relref "../../deploy/install/services" >}}) +- A plan for how to organize and name your calendars according to equipment. + +## Procedure + +In short, the procedure works as follows: + +1. Add equipment that follows some hierarchy. +1. Add hierarchy scopes for the calendar rules. These scopes should map to the equipment hierarchy. +1. Add work calendar definitions. + +You can add these objects in the UI or through the GraphQL API. +The following sections provide the requirements for each of these entities and examples of a mutation to create them. + + +{{< bigFigure +src="/images/work-calendars/diagram-rhize-work-calendar-relationships.png" +caption="The calendar service uses the relationships between equipment, hierarchy scope, and work calendars." +alt="Diagram of relationship between three configurations" +width="50%" +>}} + +### Add equipment + +The first step is to add an equipment hierarchy. + +**Requirements.** +- For an equipment calendar state to be recorded, it must have an active version. + +**Example** + +This mutation adds multiple items of equipment in a batch. 
+Note that some items, such as `Equipment A`, `Equipment B`, and `Equipment C`, +have links to child equipment, as expressed in the `isMadeUpOf` property. +These relationships express the equipment hierarchy. + +{{< details title="mutation addEquipment" closed="true" >}} +```gql +mutation AddEquipment($input: [AddEquipmentInput!]!, $upsert: Boolean) { + addEquipment(input: $input, upsert: $upsert) { + numUids + } +} +{ + "input": [ + { + "id": "Equipment A", + "nextVersion": "2", + "label": "Equipment A", + "activeVersion": { + "id": "Equipment A", + "version": "1", + "versionStatus": "ACTIVE", + "equipment": { + "id": "Equipment A" + } + }, + "isMadeUpOf": [ + {"id": "Equipment B"}, + {"id": "Equipment D"} + ] + }, + { + "id": "Equipment B", + "nextVersion": "2", + "label": "Equipment B", + "activeVersion": { + "id": "Equipment B", + "version": "1", + "versionStatus": "ACTIVE", + "equipment": { + "id": "Equipment B" + } + }, + "isMadeUpOf": [ + {"id": "Equipment C"} + + ] + }, + { + "id": "Equipment C", + "nextVersion": "2", + "label": "Equipment C", + "activeVersion": { + "id": "Equipment C", + "version": "1", + "versionStatus": "ACTIVE", + "equipment": { + "id": "Equipment C" + } + }, + "isMadeUpOf": [ + {"id": "Equipment Ca"}, + {"id": "Equipment Cb"} + ] + }, + { + "id": "Equipment Ca", + "nextVersion": "2", + "label": "Equipment Ca", + "activeVersion": { + "id": "Equipment Ca", + "version": "1", + "versionStatus": "ACTIVE", + "equipment": { + "id": "Equipment Ca" + } + } + }, + { + "id": "Equipment Cb", + "nextVersion": "2", + "label": "Equipment Cb", + "activeVersion": { + "id": "Equipment Cb", + "version": "1", + "versionStatus": "ACTIVE", + "equipment": { + "id": "Equipment Cb" + } + } + }, + { + "id": "Equipment D", + "nextVersion": "2", + "label": "Equipment D", + "activeVersion": { + "id": "Equipment D", + "version": "1", + "versionStatus": "ACTIVE", + "equipment": { + "id": "Equipment D" + } + } + } + ], + "upsert": true +} +``` +{{< /details >}} + + +### Add Hierarchy scope + +The hierarchy scope establishes the calendar hierarchy that the Rhize calendar service uses to establish [calendar precedence]({{< relref "about-calendars-and-overrides" >}}). + + +**Requirements.** +A calendar hierarchy scope must have the following properties. +- A time zone. +- An ID that starts with the prefix `WorkCalendar_` + +To associate equipment with the hierarchy scope, add the equipment items to the `equipmentHierarchy`. +To create levels of calendar scope, add `children`, each of which can link to equipment. + + +**Example** + + +This example adds a work-calendar hierarchy, `WorkCalendar_PSDT`, with associated children scopes. +The scope and its children link to equipment created in the previous step through `equipmentHierarchy`. + +{{< details title=" mutation addHierarchyScope" >}} + + +```gql +mutation AddHierarchyScope($input: [AddHierarchyScopeInput!]!) 
{ + addHierarchyScope(input: $input) { + numUids + } +} +{ + "input": [ + { + "effectiveStart": "2024-05-29T00:00:00Z", + "id": "WorkCalendar_PSDT.Scope A", + "label": "WorkCalendar_PSDT.Scope A", + "timeZoneName": "Europe/London", + "equipmentHierarchy": { + "id": "Equipment A", + "version": "1" + }, + "children": [ + { + "effectiveStart": "2024-05-29T00:00:00Z", + "id": "WorkCalendar_PSDT.Scope A.Scope B", + "label": "Scope B", + "timeZoneName": "Europe/London", + "equipmentHierarchy": { + "id": "Equipment B", + "version": "1" + }, + "children": [ + { + "effectiveStart": "2024-05-29T00:00:00Z", + "id": "WorkCalendar_PSDT.Scope A.Scope B.Scope C", + "label": "Scope C", + "timeZoneName": "Europe/London", + "equipmentHierarchy": { + "id": "Equipment C", + "version": "1" + }, + } + ] + }, + { + "effectiveStart": "2024-05-29T00:00:00Z", + "id": "WorkCalendar_PSDT.Scope A.Scope D", + "label": "Scope D", + "timeZoneName": "Europe/London", + "equipmentHierarchy": { + "id": "Equipment D", + "version": "1" + }, + } + ] + }, + { + "effectiveStart": "2024-05-29T00:00:00Z", + "id": "WorkCalendar_PDOT.Scope A", + "label": "WorkCalendar_PDOT.Scope A", + "timeZoneName": "Europe/London", + "equipmentHierarchy": { + "id": "Equipment A", + "version": "1" + }, + "children": [ + { + "effectiveStart": "2024-05-29T00:00:00Z", + "id": "WorkCalendar_PDOT.Scope A.Scope B", + "label": "Scope B", + "timeZoneName": "Europe/London", + "equipmentHierarchy": { + "id": "Equipment B", + "version": "1" + }, + "children": [ + { + "effectiveStart": "2024-05-29T00:00:00Z", + "id": "WorkCalendar_PDOT.Scope A.Scope B.Scope C", + "label": "Scope C", + "timeZoneName": "Europe/London", + "equipmentHierarchy": { + "id": "Equipment C", + "version": "1" + }, + } + ] + }, + { + "effectiveStart": "2024-05-29T00:00:00Z", + "id": "WorkCalendar_PDOT.Scope A.Scope D", + "label": "Scope D", + "timeZoneName": "Europe/London", + "equipmentHierarchy": { + "id": "Equipment D", + "version": "1" + }, + } + ] + } + ] +} +``` + +{{< /details >}} + +### Create work calendar definition + +After you have created equipment and hierarchy scopes, create a `workCalendarDefinition`. +The calendar service reads the entries to create records of machine states. + +**Requirements:** +The work calendar definition must have the following: +- An associated work calendar +- A label. Note that Rhize **uses the to label to configure overrides**. +- At least one entry that has at least these properties: + - Start date + - type (one of: `PlannedDowntime`, `PlannedShutdown`, and `PlannedBusyTime`). + - A recurrence time interval in the representation defined by the [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) standard + +You can optionally add `properties` to each entry to add additional context and information. + +**Naming conventions** +- When you name the ID, the recommended convention is `{CalendarDefinition.Label}.{HierarchyScope.id}`. This convention helps readers quickly understand which scopes and equipment its entries affect. + + +**Example** + +This `addWorkCalendarDefinition` mutation adds entries for planned downtime and shutdown time. +Note that the calendar definitions link to a hierarchy scope defined in the previous step. 
+ +{{< details title="mutation addWorkCalendarDefinition" closed="true" >}} +```gql +mutation AddWorkCalendarDefinition($input: [AddWorkCalendarDefinitionInput!]!, $upsert: Boolean) { + addWorkCalendarDefinition(input: $input, upsert: $upsert) { + numUids + } +} +{ + "input": [ + { + "id": "PDOT C.Scope C", + "label": "PDOT C", + "workCalendars": [ + { + "id": "PDOT C.Scope C", + "label": "PDOT C.Scope C" + } + ], + "hierarchyScope": { + "id": "WorkCalendar_PDOT.Scope A.Scope B.Scope C" + }, + "entries": [ + { + "id": "PDOT C.Scope C.1", + "label": "PDOT C.Scope C.1", + "durationRule": "PT15M", + "startRule": "2024-05-29T09:30:00Z", + "recurrentTimeIntervalRule": "R/P1D", + "properties": [ + { + "id": "PDOT C.Scope C.1.PropA", + "label": "Prop A", + "value": "1" + }, + { + "id": "PDOT C.Scope C.1.PropB", + "label": "Prop B", + "value": "2" + } + ], + "entryType": "PlannedDowntime" + }, + { + "id": "PDOT C.Scope C.2", + "label": "PDOT C.Scope C.2", + "durationRule": "PT1H5M", + "startRule": "2024-05-29T08:45:00Z", + "recurrentTimeIntervalRule": "R/P1D", + "properties": [ + { + "id": "PDOT C.Scope C.2.PropA", + "label": "Prop A", + "value": "3" + }, + { + "id": "PDOT C.Scope C.2.PropB", + "label": "Prop B", + "value": "4" + } + ], + "entryType": "PlannedDowntime" + } + ] + }, + { + "id": "PSDT D.Scope D", + "label": "PSDT D", + "workCalendars": [ + { + "id": "PSDT D.Scope D", + "label": "PSDT D.Scope D" + } + ], + "hierarchyScope": { + "id": "WorkCalendar_PSDT.Scope A.Scope D" + }, + "entries": [ + { + "id": "PSDT D.Scope D.1", + "label": "PSDT D.Scope D.1", + "durationRule": "PT1H", + "startRule": "2024-05-29T13:00:00Z", + "recurrentTimeIntervalRule": "R/P1D", + "properties": [ + { + "id": "PSDT D.Scope D.1.PropA", + "label": "Prop A", + "value": "1" + }, + { + "id": "PSDT D.Scope D.1.PropB", + "label": "Prop B", + "value": "2" + } + ], + "entryType": "PlannedShutdown" + }, + { + "id": "PSDT D.Scope D.2", + "label": "PSDT D.Scope D.2", + "durationRule": "PT2H", + "startRule": "2024-05-29T12:00:00Z", + "recurrentTimeIntervalRule": "R/P1D", + "properties": [ + { + "id": "PSDT D.Scope D.2.PropA", + "label": "Prop A", + "value": "3" + }, + { + "id": "PSDT D.Scope D.2.PropB", + "label": "Prop B", + "value": "4" + } + ], + "entryType": "PlannedShutdown" + } + ] + }, + { + "id": "PDOT D.Scope D", + "label": "PDOT D", + "workCalendars": [ + { + "id": "PDOT D.Scope D", + "label": "PDOT D.Scope D" + } + ], + "hierarchyScope": { + "id": "WorkCalendar_PDOT.Scope A.Scope D" + }, + "entries": [ + { + "id": "PDOT D.Scope D.1", + "label": "PDOT D.Scope D.1", + "durationRule": "PT3H", + "startRule": "2024-05-29T18:00:00Z", + "recurrentTimeIntervalRule": "R/P1D", + "properties": [ + { + "id": "PDOT D.Scope D.1.PropA", + "label": "Prop A", + "value": "1" + }, + { + "id": "PDOT D.Scope D.1.PropB", + "label": "Prop B", + "value": "2" + } + ], + "entryType": "PlannedDowntime" + }, + { + "id": "PDOT D.Scope D.2", + "label": "PDOT D.Scope D.2", + "durationRule": "PT2H", + "startRule": "2024-05-29T21:00:00Z", + "recurrentTimeIntervalRule": "R/P1D", + "properties": [ + { + "id": "PDOT D.Scope D.2.PropA", + "label": "Prop A", + "value": "3" + }, + { + "id": "PDOT D.Scope D.2.PropB", + "label": "Prop B", + "value": "4" + } + ], + "entryType": "PlannedDowntime" + } + ] + } + ], + "upsert": true +} +``` +{{< /details >}} + diff --git a/content/versions/v3.2.1/reference/_index.md b/content/versions/v3.2.1/reference/_index.md new file mode 100644 index 000000000..a709f6eb9 --- /dev/null +++ 
b/content/versions/v3.2.1/reference/_index.md @@ -0,0 +1,13 @@ +--- +title: Reference +description: A collection of pages to look up values for schemas, definitions, and anything else related to using Rhize. +weight: 400 +identifier: reference +cascade: + icon: table + +--- + +A collection of pages to look up values for schemas, definitions, and anything else related to using Rhize. + +{{< card-list >}} diff --git a/content/versions/v3.2.1/reference/default-ports.md b/content/versions/v3.2.1/reference/default-ports.md new file mode 100644 index 000000000..12795d3bf --- /dev/null +++ b/content/versions/v3.2.1/reference/default-ports.md @@ -0,0 +1,24 @@ +--- +title: 'Default URLs and local ports' +date: '2023-11-02T16:49:42-03:00' +draft: false +categories: ["reference"] +description: "A list of the default ports for the various Rhize services" +weight: 900 +--- + +After you [install Rhize services](/deploy/install/services), they are accessible, by default, on the following ports: + +| Service | Default Port | +|---------------------------|------------------------------------| +| Admin UI | [`localhost:3030`](http://localhost:3030) | +| Grafana | [`localhost:3001`](http://localhost:3001) | +| Router | [`localhost:4000`](http://localhost:4000) | +| Keycloak | [`localhost:8090`](http://localhost:8090) | +| `baas-alpha` command line | [`localhost:8080`](http://localhost:8080) | + +## URLs + +When you create DNS records, Rhize recommends the following URLs: + +{{< reusable/default-urls >}} diff --git a/content/versions/v3.2.1/reference/glossary.md b/content/versions/v3.2.1/reference/glossary.md new file mode 100644 index 000000000..1ed67b49c --- /dev/null +++ b/content/versions/v3.2.1/reference/glossary.md @@ -0,0 +1,16 @@ +--- +date: "2023-09-12T19:35:35+11:00" +title: Glossary +description: A list of terms relevant to Rhize, or that are frequently used in manufacturing contexts. +categories: ["reference"] +weight: 1500 +icon: dictionary +--- + +The manufacturing industry has many specialized terms—and many abbreviations. +This glossary is a reference of how Rhize defines terms used in this documentation. + +{{% glossary %}} + + + diff --git a/content/versions/v3.2.1/reference/gql-types.md b/content/versions/v3.2.1/reference/gql-types.md new file mode 100644 index 000000000..739890ecb --- /dev/null +++ b/content/versions/v3.2.1/reference/gql-types.md @@ -0,0 +1,125 @@ +--- +title: GraphQL types and filters + +description: >- + A reference of the data types in the Rhize API and of the filters available for each type. +categories: ["reference"] +weight: 930 +--- + +This page provides a reference of the data types enforced by the Rhize database schema, +and of the filters that can apply to these types when you query, update, or delete a set of resources. +For an extended guide, with examples, read [Use query filters]({{< relref "../how-to/gql/filter" >}}). + +{{< callout type="info" >}} +These filters are based on Rhize's implementation of the Dgraph [`@search` directives](https://dgraph.io/docs/graphql/schema/directives/search/). +{{< /callout >}} + +## Data types + +Every object in the Rhize schema has fields that are of one of the basic data types. +From the other perspective, these data types define fields that compose manufacturing objects, +objects defined precisely by ISA-95 and enforced by Rhize's database schema. + +### Basic types + +Every manufacturing object in the Rhize database is made of fields that are of one the following basic types. 
+In official GraphQL terminology, these types are called [_scalar types_](https://graphql.org/learn/schema/#scalar-types). + +- `String`: A sequence of characters. For example, `machine_2` +- `Int`: An integer number. For example, `2`. +- `Float`: A number that includes a fraction. For example, `2.25`. +- `Boolean`: A field whose value is either `true` or `false`. +- `Enum`: A field whose values are restricted to a defined set. For example, `versionState` might be one of `ACTIVE`, `APPROVED`, `FOR_REVIEW`, `DRAFT`, or `DEPRECATED` + +- `id`: A string representing a unique object within a defined [object type](#object-type). +- `iid`: The object's unique address in the database. For example, `0xf9b49`. +- `DateTime`: A timestamp in [RFC 3339](https://datatracker.ietf.org/doc/html/rfc3339) format. +- `Geo`: Geometry types for geo-spatial coordinates + +### Object type + +The preceding basic types form the building blocks for Rhize's manufacturing object schema, with data models corresponding to ISA-95. + +Each object is made of manufacturing-specific fields of one of the basic types. +For example, the `materialActual` object has basic fields, including: +- `description`, a `String`. +- `effectiveEnd`, a `DateTime` + +The `materialActual` also has complex fields describing associated manufacturing objects. +For example, its fields include +the array of associated `MaterialLot` objects, the `MaterialDefinition` object, and so on. +All objects in the database have relationships to other objects. + +{{< callout type="info" >}} +Metadata fields start with an underscore (`_`). +For example, `_createdOn` reports the time when the object was created. +{{< /callout >}} + +## Scalar filters + +Most objects have some fields that can be filters for a query or mutation. +The filters that are available depend on the data type, but the behavior of the `String` filters corresponds closely to `DateTime` and `Int` filters. + +### String filters + +String properties have the following filters: + +| Filter | Description | Example argument | +|--------------------------------|-----------------------------------------------------------|--------------------------------------------------| +| `eq` | Equals (exact match) | `(filter: {id: {eq: "match"}})` | +| `in` | From this match of arrays | `(filter: {id: {in: ["dough", "cookie_unit"]}})` | +| `lt`,`gt`,`le`, `ge` `between` | Less than, greater than, or between a lexicographic range | `(filter: {id: {lt: "M"}` | +| `regexp` | A regular expression match using [`RE2` syntax](https://github.com/google/re2/wiki/Syntax/) | `(filter: {id: {regexp: "/hello/i"}})` | +| `anyoftext` | A match for any entered strings, separated by spaces | `(filter: {id: {anyoftext: "100 MD"}})` | +| `alloftext` | A match for all entered strings | `(filter: {id {alloftext: "MD"}})` | + +### Integers, floats, DateTimes + +Properties that have a type of `Int`, `Float`, or `DateTime` can be filtered by the following keywords. + + - `lt` + - `le` + - `eq` + - `in` + - `between` + - `ge` + - `gt` + +Each keyword has the same behavior as described in [string filters](#string-filters), only they operate on numerical rather than lexicographic values. +{{< callout type="info" >}} +While the `dateTime` type uses the RFC 3339 format, some string fields may use the [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) format. This depends on the object and customer requirement. For these fields, the string filters work as chronological filters too. 
+{{< /callout >}} +### Enum filters + +Properties of the type `Enum` can be filtered by the following: + - `lt` + - `le` + - `eq` + - `in` + - `between` + - `ge` + - `gt` + +Each keyword has the same behavior as described in [string filters](#string-filters). + +### Boolean filters + +Boolean filters can be either `true` or `false`. + +### Geolocation filters + +Geolocation filters return objects within specified geographic coordinates. +They return matches within the specified [GeoJSON polygon](https://datatracker.ietf.org/doc/html/rfc7946#section-3.1.6). + +If a geolocation field can act as a filter, then the filter can work in one of the following behaviors: +| Filter | Description | +|------------|--------------------------------------------------| +| near | Within the specified `distance` from the polygon | +| within | In the polygon coordinates | +| Intersects | In the intersection of two polygons | + +## Read more + +- [Rhize guide to GraphQL](/how-to/gql) +- [Dgraph `@search` directive](https://dgraph.io/docs/graphql/schema/directives/search/). diff --git a/content/versions/v3.2.1/reference/image.png b/content/versions/v3.2.1/reference/image.png new file mode 100644 index 000000000..a9eb65456 Binary files /dev/null and b/content/versions/v3.2.1/reference/image.png differ diff --git a/content/versions/v3.2.1/reference/nats-configuration.md b/content/versions/v3.2.1/reference/nats-configuration.md new file mode 100644 index 000000000..3dc0c7cb6 --- /dev/null +++ b/content/versions/v3.2.1/reference/nats-configuration.md @@ -0,0 +1,94 @@ +--- +title: 'NATS configuration' +date: '2023-10-04T10:22:15-03:00' +draft: true +categories: ["reference"] +description: Values and parameters to configure NATS in your Rhize operation +weight: 300 +--- + +Rhize uses the [NATS message broker](https://nats.io/) for its publish-subscribe messaging. +Through NATS, Rhize can decouple services, exchange messages in real-time, +and receive event data from all levels of the operation. + +These sections describe NATS parameters that are particularly relevant to Rhize configuration. +For general use, refer to the [NATS official documentation](https://docs.nats.io/nats-concepts/overview). + +## Reserved topics + +Topics that begin with a dollar sign ($) denote topics specifically about the NATS system. + +### Jet stream (`$JS`) + +The `$JS` topic is reserved for messages about the NATS [JetStream](https://docs.nats.io/nats-concepts/jetstream). + +### Key value store (`$KV`) + +The `$KV` topic is reserved for messages about the [Key/Value Store](https://docs.nats.io/nats-concepts/jetstream/key-value-store). + +Subtopics include the following: + +| Topic | Description | +|--------------------|-------------| +| `$KV/JobResponses` | | + +## BPMN topics and configuration + +The `libreBPMN` topic is for messages about the BPMN engine. +Subtopics include the following: + + +| Topic | Description | +|---------------------------------------|-------------| +| `libreBPMN/command/START_EVENT` | | +| `libreBPMN/command/TASK_COMPLETE` | | +| `libreBPMN/command/SERVICE_TASK` | | +| `libreBPMN/command/EXCLUSIVE_GATEWAY` | | + +### `Streams` + +- `libreBpmn_Command` +- `LibreTimerStart` +- `JobResponses KV` +- `WorkflowSpecificationKV` + + +## NATS configuration + +The following parameters configure the NATS message queues for different services. 
+ +### `BPMN` + +The NATS configuration parameters for the BPMN streams are as follows: + +| Topic | Description | +|-----------------------------------|-------------| +| `CommandStreamReplicas` | | +| `JobResponseKVMaxGB` | | +| `JobResponseKVReplicas` | | +| `JobResponseKVTTLMinutes` | | +| `WorkflowSpecificationKVReplicas` | | + +For example: + +```json + "NATS": { + "CommandStreamReplicas": 1, + "JobResponseKVMaxGB": 2, + "JobResponseKVReplicas": 1, + "JobResponseKVTTLMinutes": 7, + "WorkflowSpecificationKVReplicas": 1 + }, +``` + +### Libre core + +The NATS configuration parameters for the Libre core topics are as follows: + +| Parameter | Description | +|-------------|-------------| +| `serverUrl` | | +| `replicas` | | + + + diff --git a/content/versions/v3.2.1/reference/observability-metrics.md b/content/versions/v3.2.1/reference/observability-metrics.md new file mode 100644 index 000000000..2ca2caaaa --- /dev/null +++ b/content/versions/v3.2.1/reference/observability-metrics.md @@ -0,0 +1,341 @@ +--- +title: Observability metrics +description: Metrics from the Rhize microservices, collected by Prometheus. +weight: 350 +category: "reference" +--- + +Rhize uses [Prometheus](https://prometheus.io/docs/introduction/overview/) to monitor metrics from many of its [microservices]({{< relref "service-config" >}}). +For the Kubernetes cluster, Rhize runs the Prometheus operator and monitors the accumulated metrics in Grafana dashboards. +Monitoring occurs granularly, on the levels of cluster, pod, and container. + + +## Metrics endpoints + +The service metrics have endpoints at the following ports: + +| Service | Available | Enabled | Port | +|---------|-----------|---------|------| +| Audit | Y | Y | 8084 | +| BAAS | Y | Y | 8080 | +| BPMN | Y | Y | 8081 | +| Core | Y | Y | 8080 | +| NATS | | Y | 7777 | +| Router | Y | | 9090 | +| Tempo | Y | Y | 3100 | + +NATS has an available endpoint through an exporter that is present on the cluster. +Router has an available endpoint that is disabled by default. + +## Metrics configuration + +For services where metrics are disabled by default, some configuration steps may be required. +If you are experimenting locally, this document includes configuration steps for both the cluster and for Docker. + +After you enable metrics, add them to the Prometheus configuration file by pointing to that service's endpoint. +For example: + +```yaml +- job_name: 'rhize-application-monitoring' + honor_timestamps: true + scrape_interval: 15s + scrape_timeout: 10s + metrics_path: /metrics + scheme: http + static_configs: + - targets: ['audit-demo.demo.svc.cluster.local:8084', 'baas-alpha.demo.svc.cluster.local:8080', 'bpmn-demo.demo.svc.cluster.local:8081', 'core-demo.demo.svc.cluster.local:8080', 'grafana-demo.demo.svc.cluster.local:3000', 'tempo.demo.svc.cluster.local:3100', 'nats-demo-headless.demo.svc.cluster.local:7777', 'router-demo.demo.svc.cluster.local:9090'] +``` + +### NATS + +While NATS has no available metrics endpoint, the cluster includes a [NATS Prometheus exporter](https://github.com/nats-io/prometheus-nats-exporter). +NATS metrics are exposed through port `7777`. + + +#### Cluster + +Since the cluster already includes the exporter, no further configuration is required. +Connect to this endpoint through the NATS headless pod. For example: + +`nats-demo-headless.demo.svc.cluster.local:7777` + +#### Docker + +To get NATS metrics in Docker, use the exporter mentioned in the preceding section.
+The following is a sample docker-compose configuration for the exporter. + +```yaml +services: + # -- Other services + nats-exporter: + image: natsio/prometheus-nats-exporter:latest + container_name: nats-exporter + command: "-varz 'http://nats:5555'" + depends_on: + - nats + ports: + - 7777:7777 +``` + +Access NATS metrics at `localhost:7777/metrics` + +### Router + +#### Cluster + +To enable metrics, the Router Helm chart needs to have several options added or changed, as follows: +For details, refer to the [Official Apollo instructions](https://www.apollographql.com/docs/router/containerization/kubernetes/#deploy-with-metrics-endpoints). + +```yaml +router: + configuration: + # -- Other configuration prior + telemetry: + metrics: + prometheus: + enabled: true + listen: 0.0.0.0:9090 + path: "/metrics" + +# -- Open container ports +containerPorts: + metrics: 9090 + +# -- Enable service monitor +serviceMonitor: + enabled: true +``` + +You can connect to this endpoint through the router pod. +For example: + +`router-demo.demo.svc.cluster.local:9090` + +#### Docker + + +To enable Router metrics, modify its configuration file. +For details, refer to the [Official Apollo Instructions](https://www.apollographql.com/docs/router/configuration/telemetry/exporters/metrics/prometheus/). + +The following is an example configuration: + + +```yaml +# -- Other configuration prior +telemetry: + exporters: + metrics: + prometheus: + enabled: true + listen: 0.0.0.0:9090 + path: /metrics +``` + +This opens the metrics endpoint on port `9090`. +To view it externally, you must expose the port in docker-compose. +Once the port is exposed, view the metrics at `localhost:9090/metrics` + +## Available Rhize microservice metrics + +Several common metrics appear between Rhize microservices: +- `go` +- `process` +- `http`. + + +| Service | [Instrumented Prometheus Go Application](https://prometheus.io/docs/guides/go-application/) | process metrics | HTTP metrics* | Additional | +|---------|---------------------------------------------------|-----------------|---------------|------------| +| Audit | Y | Y | Y | | +| BAAS | Y | Y | | Y | +| BPMN | Y | Y | Y | Y | +| Core | Y | | Y | | +| NATS | Y | Y | Y | Y | +| Router | | | | Y | +| Tempo | Y | | | Y | + +{{< callout type="info" >}} +HTTP metrics are noted as `promhttp`. +{{< /callout >}} + +### BAAS + +Additional metrics on BAAS are from [dgraph](https://dgraph.io/docs/deploy/admin/metrics/). These include two categories: Badger and Dgraph. + +#### Sample + +``` +# HELP badger_disk_reads_total Number of cumulative reads by Badger +# TYPE badger_disk_reads_total untyped +badger_disk_reads_total 0 + +# HELP badger_disk_writes_total Number of cumulative writes by Badger +# TYPE badger_disk_writes_total untyped +badger_disk_writes_total 0 + +# HELP badger_gets_total Total number of gets +# TYPE badger_gets_total untyped +badger_gets_total 0 +``` + +``` +# HELP dgraph_alpha_health_status Status of the alphas +# TYPE dgraph_alpha_health_status gauge +dgraph_alpha_health_status 1 + +# HELP dgraph_disk_free_bytes Total number of bytes free on disk +# TYPE dgraph_disk_free_bytes gauge +dgraph_disk_free_bytes{dir="postings_fs"} 1.0153562112e+10 + +# HELP dgraph_disk_total_bytes Total number of bytes on disk +# TYPE dgraph_disk_total_bytes gauge +dgraph_disk_total_bytes{dir="postings_fs"} 1.0447245312e+10 +``` + +### BPMN + +BPMN has four metrics unique to it, shown in full in the sample below. 
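+
+As a rough illustration of how these metrics can be used once the endpoint is scraped, the queue gauge shown in the sample below could drive a Prometheus alerting rule. This is a sketch only: the metric name comes from the sample, while the rule-group name, threshold, and durations are placeholder assumptions to adapt to your workload.
+
+```yaml
+groups:
+  - name: rhize-bpmn # assumed rule-group name
+    rules:
+      - alert: BpmnCommandBacklog
+        # Fires when BPMN commands sit unprocessed in the consumer queue.
+        expr: bpmn_queue_CommandConsumerQueue > 100 # placeholder threshold
+        for: 10m
+        labels:
+          severity: warning
+        annotations:
+          summary: "BPMN command queue has been backed up for 10 minutes"
+```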
+ +#### Sample + +``` +# HELP bpmn_execution_commands The number of BPMN commands that have started executing +# TYPE bpmn_execution_commands counter +bpmn_execution_commands 1162 + +# HELP bpmn_instances_started BPMN Instances started but not necessarily completed +# TYPE bpmn_instances_started counter +bpmn_instances_started 166 + +# HELP bpmn_queue_CommandConsumerQueue Number of BPMN Commands currently waiting to be executed +# TYPE bpmn_queue_CommandConsumerQueue gauge +bpmn_queue_CommandConsumerQueue 0 + +# HELP bpmn_queue_StartOnNatsMessages Number of BPMN trigger messages received from NATS +# TYPE bpmn_queue_StartOnNatsMessages gauge +bpmn_queue_StartOnNatsMessages 0 +``` + +### NATS + +NATS has two categories of metrics: +- `gnatsd` +- `jetstream` + +#### Sample + +``` +# HELP gnatsd_connz_in_bytes in_bytes +# TYPE gnatsd_connz_in_bytes counter +gnatsd_connz_in_bytes{server_id="nats-demo-0"} 0 + +# HELP gnatsd_connz_in_msgs in_msgs +# TYPE gnatsd_connz_in_msgs counter +gnatsd_connz_in_msgs{server_id="nats-demo-0"} 0 + +# HELP gnatsd_connz_limit limit +# TYPE gnatsd_connz_limit gauge +gnatsd_connz_limit{server_id="nats-demo-0"} 1024 +``` + +``` +# HELP jetstream_server_jetstream_disabled JetStream disabled or not +# TYPE jetstream_server_jetstream_disabled gauge +jetstream_server_jetstream_disabled{cluster="nats-demo",domain="",is_meta_leader="false",meta_leader="nats-demo-1",server_id="nats-demo-0",server_name="nats-demo-0"} 0 + +# HELP jetstream_server_max_memory JetStream Max Memory +# TYPE jetstream_server_max_memory gauge +jetstream_server_max_memory{cluster="nats-demo",domain="",is_meta_leader="false",meta_leader="nats-demo-1",server_id="nats-demo-0",server_name="nats-demo-0"} 2.147483648e+09 + +# HELP jetstream_server_max_storage JetStream Max Storage +# TYPE jetstream_server_max_storage gauge +jetstream_server_max_storage{cluster="nats-demo",domain="",is_meta_leader="false",meta_leader="nats-demo-1",server_id="nats-demo-0",server_name="nats-demo-0"} 5.36870912e+10 +``` + +### Router + +All metrics provided by Router are unique to Apollo Router. + +#### Sample + +``` +# HELP apollo_router_cache_hit_count apollo_router_cache_hit_count +# TYPE apollo_router_cache_hit_count counter +apollo_router_cache_hit_count{kind="query planner",service_name="router-demo",storage="memory"} 121802 + +# HELP apollo_router_cache_hit_time apollo_router_cache_hit_time +# TYPE apollo_router_cache_hit_time histogram +apollo_router_cache_hit_time_bucket{kind="query planner",service_name="router-demo",storage="memory",le="0.001"} 121802 +apollo_router_cache_hit_time_bucket{kind="query planner",service_name="router-demo",storage="memory",le="0.005"} 121802 +apollo_router_cache_hit_time_bucket{kind="query planner",service_name="router-demo",storage="memory",le="0.015"} 121802 +``` + +### Tempo + +Tempo has a few categories of metrics: +- `jaeger` +- `prometheus` +- `tempo` +- `tempodb`. + +The Tempo documentation [details](https://grafana.com/docs/tempo/latest/metrics-generator/) what these metrics measure. 
+ +#### Sample + +``` +# HELP jaeger_tracer_baggage_restrictions_updates_total Number of times baggage restrictions were successfully updated +# TYPE jaeger_tracer_baggage_restrictions_updates_total counter +jaeger_tracer_baggage_restrictions_updates_total{result="err"} 0 +jaeger_tracer_baggage_restrictions_updates_total{result="ok"} 0 + +# HELP jaeger_tracer_baggage_truncations_total Number of times baggage was truncated as per baggage restrictions +# TYPE jaeger_tracer_baggage_truncations_total counter +jaeger_tracer_baggage_truncations_total 0 +``` + +``` +# HELP prometheus_remote_storage_exemplars_in_total Exemplars in to remote storage, compare to exemplars out for queue managers. +# TYPE prometheus_remote_storage_exemplars_in_total counter +prometheus_remote_storage_exemplars_in_total 0 + +# HELP prometheus_remote_storage_histograms_in_total HistogramSamples in to remote storage, compare to histograms out for queue managers. +# TYPE prometheus_remote_storage_histograms_in_total counter +prometheus_remote_storage_histograms_in_total 0 + +# HELP prometheus_remote_storage_samples_in_total Samples in to remote storage, compare to samples out for queue managers. +# TYPE prometheus_remote_storage_samples_in_total counter +prometheus_remote_storage_samples_in_total 0 +``` + +``` +# HELP tempo_distributor_ingester_clients The current number of ingester clients. +# TYPE tempo_distributor_ingester_clients gauge +tempo_distributor_ingester_clients 0 + +# HELP tempo_distributor_metrics_generator_clients The current number of metrics-generator clients. +# TYPE tempo_distributor_metrics_generator_clients gauge +tempo_distributor_metrics_generator_clients 0 + +# HELP tempo_distributor_push_duration_seconds Records the amount of time to push a batch to the ingester. +# TYPE tempo_distributor_push_duration_seconds histogram +tempo_distributor_push_duration_seconds_bucket{le="0.005"} 0 +tempo_distributor_push_duration_seconds_bucket{le="0.01"} 0 +``` + +``` +# HELP tempodb_backend_hedged_roundtrips_total Total number of hedged backend requests. Registered as a gauge for code sanity. This is a counter. +# TYPE tempodb_backend_hedged_roundtrips_total gauge +tempodb_backend_hedged_roundtrips_total 0 + +# HELP tempodb_blocklist_poll_duration_seconds Records the amount of time to poll and update the blocklist. +# TYPE tempodb_blocklist_poll_duration_seconds histogram +tempodb_blocklist_poll_duration_seconds_bucket{le="0"} 0 +tempodb_blocklist_poll_duration_seconds_bucket{le="60"} 2012 +tempodb_blocklist_poll_duration_seconds_bucket{le="120"} 2012 +``` + +## Dashboards + +A number of Grafana dashboards are pre-configured for use with Prometheus metrics. +All dashboards in Grafana use Prometheus as a data source. + +You can download them from [Rhize Dashboard templates](https://github.com/libremfg/rhize-templates/tree/main/dashboards). diff --git a/content/versions/v3.2.1/reference/service-config/_index.md b/content/versions/v3.2.1/reference/service-config/_index.md new file mode 100644 index 000000000..398ec2149 --- /dev/null +++ b/content/versions/v3.2.1/reference/service-config/_index.md @@ -0,0 +1,13 @@ +--- +title: Service configuration +description: A collection of pages to look up configuration parameters for various Rhize services. +weight: 100 +--- + +The Rhize services have different configuration parameters, as documented in the following pages. 
+ +{{% reusable/config-map "core" %}} + + +{{< card-list >}} + diff --git a/content/versions/v3.2.1/reference/service-config/adminUI-configuration.md b/content/versions/v3.2.1/reference/service-config/adminUI-configuration.md new file mode 100644 index 000000000..b7d7ae9c3 --- /dev/null +++ b/content/versions/v3.2.1/reference/service-config/adminUI-configuration.md @@ -0,0 +1,42 @@ +--- +title: 'Admin UI configuration' +categories: ["reference"] +description: List of environment variables and their descriptions +weight: 900 +--- + +The Rhize Admin UI offers a graphical interface to configure master data and users. + +## Preact environment variables + +The following table lists all Preact environment variables. + +| Attribute | Description | +|---------------------|-------------| +| `NODE_ENV` | If `true`, puts the admin UI into debug mode, as does `PREACT_APP_DEBUG`.
      (Default: false) | +| `PREACT_APP_AMPLITUDE_API_KEY` | The Amplitude API Key.
      (Default: None) | +| `PREACT_APP_APOLLO_CLIENT` | The Apollo Client URL.
      (Default: None, http://localhost:4000 when running locally)  | +| `PREACT_APP_APOLLO_CLIENT_ADMIN` | The Apollo Client Admin URL.
      (Default: None, http://localhost:4000 when running locally)  | +| `PREACT_APP_APOLLO_JSONATA_CLIENT` | The JSONata Client URL used for running the JSONata playground.
      (Default: none)  | +| `PREACT_APP_APPSMITH_PORTAL` | Shows the Appsmith portal page in the admin UI if true.
      (Default: false)  | +| `PREACT_APP_AUTH_KEYCLOAK_CLIENT_ID` | The Keycloak Realms Client ID.
      (Default: none, libreUI when running locally)  | +| `PREACT_APP_AUTH_KEYCLOAK_LOGOUT_URI` | The Keycloak logout URL.
      (Default: None, libre when running locally)  | +| `PREACT_APP_AUTH_KEYCLOAK_REALM` | The Keycloak Realm.
      (Default: None)  | +| `PREACT_APP_AUTH_KEYCLOAK_SECRET` | The Keycloak Realms Client IDs Secret Key.
      (Default: None)  | +| `PREACT_APP_AUTH_KEYCLOAK_SERVER_URL` | The Keycloak Server URL.
      (Default: None, http://localhost:8080 when running locally)  | +| `PREACT_APP_DEBUG` | Puts the admin UI page in debug mode if true.
      (Default: false)  | +| `PREACT_APP_GRAPHIQL_ENABLED` | Uses the GraphiQL playground if true, else uses the Apollo sandbox.
      (Default: false)  | +| `PREACT_APP_LIBRE_PAGE_LIMIT` | The libre table page size.
      (Default: 15)  | +| `PREACT_APP_LIBRE_VERSION` | The libre version.
      (Default: none)  | +| `PREACT_APP_MAPBOX_API_KEY` | The Mapbox access token API key.
      (Default: none)  | +| `PREACT_APP_VERSION` | The schema version.
      (Default: none)  | +| `PREACT_APP_WORK_MASTER_UI_ENABLED` | Uses the new work master page if true, else uses the old work master page.  | + +## Cypress environment variables + + All Cypress environment variables, what they do, and their default values. + +| Attribute | Description | +|---------------------|-------------| +| `CYPRESS_FRONTEND_TESTING_DOMAIN` | The Cypress frontend testing domain URL.
      | +| `CYPRESS_LOGIN_ADMIN_EMAIL` | The Cypress Testing Email.
      | +| `CYPRESS_LOGIN_ADMIN_PASSWORD` | The Cypress Testing Password.
      | diff --git a/content/versions/v3.2.1/reference/service-config/agent-configuration.md b/content/versions/v3.2.1/reference/service-config/agent-configuration.md new file mode 100644 index 000000000..646e1b081 --- /dev/null +++ b/content/versions/v3.2.1/reference/service-config/agent-configuration.md @@ -0,0 +1,121 @@ +--- +title: 'Agent configuration' +categories: ["reference"] +description: Configuration parameters for the Rhize agent +aliases: + - "/reference/agent-configuration/" +weight: 900 +--- + +The Rhize agent collects data that is emitted in the manufacturing process and makes this data visible in the Rhize system. +It works by connecting to equipment or groups of equipment that run over protocols such as OPC UA. + +As the communication bridge between the Rhize Data Hub and your plant, the agent has multiple functions: +- It subscribes to tags and republishes the changes in NATS. +- It creates an interface for the BPMN engine to send reads and writes to a data source and its associated equipment. + + +## OPC UA authentication types + + When authenticating over OPC UA, Rhize supports the following authentication types: + +| Authentication type | Behavior | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| Anonymous | Connects without any necessary credential. | +| Username | Authenticates through a `username` and `password` in the config file, or through a Kubernetes secret. | +| Certificate | Uses the certificate on disk specified in the `OPCUA.CertFile` and `OPCUA.KeyFile` configs. If no certificate exists and the config specifies the `OPCUA.GenCert` property as `true`, automatically generates one. | + +## `logging` + + Logs the configurations to the console. + +| Attributes | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `type` | Specifies the logging configuration type: `json`, `multi`, or console.
      (Default: `console`) | +| `Level` | Configures the level of logging: `Trace`, `Debug`, `Info`, `Warn`, `Error`, `Fatal`, or `Panic`.
      (Default: `trace`) | + +## `libreDataStoreGraphQL` + +| Attribute | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `GRAPHQL_URL` | The URL of the GraphQL endpoint to use for interacting with Rhize services.
      (Default: `http://localhost:8080/graphql`) | + +## `NATS` + +| Attribute | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `SERVER_URL` | The URL for connecting to the NATS server.
      (Default: `nats://system:system@localhost:4222`) | + +## `OIDC` + + Configurations for Keycloak authentication and connection with OpenID Connect. + +| Attribute | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `serverUrl` | The URL of the OpenID Connect server.
      (Default: `http://localhost:8090`) | +| `realm` | Identifies the authentication domain for which the authentication request is being made. | +| `client_id` | The unique identifier assigned to the client application by the OIDC server. | +| `client_secret` | Used to authenticate the client alongside the client ID when making confidential requests. | +| `username` | The username credentials to authenticate with the OIDC server. | +| `password` | The password credentials to authenticate with the OIDC server. | + +## `OpenTelemetry` + +| Attribute | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `serverUrl` | The URL of the OpenTelemetry server.
      (Default: `localhost:4317`) | + +## `OPCUA` + +| Attribute | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `DiscoveryUrl` | The URL to locate and connect to OPC UA servers on a network.
      (Default: `opc.tcp://localhost:4840`) | +| `Endpoint` | The URL of the OPC UA service server.
      (Default: `opc.tcp://localhost:4840`) | +| `Username` | The username credentials to authenticate with the OPC UA server. | +| `Password` | The password credentials to authenticate with the OPC UA server. | +| `Mode` | The operational mode of the OPC UA server/client.
      (Default: `None`) | +| `Policy` | The security measures for OPC UA server communication.
      (Default: `None`) | +| `Auth` | The authentication mechanisms and user access control.
      (Default: `Anonymous`) | +| `AppUri` | The application's unique URI within the OPC UA system.
      (Default: `opc.tcp://localhost:4840`) | + +## `BUFFERS` + +| Attribute | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `ProtocolQueueType` | The type of queue used for buffering communication protocol data.
      (Default: `0`) | + +## `HEALTH` + +| Attribute | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `PollInterval` | The frequency of scans for component status and health.
      (Default: `1000`) | +| `SubscriptionTimeout` | The maximum duration to wait to receive updates from subscribed data sources.
      (Default: `60000`) | +| `SubscriptionMaxCount` | The maximum number of concurrent subscriptions for monitoring.
      (Default: `5`) | + +## `MQTT` + +| Attribute | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `Version` | The version of MQTT used: `5.0` or `3.1.1`.
      (Default: `3.1.1`) | +| `ClientId` | The ID used in the MQTT broker.
      (Default: `mqtt-client`) | +| `Endpoint` | The URL of the MQTT broker.
      (Default: `mqtt://localhost:1883`) | +| `Username` | The username credentials to authenticate with the MQTT broker. | +| `Password` | The password credentials to authenticate with the MQTT broker. | +| `DecomposeJSON` | Enables or disables JSON payload decomposition into individual data fields.
      (Default: `false`) | +| `TimestampField` | The field to search to return timestamp information.
      (Default: `timestamp`) | +| `RequestTimeout` | The maximum duration to wait to receive a response to an MQTT request from the broker.
      (Default: `10`) | + +## `DATASOURCE` + +| Attribute | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `ID` | The source ID to retrieve payload data from.
      (Default: `DS_0806`) | + +## `AZURE` + +| Attribute | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `CLIENT_ID` | The ID used to securely authenticate Azure service access. | +| `CLIENT_SECRET` | The secret key associated with the client ID for authentication. | +| `TENANT_ID` | The ID of the Azure Active Directory tenant where the service is registered. | +| `SERVICEBUS_HOSTNAME` | The URL of the Azure Service Bus namespace used for Azure ecosystem communication.
      (Default: `bsl-dev.servicebus.windows.net`) | diff --git a/content/versions/v3.2.1/reference/service-config/audit-configuration.md b/content/versions/v3.2.1/reference/service-config/audit-configuration.md new file mode 100644 index 000000000..eef4df59d --- /dev/null +++ b/content/versions/v3.2.1/reference/service-config/audit-configuration.md @@ -0,0 +1,62 @@ +--- +title: 'Audit configuration' +categories: ["reference"] +description: Configuration for the Rhize audit +weight: 900 +--- + +Audit offers a secure and unchangeable record of all activities that happen within the Rhize system. + +## `logging` + + Logs the configuration to the console. + +| Attributes | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `type` | Specifies the logging configuration type: `json`, `multi`, or console.
      (Default: `console`) | +| `Level` | Configures the level of logging: `Trace`, `Debug`, `Info`, `Warn`, `Error`, `Fatal`, and `Panic`.
      (Default: `Trace`) | + +## `OIDC` + + Configurations for Keycloak authentication and connection with OpenID Connect. + +| Attributes | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `serverUrl` | The URL of the OpenID Connect server.
      (Default: `http://localhost:8090`) | +| `realm` | Identifies the authentication domain for which the authentication request is being made. | +| `client_id` | The unique identifier assigned to the client application by the OIDC server. | +| `client_secret` | Used to authenticate the client when making confidential requests. | +| `username` | The username credentials of the user who is attempting to authenticate with the OIDC server. | +| `password` | The password credentials of the user who is attempting to authenticate with the OIDC server. | + +## `OpenTelemetry` + +| Attributes | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `serverUrl` | The URL of the OpenTelemetry server.
      (Default: `localhost:4317`) | + +## `storage` + +| Description | +|-------------| +| Storage system for the configuration data. Value options include: `influxdb` and `pg`. | + +## `influxdb` + + A time-series database, used in conjunction with Grafana, designed to handle time-stamped data, such as metrics, events, and logs, that change over time. + +| Attributes | Description | +|---------------------|-------------| +| `serverUrl` | The URL of the InfluxDB server.
      (Default: `http://localhost:8086`) | +| `token` | The authentication token to authenticate requests to the InfluxDB server.
      (Default: `my-token`) | + +## `pg` + + PostgreSQL is a general-purpose relational database management system that supports a wide range of features and data types. + +| Attributes | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `host` | The host name of the PostgreSQL database server to which the client application connects.
      (Default: `dbname`) | +| `user` | The username to authenticate with the PostgreSQL database server. | +| `password` | The password associated with the specified PostgreSQL user account. | +| `port` | The port number on which the PostgreSQL database server is listening for incoming connections.
      (Default: `5432`) | diff --git a/content/versions/v3.2.1/reference/service-config/bpmn-configuration.md b/content/versions/v3.2.1/reference/service-config/bpmn-configuration.md new file mode 100644 index 000000000..89ca13816 --- /dev/null +++ b/content/versions/v3.2.1/reference/service-config/bpmn-configuration.md @@ -0,0 +1,102 @@ +--- +title: 'BPMN configuration' +categories: ["reference"] +description: Configuration for the Rhize BPMN engine +weight: 900 +--- + +The Rhize BPMN acts as the engine for processing low-code workflows designed within the [BPMN UI]({{< relref "../../how-to/bpmn" >}}). The configurations manage the connection and data flow from the BPMN engine to the other Rhize microservices. + +## `http` + + All HTTP configurations are measured in `seconds`. + +| Attribute | Description | +|---------------------|-------------| +| `ReadHeaderTimeout` | Wait duration for the request header to be fully read before timing out.
      (Default: `10`) | +| `ReadTimeout` | Wait duration for the entire request to be read before timing out.
      (Default: `15`) | +| `WriteTimeout` | Wait duration for the entire response to be written before timing out.
      (Default: `10`) | +| `IdleTimeout` | Wait duration for the next request while the connection is idle before timing out.
      (Default: `30`) | + +## `logging` + + Logs the configurations to the console. + +| Attributes | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `type` | Specifies the logging configuration type: `json`, `multi`, or console.
      (Default: `console`) | +| `Level` | Configures the level of logging: `Trace`, `Debug`, `Info`, `Warn`, `Error`, `Fatal`, or `Panic`.
      (Default: `Debug`) | + +## `libreDataStoreGraphQL` + +| Attribute | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `GRAPHQL_URL` | The URL of the GraphQL endpoint to use for interacting with Rhize services.
      (Default: `http://localhost:4000/`) | +| `GRAPHQL_CA_FILE` | The file path of the CA certificate used for secure communication with the GraphQL endpoint.
      (Default: `''`) | + +## `viewInstance` + + Configuration for service viewing instances. + +| Attribute | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `grafana` | `org`: The organization ID for the Grafana instance (Default: `1`).
      `tempoUid`: The UID for Tempo integration in Grafana (Default: `libre-tempo`).
      `url`: The URL of the Grafana instance (Default: `http://localhost:3000`). | +| `loki` | `accessToken`: The access token for authentication with Loki (Default: `''`).
      `url`: The URL of the Loki instance (Default: `http://localhost:3100`). | +| `tempo` | `accessToken`: The access token for authentication with Tempo (Default: `''`).
      `url`: The URL of the Tempo instance (Default: `http://localhost:3200`). | + +## `commandConsumer` + +| Attribute | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `threads` | The number of threads for command consumption.
      (Default: `3`) | + +## `GraphQLSubscriber` + +| Attribute | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `GRAPHQL_URL` | The URL of the GraphQL endpoint for the GraphQLSubscriber.
      (Default: `http://localhost:4000/`) | + + +## `NATS` + +| Attribute | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `CommandStreamReplicas` | The number of replicas for the command stream.
      (Default: `1`) | +| `JobResponseKVMaxGB` | The maximum size (in gigabytes) for the job response key-value store.
      (Default: `2`) | +| `JobResponseKVReplicas` | The number of replicas for the job response key-value store.
      (Default: `1`) | +| `JobResponseKVTTLMinutes` | The time-to-live (in minutes) for job response key-values.
      (Default: `7`) | +| `WorkflowSpecificationKVReplicas` | The number of replicas for the workflow specification key-value store.
      (Default: `1`) | +| `serverUrl` | The URL for connecting to the NATS server.
      (Default: `nats://system:system@localhost:4222`) | + +## `OIDC` + + Configurations for Keycloak authentication and connection with OpenID Connect. + +| Attribute | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `serverUrl` | The URL of the OpenID Connect server.
      (Default: `http://localhost:8090`) | +| `realm` | Identifies the authentication domain for which the authentication request is being made. | +| `client_id` | The unique identifier assigned to the client application by the OIDC server. | +| `client_secret` | Used to authenticate the client alongside the client ID when making confidential requests. | +| `username` | The username credentials to authenticate with the OIDC server. | +| `password` | The password credentials to authenticate with the OIDC server. | + +## `OpenTelemetry` + +| Attribute | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `serverUrl` | The URL of the OpenTelemetry server.
      (Default: `localhost:4317`) | +| `defaultDebug` | Enables or disables default debug mode.
      (Default: `false`) | + +## `RESTAPI` + +| Attribute | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `PORT` | The port number for RestAPI connection.
      (Default: `8080`) | + +## `SECRET` + +| Attribute | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `KEY` | The secret key used to connect to the BPMN client. | + diff --git a/content/versions/v3.2.1/reference/service-config/calendar-configuration.md b/content/versions/v3.2.1/reference/service-config/calendar-configuration.md new file mode 100644 index 000000000..6ed1f1b27 --- /dev/null +++ b/content/versions/v3.2.1/reference/service-config/calendar-configuration.md @@ -0,0 +1,109 @@ +--- +title: 'Calendar configuration' +categories: ["reference"] +description: Configuration for the Rhize Calendar Service +weight: 900 +--- + + The Calendar Service handles polling work calendar definitions and generating work calendar entries in the graph and time series databases. + +## `logging` + + Logs the configuration to the console. + +| Attributes | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `type` | Specifies the logging configuration type: `json`, `multi`, or console.
      (Default: `console`) | +| `Level` | Configures the level of logging: `Trace`, `Debug`, `Info`, `Warn`, `Error`, `Fatal`, and `Panic`.
      (Default: `Trace`) | + +## `NATS` + + Message broker that drives Rhize's event-based architecture. + +| Attributes | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `SERVER_URL` | The URL of the NATS server.
      (Default: `nats://system:system@localhost:4222`) | + + +## `OIDC` + + Configurations for Keycloak authentication and connection with OpenID Connect. + +| Attributes | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `serverUrl` | The URL of the OpenID Connect server.
      (Default: `http://localhost:8090`) | +| `realm` | Identifies the authentication domain for which the authentication request is being made. | +| `client_id` | The unique identifier assigned to the client application by the OIDC server. | +| `client_secret` | Used to authenticate the client when making confidential requests. | +| `username` | The username credentials of the user who is attempting to authenticate with the OIDC server. | +| `password` | The password credentials of the user who is attempting to authenticate with the OIDC server. | + +## `OpenTelemetry` + +| Attributes | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `serverUrl` | The URL of the OpenTelemetry server.
      (Default: `localhost:4317`) | +| `samplingRate` | The sampling rate for traces.
      (Default: `1`) | + +## `libreDataStoreGraphQL` + +| Attribute | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `GRAPHQL_URL` | The URL of the GraphQL endpoint to use for interacting with Rhize services.
      (Default: `http://localhost:8080/graphql`) | + +## `GraphQLSubscriber` + +| Attribute | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `GRAPHQL_URL` | The URL of the GraphQL endpoint for the GraphQLSubscriber.
      (Default: `http://localhost:4000/`)| + +## `RESTAPI` + +| Attribute | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `PORT` | The port number for RestAPI connection.
      (Default: `8080`) | + +## `Calendar` + +Specific configuration options for the calendar service. + +| Attribute | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `QueryIntervalMinutes` | How often to poll the work calendar definitions
      (Default: `10`)| + +## `QUERY` + +Query options specific to the calendar service + +| Attribute | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `hierarchyScopeRecursionDepth` | How deep to recurse through the hierarchy scope hierarchy
      (Default: `3`)| +| `equipmentRecursionDepth` | How deep to recurse through the equipment hierarchy
      (Default: `3`)| + +## `Influx3` + +InfluxDB3 server options + +| Attribute | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `Database` | The name of the database to connect to
      (Default: `Libre_calendar-service`)| +| `Host` | The host of the Influx3 database
      (Default: `http://localhost:8096`)| +| `Organization` | The Influx3 Organization (Influx3 Cloud)
      (Default: `Libre`)| +| `TokenPrefix` | The prefix used in the authorization token (`Token` for Influx 3 cloud, `Bearer` for Influx3 Edge)
      (Default: `Token`)| +| `Token` | The authorization token to attach to any Influx3 requests| + +## `Postgres` + +Postgres server options + +| Attribute | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `Host` | The hostname of the Postgres database
      (Default: `localhost`)| +| `Port` | The port of the Postgres database
      (Default: `5432`)| +| `User` | The Postgres user name
      (Default: `postgres`)| +| `Password` | The Postgres instance password
      (Default: `postgres`)| +| `Database` | The database name for the Postgres instance
      (Default: `Libre`)| + +## `Database` + +Which database instance to use, either `Postgres` or `Influx3`
      (Default: `Influx3`) diff --git a/content/versions/v3.2.1/reference/service-config/core-configuration.md b/content/versions/v3.2.1/reference/service-config/core-configuration.md new file mode 100644 index 000000000..c3e7174a1 --- /dev/null +++ b/content/versions/v3.2.1/reference/service-config/core-configuration.md @@ -0,0 +1,82 @@ +--- +title: 'Core configuration' +categories: ["reference"] +description: Configuration for the Rhize core +weight: 900 +--- + + The Core service oversees data sources such as OPC-UA servers and manages the publication and subscription of topics within the NATS messaging system. + +## `logging` + + Logs the configuration to the console. + +| Attributes | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `type` | Specifies the logging configuration type: `json`, `multi`, or console.
      (Default: `console`) | +| `Level` | Configures the level of logging: `Trace`, `Debug`, `Info`, `Warn`, `Error`, `Fatal`, and `Panic`.
      (Default: `Trace`) | + +## `NATS` + + Message broker that drives Rhize's event-based architecture. + +| Attributes | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `serverUrl` | The URL of the NATS server.
      (Default: `nats://system:system@localhost:4222`) | +| `replicas` | The number of replicas or instances of the NATS server to be deployed.
      (Default: `1`) | + + +## `OIDC` + + Configurations for Keycloak authentication and connection with OpenID Connect. + +| Attributes | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `serverUrl` | The URL of the OpenID Connect server.
      (Default: `http://localhost:8090`) | +| `realm` | Identifies the authentication domain for which the authentication request is being made. | +| `client_id` | The unique identifier assigned to the client application by the OIDC server. | +| `client_secret` | Used to authenticate the client when making confidential requests. | +| `username` | The username credentials of the user who is attempting to authenticate with the OIDC server. | +| `password` | The password credentials of the user who is attempting to authenticate with the OIDC server. | + +## `OpenTelemetry` + +| Attributes | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `serverUrl` | The URL of the OpenTelemetry server.
      (Default: `localhost:4317`) | +| `samplingRate` | The sampling rate for traces.
      (Default: `1`) | + +## `SECRET` + +| Attributes | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `KEY` | The SECRET KEY used for authorization within Core. | + +## `graphQLServer` + + The server used to connect to the GraphQL playground. + +| Attributes | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `Port` | The port used within the URL that connects to the graphQLServer.
      (Default: `4001`) | + + +## `libreDataStoreGraphQL` + +| Attributes | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `GraphQLUrl` | The URL of the GraphQL endpoint to use for interacting with Rhize services.
      (Default: `http://localhost:8080/graphql`) | + + +## `BPMN` + +| Attributes | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `GraphQLUrl` | The URL of the BPMN endpoint to use for interacting with Rhize services.
      (Default: `http://localhost:8081`) | + +## `TimeSeries` + +| Attributes | Description | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `Enabled` | Enables the use of TimeSeries.
      (Default: `false`) | + diff --git a/content/versions/v3.2.1/releases/3-0-1.md b/content/versions/v3.2.1/releases/3-0-1.md new file mode 100644 index 000000000..3afad9755 --- /dev/null +++ b/content/versions/v3.2.1/releases/3-0-1.md @@ -0,0 +1,130 @@ +--- +title: 3.0.1 +date: '2024-07-23T10:42:31-03:00' +description: Release notes for v3.0.1 of the Rhize application +categories: ["releases"] +weight: 1696325267 ## auto-generated, don't change +v: "3.0.1" +images: + - /images/og/graphic-rhize-release-v3.0.1.png +--- + +Release notes for version 3.0.1 of the Rhize application. + +This release includes bug fixes, improvements for developer experience, and new relationships between entities in the ISA-95 schema. +It also clears the way for the new, improved Work Calendar service (still in release candidacy). + +_Release date:_ +23 Jul 2024 + +{{< callout type="warning" >}} +This release includes a few minor breaking changes, review the [upgrade](#upgrade) instructions for details. +{{< /callout >}} + +## Changes by service + +The following sections document the changes this release brings to each service. + +### Audit + +**Fixes** +- Fix Postgres user tag query to use column name `user_id` instead of the `user` reserved keyword + +### BPMN + +**Features** + +- Error boundary events now support call-activity tasks. If a task in the called process aborts, the error boundary captures that event. +- Support Custom [Certificate authorities](https://en.wikipedia.org/wiki/Certificate_authority) so that the BPMN service can run in networks with internal certificates authorities. +- [Message events]({{< relref "../how-to/bpmn/bpmn-elements/#events" >}}) now support topics that use the `+` wildcard. +- Add configuration option to set __traceDebug flag on all BPMN executions. For details, read [Debug workflows]({{< relref "../how-to/bpmn/debug-workflows" >}}). +- Allow `__traceDebug` to be set from [variable context]({{< relref "../how-to/bpmn/create-workflow/#access-process-variable-context" >}}). +- Add support for querying a range of [work calendar]({{< relref "../how-to/work-calendars" >}}) entries for a Work Calendar Definition. + - When configured, calendar data persists to a timeseries DB. + - {{< breaking >}}. Work calendars now query for `WorkCalendarDefinitions` not `workCalendars`. + This makes it possible to configure calendars in the UI. + - Work calendar execution is now based on Hierarchy Scope. + +**Change** +- Reduce number of retries in [BPMN recovery]({{< relref "../deploy/maintain/bpmn-nodes/" >}}), speeding up response times when node does not exist. +From the user's perspective, the delay was particularly notable on [API BPMN triggers]({{< relref "../how-to/bpmn/trigger-workflows/#start-a-workflow-from-an-api" >}}) when the specified workflow did not exist. + +**Fixes** +- Capture panics in JSONata go for better recovery and discovery. +- {{< breaking >}}. Limit BPMN users to only tokens with specific audience and scope. +- Workflow specifications not found in NATS now fall back to database lookups before failing. + Fixes issue when spinning up new development containers. +- Include metadata fields for `jobResponse`. Fixes panic if node tried to access these fields. +- Respect [`debug`](/how-to/bpmn/debug-workflows/#adding-the-debug-flag) flag for BPMNs triggered with `createAndRunBpmn`. +- Reuse expected Audience for clients to match documented client name. +- Do not create calendar entry for end events. +- Fix nil pointer error when aborting tasks without parents. 
+ + +### Schema + +**Features** +- {{< breaking >}}. Add `id` and `label` to `stateTransitionInstance`. +- Add menu URL for portal in UI (preparation for future work in UI). +- Add filter to query material actual by a material requirement and vice-versa. +- Link `workCalendarEntries` to equipment versions. +- Include Kafka in the enum values for data sources. +- Add optional `InformationObjectTypeEnum` to `OperationsEventRecordEntry`. + +**Fixes** +- Add 0:N relationship from `materialRequirements` to `materialActual`. +- The `segmentResponse` entity can now have multiple `jobResponses` attached. +- Change `segmentResponse` relationship to `JobResponse` to be a 1:N relationship. +- {{< breaking >}}. Change `stateModelTransition` from `state` to a relationship. +- Fix inverse relationship between `operationseventDefinitionProperty` and `operationsEventProperty`. + +## Compatibility + +{{< compatible "3.0.1" >}} + +## Checksums + +{{% checksums "v3.0.1-checksums.txt" %}} + +## Upgrade + +{{< reusable/backup >}} + +To upgrade to v3.0.1, first ensure that you have made corrections for the aforementioned breaking changes. + +Rhize now limits BPMN users to the `libreBpmn` audience and `bpmn:mutation` role. +If users do not have this audience and role, they will be unable to log in. +To fix this: +1. Log in to the Keycloak service in your Rhize environment. +2. Select the `libre` realm. +3. Configure the audience and role for the users or groups that should be able to run BPMN workflows. For details, read [Install Keycloak]({{< relref "../deploy/install/keycloak/" >}}). + +Work calendars now query for `workCalendarDefinitions`, not `workCalendars`. +There is a chance this may break some calendars that already exist. +To mitigate this, ensure you set active definitions for the calendars you want to query. +Read [About calendars and overrides]({{< relref "../how-to/work-calendars/about-calendars-and-overrides" >}}) for details about how the service and its relationships work. + +After you've made the necessary mitigations, follow the [Upgrade instructions](/deploy/upgrade). + +### Post upgrade + +Two schema changes to the state model might also cause breaking changes after upgrade. +You can correct these issues by re-uploading the data. + +For the specific breaks: + +The `StateTransitionInstance` entity now has an `id` and `label` property. +Any such entities that were created without an ID and label are now invalid. +To mitigate this: +1. Query for your `StateTransitionInstance`: + ```gql + query QueryStateTransitionInstance { + queryStateTransitionInstance { + iid + } + } + ``` +2. Use a [mutation]({{< relref "../how-to/gql/mutate/#update" >}}) to update these to have an ID and label. + +Any existing `StateModelTransitions` using a `from` or `to` property will break. Change these to a relationship. + diff --git a/content/versions/v3.2.1/releases/3-0-3.md b/content/versions/v3.2.1/releases/3-0-3.md new file mode 100644 index 000000000..89e4e20db --- /dev/null +++ b/content/versions/v3.2.1/releases/3-0-3.md @@ -0,0 +1,141 @@ +--- +title: 3.0.3 +date: '2024-11-13T00:41:53-03:00' +description: Release notes for v3.0.3 of the Rhize application +categories: ["releases"] +weight: 1686972505 ## auto-generated, don't change +--- + +Changelog for version 3.0.3 of the Rhize application. + +_Release date:_ +13 Nov 2024 + +## Changes by service + +The following sections document the changes this release brings to each service.
+ +### Admin + +**Add** + - Add portal menu link + - Add output container build to `$CI_REGISTRY/libremfg/docker/admin-ui:$CI_COMMIT_TAG` + +**Change** + - Change `WorkMaster` parameters input to textbox to support multi-line values better + - Change BPMN Process instances modal default search and width + - Refactor Material Class Properties table + - Refactor Material Definition table + - Refactor Operational Location Class properties table + - Refactor Operational Location properties table + - Refactor Operations Event Class table + - Refactor Operations Event Definition to use infinite scroll + - Refactor Person Properties table + - Refactor Personnel Class table + - Refactor Physical Asset Class table + - Refactor Physical Asset Classes to use infinite scroll + - Refactor Physical Asset table + - Refactor Physical assets to use infinite scroll + - Refactor Work Masters to use infinite scroll + +**Fix** + + - Fix Person description allowing `e` character + - Fix Change material definition Version State prompt text + - Fix Create or Select a material definition + - Fix Personnel Class version containers text + - Fix Personnel Class save as hierarchy scope not saving + - Fix typos in person page referring to Physical Asset + - Fix Material Class search + - Fix Operational Location Class hierarchy scope edit + - Fix search on large list of Operational Location Class property + - Fix enable/disable equipment property + - Fix equipment property edit + - Fix equipment property edit query + - Fix Material Class `isAssembledFrom` relationship edits + - Fix BPMN view sidebar title + - Fix equipment nested tree items not showing active version + - Fix user details and support typo + - Fix large number of Person failing to display + - Fix large number of Personnel Classes failing to display + - Fix large number of Operational Location Classes failing to display + - Fix Variable page logic to Create and Set as active + - Fix Operations Definition Segment `WorkType` selection + - Fix invalid characters in Operations Definition Segment Component + - Fix portal hang when not configured + - Fix Equipment properties text + - Fix numerical error in input fields on Person version for `Name` and `Details` + - Fix Operational Location change not refreshing screen after edit + - Fix validation of `canCreateNewVersion` in Material Class page + - Fix typos in Operational Location page referring to Operational Location Class + - Fix audit trail PDF export header text overlap + - Fix cache issue when enabling and disabling Operations Event Definition + - Fix previous tag selection script in CI/CD + - Fix typo in Create Operational Location + +### BPMN engine + +**Fix** + - Fix linting and container version issues from CI + +### Schema + +**Add** +- Add regex search to `Event.messageText` +- Add `JobResponseData.valueLong` for large values without search + + +**Change** + - Change baas version dependencies to v3.0.3 from v3.0.0 + +**Fix** + - Fix pipeline error by pinning version of `gqlgen` + + +### BAAS + +**Fix** + - Fix large strings breaking badger indexes by constraining string length to `64000` when indexed and not by hash + + +### Core + +**Fix** + - Fix supergraph compose warning on `JobState` enum descriptions + + +### Agent + +No changes for this release + + +### Audit + + +**Add** +- Add default partition size of 1 month to Postgres container + +**Fix** +- Fix Postgres initialization of partition tables + + +### Keycloak Theme + +No changes for this release. 
+ +### Router + +No changes for this release. + +## Compatibility + +{{< compatible "3.0.3" >}} + +## Checksums + +{{% checksums "v3.0.3-checksums.txt" %}} + +## Upgrade + +To upgrade to v3.0.3, follow the [Upgrade instructions](/deploy/upgrade). + diff --git a/content/versions/v3.2.1/releases/3-0.md b/content/versions/v3.2.1/releases/3-0.md new file mode 100644 index 000000000..7045fda7e --- /dev/null +++ b/content/versions/v3.2.1/releases/3-0.md @@ -0,0 +1,156 @@ +--- +title: "Rhize 3.0" +date: '2024-02-27T09:17:43-03:00' +description: >- + Notes for v3.0 of the Rhize Manufacturing Data Hub. A flexible architecture for workflow orchestration, message handling, standards-based modeling, and custom MES apps. +categories: ["releases"] +weight: 1709031155 ## auto-generated, don't change +v: "3.0" +--- + +Rhize version 3.0 is now in general release! :partying_face: +As a full rewrite of our earlier Libre 2.0 application, this release functionally announces a new product. + +Read more to learn about the features available in the Rhize Manufacturing Data Hub. + +## What is Rhize? + +Rhize is the world's first manufacturing data hub. +It collects event data emitted from all levels of a manufacturing process, +stores this data in a standardized schema, +and exposes access to the event stream so users can orchestrate workflows in real-time. + +Customers use Rhize to observe and react to past and present states of their manufacturing operation. Its use cases include: +- **A single source of truth for manufacturing records**. With manufacturing events represented as _a knowledge graph_, Rhize provides access to any view of the system through a single query to a single endpoint. You can query detailed reports about a specific asset, such as a manufacturing batch, or compare production across broad categories, for example, between equipment items or production lines. +- **A backend for custom MES and MOM applications**. As the data hub has a message broker and real-time database, operators can program custom level-three applications for functions such as track and trace, material genealogy, electronic batch record creation, and KPI calculations. +- **An integrator of legacy systems**. Rhize accepts data from legacy systems and converts them to a standardized schema. Thus, it serves as a hub to communicate between legacy systems in ways that would otherwise be impossible or very difficult to maintain. + + +## Features + +Each of the following features supports Rhize's key design goals: +- Provide a highly reliable means to collect manufacturing data from varied sources +- Standardize this data in a way that accurately places it within an _event_ in the context of an entire manufacturing operation +- Offer a programmable engine to write custom logic to process the data and send it between systems +- Serve as a complete backend application and architecture for MES/MOM frontends +- Expose this system through an interface that is accessible to the widest number of stakeholders in the manufacturing operation. + +### Knowledge graph and GraphQL + +Data is stored in the Rhize DB, a graph database with a custom schema that uses the ISA-95 standard as a data model. +This database provides several advantages over a relational database or data lake: + +- **Standardization.** + The database schema enforces standardization using a data model that adequately represents an entire manufacturing operation. + All data stored in Rhize has a common structure, unlike the heterogeneous data of a data lake. 
+- **Graph Structures.** + The graph database represents the object model of ISA-95 exactly. + Every event in a manufacturing process is connected to all other events. + For example, a job response might have associated operations requests, personnel, equipment, and materials that are consumed and produced. + All these objects have their associated classes, instances, and contexts (where context could be a time range, operations segment, or zone of information exchange). +- **Intuitive, minimal queries**. The Rhize DB is exposed through a GraphQL API, which provides a complete interface through a single endpoint. + Users query exactly what they want without needing to handle the relational algebra of SQL or the over-fetching of a REST API. + +With standardization, graph structure, and complete interfaces, the Rhize DB thus constitutes a knowledge graph that represents the entire state of a manufacturing operation. +Users use this knowledge graph to run simulations, discover optimizations, and train machine-learning models for predictive analysis. + +You can read and write to the knowledge graph through a GraphQL explorer, a BPMN workflow, a custom frontend, or through the Modeling UI. +To learn how to use GraphQL for manufacturing analysis, read the [Rhize guide to GraphQL]({{< relref "../how-to/gql" >}}). + +### Modeling UI + +The Rhize UI is a graphical interface to model and store the objects in your manufacturing process. Without needing programming knowledge, your operators can use the Admin UI to define the items in your role-based equipment hierarchy. For example, you can create and associate equipment with equipment classes, hierarchy scopes, data sources, and segments―all the objects that change infrequently. + +These models provide the foundational data objects to associate with the dynamic data that you collect, analyze, and handle during ongoing operations. +To learn more, read the [Model objects]({{< relref "../how-to/model/create-objects-ui/" >}}) guide and its corresponding topic that describes the [Master definitions and fields]({{< relref "../how-to/model/master-definitions/" >}}). + +### Low-code workflow orchestration + +While some aspects of manufacturing, such as the plant site, are constant, most manufacturing data is emitted dynamically during production. +Rhize's {{< abbr "BPMN" >}} editor provides a graphical interface to listen to events, write logic based on event conditions, and process data to send to other devices or store in the knowledge graph. +It has gateways to represent conditions, messaging to subscribe and publish to topics, and a JSON interpreter to transform variables and process manufacturing data in batches. + +For example, you can write a BPMN workflow to do any combination of the following: +- Automatically write the values associated with a specific job to the database +- Evaluate incoming data for conditions and write logic to transform this data and send it +- Subscribe and publish to topics, coordinating communication between systems +- Listen to events and read and write machine data through OPC-UA + +To get started, read the Guides to [Use BPMN]({{< relref "../how-to/bpmn" >}}) and [Write rules]({{< relref "../how-to/publish-subscribe/create-equipment-class-rule" >}}) to trigger workflows from data source values. + +### Agent and message broker + +Rhize collects data from multiple data sources and protocols. +To bridge the Rhize system with your devices, Rhize has an agent to collect data from OPC UA and MQTT servers. 
+The Rhize agent listens for your devices' messages and publishes them to the message broker. + +Based on NATS, Rhize's message broker decouples communication through a publish-and-subscribe pattern. +Rhize BPMN workflows can subscribe to topics and publish topics back. +On your side, your manufacturing devices can publish topics to Rhize and subscribe to topics that are published from some BPMN trigger or to changes in the database. +You can also use Rhize workflows to publish and subscribe to topics on an external broker. + +To learn more, read the guide to [Connect a data source]({{< relref "../how-to/publish-subscribe" >}}) and the reference for [Agent Configuration]({{< relref "../reference/service-config/agent-configuration" >}}). + +### Audit trails + +Manufacturing often happens in strict regulatory environments and in systems where security is critical. +The Rhize audit log maintains a record of all changes that happen in the system. +To learn more, read the [Audit]({{< relref "../how-to/audit" >}}) guide. + + +### Secure, vendor-agnostic infrastructure + +Rhize is built on open standards and heavily uses open-source cloud infrastructure. +As such, Rhize is designed to conform to your operation's IT and manufacturing processes. + +Rhize runs on your IT infrastructure. +The data model itself is based on a widely recognized standard, ISA-95. +Your teams control its security and access policies, with authentication provided through [OpenID Connect]({{< relref "../explanations/about-openidconnect.md" >}}). + +### High availability and reliability + +Mission-critical systems, such as an MES, must be highly reliable and available. +Information flows in manufacturing can also have very high demands for CPU, network throughput, and memory. + +The design of Rhize accommodates horizontal scaling, where the application runs in clusters of multiple computers. +This distributed execution ensures that the system continues functioning during upgrades, periods of high use, and momentary failures in single nodes. +Specifically, Rhize uses Kubernetes for container orchestration, a distributed, ACID-compliant database, and a message broker that uses replication to harden against failure. + +The reliance on open source also means that your operators do not need to learn a specialized skill to deploy Rhize. +It uses the same cloud-native applications that power the modern web. + +Read all about how to Deploy Rhize in the [Deploy]({{< relref "../deploy" >}}) guides. + +### A backend for MES applications + +With the knowledge graph, message broker, and workflow executor, and secure infrastructure, +Rhize also provides all the components necessary to write custom applications for Manufacturing Execution Systems. +Rather than rely on a vendor to sell an MES system that prescribes exactly what data it can use and how it can use this data, use Rhize to build the custom MES frontend of your choice. + +Combined with rapid prototyping and low-code tools, you can use Rhize to build MES applications quickly and iterate on them as your operators use them in production. + +## Install + +To install, read the [Install guide]({{< relref "../deploy/install" >}}). +Or, if upgrading from a release candidate, use the [Upgrade guide]({{< relref "../deploy/upgrade" >}}). +If upgrading, be sure to review the changelog to be aware of any possible breaking changes. 
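+As a taste of the GraphQL access described in the knowledge-graph section above, a query for a job response and some of its context might look roughly like the following. This is a hedged sketch: the operation and field names are assumptions for illustration, not an excerpt from the released schema.
+
+```gql
+# Illustrative only: operation and field names are assumed, not taken from the released schema.
+query ExampleJobResponseContext {
+  queryJobResponse(first: 1) {
+    id
+    jobState
+    equipmentActual {
+      id
+    }
+    materialActual {
+      id
+    }
+  }
+}
+```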
+ +{{< compatible "3.0.0" >}} + +{{% checksums "v3.0.0-checksums.txt" %}} + +### Changelogs + +The following changelogs document the features, fixes, and refactoring that went into this release. +- [3.0.0]({{< relref "changelog/3-0-0/" >}}) +- [3.0.0rc09]({{< relref "changelog/3-0-0rc09/" >}}) +- [3.0.0rc08]({{< relref "changelog/3-0-0rc08/" >}}) +- [3.0.0rc07]({{< relref "changelog/3-0-0rc07/" >}}) +- [3.0.0rc06]({{< relref "changelog/3-0-0rc06/" >}}) +- [3.0.0rc05]({{< relref "changelog/3-0-0rc05/" >}}) + +## Read more + +- [Get started]({{< relref "../get-started" >}}) introduces Rhize's application and architecture. +- [Manufacturing data hub]({{< relref "../explanations" >}}) explains why Rhize chose its design. +- [Use cases]({{< relref "../use-cases" >}}) explains ways customers use Rhize. diff --git a/content/versions/v3.2.1/releases/3-1-0.md b/content/versions/v3.2.1/releases/3-1-0.md new file mode 100644 index 000000000..b9a01dc9f --- /dev/null +++ b/content/versions/v3.2.1/releases/3-1-0.md @@ -0,0 +1,428 @@ +--- +title: 3.1.0 +date: '2025-02-28T12:38:30-05:00' +description: Release notes for v3.1.0 of the Rhize application +categories: ["releases"] +weight: 1677303108 ## auto-generated, don't change +--- + +Release notes for version 3.1.0 of the Rhize application. + +_Release date:_ +7 Mar 2025 + +{{< callout type="warning" >}} +This release includes a breaking change to permissions, review the [upgrade](#upgrade) instructions for details. +{{< /callout >}} + +## Changes by service + +The following sections document the changes this release brings to each service. + +### Admin + +**Add** +- Add REST Playground. +- Add expand input JSON to BPMN editor Viewer mode. +- Add includes properties of to material class. +- Add SSO for Appsmith Portal. +- Add pagination to Operations Event Definition sidebar. + +**Change** +- Change schema to accommodate inherited properties with custom resolver. +- Change GraphQL Playground to use Apollo Sandbox. +- Change to use `libre-schema` v3.1.0. +- Change GraphQL types to match `libre-schema` v3.1.0. +- Change scripts to force line endings LF for compatibility with Windows. +- Change tree library with `react-arborist`. + +**Fix** +- Fix disabled Material Definition properties not showing as yellow. +- Fix disabling inheritance of Material Definition Property that are inherited through classes. +- Fix disabling property edits on approved entities. +- Fix duplicate personnel class draft being created on start. +- Fix equipment save not working under all scenarios. +- Fix Operations Event Class search bar not showing results. +- Fix sidebar menu to hide disabled not showing for all entities. +- Fix work calendar entry not showing after being created and requiring refresh. +- Fix lazy loading of MaterialDefinition, MaterialClass and WorkMaster when none exist. +- Fix multiple HTML5 backend errors by adding DndManager to ArboristTree usage. +- Fix incorrect error toast when Equipment Class Rule expression results in false. +- Fix issue where certain errors would post to console and not show in Response field. +- Fix REST Playground request headers and body not being both sent in POST and PUT requests. +- Fix issue causing newly created Equipment to appear at bottom of list. + +### BPMN engine + +**Add** +- Add config logging to the terminal on start-up. +- Add config validation checks, specifically nil checks and URL formatting+response checks. +- Add generation of config file when prompted with flag and file path in terminal start-up. 
+- Add obfuscation for sensitive information in config on publish. +- Add JSONata expression evaluation query handler, for calling BPMN service tasks outside of a process. +- Add synchronous message wait to allow BPMN to block execution and wait for a message. +- Add span end to Call Activity error log. +- Add dot support to MQTT topics +- Add responseHeaders to RestAPI service task output variables +- Add check for `jobState` in `getInstance`. +- Add detection for when NATS deletes KV stores and to resync them. +- Add domain object `bpmn.Task` for BPMN tasks that uses a `domain.JobResponse` as base type. +- Add domain object for BPMN tasks. +- Add `EffectiveEnd` to `WorkCalendarEntry`. +- Add `EvaluateJsonata` domain function. +- Add facades for service tasks with NATS Adapter & service tasks with Database Query Adapter. +- Add `gotemplate` wrapper for `ExpandEnvAndSecrets`. +- Add interface for service tasks. +- Add internal variable to ExpandEnvAndSecrets for GraphQL URL. +- Add `TaskDefinition` type to domain. + +**Change** +- Change to use YAML instead of JSON for configuration. +- Change `HandleCallActivity` to use `getWorkflowSpecification` helper function. +- Change go version from `1.22` to `1.23`. +- Change "ABORTED" Job State check in `getInstance` to reference `jobState`. +- Change "ABORTED" Job State check in `getInstance` to be in final workflow span check. +- Change configManager to accept a config file name argument. +- Change `domain.JobResponse` functions to `bpmn.Task` functions. +- Change Get/Set `TaskVariable` to be `jobResponse` domain functions. +- Change individual service task functions to individual files and modified to adhere to interfaces. +- Change `job *domain.JobResponse` to `task *bpmn.Task` & update symbol in functions. +- Change JSONata compile and evaluation to be in a separate function. +- Change `pickServiceTaskDefinition` to domain function on `bpmn.Task` `GetServiceTaskDefinition`. +- Change pipelines to use `CI_JOB_TOKEN` in place of personal access tokens. +- Change pipelines to use go version `1.23` and docker `27.4.1`. +- Change `TaskDefinition`s in domain to use base `TaskDefinition` type, for stricter type checking. +- Change to check and log if `workflowSpec` is empty in `HandleTaskAborted`. +- Change to importing libre-schema for types. +- Change to separate '.' and ' ' cases in `MQTTToNATSSubjectConversion`, adding new conversion for '.' case to '//'. +- Change to use globally configured CA for all GraphQL and REST requests. +- Change `WorkCalendar` trigger to filter for only active equipment. +- Change BPMN `IntermediateThrowEvent` to allow unconfigured `IntermediateThrowEvents` to complete successfully. + +**Fix** +- Fix BPMN SQL Task errors causing nil pointer exception for nil job. +- Fix nil dereferences in `bpmnEngine.go`. +- Fix `getInstance` not returning an `endTime` for a running BPMN. +- Fix issue that would cause `Task` `DataJSON` to grow exponentially. +- Fix issue where `oidc.bypass` was not being handled. +- Fix issue with some NATS Connect instances not referencing username and password. +- Fix `WorkCalendarDefinition` label field not aligning. +- Fix Tempo search query parameter encoding for Instance searching. + +**Remove** +- Remove impossible error condition handling. +- Remove repeated Call Activity error logging. +- Remove unused config fields from configuration. +- Remove OTEL URL validation. 
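+For context on the `getInstance` items above, the query in question has roughly the following shape. This is a hedged sketch; the argument and return fields are assumptions based on the notes in this section, not the verified API.
+
+```gql
+# Hypothetical sketch of the BPMN engine's getInstance query (argument and field names assumed).
+query ExampleGetInstance {
+  getInstance(id: "example-workflow-instance-id") {
+    jobState
+    endTime
+  }
+}
+```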
+ +### Schema + +**Add** +- Add deprecation warning to incorrect JobOrder dispatch status' +- Add deprecation warning to quantityUnitOfMeasure on MaterialLot, MaterialSubLot, PersonnelCapabilityProperty, MaterialCapabilityProperty, EquipmentCapabilityProperty and PhysicalAssetCapabilityProperty +- Add migration directive for OperationsEventDefinitionVersion.workMasters +- Add missing JobOrder dispatch status' from the standard ISA-95 2019 standard +- Add Operations Capability schema model +- Add quantityUoM attribute to MaterialLot, MaterialSubLot, PersonnelCapabilityProperty, MaterialCapabilityProperty, EquipmentCapabilityProperty and PhysicalAssetCapabilityProperty +- Add regex search to event message text +- Add signature to InformationObjectTypeEnum +- Add a alternative permission group for just Materials (MaterialLot & MaterialSublot) and their properties +- Add continuous deployment for tagged versions to staging environments +- Add continuous deployment for v3.x.x branch to Libre 3 (L3) staging environment +- Add relationship between Workflow Specification Nodes and Work Master +- Add migration directives for Work Calendar information model + +**Change** +- Change `_createdDateTime` to `_createdOn` for consistency +- Change federation version from 2.7.0 to 2.9.0 +- Change migration method to `parent` for Hierarchy Scope's inverse relationships +- Change Operations Event Definition Record Specification ID to be unique (`@id`) +- Change scripts to use `docker compose` instead of `docker-compose` +- Change unicode characters to ascii +- Change libre `baas` version from v3.0.1 to v3.0.3 in container builds and CI/CD +- Change golang version to `1.23` from `1.21` +- Change build containers to use v3.1.0 of BAAS +- Change comment signoff to allow 0..n, was 0..1 + +**Fix** +- Fix circular dependency on migration of Hierarchy Scope +- Fix `ImportJSON` mutation not checking for conflicts in data for versioned entities +- Fix `isAssembledFrom` migration for MaterialClass +- Fix migration to include `equipmentVersions` in HierarchyScope migration +- Fix order of operations so both models are complete when generating entities and schemas +- Fix diff check on commit +- Fix incorrect merge that duplicated JobOrder Dispatch Status `CANCELLED` +- Fix permission file format for Row Level Access Control +- Fix BAAS permission file generation to align with existing installation instructions + +**Remove** +- Remove `fulltext` search on `_createdBy` and `_modifiedBy` +- Remove `gqlparse` library version pinning in CI/CD +- Remove generation of scopemap update in CI/CD +- Remove publish of tagged subgraph to Apollo +- Remove repeated word in resource Specifications descriptions +- Remove `@deprecation` from JobOrder dispatch status' Cancelled + +### BAAS + +**Add** +- Add authorization error messages when authorization fails on GraphQL queries +- Add arguments to lambda execution +- Add check for claims in token for Row Level Access Control +- Add cli flag `--graphql runtime-url` to configure to override the base url of the custom url handler +- Add command line argument to synchronize roles with OIDC Server +- Add support for `_Any` type in `@custom` queries and mutations +- Add `@constraint` directive handling to limit field values +- Add Apollo Sandbox user interface as default handler +- Add GraphQL documentation +- Add nullability checks when adding nested objects to only nested objects +- Add tags and Continuous Delivery (CD) for docker repository + +**Change** +- Change Apache thrift library to 
v0.13.0 from v0.12.0 +- Change CI/CD `docker` version to 27.4.1 from 20.10.16 +- Change golang.org/x/crypto to v0.31.0 from v0.27.0 +- Change golang.org/x/sync to v0.10.0 from v0.8.0 +- Change golang.org/x/sys to v0.28.0 from v0.25.0 +- Change golang.org/x/text to v0.21.0 from v0.18.0 +- {{< breaking >}} Change permissions file structure from `map[string][]interface{}` to a known structure for resource authorization rules +- Change string indexed fields to limit size to 64000 characters to prevent key overflow +- Change SyncOIDCGraphQLResource to respective new cli flag for `syncRoles` +- Change `golang` version from v1.21 to v1.23 +- Change alpha's Cross Origin Resource Sharing (CORS) policy to allow all +- Change Change Data Capture (CDC) payload to include all entity attribute changes in the transaction as a single object, instead of individual attribute changes. +- Change regex to backticks from double quotes to avoid double escaping slashes +- Change the GraphQL parser to gqlgen's from dgraph-io's +- Change to `chi` as default http multiplexer from http.ServeMux + +**Fix** +- Fix authorization rewriter handling `not:{has:acl}` as type handling of `[]interface{}` instead of `interface{}` +- Fix error with custom http error responses not propagating +- Fix security issue CVE-2024-41110 by upgrading docker library to 27.3.1 from 1.13.1 +- Fix security issue with possible SQL injection attack + +**Remove** +- Remove authorization check on mutation +- Remove commented out code as our SCM provides this history already +- Remove unused code or superfluous comparisons + +### Core + +**Add** +- Add check for migration directives in `schema.sdl` for `checkMigrationDependencies` +- Add check to prevent infinite loop when disabling Equipment +- Add deprecation warning for changes to existing configuration keys +- Add data exporter and importer +- Add equipmentActualRollup as a query and accompanying resolvers. +- Add `exportJSON` as mutation +- Add handler for equipmentActualRollup to JobResponse handlers. +- Add migration event as `PENDING` before execution +- Add override of data when existing data in target DB is invalid +- Add support for exporting multiple objects +- Add support for YAML configuration files +- Add tests for migration functionalities +- Add timestamp in import event id so the same JSON can be imported again +- Add update to migration event to `COMPLETE` after execution +- Add update to migration event to update `effectiveEnd` when execution finished +- Add validation check before executing the migration plan strategy +- Add support for separating NATS username and password from connection string + +**Change** +- Change configuration key names (backward compatible with deprecation warning) +- Change import to allow empty `effectiveStart` while importing +- Change import to use `effectiveStart` from DB, if set, when import data is empty +- Change GetJobResponse query adapter to also get child JobResponses and EquipmentActuals.
+- Change github.com/99designs/gqlgen to v0.17.54 from v0.17.47 +- Change golang to v1.23.4 from 1.21 +- Change golang.org/x/crypto to v0.31.0 from v0.30.0 +- Change golangci-lint to v1.60.2 from v1.54.0 +- Change migrated entity versions to be imported with status `DRAFT` +- Change CI/CD docker container build path to `registry.gitlab.com/libremfg/docker/core` from `registry.gitlab.com/libremfg/libre-core` +- Change CI/CD docker version from 20.10.16 to 27.4.1 +- Change libre-schema to v3.1.0 from v3.0.3 +- Change GraphQL resolution of equipmentActualRollup to exist on Job Response instead of top level query +- Change log to write as soon as possible after receiving message from NATS + +**Fix** +- Fix data source update when migrating an active version over the top of a draft version +- Fix `includesPropertiesOf` cloning +- Fix migration of circular HierarchyScope +- Fix JIRA testcase upload pipeline CI/CD step + +### Agent + +**Add** +- Add adapter for MQTT v3.1.1. +- Add field in config to choose MQTT Version. +- Add field in config to choose timestamp field for JSON messages. +- Add `MaxMixedUpdates`, allowing agent to run for a configurable number of failed config checks without access to Keycloak or BAAS. +- Add configuration validation checks. +- Add configuration logging on startup. +- Add generation of config file when prompted with flag and file path in terminal start-up. +- Add obfuscation for sensitive config fields on publish. +- Add support for MQTT and NATS bridge mode. +- Add Kafka egress handler using CloudEvents structured mode. +- Add an adapter for Kafka event streams that implements `DataSourceConnector`, `DataSourceWriter` and `DataSourceSubscriber`. +- Add configuration field for specifying which field to use as timestamp field using `MQTT.timestampField` and it's format with `MQTT.timestampFormat`. +- Add NATS KV support for message deduplication to the MQTT adapters. + +**Change** +- Change OCUPA version to newer version to take advantage of bug fixes. +- Change config to use YAML instead of JSON, and certain configuration variables. +- Change egress to be abstracted behind an interface, add configuration to select egress, inject egress providers into `messageHandler`. +- Change `messageHandler` to support sending a message to multiple egress sources. +- Change to allow Agent to trigger OPCUA commands without NATS. +- Change Kafka adapter to use SegmentIO instead of Confluent library. +- Change NATS reporting topics to native NATS topics. +- Change stats reporting topic to a native NATS topic. +- Change to update topic properties for `dataType` and `messageKeyDeterminedBy` when polling BAAS. + +**Fix** +- Fix issue where Agent would panic if DataSourceTopic DataType is missing. +- Fix issue where `OidcPEP.GetAccessToken` was calling `jwt.ParseWithClaims`, resulting in a failed parse even with a valid Keycloak token. +- Fix issue where Agent would panic if its config didn't have an active version. +- Fixed bug where unable to unmarshal raw string messages received through MQTT. +- Fix issue with BaaS configuration change not updating existing topics. + +**Remove** +- Remove unused configuration fields: `RESTAPI`, `BUFFERS.DataChannelCapacity`, and `AZURE`. +- Remove an extra config file. + +### Audit + +**Add** +- Add `Previous` field to `ValueChange` type, allowing querying of Audit history with Influx. +- Add an HTTP endpoint "/config" that returns the active configuration with passwords, tokens, and client_secrets redacted. 
Endpoint is disabled by default and can be enabled by setting `CONFIGLISTEN` to any value. +- Add custom postgres container with pg-partman and jobmon to optionally allow automated partition management. +- Add logging of configuration options. +- Add obfuscation for sensitive config fields. +- Add pg_partman defaults to postgres init. +- Add publishing the configuration to NATS `\_libre.audit..status` subject. +- Add test cases for configuration. +- Add validation to existing configuration with `ConfigManager`. + +**Change** +- Change CDC payload to accept multiple events per message, by updating `ValueChange` adapter to output an array of `ValueChange`s. +- Change configuration to use YAML instead of JSON. +- Change drivers to use rhize-go `v3.0.0-3`. +- Change env prefix from `LIBRE_AUDIT` to `RHIZE_AUDIT`. +- Change existing configuration options for consistency: `pg` to `postgres`, `pg.user` to `postgres.user`, and other options changed to camel casing. +- Change go to `1.23` from `1.21`. +- Change queries to use parameterized versions to avoid injection attacks. +- Change `influxdb.token` to be optional. +- Change NATS URL configuration to be split between `serverUrl`, `username`, and `password`. +- Change postgres container to have 1 month default partition size. +- Change to use the DTO to Domain conversion to create individual objects for the `delete` and `set` operations for `Event` objects. +- Change value change logging from Trace to Debug. + +**Fix** +- Fix `libreAudit` not accepting audience in access token, now accepts both audit and . +- Fix bug preventing messages from being ACK/NAKed when the Audit record was written to Influx. +- Fix handling for user tag in filter and output. +- Fix error where `commit_ts` and `uid` in postgres init were `integer` instead of `bigint`. + +**Remove** +- Remove unused configuration options: `libreDataStoreGraphQL`, `logging.type`, `OpenTelemetry`, and `influxdb`. + +### Keycloak Theme + +No changes for this release. + +### Router Initialization + +No changes for this release. + +## Compatibility + +{{< compatible "3.1.0" >}} + +## Checksums + +{{% checksums "v3.1.0-checksums.txt" %}} + +## Upgrade + +{{< reusable/backup >}} + +To upgrade to v3.1.0 from v3.0.3, first ensure that you have made corrections for the following breaking changes. + +### Row Level Access Control +Rhize now features row level access control. It also requires a different scopemap than previous versions of BAAS. This is a breaking change. + +The new scope map features rules and jurisdictions. Jurisdictions can be used to create a group of permissions for certain entities within a hierarchy scope. This is useful for allowing contract manufacturing organizations to read and write to the data hub without being able to access the entire manufacturing knowledge graph. + +If you are updating to v3.1.0 from 3.0.3 or lower, update your BAAS scopemap to add a rule with the `admin` role. + +Your scopemap should have the following rule in some form. Note that `admin` is case-sensitive: + +```json +{ + "id": "rule-001", + "description": "Admins can do everything", + "roles": [ + "admin" + ], + "resources": [ + "*" + ], + "actions": [ + "*" + ], + "jurisdictions": [ + "*" + ] +} +``` +Read how to [Configure your scopemap]({{< relref "../deploy/install/row-level-access-control.md" >}}). + +### YAML config files +Rhize services now use YAML instead of JSON for their configuration files. This is represented in Helm overrides by the object `rhizeConfig` (e.g. `rhizeAuditConfig`).
Charts for v3.1.0 should create only YAML config files, though there may be differences in the keys for certain configuration options. Consult the values file for each Helm chart and the changelog for specific changes as necessary. + +### Notable schema changes + +#### Material Lot & Sublot Quantity Unit of Measure +- `quantityUnitOfMeasure` has been deprecated in favor of `quantityUoM` for MaterialLot and MaterialSubLot in order to standardize variable names across the schema. +While `quantityUnitOfMeasure` is deprecated, it will continue to work until it is removed in a future releaase. To migrate before the `quantityUnitOfMeasure` is completely removed, ensure all clients are now using `quantityUoM`: + +1. Query all `quantityUnitOfMeasure` for `MaterialLot` and `MaterialSubLot`, and store information to disk. +1. Delete all relationships for `quantityUnitOfMeasure` from `MaterialLot` and `MaterialSubLot`. +1. Upgrade schema. +1. Patch in stored `quantityUnitOfMeasure` as `quantityUoM` for `MaterialLot` and `MaterialSubLot`. + +#### Job Order Dispatch Status +- `DispatchStatus` has been changed to match the ISA-95 standard. Existing statuses that didn't align are now deprecated. They will continue to work until removed in a future release. It is recommended to migrate clients to the new set. + +#### Operations Event Definition Record Specification Uniqueness +- The `id` field for `OperationsEventDefinitionRecordSpecification` has been changed to use the `@id` directive. Each `id` must now be unique, thereby any duplicate ids will cause issues. To fix: + + 1. Query all `id` on `OperationsEventDefinitionRecordSpecification`, and store information to disk. + 1. Determine any duplicate values. + 1. For any duplicate values, assign a unique `id`. + 1. Patch in any `id` changes. + 1. Upgrade schema. + +#### WorkMaster defines & definedBy +- WorkMaster has been changed to invert `defines` and `definedBy`, such that `defines` is now a list and `definedBy` is now scalar. Depending on what version you are upgrading from you may come across this error when upgrading: + + ```bash + {"errors":[{"message":"input: rpc error: code = Unknown desc = succeeded in saving GraphQL schema but failed to alter Dgraph schema - GraphQL layer may exhibit unexpected behaviour, reapplying the old GraphQL schema may prevent any issues: Type can't be changed from list to scalar for attr: [WorkMaster.definedBy] without dropping it first.\ninput:3: resolving updateGQLSchema failed\n","extensions":{"code":"Error"}}]} + ``` + + To fix: + + 1. Query all `id` for `definedBy` and `defines` on `WorkMaster`, and store information to disk. + 2. Determine any relationship changes. + 3. Acquire a token with BAAS's credentials from Keycloak. + 4. Drop `definedBy` attribute with this command: + + ```bash + curl -X POST /alter -d '{"drop_attr": "WorkMaster.definedBy"}' \ + -H "Authorization: Bearer " + ``` + + 5. Upgrade schema. + 6. Patch in any `defines` and `definedBy` relationships as needed. + +----- + +After you've made the necessary mitigations, follow the [Upgrade instructions](/deploy/upgrade). 
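+As a reference point for step 1 of the Material Lot migration above, the export query might look roughly like the following. Treat this as a hedged sketch: the exact operation, filter, and nested fields depend on your schema version and are not verified here.
+
+```gql
+# Hedged sketch of exporting quantityUnitOfMeasure before the upgrade (names assumed).
+query ExportMaterialLotUoM {
+  queryMaterialLot {
+    id
+    quantityUnitOfMeasure {
+      id
+    }
+  }
+}
+```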
diff --git a/content/versions/v3.2.1/releases/3-2-0.md b/content/versions/v3.2.1/releases/3-2-0.md new file mode 100644 index 000000000..b9dd86f50 --- /dev/null +++ b/content/versions/v3.2.1/releases/3-2-0.md @@ -0,0 +1,131 @@ +--- +title: 3.2.0 +date: '2025-07-01T15:13:10-04:00' +description: Release notes for v3.2.0 of the Rhize application +categories: ["releases"] +weight: 1666670228 ## auto-generated, don't change +--- + +Release notes for version 3.2.0 of the Rhize application. + +_Release date:_ +14 Jul 2025 + +## Changes by service + +The following sections document the changes this release brings to each service. + +### Admin + +#### Fix + +- Fix character set in Personnel Class form's include properties of search to allow for capitalized characters +- Fix label on personnel class change version change to align with entity name +- Fix personnel class change version state disable condition to allow for draft +- Fix segment editor dependency save + +### BPMN engine + +#### Fix + +- Fix getInstance race condition +- Fix repeated call activity error logging & add span end to error return + +### Schema + +No changes, releasing in step with other software repositories. + +### BAAS + +#### Add + +- Add kafka producer maximum message size +- Add @custom directive +- Add websocket transport to allow for GraphQL Subscriptions +- Add http change-data-capture sink +- Add admin resolver for query:lookup, mutation:rollup, mutation:recoverSplitList & mutation:indexRebuild +- Add logging to badger ErrTooBig + +#### Change + +- Change NATS Sink handler to support new CDC Format +- Change badger to v4 from v3 +- Change ristretto to v2 from v1 +- Change protobuf for badger and regenerate +- Change postings cache to align with generic declaration in ristretto v2 +- Change postinglistCountAndLength function to improve performance +- Change ioutil.ReadAll to io.ReadAll and ioutil.TempDir to os.MkdirTemp + +#### Fix + +- Fix cascade directive field arguments not being coerced to lists +- Fix deleteBelowTs rollup issue +- Fix incrRollupi process to ensure consistent use of time units and prevent erroneous cleanup +- Fix performance issue in type filter +- Fix resolution of _Any scalar type by moving from apolloSchemaExtras to schemaInputs +- Fix RLAC resources not evaluated correctly +- Fix the conflict in accessing split parts during a rollup +- Fix validation panic on type check +- Fix WAL replay issue during rollup +- Fix wget URLs for large datasets in testing pipeline + +#### Remove + +- Remove Ludicrous mode from postings + +### Core + +#### Add + +- Add sort on inherited properties + +#### Change + +- Change golangci-lint to v2.0.1 + +#### Fix + +- [CI] Fix resolution of app-config-local to use git instead of package for end-to-end test stage mock environment +- Fix lint errors + +### Agent + +Releasing in step with other components. + +### Audit + +#### Add + +- [CI] Add vulnerability check to CI + +#### Change + +- Change to use v4 rhize-go drivers to allow an https Keycloak connection +- Change `rhize-go` library to v4.0.0-rc4 to allow usage of username and password in NATS configuration + +#### Fix + +- Fix relevant [Go vulns](https://pkg.go.dev/vuln/) + +### Keycloak Theme + +#### Change + +- Change application name to Rhize + +### Router + +Releasing in step with other components. + + +## Compatibility + +{{< compatible "3.2.0" >}} + +## Checksums + +{{% checksums "v3.2.0-checksums.txt" %}} + +## Upgrade + +To upgrade to v3.2.0, follow the [Upgrade instructions](/deploy/upgrade).
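+The `@custom` directive added to the BAAS service above enables schema-level HTTP resolvers. Assuming it follows the upstream Dgraph convention (not verified for this release), a declaration looks roughly like this; the URL, field, and types are placeholders:
+
+```gql
+# Hedged sketch: assumes Dgraph-style @custom syntax; the URL, field, and types are placeholders.
+type Query {
+  exampleExternalLookup(id: String!): String @custom(http: { url: "https://example.internal/lookup", method: "GET" })
+}
+```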
diff --git a/content/versions/v3.2.1/releases/3-2-1.md b/content/versions/v3.2.1/releases/3-2-1.md new file mode 100644 index 000000000..e5b0a6737 --- /dev/null +++ b/content/versions/v3.2.1/releases/3-2-1.md @@ -0,0 +1,76 @@ +--- +title: 3.2.1 +date: '2025-08-04T11:45:10-04:00' +description: Release notes for v3.2.1 of the Rhize application +categories: ["releases"] +weight: 1666670228 +--- + +Release notes for version 3.2.1 of the Rhize application. + +_Release date:_ +8 Aug 2025 + +## Changes by service + +The following sections document the changes this release brings to each service. + +### Admin + +#### Fix + + - Fix audit trail user timeout in large datasets by querying tags over the time range instead of entire time range + - Fix portal link visibility by trimming whitespace on environmental variable + +### BPMN engine + +Releasing in step with other components. + +### Schema + +Releasing in step with other components. + +### BAAS + +#### Fix + +- Fix list intersection detection on large values to prevent try/abort loop on startup + +#### Remove + +- Remove superfluous logging of cascade directives + +### Core + +Releasing in step with other components. + +### Agent + +#### Fix + - Fix kafka repeated errors when not configured by optionally checking for kafka at startup + +### Audit + +#### Fix + +- Fix audit tag user query timeout by adding optional time range to filter and optionally configuring http server timeouts + +### Keycloak Theme + +Releasing in step with other components. + +### Router Initialization + +Releasing in step with other components + +## Compatibility + +{{< compatible "3.2.1" >}} + +## Checksums + +{{% checksums "v3.2.1-checksums.txt" %}} + +## Upgrade + +To upgrade to v3.2.1, follow the [Upgrade instructions](/deploy/upgrade). diff --git a/content/versions/v3.2.1/releases/_index.md b/content/versions/v3.2.1/releases/_index.md new file mode 100644 index 000000000..bf8000849 --- /dev/null +++ b/content/versions/v3.2.1/releases/_index.md @@ -0,0 +1,13 @@ +--- +title: Releases +description: Documentation about new features and upgrade instructions. +weight: 1000 +identifier: releases +cascade: + icon: rss +--- + +Read about new Rhize features and how to upgrade versions. + +{{< card-list >}} + diff --git a/content/versions/v3.2.1/releases/changelog/3-0-0.md b/content/versions/v3.2.1/releases/changelog/3-0-0.md new file mode 100644 index 000000000..fe36566a4 --- /dev/null +++ b/content/versions/v3.2.1/releases/changelog/3-0-0.md @@ -0,0 +1,230 @@ +--- +title: 3.0.0 (general release) +date: '2024-03-25T11:09:55-05:00' +description: Change Log for v3.0.0 of the Rhize application +categories: ["releases"] +weight: 1709033170 ## auto-generated, don't change +--- + +Changelog for version 3.0.0 of the Rhize application. 
+ +_Release date:_ 25th March, 2024 + +## Breaking changes + +## Changes by service + +### Admin UI + +**Features** + +- Add `includesPropertiesOf` input when creating a new version of OperationsEventClass +- Add `IncludesPropertiesOf` option to Equipment Class General tab +- Add `momentjs` for timezones as select in Equipment Version `timezone` field +- Add BPMN side panel for OPCUA Method Call +- Add clear option to WorkMaster disable modal +- Add edits for Static properties +- Add enable and disable functionality to work calendar definition +- Add EquipmentClass ISA-95 property type selection +- Add EquipmentLevel to Equipment Class +- Add inheritance of Operational Location Class properties into Operational Location +- Add inherited properties from linked Operations Event Class to Operations Event Class properties page +- Add manufacturer to Physical Asset Class general tab +- Add new modal to BPMN +- Add new Work Calendar Definition select on Equipment Version management +- Add Operational Location and Spatial Definition to Equipment General tab +- Add OperationsEventClass to OperationsEventDefinition +- Add option to disable Operations Definition +- Add option to disable previous work master version when create a new version +- Add option to disable Work Calendar Definition Property +- Add option to disable Work Calendar Entry +- Add option to enable a disabled Process Segment +- Add page to add manual Work Calendar Entries +- Add pagination to Work Calendar Entries +- Add properties to Work Calendar Definition +- Add Published date and Hierarchy Scope to WorkMaster +- Add relationship between Operations Event Definition and Work Master +- Add scrollbar to Work Calendar Definition +- Add Spatial Definition to Physical Location & Physical Asset +- Add start weekday selection to Work Calendar Definition +- Add static properties on Equipment Class +- Add timezone on Work Calendar +- Add timezone on Work Calendar Entry and Equipment +- Add user store to UserManager and set automatic silent renew +- Add Work Calendar Definition manage entries component +- Add Work Calendar Definition management +- Add Work Calendar Setup + +**Change** + +- Change ability to edit approved version of an Operations Event Class +- Change ability to edit for review version of a Material Class +- Change data source topic to prevent usage of `.` in the name +- Change input validation on Work Calendar Definitions to prevent use of dots `.` +- Change input validation on Work Calendars to prevent spaces +- Change Material Definition Properties table sort order +- Change the Work Calendar Definition Entry card design to use ne card props + +**Fix** + +- Fix `Get Started` action trigger +- Fix ability to create an Operational Location Class Version +- Fix ability to create duplicate property labels for the same Physical Asset Class +- Fix ability to edit an Active Process Segment +- Fix ability to edit an Active Work Calendar +- Fix ability to edit Operations Definition Segment Parameter +- Fix ability to edit Person Versions when in Draft or For Review State +- Fix adding new Material Definition Property +- Fix adding Personnel Class property by adding safe operator to prevent table from breaking under certain pre-conditions +- Fix allowing edit of an approved Physical Asset +- Fix Audit Log date format +- Fix auto-increment of Person Version, when creating a new one +- Fix BPMN Viewer Get Instance invalid syntax error +- Fix changing Process Segment version status from Draft to Active +- Fix disable of Material 
Definition +- Fix disabled nature of Hierarchy Scope when the selected Physical Asset Class version is in Draft or For Review +- Fix disabled state of buttons when creating new Equipment +- Fix display of BPMN instances that were created with the active version +- Fix display of Equipment Property metadata +- Fix display of inherited properties in Personnel Class +- Fix display of linked Material Class properties +- Fix display of more than 1000 Equipment Class in the left sidebar +- Fix display of more than 1000 Equipment in the left sidebar +- Fix display of more than 1000 Data Source in the left sidebar +- Fix display of more than 1000 Hierarchy Scope in the left sidebar +- Fix display of more than 1000 Material Class in the left sidebar +- Fix display of more than 1000 Material Definitions in left sidebar +- Fix display of more than 1000 Operational Definition in the left sidebar +- Fix display of more than 1000 Operational Location Class in the left sidebar +- Fix display of more than 1000 Operational Location in the left sidebar +- Fix display of more than 1000 Person in the left sidebar +- Fix display of more than 1000 Personnel Class in the left sidebar +- Fix display of more than 1000 Work Calendar in the left sidebar +- Fix display of Person Property metadata +- Fix display of value field in linked properties from Resource Specifications +- Fix duplicated column name on Operational Location Class Property and Property Type +- Fix edit of linked property on resource specifications +- Fix environmental variable styling and navigation +- Fix Equipment Class version save-as with properties +- Fix Equipment general tab input boxes to be disabled when ACTIVE or APPROVED +- Fix Equipment Property error when Work Calendar Definition query failed +- Fix error message when creating a Work Calendar Definition Entry with a duplicate ID +- Fix error message when creating a Work Calendar Definition with a duplicate ID +- Fix heading of Data Source modal when changing to For Review +- Fix heading of disable modal for Material Class to remove version +- Fix heading on BPMN Editor modal for marking current BPMN as Active +- Fix heading on changing Data Source Version from Draft to Active +- Fix hierarchy scope persisting after creating a new Work Calendar Definition +- Fix hierarchy scope selection on Physical Asset general tab +- Fix inconsistent display of inputs when editing draft Data Source +- Fix instance list repeat queries using start/end times +- Fix manual page refresh to display added Operations Definition +- Fix Material Definition inherited properties filter on Draft versions +- Fix Material Definition property enable/disable +- Fix missing display of inherited properties in Person from Personnel Class +- Fix mount localStorage data as moment object to prevent errors and add a mount the table on load the page +- Fix silenced error message when using createSecret +- Fix Process Segment Parameter search +- Fix Operations Segment Parameter search +- Fix query to support Custom Query without broke the entire response +- Fix re-enable of disabled Work Calendar Definition Entry +- Fix references to Physical Asset in Operations Event pages +- Fix refresh of Equipment when adding using the `+` symbol +- Fix refresh when adding a new Operations Definition version +- Fix required validation for Hierarchy Scope on Equipment Class Version +- Fix required validation for Hierarchy Scope on Physical Asset Version +- Fix search for Equipment Class Property by type and UoM +- Fix See Active toggle 
functionality on Material Definition Properties +- Fix sidebar order for Work Calendar Entries +- Fix Work Calendar Definition enable and disable code review fixes +- Fix Work Calendar Definition Properties search box placeholder text +- Fix WorkMaster search + +### Agent + +No changes since previous release. + +### Audit + +**Features** + + - Add storage option for postgres + +**Fix** + +- Fix environmental variable conflict + +### BAAS + +No changes since previous release. + +### BPMN + +**Features** + +- Add viper configuration for http timeout values + +**Fix** + +- Fix one workflow deprecating when two subscribe to message start +- Fix execution of nested call activity when depth greater than 2 + +**Change** + +- Change JSONata-go library to v1.8.4 from v1.6.6 + +### Core + +**Features** + +- Add `hierarchyScope` to `CreateEquipmentClassVersionInput` +- Add `includesPropertiesOf` to `CreateEquipmentClassVersionInput` +- Add `includesPropertiesOf` to `CreateOperationsEventClassVersionInput` +- Add `inheritedProperties` to `OperationalLocationClassVersion` +- Add `inheritedProperties` to `OperationsEventClassVersion` +- Add `inheritedProperties` to `PersonnelClassVersion` +- Add `samplingRate` configuration option to OpenTelemetry +- Add data source methods when cloning a `DataSourceVersion` +- Add hostname to data source payload +- Add service hostname to trace spans +- Add specific error message for binding path that evaluates to an empty string + +**Change** + +- Change core's GraphQL schema to extend `MOMFunctionEnum`, `OperationsEventLevelEnum` and `OperationsEventTypeEnum` +- Change domain logger to respect `logging.level` configuration option +- Change GraphQL query to include option to retrieve disabled properties and include by default +- Change schema to use `v3.0.0x` +- Change to use a version's `iid` when getting operational location class inherited properties +- Change to use a version's `iid` when getting personnel class inherited properties + +**Fix** + +- Fix `operationsEventClasses` to `CreateOperationsEventDefinitionVersionInput` +- Fix rule trigger evaluating on stale data + +### Schema + +**Features** +- Add `@id` to `WorkCalendarDefinitionEntry` +- Add `@search` to `requestState` on `OperationsRequest` +- Add `@search` to `segmentState` on `SegmentRequirement` +- Add `comments` to `MaterialLot` and `MaterialSublot` +- Add `comments` to `SegmentRequirement` +- Add `effectiveStart` and `effectiveEnd` to `WorkCalendarDefinition` +- Add `effectiveStart` and `effectiveEnd` to `WorkCalendarDefinitionEntry` +- Add inverse relationships for `HiearchyScope` +- Add inverse relationships for `OperationalLocation` +- Add relationship between `MaterialSublot` and `MaterialDefinition` +- Add resource actual links to `InformationObject` +- Add Work Calendar Information models +- Add Work Calendar Information models to `permissions.json` +- Add fields `@cascade` directive to subgraph + +**Changes** +- Change `OperationsRequest` relationship to `OperationsSchedule` to optional +- Change `startRule` to required on `WorkCalendarDefinitionEntry` + +## Upgrade + +To upgrade to v3.0.0, follow the [Upgrade instructions](/deploy/upgrade). 
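+For readers unfamiliar with the schema directives mentioned above, `@id` and `@search` are declared on fields in the Dgraph-flavoured GraphQL SDL that the schema service uses. A hedged sketch of what such an addition looks like (field types and search arguments are assumptions for illustration):
+
+```gql
+# Illustrative only: field types and search arguments are assumptions.
+type OperationsRequest {
+  id: String! @id
+  requestState: String @search(by: [hash])
+}
+```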
diff --git a/content/versions/v3.2.1/releases/changelog/3-0-0rc05.md b/content/versions/v3.2.1/releases/changelog/3-0-0rc05.md new file mode 100644 index 000000000..6bb3f1e25 --- /dev/null +++ b/content/versions/v3.2.1/releases/changelog/3-0-0rc05.md @@ -0,0 +1,112 @@ +--- +title: 3.0.0rc05 +date: '2023-10-31T20:32:34-03:00' +description: Release notes for v3.0.0rc05 of the Rhize application +categories: ["releases"] +weight: 1709033486 ## auto-generated, don't change +aliases: + - "/releases/3-0-0rc05/" +--- + +Changelog for version 3.0.0rc05 of the Rhize application. + +_Release date:_ October 24, 2023 + +## Breaking changes + +- Renamed UI environment variables and changed `KEYCLOACK` to `KEYCLOAK` + +## Changes by service + +### Core + +**Features** +- Add `deleteSyncEquipmentsFromDBtoNATSStatus` mutation +- Add NATS connection name +- Add fields to `InformationObject` + +**Fixes** +- Fix tracing typo + +**Changes** +- Change Core to purge keys that are not in the the database +- Change core to delete property value from KV when the equipment no longer have active version + +### BPMN + +**Features** +- Add DQL Query Service Task Handling +- Add DQL Mutate Service Task Handling +- Add verbose log to `HandleTaskComplete` +- Add default log level to `config.json` +- Add GraphQL resolvers to get by key from known KV Stores +- Add Errors to `GetWorkflowSpecification` query +- Add shutdown handling to drain `commandConsumer` before shutting down +- Add a basic Intermediate Timer Event + +**Changes** +- Change NATS connection to include a client Name including hostname +- Change NATS server library to `v2.10.2` from `v2.9.9` +- Change NATS client library to `v1.30.2` from `v1.21.0` +- Change NATS GET/PUT error messages to be more verbose +- Change to synchronize `WorkflowSpecifications` to NATS once, instead of every possible update +- Change `CallActivity` to enforce variable context mapping + +**Fixes** +- Fix linking Workflow Specifications by IID +- Fix `NextVersion` datatype in `GetWorkflowSpecificationNextVersion` query +- Fix referencing duplicate nodes in `LoadBpmnFromXml` +- Fix duplicate `WorkflowMessage` Error on Import +- Fix interpretation of escape characters on Linux + +**Remove** +- Remove `printf` statements from `GraphQLQueryAdapter` +- Remove license scanning from CI +- Remove slow execution debugging spans + +### Agent + +**Features** +- Add value to OPC-UA Value span +- Add Edge-Agent heartbeat details + +**Changes** +- Change OPC-UA Value span to log error if status isn't ` OK (0x0)` +- Change agent to filter out disabled topics + +**Fixes** +- Fix OPC-UA Subscription Statistics panic in test suite + +### Admin UI + + +**Breaking changes** +- Renamed environment variables and change `KEYCLOACK` to `KEYCLOAK` + +**Features** +- Add parameter specification in `WorkMaster` +- Add Equipment Property Test +- Add data migration popup auth +- Add BPMN Node Template for Schema Validation + +**Changes** +- Change Libre to Rhize + +**Fixes** +- Fix no download option for properties on Person +- Fix no download option for properties on Personnel Class + +### Schema + +**Features** +- Add search to `MaterialUse` on `OperationsDefinition`, `OperationsSchedule` and `OperationsPerformance` Models +- Add search to `EquipmentUse`, `Personneluse` & `PhsicalAssetUse` on `OperationsDefinition` +- Add fields to `InformationObject` + +**Fixes** +- Add missing `@id` +- Fix omitting `omitempty` for non-pointer Boolean types + +## Upgrade + +To upgrade to `v3.0.0rc05, follow the [Upgrade 
instructions](/deploy/upgrade). diff --git a/content/versions/v3.2.1/releases/changelog/3-0-0rc06.md b/content/versions/v3.2.1/releases/changelog/3-0-0rc06.md new file mode 100644 index 000000000..4a16b7d96 --- /dev/null +++ b/content/versions/v3.2.1/releases/changelog/3-0-0rc06.md @@ -0,0 +1,95 @@ +--- +title: 3.0.0rc06 +date: '2023-10-31T21:07:43-03:00' +description: Release notes for v3.0.0rc06 of the Rhize application +categories: ["releases"] +weight: 1709033362 ## auto-generated, don't change +aliases: + - "/releases/3-0-0rc06/" +--- + +Changelog for version 3.0.0rc06 of the Rhize application. + +_Release date:_ October 31, 2023 + +## Breaking changes + +- NATS streams `libreBpmn_command` and `LibreTimerStart` must be deleted prior to starting + +## Changes by service + +### Core + +**Features** +- Add mutation for dependency check for `DataSource`, Equipment, `EquipmentClass`, `OperationalLocation`, and `OperationalLocationClass` + +**Changes** +- Change to go-module for schema +- Change struct literal unkeyed fields to keyed + + +### BPMN + + +**Breaking Changes** +- NATS streams `libreBpmn_command` and `LibreTimerStart` must be deleted prior to starting + +**Features** +- Add incoming `libreBPMN_command` data to traces for debugging on trace level logging +- Add notification to NATS of command progress +- Add JSONata processing of output-element in multi-instance execution +- Add JSONata processing of intermediate timer catch duration + +**Changes** +- Change DQL mutation node to use `application/rdf` +- Change the BPMN process ID to match the trace ID +- Change to zero-based loop counter for multi-instance execution +- Change multi-instance nodes error early on sequential parallel execution (not implemented) + +**Remove** +- Remove legacy domain code + +**Fixes** +- Fix multi-instance requiring sequential for parallel execution +- Fix govulncheck identified issues + +### Agent + +**Changes** +- Change to libre-schema go module import + +**Remove** +- Remove empty GraphQL API endpoint + + +### Admin UI + +**Features** +- Add Physical Asset to Sidebar +- Add Certificate Authority input option to Rest Service Task + +**Fixes** +- Fix large memory usage in production +- Fix Work Master UI Issues + +### Schema + + +**Features** +- Add search by hash to material use +- Add signature to record entries +- Add requirements for for dependency changes + +**Changes** +- Change domain to be a go-module for import + +**Fixes** +- Fix permissions generation with new gqlgen + +**Remove** +- Remove entity interface from generated code + + +## Upgrade + +To upgrade to v3.0.0rc06, follow the [Upgrade instructions](/deploy/upgrade). diff --git a/content/versions/v3.2.1/releases/changelog/3-0-0rc07.md b/content/versions/v3.2.1/releases/changelog/3-0-0rc07.md new file mode 100644 index 000000000..73d4c44ff --- /dev/null +++ b/content/versions/v3.2.1/releases/changelog/3-0-0rc07.md @@ -0,0 +1,133 @@ +--- +title: 3.0.0rc07 +date: '2023-11-15T09:32:12-05:00' +description: Release notes for v3.0.0rc07 of the Rhize application +categories: ["releases"] +weight: 1709033190 ## auto-generated, don't change +aliases: + - "/releases/3-0-0rc07/" +--- + +Changelog for version 3.0.0rc07 of the Rhize application. 
+ +_Release date:_ 15th November, 2023 + +## Breaking changes + +- [SCHEMA] Change types `OperationalLocationClass`, `OperationalLocationClassVersion`, `OperationalLocationClassProperty`, `OperationalLocation`, `OperationalLocationVersion` and `OperationalLocationProperty` to have: `isPartOf (0..1)`, `isMadeUpOf (0..*)` +- [BPMN] Change CommandConsumer and Timers to use new JetStream library and durable consumers. This requires you to drop and re-create streams `KV_JobResponses`, `KV_WorkflowSpecifications`, `libreBpmn_Command` and `libreTimerStart`. + +## Changes by service + +## Admin UI + +**Features** +- Add ability to create Process Segment Version +- Add ability to edit linked process segment resource specification property +- Add ability to link a Hierarchy Scope to a Operational Location Class version +- Add check for renaming a linked property with an existing property name +- Add Homepage screen +- Add optional link from Operational Location Class to a Operational Location Class version +- Add Physical Asset Properties +- Add Physical Asset Resource Specifications + +**Change** +- Change available BPMN UI palette options to supported objects only + +**Fix** +- Fix incorrect version indicators in Data Source sidebar +- Fix Process Segment Version Bug +- Fix sidebar typo in Work Masters +- Fix template service task hiding multi-instance properties + +**Remove** +- Remove the unused or unsupported BPMN elements from the BPMN UI + +### Agent + + +**Features** +- Add hostname as service instance to otel span + +**Fix** +- Fix invalid errors reported to OTEL + +### BAAS + + +**Changes** +- Change CDC to use a JetStream from KV Store + +**Fixes** +- Fix getting user from authorization token for setting `_modifiedBy` and `_createdBy` + +### BPMN + + +**Features** +- Add a flag to bypass any OIDC requirements so that we can run BPMN without security enabled +- Add fallback to BAAS when NATS fails in `HandleTaskComplete` +- Add input validation on process id to check for dots in the name +- Add option for custom BPMN complete variable context +- Add OS hostname to service instance in otel spans +- Add port for adapter debugger so that adapter runtime configuration and information can be queried +- Add process ID to log when starting a new instance +- Add retry backoff to NATS KV Get +- Add string trim logic to all inputs/outputs on BPMN upload +- Add test case for High Availability +- Add token argument to `bpmnctl` to allow users to pass a token directly + +**Change** +- Change BPMN to NAK messages for unknown timers/streams to avoid dropping messages on startup +- Change CI/CD to use a minimal docker compose `docker-compose.ci.yml` from app-config-local +- Change logging message type based on error type when CreateAndRunInstance is called +- Change NATS client library to v1.31.0 from v1.30.2 +- Change NATS KV watchers to immediately defer stop to ensure lifecycle handling +- Change Parallel gateway join to use a GetOnce KV Get + +**Fix** +- Fix goroutine leak on ack pending + +### Core + + +**Features** +- Add dependency check to operations definition & work master +- Add docker login for CI/CD +- Add Equipment KV sync on startup +- Add OIDC bypass functionality when running in test pipelines +- Add `OperationsEventClass` Version Handlers + +**Change** +- Change CI/CD to use docker-compose.ci from `app-config-local` +- Change to Libre Schema `v3.0.0rc7` +- Change subscriptions and watchers to wait until ready before starting synchronization +- Change to use libre-schema as a golang 
module instead of copying + +**Remove** +- Remove IntelliJ IDE workspace directory and files `./.idea/*` + +### Schema + + +**Features** +- Add Comments to `OperationsEvent` +- Add example `docker-compose.yaml` usage +- Add missing types for `_createdBy` and `_modifiedBy` +- Add Relationships to Class and Definition Versions +- Add Resource Relationship Network Model + +**Change** +- Change dockerfile to use baas v3.0.0rc7 +- Change library [`golang.org/x/crypto`](https://golang.org/x/crypto) to v0.15.0 from v0.14.0 +- Change library [`golang.org/x/net`](https://golang.org/x/net) v0.18.0 from v0.16.0 +- Change library [`golang.org/x/sync`](https://golang.org/x/sync) v0.5.0 from v0.4.0 +- Change library [`golang.org/x/tools`](https://golang.org/x/tools) v0.15.0 from v0.14.0 + +**Fix** +- Fix missing defaults on `_createdBy` and `_modifiedBy` +- Fix test for Signature relationship to `recordEntries` + +## Upgrade + +To upgrade to v3.0.0rc07, follow the [Upgrade instructions](/deploy/upgrade). diff --git a/content/versions/v3.2.1/releases/changelog/3-0-0rc08.md b/content/versions/v3.2.1/releases/changelog/3-0-0rc08.md new file mode 100644 index 000000000..2b16efc83 --- /dev/null +++ b/content/versions/v3.2.1/releases/changelog/3-0-0rc08.md @@ -0,0 +1,177 @@ +--- +title: 3.0.0rc08 +date: '2023-12-19T13:01:02-05:00' +description: Release notes for v3.0.0rc08 of the Rhize application +categories: ["releases"] +weight: 1709033173 ## auto-generated, don't change +aliases: + - "/releases/3-0-0rc08/" +--- + +Changelog for version 3.0.0rc08 of the Rhize application. + +_Release date:_ 19th December, 2023 + +## Breaking changes + + - [ADMINUI] Change UI navigation bar design + - [SCHEMA] Change `OperationsEventClass.IncludedIn` to reference an `OperationsEventClassVersion` instead of `OperationsEventClass` + - [SCHEMA] Change `OperationsEventClass.IncludesPropertiesOf` to move to the version instance, `OperationsEventClassVersion.IncludesPropertiesOf`, instead of header + - [SCHEMA] Change `OperationsEventDefinition.IncludedIn` to reference an `OperationsEventDefinitionVersion` instead of `OperationsEventDefinition` + - [SCHEMA] Change `OperationsEventDefinition.IncludesPropertiesOf` to move to the version instance, `OperationsEventDefinitionVersion.IncludesPropertiesOf`, instead of header + - [SCHEMA] Change `FromResourceReference` and `ToResourceReference` to combine them into a single `resourceReference` + + +## Changes by service + +### Admin UI + + +**Features** +- Add Audit Trail View in UI +- Add BPMN Instance Viewer +- Add Operations Definition Segment Specifications +- Add Operations Event Class Page +- Add Operations Event Definition Page +- Add option to change Personnel Class version status from `DRAFT` to `ACTIVE` +- Add property metadata to Material Definition Properties page +- Add the table for parameters and physical assets + +**Change** +- Change Audit Log to GraphQL Playground +- Change Picker implementation + +**Fix** +- Fix person with two `ACTIVE` versions + +- Fix parameter tab +- Fix selection of `WorkMaster` parameter selection + +**Remove** +- Remove ability to edit active version Data Source general properties + +### Agent + + +**Features** +- Add check to avoid continuous resubscription to bad OPCUA topics +- Add interactive OPC UA server for end-to-end testing +- Add support for Azure Service Bus +- Add support for MQTT + +**Change** +- Change OPC-UA subscription item reference strategy to use ClientHandles, MonitoredItems, and Node Ids in order +- Change agent to buffer 
protocol messages to disk if NATS is offline to avoid message loss +- Change data source interfaces into smaller pieces for readability and cognitive complexity +- Change from `scratch` to `alpine` base image +- Change monitored Items with bad status behavior to moved to a new subscription after a configurable timeout to encourage the OPC UA server to start providing value changes again + +**Fix** +- Fix issue with `gopcua` client that resulted in OPC UA Session not being recreated after a loss of Secure Channel on reconnect + +### Audit + + +**Features** +- Add GraphQL Subgraph to query audit log and query audit log tags +- Add Influx setup if buckets not available +- Add InfluxDB as data sink +- Add configuration option scanning via configuration file, environment, and command line arguments +- Add restart of consumer on NATS reconnect +- Add subscription of audit events +- Add write to data sink + +### BAAS + + +**Features** +- Add `_modifiedBy` user to Audit Event +- Add check for required OIDC Roles +- Add warning for missing `ScopeMap` parameter when using OIDC Bypass + +**Fix** +- Fix dgraph hanging on shutdown request + +**Remove** +- Remove wait groups for Enterprise Dgraph ACL functionality +- Remove license scanning CI/CD job + +### BPMN + + +**Add** +- Add Async Publish Error logging to NATS KVs +- Add environmental variable expansion to json-schema service task +- Add graceful shutdown to command consumer port +- Add log message and time delay to `CallActivity` watcher +- Add multi-file JSON schema validation +- Add profile labels to go-routine launches + +**Change** +- Change `CallActivity` to event driven as opposed to a blocking go-routine to wait for complete of a sync call +- Change `InProgess` message to `20s` on `CommandConsumer` from `29s` +- Change `libreBpmn.command.` strings to use domain constant +- Change debug level log messages for timer checks and active workflows to trace level +- Change log level of gateways without inputs to trace level from error +- Change to git commits to use LN on *.go files + +**Fix** +- Fix BPMN long save times by only updating the touched Workflow Specification +- Fix NATS reconnect re-subscribing to startOnNATS Topics +- Fix docker permissions in end-to-end CI/CD test case +- Fix memory leak in OIDC context value recursively growing +- Fix panic on nil `workflowspec` in `HandleTaskComplete` + +### Core + +**Features** +- Add `hierarchyScope`, `materialAlternate`, `spatialDefinition` and `unitOfMeasure` as information objects to `GetOperationsEvent` `operationsEventRecords` +- Add agent MQTT message handling +- Add binding path test cases +- Add check for empty migration records +- Add check for migration dependencies on Operations Event Record Entry +- Add debug logging to `updateOperationsEventRecordEntry` +- Add entity path to migration dependency checks +- Add initialization for Azure stream +- Add label to `OperationsEventClassProperty`, `OperationsEventDefinitionProperty`, `OperationsEventProperty` +- Add operations event definition versioning mutations +- Add option for Azure data source type +- Add option to activate newly created version if requested + +**Change** +- Change Equipment Class rule triggered event to immediately publish to NATS and then be picked up by Core instead of waiting an triggering to prevent libre-core shutdowns missing the event fire +- Change Operations Event Record Entry migration to remove existing children before checking for migration dependencies +- Change `IncludesPropertiesOf` to be on the 
version not the header of `OperationsEventDefinition` & `OperationsEventClass` +- Change async `SaveVersionAs` because other cases have been tested in sync tests +- Change consumer creation to delete/add consumer if it fails to update consumer +- Change database ping to allow `no access token provided` and `context cancelled` when pinging database +- Change default logger to use `hostname` instead of PID +- Change logging messages to reflect data source type +- Change migrations to remove existing children before checking for migration dependencies + +**Remove** +- Remove obsolete comments + + +### Schema + +**Features** +- Add @id for OperationsEventDefinitionProperty +- Add `JobOrder` parent/children relationship +- Add `stateTransitionInstance.previous`, `.next`, and `.comments` +- Add automatic scopemap update step +- Add azure for datasource protocol +- Add comments to `operationsSegment` +- Add event subtype to event +- Add label to `OperationsEventClassProperty` +- Add label to `OperationsEventDefinitionProperty` +- Add label to `OperationsEventProperty` +- Add permission holder for Audit +- Add reason and status to operations event + +**Changes** +- Change mermaid diagrams to include recent changes + +## Upgrade + +To upgrade to v3.0.0rc08, follow the [Upgrade instructions](/deploy/upgrade). diff --git a/content/versions/v3.2.1/releases/changelog/3-0-0rc09.md b/content/versions/v3.2.1/releases/changelog/3-0-0rc09.md new file mode 100644 index 000000000..2ed7da71a --- /dev/null +++ b/content/versions/v3.2.1/releases/changelog/3-0-0rc09.md @@ -0,0 +1,102 @@ +--- +title: 3.0.0rc09 +date: '2024-03-25T11:09:55-05:00' +description: Change Log for v3.0.0rc9 of the Rhize application +categories: ["releases"] +weight: 1709033172 ## auto-generated, don't change +--- + +Changelog for version 3.0.0rc9 of the Rhize application. + +_Release date:_ 27th February, 2024 + +## Breaking changes + +## Changes by service + +### Admin UI + +No changes since previous release. 
+ +### Agent + +**Features** + +- Add check to validate OPC UA topics against server metadata +- Add hostname in message payload +- Add logic for MQTT connections to resubscribe on disconnect +- Add option to deconstruct complex JSON payloads to simple types + +### Audit + +**Features** + +- Add previous audit value relative to current + +### BAAS + +**Changes** + +- Change server aborts due to a conflict to report the error and block transaction that was aborted + +### BPMN + +**Features** + +- Add a trigger for a NATS message on end of a Work Calendar Entry +- Add BPMN Run Instance flag to log variables on every task +- Add check for 0 duration when calculating next entry +- Add domain entities for tempo/loki queries and generalize them +- Add go profile guided optimisation +- Add improved error logging to view instance +- Add named constants for constant strings +- Add test case for call activities in high availability +- Add timeouts to `http.Server` +- Add type type assertion checks +- Add warn when not enough data is returned to process nodes to spans + +**Change** + +- Change application to refresh token before fetching work calendar +- Change BPMN Datasource Method Call Method arguments to allow map[string]any, map[int]any, or string +- Change BPMN engine execution to use JetStream errors for parallel gateway check +- Change BPMN engine to expand variables and secrets before sanitizing NATS subject +- Change BPMN Instances to use a unique consumer name +- Change BPMN to error if Router unavailable on startup +- Change CSM to include child job to avoid relying on NATS to synchronize +- Change NATS KV Get timeout to 34 attempts from 7 +- Change stream expiry to 10 minutes for CommanStreamReplicas +- Change stringified ints into non-stringy types (e.g. durations as time.Duration) +- Change tempo/loki facing code into separate driver +- Change timers to start calculating from the closest year +- Change to only log task variables on Complete or Error +- Change VersionState to use GraphQL Enums +- Change View Instance to pull the latest version when unspecified +- Change Work Calendar invocations to use natsClient.Publish instead of natsClient.StreamPublishMsg + +**Fix** + +- Fix issue where timers are being called inconsistently +- Fix test cases for view instance +- Fix unstable high-availability test +- Fix variable and secret expansion permissions by including OIDC context + +**Remove** + +- Remove additional calls to NATS to avoid retrieval issues due to eventual consistency +- Remove overuse of of arbitrary pointers to strings +- Remove superfluous marshals/unmarshals of Job Responses during execution +- Remove superfluous parsing from string -> time.Time only to run time.Format() in UnixToRFC3339 +- Remove vestigial config argument for `CommandConsumer` + +### Core + +No changes since previous release. + +### Schema + +No changes since previous release. + +## Upgrade + +To upgrade to v3.0.0rc09, follow the [Upgrade instructions](/deploy/upgrade). 
diff --git a/content/versions/v3.2.1/releases/changelog/_index.md b/content/versions/v3.2.1/releases/changelog/_index.md new file mode 100644 index 000000000..a6fc14856 --- /dev/null +++ b/content/versions/v3.2.1/releases/changelog/_index.md @@ -0,0 +1,6 @@ +--- +title: Changelog +description: A log of all changes to the Rhize application +--- + +{{< card-list >}} diff --git a/content/versions/v3.2.1/use-cases/_index.md b/content/versions/v3.2.1/use-cases/_index.md new file mode 100644 index 000000000..9e71204d1 --- /dev/null +++ b/content/versions/v3.2.1/use-cases/_index.md @@ -0,0 +1,13 @@ +--- +title: Use cases +description: Examples of how to use Rhize for specific use cases. +weight: 250 +cascade: + icon: light-bulb +identifier: use-cases +--- + +Topics about how to use Rhize for end-to-end workflows + + +{{< card-list >}} diff --git a/content/versions/v3.2.1/use-cases/calculate-oee.md b/content/versions/v3.2.1/use-cases/calculate-oee.md new file mode 100644 index 000000000..2973cbd8d --- /dev/null +++ b/content/versions/v3.2.1/use-cases/calculate-oee.md @@ -0,0 +1,243 @@ +--- +title: >- + Calculate OEE +description: The Rhize guide to modelling and querying OEE +categories: ["howto", "use-cases"] +weight: 0100 +draft: true +--- + +This guide provides a high-level overview of how to use Rhize to calculate various _key performance indicators_ (KPIs), including _overall equipment effectiveness_ (OEE). +As an example, the implementation section walks through a full end-to-end solution. + +## About OEE + +OEE is a key performance indicator that measures how effectively a manufacturing process uses its equipment. +As defined in [{{< abbr "ISO 22400" >}}](https://www.iso.org/standard/56847.html), OEE measures the ratio of actual output to the maximum potential output. +To calculate this, the metric evaluates three primary factors: +- Availability +- Performance +- Quality + +This measure is a common method in manufacturing to assess and improve production efficiency in industrial operations. + +## Background Architecture + +### ISA95 architecture for OEE + +The following diagram shows the ISA-95 entities that are involved with OEE calculations. + +{{< bigFigure +src="/images/oee/data-architecture.svg" +alt="A diagram showing the overall isa95 architecture required for OEE calculations. It shows the relationship between the relationship between the Work Schedule, Operations Performance, Role Based Equipment and work calendar models." +caption="A diagram showing the overall ISA95 architecture involved in making OEE calculations." 
+width="90%" +>}} + +### Overall system architecture + +```mermaid +sequenceDiagram +Actor U as user +participant M as Machine +participant A as libre-agent +participant C as libre-core +participant B as bpmn-engine +participant G as graph database +participant T as timeseries database +loop each process value message received +M->>A:Process Values Published to broker +A->>A:Values deduplicated +A->>C:Values ingested +C->>C:Bound to equipment properties +C->>C:Rules evaluated +C->>B:BPMN triggered +B->>G:Data persisted +B->>T:Data persisted +end +loop each user involvement +U->>B:BPMN Triggered by operator's frontend +B->>B:Process runs, transforming data +B->>G:Data Persisted +B->>T:Data persisted +end +``` + + +## Implement OEE in Rhize + +### Pre Requisties + +Before you start, ensure you have the following: +- Rhize installed and configured, including timeseries tools + +This implementation guide also involves doing the following actions in Rhize: + +- [Use the rules engine to persist process values]({{< relref "how-to/publish-subscribe/create-equipment-class-rule" >}}) +- [Use messages to trigger BPMN workflows]({{< relref "how-to/bpmn/create-workflow" >}}) +- [Use user-triggered workflows]({{< relref "how-to/bpmn/create-workflow" >}}) + +## Handle real-time values + +For detailed information see: [How To: Create equipment class rule]({{< relref "how-to/publish-subscribe/create-equipment-class-rule" >}}) + +The Rhize agent ingests values from an external broker using protocols such as OPCUA, MQTT, Kafka. +The OEE calculation is particularly interested in machine state changes and produced quantities. + +Data Flow Diagram: + +```mermaid +sequenceDiagram +participant M as Machine +participant A as libre-agent +participant C as libre-core +participant B as bpmn-engine +participant T as timeseries database +M->>A:{
      "state":"Running",
      "timestamp":"2024-09-04T09:00:00Z"
      } +A->>C:{
      "dataSource.id":"MQTT",
      "payload":
      {
      "state":"Running",
      },
      "timestamp":"2024-09-04T09:00:00Z"
      } +C->>C:["Machine A.state":{"previous":"Held","current":"Running"},
      "timestamp":"2024-09-04T09:00:00Z"] +C->>B:{"EquipmentId":"Machine A",
      "State":"Running",
      "timestamp":"2024-09-04T09:00:00Z"} +B->>T:{"Table":"EquipmentState",
      "EquipmentId":"Machine A",
      "State":"APT",
      "timestamp":"2024-09-04T09:00:00Z"} +``` + +Rhize Architecture: + +In the preceding diagram, a machine publishes telemetry values to an MQTT server in the following form: + +```json +{ + "state": "Running|Held|Stopped", + "quantityCounter": 10 +} +``` + +The Rhize rules engine processes these values to run actions when conditions are met. +Including: + +- Trigger a BPMN workflow when the machine state changes to persist the value to timeseries + + ```pseudocode + Trigger Property: State + Trigger Expression: State.current.value != State.previous.value + BPMN Variables: + State: State.Current.value + Timestamp: SourceTimestamp + EquipmentId: EquipmentId + Workflow: RULE_Handle_StateChange + ``` + + The `RULE_Handle_StateChange`workflow is as follows: + + ```mermaid + flowchart LR + start((start))-->transform(Transform state + to ISO22400) + transform-->map(Map into correct + JSON Structure) + map-->throw(throw to NATS to be + picked up by time series ingester service + and persisted to time series) + throw-->e((end)) + ``` + +- Trigger a BPMN workflow when the produced quantity value changes to perist the value to timeseries + + ```pseudocode + TriggerProperty: QuantityCounter + TriggerExpression QuantityCounter.current.Value != QuantityCounter.previous.Value + BPMN Variables: + QuantityDelta: State.current.value - State.previous.value + Timestamp: SourceTimestamp + EquipmentId: EquipmentId + Workflow: RULE_Handle_QuantityChange + ``` + + The `RULE_Handle_QuantityChange` workflow is as follows: + + ```mermaid + flowchart LR + start((start))-->map(Map into correct + JSON Structure) + map-->throw(throw to NATS to be + picked up by time series ingester service + and persisted to time series) + throw-->e((end)) + ``` + +### Import orders + +In this scenario, a production order is published to the MQTT server. The Rhize agent bridges the message to the NATS broker. + +The production order contains information such as operations, materials produced, and consumed and any particular equipment requirements. +It includes the planned rate of production for each operation, added as a job order parameter. +The import workflow listens for the order to be published to NATS, then maps the data into ISA95 entities, and perists to the graph database. + +Workflow NATS_ImportOrder: + +```mermaid +flowchart LR +start((start))-->map(Map into correct + ISA95 Structure) +map-->mutate(Persist to graph database) +mutate-->e((end)) +``` + +### User orchestrated workflow + +An operator has the responsibility to start and stop operations as well as record the quantities of good and scrap material. +These values must be persisted to the time-series database (see TODO:Link and TODO:Link). +These workflows will be triggered by an API call from the operations' front end. 
+ +Workflows: + +API_StartOperation + +{{< bigFigure +alt="add job response" +src="/images/oee/rhize-bpmn-oee-start-order.png" +>}} + +```mermaid +flowchart LR +start((start))-->query(Query: Lookup job order) +query-->map1(Map: Map job response input) +map1-->mutate(Mutate: add job response to graphql database) +mutate-->map2(Map: jobOrderState payload) +map2-->mutate2(Persist: Add record to time series database) +mutate2-->e((end)) +``` + +API_EndOperation + +```mermaid +flowchart LR +start((start))-->query(Query: Lookup currently running job response) +query-->map1(Map: Map job response input) +map1-->mutate(Mutate: update job response with end data time) +mutate-->map2(Map: jobOrderState payload) +map2-->mutate2(Persist: Add record to time series database) +mutate2-->e((end)) +``` + +API_RecordProducedQuantities + +```mermaid +flowchart LR +start((start))-->query(Query: Lookup currently running job response) +query-->map1(Map: Map MaterialActual payload with good/scrap/rework quantities) +map1-->mutate(Mutate: add MaterialActuals linked to job response) +mutate-->map2(Map: QuantityLog payload) +map2-->mutate2(Persist: Add records to time series database) +mutate2-->e((end)) +``` + +#### Dashboarding KPI Queries + +Using the KPI Queries, we can create Grafana dashboards which may look as follows: + +{{< bigFigure +src="/images/oee/oee-dashboard.png" +alt="A diagram showing the an example of an OEE dashboard in Grafana. It includes key metrics such as Availability, Performance, Quality, Quantites produced and an overall OEE figure" +caption="A diagram showing an example KPI dashboard in Grafana." +width="90%" +>}} diff --git a/content/versions/v3.2.1/use-cases/data-collection-ebr.md b/content/versions/v3.2.1/use-cases/data-collection-ebr.md new file mode 100644 index 000000000..8514d3903 --- /dev/null +++ b/content/versions/v3.2.1/use-cases/data-collection-ebr.md @@ -0,0 +1,174 @@ +--- +icon: inbox-in +title: >- + Data collection (eBR example) +description: An example of how Rhize ingests data from various sources to create Electronic Batch Records for pharmaceutical manufacturing +categories: ["howto", "use-cases"] +weight: 0100 +images: + - /images/og/graphic-rhize-data-collection-ebr.png +--- + + +> :memo: Looking to implement Rhize for your Pharma operation? +[Talk to an engineer](https://rhize.com/contact-us/) + + +This document provides examples of how you can use Rhize to automatically ingest data from various sources and store it in a standardized ISA-95 schema. +The examples here are to produce an Electronic Batch Record, an important use case in pharmaceutical manufacturing. +However, the process described here is very similar to what data collection would look like when integrating with any third-party system (such as an MRP, CMMS, and so on), regardless of the type of manufacturing process. + +The procedure has the following steps: + +1. Identify the sources of data. +1. Map the fields for these data sources to Rhize's ISA-95 schema. +1. Write {{< abbr "BPMN" >}} processes that listen for relevant eBR events and transform incoming data to the ISA-95 schema. +1. After the batch finishes, query the database with the fields for your {{< abbr "ebr" >}}. + +The following sections describe this process in a bit more detail. + +## Prerequisites + +This procedure involves multiple data sources and different operations to transform the incoming data and store it in the graph database. 
+Before you start, ensure that you have the following: +- Awareness of the different sources of eBR data +- If real-time data comes from equipment, [Connected data sources]({{< relref "../how-to/publish-subscribe/connect-datasource" >}}) +- Sufficient knowledge of the ISA-95 standard to model the input data as ISA-95 schema +- In the BPMN process, the ability to filter JSON and [call the GraphQL API]({{< relref "../how-to/gql" >}}) + +In larger operations, different teams may help with different parts of the eBR-creation process. +For example, your integrators may help appropriately model the data, and the frontend team may render the outputted JSON into a final document. + +## Steps to automate eBR creation + +The following steps describe the broad procedure to automatically update the database for incoming batch data, then automatically create an eBR after the batch run. + +### Identify the sources of data + +The first step is to identify the sources of data for your eBR. + +{{< figure +alt="Diagram showing some examples of eBR data sources" +src="/images/ebr/diagram-rhize-example-ebr-sources.png" +width="75%" +>}} + +Common data sources for an eBR include: + +- **{{< abbr "ERP" >}} documents:** high-level operations documents. This might include information about planning and scheduling. +- **{{< abbr "MES" >}} data:** granular process data, tracking objects such as the weight of individual material and the responses for different jobs. +- **{{< abbr "LIMS" >}} documents:** information about the laboratory environment and testing samples +- **Real-time event data:** for example, data sent from OPC UA or MQTT servers +- **Exceptions:** Errors and alerts raised from the operator screen or automation system + +### Model the fields as ISA-95 schema + +After you've identified the source data, the next step is to map this data into ISA-95 models. + +{{< figure +alt="diagram showing some examples of ISA-95 modeling" +src="/images/ebr/diagram-rhize-map-ebr-isa95.png" +width="80%" +>}} + +Some common objects to map include raw and final material, equipment, personnel, operations schedule, segments, job responses, exceptions, and the ERP batch number. +Once ingested, all data is linked through common associations in the graph database and is thus accessible through a single query. + +### Write a BPMN workflow to ingest the data in real-time + +{{< bigFigure +alt="Example of a BPMN workflow" +src="/images/ebr/diagram-rhize-bpmn-ebr.png" +width="60%" +caption="A simplified BPMN workflow. For an example of a real workflow with nodes for each logic step, refer to the next image." +>}} + +With the sources of data and their corresponding models, the next step +is to write a [{{< abbr "BPMN" >}}]({{< relref "../how-to/bpmn" >}}) workflow to automatically transform the data and update the database. + +{{< callout type="info" >}} +You may want to break these steps into multiple parts. +Or, for increased modularity, you can call another BPMN workflow with a [Call activity]({{< relref "../how-to/bpmn/bpmn-elements">}}#call-activities). +{{< /callout >}} + +The procedure is as follows: + +1. Create a BPMN that is [triggered]({{< relref "../how-to/bpmn/trigger-workflows" >}}) by a relevant eBR event. For example, the workflow might subscribe to a `/lims/lab1`, or be triggered by a call to the Rhize API. If the data comes from certain equipment, you first need to [Connect a data source]({{< relref "../how-to/publish-subscribe/connect-datasource" >}}). + +1. 
Transform with JSONata + + Rhize has a built-in [JSONata]({{< relref "../how-to/bpmn/use-jsonata" >}}) interpreter, which can filter and transform JSON. + Use a [JSONata service task]({{< relref "../how-to/bpmn/bpmn-elements">}}#jsonata-transform) to map the data sources into the corresponding ISA-95 fields that you defined on the previous step. + + Use the output as a variable for the next step. + + +1. POST data with a graph mutation. + + Use the variable returned by the JSONata step to send a mutation to update the Graph database with the new fields. + To learn more, read the [Guide to GraphQL with Rhize]({{< relref "../how-to/gql" >}}). + +In real BPMN workflows, you can dynamically create and assign fields as they enter the system. +For example, this workflow creates a new material definition and material-definition version based on whether this object already exists. + +{{< bigFigure +alt="Screenshot of a BPMN workflow that adds material only if it exists" +src="/images/bpmn/screenshot-rhize-bpmn-add-material-definition.png" +>}} + +This step can involve multiple BPMN processes subscribing to different topics. +As long as the incoming event data has a common association, for example, through the `id` of the batch data and associated `JobResponse`, you can return all eBR fields in one GraphQL query—no recursive SQL joins needed. + +{{< figure +alt="Multiple BPMN processes can be united in one batch" +src="/images/ebr/diagram-rhize-inputs-for-ebr.png" +width="75%" +>}} + +### Query the DB with the eBR fields + +After the batch finishes, use a [GraphQL query]({{< relref "../how-to/gql/query" >}}) to receive all relevant batch data. +You only need one precise request to return exactly the data you specify. + +{{< figure +alt="Diagram showing how a query makes an ebr" +src="/images/ebr/diagram-rhize-make-ebr-query.png" +width="55%" +>}} + +Here is a small, generic snippet of how it looks: +Note how the query specifies exactly the fields to return: no further response filtering is required. +For an idea of how a more complete query looks, refer to the [Electronic Batch Records]({{< relref "ebr" >}}) guide. + +{{< details title="Snippet of a makeEbr query" >}} +```graphql +query makeEbr ($filter: JobOrderFilter) { + queryJobResponse(filter: $filter) { + EXAMPLE_id: id + description + matActualProduced: materialActual(filter:{materialUse: { eq:Produced }}){ + id + material: materialDefinitionVersion{ + id + } + quantity + quantityUoM { + id + } + } + ## More eBR fields + } +} +``` +{{< /details >}} + +The only extra step is to use the returned JSON object as the input for however you create your eBR documents. + +## Next steps + +Fast eBR automation is just one of many use cases of Rhize in the pharmaceutical industry. +With the same event data that you automatically ingest and filter in this workflow, you can also: +- Program reactive logic using BPMN for {{< abbr "event orchestration" >}}. For example, you might send an alert after detecting a threshold condition. +- Analyze multiple batch runs for deviations. For example, you can query every instance of a failure mode across all laboratories. +- Compare batches against some variable. For example, you can compare all runs for two versions of equipment. 
+ diff --git a/content/versions/v3.2.1/use-cases/ebr.md b/content/versions/v3.2.1/use-cases/ebr.md new file mode 100644 index 000000000..777f1f5f3 --- /dev/null +++ b/content/versions/v3.2.1/use-cases/ebr.md @@ -0,0 +1,454 @@ +--- +title: >- + Electronic Batch Records +description: The Rhize guide to querying all the information that happened in a manufacturing job. +categories: ["howto", "use-cases"] +weight: 0100 +images: + - /images/og/graphic-rhize-ebr.png +aliases: + - "/use-cases/track-and-trace/" +icon: document-report +--- + +This document shows you how to use ISA-95 and Rhize to create a generic, reusable model for any use case that involves _Electronic batch records_. +As long as you properly [map and ingest]({{< relref "data-collection-ebr" >}}) the source data, +the provided queries here should work for any operation, though you'll need to tweak them to fit the particular structure of your job response and reporting requirements. + +An Electronic Batch Record (eBR) is a detailed report about a specific batch. +Generating eBRs is an important component of pharmaceutical manufacturing, whose high standards of compliance and complexity of inputs demand a great deal of detail in each report. + +Rhize can model events and resources at a high degree of granularity, and its ISA-95 schema creates built-in relationships between these entities. +So it makes an ideal backend to generate eBRs with great detail and context. +In a single query, +you can use Rhize to identify answers to questions such as: +- What material, equipment, and personnel were involved in this job? And what was their function in performance? +- When and where did this job happen? How long did it run for? +- Why was this work performed? That is, what is the order that initiated the work? +- What are the results of quality testing for this work? + + +{{< callout >}} +:memo: +The focus here is modeling and querying. +For a high-level overview of how eBR data may enter the Rhize data hub, read the guide to [Data collection]({{< relref "../use-cases/data-collection-ebr" >}}). +{{< /callout >}} + + +## Quick query + +{{< watch src="https://www.youtube.com/watch?v=oXG5f3O9xjU&t=0s" text="Full eBR in 1 query" >}} + +If you just want to build out a [GraphQL query]({{< relref "../how-to/gql/query" >}}) for your reporting, use these templates to get started. + + +If you know the IDs for the relevant job response, job order, and test results, you can structure each group as a top-level object. +If you want to input only one ID, you can also use nested fields on a response, order, or test specification to pull all necessary information. +The Rhize DB stores relationships, so the values are identical—only the structure of the response changes. 
+ + +{{< tabs items="Flat,Nested">}} +{{< tab "flat" >}} + +```gql + +query eBR { + performance: getJobResponse(id: "") { + # duration, actuals, and so on + } + planning: getJobOrder(id: "") { + # requirements, work master, and so on + } + testing: getTestResult(id: "") { + # evaluation properties and tested objects + } +} +``` +{{< /tab >}} +{{< tab "nested" >}} + +```gql +query nestedBatchReport { + jobResponse: getJobResponse(id: "ds1d-batch-119-jr-fc-make-frosting") { + id + ## more fields about performance + materialActual { ## repeat for other resources as needed + id + quantity + testResults { ## test results for material + id + } + ## More material fields + } + associated_order: jobOrder { + id + ## more planning fields + } + } +} + +``` +{{< /tab >}} +{{}} + +For more detail, refer to the +[complete example query](#example-query). + + +## Background: ISA-95 entities in an eBR query + +{{< callout >}} +:memo: For an introduction to ISA-95 and its terminology, +read [How to speak ISA-95](/isa-95/how-to-speak-isa-95). +{{< /callout >}} + +The following lists detail the ISA-95 entities that you might need when querying the Rhize database for an eBR. +As always, your manufacturing needs and data-collection capabilities determine the exact data that is necessary. +It is likely that some of the following fields are irrelevant to your particular use case. + +### Performance information + +A _job response_ represents a unit of performed work in a manufacturing operation. +The job response typically forms the core of an eBR query, +as you can query it to obtain duration and all {{< abbr "resource actual" >}}s involved in the job. +A job response may also contain child job responses, as displayed in the following diagram: + + + +{{< bigFigure +src="/images/s95/diagram-rhize-s95-job-response-with-children.svg" +caption="An example of a job response with child job responses. The parent job has a material actual representing the final produced good. The child job responses also have their own properties that may be important to model. This is just one variation of many. **ISA-95 is flexible and the best model always depends on your use case**." +>}} + + +For an eBR, some important job response properties and associations include the following: + +- **Start and End times.** When work started and how long it lasted. +- **Material Actuals.** The quantities of material involved and how they are used: consumed, produced, tested, scrapped, and so on. Material actuals may also have associated lots for unique identification. Test results may be derived from samples of the material actual. +- **Equipment Actuals.** The real equipment used in a job, along with associated equipment properties and testing results. +- **Personnel actuals.** The people involved in a job or test. +- **Process values**. Associated process data and calculations. +- **Comments and Signatures.** Additional input from operators. + +### Scheduling information + +An eBR report might also include information +about the work that was demanded. +The simplest relationship between performance and demand is the link between a job response and a _job order_. +So your eBR might include information about the order that initiated the response. +Through this order, you could also include higher-level scheduling information. + +When adding order information, consider whether you need the following properties: +* **Scheduled start and end times.** These might be compared to the real start and end. +* **Material requirements**. 
The material that corresponds to the material actuals in the performance. Requirements may include: + - Material to be produced, along with their scheduled quantities and units of measure + * Material to be consumed, along with their scheduled quantities and units of measure + * Any by-product material and scrap +* **Planned equipment**. This can be compared to the real equipment used. +* **Work Directive**. The dispatched version of the planned work. The directive may include: + - Specifications or a BoM (if the requirements are not in the order itself) + - Any relevant work master configuration (routing, process parameters like temperature, durations, and so on) + + +### Quality information + +Your eBR trace also may record test results. +These results provide context about the quality of the work produced in the job response. + +Each {{< abbr "resource actual" >}} can have a corresponding test result. +For example: +- The material actual and lot may record the sample. +- The equipment actual may record test locations. +- Physical asset actuals may record instruments used for the test. +- Personnel actuals may record who performed the test. + +## Example query + +The following snippet is an example of how to pull a full eBR from a single [GraphQL query]({{< relref "../how-to/gql/query" >}}). +Each top-level object has an [alias](https://graphql.org/learn/queries/#aliases), which serves as the key for the object in the JSON payload. + +```gql +query eBR { + performance: getJobResponse(id: "ds1d-119-as") { + # duration, actuals, and so on + } + planning: getJobOrder(id: "ds1d-119-jo-119-3") { + # requirements, work directive, and so on + } + testing: getTestResult(id: "ds1d-119-tr-3") { + # evaluation properties and tested objects +} +``` + +{{< tabs items="Full query,Example response: performance eBR">}} +{{% tab "Full query" %}} + +**Variables** +```json +{ + "getJobResponseId": "ds1d-batch-119-jr-fc-make-frosting", + "getJobOrderId": "ds1d-batch-jo-119", + "getTestResultId": "ds1d-batch-tr-119" +} +``` + +**Query** + +```gql + +query eBR ($getJobResponseId: String $getJobOrderId: String $getTestResultId: String) { + performance: getJobResponse(id:$getJobResponseId) { + jobResponseId: id + startDateTime + endDateTime + duration + jobState + workDirective { + id + } + materialActual { + id + materialUse + quantity + quantityUoM { + id + } + materialDefinition { + id + } + materialLot { + id + materialDefinition { + id + } + } + materialSubLot { + id + } + properties { + id + } + } + equipmentActual { + id + equipment { + id + } + description + children { + id + } + properties { + id + value + valueUnitOfMeasure { + id + } + } + } + personnelActual { + id + + } + } + + planning: getJobOrder(id: $getJobOrderId) { + orderId: id + scheduledStartDateTime + scheduledEndDateTime + materialRequirements { + id + quantity + quantityUoM { + id + } + } + equipmentRequirements { + id + equipment { + id + } + + } + workMaster{ + id + parameterSpecifications { + id + description + } + } + + } + + testing: getTestResult(id: $getTestResultId) { + resultsId: id + id + evaluationDate + expiration + evaluationCriterionResult + equipmentActual { + id + } + materialActual { + id + materialUse + } + physicalAsset { + id + } + equipmentActual { + id + } + } + + + } + +} + +``` +{{< /tab >}} +{{< tab "Response: performance eBR" >}} + +The `performance` section of this query may return data that looks something like this. +Note that every object does not necessarily have every requested field. 
+In this example, only some material actuals have additional properties. + +```json +{ + "data": { + "performance": { + "id": "ds1d-batch-119-jr-fc-make-frosting", + "startDateTime": "2024-09-23T23:22:25Z", + "endDateTime": "2024-09-23T23:38:04.783Z", + "duration": 939.783, + "materialActual": [ + { + "id": "ds1d-batch-fc-cookie-frosting-actual-119", + "materialUse": "Produced", + "quantity": 3499.46, + "quantityUoM": { + "id": "g" + }, + "materialLot": [ + { + "id": "ds1d-batch-fc-cookie-frosting-lot-119" + } + ], + "materialSubLot": [], + "properties": [ + { + "id": "viscosity", + "value": "0.1", + "valueUnitOfMeasure": { + "id": "mm2/s" + } + }, + { + "id": "temperature", + "value": "22", + "valueUnitOfMeasure": { + "id": "C" + } + } + ] + }, + { + "id": "ds1d-batch-fc-butter-actual-119", + "materialUse": "Consumed", + "quantity": 1124.05, + "quantityUoM": { + "id": "g" + }, + "materialLot": [ + { + "id": "ds1d-batch-fc-butter-lot-119" + } + ], + "materialSubLot": [], + "properties": [ + { + "id": "fat-percent", + "value": "15", + "valueUnitOfMeasure": null + } + ] + }, + { + "id": "ds1d-batch-fc-confectioner-sugar-actual-119", + "materialUse": "Consumed", + "quantity": 249.08, + "quantityUoM": { + "id": "g" + }, + "materialLot": [ + { + "id": "ds1d-batch-fc-confectioner-sugar-lot-119" + } + ], + "materialSubLot": [], + "properties": [] + }, + { + "id": "ds1d-batch-fc-peanut-butter-actual-119", + "materialUse": "Consumed", + "quantity": 2249.63, + "quantityUoM": { + "id": "g" + }, + "materialLot": [ + { + "id": "ds1d-batch-fc-peanut-butter-lot-119" + } + ], + "materialSubLot": [], + "properties": [] + } + ], + "equipmentActual": [ + { + "id": "ds1d-batch-kitchen-mixer-actual-119", + "description": null, + "children": [], + "properties": [] + }, + { + "id": "ds1d-batch-kitchen-actual-119", + "description": null, + "children": [], + "properties": [] + } + ], + "personnelActual": [ + { + "id": "ds1d-batch-fc-supervisor-actual-119" + }, + { + "id": "ds1d-batch-fc-handler-actual-119" + } + ] + } + } +} +``` + +{{< /tab >}} +{{< /tabs >}} + +## Build a reporting frontend + +{{< watch src="https://www.youtube.com/watch?v=gOs3185ACao" text="Prototyping two example eBR frontends" >}} + +As a final step, you can also transform the JSON payload into a more human-readable presentation. +As always, you have a few options. Here are a few, from least to most interactive: +- Create a PDF report, perhaps using specialized software such as InfoBatch +- Create a static web report, using basic HTML and CSS +- Build an interactive report explorer, which may include links to other reports and dynamic visualizations of alerts and performance + + + +## Next steps: combine with other use cases + +You can reuse and combine the queries here for other use cases that involve tracking and performance analysis. +For example, if you want a detailed report for the movement of material, you can combine the queries here with a query for a [material lot genealogy]({{< relref "genealogy" >}}). This would provide a detailed report for every job that involved an ancestor or descendent of the queried material. 
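+As a rough sketch of how such a combination could look, the following single request reuses only field names that already appear in the examples above and in the genealogy guide; the empty `id` arguments are placeholders for your own job response and lot identifiers.
+
+```gql
+query combinedTrace {
+  ## Performance context for the job (fields from the eBR examples above)
+  performance: getJobResponse(id: "") {
+    id
+    startDateTime
+    endDateTime
+    jobState
+  }
+  ## Backward genealogy of one of its produced lots (fields from the genealogy guide)
+  genealogy: getMaterialLot(id: "") {
+    id
+    isAssembledFromMaterialLot {
+      id
+      isAssembledFromMaterialLot {
+        id
+      }
+    }
+  }
+}
+```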
+ diff --git a/content/versions/v3.2.1/use-cases/genealogy.md b/content/versions/v3.2.1/use-cases/genealogy.md new file mode 100644 index 000000000..1f09b308b --- /dev/null +++ b/content/versions/v3.2.1/use-cases/genealogy.md @@ -0,0 +1,625 @@ +--- +title: >- + Genealogy +description: The Rhize guide to modeling and querying the forward and backward genealogy of a material lot +categories: ["howto", "use-cases"] +weight: 0100 +images: + - "/images/og/graphic-rhize-genealogy.png" +icon: finger-print +--- + +This document provides a high-level overview of how to use Rhize for material genealogy. + +In manufacturing, a _genealogy_ is the record of what a material contains or what it is now a part of. +As its name implies, genealogy represents the family tree of material. +A material genealogy helps manufacturers in multiple ways: +- Prevent product recalls by isolating deviations early in material flow +- Decrease recall time by maintaining a complete record of where material starts and ends +- Build reports and analysis of material deviations +- Create documentation and compliance records for material tracking + +Rhize provides a standards-based schema to represent material at all levels of granularity. +The graph structure of its DB has built-in properties to associate material lots with other information about planned and performed work. +This database has a GraphQL API that can pull full genealogies from terse queries. +Rhize makes an **ideal backend for genealogical use cases**. + + +```echart width="80%" height="600px" +const option = { + title: { + subtext: + "Click solid squares to expand.\nHover for quantities and definitions", + fontSize: 13, + }, + tooltip: [ + { + z: 60, + show: true, + showContent: true, + alwaysShowContent: true, + formatter: function (params) { + return `Material definition: ${params.data.definition}
      + ID: ${params.data.name}
      + Value: ${params.data.value}`; + }, + }, + ], + initialTreeDepth: 3, + series: [ + { + type: "tree", + left: "15%", + right: "30%", + top: "10%", + bottom: "2%", + symbol: "emptySquares", + orient: "RL", + expandAndCollapse: true, + lineStyle: { + color: "#006838", + }, + label: { + position: "bottom", + rotate: 0, + verticalAlign: "middle", + align: "right", + fontSize: 11, + }, + itemStyle: { + color: "#006838", + }, + leaves: { + label: { + position: "bottom", + backgroundColor: "#FFFFFF", + rotate: 10, + verticalAlign: "middle", + align: "left", + fontSize: 13, + }, + }, + + animationDurationUpdate: 650, + data: [], + }, + ], +}; + +(option.series[0].data[0] = { + name: "cookie-box-2f", + value: "1 cookie box", + definition: "cookie-box", + children: [ + { + name: "cookie-unit-dh", + value: "1000 cookie unit", + definition: "cookie-unit", + children: [ + { + name: "cookie-frosting-9Q", + value: "3500 g", + definition: "cookie-frosting", + children: [ + { + name: "butter-67", + value: "1125 g", + definition: "butter", + }, + { + name: "confectioner-sugar-yN", + value: "250 g", + definition: "confectioner-sugar", + }, + { + name: "peanut-butter-Cq", + value: "2250 g", + definition: "peanut-butter", + }, + ], + }, + { + name: "cookie-dough-Vr", + value: "15000 g", + definition: "cookie-dough", + children: [ + { + name: "egg-gY", + value: "50 large-egg", + definition: "egg", + }, + { + name: "flour-kO", + value: "7500 g", + definition: "flour", + }, + { + name: "saZ3", + value: "150 g", + definition: "salt", + }, + { + name: "sugar-32", + value: "2500 g", + definition: "sugar", + }, + { + name: "vanilla-extract-px", + value: "10 g", + definition: "vanilla-extract", + }, + ], + }, + ], + }, + { + name: "cookie-wrapper-NR", + value: "150 wrapper", + definition: "cookie-wrapper", + }, + ], +}), + (option.title.text = `Example frontend:\nreverse genealogy of ${option.series[0].data[0].name}`); +``` + +_Data from a Rhize query in an Apache Echart. Read the [Build frontend](#frontend) section for details._ + + +## Quick query + +To get started with genealogy quickly, use these [query]({{< relref "../how-to/gql/query" >}}) templates. +One template is for the reverse genealogy, and the other is for the forward genealogy. +For each, you need to input the Lot ID. + +{{< tabs items="Reverse genealogy query,Forward genealogy query" >}} +{{< tab "Reverse" >}} + +```gql +query reverseGenealogy{ + getMaterialLot(id: "") { + parent_lots: isAssembledFromMaterialLot { + id + grandparent_lots: isAssembledFromMaterialLot { + id + great_grandparent_lots: isAssembledFromMaterialLot { + id + } + } + } + } +} +``` +{{< /tab >}} +{{< tab "Forward" >}} + +```gql +query forwardGenealogy{ + getMaterialLot(id: "") { + child_lots: isAssembledFromMaterialLot { + id + grandchildren_lots: isAssembledFromMaterialLot { + id + great_grandgrandchildren_lots: isAssembledFromMaterialLot { + id + } + } + } + } +} +``` +{{< /tab >}} +{{}} + +You can modify the query to include more fields, levels, or get the forward and backward genealogy. +For an idea of how a more complete query would look, refer to the [Examples](#examples) section. + + +## Background: material entities in Rhize + +{{< callout >}} +:memo: For a more complete introduction to ISA-95 and its terminology, +read [How to speak ISA-95]({{< relref "../isa-95/how-to-speak-isa-95" >}}). 
+{{< /callout >}}
+
+In ISA-95 terminology, the lineage of each material is expressed through the following entities:
+- **Material lots.** Unique amounts of identifiable material. For example, a material lot might be a camshaft in an engine or a package of sugar from a supplier.
+- **Material sublots.** Uniquely identifiable parts of a material lot. For example, if a box of consumer-packaged goods represents a material lot, the individual serial numbers of the packages within might be the material sublots. Each sublot is unique, but multiple sublots may share properties from their parent material lot (for example, the expiry date).
+
+The relationship between lots is expressed through the following properties:
+- `isAssembledFromMaterial[Sub]Lot` and `isComponentOfMaterial[Sub]Lot`. The material lots or sublots that went into a material, and the materials that a lot or sublot went into.
+- `parentMaterialLot` and `childSubLot`. The relationships between a material lot and its sublots.
+
+Note that these properties are symmetrical. If lot `final-1` has the property `{isAssembledFromMaterialLot: "intermediate-1"}`,
+then lot `intermediate-1` has the property `{isComponentOfMaterialLot: "final-1"}`.
+The graph structure of the Rhize DB creates these links automatically.
+
+
+{{< callout type="info" >}}
+
+The distinction between sublots and material lots varies with processes.
+The rest of this document simplifies terminology by using only the word "lots".
+
+{{< /callout >}}
+
+
+## Steps to use Rhize for genealogy
+
+The following sections describe how to use Rhize to build a genealogy use case.
+In short:
+1. Identify the unique lots in your material flows.
+2. Add these lots to your model.
+3. Implement how to collect these lots.
+4. Query the database.
+
+The example uses a simple baking process to demonstrate the material flow.
+
+{{< bigFigure
+src="/images/genealogy/diagram-rhize-genealogy-of-a-batch.png"
+width="30%"
+alt="A simplified view of how a pallet of packaged goods is assembled from lots and sublots."
+caption="A simplified view of how a pallet of packaged goods is assembled from lots and sublots."
+>}}
+
+### Identify lots to collect
+
+To use Rhize for genealogy, first identify the material lots that you want to track.
+How you identify these depends on your processes and material flows.
+The following guidelines generally apply:
+- The lot must be uniquely identifiable.
+- The level of granularity must be realistic for your current processes.
+
+For example, in a small baking operation, lots might come from the following areas:
+- The serial numbers of ingredients from suppliers
+- The batches of baked pastries
+- The wrappers consumed by the packing process
+- The pallets of packed goods (with individual packages being material sublots)
+
+{{< callout type="info" >}}
+For best practices on how to model, read our blog [How much do I need to model?](https://rhize.com/blog/how-much-do-i-need-to-model-when-applying-the-isa-95-standard/)
+{{< /callout >}}
+
+
+### Model these lots into your knowledge graph
+
+After you have identified the material lots, model how the data fits with the other components of your manufacturing knowledge graph.
+At minimum, your material lots must have a {{< abbr "material definition" >}} with an active version.
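+
+For instance, a new lot might be registered against its definition with an `add` mutation along these lines.
+This is a minimal, hypothetical sketch: the exact input fields, and how definition versions are expressed, depend on your generated schema, so verify the names against your API before using them.
+
+```gql
+# Hypothetical sketch only: verify these input fields against your schema.
+mutation registerLot {
+  addMaterialLot(
+    input: [
+      {
+        id: "cookie-dough-Vr"                      # unique lot ID
+        quantity: 15000
+        quantityUnitOfMeasure: { id: "g" }         # assumed link by ID
+        materialDefinition: { id: "cookie-dough" } # definition with an active version
+      }
+    ]
+  ) {
+    materialLot {
+      id
+    }
+  }
+}
+```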
+
+Beyond these requirements, the graph structure of the ISA-95 database provides many ways to create links between lots and other manufacturing entities, including:
+- A work request or job response
+- The associated {{< abbr "resource actual" >}}
+- Aggregations, such as a material class or the material specifications in a {{< abbr "work master" >}}
+
+In the aforementioned baking process, the lots may have:
+- Material classes (raw, intermediate, and final)
+- Associated equipment (such as `mixers`, `ovens`, and `trays`)
+- Associated segments (such as `mixing` or `cooling`)
+- Associated measurements and properties
+
+### Implement how to store your lots in the Rhize DB
+
+After you have planned the process and defined your models, implement how to add material lot IDs to Rhize in the course of your real manufacturing operations.
+
+Your manufacturing process determines where lot IDs are created.
+The broad patterns are as follows:
+- **Scheduled.** Assign lots at the time of creating the work request or schedule (while the job response might create a material actual that maps to the requested lot ID).
+- **Scheduled and event-driven.** Generate lot IDs beforehand, and then use a GraphQL call to create records in the Rhize DB after some event. Example events might be a button press or an automated signal that indicates the lot has been physically created.
+- **Event-driven.** Assign lot IDs at the exact time of work performance. For example, you can write a [BPMN workflow]({{< relref "../how-to/bpmn/" >}}) to subscribe to a topic that receives information about lots and automatically forwards the IDs to your knowledge graph.
+
+In the example baking process, lots may be collected in the following ways:
+- Scanned from supplier barcodes
+- Generated after the quality inspector indicates that a tray is finished
+- Planned in terms of final package number and expiration date
+
+### Query the data
+
+After you start collecting data, you can start querying it through the `materialLot` query operations.
+The following section provides example genealogy queries.
+
+## Examples
+
+The following examples show how to query forward and backward genealogies with the [`get`](https://docs.rhize.com/how-to/gql/query/#get) operation on material lots.
+
+
+{{< callout type="info" >}}
+You could also query for multiple genealogies—either through material lots or through aggregations such as material definitions and specifications—and then apply [filters](https://docs.rhize.com/how-to/gql/filter/).
+{{< /callout >}}
+
+### Backward genealogy
+
+A backward genealogy examines all the material lots that went into the assembly of some later material lot.
+
+In Rhize, you can query this relationship through the `isAssembledFromMaterialLot` property,
+using nesting to indicate the level of material ancestry to return.
+For example, this query returns four levels of backward genealogy for the material lot
+`cookie-box-2f` (using a [fragment]({{< relref "../how-to/gql/call-the-graphql-api/#shortcuts-for-more-expressive-requests" >}}) to standardize the properties returned for each lot).
+
+```gql
+query {
+  getMaterialLot(id: "cookie-box-2f") {
+    ...lotFields
+    isAssembledFromMaterialLot {
+      ...lotFields
+      isAssembledFromMaterialLot {
+        ...lotFields
+        isAssembledFromMaterialLot {
+          ...lotFields
+        }
+      }
+    }
+  }
+}
+
+# Common fields for all nested material
+
+fragment lotFields on MaterialLot {
+  id
+  quantity
+  quantityUnitOfMeasure { id }
+  materialDefinition { id }
+}
+```
+
+The returned genealogy looks something like the following:
+
+{{% details title="example-backward-genealogy.json" closed="true" %}}
+
+```json
+{
+  "data": {
+    "getMaterialLot": {
+      "id": "cookie-box-2f",
+      "quantity": 1,
+      "quantityUnitOfMeasure": {
+        "id": "cookie box"
+      },
+      "materialDefinition": {
+        "id": "cookie-box"
+      },
+      "isAssembledFromMaterialLot": [
+        {
+          "id": "cookie-unit-dh",
+          "quantity": 1000,
+          "quantityUnitOfMeasure": {
+            "id": "cookie unit"
+          },
+          "materialDefinition": {
+            "id": "cookie-unit"
+          },
+          "isAssembledFromMaterialLot": [
+            {
+              "id": "cookie-frosting-9Q",
+              "quantity": 3500,
+              "quantityUnitOfMeasure": {
+                "id": "g"
+              },
+              "materialDefinition": {
+                "id": "cookie-frosting"
+              },
+              "isAssembledFromMaterialLot": [
+                {
+                  "id": "butter-67",
+                  "quantity": 1125,
+                  "quantityUnitOfMeasure": {
+                    "id": "g"
+                  },
+                  "materialDefinition": {
+                    "id": "butter"
+                  }
+                },
+                {
+                  "id": "confectioner-sugar-yN",
+                  "quantity": 250,
+                  "quantityUnitOfMeasure": {
+                    "id": "g"
+                  },
+                  "materialDefinition": {
+                    "id": "confectioner-sugar"
+                  }
+                },
+                {
+                  "id": "peanut-butter-Cq",
+                  "quantity": 2250,
+                  "quantityUnitOfMeasure": {
+                    "id": "g"
+                  },
+                  "materialDefinition": {
+                    "id": "peanut-butter"
+                  }
+                }
+              ]
+            },
+            {
+              "id": "cookie-dough-Vr",
+              "quantity": 15000,
+              "quantityUnitOfMeasure": {
+                "id": "g"
+              },
+              "materialDefinition": {
+                "id": "cookie-dough"
+              },
+              "isAssembledFromMaterialLot": [
+                {
+                  "id": "egg-gY",
+                  "quantity": 50,
+                  "quantityUnitOfMeasure": {
+                    "id": "large-egg"
+                  },
+                  "materialDefinition": {
+                    "id": "egg"
+                  }
+                },
+                {
+                  "id": "flour-kO",
+                  "quantity": 7500,
+                  "quantityUnitOfMeasure": {
+                    "id": "g"
+                  },
+                  "materialDefinition": {
+                    "id": "flour"
+                  }
+                },
+                {
+                  "id": "saZ3",
+                  "quantity": 150,
+                  "quantityUnitOfMeasure": {
+                    "id": "g"
+                  },
+                  "materialDefinition": {
+                    "id": "salt"
+                  }
+                },
+                {
+                  "id": "sugar-32",
+                  "quantity": 2500,
+                  "quantityUnitOfMeasure": {
+                    "id": "g"
+                  },
+                  "materialDefinition": {
+                    "id": "sugar"
+                  }
+                },
+                {
+                  "id": "vanilla-extract-px",
+                  "quantity": 10,
+                  "quantityUnitOfMeasure": {
+                    "id": "g"
+                  },
+                  "materialDefinition": {
+                    "id": "vanilla-extract"
+                  }
+                }
+              ]
+            }
+          ]
+        },
+        {
+          "id": "cookie-wrapper-NR",
+          "quantity": 150,
+          "quantityUnitOfMeasure": {
+            "id": "wrapper"
+          },
+          "materialDefinition": {
+            "id": "cookie-wrapper"
+          },
+          "isAssembledFromMaterialLot": []
+        }
+      ]
+    }
+  }
+}
+```
+{{% /details %}}
+
+### Forward genealogy
+
+A forward genealogy examines the history of how one lot becomes a component of another.
+For example, if a supplier informs a manufacturer about an issue with a specific raw material,
+the manufacturer can run a forward genealogy that identifies the downstream material that consumed the bad lots.
+
+In Rhize, you can query the forward genealogy through the `isComponentOfMaterialLot` property,
+using nesting to indicate the number of levels of forward generations.
+For example, this query returns the full chain of material that contains (or contains material that contains)
+the material sublot `peanut-butter-Cq`:
+
+```gql
+query {
+  getMaterialLot(id: "peanut-butter-Cq") {
+    id
+    isComponentOfMaterialLot {
+      id
+      isComponentOfMaterialLot {
+        id
+        isComponentOfMaterialLot {
+          id
+        }
+      }
+    }
+  }
+}
+```
+
+This query returns data in the following structure:
+
+```json
+{
+  "data": {
+    "getMaterialLot": {
+      "id": "peanut-butter-Cq",
+      "isComponentOfMaterialLot": {
+        "id": "cookie-frosting-9Q",
+        "isComponentOfMaterialLot": {
+          "id": "cookie-unit-dh",
+          "isComponentOfMaterialLot": {
+            "id": "cookie-box-2f"
+          }
+        }
+      }
+    }
+  }
+}
+```
+
+## Next steps: display and analyze
+
+The preceding steps are all you need to create a data foundation to use Rhize for genealogy.
+After you've started collecting data, you can use the genealogy queries to build frontends and isolate entities for more detailed tracing and performance analysis.
+
+### Build frontends {#frontend}
+
+All data that you store in the Rhize DB is exposed through the GraphQL API.
+This provides a flexible way to create custom frontends that present your genealogical analysis in the way that makes sense for your use case.
+For example, you might represent the genealogy in any of the following ways:
+- In a summary report, providing a brief list of the material and all impacted upstream or downstream lots
+- As an interactive list, which you can expand to view a lot's associated quantities, job order, personnel, and so on
+- As the input for a secondary query
+- In a display using some data visualization library
+
+For a visual example, the interactive chart in the introduction takes the data from the preceding reverse-genealogy query,
+transforms it with a [JSONata expression]({{< relref "../how-to/bpmn/use-jsonata" >}}), and visualizes the relationship using
+[Apache ECharts](https://echarts.apache.org/).
+
+The JSONata expression accepts an array of material lots,
+then recursively renames all `id` and `isAssembledFromMaterialLot` properties to `name` and `children`, the data structure expected by the visualization.
+Additional properties remain to provide context in the chart's tooltips.
+
+```jsonata
+(
+$makeParent := function($data){
+  $data.{
+    "name": id,
+    "value": quantity & " " & quantityUnitOfMeasure.id,
+    "definition": materialDefinition.id,
+    "children": $makeParent(isAssembledFromMaterialLot)
+  }
+};
+
+$makeParent($.data.getMaterialLot)
+
+)
+```
+
+We've also embedded ECharts in Grafana workspaces to make interactive dashboards for forward and reverse genealogies:
+
+{{< bigFigure
+src="/images/genealogy/screenshot-rhize-genealogy-grafana.png"
+alt="An interactive genealogy dashboard. Filter by material definition, then select specific material lots."
+caption="An interactive genealogy dashboard. Filter by material definition, then select specific material lots."
+width="90%"
+>}}
+
+
+### Combine with granular tracing and performance analysis
+
+While a genealogy is an effective tool on its own, the usefulness of the Rhize data hub compounds with each use case,
+so genealogy implementations are most effective when you combine them with the other data stored in your manufacturing knowledge graph.
+
+For example, the genealogical record may provide the input for more granular _track and trace_, in which you use the lot IDs to track material movement across equipment and storage, associated personnel, and so on.
+
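+As a rough sketch, such a trace might start from the same genealogy query and then pull whatever operational context you have linked to each lot.
+The commented fields below are illustrative assumptions, not guaranteed names in the generated schema, so adjust them to the links you actually modeled:
+
+```gql
+# Hypothetical sketch: the commented fields are assumptions to adapt,
+# not verified names from the schema.
+query traceLot {
+  getMaterialLot(id: "peanut-butter-Cq") {
+    id
+    isComponentOfMaterialLot {
+      id
+      # Examples of context you might also have modeled on each lot:
+      # jobResponse { id }
+      # equipmentActual { equipment { id } }
+      # personnelActual { person { id } }
+    }
+  }
+}
+```
+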
+You could also combine genealogy with performance analysis, using the genealogical record as the starting point to analyze and predict failures and deviations.
+
+
diff --git a/content/versions/v3.2.1/use-cases/overview.md b/content/versions/v3.2.1/use-cases/overview.md
new file mode 100644
index 000000000..26f884d6b
--- /dev/null
+++ b/content/versions/v3.2.1/use-cases/overview.md
@@ -0,0 +1,66 @@
+---
+title: >-
+  Overview of use cases
+description: >-
+  Handle manufacturing events, access the knowledge graph of the operation, and build custom MOM applications.
+weight: 1
+---
+
+Rhize's flexible, event-centric architecture serves many functions.
+While Rhize has components that can replace an {{< abbr "MES" >}}, historian, andon system, or real-time monitoring solution,
+it can complement these systems equally well.
+You can map data from an MES or {{< abbr "ERP" >}} into the database to create a coherent data model that unites your operations IT.
+
+Besides better performance and flexibility, Rhize provides tighter integration of plant and system data.
+Its data model is generic enough to conform to many use cases, chiefly:
+- **A manufacturing knowledge graph**. Query the entire context and history of the operation.
+- **Headless MES or MOM**. Use the API to build custom applications for a variety of MES and MOM activities.
+- **An event handler**. Receive manufacturing message streams and react to them.
+
+
+## Manufacturing knowledge graph
+
+All data that Rhize collects, whether from sensors or an ERP, is contextual and interconnected. Rather than a relational database, Rhize uses a graph database where any node can link to any other. Users can query any data combination without requiring complex joins.
+
+The graph database unlocks new possibilities for manufacturing analysis and data science.
+For example:
+- Run queries to find anomalies in an operation, which may trace to a specific site, segment, equipment, material lot, personnel, and so on.
+- Discover places to optimize the system, whether they are bottlenecks to remove or highly productive areas to replicate.
+- Train deep-learning models to detect conditions that lead to batch failures.
+- Use the historical record as a model to run simulations of new events.
+
+Guide: [Use the knowledge graph]({{< relref "../how-to/gql" >}})
+
+## Headless MES or MOM
+
+Rhize serves as a backend to create custom applications to replace traditional MES or MOM systems.
+Rather than impose an opinion of what an MES interface should look like, Rhize provides only the data model, API, and BPMN engine.
+Your frontend teams can then use the tools of their choice to build the MES designed for your use case, delegating the backend work to Rhize.
+
+
+With the combination of its event-driven architecture and unified data model, Rhize can:
+- Calculate OEE or far more granular metrics
+- Handle schedules and maintenance orders
+- Track and trace material
+- Execute dynamic workflows
+
+Besides building bespoke frontends, many operators choose to integrate Rhize with low-code systems like Appsmith.
+For some problems, low-code tools can dramatically reduce the time to create applications, making it easier to build and test prototypes, involve more stakeholders in the design process, and iterate on working models.
+
+Guides: [Model production]({{< relref "../how-to/model" >}}), [Connect process data]({{< relref "../how-to/publish-subscribe" >}}).
+
+## Real-time event handling
+
+The fundamental design of Rhize is low-latency and event-driven.
+Rhize can collect and monitor data from protocols like MQTT and OPC-UA.
+
+Rhize also has components to monitor and react to this data stream, ensuring that you can stop problems early and program corrective measures to execute automatically.
+Event orchestration is handled through {{< abbr "BPMN" >}}, a low-code interface that can listen for events and initiate conditional flows.
+
+Guide: [Handle events]({{< relref "../how-to/bpmn" >}})
+
+## Calculating OEE
+
+Rhize includes an optional KPI service that can calculate OEE. Using a combination of Rhize workflows and real-time event handling, data can be transformed and persisted to a time-series database in a format that allows the KPI service to calculate key metrics.
+
+Guide: [KPI Service]({{< relref "../how-to/kpi-service" >}})
diff --git a/data/versionCompat.yaml b/data/versionCompat.yaml
index 9a33ee735..5b5843e7b 100644
--- a/data/versionCompat.yaml
+++ b/data/versionCompat.yaml
@@ -1,3 +1,11 @@
+- version: "v4.2.0"
+  keycloak: "v26.4"
+  prometheus: "v3.7.3"
+  questdb: "v9.1.1"
+  redpanda_console: "v3.2.2"
+  redpanda: "v25.2.10"
+  restate: "v1.5.3"
+
- version: "3.2.1"
  apollo_router: "1.61.0"
  grafana: "9.4.7"