docs(karpor): refine helm installation document #598

Merged (4 commits) on Jan 27, 2025.
59 changes: 30 additions & 29 deletions docs/karpor/1-getting-started/2-installation.md
@@ -91,63 +91,62 @@ helm install karpor-release kusionstack/karpor --set registryProxy=docker.m.daoc

### Enable AI features

-If you are trying to install Karpor with AI features, including natural language search and AI analyze, `ai-auth-token` and `ai-base-url` should be configured, e.g.:
+If you want to install Karpor with AI features, including natural language search and AI analysis, you should configure parameters such as `ai-auth-token` and `ai-base-url`, for example:

```shell
-# At a minimum, server.ai.authToken and server.ai.baseUrl must be configured.
+# Minimal configuration, using OpenAI as the default AI backend
helm install karpor-release kusionstack/karpor \
-  --set server.ai.authToken=YOUR_AI_TOKEN \
-  --set server.ai.baseUrl=https://api.openai.com/v1
+  --set server.ai.authToken={YOUR_AI_TOKEN}

-# server.ai.backend has default values `openai`, which can be overridden when necessary.
-# If the backend you are using is compatible with OpenAI, then there is no need to make
-# any changes here.
+# Example using Azure OpenAI
helm install karpor-release kusionstack/karpor \
-  --set server.ai.authToken=YOUR_AI_TOKEN \
-  --set server.ai.baseUrl=https://api.openai.com/v1 \
-  --set server.ai.backend=huggingface
+  --set server.ai.authToken={YOUR_AI_TOKEN} \
+  --set server.ai.baseUrl=https://{YOUR_RESOURCE_NAME}.openai.azure.com \
+  --set server.ai.backend=azureopenai

-# server.ai.model has default values `gpt-3.5-turbo`, which can be overridden when necessary.
+# Example using Hugging Face
helm install karpor-release kusionstack/karpor \
-  --set server.ai.authToken=YOUR_AI_TOKEN \
-  --set server.ai.baseUrl=https://api.openai.com/v1 \
-  --set server.ai.model=gpt-4o
+  --set server.ai.authToken={YOUR_AI_TOKEN} \
+  --set server.ai.model={YOUR_HUGGINGFACE_MODEL} \
+  --set server.ai.backend=huggingface

-# server.ai.topP and server.ai.temperature can also be manually modified.
+# Custom configuration
helm install karpor-release kusionstack/karpor \
-  --set server.ai.authToken=YOUR_AI_TOKEN \
-  --set server.ai.baseUrl=https://api.openai.com/v1 \
-  --set server.ai.topP=0.5 \
-  --set server.ai.temperature=0.2
+  --set server.ai.authToken={YOUR_AI_TOKEN} \
+  --set server.ai.baseUrl=https://api.deepseek.com \
+  --set server.ai.backend=openai \
+  --set server.ai.model=deepseek-chat \
+  --set server.ai.topP=0.5 \
+  --set server.ai.temperature=0.2
```
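For review purposes, the same settings can also be kept in a values file instead of a long list of `--set` flags. A minimal sketch, assuming an OpenAI-compatible backend — the token and file name below are illustrative placeholders, not values taken from the chart:

```shell
# Sketch: collect the server.ai.* settings from the examples above into a values file.
# YOUR_AI_TOKEN is a placeholder; the key layout mirrors the --set paths.
cat > karpor-ai-values.yaml <<'EOF'
server:
  ai:
    authToken: YOUR_AI_TOKEN
    baseUrl: https://api.openai.com/v1
    backend: openai
    model: gpt-3.5-turbo
    topP: 0.5
    temperature: 0.2
EOF

# The install itself needs a running cluster, so it is shown commented out:
# helm install karpor-release kusionstack/karpor -f karpor-ai-values.yaml
cat karpor-ai-values.yaml
```

`helm install -f` merges a values file the same way the equivalent `--set` flags would, which keeps long option lists reviewable and versionable.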

-### Chart Parameters
+## Chart Parameters

The following table lists the configurable parameters of the chart and their default values.

-#### General Parameters
+### General Parameters

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| namespace | string | `"karpor"` | The namespace to deploy into. |
| namespaceEnabled | bool | `true` | Whether to generate the namespace. |
| registryProxy | string | `""` | Image registry proxy; it will be used as the prefix for all component images. |
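As an illustration of how `registryProxy` behaves, the value is simply prepended to each component image reference — a sketch with a made-up proxy host (`registry.example.com` is hypothetical, not part of the chart):

```shell
# Sketch: how a non-empty registryProxy value prefixes a component image.
# registry.example.com is an illustrative proxy host only.
registryProxy="registry.example.com"
image="kusionstack/karpor:latest"

# Prepend the proxy (with a trailing slash) only when registryProxy is non-empty.
echo "${registryProxy:+$registryProxy/}${image}"
# prints registry.example.com/kusionstack/karpor:latest
```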

-#### Global Parameters
+### Global Parameters

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| global.image.imagePullPolicy | string | `"IfNotPresent"` | Image pull policy to be applied to all Karpor components. |

-#### Karpor Server
+### Karpor Server

The Karpor Server Component is the main backend server. It is itself an `apiserver`, and it also provides `/rest-api` to serve the Dashboard.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| server.ai | object | `{"authToken":"","backend":"openai","baseUrl":"","model":"gpt-3.5-turbo","temperature":1,"topP":1}` | AI configuration section. The AI analysis feature requires that [authToken, baseUrl] be assigned values. |
-| server.ai.authToken | string | `""` | Authentication token for accessing the AI service. |
-| server.ai.backend | string | `"openai"` | Backend service or platform that the AI model is hosted on. e.g., "openai". If the backend you are using is compatible with OpenAI, then there is no need to make any changes here. |
+| server.ai.authToken | string | `""` | Authentication token for accessing the AI service. |
+| server.ai.backend | string | `"openai"` | Backend service or platform that the AI model is hosted on. Available options: <br/>- `"openai"`: OpenAI API (default)<br/>- `"azureopenai"`: Azure OpenAI Service<br/>- `"huggingface"`: Hugging Face API<br/> If the backend you are using is compatible with OpenAI, then there is no need to make any changes here. |
| server.ai.baseUrl | string | `""` | Base URL of the AI service. e.g., "https://api.openai.com/v1". |
| server.ai.model | string | `"gpt-3.5-turbo"` | Name or identifier of the AI model to be used. e.g., "gpt-3.5-turbo". |
| server.ai.temperature | float | `1` | Temperature parameter for the AI model. This controls the randomness of the output, where a higher value (e.g., 1.0) makes the output more random, and a lower value (e.g., 0.0) makes it more deterministic. |
@@ -161,7 +160,7 @@ The Karpor Server Component is main backend server. It itself is an `apiserver`,
| server.resources | object | `{"limits":{"cpu":"500m","ephemeral-storage":"10Gi","memory":"1Gi"},"requests":{"cpu":"250m","ephemeral-storage":"2Gi","memory":"256Mi"}}` | Resource limits and requests for the karpor server pods. |
| server.serviceType | string | `"ClusterIP"` | Service type for the karpor server. Available values: ["ClusterIP", "NodePort", "LoadBalancer"]. |

-#### Karpor Syncer
+### Karpor Syncer

The Karpor Syncer Component is an independent server that synchronizes cluster resources in real time.

@@ -174,7 +173,7 @@ The Karpor Syncer Component is independent server to synchronize cluster resources
| syncer.replicas | int | `1` | The number of karpor syncer pods to run. |
| syncer.resources | object | `{"limits":{"cpu":"500m","ephemeral-storage":"10Gi","memory":"1Gi"},"requests":{"cpu":"250m","ephemeral-storage":"2Gi","memory":"256Mi"}}` | Resource limits and requests for the karpor syncer pods. |

-#### ElasticSearch
+### ElasticSearch

The ElasticSearch Component stores the synchronized resources and user data.

@@ -187,7 +186,7 @@ The ElasticSearch Component to store the synchronized resources and user data.
| elasticsearch.replicas | int | `1` | The number of ElasticSearch pods to run. |
| elasticsearch.resources | object | `{"limits":{"cpu":"2","ephemeral-storage":"10Gi","memory":"4Gi"},"requests":{"cpu":"2","ephemeral-storage":"10Gi","memory":"4Gi"}}` | Resource limits and requests for the karpor elasticsearch pods. |

-#### ETCD
+### ETCD

The ETCD Component provides the storage for the Karpor Server's `apiserver`.

@@ -202,11 +201,13 @@ The ETCD Component is the storage of Karpor Server as `apiserver`.
| etcd.replicas | int | `1` | The number of etcd pods to run. |
| etcd.resources | object | `{"limits":{"cpu":"500m","ephemeral-storage":"10Gi","memory":"1Gi"},"requests":{"cpu":"250m","ephemeral-storage":"2Gi","memory":"256Mi"}}` | Resource limits and requests for the karpor etcd pods. |

-#### Job
+### Job

This one-time job is used to generate root certificates and perform some preliminary work.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| job.image.repo | string | `"kusionstack/karpor"` | Repository for the Job image. |
| job.image.tag | string | `""` | Tag for Karpor image. Defaults to the chart's appVersion if not specified. |


@@ -91,109 +91,121 @@ helm install karpor-release kusionstack/karpor --set registryProxy=docker.m.daoc

### Enable AI features

-If you want to install Karpor with AI features, including natural language search and AI analysis, you should configure `ai-auth-token` and `ai-base-url`, for example:
+If you want to install Karpor with AI features, including natural language search and AI analysis, you should configure parameters such as `ai-auth-token` and `ai-base-url`, for example:

```shell
-# At a minimum, server.ai.authToken and server.ai.baseUrl must be configured.
+# Minimal configuration, using OpenAI as the default AI backend
helm install karpor-release kusionstack/karpor \
-  --set server.ai.authToken=YOUR_AI_TOKEN \
-  --set server.ai.baseUrl=https://api.openai.com/v1
-# The default value of server.ai.backend is `openai`; it can be overridden when necessary. If the backend you are using is compatible with OpenAI, no changes are needed here.
+  --set server.ai.authToken={YOUR_AI_TOKEN}

+# Example using Azure OpenAI
helm install karpor-release kusionstack/karpor \
-  --set server.ai.authToken=YOUR_AI_TOKEN \
-  --set server.ai.baseUrl=https://api.openai.com/v1 \
-  --set server.ai.backend=huggingface
-# The default value of server.ai.model is `gpt-3.5-turbo`; it can be overridden when necessary.
+  --set server.ai.authToken={YOUR_AI_TOKEN} \
+  --set server.ai.baseUrl=https://{YOUR_RESOURCE_NAME}.openai.azure.com \
+  --set server.ai.backend=azureopenai

+# Example using Hugging Face
helm install karpor-release kusionstack/karpor \
-  --set server.ai.authToken=YOUR_AI_TOKEN \
-  --set server.ai.baseUrl=https://api.openai.com/v1 \
-  --set server.ai.model=gpt-4o
-# server.ai.topP and server.ai.temperature can also be modified manually.
+  --set server.ai.authToken={YOUR_AI_TOKEN} \
+  --set server.ai.model={YOUR_HUGGINGFACE_MODEL} \
+  --set server.ai.backend=huggingface

+# Custom configuration
helm install karpor-release kusionstack/karpor \
-  --set server.ai.authToken=YOUR_AI_TOKEN \
-  --set server.ai.baseUrl=https://api.openai.com/v1 \
-  --set server.ai.topP=0.5 \
-  --set server.ai.temperature=0.2
+  --set server.ai.authToken={YOUR_AI_TOKEN} \
+  --set server.ai.baseUrl=https://api.deepseek.com \
+  --set server.ai.backend=openai \
+  --set server.ai.model=deepseek-chat \
+  --set server.ai.topP=0.5 \
+  --set server.ai.temperature=0.2
```

-### Chart Parameters
+## Chart Parameters

The following table lists all configurable parameters of the chart and their default values.

-#### General Parameters
+### General Parameters

| Key | Type | Default | Description |
|-----|------|---------|-------------|
-| namespace | string | `"karpor"` | Target namespace to deploy into |
-| namespaceEnabled | bool | `true` | Whether to generate the namespace |
-| registryProxy | string | `""` | Image registry proxy address; once set, it will be used as the prefix for all component images. For example, `golang:latest` becomes `<registryProxy>/golang:latest` |
+| namespace | string | `"karpor"` | Target namespace to deploy into |
+| namespaceEnabled | bool | `true` | Whether to generate the namespace |
+| registryProxy | string | `""` | Image registry proxy, used as the prefix for all component images. |

-#### Global Parameters
+### Global Parameters

| Key | Type | Default | Description |
|-----|------|---------|-------------|
-| global.image.imagePullPolicy | string | `"IfNotPresent"` | Image pull policy applied to all Karpor components |
+| global.image.imagePullPolicy | string | `"IfNotPresent"` | Image pull policy applied to all Karpor components. |

-#### Karpor Server
+### Karpor Server

-The Karpor Server component is the main backend service. It is itself an `apiserver` and also provides `/rest-api` to serve the Web UI.
+The Karpor Server component is the main backend server. It is itself an `apiserver` and also provides `/rest-api` to serve the Dashboard.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
-| server.image.repo | string | `"kusionstack/karpor"` | Repository for the Karpor Server image |
-| server.image.tag | string | `""` | Tag for the Karpor Server image. Defaults to the chart's appVersion if not specified |
-| server.name | string | `"karpor-server"` | Component name of the Karpor Server |
-| server.port | int | `7443` | Port of the Karpor Server |
-| server.replicas | int | `1` | Number of Karpor Server pods to run |
-| server.resources | object | `{"limits":{"cpu":"500m","ephemeral-storage":"10Gi","memory":"1Gi"},"requests":{"cpu":"250m","ephemeral-storage":"2Gi","memory":"256Mi"}}` | Resource specification for the Karpor Server pods |
-| server.serviceType | string | `"ClusterIP"` | Service type of the Karpor Server; available values are ["ClusterIP", "NodePort", "LoadBalancer"] |
-
-#### Karpor Syncer
-
-The Karpor Syncer component is an independent service that synchronizes cluster resources in real time.
+| server.ai | object | `{"authToken":"","backend":"openai","baseUrl":"","model":"gpt-3.5-turbo","temperature":1,"topP":1}` | AI configuration section. The AI analysis feature requires [authToken, baseUrl] to be assigned values. |
+| server.ai.authToken | string | `""` | Authentication token for accessing the AI service. |
+| server.ai.backend | string | `"openai"` | Backend service or platform hosting the AI model. Available options:<br/>- `"openai"`: OpenAI API (default)<br/>- `"azureopenai"`: Azure OpenAI Service<br/>- `"huggingface"`: Hugging Face API<br/>If the backend you are using is compatible with OpenAI, no changes are needed here. |
+| server.ai.baseUrl | string | `""` | Base URL of the AI service, e.g. "https://api.openai.com/v1". |
+| server.ai.model | string | `"gpt-3.5-turbo"` | Name or identifier of the AI model to use, e.g. "gpt-3.5-turbo". |
+| server.ai.temperature | float | `1` | Temperature parameter for the AI model. It controls the randomness of the output: a higher value (e.g. 1.0) makes the output more random, while a lower value (e.g. 0.0) makes it more deterministic. |
+| server.ai.topP | float | `1` | Top-p (nucleus sampling) parameter for the AI model. It controls the sampled probability mass; a higher value produces more diverse output (typically in the range 0 to 1). |
+| server.enableRbac | bool | `false` | Enables RBAC authorization if set to true. |
+| server.image.repo | string | `"kusionstack/karpor"` | Repository for the Karpor Server image. |
+| server.image.tag | string | `""` | Tag for the Karpor Server image. Defaults to the chart's appVersion if not specified. |
+| server.name | string | `"karpor-server"` | Component name of the Karpor Server. |
+| server.port | int | `7443` | Port of the Karpor Server. |
+| server.replicas | int | `1` | Number of Karpor Server pods to run. |
+| server.resources | object | `{"limits":{"cpu":"500m","ephemeral-storage":"10Gi","memory":"1Gi"},"requests":{"cpu":"250m","ephemeral-storage":"2Gi","memory":"256Mi"}}` | Resource limits and requests for the Karpor Server pods. |
+| server.serviceType | string | `"ClusterIP"` | Service type of the Karpor Server. Available type values are ["ClusterIP", "NodePort", "LoadBalancer"]. |
+
+### Karpor Syncer
+
+The Karpor Syncer component is an independent server that synchronizes cluster resources in real time.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
-| syncer.image.repo | string | `"kusionstack/karpor"` | Repository for the Karpor Syncer image |
-| syncer.image.tag | string | `""` | Tag for the Karpor Syncer image. Defaults to the chart's appVersion if not specified |
-| syncer.name | string | `"karpor-syncer"` | Component name of the karpor Syncer |
-| syncer.port | int | `7443` | Port of the karpor Syncer |
-| syncer.replicas | int | `1` | Number of karpor Syncer pods to run |
-| syncer.resources | object | `{"limits":{"cpu":"500m","ephemeral-storage":"10Gi","memory":"1Gi"},"requests":{"cpu":"250m","ephemeral-storage":"2Gi","memory":"256Mi"}}` | Resource specification for the karpor Syncer pods |
+| syncer.image.repo | string | `"kusionstack/karpor"` | Repository for the Karpor Syncer image. |
+| syncer.image.tag | string | `""` | Tag for the Karpor Syncer image. Defaults to the chart's appVersion if not specified |
+| syncer.name | string | `"karpor-syncer"` | Component name of the Karpor Syncer. |
+| syncer.port | int | `7443` | Port of the Karpor Syncer. |
+| syncer.replicas | int | `1` | Number of Karpor Syncer pods to run. |
+| syncer.resources | object | `{"limits":{"cpu":"500m","ephemeral-storage":"10Gi","memory":"1Gi"},"requests":{"cpu":"250m","ephemeral-storage":"2Gi","memory":"256Mi"}}` | Resource limits and requests for the Karpor Syncer pods. |

-#### ElasticSearch
+### ElasticSearch

-The ElasticSearch component stores the synchronized resources and user data.
+The ElasticSearch component stores the synchronized resource data and user data.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
-| elasticsearch.image.repo | string | `"docker.elastic.co/elasticsearch/elasticsearch"` | Repository for the ElasticSearch image |
-| elasticsearch.image.tag | string | `"8.6.2"` | Specific tag for the ElasticSearch image |
-| elasticsearch.name | string | `"elasticsearch"` | Component name of ElasticSearch |
-| elasticsearch.port | int | `9200` | Port of ElasticSearch |
-| elasticsearch.replicas | int | `1` | Number of ElasticSearch pods to run |
-| elasticsearch.resources | object | `{"limits":{"cpu":"2","ephemeral-storage":"10Gi","memory":"4Gi"},"requests":{"cpu":"2","ephemeral-storage":"10Gi","memory":"4Gi"}}` | Resource specification for the karpor elasticsearch pods |
+| elasticsearch.image.repo | string | `"docker.elastic.co/elasticsearch/elasticsearch"` | Repository for the ElasticSearch image. |
+| elasticsearch.image.tag | string | `"8.6.2"` | Specific tag for the ElasticSearch image. |
+| elasticsearch.name | string | `"elasticsearch"` | Component name of ElasticSearch. |
+| elasticsearch.port | int | `9200` | Port of ElasticSearch. |
+| elasticsearch.replicas | int | `1` | Number of ElasticSearch pods to run. |
+| elasticsearch.resources | object | `{"limits":{"cpu":"2","ephemeral-storage":"10Gi","memory":"4Gi"},"requests":{"cpu":"2","ephemeral-storage":"10Gi","memory":"4Gi"}}` | Resource limits and requests for the Karpor ElasticSearch pods. |

-#### ETCD
+### ETCD

-The ETCD component is the storage behind the Karpor Server as `apiserver`.
+The ETCD component is the storage of the Karpor Server as `apiserver`.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
-| etcd.image.repo | string | `"quay.io/coreos/etcd"` | Repository for the ETCD image |
-| etcd.image.tag | string | `"v3.5.11"` | Tag for the ETCD image |
-| etcd.name | string | `"etcd"` | Component name of ETCD |
-| etcd.persistence.accessModes[0] | string | `"ReadWriteOnce"` | |
-| etcd.persistence.size | string | `"10Gi"` | |
-| etcd.port | int | `2379` | Port of ETCD |
-| etcd.replicas | int | `1` | Number of etcd pods to run |
-| etcd.resources | object | `{"limits":{"cpu":"500m","ephemeral-storage":"10Gi","memory":"1Gi"},"requests":{"cpu":"250m","ephemeral-storage":"2Gi","memory":"256Mi"}}` | Resource specification for the karpor etcd pods |
+| etcd.image.repo | string | `"quay.io/coreos/etcd"` | Repository for the ETCD image. |
+| etcd.image.tag | string | `"v3.5.11"` | Specific tag for the ETCD image. |
+| etcd.name | string | `"etcd"` | Component name of ETCD. |
+| etcd.persistence.accessModes | list | `["ReadWriteOnce"]` | Volume access modes; ReadWriteOnce means single-node read-write access. |
+| etcd.persistence.size | string | `"10Gi"` | Size of the ETCD persistent volume. |
+| etcd.port | int | `2379` | Port of ETCD. |
+| etcd.replicas | int | `1` | Number of ETCD pods to run. |
+| etcd.resources | object | `{"limits":{"cpu":"500m","ephemeral-storage":"10Gi","memory":"1Gi"},"requests":{"cpu":"250m","ephemeral-storage":"2Gi","memory":"256Mi"}}` | Resource limits and requests for the Karpor ETCD pods. |

-#### Job
+### Job

-This is a one-time Kubernetes Job used to generate root certificates and perform some preliminary work. Both the Karpor Server and Karpor Syncer depend on its completion before they can start properly.
+This one-time job is used to generate root certificates and perform some preliminary work.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
-| job.image.repo | string | `"kusionstack/karpor"` | Repository for the Job image |
-| job.image.tag | string | `""` | Tag for the Karpor image. Defaults to the chart's appVersion if not specified |
+| job.image.repo | string | `"kusionstack/karpor"` | Repository for the Job image. |
+| job.image.tag | string | `""` | Tag for the Karpor image. Defaults to the chart's appVersion if not specified |