Commit

Bump milvus version to 2.2.0 (#391)
Signed-off-by: Edward Zeng <jie.zeng@zilliz.com>

LoveEachDay authored Nov 18, 2022
1 parent ec78133 commit 59715af
Showing 4 changed files with 20 additions and 11 deletions.
4 changes: 2 additions & 2 deletions charts/milvus/Chart.yaml
```diff
@@ -1,9 +1,9 @@
 apiVersion: v1
 name: milvus
-appVersion: "2.1.4"
+appVersion: "2.2.0"
 kubeVersion: "^1.10.0-0"
 description: Milvus is an open-source vector database built to power AI applications and vector similarity search.
-version: 3.2.18
+version: 3.3.0
 keywords:
 - milvus
 - elastic
```
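
With the chart version bumped to 3.3.0 (appVersion 2.2.0), the new release shows up once the local repo index is refreshed; a quick check, assuming the repo is already added under the alias `milvus`:

```bash
# Refresh the local index and list available chart versions
# (the `milvus` repo alias is an assumption):
helm repo update
helm search repo milvus/milvus --versions | head -n 5
```
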
2 changes: 2 additions & 0 deletions charts/milvus/README.md
````diff
@@ -64,6 +64,8 @@ $ helm upgrade --install my-release milvus/milvus --set pulsar.enabled=false --s
 ```
 
 ### Upgrade an existing Milvus cluster
+> **IMPORTANT** If you have installed a Milvus cluster with a version below v2.1.x, you need to follow the instructions at https://github.com/milvus-io/milvus/blob/master/deployments/migrate-meta/README.md. After the meta migration, run `helm upgrade` to update your cluster again.
+
 E.g. to scale out query node from 1(default) to 2:
 ```bash
 # Helm v3.x
````
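
The hunk ends just inside the fenced example, so the command itself is cut off; a plausible completion, assuming Helm v3, a release named `my-release`, and the `milvus` repo alias:

```bash
# Helm v3.x -- scale query nodes from 1 (default) to 2 on an existing release
# (release name and repo alias are assumptions):
helm upgrade my-release milvus/milvus --reuse-values --set queryNode.replicas=2
```

For clusters below v2.1.x, the IMPORTANT note defers to the linked meta-migration guide; a hypothetical sketch of that kind of invocation (script path, flags, and versions are all assumptions — check the linked README before running anything):

```bash
# Hypothetical meta-migration call before upgrading to 2.2.0
# (script name, flags, and versions are assumptions, not verified
#  against the linked migrate-meta guide):
./migrate.sh -i my-release -s 2.1.4 -t 2.2.0
```
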
8 changes: 5 additions & 3 deletions charts/milvus/templates/config.tpl
```diff
@@ -297,8 +297,10 @@ dataNode:
   flowGraph:
     maxQueueLength: 1024 # Maximum length of task queue in flowgraph
     maxParallelism: 1024 # Maximum number of tasks executed in parallel in the flowgraph
-  flush:
-    insertBufSize: "{{ .Values.dataNode.flush.insertBufSize }}" # Bytes, 16 MB
+  segment:
+    insertBufSize: "{{ .Values.dataNode.segment.insertBufSize }}" # Bytes, 16 MB
+    deleteBufBytes: "{{ .Values.dataNode.segment.deleteBufBytes }}" # Bytes, 64 MB
+    syncPeriod: "{{ .Values.dataNode.segment.syncPeriod }}" # Seconds, 10min
 
 log:
   level: {{ .Values.log.level }}
@@ -367,7 +369,7 @@ common:
   authorizationEnabled: {{ .Values.authorization.enabled }}
   simdType: {{ .Values.common.simdType }} # default to auto
   indexSliceSize: 16 # MB
-  threadCoreCoefficient: {{ .Values.common.threadCoreCoefficient }} # default to 10
+  threadCoreCoefficient: {{ .Values.common.threadCoreCoefficient }}
 
 storageType: minio
 mem_purge_ratio: 0.2 # in Linux os, if memory-fragmentation-size >= used-memory * ${mem_purge_ratio}, then do `malloc_trim`
```
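
The template now renders the three `dataNode.segment` values into the generated Milvus config. One way to sanity-check the rendered block without installing anything (a sketch; release name and repo alias are illustrative):

```bash
# Render the chart locally and confirm the new dataNode segment settings
# appear in the generated config (release `my-release` and repo alias
# `milvus` are illustrative):
helm template my-release milvus/milvus | grep -A 3 'insertBufSize'
```
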
17 changes: 11 additions & 6 deletions charts/milvus/values.yaml
```diff
@@ -5,7 +5,7 @@ cluster:
 image:
   all:
     repository: milvusdb/milvus
-    tag: v2.1.4
+    tag: v2.2.0
     pullPolicy: IfNotPresent
     ## Optionally specify an array of imagePullSecrets.
     ## Secrets must be manually created in the namespace.
@@ -510,9 +510,9 @@ dataCoordinator:
   enableAutoCompaction: true
 
   gc:
-    interval: 60 # gc interval in seconds
-    missingTolerance: 3600 # file meta missing tolerance duration in seconds, 1 day
-    dropTolerance: 3600 # file belongs to dropped entity tolerance duration in seconds, 1 day
+    interval: 3600 # gc interval in seconds
+    missingTolerance: 86400 # file meta missing tolerance duration in seconds, 1 day
+    dropTolerance: 86400 # file belongs to dropped entity tolerance duration in seconds, 1 day
 
 
   service:
@@ -536,8 +536,13 @@ dataNode:
   profiling:
     enabled: false # Enable live profiling
 
-  flush:
-    insertBufSize: "16777216" ## bytes, 16MB
+  segment:
+    # Max buffer size to flush for a single segment.
+    insertBufSize: "16777216" # Bytes, 16 MB
+    # Max buffer size to flush del for a single channel
+    deleteBufBytes: "67108864" # Bytes, 64MB
+    # The period to sync segments if buffer is not empty.
+    syncPeriod: 600 # Seconds, 10min
 
 common:
   compaction:
```
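
All of the new `dataNode.segment` keys, like the retuned GC timings, are ordinary chart values, so they can be overridden at upgrade time; a hedged sketch (release name and numbers are illustrative, not tuning advice):

```bash
# Override the new segment buffers and the GC interval on upgrade
# (release name `my-release` and these values are illustrative):
helm upgrade my-release milvus/milvus --reuse-values \
  --set dataNode.segment.insertBufSize="33554432" \
  --set dataNode.segment.deleteBufBytes="134217728" \
  --set dataCoordinator.gc.interval=1800
```
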
