diff --git a/migrate_bucket.adoc b/migrate_bucket.adoc deleted file mode 100644 index cd5c695e11..0000000000 --- a/migrate_bucket.adoc +++ /dev/null @@ -1,313 +0,0 @@ -= Migrate a Bucket's Storage Backend -:description: Full and Cluster Administrators can migrate a bucket's storage backend by calling the REST API and then performing full restores on the nodes containing the bucket. - -[.edition]#{enterprise}# - -== Storage Backend Migration Overview - -You can migrate a bucket's storage backend if you find the bucket's current performance is not meeting your needs. -For example, you can migrate a bucket from Couchstore to Magma if the bucket's working set grows beyond its memory quota. -You can migrate from Couchstore to Magma, or from Magma to Couchstore. -Migrating to a Magma bucket always results in a bucket with 1024 vBuckets, regardless of the number of vBuckets in the original bucket. - -NOTE: The backend migration described in this section does not support migrating between buckets with different numbers of vBuckets. -You cannot migrate a Couchstore or Magma bucket with 1024 vBuckets to a Magma bucket with 128 vBuckets. -Similarly, you cannot migrate from a Magma bucket with 128 vBuckets to a Couchstore or a Magma bucket with 128 vBuckets. -To migrate between buckets with different number of vBuckets, you can use a local cross datacenter replication (XDCR). -See <> for more information. - -You start a bucket's migration by calling the REST API to edit the bucket's `storageBackend` setting. -This call changes the bucket's global storage backend parameter. -However, it does not trigger an immediate conversion of the vBuckets to the new backend. -Instead, Couchbase adds override settings to each node to indicate its vBuckets still use the old storage backend. -To complete the migration, you must force the vBuckets to be rewritten. -The two ways to trigger this rewrite are to perform a swap rebalance or a graceful failover followed by a full recovery. -As Couchbase writes the vBuckets during these processes, it removes the storage override and saves the vBuckets using the new storage backend. - -NOTE: While you're migrating a bucket between storage backends, you can only change the bucket's `ramQuota` and `storageBackend` parameters. -Couchbase Server prevents you from making changes to the bucket's other parameters. - -== Prerequisites - -Before migrating a bucket, verify that the bucket's parameters meet the requirements for the new storage backend. -For example, a Magma bucket must have a memory quota of at least 1{nbsp}GB. -The REST API call to change the bucket's storage backend returns an error if the bucket does not meet the new storage backend's requirements. -See xref:learn:buckets-memory-and-storage/storage-engines.adoc[] for a list of storage backend requirements. - -If you're planning to migrate from Couchstore to Magma, also consider the current disk usage on the nodes containing the bucket. -Magma's default fragmentation settings can result in higher disk use. -See xref:#disk_usage[Disk Use Under Couchstore Verses Magma] for more information. - -[#perform_migration] -== Perform a Migration - -. Call the REST API to change the bucket's `storageBackend` parameter. -For example, the following command changes the storage backend of the travel-sample bucket to Magma. -+ -[source,console] ----- -include::manage:example$migrate-bucket-storage-backend.sh[tag=change-backend] ----- -. 
Verify that the nodes containing the bucket now have storage backend override settings for their vBuckets. -The following example calls the REST API to get the bucket configuration and filters the result through the `jq` command to list the node names and their storage backend formats. -+ -[source,console] ----- -include::manage:example$migrate-bucket-storage-backend.sh[tag=get-node-overrides] ----- -+ -The output of the previous command lists each node and the backend storage format used locally by the vBuckets: -+ ----- -include::example$storage_backend_overrides.log[] ----- -. For every node that contains the bucket, perform either a xref:install:upgrade-procedure-selection.adoc#swap-rebalance[swap rebalance] or a xref:learn:clusters-and-availability/graceful-failover.adoc[graceful failover] followed by a xref:learn:clusters-and-availability/recovery.adoc#full-recovery[full recovery] and xref:learn:clusters-and-availability/rebalance.adoc[rebalance] to rewrite the vBuckets on the node. -Both of these methods have their own limitations. -Swap rebalance requires that you add an additional node to the cluster. -The graceful failover and full recovery method temporarily removes a node from your cluster which can cause disruptions. -+ -You can take these steps via the UI, the command-line tool, or REST API calls. -The following example demonstrates using the REST API to perform a graceful failover and full recovery on a node named node3. -+ -.. Perform a graceful failover of node3: -+ -[source,console] ----- -include::manage:example$migrate-bucket-storage-backend.sh[tag=failover-node] ----- -.. Wait until the failover is complete. -Then perform a full recovery on the node: -+ -[source,console] ----- -include::manage:example$migrate-bucket-storage-backend.sh[tag=recover-node] ----- -.. When recovery is complete, perform a rebalance: -+ -[source,console] ----- -include::manage:example$migrate-bucket-storage-backend.sh[tag=rebalance-cluster] ----- -. After triggering each node to rewrite its vBuckets, verify the node is now using the new storage backend. -Re-run the command from step 2 to list the nodes and any storage backend overrides: -+ -[source,console] ----- -include::manage:example$storage-backend-override-node3.sh[] ----- -+ -The `null` under node3 indicates that it does not have a storage backend override. -It has migrated to the new storage backend. -. Repeat the previous two steps for the remaining nodes in the cluster. - -[#disk_usage] -== Disk Use Under Couchstore Verses Magma - -If you migrate a bucket's storage from Couchstore to Magma, you may see increased disk usage. -Couchstore's default threshold for fragmentation is 30%. -When a Couchstore bucket reaches this threshold, Couchbase Server attempts to fully compact the bucket. -If the bucket has a low write workload, Couchbase Server may be able to compact the bucket to 0% fragmentation. - -Magma's default fragmentation threshold is 50%. -Couchbase Server treats this threshold differently than the Couchstore threshold. -It does not perform a full compaction with the goal of reducing the bucket's fragmentation to 0%. -Instead, Couchbase Server compacts a Magma bucket to maintain its fragmentation at the threshold value. -This maintenance of the default 50% fragmentation can result in greater disk use for a Magma-backed bucket verses the Couchstore-backed bucket. 
- -If a bucket you migrated to Magma has higher sustained disk use that interferes with the node's performance, you have two options: - -* Reduce the fragmentation threshold of the Magma bucket. -For example, you could choose to reduce the fragmentation threshold to 30%. -You should only consider changing the threshold if the bucket's workload is not write-intensive. -For write-intensive workloads, the best practice for Magma buckets is to leave the fragmentation setting at 50%. -See xref:manage:manage-settings/configure-compact-settings.adoc[] to learn how to change the bucket's database fragmentation setting. - -* Roll back the migration. -You can revert a bucket from Magma back to Couchstore during or after a migration. -See the next section for more information. - -== Rolling Back a Migration - -As you migrate each node's vBuckets to a new storage backend, you may decide that the migration is not meeting your needs. -For example, you may see increased disk usage when moving from Couchstore to Magma as explained in xref:#disk_usage[Disk Use Under Couchstore Verses Magma]. -You can roll back the migration by: - -. Changing the bucket's backend setting to its original value. -. Force any migrated nodes to rewrite their vBuckets back to the old backend. - -You do not have to perform any steps for nodes you did not migrate. - -For example, to roll back the migration shown in xref:#perform_migration[Perform a Migration], you would follow these steps: - -. Call the REST API to change the bucket's backend back to Couchstore: -+ -[source,console] ----- -include::manage:example$migrate-bucket-storage-backend.sh[tag=change-backend-couchstore] ----- - -. Determine which nodes you have already migrated by calling the REST API to get the bucket's metadata: -+ -[source,console] ----- -include::manage:example$migrate-bucket-storage-backend.sh[tag=get-node-overrides] ----- -+ -For the migration shown in xref:#perform_migration[Perform a Migration], the output looks like this: -+ -[source,json] ----- -"node3.:8091" -"magma" -"node2.:8091" -null -"node1.:8091" -null ----- -+ -In this case, you must roll back node3 because you migrated it to Magma. - -. For each node that you have already migrated, perform another xref:install:upgrade-procedure-selection.adoc#swap-rebalance[swap rebalance] or a xref:learn:clusters-and-availability/graceful-failover.adoc[graceful failover] followed by a xref:learn:clusters-and-availability/recovery.adoc#full-recovery[full recovery] and xref:learn:clusters-and-availability/rebalance.adoc[rebalance] to roll the vBuckets on the node back to the previous backend. -+ -To roll back node3, follow these steps: -+ -.. Perform a graceful failover of node3: -+ -[source,console] ----- -include::manage:example$migrate-bucket-storage-backend.sh[tag=failover-node] ----- -.. Wait until the failover is complete. -Then perform a full recovery on the node: -+ -[source,console] ----- -include::manage:example$migrate-bucket-storage-backend.sh[tag=recover-node] ----- -.. When recovery is complete, perform a rebalance: -+ -[source,console] ----- -include::manage:example$migrate-bucket-storage-backend.sh[tag=rebalance-cluster] ----- - -. Repeat the previous step until all nodes that you'd migrated have rolled back to their original storage backend. 
- - -[#xdcr-migration] -== XDCR Storage Backend Migration - -You can use xref:learn:clusters-and-availability/xdcr-overview.adoc[] to migrate data between two buckets with different storage backends, including between Magma buckets using different numbers of vBuckets. -You can perform this migration on the same cluster or between two clusters. - -include::learn:partial$xdcr-magma-128-vbucket-incompatibility.adoc[] - -To perform an XDCR storage backed migration on the same cluster, it must have enough memory and storage for two copies of the bucket's data. -After the migration, you can drop the original bucket to free the resources it uses. - -The process for performing a backend migration using XDCR is similar to configuring any other XDCR replication. -The only difference is that the source and destination of the replication are the same cluster. - -The following steps demonstrate migrating a Magma bucket with 128 vBuckets named `travel-sample` to a Magma bucket with 1024 vBuckets named `travel-sample-1024`: - -. Create a new bucket named `travel-sample-1024` using the Magma storage backend with 1024 vBuckets. -For more information about creating a bucket, see xref:manage:manage-buckets/create-bucket.adoc[]. -The following example uses the REST API to create the new bucket: - -+ -[source,console] ----- -curl -X POST http://127.0.0.1:8091/pools/default/buckets \ - -u Administrator:password \ - -d name=travel-sample-1024 \ - -d storageBackend=magma \ - -d numVbuckets=1024 \ - -d ramQuota=1024 ----- - -. Recreate any scopes and collections in the new bucket that are in the original bucket. -Replication does not recreating missing scopes and collections for you. -You can create the scopes and collections manually or reuse any deployment scripts you have. -See xref:manage:manage-scopes-and-collections/manage-scopes-and-collections.adoc[] for details on creating scopes and collections. - -+ -You can also create a script to recreate the scopes and collections in the new bucket. -For example, the following Python script uses the Python SDK to accomplish this task: - -+ -[source,python] ----- -include::manage:example$duplicate-scopes-collections.py[] ----- - -. Add a loopback reference to the cluster. -The following example uses the REST API to add an XDCR reference named `self` to the cluster that uses the loopback IP address as the hostname: - -+ -[source,console] ----- - curl -X POST http://127.0.0.1:8091/pools/default/remoteClusters -u Administrator:password \ --d username=Administrator \ --d password=password \ --d hostname=127.0.0.1 \ --d name=self \ --d demandEncryption=0 | jq ----- - -+ -The out of previous command is: - -+ -[source,json] ----- -{ - "connectivityErrors": null, - "deleted": false, - "hostname": "127.0.0.1:8091", - "name": "self", - "network_type": "", - "secureType": "none", - "uri": "/pools/default/remoteClusters/self", - "username": "Administrator", - "uuid": "a43e930240738b5aee16e2688a65d08f", - "validateURI": "/pools/default/remoteClusters/self?just_validate=1" -} ----- - -. Create an XDCR replication from the original bucket to the new bucket. 
-The following example uses the REST API to create the replication: - -+ -[source,console] ----- -curl -v -X POST -u Administrator:password \ -http://127.0.0.1:8091/controller/createReplication \ --d fromBucket=travel-sample \ --d toCluster=self \ --d toBucket=travel-sample-1024 \ --d replicationType=continuous \ --d createTarget=true \ --d enableCompression=1 | jq ----- - -+ -The result of the previous command looks like this: - -+ -[source,json] ----- -{ - "id": "a43e930240738b5aee16e2688a65d08f/travel-sample/travel-sample-1024" -} ----- - -+ -The replication process starts. - -. Monitor the replication process until it completes. -You can monitor the replication process xref:manage:manage-xdcr/create-xdcr-replication.adoc#monitor-current-replications[via the Couchbase Server Web Console] or by xref:rest-api:rest-xdcr-statistics.adoc[calling the REST API]. -Once the replication has duplicated all of the documents in the original bucket without errors, you can stop and delete it. -Then you can drop the original bucket. - -+ -IMPORTANT: Be sure to update all clients to use the new bucket before you stop the replication. diff --git a/modules/ROOT/nav.adoc b/modules/ROOT/nav.adoc index 874c1b7c49..0cbec5d9bc 100644 --- a/modules/ROOT/nav.adoc +++ b/modules/ROOT/nav.adoc @@ -123,8 +123,9 @@ include::third-party:partial$nav.adoc[] ** xref:manage:manage-buckets/create-bucket.adoc[Create a Bucket] ** xref:manage:manage-buckets/edit-bucket.adoc[Edit a Bucket] ** xref:manage:manage-buckets/flush-bucket.adoc[Flush a Bucket] - ** xref:manage:manage-buckets/delete-bucket.adoc[Drop a Bucket] - ** xref:manage:manage-buckets/migrate-bucket.adoc[] +** xref:manage:manage-buckets/delete-bucket.adoc[Drop a Bucket] +** xref:manage:manage-buckets/migrate-bucket.adoc[] +** xref:manage:manage-buckets/change-ejection-policy.adoc[] * xref:manage:manage-scopes-and-collections/manage-scopes-and-collections.adoc[Manage Scopes and Collections] * xref:manage:manage-logging/manage-logging.adoc[Manage Logging] * xref:manage:manage-settings/manage-settings.adoc[Manage Settings] diff --git a/modules/introduction/partials/new-features-80.adoc b/modules/introduction/partials/new-features-80.adoc index 05b0fecf08..82e4e4d010 100644 --- a/modules/introduction/partials/new-features-80.adoc +++ b/modules/introduction/partials/new-features-80.adoc @@ -217,6 +217,32 @@ The metric includes the first 32 characters sent by any clients up to the first and limits the number of metrics to 100. Additional information sent by clients at connection time can be found in the logs. +[#ejection-policy-without-restart] +https://jira.issues.couchbase.com/browse/MB-67082[MB-67082] Allow eviction policy to be changed without bucket restart:: +The new `noRestart` parameter for the `/pools/default/buckets/{BUCKET-NAME}` REST API lets you change the ejection policy of a Couchbase bucket without automatically restarting it. +If you prevent the restart, perform one of the following: ++ +-- +* A swap rebalance +* On each data node, perform a graceful failover, then either a delta (when not performing a storage backend migration) or full recovery followed by a rebalance. +-- ++ +Either of these procedures lets you avoid the downtime associated with restarting the bucket. +For more information, see xref:manage:manage-buckets/change-ejection-policy.adoc[]. + ++ +This setting is useful when you migrate a bucket to a new storage engine that would benefit from a new ejection policy. 
+For example, you should consider changing a bucket's ejection policy to Full Ejection when migrating a bucket to the xref:learn:buckets-memory-and-storage/storage-engines.adoc#storage-engine-magma[Magma storage engine]. +Using the `noRestart` parameter, you can change the ejection policy at the same time you migrate the bucket to the new storage engine. +See xref:manage:manage-buckets/migrate-bucket.adoc[] for more information. + +[#ejection-ephemeral-buckets] +https://jira.issues.couchbase.com/browse/MB-64104[MB-64104] Allow changing the ejection policy of ephemeral buckets:: +You can now change the ejection policy of an ephemeral bucket using the Couchbase Server Web Console and the REST API. +Unlike Couchstore buckets, changing the ejection policy of an ephemeral bucket does not require a bucket restart or other additional steps. +For more information, see xref:manage:manage-buckets/change-ejection-policy.adoc[]. + + [#section-new-feature-800-disk-limits] https://jira.issues.couchbase.com/browse/MB-59113[MB-59113] Prevent buckets from causing nodes to run out of disk space:: You can configure Couchbase Server to prevent the Data Service from writing to the data service path once the filesystem has filled to a configurable threshold. @@ -254,6 +280,7 @@ include::learn:partial$maintain-durability-warning.adoc[] See xref:learn:data/durability.adoc#maintaining-durable-writes[Maintaining Durable Writes During Single Replica Failovers] for more information about this feature. + [#section-new-feature-800-backup-service] === Backup and Restore https://jira.issues.couchbase.com/browse/MB-44863[MB-44863]:: diff --git a/modules/learn/pages/buckets-memory-and-storage/memory.adoc b/modules/learn/pages/buckets-memory-and-storage/memory.adoc index d5dea6bc08..cc9fb3a63a 100644 --- a/modules/learn/pages/buckets-memory-and-storage/memory.adoc +++ b/modules/learn/pages/buckets-memory-and-storage/memory.adoc @@ -133,50 +133,100 @@ For more information, see xref:rest-api:rest-bucket-create.adoc#warmupbehavior[w [#ejection] == Ejection -If a bucket's memory quota is exceeded, items may be _ejected_ from the bucket by the Data Service. +If a bucket's memory use gets close to its memory quota, the Data Service may eject data from memory. +See <<#watermarks>> for more information about how Couchbase Server manages memory use. +You assign an ejection policy (also known as an eviction method) to each bucket that determines if and how the Data Service ejects data from memory. +Couchbase and Ephemeral buckets each have their own ejection policies. -Different ejection methods are available, and are configured per bucket. -Note that in some cases, ejection is configured _not_ to occur. +NOTE: Capella refers to Couchbase buckets as Memory and Disk buckets, and Ephemeral buckets as Memory Only buckets. -For a Couchbase bucket, you can choose betweeen a *Value-only* or *Full* ejection method: +The two ejection policies available for Couchbase buckets are: -* *Value-only*: The bucket only ejects data when it removes a document from memory. +Value-only:: +The Data Service only ejects a document's data when it ejects it from memory. +It keeps the document's keys and metadata in memory. +Retaining the keys and metadata help limit the performance impact of ejecting the document from memory. +This is the default policy for Couchbase buckets. Choose this method if you need better performance, but be aware that it uses more system memory. 
-* *Full*: The bucket ejects data, metadata, keys, and values when it removes a document from memory. +Full:: +The Data Service removes the entire document, including its metadata and keys, when it ejects a document from memory. Choose this method if you want to reduce your memory overhead requirement. -For an Ephemeral bucket, you can choose between a *No ejection* or *Eject data when RAM is full* ejection policy: +include::partial$full-ejection-note.adoc[] -* *No ejection*: If the bucket reaches its memory quota, the bucket doesn't eject any existing data and attempts to cache new data fail. -* *Eject data when RAM is full*: If the bucket reaches its memory quota, the bucket ejects older documents from RAM to make space for new data. +For more information about Couchbase bucket ejection policies, see the blog post https://blog.couchbase.com/a-tale-of-two-ejection-methods-value-only-vs-full/[A Tale of Two Ejection Methods: Value-only vs. Full^] -NOTE: Ejection from Ephemeral buckets removes data without persistence because Ephemeral buckets have no presence on disk. +The two ejection methods available for Ephemeral buckets are: -For more information about buckets and bucket types, see xref:buckets-memory-and-storage/buckets.adoc[Buckets]. +No ejection:: +If the bucket reaches its memory quota, the Data Service does not eject data. +Instead, it refuses to load any new data until memory becomes available. +Memory can become available when users delete documents or documents expire because of xref:learn:data/expiration.adoc[expiration settings]). + +Eject data when RAM is full:: +If the bucket approaches its memory quota, the Data Service ejects documents to make space for new data. +It chooses the documents to eject based on the Not Recently Used (NRU) algorithm. +This algorithm uses metadata to determine which documents have not been accessed recently. + +[IMPORTANT] +==== +Data ejected from an Ephemeral bucket is lost because it's never persisted to disk. + +Ejecting data from a Couchbase bucket does not remove the data from disk, so it's still available. +The only effect is that the next access to the data is slower because the Data Service has to read it from disk instead of from memory. +==== + +[#changing-ejection-policy] + +### Changing the Ejection Policy of a Couchbase Bucket + +You can change the ejection policy of an existing bucket. +When you change the ejection policy for an ephemeral bucket, the change takes effect immediately. +When you change the ejection policy for a Couchbase bucket, the change does not take effect until you perform one of the following procedures: + +* Restart the bucket, which is the default behavior unless you set the `noRestart` parameter to `true` in the REST API call to change the ejection policy. +* If you set the `noRestart` parameter to `true`, you must perform one of the following processes: +** Swap rebalance each node in the cluster running the Data Service. +** Perform a graceful failover followed by a delta or full recovery and rebalance of each node running the Data Service. + +You may want to change the ejection policy of a bucket if you're changing the storage engine it uses. +For example, suppose you're changing a bucket from using the xref:learn:buckets-memory-and-storage/storage-engines.adoc#storage-engine-couchstore[Couchstore] to the xref:learn:buckets-memory-and-storage/storage-engines.adoc#storage-engine-magma[Magma]. 
+Then you should consider changing the ejection policy to Full Ejection, which is better for buckets with low memory to storage ratios. +See xref:manage:manage-buckets/migrate-bucket.adoc[] for more information about migrating a bucket to a different storage engine. + +NOTE: If you change the ejection policy while performing a backend storage migration, you must use a full recovery when you recover a node after a graceful failover. +The storage migration requires the full recovery to complete its migration. +You also cannot allow Couchbase Server to restart the bucket after changing the ejection policy during a migration. + +See xref:manage:manage-buckets/change-ejection-policy.adoc[] for more information about changing a bucket's ejection policy. + +[#watermarks] +=== Memory Watermarks +For each bucket, Couchbase Server manages available memory using two watermarks: `memoryLowWatermark` and `memoryHighWatermark`. +The `memoryHighWatermark` watermark is the threshold where Couchbase Server takes action to prevent the bucket from exceeding its memory allocation. +When memory use reaches this watermark, the Data Service ejects items if the bucket's ejection policy allows item ejection. +It continues ejecting items until the bucket's memory use drops to the `memoryLowWatermark` watermark. +If ejection cannot free enough space, the Data Service stops ingesting data, sends error messages to clients, and displays an insufficient memory notification. +When enough memory becomes available, data ingestion resumes. NOTE: The settings `mem_high_wat`, and `mem_low_wat` are no longer supported through `cbepctl`. These settings are replaced by new settings `memoryLowWatermark` and `memoryHighWatermark`, which can be configured using REST APIs. -For each bucket, available memory is managed according to two _watermarks_, which are `memoryLowWatermark` and `memoryHighWatermark`. -If data is continuously loaded into the bucket, its quantity eventually increases to the value indicated by the `memoryLowWatermark` watermark. -At this point, no action is taken. -Then, as still more data is loaded, the data's quantity increases to the value indicated by the `memoryHighWatermark` watermark. -If, based on the bucket's configuration, items can be ejected from the bucket, the Data Service ejects items from the bucket until the quantity of data has decreased to the `memoryLowWatermark` watermark. -In cases where ejection cannot free enough space to support continued data-ingestion, the Data Service stops ingesting data, error messages are sent to clients, and the system displays an _insufficient memory_ notification. -When sufficient memory is again available, data-ingestion resumes. +Couchbase Server selects items for ejection based on metadata that shows whether the item is Not Recently Used (NRU). +If an item was not used recently, it becomes a candidate for ejection. -Items are selected for ejection based on metadata that each contains, indicating whether the item can be classified as _Not Recently Used_ (NRU). -If an item has not been recently used, it is a candidate for ejection. +The following diagram shows how `mem_low_wat` and `mem_high_wat` relate to the bucket's overall memory quota: The relationship of `memoryLowWatermark` and `memoryHighWatermark` to the bucket's overall memory quota is illustrated as follows: + [#tunable_memory] image::buckets-memory-and-storage/tunableMemory.png[,416] The default setting for `memoryLowWatermark` is 75%. The default setting for `memoryHighWatermark` is 85%. 
The default settings can be changed using the REST API. -See xref:rest-api:rest-bucket-create.adoc#memorylowwatermark[memoryLowWatermark] and xref:rest-api:rest-bucket-create.adoc#memoryhighwatermark[memoryHighWatermark]. +See xref:cli:cbepctl/set-flush_param.adoc[set flush_param] for more information about changing these values. [#expiry-pager] == Expiry Pager diff --git a/modules/learn/pages/buckets-memory-and-storage/storage-engines.adoc b/modules/learn/pages/buckets-memory-and-storage/storage-engines.adoc index 969da64f31..e4dc2e69dd 100644 --- a/modules/learn/pages/buckets-memory-and-storage/storage-engines.adoc +++ b/modules/learn/pages/buckets-memory-and-storage/storage-engines.adoc @@ -39,6 +39,7 @@ This setting makes sure there are enough threads to sustain high write rates. To learn more about how you should configure the Writer Thread settings for your Magma bucket, see xref:manage:manage-settings/general-settings.adoc#data-settings[Data Settings] +[#couchstore] == Couchstore Couchstore is the original storage engine for Couchbase Server. diff --git a/modules/learn/pages/buckets-memory-and-storage/storage-settings.adoc b/modules/learn/pages/buckets-memory-and-storage/storage-settings.adoc index 44328f775b..81b5781729 100644 --- a/modules/learn/pages/buckets-memory-and-storage/storage-settings.adoc +++ b/modules/learn/pages/buckets-memory-and-storage/storage-settings.adoc @@ -15,7 +15,7 @@ Couchbase Server restores data that's not in memory from disk when needed. Ephemeral buckets and their items exist only in memory and are never written to disk. -For more details, see xref:buckets-memory-and-storage/buckets.adoc[Buckets]. +For more information, see xref:buckets-memory-and-storage/buckets.adoc[]. Couchbase Server compresses the data it writes to disk. Compression reduces the amount of disk space used which can help reduce costs. @@ -57,7 +57,7 @@ Set the thread count for each between 1 and 64. Use [.cmd]`cbstats` command line tool with the [.param]`raw workload` option to view the thread status. See xref:cli:cbstats-intro.adoc[cbstats] for information. -For information on using the REST API to manage thread counts, see xref:rest-api:rest-reader-writer-thread-config.adoc[Setting Thread Allocations]. +For information about using the REST API to manage thread counts, see xref:rest-api:rest-reader-writer-thread-config.adoc[Setting Thread Allocations]. [#deletion] == Deletion @@ -166,57 +166,13 @@ For information about performing manual compaction with the command line, see xr For all information about using the REST API for compaction, see the xref:rest-api:compaction-rest-api.adoc[Compaction API]. -== Disk I/O Priority - -Disk I/O means reading items from and writing them to disk. -Disk I/O does not block client interactions because it runs as a background task. -You can configure the priority of disk I/O and other background tasks, such as item paging and DCP stream processing, for each bucket. -For example, you can give one bucket a higher disk I/O priority than another. -For further information, see -xref:manage:manage-buckets/create-bucket.adoc[Create a Bucket]. - [#storage-settings-ejection-policy] == Ejection Policy -To improve performance, Couchbase Server tries to keep as much data as possible in memory. -When memory fills, Couchbase Server ejects data from memory to make room for new data. -Ejection policies control how Couchbase Server decides what data to remove. - -Ejection has a different effect on different bucket types. 
-In an ephemeral bucket, data that Couchbase Server ejects is lost, because it only exists in memory. -In Couchbase buckets, data is removed from memory but still exists on disk. -If the data is needed again, Couchbase Server can reload the data from disk back into memory. - -The available ejection policies depend on the bucket type, as shown in the following table. - -.Ejection policies -|=== -|Policy |Bucket type |Description - -|No Ejection -|Ephemeral -|When memory runs out, the bucket becomes read-only to prevent data loss. -This is the default setting. - -|Not Recently Used (NRU) Ejection -|Ephemeral -|The server removes from memory the documents that have not been used for the longest time. - -|Value Only Ejection -|Couchbase -|When memory is low, Couchbase Server ejects values and data from memory but keeps keys and metadata. -This is the default policy for Couchbase buckets. - -|Full Ejection -|Couchbase -|The server ejects data, keys, and metadata from memory. - -|=== - -You can set the policy using the xref:rest-api:rest-bucket-create.adoc#evictionpolicy[REST API] when you create the bucket. -For more information about ejection policies, read https://blog.couchbase.com/a-tale-of-two-ejection-methods-value-only-vs-full/ - -include::partial$full-ejection-note.adoc[] +The ejection policy (also known as the eviction method) controls how Couchbase Server prevents data loss due to running out of memory to store data. +It controls whether and how it ejects data from memory when the bucket's memory quota is exhausted. +The policies you can set depend on the type of the bucket. +See xref:learn:buckets-memory-and-storage/memory.adoc#ejection[Ejection] for more information. -NOTE: In Capella, Couchbase buckets are called Memory and Disk buckets. -Ephemeral buckets are called Memory Only buckets. +You set the policy when creating the bucket and can change it later using the REST API, command-line interface, or Couchbase Server Web Console. +See xref:manage:manage-buckets/create-bucket.adoc[] for more information. diff --git a/modules/learn/partials/full-ejection-note.adoc b/modules/learn/partials/full-ejection-note.adoc index 4ee8a3512b..6a81fceda8 100644 --- a/modules/learn/partials/full-ejection-note.adoc +++ b/modules/learn/partials/full-ejection-note.adoc @@ -1,5 +1,7 @@ [NOTE] ==== -Full Ejection is recommended when the xref:learn:buckets-memory-and-storage/storage-engines.adoc#storage-engine-magma[Magma storage engine] is used as the storage engine for a bucket. -This is especially the case when the ratio of memory to data is very low (Magma allows you to go as low as 1% of memory to data ratio). +Use the Full Ejection policy for buckets using the xref:learn:buckets-memory-and-storage/storage-engines.adoc#storage-engine-magma[Magma storage engine]. +This setting works well when the ratio of memory to data is low. +In these cases, retaining just the keys and metadata of documents can still consume significant portions of the allocated memory. +Magma allows you to set a memory to data ratio as low as 1%. 
==== diff --git a/modules/manage/assets/images/manage-buckets/accessBucketTab.png b/modules/manage/assets/images/manage-buckets/accessBucketTab.png deleted file mode 100644 index 6bbf7916f7..0000000000 Binary files a/modules/manage/assets/images/manage-buckets/accessBucketTab.png and /dev/null differ diff --git a/modules/manage/assets/images/manage-buckets/addBucketWithMagmaOption.png b/modules/manage/assets/images/manage-buckets/addBucketWithMagmaOption.png index 4da723e6b8..f77f7c0911 100644 Binary files a/modules/manage/assets/images/manage-buckets/addBucketWithMagmaOption.png and b/modules/manage/assets/images/manage-buckets/addBucketWithMagmaOption.png differ diff --git a/modules/manage/assets/images/manage-buckets/addDataBucketDialogExpandedForEphemeral.png b/modules/manage/assets/images/manage-buckets/addDataBucketDialogExpandedForEphemeral.png index b59638eb7a..bc79dec2d0 100644 Binary files a/modules/manage/assets/images/manage-buckets/addDataBucketDialogExpandedForEphemeral.png and b/modules/manage/assets/images/manage-buckets/addDataBucketDialogExpandedForEphemeral.png differ diff --git a/modules/manage/assets/images/manage-buckets/bucket-edit-dialog-expanded.png b/modules/manage/assets/images/manage-buckets/bucket-edit-dialog-expanded.png new file mode 100644 index 0000000000..ce9f30e433 Binary files /dev/null and b/modules/manage/assets/images/manage-buckets/bucket-edit-dialog-expanded.png differ diff --git a/modules/manage/assets/images/manage-buckets/bucket-edit-dialog-initial.png b/modules/manage/assets/images/manage-buckets/bucket-edit-dialog-initial.png new file mode 100644 index 0000000000..65664c0e51 Binary files /dev/null and b/modules/manage/assets/images/manage-buckets/bucket-edit-dialog-initial.png differ diff --git a/modules/manage/assets/images/manage-buckets/bucketsViewInitialEdit.png b/modules/manage/assets/images/manage-buckets/bucketsViewInitialEdit.png index c9ee6e7c6e..8893968147 100644 Binary files a/modules/manage/assets/images/manage-buckets/bucketsViewInitialEdit.png and b/modules/manage/assets/images/manage-buckets/bucketsViewInitialEdit.png differ diff --git a/modules/manage/assets/images/manage-buckets/bucketsViewWithExpandedBucketRow.png b/modules/manage/assets/images/manage-buckets/bucketsViewWithExpandedBucketRow.png index c47b447358..43464ae0cb 100644 Binary files a/modules/manage/assets/images/manage-buckets/bucketsViewWithExpandedBucketRow.png and b/modules/manage/assets/images/manage-buckets/bucketsViewWithExpandedBucketRow.png differ diff --git a/modules/manage/assets/images/manage-buckets/editBucketButton.png b/modules/manage/assets/images/manage-buckets/editBucketButton.png deleted file mode 100644 index 3922c7eeef..0000000000 Binary files a/modules/manage/assets/images/manage-buckets/editBucketButton.png and /dev/null differ diff --git a/modules/manage/examples/change-ejection-policy.sh b/modules/manage/examples/change-ejection-policy.sh new file mode 100644 index 0000000000..75e0c0471e --- /dev/null +++ b/modules/manage/examples/change-ejection-policy.sh @@ -0,0 +1,57 @@ +# Change ejection policy of Couchbase bucket +# tag::change-ejection-no-restart[] +curl -v -X POST http://localhost:8091/pools/default/buckets/travel-sample \ + -u Administrator:password \ + -d evictionPolicy="fullEviction" \ + -d noRestart=true +# end::change-ejection-no-restart[] + +# tag::show-policy-overrides[] +curl -s GET -u Administrator:password \ + http://localhost:8091/pools/default/buckets/travel-sample \ + | jq '[ .nodes[] | { (.hostname): .evictionPolicy }] + 
[{ (.name): .evictionPolicy }]' +# end::show-policy-overrides[] + +# Get the current ejection policy +# tag::get-ejection-policy[] +curl -s GET -u Administrator:password \ + http://localhost:8091/pools/default/buckets/travel-sample \ + | jq '.evictionPolicy' +# end::get-ejection-policy[] + +# Graceful failover of node 3 +# tag::failover-node[] +curl -X POST -u Administrator:password \ + http://localhost:8091/controller/startGracefulFailover \ + -d 'otpNode=ns_1@node3.' +# end::failover-node[] + +# Delta recovery of node 3 +# tag::recover-node[] +curl -X POST -u Administrator:password \ + http://localhost:8091/controller/setRecoveryType \ + -d 'otpNode=ns_1@node3.' \ + -d 'recoveryType=delta' +# end::recover-node[] + +# Rebalance +# tag::rebalance-cluster[] +curl -X POST -u Administrator:password \ + http://localhost:8091/controller/rebalance \ + -d 'knownNodes=ns_1@node1.,ns_1@node2.,ns_1@node3.' +# end::rebalance-cluster[] + +# Show setting of ejection policy on ephemeral bucket +# tag::show-ephemeral-policy[] +curl -s GET -u Administrator:password \ + http://localhost:8091/pools/default/buckets/sample-ephemeral \ + | jq '{ (.name): .evictionPolicy }' +# end::show-ephemeral-policy[] + +# Change Ephemeral bucket ejection policy +# tag::change-ephemeral-policy[] +curl -s -X POST http://localhost:8091/pools/default/buckets/sample-ephemeral \ + -u Administrator:password \ + -d evictionPolicy="nruEviction" +# end::change-ephemeral-policy[] + diff --git a/modules/manage/examples/migrate-bucket-storage-backend.sh b/modules/manage/examples/migrate-bucket-storage-backend.sh index 10b460d2a8..15dcb645bb 100644 --- a/modules/manage/examples/migrate-bucket-storage-backend.sh +++ b/modules/manage/examples/migrate-bucket-storage-backend.sh @@ -16,6 +16,14 @@ curl -X POST -u Administrator:password \ -d 'storageBackend=magma' # end::change-backend[] +# tag::change-backend-and-ejection[] +curl -X POST -u Administrator:password \ + http://localhost:8091/pools/default/buckets/travel-sample \ + -d 'storageBackend=magma' \ + -d 'evictionPolicy=fullEviction' \ + -d 'noRestart=true' +# end::change-backend-and-ejection[] + # tag::get-backend[] curl -s GET -u Administrator:password \ http://localhost:8091/pools/default/buckets/travel-sample \ diff --git a/modules/manage/pages/manage-buckets/change-ejection-policy.adoc b/modules/manage/pages/manage-buckets/change-ejection-policy.adoc new file mode 100644 index 0000000000..65d4dbeb9e --- /dev/null +++ b/modules/manage/pages/manage-buckets/change-ejection-policy.adoc @@ -0,0 +1,307 @@ += Change a Bucket's Ejection Policy +:description: You can change the ejection method of a bucket using the Couchbase Server Web Console or the REST API. +:toclevels: 3 + + +[abstract] +{description} +The bucket's ejection policy (also known as its eviction method) controls how Couchbase Server removes documents from memory as the bucket approaches its memory quota. +See xref:learn:buckets-memory-and-storage/memory.adoc#ejection[Ejection] for more information about ejection policies. + +You initially set the ejection policy when you create a Couchbase bucket. +However, you can change the ejection policy of an existing bucket. + +You may want to change the ejection policy of a Couchbase bucket when migrating its xref:learn:buckets-memory-and-storage/storage-engines.adoc[storage engine]. +For example, when migrating a bucket from Couchstore to Magma, you should consider changing the bucket's ejection policy to Full Ejection. 
+This policy works well for Magma buckets that have a low memory to storage ratio.
+You can change both the storage backend and the ejection policy at the same time.
+See xref:manage:manage-buckets/migrate-bucket.adoc[] for steps to perform both operations.
+
+== Change the Ejection Policy Using the Couchbase Server Web Console
+
+You can edit a bucket's ejection policy using the Couchbase Server Web Console.
+
+[IMPORTANT]
+====
+When you change the ejection policy of a Couchbase bucket using the Couchbase Server Web Console, Couchbase Server automatically restarts the bucket.
+Restarting the bucket closes all open connections and results in some downtime.
+Do not change the ejection policy of a bucket in production unless you're prepared for this downtime.
+You can also change the ejection policy using the REST API, which lets you avoid this downtime.
+See <<rest-api>> for more details.
+
+Changing the ejection policy of an ephemeral bucket does not require a bucket restart or other additional steps.
+The new setting takes effect immediately.
+====
+
+To change the ejection policy of a bucket, follow these steps:
+
+. In the Web Console, click btn:[Buckets].
+The [.ui]*Buckets* page opens, listing all of the buckets in the cluster.
+
++
+[#buckets_view_initial]
+image::manage-buckets/bucketsViewInitialEdit.png[]
+
+. Click the row containing the bucket that you want to edit to expand it.
+
++
+[#buckets_view_with_expanded_bucket_row]
+image::manage-buckets/bucketsViewWithExpandedBucketRow.png[]
+
+. Click btn:[Edit] to edit the bucket's settings.
+
++
+image::manage-buckets/bucket-edit-dialog-initial.png[,400]
+
+. Click [.ui]*Advanced bucket settings* to expand the section containing the ejection policy setting.
+
++
+image::manage-buckets/bucket-edit-dialog-expanded.png[,400]
+
+. Change the [.ui]*Ejection Method* setting to the new value you want.
+The available settings depend on the bucket's type:
++
+--
+Couchbase buckets::
+* *Value-only*: Couchbase Server removes a document's data, but keeps its keys and metadata in memory.
+Keeping these values in memory helps lessen the performance overhead of removing the document from memory.
+* *Full*: Couchbase Server removes the entire document from memory.
+
++
+Ephemeral buckets::
+* *No ejection*: If the bucket reaches its memory quota, the Data Service does not eject data.
+Instead, it refuses to load any new data until memory becomes available.
+* *Eject data when RAM is full*: If the bucket approaches its memory quota, the Data Service ejects the least-recently used documents to make space for new data.
+--
++
+If you change the setting of a Couchbase bucket, the Couchbase Server Web Console warns you of the consequences of changing the ejection policy.
+
+. Click btn:[Save] to save the changes.
+If you changed the ejection policy of a Couchbase bucket, Couchbase Server restarts the bucket to apply the new ejection policy.
+If you changed the ejection policy of an ephemeral bucket, the new setting takes effect immediately.
+
+[#ephemeral-rest-api]
+== Change Ephemeral Bucket Ejection Policy Using the REST API
+
+You can change the ejection policy for an ephemeral bucket by sending a POST request to the `/pools/default/buckets/{BUCKETNAME}` endpoint to change the `evictionPolicy` setting.
+Unlike changing a Couchbase bucket's ejection policy, changing the ejection policy of an ephemeral bucket does not require a bucket restart or other additional steps.
+The new setting takes effect immediately.
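+For orientation, here is a minimal sketch of that call, based on the `change-ephemeral-policy` example included with this page; the host, credentials, and `sample-ephemeral` bucket name are placeholders that you would replace with your own values:
+
+[source,console]
+----
+# Switch the hypothetical sample-ephemeral bucket to NRU ejection.
+curl -X POST http://localhost:8091/pools/default/buckets/sample-ephemeral \
+  -u Administrator:password \
+  -d evictionPolicy=nruEviction
+----
+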
+ +The following steps demonstrate how to change the ejection policy of an ephemeral bucket named `sample-ephemeral` to `nruEviction` using the REST API. + +. View the current ejection policy of the bucket by sending a GET request to the `/pools/default/buckets/{BUCKETNAME}` endpoint. +The following command calls the REST API to get the configuration of the `sample-ephemeral` bucket. +It filters the result through the `jq` command to show just the `evictionPolicy` value: + ++ +[source,console] +---- +include::manage:example$change-ejection-policy.sh[tag=show-ephemeral-policy] +---- + ++ +The output of the previous command shows that the `evictionPolicy` of the `sample-ephemeral` bucket is set to `noEviction`: ++ +[source,json] +---- +{ + "sample-ephemeral": "noEviction" +} +---- + +. Send a POST request to the `/pools/default/buckets/{BUCKETNAME}` endpoint to change the `evictionPolicy` setting to the new ejection policy that you want. +The following command changes the ejection policy of the `sample-ephemeral` bucket to `nruEviction`: ++ +[source,console] +---- +include::manage:example$change-ejection-policy.sh[tag=change-ephemeral-policy] +---- ++ +The call to the REST API returns no output if it succeeds. +. Verify that the ejection policy has changed by sending another GET request to `/pools/default/buckets/{BUCKETNAME}` as shown in step 1. +The output shows the new `evictionPolicy` value: ++ +[source,json] +---- +{ + "sample-ephemeral": "nruEviction" +} +---- + +See xref:rest-api:rest-bucket-create.adoc[] for more information about changing a bucket's settings. + +[#rest-api] +== Change Couchbase Bucket Ejection Policy Using the REST API + +You can change the ejection policy for a Couchbase bucket using the REST API. +You change the ejection policy by sending a POST request to the `/pools/default/buckets/{BUCKETNAME}` endpoint to change the `evictionPolicy` setting. + +Unlike the Couchbase Server Web Console, using the REST API to change a Couchbase bucket's ejection policy lets you prevent the downtime caused by restarting a Couchbase bucket. +When you call the REST API to change the ejection policy, can set the `noRestart` parameter to `true` to prevent Couchbase Server from restarting the bucket after you make the change. +If you do not allow Couchbase Server to restart the Couchbase bucket, you must take additional steps to apply the change. + +If you do not set the `noRestart` parameter or set it to `false` for a Couchbase bucket, Couchbase Server may automatically restart the bucket after you change the ejection policy. +Couchbase Server only automatically restarts the bucket if you're not changing the ejection policy during a backend storage migration. +If you're changing the ejection policy during a backend storage migration, you must set `noRestart` to `true` to prevent Couchbase Server from restarting the bucket. +See <<#change-during-migration>> for an explanation. + +NOTE: The `noRestart` parameter has no effect when changing the ejection policy of an ephemeral bucket. +Changing the ejection policy of an ephemeral bucket takes effect without a bucket restart or other additional steps. +See <<#ephemeral-rest-api>> for more details. + +How you apply the ejection policy change for a Couchbase bucket depends on whether you're changing the ejection policy during a backend storage migration. 
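+Whichever path you take, the REST call itself is the same.
+The following is a minimal sketch, based on the `change-ejection-no-restart` example included with this page; the host, credentials, and `travel-sample` bucket name are placeholders:
+
+[source,console]
+----
+# Switch to full ejection without an automatic bucket restart.
+curl -X POST http://localhost:8091/pools/default/buckets/travel-sample \
+  -u Administrator:password \
+  -d evictionPolicy=fullEviction \
+  -d noRestart=true
+----
+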
+
+[#change-during-migration]
+=== Change Ejection Policy During a Backend Storage Migration
+
+You can change the ejection policy of a Couchbase bucket while you're migrating the storage engine it uses.
+You often want to change the ejection policy during migration because the Couchstore and Magma storage engines work best with different ejection policies.
+See xref:learn:buckets-memory-and-storage/storage-engines.adoc[] for information about storage engines and xref:manage:manage-buckets/migrate-bucket.adoc[] for information about migrating a bucket's storage backend.
+
+If you choose to change the ejection policy while migrating the storage backend, you must set `noRestart` to `true`.
+This setting prevents Couchbase Server from restarting the bucket after changing the policy.
+If you set this option to `false` or omit it from your REST API call, Couchbase Server does not change the ejection policy.
+
+After you change the ejection policy while migrating the storage backend, the policy does not take effect until you complete the migration.
+You complete the migration by performing one of the following procedures:
+
+* a xref:install:upgrade-procedure-selection.adoc#swap-rebalance[swap rebalance] on all nodes in the cluster running the data service.
+* a xref:learn:clusters-and-availability/graceful-failover.adoc[graceful failover] followed by a xref:learn:clusters-and-availability/recovery.adoc#full-recovery[full recovery] and xref:learn:clusters-and-availability/rebalance.adoc[rebalance] for all nodes running the data service in the cluster.
+
+See xref:manage:manage-buckets/migrate-bucket.adoc[] for information about migrating a bucket's storage backend and complete steps to change the ejection policy while performing a migration.
+
+=== Standalone Ejection Policy Change
+
+You can use the REST API to change a Couchbase bucket's ejection policy when you're not migrating its storage backend.
+In this case, you can choose whether to have Couchbase Server restart the bucket after you change the ejection policy.
+You control this behavior using the `noRestart` parameter in the REST API POST method to `/pools/default/buckets/{BUCKETNAME}`:
+
+* To trigger the automatic bucket restart, set the `noRestart` parameter to `false` or do not supply it in the REST API call.
+Couchbase Server automatically restarts the bucket once the REST API call completes.
+Restarting the bucket is disruptive--it closes all open connections and results in some downtime.
+Do not change the ejection policy of a bucket in production unless you're prepared for this downtime.
+
+* To prevent the automatic bucket restart, set the `noRestart` parameter to `true`.
+The new ejection setting does not take effect until you perform one of the following procedures:
+
+** a xref:install:upgrade-procedure-selection.adoc#swap-rebalance[swap rebalance] on all nodes in the cluster running the data service.
+** the following steps for all nodes running the data service in the cluster:
++
+--
+. A xref:learn:clusters-and-availability/graceful-failover.adoc[graceful failover]
+. A xref:learn:clusters-and-availability/recovery.adoc#delta-recovery[delta recovery] or a xref:learn:clusters-and-availability/recovery.adoc#full-recovery[full recovery]
+. A xref:learn:clusters-and-availability/rebalance.adoc[rebalance]
+--
+
+The following procedure shows you how to change the ejection policy of a bucket using the REST API.
+It then demonstrates performing a graceful failover followed by a delta recovery and rebalance to apply the change.
+ +. Send a POST request to the `/pools/default/buckets/{BUCKETNAME}` REST API endpoint to edit the bucket, and change the `evictionPolicy` parameter to the new ejection policy that you want. +Also set the `noRestart` parameter to `true` to prevent Couchbase Server from restarting the bucket. + ++ +The following command changes the ejection policy of the `travel-sample` bucket to Full Ejection without restarting the bucket: + ++ +[source,console] +---- +include::manage:example$change-ejection-policy.sh[tag=change-ejection-no-restart] +---- + ++ +. At this point, the bucket's ejection policy has changed at the bucket level. +However, each node has an override setting that has them use the old ejection policy. + ++ +You can compare the bucket and node settings to see which nodes have an override by sending a GET request to the `/pools/default/buckets/{BUCKETNAME}` REST API endpoint. +The following command calls the REST API to get the configuration of the `travel-sample` bucket. +It filters the result through the `jq` command to show just the `evictionPolicy` values from the bucket level and the individual nodes: + ++ +[source,console] +---- +include::manage:example$change-ejection-policy.sh[tag=show-policy-overrides] +---- + ++ +The output of the previous command shows that the bucket's `evictionPolicy` parameter has changed to `fullEviction`, but the nodes are still using the old `valueOnly` ejection policy: + ++ +[source,json] +---- +[ + { + "node3.:8091": "valueOnly" + }, + { + "node2.:8091": "valueOnly" + }, + { + "node1.:8091": "valueOnly" + }, + { + "travel-sample": "fullEviction" + } +] +---- + +. Send a POST request to the `/controller/startGracefulFailover` REST API endpoint to perform a graceful failover on 1 of the nodes running the data service. +See xref:rest-api:rest-failover-graceful.adoc[] for more information about performing a graceful failover. + ++ +The following calls the REST API to perform a graceful failover on a node named `node3`: ++ +[source,console] +---- +include::manage:example$change-ejection-policy.sh[tag=failover-node] +---- + +. Wait until the failover completes. +Then send a POST request to the `/controller/setRecoveryType` REST API endpoint to set the type of recovery for the node to `delta`. +See xref:rest-api:rest-node-recovery-incremental.adoc[] for more information about setting the recovery type. + ++ +The following calls the REST API to set the recovery type for `node3` to `delta`: ++ +[source,console] +---- +include::manage:example$change-ejection-policy.sh[tag=recover-node] +---- + +. When the recovery completes, send a POST request to the `/controller/rebalance` REST API endpoint to rebalance the cluster. +See xref:rest-api:rest-cluster-rebalance.adoc[] for more information about rebalancing a cluster. + ++ +The following calls the REST API to rebalance a three-node cluster containing the nodes named `node1`, `node2`, and `node3`: + ++ +[source,console] +---- +include::manage:example$change-ejection-policy.sh[tag=rebalance-cluster] +---- + +. After the rebalance completes, repeat steps 3 through 5 for the rest of the data nodes in the cluster. + ++ +You can verify which nodes still need to have the change applied to them by sending another GET request to `/pools/default/buckets/{BUCKETNAME}` as shown in step 2. +For example, suppose you have performed a graceful failover, full recovery, and rebalance steps on nodes `node3` and `node2`. 
+Then the output of the command from step 2 shows that the `evictionPolicy` for `node1` is still set to `valueOnly`: + ++ +[source,json] +---- +[ + { + "node3.:8091": null + }, + { + "node2.:8091": null + }, + { + "node1.:8091": "valueOnly" + }, + { + "travel-sample": "fullEviction" + } +] +---- diff --git a/modules/manage/pages/manage-buckets/create-bucket.adoc b/modules/manage/pages/manage-buckets/create-bucket.adoc index 3b9431763d..5f416b153e 100644 --- a/modules/manage/pages/manage-buckets/create-bucket.adoc +++ b/modules/manage/pages/manage-buckets/create-bucket.adoc @@ -168,20 +168,6 @@ For more information about ejection, see the xref:learn:buckets-memory-and-stora include::learn:partial$full-ejection-note.adoc[] -[#bucket-priority, start="6"] -. Choose a *Bucket Priority* for the bucket: -+ --- -* *Default* -* *High* --- -+ -Bucket Priority sets the priority of the bucket's background tasks relative to the background tasks of other buckets on the cluster. - -+ -Background tasks may involve disk I/O, DCP stream-processing, item-paging, and more. -Specifying High might result in faster processing for the current bucket's tasks. -This setting only takes effect when there is more than one bucket defined for the cluster, and you have assigned different Bucket Priority values. [#durability-level, start="7"] . In the *Minimum Durability Level* list, select a durability level for the bucket: @@ -226,6 +212,9 @@ For more details about enabling encryption at rest, see xref:manage:manage-secur [#ephemeral-bucket-settings] ==== Ephemeral Bucket Settings +[#add-data-bucket-dialog-expanded-for-ephemeral] +image::manage-buckets/addDataBucketDialogExpandedForEphemeral.png[,500,align=center, alt="An image that displays the Add Data Bucket dialog, with the Bucket Type set to Ephemeral and the Storage Backend set to CouchStore. The Advanced bucket settings are expanded to show the default selections for a Ephemeral and Couchstore bucket."] + To configure advanced settings for an Ephemeral bucket: . To enable xref:learn:clusters-and-availability/intra-cluster-replication.adoc[replica creation and management], under *Replicas*, select the *Enable* checkbox. @@ -256,17 +245,6 @@ For more information about the available compression modes, see xref:learn:bucke + For more information about XDCR conflict resolution, see xref:learn:clusters-and-availability/xdcr-conflict-resolution.adoc[]. -. Choose a *Bucket Priority*: -+ -* *Default* -* *High* - -+ -Bucket Priority sets the priority of the bucket's background tasks relative to the background tasks of other buckets on the cluster. - -Background tasks may involve DCP stream-processing, item-paging, and more. -Specifying High might result in faster processing for the current bucket's tasks. -This setting only takes effect when there is more than one bucket defined for the cluster, and the buckets are assigned different Bucket Priority values. . Choose an *Ejection Policy* for the bucket: + @@ -291,9 +269,6 @@ If set too low, data might be inconsistent in XDCR or Views. + For more information about durability, see xref:learn:data/durability.adoc[]. -[#add-data-bucket-dialog-expanded-for-ephemeral] -image::manage-buckets/addDataBucketDialogExpandedForEphemeral.png[,350,align=center, alt="An image that displays the Add Data Bucket dialog, with the Bucket Type set to Ephemeral and the Storage Backend set to CouchStore. 
The Advanced bucket settings are expanded to show the default selections for a Ephemeral and Couchstore bucket."] - [#create-bucket-with-the-cli] == Create a Bucket with the CLI diff --git a/modules/manage/pages/manage-buckets/edit-bucket.adoc b/modules/manage/pages/manage-buckets/edit-bucket.adoc index 5e9419e135..a08e4602c7 100644 --- a/modules/manage/pages/manage-buckets/edit-bucket.adoc +++ b/modules/manage/pages/manage-buckets/edit-bucket.adoc @@ -1,112 +1,155 @@ = Edit a Bucket -:description: Full, Cluster, and Bucket Administrators can edit a subset of the settings already established on an existing bucket. +:description: Full, Cluster, and Bucket Administrators can edit some settings of an existing bucket. :page-aliases: clustersetup:change-settings-bucket [abstract] {description} -This section explains how to do so; and notes the possible consequences of such configuration-changes. +This section explains how to make changes to existing bucket settings using the Couchbase Web Console and the REST API. +It also explains the possible consequences of these configuration changes. -== Edit an Existing Bucket-Configuration +== Edit an Existing Bucket -To edit an existing bucket-configuration, access Couchbase Web Console, and left-click on the [.ui]*Buckets* tab, in the vertical navigation-bar at the left-hand side. +To edit an existing bucket configuration using the Couchbase Web Console: -[#access_bucket_tab] -image::manage-buckets/accessBucketTab.png[,100,align=left] +. Click btn:[Buckets] in the vertical navigation bar. -The [.ui]*Buckets* screen now appears, showing the buckets that have already been defined for your system: ++ +The [.ui]*Buckets* page appears, listing the buckets in the cluster: ++ [#buckets_view_initial] -image::manage-buckets/bucketsViewInitialEdit.png[,880,align=left] +image::manage-buckets/bucketsViewInitialEdit.png[] -To edit the settings for a particular bucket, left-click on the bucket's row in the UI. -This expands the row, to display additional information: +. Click the row containing the bucket that you want to edit. +This expands the row to display additional information: ++ [#buckets_view_with_expanded_bucket_row] -image::manage-buckets/bucketsViewWithExpandedBucketRow.png[,820,align=left] +image::manage-buckets/bucketsViewWithExpandedBucketRow.png[] -Values for the current settings for the selected bucket are shown at the left-hand side, in a vertical column under the bucket-name. -Current [.ui]*Memory* and [.ui]*Disk* status is shown further to the right. -(Note that disk-related information pertains to Couchbase buckets only.) ++ +Underneath the bucket's name are the bucket's configuration settings. +Next to this information are the bucket's memory use and, for Couchbase buckets only, its disk use. +This row also contains links to examine and manage the bucket's xref:manage:manage-ui/manage-ui.adoc#console-documents[Documents] and xref:learn:data/scopes-and-collections.adoc[Scopes and Collections]. -Further values are displayed in successive columns, to the right of the bucket-name. -These indicate the number of items in the bucket, the number of items with data that is currently resident in memory (for Couchbase buckets only), the number of operations performed on the bucket during the last second, the amount of RAM currently in use from the available quota, and the amount of disk-space used (for Couchbase buckets only). ++ +At the bottom are buttons to drop, compact (for Couchbase buckets only), and edit the bucket.
+If your bucket has flushing enabled, a btn:[Flush] button also appears. +See <> for more information about flushing a bucket. -To the right hand side of the row are tabs that allow examination of the xref:manage:manage-ui/manage-ui.adoc#console-documents[Documents] within the bucket; and creation of xref:learn:data/scopes-and-collections.adoc[Scopes and Collections], into which documents can be organized. +. Click btn:[Edit] to edit the bucket's settings. +The [.ui]*Edit Bucket Settings* dialog appears. -(For information on importing documents into a bucket, see xref:manage:import-documents/import-documents.adoc[Import Documents].) ++ +image::manage-buckets/bucket-edit-dialog-initial.png[,400] -At the lower right, buttons are provided for dropping, compacting (for Couchbase buckets only), and editing the bucket. -Note that _dropping_ means deleting the bucket, all the documents it contains, and all scopes and collections into which the documents have been organized. ++ +This dialog lets you change a subset of existing settings. +The next section explains the settings you can change. -To display the user-interface for editing, left-click on the *Edit* button: -[#edit_bucket_button] -image::manage-buckets/editBucketButton.png[,260,align=left] +== Making Changes +Many of the bucket's settings are in the [.ui]*Advanced Settings* section, which you must expand before you can edit them. This displays the [.ui]*Edit Bucket Settings* dialog, which permits changes to be made to a subset of existing settings. All the settings contained here are described in detail for the [.ui]*Add Data Bucket* dialog, on the page xref:manage-buckets/create-bucket.adoc[Create a Bucket]. -[#edit-bucket-for-eccv] -=== Edit a Bucket to Enable Cross Cluster Versioning - -[.ui]*Edit Bucket Settings* dialog displays the additional setting, *Enable Cross Cluster Versioning*. Use this setting to enable the xref:learn:clusters-and-availability/xdcr-enable-crossclusterversioning.adoc[XDCR enableCrossClusterVersioning] bucket property. +image::manage-buckets/bucket-edit-dialog-expanded.png[,400] -image::manage-buckets/edit-bucket-with-eccv.png[,399,align=center] +Not all the settings that appear in the dialog are editable. +The settings you can edit are: -When you enable the bucket setting [.ui]*Enable Cross Cluster Versioning*, for each document processed by XDCR, XDCR stores additional metadata for the document in the extended attributes. -This metadata is called, xref:learn:clusters-and-availability/xdcr-enable-crossclusterversioning.adoc#hlv-data-maintained-in-xattr[Hybrid Logical Vector (HLV)] or a version vector. -NOTE: Once enabled, the Enable Cross Cluster Versioning bucket setting cannot be disabled. +Storage Backend [.edition]#{enterprise}#:: +This setting is only available for Couchbase buckets on Couchbase Server Enterprise Edition. +It allows you to change the storage backend used by the bucket. +Changing this setting does not immediately migrate the bucket to the new storage backend. +See xref:manage:manage-buckets/migrate-bucket.adoc[] for more information about migrating a bucket to a different storage backend. -To use the following features, enable the bucket setting [.ui]*Enable Cross Cluster Versioning* on all buckets that are a part of the XDCR topology: +Number of vBuckets [.edition]#{enterprise}#:: +For Couchbase buckets using the Magma storage backend, you can change the number of vBuckets. -* xref:learn:clusters-and-availability/xdcr-conflict-logging-feature.adoc[XDCR Conflict Logging].
+Memory Quota:: +Sets the amount of RAM allocated per node to this bucket. +You cannot lower this value to be less than the current amount of memory the bucket is using on any node in your cluster. +Changes you make to this setting have an immediate effect. -* xref:learn:clusters-and-availability/xdcr-active-active-sgw.adoc[XDCR Active-Active with Sync Gateway]. +Replicas:: +The number of replicas of the bucket to be maintained by the cluster. +You can change this number at any time for either type of bucket. +However, you must rebalance the cluster to redistribute the replicas across its nodes. -== Making Changes ++ +NOTE: You cannot change the [.ui]*Replica view indexes* setting after you have created the bucket. -Only a subset of settings is available for modification, after the creation of a bucket. -These settings are listed below: -* *Memory Quota*: The amount of RAM allocated per node to this bucket. -Can be changed for a Couchbase or Ephemeral bucket only. -If you decide to lower this setting, note that the value you specify cannot be lower than the amount of memory currently used by the bucket on any of the nodes in your cluster. -Once changed, this setting takes effect immediately. +Bucket Max Time to Live:: +Sets the maximum amount of time Couchbase Server keeps a document in this bucket before deleting it. +Changes to this setting only affect documents created or mutated after the change. +Other settings also affect document expiration. +For more information, see xref:learn:data/expiration.adoc[Expiration]. -* *Bucket Max Time-to-Live*: The maximum time a document can exist, following its creation within this bucket, before being deleted. -Can be changed for a Couchbase or Ephemeral bucket only. -A modified setting applies only to documents that will be created or modified subsequently. -* *Compression Mode*: Whether and how compression is applied to data within the bucket. -For information on available _modes_, and the effect of changing the mode of an existing bucket, see xref:learn:buckets-memory-and-storage/compression.adoc[Compression]. -* *Ejection Method*: The ejection policy used by a bucket. -Can be changed for a Couchbase bucket only. -Note that changing the ejection-policy forces a bucket-restart; resulting in the temporary inaccessibility of data, while the bucket warms up. +Compression Mode:: +Whether and how Couchbase Server compresses the bucket's data. +For information on available modes and the effect of changing the mode of an existing bucket, see xref:learn:buckets-memory-and-storage/compression.adoc[Compression]. -* *Replicas*: The number of bucket-replicas to be maintained by the cluster. -This number can be changed at any time for a Couchbase or Ephemeral bucket: however, a rebalance is required after a setting-change, in order to redistribute the correct number of replica-items across the cluster. -Note that Couchbase-bucket _View Index Replicas_ cannot be enabled or disabled once a bucket has been created. +Ejection Method:: +Controls how the bucket removes documents from memory when the bucket's memory use approaches its memory quota. +You can only change this setting for Couchbase buckets. +See xref:learn:buckets-memory-and-storage/memory.adoc#ejection[Ejection] for more information about ejection policies and xref:change-ejection-policy.adoc[] for steps to change a bucket's ejection policy. -* *Bucket Priority*: The priority to be assigned to the current bucket's background tasks. -Can be changed for Couchbase and Ephemeral buckets.
-Note that a priority-change invokes a bucket restart, resulting in the temporary inaccessibility of data, while the bucket warms up. -* *Minimum Durability Level*: Allows an appropriate durability level to be assigned to the bucket. +Minimum Durability Level:: +Allows an appropriate durability level to be assigned to the bucket. Levels are accessed by means of a pull-down menu. The options are *none*, *majority*, *majorityAndPersistActive*, and *persistToMajority*. For information, see xref:learn:data/durability.adoc[Durability]. -* *Auto-Compaction*: When established, these settings, which determine the conditions under which data-compaction for the bucket is performed, override the cluster-wide defaults; as discussed in xref:manage:manage-settings/configure-compact-settings.adoc[Auto-Compaction]. -The full range of settings applies to and can be changed for Couchbase buckets; while only the [.ui]*Metadata Purge Interval* applies to and can be changed for Ephemeral buckets. +Auto-Compaction:: +Overrides the cluster-wide default setting for compacting the bucket's data. +See xref:manage:manage-settings/configure-compact-settings.adoc[Auto-Compaction] for more information. +The auto-compaction settings you see depend on the type of bucket you are editing. +For Couchbase buckets, you must first select [.ui]*Override the default auto-compaction settings?* to be able to edit the settings. +The available settings are: + ++ +* [.ui]*Database Fragmentation*: The percentage of data fragmentation that triggers database compaction. +Only settable on Couchbase buckets. +* [.ui]*Metadata Purge Interval*: The number of days between purges of the metadata for deleted items from the bucket. +Settable for both Couchbase and Ephemeral buckets. + +Encryption At Rest [.edition]#{enterprise}#:: +Enables or disables encryption of data at rest for Couchbase buckets. +This setting is only available for Couchbase buckets on Couchbase Server Enterprise Edition. +See xref:learn:buckets-memory-and-storage/encryption-at-rest.adoc[Encryption at Rest] for an overview of encryption at rest and xref:manage:manage-security/manage-native-encryption-at-rest.adoc[] for the steps you need to take to enable encryption at rest. + +[#edit-bucket-for-eccv] +Enable Cross Cluster Versioning:: +Use this setting to enable the xref:learn:clusters-and-availability/xdcr-enable-crossclusterversioning.adoc[XDCR enableCrossClusterVersioning] bucket property. +When you enable the bucket setting [.ui]*Enable Cross Cluster Versioning*, for each document processed by XDCR, XDCR stores additional metadata for the document in the extended attributes. +This metadata is called the xref:learn:clusters-and-availability/xdcr-enable-crossclusterversioning.adoc#hlv-data-maintained-in-xattr[Hybrid Logical Vector (HLV)], or version vector. + ++ +NOTE: Once enabled, the Enable Cross Cluster Versioning bucket setting cannot be disabled. + ++ +To use the following features, enable the bucket setting [.ui]*Enable Cross Cluster Versioning* on all buckets that are a part of the XDCR topology: + ++ +* xref:learn:clusters-and-availability/xdcr-conflict-logging-feature.adoc[XDCR Conflict Logging]. +* xref:learn:clusters-and-availability/xdcr-active-active-sgw.adoc[XDCR Active-Active with Sync Gateway]. -* *Flush*: This setting enables or disables the xref:manage-buckets/flush-bucket.adoc[Flush] command for the current bucket. -It can be changed at any time for all three types of bucket.
-Note that when flushing is enabled, left-clicking on the bucket's display-row on the [.ui]*Buckets* screen displays the *Flush* button: +[#flush] +Flush:: +This setting enables or disables the xref:manage-buckets/flush-bucket.adoc[Flush] command for the current bucket. +It can be changed at any time for all types of buckets. +When you enable flushing for a bucket, a btn:[Flush] button appears on the bucket's row in the Buckets view: + [#flush_bucket_button] image::manage-buckets/flushBucketButton.png[,360,align=left] + -If flushing is _disabled_, the *Flush* button does not appear. +If you leave flushing turned off, the btn:[Flush] button does not appear. +See xref:manage:manage-buckets/flush-bucket.adoc[] for more information about flushing a bucket. == Changing Bucket-Settings with the CLI and REST API diff --git a/modules/manage/pages/manage-buckets/migrate-bucket.adoc b/modules/manage/pages/manage-buckets/migrate-bucket.adoc index 1d4323cabd..428593b97d 100644 --- a/modules/manage/pages/manage-buckets/migrate-bucket.adoc +++ b/modules/manage/pages/manage-buckets/migrate-bucket.adoc @@ -24,7 +24,7 @@ To complete the migration, you must force the vBuckets to be rewritten. The two ways to trigger this rewrite are to perform a swap rebalance or a graceful failover followed by a full recovery. As Couchbase writes the vBuckets during these processes, it removes the storage override and saves the vBuckets using the new storage backend. -NOTE: While you're migrating a bucket between storage backends, you can only change the bucket's `ramQuota` and `storageBackend` parameters. +NOTE: While you're migrating a bucket between storage backends, you can only change the bucket's `evictionPolicy`, `ramQuota`, and `storageBackend` parameters. Couchbase Server prevents you from making changes to the bucket's other parameters. == Prerequisites @@ -38,6 +38,20 @@ If you're planning to migrate from Couchstore to Magma, also consider the curren Magma's default fragmentation settings can result in higher disk use. See xref:#disk_usage[Disk Use Under Couchstore Verses Magma] for more information. +You should also consider changing the bucket's ejection policy. +The Full Ejection policy works well for Magma buckets, especially when the ratio of memory to data storage is low. +Magma allows you to set a memory-to-data-storage ratio as low as 1%. +Couchstore buckets usually work best with Value Only Ejection. +See xref:learn:buckets-memory-and-storage/memory.adoc#ejection[Ejection] for more information about ejection policies. + +You can change the ejection policy at the same time you change the storage backend. +If you choose to do so, you must set the `noRestart` parameter to `true` in the REST API call to change the storage backend. +This setting prevents Couchbase Server from restarting the bucket after changing the storage backend. +If you do not set `noRestart` to `true`, Couchbase Server does not apply the ejection policy change. +When you do set `noRestart` to `true`, the new ejection policy does not take effect after a bucket restart; instead, it takes effect after you finish the backend migration. + +See xref:manage:manage-buckets/change-ejection-policy.adoc#change-during-migration[Change Ejection Policy During a Backend Storage Migration] for more information.
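+
+For illustration, a combined call of this kind might look like the following sketch, which assumes an administrator named `Administrator` with password `password`, a node reachable at `localhost:8091`, and the `travel-sample` bucket:
+
+[source,console]
+----
+# Illustrative sketch only: change the storage backend and the ejection policy
+# in a single call, and set noRestart=true so that the new ejection policy
+# waits for the backend migration to finish instead of a bucket restart.
+curl -X POST -u Administrator:password \
+  http://localhost:8091/pools/default/buckets/travel-sample \
+  -d storageBackend=magma \
+  -d evictionPolicy=fullEviction \
+  -d noRestart=true
+----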
+ [#perform_migration] == Perform a Migration @@ -48,6 +62,17 @@ For example, the following command changes the storage backend of the travel-sam ---- include::manage:example$migrate-bucket-storage-backend.sh[tag=change-backend] ---- + ++ +If you also want to change the bucket's ejection policy, you can do so at the same time by adding the `evictionPolicy` and `noRestart` parameters to the REST API call. +For example, the following command changes the storage backend of the `travel-sample` bucket to Magma and sets the ejection policy to Full Ejection without restarting the bucket immediately: + ++ +[source,console] +---- +include::manage:example$migrate-bucket-storage-backend.sh[tag=change-backend-and-ejection] +---- + . Verify that the nodes containing the bucket now have storage backend override settings for their vBuckets. The following example calls the REST API to get the bucket configuration and filters the result through the `jq` command to list the node names and their storage backend formats. + diff --git a/modules/rest-api/pages/rest-bucket-create.adoc b/modules/rest-api/pages/rest-bucket-create.adoc index 1fd3894b05..703562fc8a 100644 --- a/modules/rest-api/pages/rest-bucket-create.adoc +++ b/modules/rest-api/pages/rest-bucket-create.adoc @@ -27,7 +27,7 @@ These endpoints create a new bucket and edit an existing bucket. You can create two types of buckets: Couchbase or Ephemeral. When you create a bucket, you must assign it a name that is unique across all buckets on the cluster. You cannot change the name after creation. -Bucket names must not exceed 100 bytes (i.e., 100 characters). +Bucket names must not exceed 100 bytes (100 characters in most cases). A single cluster can contain up to 30 buckets. @@ -56,12 +56,12 @@ curl -X POST -u : [ valueOnly | fullEviction ] | [ noEviction | nruEviction ] ] + -d noRestart=[true|false] -d durabilityMinLevel=[ [ none | majority | majorityAndPersistActive | persistToMajority ] | [ none | majority ] ] -d durabilityImpossibleFallback= [ disabled | fallbackToActiveAck ] - -d threadsNumber=[ 3 | 8 ] -d rank= -d replicaNumber=[ 1 | 2 | 3 ] -d compressionMode=[ off | passive | active ] @@ -97,9 +97,12 @@ curl -X POST -u : All parameters are described in the following subsections. +NOTE: The `threadsNumber` parameter, which sets the number of threads for the bucket, has not had any effect since Couchbase Server version 7.0.0. +It's deprecated and is no longer listed in the syntax. + == Parameter Groups -Parameters that support the creation and editing of buckets can be broken into two groups: Genera and Auto-compaction. +Parameters that support the creation and editing of buckets can be broken into two groups: General and Auto-compaction.
=== General @@ -116,7 +119,6 @@ The following parameters can be edited after bucket creation: * <> * <> -* <> * <> * <> * <> @@ -131,7 +133,7 @@ The following parameters can be edited after bucket creation: * <> * <> -** Parameters that _can_ be edited after bucket creation; these being xref:rest-api:rest-bucket-create.adoc#evictionpolicy[evictionPolicy], xref:rest-api:rest-bucket-create.adoc#durabilityminlevel[durabilityMinLevel], xref:rest-api:rest-bucket-create.adoc#threadsnumber[threadsNumber], xref:rest-api:rest-bucket-create.adoc#rank[rank], xref:rest-api:rest-bucket-create.adoc#replicanumber[replicaNumber], xref:rest-api:rest-bucket-create.adoc#compressionmode[compressionMode], xref:rest-api:rest-bucket-create.adoc#maxttl[maxTTL], xref:rest-api:rest-bucket-create.adoc#flushenabled[flushEnabled], xref:rest-api:rest-bucket-create.adoc#magmaseqtreedatablocksize[magmaSeqTreeDataBlockSize], +** Parameters that _can_ be edited after bucket creation; these being xref:rest-api:rest-bucket-create.adoc#evictionpolicy[evictionPolicy], xref:rest-api:rest-bucket-create.adoc#durabilityminlevel[durabilityMinLevel], xref:rest-api:rest-bucket-create.adoc#rank[rank], xref:rest-api:rest-bucket-create.adoc#replicanumber[replicaNumber], xref:rest-api:rest-bucket-create.adoc#compressionmode[compressionMode], xref:rest-api:rest-bucket-create.adoc#maxttl[maxTTL], xref:rest-api:rest-bucket-create.adoc#flushenabled[flushEnabled], xref:rest-api:rest-bucket-create.adoc#magmaseqtreedatablocksize[magmaSeqTreeDataBlockSize], xref:rest-api:rest-bucket-create.adoc#historyretentioncollectiondefault[historyRetentionCollectionDefault], xref:rest-api:rest-bucket-create.adoc#historyretentionbytes[historyRetentionBytes], xref:rest-api:rest-bucket-create.adoc#storagebackend[storageBackend], xref:rest-api:rest-bucket-create.adoc#historyretentionseconds[historyRetentionSeconds], xref:rest-api:rest-bucket-create.adoc#accessscannerenabled[accessScannerEnabled], xref:rest-api:rest-bucket-create.adoc#expirypagersleeptime[expiryPagerSleepTime], xref:rest-api:rest-bucket-create.adoc#warmupbehavior[warmupBehavior], xref:rest-api:rest-bucket-create.adoc#memorylowwatermark[memoryLowWatermark], and xref:rest-api:rest-bucket-create.adoc#memoryhighwatermark[memoryHighWatermark]. @@ -382,25 +384,33 @@ This example returns the status code `202 Accepted` and no additional output. [#evictionpolicy] === evictionPolicy -The ejection policy to be assigned to and used by the bucket. -(Note that eviction is, in the current release, referred to as ejection; and this revised naming will continue to be used in future releases.) -Policy-assignment depends on bucket type. +Sets the ejection policy for the bucket. +You can change the ejection policy after bucket creation. +See xref:learn:buckets-memory-and-storage/memory.adoc#ejection[Ejection] for more information about ejection policies. + +Each type of bucket has its own set of ejection policies: -For a Couchbase bucket, the policy can be `valueOnly` (which is the default) or `fullEviction`. -For an Ephemeral bucket, the policy can be `noEviction` (which is the default) or `nruEviction`. +* Couchbase bucket: `valueOnly` (the default for buckets using the xref:learn:buckets-memory-and-storage/storage-engines.adoc#couchstore[Couchstore] storage engine) or `fullEviction` (the default for buckets using xref:learn:buckets-memory-and-storage/storage-engines.adoc#storage-engine-magma[Magma]). +* Ephemeral bucket: `noEviction` (the default) or `nruEviction`.
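+
+For example, you can check which policy a bucket currently uses by reading its configuration; the following sketch assumes an `Administrator:password` credential, a node at `localhost:8091`, the `travel-sample` bucket, and the `jq` tool:
+
+[source,console]
+----
+# Illustrative sketch only: report the ejection policy currently set on the bucket.
+curl -s -u Administrator:password \
+  http://localhost:8091/pools/default/buckets/travel-sample | \
+  jq '.evictionPolicy'
+----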
-This value can be modified, following bucket-creation. -If such modification occurs, the bucket is restarted with the new setting: this may cause inaccessibility of data, during the bucket's warm-up period. +Changes to the ejection policy of an ephemeral bucket take effect without requiring any further steps. +Before the change takes effect on a Couchbase bucket, you must perform one of the following actions: -Incorrect specification of an ejection policy returns an error-notification, such as `{"evictionPolicy":"Eviction policy must be either 'valueOnly' or 'fullEviction' for couchbase buckets"}`. +* Allow Couchbase Server to restart the bucket automatically. +It does so unless you set <> to `true`. +When Couchbase Server restarts the bucket, it closes connections and makes the bucket unavailable temporarily. +* Perform a xref:install:upgrade-procedure-selection.adoc#swap-rebalance[swap rebalance] on all nodes in the cluster running the data service. +* Perform a xref:learn:clusters-and-availability/graceful-failover.adoc[graceful failover] followed by a xref:learn:clusters-and-availability/recovery.adoc#delta-recovery[delta recovery] and xref:learn:clusters-and-availability/rebalance.adoc[rebalance] for all nodes running the data service in the cluster. ++ +NOTE: If you're performing a storage backend migration (see xref:manage:manage-buckets/migrate-bucket.adoc[]) while you're changing the ejection policy, you must set `noRestart` to `true`. +You must also perform a full recovery instead of a delta recovery after the graceful failover because the migration process requires it. -For information on ejection policies, see xref:learn:buckets-memory-and-storage/buckets.adoc#bucket-types[Bucket Types]. -For general information on memory management in the context of ejection, see xref:learn:buckets-memory-and-storage/memory.adoc#ejection[Ejection]. +For more information about changing the ejection policy of a bucket, including the steps to take to change the policy without downtime, see xref:manage:manage-buckets/change-ejection-policy.adoc[]. [#example-evictionpolicy-create] -==== Example: Specifying an Eviction Policy, when Creating +==== Example: Specifying an Ejection Policy, when Creating -The following example creates a new bucket, named `testBucket`, which is a Couchbase bucket by default; and assigns it the `fullEviction` policy. +The following example creates a new bucket named `testBucket`, which is a Couchbase bucket by default, and assigns it the `fullEviction` policy. [source,bash] ---- curl -v -X POST http://127.0.0.1:8091/pools/default/buckets \ @@ -414,7 +424,7 @@ If successful, the call returns a `202 Accepted` notification. No object is returned. [#example-evictionpolicy-edit] -==== Example: Specifying a New Eviction Policy, when Editing +==== Example: Specifying a New Ejection Policy, when Editing The following example modifies the eviction policy of the existing bucket `testBucket`, specifying that it should be `valueOnly`. @@ -425,8 +435,33 @@ curl -v -X POST http://127.0.0.1:8091/pools/default/buckets/testBucket \ -d evictionPolicy=valueOnly ---- -If successful, the call returns a `200 OK` notification. -No object is returned. +If successful, the call returns a `200 OK` notification with no other value returned. +Couchbase Server also starts the process of restarting the bucket. + +[#norestart] +=== noRestart + +Set this parameter to `true` to prevent Couchbase Server from automatically restarting the bucket when you change the ejection policy using <>.
+This parameter only has an effect if you also set the `evictionPolicy` parameter. + +This parameter defaults to `false`, meaning Couchbase Server automatically restarts the bucket after you change the ejection policy. + +When set to `true`, the new ejection policy does not take effect until you perform further steps (see <> for details). + +[#example-norestart-edit] +==== Example: Set a New Ejection Policy Without Bucket Restart + +The following example sets the ejection policy of the `travel-sample` bucket to `fullEviction` and prevents Couchbase Server from restarting the bucket: + +[source,console] +---- +include::manage:example$change-ejection-policy.sh[tag=change-ejection-no-restart] +---- + +If successful, the call returns a `200 OK` notification with no other value returned. +Couchbase Server does not restart the bucket. +The new ejection policy does not take effect until you perform one of the procedures described in <>. + [#durabilityminlevel] === durabilityMinLevel @@ -522,53 +557,6 @@ curl -v -X POST -u Administrator:password \ If successful, the call returns a `200 OK` notification. -[#threadsnumber] -=== threadsNumber - -The priority for the bucket, as described in xref:manage:manage-buckets/create-bucket.adoc#bucket-priority[Create a Bucket]. -Priority can be established as either Low or High. -To establish priority as Low (which is the default), the value of `threadsNumber` must be `3`. -To establish priority as High, the value must be `8`. -If any other value is used, the value is ignored; and the bucket's priority remains low. - -If this parameter is incorrectly specified, an error-notification such as the following is returned: `{"threadsNumber":"The number of threads must be an integer between 2 and 8"}`. -(Note that, as indicated above, all values other than `3` and `8` are ignored.) - -This parameter can be modified, following bucket-creation. -If such modification occurs, the bucket is restarted with the new setting: this may cause inaccessibility of data, during the bucket's warm-up period. - -[#example-threadsnumber-create] -==== Example: Specifying a Bucket Priority, when Creating - -The following example creates a new bucket, named `testBucket`, which is a Couchbase bucket by default; and assigns it a High priority, by specifying `8` as the value to the `threadsNumber` parameter. - -[source,bash] ----- -curl -v -X POST http://127.0.0.1:8091/pools/default/buckets \ --u Administrator:password \ --d name=testBucket \ --d ramQuota=256 \ --d threadsNumber=8 ----- - -If successful, the call returns a `202 Accepted` notification. -No object is returned. - -[#example-threadsnumber-edit] -==== Example: Specifying a New Bucket Priority, when Editing - -The following example modifies the priority of the existing bucket `testBucket`, changing the level to Low, by establishing `3` as the value of the `threadsNumber` parameter. - -[source,bash] ----- -curl -v -X POST http://127.0.0.1:8091/pools/default/buckets/testBucket \ --u Administrator:password \ --d threadsNumber=3 ----- - -If successful, the call returns a `200 OK` notification. -No object is returned.
- [#rank] === rank diff --git a/preview/DOC-12483_remove_memcached_buckets.yml b/preview/MB-68541_Change_eviction_on_Ephemeral_buckets.yml similarity index 82% rename from preview/DOC-12483_remove_memcached_buckets.yml rename to preview/MB-68541_Change_eviction_on_Ephemeral_buckets.yml index 58eddfd02b..c5a7dff6be 100644 --- a/preview/DOC-12483_remove_memcached_buckets.yml +++ b/preview/MB-68541_Change_eviction_on_Ephemeral_buckets.yml @@ -18,8 +18,8 @@ sources: #analytics: # url: ../../docs-includes/docs-analytics # branches: HEAD - cb-swagger: - rl: https://github.com/couchbaselabs/cb-swagger - branches: release/8.0 - start_path: docs + #cb-swagger: + # rl: https://github.com/couchbaselabs/cb-swagger + # branches: release/8.0 + # start_path: docs