2 changes: 2 additions & 0 deletions src/current/v25.4/recommended-production-settings.md
@@ -152,6 +152,8 @@ We recommend provisioning volumes with {% include {{ page.version.version }}/pro

This is especially recommended if you are using local disks rather than a cloud provider's network-attached disks that are often replicated under the hood, because local disks have a greater risk of failure. You can do this for the [entire cluster]({% link {{ page.version.version }}/configure-replication-zones.md %}#edit-the-default-replication-zone) or for specific [databases]({% link {{ page.version.version }}/configure-replication-zones.md %}#create-a-replication-zone-for-a-database), [tables]({% link {{ page.version.version }}/configure-replication-zones.md %}#create-a-replication-zone-for-a-table), or [rows]({% link {{ page.version.version }}/configure-replication-zones.md %}#create-a-replication-zone-for-a-partition).
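The zone-config change above can be sketched in SQL. This is a minimal illustration, not part of the linked pages: the database name `my_db` and the factor of 5 are assumptions (a replication factor of 5 lets a cluster of at least 5 nodes tolerate two simultaneous node failures):

```sql
-- Raise the replication factor for the entire cluster (default zone).
ALTER RANGE default CONFIGURE ZONE USING num_replicas = 5;

-- Or scope the change to a single database (my_db is a hypothetical name).
ALTER DATABASE my_db CONFIGURE ZONE USING num_replicas = 5;

-- Inspect the resulting configuration.
SHOW ZONE CONFIGURATION FROM RANGE default;
```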

- Avoid distributed storage systems. This includes distributed block storage and file systems such as Ceph, GlusterFS, DRBD, and SAN-style solutions. CockroachDB is already a distributed, replicated storage system: it manages [data distribution]({% link {{ page.version.version }}/architecture/distribution-layer.md %}), [replication and rebalancing]({% link {{ page.version.version }}/architecture/replication-layer.md %}), and fault tolerance using [Raft]({% link {{ page.version.version }}/architecture/replication-layer.md %}#raft). Putting CockroachDB's data directory on a distributed storage system adds a second, independent layer that also performs replication and failure handling, and the two layers do not coordinate. The result can be duplicate replication and write amplification; higher, more variable latency from extra network hops in the I/O path; conflicting recovery behavior during failures; and failure modes that are more complex and harder to debug.

{{site.data.alerts.callout_info}}
Under-provisioning storage leads to node crashes when disks fill up, and recovering after a disk fills up is difficult. To prevent this, provision enough storage for your workload, monitor your disk usage, and use a [ballast file]({% link {{ page.version.version }}/cluster-setup-troubleshooting.md %}#automatic-ballast-files). For more information, see [capacity planning issues]({% link {{ page.version.version }}/cluster-setup-troubleshooting.md %}#capacity-planning-issues) and [storage issues]({% link {{ page.version.version }}/cluster-setup-troubleshooting.md %}#storage-issues).
{{site.data.alerts.end}}
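As a hedged illustration of the monitoring-and-headroom idea in the callout (the store path and sizes below are assumptions, not from the linked pages), you can watch free space on the store volume and preallocate a placeholder file to delete in an emergency; CockroachDB's automatic ballast file serves the same purpose:

```shell
# Sketch only: /var/lib/cockroach is an assumed store path.
# Watch free space on the volume that holds the CockroachDB store.
df -h /var/lib/cockroach

# Preallocate a 1 GiB placeholder you can delete later to free space
# immediately. fallocate (Linux) reserves real blocks; truncate would
# only create a sparse file that reserves nothing.
fallocate -l 1G /var/lib/cockroach/manual-ballast
```

Deleting such a placeholder is a last-resort recovery step; prefer the automatic ballast file described in the linked troubleshooting page.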