Restructure hardware requirements table #559

Merged (5 commits) on Jun 14, 2024
19 changes: 10 additions & 9 deletions docs/install/requirements.md
@@ -22,17 +22,18 @@ Certain versions of Harvester support the deployment of [single-node clusters](h

Harvester nodes have the following hardware requirements and recommendations for installation and testing.

| Type | Requirements and Recommendations |
|:-----------------|:------------------------------------------------------------------------------------------------------------------------------------------|
| CPU | x86_64 only. Hardware-assisted virtualization is required. 8-core processor minimum for testing; 16-core or above required for production |
| Memory | 32 GB minimum for testing; 64 GB or above required for production |
| Disk Capacity | 250 GB minimum for testing (180 GB minimum when using multiple disks); 500 GB or above required for production |
| Disk Performance | 5,000+ random IOPS per disk (SSD/NVMe). Management nodes (first 3 nodes) must be [fast enough for etcd](https://www.suse.com/support/kb/doc/?id=000020100) |
| Network Card | Minimum Ethernet speed: 1 Gbps for testing, 10 Gbps for production; Recommended network interfaces: 2 NICs for the management cluster network, 2 additional NICs for the dedicated VM workload network (does not apply to the [witness node](../advanced/witness.md)) |
| Network Switch | Trunking of ports required for VLAN support |
| Hardware | Development/Testing | Production |
| :--- | :--- | :--- |
| CPU | x86_64 (with hardware-assisted virtualization); 8 cores minimum | x86_64 (with hardware-assisted virtualization); 16 cores minimum |
| Memory | 32 GB minimum | 64 GB minimum |
| Disk capacity | 250 GB minimum (180 GB minimum when using multiple disks) | 500 GB minimum |
| Disk performance | 5,000+ random IOPS per disk (SSD/NVMe); management node storage must meet [etcd](https://www.suse.com/support/kb/doc/?id=000020100) speed requirements. Only local disks and hardware RAID are supported. | 5,000+ random IOPS per disk (SSD/NVMe); management node storage must meet [etcd](https://www.suse.com/support/kb/doc/?id=000020100) speed requirements. Only local disks and hardware RAID are supported. |
| Network card count | Management cluster network: 1 NIC required, 2 NICs recommended; VM workload network: 1 NIC required, at least 2 NICs recommended (does not apply to the [witness node](../advanced/witness.md)) | Management cluster network: 1 NIC required, 2 NICs recommended; VM workload network: 1 NIC required, at least 2 NICs recommended (does not apply to the [witness node](../advanced/witness.md)) |
| Network card speed | 1 Gbps Ethernet minimum | 10 Gbps Ethernet minimum |
| Network switch | Port trunking for VLAN support | Port trunking for VLAN support |

:::info important
- Use server-class hardware to achieve the best results. Laptops and nested virtualization are not supported.
- For best results, use [YES-certified hardware](https://www.suse.com/partners/ihv/yes/) for SUSE Linux Enterprise Server (SLES) 15 SP3 or SP4. Harvester is built on SLE technology and YES-certified hardware has additional validation of driver and system board compatibility. Laptops and nested virtualization are not supported.
- Each node must have a unique `product_uuid` (fetched from `/sys/class/dmi/id/product_uuid`) to prevent errors from occurring during VM live migration and other operations. For more information, see [Issue #4025](https://github.com/harvester/harvester/issues/4025).
- Harvester has a [built-in management cluster network](../networking/clusternetwork.md#built-in-cluster-network) (`mgmt`). To achieve high availability and the best performance in production environments, use at least two NICs in each node to set up a bonded NIC for the management network (see step 6 in [ISO Installation](../install/iso-install.md#installation-steps)). You can also create [custom cluster networks](../networking/clusternetwork.md#custom-cluster-network) for VM workloads. Each custom cluster network requires at least two additional NICs to set up a bonded NIC in every involved node of the Harvester cluster. The [witness node](../advanced/witness.md) does not require additional NICs. For more information, see [Cluster Network](../networking/clusternetwork.md#concepts).
- During testing, you can use only one NIC for the [built-in management cluster network](../networking/clusternetwork.md#built-in-cluster-network) (`mgmt`), and for testing the [VM network](../networking/harvester-network.md#create-a-vm-network) that is also carried by `mgmt`. High availability and optimal performance are not guaranteed.
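The `product_uuid` uniqueness requirement above can be verified before installation with a quick shell check. This is an illustrative sketch, not part of the Harvester tooling: the UUID values below are placeholders, and in a real cluster you would gather each value over SSH (for example, `ssh <node> cat /sys/class/dmi/id/product_uuid`).

```shell
# Illustrative check: gather product_uuid from each node and flag duplicates.
# In a real cluster, collect each value over SSH, e.g.:
#   ssh <node> cat /sys/class/dmi/id/product_uuid
# The sample values below are placeholders.
uuids="4c4c4544-0042-3010-8047-000000000001
4c4c4544-0042-3010-8047-000000000002
4c4c4544-0042-3010-8047-000000000001"

# sort groups identical values; uniq -d prints each duplicated value once
dups=$(printf '%s\n' "$uuids" | sort | uniq -d)
if [ -n "$dups" ]; then
  echo "Duplicate product_uuid found: $dups"
else
  echo "All product_uuid values are unique"
fi
```

With the sample data above, the duplicated first value is reported; with unique values per node, the check passes silently.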
19 changes: 10 additions & 9 deletions versioned_docs/version-v1.2/install/requirements.md
@@ -22,17 +22,18 @@ Certain versions of Harvester support the deployment of [single-node clusters](h

Harvester nodes have the following hardware requirements and recommendations for installation and testing.

| Type | Requirements and Recommendations |
|:-----------------|:------------------------------------------------------------------------------------------------------------------------------------------|
| CPU | x86_64 only. Hardware-assisted virtualization is required. 8-core processor minimum for testing; 16-core or above required for production |
| Memory | 32 GB minimum for testing; 64 GB or above required for production |
| Disk Capacity | 250 GB minimum for testing (180 GB minimum when using multiple disks); 500 GB or above required for production |
| Disk Performance | 5,000+ random IOPS per disk (SSD/NVMe). Management nodes (first 3 nodes) must be [fast enough for etcd](https://www.suse.com/support/kb/doc/?id=000020100) |
| Network Card | Minimum Ethernet speed: 1 Gbps for testing, 10 Gbps for production; Recommended network interfaces: 2 NICs for the management cluster network, 2 additional NICs for the dedicated VM workload network |
| Network Switch | Trunking of ports required for VLAN support |
| Hardware | Development/Testing | Production |
| :--- | :--- | :--- |
| CPU | x86_64 (with hardware-assisted virtualization); 8 cores minimum | x86_64 (with hardware-assisted virtualization); 16 cores minimum |
| Memory | 32 GB minimum | 64 GB minimum |
| Disk capacity | 250 GB minimum (180 GB minimum when using multiple disks) | 500 GB minimum |
| Disk performance | 5,000+ random IOPS per disk (SSD/NVMe); management node storage must meet [etcd](https://www.suse.com/support/kb/doc/?id=000020100) speed requirements. Only local disks and hardware RAID are supported. | 5,000+ random IOPS per disk (SSD/NVMe); management node storage must meet [etcd](https://www.suse.com/support/kb/doc/?id=000020100) speed requirements. Only local disks and hardware RAID are supported. |
| Network card count | Management cluster network: 1 NIC required, 2 NICs recommended; VM workload network: 1 NIC required, at least 2 NICs recommended | Management cluster network: 1 NIC required, 2 NICs recommended; VM workload network: 1 NIC required, at least 2 NICs recommended |
| Network card speed | 1 Gbps Ethernet minimum | 10 Gbps Ethernet minimum |
| Network switch | Port trunking for VLAN support | Port trunking for VLAN support |

:::info important
- Use server-class hardware to achieve the best results. Laptops and nested virtualization are not supported.
- For best results, use [YES-certified hardware](https://www.suse.com/partners/ihv/yes/) for SUSE Linux Enterprise Server (SLES) 15 SP3 or SP4. Harvester is built on SLE technology and YES-certified hardware has additional validation of driver and system board compatibility. Laptops and nested virtualization are not supported.
- Each node must have a unique `product_uuid` (fetched from `/sys/class/dmi/id/product_uuid`) to prevent errors from occurring during VM live migration and other operations. For more information, see [Issue #4025](https://github.com/harvester/harvester/issues/4025).
- Harvester has a [built-in management cluster network](../networking/clusternetwork.md#built-in-cluster-network) (`mgmt`). To achieve high availability and the best performance in production environments, use at least two NICs in each node to set up a bonded NIC for the management network (see step 6 in [ISO Installation](../install/iso-install.md#installation-steps)). You can also create [custom cluster networks](../networking/clusternetwork.md#custom-cluster-network) for VM workloads. Each custom cluster network requires at least two additional NICs to set up a bonded NIC in every involved node of the Harvester cluster. For more information, see [Cluster Network](../networking/clusternetwork.md#concepts).
- During testing, you can use only one NIC for the [built-in management cluster network](../networking/clusternetwork.md#built-in-cluster-network) (`mgmt`), and for testing the [VM network](../networking/harvester-network.md#create-a-vm-network) that is also carried by `mgmt`. High availability and optimal performance are not guaranteed.
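A rough way to sanity-check the disk performance requirement on a candidate management node is the common `fio` fdatasync benchmark referenced in the etcd guidance linked above. This is a sketch under assumptions: `fio` must be installed, the target directory and sizes are illustrative, and `--directory` should point at the disk that will actually hold etcd data.

```shell
# Rough disk-latency sanity check with fio, if it is installed.
# Parameters follow the common etcd fdatasync benchmark; the target
# directory is illustrative -- point --directory at the etcd disk.
if command -v fio >/dev/null 2>&1; then
  fio --name=etcd-check --directory="${TMPDIR:-/tmp}" \
      --rw=write --ioengine=sync --fdatasync=1 \
      --size=22m --bs=2300
else
  echo "fio not installed; skipping disk check"
fi
```

In the resulting report, the 99th-percentile fdatasync latency is the figure to compare against the etcd guidance in the linked KB article.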
19 changes: 10 additions & 9 deletions versioned_docs/version-v1.3/install/requirements.md
@@ -22,17 +22,18 @@ Certain versions of Harvester support the deployment of [single-node clusters](h

Harvester nodes have the following hardware requirements and recommendations for installation and testing.

| Type | Requirements and Recommendations |
|:-----------------|:------------------------------------------------------------------------------------------------------------------------------------------|
| CPU | x86_64 only. Hardware-assisted virtualization is required. 8-core processor minimum for testing; 16-core or above required for production |
| Memory | 32 GB minimum for testing; 64 GB or above required for production |
| Disk Capacity | 250 GB minimum for testing (180 GB minimum when using multiple disks); 500 GB or above required for production |
| Disk Performance | 5,000+ random IOPS per disk (SSD/NVMe). Management nodes (first 3 nodes) must be [fast enough for etcd](https://www.suse.com/support/kb/doc/?id=000020100) |
| Network Card | Minimum Ethernet speed: 1 Gbps for testing, 10 Gbps for production; Recommended network interfaces: 2 NICs for the management cluster network, 2 additional NICs for the dedicated VM workload network (does not apply to the [witness node](../advanced/witness.md)) |
| Network Switch | Trunking of ports required for VLAN support |
| Hardware | Development/Testing | Production |
| :--- | :--- | :--- |
| CPU | x86_64 (with hardware-assisted virtualization); 8 cores minimum | x86_64 (with hardware-assisted virtualization); 16 cores minimum |
| Memory | 32 GB minimum | 64 GB minimum |
| Disk capacity | 250 GB minimum (180 GB minimum when using multiple disks) | 500 GB minimum |
| Disk performance | 5,000+ random IOPS per disk (SSD/NVMe); management node storage must meet [etcd](https://www.suse.com/support/kb/doc/?id=000020100) speed requirements. Only local disks and hardware RAID are supported. | 5,000+ random IOPS per disk (SSD/NVMe); management node storage must meet [etcd](https://www.suse.com/support/kb/doc/?id=000020100) speed requirements. Only local disks and hardware RAID are supported. |
| Network card count | Management cluster network: 1 NIC required, 2 NICs recommended; VM workload network: 1 NIC required, at least 2 NICs recommended (does not apply to the [witness node](../advanced/witness.md)) | Management cluster network: 1 NIC required, 2 NICs recommended; VM workload network: 1 NIC required, at least 2 NICs recommended (does not apply to the [witness node](../advanced/witness.md)) |
| Network card speed | 1 Gbps Ethernet minimum | 10 Gbps Ethernet minimum |
| Network switch | Port trunking for VLAN support | Port trunking for VLAN support |

:::info important
- Use server-class hardware to achieve the best results. Laptops and nested virtualization are not supported.
- For best results, use [YES-certified hardware](https://www.suse.com/partners/ihv/yes/) for SUSE Linux Enterprise Server (SLES) 15 SP3 or SP4. Harvester is built on SLE technology and YES-certified hardware has additional validation of driver and system board compatibility. Laptops and nested virtualization are not supported.
- Each node must have a unique `product_uuid` (fetched from `/sys/class/dmi/id/product_uuid`) to prevent errors from occurring during VM live migration and other operations. For more information, see [Issue #4025](https://github.com/harvester/harvester/issues/4025).
- Harvester has a [built-in management cluster network](../networking/clusternetwork.md#built-in-cluster-network) (`mgmt`). To achieve high availability and the best performance in production environments, use at least two NICs in each node to set up a bonded NIC for the management network (see step 6 in [ISO Installation](../install/iso-install.md#installation-steps)). You can also create [custom cluster networks](../networking/clusternetwork.md#custom-cluster-network) for VM workloads. Each custom cluster network requires at least two additional NICs to set up a bonded NIC in every involved node of the Harvester cluster. The [witness node](../advanced/witness.md) does not require additional NICs. For more information, see [Cluster Network](../networking/clusternetwork.md#concepts).
- During testing, you can use only one NIC for the [built-in management cluster network](../networking/clusternetwork.md#built-in-cluster-network) (`mgmt`), and for testing the [VM network](../networking/harvester-network.md#create-a-vm-network) that is also carried by `mgmt`. High availability and optimal performance are not guaranteed.
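The bonded management NIC recommended above is set up during installation. As a minimal sketch (interface names and bond mode are placeholders, and the exact schema should be confirmed against the Harvester installation documentation), the relevant fragment of an automated install config might look like:

```yaml
# Illustrative fragment only -- interface names and bond mode are
# placeholders; verify the schema against the Harvester install docs.
install:
  management_interface:
    interfaces:
      - name: ens3   # first NIC in the bond
      - name: ens4   # second NIC for redundancy
    method: dhcp
    bond_options:
      mode: balance-tlb
      miimon: 100
```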