This Ansible role installs the ZFS filesystem module, creates or imports zpools and manages ZFS datasets on Debian and RedHat systems.
Destructive operations such as ZFS pool deletions are out of scope and not supported by the role, so you don't have to worry too much about losing data when using it.
- For Debian systems, the backports contrib repository needs to be enabled first. It is not included in the role. See Debian Backports for instructions.
- The target system needs to be managed with Systemd.
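Enabling backports before applying the role could be sketched as a pre-task using `ansible.builtin.apt_repository`. The repository line below is an assumption based on the standard Debian mirror layout; adjust the mirror and components to your environment:

```yaml
# Hypothetical pre-play: enable Debian backports with the contrib component
# before including the role. The repo line is an assumption; adapt the mirror
# to your setup.
- hosts: all
  tasks:
    - name: Enable Debian backports (contrib)
      ansible.builtin.apt_repository:
        repo: "deb http://deb.debian.org/debian {{ ansible_distribution_release }}-backports main contrib"
        state: present
        update_cache: true
      when: ansible_os_family == "Debian"
```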
Bold variables are required.
| Variable | Default | Comments |
|---|---|---|
| `zfs_manage_repository` | `true` | Manage package repositories (YUM, APT). |
| `zfs_redhat_style` | `kmod` | Style of ZFS module installation. Can be either `kmod` or `dkms`. Applies only to RedHat systems. See the official documentation for information on the DKMS and kmod versions of OpenZFS. |
| `zfs_redhat_repo_dkms_url` | `http://download.zfsonlinux.org/epel/{{ ansible_distribution_version }}/$basearch/` | Repository URL used for the DKMS installation of ZFS. Applies only to RedHat systems. |
| `zfs_redhat_repo_kmod_url` | `http://download.zfsonlinux.org/epel/{{ ansible_distribution_version }}/kmod/$basearch/` | Repository URL used for the kmod installation of ZFS. Applies only to RedHat systems. |
| `zfs_redhat_repo_proxy` | | YUM/DNF repository proxy URL. |
| `zfs_debian_repo` | `{{ ansible_distribution_release }}-backports` | Repository used for installation. Applies only to Debian systems. |
| `zfs_service_import_cache_enabled` | `true` | Enable the service that imports ZFS pools by cache file. |
| `zfs_service_import_scan_enabled` | `false` | Enable the service that imports ZFS pools by device scanning. |
| `zfs_service_mount_enabled` | `"{{ false if zfs_use_zfs_mount_generator else true }}"` | Enable the service that mounts ZFS filesystems using the ZFS built-in mounting mechanism. |
| `zfs_service_share_enabled` | `false` | Enable the ZFS file system shares service. |
| `zfs_service_volume_wait_enabled` | `true` | Enable the service that waits for ZFS volume links in `/dev`. |
| `zfs_service_zed_enabled` | `false` | Enable the ZFS Event Daemon (ZED) service. |
| `zfs_use_zfs_mount_generator` | `false` | Enable the Systemd mount generator to automatically mount volumes on boot with Systemd. |
| `zfs_kernel_module_parameters` | `{}` | Dictionary (key-value pairs) of ZFS kernel module parameters. See the official documentation for available parameters. |
| `zfs_scrub_schedule` | `monthly` | Time schedule for zpool scrubs. Valid options can be looked up here. |
| `zfs_trim_schedule` | `weekly` | Time schedule for TRIM operations (for SSDs or virtual drives). Valid options can be looked up here. |
| `zfs_config_none_ioscheduler` | `[]` | Set the IO scheduler for the listed HDDs to `none`. |
| `zfs_pools` | `[]` | List of ZFS pools (zpools). |
| **`zfs_pools[].name`** | | Name of the zpool. |
| **`zfs_pools[].vdev`** | | VDev definition for the zpool. |
| `zfs_pools[].scrub` | `true` | Enable scrubbing for this zpool. |
| `zfs_pools[].dont_enable_features` | `false` | Don't enable any features. Use this in combination with `properties` to enable a custom set of features. |
| `zfs_pools[].properties` | `{}` | Zpool properties. |
| `zfs_pools[].filesystem_properties` | `{}` | Filesystem properties to apply to the whole zpool. |
| `zfs_pools[].extra_import_options` | `""` | String of extra options to pass to the zpool import command. |
| `zfs_pools[].extra_create_options` | `""` | String of extra options to pass to the zpool create command. |
| `zfs_pools_defaults` | `{}` | Default properties for zpools. The properties can be overridden on a per-zpool basis. |
| `zfs_volumes` | `[]` | List of ZFS volumes. |
| **`zfs_volume[].name`** | | ZFS volume name. |
| `zfs_volume[].properties` | `{}` | Dictionary (key-value pairs) of volume properties to be set. |
| `zfs_volume[].state` | `present` | Whether to create (`present`) or remove (`absent`) the volume. |
| `zfs_volumes_properties_defaults` | `volblocksize: 8K`<br>`volsize: 1G`<br>`compression: lz4`<br>`dedup: false`<br>`sync: standard` | Default properties for ZFS volumes. The properties can be overridden on a per-volume basis. |
| `zfs_filesystems` | `[]` | List of ZFS filesystems. |
| **`zfs_filesystem[].name`** | | ZFS filesystem name. |
| `zfs_filesystem[].properties` | `{}` | Dictionary (key-value pairs) of filesystem properties to be set. |
| `zfs_filesystem[].state` | `present` | Whether to create (`present`) or remove (`absent`) the filesystem. |
| `zfs_filesystems_properties_defaults` | `acltype: posix`<br>`atime: false`<br>`canmount: true`<br>`casesensitivity: sensitive`<br>`compression: lz4`<br>`dedup: false`<br>`normalization: formD`<br>`setuid: true`<br>`snapdir: hidden`<br>`sync: standard`<br>`utf8only: true`<br>`xattr: sa` | Default properties for ZFS filesystems. The properties can be overridden on a per-filesystem basis. |
| `zfs_zrepl_config` | `{}` | Configuration for ZREPL. See the official documentation for a list of available parameters. Examples can be found here. |
| `zfs_zrepl_enabled` | `false` | Install and enable ZREPL for replication and snapshots. |
| `zfs_zrepl_redhat_repo_url` | `https://zrepl.cschwarz.com/rpm/repo` | Repository URL used for the ZREPL installation. Applies only to RedHat systems. |
| `zfs_zrepl_debian_repo_url` | `https://zrepl.cschwarz.com/apt` | Repository URL used for the ZREPL installation. Applies only to Debian systems. |
Depends on the `community.general` collection.
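A `requirements.yml` that pulls in both the role and its collection dependency might look like this (file layout per Ansible Galaxy conventions; the role name is taken from the examples below):

```yaml
# requirements.yml
collections:
  - name: community.general
roles:
  - name: aisbergg.zfs
```

With Ansible 2.10 or newer, `ansible-galaxy install -r requirements.yml` should resolve both sections.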
A simple example to create a ZFS pool with a mirror vdev and two disks. You can test this one out using Vagrant + VirtualBox and the `Vagrantfile` provided in the `examples/` directory. Simply run `vagrant up` in the `examples/` directory and a virtual machine will be spun up with ZFS installed and a pool created.
Note: Depending on the mechanism used to install ZFS (DKMS or kmod), it might take some time to compile the kernel module and for the role to finish. This is especially true for the first run.
```yaml
- hosts: all
  tasks:
    # ensure you have enabled the backports repository on Debian systems first
    - ansible.builtin.include_role:
        name: aisbergg.zfs
      vars:
        zfs_pools:
          - name: pool1
            vdev: >-
              mirror
              sdb
              sdc
            scrub: true
            properties:
              ashift: 12
            filesystem_properties:
              mountpoint: /mnt/raid1
              compression: lz4
        # properties of zfs_filesystems_properties_defaults also apply here
        zfs_filesystems:
          - name: pool1/vol1
          - name: pool1/vol2
        # schedule for ZFS scrubs
        zfs_scrub_schedule: monthly
        # schedule for TRIM
        zfs_trim_schedule: weekly
```
This example is a more advanced setup with multiple pools, volumes and filesystems. It also includes a ZREPL configuration for automatic snapshots. It reflects a system setup of mine where I installed the whole system on ZFS. It consists of two pools, `rpool` and `bpool`: `rpool` is the root pool with the system installed on it, and `bpool` is the boot pool with the boot partition.
```yaml
- hosts: all
  vars:
    #
    # Services
    #
    # generate mount points using systemd
    zfs_use_zfs_mount_generator: true
    # use zfs_mount_generator but don't invoke ZED (Docker triggers it quite often)
    zfs_service_zed_enabled: false

    #
    # Configuration
    #
    # https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Module%20Parameters.html#zfs-module-parameters
    zfs_kernel_module_parameters:
      # use 1/4 of the memory for ZFS ARC
      zfs_arc_max: "{{ (ansible_memtotal_mb * 1024**2 * 0.25) | int }}"
    # schedule for ZFS scrubs
    zfs_scrub_schedule: monthly
    # schedule for TRIM
    zfs_trim_schedule: weekly

    _zfs_performance_tuning_default:
      # store less metadata (still redundant in mirror setups)
      redundant_metadata: most
      # use standard behaviour for synchronous writes
      sync: standard
    _zfs_performance_tuning_async_only:
      # store less metadata (still redundant in mirror setups)
      redundant_metadata: most
      # turn synchronous writes into asynchronous ones
      sync: disabled
    _zfs_performance_tuning_ssd:
      # use standard behaviour for synchronous writes
      sync: standard
      # store less metadata (still redundant in mirror setups)
      redundant_metadata: most
      # optimize synchronous operations to write directly to disk instead of
      # writing to a log. On HDDs this decreases the latency, but won't do
      # much on SSDs.
      logbias: throughput

    _zfs_filesystems_properties:
      canmount: true
      snapdir: hidden
      # make ZFS behave like a Linux FS
      casesensitivity: sensitive
      normalization: formD
      utf8only: on
      setuid: true
      atime: false
      # enable use of ACLs
      acltype: posix
      xattr: sa
      # compression and deduplication
      compression: lz4
      dedup: false
    zfs_filesystems_properties_defaults: "{{
      _zfs_filesystems_properties | combine(
        _zfs_performance_tuning_async_only
      ) }}"

    _zfs_volumes_properties:
      volblocksize: 8K
      volsize: 1G
      compression: lz4
      dedup: false
    # https://openzfs.github.io/openzfs-docs/man/7/zfsprops.7.html
    zfs_volumes_properties_defaults: "{{
      _zfs_volumes_properties | combine(
        _zfs_performance_tuning_async_only
      ) }}"

    #
    # ZPools
    #
    zfs_pools:
      - name: rpool
        vdev: >-
          mirror
          r1
          r2
        scrub: true
        properties:
          ashift: 12
        filesystem_properties:
          # don't mount, just supply a base path for sub datasets
          canmount: off
          mountpoint: /
      - name: bpool
        vdev: >-
          mirror
          {{ _zfs_boot_partition1 }}
          {{ _zfs_boot_partition2 }}
        scrub: true
        properties:
          ashift: 12
          "feature@async_destroy": enabled
          "feature@bookmarks": enabled
          "feature@embedded_data": enabled
          "feature@empty_bpobj": enabled
          "feature@enabled_txg": enabled
          "feature@extensible_dataset": enabled
          "feature@filesystem_limits": enabled
          "feature@hole_birth": enabled
          "feature@large_blocks": enabled
          "feature@lz4_compress": enabled
          "feature@spacemap_histogram": enabled
        dont_enable_features: true
        filesystem_properties:
          canmount: off
          mountpoint: /boot

    #
    # Datasets
    #
    zfs_filesystems:
      # root
      - name: rpool/ROOT
        properties:
          canmount: off
          mountpoint: none
      - name: rpool/ROOT/default
        properties:
          canmount: noauto
          mountpoint: /
      - name: rpool/home
      - name: rpool/home/root
        properties:
          mountpoint: /root
      - name: rpool/var/lib/docker
      - name: rpool/var/log
      - name: rpool/var/spool
      - name: rpool/var/cache
      # boot
      - name: bpool/default
        properties:
          mountpoint: /boot

    #
    # Automatic Snapshots Using ZREPL
    #
    zfs_zrepl_enabled: true
    zfs_zrepl_config:
      jobs:
        - name: storage
          type: snap
          filesystems: {
            "rpool<": true,
            "rpool/var<": false,
          }
          snapshotting:
            type: periodic
            interval: 12h
            prefix: auto_
          pruning:
            keep:
              # prune automatic snapshots
              - type: grid
                # in the first 24 hours keep all snapshots
                # in the first 7 days keep 1 snapshot each day
                # in the first month keep 1 snapshot each week
                # discard the rest
                # for details see: https://zrepl.github.io/configuration/prune.html#policy-grid
                grid: 1x24h(keep=all) | 7x1d(keep=1) | 3x7d(keep=1)
                regex: "^auto_.*"
              # keep manual snapshots
              - type: regex
                regex: "^manual_.*"

  roles:
    - aisbergg.zfs
```
License: MIT

Author: Andre Lehmann (aisberg@posteo.de)