storage pools. A storage pool is a collection of devices that provides physical storage and data replication for
datasets.
All datasets within a storage pool share the same space. See zfs(1M) for information on managing datasets.
Virtual Devices (vdevs)
A "virtual device" describes a single device or a collection of devices organized according to certain performance and fault characteristics. The following virtual devices are supported:
disk
A block device, typically located under "/dev/dsk". ZFS can use individual slices or partitions, though the recommended mode of operation is to use whole disks. A disk can be specified by a full path, or it can be a shorthand name (the relative portion of the path under "/dev/dsk"). A whole disk can be specified by omitting the slice or partition designation. For example, "c0t0d0" is equivalent to "/dev/dsk/c0t0d0s2". When given a whole disk, ZFS automatically labels the disk, if necessary.
file
A regular file. The use of files as a backing store is strongly discouraged. It is designed primarily for experimental purposes, as the fault tolerance of a file is only as good as the file system of which it is a part. A file must be specified by a full path.
mirror
A mirror of two or more devices. Data is replicated in an identical fashion across all components of a mirror. A mirror with N disks of size X can hold X bytes and can withstand (N-1) devices failing before data integrity is compromised.
raidz
raidz1
raidz2
A variation on RAID-5 that allows for better distribution of parity and eliminates the "RAID-5 write hole" (in which data and parity become inconsistent after a power loss). Data and parity are striped across all disks within a raidz group.
A raidz group can have either single- or double-parity, meaning that the raidz group can sustain one or two failures, respectively, without losing any data. The raidz1 vdev type specifies a single-parity raidz group and the raidz2 vdev type specifies a double-parity raidz group. The raidz vdev type is an alias for raidz1.
A raidz group with N disks of size X and P parity disks can hold approximately (N-P)*X bytes and can withstand P devices failing before data integrity is compromised. The minimum number of devices in a raidz group is one more than the number of parity disks. The recommended number is between 3 and 9.
spare
A special pseudo-vdev which keeps track of available hot spares for a pool. For more information, see the "Hot Spares" section.
Virtual devices cannot be nested, so a mirror or raidz virtual device can only contain files or disks. Mirrors of mirrors (or other combinations) are not allowed.
A pool can have any number of virtual devices at the top of the configuration (known as "root vdevs"). Data is dynamically distributed across all top-level devices to balance data among devices. As new virtual devices are added, ZFS automatically places data on the newly available devices.
Virtual devices are specified one at a time on the command line, separated by whitespace. The keywords "mirror" and "raidz" are used to distinguish where a group ends and another begins. For example, the following creates two root vdevs, each a mirror of two disks:
# zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0
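Similarly, the following sketch (device names are illustrative) creates a single double-parity raidz group of five disks:
# zpool create mypool raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0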
Subcommands
All subcommands that modify state are logged persistently to the pool in their original form.
The zpool command provides subcommands to create and destroy storage pools, add capacity to storage pools, and provide information about the storage pools. The following subcommands are supported:
zpool -?
Displays a help message.
zpool create [-fn] [-R root] [-m mountpoint] pool vdev ...
Creates a new storage pool containing the virtual devices specified on the command line. The pool name must begin with a letter, and can only contain alphanumeric characters as well as underscore ("_"), dash ("-"), and period ("."). The pool names "mirror", "raidz", and "spare" are reserved, as are names beginning with the pattern "c[0-9]". The vdev specification is described in the "Virtual Devices" section.
The command verifies that each device specified is accessible and not currently in use by another subsystem. There are some uses, such as being currently mounted, or specified as the dedicated dump device, that prevent a device from ever being used by ZFS. Other uses, such as having a preexisting UFS file system, can be overridden with the -f option.
The command also checks that the replication strategy for the pool is consistent. An attempt to combine redundant and non-redundant storage in a single pool, or to mix disks and files, results in an error unless -f is specified. The use of differently sized devices within a single raidz or mirror group is also flagged as an error unless -f is specified.
Unless the -R option is specified, the default mount point is "/pool". The mount point must not exist or must be empty, or else the root dataset cannot be mounted. This can be overridden with the -m option.
-f
Forces use of vdevs, even if they appear in use or specify a conflicting replication level. Not all devices can be overridden in this manner.
-n
Displays the configuration that would be used without actually creating the pool. The actual pool creation can still fail due to insufficient privileges or device sharing.
-R root
Creates the pool with an alternate root. See the "Alternate Root Pools" section. The root dataset has its mount point set to "/" as part of this operation.
-m mountpoint
Sets the mount point for the root dataset. The default mount point is "/pool". The mount point must be an absolute path, "legacy", or "none". For more information on dataset mount points, see zfs(1M).
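For example, the following commands (pool, device, and path names are illustrative) first display the configuration that would be used, and then create a mirrored pool with an explicit mount point:
# zpool create -n mypool mirror c0t0d0 c0t1d0
# zpool create -m /export/mypool mypool mirror c0t0d0 c0t1d0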
zpool destroy [-f] pool
Destroys the given pool, freeing up any devices for other use. This command tries to unmount any active datasets before destroying the pool.
-f
Forces any active datasets contained within the pool to be unmounted.
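For example, the following (pool name is illustrative) forcibly unmounts any active datasets and destroys the pool:
# zpool destroy -f mypool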
zpool add [-fn] pool vdev ...
Adds the specified virtual devices to the given pool. The vdev specification is described in the "Virtual Devices" section. The behavior of the -f option and the device checks performed are described in the "zpool create" subcommand.
-f
Forces use of vdevs, even if they appear in use or specify a conflicting replication level. Not all devices can be overridden in this manner.
-n
Displays the configuration that would be used without actually adding the vdevs. The actual addition can still fail due to insufficient privileges or device sharing.
Do not add a disk that is currently configured as a quorum device to a zpool. Once a disk is in a zpool, that disk can then be configured as a quorum device.
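For example, the following (device names are illustrative) adds a second mirror as a new top-level vdev:
# zpool add mypool mirror c1t0d0 c1t1d0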
zpool remove pool vdev
Removes the given vdev from the pool. This command currently only supports removing hot spares. Devices which are part of a mirror can be removed using the "zpool detach" command. Neither raidz nor top-level vdevs can be removed from a pool.
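For example, assuming c2t0d0 had been added as a hot spare (names are illustrative), the following removes it:
# zpool remove mypool c2t0d0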
zpool list [-H] [-o field[,field]*] [pool] ...
Lists the given pools along with a health status and space usage. When given no arguments, all pools in the system are listed.
-H
Scripted mode. Do not display headers, and separate fields by a single tab instead of arbitrary space.
-o field
Comma-separated list of fields to display. Each field must be one of:
name         Pool name
size         Total size
used         Amount of space used
available    Amount of space available
capacity     Percentage of pool space used
health       Health status
The default is all fields.
This command reports actual physical space available to the storage pool. The physical space can be different from the total amount of space that any contained datasets can actually use. The amount of space used in a raidz configuration depends on the characteristics of the data being written. In addition, ZFS reserves some space for internal accounting that the zfs(1M) command takes into account, but the zpool command does not. For non-full pools of a reasonable size, these effects should be invisible. For small pools, or pools that are close to being completely full, these discrepancies may become more noticeable.
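For example, the following prints the name, size, and health of each pool in a form suitable for scripts:
# zpool list -H -o name,size,health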
zpool iostat [-v] [pool] ... [interval [count]]
Displays I/O statistics for the given pools. When given an interval, the statistics are printed every interval seconds until Ctrl-C is pressed. If no pools are specified, statistics for every pool in the system are shown. If count is specified, the command exits after count reports are printed.
-v
Verbose statistics. Reports usage statistics for individual vdevs within the pool, in addition to the pool-wide statistics.
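For example, the following (pool name is illustrative) prints pool-wide and per-vdev statistics every 5 seconds until Ctrl-C is pressed:
# zpool iostat -v mypool 5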
zpool status [-xv] [pool] ...
Displays the detailed health status for the given pools. If no pool is specified, then the status of each pool in the system is displayed.
If a scrub or resilver is in progress, this command reports the percentage done and the estimated time to completion. Both of these are only approximate, because the amount of data in the pool and the other workloads on the system can change.
-x
Only display status for pools that are exhibiting errors or are otherwise unavailable.
-v
Displays verbose data error information, printing out a complete list of all data errors since the last complete pool scrub.
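For example, the following displays status only for pools that are exhibiting errors or are otherwise unavailable:
# zpool status -x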
zpool offline [-t] pool device ...
Takes the specified physical device offline. While the device is offline, no attempt is made to read or write to the device.
This command is not applicable to spares.
-t
Temporary. Upon reboot, the specified physical device reverts to its previous state.
zpool online pool device ...
Brings the specified physical device online.
This command is not applicable to spares.
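For example, the following commands (names are illustrative) take a device offline until the next reboot, and later bring it back online:
# zpool offline -t mypool c0t0d0
# zpool online mypool c0t0d0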
zpool clear pool [device] ...
Clears device errors in a pool. If no arguments are specified, all device errors within the pool are cleared. If one or more devices is specified, only those errors associated with the specified device or devices are cleared.
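For example, the following (names are illustrative) clears only those errors associated with a single device:
# zpool clear mypool c0t0d0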
zpool attach [-f] pool device new_device
Attaches new_device to an existing zpool device. The existing device cannot be part of a raidz configuration. If device is not currently part of a mirrored configuration, device automatically transforms into a two-way mirror of device and new_device. If device is part of a two-way mirror, attaching new_device creates a three-way mirror, and so on. In either case, new_device begins to resilver immediately.
-f
Forces use of new_device, even if it appears to be in use. Not all devices can be overridden in this manner.
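For example, the following (names are illustrative) converts the single device c0t0d0 into a two-way mirror by attaching c0t1d0:
# zpool attach mypool c0t0d0 c0t1d0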
zpool detach pool device
Detaches device from a mirror. The operation is refused if there are no other valid replicas of the data.
zpool replace [-f] pool old_device [new_device]
Replaces old_device with new_device. This is equivalent to attaching new_device, waiting for it to resilver, and then detaching old_device.
The size of new_device must be greater than or equal to the minimum size of all the devices in a mirror or raidz configuration.
If new_device is not specified, it defaults to old_device. This form of replacement is useful after an existing disk has failed and has been physically replaced. In this case, the new disk may have the same /dev/dsk path as the old device, even though it is actually a different disk. ZFS recognizes this.
-f
Forces use of new_device, even if it appears to be in use. Not all devices can be overridden in this manner.
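For example, the following (names are illustrative) replaces a failed disk with a new one; omitting new_device would instead replace the disk in place after a physical swap:
# zpool replace mypool c0t0d0 c0t3d0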
zpool scrub [-s] pool ...
Begins a scrub. The scrub examines all data in the specified pools to verify that it checksums correctly. For replicated (mirror or raidz) devices, ZFS automatically repairs any damage discovered during the scrub. The "zpool status" command reports the progress of the scrub and summarizes the results of the scrub upon completion.
Scrubbing and resilvering are very similar operations. The difference is that resilvering only examines data that ZFS knows to be out of date (for example, when attaching a new device to a mirror or replacing an existing device), whereas scrubbing examines all data to discover silent errors due to hardware faults or disk failure.
Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows one at a time. If a scrub is already in progress, the "zpool scrub" command terminates it and starts a new scrub. If a resilver is in progress, ZFS does not allow a scrub to be started until the resilver completes.
-s
Stop scrubbing.
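For example, the following commands (pool name is illustrative) start a scrub and later stop it:
# zpool scrub mypool
# zpool scrub -s mypool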
zpool export [-f] pool ...
Exports the given pools from the system. All devices are marked as exported, but are still considered in use by other subsystems. The devices can be moved between systems (even those of different endianness) and imported as long as a sufficient number of devices are present.
Before exporting the pool, all datasets within the pool are unmounted.
For pools to be portable, you must give the zpool command whole disks, not just slices, so that ZFS can label the disks with portable EFI labels. Otherwise, disk drivers on platforms of different endianness will not recognize the disks.
-f
Forcefully unmount all datasets, using the "unmount -f" command.
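For example, the following (pool name is illustrative) unmounts the pool's datasets and exports it from the system:
# zpool export mypool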
zpool import [-d dir] [-D]
Lists pools available to import. If the -d option is not specified, this command searches for devices in "/dev/dsk". The -d option can be specified multiple times, and all directories are searched. If the device appears to be part of an exported pool, this command displays a summary of the pool with the name of the pool, a numeric identifier, as well as the vdev layout and current health of the device for each device or file. Destroyed pools, pools that were previously destroyed with the "zpool destroy" command, are not listed unless the -D option is specified.
The numeric identifier is unique, and can be used instead of the pool name when multiple exported pools of the same name are available.
-d dir
Searches for devices or files in dir. The -d option can be specified multiple times.
-D
Lists destroyed pools only.
zpool import [-d dir] [-D] [-f] [-o opts] [-R root] pool | id [newpool]
Imports a specific pool. A pool can be identified by its name or the numeric identifier. If newpool is specified, the pool is imported using the name newpool. Otherwise, it is imported with the same name as its exported name.
If a device is removed from a system without running "zpool export" first, the device appears as potentially active. It cannot be determined if this was a failed export, or whether the device is really in use from another host. To import a pool in this state, the -f option is required.
-d dir
Searches for devices or files in dir. The -d option can be specified multiple times.
-D
Imports a destroyed pool. The -f option is also required.
-f
Forces import, even if the pool appears to be potentially active.
-o opts
Comma-separated list of mount options to use when mounting datasets within the pool. See zfs(1M) for a description of dataset properties and mount options.
-R root
Imports pool(s) with an alternate root. See the "Alternate Root Pools" section.
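For example, the following commands (pool names and the numeric identifier are illustrative) import one pool by name, then import another pool by its numeric identifier under a new name:
# zpool import mypool
# zpool import 6789123456789 newpool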
zpool import [-d dir] [-D] [-f] [-a]
Imports all pools found in the search directories. Identical to the previous command, except that all pools with a sufficient number of devices available are imported. Destroyed pools, pools that were previously destroyed with the "zpool destroy" command, will not be imported unless the -D option is specified.
-d dir
Searches for devices or files in dir. The -d option can be specified multiple times.
-D
Imports destroyed pools only. The -f option is also required.
-f
Forces import, even if the pool appears to be potentially active.
zpool upgrade
Displays all pools formatted using a different ZFS on-disk version. Older versions can continue to be used, but some features may not be available. These pools can be upgraded using "zpool upgrade -a". Pools that are formatted with a more recent version are also displayed, although these pools will be inaccessible on the system.
zpool upgrade -v
Displays ZFS versions supported by the current software. The current ZFS versions and all previous supported versions are displayed, along with an explanation of the features provided with each version.
zpool upgrade [-a | pool]
Upgrades the given pool to the latest on-disk version. Once this is done, the pool will no longer be accessible on systems running older versions of the software.
-a
Upgrades all pools.
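For example, the following upgrades all pools on the system to the latest on-disk version:
# zpool upgrade -a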
zpool history [pool] ...
Displays the command history of the specified pools (or all pools if no pool is specified).
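For example, the following (pool name is illustrative) displays the command history for a single pool:
# zpool history mypool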