1 ZPOOL(1M)                    Maintenance Commands                    ZPOOL(1M)
   2 
   3 NAME
   4      zpool - configure ZFS storage pools
   5 
   6 SYNOPSIS
   7      zpool -?
   8      zpool add [-fn] pool vdev...
   9      zpool attach [-f] pool device new_device
  10      zpool clear pool [device]
  11      zpool create [-dfn] [-m mountpoint] [-o property=value]...
  12            [-O file-system-property=value]... [-R root] pool vdev...
  13      zpool destroy [-f] pool
  14      zpool detach pool device
  15      zpool export [-f] pool...
  16      zpool get [-Hp] [-o field[,field]...] all|property[,property]... pool...
  17      zpool history [-il] [pool]...
  18      zpool import [-D] [-c cachefile|-d dir]
  19      zpool import -a [-DfmN] [-F [-n]] [-c cachefile|-d dir] [-o mntopts]
  20            [-o property=value]... [-R root]
  21      zpool import [-Dfm] [-F [-n]] [-c cachefile|-d dir] [-o mntopts]
  22            [-o property=value]... [-R root] pool|id [newpool]
  23      zpool iostat [-v] [-T u|d] [pool]... [interval [count]]
  24      zpool list [-Hpv] [-o property[,property]...] [-T u|d] [pool]...
  25            [interval [count]]
  26      zpool offline [-t] pool device...
  27      zpool online [-e] pool device...
  28      zpool reguid pool
  29      zpool reopen pool
  30      zpool remove pool device...
  31      zpool replace [-f] pool device [new_device]
  32      zpool scrub [-s] pool...
  33      zpool set property=value pool
  34      zpool split [-n] [-o property=value]... [-R root] pool newpool
  35      zpool status [-Dvx] [-T u|d] [pool]... [interval [count]]
  36      zpool upgrade
  37      zpool upgrade -v
  38      zpool upgrade [-V version] -a|pool...
  39 
  40 DESCRIPTION
  41      The zpool command configures ZFS storage pools. A storage pool is a
  42      collection of devices that provides physical storage and data replication
  43      for ZFS datasets. All datasets within a storage pool share the same
  44      space. See zfs(1M) for information on managing datasets.
  45 
  46    Virtual Devices (vdevs)
  47      A "virtual device" describes a single device or a collection of devices
  48      organized according to certain performance and fault characteristics. The
  49      following virtual devices are supported:
  50 
  51      disk    A block device, typically located under /dev/dsk.  ZFS can use
  52              individual slices or partitions, though the recommended mode of
  53              operation is to use whole disks. A disk can be specified by a
  54              full path, or it can be a shorthand name (the relative portion of
  55              the path under /dev/dsk).  A whole disk can be specified by
  56              omitting the slice or partition designation.  For example, c0t0d0
  57              is equivalent to /dev/dsk/c0t0d0s2.  When given a whole disk, ZFS
  58              automatically labels the disk, if necessary.
  59 
  60      file    A regular file. The use of files as a backing store is strongly
  61              discouraged. It is designed primarily for experimental purposes,
  62              as the fault tolerance of a file is only as good as the file
  63              system of which it is a part. A file must be specified by a full
  64              path.
  65 
  66      mirror  A mirror of two or more devices. Data is replicated in an
  67              identical fashion across all components of a mirror. A mirror
  68              with N disks of size X can hold X bytes and can withstand (N-1)
  69              devices failing before data integrity is compromised.
  70 
  71      raidz, raidz1, raidz2, raidz3
  72              A variation on RAID-5 that allows for better distribution of
  73              parity and eliminates the RAID-5 "write hole" (in which data and
  74              parity become inconsistent after a power loss).  Data and parity
  75              are striped across all disks within a raidz group.
  76 
  77              A raidz group can have single-, double-, or triple-parity,
  78              meaning that the raidz group can sustain one, two, or three
  79              failures, respectively, without losing any data. The raidz1 vdev
  80              type specifies a single-parity raidz group; the raidz2 vdev type
  81              specifies a double-parity raidz group; and the raidz3 vdev type
  82              specifies a triple-parity raidz group. The raidz vdev type is an
  83              alias for raidz1.
  84 
  85              A raidz group with N disks of size X with P parity disks can hold
  86              approximately (N-P)*X bytes and can withstand P device(s) failing
  87              before data integrity is compromised. The minimum number of
  88              devices in a raidz group is one more than the number of parity
  89              disks. The recommended number is between 3 and 9 to help increase
  90              performance.
  91 
  92      spare   A special pseudo-vdev which keeps track of available hot spares
  93              for a pool. For more information, see the Hot Spares section.
  94 
  95      log     A separate intent log device. If more than one log device is
  96              specified, then writes are load-balanced between devices. Log
  97              devices can be mirrored. However, raidz vdev types are not
  98              supported for the intent log. For more information, see the
  99              Intent Log section.
 100 
 101      cache   A device used to cache storage pool data. A cache device cannot
 102              be configured as a mirror or raidz group. For more information,
 103              see the Cache Devices section.
 104 
 105      Virtual devices cannot be nested, so a mirror or raidz virtual device can
 106      only contain files or disks. Mirrors of mirrors (or other combinations)
 107      are not allowed.
 108 
 109      A pool can have any number of virtual devices at the top of the
 110      configuration (known as "root vdevs").  Data is dynamically distributed
 111      across all top-level devices to balance data among devices. As new
 112      virtual devices are added, ZFS automatically places data on the newly
 113      available devices.
 114 
 115      Virtual devices are specified one at a time on the command line,
 116      separated by whitespace. The keywords mirror and raidz are used to
 117      distinguish where a group ends and another begins. For example, the
 118      following creates two root vdevs, each a mirror of two disks:
 119 
 120      # zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0
 121 
 122    Device Failure and Recovery
 123      ZFS supports a rich set of mechanisms for handling device failure and
 124      data corruption. All metadata and data is checksummed, and ZFS
 125      automatically repairs bad data from a good copy when corruption is
 126      detected.
 127 
 128      In order to take advantage of these features, a pool must make use of
 129      some form of redundancy, using either mirrored or raidz groups. While ZFS
 130      supports running in a non-redundant configuration, where each root vdev
 131      is simply a disk or file, this is strongly discouraged. A single case of
 132      bit corruption can render some or all of your data unavailable.
 133 
 134      A pool's health status is described by one of three states: online,
 135      degraded, or faulted. An online pool has all devices operating normally.
 136      A degraded pool is one in which one or more devices have failed, but the
 137      data is still available due to a redundant configuration. A faulted pool
 138      has corrupted metadata, or one or more faulted devices, and insufficient
 139      replicas to continue functioning.
 140 
 141      The health of a top-level vdev, such as a mirror or raidz device, is
 142      potentially impacted by the state of its associated vdevs, or component
 143      devices. A top-level vdev or component device is in one of the following
 144      states:
 145 
 146      DEGRADED  One or more top-level vdevs is in the degraded state because
 147                one or more component devices are offline. Sufficient replicas
 148                exist to continue functioning.
 149 
 150                One or more component devices is in the degraded or faulted
 151                state, but sufficient replicas exist to continue functioning.
 152                The underlying conditions are as follows:
 153 
 154                o   The number of checksum errors exceeds acceptable levels and
 155                    the device is degraded as an indication that something may
 156                    be wrong. ZFS continues to use the device as necessary.
 157 
 158                o   The number of I/O errors exceeds acceptable levels. The
 159                    device could not be marked as faulted because there are
 160                    insufficient replicas to continue functioning.
 161 
 162      FAULTED   One or more top-level vdevs is in the faulted state because one
 163                or more component devices are offline. Insufficient replicas
 164                exist to continue functioning.
 165 
 166                One or more component devices is in the faulted state, and
 167                insufficient replicas exist to continue functioning. The
 168                underlying conditions are as follows:
 169 
 170                o   The device could be opened, but the contents did not match
 171                    expected values.
 172 
 173                o   The number of I/O errors exceeds acceptable levels and the
 174                    device is faulted to prevent further use of the device.
 175 
 176      OFFLINE   The device was explicitly taken offline by the zpool offline
 177                command.
 178 
 179      ONLINE    The device is online and functioning.
 180 
 181      REMOVED   The device was physically removed while the system was running.
 182                Device removal detection is hardware-dependent and may not be
 183                supported on all platforms.
 184 
 185      UNAVAIL   The device could not be opened. If a pool is imported when a
 186                device was unavailable, then the device will be identified by a
 187                unique identifier instead of its path since the path was never
 188                correct in the first place.
 189 
 190      If a device is removed and later re-attached to the system, ZFS attempts
 191      to put the device online automatically. Device attach detection is
 192      hardware-dependent and might not be supported on all platforms.
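
          The health of a pool and its devices can be checked at any time
          with the zpool status subcommand, described below. For example, the
          following command reports only pools that are exhibiting errors or
          are otherwise unavailable:

          # zpool status -x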
 193 
 194    Hot Spares
 195      ZFS allows devices to be associated with pools as "hot spares".  These
 196      devices are not actively used in the pool, but when an active device
 197      fails, it is automatically replaced by a hot spare. To create a pool with
 198      hot spares, specify a spare vdev with any number of devices. For example,
 199 
 200      # zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0
 201 
 202      Spares can be shared across multiple pools, and can be added with the
 203      zpool add command and removed with the zpool remove command. Once a spare
 204      replacement is initiated, a new spare vdev is created within the
 205      configuration that will remain there until the original device is
 206      replaced. At this point, the hot spare becomes available again if another
 207      device fails.
 208 
 209      If a pool has a shared spare that is currently being used, the pool
 210      cannot be exported since other pools may use this shared spare, which
 211      may lead to potential data corruption.
 212 
 213      An in-progress spare replacement can be cancelled by detaching the hot
 214      spare.  If the original faulted device is detached, then the hot spare
 215      assumes its place in the configuration, and is removed from the spare
 216      list of all active pools.
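
          For example, if the illustrative spare c2d0 from the pool created
          above is in use, the following command cancels the replacement and
          returns the spare to the available list:

          # zpool detach pool c2d0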
 217 
 218      Spares cannot replace log devices.
 219 
 220    Intent Log
 221      The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
 222      transactions. For instance, databases often require their transactions to
 223      be on stable storage devices when returning from a system call. NFS and
 224      other applications can also use fsync(3C) to ensure data stability. By
 225      default, the intent log is allocated from blocks within the main pool.
 226      However, it might be possible to get better performance using separate
 227      intent log devices such as NVRAM or a dedicated disk. For example:
 228 
 229      # zpool create pool c0d0 c1d0 log c2d0
 230 
 231      Multiple log devices can also be specified, and they can be mirrored. See
 232      the EXAMPLES section for an example of mirroring multiple log devices.
 233 
 234      Log devices can be added, replaced, attached, detached, imported, and
 235      exported as part of the larger pool. Mirrored log devices can be removed
 236      by specifying the top-level mirror for the log.
 237 
 238    Cache Devices
 239      Devices can be added to a storage pool as "cache devices".  These devices
 240      provide an additional layer of caching between main memory and disk. For
 241      read-heavy workloads, where the working set size is much larger than what
 242      can be cached in main memory, using cache devices allows much more of
 243      this working set to be served from low-latency media. Using cache
 244      devices provides the greatest performance improvement for random read
 245      workloads of mostly static content.
 246 
 247      To create a pool with cache devices, specify a cache vdev with any number
 248      of devices. For example:
 249 
 250      # zpool create pool c0d0 c1d0 cache c2d0 c3d0
 251 
 252      Cache devices cannot be mirrored or part of a raidz configuration. If a
 253      read error is encountered on a cache device, that read I/O is reissued to
 254      the original storage pool device, which might be part of a mirrored or
 255      raidz configuration.
 256 
 257      The content of the cache devices is considered volatile, as is the case
 258      with other system caches.
 259 
 260    Properties
 261      Each pool has several properties associated with it. Some properties are
 262      read-only statistics while others are configurable and change the
 263      behavior of the pool.
 264 
 265      The following are read-only properties:
 266 
 267      available
 268              Amount of storage available within the pool. This property can
 269              also be referred to by its shortened column name, avail.
 270 
 271      capacity
 272              Percentage of pool space used. This property can also be referred
 273              to by its shortened column name, cap.
 274 
 275      expandsize
 276              Amount of uninitialized space within the pool or device that can
 277              be used to increase the total capacity of the pool.
 278              Uninitialized space consists of any space on an EFI labeled vdev
 279              which has not been brought online (e.g., using zpool online -e).
 280              This space occurs when a LUN is dynamically expanded.
 281 
 282      fragmentation
 283              The amount of fragmentation in the pool.
 284 
 285      free    The amount of free space available in the pool.
 286 
 287      freeing
 288              After a file system or snapshot is destroyed, the space it was
 289              using is returned to the pool asynchronously.  freeing is the
 290              amount of space remaining to be reclaimed. Over time freeing will
 291              decrease while free increases.
 292 
 293      guid    A unique identifier for the pool.
 294
 295      health  The current health of the pool. Health can be one of ONLINE,
 296              DEGRADED, FAULTED, OFFLINE, REMOVED, or UNAVAIL.
 297 
 298      size    Total size of the storage pool.
 299 
 300      unsupported@feature_guid
 301              Information about unsupported features that are enabled on the
 302              pool. See zpool-features(5) for details.
 303 
 304      used    Amount of storage space used within the pool.
 305 
 306      The space usage properties report actual physical space available to the
 307      storage pool. The physical space can be different from the total amount
 308      of space that any contained datasets can actually use. The amount of
 309      space used in a raidz configuration depends on the characteristics of the
 310      data being written. In addition, ZFS reserves some space for internal
 311      accounting that the zfs(1M) command takes into account, but the zpool
 312      command does not. For non-full pools of a reasonable size, these effects
 313      should be invisible. For small pools, or pools that are close to being
 314      completely full, these discrepancies may become more noticeable.
 315 
 316      The following property can be set at creation time and import time:
 317 
 318      altroot
 319              Alternate root directory. If set, this directory is prepended to
 320              any mount points within the pool. This can be used when examining
 321              an unknown pool where the mount points cannot be trusted, or in
 322              an alternate boot environment, where the typical paths are not
 323              valid.  altroot is not a persistent property. It is valid only
 324              while the system is up. Setting altroot defaults to using
 325              cachefile=none, though this may be overridden using an explicit
 326              setting.
 327 
 328      The following property can be set only at import time:
 329 
 330      readonly=on|off
 331              If set to on, the pool will be imported in read-only mode. This
 332              property can also be referred to by its shortened column name,
 333              rdonly.
 334 
 335      The following properties can be set at creation time and import time,
 336      and later changed with the zpool set command, as shown after the list:
 337 
 338      autoexpand=on|off
 339              Controls automatic pool expansion when the underlying LUN is
 340              grown. If set to on, the pool will be resized according to the
 341              size of the expanded device. If the device is part of a mirror or
 342              raidz then all devices within that mirror/raidz group must be
 343              expanded before the new space is made available to the pool. The
 344              default behavior is off.  This property can also be referred to
 345              by its shortened column name, expand.
 346 
 347      autoreplace=on|off
 348              Controls automatic device replacement. If set to off, device
 349              replacement must be initiated by the administrator by using the
 350              zpool replace command. If set to on, any new device, found in the
 351              same physical location as a device that previously belonged to
 352              the pool, is automatically formatted and replaced. The default
 353              behavior is off.  This property can also be referred to by its
 354              shortened column name, replace.
 355 
 356      bootfs=pool/dataset
 357              Identifies the default bootable dataset for the root pool. This
 358              property is expected to be set mainly by the installation and
 359              upgrade programs.
 360 
 361      cachefile=path|none
 362              Controls where the pool configuration is cached.
 363              Discovering all pools on system startup requires a cached copy of
 364              the configuration data that is stored on the root file system.
 365              All pools in this cache are automatically imported when the
 366              system boots. Some environments, such as install and clustering,
 367              need to cache this information in a different location so that
 368              pools are not automatically imported. Setting this property
 369              caches the pool configuration in a different location that can
 370              later be imported with zpool import -c.  Setting it to the
 371              special value none creates a temporary pool that is never cached,
 372              and the special value "" (empty string) uses the default
 373              location.
 374 
 375              Multiple pools can share the same cache file. Because the kernel
 376              destroys and recreates this file when pools are added and
 377              removed, care should be taken when attempting to access this
 378              file. When the last pool using a cachefile is exported or
 379              destroyed, the file is removed.
 380 
 381      comment=text
 382              A text string consisting of printable ASCII characters that will
 383              be stored such that it is available even if the pool becomes
 384              faulted.  An administrator can provide additional information
 385              about a pool using this property.
 386 
 387      dedupditto=number
 388              Threshold for the number of block ditto copies. If the reference
 389              count for a deduplicated block increases above this number, a new
 390              ditto copy of this block is automatically stored. The default
 391              setting is 0 which causes no ditto copies to be created for
 392              deduplicated blocks. The minimum legal nonzero setting is 100.
 393 
 394      delegation=on|off
 395              Controls whether a non-privileged user is granted access based on
 396              the dataset permissions defined on the dataset. See zfs(1M) for
 397              more information on ZFS delegated administration.
 398 
 399      failmode=wait|continue|panic
 400              Controls the system behavior in the event of catastrophic pool
 401              failure. This condition is typically a result of a loss of
 402              connectivity to the underlying storage device(s) or a failure of
 403              all devices within the pool. The behavior of such an event is
 404              determined as follows:
 405 
 406              wait      Blocks all I/O access until the device connectivity is
 407                        recovered and the errors are cleared. This is the
 408                        default behavior.
 409 
 410              continue  Returns EIO to any new write I/O requests but allows
 411                        reads to any of the remaining healthy devices. Any
 412                        write requests that have yet to be committed to disk
 413                        would be blocked.
 414 
 415              panic     Prints out a message to the console and generates a
 416                        system crash dump.
 417 
 418      feature@feature_name=enabled
 419              The value of this property is the current state of feature_name.
 420              The only valid value when setting this property is enabled which
 421              moves feature_name to the enabled state. See zpool-features(5)
 422              for details on feature states.
 423 
 424      listsnaps=on|off
 425              Controls whether information about snapshots associated with this
 426              pool is output when zfs list is run without the -t option. The
 427              default value is off.
 428 
 429      version=version
 430              The current on-disk version of the pool. This can be increased,
 431              but never decreased. The preferred method of updating pools is
 432              with the zpool upgrade command, though this property can be used
 433              when a specific version is needed for backwards compatibility.
 434              Once feature flags are enabled on a pool, this property will
 435              no longer have a value.
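
          For example, the following commands set an illustrative property at
          creation time, set another at import time, and change a third on an
          existing pool (the pool, device, and property choices are
          placeholders):

          # zpool create -o autoexpand=on tank mirror c0t0d0 c0t1d0
          # zpool import -o readonly=on tank
          # zpool set autoreplace=on tank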
 436 
 437    Subcommands
 438      All subcommands that modify state are logged persistently to the pool in
 439      their original form.
 440 
 441      The zpool command provides subcommands to create and destroy storage
 442      pools, add capacity to storage pools, and provide information about the
 443      storage pools. The following subcommands are supported:
 444 
 445      zpool -?
 446              Displays a help message.
 447 
 448      zpool add [-fn] pool vdev...
 449              Adds the specified virtual devices to the given pool. The vdev
 450              specification is described in the Virtual Devices section. The
 451              behavior of the -f option and the device checks performed are
 452              described in the zpool create subcommand.
 453 
 454              -f      Forces use of vdevs, even if they appear in use or
 455                      specify a conflicting replication level. Not all devices
 456                      can be overridden in this manner.
 457 
 458              -n      Displays the configuration that would be used without
 459                      actually adding the vdevs.  The actual addition can
 460                      still fail due to insufficient privileges or device
 461                      sharing.
 462 
 463      zpool attach [-f] pool device new_device
 464              Attaches new_device to the existing device.  The existing device
 465              cannot be part of a raidz configuration. If device is not
 466              currently part of a mirrored configuration, device automatically
 467              transforms into a two-way mirror of device and new_device.  If
 468              device is part of a two-way mirror, attaching new_device creates
 469              a three-way mirror, and so on. In either case, new_device begins
 470              to resilver immediately.
 471 
 472              -f      Forces use of new_device, even if it appears to be in
 473                      use. Not all devices can be overridden in this manner.
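
                  For example, the following command converts the
                  illustrative device c0t0d0 in the pool tank into a two-way
                  mirror:

                  # zpool attach tank c0t0d0 c0t1d0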
 474 
 475      zpool clear pool [device]
 476              Clears device errors in a pool. If no arguments are specified,
 477              all device errors within the pool are cleared. If one or more
 478              devices are specified, only those errors associated with the
 479              specified device or devices are cleared.
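
                  For example, the following command clears the errors
                  recorded for an illustrative device in the pool tank:

                  # zpool clear tank c0t0d0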
 480 
 481      zpool create [-dfn] [-m mountpoint] [-o property=value]... [-O
 482              file-system-property=value]... [-R root] pool vdev...
 483              Creates a new storage pool containing the virtual devices
 484              specified on the command line. The pool name must begin with a
 485              letter, and can only contain alphanumeric characters as well as
 486              underscore ("_"), dash ("-"), and period (".").  The pool names
 487              mirror, raidz, spare, and log are reserved, as are names beginning
 488              with the pattern c[0-9].  The vdev specification is described in
 489              the Virtual Devices section.
 490 
 491              The command verifies that each device specified is accessible and
 492              not currently in use by another subsystem. There are some uses,
 493              such as being currently mounted or specified as the dedicated
 494              dump device, that prevent a device from ever being used by ZFS.
 495              Other uses, such as having a preexisting UFS file system, can be
 496              overridden with the -f option.
 497 
 498              The command also checks that the replication strategy for the
 499              pool is consistent. An attempt to combine redundant and non-
 500              redundant storage in a single pool, or to mix disks and files,
 501              results in an error unless -f is specified. The use of
 502              differently sized devices within a single raidz or mirror group
 503              is also flagged as an error unless -f is specified.
 504 
 505              Unless the -R option is specified, the default mount point is
 506              /pool.  The mount point must not exist or must be empty, or else
 507              the root dataset cannot be mounted. This can be overridden with
 508              the -m option.
 509 
 510              By default all supported features are enabled on the new pool
 511              unless the -d option is specified.
 512 
 513              -d      Do not enable any features on the new pool. Individual
 514                      features can be enabled by setting their corresponding
 515                      properties to enabled with the -o option. See
 516                      zpool-features(5) for details about feature properties.
 517 
 518              -f      Forces use of vdevs, even if they appear in use or
 519                      specify a conflicting replication level. Not all devices
 520                      can be overridden in this manner.
 521 
 522              -m mountpoint
 523                      Sets the mount point for the root dataset. The default
 524                      mount point is /pool or altroot/pool if altroot is
 525                      specified. The mount point must be an absolute path,
 526                      legacy, or none.  For more information on dataset mount
 527                      points, see zfs(1M).
 528 
 529              -n      Displays the configuration that would be used without
 530                      actually creating the pool. The actual pool creation can
 531                      still fail due to insufficient privileges or device
 532                      sharing.
 533 
 534              -o property=value
 535                      Sets the given pool properties. See the Properties
 536                      section for a list of valid properties that can be set.
 537 
 538              -O file-system-property=value
 539                      Sets the given file system properties in the root file
 540                      system of the pool. See the Properties section of zfs(1M)
 541                      for a list of valid properties that can be set.
 542 
 543              -R root
 544                      Equivalent to -o cachefile=none -o altroot=root
 545 
 546      zpool destroy [-f] pool
 547              Destroys the given pool, freeing up any devices for other use.
 548              This command tries to unmount any active datasets before
 549              destroying the pool.
 550 
 551              -f      Forces any active datasets contained within the pool to
 552                      be unmounted.
 553 
 554      zpool detach pool device
 555              Detaches device from a mirror. The operation is refused if there
 556              are no other valid replicas of the data.
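
                  For example, the following command detaches the
                  illustrative device c0t1d0 from its mirror in the pool
                  tank:

                  # zpool detach tank c0t1d0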
 557 
 558      zpool export [-f] pool...
 559              Exports the given pools from the system. All devices are marked
 560              as exported, but are still considered in use by other subsystems.
 561              The devices can be moved between systems (even those of different
 562              endianness) and imported as long as a sufficient number of
 563              devices are present.
 564 
 565              Before exporting the pool, all datasets within the pool are
 566              unmounted. A pool cannot be exported if it has a shared spare
 567              that is currently being used.
 568 
 569              For pools to be portable, you must give the zpool command whole
 570              disks, not just slices, so that ZFS can label the disks with
 571              portable EFI labels. Otherwise, disk drivers on platforms of
 572              different endianness will not recognize the disks.
 573 
 574              -f      Forcefully unmount all datasets, using the unmount -f
 575                      command.
 576 
 577                      This command will forcefully export the pool even if it
 578                      has a shared spare that is currently being used. This may
 579                      lead to potential data corruption.
 580 
 581      zpool get [-Hp] [-o field[,field]...] all|property[,property]... pool...
 582              Retrieves the given list of properties (or all properties if all
 583              is used) for the specified storage pool(s). These properties are
 584              displayed with the following fields:
 585 
 586                      name          Name of storage pool
 587                      property      Property name
 588                      value         Property value
 589                      source        Property source, either 'default' or 'local'.
 590 
 591              See the Properties section for more information on the available
 592              pool properties.
 593 
 594              -H      Scripted mode. Do not display headers, and separate
 595                      fields by a single tab instead of arbitrary space.
 596 
 597              -o field
 598                      A comma-separated list of columns to display.
 599                      name,property,value,source is the default value.
 600 
 601              -p      Display numbers in parsable (exact) values.
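
                  For example, the following command retrieves two properties
                  from an illustrative pool tank:

                  # zpool get health,capacity tank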
 602 
 603      zpool history [-il] [pool]...
 604              Displays the command history of the specified pool(s) or all
 605              pools if no pool is specified.
 606 
 607              -i      Displays internally logged ZFS events in addition to user
 608                      initiated events.
 609 
 610              -l      Displays log records in long format. In addition to the
 611                      standard format, this includes the user name, hostname,
 612                      and zone in which the operation was performed.
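
                  For example, the following command displays the long-format
                  history, including internal events, for an illustrative
                  pool tank:

                  # zpool history -il tank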
 613 
 614      zpool import [-D] [-c cachefile|-d dir]
 615              Lists pools available to import. If the -d option is not
 616              specified, this command searches for devices in /dev/dsk.  The -d
 617              option can be specified multiple times, and all directories are
 618              searched. If a device appears to be part of an exported pool,
 619              this command displays a summary of the pool, including the
 620              name of the pool, a numeric identifier, and the vdev layout
 621              and current health of each device or file. Destroyed pools,
 622              that is, pools that were previously destroyed with the zpool
 623              destroy command, are not listed unless the -D option is
 624              specified.
 625 
 626              The numeric identifier is unique, and can be used instead of the
 627              pool name when multiple exported pools of the same name are
 628              available.
 629 
 630              -c cachefile
 631                      Reads configuration from the given cachefile that was
 632                      created with the cachefile pool property. This cachefile
 633                      is used instead of searching for devices.
 634 
 635              -d dir  Searches for devices or files in dir.  The -d option can
 636                      be specified multiple times.
 637 
 638              -D      Lists destroyed pools only.
 639 
 640      zpool import -a [-DfmN] [-F [-n]] [-c cachefile|-d dir] [-o mntopts] [-o
 641              property=value]... [-R root]
 642              Imports all pools found in the search directories. Identical to
 643              the previous command, except that all pools with a sufficient
 644              number of devices available are imported. Destroyed pools,
 645              that is, pools previously destroyed with the zpool destroy
 646              command, will not be imported unless the -D option is specified.
 647 
 648              -a      Searches for and imports all pools found.
 649 
 650              -c cachefile
 651                      Reads configuration from the given cachefile that was
 652                      created with the cachefile pool property. This cachefile
 653                      is used instead of searching for devices.
 654 
 655              -d dir  Searches for devices or files in dir.  The -d option can
 656                      be specified multiple times. This option is incompatible
 657                      with the -c option.
 658 
 659              -D      Imports destroyed pools only. The -f option is also
 660                      required.
 661 
 662              -f      Forces import, even if the pool appears to be potentially
 663                      active.
 664 
 665              -F      Recovery mode for a non-importable pool. Attempt to
 666                      return the pool to an importable state by discarding the
 667                      last few transactions. Not all damaged pools can be
 668                      recovered by using this option. If successful, the data
 669                      from the discarded transactions is irretrievably lost.
 670                      This option is ignored if the pool is importable or
 671                      already imported.
 672 
 673              -m      Allows a pool to be imported when there is a missing
 674                      log device. Recent transactions can be lost because
 675                      the log device will be discarded.
 676 
 677              -n      Used with the -F recovery option. Determines whether a
 678                      non-importable pool can be made importable again, but
 679                      does not actually perform the pool recovery. For more
 680                      details about pool recovery mode, see the -F option,
 681                      above.
 682 
 683              -N      Imports the pool without mounting any file systems.
 684 
 685              -o mntopts
 686                      Comma-separated list of mount options to use when
 687                      mounting datasets within the pool. See zfs(1M) for a
 688                      description of dataset properties and mount options.
 689 
 690              -o property=value
 691                      Sets the specified property on the imported pool. See the
 692                      Properties section for more information on the available
 693                      pool properties.
 694 
 695              -R root
 696                      Sets the cachefile property to none and the altroot
 697                      property to root.
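
                  For example, the following command imports all pools whose
                  devices are found in an illustrative alternate directory:

                  # zpool import -a -d /mypools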
 698 
 699      zpool import [-Dfm] [-F [-n]] [-c cachefile|-d dir] [-o mntopts] [-o
 700              property=value]... [-R root] pool|id [newpool]
 701              Imports a specific pool. A pool can be identified by its name or
 702              the numeric identifier. If newpool is specified, the pool is
 703              imported using the name newpool.  Otherwise, it is imported with
 704              the same name as its exported name.
 705 
 706              If a device is removed from a system without running zpool export
 707              first, the device appears as potentially active. It cannot be
 708              determined if this was a failed export, or whether the device is
 709              really in use from another host. To import a pool in this state,
 710              the -f option is required.
 711 
 712              -c cachefile
 713                      Reads configuration from the given cachefile that was
 714                      created with the cachefile pool property. This cachefile
 715                      is used instead of searching for devices.
 716 
 717              -d dir  Searches for devices or files in dir.  The -d option can
 718                      be specified multiple times. This option is incompatible
 719                      with the -c option.
 720 
 721              -D      Imports a destroyed pool. The -f option is also required.
 722 
 723              -f      Forces import, even if the pool appears to be potentially
 724                      active.
 725 
 726              -F      Recovery mode for a non-importable pool. Attempt to
 727                      return the pool to an importable state by discarding the
 728                      last few transactions. Not all damaged pools can be
 729                      recovered by using this option. If successful, the data
 730                      from the discarded transactions is irretrievably lost.
 731                      This option is ignored if the pool is importable or
 732                      already imported. See the example below.
 733 
 734              -m      Allows a pool to be imported when there is a missing
 735                      log device. Recent transactions can be lost because
 736                      the log device will be discarded.
 737 
 738              -n      Used with the -F recovery option. Determines whether a
 739                      non-importable pool can be made importable again, but
 740                      does not actually perform the pool recovery. For more
 741                      details about pool recovery mode, see the -F option,
 742                      above.
 743 
 744              -o mntopts
 745                      Comma-separated list of mount options to use when
 746                      mounting datasets within the pool. See zfs(1M) for a
 747                      description of dataset properties and mount options.
 748 
 749              -o property=value
 750                      Sets the specified property on the imported pool. See the
 751                      Properties section for more information on the available
 752                      pool properties.
 753 
 754              -R root
 755                      Sets the cachefile property to none and the altroot
 756                      property to root.
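
                  For example, the following commands first check whether an
                  illustrative damaged pool tank can be recovered, then
                  perform the recovery:

                  # zpool import -F -n tank
                  # zpool import -F tank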
 757 
 758      zpool iostat [-v] [-T u|d] [pool]... [interval [count]]
 759              Displays I/O statistics for the given pools. When given an
 760              interval, the statistics are printed every interval seconds until
 761              ^C is pressed. If no pools are specified, statistics for every
 762              pool in the system are shown. If count is specified, the command
 763              exits after count reports are printed.
 764 
 765              -T u|d  Display a time stamp. Specify u for a printed
 766                      representation of the internal representation of time.
 767                      See time(2).  Specify d for standard date format. See
 768                      date(1).
 769 
 770              -v      Verbose statistics. Reports usage statistics for
 771                      individual vdevs within the pool, in addition to the
 772                      pool-wide statistics.
 773 
 774      zpool list [-Hpv] [-o property[,property]...] [-T u|d] [pool]...
 775              [interval [count]]
 776              Lists the given pools along with a health status and space usage.
 777              If no pools are specified, all pools in the system are listed.
 778              When given an interval, the information is printed every interval
 779              seconds until ^C is pressed. If count is specified, the command
 780              exits after count reports are printed.
 781 
 782              -H      Scripted mode. Do not display headers, and separate
 783                      fields by a single tab instead of arbitrary space.
 784 
 785              -o property
 786                      Comma-separated list of properties to display. See the
 787                      Properties section for a list of valid properties. The
 788                      default list is name, size, used, available,
 789                      fragmentation, expandsize, capacity, dedupratio, health,
 790                      altroot.
 791 
 792              -p      Display numbers in parsable (exact) values.
 793 
 794              -T u|d  Display a time stamp. Specify u for a printed
 795                      representation of the internal representation of time.
 796                      See time(2).  Specify d for standard date format. See
 797                      date(1).
 798 
 799              -v      Verbose statistics. Reports usage statistics for
 800                      individual vdevs within the pool, in addition to the
 801                      pool-wide statistics.
 802 
 803      zpool offline [-t] pool device...
 804              Takes the specified physical device offline. While the device is
 805              offline, no attempt is made to read or write to the device. This
 806              command is not applicable to spares.
 807 
 808              -t      Temporary. Upon reboot, the specified physical device
 809                      reverts to its previous state.
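
                  For example, the following command temporarily takes an
                  illustrative device in the pool tank offline:

                  # zpool offline -t tank c0t0d0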
 810 
 811      zpool online [-e] pool device...
 812              Brings the specified physical device online. This command is not
 813              applicable to spares.
 814 
 815              -e      Expand the device to use all available space. If the
 816                      device is part of a mirror or raidz then all devices must
 817                      be expanded before the new space will become available to
 818                      the pool.
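
                  For example, the following command brings an illustrative
                  device back online and expands it to use all available
                  space:

                  # zpool online -e tank c0t0d0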
 819 
 820      zpool reguid pool
 821              Generates a new unique identifier for the pool. You must ensure
 822              that all devices in this pool are online and healthy before
 823              performing this action.
 824 
 825      zpool reopen pool
 826              Reopens all the vdevs associated with the pool.
 827 
 828      zpool remove pool device...
 829              Removes the specified device from the pool. This command
 830              currently only supports removing hot spares, cache, and log
 831              devices. A mirrored log device can be removed by specifying the
 832              top-level mirror for the log. Non-log devices that are part of a
 833              mirrored configuration can be removed using the zpool detach
 834              command. Non-redundant and raidz devices cannot be removed from a
 835              pool.
 836 
 837      zpool replace [-f] pool device [new_device]
 838              Replaces device with new_device.  This is equivalent to
 839              attaching new_device, waiting for it to resilver, and then
 840              detaching device.
 841 
 842              The size of new_device must be greater than or equal to the
 843              minimum size of all the devices in a mirror or raidz
 844              configuration.
 845 
 846              new_device is required if the pool is not redundant. If
 847              new_device is not specified, it defaults to device.  This
 848              form of replacement is useful after an existing disk has failed
 849              and has been physically replaced. In this case, the new disk may
 850              have the same /dev/dsk path as the old device, even though it is
 851              actually a different disk. ZFS recognizes this.
 852 
 853              -f      Forces use of new_device, even if it appears to be in
 854                      use. Not all devices can be overridden in this manner.
 855 
 856      zpool scrub [-s] pool...
 857              Begins a scrub. The scrub examines all data in the specified
 858              pools to verify that it checksums correctly. For replicated
 859              (mirror or raidz) devices, ZFS automatically repairs any damage
 860              discovered during the scrub. The zpool status command reports the
 861              progress of the scrub and summarizes the results of the scrub
 862              upon completion.
 863 
 864              Scrubbing and resilvering are very similar operations. The
 865              difference is that resilvering only examines data that ZFS knows
 866              to be out of date (for example, when attaching a new device to a
 867              mirror or replacing an existing device), whereas scrubbing
 868              examines all data to discover silent errors due to hardware
 869              faults or disk failure.
 870 
 871              Because scrubbing and resilvering are I/O-intensive operations,
 872              ZFS only allows one at a time. If a scrub is already in progress,
 873              the zpool scrub command terminates it and starts a new scrub. If
 874              a resilver is in progress, ZFS does not allow a scrub to be
 875              started until the resilver completes.
 876 
 877              -s      Stop scrubbing.
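
                  For example, the following commands start a scrub of an
                  illustrative pool tank and later stop it:

                  # zpool scrub tank
                  # zpool scrub -s tank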
 878 
 879      zpool set property=value pool
 880              Sets the given property on the specified pool. See the Properties
 881              section for more information on what properties can be set and
 882              acceptable values.
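
                  For example, the following command sets the failmode
                  property, described above, on an illustrative pool tank:

                  # zpool set failmode=continue tank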
 883 
 884      zpool split [-n] [-o property=value]... [-R root] pool newpool
 885              Splits devices off pool, creating newpool.  All vdevs in pool must
 886              be mirrors. At the time of the split, newpool will be a replica
 887              of pool.
 888 
 889              -n      Do a dry run; do not actually perform the split. Print out
 890                      the expected configuration of newpool.
 891 
 892              -o property=value
 893                      Sets the specified property for newpool.  See the
 894                      Properties section for more information on the available
 895                      pool properties.
 896 
 897              -R root
 898                      Set altroot for newpool to root and automatically import
 899                      it.
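
                  For example, the following command splits one device from
                  each mirror in the pool tank into a new pool (the pool
                  names are illustrative):

                  # zpool split tank tank2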
 900 
 901      zpool status [-Dvx] [-T u|d] [pool]... [interval [count]]
 902              Displays the detailed health status for the given pools. If no
 903              pool is specified, then the status of each pool in the system is
 904              displayed. For more information on pool and device health, see
 905              the Device Failure and Recovery section.
 906 
 907              If a scrub or resilver is in progress, this command reports the
 908              percentage done and the estimated time to completion. Both of
 909              these are only approximate, because the amount of data in the
 910              pool and the other workloads on the system can change.
 911 
 912              -D      Display a histogram of deduplication statistics, showing
 913                      the allocated (physically present on disk) and referenced
 914                      (logically referenced in the pool) block counts and sizes
 915                      by reference count.
 916 
 917              -T u|d  Display a time stamp. Specify u for a printed
 918                      representation of the internal representation of time.
 919                      See time(2).  Specify d for standard date format. See
 920                      date(1).
 921 
 922              -v      Displays verbose data error information, printing out a
 923                      complete list of all data errors since the last complete
 924                      pool scrub.
 925 
 926              -x      Only display status for pools that are exhibiting errors
 927                      or are otherwise unavailable. Warnings about pools not
 928                      using the latest on-disk format will not be included.
 929 
 930      zpool upgrade
 931              Displays pools which do not have all supported features enabled
 932              and pools formatted using a legacy ZFS version number. These
 933              pools can continue to be used, but some features may not be
 934              available. Use zpool upgrade -a to enable all features on all
 935              pools.
 936 
 937      zpool upgrade -v
 938              Displays legacy ZFS versions supported by the current software.
 939              See zpool-features(5) for a description of feature flags features
 940              supported by the current software.
 941 
 942      zpool upgrade [-V version] -a|pool...
 943              Enables all supported features on the given pool. Once this is
 944              done, the pool will no longer be accessible on systems that do
 945              not support feature flags. See zpool-features(5) for details on
 946              compatibility with systems that support feature flags, but do not
 947              support all features enabled on the pool.
 948 
 949              -a      Enables all supported features on all pools.
 950 
 951              -V version
 952                      Upgrade to the specified legacy version. If the -V flag
 953                      is specified, no features will be enabled on the pool.
 954                      This option can only be used to increase the version
 955                      number up to the last supported legacy version number.
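
                  For example, the following command upgrades an illustrative
                  pool to a specific legacy version (both the pool name and
                  version number are placeholders):

                  # zpool upgrade -V 28 tank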
 956 
 957 EXIT STATUS
 958      The following exit values are returned:
 959 
 960      0       Successful completion.
 961 
 962      1       An error occurred.
 963 
 964      2       Invalid command line options were specified.
 965 
 966 EXAMPLES
 967      Example 1 Creating a RAID-Z Storage Pool
 968              The following command creates a pool with a single raidz root
 969              vdev that consists of six disks.
 970 
 971              # zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
 972 
 973      Example 2 Creating a Mirrored Storage Pool
 974              The following command creates a pool with two mirrors, where each
 975              mirror contains two disks.
 976 
 977              # zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
 978 
 979      Example 3 Creating a ZFS Storage Pool by Using Slices
 980              The following command creates an unmirrored pool using two disk
 981              slices.
 982 
 983              # zpool create tank /dev/dsk/c0t0d0s1 c0t1d0s4
 984 
 985      Example 4 Creating a ZFS Storage Pool by Using Files
 986              The following command creates an unmirrored pool using files.
 987              While not recommended, a pool based on files can be useful for
 988              experimental purposes.
 989 
 990              # zpool create tank /path/to/file/a /path/to/file/b
 991 
 992      Example 5 Adding a Mirror to a ZFS Storage Pool
 993              The following command adds two mirrored disks to the pool tank,
 994              assuming the pool is already made up of two-way mirrors. The
 995              additional space is immediately available to any datasets within
 996              the pool.
 997 
 998              # zpool add tank mirror c1t0d0 c1t1d0
 999 
1000      Example 6 Listing Available ZFS Storage Pools
1001              The following command lists all available pools on the system. In
1002              this case, the pool zion is faulted due to a missing device. The
1003              results from this command are similar to the following:
1004 
1005              # zpool list
1006              NAME    SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
1007              rpool  19.9G  8.43G  11.4G    33%         -    42%  1.00x  ONLINE  -
1008              tank   61.5G  20.0G  41.5G    48%         -    32%  1.00x  ONLINE  -
1009              zion       -      -      -      -         -      -      -  FAULTED -
1010 
1011      Example 7 Destroying a ZFS Storage Pool
1012              The following command destroys the pool tank and any datasets
1013              contained within.
1014 
1015              # zpool destroy -f tank
1016 
1017      Example 8 Exporting a ZFS Storage Pool
1018              The following command exports the devices in pool tank so that
1019              they can be relocated or later imported.
1020 
1021              # zpool export tank
1022 
1023      Example 9 Importing a ZFS Storage Pool
1024              The following command displays available pools, and then imports
1025              the pool tank for use on the system. The results from this
1026              command are similar to the following:
1027 
1028              # zpool import
1029                pool: tank
1030                  id: 15451357997522795478
1031               state: ONLINE
1032              action: The pool can be imported using its name or numeric identifier.
1033              config:
1034 
1035                      tank        ONLINE
1036                        mirror    ONLINE
1037                          c1t2d0  ONLINE
1038                          c1t3d0  ONLINE
1039 
1040              # zpool import tank
1041 
1042      Example 10 Upgrading All ZFS Storage Pools to the Current Version
1043              The following command upgrades all ZFS storage pools to the
1044              current version of the software.
1045 
1046              # zpool upgrade -a
1047              This system is currently running ZFS version 2.
1048 
1049      Example 11 Managing Hot Spares
1050              The following command creates a new pool with an available hot
1051              spare:
1052 
1053              # zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0
1054 
1055              If one of the disks were to fail, the pool would be reduced to
1056              the degraded state. The failed device can be replaced using the
1057              following command:
1058 
1059              # zpool replace tank c0t0d0 c0t3d0
1060 
1061              Once the data has been resilvered, the spare is automatically
1062              removed and is made available should another device fail. The
1063              hot spare can be permanently removed from the pool using the
1064              following command:
1065 
1066              # zpool remove tank c0t2d0
1067 
1068      Example 12 Creating a ZFS Pool with Mirrored Separate Intent Logs
1069              The following command creates a ZFS storage pool consisting of
1070              two two-way mirrors and mirrored log devices:
1071 
1072              # zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \
1073                c4d0 c5d0
1074 
1075      Example 13 Adding Cache Devices to a ZFS Pool
1076              The following command adds two disks for use as cache devices to
1077              a ZFS storage pool:
1078 
1079              # zpool add pool cache c2d0 c3d0
1080 
1081              Once added, the cache devices gradually fill with content from
1082              main memory.  Depending on the size of your cache devices, it
1083              could take over an hour for them to fill. Capacity and reads can
1084              be monitored using the iostat subcommand as follows:
1085 
1086              # zpool iostat -v pool 5
1087 
1088      Example 14 Removing a Mirrored Log Device
1089              The following command removes the mirrored log device mirror-2.
1090              Given this configuration:
1091 
1092                pool: tank
1093               state: ONLINE
1094               scrub: none requested
1095              config:
1096 
1097                       NAME        STATE     READ WRITE CKSUM
1098                       tank        ONLINE       0     0     0
1099                         mirror-0  ONLINE       0     0     0
1100                           c6t0d0  ONLINE       0     0     0
1101                           c6t1d0  ONLINE       0     0     0
1102                         mirror-1  ONLINE       0     0     0
1103                           c6t2d0  ONLINE       0     0     0
1104                           c6t3d0  ONLINE       0     0     0
1105                       logs
1106                         mirror-2  ONLINE       0     0     0
1107                           c4t0d0  ONLINE       0     0     0
1108                           c4t1d0  ONLINE       0     0     0
1109 
1110              The command to remove the mirrored log mirror-2 is:
1111 
1112              # zpool remove tank mirror-2
1113 
1114      Example 15 Displaying expanded space on a device
1115              The following command displays the detailed information for the
1116              pool data.  This pool is composed of a single raidz vdev where
1117              one of its devices increased its capacity by 10 GB. In this
1118              example, the pool will not be able to utilize this extra capacity
1119              until all the devices under the raidz vdev have been expanded.
1120 
1121              # zpool list -v data
1122              NAME         SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
1123              data        23.9G  14.6G  9.30G    48%         -    61%  1.00x  ONLINE  -
1124                raidz1    23.9G  14.6G  9.30G    48%         -
1125                  c1t1d0      -      -      -      -         -
1126                  c1t2d0      -      -      -      -       10G
1127                  c1t3d0      -      -      -      -         -
1128 
1129 INTERFACE STABILITY
1130      Evolving
1131 
1132 SEE ALSO
1133      zfs(1M), attributes(5), zpool-features(5)
1134 
1135 illumos                         March 25, 2016                         illumos