6781 zpool man page needs updated to remove duplicate entry of "cannot be" where it discusses cache devices
Reviewed by: Toomas Soome <tsoome@me.com>
Reviewed by: Robert Mustacchi <rm@joyent.com>
--- old/usr/src/man/man1m/zpool.1m.man.txt
+++ new/usr/src/man/man1m/zpool.1m.man.txt
1 1 ZPOOL(1M) Maintenance Commands ZPOOL(1M)
2 2
3 3 NAME
4 4 zpool - configure ZFS storage pools
5 5
6 6 SYNOPSIS
7 7 zpool -?
8 8 zpool add [-fn] pool vdev...
9 9 zpool attach [-f] pool device new_device
10 10 zpool clear pool [device]
11 11 zpool create [-dfn] [-m mountpoint] [-o property=value]...
12 12 [-O file-system-property=value]... [-R root] pool vdev...
13 13 zpool destroy [-f] pool
14 14 zpool detach pool device
15 15 zpool export [-f] pool...
16 16 zpool get [-Hp] [-o field[,field]...] all|property[,property]... pool...
17 17 zpool history [-il] [pool]...
18 18 zpool import [-D] [-d dir]
19 19 zpool import -a [-DfmN] [-F [-n]] [-c cachefile|-d dir] [-o mntopts]
20 20 [-o property=value]... [-R root]
21 21 zpool import [-Dfm] [-F [-n]] [-c cachefile|-d dir] [-o mntopts]
22 22 [-o property=value]... [-R root] pool|id [newpool]
23 23 zpool iostat [-v] [-T u|d] [pool]... [interval [count]]
24 24 zpool list [-Hpv] [-o property[,property]...] [-T u|d] [pool]...
25 25 [interval [count]]
26 26 zpool offline [-t] pool device...
27 27 zpool online [-e] pool device...
28 28 zpool reguid pool
29 29 zpool reopen pool
30 30 zpool remove pool device...
31 31 zpool replace [-f] pool device [new_device]
32 32 zpool scrub [-s] pool...
33 33 zpool set property=value pool
34 34 zpool split [-n] [-o property=value]... [-R root] pool newpool
35 35 zpool status [-Dvx] [-T u|d] [pool]... [interval [count]]
36 36 zpool upgrade
37 37 zpool upgrade -v
38 38 zpool upgrade [-V version] -a|pool...
39 39
40 40 DESCRIPTION
41 41 The zpool command configures ZFS storage pools. A storage pool is a
42 42 collection of devices that provides physical storage and data replication
43 43 for ZFS datasets. All datasets within a storage pool share the same
44 44 space. See zfs(1M) for information on managing datasets.
45 45
46 46 Virtual Devices (vdevs)
47 47 A "virtual device" describes a single device or a collection of devices
48 48 organized according to certain performance and fault characteristics. The
49 49 following virtual devices are supported:
50 50
51 51 disk A block device, typically located under /dev/dsk. ZFS can use
52 52 individual slices or partitions, though the recommended mode of
53 53 operation is to use whole disks. A disk can be specified by a
54 54 full path, or it can be a shorthand name (the relative portion of
55 55 the path under /dev/dsk). A whole disk can be specified by
56 56 omitting the slice or partition designation. For example, c0t0d0
57 57 is equivalent to /dev/dsk/c0t0d0s2. When given a whole disk, ZFS
58 58 automatically labels the disk, if necessary.
59 59
60 60 file A regular file. The use of files as a backing store is strongly
61 61 discouraged. It is designed primarily for experimental purposes,
62 62 as the fault tolerance of a file is only as good as the file
63 63 system of which it is a part. A file must be specified by a full
64 64 path.
65 65
66 66 mirror A mirror of two or more devices. Data is replicated in an
67 67 identical fashion across all components of a mirror. A mirror
68 68 with N disks of size X can hold X bytes and can withstand (N-1)
69 69 devices failing before data integrity is compromised.
70 70
71 71 raidz, raidz1, raidz2, raidz3
72 72 A variation on RAID-5 that allows for better distribution of
73 73 parity and eliminates the RAID-5 "write hole" (in which data and
74 74 parity become inconsistent after a power loss). Data and parity
75 75 is striped across all disks within a raidz group.
76 76
77 77 A raidz group can have single-, double-, or triple-parity,
78 78 meaning that the raidz group can sustain one, two, or three
79 79 failures, respectively, without losing any data. The raidz1 vdev
80 80 type specifies a single-parity raidz group; the raidz2 vdev type
81 81 specifies a double-parity raidz group; and the raidz3 vdev type
82 82 specifies a triple-parity raidz group. The raidz vdev type is an
83 83 alias for raidz1.
84 84
85 85 A raidz group with N disks of size X with P parity disks can hold
86 86 approximately (N-P)*X bytes and can withstand P device(s) failing
87 87 before data integrity is compromised. The minimum number of
88 88 devices in a raidz group is one more than the number of parity
89 89 disks. The recommended number is between 3 and 9 to help increase
90 90 performance.
91 91
92 92 spare A special pseudo-vdev which keeps track of available hot spares
93 93 for a pool. For more information, see the Hot Spares section.
94 94
95 95 log A separate intent log device. If more than one log device is
96 96 specified, then writes are load-balanced between devices. Log
97 97 devices can be mirrored. However, raidz vdev types are not
98 98 supported for the intent log. For more information, see the
99 99 Intent Log section.
100 100
101 101 cache A device used to cache storage pool data. A cache device cannot
102 - be cannot be configured as a mirror or raidz group. For more
103 - information, see the Cache Devices section.
102 + be configured as a mirror or raidz group. For more information,
103 + see the Cache Devices section.
104 104
105 105 Virtual devices cannot be nested, so a mirror or raidz virtual device can
106 106 only contain files or disks. Mirrors of mirrors (or other combinations)
107 107 are not allowed.
108 108
109 109 A pool can have any number of virtual devices at the top of the
110 110 configuration (known as "root vdevs"). Data is dynamically distributed
111 111 across all top-level devices to balance data among devices. As new
112 112 virtual devices are added, ZFS automatically places data on the newly
113 113 available devices.
114 114
115 115 Virtual devices are specified one at a time on the command line,
116 116 separated by whitespace. The keywords mirror and raidz are used to
117 117 distinguish where a group ends and another begins. For example, the
118 118 following creates two root vdevs, each a mirror of two disks:
119 119
120 120 # zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0
121 121
122 122 Device Failure and Recovery
123 123 ZFS supports a rich set of mechanisms for handling device failure and
124 124 data corruption. All metadata and data is checksummed, and ZFS
125 125 automatically repairs bad data from a good copy when corruption is
126 126 detected.
127 127
128 128 In order to take advantage of these features, a pool must make use of
129 129 some form of redundancy, using either mirrored or raidz groups. While ZFS
130 130 supports running in a non-redundant configuration, where each root vdev
131 131 is simply a disk or file, this is strongly discouraged. A single case of
132 132 bit corruption can render some or all of your data unavailable.
133 133
134 134 A pool's health status is described by one of three states: online,
135 135 degraded, or faulted. An online pool has all devices operating normally.
136 136 A degraded pool is one in which one or more devices have failed, but the
137 137 data is still available due to a redundant configuration. A faulted pool
138 138 has corrupted metadata, or one or more faulted devices, and insufficient
139 139 replicas to continue functioning.
140 140
141 141 The health of the top-level vdev, such as mirror or raidz device, is
142 142 potentially impacted by the state of its associated vdevs, or component
143 143 devices. A top-level vdev or component device is in one of the following
144 144 states:
145 145
146 146 DEGRADED One or more top-level vdevs is in the degraded state because
147 147 one or more component devices are offline. Sufficient replicas
148 148 exist to continue functioning.
149 149
150 150 One or more component devices is in the degraded or faulted
151 151 state, but sufficient replicas exist to continue functioning.
152 152 The underlying conditions are as follows:
153 153
154 154 o The number of checksum errors exceeds acceptable levels and
155 155 the device is degraded as an indication that something may
156 156 be wrong. ZFS continues to use the device as necessary.
157 157
158 158 o The number of I/O errors exceeds acceptable levels. The
159 159 device could not be marked as faulted because there are
160 160 insufficient replicas to continue functioning.
161 161
162 162 FAULTED One or more top-level vdevs is in the faulted state because one
163 163 or more component devices are offline. Insufficient replicas
164 164 exist to continue functioning.
165 165
166 166 One or more component devices is in the faulted state, and
167 167 insufficient replicas exist to continue functioning. The
168 168 underlying conditions are as follows:
169 169
170 170 o The device could be opened, but the contents did not match
171 171 expected values.
172 172
173 173 o The number of I/O errors exceeds acceptable levels and the
174 174 device is faulted to prevent further use of the device.
175 175
176 176 OFFLINE The device was explicitly taken offline by the zpool offline
177 177 command.
178 178
179 179 ONLINE The device is online and functioning.
180 180
181 181 REMOVED The device was physically removed while the system was running.
182 182 Device removal detection is hardware-dependent and may not be
183 183 supported on all platforms.
184 184
185 185 UNAVAIL The device could not be opened. If a pool is imported when a
186 186 device was unavailable, then the device will be identified by a
187 187 unique identifier instead of its path since the path was never
188 188 correct in the first place.
189 189
190 190 If a device is removed and later re-attached to the system, ZFS attempts
191 191 to put the device online automatically. Device attach detection is
192 192 hardware-dependent and might not be supported on all platforms.
193 193
194 194 Hot Spares
195 195 ZFS allows devices to be associated with pools as "hot spares". These
196 196 devices are not actively used in the pool, but when an active device
197 197 fails, it is automatically replaced by a hot spare. To create a pool with
198 198 hot spares, specify a spare vdev with any number of devices. For example,
199 199
200 200 # zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0
201 201
202 202 Spares can be shared across multiple pools, and can be added with the
203 203 zpool add command and removed with the zpool remove command. Once a spare
204 204 replacement is initiated, a new spare vdev is created within the
205 205 configuration that will remain there until the original device is
206 206 replaced. At this point, the hot spare becomes available again if another
207 207 device fails.
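           For example, a spare might be added to, and later removed from, an
           existing pool with commands of this form (the pool and device names
           here are only placeholders):

             # zpool add tank spare c2t3d0
             # zpool remove tank c2t3d0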
208 208
209 209 If a pool has a shared spare that is currently being used, the pool can
210 210 not be exported since other pools may use this shared spare, which may
211 211 lead to potential data corruption.
212 212
213 213 An in-progress spare replacement can be cancelled by detaching the hot
214 214 spare. If the original faulted device is detached, then the hot spare
215 215 assumes its place in the configuration, and is removed from the spare
216 216 list of all active pools.
217 217
218 218 Spares cannot replace log devices.
219 219
220 220 Intent Log
221 221 The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
222 222 transactions. For instance, databases often require their transactions to
223 223 be on stable storage devices when returning from a system call. NFS and
224 224 other applications can also use fsync(3C) to ensure data stability. By
225 225 default, the intent log is allocated from blocks within the main pool.
226 226 However, it might be possible to get better performance using separate
227 227 intent log devices such as NVRAM or a dedicated disk. For example:
228 228
229 229 # zpool create pool c0d0 c1d0 log c2d0
230 230
231 231 Multiple log devices can also be specified, and they can be mirrored. See
232 232 the EXAMPLES section for an example of mirroring multiple log devices.
233 233
234 234 Log devices can be added, replaced, attached, detached, and imported and
235 235 exported as part of the larger pool. Mirrored log devices can be removed
236 236 by specifying the top-level mirror for the log.
237 237
238 238 Cache Devices
239 239 Devices can be added to a storage pool as "cache devices". These devices
240 240 provide an additional layer of caching between main memory and disk. For
241 241 read-heavy workloads, where the working set size is much larger than what
 242  242        can be cached in main memory, using cache devices allows much more of this
243 243 working set to be served from low latency media. Using cache devices
244 244 provides the greatest performance improvement for random read-workloads
245 245 of mostly static content.
246 246
247 247 To create a pool with cache devices, specify a cache vdev with any number
248 248 of devices. For example:
249 249
250 250 # zpool create pool c0d0 c1d0 cache c2d0 c3d0
251 251
252 252 Cache devices cannot be mirrored or part of a raidz configuration. If a
253 253 read error is encountered on a cache device, that read I/O is reissued to
254 254 the original storage pool device, which might be part of a mirrored or
255 255 raidz configuration.
256 256
257 257 The content of the cache devices is considered volatile, as is the case
258 258 with other system caches.
259 259
260 260 Properties
261 261 Each pool has several properties associated with it. Some properties are
262 262 read-only statistics while others are configurable and change the
263 263 behavior of the pool.
264 264
265 265 The following are read-only properties:
266 266
267 267 available
268 268 Amount of storage available within the pool. This property can
269 269 also be referred to by its shortened column name, avail.
270 270
271 271 capacity
272 272 Percentage of pool space used. This property can also be referred
273 273 to by its shortened column name, cap.
274 274
275 275 expandsize
276 276 Amount of uninitialized space within the pool or device that can
277 277 be used to increase the total capacity of the pool.
278 278 Uninitialized space consists of any space on an EFI labeled vdev
 279  279              which has not been brought online (e.g., using zpool online -e).
280 280 This space occurs when a LUN is dynamically expanded.
281 281
282 282 fragmentation
283 283 The amount of fragmentation in the pool.
284 284
285 285 free The amount of free space available in the pool.
286 286
287 287 freeing
288 288 After a file system or snapshot is destroyed, the space it was
289 289 using is returned to the pool asynchronously. freeing is the
290 290 amount of space remaining to be reclaimed. Over time freeing will
291 291 decrease while free increases.
292 292
293 293 health The current health of the pool. Health can be one of ONLINE,
294 294 DEGRADED, FAULTED, OFFLINE, REMOVED, UNAVAIL.
295 295
296 296 guid A unique identifier for the pool.
297 297
298 298 size Total size of the storage pool.
299 299
300 300 unsupported@feature_guid
301 301 Information about unsupported features that are enabled on the
302 302 pool. See zpool-features(5) for details.
303 303
304 304 used Amount of storage space used within the pool.
305 305
306 306 The space usage properties report actual physical space available to the
307 307 storage pool. The physical space can be different from the total amount
308 308 of space that any contained datasets can actually use. The amount of
309 309 space used in a raidz configuration depends on the characteristics of the
310 310 data being written. In addition, ZFS reserves some space for internal
311 311 accounting that the zfs(1M) command takes into account, but the zpool
312 312 command does not. For non-full pools of a reasonable size, these effects
313 313 should be invisible. For small pools, or pools that are close to being
314 314 completely full, these discrepancies may become more noticeable.
315 315
316 316 The following property can be set at creation time and import time:
317 317
318 318 altroot
319 319 Alternate root directory. If set, this directory is prepended to
320 320 any mount points within the pool. This can be used when examining
321 321 an unknown pool where the mount points cannot be trusted, or in
322 322 an alternate boot environment, where the typical paths are not
323 323 valid. altroot is not a persistent property. It is valid only
324 324 while the system is up. Setting altroot defaults to using
325 325 cachefile=none, though this may be overridden using an explicit
326 326 setting.
327 327
328 328 The following property can be set only at import time:
329 329
330 330 readonly=on|off
331 331 If set to on, the pool will be imported in read-only mode. This
332 332 property can also be referred to by its shortened column name,
333 333 rdonly.
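               For instance, a pool might be imported read-only with a command
               of this form (the pool name is a placeholder):

                 # zpool import -o readonly=on tank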
334 334
335 335 The following properties can be set at creation time and import time, and
336 336 later changed with the zpool set command:
337 337
338 338 autoexpand=on|off
339 339 Controls automatic pool expansion when the underlying LUN is
340 340 grown. If set to on, the pool will be resized according to the
341 341 size of the expanded device. If the device is part of a mirror or
342 342 raidz then all devices within that mirror/raidz group must be
343 343 expanded before the new space is made available to the pool. The
344 344 default behavior is off. This property can also be referred to
345 345 by its shortened column name, expand.
346 346
347 347 autoreplace=on|off
348 348 Controls automatic device replacement. If set to off, device
349 349 replacement must be initiated by the administrator by using the
350 350 zpool replace command. If set to on, any new device, found in the
351 351 same physical location as a device that previously belonged to
352 352 the pool, is automatically formatted and replaced. The default
353 353 behavior is off. This property can also be referred to by its
354 354 shortened column name, replace.
355 355
356 356 bootfs=pool/dataset
357 357 Identifies the default bootable dataset for the root pool. This
358 358 property is expected to be set mainly by the installation and
359 359 upgrade programs.
360 360
361 361 cachefile=path|none
362 362 Controls the location of where the pool configuration is cached.
363 363 Discovering all pools on system startup requires a cached copy of
364 364 the configuration data that is stored on the root file system.
365 365 All pools in this cache are automatically imported when the
366 366 system boots. Some environments, such as install and clustering,
367 367 need to cache this information in a different location so that
368 368 pools are not automatically imported. Setting this property
369 369 caches the pool configuration in a different location that can
370 370 later be imported with zpool import -c. Setting it to the
371 371 special value none creates a temporary pool that is never cached,
372 372 and the special value "" (empty string) uses the default
373 373 location.
374 374
375 375 Multiple pools can share the same cache file. Because the kernel
376 376 destroys and recreates this file when pools are added and
377 377 removed, care should be taken when attempting to access this
378 378 file. When the last pool using a cachefile is exported or
379 379 destroyed, the file is removed.
380 380
381 381 comment=text
382 382 A text string consisting of printable ASCII characters that will
383 383 be stored such that it is available even if the pool becomes
384 384 faulted. An administrator can provide additional information
385 385 about a pool using this property.
386 386
387 387 dedupditto=number
388 388 Threshold for the number of block ditto copies. If the reference
389 389 count for a deduplicated block increases above this number, a new
390 390 ditto copy of this block is automatically stored. The default
391 391 setting is 0 which causes no ditto copies to be created for
 392  392              deduplicated blocks. The minimum legal nonzero setting is 100.
393 393
394 394 delegation=on|off
395 395 Controls whether a non-privileged user is granted access based on
396 396 the dataset permissions defined on the dataset. See zfs(1M) for
397 397 more information on ZFS delegated administration.
398 398
399 399 failmode=wait|continue|panic
400 400 Controls the system behavior in the event of catastrophic pool
401 401 failure. This condition is typically a result of a loss of
402 402 connectivity to the underlying storage device(s) or a failure of
403 403 all devices within the pool. The behavior of such an event is
404 404 determined as follows:
405 405
406 406 wait Blocks all I/O access until the device connectivity is
407 407 recovered and the errors are cleared. This is the
408 408 default behavior.
409 409
410 410 continue Returns EIO to any new write I/O requests but allows
411 411 reads to any of the remaining healthy devices. Any
412 412 write requests that have yet to be committed to disk
413 413 would be blocked.
414 414
415 415 panic Prints out a message to the console and generates a
416 416 system crash dump.
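               As an illustration, the failure mode of an existing pool could
               be changed with a command of this form (the pool name is a
               placeholder):

                 # zpool set failmode=continue tank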
417 417
418 418 feature@feature_name=enabled
419 419 The value of this property is the current state of feature_name.
420 420 The only valid value when setting this property is enabled which
421 421 moves feature_name to the enabled state. See zpool-features(5)
422 422 for details on feature states.
423 423
424 424 listsnaps=on|off
425 425 Controls whether information about snapshots associated with this
426 426 pool is output when zfs list is run without the -t option. The
427 427 default value is off.
428 428
429 429 version=version
430 430 The current on-disk version of the pool. This can be increased,
431 431 but never decreased. The preferred method of updating pools is
432 432 with the zpool upgrade command, though this property can be used
433 433 when a specific version is needed for backwards compatibility.
 434  434              Once feature flags are enabled on a pool, this property will no
435 435 longer have a value.
436 436
437 437 Subcommands
438 438 All subcommands that modify state are logged persistently to the pool in
439 439 their original form.
440 440
441 441 The zpool command provides subcommands to create and destroy storage
442 442 pools, add capacity to storage pools, and provide information about the
443 443 storage pools. The following subcommands are supported:
444 444
445 445 zpool -?
446 446 Displays a help message.
447 447
448 448 zpool add [-fn] pool vdev...
449 449 Adds the specified virtual devices to the given pool. The vdev
450 450 specification is described in the Virtual Devices section. The
451 451 behavior of the -f option, and the device checks performed are
452 452 described in the zpool create subcommand.
453 453
454 454 -f Forces use of vdevs, even if they appear in use or
455 455 specify a conflicting replication level. Not all devices
456 456 can be overridden in this manner.
457 457
458 458 -n Displays the configuration that would be used without
459 459 actually adding the vdevs. The actual pool creation can
460 460 still fail due to insufficient privileges or device
461 461 sharing.
462 462
463 463 zpool attach [-f] pool device new_device
464 464 Attaches new_device to the existing device. The existing device
465 465 cannot be part of a raidz configuration. If device is not
466 466 currently part of a mirrored configuration, device automatically
467 467 transforms into a two-way mirror of device and new_device. If
468 468 device is part of a two-way mirror, attaching new_device creates
469 469 a three-way mirror, and so on. In either case, new_device begins
470 470 to resilver immediately.
471 471
 472  472              -f      Forces use of new_device, even if it appears to be in
473 473 use. Not all devices can be overridden in this manner.
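               As an illustration, the following would turn the single device
               c0t0d0 in pool tank into a two-way mirror (names are
               placeholders):

                 # zpool attach tank c0t0d0 c0t1d0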
474 474
475 475 zpool clear pool [device]
476 476 Clears device errors in a pool. If no arguments are specified,
477 477 all device errors within the pool are cleared. If one or more
478 478 devices is specified, only those errors associated with the
479 479 specified device or devices are cleared.
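               For instance, errors on a single device might be cleared with a
               command of this form (names are placeholders):

                 # zpool clear tank c0t0d0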
480 480
481 481 zpool create [-dfn] [-m mountpoint] [-o property=value]... [-O
482 482 file-system-property=value]... [-R root] pool vdev...
483 483 Creates a new storage pool containing the virtual devices
484 484 specified on the command line. The pool name must begin with a
485 485 letter, and can only contain alphanumeric characters as well as
486 486 underscore ("_"), dash ("-"), and period ("."). The pool names
487 487 mirror, raidz, spare and log are reserved, as are names beginning
488 488 with the pattern c[0-9]. The vdev specification is described in
489 489 the Virtual Devices section.
490 490
491 491 The command verifies that each device specified is accessible and
492 492 not currently in use by another subsystem. There are some uses,
493 493 such as being currently mounted, or specified as the dedicated
 494  494              dump device, that prevent a device from ever being used by ZFS.
495 495 Other uses, such as having a preexisting UFS file system, can be
496 496 overridden with the -f option.
497 497
498 498 The command also checks that the replication strategy for the
499 499 pool is consistent. An attempt to combine redundant and non-
500 500 redundant storage in a single pool, or to mix disks and files,
501 501 results in an error unless -f is specified. The use of
502 502 differently sized devices within a single raidz or mirror group
503 503 is also flagged as an error unless -f is specified.
504 504
505 505 Unless the -R option is specified, the default mount point is
506 506 /pool. The mount point must not exist or must be empty, or else
507 507 the root dataset cannot be mounted. This can be overridden with
508 508 the -m option.
509 509
510 510 By default all supported features are enabled on the new pool
511 511 unless the -d option is specified.
512 512
513 513 -d Do not enable any features on the new pool. Individual
514 514 features can be enabled by setting their corresponding
515 515 properties to enabled with the -o option. See
516 516 zpool-features(5) for details about feature properties.
517 517
518 518 -f Forces use of vdevs, even if they appear in use or
519 519 specify a conflicting replication level. Not all devices
520 520 can be overridden in this manner.
521 521
522 522 -m mountpoint
523 523 Sets the mount point for the root dataset. The default
524 524 mount point is /pool or altroot/pool if altroot is
525 525 specified. The mount point must be an absolute path,
526 526 legacy, or none. For more information on dataset mount
527 527 points, see zfs(1M).
528 528
529 529 -n Displays the configuration that would be used without
530 530 actually creating the pool. The actual pool creation can
531 531 still fail due to insufficient privileges or device
532 532 sharing.
533 533
534 534 -o property=value
535 535 Sets the given pool properties. See the Properties
536 536 section for a list of valid properties that can be set.
537 537
538 538 -O file-system-property=value
539 539 Sets the given file system properties in the root file
540 540 system of the pool. See the Properties section of zfs(1M)
541 541 for a list of valid properties that can be set.
542 542
543 543 -R root
544 544 Equivalent to -o cachefile=none -o altroot=root
545 545
546 546 zpool destroy [-f] pool
547 547 Destroys the given pool, freeing up any devices for other use.
548 548 This command tries to unmount any active datasets before
549 549 destroying the pool.
550 550
551 551 -f Forces any active datasets contained within the pool to
552 552 be unmounted.
553 553
554 554 zpool detach pool device
555 555 Detaches device from a mirror. The operation is refused if there
556 556 are no other valid replicas of the data.
557 557
558 558 zpool export [-f] pool...
559 559 Exports the given pools from the system. All devices are marked
560 560 as exported, but are still considered in use by other subsystems.
561 561 The devices can be moved between systems (even those of different
562 562 endianness) and imported as long as a sufficient number of
563 563 devices are present.
564 564
565 565 Before exporting the pool, all datasets within the pool are
566 566 unmounted. A pool can not be exported if it has a shared spare
567 567 that is currently being used.
568 568
569 569 For pools to be portable, you must give the zpool command whole
570 570 disks, not just slices, so that ZFS can label the disks with
571 571 portable EFI labels. Otherwise, disk drivers on platforms of
572 572 different endianness will not recognize the disks.
573 573
574 574 -f Forcefully unmount all datasets, using the unmount -f
575 575 command.
576 576
577 577 This command will forcefully export the pool even if it
578 578 has a shared spare that is currently being used. This may
579 579 lead to potential data corruption.
580 580
581 581 zpool get [-Hp] [-o field[,field]...] all|property[,property]... pool...
582 582 Retrieves the given list of properties (or all properties if all
583 583 is used) for the specified storage pool(s). These properties are
584 584 displayed with the following fields:
585 585
586 586 name Name of storage pool
587 587 property Property name
588 588 value Property value
589 589 source Property source, either 'default' or 'local'.
590 590
591 591 See the Properties section for more information on the available
592 592 pool properties.
593 593
594 594 -H Scripted mode. Do not display headers, and separate
595 595 fields by a single tab instead of arbitrary space.
596 596
597 597 -o field
598 598 A comma-separated list of columns to display.
599 599 name,property,value,source is the default value.
600 600
601 601 -p Display numbers in parsable (exact) values.
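               A scripted query for a single property might look like the
               following (the pool name is a placeholder):

                 # zpool get -H -o name,value health tank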
602 602
603 603 zpool history [-il] [pool]...
604 604 Displays the command history of the specified pool(s) or all
605 605 pools if no pool is specified.
606 606
607 607 -i Displays internally logged ZFS events in addition to user
608 608 initiated events.
609 609
610 610 -l Displays log records in long format, which in addition to
611 611 standard format includes, the user name, the hostname,
612 612 and the zone in which the operation was performed.
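               A typical invocation showing internal events in long format
               might be (the pool name is a placeholder):

                 # zpool history -il tank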
613 613
614 614 zpool import [-D] [-d dir]
615 615 Lists pools available to import. If the -d option is not
616 616 specified, this command searches for devices in /dev/dsk. The -d
617 617 option can be specified multiple times, and all directories are
618 618 searched. If the device appears to be part of an exported pool,
619 619 this command displays a summary of the pool with the name of the
620 620 pool, a numeric identifier, as well as the vdev layout and
621 621 current health of the device for each device or file. Destroyed
622 622 pools, pools that were previously destroyed with the zpool
623 623 destroy command, are not listed unless the -D option is
624 624 specified.
625 625
626 626 The numeric identifier is unique, and can be used instead of the
627 627 pool name when multiple exported pools of the same name are
628 628 available.
629 629
630 630 -c cachefile
631 631 Reads configuration from the given cachefile that was
632 632 created with the cachefile pool property. This cachefile
633 633 is used instead of searching for devices.
634 634
635 635 -d dir Searches for devices or files in dir. The -d option can
636 636 be specified multiple times.
637 637
638 638 -D Lists destroyed pools only.
639 639
640 640 zpool import -a [-DfmN] [-F [-n]] [-c cachefile|-d dir] [-o mntopts] [-o
641 641 property=value]... [-R root]
642 642 Imports all pools found in the search directories. Identical to
643 643 the previous command, except that all pools with a sufficient
644 644 number of devices available are imported. Destroyed pools, pools
645 645 that were previously destroyed with the zpool destroy command,
646 646 will not be imported unless the -D option is specified.
647 647
648 648 -a Searches for and imports all pools found.
649 649
650 650 -c cachefile
651 651 Reads configuration from the given cachefile that was
652 652 created with the cachefile pool property. This cachefile
653 653 is used instead of searching for devices.
654 654
655 655 -d dir Searches for devices or files in dir. The -d option can
656 656 be specified multiple times. This option is incompatible
657 657 with the -c option.
658 658
659 659 -D Imports destroyed pools only. The -f option is also
660 660 required.
661 661
662 662 -f Forces import, even if the pool appears to be potentially
663 663 active.
664 664
665 665 -F Recovery mode for a non-importable pool. Attempt to
666 666 return the pool to an importable state by discarding the
667 667 last few transactions. Not all damaged pools can be
668 668 recovered by using this option. If successful, the data
669 669 from the discarded transactions is irretrievably lost.
670 670 This option is ignored if the pool is importable or
671 671 already imported.
672 672
673 673 -m Allows a pool to import when there is a missing log
674 674 device. Recent transactions can be lost because the log
675 675 device will be discarded.
676 676
677 677 -n Used with the -F recovery option. Determines whether a
678 678 non-importable pool can be made importable again, but
679 679 does not actually perform the pool recovery. For more
680 680 details about pool recovery mode, see the -F option,
681 681 above.
682 682
683 683 -N Import the pool without mounting any file systems.
684 684
685 685 -o mntopts
686 686 Comma-separated list of mount options to use when
687 687 mounting datasets within the pool. See zfs(1M) for a
688 688 description of dataset properties and mount options.
689 689
690 690 -o property=value
691 691 Sets the specified property on the imported pool. See the
692 692 Properties section for more information on the available
693 693 pool properties.
694 694
695 695 -R root
696 696 Sets the cachefile property to none and the altroot
697 697 property to root.
698 698
699 699 zpool import [-Dfm] [-F [-n]] [-c cachefile|-d dir] [-o mntopts] [-o
700 700 property=value]... [-R root] pool|id [newpool]
701 701 Imports a specific pool. A pool can be identified by its name or
702 702 the numeric identifier. If newpool is specified, the pool is
703 703 imported using the name newpool. Otherwise, it is imported with
704 704 the same name as its exported name.
705 705
706 706 If a device is removed from a system without running zpool export
707 707 first, the device appears as potentially active. It cannot be
708 708 determined if this was a failed export, or whether the device is
709 709 really in use from another host. To import a pool in this state,
710 710 the -f option is required.
711 711
712 712 -c cachefile
713 713 Reads configuration from the given cachefile that was
714 714 created with the cachefile pool property. This cachefile
715 715 is used instead of searching for devices.
716 716
717 717 -d dir Searches for devices or files in dir. The -d option can
718 718 be specified multiple times. This option is incompatible
719 719 with the -c option.
720 720
721 721 -D Imports destroyed pool. The -f option is also required.
722 722
723 723 -f Forces import, even if the pool appears to be potentially
724 724 active.
725 725
726 726 -F Recovery mode for a non-importable pool. Attempt to
727 727 return the pool to an importable state by discarding the
728 728 last few transactions. Not all damaged pools can be
729 729 recovered by using this option. If successful, the data
730 730 from the discarded transactions is irretrievably lost.
731 731 This option is ignored if the pool is importable or
732 732 already imported.
733 733
734 734 -m Allows a pool to import when there is a missing log
735 735 device. Recent transactions can be lost because the log
736 736 device will be discarded.
737 737
738 738 -n Used with the -F recovery option. Determines whether a
739 739 non-importable pool can be made importable again, but
740 740 does not actually perform the pool recovery. For more
741 741 details about pool recovery mode, see the -F option,
742 742 above.
743 743
744 744 -o mntopts
745 745 Comma-separated list of mount options to use when
746 746 mounting datasets within the pool. See zfs(1M) for a
747 747 description of dataset properties and mount options.
748 748
749 749 -o property=value
750 750 Sets the specified property on the imported pool. See the
751 751 Properties section for more information on the available
752 752 pool properties.
753 753
754 754 -R root
755 755 Sets the cachefile property to none and the altroot
756 756 property to root.
757 757
758 758 zpool iostat [-v] [-T u|d] [pool]... [interval [count]]
759 759 Displays I/O statistics for the given pools. When given an
760 760 interval, the statistics are printed every interval seconds until
761 761 ^C is pressed. If no pools are specified, statistics for every
 762  762              pool in the system are shown. If count is specified, the command
763 763 exits after count reports are printed.
764 764
765 765 -T u|d Display a time stamp. Specify u for a printed
766 766 representation of the internal representation of time.
767 767 See time(2). Specify d for standard date format. See
768 768 date(1).
769 769
770 770 -v Verbose statistics. Reports usage statistics for
771 771 individual vdevs within the pool, in addition to the
772 772 pool-wide statistics.
773 773
774 774 zpool list [-Hpv] [-o property[,property]...] [-T u|d] [pool]...
775 775 [interval [count]]
776 776 Lists the given pools along with a health status and space usage.
777 777 If no pools are specified, all pools in the system are listed.
778 778 When given an interval, the information is printed every interval
779 779 seconds until ^C is pressed. If count is specified, the command
780 780 exits after count reports are printed.
781 781
782 782 -H Scripted mode. Do not display headers, and separate
783 783 fields by a single tab instead of arbitrary space.
784 784
785 785 -o property
786 786 Comma-separated list of properties to display. See the
787 787 Properties section for a list of valid properties. The
788 788 default list is name, size, used, available,
789 789 fragmentation, expandsize, capacity, dedupratio, health,
790 790 altroot.
791 791
792 792 -p Display numbers in parsable (exact) values.
793 793
 794  794              -T u|d  Display a time stamp. Specify u for a printed
 795  795                      representation of the internal representation of time.
 796  796                      See time(2). Specify d for standard date format. See
797 797 date(1).
798 798
799 799 -v Verbose statistics. Reports usage statistics for
800 800 individual vdevs within the pool, in addition to the
 801  801                      pool-wide statistics.
802 802
803 803 zpool offline [-t] pool device...
804 804 Takes the specified physical device offline. While the device is
805 805 offline, no attempt is made to read or write to the device. This
806 806 command is not applicable to spares.
807 807
808 808 -t Temporary. Upon reboot, the specified physical device
809 809 reverts to its previous state.
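               For instance, a device could be taken offline until the next
               reboot with a command of this form (pool and device names are
               placeholders):

                 # zpool offline -t tank c0t1d0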
810 810
811 811 zpool online [-e] pool device...
812 812 Brings the specified physical device online. This command is not
813 813 applicable to spares.
814 814
815 815 -e Expand the device to use all available space. If the
816 816 device is part of a mirror or raidz then all devices must
817 817 be expanded before the new space will become available to
818 818 the pool.
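               For instance, a previously offlined device could be brought
               back online and expanded with (names are placeholders):

                 # zpool online -e tank c0t1d0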
819 819
820 820 zpool reguid pool
821 821 Generates a new unique identifier for the pool. You must ensure
822 822 that all devices in this pool are online and healthy before
823 823 performing this action.
824 824
825 825 zpool reopen pool
826 826 Reopen all the vdevs associated with the pool.
827 827
828 828 zpool remove pool device...
829 829 Removes the specified device from the pool. This command
830 830 currently only supports removing hot spares, cache, and log
831 831 devices. A mirrored log device can be removed by specifying the
832 832 top-level mirror for the log. Non-log devices that are part of a
833 833 mirrored configuration can be removed using the zpool detach
834 834 command. Non-redundant and raidz devices cannot be removed from a
835 835 pool.
836 836
837 837 zpool replace [-f] pool device [new_device]
838 838 Replaces old_device with new_device. This is equivalent to
839 839 attaching new_device, waiting for it to resilver, and then
840 840 detaching old_device.
841 841
842 842 The size of new_device must be greater than or equal to the
843 843 minimum size of all the devices in a mirror or raidz
844 844 configuration.
845 845
846 846 new_device is required if the pool is not redundant. If
847 847 new_device is not specified, it defaults to old_device. This
848 848 form of replacement is useful after an existing disk has failed
849 849 and has been physically replaced. In this case, the new disk may
850 850 have the same /dev/dsk path as the old device, even though it is
851 851 actually a different disk. ZFS recognizes this.
852 852
 853  853              -f      Forces use of new_device, even if it appears to be in
854 854 use. Not all devices can be overridden in this manner.
855 855
856 856 zpool scrub [-s] pool...
857 857 Begins a scrub. The scrub examines all data in the specified
858 858 pools to verify that it checksums correctly. For replicated
859 859 (mirror or raidz) devices, ZFS automatically repairs any damage
860 860 discovered during the scrub. The zpool status command reports the
861 861 progress of the scrub and summarizes the results of the scrub
862 862 upon completion.
863 863
864 864 Scrubbing and resilvering are very similar operations. The
865 865 difference is that resilvering only examines data that ZFS knows
866 866 to be out of date (for example, when attaching a new device to a
867 867 mirror or replacing an existing device), whereas scrubbing
868 868 examines all data to discover silent errors due to hardware
869 869 faults or disk failure.
870 870
871 871 Because scrubbing and resilvering are I/O-intensive operations,
872 872 ZFS only allows one at a time. If a scrub is already in progress,
873 873 the zpool scrub command terminates it and starts a new scrub. If
874 874 a resilver is in progress, ZFS does not allow a scrub to be
875 875 started until the resilver completes.
876 876
877 877 -s Stop scrubbing.
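               For instance, a scrub of pool tank could be started, and later
               stopped, with (the pool name is a placeholder):

                 # zpool scrub tank
                 # zpool scrub -s tank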
878 878
879 879 zpool set property=value pool
880 880 Sets the given property on the specified pool. See the Properties
881 881 section for more information on what properties can be set and
882 882 acceptable values.
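               For instance, automatic expansion could be enabled on a pool
               with a command of this form (the pool name is a placeholder):

                 # zpool set autoexpand=on tank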
883 883
884 884 zpool split [-n] [-o property=value]... [-R root] pool newpool
885 885 Splits devices off pool creating newpool. All vdevs in pool must
886 886 be mirrors. At the time of the split, newpool will be a replica
887 887 of pool.
888 888
 889  889              -n      Do a dry run; do not actually perform the split. Print out
890 890 the expected configuration of newpool.
891 891
892 892 -o property=value
893 893 Sets the specified property for newpool. See the
894 894 Properties section for more information on the available
895 895 pool properties.
896 896
897 897 -R root
 898  898              -R root
                               Set altroot for newpool to root and automatically import
899 899 it.
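               For instance, a fully mirrored pool tank could be split into a
               new pool tank2 with (pool names are placeholders):

                 # zpool split tank tank2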
900 900
901 901 zpool status [-Dvx] [-T u|d] [pool]... [interval [count]]
902 902 Displays the detailed health status for the given pools. If no
903 903 pool is specified, then the status of each pool in the system is
904 904 displayed. For more information on pool and device health, see
905 905 the Device Failure and Recovery section.
906 906
907 907 If a scrub or resilver is in progress, this command reports the
908 908 percentage done and the estimated time to completion. Both of
909 909 these are only approximate, because the amount of data in the
910 910 pool and the other workloads on the system can change.
911 911
912 912 -D Display a histogram of deduplication statistics, showing
913 913 the allocated (physically present on disk) and referenced
914 914 (logically referenced in the pool) block counts and sizes
915 915 by reference count.
916 916
 917  917              -T u|d  Display a time stamp. Specify u for a printed
 918  918                      representation of the internal representation of time.
 919  919                      See time(2). Specify d for standard date format. See
920 920 date(1).
921 921
922 922 -v Displays verbose data error information, printing out a
923 923 complete list of all data errors since the last complete
924 924 pool scrub.
925 925
926 926 -x Only display status for pools that are exhibiting errors
927 927 or are otherwise unavailable. Warnings about pools not
928 928 using the latest on-disk format will not be included.
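               For instance, only pools that are exhibiting problems could be
               listed with:

                 # zpool status -x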
929 929
930 930 zpool upgrade
931 931 Displays pools which do not have all supported features enabled
932 932 and pools formatted using a legacy ZFS version number. These
933 933 pools can continue to be used, but some features may not be
934 934 available. Use zpool upgrade -a to enable all features on all
935 935 pools.
936 936
937 937 zpool upgrade -v
938 938 Displays legacy ZFS versions supported by the current software.
939 939 See zpool-features(5) for a description of feature flags features
940 940 supported by the current software.
941 941
942 942 zpool upgrade [-V version] -a|pool...
943 943 Enables all supported features on the given pool. Once this is
944 944 done, the pool will no longer be accessible on systems that do
945 945 not support feature flags. See zpool-features(5) for details on
946 946 compatibility with systems that support feature flags, but do not
947 947 support all features enabled on the pool.
948 948
949 949 -a Enables all supported features on all pools.
950 950
951 951 -V version
952 952 Upgrade to the specified legacy version. If the -V flag
953 953 is specified, no features will be enabled on the pool.
954 954 This option can only be used to increase the version
955 955 number up to the last supported legacy version number.
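               For instance, a pool could be upgraded to a specific legacy
               version with a command of this form (the version number 28 and
               pool name are only illustrative):

                 # zpool upgrade -V 28 tank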
956 956
957 957 EXIT STATUS
958 958 The following exit values are returned:
959 959
960 960 0 Successful completion.
961 961
962 962 1 An error occurred.
963 963
964 964 2 Invalid command line options were specified.
965 965
966 966 EXAMPLES
967 967 Example 1 Creating a RAID-Z Storage Pool
968 968 The following command creates a pool with a single raidz root
969 969 vdev that consists of six disks.
970 970
971 971 # zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
972 972
973 973 Example 2 Creating a Mirrored Storage Pool
974 974 The following command creates a pool with two mirrors, where each
975 975 mirror contains two disks.
976 976
977 977 # zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
978 978
979 979 Example 3 Creating a ZFS Storage Pool by Using Slices
980 980 The following command creates an unmirrored pool using two disk
981 981 slices.
982 982
983 983 # zpool create tank /dev/dsk/c0t0d0s1 c0t1d0s4
984 984
985 985 Example 4 Creating a ZFS Storage Pool by Using Files
986 986 The following command creates an unmirrored pool using files.
987 987 While not recommended, a pool based on files can be useful for
988 988 experimental purposes.
989 989
990 990 # zpool create tank /path/to/file/a /path/to/file/b
991 991
992 992 Example 5 Adding a Mirror to a ZFS Storage Pool
993 993 The following command adds two mirrored disks to the pool tank,
994 994 assuming the pool is already made up of two-way mirrors. The
995 995 additional space is immediately available to any datasets within
996 996 the pool.
997 997
998 998 # zpool add tank mirror c1t0d0 c1t1d0
999 999
1000 1000 Example 6 Listing Available ZFS Storage Pools
1001 1001 The following command lists all available pools on the system. In
1002 1002 this case, the pool zion is faulted due to a missing device. The
1003 1003 results from this command are similar to the following:
1004 1004
1005 1005 # zpool list
1006 1006 NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
1007 1007 rpool 19.9G 8.43G 11.4G 33% - 42% 1.00x ONLINE -
1008 1008 tank 61.5G 20.0G 41.5G 48% - 32% 1.00x ONLINE -
1009 1009 zion - - - - - - - FAULTED -
1010 1010
1011 1011 Example 7 Destroying a ZFS Storage Pool
1012 1012 The following command destroys the pool tank and any datasets
1013 1013 contained within.
1014 1014
1015 1015 # zpool destroy -f tank
1016 1016
1017 1017 Example 8 Exporting a ZFS Storage Pool
1018 1018 The following command exports the devices in pool tank so that
1019 1019 they can be relocated or later imported.
1020 1020
1021 1021 # zpool export tank
1022 1022
1023 1023 Example 9 Importing a ZFS Storage Pool
1024 1024 The following command displays available pools, and then imports
1025 1025 the pool tank for use on the system. The results from this
1026 1026 command are similar to the following:
1027 1027
1028 1028 # zpool import
1029 1029 pool: tank
1030 1030 id: 15451357997522795478
1031 1031 state: ONLINE
1032 1032 action: The pool can be imported using its name or numeric identifier.
1033 1033 config:
1034 1034
1035 1035 tank ONLINE
1036 1036 mirror ONLINE
1037 1037 c1t2d0 ONLINE
1038 1038 c1t3d0 ONLINE
1039 1039
1040 1040 # zpool import tank
1041 1041
1042 1042 Example 10 Upgrading All ZFS Storage Pools to the Current Version
1043 1043          The following command upgrades all ZFS storage pools to the
1044 1044 current version of the software.
1045 1045
1046 1046 # zpool upgrade -a
1047 1047 This system is currently running ZFS version 2.
1048 1048
1049 1049 Example 11 Managing Hot Spares
1050 1050 The following command creates a new pool with an available hot
1051 1051 spare:
1052 1052
1053 1053 # zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0
1054 1054
1055 1055 If one of the disks were to fail, the pool would be reduced to
1056 1056 the degraded state. The failed device can be replaced using the
1057 1057 following command:
1058 1058
1059 1059 # zpool replace tank c0t0d0 c0t3d0
1060 1060
1061 1061 Once the data has been resilvered, the spare is automatically
1062 1062          removed and is made available should another device fail. The
1063 1063 hot spare can be permanently removed from the pool using the
1064 1064 following command:
1065 1065
1066 1066 # zpool remove tank c0t2d0
1067 1067
1068 1068 Example 12 Creating a ZFS Pool with Mirrored Separate Intent Logs
1069 1069 The following command creates a ZFS storage pool consisting of
1070 1070 two, two-way mirrors and mirrored log devices:
1071 1071
1072 1072 # zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \
1073 1073 c4d0 c5d0
1074 1074
1075 1075 Example 13 Adding Cache Devices to a ZFS Pool
1076 1076 The following command adds two disks for use as cache devices to
1077 1077 a ZFS storage pool:
1078 1078
1079 1079 # zpool add pool cache c2d0 c3d0
1080 1080
1081 1081 Once added, the cache devices gradually fill with content from
1082 1082 main memory. Depending on the size of your cache devices, it
1083 1083 could take over an hour for them to fill. Capacity and reads can
1084 1084 be monitored using the iostat option as follows:
1085 1085
1086 1086 # zpool iostat -v pool 5
1087 1087
1088 1088 Example 14 Removing a Mirrored Log Device
1089 1089 The following command removes the mirrored log device mirror-2.
1090 1090 Given this configuration:
1091 1091
1092 1092 pool: tank
1093 1093 state: ONLINE
1094 1094 scrub: none requested
1095 1095 config:
1096 1096
1097 1097 NAME STATE READ WRITE CKSUM
1098 1098 tank ONLINE 0 0 0
1099 1099 mirror-0 ONLINE 0 0 0
1100 1100 c6t0d0 ONLINE 0 0 0
1101 1101 c6t1d0 ONLINE 0 0 0
1102 1102 mirror-1 ONLINE 0 0 0
1103 1103 c6t2d0 ONLINE 0 0 0
1104 1104 c6t3d0 ONLINE 0 0 0
1105 1105 logs
1106 1106 mirror-2 ONLINE 0 0 0
1107 1107 c4t0d0 ONLINE 0 0 0
1108 1108 c4t1d0 ONLINE 0 0 0
1109 1109
1110 1110 The command to remove the mirrored log mirror-2 is:
1111 1111
1112 1112 # zpool remove tank mirror-2
1113 1113
1114 1114 Example 15 Displaying expanded space on a device
1115 1115          The following command displays the detailed information for the
1116 1116 pool data. This pool is comprised of a single raidz vdev where
1117 1117 one of its devices increased its capacity by 10GB. In this
1118 1118 example, the pool will not be able to utilize this extra capacity
1119 1119 until all the devices under the raidz vdev have been expanded.
1120 1120
1121 1121 # zpool list -v data
1122 1122 NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
1123 1123 data 23.9G 14.6G 9.30G 48% - 61% 1.00x ONLINE -
1124 1124 raidz1 23.9G 14.6G 9.30G 48% -
1125 1125 c1t1d0 - - - - -
1126 1126 c1t2d0 - - - - 10G
1127 1127 c1t3d0 - - - - -
1128 1128
1129 1129 INTERFACE STABILITY
1130 1130 Evolving
1131 1131
1132 1132 SEE ALSO
1133 1133 zfs(1M), attributes(5), zpool-features(5)
1134 1134
1135 -illumos February 15, 2016 illumos
1135 +illumos March 25, 2016 illumos