6781 zpool man page needs updated to remove duplicate entry of "cannot be" where it discusses cache devices
Reviewed by: Toomas Soome <tsoome@me.com>
Reviewed by: Robert Mustacchi <rm@joyent.com>
--- old/usr/src/man/man1m/zpool.1m
+++ new/usr/src/man/man1m/zpool.1m
1 1 .\"
2 2 .\" CDDL HEADER START
3 3 .\"
4 4 .\" The contents of this file are subject to the terms of the
5 5 .\" Common Development and Distribution License (the "License").
6 6 .\" You may not use this file except in compliance with the License.
7 7 .\"
8 8 .\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 9 .\" or http://www.opensolaris.org/os/licensing.
10 10 .\" See the License for the specific language governing permissions
11 11 .\" and limitations under the License.
12 12 .\"
13 13 .\" When distributing Covered Code, include this CDDL HEADER in each
14 14 .\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 15 .\" If applicable, add the following below this CDDL HEADER, with the
16 16 .\" fields enclosed by brackets "[]" replaced with your own identifying
17 17 .\" information: Portions Copyright [yyyy] [name of copyright owner]
18 18 .\"
19 19 .\" CDDL HEADER END
20 20 .\"
21 21 .\"
22 22 .\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
23 23 .\" Copyright (c) 2013 by Delphix. All rights reserved.
24 24 .\" Copyright 2016 Nexenta Systems, Inc.
25 25 .\"
26 -.Dd February 15, 2016
26 +.Dd March 25, 2016
27 27 .Dt ZPOOL 1M
28 28 .Os
29 29 .Sh NAME
30 30 .Nm zpool
31 31 .Nd configure ZFS storage pools
32 32 .Sh SYNOPSIS
33 33 .Nm
34 34 .Fl \?
35 35 .Nm
36 36 .Cm add
37 37 .Op Fl fn
38 38 .Ar pool vdev Ns ...
39 39 .Nm
40 40 .Cm attach
41 41 .Op Fl f
42 42 .Ar pool device new_device
43 43 .Nm
44 44 .Cm clear
45 45 .Ar pool
46 46 .Op Ar device
47 47 .Nm
48 48 .Cm create
49 49 .Op Fl dfn
50 50 .Op Fl m Ar mountpoint
51 51 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
52 52 .Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
53 53 .Op Fl R Ar root
54 54 .Ar pool vdev Ns ...
55 55 .Nm
56 56 .Cm destroy
57 57 .Op Fl f
58 58 .Ar pool
59 59 .Nm
60 60 .Cm detach
61 61 .Ar pool device
62 62 .Nm
63 63 .Cm export
64 64 .Op Fl f
65 65 .Ar pool Ns ...
66 66 .Nm
67 67 .Cm get
68 68 .Op Fl Hp
69 69 .Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
70 70 .Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
71 71 .Ar pool Ns ...
72 72 .Nm
73 73 .Cm history
74 74 .Op Fl il
75 75 .Oo Ar pool Oc Ns ...
76 76 .Nm
77 77 .Cm import
78 78 .Op Fl D
79 79 .Op Fl d Ar dir
80 80 .Nm
81 81 .Cm import
82 82 .Fl a
83 83 .Op Fl DfmN
84 84 .Op Fl F Op Fl n
85 85 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
86 86 .Op Fl o Ar mntopts
87 87 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
88 88 .Op Fl R Ar root
89 89 .Nm
90 90 .Cm import
91 91 .Op Fl Dfm
92 92 .Op Fl F Op Fl n
93 93 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
94 94 .Op Fl o Ar mntopts
95 95 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
96 96 .Op Fl R Ar root
97 97 .Ar pool Ns | Ns Ar id
98 98 .Op Ar newpool
99 99 .Nm
100 100 .Cm iostat
101 101 .Op Fl v
102 102 .Op Fl T Sy u Ns | Ns Sy d
103 103 .Oo Ar pool Oc Ns ...
104 104 .Op Ar interval Op Ar count
105 105 .Nm
106 106 .Cm list
107 107 .Op Fl Hpv
108 108 .Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
109 109 .Op Fl T Sy u Ns | Ns Sy d
110 110 .Oo Ar pool Oc Ns ...
111 111 .Op Ar interval Op Ar count
112 112 .Nm
113 113 .Cm offline
114 114 .Op Fl t
115 115 .Ar pool Ar device Ns ...
116 116 .Nm
117 117 .Cm online
118 118 .Op Fl e
119 119 .Ar pool Ar device Ns ...
120 120 .Nm
121 121 .Cm reguid
122 122 .Ar pool
123 123 .Nm
124 124 .Cm reopen
125 125 .Ar pool
126 126 .Nm
127 127 .Cm remove
128 128 .Ar pool Ar device Ns ...
129 129 .Nm
130 130 .Cm replace
131 131 .Op Fl f
132 132 .Ar pool Ar device Op Ar new_device
133 133 .Nm
134 134 .Cm scrub
135 135 .Op Fl s
136 136 .Ar pool Ns ...
137 137 .Nm
138 138 .Cm set
139 139 .Ar property Ns = Ns Ar value
140 140 .Ar pool
141 141 .Nm
142 142 .Cm split
143 143 .Op Fl n
144 144 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
145 145 .Op Fl R Ar root
146 146 .Ar pool newpool
147 147 .Nm
148 148 .Cm status
149 149 .Op Fl Dvx
150 150 .Op Fl T Sy u Ns | Ns Sy d
151 151 .Oo Ar pool Oc Ns ...
152 152 .Op Ar interval Op Ar count
153 153 .Nm
154 154 .Cm upgrade
155 155 .Nm
156 156 .Cm upgrade
157 157 .Fl v
158 158 .Nm
159 159 .Cm upgrade
160 160 .Op Fl V Ar version
161 161 .Fl a Ns | Ns Ar pool Ns ...
162 162 .Sh DESCRIPTION
163 163 The
164 164 .Nm
165 165 command configures ZFS storage pools. A storage pool is a collection of devices
166 166 that provides physical storage and data replication for ZFS datasets. All
167 167 datasets within a storage pool share the same space. See
168 168 .Xr zfs 1M
169 169 for information on managing datasets.
170 170 .Ss Virtual Devices (vdevs)
171 171 A "virtual device" describes a single device or a collection of devices
172 172 organized according to certain performance and fault characteristics. The
173 173 following virtual devices are supported:
174 174 .Bl -tag -width Ds
175 175 .It Sy disk
176 176 A block device, typically located under
177 177 .Pa /dev/dsk .
178 178 ZFS can use individual slices or partitions, though the recommended mode of
179 179 operation is to use whole disks. A disk can be specified by a full path, or it
180 180 can be a shorthand name
181 181 .Po the relative portion of the path under
182 182 .Pa /dev/dsk
183 183 .Pc .
184 184 A whole disk can be specified by omitting the slice or partition designation.
185 185 For example,
186 186 .Pa c0t0d0
187 187 is equivalent to
188 188 .Pa /dev/dsk/c0t0d0s2 .
189 189 When given a whole disk, ZFS automatically labels the disk, if necessary.
190 190 .It Sy file
191 191 A regular file. The use of files as a backing store is strongly discouraged. It
192 192 is designed primarily for experimental purposes, as the fault tolerance of a
193 193 file is only as good as the file system of which it is a part. A file must be
194 194 specified by a full path.
195 195 .It Sy mirror
196 196 A mirror of two or more devices. Data is replicated in an identical fashion
197 197 across all components of a mirror. A mirror with N disks of size X can hold X
198 198 bytes and can withstand (N-1) devices failing before data integrity is
199 199 compromised.
200 200 .It Sy raidz , raidz1 , raidz2 , raidz3
201 201 A variation on RAID-5 that allows for better distribution of parity and
202 202 eliminates the RAID-5
203 203 .Qq write hole
204 204 .Pq in which data and parity become inconsistent after a power loss .
205 205 Data and parity are striped across all disks within a raidz group.
206 206 .Pp
207 207 A raidz group can have single-, double-, or triple-parity, meaning that the
208 208 raidz group can sustain one, two, or three failures, respectively, without
209 209 losing any data. The
210 210 .Sy raidz1
211 211 vdev type specifies a single-parity raidz group; the
212 212 .Sy raidz2
213 213 vdev type specifies a double-parity raidz group; and the
214 214 .Sy raidz3
215 215 vdev type specifies a triple-parity raidz group. The
216 216 .Sy raidz
217 217 vdev type is an alias for
218 218 .Sy raidz1 .
219 219 .Pp
220 220 A raidz group with N disks of size X with P parity disks can hold approximately
221 221 (N-P)*X bytes and can withstand P device(s) failing before data integrity is
222 222 compromised. The minimum number of devices in a raidz group is one more than
223 223 the number of parity disks. The recommended number is between 3 and 9 to help
224 224 increase performance.
225 225 .It Sy spare
226 226 A special pseudo-vdev which keeps track of available hot spares for a pool. For
227 227 more information, see the
228 228 .Sx Hot Spares
229 229 section.
230 230 .It Sy log
231 231 A separate intent log device. If more than one log device is specified, then
232 232 writes are load-balanced between devices. Log devices can be mirrored. However,
233 233 raidz vdev types are not supported for the intent log. For more information,
234 234 see the
235 235 .Sx Intent Log
236 236 section.
237 237 .It Sy cache
238 -A device used to cache storage pool data. A cache device cannot be cannot be
239 -configured as a mirror or raidz group. For more information, see the
238 +A device used to cache storage pool data. A cache device cannot be configured
239 +as a mirror or raidz group. For more information, see the
240 240 .Sx Cache Devices
241 241 section.
242 242 .El
243 243 .Pp
244 244 Virtual devices cannot be nested, so a mirror or raidz virtual device can only
245 245 contain files or disks. Mirrors of mirrors
246 246 .Pq or other combinations
247 247 are not allowed.
248 248 .Pp
249 249 A pool can have any number of virtual devices at the top of the configuration
250 250 .Po known as
251 251 .Qq root vdevs
252 252 .Pc .
253 253 Data is dynamically distributed across all top-level devices to balance data
254 254 among devices. As new virtual devices are added, ZFS automatically places data
255 255 on the newly available devices.
256 256 .Pp
257 257 Virtual devices are specified one at a time on the command line, separated by
258 258 whitespace. The keywords
259 259 .Sy mirror
260 260 and
261 261 .Sy raidz
262 262 are used to distinguish where a group ends and another begins. For example,
263 263 the following creates two root vdevs, each a mirror of two disks:
264 264 .Bd -literal
265 265 # zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0
266 266 .Ed
267 267 .Ss Device Failure and Recovery
268 268 ZFS supports a rich set of mechanisms for handling device failure and data
269 269 corruption. All metadata and data is checksummed, and ZFS automatically repairs
270 270 bad data from a good copy when corruption is detected.
271 271 .Pp
272 272 In order to take advantage of these features, a pool must make use of some form
273 273 of redundancy, using either mirrored or raidz groups. While ZFS supports
274 274 running in a non-redundant configuration, where each root vdev is simply a disk
275 275 or file, this is strongly discouraged. A single case of bit corruption can
276 276 render some or all of your data unavailable.
277 277 .Pp
278 278 A pool's health status is described by one of three states: online, degraded,
279 279 or faulted. An online pool has all devices operating normally. A degraded pool
280 280 is one in which one or more devices have failed, but the data is still
281 281 available due to a redundant configuration. A faulted pool has corrupted
282 282 metadata, or one or more faulted devices, and insufficient replicas to continue
283 283 functioning.
284 284 .Pp
285 285 The health of the top-level vdev, such as mirror or raidz device, is
286 286 potentially impacted by the state of its associated vdevs, or component
287 287 devices. A top-level vdev or component device is in one of the following
288 288 states:
289 289 .Bl -tag -width "DEGRADED"
290 290 .It Sy DEGRADED
291 291 One or more top-level vdevs is in the degraded state because one or more
292 292 component devices are offline. Sufficient replicas exist to continue
293 293 functioning.
294 294 .Pp
295 295 One or more component devices is in the degraded or faulted state, but
296 296 sufficient replicas exist to continue functioning. The underlying conditions
297 297 are as follows:
298 298 .Bl -bullet
299 299 .It
300 300 The number of checksum errors exceeds acceptable levels and the device is
301 301 degraded as an indication that something may be wrong. ZFS continues to use the
302 302 device as necessary.
303 303 .It
304 304 The number of I/O errors exceeds acceptable levels. The device could not be
305 305 marked as faulted because there are insufficient replicas to continue
306 306 functioning.
307 307 .El
308 308 .It Sy FAULTED
309 309 One or more top-level vdevs is in the faulted state because one or more
310 310 component devices are offline. Insufficient replicas exist to continue
311 311 functioning.
312 312 .Pp
313 313 One or more component devices is in the faulted state, and insufficient
314 314 replicas exist to continue functioning. The underlying conditions are as
315 315 follows:
316 316 .Bl -bullet
317 317 .It
318 318 The device could be opened, but the contents did not match expected values.
319 319 .It
320 320 The number of I/O errors exceeds acceptable levels and the device is faulted to
321 321 prevent further use of the device.
322 322 .El
323 323 .It Sy OFFLINE
324 324 The device was explicitly taken offline by the
325 325 .Nm zpool Cm offline
326 326 command.
327 327 .It Sy ONLINE
328 328 The device is online and functioning.
329 329 .It Sy REMOVED
330 330 The device was physically removed while the system was running. Device removal
331 331 detection is hardware-dependent and may not be supported on all platforms.
332 332 .It Sy UNAVAIL
333 333 The device could not be opened. If a pool is imported when a device was
334 334 unavailable, then the device will be identified by a unique identifier instead
335 335 of its path since the path was never correct in the first place.
336 336 .El
337 337 .Pp
338 338 If a device is removed and later re-attached to the system, ZFS attempts
339 339 to put the device online automatically. Device attach detection is
340 340 hardware-dependent and might not be supported on all platforms.
341 341 .Ss Hot Spares
342 342 ZFS allows devices to be associated with pools as
343 343 .Qq hot spares .
344 344 These devices are not actively used in the pool, but when an active device
345 345 fails, it is automatically replaced by a hot spare. To create a pool with hot
346 346 spares, specify a
347 347 .Sy spare
348 348 vdev with any number of devices. For example,
349 349 .Bd -literal
350 350 # zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0
351 351 .Ed
352 352 .Pp
353 353 Spares can be shared across multiple pools, and can be added with the
354 354 .Nm zpool Cm add
355 355 command and removed with the
356 356 .Nm zpool Cm remove
357 357 command. Once a spare replacement is initiated, a new
358 358 .Sy spare
359 359 vdev is created within the configuration that will remain there until the
360 360 original device is replaced. At this point, the hot spare becomes available
361 361 again if another device fails.
362 362 .Pp
363 363 If a pool has a shared spare that is currently being used, the pool cannot be
364 364 exported since other pools may use this shared spare, which may lead to
365 365 potential data corruption.
366 366 .Pp
367 367 An in-progress spare replacement can be cancelled by detaching the hot spare.
368 368 If the original faulted device is detached, then the hot spare assumes its
369 369 place in the configuration, and is removed from the spare list of all active
370 370 pools.
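To illustrate, assuming the spare c2d0 from the example above is actively replacing a failed device, the in-progress replacement could be cancelled with:
  # zpool detach pool c2d0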
371 371 .Pp
372 372 Spares cannot replace log devices.
373 373 .Ss Intent Log
374 374 The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
375 375 transactions. For instance, databases often require their transactions to be on
376 376 stable storage devices when returning from a system call. NFS and other
377 377 applications can also use
378 378 .Xr fsync 3C
379 379 to ensure data stability. By default, the intent log is allocated from blocks
380 380 within the main pool. However, it might be possible to get better performance
381 381 using separate intent log devices such as NVRAM or a dedicated disk. For
382 382 example:
383 383 .Bd -literal
384 384 # zpool create pool c0d0 c1d0 log c2d0
385 385 .Ed
386 386 .Pp
387 387 Multiple log devices can also be specified, and they can be mirrored. See the
388 388 .Sx EXAMPLES
389 389 section for an example of mirroring multiple log devices.
390 390 .Pp
391 391 Log devices can be added, replaced, attached, detached, and imported and
392 392 exported as part of the larger pool. Mirrored log devices can be removed by
393 393 specifying the top-level mirror for the log.
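As a sketch, assuming zpool status reports the mirrored log as mirror-1, the whole log mirror could be removed with:
  # zpool remove pool mirror-1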
394 394 .Ss Cache Devices
395 395 Devices can be added to a storage pool as
396 396 .Qq cache devices .
397 397 These devices provide an additional layer of caching between main memory and
398 398 disk. For read-heavy workloads, where the working set size is much larger than
399 399 what can be cached in main memory, using cache devices allows much more of this
400 400 working set to be served from low latency media. Using cache devices provides
401 401 the greatest performance improvement for random read-workloads of mostly static
402 402 content.
403 403 .Pp
404 404 To create a pool with cache devices, specify a
405 405 .Sy cache
406 406 vdev with any number of devices. For example:
407 407 .Bd -literal
408 408 # zpool create pool c0d0 c1d0 cache c2d0 c3d0
409 409 .Ed
410 410 .Pp
411 411 Cache devices cannot be mirrored or part of a raidz configuration. If a read
412 412 error is encountered on a cache device, that read I/O is reissued to the
413 413 original storage pool device, which might be part of a mirrored or raidz
414 414 configuration.
415 415 .Pp
416 416 The content of the cache devices is considered volatile, as is the case with
417 417 other system caches.
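Cache devices can also be added to and removed from an existing pool after creation; an illustrative sequence (device name assumed):
  # zpool add pool cache c4d0
  # zpool remove pool c4d0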
418 418 .Ss Properties
419 419 Each pool has several properties associated with it. Some properties are
420 420 read-only statistics while others are configurable and change the behavior of
421 421 the pool.
422 422 .Pp
423 423 The following are read-only properties:
424 424 .Bl -tag -width Ds
425 425 .It Sy available
426 426 Amount of storage available within the pool. This property can also be referred
427 427 to by its shortened column name,
428 428 .Sy avail .
429 429 .It Sy capacity
430 430 Percentage of pool space used. This property can also be referred to by its
431 431 shortened column name,
432 432 .Sy cap .
433 433 .It Sy expandsize
434 434 Amount of uninitialized space within the pool or device that can be used to
435 435 increase the total capacity of the pool. Uninitialized space consists of
436 436 any space on an EFI labeled vdev which has not been brought online
437 437 .Po e.g., using
438 438 .Nm zpool Cm online Fl e
439 439 .Pc .
440 440 This space occurs when a LUN is dynamically expanded.
441 441 .It Sy fragmentation
442 442 The amount of fragmentation in the pool.
443 443 .It Sy free
444 444 The amount of free space available in the pool.
445 445 .It Sy freeing
446 446 After a file system or snapshot is destroyed, the space it was using is
447 447 returned to the pool asynchronously.
448 448 .Sy freeing
449 449 is the amount of space remaining to be reclaimed. Over time
450 450 .Sy freeing
451 451 will decrease while
452 452 .Sy free
453 453 increases.
454 454 .It Sy health
455 455 The current health of the pool. Health can be one of
456 456 .Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
457 457 .It Sy guid
458 458 A unique identifier for the pool.
459 459 .It Sy size
460 460 Total size of the storage pool.
461 461 .It Sy unsupported@ Ns Em feature_guid
462 462 Information about unsupported features that are enabled on the pool. See
463 463 .Xr zpool-features 5
464 464 for details.
465 465 .It Sy used
466 466 Amount of storage space used within the pool.
467 467 .El
468 468 .Pp
469 469 The space usage properties report actual physical space available to the
470 470 storage pool. The physical space can be different from the total amount of
471 471 space that any contained datasets can actually use. The amount of space used in
472 472 a raidz configuration depends on the characteristics of the data being
473 473 written. In addition, ZFS reserves some space for internal accounting
474 474 that the
475 475 .Xr zfs 1M
476 476 command takes into account, but the
477 477 .Nm
478 478 command does not. For non-full pools of a reasonable size, these effects should
479 479 be invisible. For small pools, or pools that are close to being completely
480 480 full, these discrepancies may become more noticeable.
481 481 .Pp
482 482 The following property can be set at creation time and import time:
483 483 .Bl -tag -width Ds
484 484 .It Sy altroot
485 485 Alternate root directory. If set, this directory is prepended to any mount
486 486 points within the pool. This can be used when examining an unknown pool where
487 487 the mount points cannot be trusted, or in an alternate boot environment, where
488 488 the typical paths are not valid.
489 489 .Sy altroot
490 490 is not a persistent property. It is valid only while the system is up. Setting
491 491 .Sy altroot
492 492 defaults to using
493 493 .Sy cachefile Ns = Ns Sy none ,
494 494 though this may be overridden using an explicit setting.
495 495 .El
496 496 .Pp
497 497 The following property can be set only at import time:
498 498 .Bl -tag -width Ds
499 499 .It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
500 500 If set to
501 501 .Sy on ,
502 502 the pool will be imported in read-only mode. This property can also be referred
503 503 to by its shortened column name,
504 504 .Sy rdonly .
505 505 .El
506 506 .Pp
507 507 The following properties can be set at creation time and import time, and later
508 508 changed with the
509 509 .Nm zpool Cm set
510 510 command:
511 511 .Bl -tag -width Ds
512 512 .It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
513 513 Controls automatic pool expansion when the underlying LUN is grown. If set to
514 514 .Sy on ,
515 515 the pool will be resized according to the size of the expanded device. If the
516 516 device is part of a mirror or raidz then all devices within that mirror/raidz
517 517 group must be expanded before the new space is made available to the pool. The
518 518 default behavior is
519 519 .Sy off .
520 520 This property can also be referred to by its shortened column name,
521 521 .Sy expand .
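For example, automatic expansion could be enabled on an existing pool with (pool name illustrative):
  # zpool set autoexpand=on tank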
522 522 .It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
523 523 Controls automatic device replacement. If set to
524 524 .Sy off ,
525 525 device replacement must be initiated by the administrator by using the
526 526 .Nm zpool Cm replace
527 527 command. If set to
528 528 .Sy on ,
529 529 any new device, found in the same physical location as a device that previously
530 530 belonged to the pool, is automatically formatted and replaced. The default
531 531 behavior is
532 532 .Sy off .
533 533 This property can also be referred to by its shortened column name,
534 534 .Sy replace .
535 535 .It Sy bootfs Ns = Ns Ar pool Ns / Ns Ar dataset
536 536 Identifies the default bootable dataset for the root pool. This property is
537 537 expected to be set mainly by the installation and upgrade programs.
538 538 .It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
539 539 Controls the location of where the pool configuration is cached. Discovering
540 540 all pools on system startup requires a cached copy of the configuration data
541 541 that is stored on the root file system. All pools in this cache are
542 542 automatically imported when the system boots. Some environments, such as
543 543 install and clustering, need to cache this information in a different location
544 544 so that pools are not automatically imported. Setting this property caches the
545 545 pool configuration in a different location that can later be imported with
546 546 .Nm zpool Cm import Fl c .
547 547 Setting it to the special value
548 548 .Sy none
549 549 creates a temporary pool that is never cached, and the special value
550 550 .Qq
551 551 .Pq empty string
552 552 uses the default location.
553 553 .Pp
554 554 Multiple pools can share the same cache file. Because the kernel destroys and
555 555 recreates this file when pools are added and removed, care should be taken when
556 556 attempting to access this file. When the last pool using a
557 557 .Sy cachefile
558 558 is exported or destroyed, the file is removed.
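A hypothetical use, with an assumed path, is caching a pool's configuration outside the default location and importing from that cache later:
  # zpool create -o cachefile=/var/tmp/pools.cache tank c0t0d0
  # zpool import -c /var/tmp/pools.cache tank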
559 559 .It Sy comment Ns = Ns Ar text
560 560 A text string consisting of printable ASCII characters that will be stored
561 561 such that it is available even if the pool becomes faulted. An administrator
562 562 can provide additional information about a pool using this property.
563 563 .It Sy dedupditto Ns = Ns Ar number
564 564 Threshold for the number of block ditto copies. If the reference count for a
565 565 deduplicated block increases above this number, a new ditto copy of this block
566 566 is automatically stored. The default setting is
567 567 .Sy 0
568 568 which causes no ditto copies to be created for deduplicated blocks. The minimum
569 569 legal nonzero setting is
570 570 .Sy 100 .
571 571 .It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
572 572 Controls whether a non-privileged user is granted access based on the dataset
573 573 permissions defined on the dataset. See
574 574 .Xr zfs 1M
575 575 for more information on ZFS delegated administration.
576 576 .It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
577 577 Controls the system behavior in the event of catastrophic pool failure. This
578 578 condition is typically a result of a loss of connectivity to the underlying
579 579 storage device(s) or a failure of all devices within the pool. The behavior of
580 580 such an event is determined as follows:
581 581 .Bl -tag -width "continue"
582 582 .It Sy wait
583 583 Blocks all I/O access until the device connectivity is recovered and the errors
584 584 are cleared. This is the default behavior.
585 585 .It Sy continue
586 586 Returns
587 587 .Er EIO
588 588 to any new write I/O requests but allows reads to any of the remaining healthy
589 589 devices. Any write requests that have yet to be committed to disk would be
590 590 blocked.
591 591 .It Sy panic
592 592 Prints out a message to the console and generates a system crash dump.
593 593 .El
594 594 .It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
595 595 The value of this property is the current state of
596 596 .Ar feature_name .
597 597 The only valid value when setting this property is
598 598 .Sy enabled
599 599 which moves
600 600 .Ar feature_name
601 601 to the enabled state. See
602 602 .Xr zpool-features 5
603 603 for details on feature states.
604 604 .It Sy listsnaps Ns = Ns Sy on Ns | Ns Sy off
605 605 Controls whether information about snapshots associated with this pool is
606 606 output when
607 607 .Nm zfs Cm list
608 608 is run without the
609 609 .Fl t
610 610 option. The default value is
611 611 .Sy off .
612 612 .It Sy version Ns = Ns Ar version
613 613 The current on-disk version of the pool. This can be increased, but never
614 614 decreased. The preferred method of updating pools is with the
615 615 .Nm zpool Cm upgrade
616 616 command, though this property can be used when a specific version is needed for
617 617 backwards compatibility. Once feature flags are enabled on a pool, this property
618 618 will no longer have a value.
619 619 .El
620 620 .Ss Subcommands
621 621 All subcommands that modify state are logged persistently to the pool in their
622 622 original form.
623 623 .Pp
624 624 The
625 625 .Nm
626 626 command provides subcommands to create and destroy storage pools, add capacity
627 627 to storage pools, and provide information about the storage pools. The
628 628 following subcommands are supported:
629 629 .Bl -tag -width Ds
630 630 .It Xo
631 631 .Nm
632 632 .Fl \?
633 633 .Xc
634 634 Displays a help message.
635 635 .It Xo
636 636 .Nm
637 637 .Cm add
638 638 .Op Fl fn
639 639 .Ar pool vdev Ns ...
640 640 .Xc
641 641 Adds the specified virtual devices to the given pool. The
642 642 .Ar vdev
643 643 specification is described in the
644 644 .Sx Virtual Devices
645 645 section. The behavior of the
646 646 .Fl f
647 647 option, and the device checks performed are described in the
648 648 .Nm zpool Cm create
649 649 subcommand.
650 650 .Bl -tag -width Ds
651 651 .It Fl f
652 652 Forces use of
653 653 .Ar vdev Ns s ,
654 654 even if they appear in use or specify a conflicting replication level. Not all
655 655 devices can be overridden in this manner.
656 656 .It Fl n
657 657 Displays the configuration that would be used without actually adding the
658 658 .Ar vdev Ns s .
659 659 The actual pool creation can still fail due to insufficient privileges or
660 660 device sharing.
661 661 .El
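For instance, a dry run of adding a mirrored pair (device names assumed) would be:
  # zpool add -n tank mirror c2t0d0 c2t1d0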
662 662 .It Xo
663 663 .Nm
664 664 .Cm attach
665 665 .Op Fl f
666 666 .Ar pool device new_device
667 667 .Xc
668 668 Attaches
669 669 .Ar new_device
670 670 to the existing
671 671 .Ar device .
672 672 The existing device cannot be part of a raidz configuration. If
673 673 .Ar device
674 674 is not currently part of a mirrored configuration,
675 675 .Ar device
676 676 automatically transforms into a two-way mirror of
677 677 .Ar device
678 678 and
679 679 .Ar new_device .
680 680 If
681 681 .Ar device
682 682 is part of a two-way mirror, attaching
683 683 .Ar new_device
684 684 creates a three-way mirror, and so on. In either case,
685 685 .Ar new_device
686 686 begins to resilver immediately.
687 687 .Bl -tag -width Ds
688 688 .It Fl f
689 689 Forces use of
690 690 .Ar new_device ,
691 691 even if it appears to be in use. Not all devices can be overridden in this
692 692 manner.
693 693 .El
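As an illustration, attaching a second disk to convert a single-disk vdev into a two-way mirror (names assumed):
  # zpool attach tank c0t0d0 c0t1d0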
694 694 .It Xo
695 695 .Nm
696 696 .Cm clear
697 697 .Ar pool
698 698 .Op Ar device
699 699 .Xc
700 700 Clears device errors in a pool. If no arguments are specified, all device
701 701 errors within the pool are cleared. If one or more devices is specified, only
702 702 those errors associated with the specified device or devices are cleared.
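For example, clearing errors pool-wide or for a single device (names illustrative):
  # zpool clear tank
  # zpool clear tank c0t0d0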
703 703 .It Xo
704 704 .Nm
705 705 .Cm create
706 706 .Op Fl dfn
707 707 .Op Fl m Ar mountpoint
708 708 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
709 709 .Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
710 710 .Op Fl R Ar root
711 711 .Ar pool vdev Ns ...
712 712 .Xc
713 713 Creates a new storage pool containing the virtual devices specified on the
714 714 command line. The pool name must begin with a letter, and can only contain
715 715 alphanumeric characters as well as underscore
716 716 .Pq Qq Sy _ ,
717 717 dash
718 718 .Pq Qq Sy - ,
719 719 and period
720 720 .Pq Qq Sy \&. .
721 721 The pool names
722 722 .Sy mirror ,
723 723 .Sy raidz ,
724 724 .Sy spare
725 725 and
726 726 .Sy log
727 727 are reserved, as are names beginning with the pattern
728 728 .Sy c[0-9] .
729 729 The
730 730 .Ar vdev
731 731 specification is described in the
732 732 .Sx Virtual Devices
733 733 section.
734 734 .Pp
735 735 The command verifies that each device specified is accessible and not currently
736 736 in use by another subsystem. There are some uses, such as being currently
737 737 mounted, or specified as the dedicated dump device, that prevent a device from
738 738 ever being used by ZFS. Other uses, such as having a preexisting UFS file
739 739 system, can be overridden with the
740 740 .Fl f
741 741 option.
742 742 .Pp
743 743 The command also checks that the replication strategy for the pool is
744 744 consistent. An attempt to combine redundant and non-redundant storage in a
745 745 single pool, or to mix disks and files, results in an error unless
746 746 .Fl f
747 747 is specified. The use of differently sized devices within a single raidz or
748 748 mirror group is also flagged as an error unless
749 749 .Fl f
750 750 is specified.
751 751 .Pp
752 752 Unless the
753 753 .Fl R
754 754 option is specified, the default mount point is
755 755 .Pa / Ns Ar pool .
756 756 The mount point must not exist or must be empty, or else the root dataset
757 757 cannot be mounted. This can be overridden with the
758 758 .Fl m
759 759 option.
760 760 .Pp
761 761 By default all supported features are enabled on the new pool unless the
762 762 .Fl d
763 763 option is specified.
764 764 .Bl -tag -width Ds
765 765 .It Fl d
766 766 Do not enable any features on the new pool. Individual features can be enabled
767 767 by setting their corresponding properties to
768 768 .Sy enabled
769 769 with the
770 770 .Fl o
771 771 option. See
772 772 .Xr zpool-features 5
773 773 for details about feature properties.
774 774 .It Fl f
775 775 Forces use of
776 776 .Ar vdev Ns s ,
777 777 even if they appear in use or specify a conflicting replication level. Not all
778 778 devices can be overridden in this manner.
779 779 .It Fl m Ar mountpoint
780 780 Sets the mount point for the root dataset. The default mount point is
781 781 .Pa /pool
782 782 or
783 783 .Pa altroot/pool
784 784 if
785 785 .Ar altroot
786 786 is specified. The mount point must be an absolute path,
787 787 .Sy legacy ,
788 788 or
789 789 .Sy none .
790 790 For more information on dataset mount points, see
791 791 .Xr zfs 1M .
792 792 .It Fl n
793 793 Displays the configuration that would be used without actually creating the
794 794 pool. The actual pool creation can still fail due to insufficient privileges or
795 795 device sharing.
796 796 .It Fl o Ar property Ns = Ns Ar value
797 797 Sets the given pool properties. See the
798 798 .Sx Properties
799 799 section for a list of valid properties that can be set.
800 800 .It Fl O Ar file-system-property Ns = Ns Ar value
801 801 Sets the given file system properties in the root file system of the pool. See
802 802 the
803 803 .Sx Properties
804 804 section of
805 805 .Xr zfs 1M
806 806 for a list of valid properties that can be set.
807 807 .It Fl R Ar root
808 808 Equivalent to
809 809 .Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
810 810 .El
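A hypothetical creation combining these options, with an assumed mount point and an assumed root file system property:
  # zpool create -m /export/tank -O compression=on tank mirror c0t0d0 c0t1d0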
811 811 .It Xo
812 812 .Nm
813 813 .Cm destroy
814 814 .Op Fl f
815 815 .Ar pool
816 816 .Xc
817 817 Destroys the given pool, freeing up any devices for other use. This command
818 818 tries to unmount any active datasets before destroying the pool.
819 819 .Bl -tag -width Ds
820 820 .It Fl f
821 821 Forces any active datasets contained within the pool to be unmounted.
822 822 .El
823 823 .It Xo
824 824 .Nm
825 825 .Cm detach
826 826 .Ar pool device
827 827 .Xc
828 828 Detaches
829 829 .Ar device
830 830 from a mirror. The operation is refused if there are no other valid replicas of
831 831 the data.
832 832 .It Xo
833 833 .Nm
834 834 .Cm export
835 835 .Op Fl f
836 836 .Ar pool Ns ...
837 837 .Xc
838 838 Exports the given pools from the system. All devices are marked as exported,
839 839 but are still considered in use by other subsystems. The devices can be moved
840 840 between systems
841 841 .Pq even those of different endianness
842 842 and imported as long as a sufficient number of devices are present.
843 843 .Pp
844 844 Before exporting the pool, all datasets within the pool are unmounted. A pool
845 845 cannot be exported if it has a shared spare that is currently being used.
846 846 .Pp
847 847 For pools to be portable, you must give the
848 848 .Nm
849 849 command whole disks, not just slices, so that ZFS can label the disks with
850 850 portable EFI labels. Otherwise, disk drivers on platforms of different
851 851 endianness will not recognize the disks.
852 852 .Bl -tag -width Ds
853 853 .It Fl f
854 854 Forcefully unmount all datasets, using the
855 855 .Nm unmount Fl f
856 856 command.
857 857 .Pp
858 858 This command will forcefully export the pool even if it has a shared spare that
859 859 is currently being used. This may lead to potential data corruption.
860 860 .El
861 861 .It Xo
862 862 .Nm
863 863 .Cm get
864 864 .Op Fl Hp
865 865 .Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
866 866 .Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
867 867 .Ar pool Ns ...
868 868 .Xc
869 869 Retrieves the given list of properties
870 870 .Po
871 871 or all properties if
872 872 .Sy all
873 873 is used
874 874 .Pc
875 875 for the specified storage pool(s). These properties are displayed with
876 876 the following fields:
877 877 .Bd -literal
878 878 name Name of storage pool
879 879 property Property name
880 880 value Property value
881 881 source Property source, either 'default' or 'local'.
882 882 .Ed
883 883 .Pp
884 884 See the
885 885 .Sx Properties
886 886 section for more information on the available pool properties.
887 887 .Bl -tag -width Ds
888 888 .It Fl H
889 889 Scripted mode. Do not display headers, and separate fields by a single tab
890 890 instead of arbitrary space.
891 891 .It Fl o Ar field
892 892 A comma-separated list of columns to display.
893 893 .Sy name Ns , Ns Sy property Ns , Ns Sy value Ns , Ns Sy source
894 894 is the default value.
895 895 .It Fl p
896 896 Display numbers in parsable (exact) values.
897 897 .El
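For instance, a scripted query of selected columns (pool name assumed):
  # zpool get -Hp -o name,value capacity tank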
898 898 .It Xo
899 899 .Nm
900 900 .Cm history
901 901 .Op Fl il
902 902 .Oo Ar pool Oc Ns ...
903 903 .Xc
904 904 Displays the command history of the specified pool(s) or all pools if no pool is
905 905 specified.
906 906 .Bl -tag -width Ds
907 907 .It Fl i
908 908 Displays internally logged ZFS events in addition to user initiated events.
909 909 .It Fl l
910 910 Displays log records in long format, which in addition to standard format
911 911 includes the user name, the hostname, and the zone in which the operation was
912 912 performed.
913 913 .El
914 914 .It Xo
915 915 .Nm
916 916 .Cm import
917 917 .Op Fl D
918 918 .Op Fl d Ar dir
919 919 .Xc
920 920 Lists pools available to import. If the
921 921 .Fl d
922 922 option is not specified, this command searches for devices in
923 923 .Pa /dev/dsk .
924 924 The
925 925 .Fl d
926 926 option can be specified multiple times, and all directories are searched. If the
927 927 device appears to be part of an exported pool, this command displays a summary
928 928 of the pool with the name of the pool, a numeric identifier, as well as the vdev
929 929 layout and current health of the device for each device or file. Destroyed
930 930 pools, pools that were previously destroyed with the
931 931 .Nm zpool Cm destroy
932 932 command, are not listed unless the
933 933 .Fl D
934 934 option is specified.
935 935 .Pp
936 936 The numeric identifier is unique, and can be used instead of the pool name when
937 937 multiple exported pools of the same name are available.
938 938 .Bl -tag -width Ds
939 939 .It Fl c Ar cachefile
940 940 Reads configuration from the given
941 941 .Ar cachefile
942 942 that was created with the
943 943 .Sy cachefile
944 944 pool property. This
945 945 .Ar cachefile
946 946 is used instead of searching for devices.
947 947 .It Fl d Ar dir
948 948 Searches for devices or files in
949 949 .Ar dir .
950 950 The
951 951 .Fl d
952 952 option can be specified multiple times.
953 953 .It Fl D
954 954 Lists destroyed pools only.
955 955 .El
956 956 .It Xo
957 957 .Nm
958 958 .Cm import
959 959 .Fl a
960 960 .Op Fl DfmN
961 961 .Op Fl F Op Fl n
962 962 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
963 963 .Op Fl o Ar mntopts
964 964 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
965 965 .Op Fl R Ar root
966 966 .Xc
967 967 Imports all pools found in the search directories. Identical to the previous
968 968 command, except that all pools with a sufficient number of devices available are
969 969 imported. Destroyed pools, pools that were previously destroyed with the
970 970 .Nm zpool Cm destroy
971 971 command, will not be imported unless the
972 972 .Fl D
973 973 option is specified.
974 974 .Bl -tag -width Ds
975 975 .It Fl a
976 976 Searches for and imports all pools found.
977 977 .It Fl c Ar cachefile
978 978 Reads configuration from the given
979 979 .Ar cachefile
980 980 that was created with the
981 981 .Sy cachefile
982 982 pool property. This
983 983 .Ar cachefile
984 984 is used instead of searching for devices.
985 985 .It Fl d Ar dir
986 986 Searches for devices or files in
987 987 .Ar dir .
988 988 The
989 989 .Fl d
990 990 option can be specified multiple times. This option is incompatible with the
991 991 .Fl c
992 992 option.
993 993 .It Fl D
994 994 Imports destroyed pools only. The
995 995 .Fl f
996 996 option is also required.
997 997 .It Fl f
998 998 Forces import, even if the pool appears to be potentially active.
999 999 .It Fl F
1000 1000 Recovery mode for a non-importable pool. Attempt to return the pool to an
1001 1001 importable state by discarding the last few transactions. Not all damaged pools
1002 1002 can be recovered by using this option. If successful, the data from the
1003 1003 discarded transactions is irretrievably lost. This option is ignored if the pool
1004 1004 is importable or already imported.
1005 1005 .It Fl m
1006 1006 Allows a pool to import when there is a missing log device. Recent transactions
1007 1007 can be lost because the log device will be discarded.
1008 1008 .It Fl n
1009 1009 Used with the
1010 1010 .Fl F
1011 1011 recovery option. Determines whether a non-importable pool can be made importable
1012 1012 again, but does not actually perform the pool recovery. For more details about
1013 1013 pool recovery mode, see the
1014 1014 .Fl F
1015 1015 option, above.
1016 1016 .It Fl N
1017 1017 Import the pool without mounting any file systems.
1018 1018 .It Fl o Ar mntopts
1019 1019 Comma-separated list of mount options to use when mounting datasets within the
1020 1020 pool. See
1021 1021 .Xr zfs 1M
1022 1022 for a description of dataset properties and mount options.
1023 1023 .It Fl o Ar property Ns = Ns Ar value
1024 1024 Sets the specified property on the imported pool. See the
1025 1025 .Sx Properties
1026 1026 section for more information on the available pool properties.
1027 1027 .It Fl R Ar root
1028 1028 Sets the
1029 1029 .Sy cachefile
1030 1030 property to
1031 1031 .Sy none
1032 1032 and the
1033 1033 .Sy altroot
1034 1034 property to
1035 1035 .Ar root .
1036 1036 .El
1037 1037 .It Xo
1038 1038 .Nm
1039 1039 .Cm import
1040 1040 .Op Fl Dfm
1041 1041 .Op Fl F Op Fl n
1042 1042 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
1043 1043 .Op Fl o Ar mntopts
1044 1044 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1045 1045 .Op Fl R Ar root
1046 1046 .Ar pool Ns | Ns Ar id
1047 1047 .Op Ar newpool
1048 1048 .Xc
1049 1049 Imports a specific pool. A pool can be identified by its name or the numeric
1050 1050 identifier. If
1051 1051 .Ar newpool
1052 1052 is specified, the pool is imported using the name
1053 1053 .Ar newpool .
1054 1054 Otherwise, it is imported with the same name as its exported name.
1055 1055 .Pp
1056 1056 If a device is removed from a system without running
1057 1057 .Nm zpool Cm export
1058 1058 first, the device appears as potentially active. It cannot be determined if
1059 1059 this was a failed export, or whether the device is really in use from another
1060 1060 host. To import a pool in this state, the
1061 1061 .Fl f
1062 1062 option is required.
1063 1063 .Bl -tag -width Ds
1064 1064 .It Fl c Ar cachefile
1065 1065 Reads configuration from the given
1066 1066 .Ar cachefile
1067 1067 that was created with the
1068 1068 .Sy cachefile
1069 1069 pool property. This
1070 1070 .Ar cachefile
1071 1071 is used instead of searching for devices.
1072 1072 .It Fl d Ar dir
1073 1073 Searches for devices or files in
1074 1074 .Ar dir .
1075 1075 The
1076 1076 .Fl d
1077 1077 option can be specified multiple times. This option is incompatible with the
1078 1078 .Fl c
1079 1079 option.
1080 1080 .It Fl D
1081 1081 Imports destroyed pool. The
1082 1082 .Fl f
1083 1083 option is also required.
1084 1084 .It Fl f
1085 1085 Forces import, even if the pool appears to be potentially active.
1086 1086 .It Fl F
1087 1087 Recovery mode for a non-importable pool. Attempt to return the pool to an
1088 1088 importable state by discarding the last few transactions. Not all damaged pools
1089 1089 can be recovered by using this option. If successful, the data from the
1090 1090 discarded transactions is irretrievably lost. This option is ignored if the pool
1091 1091 is importable or already imported.
1092 1092 .It Fl m
1093 1093 Allows a pool to import when there is a missing log device. Recent transactions
1094 1094 can be lost because the log device will be discarded.
1095 1095 .It Fl n
1096 1096 Used with the
1097 1097 .Fl F
1098 1098 recovery option. Determines whether a non-importable pool can be made importable
1099 1099 again, but does not actually perform the pool recovery. For more details about
1100 1100 pool recovery mode, see the
1101 1101 .Fl F
1102 1102 option, above.
1103 1103 .It Fl o Ar mntopts
1104 1104 Comma-separated list of mount options to use when mounting datasets within the
1105 1105 pool. See
1106 1106 .Xr zfs 1M
1107 1107 for a description of dataset properties and mount options.
1108 1108 .It Fl o Ar property Ns = Ns Ar value
1109 1109 Sets the specified property on the imported pool. See the
1110 1110 .Sx Properties
1111 1111 section for more information on the available pool properties.
1112 1112 .It Fl R Ar root
1113 1113 Sets the
1114 1114 .Sy cachefile
1115 1115 property to
1116 1116 .Sy none
1117 1117 and the
1118 1118 .Sy altroot
1119 1119 property to
1120 1120 .Ar root .
1121 1121 .El
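As a sketch of recovery mode, a dry-run check followed by the actual recovery import (pool name assumed):
  # zpool import -Fn tank
  # zpool import -F tank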
1122 1122 .It Xo
1123 1123 .Nm
1124 1124 .Cm iostat
1125 1125 .Op Fl v
1126 1126 .Op Fl T Sy u Ns | Ns Sy d
1127 1127 .Oo Ar pool Oc Ns ...
1128 1128 .Op Ar interval Op Ar count
1129 1129 .Xc
1130 1130 Displays I/O statistics for the given pools. When given an
1131 1131 .Ar interval ,
1132 1132 the statistics are printed every
1133 1133 .Ar interval
1134 1134 seconds until ^C is pressed. If no
1135 1135 .Ar pool Ns s
1136 1136 are specified, statistics for every pool in the system are shown. If
1137 1137 .Ar count
1138 1138 is specified, the command exits after
1139 1139 .Ar count
1140 1140 reports are printed.
1141 1141 .Bl -tag -width Ds
1142 1142 .It Fl T Sy u Ns | Ns Sy d
1143 1143 Display a time stamp. Specify
1144 1144 .Sy u
1145 1145 for a printed representation of the internal representation of time. See
1146 1146 .Xr time 2 .
1147 1147 Specify
1148 1148 .Sy d
1149 1149 for standard date format. See
1150 1150 .Xr date 1 .
1151 1151 .It Fl v
1152 1152 Verbose statistics. Reports usage statistics for individual vdevs within the
1153 1153 pool, in addition to the pool-wide statistics.
1154 1154 .El
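For example, verbose per-vdev statistics every five seconds, ten times (pool name assumed):
  # zpool iostat -v tank 5 10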
1155 1155 .It Xo
1156 1156 .Nm
1157 1157 .Cm list
1158 1158 .Op Fl Hpv
1159 1159 .Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
1160 1160 .Op Fl T Sy u Ns | Ns Sy d
1161 1161 .Oo Ar pool Oc Ns ...
1162 1162 .Op Ar interval Op Ar count
1163 1163 .Xc
1164 1164 Lists the given pools along with a health status and space usage. If no
1165 1165 .Ar pool Ns s
1166 1166 are specified, all pools in the system are listed. When given an
1167 1167 .Ar interval ,
1168 1168 the information is printed every
1169 1169 .Ar interval
1170 1170 seconds until ^C is pressed. If
1171 1171 .Ar count
1172 1172 is specified, the command exits after
1173 1173 .Ar count
1174 1174 reports are printed.
1175 1175 .Bl -tag -width Ds
1176 1176 .It Fl H
1177 1177 Scripted mode. Do not display headers, and separate fields by a single tab
1178 1178 instead of arbitrary space.
1179 1179 .It Fl o Ar property
1180 1180 Comma-separated list of properties to display. See the
1181 1181 .Sx Properties
1182 1182 section for a list of valid properties. The default list is
1183 1183 .Sy name , size , used , available , fragmentation , expandsize , capacity ,
1184 1184 .Sy dedupratio , health , altroot .
1185 1185 .It Fl p
1186 1186 Display numbers in parsable
1187 1187 .Pq exact
1188 1188 values.
1189 1189 .It Fl T Sy u Ns | Ns Sy d
1190 1190 Display a time stamp. Specify
1191 1191 .Sy u
1192 1192 for a printed representation of the internal representation of time. See
1193 1193 .Xr time 2 .
1194 1194 Specify
1195 1195 .Sy d
1196 1196 for standard date format. See
1197 1197 .Xr date 1 .
1198 1198 .It Fl v
1199 1199 Verbose statistics. Reports usage statistics for individual vdevs within the
1200 1200 pool, in addition to the pool-wide statistics.
1201 1201 .El
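An illustrative scripted listing of selected properties:
  # zpool list -H -o name,size,capacity,health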
1202 1202 .It Xo
1203 1203 .Nm
1204 1204 .Cm offline
1205 1205 .Op Fl t
1206 1206 .Ar pool Ar device Ns ...
1207 1207 .Xc
1208 1208 Takes the specified physical device offline. While the
1209 1209 .Ar device
1210 1210 is offline, no attempt is made to read or write to the device. This command is
1211 1211 not applicable to spares.
1212 1212 .Bl -tag -width Ds
1213 1213 .It Fl t
1214 1214 Temporary. Upon reboot, the specified physical device reverts to its previous
1215 1215 state.
1216 1216 .El
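For example, taking a device offline only until the next reboot (names assumed):
  # zpool offline -t tank c0t0d0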
1217 1217 .It Xo
1218 1218 .Nm
1219 1219 .Cm online
1220 1220 .Op Fl e
1221 1221 .Ar pool Ar device Ns ...
1222 1222 .Xc
1223 1223 Brings the specified physical device online. This command is not applicable to
1224 1224 spares.
1225 1225 .Bl -tag -width Ds
1226 1226 .It Fl e
1227 1227 Expand the device to use all available space. If the device is part of a mirror
1228 1228 or raidz then all devices must be expanded before the new space will become
1229 1229 available to the pool.
1230 1230 .El
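For example, bringing a device back online and expanding it to use all available space (names assumed):
  # zpool online -e tank c0t0d0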
1231 1231 .It Xo
1232 1232 .Nm
1233 1233 .Cm reguid
1234 1234 .Ar pool
1235 1235 .Xc
1236 1236 Generates a new unique identifier for the pool. You must ensure that all devices
1237 1237 in this pool are online and healthy before performing this action.
1238 1238 .It Xo
1239 1239 .Nm
1240 1240 .Cm reopen
1241 1241 .Ar pool
1242 1242 .Xc
1243 1243 Reopen all the vdevs associated with the pool.
1244 1244 .It Xo
1245 1245 .Nm
1246 1246 .Cm remove
1247 1247 .Ar pool Ar device Ns ...
1248 1248 .Xc
1249 1249 Removes the specified device from the pool. This command currently only supports
1250 1250 removing hot spares, cache, and log devices. A mirrored log device can be
1251 1251 removed by specifying the top-level mirror for the log. Non-log devices that are
1252 1252 part of a mirrored configuration can be removed using the
1253 1253 .Nm zpool Cm detach
1254 1254 command. Non-redundant and raidz devices cannot be removed from a pool.
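To illustrate, removing the hot spare from the earlier Hot Spares example:
  # zpool remove pool c2d0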
1255 1255 .It Xo
1256 1256 .Nm
1257 1257 .Cm replace
1258 1258 .Op Fl f
1259 1259 .Ar pool Ar device Op Ar new_device
1260 1260 .Xc
1261 1261 Replaces
1262 1262 .Ar device
1263 1263 with
1264 1264 .Ar new_device .
1265 1265 This is equivalent to attaching
1266 1266 .Ar new_device ,
1267 1267 waiting for it to resilver, and then detaching
1268 1268 .Ar device .
1269 1269 .Pp
1270 1270 The size of
1271 1271 .Ar new_device
1272 1272 must be greater than or equal to the minimum size of all the devices in a mirror
1273 1273 or raidz configuration.
1274 1274 .Pp
1275 1275 .Ar new_device
1276 1276 is required if the pool is not redundant. If
1277 1277 .Ar new_device
1278 1278 is not specified, it defaults to
1279 1279 .Ar device .
1280 1280 This form of replacement is useful after an existing disk has failed and has
1281 1281 been physically replaced. In this case, the new disk may have the same
1282 1282 .Pa /dev/dsk
1283 1283 path as the old device, even though it is actually a different disk. ZFS
1284 1284 recognizes this.
1285 1285 .Bl -tag -width Ds
1286 1286 .It Fl f
1287 1287 Forces use of
1288 1288 .Ar new_device ,
1289 1289 even if it appears to be in use. Not all devices can be overridden in this
1290 1290 manner.
1291 1291 .El
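A sketch of the in-place form, after a failed disk has been physically swapped at the same path (names assumed):
  # zpool replace tank c0t0d0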
1292 1292 .It Xo
1293 1293 .Nm
1294 1294 .Cm scrub
1295 1295 .Op Fl s
1296 1296 .Ar pool Ns ...
1297 1297 .Xc
1298 1298 Begins a scrub. The scrub examines all data in the specified pools to verify
1299 1299 that it checksums correctly. For replicated
1300 1300 .Pq mirror or raidz
1301 1301 devices, ZFS automatically repairs any damage discovered during the scrub. The
1302 1302 .Nm zpool Cm status
1303 1303 command reports the progress of the scrub and summarizes the results of the
1304 1304 scrub upon completion.
1305 1305 .Pp
1306 1306 Scrubbing and resilvering are very similar operations. The difference is that
1307 1307 resilvering only examines data that ZFS knows to be out of date
1308 1308 .Po
1309 1309 for example, when attaching a new device to a mirror or replacing an existing
1310 1310 device
1311 1311 .Pc ,
1312 1312 whereas scrubbing examines all data to discover silent errors due to hardware
1313 1313 faults or disk failure.
1314 1314 .Pp
1315 1315 Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
1316 1316 one at a time. If a scrub is already in progress, the
1317 1317 .Nm zpool Cm scrub
1318 1318 command terminates it and starts a new scrub. If a resilver is in progress, ZFS
1319 1319 does not allow a scrub to be started until the resilver completes.
1320 1320 .Bl -tag -width Ds
1321 1321 .It Fl s
1322 1322 Stop scrubbing.
1323 1323 .El
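For example, starting and then stopping a scrub (pool name assumed):
  # zpool scrub tank
  # zpool scrub -s tank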
1324 1324 .It Xo
1325 1325 .Nm
1326 1326 .Cm set
1327 1327 .Ar property Ns = Ns Ar value
1328 1328 .Ar pool
1329 1329 .Xc
1330 1330 Sets the given property on the specified pool. See the
1331 1331 .Sx Properties
1332 1332 section for more information on what properties can be set and acceptable
1333 1333 values.
1334 1334 .It Xo
1335 1335 .Nm
1336 1336 .Cm split
1337 1337 .Op Fl n
1338 1338 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1339 1339 .Op Fl R Ar root
1340 1340 .Ar pool newpool
1341 1341 .Xc
1342 1342 Splits devices off
1343 1343 .Ar pool
1344 1344 creating
1345 1345 .Ar newpool .
1346 1346 All vdevs in
1347 1347 .Ar pool
1348 1348 must be mirrors. At the time of the split,
1349 1349 .Ar newpool
1350 1350 will be a replica of
1351 1351 .Ar pool .
1352 1352 .Bl -tag -width Ds
1353 1353 .It Fl n
1354 1354 Do a dry run; do not actually perform the split. Print out the expected
1355 1355 configuration of
1356 1356 .Ar newpool .
1357 1357 .It Fl o Ar property Ns = Ns Ar value
1358 1358 Sets the specified property for
1359 1359 .Ar newpool .
1360 1360 See the
1361 1361 .Sx Properties
1362 1362 section for more information on the available pool properties.
1363 1363 .It Fl R Ar root
1364 1364 Set
1365 1365 .Sy altroot
1366 1366 for
1367 1367 .Ar newpool
1368 1368 to
1369 1369 .Ar root
1370 1370 and automatically import it.
1371 1371 .El
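A hypothetical split that imports the new pool under an alternate root (names assumed):
  # zpool split -R /mnt tank tank2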
1372 1372 .It Xo
1373 1373 .Nm
1374 1374 .Cm status
1375 1375 .Op Fl Dvx
1376 1376 .Op Fl T Sy u Ns | Ns Sy d
1377 1377 .Oo Ar pool Oc Ns ...
1378 1378 .Op Ar interval Op Ar count
1379 1379 .Xc
1380 1380 Displays the detailed health status for the given pools. If no
1381 1381 .Ar pool
1382 1382 is specified, then the status of each pool in the system is displayed. For more
1383 1383 information on pool and device health, see the
1384 1384 .Sx Device Failure and Recovery
1385 1385 section.
1386 1386 .Pp
1387 1387 If a scrub or resilver is in progress, this command reports the percentage done
1388 1388 and the estimated time to completion. Both of these are only approximate,
1389 1389 because the amount of data in the pool and the other workloads on the system can
1390 1390 change.
1391 1391 .Bl -tag -width Ds
1392 1392 .It Fl D
1393 1393 Display a histogram of deduplication statistics, showing the allocated
1394 1394 .Pq physically present on disk
1395 1395 and referenced
1396 1396 .Pq logically referenced in the pool
1397 1397 block counts and sizes by reference count.
1398 1398 .It Fl T Sy u Ns | Ns Sy d
1399 1399 Display a time stamp. Specify
1400 1400 .Sy u
1401 1401 for a printed representation of the internal representation of time. See
1402 1402 .Xr time 2 .
1403 1403 Specify
1404 1404 .Sy d
1405 1405 for standard date format. See
1406 1406 .Xr date 1 .
1407 1407 .It Fl v
1408 1408 Displays verbose data error information, printing out a complete list of all
1409 1409 data errors since the last complete pool scrub.
1410 1410 .It Fl x
1411 1411 Only display status for pools that are exhibiting errors or are otherwise
1412 1412 unavailable. Warnings about pools not using the latest on-disk format will not
1413 1413 be included.
1414 1414 .El
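For instance, a verbose report restricted to pools exhibiting problems:
  # zpool status -xv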
1415 1415 .It Xo
1416 1416 .Nm
1417 1417 .Cm upgrade
1418 1418 .Xc
1419 1419 Displays pools which do not have all supported features enabled and pools
1420 1420 formatted using a legacy ZFS version number. These pools can continue to be
1421 1421 used, but some features may not be available. Use
1422 1422 .Nm zpool Cm upgrade Fl a
1423 1423 to enable all features on all pools.
1424 1424 .It Xo
1425 1425 .Nm
1426 1426 .Cm upgrade
1427 1427 .Fl v
1428 1428 .Xc
1429 1429 Displays legacy ZFS versions supported by the current software. See
1430 1430 .Xr zpool-features 5
1431 1431 for a description of feature flags features supported by the current software.
1432 1432 .It Xo
1433 1433 .Nm
1434 1434 .Cm upgrade
1435 1435 .Op Fl V Ar version
1436 1436 .Fl a Ns | Ns Ar pool Ns ...
1437 1437 .Xc
1438 1438 Enables all supported features on the given pool. Once this is done, the pool
1439 1439 will no longer be accessible on systems that do not support feature flags. See
1440 1440 .Xr zpool-features 5
1441 1441 for details on compatibility with systems that support feature flags, but do not
1442 1442 support all features enabled on the pool.
1443 1443 .Bl -tag -width Ds
1444 1444 .It Fl a
1445 1445 Enables all supported features on all pools.
1446 1446 .It Fl V Ar version
1447 1447 Upgrade to the specified legacy version. If the
1448 1448 .Fl V
1449 1449 flag is specified, no features will be enabled on the pool. This option can only
1450 1450 be used to increase the version number up to the last supported legacy version
1451 1451 number.
1452 1452 .El
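.Pp
For example, to raise the hypothetical pool
.Em tank
only as far as legacy version 28, without enabling any features:
.Bd -literal
# zpool upgrade -V 28 tank
.Ed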
1453 1453 .El
1454 1454 .Sh EXIT STATUS
1455 1455 The following exit values are returned:
1456 1456 .Bl -tag -width Ds
1457 1457 .It Sy 0
1458 1458 Successful completion.
1459 1459 .It Sy 1
1460 1460 An error occurred.
1461 1461 .It Sy 2
1462 1462 Invalid command line options were specified.
1463 1463 .El
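.Pp
For example, a shell script can act on the exit status; the pool
.Em tank
here is hypothetical:
.Bd -literal
# zpool list tank > /dev/null 2>&1 || echo "cannot find pool tank"
.Ed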
1464 1464 .Sh EXAMPLES
1465 1465 .Bl -tag -width Ds
1466 1466 .It Sy Example 1 No Creating a RAID-Z Storage Pool
1467 1467 The following command creates a pool with a single raidz root vdev that
1468 1468 consists of six disks.
1469 1469 .Bd -literal
1470 1470 # zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
1471 1471 .Ed
1472 1472 .It Sy Example 2 No Creating a Mirrored Storage Pool
1473 1473 The following command creates a pool with two mirrors, where each mirror
1474 1474 contains two disks.
1475 1475 .Bd -literal
1476 1476 # zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
1477 1477 .Ed
1478 1478 .It Sy Example 3 No Creating a ZFS Storage Pool by Using Slices
1479 1479 The following command creates an unmirrored pool using two disk slices.
1480 1480 .Bd -literal
1481 1481 # zpool create tank /dev/dsk/c0t0d0s1 c0t1d0s4
1482 1482 .Ed
1483 1483 .It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
1484 1484 The following command creates an unmirrored pool using files. While not
1485 1485 recommended, a pool based on files can be useful for experimental purposes.
1486 1486 .Bd -literal
1487 1487 # zpool create tank /path/to/file/a /path/to/file/b
1488 1488 .Ed
1489 1489 .It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
1490 1490 The following command adds two mirrored disks to the pool
1491 1491 .Em tank ,
1492 1492 assuming the pool is already made up of two-way mirrors. The additional space
1493 1493 is immediately available to any datasets within the pool.
1494 1494 .Bd -literal
1495 1495 # zpool add tank mirror c1t0d0 c1t1d0
1496 1496 .Ed
1497 1497 .It Sy Example 6 No Listing Available ZFS Storage Pools
1498 1498 The following command lists all available pools on the system. In this case,
1499 1499 the pool
1500 1500 .Em zion
1501 1501 is faulted due to a missing device. The results from this command are similar
1502 1502 to the following:
1503 1503 .Bd -literal
1504 1504 # zpool list
1505 1505 NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
1506 1506 rpool 19.9G 8.43G 11.4G 33% - 42% 1.00x ONLINE -
1507 1507 tank 61.5G 20.0G 41.5G 48% - 32% 1.00x ONLINE -
1508 1508 zion - - - - - - - FAULTED -
1509 1509 .Ed
1510 1510 .It Sy Example 7 No Destroying a ZFS Storage Pool
1511 1511 The following command destroys the pool
1512 1512 .Em tank
1513 1513 and any datasets contained within.
1514 1514 .Bd -literal
1515 1515 # zpool destroy -f tank
1516 1516 .Ed
1517 1517 .It Sy Example 8 No Exporting a ZFS Storage Pool
1518 1518 The following command exports the devices in pool
1519 1519 .Em tank
1520 1520 so that they can be relocated or later imported.
1521 1521 .Bd -literal
1522 1522 # zpool export tank
1523 1523 .Ed
1524 1524 .It Sy Example 9 No Importing a ZFS Storage Pool
1525 1525 The following command displays available pools, and then imports the pool
1526 1526 .Em tank
1527 1527 for use on the system. The results from this command are similar to the
1528 1528 following:
1529 1529 .Bd -literal
1530 1530 # zpool import
1531 1531 pool: tank
1532 1532 id: 15451357997522795478
1533 1533 state: ONLINE
1534 1534 action: The pool can be imported using its name or numeric identifier.
1535 1535 config:
1536 1536
1537 1537 tank ONLINE
1538 1538 mirror ONLINE
1539 1539 c1t2d0 ONLINE
1540 1540 c1t3d0 ONLINE
1541 1541
1542 1542 # zpool import tank
1543 1543 .Ed
1544 1544 .It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
1545 1545 The following command upgrades all ZFS Storage pools to the current version of
1546 1546 the software.
1547 1547 .Bd -literal
1548 1548 # zpool upgrade -a
1549 1549 This system is currently running ZFS version 2.
1550 1550 .Ed
1551 1551 .It Sy Example 11 No Managing Hot Spares
1552 1552 The following command creates a new pool with an available hot spare:
1553 1553 .Bd -literal
1554 1554 # zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0
1555 1555 .Ed
1556 1556 .Pp
1557 1557 If one of the disks were to fail, the pool would be reduced to the degraded
1558 1558 state. The failed device can be replaced using the following command:
1559 1559 .Bd -literal
1560 1560 # zpool replace tank c0t0d0 c0t3d0
1561 1561 .Ed
1562 1562 .Pp
1563 1563 Once the data has been resilvered, the spare is automatically removed and is
1564 1564 made available should another device fail. The hot spare can be permanently
1565 1565 removed from the pool using the following command:
1566 1566 .Bd -literal
1567 1567 # zpool remove tank c0t2d0
1568 1568 .Ed
1569 1569 .It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
1570 1570 The following command creates a ZFS storage pool consisting of two two-way
1571 1571 mirrors and mirrored log devices:
1572 1572 .Bd -literal
1573 1573 # zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \e
1574 1574 c4d0 c5d0
1575 1575 .Ed
1576 1576 .It Sy Example 13 No Adding Cache Devices to a ZFS Pool
1577 1577 The following command adds two disks for use as cache devices to a ZFS storage
1578 1578 pool:
1579 1579 .Bd -literal
1580 1580 # zpool add pool cache c2d0 c3d0
1581 1581 .Ed
1582 1582 .Pp
1583 1583 Once added, the cache devices gradually fill with content from main memory.
1584 1584 Depending on the size of your cache devices, it could take over an hour for
1585 1585 them to fill. Capacity and reads can be monitored using the
1586 1586 .Cm iostat
1587 1587 option as follows:
1588 1588 .Bd -literal
1589 1589 # zpool iostat -v pool 5
1590 1590 .Ed
1591 1591 .It Sy Example 14 No Removing a Mirrored Log Device
1592 1592 The following command removes the mirrored log device
1593 1593 .Sy mirror-2 .
1594 1594 Given this configuration:
1595 1595 .Bd -literal
1596 1596 pool: tank
1597 1597 state: ONLINE
1598 1598 scrub: none requested
1599 1599 config:
1600 1600
1601 1601 NAME STATE READ WRITE CKSUM
1602 1602 tank ONLINE 0 0 0
1603 1603 mirror-0 ONLINE 0 0 0
1604 1604 c6t0d0 ONLINE 0 0 0
1605 1605 c6t1d0 ONLINE 0 0 0
1606 1606 mirror-1 ONLINE 0 0 0
1607 1607 c6t2d0 ONLINE 0 0 0
1608 1608 c6t3d0 ONLINE 0 0 0
1609 1609 logs
1610 1610 mirror-2 ONLINE 0 0 0
1611 1611 c4t0d0 ONLINE 0 0 0
1612 1612 c4t1d0 ONLINE 0 0 0
1613 1613 .Ed
1614 1614 .Pp
1615 1615 The command to remove the mirrored log
1616 1616 .Sy mirror-2
1617 1617 is:
1618 1618 .Bd -literal
1619 1619 # zpool remove tank mirror-2
1620 1620 .Ed
1621 1621 .It Sy Example 15 No Displaying Expanded Space on a Device
1622 1622 The following command displays the detailed information for the pool
1623 1623 .Em data .
1624 1624 This pool consists of a single raidz vdev where one of its devices
1625 1625 increased its capacity by 10GB. In this example, the pool will not be able to
1626 1626 utilize this extra capacity until all the devices under the raidz vdev have
1627 1627 been expanded.
1628 1628 .Bd -literal
1629 1629 # zpool list -v data
1630 1630 NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
1631 1631 data 23.9G 14.6G 9.30G 48% - 61% 1.00x ONLINE -
1632 1632 raidz1 23.9G 14.6G 9.30G 48% -
1633 1633 c1t1d0 - - - - -
1634 1634 c1t2d0 - - - - 10G
1635 1635 c1t3d0 - - - - -
1636 1636 .Ed
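.Pp
Once the remaining devices have also grown, the additional capacity can be
claimed by bringing each device online with expansion requested, for example:
.Bd -literal
# zpool online -e data c1t2d0
.Ed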
1637 1637 .El
1638 1638 .Sh INTERFACE STABILITY
1639 1639 .Sy Evolving
1640 1640 .Sh SEE ALSO
1641 1641 .Xr zfs 1M ,
1642 1642 .Xr attributes 5 ,
1643 1643 .Xr zpool-features 5
↓ open down ↓ |
1394 lines elided |
↑ open up ↑ |