10067 Miscellaneous man page typos
Reviewed by: Robert Mustacchi <rm@joyent.com>
Reviewed by: Andy Fiddaman <andy@omniosce.org>
Reviewed by: Volker A. Brandt <vab@bb-c.de>
--- old/usr/src/man/man1m/zfs.1m.man.txt
+++ new/usr/src/man/man1m/zfs.1m.man.txt
1 1 ZFS(1M) Maintenance Commands ZFS(1M)
2 2
3 3 NAME
4 4 zfs - configures ZFS file systems
5 5
6 6 SYNOPSIS
7 7 zfs [-?]
8 8 zfs create [-p] [-o property=value]... filesystem
9 9 zfs create [-ps] [-b blocksize] [-o property=value]... -V size volume
10 10 zfs destroy [-Rfnprv] filesystem|volume
11 11 zfs destroy [-Rdnprv] filesystem|volume@snap[%snap[,snap[%snap]]]...
12 12 zfs destroy filesystem|volume#bookmark
13 13 zfs snapshot [-r] [-o property=value]...
14 14 filesystem@snapname|volume@snapname...
15 15 zfs rollback [-Rfr] snapshot
16 16 zfs clone [-p] [-o property=value]... snapshot filesystem|volume
17 17 zfs promote clone-filesystem
18 18 zfs rename [-f] filesystem|volume|snapshot filesystem|volume|snapshot
19 19 zfs rename [-fp] filesystem|volume filesystem|volume
20 20 zfs rename -r snapshot snapshot
21 21 zfs list [-r|-d depth] [-Hp] [-o property[,property]...] [-s property]...
22 22 [-S property]... [-t type[,type]...] [filesystem|volume|snapshot]...
23 23 zfs remap filesystem|volume
24 24 zfs set property=value [property=value]... filesystem|volume|snapshot...
25 25 zfs get [-r|-d depth] [-Hp] [-o field[,field]...] [-s source[,source]...]
26 26 [-t type[,type]...] all | property[,property]...
27 27 filesystem|volume|snapshot|bookmark...
28 28 zfs inherit [-rS] property filesystem|volume|snapshot...
29 29 zfs upgrade
30 30 zfs upgrade -v
31 31 zfs upgrade [-r] [-V version] -a | filesystem
32 32 zfs userspace [-Hinp] [-o field[,field]...] [-s field]... [-S field]...
33 33 [-t type[,type]...] filesystem|snapshot
34 34 zfs groupspace [-Hinp] [-o field[,field]...] [-s field]... [-S field]...
35 35 [-t type[,type]...] filesystem|snapshot
36 36 zfs mount
37 37 zfs mount [-Ov] [-o options] -a | filesystem
38 38 zfs unmount [-f] -a | filesystem|mountpoint
39 39 zfs share -a | filesystem
40 40 zfs unshare -a | filesystem|mountpoint
41 41 zfs bookmark snapshot bookmark
42 42 zfs send [-DLPRcenpv] [[-I|-i] snapshot] snapshot
43 43 zfs send [-Lce] [-i snapshot|bookmark] filesystem|volume|snapshot
44 44 zfs send [-Penv] -t receive_resume_token
45 45 zfs receive [-Fnsuv] [-o origin=snapshot] filesystem|volume|snapshot
46 46 zfs receive [-Fnsuv] [-d|-e] [-o origin=snapshot] filesystem
47 47 zfs receive -A filesystem|volume
48 48 zfs allow filesystem|volume
49 49 zfs allow [-dglu] user|group[,user|group]...
50 50 perm|@setname[,perm|@setname]... filesystem|volume
51 51 zfs allow [-dl] -e|everyone perm|@setname[,perm|@setname]...
52 52 filesystem|volume
53 53 zfs allow -c perm|@setname[,perm|@setname]... filesystem|volume
54 54 zfs allow -s @setname perm|@setname[,perm|@setname]... filesystem|volume
55 55 zfs unallow [-dglru] user|group[,user|group]...
56 56 [perm|@setname[,perm|@setname]...] filesystem|volume
57 57 zfs unallow [-dlr] -e|everyone [perm|@setname[,perm|@setname]...]
58 58 filesystem|volume
59 59 zfs unallow [-r] -c [perm|@setname[,perm|@setname]...] filesystem|volume
60 60 zfs unallow [-r] -s @setname [perm|@setname[,perm|@setname]...]
61 61 filesystem|volume
62 62 zfs hold [-r] tag snapshot...
63 63 zfs holds [-r] snapshot...
64 64 zfs release [-r] tag snapshot...
65 65 zfs diff [-FHt] snapshot snapshot|filesystem
66 66 zfs program [-jn] [-t timeout] [-m memory_limit] pool script [arg1 ...]
67 67
68 68 DESCRIPTION
69 69 The zfs command configures ZFS datasets within a ZFS storage pool, as
70 70 described in zpool(1M). A dataset is identified by a unique path within
71 71 the ZFS namespace. For example:
72 72
73 73 pool/{filesystem,volume,snapshot}
74 74
75 75 where the maximum length of a dataset name is MAXNAMELEN (256 bytes) and
76 76 the maximum amount of nesting allowed in a path is 50 levels deep.
77 77
78 78 A dataset can be one of the following:
79 79
80 80 file system A ZFS dataset of type filesystem can be mounted within the
81 81 standard system namespace and behaves like other file
82 82 systems. While ZFS file systems are designed to be POSIX
83 83 compliant, known issues exist that prevent compliance in
84 84 some cases. Applications that depend on standards
85 85 conformance might fail due to non-standard behavior when
86 86 checking file system free space.
87 87
88 88 volume A logical volume exported as a raw or block device. This
89 89 type of dataset should only be used under special
90 90 circumstances. File systems are typically used in most
91 91 environments.
92 92
93 93 snapshot A read-only version of a file system or volume at a given
94 94 point in time. It is specified as filesystem@name or
95 95 volume@name.
96 96
97 97 ZFS File System Hierarchy
98 98 A ZFS storage pool is a logical collection of devices that provide space
99 99 for datasets. A storage pool is also the root of the ZFS file system
100 100 hierarchy.
101 101
102 102 The root of the pool can be accessed as a file system, such as mounting
103 103 and unmounting, taking snapshots, and setting properties. The physical
104 104 storage characteristics, however, are managed by the zpool(1M) command.
105 105
106 106 See zpool(1M) for more information on creating and administering pools.
107 107
108 108 Snapshots
109 109 A snapshot is a read-only copy of a file system or volume. Snapshots can
110 110 be created extremely quickly, and initially consume no additional space
111 111 within the pool. As data within the active dataset changes, the snapshot
112 112 consumes more data than would otherwise be shared with the active
113 113 dataset.
114 114
115 115 Snapshots can have arbitrary names. Snapshots of volumes can be cloned
116 116 or rolled back, but cannot be accessed independently.
117 117
118 118 File system snapshots can be accessed under the .zfs/snapshot directory
119 119 in the root of the file system. Snapshots are automatically mounted on
120 120 demand and may be unmounted at regular intervals. The visibility of the
121 121 .zfs directory can be controlled by the snapdir property.
122 122
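     As an illustrative sketch (the pool and file system names here are
     hypothetical), a snapshot can be created and then browsed through the
     .zfs/snapshot directory:

       # zfs snapshot pool/home/bob@monday
       # ls /pool/home/bob/.zfs/snapshot
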
123 123 Clones
124 124 A clone is a writable volume or file system whose initial contents are
125 125 the same as another dataset. As with snapshots, creating a clone is
126 126 nearly instantaneous, and initially consumes no additional space.
127 127
128 128 Clones can only be created from a snapshot. When a snapshot is cloned,
129 129 it creates an implicit dependency between the parent and child. Even
130 130 though the clone is created somewhere else in the dataset hierarchy, the
131 131 original snapshot cannot be destroyed as long as a clone exists. The
132 132 origin property exposes this dependency, and the destroy command lists
133 133 any such dependencies, if they exist.
134 134
135 135 The clone parent-child dependency relationship can be reversed by using
136 136 the promote subcommand. This causes the "origin" file system to become a
137 137 clone of the specified file system, which makes it possible to destroy
138 138 the file system that the clone was created from.
139 139
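     For illustration (the dataset names are hypothetical), a clone is always
     created from an existing snapshot, and promote reverses the dependency so
     the original file system can later be destroyed:

       # zfs snapshot pool/project/production@today
       # zfs clone pool/project/production@today pool/project/beta
       # zfs promote pool/project/beta
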
140 140 Mount Points
141 141 Creating a ZFS file system is a simple operation, so the number of file
142 142 systems per system is likely to be large. To cope with this, ZFS
143 143 automatically manages mounting and unmounting file systems without the
144 144 need to edit the /etc/vfstab file. All automatically managed file
145 145 systems are mounted by ZFS at boot time.
146 146
147 147 By default, file systems are mounted under /path, where path is the name
148 148 of the file system in the ZFS namespace. Directories are created and
149 149 destroyed as needed.
150 150
151 151 A file system can also have a mount point set in the mountpoint property.
152 152 This directory is created as needed, and ZFS automatically mounts the
153 153 file system when the zfs mount -a command is invoked (without editing
154 154 /etc/vfstab). The mountpoint property can be inherited, so if pool/home
155 155 has a mount point of /export/stuff, then pool/home/user automatically
156 156 inherits a mount point of /export/stuff/user.
157 157
158 158 A file system mountpoint property of none prevents the file system from
159 159 being mounted.
160 160
161 161 If needed, ZFS file systems can also be managed with traditional tools
162 162 (mount, umount, /etc/vfstab). If a file system's mount point is set to
163 163 legacy, ZFS makes no attempt to manage the file system, and the
164 164 administrator is responsible for mounting and unmounting the file system.
165 165
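     A brief sketch of the two management styles described above, using
     hypothetical dataset names:

       # zfs set mountpoint=/export/stuff pool/home
       # zfs set mountpoint=legacy pool/legacyfs
       # mount -F zfs pool/legacyfs /mnt/legacy
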
166 166 Zones
167 167 A ZFS file system can be added to a non-global zone by using the zonecfg
168 168 add fs subcommand. A ZFS file system that is added to a non-global zone
169 169 must have its mountpoint property set to legacy.
170 170
171 171 The physical properties of an added file system are controlled by the
172 172 global administrator. However, the zone administrator can create,
173 173 modify, or destroy files within the added file system, depending on how
174 174 the file system is mounted.
175 175
176 176 A dataset can also be delegated to a non-global zone by using the zonecfg
177 177 add dataset subcommand. You cannot delegate a dataset to one zone and
178 178 the children of the same dataset to another zone. The zone administrator
179 179 can change properties of the dataset or any of its children. However,
180 180 the quota, filesystem_limit and snapshot_limit properties of the
181 181 delegated dataset can be modified only by the global administrator.
182 182
183 183 A ZFS volume can be added as a device to a non-global zone by using the
184 184 zonecfg add device subcommand. However, its physical properties can be
185 185 modified only by the global administrator.
186 186
187 187 For more information about zonecfg syntax, see zonecfg(1M).
188 188
189 189 After a dataset is delegated to a non-global zone, the zoned property is
190 190 automatically set. A zoned file system cannot be mounted in the global
191 191 zone, since the zone administrator might have to set the mount point to
192 192 an unacceptable value.
193 193
194 194 The global administrator can forcibly clear the zoned property, though
195 195 this should be done with extreme care. The global administrator should
196 196 verify that all the mount points are acceptable before clearing the
197 197 property.
198 198
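     As a sketch of delegating a dataset (the zone and dataset names are
     hypothetical; see zonecfg(1M) for the authoritative syntax):

       # zonecfg -z myzone
       zonecfg:myzone> add dataset
       zonecfg:myzone:dataset> set name=pool/export/myzone
       zonecfg:myzone:dataset> end
       zonecfg:myzone> commit
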
199 199 Native Properties
200 200 Properties are divided into two types, native properties and user-defined
201 201 (or "user") properties. Native properties either export internal
202 202 statistics or control ZFS behavior. In addition, native properties are
203 203 either editable or read-only. User properties have no effect on ZFS
204 204 behavior, but you can use them to annotate datasets in a way that is
205 205 meaningful in your environment. For more information about user
206 206 properties, see the User Properties section, below.
207 207
208 208 Every dataset has a set of properties that export statistics about the
209 209 dataset as well as control various behaviors. Properties are inherited
210 210 from the parent unless overridden by the child. Some properties apply
211 211 only to certain types of datasets (file systems, volumes, or snapshots).
212 212
213 213 The values of numeric properties can be specified using human-readable
214 214 suffixes (for example, k, KB, M, Gb, and so forth, up to Z for
215 215 zettabyte). The following are all valid (and equal) specifications:
216 216 1536M, 1.5g, 1.50GB.
217 217
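     For instance, each of the following commands (with a hypothetical
     dataset) sets the same 1.5 Gbyte quota:

       # zfs set quota=1536M pool/home/bob
       # zfs set quota=1.5g pool/home/bob
       # zfs set quota=1.50GB pool/home/bob
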
218 218 The values of non-numeric properties are case sensitive and must be
219 219 lowercase, except for mountpoint, sharenfs, and sharesmb.
220 220
221 221 The following native properties consist of read-only statistics about the
222 222 dataset. These properties can be neither set, nor inherited. Native
223 223 properties apply to all dataset types unless otherwise noted.
224 224
225 225 available The amount of space available to the dataset and
226 226 all its children, assuming that there is no other
227 227 activity in the pool. Because space is shared
228 228 within a pool, availability can be limited by any
229 229 number of factors, including physical pool size,
230 230 quotas, reservations, or other datasets within the
231 231 pool.
232 232
233 233 This property can also be referred to by its
234 234 shortened column name, avail.
235 235
236 236 compressratio For non-snapshots, the compression ratio achieved
237 237 for the used space of this dataset, expressed as a
238 238 multiplier. The used property includes descendant
239 239 datasets, and, for clones, does not include the
240 240 space shared with the origin snapshot. For
241 241 snapshots, the compressratio is the same as the
242 242 refcompressratio property. Compression can be
243 243 turned on by running: zfs set compression=on
244 244 dataset. The default value is off.
245 245
246 246 createtxg The transaction group (txg) in which the dataset
247 247 was created. Bookmarks have the same createtxg as
248 248 the snapshot they are initially tied to. This
249 249 property is suitable for ordering a list of
250 250 snapshots, e.g. for incremental send and receive.
251 251
252 252 creation The time this dataset was created.
253 253
254 254 clones For snapshots, this property is a comma-separated
255 255 list of filesystems or volumes which are clones of
256 256 this snapshot. The clones' origin property is this
257 257 snapshot. If the clones property is not empty,
258 258 then this snapshot cannot be destroyed (even with
259 259 the -r or -f options).
260 260
261 261 defer_destroy This property is on if the snapshot has been marked
262 262 for deferred destroy by using the zfs destroy -d
263 263 command. Otherwise, the property is off.
264 264
265 265 filesystem_count The total number of filesystems and volumes that
266 266 exist under this location in the dataset tree.
267 267 This value is only available when a
268 268 filesystem_limit has been set somewhere in the tree
269 269 under which the dataset resides.
270 270
271 271 guid The 64 bit GUID of this dataset or bookmark which
272 272 does not change over its entire lifetime. When a
273 273 snapshot is sent to another pool, the received
274 274 snapshot has the same GUID. Thus, the guid is
275 275 suitable to identify a snapshot across pools.
276 276
277 277 logicalreferenced The amount of space that is "logically" accessible
278 278 by this dataset. See the referenced property. The
279 279 logical space ignores the effect of the compression
280 280 and copies properties, giving a quantity closer to
281 281 the amount of data that applications see. However,
282 282 it does include space consumed by metadata.
283 283
284 284 This property can also be referred to by its
285 285 shortened column name, lrefer.
286 286
287 287 logicalused The amount of space that is "logically" consumed by
288 288 this dataset and all its descendents. See the used
289 289 property. The logical space ignores the effect of
290 290 the compression and copies properties, giving a
291 291 quantity closer to the amount of data that
292 292 applications see. However, it does include space
293 293 consumed by metadata.
294 294
295 295 This property can also be referred to by its
296 296 shortened column name, lused.
297 297
298 298 mounted For file systems, indicates whether the file system
299 299 is currently mounted. This property can be either
300 300 yes or no.
301 301
302 302 origin For cloned file systems or volumes, the snapshot
303 303 from which the clone was created. See also the
304 304 clones property.
305 305
306 306 receive_resume_token For filesystems or volumes which have saved
307 307 partially-completed state from zfs receive -s, this
308 308 opaque token can be provided to zfs send -t to
309 309 resume and complete the zfs receive.
310 310
311 311 referenced The amount of data that is accessible by this
312 312 dataset, which may or may not be shared with other
313 313 datasets in the pool. When a snapshot or clone is
314 314 created, it initially references the same amount of
315 315 space as the file system or snapshot it was created
316 316 from, since its contents are identical.
317 317
318 318 This property can also be referred to by its
319 319 shortened column name, refer.
320 320
321 321 refcompressratio The compression ratio achieved for the referenced
322 322 space of this dataset, expressed as a multiplier.
323 323 See also the compressratio property.
324 324
325 325 snapshot_count The total number of snapshots that exist under this
326 326 location in the dataset tree. This value is only
327 327 available when a snapshot_limit has been set
328 328 somewhere in the tree under which the dataset
329 329 resides.
330 330
331 331 type The type of dataset: filesystem, volume, or
332 332 snapshot.
333 333
334 334 used The amount of space consumed by this dataset and
335 335 all its descendents. This is the value that is
336 336 checked against this dataset's quota and
337 337 reservation. The space used does not include this
338 338 dataset's reservation, but does take into account
339 339 the reservations of any descendent datasets. The
340 340 amount of space that a dataset consumes from its
341 341 parent, as well as the amount of space that is
342 342 freed if this dataset is recursively destroyed, is
343 343 the greater of its space used and its reservation.
344 344
345 345 The used space of a snapshot (see the Snapshots
346 346 section) is space that is referenced exclusively by
347 347 this snapshot. If this snapshot is destroyed, the
348 348 amount of used space will be freed. Space that is
349 349 shared by multiple snapshots isn't accounted for in
350 350 this metric. When a snapshot is destroyed, space
351 351 that was previously shared with this snapshot can
352 352 become unique to snapshots adjacent to it, thus
353 353 changing the used space of those snapshots. The
354 354 used space of the latest snapshot can also be
355 355 affected by changes in the file system. Note that
356 356 the used space of a snapshot is a subset of the
357 357 written space of the snapshot.
358 358
359 359 The amount of space used, available, or referenced
360 360 does not take into account pending changes.
361 361 Pending changes are generally accounted for within
362 362 a few seconds. Committing a change to a disk using
363 363 fsync(3C) or O_SYNC does not necessarily guarantee
364 364 that the space usage information is updated
365 365 immediately.
366 366
367 367 usedby* The usedby* properties decompose the used
368 368 properties into the various reasons that space is
369 369 used. Specifically, used = usedbychildren +
370 370 usedbydataset + usedbyrefreservation +
371 371 usedbysnapshots. These properties are only
372 372 available for datasets created on zpool "version
373 373 13" pools.
374 374
375 375 usedbychildren The amount of space used by children of this
376 376 dataset, which would be freed if all the dataset's
377 377 children were destroyed.
378 378
379 379 usedbydataset The amount of space used by this dataset itself,
380 380 which would be freed if the dataset were destroyed
381 381 (after first removing any refreservation and
382 382 destroying any necessary snapshots or descendents).
383 383
384 384 usedbyrefreservation The amount of space used by a refreservation set on
385 385 this dataset, which would be freed if the
386 386 refreservation was removed.
387 387
388 388 usedbysnapshots The amount of space consumed by snapshots of this
389 389 dataset. In particular, it is the amount of space
390 390 that would be freed if all of this dataset's
391 391 snapshots were destroyed. Note that this is not
392 392 simply the sum of the snapshots' used properties
393 393 because space can be shared by multiple snapshots.
394 394
395 395 userused@user The amount of space consumed by the specified user
396 396 in this dataset. Space is charged to the owner of
397 397 each file, as displayed by ls -l. The amount of
398 398 space charged is displayed by du and ls -s. See
399 399 the zfs userspace subcommand for more information.
400 400
401 401 Unprivileged users can access only their own space
402 402 usage. The root user, or a user who has been
403 403 granted the userused privilege with zfs allow, can
404 404 access everyone's usage.
405 405
406 406 The userused@... properties are not displayed by
407 407 zfs get all. The user's name must be appended
408 408 after the @ symbol, using one of the following
409 409 forms:
410 410
411 411 o POSIX name (for example, joe)
412 412
413 413 o POSIX numeric ID (for example, 789)
414 414
415 415 o SID name (for example, joe.smith@mydomain)
416 416
417 417 o SID numeric ID (for example, S-1-123-456-789)
418 418
419 419 userrefs This property is set to the number of user holds on
420 420 this snapshot. User holds are set by using the zfs
421 421 hold command.
422 422
423 423 groupused@group The amount of space consumed by the specified group
424 424 in this dataset. Space is charged to the group of
425 425 each file, as displayed by ls -l. See the
426 426 userused@user property for more information.
427 427
428 428 Unprivileged users can only access their own
429 429 groups' space usage. The root user, or a user who
430 430 has been granted the groupused privilege with zfs
431 431 allow, can access all groups' usage.
432 432
433 433 volblocksize For volumes, specifies the block size of the
434 434 volume. The blocksize cannot be changed once the
435 435 volume has been written, so it should be set at
436 436 volume creation time. The default blocksize for
437 437 volumes is 8 Kbytes. Any power of 2 from 512 bytes
438 438 to 128 Kbytes is valid.
439 439
440 440 This property can also be referred to by its
441 441 shortened column name, volblock.
442 442
443 443 written The amount of space referenced by this dataset,
444 444 that was written since the previous snapshot (i.e.
445 445 that is not referenced by the previous snapshot).
446 446
447 447 written@snapshot The amount of referenced space written to this
448 448 dataset since the specified snapshot. This is the
449 449 space that is referenced by this dataset but was
450 450 not referenced by the specified snapshot.
451 451
452 452 The snapshot may be specified as a short snapshot
453 453 name (just the part after the @), in which case it
454 454 will be interpreted as a snapshot in the same
455 455 filesystem as this dataset. The snapshot may be a
456 456 full snapshot name (filesystem@snapshot), which for
457 457 clones may be a snapshot in the origin's filesystem
458 458 (or the origin of the origin's filesystem, etc.)
459 459
460 460 The following native properties can be used to change the behavior of a
461 461 ZFS dataset.
462 462
463 463 aclinherit=discard|noallow|restricted|passthrough|passthrough-x
464 464 Controls how ACEs are inherited when files and directories are created.
465 465
466 466 discard does not inherit any ACEs.
467 467
468 468 noallow only inherits inheritable ACEs that specify "deny"
469 469 permissions.
470 470
471 471 restricted default, removes the write_acl and write_owner
472 472 permissions when the ACE is inherited.
473 473
474 474 passthrough inherits all inheritable ACEs without any modifications.
475 475
476 476 passthrough-x same meaning as passthrough, except that the owner@,
477 477 group@, and everyone@ ACEs inherit the execute
478 478 permission only if the file creation mode also requests
479 479 the execute bit.
480 480
481 481 When the property value is set to passthrough, files are created with a
482 482 mode determined by the inheritable ACEs. If no inheritable ACEs exist
483 483 that affect the mode, then the mode is set in accordance to the
484 484 requested mode from the application.
485 485
486 486 aclmode=discard|groupmask|passthrough|restricted
487 487 Controls how an ACL is modified during chmod(2) and how inherited ACEs
488 488 are modified by the file creation mode.
489 489
490 490 discard default, deletes all ACEs except for those representing
491 491 the mode of the file or directory requested by chmod(2).
492 492
493 493 groupmask reduces permissions granted by all ALLOW entries found in
494 494 the ACL such that they are no greater than the group
495 495 permissions specified by the mode.
496 496
497 497 passthrough indicates that no changes are made to the ACL other than
498 498 creating or updating the necessary ACEs to represent the
499 499 new mode of the file or directory.
500 500
501 501 restricted causes the chmod(2) operation to return an error when used
502 502 on any file or directory which has a non-trivial ACL, with
503 503 entries in addition to those that represent the mode.
504 504
505 505 chmod(2) is required to change the set user ID, set group ID, or sticky
506 506 bit on a file or directory, as they do not have equivalent ACEs. In
507 507 order to use chmod(2) on a file or directory with a non-trivial ACL
508 508 when aclmode is set to restricted, you must first remove all ACEs
509 509 except for those that represent the current mode.
510 510
511 511 atime=on|off
512 512 Controls whether the access time for files is updated when they are
513 513 read. Turning this property off avoids producing write traffic when
514 514 reading files and can result in significant performance gains, though
515 515 it might confuse mailers and other similar utilities. The default
516 516 value is on.
517 517
518 518 canmount=on|off|noauto
519 519 If this property is set to off, the file system cannot be mounted, and
520 520 is ignored by zfs mount -a. Setting this property to off is similar to
521 521 setting the mountpoint property to none, except that the dataset still
522 522 has a normal mountpoint property, which can be inherited. Setting this
523 523 property to off allows datasets to be used solely as a mechanism to
524 524 inherit properties. One example of setting canmount=off is to have two
525 525 datasets with the same mountpoint, so that the children of both
526 526 datasets appear in the same directory, but might have different
527 527 inherited characteristics.
528 528
529 529 When set to noauto, a dataset can only be mounted and unmounted
530 530 explicitly. The dataset is not mounted automatically when the dataset
531 531 is created or imported, nor is it mounted by the zfs mount -a command
532 532 or unmounted by the zfs unmount -a command.
533 533
534 534 This property is not inherited.
535 535
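     A minimal sketch of the canmount=off pattern described above, assuming
     two hypothetical parent datasets that share one mount point:

       # zfs create -o canmount=off -o mountpoint=/export/home rpool/home
       # zfs create -o canmount=off -o mountpoint=/export/home tank/home
       # zfs create rpool/home/alice
       # zfs create tank/home/bob
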
536 536 checksum=on|off|fletcher2|fletcher4|sha256|noparity|sha512|skein|edonr
537 537 Controls the checksum used to verify data integrity. The default value
538 538 is on, which automatically selects an appropriate algorithm (currently,
539 539 fletcher4, but this may change in future releases). The value off
540 540 disables integrity checking on user data. The value noparity not only
541 541 disables integrity but also disables maintaining parity for user data.
542 542 This setting is used internally by a dump device residing on a RAID-Z
543 543 pool and should not be used by any other dataset. Disabling checksums
544 544 is NOT a recommended practice.
545 545
546 546 The sha512, skein, and edonr checksum algorithms require enabling the
547 547 appropriate features on the pool. Please see zpool-features(5) for
548 548 more information on these algorithms.
549 549
550 550 Changing this property affects only newly-written data.
551 551
552 552 compression=on|off|gzip|gzip-N|lz4|lzjb|zle
553 553 Controls the compression algorithm used for this dataset.
554 554
555 555 Setting compression to on indicates that the current default
556 556 compression algorithm should be used. The default balances compression
557 557 and decompression speed, with compression ratio and is expected to work
558 558 and decompression speed with compression ratio, and is expected to work
559 559 this property, on does not select a fixed compression type. As new
560 560 compression algorithms are added to ZFS and enabled on a pool, the
561 561 default compression algorithm may change. The current default
562 562 compression algorithm is either lzjb or, if the lz4_compress feature is
563 563 enabled, lz4.
564 564
565 565 The lz4 compression algorithm is a high-performance replacement for the
566 566 lzjb algorithm. It features significantly faster compression and
567 567 decompression, as well as a moderately higher compression ratio than
568 568 lzjb, but can only be used on pools with the lz4_compress feature set
569 569 to enabled. See zpool-features(5) for details on ZFS feature flags and
570 570 the lz4_compress feature.
571 571
572 572 The lzjb compression algorithm is optimized for performance while
573 573 providing decent data compression.
574 574
575 575 The gzip compression algorithm uses the same compression as the gzip(1)
576 576 command. You can specify the gzip level by using the value gzip-N,
577 577 where N is an integer from 1 (fastest) to 9 (best compression ratio).
578 578 Currently, gzip is equivalent to gzip-6 (which is also the default for
579 579 gzip(1)).
580 580
581 581 The zle compression algorithm compresses runs of zeros.
582 582
583 583 This property can also be referred to by its shortened column name
584 584 compress. Changing this property affects only newly-written data.
585 585
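     For example, assuming the lz4_compress feature is enabled and using a
     hypothetical dataset name:

       # zfs set compression=lz4 pool/data
       # zfs get compression,compressratio pool/data
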
586 586 copies=1|2|3
587 587 Controls the number of copies of data stored for this dataset. These
588 588 copies are in addition to any redundancy provided by the pool, for
589 589 example, mirroring or RAID-Z. The copies are stored on different
590 590 disks, if possible. The space used by multiple copies is charged to
591 591 the associated file and dataset, changing the used property and
592 592 counting against quotas and reservations.
593 593
594 594 Changing this property only affects newly-written data. Therefore, set
595 595 this property at file system creation time by using the -o copies=N
596 596 option.
597 597
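     A sketch of setting copies at creation time, as recommended above (the
     dataset name is hypothetical):

       # zfs create -o copies=2 pool/important
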
598 598 devices=on|off
599 599 Controls whether device nodes can be opened on this file system. The
600 600 default value is on.
601 601
602 602 exec=on|off
603 603 Controls whether processes can be executed from within this file
604 604 system. The default value is on.
605 605
606 606 filesystem_limit=count|none
607 607 Limits the number of filesystems and volumes that can exist under this
608 608 point in the dataset tree. The limit is not enforced if the user is
609 609 allowed to change the limit. Setting a filesystem_limit on a
610 610 descendent of a filesystem that already has a filesystem_limit does not
611 611 override the ancestor's filesystem_limit, but rather imposes an
612 612 additional limit. This feature must be enabled to be used (see
613 613 zpool-features(5)).
614 614
615 615 mountpoint=path|none|legacy
616 616 Controls the mount point used for this file system. See the Mount
617 617 Points section for more information on how this property is used.
618 618
619 619 When the mountpoint property is changed for a file system, the file
620 620 system and any children that inherit the mount point are unmounted. If
621 621 the new value is legacy, then they remain unmounted. Otherwise, they
622 622 are automatically remounted in the new location if the property was
623 623 previously legacy or none, or if they were mounted before the property
624 624 was changed. In addition, any shared file systems are unshared and
625 625 shared in the new location.
626 626
627 627 nbmand=on|off
628 628 Controls whether the file system should be mounted with nbmand (Non
629 629 Blocking mandatory locks). This is used for SMB clients. Changes to
630 630 this property only take effect when the file system is unmounted and
631 631 remounted. See mount(1M) for more information on nbmand mounts.
632 632
633 633 primarycache=all|none|metadata
634 634 Controls what is cached in the primary cache (ARC). If this property
635 635 is set to all, then both user data and metadata are cached. If this
636 636 property is set to none, then neither user data nor metadata is cached.
637 637 If this property is set to metadata, then only metadata is cached. The
638 638 default value is all.
639 639
640 640 quota=size|none
641 641 Limits the amount of space a dataset and its descendents can consume.
642 642 This property enforces a hard limit on the amount of space used. This
643 643 includes all space consumed by descendents, including file systems and
644 644 snapshots. Setting a quota on a descendent of a dataset that already
645 645 has a quota does not override the ancestor's quota, but rather imposes
646 646 an additional limit.
647 647
648 648 Quotas cannot be set on volumes, as the volsize property acts as an
649 649 implicit quota.
650 650
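     For illustration with hypothetical datasets, a quota on a descendent adds
     a further limit beneath the ancestor's quota:

       # zfs set quota=20g pool/home
       # zfs set quota=10g pool/home/bob
       # zfs get -r quota pool/home
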
651 651 snapshot_limit=count|none
652 652 Limits the number of snapshots that can be created on a dataset and its
653 653 descendents. Setting a snapshot_limit on a descendent of a dataset
654 654 that already has a snapshot_limit does not override the ancestor's
655 655 snapshot_limit, but rather imposes an additional limit. The limit is
656 656 not enforced if the user is allowed to change the limit. For example,
657 657 this means that recursive snapshots taken from the global zone are
658 658 counted against each delegated dataset within a zone. This feature
659 659 must be enabled to be used (see zpool-features(5)).
660 660
661 661 userquota@user=size|none
662 662 Limits the amount of space consumed by the specified user. User space
663 663 consumption is identified by the userused@user property.
664 664
665 665 Enforcement of user quotas may be delayed by several seconds. This
666 666 delay means that a user might exceed their quota before the system
667 667 notices that they are over quota and begins to refuse additional writes
668 668 with the EDQUOT error message. See the zfs userspace subcommand for
669 669 more information.
670 670
671 671 Unprivileged users can only access their own space usage. The
672 672 root user, or a user who has been granted the userquota privilege with
673 673 zfs allow, can get and set everyone's quota.
674 674
675 675 This property is not available on volumes, on file systems before
676 676 version 4, or on pools before version 15. The userquota@... properties
677 677 are not displayed by zfs get all. The user's name must be appended
678 678 after the @ symbol, using one of the following forms:
679 679
680 680 o POSIX name (for example, joe)
681 681
682 682 o POSIX numeric ID (for example, 789)
683 683
684 684 o SID name (for example, joe.smith@mydomain)
685 685
686 686 o SID numeric ID (for example, S-1-123-456-789)
687 687
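     A brief sketch, with hypothetical user and dataset names:

       # zfs set userquota@joe=50g pool/home
       # zfs get userquota@joe pool/home
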
688 688 groupquota@group=size|none
689 689 Limits the amount of space consumed by the specified group. Group
690 690 space consumption is identified by the groupused@group property.
691 691
692 692 Unprivileged users can access only their own groups' space usage. The
693 693 root user, or a user who has been granted the groupquota privilege with
694 694 zfs allow, can get and set all groups' quotas.
695 695
696 696 readonly=on|off
697 697 Controls whether this dataset can be modified. The default value is
698 698 off.
699 699
700 700 This property can also be referred to by its shortened column name,
701 701 rdonly.
702 702
703 703 recordsize=size
704 704 Specifies a suggested block size for files in the file system. This
705 705 property is designed solely for use with database workloads that access
706 706 files in fixed-size records. ZFS automatically tunes block sizes
707 707 according to internal algorithms optimized for typical access patterns.
708 708
709 709 For databases that create very large files but access them in small
710 710 random chunks, these algorithms may be suboptimal. Specifying a
711 711 recordsize greater than or equal to the record size of the database can
712 712 result in significant performance gains. Use of this property for
713 713 general purpose file systems is strongly discouraged, and may adversely
714 714 affect performance.
715 715
716 716 The size specified must be a power of two greater than or equal to 512
717 717 and less than or equal to 128 Kbytes. If the large_blocks feature is
718 718 enabled on the pool, the size may be up to 1 Mbyte. See
719 719 zpool-features(5) for details on ZFS feature flags.
720 720
721 721 Changing the file system's recordsize affects only files created
722 722 afterward; existing files are unaffected.
723 723
724 724 This property can also be referred to by its shortened column name,
725 725 recsize.
726 726
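     For example, for a database that performs fixed 8 Kbyte I/O (the dataset
     name is hypothetical):

       # zfs create -o recordsize=8k pool/db
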
727 727 redundant_metadata=all|most
728 728 Controls what types of metadata are stored redundantly. ZFS stores an
729 729 extra copy of metadata, so that if a single block is corrupted, the
730 730 amount of user data lost is limited. This extra copy is in addition to
731 731 any redundancy provided at the pool level (e.g. by mirroring or
732 732 RAID-Z), and is in addition to an extra copy specified by the copies
733 733 property (up to a total of 3 copies). For example, if the pool is
734 734 mirrored, copies=2, and redundant_metadata=most, then ZFS stores 6
735 735 copies of most metadata, and 4 copies of data and some metadata.
736 736
737 737 When set to all, ZFS stores an extra copy of all metadata. If a single
738 738 on-disk block is corrupt, at worst a single block of user data (which
739 739 is recordsize bytes long) can be lost.
740 740
741 741 When set to most, ZFS stores an extra copy of most types of metadata.
742 742 This can improve performance of random writes, because less metadata
743 743 must be written. In practice, at worst about 100 blocks (of recordsize
744 744 bytes each) of user data can be lost if a single on-disk block is
745 745 corrupt. The exact behavior of which metadata blocks are stored
746 746 redundantly may change in future releases.
747 747
748 748 The default value is all.
749 749
750 750 refquota=size|none
751 751 Limits the amount of space a dataset can consume. This property
752 752 enforces a hard limit on the amount of space used. This hard limit
753 753 does not include space used by descendents, including file systems and
754 754 snapshots.
755 755
756 756 refreservation=size|none|auto
757 757 The minimum amount of space guaranteed to a dataset, not including its
758 758 descendents. When the amount of space used is below this value, the
759 759 dataset is treated as if it were taking up the amount of space
760 760 specified by refreservation. The refreservation reservation is
761 761 accounted for in the parent datasets' space used, and counts against
762 762 the parent datasets' quotas and reservations.
763 763
764 764 If refreservation is set, a snapshot is only allowed if there is enough
765 765 free pool space outside of this reservation to accommodate the current
766 766 number of "referenced" bytes in the dataset.
767 767
768 768 If refreservation is set to auto, a volume is thick provisioned (or
769 769 "not sparse"). refreservation=auto is only supported on volumes. See
770 770 volsize in the Native Properties section for more information about
771 771 sparse volumes.
772 772
773 773 This property can also be referred to by its shortened column name,
774 774 refreserv.
775 775
776 776 reservation=size|none
777 777 The minimum amount of space guaranteed to a dataset and its
778 778 descendants. When the amount of space used is below this value, the
779 779 dataset is treated as if it were taking up the amount of space
780 780 specified by its reservation. Reservations are accounted for in the
781 781 parent datasets' space used, and count against the parent datasets'
782 782 quotas and reservations.
783 783
784 784 This property can also be referred to by its shortened column name,
785 785 reserv.
786 786
787 787 secondarycache=all|none|metadata
788 788 Controls what is cached in the secondary cache (L2ARC). If this
789 789 property is set to all, then both user data and metadata are cached. If
790 790 this property is set to none, then neither user data nor metadata is
791 791 cached. If this property is set to metadata, then only metadata is
792 792 cached. The default value is all.
793 793
794 794 setuid=on|off
795 795 Controls whether the setuid bit is respected for the file system. The
796 796 default value is on.
797 797
798 798 sharesmb=on|off|opts
799 799 Controls whether the file system is shared via SMB, and what options
800 800 are to be used. A file system with the sharesmb property set to off is
801 801 managed through traditional tools such as sharemgr(1M). Otherwise, the
802 802 file system is automatically shared and unshared with the zfs share and
803 803 zfs unshare commands. If the property is set to on, the sharemgr(1M)
804 804 command is invoked with no options. Otherwise, the sharemgr(1M)
805 805 command is invoked with options equivalent to the contents of this
806 806 property.
807 807
808 808 Because SMB shares require a resource name, a unique resource name is
809 809 constructed from the dataset name. The constructed name is a copy of
810 810 the dataset name except that the characters in the dataset name, which
811 811 would be invalid in the resource name, are replaced with underscore (_)
812 812 characters. A pseudo property "name" is also supported that allows you
813 813 to replace the dataset name with a specified name. The specified name
814 814 is then used to replace the prefix dataset in the case of inheritance.
815 815 For example, if the dataset data/home/john is set to name=john, then
816 816 data/home/john has a resource name of john. If a child dataset
817 817 data/home/john/backups is shared, it has a resource name of
818 818 john_backups.
819 819
820 820 When SMB shares are created, the SMB share name appears as an entry in
821 821 the .zfs/shares directory. You can use the ls or chmod command to
822 822 display the share-level ACLs on the entries in this directory.
823 823
824 824 When the sharesmb property is changed for a dataset, the dataset and
825 825 any children inheriting the property are re-shared with the new
826 826 options, only if the property was previously set to off, or if they
827 827 were shared before the property was changed. If the new property is
828 828 set to off, the file systems are unshared.
829 829
830 830 sharenfs=on|off|opts
831 831 Controls whether the file system is shared via NFS, and what options
832 832 are to be used. A file system with a sharenfs property of off is
833 833 managed through traditional tools such as share(1M), unshare(1M), and
834 834 dfstab(4). Otherwise, the file system is automatically shared and
835 835 unshared with the zfs share and zfs unshare commands. If the property
836 836 is set to on, the share(1M) command is invoked with no options. Otherwise,
837 837 the share(1M) command is invoked with options equivalent to the
838 838 contents of this property.
839 839
840 840 When the sharenfs property is changed for a dataset, the dataset and
841 841 any children inheriting the property are re-shared with the new
842 842 options, only if the property was previously off, or if they were
843 843 shared before the property was changed. If the new property is off,
844 844 the file systems are unshared.
845 845
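     For illustration (the dataset names are hypothetical, and the second
     value is only one possible share(1M) option string):

       # zfs set sharenfs=on pool/export/home
       # zfs set sharenfs=ro=@192.168.1.0/24 pool/export/docs
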
846 846 logbias=latency|throughput
847 847 Provide a hint to ZFS about handling of synchronous requests in this
848 848 dataset. If logbias is set to latency (the default), ZFS will use pool
849 849 log devices (if configured) to handle the requests at low latency. If
850 850 logbias is set to throughput, ZFS will not use configured pool log
851 851 devices. ZFS will instead optimize synchronous operations for global
852 852 pool throughput and efficient use of resources.
853 853
854 854 snapdir=hidden|visible
855 855 Controls whether the .zfs directory is hidden or visible in the root of
856 856 the file system as discussed in the Snapshots section. The default
857 857 value is hidden.
858 858
859 859 sync=standard|always|disabled
860 860 Controls the behavior of synchronous requests (e.g. fsync, O_DSYNC).
861 861 standard is the POSIX specified behavior of ensuring all synchronous
862 862 requests are written to stable storage and all devices are flushed to
863 863 ensure data is not cached by device controllers (this is the default).
864 864 always causes every file system transaction to be written and flushed
865 865 before its system call returns. This has a large performance penalty.
866 866 disabled disables synchronous requests. File system transactions are
867 867 only committed to stable storage periodically. This option will give
868 868 the highest performance. However, it is very dangerous as ZFS would be
869 869 ignoring the synchronous transaction demands of applications such as
870 870 databases or NFS. Administrators should only use this option when the
871 871 risks are understood.
872 872
873 873 version=N|current
874 874 The on-disk version of this file system, which is independent of the
875 875 pool version. This property can only be set to later supported
876 876 versions. See the zfs upgrade command.
877 877
878 878 volsize=size
879 879 For volumes, specifies the logical size of the volume. By default,
880 880 creating a volume establishes a reservation of equal size. For storage
881 881 pools with a version number of 9 or higher, a refreservation is set
882 882 instead. Any changes to volsize are reflected in an equivalent change
883 883 to the reservation (or refreservation). The volsize can only be set to
884 884 a multiple of volblocksize, and cannot be zero.
885 885
886 886 The reservation is kept equal to the volume's logical size to prevent
887 887 unexpected behavior for consumers. Without the reservation, the volume
888 888 could run out of space, resulting in undefined behavior or data
889 889 corruption, depending on how the volume is used. These effects can
890 890 also occur when the volume size is changed while it is in use
891 891 (particularly when shrinking the size). Extreme care should be used
892 892 when adjusting the volume size.
893 893
894 894 Though not recommended, a "sparse volume" (also known as "thin
895 895 provisioned") can be created by specifying the -s option to the zfs
896 896 create -V command, or by changing the value of the refreservation
897 897 property (or reservation property on pool version 8 or earlier) after
898 898 the volume has been created. A "sparse volume" is a volume where the
899 899 value of refreservation is less than the size of the volume plus the
900 900 space required to store its metadata. Consequently, writes to a sparse
901 901 volume can fail with ENOSPC when the pool is low on space. For a
902 902 sparse volume, changes to volsize are not reflected in the
903 903 refreservation. A volume that is not sparse is said to be "thick
904 904 provisioned". A sparse volume can become thick provisioned by setting
905 905 refreservation to auto.
906 906
907 907 vscan=on|off
908 908 Controls whether regular files should be scanned for viruses when a
909 909 file is opened and closed. In addition to enabling this property, the
910 910 virus scan service must also be enabled for virus scanning to occur.
911 911 The default value is off.
912 912
913 913 xattr=on|off
914 914 Controls whether extended attributes are enabled for this file system.
915 915 The default value is on.
916 916
917 917 zoned=on|off
918 918 Controls whether the dataset is managed from a non-global zone. See
919 919 the Zones section for more information. The default value is off.
920 920
921 921 The following three properties cannot be changed after the file system is
922 922 created, and therefore, should be set when the file system is created.
923 923 If the properties are not set with the zfs create or zpool create
924 924 commands, these properties are inherited from the parent dataset. If the
925 925 parent dataset lacks these properties due to having been created prior to
926 926 these features being supported, the new file system will have the default
927 927 values for these properties.
928 928
929 929 casesensitivity=sensitive|insensitive|mixed
930 930 Indicates whether the file name matching algorithm used by the file
931 931 system should be case-sensitive, case-insensitive, or allow a
932 932 combination of both styles of matching. The default value for the
933 933 casesensitivity property is sensitive. Traditionally, UNIX and POSIX
934 934 file systems have case-sensitive file names.
935 935
936 936 The mixed value for the casesensitivity property indicates that the
937 937 file system can support requests for both case-sensitive and case-
938 938 insensitive matching behavior. Currently, case-insensitive matching
939 939 behavior on a file system that supports mixed behavior is limited to
940 940 the SMB server product. For more information about the mixed value
941 941 behavior, see the "ZFS Administration Guide".
942 942
943 943 normalization=none|formC|formD|formKC|formKD
944 944 Indicates whether the file system should perform a unicode
945 945 normalization of file names whenever two file names are compared, and
946 946 which normalization algorithm should be used. File names are always
947 947 stored unmodified; names are normalized as part of any comparison
948 948 process. If this property is set to a legal value other than none, and
949 949 the utf8only property was left unspecified, the utf8only property is
950 950 automatically set to on. The default value of the normalization
951 951 property is none. This property cannot be changed after the file
952 952 system is created.
953 953
954 954 utf8only=on|off
955 955 Indicates whether the file system should reject file names that include
956 956 characters that are not present in the UTF-8 character code set. If
957 957 this property is explicitly set to off, the normalization property must
958 958 either not be explicitly set or be set to none. The default value for
959 959 the utf8only property is off. This property cannot be changed after
960 960 the file system is created.
961 961
962 962 The casesensitivity, normalization, and utf8only properties are also new
963 963 permissions that can be assigned to non-privileged users by using the ZFS
964 964 delegated administration feature.
965 965
966 966 Temporary Mount Point Properties
967 967 When a file system is mounted, either through mount(1M) for legacy mounts
968 968 or the zfs mount command for normal file systems, its mount options are
969 969 set according to its properties. The correlation between properties and
970 970 mount options is as follows:
971 971
972 972 PROPERTY MOUNT OPTION
973 973 devices devices/nodevices
974 974 exec exec/noexec
975 975 readonly ro/rw
976 976 setuid setuid/nosetuid
977 977 xattr xattr/noxattr
978 978
979 979 In addition, these options can be set on a per-mount basis using the -o
980 980 option, without affecting the property that is stored on disk. The
981 981 values specified on the command line override the values stored in the
982 982 dataset. The nosuid option is an alias for nodevices,nosetuid. These
983 983 properties are reported as "temporary" by the zfs get command. If the
984 984 properties are changed while the dataset is mounted, the new setting
985 985 overrides any temporary settings.
986 986
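     As a sketch, a temporary read-only, nosuid mount of a hypothetical file
     system (the on-disk properties are left untouched):

       # zfs mount -o ro,nosuid pool/scratch
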
987 987 User Properties
988 988 In addition to the standard native properties, ZFS supports arbitrary
989 989 user properties. User properties have no effect on ZFS behavior, but
990 990 applications or administrators can use them to annotate datasets (file
991 991 systems, volumes, and snapshots).
992 992
993 993 User property names must contain a colon (":") character to distinguish
994 994 them from native properties. They may contain lowercase letters,
995 995 numbers, and the following punctuation characters: colon (":"), dash
996 996 ("-"), period ("."), and underscore ("_"). The expected convention is
997 997 that the property name is divided into two portions such as
998 998 module:property, but this namespace is not enforced by ZFS. User
999 999 property names can be at most 256 characters, and cannot begin with a
1000 1000 dash ("-").
1001 1001
1002 1002 When making programmatic use of user properties, it is strongly suggested
1003 1003 to use a reversed DNS domain name for the module component of property
1004 1004 names to reduce the chance that two independently-developed packages use
1005 1005 the same property name for different purposes.
1006 1006
1007 1007 The values of user properties are arbitrary strings, are always
1008 1008 inherited, and are never validated. All of the commands that operate on
1009 1009 properties (zfs list, zfs get, zfs set, and so forth) can be used to
1010 1010 manipulate both native properties and user properties. Use the zfs
1011 1011 inherit command to clear a user property. If the property is not defined
1012 1012 in any parent dataset, it is removed entirely. Property values are
1013 1013 limited to 8192 bytes.
1014 1014
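     A sketch using a hypothetical reversed-DNS module name:

       # zfs set com.example:backup-policy=weekly pool/home
       # zfs get com.example:backup-policy pool/home
       # zfs inherit com.example:backup-policy pool/home
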
1015 1015 ZFS Volumes as Swap or Dump Devices
1016 1016 During an initial installation a swap device and dump device are created
1017 1017 on ZFS volumes in the ZFS root pool. By default, the swap area size is
1018 1018 based on 1/2 the size of physical memory up to 2 Gbytes. The size of the
1019 1019 dump device depends on the kernel's requirements at installation time.
1020 1020 Separate ZFS volumes must be used for the swap area and dump devices. Do
1021 1021 not swap to a file on a ZFS file system. A ZFS swap file configuration
1022 1022 is not supported.
1023 1023
1024 1024 If you need to change your swap area or dump device after the system is
1025 1025 installed or upgraded, use the swap(1M) and dumpadm(1M) commands.
1026 1026
1027 1027 SUBCOMMANDS
1028 1028 All subcommands that modify state are logged persistently to the pool in
1029 1029 their original form.
1030 1030
1031 1031 zfs -?
1032 1032 Displays a help message.
1033 1033
1034 1034 zfs create [-p] [-o property=value]... filesystem
1035 1035 Creates a new ZFS file system. The file system is automatically
1036 1036 mounted according to the mountpoint property inherited from the parent.
1037 1037
1038 1038 -o property=value
1039 1039 Sets the specified property as if the command zfs set
1040 1040 property=value was invoked at the same time the dataset was
1041 1041 created. Any editable ZFS property can also be set at creation
1042 1042 time. Multiple -o options can be specified. An error results if
1043 1043 the same property is specified in multiple -o options.
1044 1044
1045 1045 -p Creates all the non-existing parent datasets. Datasets created in
1046 1046 this manner are automatically mounted according to the mountpoint
1047 1047 property inherited from their parent. Any property specified on
1048 1048 the command line using the -o option is ignored. If the target
1049 1049 filesystem already exists, the operation completes successfully.
1050 1050
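     For example (the names are hypothetical), creating a file system together
     with any missing parents:

       # zfs create -p pool/users/projects/build
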
1051 1051 zfs create [-ps] [-b blocksize] [-o property=value]... -V size volume
1052 1052 Creates a volume of the given size. The volume is exported as a block
1053 1053 device in /dev/zvol/{dsk,rdsk}/path, where path is the name of the
1054 1054 volume in the ZFS namespace. The size represents the logical size as
1055 1055 exported by the device. By default, a reservation of equal size is
1056 1056 created.
1057 1057
1058 1058 size is automatically rounded up to the nearest 128 Kbytes to ensure
1059 1059 that the volume has an integral number of blocks regardless of
1060 1060 blocksize.
1061 1061
1062 1062 -b blocksize
1063 1063 Equivalent to -o volblocksize=blocksize. If this option is
1064 1064 specified in conjunction with -o volblocksize, the resulting
1065 1065 behavior is undefined.
1066 1066
1067 1067 -o property=value
1068 1068 Sets the specified property as if the zfs set property=value
1069 1069 command was invoked at the same time the dataset was created. Any
1070 1070 editable ZFS property can also be set at creation time. Multiple
1071 1071 -o options can be specified. An error results if the same property
1072 1072 is specified in multiple -o options.
1073 1073
1074 1074 -p Creates all the non-existing parent datasets. Datasets created in
1075 1075 this manner are automatically mounted according to the mountpoint
1076 1076 property inherited from their parent. Any property specified on
1077 1077 the command line using the -o option is ignored. If the target
1078 1078 filesystem already exists, the operation completes successfully.
1079 1079
1080 1080 -s Creates a sparse volume with no reservation. See volsize in the
1081 1081 Native Properties section for more information about sparse
1082 1082 volumes.
1083 1083
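     A sketch of creating a sparse ("thin provisioned") volume, with a
     hypothetical name and size:

       # zfs create -s -V 16g pool/vol0
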
1084 1084 zfs destroy [-Rfnprv] filesystem|volume
1085 1085 Destroys the given dataset. By default, the command unshares any file
1086 1086 systems that are currently shared, unmounts any file systems that are
1087 1087 currently mounted, and refuses to destroy a dataset that has active
1088 1088 dependents (children or clones).
1089 1089
1090 1090 -R Recursively destroy all dependents, including cloned file systems
1091 1091 outside the target hierarchy.
1092 1092
1093 1093 -f Force an unmount of any file systems using the unmount -f command.
1094 1094 This option has no effect on non-file systems or unmounted file
1095 1095 systems.
1096 1096
1097 1097 -n Do a dry-run ("No-op") deletion. No data will be deleted. This is
1098 1098 useful in conjunction with the -v or -p flags to determine what
1099 1099 data would be deleted.
1100 1100
1101 1101 -p Print machine-parsable verbose information about the deleted data.
1102 1102
1103 1103 -r Recursively destroy all children.
1104 1104
1105 1105 -v Print verbose information about the deleted data.
1106 1106
1107 1107 Extreme care should be taken when applying either the -r or the -R
1108 1108 options, as they can destroy large portions of a pool and cause
1109 1109 unexpected behavior for mounted file systems in use.
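
               Given the warning above, a dry run is a prudent first step.
               For example (hypothetical dataset name):

                 # zfs destroy -rnv pool/home/old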
1110 1110
1111 1111 zfs destroy [-Rdnprv] filesystem|volume@snap[%snap[,snap[%snap]]]...
1112 1112 The given snapshots are destroyed immediately if and only if the zfs
1113 1113      destroy command without the -d option would have destroyed them.  Such

1114 1114 immediate destruction would occur, for example, if the snapshot had no
1115 1115 clones and the user-initiated reference count were zero.
1116 1116
1117 1117 If a snapshot does not qualify for immediate destruction, it is marked
1118 1118 for deferred deletion. In this state, it exists as a usable, visible
1119 1119 snapshot until both of the preconditions listed above are met, at which
1120 1120 point it is destroyed.
1121 1121
1122 1122 An inclusive range of snapshots may be specified by separating the
1123 1123 first and last snapshots with a percent sign. The first and/or last
1124 1124 snapshots may be left blank, in which case the filesystem's oldest or
1125 1125 newest snapshot will be implied.
1126 1126
1127 1127 Multiple snapshots (or ranges of snapshots) of the same filesystem or
1128 1128 volume may be specified in a comma-separated list of snapshots. Only
1129 1129 the snapshot's short name (the part after the @) should be specified
1130 1130 when using a range or comma-separated list to identify multiple
1131 1131 snapshots.
1132 1132
1133 1133 -R Recursively destroy all clones of these snapshots, including the
1134 1134 clones, snapshots, and children. If this flag is specified, the -d
1135 1135 flag will have no effect.
1136 1136
1137 1137 -d Defer snapshot deletion.
1138 1138
1139 1139 -n Do a dry-run ("No-op") deletion. No data will be deleted. This is
1140 1140 useful in conjunction with the -p or -v flags to determine what
1141 1141 data would be deleted.
1142 1142
1143 1143 -p Print machine-parsable verbose information about the deleted data.
1144 1144
1145 1145 -r Destroy (or mark for deferred deletion) all snapshots with this
1146 1146 name in descendent file systems.
1147 1147
1148 1148 -v Print verbose information about the deleted data.
1149 1149
1150 1150 Extreme care should be taken when applying either the -r or the -R
1151 1151 options, as they can destroy large portions of a pool and cause
1152 1152 unexpected behavior for mounted file systems in use.
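
               For example, the following sketch (snapshot names are
               hypothetical) previews and then destroys an inclusive range of
               snapshots of pool/home/bob:

                 # zfs destroy -nv pool/home/bob@monday%friday
                 # zfs destroy pool/home/bob@monday%friday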
1153 1153
1154 1154 zfs destroy filesystem|volume#bookmark
1155 1155 The given bookmark is destroyed.
1156 1156
1157 1157 zfs snapshot [-r] [-o property=value]...
1158 1158 filesystem@snapname|volume@snapname...
1159 1159 Creates snapshots with the given names. All previous modifications by
1160 1160 successful system calls to the file system are part of the snapshots.
1161 1161 Snapshots are taken atomically, so that all snapshots correspond to the
1162 1162 same moment in time. See the Snapshots section for details.
1163 1163
1164 1164 -o property=value
1165 1165 Sets the specified property; see zfs create for details.
1166 1166
1167 1167      -r  Recursively create snapshots of all descendent datasets.
1168 1168
1169 1169 zfs rollback [-Rfr] snapshot
1170 1170 Roll back the given dataset to a previous snapshot. When a dataset is
1171 1171 rolled back, all data that has changed since the snapshot is discarded,
1172 1172 and the dataset reverts to the state at the time of the snapshot. By
1173 1173 default, the command refuses to roll back to a snapshot other than the
1174 1174 most recent one. In order to do so, all intermediate snapshots and
1175 1175 bookmarks must be destroyed by specifying the -r option.
1176 1176
1177 1177 The -rR options do not recursively destroy the child snapshots of a
1178 1178 recursive snapshot. Only direct snapshots of the specified filesystem
1179 1179 are destroyed by either of these options. To completely roll back a
1180 1180      recursive snapshot, you must roll back the individual child snapshots.
1181 1181
1182 1182 -R Destroy any more recent snapshots and bookmarks, as well as any
1183 1183 clones of those snapshots.
1184 1184
1185 1185 -f Used with the -R option to force an unmount of any clone file
1186 1186 systems that are to be destroyed.
1187 1187
1188 1188 -r Destroy any snapshots and bookmarks more recent than the one
1189 1189 specified.
1190 1190
1191 1191 zfs clone [-p] [-o property=value]... snapshot filesystem|volume
1192 1192 Creates a clone of the given snapshot. See the Clones section for
1193 1193 details. The target dataset can be located anywhere in the ZFS
1194 1194 hierarchy, and is created as the same type as the original.
1195 1195
1196 1196 -o property=value
1197 1197 Sets the specified property; see zfs create for details.
1198 1198
1199 1199 -p Creates all the non-existing parent datasets. Datasets created in
1200 1200 this manner are automatically mounted according to the mountpoint
1201 1201 property inherited from their parent. If the target filesystem or
1202 1202 volume already exists, the operation completes successfully.
1203 1203
1204 1204 zfs promote clone-filesystem
1205 1205 Promotes a clone file system to no longer be dependent on its "origin"
1206 1206 snapshot. This makes it possible to destroy the file system that the
1207 1207 clone was created from. The clone parent-child dependency relationship
1208 1208 is reversed, so that the origin file system becomes a clone of the
1209 1209 specified file system.
1210 1210
1211 1211 The snapshot that was cloned, and any snapshots previous to this
1212 1212 snapshot, are now owned by the promoted clone. The space they use
1213 1213 moves from the origin file system to the promoted clone, so enough
1214 1214 space must be available to accommodate these snapshots. No new space
1215 1215 is consumed by this operation, but the space accounting is adjusted.
1216 1216 The promoted clone must not have any conflicting snapshot names of its
1217 1217 own. The rename subcommand can be used to rename any conflicting
1218 1218 snapshots.
1219 1219
1220 1220 zfs rename [-f] filesystem|volume|snapshot filesystem|volume|snapshot
1221 1221
1222 1222 zfs rename [-fp] filesystem|volume filesystem|volume
1223 1223 Renames the given dataset. The new target can be located anywhere in
1224 1224 the ZFS hierarchy, with the exception of snapshots. Snapshots can only
1225 1225 be renamed within the parent file system or volume. When renaming a
1226 1226 snapshot, the parent file system of the snapshot does not need to be
1227 1227 specified as part of the second argument. Renamed file systems can
1228 1228 inherit new mount points, in which case they are unmounted and
1229 1229 remounted at the new mount point.
1230 1230
1231 1231 -f Force unmount any filesystems that need to be unmounted in the
1232 1232 process.
1233 1233
1234 1234 -p Creates all the nonexistent parent datasets. Datasets created in
1235 1235 this manner are automatically mounted according to the mountpoint
1236 1236 property inherited from their parent.
1237 1237
1238 1238 zfs rename -r snapshot snapshot
1239 1239 Recursively rename the snapshots of all descendent datasets. Snapshots
1240 1240      are the only type of dataset that can be renamed recursively.
1241 1241
1242 1242 zfs list [-r|-d depth] [-Hp] [-o property[,property]...] [-s property]...
1243 1243 [-S property]... [-t type[,type]...] [filesystem|volume|snapshot]...
1244 1244 Lists the property information for the given datasets in tabular form.
1245 1245 If specified, you can list property information by the absolute
1246 1246 pathname or the relative pathname. By default, all file systems and
1247 1247 volumes are displayed. Snapshots are displayed if the listsnaps
1248 1248 property is on (the default is off). The following fields are
1249 1249      displayed: name, used, available, referenced, mountpoint.
1250 1250
1251 1251      -H  Used for scripting mode.  Do not print headers, and separate fields
1252 1252 by a single tab instead of arbitrary white space.
1253 1253
1254 1254 -S property
1255 1255 Same as the -s option, but sorts by property in descending order.
1256 1256
1257 1257 -d depth
1258 1258 Recursively display any children of the dataset, limiting the
1259 1259 recursion to depth. A depth of 1 will display only the dataset and
1260 1260 its direct children.
1261 1261
1262 1262 -o property
1263 1263 A comma-separated list of properties to display. The property must
1264 1264 be:
1265 1265
1266 1266 o One of the properties described in the Native Properties
1267 1267 section
1268 1268
1269 1269 o A user property
1270 1270
1271 1271 o The value name to display the dataset name
1272 1272
1273 1273 o The value space to display space usage properties on file
1274 1274 systems and volumes. This is a shortcut for specifying -o
1275 1275 name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t
1276 1276 filesystem,volume syntax.
1277 1277
1278 1278 -p Display numbers in parsable (exact) values.
1279 1279
1280 1280 -r Recursively display any children of the dataset on the command
1281 1281 line.
1282 1282
1283 1283 -s property
1284 1284 A property for sorting the output by column in ascending order
1285 1285 based on the value of the property. The property must be one of
1286 1286 the properties described in the Properties section, or the special
1287 1287 value name to sort by the dataset name. Multiple properties can be
1288 1288 specified at one time using multiple -s property options. Multiple
1289 1289 -s options are evaluated from left to right in decreasing order of
1290 1290 importance. The following is a list of sorting criteria:
1291 1291
1292 1292 o Numeric types sort in numeric order.
1293 1293
1294 1294 o String types sort in alphabetical order.
1295 1295
1296 1296 o Types inappropriate for a row sort that row to the literal
1297 1297 bottom, regardless of the specified ordering.
1298 1298
1299 1299 If no sorting options are specified the existing behavior of zfs
1300 1300 list is preserved.
1301 1301
1302 1302 -t type
1303 1303 A comma-separated list of types to display, where type is one of
1304 1304 filesystem, snapshot, volume, bookmark, or all. For example,
1305 1305 specifying -t snapshot displays only snapshots.
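
               For example, the following lists only snapshots beneath
               pool/home, sorted by the space they consume:

                 # zfs list -r -t snapshot -o name,used -s used pool/home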
1306 1306
1307 1307 zfs set property=value [property=value]... filesystem|volume|snapshot...
1308 1308 Sets the property or list of properties to the given value(s) for each
1309 1309 dataset. Only some properties can be edited. See the Properties
1310 1310 section for more information on what properties can be set and
1311 1311 acceptable values. Numeric values can be specified as exact values, or
1312 1312 in a human-readable form with a suffix of B, K, M, G, T, P, E, Z (for
1313 1313 bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes,
1314 1314 or zettabytes, respectively). User properties can be set on snapshots.
1315 1315 For more information, see the User Properties section.
1316 1316
1317 1317 zfs get [-r|-d depth] [-Hp] [-o field[,field]...] [-s source[,source]...]
1318 1318 [-t type[,type]...] all | property[,property]...
1319 1319 filesystem|volume|snapshot|bookmark...
1320 1320 Displays properties for the given datasets. If no datasets are
1321 1321 specified, then the command displays properties for all datasets on the
1322 1322 system. For each property, the following columns are displayed:
1323 1323
1324 1324 name Dataset name
1325 1325 property Property name
1326 1326 value Property value
1327 1327 source Property source. Can either be local, default,
1328 1328 temporary, inherited, or none (-).
1329 1329
1330 1330 All columns are displayed by default, though this can be controlled by
1331 1331 using the -o option. This command takes a comma-separated list of
1332 1332 properties as described in the Native Properties and User Properties
1333 1333 sections.
1334 1334
1335 1335 The special value all can be used to display all properties that apply
1336 1336 to the given dataset's type (filesystem, volume, snapshot, or
1337 1337 bookmark).
1338 1338
1339 1339 -H Display output in a form more easily parsed by scripts. Any
1340 1340 headers are omitted, and fields are explicitly separated by a
1341 1341 single tab instead of an arbitrary amount of space.
1342 1342
1343 1343 -d depth
1344 1344 Recursively display any children of the dataset, limiting the
1345 1345 recursion to depth. A depth of 1 will display only the dataset and
1346 1346 its direct children.
1347 1347
1348 1348 -o field
1349 1349 A comma-separated list of columns to display.
1350 1350 name,property,value,source is the default value.
1351 1351
1352 1352 -p Display numbers in parsable (exact) values.
1353 1353
1354 1354 -r Recursively display properties for any children.
1355 1355
1356 1356 -s source
1357 1357 A comma-separated list of sources to display. Those properties
1358 1358 coming from a source other than those in this list are ignored.
1359 1359 Each source must be one of the following: local, default,
1360 1360 inherited, temporary, and none. The default value is all sources.
1361 1361
1362 1362 -t type
1363 1363 A comma-separated list of types to display, where type is one of
1364 1364 filesystem, snapshot, volume, bookmark, or all.
1365 1365
1366 1366 zfs inherit [-rS] property filesystem|volume|snapshot...
1367 1367 Clears the specified property, causing it to be inherited from an
1368 1368 ancestor, restored to default if no ancestor has the property set, or
1369 1369 with the -S option reverted to the received value if one exists. See
1370 1370 the Properties section for a listing of default values, and details on
1371 1371 which properties can be inherited.
1372 1372
1373 1373 -r Recursively inherit the given property for all children.
1374 1374
1375 1375 -S Revert the property to the received value if one exists; otherwise
1376 1376 operate as if the -S option was not specified.
1377 1377
1378 1378 zfs remap filesystem|volume
1379 - Remap the indirect blocks in the given fileystem or volume so that they
1380 - no longer reference blocks on previously removed vdevs and we can
1379 + Remap the indirect blocks in the given filesystem or volume so that
1380 + they no longer reference blocks on previously removed vdevs and we can
1381 1381 eventually shrink the size of the indirect mapping objects for the
1382 1382 previously removed vdevs. Note that remapping all blocks might not be
1383 1383 possible and that references from snapshots will still exist and cannot
1384 1384 be remapped.
1385 1385
1386 1386 zfs upgrade
1387 1387 Displays a list of file systems that are not the most recent version.
1388 1388
1389 1389 zfs upgrade -v
1390 1390 Displays a list of currently supported file system versions.
1391 1391
1392 1392 zfs upgrade [-r] [-V version] -a | filesystem
1393 1393 Upgrades file systems to a new on-disk version. Once this is done, the
1394 1394 file systems will no longer be accessible on systems running older
1395 1395 versions of the software. zfs send streams generated from new
1396 1396 snapshots of these file systems cannot be accessed on systems running
1397 1397 older versions of the software.
1398 1398
1399 1399 In general, the file system version is independent of the pool version.
1400 1400 See zpool(1M) for information on the zpool upgrade command.
1401 1401
1402 1402 In some cases, the file system version and the pool version are
1403 1403 interrelated and the pool version must be upgraded before the file
1404 1404 system version can be upgraded.
1405 1405
1406 1406 -V version
1407 1407 Upgrade to the specified version. If the -V flag is not specified,
1408 1408 this command upgrades to the most recent version. This option can
1409 1409 only be used to increase the version number, and only up to the
1410 1410 most recent version supported by this software.
1411 1411
1412 1412 -a Upgrade all file systems on all imported pools.
1413 1413
1414 1414 filesystem
1415 1415 Upgrade the specified file system.
1416 1416
1417 1417 -r Upgrade the specified file system and all descendent file systems.
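
               For example, the following upgrades tank/home and all of its
               descendent file systems to the most recent supported version:

                 # zfs upgrade -r tank/home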
1418 1418
1419 1419 zfs userspace [-Hinp] [-o field[,field]...] [-s field]... [-S field]...
1420 1420 [-t type[,type]...] filesystem|snapshot
1421 1421 Displays space consumed by, and quotas on, each user in the specified
1422 1422 filesystem or snapshot. This corresponds to the userused@user and
1423 1423 userquota@user properties.
1424 1424
1425 1425 -H Do not print headers, use tab-delimited output.
1426 1426
1427 1427 -S field
1428 1428 Sort by this field in reverse order. See -s.
1429 1429
1430 1430 -i Translate SID to POSIX ID. The POSIX ID may be ephemeral if no
1431 1431 mapping exists. Normal POSIX interfaces (for example, stat(2), ls
1432 1432 -l) perform this translation, so the -i option allows the output
1433 1433 from zfs userspace to be compared directly with those utilities.
1434 1434 However, -i may lead to confusion if some files were created by an
1435 1435          SMB user before an SMB-to-POSIX name mapping was established.  In
1436 1436 such a case, some files will be owned by the SMB entity and some by
1437 1437 the POSIX entity. However, the -i option will report that the
1438 1438 POSIX entity has the total usage and quota for both.
1439 1439
1440 1440 -n Print numeric ID instead of user/group name.
1441 1441
1442 1442 -o field[,field]...
1443 1443 Display only the specified fields from the following set: type,
1444 1444 name, used, quota. The default is to display all fields.
1445 1445
1446 1446 -p Use exact (parsable) numeric output.
1447 1447
1448 1448 -s field
1449 1449 Sort output by this field. The -s and -S flags may be specified
1450 1450 multiple times to sort first by one field, then by another. The
1451 1451 default is -s type -s name.
1452 1452
1453 1453 -t type[,type]...
1454 1454 Print only the specified types from the following set: all,
1455 1455 posixuser, smbuser, posixgroup, smbgroup. The default is -t
1456 1456 posixuser,smbuser. The default can be changed to include group
1457 1457 types.
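
               For example, the following shows per-user space consumption for
               tank/home, sorted by the space used:

                 # zfs userspace -o name,used -s used tank/home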
1458 1458
1459 1459 zfs groupspace [-Hinp] [-o field[,field]...] [-s field]... [-S field]...
1460 1460 [-t type[,type]...] filesystem|snapshot
1461 1461 Displays space consumed by, and quotas on, each group in the specified
1462 1462 filesystem or snapshot. This subcommand is identical to zfs userspace,
1463 1463 except that the default types to display are -t posixgroup,smbgroup.
1464 1464
1465 1465 zfs mount
1466 1466 Displays all ZFS file systems currently mounted.
1467 1467
1468 1468 zfs mount [-Ov] [-o options] -a | filesystem
1469 1469 Mounts ZFS file systems.
1470 1470
1471 1471 -O Perform an overlay mount. See mount(1M) for more information.
1472 1472
1473 1473 -a Mount all available ZFS file systems. Invoked automatically as
1474 1474 part of the boot process.
1475 1475
1476 1476 filesystem
1477 1477 Mount the specified filesystem.
1478 1478
1479 1479 -o options
1480 1480 An optional, comma-separated list of mount options to use
1481 1481 temporarily for the duration of the mount. See the Temporary Mount
1482 1482 Point Properties section for details.
1483 1483
1484 1484 -v Report mount progress.
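
               For example, the following mounts tank/home read-only for the
               duration of this mount, using a temporary mount option:

                 # zfs mount -o ro tank/home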
1485 1485
1486 1486 zfs unmount [-f] -a | filesystem|mountpoint
1487 1487 Unmounts currently mounted ZFS file systems.
1488 1488
1489 1489 -a Unmount all available ZFS file systems. Invoked automatically as
1490 1490 part of the shutdown process.
1491 1491
1492 1492 filesystem|mountpoint
1493 1493 Unmount the specified filesystem. The command can also be given a
1494 1494 path to a ZFS file system mount point on the system.
1495 1495
1496 1496 -f Forcefully unmount the file system, even if it is currently in use.
1497 1497
1498 1498 zfs share -a | filesystem
1499 1499 Shares available ZFS file systems.
1500 1500
1501 1501 -a Share all available ZFS file systems. Invoked automatically as
1502 1502 part of the boot process.
1503 1503
1504 1504 filesystem
1505 1505 Share the specified filesystem according to the sharenfs and
1506 1506 sharesmb properties. File systems are shared when the sharenfs or
1507 1507 sharesmb property is set.
1508 1508
1509 1509 zfs unshare -a | filesystem|mountpoint
1510 1510 Unshares currently shared ZFS file systems.
1511 1511
1512 1512 -a Unshare all available ZFS file systems. Invoked automatically as
1513 1513 part of the shutdown process.
1514 1514
1515 1515 filesystem|mountpoint
1516 1516 Unshare the specified filesystem. The command can also be given a
1517 1517 path to a ZFS file system shared on the system.
1518 1518
1519 1519 zfs bookmark snapshot bookmark
1520 1520 Creates a bookmark of the given snapshot. Bookmarks mark the point in
1521 1521 time when the snapshot was created, and can be used as the incremental
1522 1522 source for a zfs send command.
1523 1523
1524 1524 This feature must be enabled to be used. See zpool-features(5) for
1525 1525 details on ZFS feature flags and the bookmarks feature.
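
               For example (bookmark name hypothetical), the following
               bookmarks a snapshot so that the snapshot itself can later be
               destroyed while the bookmark remains usable as an incremental
               send source:

                 # zfs bookmark pool/fs@a pool/fs#a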
1526 1526
1527 1527 zfs send [-DLPRcenpv] [[-I|-i] snapshot] snapshot
1528 1528 Creates a stream representation of the second snapshot, which is
1529 1529 written to standard output. The output can be redirected to a file or
1530 1530 to a different system (for example, using ssh(1)). By default, a full
1531 1531 stream is generated.
1532 1532
1533 1533 -D, --dedup
1534 1534 Generate a deduplicated stream. Blocks which would have been sent
1535 1535 multiple times in the send stream will only be sent once. The
1536 1536 receiving system must also support this feature to receive a
1537 1537 deduplicated stream. This flag can be used regardless of the
1538 1538 dataset's dedup property, but performance will be much better if
1539 1539 the filesystem uses a dedup-capable checksum (for example, sha256).
1540 1540
1541 1541 -I snapshot
1542 1542 Generate a stream package that sends all intermediary snapshots
1543 1543 from the first snapshot to the second snapshot. For example, -I @a
1544 1544 fs@d is similar to -i @a fs@b; -i @b fs@c; -i @c fs@d. The
1545 1545 incremental source may be specified as with the -i option.
1546 1546
1547 1547 -L, --large-block
1548 1548 Generate a stream which may contain blocks larger than 128KB. This
1549 1549 flag has no effect if the large_blocks pool feature is disabled, or
1550 1550 if the recordsize property of this filesystem has never been set
1551 1551 above 128KB. The receiving system must have the large_blocks pool
1552 1552 feature enabled as well. See zpool-features(5) for details on ZFS
1553 1553 feature flags and the large_blocks feature.
1554 1554
1555 1555 -P, --parsable
1556 1556 Print machine-parsable verbose information about the stream package
1557 1557 generated.
1558 1558
1559 1559 -R, --replicate
1560 1560 Generate a replication stream package, which will replicate the
1561 1561 specified file system, and all descendent file systems, up to the
1562 1562 named snapshot. When received, all properties, snapshots,
1563 1563 descendent file systems, and clones are preserved.
1564 1564
1565 1565 If the -i or -I flags are used in conjunction with the -R flag, an
1566 1566 incremental replication stream is generated. The current values of
1567 1567 properties, and current snapshot and file system names are set when
1568 1568 the stream is received. If the -F flag is specified when this
1569 1569 stream is received, snapshots and file systems that do not exist on
1570 1570 the sending side are destroyed.
1571 1571
1572 1572 -e, --embed
1573 1573 Generate a more compact stream by using WRITE_EMBEDDED records for
1574 1574 blocks which are stored more compactly on disk by the embedded_data
1575 1575 pool feature. This flag has no effect if the embedded_data feature
1576 1576 is disabled. The receiving system must have the embedded_data
1577 1577 feature enabled. If the lz4_compress feature is active on the
1578 1578 sending system, then the receiving system must have that feature
1579 1579 enabled as well. See zpool-features(5) for details on ZFS feature
1580 1580 flags and the embedded_data feature.
1581 1581
1582 1582 -c, --compressed
1583 1583 Generate a more compact stream by using compressed WRITE records
1584 1584 for blocks which are compressed on disk and in memory (see the
1585 1585 compression property for details). If the lz4_compress feature is
1586 1586 active on the sending system, then the receiving system must have
1587 1587 that feature enabled as well. If the large_blocks feature is
1588 1588 enabled on the sending system but the -L option is not supplied in
1589 1589 conjunction with -c, then the data will be decompressed before
1590 1590 sending so it can be split into smaller block sizes.
1591 1591
1592 1592 -i snapshot
1593 1593 Generate an incremental stream from the first snapshot (the
1594 1594 incremental source) to the second snapshot (the incremental
1595 1595 target). The incremental source can be specified as the last
1596 1596 component of the snapshot name (the @ character and following) and
1597 1597 it is assumed to be from the same file system as the incremental
1598 1598 target.
1599 1599
1600 1600 If the destination is a clone, the source may be the origin
1601 1601 snapshot, which must be fully specified (for example,
1602 1602 pool/fs@origin, not just @origin).
1603 1603
1604 1604 -n, --dryrun
1605 1605 Do a dry-run ("No-op") send. Do not generate any actual send data.
1606 1606 This is useful in conjunction with the -v or -P flags to determine
1607 1607 what data will be sent. In this case, the verbose output will be
1608 1608 written to standard output (contrast with a non-dry-run, where the
1609 1609 stream is written to standard output and the verbose output goes to
1610 1610 standard error).
1611 1611
1612 1612 -p, --props
1613 1613 Include the dataset's properties in the stream. This flag is
1614 1614 implicit when -R is specified. The receiving system must also
1615 1615 support this feature.
1616 1616
1617 1617 -v, --verbose
1618 1618 Print verbose information about the stream package generated. This
1619 1619 information includes a per-second report of how much data has been
1620 1620 sent.
1621 1621
1622 1622 The format of the stream is committed. You will be able to receive
1623 1623      your streams on future versions of ZFS.
1624 1624
1625 1625 zfs send [-Lce] [-i snapshot|bookmark] filesystem|volume|snapshot
1626 1626 Generate a send stream, which may be of a filesystem, and may be
1627 1627 incremental from a bookmark. If the destination is a filesystem or
1628 1628 volume, the pool must be read-only, or the filesystem must not be
1629 1629 mounted. When the stream generated from a filesystem or volume is
1630 1630 received, the default snapshot name will be "--head--".
1631 1631
1632 1632 -L, --large-block
1633 1633 Generate a stream which may contain blocks larger than 128KB. This
1634 1634 flag has no effect if the large_blocks pool feature is disabled, or
1635 1635 if the recordsize property of this filesystem has never been set
1636 1636 above 128KB. The receiving system must have the large_blocks pool
1637 1637 feature enabled as well. See zpool-features(5) for details on ZFS
1638 1638 feature flags and the large_blocks feature.
1639 1639
1640 1640 -c, --compressed
1641 1641 Generate a more compact stream by using compressed WRITE records
1642 1642 for blocks which are compressed on disk and in memory (see the
1643 1643 compression property for details). If the lz4_compress feature is
1644 1644 active on the sending system, then the receiving system must have
1645 1645 that feature enabled as well. If the large_blocks feature is
1646 1646 enabled on the sending system but the -L option is not supplied in
1647 1647 conjunction with -c, then the data will be decompressed before
1648 1648 sending so it can be split into smaller block sizes.
1649 1649
1650 1650 -e, --embed
1651 1651 Generate a more compact stream by using WRITE_EMBEDDED records for
1652 1652 blocks which are stored more compactly on disk by the embedded_data
1653 1653 pool feature. This flag has no effect if the embedded_data feature
1654 1654 is disabled. The receiving system must have the embedded_data
1655 1655 feature enabled. If the lz4_compress feature is active on the
1656 1656 sending system, then the receiving system must have that feature
1657 1657 enabled as well. See zpool-features(5) for details on ZFS feature
1658 1658 flags and the embedded_data feature.
1659 1659
1660 1660 -i snapshot|bookmark
1661 1661 Generate an incremental send stream. The incremental source must
1662 1662 be an earlier snapshot in the destination's history. It will
1663 1663 commonly be an earlier snapshot in the destination's file system,
1664 1664 in which case it can be specified as the last component of the name
1665 1665 (the # or @ character and following).
1666 1666
1667 1667 If the incremental target is a clone, the incremental source can be
1668 1668 the origin snapshot, or an earlier snapshot in the origin's
1669 1669 filesystem, or the origin's origin, etc.
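
               Continuing the bookmark sketch above (names reused from Example
               12), an incremental stream from a bookmark might be generated
               and received as follows:

                 # zfs send -i pool/fs#a pool/fs@b | \
                      ssh host zfs receive poolB/received/fs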
1670 1670
1671 1671 zfs send [-Penv] -t receive_resume_token
1672 1672 Creates a send stream which resumes an interrupted receive. The
1673 1673 receive_resume_token is the value of this property on the filesystem or
1674 1674 volume that was being received into. See the documentation for zfs
1675 1675 receive -s for more details.
1676 1676
1677 1677 zfs receive [-Fnsuv] [-o origin=snapshot] filesystem|volume|snapshot
1678 1678
1679 1679 zfs receive [-Fnsuv] [-d|-e] [-o origin=snapshot] filesystem
1680 1680 Creates a snapshot whose contents are as specified in the stream
1681 1681 provided on standard input. If a full stream is received, then a new
1682 1682 file system is created as well. Streams are created using the zfs send
1683 1683 subcommand, which by default creates a full stream. zfs recv can be
1684 1684 used as an alias for zfs receive.
1685 1685
1686 1686 If an incremental stream is received, then the destination file system
1687 1687 must already exist, and its most recent snapshot must match the
1688 1688 incremental stream's source. For zvols, the destination device link is
1689 1689 destroyed and recreated, which means the zvol cannot be accessed during
1690 1690 the receive operation.
1691 1691
1692 1692 When a snapshot replication package stream that is generated by using
1693 1693 the zfs send -R command is received, any snapshots that do not exist on
1694 1694 the sending location are destroyed by using the zfs destroy -d command.
1695 1695
1696 1696 The name of the snapshot (and file system, if a full stream is
1697 1697 received) that this subcommand creates depends on the argument type and
1698 1698 the use of the -d or -e options.
1699 1699
1700 1700 If the argument is a snapshot name, the specified snapshot is created.
1701 1701 If the argument is a file system or volume name, a snapshot with the
1702 1702 same name as the sent snapshot is created within the specified
1703 1703 filesystem or volume. If neither of the -d or -e options are
1704 1704 specified, the provided target snapshot name is used exactly as
1705 1705 provided.
1706 1706
1707 1707 The -d and -e options cause the file system name of the target snapshot
1708 1708 to be determined by appending a portion of the sent snapshot's name to
1709 1709 the specified target filesystem. If the -d option is specified, all
1710 1710 but the first element of the sent snapshot's file system path (usually
1711 1711 the pool name) is used and any required intermediate file systems
1712 1712 within the specified one are created. If the -e option is specified,
1713 1713 then only the last element of the sent snapshot's file system name
1714 1714 (i.e. the name of the source file system itself) is used as the target
1715 1715 file system name.
1716 1716
1717 1717 -F Force a rollback of the file system to the most recent snapshot
1718 1718 before performing the receive operation. If receiving an
1719 1719 incremental replication stream (for example, one generated by zfs
1720 1720 send -R [-i|-I]), destroy snapshots and file systems that do not
1721 1721 exist on the sending side.
1722 1722
1723 1723 -d Discard the first element of the sent snapshot's file system name,
1724 1724 using the remaining elements to determine the name of the target
1725 1725 file system for the new snapshot as described in the paragraph
1726 1726 above.
1727 1727
1728 1728 -e Discard all but the last element of the sent snapshot's file system
1729 1729 name, using that element to determine the name of the target file
1730 1730 system for the new snapshot as described in the paragraph above.
1731 1731
1732 1732 -n Do not actually receive the stream. This can be useful in
1733 1733 conjunction with the -v option to verify the name the receive
1734 1734 operation would use.
1735 1735
1736 1736 -o origin=snapshot
1737 1737 Forces the stream to be received as a clone of the given snapshot.
1738 1738 If the stream is a full send stream, this will create the
1739 1739 filesystem described by the stream as a clone of the specified
1740 1740 snapshot. Which snapshot was specified will not affect the success
1741 1741 or failure of the receive, as long as the snapshot does exist. If
1742 1742 the stream is an incremental send stream, all the normal
1743 1743 verification will be performed.
1744 1744
1745 1745      -u  The file system associated with the received stream is not
1746 1746 mounted.
1747 1747
1748 1748 -v Print verbose information about the stream and the time required to
1749 1749 perform the receive operation.
1750 1750
1751 1751 -s If the receive is interrupted, save the partially received state,
1752 1752 rather than deleting it. Interruption may be due to premature
1753 1753 termination of the stream (e.g. due to network failure or failure
1754 1754 of the remote system if the stream is being read over a network
1755 1755 connection), a checksum error in the stream, termination of the zfs
1756 1756 receive process, or unclean shutdown of the system.
1757 1757
1758 1758 The receive can be resumed with a stream generated by zfs send -t
1759 1759 token, where the token is the value of the receive_resume_token
1760 1760 property of the filesystem or volume which is received into.
1761 1761
1762 1762 To use this flag, the storage pool must have the extensible_dataset
1763 1763 feature enabled. See zpool-features(5) for details on ZFS feature
1764 1764 flags.
1765 1765
1766 1766 zfs receive -A filesystem|volume
1767 1767 Abort an interrupted zfs receive -s, deleting its saved partially
1768 1768 received state.
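
               A sketch of a resumable transfer (dataset names reused from
               Example 12; the token shown is a placeholder):

                 # zfs send pool/fs@a | \
                      ssh host zfs receive -s poolB/received/fs@a
                 the transfer is interrupted
                 # ssh host zfs get -H -o value receive_resume_token \
                      poolB/received/fs
                 # zfs send -t <token printed above> | \
                      ssh host zfs receive -s poolB/received/fs@a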
1769 1769
1770 1770 zfs allow filesystem|volume
1771 1771 Displays permissions that have been delegated on the specified
1772 1772 filesystem or volume. See the other forms of zfs allow for more
1773 1773 information.
1774 1774
1775 1775 zfs allow [-dglu] user|group[,user|group]...
1776 1776 perm|@setname[,perm|@setname]... filesystem|volume
1777 1777
1778 1778 zfs allow [-dl] -e|everyone perm|@setname[,perm|@setname]...
1779 1779 filesystem|volume
1780 1780 Delegates ZFS administration permission for the file systems to non-
1781 1781 privileged users.
1782 1782
1783 1783 -d Allow only for the descendent file systems.
1784 1784
1785 1785 -e|everyone
1786 1786 Specifies that the permissions be delegated to everyone.
1787 1787
1788 1788 -g group[,group]...
1789 1789 Explicitly specify that permissions are delegated to the group.
1790 1790
1791 1791 -l Allow "locally" only for the specified file system.
1792 1792
1793 1793 -u user[,user]...
1794 1794 Explicitly specify that permissions are delegated to the user.
1795 1795
1796 1796 user|group[,user|group]...
1797 1797 Specifies to whom the permissions are delegated. Multiple entities
1798 1798 can be specified as a comma-separated list. If neither of the -gu
1799 1799 options are specified, then the argument is interpreted
1800 1800 preferentially as the keyword everyone, then as a user name, and
1801 1801 lastly as a group name. To specify a user or group named
1802 1802 "everyone", use the -g or -u options. To specify a group with the
1803 1803          same name as a user, use the -g option.
1804 1804
1805 1805 perm|@setname[,perm|@setname]...
1806 1806 The permissions to delegate. Multiple permissions may be specified
1807 1807 as a comma-separated list. Permission names are the same as ZFS
1808 1808 subcommand and property names. See the property list below.
1809 1809 Property set names, which begin with @, may be specified. See the
1810 1810 -s form below for details.
1811 1811
1812 1812 If neither of the -dl options are specified, or both are, then the
1813 1813 permissions are allowed for the file system or volume, and all of its
1814 1814 descendents.
1815 1815
1816 1816 Permissions are generally the ability to use a ZFS subcommand or change
1817 1817 a ZFS property. The following permissions are available:
1818 1818
1819 1819 NAME TYPE NOTES
1820 1820 allow subcommand Must also have the permission that is
1821 1821 being allowed
1822 1822 clone subcommand Must also have the 'create' ability and
1823 1823 'mount' ability in the origin file system
1824 1824 create subcommand Must also have the 'mount' ability
1825 1825 destroy subcommand Must also have the 'mount' ability
1826 1826 diff subcommand Allows lookup of paths within a dataset
1827 1827 given an object number, and the ability
1828 1828 to create snapshots necessary to
1829 1829 'zfs diff'.
1830 1830 mount subcommand Allows mount/umount of ZFS datasets
1831 1831 promote subcommand Must also have the 'mount' and 'promote'
1832 1832 ability in the origin file system
1833 1833 receive subcommand Must also have the 'mount' and 'create'
1834 1834 ability
1835 1835 rename subcommand Must also have the 'mount' and 'create'
1836 1836 ability in the new parent
1837 1837 rollback subcommand Must also have the 'mount' ability
1838 1838 send subcommand
1839 1839 share subcommand Allows sharing file systems over NFS
1840 1840 or SMB protocols
1841 1841 snapshot subcommand Must also have the 'mount' ability
1842 1842
1843 1843 groupquota other Allows accessing any groupquota@...
1844 1844 property
1845 1845 groupused other Allows reading any groupused@... property
1846 1846 userprop other Allows changing any user property
1847 1847 userquota other Allows accessing any userquota@...
1848 1848 property
1849 1849 userused other Allows reading any userused@... property
1850 1850
1851 1851 aclinherit property
1852 1852 aclmode property
1853 1853 atime property
1854 1854 canmount property
1855 1855 casesensitivity property
1856 1856 checksum property
1857 1857 compression property
1858 1858 copies property
1859 1859 devices property
1860 1860 exec property
1861 1861 filesystem_limit property
1862 1862 mountpoint property
1863 1863 nbmand property
1864 1864 normalization property
1865 1865 primarycache property
1866 1866 quota property
1867 1867 readonly property
1868 1868 recordsize property
1869 1869 refquota property
1870 1870 refreservation property
1871 1871 reservation property
1872 1872 secondarycache property
1873 1873 setuid property
1874 1874 sharenfs property
1875 1875 sharesmb property
1876 1876 snapdir property
1877 1877 snapshot_limit property
1878 1878 utf8only property
1879 1879 version property
1880 1880 volblocksize property
1881 1881 volsize property
1882 1882 vscan property
1883 1883 xattr property
1884 1884 zoned property
1885 1885
1886 1886 zfs allow -c perm|@setname[,perm|@setname]... filesystem|volume
1887 1887 Sets "create time" permissions. These permissions are granted
1888 1888 (locally) to the creator of any newly-created descendent file system.
1889 1889
1890 1890 zfs allow -s @setname perm|@setname[,perm|@setname]... filesystem|volume
1891 1891 Defines or adds permissions to a permission set. The set can be used
1892 1892 by other zfs allow commands for the specified file system and its
1893 1893 descendents. Sets are evaluated dynamically, so changes to a set are
1894 1894 immediately reflected. Permission sets follow the same naming
1895 1895 restrictions as ZFS file systems, but the name must begin with @, and
1896 1896 can be no more than 64 characters long.
1897 1897
1898 1898 zfs unallow [-dglru] user|group[,user|group]...
1899 1899 [perm|@setname[,perm|@setname]...] filesystem|volume
1900 1900
1901 1901 zfs unallow [-dlr] -e|everyone [perm|@setname[,perm|@setname]...]
1902 1902 filesystem|volume
1903 1903
1904 1904 zfs unallow [-r] -c [perm|@setname[,perm|@setname]...] filesystem|volume
1905 1905 Removes permissions that were granted with the zfs allow command. No
1906 1906 permissions are explicitly denied, so other permissions granted are
1907 1907      permissions are explicitly denied, so other permissions granted are
1908 1908      still in effect; for example, a permission may still be granted by an
1909 1909      ancestor.  If no permissions are specified, then all permissions for
1910 1910 everyone (or using the -e option) only removes the permissions that
1911 1911 were granted to everyone, not all permissions for every user and group.
1912 1912 See the zfs allow command for a description of the -ldugec options.
1913 1913
1914 1914 -r Recursively remove the permissions from this file system and all
1915 1915 descendents.
1916 1916
1917 1917 zfs unallow [-r] -s @setname [perm|@setname[,perm|@setname]...]
1918 1918 filesystem|volume
1919 1919 Removes permissions from a permission set. If no permissions are
1920 1920 specified, then all permissions are removed, thus removing the set
1921 1921 entirely.
1922 1922
1923 1923 zfs hold [-r] tag snapshot...
1924 1924 Adds a single reference, named with the tag argument, to the specified
1925 1925 snapshot or snapshots. Each snapshot has its own tag namespace, and
1926 1926 tags must be unique within that space.
1927 1927
1928 1928 If a hold exists on a snapshot, attempts to destroy that snapshot by
1929 1929 using the zfs destroy command return EBUSY.
1930 1930
1931 1931 -r Specifies that a hold with the given tag is applied recursively to
1932 1932 the snapshots of all descendent file systems.
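
               For example (tag name hypothetical), the following places a
               hold on an existing snapshot and then lists its holds:

                 # zfs hold keep pool/home/bob@yesterday
                 # zfs holds pool/home/bob@yesterday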
1933 1933
1934 1934 zfs holds [-r] snapshot...
1935 1935 Lists all existing user references for the given snapshot or snapshots.
1936 1936
1937 1937 -r Lists the holds that are set on the named descendent snapshots, in
1938 1938 addition to listing the holds on the named snapshot.
1939 1939
1940 1940 zfs release [-r] tag snapshot...
1941 1941 Removes a single reference, named with the tag argument, from the
1942 1942 specified snapshot or snapshots. The tag must already exist for each
1943 1943 snapshot. If a hold exists on a snapshot, attempts to destroy that
1944 1944 snapshot by using the zfs destroy command return EBUSY.
1945 1945
1946 1946 -r Recursively releases a hold with the given tag on the snapshots of
1947 1947 all descendent file systems.
1948 1948
1949 1949 zfs diff [-FHt] snapshot snapshot|filesystem
1950 1950 Display the difference between a snapshot of a given filesystem and
1951 1951 another snapshot of that filesystem from a later time or the current
1952 1952 contents of the filesystem. The first column is a character indicating
1953 1953 the type of change, the other columns indicate pathname, new pathname
1954 1954 (in case of rename), change in link count, and optionally file type
1955 1955 and/or change time. The types of change are:
1956 1956
1957 1957 - The path has been removed
1958 1958 + The path has been created
1959 1959 M The path has been modified
1960 1960 R The path has been renamed
1961 1961
1962 1962 -F Display an indication of the type of file, in a manner similar to
1963 1963          the -F option of ls(1).
1964 1964
1965 1965 B Block device
1966 1966 C Character device
1967 1967 / Directory
1968 1968 > Door
1969 1969 | Named pipe
1970 1970 @ Symbolic link
1971 1971 P Event port
1972 1972 = Socket
1973 1973 F Regular file
1974 1974
1975 1975 -H Give more parsable tab-separated output, without header lines and
1976 1976 without arrows.
1977 1977
1978 1978 -t Display the path's inode change time as the first column of output.
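
               For example, the following shows what has changed between
               yesterday's snapshot and the current contents of pool/home/bob,
               with file types indicated:

                 # zfs diff -F pool/home/bob@yesterday pool/home/bob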
1979 1979
1980 1980 zfs program [-jn] [-t timeout] [-m memory_limit] pool script [arg1 ...]
1981 1981 Executes script as a ZFS channel program on pool. The ZFS channel
1982 1982 program interface allows ZFS administrative operations to be run
1983 1983 programmatically via a Lua script. The entire script is executed
1984 1984 atomically, with no other administrative operations taking effect
1985 1985 concurrently. A library of ZFS calls is made available to channel
1986 1986 program scripts. Channel programs may only be run with root
1987 1987 privileges.
1988 1988
1989 1989 For full documentation of the ZFS channel program interface, see the
1990 1990      zfs-program(1M) manual page.
1991 1991
1992 1992 -j
1993 1993 Display channel program output in JSON format. When this flag is
1994 1994 specified and standard output is empty - channel program encountered
1995 1995 an error. The details of such an error will be printed to standard
1996 1996 error in plain text.
1997 1997
1998 1998 -n
1999 1999 Executes a read-only channel program, which runs faster. The program
2000 2000 cannot change on-disk state by calling functions from the zfs.sync
2001 2001 submodule. The program can be used to gather information such as
2002 2002        properties and to determine whether changes would succeed (zfs.check.*).
2003 2003 Without this flag, all pending changes must be synced to disk before
2004 2004 a channel program can complete.
2005 2005
2006 2006 -t timeout
2007 2007 Execution time limit, in milliseconds. If a channel program executes
2008 2008 for longer than the provided timeout, it will be stopped and an error
2009 2009 will be returned. The default timeout is 1000 ms, and can be set to
2010 2010 a maximum of 10000 ms.
2011 2011
2012 2012    -m memory_limit
2013 2013 Memory limit, in bytes. If a channel program attempts to allocate
2014 2014 more memory than the given limit, it will be stopped and an error
2015 2015 returned. The default memory limit is 10 MB, and can be set to a
2016 2016 maximum of 100 MB.
2017 2017
2018 2018 All remaining argument strings are passed directly to the channel
2019 2019 program as arguments. See zfs-program(1M) for more information.
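
               For example, a hypothetical invocation that dry-runs a channel
               program stored in /tmp/cleanup.zcp against the pool tank, with
               a raised time limit and one argument passed to the script:

                 # zfs program -n -t 5000 tank /tmp/cleanup.zcp tank/tmpdata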
2020 2020
2021 2021 EXIT STATUS
2022 2022 The zfs utility exits 0 on success, 1 if an error occurs, and 2 if
2023 2023 invalid command line options were specified.
2024 2024
2025 2025 EXAMPLES
2026 2026 Example 1 Creating a ZFS File System Hierarchy
2027 2027 The following commands create a file system named pool/home and a file
2028 2028 system named pool/home/bob. The mount point /export/home is set for
2029 2029 the parent file system, and is automatically inherited by the child
2030 2030 file system.
2031 2031
2032 2032 # zfs create pool/home
2033 2033 # zfs set mountpoint=/export/home pool/home
2034 2034 # zfs create pool/home/bob
2035 2035
2036 2036 Example 2 Creating a ZFS Snapshot
2037 2037 The following command creates a snapshot named yesterday. This
2038 2038 snapshot is mounted on demand in the .zfs/snapshot directory at the
2039 2039 root of the pool/home/bob file system.
2040 2040
2041 2041 # zfs snapshot pool/home/bob@yesterday
2042 2042
2043 2043 Example 3 Creating and Destroying Multiple Snapshots
2044 2044 The following command creates snapshots named yesterday of pool/home
2045 2045 and all of its descendent file systems. Each snapshot is mounted on
2046 2046 demand in the .zfs/snapshot directory at the root of its file system.
2047 2047 The second command destroys the newly created snapshots.
2048 2048
2049 2049 # zfs snapshot -r pool/home@yesterday
2050 2050 # zfs destroy -r pool/home@yesterday
2051 2051
2052 2052 Example 4 Disabling and Enabling File System Compression
2053 2053 The following command disables the compression property for all file
2054 2054 systems under pool/home. The next command explicitly enables
2055 2055 compression for pool/home/anne.
2056 2056
2057 2057 # zfs set compression=off pool/home
2058 2058 # zfs set compression=on pool/home/anne
2059 2059
2060 2060 Example 5 Listing ZFS Datasets
2061 2061 The following command lists all active file systems and volumes in the
2062 2062 system. Snapshots are displayed if the listsnaps property is on. The
2063 2063 default is off. See zpool(1M) for more information on pool properties.
2064 2064
2065 2065 # zfs list
2066 2066 NAME USED AVAIL REFER MOUNTPOINT
2067 2067 pool 450K 457G 18K /pool
2068 2068 pool/home 315K 457G 21K /export/home
2069 2069 pool/home/anne 18K 457G 18K /export/home/anne
2070 2070 pool/home/bob 276K 457G 276K /export/home/bob
2071 2071
2072 2072 Example 6 Setting a Quota on a ZFS File System
2073 2073 The following command sets a quota of 50 Gbytes for pool/home/bob.
2074 2074
2075 2075 # zfs set quota=50G pool/home/bob
2076 2076
2077 2077 Example 7 Listing ZFS Properties
2078 2078 The following command lists all properties for pool/home/bob.
2079 2079
2080 2080 # zfs get all pool/home/bob
2081 2081 NAME PROPERTY VALUE SOURCE
2082 2082 pool/home/bob type filesystem -
2083 2083 pool/home/bob creation Tue Jul 21 15:53 2009 -
2084 2084 pool/home/bob used 21K -
2085 2085 pool/home/bob available 20.0G -
2086 2086 pool/home/bob referenced 21K -
2087 2087 pool/home/bob compressratio 1.00x -
2088 2088 pool/home/bob mounted yes -
2089 2089 pool/home/bob quota 20G local
2090 2090 pool/home/bob reservation none default
2091 2091 pool/home/bob recordsize 128K default
2092 2092 pool/home/bob mountpoint /pool/home/bob default
2093 2093 pool/home/bob sharenfs off default
2094 2094 pool/home/bob checksum on default
2095 2095 pool/home/bob compression on local
2096 2096 pool/home/bob atime on default
2097 2097 pool/home/bob devices on default
2098 2098 pool/home/bob exec on default
2099 2099 pool/home/bob setuid on default
2100 2100 pool/home/bob readonly off default
2101 2101 pool/home/bob zoned off default
2102 2102 pool/home/bob snapdir hidden default
2103 2103 pool/home/bob aclmode discard default
2104 2104 pool/home/bob aclinherit restricted default
2105 2105 pool/home/bob canmount on default
2106 2106 pool/home/bob xattr on default
2107 2107 pool/home/bob copies 1 default
2108 2108 pool/home/bob version 4 -
2109 2109 pool/home/bob utf8only off -
2110 2110 pool/home/bob normalization none -
2111 2111 pool/home/bob casesensitivity sensitive -
2112 2112 pool/home/bob vscan off default
2113 2113 pool/home/bob nbmand off default
2114 2114 pool/home/bob sharesmb off default
2115 2115 pool/home/bob refquota none default
2116 2116 pool/home/bob refreservation none default
2117 2117 pool/home/bob primarycache all default
2118 2118 pool/home/bob secondarycache all default
2119 2119 pool/home/bob usedbysnapshots 0 -
2120 2120 pool/home/bob usedbydataset 21K -
2121 2121 pool/home/bob usedbychildren 0 -
2122 2122 pool/home/bob usedbyrefreservation 0 -
2123 2123
2124 2124 The following command gets a single property value.
2125 2125
2126 2126 # zfs get -H -o value compression pool/home/bob
2127 2127 on
2128 2128 The following command lists all properties with local settings for
2129 2129 pool/home/bob.
2130 2130
2131 2131 # zfs get -r -s local -o name,property,value all pool/home/bob
2132 2132 NAME PROPERTY VALUE
2133 2133 pool/home/bob quota 20G
2134 2134 pool/home/bob compression on
2135 2135
2136 2136 Example 8 Rolling Back a ZFS File System
2137 2137 The following command reverts the contents of pool/home/anne to the
2138 2138 snapshot named yesterday, deleting all intermediate snapshots.
2139 2139
2140 2140 # zfs rollback -r pool/home/anne@yesterday
2141 2141
2142 2142 Example 9 Creating a ZFS Clone
2143 2143 The following command creates a writable file system whose initial
2144 2144 contents are the same as pool/home/bob@yesterday.
2145 2145
2146 2146 # zfs clone pool/home/bob@yesterday pool/clone
2147 2147
2148 2148 Example 10 Promoting a ZFS Clone
2149 2149 The following commands illustrate how to test out changes to a file
2150 2150 system, and then replace the original file system with the changed one,
2151 2151 using clones, clone promotion, and renaming:
2152 2152
2153 2153 # zfs create pool/project/production
2154 2154 populate /pool/project/production with data
2155 2155 # zfs snapshot pool/project/production@today
2156 2156 # zfs clone pool/project/production@today pool/project/beta
2157 2157 make changes to /pool/project/beta and test them
2158 2158 # zfs promote pool/project/beta
2159 2159 # zfs rename pool/project/production pool/project/legacy
2160 2160 # zfs rename pool/project/beta pool/project/production
2161 2161 once the legacy version is no longer needed, it can be destroyed
2162 2162 # zfs destroy pool/project/legacy
2163 2163
2164 2164 Example 11 Inheriting ZFS Properties
2165 2165 The following command causes pool/home/bob and pool/home/anne to
2166 2166 inherit the checksum property from their parent.
2167 2167
2168 2168 # zfs inherit checksum pool/home/bob pool/home/anne
2169 2169
2170 2170 Example 12 Remotely Replicating ZFS Data
2171 2171 The following commands send a full stream and then an incremental
2172 2172 stream to a remote machine, restoring them into poolB/received/fs@a and
2173 2173 poolB/received/fs@b, respectively. poolB must contain the file system
2174 2174 poolB/received, and must not initially contain poolB/received/fs.
2175 2175
2176 2176 # zfs send pool/fs@a | \
2177 2177 ssh host zfs receive poolB/received/fs@a
2178 2178 # zfs send -i a pool/fs@b | \
2179 2179 ssh host zfs receive poolB/received/fs
2180 2180
2181 2181 Example 13 Using the zfs receive -d Option
2182 2182 The following command sends a full stream of poolA/fsA/fsB@snap to a
2183 2183 remote machine, receiving it into poolB/received/fsA/fsB@snap. The
2184 2184 fsA/fsB@snap portion of the received snapshot's name is determined from
2185 2185 the name of the sent snapshot. poolB must contain the file system
2186 2186 poolB/received. If poolB/received/fsA does not exist, it is created as
2187 2187 an empty file system.
2188 2188
2189 2189 # zfs send poolA/fsA/fsB@snap | \
2190 2190 ssh host zfs receive -d poolB/received
2191 2191
2192 2192 Example 14 Setting User Properties
2193 2193 The following example sets the user-defined com.example:department
2194 2194 property for a dataset.
2195 2195
2196 2196 # zfs set com.example:department=12345 tank/accounting
2197 2197
2198 2198 Example 15 Performing a Rolling Snapshot
2199 2199 The following example shows how to maintain a history of snapshots with
2200 2200 a consistent naming scheme. To keep a week's worth of snapshots, the
2201 2201 user destroys the oldest snapshot, renames the remaining snapshots, and
2202 2202 then creates a new snapshot, as follows:
2203 2203
2204 2204 # zfs destroy -r pool/users@7daysago
2205 2205 # zfs rename -r pool/users@6daysago @7daysago
2206 2206 # zfs rename -r pool/users@5daysago @6daysago
2207 2207       # zfs rename -r pool/users@4daysago @5daysago
2208 2208       # zfs rename -r pool/users@3daysago @4daysago
2209 2209       # zfs rename -r pool/users@2daysago @3daysago
2210 2210       # zfs rename -r pool/users@yesterday @2daysago
2211 2211 # zfs rename -r pool/users@today @yesterday
2212 2212 # zfs snapshot -r pool/users@today
2213 2213
2214 2214 Example 16 Setting sharenfs Property Options on a ZFS File System
2215 2215 The following commands show how to set sharenfs property options to
2216 2216 enable rw access for a set of IP addresses and to enable root access
2217 2217 for system neo on the tank/home file system.
2218 2218
2219 2219 # zfs set sharenfs='rw=@123.123.0.0/16,root=neo' tank/home
2220 2220
2221 2221 If you are using DNS for host name resolution, specify the fully
2222 2222 qualified hostname.
2223 2223
2224 2224 Example 17 Delegating ZFS Administration Permissions on a ZFS Dataset
2225 2225 The following example shows how to set permissions so that user cindys
2226 2226 can create, destroy, mount, and take snapshots on tank/cindys. The
2227 2227 permissions on tank/cindys are also displayed.
2228 2228
2229 2229 # zfs allow cindys create,destroy,mount,snapshot tank/cindys
2230 2230 # zfs allow tank/cindys
2231 2231 ---- Permissions on tank/cindys --------------------------------------
2232 2232 Local+Descendent permissions:
2233 2233 user cindys create,destroy,mount,snapshot
2234 2234
2235 2235 Because the tank/cindys mount point permission is set to 755 by
2236 2236 default, user cindys will be unable to mount file systems under
2237 2237 tank/cindys. Add an ACE similar to the following syntax to provide
2238 2238 mount point access:
2239 2239
2240 2240 # chmod A+user:cindys:add_subdirectory:allow /tank/cindys
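
     If desired, the resulting ACL on the mount point can be reviewed with
     ls(1) and its -v option, which prints ACL entries:

       # ls -dv /tank/cindys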
2241 2241
2242 2242 Example 18 Delegating Create Time Permissions on a ZFS Dataset
2243 2243        The following example shows how to grant anyone in the group staff
2244 2244        permission to create file systems in tank/users. This syntax also
2245 2245        allows staff members to destroy their own file systems, but not anyone
2246 2246        else's. The permissions on tank/users are also displayed.
2247 2247
2248 2248 # zfs allow staff create,mount tank/users
2249 2249 # zfs allow -c destroy tank/users
2250 2250 # zfs allow tank/users
2251 2251 ---- Permissions on tank/users ---------------------------------------
2252 2252 Permission sets:
2253 2253 destroy
2254 2254 Local+Descendent permissions:
2255 2255 group staff create,mount
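
     With these permissions in place, a hypothetical staff member (here called
     mark) could create and later destroy a file system of their own, but not
     one belonging to another user:

       mark% zfs create tank/users/mark
       mark% zfs destroy tank/users/mark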
2256 2256
2257 2257 Example 19 Defining and Granting a Permission Set on a ZFS Dataset
2258 2258 The following example shows how to define and grant a permission set on
2259 2259 the tank/users file system. The permissions on tank/users are also
2260 2260 displayed.
2261 2261
2262 2262 # zfs allow -s @pset create,destroy,snapshot,mount tank/users
2263 2263 # zfs allow staff @pset tank/users
2264 2264 # zfs allow tank/users
2265 2265 ---- Permissions on tank/users ---------------------------------------
2266 2266 Permission sets:
2267 2267 @pset create,destroy,mount,snapshot
2268 2268 Local+Descendent permissions:
2269 2269 group staff @pset
2270 2270
2271 2271 Example 20 Delegating Property Permissions on a ZFS Dataset
2272 2272        The following example shows how to grant the ability to set quotas and
2273 2273 reservations on the users/home file system. The permissions on
2274 2274 users/home are also displayed.
2275 2275
2276 2276 # zfs allow cindys quota,reservation users/home
2277 2277 # zfs allow users/home
2278 2278 ---- Permissions on users/home ---------------------------------------
2279 2279 Local+Descendent permissions:
2280 2280 user cindys quota,reservation
2281 2281 cindys% zfs set quota=10G users/home/marks
2282 2282 cindys% zfs get quota users/home/marks
2283 2283 NAME PROPERTY VALUE SOURCE
2284 2284 users/home/marks quota 10G local
2285 2285
2286 2286 Example 21 Removing ZFS Delegated Permissions on a ZFS Dataset
2287 2287 The following example shows how to remove the snapshot permission from
2288 2288 the staff group on the tank/users file system. The permissions on
2289 2289 tank/users are also displayed.
2290 2290
2291 2291 # zfs unallow staff snapshot tank/users
2292 2292 # zfs allow tank/users
2293 2293 ---- Permissions on tank/users ---------------------------------------
2294 2294 Permission sets:
2295 2295 @pset create,destroy,mount,snapshot
2296 2296 Local+Descendent permissions:
2297 2297 group staff @pset
2298 2298
2299 2299     Example 22 Showing the Differences Between a Snapshot and a ZFS Dataset
2300 2300 The following example shows how to see what has changed between a prior
2301 2301 snapshot of a ZFS dataset and its current state. The -F option is used
2302 2302 to indicate type information for the files affected.
2303 2303
2304 2304 # zfs diff -F tank/test@before tank/test
2305 2305 M / /tank/test/
2306 2306 M F /tank/test/linked (+1)
2307 2307 R F /tank/test/oldname -> /tank/test/newname
2308 2308 - F /tank/test/deleted
2309 2309 + F /tank/test/created
2310 2310 M F /tank/test/modified
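
     For scripted use, a variant of the comparison above might add -H for
     tab-separated output without header arrows and -t to show each path's
     inode change time (output not reproduced here):

       # zfs diff -FHt tank/test@before tank/test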
2311 2311
2312 2312 INTERFACE STABILITY
2313 2313 Committed.
2314 2314
2315 2315 SEE ALSO
2316 2316 gzip(1), ssh(1), mount(1M), share(1M), sharemgr(1M), unshare(1M),
2317 2317 zonecfg(1M), zpool(1M), chmod(2), stat(2), write(2), fsync(3C),
2318 2318 dfstab(4), acl(5), attributes(5)
2319 2319
2320 2320 illumos February 10, 2018 illumos