1 ZFS(1M) Maintenance Commands ZFS(1M)
2
3 NAME
4 zfs - configures ZFS file systems
5
6 SYNOPSIS
7 zfs [-?]
8 zfs create [-p] [-o property=value]... filesystem
9 zfs create [-ps] [-b blocksize] [-o property=value]... -V size volume
10 zfs destroy [-Rfnprv] filesystem|volume
11 zfs destroy [-Rdnprv] filesystem|volume@snap[%snap[,snap[%snap]]]...
12 zfs destroy filesystem|volume#bookmark
13 zfs snapshot [-r] [-o property=value]...
14 filesystem@snapname|volume@snapname...
15 zfs rollback [-Rfr] snapshot
16 zfs clone [-p] [-o property=value]... snapshot filesystem|volume
17 zfs promote clone-filesystem
18 zfs rename [-f] filesystem|volume|snapshot filesystem|volume|snapshot
19 zfs rename [-fp] filesystem|volume filesystem|volume
20 zfs rename -r snapshot snapshot
21 zfs list [-r|-d depth] [-Hp] [-o property[,property]...] [-s property]...
22 [-S property]... [-t type[,type]...] [filesystem|volume|snapshot]...
23 zfs remap filesystem|volume
24 zfs set property=value [property=value]... filesystem|volume|snapshot...
25 zfs get [-r|-d depth] [-Hp] [-o field[,field]...] [-s source[,source]...]
26 [-t type[,type]...] all | property[,property]...
27 filesystem|volume|snapshot|bookmark...
28 zfs inherit [-rS] property filesystem|volume|snapshot...
29 zfs upgrade
30 zfs upgrade -v
31 zfs upgrade [-r] [-V version] -a | filesystem
32 zfs userspace [-Hinp] [-o field[,field]...] [-s field]... [-S field]...
33 [-t type[,type]...] filesystem|snapshot
34 zfs groupspace [-Hinp] [-o field[,field]...] [-s field]... [-S field]...
35 [-t type[,type]...] filesystem|snapshot
36 zfs mount
37 zfs mount [-Ov] [-o options] -a | filesystem
38 zfs unmount [-f] -a | filesystem|mountpoint
39 zfs share -a | filesystem
40 zfs unshare -a | filesystem|mountpoint
41 zfs bookmark snapshot bookmark
42 zfs send [-DLPRcenpv] [[-I|-i] snapshot] snapshot
43 zfs send [-Lce] [-i snapshot|bookmark] filesystem|volume|snapshot
44 zfs send [-Penv] -t receive_resume_token
45 zfs receive [-Fnsuv] [-o origin=snapshot] filesystem|volume|snapshot
46 zfs receive [-Fnsuv] [-d|-e] [-o origin=snapshot] filesystem
47 zfs receive -A filesystem|volume
48 zfs allow filesystem|volume
49 zfs allow [-dglu] user|group[,user|group]...
50 perm|@setname[,perm|@setname]... filesystem|volume
51 zfs allow [-dl] -e|everyone perm|@setname[,perm|@setname]...
52 filesystem|volume
53 zfs allow -c perm|@setname[,perm|@setname]... filesystem|volume
54 zfs allow -s @setname perm|@setname[,perm|@setname]... filesystem|volume
55 zfs unallow [-dglru] user|group[,user|group]...
56 [perm|@setname[,perm|@setname]...] filesystem|volume
57 zfs unallow [-dlr] -e|everyone [perm|@setname[,perm|@setname]...]
58 filesystem|volume
59 zfs unallow [-r] -c [perm|@setname[,perm|@setname]...] filesystem|volume
zfs unallow [-r] -s @setname [perm|@setname[,perm|@setname]...]
filesystem|volume
62 zfs hold [-r] tag snapshot...
63 zfs holds [-r] snapshot...
64 zfs release [-r] tag snapshot...
65 zfs diff [-FHt] snapshot snapshot|filesystem
66 zfs program [-n] [-t timeout] [-m memory_limit] pool script [arg1 ...]
67
68 DESCRIPTION
69 The zfs command configures ZFS datasets within a ZFS storage pool, as
70 described in zpool(1M). A dataset is identified by a unique path within
71 the ZFS namespace. For example:
72
73 pool/{filesystem,volume,snapshot}
74
75 where the maximum length of a dataset name is MAXNAMELEN (256 bytes).
76
77 A dataset can be one of the following:
78
79 file system A ZFS dataset of type filesystem can be mounted within the
80 standard system namespace and behaves like other file
81 systems. While ZFS file systems are designed to be POSIX
82 compliant, known issues exist that prevent compliance in
83 some cases. Applications that depend on standards
84 conformance might fail due to non-standard behavior when
85 checking file system free space.
86
87 volume A logical volume exported as a raw or block device. This
88 type of dataset should only be used under special
89 circumstances. File systems are typically used in most
90 environments.
91
92 snapshot A read-only version of a file system or volume at a given
93 point in time. It is specified as filesystem@name or
94 volume@name.
95
96 ZFS File System Hierarchy
97 A ZFS storage pool is a logical collection of devices that provide space
98 for datasets. A storage pool is also the root of the ZFS file system
99 hierarchy.
100
101 The root of the pool can be accessed as a file system, such as mounting
102 and unmounting, taking snapshots, and setting properties. The physical
103 storage characteristics, however, are managed by the zpool(1M) command.
104
105 See zpool(1M) for more information on creating and administering pools.
106
107 Snapshots
108 A snapshot is a read-only copy of a file system or volume. Snapshots can
109 be created extremely quickly, and initially consume no additional space
within the pool. As data within the active dataset changes, the snapshot
consumes more disk space by continuing to reference the old data, thus
preventing that space from being freed.
113
114 Snapshots can have arbitrary names. Snapshots of volumes can be cloned
115 or rolled back, but cannot be accessed independently.
116
117 File system snapshots can be accessed under the .zfs/snapshot directory
118 in the root of the file system. Snapshots are automatically mounted on
119 demand and may be unmounted at regular intervals. The visibility of the
120 .zfs directory can be controlled by the snapdir property.
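
As an illustration, assuming a hypothetical file system pool/home/user
mounted at its default location, a snapshot can be created and its
contents browsed through the hidden directory:

      # zfs snapshot pool/home/user@monday
      # ls /pool/home/user/.zfs/snapshot/monday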
121
122 Clones
123 A clone is a writable volume or file system whose initial contents are
124 the same as another dataset. As with snapshots, creating a clone is
125 nearly instantaneous, and initially consumes no additional space.
126
127 Clones can only be created from a snapshot. When a snapshot is cloned,
128 it creates an implicit dependency between the parent and child. Even
129 though the clone is created somewhere else in the dataset hierarchy, the
130 original snapshot cannot be destroyed as long as a clone exists. The
131 origin property exposes this dependency, and the destroy command lists
132 any such dependencies, if they exist.
133
134 The clone parent-child dependency relationship can be reversed by using
135 the promote subcommand. This causes the "origin" file system to become a
136 clone of the specified file system, which makes it possible to destroy
137 the file system that the clone was created from.
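
As an illustration, assuming a hypothetical snapshot
pool/project/production@today, the following sequence creates a clone,
promotes it, and then destroys the original file system:

      # zfs clone pool/project/production@today pool/project/beta
      # zfs promote pool/project/beta
      # zfs destroy pool/project/production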
138
139 Mount Points
Creating a ZFS file system is a simple operation, so the number of file
systems per system is likely to be large. To cope with this, ZFS
142 automatically manages mounting and unmounting file systems without the
143 need to edit the /etc/vfstab file. All automatically managed file
144 systems are mounted by ZFS at boot time.
145
146 By default, file systems are mounted under /path, where path is the name
147 of the file system in the ZFS namespace. Directories are created and
148 destroyed as needed.
149
150 A file system can also have a mount point set in the mountpoint property.
151 This directory is created as needed, and ZFS automatically mounts the
152 file system when the zfs mount -a command is invoked (without editing
153 /etc/vfstab). The mountpoint property can be inherited, so if pool/home
154 has a mount point of /export/stuff, then pool/home/user automatically
155 inherits a mount point of /export/stuff/user.
156
157 A file system mountpoint property of none prevents the file system from
158 being mounted.
159
160 If needed, ZFS file systems can also be managed with traditional tools
161 (mount, umount, /etc/vfstab). If a file system's mount point is set to
162 legacy, ZFS makes no attempt to manage the file system, and the
163 administrator is responsible for mounting and unmounting the file system.
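
For example, given a hypothetical pool/home hierarchy, the following
sets an inherited mount point and places one child under legacy
management; pool/home/user is then mounted at /export/stuff/user, while
pool/home/build is left to the administrator and /etc/vfstab:

      # zfs set mountpoint=/export/stuff pool/home
      # zfs set mountpoint=legacy pool/home/build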
164
165 Zones
166 A ZFS file system can be added to a non-global zone by using the zonecfg
167 add fs subcommand. A ZFS file system that is added to a non-global zone
168 must have its mountpoint property set to legacy.
169
170 The physical properties of an added file system are controlled by the
171 global administrator. However, the zone administrator can create,
172 modify, or destroy files within the added file system, depending on how
173 the file system is mounted.
174
175 A dataset can also be delegated to a non-global zone by using the zonecfg
176 add dataset subcommand. You cannot delegate a dataset to one zone and
177 the children of the same dataset to another zone. The zone administrator
178 can change properties of the dataset or any of its children. However,
179 the quota, filesystem_limit and snapshot_limit properties of the
180 delegated dataset can be modified only by the global administrator.
181
182 A ZFS volume can be added as a device to a non-global zone by using the
183 zonecfg add device subcommand. However, its physical properties can be
184 modified only by the global administrator.
185
186 For more information about zonecfg syntax, see zonecfg(1M).
187
188 After a dataset is delegated to a non-global zone, the zoned property is
189 automatically set. A zoned file system cannot be mounted in the global
zone, since the zone administrator might have set the mount point to
191 an unacceptable value.
192
193 The global administrator can forcibly clear the zoned property, though
194 this should be done with extreme care. The global administrator should
195 verify that all the mount points are acceptable before clearing the
196 property.
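
The following sketch shows how a hypothetical dataset
tank/zones/delegated might be delegated to a non-global zone named
myzone; the zone and dataset names are placeholders, and zonecfg(1M)
remains the authoritative reference for the syntax:

      # zonecfg -z myzone
      zonecfg:myzone> add dataset
      zonecfg:myzone:dataset> set name=tank/zones/delegated
      zonecfg:myzone:dataset> end
      zonecfg:myzone> commit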
197
198 Native Properties
199 Properties are divided into two types, native properties and user-defined
200 (or "user") properties. Native properties either export internal
201 statistics or control ZFS behavior. In addition, native properties are
202 either editable or read-only. User properties have no effect on ZFS
203 behavior, but you can use them to annotate datasets in a way that is
204 meaningful in your environment. For more information about user
205 properties, see the User Properties section, below.
206
207 Every dataset has a set of properties that export statistics about the
208 dataset as well as control various behaviors. Properties are inherited
209 from the parent unless overridden by the child. Some properties apply
210 only to certain types of datasets (file systems, volumes, or snapshots).
211
212 The values of numeric properties can be specified using human-readable
213 suffixes (for example, k, KB, M, Gb, and so forth, up to Z for
214 zettabyte). The following are all valid (and equal) specifications:
215 1536M, 1.5g, 1.50GB.
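
For instance, given a hypothetical file system pool/home/user, each of
the following commands sets the same 1.5 Gbyte quota:

      # zfs set quota=1536M pool/home/user
      # zfs set quota=1.5g pool/home/user
      # zfs set quota=1.50GB pool/home/user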
216
217 The values of non-numeric properties are case sensitive and must be
218 lowercase, except for mountpoint, sharenfs, and sharesmb.
219
220 The following native properties consist of read-only statistics about the
221 dataset. These properties can be neither set, nor inherited. Native
222 properties apply to all dataset types unless otherwise noted.
223
224 available The amount of space available to the dataset and
225 all its children, assuming that there is no other
226 activity in the pool. Because space is shared
227 within a pool, availability can be limited by any
228 number of factors, including physical pool size,
229 quotas, reservations, or other datasets within the
230 pool.
231
232 This property can also be referred to by its
233 shortened column name, avail.
234
235 compressratio For non-snapshots, the compression ratio achieved
236 for the used space of this dataset, expressed as a
237 multiplier. The used property includes descendant
238 datasets, and, for clones, does not include the
239 space shared with the origin snapshot. For
240 snapshots, the compressratio is the same as the
241 refcompressratio property. Compression can be
242 turned on by running: zfs set compression=on
243 dataset. The default value is off.
244
245 creation The time this dataset was created.
246
247 clones For snapshots, this property is a comma-separated
248 list of filesystems or volumes which are clones of
249 this snapshot. The clones' origin property is this
250 snapshot. If the clones property is not empty,
then this snapshot cannot be destroyed (even with
252 the -r or -f options).
253
254 defer_destroy This property is on if the snapshot has been marked
255 for deferred destroy by using the zfs destroy -d
256 command. Otherwise, the property is off.
257
258 filesystem_count The total number of filesystems and volumes that
259 exist under this location in the dataset tree.
260 This value is only available when a
261 filesystem_limit has been set somewhere in the tree
262 under which the dataset resides.
263
264 logicalreferenced The amount of space that is "logically" accessible
265 by this dataset. See the referenced property. The
266 logical space ignores the effect of the compression
267 and copies properties, giving a quantity closer to
268 the amount of data that applications see. However,
269 it does include space consumed by metadata.
270
271 This property can also be referred to by its
272 shortened column name, lrefer.
273
274 logicalused The amount of space that is "logically" consumed by
275 this dataset and all its descendents. See the used
276 property. The logical space ignores the effect of
277 the compression and copies properties, giving a
278 quantity closer to the amount of data that
279 applications see. However, it does include space
280 consumed by metadata.
281
282 This property can also be referred to by its
283 shortened column name, lused.
284
285 mounted For file systems, indicates whether the file system
286 is currently mounted. This property can be either
287 yes or no.
288
289 origin For cloned file systems or volumes, the snapshot
290 from which the clone was created. See also the
291 clones property.
292
293 receive_resume_token For filesystems or volumes which have saved
294 partially-completed state from zfs receive -s, this
295 opaque token can be provided to zfs send -t to
296 resume and complete the zfs receive.
297
298 referenced The amount of data that is accessible by this
299 dataset, which may or may not be shared with other
300 datasets in the pool. When a snapshot or clone is
301 created, it initially references the same amount of
302 space as the file system or snapshot it was created
303 from, since its contents are identical.
304
305 This property can also be referred to by its
306 shortened column name, refer.
307
308 refcompressratio The compression ratio achieved for the referenced
309 space of this dataset, expressed as a multiplier.
310 See also the compressratio property.
311
312 snapshot_count The total number of snapshots that exist under this
313 location in the dataset tree. This value is only
314 available when a snapshot_limit has been set
315 somewhere in the tree under which the dataset
316 resides.
317
318 type The type of dataset: filesystem, volume, or
319 snapshot.
320
321 used The amount of space consumed by this dataset and
322 all its descendents. This is the value that is
323 checked against this dataset's quota and
324 reservation. The space used does not include this
325 dataset's reservation, but does take into account
326 the reservations of any descendent datasets. The
327 amount of space that a dataset consumes from its
328 parent, as well as the amount of space that is
329 freed if this dataset is recursively destroyed, is
330 the greater of its space used and its reservation.
331
332 The used space of a snapshot (see the Snapshots
333 section) is space that is referenced exclusively by
334 this snapshot. If this snapshot is destroyed, the
335 amount of used space will be freed. Space that is
336 shared by multiple snapshots isn't accounted for in
337 this metric. When a snapshot is destroyed, space
338 that was previously shared with this snapshot can
339 become unique to snapshots adjacent to it, thus
340 changing the used space of those snapshots. The
341 used space of the latest snapshot can also be
342 affected by changes in the file system. Note that
343 the used space of a snapshot is a subset of the
344 written space of the snapshot.
345
346 The amount of space used, available, or referenced
347 does not take into account pending changes.
348 Pending changes are generally accounted for within
349 a few seconds. Committing a change to a disk using
350 fsync(3C) or O_SYNC does not necessarily guarantee
351 that the space usage information is updated
352 immediately.
353
354 usedby* The usedby* properties decompose the used
355 properties into the various reasons that space is
356 used. Specifically, used = usedbychildren +
357 usedbydataset + usedbyrefreservation +
358 usedbysnapshots. These properties are only
359 available for datasets created on zpool "version
360 13" pools.
361
362 usedbychildren The amount of space used by children of this
363 dataset, which would be freed if all the dataset's
364 children were destroyed.
365
366 usedbydataset The amount of space used by this dataset itself,
367 which would be freed if the dataset were destroyed
368 (after first removing any refreservation and
369 destroying any necessary snapshots or descendents).
370
371 usedbyrefreservation The amount of space used by a refreservation set on
372 this dataset, which would be freed if the
373 refreservation was removed.
374
375 usedbysnapshots The amount of space consumed by snapshots of this
376 dataset. In particular, it is the amount of space
377 that would be freed if all of this dataset's
378 snapshots were destroyed. Note that this is not
379 simply the sum of the snapshots' used properties
380 because space can be shared by multiple snapshots.
381
382 userused@user The amount of space consumed by the specified user
383 in this dataset. Space is charged to the owner of
384 each file, as displayed by ls -l. The amount of
385 space charged is displayed by du and ls -s. See
386 the zfs userspace subcommand for more information.
387
388 Unprivileged users can access only their own space
389 usage. The root user, or a user who has been
390 granted the userused privilege with zfs allow, can
391 access everyone's usage.
392
393 The userused@... properties are not displayed by
394 zfs get all. The user's name must be appended
395 after the @ symbol, using one of the following
396 forms:
397
398 o POSIX name (for example, joe)
399
400 o POSIX numeric ID (for example, 789)
401
402 o SID name (for example, joe.smith@mydomain)
403
404 o SID numeric ID (for example, S-1-123-456-789)
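
For example, the space charged to a hypothetical user joe on the file
system pool/home could be reported with either of the following:

      # zfs get userused@joe pool/home
      # zfs userspace pool/home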
405
406 userrefs This property is set to the number of user holds on
407 this snapshot. User holds are set by using the zfs
408 hold command.
409
410 groupused@group The amount of space consumed by the specified group
411 in this dataset. Space is charged to the group of
412 each file, as displayed by ls -l. See the
413 userused@user property for more information.
414
415 Unprivileged users can only access their own
416 groups' space usage. The root user, or a user who
417 has been granted the groupused privilege with zfs
418 allow, can access all groups' usage.
419
420 volblocksize For volumes, specifies the block size of the
421 volume. The blocksize cannot be changed once the
422 volume has been written, so it should be set at
423 volume creation time. The default blocksize for
424 volumes is 8 Kbytes. Any power of 2 from 512 bytes
425 to 128 Kbytes is valid.
426
427 This property can also be referred to by its
428 shortened column name, volblock.
429
430 written The amount of space referenced by this dataset,
431 that was written since the previous snapshot (i.e.
432 that is not referenced by the previous snapshot).
433
434 written@snapshot The amount of referenced space written to this
435 dataset since the specified snapshot. This is the
436 space that is referenced by this dataset but was
437 not referenced by the specified snapshot.
438
439 The snapshot may be specified as a short snapshot
440 name (just the part after the @), in which case it
441 will be interpreted as a snapshot in the same
442 filesystem as this dataset. The snapshot may be a
443 full snapshot name (filesystem@snapshot), which for
444 clones may be a snapshot in the origin's filesystem
445 (or the origin of the origin's filesystem, etc.)
446
447 The following native properties can be used to change the behavior of a
448 ZFS dataset.
449
450 aclinherit=discard|noallow|restricted|passthrough|passthrough-x
451 Controls how ACEs are inherited when files and directories are created.
452
453 discard does not inherit any ACEs.
454
455 noallow only inherits inheritable ACEs that specify "deny"
456 permissions.
457
458 restricted default, removes the write_acl and write_owner
459 permissions when the ACE is inherited.
460
461 passthrough inherits all inheritable ACEs without any modifications.
462
463 passthrough-x same meaning as passthrough, except that the owner@,
464 group@, and everyone@ ACEs inherit the execute
465 permission only if the file creation mode also requests
466 the execute bit.
467
468 When the property value is set to passthrough, files are created with a
469 mode determined by the inheritable ACEs. If no inheritable ACEs exist
470 that affect the mode, then the mode is set in accordance to the
471 requested mode from the application.
472
473 aclmode=discard|groupmask|passthrough|restricted
474 Controls how an ACL is modified during chmod(2) and how inherited ACEs
475 are modified by the file creation mode.
476
477 discard default, deletes all ACEs except for those representing
478 the mode of the file or directory requested by chmod(2).
479
480 groupmask reduces permissions granted by all ALLOW entries found in
481 the ACL such that they are no greater than the group
482 permissions specified by the mode.
483
484 passthrough indicates that no changes are made to the ACL other than
485 creating or updating the necessary ACEs to represent the
486 new mode of the file or directory.
487
488 restricted causes the chmod(2) operation to return an error when used
489 on any file or directory which has a non-trivial ACL, with
490 entries in addition to those that represent the mode.
491
492 chmod(2) is required to change the set user ID, set group ID, or sticky
493 bit on a file or directory, as they do not have equivalent ACEs. In
494 order to use chmod(2) on a file or directory with a non-trivial ACL
495 when aclmode is set to restricted, you must first remove all ACEs
496 except for those that represent the current mode.
497
498 atime=on|off
499 Controls whether the access time for files is updated when they are
500 read. Turning this property off avoids producing write traffic when
501 reading files and can result in significant performance gains, though
502 it might confuse mailers and other similar utilities. The default
503 value is on.
504
505 canmount=on|off|noauto
506 If this property is set to off, the file system cannot be mounted, and
507 is ignored by zfs mount -a. Setting this property to off is similar to
508 setting the mountpoint property to none, except that the dataset still
509 has a normal mountpoint property, which can be inherited. Setting this
510 property to off allows datasets to be used solely as a mechanism to
511 inherit properties. One example of setting canmount=off is to have two
512 datasets with the same mountpoint, so that the children of both
513 datasets appear in the same directory, but might have different
514 inherited characteristics.
515
516 When set to noauto, a dataset can only be mounted and unmounted
517 explicitly. The dataset is not mounted automatically when the dataset
518 is created or imported, nor is it mounted by the zfs mount -a command
519 or unmounted by the zfs unmount -a command.
520
521 This property is not inherited.
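
A minimal sketch of the shared-mountpoint arrangement described above,
using hypothetical dataset names, might look like this; /home/alice and
/home/bob then appear in the same directory while inheriting properties
from different parents:

      # zfs create -o canmount=off -o mountpoint=/home rpool/group1
      # zfs create -o canmount=off -o mountpoint=/home rpool/group2
      # zfs create rpool/group1/alice
      # zfs create rpool/group2/bob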
522
523 checksum=on|off|fletcher2|fletcher4|sha256|noparity|sha512|skein|edonr
524 Controls the checksum used to verify data integrity. The default value
525 is on, which automatically selects an appropriate algorithm (currently,
526 fletcher4, but this may change in future releases). The value off
527 disables integrity checking on user data. The value noparity not only
528 disables integrity but also disables maintaining parity for user data.
529 This setting is used internally by a dump device residing on a RAID-Z
530 pool and should not be used by any other dataset. Disabling checksums
531 is NOT a recommended practice.
532
533 The sha512, skein, and edonr checksum algorithms require enabling the
534 appropriate features on the pool. Please see zpool-features(5) for
535 more information on these algorithms.
536
537 Changing this property affects only newly-written data.
538
539 Salted checksum algorithms (edonr, skein) are currently not supported
540 for any filesystem on the boot pools.
541
542 compression=on|off|gzip|gzip-N|lz4|lzjb|zle
543 Controls the compression algorithm used for this dataset.
544
545 Setting compression to on indicates that the current default
546 compression algorithm should be used. The default balances compression
547 and decompression speed, with compression ratio and is expected to work
548 well on a wide variety of workloads. Unlike all other settings for
549 this property, on does not select a fixed compression type. As new
550 compression algorithms are added to ZFS and enabled on a pool, the
551 default compression algorithm may change. The current default
552 compression algorithm is either lzjb or, if the lz4_compress feature is
553 enabled, lz4.
554
555 The lz4 compression algorithm is a high-performance replacement for the
556 lzjb algorithm. It features significantly faster compression and
557 decompression, as well as a moderately higher compression ratio than
558 lzjb, but can only be used on pools with the lz4_compress feature set
559 to enabled. See zpool-features(5) for details on ZFS feature flags and
560 the lz4_compress feature.
561
562 The lzjb compression algorithm is optimized for performance while
563 providing decent data compression.
564
565 The gzip compression algorithm uses the same compression as the gzip(1)
566 command. You can specify the gzip level by using the value gzip-N,
567 where N is an integer from 1 (fastest) to 9 (best compression ratio).
568 Currently, gzip is equivalent to gzip-6 (which is also the default for
569 gzip(1)).
570
571 The zle compression algorithm compresses runs of zeros.
572
573 This property can also be referred to by its shortened column name
574 compress. Changing this property affects only newly-written data.
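
For example, lz4 compression can be enabled on a hypothetical file
system (provided the pool's lz4_compress feature is enabled), or gzip
can be selected at a specific level:

      # zfs set compression=lz4 pool/data
      # zfs set compression=gzip-9 pool/archive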
575
576 copies=1|2|3
577 Controls the number of copies of data stored for this dataset. These
578 copies are in addition to any redundancy provided by the pool, for
579 example, mirroring or RAID-Z. The copies are stored on different
580 disks, if possible. The space used by multiple copies is charged to
581 the associated file and dataset, changing the used property and
582 counting against quotas and reservations.
583
584 Changing this property only affects newly-written data. Therefore, set
585 this property at file system creation time by using the -o copies=N
586 option.
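
For example, a hypothetical file system storing critical data might be
created with two copies of each block:

      # zfs create -o copies=2 pool/important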
587
588 devices=on|off
589 Controls whether device nodes can be opened on this file system. The
590 default value is on.
591
592 exec=on|off
593 Controls whether processes can be executed from within this file
594 system. The default value is on.
595
596 filesystem_limit=count|none
597 Limits the number of filesystems and volumes that can exist under this
598 point in the dataset tree. The limit is not enforced if the user is
allowed to change the limit. Setting a filesystem_limit on a
600 descendent of a filesystem that already has a filesystem_limit does not
601 override the ancestor's filesystem_limit, but rather imposes an
602 additional limit. This feature must be enabled to be used (see
603 zpool-features(5)).
604
605 mountpoint=path|none|legacy
606 Controls the mount point used for this file system. See the Mount
607 Points section for more information on how this property is used.
608
609 When the mountpoint property is changed for a file system, the file
610 system and any children that inherit the mount point are unmounted. If
611 the new value is legacy, then they remain unmounted. Otherwise, they
612 are automatically remounted in the new location if the property was
613 previously legacy or none, or if they were mounted before the property
614 was changed. In addition, any shared file systems are unshared and
615 shared in the new location.
616
617 nbmand=on|off
618 Controls whether the file system should be mounted with nbmand (Non
619 Blocking mandatory locks). This is used for SMB clients. Changes to
620 this property only take effect when the file system is umounted and
621 remounted. See mount(1M) for more information on nbmand mounts.
622
623 primarycache=all|none|metadata
624 Controls what is cached in the primary cache (ARC). If this property
625 is set to all, then both user data and metadata is cached. If this
626 property is set to none, then neither user data nor metadata is cached.
627 If this property is set to metadata, then only metadata is cached. The
628 default value is all.
629
630 quota=size|none
631 Limits the amount of space a dataset and its descendents can consume.
632 This property enforces a hard limit on the amount of space used. This
633 includes all space consumed by descendents, including file systems and
634 snapshots. Setting a quota on a descendent of a dataset that already
635 has a quota does not override the ancestor's quota, but rather imposes
636 an additional limit.
637
638 Quotas cannot be set on volumes, as the volsize property acts as an
639 implicit quota.
640
641 snapshot_limit=count|none
642 Limits the number of snapshots that can be created on a dataset and its
643 descendents. Setting a snapshot_limit on a descendent of a dataset
644 that already has a snapshot_limit does not override the ancestor's
645 snapshot_limit, but rather imposes an additional limit. The limit is
646 not enforced if the user is allowed to change the limit. For example,
647 this means that recursive snapshots taken from the global zone are
648 counted against each delegated dataset within a zone. This feature
649 must be enabled to be used (see zpool-features(5)).
650
651 userquota@user=size|none
652 Limits the amount of space consumed by the specified user. User space
consumption is identified by the userused@user property.
654
655 Enforcement of user quotas may be delayed by several seconds. This
656 delay means that a user might exceed their quota before the system
657 notices that they are over quota and begins to refuse additional writes
658 with the EDQUOT error message. See the zfs userspace subcommand for
659 more information.
660
Unprivileged users can only access their own space usage. The
662 root user, or a user who has been granted the userquota privilege with
663 zfs allow, can get and set everyone's quota.
664
665 This property is not available on volumes, on file systems before
666 version 4, or on pools before version 15. The userquota@... properties
667 are not displayed by zfs get all. The user's name must be appended
668 after the @ symbol, using one of the following forms:
669
670 o POSIX name (for example, joe)
671
672 o POSIX numeric ID (for example, 789)
673
674 o SID name (for example, joe.smith@mydomain)
675
676 o SID numeric ID (for example, S-1-123-456-789)
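
For example, to give a hypothetical user joe a 50 Gbyte quota on
pool/home and then inspect it:

      # zfs set userquota@joe=50G pool/home
      # zfs get userquota@joe pool/home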
677
678 groupquota@group=size|none
679 Limits the amount of space consumed by the specified group. Group
680 space consumption is identified by the groupused@group property.
681
682 Unprivileged users can access only their own groups' space usage. The
683 root user, or a user who has been granted the groupquota privilege with
684 zfs allow, can get and set all groups' quotas.
685
686 readonly=on|off
687 Controls whether this dataset can be modified. The default value is
688 off.
689
690 This property can also be referred to by its shortened column name,
691 rdonly.
692
693 recordsize=size
694 Specifies a suggested block size for files in the file system. This
695 property is designed solely for use with database workloads that access
696 files in fixed-size records. ZFS automatically tunes block sizes
697 according to internal algorithms optimized for typical access patterns.
698
699 For databases that create very large files but access them in small
700 random chunks, these algorithms may be suboptimal. Specifying a
701 recordsize greater than or equal to the record size of the database can
702 result in significant performance gains. Use of this property for
703 general purpose file systems is strongly discouraged, and may adversely
704 affect performance.
705
706 The size specified must be a power of two greater than or equal to 512
707 and less than or equal to 128 Kbytes. If the large_blocks feature is
708 enabled on the pool, the size may be up to 1 Mbyte. See
709 zpool-features(5) for details on ZFS feature flags.
710
711 Changing the file system's recordsize affects only files created
712 afterward; existing files are unaffected.
713
714 This property can also be referred to by its shortened column name,
715 recsize.
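
For example, a hypothetical file system intended to hold a database
with a fixed 8 Kbyte record size might be created as follows:

      # zfs create -o recordsize=8K pool/db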
716
717 redundant_metadata=all|most
718 Controls what types of metadata are stored redundantly. ZFS stores an
719 extra copy of metadata, so that if a single block is corrupted, the
720 amount of user data lost is limited. This extra copy is in addition to
721 any redundancy provided at the pool level (e.g. by mirroring or
722 RAID-Z), and is in addition to an extra copy specified by the copies
property (up to a total of 3 copies). For example, if the pool is
724 mirrored, copies=2, and redundant_metadata=most, then ZFS stores 6
725 copies of most metadata, and 4 copies of data and some metadata.
726
727 When set to all, ZFS stores an extra copy of all metadata. If a single
728 on-disk block is corrupt, at worst a single block of user data (which
729 is recordsize bytes long) can be lost.
730
731 When set to most, ZFS stores an extra copy of most types of metadata.
732 This can improve performance of random writes, because less metadata
733 must be written. In practice, at worst about 100 blocks (of recordsize
734 bytes each) of user data can be lost if a single on-disk block is
735 corrupt. The exact behavior of which metadata blocks are stored
736 redundantly may change in future releases.
737
738 The default value is all.
739
740 refquota=size|none
741 Limits the amount of space a dataset can consume. This property
742 enforces a hard limit on the amount of space used. This hard limit
743 does not include space used by descendents, including file systems and
744 snapshots.
745
746 refreservation=size|none|auto
747 The minimum amount of space guaranteed to a dataset, not including its
748 descendents. When the amount of space used is below this value, the
749 dataset is treated as if it were taking up the amount of space
750 specified by refreservation. The refreservation reservation is
751 accounted for in the parent datasets' space used, and counts against
752 the parent datasets' quotas and reservations.
753
754 If refreservation is set, a snapshot is only allowed if there is enough
755 free pool space outside of this reservation to accommodate the current
756 number of "referenced" bytes in the dataset.
757
758 If refreservation is set to auto, a volume is made dense (or "not
759 sparse"). refreservation=auto is only supported on volumes. See
760 volsize in the Native Properties section for more information about
761 sparse volumes.
762
763 This property can also be referred to by its shortened column name,
764 refreserv.
765
766 reservation=size|none|auto
767 The minimum amount of space guaranteed to a dataset and its
768 descendants. When the amount of space used is below this value, the
769 dataset is treated as if it were taking up the amount of space
770 specified by its reservation. Reservations are accounted for in the
771 parent datasets' space used, and count against the parent datasets'
772 quotas and reservations.
773
774 See refreservation=auto above for a description of the behavior of
775 setting reservation to auto. If the pool is at version 9 or later,
776 refreservation=auto should be used instead.
777
778 This property can also be referred to by its shortened column name,
779 reserv.
780
781 secondarycache=all|none|metadata
782 Controls what is cached in the secondary cache (L2ARC). If this
783 property is set to all, then both user data and metadata is cached. If
784 this property is set to none, then neither user data nor metadata is
785 cached. If this property is set to metadata, then only metadata is
786 cached. The default value is all.
787
788 setuid=on|off
789 Controls whether the setuid bit is respected for the file system. The
790 default value is on.
791
792 sharesmb=on|off|opts
793 Controls whether the file system is shared via SMB, and what options
794 are to be used. A file system with the sharesmb property set to off is
795 managed through traditional tools such as sharemgr(1M). Otherwise, the
796 file system is automatically shared and unshared with the zfs share and
797 zfs unshare commands. If the property is set to on, the sharemgr(1M)
798 command is invoked with no options. Otherwise, the sharemgr(1M)
799 command is invoked with options equivalent to the contents of this
800 property.
801
Because SMB shares require a resource name, a unique resource name is
constructed from the dataset name. The constructed name is a copy of
the dataset name, except that characters that would be invalid in the
resource name are replaced with underscore (_)
806 characters. A pseudo property "name" is also supported that allows you
to replace the dataset name with a specified name. The specified name
808 is then used to replace the prefix dataset in the case of inheritance.
809 For example, if the dataset data/home/john is set to name=john, then
810 data/home/john has a resource name of john. If a child dataset
811 data/home/john/backups is shared, it has a resource name of
812 john_backups.
813
814 When SMB shares are created, the SMB share name appears as an entry in
815 the .zfs/shares directory. You can use the ls or chmod command to
816 display the share-level ACLs on the entries in this directory.
817
818 When the sharesmb property is changed for a dataset, the dataset and
819 any children inheriting the property are re-shared with the new
820 options, only if the property was previously set to off, or if they
821 were shared before the property was changed. If the new property is
822 set to off, the file systems are unshared.
823
824 sharenfs=on|off|opts
825 Controls whether the file system is shared via NFS, and what options
826 are to be used. A file system with a sharenfs property of off is
827 managed through traditional tools such as share(1M), unshare(1M), and
828 dfstab(4). Otherwise, the file system is automatically shared and
829 unshared with the zfs share and zfs unshare commands. If the property
is set to on, the share(1M) command is invoked with no options. Otherwise,
831 the share(1M) command is invoked with options equivalent to the
832 contents of this property.
833
834 When the sharenfs property is changed for a dataset, the dataset and
835 any children inheriting the property are re-shared with the new
836 options, only if the property was previously off, or if they were
837 shared before the property was changed. If the new property is off,
838 the file systems are unshared.
839
840 logbias=latency|throughput
841 Provide a hint to ZFS about handling of synchronous requests in this
842 dataset. If logbias is set to latency (the default), ZFS will use pool
843 log devices (if configured) to handle the requests at low latency. If
844 logbias is set to throughput, ZFS will not use configured pool log
845 devices. ZFS will instead optimize synchronous operations for global
846 pool throughput and efficient use of resources.
847
848 snapdir=hidden|visible
849 Controls whether the .zfs directory is hidden or visible in the root of
850 the file system as discussed in the Snapshots section. The default
851 value is hidden.
852
853 sync=standard|always|disabled
854 Controls the behavior of synchronous requests (e.g. fsync, O_DSYNC).
855 standard is the POSIX specified behavior of ensuring all synchronous
856 requests are written to stable storage and all devices are flushed to
857 ensure data is not cached by device controllers (this is the default).
858 always causes every file system transaction to be written and flushed
859 before its system call returns. This has a large performance penalty.
860 disabled disables synchronous requests. File system transactions are
861 only committed to stable storage periodically. This option will give
862 the highest performance. However, it is very dangerous as ZFS would be
863 ignoring the synchronous transaction demands of applications such as
864 databases or NFS. Administrators should only use this option when the
865 risks are understood.
866
867 version=N|current
868 The on-disk version of this file system, which is independent of the
869 pool version. This property can only be set to later supported
870 versions. See the zfs upgrade command.
871
872 volsize=size
873 For volumes, specifies the logical size of the volume. By default,
874 creating a volume establishes a reservation of equal size. For storage
875 pools with a version number of 9 or higher, a refreservation is set
876 instead. Any changes to volsize are reflected in an equivalent change
877 to the reservation (or refreservation). The volsize can only be set to
878 a multiple of volblocksize, and cannot be zero.
879
880 The reservation is kept equal to the volume's logical size to prevent
881 unexpected behavior for consumers. Without the reservation, the volume
882 could run out of space, resulting in undefined behavior or data
883 corruption, depending on how the volume is used. These effects can
884 also occur when the volume size is changed while it is in use
885 (particularly when shrinking the size). Extreme care should be used
886 when adjusting the volume size.
887
888 Though not recommended, a "sparse volume" (also known as "thin
889 provisioning") can be created by specifying the -s option to the zfs
890 create -V command, or by changing the reservation after the volume has
891 been created. A "sparse volume" is a volume where the reservation is
892 less than the size of the volume plus the space required to store its
893 metadata. Consequently, writes to a sparse volume can fail with ENOSPC
894 when the pool is low on space. For a sparse volume, changes to volsize
895 are not reflected in the reservation. A sparse volume can be made
896 dense (or "not sparse") by setting the reservation to auto.
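
For example, a hypothetical 100 Gbyte sparse volume can be created and
later made dense:

      # zfs create -s -V 100G pool/vol
      # zfs set refreservation=auto pool/vol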
897
898 vscan=on|off
899 Controls whether regular files should be scanned for viruses when a
900 file is opened and closed. In addition to enabling this property, the
901 virus scan service must also be enabled for virus scanning to occur.
902 The default value is off.
903
904 xattr=on|off
905 Controls whether extended attributes are enabled for this file system.
906 The default value is on.
907
908 zoned=on|off
909 Controls whether the dataset is managed from a non-global zone. See
910 the Zones section for more information. The default value is off.
911
912 The following three properties cannot be changed after the file system is
913 created, and therefore, should be set when the file system is created.
914 If the properties are not set with the zfs create or zpool create
915 commands, these properties are inherited from the parent dataset. If the
916 parent dataset lacks these properties due to having been created prior to
917 these features being supported, the new file system will have the default
918 values for these properties.
919
920 casesensitivity=sensitive|insensitive|mixed
921 Indicates whether the file name matching algorithm used by the file
922 system should be case-sensitive, case-insensitive, or allow a
923 combination of both styles of matching. The default value for the
924 casesensitivity property is sensitive. Traditionally, UNIX and POSIX
925 file systems have case-sensitive file names.
926
927 The mixed value for the casesensitivity property indicates that the
928 file system can support requests for both case-sensitive and case-
929 insensitive matching behavior. Currently, case-insensitive matching
930 behavior on a file system that supports mixed behavior is limited to
931 the SMB server product. For more information about the mixed value
932 behavior, see the "ZFS Administration Guide".
933
934 normalization=none|formC|formD|formKC|formKD
935 Indicates whether the file system should perform a unicode
936 normalization of file names whenever two file names are compared, and
937 which normalization algorithm should be used. File names are always
stored unmodified; names are normalized as part of any comparison
939 process. If this property is set to a legal value other than none, and
940 the utf8only property was left unspecified, the utf8only property is
941 automatically set to on. The default value of the normalization
942 property is none. This property cannot be changed after the file
943 system is created.
944
945 utf8only=on|off
946 Indicates whether the file system should reject file names that include
947 characters that are not present in the UTF-8 character code set. If
948 this property is explicitly set to off, the normalization property must
949 either not be explicitly set or be set to none. The default value for
950 the utf8only property is off. This property cannot be changed after
951 the file system is created.
952
953 The casesensitivity, normalization, and utf8only properties are also new
954 permissions that can be assigned to non-privileged users by using the ZFS
955 delegated administration feature.
956
957 Temporary Mount Point Properties
958 When a file system is mounted, either through mount(1M) for legacy mounts
959 or the zfs mount command for normal file systems, its mount options are
960 set according to its properties. The correlation between properties and
961 mount options is as follows:
962
963 PROPERTY MOUNT OPTION
964 devices devices/nodevices
965 exec exec/noexec
966 readonly ro/rw
967 setuid setuid/nosetuid
968 xattr xattr/noxattr
969
970 In addition, these options can be set on a per-mount basis using the -o
971 option, without affecting the property that is stored on disk. The
972 values specified on the command line override the values stored in the
973 dataset. The nosuid option is an alias for nodevices,nosetuid. These
974 properties are reported as "temporary" by the zfs get command. If the
975 properties are changed while the dataset is mounted, the new setting
976 overrides any temporary settings.
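
For example, a hypothetical file system can be mounted read-only for a
single mount without changing its stored readonly property:

      # zfs mount -o ro pool/scratch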
977
978 User Properties
979 In addition to the standard native properties, ZFS supports arbitrary
980 user properties. User properties have no effect on ZFS behavior, but
981 applications or administrators can use them to annotate datasets (file
982 systems, volumes, and snapshots).
983
984 User property names must contain a colon (":") character to distinguish
985 them from native properties. They may contain lowercase letters,
986 numbers, and the following punctuation characters: colon (":"), dash
987 ("-"), period ("."), and underscore ("_"). The expected convention is
988 that the property name is divided into two portions such as
989 module:property, but this namespace is not enforced by ZFS. User
990 property names can be at most 256 characters, and cannot begin with a
991 dash ("-").
992
993 When making programmatic use of user properties, it is strongly suggested
994 to use a reversed DNS domain name for the module component of property
995 names to reduce the chance that two independently-developed packages use
996 the same property name for different purposes.
997
998 The values of user properties are arbitrary strings, are always
999 inherited, and are never validated. All of the commands that operate on
1000 properties (zfs list, zfs get, zfs set, and so forth) can be used to
1001 manipulate both native properties and user properties. Use the zfs
1002 inherit command to clear a user property. If the property is not defined
1003 in any parent dataset, it is removed entirely. Property values are
1004 limited to 8192 bytes.
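
For example, datasets can be annotated with a hypothetical
reversed-DNS-style property and the value later cleared:

      # zfs set com.example:department=12345 pool/accounting
      # zfs get com.example:department pool/accounting
      # zfs inherit com.example:department pool/accounting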
1005
1006 ZFS Volumes as Swap or Dump Devices
1007 During an initial installation a swap device and dump device are created
1008 on ZFS volumes in the ZFS root pool. By default, the swap area size is
1009 based on 1/2 the size of physical memory up to 2 Gbytes. The size of the
1010 dump device depends on the kernel's requirements at installation time.
1011 Separate ZFS volumes must be used for the swap area and dump devices. Do
1012 not swap to a file on a ZFS file system. A ZFS swap file configuration
1013 is not supported.
1014
1015 If you need to change your swap area or dump device after the system is
1016 installed or upgraded, use the swap(1M) and dumpadm(1M) commands.
1017
1018 SUBCOMMANDS
1019 All subcommands that modify state are logged persistently to the pool in
1020 their original form.
1021
1022 zfs -?
1023 Displays a help message.
1024
1025 zfs create [-p] [-o property=value]... filesystem
1026 Creates a new ZFS file system. The file system is automatically
1027 mounted according to the mountpoint property inherited from the parent.
1028
1029 -o property=value
1030 Sets the specified property as if the command zfs set
1031 property=value was invoked at the same time the dataset was
1032 created. Any editable ZFS property can also be set at creation
1033 time. Multiple -o options can be specified. An error results if
1034 the same property is specified in multiple -o options.
1035
1036 -p Creates all the non-existing parent datasets. Datasets created in
1037 this manner are automatically mounted according to the mountpoint
1038 property inherited from their parent. Any property specified on
1039 the command line using the -o option is ignored. If the target
1040 filesystem already exists, the operation completes successfully.
1041
1042 zfs create [-ps] [-b blocksize] [-o property=value]... -V size volume
1043 Creates a volume of the given size. The volume is exported as a block
1044 device in /dev/zvol/{dsk,rdsk}/path, where path is the name of the
1045 volume in the ZFS namespace. The size represents the logical size as
1046 exported by the device. By default, a reservation of equal size is
1047 created.
1048
1049 size is automatically rounded up to the nearest 128 Kbytes to ensure
1050 that the volume has an integral number of blocks regardless of
1051 blocksize.
1052
1053 -b blocksize
1054 Equivalent to -o volblocksize=blocksize. If this option is
1055 specified in conjunction with -o volblocksize, the resulting
1056 behavior is undefined.
1057
1058 -o property=value
1059 Sets the specified property as if the zfs set property=value
1060 command was invoked at the same time the dataset was created. Any
1061 editable ZFS property can also be set at creation time. Multiple
1062 -o options can be specified. An error results if the same property
1063 is specified in multiple -o options.
1064
1065 -p Creates all the non-existing parent datasets. Datasets created in
1066 this manner are automatically mounted according to the mountpoint
1067 property inherited from their parent. Any property specified on
1068 the command line using the -o option is ignored. If the target
1069 filesystem already exists, the operation completes successfully.
1070
1071 -s Creates a sparse volume with no reservation. See volsize in the
1072 Native Properties section for more information about sparse
1073 volumes.
1074
1075 zfs destroy [-Rfnprv] filesystem|volume
1076 Destroys the given dataset. By default, the command unshares any file
1077 systems that are currently shared, unmounts any file systems that are
1078 currently mounted, and refuses to destroy a dataset that has active
1079 dependents (children or clones).
1080
1081 -R Recursively destroy all dependents, including cloned file systems
1082 outside the target hierarchy.
1083
-f Force an unmount of any file systems using the zfs unmount -f command.
1085 This option has no effect on non-file systems or unmounted file
1086 systems.
1087
1088 -n Do a dry-run ("No-op") deletion. No data will be deleted. This is
1089 useful in conjunction with the -v or -p flags to determine what
1090 data would be deleted.
1091
1092 -p Print machine-parsable verbose information about the deleted data.
1093
1094 -r Recursively destroy all children.
1095
1096 -v Print verbose information about the deleted data.
1097
1098 Extreme care should be taken when applying either the -r or the -R
1099 options, as they can destroy large portions of a pool and cause
1100 unexpected behavior for mounted file systems in use.
1101
1102 zfs destroy [-Rdnprv] filesystem|volume@snap[%snap[,snap[%snap]]]...
1103 The given snapshots are destroyed immediately if and only if the zfs
destroy command without the -d option would have destroyed them. Such
1105 immediate destruction would occur, for example, if the snapshot had no
1106 clones and the user-initiated reference count were zero.
1107
1108 If a snapshot does not qualify for immediate destruction, it is marked
1109 for deferred deletion. In this state, it exists as a usable, visible
1110 snapshot until both of the preconditions listed above are met, at which
1111 point it is destroyed.
1112
1113 An inclusive range of snapshots may be specified by separating the
1114 first and last snapshots with a percent sign. The first and/or last
1115 snapshots may be left blank, in which case the filesystem's oldest or
1116 newest snapshot will be implied.
1117
1118 Multiple snapshots (or ranges of snapshots) of the same filesystem or
1119 volume may be specified in a comma-separated list of snapshots. Only
1120 the snapshot's short name (the part after the @) should be specified
1121 when using a range or comma-separated list to identify multiple
1122 snapshots.
1123
1124 -R Recursively destroy all clones of these snapshots, including the
1125 clones, snapshots, and children. If this flag is specified, the -d
1126 flag will have no effect.
1127
1128 -d Defer snapshot deletion.
1129
1130 -n Do a dry-run ("No-op") deletion. No data will be deleted. This is
1131 useful in conjunction with the -p or -v flags to determine what
1132 data would be deleted.
1133
1134 -p Print machine-parsable verbose information about the deleted data.
1135
1136 -r Destroy (or mark for deferred deletion) all snapshots with this
1137 name in descendent file systems.
1138
1139 -v Print verbose information about the deleted data.
1140
1141 Extreme care should be taken when applying either the -r or the -R
1142 options, as they can destroy large portions of a pool and cause
1143 unexpected behavior for mounted file systems in use.
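
For example, assuming a hypothetical file system pool/home with
snapshots @snap1 through @snap4, the following dry run reports what
destroying an inclusive range of snapshots would remove, without
deleting anything:

      # zfs destroy -nv pool/home@snap1%snap3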
1144
1145 zfs destroy filesystem|volume#bookmark
1146 The given bookmark is destroyed.
1147
1148 zfs snapshot [-r] [-o property=value]...
1149 filesystem@snapname|volume@snapname...
1150 Creates snapshots with the given names. All previous modifications by
1151 successful system calls to the file system are part of the snapshots.
1152 Snapshots are taken atomically, so that all snapshots correspond to the
1153 same moment in time. See the Snapshots section for details.
1154
1155 -o property=value
1156 Sets the specified property; see zfs create for details.
1157
1158 -r Recursively create snapshots of all descendent datasets
1159
1160 zfs rollback [-Rfr] snapshot
1161 Roll back the given dataset to a previous snapshot. When a dataset is
1162 rolled back, all data that has changed since the snapshot is discarded,
1163 and the dataset reverts to the state at the time of the snapshot. By
1164 default, the command refuses to roll back to a snapshot other than the
1165 most recent one. In order to do so, all intermediate snapshots and
1166 bookmarks must be destroyed by specifying the -r option.
1167
1168 The -rR options do not recursively destroy the child snapshots of a
1169 recursive snapshot. Only direct snapshots of the specified filesystem
1170 are destroyed by either of these options. To completely roll back a
recursive snapshot, you must roll back the individual child snapshots.
1172
1173 -R Destroy any more recent snapshots and bookmarks, as well as any
1174 clones of those snapshots.
1175
1176 -f Used with the -R option to force an unmount of any clone file
1177 systems that are to be destroyed.
1178
1179 -r Destroy any snapshots and bookmarks more recent than the one
1180 specified.
1181
1182 zfs clone [-p] [-o property=value]... snapshot filesystem|volume
1183 Creates a clone of the given snapshot. See the Clones section for
1184 details. The target dataset can be located anywhere in the ZFS
1185 hierarchy, and is created as the same type as the original.
1186
1187 -o property=value
1188 Sets the specified property; see zfs create for details.
1189
1190 -p Creates all the non-existing parent datasets. Datasets created in
1191 this manner are automatically mounted according to the mountpoint
1192 property inherited from their parent. If the target filesystem or
1193 volume already exists, the operation completes successfully.
1194
1195 zfs promote clone-filesystem
1196 Promotes a clone file system to no longer be dependent on its "origin"
1197 snapshot. This makes it possible to destroy the file system that the
1198 clone was created from. The clone parent-child dependency relationship
1199 is reversed, so that the origin file system becomes a clone of the
1200 specified file system.
1201
1202 The snapshot that was cloned, and any snapshots previous to this
1203 snapshot, are now owned by the promoted clone. The space they use
1204 moves from the origin file system to the promoted clone, so enough
1205 space must be available to accommodate these snapshots. No new space
1206 is consumed by this operation, but the space accounting is adjusted.
1207 The promoted clone must not have any conflicting snapshot names of its
1208 own. The rename subcommand can be used to rename any conflicting
1209 snapshots.
1210
1211 zfs rename [-f] filesystem|volume|snapshot filesystem|volume|snapshot
1212
1213 zfs rename [-fp] filesystem|volume filesystem|volume
1214 Renames the given dataset. The new target can be located anywhere in
1215 the ZFS hierarchy, with the exception of snapshots. Snapshots can only
1216 be renamed within the parent file system or volume. When renaming a
1217 snapshot, the parent file system of the snapshot does not need to be
1218 specified as part of the second argument. Renamed file systems can
1219 inherit new mount points, in which case they are unmounted and
1220 remounted at the new mount point.
1221
1222 -f Force unmount any filesystems that need to be unmounted in the
1223 process.
1224
1225 -p Creates all the nonexistent parent datasets. Datasets created in
1226 this manner are automatically mounted according to the mountpoint
1227 property inherited from their parent.
1228
1229 zfs rename -r snapshot snapshot
1230 Recursively rename the snapshots of all descendent datasets. Snapshots
1231 are the only type of dataset that can be renamed recursively.
1232
1233 zfs list [-r|-d depth] [-Hp] [-o property[,property]...] [-s property]...
1234 [-S property]... [-t type[,type]...] [filesystem|volume|snapshot]...
1235 Lists the property information for the given datasets in tabular form.
1236 If specified, you can list property information by the absolute
1237 pathname or the relative pathname. By default, all file systems and
1238 volumes are displayed. Snapshots are displayed if the listsnaps
1239 property is on (the default is off). The following fields are
1240 displayed: name, used, available, referenced, mountpoint.
1241
1242 -H Used for scripting mode. Do not print headers, and separate
1243 fields by a single tab instead of arbitrary white space.
1244
1245 -S property
1246 Same as the -s option, but sorts by property in descending order.
1247
1248 -d depth
1249 Recursively display any children of the dataset, limiting the
1250 recursion to depth. A depth of 1 will display only the dataset and
1251 its direct children.
1252
1253 -o property
1254 A comma-separated list of properties to display. The property must
1255 be:
1256
1257 o One of the properties described in the Native Properties
1258 section
1259
1260 o A user property
1261
1262 o The value name to display the dataset name
1263
1264 o The value space to display space usage properties on file
1265 systems and volumes. This is a shortcut for specifying -o
1266 name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t
1267 filesystem,volume syntax.
1268
1269 -p Display numbers in parsable (exact) values.
1270
1271 -r Recursively display any children of the dataset on the command
1272 line.
1273
1274 -s property
1275 A property for sorting the output by column in ascending order
1276 based on the value of the property. The property must be one of
1277 the properties described in the Properties section, or the special
1278 value name to sort by the dataset name. Multiple properties can be
1279 specified at one time using multiple -s property options. Multiple
1280 -s options are evaluated from left to right in decreasing order of
1281 importance. The following is a list of sorting criteria:
1282
1283 o Numeric types sort in numeric order.
1284
1285 o String types sort in alphabetical order.
1286
1287 o Rows for which the property is inappropriate sort to the
1288 bottom, regardless of the specified ordering.
1289
1290 If no sorting options are specified, the existing behavior of zfs
1291 list is preserved.
1292
1293 -t type
1294 A comma-separated list of types to display, where type is one of
1295 filesystem, snapshot, volume, bookmark, or all. For example,
1296 specifying -t snapshot displays only snapshots.
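
     For example, the following command lists all snapshots beneath a
     hypothetical pool/home, showing their space usage sorted in
     descending order:

       # zfs list -r -t snapshot -o name,used -S used pool/home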
1297
1298 zfs set property=value [property=value]... filesystem|volume|snapshot...
1299 Sets the property or list of properties to the given value(s) for each
1300 dataset. Only some properties can be edited. See the Properties
1301 section for more information on what properties can be set and
1302 acceptable values. Numeric values can be specified as exact values, or
1303 in a human-readable form with a suffix of B, K, M, G, T, P, E, Z (for
1304 bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes,
1305 or zettabytes, respectively). User properties can be set on snapshots.
1306 For more information, see the User Properties section.
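
     For example, the following commands set a reservation using a
     human-readable suffix and set a user property on a snapshot (the
     dataset, snapshot, and property names are illustrative):

       # zfs set reservation=10G pool/home/anne
       # zfs set com.example:note=pre-upgrade pool/home/bob@yesterday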
1307
1308 zfs get [-r|-d depth] [-Hp] [-o field[,field]...] [-s source[,source]...]
1309 [-t type[,type]...] all | property[,property]...
1310 filesystem|volume|snapshot|bookmark...
1311 Displays properties for the given datasets. If no datasets are
1312 specified, then the command displays properties for all datasets on the
1313 system. For each property, the following columns are displayed:
1314
1315 name Dataset name
1316 property Property name
1317 value Property value
1318 source Property source. Can be local, default, temporary,
1319 inherited, or none (-).
1320
1321 All columns are displayed by default, though this can be controlled by
1322 using the -o option. This command takes a comma-separated list of
1323 properties as described in the Native Properties and User Properties
1324 sections.
1325
1326 The special value all can be used to display all properties that apply
1327 to the given dataset's type (filesystem, volume, snapshot, or
1328 bookmark).
1329
1330 -H Display output in a form more easily parsed by scripts. Any
1331 headers are omitted, and fields are explicitly separated by a
1332 single tab instead of an arbitrary amount of space.
1333
1334 -d depth
1335 Recursively display any children of the dataset, limiting the
1336 recursion to depth. A depth of 1 will display only the dataset and
1337 its direct children.
1338
1339 -o field
1340 A comma-separated list of columns to display.
1341 name,property,value,source is the default value.
1342
1343 -p Display numbers in parsable (exact) values.
1344
1345 -r Recursively display properties for any children.
1346
1347 -s source
1348 A comma-separated list of sources to display. Those properties
1349 coming from a source other than those in this list are ignored.
1350 Each source must be one of the following: local, default,
1351 inherited, temporary, and none. The default value is all sources.
1352
1353 -t type
1354 A comma-separated list of types to display, where type is one of
1355 filesystem, snapshot, volume, bookmark, or all.
1356
1357 zfs inherit [-rS] property filesystem|volume|snapshot...
1358 Clears the specified property, causing it to be inherited from an
1359 ancestor, restored to default if no ancestor has the property set, or
1360 with the -S option reverted to the received value if one exists. See
1361 the Properties section for a listing of default values, and details on
1362 which properties can be inherited.
1363
1364 -r Recursively inherit the given property for all children.
1365
1366 -S Revert the property to the received value if one exists; otherwise
1367 operate as if the -S option was not specified.
1368
1369 zfs remap filesystem|volume
1370 Remap the indirect blocks in the given filesystem or volume so that
1371 they no longer reference blocks on previously removed vdevs, allowing
1372 the indirect mapping objects for those vdevs to eventually shrink in
1373 size. Note that remapping all blocks might not be
1374 possible and that references from snapshots will still exist and cannot
1375 be remapped.
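
     For example, after one or more devices have been removed from a
     hypothetical pool tank, the indirect blocks of one of its file
     systems can be remapped with:

       # zfs remap tank/home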
1376
1377 zfs upgrade
1378 Displays a list of file systems that are not the most recent version.
1379
1380 zfs upgrade -v
1381 Displays a list of currently supported file system versions.
1382
1383 zfs upgrade [-r] [-V version] -a | filesystem
1384 Upgrades file systems to a new on-disk version. Once this is done, the
1385 file systems will no longer be accessible on systems running older
1386 versions of the software. zfs send streams generated from new
1387 snapshots of these file systems cannot be accessed on systems running
1388 older versions of the software.
1389
1390 In general, the file system version is independent of the pool version.
1391 See zpool(1M) for information on the zpool upgrade command.
1392
1393 In some cases, the file system version and the pool version are
1394 interrelated and the pool version must be upgraded before the file
1395 system version can be upgraded.
1396
1397 -V version
1398 Upgrade to the specified version. If the -V flag is not specified,
1399 this command upgrades to the most recent version. This option can
1400 only be used to increase the version number, and only up to the
1401 most recent version supported by this software.
1402
1403 -a Upgrade all file systems on all imported pools.
1404
1405 filesystem
1406 Upgrade the specified file system.
1407
1408 -r Upgrade the specified file system and all descendent file systems.
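
     For example, the following commands display the supported versions
     and then upgrade a hypothetical file system pool/home and all of its
     descendents to the most recent version:

       # zfs upgrade -v
       # zfs upgrade -r pool/home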
1409
1410 zfs userspace [-Hinp] [-o field[,field]...] [-s field]... [-S field]...
1411 [-t type[,type]...] filesystem|snapshot
1412 Displays space consumed by, and quotas on, each user in the specified
1413 filesystem or snapshot. This corresponds to the userused@user and
1414 userquota@user properties.
1415
1416 -H Do not print headers; use tab-delimited output.
1417
1418 -S field
1419 Sort by this field in reverse order. See -s.
1420
1421 -i Translate SID to POSIX ID. The POSIX ID may be ephemeral if no
1422 mapping exists. Normal POSIX interfaces (for example, stat(2), ls
1423 -l) perform this translation, so the -i option allows the output
1424 from zfs userspace to be compared directly with those utilities.
1425 However, -i may lead to confusion if some files were created by an
1426 SMB user before an SMB-to-POSIX name mapping was established. In
1427 such a case, some files will be owned by the SMB entity and some by
1428 the POSIX entity. However, the -i option will report that the
1429 POSIX entity has the total usage and quota for both.
1430
1431 -n Print numeric ID instead of user/group name.
1432
1433 -o field[,field]...
1434 Display only the specified fields from the following set: type,
1435 name, used, quota. The default is to display all fields.
1436
1437 -p Use exact (parsable) numeric output.
1438
1439 -s field
1440 Sort output by this field. The -s and -S flags may be specified
1441 multiple times to sort first by one field, then by another. The
1442 default is -s type -s name.
1443
1444 -t type[,type]...
1445 Print only the specified types from the following set: all,
1446 posixuser, smbuser, posixgroup, smbgroup. The default is -t
1447 posixuser,smbuser. The default can be changed to include group
1448 types.
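
     For example, the following command displays per-user space
     consumption for a hypothetical file system pool/home, sorted by
     usage and printed in parsable units:

       # zfs userspace -p -o name,used,quota -s used pool/home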
1449
1450 zfs groupspace [-Hinp] [-o field[,field]...] [-s field]... [-S field]...
1451 [-t type[,type]...] filesystem|snapshot
1452 Displays space consumed by, and quotas on, each group in the specified
1453 filesystem or snapshot. This subcommand is identical to zfs userspace,
1454 except that the default types to display are -t posixgroup,smbgroup.
1455
1456 zfs mount
1457 Displays all ZFS file systems currently mounted.
1458
1459 zfs mount [-Ov] [-o options] -a | filesystem
1460 Mounts ZFS file systems.
1461
1462 -O Perform an overlay mount. See mount(1M) for more information.
1463
1464 -a Mount all available ZFS file systems. Invoked automatically as
1465 part of the boot process.
1466
1467 filesystem
1468 Mount the specified filesystem.
1469
1470 -o options
1471 An optional, comma-separated list of mount options to use
1472 temporarily for the duration of the mount. See the Temporary Mount
1473 Point Properties section for details.
1474
1475 -v Report mount progress.
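
     For example, the following command mounts a hypothetical file system
     pool/home/bob read-only for the duration of the mount, using the ro
     temporary mount option (see the Temporary Mount Point Properties
     section):

       # zfs mount -o ro pool/home/bob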
1476
1477 zfs unmount [-f] -a | filesystem|mountpoint
1478 Unmounts currently mounted ZFS file systems.
1479
1480 -a Unmount all available ZFS file systems. Invoked automatically as
1481 part of the shutdown process.
1482
1483 filesystem|mountpoint
1484 Unmount the specified filesystem. The command can also be given a
1485 path to a ZFS file system mount point on the system.
1486
1487 -f Forcefully unmount the file system, even if it is currently in use.
1488
1489 zfs share -a | filesystem
1490 Shares available ZFS file systems.
1491
1492 -a Share all available ZFS file systems. Invoked automatically as
1493 part of the boot process.
1494
1495 filesystem
1496 Share the specified filesystem according to the sharenfs and
1497 sharesmb properties. File systems are shared when the sharenfs or
1498 sharesmb property is set.
1499
1500 zfs unshare -a | filesystem|mountpoint
1501 Unshares currently shared ZFS file systems.
1502
1503 -a Unshare all available ZFS file systems. Invoked automatically as
1504 part of the shutdown process.
1505
1506 filesystem|mountpoint
1507 Unshare the specified filesystem. The command can also be given a
1508 path to a ZFS file system shared on the system.
1509
1510 zfs bookmark snapshot bookmark
1511 Creates a bookmark of the given snapshot. Bookmarks mark the point in
1512 time when the snapshot was created, and can be used as the incremental
1513 source for a zfs send command.
1514
1515 This feature must be enabled to be used. See zpool-features(5) for
1516 details on ZFS feature flags and the bookmarks feature.
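
     For example, assuming pool/home/bob@yesterday has already been fully
     replicated to a hypothetical poolB/received/bob on a remote host, a
     bookmark lets the local snapshot be destroyed while still serving as
     the incremental source for a later send:

       # zfs bookmark pool/home/bob@yesterday pool/home/bob#yesterday
       # zfs destroy pool/home/bob@yesterday
       # zfs send -i pool/home/bob#yesterday pool/home/bob@today | \
           ssh host zfs receive poolB/received/bob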
1517
1518 zfs send [-DLPRcenpv] [[-I|-i] snapshot] snapshot
1519 Creates a stream representation of the second snapshot, which is
1520 written to standard output. The output can be redirected to a file or
1521 to a different system (for example, using ssh(1)). By default, a full
1522 stream is generated.
1523
1524 -D, --dedup
1525 Generate a deduplicated stream. Blocks which would have been sent
1526 multiple times in the send stream will only be sent once. The
1527 receiving system must also support this feature to receive a
1528 deduplicated stream. This flag can be used regardless of the
1529 dataset's dedup property, but performance will be much better if
1530 the filesystem uses a dedup-capable checksum (for example, sha256).
1531
1532 -I snapshot
1533 Generate a stream package that sends all intermediary snapshots
1534 from the first snapshot to the second snapshot. For example, -I @a
1535 fs@d is similar to -i @a fs@b; -i @b fs@c; -i @c fs@d. The
1536 incremental source may be specified as with the -i option.
1537
1538 -L, --large-block
1539 Generate a stream which may contain blocks larger than 128KB. This
1540 flag has no effect if the large_blocks pool feature is disabled, or
1541 if the recordsize property of this filesystem has never been set
1542 above 128KB. The receiving system must have the large_blocks pool
1543 feature enabled as well. See zpool-features(5) for details on ZFS
1544 feature flags and the large_blocks feature.
1545
1546 -P, --parsable
1547 Print machine-parsable verbose information about the stream package
1548 generated.
1549
1550 -R, --replicate
1551 Generate a replication stream package, which will replicate the
1552 specified file system, and all descendent file systems, up to the
1553 named snapshot. When received, all properties, snapshots,
1554 descendent file systems, and clones are preserved.
1555
1556 If the -i or -I flags are used in conjunction with the -R flag, an
1557 incremental replication stream is generated. The current values of
1558 properties, and current snapshot and file system names are set when
1559 the stream is received. If the -F flag is specified when this
1560 stream is received, snapshots and file systems that do not exist on
1561 the sending side are destroyed.
1562
1563 -e, --embed
1564 Generate a more compact stream by using WRITE_EMBEDDED records for
1565 blocks which are stored more compactly on disk by the embedded_data
1566 pool feature. This flag has no effect if the embedded_data feature
1567 is disabled. The receiving system must have the embedded_data
1568 feature enabled. If the lz4_compress feature is active on the
1569 sending system, then the receiving system must have that feature
1570 enabled as well. See zpool-features(5) for details on ZFS feature
1571 flags and the embedded_data feature.
1572
1573 -c, --compressed
1574 Generate a more compact stream by using compressed WRITE records
1575 for blocks which are compressed on disk and in memory (see the
1576 compression property for details). If the lz4_compress feature is
1577 active on the sending system, then the receiving system must have
1578 that feature enabled as well. If the large_blocks feature is
1579 enabled on the sending system but the -L option is not supplied in
1580 conjunction with -c, then the data will be decompressed before
1581 sending so it can be split into smaller block sizes.
1582
1583 -i snapshot
1584 Generate an incremental stream from the first snapshot (the
1585 incremental source) to the second snapshot (the incremental
1586 target). The incremental source can be specified as the last
1587 component of the snapshot name (the @ character and following) and
1588 it is assumed to be from the same file system as the incremental
1589 target.
1590
1591 If the destination is a clone, the source may be the origin
1592 snapshot, which must be fully specified (for example,
1593 pool/fs@origin, not just @origin).
1594
1595 -n, --dryrun
1596 Do a dry-run ("No-op") send. Do not generate any actual send data.
1597 This is useful in conjunction with the -v or -P flags to determine
1598 what data will be sent. In this case, the verbose output will be
1599 written to standard output (contrast with a non-dry-run, where the
1600 stream is written to standard output and the verbose output goes to
1601 standard error).
1602
1603 -p, --props
1604 Include the dataset's properties in the stream. This flag is
1605 implicit when -R is specified. The receiving system must also
1606 support this feature.
1607
1608 -v, --verbose
1609 Print verbose information about the stream package generated. This
1610 information includes a per-second report of how much data has been
1611 sent.
1612
1613 The format of the stream is committed. You will be able to receive
1614 your streams on future versions of ZFS.
1615
1616 zfs send [-Lce] [-i snapshot|bookmark] filesystem|volume|snapshot
1617 Generate a send stream, which may be of a filesystem, and may be
1618 incremental from a bookmark. If the destination is a filesystem or
1619 volume, the pool must be read-only, or the filesystem must not be
1620 mounted. When the stream generated from a filesystem or volume is
1621 received, the default snapshot name will be "--head--".
1622
1623 -L, --large-block
1624 Generate a stream which may contain blocks larger than 128KB. This
1625 flag has no effect if the large_blocks pool feature is disabled, or
1626 if the recordsize property of this filesystem has never been set
1627 above 128KB. The receiving system must have the large_blocks pool
1628 feature enabled as well. See zpool-features(5) for details on ZFS
1629 feature flags and the large_blocks feature.
1630
1631 -c, --compressed
1632 Generate a more compact stream by using compressed WRITE records
1633 for blocks which are compressed on disk and in memory (see the
1634 compression property for details). If the lz4_compress feature is
1635 active on the sending system, then the receiving system must have
1636 that feature enabled as well. If the large_blocks feature is
1637 enabled on the sending system but the -L option is not supplied in
1638 conjunction with -c, then the data will be decompressed before
1639 sending so it can be split into smaller block sizes.
1640
1641 -e, --embed
1642 Generate a more compact stream by using WRITE_EMBEDDED records for
1643 blocks which are stored more compactly on disk by the embedded_data
1644 pool feature. This flag has no effect if the embedded_data feature
1645 is disabled. The receiving system must have the embedded_data
1646 feature enabled. If the lz4_compress feature is active on the
1647 sending system, then the receiving system must have that feature
1648 enabled as well. See zpool-features(5) for details on ZFS feature
1649 flags and the embedded_data feature.
1650
1651 -i snapshot|bookmark
1652 Generate an incremental send stream. The incremental source must
1653 be an earlier snapshot in the destination's history. It will
1654 commonly be an earlier snapshot in the destination's file system,
1655 in which case it can be specified as the last component of the name
1656 (the # or @ character and following).
1657
1658 If the incremental target is a clone, the incremental source can be
1659 the origin snapshot, or an earlier snapshot in the origin's
1660 filesystem, or the origin's origin, etc.
1661
1662 zfs send [-Penv] -t receive_resume_token
1663 Creates a send stream which resumes an interrupted receive. The
1664 receive_resume_token is the value of this property on the filesystem or
1665 volume that was being received into. See the documentation for zfs
1666 receive -s for more details.
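
     For example, assuming an interrupted receive into poolB/received/fs
     that was started with zfs receive -s, and, for illustration, that
     both pools are imported on the same system, the transfer can be
     resumed as follows:

       # token=$(zfs get -H -o value receive_resume_token poolB/received/fs)
       # zfs send -t "$token" | zfs receive -s poolB/received/fs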
1667
1668 zfs receive [-Fnsuv] [-o origin=snapshot] filesystem|volume|snapshot
1669
1670 zfs receive [-Fnsuv] [-d|-e] [-o origin=snapshot] filesystem
1671 Creates a snapshot whose contents are as specified in the stream
1672 provided on standard input. If a full stream is received, then a new
1673 file system is created as well. Streams are created using the zfs send
1674 subcommand, which by default creates a full stream. zfs recv can be
1675 used as an alias for zfs receive.
1676
1677 If an incremental stream is received, then the destination file system
1678 must already exist, and its most recent snapshot must match the
1679 incremental stream's source. For zvols, the destination device link is
1680 destroyed and recreated, which means the zvol cannot be accessed during
1681 the receive operation.
1682
1683 When a snapshot replication package stream that is generated by using
1684 the zfs send -R command is received, any snapshots that do not exist on
1685 the sending location are destroyed by using the zfs destroy -d command.
1686
1687 The name of the snapshot (and file system, if a full stream is
1688 received) that this subcommand creates depends on the argument type and
1689 the use of the -d or -e options.
1690
1691 If the argument is a snapshot name, the specified snapshot is created.
1692 If the argument is a file system or volume name, a snapshot with the
1693 same name as the sent snapshot is created within the specified
1694 filesystem or volume. If neither the -d nor the -e option is
1695 specified, the target snapshot name is used exactly as
1696 provided.
1697
1698 The -d and -e options cause the file system name of the target snapshot
1699 to be determined by appending a portion of the sent snapshot's name to
1700 the specified target filesystem. If the -d option is specified, all
1701 but the first element of the sent snapshot's file system path (usually
1702 the pool name) is used and any required intermediate file systems
1703 within the specified one are created. If the -e option is specified,
1704 then only the last element of the sent snapshot's file system name
1705 (i.e. the name of the source file system itself) is used as the target
1706 file system name.
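
     For example, receiving a stream of a hypothetical poolA/fsA/fsB@snap
     with the -e option creates poolB/received/fsB@snap, rather than the
     poolB/received/fsA/fsB@snap that the -d option would create:

       # zfs send poolA/fsA/fsB@snap | \
           ssh host zfs receive -e poolB/received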
1707
1708 -F Force a rollback of the file system to the most recent snapshot
1709 before performing the receive operation. If receiving an
1710 incremental replication stream (for example, one generated by zfs
1711 send -R [-i|-I]), destroy snapshots and file systems that do not
1712 exist on the sending side.
1713
1714 -d Discard the first element of the sent snapshot's file system name,
1715 using the remaining elements to determine the name of the target
1716 file system for the new snapshot as described in the paragraph
1717 above.
1718
1719 -e Discard all but the last element of the sent snapshot's file system
1720 name, using that element to determine the name of the target file
1721 system for the new snapshot as described in the paragraph above.
1722
1723 -n Do not actually receive the stream. This can be useful in
1724 conjunction with the -v option to verify the name the receive
1725 operation would use.
1726
1727 -o origin=snapshot
1728 Forces the stream to be received as a clone of the given snapshot.
1729 If the stream is a full send stream, this will create the
1730 filesystem described by the stream as a clone of the specified
1731 snapshot. Which snapshot was specified will not affect the success
1732 or failure of the receive, as long as the snapshot does exist. If
1733 the stream is an incremental send stream, all the normal
1734 verification will be performed.
1735
1736 -u Do not mount the file system that is associated with the
1737 received stream.
1738
1739 -v Print verbose information about the stream and the time required to
1740 perform the receive operation.
1741
1742 -s If the receive is interrupted, save the partially received state,
1743 rather than deleting it. Interruption may be due to premature
1744 termination of the stream (e.g. due to network failure or failure
1745 of the remote system if the stream is being read over a network
1746 connection), a checksum error in the stream, termination of the zfs
1747 receive process, or unclean shutdown of the system.
1748
1749 The receive can be resumed with a stream generated by zfs send -t
1750 token, where the token is the value of the receive_resume_token
1751 property of the filesystem or volume which is received into.
1752
1753 To use this flag, the storage pool must have the extensible_dataset
1754 feature enabled. See zpool-features(5) for details on ZFS feature
1755 flags.
1756
1757 zfs receive -A filesystem|volume
1758 Abort an interrupted zfs receive -s, deleting its saved partially
1759 received state.
1760
1761 zfs allow filesystem|volume
1762 Displays permissions that have been delegated on the specified
1763 filesystem or volume. See the other forms of zfs allow for more
1764 information.
1765
1766 zfs allow [-dglu] user|group[,user|group]...
1767 perm|@setname[,perm|@setname]... filesystem|volume
1768 zfs allow [-dl] -e|everyone perm|@setname[,perm|@setname]...
1769 filesystem|volume
1774 Delegates ZFS administration permission for the file systems to non-
1775 privileged users.
1776
1777 -d Allow only for the descendent file systems.
1778
1779 -e|everyone
1780 Specifies that the permissions be delegated to everyone.
1781
1782 -g group[,group]...
1783 Explicitly specify that permissions are delegated to the group.
1784
1785 -l Allow "locally" only for the specified file system.
1786
1787 -u user[,user]...
1788 Explicitly specify that permissions are delegated to the user.
1789
1790 user|group[,user|group]...
1791 Specifies to whom the permissions are delegated. Multiple entities
1792 can be specified as a comma-separated list. If neither of the -gu
1793 options are specified, then the argument is interpreted
1794 preferentially as the keyword everyone, then as a user name, and
1795 lastly as a group name. To specify a user or group named
1796 "everyone", use the -g or -u options. To specify a group with the
1797 same name as a user, use the -g option.
1798
1799 perm|@setname[,perm|@setname]...
1800 The permissions to delegate. Multiple permissions may be specified
1801 as a comma-separated list. Permission names are the same as ZFS
1802 subcommand and property names. See the property list below.
1803 Property set names, which begin with @, may be specified. See the
1804 -s form below for details.
1805
1806 If neither of the -dl options are specified, or both are, then the
1807 permissions are allowed for the file system or volume, and all of its
1808 descendents.
1809
1810 Permissions are generally the ability to use a ZFS subcommand or change
1811 a ZFS property. The following permissions are available:
1812
1813 NAME TYPE NOTES
1814 allow subcommand Must also have the permission that is
1815 being allowed
1816 clone subcommand Must also have the 'create' ability and
1817 'mount' ability in the origin file system
1818 create subcommand Must also have the 'mount' ability
1819 destroy subcommand Must also have the 'mount' ability
1820 diff subcommand Allows lookup of paths within a dataset
1821 given an object number, and the ability
1822 to create snapshots necessary to
1823 'zfs diff'.
1824 mount subcommand Allows mount/umount of ZFS datasets
1825 promote subcommand Must also have the 'mount' and 'promote'
1826 ability in the origin file system
1827 receive subcommand Must also have the 'mount' and 'create'
1828 ability
1829 rename subcommand Must also have the 'mount' and 'create'
1830 ability in the new parent
1831 rollback subcommand Must also have the 'mount' ability
1832 send subcommand
1833 share subcommand Allows sharing file systems over NFS
1834 or SMB protocols
1835 snapshot subcommand Must also have the 'mount' ability
1836
1837 groupquota other Allows accessing any groupquota@...
1838 property
1839 groupused other Allows reading any groupused@... property
1840 userprop other Allows changing any user property
1841 userquota other Allows accessing any userquota@...
1842 property
1843 userused other Allows reading any userused@... property
1844
1845 aclinherit property
1846 aclmode property
1847 atime property
1848 canmount property
1849 casesensitivity property
1850 checksum property
1851 compression property
1852 copies property
1853 devices property
1854 exec property
1855 filesystem_limit property
1856 mountpoint property
1857 nbmand property
1858 normalization property
1859 primarycache property
1860 quota property
1861 readonly property
1862 recordsize property
1863 refquota property
1864 refreservation property
1865 reservation property
1866 secondarycache property
1867 setuid property
1868 sharenfs property
1869 sharesmb property
1870 snapdir property
1871 snapshot_limit property
1872 utf8only property
1873 version property
1874 volblocksize property
1875 volsize property
1876 vscan property
1877 xattr property
1878 zoned property
1879
1880 zfs allow -c perm|@setname[,perm|@setname]... filesystem|volume
1881 Sets "create time" permissions. These permissions are granted
1882 (locally) to the creator of any newly-created descendent file system.
1883
1884 zfs allow -s @setname perm|@setname[,perm|@setname]... filesystem|volume
1885 Defines or adds permissions to a permission set. The set can be used
1886 by other zfs allow commands for the specified file system and its
1887 descendents. Sets are evaluated dynamically, so changes to a set are
1888 immediately reflected. Permission sets follow the same naming
1889 restrictions as ZFS file systems, but the name must begin with @, and
1890 can be no more than 64 characters long.
1891
1892 zfs unallow [-dglru] user|group[,user|group]...
1893 [perm|@setname[,perm|@setname]...] filesystem|volume
1894 zfs unallow [-dlr] -e|everyone [perm|@setname[,perm|@setname]...]
1895 filesystem|volume
1896 zfs unallow [-r] -c [perm|@setname[,perm|@setname]...]
1897 filesystem|volume
1902 Removes permissions that were granted with the zfs allow command. No
1903 permissions are explicitly denied, so other permissions granted are
1904 still in effect (for example, if the permission is also granted by an
1905 ancestor). If no permissions are specified, then all permissions for
1906 the specified user, group, or everyone are removed. Specifying
1907 everyone (or using the -e option) only removes the permissions that
1908 were granted to everyone, not all permissions for every user and group.
1909 See the zfs allow command for a description of the -ldugec options.
1910
1911 -r Recursively remove the permissions from this file system and all
1912 descendents.
1913
1914 zfs unallow [-r] -s @setname [perm|@setname[,perm|@setname]...]
1915 filesystem|volume
1916 Removes permissions from a permission set. If no permissions are
1917 specified, then all permissions are removed, thus removing the set
1918 entirely.
1919
1920 zfs hold [-r] tag snapshot...
1921 Adds a single reference, named with the tag argument, to the specified
1922 snapshot or snapshots. Each snapshot has its own tag namespace, and
1923 tags must be unique within that space.
1924
1925 If a hold exists on a snapshot, attempts to destroy that snapshot by
1926 using the zfs destroy command return EBUSY.
1927
1928 -r Specifies that a hold with the given tag is applied recursively to
1929 the snapshots of all descendent file systems.
1930
1931 zfs holds [-r] snapshot...
1932 Lists all existing user references for the given snapshot or snapshots.
1933
1934 -r Lists the holds that are set on the named descendent snapshots, in
1935 addition to listing the holds on the named snapshot.
1936
1937 zfs release [-r] tag snapshot...
1938 Removes a single reference, named with the tag argument, from the
1939 specified snapshot or snapshots. The tag must already exist for each
1940 snapshot. If a hold exists on a snapshot, attempts to destroy that
1941 snapshot by using the zfs destroy command return EBUSY.
1942
1943 -r Recursively releases a hold with the given tag on the snapshots of
1944 all descendent file systems.
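
     For example, the following commands place, list, and then release a
     hold named keep on a hypothetical snapshot, recursively for all
     descendent file systems:

       # zfs hold -r keep pool/home@yesterday
       # zfs holds -r pool/home@yesterday
       # zfs release -r keep pool/home@yesterday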
1945
1946 zfs diff [-FHt] snapshot snapshot|filesystem
1947 Display the difference between a snapshot of a given filesystem and
1948 another snapshot of that filesystem from a later time or the current
1949 contents of the filesystem. The first column is a character indicating
1950 the type of change, the other columns indicate pathname, new pathname
1951 (in case of rename), change in link count, and optionally file type
1952 and/or change time. The types of change are:
1953
1954 - The path has been removed
1955 + The path has been created
1956 M The path has been modified
1957 R The path has been renamed
1958
1959 -F Display an indication of the type of file, in a manner similar to
1960 the -F option of ls(1).
1961
1962 B Block device
1963 C Character device
1964 / Directory
1965 > Door
1966 | Named pipe
1967 @ Symbolic link
1968 P Event port
1969 = Socket
1970 F Regular file
1971
1972 -H Give more parsable tab-separated output, without header lines and
1973 without arrows.
1974
1975 -t Display the path's inode change time as the first column of output.
1976
1977 zfs program [-n] [-t timeout] [-m memory_limit] pool script [arg1 ...]
1978 Executes script as a ZFS channel program on pool. The ZFS channel
1979 program interface allows ZFS administrative operations to be run
1980 programmatically via a Lua script. The entire script is executed
1981 atomically, with no other administrative operations taking effect
1982 concurrently. A library of ZFS calls is made available to channel
1983 program scripts. Channel programs may only be run with root
1984 privileges.
1985
1986 For full documentation of the ZFS channel program interface, see the
1987 zfs-program(1M) manual page.
1988
1989 -n
1990 Executes a read-only channel program, which runs faster. The program
1991 cannot change on-disk state by calling functions from the zfs.sync
1992 submodule. The program can be used to gather information such as
1993 properties and to determine whether changes would succeed (zfs.check.*).
1994 Without this flag, all pending changes must be synced to disk before
1995 a channel program can complete.
1996
1997 -t timeout
1998 Execution time limit, in milliseconds. If a channel program executes
1999 for longer than the provided timeout, it will be stopped and an error
2000 will be returned. The default timeout is 1000 ms, and can be set to
2001 a maximum of 10000 ms.
2002
2003 -m memory_limit
2004 Memory limit, in bytes. If a channel program attempts to allocate
2005 more memory than the given limit, it will be stopped and an error
2006 returned. The default memory limit is 10 MB, and can be set to a
2007 maximum of 100 MB.
2008
2009 All remaining argument strings are passed directly to the channel
2010 program as arguments. See zfs-program(1M) for more information.
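
     For example, assuming a read-only channel program saved in a
     hypothetical file /tmp/props.zcp, the following command runs it
     against the pool tank with a 5 second timeout, passing one dataset
     name as an argument:

       # zfs program -n -t 5000 tank /tmp/props.zcp tank/home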
2011
2012 EXIT STATUS
2013 The zfs utility exits 0 on success, 1 if an error occurs, and 2 if
2014 invalid command line options were specified.
2015
2016 EXAMPLES
2017 Example 1 Creating a ZFS File System Hierarchy
2018 The following commands create a file system named pool/home and a file
2019 system named pool/home/bob. The mount point /export/home is set for
2020 the parent file system, and is automatically inherited by the child
2021 file system.
2022
2023 # zfs create pool/home
2024 # zfs set mountpoint=/export/home pool/home
2025 # zfs create pool/home/bob
2026
2027 Example 2 Creating a ZFS Snapshot
2028 The following command creates a snapshot named yesterday. This
2029 snapshot is mounted on demand in the .zfs/snapshot directory at the
2030 root of the pool/home/bob file system.
2031
2032 # zfs snapshot pool/home/bob@yesterday
2033
2034 Example 3 Creating and Destroying Multiple Snapshots
2035 The following command creates snapshots named yesterday of pool/home
2036 and all of its descendent file systems. Each snapshot is mounted on
2037 demand in the .zfs/snapshot directory at the root of its file system.
2038 The second command destroys the newly created snapshots.
2039
2040 # zfs snapshot -r pool/home@yesterday
2041 # zfs destroy -r pool/home@yesterday
2042
2043 Example 4 Disabling and Enabling File System Compression
2044 The following command disables the compression property for all file
2045 systems under pool/home. The next command explicitly enables
2046 compression for pool/home/anne.
2047
2048 # zfs set compression=off pool/home
2049 # zfs set compression=on pool/home/anne
2050
2051 Example 5 Listing ZFS Datasets
2052 The following command lists all active file systems and volumes in the
2053 system. Snapshots are displayed if the listsnaps property is on. The
2054 default is off. See zpool(1M) for more information on pool properties.
2055
2056 # zfs list
2057 NAME USED AVAIL REFER MOUNTPOINT
2058 pool 450K 457G 18K /pool
2059 pool/home 315K 457G 21K /export/home
2060 pool/home/anne 18K 457G 18K /export/home/anne
2061 pool/home/bob 276K 457G 276K /export/home/bob
2062
2063 Example 6 Setting a Quota on a ZFS File System
2064 The following command sets a quota of 50 Gbytes for pool/home/bob.
2065
2066 # zfs set quota=50G pool/home/bob
2067
2068 Example 7 Listing ZFS Properties
2069 The following command lists all properties for pool/home/bob.
2070
2071 # zfs get all pool/home/bob
2072 NAME PROPERTY VALUE SOURCE
2073 pool/home/bob type filesystem -
2074 pool/home/bob creation Tue Jul 21 15:53 2009 -
2075 pool/home/bob used 21K -
2076 pool/home/bob available 20.0G -
2077 pool/home/bob referenced 21K -
2078 pool/home/bob compressratio 1.00x -
2079 pool/home/bob mounted yes -
2080 pool/home/bob quota 20G local
2081 pool/home/bob reservation none default
2082 pool/home/bob recordsize 128K default
2083 pool/home/bob mountpoint /pool/home/bob default
2084 pool/home/bob sharenfs off default
2085 pool/home/bob checksum on default
2086 pool/home/bob compression on local
2087 pool/home/bob atime on default
2088 pool/home/bob devices on default
2089 pool/home/bob exec on default
2090 pool/home/bob setuid on default
2091 pool/home/bob readonly off default
2092 pool/home/bob zoned off default
2093 pool/home/bob snapdir hidden default
2094 pool/home/bob aclmode discard default
2095 pool/home/bob aclinherit restricted default
2096 pool/home/bob canmount on default
2097 pool/home/bob xattr on default
2098 pool/home/bob copies 1 default
2099 pool/home/bob version 4 -
2100 pool/home/bob utf8only off -
2101 pool/home/bob normalization none -
2102 pool/home/bob casesensitivity sensitive -
2103 pool/home/bob vscan off default
2104 pool/home/bob nbmand off default
2105 pool/home/bob sharesmb off default
2106 pool/home/bob refquota none default
2107 pool/home/bob refreservation none default
2108 pool/home/bob primarycache all default
2109 pool/home/bob secondarycache all default
2110 pool/home/bob usedbysnapshots 0 -
2111 pool/home/bob usedbydataset 21K -
2112 pool/home/bob usedbychildren 0 -
2113 pool/home/bob usedbyrefreservation 0 -
2114
2115 The following command gets a single property value.
2116
2117 # zfs get -H -o value compression pool/home/bob
2118 on

2119 The following command lists all properties with local settings for
2120 pool/home/bob.
2121
2122 # zfs get -r -s local -o name,property,value all pool/home/bob
2123 NAME PROPERTY VALUE
2124 pool/home/bob quota 20G
2125 pool/home/bob compression on
2126
2127 Example 8 Rolling Back a ZFS File System
2128 The following command reverts the contents of pool/home/anne to the
2129 snapshot named yesterday, deleting all intermediate snapshots.
2130
2131 # zfs rollback -r pool/home/anne@yesterday
2132
2133 Example 9 Creating a ZFS Clone
2134 The following command creates a writable file system whose initial
2135 contents are the same as pool/home/bob@yesterday.
2136
2137 # zfs clone pool/home/bob@yesterday pool/clone
2138
2139 Example 10 Promoting a ZFS Clone
2140 The following commands illustrate how to test out changes to a file
2141 system, and then replace the original file system with the changed one,
2142 using clones, clone promotion, and renaming:
2143
2144 # zfs create pool/project/production
2145 populate /pool/project/production with data
2146 # zfs snapshot pool/project/production@today
2147 # zfs clone pool/project/production@today pool/project/beta
2148 make changes to /pool/project/beta and test them
2149 # zfs promote pool/project/beta
2150 # zfs rename pool/project/production pool/project/legacy
2151 # zfs rename pool/project/beta pool/project/production
2152 once the legacy version is no longer needed, it can be destroyed
2153 # zfs destroy pool/project/legacy
2154
2155 Example 11 Inheriting ZFS Properties
2156 The following command causes pool/home/bob and pool/home/anne to
2157 inherit the checksum property from their parent.
2158
2159 # zfs inherit checksum pool/home/bob pool/home/anne
2160
2161 Example 12 Remotely Replicating ZFS Data
2162 The following commands send a full stream and then an incremental
2163 stream to a remote machine, restoring them into poolB/received/fs@a and
2164 poolB/received/fs@b, respectively. poolB must contain the file system
2165 poolB/received, and must not initially contain poolB/received/fs.
2166
2167 # zfs send pool/fs@a | \
2168 ssh host zfs receive poolB/received/fs@a
2169 # zfs send -i a pool/fs@b | \
2170 ssh host zfs receive poolB/received/fs
2171
2172 Example 13 Using the zfs receive -d Option
2173 The following command sends a full stream of poolA/fsA/fsB@snap to a
2174 remote machine, receiving it into poolB/received/fsA/fsB@snap. The
2175 fsA/fsB@snap portion of the received snapshot's name is determined from
2176 the name of the sent snapshot. poolB must contain the file system
2177 poolB/received. If poolB/received/fsA does not exist, it is created as
2178 an empty file system.
2179
2180 # zfs send poolA/fsA/fsB@snap | \
2181 ssh host zfs receive -d poolB/received
2182
2183 Example 14 Setting User Properties
2184 The following example sets the user-defined com.example:department
2185 property for a dataset.
2186
2187 # zfs set com.example:department=12345 tank/accounting
2188
2189 Example 15 Performing a Rolling Snapshot
2190 The following example shows how to maintain a history of snapshots with
2191 a consistent naming scheme. To keep a week's worth of snapshots, the
2192 user destroys the oldest snapshot, renames the remaining snapshots, and
2193 then creates a new snapshot, as follows:
2194
2195 # zfs destroy -r pool/users@7daysago
2196 # zfs rename -r pool/users@6daysago @7daysago
2197 # zfs rename -r pool/users@5daysago @6daysago
2198 # zfs rename -r pool/users@4daysago @5daysago
2199 # zfs rename -r pool/users@3daysago @4daysago
2200 # zfs rename -r pool/users@2daysago @3daysago
2201 # zfs rename -r pool/users@yesterday @2daysago
2202 # zfs rename -r pool/users@today @yesterday
2203 # zfs snapshot -r pool/users@today
2204
2205 Example 16 Setting sharenfs Property Options on a ZFS File System
2206 The following commands show how to set sharenfs property options to
2207 enable rw access for a set of IP addresses and to enable root access
2208 for system neo on the tank/home file system.
2209
2210 # zfs set sharenfs='rw=@123.123.0.0/16,root=neo' tank/home
2211
2212 If you are using DNS for host name resolution, specify the fully
2213 qualified hostname.
2214
2215 Example 17 Delegating ZFS Administration Permissions on a ZFS Dataset
2216 The following example shows how to set permissions so that user cindys
2217 can create, destroy, mount, and take snapshots on tank/cindys. The
2218 permissions on tank/cindys are also displayed.
2219
2220 # zfs allow cindys create,destroy,mount,snapshot tank/cindys
2221 # zfs allow tank/cindys
2222 ---- Permissions on tank/cindys --------------------------------------
2223 Local+Descendent permissions:
2224 user cindys create,destroy,mount,snapshot
2225
2226 Because the tank/cindys mount point permission is set to 755 by
2227 default, user cindys will be unable to mount file systems under
2228 tank/cindys. Add an ACE similar to the following syntax to provide
2229 mount point access:
2230
2231 # chmod A+user:cindys:add_subdirectory:allow /tank/cindys
2232
2233 Example 18 Delegating Create Time Permissions on a ZFS Dataset
2234 The following example shows how to grant anyone in the group staff
2235 permission to create file systems in tank/users. This syntax also allows staff
2236 members to destroy their own file systems, but not destroy anyone
2237 else's file system. The permissions on tank/users are also displayed.
2238
2239 # zfs allow staff create,mount tank/users
2240 # zfs allow -c destroy tank/users
2241 # zfs allow tank/users
2242 ---- Permissions on tank/users ---------------------------------------
2243 Permission sets:
2244 destroy
2245 Local+Descendent permissions:
2246 group staff create,mount
2247
2248 Example 19 Defining and Granting a Permission Set on a ZFS Dataset
2249 The following example shows how to define and grant a permission set on
2250 the tank/users file system. The permissions on tank/users are also
2251 displayed.
2252
2253 # zfs allow -s @pset create,destroy,snapshot,mount tank/users
2254 # zfs allow staff @pset tank/users
2255 # zfs allow tank/users
2256 ---- Permissions on tank/users ---------------------------------------
2257 Permission sets:
2258 @pset create,destroy,mount,snapshot
2259 Local+Descendent permissions:
2260 group staff @pset
2261
2262 Example 20 Delegating Property Permissions on a ZFS Dataset
2263 The following example shows how to grant the ability to set quotas and
2264 reservations on the users/home file system. The permissions on
2265 users/home are also displayed.
2266
2267 # zfs allow cindys quota,reservation users/home
2268 # zfs allow users/home
2269 ---- Permissions on users/home ---------------------------------------
2270 Local+Descendent permissions:
2271 user cindys quota,reservation
2272 cindys% zfs set quota=10G users/home/marks
2273 cindys% zfs get quota users/home/marks
2274 NAME PROPERTY VALUE SOURCE
2275 users/home/marks quota 10G local
2276
2277 Example 21 Removing ZFS Delegated Permissions on a ZFS Dataset
2278 The following example shows how to remove the snapshot permission from
2279 the staff group on the tank/users file system. The permissions on
2280 tank/users are also displayed.
2281
2282 # zfs unallow staff snapshot tank/users
2283 # zfs allow tank/users
2284 ---- Permissions on tank/users ---------------------------------------
2285 Permission sets:
2286 @pset create,destroy,mount,snapshot
2287 Local+Descendent permissions:
2288 group staff @pset
2289
2290 Example 22 Showing the differences between a snapshot and a ZFS Dataset
2291 The following example shows how to see what has changed between a prior
2292 snapshot of a ZFS dataset and its current state. The -F option is used
2293 to indicate type information for the files affected.
2294
2295 # zfs diff -F tank/test@before tank/test
2296 M / /tank/test/
2297 M F /tank/test/linked (+1)
2298 R F /tank/test/oldname -> /tank/test/newname
2299 - F /tank/test/deleted
2300 + F /tank/test/created
2301 M F /tank/test/modified
2302
2303 INTERFACE STABILITY
2304 Committed.
2305
2306 SEE ALSO
2307 gzip(1), ssh(1), mount(1M), share(1M), sharemgr(1M), unshare(1M),
2308 zonecfg(1M), zpool(1M), chmod(2), stat(2), write(2), fsync(3C),
2309 dfstab(4), acl(5), attributes(5)
2310
2311 illumos December 6, 2017 illumos