zfs — configures ZFS file systems
zfs create [-p] [-o property=value]... filesystem
zfs create [-ps] [-b blocksize] [-o property=value]... -V size volume
zfs destroy [-Rfnprv] filesystem|volume
zfs destroy [-Rdnprv] filesystem|volume@snap[%snap[,snap[%snap]]]...
zfs destroy filesystem|volume#bookmark
zfs snapshot [-r] [-o property=value]... filesystem@snapname|volume@snapname...
zfs rollback [-Rfr] snapshot
zfs clone [-p] [-o property=value]... snapshot filesystem|volume
zfs promote clone-filesystem
zfs rename [-f] filesystem|volume|snapshot filesystem|volume|snapshot
zfs rename [-fp] filesystem|volume filesystem|volume
zfs rename -r snapshot snapshot
zfs list [-r|-d depth] [-Hp] [-o property[,property]...] [-s property]... [-S property]... [-t type[,type]...] [filesystem|volume|snapshot]...
zfs remap filesystem|volume
zfs set property=value [property=value]... filesystem|volume|snapshot...
zfs get [-r|-d depth] [-Hp] [-o field[,field]...] [-s source[,source]...] [-t type[,type]...] all | property[,property]... filesystem|volume|snapshot|bookmark...
zfs inherit [-rS] property filesystem|volume|snapshot...
zfs upgrade [-r] [-V version] -a | filesystem
zfs userspace [-Hinp] [-o field[,field]...] [-s field]... [-S field]... [-t type[,type]...] filesystem|snapshot
zfs groupspace [-Hinp] [-o field[,field]...] [-s field]... [-S field]... [-t type[,type]...] filesystem|snapshot
zfs mount [-Ov] [-o options] -a | filesystem
zfs unmount [-f] -a | filesystem|mountpoint
zfs share -a | filesystem
zfs unshare -a | filesystem|mountpoint
zfs bookmark snapshot bookmark
zfs send [-DLPRcenpv] [[-I|-i] snapshot] snapshot
zfs send [-Lce] [-i snapshot|bookmark] filesystem|volume|snapshot
zfs send [-Penv] -t receive_resume_token
zfs receive [-Fnsuv] [-o origin=snapshot] filesystem|volume|snapshot
zfs receive [-Fnsuv] [-d|-e] [-o origin=snapshot] filesystem
zfs receive -A filesystem|volume
zfs allow filesystem|volume
zfs allow [-dglu] user|group[,user|group]... perm|@setname[,perm|@setname]... filesystem|volume
zfs allow [-dl] -e|everyone perm|@setname[,perm|@setname]... filesystem|volume
zfs allow -c perm|@setname[,perm|@setname]... filesystem|volume
zfs allow -s @setname perm|@setname[,perm|@setname]... filesystem|volume
zfs unallow [-dglru] user|group[,user|group]... [perm|@setname[,perm|@setname]...] filesystem|volume
zfs unallow [-dlr] -e|everyone [perm|@setname[,perm|@setname]...] filesystem|volume
zfs unallow [-r] -c [perm|@setname[,perm|@setname]...] filesystem|volume
zfs unallow [-r] -s @setname [perm|@setname[,perm|@setname]...] filesystem|volume
zfs hold [-r] tag snapshot...
zfs holds [-r] snapshot...
zfs release [-r] tag snapshot...
zfs diff [-FHt] snapshot snapshot|filesystem
zfs program [-n] [-t timeout] [-m memory_limit] pool script [arg1 ...]
The
zfs command configures ZFS datasets within a
ZFS storage pool, as described in
zpool(1M). A
dataset is identified by a unique path within the ZFS namespace. For example:
pool/{filesystem,volume,snapshot}
where the maximum length of a dataset name is
MAXNAMELEN
(256 bytes).
A dataset can be one of the following:
-
-
- file system
- A ZFS dataset of type
filesystem can be mounted within the standard
system namespace and behaves like other file systems. While ZFS file
systems are designed to be POSIX compliant, known issues exist that
prevent compliance in some cases. Applications that depend on standards
conformance might fail due to non-standard behavior when checking file
system free space.
-
-
- volume
- A logical volume exported as a raw or block device. This
type of dataset should only be used under special circumstances. File
systems are typically used in most environments.
-
-
- snapshot
- A read-only version of a file system or volume at a given
point in time. It is specified as
filesystem@name
or
volume@name.
A ZFS storage pool is a logical collection of devices that provide space for
datasets. A storage pool is also the root of the ZFS file system hierarchy.
The root of the pool can be accessed as a file system, such as mounting and
unmounting, taking snapshots, and setting properties. The physical storage
characteristics, however, are managed by the
zpool(1M) command.
See
zpool(1M) for more information on creating and
administering pools.
A snapshot is a read-only copy of a file system or volume. Snapshots can be
created extremely quickly, and initially consume no additional space within
the pool. As data within the active dataset changes, the snapshot consumes
more data than would otherwise be shared with the active dataset.
Snapshots can have arbitrary names. Snapshots of volumes can be cloned or rolled
back, but cannot be accessed independently.
File system snapshots can be accessed under the
.zfs/snapshot directory in the root of the file
system. Snapshots are automatically mounted on demand and may be unmounted at
regular intervals. The visibility of the
.zfs
directory can be controlled by the
snapdir
property.
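For example, assuming a hypothetical pool/home file system mounted at /pool/home, its snapshots could be made visible and browsed as follows:
# zfs set snapdir=visible pool/home
# ls /pool/home/.zfs/snapshot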
A clone is a writable volume or file system whose initial contents are the same
as another dataset. As with snapshots, creating a clone is nearly
instantaneous, and initially consumes no additional space.
Clones can only be created from a snapshot. When a snapshot is cloned, it
creates an implicit dependency between the parent and child. Even though the
clone is created somewhere else in the dataset hierarchy, the original
snapshot cannot be destroyed as long as a clone exists. The
origin property exposes this dependency, and the
destroy command lists any such dependencies, if
they exist.
The clone parent-child dependency relationship can be reversed by using the
promote subcommand. This causes the
“origin” file system to become a clone of the specified file
system, which makes it possible to destroy the file system that the clone was
created from.
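For example, a typical clone-and-promote sequence (the dataset names here are purely illustrative) might look like:
# zfs snapshot pool/project/production@today
# zfs clone pool/project/production@today pool/project/beta
# zfs promote pool/project/beta
After the promotion, pool/project/production has become a clone of pool/project/beta and can be destroyed if it is no longer needed.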
Creating a ZFS file system is a simple operation, so systems are likely to
have many file systems. To cope with this, ZFS automatically
manages mounting and unmounting file systems without the need to edit the
/etc/vfstab file. All automatically managed file
systems are mounted by ZFS at boot time.
By default, file systems are mounted under
/path,
where
path is the name of the file system in
the ZFS namespace. Directories are created and destroyed as needed.
A file system can also have a mount point set in the
mountpoint property. This directory is created as
needed, and ZFS automatically mounts the file system when the
zfs mount
-a command is invoked (without editing
/etc/vfstab). The
mountpoint property can be inherited, so if
pool/home has a mount point of
/export/stuff, then
pool/home/user automatically inherits a mount
point of
/export/stuff/user.
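For example, using the hypothetical datasets above, the mount point can be set on the parent and the inherited values inspected with:
# zfs set mountpoint=/export/stuff pool/home
# zfs get -r mountpoint pool/home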
A file system
mountpoint property of
none prevents the file system from being mounted.
If needed, ZFS file systems can also be managed with traditional tools
(
mount,
umount,
/etc/vfstab). If a file system's mount point is
set to
legacy, ZFS makes no attempt to manage the
file system, and the administrator is responsible for mounting and unmounting
the file system.
A ZFS file system can be added to a non-global zone by using the
zonecfg add
fs subcommand. A ZFS file system that is added to
a non-global zone must have its
mountpoint
property set to
legacy.
The physical properties of an added file system are controlled by the global
administrator. However, the zone administrator can create, modify, or destroy
files within the added file system, depending on how the file system is
mounted.
A dataset can also be delegated to a non-global zone by using the
zonecfg add
dataset subcommand. You cannot delegate a dataset
to one zone and the children of the same dataset to another zone. The zone
administrator can change properties of the dataset or any of its children.
However, the
quota,
filesystem_limit and
snapshot_limit properties of the delegated
dataset can be modified only by the global administrator.
A ZFS volume can be added as a device to a non-global zone by using the
zonecfg add
device subcommand. However, its physical
properties can be modified only by the global administrator.
For more information about
zonecfg syntax, see
zonecfg(1M).
After a dataset is delegated to a non-global zone, the
zoned property is automatically set. A zoned file
system cannot be mounted in the global zone, since the zone administrator
might have to set the mount point to an unacceptable value.
The global administrator can forcibly clear the
zoned property, though this should be done with
extreme care. The global administrator should verify that all the mount points
are acceptable before clearing the property.
Properties are divided into two types, native properties and user-defined (or
“user”) properties. Native properties either export internal
statistics or control ZFS behavior. In addition, native properties are either
editable or read-only. User properties have no effect on ZFS behavior, but you
can use them to annotate datasets in a way that is meaningful in your
environment. For more information about user properties, see the
User Properties section,
below.
Every dataset has a set of properties that export statistics about the dataset
as well as control various behaviors. Properties are inherited from the parent
unless overridden by the child. Some properties apply only to certain types of
datasets (file systems, volumes, or snapshots).
The values of numeric properties can be specified using human-readable suffixes
(for example,
k,
KB,
M,
Gb, and so forth,
up to
Z for zettabyte). The following are all
valid (and equal) specifications:
1536M, 1.5g, 1.50GB.
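For example, either of the following commands (using a hypothetical pool/home/bob file system) sets the same 1.5 Gbyte quota:
# zfs set quota=1536M pool/home/bob
# zfs set quota=1.5G pool/home/bob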
The values of non-numeric properties are case sensitive and must be lowercase,
except for
mountpoint,
sharenfs, and
sharesmb.
The following native properties consist of read-only statistics about the
dataset. These properties can be neither set, nor inherited. Native properties
apply to all dataset types unless otherwise noted.
-
-
- available
- The amount of space available to the dataset and all its
children, assuming that there is no other activity in the pool. Because
space is shared within a pool, availability can be limited by any number
of factors, including physical pool size, quotas, reservations, or other
datasets within the pool.
This property can also be referred to by its shortened column name,
avail.
-
-
- compressratio
- For non-snapshots, the compression ratio achieved for the
used space of this dataset, expressed as a
multiplier. The used property includes
descendant datasets, and, for clones, does not include the space shared
with the origin snapshot. For snapshots, the
compressratio is the same as the
refcompressratio property. Compression can be
turned on by running: zfs
set
compression=on
dataset. The default value is
off.
-
-
- creation
- The time this dataset was created.
-
-
- clones
- For snapshots, this property is a comma-separated list of
filesystems or volumes which are clones of this snapshot. The clones'
origin property is this snapshot. If the
clones property is not empty, then this
snapshot can not be destroyed (even with the
-r or -f
options).
-
-
- defer_destroy
- This property is on if the
snapshot has been marked for deferred destroy by using the
zfs destroy
-d command. Otherwise, the property is
off.
-
-
- filesystem_count
- The total number of filesystems and volumes that exist
under this location in the dataset tree. This value is only available when
a filesystem_limit has been set somewhere in
the tree under which the dataset resides.
-
-
- logicalreferenced
- The amount of space that is “logically”
accessible by this dataset. See the
referenced property. The logical space
ignores the effect of the compression and
copies properties, giving a quantity closer
to the amount of data that applications see. However, it does include
space consumed by metadata.
This property can also be referred to by its shortened column name,
lrefer.
-
-
- logicalused
- The amount of space that is “logically”
consumed by this dataset and all its descendents. See the
used property. The logical space ignores the
effect of the compression and
copies properties, giving a quantity closer
to the amount of data that applications see. However, it does include
space consumed by metadata.
This property can also be referred to by its shortened column name,
lused.
-
-
- mounted
- For file systems, indicates whether the file system is
currently mounted. This property can be either
yes or no.
-
-
- origin
- For cloned file systems or volumes, the snapshot from which
the clone was created. See also the clones
property.
-
-
- receive_resume_token
- For filesystems or volumes which have saved
partially-completed state from zfs receive
-s, this opaque token can be provided to zfs
send -t to resume and complete the zfs
receive.
-
-
- referenced
- The amount of data that is accessible by this dataset,
which may or may not be shared with other datasets in the pool. When a
snapshot or clone is created, it initially references the same amount of
space as the file system or snapshot it was created from, since its
contents are identical.
This property can also be referred to by its shortened column name,
refer.
-
-
- refcompressratio
- The compression ratio achieved for the
referenced space of this dataset, expressed
as a multiplier. See also the compressratio
property.
-
-
- snapshot_count
- The total number of snapshots that exist under this
location in the dataset tree. This value is only available when a
snapshot_limit has been set somewhere in the
tree under which the dataset resides.
-
-
- type
- The type of dataset:
filesystem,
volume, or
snapshot.
-
-
- used
- The amount of space consumed by this dataset and all its
descendents. This is the value that is checked against this dataset's
quota and reservation. The space used does not include this dataset's
reservation, but does take into account the reservations of any descendent
datasets. The amount of space that a dataset consumes from its parent, as
well as the amount of space that is freed if this dataset is recursively
destroyed, is the greater of its space used and its reservation.
The used space of a snapshot (see the
Snapshots section) is space
that is referenced exclusively by this snapshot. If this snapshot is
destroyed, the amount of used space will be
freed. Space that is shared by multiple snapshots isn't accounted for in
this metric. When a snapshot is destroyed, space that was previously
shared with this snapshot can become unique to snapshots adjacent to it,
thus changing the used space of those snapshots. The used space of the
latest snapshot can also be affected by changes in the file system. Note
that the used space of a snapshot is a subset
of the written space of the snapshot.
The amount of space used, available, or referenced does not take into
account pending changes. Pending changes are generally accounted for
within a few seconds. Committing a change to a disk using
fsync(3C) or
O_SYNC
does not necessarily guarantee
that the space usage information is updated immediately.
-
-
- usedby*
- The usedby* properties
decompose the used properties into the
various reasons that space is used. Specifically,
used =
usedbychildren +
usedbydataset +
usedbyrefreservation
+ usedbysnapshots.
These properties are only available for datasets created on
zpool “version 13” pools.
-
-
- usedbychildren
- The amount of space used by children of this dataset, which
would be freed if all the dataset's children were destroyed.
-
-
- usedbydataset
- The amount of space used by this dataset itself, which
would be freed if the dataset were destroyed (after first removing any
refreservation and destroying any necessary
snapshots or descendents).
-
-
- usedbyrefreservation
- The amount of space used by a
refreservation set on this dataset, which
would be freed if the refreservation was
removed.
-
-
- usedbysnapshots
- The amount of space consumed by snapshots of this dataset.
In particular, it is the amount of space that would be freed if all of
this dataset's snapshots were destroyed. Note that this is not simply the
sum of the snapshots' used properties because
space can be shared by multiple snapshots.
-
-
- userused@user
- The amount of space consumed by the specified user in this
dataset. Space is charged to the owner of each file, as displayed by
ls -l. The
amount of space charged is displayed by du
and ls -s. See
the zfs
userspace subcommand for more information.
Unprivileged users can access only their own space usage. The root user, or
a user who has been granted the userused
privilege with zfs
allow, can access everyone's usage.
The userused@...
properties are not displayed by zfs
get all. The
user's name must be appended after the @ symbol, using one of the
following forms:
- POSIX name (for
example, joe)
- POSIX numeric ID (for
example, 789)
- SID name (for example,
joe.smith@mydomain)
- SID numeric ID (for
example, S-1-123-456-789)
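For example, per-user space consumption for a hypothetical pool/home file system can be displayed with:
# zfs userspace pool/home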
-
-
- userrefs
- This property is set to the number of user holds on this
snapshot. User holds are set by using the zfs
hold command.
-
-
- groupused@group
- The amount of space consumed by the specified group in this
dataset. Space is charged to the group of each file, as displayed by
ls -l. See the
userused@user
property for more information.
Unprivileged users can only access their own groups' space usage. The root
user, or a user who has been granted the
groupused privilege with
zfs allow, can
access all groups' usage.
-
-
- volblocksize
- For volumes, specifies the block size of the volume. The
blocksize cannot be changed once the volume
has been written, so it should be set at volume creation time. The default
blocksize for volumes is 8 Kbytes. Any power
of 2 from 512 bytes to 128 Kbytes is valid.
This property can also be referred to by its shortened column name,
volblock.
-
-
- written
- The amount of space referenced
by this dataset, that was written since the previous snapshot (i.e. that
is not referenced by the previous snapshot).
-
-
- written@snapshot
- The amount of referenced space
written to this dataset since the specified snapshot. This is the space
that is referenced by this dataset but was not referenced by the specified
snapshot.
The snapshot may be specified as a short
snapshot name (just the part after the @), in
which case it will be interpreted as a snapshot in the same filesystem as
this dataset. The snapshot may be a full
snapshot name
(filesystem@snapshot),
which for clones may be a snapshot in the origin's filesystem (or the
origin of the origin's filesystem, etc.)
The following native properties can be used to change the behavior of a ZFS
dataset.
-
-
- aclinherit=discard|noallow|restricted|passthrough|passthrough-x
- Controls how ACEs are inherited when files and directories
are created.
-
-
- discard
- does not inherit any ACEs.
-
-
- noallow
- only inherits inheritable ACEs that specify
“deny” permissions.
-
-
- restricted
- default, removes the
write_acl and
write_owner permissions when the ACE is
inherited.
-
-
- passthrough
- inherits all inheritable ACEs without any
modifications.
-
-
- passthrough-x
- same meaning as
passthrough, except that the
owner@,
group@, and
everyone@ ACEs inherit the execute
permission only if the file creation mode also requests the execute
bit.
When the property value is set to passthrough,
files are created with a mode determined by the inheritable ACEs. If no
inheritable ACEs exist that affect the mode, then the mode is set in
accordance to the requested mode from the application.
-
-
- aclmode=discard|groupmask|passthrough|restricted
- Controls how an ACL is modified during
chmod(2) and how inherited ACEs are modified
by the file creation mode.
-
-
- discard
- default, deletes all ACEs except for those representing
the mode of the file or directory requested by
chmod(2).
-
-
- groupmask
- reduces permissions granted by all
ALLOW entries found in the ACL such that
they are no greater than the group permissions specified by the
mode.
-
-
- passthrough
- indicates that no changes are made to the ACL other
than creating or updating the necessary ACEs to represent the new mode
of the file or directory.
-
-
- restricted
- causes the chmod(2)
operation to return an error when used on any file or directory which
has a non-trivial ACL, with entries in addition to those that
represent the mode.
chmod(2) is required to change the set user ID,
set group ID, or sticky bit on a file or directory, as they do not have
equivalent ACEs. In order to use chmod(2) on
a file or directory with a non-trivial ACL when
aclmode is set to
restricted, you must first remove all ACEs
except for those that represent the current mode.
-
-
- atime=on|off
- Controls whether the access time for files is updated when
they are read. Turning this property off avoids producing write traffic
when reading files and can result in significant performance gains, though
it might confuse mailers and other similar utilities. The default value is
on.
-
-
- canmount=on|off|noauto
- If this property is set to
off, the file system cannot be mounted, and
is ignored by zfs
mount -a.
Setting this property to off is similar to
setting the mountpoint property to
none, except that the dataset still has a
normal mountpoint property, which can be
inherited. Setting this property to off
allows datasets to be used solely as a mechanism to inherit properties.
One example of setting
canmount=off is
to have two datasets with the same
mountpoint, so that the children of both
datasets appear in the same directory, but might have different inherited
characteristics.
When set to noauto, a dataset can only be
mounted and unmounted explicitly. The dataset is not mounted automatically
when the dataset is created or imported, nor is it mounted by the
zfs mount
-a command or unmounted by the
zfs unmount
-a command.
This property is not inherited.
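For example, the following commands (dataset names are illustrative) create an unmountable parent that exists only to supply a shared mount point and other inherited properties:
# zfs create -o canmount=off -o mountpoint=/export/home pool/home
# zfs create pool/home/user1
pool/home/user1 is then mounted at /export/home/user1, while pool/home itself is never mounted.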
-
-
- checksum=on|off|fletcher2|fletcher4|sha256|noparity|sha512|skein|edonr
- Controls the checksum used to verify data integrity. The
default value is on, which automatically
selects an appropriate algorithm (currently,
fletcher4, but this may change in future
releases). The value off disables integrity
checking on user data. The value noparity not
only disables integrity but also disables maintaining parity for user
data. This setting is used internally by a dump device residing on a
RAID-Z pool and should not be used by any other dataset. Disabling
checksums is NOT a recommended practice.
The sha512, skein,
and edonr checksum algorithms require
enabling the appropriate features on the pool. Please see
zpool-features(5) for more information on
these algorithms.
Changing this property affects only newly-written data.
Salted checksum algorithms (edonr,
skein) are currently not supported for any
filesystem on the boot pools.
-
-
- compression=on|off|gzip|gzip-N|lz4|lzjb|zle
- Controls the compression algorithm used for this dataset.
Setting compression to on indicates that the
current default compression algorithm should be used. The default balances
compression and decompression speed, with compression ratio, and is
expected to work well on a wide variety of workloads. Unlike all other
settings for this property, on does not
select a fixed compression type. As new compression algorithms are added
to ZFS and enabled on a pool, the default compression algorithm may
change. The current default compression algorithm is either
lzjb or, if the
lz4_compress feature is enabled,
lz4.
The lz4 compression algorithm is a
high-performance replacement for the lzjb
algorithm. It features significantly faster compression and decompression,
as well as a moderately higher compression ratio than
lzjb, but can only be used on pools with the
lz4_compress feature set to
enabled. See
zpool-features(5) for details on ZFS feature
flags and the lz4_compress feature.
The lzjb compression algorithm is optimized for
performance while providing decent data compression.
The gzip compression algorithm uses the same
compression as the gzip(1) command. You can
specify the gzip level by using the value
gzip-N, where
N is an integer from 1 (fastest) to 9 (best
compression ratio). Currently, gzip is
equivalent to gzip-6 (which is also the
default for gzip(1)).
The zle compression algorithm compresses runs
of zeros.
This property can also be referred to by its shortened column name
compress. Changing this property affects only
newly-written data.
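For example, assuming the lz4_compress feature is enabled on the pool, compression can be enabled on a hypothetical pool/data file system and its effect checked with:
# zfs set compression=lz4 pool/data
# zfs get compression,compressratio pool/data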
-
-
- copies=1|2|3
- Controls the number of copies of data stored for this
dataset. These copies are in addition to any redundancy provided by the
pool, for example, mirroring or RAID-Z. The copies are stored on different
disks, if possible. The space used by multiple copies is charged to the
associated file and dataset, changing the
used property and counting against quotas and
reservations.
Changing this property only affects newly-written data. Therefore, set this
property at file system creation time by using the
-o
copies=N
option.
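For example, a file system that stores two copies of every block (a hypothetical dataset name) could be created with:
# zfs create -o copies=2 pool/important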
-
-
- devices=on|off
- Controls whether device nodes can be opened on this file
system. The default value is on.
-
-
- exec=on|off
- Controls whether processes can be executed from within this
file system. The default value is on.
-
-
- filesystem_limit=count|none
- Limits the number of filesystems and volumes that can exist
under this point in the dataset tree. The limit is not enforced if the
user is allowed to change the limit. Setting a
filesystem_limit
on a descendent of a filesystem that already
has a filesystem_limit does not override the
ancestor's filesystem_limit, but rather
imposes an additional limit. This feature must be enabled to be used (see
zpool-features(5)).
-
-
- mountpoint=path|none|legacy
- Controls the mount point used for this file system. See the
Mount Points section for
more information on how this property is used.
When the mountpoint property is changed for a
file system, the file system and any children that inherit the mount point
are unmounted. If the new value is legacy,
then they remain unmounted. Otherwise, they are automatically remounted in
the new location if the property was previously
legacy or none,
or if they were mounted before the property was changed. In addition, any
shared file systems are unshared and shared in the new location.
-
-
- nbmand=on|off
- Controls whether the file system should be mounted with
nbmand (Non Blocking mandatory locks). This
is used for SMB clients. Changes to this property only take effect when
the file system is unmounted and remounted. See
mount(1M) for more information on
nbmand mounts.
-
-
- primarycache=all|none|metadata
- Controls what is cached in the primary cache (ARC). If this
property is set to all, then both user data
and metadata is cached. If this property is set to
none, then neither user data nor metadata is
cached. If this property is set to metadata,
then only metadata is cached. The default value is
all.
-
-
- quota=size|none
- Limits the amount of space a dataset and its descendents
can consume. This property enforces a hard limit on the amount of space
used. This includes all space consumed by descendents, including file
systems and snapshots. Setting a quota on a descendent of a dataset that
already has a quota does not override the ancestor's quota, but rather
imposes an additional limit.
Quotas cannot be set on volumes, as the volsize
property acts as an implicit quota.
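For example, the following commands (hypothetical datasets) limit pool/home and all of its descendents to 20 Gbytes, while limiting the space referenced by pool/home/bob itself to 10 Gbytes:
# zfs set quota=20G pool/home
# zfs set refquota=10G pool/home/bob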
-
-
- snapshot_limit=count|none
- Limits the number of snapshots that can be created on a
dataset and its descendents. Setting a
snapshot_limit on a descendent of a dataset
that already has a snapshot_limit does not
override the ancestor's snapshot_limit, but
rather imposes an additional limit. The limit is not enforced if the user
is allowed to change the limit. For example, this means that recursive
snapshots taken from the global zone are counted against each delegated
dataset within a zone. This feature must be enabled to be used (see
zpool-features(5)).
-
-
- userquota@user=size|none
- Limits the amount of space consumed by the specified user.
User space consumption is identified by the
userused@user
property.
Enforcement of user quotas may be delayed by several seconds. This delay
means that a user might exceed their quota before the system notices that
they are over quota and begins to refuse additional writes with the
EDQUOT
error message. See the
zfs userspace
subcommand for more information.
Unprivileged users can only access their own space usage. The root
user, or a user who has been granted the
userquota privilege with
zfs allow, can
get and set everyone's quota.
This property is not available on volumes, on file systems before version 4,
or on pools before version 15. The
userquota@...
properties are not displayed by zfs
get all. The
user's name must be appended after the @
symbol, using one of the following forms:
- POSIX name (for
example, joe)
- POSIX numeric ID (for
example, 789)
- SID name (for example,
joe.smith@mydomain)
- SID numeric ID (for
example, S-1-123-456-789)
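For example, a 50 Mbyte quota for a hypothetical user joe on pool/home could be set with:
# zfs set userquota@joe=50M pool/home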
-
-
- groupquota@group=size|none
- Limits the amount of space consumed by the specified group.
Group space consumption is identified by the
groupused@group
property.
Unprivileged users can access only their own groups' space usage. The root
user, or a user who has been granted the
groupquota privilege with
zfs allow, can
get and set all groups' quotas.
-
-
- readonly=on|off
- Controls whether this dataset can be modified. The default
value is off.
This property can also be referred to by its shortened column name,
rdonly.
-
-
- recordsize=size
- Specifies a suggested block size for files in the file
system. This property is designed solely for use with database workloads
that access files in fixed-size records. ZFS automatically tunes block
sizes according to internal algorithms optimized for typical access
patterns.
For databases that create very large files but access them in small random
chunks, these algorithms may be suboptimal. Specifying a
recordsize greater than or equal to the
record size of the database can result in significant performance gains.
Use of this property for general purpose file systems is strongly
discouraged, and may adversely affect performance.
The size specified must be a power of two greater than or equal to 512 and
less than or equal to 128 Kbytes. If the
large_blocks feature is enabled on the pool,
the size may be up to 1 Mbyte. See
zpool-features(5) for details on ZFS feature
flags.
Changing the file system's recordsize affects
only files created afterward; existing files are unaffected.
This property can also be referred to by its shortened column name,
recsize.
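For example, a file system holding a database that accesses files in 8 Kbyte records (a hypothetical pool/db dataset) could be tuned with:
# zfs set recordsize=8K pool/db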
-
-
- redundant_metadata=all|most
- Controls what types of metadata are stored redundantly. ZFS
stores an extra copy of metadata, so that if a single block is corrupted,
the amount of user data lost is limited. This extra copy is in addition to
any redundancy provided at the pool level (e.g. by mirroring or RAID-Z),
and is in addition to an extra copy specified by the
copies property (up to a total of 3 copies).
For example if the pool is mirrored,
copies=2, and
redundant_metadata=most,
then ZFS stores 6 copies of most metadata, and 4 copies of data and some
metadata.
When set to all, ZFS stores an extra copy of
all metadata. If a single on-disk block is corrupt, at worst a single
block of user data (which is recordsize bytes
long) can be lost.
When set to most, ZFS stores an extra copy of
most types of metadata. This can improve performance of random writes,
because less metadata must be written. In practice, at worst about 100
blocks (of recordsize bytes each) of user
data can be lost if a single on-disk block is corrupt. The exact behavior
of which metadata blocks are stored redundantly may change in future
releases.
The default value is all.
-
-
- refquota=size|none
- Limits the amount of space a dataset can consume. This
property enforces a hard limit on the amount of space used. This hard
limit does not include space used by descendents, including file systems
and snapshots.
-
-
- refreservation=size|none|auto
- The minimum amount of space guaranteed to a dataset, not
including its descendents. When the amount of space used is below this
value, the dataset is treated as if it were taking up the amount of space
specified by refreservation. The
refreservation reservation is accounted for
in the parent datasets' space used, and counts against the parent
datasets' quotas and reservations.
If refreservation is set, a snapshot is only
allowed if there is enough free pool space outside of this reservation to
accommodate the current number of “referenced” bytes in the
dataset.
If refreservation is set to
auto, a volume is made dense (or “not
sparse”).
refreservation=auto
is only supported on volumes. See volsize in
the Native
Properties section for more information about sparse volumes.
This property can also be referred to by its shortened column name,
refreserv.
-
-
- reservation=size|none|auto
- The minimum amount of space guaranteed to a dataset and its
descendants. When the amount of space used is below this value, the
dataset is treated as if it were taking up the amount of space specified
by its reservation. Reservations are accounted for in the parent datasets'
space used, and count against the parent datasets' quotas and
reservations.
See
refreservation=auto
above for a description of the behavior of setting
reservation to
auto. If the pool is at version 9 or later,
refreservation=auto
should be used instead.
This property can also be referred to by its shortened column name,
reserv.
-
-
- secondarycache=all|none|metadata
- Controls what is cached in the secondary cache (L2ARC). If
this property is set to all, then both user
data and metadata is cached. If this property is set to
none, then neither user data nor metadata is
cached. If this property is set to metadata,
then only metadata is cached. The default value is
all.
-
-
- setuid=on|off
- Controls whether the setuid bit is respected for the file
system. The default value is on.
-
-
- sharesmb=on|off|opts
- Controls whether the file system is shared via SMB, and
what options are to be used. A file system with the
sharesmb property set to
off is managed through traditional tools such
as sharemgr(1M). Otherwise, the file system
is automatically shared and unshared with the
zfs share and
zfs unshare
commands. If the property is set to on, the
sharemgr(1M) command is invoked with no
options. Otherwise, the sharemgr(1M) command
is invoked with options equivalent to the contents of this property.
Because SMB shares require a resource name, a unique resource name is
constructed from the dataset name. The constructed name is a copy of the
dataset name except that the characters in the dataset name, which would
be invalid in the resource name, are replaced with underscore
(_) characters. A pseudo property
“name” is also supported that allows you to replace the data
set name with a specified name. The specified name is then used to replace
the prefix dataset in the case of inheritance. For example, if the dataset
data/home/john is set to
name=john, then
data/home/john has a resource name of
john. If a child dataset
data/home/john/backups is shared, it has a
resource name of john_backups.
When SMB shares are created, the SMB share name appears as an entry in the
.zfs/shares directory. You can use the
ls or chmod
command to display the share-level ACLs on the entries in this directory.
When the sharesmb property is changed for a
dataset, the dataset and any children inheriting the property are
re-shared with the new options, only if the property was previously set to
off, or if they were shared before the
property was changed. If the new property is set to
off, the file systems are unshared.
-
-
- sharenfs=on|off|opts
- Controls whether the file system is shared via NFS, and
what options are to be used. A file system with a
sharenfs property of
off is managed through traditional tools such
as share(1M),
unshare(1M), and
dfstab(4). Otherwise, the file system is
automatically shared and unshared with the
zfs share and
zfs unshare
commands. If the property is set to on,
the share(1M) command is invoked with no options.
Otherwise, the share(1M) command is invoked
with options equivalent to the contents of this property.
When the sharenfs property is changed for a
dataset, the dataset and any children inheriting the property are
re-shared with the new options, only if the property was previously
off, or if they were shared before the
property was changed. If the new property is
off, the file systems are unshared.
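For example, NFS sharing with default options can be enabled on a hypothetical dataset with:
# zfs set sharenfs=on pool/export/home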
-
-
- logbias=latency|throughput
- Provide a hint to ZFS about handling of synchronous
requests in this dataset. If logbias is set
to latency (the default), ZFS will use pool
log devices (if configured) to handle the requests at low latency. If
logbias is set to
throughput, ZFS will not use configured pool
log devices. ZFS will instead optimize synchronous operations for global
pool throughput and efficient use of resources.
-
-
- snapdir=hidden|visible
- Controls whether the .zfs
directory is hidden or visible in the root of the file system as discussed
in the Snapshots section.
The default value is hidden.
-
-
- sync=standard|always|disabled
- Controls the behavior of synchronous requests (e.g. fsync,
O_DSYNC). standard is the POSIX specified
behavior of ensuring all synchronous requests are written to stable
storage and all devices are flushed to ensure data is not cached by device
controllers (this is the default). always
causes every file system transaction to be written and flushed before its
system call returns. This has a large performance penalty.
disabled disables synchronous requests. File
system transactions are only committed to stable storage periodically.
This option will give the highest performance. However, it is very
dangerous as ZFS would be ignoring the synchronous transaction demands of
applications such as databases or NFS. Administrators should only use this
option when the risks are understood.
-
-
- version=N|current
- The on-disk version of this file system, which is
independent of the pool version. This property can only be set to later
supported versions. See the zfs
upgrade command.
-
-
- volsize=size
- For volumes, specifies the logical size of the volume. By
default, creating a volume establishes a reservation of equal size. For
storage pools with a version number of 9 or higher, a
refreservation is set instead. Any changes to
volsize are reflected in an equivalent change
to the reservation (or refreservation). The
volsize can only be set to a multiple of
volblocksize, and cannot be zero.
The reservation is kept equal to the volume's logical size to prevent
unexpected behavior for consumers. Without the reservation, the volume
could run out of space, resulting in undefined behavior or data
corruption, depending on how the volume is used. These effects can also
occur when the volume size is changed while it is in use (particularly
when shrinking the size). Extreme care should be used when adjusting the
volume size.
Though not recommended, a “sparse volume” (also known as
“thin provisioning”) can be created by specifying the
-s option to the
zfs create
-V command, or by changing the reservation
after the volume has been created. A “sparse volume” is a
volume where the reservation is less than the size of the volume plus the
space required to store its metadata. Consequently, writes to a sparse
volume can fail with
ENOSPC
when the
pool is low on space. For a sparse volume, changes to
volsize are not reflected in the reservation.
A sparse volume can be made dense (or “not sparse”) by
setting the reservation to auto.
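For example, a 100 Gbyte sparse volume (the name is illustrative) can be created with:
# zfs create -s -V 100G pool/vols/vm1
Setting the reservation to auto later would make the volume dense again.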
-
-
- vscan=on|off
- Controls whether regular files should be scanned for
viruses when a file is opened and closed. In addition to enabling this
property, the virus scan service must also be enabled for virus scanning
to occur. The default value is off.
-
-
- xattr=on|off
- Controls whether extended attributes are enabled for this
file system. The default value is on.
-
-
- zoned=on|off
- Controls whether the dataset is managed from a non-global
zone. See the Zones section for
more information. The default value is
off.
The following three properties cannot be changed after the file system is
created, and therefore, should be set when the file system is created. If the
properties are not set with the
zfs
create or
zpool
create commands, these properties are inherited
from the parent dataset. If the parent dataset lacks these properties due to
having been created prior to these features being supported, the new file
system will have the default values for these properties.
-
-
- casesensitivity=sensitive|insensitive|mixed
- Indicates whether the file name matching algorithm used by
the file system should be case-sensitive, case-insensitive, or allow a
combination of both styles of matching. The default value for the
casesensitivity property is
sensitive. Traditionally,
UNIX and POSIX file systems have case-sensitive
file names.
The mixed value for the
casesensitivity property indicates that the
file system can support requests for both case-sensitive and
case-insensitive matching behavior. Currently, case-insensitive matching
behavior on a file system that supports mixed behavior is limited to the
SMB server product. For more information about the
mixed value behavior, see the "ZFS
Administration Guide".
-
-
- normalization=none|formC|formD|formKC|formKD
- Indicates whether the file system should perform a
unicode normalization of file names whenever
two file names are compared, and which normalization algorithm should be
used. File names are always stored unmodified; names are normalized as
part of any comparison process. If this property is set to a legal value
other than none, and the
utf8only property was left unspecified, the
utf8only property is automatically set to
on. The default value of the
normalization property is
none. This property cannot be changed after
the file system is created.
-
-
- utf8only=on|off
- Indicates whether the file system should reject file names
that include characters that are not present in the
UTF-8 character code set. If this property is
explicitly set to off, the normalization
property must either not be explicitly set or be set to
none. The default value for the
utf8only property is
off. This property cannot be changed after
the file system is created.
The
casesensitivity,
normalization, and
utf8only properties are also new permissions that
can be assigned to non-privileged users by using the ZFS delegated
administration feature.
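For example, a file system intended for SMB sharing (the dataset name is illustrative) might be created with:
# zfs create -o casesensitivity=mixed -o normalization=formD pool/cifs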
When a file system is mounted, either through
mount(1M) for legacy mounts or the
zfs mount command
for normal file systems, its mount options are set according to its
properties. The correlation between properties and mount options is as
follows:
PROPERTY      MOUNT OPTION
devices       devices/nodevices
exec          exec/noexec
readonly      ro/rw
setuid        setuid/nosetuid
xattr         xattr/noxattr
In addition, these options can be set on a per-mount basis using the
-o option, without affecting the property that is
stored on disk. The values specified on the command line override the values
stored in the dataset. The
nosuid option is an
alias for
nodevices,
nosetuid.
These properties are reported as “temporary” by the
zfs get command. If
the properties are changed while the dataset is mounted, the new setting
overrides any temporary settings.
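For example, a file system can be mounted read-only for a single mount, without changing its readonly property (the dataset name is illustrative):
# zfs mount -o ro pool/data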
In addition to the standard native properties, ZFS supports arbitrary user
properties. User properties have no effect on ZFS behavior, but applications
or administrators can use them to annotate datasets (file systems, volumes,
and snapshots).
User property names must contain a colon (“:”) character to distinguish them
from native properties. They may contain lowercase letters, numbers, and the
following punctuation characters: colon (“:”), dash (“-”), period (“.”), and
underscore (“_”). The expected convention is that the property name is divided
into two portions such as module:property, but this namespace is not enforced
by ZFS. User property names can be at most 256 characters, and cannot begin
with a dash (“-”).
When making programmatic use of user properties, it is strongly suggested to use
a reversed
DNS domain name for the
module component of property names to reduce the
chance that two independently-developed packages use the same property name
for different purposes.
The values of user properties are arbitrary strings, are always inherited, and
are never validated. All of the commands that operate on properties
(
zfs list,
zfs get,
zfs set, and so
forth) can be used to manipulate both native properties and user properties.
Use the
zfs inherit
command to clear a user property. If the property is not defined in any parent
dataset, it is removed entirely. Property values are limited to 8192 bytes.
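For example, an administrator might annotate a dataset with a hypothetical com.example:department property, read it back, and later clear it:
# zfs set com.example:department=12345 pool/accounting
# zfs get com.example:department pool/accounting
# zfs inherit com.example:department pool/accounting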
During an initial installation a swap device and dump device are created on ZFS
volumes in the ZFS root pool. By default, the swap area size is based on 1/2
the size of physical memory up to 2 Gbytes. The size of the dump device
depends on the kernel's requirements at installation time. Separate ZFS
volumes must be used for the swap area and dump devices. Do not swap to a file
on a ZFS file system. A ZFS swap file configuration is not supported.
If you need to change your swap area or dump device after the system is
installed or upgraded, use the
swap(1M) and
dumpadm(1M) commands.
All subcommands that modify state are logged persistently to the pool in their
original form.
-
-
- zfs -?
- Displays a help message.
-
-
- zfs create [-p] [-o property=value]... filesystem
- Creates a new ZFS file system. The file system is
automatically mounted according to the
mountpoint property inherited from the
parent.
-
-
- -o
property=value
- Sets the specified property as if the command
zfs set
property=value
was invoked at the same time the dataset was created. Any editable ZFS
property can also be set at creation time. Multiple
-o options can be specified. An error
results if the same property is specified in multiple
-o options.
-
-
- -p
- Creates all the non-existing parent datasets. Datasets
created in this manner are automatically mounted according to the
mountpoint property inherited from their
parent. Any property specified on the command line using the
-o option is ignored. If the target
filesystem already exists, the operation completes successfully.
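For example (the dataset names are illustrative):
# zfs create -o atime=off -o compression=on pool/home/bob
# zfs create -p pool/projects/web/src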
-
-
- zfs create [-ps] [-b blocksize] [-o property=value]... -V size volume
- Creates a volume of the given size. The volume is exported
as a block device in
/dev/zvol/{dsk,rdsk}/path, where
path is the name of the volume in the ZFS
namespace. The size represents the logical size as exported by the device.
By default, a reservation of equal size is created.
size is automatically rounded up to the
nearest 128 Kbytes to ensure that the volume has an integral number of
blocks regardless of blocksize.
-
-
- -b
blocksize
- Equivalent to -o
volblocksize=blocksize.
If this option is specified in conjunction with
-o
volblocksize, the resulting behavior is
undefined.
-
-
- -o
property=value
- Sets the specified property as if the
zfs set
property=value
command was invoked at the same time the dataset was created. Any
editable ZFS property can also be set at creation time. Multiple
-o options can be specified. An error
results if the same property is specified in multiple
-o options.
-
-
- -p
- Creates all the non-existing parent datasets. Datasets
created in this manner are automatically mounted according to the
mountpoint property inherited from their
parent. Any property specified on the command line using the
-o option is ignored. If the target
filesystem already exists, the operation completes successfully.
-
-
- -s
- Creates a sparse volume with no reservation. See
volsize in the
Native
Properties section for more information about sparse volumes.
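For example, a 10 Gbyte volume with a 64 Kbyte block size (the name is illustrative) could be created with:
# zfs create -b 64K -V 10G pool/vols/disk0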
-
-
- zfs destroy [-Rfnprv] filesystem|volume
- Destroys the given dataset. By default, the command
unshares any file systems that are currently shared, unmounts any file
systems that are currently mounted, and refuses to destroy a dataset that
has active dependents (children or clones).
-
-
- -R
- Recursively destroy all dependents, including cloned
file systems outside the target hierarchy.
-
-
- -f
- Force an unmount of any file systems using the
unmount -f
command. This option has no effect on non-file systems or unmounted
file systems.
-
-
- -n
- Do a dry-run (“No-op”) deletion. No data
will be deleted. This is useful in conjunction with the
-v or -p
flags to determine what data would be deleted.
-
-
- -p
- Print machine-parsable verbose information about the
deleted data.
-
-
- -r
- Recursively destroy all children.
-
-
- -v
- Print verbose information about the deleted data.
Extreme care should be taken when applying either the
-r or the -R
options, as they can destroy large portions of a pool and cause unexpected
behavior for mounted file systems in use.
-
-
- zfs destroy [-Rdnprv] filesystem|volume@snap[%snap[,snap[%snap]]]...
- The given snapshots are destroyed immediately if and only
if the zfs
destroy command without the
-d option would have destroyed them. Such
immediate destruction would occur, for example, if the snapshot had no
clones and the user-initiated reference count were zero.
If a snapshot does not qualify for immediate destruction, it is marked for
deferred deletion. In this state, it exists as a usable, visible snapshot
until both of the preconditions listed above are met, at which point it is
destroyed.
An inclusive range of snapshots may be specified by separating the first and
last snapshots with a percent sign. The first and/or last snapshots may be
left blank, in which case the filesystem's oldest or newest snapshot will
be implied.
Multiple snapshots (or ranges of snapshots) of the same filesystem or volume
may be specified in a comma-separated list of snapshots. Only the
snapshot's short name (the part after the @)
should be specified when using a range or comma-separated list to identify
multiple snapshots.
-
-
- -R
- Recursively destroy all clones of these snapshots,
including the clones, snapshots, and children. If this flag is
specified, the -d flag will have no
effect.
-
-
- -d
- Defer snapshot deletion.
-
-
- -n
- Do a dry-run (“No-op”) deletion. No data
will be deleted. This is useful in conjunction with the
-p or -v
flags to determine what data would be deleted.
-
-
- -p
- Print machine-parsable verbose information about the
deleted data.
-
-
- -r
- Destroy (or mark for deferred deletion) all snapshots
with this name in descendent file systems.
-
-
- -v
- Print verbose information about the deleted data.
Extreme care should be taken when applying either the
-r or the -R
options, as they can destroy large portions of a pool and cause
unexpected behavior for mounted file systems in use.
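For example, a dry run showing what destroying a range of snapshots would remove (the snapshot names are illustrative):
# zfs destroy -nv pool/home@snap1%snap3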
-
-
- zfs destroy filesystem|volume#bookmark
- The given bookmark is destroyed.
-
-
- zfs snapshot [-r] [-o property=value]... filesystem@snapname|volume@snapname...
- Creates snapshots with the given names. All previous
modifications by successful system calls to the file system are part of
the snapshots. Snapshots are taken atomically, so that all snapshots
correspond to the same moment in time. See the
Snapshots section for
details.
-
-
- -o
property=value
- Sets the specified property; see
zfs create
for details.
-
-
- -r
- Recursively create snapshots of all descendent
datasets
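For example, a recursive snapshot of a hypothetical pool/home and all of its descendent file systems:
# zfs snapshot -r pool/home@backup-2021-01-01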
-
-
- zfs rollback [-Rfr] snapshot
- Roll back the given dataset to a previous snapshot. When a
dataset is rolled back, all data that has changed since the snapshot is
discarded, and the dataset reverts to the state at the time of the
snapshot. By default, the command refuses to roll back to a snapshot other
than the most recent one. In order to do so, all intermediate snapshots
and bookmarks must be destroyed by specifying the
-r option.
The -rR options do not recursively destroy the
child snapshots of a recursive snapshot. Only direct snapshots of the
specified filesystem are destroyed by either of these options. To
completely roll back a recursive snapshot, you must rollback the
individual child snapshots.
-
-
- -R
- Destroy any more recent snapshots and bookmarks, as
well as any clones of those snapshots.
-
-
- -f
- Used with the -R option to
force an unmount of any clone file systems that are to be
destroyed.
-
-
- -r
- Destroy any snapshots and bookmarks more recent than
the one specified.
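For example, rolling a hypothetical pool/home/bob back past any intermediate snapshots:
# zfs rollback -r pool/home/bob@monday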
-
-
- zfs clone [-p] [-o property=value]... snapshot filesystem|volume
- Creates a clone of the given snapshot. See the
Clones section for details.
The target dataset can be located anywhere in the ZFS hierarchy, and is
created as the same type as the original.
-
-
- -o
property=value
- Sets the specified property; see
zfs create
for details.
-
-
- -p
- Creates all the non-existing parent datasets. Datasets
created in this manner are automatically mounted according to the
mountpoint property inherited from their
parent. If the target filesystem or volume already exists, the
operation completes successfully.
-
-
- zfs promote clone-filesystem
- Promotes a clone file system to no longer be dependent on
its “origin” snapshot. This makes it possible to destroy the
file system that the clone was created from. The clone parent-child
dependency relationship is reversed, so that the origin file system
becomes a clone of the specified file system.
The snapshot that was cloned, and any snapshots previous to this snapshot,
are now owned by the promoted clone. The space they use moves from the
origin file system to the promoted clone, so enough space must be
available to accommodate these snapshots. No new space is consumed by this
operation, but the space accounting is adjusted. The promoted clone must
not have any conflicting snapshot names of its own. The
rename subcommand can be used to rename any
conflicting snapshots.
-
-
- zfs rename [-f] filesystem|volume|snapshot filesystem|volume|snapshot
-
- zfs rename [-fp] filesystem|volume filesystem|volume
- Renames the given dataset. The new target can be located
anywhere in the ZFS hierarchy, with the exception of snapshots. Snapshots
can only be renamed within the parent file system or volume. When renaming
a snapshot, the parent file system of the snapshot does not need to be
specified as part of the second argument. Renamed file systems can inherit
new mount points, in which case they are unmounted and remounted at the
new mount point.
-
-
- -f
- Force unmount any filesystems that need to be unmounted
in the process.
-
-
- -p
- Creates all the nonexistent parent datasets. Datasets
created in this manner are automatically mounted according to the
mountpoint property inherited from their
parent.
-
-
- zfs rename -r snapshot snapshot
- Recursively rename the snapshots of all descendent
datasets. Snapshots are the only dataset that can be renamed
recursively.
-
-
- zfs list [-r|-d depth] [-Hp] [-o property[,property]...] [-s property]... [-S property]... [-t type[,type]...] [filesystem|volume|snapshot]...
- Lists the property information for the given datasets in
tabular form. If specified, you can list property information by the
absolute pathname or the relative pathname. By default, all file systems
and volumes are displayed. Snapshots are displayed if the
listsnaps property is
on (the default is
off). The following fields are displayed:
name, used, available, referenced, mountpoint.
-
-
- -H
- Used for scripting mode. Do not print headers and
separate fields by a single tab instead of arbitrary white space.
-
-
- -S
property
- Same as the -s option, but
sorts by property in descending order.
-
-
- -d
depth
- Recursively display any children of the dataset,
limiting the recursion to depth. A
depth of
1 will display only the dataset and its
direct children.
-
-
- -o
property
- A comma-separated list of properties to display. The
property must be:
- One of the properties described in the
Native
Properties section
- A user property
- The value name to
display the dataset name
- The value space to
display space usage properties on file systems and volumes. This
is a shortcut for specifying -o
name,avail,used,usedsnap,usedds,usedrefreserv,usedchild
-t
filesystem,volume
syntax.
-
-
- -p
- Display numbers in parsable (exact) values.
-
-
- -r
- Recursively display any children of the dataset on the
command line.
-
-
- -s
property
- A property for sorting the output by column in
ascending order based on the value of the property. The property must
be one of the properties described in the
Properties section, or
the special value name to sort by the
dataset name. Multiple properties can be specified at one time using
multiple -s property options. Multiple
-s options are evaluated from left to
right in decreasing order of importance. The following is a list of
sorting criteria:
- Numeric types sort in numeric order.
- String types sort in alphabetical order.
- Types inappropriate for a row sort that row to
the literal bottom, regardless of the specified ordering.
If no sorting options are specified, the existing behavior of
zfs list is
preserved.
-
-
- -t
type
- A comma-separated list of types to display, where
type is one of
filesystem,
snapshot,
volume,
bookmark, or
all. For example, specifying
-t snapshot
displays only snapshots.
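For example, assuming a hypothetical file system pool/home, the following command recursively lists its snapshots in scripting mode with parsable numbers, sorted by the used property:
# zfs list -Hp -r -t snapshot -s used -o name,used pool/home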
-
-
- zfs
set
property=value
[property=value]...
filesystem|volume|snapshot...
- Sets the property or list of properties to the given
value(s) for each dataset. Only some properties can be edited. See the
Properties section for
more information on what properties can be set and acceptable values.
Numeric values can be specified as exact values, or in a human-readable
form with a suffix of B,
K, M,
G, T,
P, E,
Z (for bytes, kilobytes, megabytes,
gigabytes, terabytes, petabytes, exabytes, or zettabytes, respectively).
User properties can be set on snapshots. For more information, see the
User Properties
section.
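For example, the following commands set a quota using a human-readable suffix and set a user property on a snapshot (the dataset names and the com.example:status property are hypothetical):
# zfs set quota=20G pool/home/bob
# zfs set com.example:status=archived pool/home/bob@yesterday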
-
-
- zfs
get
[-r|-d
depth]
[-Hp]
[-o
field[,field]...]
[-s
source[,source]...]
[-t
type[,type]...]
all |
property[,property]...
filesystem|volume|snapshot|bookmark...
- Displays properties for the given datasets. If no datasets
are specified, then the command displays properties for all datasets on
the system. For each property, the following columns are displayed:
name Dataset name
property Property name
value Property value
source Property source. Can either be local, default,
temporary, inherited, or none (-).
All columns are displayed by default, though this can be controlled by using
the -o option. This command takes a
comma-separated list of properties as described in the
Native Properties
and User Properties
sections.
The special value all can be used to display
all properties that apply to the given dataset's type (filesystem, volume,
snapshot, or bookmark).
-
-
- -H
- Display output in a form more easily parsed by scripts.
Any headers are omitted, and fields are explicitly separated by a
single tab instead of an arbitrary amount of space.
-
-
- -d
depth
- Recursively display any children of the dataset,
limiting the recursion to depth. A
depth of 1 will display only the dataset
and its direct children.
-
-
- -o
field
- A comma-separated list of columns to display.
name,property,value,source
is the default value.
-
-
- -p
- Display numbers in parsable (exact) values.
-
-
- -r
- Recursively display properties for any children.
-
-
- -s
source
- A comma-separated list of sources to display. Those
properties coming from a source other than those in this list are
ignored. Each source must be one of the following:
local,
default,
inherited,
temporary, and
none. The default value is all
sources.
-
-
- -t
type
- A comma-separated list of types to display, where
type is one of
filesystem,
snapshot,
volume,
bookmark, or
all.
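For example, assuming a hypothetical file system pool/home/bob, the following command displays the used and available properties in parsable, tab-separated form, showing only the name, property, and value columns:
# zfs get -Hp -o name,property,value used,available pool/home/bob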
-
-
- zfs
inherit
[-rS]
property
filesystem|volume|snapshot...
- Clears the specified property, causing it to be inherited
from an ancestor, restored to default if no ancestor has the property set,
or with the -S option reverted to the
received value if one exists. See the
Properties section for a
listing of default values, and details on which properties can be
inherited.
-
-
- -r
- Recursively inherit the given property for all
children.
-
-
- -S
- Revert the property to the received value if one
exists; otherwise operate as if the -S
option was not specified.
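For example, the following commands clear a locally set compression value on a hypothetical file system pool/home and all of its descendents, and revert the mountpoint property of pool/home/bob to its received value if one exists:
# zfs inherit -r compression pool/home
# zfs inherit -S mountpoint pool/home/bob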
-
-
- zfs
remap
filesystem|volume
- Remap the indirect blocks in the given filesystem or volume
so that they no longer reference blocks on previously removed vdevs and we
can eventually shrink the size of the indirect mapping objects for the
previously removed vdevs. Note that remapping all blocks might not be
possible and that references from snapshots will still exist and cannot be
remapped.
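For example, after a device removal has completed on the pool, the indirect blocks of a hypothetical file system pool/data could be remapped with:
# zfs remap pool/data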
-
-
- zfs
upgrade
- Displays a list of file systems that are not the most
recent version.
-
-
- zfs
upgrade -v
- Displays a list of currently supported file system
versions.
-
-
- zfs
upgrade
[-r]
[-V
version]
-a |
filesystem
- Upgrades file systems to a new on-disk version. Once this
is done, the file systems will no longer be accessible on systems running
older versions of the software. zfs
send streams generated from new snapshots of
these file systems cannot be accessed on systems running older versions of
the software.
In general, the file system version is independent of the pool version. See
zpool(1M) for information on the
zpool upgrade
command.
In some cases, the file system version and the pool version are interrelated
and the pool version must be upgraded before the file system version can
be upgraded.
-
-
- -V
version
- Upgrade to the specified
version. If the
-V flag is not specified, this command
upgrades to the most recent version. This option can only be used to
increase the version number, and only up to the most recent version
supported by this software.
-
-
- -a
- Upgrade all file systems on all imported pools.
-
-
- filesystem
- Upgrade the specified file system.
-
-
- -r
- Upgrade the specified file system and all descendent
file systems.
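For example, the following commands list the supported file system versions and then upgrade a hypothetical file system pool/home and all of its descendents to the most recent version:
# zfs upgrade -v
# zfs upgrade -r pool/home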
-
-
- zfs
userspace
[-Hinp]
[-o
field[,field]...]
[-s
field]...
[-S
field]...
[-t
type[,type]...]
filesystem|snapshot
- Displays space consumed by, and quotas on, each user in the
specified filesystem or snapshot. This corresponds to the
userused@user
and
userquota@user
properties.
-
-
- -H
- Do not print headers, use tab-delimited output.
-
-
- -S
field
- Sort by this field in reverse order. See
-s.
-
-
- -i
- Translate SID to POSIX ID. The POSIX ID may be
ephemeral if no mapping exists. Normal POSIX interfaces (for example,
stat(2), ls
-l) perform this translation, so the
-i option allows the output from
zfs
userspace to be compared directly with
those utilities. However, -i may lead to
confusion if some files were created by an SMB user before an
SMB-to-POSIX name mapping was established. In such a case, some files
will be owned by the SMB entity and some by the POSIX entity. However,
the -i option will report that the POSIX
entity has the total usage and quota for both.
-
-
- -n
- Print numeric ID instead of user/group name.
-
-
- -o
field[,field]...
- Display only the specified fields from the following
set: type,
name, used,
quota. The default is to display all
fields.
-
-
- -p
- Use exact (parsable) numeric output.
-
-
- -s
field
- Sort output by this field. The
-s and -S
flags may be specified multiple times to sort first by one field, then
by another. The default is -s
type -s
name.
-
-
- -t
type[,type]...
- Print only the specified types from the following set:
all,
posixuser,
smbuser,
posixgroup,
smbgroup. The default is
-t
posixuser,smbuser.
The default can be changed to include group types.
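For example, assuming a hypothetical file system pool/home, the following command displays per-user space usage with parsable numbers, sorted by the used field in descending order:
# zfs userspace -p -S used -o name,used,quota pool/home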
-
-
- zfs
groupspace
[-Hinp]
[-o
field[,field]...]
[-s
field]...
[-S
field]...
[-t
type[,type]...]
filesystem|snapshot
- Displays space consumed by, and quotas on, each group in
the specified filesystem or snapshot. This subcommand is identical to
zfs userspace,
except that the default types to display are
-t
posixgroup,smbgroup.
-
-
- zfs
mount
- Displays all ZFS file systems currently mounted.
-
-
- zfs
mount
[-Ov]
[-o
options]
-a |
filesystem
- Mounts ZFS file systems.
-
-
- -O
- Perform an overlay mount. See
mount(1M) for more information.
-
-
- -a
- Mount all available ZFS file systems. Invoked
automatically as part of the boot process.
-
-
- filesystem
- Mount the specified filesystem.
-
-
- -o
options
- An optional, comma-separated list of mount options to
use temporarily for the duration of the mount. See the
Temporary
Mount Point Properties section for details.
-
-
- -v
- Report mount progress.
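For example, the following commands temporarily mount a hypothetical file system pool/home/bob read-only and then mount all remaining ZFS file systems:
# zfs mount -o ro pool/home/bob
# zfs mount -a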
-
-
- zfs
unmount
[-f]
-a |
filesystem|mountpoint
- Unmounts currently mounted ZFS file systems.
-
-
- -a
- Unmount all available ZFS file systems. Invoked
automatically as part of the shutdown process.
-
-
- filesystem|mountpoint
- Unmount the specified filesystem. The command can also
be given a path to a ZFS file system mount point on the system.
-
-
- -f
- Forcefully unmount the file system, even if it is
currently in use.
-
-
- zfs
share -a |
filesystem
- Shares available ZFS file systems.
-
-
- -a
- Share all available ZFS file systems. Invoked
automatically as part of the boot process.
-
-
- filesystem
- Share the specified filesystem according to the
sharenfs and
sharesmb properties. File systems are
shared when the sharenfs or
sharesmb property is set.
-
-
- zfs
unshare -a |
filesystem|mountpoint
- Unshares currently shared ZFS file systems.
-
-
- -a
- Unshare all available ZFS file systems. Invoked
automatically as part of the shutdown process.
-
-
- filesystem|mountpoint
- Unshare the specified filesystem. The command can also
be given a path to a ZFS file system shared on the system.
-
-
- zfs
bookmark snapshot
bookmark
- Creates a bookmark of the given snapshot. Bookmarks mark
the point in time when the snapshot was created, and can be used as the
incremental source for a zfs
send command.
This feature must be enabled to be used. See
zpool-features(5) for details on ZFS feature
flags and the bookmarks feature.
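For example, assuming a hypothetical file system pool/home/bob that has already been replicated to poolB/received/bob on a remote host, the following commands create a bookmark, destroy the original snapshot, and later use the bookmark as the incremental source of a send stream:
# zfs bookmark pool/home/bob@yesterday pool/home/bob#yesterday
# zfs destroy pool/home/bob@yesterday
# zfs send -i pool/home/bob#yesterday pool/home/bob@today | \
     ssh host zfs receive poolB/received/bob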
-
-
- zfs
send
[-DLPRcenpv]
[[-I|-i]
snapshot]
snapshot
- Creates a stream representation of the second
snapshot, which is written to standard
output. The output can be redirected to a file or to a different system
(for example, using ssh(1)). By default, a
full stream is generated.
-
-
- -D,
--dedup
- Generate a deduplicated stream. Blocks which would have
been sent multiple times in the send stream will only be sent once.
The receiving system must also support this feature to receive a
deduplicated stream. This flag can be used regardless of the dataset's
dedup property, but performance will be
much better if the filesystem uses a dedup-capable checksum (for
example, sha256).
-
-
- -I
snapshot
- Generate a stream package that sends all intermediary
snapshots from the first snapshot to the second snapshot. For example,
-I @a
fs@d is similar to
-i @a
fs@b; -i
@b fs@c;
-i @c
fs@d. The incremental source may be
specified as with the -i option.
-
-
- -L,
--large-block
- Generate a stream which may contain blocks larger than
128KB. This flag has no effect if the
large_blocks pool feature is disabled, or
if the recordsize property of this
filesystem has never been set above 128KB. The receiving system must
have the large_blocks pool feature
enabled as well. See zpool-features(5)
for details on ZFS feature flags and the
large_blocks feature.
-
-
- -P,
--parsable
- Print machine-parsable verbose information about the
stream package generated.
-
-
- -R,
--replicate
- Generate a replication stream package, which will
replicate the specified file system, and all descendent file systems,
up to the named snapshot. When received, all properties, snapshots,
descendent file systems, and clones are preserved.
If the -i or
-I flags are used in conjunction with the
-R flag, an incremental replication
stream is generated. The current values of properties, and current
snapshot and file system names are set when the stream is received. If
the -F flag is specified when this stream
is received, snapshots and file systems that do not exist on the
sending side are destroyed.
-
-
- -e,
--embed
- Generate a more compact stream by using
WRITE_EMBEDDED records for blocks which
are stored more compactly on disk by the
embedded_data pool feature. This flag has
no effect if the embedded_data feature is
disabled. The receiving system must have the
embedded_data feature enabled. If the
lz4_compress feature is active on the
sending system, then the receiving system must have that feature
enabled as well. See zpool-features(5)
for details on ZFS feature flags and the
embedded_data feature.
-
-
- -c,
--compressed
- Generate a more compact stream by using compressed
WRITE records for blocks which are compressed on disk and in memory
(see the compression property for
details). If the lz4_compress feature is
active on the sending system, then the receiving system must have that
feature enabled as well. If the
large_blocks feature is enabled on the
sending system but the -L option is not
supplied in conjunction with -c, then the
data will be decompressed before sending so it can be split into
smaller block sizes.
-
-
- -i
snapshot
- Generate an incremental stream from the first
snapshot (the incremental source) to
the second snapshot (the incremental
target). The incremental source can be specified as the last component
of the snapshot name (the @ character and
following) and it is assumed to be from the same file system as the
incremental target.
If the destination is a clone, the source may be the origin snapshot,
which must be fully specified (for example,
pool/fs@origin, not just
@origin).
-
-
- -n,
--dryrun
- Do a dry-run (“No-op”) send. Do not
generate any actual send data. This is useful in conjunction with the
-v or -P
flags to determine what data will be sent. In this case, the verbose
output will be written to standard output (contrast with a
non-dry-run, where the stream is written to standard output and the
verbose output goes to standard error).
-
-
- -p,
--props
- Include the dataset's properties in the stream. This
flag is implicit when -R is specified.
The receiving system must also support this feature.
-
-
- -v,
--verbose
- Print verbose information about the stream package
generated. This information includes a per-second report of how much
data has been sent.
The format of the stream is committed. You will be able to receive your
streams on future versions of ZFS.
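For example, the following command generates a verbose, incremental replication stream package covering all snapshots of a hypothetical file system pool/home and its descendents from @monday through @friday, and writes the stream to a file (the path is hypothetical):
# zfs send -R -v -I pool/home@monday pool/home@friday > /backup/home-week.stream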
-
-
- zfs
send
[-Lce]
[-i
snapshot|bookmark]
filesystem|volume|snapshot
- Generate a send stream, which may be of a filesystem, and
may be incremental from a bookmark. If the destination is a filesystem or
volume, the pool must be read-only, or the filesystem must not be mounted.
When the stream generated from a filesystem or volume is received, the
default snapshot name will be “--head--”.
-
-
- -L,
--large-block
- Generate a stream which may contain blocks larger than
128KB. This flag has no effect if the
large_blocks pool feature is disabled, or
if the recordsize property of this
filesystem has never been set above 128KB. The receiving system must
have the large_blocks pool feature
enabled as well. See zpool-features(5)
for details on ZFS feature flags and the
large_blocks feature.
-
-
- -c,
--compressed
- Generate a more compact stream by using compressed
WRITE records for blocks which are compressed on disk and in memory
(see the compression property for
details). If the lz4_compress feature is
active on the sending system, then the receiving system must have that
feature enabled as well. If the
large_blocks feature is enabled on the
sending system but the -L option is not
supplied in conjunction with -c, then the
data will be decompressed before sending so it can be split into
smaller block sizes.
-
-
- -e,
--embed
- Generate a more compact stream by using
WRITE_EMBEDDED records for blocks which
are stored more compactly on disk by the
embedded_data pool feature. This flag has
no effect if the embedded_data feature is
disabled. The receiving system must have the
embedded_data feature enabled. If the
lz4_compress feature is active on the
sending system, then the receiving system must have that feature
enabled as well. See zpool-features(5)
for details on ZFS feature flags and the
embedded_data feature.
-
-
- -i
snapshot|bookmark
- Generate an incremental send stream. The incremental
source must be an earlier snapshot in the destination's history. It
will commonly be an earlier snapshot in the destination's file system,
in which case it can be specified as the last component of the name
(the # or @
character and following).
If the incremental target is a clone, the incremental source can be the
origin snapshot, or an earlier snapshot in the origin's filesystem, or
the origin's origin, etc.
-
-
- zfs
send
[-Penv]
-t
receive_resume_token
- Creates a send stream which resumes an interrupted receive.
The receive_resume_token is the value of
this property on the filesystem or volume that was being received into.
See the documentation for zfs receive -s for
more details.
-
-
- zfs
receive
[-Fnsuv]
[-o
origin=snapshot]
filesystem|volume|snapshot
-
- zfs
receive
[-Fnsuv]
[-d|-e]
[-o
origin=snapshot]
filesystem
- Creates a snapshot whose contents are as specified in the
stream provided on standard input. If a full stream is received, then a
new file system is created as well. Streams are created using the
zfs send
subcommand, which by default creates a full stream.
zfs recv can be
used as an alias for zfs
receive.
If an incremental stream is received, then the destination file system must
already exist, and its most recent snapshot must match the incremental
stream's source. For zvols, the destination
device link is destroyed and recreated, which means the
zvol cannot be accessed during the
receive operation.
When a snapshot replication package stream that is generated by using the
zfs send
-R command is received, any snapshots that do
not exist on the sending location are destroyed by using the
zfs destroy
-d command.
The name of the snapshot (and file system, if a full stream is received)
that this subcommand creates depends on the argument type and the use of
the -d or -e
options.
If the argument is a snapshot name, the specified
snapshot is created. If the argument is a
file system or volume name, a snapshot with the same name as the sent
snapshot is created within the specified
filesystem or
volume. If neither of the
-d or -e options
are specified, the provided target snapshot name is used exactly as
provided.
The -d and -e
options cause the file system name of the target snapshot to be determined
by appending a portion of the sent snapshot's name to the specified target
filesystem. If the
-d option is specified, all but the first
element of the sent snapshot's file system path (usually the pool name) is
used and any required intermediate file systems within the specified one
are created. If the -e option is specified,
then only the last element of the sent snapshot's file system name (i.e.
the name of the source file system itself) is used as the target file
system name.
-
-
- -F
- Force a rollback of the file system to the most recent
snapshot before performing the receive operation. If receiving an
incremental replication stream (for example, one generated by
zfs send
-R
[-i|-I]),
destroy snapshots and file systems that do not exist on the sending
side.
-
-
- -d
- Discard the first element of the sent snapshot's file
system name, using the remaining elements to determine the name of the
target file system for the new snapshot as described in the paragraph
above.
-
-
- -e
- Discard all but the last element of the sent snapshot's
file system name, using that element to determine the name of the
target file system for the new snapshot as described in the paragraph
above.
-
-
- -n
- Do not actually receive the stream. This can be useful
in conjunction with the -v option to
verify the name the receive operation would use.
-
-
- -o
origin=snapshot
- Forces the stream to be received as a clone of the
given snapshot. If the stream is a full send stream, this will create
the filesystem described by the stream as a clone of the specified
snapshot. Which snapshot was specified will not affect the success or
failure of the receive, as long as the snapshot does exist. If the
stream is an incremental send stream, all the normal verification will
be performed.
-
-
- -u
- The file system that is associated with the received stream
is not mounted.
-
-
- -v
- Print verbose information about the stream and the time
required to perform the receive operation.
-
-
- -s
- If the receive is interrupted, save the partially
received state, rather than deleting it. Interruption may be due to
premature termination of the stream (e.g. due to network failure or
failure of the remote system if the stream is being read over a
network connection), a checksum error in the stream, termination of
the zfs
receive process, or unclean shutdown of
the system.
The receive can be resumed with a stream generated by
zfs send
-t
token, where the
token is the value of the
receive_resume_token property of the
filesystem or volume which is received into.
To use this flag, the storage pool must have the
extensible_dataset feature enabled. See
zpool-features(5) for details on ZFS
feature flags.
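For example, a sketch of a resumable transfer between two hypothetical systems, where <token> stands for the value reported by zfs get:
# zfs send pool/fs@snap | ssh host zfs receive -s poolB/received/fs
the connection is interrupted; on the receiving system, retrieve the token
# zfs get -H -o value receive_resume_token poolB/received/fs
on the sending system, resume the transfer using that token
# zfs send -t <token> | ssh host zfs receive -s poolB/received/fs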
-
-
- zfs
receive -A
filesystem|volume
- Abort an interrupted zfs
receive -s,
deleting its saved partially received state.
-
-
- zfs
allow
filesystem|volume
- Displays permissions that have been delegated on the
specified filesystem or volume. See the other forms of
zfs allow for
more information.
-
-
- zfs
allow
[-dglu]
user|group[,user|group]...
perm|@setname[,perm|@setname]...
filesystem|volume
zfs allow
[-dl]
-e|everyone
perm|@setname[,perm|@setname]...
filesystem|volume
- Delegates ZFS administration permission for the file
systems to non-privileged users.
-
-
- -d
- Allow only for the descendent file systems.
-
-
- -e|everyone
- Specifies that the permissions be delegated to
everyone.
-
-
- -g
group[,group]...
- Explicitly specify that permissions are delegated to
the group.
-
-
- -l
- Allow “locally” only for the specified
file system.
-
-
- -u
user[,user]...
- Explicitly specify that permissions are delegated to
the user.
-
-
- user|group[,user|group]...
- Specifies to whom the permissions are delegated.
Multiple entities can be specified as a comma-separated list. If
neither of the -gu options are specified,
then the argument is interpreted preferentially as the keyword
everyone, then as a user name, and lastly
as a group name. To specify a user or group named
“everyone”, use the -g or
-u options. To specify a group with the
same name as a user, use the -g
option.
-
-
- perm|@setname[,perm|@setname]...
- The permissions to delegate. Multiple permissions may
be specified as a comma-separated list. Permission names are the same
as ZFS subcommand and property names. See the property list below.
Property set names, which begin with @,
may be specified. See the -s form below
for details.
If neither of the -dl options are specified, or
both are, then the permissions are allowed for the file system or volume,
and all of its descendents.
Permissions are generally the ability to use a ZFS subcommand or change a
ZFS property. The following permissions are available:
NAME TYPE NOTES
allow subcommand Must also have the permission that is
being allowed
clone subcommand Must also have the 'create' ability and
'mount' ability in the origin file system
create subcommand Must also have the 'mount' ability
destroy subcommand Must also have the 'mount' ability
diff subcommand Allows lookup of paths within a dataset
given an object number, and the ability
to create snapshots necessary to
'zfs diff'.
mount subcommand Allows mount/umount of ZFS datasets
promote subcommand Must also have the 'mount' and 'promote'
ability in the origin file system
receive subcommand Must also have the 'mount' and 'create'
ability
rename subcommand Must also have the 'mount' and 'create'
ability in the new parent
rollback subcommand Must also have the 'mount' ability
send subcommand
share subcommand Allows sharing file systems over NFS
or SMB protocols
snapshot subcommand Must also have the 'mount' ability
groupquota other Allows accessing any groupquota@...
property
groupused other Allows reading any groupused@... property
userprop other Allows changing any user property
userquota other Allows accessing any userquota@...
property
userused other Allows reading any userused@... property
aclinherit property
aclmode property
atime property
canmount property
casesensitivity property
checksum property
compression property
copies property
devices property
exec property
filesystem_limit property
mountpoint property
nbmand property
normalization property
primarycache property
quota property
readonly property
recordsize property
refquota property
refreservation property
reservation property
secondarycache property
setuid property
sharenfs property
sharesmb property
snapdir property
snapshot_limit property
utf8only property
version property
volblocksize property
volsize property
vscan property
xattr property
zoned property
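For example, the following command delegates the snapshot and send permissions to a hypothetical user cindys locally on tank/cindys only, not on its descendents:
# zfs allow -l -u cindys snapshot,send tank/cindys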
-
-
- zfs
allow -c
perm|@setname[,perm|@setname]...
filesystem|volume
- Sets “create time” permissions. These
permissions are granted (locally) to the creator of any newly-created
descendent file system.
-
-
- zfs
allow -s
@setname
perm|@setname[,perm|@setname]...
filesystem|volume
- Defines or adds permissions to a permission set. The set
can be used by other zfs
allow commands for the specified file system
and its descendents. Sets are evaluated dynamically, so changes to a set
are immediately reflected. Permission sets follow the same naming
restrictions as ZFS file systems, but the name must begin with
@, and can be no more than 64 characters
long.
-
-
- zfs
unallow
[-dglru]
user|group[,user|group]...
[perm|@setname[,perm|@setname]...]
filesystem|volume
zfs unallow
[-dlr]
-e|everyone
[perm|@setname[,perm|@setname]...]
filesystem|volume
zfs unallow
[-r]
-c
[perm|@setname[,perm|@setname]...]
filesystem|volume
- Removes permissions that were granted with the
zfs allow
command. No permissions are explicitly denied, so other permissions
granted remain in effect (for example, a permission granted by an ancestor).
If no permissions are specified, then all permissions for the
specified user,
group, or
everyone are removed. Specifying
everyone (or using the
-e option) only removes the permissions that
were granted to everyone, not all permissions for every user and group.
See the zfs
allow command for a description of the
-ldugec options.
-
-
- -r
- Recursively remove the permissions from this file
system and all descendents.
-
-
- zfs
unallow
[-r]
-s
@setname
[perm|@setname[,perm|@setname]...]
filesystem|volume
- Removes permissions from a permission set. If no
permissions are specified, then all permissions are removed, thus removing
the set entirely.
-
-
- zfs
hold
[-r]
tag
snapshot...
- Adds a single reference, named with the
tag argument, to the specified snapshot
or snapshots. Each snapshot has its own tag namespace, and tags must be
unique within that space.
If a hold exists on a snapshot, attempts to destroy that snapshot by using
the zfs destroy
command return EBUSY.
-
-
- -r
- Specifies that a hold with the given tag is applied
recursively to the snapshots of all descendent file systems.
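For example, the following commands place a recursive hold with a hypothetical tag named keep on the @yesterday snapshots of pool/home and its descendents, and later release it:
# zfs hold -r keep pool/home@yesterday
# zfs release -r keep pool/home@yesterday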
-
-
- zfs
holds
[-r]
snapshot...
- Lists all existing user references for the given snapshot
or snapshots.
-
-
- -r
- Lists the holds that are set on the named descendent
snapshots, in addition to listing the holds on the named
snapshot.
-
-
- zfs
release
[-r]
tag
snapshot...
- Removes a single reference, named with the
tag argument, from the specified snapshot
or snapshots. The tag must already exist for each snapshot. If a hold
exists on a snapshot, attempts to destroy that snapshot by using the
zfs destroy
command return EBUSY.
-
-
- -r
- Recursively releases a hold with the given tag on the
snapshots of all descendent file systems.
-
-
- zfs
diff
[-FHt]
snapshot
snapshot|filesystem
- Display the difference between a snapshot of a given
filesystem and another snapshot of that filesystem from a later time or
the current contents of the filesystem. The first column is a character
indicating the type of change, the other columns indicate pathname, new
pathname (in case of rename), change in link count, and optionally file
type and/or change time. The types of change are:
- The path has been removed
+ The path has been created
M The path has been modified
R The path has been renamed
-
-
- -F
- Display an indication of the type of file, in a manner
similar to the -F option of
ls(1).
B Block device
C Character device
/ Directory
> Door
| Named pipe
@ Symbolic link
P Event port
= Socket
F Regular file
-
-
- -H
- Give more parsable tab-separated output, without header
lines and without arrows.
-
-
- -t
- Display the path's inode change time as the first
column of output.
-
-
- zfs
program
[-n]
[-t
timeout]
[-m
memory_limit]
pool script
[arg1
...]
- Executes script as a ZFS
channel program on pool. The ZFS channel
program interface allows ZFS administrative operations to be run
programmatically via a Lua script. The entire script is executed
atomically, with no other administrative operations taking effect
concurrently. A library of ZFS calls is made available to channel program
scripts. Channel programs may only be run with root privileges.
For full documentation of the ZFS channel program interface, see the manual
page for zfs-program(1M).
-
-
- -n
- Executes a read-only channel program, which runs
faster. The program cannot change on-disk state by calling functions
from the zfs.sync submodule. The program can be used to gather
information such as properties and determining if changes would
succeed (zfs.check.*). Without this flag, all pending changes must be
synced to disk before a channel program can complete.
-
-
- -t
timeout
- Execution time limit, in milliseconds. If a channel
program executes for longer than the provided timeout, it will be
stopped and an error will be returned. The default timeout is 1000 ms,
and can be set to a maximum of 10000 ms.
-
-
- -m
memory_limit
- Memory limit, in bytes. If a channel program attempts
to allocate more memory than the given limit, it will be stopped and
an error returned. The default memory limit is 10 MB, and can be set
to a maximum of 100 MB.
All remaining argument strings are passed directly to the channel
program as arguments. See zfs-program(1M)
for more information.
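For example, the following command runs a hypothetical channel program stored at /root/cleanup.zcp against the pool tank in read-only (dry-run) mode with a five second timeout, passing one argument through to the script:
# zfs program -n -t 5000 tank /root/cleanup.zcp tank/home@old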
The
zfs utility exits 0 on success, 1 if an error
occurs, and 2 if invalid command line options were specified.
-
-
- Example
1 Creating a ZFS File System Hierarchy
- The following commands create a file system named
pool/home and a file system named
pool/home/bob. The mount point
/export/home is set for the parent file
system, and is automatically inherited by the child file system.
# zfs create pool/home
# zfs set mountpoint=/export/home pool/home
# zfs create pool/home/bob
-
-
- Example
2 Creating a ZFS Snapshot
- The following command creates a snapshot named
yesterday. This snapshot is mounted on demand
in the .zfs/snapshot directory at the root of
the pool/home/bob file system.
# zfs snapshot pool/home/bob@yesterday
-
-
- Example
3 Creating and Destroying Multiple
Snapshots
- The following command creates snapshots named
yesterday of
pool/home and all of its descendent file
systems. Each snapshot is mounted on demand in the
.zfs/snapshot directory at the root of its
file system. The second command destroys the newly created snapshots.
# zfs snapshot -r pool/home@yesterday
# zfs destroy -r pool/home@yesterday
-
-
- Example
4 Disabling and Enabling File System
Compression
- The following command disables the
compression property for all file systems
under pool/home. The next command explicitly
enables compression for
pool/home/anne.
# zfs set compression=off pool/home
# zfs set compression=on pool/home/anne
-
-
- Example
5 Listing ZFS Datasets
- The following command lists all active file systems and
volumes in the system. Snapshots are displayed if the
listsnaps property is
on. The default is
off. See
zpool(1M) for more information on pool
properties.
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
pool 450K 457G 18K /pool
pool/home 315K 457G 21K /export/home
pool/home/anne 18K 457G 18K /export/home/anne
pool/home/bob 276K 457G 276K /export/home/bob
-
-
- Example
6 Setting a Quota on a ZFS File System
- The following command sets a quota of 50 Gbytes for
pool/home/bob.
# zfs set quota=50G pool/home/bob
-
-
- Example
7 Listing ZFS Properties
- The following command lists all properties for
pool/home/bob.
# zfs get all pool/home/bob
NAME PROPERTY VALUE SOURCE
pool/home/bob type filesystem -
pool/home/bob creation Tue Jul 21 15:53 2009 -
pool/home/bob used 21K -
pool/home/bob available 20.0G -
pool/home/bob referenced 21K -
pool/home/bob compressratio 1.00x -
pool/home/bob mounted yes -
pool/home/bob quota 20G local
pool/home/bob reservation none default
pool/home/bob recordsize 128K default
pool/home/bob mountpoint /pool/home/bob default
pool/home/bob sharenfs off default
pool/home/bob checksum on default
pool/home/bob compression on local
pool/home/bob atime on default
pool/home/bob devices on default
pool/home/bob exec on default
pool/home/bob setuid on default
pool/home/bob readonly off default
pool/home/bob zoned off default
pool/home/bob snapdir hidden default
pool/home/bob aclmode discard default
pool/home/bob aclinherit restricted default
pool/home/bob canmount on default
pool/home/bob xattr on default
pool/home/bob copies 1 default
pool/home/bob version 4 -
pool/home/bob utf8only off -
pool/home/bob normalization none -
pool/home/bob casesensitivity sensitive -
pool/home/bob vscan off default
pool/home/bob nbmand off default
pool/home/bob sharesmb off default
pool/home/bob refquota none default
pool/home/bob refreservation none default
pool/home/bob primarycache all default
pool/home/bob secondarycache all default
pool/home/bob usedbysnapshots 0 -
pool/home/bob usedbydataset 21K -
pool/home/bob usedbychildren 0 -
pool/home/bob usedbyrefreservation 0 -
The following command gets a single property value.
# zfs get -H -o value compression pool/home/bob
on
The following command lists all properties with local settings for
pool/home/bob.
# zfs get -r -s local -o name,property,value all pool/home/bob
NAME PROPERTY VALUE
pool/home/bob quota 20G
pool/home/bob compression on
-
-
- Example
8 Rolling Back a ZFS File System
- The following command reverts the contents of
pool/home/anne to the snapshot named
yesterday, deleting all intermediate
snapshots.
# zfs rollback -r pool/home/anne@yesterday
-
-
- Example
9 Creating a ZFS Clone
- The following command creates a writable file system whose
initial contents are the same as
pool/home/bob@yesterday.
# zfs clone pool/home/bob@yesterday pool/clone
-
-
- Example
10 Promoting a ZFS Clone
- The following commands illustrate how to test out changes
to a file system, and then replace the original file system with the
changed one, using clones, clone promotion, and renaming:
# zfs create pool/project/production
populate /pool/project/production with data
# zfs snapshot pool/project/production@today
# zfs clone pool/project/production@today pool/project/beta
make changes to /pool/project/beta and test them
# zfs promote pool/project/beta
# zfs rename pool/project/production pool/project/legacy
# zfs rename pool/project/beta pool/project/production
once the legacy version is no longer needed, it can be destroyed
# zfs destroy pool/project/legacy
-
-
- Example
11 Inheriting ZFS Properties
- The following command causes
pool/home/bob and
pool/home/anne to inherit the
checksum property from their parent.
# zfs inherit checksum pool/home/bob pool/home/anne
-
-
- Example
12 Remotely Replicating ZFS Data
- The following commands send a full stream and then an
incremental stream to a remote machine, restoring them into
poolB/received/fs@a and
poolB/received/fs@b, respectively.
poolB must contain the file system
poolB/received, and must not initially
contain poolB/received/fs.
# zfs send pool/fs@a | \
ssh host zfs receive poolB/received/fs@a
# zfs send -i a pool/fs@b | \
ssh host zfs receive poolB/received/fs
-
-
- Example
13 Using the zfs receive -d Option
- The following command sends a full stream of
poolA/fsA/fsB@snap to a remote machine,
receiving it into
poolB/received/fsA/fsB@snap. The
fsA/fsB@snap portion of the received
snapshot's name is determined from the name of the sent snapshot.
poolB must contain the file system
poolB/received. If
poolB/received/fsA does not exist, it is
created as an empty file system.
# zfs send poolA/fsA/fsB@snap | \
ssh host zfs receive -d poolB/received
-
-
- Example
14 Setting User Properties
- The following example sets the user-defined
com.example:department property for a
dataset.
# zfs set com.example:department=12345 tank/accounting
-
-
- Example
15 Performing a Rolling Snapshot
- The following example shows how to maintain a history of
snapshots with a consistent naming scheme. To keep a week's worth of
snapshots, the user destroys the oldest snapshot, renames the remaining
snapshots, and then creates a new snapshot, as follows:
# zfs destroy -r pool/users@7daysago
# zfs rename -r pool/users@6daysago @7daysago
# zfs rename -r pool/users@5daysago @6daysago
# zfs rename -r pool/users@4daysago @5daysago
# zfs rename -r pool/users@3daysago @4daysago
# zfs rename -r pool/users@2daysago @3daysago
# zfs rename -r pool/users@yesterday @2daysago
# zfs rename -r pool/users@today @yesterday
# zfs snapshot -r pool/users@today
-
-
- Example
16 Setting sharenfs Property Options on a ZFS File
System
- The following commands show how to set
sharenfs property options to enable
rw access for a set of
IP addresses and to enable root access for
system neo on the
tank/home file system.
# zfs set sharenfs='rw=@123.123.0.0/16,root=neo' tank/home
If you are using DNS for host name resolution,
specify the fully qualified hostname.
-
-
- Example
17 Delegating ZFS Administration Permissions on a ZFS
Dataset
- The following example shows how to set permissions so that
user cindys can create, destroy, mount, and
take snapshots on tank/cindys. The
permissions on tank/cindys are also
displayed.
# zfs allow cindys create,destroy,mount,snapshot tank/cindys
# zfs allow tank/cindys
---- Permissions on tank/cindys --------------------------------------
Local+Descendent permissions:
user cindys create,destroy,mount,snapshot
Because the tank/cindys mount point permission
is set to 755 by default, user cindys will be
unable to mount file systems under
tank/cindys. Add an ACE similar to the
following syntax to provide mount point access:
# chmod A+user:cindys:add_subdirectory:allow /tank/cindys
-
-
- Example
18 Delegating Create Time Permissions on a ZFS
Dataset
- The following example shows how to grant anyone in the
group staff permission to create file systems in
tank/users. This syntax also allows staff
members to destroy their own file systems, but not destroy anyone else's
file system. The permissions on tank/users
are also displayed.
# zfs allow staff create,mount tank/users
# zfs allow -c destroy tank/users
# zfs allow tank/users
---- Permissions on tank/users ---------------------------------------
Permission sets:
destroy
Local+Descendent permissions:
group staff create,mount
-
-
- Example
19 Defining and Granting a Permission Set on a ZFS
Dataset
- The following example shows how to define and grant a
permission set on the tank/users file system.
The permissions on tank/users are also
displayed.
# zfs allow -s @pset create,destroy,snapshot,mount tank/users
# zfs allow staff @pset tank/users
# zfs allow tank/users
---- Permissions on tank/users ---------------------------------------
Permission sets:
@pset create,destroy,mount,snapshot
Local+Descendent permissions:
group staff @pset
-
-
- Example
20 Delegating Property Permissions on a ZFS
Dataset
- The following example shows how to grant the ability to set
quotas and reservations on the users/home
file system. The permissions on users/home
are also displayed.
# zfs allow cindys quota,reservation users/home
# zfs allow users/home
---- Permissions on users/home ---------------------------------------
Local+Descendent permissions:
user cindys quota,reservation
cindys% zfs set quota=10G users/home/marks
cindys% zfs get quota users/home/marks
NAME PROPERTY VALUE SOURCE
users/home/marks quota 10G local
-
-
- Example
21 Removing ZFS Delegated Permissions on a ZFS
Dataset
- The following example shows how to remove the snapshot
permission from the staff group on the
tank/users file system. The permissions on
tank/users are also displayed.
# zfs unallow staff snapshot tank/users
# zfs allow tank/users
---- Permissions on tank/users ---------------------------------------
Permission sets:
@pset create,destroy,mount,snapshot
Local+Descendent permissions:
group staff @pset
-
-
- Example
22 Showing the differences between a snapshot and a ZFS
Dataset
- The following example shows how to see what has changed
between a prior snapshot of a ZFS dataset and its current state. The
-F option is used to indicate type
information for the files affected.
# zfs diff -F tank/test@before tank/test
M / /tank/test/
M F /tank/test/linked (+1)
R F /tank/test/oldname -> /tank/test/newname
- F /tank/test/deleted
+ F /tank/test/created
M F /tank/test/modified
Interface Stability: Committed.
gzip(1),
ssh(1),
mount(1M),
share(1M),
sharemgr(1M),
unshare(1M),
zonecfg(1M),
zpool(1M),
chmod(2),
stat(2),
write(2),
fsync(3C),
dfstab(4),
acl(5),
attributes(5)