Add more details to the description of the fsid_guid dataset property
6333 ZFS should let the user specify or modify the fsid_guid of a dataset
--- old/usr/src/man/man1m/zfs.1m.man.txt
+++ new/usr/src/man/man1m/zfs.1m.man.txt
1 1 ZFS(1M) Maintenance Commands ZFS(1M)
2 2
3 3 NAME
4 4 zfs configures ZFS file systems
5 5
6 6 SYNOPSIS
7 7 zfs [-?]
8 8 zfs create [-p] [-o property=value]... filesystem
9 9 zfs create [-ps] [-b blocksize] [-o property=value]... -V size volume
10 10 zfs destroy [-Rfnprv] filesystem|volume
11 11 zfs destroy [-Rdnprv] filesystem|volume@snap[%snap[,snap[%snap]]]...
12 12 zfs destroy filesystem|volume#bookmark
13 13 zfs snapshot [-r] [-o property=value]...
14 14 filesystem@snapname|volume@snapname...
15 15 zfs rollback [-Rfr] snapshot
16 16 zfs clone [-p] [-o property=value]... snapshot filesystem|volume
17 17 zfs promote clone-filesystem
18 18 zfs rename [-f] filesystem|volume|snapshot filesystem|volume|snapshot
19 19 zfs rename [-fp] filesystem|volume filesystem|volume
20 20 zfs rename -r snapshot snapshot
21 21 zfs list [-r|-d depth] [-Hp] [-o property[,property]...] [-s property]...
22 22 [-S property]... [-t type[,type]...] [filesystem|volume|snapshot]...
23 23 zfs set property=value [property=value]... filesystem|volume|snapshot...
24 24 zfs get [-r|-d depth] [-Hp] [-o field[,field]...] [-s source[,source]...]
25 25 [-t type[,type]...] all | property[,property]...
26 26 filesystem|volume|snapshot...
27 27 zfs inherit [-rS] property filesystem|volume|snapshot...
28 28 zfs upgrade
29 29 zfs upgrade -v
30 30 zfs upgrade [-r] [-V version] -a | filesystem
31 31 zfs userspace [-Hinp] [-o field[,field]...] [-s field]... [-S field]...
32 32 [-t type[,type]...] filesystem|snapshot
33 33 zfs groupspace [-Hinp] [-o field[,field]...] [-s field]... [-S field]...
34 34 [-t type[,type]...] filesystem|snapshot
35 35 zfs mount
36 36 zfs mount [-Ov] [-o options] -a | filesystem
37 37 zfs unmount [-f] -a | filesystem|mountpoint
38 38 zfs share -a | filesystem
39 39 zfs unshare -a | filesystem|mountpoint
40 40 zfs bookmark snapshot bookmark
41 41 zfs send [-DLPRenpv] [[-I|-i] snapshot] snapshot
42 42 zfs send [-Le] [-i snapshot|bookmark] filesystem|volume|snapshot
43 43 zfs send [-Penv] -t receive_resume_token
44 44 zfs receive [-Fnsuv] [-o origin=snapshot] filesystem|volume|snapshot
45 45 zfs receive [-Fnsuv] [-d|-e] [-o origin=snapshot] filesystem
46 46 zfs receive -A filesystem|volume
47 47 zfs allow filesystem|volume
48 48 zfs allow [-dglu] user|group[,user|group]...
49 49 perm|@setname[,perm|@setname]... filesystem|volume
50 50 zfs allow [-dl] -e|everyone perm|@setname[,perm|@setname]...
51 51 filesystem|volume
52 52 zfs allow -c perm|@setname[,perm|@setname]... filesystem|volume
53 53 zfs allow -s @setname perm|@setname[,perm|@setname]... filesystem|volume
54 54 zfs unallow [-dglru] user|group[,user|group]...
55 55 [perm|@setname[,perm|@setname]...] filesystem|volume
56 56 zfs unallow [-dlr] -e|everyone [perm|@setname[,perm|@setname]...]
57 57 filesystem|volume
58 58 zfs unallow [-r] -c [perm|@setname[,perm|@setname]...] filesystem|volume
59 59 zfs unallow [-r] -s -@setname [perm|@setname[,perm|@setname]...]
60 60 filesystem|volume
61 61 zfs hold [-r] tag snapshot...
62 62 zfs holds [-r] snapshot...
63 63 zfs release [-r] tag snapshot...
64 64 zfs diff [-FHt] snapshot snapshot|filesystem
65 65
66 66 DESCRIPTION
67 67 The zfs command configures ZFS datasets within a ZFS storage pool, as
68 68 described in zpool(1M). A dataset is identified by a unique path within
69 69 the ZFS namespace. For example:
70 70
71 71 pool/{filesystem,volume,snapshot}
72 72
73 73 where the maximum length of a dataset name is MAXNAMELEN (256 bytes).
74 74
75 75 A dataset can be one of the following:
76 76
77 77 file system A ZFS dataset of type filesystem can be mounted within the
78 78 standard system namespace and behaves like other file
79 79 systems. While ZFS file systems are designed to be POSIX
80 80 compliant, known issues exist that prevent compliance in
81 81 some cases. Applications that depend on standards
82 82 conformance might fail due to non-standard behavior when
83 83 checking file system free space.
84 84
85 85 volume A logical volume exported as a raw or block device. This
86 86 type of dataset should only be used under special
87 87 circumstances. File systems are typically used in most
88 88 environments.
89 89
90 90 snapshot A read-only version of a file system or volume at a given
91 91 point in time. It is specified as filesystem@name or
92 92 volume@name.
93 93
94 94 ZFS File System Hierarchy
95 95 A ZFS storage pool is a logical collection of devices that provide space
96 96 for datasets. A storage pool is also the root of the ZFS file system
97 97 hierarchy.
98 98
99 99 The root of the pool can be accessed as a file system, such as mounting
100 100 and unmounting, taking snapshots, and setting properties. The physical
101 101 storage characteristics, however, are managed by the zpool(1M) command.
102 102
103 103 See zpool(1M) for more information on creating and administering pools.
104 104
105 105 Snapshots
106 106 A snapshot is a read-only copy of a file system or volume. Snapshots can
107 107 be created extremely quickly, and initially consume no additional space
108 108 within the pool. As data within the active dataset changes, the snapshot
109 109 consumes more data than would otherwise be shared with the active
110 110 dataset.
111 111
112 112 Snapshots can have arbitrary names. Snapshots of volumes can be cloned or
113 113 rolled back, but cannot be accessed independently.
114 114
115 115 File system snapshots can be accessed under the .zfs/snapshot directory
116 116 in the root of the file system. Snapshots are automatically mounted on
117 117 demand and may be unmounted at regular intervals. The visibility of the
118 118 .zfs directory can be controlled by the snapdir property.
119 119
120 120 Clones
121 121 A clone is a writable volume or file system whose initial contents are
122 122 the same as another dataset. As with snapshots, creating a clone is
123 123 nearly instantaneous, and initially consumes no additional space.
124 124
125 125 Clones can only be created from a snapshot. When a snapshot is cloned, it
126 126 creates an implicit dependency between the parent and child. Even though
127 127 the clone is created somewhere else in the dataset hierarchy, the
128 128 original snapshot cannot be destroyed as long as a clone exists. The
129 129 origin property exposes this dependency, and the destroy command lists
130 130 any such dependencies, if they exist.
131 131
132 132 The clone parent-child dependency relationship can be reversed by using
133 133 the promote subcommand. This causes the "origin" file system to become a
134 134 clone of the specified file system, which makes it possible to destroy
135 135 the file system that the clone was created from.
136 136
137 137 Mount Points
138 138 Creating a ZFS file system is a simple operation, so the number of file
139 139 systems per system is likely to be numerous. To cope with this, ZFS
140 140 automatically manages mounting and unmounting file systems without the
141 141 need to edit the /etc/vfstab file. All automatically managed file systems
142 142 are mounted by ZFS at boot time.
143 143
144 144 By default, file systems are mounted under /path, where path is the name
145 145 of the file system in the ZFS namespace. Directories are created and
146 146 destroyed as needed.
147 147
148 148 A file system can also have a mount point set in the mountpoint property.
149 149 This directory is created as needed, and ZFS automatically mounts the
150 150 file system when the zfs mount -a command is invoked (without editing
151 151 /etc/vfstab). The mountpoint property can be inherited, so if pool/home
152 152 has a mount point of /export/stuff, then pool/home/user automatically
153 153 inherits a mount point of /export/stuff/user.
154 154
155 155 A file system mountpoint property of none prevents the file system from
156 156 being mounted.
157 157
158 158 If needed, ZFS file systems can also be managed with traditional tools
159 159 (mount, umount, /etc/vfstab). If a file system's mount point is set to
160 160 legacy, ZFS makes no attempt to manage the file system, and the
161 161 administrator is responsible for mounting and unmounting the file system.
162 162
163 163 Zones
164 164 A ZFS file system can be added to a non-global zone by using the zonecfg
165 165 add fs subcommand. A ZFS file system that is added to a non-global zone
166 166 must have its mountpoint property set to legacy.
167 167
168 168 The physical properties of an added file system are controlled by the
169 169 global administrator. However, the zone administrator can create, modify,
170 170 or destroy files within the added file system, depending on how the file
171 171 system is mounted.
172 172
173 173 A dataset can also be delegated to a non-global zone by using the zonecfg
174 174 add dataset subcommand. You cannot delegate a dataset to one zone and the
175 175 children of the same dataset to another zone. The zone administrator can
176 176 change properties of the dataset or any of its children. However, the
177 177 quota, filesystem_limit and snapshot_limit properties of the delegated
178 178 dataset can be modified only by the global administrator.
179 179
180 180 A ZFS volume can be added as a device to a non-global zone by using the
181 181 zonecfg add device subcommand. However, its physical properties can be
182 182 modified only by the global administrator.
183 183
184 184 For more information about zonecfg syntax, see zonecfg(1M).
185 185
186 186 After a dataset is delegated to a non-global zone, the zoned property is
187 187 automatically set. A zoned file system cannot be mounted in the global
188 188 zone, since the zone administrator might have to set the mount point to
189 189 an unacceptable value.
190 190
191 191 The global administrator can forcibly clear the zoned property, though
192 192 this should be done with extreme care. The global administrator should
193 193 verify that all the mount points are acceptable before clearing the
194 194 property.
195 195
196 196 Native Properties
197 197 Properties are divided into two types, native properties and user-defined
198 198 (or "user") properties. Native properties either export internal
199 199 statistics or control ZFS behavior. In addition, native properties are
200 200 either editable or read-only. User properties have no effect on ZFS
201 201 behavior, but you can use them to annotate datasets in a way that is
202 202 meaningful in your environment. For more information about user
203 203 properties, see the User Properties section, below.
204 204
205 205 Every dataset has a set of properties that export statistics about the
206 206 dataset as well as control various behaviors. Properties are inherited
207 207 from the parent unless overridden by the child. Some properties apply
208 208 only to certain types of datasets (file systems, volumes, or snapshots).
209 209
210 210 The values of numeric properties can be specified using human-readable
211 211 suffixes (for example, k, KB, M, Gb, and so forth, up to Z for
212 212 zettabyte). The following are all valid (and equal) specifications:
213 213 1536M, 1.5g, 1.50GB.
214 214
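     For instance, assuming a hypothetical file system pool/home, the
     following commands all set the same 1.5 Gbyte quota:

         # zfs set quota=1536M pool/home
         # zfs set quota=1.5g pool/home
         # zfs set quota=1.50GB pool/home
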
215 215 The values of non-numeric properties are case sensitive and must be
216 216 lowercase, except for mountpoint, sharenfs, and sharesmb.
217 217
218 218 The following native properties consist of read-only statistics about the
219 219 dataset. These properties can be neither set, nor inherited. Native
220 220 properties apply to all dataset types unless otherwise noted.
221 221
222 222 available The amount of space available to the dataset and
223 223 all its children, assuming that there is no other
224 224 activity in the pool. Because space is shared
225 225 within a pool, availability can be limited by any
226 226 number of factors, including physical pool size,
227 227 quotas, reservations, or other datasets within the
228 228 pool.
229 229
230 230 This property can also be referred to by its
231 231 shortened column name, avail.
232 232
233 233 compressratio For non-snapshots, the compression ratio achieved
234 234 for the used space of this dataset, expressed as a
235 235 multiplier. The used property includes descendant
236 236 datasets, and, for clones, does not include the
237 237 space shared with the origin snapshot. For
238 238 snapshots, the compressratio is the same as the
239 239 refcompressratio property. Compression can be
240 240 turned on by running: zfs set compression=on
241 241 dataset. The default value is off.
242 242
243 243 creation The time this dataset was created.
244 244
245 245 clones For snapshots, this property is a comma-separated
246 246 list of filesystems or volumes which are clones of
247 247 this snapshot. The clones' origin property is this
248 248 snapshot. If the clones property is not empty, then
249 249 this snapshot can not be destroyed (even with the
250 250 -r or -f options).
251 251
252 252 defer_destroy This property is on if the snapshot has been marked
253 253 for deferred destroy by using the zfs destroy -d
254 254 command. Otherwise, the property is off.
255 255
256 256 filesystem_count The total number of filesystems and volumes that
257 257 exist under this location in the dataset tree. This
258 258 value is only available when a filesystem_limit has
259 259 been set somewhere in the tree under which the
260 260 dataset resides.
261 261
262 262 logicalreferenced The amount of space that is "logically" accessible
263 263 by this dataset. See the referenced property. The
264 264 logical space ignores the effect of the compression
265 265 and copies properties, giving a quantity closer to
266 266 the amount of data that applications see. However,
267 267 it does include space consumed by metadata.
268 268
269 269 This property can also be referred to by its
270 270 shortened column name, lrefer.
271 271
272 272 logicalused The amount of space that is "logically" consumed by
273 273 this dataset and all its descendents. See the used
274 274 property. The logical space ignores the effect of
275 275 the compression and copies properties, giving a
276 276 quantity closer to the amount of data that
277 277 applications see. However, it does include space
278 278 consumed by metadata.
279 279
280 280 This property can also be referred to by its
281 281 shortened column name, lused.
282 282
283 283 mounted For file systems, indicates whether the file system
284 284 is currently mounted. This property can be either
285 285 yes or no.
286 286
287 287 origin For cloned file systems or volumes, the snapshot
288 288 from which the clone was created. See also the
289 289 clones property.
290 290
291 291 receive_resume_token For filesystems or volumes which have saved
292 292 partially-completed state from zfs receive -s, this
293 293 opaque token can be provided to zfs send -t to
294 294 resume and complete the zfs receive.
295 295
296 296 referenced The amount of data that is accessible by this
297 297 dataset, which may or may not be shared with other
298 298 datasets in the pool. When a snapshot or clone is
299 299 created, it initially references the same amount of
300 300 space as the file system or snapshot it was created
301 301 from, since its contents are identical.
302 302
303 303 This property can also be referred to by its
304 304 shortened column name, refer.
305 305
306 306 refcompressratio The compression ratio achieved for the referenced
307 307 space of this dataset, expressed as a multiplier.
308 308 See also the compressratio property.
309 309
310 310 snapshot_count The total number of snapshots that exist under this
311 311 location in the dataset tree. This value is only
312 312 available when a snapshot_limit has been set
313 313 somewhere in the tree under which the dataset
314 314 resides.
315 315
316 316 type The type of dataset: filesystem, volume, or
317 317 snapshot.
318 318
319 319 used The amount of space consumed by this dataset and
320 320 all its descendents. This is the value that is
321 321 checked against this dataset's quota and
322 322 reservation. The space used does not include this
323 323 dataset's reservation, but does take into account
324 324 the reservations of any descendent datasets. The
325 325 amount of space that a dataset consumes from its
326 326 parent, as well as the amount of space that is
327 327 freed if this dataset is recursively destroyed, is
328 328 the greater of its space used and its reservation.
329 329
330 330 When snapshots (see the Snapshots section) are
331 331 created, their space is initially shared between
332 332 the snapshot and the file system, and possibly with
333 333 previous snapshots. As the file system changes,
334 334 space that was previously shared becomes unique to
335 335 the snapshot, and counted in the snapshot's space
336 336 used. Additionally, deleting snapshots can increase
337 337 the amount of space unique to (and used by) other
338 338 snapshots.
339 339
340 340 The amount of space used, available, or referenced
341 341 does not take into account pending changes. Pending
342 342 changes are generally accounted for within a few
343 343 seconds. Committing a change to a disk using
344 344 fsync(3C) or O_SYNC does not necessarily guarantee
345 345 that the space usage information is updated
346 346 immediately.
347 347
348 348 usedby* The usedby* properties decompose the used
349 349 properties into the various reasons that space is
350 350 used. Specifically, used = usedbychildren +
351 351 usedbydataset + usedbyrefreservation +
352 352 usedbysnapshots. These properties are only
353 353 available for datasets created on zpool "version
354 354 13" pools.
355 355
356 356 usedbychildren The amount of space used by children of this
357 357 dataset, which would be freed if all the dataset's
358 358 children were destroyed.
359 359
360 360 usedbydataset The amount of space used by this dataset itself,
361 361 which would be freed if the dataset were destroyed
362 362 (after first removing any refreservation and
363 363 destroying any necessary snapshots or descendents).
364 364
365 365 usedbyrefreservation The amount of space used by a refreservation set on
366 366 this dataset, which would be freed if the
367 367 refreservation was removed.
368 368
369 369 usedbysnapshots The amount of space consumed by snapshots of this
370 370 dataset. In particular, it is the amount of space
371 371 that would be freed if all of this dataset's
372 372 snapshots were destroyed. Note that this is not
373 373 simply the sum of the snapshots' used properties
374 374 because space can be shared by multiple snapshots.
375 375
376 376 userused@user The amount of space consumed by the specified user
377 377 in this dataset. Space is charged to the owner of
378 378 each file, as displayed by ls -l. The amount of
379 379 space charged is displayed by du and ls -s. See
380 380 the zfs userspace subcommand for more information.
381 381
382 382 Unprivileged users can access only their own space
383 383 usage. The root user, or a user who has been
384 384 granted the userused privilege with zfs allow, can
385 385 access everyone's usage.
386 386
387 387 The userused@... properties are not displayed by
388 388 zfs get all. The user's name must be appended
389 389 after the @ symbol, using one of the following
390 390 forms:
391 391
392 392 POSIX name (for example, joe)
393 393
394 394 POSIX numeric ID (for example, 789)
395 395
396 396 SID name (for example, joe.smith@mydomain)
397 397
398 398 SID numeric ID (for example, S-1-123-456-789)
399 399
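     For example, with a hypothetical user joe and file system pool/home,
     either of the following reports the space charged to that user; the
     second form uses the zfs userspace subcommand mentioned above:

         # zfs get userused@joe pool/home
         # zfs userspace -o type,name,used pool/home
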
400 400 userrefs This property is set to the number of user holds on
401 401 this snapshot. User holds are set by using the zfs
402 402 hold command.
403 403
404 404 groupused@group The amount of space consumed by the specified group
405 405 in this dataset. Space is charged to the group of
406 406 each file, as displayed by ls -l. See the
407 407 userused@user property for more information.
408 408
409 409 Unprivileged users can only access their own
410 410 groups' space usage. The root user, or a user who
411 411 has been granted the groupused privilege with zfs
412 412 allow, can access all groups' usage.
413 413
414 414 volblocksize=blocksize
415 415 For volumes, specifies the block size of the
416 416 volume. The blocksize cannot be changed once the
417 417 volume has been written, so it should be set at
418 418 volume creation time. The default blocksize for
419 419 volumes is 8 Kbytes. Any power of 2 from 512 bytes
420 420 to 128 Kbytes is valid.
421 421
422 422 This property can also be referred to by its
423 423 shortened column name, volblock.
424 424
425 425 written The amount of referenced space written to this
426 426 dataset since the previous snapshot.
427 427
428 428 written@snapshot The amount of referenced space written to this
429 429 dataset since the specified snapshot. This is the
430 430 space that is referenced by this dataset but was
431 431 not referenced by the specified snapshot.
432 432
433 433 The snapshot may be specified as a short snapshot
434 434 name (just the part after the @), in which case it
435 435 will be interpreted as a snapshot in the same
436 436 filesystem as this dataset. The snapshot may be a
437 437 full snapshot name (filesystem@snapshot), which for
438 438 clones may be a snapshot in the origin's filesystem
439 439 (or the origin of the origin's filesystem, etc.)
440 440
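     As a hypothetical example, given a snapshot pool/home@monday, the
     space written to pool/home since that snapshot could be queried with:

         # zfs get written@monday pool/home
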
441 441 The following native properties can be used to change the behavior of a
442 442 ZFS dataset.
443 443
444 444 aclinherit=discard|noallow|restricted|passthrough|passthrough-x
445 445 Controls how ACEs are inherited when files and directories are created.
446 446
447 447 discard does not inherit any ACEs.
448 448
449 449 noallow only inherits inheritable ACEs that specify "deny"
450 450 permissions.
451 451
452 452 restricted default, removes the write_acl and write_owner
453 453 permissions when the ACE is inherited.
454 454
455 455 passthrough inherits all inheritable ACEs without any modifications.
456 456
457 457 passthrough-x same meaning as passthrough, except that the owner@,
458 458 group@, and everyone@ ACEs inherit the execute
459 459 permission only if the file creation mode also requests
460 460 the execute bit.
461 461
462 462 When the property value is set to passthrough, files are created with a
463 463 mode determined by the inheritable ACEs. If no inheritable ACEs exist
464 464 that affect the mode, then the mode is set in accordance to the
465 465 requested mode from the application.
466 466
467 467 aclmode=discard|groupmask|passthrough|restricted
468 468 Controls how an ACL is modified during chmod(2).
469 469
470 470 discard default, deletes all ACEs that do not represent the mode
471 471 of the file.
472 472
473 473 groupmask reduces permissions granted in all ALLOW entries found in
474 474 the ACL such that they are no greater than the group
475 475 permissions specified by chmod(2).
476 476
477 477 passthrough indicates that no changes are made to the ACL other than
478 478 creating or updating the necessary ACEs to represent the
479 479 new mode of the file or directory.
480 480
481 481 restricted causes the chmod(2) operation to return an error when used
482 482 on any file or directory which has a non-trivial ACL whose
483 483 entries can not be represented by a mode.
484 484
485 485 chmod(2) is required to change the set user ID, set group ID, or sticky
486 486 bits on a file or directory, as they do not have equivalent ACEs. In
487 487 order to use chmod(2) on a file or directory with a non-trivial ACL when
488 488 aclmode is set to restricted, you must first remove all ACEs which do
489 489 not represent the current mode.
490 490
491 491 atime=on|off
492 492 Controls whether the access time for files is updated when they are
493 493 read. Turning this property off avoids producing write traffic when
494 494 reading files and can result in significant performance gains, though
495 495 it might confuse mailers and other similar utilities. The default value
496 496 is on.
497 497
498 498 canmount=on|off|noauto
499 499 If this property is set to off, the file system cannot be mounted, and
500 500 is ignored by zfs mount -a. Setting this property to off is similar to
501 501 setting the mountpoint property to none, except that the dataset still
502 502 has a normal mountpoint property, which can be inherited. Setting this
503 503 property to off allows datasets to be used solely as a mechanism to
504 504 inherit properties. One example of setting canmount=off is to have two
505 505 datasets with the same mountpoint, so that the children of both
506 506 datasets appear in the same directory, but might have different
507 507 inherited characteristics.
508 508
509 509 When set to noauto, a dataset can only be mounted and unmounted
510 510 explicitly. The dataset is not mounted automatically when the dataset
511 511 is created or imported, nor is it mounted by the zfs mount -a command
512 512 or unmounted by the zfs unmount -a command.
513 513
514 514 This property is not inherited.
515 515
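     A minimal sketch of that pattern, using hypothetical dataset names:
     neither parent is ever mounted, but their children share one
     directory while inheriting different characteristics:

         # zfs create -o canmount=off -o mountpoint=/export/home \
               -o compression=on pool/a
         # zfs create -o canmount=off -o mountpoint=/export/home \
               -o compression=off pool/b
         # zfs create pool/a/alice
         # zfs create pool/b/bob

     Here pool/a/alice mounts at /export/home/alice with compression=on,
     and pool/b/bob mounts at /export/home/bob with compression=off.
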
516 516 checksum=on|off|fletcher2|fletcher4|sha256|noparity|sha512|skein|edonr
517 517 Controls the checksum used to verify data integrity. The default value
518 518 is on, which automatically selects an appropriate algorithm (currently,
519 519 fletcher4, but this may change in future releases). The value off
520 520 disables integrity checking on user data. The value noparity not only
521 521 disables integrity but also disables maintaining parity for user data.
522 522 This setting is used internally by a dump device residing on a RAID-Z
523 523 pool and should not be used by any other dataset. Disabling checksums
524 524 is NOT a recommended practice.
525 525
526 526 The sha512, skein, and edonr checksum algorithms require enabling the
527 527 appropriate features on the pool. Please see zpool-features(5) for more
528 528 information on these algorithms.
529 529
530 530 Changing this property affects only newly-written data.
531 531
532 532 compression=on|off|gzip|gzip-N|lz4|lzjb|zle
533 533 Controls the compression algorithm used for this dataset.
534 534
535 535 Setting compression to on indicates that the current default
536 536 compression algorithm should be used. The default balances compression
537 537 and decompression speed with compression ratio, and is expected to work
538 538 well on a wide variety of workloads. Unlike all other settings for
539 539 this property, on does not select a fixed compression type. As new
540 540 compression algorithms are added to ZFS and enabled on a pool, the
541 541 default compression algorithm may change. The current default
542 542 compression algorithm is either lzjb or, if the lz4_compress feature is
543 543 enabled, lz4.
544 544
545 545 The lz4 compression algorithm is a high-performance replacement for the
546 546 lzjb algorithm. It features significantly faster compression and
547 547 decompression, as well as a moderately higher compression ratio than
548 548 lzjb, but can only be used on pools with the lz4_compress feature set
549 549 to enabled. See zpool-features(5) for details on ZFS feature flags and
550 550 the lz4_compress feature.
551 551
552 552 The lzjb compression algorithm is optimized for performance while
553 553 providing decent data compression.
554 554
555 555 The gzip compression algorithm uses the same compression as the gzip(1)
556 556 command. You can specify the gzip level by using the value gzip-N,
557 557 where N is an integer from 1 (fastest) to 9 (best compression ratio).
558 558 Currently, gzip is equivalent to gzip-6 (which is also the default for
559 559 gzip(1)).
560 560
561 561 The zle compression algorithm compresses runs of zeros.
562 562
563 563 This property can also be referred to by its shortened column name
564 564 compress. Changing this property affects only newly-written data.
565 565
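     A brief illustration, assuming a pool with the lz4_compress feature
     enabled and a hypothetical dataset pool/data:

         # zfs set compression=lz4 pool/data
         # zfs get compression,compressratio pool/data
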
566 566 copies=1|2|3
567 567 Controls the number of copies of data stored for this dataset. These
568 568 copies are in addition to any redundancy provided by the pool, for
569 569 example, mirroring or RAID-Z. The copies are stored on different disks,
570 570 if possible. The space used by multiple copies is charged to the
571 571 associated file and dataset, changing the used property and counting
572 572 against quotas and reservations.
573 573
574 574 Changing this property only affects newly-written data. Therefore, set
575 575 this property at file system creation time by using the -o copies=N
576 576 option.
577 577
578 578 devices=on|off
579 579 Controls whether device nodes can be opened on this file system. The
580 580 default value is on.
581 581
582 582 exec=on|off
583 583 Controls whether processes can be executed from within this file
584 584 system. The default value is on.
585 585
586 586 filesystem_limit=count|none
587 587 Limits the number of filesystems and volumes that can exist under this
588 588 point in the dataset tree. The limit is not enforced if the user is
589 589 allowed to change the limit. Setting a filesystem_limit to on a
590 590 descendent of a filesystem that already has a filesystem_limit does not
591 591 override the ancestor's filesystem_limit, but rather imposes an
592 592 additional limit. This feature must be enabled to be used (see
593 593 zpool-features(5)).
594 594
595 + fsid_guid=value
596 + Sets the fsid_guid of the dataset. The fsid_guid is a 64-bit unsigned
597 + integer used to construct the vfs id when the dataset is mounted. Set
598 + this property only if you need the vfs id to be identical on two
599 + systems, for example in an NFS migration scenario. When the fsid_guid
600 + of a file system is changed, the file system and any children that
601 + inherit the mount point are unmounted, then remounted. If the file
602 + system was shared, existing NFS clients will need to remount it.
603 +
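     A hedged example: the dataset name and value below are hypothetical,
     and assume the property accepts a 64-bit unsigned integer as
     described above, for example when preparing an NFS migration target:

         # zfs set fsid_guid=12345678901234567890 tank/export/home
         # zfs get fsid_guid tank/export/home
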
595 604 mountpoint=path|none|legacy
596 605 Controls the mount point used for this file system. See the Mount
597 606 Points section for more information on how this property is used.
598 607
599 608 When the mountpoint property is changed for a file system, the file
600 609 system and any children that inherit the mount point are unmounted. If
601 610 the new value is legacy, then they remain unmounted. Otherwise, they
602 611 are automatically remounted in the new location if the property was
603 612 previously legacy or none, or if they were mounted before the property
604 613 was changed. In addition, any shared file systems are unshared and
605 614 shared in the new location.
606 615
607 616 nbmand=on|off
608 617 Controls whether the file system should be mounted with nbmand (Non
609 618 Blocking mandatory locks). This is used for SMB clients. Changes to
610 619 this property only take effect when the file system is umounted and
611 620 remounted. See mount(1M) for more information on nbmand mounts.
612 621
613 622 primarycache=all|none|metadata
614 623 Controls what is cached in the primary cache (ARC). If this property
615 624 is set to all, then both user data and metadata is cached. If this
616 625 property is set to none, then neither user data nor metadata is cached.
617 626 If this property is set to metadata, then only metadata is cached. The
618 627 default value is all.
619 628
620 629 quota=size|none
621 630 Limits the amount of space a dataset and its descendents can consume.
622 631 This property enforces a hard limit on the amount of space used. This
623 632 includes all space consumed by descendents, including file systems and
624 633 snapshots. Setting a quota on a descendent of a dataset that already
625 634 has a quota does not override the ancestor's quota, but rather imposes
626 635 an additional limit.
627 636
628 637 Quotas cannot be set on volumes, as the volsize property acts as an
629 638 implicit quota.
630 639
631 640 snapshot_limit=count|none
632 641 Limits the number of snapshots that can be created on a dataset and its
633 642 descendents. Setting a snapshot_limit on a descendent of a dataset that
634 643 already has a snapshot_limit does not override the ancestor's
635 644 snapshot_limit, but rather imposes an additional limit. The limit is
636 645 not enforced if the user is allowed to change the limit. For example,
637 646 this means that recursive snapshots taken from the global zone are
638 647 counted against each delegated dataset within a zone. This feature must
639 648 be enabled to be used (see zpool-features(5)).
640 649
641 650 userquota@user=size|none
642 651 Limits the amount of space consumed by the specified user. User space
643 652 consumption is identified by the userused@user property.
644 653
645 654 Enforcement of user quotas may be delayed by several seconds. This
646 655 delay means that a user might exceed their quota before the system
647 656 notices that they are over quota and begins to refuse additional writes
648 657 with the EDQUOT error message. See the zfs userspace subcommand for
649 658 more information.
650 659
651 660 Unprivileged users can only access their own space usage. The
652 661 root user, or a user who has been granted the userquota privilege with
653 662 zfs allow, can get and set everyone's quota.
654 663
655 664 This property is not available on volumes, on file systems before
656 665 version 4, or on pools before version 15. The userquota@... properties
657 666 are not displayed by zfs get all. The user's name must be appended
658 667 after the @ symbol, using one of the following forms:
659 668
660 669 POSIX name (for example, joe)
661 670
662 671 POSIX numeric ID (for example, 789)
663 672
664 673 SID name (for example, joe.smith@mydomain)
665 674
666 675 SID numeric ID (for example, S-1-123-456-789)
667 676
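     For instance, with a hypothetical user joe and file system pool/home:

         # zfs set userquota@joe=50G pool/home
         # zfs get userquota@joe pool/home
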
668 677 groupquota@group=size|none
669 678 Limits the amount of space consumed by the specified group. Group space
670 679 consumption is identified by the groupused@group property.
671 680
672 681 Unprivileged users can access only their own groups' space usage. The
673 682 root user, or a user who has been granted the groupquota privilege with
674 683 zfs allow, can get and set all groups' quotas.
675 684
676 685 readonly=on|off
677 686 Controls whether this dataset can be modified. The default value is
678 687 off.
679 688
680 689 This property can also be referred to by its shortened column name,
681 690 rdonly.
682 691
683 692 recordsize=size
684 693 Specifies a suggested block size for files in the file system. This
685 694 property is designed solely for use with database workloads that access
686 695 files in fixed-size records. ZFS automatically tunes block sizes
687 696 according to internal algorithms optimized for typical access patterns.
688 697
689 698 For databases that create very large files but access them in small
690 699 random chunks, these algorithms may be suboptimal. Specifying a
691 700 recordsize greater than or equal to the record size of the database can
692 701 result in significant performance gains. Use of this property for
693 702 general purpose file systems is strongly discouraged, and may adversely
694 703 affect performance.
695 704
696 705 The size specified must be a power of two greater than or equal to 512
697 706 and less than or equal to 128 Kbytes. If the large_blocks feature is
698 707 enabled on the pool, the size may be up to 1 Mbyte. See
699 708 zpool-features(5) for details on ZFS feature flags.
700 709
701 710 Changing the file system's recordsize affects only files created
702 711 afterward; existing files are unaffected.
703 712
704 713 This property can also be referred to by its shortened column name,
705 714 recsize.
706 715
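     A sketch for a database doing fixed 16 Kbyte records, using a
     hypothetical dataset tank/db; setting the property at creation time
     ensures every file is written with the matching block size:

         # zfs create -o recordsize=16K tank/db
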
707 716 redundant_metadata=all|most
708 717 Controls what types of metadata are stored redundantly. ZFS stores an
709 718 extra copy of metadata, so that if a single block is corrupted, the
710 719 amount of user data lost is limited. This extra copy is in addition to
711 720 any redundancy provided at the pool level (e.g. by mirroring or
712 721 RAID-Z), and is in addition to an extra copy specified by the copies
713 722 property (up to a total of 3 copies). For example if the pool is
714 723 mirrored, copies=2, and redundant_metadata=most, then ZFS stores 6
715 724 copies of most metadata, and 4 copies of data and some metadata.
716 725
717 726 When set to all, ZFS stores an extra copy of all metadata. If a single
718 727 on-disk block is corrupt, at worst a single block of user data (which is
719 728 recordsize bytes long) can be lost.
720 729
721 730 When set to most, ZFS stores an extra copy of most types of metadata.
722 731 This can improve performance of random writes, because less metadata
723 732 must be written. In practice, at worst about 100 blocks (of recordsize
724 733 bytes each) of user data can be lost if a single on-disk block is
725 734 corrupt. The exact behavior of which metadata blocks are stored
726 735 redundantly may change in future releases.
727 736
728 737 The default value is all.
729 738
730 739 refquota=size|none
731 740 Limits the amount of space a dataset can consume. This property
732 741 enforces a hard limit on the amount of space used. This hard limit does
733 742 not include space used by descendents, including file systems and
734 743 snapshots.
735 744
736 745 refreservation=size|none
737 746 The minimum amount of space guaranteed to a dataset, not including its
738 747 descendents. When the amount of space used is below this value, the
739 748 dataset is treated as if it were taking up the amount of space
740 749 specified by refreservation. The refreservation reservation is
741 750 accounted for in the parent datasets' space used, and counts against
742 751 the parent datasets' quotas and reservations.
743 752
744 753 If refreservation is set, a snapshot is only allowed if there is enough
745 754 free pool space outside of this reservation to accommodate the current
746 755 number of "referenced" bytes in the dataset.
747 756
748 757 This property can also be referred to by its shortened column name,
749 758 refreserv.
750 759
751 760 reservation=size|none
752 761 The minimum amount of space guaranteed to a dataset and its
753 762 descendents. When the amount of space used is below this value, the
754 763 dataset is treated as if it were taking up the amount of space
755 764 specified by its reservation. Reservations are accounted for in the
756 765 parent datasets' space used, and count against the parent datasets'
757 766 quotas and reservations.
758 767
759 768 This property can also be referred to by its shortened column name,
760 769 reserv.
761 770
762 771 secondarycache=all|none|metadata
763 772 Controls what is cached in the secondary cache (L2ARC). If this
764 773 property is set to all, then both user data and metadata is cached. If
765 774 this property is set to none, then neither user data nor metadata is
766 775 cached. If this property is set to metadata, then only metadata is
767 776 cached. The default value is all.
768 777
769 778 setuid=on|off
770 779 Controls whether the setuid bit is respected for the file system. The
771 780 default value is on.
772 781
773 782 sharesmb=on|off|opts
774 783 Controls whether the file system is shared via SMB, and what options
775 784 are to be used. A file system with the sharesmb property set to off is
776 785 managed through traditional tools such as sharemgr(1M). Otherwise, the
777 786 file system is automatically shared and unshared with the zfs share and
778 787 zfs unshare commands. If the property is set to on, the sharemgr(1M)
779 788 command is invoked with no options. Otherwise, the sharemgr(1M) command
780 789 is invoked with options equivalent to the contents of this property.
781 790
782 791 Because SMB shares require a resource name, a unique resource name is
783 792 constructed from the dataset name. The constructed name is a copy of
784 793 the dataset name except that the characters in the dataset name, which
785 794 would be illegal in the resource name, are replaced with underscore (_)
786 795 characters. A pseudo property "name" is also supported that allows you
787 796 to replace the data set name with a specified name. The specified name
788 797 is then used to replace the prefix dataset in the case of inheritance.
789 798 For example, if the dataset data/home/john is set to name=john, then
790 799 data/home/john has a resource name of john. If a child dataset
791 800 data/home/john/backups is shared, it has a resource name of
792 801 john_backups.
793 802
794 803 When SMB shares are created, the SMB share name appears as an entry in
795 804 the .zfs/shares directory. You can use the ls or chmod command to
796 805 display the share-level ACLs on the entries in this directory.
797 806
798 807 When the sharesmb property is changed for a dataset, the dataset and
799 808 any children inheriting the property are re-shared with the new options,
800 809 only if the property was previously set to off, or if they were shared
801 810 before the property was changed. If the new property is set to off, the
802 811 file systems are unshared.
803 812
804 813 sharenfs=on|off|opts
805 814 Controls whether the file system is shared via NFS, and what options
806 815 are to be used. A file system with a sharenfs property of off is
807 816 managed through traditional tools such as share(1M), unshare(1M), and
808 817 dfstab(4). Otherwise, the file system is automatically shared and
809 818 unshared with the zfs share and zfs unshare commands. If the property
810 819 is set to on, the share(1M) command is invoked with no options. Otherwise,
811 820 the share(1M) command is invoked with options equivalent to the
812 821 contents of this property.
813 822
814 823 When the sharenfs property is changed for a dataset, the dataset and
815 824 any children inheriting the property are re-shared with the new options,
816 825 only if the property was previously off, or if they were shared before
817 826 the property was changed. If the new property is off, the file systems
818 827 are unshared.
819 828
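     For example (the dataset name, network, and host are hypothetical;
     the option string uses share(1M) syntax):

         # zfs set sharenfs=on tank/home
         # zfs set sharenfs='rw=@123.123.0.0/16,root=neo' tank/home
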
820 829 logbias=latency|throughput
821 830 Provide a hint to ZFS about handling of synchronous requests in this
822 831 dataset. If logbias is set to latency (the default), ZFS will use pool
823 832 log devices (if configured) to handle the requests at low latency. If
824 833 logbias is set to throughput, ZFS will not use configured pool log
825 834 devices. ZFS will instead optimize synchronous operations for global
826 835 pool throughput and efficient use of resources.
827 836
828 837 snapdir=hidden|visible
829 838 Controls whether the .zfs directory is hidden or visible in the root of
830 839 the file system as discussed in the Snapshots section. The default
831 840 value is hidden.
832 841
833 842 sync=standard|always|disabled
834 843 Controls the behavior of synchronous requests (e.g. fsync, O_DSYNC).
835 844 standard is the POSIX specified behavior of ensuring all synchronous
836 845 requests are written to stable storage and all devices are flushed to
837 846 ensure data is not cached by device controllers (this is the default).
838 847 always causes every file system transaction to be written and flushed
839 848 before its system call returns. This has a large performance penalty.
840 849 disabled disables synchronous requests. File system transactions are
841 850 only committed to stable storage periodically. This option will give
842 851 the highest performance. However, it is very dangerous as ZFS would be
843 852 ignoring the synchronous transaction demands of applications such as
844 853 databases or NFS. Administrators should only use this option when the
845 854 risks are understood.
846 855
847 856 version=N|current
848 857 The on-disk version of this file system, which is independent of the
849 858 pool version. This property can only be set to later supported
850 859 versions. See the zfs upgrade command.
851 860
852 861 volsize=size
853 862 For volumes, specifies the logical size of the volume. By default,
854 863 creating a volume establishes a reservation of equal size. For storage
855 864 pools with a version number of 9 or higher, a refreservation is set
856 865 instead. Any changes to volsize are reflected in an equivalent change
857 866 to the reservation (or refreservation). The volsize can only be set to
858 867 a multiple of volblocksize, and cannot be zero.
859 868
860 869 The reservation is kept equal to the volume's logical size to prevent
861 870 unexpected behavior for consumers. Without the reservation, the volume
862 871 could run out of space, resulting in undefined behavior or data
863 872 corruption, depending on how the volume is used. These effects can also
864 873 occur when the volume size is changed while it is in use (particularly
865 874 when shrinking the size). Extreme care should be used when adjusting
866 875 the volume size.
867 876
868 877 Though not recommended, a "sparse volume" (also known as "thin
869 878 provisioning") can be created by specifying the -s option to the zfs
870 879 create -V command, or by changing the reservation after the volume has
871 880 been created. A "sparse volume" is a volume where the reservation is
872 881 less than the volume size. Consequently, writes to a sparse volume can
873 882 fail with ENOSPC when the pool is low on space. For a sparse volume,
874 883 changes to volsize are not reflected in the reservation.
875 884
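     To illustrate with hypothetical names and sizes, the first command
     creates a sparse volume and the second later grows its logical size:

         # zfs create -s -V 100G tank/vol
         # zfs set volsize=150G tank/vol
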
876 885 vscan=on|off
877 886 Controls whether regular files should be scanned for viruses when a
878 887 file is opened and closed. In addition to enabling this property, the
879 888 virus scan service must also be enabled for virus scanning to occur.
880 889 The default value is off.
881 890
882 891 xattr=on|off
883 892 Controls whether extended attributes are enabled for this file system.
884 893 The default value is on.
885 894
886 895 zoned=on|off
887 896 Controls whether the dataset is managed from a non-global zone. See the
888 897 Zones section for more information. The default value is off.
889 898
890 899 The following three properties cannot be changed after the file system is
891 900 created, and therefore, should be set when the file system is created. If
892 901 the properties are not set with the zfs create or zpool create commands,
893 902 these properties are inherited from the parent dataset. If the parent
894 903 dataset lacks these properties due to having been created prior to these
895 904 features being supported, the new file system will have the default
896 905 values for these properties.
897 906
898 907 casesensitivity=sensitive|insensitive|mixed
899 908 Indicates whether the file name matching algorithm used by the file
900 909 system should be case-sensitive, case-insensitive, or allow a combination
901 910 of both styles of matching. The default value for the casesensitivity
902 911 property is sensitive. Traditionally, UNIX and POSIX file systems have
903 912 case-sensitive file names.
904 913
905 914 The mixed value for the casesensitivity property indicates that the
906 915 file system can support requests for both case-sensitive and case-
907 916 insensitive matching behavior. Currently, case-insensitive matching
908 917 behavior on a file system that supports mixed behavior is limited to
909 918 the SMB server product. For more information about the mixed value
910 919 behavior, see the "ZFS Administration Guide".
911 920
912 921 normalization=none|formC|formD|formKC|formKD
913 922 Indicates whether the file system should perform a unicode
914 923 normalization of file names whenever two file names are compared, and
915 924 which normalization algorithm should be used. File names are always
916 925 stored unmodified, names are normalized as part of any comparison
917 926 process. If this property is set to a legal value other than none, and
918 927 the utf8only property was left unspecified, the utf8only property is
919 928 automatically set to on. The default value of the normalization
920 929 property is none. This property cannot be changed after the file
921 930 system is created.
922 931
923 932 utf8only=on|off
924 933 Indicates whether the file system should reject file names that include
925 934 characters that are not present in the UTF-8 character code set. If
926 935 this property is explicitly set to off, the normalization property must
927 936 either not be explicitly set or be set to none. The default value for
928 937 the utf8only property is off. This property cannot be changed after
929 938 the file system is created.
930 939
931 940 The casesensitivity, normalization, and utf8only properties are also new
932 941 permissions that can be assigned to non-privileged users by using the ZFS
933 942 delegated administration feature.
934 943
935 944 Temporary Mount Point Properties
936 945 When a file system is mounted, either through mount(1M) for legacy mounts
937 946 or the zfs mount command for normal file systems, its mount options are
938 947 set according to its properties. The correlation between properties and
939 948 mount options is as follows:
940 949
941 950 PROPERTY MOUNT OPTION
942 951 devices devices/nodevices
943 952 exec exec/noexec
944 953 readonly ro/rw
945 954 setuid setuid/nosetuid
946 955 xattr xattr/noxattr
947 956
948 957 In addition, these options can be set on a per-mount basis using the -o
949 958 option, without affecting the property that is stored on disk. The values
950 959 specified on the command line override the values stored in the dataset.
951 960 The nosuid option is an alias for nodevices,nosetuid. These properties
952 961 are reported as "temporary" by the zfs get command. If the properties are
953 962 changed while the dataset is mounted, the new setting overrides any
954 963 temporary settings.
955 964
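     For example, a file system could be mounted read-only for one
     session without changing its stored readonly property (the dataset
     name is hypothetical):

         # zfs mount -o ro tank/data
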
956 965 User Properties
957 966 In addition to the standard native properties, ZFS supports arbitrary
958 967 user properties. User properties have no effect on ZFS behavior, but
959 968 applications or administrators can use them to annotate datasets (file
960 969 systems, volumes, and snapshots).
961 970
962 971 User property names must contain a colon (:) character to distinguish
963 972 them from native properties. They may contain lowercase letters, numbers,
964 973 and the following punctuation characters: colon (":"), dash ("-"), period
965 974 ("."), and underscore ("_"). The expected convention is that the
966 975 property name is divided into two portions such as module:property, but
967 976 this namespace is not enforced by ZFS. User property names can be at
968 977 most 256 characters, and cannot begin with a dash ("-").
969 978
970 979 When making programmatic use of user properties, it is strongly suggested
971 980 to use a reversed DNS domain name for the module component of property
972 981 names to reduce the chance that two independently-developed packages use
973 982 the same property name for different purposes.
974 983
975 984 The values of user properties are arbitrary strings, are always
976 985 inherited, and are never validated. All of the commands that operate on
977 986 properties (zfs list, zfs get, zfs set, and so forth) can be used to
978 987 manipulate both native properties and user properties. Use the zfs
979 988 inherit command to clear a user property. If the property is not defined
980 989 in any parent dataset, it is removed entirely. Property values are
981 990 limited to 1024 characters.
982 991
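     A short sketch with a hypothetical reversed-DNS module name, showing
     how such a property is set, read, and cleared:

         # zfs set com.example:department=12345 tank/accounting
         # zfs get com.example:department tank/accounting
         # zfs inherit com.example:department tank/accounting
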
983 992 ZFS Volumes as Swap or Dump Devices
984 993 During an initial installation a swap device and dump device are created
985 994 on ZFS volumes in the ZFS root pool. By default, the swap area size is
986 995 based on 1/2 the size of physical memory up to 2 Gbytes. The size of the
987 996 dump device depends on the kernel's requirements at installation time.
988 997 Separate ZFS volumes must be used for the swap area and dump devices. Do
989 998 not swap to a file on a ZFS file system. A ZFS swap file configuration is
990 999 not supported.
991 1000
992 1001 If you need to change your swap area or dump device after the system is
993 1002 installed or upgraded, use the swap(1M) and dumpadm(1M) commands.
994 1003
995 1004 SUBCOMMANDS
996 1005 All subcommands that modify state are logged persistently to the pool in
997 1006 their original form.
998 1007
999 1008 zfs -?
1000 1009 Displays a help message.
1001 1010
1002 1011 zfs create [-p] [-o property=value]... filesystem
1003 1012 Creates a new ZFS file system. The file system is automatically mounted
1004 1013 according to the mountpoint property inherited from the parent.
1005 1014
1006 1015 -o property=value
1007 1016 Sets the specified property as if the command zfs set
1008 1017 property=value was invoked at the same time the dataset was
1009 1018 created. Any editable ZFS property can also be set at creation
1010 1019 time. Multiple -o options can be specified. An error results if the
1011 1020 same property is specified in multiple -o options.
1012 1021
1013 1022 -p Creates all the non-existing parent datasets. Datasets created in
1014 1023 this manner are automatically mounted according to the mountpoint
1015 1024 property inherited from their parent. Any property specified on the
1016 1025 command line using the -o option is ignored. If the target
1017 1026 filesystem already exists, the operation completes successfully.
1018 1027
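     For instance, with hypothetical dataset names, the first command sets
     a property at creation time and the second creates a deeper file
     system along with its missing parents:

         # zfs create -o compression=on tank/home/bob
         # zfs create -p tank/projects/web/logs
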
1019 1028 zfs create [-ps] [-b blocksize] [-o property=value]... -V size volume
1020 1029 Creates a volume of the given size. The volume is exported as a block
1021 1030 device in /dev/zvol/{dsk,rdsk}/path, where path is the name of the
1022 1031 volume in the ZFS namespace. The size represents the logical size as
1023 1032 exported by the device. By default, a reservation of equal size is
1024 1033 created.
1025 1034
1026 1035 size is automatically rounded up to the nearest 128 Kbytes to ensure
1027 1036 that the volume has an integral number of blocks regardless of
1028 1037 blocksize.
1029 1038
1030 1039 -b blocksize
1031 1040 Equivalent to -o volblocksize=blocksize. If this option is
1032 1041 specified in conjunction with -o volblocksize, the resulting
1033 1042 behavior is undefined.
1034 1043
1035 1044 -o property=value
1036 1045 Sets the specified property as if the zfs set property=value
1037 1046 command was invoked at the same time the dataset was created. Any
1038 1047 editable ZFS property can also be set at creation time. Multiple -o
1039 1048 options can be specified. An error results if the same property is
1040 1049 specified in multiple -o options.
1041 1050
1042 1051 -p Creates all the non-existing parent datasets. Datasets created in
1043 1052 this manner are automatically mounted according to the mountpoint
1044 1053 property inherited from their parent. Any property specified on the
1045 1054 command line using the -o option is ignored. If the target
1046 1055 filesystem already exists, the operation completes successfully.
1047 1056
1048 1057 -s Creates a sparse volume with no reservation. See volsize in the
1049 1058 Native Properties section for more information about sparse
1050 1059 volumes.
1051 1060
1052 1061 zfs destroy [-Rfnprv] filesystem|volume
1053 1062 Destroys the given dataset. By default, the command unshares any file
1054 1063 systems that are currently shared, unmounts any file systems that are
1055 1064 currently mounted, and refuses to destroy a dataset that has active
1056 1065 dependents (children or clones).
1057 1066
1058 1067 -R Recursively destroy all dependents, including cloned file systems
1059 1068 outside the target hierarchy.
1060 1069
1061 1070 -f Force an unmount of any file systems using the unmount -f command.
1062 1071 This option has no effect on non-file systems or unmounted file
1063 1072 systems.
1064 1073
1065 1074 -n Do a dry-run ("No-op") deletion. No data will be deleted. This is
1066 1075 useful in conjunction with the -v or -p flags to determine what
1067 1076 data would be deleted.
1068 1077
1069 1078 -p Print machine-parsable verbose information about the deleted data.
1070 1079
1071 1080 -r Recursively destroy all children.
1072 1081
1073 1082 -v Print verbose information about the deleted data.
1074 1083
1075 1084 Extreme care should be taken when applying either the -r or the -R
1076 1085 options, as they can destroy large portions of a pool and cause
1077 1086 unexpected behavior for mounted file systems in use.
1078 1087
1079 1088 zfs destroy [-Rdnprv] filesystem|volume@snap[%snap[,snap[%snap]]]...
1080 1089 The given snapshots are destroyed immediately if and only if the zfs
1081 1090      destroy command without the -d option would have destroyed them.  Such
1082 1091 immediate destruction would occur, for example, if the snapshot had no
1083 1092 clones and the user-initiated reference count were zero.
1084 1093
1085 1094 If a snapshot does not qualify for immediate destruction, it is marked
1086 1095 for deferred deletion. In this state, it exists as a usable, visible
1087 1096 snapshot until both of the preconditions listed above are met, at which
1088 1097 point it is destroyed.
1089 1098
1090 1099 An inclusive range of snapshots may be specified by separating the
1091 1100 first and last snapshots with a percent sign. The first and/or last
1092 1101 snapshots may be left blank, in which case the filesystem's oldest or
1093 1102 newest snapshot will be implied.
1094 1103
1095 1104 Multiple snapshots (or ranges of snapshots) of the same filesystem or
1096 1105 volume may be specified in a comma-separated list of snapshots. Only the
1097 1106 snapshot's short name (the part after the @) should be specified when
1098 1107 using a range or comma-separated list to identify multiple snapshots.
1099 1108
1100 1109 -R Recursively destroy all clones of these snapshots, including the
1101 1110 clones, snapshots, and children. If this flag is specified, the -d
1102 1111 flag will have no effect.
1103 1112
1104 1113 -d Defer snapshot deletion.
1105 1114
1106 1115 -n Do a dry-run ("No-op") deletion. No data will be deleted. This is
1107 1116 useful in conjunction with the -p or -v flags to determine what
1108 1117 data would be deleted.
1109 1118
1110 1119 -p Print machine-parsable verbose information about the deleted data.
1111 1120
1112 1121 -r Destroy (or mark for deferred deletion) all snapshots with this
1113 1122 name in descendent file systems.
1114 1123
1115 1124 -v Print verbose information about the deleted data.
1116 1125
1117 1126 Extreme care should be taken when applying either the -r or the -R
1118 1127 options, as they can destroy large portions of a pool and cause
1119 1128 unexpected behavior for mounted file systems in use.
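
                For example, hypothetical commands (the snapshot names are
                illustrative) that destroy an inclusive range of snapshots and
                mark a single snapshot for deferred deletion:

                  # zfs destroy pool/home/bob@monday%friday
                  # zfs destroy -d pool/home/bob@today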
1120 1129
1121 1130 zfs destroy filesystem|volume#bookmark
1122 1131 The given bookmark is destroyed.
1123 1132
1124 1133 zfs snapshot [-r] [-o property=value]...
1125 1134 filesystem@snapname|volume@snapname...
1126 1135 Creates snapshots with the given names. All previous modifications by
1127 1136 successful system calls to the file system are part of the snapshots.
1128 1137 Snapshots are taken atomically, so that all snapshots correspond to the
1129 1138 same moment in time. See the Snapshots section for details.
1130 1139
1131 1140 -o property=value
1132 1141 Sets the specified property; see zfs create for details.
1133 1142
1134 1143 -r Recursively create snapshots of all descendent datasets
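
                For example, a sketch (the dataset name and user property are
                illustrative) that recursively snapshots a hierarchy and tags
                the snapshots with a user property at creation time:

                  # zfs snapshot -r -o com.example:note=nightly pool/home@now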
1135 1144
1136 1145 zfs rollback [-Rfr] snapshot
1137 1146 Roll back the given dataset to a previous snapshot. When a dataset is
1138 1147 rolled back, all data that has changed since the snapshot is discarded,
1139 1148 and the dataset reverts to the state at the time of the snapshot. By
1140 1149 default, the command refuses to roll back to a snapshot other than the
1141 1150 most recent one. In order to do so, all intermediate snapshots and
1142 1151 bookmarks must be destroyed by specifying the -r option.
1143 1152
1144 1153 The -rR options do not recursively destroy the child snapshots of a
1145 1154 recursive snapshot. Only direct snapshots of the specified filesystem
1146 1155 are destroyed by either of these options. To completely roll back a
1147 1156      recursive snapshot, you must roll back the individual child snapshots.
1148 1157
1149 1158 -R Destroy any more recent snapshots and bookmarks, as well as any
1150 1159 clones of those snapshots.
1151 1160
1152 1161 -f Used with the -R option to force an unmount of any clone file
1153 1162 systems that are to be destroyed.
1154 1163
1155 1164 -r Destroy any snapshots and bookmarks more recent than the one
1156 1165 specified.
1157 1166
1158 1167 zfs clone [-p] [-o property=value]... snapshot filesystem|volume
1159 1168 Creates a clone of the given snapshot. See the Clones section for
1160 1169 details. The target dataset can be located anywhere in the ZFS
1161 1170 hierarchy, and is created as the same type as the original.
1162 1171
1163 1172 -o property=value
1164 1173 Sets the specified property; see zfs create for details.
1165 1174
1166 1175 -p Creates all the non-existing parent datasets. Datasets created in
1167 1176 this manner are automatically mounted according to the mountpoint
1168 1177 property inherited from their parent. If the target filesystem or
1169 1178 volume already exists, the operation completes successfully.
1170 1179
1171 1180 zfs promote clone-filesystem
1172 1181 Promotes a clone file system to no longer be dependent on its "origin"
1173 1182 snapshot. This makes it possible to destroy the file system that the
1174 1183 clone was created from. The clone parent-child dependency relationship
1175 1184 is reversed, so that the origin file system becomes a clone of the
1176 1185 specified file system.
1177 1186
1178 1187 The snapshot that was cloned, and any snapshots previous to this
1179 1188 snapshot, are now owned by the promoted clone. The space they use moves
1180 1189 from the origin file system to the promoted clone, so enough space must
1181 1190 be available to accommodate these snapshots. No new space is consumed
1182 1191 by this operation, but the space accounting is adjusted. The promoted
1183 1192 clone must not have any conflicting snapshot names of its own. The
1184 1193 rename subcommand can be used to rename any conflicting snapshots.
1185 1194
1186 1195 zfs rename [-f] filesystem|volume|snapshot filesystem|volume|snapshot
1187 1196 zfs rename [-fp] filesystem|volume filesystem|volume
1188 1197 Renames the given dataset. The new target can be located anywhere in
1189 1198 the ZFS hierarchy, with the exception of snapshots. Snapshots can only
1190 1199 be renamed within the parent file system or volume. When renaming a
1191 1200 snapshot, the parent file system of the snapshot does not need to be
1192 1201 specified as part of the second argument. Renamed file systems can
1193 1202 inherit new mount points, in which case they are unmounted and
1194 1203 remounted at the new mount point.
1195 1204
1196 1205 -f Force unmount any filesystems that need to be unmounted in the
1197 1206 process.
1198 1207
1199 1208 -p Creates all the nonexistent parent datasets. Datasets created in
1200 1209 this manner are automatically mounted according to the mountpoint
1201 1210 property inherited from their parent.
1202 1211
1203 1212 zfs rename -r snapshot snapshot
1204 1213 Recursively rename the snapshots of all descendent datasets. Snapshots
1205 1214 are the only dataset that can be renamed recursively.
1206 1215
1207 1216 zfs list [-r|-d depth] [-Hp] [-o property[,property]...] [-s property]...
1208 1217 [-S property]... [-t type[,type]...] [filesystem|volume|snapshot]...
1209 1218 Lists the property information for the given datasets in tabular form.
1210 1219 If specified, you can list property information by the absolute
1211 1220 pathname or the relative pathname. By default, all file systems and
1212 1221 volumes are displayed. Snapshots are displayed if the listsnaps
1213 1222      property is on (the default is off).  The following fields are
1214 1223      displayed: name, used, available, referenced, mountpoint.
1215 1224
1216 1225 -H Used for scripting mode. Do not print headers and separate fields
1217 1226 by a single tab instead of arbitrary white space.
1218 1227
1219 1228 -S property
1220 1229 Same as the -s option, but sorts by property in descending order.
1221 1230
1222 1231      -d depth
1223 1232        Recursively display any children of the dataset, limiting the
1224 1233        recursion to depth.  A depth of 1 will display only the dataset and its direct children.
1225 1234
1226 1235 -o property
1227 1236 A comma-separated list of properties to display. The property must
1228 1237 be:
1229 1238
1230 1239 One of the properties described in the Native Properties
1231 1240 section
1232 1241
1233 1242 A user property
1234 1243
1235 1244 The value name to display the dataset name
1236 1245
1237 1246 The value space to display space usage properties on file
1238 1247 systems and volumes. This is a shortcut for specifying -o
1239 1248 name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t
1240 1249 filesystem,volume syntax.
1241 1250
1242 1251 -p Display numbers in parsable (exact) values.
1243 1252
1244 1253      -r Recursively display any children of the dataset on the command
1245 1254         line.
1247 1256
1248 1257 -s property
1249 1258 A property for sorting the output by column in ascending order
1250 1259 based on the value of the property. The property must be one of the
1251 1260 properties described in the Properties section, or the special
1252 1261 value name to sort by the dataset name. Multiple properties can be
1253 1262 specified at one time using multiple -s property options. Multiple
1254 1263 -s options are evaluated from left to right in decreasing order of
1255 1264 importance. The following is a list of sorting criteria:
1256 1265
1257 1266 Numeric types sort in numeric order.
1258 1267
1259 1268 String types sort in alphabetical order.
1260 1269
1261 1270 Types inappropriate for a row sort that row to the literal
1262 1271 bottom, regardless of the specified ordering.
1263 1272
1264 1273 If no sorting options are specified the existing behavior of zfs
1265 1274 list is preserved.
1266 1275
1267 1276 -t type
1268 1277 A comma-separated list of types to display, where type is one of
1269 1278 filesystem, snapshot, volume, bookmark, or all. For example,
1270 1279 specifying -t snapshot displays only snapshots.
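
                For example, a sketch (pool/home is a hypothetical dataset)
                that lists only snapshots, sorted by space used:

                  # zfs list -r -t snapshot -o name,used -s used pool/home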
1271 1280
1272 1281 zfs set property=value [property=value]... filesystem|volume|snapshot...
1273 1282 Sets the property or list of properties to the given value(s) for each
1274 1283 dataset. Only some properties can be edited. See the Properties
1275 1284 section for more information on what properties can be set and
1276 1285 acceptable values. Numeric values can be specified as exact values, or
1277 1286 in a human-readable form with a suffix of B, K, M, G, T, P, E, Z (for
1278 1287 bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes,
1279 1288 or zettabytes, respectively). User properties can be set on snapshots.
1280 1289 For more information, see the User Properties section.
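
                For example, hypothetical commands (the dataset names are
                illustrative) showing a human-readable suffix and an exact
                value:

                  # zfs set quota=1.5T pool/projects
                  # zfs set refreservation=1073741824 pool/projects/db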
1281 1290
1282 1291 zfs get [-r|-d depth] [-Hp] [-o field[,field]...] [-s source[,source]...]
1283 1292 [-t type[,type]...] all | property[,property]...
1284 1293 filesystem|volume|snapshot...
1285 1294 Displays properties for the given datasets. If no datasets are
1286 1295 specified, then the command displays properties for all datasets on the
1287 1296 system. For each property, the following columns are displayed:
1288 1297
1289 1298 name Dataset name
1290 1299 property Property name
1291 1300 value Property value
1292 1301 source Property source. Can either be local, default,
1293 1302 temporary, inherited, or none (-).
1294 1303
1295 1304 All columns are displayed by default, though this can be controlled by
1296 1305 using the -o option. This command takes a comma-separated list of
1297 1306 properties as described in the Native Properties and User Properties
1298 1307 sections.
1299 1308
1300 1309 The special value all can be used to display all properties that apply
1301 1310 to the given dataset's type (filesystem, volume, snapshot, or
1302 1311 bookmark).
1303 1312
1304 1313 -H Display output in a form more easily parsed by scripts. Any headers
1305 1314 are omitted, and fields are explicitly separated by a single tab
1306 1315 instead of an arbitrary amount of space.
1307 1316
1308 1317 -d depth
1309 1318 Recursively display any children of the dataset, limiting the
1310 1319 recursion to depth. A depth of 1 will display only the dataset and
1311 1320 its direct children.
1312 1321
1313 1322 -o field
1314 1323 A comma-separated list of columns to display.
1315 1324 name,property,value,source is the default value.
1316 1325
1317 1326 -p Display numbers in parsable (exact) values.
1318 1327
1319 1328 -r Recursively display properties for any children.
1320 1329
1321 1330 -s source
1322 1331 A comma-separated list of sources to display. Those properties
1323 1332 coming from a source other than those in this list are ignored.
1324 1333 Each source must be one of the following: local, default,
1325 1334 inherited, temporary, and none. The default value is all sources.
1326 1335
1327 1336 -t type
1328 1337 A comma-separated list of types to display, where type is one of
1329 1338 filesystem, snapshot, volume, bookmark, or all.
1330 1339
1331 1340 zfs inherit [-rS] property filesystem|volume|snapshot...
1332 1341 Clears the specified property, causing it to be inherited from an
1333 1342 ancestor, restored to default if no ancestor has the property set, or
1334 1343 with the -S option reverted to the received value if one exists. See
1335 1344 the Properties section for a listing of default values, and details on
1336 1345 which properties can be inherited.
1337 1346
1338 1347 -r Recursively inherit the given property for all children.
1339 1348
1340 1349 -S Revert the property to the received value if one exists; otherwise
1341 1350 operate as if the -S option was not specified.
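
                For example, hypothetical commands that clear a locally set
                value, and that revert a whole subtree to received values where
                they exist:

                  # zfs inherit compression pool/home/bob
                  # zfs inherit -r -S compression pool/home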
1342 1351
1343 1352 zfs upgrade
1344 1353 Displays a list of file systems that are not the most recent version.
1345 1354
1346 1355 zfs upgrade -v
1347 1356 Displays a list of currently supported file system versions.
1348 1357
1349 1358 zfs upgrade [-r] [-V version] -a | filesystem
1350 1359 Upgrades file systems to a new on-disk version. Once this is done, the
1351 1360 file systems will no longer be accessible on systems running older
1352 1361 versions of the software. zfs send streams generated from new
1353 1362 snapshots of these file systems cannot be accessed on systems running
1354 1363 older versions of the software.
1355 1364
1356 1365 In general, the file system version is independent of the pool version.
1357 1366 See zpool(1M) for information on the zpool upgrade command.
1358 1367
1359 1368 In some cases, the file system version and the pool version are
1360 1369 interrelated and the pool version must be upgraded before the file
1361 1370 system version can be upgraded.
1362 1371
1363 1372 -V version
1364 1373 Upgrade to the specified version. If the -V flag is not specified,
1365 1374 this command upgrades to the most recent version. This option can
1366 1375 only be used to increase the version number, and only up to the
1367 1376 most recent version supported by this software.
1368 1377
1369 1378 -a Upgrade all file systems on all imported pools.
1370 1379
1371 1380 filesystem
1372 1381 Upgrade the specified file system.
1373 1382
1374 1383 -r Upgrade the specified file system and all descendent file systems.
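
                For example, a sketch (pool/home is a hypothetical file system)
                that upgrades a file system and all of its descendents to the
                most recent supported version:

                  # zfs upgrade -r pool/home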
1375 1384
1376 1385 zfs userspace [-Hinp] [-o field[,field]...] [-s field]... [-S field]...
1377 1386 [-t type[,type]...] filesystem|snapshot
1378 1387 Displays space consumed by, and quotas on, each user in the specified
1379 1388 filesystem or snapshot. This corresponds to the userused@user and
1380 1389 userquota@user properties.
1381 1390
1382 1391 -H Do not print headers, use tab-delimited output.
1383 1392
1384 1393 -S field
1385 1394 Sort by this field in reverse order. See -s.
1386 1395
1387 1396 -i Translate SID to POSIX ID. The POSIX ID may be ephemeral if no
1388 1397 mapping exists. Normal POSIX interfaces (for example, stat(2), ls
1389 1398 -l) perform this translation, so the -i option allows the output
1390 1399 from zfs userspace to be compared directly with those utilities.
1391 1400 However, -i may lead to confusion if some files were created by an
1392 1401         SMB user before an SMB-to-POSIX name mapping was established.  In such
1393 1402 a case, some files will be owned by the SMB entity and some by the
1394 1403 POSIX entity. However, the -i option will report that the POSIX
1395 1404 entity has the total usage and quota for both.
1396 1405
1397 1406 -n Print numeric ID instead of user/group name.
1398 1407
1399 1408 -o field[,field]...
1400 1409 Display only the specified fields from the following set: type,
1401 1410 name, used, quota. The default is to display all fields.
1402 1411
1403 1412 -p Use exact (parsable) numeric output.
1404 1413
1405 1414 -s field
1406 1415 Sort output by this field. The -s and -S flags may be specified
1407 1416 multiple times to sort first by one field, then by another. The
1408 1417 default is -s type -s name.
1409 1418
1410 1419 -t type[,type]...
1411 1420 Print only the specified types from the following set: all,
1412 1421 posixuser, smbuser, posixgroup, smbgroup. The default is -t
1413 1422 posixuser,smbuser. The default can be changed to include group
1414 1423 types.
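
                For example, a sketch (pool/home is a hypothetical file system)
                that shows per-user usage sorted by space consumed:

                  # zfs userspace -o name,used,quota -s used pool/home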
1415 1424
1416 1425 zfs groupspace [-Hinp] [-o field[,field]...] [-s field]... [-S field]...
1417 1426 [-t type[,type]...] filesystem|snapshot
1418 1427 Displays space consumed by, and quotas on, each group in the specified
1419 1428 filesystem or snapshot. This subcommand is identical to zfs userspace,
1420 1429 except that the default types to display are -t posixgroup,smbgroup.
1421 1430
1422 1431 zfs mount
1423 1432 Displays all ZFS file systems currently mounted.
1424 1433
1425 1434 zfs mount [-Ov] [-o options] -a | filesystem
1426 1435 Mounts ZFS file systems.
1427 1436
1428 1437 -O Perform an overlay mount. See mount(1M) for more information.
1429 1438
1430 1439 -a Mount all available ZFS file systems. Invoked automatically as part
1431 1440 of the boot process.
1432 1441
1433 1442 filesystem
1434 1443 Mount the specified filesystem.
1435 1444
1436 1445 -o options
1437 1446 An optional, comma-separated list of mount options to use
1438 1447 temporarily for the duration of the mount. See the Temporary Mount
1439 1448 Point Properties section for details.
1440 1449
1441 1450 -v Report mount progress.
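
                For example, a hypothetical command that mounts a single file
                system read-only for the duration of the mount:

                  # zfs mount -o ro pool/home/bob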
1442 1451
1443 1452 zfs unmount [-f] -a | filesystem|mountpoint
1444 1453 Unmounts currently mounted ZFS file systems.
1445 1454
1446 1455 -a Unmount all available ZFS file systems. Invoked automatically as
1447 1456 part of the shutdown process.
1448 1457
1449 1458 filesystem|mountpoint
1450 1459 Unmount the specified filesystem. The command can also be given a
1451 1460 path to a ZFS file system mount point on the system.
1452 1461
1453 1462 -f Forcefully unmount the file system, even if it is currently in use.
1454 1463
1455 1464 zfs share -a | filesystem
1456 1465 Shares available ZFS file systems.
1457 1466
1458 1467 -a Share all available ZFS file systems. Invoked automatically as part
1459 1468 of the boot process.
1460 1469
1461 1470 filesystem
1462 1471 Share the specified filesystem according to the sharenfs and
1463 1472 sharesmb properties. File systems are shared when the sharenfs or
1464 1473 sharesmb property is set.
1465 1474
1466 1475 zfs unshare -a | filesystem|mountpoint
1467 1476 Unshares currently shared ZFS file systems.
1468 1477
1469 1478 -a Unshare all available ZFS file systems. Invoked automatically as
1470 1479 part of the shutdown process.
1471 1480
1472 1481 filesystem|mountpoint
1473 1482 Unshare the specified filesystem. The command can also be given a
1474 1483 path to a ZFS file system shared on the system.
1475 1484
1476 1485 zfs bookmark snapshot bookmark
1477 1486 Creates a bookmark of the given snapshot. Bookmarks mark the point in
1478 1487 time when the snapshot was created, and can be used as the incremental
1479 1488 source for a zfs send command.
1480 1489
1481 1490 This feature must be enabled to be used. See zpool-features(5) for
1482 1491 details on ZFS feature flags and the bookmarks feature.
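
                For example, hypothetical commands (dataset, host, and pool
                names are illustrative) that create a bookmark and later use it
                as the incremental source for a send after the original
                snapshot has been destroyed:

                  # zfs bookmark pool/fs@yesterday pool/fs#yesterday
                  # zfs destroy pool/fs@yesterday
                  # zfs send -i pool/fs#yesterday pool/fs@today | \
                      ssh host zfs receive poolB/received/fs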
1483 1492
1484 1493 zfs send [-DLPRenpv] [[-I|-i] snapshot] snapshot
1485 1494 Creates a stream representation of the second snapshot, which is
1486 1495 written to standard output. The output can be redirected to a file or
1487 1496 to a different system (for example, using ssh(1)). By default, a full
1488 1497 stream is generated.
1489 1498
1490 1499 -D Generate a deduplicated stream. Blocks which would have been sent
1491 1500 multiple times in the send stream will only be sent once. The
1492 1501         receiving system must also support this feature to receive a
1493 1502 deduplicated stream. This flag can be used regardless of the
1494 1503 dataset's dedup property, but performance will be much better if
1495 1504 the filesystem uses a dedup-capable checksum (for example, sha256).
1496 1505
1497 1506 -I snapshot
1498 1507 Generate a stream package that sends all intermediary snapshots
1499 1508 from the first snapshot to the second snapshot. For example, -I @a
1500 1509 fs@d is similar to -i @a fs@b; -i @b fs@c; -i @c fs@d. The
1501 1510 incremental source may be specified as with the -i option.
1502 1511
1503 1512 -L Generate a stream which may contain blocks larger than 128KB. This
1504 1513 flag has no effect if the large_blocks pool feature is disabled, or
1505 1514 if the recordsize property of this filesystem has never been set
1506 1515 above 128KB. The receiving system must have the large_blocks pool
1507 1516 feature enabled as well. See zpool-features(5) for details on ZFS
1508 1517 feature flags and the large_blocks feature.
1509 1518
1510 1519 -P Print machine-parsable verbose information about the stream package
1511 1520 generated.
1512 1521
1513 1522 -R Generate a replication stream package, which will replicate the
1514 1523 specified file system, and all descendent file systems, up to the
1515 1524 named snapshot. When received, all properties, snapshots,
1516 1525 descendent file systems, and clones are preserved.
1517 1526
1518 1527 If the -i or -I flags are used in conjunction with the -R flag, an
1519 1528 incremental replication stream is generated. The current values of
1520 1529 properties, and current snapshot and file system names are set when
1521 1530 the stream is received. If the -F flag is specified when this
1522 1531 stream is received, snapshots and file systems that do not exist on
1523 1532 the sending side are destroyed.
1524 1533
1525 1534 -e Generate a more compact stream by using WRITE_EMBEDDED records for
1526 1535 blocks which are stored more compactly on disk by the embedded_data
1527 1536 pool feature. This flag has no effect if the embedded_data feature
1528 1537 is disabled. The receiving system must have the embedded_data
1529 1538 feature enabled. If the lz4_compress feature is active on the
1530 1539 sending system, then the receiving system must have that feature
1531 1540 enabled as well. See zpool-features(5) for details on ZFS feature
1532 1541 flags and the embedded_data feature.
1533 1542
1534 1543 -i snapshot
1535 1544 Generate an incremental stream from the first snapshot (the
1536 1545 incremental source) to the second snapshot (the incremental
1537 1546 target). The incremental source can be specified as the last
1538 1547 component of the snapshot name (the @ character and following) and
1539 1548 it is assumed to be from the same file system as the incremental
1540 1549 target.
1541 1550
1542 1551 If the destination is a clone, the source may be the origin
1543 1552 snapshot, which must be fully specified (for example,
1544 1553 pool/fs@origin, not just @origin).
1545 1554
1546 1555 -n Do a dry-run ("No-op") send. Do not generate any actual send data.
1547 1556 This is useful in conjunction with the -v or -P flags to determine
1548 1557 what data will be sent. In this case, the verbose output will be
1549 1558 written to standard output (contrast with a non-dry-run, where the
1550 1559 stream is written to standard output and the verbose output goes to
1551 1560 standard error).
1552 1561
1553 1562 -p Include the dataset's properties in the stream. This flag is
1554 1563 implicit when -R is specified. The receiving system must also
1555 1564 support this feature.
1556 1565
1557 1566 -v Print verbose information about the stream package generated. This
1558 1567 information includes a per-second report of how much data has been
1559 1568 sent.
1560 1569
1561 1570 The format of the stream is committed. You will be able to receive
1562 1571      your streams on future versions of ZFS.
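
                For example, a hypothetical incremental replication of a file
                system hierarchy to another pool (the host, pool, and snapshot
                names are illustrative):

                  # zfs send -R -I pool/fs@a pool/fs@d | \
                      ssh host zfs receive -du poolB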
1563 1572
1564 1573 zfs send [-Le] [-i snapshot|bookmark] filesystem|volume|snapshot
1565 1574 Generate a send stream, which may be of a filesystem, and may be
1566 1575 incremental from a bookmark. If the destination is a filesystem or
1567 1576 volume, the pool must be read-only, or the filesystem must not be
1568 1577 mounted. When the stream generated from a filesystem or volume is
1569 1578 received, the default snapshot name will be "--head--".
1570 1579
1571 1580 -L Generate a stream which may contain blocks larger than 128KB. This
1572 1581 flag has no effect if the large_blocks pool feature is disabled, or
1573 1582 if the recordsize property of this filesystem has never been set
1574 1583 above 128KB. The receiving system must have the large_blocks pool
1575 1584 feature enabled as well. See zpool-features(5) for details on ZFS
1576 1585 feature flags and the large_blocks feature.
1577 1586
1578 1587 -e Generate a more compact stream by using WRITE_EMBEDDED records for
1579 1588 blocks which are stored more compactly on disk by the embedded_data
1580 1589 pool feature. This flag has no effect if the embedded_data feature
1581 1590 is disabled. The receiving system must have the embedded_data
1582 1591 feature enabled. If the lz4_compress feature is active on the
1583 1592 sending system, then the receiving system must have that feature
1584 1593 enabled as well. See zpool-features(5) for details on ZFS feature
1585 1594 flags and the embedded_data feature.
1586 1595
1587 1596 -i snapshot|bookmark
1588 1597 Generate an incremental send stream. The incremental source must be
1589 1598 an earlier snapshot in the destination's history. It will commonly
1590 1599 be an earlier snapshot in the destination's file system, in which
1591 1600 case it can be specified as the last component of the name (the #
1592 1601 or @ character and following).
1593 1602
1594 1603 If the incremental target is a clone, the incremental source can be
1595 1604 the origin snapshot, or an earlier snapshot in the origin's
1596 1605 filesystem, or the origin's origin, etc.
1597 1606
1598 1607 zfs send [-Penv] -t receive_resume_token
1599 1608 Creates a send stream which resumes an interrupted receive. The
1600 1609 receive_resume_token is the value of this property on the filesystem or
1601 1610 volume that was being received into. See the documentation for zfs
1602 1611 receive -s for more details.
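
                For example, a hypothetical resume sequence; <token> stands for
                the value of the receive_resume_token property retrieved on the
                receiving side:

                  # zfs get -H -o value receive_resume_token poolB/received/fs
                  # zfs send -t <token> | ssh host zfs receive -s poolB/received/fs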
1603 1612
1604 1613 zfs receive [-Fnsuv] [-o origin=snapshot] filesystem|volume|snapshot
1605 1614 zfs receive [-Fnsuv] [-d|-e] [-o origin=snapshot] filesystem
1606 1615 Creates a snapshot whose contents are as specified in the stream
1607 1616 provided on standard input. If a full stream is received, then a new
1608 1617 file system is created as well. Streams are created using the zfs send
1609 1618 subcommand, which by default creates a full stream. zfs recv can be
1610 1619 used as an alias for zfs receive.
1611 1620
1612 1621 If an incremental stream is received, then the destination file system
1613 1622 must already exist, and its most recent snapshot must match the
1614 1623 incremental stream's source. For zvols, the destination device link is
1615 1624 destroyed and recreated, which means the zvol cannot be accessed during
1616 1625 the receive operation.
1617 1626
1618 1627 When a snapshot replication package stream that is generated by using
1619 1628 the zfs send -R command is received, any snapshots that do not exist on
1620 1629 the sending location are destroyed by using the zfs destroy -d command.
1621 1630
1622 1631 The name of the snapshot (and file system, if a full stream is
1623 1632 received) that this subcommand creates depends on the argument type and
1624 1633 the use of the -d or -e options.
1625 1634
1626 1635 If the argument is a snapshot name, the specified snapshot is created.
1627 1636 If the argument is a file system or volume name, a snapshot with the
1628 1637 same name as the sent snapshot is created within the specified
1629 1638 filesystem or volume. If neither of the -d or -e options are
1630 1639 specified, the provided target snapshot name is used exactly as
1631 1640 provided.
1632 1641
1633 1642 The -d and -e options cause the file system name of the target snapshot
1634 1643 to be determined by appending a portion of the sent snapshot's name to
1635 1644 the specified target filesystem. If the -d option is specified, all
1636 1645 but the first element of the sent snapshot's file system path (usually
1637 1646 the pool name) is used and any required intermediate file systems
1638 1647 within the specified one are created. If the -e option is specified,
1639 1648 then only the last element of the sent snapshot's file system name
1640 1649 (i.e. the name of the source file system itself) is used as the target
1641 1650 file system name.
1642 1651
1643 1652 -F Force a rollback of the file system to the most recent snapshot
1644 1653 before performing the receive operation. If receiving an
1645 1654 incremental replication stream (for example, one generated by zfs
1646 1655 send -R [-i|-I]), destroy snapshots and file systems that do not
1647 1656 exist on the sending side.
1648 1657
1649 1658 -d Discard the first element of the sent snapshot's file system name,
1650 1659 using the remaining elements to determine the name of the target
1651 1660 file system for the new snapshot as described in the paragraph
1652 1661 above.
1653 1662
1654 1663 -e Discard all but the last element of the sent snapshot's file system
1655 1664 name, using that element to determine the name of the target file
1656 1665 system for the new snapshot as described in the paragraph above.
1657 1666
1658 1667 -n Do not actually receive the stream. This can be useful in
1659 1668 conjunction with the -v option to verify the name the receive
1660 1669 operation would use.
1661 1670
1662 1671 -o origin=snapshot
1663 1672 Forces the stream to be received as a clone of the given snapshot.
1664 1673 This is only valid if the stream is an incremental stream whose
1665 1674 source is the same as the provided origin.
1666 1675
1667 1676      -u The file system that is associated with the received stream is not
1668 1677         mounted.
1669 1678
1670 1679 -v Print verbose information about the stream and the time required to
1671 1680 perform the receive operation.
1672 1681
1673 1682 -s If the receive is interrupted, save the partially received state,
1674 1683 rather than deleting it. Interruption may be due to premature
1675 1684 termination of the stream (e.g. due to network failure or failure
1676 1685 of the remote system if the stream is being read over a network
1677 1686 connection), a checksum error in the stream, termination of the zfs
1678 1687 receive process, or unclean shutdown of the system.
1679 1688
1680 1689 The receive can be resumed with a stream generated by zfs send -t
1681 1690 token, where the token is the value of the receive_resume_token
1682 1691 property of the filesystem or volume which is received into.
1683 1692
1684 1693 To use this flag, the storage pool must have the extensible_dataset
1685 1694 feature enabled. See zpool-features(5) for details on ZFS feature
1686 1695 flags.
1687 1696
1688 1697 zfs receive -A filesystem|volume
1689 1698 Abort an interrupted zfs receive -s, deleting its saved partially
1690 1699 received state.
1691 1700
1692 1701 zfs allow filesystem|volume
1693 1702 Displays permissions that have been delegated on the specified
1694 1703 filesystem or volume. See the other forms of zfs allow for more
1695 1704 information.
1696 1705
1697 1706 zfs allow [-dglu] user|group[,user|group]...
1698 1707 perm|@setname[,perm|@setname]... filesystem|volume
1699 1708 zfs allow [-dl] -e|everyone perm|@setname[,perm|@setname]...
1700 1709 filesystem|volume
1701 1710 Delegates ZFS administration permission for the file systems to non-
1702 1711 privileged users.
1703 1712
1704 1713 -d Allow only for the descendent file systems.
1705 1714
1706 1715 -e|everyone
1707 1716 Specifies that the permissions be delegated to everyone.
1708 1717
1709 1718 -g group[,group]...
1710 1719 Explicitly specify that permissions are delegated to the group.
1711 1720
1712 1721 -l Allow "locally" only for the specified file system.
1713 1722
1714 1723 -u user[,user]...
1715 1724 Explicitly specify that permissions are delegated to the user.
1716 1725
1717 1726 user|group[,user|group]...
1718 1727 Specifies to whom the permissions are delegated. Multiple entities
1719 1728 can be specified as a comma-separated list. If neither of the -gu
1720 1729 options are specified, then the argument is interpreted
1721 1730 preferentially as the keyword everyone, then as a user name, and
1722 1731 lastly as a group name. To specify a user or group named
1723 1732 "everyone", use the -g or -u options. To specify a group with the
1724 1733         same name as a user, use the -g option.
1725 1734
1726 1735 perm|@setname[,perm|@setname]...
1727 1736 The permissions to delegate. Multiple permissions may be specified
1728 1737 as a comma-separated list. Permission names are the same as ZFS
1729 1738 subcommand and property names. See the property list below.
1730 1739 Property set names, which begin with @, may be specified. See the
1731 1740 -s form below for details.
1732 1741
1733 1742 If neither of the -dl options are specified, or both are, then the
1734 1743 permissions are allowed for the file system or volume, and all of its
1735 1744 descendents.
1736 1745
1737 1746 Permissions are generally the ability to use a ZFS subcommand or change
1738 1747 a ZFS property. The following permissions are available:
1739 1748
1740 1749 NAME TYPE NOTES
1741 1750 allow subcommand Must also have the permission that is being
1742 1751 allowed
1743 1752 clone subcommand Must also have the 'create' ability and 'mount'
1744 1753 ability in the origin file system
1745 1754 create subcommand Must also have the 'mount' ability
1746 1755 destroy subcommand Must also have the 'mount' ability
1747 1756 diff subcommand Allows lookup of paths within a dataset
1748 1757 given an object number, and the ability to
1749 1758 create snapshots necessary to 'zfs diff'.
1750 1759 mount subcommand Allows mount/umount of ZFS datasets
1751 1760 promote subcommand Must also have the 'mount'
1752 1761 and 'promote' ability in the origin file system
1753 1762 receive subcommand Must also have the 'mount' and 'create' ability
1754 1763 rename subcommand Must also have the 'mount' and 'create'
1755 1764 ability in the new parent
1756 1765 rollback subcommand Must also have the 'mount' ability
1757 1766 send subcommand
1758 1767 share subcommand Allows sharing file systems over NFS or SMB
1759 1768 protocols
1760 1769 snapshot subcommand Must also have the 'mount' ability
1761 1770
1762 1771 groupquota other Allows accessing any groupquota@... property
1763 1772 groupused other Allows reading any groupused@... property
1764 1773 userprop other Allows changing any user property
1765 1774 userquota other Allows accessing any userquota@... property
1766 1775 userused other Allows reading any userused@... property
1767 1776
1768 1777 aclinherit property
1769 1778 aclmode property
1770 1779 atime property
1771 1780 canmount property
1772 1781 casesensitivity property
1773 1782 checksum property
1774 1783 compression property
1775 1784 copies property
1776 1785 devices property
1777 1786 exec property
1778 1787 filesystem_limit property
1779 1788 mountpoint property
1780 1789 nbmand property
1781 1790 normalization property
1782 1791 primarycache property
1783 1792 quota property
1784 1793 readonly property
1785 1794 recordsize property
1786 1795 refquota property
1787 1796 refreservation property
1788 1797 reservation property
1789 1798 secondarycache property
1790 1799 setuid property
1791 1800 sharenfs property
1792 1801 sharesmb property
1793 1802 snapdir property
1794 1803 snapshot_limit property
1795 1804 utf8only property
1796 1805 version property
1797 1806 volblocksize property
1798 1807 volsize property
1799 1808 vscan property
1800 1809 xattr property
1801 1810 zoned property
1802 1811
1803 1812 zfs allow -c perm|@setname[,perm|@setname]... filesystem|volume
1804 1813 Sets "create time" permissions. These permissions are granted (locally)
1805 1814 to the creator of any newly-created descendent file system.
1806 1815
1807 1816 zfs allow -s @setname perm|@setname[,perm|@setname]... filesystem|volume
1808 1817 Defines or adds permissions to a permission set. The set can be used by
1809 1818 other zfs allow commands for the specified file system and its
1810 1819 descendents. Sets are evaluated dynamically, so changes to a set are
1811 1820 immediately reflected. Permission sets follow the same naming
1812 1821 restrictions as ZFS file systems, but the name must begin with @, and
1813 1822 can be no more than 64 characters long.
1814 1823
1815 1824 zfs unallow [-dglru] user|group[,user|group]...
1816 1825 [perm|@setname[,perm|@setname]...] filesystem|volume
1817 1826 zfs unallow [-dlr] -e|everyone [perm|@setname[,perm|@setname]...]
1818 1827 filesystem|volume
1819 1828 zfs unallow [-r] -c [perm|@setname[,perm|@setname]...] filesystem|volume
1820 1829 Removes permissions that were granted with the zfs allow command. No
1821 1830      permissions are explicitly denied, so other permissions granted are
1822 1831      still in effect (for example, if the permission is also granted by an
1823 1832      ancestor).  If no permissions are specified, then all permissions for the
1824 1833 specified user, group, or everyone are removed. Specifying everyone (or
1825 1834 using the -e option) only removes the permissions that were granted to
1826 1835 everyone, not all permissions for every user and group. See the zfs
1827 1836 allow command for a description of the -ldugec options.
1828 1837
1829 1838 -r Recursively remove the permissions from this file system and all
1830 1839 descendents.
1831 1840
1832 1841 zfs unallow [-r] -s -@setname [perm|@setname[,perm|@setname]...]
1833 1842 filesystem|volume
1834 1843 Removes permissions from a permission set. If no permissions are
1835 1844 specified, then all permissions are removed, thus removing the set
1836 1845 entirely.
1837 1846
1838 1847 zfs hold [-r] tag snapshot...
1839 1848 Adds a single reference, named with the tag argument, to the specified
1840 1849 snapshot or snapshots. Each snapshot has its own tag namespace, and
1841 1850 tags must be unique within that space.
1842 1851
1843 1852 If a hold exists on a snapshot, attempts to destroy that snapshot by
1844 1853 using the zfs destroy command return EBUSY.
1845 1854
1846 1855 -r Specifies that a hold with the given tag is applied recursively to
1847 1856 the snapshots of all descendent file systems.
1848 1857
1849 1858 zfs holds [-r] snapshot...
1850 1859 Lists all existing user references for the given snapshot or snapshots.
1851 1860
1852 1861 -r Lists the holds that are set on the named descendent snapshots, in
1853 1862 addition to listing the holds on the named snapshot.
1854 1863
1855 1864 zfs release [-r] tag snapshot...
1856 1865 Removes a single reference, named with the tag argument, from the
1857 1866 specified snapshot or snapshots. The tag must already exist for each
1858 1867 snapshot. If a hold exists on a snapshot, attempts to destroy that
1859 1868 snapshot by using the zfs destroy command return EBUSY.
1860 1869
1861 1870 -r Recursively releases a hold with the given tag on the snapshots of
1862 1871 all descendent file systems.
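
                For example, hypothetical commands that place, list, and
                release a hold named "keep" on a recursive set of snapshots:

                  # zfs hold -r keep pool/home@yesterday
                  # zfs holds -r pool/home@yesterday
                  # zfs release -r keep pool/home@yesterday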
1863 1872
1864 1873 zfs diff [-FHt] snapshot snapshot|filesystem
1865 1874 Display the difference between a snapshot of a given filesystem and
1866 1875 another snapshot of that filesystem from a later time or the current
1867 1876 contents of the filesystem. The first column is a character indicating
1868 1877 the type of change, the other columns indicate pathname, new pathname
1869 1878 (in case of rename), change in link count, and optionally file type
1870 1879 and/or change time. The types of change are:
1871 1880
1872 1881 - The path has been removed
1873 1882 + The path has been created
1874 1883 M The path has been modified
1875 1884 R The path has been renamed
1876 1885
1877 1886 -F Display an indication of the type of file, in a manner similar to
1878 1887         the -F option of ls(1).
1879 1888
1880 1889 B Block device
1881 1890 C Character device
1882 1891 / Directory
1883 1892 > Door
1884 1893 | Named pipe
1885 1894 @ Symbolic link
1886 1895 P Event port
1887 1896 = Socket
1888 1897 F Regular file
1889 1898
1890 1899 -H Give more parsable tab-separated output, without header lines and
1891 1900 without arrows.
1892 1901
1893 1902 -t Display the path's inode change time as the first column of output.
1894 1903
1895 1904 EXIT STATUS
1896 1905 The zfs utility exits 0 on success, 1 if an error occurs, and 2 if
1897 1906 invalid command line options were specified.
1898 1907
1899 1908 EXAMPLES
1900 1909 Example 1 Creating a ZFS File System Hierarchy
1901 1910 The following commands create a file system named pool/home and a file
1902 1911 system named pool/home/bob. The mount point /export/home is set for
1903 1912 the parent file system, and is automatically inherited by the child
1904 1913 file system.
1905 1914
1906 1915 # zfs create pool/home
1907 1916 # zfs set mountpoint=/export/home pool/home
1908 1917 # zfs create pool/home/bob
1909 1918
1910 1919 Example 2 Creating a ZFS Snapshot
1911 1920 The following command creates a snapshot named yesterday. This
1912 1921 snapshot is mounted on demand in the .zfs/snapshot directory at the
1913 1922 root of the pool/home/bob file system.
1914 1923
1915 1924 # zfs snapshot pool/home/bob@yesterday
1916 1925
1917 1926 Example 3 Creating and Destroying Multiple Snapshots
1918 1927 The following command creates snapshots named yesterday of pool/home
1919 1928 and all of its descendent file systems. Each snapshot is mounted on
1920 1929 demand in the .zfs/snapshot directory at the root of its file system.
1921 1930 The second command destroys the newly created snapshots.
1922 1931
1923 1932 # zfs snapshot -r pool/home@yesterday
1924 1933 # zfs destroy -r pool/home@yesterday
1925 1934
1926 1935 Example 4 Disabling and Enabling File System Compression
1927 1936 The following command disables the compression property for all file
1928 1937 systems under pool/home. The next command explicitly enables
1929 1938 compression for pool/home/anne.
1930 1939
1931 1940 # zfs set compression=off pool/home
1932 1941 # zfs set compression=on pool/home/anne
1933 1942
1934 1943 Example 5 Listing ZFS Datasets
1935 1944 The following command lists all active file systems and volumes in the
1936 1945 system. Snapshots are displayed if the listsnaps property is on. The
1937 1946 default is off. See zpool(1M) for more information on pool properties.
1938 1947
1939 1948 # zfs list
1940 1949 NAME USED AVAIL REFER MOUNTPOINT
1941 1950 pool 450K 457G 18K /pool
1942 1951 pool/home 315K 457G 21K /export/home
1943 1952 pool/home/anne 18K 457G 18K /export/home/anne
1944 1953 pool/home/bob 276K 457G 276K /export/home/bob
1945 1954
1946 1955 Example 6 Setting a Quota on a ZFS File System
1947 1956 The following command sets a quota of 50 Gbytes for pool/home/bob.
1948 1957
1949 1958 # zfs set quota=50G pool/home/bob
1950 1959
1951 1960 Example 7 Listing ZFS Properties
1952 1961 The following command lists all properties for pool/home/bob.
1953 1962
1954 1963 # zfs get all pool/home/bob
1955 1964 NAME PROPERTY VALUE SOURCE
1956 1965 pool/home/bob type filesystem -
1957 1966 pool/home/bob creation Tue Jul 21 15:53 2009 -
1958 1967 pool/home/bob used 21K -
1959 1968 pool/home/bob available 20.0G -
1960 1969 pool/home/bob referenced 21K -
1961 1970 pool/home/bob compressratio 1.00x -
1962 1971 pool/home/bob mounted yes -
1963 1972 pool/home/bob quota 20G local
1964 1973 pool/home/bob reservation none default
1965 1974 pool/home/bob recordsize 128K default
1966 1975 pool/home/bob mountpoint /pool/home/bob default
1967 1976 pool/home/bob sharenfs off default
1968 1977 pool/home/bob checksum on default
1969 1978 pool/home/bob compression on local
1970 1979 pool/home/bob atime on default
1971 1980 pool/home/bob devices on default
1972 1981 pool/home/bob exec on default
1973 1982 pool/home/bob setuid on default
1974 1983 pool/home/bob readonly off default
1975 1984 pool/home/bob zoned off default
1976 1985 pool/home/bob snapdir hidden default
1977 1986 pool/home/bob aclmode discard default
1978 1987 pool/home/bob aclinherit restricted default
1979 1988 pool/home/bob canmount on default
1980 1989 pool/home/bob xattr on default
1981 1990 pool/home/bob copies 1 default
1982 1991 pool/home/bob version 4 -
1983 1992 pool/home/bob utf8only off -
1984 1993 pool/home/bob normalization none -
1985 1994 pool/home/bob casesensitivity sensitive -
1986 1995 pool/home/bob vscan off default
1987 1996 pool/home/bob nbmand off default
1988 1997 pool/home/bob sharesmb off default
1989 1998 pool/home/bob refquota none default
1990 1999 pool/home/bob refreservation none default
1991 2000 pool/home/bob primarycache all default
1992 2001 pool/home/bob secondarycache all default
1993 2002 pool/home/bob usedbysnapshots 0 -
1994 2003 pool/home/bob usedbydataset 21K -
1995 2004 pool/home/bob usedbychildren 0 -
1996 2005 pool/home/bob usedbyrefreservation 0 -
1997 2006
1998 2007 The following command gets a single property value.
1999 2008
2000 2009 # zfs get -H -o value compression pool/home/bob
2001 2010 on
2002 2011 The following command lists all properties with local settings for
2003 2012 pool/home/bob.
2004 2013
2005 2014 # zfs get -r -s local -o name,property,value all pool/home/bob
2006 2015 NAME PROPERTY VALUE
2007 2016 pool/home/bob quota 20G
2008 2017 pool/home/bob compression on
2009 2018
2010 2019 Example 8 Rolling Back a ZFS File System
2011 2020 The following command reverts the contents of pool/home/anne to the
2012 2021 snapshot named yesterday, deleting all intermediate snapshots.
2013 2022
2014 2023 # zfs rollback -r pool/home/anne@yesterday
2015 2024
2016 2025 Example 9 Creating a ZFS Clone
2017 2026 The following command creates a writable file system whose initial
2018 2027 contents are the same as pool/home/bob@yesterday.
2019 2028
2020 2029 # zfs clone pool/home/bob@yesterday pool/clone
2021 2030
2022 2031 Example 10 Promoting a ZFS Clone
2023 2032 The following commands illustrate how to test out changes to a file
2024 2033 system, and then replace the original file system with the changed one,
2025 2034 using clones, clone promotion, and renaming:
2026 2035
2027 2036 # zfs create pool/project/production
2028 2037 populate /pool/project/production with data
2029 2038 # zfs snapshot pool/project/production@today
2030 2039 # zfs clone pool/project/production@today pool/project/beta
2031 2040 make changes to /pool/project/beta and test them
2032 2041 # zfs promote pool/project/beta
2033 2042 # zfs rename pool/project/production pool/project/legacy
2034 2043 # zfs rename pool/project/beta pool/project/production
2035 2044 once the legacy version is no longer needed, it can be destroyed
2036 2045 # zfs destroy pool/project/legacy
2037 2046
2038 2047 Example 11 Inheriting ZFS Properties
2039 2048 The following command causes pool/home/bob and pool/home/anne to
2040 2049 inherit the checksum property from their parent.
2041 2050
2042 2051 # zfs inherit checksum pool/home/bob pool/home/anne
2043 2052
2044 2053 Example 12 Remotely Replicating ZFS Data
2045 2054 The following commands send a full stream and then an incremental
2046 2055 stream to a remote machine, restoring them into poolB/received/fs@a and
2047 2056 poolB/received/fs@b, respectively. poolB must contain the file system
2048 2057 poolB/received, and must not initially contain poolB/received/fs.
2049 2058
2050 2059 # zfs send pool/fs@a | \
2051 2060 ssh host zfs receive poolB/received/fs@a
2052 2061 # zfs send -i a pool/fs@b | \
2053 2062 ssh host zfs receive poolB/received/fs
2054 2063
2055 2064 Example 13 Using the zfs receive -d Option
2056 2065 The following command sends a full stream of poolA/fsA/fsB@snap to a
2057 2066 remote machine, receiving it into poolB/received/fsA/fsB@snap. The
2058 2067 fsA/fsB@snap portion of the received snapshot's name is determined from
2059 2068 the name of the sent snapshot. poolB must contain the file system
2060 2069 poolB/received. If poolB/received/fsA does not exist, it is created as
2061 2070 an empty file system.
2062 2071
2063 2072 # zfs send poolA/fsA/fsB@snap | \
2064 2073 ssh host zfs receive -d poolB/received
2065 2074
2066 2075 Example 14 Setting User Properties
2067 2076 The following example sets the user-defined com.example:department
2068 2077 property for a dataset.
2069 2078
2070 2079 # zfs set com.example:department=12345 tank/accounting
2071 2080
2072 2081 Example 15 Performing a Rolling Snapshot
2073 2082 The following example shows how to maintain a history of snapshots with
2074 2083 a consistent naming scheme. To keep a week's worth of snapshots, the
2075 2084 user destroys the oldest snapshot, renames the remaining snapshots, and
2076 2085 then creates a new snapshot, as follows:
2077 2086
2078 2087 # zfs destroy -r pool/users@7daysago
2079 2088 # zfs rename -r pool/users@6daysago @7daysago
2080 2089 # zfs rename -r pool/users@5daysago @6daysago
2081 2090      # zfs rename -r pool/users@4daysago @5daysago
2082 2091      # zfs rename -r pool/users@3daysago @4daysago
2083 2092      # zfs rename -r pool/users@2daysago @3daysago
2084 2093      # zfs rename -r pool/users@yesterday @2daysago
2085 2094 # zfs rename -r pool/users@today @yesterday
2086 2095 # zfs snapshot -r pool/users@today
2087 2096
2088 2097 Example 16 Setting sharenfs Property Options on a ZFS File System
2089 2098 The following commands show how to set sharenfs property options to
2090 2099 enable rw access for a set of IP addresses and to enable root access
2091 2100 for system neo on the tank/home file system.
2092 2101
2093 2102 # zfs set sharenfs='rw=@123.123.0.0/16,root=neo' tank/home
2094 2103
2095 2104 If you are using DNS for host name resolution, specify the fully
2096 2105 qualified hostname.
2097 2106
2098 2107 Example 17 Delegating ZFS Administration Permissions on a ZFS Dataset
2099 2108 The following example shows how to set permissions so that user cindys
2100 2109 can create, destroy, mount, and take snapshots on tank/cindys. The
2101 2110 permissions on tank/cindys are also displayed.
2102 2111
2103 2112 # zfs allow cindys create,destroy,mount,snapshot tank/cindys
2104 2113 # zfs allow tank/cindys
2105 2114 ---- Permissions on tank/cindys --------------------------------------
2106 2115 Local+Descendent permissions:
2107 2116 user cindys create,destroy,mount,snapshot
2108 2117
2109 2118 Because the tank/cindys mount point permission is set to 755 by
2110 2119 default, user cindys will be unable to mount file systems under
2111 2120 tank/cindys. Add an ACE similar to the following syntax to provide
2112 2121 mount point access:
2113 2122
2114 2123 # chmod A+user:cindys:add_subdirectory:allow /tank/cindys
2115 2124
2116 2125 Example 18 Delegating Create Time Permissions on a ZFS Dataset
2117 2126      The following example shows how to grant anyone in the group staff
2118 2127      permission to create file systems in tank/users.  This syntax also allows
2119 2128      staff members to destroy their own file systems, but not destroy anyone
2120 2129      else's file system.  The permissions on tank/users are also displayed.
2121 2130
2122 2131 # zfs allow staff create,mount tank/users
2123 2132 # zfs allow -c destroy tank/users
2124 2133 # zfs allow tank/users
2125 2134 ---- Permissions on tank/users ---------------------------------------
2126 2135 Permission sets:
2127 2136 destroy
2128 2137 Local+Descendent permissions:
2129 2138 group staff create,mount
2130 2139
2131 2140 Example 19 Defining and Granting a Permission Set on a ZFS Dataset
2132 2141 The following example shows how to define and grant a permission set on
2133 2142 the tank/users file system. The permissions on tank/users are also
2134 2143 displayed.
2135 2144
2136 2145 # zfs allow -s @pset create,destroy,snapshot,mount tank/users
2137 2146 # zfs allow staff @pset tank/users
2138 2147 # zfs allow tank/users
2139 2148 ---- Permissions on tank/users ---------------------------------------
2140 2149 Permission sets:
2141 2150 @pset create,destroy,mount,snapshot
2142 2151 Local+Descendent permissions:
2143 2152 group staff @pset
2144 2153
2145 2154 Example 20 Delegating Property Permissions on a ZFS Dataset
2146 2155      The following example shows how to grant the ability to set quotas and
2147 2156 reservations on the users/home file system. The permissions on
2148 2157 users/home are also displayed.
2149 2158
2150 2159 # zfs allow cindys quota,reservation users/home
2151 2160 # zfs allow users/home
2152 2161 ---- Permissions on users/home ---------------------------------------
2153 2162 Local+Descendent permissions:
2154 2163 user cindys quota,reservation
2155 2164 cindys% zfs set quota=10G users/home/marks
2156 2165 cindys% zfs get quota users/home/marks
2157 2166 NAME PROPERTY VALUE SOURCE
2158 2167 users/home/marks quota 10G local
2159 2168
2160 2169 Example 21 Removing ZFS Delegated Permissions on a ZFS Dataset
2161 2170 The following example shows how to remove the snapshot permission from
2162 2171 the staff group on the tank/users file system. The permissions on
2163 2172 tank/users are also displayed.
2164 2173
2165 2174 # zfs unallow staff snapshot tank/users
2166 2175 # zfs allow tank/users
2167 2176 ---- Permissions on tank/users ---------------------------------------
2168 2177 Permission sets:
2169 2178 @pset create,destroy,mount,snapshot
2170 2179 Local+Descendent permissions:
2171 2180 group staff @pset
2172 2181
2173 2182 Example 22 Showing the differences between a snapshot and a ZFS Dataset
2174 2183 The following example shows how to see what has changed between a prior
2175 2184 snapshot of a ZFS dataset and its current state. The -F option is used
2176 2185 to indicate type information for the files affected.
2177 2186
2178 2187 # zfs diff -F tank/test@before tank/test
2179 2188 M / /tank/test/
2180 2189 M F /tank/test/linked (+1)
2181 2190 R F /tank/test/oldname -> /tank/test/newname
2182 2191 - F /tank/test/deleted
2183 2192 + F /tank/test/created
2184 2193 M F /tank/test/modified
2185 2194
2186 2195 INTERFACE STABILITY
2187 2196      Committed.
2188 2197
2189 2198 SEE ALSO
2190 2199      gzip(1), ssh(1), mount(1M), share(1M), sharemgr(1M), unshare(1M),
2191 2200 zonecfg(1M), zpool(1M), chmod(2), stat(2), write(2), fsync(3C),
2192 2201 dfstab(4), acl(5), attributes(5)
2193 2202
2194 2203 illumos June 8, 2015 illumos