6536 zfs send: want a way to disable sending of free records
Reviewed by: Alexander Stetsenko <astetsenko@racktopsystems.com>
Reviewed by: Kim Shrier <kshrier@racktopsystems.com>
--- old/usr/src/man/man1m/zfs.1m.man.txt
+++ new/usr/src/man/man1m/zfs.1m.man.txt
1 1 ZFS(1M) Maintenance Commands ZFS(1M)
2 2
3 3 NAME
4 4 zfs - configures ZFS file systems
5 5
6 6 SYNOPSIS
7 7 zfs [-?]
8 8 zfs create [-p] [-o property=value]... filesystem
9 9 zfs create [-ps] [-b blocksize] [-o property=value]... -V size volume
10 10 zfs destroy [-Rfnprv] filesystem|volume
11 11 zfs destroy [-Rdnprv] filesystem|volume@snap[%snap[,snap[%snap]]]...
12 12 zfs destroy filesystem|volume#bookmark
13 13 zfs snapshot [-r] [-o property=value]...
14 14 filesystem@snapname|volume@snapname...
15 15 zfs rollback [-Rfr] snapshot
16 16 zfs clone [-p] [-o property=value]... snapshot filesystem|volume
17 17 zfs promote clone-filesystem
18 18 zfs rename [-f] filesystem|volume|snapshot filesystem|volume|snapshot
19 19 zfs rename [-fp] filesystem|volume filesystem|volume
20 20 zfs rename -r snapshot snapshot
21 21 zfs list [-r|-d depth] [-Hp] [-o property[,property]...] [-s property]...
22 22 [-S property]... [-t type[,type]...] [filesystem|volume|snapshot]...
23 23 zfs set property=value [property=value]... filesystem|volume|snapshot...
24 24 zfs get [-r|-d depth] [-Hp] [-o field[,field]...] [-s source[,source]...]
25 25 [-t type[,type]...] all | property[,property]...
26 26 filesystem|volume|snapshot...
27 27 zfs inherit [-rS] property filesystem|volume|snapshot...
28 28 zfs upgrade
29 29 zfs upgrade -v
30 30 zfs upgrade [-r] [-V version] -a | filesystem
31 31 zfs userspace [-Hinp] [-o field[,field]...] [-s field]... [-S field]...
32 32 [-t type[,type]...] filesystem|snapshot
33 33 zfs groupspace [-Hinp] [-o field[,field]...] [-s field]... [-S field]...
34 34 [-t type[,type]...] filesystem|snapshot
35 35 zfs mount
36 36 zfs mount [-Ov] [-o options] -a | filesystem
37 37 zfs unmount [-f] -a | filesystem|mountpoint
38 38 zfs share -a | filesystem
39 39 zfs unshare -a | filesystem|mountpoint
40 40 zfs bookmark snapshot bookmark
41 - zfs send [-DLPRenpv] [[-I|-i] snapshot] snapshot
42 - zfs send [-Le] [-i snapshot|bookmark] filesystem|volume|snapshot
41 + zfs send [-DFLPRenpv] [[-I|-i] snapshot] snapshot
42 + zfs send [-FLe] [-i snapshot|bookmark] filesystem|volume|snapshot
43 43 zfs send [-Penv] -t receive_resume_token
44 44 zfs receive [-Fnsuv] [-o origin=snapshot] filesystem|volume|snapshot
45 45 zfs receive [-Fnsuv] [-d|-e] [-o origin=snapshot] filesystem
46 46 zfs receive -A filesystem|volume
47 47 zfs allow filesystem|volume
48 48 zfs allow [-dglu] user|group[,user|group]...
49 49 perm|@setname[,perm|@setname]... filesystem|volume
50 50 zfs allow [-dl] -e|everyone perm|@setname[,perm|@setname]...
51 51 filesystem|volume
52 52 zfs allow -c perm|@setname[,perm|@setname]... filesystem|volume
53 53 zfs allow -s @setname perm|@setname[,perm|@setname]... filesystem|volume
54 54 zfs unallow [-dglru] user|group[,user|group]...
55 55 [perm|@setname[,perm|@setname]...] filesystem|volume
56 56 zfs unallow [-dlr] -e|everyone [perm|@setname[,perm|@setname]...]
57 57 filesystem|volume
58 58 zfs unallow [-r] -c [perm|@setname[,perm|@setname]...] filesystem|volume
59 59 zfs unallow [-r] -s -@setname [perm|@setname[,perm|@setname]...]
60 60 filesystem|volume
61 61 zfs hold [-r] tag snapshot...
62 62 zfs holds [-r] snapshot...
63 63 zfs release [-r] tag snapshot...
64 64 zfs diff [-FHt] snapshot snapshot|filesystem
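
The -F flag added to the zfs send synopsis above is the subject of this change
(issue 6536, a way to disable sending of free records). A minimal usage sketch,
assuming the flag combines with the existing send options as the synopsis
suggests, and using hypothetical pool and dataset names:

    # full send of a snapshot with free records suppressed
    zfs send -F tank/data@snap1 | zfs receive backup/data

    # incremental send between two snapshots, also suppressing free records
    zfs send -F -i tank/data@snap1 tank/data@snap2 | zfs receive backup/data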
65 65
66 66 DESCRIPTION
67 67 The zfs command configures ZFS datasets within a ZFS storage pool, as
68 68 described in zpool(1M). A dataset is identified by a unique path within
69 69 the ZFS namespace. For example:
70 70
71 71 pool/{filesystem,volume,snapshot}
72 72
73 73 where the maximum length of a dataset name is MAXNAMELEN (256 bytes).
74 74
75 75 A dataset can be one of the following:
76 76
77 77 file system A ZFS dataset of type filesystem can be mounted within the
78 78 standard system namespace and behaves like other file
79 79 systems. While ZFS file systems are designed to be POSIX
80 80 compliant, known issues exist that prevent compliance in
81 81 some cases. Applications that depend on standards
82 82 conformance might fail due to non-standard behavior when
83 83 checking file system free space.
84 84
85 85 volume A logical volume exported as a raw or block device. This
86 86 type of dataset should only be used under special
87 87 circumstances. File systems are typically used in most
88 88 environments.
89 89
90 90 snapshot A read-only version of a file system or volume at a given
91 91 point in time. It is specified as filesystem@name or
92 92 volume@name.
93 93
94 94 ZFS File System Hierarchy
95 95 A ZFS storage pool is a logical collection of devices that provide space
96 96 for datasets. A storage pool is also the root of the ZFS file system
97 97 hierarchy.
98 98
99 99 The root of the pool can be accessed as a file system, such as mounting
100 100 and unmounting, taking snapshots, and setting properties. The physical
101 101 storage characteristics, however, are managed by the zpool(1M) command.
102 102
103 103 See zpool(1M) for more information on creating and administering pools.
104 104
105 105 Snapshots
106 106 A snapshot is a read-only copy of a file system or volume. Snapshots can
107 107 be created extremely quickly, and initially consume no additional space
108 108 within the pool. As data within the active dataset changes, the snapshot
109 109 consumes more data than would otherwise be shared with the active
110 110 dataset.
111 111
112 112 Snapshots can have arbitrary names. Snapshots of volumes can be cloned or
113 113 rolled back, but cannot be accessed independently.
114 114
115 115 File system snapshots can be accessed under the .zfs/snapshot directory
116 116 in the root of the file system. Snapshots are automatically mounted on
117 117 demand and may be unmounted at regular intervals. The visibility of the
118 118 .zfs directory can be controlled by the snapdir property.
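
As an illustration, a minimal sketch assuming a hypothetical pool named tank:

    # take a snapshot and browse it through the hidden .zfs directory
    zfs snapshot tank/home@monday
    ls /tank/home/.zfs/snapshot/monday

    # make the .zfs directory visible in the file system root
    zfs set snapdir=visible tank/home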
119 119
120 120 Clones
121 121 A clone is a writable volume or file system whose initial contents are
122 122 the same as another dataset. As with snapshots, creating a clone is
123 123 nearly instantaneous, and initially consumes no additional space.
124 124
125 125 Clones can only be created from a snapshot. When a snapshot is cloned, it
126 126 creates an implicit dependency between the parent and child. Even though
127 127 the clone is created somewhere else in the dataset hierarchy, the
128 128 original snapshot cannot be destroyed as long as a clone exists. The
129 129 origin property exposes this dependency, and the destroy command lists
130 130 any such dependencies, if they exist.
131 131
132 132 The clone parent-child dependency relationship can be reversed by using
133 133 the promote subcommand. This causes the "origin" file system to become a
134 134 clone of the specified file system, which makes it possible to destroy
135 135 the file system that the clone was created from.
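
A minimal sketch of this workflow, using hypothetical dataset names:

    # clone a snapshot, then promote the clone so the original file system
    # (now itself a clone of tank/test) can be destroyed
    zfs snapshot tank/prod@base
    zfs clone tank/prod@base tank/test
    zfs promote tank/test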
136 136
137 137 Mount Points
138 138 Creating a ZFS file system is a simple operation, so the number of file
139 139 systems per system is likely to be numerous. To cope with this, ZFS
140 140 automatically manages mounting and unmounting file systems without the
141 141 need to edit the /etc/vfstab file. All automatically managed file systems
142 142 are mounted by ZFS at boot time.
143 143
144 144 By default, file systems are mounted under /path, where path is the name
145 145 of the file system in the ZFS namespace. Directories are created and
146 146 destroyed as needed.
147 147
148 148 A file system can also have a mount point set in the mountpoint property.
149 149 This directory is created as needed, and ZFS automatically mounts the
150 150 file system when the zfs mount -a command is invoked (without editing
151 151 /etc/vfstab). The mountpoint property can be inherited, so if pool/home
152 152 has a mount point of /export/stuff, then pool/home/user automatically
153 153 inherits a mount point of /export/stuff/user.
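
For example, using the datasets mentioned above:

    zfs set mountpoint=/export/stuff pool/home
    zfs get -r mountpoint pool/home   # pool/home/user inherits /export/stuff/user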
154 154
155 155 A file system mountpoint property of none prevents the file system from
156 156 being mounted.
157 157
158 158 If needed, ZFS file systems can also be managed with traditional tools
159 159 (mount, umount, /etc/vfstab). If a file system's mount point is set to
160 160 legacy, ZFS makes no attempt to manage the file system, and the
161 161 administrator is responsible for mounting and unmounting the file system.
162 162
163 163 Zones
164 164 A ZFS file system can be added to a non-global zone by using the zonecfg
165 165 add fs subcommand. A ZFS file system that is added to a non-global zone
166 166 must have its mountpoint property set to legacy.
167 167
168 168 The physical properties of an added file system are controlled by the
169 169 global administrator. However, the zone administrator can create, modify,
170 170 or destroy files within the added file system, depending on how the file
171 171 system is mounted.
172 172
173 173 A dataset can also be delegated to a non-global zone by using the zonecfg
174 174 add dataset subcommand. You cannot delegate a dataset to one zone and the
175 175 children of the same dataset to another zone. The zone administrator can
176 176 change properties of the dataset or any of its children. However, the
177 177 quota, filesystem_limit and snapshot_limit properties of the delegated
178 178 dataset can be modified only by the global administrator.
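
A sketch of delegating a dataset with zonecfg(1M), assuming a hypothetical zone
myzone and dataset tank/delegated:

    zonecfg -z myzone
    zonecfg:myzone> add dataset
    zonecfg:myzone:dataset> set name=tank/delegated
    zonecfg:myzone:dataset> end
    zonecfg:myzone> commit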
179 179
180 180 A ZFS volume can be added as a device to a non-global zone by using the
181 181 zonecfg add device subcommand. However, its physical properties can be
182 182 modified only by the global administrator.
183 183
184 184 For more information about zonecfg syntax, see zonecfg(1M).
185 185
186 186 After a dataset is delegated to a non-global zone, the zoned property is
187 187 automatically set. A zoned file system cannot be mounted in the global
188 188 zone, since the zone administrator might have set the mount point to
189 189 an unacceptable value.
190 190
191 191 The global administrator can forcibly clear the zoned property, though
192 192 this should be done with extreme care. The global administrator should
193 193 verify that all the mount points are acceptable before clearing the
194 194 property.
195 195
196 196 Native Properties
197 197 Properties are divided into two types, native properties and user-defined
198 198 (or "user") properties. Native properties either export internal
199 199 statistics or control ZFS behavior. In addition, native properties are
200 200 either editable or read-only. User properties have no effect on ZFS
201 201 behavior, but you can use them to annotate datasets in a way that is
202 202 meaningful in your environment. For more information about user
203 203 properties, see the User Properties section, below.
204 204
205 205 Every dataset has a set of properties that export statistics about the
206 206 dataset as well as control various behaviors. Properties are inherited
207 207 from the parent unless overridden by the child. Some properties apply
208 208 only to certain types of datasets (file systems, volumes, or snapshots).
209 209
210 210 The values of numeric properties can be specified using human-readable
211 211 suffixes (for example, k, KB, M, Gb, and so forth, up to Z for
212 212 zettabyte). The following are all valid (and equal) specifications:
213 213 1536M, 1.5g, 1.50GB.
214 214
215 215 The values of non-numeric properties are case sensitive and must be
216 216 lowercase, except for mountpoint, sharenfs, and sharesmb.
217 217
218 218 The following native properties consist of read-only statistics about the
219 219 dataset. These properties can be neither set, nor inherited. Native
220 220 properties apply to all dataset types unless otherwise noted.
221 221
222 222 available The amount of space available to the dataset and
223 223 all its children, assuming that there is no other
224 224 activity in the pool. Because space is shared
225 225 within a pool, availability can be limited by any
226 226 number of factors, including physical pool size,
227 227 quotas, reservations, or other datasets within the
228 228 pool.
229 229
230 230 This property can also be referred to by its
231 231 shortened column name, avail.
232 232
233 233 compressratio For non-snapshots, the compression ratio achieved
234 234 for the used space of this dataset, expressed as a
235 235 multiplier. The used property includes descendant
236 236 datasets, and, for clones, does not include the
237 237 space shared with the origin snapshot. For
238 238 snapshots, the compressratio is the same as the
239 239 refcompressratio property. Compression can be
240 240 turned on by running: zfs set compression=on
241 241 dataset. The default value is off.
242 242
243 243 creation The time this dataset was created.
244 244
245 245 clones For snapshots, this property is a comma-separated
246 246 list of filesystems or volumes which are clones of
247 247 this snapshot. The clones' origin property is this
248 248 snapshot. If the clones property is not empty, then
249 249 this snapshot can not be destroyed (even with the
250 250 -r or -f options).
251 251
252 252 defer_destroy This property is on if the snapshot has been marked
253 253 for deferred destroy by using the zfs destroy -d
254 254 command. Otherwise, the property is off.
255 255
256 256 filesystem_count The total number of filesystems and volumes that
257 257 exist under this location in the dataset tree. This
258 258 value is only available when a filesystem_limit has
259 259 been set somewhere in the tree under which the
260 260 dataset resides.
261 261
262 262 logicalreferenced The amount of space that is "logically" accessible
263 263 by this dataset. See the referenced property. The
264 264 logical space ignores the effect of the compression
265 265 and copies properties, giving a quantity closer to
266 266 the amount of data that applications see. However,
267 267 it does include space consumed by metadata.
268 268
269 269 This property can also be referred to by its
270 270 shortened column name, lrefer.
271 271
272 272 logicalused The amount of space that is "logically" consumed by
273 273 this dataset and all its descendents. See the used
274 274 property. The logical space ignores the effect of
275 275 the compression and copies properties, giving a
276 276 quantity closer to the amount of data that
277 277 applications see. However, it does include space
278 278 consumed by metadata.
279 279
280 280 This property can also be referred to by its
281 281 shortened column name, lused.
282 282
283 283 mounted For file systems, indicates whether the file system
284 284 is currently mounted. This property can be either
285 285 yes or no.
286 286
287 287 origin For cloned file systems or volumes, the snapshot
288 288 from which the clone was created. See also the
289 289 clones property.
290 290
291 291 receive_resume_token For filesystems or volumes which have saved
292 292 partially-completed state from zfs receive -s, this
293 293 opaque token can be provided to zfs send -t to
294 294 resume and complete the zfs receive.
295 295
296 296 referenced The amount of data that is accessible by this
297 297 dataset, which may or may not be shared with other
298 298 datasets in the pool. When a snapshot or clone is
299 299 created, it initially references the same amount of
300 300 space as the file system or snapshot it was created
301 301 from, since its contents are identical.
302 302
303 303 This property can also be referred to by its
304 304 shortened column name, refer.
305 305
306 306 refcompressratio The compression ratio achieved for the referenced
307 307 space of this dataset, expressed as a multiplier.
308 308 See also the compressratio property.
309 309
310 310 snapshot_count The total number of snapshots that exist under this
311 311 location in the dataset tree. This value is only
312 312 available when a snapshot_limit has been set
313 313 somewhere in the tree under which the dataset
314 314 resides.
315 315
316 316 type The type of dataset: filesystem, volume, or
317 317 snapshot.
318 318
319 319 used The amount of space consumed by this dataset and
320 320 all its descendents. This is the value that is
321 321 checked against this dataset's quota and
322 322 reservation. The space used does not include this
323 323 dataset's reservation, but does take into account
324 324 the reservations of any descendent datasets. The
325 325 amount of space that a dataset consumes from its
326 326 parent, as well as the amount of space that is
327 327 freed if this dataset is recursively destroyed, is
328 328 the greater of its space used and its reservation.
329 329
330 330 When snapshots (see the Snapshots section) are
331 331 created, their space is initially shared between
332 332 the snapshot and the file system, and possibly with
333 333 previous snapshots. As the file system changes,
334 334 space that was previously shared becomes unique to
335 335 the snapshot, and counted in the snapshot's space
336 336 used. Additionally, deleting snapshots can increase
337 337 the amount of space unique to (and used by) other
338 338 snapshots.
339 339
340 340 The amount of space used, available, or referenced
341 341 does not take into account pending changes. Pending
342 342 changes are generally accounted for within a few
343 343 seconds. Committing a change to a disk using
344 344 fsync(3C) or O_SYNC does not necessarily guarantee
345 345 that the space usage information is updated
346 346 immediately.
347 347
348 348 usedby* The usedby* properties decompose the used
349 349 properties into the various reasons that space is
350 350 used. Specifically, used = usedbychildren +
351 351 usedbydataset + usedbyrefreservation +
352 352 usedbysnapshots. These properties are only
353 353 available for datasets created on zpool "version
354 354 13" pools.
355 355
356 356 usedbychildren The amount of space used by children of this
357 357 dataset, which would be freed if all the dataset's
358 358 children were destroyed.
359 359
360 360 usedbydataset The amount of space used by this dataset itself,
361 361 which would be freed if the dataset were destroyed
362 362 (after first removing any refreservation and
363 363 destroying any necessary snapshots or descendents).
364 364
365 365 usedbyrefreservation The amount of space used by a refreservation set on
366 366 this dataset, which would be freed if the
367 367 refreservation was removed.
368 368
369 369 usedbysnapshots The amount of space consumed by snapshots of this
370 370 dataset. In particular, it is the amount of space
371 371 that would be freed if all of this dataset's
372 372 snapshots were destroyed. Note that this is not
373 373 simply the sum of the snapshots' used properties
374 374 because space can be shared by multiple snapshots.
375 375
376 376 userused@user The amount of space consumed by the specified user
377 377 in this dataset. Space is charged to the owner of
378 378 each file, as displayed by ls -l. The amount of
379 379 space charged is displayed by du and ls -s. See
380 380 the zfs userspace subcommand for more information.
381 381
382 382 Unprivileged users can access only their own space
383 383 usage. The root user, or a user who has been
384 384 granted the userused privilege with zfs allow, can
385 385 access everyone's usage.
386 386
387 387 The userused@... properties are not displayed by
388 388 zfs get all. The user's name must be appended
389 389 after the @ symbol, using one of the following
390 390 forms:
391 391
392 392 o POSIX name (for example, joe)
393 393
394 394 o POSIX numeric ID (for example, 789)
395 395
396 396 o SID name (for example, joe.smith@mydomain)
397 397
398 398 o SID numeric ID (for example, S-1-123-456-789)
399 399
400 400 userrefs This property is set to the number of user holds on
401 401 this snapshot. User holds are set by using the zfs
402 402 hold command.
403 403
404 404 groupused@group The amount of space consumed by the specified group
405 405 in this dataset. Space is charged to the group of
406 406 each file, as displayed by ls -l. See the
407 407 userused@user property for more information.
408 408
409 409 Unprivileged users can only access their own
410 410 groups' space usage. The root user, or a user who
411 411 has been granted the groupused privilege with zfs
412 412 allow, can access all groups' usage.
413 413
414 414 volblocksize=blocksize
415 415 For volumes, specifies the block size of the
416 416 volume. The blocksize cannot be changed once the
417 417 volume has been written, so it should be set at
418 418 volume creation time. The default blocksize for
419 419 volumes is 8 Kbytes. Any power of 2 from 512 bytes
420 420 to 128 Kbytes is valid.
421 421
422 422 This property can also be referred to by its
423 423 shortened column name, volblock.
424 424
425 425 written The amount of referenced space written to this
426 426 dataset since the previous snapshot.
427 427
428 428 written@snapshot The amount of referenced space written to this
429 429 dataset since the specified snapshot. This is the
430 430 space that is referenced by this dataset but was
431 431 not referenced by the specified snapshot.
432 432
433 433 The snapshot may be specified as a short snapshot
434 434 name (just the part after the @), in which case it
435 435 will be interpreted as a snapshot in the same
436 436 filesystem as this dataset. The snapshot may be a
437 437 full snapshot name (filesystem@snapshot), which for
438 438 clones may be a snapshot in the origin's filesystem
439 439 (or the origin of the origin's filesystem, etc.)
440 440
441 441 The following native properties can be used to change the behavior of a
442 442 ZFS dataset.
443 443
444 444 aclinherit=discard|noallow|restricted|passthrough|passthrough-x
445 445 Controls how ACEs are inherited when files and directories are created.
446 446
447 447 discard does not inherit any ACEs.
448 448
449 449 noallow only inherits inheritable ACEs that specify "deny"
450 450 permissions.
451 451
452 452 restricted default, removes the write_acl and write_owner
453 453 permissions when the ACE is inherited.
454 454
455 455 passthrough inherits all inheritable ACEs without any modifications.
456 456
457 457 passthrough-x same meaning as passthrough, except that the owner@,
458 458 group@, and everyone@ ACEs inherit the execute
459 459 permission only if the file creation mode also requests
460 460 the execute bit.
461 461
462 462 When the property value is set to passthrough, files are created with a
463 463 mode determined by the inheritable ACEs. If no inheritable ACEs exist
464 464 that affect the mode, then the mode is set in accordance to the
465 465 requested mode from the application.
466 466
467 467 aclmode=discard|groupmask|passthrough|restricted
468 468 Controls how an ACL is modified during chmod(2).
469 469
470 470 discard default, deletes all ACEs that do not represent the mode
471 471 of the file.
472 472
473 473 groupmask reduces permissions granted in all ALLOW entries found in
474 474 the ACL such that they are no greater than the group
475 475 permissions specified by chmod(2).
476 476
477 477 passthrough indicates that no changes are made to the ACL other than
478 478 creating or updating the necessary ACEs to represent the
479 479 new mode of the file or directory.
480 480
481 481 restricted causes the chmod(2) operation to return an error when used
482 482 on any file or directory which has a non-trivial ACL
483 483 whose entries cannot be represented by a mode.
484 484
485 485 chmod(2) is required to change the set user ID, set group ID, or sticky
486 486 bits on a file or directory, as they do not have equivalent ACEs. In
487 487 order to use chmod(2) on a file or directory with a non-trivial ACL
488 488 when aclmode is set to restricted, you must first remove all ACEs which
489 489 do not represent the current mode.
490 490
491 491 atime=on|off
492 492 Controls whether the access time for files is updated when they are
493 493 read. Turning this property off avoids producing write traffic when
494 494 reading files and can result in significant performance gains, though
495 495 it might confuse mailers and other similar utilities. The default value
496 496 is on.
497 497
498 498 canmount=on|off|noauto
499 499 If this property is set to off, the file system cannot be mounted, and
500 500 is ignored by zfs mount -a. Setting this property to off is similar to
501 501 setting the mountpoint property to none, except that the dataset still
502 502 has a normal mountpoint property, which can be inherited. Setting this
503 503 property to off allows datasets to be used solely as a mechanism to
504 504 inherit properties. One example of setting canmount=off is to have two
505 505 datasets with the same mountpoint, so that the children of both
506 506 datasets appear in the same directory, but might have different
507 507 inherited characteristics.
508 508
509 509 When set to noauto, a dataset can only be mounted and unmounted
510 510 explicitly. The dataset is not mounted automatically when the dataset
511 511 is created or imported, nor is it mounted by the zfs mount -a command
512 512 or unmounted by the zfs unmount -a command.
513 513
514 514 This property is not inherited.
515 515
516 516 checksum=on|off|fletcher2|fletcher4|sha256|noparity|sha512|skein|edonr
517 517 Controls the checksum used to verify data integrity. The default value
518 518 is on, which automatically selects an appropriate algorithm (currently,
519 519 fletcher4, but this may change in future releases). The value off
520 520 disables integrity checking on user data. The value noparity not only
521 521 disables integrity but also disables maintaining parity for user data.
522 522 This setting is used internally by a dump device residing on a RAID-Z
523 523 pool and should not be used by any other dataset. Disabling checksums
524 524 is NOT a recommended practice.
525 525
526 526 The sha512, skein, and edonr checksum algorithms require enabling the
527 527 appropriate features on the pool. Please see zpool-features(5) for more
528 528 information on these algorithms.
529 529
530 530 Changing this property affects only newly-written data.
531 531
532 532 compression=on|off|gzip|gzip-N|lz4|lzjb|zle
533 533 Controls the compression algorithm used for this dataset.
534 534
535 535 Setting compression to on indicates that the current default
536 536 compression algorithm should be used. The default balances compression
537 537 and decompression speed with compression ratio, and is expected to work
538 538 well on a wide variety of workloads. Unlike all other settings for
539 539 this property, on does not select a fixed compression type. As new
540 540 compression algorithms are added to ZFS and enabled on a pool, the
541 541 default compression algorithm may change. The current default
542 542 compression algorithm is either lzjb or, if the lz4_compress feature is
543 543 enabled, lz4.
544 544
545 545 The lz4 compression algorithm is a high-performance replacement for the
546 546 lzjb algorithm. It features significantly faster compression and
547 547 decompression, as well as a moderately higher compression ratio than
548 548 lzjb, but can only be used on pools with the lz4_compress feature set
549 549 to enabled. See zpool-features(5) for details on ZFS feature flags and
550 550 the lz4_compress feature.
551 551
552 552 The lzjb compression algorithm is optimized for performance while
553 553 providing decent data compression.
554 554
555 555 The gzip compression algorithm uses the same compression as the gzip(1)
556 556 command. You can specify the gzip level by using the value gzip-N,
557 557 where N is an integer from 1 (fastest) to 9 (best compression ratio).
558 558 Currently, gzip is equivalent to gzip-6 (which is also the default for
559 559 gzip(1)).
560 560
561 561 The zle compression algorithm compresses runs of zeros.
562 562
563 563 This property can also be referred to by its shortened column name
564 564 compress. Changing this property affects only newly-written data.
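
For example, on a hypothetical dataset (only newly-written data is affected):

    zfs set compression=lz4 tank/data
    zfs get compression,compressratio tank/data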
565 565
566 566 copies=1|2|3
567 567 Controls the number of copies of data stored for this dataset. These
568 568 copies are in addition to any redundancy provided by the pool, for
569 569 example, mirroring or RAID-Z. The copies are stored on different disks,
570 570 if possible. The space used by multiple copies is charged to the
571 571 associated file and dataset, changing the used property and counting
572 572 against quotas and reservations.
573 573
574 574 Changing this property only affects newly-written data. Therefore, set
575 575 this property at file system creation time by using the -o copies=N
576 576 option.
577 577
578 578 devices=on|off
579 579 Controls whether device nodes can be opened on this file system. The
580 580 default value is on.
581 581
582 582 exec=on|off
583 583 Controls whether processes can be executed from within this file
584 584 system. The default value is on.
585 585
586 586 filesystem_limit=count|none
587 587 Limits the number of filesystems and volumes that can exist under this
588 588 point in the dataset tree. The limit is not enforced if the user is
589 589 allowed to change the limit. Setting a filesystem_limit on a
590 590 descendent of a filesystem that already has a filesystem_limit does not
591 591 override the ancestor's filesystem_limit, but rather imposes an
592 592 additional limit. This feature must be enabled to be used (see
593 593 zpool-features(5)).
594 594
595 595 mountpoint=path|none|legacy
596 596 Controls the mount point used for this file system. See the Mount
597 597 Points section for more information on how this property is used.
598 598
599 599 When the mountpoint property is changed for a file system, the file
600 600 system and any children that inherit the mount point are unmounted. If
601 601 the new value is legacy, then they remain unmounted. Otherwise, they
602 602 are automatically remounted in the new location if the property was
603 603 previously legacy or none, or if they were mounted before the property
604 604 was changed. In addition, any shared file systems are unshared and
605 605 shared in the new location.
606 606
607 607 nbmand=on|off
608 608 Controls whether the file system should be mounted with nbmand (Non
609 609 Blocking mandatory locks). This is used for SMB clients. Changes to
610 610 this property only take effect when the file system is umounted and
611 611 remounted. See mount(1M) for more information on nbmand mounts.
612 612
613 613 primarycache=all|none|metadata
614 614 Controls what is cached in the primary cache (ARC). If this property
615 615 is set to all, then both user data and metadata is cached. If this
616 616 property is set to none, then neither user data nor metadata is cached.
617 617 If this property is set to metadata, then only metadata is cached. The
618 618 default value is all.
619 619
620 620 quota=size|none
621 621 Limits the amount of space a dataset and its descendents can consume.
622 622 This property enforces a hard limit on the amount of space used. This
623 623 includes all space consumed by descendents, including file systems and
624 624 snapshots. Setting a quota on a descendent of a dataset that already
625 625 has a quota does not override the ancestor's quota, but rather imposes
626 626 an additional limit.
627 627
628 628 Quotas cannot be set on volumes, as the volsize property acts as an
629 629 implicit quota.
630 630
631 631 snapshot_limit=count|none
632 632 Limits the number of snapshots that can be created on a dataset and its
633 633 descendents. Setting a snapshot_limit on a descendent of a dataset that
634 634 already has a snapshot_limit does not override the ancestor's
635 635 snapshot_limit, but rather imposes an additional limit. The limit is
636 636 not enforced if the user is allowed to change the limit. For example,
637 637 this means that recursive snapshots taken from the global zone are
638 638 counted against each delegated dataset within a zone. This feature must
639 639 be enabled to be used (see zpool-features(5)).
640 640
641 641 userquota@user=size|none
642 642 Limits the amount of space consumed by the specified user. User space
643 643 consumption is identified by the userused@user property.
644 644
645 645 Enforcement of user quotas may be delayed by several seconds. This
646 646 delay means that a user might exceed their quota before the system
647 647 notices that they are over quota and begins to refuse additional writes
648 648 with the EDQUOT error message. See the zfs userspace subcommand for
649 649 more information.
650 650
651 651 Unprivileged users can only access their own space usage. The
652 652 root user, or a user who has been granted the userquota privilege with
653 653 zfs allow, can get and set everyone's quota.
654 654
655 655 This property is not available on volumes, on file systems before
656 656 version 4, or on pools before version 15. The userquota@... properties
657 657 are not displayed by zfs get all. The user's name must be appended
658 658 after the @ symbol, using one of the following forms:
659 659
660 660 o POSIX name (for example, joe)
661 661
662 662 o POSIX numeric ID (for example, 789)
663 663
664 664 o SID name (for example, joe.smith@mydomain)
665 665
666 666 o SID numeric ID (for example, S-1-123-456-789)
667 667
668 668 groupquota@group=size|none
669 669 Limits the amount of space consumed by the specified group. Group space
670 670 consumption is identified by the groupused@group property.
671 671
672 672 Unprivileged users can access only their own groups' space usage. The
673 673 root user, or a user who has been granted the groupquota privilege with
674 674 zfs allow, can get and set all groups' quotas.
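
A short sketch of both quota properties, with hypothetical user, group, and
dataset names:

    zfs set userquota@joe=50G tank/home
    zfs set groupquota@staff=200G tank/home
    zfs get userquota@joe,groupquota@staff tank/home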
675 675
676 676 readonly=on|off
677 677 Controls whether this dataset can be modified. The default value is
678 678 off.
679 679
680 680 This property can also be referred to by its shortened column name,
681 681 rdonly.
682 682
683 683 recordsize=size
684 684 Specifies a suggested block size for files in the file system. This
685 685 property is designed solely for use with database workloads that access
686 686 files in fixed-size records. ZFS automatically tunes block sizes
687 687 according to internal algorithms optimized for typical access patterns.
688 688
689 689 For databases that create very large files but access them in small
690 690 random chunks, these algorithms may be suboptimal. Specifying a
691 691 recordsize greater than or equal to the record size of the database can
692 692 result in significant performance gains. Use of this property for
693 693 general purpose file systems is strongly discouraged, and may adversely
694 694 affect performance.
695 695
696 696 The size specified must be a power of two greater than or equal to 512
697 697 and less than or equal to 128 Kbytes. If the large_blocks feature is
698 698 enabled on the pool, the size may be up to 1 Mbyte. See
699 699 zpool-features(5) for details on ZFS feature flags.
700 700
701 701 Changing the file system's recordsize affects only files created
702 702 afterward; existing files are unaffected.
703 703
704 704 This property can also be referred to by its shortened column name,
705 705 recsize.
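
For example, for a hypothetical database dataset that does 16 Kbyte I/O:

    # set recordsize before the database files are created; existing files
    # keep their old block size
    zfs create -o recordsize=16K tank/db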
706 706
707 707 redundant_metadata=all|most
708 708 Controls what types of metadata are stored redundantly. ZFS stores an
709 709 extra copy of metadata, so that if a single block is corrupted, the
710 710 amount of user data lost is limited. This extra copy is in addition to
711 711 any redundancy provided at the pool level (e.g. by mirroring or
712 712 RAID-Z), and is in addition to an extra copy specified by the copies
713 713 property (up to a total of 3 copies). For example if the pool is
714 714 mirrored, copies=2, and redundant_metadata=most, then ZFS stores 6
715 715 copies of most metadata, and 4 copies of data and some metadata.
716 716
717 717 When set to all, ZFS stores an extra copy of all metadata. If a single
718 718 on-disk block is corrupt, at worst a single block of user data (which
719 719 is recordsize bytes long) can be lost.
720 720
721 721 When set to most, ZFS stores an extra copy of most types of metadata.
722 722 This can improve performance of random writes, because less metadata
723 723 must be written. In practice, at worst about 100 blocks (of recordsize
724 724 bytes each) of user data can be lost if a single on-disk block is
725 725 corrupt. The exact behavior of which metadata blocks are stored
726 726 redundantly may change in future releases.
727 727
728 728 The default value is all.
729 729
730 730 refquota=size|none
731 731 Limits the amount of space a dataset can consume. This property
732 732 enforces a hard limit on the amount of space used. This hard limit does
733 733 not include space used by descendents, including file systems and
734 734 snapshots.
735 735
736 736 refreservation=size|none
737 737 The minimum amount of space guaranteed to a dataset, not including its
738 738 descendents. When the amount of space used is below this value, the
739 739 dataset is treated as if it were taking up the amount of space
740 740 specified by refreservation. The refreservation reservation is
741 741 accounted for in the parent datasets' space used, and counts against
742 742 the parent datasets' quotas and reservations.
743 743
744 744 If refreservation is set, a snapshot is only allowed if there is enough
745 745 free pool space outside of this reservation to accommodate the current
746 746 number of "referenced" bytes in the dataset.
747 747
748 748 This property can also be referred to by its shortened column name,
749 749 refreserv.
750 750
751 751 reservation=size|none
752 752 The minimum amount of space guaranteed to a dataset and its
753 753 descendents. When the amount of space used is below this value, the
754 754 dataset is treated as if it were taking up the amount of space
755 755 specified by its reservation. Reservations are accounted for in the
756 756 parent datasets' space used, and count against the parent datasets'
757 757 quotas and reservations.
758 758
759 759 This property can also be referred to by its shortened column name,
760 760 reserv.
761 761
762 762 secondarycache=all|none|metadata
763 763 Controls what is cached in the secondary cache (L2ARC). If this
764 764 property is set to all, then both user data and metadata is cached. If
765 765 this property is set to none, then neither user data nor metadata is
766 766 cached. If this property is set to metadata, then only metadata is
767 767 cached. The default value is all.
768 768
769 769 setuid=on|off
770 770 Controls whether the setuid bit is respected for the file system. The
771 771 default value is on.
772 772
773 773 sharesmb=on|off|opts
774 774 Controls whether the file system is shared via SMB, and what options
775 775 are to be used. A file system with the sharesmb property set to off is
776 776 managed through traditional tools such as sharemgr(1M). Otherwise, the
777 777 file system is automatically shared and unshared with the zfs share and
778 778 zfs unshare commands. If the property is set to on, the sharemgr(1M)
779 779 command is invoked with no options. Otherwise, the sharemgr(1M) command
780 780 is invoked with options equivalent to the contents of this property.
781 781
782 782 Because SMB shares require a resource name, a unique resource name is
783 783 constructed from the dataset name. The constructed name is a copy of
784 784 the dataset name except that the characters in the dataset name, which
785 785 would be illegal in the resource name, are replaced with underscore (_)
786 786 characters. A pseudo property "name" is also supported that allows you
787 787 to replace the dataset name with a specified name. The specified name
788 788 is then used to replace the prefix dataset in the case of inheritance.
789 789 For example, if the dataset data/home/john is set to name=john, then
790 790 data/home/john has a resource name of john. If a child dataset
791 791 data/home/john/backups is shared, it has a resource name of
792 792 john_backups.
793 793
794 794 When SMB shares are created, the SMB share name appears as an entry in
795 795 the .zfs/shares directory. You can use the ls or chmod command to
796 796 display the share-level ACLs on the entries in this directory.
797 797
798 798 When the sharesmb property is changed for a dataset, the dataset and
799 799 any children inheriting the property are re-shared with the new
800 800 options, only if the property was previously set to off, or if they
801 801 were shared before the property was changed. If the new property is set
802 802 to off, the file systems are unshared.
803 803
804 804 sharenfs=on|off|opts
805 805 Controls whether the file system is shared via NFS, and what options
806 806 are to be used. A file system with a sharenfs property of off is
807 807 managed through traditional tools such as share(1M), unshare(1M), and
808 808 dfstab(4). Otherwise, the file system is automatically shared and
809 809 unshared with the zfs share and zfs unshare commands. If the property
810 810 is set to on, the share(1M) command is invoked with no options. Otherwise,
811 811 the share(1M) command is invoked with options equivalent to the
812 812 contents of this property.
813 813
814 814 When the sharenfs property is changed for a dataset, the dataset and
815 815 any children inheriting the property are re-shared with the new
816 816 options, only if the property was previously off, or if they were
817 817 shared before the property was changed. If the new property is off, the
818 818 file systems are unshared.
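
A minimal sketch, assuming a hypothetical dataset tank/export:

    # share with default options; setting the property back to off unshares it
    zfs set sharenfs=on tank/export
    zfs set sharenfs=off tank/export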
819 819
820 820 logbias=latency|throughput
821 821 Provide a hint to ZFS about handling of synchronous requests in this
822 822 dataset. If logbias is set to latency (the default), ZFS will use pool
823 823 log devices (if configured) to handle the requests at low latency. If
824 824 logbias is set to throughput, ZFS will not use configured pool log
825 825 devices. ZFS will instead optimize synchronous operations for global
826 826 pool throughput and efficient use of resources.
827 827
828 828 snapdir=hidden|visible
829 829 Controls whether the .zfs directory is hidden or visible in the root of
830 830 the file system as discussed in the Snapshots section. The default
831 831 value is hidden.
832 832
833 833 sync=standard|always|disabled
834 834 Controls the behavior of synchronous requests (e.g. fsync, O_DSYNC).
835 835 standard is the POSIX specified behavior of ensuring all synchronous
836 836 requests are written to stable storage and all devices are flushed to
837 837 ensure data is not cached by device controllers (this is the default).
838 838 always causes every file system transaction to be written and flushed
839 839 before its system call returns. This has a large performance penalty.
840 840 disabled disables synchronous requests. File system transactions are
841 841 only committed to stable storage periodically. This option will give
842 842 the highest performance. However, it is very dangerous as ZFS would be
843 843 ignoring the synchronous transaction demands of applications such as
844 844 databases or NFS. Administrators should only use this option when the
845 845 risks are understood.
846 846
847 847 version=N|current
848 848 The on-disk version of this file system, which is independent of the
849 849 pool version. This property can only be set to later supported
850 850 versions. See the zfs upgrade command.
851 851
852 852 volsize=size
853 853 For volumes, specifies the logical size of the volume. By default,
854 854 creating a volume establishes a reservation of equal size. For storage
855 855 pools with a version number of 9 or higher, a refreservation is set
856 856 instead. Any changes to volsize are reflected in an equivalent change
857 857 to the reservation (or refreservation). The volsize can only be set to
858 858 a multiple of volblocksize, and cannot be zero.
859 859
860 860 The reservation is kept equal to the volume's logical size to prevent
861 861 unexpected behavior for consumers. Without the reservation, the volume
862 862 could run out of space, resulting in undefined behavior or data
863 863 corruption, depending on how the volume is used. These effects can also
864 864 occur when the volume size is changed while it is in use (particularly
865 865 when shrinking the size). Extreme care should be used when adjusting
866 866 the volume size.
867 867
868 868 Though not recommended, a "sparse volume" (also known as "thin
869 869 provisioning") can be created by specifying the -s option to the zfs
870 870 create -V command, or by changing the reservation after the volume has
871 871 been created. A "sparse volume" is a volume where the reservation is
872 872 less than the volume size. Consequently, writes to a sparse volume can
873 873 fail with ENOSPC when the pool is low on space. For a sparse volume,
874 874 changes to volsize are not reflected in the reservation.
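
For example, a sparse volume can be created as follows (hypothetical names):

    # 100G sparse volume: no refreservation, so writes may fail with ENOSPC
    # when the pool is low on space
    zfs create -s -V 100G tank/vol1
    zfs get volsize,refreservation tank/vol1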
875 875
876 876 vscan=on|off
877 877 Controls whether regular files should be scanned for viruses when a
878 878 file is opened and closed. In addition to enabling this property, the
879 879 virus scan service must also be enabled for virus scanning to occur.
880 880 The default value is off.
881 881
882 882 xattr=on|off
883 883 Controls whether extended attributes are enabled for this file system.
884 884 The default value is on.
885 885
886 886 zoned=on|off
887 887 Controls whether the dataset is managed from a non-global zone. See the
888 888 Zones section for more information. The default value is off.
889 889
890 890 The following three properties cannot be changed after the file system is
891 891 created, and therefore, should be set when the file system is created. If
892 892 the properties are not set with the zfs create or zpool create commands,
893 893 these properties are inherited from the parent dataset. If the parent
894 894 dataset lacks these properties due to having been created prior to these
895 895 features being supported, the new file system will have the default
896 896 values for these properties.
897 897
898 898 casesensitivity=sensitive|insensitive|mixed
899 899 Indicates whether the file name matching algorithm used by the file
900 900 system should be case-sensitive, case-insensitive, or allow a
901 901 combination of both styles of matching. The default value for the
902 902 casesensitivity property is sensitive. Traditionally, UNIX and POSIX
903 903 file systems have case-sensitive file names.
904 904
905 905 The mixed value for the casesensitivity property indicates that the
906 906 file system can support requests for both case-sensitive and case-
907 907 insensitive matching behavior. Currently, case-insensitive matching
908 908 behavior on a file system that supports mixed behavior is limited to
909 909 the SMB server product. For more information about the mixed value
910 910 behavior, see the "ZFS Administration Guide".
911 911
912 912 normalization=none|formC|formD|formKC|formKD
913 913 Indicates whether the file system should perform a unicode
914 914 normalization of file names whenever two file names are compared, and
915 915 which normalization algorithm should be used. File names are always
916 916 stored unmodified, names are normalized as part of any comparison
917 917 process. If this property is set to a legal value other than none, and
918 918 the utf8only property was left unspecified, the utf8only property is
919 919 automatically set to on. The default value of the normalization
920 920 property is none. This property cannot be changed after the file
921 921 system is created.
922 922
923 923 utf8only=on|off
924 924 Indicates whether the file system should reject file names that include
925 925 characters that are not present in the UTF-8 character code set. If
926 926 this property is explicitly set to off, the normalization property must
927 927 either not be explicitly set or be set to none. The default value for
928 928 the utf8only property is off. This property cannot be changed after
929 929 the file system is created.
930 930
931 931 The casesensitivity, normalization, and utf8only properties are also new
932 932 permissions that can be assigned to non-privileged users by using the ZFS
933 933 delegated administration feature.
934 934
935 935 Temporary Mount Point Properties
936 936 When a file system is mounted, either through mount(1M) for legacy mounts
937 937 or the zfs mount command for normal file systems, its mount options are
938 938 set according to its properties. The correlation between properties and
939 939 mount options is as follows:
940 940
941 941 PROPERTY MOUNT OPTION
942 942 devices devices/nodevices
943 943 exec exec/noexec
944 944 readonly ro/rw
945 945 setuid setuid/nosetuid
946 946 xattr xattr/noxattr
947 947
948 948 In addition, these options can be set on a per-mount basis using the -o
949 949 option, without affecting the property that is stored on disk. The values
950 950 specified on the command line override the values stored in the dataset.
951 951 The nosuid option is an alias for nodevices,nosetuid. These properties
952 952 are reported as "temporary" by the zfs get command. If the properties are
953 953 changed while the dataset is mounted, the new setting overrides any
954 954 temporary settings.
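
For example, assuming a hypothetical dataset tank/home:

    # mount read-only for this mount only; the stored readonly property is
    # unchanged and zfs get reports the setting as temporary
    zfs mount -o ro tank/home
    zfs get readonly tank/home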
955 955
956 956 User Properties
957 957 In addition to the standard native properties, ZFS supports arbitrary
958 958 user properties. User properties have no effect on ZFS behavior, but
959 959 applications or administrators can use them to annotate datasets (file
960 960 systems, volumes, and snapshots).
961 961
962 962 User property names must contain a colon (":") character to distinguish
963 963 them from native properties. They may contain lowercase letters, numbers,
964 964 and the following punctuation characters: colon (":"), dash ("-"), period
965 965 ("."), and underscore ("_"). The expected convention is that the
966 966 property name is divided into two portions such as module:property, but
967 967 this namespace is not enforced by ZFS. User property names can be at
968 968 most 256 characters, and cannot begin with a dash ("-").
969 969
970 970 When making programmatic use of user properties, it is strongly suggested
971 971 to use a reversed DNS domain name for the module component of property
972 972 names to reduce the chance that two independently-developed packages use
973 973 the same property name for different purposes.
974 974
975 975 The values of user properties are arbitrary strings, are always
976 976 inherited, and are never validated. All of the commands that operate on
977 977 properties (zfs list, zfs get, zfs set, and so forth) can be used to
978 978 manipulate both native properties and user properties. Use the zfs
979 979 inherit command to clear a user property. If the property is not defined
980 980 in any parent dataset, it is removed entirely. Property values are
981 981 limited to 1024 characters.
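
For example, using a hypothetical reversed-DNS module name:

    zfs set com.example:backup-policy=weekly tank/data
    zfs get com.example:backup-policy tank/data
    zfs inherit com.example:backup-policy tank/data   # clears the property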
982 982
983 983 ZFS Volumes as Swap or Dump Devices
984 984 During an initial installation a swap device and dump device are created
985 985 on ZFS volumes in the ZFS root pool. By default, the swap area size is
986 986 based on 1/2 the size of physical memory up to 2 Gbytes. The size of the
987 987 dump device depends on the kernel's requirements at installation time.
988 988 Separate ZFS volumes must be used for the swap area and dump devices. Do
989 989 not swap to a file on a ZFS file system. A ZFS swap file configuration is
990 990 not supported.
991 991
992 992 If you need to change your swap area or dump device after the system is
993 993 installed or upgraded, use the swap(1M) and dumpadm(1M) commands.
994 994
995 995 SUBCOMMANDS
996 996 All subcommands that modify state are logged persistently to the pool in
997 997 their original form.
998 998
999 999 zfs -?
1000 1000 Displays a help message.
1001 1001
1002 1002 zfs create [-p] [-o property=value]... filesystem
1003 1003 Creates a new ZFS file system. The file system is automatically mounted
1004 1004 according to the mountpoint property inherited from the parent.
1005 1005
1006 1006 -o property=value
1007 1007 Sets the specified property as if the command zfs set
1008 1008 property=value was invoked at the same time the dataset was
1009 1009 created. Any editable ZFS property can also be set at creation
1010 1010 time. Multiple -o options can be specified. An error results if the
1011 1011 same property is specified in multiple -o options.
1012 1012
1013 1013 -p Creates all the non-existing parent datasets. Datasets created in
1014 1014 this manner are automatically mounted according to the mountpoint
1015 1015 property inherited from their parent. Any property specified on the
1016 1016 command line using the -o option is ignored. If the target
1017 1017 filesystem already exists, the operation completes successfully.
1018 1018
1019 1019 zfs create [-ps] [-b blocksize] [-o property=value]... -V size volume
1020 1020 Creates a volume of the given size. The volume is exported as a block
1021 1021 device in /dev/zvol/{dsk,rdsk}/path, where path is the name of the
1022 1022 volume in the ZFS namespace. The size represents the logical size as
1023 1023 exported by the device. By default, a reservation of equal size is
1024 1024 created.
1025 1025
1026 1026 size is automatically rounded up to the nearest 128 Kbytes to ensure
1027 1027 that the volume has an integral number of blocks regardless of
1028 1028 blocksize.
1029 1029
1030 1030 -b blocksize
1031 1031 Equivalent to -o volblocksize=blocksize. If this option is
1032 1032 specified in conjunction with -o volblocksize, the resulting
1033 1033 behavior is undefined.
1034 1034
1035 1035 -o property=value
1036 1036 Sets the specified property as if the zfs set property=value
1037 1037 command was invoked at the same time the dataset was created. Any
1038 1038 editable ZFS property can also be set at creation time. Multiple -o
1039 1039 options can be specified. An error results if the same property is
1040 1040 specified in multiple -o options.
1041 1041
1042 1042 -p Creates all the non-existing parent datasets. Datasets created in
1043 1043 this manner are automatically mounted according to the mountpoint
1044 1044 property inherited from their parent. Any property specified on the
1045 1045 command line using the -o option is ignored. If the target
1046 1046 filesystem already exists, the operation completes successfully.
1047 1047
1048 1048 -s Creates a sparse volume with no reservation. See volsize in the
1049 1049 Native Properties section for more information about sparse
1050 1050 volumes.
1051 1051
1052 1052 zfs destroy [-Rfnprv] filesystem|volume
1053 1053 Destroys the given dataset. By default, the command unshares any file
1054 1054 systems that are currently shared, unmounts any file systems that are
1055 1055 currently mounted, and refuses to destroy a dataset that has active
1056 1056 dependents (children or clones).
1057 1057
1058 1058 -R Recursively destroy all dependents, including cloned file systems
1059 1059 outside the target hierarchy.
1060 1060
1061 1061 -f Force an unmount of any file systems using the unmount -f command.
1062 1062 This option has no effect on non-file systems or unmounted file
1063 1063 systems.
1064 1064
1065 1065 -n Do a dry-run ("No-op") deletion. No data will be deleted. This is
1066 1066 useful in conjunction with the -v or -p flags to determine what
1067 1067 data would be deleted.
1068 1068
1069 1069 -p Print machine-parsable verbose information about the deleted data.
1070 1070
1071 1071 -r Recursively destroy all children.
1072 1072
1073 1073 -v Print verbose information about the deleted data.
1074 1074
1075 1075 Extreme care should be taken when applying either the -r or the -R
1076 1076 options, as they can destroy large portions of a pool and cause
1077 1077 unexpected behavior for mounted file systems in use.
1078 1078
1079 1079 zfs destroy [-Rdnprv] filesystem|volume@snap[%snap[,snap[%snap]]]...
1080 1080 The given snapshots are destroyed immediately if and only if the zfs
1081 1081 destroy command without the -d option would have destroyed it. Such
1082 1082 immediate destruction would occur, for example, if the snapshot had no
1083 1083 clones and the user-initiated reference count were zero.
1084 1084
1085 1085 If a snapshot does not qualify for immediate destruction, it is marked
1086 1086 for deferred deletion. In this state, it exists as a usable, visible
1087 1087 snapshot until both of the preconditions listed above are met, at which
1088 1088 point it is destroyed.
1089 1089
1090 1090 An inclusive range of snapshots may be specified by separating the
1091 1091 first and last snapshots with a percent sign. The first and/or last
1092 1092 snapshots may be left blank, in which case the filesystem's oldest or
1093 1093 newest snapshot will be implied.
1094 1094
1095 1095 Multiple snapshots (or ranges of snapshots) of the same filesystem or
1096 1096 volume may be specified in a comma-separated list of snapshots. Only
1097 1097 the snapshot's short name (the part after the @) should be specified
1098 1098 when using a range or comma-separated list to identify multiple
1099 1099 snapshots.
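
                 For example, assuming a file system pool/home/bob with the
                 snapshots named below (names illustrative), a range plus one
                 additional snapshot can be destroyed in a single command:

                   # zfs destroy pool/home/bob@monday%wednesday,friday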
1100 1100
1101 1101 -R Recursively destroy all clones of these snapshots, including the
1102 1102 clones, snapshots, and children. If this flag is specified, the -d
1103 1103 flag will have no effect.
1104 1104
1105 1105 -d Defer snapshot deletion.
1106 1106
1107 1107 -n Do a dry-run ("No-op") deletion. No data will be deleted. This is
1108 1108 useful in conjunction with the -p or -v flags to determine what
1109 1109 data would be deleted.
1110 1110
1111 1111 -p Print machine-parsable verbose information about the deleted data.
1112 1112
1113 1113 -r Destroy (or mark for deferred deletion) all snapshots with this
1114 1114 name in descendent file systems.
1115 1115
1116 1116 -v Print verbose information about the deleted data.
1117 1117
1118 1118 Extreme care should be taken when applying either the -r or the -R
1119 1119 options, as they can destroy large portions of a pool and cause
1120 1120 unexpected behavior for mounted file systems in use.
1121 1121
1122 1122 zfs destroy filesystem|volume#bookmark
1123 1123 The given bookmark is destroyed.
1124 1124
1125 1125 zfs snapshot [-r] [-o property=value]...
1126 1126 filesystem@snapname|volume@snapname...
1127 1127 Creates snapshots with the given names. All previous modifications by
1128 1128 successful system calls to the file system are part of the snapshots.
1129 1129 Snapshots are taken atomically, so that all snapshots correspond to the
1130 1130 same moment in time. See the Snapshots section for details.
1131 1131
1132 1132 -o property=value
1133 1133 Sets the specified property; see zfs create for details.
1134 1134
1135 1135 -r Recursively create snapshots of all descendent datasets
1136 1136
1137 1137 zfs rollback [-Rfr] snapshot
1138 1138 Roll back the given dataset to a previous snapshot. When a dataset is
1139 1139 rolled back, all data that has changed since the snapshot is discarded,
1140 1140 and the dataset reverts to the state at the time of the snapshot. By
1141 1141 default, the command refuses to roll back to a snapshot other than the
1142 1142 most recent one. In order to do so, all intermediate snapshots and
1143 1143 bookmarks must be destroyed by specifying the -r option.
1144 1144
1145 1145 The -rR options do not recursively destroy the child snapshots of a
1146 1146 recursive snapshot. Only direct snapshots of the specified filesystem
1147 1147 are destroyed by either of these options. To completely roll back a
1148 1148       recursive snapshot, you must roll back the individual child snapshots.
1149 1149
1150 1150 -R Destroy any more recent snapshots and bookmarks, as well as any
1151 1151 clones of those snapshots.
1152 1152
1153 1153 -f Used with the -R option to force an unmount of any clone file
1154 1154 systems that are to be destroyed.
1155 1155
1156 1156 -r Destroy any snapshots and bookmarks more recent than the one
1157 1157 specified.
1158 1158
1159 1159 zfs clone [-p] [-o property=value]... snapshot filesystem|volume
1160 1160 Creates a clone of the given snapshot. See the Clones section for
1161 1161 details. The target dataset can be located anywhere in the ZFS
1162 1162 hierarchy, and is created as the same type as the original.
1163 1163
1164 1164 -o property=value
1165 1165 Sets the specified property; see zfs create for details.
1166 1166
1167 1167 -p Creates all the non-existing parent datasets. Datasets created in
1168 1168 this manner are automatically mounted according to the mountpoint
1169 1169 property inherited from their parent. If the target filesystem or
1170 1170 volume already exists, the operation completes successfully.
1171 1171
1172 1172 zfs promote clone-filesystem
1173 1173 Promotes a clone file system to no longer be dependent on its "origin"
1174 1174 snapshot. This makes it possible to destroy the file system that the
1175 1175 clone was created from. The clone parent-child dependency relationship
1176 1176 is reversed, so that the origin file system becomes a clone of the
1177 1177 specified file system.
1178 1178
1179 1179 The snapshot that was cloned, and any snapshots previous to this
1180 1180 snapshot, are now owned by the promoted clone. The space they use moves
1181 1181 from the origin file system to the promoted clone, so enough space must
1182 1182 be available to accommodate these snapshots. No new space is consumed
1183 1183 by this operation, but the space accounting is adjusted. The promoted
1184 1184 clone must not have any conflicting snapshot names of its own. The
1185 1185 rename subcommand can be used to rename any conflicting snapshots.
1186 1186
1187 1187 zfs rename [-f] filesystem|volume|snapshot filesystem|volume|snapshot
1188 1188 zfs rename [-fp] filesystem|volume filesystem|volume
1189 1189 Renames the given dataset. The new target can be located anywhere in
1190 1190 the ZFS hierarchy, with the exception of snapshots. Snapshots can only
1191 1191 be renamed within the parent file system or volume. When renaming a
1192 1192 snapshot, the parent file system of the snapshot does not need to be
1193 1193 specified as part of the second argument. Renamed file systems can
1194 1194 inherit new mount points, in which case they are unmounted and
1195 1195 remounted at the new mount point.
1196 1196
1197 1197 -f Force unmount any filesystems that need to be unmounted in the
1198 1198 process.
1199 1199
1200 1200 -p Creates all the nonexistent parent datasets. Datasets created in
1201 1201 this manner are automatically mounted according to the mountpoint
1202 1202 property inherited from their parent.
1203 1203
1204 1204 zfs rename -r snapshot snapshot
1205 1205 Recursively rename the snapshots of all descendent datasets. Snapshots
1206 1206       are the only type of dataset that can be renamed recursively.
1207 1207
1208 1208 zfs list [-r|-d depth] [-Hp] [-o property[,property]...] [-s property]...
1209 1209 [-S property]... [-t type[,type]...] [filesystem|volume|snapshot]...
1210 1210 Lists the property information for the given datasets in tabular form.
1211 1211 If specified, you can list property information by the absolute
1212 1212 pathname or the relative pathname. By default, all file systems and
1213 1213 volumes are displayed. Snapshots are displayed if the listsnaps
1214 1214 property is on (the default is off). The following fields are
1215 1215       displayed: name, used, available, referenced, mountpoint.
1216 1216
1217 1217       -H  Used for scripting mode. Do not print headers, and separate fields
1218 1218 by a single tab instead of arbitrary white space.
1219 1219
1220 1220 -S property
1221 1221 Same as the -s option, but sorts by property in descending order.
1222 1222
1223 1223 -d depth
1224 1224 Recursively display any children of the dataset, limiting the
1225 1225           recursion to depth. A depth of 1 will display only the dataset and its direct children.
1226 1226
1227 1227 -o property
1228 1228 A comma-separated list of properties to display. The property must
1229 1229 be:
1230 1230
1231 1231 o One of the properties described in the Native Properties
1232 1232 section
1233 1233
1234 1234 o A user property
1235 1235
1236 1236 o The value name to display the dataset name
1237 1237
1238 1238 o The value space to display space usage properties on file
1239 1239 systems and volumes. This is a shortcut for specifying -o
1240 1240 name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t
1241 1241 filesystem,volume syntax.
1242 1242
1243 1243 -p Display numbers in parsable (exact) values.
1244 1244
1245 1245       -r  Recursively display any children of the dataset on the command
1246 1246           line.
1248 1248
1249 1249 -s property
1250 1250 A property for sorting the output by column in ascending order
1251 1251 based on the value of the property. The property must be one of the
1252 1252 properties described in the Properties section, or the special
1253 1253 value name to sort by the dataset name. Multiple properties can be
1254 1254 specified at one time using multiple -s property options. Multiple
1255 1255 -s options are evaluated from left to right in decreasing order of
1256 1256 importance. The following is a list of sorting criteria:
1257 1257
1258 1258 o Numeric types sort in numeric order.
1259 1259
1260 1260 o String types sort in alphabetical order.
1261 1261
1262 1262 o Types inappropriate for a row sort that row to the literal
1263 1263 bottom, regardless of the specified ordering.
1264 1264
1265 1265           If no sorting options are specified, the existing behavior of zfs
1266 1266 list is preserved.
1267 1267
1268 1268 -t type
1269 1269 A comma-separated list of types to display, where type is one of
1270 1270 filesystem, snapshot, volume, bookmark, or all. For example,
1271 1271 specifying -t snapshot displays only snapshots.
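
                 For example, assuming a dataset named tank/home, only its
                 snapshots might be listed, sorted by the used property:

                   # zfs list -r -t snapshot -o name,used -s used tank/home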
1272 1272
1273 1273 zfs set property=value [property=value]... filesystem|volume|snapshot...
1274 1274 Sets the property or list of properties to the given value(s) for each
1275 1275 dataset. Only some properties can be edited. See the Properties
1276 1276 section for more information on what properties can be set and
1277 1277 acceptable values. Numeric values can be specified as exact values, or
1278 1278 in a human-readable form with a suffix of B, K, M, G, T, P, E, Z (for
1279 1279 bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes,
1280 1280 or zettabytes, respectively). User properties can be set on snapshots.
1281 1281 For more information, see the User Properties section.
1282 1282
1283 1283 zfs get [-r|-d depth] [-Hp] [-o field[,field]...] [-s source[,source]...]
1284 1284 [-t type[,type]...] all | property[,property]...
1285 1285 filesystem|volume|snapshot...
1286 1286 Displays properties for the given datasets. If no datasets are
1287 1287 specified, then the command displays properties for all datasets on the
1288 1288 system. For each property, the following columns are displayed:
1289 1289
1290 1290 name Dataset name
1291 1291 property Property name
1292 1292 value Property value
1293 1293 source Property source. Can either be local, default,
1294 1294 temporary, inherited, or none (-).
1295 1295
1296 1296 All columns are displayed by default, though this can be controlled by
1297 1297 using the -o option. This command takes a comma-separated list of
1298 1298 properties as described in the Native Properties and User Properties
1299 1299 sections.
1300 1300
1301 1301 The special value all can be used to display all properties that apply
1302 1302 to the given dataset's type (filesystem, volume, snapshot, or
1303 1303 bookmark).
1304 1304
1305 1305 -H Display output in a form more easily parsed by scripts. Any headers
1306 1306 are omitted, and fields are explicitly separated by a single tab
1307 1307 instead of an arbitrary amount of space.
1308 1308
1309 1309 -d depth
1310 1310 Recursively display any children of the dataset, limiting the
1311 1311 recursion to depth. A depth of 1 will display only the dataset and
1312 1312 its direct children.
1313 1313
1314 1314 -o field
1315 1315 A comma-separated list of columns to display.
1316 1316 name,property,value,source is the default value.
1317 1317
1318 1318 -p Display numbers in parsable (exact) values.
1319 1319
1320 1320 -r Recursively display properties for any children.
1321 1321
1322 1322 -s source
1323 1323 A comma-separated list of sources to display. Those properties
1324 1324 coming from a source other than those in this list are ignored.
1325 1325 Each source must be one of the following: local, default,
1326 1326 inherited, temporary, and none. The default value is all sources.
1327 1327
1328 1328 -t type
1329 1329 A comma-separated list of types to display, where type is one of
1330 1330 filesystem, snapshot, volume, bookmark, or all.
1331 1331
1332 1332 zfs inherit [-rS] property filesystem|volume|snapshot...
1333 1333 Clears the specified property, causing it to be inherited from an
1334 1334 ancestor, restored to default if no ancestor has the property set, or
1335 1335 with the -S option reverted to the received value if one exists. See
1336 1336 the Properties section for a listing of default values, and details on
1337 1337 which properties can be inherited.
1338 1338
1339 1339 -r Recursively inherit the given property for all children.
1340 1340
1341 1341 -S Revert the property to the received value if one exists; otherwise
1342 1342 operate as if the -S option was not specified.
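
                 For example, assuming a dataset named tank/home that was
                 received with a replication stream, a local override can be
                 reverted to the received value:

                   # zfs inherit -S compression tank/home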
1343 1343
1344 1344 zfs upgrade
1345 1345 Displays a list of file systems that are not the most recent version.
1346 1346
1347 1347 zfs upgrade -v
1348 1348 Displays a list of currently supported file system versions.
1349 1349
1350 1350 zfs upgrade [-r] [-V version] -a | filesystem
1351 1351 Upgrades file systems to a new on-disk version. Once this is done, the
1352 1352 file systems will no longer be accessible on systems running older
1353 1353 versions of the software. zfs send streams generated from new
1354 1354 snapshots of these file systems cannot be accessed on systems running
1355 1355 older versions of the software.
1356 1356
1357 1357 In general, the file system version is independent of the pool version.
1358 1358 See zpool(1M) for information on the zpool upgrade command.
1359 1359
1360 1360 In some cases, the file system version and the pool version are
1361 1361 interrelated and the pool version must be upgraded before the file
1362 1362 system version can be upgraded.
1363 1363
1364 1364 -V version
1365 1365 Upgrade to the specified version. If the -V flag is not specified,
1366 1366 this command upgrades to the most recent version. This option can
1367 1367 only be used to increase the version number, and only up to the
1368 1368 most recent version supported by this software.
1369 1369
1370 1370 -a Upgrade all file systems on all imported pools.
1371 1371
1372 1372 filesystem
1373 1373 Upgrade the specified file system.
1374 1374
1375 1375 -r Upgrade the specified file system and all descendent file systems.
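
                 For example, assuming a pool named tank, the file system tank
                 and all of its descendents might be upgraded to the most
                 recent supported version with:

                   # zfs upgrade -r tank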
1376 1376
1377 1377 zfs userspace [-Hinp] [-o field[,field]...] [-s field]... [-S field]...
1378 1378 [-t type[,type]...] filesystem|snapshot
1379 1379 Displays space consumed by, and quotas on, each user in the specified
1380 1380 filesystem or snapshot. This corresponds to the userused@user and
1381 1381 userquota@user properties.
1382 1382
1383 1383 -H Do not print headers, use tab-delimited output.
1384 1384
1385 1385 -S field
1386 1386 Sort by this field in reverse order. See -s.
1387 1387
1388 1388 -i Translate SID to POSIX ID. The POSIX ID may be ephemeral if no
1389 1389 mapping exists. Normal POSIX interfaces (for example, stat(2), ls
1390 1390 -l) perform this translation, so the -i option allows the output
1391 1391 from zfs userspace to be compared directly with those utilities.
1392 1392 However, -i may lead to confusion if some files were created by an
1393 1393           SMB user before an SMB-to-POSIX name mapping was established. In
1394 1394 such a case, some files will be owned by the SMB entity and some by
1395 1395 the POSIX entity. However, the -i option will report that the POSIX
1396 1396 entity has the total usage and quota for both.
1397 1397
1398 1398 -n Print numeric ID instead of user/group name.
1399 1399
1400 1400 -o field[,field]...
1401 1401 Display only the specified fields from the following set: type,
1402 1402 name, used, quota. The default is to display all fields.
1403 1403
1404 1404 -p Use exact (parsable) numeric output.
1405 1405
1406 1406 -s field
1407 1407 Sort output by this field. The -s and -S flags may be specified
1408 1408 multiple times to sort first by one field, then by another. The
1409 1409 default is -s type -s name.
1410 1410
1411 1411 -t type[,type]...
1412 1412 Print only the specified types from the following set: all,
1413 1413 posixuser, smbuser, posixgroup, smbgroup. The default is -t
1414 1414 posixuser,smbuser. The default can be changed to include group
1415 1415 types.
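
                 For example, assuming a file system named tank/home, per-user
                 space consumption might be listed in parsable form, sorted by
                 usage:

                   # zfs userspace -p -o name,used -s used tank/home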
1416 1416
1417 1417 zfs groupspace [-Hinp] [-o field[,field]...] [-s field]... [-S field]...
1418 1418 [-t type[,type]...] filesystem|snapshot
1419 1419 Displays space consumed by, and quotas on, each group in the specified
1420 1420 filesystem or snapshot. This subcommand is identical to zfs userspace,
1421 1421 except that the default types to display are -t posixgroup,smbgroup.
1422 1422
1423 1423 zfs mount
1424 1424 Displays all ZFS file systems currently mounted.
1425 1425
1426 1426 zfs mount [-Ov] [-o options] -a | filesystem
1427 1427 Mounts ZFS file systems.
1428 1428
1429 1429 -O Perform an overlay mount. See mount(1M) for more information.
1430 1430
1431 1431 -a Mount all available ZFS file systems. Invoked automatically as part
1432 1432 of the boot process.
1433 1433
1434 1434 filesystem
1435 1435 Mount the specified filesystem.
1436 1436
1437 1437 -o options
1438 1438 An optional, comma-separated list of mount options to use
1439 1439 temporarily for the duration of the mount. See the Temporary Mount
1440 1440 Point Properties section for details.
1441 1441
1442 1442 -v Report mount progress.
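
                 For example, assuming a file system named tank/home, it might
                 be mounted read-only for the duration of this mount:

                   # zfs mount -o ro tank/home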
1443 1443
1444 1444 zfs unmount [-f] -a | filesystem|mountpoint
1445 1445 Unmounts currently mounted ZFS file systems.
1446 1446
1447 1447 -a Unmount all available ZFS file systems. Invoked automatically as
1448 1448 part of the shutdown process.
1449 1449
1450 1450 filesystem|mountpoint
1451 1451 Unmount the specified filesystem. The command can also be given a
1452 1452 path to a ZFS file system mount point on the system.
1453 1453
1454 1454 -f Forcefully unmount the file system, even if it is currently in use.
1455 1455
1456 1456 zfs share -a | filesystem
1457 1457 Shares available ZFS file systems.
1458 1458
1459 1459 -a Share all available ZFS file systems. Invoked automatically as part
1460 1460 of the boot process.
1461 1461
1462 1462 filesystem
1463 1463 Share the specified filesystem according to the sharenfs and
1464 1464 sharesmb properties. File systems are shared when the sharenfs or
1465 1465 sharesmb property is set.
1466 1466
1467 1467 zfs unshare -a | filesystem|mountpoint
1468 1468 Unshares currently shared ZFS file systems.
1469 1469
1470 1470 -a Unshare all available ZFS file systems. Invoked automatically as
1471 1471 part of the shutdown process.
1472 1472
1473 1473 filesystem|mountpoint
1474 1474 Unshare the specified filesystem. The command can also be given a
1475 1475 path to a ZFS file system shared on the system.
1476 1476
1477 1477 zfs bookmark snapshot bookmark
1478 1478 Creates a bookmark of the given snapshot. Bookmarks mark the point in
1479 1479 time when the snapshot was created, and can be used as the incremental
1480 1480 source for a zfs send command.
1481 1481
1482 1482 This feature must be enabled to be used. See zpool-features(5) for
1483 1483 details on ZFS feature flags and the bookmarks feature.
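
                 For example, assuming a file system pool/fs with snapshots @a
                 and @b (names illustrative), a bookmark can stand in for the
                 older snapshot as the incremental source after that snapshot
                 has been destroyed:

                   # zfs bookmark pool/fs@a pool/fs#a
                   # zfs destroy pool/fs@a
                   # zfs send -i pool/fs#a pool/fs@b | \
                       ssh host zfs receive poolB/received/fs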
1484 1484
1485 - zfs send [-DLPRenpv] [[-I|-i] snapshot] snapshot
1485 + zfs send [-DFLPRenpv] [[-I|-i] snapshot] snapshot
1486 1486 Creates a stream representation of the second snapshot, which is
1487 1487 written to standard output. The output can be redirected to a file or
1488 1488 to a different system (for example, using ssh(1)). By default, a full
1489 1489 stream is generated.
1490 1490
1491 1491 -D Generate a deduplicated stream. Blocks which would have been sent
1492 1492 multiple times in the send stream will only be sent once. The
1493 1493           receiving system must also support this feature to receive a
1494 1494 deduplicated stream. This flag can be used regardless of the
1495 1495 dataset's dedup property, but performance will be much better if
1496 1496 the filesystem uses a dedup-capable checksum (for example, sha256).
1497 1497
1498 1498 -I snapshot
1499 1499 Generate a stream package that sends all intermediary snapshots
1500 1500 from the first snapshot to the second snapshot. For example, -I @a
1501 1501 fs@d is similar to -i @a fs@b; -i @b fs@c; -i @c fs@d. The
1502 1502 incremental source may be specified as with the -i option.
1503 1503
1504 1504 -L Generate a stream which may contain blocks larger than 128KB. This
1505 1505 flag has no effect if the large_blocks pool feature is disabled, or
1506 1506 if the recordsize property of this filesystem has never been set
1507 1507 above 128KB. The receiving system must have the large_blocks pool
1508 1508 feature enabled as well. See zpool-features(5) for details on ZFS
1509 1509 feature flags and the large_blocks feature.
1510 1510
1511 1511 -P Print machine-parsable verbose information about the stream package
1512 1512 generated.
1513 1513
1514 1514 -R Generate a replication stream package, which will replicate the
1515 1515 specified file system, and all descendent file systems, up to the
1516 1516 named snapshot. When received, all properties, snapshots,
1517 1517 descendent file systems, and clones are preserved.
1518 1518
1519 1519 If the -i or -I flags are used in conjunction with the -R flag, an
1520 1520 incremental replication stream is generated. The current values of
1521 1521 properties, and current snapshot and file system names are set when
1522 1522 the stream is received. If the -F flag is specified when this
1523 1523 stream is received, snapshots and file systems that do not exist on
1524 1524 the sending side are destroyed.
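
                     As an illustration (pool, host, and snapshot names are
                     hypothetical), an incremental replication stream might be
                     sent and received as follows:

                       # zfs send -R -I pool/fs@a pool/fs@d | \
                           ssh host zfs receive -F poolB/fs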
1525 1525
1526 1526 -e Generate a more compact stream by using WRITE_EMBEDDED records for
1527 1527 blocks which are stored more compactly on disk by the embedded_data
1528 1528 pool feature. This flag has no effect if the embedded_data feature
1529 1529 is disabled. The receiving system must have the embedded_data
1530 1530 feature enabled. If the lz4_compress feature is active on the
1531 1531 sending system, then the receiving system must have that feature
1532 1532 enabled as well. See zpool-features(5) for details on ZFS feature
1533 1533 flags and the embedded_data feature.
1534 1534
1535 + -F Generate a stream which omits free records. The stream will be
1536 + more compact but the receiving system will not be able to receive
1537 + the stream as a clone.
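
                     As a sketch (dataset and host names illustrative), an
                     incremental stream that omits free records might be
                     generated as follows; the result cannot later be received
                     as a clone:

                       # zfs send -F -i pool/fs@a pool/fs@b | \
                           ssh host zfs receive poolB/received/fs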
1538 +
1535 1539 -i snapshot
1536 1540 Generate an incremental stream from the first snapshot (the
1537 1541 incremental source) to the second snapshot (the incremental
1538 1542 target). The incremental source can be specified as the last
1539 1543 component of the snapshot name (the @ character and following) and
1540 1544 it is assumed to be from the same file system as the incremental
1541 1545 target.
1542 1546
1543 1547 If the destination is a clone, the source may be the origin
1544 1548 snapshot, which must be fully specified (for example,
1545 1549 pool/fs@origin, not just @origin).
1546 1550
1547 1551 -n Do a dry-run ("No-op") send. Do not generate any actual send data.
1548 1552 This is useful in conjunction with the -v or -P flags to determine
1549 1553 what data will be sent. In this case, the verbose output will be
1550 1554 written to standard output (contrast with a non-dry-run, where the
1551 1555 stream is written to standard output and the verbose output goes to
1552 1556 standard error).
1553 1557
1554 1558 -p Include the dataset's properties in the stream. This flag is
1555 1559 implicit when -R is specified. The receiving system must also
1556 1560 support this feature.
1557 1561
1558 1562 -v Print verbose information about the stream package generated. This
1559 1563 information includes a per-second report of how much data has been
1560 1564 sent.
1561 1565
1562 1566 The format of the stream is committed. You will be able to receive
1563 1567       your streams on future versions of ZFS.
1564 1568
1565 - zfs send [-Le] [-i snapshot|bookmark] filesystem|volume|snapshot
1569 + zfs send [-FLe] [-i snapshot|bookmark] filesystem|volume|snapshot
1566 1570 Generate a send stream, which may be of a filesystem, and may be
1567 1571 incremental from a bookmark. If the destination is a filesystem or
1568 1572 volume, the pool must be read-only, or the filesystem must not be
1569 1573 mounted. When the stream generated from a filesystem or volume is
1570 1574 received, the default snapshot name will be "--head--".
1571 1575
1572 1576 -L Generate a stream which may contain blocks larger than 128KB. This
1573 1577 flag has no effect if the large_blocks pool feature is disabled, or
1574 1578 if the recordsize property of this filesystem has never been set
1575 1579 above 128KB. The receiving system must have the large_blocks pool
1576 1580 feature enabled as well. See zpool-features(5) for details on ZFS
1577 1581 feature flags and the large_blocks feature.
1578 1582
1579 1583 -e Generate a more compact stream by using WRITE_EMBEDDED records for
1580 1584 blocks which are stored more compactly on disk by the embedded_data
1581 1585 pool feature. This flag has no effect if the embedded_data feature
1582 1586 is disabled. The receiving system must have the embedded_data
1583 1587 feature enabled. If the lz4_compress feature is active on the
1584 1588 sending system, then the receiving system must have that feature
1585 1589 enabled as well. See zpool-features(5) for details on ZFS feature
1586 1590 flags and the embedded_data feature.
1587 1591
1592 + -F Generate a stream which omits free records. The stream will be
1593 + more compact but the receiving system will not be able to receive
1594 + the stream as a clone.
1595 +
1588 1596 -i snapshot|bookmark
1589 1597 Generate an incremental send stream. The incremental source must be
1590 1598 an earlier snapshot in the destination's history. It will commonly
1591 1599 be an earlier snapshot in the destination's file system, in which
1592 1600 case it can be specified as the last component of the name (the #
1593 1601 or @ character and following).
1594 1602
1595 1603 If the incremental target is a clone, the incremental source can be
1596 1604 the origin snapshot, or an earlier snapshot in the origin's
1597 1605 filesystem, or the origin's origin, etc.
1598 1606
1599 1607 zfs send [-Penv] -t receive_resume_token
1600 1608 Creates a send stream which resumes an interrupted receive. The
1601 1609 receive_resume_token is the value of this property on the filesystem or
1602 1610 volume that was being received into. See the documentation for zfs
1603 1611 receive -s for more details.
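
                 For example, assuming an interrupted receive into a local file
                 system poolB/received/fs that was started with zfs receive -s
                 (names illustrative), the transfer might be resumed by reading
                 the token back from that file system:

                   # zfs send -t $(zfs get -H -o value receive_resume_token \
                       poolB/received/fs) | zfs receive -s poolB/received/fs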
1604 1612
1605 1613 zfs receive [-Fnsuv] [-o origin=snapshot] filesystem|volume|snapshot
1606 1614 zfs receive [-Fnsuv] [-d|-e] [-o origin=snapshot] filesystem
1607 1615 Creates a snapshot whose contents are as specified in the stream
1608 1616 provided on standard input. If a full stream is received, then a new
1609 1617 file system is created as well. Streams are created using the zfs send
1610 1618 subcommand, which by default creates a full stream. zfs recv can be
1611 1619 used as an alias for zfs receive.
1612 1620
1613 1621 If an incremental stream is received, then the destination file system
1614 1622 must already exist, and its most recent snapshot must match the
1615 1623 incremental stream's source. For zvols, the destination device link is
1616 1624 destroyed and recreated, which means the zvol cannot be accessed during
1617 1625 the receive operation.
1618 1626
1619 1627 When a snapshot replication package stream that is generated by using
1620 1628 the zfs send -R command is received, any snapshots that do not exist on
1621 1629 the sending location are destroyed by using the zfs destroy -d command.
1622 1630
1623 1631 The name of the snapshot (and file system, if a full stream is
1624 1632 received) that this subcommand creates depends on the argument type and
1625 1633 the use of the -d or -e options.
1626 1634
1627 1635 If the argument is a snapshot name, the specified snapshot is created.
1628 1636 If the argument is a file system or volume name, a snapshot with the
1629 1637 same name as the sent snapshot is created within the specified
1630 1638       filesystem or volume. If neither the -d nor the -e option is
1631 1639       specified, the provided target snapshot name is used exactly as
1632 1640       given.
1633 1641
1634 1642 The -d and -e options cause the file system name of the target snapshot
1635 1643 to be determined by appending a portion of the sent snapshot's name to
1636 1644 the specified target filesystem. If the -d option is specified, all
1637 1645 but the first element of the sent snapshot's file system path (usually
1638 1646 the pool name) is used and any required intermediate file systems
1639 1647 within the specified one are created. If the -e option is specified,
1640 1648 then only the last element of the sent snapshot's file system name
1641 1649 (i.e. the name of the source file system itself) is used as the target
1642 1650 file system name.
1643 1651
1644 1652 -F Force a rollback of the file system to the most recent snapshot
1645 1653 before performing the receive operation. If receiving an
1646 1654 incremental replication stream (for example, one generated by zfs
1647 1655 send -R [-i|-I]), destroy snapshots and file systems that do not
1648 1656 exist on the sending side.
1649 1657
1650 1658 -d Discard the first element of the sent snapshot's file system name,
1651 1659 using the remaining elements to determine the name of the target
1652 1660 file system for the new snapshot as described in the paragraph
1653 1661 above.
1654 1662
1655 1663 -e Discard all but the last element of the sent snapshot's file system
1656 1664 name, using that element to determine the name of the target file
1657 1665 system for the new snapshot as described in the paragraph above.
1658 1666
1659 1667 -n Do not actually receive the stream. This can be useful in
1660 1668 conjunction with the -v option to verify the name the receive
1661 1669 operation would use.
1662 1670
1663 1671 -o origin=snapshot
1664 1672 Forces the stream to be received as a clone of the given snapshot.
1665 1673 If the stream is a full send stream, this will create the
1666 1674 filesystem described by the stream as a clone of the specified
1667 1675 snapshot. Which snapshot was specified will not affect the success
1668 1676 or failure of the receive, as long as the snapshot does exist. If
1669 1677 the stream is an incremental send stream, all the normal
1670 1678 verification will be performed.
1671 1679
1672 1680       -u  Do not mount the file system that is associated with the
1673 1681           received stream.
1674 1682
1675 1683 -v Print verbose information about the stream and the time required to
1676 1684 perform the receive operation.
1677 1685
1678 1686 -s If the receive is interrupted, save the partially received state,
1679 1687 rather than deleting it. Interruption may be due to premature
1680 1688 termination of the stream (e.g. due to network failure or failure
1681 1689 of the remote system if the stream is being read over a network
1682 1690 connection), a checksum error in the stream, termination of the zfs
1683 1691 receive process, or unclean shutdown of the system.
1684 1692
1685 1693 The receive can be resumed with a stream generated by zfs send -t
1686 1694 token, where the token is the value of the receive_resume_token
1687 1695 property of the filesystem or volume which is received into.
1688 1696
1689 1697 To use this flag, the storage pool must have the extensible_dataset
1690 1698 feature enabled. See zpool-features(5) for details on ZFS feature
1691 1699 flags.
1692 1700
1693 1701 zfs receive -A filesystem|volume
1694 1702 Abort an interrupted zfs receive -s, deleting its saved partially
1695 1703 received state.
1696 1704
1697 1705 zfs allow filesystem|volume
1698 1706 Displays permissions that have been delegated on the specified
1699 1707 filesystem or volume. See the other forms of zfs allow for more
1700 1708 information.
1701 1709
1702 1710 zfs allow [-dglu] user|group[,user|group]...
1703 1711 perm|@setname[,perm|@setname]... filesystem|volume
1704 1712 zfs allow [-dl] -e|everyone perm|@setname[,perm|@setname]...
1705 1713 filesystem|volume
1706 1714 Delegates ZFS administration permission for the file systems to non-
1707 1715 privileged users.
1708 1716
1709 1717 -d Allow only for the descendent file systems.
1710 1718
1711 1719 -e|everyone
1712 1720 Specifies that the permissions be delegated to everyone.
1713 1721
1714 1722 -g group[,group]...
1715 1723 Explicitly specify that permissions are delegated to the group.
1716 1724
1717 1725 -l Allow "locally" only for the specified file system.
1718 1726
1719 1727 -u user[,user]...
1720 1728 Explicitly specify that permissions are delegated to the user.
1721 1729
1722 1730 user|group[,user|group]...
1723 1731 Specifies to whom the permissions are delegated. Multiple entities
1724 1732 can be specified as a comma-separated list. If neither of the -gu
1725 1733 options are specified, then the argument is interpreted
1726 1734 preferentially as the keyword everyone, then as a user name, and
1727 1735 lastly as a group name. To specify a user or group named
1728 1736 "everyone", use the -g or -u options. To specify a group with the
1729 1737           same name as a user, use the -g option.
1730 1738
1731 1739 perm|@setname[,perm|@setname]...
1732 1740 The permissions to delegate. Multiple permissions may be specified
1733 1741 as a comma-separated list. Permission names are the same as ZFS
1734 1742 subcommand and property names. See the property list below.
1735 1743 Property set names, which begin with @, may be specified. See the
1736 1744 -s form below for details.
1737 1745
1738 1746 If neither of the -dl options are specified, or both are, then the
1739 1747 permissions are allowed for the file system or volume, and all of its
1740 1748 descendents.
1741 1749
1742 1750 Permissions are generally the ability to use a ZFS subcommand or change
1743 1751 a ZFS property. The following permissions are available:
1744 1752
1745 1753 NAME TYPE NOTES
1746 1754 allow subcommand Must also have the permission that is being
1747 1755 allowed
1748 1756 clone subcommand Must also have the 'create' ability and 'mount'
1749 1757 ability in the origin file system
1750 1758 create subcommand Must also have the 'mount' ability
1751 1759 destroy subcommand Must also have the 'mount' ability
1752 1760 diff subcommand Allows lookup of paths within a dataset
1753 1761 given an object number, and the ability to
1754 1762 create snapshots necessary to 'zfs diff'.
1755 1763 mount subcommand Allows mount/umount of ZFS datasets
1756 1764 promote subcommand Must also have the 'mount'
1757 1765 and 'promote' ability in the origin file system
1758 1766 receive subcommand Must also have the 'mount' and 'create' ability
1759 1767 rename subcommand Must also have the 'mount' and 'create'
1760 1768 ability in the new parent
1761 1769 rollback subcommand Must also have the 'mount' ability
1762 1770 send subcommand
1763 1771 share subcommand Allows sharing file systems over NFS or SMB
1764 1772 protocols
1765 1773 snapshot subcommand Must also have the 'mount' ability
1766 1774
1767 1775 groupquota other Allows accessing any groupquota@... property
1768 1776 groupused other Allows reading any groupused@... property
1769 1777 userprop other Allows changing any user property
1770 1778 userquota other Allows accessing any userquota@... property
1771 1779 userused other Allows reading any userused@... property
1772 1780
1773 1781 aclinherit property
1774 1782 aclmode property
1775 1783 atime property
1776 1784 canmount property
1777 1785 casesensitivity property
1778 1786 checksum property
1779 1787 compression property
1780 1788 copies property
1781 1789 devices property
1782 1790 exec property
1783 1791 filesystem_limit property
1784 1792 mountpoint property
1785 1793 nbmand property
1786 1794 normalization property
1787 1795 primarycache property
1788 1796 quota property
1789 1797 readonly property
1790 1798 recordsize property
1791 1799 refquota property
1792 1800 refreservation property
1793 1801 reservation property
1794 1802 secondarycache property
1795 1803 setuid property
1796 1804 sharenfs property
1797 1805 sharesmb property
1798 1806 snapdir property
1799 1807 snapshot_limit property
1800 1808 utf8only property
1801 1809 version property
1802 1810 volblocksize property
1803 1811 volsize property
1804 1812 vscan property
1805 1813 xattr property
1806 1814 zoned property
1807 1815
1808 1816 zfs allow -c perm|@setname[,perm|@setname]... filesystem|volume
1809 1817 Sets "create time" permissions. These permissions are granted (locally)
1810 1818 to the creator of any newly-created descendent file system.
1811 1819
1812 1820 zfs allow -s @setname perm|@setname[,perm|@setname]... filesystem|volume
1813 1821 Defines or adds permissions to a permission set. The set can be used by
1814 1822 other zfs allow commands for the specified file system and its
1815 1823 descendents. Sets are evaluated dynamically, so changes to a set are
1816 1824 immediately reflected. Permission sets follow the same naming
1817 1825 restrictions as ZFS file systems, but the name must begin with @, and
1818 1826 can be no more than 64 characters long.
1819 1827
1820 1828 zfs unallow [-dglru] user|group[,user|group]...
1821 1829 [perm|@setname[,perm|@setname]...] filesystem|volume
1822 1830 zfs unallow [-dlr] -e|everyone [perm|@setname[,perm|@setname]...]
1823 1831 filesystem|volume
1824 1832 zfs unallow [-r] -c [perm|@setname[,perm|@setname]...] filesystem|volume
1825 1833 Removes permissions that were granted with the zfs allow command. No
1826 1834       permissions are explicitly denied, so other permissions granted
1827 1835       elsewhere (for example, by an ancestor) remain in effect. If no
1828 1836       permissions are specified, then all permissions for the
1829 1837 specified user, group, or everyone are removed. Specifying everyone (or
1830 1838 using the -e option) only removes the permissions that were granted to
1831 1839 everyone, not all permissions for every user and group. See the zfs
1832 1840 allow command for a description of the -ldugec options.
1833 1841
1834 1842 -r Recursively remove the permissions from this file system and all
1835 1843 descendents.
1836 1844
1837 1845 zfs unallow [-r] -s -@setname [perm|@setname[,perm|@setname]...]
1838 1846 filesystem|volume
1839 1847 Removes permissions from a permission set. If no permissions are
1840 1848 specified, then all permissions are removed, thus removing the set
1841 1849 entirely.
1842 1850
1843 1851 zfs hold [-r] tag snapshot...
1844 1852 Adds a single reference, named with the tag argument, to the specified
1845 1853 snapshot or snapshots. Each snapshot has its own tag namespace, and
1846 1854 tags must be unique within that space.
1847 1855
1848 1856 If a hold exists on a snapshot, attempts to destroy that snapshot by
1849 1857 using the zfs destroy command return EBUSY.
1850 1858
1851 1859 -r Specifies that a hold with the given tag is applied recursively to
1852 1860 the snapshots of all descendent file systems.
1853 1861
1854 1862 zfs holds [-r] snapshot...
1855 1863 Lists all existing user references for the given snapshot or snapshots.
1856 1864
1857 1865 -r Lists the holds that are set on the named descendent snapshots, in
1858 1866 addition to listing the holds on the named snapshot.
1859 1867
1860 1868 zfs release [-r] tag snapshot...
1861 1869 Removes a single reference, named with the tag argument, from the
1862 1870 specified snapshot or snapshots. The tag must already exist for each
1863 1871 snapshot. If a hold exists on a snapshot, attempts to destroy that
1864 1872 snapshot by using the zfs destroy command return EBUSY.
1865 1873
1866 1874 -r Recursively releases a hold with the given tag on the snapshots of
1867 1875 all descendent file systems.
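
                 For example, assuming a snapshot named tank/home@snap, a hold
                 can be placed, listed, and later released:

                   # zfs hold -r keep tank/home@snap
                   # zfs holds -r tank/home@snap
                   # zfs release -r keep tank/home@snap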
1868 1876
1869 1877 zfs diff [-FHt] snapshot snapshot|filesystem
1870 1878 Display the difference between a snapshot of a given filesystem and
1871 1879 another snapshot of that filesystem from a later time or the current
1872 1880 contents of the filesystem. The first column is a character indicating
1873 1881 the type of change, the other columns indicate pathname, new pathname
1874 1882 (in case of rename), change in link count, and optionally file type
1875 1883 and/or change time. The types of change are:
1876 1884
1877 1885 - The path has been removed
1878 1886 + The path has been created
1879 1887 M The path has been modified
1880 1888 R The path has been renamed
1881 1889
1882 1890 -F Display an indication of the type of file, in a manner similar to
1883 1891           the -F option of ls(1).
1884 1892
1885 1893 B Block device
1886 1894 C Character device
1887 1895 / Directory
1888 1896 > Door
1889 1897 | Named pipe
1890 1898 @ Symbolic link
1891 1899 P Event port
1892 1900 = Socket
1893 1901 F Regular file
1894 1902
1895 1903 -H Give more parsable tab-separated output, without header lines and
1896 1904 without arrows.
1897 1905
1898 1906 -t Display the path's inode change time as the first column of output.
1899 1907
1900 1908 EXIT STATUS
1901 1909 The zfs utility exits 0 on success, 1 if an error occurs, and 2 if
1902 1910 invalid command line options were specified.
1903 1911
1904 1912 EXAMPLES
1905 1913 Example 1 Creating a ZFS File System Hierarchy
1906 1914 The following commands create a file system named pool/home and a file
1907 1915 system named pool/home/bob. The mount point /export/home is set for
1908 1916 the parent file system, and is automatically inherited by the child
1909 1917 file system.
1910 1918
1911 1919 # zfs create pool/home
1912 1920 # zfs set mountpoint=/export/home pool/home
1913 1921 # zfs create pool/home/bob
1914 1922
1915 1923 Example 2 Creating a ZFS Snapshot
1916 1924 The following command creates a snapshot named yesterday. This
1917 1925 snapshot is mounted on demand in the .zfs/snapshot directory at the
1918 1926 root of the pool/home/bob file system.
1919 1927
1920 1928 # zfs snapshot pool/home/bob@yesterday
1921 1929
1922 1930 Example 3 Creating and Destroying Multiple Snapshots
1923 1931 The following command creates snapshots named yesterday of pool/home
1924 1932 and all of its descendent file systems. Each snapshot is mounted on
1925 1933 demand in the .zfs/snapshot directory at the root of its file system.
1926 1934 The second command destroys the newly created snapshots.
1927 1935
1928 1936 # zfs snapshot -r pool/home@yesterday
1929 1937 # zfs destroy -r pool/home@yesterday
1930 1938
1931 1939 Example 4 Disabling and Enabling File System Compression
1932 1940 The following command disables the compression property for all file
1933 1941 systems under pool/home. The next command explicitly enables
1934 1942 compression for pool/home/anne.
1935 1943
1936 1944 # zfs set compression=off pool/home
1937 1945 # zfs set compression=on pool/home/anne
1938 1946
1939 1947 Example 5 Listing ZFS Datasets
1940 1948 The following command lists all active file systems and volumes in the
1941 1949 system. Snapshots are displayed if the listsnaps property is on. The
1942 1950 default is off. See zpool(1M) for more information on pool properties.
1943 1951
1944 1952 # zfs list
1945 1953 NAME USED AVAIL REFER MOUNTPOINT
1946 1954 pool 450K 457G 18K /pool
1947 1955 pool/home 315K 457G 21K /export/home
1948 1956 pool/home/anne 18K 457G 18K /export/home/anne
1949 1957 pool/home/bob 276K 457G 276K /export/home/bob
1950 1958
1951 1959 Example 6 Setting a Quota on a ZFS File System
1952 1960 The following command sets a quota of 50 Gbytes for pool/home/bob.
1953 1961
1954 1962 # zfs set quota=50G pool/home/bob
1955 1963
1956 1964 Example 7 Listing ZFS Properties
1957 1965 The following command lists all properties for pool/home/bob.
1958 1966
1959 1967 # zfs get all pool/home/bob
1960 1968 NAME PROPERTY VALUE SOURCE
1961 1969 pool/home/bob type filesystem -
1962 1970 pool/home/bob creation Tue Jul 21 15:53 2009 -
1963 1971 pool/home/bob used 21K -
1964 1972 pool/home/bob available 20.0G -
1965 1973 pool/home/bob referenced 21K -
1966 1974 pool/home/bob compressratio 1.00x -
1967 1975 pool/home/bob mounted yes -
1968 1976 pool/home/bob quota 20G local
1969 1977 pool/home/bob reservation none default
1970 1978 pool/home/bob recordsize 128K default
1971 1979 pool/home/bob mountpoint /pool/home/bob default
1972 1980 pool/home/bob sharenfs off default
1973 1981 pool/home/bob checksum on default
1974 1982 pool/home/bob compression on local
1975 1983 pool/home/bob atime on default
1976 1984 pool/home/bob devices on default
1977 1985 pool/home/bob exec on default
1978 1986 pool/home/bob setuid on default
1979 1987 pool/home/bob readonly off default
1980 1988 pool/home/bob zoned off default
1981 1989 pool/home/bob snapdir hidden default
1982 1990 pool/home/bob aclmode discard default
1983 1991 pool/home/bob aclinherit restricted default
1984 1992 pool/home/bob canmount on default
1985 1993 pool/home/bob xattr on default
1986 1994 pool/home/bob copies 1 default
1987 1995 pool/home/bob version 4 -
1988 1996 pool/home/bob utf8only off -
1989 1997 pool/home/bob normalization none -
1990 1998 pool/home/bob casesensitivity sensitive -
1991 1999 pool/home/bob vscan off default
1992 2000 pool/home/bob nbmand off default
1993 2001 pool/home/bob sharesmb off default
1994 2002 pool/home/bob refquota none default
1995 2003 pool/home/bob refreservation none default
1996 2004 pool/home/bob primarycache all default
1997 2005 pool/home/bob secondarycache all default
1998 2006 pool/home/bob usedbysnapshots 0 -
1999 2007 pool/home/bob usedbydataset 21K -
2000 2008 pool/home/bob usedbychildren 0 -
2001 2009 pool/home/bob usedbyrefreservation 0 -
2002 2010
2003 2011 The following command gets a single property value.
2004 2012
2005 2013 # zfs get -H -o value compression pool/home/bob
2006 2014 on
2007 2015 The following command lists all properties with local settings for
2008 2016 pool/home/bob.
2009 2017
2010 2018 # zfs get -r -s local -o name,property,value all pool/home/bob
2011 2019 NAME PROPERTY VALUE
2012 2020 pool/home/bob quota 20G
2013 2021 pool/home/bob compression on
2014 2022
2015 2023 Example 8 Rolling Back a ZFS File System
2016 2024 The following command reverts the contents of pool/home/anne to the
2017 2025 snapshot named yesterday, deleting all intermediate snapshots.
2018 2026
2019 2027 # zfs rollback -r pool/home/anne@yesterday
2020 2028
2021 2029 Example 9 Creating a ZFS Clone
2022 2030 The following command creates a writable file system whose initial
2023 2031 contents are the same as pool/home/bob@yesterday.
2024 2032
2025 2033 # zfs clone pool/home/bob@yesterday pool/clone
2026 2034
2027 2035 Example 10 Promoting a ZFS Clone
2028 2036 The following commands illustrate how to test out changes to a file
2029 2037 system, and then replace the original file system with the changed one,
2030 2038 using clones, clone promotion, and renaming:
2031 2039
2032 2040 # zfs create pool/project/production
2033 2041 populate /pool/project/production with data
2034 2042 # zfs snapshot pool/project/production@today
2035 2043 # zfs clone pool/project/production@today pool/project/beta
2036 2044 make changes to /pool/project/beta and test them
2037 2045 # zfs promote pool/project/beta
2038 2046 # zfs rename pool/project/production pool/project/legacy
2039 2047 # zfs rename pool/project/beta pool/project/production
2040 2048 once the legacy version is no longer needed, it can be destroyed
2041 2049 # zfs destroy pool/project/legacy
2042 2050
2043 2051 Example 11 Inheriting ZFS Properties
2044 2052 The following command causes pool/home/bob and pool/home/anne to
2045 2053 inherit the checksum property from their parent.
2046 2054
2047 2055 # zfs inherit checksum pool/home/bob pool/home/anne
2048 2056
2049 2057 Example 12 Remotely Replicating ZFS Data
2050 2058 The following commands send a full stream and then an incremental
2051 2059 stream to a remote machine, restoring them into poolB/received/fs@a and
2052 2060 poolB/received/fs@b, respectively. poolB must contain the file system
2053 2061 poolB/received, and must not initially contain poolB/received/fs.
2054 2062
2055 2063 # zfs send pool/fs@a | \
2056 2064 ssh host zfs receive poolB/received/fs@a
2057 2065 # zfs send -i a pool/fs@b | \
2058 2066 ssh host zfs receive poolB/received/fs
2059 2067
2060 2068 Example 13 Using the zfs receive -d Option
2061 2069 The following command sends a full stream of poolA/fsA/fsB@snap to a
2062 2070 remote machine, receiving it into poolB/received/fsA/fsB@snap. The
2063 2071 fsA/fsB@snap portion of the received snapshot's name is determined from
2064 2072 the name of the sent snapshot. poolB must contain the file system
2065 2073 poolB/received. If poolB/received/fsA does not exist, it is created as
2066 2074 an empty file system.
2067 2075
2068 2076 # zfs send poolA/fsA/fsB@snap | \
2069 2077 ssh host zfs receive -d poolB/received
2070 2078
2071 2079 Example 14 Setting User Properties
2072 2080 The following example sets the user-defined com.example:department
2073 2081 property for a dataset.
2074 2082
2075 2083 # zfs set com.example:department=12345 tank/accounting
2076 2084
2077 2085 Example 15 Performing a Rolling Snapshot
2078 2086 The following example shows how to maintain a history of snapshots with
2079 2087 a consistent naming scheme. To keep a week's worth of snapshots, the
2080 2088 user destroys the oldest snapshot, renames the remaining snapshots, and
2081 2089 then creates a new snapshot, as follows:
2082 2090
2083 2091 # zfs destroy -r pool/users@7daysago
2084 2092 # zfs rename -r pool/users@6daysago @7daysago
2085 2093 # zfs rename -r pool/users@5daysago @6daysago
2086 2094           # zfs rename -r pool/users@4daysago @5daysago
2087 2095           # zfs rename -r pool/users@3daysago @4daysago
2088 2096           # zfs rename -r pool/users@2daysago @3daysago
2089 2097           # zfs rename -r pool/users@yesterday @2daysago
2090 2098 # zfs rename -r pool/users@today @yesterday
2091 2099 # zfs snapshot -r pool/users@today
2092 2100
2093 2101 Example 16 Setting sharenfs Property Options on a ZFS File System
2094 2102 The following commands show how to set sharenfs property options to
2095 2103 enable rw access for a set of IP addresses and to enable root access
2096 2104 for system neo on the tank/home file system.
2097 2105
2098 2106 # zfs set sharenfs='rw=@123.123.0.0/16,root=neo' tank/home
2099 2107
2100 2108 If you are using DNS for host name resolution, specify the fully
2101 2109 qualified hostname.
2102 2110
2103 2111 Example 17 Delegating ZFS Administration Permissions on a ZFS Dataset
2104 2112 The following example shows how to set permissions so that user cindys
2105 2113 can create, destroy, mount, and take snapshots on tank/cindys. The
2106 2114 permissions on tank/cindys are also displayed.
2107 2115
2108 2116 # zfs allow cindys create,destroy,mount,snapshot tank/cindys
2109 2117 # zfs allow tank/cindys
2110 2118 ---- Permissions on tank/cindys --------------------------------------
2111 2119 Local+Descendent permissions:
2112 2120 user cindys create,destroy,mount,snapshot
2113 2121
2114 2122 Because the tank/cindys mount point permission is set to 755 by
2115 2123 default, user cindys will be unable to mount file systems under
2116 2124 tank/cindys. Add an ACE similar to the following syntax to provide
2117 2125 mount point access:
2118 2126
2119 2127 # chmod A+user:cindys:add_subdirectory:allow /tank/cindys
2120 2128
2121 2129 Example 18 Delegating Create Time Permissions on a ZFS Dataset
2122 2130       The following example shows how to grant anyone in the group staff
2123 2131       permission to create file systems in tank/users. This syntax also allows staff
2124 2132 members to destroy their own file systems, but not destroy anyone
2125 2133 else's file system. The permissions on tank/users are also displayed.
2126 2134
2127 2135 # zfs allow staff create,mount tank/users
2128 2136 # zfs allow -c destroy tank/users
2129 2137 # zfs allow tank/users
2130 2138 ---- Permissions on tank/users ---------------------------------------
2131 2139 Permission sets:
2132 2140 destroy
2133 2141 Local+Descendent permissions:
2134 2142 group staff create,mount
2135 2143
2136 2144 Example 19 Defining and Granting a Permission Set on a ZFS Dataset
2137 2145 The following example shows how to define and grant a permission set on
2138 2146 the tank/users file system. The permissions on tank/users are also
2139 2147 displayed.
2140 2148
2141 2149 # zfs allow -s @pset create,destroy,snapshot,mount tank/users
2142 2150 # zfs allow staff @pset tank/users
2143 2151 # zfs allow tank/users
2144 2152 ---- Permissions on tank/users ---------------------------------------
2145 2153 Permission sets:
2146 2154 @pset create,destroy,mount,snapshot
2147 2155 Local+Descendent permissions:
2148 2156 group staff @pset
2149 2157
2150 2158 Example 20 Delegating Property Permissions on a ZFS Dataset
2151 2159       The following example shows how to grant the ability to set quotas and
2152 2160 reservations on the users/home file system. The permissions on
2153 2161 users/home are also displayed.
2154 2162
2155 2163 # zfs allow cindys quota,reservation users/home
2156 2164 # zfs allow users/home
2157 2165 ---- Permissions on users/home ---------------------------------------
2158 2166 Local+Descendent permissions:
2159 2167 user cindys quota,reservation
2160 2168 cindys% zfs set quota=10G users/home/marks
2161 2169 cindys% zfs get quota users/home/marks
2162 2170 NAME PROPERTY VALUE SOURCE
2163 2171 users/home/marks quota 10G local
2164 2172
2165 2173 Example 21 Removing ZFS Delegated Permissions on a ZFS Dataset
2166 2174 The following example shows how to remove the snapshot permission from
2167 2175 the staff group on the tank/users file system. The permissions on
2168 2176 tank/users are also displayed.
2169 2177
2170 2178 # zfs unallow staff snapshot tank/users
2171 2179 # zfs allow tank/users
2172 2180 ---- Permissions on tank/users ---------------------------------------
2173 2181 Permission sets:
2174 2182 @pset create,destroy,mount,snapshot
2175 2183 Local+Descendent permissions:
2176 2184 group staff @pset
2177 2185
2178 2186 Example 22 Showing the differences between a snapshot and a ZFS Dataset
2179 2187 The following example shows how to see what has changed between a prior
2180 2188 snapshot of a ZFS dataset and its current state. The -F option is used
2181 2189 to indicate type information for the files affected.
2182 2190
2183 2191 # zfs diff -F tank/test@before tank/test
2184 2192 M / /tank/test/
2185 2193 M F /tank/test/linked (+1)
2186 2194 R F /tank/test/oldname -> /tank/test/newname
2187 2195 - F /tank/test/deleted
2188 2196 + F /tank/test/created
2189 2197 M F /tank/test/modified
2190 2198
2191 2199 INTERFACE STABILITY
2192 2200       Committed.
2193 2201
2194 2202 SEE ALSO
2195 2203 gzip(1), ssh(1), mount(1M), share(1M), sharemgr(1M), unshare(1M),
2196 2204 zonecfg(1M), zpool(1M), chmod(2), stat(2), write(2), fsync(3C),
2197 2205 dfstab(4), acl(5), attributes(5)
2198 2206
2199 -illumos June 8, 2015 illumos
2207 +illumos December 29, 2015 illumos