             specifies a triple-parity raidz group. The raidz vdev type is
             an alias for raidz1.

             A raidz group with N disks of size X with P parity disks can
             hold approximately (N-P)*X bytes and can withstand P device(s)
             failing before data integrity is compromised. The minimum
             number of devices in a raidz group is one more than the number
             of parity disks. The recommended number is between 3 and 9 to
             help increase performance.

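             For example, the following creates a pool with a single raidz2
             group (P=2) of six disks, which can hold approximately
             (6-2)*X bytes for disks of size X and can withstand two
             devices failing. The pool and device names are illustrative:

               # zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 \
                   c0t4d0 c0t5d0
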
     spare   A special pseudo-vdev which keeps track of available hot
             spares for a pool. For more information, see the Hot Spares
             section.

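             For example, the following adds a hot spare to an existing
             pool (the pool and device names are illustrative):

               # zpool add tank spare c0t8d0
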
     log     A separate intent log device. If more than one log device is
             specified, then writes are load-balanced between devices. Log
             devices can be mirrored. However, raidz vdev types are not
             supported for the intent log. For more information, see the
             Intent Log section.

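             For example, the following adds a mirrored log device to an
             existing pool (the pool and device names are illustrative):

               # zpool add tank log mirror c0t5d0 c0t6d0
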
     cache   A device used to cache storage pool data. A cache device
             cannot be configured as a mirror or raidz group. For more
             information, see the Cache Devices section.

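             For example, the following adds a cache device to an existing
             pool (the pool and device names are illustrative):

               # zpool add tank cache c0t7d0
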
     Virtual devices cannot be nested, so a mirror or raidz virtual device
     can only contain files or disks. Mirrors of mirrors (or other
     combinations) are not allowed.

     A pool can have any number of virtual devices at the top of the
     configuration (known as "root vdevs"). Data is dynamically distributed
     across all top-level devices to balance data among devices. As new
     virtual devices are added, ZFS automatically places data on the newly
     available devices.

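     For example, a new mirror can later be added as an additional
     top-level vdev to the two-mirror pool created below, and ZFS will
     begin placing new data on it as well (the device names are
     illustrative):

       # zpool add mypool mirror c2t0d0 c2t1d0
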
     Virtual devices are specified one at a time on the command line,
     separated by whitespace. The keywords mirror and raidz are used to
     distinguish where a group ends and another begins. For example, the
     following creates two root vdevs, each a mirror of two disks:

       # zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0

   Device Failure and Recovery
     ZFS supports a rich set of mechanisms for handling device failure and
     data corruption.

     [...]

     The following command displays the detailed information for the pool
     data. This pool consists of a single raidz vdev, where one of its
     devices has increased its capacity by 10GB. In this example, the pool
     will not be able to utilize this extra capacity until all the devices
     under the raidz vdev have been expanded.

       # zpool list -v data
       NAME        SIZE  ALLOC   FREE  FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
       data       23.9G  14.6G  9.30G   48%         -    61%  1.00x  ONLINE  -
         raidz1   23.9G  14.6G  9.30G   48%         -
           c1t1d0     -      -      -     -         -
           c1t2d0     -      -      -     -       10G
           c1t3d0     -      -      -     -         -

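     Once the remaining devices have also grown, the new space can be made
     available to the pool either by setting the autoexpand pool property
     to on, or by expanding each device online with zpool online -e. A
     sketch, reusing the device names from the example above:

       # zpool online -e data c1t1d0 c1t3d0
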
INTERFACE STABILITY
     Evolving

SEE ALSO
     zfs(1M), attributes(5), zpool-features(5)

illumos                         March 25, 2016                        illumos