Code review from Josh and Robert
--- old/usr/src/man/man9/vmem.9
+++ new/usr/src/man/man9/vmem.9
.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or http://www.opensolaris.org/os/licensing.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright 2010 Sun Microsystems, Inc. All rights reserved.
.\" Use is subject to license terms.
.\"
.\" Copyright (c) 2012, 2015 by Delphix. All rights reserved.
.\" Copyright (c) 2012, Joyent, Inc. All rights reserved.
.\"
.\" The text of this manual page is derived from section 1 of the big theory
.\" statement in usr/src/uts/common/os/vmem.c, the traditional location of
.\" this text. The two should largely be updated in tandem.
.Dd Jan 18, 2017
.Dt VMEM 9
.Os
.Sh NAME
.Nm vmem
.Nd virtual memory allocator
.Sh DESCRIPTION
.Ss Overview
An address space is divided into a number of logically distinct pieces, or
.Em arenas :
text, data, heap, stack, and so on.
Within these
arenas we often subdivide further; for example, we use heap addresses
not only for the kernel heap
.Po
.Fn kmem_alloc
space
.Pc ,
but also for DVMA,
.Fn bp_mapin ,
.Pa /dev/kmem ,
and even some device mappings.
.Pp
The kernel address space, therefore, is most accurately described as
a tree of arenas in which each node of the tree
.Em imports
some subset of its parent.
The virtual memory allocator manages these arenas
and supports their natural hierarchical structure.
.Ss Arenas
An arena is nothing more than a set of integers. These integers most
commonly represent virtual addresses, but in fact they can represent
anything at all. For example, we could use an arena containing the
integers minpid through maxpid to allocate process IDs. For uses of this
nature, prefer
.Xr id_space 9F
instead.
.Pp
.Fn vmem_create
and
.Fn vmem_destroy
create and destroy vmem arenas. In order to differentiate between arenas used
for addresses and arenas used for identifiers, the
.Dv VMC_IDENTIFIER
flag is passed to
.Fn vmem_create .
This prevents identifier exhaustion from being diagnosed as general memory
failure.
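.Pp
For example, an identifier arena covering minpid through maxpid might be
created and used as follows (a minimal sketch; the arena name
.Dq pid
is illustrative and error handling is omitted):
.Bd -literal -offset indent
/* An identifier arena: quantum 1, no source arena. */
vmem_t *pid_arena = vmem_create("pid",
    (void *)(uintptr_t)minpid,      /* base of the range */
    maxpid - minpid + 1,            /* number of integers */
    1,                              /* quantum */
    NULL, NULL, NULL, 0,            /* no import, no qcache */
    VM_SLEEP | VMC_IDENTIFIER);

/* Allocate one identifier, then release it. */
pid_t pid = (pid_t)(uintptr_t)vmem_alloc(pid_arena, 1, VM_SLEEP);
vmem_free(pid_arena, (void *)(uintptr_t)pid, 1);
.Ed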
.Ss Spans
We represent the integers in an arena as a collection of
.Em spans ,
or contiguous ranges of integers. For example, the kernel heap consists of
just one span:
.Li "[kernelheap, ekernelheap)" .
Spans can be added to an arena in two ways: explicitly, by
.Fn vmem_add ;
or implicitly, by importing, as described in
.Sx Imported Memory
below.
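.Pp
For example, the heap span mentioned above could be added to an arena
explicitly (a minimal sketch;
.Va vmp
stands for an existing arena):
.Bd -literal -offset indent
/* Explicitly add the span [kernelheap, ekernelheap) to vmp. */
(void) vmem_add(vmp, kernelheap,
    (uintptr_t)ekernelheap - (uintptr_t)kernelheap, VM_SLEEP);
.Ed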
.Ss Segments
Spans are subdivided into
.Em segments ,
each of which is either allocated or free. A segment, like a span, is a
contiguous range of integers. Each allocated segment
.Li "[addr, addr + size)"
represents exactly one
.Li "vmem_alloc(size)"
that returned
.Sy addr .
Free segments represent the space between allocated segments. If two free
segments are adjacent, we coalesce them into one larger segment; that is, if
segments
.Li "[a, b)"
and
.Li "[b, c)"
are both free, we merge them into a single segment
.Li "[a, c)" .
The segments within a span are linked together in increasing\-address
order so we can easily determine whether coalescing is possible.
.Pp
Segments never cross span boundaries. When all segments within an imported
span become free, we return the span to its source.
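.Pp
This one\-to\-one correspondence means that every allocation is paired with
a free of exactly the same size (a minimal sketch;
.Va vmp
stands for an existing arena):
.Bd -literal -offset indent
/* Allocate the segment [addr, addr + size). */
void *addr = vmem_alloc(vmp, size, VM_SLEEP);

/* ... use the range ... */

/* The size passed to vmem_free() must match the allocation. */
vmem_free(vmp, addr, size);
.Ed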
.Ss Imported Memory
As mentioned in the overview, some arenas are logical subsets of
other arenas. For example,
.Sy kmem_va_arena
(a virtual address cache
that satisfies most
.Fn kmem_slab_create
requests) is just a subset of
.Sy heap_arena
(the kernel heap) that provides caching for the most common slab sizes. When
.Sy kmem_va_arena
runs out of virtual memory, it
.Em imports
more from the heap; we say that
.Sy heap_arena
is the
.Em "vmem source"
for
.Sy kmem_va_arena .
.Fn vmem_create
allows you to specify any existing vmem arena as the source for your new
arena. Topologically, since every arena is a child of at most one source, the
set of all arenas forms a collection of trees.
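.Pp
A child arena that imports from a source arena might be created as follows
(a minimal sketch; the arena name is hypothetical):
.Bd -literal -offset indent
/*
 * Import PAGESIZE-quantum virtual addresses from heap_arena on
 * demand, using vmem_alloc() and vmem_free() as the import and
 * release functions.
 */
vmem_t *child = vmem_create("example_va", NULL, 0, PAGESIZE,
    vmem_alloc, vmem_free, heap_arena, 0, VM_SLEEP);
.Ed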
.Ss Constrained Allocations
Some vmem clients are quite picky about the kind of address they want.
For example, the DVMA code may need an address that is at a particular
phase with respect to some alignment (to get good cache coloring), or
that lies within certain limits (the addressable range of a device),
or that doesn't cross some boundary (a DMA counter restriction) \(em
or all of the above.
.Fn vmem_xalloc
allows the client to specify any or all of these constraints.
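.Pp
For example, an allocation that must be 64K\-aligned, must not cross a 1M
boundary, and must fall within
.Li "[minaddr, maxaddr)"
might look like this (a minimal sketch; the constraint values are
placeholders):
.Bd -literal -offset indent
void *addr = vmem_xalloc(vmp, size,
    0x10000,                /* align: 64K alignment */
    0,                      /* phase: offset from that alignment */
    0x100000,               /* nocross: no 1M boundary crossings */
    minaddr, maxaddr,       /* stay within [minaddr, maxaddr) */
    VM_BESTFIT | VM_SLEEP);

/* Constrained allocations are released with vmem_xfree(). */
vmem_xfree(vmp, addr, size);
.Ed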
.Ss The Vmem Quantum
Every arena has a notion of
.Sq quantum ,
specified at
.Fn vmem_create
time, that defines the arena's minimum unit of currency. Most commonly the
quantum is either 1 or
.Dv PAGESIZE ,
but any power of 2 is legal. All vmem allocations are guaranteed to be
quantum\-aligned.
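.Pp
For example, with a quantum of
.Dv PAGESIZE ,
even a sub\-page request returns page\-aligned space (a minimal sketch;
.Va base
and
.Va size
are placeholders):
.Bd -literal -offset indent
vmem_t *vmp = vmem_create("example", base, size, PAGESIZE,
    NULL, NULL, NULL, 0, VM_SLEEP);

void *p = vmem_alloc(vmp, 100, VM_SLEEP);  /* p is page-aligned */
vmem_free(vmp, p, 100);                    /* same size as the alloc */
.Ed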
.Ss Relationship to the Kernel Memory Allocator
Every kmem cache has a vmem arena as its slab supplier. The kernel memory
allocator uses
.Fn vmem_alloc
and
.Fn vmem_free
to create and destroy slabs.
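.Pp
The arena that will supply a cache's slabs can be named when the cache is
created (a minimal sketch;
.Va my_arena
is hypothetical, and passing
.Dv NULL
selects the default arena):
.Bd -literal -offset indent
/* A cache of 64-byte objects whose slabs come from my_arena. */
kmem_cache_t *cp = kmem_cache_create("example_cache", 64, 0,
    NULL, NULL, NULL, NULL, my_arena, 0);
.Ed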
.Sh SEE ALSO
.Xr id_space 9F ,
.Xr vmem_add 9F ,
.Xr vmem_alloc 9F ,
.Xr vmem_contains 9F ,
.Xr vmem_create 9F ,
.Xr vmem_walk 9F
.Pp
.Rs
.%A Jeff Bonwick
.%A Jonathan Adams
.%T Magazines and vmem: Extending the Slab Allocator to Many CPUs and Arbitrary Resources.
.%J Proceedings of the 2001 Usenix Conference
.%U http://www.usenix.org/event/usenix01/bonwick.html
.Re