1 .\"
   2 .\" CDDL HEADER START
   3 .\"
   4 .\" The contents of this file are subject to the terms of the
   5 .\" Common Development and Distribution License (the "License").
   6 .\" You may not use this file except in compliance with the License.
   7 .\"
   8 .\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
   9 .\" or http://www.opensolaris.org/os/licensing.
  10 .\" See the License for the specific language governing permissions
  11 .\" and limitations under the License.
  12 .\"
  13 .\" When distributing Covered Code, include this CDDL HEADER in each
  14 .\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
  15 .\" If applicable, add the following below this CDDL HEADER, with the
  16 .\" fields enclosed by brackets "[]" replaced with your own identifying
  17 .\" information: Portions Copyright [yyyy] [name of copyright owner]
  18 .\"
  19 .\" CDDL HEADER END
  20 .\"
  21 .\"
  22 .\" Copyright 2010 Sun Microsystems, Inc.  All rights reserved.
  23 .\" Use is subject to license terms.
  24 .\"
  25 .\" Copyright (c) 2012, 2015 by Delphix. All rights reserved.
  26 .\" Copyright (c) 2012, Joyent, Inc. All rights reserved.
  27 .\"
  28 .\" The text of this is derived from section 1 of the big theory statement in
  29 .\" usr/src/uts/common/os/vmem.c, the traditional location of this text.  They
  30 .\" should largely be updated in tandem.
.Dd January 18, 2017
.Dt VMEM 9
.Os
.Sh NAME
.Nm vmem
.Nd virtual memory allocator
.Sh DESCRIPTION
.Ss Overview
An address space is divided into a number of logically distinct pieces, or
.Em arenas :
text, data, heap, stack, and so on.
Within these
arenas we often subdivide further; for example, we use heap addresses
not only for the kernel heap
.Po
.Fn kmem_alloc
space
.Pc ,
but also for DVMA,
.Fn bp_mapin ,
.Pa /dev/kmem ,
and even some device mappings.
.Pp
The kernel address space, therefore, is most accurately described as
a tree of arenas in which each node of the tree
.Em imports
some subset of its parent.
The virtual memory allocator manages these arenas
and supports their natural hierarchical structure.
.Ss Arenas
An arena is nothing more than a set of integers.  These integers most
commonly represent virtual addresses, but in fact they can represent
anything at all.  For example, we could use an arena containing the
integers minpid through maxpid to allocate process IDs.  For uses of this
nature, prefer
.Xr id_space 9F
instead.
.Pp
.Fn vmem_create
and
.Fn vmem_destroy
create and destroy vmem arenas.  In order to differentiate between arenas used
for addresses and arenas used for identifiers, the
.Dv VMC_IDENTIFIER
flag is passed to
.Fn vmem_create .
This prevents identifier exhaustion from being diagnosed as general memory
failure.
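.Pp
As a concrete illustration, an identifier arena might be created and used as
in the following sketch; the arena name and the range are purely
illustrative, and the base is nonzero so that a returned identifier is never
confused with allocation failure:
.Bd -literal -offset indent
/*
 * A sketch of an identifier arena covering [100, 30000);
 * the name, base, and size are illustrative.
 */
vmem_t *id_arena = vmem_create("example_ids",
    (void *)100,              /* base: first usable identifier */
    30000 - 100,              /* size: number of identifiers */
    1,                        /* quantum: identifiers are unit-sized */
    NULL, NULL, NULL,         /* no import functions, no source */
    0,                        /* qcache_max: no quantum caching */
    VM_SLEEP | VMC_IDENTIFIER);

int id = (int)(uintptr_t)vmem_alloc(id_arena, 1, VM_SLEEP);
/* ... use id ... */
vmem_free(id_arena, (void *)(uintptr_t)id, 1);
.Ed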
.Ss Spans
We represent the integers in an arena as a collection of
.Em spans ,
or contiguous ranges of integers.  For example, the kernel heap consists of
just one span:
.Li "[kernelheap, ekernelheap)" .
Spans can be added to an arena in two ways: explicitly, by
.Fn vmem_add ;
or implicitly, by importing, as described in
.Sx Imported Memory
below.
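.Pp
The explicit case might look like the following sketch, in which an initially
empty arena is seeded with a single span; the arena name and the addresses
are illustrative:
.Bd -literal -offset indent
/* A sketch: create an empty arena, then add one span to it. */
vmem_t *arena = vmem_create("example_spans", NULL, 0, 1,
    NULL, NULL, NULL, 0, VM_SLEEP);
(void) vmem_add(arena, (void *)0x10000, 0x8000, VM_SLEEP);
.Ed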
.Ss Segments
Spans are subdivided into
.Em segments ,
each of which is either allocated or free.  A segment, like a span, is a
contiguous range of integers.  Each allocated segment
.Li "[addr, addr + size)"
represents exactly one
.Li "vmem_alloc(size)"
that returned
.Sy addr .
Free segments represent the space between allocated segments.  If two free
segments are adjacent, we coalesce them into one larger segment; that is, if
segments
.Li "[a, b)"
and
.Li "[b, c)"
are both free, we merge them into a single segment
.Li "[a, c)" .
The segments within a span are linked together in increasing\-address
order so we can easily determine whether coalescing is possible.
.Pp
Segments never cross span boundaries.  When all segments within an imported
span become free, we return the span to its source.
.Ss Imported Memory
As mentioned in the overview, some arenas are logical subsets of
other arenas.  For example,
.Sy kmem_va_arena
(a virtual address cache
that satisfies most
.Fn kmem_slab_create
requests) is just a subset of
.Sy heap_arena
(the kernel heap) that provides caching for the most common slab sizes.  When
.Sy kmem_va_arena
runs out of virtual memory, it
.Em imports
more from the heap; we say that
.Sy heap_arena
is the
.Em "vmem source"
for
.Sy kmem_va_arena .
.Fn vmem_create
allows you to specify any existing vmem arena as the source for your new
arena.  Topologically, since every arena is a child of at most one source, the
set of all arenas forms a collection of trees.
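.Pp
An importing arena is created by naming its source along with the functions
used to import spans from it and release them back; the following sketch,
with an illustrative arena name, uses
.Fn vmem_alloc
and
.Fn vmem_free
themselves as those functions:
.Bd -literal -offset indent
/*
 * A sketch of an arena that imports page-sized spans from
 * heap_arena on demand.  The arena name is illustrative.
 */
vmem_t *sub_arena = vmem_create("example_va", NULL, 0, PAGESIZE,
    vmem_alloc,        /* called on the source to import spans */
    vmem_free,         /* called on the source to release them */
    heap_arena,        /* the vmem source */
    0,                 /* qcache_max: no quantum caching */
    VM_SLEEP);
.Ed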
.Ss Constrained Allocations
Some vmem clients are quite picky about the kind of address they want.
For example, the DVMA code may need an address that is at a particular
phase with respect to some alignment (to get good cache coloring), or
that lies within certain limits (the addressable range of a device),
or that doesn't cross some boundary (a DMA counter restriction) \(em
or all of the above.
.Fn vmem_xalloc
allows the client to specify any or all of these constraints.
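.Pp
For example, the following sketch requests 64KB that is 64KB\-aligned, never
crosses a 1MB boundary, and falls within a device\-addressable range; the
arena and every constraint value are made up for the illustration:
.Bd -literal -offset indent
/* A sketch; the arena and all constraint values are illustrative. */
void *va = vmem_xalloc(arena,
    64 * 1024,                      /* size */
    64 * 1024,                      /* align */
    0,                              /* phase within the alignment */
    1024 * 1024,                    /* nocross: a DMA counter limit */
    (void *)(uintptr_t)0x1000000,   /* minaddr */
    (void *)(uintptr_t)0xffffffff,  /* maxaddr: 32-bit addressable */
    VM_SLEEP);
.Ed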
.Ss The Vmem Quantum
Every arena has a notion of
.Sq quantum ,
specified at
.Fn vmem_create
time, that defines the arena's minimum unit of currency.  Most commonly the
quantum is either 1 or
.Dv PAGESIZE ,
but any power of 2 is legal.  All vmem allocations are guaranteed to be
quantum\-aligned.
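.Pp
Because allocations are quantum\-aligned, every address returned by an arena
whose quantum is
.Dv PAGESIZE
is page\-aligned.  A minimal sketch, assuming an arena
.Sy page_arena
created with that quantum already exists:
.Bd -literal -offset indent
/* The returned address is a multiple of PAGESIZE. */
void *p = vmem_alloc(page_arena, PAGESIZE, VM_SLEEP);
vmem_free(page_arena, p, PAGESIZE);
.Ed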
.Ss Relationship to the Kernel Memory Allocator
Every kmem cache has a vmem arena as its slab supplier.  The kernel memory
allocator uses
.Fn vmem_alloc
and
.Fn vmem_free
to create and destroy slabs.
.Sh SEE ALSO
.Xr id_space 9F ,
.Xr vmem_add 9F ,
.Xr vmem_alloc 9F ,
.Xr vmem_contains 9F ,
.Xr vmem_create 9F ,
.Xr vmem_walk 9F
.Pp
.Rs
.%A Jeff Bonwick
.%A Jonathan Adams
.%T Magazines and vmem: Extending the Slab Allocator to Many CPUs and Arbitrary Resources
.%J Proceedings of the 2001 Usenix Conference
.%U http://www.usenix.org/event/usenix01/bonwick.html
.Re