VMEM(9)                   Device Driver Interfaces                   VMEM(9)

NAME
     vmem - virtual memory allocator

DESCRIPTION
   Overview
     An address space is divided into a number of logically distinct pieces,
     or arenas: text, data, heap, stack, and so on.  Within these arenas we
     often subdivide further; for example, we use heap addresses not only
     for the kernel heap (kmem_alloc() space), but also for DVMA,
     bp_mapin(), /dev/kmem, and even some device mappings.

     The kernel address space, therefore, is most accurately described as a
     tree of arenas in which each node of the tree imports some subset of
     its parent.  The virtual memory allocator manages these arenas and
     supports their natural hierarchical structure.

   Arenas
     An arena is nothing more than a set of integers.  These integers most
     commonly represent virtual addresses, but in fact they can represent
     anything at all.  For example, we could use an arena containing the
     integers minpid through maxpid to allocate process IDs.  For uses of
     this nature, prefer id_space(9F) instead.

     vmem_create() and vmem_destroy() create and destroy vmem arenas.  In
     order to differentiate between arenas used for addresses and arenas
     used for identifiers, the VMC_IDENTIFIER flag is passed to
     vmem_create().  This prevents identifier exhaustion from being
     diagnosed as general memory failure.

   Spans
     We represent the integers in an arena as a collection of spans, or
     contiguous ranges of integers.  For example, the kernel heap consists
     of just one span: [kernelheap, ekernelheap).  Spans can be added to an
     arena in two ways: explicitly, by vmem_add(); or implicitly, by
     importing, as described in Imported Memory below.

   Segments
     Spans are subdivided into segments, each of which is either allocated
     or free.  A segment, like a span, is a contiguous range of integers.
     Each allocated segment [addr, addr + size) represents exactly one
     vmem_alloc(size) that returned addr.  Free segments represent the space
     between allocated segments.  If two free segments are adjacent, we
     coalesce them into one larger segment; that is, if segments [a, b) and
     [b, c) are both free, we merge them into a single segment [a, c).  The
     segments within a span are linked together in increasing-address order
     so we can easily determine whether coalescing is possible.  Segments
     never cross span boundaries.  When all segments within an imported span
     become free, we return the span to its source.

   Imported Memory
     As mentioned in the overview, some arenas are logical subsets of other
     arenas.  For example, kmem_va_arena (a virtual address cache that
     satisfies most kmem_slab_create() requests) is just a subset of
     heap_arena (the kernel heap) that provides caching for the most common
     slab sizes.  When kmem_va_arena runs out of virtual memory, it imports
     more from the heap; we say that heap_arena is the vmem source for
     kmem_va_arena.  vmem_create() allows you to specify any existing vmem
     arena as the source for your new arena.  Topologically, since every
     arena is a child of at most one source, the set of all arenas forms a
     collection of trees.  A sketch of an importing arena appears after the
     next subsection.

   Constrained Allocations
     Some vmem clients are quite picky about the kind of address they want.
     For example, the DVMA code may need an address that is at a particular
     phase with respect to some alignment (to get good cache coloring), or
     that lies within certain limits (the addressable range of a device), or
     that doesn't cross some boundary (a DMA counter restriction) -- or all
     of the above.  vmem_xalloc() allows the client to specify any or all of
     these constraints; a sketch appears below.
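     The following sketch is not part of the original manual; it illustrates
     the import relationship described under Imported Memory above, and it
     mirrors the way kmem_va_arena is layered on heap_arena.  The arena name
     "my_va_arena" and the surrounding functions are illustrative
     assumptions.

           #include <sys/types.h>
           #include <sys/param.h>
           #include <sys/vmem.h>
           #include <sys/kmem.h>

           static vmem_t *my_va_arena;     /* hypothetical child arena */

           void
           my_va_init(void)
           {
                   /*
                    * No initial span (base NULL, size 0): every span is
                    * imported on demand from heap_arena through the
                    * source's vmem_alloc()/vmem_free().  The quantum is
                    * PAGESIZE; a qcache_max of 0 requests no quantum
                    * caching.
                    */
                   my_va_arena = vmem_create("my_va_arena", NULL, 0,
                       PAGESIZE, vmem_alloc, vmem_free, heap_arena,
                       0, VM_SLEEP);
           }

           void
           my_va_example(void)
           {
                   /* Allocate a page of virtual addresses, then free it. */
                   void *va = vmem_alloc(my_va_arena, PAGESIZE, VM_SLEEP);
                   vmem_free(my_va_arena, va, PAGESIZE);
           }

     When the last segment of an imported span is freed, the span itself is
     returned to heap_arena, as described under Segments above.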
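     The following sketch is likewise illustrative rather than part of the
     original manual; it shows a constrained allocation as described under
     Constrained Allocations above.  The arena argument and the particular
     limits are assumptions; the parameter order is that of the
     vmem_xalloc() prototype in <sys/vmem.h>.

           #include <sys/types.h>
           #include <sys/param.h>
           #include <sys/vmem.h>

           /*
            * Ask a hypothetical DVMA arena for an 8K segment that is
            * page-aligned (phase 0 with respect to that alignment), does
            * not cross a 1M boundary (a DMA counter restriction), and
            * lies entirely below a 16M device addressing limit.
            */
           void *
           my_constrained_alloc(vmem_t *dvma_arena)
           {
                   return (vmem_xalloc(dvma_arena,
                       8192,                        /* size */
                       PAGESIZE,                    /* align */
                       0,                           /* phase */
                       1024 * 1024,                 /* nocross */
                       NULL,                        /* minaddr: no bound */
                       (void *)(16 * 1024 * 1024),  /* maxaddr: below 16M */
                       VM_SLEEP | VM_BESTFIT));
           }

     Memory obtained this way should be released with vmem_xfree(), e.g.
     vmem_xfree(dvma_arena, va, 8192), so that the constrained segment is
     returned directly to the arena rather than to its quantum caches.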
   The Vmem Quantum
     Every arena has a notion of `quantum', specified at vmem_create() time,
     that defines the arena's minimum unit of currency.  Most commonly the
     quantum is either 1 or PAGESIZE, but any power of 2 is legal.  All vmem
     allocations are guaranteed to be quantum-aligned.

   Relationship to the Kernel Memory Allocator
     Every kmem cache has a vmem arena as its slab supplier.  The kernel
     memory allocator uses vmem_alloc() and vmem_free() to create and
     destroy slabs.

SEE ALSO
     id_space(9F), vmem_add(9F), vmem_alloc(9F), vmem_contains(9F),
     vmem_create(9F), vmem_walk(9F)

     Jeff Bonwick and Jonathan Adams, "Magazines and vmem: Extending the
     Slab Allocator to Many CPUs and Arbitrary Resources", Proceedings of
     the 2001 Usenix Conference,
     http://www.usenix.org/event/usenix01/bonwick.html.

illumos                        January 18, 2017                       illumos