8956 Implement KPTI
Reviewed by: Jerry Jelinek <jerry.jelinek@joyent.com>
Reviewed by: Robert Mustacchi <rm@joyent.com>
9208 hati_demap_func should take pagesize into account
Reviewed by: Robert Mustacchi <rm@joyent.com>
Reviewed by: Tim Kordas <tim.kordas@joyent.com>
Reviewed by: Yuri Pankov <yuripv@yuripv.net>

          --- old/usr/src/uts/i86pc/vm/hat_i86.c
          +++ new/usr/src/uts/i86pc/vm/hat_i86.c
[ 19 lines elided ]
  20   20   */
  21   21  /*
  22   22   * Copyright (c) 1992, 2010, Oracle and/or its affiliates. All rights reserved.
  23   23   */
  24   24  /*
  25   25   * Copyright (c) 2010, Intel Corporation.
  26   26   * All rights reserved.
  27   27   */
  28   28  /*
  29   29   * Copyright 2011 Nexenta Systems, Inc.  All rights reserved.
       30 + * Copyright 2018 Joyent, Inc.  All rights reserved.
  30   31   * Copyright (c) 2014, 2015 by Delphix. All rights reserved.
  31   32   */
  32   33  
  33   34  /*
  34   35   * VM - Hardware Address Translation management for i386 and amd64
  35   36   *
  36   37   * Implementation of the interfaces described in <common/vm/hat.h>
  37   38   *
  38   39   * Nearly all the details of how the hardware is managed should not be
  39   40   * visible outside this layer except for misc. machine specific functions
  40   41   * that work in conjunction with this code.
  41   42   *
  42   43   * Routines used only inside of i86pc/vm start with hati_ for HAT Internal.
  43   44   */
  44   45  
       46 +/*
       47 + * amd64 HAT Design
       48 + *
       49 + * ----------
       50 + * Background
       51 + * ----------
       52 + *
       53 + * On x86, the address space is shared between a user process and the kernel.
       54 + * This is different from SPARC. Conventionally, the kernel lives at the top of
       55 + * the address space and the user process gets to enjoy the rest of it. If you
       56 + * look at the image of the address map in uts/i86pc/os/startup.c, you'll get a
       57 + * rough sense of how the address space is laid out and used.
       58 + *
       59 + * Every unique address space is represented by an instance of a HAT structure
       60 + * called a 'hat_t'. In addition to a hat_t structure for each process, there is
       61 + * also one that is used for the kernel (kas.a_hat), and each CPU ultimately
       62 + * also has a HAT.
       63 + *
       64 + * Each HAT contains a pointer to its root page table. This root page table is
       65 + * what we call an L3 page table in illumos and Intel calls the PML4. It is the
       66 + * physical address of the L3 table that we place in the %cr3 register which the
       67 + * processor uses.
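A minimal sketch of that relationship, assuming a HAT whose top-level htable is backed by a real page (MAKECR3(), PCID_KERNEL, ht_pfn and setcr3() are existing kernel interfaces, the first two of which appear later in this change; the real load happens in hat_switch(), and the per-CPU and KPTI cases described below complicate this picture):

    /*
     * Sketch only: activating a HAT means pointing %cr3 at the physical
     * page of its root (L3/PML4) page table.
     */
    static void
    hat_activate_sketch(hat_t *hat)
    {
            setcr3(MAKECR3(hat->hat_htable->ht_pfn, PCID_KERNEL));
    }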
       68 + *
       69 + * Each of the many layers of the page table is represented by a structure
       70 + * called an htable_t. The htable_t manages a set of 512 8-byte entries. The
       71 + * number of entries in a given page table is constant across all different
       72 + * level page tables. Note, this is only true on amd64. This has not always been
       73 + * the case on x86.
       74 + *
       75 + * Each entry in a page table, generally referred to as a PTE, may refer to
       76 + * another page table or a memory location, depending on the level of the page
       77 + * table and the use of large pages. Importantly, the top-level L3 page table
       78 + * (PML4) only supports linking to further page tables. This is also true on
       79 + * systems which support a 5th level page table (which we do not currently
       80 + * support).
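To make the 512-entry geometry concrete, here is a standalone, user-level illustration (not kernel code) of how a virtual address is split into per-level table indices; the shift values mirror the amd64 level_shift[] initialization in mmu_init() further down in this file:

    #include <stdio.h>
    #include <stdint.h>

    /* amd64 shifts from mmu_init(): 4 KiB base pages, 512 entries/table. */
    static const int level_shift[4] = { 12, 21, 30, 39 };

    /* 9-bit index into the page table at 'level' for virtual address 'va'. */
    static unsigned int
    pt_index(uint64_t va, int level)
    {
            return ((unsigned int)((va >> level_shift[level]) & 0x1ff));
    }

    int
    main(void)
    {
            uint64_t va = 0x7fffbabef000ULL;        /* arbitrary user address */
            int l;

            for (l = 3; l >= 0; l--)
                    printf("L%d index = %u\n", l, pt_index(va, l));
            return (0);
    }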
       81 + *
       82 + * Historically, on x86, when a process was running on CPU, the root of the page
       83 + * table was inserted into %cr3 on each CPU on which it was currently running.
       84 + * When processes would switch (by calling hat_switch()), then the value in %cr3
       85 + * on that CPU would change to that of the new HAT. While this behavior is still
       86 + * maintained in the xpv kernel, this is not what is done today.
       87 + *
       88 + * -------------------
       89 + * Per-CPU Page Tables
       90 + * -------------------
       91 + *
       92 + * Throughout the system the 64-bit kernel has a notion of what it calls a
       93 + * per-CPU page table or PCP. The notion of a per-CPU page table was originally
       94 + * introduced as part of the original work to support x86 PAE. On the 64-bit
       95 + * kernel, it was originally used for 32-bit processes running on the 64-bit
       96 + * kernel. The rationale behind this was that each 32-bit process could have all
       97 + * of its memory represented in a single L2 page table as each L2 page table
        98 + * entry represents 1 GiB of memory.
       99 + *
      100 + * Following on from this, the idea was that given that all of the L3 page table
      101 + * entries for 32-bit processes are basically going to be identical with the
      102 + * exception of the first entry in the page table, why not share those page
      103 + * table entries. This gave rise to the idea of a per-CPU page table.
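The arithmetic behind that sharing: each L2 entry maps 2^30 bytes (1 GiB), so the four L2 entries copied for a 32-bit process cover 4 x 2^30 = 2^32 bytes, i.e. the entire 32-bit address space. This is also why mmu_calc_user_slots() later in this change sets mmu.num_copied_ents32 to 4.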
      104 + *
      105 + * The way this works is that we have a member in the machcpu_t called the
      106 + * mcpu_hat_info. That structure contains two different 4k pages: one that
      107 + * represents the L3 page table and one that represents an L2 page table. When
      108 + * the CPU starts up, the L3 page table entries are copied in from the kernel's
      109 + * page table. The L3 kernel entries do not change throughout the lifetime of
       110 + * the kernel. The kernel portion of these L3 pages for each CPU has the same
      111 + * records, meaning that they point to the same L2 page tables and thus see a
      112 + * consistent view of the world.
      113 + *
      114 + * When a 32-bit process is loaded into this world, we copy the 32-bit process's
      115 + * four top-level page table entries into the CPU's L2 page table and then set
      116 + * the CPU's first L3 page table entry to point to the CPU's L2 page.
      117 + * Specifically, in hat_pcp_update(), we're copying from the process's
      118 + * HAT_COPIED_32 HAT into the page tables specific to this CPU.
      119 + *
      120 + * As part of the implementation of kernel page table isolation, this was also
      121 + * extended to 64-bit processes. When a 64-bit process runs, we'll copy their L3
      122 + * PTEs across into the current CPU's L3 page table. (As we can't do the
      123 + * first-L3-entry trick for 64-bit processes, ->hci_pcp_l2ptes is unused in this
      124 + * case.)
      125 + *
      126 + * The use of per-CPU page tables has a lot of implementation ramifications. A
      127 + * HAT that runs a user process will be flagged with the HAT_COPIED flag to
      128 + * indicate that it is using the per-CPU page table functionality. In tandem
      129 + * with the HAT, the top-level htable_t will be flagged with the HTABLE_COPIED
      130 + * flag. If the HAT represents a 32-bit process, then we will also set the
      131 + * HAT_COPIED_32 flag on that hat_t.
      132 + *
      133 + * These two flags work together. The top-level htable_t when using per-CPU page
      134 + * tables is 'virtual'. We never allocate a ptable for this htable_t (i.e.
      135 + * ht->ht_pfn is PFN_INVALID).  Instead, when we need to modify a PTE in an
      136 + * HTABLE_COPIED ptable, x86pte_access_pagetable() will redirect any accesses to
      137 + * ht_hat->hat_copied_ptes.
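A minimal sketch of that redirect, using a hypothetical helper (the real logic lives inside x86pte_access_pagetable(), which also handles mapping ordinary ptable pages):

    /*
     * Sketch only: an HTABLE_COPIED htable has no ptable page of its own
     * (ht_pfn == PFN_INVALID), so its PTEs live in the owning hat's
     * hat_copied_ptes[] array.
     */
    static x86pte_t *
    copied_pte_ptr(htable_t *ht, uint_t entry)
    {
            if (ht->ht_flags & HTABLE_COPIED) {
                    ASSERT(ht->ht_pfn == PFN_INVALID);
                    return (&ht->ht_hat->hat_copied_ptes[entry]);
            }
            return (NULL);  /* the real code maps the ptable page instead */
    }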
      138 + *
      139 + * Of course, such a modification won't actually modify the HAT_PCP page tables
      140 + * that were copied from the HAT_COPIED htable. When we change the top level
      141 + * page table entries (L2 PTEs for a 32-bit process and L3 PTEs for a 64-bit
      142 + * process), we need to make sure to trigger hat_pcp_update() on all CPUs that
      143 + * are currently tied to this HAT (including the current CPU).
      144 + *
      145 + * To do this, PCP piggy-backs on TLB invalidation, specifically via the
      146 + * hat_tlb_inval() path from link_ptp() and unlink_ptp().
      147 + *
      148 + * (Importantly, in all such cases, when this is in operation, the top-level
      149 + * entry should not be able to refer to an actual page table entry that can be
      150 + * changed and consolidated into a large page. If large page consolidation is
      151 + * required here, then there will be much that needs to be reconsidered.)
      152 + *
      153 + * -----------------------------------------------
      154 + * Kernel Page Table Isolation and the Per-CPU HAT
      155 + * -----------------------------------------------
      156 + *
      157 + * All Intel CPUs that support speculative execution and paging are subject to a
      158 + * series of bugs that have been termed 'Meltdown'. These exploits allow a user
      159 + * process to read kernel memory through cache side channels and speculative
      160 + * execution. To mitigate this on vulnerable CPUs, we need to use a technique
      161 + * called kernel page table isolation. What this requires is that we have two
      162 + * different page table roots. When executing in kernel mode, we will use a %cr3
       163 + * value that has both the user and kernel pages. However, when executing in
       164 + * user mode, we will need a %cr3 that has all of the user pages but only the
       165 + * subset of kernel pages required to operate.
      166 + *
      167 + * These kernel pages that we need mapped are:
      168 + *
      169 + *   o Kernel Text that allows us to switch between the cr3 values.
      170 + *   o The current global descriptor table (GDT)
      171 + *   o The current interrupt descriptor table (IDT)
      172 + *   o The current task switching state (TSS)
      173 + *   o The current local descriptor table (LDT)
      174 + *   o Stacks and scratch space used by the interrupt handlers
      175 + *
      176 + * For more information on the stack switching techniques, construction of the
      177 + * trampolines, and more, please see i86pc/ml/kpti_trampolines.s. The most
       178 + * important parts of these mappings are the following two constraints:
      179 + *
      180 + *   o The mappings are all per-CPU (except for read-only text)
      181 + *   o The mappings are static. They are all established before the CPU is
      182 + *     started (with the exception of the boot CPU).
      183 + *
      184 + * To facilitate the kernel page table isolation we employ our per-CPU
      185 + * page tables discussed in the previous section and add the notion of a per-CPU
      186 + * HAT. Fundamentally we have a second page table root. There is both a kernel
      187 + * page table (hci_pcp_l3ptes), and a user L3 page table (hci_user_l3ptes).
      188 + * Both will have the user page table entries copied into them, the same way
      189 + * that we discussed in the section 'Per-CPU Page Tables'.
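Conceptually, each CPU therefore ends up with two possible %cr3 values, built from the two per-CPU roots allocated in hat_pcp_setup() below (hci_pcp_l3pfn and hci_user_l3pfn). A sketch, assuming a PCID_USER constant by analogy with the PCID_KERNEL used later in this file (the actual switching is performed by the trampolines in kpti_trampolines.s):

    /* Sketch only: the kernel-mode and user-mode roots for one CPU. */
    static uint64_t
    kernel_mode_cr3(const struct hat_cpu_info *hci)
    {
            return (MAKECR3(hci->hci_pcp_l3pfn, PCID_KERNEL));
    }

    static uint64_t
    user_mode_cr3(const struct hat_cpu_info *hci)
    {
            return (MAKECR3(hci->hci_user_l3pfn, PCID_USER));
    }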
      190 + *
       191 + * The complex part of this is how we construct the set of kernel mappings
      192 + * that should be present when running with the user page table. To answer that,
      193 + * we add the notion of a per-CPU HAT. This HAT functions like a normal HAT,
      194 + * except that it's not really associated with an address space the same way
      195 + * that other HATs are.
      196 + *
      197 + * This HAT lives off of the 'struct hat_cpu_info' which is a member of the
      198 + * machcpu in the member hci_user_hat. We use this per-CPU HAT to create the set
      199 + * of kernel mappings that should be present on this CPU. The kernel mappings
      200 + * are added to the per-CPU HAT through the function hati_cpu_punchin(). Once a
      201 + * mapping has been punched in, it may not be punched out. The reason that we
      202 + * opt to leverage a HAT structure is that it knows how to allocate and manage
      203 + * all of the lower level page tables as required.
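For example, two of the punch-in calls made from hat_pcp_setup() further down in this change look like this; each establishes a static, read-only kernel mapping in the CPU's user HAT:

    hati_cpu_punchin(cpu, (uintptr_t)cpu->cpu_gdt, PROT_READ);
    hati_cpu_punchin(cpu, (uintptr_t)cpu->cpu_idt, PROT_READ);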
      204 + *
      205 + * Because all of the mappings are present at the beginning of time for this CPU
      206 + * and none of the mappings are in the kernel pageable segment, we don't have to
      207 + * worry about faulting on these HAT structures and thus the notion of the
      208 + * current HAT that we're using is always the appropriate HAT for the process
      209 + * (usually a user HAT or the kernel's HAT).
      210 + *
      211 + * A further constraint we place on the system with these per-CPU HATs is that
      212 + * they are not subject to htable_steal(). Because each CPU will have a rather
      213 + * fixed number of page tables, the same way that we don't steal from the
      214 + * kernel's HAT, it was determined that we should not steal from this HAT due to
      215 + * the complications involved and somewhat criminal nature of htable_steal().
      216 + *
      217 + * The per-CPU HAT is initialized in hat_pcp_setup() which is called as part of
      218 + * onlining the CPU, but before the CPU is actually started. The per-CPU HAT is
      219 + * removed in hat_pcp_teardown() which is called when a CPU is being offlined to
      220 + * be removed from the system (which is different from what psradm usually
      221 + * does).
      222 + *
      223 + * Finally, once the CPU has been onlined, the set of mappings in the per-CPU
      224 + * HAT must not change. The HAT related functions that we call are not meant to
      225 + * be called when we're switching between processes. For example, it is quite
      226 + * possible that if they were, they would try to grab an htable mutex which
       227 + * another thread might have. One needs to treat hat_switch() as though it were
       228 + * above LOCK_LEVEL and therefore _must not_ block under any circumstance.
      229 + */
      230 +
  45  231  #include <sys/machparam.h>
  46  232  #include <sys/machsystm.h>
  47  233  #include <sys/mman.h>
  48  234  #include <sys/types.h>
  49  235  #include <sys/systm.h>
  50  236  #include <sys/cpuvar.h>
  51  237  #include <sys/thread.h>
  52  238  #include <sys/proc.h>
  53  239  #include <sys/cpu.h>
  54  240  #include <sys/kmem.h>
[ 33 lines elided ]
  88  274  /*
  89  275   * Basic parameters for hat operation.
  90  276   */
  91  277  struct hat_mmu_info mmu;
  92  278  
  93  279  /*
  94  280   * The page that is the kernel's top level pagetable.
  95  281   *
  96  282   * For 32 bit PAE support on i86pc, the kernel hat will use the 1st 4 entries
  97  283   * on this 4K page for its top level page table. The remaining groups of
  98      - * 4 entries are used for per processor copies of user VLP pagetables for
      284 + * 4 entries are used for per processor copies of user PCP pagetables for
  99  285   * running threads.  See hat_switch() and reload_pae32() for details.
 100  286   *
 101      - * vlp_page[0..3] - level==2 PTEs for kernel HAT
 102      - * vlp_page[4..7] - level==2 PTEs for user thread on cpu 0
 103      - * vlp_page[8..11]  - level==2 PTE for user thread on cpu 1
      287 + * pcp_page[0..3] - level==2 PTEs for kernel HAT
      288 + * pcp_page[4..7] - level==2 PTEs for user thread on cpu 0
      289 + * pcp_page[8..11]  - level==2 PTE for user thread on cpu 1
 104  290   * etc...
      291 + *
      292 + * On the 64-bit kernel, this is the normal root of the page table and there is
      293 + * nothing special about it when used for other CPUs.
 105  294   */
 106      -static x86pte_t *vlp_page;
      295 +static x86pte_t *pcp_page;
 107  296  
 108  297  /*
 109  298   * forward declaration of internal utility routines
 110  299   */
 111  300  static x86pte_t hati_update_pte(htable_t *ht, uint_t entry, x86pte_t expected,
 112  301          x86pte_t new);
 113  302  
 114  303  /*
 115      - * The kernel address space exists in all HATs. To implement this the
 116      - * kernel reserves a fixed number of entries in the topmost level(s) of page
 117      - * tables. The values are setup during startup and then copied to every user
 118      - * hat created by hat_alloc(). This means that kernelbase must be:
      304 + * The kernel address space exists in all non-HAT_COPIED HATs. To implement this
      305 + * the kernel reserves a fixed number of entries in the topmost level(s) of page
      306 + * tables. The values are setup during startup and then copied to every user hat
      307 + * created by hat_alloc(). This means that kernelbase must be:
 119  308   *
 120  309   *        4Meg aligned for 32 bit kernels
 121  310   *      512Gig aligned for x86_64 64 bit kernel
 122  311   *
 123  312   * The hat_kernel_range_ts describe what needs to be copied from kernel hat
 124  313   * to each user hat.
 125  314   */
 126  315  typedef struct hat_kernel_range {
 127  316          level_t         hkr_level;
 128  317          uintptr_t       hkr_start_va;
[ 34 lines elided ]
 163  352   */
 164  353  cpuset_t khat_cpuset;
 165  354  
 166  355  /*
 167  356   * management stuff for hat structures
 168  357   */
 169  358  kmutex_t        hat_list_lock;
 170  359  kcondvar_t      hat_list_cv;
 171  360  kmem_cache_t    *hat_cache;
 172  361  kmem_cache_t    *hat_hash_cache;
 173      -kmem_cache_t    *vlp_hash_cache;
      362 +kmem_cache_t    *hat32_hash_cache;
 174  363  
 175  364  /*
 176  365   * Simple statistics
 177  366   */
 178  367  struct hatstats hatstat;
 179  368  
 180  369  /*
 181  370   * Some earlier hypervisor versions do not emulate cmpxchg of PTEs
 182  371   * correctly.  For such hypervisors we must set PT_USER for kernel
 183  372   * entries ourselves (normally the emulation would set PT_USER for
 184  373   * kernel entries and PT_USER|PT_GLOBAL for user entries).  pt_kern is
 185  374   * thus set appropriately.  Note that dboot/kbm is OK, as only the full
 186  375   * HAT uses cmpxchg() and the other paths (hypercall etc.) were never
 187  376   * incorrect.
 188  377   */
 189  378  int pt_kern;
 190  379  
 191      -/*
 192      - * useful stuff for atomic access/clearing/setting REF/MOD/RO bits in page_t's.
 193      - */
 194      -extern void atomic_orb(uchar_t *addr, uchar_t val);
 195      -extern void atomic_andb(uchar_t *addr, uchar_t val);
 196      -
 197  380  #ifndef __xpv
 198  381  extern pfn_t memseg_get_start(struct memseg *);
 199  382  #endif
 200  383  
 201  384  #define PP_GETRM(pp, rmmask)    (pp->p_nrm & rmmask)
 202  385  #define PP_ISMOD(pp)            PP_GETRM(pp, P_MOD)
 203  386  #define PP_ISREF(pp)            PP_GETRM(pp, P_REF)
 204  387  #define PP_ISRO(pp)             PP_GETRM(pp, P_RO)
 205  388  
 206  389  #define PP_SETRM(pp, rm)        atomic_orb(&(pp->p_nrm), rm)
[ 22 lines elided ]
 229  412          hat->hat_ism_pgcnt = 0;
 230  413          hat->hat_stats = 0;
 231  414          hat->hat_flags = 0;
 232  415          CPUSET_ZERO(hat->hat_cpus);
 233  416          hat->hat_htable = NULL;
 234  417          hat->hat_ht_hash = NULL;
 235  418          return (0);
 236  419  }
 237  420  
 238  421  /*
      422 + * Put it at the start of the global list of all hats (used by stealing)
      423 + *
      424 + * kas.a_hat is not in the list but is instead used to find the
      425 + * first and last items in the list.
      426 + *
      427 + * - kas.a_hat->hat_next points to the start of the user hats.
      428 + *   The list ends where hat->hat_next == NULL
      429 + *
      430 + * - kas.a_hat->hat_prev points to the last of the user hats.
      431 + *   The list begins where hat->hat_prev == NULL
      432 + */
      433 +static void
      434 +hat_list_append(hat_t *hat)
      435 +{
      436 +        mutex_enter(&hat_list_lock);
      437 +        hat->hat_prev = NULL;
      438 +        hat->hat_next = kas.a_hat->hat_next;
      439 +        if (hat->hat_next)
      440 +                hat->hat_next->hat_prev = hat;
      441 +        else
      442 +                kas.a_hat->hat_prev = hat;
      443 +        kas.a_hat->hat_next = hat;
      444 +        mutex_exit(&hat_list_lock);
      445 +}
      446 +
      447 +/*
 239  448   * Allocate a hat structure for as. We also create the top level
 240  449   * htable and initialize it to contain the kernel hat entries.
 241  450   */
 242  451  hat_t *
 243  452  hat_alloc(struct as *as)
 244  453  {
 245  454          hat_t                   *hat;
 246  455          htable_t                *ht;    /* top level htable */
 247      -        uint_t                  use_vlp;
      456 +        uint_t                  use_copied;
 248  457          uint_t                  r;
 249  458          hat_kernel_range_t      *rp;
 250  459          uintptr_t               va;
 251  460          uintptr_t               eva;
 252  461          uint_t                  start;
 253  462          uint_t                  cnt;
 254  463          htable_t                *src;
      464 +        boolean_t               use_hat32_cache;
 255  465  
 256  466          /*
 257  467           * Once we start creating user process HATs we can enable
 258  468           * the htable_steal() code.
 259  469           */
 260  470          if (can_steal_post_boot == 0)
 261  471                  can_steal_post_boot = 1;
 262  472  
 263  473          ASSERT(AS_WRITE_HELD(as));
 264  474          hat = kmem_cache_alloc(hat_cache, KM_SLEEP);
 265  475          hat->hat_as = as;
 266  476          mutex_init(&hat->hat_mutex, NULL, MUTEX_DEFAULT, NULL);
 267  477          ASSERT(hat->hat_flags == 0);
 268  478  
 269  479  #if defined(__xpv)
 270  480          /*
 271      -         * No VLP stuff on the hypervisor due to the 64-bit split top level
      481 +         * No PCP stuff on the hypervisor due to the 64-bit split top level
 272  482           * page tables.  On 32-bit it's not needed as the hypervisor takes
 273  483           * care of copying the top level PTEs to a below 4Gig page.
 274  484           */
 275      -        use_vlp = 0;
      485 +        use_copied = 0;
      486 +        use_hat32_cache = B_FALSE;
      487 +        hat->hat_max_level = mmu.max_level;
      488 +        hat->hat_num_copied = 0;
      489 +        hat->hat_flags = 0;
 276  490  #else   /* __xpv */
 277      -        /* 32 bit processes uses a VLP style hat when running with PAE */
 278      -#if defined(__amd64)
 279      -        use_vlp = (ttoproc(curthread)->p_model == DATAMODEL_ILP32);
 280      -#elif defined(__i386)
 281      -        use_vlp = mmu.pae_hat;
 282      -#endif
      491 +
      492 +        /*
      493 +         * All processes use HAT_COPIED on the 64-bit kernel if KPTI is
      494 +         * turned on.
      495 +         */
      496 +        if (ttoproc(curthread)->p_model == DATAMODEL_ILP32) {
      497 +                use_copied = 1;
      498 +                hat->hat_max_level = mmu.max_level32;
      499 +                hat->hat_num_copied = mmu.num_copied_ents32;
      500 +                use_hat32_cache = B_TRUE;
      501 +                hat->hat_flags |= HAT_COPIED_32;
      502 +                HATSTAT_INC(hs_hat_copied32);
      503 +        } else if (kpti_enable == 1) {
      504 +                use_copied = 1;
      505 +                hat->hat_max_level = mmu.max_level;
      506 +                hat->hat_num_copied = mmu.num_copied_ents;
      507 +                use_hat32_cache = B_FALSE;
      508 +                HATSTAT_INC(hs_hat_copied64);
      509 +        } else {
      510 +                use_copied = 0;
      511 +                use_hat32_cache = B_FALSE;
      512 +                hat->hat_max_level = mmu.max_level;
      513 +                hat->hat_num_copied = 0;
      514 +                hat->hat_flags = 0;
      515 +                HATSTAT_INC(hs_hat_normal64);
      516 +        }
 283  517  #endif  /* __xpv */
 284      -        if (use_vlp) {
 285      -                hat->hat_flags = HAT_VLP;
 286      -                bzero(hat->hat_vlp_ptes, VLP_SIZE);
      518 +        if (use_copied) {
      519 +                hat->hat_flags |= HAT_COPIED;
      520 +                bzero(hat->hat_copied_ptes, sizeof (hat->hat_copied_ptes));
 287  521          }
 288  522  
 289  523          /*
 290      -         * Allocate the htable hash
      524 +         * Allocate the htable hash. For 32-bit PCP processes we use the
      525 +         * hat32_hash_cache. However, for 64-bit PCP processes we do not as the
      526 +         * number of entries that they have to handle is closer to
      527 +         * hat_hash_cache in count (though there will be more wastage when we
      528 +         * have more DRAM in the system and thus push down the user address
      529 +         * range).
 291  530           */
 292      -        if ((hat->hat_flags & HAT_VLP)) {
 293      -                hat->hat_num_hash = mmu.vlp_hash_cnt;
 294      -                hat->hat_ht_hash = kmem_cache_alloc(vlp_hash_cache, KM_SLEEP);
      531 +        if (use_hat32_cache) {
      532 +                hat->hat_num_hash = mmu.hat32_hash_cnt;
      533 +                hat->hat_ht_hash = kmem_cache_alloc(hat32_hash_cache, KM_SLEEP);
 295  534          } else {
 296  535                  hat->hat_num_hash = mmu.hash_cnt;
 297  536                  hat->hat_ht_hash = kmem_cache_alloc(hat_hash_cache, KM_SLEEP);
 298  537          }
 299  538          bzero(hat->hat_ht_hash, hat->hat_num_hash * sizeof (htable_t *));
 300  539  
 301  540          /*
 302  541           * Initialize Kernel HAT entries at the top of the top level page
 303  542           * tables for the new hat.
 304  543           */
 305  544          hat->hat_htable = NULL;
 306  545          hat->hat_ht_cached = NULL;
 307  546          XPV_DISALLOW_MIGRATE();
 308  547          ht = htable_create(hat, (uintptr_t)0, TOP_LEVEL(hat), NULL);
 309  548          hat->hat_htable = ht;
 310  549  
 311  550  #if defined(__amd64)
 312      -        if (hat->hat_flags & HAT_VLP)
      551 +        if (hat->hat_flags & HAT_COPIED)
 313  552                  goto init_done;
 314  553  #endif
 315  554  
 316  555          for (r = 0; r < num_kernel_ranges; ++r) {
 317  556                  rp = &kernel_ranges[r];
 318  557                  for (va = rp->hkr_start_va; va != rp->hkr_end_va;
 319  558                      va += cnt * LEVEL_SIZE(rp->hkr_level)) {
 320  559  
 321  560                          if (rp->hkr_level == TOP_LEVEL(hat))
 322  561                                  ht = hat->hat_htable;
[ 4 lines elided ]
 327  566                          start = htable_va2entry(va, ht);
 328  567                          cnt = HTABLE_NUM_PTES(ht) - start;
 329  568                          eva = va +
 330  569                              ((uintptr_t)cnt << LEVEL_SHIFT(rp->hkr_level));
 331  570                          if (rp->hkr_end_va != 0 &&
 332  571                              (eva > rp->hkr_end_va || eva == 0))
 333  572                                  cnt = htable_va2entry(rp->hkr_end_va, ht) -
 334  573                                      start;
 335  574  
 336  575  #if defined(__i386) && !defined(__xpv)
 337      -                        if (ht->ht_flags & HTABLE_VLP) {
 338      -                                bcopy(&vlp_page[start],
 339      -                                    &hat->hat_vlp_ptes[start],
      576 +                        if (ht->ht_flags & HTABLE_COPIED) {
      577 +                                bcopy(&pcp_page[start],
      578 +                                    &hat->hat_copied_ptes[start],
 340  579                                      cnt * sizeof (x86pte_t));
 341  580                                  continue;
 342  581                          }
 343  582  #endif
 344  583                          src = htable_lookup(kas.a_hat, va, rp->hkr_level);
 345  584                          ASSERT(src != NULL);
 346  585                          x86pte_copy(src, ht, start, cnt);
 347  586                          htable_release(src);
 348  587                  }
 349  588          }
[ 4 lines elided ]
 354  593          /*
 355  594           * Pin top level page tables after initializing them
 356  595           */
 357  596          xen_pin(hat->hat_htable->ht_pfn, mmu.max_level);
 358  597  #if defined(__amd64)
 359  598          xen_pin(hat->hat_user_ptable, mmu.max_level);
 360  599  #endif
 361  600  #endif
 362  601          XPV_ALLOW_MIGRATE();
 363  602  
      603 +        hat_list_append(hat);
      604 +
      605 +        return (hat);
      606 +}
      607 +
      608 +#if !defined(__xpv)
      609 +/*
      610 + * Cons up a HAT for a CPU. This represents the user mappings. This will have
      611 + * various kernel pages punched into it manually. Importantly, this hat is
      612 + * ineligible for stealing. We really don't want to deal with this ever
      613 + * faulting and figuring out that this is happening, much like we don't with
      614 + * kas.
      615 + */
      616 +static hat_t *
      617 +hat_cpu_alloc(cpu_t *cpu)
      618 +{
      619 +        hat_t *hat;
      620 +        htable_t *ht;
      621 +
      622 +        hat = kmem_cache_alloc(hat_cache, KM_SLEEP);
      623 +        hat->hat_as = NULL;
      624 +        mutex_init(&hat->hat_mutex, NULL, MUTEX_DEFAULT, NULL);
      625 +        hat->hat_max_level = mmu.max_level;
      626 +        hat->hat_num_copied = 0;
      627 +        hat->hat_flags = HAT_PCP;
      628 +
      629 +        hat->hat_num_hash = mmu.hash_cnt;
      630 +        hat->hat_ht_hash = kmem_cache_alloc(hat_hash_cache, KM_SLEEP);
      631 +        bzero(hat->hat_ht_hash, hat->hat_num_hash * sizeof (htable_t *));
      632 +
      633 +        hat->hat_next = hat->hat_prev = NULL;
      634 +
 364  635          /*
 365      -         * Put it at the start of the global list of all hats (used by stealing)
 366      -         *
 367      -         * kas.a_hat is not in the list but is instead used to find the
 368      -         * first and last items in the list.
 369      -         *
 370      -         * - kas.a_hat->hat_next points to the start of the user hats.
 371      -         *   The list ends where hat->hat_next == NULL
 372      -         *
 373      -         * - kas.a_hat->hat_prev points to the last of the user hats.
 374      -         *   The list begins where hat->hat_prev == NULL
      636 +         * Because this HAT will only ever be used by the current CPU, we'll go
      637 +         * ahead and set the CPUSET up to only point to the CPU in question.
 375  638           */
 376      -        mutex_enter(&hat_list_lock);
 377      -        hat->hat_prev = NULL;
 378      -        hat->hat_next = kas.a_hat->hat_next;
 379      -        if (hat->hat_next)
 380      -                hat->hat_next->hat_prev = hat;
 381      -        else
 382      -                kas.a_hat->hat_prev = hat;
 383      -        kas.a_hat->hat_next = hat;
 384      -        mutex_exit(&hat_list_lock);
      639 +        CPUSET_ADD(hat->hat_cpus, cpu->cpu_id);
 385  640  
      641 +        hat->hat_htable = NULL;
      642 +        hat->hat_ht_cached = NULL;
      643 +        ht = htable_create(hat, (uintptr_t)0, TOP_LEVEL(hat), NULL);
      644 +        hat->hat_htable = ht;
      645 +
      646 +        hat_list_append(hat);
      647 +
 386  648          return (hat);
 387  649  }
      650 +#endif /* !__xpv */
 388  651  
 389  652  /*
 390  653   * process has finished executing but as has not been cleaned up yet.
 391  654   */
 392  655  /*ARGSUSED*/
 393  656  void
 394  657  hat_free_start(hat_t *hat)
 395  658  {
 396  659          ASSERT(AS_WRITE_HELD(hat->hat_as));
 397  660  
[ 36 lines elided ]
 434  697                  hat->hat_next->hat_prev = hat->hat_prev;
 435  698          else
 436  699                  kas.a_hat->hat_prev = hat->hat_prev;
 437  700          mutex_exit(&hat_list_lock);
 438  701          hat->hat_next = hat->hat_prev = NULL;
 439  702  
 440  703  #if defined(__xpv)
 441  704          /*
 442  705           * On the hypervisor, unpin top level page table(s)
 443  706           */
      707 +        VERIFY3U(hat->hat_flags & HAT_PCP, ==, 0);
 444  708          xen_unpin(hat->hat_htable->ht_pfn);
 445  709  #if defined(__amd64)
 446  710          xen_unpin(hat->hat_user_ptable);
 447  711  #endif
 448  712  #endif
 449  713  
 450  714          /*
 451  715           * Make a pass through the htables freeing them all up.
 452  716           */
 453  717          htable_purge_hat(hat);
 454  718  
 455  719          /*
 456  720           * Decide which kmem cache the hash table came from, then free it.
 457  721           */
 458      -        if (hat->hat_flags & HAT_VLP)
 459      -                cache = vlp_hash_cache;
 460      -        else
      722 +        if (hat->hat_flags & HAT_COPIED) {
      723 +#if defined(__amd64)
      724 +                if (hat->hat_flags & HAT_COPIED_32) {
      725 +                        cache = hat32_hash_cache;
      726 +                } else {
      727 +                        cache = hat_hash_cache;
      728 +                }
      729 +#else
      730 +                cache = hat32_hash_cache;
      731 +#endif
      732 +        } else {
 461  733                  cache = hat_hash_cache;
      734 +        }
 462  735          kmem_cache_free(cache, hat->hat_ht_hash);
 463  736          hat->hat_ht_hash = NULL;
 464  737  
 465  738          hat->hat_flags = 0;
      739 +        hat->hat_max_level = 0;
      740 +        hat->hat_num_copied = 0;
 466  741          kmem_cache_free(hat_cache, hat);
 467  742  }
 468  743  
 469  744  /*
 470  745   * round kernelbase down to a supported value to use for _userlimit
 471  746   *
 472  747   * userlimit must be aligned down to an entry in the top level htable.
 473  748   * The one exception is for 32 bit HAT's running PAE.
 474  749   */
 475  750  uintptr_t
[ 34 lines elided ]
 510  785          }
 511  786          mmu.max_page_level = lvl;
 512  787  
 513  788          if ((lvl == 2) && (enable_1gpg == 0))
 514  789                  mmu.umax_page_level = 1;
 515  790          else
 516  791                  mmu.umax_page_level = lvl;
 517  792  }
 518  793  
 519  794  /*
       795 + * Determine the number of slots that are in use in the top-most level page
      796 + * table for user memory. This is based on _userlimit. In effect this is similar
      797 + * to htable_va2entry, but without the convenience of having an htable.
      798 + */
      799 +void
      800 +mmu_calc_user_slots(void)
      801 +{
      802 +        uint_t ent, nptes;
      803 +        uintptr_t shift;
      804 +
      805 +        nptes = mmu.top_level_count;
      806 +        shift = _userlimit >> mmu.level_shift[mmu.max_level];
      807 +        ent = shift & (nptes - 1);
      808 +
      809 +        /*
      810 +         * Ent tells us the slot that the page for _userlimit would fit in. We
      811 +         * need to add one to this to cover the total number of entries.
      812 +         */
      813 +        mmu.top_level_uslots = ent + 1;
      814 +
      815 +        /*
       816 + * When running 32-bit compatibility processes on a 64-bit kernel, we
      817 +         * will only need to use one slot.
      818 +         */
      819 +        mmu.top_level_uslots32 = 1;
      820 +
      821 +        /*
      822 +         * Record the number of PCP page table entries that we'll need to copy
      823 +         * around. For 64-bit processes this is the number of user slots. For
       824 + * 32-bit processes, this is four 1 GiB pages.
      825 +         */
      826 +        mmu.num_copied_ents = mmu.top_level_uslots;
      827 +        mmu.num_copied_ents32 = 4;
      828 +}
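A standalone illustration of the slot arithmetic above (not kernel code), using the amd64 values set in mmu_init() (top_level_count == 512, level_shift[3] == 39) and a purely hypothetical _userlimit of 3 TiB:

    #include <stdio.h>
    #include <stdint.h>

    int
    main(void)
    {
            const unsigned int nptes = 512;         /* mmu.top_level_count */
            const unsigned int shift = 39;          /* mmu.level_shift[max_level] */
            const uint64_t userlimit = 3ULL << 40;  /* hypothetical 3 TiB limit */
            unsigned int ent;

            ent = (unsigned int)((userlimit >> shift) & (nptes - 1));
            printf("top_level_uslots = %u\n", ent + 1);     /* prints 7 */
            return (0);
    }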
      829 +
      830 +/*
 520  831   * Initialize hat data structures based on processor MMU information.
 521  832   */
 522  833  void
 523  834  mmu_init(void)
 524  835  {
 525  836          uint_t max_htables;
 526  837          uint_t pa_bits;
 527  838          uint_t va_bits;
 528  839          int i;
 529  840  
 530  841          /*
 531  842           * If CPU enabled the page table global bit, use it for the kernel
 532  843           * This is bit 7 in CR4 (PGE - Page Global Enable).
 533  844           */
 534  845          if (is_x86_feature(x86_featureset, X86FSET_PGE) &&
 535  846              (getcr4() & CR4_PGE) != 0)
 536  847                  mmu.pt_global = PT_GLOBAL;
 537  848  
      849 +#if !defined(__xpv)
 538  850          /*
      851 +         * The 64-bit x86 kernel has split user/kernel page tables. As such we
      852 +         * cannot have the global bit set. The simplest way for us to deal with
      853 +         * this is to just say that pt_global is zero, so the global bit isn't
      854 +         * present.
      855 +         */
      856 +        if (kpti_enable == 1)
      857 +                mmu.pt_global = 0;
      858 +#endif
      859 +
      860 +        /*
 539  861           * Detect NX and PAE usage.
 540  862           */
 541  863          mmu.pae_hat = kbm_pae_support;
 542  864          if (kbm_nx_support)
 543  865                  mmu.pt_nx = PT_NX;
 544  866          else
 545  867                  mmu.pt_nx = 0;
 546  868  
 547  869          /*
 548  870           * Use CPU info to set various MMU parameters
↓ open down ↓ 37 lines elided ↑ open up ↑
 586  908          if (!is_x86_feature(x86_featureset, X86FSET_CX8))
 587  909                  panic("Processor does not support cmpxchg8b instruction");
 588  910  
 589  911  #if defined(__amd64)
 590  912  
 591  913          mmu.num_level = 4;
 592  914          mmu.max_level = 3;
 593  915          mmu.ptes_per_table = 512;
 594  916          mmu.top_level_count = 512;
 595  917  
      918 +        /*
      919 +         * 32-bit processes only use 1 GB ptes.
      920 +         */
      921 +        mmu.max_level32 = 2;
      922 +
 596  923          mmu.level_shift[0] = 12;
 597  924          mmu.level_shift[1] = 21;
 598  925          mmu.level_shift[2] = 30;
 599  926          mmu.level_shift[3] = 39;
 600  927  
 601  928  #elif defined(__i386)
 602  929  
 603  930          if (mmu.pae_hat) {
 604  931                  mmu.num_level = 3;
 605  932                  mmu.max_level = 2;
↓ open down ↓ 16 lines elided ↑ open up ↑
 622  949  
 623  950  #endif  /* __i386 */
 624  951  
 625  952          for (i = 0; i < mmu.num_level; ++i) {
 626  953                  mmu.level_size[i] = 1UL << mmu.level_shift[i];
 627  954                  mmu.level_offset[i] = mmu.level_size[i] - 1;
 628  955                  mmu.level_mask[i] = ~mmu.level_offset[i];
 629  956          }
 630  957  
 631  958          set_max_page_level();
      959 +        mmu_calc_user_slots();
 632  960  
 633  961          mmu_page_sizes = mmu.max_page_level + 1;
 634  962          mmu_exported_page_sizes = mmu.umax_page_level + 1;
 635  963  
 636  964          /* restrict legacy applications from using pagesizes 1g and above */
 637  965          mmu_legacy_page_sizes =
 638  966              (mmu_exported_page_sizes > 2) ? 2 : mmu_exported_page_sizes;
 639  967  
 640  968  
 641  969          for (i = 0; i <= mmu.max_page_level; ++i) {
[ 15 lines elided ]
 657  985          /*
 658  986           * Compute how many hash table entries to have per process for htables.
 659  987           * We start with 1 page's worth of entries.
 660  988           *
 661  989           * If physical memory is small, reduce the amount need to cover it.
 662  990           */
 663  991          max_htables = physmax / mmu.ptes_per_table;
 664  992          mmu.hash_cnt = MMU_PAGESIZE / sizeof (htable_t *);
 665  993          while (mmu.hash_cnt > 16 && mmu.hash_cnt >= max_htables)
 666  994                  mmu.hash_cnt >>= 1;
 667      -        mmu.vlp_hash_cnt = mmu.hash_cnt;
      995 +        mmu.hat32_hash_cnt = mmu.hash_cnt;
 668  996  
 669  997  #if defined(__amd64)
 670  998          /*
 671  999           * If running in 64 bits and physical memory is large,
 672 1000           * increase the size of the cache to cover all of memory for
 673 1001           * a 64 bit process.
 674 1002           */
 675 1003  #define HASH_MAX_LENGTH 4
 676 1004          while (mmu.hash_cnt * HASH_MAX_LENGTH < max_htables)
 677 1005                  mmu.hash_cnt <<= 1;
[ 28 lines elided ]
 706 1034  
 707 1035          hat_cache = kmem_cache_create("hat_t",
 708 1036              sizeof (hat_t), 0, hati_constructor, NULL, NULL,
 709 1037              NULL, 0, 0);
 710 1038  
 711 1039          hat_hash_cache = kmem_cache_create("HatHash",
 712 1040              mmu.hash_cnt * sizeof (htable_t *), 0, NULL, NULL, NULL,
 713 1041              NULL, 0, 0);
 714 1042  
 715 1043          /*
 716      -         * VLP hats can use a smaller hash table size on large memroy machines
     1044 +         * 32-bit PCP hats can use a smaller hash table size on large memory
     1045 +         * machines
 717 1046           */
 718      -        if (mmu.hash_cnt == mmu.vlp_hash_cnt) {
 719      -                vlp_hash_cache = hat_hash_cache;
     1047 +        if (mmu.hash_cnt == mmu.hat32_hash_cnt) {
     1048 +                hat32_hash_cache = hat_hash_cache;
 720 1049          } else {
 721      -                vlp_hash_cache = kmem_cache_create("HatVlpHash",
 722      -                    mmu.vlp_hash_cnt * sizeof (htable_t *), 0, NULL, NULL, NULL,
 723      -                    NULL, 0, 0);
     1050 +                hat32_hash_cache = kmem_cache_create("Hat32Hash",
     1051 +                    mmu.hat32_hash_cnt * sizeof (htable_t *), 0, NULL, NULL,
     1052 +                    NULL, NULL, 0, 0);
 724 1053          }
 725 1054  
 726 1055          /*
 727 1056           * Set up the kernel's hat
 728 1057           */
 729 1058          AS_LOCK_ENTER(&kas, RW_WRITER);
 730 1059          kas.a_hat = kmem_cache_alloc(hat_cache, KM_NOSLEEP);
 731 1060          mutex_init(&kas.a_hat->hat_mutex, NULL, MUTEX_DEFAULT, NULL);
 732 1061          kas.a_hat->hat_as = &kas;
 733 1062          kas.a_hat->hat_flags = 0;
 734 1063          AS_LOCK_EXIT(&kas);
 735 1064  
 736 1065          CPUSET_ZERO(khat_cpuset);
 737 1066          CPUSET_ADD(khat_cpuset, CPU->cpu_id);
 738 1067  
 739 1068          /*
      1069 +         * The kernel HAT doesn't use PCP regardless of architecture.
     1070 +         */
     1071 +        ASSERT3U(mmu.max_level, >, 0);
     1072 +        kas.a_hat->hat_max_level = mmu.max_level;
     1073 +        kas.a_hat->hat_num_copied = 0;
     1074 +
     1075 +        /*
 740 1076           * The kernel hat's next pointer serves as the head of the hat list .
 741 1077           * The kernel hat's prev pointer tracks the last hat on the list for
 742 1078           * htable_steal() to use.
 743 1079           */
 744 1080          kas.a_hat->hat_next = NULL;
 745 1081          kas.a_hat->hat_prev = NULL;
 746 1082  
 747 1083          /*
 748 1084           * Allocate an htable hash bucket for the kernel
 749 1085           * XX64 - tune for 64 bit procs
[ 11 lines elided ]
 761 1097          /*
 762 1098           * Pre-allocate hrm_hashtab before enabling the collection of
 763 1099           * refmod statistics.  Allocating on the fly would mean us
 764 1100           * running the risk of suffering recursive mutex enters or
 765 1101           * deadlocks.
 766 1102           */
 767 1103          hrm_hashtab = kmem_zalloc(HRM_HASHSIZE * sizeof (struct hrmstat *),
 768 1104              KM_SLEEP);
 769 1105  }
 770 1106  
     1107 +
     1108 +extern void kpti_tramp_start();
     1109 +extern void kpti_tramp_end();
     1110 +
     1111 +extern void kdi_isr_start();
     1112 +extern void kdi_isr_end();
     1113 +
     1114 +extern gate_desc_t kdi_idt[NIDT];
     1115 +
 771 1116  /*
 772      - * Prepare CPU specific pagetables for VLP processes on 64 bit kernels.
     1117 + * Prepare per-CPU pagetables for all processes on the 64 bit kernel.
 773 1118   *
 774 1119   * Each CPU has a set of 2 pagetables that are reused for any 32 bit
 775      - * process it runs. They are the top level pagetable, hci_vlp_l3ptes, and
 776      - * the next to top level table for the bottom 512 Gig, hci_vlp_l2ptes.
     1120 + * process it runs. They are the top level pagetable, hci_pcp_l3ptes, and
     1121 + * the next to top level table for the bottom 512 Gig, hci_pcp_l2ptes.
 777 1122   */
 778 1123  /*ARGSUSED*/
 779 1124  static void
 780      -hat_vlp_setup(struct cpu *cpu)
     1125 +hat_pcp_setup(struct cpu *cpu)
 781 1126  {
 782      -#if defined(__amd64) && !defined(__xpv)
     1127 +#if !defined(__xpv)
 783 1128          struct hat_cpu_info *hci = cpu->cpu_hat_info;
 784      -        pfn_t pfn;
     1129 +        uintptr_t va;
     1130 +        size_t len;
 785 1131  
 786 1132          /*
 787 1133           * allocate the level==2 page table for the bottom most
 788 1134           * 512Gig of address space (this is where 32 bit apps live)
 789 1135           */
 790 1136          ASSERT(hci != NULL);
 791      -        hci->hci_vlp_l2ptes = kmem_zalloc(MMU_PAGESIZE, KM_SLEEP);
     1137 +        hci->hci_pcp_l2ptes = kmem_zalloc(MMU_PAGESIZE, KM_SLEEP);
 792 1138  
 793 1139          /*
 794 1140           * Allocate a top level pagetable and copy the kernel's
 795      -         * entries into it. Then link in hci_vlp_l2ptes in the 1st entry.
     1141 +         * entries into it. Then link in hci_pcp_l2ptes in the 1st entry.
 796 1142           */
 797      -        hci->hci_vlp_l3ptes = kmem_zalloc(MMU_PAGESIZE, KM_SLEEP);
 798      -        hci->hci_vlp_pfn =
 799      -            hat_getpfnum(kas.a_hat, (caddr_t)hci->hci_vlp_l3ptes);
 800      -        ASSERT(hci->hci_vlp_pfn != PFN_INVALID);
 801      -        bcopy(vlp_page, hci->hci_vlp_l3ptes, MMU_PAGESIZE);
     1143 +        hci->hci_pcp_l3ptes = kmem_zalloc(MMU_PAGESIZE, KM_SLEEP);
     1144 +        hci->hci_pcp_l3pfn =
     1145 +            hat_getpfnum(kas.a_hat, (caddr_t)hci->hci_pcp_l3ptes);
     1146 +        ASSERT3U(hci->hci_pcp_l3pfn, !=, PFN_INVALID);
     1147 +        bcopy(pcp_page, hci->hci_pcp_l3ptes, MMU_PAGESIZE);
 802 1148  
 803      -        pfn = hat_getpfnum(kas.a_hat, (caddr_t)hci->hci_vlp_l2ptes);
 804      -        ASSERT(pfn != PFN_INVALID);
 805      -        hci->hci_vlp_l3ptes[0] = MAKEPTP(pfn, 2);
 806      -#endif /* __amd64 && !__xpv */
     1149 +        hci->hci_pcp_l2pfn =
     1150 +            hat_getpfnum(kas.a_hat, (caddr_t)hci->hci_pcp_l2ptes);
     1151 +        ASSERT3U(hci->hci_pcp_l2pfn, !=, PFN_INVALID);
     1152 +
     1153 +        /*
     1154 +         * Now go through and allocate the user version of these structures.
     1155 +         * Unlike with the kernel version, we allocate a hat to represent the
     1156 +         * top-level page table as that will make it much simpler when we need
     1157 +         * to patch through user entries.
     1158 +         */
     1159 +        hci->hci_user_hat = hat_cpu_alloc(cpu);
     1160 +        hci->hci_user_l3pfn = hci->hci_user_hat->hat_htable->ht_pfn;
     1161 +        ASSERT3U(hci->hci_user_l3pfn, !=, PFN_INVALID);
     1162 +        hci->hci_user_l3ptes =
     1163 +            (x86pte_t *)hat_kpm_mapin_pfn(hci->hci_user_l3pfn);
     1164 +
     1165 +        /* Skip the rest of this if KPTI is switched off at boot. */
     1166 +        if (kpti_enable != 1)
     1167 +                return;
     1168 +
     1169 +        /*
     1170 +         * OK, now that we have this we need to go through and punch the normal
     1171 +         * holes in the CPU's hat for this. At this point we'll punch in the
     1172 +         * following:
     1173 +         *
     1174 +         *   o GDT
     1175 +         *   o IDT
     1176 +         *   o LDT
     1177 +         *   o Trampoline Code
     1178 +         *   o machcpu KPTI page
     1179 +         *   o kmdb ISR code page (just trampolines)
     1180 +         *
     1181 +         * If this is cpu0, then we also can initialize the following because
     1182 +         * they'll have already been allocated.
     1183 +         *
     1184 +         *   o TSS for CPU 0
     1185 +         *   o Double Fault for CPU 0
     1186 +         *
     1187 +         * The following items have yet to be allocated and have not been
     1188 +         * punched in yet. They will be punched in later:
     1189 +         *
     1190 +         *   o TSS (mach_cpucontext_alloc_tables())
     1191 +         *   o Double Fault Stack (mach_cpucontext_alloc_tables())
     1192 +         */
     1193 +        hati_cpu_punchin(cpu, (uintptr_t)cpu->cpu_gdt, PROT_READ);
     1194 +        hati_cpu_punchin(cpu, (uintptr_t)cpu->cpu_idt, PROT_READ);
     1195 +
     1196 +        /*
     1197 +         * As the KDI IDT is only active during kmdb sessions (including single
     1198 +         * stepping), typically we don't actually need this punched in (we
     1199 +         * consider the routines that switch to the user cr3 to be toxic).  But
     1200 +         * if we ever accidentally end up on the user cr3 while on this IDT,
     1201 +         * we'd prefer not to triple fault.
     1202 +         */
     1203 +        hati_cpu_punchin(cpu, (uintptr_t)&kdi_idt, PROT_READ);
     1204 +
     1205 +        CTASSERT(((uintptr_t)&kpti_tramp_start % MMU_PAGESIZE) == 0);
     1206 +        CTASSERT(((uintptr_t)&kpti_tramp_end % MMU_PAGESIZE) == 0);
     1207 +        for (va = (uintptr_t)&kpti_tramp_start;
     1208 +            va < (uintptr_t)&kpti_tramp_end; va += MMU_PAGESIZE) {
     1209 +                hati_cpu_punchin(cpu, va, PROT_READ | PROT_EXEC);
     1210 +        }
     1211 +
     1212 +        VERIFY3U(((uintptr_t)cpu->cpu_m.mcpu_ldt) % MMU_PAGESIZE, ==, 0);
     1213 +        for (va = (uintptr_t)cpu->cpu_m.mcpu_ldt, len = LDT_CPU_SIZE;
     1214 +            len >= MMU_PAGESIZE; va += MMU_PAGESIZE, len -= MMU_PAGESIZE) {
     1215 +                hati_cpu_punchin(cpu, va, PROT_READ);
     1216 +        }
     1217 +
     1218 +        /* mcpu_pad2 is the start of the page containing the kpti_frames. */
     1219 +        hati_cpu_punchin(cpu, (uintptr_t)&cpu->cpu_m.mcpu_pad2[0],
     1220 +            PROT_READ | PROT_WRITE);
     1221 +
     1222 +        if (cpu == &cpus[0]) {
     1223 +                /*
     1224 +                 * CPU0 uses a global for its double fault stack to deal with
     1225 +                 * the chicken and egg problem. We need to punch it into its
     1226 +                 * user HAT.
     1227 +                 */
     1228 +                extern char dblfault_stack0[];
     1229 +
     1230 +                hati_cpu_punchin(cpu, (uintptr_t)cpu->cpu_m.mcpu_tss,
     1231 +                    PROT_READ);
     1232 +
     1233 +                for (va = (uintptr_t)dblfault_stack0,
     1234 +                    len = DEFAULTSTKSZ; len >= MMU_PAGESIZE;
     1235 +                    va += MMU_PAGESIZE, len -= MMU_PAGESIZE) {
     1236 +                        hati_cpu_punchin(cpu, va, PROT_READ | PROT_WRITE);
     1237 +                }
     1238 +        }
     1239 +
     1240 +        CTASSERT(((uintptr_t)&kdi_isr_start % MMU_PAGESIZE) == 0);
     1241 +        CTASSERT(((uintptr_t)&kdi_isr_end % MMU_PAGESIZE) == 0);
     1242 +        for (va = (uintptr_t)&kdi_isr_start;
     1243 +            va < (uintptr_t)&kdi_isr_end; va += MMU_PAGESIZE) {
     1244 +                hati_cpu_punchin(cpu, va, PROT_READ | PROT_EXEC);
     1245 +        }
     1246 +#endif /* !__xpv */
 807 1247  }
 808 1248  
 809 1249  /*ARGSUSED*/
 810 1250  static void
 811      -hat_vlp_teardown(cpu_t *cpu)
     1251 +hat_pcp_teardown(cpu_t *cpu)
 812 1252  {
 813      -#if defined(__amd64) && !defined(__xpv)
     1253 +#if !defined(__xpv)
 814 1254          struct hat_cpu_info *hci;
 815 1255  
 816 1256          if ((hci = cpu->cpu_hat_info) == NULL)
 817 1257                  return;
 818      -        if (hci->hci_vlp_l2ptes)
 819      -                kmem_free(hci->hci_vlp_l2ptes, MMU_PAGESIZE);
 820      -        if (hci->hci_vlp_l3ptes)
 821      -                kmem_free(hci->hci_vlp_l3ptes, MMU_PAGESIZE);
     1258 +        if (hci->hci_pcp_l2ptes != NULL)
     1259 +                kmem_free(hci->hci_pcp_l2ptes, MMU_PAGESIZE);
     1260 +        if (hci->hci_pcp_l3ptes != NULL)
     1261 +                kmem_free(hci->hci_pcp_l3ptes, MMU_PAGESIZE);
     1262 +        if (hci->hci_user_hat != NULL) {
     1263 +                hat_free_start(hci->hci_user_hat);
     1264 +                hat_free_end(hci->hci_user_hat);
     1265 +        }
 822 1266  #endif
 823 1267  }
 824 1268  
 825 1269  #define NEXT_HKR(r, l, s, e) {                  \
 826 1270          kernel_ranges[r].hkr_level = l;         \
 827 1271          kernel_ranges[r].hkr_start_va = s;      \
 828 1272          kernel_ranges[r].hkr_end_va = e;        \
 829 1273          ++r;                                    \
 830 1274  }
 831 1275  
[ 75 lines elided ]
 907 1351                          }
 908 1352  
 909 1353                          (void) htable_create(kas.a_hat, va, rp->hkr_level - 1,
 910 1354                              NULL);
 911 1355                  }
 912 1356          }
 913 1357  
 914 1358          /*
 915 1359           * 32 bit PAE metal kernels use only 4 of the 512 entries in the
 916 1360           * page holding the top level pagetable. We use the remainder for
 917      -         * the "per CPU" page tables for VLP processes.
     1361 +         * the "per CPU" page tables for PCP processes.
 918 1362           * Map the top level kernel pagetable into the kernel to make
 919 1363           * it easy to use bcopy access these tables.
     1364 +         *
      1365 + * PAE is required for the 64-bit kernel, which also uses this page to
      1366 + * implement the per-CPU pagetables. See the big theory statement.
 920 1367           */
 921 1368          if (mmu.pae_hat) {
 922      -                vlp_page = vmem_alloc(heap_arena, MMU_PAGESIZE, VM_SLEEP);
 923      -                hat_devload(kas.a_hat, (caddr_t)vlp_page, MMU_PAGESIZE,
     1369 +                pcp_page = vmem_alloc(heap_arena, MMU_PAGESIZE, VM_SLEEP);
     1370 +                hat_devload(kas.a_hat, (caddr_t)pcp_page, MMU_PAGESIZE,
 924 1371                      kas.a_hat->hat_htable->ht_pfn,
 925 1372  #if !defined(__xpv)
 926 1373                      PROT_WRITE |
 927 1374  #endif
 928 1375                      PROT_READ | HAT_NOSYNC | HAT_UNORDERED_OK,
 929 1376                      HAT_LOAD | HAT_LOAD_NOCONSIST);
 930 1377          }
 931      -        hat_vlp_setup(CPU);
     1378 +        hat_pcp_setup(CPU);
 932 1379  
 933 1380          /*
 934 1381           * Create kmap (cached mappings of kernel PTEs)
 935 1382           * for 32 bit we map from segmap_start .. ekernelheap
 936 1383           * for 64 bit we map from segmap_start .. segmap_start + segmapsize;
 937 1384           */
 938 1385  #if defined(__i386)
 939 1386          size = (uintptr_t)ekernelheap - segmap_start;
 940 1387  #elif defined(__amd64)
 941 1388          size = segmapsize;
 942 1389  #endif
 943 1390          hat_kmap_init((uintptr_t)segmap_start, size);
     1391 +
     1392 +#if !defined(__xpv)
     1393 +        ASSERT3U(kas.a_hat->hat_htable->ht_pfn, !=, PFN_INVALID);
     1394 +        ASSERT3U(kpti_safe_cr3, ==,
     1395 +            MAKECR3(kas.a_hat->hat_htable->ht_pfn, PCID_KERNEL));
     1396 +#endif
 944 1397  }
 945 1398  
 946 1399  /*
 947 1400   * On 32 bit PAE mode, PTE's are 64 bits, but ordinary atomic memory references
 948 1401   * are 32 bit, so for safety we must use atomic_cas_64() to install these.
 949 1402   */
 950 1403  #ifdef __i386
 951 1404  static void
 952 1405  reload_pae32(hat_t *hat, cpu_t *cpu)
 953 1406  {
 954 1407          x86pte_t *src;
 955 1408          x86pte_t *dest;
 956 1409          x86pte_t pte;
 957 1410          int i;
 958 1411  
 959 1412          /*
 960 1413           * Load the 4 entries of the level 2 page table into this
 961      -         * cpu's range of the vlp_page and point cr3 at them.
     1414 +         * cpu's range of the pcp_page and point cr3 at them.
 962 1415           */
 963 1416          ASSERT(mmu.pae_hat);
 964      -        src = hat->hat_vlp_ptes;
 965      -        dest = vlp_page + (cpu->cpu_id + 1) * VLP_NUM_PTES;
 966      -        for (i = 0; i < VLP_NUM_PTES; ++i) {
     1417 +        src = hat->hat_copied_ptes;
     1418 +        dest = pcp_page + (cpu->cpu_id + 1) * MAX_COPIED_PTES;
     1419 +        for (i = 0; i < MAX_COPIED_PTES; ++i) {
 967 1420                  for (;;) {
 968 1421                          pte = dest[i];
 969 1422                          if (pte == src[i])
 970 1423                                  break;
 971 1424                          if (atomic_cas_64(dest + i, pte, src[i]) != src[i])
 972 1425                                  break;
 973 1426                  }
 974 1427          }
 975 1428  }
 976 1429  #endif
 977 1430  
 978 1431  /*
     1432 + * Update the PCP data on CPU 'cpu' to match the given hat. For a 32-bit
     1433 + * process we must update the per-CPU L2 table and then the L3 entries; for a
     1434 + * 64-bit process only the L3 entries need updating.
     1435 + */
     1436 +static void
     1437 +hat_pcp_update(cpu_t *cpu, const hat_t *hat)
     1438 +{
     1439 +        ASSERT3U(hat->hat_flags & HAT_COPIED, !=, 0);
     1440 +
     1441 +        if ((hat->hat_flags & HAT_COPIED_32) != 0) {
     1442 +                const x86pte_t *l2src;
     1443 +                x86pte_t *l2dst, *l3ptes, *l3uptes;
     1444 +                /*
     1445 +                 * This is a 32-bit process. To set this up, we need to do the
     1446 +                 * following:
     1447 +                 *
     1448 +                 *  - Copy the 4 L2 PTEs into the dedicated L2 table
     1449 +                 *  - Zero the user L3 PTEs in the user and kernel page table
     1450 +                 *  - Set the first L3 PTE to point to the CPU L2 table
     1451 +                 */
     1452 +                l2src = hat->hat_copied_ptes;
     1453 +                l2dst = cpu->cpu_hat_info->hci_pcp_l2ptes;
     1454 +                l3ptes = cpu->cpu_hat_info->hci_pcp_l3ptes;
     1455 +                l3uptes = cpu->cpu_hat_info->hci_user_l3ptes;
     1456 +
     1457 +                l2dst[0] = l2src[0];
     1458 +                l2dst[1] = l2src[1];
     1459 +                l2dst[2] = l2src[2];
     1460 +                l2dst[3] = l2src[3];
     1461 +
     1462 +                /*
     1463 +                 * Make sure to use the mmu to get the number of slots. The
     1464 +                 * number of PCP entries this hat uses will always be smaller,
     1465 +                 * since it's a 32-bit process.
     1466 +                 */
     1467 +                bzero(l3ptes, sizeof (x86pte_t) * mmu.top_level_uslots);
     1468 +                l3ptes[0] = MAKEPTP(cpu->cpu_hat_info->hci_pcp_l2pfn, 2);
     1469 +                bzero(l3uptes, sizeof (x86pte_t) * mmu.top_level_uslots);
     1470 +                l3uptes[0] = MAKEPTP(cpu->cpu_hat_info->hci_pcp_l2pfn, 2);
     1471 +        } else {
     1472 +                /*
     1473 +                 * This is a 64-bit process. To set this up, we need to do the
     1474 +                 * following:
     1475 +                 *
     1476 +                 *  - Zero the 4 L2 PTEs in the CPU structure for safety
     1477 +                 *  - Copy over the new user L3 PTEs into the kernel page table
     1478 +                 *  - Copy over the new user L3 PTEs into the user page table
     1479 +                 */
     1480 +                ASSERT3S(kpti_enable, ==, 1);
     1481 +                bzero(cpu->cpu_hat_info->hci_pcp_l2ptes, sizeof (x86pte_t) * 4);
     1482 +                bcopy(hat->hat_copied_ptes, cpu->cpu_hat_info->hci_pcp_l3ptes,
     1483 +                    sizeof (x86pte_t) * mmu.top_level_uslots);
     1484 +                bcopy(hat->hat_copied_ptes, cpu->cpu_hat_info->hci_user_l3ptes,
     1485 +                    sizeof (x86pte_t) * mmu.top_level_uslots);
     1486 +        }
     1487 +}
     1488 +
     1489 +static void
     1490 +reset_kpti(struct kpti_frame *fr, uint64_t kcr3, uint64_t ucr3)
     1491 +{
     1492 +        ASSERT3U(fr->kf_tr_flag, ==, 0);
     1493 +#if DEBUG
     1494 +        if (fr->kf_kernel_cr3 != 0) {
     1495 +                ASSERT3U(fr->kf_lower_redzone, ==, 0xdeadbeefdeadbeef);
     1496 +                ASSERT3U(fr->kf_middle_redzone, ==, 0xdeadbeefdeadbeef);
     1497 +                ASSERT3U(fr->kf_upper_redzone, ==, 0xdeadbeefdeadbeef);
     1498 +        }
     1499 +#endif
     1500 +
     1501 +        bzero(fr, offsetof(struct kpti_frame, kf_kernel_cr3));
     1502 +        bzero(&fr->kf_unused, sizeof (struct kpti_frame) -
     1503 +            offsetof(struct kpti_frame, kf_unused));
     1504 +
     1505 +        fr->kf_kernel_cr3 = kcr3;
     1506 +        fr->kf_user_cr3 = ucr3;
     1507 +        fr->kf_tr_ret_rsp = (uintptr_t)&fr->kf_tr_rsp;
     1508 +
     1509 +        fr->kf_lower_redzone = 0xdeadbeefdeadbeef;
     1510 +        fr->kf_middle_redzone = 0xdeadbeefdeadbeef;
     1511 +        fr->kf_upper_redzone = 0xdeadbeefdeadbeef;
     1512 +}
     1513 +
     1514 +#ifdef __xpv
     1515 +static void
     1516 +hat_switch_xen(hat_t *hat)
     1517 +{
     1518 +        struct mmuext_op t[2];
     1519 +        uint_t retcnt;
     1520 +        uint_t opcnt = 1;
     1521 +        uint64_t newcr3;
     1522 +
     1523 +        ASSERT(!(hat->hat_flags & HAT_COPIED));
     1524 +        ASSERT(!(getcr4() & CR4_PCIDE));
     1525 +
     1526 +        newcr3 = MAKECR3((uint64_t)hat->hat_htable->ht_pfn, PCID_NONE);
     1527 +
     1528 +        t[0].cmd = MMUEXT_NEW_BASEPTR;
     1529 +        t[0].arg1.mfn = mmu_btop(pa_to_ma(newcr3));
     1530 +
     1531 +        /*
     1532 +         * There's an interesting problem here, as to what to actually specify
     1533 +         * when switching to the kernel hat.  For now we'll reuse the kernel hat
     1534 +         * again.
     1535 +         */
     1536 +        t[1].cmd = MMUEXT_NEW_USER_BASEPTR;
     1537 +        if (hat == kas.a_hat)
     1538 +                t[1].arg1.mfn = mmu_btop(pa_to_ma(newcr3));
     1539 +        else
     1540 +                t[1].arg1.mfn = pfn_to_mfn(hat->hat_user_ptable);
     1541 +        ++opcnt;
     1542 +
     1543 +        if (HYPERVISOR_mmuext_op(t, opcnt, &retcnt, DOMID_SELF) < 0)
     1544 +                panic("HYPERVISOR_mmu_update() failed");
     1545 +        ASSERT(retcnt == opcnt);
     1546 +}
     1547 +#endif /* __xpv */
     1548 +
     1549 +/*
 979 1550   * Switch to a new active hat, maintaining bit masks to track active CPUs.
 980 1551   *
 981      - * On the 32-bit PAE hypervisor, %cr3 is a 64-bit value, on metal it
 982      - * remains a 32-bit value.
     1552 + * With KPTI, all our HATs except kas should be using PCP.  Thus, to switch
     1553 + * HATs, we need to copy over the new user PTEs, then set our trampoline context
     1554 + * as appropriate.
     1555 + *
     1556 + * If lacking PCID, we then load our new cr3, which will flush the TLB: we may
     1557 + * have established userspace TLB entries via kernel accesses, and these are no
     1558 + * longer valid.  We have to do this eagerly, as we just deleted this CPU from
     1559 + * ->hat_cpus, so would no longer see any TLB shootdowns.
     1560 + *
     1561 + * With PCID enabled, things get a little more complicated.  We would like to
     1562 + * keep TLB context around when entering and exiting the kernel, and to do this,
     1563 + * we partition the TLB into two different spaces:
     1564 + *
     1565 + * PCID_KERNEL is defined as zero, and used both by kas and all other address
     1566 + * spaces while in the kernel (post-trampoline).
     1567 + *
     1568 + * PCID_USER is used while in userspace.  Therefore, userspace cannot use any
     1569 + * lingering PCID_KERNEL entries to kernel addresses it should not be able to
     1570 + * read.
     1571 + *
     1572 + * The trampoline cr3s are set not to invalidate on a mov to %cr3. This means if
     1573 + * we take a journey through the kernel without switching HATs, we have some
     1574 + * hope of keeping our TLB state around.
     1575 + *
     1576 + * On a hat switch, rather than deal with any necessary flushes on the way out
     1577 + * of the trampolines, we do them upfront here. If we're switching from kas, we
     1578 + * shouldn't need any invalidation.
     1579 + *
     1580 + * Otherwise, we can have stale userspace entries for both PCID_USER (what
     1581 + * happened before we move onto the kcr3) and PCID_KERNEL (any subsequent
     1582 + * userspace accesses such as ddi_copyin()).  Since setcr3() won't do these
     1583 + * flushes on its own in PCIDE, we'll do a non-flushing load and then
     1584 + * invalidate everything.
 983 1585   */
 984 1586  void
 985 1587  hat_switch(hat_t *hat)
 986 1588  {
 987      -        uint64_t        newcr3;
 988      -        cpu_t           *cpu = CPU;
 989      -        hat_t           *old = cpu->cpu_current_hat;
     1589 +        cpu_t *cpu = CPU;
     1590 +        hat_t *old = cpu->cpu_current_hat;
 990 1591  
 991 1592          /*
 992 1593           * set up this information first, so we don't miss any cross calls
 993 1594           */
 994 1595          if (old != NULL) {
 995 1596                  if (old == hat)
 996 1597                          return;
 997 1598                  if (old != kas.a_hat)
 998 1599                          CPUSET_ATOMIC_DEL(old->hat_cpus, cpu->cpu_id);
 999 1600          }
1000 1601  
1001 1602          /*
1002 1603           * Add this CPU to the active set for this HAT.
1003 1604           */
1004 1605          if (hat != kas.a_hat) {
1005 1606                  CPUSET_ATOMIC_ADD(hat->hat_cpus, cpu->cpu_id);
1006 1607          }
1007 1608          cpu->cpu_current_hat = hat;
1008 1609  
1009      -        /*
1010      -         * now go ahead and load cr3
1011      -         */
1012      -        if (hat->hat_flags & HAT_VLP) {
1013      -#if defined(__amd64)
1014      -                x86pte_t *vlpptep = cpu->cpu_hat_info->hci_vlp_l2ptes;
     1610 +#if defined(__xpv)
     1611 +        hat_switch_xen(hat);
     1612 +#else
     1613 +        struct hat_cpu_info *info = cpu->cpu_m.mcpu_hat_info;
     1614 +        uint64_t pcide = getcr4() & CR4_PCIDE;
     1615 +        uint64_t kcr3, ucr3;
     1616 +        pfn_t tl_kpfn;
     1617 +        ulong_t flag;
1015 1618  
1016      -                VLP_COPY(hat->hat_vlp_ptes, vlpptep);
1017      -                newcr3 = MAKECR3(cpu->cpu_hat_info->hci_vlp_pfn);
1018      -#elif defined(__i386)
1019      -                reload_pae32(hat, cpu);
1020      -                newcr3 = MAKECR3(kas.a_hat->hat_htable->ht_pfn) +
1021      -                    (cpu->cpu_id + 1) * VLP_SIZE;
1022      -#endif
     1619 +        EQUIV(kpti_enable, !mmu.pt_global);
     1620 +
     1621 +        if (hat->hat_flags & HAT_COPIED) {
     1622 +                hat_pcp_update(cpu, hat);
     1623 +                tl_kpfn = info->hci_pcp_l3pfn;
1023 1624          } else {
1024      -                newcr3 = MAKECR3((uint64_t)hat->hat_htable->ht_pfn);
     1625 +                IMPLY(kpti_enable, hat == kas.a_hat);
     1626 +                tl_kpfn = hat->hat_htable->ht_pfn;
1025 1627          }
1026      -#ifdef __xpv
1027      -        {
1028      -                struct mmuext_op t[2];
1029      -                uint_t retcnt;
1030      -                uint_t opcnt = 1;
1031 1628  
1032      -                t[0].cmd = MMUEXT_NEW_BASEPTR;
1033      -                t[0].arg1.mfn = mmu_btop(pa_to_ma(newcr3));
1034      -#if defined(__amd64)
1035      -                /*
1036      -                 * There's an interesting problem here, as to what to
1037      -                 * actually specify when switching to the kernel hat.
1038      -                 * For now we'll reuse the kernel hat again.
1039      -                 */
1040      -                t[1].cmd = MMUEXT_NEW_USER_BASEPTR;
1041      -                if (hat == kas.a_hat)
1042      -                        t[1].arg1.mfn = mmu_btop(pa_to_ma(newcr3));
1043      -                else
1044      -                        t[1].arg1.mfn = pfn_to_mfn(hat->hat_user_ptable);
1045      -                ++opcnt;
1046      -#endif  /* __amd64 */
1047      -                if (HYPERVISOR_mmuext_op(t, opcnt, &retcnt, DOMID_SELF) < 0)
1048      -                        panic("HYPERVISOR_mmu_update() failed");
1049      -                ASSERT(retcnt == opcnt);
     1629 +        if (pcide) {
     1630 +                ASSERT(kpti_enable);
1050 1631  
     1632 +                kcr3 = MAKECR3(tl_kpfn, PCID_KERNEL) | CR3_NOINVL_BIT;
     1633 +                ucr3 = MAKECR3(info->hci_user_l3pfn, PCID_USER) |
     1634 +                    CR3_NOINVL_BIT;
     1635 +
     1636 +                setcr3(kcr3);
     1637 +                if (old != kas.a_hat)
     1638 +                        mmu_flush_tlb(FLUSH_TLB_ALL, NULL);
     1639 +        } else {
     1640 +                kcr3 = MAKECR3(tl_kpfn, PCID_NONE);
     1641 +                ucr3 = kpti_enable ?
     1642 +                    MAKECR3(info->hci_user_l3pfn, PCID_NONE) :
     1643 +                    0;
     1644 +
     1645 +                setcr3(kcr3);
1051 1646          }
1052      -#else
1053      -        setcr3(newcr3);
1054      -#endif
     1647 +
     1648 +        /*
     1649 +         * We will already be taking shootdowns for our new HAT, and as KPTI
     1650 +         * invpcid emulation needs to use kf_user_cr3, make sure we don't get
     1651 +         * any cross calls while we're inconsistent.  Note that it's harmless to
     1652 +         * have a *stale* kf_user_cr3 (we just did a FLUSH_TLB_ALL), but a
     1653 +         * *zero* kf_user_cr3 is not going to go very well.
     1654 +         */
     1655 +        if (pcide)
     1656 +                flag = intr_clear();
     1657 +
     1658 +        reset_kpti(&cpu->cpu_m.mcpu_kpti, kcr3, ucr3);
     1659 +        reset_kpti(&cpu->cpu_m.mcpu_kpti_flt, kcr3, ucr3);
     1660 +        reset_kpti(&cpu->cpu_m.mcpu_kpti_dbg, kcr3, ucr3);
     1661 +
     1662 +        if (pcide)
     1663 +                intr_restore(flag);
     1664 +
     1665 +#endif /* !__xpv */
     1666 +
1055 1667          ASSERT(cpu == CPU);
1056 1668  }
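
To make the PCID handling above concrete, here is a minimal, illustrative sketch of how a PCID-tagged %cr3 value is put together. The EX_* names and the helper are hypothetical and are not the illumos MAKECR3/CR3_NOINVL_BIT definitions (MAKECR3 takes a pfn, this sketch takes a physical address); the bit positions follow the Intel SDM: with CR4.PCIDE set, bits 0-11 of %cr3 carry the PCID, and setting bit 63 of the value loaded suppresses the invalidation of that PCID's TLB entries that a mov to %cr3 would otherwise perform.

#include <stdint.h>

#define EX_PCID_KERNEL  0x0ULL          /* matches "PCID_KERNEL is defined as zero" */
#define EX_PCID_USER    0x1ULL          /* illustrative value only */
#define EX_CR3_NOINVL   (1ULL << 63)    /* don't flush this PCID's entries on load */

/*
 * Compose a %cr3 image from the page-aligned physical address of the
 * top-level (L3/PML4) pagetable and a PCID.
 */
static inline uint64_t
ex_make_cr3(uint64_t l3_pa, uint64_t pcid, int noinvl)
{
        return (l3_pa | (pcid & 0xfffULL) | (noinvl ? EX_CR3_NOINVL : 0));
}

Under these assumptions, the kcr3 built above corresponds to ex_make_cr3(top-level PA, EX_PCID_KERNEL, 1) when PCIDE is enabled, and to a plain ex_make_cr3(top-level PA, 0, 0) otherwise.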
1057 1669  
1058 1670  /*
1059 1671   * Utility to return a valid x86pte_t from protections, pfn, and level number
1060 1672   */
1061 1673  static x86pte_t
1062 1674  hati_mkpte(pfn_t pfn, uint_t attr, level_t level, uint_t flags)
1063 1675  {
1064 1676          x86pte_t        pte;
↓ open down ↓ 291 lines elided ↑ open up ↑
1356 1968           * Install a new mapping in the page's mapping list
1357 1969           */
1358 1970          if (!PTE_ISVALID(old_pte)) {
1359 1971                  if (is_consist) {
1360 1972                          hment_assign(ht, entry, pp, hm);
1361 1973                          x86_hm_exit(pp);
1362 1974                  } else {
1363 1975                          ASSERT(flags & HAT_LOAD_NOCONSIST);
1364 1976                  }
1365 1977  #if defined(__amd64)
1366      -                if (ht->ht_flags & HTABLE_VLP) {
     1978 +                if (ht->ht_flags & HTABLE_COPIED) {
1367 1979                          cpu_t *cpu = CPU;
1368      -                        x86pte_t *vlpptep = cpu->cpu_hat_info->hci_vlp_l2ptes;
1369      -                        VLP_COPY(hat->hat_vlp_ptes, vlpptep);
     1980 +                        hat_pcp_update(cpu, hat);
1370 1981                  }
1371 1982  #endif
1372 1983                  HTABLE_INC(ht->ht_valid_cnt);
1373 1984                  PGCNT_INC(hat, l);
1374 1985                  return (rv);
1375 1986          }
1376 1987  
1377 1988          /*
1378 1989           * Remap's are more complicated:
1379 1990           *  - HAT_LOAD_REMAP must be specified if changing the pfn.
↓ open down ↓ 51 lines elided ↑ open up ↑
1431 2042          x86pte_t        pte;
1432 2043          int             rv = 0;
1433 2044  
1434 2045          /*
1435 2046           * The number 16 is arbitrary and here to catch a recursion problem
1436 2047           * early before we blow out the kernel stack.
1437 2048           */
1438 2049          ++curthread->t_hatdepth;
1439 2050          ASSERT(curthread->t_hatdepth < 16);
1440 2051  
1441      -        ASSERT(hat == kas.a_hat || AS_LOCK_HELD(hat->hat_as));
     2052 +        ASSERT(hat == kas.a_hat || (hat->hat_flags & HAT_PCP) != 0 ||
     2053 +            AS_LOCK_HELD(hat->hat_as));
1442 2054  
1443 2055          if (flags & HAT_LOAD_SHARE)
1444 2056                  hat->hat_flags |= HAT_SHARED;
1445 2057  
1446 2058          /*
1447 2059           * Find the page table that maps this page if it already exists.
1448 2060           */
1449 2061          ht = htable_lookup(hat, va, level);
1450 2062  
1451 2063          /*
1452 2064           * We must have HAT_LOAD_NOCONSIST if page_t is NULL.
1453 2065           */
1454 2066          if (pp == NULL)
1455 2067                  flags |= HAT_LOAD_NOCONSIST;
1456 2068  
1457 2069          if (ht == NULL) {
1458 2070                  ht = htable_create(hat, va, level, NULL);
1459 2071                  ASSERT(ht != NULL);
1460 2072          }
     2073 +        /*
     2074 +         * htable_va2entry checks this condition as well, but it won't include
     2075 +         * much useful info in the panic. So we do it in advance here to include
     2076 +         * all the context.
     2077 +         */
     2078 +        if (ht->ht_vaddr > va || va > HTABLE_LAST_PAGE(ht)) {
     2079 +                panic("hati_load_common: bad htable: va=%p, last page=%p, "
     2080 +                    "ht->ht_vaddr=%p, ht->ht_level=%d", (void *)va,
     2081 +                    (void *)HTABLE_LAST_PAGE(ht), (void *)ht->ht_vaddr,
     2082 +                    (int)ht->ht_level);
     2083 +        }
1461 2084          entry = htable_va2entry(va, ht);
1462 2085  
1463 2086          /*
1464 2087           * a bunch of paranoid error checking
1465 2088           */
1466 2089          ASSERT(ht->ht_busy > 0);
1467      -        if (ht->ht_vaddr > va || va > HTABLE_LAST_PAGE(ht))
1468      -                panic("hati_load_common: bad htable %p, va %p",
1469      -                    (void *)ht, (void *)va);
1470 2090          ASSERT(ht->ht_level == level);
1471 2091  
1472 2092          /*
1473 2093           * construct the new PTE
1474 2094           */
1475 2095          if (hat == kas.a_hat)
1476 2096                  attr &= ~PROT_USER;
1477 2097          pte = hati_mkpte(pfn, attr, level, flags);
1478 2098          if (hat == kas.a_hat && va >= kernelbase)
1479 2099                  PTE_SET(pte, mmu.pt_global);
↓ open down ↓ 429 lines elided ↑ open up ↑
1909 2529  /* ARGSUSED */
1910 2530  void
1911 2531  hat_unlock_region(struct hat *hat, caddr_t addr, size_t len,
1912 2532      hat_region_cookie_t rcookie)
1913 2533  {
1914 2534          panic("No shared region support on x86");
1915 2535  }
1916 2536  
1917 2537  #if !defined(__xpv)
1918 2538  /*
1919      - * Cross call service routine to demap a virtual page on
1920      - * the current CPU or flush all mappings in TLB.
     2539 + * Cross call service routine to demap a range of virtual
     2540 + * pages on the current CPU, or flush all mappings in the TLB.
1921 2541   */
1922      -/*ARGSUSED*/
1923 2542  static int
1924 2543  hati_demap_func(xc_arg_t a1, xc_arg_t a2, xc_arg_t a3)
1925 2544  {
1926      -        hat_t   *hat = (hat_t *)a1;
1927      -        caddr_t addr = (caddr_t)a2;
1928      -        size_t len = (size_t)a3;
     2545 +        _NOTE(ARGUNUSED(a3));
     2546 +        hat_t           *hat = (hat_t *)a1;
     2547 +        tlb_range_t     *range = (tlb_range_t *)a2;
1929 2548  
1930 2549          /*
1931 2550           * If the target hat isn't the kernel and this CPU isn't operating
1932 2551           * in the target hat, we can ignore the cross call.
1933 2552           */
1934 2553          if (hat != kas.a_hat && hat != CPU->cpu_current_hat)
1935 2554                  return (0);
1936 2555  
1937      -        /*
1938      -         * For a normal address, we flush a range of contiguous mappings
1939      -         */
1940      -        if ((uintptr_t)addr != DEMAP_ALL_ADDR) {
1941      -                for (size_t i = 0; i < len; i += MMU_PAGESIZE)
1942      -                        mmu_tlbflush_entry(addr + i);
     2556 +        if (range->tr_va != DEMAP_ALL_ADDR) {
     2557 +                mmu_flush_tlb(FLUSH_TLB_RANGE, range);
1943 2558                  return (0);
1944 2559          }
1945 2560  
1946 2561          /*
1947      -         * Otherwise we reload cr3 to effect a complete TLB flush.
     2562 +         * We are flushing all of userspace.
1948 2563           *
1949      -         * A reload of cr3 on a VLP process also means we must also recopy in
1950      -         * the pte values from the struct hat
     2564 +         * When using PCP, we first need to update this CPU's idea of the PCP
     2565 +         * PTEs.
1951 2566           */
1952      -        if (hat->hat_flags & HAT_VLP) {
     2567 +        if (hat->hat_flags & HAT_COPIED) {
1953 2568  #if defined(__amd64)
1954      -                x86pte_t *vlpptep = CPU->cpu_hat_info->hci_vlp_l2ptes;
1955      -
1956      -                VLP_COPY(hat->hat_vlp_ptes, vlpptep);
     2569 +                hat_pcp_update(CPU, hat);
1957 2570  #elif defined(__i386)
1958 2571                  reload_pae32(hat, CPU);
1959 2572  #endif
1960 2573          }
1961      -        reload_cr3();
     2574 +
     2575 +        mmu_flush_tlb(FLUSH_TLB_NONGLOBAL, NULL);
1962 2576          return (0);
1963 2577  }
1964 2578  
1965      -/*
1966      - * Flush all TLB entries, including global (ie. kernel) ones.
1967      - */
1968      -static void
1969      -flush_all_tlb_entries(void)
1970      -{
1971      -        ulong_t cr4 = getcr4();
1972      -
1973      -        if (cr4 & CR4_PGE) {
1974      -                setcr4(cr4 & ~(ulong_t)CR4_PGE);
1975      -                setcr4(cr4);
1976      -
1977      -                /*
1978      -                 * 32 bit PAE also needs to always reload_cr3()
1979      -                 */
1980      -                if (mmu.max_level == 2)
1981      -                        reload_cr3();
1982      -        } else {
1983      -                reload_cr3();
1984      -        }
1985      -}
1986      -
1987      -#define TLB_CPU_HALTED  (01ul)
1988      -#define TLB_INVAL_ALL   (02ul)
     2579 +#define TLBIDLE_CPU_HALTED      (0x1UL)
     2580 +#define TLBIDLE_INVAL_ALL       (0x2UL)
1989 2581  #define CAS_TLB_INFO(cpu, old, new)     \
1990 2582          atomic_cas_ulong((ulong_t *)&(cpu)->cpu_m.mcpu_tlb_info, (old), (new))
1991 2583  
1992 2584  /*
1993 2585   * Record that a CPU is going idle
1994 2586   */
1995 2587  void
1996 2588  tlb_going_idle(void)
1997 2589  {
1998      -        atomic_or_ulong((ulong_t *)&CPU->cpu_m.mcpu_tlb_info, TLB_CPU_HALTED);
     2590 +        atomic_or_ulong((ulong_t *)&CPU->cpu_m.mcpu_tlb_info,
     2591 +            TLBIDLE_CPU_HALTED);
1999 2592  }
2000 2593  
2001 2594  /*
2002 2595   * Service a delayed TLB flush if coming out of being idle.
2003 2596   * It will be called from cpu idle notification with interrupt disabled.
2004 2597   */
2005 2598  void
2006 2599  tlb_service(void)
2007 2600  {
2008 2601          ulong_t tlb_info;
2009 2602          ulong_t found;
2010 2603  
2011 2604          /*
2012 2605           * We only have to do something if coming out of being idle.
2013 2606           */
2014 2607          tlb_info = CPU->cpu_m.mcpu_tlb_info;
2015      -        if (tlb_info & TLB_CPU_HALTED) {
     2608 +        if (tlb_info & TLBIDLE_CPU_HALTED) {
2016 2609                  ASSERT(CPU->cpu_current_hat == kas.a_hat);
2017 2610  
2018 2611                  /*
2019 2612                   * Atomic clear and fetch of old state.
2020 2613                   */
2021 2614                  while ((found = CAS_TLB_INFO(CPU, tlb_info, 0)) != tlb_info) {
2022      -                        ASSERT(found & TLB_CPU_HALTED);
     2615 +                        ASSERT(found & TLBIDLE_CPU_HALTED);
2023 2616                          tlb_info = found;
2024 2617                          SMT_PAUSE();
2025 2618                  }
2026      -                if (tlb_info & TLB_INVAL_ALL)
2027      -                        flush_all_tlb_entries();
     2619 +                if (tlb_info & TLBIDLE_INVAL_ALL)
     2620 +                        mmu_flush_tlb(FLUSH_TLB_ALL, NULL);
2028 2621          }
2029 2622  }
2030 2623  #endif /* !__xpv */
2031 2624  
2032 2625  /*
2033 2626   * Internal routine to do cross calls to invalidate a range of pages on
2034 2627   * all CPUs using a given hat.
2035 2628   */
2036 2629  void
2037      -hat_tlb_inval_range(hat_t *hat, uintptr_t va, size_t len)
     2630 +hat_tlb_inval_range(hat_t *hat, tlb_range_t *in_range)
2038 2631  {
2039 2632          extern int      flushes_require_xcalls; /* from mp_startup.c */
2040 2633          cpuset_t        justme;
2041 2634          cpuset_t        cpus_to_shootdown;
     2635 +        tlb_range_t     range = *in_range;
2042 2636  #ifndef __xpv
2043 2637          cpuset_t        check_cpus;
2044 2638          cpu_t           *cpup;
2045 2639          int             c;
2046 2640  #endif
2047 2641  
2048 2642          /*
2049 2643           * If the hat is being destroyed, there are no more users, so
2050 2644           * demap need not do anything.
2051 2645           */
2052 2646          if (hat->hat_flags & HAT_FREEING)
2053 2647                  return;
2054 2648  
2055 2649          /*
2056 2650           * If demapping from a shared pagetable, we best demap the
2057 2651           * entire set of user TLBs, since we don't know what addresses
2058 2652           * these were shared at.
2059 2653           */
2060 2654          if (hat->hat_flags & HAT_SHARED) {
2061 2655                  hat = kas.a_hat;
2062      -                va = DEMAP_ALL_ADDR;
     2656 +                range.tr_va = DEMAP_ALL_ADDR;
2063 2657          }
2064 2658  
2065 2659          /*
2066 2660           * if not running with multiple CPUs, don't use cross calls
2067 2661           */
2068 2662          if (panicstr || !flushes_require_xcalls) {
2069 2663  #ifdef __xpv
2070      -                if (va == DEMAP_ALL_ADDR) {
     2664 +                if (range.tr_va == DEMAP_ALL_ADDR) {
2071 2665                          xen_flush_tlb();
2072 2666                  } else {
2073      -                        for (size_t i = 0; i < len; i += MMU_PAGESIZE)
2074      -                                xen_flush_va((caddr_t)(va + i));
     2667 +                        for (size_t i = 0; i < TLB_RANGE_LEN(&range);
     2668 +                            i += MMU_PAGESIZE) {
     2669 +                                xen_flush_va((caddr_t)(range.tr_va + i));
     2670 +                        }
2075 2671                  }
2076 2672  #else
2077      -                (void) hati_demap_func((xc_arg_t)hat,
2078      -                    (xc_arg_t)va, (xc_arg_t)len);
     2673 +                (void) hati_demap_func((xc_arg_t)hat, (xc_arg_t)&range, 0);
2079 2674  #endif
2080 2675                  return;
2081 2676          }
2082 2677  
2083 2678  
2084 2679          /*
2085 2680           * Determine CPUs to shootdown. Kernel changes always do all CPUs.
2086 2681           * Otherwise it's just CPUs currently executing in this hat.
2087 2682           */
2088 2683          kpreempt_disable();
↓ open down ↓ 13 lines elided ↑ open up ↑
2102 2697                  ulong_t tlb_info;
2103 2698  
2104 2699                  if (!CPU_IN_SET(check_cpus, c))
2105 2700                          continue;
2106 2701                  CPUSET_DEL(check_cpus, c);
2107 2702                  cpup = cpu[c];
2108 2703                  if (cpup == NULL)
2109 2704                          continue;
2110 2705  
2111 2706                  tlb_info = cpup->cpu_m.mcpu_tlb_info;
2112      -                while (tlb_info == TLB_CPU_HALTED) {
2113      -                        (void) CAS_TLB_INFO(cpup, TLB_CPU_HALTED,
2114      -                            TLB_CPU_HALTED | TLB_INVAL_ALL);
     2707 +                while (tlb_info == TLBIDLE_CPU_HALTED) {
     2708 +                        (void) CAS_TLB_INFO(cpup, TLBIDLE_CPU_HALTED,
     2709 +                            TLBIDLE_CPU_HALTED | TLBIDLE_INVAL_ALL);
2115 2710                          SMT_PAUSE();
2116 2711                          tlb_info = cpup->cpu_m.mcpu_tlb_info;
2117 2712                  }
2118      -                if (tlb_info == (TLB_CPU_HALTED | TLB_INVAL_ALL)) {
     2713 +                if (tlb_info == (TLBIDLE_CPU_HALTED | TLBIDLE_INVAL_ALL)) {
2119 2714                          HATSTAT_INC(hs_tlb_inval_delayed);
2120 2715                          CPUSET_DEL(cpus_to_shootdown, c);
2121 2716                  }
2122 2717          }
2123 2718  #endif
2124 2719  
2125 2720          if (CPUSET_ISNULL(cpus_to_shootdown) ||
2126 2721              CPUSET_ISEQUAL(cpus_to_shootdown, justme)) {
2127 2722  
2128 2723  #ifdef __xpv
2129      -                if (va == DEMAP_ALL_ADDR) {
     2724 +                if (range.tr_va == DEMAP_ALL_ADDR) {
2130 2725                          xen_flush_tlb();
2131 2726                  } else {
2132      -                        for (size_t i = 0; i < len; i += MMU_PAGESIZE)
2133      -                                xen_flush_va((caddr_t)(va + i));
     2727 +                        for (size_t i = 0; i < TLB_RANGE_LEN(&range);
     2728 +                            i += MMU_PAGESIZE) {
     2729 +                                xen_flush_va((caddr_t)(range.tr_va + i));
     2730 +                        }
2134 2731                  }
2135 2732  #else
2136      -                (void) hati_demap_func((xc_arg_t)hat,
2137      -                    (xc_arg_t)va, (xc_arg_t)len);
     2733 +                (void) hati_demap_func((xc_arg_t)hat, (xc_arg_t)&range, 0);
2138 2734  #endif
2139 2735  
2140 2736          } else {
2141 2737  
2142 2738                  CPUSET_ADD(cpus_to_shootdown, CPU->cpu_id);
2143 2739  #ifdef __xpv
2144      -                if (va == DEMAP_ALL_ADDR) {
     2740 +                if (range.tr_va == DEMAP_ALL_ADDR) {
2145 2741                          xen_gflush_tlb(cpus_to_shootdown);
2146 2742                  } else {
2147      -                        for (size_t i = 0; i < len; i += MMU_PAGESIZE) {
2148      -                                xen_gflush_va((caddr_t)(va + i),
     2743 +                        for (size_t i = 0; i < TLB_RANGE_LEN(&range);
     2744 +                            i += MMU_PAGESIZE) {
     2745 +                                xen_gflush_va((caddr_t)(range.tr_va + i),
2149 2746                                      cpus_to_shootdown);
2150 2747                          }
2151 2748                  }
2152 2749  #else
2153      -                xc_call((xc_arg_t)hat, (xc_arg_t)va, (xc_arg_t)len,
     2750 +                xc_call((xc_arg_t)hat, (xc_arg_t)&range, 0,
2154 2751                      CPUSET2BV(cpus_to_shootdown), hati_demap_func);
2155 2752  #endif
2156 2753  
2157 2754          }
2158 2755          kpreempt_enable();
2159 2756  }
2160 2757  
2161 2758  void
2162 2759  hat_tlb_inval(hat_t *hat, uintptr_t va)
2163 2760  {
2164      -        hat_tlb_inval_range(hat, va, MMU_PAGESIZE);
     2761 +        /*
     2762 +         * Create range for a single page.
     2763 +         */
     2764 +        tlb_range_t range;
     2765 +        range.tr_va = va;
     2766 +        range.tr_cnt = 1; /* one page */
     2767 +        range.tr_level = MIN_PAGE_LEVEL; /* pages are MMU_PAGESIZE */
     2768 +
     2769 +        hat_tlb_inval_range(hat, &range);
2165 2770  }
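
As a usage sketch of the tlb_range_t interface, a caller that has just unmapped a contiguous run of large pages could describe the whole run with a single range instead of one call per page. This is illustrative only, using the types from this file, and assumes (as handle_ranges() below suggests) that tr_cnt counts mappings at tr_level and that level 1 corresponds to 2MB pages on amd64.

static void
ex_inval_large_run(hat_t *hat, caddr_t addr)
{
        tlb_range_t r;

        r.tr_va = (uintptr_t)addr;      /* start of the contiguous run */
        r.tr_cnt = 4;                   /* four mappings in the run... */
        r.tr_level = 1;                 /* ...each at level 1, i.e. a 2MB page */
        hat_tlb_inval_range(hat, &r);   /* one shootdown covers the whole 8MB */
}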
2166 2771  
2167 2772  /*
2168 2773   * Interior routine for HAT_UNLOADs from hat_unload_callback(),
2169 2774   * hat_kmap_unload() OR from hat_steal() code.  This routine doesn't
2170 2775   * handle releasing of the htables.
2171 2776   */
2172 2777  void
2173 2778  hat_pte_unmap(
2174 2779          htable_t        *ht,
↓ open down ↓ 146 lines elided ↑ open up ↑
2321 2926          if (mmu.kmap_addr <= va && va < mmu.kmap_eaddr) {
2322 2927                  ASSERT(hat == kas.a_hat);
2323 2928                  hat_kmap_unload(addr, len, flags);
2324 2929          } else {
2325 2930                  hat_unload_callback(hat, addr, len, flags, NULL);
2326 2931          }
2327 2932          XPV_ALLOW_MIGRATE();
2328 2933  }
2329 2934  
2330 2935  /*
2331      - * Do the callbacks for ranges being unloaded.
2332      - */
2333      -typedef struct range_info {
2334      -        uintptr_t       rng_va;
2335      -        ulong_t         rng_cnt;
2336      -        level_t         rng_level;
2337      -} range_info_t;
2338      -
2339      -/*
2340 2936   * Invalidate the TLB, and perform the callback to the upper level VM system,
2341 2937   * for the specified ranges of contiguous pages.
2342 2938   */
2343 2939  static void
2344      -handle_ranges(hat_t *hat, hat_callback_t *cb, uint_t cnt, range_info_t *range)
     2940 +handle_ranges(hat_t *hat, hat_callback_t *cb, uint_t cnt, tlb_range_t *range)
2345 2941  {
2346 2942          while (cnt > 0) {
2347      -                size_t len;
2348      -
2349 2943                  --cnt;
2350      -                len = range[cnt].rng_cnt << LEVEL_SHIFT(range[cnt].rng_level);
2351      -                hat_tlb_inval_range(hat, (uintptr_t)range[cnt].rng_va, len);
     2944 +                hat_tlb_inval_range(hat, &range[cnt]);
2352 2945  
2353 2946                  if (cb != NULL) {
2354      -                        cb->hcb_start_addr = (caddr_t)range[cnt].rng_va;
     2947 +                        cb->hcb_start_addr = (caddr_t)range[cnt].tr_va;
2355 2948                          cb->hcb_end_addr = cb->hcb_start_addr;
2356      -                        cb->hcb_end_addr += len;
     2949 +                        cb->hcb_end_addr += range[cnt].tr_cnt <<
     2950 +                            LEVEL_SHIFT(range[cnt].tr_level);
2357 2951                          cb->hcb_function(cb);
2358 2952                  }
2359 2953          }
2360 2954  }
2361 2955  
2362 2956  /*
2363 2957   * Unload a given range of addresses (has optional callback)
2364 2958   *
2365 2959   * Flags:
2366 2960   * define       HAT_UNLOAD              0x00
↓ open down ↓ 9 lines elided ↑ open up ↑
2376 2970          caddr_t         addr,
2377 2971          size_t          len,
2378 2972          uint_t          flags,
2379 2973          hat_callback_t  *cb)
2380 2974  {
2381 2975          uintptr_t       vaddr = (uintptr_t)addr;
2382 2976          uintptr_t       eaddr = vaddr + len;
2383 2977          htable_t        *ht = NULL;
2384 2978          uint_t          entry;
2385 2979          uintptr_t       contig_va = (uintptr_t)-1L;
2386      -        range_info_t    r[MAX_UNLOAD_CNT];
     2980 +        tlb_range_t     r[MAX_UNLOAD_CNT];
2387 2981          uint_t          r_cnt = 0;
2388 2982          x86pte_t        old_pte;
2389 2983  
2390 2984          XPV_DISALLOW_MIGRATE();
2391 2985          ASSERT(hat == kas.a_hat || eaddr <= _userlimit);
2392 2986          ASSERT(IS_PAGEALIGNED(vaddr));
2393 2987          ASSERT(IS_PAGEALIGNED(eaddr));
2394 2988  
2395 2989          /*
2396 2990           * Special case a single page being unloaded for speed. This happens
↓ open down ↓ 19 lines elided ↑ open up ↑
2416 3010  
2417 3011                  ASSERT(!IN_VA_HOLE(vaddr));
2418 3012  
2419 3013                  if (vaddr < (uintptr_t)addr)
2420 3014                          panic("hat_unload_callback(): unmap inside large page");
2421 3015  
2422 3016                  /*
2423 3017                   * We'll do the call backs for contiguous ranges
2424 3018                   */
2425 3019                  if (vaddr != contig_va ||
2426      -                    (r_cnt > 0 && r[r_cnt - 1].rng_level != ht->ht_level)) {
     3020 +                    (r_cnt > 0 && r[r_cnt - 1].tr_level != ht->ht_level)) {
2427 3021                          if (r_cnt == MAX_UNLOAD_CNT) {
2428 3022                                  handle_ranges(hat, cb, r_cnt, r);
2429 3023                                  r_cnt = 0;
2430 3024                          }
2431      -                        r[r_cnt].rng_va = vaddr;
2432      -                        r[r_cnt].rng_cnt = 0;
2433      -                        r[r_cnt].rng_level = ht->ht_level;
     3025 +                        r[r_cnt].tr_va = vaddr;
     3026 +                        r[r_cnt].tr_cnt = 0;
     3027 +                        r[r_cnt].tr_level = ht->ht_level;
2434 3028                          ++r_cnt;
2435 3029                  }
2436 3030  
2437 3031                  /*
2438 3032                   * Unload one mapping (for a single page) from the page tables.
2439 3033                   * Note that we do not remove the mapping from the TLB yet,
2440 3034                   * as indicated by the tlb=FALSE argument to hat_pte_unmap().
2441 3035                   * handle_ranges() will clear the TLB entries with one call to
2442 3036                   * hat_tlb_inval_range() per contiguous range.  This is
2443 3037                   * safe because the page can not be reused until the
2444 3038                   * callback is made (or we return).
2445 3039                   */
2446 3040                  entry = htable_va2entry(vaddr, ht);
2447 3041                  hat_pte_unmap(ht, entry, flags, old_pte, NULL, B_FALSE);
2448 3042                  ASSERT(ht->ht_level <= mmu.max_page_level);
2449 3043                  vaddr += LEVEL_SIZE(ht->ht_level);
2450 3044                  contig_va = vaddr;
2451      -                ++r[r_cnt - 1].rng_cnt;
     3045 +                ++r[r_cnt - 1].tr_cnt;
2452 3046          }
2453 3047          if (ht)
2454 3048                  htable_release(ht);
2455 3049  
2456 3050          /*
2457 3051           * handle last range for callbacks
2458 3052           */
2459 3053          if (r_cnt > 0)
2460 3054                  handle_ranges(hat, cb, r_cnt, r);
2461 3055          XPV_ALLOW_MIGRATE();
↓ open down ↓ 8 lines elided ↑ open up ↑
2470 3064  {
2471 3065          ssize_t sz;
2472 3066          caddr_t endva = va + size;
2473 3067  
2474 3068          while (va < endva) {
2475 3069                  sz = hat_getpagesize(hat, va);
2476 3070                  if (sz < 0) {
2477 3071  #ifdef __xpv
2478 3072                          xen_flush_tlb();
2479 3073  #else
2480      -                        flush_all_tlb_entries();
     3074 +                        mmu_flush_tlb(FLUSH_TLB_ALL, NULL);
2481 3075  #endif
2482 3076                          break;
2483 3077                  }
2484 3078  #ifdef __xpv
2485 3079                  xen_flush_va(va);
2486 3080  #else
2487      -                mmu_tlbflush_entry(va);
     3081 +                mmu_flush_tlb_kpage((uintptr_t)va);
2488 3082  #endif
2489 3083                  va += sz;
2490 3084          }
2491 3085  }
2492 3086  
2493 3087  /*
2494 3088   * synchronize mapping with software data structures
2495 3089   *
2496 3090   * This interface is currently only used by the working set monitor
2497 3091   * driver.
↓ open down ↓ 645 lines elided ↑ open up ↑
3143 3737                                      (LEVEL_SHIFT(ht->ht_level) - MMU_PAGESHIFT);
3144 3738                                  ht->ht_valid_cnt = 0;
3145 3739                                  need_demaps = 1;
3146 3740                          }
3147 3741                          htable_release(ht);
3148 3742                  }
3149 3743          }
3150 3744  
3151 3745          /*
3152 3746           * flush the TLBs - since we're probably dealing with MANY mappings
3153      -         * we do just one CR3 reload.
     3747 +         * we just do a full invalidation.
3154 3748           */
3155 3749          if (!(hat->hat_flags & HAT_FREEING) && need_demaps)
3156 3750                  hat_tlb_inval(hat, DEMAP_ALL_ADDR);
3157 3751  
3158 3752          /*
3159 3753           * Now go back and clean up any unaligned mappings that
3160 3754           * couldn't share pagetables.
3161 3755           */
3162 3756          if (!is_it_dism(hat, addr))
3163 3757                  flags |= HAT_UNLOAD_UNLOCK;
↓ open down ↓ 762 lines elided ↑ open up ↑
3926 4520  #else
3927 4521          {
3928 4522                  x86pte_t *pteptr;
3929 4523  
3930 4524                  pteptr = x86pte_mapin(mmu_btop(pte_pa),
3931 4525                      (pte_pa & MMU_PAGEOFFSET) >> mmu.pte_size_shift, NULL);
3932 4526                  if (mmu.pae_hat)
3933 4527                          *pteptr = 0;
3934 4528                  else
3935 4529                          *(x86pte32_t *)pteptr = 0;
3936      -                mmu_tlbflush_entry(addr);
     4530 +                mmu_flush_tlb_kpage((uintptr_t)addr);
3937 4531                  x86pte_mapout();
3938 4532          }
3939 4533  #endif
3940 4534  
3941 4535          ht = htable_getpte(kas.a_hat, ALIGN2PAGE(addr), NULL, NULL, 0);
3942 4536          if (ht == NULL)
3943 4537                  panic("hat_mempte_release(): invalid address");
3944 4538          ASSERT(ht->ht_level == 0);
3945 4539          HTABLE_DEC(ht->ht_valid_cnt);
3946 4540          htable_release(ht);
↓ open down ↓ 40 lines elided ↑ open up ↑
3987 4581  #else
3988 4582          {
3989 4583                  x86pte_t *pteptr;
3990 4584  
3991 4585                  pteptr = x86pte_mapin(mmu_btop(pte_pa),
3992 4586                      (pte_pa & MMU_PAGEOFFSET) >> mmu.pte_size_shift, NULL);
3993 4587                  if (mmu.pae_hat)
3994 4588                          *(x86pte_t *)pteptr = pte;
3995 4589                  else
3996 4590                          *(x86pte32_t *)pteptr = (x86pte32_t)pte;
3997      -                mmu_tlbflush_entry(addr);
     4591 +                mmu_flush_tlb_kpage((uintptr_t)addr);
3998 4592                  x86pte_mapout();
3999 4593          }
4000 4594  #endif
4001 4595          XPV_ALLOW_MIGRATE();
4002 4596  }
4003 4597  
4004 4598  
4005 4599  
4006 4600  /*
4007 4601   * Hat locking functions
↓ open down ↓ 13 lines elided ↑ open up ↑
4021 4615  }
4022 4616  
4023 4617  /*
4024 4618   * HAT part of cpu initialization.
4025 4619   */
4026 4620  void
4027 4621  hat_cpu_online(struct cpu *cpup)
4028 4622  {
4029 4623          if (cpup != CPU) {
4030 4624                  x86pte_cpu_init(cpup);
4031      -                hat_vlp_setup(cpup);
     4625 +                hat_pcp_setup(cpup);
4032 4626          }
4033 4627          CPUSET_ATOMIC_ADD(khat_cpuset, cpup->cpu_id);
4034 4628  }
4035 4629  
4036 4630  /*
4037 4631   * HAT part of cpu deletion.
4038 4632   * (currently, we only call this after the cpu is safely passivated.)
4039 4633   */
4040 4634  void
4041 4635  hat_cpu_offline(struct cpu *cpup)
4042 4636  {
4043 4637          ASSERT(cpup != CPU);
4044 4638  
4045 4639          CPUSET_ATOMIC_DEL(khat_cpuset, cpup->cpu_id);
4046      -        hat_vlp_teardown(cpup);
     4640 +        hat_pcp_teardown(cpup);
4047 4641          x86pte_cpu_fini(cpup);
4048 4642  }
4049 4643  
4050 4644  /*
4051 4645   * Function called after all CPUs are brought online.
4052 4646   * Used to remove low address boot mappings.
4053 4647   */
4054 4648  void
4055 4649  clear_boot_mappings(uintptr_t low, uintptr_t high)
4056 4650  {
↓ open down ↓ 295 lines elided ↑ open up ↑
4352 4946  {}
4353 4947  
4354 4948  /*ARGSUSED*/
4355 4949  void
4356 4950  hat_kpm_mseghash_update(pgcnt_t inx, struct memseg *msp)
4357 4951  {}
4358 4952  
4359 4953  #ifndef __xpv
4360 4954  void
4361 4955  hat_kpm_addmem_mseg_update(struct memseg *msp, pgcnt_t nkpmpgs,
4362      -        offset_t kpm_pages_off)
     4956 +    offset_t kpm_pages_off)
4363 4957  {
4364 4958          _NOTE(ARGUNUSED(nkpmpgs, kpm_pages_off));
4365 4959          pfn_t base, end;
4366 4960  
4367 4961          /*
4368 4962           * kphysm_add_memory_dynamic() does not set nkpmpgs
4369 4963           * when page_t memory is externally allocated.  That
4370 4964           * code must properly calculate nkpmpgs in all cases
4371 4965           * if nkpmpgs needs to be used at some point.
4372 4966           */
↓ open down ↓ 38 lines elided ↑ open up ↑
4411 5005  
4412 5006  void
4413 5007  hat_kpm_delmem_mseg_update(struct memseg *msp, struct memseg **mspp)
4414 5008  {
4415 5009          _NOTE(ARGUNUSED(msp, mspp));
4416 5010          ASSERT(0);
4417 5011  }
4418 5012  
4419 5013  void
4420 5014  hat_kpm_split_mseg_update(struct memseg *msp, struct memseg **mspp,
4421      -        struct memseg *lo, struct memseg *mid, struct memseg *hi)
     5015 +    struct memseg *lo, struct memseg *mid, struct memseg *hi)
4422 5016  {
4423 5017          _NOTE(ARGUNUSED(msp, mspp, lo, mid, hi));
4424 5018          ASSERT(0);
4425 5019  }
4426 5020  
4427 5021  /*
4428 5022   * Walk the memsegs chain, applying func to each memseg span.
4429 5023   */
4430 5024  void
4431 5025  hat_kpm_walk(void (*func)(void *, void *, size_t), void *arg)
↓ open down ↓ 51 lines elided ↑ open up ↑
4483 5077          ASSERT(IS_P2ALIGNED((uintptr_t)addr, MMU_PAGESIZE));
4484 5078          XPV_DISALLOW_MIGRATE();
4485 5079          ht = htable_lookup(hat, (uintptr_t)addr, 0);
4486 5080          ASSERT(ht != NULL);
4487 5081          ASSERT(ht->ht_busy >= 2);
4488 5082          htable_release(ht);
4489 5083          htable_release(ht);
4490 5084          XPV_ALLOW_MIGRATE();
4491 5085  }
4492 5086  #endif  /* __xpv */
     5087 +
     5088 +/*
     5089 + * Helper function to punch in a mapping that we need with the specified
     5090 + * attributes.
     5091 + */
     5092 +void
     5093 +hati_cpu_punchin(cpu_t *cpu, uintptr_t va, uint_t attrs)
     5094 +{
     5095 +        int ret;
     5096 +        pfn_t pfn;
     5097 +        hat_t *cpu_hat = cpu->cpu_hat_info->hci_user_hat;
     5098 +
     5099 +        ASSERT3S(kpti_enable, ==, 1);
     5100 +        ASSERT3P(cpu_hat, !=, NULL);
     5101 +        ASSERT3U(cpu_hat->hat_flags & HAT_PCP, ==, HAT_PCP);
     5102 +        ASSERT3U(va & MMU_PAGEOFFSET, ==, 0);
     5103 +
     5104 +        pfn = hat_getpfnum(kas.a_hat, (caddr_t)va);
     5105 +        VERIFY3U(pfn, !=, PFN_INVALID);
     5106 +
     5107 +        /*
     5108 +         * We purposefully don't try to find the page_t. This means that this
     5109 +         * will be marked PT_NOCONSIST; however, given that this is essentially
     5110 +         * a static mapping, we should be relatively OK.
     5111 +         */
     5112 +        attrs |= HAT_STORECACHING_OK;
     5113 +        ret = hati_load_common(cpu_hat, va, NULL, attrs, 0, 0, pfn);
     5114 +        VERIFY3S(ret, ==, 0);
     5115 +}
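
A hypothetical caller, to illustrate the interface: mapping one page of per-CPU data read-only into this CPU's user HAT so that it remains reachable after the switch to the user %cr3. The symbol and function names here are made up for the example; real callers pass their own kernel virtual addresses and PROT_* attributes.

extern char ex_percpu_tramp_data[];     /* hypothetical page-aligned kernel page */

static void
ex_punchin_tramp_data(cpu_t *cpu)
{
        hati_cpu_punchin(cpu, (uintptr_t)ex_percpu_tramp_data, PROT_READ);
}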
    