11967 need TAA mitigation
Portions contributed by: Robert Mustacchi <rm@fingolfin.org>
Reviewed by: Dan McDonald <danmcd@joyent.com>

          --- old/usr/src/uts/i86pc/os/cpuid.c
          +++ new/usr/src/uts/i86pc/os/cpuid.c
(24 lines elided)
  25   25   * Copyright 2014 Josef "Jeff" Sipek <jeffpc@josefsipek.net>
  26   26   */
  27   27  /*
  28   28   * Copyright (c) 2010, Intel Corporation.
  29   29   * All rights reserved.
  30   30   */
  31   31  /*
  32   32   * Portions Copyright 2009 Advanced Micro Devices, Inc.
  33   33   */
  34   34  /*
  35      - * Copyright 2019 Joyent, Inc.
       35 + * Copyright 2020 Joyent, Inc.
  36   36   */
  37   37  
  38   38  /*
  39   39   * CPU Identification logic
  40   40   *
  41   41   * The purpose of this file and its companion, cpuid_subr.c, is to help deal
  42   42   * with the identification of CPUs, their features, and their topologies. More
  43   43   * specifically, this file helps drive the following:
  44   44   *
  45   45   * 1. Enumeration of features of the processor which are used by the kernel to
(1133 lines elided)
1179 1179   * particular, everything we've discussed above is only valid for a single
1180 1180   * thread executing on a core. In the case where you have hyper-threading
1181 1181   * present, this attack can be performed between threads. The theoretical fix
1182 1182   * for this is to ensure that both threads are always in the same security
1183 1183   * domain. This means that they are executing in the same ring and mutually
1184 1184   * trust each other. Practically speaking, this would mean that a system call
1185 1185   * would have to issue an inter-processor interrupt (IPI) to the other thread.
 1186 1186   * Rather than implement this, we recommend disabling hyper-threading
 1187 1187   * through the use of psradm -aS.
1188 1188   *
     1189 + * TSX ASYNCHRONOUS ABORT
     1190 + *
     1191 + * TSX Asynchronous Abort (TAA) is another side-channel vulnerability that
     1192 + * behaves like MDS, but leverages Intel's transactional instructions as another
     1193 + * vector. Effectively, when a transaction hits one of these cases (unmapped
     1194 + * page, various cache snoop activity, etc.) then the same data can be exposed
     1195 + * as in the case of MDS. This means that you can attack your twin.
     1196 + *
     1197 + * Intel has described that there are two different ways that we can mitigate
     1198 + * this problem on affected processors:
     1199 + *
     1200 + *   1) We can use the same techniques used to deal with MDS. Flushing the
     1201 + *      microarchitectural buffers and disabling hyperthreading will mitigate
     1202 + *      this in the same way.
     1203 + *
     1204 + *   2) Using microcode to disable TSX.
     1205 + *
     1206 + * Now, most processors that are subject to MDS (as in they don't have MDS_NO in
     1207 + * the IA32_ARCH_CAPABILITIES MSR) will not receive microcode to disable TSX.
     1208 + * That's OK as we're already doing all such mitigations. On the other hand,
     1209 + * processors with MDS_NO are all supposed to receive microcode updates that
     1210 + * enumerate support for disabling TSX. In general, we'd rather use this method
     1211 + * when available as it doesn't require disabling hyperthreading to be
     1212 + * effective. Currently, we rely on microcode for processors that
     1213 + * enumerate MDS_NO.
     1214 + *
     1215 + * The microcode features are enumerated as part of the IA32_ARCH_CAPABILITIES.
     1216 + * When bit 7 (IA32_ARCH_CAP_TSX_CTRL) is present, then we are given two
     1217 + * different powers. The first allows us to cause all transactions to
     1218 + * immediately abort. The second gives us a means of disabling TSX completely,
     1219 + * which includes removing it from cpuid. If we have support for this in
     1220 + * microcode during the first cpuid pass, then we'll disable TSX completely such
     1221 + * that user land never has a chance to observe the bit. However, if we are late
     1222 + * loading the microcode, then we must use the functionality to cause
     1223 + * transactions to automatically abort. This is necessary for user land's sake.
     1224 + * Once a program sees a cpuid bit, it must not be taken away.
     1225 + *
     1226 + * We track whether or not we should do this based on what cpuid pass we're in.
     1227 + * Whenever we hit cpuid_scan_security() on the boot CPU and we're still on pass
     1228 + * 1 of the cpuid logic, then we can completely turn off TSX. Notably this
     1229 + * should happen twice. Once in the normal cpuid_pass1() code and then a second
     1230 + * time after we do the initial microcode update.
     1231 + *
     1232 + * If TAA has been fixed, then it will be enumerated in IA32_ARCH_CAPABILITIES
     1233 + * as TAA_NO. In such a case, we will still disable TSX: it's proven to be an
     1234 + * unfortunate feature in a number of ways, and taking the opportunity to
     1235 + * finally be able to turn it off is likely to be of benefit in the future.
     1236 + *
1189 1237   * SUMMARY
1190 1238   *
1191 1239   * The following table attempts to summarize the mitigations for various issues
1192 1240   * and what's done in various places:
1193 1241   *
1194 1242   *  - Spectre v1: Not currently mitigated
1195 1243   *  - swapgs: lfences after swapgs paths
1196 1244   *  - Spectre v2: Retpolines/RSB Stuffing or EIBRS if HW support
1197 1245   *  - Meltdown: Kernel Page Table Isolation
1198 1246   *  - Spectre v3a: Updated CPU microcode
1199 1247   *  - Spectre v4: Not currently mitigated
1200 1248   *  - SpectreRSB: SMEP and RSB Stuffing
1201 1249   *  - L1TF: spec_uarch_flush, SMT exclusion, requires microcode
1202      - *  - MDS: x86_md_clear, requires microcode, disabling hyper threading
     1250 + *  - MDS: x86_md_clear, requires microcode, disabling SMT
     1251 + *  - TAA: x86_md_clear and disabling SMT OR microcode and disabling TSX
1203 1252   *
1204 1253   * The following table indicates the x86 feature set bits that indicate that a
1205 1254   * given problem has been solved or a notable feature is present:
1206 1255   *
1207 1256   *  - RDCL_NO: Meltdown, L1TF, MSBDS subset of MDS
1208 1257   *  - MDS_NO: All forms of MDS
     1258 + *  - TAA_NO: TAA
1209 1259   */
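
The IA32_TSX_CTRL behavior described above can be modeled as a small standalone helper. This is an illustrative sketch, not part of the patch: `tsx_ctrl_newval` is a hypothetical name, though the MSR bit positions (bit 0 = RTM_DISABLE, bit 1 = TSX_CPUID_CLEAR) follow Intel's published definition of IA32_TSX_CTRL (MSR 0x122).

```c
#include <stdint.h>

/* Intel IA32_TSX_CTRL (MSR 0x122) control bits. */
#define TSX_CTRL_RTM_DISABLE    (1ULL << 0)  /* force all RTM transactions to abort */
#define TSX_CTRL_CPUID_CLEAR    (1ULL << 1)  /* hide TSX from CPUID */

/*
 * Compute the new MSR value for a given strategy. Fully disabling TSX
 * (clearing the CPUID bit too) is only safe before user land has had a
 * chance to observe the CPUID bit; after that, only force-abort is used.
 */
static uint64_t
tsx_ctrl_newval(uint64_t cur, int full_disable)
{
        cur |= TSX_CTRL_RTM_DISABLE;
        if (full_disable)
                cur |= TSX_CTRL_CPUID_CLEAR;
        return (cur);
}
```

In the real code the value is read with rdmsr(), OR'd, and written back with wrmsr() on every hardware thread, since the MSR is per-thread state.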
1210 1260  
1211 1261  #include <sys/types.h>
1212 1262  #include <sys/archsystm.h>
1213 1263  #include <sys/x86_archext.h>
1214 1264  #include <sys/kmem.h>
1215 1265  #include <sys/systm.h>
1216 1266  #include <sys/cmn_err.h>
1217 1267  #include <sys/sunddi.h>
1218 1268  #include <sys/sunndi.h>
(36 lines elided)
1255 1305          X86_SPECTREV2_RETPOLINE,
1256 1306          X86_SPECTREV2_RETPOLINE_AMD,
1257 1307          X86_SPECTREV2_ENHANCED_IBRS,
1258 1308          X86_SPECTREV2_DISABLED
1259 1309  } x86_spectrev2_mitigation_t;
1260 1310  
1261 1311  uint_t x86_disable_spectrev2 = 0;
1262 1312  static x86_spectrev2_mitigation_t x86_spectrev2_mitigation =
1263 1313      X86_SPECTREV2_RETPOLINE;
1264 1314  
     1315 +/*
     1316 + * The mitigation status for TAA:
     1317 + * X86_TAA_NOTHING -- no mitigation available for TAA side-channels
     1318 + * X86_TAA_DISABLED -- mitigation disabled via x86_disable_taa
     1319 + * X86_TAA_MD_CLEAR -- MDS mitigation also suffices for TAA
     1320 + * X86_TAA_TSX_FORCE_ABORT -- transactions are forced to abort
     1321 + * X86_TAA_TSX_DISABLE -- force abort transactions and hide from CPUID
     1322 + * X86_TAA_HW_MITIGATED -- TSX potentially active but H/W not TAA-vulnerable
     1323 + */
     1324 +typedef enum {
     1325 +        X86_TAA_NOTHING,
     1326 +        X86_TAA_DISABLED,
     1327 +        X86_TAA_MD_CLEAR,
     1328 +        X86_TAA_TSX_FORCE_ABORT,
     1329 +        X86_TAA_TSX_DISABLE,
     1330 +        X86_TAA_HW_MITIGATED
     1331 +} x86_taa_mitigation_t;
     1332 +
     1333 +uint_t x86_disable_taa = 0;
     1334 +static x86_taa_mitigation_t x86_taa_mitigation = X86_TAA_NOTHING;
     1335 +
1265 1336  uint_t pentiumpro_bug4046376;
1266 1337  
1267 1338  uchar_t x86_featureset[BT_SIZEOFMAP(NUM_X86_FEATURES)];
1268 1339  
1269 1340  static char *x86_feature_names[NUM_X86_FEATURES] = {
1270 1341          "lgpg",
1271 1342          "tsc",
1272 1343          "msr",
1273 1344          "mtrr",
1274 1345          "pge",
(81 lines elided)
1356 1427          "monitorx",
1357 1428          "clzero",
1358 1429          "xop",
1359 1430          "fma4",
1360 1431          "tbm",
1361 1432          "avx512_vnni",
1362 1433          "amd_pcec",
1363 1434          "mb_clear",
1364 1435          "mds_no",
1365 1436          "core_thermal",
1366      -        "pkg_thermal"
     1437 +        "pkg_thermal",
     1438 +        "tsx_ctrl",
     1439 +        "taa_no"
1367 1440  };
1368 1441  
1369 1442  boolean_t
1370 1443  is_x86_feature(void *featureset, uint_t feature)
1371 1444  {
1372 1445          ASSERT(feature < NUM_X86_FEATURES);
1373 1446          return (BT_TEST((ulong_t *)featureset, feature));
1374 1447  }
1375 1448  
1376 1449  void
(1318 lines elided)
2695 2768                  val = 0;
2696 2769          }
2697 2770          no_trap();
2698 2771  
2699 2772          if ((val & AMD_DECODE_CONFIG_LFENCE_DISPATCH) != 0)
2700 2773                  return (B_TRUE);
2701 2774          return (B_FALSE);
2702 2775  }
2703 2776  #endif  /* !__xpv */
2704 2777  
     2778 +/*
     2779 + * Determine how we should mitigate TAA or if we need to. Regardless of TAA, if
     2780 + * we can disable TSX, we do so.
     2781 + *
     2782 + * This determination is done only on the boot CPU, potentially after loading
     2783 + * updated microcode.
     2784 + */
2705 2785  static void
     2786 +cpuid_update_tsx(cpu_t *cpu, uchar_t *featureset)
     2787 +{
     2788 +        struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;
     2789 +
     2790 +        VERIFY(cpu->cpu_id == 0);
     2791 +
     2792 +        if (cpi->cpi_vendor != X86_VENDOR_Intel) {
     2793 +                x86_taa_mitigation = X86_TAA_HW_MITIGATED;
     2794 +                return;
     2795 +        }
     2796 +
     2797 +        if (x86_disable_taa) {
     2798 +                x86_taa_mitigation = X86_TAA_DISABLED;
     2799 +                return;
     2800 +        }
     2801 +
     2802 +        /*
     2803 +         * If we do not have the ability to disable TSX, then our only
     2804 +         * mitigation options are in hardware (TAA_NO), or by using our existing
     2805 +         * MDS mitigation as described above.  The latter relies upon us having
     2806 + * configured MDS mitigations correctly! This includes disabling SMT if
     2807 + * we want cross-CPU-thread protection.
     2808 +         */
     2809 +        if (!is_x86_feature(featureset, X86FSET_TSX_CTRL)) {
     2810 +                /*
     2811 +                 * It's not clear whether any parts will enumerate TAA_NO
     2812 +                 * *without* TSX_CTRL, but let's mark it as such if we see this.
     2813 +                 */
     2814 +                if (is_x86_feature(featureset, X86FSET_TAA_NO)) {
     2815 +                        x86_taa_mitigation = X86_TAA_HW_MITIGATED;
     2816 +                        return;
     2817 +                }
     2818 +
     2819 +                if (is_x86_feature(featureset, X86FSET_MD_CLEAR) &&
     2820 +                    !is_x86_feature(featureset, X86FSET_MDS_NO)) {
     2821 +                        x86_taa_mitigation = X86_TAA_MD_CLEAR;
     2822 +                } else {
     2823 +                        x86_taa_mitigation = X86_TAA_NOTHING;
     2824 +                }
     2825 +                return;
     2826 +        }
     2827 +
     2828 +        /*
     2829 +         * We have TSX_CTRL, but we can only fully disable TSX if we're early
     2830 +         * enough in boot.
     2831 +         *
     2832 +         * Otherwise, we'll fall back to causing transactions to abort as our
     2833 +         * mitigation. TSX-using code will always take the fallback path.
     2834 +         */
     2835 +        if (cpi->cpi_pass < 4) {
     2836 +                x86_taa_mitigation = X86_TAA_TSX_DISABLE;
     2837 +        } else {
     2838 +                x86_taa_mitigation = X86_TAA_TSX_FORCE_ABORT;
     2839 +        }
     2840 +}
     2841 +
     2842 +static void
     2843 +cpuid_apply_tsx(x86_taa_mitigation_t taa)
     2844 +{
     2845 +        uint64_t val;
     2846 +
     2847 +        switch (taa) {
     2848 +        case X86_TAA_TSX_DISABLE:
     2849 +                val = rdmsr(MSR_IA32_TSX_CTRL);
     2850 +                val |= IA32_TSX_CTRL_CPUID_CLEAR | IA32_TSX_CTRL_RTM_DISABLE;
     2851 +                wrmsr(MSR_IA32_TSX_CTRL, val);
     2852 +                break;
     2853 +        case X86_TAA_TSX_FORCE_ABORT:
     2854 +                val = rdmsr(MSR_IA32_TSX_CTRL);
     2855 +                val |= IA32_TSX_CTRL_RTM_DISABLE;
     2856 +                wrmsr(MSR_IA32_TSX_CTRL, val);
     2857 +                break;
     2858 +        case X86_TAA_HW_MITIGATED:
     2859 +        case X86_TAA_MD_CLEAR:
     2860 +        case X86_TAA_DISABLED:
     2861 +        case X86_TAA_NOTHING:
     2862 +                break;
     2863 +        }
     2864 +}
     2865 +
     2866 +static void
2706 2867  cpuid_scan_security(cpu_t *cpu, uchar_t *featureset)
2707 2868  {
2708 2869          struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;
2709 2870          x86_spectrev2_mitigation_t v2mit;
2710 2871  
2711 2872          if (cpi->cpi_vendor == X86_VENDOR_AMD &&
2712 2873              cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_8) {
2713 2874                  if (cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_IBPB)
2714 2875                          add_x86_feature(featureset, X86FSET_IBPB);
2715 2876                  if (cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_IBRS)
(68 lines elided)
2784 2945                                              X86FSET_L1D_VM_NO);
2785 2946                                  }
2786 2947                                  if (reg & IA32_ARCH_CAP_SSB_NO) {
2787 2948                                          add_x86_feature(featureset,
2788 2949                                              X86FSET_SSB_NO);
2789 2950                                  }
2790 2951                                  if (reg & IA32_ARCH_CAP_MDS_NO) {
2791 2952                                          add_x86_feature(featureset,
2792 2953                                              X86FSET_MDS_NO);
2793 2954                                  }
     2955 +                                if (reg & IA32_ARCH_CAP_TSX_CTRL) {
     2956 +                                        add_x86_feature(featureset,
     2957 +                                            X86FSET_TSX_CTRL);
     2958 +                                }
     2959 +                                if (reg & IA32_ARCH_CAP_TAA_NO) {
     2960 +                                        add_x86_feature(featureset,
     2961 +                                            X86FSET_TAA_NO);
     2962 +                                }
2794 2963                          }
2795 2964                          no_trap();
2796 2965                  }
2797 2966  #endif  /* !__xpv */
2798 2967  
2799 2968                  if (ecp->cp_edx & CPUID_INTC_EDX_7_0_SSBD)
2800 2969                          add_x86_feature(featureset, X86FSET_SSBD);
2801 2970  
2802 2971                  if (ecp->cp_edx & CPUID_INTC_EDX_7_0_FLUSH_CMD)
2803 2972                          add_x86_feature(featureset, X86FSET_FLUSH_CMD);
2804 2973          }
2805 2974  
     2975 +        /*
     2976 +         * Take care of certain mitigations on the non-boot CPU. The boot CPU
     2977 +         * will have already run this function and determined what we need to
     2978 +         * do. This gives us a hook for per-HW thread mitigations such as
     2979 +         * enhanced IBRS, or disabling TSX.  For TSX disabling, we need to be
     2980 +         * careful that we've had a chance to load ucode that enables the new
     2981 +         * MSRs.
     2982 +         */
2806 2983          if (cpu->cpu_id != 0) {
2807 2984                  if (x86_spectrev2_mitigation == X86_SPECTREV2_ENHANCED_IBRS) {
2808 2985                          cpuid_enable_enhanced_ibrs();
2809 2986                  }
     2987 +
     2988 +                if (cpi->cpi_pass >= 1)
     2989 +                        cpuid_apply_tsx(x86_taa_mitigation);
2810 2990                  return;
2811 2991          }
2812 2992  
2813 2993          /*
2814 2994           * Go through and initialize various security mechanisms that we should
2815      -         * only do on a single CPU. This includes Spectre V2, L1TF, and MDS.
     2995 +         * only do on a single CPU. This includes Spectre V2, L1TF, MDS, and
     2996 +         * TAA.
2816 2997           */
2817 2998  
2818 2999          /*
2819 3000           * By default we've come in with retpolines enabled. Check whether we
2820 3001           * should disable them or enable enhanced IBRS. RSB stuffing is enabled
2821 3002           * by default, but disabled if we are using enhanced IBRS.
2822 3003           */
2823 3004          if (x86_disable_spectrev2 != 0) {
2824 3005                  v2mit = X86_SPECTREV2_DISABLED;
2825 3006          } else if (is_x86_feature(featureset, X86FSET_IBRS_ALL)) {
(29 lines elided)
2855 3036           * Update whether or not we need to be taking explicit action against
2856 3037           * MDS.
2857 3038           */
2858 3039          cpuid_update_md_clear(cpu, featureset);
2859 3040  
2860 3041          /*
2861 3042           * Determine whether SMT exclusion is required and whether or not we
2862 3043           * need to perform an l1d flush.
2863 3044           */
2864 3045          cpuid_update_l1d_flush(cpu, featureset);
     3046 +
     3047 +        /*
     3048 +         * Determine what our mitigation strategy should be for TAA and then
     3049 +         * also apply TAA mitigations.
     3050 +         */
     3051 +        cpuid_update_tsx(cpu, featureset);
     3052 +        cpuid_apply_tsx(x86_taa_mitigation);
2865 3053  }
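
The decision tree that cpuid_update_tsx() implements can be captured as a pure function over the relevant feature bits, which makes the precedence easy to check. This is a hypothetical mirror for illustration (names like `taa_select` are not in the patch); it encodes the same ordering: vendor, then the x86_disable_taa knob, then TSX_CTRL availability.

```c
#include <stdbool.h>

typedef enum {
        TAA_NOTHING, TAA_DISABLED, TAA_MD_CLEAR,
        TAA_TSX_FORCE_ABORT, TAA_TSX_DISABLE, TAA_HW_MITIGATED
} taa_mit_t;

/*
 * Select a TAA mitigation. "pass" is the cpuid pass on the boot CPU:
 * before pass 4 we may still hide TSX from CPUID entirely; after that,
 * user land may have seen the bit, so we can only force transactions
 * to abort.
 */
static taa_mit_t
taa_select(bool intel, bool disabled, bool tsx_ctrl, bool taa_no,
    bool md_clear, bool mds_no, int pass)
{
        if (!intel)
                return (TAA_HW_MITIGATED);
        if (disabled)
                return (TAA_DISABLED);
        if (!tsx_ctrl) {
                if (taa_no)
                        return (TAA_HW_MITIGATED);
                /* Fall back to the MDS flush, which only exists pre-MDS_NO. */
                if (md_clear && !mds_no)
                        return (TAA_MD_CLEAR);
                return (TAA_NOTHING);
        }
        return (pass < 4 ? TAA_TSX_DISABLE : TAA_TSX_FORCE_ABORT);
}
```

Note the MDS_NO-with-MD_CLEAR-but-no-TSX_CTRL corner maps to TAA_NOTHING, matching the comment in the patch that such parts are expected to receive TSX_CTRL microcode instead.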
2866 3054  
2867 3055  /*
2868 3056   * Setup XFeature_Enabled_Mask register. Required by xsave feature.
2869 3057   */
2870 3058  void
2871 3059  setup_xfem(void)
2872 3060  {
2873 3061          uint64_t flags = XFEATURE_LEGACY_FP;
2874 3062  
(4482 lines elided)