12144 Convert Intro(7) to mandoc
12145 Convert cpr(7) to mandoc
12146 Convert ibmf(7) to mandoc
12147 Convert FSS(7) to mandoc
Reviewed by: Peter Tribble <peter.tribble@gmail.com>
   1 FSS(7)                   Device and Network Interfaces                  FSS(7)
   2 
   3 
   4 
   5 NAME
   6        FSS - Fair share scheduler
   7 
   8 DESCRIPTION
   9        The fair share scheduler (FSS) guarantees application performance by
  10        explicitly allocating shares of CPU resources to projects. A share
  11        indicates a project's entitlement to available CPU resources. Because
  12        shares are meaningful only in comparison with other projects' shares,
  13        the absolute quantity of shares is not important. Any number that is in
  14        proportion with the desired CPU entitlement can be used.
  15 
  16 
  17        The goals of the FSS scheduler differ from the traditional time-sharing
  18        scheduling class (TS). In addition to scheduling individual LWPs, the
  19        FSS scheduler schedules projects against each other, making it
  20        impossible for any project to acquire more CPU cycles simply by running
  21        more processes concurrently.
  22 
  23 
  24        A project's entitlement is calculated by FSS independently for
  25        each processor set to which the project has processes bound.
  26        If a project is running on more than one processor set, it can have
  27        different entitlements on every set. A project's entitlement is defined
  28        as a ratio between the number of shares given to a project and the sum
  29        of shares of all active projects running on the same processor set. An
  30        active project is one that has at least one running or runnable
  31        process. Entitlements are recomputed whenever any project becomes
  32        active or inactive, or whenever the number of shares is changed.
  33 
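Purely as an illustration (not part of the manual text), the entitlement ratio described above can be checked with ordinary shell arithmetic; the project names and share counts below are invented:

```shell
# Invented shares for three active projects on one processor set
x_files=10
y_files=20
z_files=10
total=$((x_files + y_files + z_files))
# entitlement = project's shares / sum of shares of all active projects
awk -v s="$x_files" -v t="$total" \
    'BEGIN { printf "x-files entitlement: %.0f%%\n", 100 * s / t }'
# prints: x-files entitlement: 25%
```

If y-files became inactive, the same project's entitlement would be recomputed as 10/20, or 50%.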
  34 
  35        Processor sets represent virtual machines in the FSS scheduling class
  36        and processes are scheduled independently in each processor set. That
  37        is, processes compete with each other only if they are running on the
  38        same processor set.  When a processor set is destroyed, all processes
  39        that were bound to it are moved to the default processor set, which
  40        always exists. Empty processor sets (that is, sets without processors
  41        in them) have no impact on the FSS scheduler behavior.
  42 
  43 
  44        If a processor set contains a mix of TS/IA and FSS processes, the
  45        fairness of the FSS scheduling class can be compromised because these
  46        classes use the same range of priorities. Fairness is most
  47        significantly affected if processes running in the TS scheduling class
  48        are CPU-intensive and are bound to processors within the processor set.
  49        As a result, you should avoid having processes from TS/IA and FSS
  50        classes share the same processor set. RT and FSS processes use disjoint
  51        priority ranges and therefore can share processor sets.
  52 
  53 
  54        As projects execute, their CPU usage is accumulated over time. The FSS
  55        scheduler periodically decays the CPU usage of every project by
  56        multiplying it by a decay factor, ensuring that more recent CPU usage
  57        has greater weight when taken into account for scheduling. The FSS
  58        scheduler continually adjusts priorities of all processes to make each
  59        project's relative CPU usage converge with its entitlement.
  60 
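As a sketch only, the periodic decay can be pictured as repeated multiplication by a constant factor. The factor 0.96 and the interval count here are invented for illustration, not the values the kernel actually uses:

```shell
usage=1000   # accumulated CPU usage, arbitrary units
decay=0.96   # hypothetical decay factor; the real value is internal to FSS
awk -v u="$usage" -v d="$decay" \
    'BEGIN { for (i = 0; i < 3; i++) u *= d; printf "usage after 3 decays: %.1f\n", u }'
# prints: usage after 3 decays: 884.7
```

Older usage thus contributes geometrically less to the scheduling decision than usage accumulated in the most recent intervals.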
  61 
  62        While FSS is designed to fairly allocate cycles over a long-term time
  63        period, it is possible that projects will not receive their allocated
  64        shares' worth of CPU cycles due to uneven demand. This makes one-shot,
  65        instantaneous analysis of FSS performance data unreliable.
  66 
  67 
  68        Note that a share is not the same as utilization. A project may be
  69        allocated 50% of the system, although on average it uses just 20%.
  70        Shares serve to cap a project's CPU usage only when there is
  71        competition from other projects running on the same processor set. When
  72        there is no competition, utilization may be larger than entitlement
  73        based on shares. Allocating a small share to a busy project slows it
  74        down but does not prevent it from completing its work if the system is
  75        not saturated.
  76 
  77 
  78        The configuration of CPU shares is managed by the name server as a
  79        property of the project(4) database. In the following example, an entry
  80        in the /etc/project file sets the number of shares for project x-files
  81        to 10:
  82 
  83          x-files:100::::project.cpu-shares=(privileged,10,none)
  84 
  85 
  86 
  87        Projects with an undefined number of shares are given one share each. This
  88        means that such projects are treated with equal importance. Projects
  89        with 0 shares only run when there are no projects with non-zero shares
  90        competing for the same processor set. The maximum number of shares that
  91        can be assigned to one project is 65535.
  92 
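The share rules above can be sketched as a small shell function; this is an illustration only, and the helper name and values are invented:

```shell
# Hypothetical sketch of the share rules described above
normalize() {
    s=$1
    [ -z "$s" ] && s=1              # undefined number of shares -> one share
    [ "$s" -gt 65535 ] && s=65535   # 65535 is the per-project maximum
    echo "$s"
}
normalize ""       # prints 1
normalize 70000    # prints 65535
normalize 10       # prints 10
```

Note that a value of 0 is left as-is: a 0-share project is still schedulable, but only when no non-zero-share project competes for the same processor set.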
  93 
  94        You can use the prctl(1) command to determine the current share
  95        assignment for a given project:
  96 
  97          $ prctl -n project.cpu-shares -i project x-files
  98 
  99 
 100 
 101        or to change the number of shares if you have root privileges:
 102 
 103          # prctl -r -n project.cpu-shares -v 5 -i project x-files
 104 
 105 
 106 
 107        See the prctl(1) man page for additional information on how to modify
 108        and examine resource controls associated with active processes, tasks,
 109        or projects on the system. See resource_controls(5) for a description
 110        of the resource controls supported in the current release of the
 111        Solaris operating system.
 112 
 113 
 114        By default, project system (project ID 0) includes all system daemons
 115        started by initialization scripts and has an "unlimited" amount of
 116        shares. That is, it is always scheduled first no matter how many shares
 117        are given to other projects.
 118 
 119 
 120        The following command sets FSS as the default scheduler for the system:
 121 
 122          # dispadmin -d FSS
 123 
 124 
 125 
 126        This change will take effect on the next reboot. Alternatively, you can
 127        move processes from the time-share scheduling class (as well as the
 128        special case of init) into the FSS class without changing your default
 129        scheduling class and rebooting by becoming root, and then using the
 130        priocntl(1) command, as shown in the following example:
 131 
 132          # priocntl -s -c FSS -i class TS
 133          # priocntl -s -c FSS -i pid 1
 134 
 135 
 136 CONFIGURING SCHEDULER WITH DISPADMIN
 137        You can use the dispadmin(1M) command to examine and tune the FSS
 138        scheduler's time quantum value. Time quantum is the amount of time that
 139        a thread is allowed to run before it must relinquish the processor. The
 140        following example dumps the current time quantum for the fair share
 141        scheduler:
 142 
 143          $ dispadmin -g -c FSS
 144               #
 145               # Fair Share Scheduler Configuration
 146               #
 147               RES=1000
 148               #
 149               # Time Quantum
 150               #
 151               QUANTUM=110
 152 
 153 
 154 
 155        The value of QUANTUM represents a fraction of a second, with the
 156        fractional unit determined by the reciprocal of RES. With the
 157        default value of RES = 1000, the reciprocal of 1000 is .001, or
 158        milliseconds. Thus, by default, the QUANTUM value expresses the
 159        time quantum in milliseconds.
 160 
 161 
 162        If you change the RES value using dispadmin with the -r option,
 163        you also change the QUANTUM value. For example, changing RES from
 164        1000 to 100 changes a quantum of 110 into a quantum of 11. The
 165        fractional unit is different while the amount of time is the same.
 166 
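To make the unit arithmetic concrete (this is an illustration, not dispadmin output): the quantum in seconds is QUANTUM divided by RES, so the two settings discussed above describe the same amount of time:

```shell
# seconds = QUANTUM / RES, since QUANTUM is in units of 1/RES seconds
awk 'BEGIN {
    printf "RES=1000 QUANTUM=110 -> %.2f seconds\n", 110 / 1000
    printf "RES=100  QUANTUM=11  -> %.2f seconds\n",  11 / 100
}'
# prints: RES=1000 QUANTUM=110 -> 0.11 seconds
# prints: RES=100  QUANTUM=11  -> 0.11 seconds
```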
 167 
 168        You can use the -s option to change the time quantum value. Note that
 169        such changes are not preserved across reboots. Please refer to the
 170        dispadmin(1M) man page for additional information.
 171 
 172 
 173 SEE ALSO
 174        prctl(1), priocntl(1), dispadmin(1M), psrset(1M), priocntl(2),
 175        project(4), resource_controls(5)
 176 
 177 
 178        System Administration Guide:  Virtualization Using the Solaris
 179        Operating System
 180 
 181 
 182 
 183                                  May 13, 2017                           FSS(7)