FSS(7)                  Device and Network Interfaces                  FSS(7)

NAME
     FSS - Fair share scheduler

DESCRIPTION
     The fair share scheduler (FSS) guarantees application performance by
     explicitly allocating shares of CPU resources to projects.  A share
     indicates a project's entitlement to available CPU resources.  Because
     shares are meaningful only in comparison with other projects' shares,
     the absolute quantity of shares is not important.  Any number that is
     in proportion with the desired CPU entitlement can be used.

     The goals of the FSS scheduler differ from those of the traditional
     time-sharing scheduling class (TS).  In addition to scheduling
     individual LWPs, the FSS scheduler schedules projects against each
     other, making it impossible for any project to acquire more CPU cycles
     simply by running more processes concurrently.

     FSS calculates a project's entitlement independently for each
     processor set to which the project's processes are bound.  If a
     project is running on more than one processor set, it can have a
     different entitlement on every set.  A project's entitlement is
     defined as the ratio between the number of shares given to the project
     and the sum of shares of all active projects running on the same
     processor set.  An active project is one that has at least one running
     or runnable process.  Entitlements are recomputed whenever any project
     becomes active or inactive, or whenever the number of shares is
     changed.

     Processor sets represent virtual machines in the FSS scheduling class,
     and processes are scheduled independently in each processor set.  That
     is, processes compete with each other only if they are running on the
     same processor set.  When a processor set is destroyed, all processes
     that were bound to it are moved to the default processor set, which
     always exists.  Empty processor sets (that is, sets without processors
     in them) have no impact on FSS scheduler behavior.
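     The per-processor-set entitlement ratio described above can be
     sketched as a simplified model (this is illustrative Python, not the
     kernel implementation; the project names and share counts are made
     up):

```python
def entitlement(shares, project):
    """Entitlement of `project` within one processor set.

    `shares` maps each active project on that set (one with at least
    one running or runnable process) to its number of CPU shares.
    The entitlement is the project's shares divided by the sum of
    shares of all active projects on the same processor set.
    """
    total = sum(shares.values())
    return shares[project] / total

# Illustrative active projects on one processor set.
active = {"x-files": 10, "batch": 10, "web": 20}
print(entitlement(active, "web"))   # 20 / 40 = 0.5

# When "batch" becomes inactive, entitlements are recomputed:
del active["batch"]
print(entitlement(active, "web"))   # 20 / 30, roughly 0.667
```

     The same project bound to a second processor set would get a separate
     entitlement there, computed only against the projects active on that
     set.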
     If a processor set contains a mix of TS/IA and FSS processes, the
     fairness of the FSS scheduling class can be compromised because these
     classes use the same range of priorities.  Fairness is most
     significantly affected if processes running in the TS scheduling class
     are CPU-intensive and are bound to processors within the processor
     set.  As a result, you should avoid having processes from the TS/IA
     and FSS classes share the same processor set.  RT and FSS processes
     use disjoint priority ranges and therefore can share processor sets.

     As projects execute, their CPU usage is accumulated over time.  The
     FSS scheduler periodically decays the CPU usage of every project by
     multiplying it with a decay factor, ensuring that more recent CPU
     usage has greater weight when taken into account for scheduling.  The
     FSS scheduler continually adjusts the priorities of all processes to
     make each project's relative CPU usage converge with its entitlement.

     While FSS is designed to fairly allocate cycles over a long-term time
     period, it is possible that projects will not receive their allocated
     shares' worth of CPU cycles due to uneven demand.  This makes
     one-shot, instantaneous analysis of FSS performance data unreliable.

     Note that share is not the same as utilization.  A project may be
     allocated 50% of the system, although on average it uses just 20%.
     Shares serve to cap a project's CPU usage only when there is
     competition from other projects running on the same processor set.
     When there is no competition, utilization may be larger than the
     entitlement based on shares.  Allocating a small share to a busy
     project slows it down but does not prevent it from completing its work
     if the system is not saturated.

     The configuration of CPU shares is managed by the name service as a
     property of the project(4) database.
     In the following example, an entry in the /etc/project file sets the
     number of shares for project x-files to 10:

           x-files:100::::project.cpu-shares=(privileged,10,none)

     Projects with an undefined number of shares are given one share each.
     This means that such projects are treated with equal importance.
     Projects with 0 shares run only when there are no projects with
     non-zero shares competing for the same processor set.  The maximum
     number of shares that can be assigned to one project is 65535.

     You can use the prctl(1) command to determine the current share
     assignment for a given project:

           $ prctl -n project.cpu-shares -i project x-files

     or to change the number of shares if you have root privileges:

           # prctl -r -n project.cpu-shares -v 5 -i project x-files

     See the prctl(1) man page for additional information on how to modify
     and examine resource controls associated with active processes, tasks,
     or projects on the system.  See resource_controls(5) for a description
     of the resource controls supported in the current release of the
     Solaris operating system.

     By default, the project system (project ID 0) includes all system
     daemons started by initialization scripts and has an "unlimited"
     number of shares.  That is, it is always scheduled first, no matter
     how many shares are given to other projects.

     The following command sets FSS as the default scheduler for the
     system:

           # dispadmin -d FSS

     This change takes effect on the next reboot.
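     The share rules above (undefined share counts default to one share,
     and zero-share projects yield to any competitor holding non-zero
     shares) can be sketched as a simplified model in Python; this is
     illustrative only, not the scheduler's code, and the project names are
     made up:

```python
def competing_projects(shares):
    """Which active projects compete for CPU on one processor set.

    `shares` maps active project names to their configured share
    count, with None for projects whose share count is undefined.
    Undefined counts default to one share.  Projects with 0 shares
    run only when no project with non-zero shares is competing on
    the same processor set.
    """
    effective = {p: (1 if s is None else s) for p, s in shares.items()}
    nonzero = {p: s for p, s in effective.items() if s > 0}
    return nonzero if nonzero else effective

# "idle" (0 shares) is excluded while "x-files" (10 shares) and
# "new" (undefined, so one share) are competing.
print(competing_projects({"x-files": 10, "idle": 0, "new": None}))

# Only zero-share projects are active, so they all run.
print(competing_projects({"idle": 0, "scratch": 0}))
```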
     Alternatively, you can move processes from the time-sharing scheduling
     class (as well as the special case of init) into the FSS class without
     changing your default scheduling class and rebooting.  Become root,
     and then use the priocntl(1) command, as shown in the following
     example:

           # priocntl -s -c FSS -i class TS
           # priocntl -s -c FSS -i pid 1

CONFIGURING SCHEDULER WITH DISPADMIN
     You can use the dispadmin(1M) command to examine and tune the FSS
     scheduler's time quantum value.  The time quantum is the amount of
     time that a thread is allowed to run before it must relinquish the
     processor.  The following example dumps the current time quantum for
     the fair share scheduler:

           $ dispadmin -g -c FSS
           #
           # Fair Share Scheduler Configuration
           #
           RES=1000
           #
           # Time Quantum
           #
           QUANTUM=110

     The QUANTUM value represents a fraction of a second, with the
     fractional unit determined by the reciprocal of RES: the quantum in
     seconds is QUANTUM/RES.  With the default value of RES=1000, the unit
     is 1/1000 of a second, or one millisecond.  Thus, by default, the
     QUANTUM value represents the time quantum in milliseconds; QUANTUM=110
     with RES=1000 is 110/1000 = 0.11 seconds (110 ms).

     If you change the RES value using dispadmin(1M) with the -r option,
     you also change the QUANTUM value.  For example, a quantum of 110 with
     a RES of 1000 becomes a quantum of 11 with a RES of 100.  The
     fractional unit is different while the amount of time is the same
     (11/100 = 110/1000 = 0.11 seconds).

     You can use the -s option to change the time quantum value.  Note that
     such changes are not preserved across reboots.  Refer to the
     dispadmin(1M) man page for additional information.

SEE ALSO
     prctl(1), priocntl(1), dispadmin(1M), psrset(1M), priocntl(2),
     project(4), resource_controls(5)

     System Administration Guide: Virtualization Using the Solaris
     Operating System

illumos                        December 17, 2019                      illumos