* NB: fixed boost allotment & ticks reset in mlfq.py
* ref: remzi-arpacidusseau/ostep-homework#18
* ref: remzi-arpacidusseau/ostep-homework#13

This program, `mlfq.py`, allows you to see how the MLFQ scheduler
presented in this chapter behaves. As before, you can use this to generate
problems for yourself using random seeds, or use it to construct a
carefully-designed experiment to see how MLFQ works under different
circumstances. To run the program, type:

```sh
prompt> ./mlfq.py
```

Use the help flag (-h) to see the options:

```sh
Usage: mlfq.py [options]

Options:
  -h, --help            show this help message and exit
  -s SEED, --seed=SEED  the random seed
  -n NUMQUEUES, --numQueues=NUMQUEUES
                        number of queues in MLFQ (if not using -Q)
  -q QUANTUM, --quantum=QUANTUM
                        length of time slice (if not using -Q)
  -Q QUANTUMLIST, --quantumList=QUANTUMLIST
                        length of time slice per queue level, specified as
                        x,y,z,... where x is the quantum length for the
                        highest-priority queue, y the next highest, and so
                        forth
  -j NUMJOBS, --numJobs=NUMJOBS
                        number of jobs in the system
  -m MAXLEN, --maxlen=MAXLEN
                        max run-time of a job (if random)
  -M MAXIO, --maxio=MAXIO
                        max I/O frequency of a job (if random)
  -B BOOST, --boost=BOOST
                        how often to boost the priority of all jobs back to
                        high priority (0 means never)
  -i IOTIME, --iotime=IOTIME
                        how long an I/O should last (fixed constant)
  -S, --stay            reset and stay at same priority level when issuing I/O
  -l JLIST, --jlist=JLIST
                        a comma-separated list of jobs to run, in the form
                        x1,y1,z1:x2,y2,z2:... where x is start time, y is run
                        time, and z is how often the job issues an I/O request
  -c                    compute answers for me
```

There are a few different ways to use the simulator. One way is to generate
some random jobs and see if you can figure out how they will behave given the
MLFQ scheduler. For example, if you wanted to create a randomly-generated
three-job workload, you would simply type:

```sh
prompt> ./mlfq.py -j 3
```

What you would then see is the specific problem definition:

```sh
Here is the list of inputs:
OPTIONS jobs 3
OPTIONS queues 3
OPTIONS quantum length for queue 2 is 10
OPTIONS quantum length for queue 1 is 10
OPTIONS quantum length for queue 0 is 10
OPTIONS boost 0
OPTIONS ioTime 0
OPTIONS stayAfterIO False

For each job, three defining characteristics are given:
  startTime : at what time does the job enter the system
  runTime   : the total CPU time needed by the job to finish
  ioFreq    : every ioFreq time units, the job issues an I/O
              (the I/O takes ioTime units to complete)

Job List:
  Job 0: startTime 0 - runTime 84 - ioFreq 7
  Job 1: startTime 0 - runTime 42 - ioFreq 2
  Job 2: startTime 0 - runTime 51 - ioFreq 4

Compute the execution trace for the given workloads.
If you would like, also compute the response and turnaround
times for each of the jobs.

Use the -c flag to get the exact results when you are finished.
```

This generates a random workload of three jobs (as specified), on the default
number of queues with a number of default settings. If you run again with the
solve flag on (-c), you'll see the same print out as above, plus the
following:

```sh
Execution Trace:

[ time 0 ] JOB BEGINS by JOB 0
[ time 0 ] JOB BEGINS by JOB 1
[ time 0 ] JOB BEGINS by JOB 2
[ time 0 ] Run JOB 0 at PRIORITY 2 [ TICKS 9 ALLOT 1 TIME 83 (of 84) ]
[ time 1 ] Run JOB 0 at PRIORITY 2 [ TICKS 8 ALLOT 1 TIME 82 (of 84) ]
[ time 2 ] Run JOB 0 at PRIORITY 2 [ TICKS 7 ALLOT 1 TIME 81 (of 84) ]
[ time 3 ] Run JOB 0 at PRIORITY 2 [ TICKS 6 ALLOT 1 TIME 80 (of 84) ]
[ time 4 ] Run JOB 0 at PRIORITY 2 [ TICKS 5 ALLOT 1 TIME 79 (of 84) ]
[ time 5 ] Run JOB 0 at PRIORITY 2 [ TICKS 4 ALLOT 1 TIME 78 (of 84) ]
[ time 6 ] Run JOB 0 at PRIORITY 2 [ TICKS 3 ALLOT 1 TIME 77 (of 84) ]
[ time 7 ] IO_START by JOB 0
IO DONE
[ time 7 ] Run JOB 1 at PRIORITY 2 [ TICKS 9 ALLOT 1 TIME 41 (of 42) ]
[ time 8 ] Run JOB 1 at PRIORITY 2 [ TICKS 8 ALLOT 1 TIME 40 (of 42) ]
[ time 9 ] Run JOB 1 at PRIORITY 2 [ TICKS 7 ALLOT 1 TIME 39 (of 42) ]
...

Final statistics:
  Job 0: startTime 0 - response 0 - turnaround 175
  Job 1: startTime 0 - response 7 - turnaround 191
  Job 2: startTime 0 - response 9 - turnaround 168

  Avg 2: startTime n/a - response 5.33 - turnaround 178.00
```

The trace shows exactly, on a millisecond-by-millisecond time scale, what the
scheduler decided to do. In this example, it begins by running Job 0 for 7 ms
until Job 0 issues an I/O; this is entirely predictable, as Job 0's I/O
frequency is set to 7 ms, meaning that every 7 ms it runs, it will issue an
I/O and wait for it to complete before continuing. At that point, the
scheduler switches to Job 1, which only runs 2 ms before issuing an I/O.
The scheduler prints the entire execution trace in this manner, and
finally also computes the response and turnaround times for each job
as well as an average.

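For instance, the averages in the final statistics above follow directly from
the per-job numbers: the average response time is (0 + 7 + 9) / 3 = 5.33 ms,
and the average turnaround time is (175 + 191 + 168) / 3 = 178.00 ms.
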
You can also control various other aspects of the simulation. For example, you
can specify how many queues you'd like to have in the system (-n) and what the
quantum length should be for all of those queues (-q); if you want even more
control and a different quantum length per queue, you can instead specify the
length of the quantum (time slice) for each queue with -Q, e.g., -Q 10,20,30
simulates a scheduler with three queues, with the highest-priority queue
having a 10-ms time slice, the next-highest a 20-ms time slice, and the
lowest-priority queue a 30-ms time slice.

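For example, the two invocations below (using an arbitrary two-job workload,
purely for illustration) show the difference between the uniform and the
per-queue forms:

```sh
prompt> ./mlfq.py -n 3 -q 20 -j 2 -c        # three queues, all with a 20-ms quantum
prompt> ./mlfq.py -Q 10,20,30 -j 2 -c       # per-queue quanta of 10, 20, and 30 ms
```
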
You can also separately control the time allotment per queue. This can be set
for all queues with -a, or per queue with -A, e.g., -A 20,40,60 sets the time
allotment per queue to 20 ms, 40 ms, and 60 ms, respectively.

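For example, combining the per-queue quantum and allotment flags described
above (again an arbitrary illustrative workload):

```sh
prompt> ./mlfq.py -Q 10,20,30 -A 20,40,60 -j 2 -c
```
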
If you are randomly generating jobs, you can also control how long they might
run for (-m), or how often they generate I/O (-M). If you, however, want more
control over the exact characteristics of the jobs running in the system, you
can use -l (lower-case L) or --jlist, which allows you to specify the exact
set of jobs you wish to simulate. The list is of the form:
x1,y1,z1:x2,y2,z2:... where x is the start time of the job, y is the run time
(i.e., how much CPU time it needs), and z the I/O frequency (i.e., after
running z ms, the job issues an I/O; if z is 0, no I/Os are issued).

For example, if you wanted to recreate the example in Figure 8.3,
you would specify a job list as follows:

```sh
prompt> ./mlfq.py --jlist 0,180,0:100,20,0 -q 10
```

Running the simulator in this way creates a three-level MLFQ, with each level
having a 10-ms time slice. Two jobs are created: Job 0, which starts at time 0,
runs for 180 ms total, and never issues an I/O; and Job 1, which starts at
100 ms, needs only 20 ms of CPU time to complete, and also never issues I/Os.

Finally, there are three more parameters of interest. The -B flag, if set to a
non-zero value, boosts all jobs to the highest-priority queue every N
milliseconds, when invoked as such:

```sh
prompt> ./mlfq.py -B N
```

The scheduler uses this feature to avoid starvation as discussed in the
chapter. However, it is off by default.

The -S flag invokes older Rules 4a and 4b, which means that if a job issues an
I/O before completing its time slice, it will return to that same priority
queue when it resumes execution, with its full time-slice intact. This
enables gaming of the scheduler.

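To see the effect, one option is to run the same I/O-heavy workload with and
without the flag and compare the resulting traces (the job list below is
arbitrary, chosen only for illustration):

```sh
prompt> ./mlfq.py -l 0,100,0:0,100,9 -i 1 -c      # default rule: I/O does not protect a job's priority
prompt> ./mlfq.py -l 0,100,0:0,100,9 -i 1 -S -c   # older Rules 4a/4b: full slice kept after I/O (gameable)
```
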
Finally, you can easily change how long an I/O lasts by using the -i flag. In
this simplistic model, each I/O takes a fixed amount of time: 5 milliseconds
by default, or whatever you set it to with this flag.

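For example, to make every I/O take 10 ms instead, with an otherwise random
two-job workload:

```sh
prompt> ./mlfq.py -i 10 -j 2 -c
```
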
You can also play around with whether jobs that just complete an I/O are moved
to the head of the queue they are in or to the back, with the -I flag. Check
it out, it's fun!

> 1. Run a few randomly-generated problems with just two jobs and two queues; compute the MLFQ execution trace for each. Make your life easier by limiting the length of each job and turning off I/Os.

```sh
$ ./mlfq.py --numJobs=2 --numQueues=2 --maxio=0 --seed=0
OPTIONS jobs 2
OPTIONS queues 2
OPTIONS allotments for queue 1 is 1
OPTIONS quantum length for queue 1 is 10
OPTIONS allotments for queue 0 is 1
OPTIONS quantum length for queue 0 is 10
OPTIONS boost 0
OPTIONS ioTime 5
OPTIONS stayAfterIO False
OPTIONS iobump False

Job 0: startTime 0 - runTime 8 - ioFreq 0
Job 1: startTime 0 - runTime 4 - ioFreq 0

* job 0: time:10/84 allotment:10/10 priority:1->0
* job 1: time:10/42 allotment:10/10 priority:1->0
* job 0: time:20/84 allotment:10/10 priority:0
* job 1: time:20/42 allotment:10/10 priority:0
* job 0: time:30/84 allotment:10/10 priority:0
* job 1: time:30/42 allotment:10/10 priority:0
* job 0: time:40/84 allotment:10/10 priority:0
* job 1: time:40/42 allotment:10/10 priority:0
* job 0: time:50/84 allotment:10/10 priority:0
* job 1: time:42/42 allotment:02/10 priority:0->finished
* job 0: time:60/84 allotment:10/10 priority:0
* job 0: time:70/84 allotment:10/10 priority:0
* job 0: time:80/84 allotment:10/10 priority:0
* job 0: time:84/84 allotment:04/10 priority:0->finished
```

> 2. How would you run the scheduler to reproduce each of the examples in the chapter?

```sh
# figure 8.2: Long-running Job Over Time
$ ./mlfq.py --jlist=0,200,0 -c

# figure 8.3: Along Came An Interactive Job
$ ./mlfq.py --jlist=0,180,0:100,20,0 -c

# figure 8.4: A Mixed I/O-intensive and CPU-intensive Workload
$ ./mlfq.py --stay --jlist=0,175,0:50,25,1 -c

# figure 8.5.a: Without Priority Boost
$ ./mlfq.py --iotime=2 --stay --jlist=0,175,0:100,50,2:100,50,2 -c

# figure 8.5.b: With Priority Boost
$ ./mlfq.py --boost=50 --iotime=2 --stay --jlist=0,175,0:100,50,2:100,50,2 -c

# figure 8.6.a: Without Gaming Tolerance
$ ./mlfq.py --iotime=1 --stay --jlist=0,175,0:80,90,9 -c

# figure 8.6.b: With Gaming Tolerance
$ ./mlfq.py --iotime=1 --jlist=0,175,0:80,90,9 -c

# figure 8.7: Lower Priority, Longer Quanta
$ ./mlfq.py --allotment=2 --quantumList=10,20,40 --jlist=0,140,0:0,140,0 -c
```

> 3. How would you configure the scheduler parameters to behave just like a round-robin scheduler?

Jobs in the same queue are already scheduled round-robin, so configuring MLFQ
with a single queue makes it behave exactly like a round-robin scheduler.

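For example (an arbitrary three-job workload, shown only to illustrate the
single-queue configuration):

```sh
$ ./mlfq.py --numQueues=1 --quantum=10 --jlist=0,30,0:0,30,0:0,30,0 -c
```
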
> 4. Craft a workload with two jobs and scheduler parameters so that one job takes advantage of the older Rules 4a and 4b (turned on with the -S flag) to game the scheduler and obtain 99% of the CPU over a particular time interval.

```sh
$ ./mlfq.py --quantum=100 --iotime=1 --stay --jlist=0,200,0:0,200,99 -c
```

Job 1 obtains 99% of the CPU over `t in [100, 302]` by setting `quantum=100`,
`iotime=1`, and `j1.iofreq=99`. The first time job 1 runs, it executes for 99
time units, avoids dropping its priority by issuing a single I/O request of
length 1, and then repeats these steps until it has finished.

```
j0.iofreq = 0  or  j0.iofreq >= quantum
quantum > j1.iofreq

j1.cpuusage = j1.length / (j1.completion - j1.firstrun)
            = j1.length / j1.runtime
            = j1.length / (j1.length + j1.length * iotime / j1.iofreq)
            = j1.iofreq / (j1.iofreq + iotime)
```

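Plugging the numbers from the command above into the last line:
`j1.cpuusage = 99 / (99 + 1) = 0.99`, i.e., 99% of the CPU.
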
> 5. Given a system with a quantum length of 10 ms in its highest queue, how often would you have to boost jobs back to the highest priority level (with the -B flag) in order to guarantee that a single long-running (and potentially-starving) job gets at least 5% of the CPU?

```sh
$ ./mlfq.py --boost=250 --iotime=2 --stay --jlist=0,500,0:0,500,2:0,500,2 -c
```

Over a particular time interval (e.g., [0, 1000)), we want our long-running
job to get at least 5% = 50 ms of the CPU. The job already gets one quantum at
the highest priority when it first arrives, so the boosts only need to supply
the remaining (objective / quantum) - 1 quanta:

```
objective = interval_length * 5%        (= 50)
divisor   = (objective / quantum) - 1   (= 4)
boost     = interval_length / divisor   (= 250)
```

> 6. One question that arises in scheduling is which end of a queue to add a job that just finished I/O; the -I flag changes this behavior for this scheduling simulator. Play around with some workloads and see if you can see the effect of this flag.

```sh
$ ./mlfq.py -h
  -I, --iobump          if specified, jobs that finished I/O move immediately
                        to front of current queue

$ ./mlfq.py --numQueues=1 --jlist=0,50,0:0,25,13 -c
$ ./mlfq.py --numQueues=1 --iobump --jlist=0,50,0:0,25,13 -c
```

> Bonus. Implement a tool to plot the results of `./mlfq.py`.

```sh
$ ./mlfq.py --jlist=0,200,0 -c | ./plot.py
```

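A minimal sketch of what such a `plot.py` might look like (an illustrative
answer, not part of the homework distribution; it assumes matplotlib is
installed and that the `-c` trace arrives on stdin in the format shown in the
README above):

```python
#!/usr/bin/env python3
# plot.py -- illustrative sketch, NOT part of the homework distribution.
# Assumptions: mlfq.py's "-c" trace is piped in on stdin, and matplotlib
# is available in the environment.
import re
import sys

import matplotlib.pyplot as plt

# Matches trace lines such as:
#   [ time 7 ] Run JOB 1 at PRIORITY 2 [ TICKS 9 ALLOT 1 TIME 41 (of 42) ]
RUN_LINE = re.compile(r"\[\s*time\s+(\d+)\s*\]\s+Run JOB\s+(\d+)\s+at PRIORITY\s+(\d+)")

times, jobs, priorities = [], [], []
for line in sys.stdin:
    m = RUN_LINE.search(line)
    if m:
        t, job, prio = (int(g) for g in m.groups())
        times.append(t)
        jobs.append(job)
        priorities.append(prio)

if not times:
    sys.exit("no 'Run JOB' lines found; did you pass -c to mlfq.py?")

# One row per job: a marker wherever that job held the CPU, colored by the
# queue (priority) level it was running at during that tick.
fig, ax = plt.subplots(figsize=(10, 3))
points = ax.scatter(times, jobs, c=priorities, marker="s", s=12)
ax.set_xlabel("time (ms)")
ax.set_ylabel("job")
ax.set_yticks(sorted(set(jobs)))
fig.colorbar(points, ax=ax, label="queue (priority) level")
fig.tight_layout()
fig.savefig("mlfq_trace.png")
print("wrote mlfq_trace.png")
```

Piping the `-c` output of any of the commands above into this script should
produce a PNG with one row per job, making it easy to see when each job held
the CPU and at which priority level.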