diff --git a/README.md b/README.md index 374effc..f2552db 100644 --- a/README.md +++ b/README.md @@ -6,7 +6,7 @@ - [x] 02 [Process API](http://www.cs.wisc.edu/~remzi/OSTEP/cpu-api.pdf) - [x] 03 [Direct Execution](http://www.cs.wisc.edu/~remzi/OSTEP/cpu-mechanisms.pdf) - [x] 04 [Scheduling Basics](http://www.cs.wisc.edu/~remzi/OSTEP/cpu-sched.pdf) -- [ ] [MLFQ Scheduling](http://www.cs.wisc.edu/~remzi/OSTEP/cpu-sched-mlfq.pdf) +- [x] 05 [MLFQ Scheduling](http://www.cs.wisc.edu/~remzi/OSTEP/cpu-sched-mlfq.pdf) - [ ] [Lottery Scheduling](http://www.cs.wisc.edu/~remzi/OSTEP/cpu-sched-lottery.pdf) - [ ] [Multiprocessor Scheduling](http://www.cs.wisc.edu/~remzi/OSTEP/cpu-sched-multi.pdf) - [ ] [Abstraction: Address Spaces](http://www.cs.wisc.edu/~remzi/OSTEP/vm-intro.pdf) diff --git a/hw05-cpu-sched-mlfq/README.md b/hw05-cpu-sched-mlfq/README.md new file mode 100644 index 0000000..9e95be2 --- /dev/null +++ b/hw05-cpu-sched-mlfq/README.md @@ -0,0 +1,187 @@ + +This program, `mlfq.py`, allows you to see how the MLFQ scheduler +presented in this chapter behaves. As before, you can use this to generate +problems for yourself using random seeds, or use it to construct a +carefully-designed experiment to see how MLFQ works under different +circumstances. To run the program, type: + +```sh +prompt> ./mlfq.py +``` + +Use the help flag (-h) to see the options: + +```sh +Usage: mlfq.py [options] +Options: + -h, --help show this help message and exit + -s SEED, --seed=SEED the random seed + -n NUMQUEUES, --numQueues=NUMQUEUES + number of queues in MLFQ (if not using -Q) + -q QUANTUM, --quantum=QUANTUM + length of time slice (if not using -Q) + -Q QUANTUMLIST, --quantumList=QUANTUMLIST + length of time slice per queue level, + specified as x,y,z,... 
where x is the + quantum length for the highest-priority + queue, y the next highest, and so forth + -j NUMJOBS, --numJobs=NUMJOBS + number of jobs in the system + -m MAXLEN, --maxlen=MAXLEN + max run-time of a job (if random) + -M MAXIO, --maxio=MAXIO + max I/O frequency of a job (if random) + -B BOOST, --boost=BOOST + how often to boost the priority of all + jobs back to high priority (0 means never) + -i IOTIME, --iotime=IOTIME + how long an I/O should last (fixed constant) + -S, --stay reset and stay at same priority level + when issuing I/O + -l JLIST, --jlist=JLIST + a comma-separated list of jobs to run, + in the form x1,y1,z1:x2,y2,z2:... where + x is start time, y is run time, and z + is how often the job issues an I/O request + -c compute answers for me +``` + +There are a few different ways to use the simulator. One way is to generate +some random jobs and see if you can figure out how they will behave given the +MLFQ scheduler. For example, if you wanted to create a randomly-generated +three-job workload, you would simply type: + +```sh +prompt> ./mlfq.py -j 3 +``` + +What you would then see is the specific problem definition: + +```sh +Here is the list of inputs: +OPTIONS jobs 3 +OPTIONS queues 3 +OPTIONS quantum length for queue 2 is 10 +OPTIONS quantum length for queue 1 is 10 +OPTIONS quantum length for queue 0 is 10 +OPTIONS boost 0 +OPTIONS ioTime 0 +OPTIONS stayAfterIO False + +For each job, three defining characteristics are given: + startTime : at what time does the job enter the system + runTime : the total CPU time needed by the job to finish + ioFreq : every ioFreq time units, the job issues an I/O + (the I/O takes ioTime units to complete) + +Job List: + Job 0: startTime 0 - runTime 84 - ioFreq 7 + Job 1: startTime 0 - runTime 42 - ioFreq 2 + Job 2: startTime 0 - runTime 51 - ioFreq 4 + +Compute the execution trace for the given workloads. +If you would like, also compute the response and turnaround +times for each of the jobs. 
+ +Use the -c flag to get the exact results when you are finished. +``` + +This generates a random workload of three jobs (as specified), on the default +number of queues with a number of default settings. If you run again with the +solve flag on (-c), you'll see the same print out as above, plus the +following: + +```sh +Execution Trace: + +[ time 0 ] JOB BEGINS by JOB 0 +[ time 0 ] JOB BEGINS by JOB 1 +[ time 0 ] JOB BEGINS by JOB 2 +[ time 0 ] Run JOB 0 at PRIORITY 2 [ TICKS 9 ALLOT 1 TIME 83 (of 84) ] +[ time 1 ] Run JOB 0 at PRIORITY 2 [ TICKS 8 ALLOT 1 TIME 82 (of 84) ] +[ time 2 ] Run JOB 0 at PRIORITY 2 [ TICKS 7 ALLOT 1 TIME 81 (of 84) ] +[ time 3 ] Run JOB 0 at PRIORITY 2 [ TICKS 6 ALLOT 1 TIME 80 (of 84) ] +[ time 4 ] Run JOB 0 at PRIORITY 2 [ TICKS 5 ALLOT 1 TIME 79 (of 84) ] +[ time 5 ] Run JOB 0 at PRIORITY 2 [ TICKS 4 ALLOT 1 TIME 78 (of 84) ] +[ time 6 ] Run JOB 0 at PRIORITY 2 [ TICKS 3 ALLOT 1 TIME 77 (of 84) ] +[ time 7 ] IO_START by JOB 0 +IO DONE +[ time 7 ] Run JOB 1 at PRIORITY 2 [ TICKS 9 ALLOT 1 TIME 41 (of 42) ] +[ time 8 ] Run JOB 1 at PRIORITY 2 [ TICKS 8 ALLOT 1 TIME 40 (of 42) ] +[ time 9 ] Run JOB 1 at PRIORITY 2 [ TICKS 7 ALLOT 1 TIME 39 (of 42) ] +... + +Final statistics: + Job 0: startTime 0 - response 0 - turnaround 175 + Job 1: startTime 0 - response 7 - turnaround 191 + Job 2: startTime 0 - response 9 - turnaround 168 + + Avg 2: startTime n/a - response 5.33 - turnaround 178.00 +``` + +The trace shows exactly, on a millisecond-by-millisecond time scale, what the +scheduler decided to do. In this example, it begins by running Job 0 for 7 ms +until Job 0 issues an I/O; this is entirely predictable, as Job 0's I/O +frequency is set to 7 ms, meaning that every 7 ms it runs, it will issue an +I/O and wait for it to complete before continuing. At that point, the +scheduler switches to Job 1, which only runs 2 ms before issuing an I/O. 
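
As a quick cross-check of the statistics shown above, the averages follow directly from the per-job numbers (values copied verbatim from the `Final statistics` block; response = firstRun - startTime, turnaround = endTime - startTime):

```python
# Per-job (response, turnaround) values from the "Final statistics" block above.
stats = {0: (0, 175), 1: (7, 191), 2: (9, 168)}  # job -> (response, turnaround)

avg_response = sum(r for r, t in stats.values()) / len(stats)
avg_turnaround = sum(t for r, t in stats.values()) / len(stats)
print('Avg: response %.2f - turnaround %.2f' % (avg_response, avg_turnaround))
# Avg: response 5.33 - turnaround 178.00
```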
The scheduler prints the entire execution trace in this manner, and
finally also computes the response and turnaround times for each job
as well as an average.

You can also control various other aspects of the simulation. For example, you
can specify how many queues you'd like to have in the system (-n) and what the
quantum length should be for all of those queues (-q); if you want even more
control, with a different quantum length per queue, you can instead specify the
length of the quantum (time slice) for each queue with -Q, e.g., -Q 10,20,30
simulates a scheduler with three queues, with the highest-priority queue
having a 10-ms time slice, the next-highest a 20-ms time slice, and the
lowest-priority queue a 30-ms time slice.

You can separately control how much time allotment there is per queue
too. This can be set for all queues with -a, or per queue with -A, e.g., -A
20,40,60 sets the allotment per queue to 20, 40, and 60 time slices,
respectively.

If you are randomly generating jobs, you can also control how long they might
run for (-m), or how often they generate I/O (-M). If, however, you want more
control over the exact characteristics of the jobs running in the system, you
can use -l (lower-case L) or --jlist, which allows you to specify the exact
set of jobs you wish to simulate. The list is of the form:
x1,y1,z1:x2,y2,z2:... where x is the start time of the job, y is the run time
(i.e., how much CPU time it needs), and z the I/O frequency (i.e., after
running z ms, the job issues an I/O; if z is 0, no I/Os are issued).

For example, if you wanted to recreate the example in Figure 8.3,
you would specify a job list as follows:

```sh
prompt> ./mlfq.py --jlist 0,180,0:100,20,0 -q 10
```

Running the simulator in this way creates a three-level MLFQ, with each level
having a 10-ms time slice. 
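
The --jlist format above can be sketched as a tiny parser; this mirrors how mlfq.py itself splits the string on ':' and ',', and the dictionary keys match the job fields mlfq.py uses:

```python
# Sketch: parse a --jlist string of the form x1,y1,z1:x2,y2,z2:...
# where x = start time, y = run time, z = I/O frequency (0 = no I/Os).
def parse_jlist(jlist):
    jobs = []
    for spec in jlist.split(":"):
        start, run, iofreq = (int(v) for v in spec.split(","))
        jobs.append({"startTime": start, "runTime": run, "ioFreq": iofreq})
    return jobs

# The Figure 8.3 workload from the command above:
print(parse_jlist("0,180,0:100,20,0"))
# [{'startTime': 0, 'runTime': 180, 'ioFreq': 0},
#  {'startTime': 100, 'runTime': 20, 'ioFreq': 0}]
```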
Two jobs are created: Job 0, which starts at time 0,
runs for 180 ms total, and never issues an I/O; Job 1 starts at 100 ms, needs
only 20 ms of CPU time to complete, and also never issues I/Os.

Finally, there are three more parameters of interest. The -B flag, if set to a
non-zero value N, boosts all jobs to the highest-priority queue every N
milliseconds, when invoked as such:
```sh
prompt> ./mlfq.py -B N
```
The scheduler uses this feature to avoid starvation as discussed in the
chapter. However, it is off by default.

The -S flag invokes the older Rules 4a and 4b, which means that if a job issues
an I/O before completing its time slice, it will return to that same priority
queue when it resumes execution, with its full time slice intact. This
enables gaming of the scheduler.

You can also change how long an I/O lasts by using the -i flag. In this
simplistic model, each I/O takes a fixed amount of time: 5 milliseconds by
default, or whatever you set with this flag.

You can also play around with whether jobs that just complete an I/O are moved
to the head of the queue they are in or to the back, with the -I flag. Check
it out, it's fun!

diff --git a/hw05-cpu-sched-mlfq/answers.md b/hw05-cpu-sched-mlfq/answers.md
new file mode 100644
index 0000000..2f3fe34
--- /dev/null
+++ b/hw05-cpu-sched-mlfq/answers.md
@@ -0,0 +1,113 @@
+> 1. Run a few randomly-generated problems with just two jobs and two queues; compute the MLFQ execution trace for each. Make your life easier by limiting the length of each job and turning off I/Os. 

```sh
$ ./mlfq.py --numJobs=2 --numQueues=2 --maxio=0 --seed=0
OPTIONS jobs 2
OPTIONS queues 2
OPTIONS allotments for queue 1 is 1
OPTIONS quantum length for queue 1 is 10
OPTIONS allotments for queue 0 is 1
OPTIONS quantum length for queue 0 is 10
OPTIONS boost 0
OPTIONS ioTime 5
OPTIONS stayAfterIO False
OPTIONS iobump False

Job 0: startTime 0 - runTime 84 - ioFreq 0
Job 1: startTime 0 - runTime 42 - ioFreq 0

* job 0: time:10/84 allotment:10/10 priority:1->0
* job 1: time:10/42 allotment:10/10 priority:1->0
* job 0: time:20/84 allotment:10/10 priority:0
* job 1: time:20/42 allotment:10/10 priority:0
* job 0: time:30/84 allotment:10/10 priority:0
* job 1: time:30/42 allotment:10/10 priority:0
* job 0: time:40/84 allotment:10/10 priority:0
* job 1: time:40/42 allotment:10/10 priority:0
* job 0: time:50/84 allotment:10/10 priority:0
* job 1: time:42/42 allotment:02/10 priority:0->finished
* job 0: time:60/84 allotment:10/10 priority:0
* job 0: time:70/84 allotment:10/10 priority:0
* job 0: time:80/84 allotment:10/10 priority:0
* job 0: time:84/84 allotment:04/10 priority:0->finished
```

> 2. How would you run the scheduler to reproduce each of the examples in the chapter? 
+ +```sh +# figure 8.2: Long-running Job Over Time +$ ./mlfq.py --jlist=0,200,0 -c + +# figure 8.3: Along Came An Interactive Job +$ ./mlfq.py --jlist=0,180,0:100,20,0 -c + +# figure 8.4: A Mixed I/O-intensive and CPU-intensive Workload +$ ./mlfq.py --stay --jlist=0,175,0:50,25,1 -c + +# figure 8.5.a: Without Priority Boost +$ ./mlfq.py --iotime=2 --stay --jlist=0,175,0:100,50,2:100,50,2 -c + +# figure 8.5.b: With Priority Boost +$ ./mlfq.py --boost=50 --iotime=2 --stay --jlist=0,175,0:100,50,2:100,50,2 -c + +# figure 8.6.a: Without Gaming Tolerance +$ ./mlfq.py --iotime=1 --stay --jlist=0,175,0:80,90,9 -c + +# figure 8.6.b: With Gaming Tolerance +$ ./mlfq.py --iotime=1 --jlist=0,175,0:80,90,9 -c + +# figure 8.7: Lower Priority, Longer Quanta +$ ./mlfq.py --allotment=2 --quantumList=10,20,40 --jlist=0,140,0:0,140,0 -c +``` + +> 3. How would you configure the scheduler parameters to behave just like a round-robin scheduler? + +Jobs on the same queue are scheduled with round-robin. Configuring MLFQ to only use one queue makes it act like round-robin. + +> 4. Craft a workload with two jobs and scheduler parameters so that one job takes advantage of the older Rules 4a and 4b (turned on with the -S flag) to game the scheduler and obtain 99% of the CPU over a particular time interval. + +```sh +$ ./mlfq.py --quantum=100 --iotime=1 --stay --jlist=0,200,0:0,200,99 -c +``` + +Job 1 obtains 99% of the CPU with `t in [100;302]` by setting `quantum=100`, `iotime=1` and `j1.iofreq=99`. The first time job 1 runs, it will execute 99 CPU instructions, avoid dropping its priority by doing 1 I/O request of length 1, and repeat these steps until it has finished. + +``` +j0.iofreq=0 or j0.iofreq >= quantum +quantum > j1.iofreq +j1.cpuusage = j1.length / (j1.completion - j1.firstrun) + = j1.length / j1.runtime + = j1.length / (j1.length + j1.length * iotime / j1.iofreq) + = j1.iofreq / (j1.iofreq + iotime) +``` + +> 5. 
Given a system with a quantum length of 10 ms in its highest queue, how often would you have to boost jobs back to the highest priority level (with the -B flag) in order to guarantee that a single long-running (and potentially-starving) job gets at least 5% of the CPU?

```sh
$ ./mlfq.py --boost=250 --iotime=2 --stay --jlist=0,500,0:0,500,2:0,500,2 -c
```

Over a particular time interval (e.g., [0, 1000)), we want our long-running job to get at least 5% = 50 ms of the CPU.

```
objective = interval_length * 5% (= 50)
divisor = (objective / quantum) - 1 (= 4)
boost = interval_length / divisor (= 250)
```

> 6. One question that arises in scheduling is which end of a queue to add a job that just finished I/O; the -I flag changes this behavior for this scheduling simulator. Play around with some workloads and see if you can see the effect of this flag.

```sh
$ ./mlfq.py -h
-I, --iobump if specified, jobs that finished I/O move immediately
             to front of current queue

$ ./mlfq.py --numQueues=1 --jlist=0,50,0:0,25,13 -c
$ ./mlfq.py --numQueues=1 --iobump --jlist=0,50,0:0,25,13 -c
```

> Bonus. Implement a tool to plot the results of `./mlfq.py`.

```sh
$ ./mlfq.py --jlist=0,200,0 -c | ./plot.py
```
diff --git a/hw05-cpu-sched-mlfq/mlfq.py b/hw05-cpu-sched-mlfq/mlfq.py
new file mode 100755
index 0000000..643dc00
--- /dev/null
+++ b/hw05-cpu-sched-mlfq/mlfq.py
@@ -0,0 +1,374 @@
+#! 
/usr/bin/env python + +from __future__ import print_function +import sys +from optparse import OptionParser +import random + +# to make Python2 and Python3 act the same -- how dumb +def random_seed(seed): + try: + random.seed(seed, version=1) + except: + random.seed(seed) + return + +# finds the highest nonempty queue +# -1 if they are all empty +def FindQueue(): + q = hiQueue + while q > 0: + if len(queue[q]) > 0: + return q + q -= 1 + if len(queue[0]) > 0: + return 0 + return -1 + +def Abort(str): + sys.stderr.write(str + '\n') + exit(1) + + +# +# PARSE ARGUMENTS +# + +parser = OptionParser() +parser.add_option('-s', '--seed', help='the random seed', + default=0, action='store', type='int', dest='seed') +parser.add_option('-n', '--numQueues', + help='number of queues in MLFQ (if not using -Q)', + default=3, action='store', type='int', dest='numQueues') +parser.add_option('-q', '--quantum', help='length of time slice (if not using -Q)', + default=10, action='store', type='int', dest='quantum') +parser.add_option('-a', '--allotment', help='length of allotment (if not using -A)', + default=1, action='store', type='int', dest='allotment') +parser.add_option('-Q', '--quantumList', + help='length of time slice per queue level, specified as ' + \ + 'x,y,z,... where x is the quantum length for the highest ' + \ + 'priority queue, y the next highest, and so forth', + default='', action='store', type='string', dest='quantumList') +parser.add_option('-A', '--allotmentList', + help='length of time allotment per queue level, specified as ' + \ + 'x,y,z,... 
where x is the # of time slices for the highest ' + \ + 'priority queue, y the next highest, and so forth', + default='', action='store', type='string', dest='allotmentList') +parser.add_option('-j', '--numJobs', default=3, help='number of jobs in the system', + action='store', type='int', dest='numJobs') +parser.add_option('-m', '--maxlen', default=100, help='max run-time of a job ' + + '(if randomly generating)', action='store', type='int', + dest='maxlen') +parser.add_option('-M', '--maxio', default=10, + help='max I/O frequency of a job (if randomly generating)', + action='store', type='int', dest='maxio') +parser.add_option('-B', '--boost', default=0, + help='how often to boost the priority of all jobs back to ' + + 'high priority', action='store', type='int', dest='boost') +parser.add_option('-i', '--iotime', default=5, + help='how long an I/O should last (fixed constant)', + action='store', type='int', dest='ioTime') +parser.add_option('-S', '--stay', default=False, + help='reset and stay at same priority level when issuing I/O', + action='store_true', dest='stay') +parser.add_option('-I', '--iobump', default=False, + help='if specified, jobs that finished I/O move immediately ' + \ + 'to front of current queue', + action='store_true', dest='iobump') +parser.add_option('-l', '--jlist', default='', + help='a comma-separated list of jobs to run, in the form ' + \ + 'x1,y1,z1:x2,y2,z2:... 
where x is start time, y is run ' + \ + 'time, and z is how often the job issues an I/O request', + action='store', type='string', dest='jlist') +parser.add_option('-c', help='compute answers for me', action='store_true', + default=False, dest='solve') + +(options, args) = parser.parse_args() + +random.seed(options.seed) + +# MLFQ: How Many Queues +numQueues = options.numQueues + +quantum = {} +if options.quantumList != '': + # instead, extract number of queues and their time slic + quantumLengths = options.quantumList.split(',') + numQueues = len(quantumLengths) + qc = numQueues - 1 + for i in range(numQueues): + quantum[qc] = int(quantumLengths[i]) + qc -= 1 +else: + for i in range(numQueues): + quantum[i] = int(options.quantum) + +allotment = {} +if options.allotmentList != '': + allotmentLengths = options.allotmentList.split(',') + if numQueues != len(allotmentLengths): + print('number of allotments specified must match number of quantums') + exit(1) + qc = numQueues - 1 + for i in range(numQueues): + allotment[qc] = int(allotmentLengths[i]) + if qc != 0 and allotment[qc] <= 0: + print('allotment must be positive integer') + exit(1) + qc -= 1 +else: + for i in range(numQueues): + allotment[i] = int(options.allotment) + +hiQueue = numQueues - 1 + +# MLFQ: I/O Model +# the time for each IO: not great to have a single fixed time but... +ioTime = int(options.ioTime) + +# This tracks when IOs and other interrupts are complete +ioDone = {} + +# This stores all info about the jobs +job = {} + +# seed the random generator +random_seed(options.seed) + +# jlist 'startTime,runTime,ioFreq:startTime,runTime,ioFreq:...' +jobCnt = 0 +if options.jlist != '': + allJobs = options.jlist.split(':') + for j in allJobs: + jobInfo = j.split(',') + if len(jobInfo) != 3: + print('Badly formatted job string. 
Should be x1,y1,z1:x2,y2,z2:...') + print('where x is the startTime, y is the runTime, and z is the I/O frequency.') + exit(1) + assert(len(jobInfo) == 3) + startTime = int(jobInfo[0]) + runTime = int(jobInfo[1]) + ioFreq = int(jobInfo[2]) + job[jobCnt] = {'currPri':hiQueue, 'ticksLeft':quantum[hiQueue], + 'allotLeft':allotment[hiQueue], 'startTime':startTime, + 'runTime':runTime, 'timeLeft':runTime, 'ioFreq':ioFreq, 'doingIO':False, + 'firstRun':-1} + if startTime not in ioDone: + ioDone[startTime] = [] + ioDone[startTime].append((jobCnt, 'JOB BEGINS')) + jobCnt += 1 +else: + # do something random + for j in range(options.numJobs): + startTime = 0 + runTime = int(random.random() * (options.maxlen - 1) + 1) + ioFreq = int(random.random() * (options.maxio - 1) + 1) + + job[jobCnt] = {'currPri':hiQueue, 'ticksLeft':quantum[hiQueue], + 'allotLeft':allotment[hiQueue], 'startTime':startTime, + 'runTime':runTime, 'timeLeft':runTime, 'ioFreq':ioFreq, 'doingIO':False, + 'firstRun':-1} + if startTime not in ioDone: + ioDone[startTime] = [] + ioDone[startTime].append((jobCnt, 'JOB BEGINS')) + jobCnt += 1 + + +numJobs = len(job) + +print('Here is the list of inputs:') +print('OPTIONS jobs', numJobs) +print('OPTIONS queues', numQueues) +for i in range(len(quantum)-1,-1,-1): + print('OPTIONS allotments for queue %2d is %3d' % (i, allotment[i])) + print('OPTIONS quantum length for queue %2d is %3d' % (i, quantum[i])) +print('OPTIONS boost', options.boost) +print('OPTIONS ioTime', options.ioTime) +print('OPTIONS stayAfterIO', options.stay) +print('OPTIONS iobump', options.iobump) + +print('\n') +print('For each job, three defining characteristics are given:') +print(' startTime : at what time does the job enter the system') +print(' runTime : the total CPU time needed by the job to finish') +print(' ioFreq : every ioFreq time units, the job issues an I/O') +print(' (the I/O takes ioTime units to complete)\n') + +print('Job List:') +for i in range(numJobs): + print(' Job %2d: 
startTime %3d - runTime %3d - ioFreq %3d' % (i, job[i]['startTime'], job[i]['runTime'], job[i]['ioFreq'])) +print('') + +if options.solve == False: + print('Compute the execution trace for the given workloads.') + print('If you would like, also compute the response and turnaround') + print('times for each of the jobs.') + print('') + print('Use the -c flag to get the exact results when you are finished.\n') + exit(0) + +# initialize the MLFQ queues +queue = {} +for q in range(numQueues): + queue[q] = [] + +# TIME IS CENTRAL +currTime = 0 + +# use these to know when we're finished +totalJobs = len(job) +finishedJobs = 0 + +print('\nExecution Trace:\n') + +while finishedJobs < totalJobs: + # find highest priority job + # run it until either + # (a) the job uses up its time quantum + # (b) the job performs an I/O + + # check for priority boost + if options.boost > 0 and currTime != 0: + if currTime % options.boost == 0: + print('[ time %d ] BOOST ( every %d )' % (currTime, options.boost)) + # remove all jobs from queues (except high queue) and put them in high queue + for q in range(numQueues-1): + for j in queue[q]: + if job[j]['doingIO'] == False: + queue[hiQueue].append(j) + queue[q] = [] + + # change priority to high priority + # reset number of ticks left for all jobs (just for lower jobs?) 
+ # add to highest run queue (if not doing I/O) + for j in range(numJobs): + # print('-> Boost %d (timeLeft %d)' % (j, job[j]['timeLeft'])) + if job[j]['timeLeft'] > 0: + # print('-> FinalBoost %d (timeLeft %d)' % (j, job[j]['timeLeft'])) + job[j]['currPri'] = hiQueue + job[j]['allotLeft'] = allotment[hiQueue] + job[j]['ticksLeft'] = quantum[hiQueue] + # print('BOOST END: QUEUES look like:', queue) + + # check for any I/Os done + if currTime in ioDone: + for (j, type) in ioDone[currTime]: + q = job[j]['currPri'] + job[j]['doingIO'] = False + print('[ time %d ] %s by JOB %d' % (currTime, type, j)) + if options.iobump == False or type == 'JOB BEGINS': + queue[q].append(j) + else: + queue[q].insert(0, j) + + # now find the highest priority job + currQueue = FindQueue() + if currQueue == -1: + print('[ time %d ] IDLE' % (currTime)) + currTime += 1 + continue + + # there was at least one runnable job, and hence ... + currJob = queue[currQueue][0] + if job[currJob]['currPri'] != currQueue: + Abort('currPri[%d] does not match currQueue[%d]' % (job[currJob]['currPri'], currQueue)) + + job[currJob]['timeLeft'] -= 1 + job[currJob]['ticksLeft'] -= 1 + + if job[currJob]['firstRun'] == -1: + job[currJob]['firstRun'] = currTime + + runTime = job[currJob]['runTime'] + ioFreq = job[currJob]['ioFreq'] + ticksLeft = job[currJob]['ticksLeft'] + allotLeft = job[currJob]['allotLeft'] + timeLeft = job[currJob]['timeLeft'] + + print('[ time %d ] Run JOB %d at PRIORITY %d [ TICKS %d ALLOT %d TIME %d (of %d) ]' % \ + (currTime, currJob, currQueue, ticksLeft, allotLeft, timeLeft, runTime)) + + if timeLeft < 0: + Abort('Error: should never have less than 0 time left to run') + + + # UPDATE TIME + currTime += 1 + + # CHECK FOR JOB ENDING + if timeLeft == 0: + print('[ time %d ] FINISHED JOB %d' % (currTime, currJob)) + finishedJobs += 1 + job[currJob]['endTime'] = currTime + # print('BEFORE POP', queue) + done = queue[currQueue].pop(0) + # print('AFTER POP', queue) + assert(done == currJob) + 
continue + + # CHECK FOR IO + issuedIO = False + if ioFreq > 0 and (((runTime - timeLeft) % ioFreq) == 0): + # time for an IO! + print('[ time %d ] IO_START by JOB %d' % (currTime, currJob)) + issuedIO = True + desched = queue[currQueue].pop(0) + assert(desched == currJob) + job[currJob]['doingIO'] = True + # this does the bad rule -- reset your tick counter if you stay at the same level + if options.stay == True: + job[currJob]['ticksLeft'] = quantum[currQueue] + job[currJob]['allotLeft'] = allotment[currQueue] + # add to IO Queue: but which queue? + futureTime = currTime + ioTime + if futureTime not in ioDone: + ioDone[futureTime] = [] + print('IO DONE') + ioDone[futureTime].append((currJob, 'IO_DONE')) + + # CHECK FOR QUANTUM ENDING AT THIS LEVEL (BUT REMEMBER, THERE STILL MAY BE ALLOTMENT LEFT) + if ticksLeft == 0: + if issuedIO == False: + # IO HAS NOT BEEN ISSUED (therefor pop from queue)' + desched = queue[currQueue].pop(0) + assert(desched == currJob) + + job[currJob]['allotLeft'] = job[currJob]['allotLeft'] - 1 + + if job[currJob]['allotLeft'] == 0: + # this job is DONE at this level, so move on + if currQueue > 0: + # in this case, have to change the priority of the job + job[currJob]['currPri'] = currQueue - 1 + job[currJob]['ticksLeft'] = quantum[currQueue-1] + job[currJob]['allotLeft'] = allotment[currQueue-1] + if issuedIO == False: + queue[currQueue-1].append(currJob) + else: + job[currJob]['ticksLeft'] = quantum[currQueue] + job[currJob]['allotLeft'] = allotment[currQueue] + if issuedIO == False: + queue[currQueue].append(currJob) + else: + # this job has more time at this level, so just push it to end + job[currJob]['ticksLeft'] = quantum[currQueue] + if issuedIO == False: + queue[currQueue].append(currJob) + + + + +# print out statistics +print('') +print('Final statistics:') +responseSum = 0 +turnaroundSum = 0 +for i in range(numJobs): + response = job[i]['firstRun'] - job[i]['startTime'] + turnaround = job[i]['endTime'] - job[i]['startTime'] + 
print(' Job %2d: startTime %3d - response %3d - turnaround %3d' % (i, job[i]['startTime'], response, turnaround))
    responseSum += response
    turnaroundSum += turnaround

print('\n Avg %2d: startTime n/a - response %.2f - turnaround %.2f' % (i, float(responseSum)/numJobs, float(turnaroundSum)/numJobs))
print('\n')

diff --git a/hw05-cpu-sched-mlfq/plot.py b/hw05-cpu-sched-mlfq/plot.py
new file mode 100755
index 0000000..d91192f
--- /dev/null
+++ b/hw05-cpu-sched-mlfq/plot.py
@@ -0,0 +1,32 @@
+#!/usr/bin/env python3
+
import sys, re
from matplotlib import pyplot, colors  # dep: https://pypi.org/project/matplotlib/

# retrieve data
queues = {}  # each queue contains a dict of jobs, and each job contains a list
             # of times
reTrace = re.compile(r"^\[ time ([0-9]+) \] Run JOB ([0-9]+) at PRIORITY ([0-9]+)")
for line in sys.stdin:
    if match := reTrace.match(line):
        groups = match.groups()
        groups = tuple(int(item) for item in groups)
        if groups[2] not in queues:
            queues[groups[2]] = {}
        if groups[1] not in queues[groups[2]]:
            queues[groups[2]][groups[1]] = []
        queues[groups[2]][groups[1]].append(groups[0])

# plot data
# ref: https://matplotlib.org/2.0.2/examples/pylab_examples/broken_barh.html
figure, axis = pyplot.subplots()
axis.set_yticks(list(queues.keys()))
axis.set_xlabel("time (ms)")
axis.set_ylabel("priority")
for priority, queue in queues.items():
    for ijob, job in queue.items():
        # cycle through the Tableau palette if there are more jobs than colors
        axis.broken_barh(xranges=tuple((time, 1) for time in job),
                         yrange=(priority, 1),
                         facecolor=list(colors.TABLEAU_COLORS.values())[ijob % len(colors.TABLEAU_COLORS)])

pyplot.show()
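
As a closing aside (not part of the diff itself), the trace-parsing pattern that plot.py relies on can be sanity-checked standalone against a line from the trace shown earlier:

```python
import re

# Same pattern plot.py uses to pull (time, job, priority) out of a trace line.
reTrace = re.compile(r"^\[ time ([0-9]+) \] Run JOB ([0-9]+) at PRIORITY ([0-9]+)")

line = "[ time 7 ] Run JOB 1 at PRIORITY 2 [ TICKS 9 ALLOT 1 TIME 41 (of 42) ]"
match = reTrace.match(line)
print(tuple(int(g) for g in match.groups()))  # (7, 1, 2)
```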