Completely Fair Scheduler and its tuning [1]
Jacek Kobus and Rafał Szklarski

[1] This article is based on R. Szklarski's M.S. thesis, Toruń,

1 Introduction

The introduction of a new, so-called completely fair scheduler (CFS) into the Linux kernel (October 2007) does not mean that the concepts needed to describe its predecessor, the O(1) scheduler, cease to be relevant. In order to present the principles of the CFS scheduler and the details of its algorithm, the basic terminology must be introduced first. This should allow the salient features of the new scheduler to be described within the context of the O(1) one, and should help a reader already acquainted with the O(1) scheduler to grasp the essential changes introduced to the scheduler subsystem.

All modern operating systems divide CPU cycles, in the form of time quanta, among the various processes and threads (or Linux tasks) in accordance with their policies and priorities. This is possible because the system timer periodically generates interrupts and thus allows the scheduler to do its job of choosing the next task for execution. The frequency of the timer is hard-coded and can only be changed at the kernel configuration stage. For the 2.4 kernels this frequency was set to 100 Hz; in the 2.6 kernels the user can choose between 100, 250, 300 and 1000 Hz. Higher frequencies mean faster servicing of interrupts and better interactivity of the system, at the expense of higher system overhead. When there is not much work to be done, i.e. when there are no tasks in the run queue, servicing hundreds of timer interrupts per second is a waste of CPU cycles and energy. In the kernel this problem was solved by implementing so-called dynamic ticks: if the system is idle, some clock interrupts are ignored (skipped), and there can be as few as six interrupts per second. In such a state the system responds only to other hardware interrupts.
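The relation between the configured timer frequency and the length of a tick is simple arithmetic; the short sketch below (an illustration added here, not kernel code) prints the tick period for the frequencies mentioned above.

/* tick_period.c -- minimal illustration of the relation between the
 * configured timer frequency (HZ) and the tick period.
 * Build: cc -o tick_period tick_period.c
 */
#include <stdio.h>

int main(void)
{
    const int hz_values[] = { 100, 250, 300, 1000 };   /* 2.6 kernel options */
    const int n = sizeof hz_values / sizeof hz_values[0];

    for (int i = 0; i < n; i++) {
        double period_ms = 1000.0 / hz_values[i];      /* one tick, in ms */
        printf("HZ = %4d  ->  tick every %6.2f ms (%d interrupts/s)\n",
               hz_values[i], period_ms, hz_values[i]);
    }
    return 0;
}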
2 Scheduling

Time measurement is one of the most important parts of the scheduler subsystem, and in the CFS implementation it has been improved by the addition of high-resolution timers. This kind of timer offers nanosecond (instead of millisecond) resolution, which enables the scheduler to perform its actions with much finer granularity.

In the O(1) scheduler every task is given a time share of the CPU of variable length, which depends on the task's priority and its interactivity. This quantum can be used in its entirety unless a task with a higher priority appears in the system or the task voluntarily yields (relinquishes) the CPU while waiting for initiated I/O operations to finish. When the timeslice is completely used up, the scheduler calculates a new time quantum for the task and moves the task from the active queue to the expired one. The O(1) scheduler employs the concept of the epoch: when the active queue eventually becomes empty, the scheduler swaps the active and expired queues and a new epoch begins. This mechanism may lead to unacceptably long delays in serving tasks, because expired tasks have to wait (for an unknown amount of time) for the active tasks to consume their timeslices. As the number of tasks in the system increases, the danger of starvation increases as well. In order to guarantee good interactivity of the system, the O(1) scheduler marks some tasks as interactive and reinserts them into the active queue (but, to prevent starvation, this is only done if the tasks in the expired array have run recently).

The CFS scheduler calculates the timeslice for a given task just before the task is scheduled for execution. Its length is variable (dynamic) and depends directly on the task's static priority and on the current load of the run queue, i.e. the number and priorities of the tasks in the queue.

The scheduler of a general-purpose operating system should take into account the fact that tasks may have a very different character and exhibit different behaviour, and therefore require adequate treatment. The Linux scheduler distinguishes between real-time, batch and interactive processes. The O(1) scheduler employs two groups to handle real-time tasks, namely SCHED_FIFO and SCHED_RR; the SCHED_IDLE, SCHED_BATCH and SCHED_OTHER groups are used to classify the remaining tasks. In principle the CFS scheduler uses scheduling policies similar to those of the O(1) scheduler. However, it introduces the notion of a class of tasks, characterized by a special scheduling policy. The CFS scheduler defines the following classes: real-time (SCHED_FIFO and SCHED_RR), fair (SCHED_NORMAL and SCHED_BATCH) and idle (SCHED_IDLE). The idle class is very special, since it is concerned with the task started when there are no more ready tasks in the system (every CPU has its own task of this sort). In this approach it is the user's responsibility to classify his or her tasks properly (using the chrt command), because the scheduler is not able to classify a task by itself. For the fair-class tasks the CFS uses the following two policies:

SCHED_NORMAL: this policy is applied to normal tasks; it is also known as the SCHED_OTHER policy in the O(1) scheduler, which SCHED_NORMAL replaced in the kernel.

SCHED_BATCH: the CFS scheduler does not allow these tasks to preempt SCHED_NORMAL tasks, which results in better utilization of the CPU cache. As a side effect it makes these tasks less interactive, so this policy should be used (as its very name suggests) for batch tasks.
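The classification mentioned above is normally done with chrt, which itself relies on the sched_setscheduler() system call. The following sketch (an illustration, not taken from the article) shows how a process could ask for the SCHED_BATCH policy; SCHED_BATCH and SCHED_IDLE are Linux-specific, hence _GNU_SOURCE.

/* set_batch.c -- hedged illustration: switch the calling process to the
 * SCHED_BATCH policy, the CFS policy intended for non-interactive jobs.
 * Build: cc -o set_batch set_batch.c
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param param = { .sched_priority = 0 }; /* must be 0 for non-RT policies */

    /* pid 0 means "the calling process" */
    if (sched_setscheduler(0, SCHED_BATCH, &param) == -1) {
        perror("sched_setscheduler");
        return 1;
    }
    printf("now running with policy %d (SCHED_BATCH)\n", sched_getscheduler(0));
    return 0;
}

From the shell the equivalent classification is done with chrt, e.g. chrt -b 0 <command> for SCHED_BATCH, chrt -f <prio> <command> for SCHED_FIFO and chrt -r <prio> <command> for SCHED_RR.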
It must be noted that the essential difference between the O(1) and the CFS scheduler boils down to a completely new treatment of the SCHED_NORMAL policy. In one of the earliest commentaries on the new scheduler its author, Ingo Molnar, wrote (see Documentation/scheduler/sched-design-CFS.txt):

80% of CFS's design can be summed up in a single sentence: CFS basically models an "ideal, precise multi-tasking CPU" on real hardware.

The inventor of the CFS set himself the goal of devising a scheduler capable of a fair division of the available CPU power among all tasks. If one had an ideal multitasking computer capable of the concurrent execution of N processes, then every process would get exactly 1/Nth of the available CPU power. In reality, on a single-CPU machine, only one process can be executed at a time. That is why the CFS scheduler employs the notion of the virtual runtime, which should be equal for all tasks. The virtual runtime is a normalized value of the real runtime of a given task, with its static priority taken into account. The scheduler selects the task with the smallest virtual runtime in order to reduce, in the long run, the differences between the runtimes of all the tasks.

Real-time tasks are handled by the CFS scheduler before tasks from other classes. Two different policies are used to this end, namely SCHED_FIFO and SCHED_RR. SCHED_FIFO tasks get no finite time quanta but execute until completion or until they yield the CPU. By contrast, tasks from the SCHED_RR class get definite time quanta and are scheduled according to the round-robin algorithm (once all SCHED_RR tasks of a given priority level exhaust their timeslices, the timeslices are refilled and the tasks continue running).

Scheduling policies in Linux rely on priorities associated with every task. Priorities are used by the scheduler to rate the importance (or otherwise) of a given task and to calculate its CPU share. The O(1) scheduler employs a so-called dynamic priority which, for a given task, is influenced by its static priority (set by the user) and its interactivity level. To account for the latter, the scheduler records for every task its CPU time and the time it spends sleeping while waiting for its I/O operations to finish. The ratio of these two values is used as an indicator of the task's interactivity: for I/O-bound processes this indicator is small, and it grows as CPU consumption increases. When devising the new scheduler, Molnar decided to get rid of these statistics and of the resulting modifications of the dynamic priorities of the running tasks. The new scheduler tries to divide CPU resources fairly among all processes by taking into account their static priorities.

The handling of task queues has also been changed. The O(1) scheduler uses (for every CPU in the system) two separate arrays for handling active and expired tasks. When all the tasks end up in the expired array, the arrays are swapped and a new epoch begins (in fact this is achieved not by swapping the arrays themselves but by swapping the corresponding pointers). The CFS scheduler instead uses a single red-black binary tree whose nodes hold the tasks ready to be executed.

The Linux scheduler (either old or new) uses priorities within the range [0..99] for handling real-time tasks and the [100..139] range for all other tasks. A new non-real-time task gets the priority 120 by default. This value can be adjusted by means of the static priority (also called the nice value) within the range [-20..19], using the nice (or snice) command. This means that the dynamic priority can be changed within the [100..139] range as needed. In this way users can set the relative importance of the tasks they own, and the superuser can change the priorities of all the tasks in the system.
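The normalization of the real runtime mentioned above can be pictured as follows. The kernel keeps the exact per-nice weights in a lookup table (prio_to_weight[]); the sketch below only approximates that table with the roughly-1.25x-per-nice-step rule, so the numbers are illustrative, not authoritative.

/* vruntime_demo.c -- a sketch (not the kernel code) of how a CFS-style
 * virtual runtime normalizes real runtime by a task's weight.
 * Build: cc vruntime_demo.c -lm
 */
#include <math.h>
#include <stdio.h>

#define NICE_0_LOAD 1024.0               /* weight of a nice-0 task */

/* approximate load weight for a given nice value (assumption: 1.25^-nice) */
static double load_weight(int nice)
{
    return NICE_0_LOAD * pow(1.25, -nice);
}

/* vruntime advance for 'delta_exec' nanoseconds of real CPU time */
static double vruntime_delta(double delta_exec_ns, int nice)
{
    return delta_exec_ns * NICE_0_LOAD / load_weight(nice);
}

int main(void)
{
    const double slice_ns = 10e6;        /* 10 ms of real execution */
    const int nices[] = { -20, -10, 0, 10, 19 };

    for (unsigned i = 0; i < sizeof nices / sizeof nices[0]; i++) {
        int nice = nices[i];
        printf("nice %3d (static prio %3d): weight ~%7.0f, "
               "10 ms of CPU advances vruntime by ~%5.2f ms\n",
               nice, 120 + nice, load_weight(nice),
               vruntime_delta(slice_ns, nice) / 1e6);
    }
    return 0;
}

A heavy (low-nice) task thus accumulates virtual runtime slowly and keeps being selected, which is exactly the behaviour described in the following paragraphs.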
The actual value of the dynamic priority of a given task is modified by the O(1) scheduler itself, within a range of ±5, depending on the task's interactivity.
The O(1) scheduler uses this mechanism to guarantee good responsiveness of the system by decreasing the dynamic priority value (i.e. weighting it higher) of interactive tasks. This means that the behaviour of a task in the current epoch directly influences the interactivity level calculated by the scheduler, which together with the task's static priority is used to classify the task as interactive or otherwise. This classification has a heuristic character and is done according to a table of the form (see sched.c):

TASK_INTERACTIVE(-20): [1,1,1,1,1,1,1,1,1,0,0]
TASK_INTERACTIVE(-10): [1,1,1,1,1,1,1,0,0,0,0]
TASK_INTERACTIVE(  0): [1,1,1,1,0,0,0,0,0,0,0]
TASK_INTERACTIVE( 10): [1,1,0,0,0,0,0,0,0,0,0]
TASK_INTERACTIVE( 19): [0,0,0,0,0,0,0,0,0,0,0]

The responsibility of the CFS scheduler is to guarantee a fair division of the available CPU among all tasks of the fair class. This cannot mean dividing the epoch time equally among the tasks; the scheduler must take into account the weights of individual tasks, which are related to their static priorities. To this end the CFS scheduler uses the following equation:

    time_slice = period * task_load / cfs_rq_load,    (1)

where time_slice is the CPU timeslice that the task is due, period is the epoch length (the time span of a period depends on the number of tasks in the queue and on some parameters that can be tuned by the administrator; see below), task_load is the load (weight) of the task and cfs_rq_load is the total weight of the fair run queue.

Although the algorithm for calculating timeslices is of utmost importance, the moment at which they are calculated is crucial as well. The O(1) scheduler evaluates a new timeslice for a task when the old time quantum has been used up, and then moves the task from the active queue to the expired one. The expired queue is where tasks wait for the active ones to consume their timeslices; interactive tasks are reinserted into the active queue. The CFS scheduler instead uses the notion of a dynamic time quantum, which is calculated whenever a task is selected for execution. The actual length of this timeslice depends on the current queue load. Additionally, thanks to the high-resolution timer, the newly established time quantum is determined with nanosecond accuracy. By means of the timer's handler the scheduler gets a chance to select a task that deserves the CPU and, if needed, performs the context switch.

The CFS scheduler tries to distribute the CPU time of a given epoch fairly among all tasks of the fair class, in such a way that each of them gets the same amount of virtual runtime. For heavy tasks the virtual runtime increases slowly; for light tasks the behaviour is just the opposite. The CFS scheduler selects the task with the smallest value of the virtual runtime.
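As an illustration of eq. (1), the sketch below computes the timeslices for a small example run queue; the weights are the same assumed approximation as before and the epoch length is taken as the default 20 ms.

/* timeslice_demo.c -- hedged sketch of eq. (1):
 *   time_slice = period * task_load / cfs_rq_load
 * Weights are approximated (the kernel uses a lookup table).
 * Build: cc timeslice_demo.c -lm
 */
#include <math.h>
#include <stdio.h>

static double load_weight(int nice)
{
    return 1024.0 * pow(1.25, -nice);    /* assumption: ~1.25x per nice step */
}

int main(void)
{
    const double period_ns = 20e6;        /* default epoch length: 20 ms */
    const int nices[] = { -5, 0, 0, 10 }; /* an example run queue */
    const unsigned n = sizeof nices / sizeof nices[0];

    double cfs_rq_load = 0.0;             /* total weight of the fair queue */
    for (unsigned i = 0; i < n; i++)
        cfs_rq_load += load_weight(nices[i]);

    for (unsigned i = 0; i < n; i++) {
        double slice = period_ns * load_weight(nices[i]) / cfs_rq_load;
        printf("task %u (nice %3d): time_slice = %.2f ms\n",
               i, nices[i], slice / 1e6);
    }
    return 0;
}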
Let us assume that we have only two tasks with extreme priorities, i.e. one with priority 100 (the heavy task) and the other with priority 139 (the light task). According to eq. (1) the former enjoys longer timeslices than the latter. If the scheduler based its selection decision on the real runtimes, the light task would get its small quanta one after another until the real runtime of the heavy task became smaller than the time used by the lighter task; the more important task would get the CPU less frequently, but in larger chunks. Thanks to a selection process based on the normalized virtual runtime, the CFS scheduler divides the CPU fairly, which results in the execution of every task once per epoch. If a new task is added to the run queue during the current epoch, the scheduler starts it immediately.

A normal task experiences the following when the CFS scheduler is at work:

1. The timer generates an interrupt and the scheduler (according to its configuration) selects a task which
   - has the lowest virtual runtime, or
   - has just changed its status from waiting to running.
   The dynamic timeslice is calculated according to eq. (1) for the selected task. If the high-resolution timer is used, it is set accordingly. Next, the scheduler switches the context to allow the newly selected task to execute.

2. The task starts using the CPU (executing).

3. During the next timer interrupt (sometimes also during a system call) the scheduler examines the current state of affairs and can preempt the task if
   - the task has used up all of its timeslice,
   - there is a task with a smaller virtual runtime in the tree,
   - there is a newly created task, or
   - there is a task that has finished sleeping.
   If this is the case,
   (a) the scheduler updates the virtual runtime according to the real runtime and the load, and
   (b) the scheduler adds the task back into the red-black tree. The tasks in the tree are ordered according to a key equal to current_vruntime - min_vruntime, where current_vruntime is the virtual runtime of the current task and min_vruntime is the smallest virtual runtime among the nodes of the tree.
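A toy model of steps 3(a) and 3(b) is sketched below. A plain array stands in for the kernel's red-black tree and the weights are again approximated, but the bookkeeping follows the description above: after a task runs, its virtual runtime grows by the weighted real runtime, and the task with the smallest virtual runtime is picked next.

/* pick_next_demo.c -- toy model (not kernel code) of CFS bookkeeping:
 * run the task with the smallest vruntime, then advance its vruntime by
 * the weighted real runtime and put it back among the runnable tasks.
 * Build: cc pick_next_demo.c -lm
 */
#include <math.h>
#include <stdio.h>

struct task { const char *name; int nice; double vruntime; };

static double load_weight(int nice) { return 1024.0 * pow(1.25, -nice); }

static struct task *pick_next(struct task *t, int n)
{
    struct task *best = &t[0];
    for (int i = 1; i < n; i++)           /* "leftmost" == smallest vruntime */
        if (t[i].vruntime < best->vruntime)
            best = &t[i];
    return best;
}

int main(void)
{
    struct task tasks[] = {
        { "heavy", -10, 0.0 }, { "normal", 0, 0.0 }, { "light", 19, 0.0 },
    };
    const int n = sizeof tasks / sizeof tasks[0];
    const double tick_ns = 1e6;           /* pretend each run lasts 1 ms */

    for (int step = 0; step < 9; step++) {
        struct task *cur = pick_next(tasks, n);
        /* step 3(a): vruntime grows inversely to the task's weight */
        cur->vruntime += tick_ns * 1024.0 / load_weight(cur->nice);
        printf("step %d: ran %-6s  vruntime now %.2f ms\n",
               step, cur->name, cur->vruntime / 1e6);
        /* step 3(b): in the kernel the task would be re-inserted into the
         * red-black tree with key = vruntime - min_vruntime; here the
         * array is simply rescanned on the next iteration */
    }
    return 0;
}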
3 CFS scheduler tuning

The CFS scheduler offers a number of parameters which allow its behaviour to be tuned to actual needs. These parameters can be accessed via the proc file system, i.e. a special file system that exists only in memory and forms an interface between the kernel and user space. The content of any of its files is generated when a read operation is performed. The kernel uses the files in /proc both to export its internal data to user-space applications and to modify kernel parameters, including the CFS scheduler parameters. This can be done by using the sysctl command or by treating the files as ordinary text files which can be read or written. Thus, to get the content of the /proc/sys/kernel/sched_latency_ns file one can use one of the following commands:

# cat /proc/sys/kernel/sched_latency_ns
# sysctl kernel.sched_latency_ns

To modify its content one can use:

# echo VALUE > /proc/sys/kernel/sched_latency_ns
# sysctl -w kernel.sched_latency_ns=VALUE

Now, let us briefly describe some of the tunable kernel parameters of the CFS scheduler. All of them can be found in the /proc/sys/kernel directory, in files whose names exactly match the names of these parameters.

sched_latency_ns: the epoch duration (length) in nanoseconds (20 ms by default).

sched_min_granularity_ns: the granularity of the epoch in nanoseconds (4 ms by default); the epoch length is always a multiple of the value of this parameter. The scheduler checks whether the following inequality holds:

    nr_running > sched_latency_ns / sched_min_granularity_ns,

where nr_running is the number of tasks in the queue. If this relation is satisfied, the scheduler concludes that there are too many tasks in the system and that the epoch duration has to be increased. This is done according to the equation

    period = sched_min_granularity_ns * nr_running,

i.e. the length of the epoch is equal to the granularity multiplied by the number of TASK_RUNNING tasks of the fair class. This means that for the default values of the scheduler parameters the epoch is lengthened when the number of tasks in the TASK_RUNNING state is larger than five; otherwise the epoch length is set to 20 ms (see the sketch below).
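The interplay between sched_latency_ns and sched_min_granularity_ns described above can be condensed into a few lines; this is a sketch following the textual description, not the kernel source.

/* period_demo.c -- sketch of how the epoch (period) length follows from
 * sched_latency_ns, sched_min_granularity_ns and the number of runnable
 * fair-class tasks, as described in the text.
 */
#include <stdio.h>

static unsigned long long sched_latency_ns         = 20000000ULL; /* 20 ms default */
static unsigned long long sched_min_granularity_ns =  4000000ULL; /*  4 ms default */

static unsigned long long period(unsigned nr_running)
{
    unsigned nr_latency = sched_latency_ns / sched_min_granularity_ns; /* 5 */

    if (nr_running > nr_latency)          /* too many tasks: stretch the epoch */
        return (unsigned long long)nr_running * sched_min_granularity_ns;
    return sched_latency_ns;              /* otherwise keep the default 20 ms */
}

int main(void)
{
    for (unsigned nr = 1; nr <= 8; nr++)
        printf("nr_running = %u  ->  period = %llu ms\n",
               nr, period(nr) / 1000000ULL);
    return 0;
}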
sched_child_runs_first: this parameter influences the order of execution of a parent process and its child. If it is equal to one (the default value), the child process gets the CPU before its parent; if the parameter is zero, the reverse happens.

sched_compat_yield: this parameter is used to block the current process from voluntarily yielding the CPU; this is done by setting its value to zero (the default). A process yields the CPU by calling the sched_yield() function.

sched_wakeup_granularity_ns: this parameter describes the ability of tasks being woken up to preempt the current task (5 ms by default). The larger the value, the more difficult it is for a woken task to force the preemption.

sched_migration_cost: this parameter is used by the scheduler to decide whether the procedure selecting the next task should be called when a task is being woken up. If the mean real runtimes of the current task and of the one being woken up are smaller than this parameter (0.5 ms by default), the scheduler chooses another task. For this to work the WAKEUP_OVERLAP feature must be active.

In the case of a multiprocessor system the scheduler also tries to spread the load over the available CPUs. In this respect the scheduler uses the following parameters:

sched_migration_cost: the cost of task migration among CPUs (0.5 ms by default), used to estimate whether the code of a task is still present in a CPU cache. If the real runtime of the task is smaller than the value of this parameter, the scheduler assumes that the task is still in the cache and tries to avoid moving it to another CPU during the load-balancing procedure.

sched_nr_migrate: the maximum number of tasks the scheduler handles during the load-balancing procedure (32 by default).

The CFS scheduler also makes it possible to control the CPU time used by real-time tasks, with microsecond resolution:

sched_rt_runtime_us: the maximum CPU time that can be used by all the real-time tasks together (0.95 s by default). When this amount of time is used up, these tasks must wait until the end of the sched_rt_period_us period before they are allowed to execute again.

sched_rt_period_us: the length of the period (1 s by default) after which the scheduler allows real-time tasks to run again.

The last parameter, sched_features, represents certain features of the scheduler that can be switched on and off during system operation; they are described in a separate subsection (see below).

3.1 Scheduler features

The CFS scheduler gives the system administrator the opportunity to modify its operation by turning particular features on and off; they can be found in the /sys/kernel/debug/sched_features file. When the content of this file is displayed, one gets the list of all the available features. Some of the names may be prefixed by NO, which means that the given feature has been switched off and is therefore not active. Executing the commands

echo FEATURE_NAME > sched_features
echo NO_FEATURE_NAME > sched_features

first activates the feature FEATURE_NAME and then deactivates it again.
The descriptions of the most important features follow. [2]

NEW_FAIR_SLEEPERS: this feature is relevant when a task is woken up. Every task that is woken up gets a virtual runtime equal to the smallest virtual runtime among the tasks in the queue. If this feature is on, this virtual runtime is additionally decreased by sched_latency_ns. However, the new value cannot be smaller than the virtual runtime the task had prior to its suspension.

NORMALIZED_SLEEPER: this feature is taken into account only when the NEW_FAIR_SLEEPERS feature is active. When NORMALIZED_SLEEPER is switched on, the sched_latency_ns value (see above) gets normalized; the normalization is performed similarly to the normalization of the virtual runtime.

WAKEUP_PREEMPT: if this feature is active, a task that is woken up immediately preempts the current task. Switching this feature off causes the scheduler to block the preemption of the current task until it uses up its timeslice. Moreover, if this feature is active and there is only one running task in the system, the scheduler refrains from calling the procedure responsible for the selection of the next task until an additional task is added to the queue.

START_DEBIT: this feature is taken into account during the initialization of the virtual runtime of a newly created task. If it is active, the initial virtual runtime is increased by an amount equal to the normalized timeslice that the task would be allowed to use.

SYNC_WAKEUPS: this feature allows for synchronous wake-ups of tasks. If it is active, the scheduler preempts the current task and schedules the one being woken up (WAKEUP_PREEMPT must be active).

HRTICK: this feature is used to switch the high-resolution timer on and off.

DOUBLE_TICK: this feature controls whether the procedure checking the CPU time consumed by the task is called on a timer interrupt. If it is active, interrupts generated by either the high-resolution timer or the system timer cause the scheduler to check whether the task should be preempted; if it is not active, only the interrupts generated by the high-resolution timer trigger this check.

ASYM_GRAN: this feature is related to preemption and influences the granularity of wake-ups. If it is active, the sched_wakeup_granularity_ns value is normalized (the normalization is performed similarly to the normalization of the virtual runtime).

[2] We bypass the features related to the operation of the scheduler on a multiprocessor machine and to grouping. See Documentation/scheduler/sched-design-CFS.txt for details.
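To make the NEW_FAIR_SLEEPERS rule above concrete, the following sketch (an interpretation of the description given here, not the kernel code) computes the virtual runtime given to a woken task.

/* wakeup_vruntime_demo.c -- sketch of the NEW_FAIR_SLEEPERS rule as
 * described above: a woken task starts from the queue's minimal vruntime,
 * optionally decreased by sched_latency_ns, but never below the vruntime
 * it had before it went to sleep.
 */
#include <stdio.h>

static double wakeup_vruntime(double min_vruntime_ns, double old_vruntime_ns,
                              int new_fair_sleepers, double sched_latency_ns)
{
    double v = min_vruntime_ns;

    if (new_fair_sleepers)
        v -= sched_latency_ns;            /* bonus for having slept */
    if (v < old_vruntime_ns)
        v = old_vruntime_ns;              /* but never go back in time */
    return v;
}

int main(void)
{
    double min_vr = 500e6, old_vr = 490e6, latency = 20e6;

    printf("feature off: vruntime = %.0f ms\n",
           wakeup_vruntime(min_vr, old_vr, 0, latency) / 1e6);
    printf("feature on : vruntime = %.0f ms\n",
           wakeup_vruntime(min_vr, old_vr, 1, latency) / 1e6);
    return 0;
}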