Category: LINUX

2010-08-26 09:50:19

KERNEL:2.6.34

Calling this function means the current task gives up the CPU so that another runnable process can be selected and run.
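
Before walking through the function, here is a minimal sketch (not taken from sched.c) of the classic wait-for-an-event pattern in which drivers typically end up calling schedule(); the names my_waitqueue, my_condition and wait_for_event() are made up for illustration:

#include <linux/wait.h>
#include <linux/sched.h>
#include <linux/errno.h>

static DECLARE_WAIT_QUEUE_HEAD(my_waitqueue);   /* hypothetical wait queue */
static int my_condition;                        /* hypothetical condition, set by a waker */

static int wait_for_event(void)
{
    DEFINE_WAIT(wait);
    int ret = 0;

    for (;;) {
        /* mark ourselves TASK_INTERRUPTIBLE and queue on the wait queue */
        prepare_to_wait(&my_waitqueue, &wait, TASK_INTERRUPTIBLE);
        if (my_condition)
            break;
        if (signal_pending(current)) {   /* interrupted by a signal */
            ret = -ERESTARTSYS;
            break;
        }
        schedule();                      /* give up the CPU until woken */
    }
    finish_wait(&my_waitqueue, &wait);   /* back to TASK_RUNNING, off the queue */
    return ret;
}

If a signal arrives while the sleeping task is TASK_INTERRUPTIBLE, schedule() itself puts the task back to TASK_RUNNING instead of dequeuing it; that is exactly the signal_pending_state() branch commented on below.
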
/*
 * schedule() is the main scheduler function.
 */

asmlinkage void __sched schedule(void)
{
    struct task_struct *prev, *next;
    unsigned long *switch_count;
    struct rq *rq;
    int cpu;

/* At the end of this function, need_resched() is checked once more; if it
returns true, control jumps back to this label. */

need_resched:

/* After preempt_disable(), the current task cannot be preempted until preemption is enabled again. */

    preempt_disable();
    cpu = smp_processor_id();
    rq = cpu_rq(cpu);
/* Report a quiescent state to RCU-sched: passing through schedule() means this CPU is at a natural quiescent point. */
    rcu_sched_qs(cpu);

/* prev points to the task_struct of the task currently running on this CPU. */
    prev = rq->curr;

/* switch_count initially points to prev's involuntary context-switch counter
 * (nivcsw); it is redirected to nvcsw below if prev leaves the CPU voluntarily. */
    switch_count = &prev->nivcsw;

/* kernel_flag is the "big kernel lock" (BKL).
 * This spinlock is taken and released recursively by lock_kernel()
 * and unlock_kernel(). It is transparently dropped and reacquired
 * over schedule(). It is used to protect legacy code that hasn't
 * been migrated to a proper locking design yet.
 * task_struct has a member lock_depth, initialized to -1, which means
 * the task does not hold the kernel lock; lock_depth >= 0 means it
 * does hold it.
 * A task must not keep holding the underlying spinlock while it is
 * switched away, so schedule() calls release_kernel_lock() here to
 * drop it (and reacquires it further down before returning).
 */
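
/* Roughly how the BKL bookkeeping works (a simplified sketch of the idea,
 * not the exact lib/kernel_lock.c code):
 *
 *   lock_kernel():            if (++current->lock_depth == 0) take kernel_flag;
 *   unlock_kernel():          if (--current->lock_depth < 0)  drop kernel_flag;
 *   release_kernel_lock(p):   if (p->lock_depth >= 0) drop kernel_flag,
 *                             leaving lock_depth untouched;
 *   reacquire_kernel_lock(p): if (p->lock_depth >= 0) take kernel_flag again.
 */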

    release_kernel_lock(prev);

need_resched_nonpreemptible:

    schedule_debug(prev);

    if (sched_feat(HRTICK))
        hrtick_clear(rq);

/* Acquire the run queue's lock and disable interrupts on this CPU. */
    raw_spin_lock_irq(&rq->lock);

/* Update the run queue's clock; this ends up calling sched_clock_cpu(). */

    update_rq_clock(rq);

/* Clear the TIF_NEED_RESCHED flag in the task's thread_info: the task is
 * about to give up the CPU anyway, so the pending reschedule request is
 * being honoured and does not need to fire again immediately.
 */

    clear_tsk_need_resched(prev);

/*
 * schedule() examines the state of prev. If prev is no longer runnable
 * (prev->state != TASK_RUNNING) and it has not been preempted while in kernel
 * mode (the PREEMPT_ACTIVE bit is not set in preempt_count()), it should be
 * removed from the run queue. However, if prev has non-blocked pending
 * signals and its state is TASK_INTERRUPTIBLE, its state is set back to
 * TASK_RUNNING and it stays on the run queue. This does not hand the CPU to
 * prev; it merely gives prev a chance to be selected for execution again.
 * Otherwise the task is deactivated through the scheduler-class-specific
 * method: deactivate_task() essentially ends up calling
 * sched_class->dequeue_task(), which removes prev from the run queue.
 * About preempt_count(): it returns the current task's preemption counter,
 * which starts at 0 (preemptible) and is incremented by preempt_disable() and
 * by each lock taken. The PREEMPT_ACTIVE bit in it marks a task that is being
 * preempted in kernel mode; such a task must stay on the run queue even if
 * its state is not TASK_RUNNING, because it never meant to sleep.
 */

    if (prev->state && !(preempt_count() & PREEMPT_ACTIVE)) {
        if (unlikely(signal_pending_state(prev->state, prev)))
            prev->state = TASK_RUNNING;
        else
            deactivate_task(rq, prev, 1);
        switch_count = &prev->nvcsw;
    }

/* On non-SMP kernels pre_schedule() is an empty stub; on SMP it invokes prev's scheduling class's pre_schedule hook, if any. */
    pre_schedule(rq, prev);

/* If the run queue has no runnable tasks left, call idle_balance(), which tries to pull tasks over from other CPUs. */
    if (unlikely(!rq->nr_running))
        idle_balance(cpu, rq);

/* put_prev_task() first announces to the scheduling class that the currently
 * running task is about to be replaced by another one. Note that this is not
 * the same as taking the task off the run queue; it just gives the class an
 * opportunity to do some accounting and bring its statistics up to date. The
 * task that is supposed to run next must also be selected by the scheduling
 * classes, and pick_next_task() is responsible for picking the
 * highest-priority runnable task:
 */

    put_prev_task(rq, prev);

/* pick_next_task() first checks whether all runnable tasks on this run queue
 * belong to the CFS class (rq->nr_running == rq->cfs.nr_running); if so it
 * asks fair_sched_class directly. Otherwise it walks the scheduling classes
 * from highest to lowest priority (RT before CFS before idle) until one of
 * them returns a task, so the real-time classes take precedence over CFS.
 * A sketch of pick_next_task() is shown after the listing. */

    next = pick_next_task(rq);

/* It need not necessarily be the case that a new task has been selected. If only one task is currently able to run because all others are sleeping, it will naturally be left on the CPU. If, however, a new task has been selected, then task switching at the hardware level must be prepared and executed.
*/

    if (likely(prev != next)) {
        sched_info_switch(prev, next);
        perf_event_task_sched_out(prev, next);

        rq->nr_switches++;
        rq->curr = next;
        ++*switch_count;

        context_switch(rq, prev, next); /* unlocks the rq */
        /*
         * the context switch might have flipped the stack from under
         * us, hence refresh the local variables.
         */
        cpu = smp_processor_id();
        rq = cpu_rq(cpu);
    } else
      raw_spin_unlock_irq(&rq->lock); /* no task switch: the current task keeps the CPU */

    post_schedule(rq);

    if (unlikely(reacquire_kernel_lock(current) < 0)) {
        prev = rq->curr;
        switch_count = &prev->nivcsw;
        goto need_resched_nonpreemptible;
    }

    preempt_enable_no_resched();
    if (need_resched())
        goto need_resched;
}
EXPORT_SYMBOL(schedule);
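
As referenced in the comment above pick_next_task(), the class iteration looks roughly like this in kernel/sched.c of this kernel version (lightly paraphrased, comments added; treat it as a sketch rather than a verbatim copy):

static inline struct task_struct *pick_next_task(struct rq *rq)
{
    const struct sched_class *class;
    struct task_struct *p;

    /*
     * Fast path: if every runnable task on this rq belongs to the CFS
     * class, ask the fair class directly.
     */
    if (likely(rq->nr_running == rq->cfs.nr_running)) {
        p = fair_sched_class.pick_next_task(rq);
        if (likely(p))
            return p;
    }

    /*
     * Otherwise walk the classes from highest to lowest priority
     * (RT -> CFS -> idle); the idle class always returns a task,
     * so the loop is guaranteed to terminate.
     */
    class = sched_class_highest;
    for ( ; ; ) {
        p = class->pick_next_task(rq);
        if (p)
            return p;
        class = class->next;
    }
}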
