Category: LINUX

2010-09-11 15:17:03

KERNEL_VERSION: linux-2.6.34

context_switch(): switch to the new MM and the new thread's register state.

It is performed by invoking two functions:

1. switch_mm() changes the memory context described in task_struct->mm. Depending on the processor, this involves loading the page tables, flushing the translation lookaside buffers (partially or fully), and supplying the MMU with new information (a condensed x86 example follows this list).

2. switch_to() switches the processor register contents and the kernel stack. The user part of the virtual address space, including the user-mode stack, was already switched in the first step, so it does not need to be changed explicitly here. This work varies greatly from architecture to architecture and is usually written in assembly language.
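For concreteness, here is the first step as it looks on x86 in this kernel version, condensed from switch_mm() in arch/x86/include/asm/mmu_context.h (the SMP #ifdefs and the lazy-mode path are trimmed). On x86 reloading CR3 implicitly flushes the non-global TLB entries, so no separate flush is needed:

static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
                             struct task_struct *tsk)
{
    unsigned cpu = smp_processor_id();

    if (likely(prev != next)) {
        /* stop TLB-flush IPIs for the previous mm reaching this CPU */
        cpumask_clear_cpu(cpu, mm_cpumask(prev));

        /* this CPU now actively uses next's page tables */
        percpu_write(cpu_tlbstate.state, TLBSTATE_OK);
        percpu_write(cpu_tlbstate.active_mm, next);
        cpumask_set_cpu(cpu, mm_cpumask(next));

        /* reload the page tables; on x86 this also flushes the TLB */
        load_cr3(next->pgd);

        /* load the per-process LDT, but only if it differs */
        if (unlikely(prev->context.ldt != next->context.ldt))
            load_LDT_nocheck(&next->context);
    }
}

Architectures whose TLB entries are tagged with an address-space ID can avoid the flush entirely and just switch the current ASID/context register instead.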


static inline void
context_switch(struct rq *rq, struct task_struct *prev,
     struct task_struct *next)
{
    struct mm_struct *mm, *oldmm;


/* Before the actual task switch, prepare_task_switch() calls the
prepare_arch_switch() hook, which must be defined by every architecture.
This enables the kernel to execute architecture-specific code to prepare
for the switch. */

    prepare_task_switch(rq, prev, next);
    trace_sched_switch(rq, prev, next);
    mm = next->mm;
    oldmm = prev->active_mm;
    /*
     * For paravirt, this is coupled with an exit in switch_to to
     * combine the page table reload and the switch backend into
     * one hypercall.
     */

    arch_start_context_switch(prev);


/* Kernel threads do not have their own userspace address space; their
mm is NULL, so they run on the address space borrowed from the previous
task */
    if (likely(!mm)) {
        next->active_mm = oldmm;
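        /* mm_count pins the mm_struct itself and, unlike mm_users,
         * also counts lazy-TLB users such as this kernel thread */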
        atomic_inc(&oldmm->mm_count);


/* Notifies the underlying architecture that exchanging the userspace
portion of the virtual address space is not required. This speeds up
the context switch and is known as the lazy-TLB technique. */
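/* On x86 this amounts to marking the CPU lazy, i.e. setting
cpu_tlbstate.state = TLBSTATE_LAZY (under CONFIG_SMP), so that
TLB-flush IPIs for oldmm can be filtered out while the kernel
thread runs. */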

        enter_lazy_tlb(oldmm, next);
    } else
        switch_mm(oldmm, mm, next);


/* If the previous task is a kernel thread, its active_mm pointer must
be reset to NULL to disconnect it from the borrowed address space */

    if (likely(!prev->mm)) {
        prev->active_mm = NULL;
        rq->prev_mm = oldmm;
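        /* finish_task_switch() will pick rq->prev_mm up and mmdrop()
         * it, releasing the mm_count reference taken above */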
    }
    /*
     * Since the runqueue lock will be released by the next
     * task (which is an invalid locking op but in the case
     * of the scheduler it's an obvious special-case), so we
     * do an early lockdep release here:
     */

#ifndef __ARCH_WANT_UNLOCKED_CTXSW
    spin_release(&rq->lock.dep_map, 1, _THIS_IP_);
#endif

    /* Here we just switch the register state and the stack; see the
     * note on the three-argument form after the function. */
    switch_to(prev, next, prev);


/* The barrier statement is a directive to the compiler, ensuring that
the order in which the switch_to and finish_task_switch statements are
executed is not changed by any unfortunate optimizations */
    barrier();
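    /* In this tree barrier() is a pure compiler barrier, defined in
     * include/linux/compiler-gcc.h as
     *     #define barrier() __asm__ __volatile__("": : :"memory")
     * It emits no instructions; it only prevents GCC from moving
     * memory accesses across this point. */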
    /*
     * this_rq must be evaluated again because prev may have moved
     * CPUs since it called schedule(), thus the 'rq' on its stack
     * frame will be invalid.
     */

/* performs some cleanup and correctly releases the runqueue lock; a
condensed version follows the function */
    finish_task_switch(this_rq(), prev);
}
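A note on the odd-looking switch_to(prev, next, prev): the macro takes three arguments because a task that is scheduled out stops executing inside switch_to() and resumes there, possibly much later, after any number of other tasks have run. By then, the task that ran immediately before it is generally not the one saved in its old local prev, so the low-level code hands the true predecessor back through the third argument. A minimal pseudo-C sketch of the idiom (the real implementation is inline assembly, e.g. arch/x86/include/asm/system.h; __switch_to_sketch is a made-up name):

/* Illustrative only: stands in for the architecture code that saves
 * prev's registers and kernel stack, loads next's, and, when prev is
 * eventually resumed, returns the task this CPU actually switched
 * away from. */
struct task_struct *__switch_to_sketch(struct task_struct *prev,
                                       struct task_struct *next);

#define switch_to(prev, next, last) do {              \
        (last) = __switch_to_sketch((prev), (next));  \
} while (0)

Passing prev as the last argument makes the assignment overwrite the stale local prev in the resumed context_switch() frame, so the finish_task_switch(this_rq(), prev) call below the barrier cleans up after the task that really ran last on this CPU.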
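Finally, a condensed look at the cleanup step, from finish_task_switch() in kernel/sched.c of this version (perf, preempt-notifier and kprobe calls trimmed):

static void finish_task_switch(struct rq *rq, struct task_struct *prev)
{
    struct mm_struct *mm = rq->prev_mm;
    long prev_state;

    rq->prev_mm = NULL;

    /* read prev->state while the runqueue lock is still held; once
     * it is dropped, a dying prev could run and be freed elsewhere */
    prev_state = prev->state;
    finish_arch_switch(prev);
    finish_lock_switch(rq, prev);    /* releases rq->lock */

    /* drop the reference taken when the mm was lazily borrowed */
    if (mm)
        mmdrop(mm);

    /* a TASK_DEAD task schedules away one last time and never
     * returns; its final reference is dropped here */
    if (unlikely(prev_state == TASK_DEAD))
        put_task_struct(prev);
}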

