2009-10-30 19:06:15


Reading notes: Spin Lock

LKD:
The spin lock provides the needed protection from concurrency on SMP machines. On UP machines, the locks compile away and do not exist; they simply act as markers to disable and enable kernel preemption. If kernel preemption is turned off, the locks compile away entirely.


LDD, 3rd edition:
The kernel preemption case is handled by the spinlock code itself. Any time kernel code holds a spinlock, preemption is disabled on the relevant processor. Even uniprocessor systems must disable preemption in this way to avoid race conditions. That is why proper locking is required even if you never expect your code to run on a multiprocessor machine.


kernel/spinlock.c (Linux 2.6.26):

spin_lock in SMP:

void __lockfunc _spin_lock(spinlock_t *lock)
{
    preempt_disable();        /* must not be preempted while holding the lock */
    spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
    LOCK_CONTENDED(lock, _raw_spin_trylock, _raw_spin_lock);
}

spin_lock in UP:

#define _spin_lock(lock)            __LOCK(lock)
/*
 * In the UP-nondebug case there's no real locking going on, so the
 * only thing we have to do is to keep the preempt counts and irq
 * flags straight, to suppress compiler warnings of unused lock
 * variables, and to add the proper checker annotations:
 */
#define __LOCK(lock) \
  do { preempt_disable(); __acquire(lock); (void)(lock); } while (0)


void __lockfunc _spin_unlock(spinlock_t *lock)
{
    spin_release(&lock->dep_map, 1, _RET_IP_);
    _raw_spin_unlock(lock);
    preempt_enable();
}
EXPORT_SYMBOL(_spin_unlock);


LKD:
Spin locks can be used in interrupt handlers. If a lock is used in an interrupt handler, you must also disable local interrupts (interrupt requests on the current processor) before obtaining the lock. Otherwise, an interrupt handler can interrupt kernel code that already holds the lock and spin on it forever: a double-acquire deadlock, because the interrupted code can never run to release the lock.


Spin lock from IRQ:


unsigned long __lockfunc _spin_lock_irqsave(spinlock_t *lock)
{
    unsigned long flags;

    local_irq_save(flags);            /* disable local interrupt delivery */
    preempt_disable();                /* irqs are already off, but the preempt
                                       * count must stay balanced with the
                                       * preempt_enable() in
                                       * _spin_unlock_irqrestore() */
    spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
    /*
     * On lockdep we dont want the hand-coded irq-enable of
     * _raw_spin_lock_flags() code, because lockdep assumes
     * that interrupts are not re-enabled during lock-acquire:
     */
#ifdef CONFIG_LOCKDEP
    LOCK_CONTENDED(lock, _raw_spin_trylock, _raw_spin_lock);
#else
    _raw_spin_lock_flags(lock, &flags);
#endif
    return flags;
}

void __lockfunc _spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
{
    spin_release(&lock->dep_map, 1, _RET_IP_);
    _raw_spin_unlock(lock);
    local_irq_restore(flags);
    preempt_enable();
}






Try lock code:

int __lockfunc _spin_trylock(spinlock_t *lock)
{
    preempt_disable();
    if (_raw_spin_trylock(lock)) {
        spin_acquire(&lock->dep_map, 0, 1, _RET_IP_);    /* lock acquired; caller must release it with spin_unlock() */
        return 1;
    }
   
    preempt_enable();          /* lock not acquired: re-enable preemption */
    return 0;
}



Summary:

1. On a UP machine without kernel preemption, the spin lock does nothing and compiles away entirely.
2. On a UP machine with kernel preemption, the spin lock simply disables kernel preemption on entering the critical region and re-enables it on leaving.
3. On an SMP machine, the full spin lock is enabled: it prevents both concurrent access from other processors and races caused by kernel preemption.





