2009-10-26 13:17:35
The processing of a Linux interrupt is split into two parts: the top half and the bottom half.
Top half: also referred to as the interrupt handler. It runs immediately upon receipt of the interrupt and performs only the time-critical work, such as reading or writing device registers, acknowledging receipt of the interrupt, or resetting the hardware. At a minimum, it runs with the current interrupt line disabled; it may run with all local interrupts disabled if SA_INTERRUPT is set. Interrupt handlers are given their own stack, referred to as the interrupt stack, one stack per processor, one page in size.
Bottom half: executes work that can be performed later, at a more convenient time. The key difference from the top half is that it runs with all interrupts enabled.
When executing an interrupt handler or a bottom half, the kernel is in interrupt context. Since interrupt context is not associated with a process and has no backing process, interrupt context cannot sleep. Therefore, we cannot call functions that may sleep in interrupt context.
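To make the split concrete, here is a hypothetical top-half sketch in the 2.6.26 style (my_irq_handler, MY_IRQ and my_dev are made-up names; note that by 2.6.26 the old SA_* flags have been renamed to IRQF_*):
#include <linux/interrupt.h>
/* Top half: do only the urgent, hardware-related work, then return. */
static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
	/* acknowledge the device so it deasserts the interrupt line,
	 * then defer the rest to a bottom half (softirq/tasklet/work queue) */
	return IRQ_HANDLED;
}
/* Registration, e.g. in the driver's init path:
 *	ret = request_irq(MY_IRQ, my_irq_handler, IRQF_SHARED, "my_dev", my_dev);
 */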
1st part: Softirqs
1. Statically created at compile-time.
2. Can run simultaneously on any processor, even two of the same type.
3. Rarely used. Reserved for the most timing-critical and important bottom-half processing on the system.
4. A softirq never preempts another softirq; the only event that can preempt a softirq is an interrupt handler. However, another softirq - even the same one - can run on another processor.
5. Raising a softirq: usually, an interrupt handler marks its softirq for execution before returning, and the softirq runs at a suitable time. The interrupt handler performs the basic hardware-related work, raises the softirq, and then exits. When processing interrupts, the kernel invokes do_softirq(). The softirq then runs and picks up where the interrupt handler left off.
6. A softirq handler runs with interrupts enabled and CANNOT sleep. While a handler runs, softirqs on the current processor are disabled; however, another processor can execute other softirqs, even the same one. Thus, any shared data - even global data used only within the softirq handler itself - needs proper locking. This is the main reason tasklets are usually preferred. Consequently, most softirq handlers resort to per-processor data (data unique to each processor and thus not requiring locking) or some other tricks to avoid explicit locking and provide excellent scalability, as in the sketch below.
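A minimal sketch of the per-processor-data trick from point 6, using the 2.6.26 per-CPU API (my_softirq_counter and my_softirq_handler are hypothetical names; the handler would be registered with open_softirq()):
#include <linux/percpu.h>
/* One counter per CPU: the handler only ever touches the copy belonging
 * to the CPU it runs on, so no lock is needed. */
static DEFINE_PER_CPU(unsigned long, my_softirq_counter);
static void my_softirq_handler(struct softirq_action *a)
{
	/* Softirqs on this CPU are serialized and we never touch another
	 * CPU's counter, so a plain increment is safe. */
	__get_cpu_var(my_softirq_counter)++;
}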
/*********************** sample code **************************/
softirq_action structure: a 32-entry array of softirq_action structures is declared in softirq.c:
static struct softirq_action softirq_vec[32] __cacheline_aligned_in_smp;
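The structure itself, from linux-2.6.26 include/linux/interrupt.h (it carries the data and action fields that open_softirq() fills in below):
struct softirq_action
{
	void	(*action)(struct softirq_action *);
	void	*data;
};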
Current softirq indexes (interrupt.h):
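For reference, the index enum in linux-2.6.26 looks like this (HRTIMER_SOFTIRQ is only present with CONFIG_HIGH_RES_TIMERS):
enum
{
	HI_SOFTIRQ=0,
	TIMER_SOFTIRQ,
	NET_TX_SOFTIRQ,
	NET_RX_SOFTIRQ,
	BLOCK_SOFTIRQ,
	TASKLET_SOFTIRQ,
	SCHED_SOFTIRQ,
#ifdef CONFIG_HIGH_RES_TIMERS
	HRTIMER_SOFTIRQ,
#endif
	RCU_SOFTIRQ,	/* Preferable RCU should always be the last softirq */
};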
Raising a softirq (softirq.c):
void raise_softirq(unsigned int nr)
{
	unsigned long flags;

	local_irq_save(flags);
	raise_softirq_irqoff(nr);
	local_irq_restore(flags);
}
Registering a softirq (softirq.c):
void open_softirq(int nr, void (*action)(struct softirq_action *), void *data)
{
	softirq_vec[nr].data = data;
	softirq_vec[nr].action = action;
}
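As a usage sketch, the timer subsystem in 2.6.26 registers and raises its softirq roughly like this (run_timer_softirq is the real handler in kernel/timer.c):
/* at init time */
open_softirq(TIMER_SOFTIRQ, run_timer_softirq, NULL);
/* later, from the timer interrupt path, to mark it pending */
raise_softirq(TIMER_SOFTIRQ);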
Softirq execution occurs in (softirq.c):
The following functions also deal with the softirq-reactivation issue: they bound latency while keeping fairness by waking the ksoftirqd thread when pending softirqs keep re-raising themselves.
asmlinkage void do_softirq(void)
{
	__u32 pending;
	unsigned long flags;

	if (in_interrupt())
		return;

	local_irq_save(flags);
	pending = local_softirq_pending();
	if (pending)
		__do_softirq();
	local_irq_restore(flags);
}
/*
* We restart softirq processing MAX_SOFTIRQ_RESTART times,
* and we fall back to softirqd after that.
*
* This number has been established via experimentation.
* The two things to balance is latency against fairness -
* we want to handle softirqs as soon as possible, but they
* should not be able to lock up the box.
*/
#define MAX_SOFTIRQ_RESTART 10
asmlinkage void __do_softirq(void)
{
	struct softirq_action *h;
	__u32 pending;
	int max_restart = MAX_SOFTIRQ_RESTART;
	int cpu;

	pending = local_softirq_pending();
	account_system_vtime(current);

	__local_bh_disable((unsigned long)__builtin_return_address(0));
	trace_softirq_enter();

	cpu = smp_processor_id();
restart:
	/* Reset the pending bitmask before enabling irqs */
	set_softirq_pending(0);

	local_irq_enable();

	h = softirq_vec;

	do {
		if (pending & 1) {
			h->action(h);
			rcu_bh_qsctr_inc(cpu);
		}
		h++;
		pending >>= 1;
	} while (pending);

	local_irq_disable();

	pending = local_softirq_pending();
	if (pending && --max_restart)
		goto restart;

	if (pending)
		wakeup_softirqd();

	trace_softirq_exit();

	account_system_vtime(current);
	_local_bh_enable();
}
Pending softirqs are checked for and executed in: A. the return-from-hardware-interrupt code path; B. the ksoftirqd kernel thread; C. code that explicitly checks for and executes pending softirqs, such as the networking subsystem; D. the local_bh_enable(void) function.
2nd part: Tasklet
1. Built on top of two softirqs: HI_SOFTIRQ and TASKLET_SOFTIRQ.
2. Can be dynamically created and destroyed.
3. Two different tasklets can run concurrently on different processors, but two tasklets of the same type cannot run simultaneously.
4. As with softirqs, tasklets CANNOT sleep; thus, we CANNOT use semaphores or other blocking functions in a tasklet.
5. A tasklet runs with all interrupts enabled, so we need to take precautions if the tasklet shares data with an interrupt handler; disabling interrupts and obtaining a lock is one solution.
6. Two different tasklets can run at the same time on two different CPUs, so proper locking is needed if the tasklet shares data with another tasklet or a softirq.
The tasklet_struct structure (interrupt.h):
struct tasklet_struct
{
	struct tasklet_struct *next;
	unsigned long state;
	atomic_t count;
	void (*func)(unsigned long);
	unsigned long data;
};
state definition (interrupt.h):
enum
{
	TASKLET_STATE_SCHED,	/* Tasklet is scheduled for execution */
	TASKLET_STATE_RUN	/* Tasklet is running (SMP only) */
};
#ifdef CONFIG_SMP
static inline int tasklet_trylock(struct tasklet_struct *t)
{
	return !test_and_set_bit(TASKLET_STATE_RUN, &(t)->state);
}

static inline void tasklet_unlock(struct tasklet_struct *t)
{
	smp_mb__before_clear_bit();
	clear_bit(TASKLET_STATE_RUN, &(t)->state);
}

static inline void tasklet_unlock_wait(struct tasklet_struct *t)
{
	while (test_bit(TASKLET_STATE_RUN, &(t)->state)) { barrier(); }
}
#else
#define tasklet_trylock(t) 1
#define tasklet_unlock_wait(t) do { } while (0)
#define tasklet_unlock(t) do { } while (0)
#endif
count: must be zero for the tasklet to be enabled to run.
Declaration of the tasklet_vec and tasklet_hi_vec lists (softirq.c):
struct tasklet_head
{
	struct tasklet_struct *head;
	struct tasklet_struct **tail;
};

/* Some compilers disobey section attribute on statics when not
   initialized -- RR */
static DEFINE_PER_CPU(struct tasklet_head, tasklet_vec) = { NULL };
static DEFINE_PER_CPU(struct tasklet_head, tasklet_hi_vec) = { NULL };
softirq_init() initializes tasklet_vec and tasklet_hi_vec and registers the related handlers, tasklet_action and tasklet_hi_action:
void __init softirq_init(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		per_cpu(tasklet_vec, cpu).tail =
			&per_cpu(tasklet_vec, cpu).head;
		per_cpu(tasklet_hi_vec, cpu).tail =
			&per_cpu(tasklet_hi_vec, cpu).head;
	}

	open_softirq(TASKLET_SOFTIRQ, tasklet_action, NULL);
	open_softirq(HI_SOFTIRQ, tasklet_hi_action, NULL);
}
The TASKLET_SOFTIRQ handler (softirq.c):
static void tasklet_action(struct softirq_action *a)
{
	struct tasklet_struct *list;

	local_irq_disable();
	list = __get_cpu_var(tasklet_vec).head;
	__get_cpu_var(tasklet_vec).head = NULL;
	__get_cpu_var(tasklet_vec).tail = &__get_cpu_var(tasklet_vec).head;
	local_irq_enable();

	while (list) {
		struct tasklet_struct *t = list;

		list = list->next;

		if (tasklet_trylock(t)) {
			if (!atomic_read(&t->count)) {
				if (!test_and_clear_bit(TASKLET_STATE_SCHED, &t->state))
					BUG();
				t->func(t->data);
				tasklet_unlock(t);
				continue;
			}
			tasklet_unlock(t);
		}

		local_irq_disable();
		t->next = NULL;
		*__get_cpu_var(tasklet_vec).tail = t;
		__get_cpu_var(tasklet_vec).tail = &(t->next);
		__raise_softirq_irqoff(TASKLET_SOFTIRQ);
		local_irq_enable();
	}
}
Declaring Tasklets
Statically (interrupt.h):
#define DECLARE_TASKLET(name, func, data) \
struct tasklet_struct name = { NULL, 0, ATOMIC_INIT(0), func, data }
Dynamically, with tasklet_init():
void tasklet_init(struct tasklet_struct *t, void (*func)(unsigned long), unsigned long data);
Scheduling Tasklets (interrupt.h):
static inline void tasklet_schedule(struct tasklet_struct *t)
{
	if (!test_and_set_bit(TASKLET_STATE_SCHED, &t->state))
		__tasklet_schedule(t);
}

static inline void tasklet_hi_schedule(struct tasklet_struct *t)
{
	if (!test_and_set_bit(TASKLET_STATE_SCHED, &t->state))
		__tasklet_hi_schedule(t);
}
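Putting the pieces together, a minimal, hypothetical usage sketch (my_tasklet and my_tasklet_handler are made-up names):
/* The handler runs with interrupts enabled and must not sleep. */
static void my_tasklet_handler(unsigned long data)
{
	/* deferred, non-time-critical work goes here */
}
static DECLARE_TASKLET(my_tasklet, my_tasklet_handler, 0);
/* In the interrupt handler (top half), defer the work:
 *	tasklet_schedule(&my_tasklet);
 */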
Killing a tasklet removes it from its pending queue. tasklet_kill() CANNOT be used in interrupt context because it may sleep:
void tasklet_kill(struct tasklet_struct *t)
{
	if (in_interrupt())
		printk("Attempt to kill tasklet from interrupt\n");

	while (test_and_set_bit(TASKLET_STATE_SCHED, &t->state)) {
		do
			yield();
		while (test_bit(TASKLET_STATE_SCHED, &t->state));
	}
	tasklet_unlock_wait(t);
	clear_bit(TASKLET_STATE_SCHED, &t->state);
}
3rd part: Work Queues
1. The only bottom-half mechanism that runs in process context.
2. Schedulable and can sleep; used in places where we need to allocate a lot of memory, obtain a semaphore, or perform block I/O.
3. A work queue is a simple interface for deferring work to a generic kernel thread, called a worker thread (see the sketch after this list).
4. Each type of worker thread exists per CPU and is represented by a workqueue_struct.
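A minimal usage sketch of the default work queue (the events/n worker threads), using the work_struct API as found in 2.6.26; my_work and my_work_handler are hypothetical names:
#include <linux/workqueue.h>
/* Runs in process context on a worker thread, so it may sleep. */
static void my_work_handler(struct work_struct *work)
{
	/* may allocate memory, take semaphores, perform block I/O ... */
}
static DECLARE_WORK(my_work, my_work_handler);
/* Hand the work to the default events/n worker thread:
 *	schedule_work(&my_work);
 */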
4th part: Locking Between the Bottom Halves
1. Two different tasklets sharing the same data requires proper locking.
2. Since softirqs provide no serialization, all shared data needs an appropriate lock.
3. If process-context code and a bottom half share data, we need to disable bottom-half processing and obtain a lock before accessing the data.
Use local_bh_disable(void) and local_bh_enable(void) (see the sketch after this list).
These two calls do NOT disable the execution of work queues, because work queues run in process context and execute synchronously. However, softirqs and tasklets can occur asynchronously (say, on return from handling an interrupt), so kernel code may need to disable them.
4. If interrupt-context code and a bottom half share data, we need to disable interrupts and obtain a lock before accessing the data.
5. Any shared data in a work queue requires locking, same as normal kernel code.
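A minimal sketch of point 3, using the bottom-half-disabling spinlock variants (my_lock, process_side and tasklet_side are hypothetical names):
#include <linux/spinlock.h>
static DEFINE_SPINLOCK(my_lock);	/* protects data shared with a tasklet */
/* Process-context side: disable bottom halves on this CPU, then lock. */
void process_side(void)
{
	spin_lock_bh(&my_lock);
	/* ... access the shared data ... */
	spin_unlock_bh(&my_lock);
}
/* Tasklet side: the plain lock suffices, since a tasklet of a given
 * type never runs concurrently with itself. */
void tasklet_side(unsigned long data)
{
	spin_lock(&my_lock);
	/* ... access the shared data ... */
	spin_unlock(&my_lock);
}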
Reference:
Linux Kernel Development, 2nd Edition
linux-2.6.26 source
http://blog.pfan.cn/ljqy/31891.html