Interrupts are asynchronous with respect to the Linux kernel, so the kernel manages them in two dedicated execution contexts: hardirq (interrupt) context and softirq context.
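As a small hedged illustration (the helper below is hypothetical, not kernel code), code can check which of these contexts it is currently executing in with the macros from <linux/preempt.h>:

#include <linux/preempt.h>
#include <linux/printk.h>

/* Hypothetical helper: report the current execution context. */
static void report_context(void)
{
        if (in_irq())                   /* hardirq context: inside a hardware ISR */
                pr_info("running in hardirq context\n");
        else if (in_softirq())          /* softirq/tasklet (bottom-half) context */
                pr_info("running in softirq context\n");
        else                            /* ordinary process context */
                pr_info("running in process context\n");
}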
First, an interrupt is generated by external hardware, arbitrated by the interrupt controller and delivered to the CPU (at this point the controller translates it into an interrupt vector). Note that during system initialization the kernel installs handlers for the first 32 vectors (traps and exceptions) and for the system-call vector; these mappings are stored in the Interrupt Descriptor Table (IDT):
/*
 * Linux IRQ vector layout.
 *
 * There are 256 IDT entries (per CPU - each entry is 8 bytes) which can
 * be defined by Linux. They are used as a jump table by the CPU when a
 * given vector is triggered - by a CPU-external, CPU-internal or
 * software-triggered event.
 *
 * Linux sets the kernel code address each entry jumps to early during
 * bootup, and never changes them. This is the general layout of the
 * IDT entries:
 *
 *  Vectors   0 ...  31 : system traps and exceptions - hardcoded events
 *  Vectors  32 ... 127 : device interrupts
 *  Vector  128         : legacy int80 syscall interface
 *  Vectors 129 ... INVALIDATE_TLB_VECTOR_START-1 except 204 : device interrupts
 *  Vectors INVALIDATE_TLB_VECTOR_START ... 255 : special interrupts
 *
 * 64-bit x86 has per CPU IDT tables, 32-bit has one shared IDT table.
 *
 * This file enumerates the exact layout of them:
 */
The remaining vectors, which are not used at that point, are then set up according to their offsets:
/**
 * idt_setup_apic_and_irq_gates - Setup APIC/SMP and normal interrupt gates
 */
void __init idt_setup_apic_and_irq_gates(void)
{
        int i = FIRST_EXTERNAL_VECTOR;
        void *entry;

        idt_setup_from_table(idt_table, apic_idts, ARRAY_SIZE(apic_idts), true);

        for_each_clear_bit_from(i, used_vectors, FIRST_SYSTEM_VECTOR) {
                entry = irq_entries_start + 8 * (i - FIRST_EXTERNAL_VECTOR);
                set_intr_gate(i, entry);
        }

        for_each_clear_bit_from(i, used_vectors, NR_VECTORS) {
#ifdef CONFIG_X86_LOCAL_APIC
                set_bit(i, used_vectors);
                set_intr_gate(i, spurious_interrupt);
#else
                entry = irq_entries_start + 8 * (i - FIRST_EXTERNAL_VECTOR);
                set_intr_gate(i, entry);
#endif
        }
}
In other words, every vector that was not assigned during initialization has its gate pointed at the stub located at irq_entries_start plus an offset derived from how far the vector is from the first external vector. irq_entries_start is defined as follows:
/*
 * Build the entry stubs with some assembler magic.
 * We pack 1 stub into every 8-byte block.
 */
        .align 8
ENTRY(irq_entries_start)
    vector=FIRST_EXTERNAL_VECTOR
    .rept (FIRST_SYSTEM_VECTOR - FIRST_EXTERNAL_VECTOR)
        pushl   $(~vector+0x80)         /* Note: always in signed byte range */
    vector=vector+1
        jmp     common_interrupt
        .align  8
    .endr
END(irq_entries_start)

/*
 * the CPU automatically disables interrupts when executing an IRQ vector,
 * so IRQ-flags tracing has to follow that:
 */
        .p2align CONFIG_X86_L1_CACHE_SHIFT
common_interrupt:
        ASM_CLAC
        addl    $-0x80, (%esp)          /* Adjust vector into the [-256, -1] range */
        SAVE_ALL
        ENCODE_FRAME_POINTER
        TRACE_IRQS_OFF
        movl    %esp, %eax
        call    do_IRQ
        jmp     ret_from_intr
ENDPROC(common_interrupt)
So, ultimately, every vector from FIRST_EXTERNAL_VECTOR up to FIRST_SYSTEM_VECTOR ends up calling do_IRQ.
Because the same vector on different CPUs may need to map to different virtual interrupt numbers (irq), which is exactly why the irq abstraction layer exists, each CPU has to build a mapping between its local vectors and irq numbers during system initialization:
DEFINE_PER_CPU(vector_irq_t, vector_irq) = {
        [0 ... NR_VECTORS - 1] = VECTOR_UNUSED,
};
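Putting the pieces together, the following is a simplified sketch of what do_IRQ does (the real function lives in arch/x86/kernel/irq.c and additionally performs RCU/lockdep checks and interrupt accounting): it recovers the vector pushed by the entry stub, translates it to an irq_desc through the per-CPU vector_irq table, and invokes the descriptor's flow handler.

/*
 * Simplified sketch of do_IRQ; error paths and statistics are omitted.
 */
__visible unsigned int do_IRQ(struct pt_regs *regs)
{
        struct pt_regs *old_regs = set_irq_regs(regs);
        /* The entry stub pushed ~vector + 0x80; orig_ax recovers the vector. */
        unsigned vector = ~regs->orig_ax;
        struct irq_desc *desc;

        entering_irq();                 /* irq_enter(): now in hardirq context */

        /* Per-CPU mapping filled at init time: local vector -> irq_desc */
        desc = __this_cpu_read(vector_irq[vector]);

        if (!handle_irq(desc, regs))    /* flow handler ends up walking desc->action */
                ack_APIC_irq();         /* nothing registered: treat as spurious */

        exiting_irq();                  /* irq_exit(): may run pending softirqs */
        set_irq_regs(old_regs);
        return 1;
}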
Then, based on the information carried with the vector (interrupt number, device and so on), the CPU first finds the corresponding irq number, which is visible to the whole kernel (this lookup is derived from the hardware interrupt delivered by the controller together with the vector-to-irq mapping established beforehand). Once the irq number is known, there is a matching irq_desc structure, shown below:
struct irq_desc {
        struct irq_common_data  irq_common_data;
        struct irq_data         irq_data;
        unsigned int __percpu   *kstat_irqs;
        irq_flow_handler_t      handle_irq;
        ...
        struct irqaction        *action;        /* IRQ action list */
        unsigned int            status_use_accessors;
        unsigned int            core_internal_state__do_not_mess_with_it;
        unsigned int            depth;          /* nested irq disables */
        unsigned int            wake_depth;     /* nested wake enables */
        unsigned int            tot_count;
        unsigned int            irq_count;      /* For detecting broken IRQs */
        unsigned long           last_unhandled; /* Aging timer for unhandled count */
        unsigned int            irqs_unhandled;
        atomic_t                threads_handled;
        int                     threads_handled_last;
        raw_spinlock_t          lock;
        struct cpumask          *percpu_enabled;
        const struct cpumask    *percpu_affinity;

        ...
        struct mutex            request_mutex;
        int                     parent_irq;
        struct module           *owner;
        const char              *name;
} ____cacheline_internodealigned_in_smp;
The action field here is a linked list describing the operations attached to this irq number; the kernel walks the list, finds the entry matching the device that raised the interrupt, and calls its handler. These handler functions are the ones that device drivers hung on the irqaction list when they registered for the irq.
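From the driver's point of view, hanging a handler on that list is done with request_irq() and undone with free_irq(). The sketch below is a hypothetical example (the device name, irq number and cookie are made up), not code from any real driver:

#include <linux/interrupt.h>
#include <linux/module.h>

/* Hypothetical device: irq number and cookie are placeholders. */
#define MYDEV_IRQ       42
static int mydev_cookie;

/* Top half: runs in hardirq context, so keep it short. */
static irqreturn_t mydev_isr(int irq, void *dev_id)
{
        /* Read/ack the hardware here, defer heavy work to the bottom half. */
        return IRQ_HANDLED;
}

static int __init mydev_init(void)
{
        /*
         * Appends an irqaction for MYDEV_IRQ; IRQF_SHARED allows other
         * handlers on the same line, distinguished by dev_id.
         */
        return request_irq(MYDEV_IRQ, mydev_isr, IRQF_SHARED,
                           "mydev", &mydev_cookie);
}

static void __exit mydev_exit(void)
{
        free_irq(MYDEV_IRQ, &mydev_cookie);
}

module_init(mydev_init);
module_exit(mydev_exit);
MODULE_LICENSE("GPL");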
Interrupt handling is generally split into a top half and a bottom half. The top half does the time-critical work, and while it runs the response to further hardware interrupts is disabled; the bottom half handles the less urgent work and is mostly built on softirqs (they are run when a hardware interrupt exits, and also by the ksoftirqd daemon threads). The set of softirqs is hard-coded in the kernel; network transmit/receive, timers, tasklets and so on are implemented on top of them. In addition, the kernel limits softirq processing time (2 ms) and the number of restarts (at most 10 passes over the pending softirqs) so that softirqs cannot keep running and hog the CPU.
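As a hedged sketch of this top-half/bottom-half split using the (pre-5.9) tasklet API, reusing the hypothetical mydev device from the previous example: the ISR only records the event and schedules a tasklet, and the deferred work then runs later in softirq context with hardware interrupts enabled.

#include <linux/interrupt.h>
#include <linux/kernel.h>

static unsigned long mydev_events;      /* placeholder for deferred work */

/* Bottom half: runs in softirq context, interrupts are enabled. */
static void mydev_do_tasklet(unsigned long data)
{
        /* Do the time-consuming processing deferred by the ISR. */
        pr_debug("processed %lu events\n", mydev_events);
}

static DECLARE_TASKLET(mydev_tasklet, mydev_do_tasklet, 0);

/* Top half: ack the device, then defer the rest. */
static irqreturn_t mydev_isr(int irq, void *dev_id)
{
        mydev_events++;
        tasklet_schedule(&mydev_tasklet);       /* raises TASKLET_SOFTIRQ */
        return IRQ_HANDLED;
}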