While analyzing a vmcore with the crash utility, I originally only wanted to look at the processes in the running state, but it turned out that this vmcore had an unusually large number of running processes, and all of their stacks ended in schedule(), which does not make much sense at first glance:
crash> ps |grep RU
27024 25033 21 ffff88105365a100 RU 0.1 4781932 97144 nginx
27028 25033 22 ffff880d93e31580 RU 0.1 4781804 97072 nginx
27029 25033 23 ffff880b2e91ab40 RU 0.1 4781676 97028 nginx
27032 25033 1 ffff880066f18080 RU 0.0 4767156 12632 nginx
27036 25033 21 ffff880066e5ca80 RU 0.0 4767156 12632 nginx
27558 1 21 ffff880cf9e77580 RU 2.3 20554252 2351244 java
27559 1 13 ffff880d914a3580 RU 2.3 20554252 2351244 java
27560 1 6 ffff881057f154c0 RU 2.3 20554252 2351244 java
27567 1 17 ffff880ce4132040 RU 2.3 20554252 2351244 java
27568 1 11 ffff88105304f4c0 RU 2.3 20554252 2351244 java
27607 1 22 ffff8810532d6ac0 RU 2.3 20554252 2351244 java
27640 1 15 ffff8810532d6080 RU 2.3 20554252 2351244 java
27642 1 7 ffff880f457de0c0 RU 2.3 20554252 2351244 java
27643 1 10 ffff880f5dda4a80 RU 2.3 20554252 2351244 java
27644 1 8 ffff880ebc032b00 RU 2.3 20554252 2351244 java
27645 1 2 ffff880d9167d4c0 RU 2.3 20554252 2351244 java
27647 1 6 ffff88012531cac0 RU 2.3 20554252 2351244 java
27657 1 0 ffff880f5debd580 RU 2.3 20554252 2351244 java
27660 1 2 ffff880d0f6f4ac0 RU 2.3 20554252 2351244 java
27662 1 17 ffff880d7ed8eb40 RU 2.3 20554252 2351244 java
27663 1 14 ffff880f5420a040 RU 2.3 20554252 2351244 java
27664 1 23 ffff8802b3711540 RU 2.3 20554252 2351244 java
27665 1 8 ffff880ebc01cac0 RU 2.3 20554252 2351244 java
27666 1 4 ffff880f603ca100 RU 2.3 20554252 2351244 java
27667 1 2 ffff880d7ec6d500 RU 2.3 20554252 2351244 java
27669 1 14 ffff88012b38e100 RU 2.3 20554252 2351244 java
27670 1 2 ffff8810565c1540 RU 2.3 20554252 2351244 java
27672 1 14 ffff8802a2a154c0 RU 2.3 20554252 2351244 java
27679 1 15 ffff8802a2a14a80 RU 2.3 20554252 2351244 java
27680 1 15 ffff88185a092b00 RU 2.3 20554252 2351244 java
27681 1 21 ffff88142bdb4b00 RU 2.3 20554252 2351244 java
27682 1 21 ffff8812d4a08040 RU 2.3 20554252 2351244 java
27684 1 6 ffff88109e51a080 RU 2.3 20554252 2351244 java
27685 1 3 ffff88185936e080 RU 2.3 20554252 2351244 java
27835 1 11 ffff8809681e94c0 RU 2.3 20554252 2351244 java
...
Let's see what these processes are doing:
crash> bt 27684
PID: 27684 TASK: ffff88109e51a080 CPU: 6 COMMAND: "java"
#0 [ffff8801bce09d38] schedule at ffffffff814f8b42
#1 [ffff8801bce09e00] schedule_timeout at ffffffff814f99d4
#2 [ffff8801bce09eb0] sys_epoll_wait at ffffffff811c1019
#3 [ffff8801bce09f80] system_call_fastpath at ffffffff8100b0f2
RIP: 0000003d5f8e5d03 RSP: 00007f686bffe710 RFLAGS: 00000202
RAX: 00000000000000e8 RBX: ffffffff8100b0f2 RCX: 00007f6864000a30
RDX: 0000000000002000 RSI: 00007f69bc435000 RDI: 0000000000000086
RBP: 00007f686bffe5e0 R8: 00000000000003e8 R9: 0000000000000086
R10: 00000000000003e8 R11: 0000000000000293 R12: 00000000000003e8
R13: 00007f69bc435000 R14: 0000000000002000 R15: 0000000000000086
ORIG_RAX: 00000000000000e8 CS: 0033 SS: 002b
crash> bt 27685
PID: 27685 TASK: ffff88185936e080 CPU: 3 COMMAND: "java"
#0 [ffff880d29d49d38] schedule at ffffffff814f8b42
#1 [ffff880d29d49e00] schedule_timeout at ffffffff814f99d4
#2 [ffff880d29d49eb0] sys_epoll_wait at ffffffff811c1019
#3 [ffff880d29d49f80] system_call_fastpath at ffffffff8100b0f2
RIP: 0000003d5f8e5d03 RSP: 00007f686befd4f0 RFLAGS: 00000293
RAX: 00000000000000e8 RBX: ffffffff8100b0f2 RCX: 0000003d5f8e5d03
RDX: 0000000000002000 RSI: 00007f69bc44e000 RDI: 0000000000000089
RBP: 00007f686befd560 R8: 00000000000003e8 R9: 0000000000000089
R10: 00000000000003e8 R11: 0000000000000293 R12: 00000000000003e8
R13: 00007f69bc44e000 R14: 0000000000002000 R15: 0000000000000089
ORIG_RAX: 00000000000000e8 CS: 0033 SS: 002b
crash>
They all appear to be sitting in schedule(). A stack like this looks as if the process has been scheduled out, so it should not be in the RUNNING state. Is something wrong somewhere?
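For context, a stack that ends in schedule_timeout() under sys_epoll_wait() is just the normal epoll blocking path. Below is a minimal sketch of what that wait loop looks like in 2.6-era kernels, reconstructed from memory with the locking and bookkeeping omitted, so it is not the exact RHEL6 source:

/* Sketch of the ep_poll()-style wait loop: the task marks itself
 * TASK_INTERRUPTIBLE and calls schedule_timeout(), which is exactly
 * where the saved kernel stacks above end. */
static int ep_poll_sketch(struct eventpoll *ep, long jtimeout)
{
        wait_queue_t wait;
        int res = 0;

        init_waitqueue_entry(&wait, current);
        __add_wait_queue(&ep->wq, &wait);

        for (;;) {
                /* mark ourselves as sleeping *before* checking for events,
                 * so a wakeup racing with the check is not lost */
                set_current_state(TASK_INTERRUPTIBLE);
                if (!list_empty(&ep->rdllist) || !jtimeout)
                        break;
                if (signal_pending(current)) {
                        res = -EINTR;
                        break;
                }
                /* give up the CPU; the kernel stack is frozen here until
                 * the task runs again */
                jtimeout = schedule_timeout(jtimeout);
        }
        __remove_wait_queue(&ep->wq, &wait);
        set_current_state(TASK_RUNNING);
        return res;
}

So the stack shape itself is expected for a task blocked in epoll_wait(); the odd part is the RUNNING state.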
Let's see how many RUNNING processes there actually are:
crash> ps |grep RU|wc -l
838
838 RUNNING processes is surprising. On a healthy system the number of RUNNING processes is normally not large (roughly the same order of magnitude as the number of CPU cores); anything far beyond that indicates a severe CPU bottleneck. This machine has only 24 CPUs:
crash> mach
MACHINE TYPE: x86_64
MEMORY SIZE: 96 GB
CPUS: 24
PROCESSOR SPEED: 2100 Mhz
HZ: 1000
PAGE SIZE: 4096
KERNEL VIRTUAL BASE: ffff880000000000
KERNEL VMALLOC BASE: ffffc90000000000
KERNEL VMEMMAP BASE: ffffea0000000000
KERNEL START MAP: ffffffff80000000
KERNEL MODULES BASE: ffffffffa0000000
KERNEL STACK SIZE: 8192
IRQ STACK SIZE: 16384
IRQ STACKS:
CPU 0: ffff880028200000
CPU 1: ffff880028220000
CPU 2: ffff880028240000
CPU 3: ffff880028260000
CPU 4: ffff880028280000
CPU 5: ffff8800282a0000
CPU 6: ffff88109c600000
CPU 7: ffff88109c620000
CPU 8: ffff88109c640000
CPU 9: ffff88109c660000
CPU 10: ffff88109c680000
CPU 11: ffff88109c6a0000
CPU 12: ffff8800282c0000
CPU 13: ffff8800282e0000
CPU 14: ffff880028300000
CPU 15: ffff880028320000
CPU 16: ffff880028340000
CPU 17: ffff880028360000
CPU 18: ffff88109c6c0000
CPU 19: ffff88109c6e0000
CPU 20: ffff88109c700000
CPU 21: ffff88109c720000
CPU 22: ffff88109c740000
CPU 23: ffff88109c760000
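As a side note (not part of the original capture), crash's runq command would show where those 838 runnable tasks are sitting, listing each CPU's current task together with the contents of its RT and CFS run queues:

crash> runq

With only 24 CPUs, each per-CPU run queue would have to be holding dozens of runnable tasks; the per-CPU output is omitted here.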
Now let's see what the CPUs are actually doing:
crash> bt -a
PID: 18176 TASK: ffff8802b37100c0 CPU: 0 COMMAND: "agent_monitor.s"
#0 [ffff880028207e90] crash_nmi_callback at ffffffff81029a66
#1 [ffff880028207ea0] notifier_call_chain at ffffffff814fe7a5
#2 [ffff880028207ee0] atomic_notifier_call_chain at ffffffff814fe80a
#3 [ffff880028207ef0] notify_die at ffffffff81097dce
#4 [ffff880028207f20] do_nmi at ffffffff814fc3c3
#5 [ffff880028207f50] nmi at ffffffff814fbcd0
[exception RIP: lock_kernel+46]
RIP: ffffffff814fb76e RSP: ffff8809740bdc68 RFLAGS: 00000283
RAX: 0000000000000000 RBX: ffffffff81fe2280 RCX: 00000000000070d8
RDX: 00000000000070e1 RSI: ffff8807892d55c0 RDI: ffff8810565f2118
RBP: ffff8809740bdc68 R8: 0000000000000000 R9: 0000000000000000
R10: 00000000000006c4 R11: 0000000000100006 R12: ffff8810565f2118
R13: ffff8807892d55c0 R14: ffff8807892d55c0 R15: 0000000000000000
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
--- <NMI exception stack> ---
#6 [ffff8809740bdc68] lock_kernel at ffffffff814fb76e
#7 [ffff8809740bdc70] tty_open at ffffffff81318702
#8 [ffff8809740bdce0] chrdev_open at ffffffff81181035
#9 [ffff8809740bdd40] __dentry_open at ffffffff8117a96a
#10 [ffff8809740bdda0] nameidata_to_filp at ffffffff8117acd4
#11 [ffff8809740bddc0] do_filp_open at ffffffff8118ddf8
#12 [ffff8809740bdf20] do_sys_open at ffffffff8117a719
#13 [ffff8809740bdf70] sys_open at ffffffff8117a830
#14 [ffff8809740bdf80] system_call_fastpath at ffffffff8100b0f2
RIP: 0000003d5f8d8200 RSP: 00007fff0e322788 RFLAGS: 00010246
RAX: 0000000000000002 RBX: ffffffff8100b0f2 RCX: 0000003d5f832b40
RDX: 0000000000000001 RSI: 0000000000000802 RDI: 00000000004a2b95
RBP: 0000000000000000 R8: 0000003d5fb9c300 R9: 0000003d5f00e940
R10: 0000000000000008 R11: 0000000000000246 R12: ffffffff8117a830
R13: ffff8809740bdf78 R14: 0000000000000000 R15: 0000000000000000
ORIG_RAX: 0000000000000002 CS: 0033 SS: 002b
Almost every CPU shows a similar stack: the kernel stack is blocked on the big kernel lock in lock_kernel(), and the interrupt stack is in the NMI handler.
This pattern appears because the NMI watchdog caught an anomaly, i.e. it detected a CPU stuck in a deadlock with interrupts disabled, and fired; let's not dwell on that part for now. What deserves attention here is the big kernel lock (BKL). It has been removed from newer kernels because its performance cost is high and its granularity is far too coarse. In older kernels the BKL is essentially implemented with spin_lock(), i.e. a busy-wait loop. If the lock cannot be acquired for a long time (for example, longer than the softlockup or nmi_watchdog detection period), and the kernel is not built with preemption (it usually is not), then softlockup or nmi_watchdog (in the interrupts-disabled case) can be triggered. So the inference is that the CPUs could not acquire the BKL, and that is what triggered the NMI watchdog.
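For reference, on a non-preempt 2.6-era kernel the BKL fast path really is just a recursive global spinlock. A minimal sketch, reconstructed from memory of lib/kernel_lock.c and deliberately simplified (not the exact RHEL6 code):

/* One global spinlock "protects the whole kernel"; lock_depth in
 * task_struct (-1 when not held) makes the lock recursive per task. */
static DEFINE_SPINLOCK(kernel_flag);

void lock_kernel_sketch(void)
{
        int depth = current->lock_depth + 1;

        if (depth == 0)                         /* first, non-nested acquire */
                spin_lock(&kernel_flag);        /* busy-wait: with preemption off,
                                                 * a CPU can spin here long enough
                                                 * to trip the lockup watchdogs */
        current->lock_depth = depth;
}

void unlock_kernel_sketch(void)
{
        if (--current->lock_depth < 0)          /* outermost release */
                spin_unlock(&kernel_flag);
}

Note that lock_depth is -1 in the task_struct dump below, meaning that particular java task does not itself hold the BKL.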
Next, check the state field in the task_struct of one of these RUNNING processes, to confirm it really is what ps reported:
crash> task_struct ffff88109e51a080
struct task_struct {
state = 0,
stack = 0xffff8801bce08000,
usage = {
counter = 2
},
flags = 4202560,
ptrace = 0,
lock_depth = -1,
prio = 120,
static_prio = 120,
normal_prio = 120,
rt_priority = 0,
sched_class = 0xffffffff8160b2c0,
se = {
load = {
weight = 1024,
inv_weight = 4194304
},
Here state is 0; the corresponding state definitions in the kernel are:
/* process states */
#define TASK_RUNNING            0   /* runnable, a candidate for the scheduler */
#define TASK_INTERRUPTIBLE      1   /* interruptible sleep, the usual S state */
#define TASK_UNINTERRUPTIBLE    2   /* uninterruptible sleep, the usual D state */
So the process really is in the RUNNING state; the state field is not wrong. Then why does its stack still end in schedule()? After some thought, the explanation should be this:
These processes have been woken up, but they have never actually been scheduled back onto a CPU, so their saved stacks look exactly as they did when they were switched out. And why can they not get scheduled? Because all the CPUs (and the tasks currently on them) are stuck waiting for the big kernel lock, so no CPU ever reaches the scheduler to run them.
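The wakeup side makes this concrete: try_to_wake_up() only sets the state back to TASK_RUNNING and puts the task on a run queue; it does not put the task on a CPU, so the saved stack stays whatever it was at the last context switch. A rough sketch of that path (heavily simplified, not the exact kernel source):

/* Sketch of the waker side: after this the task shows up as RU in
 * 'crash> ps', but its kernel stack is still frozen in schedule()
 * until some CPU actually context-switches to it. */
static int try_to_wake_up_sketch(struct task_struct *p)
{
        struct rq *rq = task_rq(p);

        activate_task(rq, p, 0);        /* put p back on a run queue        */
        p->state = TASK_RUNNING;        /* this is the state = 0 seen above */
        check_preempt_curr(rq, p, 0);   /* at most requests a reschedule    */
        return 1;
}

Here every CPU is stuck spinning in lock_kernel(), so the scheduler never runs again and those 838 woken tasks simply pile up on the run queues: RUNNING, but never running, with stacks that still end in schedule().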