Category: LINUX
2014-03-05 18:22:07
1. In a recent project we hit an OOM-killer issue. When I checked the log, the system still had more than 300MB of free memory. The log is below:
<4>[ 216.320638] IntentService[R invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=4
<4>[ 216.328564] ... (backtrace lines truncated in the capture; the only frame that survived is below)
<4>[ 216.403352] [<...ff2f0>] (__do_fault+0x50/0x43c)
<4>[ 216.448085] Exception stack(0xdd6c5fb0 to 0xdd6c5ff8)
<4>[ 216.453154] 5fa0: 0b996018 00000000 00000000 00000000
<4>[ 216.461894] 5fc0: 0b995e18 0b996018 4adeaa64 4adeaab8 00000200 00000000 000d0d50 48531e94
<4>[ 216.470098] 5fe0: 00000000 4adea848 ad9958ab ad9958aa 40000030 ffffffff
<4>[ 216.476725] Mem-info:
<4>[ 216.479001] Normal per-cpu:
<4>[ 216.481814] CPU 0: hi: 186, btch: 31 usd: 179
<4>[ 216.486606] HighMem per-cpu:
<4>[ 216.489489] CPU 0: hi: 90, btch: 15 usd: 14
<4>[ 216.494305] active_anon:75932 inactive_anon:43594 isolated_anon:0
<4>[ 216.494311] active_file:141 inactive_file:423 isolated_file:0
<4>[ 216.494317] unevictable:112 dirty:0 writeback:0 unstable:0
<4>[ 216.494322] free:85133 slab_reclaimable:390 slab_unreclaimable:1016
<4>[ 216.494329] mapped:351 shmem:124 pagetables:2666 bounce:0
<4>[ 216.523679] Normal free:340224kB min:2940kB low:3672kB high:4408kB active_anon:135400kB inactive_anon:5884kB active_file:92kB inactive_file:132kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:540512kB mlocked:0kB dirty:0kB writeback:0kB mapped:232kB shmem:64kB slab_reclaimable:1560kB slab_unreclaimable:4064kB kernel_stack:3768kB pagetables:10664kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:352 all_unreclaimable? yes
<4>[ 216.563693] lowmem_reserve[]: 0 84328 84328
<4>[ 216.567942] HighMem free:308kB min:328kB low:784kB high:1244kB active_anon:168328kB inactive_anon:168492kB active_file:472kB inactive_file:1560kB unevictable:448kB isolated(anon):0kB isolated(file):0kB present:337312kB mlocked:0kB dirty:0kB writeback:0kB mapped:1172kB shmem:432kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:2688 all_unreclaimable? yes
<4>[ 216.606627] lowmem_reserve[]: 0 0 0
<4>[ 216.610172] Normal: 156*4kB 94*8kB 36*16kB 15*32kB 8*64kB 3*128kB 2*256kB 1*512kB 2*1024kB 1*2048kB 3*4096kB 1*8192kB 19*16384kB = 340224kB
<4>[ 216.622875] HighMem: 1*4kB 0*8kB 1*16kB 1*32kB 0*64kB 0*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB 0*8192kB 0*16384kB = 308kB
<4>[ 216.634880] 807 total pagecache pages
<4>[ 216.638543] 0 pages in swap cache
<4>[ 216.641871] Swap cache stats: add 0, delete 0, find 0/0
<4>[ 216.647097] Free swap = 0kB
<4>[ 216.649988] Total swap = 0kB
<4>[ 216.682934] 221184 pages of RAM
<4>[ 216.686085] 85995 free pages
<4>[ 216.688967] 4935 reserved pages
<4>[ 216.692155] 1130 slab pages
<4>[ 216.694951] 15693 pages shared
<4>[ 216.698008] 0 pages swap cached
<3>[ 216.701171] Out of memory: kill process 1564 (atsp.serverinfo) score 47314944 or a child
<3>[ 216.709274] Killed process 1564 (atsp.serverinfo) vsz:184824kB, anon-rss:4064kB, file-rss:392kB
<4>[ 216.965089] Thread-11 invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=-12
<4>[ 216.972720] ... (backtrace truncated in the capture)
2. After analysis, we found that the Normal (lowmem) zone keeps 84328 pages reserved, which is exactly the present size of the HighMem zone (84328 * 4K = 337312kB, about 330MB). The gfp_mask 0x201da is a HighMem-capable user allocation, so when it falls back into the Normal zone the allocator requires free memory to stay above the zone watermark plus lowmem_reserve[HighMem]: about 2940kB + 337312kB = 340252kB, slightly more than the 340224kB actually free. HighMem itself has only 308kB free, so the allocation fails and the OOM killer fires even though overall free memory looks plentiful. Below is /proc/zoneinfo:
bash-3.2# cat /proc/zoneinfo
Node 0, zone Normal
pages free 121473
min 735
low 918
high 1102
scanned 0
spanned 136192
present 135128
nr_free_pages 121473
nr_inactive_anon 0
nr_active_anon 0
nr_inactive_file 376
nr_active_file 1231
nr_unevictable 0
nr_mlock 0
nr_anon_pages 0
nr_mapped 18
nr_file_pages 1605
nr_dirty 0
nr_writeback 0
nr_slab_reclaimable 582
nr_slab_unreclaimable 672
nr_page_table_pages 1374
nr_kernel_stack 281
nr_unstable 0
nr_bounce 0
nr_vmscan_write 0
nr_writeback_temp 0
nr_isolated_anon 0
nr_isolated_file 0
nr_shmem 0
protection: (0, 84328, 84328)
pagesets
cpu: 0
count: 78
high: 186
batch: 31
all_unreclaimable: 0
prev_priority: 12
start_pfn: 458752
inactive_ratio: 1
Node 0, zone HighMem
pages free 41852
min 82
low 196
high 311
scanned 0
spanned 84992
present 84328
nr_free_pages 41852
nr_inactive_anon 68
nr_active_anon 21131
nr_inactive_file 18081
nr_active_file 3795
nr_unevictable 0
nr_mlock 0
nr_anon_pages 21122
nr_mapped 7855
nr_file_pages 21956
nr_dirty 0
nr_writeback 0
nr_slab_reclaimable 0
nr_slab_unreclaimable 0
nr_page_table_pages 0
nr_kernel_stack 0
nr_unstable 0
nr_bounce 0
nr_vmscan_write 0
nr_writeback_temp 0
nr_isolated_anon 0
nr_isolated_file 0
nr_shmem 80
protection: (0, 0, 0)
pagesets
cpu: 0
count: 19
high: 90
batch: 15
all_unreclaimable: 0
prev_priority: 12
start_pfn: 594944
inactive_ratio: 1
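The "protection: (0, 84328, 84328)" line above is the same lowmem_reserve array that appears in the OOM dump. The kernel derives it from vm.lowmem_reserve_ratio: for each lowmem zone it divides the number of pages present in the zones above it by that zone's ratio (setup_per_zone_lowmem_reserve() in mm/page_alloc.c). Below is a quick sanity check from the shell, assuming that formula; the cat output reflects the bad "1 32" setting described in step 3:

bash-3.2# cat /proc/sys/vm/lowmem_reserve_ratio
1       32
bash-3.2# echo $((84328 / 1)) pages = $((84328 / 1 * 4))kB      # Normal's reserve for HighMem-capable allocations with ratio 1
84328 pages = 337312kB
bash-3.2# echo $((84328 / 32)) pages = $((84328 / 32 * 4))kB    # the same reserve with the default ratio 32
2635 pages = 10540kB

So with ratio 1 the Normal zone withholds an entire HighMem's worth of pages (~330MB) from HighMem-capable allocations, while the default ratio of 32 withholds only about 10MB.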
3. The root cause turned out to be that the vm.lowmem_reserve_ratio parameter for the Normal zone is set to 1 in init.rc.
If we restore it from "1 32" back to the kernel default "32 32", the reserved memory drops from about 330M to about 13M, which looks normal.
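For reference, the change can be made at runtime through the standard sysctl interface, or persistently in init.rc (the exact section in init.rc differs per device, and this assumes the init language accepts a quoted value with a space):

bash-3.2# echo "32 32" > /proc/sys/vm/lowmem_reserve_ratio      # runtime change; the kernel recomputes lowmem_reserve immediately

# or persistently in init.rc (Android init syntax):
    write /proc/sys/vm/lowmem_reserve_ratio "32 32"

After the change, the Normal zone's protection line in /proc/zoneinfo should drop to roughly (0, 2635, 2635).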