Your Linux machine has just died, and your high up-time is wrecked. How do you tell what happened, and more importantly, how do you prevent a recurrence?
This article doesn't discuss user space programs -- few of them will crash the box without a chance of recovery; the only one I know of which reliably does that is crashme. Most crashes are caused by kernel "oopses," or hardware failures.
A kernel oops occurs when the kernel code gets into an unrecoverable state. In most cases, the kernel can write its state to the drive, which allows you to determine what happened if you have the correct tools. In a few cases, such as the "Aiee, killing interrupt handler" crash, the kernel is unable to write to the drive. With no interrupt handler, interrupt-driven I/O is impossible.
Even in the worst cases, some data can be retrieved and the cause can often be determined.
Diagnostic tools are a necessary part of kernel error recovery. The most obvious tool is the system log. dmesg is a useful tool for extracting the relevant data from the system and kernel logs.
There is also a specialist tool for tracing kernel oopses. ksymoops requires the system to be configured the same way it was when it crashed, and should be used as soon as possible after the crash. It traces the function chain, and displays the function and offset the kernel was in when it crashed.
With the information from the system log, or (for more precision) from ksymoops, a system administrator can determine which function the kernel was trying to run when it crashed. It is then much easier to determine whether to change a hardware driver, swap in a different loadable module -- or post an error report to the relevant kernel developer.
If your system is still running, you can run dmesg to grab the kernel diagnostic information that is normally on the console. The messages are also written to /proc/kmsg, but dmesg allows you to copy them to a file for later perusal or for posting to a kernel expert. dmesg can also be read by most users, while /proc/kmsg has limited permissions.
dmesg > filename
Useful arguments (see the example below):
-n level -- set the level at which kernel messages are logged to the console
-s bufsize -- use a buffer of bufsize when querying the kernel ring buffer
-c -- clear the ring buffer after printing its contents
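For example, to save a large ring buffer to a file for later analysis and then clear it (the file name and buffer size here are only illustrative):
dmesg -s 65536 > /tmp/oops.txt
dmesg -c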
syslogd and klogd are the system loggers. klogd handles kernel logging, but is often bundled in with syslogd and configured with it. The loggers themselves are useless for after-the-fact debugging -- but can be configured to log more data for the next crash.
Use /etc/syslog.conf to determine where the system log files are, and to see where the kernel log files are, if /proc/kmsg doesn't exist.
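As a rough illustration (facility names and file paths vary by distribution), an entry that sends all kernel messages to their own file might look like this:
# kernel messages of every priority go to a dedicated log file
kern.*					/var/log/kern.log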
If you are running loadable modules, klogd must be signaled when the modules are loaded or unloaded. The sysklogd source includes a patch for the modules-2.0.0 package to ensure that the module loaders and unloaders signal klogd correctly.
From modutils 2.3.1, module logging is built in. To use the logging, create /var/log/ksymoops, owned by root and set to mode "644" or "600". The script insmod_ksymoops_clean will delete old versions, and should be run as a cron job.
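A minimal sketch of that setup, assuming the script is installed in /sbin (the path and the schedule are only examples):
mkdir /var/log/ksymoops
chown root:root /var/log/ksymoops
chmod 644 /var/log/ksymoops
# example crontab entry: clean out old snapshots every night at 04:00
0 4 * * * /sbin/insmod_ksymoops_clean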
When the kernel detects an unrecoverable or serious error, it prints a status report to the kernel log file. This report includes such things as the contents of the registers, the contents of the kernel stack, and a function trace of the functions being executed during the fault.
All this stuff is extremely useful -- but is in machine-readable form, and addresses vary depending on the configuration of the individual machine. So the kernel log file alone is useless when determining precisely what went wrong. This is where ksymoops comes in.
ksymoops converts the machine-readable kernel oops report to human-readable text. It relies on a correct System.map file, which is generated as part of the kernel compilation. It also expects klogd to handle loadable modules correctly, if appropriate.
ksymoops requires the "oops text," usually available as Oops.file from the system logger. If this file can't be found, grab the oops text from dmesg, or from the console -- copied by hand, if necessary.
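A typical invocation looks something like this (the map file location and the input and output file names are assumptions; adjust them to your system):
ksymoops -m /boot/System.map < oops.txt > oops-decoded.txt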
The output of ksymoops is a list of messages that might contain a kernel problem. Where possible, ksymoops converts each address to the name of the function it occurs in.
>>EIP; c0113f8c
Trace; c011d3f5
Trace; c011af5f
Trace; c011afe9
Trace; c011d2bc
Trace; c010e80f
Trace; c0107c39
Trace; c0107b30
Trace; 00001000 Before first symbol
man ksymoops explains these lines in great detail, but what is important to most system administrators is the list of the function names in which problems occurred. Once you know the key function, and the functions which called it, you can make an educated guess as to the cause of the kernel error.
Be aware that the output of ksymoops is only as good as the input -- if the System.map file is wrong, the loadable modules don't report when they are loaded and unloaded, or the vmlinux, ksyms, lsmod and object files are different from the ones present when the crash occurred, ksymoops will produce invalid output. Run it as soon as possible after the crash, for the most accurate data -- and certainly before you change the kernel!
If you're an experienced C programmer, you might want to debug the kernel itself. Use the ksymoops output to determine where in the kernel the problem is, then use gdb to disassemble the offending function and debug it.
gdb /usr/src/linux/vmlinux
gdb> disassemble offending_function
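You can also ask gdb about the exact instruction address reported in the oops. The address below is just the sample EIP from the trace above, and mapping an address back to a source line requires a kernel built with debugging symbols:
gdb> info line *0xc0113f8c
gdb> x/10i 0xc0113f8c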
You've figured out that the problem was something you can correct, perhaps a driver or a loadable module. What now?
Install any appropriate patches, check that the driver is correct -- and recompile the kernel and add it as a new lilo entry. Test the new kernel. If that doesn't correct the problem, consider reporting the problem to the linux-kernel list, or the appropriate kernel developer.
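For reference, a minimal sketch of a lilo stanza for a test kernel -- the image path, label, and root device are only examples, and you must re-run /sbin/lilo after editing /etc/lilo.conf:
# test entry for the rebuilt kernel
image=/boot/vmlinuz-test
    label=test
    read-only
    root=/dev/hda1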
If you are reporting a bug to the Linux kernel mailing list, or to any of the Linux kernel developers, post the information to linux-kernel@vger.kernel.org, or to the relevant developer, with a subject of "ISSUE: <one-line summary of the problem>". Include the following information:
The kernel version (from /proc/version)
The oops text, resolved with ksymoops
The output of the ver_linux script (from $LINUXHOME/scripts/ver_linux)
Processor information (from /proc/cpuinfo)
Module information (from /proc/modules)
SCSI information (from /proc/scsi/scsi)
Any other relevant environment information (look in /proc and include all information that you think to be relevant)
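Gathering most of that is a matter of a few commands -- a rough sketch, assuming the kernel source tree lives in /usr/src/linux:
# collect the environment details for the bug report
cat /proc/version
cat /proc/cpuinfo
cat /proc/modules
cat /proc/scsi/scsi
sh /usr/src/linux/scripts/ver_linux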
Note that the linux-kernel FAQ states that Oops data is useless if the machine with the oops has an over-clocked CPU, or is running vmmon from VMWare. If you have either of these, fix the problem and try to reproduce the Oops before reporting it.
If you get repeated, apparently random errors in code, your CPU fan may have died. If you're familiar enough with your equipment, you may be able to hear whether the CPU fan is running -- if not, the simplest test is to open the case and look. If the CPU fan isn't running, shut the machine down and replace the fan -- you may have saved your CPU.
If the CPU fan is running, but you're still getting random errors, suspect the RAM.
There are two common ways to test the RAM. One is to remove the suspect stick and try the machine with the other sticks of RAM, or to test the suspect stick in a known-working machine. The other is to repeatedly recompile a kernel. If you get a signal 11, the RAM is probably bad.
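One way to run the recompile test is to rebuild the kernel in a loop until a build fails -- a minimal sketch, assuming the source tree is in /usr/src/linux:
cd /usr/src/linux
# keep rebuilding until a pass fails; a "signal 11" in the failed pass points at bad RAM
while make clean && make bzImage; do
    echo "build succeeded, starting another pass"
done
echo "build failed -- check the output above for signal 11"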
The final common cause of hardware failure is bad blocks on the hard drive. Use the program badblocks to test the drive.
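A read-only scan looks something like this (the device name and output file are only examples -- point it at the partition you suspect):
badblocks -v /dev/hda1 > /tmp/bad-blocks-list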
With a little care, and a little luck, you'll get the up-time record in your local LUG -- unless a power outage downs your machine. But I can't help you with that!
Related reading:
man ksymoops
man dmesg
man syslogd
man klogd
man insmod
$LINUXDIR/linux/Documentation/oops-tracing.txt
$LINUXDIR/linux/README
man gdb
info gdb
Jennifer Vesperman is the author of Essential CVS. She writes for the O'Reilly Network, the Linux Documentation Project, and occasionally Linux.Com.
Reposted from: http://linuxdevcenter.com/pub/a/linux/2001/11/01/postmortem.html?page=1