Category: LINUX
2014-06-26 08:54:27
This is probably one of the most common issues (although once you have solved it, you will most likely never see it again):
Unable to mount root fs on unknown-block(0,0)

or

VFS: Cannot open root device "sda3" or unknown-block(8,3)
Please append a correct "root=" boot option; here are the available partitions:
sda driver: sd
  sda1
  sda2

The digits 0,0 or 8,3 can be different in your case - they refer to the device that the kernel tries (and fails) to access. Generally speaking, if the first digit is 0, the kernel is unable to identify the hardware. If it is another digit (like 8), the kernel is able to access the hardware but unable to identify the file system.
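You can see where such major/minor pairs come from by listing the device nodes on a running (working) system; the output below is just a hypothetical example, your dates and group names may differ:

# ls -l /dev/sda3
brw-rw---- 1 root disk 8, 3 Jun 26 08:54 /dev/sda3

The "8, 3" in the listing are the major and minor numbers of the device, matching the unknown-block(8,3) in the second error message above.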
The problem here is that the kernel that you are booting cannot translate the root=/dev/... parameter you gave it (inside the boot loader configuration) into a real, accessible file system. Several reasons can result in such a failure:
- the kernel configuration is missing drivers for your HDD controller
- the kernel configuration is missing drivers for the bus used by your HDD controller
- the kernel configuration is missing drivers for the file system you are using
- the device is misidentified in your root= parameter
Resolving the issue is easy if you know what the reason is. You most likely don't, so here's a quick check-up.
Open the kernel configuration wizard (the make menuconfig part) so that you can update the kernel configuration accordingly.
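On most systems the kernel source tree lives in /usr/src/linux (adjust the path if your distribution puts it elsewhere):

# cd /usr/src/linux
# make menuconfig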
- Check if you have built in (and not as a module) support for the bus / protocol that your hard disk controller uses; a quick .config check is shown after this item.
- Most likely this is PCI support, SATA support (which is beneath SCSI device support), ...
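Assuming your kernel source tree contains the .config you built from, you can grep it directly; =y means built in, =m means built as a module. The option names below are common examples, the ones you need depend on your hardware:

# grep -E "CONFIG_PCI=|CONFIG_SCSI=|CONFIG_ATA=|CONFIG_SATA_AHCI=" .config
CONFIG_PCI=y
CONFIG_SCSI=y
CONFIG_ATA=y
CONFIG_SATA_AHCI=m

In this hypothetical output the AHCI driver is a module (=m), which is exactly the kind of configuration that produces the error above.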
- Check if you have built in (and not as a module) support for the HDD controller you use. One of the most frequent cases: you selected support for your hard disk controller protocol (IDE, SATA, SCSI, ...) but forgot to select the driver for the HDD controller itself (like Intel PIIX). Try running the following lspci command and paste its output into an online PCI ID / driver lookup site; the site will show you which kernel drivers you need to select for your system. Within menuconfig, you can type "/" to open the search function and enter a driver name to find out where it resides.

# lspci -n
- Check if you have built in (and not as a module) support for the file system(s) you use.
- Say your root file system uses btrfs (which I definitely don't recommend) but you didn't select it, or selected it to be built as a module: then you'll get the error you see. Make sure the file system support is built into the kernel; a quick .config check follows below.
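As with the controller drivers, you can verify this in the .config; for btrfs the relevant option must be =y, not =m:

# grep CONFIG_BTRFS_FS= .config
CONFIG_BTRFS_FS=y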
- Check if the kernel parameter for root= is pointing to the correct partition. This isn't as stupid as it sounds. When you are booted with one kernel, it might list your disks as /dev/sda, whereas your (newly configured) kernel expects them to be /dev/hda. This is not because kernels are inconsistent with each other, but because of the drivers used: older drivers use the hda syntax, newer ones use sda.
Try switching hda with sda (and hdb with sdb, and ...).
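If you can still boot a rescue CD or an older working kernel, /proc/partitions shows which naming scheme the running drivers use (hypothetical output; your sizes will differ):

# cat /proc/partitions
major minor  #blocks  name
   8        0  156290904 sda
   8        1     104391 sda1
   8        2  156186562 sda2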
Also, recent kernels give an overview of the partitions they found on the device they were pointed to. If such an overview is shown, it can help you spot that you selected the wrong partition (in the example given at the beginning of this section, only two partitions are found, whereas the kernel was instructed to boot from the third). If no overview is shown, it is most likely because the kernel doesn't know the device to begin with (so it can't attempt to display its partitions).
- Check if the kernel being booted by the boot loader is the correct kernel. I have seen people who, after building a first kernel (which doesn't boot), forget that they have to mount /boot before they overwrite the kernel with a new one. As a result, they copy the new kernel to the root file system (/), whereas the boot loader still expects the kernel image to be on the /boot partition. A sketch of the correct sequence follows below.
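A minimal sketch of the correct sequence, assuming an x86 build and a separate /boot partition (the target file name vmlinuz-custom is just an example and must match your boot loader configuration):

# mount /boot
# cp arch/x86/boot/bzImage /boot/vmlinuz-custom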