Restore the MBR of a Linux installation

Stankowic development
Linux/UNIX, VMware and more
http://www.stankowic-development.net
Categories : Linux, OSBN/Ubuntuusers planet, RHEL / CentOS, XING / LinkedIn / Amazon
Date : 17 March 2016
Netboot due to a defective MBR
As preparation for a restore test, I removed the master boot record (MBR) of a Linux system recently:
# dd if=/dev/zero of=/dev/sda bs=512 count=1
This command fills the first 512 bytes (containing the partition table and bootloader) of the first hard
drive (/dev/sda) with zeroes (/dev/zero).
The problem arises when you discover that the restore did not work and you need to fix the system manually. :)
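The dd wipe above can be rehearsed safely on a scratch image file instead of a real disk - and the same tool can back up the MBR beforehand. A minimal sketch, assuming a hypothetical image file named disk.img (on real hardware you would use the device node, e.g. /dev/sda):

```shell
# Create a 4 MiB scratch "disk" and plant recognizable bytes where the
# bootloader would live (offset 0). conv=notrunc keeps the file size intact.
dd if=/dev/zero of=disk.img bs=1M count=4 2>/dev/null
printf 'BOOTCODE' | dd of=disk.img conv=notrunc 2>/dev/null

# Back up the first sector (bootloader + partition table) before wiping it.
dd if=disk.img of=mbr.bak bs=512 count=1 2>/dev/null

# The wipe from the article, applied to the image file.
dd if=/dev/zero of=disk.img bs=512 count=1 conv=notrunc 2>/dev/null

# Restore the saved sector and verify the bootloader bytes are back.
dd if=mbr.bak of=disk.img bs=512 count=1 conv=notrunc 2>/dev/null
head -c 8 disk.img   # prints: BOOTCODE
```

Such a one-sector backup taken before the experiment would have made the manual recovery below unnecessary.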
Procedure
To restore the MBR, a utility named testdisk can be used. The program scans hard drives for known file system footprints and thus detects partition boundaries. Most Linux, Unix and Windows file systems can be detected.
Because the broken system is not capable of booting anymore, it is recommended to use a live medium, e.g. GParted Live, which already ships with the testdisk utility.
After booting, open a terminal. As gparted is not able to detect any partitions, use the lsscsi and fdisk commands to find the correct hard drive:
lsscsi / fdisk -l
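For reference, this is roughly what a drive with a destroyed partition table looks like to fdisk: the disk size and sector geometry are reported, but no partition rows appear. A sketch against a scratch image file (hypothetical name disk.img) rather than a real device:

```shell
# An all-zero image behaves like a disk whose MBR has been wiped:
# fdisk reports size and sector info, but lists no partitions.
truncate -s 8M disk.img
fdisk -l disk.img
```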
Afterwards, testdisk can be used to detect partitions and rewrite the MBR:
1. Start testdisk and select a log file for documenting changes
2. Choose a hard drive and select Proceed
3. Select the hard disk type: Intel should be selected for conventional PCs with MBR/BIOS. Make sure
to select EFI GPT for (U)EFI or GPT systems
4. Select Analyse and Quick Search. If partitions were detected, you can select the menu item Write
to rewrite the MBR:
Rewrite MBR with testdisk
Now, fdisk also lists defined partitions again:
# fdisk -l /dev/sda

Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
/dev/sda2              64        2611    20458496   8e  Linux LVM
If you rebooted the system right now, the boot would still fail because of the missing bootloader. Depending
on your configuration, you might need to re-install GRUB(2) - e.g. using the live medium:
# mount /dev/sdaX /mnt
# grub-install --no-floppy --root-directory=/mnt /dev/sda
It gets tricky if the GRUB version on the live medium is newer than the version on your host (GRUB 2
vs. GRUB legacy). In this case, the pre-existing configuration files are incompatible and you need to mount all
system partitions and use a chroot environment:
# mount /dev/sdaX /mnt
# mount /dev/sdaX /mnt/boot
...
# mount --bind /dev /mnt/dev
# mount --bind /tmp /mnt/tmp
# mount -t proc proc /mnt/proc
# mount -t sysfs none /mnt/sys
# chroot /mnt /bin/bash
Now, the bootloader can be re-installed:
# grep -v rootfs /proc/mounts > /etc/mtab
# grub-install --no-floppy /dev/sda
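As a quick sanity check before rebooting, one can verify that the first sector now ends with the MBR boot signature 0x55 0xAA at byte offsets 510-511. A sketch on a scratch image (hypothetical disk.img) - on the real system, read from /dev/sda instead:

```shell
# Write a boot signature into a scratch image, as a bootloader install would
# (octal escapes \125 \252 = hex 0x55 0xAA, portable across printf variants).
dd if=/dev/zero of=disk.img bs=512 count=1 2>/dev/null
printf '\125\252' | dd of=disk.img bs=1 seek=510 conv=notrunc 2>/dev/null

# Read the last two bytes of sector 0 back as hex.
dd if=disk.img bs=1 skip=510 count=2 2>/dev/null | od -An -tx1 | tr -d ' \n'
# prints: 55aa
```

If those two bytes are anything other than 55aa, the BIOS will refuse to boot from the disk regardless of what GRUB wrote elsewhere.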
The next boot should work like a charm again. :)