All times are UTC + 8 hours



[ 7 posts ]
#1
Subject: https://wiki.ubuntu.com/FakeRaidHowto?highlight=%28RAID%29
Posted: 2006-06-03 1:38

Joined: 2006-05-13 3:06
Posts: 115
https://wiki.ubuntu.com/FakeRaidHowto?h ... %28RAID%29




How to configure Ubuntu to access a hardware fakeRAID
Contents


How to configure Ubuntu to access a hardware fakeRAID
What is fakeRAID?
Installing Ubuntu into the RAID Array
Installing dmraid
Partitioning the RAID Array
Formatting the Partitions
Mounting the Temporary File Structure
Installing the Base System
Setting Up the Bootloader for RAID
Installing the Bootloader Package
Configuring the Bootloader
Reconfiguring the Initramfs for RAID (Ubuntu 5.10)
Configuring mkinitramfs in Ubuntu 5.10 (Breezy Badger)
Updating the initrd (Ubuntu 5.10)
Preconfiguring the New System as Usual
Upgrading to Ubuntu 6.06 (Dapper Drake)
External Links
Special Note for Raid 5


Back when Ubuntu Breezy preview came out, I spent a week getting it installed on my Via SATA fakeRAID and finally got the system dual-booting WinXP and Ubuntu Linux on a RAID-0 (stripe) between two 36 gig 10,000rpm WD Raptor hard drives. So I thought I would create a howto to describe how I did it so that others could benefit from my work and add related lessons.

This page describes how to get Linux to see the RAID as one disk and boot from it. In my case, I use a RAID-0 configuration, but this should also apply to RAID-1 and RAID-5 (see note at end). For the benefit of those who haven't done some of these steps before, these instructions are fairly detailed (so don't be intimidated by the length of this document -- it's pretty straight-forward).

What is fakeRAID?
In the last year or two a number of hardware products have come on the market claiming to be IDE or SATA RAID controllers. These have shown up in a number of desktop/workstation motherboards. Virtually none of these are true hardware RAID controllers. Instead, each is simply a multi-channel disk controller that has special BIOS and drivers to assist the OS in performing software RAID functions. This has the effect of giving the appearance of a hardware RAID, because the RAID configuration is set up using a BIOS setup screen and the system can be booted from the RAID.

Under Windows, you must supply a driver floppy to the setup process so Windows can access the RAID. Under Linux, which has built-in softRAID functionality that pre-dates these devices, the hardware is seen for what it is -- multiple hard drives and a multi-channel IDE/SATA controller. Hence, "fakeRAID".

If you have arrived here after researching this topic on the Internet, you know that a common response to this question is, "I don't know if you can actually do that, but why bother -- Linux has built-in softRAID capability." Also, it's not clear that there is any performance gain using hardware fakeRAID under Linux instead of the built-in softRAID capability; the CPU still ends up doing the work. Well, that's beside the point. The point is that a Windows user with a fakeRAID system may very well want to put Linux on that same set of disks. Multiboot configurations are common for cross-over users trying Linux out, for people forced to use Windows for work, and for other reasons. These people shouldn't have to add an additional drive just so they can boot Linux. Also, some people say, "RAID-0 is risky". That's a matter of individual needs (speed vs security, subject to resource constraints). These are not the subject of this HowTo; we assume you want to do it and tell you "how to".

Installing Ubuntu into the RAID Array
Installing dmraid
The standard setup and LiveCDs do not yet contain support for fakeRAID. I used the LiveCD to boot up, and used the package manager to download the dmraid package from the universe repository. You will need to enable packages from Universe in the settings of Synaptic to see the package. If you are using the DVD you may also need to get the gparted package, which we will use for partitioning your RAID.

NOTE: Support for dmraid has been improved in Ubuntu 6.06, and several of the steps below are no longer necessary. If you install from the Live CD, install the dmraid package from universe before you start the installer program (Ubiquity). Just make sure you choose your RAID devices under /dev/mapper and do not use the raw devices /dev/sd* for anything. So far, this works for some, while for others, Ubiquity crashes. If Ubiquity does not complete the install, you can manually complete the process by following this procedure. In that case, those steps that are no longer required for Ubuntu 6.06 or later have been marked "Ubuntu 5.10".

Partitioning the RAID Array
You can use gparted to create and delete partitions as you see fit, but at this time, it can not refresh the partition table after it modifies it, so you will need to change the partitions, then manually run dmraid -ay from the command prompt to detect the new partitions, and then refresh gparted before you can format the partition. (Of course, you can use parted, fdisk or other tools if you are experienced with them.)

I needed to resize my existing NTFS partition to make space for Ubuntu. (If you don't need to do this, skip to the next paragraph.) Gparted currently can not do this on the mapper device so I had to use the ntfsresize program from the command line. Note that ntfsresize only resizes the filesystem, not the partition, so you have to do that manually. Use ntfsresize to shrink the filesystem, note the new size of the filesystem in sectors, then fire up fdisk. Switch fdisk to sector mode with the 'u' command. Use the 'p' command to print the current partition table. Delete the partition that you just resized and recreate it with the same starting sector. Use the new size of the filesystem in sectors to compute the ending sector of the partition. Don't forget to set the partition type to the value it was before. Now you should be able to create a new partition with the free space.
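The resize-and-recreate dance above can be sketched as follows. This is a sketch, not a script to run blindly: the device name, sizes, and starting sector are made-up examples; take the real numbers from ntfsresize and fdisk on your own system.

```shell
# Shrink the NTFS filesystem first (interactive; it reports the new size):
#   ntfsresize --size 30000M /dev/mapper/via_hfciifae1
# Then recreate the partition in fdisk (sector mode, 'u' command). The
# ending sector is the starting sector plus the new filesystem size in
# sectors, minus one (both ends inclusive). Example numbers only:
START=63              # starting sector, from 'p' in fdisk
FS_SECTORS=58605120   # new filesystem size in sectors, from ntfsresize
END=$((START + FS_SECTORS - 1))
echo "delete partition 1, recreate from sector $START to sector $END"
```

Don't forget to set the partition type back to its previous value (fdisk's 't' command) after recreating it.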

Start gparted and create the partitions you want for your setup. To begin, use the selector on the upper right to choose the device dmraid has created for your fakeRAID. In my case, this was /dev/mapper/via_hfciifae, with an additional device /dev/mapper/via_hfciifae1 assigned to my already-created NTFS partition. DMRAID will attempt to assign a meaningful name reflecting the controller you are using (e.g., an nvRAID user may see /dev/mapper/nvidia_bggfdgec or the like).

After selecting the unused space, I created an extended partition with 3 logical partitions inside. I made a 50 meg partition for /boot, a 1 gig partition for swap, and the rest for the root. Once you have set up the partitions you want, apply the changes and exit gparted. If you apply changes more than once (e.g., do this in more than one step, or change your mind while working), you should exit gparted, refresh the partition table using the command dmraid -ay, and open gparted again to continue your work.

Formatting the Partitions
Now format your filesystem for each partition. In my case I used fdisk and ran a mke2fs on /dev/mapper/via_hfciifae5 and mkreiserfs on /dev/mapper/via_hfciifae7.

Alternatively, you can do this using the GUI in gparted. Run dmraid -ay again to refresh the partition table for gparted and then open gparted again. You will see that the new partitions are designated as "unknown type", because they are not formatted. You can use gparted to format them by right-clicking each partition and selecting "convert" and the appropriate format. Before you exit, make a note of the device mapping for each new partition (you will need this later). Apply the changes and exit. You can also see these mappings with the command dmraid -r.

In my case I had the following mappings:

via_hfciifae -- the raw raid volume
via_hfciifae1 -- the NTFS partition
via_hfciifae5 -- /boot
via_hfciifae6 -- swap
via_hfciifae7 -- /
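If you prefer the command line to gparted's "convert", the three partitions above can be formatted directly (device names follow my via mapping; substitute your own). Note that the swap partition is initialized with mkswap rather than a mkfs tool. This sketch collects the commands in a variable so you can review them before letting them loose:

```shell
# Review before running -- these destroy whatever is on the partitions.
CMDS='mke2fs /dev/mapper/via_hfciifae5
mkswap /dev/mapper/via_hfciifae6
mkreiserfs -q /dev/mapper/via_hfciifae7'
printf '%s\n' "$CMDS"          # review the commands...
# printf '%s\n' "$CMDS" | sh   # ...then uncomment this to run them
```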
Mounting the Temporary File Structure
Next, I created a temporary file structure to hold my new installation while I constructed it, and mounted two sets of directories to it: a) the new partitions I had created for / and /boot (so I could install packages to them); b) the currently running /dev, /proc, and /sys filesystems, so I could use these to simulate a running system within my temporary file structure.

mkdir /target
mount -t reiserfs /dev/mapper/via_hfciifae7 /target
mkdir /target/boot
mount -t ext2 /dev/mapper/via_hfciifae5 /target/boot
mkdir /target/dev
mount --bind /dev /target/dev
mkdir /target/proc
mount -t proc proc /target/proc
mkdir /target/sys
mount -t sysfs sysfs /target/sys
Installing the Base System
Now we install the base system. debootstrap installs all the base packages and does its setup. Afterwards you need to install some additional packages:

cd /target

apt-get install debootstrap
# install debootstrap to install the base system at the next point

# install base system
debootstrap breezy /target # any distribution can be selected instead of breezy

# copy sources list
cp /etc/apt/sources.list /target/etc/apt

# copy resolv.conf
cp /etc/resolv.conf /target/etc

# copy hosts
cp /etc/hosts /target/etc

# run in the now installed system
chroot /target

# install ubuntu-base (and other packages)
apt-get update
apt-get install ubuntu-base linux-k7 ubuntu-desktop dmraid grub
# change grub to lilo if you use lilo
# change k7 to your processor architecture; if you don't know, use linux-386.

# when prompted whether you want to stop now, say no (we will later be fixing the issue that the system is talking about)
# when prompted whether to create a symbolic link, say yes. (By setting up symlinks with names that don't change with each kernel update, the corresponding file references used by the bootloader don't have to be updated each time the kernel is updated.)

# the system is installed now.
**Temporary Note to other editors: when I tested this howto with 6.06 LTS on 1 June 2006, the install of dmraid failed (--configure), indicating it was unable to start the dmraid initscript. This may have been some kind of error on my part. I was able to fix this with dpkg-reconfigure dmraid, so I add it here as a possibly useful tip should this turn out to be a systemic problem that others encounter. Also, install dmraid first, then the kernel, in order to use the initramfs scripts that are now part of the 6.06 distribution. This is based on one 6.06 test -- please correct/edit this as appropriate.**

Setting Up the Bootloader for RAID
Now that you have the debian core, ubuntu-base, linux kernel, dmraid, grub, and ubuntu-desktop installed, you can proceed with the bootloader. If you haven't completed these successfully, don't attempt to proceed; you will just exacerbate any problem you have at this point.

We will demonstrate the installation of GRUB (Grand Unified Bootloader), but there are several alternatives (e.g., LILO). The key information here is how the normal process for use of the bootloader had to be modified to accommodate the RAID mappings, so this general process should be useful regardless of your choice of bootloader.

Installing the Bootloader Package
Now you need to run the grub shell. In a non-RAID scenario, one might use grub-install, but we cannot because it cannot see the RAID device mappings and therefore cannot set up correct paths to our boot and root partitions. So we will install and configure grub manually as follows:

First, make a home for GRUB and put the files there that it needs to get set up:

mkdir /boot/grub
cp /lib/grub/<your-cpu-arch>-pc/stage1 /boot/grub/
cp /lib/grub/<your-cpu-arch>-pc/stage2 /boot/grub/
cp /lib/grub/<your-cpu-arch>-pc/<the staging file for your boot partition's filesystem> /boot/grub/
The "staging files" look like: "e2fs_stage1_5" (for ext2 or 3); "reiserfs_stage1_5" (for reiserfs); "xfs_stage1_5" (for xfs); and so on. It is safe to copy them all to your /boot/grub.
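Since copying them all is safe, the copy can be done in one loop. In this sketch the source and destination are temp directories so it is harmless to run anywhere; on the real system SRC is /lib/grub/<your-cpu-arch>-pc and DST is /boot/grub:

```shell
# Stand-in directories; substitute the real paths on your system.
SRC=$(mktemp -d); DST=$(mktemp -d)
# Simulate the files GRUB ships (the real ones come with the grub package):
touch "$SRC/stage1" "$SRC/stage2" "$SRC/e2fs_stage1_5" "$SRC/reiserfs_stage1_5"
# Copy stage1, stage2, and every filesystem staging file in one go:
cp "$SRC"/stage1 "$SRC"/stage2 "$SRC"/*_stage1_5 "$DST"/
ls "$DST"
```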

Next, go into the grub shell:

grub
You should now see the grub prompt.

Next, tell GRUB which device is the boot device:

device (hd0) /dev/mapper/via_hfciifae
In my case, it was the RAID array mapped as /dev/mapper/via_hfciifae.

Next, tell GRUB where all the stuff is that is needed for the boot process:

root (hd0,4)
CAUTION: This is one of the most common sources of error, so we will explain this in excruciating detail. From GRUB's perspective, "root" is whatever partition holds the contents of /boot. For most people, this is simply your linux root (/) partition. E.g., if / is your 2nd partition on the RAID you indicated above as hd0, you would say "root (hd0,1)". Remember that GRUB starts counting partitions at 0. The first partition is 0, the second is 1, and so on. In my case, however, I have a separate boot partition that GRUB mounts read-only for me at boot time (which helps keep it secure). It's my 5th partition, so I say "root (hd0,4)"
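The off-by-one between Linux partition names and GRUB's numbering is mechanical, so it can help to compute it rather than count on fingers. A tiny sketch (partition 5 is my /boot; yours will differ):

```shell
LINUX_PART=5                     # /boot is via_hfciifae5 in my layout
GRUB_PART=$((LINUX_PART - 1))    # GRUB counts partitions from 0
echo "root (hd0,$GRUB_PART)"
```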

Optional: IF GRUB complains about bad cylinder numbers (if it did not complain, skip this part about fdisk and geometry): you may need to tell it about the device's geometry (cylinders, heads, and sectors per track). You can find this information by quitting GRUB and running: fdisk -l /dev/mapper/via_hfciifae ...then re-enter the GRUB shell and use the command: geometry (hd0) 9001 255 63

Next, now that you've successfully established the "device" and "root", you can go ahead and instantiate GRUB on the boot device. This sets up the stage 1 bootloader in the device's master boot record and the stage 2 boot loader and grub menu in your boot partition:

setup (hd0)
quit
Configuring the Bootloader
Now run update-grub:

update-grub
This adds your newly installed linux kernel, and the associated initial ram disk image, to the boot options menu that grub presents during start-up. You will find this menu (menu.lst) in the /boot/grub directory. We need to edit this menu.lst file as follows. (CAUTION: Get this right -- this is a common source of error, and mistakes result in a kernel panic upon reboot, so no typos.)

a) "root=": Correct the path that points to the linux root (in several places). update-grub configures hda1 as root because, not being dmraid-aware, it can't find your current root-device. Put the correct device mapping for your linux root. So put your equivalent of:

root=/dev/mapper/via_hfciifae7
every place you see "root=" (only where you see root and the equal sign). This goes in all the places where update-grub defaulted to root=/dev/hda1 or just left it blank like root= .

Make sure you change this in the Automagic defaults section as well as in each of the multiple alternatives sections that follow. (Important: the Automagic defaults section is nested and therefore uses ## to indicate comments and # to indicate the actual defaults that it uses. So don't "un-comment" the default lines when you edit them. In other words, leave the #). When you update your kernel later on, update-grub will use these defaults so it won't ignorantly "assume hda1" and send your system into a kernel panic when you boot. This ought to end up looking something like:

#kopt=root= /dev/mapper/via_hfciifae7 ro
b) "groot": If necessary, correct the grub root. In places, you will see other lines that also refer to "root" (or "groot") but use syntax such as root (hd0,1) instead of a path. As described earlier, these refer to the "root" for grub's purposes, which is actually your /boot. Also, remember grub's syntax uses partition numbering beginning with zero. So, if you have a separate /boot partition, these lines should instead show something like:

root (hd0,4)
(The same information we used while working with grub interactively earlier.) Change this both for the Automagic defaults as well as for each alternative, including the memtest option.

c) An additional edit is required IF you are using a separate /boot partition. The path pointing to the linux root must be RELATIVE to the grub "root" (your /boot). So if you are using a separate boot partition, the paths in grub's menu.lst file that help grub locate the linux kernel and initrd will not begin with "/boot", and you should delete that portion of the path. For example, update-grub initially spat out this:

title Ubuntu, kernel 2.6.15-23-amd64-k8
root (hd0,0)
kernel /boot/vmlinuz-2.6.15-23-amd64-k8 root= ro quiet splash
initrd /boot/initrd.img-2.6.15-23-amd64-k8
savedefault
boot
... and because I have a separate boot partition and opted not to use a grub splash image (which you can learn about elsewhere), my editing looked like this...

title Ubuntu, kernel 2.6.15-23-amd64-k8
root (hd0,4)
kernel /vmlinuz-2.6.15-23-amd64-k8 root=/dev/mapper/via_hfciifae7 ro quiet
initrd /initrd.img-2.6.15-23-amd64-k8
boot
NOTE that I removed "savedefault". If you leave this in, you will get a "file not found" error when you try to boot (you also can't use default=saved up top as it shows in the example). Again, if you are not using a separate boot partition, you can leave /boot in the paths.

d) To add a static boot stanza for Windows, you can use and change the example in the menu.lst file or the following:

title Windows XP
rootnoverify (hd0,0)
chainloader +1
Put it at the bottom, below where it says ### END DEBIAN AUTOMAGIC KERNELS LIST. Or if for some unforgivable reason you want your computer to boot Windows by default, you can put it up front above where it says

### BEGIN DEBIAN AUTOMAGIC KERNELS LIST
e) Close the gaping security hole! First, set a password where the example shows it. This will be required for any locked menu entries, for the ability to edit the bootlines, or to drop to a command prompt. To do this, in the console type:

grub-md5-crypt
When it prompts you "Password:", it's asking what you want to be the GRUB password (not your user password, the root password, or anything else). You will be prompted to enter it twice, then it will spit out the MD5 hash that you need to paste into menu.lst. This line should end up looking something like:

password --md5 $1$gLhU0/$aW78kHK1QfV3P2b2znUoe/
- Then, to keep your "recovery mode" boot alternative(s) locked each time update-grub runs, set

lockalternative=true
Unless you do this, anybody will be able to seize root simply by rebooting your computer (e.g., cutting power to it) and selecting your "recovery mode" menu entry when it reboots, or editing the normal bootline to include 'single' mode.

f) Test the automagic kernels settings (this also completes the locking of alternatives). It is better to find errors now than a month from now when you've forgotten all this stuff and the kernel gets updated. First, make a backup of menu.lst; then run update-grub again; watch for errors and re-examine menu.lst for discrepancies; correct as needed.

Reconfiguring the Initramfs for RAID (Ubuntu 5.10)
Reminder: Sections Ubuntu 5.10 should be skipped if you are installing Ubuntu 6.06.

In recent years there has been a trend to try and pull a bunch of code out of the kernel and into EarlyUserspace. This includes stuff like nfsroot configuration, md software RAID, lvm, conventional partition/disklabel support, and so on. Early user space is set up in the form of an initramfs which the boot loader loads with the kernel, and this contains user mode utilities to detect and configure the hardware, mount the correct root device, and boot the rest of the system.

Hardware fakeRAID falls into this category of operation. A device driver in the kernel called device mapper is configured by user mode utilities to access software RAIDs and partitions. If you want to be able to use a fakeRAID for your root filesystem, your initramfs must be configured to detect the fakeRAID and configure the kernel mapper to access it.

So we need to add dmraid to the initramfs. Debian and Ubuntu support this by way of a set of shell scripts and configuration files placed in /etc/mkinitramfs/. We must tailor these to include dmraid by plugging in two simple scripts and adding a one-line entry to a configuration file. The only real challenge here is to make sure you don't inadvertently screw up the syntax with a typo.

Note that in Ubuntu 6.06, this is taken care of by the dmraid package itself.

Configuring mkinitramfs in Ubuntu 5.10 (Breezy Badger)
First, create a new file as /etc/mkinitramfs/scripts/local-top/dmraid .

(If you are lazy or don't like to keyboard, you can open this how-to in the browser and copy the text.)

#!/bin/sh

PREREQ="udev"

prereqs()
{
echo "$PREREQ"
}

case $1 in
# get pre-requisites
prereqs)
prereqs
exit 0
;;
esac

modprobe -q sata_nv
modprobe -q dm-mod

# Uncomment next line if you are using RAID-1 (mirror)
# modprobe -q dm-mirror

/sbin/dmraid -ay
Second, create another new file as /etc/mkinitramfs/hooks/dmraid.

(Again for the lazy, you can copy it from your browser. Also, it's only slightly different, so if you are manually typing it for some reason, you may want to start with a copy of the first script.)

#!/bin/sh

PREREQ=""

prereqs()
{
echo "$PREREQ"
}

case $1 in
# get pre-requisites
prereqs)
prereqs
exit 0
;;
esac

. /usr/share/initramfs-tools/hook-functions

copy_exec /sbin/dmraid /sbin

exit 0
Third, mark both of these new initramfs scripts as executable:

chmod +x /etc/mkinitramfs/hooks/dmraid
chmod +x /etc/mkinitramfs/scripts/local-top/dmraid
Last, add the line dm-mod to the file /etc/mkinitramfs/modules. Make sure the file ends with a newline. If you use a RAID-1 (mirror), include dm-mirror as well.
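The append can be done with printf, which guarantees the trailing newline the file needs. On the real Ubuntu 5.10 system the file is /etc/mkinitramfs/modules; a temp file stands in here so the snippet is harmless to try:

```shell
MODULES=$(mktemp)   # on Ubuntu 5.10: MODULES=/etc/mkinitramfs/modules
printf '%s\n' dm-mod >> "$MODULES"
# If you use a RAID-1 (mirror), also:
# printf '%s\n' dm-mirror >> "$MODULES"
```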

Updating the initrd (Ubuntu 5.10)
Now the big moment -- use update-initramfs to rebuild the initrd file. Below, I show the kernel I installed at that time, but the version following "img-" and following "-c -k " must reflect the version YOU are using (e.g., "2.6.12-10-amd64-k8-smp" or whatever).

Two commands:

rm /boot/initrd.img-2.6.12-9-k7
update-initramfs -c -k 2.6.12-9-k7
Now you are ready to set up the new system.

Preconfiguring the New System as Usual
Ensure that you are still operating as root within the new (temporary) system (i.e., your prompt will be root@ubuntu#). If not, chroot /target again: sudo chroot /target

(The process from here forward is the same as any bootstrap / network installation, and there are other sources to refer to for more detail.)

UBUNTU 5.10: Enter the command base-config new to configure system defaults.

**UBUNTU 6.06: base-config is deprecated in Dapper Drake. The correct procedure needs to be inserted here. Theoretically, one could do what base-config does manually.**

While it is not absolutely necessary, it may be useful to also copy the live hosts and interfaces files into your temporary system before rebooting (after exiting your chroot):

cp /etc/hosts /target/etc/hosts
cp /etc/network/interfaces /target/etc/network/interfaces

It will also be helpful to configure your fstab file at this point. One easy way to do this is:

cat /etc/mtab
(select and copy everything)

nano /target/etc/fstab
(paste everything)

Then delete everything except the proc line, and the lines that refer to your RAID partitions. It might end up something like this (yours will vary - people asked for examples):

#FileSystem MountPoint Type Options Dump/Pass

proc /proc proc rw 0 0
/dev/mapper/via_hfciifae5 /boot ext2 defaults 0 2
/dev/mapper/via_hfciifae7 / reiserfs notail,noatime 0 1
/dev/mapper/via_hfciifae6 none swap sw 0 0
or

#[fs ] [fs_mount][fs_type][ fs_opts ][dmp][pass]
/dev/mapper/nvidia_bggfdgec2 /boot ext3 defaults 0 1
/dev/mapper/nvidia_bggfdgec3 none swap sw 0 0
proc /proc proc rw 0 0
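Instead of pasting everything into nano and deleting lines by hand, you can filter the mount table before it goes into fstab: keep only the proc line and anything under /dev/mapper. A self-contained sketch (a sample string stands in for /etc/mtab so it runs anywhere):

```shell
# Sample mtab contents standing in for /etc/mtab:
MTAB='/dev/mapper/via_hfciifae7 /target reiserfs rw 0 0
/dev/hda1 /media/cdrom iso9660 ro 0 0
proc /proc proc rw 0 0'
# Keep proc and the dmraid-mapped devices; drop everything else:
printf '%s\n' "$MTAB" | grep -E '^(proc |/dev/mapper/)'
# Real usage (assumption -- review the result before rebooting):
#   grep -E '^(proc |/dev/mapper/)' /etc/mtab >> /target/etc/fstab
```

You will still need to fix the mount points by hand afterwards (the root line will say /target rather than /) and adjust the options and dump/pass fields, as in the examples above.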
Finally you are ready to reboot. This first time, select the "recovery mode" option. When it asks, you want to "perform maintenance". Set the root password:

passwd
Suggested early set-up tasks: adduser yourself (create a regular user); nano /etc/group (create an admin group); visudo (duplicate the root line, but with %admin where root was); then reboot, and you should be able to log in as a normal user through gdm and continue normally with sudo privileges.
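The visudo edit is the step most often mistyped: the admin-group line should look exactly like the root line, with %admin in place of root. A sketch that builds the line as a string (the group name and 'alice' are placeholders; the adduser/addgroup commands are commented out because they need root and are interactive):

```shell
# adduser alice          # create a regular user
# addgroup admin         # create an admin group (or edit /etc/group)
# adduser alice admin    # put the user in it
GROUP=admin
SUDO_LINE="%$GROUP ALL=(ALL) ALL"   # mirrors the root line: root ALL=(ALL) ALL
echo "$SUDO_LINE"
# Add this line via 'visudo'; never edit /etc/sudoers directly.
```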

(more is needed here, or a reference to whatever replaces the howto that describes a general debootstrap install)

Upgrading to Ubuntu 6.06 (Dapper Drake)
The dmraid package in Ubuntu 6.06 has the necessary scripts included (under /usr). After upgrading the dmraid package, you can therefore delete the old scripts that you've made (under /etc). To be sure the package scripts are baked into the initrd, update the initrd again by reconfiguring dmraid:

sudo rm /etc/mkinitramfs/hooks/dmraid
sudo rm /etc/mkinitramfs/scripts/local-top/dmraid
sudo dpkg-reconfigure dmraid
External Links
Running Ubuntu On a Fakeraid/1 array describes how to adapt the original HOWTO to a FakeRAID/1 (mirroring) array.

Special Note for Raid 5
While trying to install dmraid for a Raid 5 Nvidia setup, I received an error 139 forced exit; upon further investigation in the TODO doc in /usr/share/doc/dmraid, I found that dmraid doesn't support raid modes above 1 yet. Here's the exact wording from the TODO: "higher RAID levels above 1; main restriction to support these is the need for device-mapper targets which map RAID3,4,5."

EDIT: Further research has led me to dmraid 1.0.0.rc10, which in its changelog notes Raid5 support for nvidia. The current Ubuntu version is 1.0.0.rc9, which explains the lack of Raid5 support. Will update with more info on how well it works.

NOTE: The kernel device mapper (which dmraid depends on) does not yet support raid 5. There are some early development patches available, so they might get merged into Linus's kernel in time for Dapper+1, but I'd say it's not all that likely.


--------------------------------------------------------------------------------

CategoryDocumentation CategoryHardware

(Last edited 2006-06-02 08:01:01 by JohnBrendler)

© 2005 Canonical Ltd. Ubuntu, Kubuntu, Edubuntu and Canonical are registered trademarks of Canonical Ltd.


#2
Subject:
Posted: 2006-06-03 3:59

Joined: 2006-05-13 3:06
Posts: 115
Can anyone help translate this?
How to install Ubuntu on an NF-series RAID array.


#3
Subject:
Posted: 2006-06-14 20:04

Joined: 2006-06-08 15:30
Posts: 26
How to configure Ubuntu to access a hardware fakeRAID
如何在fakeRAID上配置Ubuntu
Contents
内容


How to configure Ubuntu to access a hardware fakeRAID
如何配置Ubuntu来存取fakeRAID硬件设备
What is fakeRAID?
什么是fakeRAID?
Installing Ubuntu into the RAID Array
在RAID阵列上安装Ubuntu
Installing dmraid
安装dmraid
Partitioning the RAID Array
将RAID阵列分区
Formatting the Partitions
格式化分区
Mounting the Temporary File Structure
挂载临时文件结构
Installing the Base System
安装基本系统
Setting Up the Bootloader for RAID
为RAID安装起动器
Installing the Bootloader Package
安装起动器包
Configuring the Bootloader
配置起动器
Reconfiguring the Initramfs for RAID (Ubuntu 5.10)
为RAID 重新配置Initramfs (Ubuntu 5.10)
Configuring mkinitramfs in Ubuntu 5.10 (Breezy Badger)
在Ubuntu 5.10中配置mkinitramfs
Updating the initrd (Ubuntu 5.10)
升级initrd
Preconfiguring the New System as Usual
像通常一样预配置新系统
Upgrading to Ubuntu 6.06 (Dapper Drake)
升级到Ubuntu 6.06
External Links
外部链接
Special Note for Raid 5
RAID 5的特别说明


Back when Ubuntu Breezy preview came out, I spent a week getting it installed on my Via SATA fakeRAID and finally

got the system dual-booting WinXP and Ubuntu Linux on a RAID-0 (stripe) between two 36 gig 10,000rpm WD Raptor hard

drives. So I thought I would create a howto to describe how I did it so that others could benefit from my work and

add related lessons.
在当初Breezy预览版发布的时候,我花了一个星期的时间将Breezy安装在我的Via SATA fakeRAID上,并最终在用两块万转36G西数

Rapter硬盘组建的RAID-0系统上实现了Winxp和Ubuntu linux的双启动。所以,我想我应该把我如何做得写下来,以便于他人能得益

于我的工作,并同样添加一些相关的内容。

This page describes how to get Linux to see the RAID as one disk and boot from it. In my case, I use a RAID-0

configuration, but this should also apply to RAID-1 and RAID-5 (see note at end). For the benefit of those who

haven't done some of these steps before, these instructions are fairly detailed (so don't be intimidated by the

length of this document -- it's pretty straight-forward).
本页描述了如何让linux将RAID看做一块硬盘并由其上启动。在我的例子中,我是使用的RAID-0结构,但我所介绍的内容应当也适用

于RAID-1,和RAID-5(参见文末的说明)。考虑到一些没有做过类似操作的朋友,我在此给出的介绍比较详细,所以不要被这篇文章的

长度所吓倒,其实都很直白。

What is fakeRAID?
In the last year or two a number of hardware products have come on the market claiming to be IDE or SATA RAID

controllers. These have shown up in a number of desktop/workstation motherboards. Virtually none of these are true

hardware RAID controllers. Instead, each is simply a multi-channel disk controller that has special BIOS and drivers

to assist the OS in performing software RAID functions. This has the effect of giving the appearence of a hardware

RAID, because the RAID configuration is set up using a BIOS setup screen and the system can be booted from the RAID.
什么是fakeRAID?
在前一两年,市场上出现了自称是IDE或者SATA RAID控制器的硬件设备。这类硬件出现在一些桌面或工作站的主板上。实际上,这些

设备都不是真正的RAID控制器。它们其实只是简单的多通道磁盘控制器,只不过是可以借助于特殊的BIOS和驱动来协助操作系统实现

软件层面的RAID功能。由于RAID设置是用的BISO设置界面而且系统可以从RAID上启动,所以这种功能就可以给用户一个像是硬件RAID

的外表。

Under Windows, you must supply a driver floppy to the setup process so Windows can access the RAID. Under Linux,

which has built-in softRAID functionality that pre-dates these devices, the hardware is seen for what it is --

multiple hard drives and a multi-channel IDE/SATA controller. Hence, "fakeRAID".
在windows下,我们必须为其安装驱动才能操作RAID。而在linux环境下,由于系统内置了软RAID功能可以对这些设备进行预处理

(predate该怎么翻译好?),这些硬件可以就按照它们本来的面目来看待,多硬盘驱动器或者多通道IDE/SATA控制器。也就

是,"fakeRAID"。

If you have arrived here after researching this topic on the Internet, you know that a common response to this

question is, "I don't know if you can actually do that, but why bother -- Linux has built-in softRAID capability."

Also, it's not clear that there is any performance gain using hardware fakeRAID under Linux instead of the built-in

softRAID capability; the CPU still ends up doing the work. Well, that's beside the point. The point is that a

Windows user with a fakeRAID system may very well want to put Linux on that same set of disks. Multiboot

configurations are common for cross-over users trying Linux out, for people forced to use Windows for work, and for

other reasons. These people shouldn't have to add an additional drive just so they can boot Linux. Also, ome people

say, "RAID-0 is risky". That's a matter of individual needs (speed vs security subject to resource constraints).

These are not the subject of this HowTo; we assume you want to do it and tells you "how to".
如果你已经在internet上研究了这个题目并看到了这里,你一定已经知道通常对于这个问题的反映是,我不知道你是否可以做到,但

是没必要苦恼,因为linux已经内置了软raid的功能。而且,我们也不知道在linux放弃内置的软raidn功能而选择使用fakeRaid是否

真的能带来系统性能上的提升。cpu最后仍然要作这些工作。不过,这不是问题的重点。重点在于,有windows用户可能会希望将

linux也安装在同一个硬盘组上。对于那些喜欢尝试各种系统,或者因为工作需要或其他原因必须使用windows的用户来说,多重启动

是很常见的。这类用户没必要为了使用linux就添加一块硬盘。另外,一些人说RAID-0不安全。不过这要视个人需要而言,看你是更

看重安全还是效率。这些问题都不是本文的重点问题,因此我们就假定您需要这么做并告诉您如何去做。

Installing Ubuntu into the RAID Array
Installing dmraid
The standard setup and LiveCDs do not yet contain support for fakeRAID. I used the LiveCD to boot up, and used the

package manager to download the dmraid package from the universe repository. You will need to enable packages from

Universe in the settings of Synaptic to see the package. If you are using the DVD you may also need to get the

gparted package, which we will use for partitioning your RAID.
在RAID上安装Ubuntu
安装dmraid
标准安装光盘和LiveCDs不包含对fakeRAID的支持。我使用liveCD启动机器,然后使用包管理器从universe下载dmraid包。You will

need to enable packages from Universe in the settings of Synaptic to see the package. 如果你使用的是DVD,那也需要

gparted 来为RAID分区。

NOTE: Support for dmraid has been improved in Ubuntu 6.06, and several of the steps below are no longer necessary.

If you install from the Live cd, install the dmraid package from universe before you start the installer program

(Ubiquity). Just make sure you choose your RAID devices under /dev/mapper and do not use the raw devices /dev/sd*

for anything. So far, this works for some, while for others, Ubuquity crashes. If Ubiquity does not complete the

install, you can manually complete the process by following this procedure. In that case, those steps that are no

longer required for Ubuntu 6.06 or later have been marked "Ubuntu 5.10".

Partitioning the RAID Array
You can use gparted to create and delete partitions as you see fit, but at this time it cannot refresh the
partition table after modifying it. So you will need to change the partitions, then manually run dmraid -ay from
the command prompt to detect the new partitions, and then refresh gparted before you can format them. (Of
course, you can use parted, fdisk, or other tools if you are experienced with them.)

I needed to resize my existing NTFS partition to make space for Ubuntu. (If you don't need to do this, skip to the
next paragraph.) Gparted currently cannot do this on the mapper device, so I had to use the ntfsresize program from
the command line. Note that ntfsresize only resizes the filesystem, not the partition, so you have to do that
manually. Use ntfsresize to shrink the filesystem, note the new size of the filesystem in sectors, then fire up
fdisk. Switch fdisk to sector mode with the 'u' command. Use the 'p' command to print the current partition table.
Delete the partition that you just resized and recreate it with the same starting sector. Use the new size of the
filesystem in sectors to compute the ending sector of the partition. Don't forget to set the partition type to the
value it had before. Now you should be able to create a new partition with the free space.
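The ending-sector arithmetic above is where mistakes creep in; here is a minimal sketch of the calculation in shell, using made-up numbers (the start sector and filesystem size below are illustrative, not values from this HowTo):

```shell
# Hypothetical values: fdisk's 'p' output (in sector mode) showed the
# partition starting at sector 63, and ntfsresize reported the shrunken
# filesystem as occupying 41943040 sectors.
START=63
FS_SECTORS=41943040

# The recreated partition must span the whole filesystem, so its last
# sector is start + size - 1; this is the end value to give fdisk.
END=$((START + FS_SECTORS - 1))
echo "$END"    # prints 41943102
```

Give fdisk the same starting sector (63 here) and this ending sector when recreating the partition, and remember to restore the original partition type.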

Start gparted and create the partitions you want for your setup. To begin, use the selector on the upper right to
choose the device dmraid has created for your fakeRAID. In my case, this was /dev/mapper/via_hfciifae, with an
additional device /dev/mapper/via_hfciifae1 assigned to my already-created NTFS partition. DMRAID will attempt to
assign a meaningful name reflecting the controller you are using (e.g., an nvRAID user may see
/dev/mapper/nvidia_bggfdgec or the like).

After selecting the unused space, I created an extended partition with 3 logical partitions inside. I made a 50 meg
partition for /boot, a 1 gig partition for swap, and the rest for the root. Once you have set up the partitions you
want, apply the changes and exit gparted. If you apply changes more than once (e.g., do this in more than one step,
or change your mind while working), you should exit gparted, refresh the partition table using the command
dmraid -ay, and open gparted again to continue your work.


Formatting the Partitions
Now format your filesystem for each partition. In my case I did this from the command line, running mke2fs on
/dev/mapper/via_hfciifae5 and mkreiserfs on /dev/mapper/via_hfciifae7.

Alternatively, you can do this using the GUI in gparted. Run dmraid -ay again to refresh the partition table for
gparted and then open gparted again. You will see that the new partitions are designated as "unknown type", because
they are not formatted. You can use gparted to format them by right-clicking each partition and selecting "convert"
and the appropriate format. Before you exit, make a note of the device mapping for each new partition (you will need
this later). Apply the changes and exit. You can also see these mappings with the command dmraid -r.


In my case I had the following mappings:

via_hfciifae -- the raw raid volume
via_hfciifae1 -- the NTFS partition
via_hfciifae5 -- /boot
via_hfciifae6 -- swap
via_hfciifae7 -- /


Mounting the Temporary File Structure
Next, I created a temporary file structure to hold my new installation while I construct it, and I mounted two sets
of directories to it: a) the new partitions I had created for / and /boot (so I could install packages to
them); b) the currently running /dev, /proc, and /sys filesystems, so I could use these to simulate a
running system within my temporary file structure.

mkdir /target
mount -t reiserfs /dev/mapper/via_hfciifae7 /target
mkdir /target/boot
mount -t ext2 /dev/mapper/via_hfciifae5 /target/boot
mkdir /target/dev
mount --bind /dev /target/dev
mkdir /target/proc
mount -t proc proc /target/proc
mkdir /target/sys
mount -t sysfs sysfs /target/sys


Installing the Base System
Now we install the base system. debootstrap installs all base packages and does its setup. Afterwards you need to
install some additional packages:

cd /target

# install debootstrap so we can install the base system in the next step
apt-get install debootstrap

# install the base system
debootstrap breezy /target   ## any distribution can be selected instead of breezy

# copy the sources list
cp /etc/apt/sources.list /target/etc/apt

# copy resolv.conf
cp /etc/resolv.conf /target/etc

# copy hosts
cp /etc/hosts /target/etc

# work inside the newly installed system
chroot /target

# install ubuntu-base (and other packages)
apt-get update
apt-get install ubuntu-base linux-k7 ubuntu-desktop dmraid grub
# change grub to lilo if you use lilo
# change k7 to your processor architecture; if you don't know, use linux-386


That's all for today. Some parts of the translation aren't great; I hope more experienced folks can improve them.
This is my first post here, heh.


Post #4
Posted: 2006-06-16 16:33
Joined: 2006-06-08 15:30, Posts: 26
# when prompted whether you want to stop now, say no (we will later be fixing the issue
# that the system is talking about)
#
# when prompted whether to create a symbolic link, say yes. (By setting up symlinks with
# names that don't change with each kernel update, the corresponding file references used
# by the bootloader don't have to be updated each time the kernel is updated.)

# the system is installed now.

**Temporary Note to other editors: when I tested this howto with 6.06 LTS on 1 June 2006,
the install of dmraid failed (--configure), indicating it was unable to start the dmraid
initscript. This may have been some kind of error on my part. I was able to fix this with
dpkg-reconfigure dmraid, so I add it here as a possibly useful tip should this turn out to
be a systemic problem that others encounter. Also, install dmraid first, then the kernel, in
order to use the initramfs scripts that are now part of the 6.06 distribution. This is based
on one 6.06 test -- please correct/edit this as appropriate.**


Setting Up the Bootloader for RAID
Now that you have the debian core, ubuntu-base, linux kernel, dmraid, grub, and ubuntu-desktop
installed, you can proceed with the bootloader. If you haven't completed these steps
successfully, don't attempt to proceed; you will just exacerbate any problem you have at
this point.


We will demonstrate the installation of GRUB (Grand Unified Bootloader), but there are
several alternatives (e.g., LILO). The key information here is how the normal process for
use of the bootloader had to be modified to accommodate the RAID mappings, so this general
process should be useful regardless of your choice of bootloader.

Installing the Bootloader Package
Now you need to run the grub shell. In a non-RAID scenario, one might use grub-install, but

we cannot because it cannot see the RAID device mappings and therefore cannot set up correct

paths to our boot and root partitions. So we will install and configure grub manually as

follows:


Post #5
Posted: 2006-06-21 10:42
Joined: 2006-06-08 15:30, Posts: 26
First, make a home for GRUB and put the files there that it needs to get set up:

mkdir /boot/grub
cp /lib/grub/<your-cpu-arch>-pc/stage1 /boot/grub/
cp /lib/grub/<your-cpu-arch>-pc/stage2 /boot/grub/
cp /lib/grub/<your-cpu-arch>-pc/<the staging file for your boot partition's filesystem> /boot/grub/
The "staging files" look like: "e2fs_stage1_5" (for ext2 or 3); "reiserfs_stage1_5" (for
reiserfs); "xfs_stage1_5" (and so on). It is safe to copy them all to your /boot/grub.
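Since it is safe to copy all the staging files, a glob saves you from typing each name. The sketch below uses scratch directories so it can be run anywhere; on the real system the source is /lib/grub/&lt;your-cpu-arch&gt;-pc and the destination is /boot/grub:

```shell
# Scratch stand-ins for /lib/grub/<your-cpu-arch>-pc and /boot/grub
SRC=$(mktemp -d)
DEST=$(mktemp -d)
touch "$SRC/stage1" "$SRC/stage2" "$SRC/e2fs_stage1_5" "$SRC/reiserfs_stage1_5"

# Copy everything -- stage1, stage2, and all filesystem staging files -- at once
cp "$SRC"/* "$DEST"/
ls "$DEST"
```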

Next, go into the grub shell:

grub

You should now see the grub prompt.

Next, tell GRUB which device is the boot device:

device (hd0) /dev/mapper/via_hfciifae

In my case, it was the RAID array mapped as /dev/mapper/via_hfciifae.

Next, tell GRUB where all the stuff is that is needed for the boot process:

root (hd0,4)
CAUTION: This is one of the most common sources of error, so we will explain this in
excruciating detail. From GRUB's perspective, "root" is whatever partition holds the
contents of /boot. For most people, this is simply your linux root (/) partition. E.g., if /
is your 2nd partition on the RAID you indicated above as hd0, you would say "root (hd0,1)".
Remember that GRUB starts counting partitions at 0. The first partition is 0, the second is
1, and so on. In my case, however, I have a separate boot partition that GRUB mounts read-only
for me at boot time (which helps keep it secure). It's my 5th partition, so I say "root
(hd0,4)".
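The zero-based numbering trips people up often enough to spell out: linux's Nth partition is GRUB's (hd0,N-1). A one-line sanity check, where PART=5 stands in for the author's fifth partition (the separate /boot):

```shell
# GRUB counts partitions from 0, so linux partition N maps to (hd0,N-1)
PART=5
GRUB_ROOT="root (hd0,$((PART - 1)))"
echo "$GRUB_ROOT"    # prints: root (hd0,4)
```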


Optional: IF GRUB complains about bad cylinder numbers (i.e., if it did not complain, skip
this part about fdisk and geometry): You may need to tell it about the device's geometry
(cylinders, heads, and sectors per track). You can find this information by quitting GRUB
and using fdisk with the command: fdisk -l /dev/mapper/via_hfciifae ...then reenter the
GRUB shell and use the command: geometry (hd0) 9001 255 63

Next, now that you've successfully established the "device" and "root", you can go ahead and

instantiate GRUB on the boot device. This sets up the stage 1 bootloader in the device's

master boot record and the stage 2 boot loader and grub menu in your boot partition:

setup (hd0)
quit
Configuring the Bootloader
Now run update-grub:

update-grub
This adds your newly installed linux kernel, and the associated initial ram disk image, to
the boot options menu that grub presents during start-up. You will find this menu.lst file in
the /boot/grub directory. We need to edit it as follows. (CAUTION: Get this right --
this is a common source of error, and mistakes result in a kernel panic upon reboot, so no
typos.)


a) "root=": Correct the path that points to the linux root (in several places). update-grub
configures hda1 as root because, not being dmraid-aware, it can't find your current root
device. Put the correct device mapping for your linux root. So put your equivalent of:

root=/dev/mapper/via_hfciifae7

every place you see "root=" (only where you see root and the equals sign). This goes in all
the places where update-grub defaulted to root=/dev/hda1 or just left it blank, like root= .


Post #6
Posted: 2006-06-21 16:03
Forum administrator; Joined: 2005-03-27 0:06, Posts: 10116
cleverysm, please make your changes directly at the link below:

http://wiki.ubuntu.org.cn/community/FakeRaidHowto


Post #7
Posted: 2006-06-27 18:28
Joined: 2006-06-08 15:30, Posts: 26
Make sure you change this in the Automagic defaults section as well as in each of the multiple alternatives sections that follow. (Important: the Automagic defaults section is nested and therefore uses ## to indicate comments and # to indicate the actual defaults that it uses. So don't "un-comment" the default lines when you edit them. In other words, leave the #.) When you update your kernel later on, update-grub will use these defaults so it won't ignorantly "assume hda1" and send your system into a kernel panic when you boot. This ought to end up looking something like:

# kopt=root=/dev/mapper/via_hfciifae7 ro




b) "groot": If necessary, correct the grub root. In places, you will see other lines that also refer to "root" (or "groot") but use syntax such as root (hd0,1) instead of a path. As described earlier, these refer to the "root" for grub's purposes, which is actually your /boot. Also, remember grub's syntax uses partition numbering beginning with zero. So, if you have a separate /boot partition, these lines should instead show something like:

root (hd0,4)

(The same information we used while working with grub interactively earlier.) Change this both for the Automagic defaults as well as for each alternative, including the memtest option.


c) An additional edit is required IF you are using a separate /boot partition. The path pointing to the linux root must be RELATIVE to the grub "root" (your /boot). So if you are using a separate boot partition, the paths in grub's menu.lst file that help grub locate the linux kernel and initrd will not begin with "/boot", and you should delete that portion of the path. For example, update-grub initially spat out this:


title Ubuntu, kernel 2.6.15-23-amd64-k8
root (hd0,0)
kernel /boot/vmlinuz-2.6.15-23-amd64-k8 root= ro quiet splash
initrd /boot/initrd.img-2.6.15-23-amd64-k8
savedefault
boot
... and because I have a separate boot partition and opted not to use a grub splash image (which you can learn about elsewhere), my editing looked like this...

title Ubuntu, kernel 2.6.15-23-amd64-k8
root (hd0,4)
kernel /vmlinuz-2.6.15-23-amd64-k8 root=/dev/mapper/via_hfciifae7 ro quiet
initrd /initrd.img-2.6.15-23-amd64-k8
savedefault
boot
NOTE that I removed "savedefault". If you leave this in, you will get a "file not found" error when you try to boot (you also can't use default=saved up top as it shows in the example). Again, if you are not using a separate boot partition, you can leave /boot in the paths.

d) To add a static boot stanza for Windows, you can use and change the example in the menu.lst file or the following:

title Windows XP
rootnoverify (hd0,0)
chainloader +1
Put it at the bottom, below where it says ### END DEBIAN AUTOMAGIC KERNELS LIST. Or, if for some unforgivable reason you want your computer to boot Windows by default, you can put it up front, above where it says

### BEGIN DEBIAN AUTOMAGIC KERNELS LIST

e) Close the gaping security hole! First, set a password where the example shows it. This will be required for any locked menu entries, for the ability to edit the bootlines, or to drop to a command prompt. To do this, in the console type:

grub-md5-crypt


When it prompts you "Password:", it's asking what you want to be the GRUB password (not your user password, the root password, or anything else). You will be prompted to enter it twice, then it will spit out the MD5 hash that you need to paste into menu.lst. This line should end up looking something like:

password --md5 $1$gLhU0/$aW78kHK1QfV3P2b2znUoe/

Then, to keep your "recovery mode" boot alternative(s) locked each time update-grub runs, set lockalternative=true.


Unless you do this, anybody will be able to seize root simply by rebooting your computer (e.g., cutting power to it) and selecting your "recovery mode" menu entry when it reboots, or editing the normal bootline to include 'single' mode.

f) Test the automagic kernels settings (this also completes the locking of alternatives). It is better to find errors now than a month from now, when you've forgotten all this stuff and the kernel gets updated:
- first, make a backup of menu.lst
- then run update-grub again
- watch for errors and re-examine menu.lst for discrepancies
- correct as needed
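The backup-and-compare step can be sketched as below. It operates on a scratch copy so it is safe to run as-is; on the real system the file is /boot/grub/menu.lst and the command in the middle is update-grub:

```shell
# Scratch stand-in for /boot/grub/menu.lst
MENU=$(mktemp)
printf '# kopt=root=/dev/mapper/via_hfciifae7 ro\n' > "$MENU"

# 1. back up the menu before regenerating it
cp "$MENU" "$MENU.bak"

# 2. ...run update-grub here on the real system...

# 3. compare; any unexpected reversion of your edits shows up in the diff
diff -u "$MENU.bak" "$MENU" && echo "menu.lst unchanged"
```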

Reconfiguring the Initramfs for RAID (Ubuntu 5.10)
Reminder: Sections marked Ubuntu 5.10 should be skipped if you are installing Ubuntu 6.06.

In recent years there has been a trend to try and pull a bunch of code out of the kernel and into EarlyUserspace. This includes stuff like nfsroot configuration, md software RAID, lvm, conventional partition/disklabel support, and so on. Early user space is set up in the form of an initramfs which the boot loader loads with the kernel, and this contains user mode utilities to detect and configure the hardware, mount the correct root device, and boot the rest of the system.


Hardware fakeRAID falls into this category of operation. A device driver in the kernel called device mapper is configured by user mode utilities to access software RAIDs and partitions. If you want to be able to use a fakeRAID for your root filesystem, your initramfs must be configured to detect the fakeRAID and configure the kernel mapper to access it.


So we need to add dmraid to the initramfs. Debian and Ubuntu support this by way of a set of shell scripts and configuration files placed in /etc/mkinitramfs/. We must tailor these to include dmraid by plugging in two simple scripts and adding a one-line entry to a configuration file. The only real challenge here is to make sure you don't inadvertently screw up the syntax with a typo.


Note that in Ubuntu 6.06, this is taken care of by the dmraid package itself.

Configuring mkinitramfs in Ubuntu 5.10 (Breezy Badger)
First, create a new file as /etc/mkinitramfs/scripts/local-top/dmraid.

(If you are lazy or don't like to keyboard, you can open this how-to in the browser and copy the text.)

#!/bin/sh

PREREQ="udev"

prereqs()
{
echo "$PREREQ"
}

case $1 in
# get pre-requisites
prereqs)
prereqs
exit 0
;;
esac

modprobe -q sata_nv   # replace sata_nv with the driver module for your SATA controller
modprobe -q dm-mod

# Uncomment the next line if you are using RAID-1 (mirror)
# modprobe -q dm-mirror

/sbin/dmraid -ay
Second, create another new file as /etc/mkinitramfs/hooks/dmraid.

(Again for the lazy, you can copy it from your browser. Also, it's only slightly different, so if you are manually typing it for some reason, you may want to start with a copy of the first script.)

#!/bin/sh

PREREQ=""

prereqs()
{
echo "$PREREQ"
}

case $1 in
# get pre-requisites
prereqs)
prereqs
exit 0
;;
esac

. /usr/share/initramfs-tools/hook-functions

copy_exec /sbin/dmraid /sbin

exit 0
Third, mark both of these new initramfs scripts as executable:

chmod +x /etc/mkinitramfs/hooks/dmraid
chmod +x /etc/mkinitramfs/scripts/local-top/dmraid
Last, add the line dm-mod to the file /etc/mkinitramfs/modules. Make sure the file ends with a newline. If you use a RAID-1 (mirror), include dm-mirror as well.
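An idempotent way to do that last edit, guarding both the trailing newline and against duplicate entries, is sketched below; MODULES points at a scratch file here, but on a real Breezy system it would be /etc/mkinitramfs/modules:

```shell
# Scratch stand-in for /etc/mkinitramfs/modules
MODULES=$(mktemp)
printf 'dm-crypt' > "$MODULES"   # pretend the existing file lacks a trailing newline

# Make sure the file ends with a newline before appending anything
if [ -s "$MODULES" ] && [ -n "$(tail -c1 "$MODULES")" ]; then
    echo >> "$MODULES"
fi

# Append dm-mod only if it is not already listed
grep -qxF 'dm-mod' "$MODULES" || echo 'dm-mod' >> "$MODULES"
cat "$MODULES"
```

If you use RAID-1, repeat the grep/echo pair for dm-mirror.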

Updating the initrd (Ubuntu 5.10)
Now the big moment -- use update-initramfs to update the initrd file. Below, I show the kernel I installed at the time, but the text following "img-" and following "-c -k " must reflect the version YOU are using (e.g., "2.6.12-10-amd64-k8-smp" or whatever).

Two commands:

rm /boot/initrd.img-2.6.12-9-k7
update-initramfs -c -k 2.6.12-9-k7
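A typo in the version string is the most common failure here; deriving it from uname -r sidesteps that, assuming you are rebuilding the initrd for the currently running kernel. The commands are only echoed in this sketch so it is safe to run anywhere:

```shell
# The initrd name must match the installed kernel version exactly
KVER=$(uname -r)
echo "rm /boot/initrd.img-$KVER"
echo "update-initramfs -c -k $KVER"
```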
Now you are ready to set up the new system.

Preconfiguring the New System as Usual
Ensure that you are still operating as root within the new (temporary) system (i.e., your prompt will be root@ubuntu#). If not, chroot /target again: sudo chroot /target

(The process from here forward is the same as any bootstrap / network installation, and there are other sources to refer to for more detail.)

UBUNTU 5.10: Enter the command base-config new to configure system defaults.

**UBUNTU 6.06: base-config is deprecated in Dapper Drake. The correct procedure needs to be inserted here. Theoretically, one could do what base-config does manually.**

While it is not absolutely necessary, it may be useful to also copy the live hosts and interfaces files into your temporary system before rebooting (after exiting your chroot):

cp /etc/hosts /target/etc/hosts
cp /etc/network/interfaces /target/etc/network/interfaces

It will also be helpful to configure your fstab file at this point. One easy way to do this is:

cat /etc/mtab
(select and copy everything)

nano /target/etc/fstab
(paste everything)

Then delete everything except the proc line and the lines that refer to your RAID partitions. It might end up something like this (yours will vary -- people asked for examples):

#FileSystem MountPoint Type Options Dump/Pass

proc /proc proc rw 0 0
/dev/mapper/via_hfciifae5 /boot ext3 defaults 0 2
/dev/mapper/via_hfciifae7 / reiserfs notail,noatime 0 1
/dev/mapper/via_hfciifae6 none swap sw 0 0
or

#[fs ] [fs_mount][fs_type][ fs_opts ][dmp][pass]
/dev/mapper/nvidia_bggfdgec2 /boot ext3 defaults 0 1
/dev/mapper/nvidia_bggfdgec3 none swap sw 0 0
proc /proc proc rw 0 0
Finally, you are ready to reboot. The first time, select the "recovery mode" option. When it asks, you want to "perform maintenance". Set the root password:

