
Category: LINUX

Posted: 2012-02-06 16:26:56

EC2 is already a virtualized environment, which means it’s nearly impossible to run your own virtualization (KVM/VirtualBox/qemu) inside it. However, Linux recently introduced a new system into the kernel, called cgroups, which provides a way to isolate groups of processes from one another in the kernel. A project soon formed around this new technology that allows for very thin, fast, and secure quasi-virtualization. It’s called LXC, short for LinuX Containers. And it works perfectly in EC2.

Here’s how.

You’ll want a recent Linux AMI (preferably kernel 2.6.35 or higher). I use Ubuntu Server 11.04, and the following instructions are written for that OS. I can’t vouch for other distros, though the instructions should port easily. Ubuntu is a good fit for EC2 and LXC because there are already nice pre-made AMI images, the kernel supports LXC out of the box, and the software repositories are hosted in the EC2 cloud, which makes for extremely fast system updates. Also, any instance type works; even a t1.micro (micro instance) will suffice (my weapon of choice for testing purposes).

Start by SSH-ing into your EC2 server. You’ll need to run almost all of the following instructions as root, so let’s do:

sudo -i

to become root. Otherwise, you can prepend ‘sudo’ to the beginning of every command from now on (unless specified otherwise).

Now, we need to install a few packages:

apt-get update && apt-get install lxc debootstrap bridge-utils dnsmasq

From the packages you installed: lxc is, well, LXC. debootstrap is a utility that creates a minimal Ubuntu install within a directory (which we will do shortly). bridge-utils is a suite of utilities for creating network bridges in Linux, which we will use to give the container network access. dnsmasq is a DNS/DHCP server which will let the container(s) identify themselves on the local network.

Now run lxc-checkconfig and make sure that the tests pass (all of them should, if you’re using Ubuntu Server 11.04).

NOTE: THIS IS IMPORTANT! Keep in mind that the effects of most of the commands from here on out (specifically iptables, sysctl, mount, brctl, and any edits to /etc/resolv.conf) will not persist across a reboot, even on an EBS-backed instance. These are in-memory changes which go away as soon as you shut down the machine. If you bring the instance back up, you’ll need to run them again, otherwise things will be broken! There are several ways around this: iptables rules and /etc/resolv.conf can be set by an init script, sysctl settings can go in sysctl.conf, mounts can be specified in /etc/fstab, and the bridge can be defined in /etc/network/interfaces (add the br0 interface). However, for the purposes of this guide (I don’t use EBS-backed instances, personally), we’ll assume instance storage (config is lost on reboot).
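If you are on an EBS-backed instance and want to keep things simple, one option is to replay the volatile settings from /etc/rc.local at boot. Here is a minimal sketch, assuming the br0 bridge and 192.168.3.* subnet used throughout this guide:

```shell
#!/bin/sh -e
# /etc/rc.local sketch: replay the non-persistent settings from this guide
# at boot. Assumes the br0 bridge and 192.168.3.x subnet used below.

mkdir -p /cgroup
mount -t cgroup none /cgroup || true   # cgroup hierarchy required by LXC

brctl addbr br0 || true                # bridge for the containers
brctl setfd br0 0
ifconfig br0 192.168.3.1 up

sysctl -w net.ipv4.ip_forward=1        # let container traffic be routed out
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

exit 0
```

This is just one way to do it; the init-script/sysctl.conf/fstab/interfaces approach described above is the more idiomatic Ubuntu setup.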

We need to create a place on the system to hold cgroup information (required for LXC to work). I use /cgroup for simplicity. Let’s mount a cgroup environment there.

mkdir /cgroup
mount -t cgroup none /cgroup

(If you want to keep cgroup mounted between reboots, do: echo "none /cgroup cgroup defaults 0 0" >> /etc/fstab instead.)

Now, let’s create a network bridge for the containers to be able to connect to the network/Internet. Simply run:

brctl addbr br0
brctl setfd br0 0
ifconfig br0 192.168.3.1 up

In the third command, we use 192.168.3.* as the container network for the purposes of this tutorial. If you’d like to use another subnet (192.168.x.*), you’re free to do so, but be sure to change every instance of 3.* in this article to your choice, because those IPs are referenced many times in the configuration. We recommend keeping the IPs suggested here for simplicity.

Now we need to set up a few system rules for the containers to be able to reach the Internet:

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sysctl -w net.ipv4.ip_forward=1

Let’s set up DHCP/DNS on our new bridge. Open up /etc/dnsmasq.conf for editing (vim/nano/ed/cat, your choice). Uncomment the necessary lines so that the conf file looks like the following:

domain-needed
bogus-priv
interface=br0
listen-address=127.0.0.1
listen-address=192.168.3.1
expand-hosts
domain=containers
dhcp-range=192.168.3.50,192.168.3.200,1h

Now, you’ll need to edit /etc/dhcp3/dhclient.conf for DNS to properly resolve locally. Add the following lines to the beginning:

prepend domain-name-servers 127.0.0.1;
prepend domain-search "containers.";

(Don’t forget the dot after containers, that’s not a typo!)

Now we need to renew our DHCP lease so that dhclient will regenerate /etc/resolv.conf.

dhclient3 -e IF_METRIC=100 -pf /var/run/dhclient.eth0.pid -lf /var/lib/dhcp3/dhclient.eth0.leases eth0

Now, let’s restart dnsmasq so it’ll re-read the new configuration.

service dnsmasq restart

Next, we need to create the environment inside the container. There’s a script that comes with lxc called lxc-ubuntu, which will set up the container. However, it’ll require a bit of tweaking for the environment to work. I’ve done the tweaking for you, and put the new script up, so simply run:

wget -O lxc-ubuntu
chmod +x lxc-ubuntu

(If you’d like to do the tweaking yourself, pull the script from /usr/lib/lxc/templates/lxc-ubuntu, change the hostname, change the mirror to match EC2’s Ubuntu mirror, fix the sshd runlevel script, and change LXC’s networking config to DHCP over veth (check my script to see how it’s set up).)

Now, let’s create a new container:

./lxc-ubuntu -p /mnt/vm0 -n vm0

Wait a while for the script to finish, and your container is set up in /mnt/vm0. Let’s try it out!

lxc-start -n vm0

Type in root for the username and root for the password. Try pinging Google:

ping google.com

If it works, your Internet is set up! Now let’s try another thing (make sure you run this from the VM, not from the host!!):

poweroff (this shuts down the VM, and puts you in the host again)
lxc-start -n vm0 -d (this runs the VM in daemon mode)

To check if a VM is running, type:

lxc-info -n vm0

(it should say RUNNING). To test the network, try pinging the VM (this might not work right away; you might have to wait up to a minute):

ping vm0
ssh root@vm0

If those two work, the VM is now in your DNS and you can address it by its hostname. Cool, huh?

Creating a new VM

Creating another VM is as simple as:

./lxc-ubuntu -n vm1 -p /mnt/vm1

The packages won’t be redownloaded, and the command should complete quickly.

Clone existing VM

If you want to clone your existing VM, you’ll need to do a few things:

cp -r /mnt/vm0 /mnt/vm1

Now edit /mnt/vm1/config and replace all references of vm0 to vm1. Do the same with /mnt/vm1/fstab. Then go into /mnt/vm1/rootfs/etc/hostname and replace the hostname with vm1. Finally, run:

lxc-create -n vm1 -f /mnt/vm1/config
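The clone steps above can be scripted. A minimal sketch, using the vm0/vm1 names from this guide (adjust to taste):

```shell
#!/bin/sh -e
# Sketch: clone container vm0 into vm1, following the manual steps above.
SRC=vm0
DST=vm1

cp -r /mnt/$SRC /mnt/$DST
# Replace all references to the old name in the config and fstab.
sed -i "s/$SRC/$DST/g" /mnt/$DST/config /mnt/$DST/fstab
# Give the clone its own hostname.
echo "$DST" > /mnt/$DST/rootfs/etc/hostname
# Register the new container with LXC.
lxc-create -n $DST -f /mnt/$DST/config
```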

Upon starting the VM, you should be able to ping it/ssh to it:

ping vm1
ssh root@vm1

If not, lxc-console into the VM and check your connection. Keep in mind you only need one br0 for all your instances, but you can create many, if you so desire.

Running services inside the container

You may want web servers to be accessible from outside the VM (from the rest of EC2, or outside EC2). To do this, you’ll need to port forward from the host to the VM. Simply run:

iptables -t nat -A PREROUTING -p tcp --dport <host port> -j DNAT --to-destination <VM IP>:<VM port>
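For example, to expose a web server listening on port 80 inside a VM on the host’s port 8080 (192.168.3.50 is a hypothetical lease from the dnsmasq range configured earlier; check the VM’s actual IP first):

```shell
# Hypothetical example: forward host port 8080 to port 80 on VM 192.168.3.50.
iptables -t nat -A PREROUTING -p tcp --dport 8080 \
  -j DNAT --to-destination 192.168.3.50:80
```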

Hibernating a container

To ‘hibernate’ a container (pause all of the VM’s running processes so they can be resumed instantly later), do:

lxc-freeze -n vm0

and later,

lxc-unfreeze -n vm0

to restore.

Installing additional packages into the container

Your container is just like any other Ubuntu system. Therefore,

apt-get update
apt-get install <package>

works great.

Setting resource limits

One of the benefits of LXC is that you can limit resource usage per-container. Let’s delve into the various resources you can limit:

CPU

There are two ways of limiting CPU in LXC. On a multi-core system, you can assign specific CPUs to different containers, like so (add this line to your container’s config file, /mnt/vm0/config or similar):

lxc.cgroup.cpuset.cpus = 0 (assigns the first CPU to the container)
or
lxc.cgroup.cpuset.cpus = 0,2,3 (assigns the first, third, and fourth CPU to the container)

The alternative (this one makes more sense to me) is to use the scheduler. Share values let you say ‘I want this container to get 3 times the CPU of that container’. For example, add:

lxc.cgroup.cpu.shares = 2048

to the config to give a container double the default (1024).

RAM

To limit RAM, simply set:

lxc.cgroup.memory.limit_in_bytes = 256M

(replacing 256M with however much RAM you want to allow).

To limit swap, set:

lxc.cgroup.memory.memsw.limit_in_bytes = 1G
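Putting it together, the resource-limit section of a container’s config might look like this (the values are illustrative, not recommendations):

```
# /mnt/vm0/config (excerpt): illustrative resource caps
lxc.cgroup.cpuset.cpus = 0,1                 # pin to the first two CPUs
lxc.cgroup.cpu.shares = 2048                 # 2x the default scheduler weight
lxc.cgroup.memory.limit_in_bytes = 256M      # RAM cap
lxc.cgroup.memory.memsw.limit_in_bytes = 1G  # RAM + swap cap
```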

Hard Disk

Well, there’s no official way to do this; it’s up to you. You can use LVM (in EC2? fun), or you can create a filesystem in a file (something like dd if=/dev/zero of=somefile.img bs=1M count=4096 && mkfs.ext3 -F somefile.img && mount -o loop somefile.img /mnt/vm0/rootfs) and mount it at /mnt/vm0/rootfs to limit space.
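Spelled out step by step, the filesystem-in-a-file approach might look like this (paths and size are illustrative; do this before populating the container’s rootfs):

```shell
# Create a 4 GB image file, format it, and mount it as the container rootfs.
dd if=/dev/zero of=/mnt/vm0.img bs=1M count=4096
mkfs.ext3 -F /mnt/vm0.img      # -F: skip the "not a block device" prompt
mkdir -p /mnt/vm0/rootfs
mount -o loop /mnt/vm0.img /mnt/vm0/rootfs
```

The container can then fill the image file, but never more than its 4 GB.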

Network Bandwidth

To limit network bandwidth per container, you'll want to use the tc utility. Keep in mind you’ll need to use separate bridges (br0, br1) for each container if you go this route. Don’t forget to edit the config of each VM to match your new bridge if you do so. 
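A minimal sketch using tc’s token bucket filter, assuming the container hangs off its own bridge br1 (the rate and buffer numbers are illustrative):

```shell
# Cap traffic through br1 at roughly 1 Mbit/s (illustrative numbers).
tc qdisc add dev br1 root tbf rate 1mbit burst 32kbit latency 400ms
```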

Wrap-up

I hope I’ve covered most basic aspects of using LXC in the EC2 environment. LXC is an amazing technology, and the possibilities of what you can do with it are endless.

Some further reading: IBM’s tutorial on LXC.

NOTE: When following other guides on LXC, be very careful about messing with the network in the EC2 environment (restarting networking services or altering /etc/network/interfaces on the host), as careless network reconfiguration may drop the connection between you and your instance (you’ll lose SSH), locking you out of it completely. The instructions I’ve provided here have been tested and will not drop your EC2 connection.

Stackato makes heavy use of LXC to provide a secure, isolated environment for your apps (more on this soon). Check it out today and see how easy it can be to get your apps up and running in the cloud with Stackato!

Reposted from: http://www.activestate.com/blog/2011/10/virtualization-ec2-cloud-using-lxc
