Viper in OpenVZ CT
This is a practical guide to setting up OpenVZ and creating a container (CT). In it, you can then install Viper as usual, as documented in Setting up a Viper server.
Note that we use the term "host" to refer to any host, physical or virtual. Where the distinction matters, we use "physical host" or "CT0" for the physical host, and "container", "CT", "CT110" or similar for the containers.
Physical host setup
Let's first install the vz-enabled kernel, after which you should reboot into it (make sure you select the openvz entry in the bootloader menu; it does not always become the default kernel).
apt-get install linux-image-openvz-686
After vz support is installed and you have rebooted into the new kernel, we can continue.
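To confirm that the OpenVZ kernel is actually running, check the kernel release string; the exact version will differ on your system, but it should contain "openvz":

uname -r
# e.g. 2.6.26-2-openvz-686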
First, define and load the necessary sysctl parameters:
sudo sh -c 'echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf'
sudo sh -c 'echo "net.ipv4.conf.eth0.proxy_arp=1" >> /etc/sysctl.conf'
sudo sysctl -p
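You can verify that the settings took effect:

sysctl net.ipv4.ip_forward
# net.ipv4.ip_forward = 1
sysctl net.ipv4.conf.eth0.proxy_arp
# net.ipv4.conf.eth0.proxy_arp = 1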
vzctl
We need vzctl version >= 3.0.23, which is currently in Debian testing or unstable. If dpkg -l vzctl shows a lower version, do the following (sketched as commands below the list):
1) Add Debian testing to sources.list
2) Run apt-get update
3) Run apt-get install vzctl
4) Remove testing from sources.list
5) Run apt-get update
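Sketched as commands (the mirror URL is just the one used elsewhere in this guide; the sed line assumes this is the only "testing" entry you added):

echo 'deb http://ftp.us.debian.org/debian testing main' >> /etc/apt/sources.list
apt-get update
apt-get install vzctl
sed -i '/debian testing main/d' /etc/apt/sources.list
apt-get update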
Installing the container
Now let's debootstrap the Debian installation for container 110, apply the necessary vz configuration, and add an SSH key so that we can later ssh in without a password:
debootstrap --arch i386 lenny /var/lib/vz/private/110 \
  http://ftp.us.debian.org/debian

cd /etc/vz/conf
wget 'http://git.openvz.org/?p=vzctl;a=blob_plain;f=etc/conf/ve-unlimited.conf-sample' \
  -O ve-unlimited.conf-sample

vzctl set 110 --applyconfig unlimited --save
vzctl set 110 --diskspace unlimited:unlimited --save
vzctl set 110 --diskinodes unlimited:unlimited --save
echo "OSTEMPLATE=debian-4.0" >> /etc/vz/conf/110.conf

ln -sf /var/lib/vz/private/110 /viper
sed -i -e '/getty/d' /viper/etc/inittab
echo viper > /viper/etc/hostname
echo 127.0.0.1 localhost.localdomain localhost >> /viper/etc/hosts
echo 10.0.1.1 viper >> /viper/etc/hosts

test -r ~/.ssh/id_dsa.pub || ssh-keygen -t dsa
mkdir -p /viper/root/.ssh
cp ~/.ssh/id_dsa.pub /viper/root/.ssh/authorized_keys
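As a quick sanity check that the key landed in the container tree:

cat /viper/root/.ssh/authorized_keys
# should print the public key you just copied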
Now, to be able to run the DHCP server and have a real ethX interface within the CT, we need to use the 'veth' interface type instead of OpenVZ's default 'venet'.
Veth devices are added using vzctl --netif_add; the following will create devices eth0 and eth1 within the CT. (On the host node, you'll see them as veth110.0 and veth110.1.)
apt-get install bridge-utils
vzctl set 110 --netif_add eth0,,,,vzbr0 --save
vzctl set 110 --netif_add eth1,,,,vzbr1 --save
(If you get an "Invalid syntax" error on the above then, as noted earlier, you need to upgrade vzctl to at least 3.0.23.)
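If the commands succeeded, vzctl has recorded the interfaces in the container's config file; you can check with the following (the MAC addresses and exact fields in your output will differ):

grep NETIF /etc/vz/conf/110.conf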
Bridging setup on the physical host
The ethX interfaces we have registered with the CT are standalone devices that are not linked to the host's ethX. To make this link, we'll use bridging.
Simply put, a bridge is a device to which you add the devices you want bridged. In our setup, the host's eth0 and eth1 interfaces will become part of the bridges vzbr0 and vzbr1. The container's devices (visible as veth110.0 and veth110.1 on the physical host) exist only while the CT is running, so the vznet.conf script will take care of adding them to and removing them from the bridges as the CT is started or stopped.
As the host's eth0 and eth1 interfaces become part of the bridges vzbr0 and vzbr1, you should remove any eth0 and eth1 configuration from /etc/network/interfaces and add the bridge configuration below. Remember that from now on the bridges take over eth0/eth1: whatever network operation you want to perform on the host, do it on vzbr0/vzbr1 instead of the ethX devices, and you'll get the expected results.
Here is the needed bridge configuration in /etc/network/interfaces (the listing assumes you have two physical interfaces on the host, eth0 and eth1, where eth0 was a standard DHCP-configured interface and eth1 was the device you want to use for Viper and DHCP):
auto vzbr0
iface vzbr0 inet dhcp
    pre-up ifconfig eth0 up
    pre-up brctl addbr vzbr0
    pre-up brctl addif vzbr0 eth0
    post-down ifconfig eth0 down
    post-down brctl delif vzbr0 eth0
    post-down brctl delbr vzbr0

auto vzbr1
iface vzbr1 inet static
    address 10.0.1.254
    netmask 255.255.255.0
    pre-up ifconfig eth1 up
    pre-up brctl addbr vzbr1
    pre-up brctl addif vzbr1 eth1
    post-down ifconfig eth1 down
    post-down brctl delif vzbr1 eth1
    post-down brctl delbr vzbr1
Apply the new host interface configuration like this (execute this ONLY locally; DO NOT run it over SSH, as it will kill your connection and leave the machine in an unconfigured state):
ifconfig eth0 0
ifconfig eth1 0
ifup vzbr0
ifup vzbr1
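You can inspect the result with brctl; at this point the output should look roughly like this (bridge IDs are system-specific, and the veth110.* devices will only appear once the CT is started):

brctl show
# bridge name     bridge id               STP enabled     interfaces
# vzbr0           8000.xxxxxxxxxxxx       no              eth0
# vzbr1           8000.xxxxxxxxxxxx       no              eth1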
(Note that bridges autodetect network topology changes, and this takes a couple of seconds. When vzbr0 is configured, its DHCP client will start sending packets before the bridge has adjusted itself. This is fine and has no unwanted effects, except that DHCP will take a bit longer to obtain an address.)
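If the delay bothers you, you can shorten the bridge forwarding delay; this is an optional tweak, not required for the setup:

brctl setfd vzbr0 2
brctl setfd vzbr1 2

(To make this persistent, the same commands could also be added as pre-up lines in the bridge stanzas above, after the brctl addbr lines.)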
Now configure OpenVZ to run the script that automatically changes the veth devices in the bridges as containers are started or stopped.
test -r /etc/vz/vznet.conf || echo '#!/bin/bash' > /etc/vz/vznet.conf
echo 'EXTERNAL_SCRIPT="/usr/sbin/vznetaddbr"' >> /etc/vz/vznet.conf
chmod 755 /etc/vz/vznet.conf
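Assuming the file did not already exist, /etc/vz/vznet.conf should now contain exactly these two lines:

#!/bin/bash
EXTERNAL_SCRIPT="/usr/sbin/vznetaddbr"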
Starting the container and configuring the installation in the CT
Start the container and enter it from the host:
vzctl start 110
vzctl enter 110
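Before entering (or from a second host shell), you can confirm the container is up and that its veth devices have joined the bridges (the exact status wording varies between vzctl versions):

vzctl status 110
# 110 exist mounted running
brctl show
# veth110.0 should now be listed under vzbr0, and veth110.1 under vzbr1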
Set up /etc/network/interfaces for the container:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1 inet static
    address 10.0.1.1
    netmask 255.255.255.0
Bring up interfaces:
ifup lo
ifup eth0
ifup eth1
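At this point the CT should be able to reach the physical host over the static bridge; 10.0.1.254 is the vzbr1 address we configured on the host:

ping -c 3 10.0.1.254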
Update the package list and install a minimal set of tools:
apt-get update
apt-get install openssh-{client,server} vim less locales
dpkg-reconfigure locales tzdata
Installing Viper
We can now exit the container shell (Ctrl+d) that we entered via "vzctl enter 110" and try logging in over SSH as usual. The connection should succeed because we installed the SSH server in the container, and it should let us in without a password because we added the public key to authorized_keys:
echo 10.0.1.1 viper >> /etc/hosts
ssh viper
Just make sure you SSH as the same user whose key you copied (root or your regular user account).
After you've successfully SSHed in, you can follow the usual Viper installation instructions (Setting up a Viper server) as if it were a physical host and not a container.