Proxmox OpenVZ Server 2 NICs 2 Gateways
These are two solutions that worked for me. They are by no means the only solutions.
My Predicament.
A server (a Dell SC440) with two physical network interfaces. Our office has two ISP connections (DSL broadband). We have a single internal network range (192.168.1.0/24) with two gateways (192.168.1.254 and 192.168.1.221).
I have installed Proxmox on the server. I wanted to be able to choose which gateway the guest containers used, instead of them being locked into using the same gateway as the host server. Some of the guest OS containers will host services via the first ISP, and the others will host services via the second ISP (obviously with firewall rules etc. configured in our gateways), so the container gateway entries need to point to their respective ISP/gateway.
Note: While the security advantage of venet over veth is certainly worthwhile in a hosting environment (and others as well), in my case, we have two hardware firewalls that haven’t let us down yet, and there is little to no concern for the possibility of an internal hacker (there are only three of us!)
Solution 1. Requires the most effort, but gives the most flexibility.
I decided to configure the host with a vmbr0 device that contains a bond0 device, which in turn contains eth0 and eth1 (have a look at the video tutorials on pve.proxmox.com if you’re unsure how to create a bond0). I gave the bridge an IP address that matches our office network – this address doesn’t matter as far as the containers are concerned, it just makes the host easy to access if it’s on the same network.
Creating the network bond is optional (it only buys you redundancy); as long as you end up with an accessible vmbrX device that contains either an ethX or a bondX, you’re good to go. Bonding network devices in Linux is loosely similar to the ‘join to network bridge’ feature available in Windows 2000/XP/Vista/7 when you have two similar Ethernet network devices.
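For reference, here is roughly what the host’s /etc/network/interfaces ends up looking like with this layout. The addresses and bond mode below are only examples – use whatever the Proxmox web interface generated for your network.
# the physical NICs carry no addresses of their own
iface eth0 inet manual
iface eth1 inet manual
# bond0 groups the two physical NICs (redundancy)
auto bond0
iface bond0 inet manual
slaves eth0 eth1
bond_miimon 100
bond_mode balance-rr
# vmbr0 bridges bond0 and carries the host’s office-network address
auto vmbr0
iface vmbr0 inet static
address 192.168.1.230
netmask 255.255.255.0
gateway 192.168.1.254
bridge_ports bond0
bridge_stp off
bridge_fd 0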
When creating a new container in the web interface, instead of choosing Virtual Network (venet) choose Bridged Ethernet (veth) instead.
It has been pointed out to me that veth is not as secure as venet (which is true); however, veth does give your guest container OS direct access to the network, in a similar fashion to the way VMware Server can give a guest OS direct access to a physical network using Bridged Ethernet.
Once your guest OS container is up and running, you’ll need to use the “Open VNC Console” link on the container’s General page in the Proxmox web interface to get console access to the container, so that you can manually configure your network interfaces. Instead of configuring venet0:0 or similar devices, you’ll be back to configuring eth0 or similar devices. In CentOS, these turn out to be ifcfg-eth0 and ifcfg-lo. If you are unsure of how to do this manually, I suggest googling for some answers, or in CentOS, you can use a tool (if it’s available in your guest OS template) called system-config-network-tui. It’s available via yum if you don’t have it.
Easy way to get it
1. Create a container with CentOS
2. Assign it an IP address using Virtual Network (venet)
3. Login via the VNC console or SSH and issue: yum install system-config-network-tui
4. Logout
5. Use the Proxmox web interface to shutdown the container
6. Remove the Virtual Network Adapter (venet) by deleting the IP Address and clicking Save.
7. The page will refresh and you can now select vmbr0 from the Bridged Ethernet Devices section. Click Save.
8. Start your container
9. Login via VNC Console and issue: system-config-network-tui
10. You’re on your own from here. Remember, you need to configure eth0, not venet0 and venet0:0 – you may want to remove these. Don’t forget to set your gateway and edit your DNS (there’s a sample ifcfg-eth0 sketched just after this list)
11. Restart your network with /etc/init.d/network restart
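To give you an idea of what step 10 looks like in practice, here is a minimal sketch of an ifcfg-eth0 for a CentOS container that should go out via the second gateway. The address and gateway are only examples – point them at whichever ISP/gateway that container should use.
# /etc/sysconfig/network-scripts/ifcfg-eth0 inside the container
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.54
NETMASK=255.255.255.0
GATEWAY=192.168.1.221
DNS for the container goes into its /etc/resolv.conf as usual (or set it with system-config-network-tui).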
Solution 2. Requires less effort, doesn’t necessarily give as much flexibility, but increases security.
You can edit your /etc/network/interfaces file on the host (the server with Proxmox installed). It’s actually kinda easy. This option also lets you avoid having to edit the network settings inside your guest container OS, so you can create plenty of containers without having to edit each one! Neat!
Remember: I have two gateways (1.254 and 1.221) and my server has two network interfaces.
I could configure the two network interfaces on two different networks. For example: NIC1: 192.168.1.0/24 with a gateway of 192.168.1.254, and NIC2: 192.168.4.0/24 with a gateway of 192.168.4.254.
There is no point in configuring both interfaces with the same network, for example NIC1: 192.168.1.200 with a gateway of 192.168.1.254 and NIC2: 192.168.1.205 with a gateway of 192.168.1.221. The host (and consequently any guest container OS) will always use the gateway configured on eth0. If there is a way around this, I wasn’t able to figure it out.
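A quick way to see which gateway the host is actually using is to check its routing table, for example:
# on the host – shows the single default route in use
ip route show | grep default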
By configuring each NIC with a separate network (192.168.1.0/24 and 192.168.4.0/24), I can make a guest container OS use the gateway of the 192.168.4.0/24 network by giving it an IP address like 192.168.4.54, or use the gateway of the 192.168.1.0/24 network by giving it an IP address like 192.168.1.54.
However, we’re not done yet. Here’s how you do it. In the Web Interface, I again configured my network as follows. This could be done in a variety of ways, so long as you end up with a vmbr0.
I created a bond0, that contained eth0 and eth1. I then created a vmbr0 that contained bond0. I set the IP address of vmbr0 to 192.168.1.230 – so I could access it again after reboot. I didn’t configure IP Addresses anywhere else.
After rebooting, I edited my /etc/network/interfaces file.
On the host I issued: nano /etc/network/interfaces
(I love nano, I know it’s for noobs but I can’t get my head around vi, I’ve used nano for too long!)
At the bottom of my /etc/network/interfaces file I added
auto vmbr0:0
iface vmbr0:0 inet static
address 192.168.4.100
netmask 255.255.255.0
gateway 192.168.4.254
auto vmbr0:1
iface vmbr0:1 inet static
address 192.168.1.100
netmask 255.255.255.0
gateway 192.168.1.254
DNS can be configured individually for each container using the Proxmox web interface, or in my case of using CentOS, by editing the resolv.conf file inside my guest container OS. Do what is easiest for you. You can also use system-config-network-tui.
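If you go the resolv.conf route, it’s only a couple of lines inside the container. The nameserver addresses below are just placeholders – use whatever your ISP or gateway provides.
# /etc/resolv.conf inside the container
nameserver 192.168.1.254
nameserver 192.168.1.221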
Once I had modified my /etc/network/interfaces file, I saved and then restarted the network service by issuing: /etc/init.d/networking restart
I next created guest OS containers with IP addresses that matched the network of the gateway that I wanted to use. In this situation, the guest container OSes can also communicate with each other, even though they may be using different networks, thanks to the host. Handy, no?
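If you prefer the command line to the web interface, the same assignment can be done with vzctl on the host. The container ID (101) and the addresses below are only examples.
# put container 101 on the 192.168.4.0/24 network, so it goes out via the 192.168.4.254 gateway
vzctl set 101 --ipadd 192.168.4.54 --save
# and give it a nameserver
vzctl set 101 --nameserver 192.168.1.254 --save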
In the end, I am using Solution 1, mainly because we are not running a hosting company; we’re just running services on our network and migrating away from VMware, so configuring network settings manually isn’t a big issue in our case. In the Proxmox web interface, I added the IP address of each container into the comments section for future reference.
If I were to use Solution 2, in our case we would have to change the IP address of not only the gateway, but also of many other devices currently using the 192.168.1.0/24 network. We would inevitably have to implement a routing solution to allow nodes on each network (1.0 and 4.0) to communicate with devices on either network, as is the case with most devices communicating with each other now.
As I said above, this is not necessarily the best or only solution; it is, however, what I was able to come up with. Comments, suggestions and improvements are welcome.
If someone can help me get the security of using venet instead of veth (and avoid manually configuring the containers), so that each of my containers can use one gateway or the other, all on the one network, without having to edit the container network settings – well, that would be awesome!