CoreOS playground through PXE – Network Set Up [2/5]

In the previous installment we outlined the idea of setting up a PXE server to boot CoreOS. In this post, I’ll show you how to set up the networking on the PXE server.

Playground Virtual Machine Set Up

Create a new machine in VirtualBox with two network cards: one of type NAT, the other attached to an Internal Network called “coreosnet”. The first will provide Internet access to the playground server, while the latter is the network the CoreOS machines will be connected to.
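If you prefer the command line, the same machine can be created with VBoxManage. Here is a minimal sketch, assuming the VM is named “playground”; memory and CPU values are arbitrary, and the disk plus the Lubuntu ISO can be attached from the GUI as usual.

# create the VM and register it with VirtualBox (names/sizes are just examples)
VBoxManage createvm --name playground --ostype Ubuntu_64 --register
VBoxManage modifyvm playground --memory 1024 --cpus 1
# NIC 1: NAT for Internet access; NIC 2: internal network "coreosnet"
VBoxManage modifyvm playground --nic1 nat --nic2 intnet --intnet2 coreosnet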

Lubuntu Installation

Download the Lubuntu ISO from https://help.ubuntu.com/community/Lubuntu/GetLubuntu, and use it to install the OS on the “playground” machine.

VirtualBox Guest Additions

It’s always nice to be able to copy and paste code snippets back and forth between the guest and the host, so it’s worth installing the VirtualBox Guest Additions. Start the PXE server, open a terminal, and install all the latest updates plus dkms.

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install dkms

After that you can install the VirtualBox Guest Additions through the virtual machine’s “Insert Guest Additions CD image” menu entry and reboot the system. After the reboot, log in and double-check that the clipboard settings are enabled. You should now have a completely usable Lubuntu virtual machine.
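If the installer does not start on its own once the CD image is inserted, it can be run by hand. A quick sketch, assuming the CD shows up as /dev/cdrom:

# mount the Guest Additions CD and launch the installer manually
sudo mkdir -p /media/cdrom
sudo mount /dev/cdrom /media/cdrom
sudo sh /media/cdrom/VBoxLinuxAdditions.run
sudo reboot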

Programs And Services

Let’s install some tools that will be needed to set up and test the PXE server.

sudo apt-get install tftpd-hpa tftp-hpa isc-dhcp-server apache2 curl
  • tftpd-hpa: the TFTP daemon, needed to serve the boot files to the CoreOS nodes so they can LAN boot;
  • tftp-hpa: a tftp client, used to check if the tftp daemon has been correctly set up;
  • isc-dhcp-server: DHCP server to let the CoreOS nodes configure their NICs;
  • apache2: to provide the config file through HTTP;
  • curl: to run a check on the config file being published.
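Once the packages are installed, a quick status check confirms the daemons are in place. Don’t worry if isc-dhcp-server is not running yet: it usually refuses to start until it gets a proper configuration, which we’ll write in a later post.

sudo service tftpd-hpa status
sudo service isc-dhcp-server status
sudo service apache2 status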

Network Configuration

First of all, let’s check both network cards. The “ip” command lists the existing network interfaces: in this case lo, eth0 and eth1.

$ ip link list
 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
 link/ether 08:00:27:a9:87:6d brd ff:ff:ff:ff:ff:ff
 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
 link/ether 08:00:27:44:04:9f brd ff:ff:ff:ff:ff:ff

With “ip” it is also possible to discover which interfaces are already configured:

$ ip -4 address list
 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
 inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
 inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
 valid_lft 85461sec preferred_lft 85461sec

It is easy to see that eth1 does not have any IP address assigned: it is therefore the card connected to the internal network. We need to assign a static address to it.

cat > /tmp/interfaces <<END
# loopback
auto lo
iface lo inet loopback
# eth1
auto eth1
iface eth1 inet static
address 192.168.77.1
# no gateway for eth1 because it does not forward packets outside the network
netmask 255.255.255.0
network 192.168.77.0
broadcast 192.168.77.255
END
sudo cp /etc/network/interfaces /etc/network/interfaces.$(date --iso-8601=seconds)
sudo mv /tmp/interfaces /etc/network/interfaces
sudo /etc/init.d/networking restart

To check that the network card has been correctly configured, just run the ip command again; it should now show an IP for eth1 too.

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
 inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
 inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
 valid_lft 84732sec preferred_lft 84732sec
 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
 inet 192.168.77.1/24 brd 192.168.77.255 scope global eth1
 valid_lft forever preferred_lft forever

It’s now time to enable IP forwarding, so that the CoreOS network gets Internet access.

sudo cp /etc/sysctl.conf /etc/sysctl.conf.$(date --iso-8601=seconds)
sudo cp /etc/sysctl.conf /tmp/sysctl.conf
echo 'net.ipv4.ip_forward=1' >> /tmp/sysctl.conf
sudo cp /tmp/sysctl.conf /etc/sysctl.conf
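
The new value in /etc/sysctl.conf is only read at boot, so apply it right away and verify that forwarding is actually on:

# reload /etc/sysctl.conf and check the result (should print 1)
sudo sysctl -p
cat /proc/sys/net/ipv4/ip_forward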

Finally, enable NAT and forwarding between the two interfaces; the last three commands simply list the resulting rules, for a final check.

sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sudo iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
sudo iptables -t filter -L
sudo iptables -t nat -L
sudo iptables -t mangle -L

This should be enough to enable the PXE server to provide internet access to the CoreOS nodes we will boot on the dedicated network.
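
Keep in mind that iptables rules set this way are lost at reboot. One possible way to make them persistent, assuming the iptables-persistent package (which restores /etc/iptables/rules.v4 at boot), is:

# save the current rules so they are restored at boot
sudo apt-get install iptables-persistent
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'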

Stay tuned for the next episode, where we’ll set up all the services needed to boot through PXE.

CoreOS Playground With PXE – Introduction [1/5]

CoreOS is an open-source, lightweight operating system based on the Linux kernel, designed to provide infrastructure for clustered deployments, and it supports Docker out of the box.

Wouldn’t it be great to have a CoreOS playground to experiment with all the mighty stuff this OS promises? Well, it would, at least for me, and here I’d like to share with you how to create such an environment.

The main objective is to come up with a system running on a single PC that allows a developer to easily run experiments involving CoreOS. Below is a list of things I would like to be able to accomplish with this CoreOS playground.

  • Quickly set up a properly configured CoreOS machine and run Docker containers on it.
  • Quickly bring up additional CoreOS machines to test what happens when a node is added to the cluster.
  • Easily discard an entire cluster to try a new configuration.

To do that, I’m going to set up a virtual machine not powered by CoreOS. It will act as a router connecting a NAT network on one side, to provide Internet access, and a CoreOS network on the other.

[Figure: coreos-playground-network]

On top of that, this machine will provide DHCP and serve CoreOS via PXE, so that CoreOS is installed automatically over the network. I’ll also set up an empty virtual machine properly configured for LAN boot; this way, adding a CoreOS node to the cluster will be just a matter of cloning and starting the template machine. As you probably know, a CoreOS installation can be configured by providing a reference to a “cloud-config” file, so I’ll set up Apache on the router to serve that file to the starting CoreOS nodes over HTTP. The following diagram depicts the boot process from a logical point of view.

[Figure: coreos-playground-pxe]

I’m going to use VirtualBox to run the machines and Lubuntu for the playground server. That said, I’m trying to stick to the command line as much as possible, so everything should be reasonably adaptable to other environments too.

This is just the first installment of the series. If you are interested, stay tuned for the next posts, where I’ll show you how to implement the idea presented here.