Building an Enterprise SDN Fabric with Open vSwitch and Faucet
Introduction
Software-Defined Networking (SDN) fundamentally transforms datacenter architecture by decoupling the control plane from the data plane. This tutorial demonstrates how to engineer a programmable, end-to-end SDN architecture utilizing the Faucet OpenFlow Controller alongside Open vSwitch (OVS).
We will traverse the networking stack end to end: from high-level Faucet routing configuration, down through the OpenFlow flow tables, and finally to the Open vSwitch datapath where packets are actually forwarded.
Architectural Overview
Before provisioning the infrastructure, let's visualize the topology. We will configure an L2 switching fabric, route between segmented L3 networks (VLANs), and establish Access Control Lists (ACLs).
At each layer, the system components cleanly separate concerns:
1. Faucet (The Control Plane)
At the top of the topology, Faucet serves as the authoritative source of truth for all network forwarding and routing logic. While Faucet natively exports Prometheus metrics and integrates with extensive monitoring pipelines, our primary interface in this walkthrough will be its declarative faucet.yaml configuration file.
2. Open vSwitch (The Data Plane)
OpenFlow is the standardized protocol (maintained by the Open Networking Foundation) that Faucet uses to program packet-processing rules into switches. We will use tools such as ovs-ofctl and ovs-appctl to inspect the flow tables Open vSwitch maintains.
Environment Provisioning
For this deployment, we assume a modern enterprise standard operating environment:
- Operating System: Ubuntu 22.04 LTS or 24.04 LTS
Step 1: Initialize Open vSwitch
Install Open vSwitch to provide the local data plane.
sudo apt update && sudo apt install -y openvswitch-switch
Step 2: Deploy the Faucet Controller Platform
We will run Faucet from its official Docker image, mounting local directories for configuration and log persistence.
sudo mkdir -p /opt/faucet/{logs,config,scripts,images}
sudo touch /opt/faucet/config/faucet.yaml
sudo docker run -d \
--name faucet \
--restart=always \
-v /opt/faucet/config/:/etc/faucet/ \
-v /opt/faucet/logs/:/var/log/faucet/ \
-p 6653:6653 \
-p 9302:9302 \
faucet/faucet
Configuring VLANs and Inter-VLAN Routing
Layer-2 (L2) switching is the foundational substrate of modern infrastructure. We will define two distinct broadcast domains (VLAN 100 for office and VLAN 200 for guest).
Populate your /opt/faucet/config/faucet.yaml manifest with the following datapath specifications:
dps:
  switch0:
    dp_id: 0x1
    timeout: 7201
    arp_neighbor_timeout: 3600
    stack:
      priority: 1
    interfaces:
      1:
        native_vlan: office
      2:
        native_vlan: office
      3:
        native_vlan: office
      4:
        native_vlan: guest
      5:
        native_vlan: guest
vlans:
  office:
    vid: 100
    description: "Office Core Network"
    faucet_mac: "0e:00:00:00:00:01"
    faucet_vips: ['10.0.100.254/24']
  guest:
    vid: 200
    description: "Isolated Guest Network"
    faucet_mac: "0e:00:00:00:00:02"
    faucet_vips: ['10.0.200.254/24']
routers:
  router-datacenter-networks:
    vlans: [office, guest]
This declarative file defines a single switch named switch0 with datapath ID 0x1. Ports 1-3 are allocated to the office network, while ports 4-5 are isolated in guest. The routers section instructs Faucet to route between the two VLANs, using the faucet_vips addresses as the gateway on each side.
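The introduction also mentioned Access Control Lists. As a sketch only (not applied in the rest of this walkthrough), a Faucet ACL that drops IPv4 ICMP arriving on a port while permitting everything else could be added to faucet.yaml like this, then attached to an interface with acls_in (for example, acls_in: [guest-no-icmp] under port 4). The ACL name guest-no-icmp is a hypothetical example:

```yaml
# Hypothetical ACL sketch -- not used later in this tutorial.
acls:
  guest-no-icmp:
    - rule:
        dl_type: 0x0800   # match IPv4
        nw_proto: 1       # match ICMP
        actions:
          allow: 0        # drop
    - rule:
        actions:
          allow: 1        # permit all other traffic
```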
Restart the Faucet container so it picks up the new topology (Faucet can also hot-reload its configuration when sent a SIGHUP, but a container restart is simplest here):
sudo docker restart faucet
Verify successful initialization by tailing the logs (/opt/faucet/logs/faucet.log). You should observe status messages confirming IPv4 routing initialization across both VLANs.
Linking the Data Plane to the Control Plane
Faucet is now actively waiting for incoming OpenFlow TCP connections from switches claiming datapath ID 0x1. We must configure Open vSwitch to instantiate switch0 and point its control-plane management directly to Faucet.
sudo ovs-vsctl add-br switch0 \
-- set bridge switch0 other-config:datapath-id=0000000000000001 \
-- set-controller switch0 tcp:127.0.0.1:6653 \
-- set controller switch0 connection-mode=out-of-band
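The 16-digit other-config:datapath-id string is simply the dp_id from faucet.yaml zero-padded to 16 hex digits (64 bits). A quick sketch of the correspondence:

```shell
# The datapath-id OVS expects is dp_id zero-padded to 16 hex digits.
dp_id=0x1
printf '%016x\n' "$((dp_id))"   # prints 0000000000000001
```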
Spinning Up KVM Compute Instances
To test the fabric realistically, we will boot Kernel-based Virtual Machines (KVM) running CirrOS, a minimal Linux distribution widely used for OpenStack cloud testing.
First, download the CirrOS base image into our persistent /opt directory:
sudo wget -P /opt/faucet/images https://download.cirros-cloud.net/0.5.2/cirros-0.5.2-x86_64-disk.img
Bootstrapping Dynamic Network Interfaces
Instances need TAP interfaces plugged into their corresponding OVS ports. We automate this binding with qemu-ifup-style scripts passed to QEMU's script= and downscript= options.
Deploy the ifup configuration:
cat << 'EOF' | sudo tee /opt/faucet/scripts/ovs-ifup
#!/bin/bash
# Called by QEMU with the TAP interface name (e.g. switch0p1).
netdev=$1
# Character index 6 of the interface name identifies the switch.
switch="switch${netdev:6:1}"
ip link set "$netdev" up
# Request the OpenFlow port number from the final character of the name.
ovs-vsctl add-port "$switch" "$netdev" -- set Interface "$netdev" ofport_request="${netdev: -1}"
EOF
Deploy the matching ifdown teardown script:
cat << 'EOF' | sudo tee /opt/faucet/scripts/ovs-ifdown
#!/bin/bash
netdev=$1
switch="switch${netdev:6:1}"
ip addr flush dev "$netdev"
ip link set "$netdev" down
ovs-vsctl del-port "$switch" "$netdev"
EOF
Make both scripts executable:
sudo chmod +x /opt/faucet/scripts/ovs-*
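The scripts derive both the bridge name and the OpenFlow port number from bash substring expansion on the TAP name. A dry run of that parsing, assuming the switch0p1-style naming used below:

```shell
# For a TAP named switch0p1, character index 6 holds the switch number
# and the final character holds the OpenFlow port number.
netdev=switch0p1
echo "switch${netdev:6:1}"   # prints switch0
echo "${netdev: -1}"         # prints 1
```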
Launching the Compute Array
We will open four terminal sessions to launch four compute nodes. The interface naming determines the port binding (switch0p1 attaches to OpenFlow port 1, switch0p2 to port 2, and so on).
Bootstrapping Host 1 (Port 1):
IFACE=switch0p1
HOST=host1
MAC_ADDR=$(printf '52:54:00:%02x:%02x:%02x' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256)))
sudo cp /opt/faucet/images/cirros-0.5.2-x86_64-disk.img /opt/faucet/images/cirros-${HOST}.img
sudo kvm -m 512 \
  -device e1000,netdev=${IFACE},mac=$MAC_ADDR \
  -drive file=/opt/faucet/images/cirros-${HOST}.img,format=qcow2 -nographic \
  -netdev tap,id=${IFACE},ifname=${IFACE},script=/opt/faucet/scripts/ovs-ifup,downscript=/opt/faucet/scripts/ovs-ifdown
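The MAC_ADDR generator above relies on printf hex formatting: 52:54:00 is the OUI conventionally used for QEMU/KVM guest NICs, and %02x zero-pads each random byte to two hex digits. For example, with fixed bytes instead of $RANDOM:

```shell
# %02x pads each byte to two hex digits; 52:54:00 is the QEMU/KVM OUI.
printf '52:54:00:%02x:%02x:%02x\n' 7 11 255   # prints 52:54:00:07:0b:ff
```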
Tailing the faucet.log will now reflect real-time MAC-address learning via the OpenFlow streams:
DPID 1 (0x1) switch0 status did not change: Port 1 up status True reason MODIFY state 4
DPID 1 (0x1) switch0 L2 learned on Port 1 52:54:00:b1:6c:84 (L2 type 0x0800...) Port 1 VLAN 100
Repeat the execution block above for Host 2 (IFACE=switch0p2), Host 3 (IFACE=switch0p3), and Host 4 (IFACE=switch0p4).
Assigning Addresses Inside the Compute Nodes
With the virtual wiring in place, the final step is configuring IP addressing inside the virtual machines themselves.
We will assign addresses matching the faucet_vips subnets defined earlier: 10.0.100.x for the office nodes (Hosts 1-3) and 10.0.200.x for the guest node (Host 4).
Execute sequentially across the active KVM terminals:
Host 1 (Office Node A):
sudo ip address add 10.0.100.1/24 dev eth0
sudo ip route add default via 10.0.100.254 dev eth0
Host 2 (Office Node B):
sudo ip address add 10.0.100.2/24 dev eth0
sudo ip route add default via 10.0.100.254 dev eth0
Host 3 (Office Node C):
sudo ip address add 10.0.100.3/24 dev eth0
sudo ip route add default via 10.0.100.254 dev eth0
Host 4 (Guest Network Node):
sudo ip address add 10.0.200.1/24 dev eth0
sudo ip route add default via 10.0.200.254 dev eth0
Fabric Verification
Because the router-datacenter-networks definition joins the office and guest VLANs, Faucet routes between the two subnets automatically; no additional configuration is needed.
Run a simple ping test. From Host 1 (10.0.100.1), send ICMP echoes to Host 4 (10.0.200.1) on the guest VLAN:
ping -c 4 10.0.200.1
If the echo replies return, you have built a centralized Software-Defined Network that routes between VLANs purely through OpenFlow rules programmed by the controller.