Engineering a Distributed Overlay Network with Open vSwitch
Introduction
In modern, hyper-converged datacenter environments, decoupling the logical network from the physical underlay is a foundational requirement. Open vSwitch (OVS) is the industry-standard multi-layer virtual switch, designed specifically to enable massive network automation and programmatic extension across complex computing clusters.
This guide demonstrates how to architect a high-performance Overlay Network using Open vSwitch. By establishing a VxLAN tunnel between two distinct hosts, we effectively bridge geographically separated sites, creating a seamless Layer-2 broadcast domain over a standard Layer-3 IP network.
Enterprise Deployment Environment
For this implementation, we assume a production-grade Linux environment:
- Operating System: 2 hosts running modern Ubuntu LTS (e.g., 22.04 LTS or 24.04 LTS)
- Open vSwitch: Version 3.x (or the 2.17 LTS series), as packaged natively by the OS package manager
Initializing Open vSwitch
To begin scaffolding our Software-Defined Network (SDN), install the core OVS packages on both host machines.
sudo apt update && sudo apt install -y openvswitch-switch
Ensure the OVS daemon (ovs-vswitchd) is active and enabled to persist across host reboots:
sudo systemctl enable --now openvswitch-switch
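Before provisioning any bridges, it is worth a quick sanity check that the daemon is up and that the OVSDB server is reachable (exact output will vary by host and OVS version):

```shell
# Confirm the service is active and note the installed version
systemctl is-active openvswitch-switch
ovs-vsctl --version

# Dump the (currently empty) switch database; a UUID and ovs_version
# in the output confirms ovsdb-server is answering
sudo ovs-vsctl show
```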
Configuring the Integration Bridge and VxLAN Tunnels
This phase involves provisioning the virtual switch and establishing the encapsulation layer. Note that VxLAN is not cryptographic by default: it provides the essential tunneling layer (MAC-in-UDP encapsulation) for our overlay, so if confidentiality is required, pair it with IPsec or restrict it to a trusted underlay.
Specifically, we will:
- Provision a central OVS integration bridge named br-int.
- Attach an internal interface (veth0) to serve as the local gateway for the overlay.
- Configure a vxlan-type port to establish point-to-point communication with the secondary host.
Architectural Note on MTU: VxLAN encapsulates Ethernet frames within UDP packets, adding 50 bytes of overhead on an IPv4 underlay. To prevent fragmentation across the WAN, the MTU of the internal interface (veth0) must be explicitly lowered to 1450 (assuming a standard 1500-byte underlay MTU).
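The 1450 figure falls directly out of the encapsulation arithmetic on a standard 1500-byte underlay (an IPv6 underlay would add a further 20 bytes):

```shell
# Inner Ethernet header (14) + outer IPv4 (20) + UDP (8) + VXLAN header (8) = 50 bytes
underlay_mtu=1500
vxlan_overhead=50
overlay_mtu=$((underlay_mtu - vxlan_overhead))
echo "$overlay_mtu"   # 1450
```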
Configuration: Datacenter Host-1 (Alpha)
On the primary host, execute the following to construct the bridge and assign the overlay IP space (10.0.0.10/24):
Step 1: Provision the integration bridge
sudo ovs-vsctl add-br br-int
Step 2: Attach the internal interface
sudo ovs-vsctl add-port br-int veth0 -- set interface veth0 type=internal
Step 3: Construct the VxLAN overlay port targeting Host-2's underlay IP
sudo ovs-vsctl add-port br-int vxlan0 -- set interface vxlan0 type=vxlan \
options:remote_ip=192.168.122.11 options:key=101
Step 4: Initialize the interface with an overlay IPv4 address and optimized MTU
sudo ip address add 10.0.0.10/24 dev veth0
sudo ip link set dev veth0 up mtu 1450
Configuration: Datacenter Host-2 (Beta)
Mirror the configuration on the secondary host, ensuring you invert the remote underlay mapping and assign the corresponding overlay IP (10.0.0.11/24):
Step 1: Provision the integration bridge
sudo ovs-vsctl add-br br-int
Step 2: Attach the internal interface
sudo ovs-vsctl add-port br-int veth0 -- set interface veth0 type=internal
Step 3: Construct the VxLAN overlay port targeting Host-1's underlay IP
sudo ovs-vsctl add-port br-int vxlan0 -- set interface vxlan0 type=vxlan \
options:remote_ip=192.168.122.10 options:key=101
Step 4: Initialize the interface with an overlay IPv4 address and optimized MTU
sudo ip address add 10.0.0.11/24 dev veth0
sudo ip link set dev veth0 up mtu 1450
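A note on re-running the provisioning steps on either host: ovs-vsctl add-br and add-port error out if the bridge or port already exists. For scripted or repeated deployments, the --may-exist flag makes these commands idempotent:

```shell
# Safe to re-run: --may-exist turns "already exists" into a no-op
sudo ovs-vsctl --may-exist add-br br-int
sudo ovs-vsctl --may-exist add-port br-int veth0 -- set interface veth0 type=internal
```

Also be aware that while OVS bridge and port definitions persist in OVSDB across reboots, the ip address and MTU settings from Step 4 do not; reapply them via your distribution's network configuration (e.g. netplan) or a boot-time script.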
Validating the OVS Topology
Verification is critical in distributed networking. Run ovs-vsctl show on both nodes to dump the active switch configuration.
On Host-1 (Alpha):
b6f578f4-1691-4573-b3ba-1d05e0eb7b22
Bridge br-int
Port br-int
Interface br-int
type: internal
Port veth0
Interface veth0
type: internal
Port vxlan0
Interface vxlan0
type: vxlan
options: {key="101", remote_ip="192.168.122.11"}
ovs_version: "3.3.0"
On Host-2 (Beta):
4649a8a9-f107-41ca-ba0c-5b24903b1aea
Bridge br-int
Port br-int
Interface br-int
type: internal
Port vxlan0
Interface vxlan0
type: vxlan
options: {key="101", remote_ip="192.168.122.10"}
Port veth0
Interface veth0
type: internal
ovs_version: "3.3.0"
Note: UUIDs and specific OVS versions (e.g., 3.3.0 vs older 2.13.x) will vary based on your exact environment.
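Beyond eyeballing ovs-vsctl show, individual fields can be read straight from the OVSDB Interface record, which is handy for scripted validation (shown here for Host-1; the expected remote_ip is mirrored on Host-2):

```shell
# Query specific columns instead of parsing `ovs-vsctl show` output
sudo ovs-vsctl get interface vxlan0 type               # tunnel type (vxlan)
sudo ovs-vsctl get interface vxlan0 options:remote_ip  # peer underlay address
```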
Testing Fabric Connectivity
The true test of the overlay is verifying that ICMP traffic between the overlay addresses traverses the Layer-3 WAN transparently, carried as Layer-2 frames inside the VxLAN/UDP encapsulation.
From Host-1, initiate a ping to the overlay interface of Host-2 (10.0.0.11):
$ ping -c 3 10.0.0.11
PING 10.0.0.11 (10.0.0.11) 56(84) bytes of data.
64 bytes from 10.0.0.11: icmp_seq=1 ttl=64 time=0.987 ms
64 bytes from 10.0.0.11: icmp_seq=2 ttl=64 time=1.15 ms
64 bytes from 10.0.0.11: icmp_seq=3 ttl=64 time=1.14 ms
--- 10.0.0.11 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 0.987/1.093/1.152/0.075 ms
From Host-2, verify bi-directional reachability by targeting Host-1's overlay IP (10.0.0.10):
$ ping -c 3 10.0.0.10
PING 10.0.0.10 (10.0.0.10) 56(84) bytes of data.
64 bytes from 10.0.0.10: icmp_seq=1 ttl=64 time=1.03 ms
64 bytes from 10.0.0.10: icmp_seq=2 ttl=64 time=1.17 ms
64 bytes from 10.0.0.10: icmp_seq=3 ttl=64 time=1.34 ms
--- 10.0.0.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.025/1.177/1.338/0.127 ms
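To observe the encapsulation itself, capture on the underlay NIC while one of the pings above is running; VxLAN uses UDP destination port 4789 by default. The interface name enp1s0 below is an assumption, so substitute your actual underlay NIC. The second command probes the 1450-byte overlay MTU directly:

```shell
# Capture four encapsulated packets on the underlay NIC (enp1s0 is an assumed name)
sudo tcpdump -ni enp1s0 -c 4 udp port 4789

# Probe the overlay MTU with "don't fragment" set:
# 1422-byte payload + 8-byte ICMP header + 20-byte IP header = 1450 bytes
ping -c 3 -M do -s 1422 10.0.0.11
```

If the MTU probe fails with "message too long" while plain pings succeed, the veth0 MTU was not lowered on one of the hosts.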
Strategic Conclusion
While this tutorial models a localized two-node lab, the enterprise implications are profound. Constructing a dynamic, programmatic Layer-2 overlay across a standard Layer-3 boundary enables highly scalable Datacenter Interconnects (DCI), seamless workload migration, and multi-tenant isolation.
Frameworks like OpenStack natively leverage exactly these OVS/VxLAN mechanics beneath the surface to provision massive, isolated tenant networks on demand. By mastering the fundamental OVS primitives, engineering teams can build custom, robust, and high-performance software-defined network fabrics.