Connecting Multiple Clusters with iptables NAT
Connecting servers with internal IPs across multiple clusters over the internet is a common infrastructure requirement. While a traditional VPN is the standard approach, you can implement similar functionality using iptables and Network Address Translation (NAT) rules, often with lower overhead than VPN solutions.
This method works by forwarding traffic between clusters at the IP layer, effectively creating a tunnel-like connection without the encryption or authentication overhead of a true VPN. It’s suitable for private networks where all traffic stays within your infrastructure.
Architecture Overview
The basic setup involves:
- Multiple clusters, each with internal (private) IP ranges
- Gateway nodes in each cluster with both internal and public IPs
- iptables rules on gateway nodes to forward and translate traffic between clusters
- Routing tables configured to direct cross-cluster traffic through the gateways
Each cluster needs a designated gateway server that will handle inter-cluster traffic. This server must have connectivity to gateways in other clusters (either through public IPs or direct network links).
Configuration Steps
Step 1: Enable IP Forwarding
On each gateway node, enable kernel packet forwarding:
sysctl -w net.ipv4.ip_forward=1
Make this persistent across reboots:
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.d/99-forwarding.conf
sysctl -p /etc/sysctl.d/99-forwarding.conf
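A quick way to confirm the setting took effect is to read the flag back from /proc (a small sketch, Linux-only; not strictly required for the setup):

```shell
# Read the kernel's forwarding flag directly; "1" means forwarding is enabled.
fwd=$(cat /proc/sys/net/ipv4/ip_forward 2>/dev/null || echo unknown)
echo "net.ipv4.ip_forward = $fwd"
```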
Step 2: Configure iptables Rules
Set up NAT rules on each gateway to translate traffic destined for other clusters. For a two-cluster setup:
Cluster A gateway (internal subnet 10.0.1.0/24, other cluster 10.0.2.0/24):
# Allow forwarding between clusters
iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
# NAT outgoing traffic to other cluster
iptables -t nat -A POSTROUTING -d 10.0.2.0/24 -o eth0 -j MASQUERADE
# Return traffic is translated back automatically by connection tracking
# (conntrack), so no explicit PREROUTING/DNAT rule is needed for replies.
Replace interface names (eth0, eth1) with your actual interfaces. Typically, one interface connects to the internal cluster network and another to the internet.
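For symmetry, the Cluster B gateway needs the mirror-image rules. This is a sketch assuming the same layout (eth0 external, eth1 internal); adjust interface names and subnets to your hosts:

```shell
# Cluster B gateway (internal subnet 10.0.2.0/24, other cluster 10.0.1.0/24)
iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
# NAT traffic headed for Cluster A; replies are reversed by conntrack.
iptables -t nat -A POSTROUTING -d 10.0.1.0/24 -o eth0 -j MASQUERADE
```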
Step 3: Add Static Routes
On non-gateway nodes in each cluster, add routes directing traffic for other clusters through the local gateway:
ip route add 10.0.2.0/24 via 10.0.1.1 dev eth0
Where 10.0.1.1 is the internal IP of your gateway and 10.0.2.0/24 is the remote cluster subnet.
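To verify the kernel will actually use the new route, you can ask it directly (10.0.2.50 is a sample remote address, not a host from your setup):

```shell
# Show the resolved next hop for a remote-cluster address; once the static
# route is in place this should report "via 10.0.1.1 dev eth0".
ip route get 10.0.2.50 2>/dev/null || echo "no route - revisit this step"
```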
To persist routes on systems using netplan:
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 10.0.1.10/24
      gateway4: 10.0.1.1
      routes:
        - to: 10.0.2.0/24
          via: 10.0.1.1
On systems using ifcfg (RHEL/CentOS), add to /etc/sysconfig/network-scripts/route-eth0:
10.0.2.0/24 via 10.0.1.1 dev eth0
Step 4: Save iptables Rules
Use iptables-persistent (Debian/Ubuntu) or firewalld (RHEL/CentOS 7+) to persist rules:
Debian/Ubuntu:
apt-get install iptables-persistent
iptables-save > /etc/iptables/rules.v4
RHEL/CentOS with firewalld (masquerading must also be persisted, not just the accept rule):
firewall-cmd --permanent --add-masquerade
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.0.2.0/24" accept'
firewall-cmd --reload
For multiple clusters, configure additional rules for each cluster pair. Gateway servers will need rules for all other cluster subnets.
Testing Connectivity
From a node in Cluster A, test connectivity to a node in Cluster B:
ping 10.0.2.50
traceroute 10.0.2.50
Monitor the gateway for traffic:
tcpdump -i eth0 -n host 10.0.2.50
Check current NAT translations:
conntrack -L
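On a busy gateway the full table is noisy. Assuming the conntrack-tools package is installed (run as root), the listing can be filtered to just cross-cluster flows; the subnet below matches the example setup:

```shell
# Show only tracked flows destined for the remote cluster subnet.
conntrack -L -d 10.0.2.0/24
# Follow new translations as they are created (event mode).
conntrack -E -d 10.0.2.0/24
```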
Performance Considerations
This iptables-based approach avoids VPN encryption overhead, making it faster for high-throughput scenarios. However:
- No encryption — all traffic is sent in plaintext
- No authentication — relies on network isolation
- Requires manual route management at scale
- Gateway becomes a single point of failure per cluster
For production setups with stringent security requirements, consider WireGuard or traditional VPN solutions instead. For internal infrastructure within data centers or private networks, this method works well.
Scaling to Multiple Clusters
For three or more clusters, configure each gateway with rules for all other cluster subnets:
# On Cluster A gateway, for Clusters B and C
iptables -t nat -A POSTROUTING -d 10.0.2.0/24 -o eth0 -j MASQUERADE
iptables -t nat -A POSTROUTING -d 10.0.3.0/24 -o eth0 -j MASQUERADE
Add corresponding routes on all non-gateway nodes pointing to their local gateway. With many clusters, consider implementing this with configuration management tools like Ansible to avoid manual errors.
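Rather than hand-writing one MASQUERADE rule per remote subnet, a small loop can print the full rule set for review before applying it. The subnet list and eth0 are example values from this article:

```shell
#!/bin/sh
# Remote cluster subnets reachable from this gateway (example values).
REMOTE_SUBNETS="10.0.2.0/24 10.0.3.0/24"

# Print the rules instead of executing them, so they can be reviewed
# first (pipe the output to "sh" to apply).
for net in $REMOTE_SUBNETS; do
    echo "iptables -t nat -A POSTROUTING -d $net -o eth0 -j MASQUERADE"
done
```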
Debugging Connection Issues
If connectivity fails, check these in order:
- Verify IP forwarding is enabled: sysctl net.ipv4.ip_forward
- Check routes exist: ip route show
- Verify firewall rules: iptables -L -v and iptables -t nat -L -v
- Confirm DNS resolution if using hostnames: nslookup hostname
- Test with tcpdump to see whether packets reach the gateway
- Check MTU if packets are lost: ip link show eth0
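For convenience, the first few checks can be combined into one script run on the gateway. The subnet and interface names are the example values used throughout this article; adjust them to your environment:

```shell
#!/bin/sh
# Quick gateway health check (example values; adjust subnet/interface).
REMOTE=10.0.2.0/24
IFACE=eth0

# 1. Kernel forwarding flag.
fwd=$(cat /proc/sys/net/ipv4/ip_forward 2>/dev/null || echo 0)
[ "$fwd" = "1" ] && fwd_state=ok || fwd_state=DISABLED
echo "ip_forward: $fwd_state"

# 2. Route to the remote cluster.
if ip route show "$REMOTE" 2>/dev/null | grep -q .; then
    echo "route to $REMOTE: present"
else
    echo "route to $REMOTE: MISSING"
fi

# 3. Interface MTU (fragmentation problems show up here).
mtu=$(cat "/sys/class/net/$IFACE/mtu" 2>/dev/null || echo unknown)
echo "$IFACE MTU: $mtu"
```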
This approach provides a lightweight alternative to traditional VPNs for internal cluster networking, provided your security model tolerates unencrypted traffic within your network boundary.

Comments
Dear Zhiqiang Ma,
I have a small doubt: is it possible to move the VG partition to another server? Please let me know.
Thanks in advance.
By
Mike
Hi Mike,
I am confused: what do you mean by “Vg partition”? Is it related to the VPN-like network?
If it is not related to any post, you are welcome to ask on https://www.systutorials.com/qa/
Dear Zhiqiang Ma,
Sorry, I am talking about LVM. I have created a volume group (VG), mounted it at /mnt, and stored some files in it.
Is it possible to move the VG to another server? Please let me know.
Thanks in advance.
By
Mike
It’s been a long time. Just noticed this comment.
If someone else has a similar question, the techniques introduced at http://www.fclose.com/2611/duplicating-and-backing-up-lvm-backed-xen-domu-from-a-remote-server/ may help (following steps 1 to 5 should be enough).
Dear Zhiqiang Ma
I have configured OpenVPN (Slackware 13.37) on an Amazon server machine; it is working fine, and I have tested it.
I have configured:
1. System1, a Linux machine with the OpenVPN client set up; I started the service and it is running fine.
2. System2, a Linux machine where I configured the OpenVPN server; it is working fine.
The System1 OpenVPN client and the System2 OpenVPN server are connected.
I have checked the logs, and I am able to ping the OpenVPN server's tun0 IP (10.8.0.1) from the OpenVPN client (10.8.0.6).
From the OpenVPN server, I am also able to ping the OpenVPN client's tun0 IP address.
In the OpenVPN server config file I added the option push "redirect-gateway def1 bypass-dhcp", but then my OpenVPN client machine (Amazon) hangs.
On System1, when I run the command
wget -qO- ifconfig.me/ip
it shows the System2 (OpenVPN server) IP address, but it should show the System1 public IP address.
Please help me: how do I set up the routing in iptables, and how do I fix this issue?
By
Kavi
I am not sure what the problem is, since I do not have much OpenVPN experience. It may be related to the routing tables (and maybe also iptables). You may check the `ip` man page ( https://www.systutorials.com/docs/linux/man/8-ip/#lbBB ) for how to change the routing tables.