[lxc-users] Connecting container to tagged VLAN
Joshua Schaeffer
jschaeffer0922 at gmail.com
Thu Jan 28 00:46:25 UTC 2016
On Wed, Jan 27, 2016 at 4:38 PM, Guido Jäkel <G.Jaekel at dnb.de> wrote:
> Dear Joshua,
>
> you wrote that there's a trunk on eth1 and eth2. But for eth2, I can't
> see any VLAN (501?) detrunking as with eth1 & eth1.500. On the other hand,
> you wrote that eth2 is working. Are you sure that you really receive this
> trunk of 3 VLANs on both of your eths?
>
I started to think about this as well and found the reason. VMware
allows you to tag NICs at the hypervisor level. Eth1 and eth2 were both
set up under VLAN 500, so no tagging on the LXC host was required, which
is why eth2 worked. The lesson there is: don't mix dot1q. Either set it
on the hypervisor and leave it completely out of the LXC host and
container, or vice versa.
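(For the archives, the host-side variant would look roughly like the sketch
below in /etc/network/interfaces. This is only a minimal sketch, assuming the
vlan and bridge-utils packages are installed and the vSwitch port group is
configured to pass the tagged trunk through to the VM untouched.)

auto eth1.500
iface eth1.500 inet manual
vlan-raw-device eth1

auto br0-500
iface br0-500 inet manual
bridge_ports eth1.500
bridge_stp off
bridge_fd 0
bridge_maxwait 0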
I've completely removed VLAN tagging from my LXC host and am making progress,
but I'm still running into odd situations:
lxcuser@prvlxc01:~$ sudo ip -d link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode
DEFAULT group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
UP mode DEFAULT group default qlen 1000
link/ether 00:50:56:be:13:94 brd ff:ff:ff:ff:ff:ff promiscuity 0
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master
br0-500 state UP mode DEFAULT group default qlen 1000
link/ether 00:50:56:be:46:c5 brd ff:ff:ff:ff:ff:ff promiscuity 1
bridge_slave
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT
group default qlen 1000
link/ether 00:50:56:be:26:4f brd ff:ff:ff:ff:ff:ff promiscuity 0
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT
group default qlen 1000
link/ether 00:50:56:be:01:d8 brd ff:ff:ff:ff:ff:ff promiscuity 0
6: br0-500: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
UP mode DEFAULT group default
link/ether 00:50:56:be:46:c5 brd ff:ff:ff:ff:ff:ff promiscuity 0
bridge
7: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
UNKNOWN mode DEFAULT group default
link/ether de:ef:8c:53:01:0b brd ff:ff:ff:ff:ff:ff promiscuity 0
bridge
9: vethKAG02C: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
master br0-500 state UP mode DEFAULT group default qlen 1000
link/ether fe:bf:b5:cf:f0:83 brd ff:ff:ff:ff:ff:ff promiscuity 1
veth
bridge_slave
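(Bridge membership can also be cross-checked with "brctl show br0-500" or
"bridge link show", assuming bridge-utils / iproute2 are installed; in the
output above both eth1 and the container's vethKAG02C are slaves of br0-500,
which is what I'd expect.)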
*Scenario 1*: When assigning an IP directly to eth1 on the host, no
bridging involved, no containers involved (Success):
/etc/network/interfaces
auto eth1
iface eth1 inet static
address 10.240.78.3/24
route -n
10.240.78.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
PING 10.240.78.1 (10.240.78.1) 56(84) bytes of data.
64 bytes from 10.240.78.1: icmp_seq=1 ttl=255 time=8.25 ms
64 bytes from 10.240.78.1: icmp_seq=2 ttl=255 time=2.59 ms
^C
--- 10.240.78.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 2.597/5.425/8.254/2.829 ms
*Scenario 2*: When assigning an IP to a bridge and making eth1 a slave to
the bridge, no containers involved (Success):
/etc/network/interfaces
auto eth1
iface eth1 inet manual
auto br0-500
iface br0-500 inet static
address 10.240.78.3/24
bridge_ports eth1
bridge_stp off
bridge_fd 0
bridge_maxwait 0
route -n
10.240.78.0 0.0.0.0 255.255.255.0 U 0 0 0
br0-500
PING 10.240.78.1 (10.240.78.1) 56(84) bytes of data.
64 bytes from 10.240.78.1: icmp_seq=1 ttl=255 time=3.26 ms
64 bytes from 10.240.78.1: icmp_seq=2 ttl=255 time=1.51 ms
64 bytes from 10.240.78.1: icmp_seq=3 ttl=255 time=2.30 ms
^C
--- 10.240.78.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.514/2.360/3.262/0.715 ms
*Scenario 3*: Same scenario as above, except the bridge is not assigned an
IP, and a container is created and connected to the same bridge (Failure):
/etc/network/interfaces
auto eth1
iface eth1 inet manual
auto br0-500
iface br0-500 inet static
bridge_ports eth1
bridge_stp off
bridge_fd 0
bridge_maxwait 0
~/.local/share/lxc/c4/config
# Network configuration
lxc.network.type = veth
lxc.network.link = br0-500
lxc.network.ipv4 = 10.240.78.3/24
lxc.network.ipv4.gateway = 10.240.78.1
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:f7:0a:83
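(For completeness: since the address and gateway come from lxc.network.* here,
the container's own /etc/network/interfaces should leave eth0 alone, e.g.

auto eth0
iface eth0 inet manual

otherwise a DHCP client inside the container can fight with the static
assignment.)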
route -n (on host)
10.240.78.0 0.0.0.0 255.255.255.0 U 0 0 0
br0-500
route -n (inside container)
10.240.78.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
ping (on host)
PING 10.240.78.1 (10.240.78.1) 56(84) bytes of data.
64 bytes from 10.240.78.1: icmp_seq=1 ttl=255 time=1.12 ms
64 bytes from 10.240.78.1: icmp_seq=2 ttl=255 time=1.17 ms
64 bytes from 10.240.78.1: icmp_seq=3 ttl=255 time=6.54 ms
^C
--- 10.240.78.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 1.125/2.950/6.548/2.544 ms
ping (inside container)
PING 10.240.78.1 (10.240.78.1) 56(84) bytes of data.
From 10.240.78.3 icmp_seq=1 Destination Host Unreachable
From 10.240.78.3 icmp_seq=2 Destination Host Unreachable
From 10.240.78.3 icmp_seq=3 Destination Host Unreachable
^C
--- 10.240.78.1 ping statistics ---
4 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3013ms
Here is the odd part: if I sniff the traffic on the bridge from the host, I
can see the container ARPing for the gateway and getting a response.
However, nothing is being added to the ARP table in the container.
lxcuser@prvlxc01:~$ su root -c "tcpdump -i br0-500 -Uw - | tcpdump -en -r -
arp" # this is the host
Password:
reading from file -, link-type EN10MB (Ethernet)
tcpdump: listening on br0-500, link-type EN10MB (Ethernet), capture size
262144 bytes
17:32:10.223168 00:16:3e:f7:0a:83 > ff:ff:ff:ff:ff:ff, ethertype ARP
(0x0806), length 42: Request who-has 10.240.78.1 tell 10.240.78.3, length 28
17:32:10.223337 00:16:3e:f7:0a:83 > ff:ff:ff:ff:ff:ff, ethertype ARP
(0x0806), length 60: Request who-has 10.240.78.1 tell 10.240.78.3, length 46
17:32:10.225821 00:13:c4:f2:64:4d > 00:16:3e:f7:0a:83, ethertype ARP
(0x0806), length 60: Reply 10.240.78.1 is-at 00:13:c4:f2:64:4d, length 46
17:32:11.220216 00:16:3e:f7:0a:83 > ff:ff:ff:ff:ff:ff, ethertype ARP
(0x0806), length 42: Request who-has 10.240.78.1 tell 10.240.78.3, length 28
17:32:11.220418 00:16:3e:f7:0a:83 > ff:ff:ff:ff:ff:ff, ethertype ARP
(0x0806), length 60: Request who-has 10.240.78.1 tell 10.240.78.3, length 46
17:32:11.230455 00:13:c4:f2:64:4d > 00:16:3e:f7:0a:83, ethertype ARP
(0x0806), length 60: Reply 10.240.78.1 is-at 00:13:c4:f2:64:4d, length 46
arp -n (from container)
Address                  HWtype  HWaddress           Flags Mask            Iface
10.240.78.1                      (incomplete)                              eth0
If I manually add the gateway's MAC address to the ARP table, I can ping it
and have full internet access! Does anybody know why the container isn't
adding the MAC address when a response is being given?
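(For anyone following along, a static entry can be added from inside the
container with either iproute2 or net-tools, along these lines:

ip neigh replace 10.240.78.1 lladdr 00:13:c4:f2:64:4d dev eth0
# or, with net-tools:
arp -s 10.240.78.1 00:13:c4:f2:64:4d

Running "tcpdump -eni eth0 arp" inside the container while capturing on the
bridge as above should also show whether the reply actually makes it across
the veth pair, which would narrow down where it's being dropped.)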
>
> I'm using a (working) comparable setup: On the host, eth0 is used for host
> management on a detrunked port. On eth1, there's a trunk with the VLANs
> needed for the different networks of a staged environment. On eth1, there
> is a VLAN decoder for each of the needed VLANs, and each is attached to a
> separate software bridge for its VLAN. A container's outside veth is
> attached to the appropriate bridge - this is done in a start script by a
> calculated configuration statement based on the container's name.
> But the lxc host is located on plain hardware, not in a VM.
That's basically what I have in my home lab (and everything is working
successfully there) and what I'm trying to reproduce here. Unfortunately I
don't have the pull here to get a physical LXC host, so I have to work with
what I've got.
Thanks,
Joshua