[lxc-users] Containers have network issues when their host uses a bonded interface

Andrey Repin anrdaemon at yandex.ru
Tue Sep 15 08:29:31 UTC 2015


Greetings, Fajar A. Nugraha!

>> We will have to do some thorough testing with the 4.2 (or possibly 4.1)
>> kernel over the next few weeks to make sure this kernel doesn't introduce
>> new issues.

> That would seem like the best option for you.

>> new issues. Our only other option would be to fall back to KVM instead of
>> containers and that's not something we really want to do.

> Assuming your problem is caused by bridging the veth interface,
> there's an alternate networking setup with proxyarp + route that might
> work. It doesn't use bridge, and only works for privileged containers.

Aren't you overcomplicating it?

1. Containers config:

lxc.network.type = macvlan
lxc.network.macvlan.mode = bridge
lxc.network.link = bond0

# (the rest is the same as you have it right now)
lxc.network.hwaddr = 00:16:3e:...
lxc.network.name = eth0
lxc.network.flags = up
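Before committing to the container config, you can sanity-check that macvlan bridge mode works on top of bond0 by creating a throwaway child interface by hand (a sketch; the `mvtest` name, the 192.168.1.200/27 address and the 192.168.1.1 gateway are assumptions, adjust for your subnet):

```shell
# create a temporary macvlan child on top of bond0 in bridge mode
ip link add mvtest link bond0 type macvlan mode bridge
ip link set mvtest up
ip addr add 192.168.1.200/27 dev mvtest   # any free address on your segment
ping -c 3 -I mvtest 192.168.1.1           # test against your gateway
ip link delete mvtest                     # clean up
```

If that ping works, the containers' macvlan interfaces should work the same way.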


2a. If you don't need to communicate with the containers from the host itself,
you're all set. Test connectivity from another machine on the network; it
should just work.
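From another machine on the same segment (not the host), something like this should confirm it (the container address 192.168.1.13 and the machine's eth0 are assumptions):

```shell
ping -c 3 192.168.1.13               # container's address (assumed)
arping -c 3 -I eth0 192.168.1.13     # check the MAC resolves to the 00:16:3e:... hwaddr
```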

2b. If you absolutely want to communicate with the containers from the host
over the network, you will need a similarly set up macvlan interface on the
host, because macvlan in bridge mode does not pass traffic between the child
interfaces and the parent interface's own IP.
This is a little complicated without a helper script, but still doable:

auto mac0
iface mac0 inet static
  address 192.168.1.12
  netmask 255.255.255.224
  hwaddress ether 02:00:00:00:00:01
  pre-up ip link set bond0 up
  pre-up ip link add $IFACE link bond0 type macvlan mode bridge
  post-down ip link delete $IFACE type macvlan
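For testing without editing /etc/network/interfaces, the stanza above maps to roughly these manual commands (with $IFACE expanded to mac0):

```shell
ip link set bond0 up
ip link add mac0 link bond0 type macvlan mode bridge
ip link set mac0 address 02:00:00:00:00:01
ip link set mac0 up
ip addr add 192.168.1.12/27 dev mac0      # /27 == netmask 255.255.255.224

# tear down again with:
# ip link delete mac0 type macvlan
```

Once that works, `ifup mac0` / `ifdown mac0` will do the same via the stanza.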


-- 
With best regards,
Andrey Repin
Tuesday, September 15, 2015 11:14:26

Sorry for my terrible english...
