[lxc-devel] Cloud instance with reduced MTU, container uses 1500, connections stall

Andreas Hasenack andreas at canonical.com
Mon Jun 16 15:33:47 UTC 2014


On Mon, Jun 16, 2014 at 10:45 AM, Serge Hallyn <serge.hallyn at ubuntu.com>
wrote:

> Quoting Andreas Hasenack (andreas at canonical.com):
> > Hi,
> >
> > I'm not sure where or how this should be fixed.
> >
> > I have a case where openstack was juju deployed with neutron networking,
> > and we needed to set instance-mtu = 1454 in the neutron-gateway charm.
> >
> > Instances launched in the cloud get that 1454 MTU set in their eth0
> devices
> > and life is good.
> >
> > If, however, I create a container inside an instance (for example, via
> juju
> > deploy --to lxc:0), that container gets eth0 set with an MTU of 1500. And
> > that makes almost all connections stall, and the deployment fails.
> >
> > Now, who should set the default MTU for containers to be 1454 in this
> case?
> > Or, more explicitly, to mimic the MTU of the "host" (in this case, the
> > instance)? juju? lxc-create?
>
> Hi,
>
> Does adding
>
> lxc.network.mtu = 1454
>
> to /etc/lxc/default.conf on the instance before the containers are
> created fix the issue?
>
>
No, it didn't help. It only worked when added to the actual container's
config file, followed by a reboot of the container. Maybe juju bypasses
that default.conf file somehow when creating its containers.
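For reference, the container-level change that did work looks roughly like this (a sketch; the config path and the other network keys may differ per container and lxc version, and juju keeps its own per-container configs):

```
# /var/lib/lxc/<container-name>/config  (path may vary)
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.mtu = 1454
```

After editing, the container has to be restarted for the veth to be recreated with the new MTU.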


> Setting the mtu automatically in lxc is in general a good idea, but it
> won't catch all cases and may not catch yours.  Are you using lxcbr0, or
> a br0 with eth0 bridged to it?  (Actually, if using lxcbr0, then I'd like
> to think that the kernel would fragment the lxcbr0 traffic as it hits
> your eth0, which would just slow your traffic down.)
>

It's a veth device bridged into lxcbr0. Here is some info (look at the
MTUs):

root@juju-fakestack-machine-3:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1454 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:94:c9:cd brd ff:ff:ff:ff:ff:ff
    inet 10.10.0.6/16 brd 10.10.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe94:c9cd/64 scope link
       valid_lft forever preferred_lft forever
3: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether fe:ee:96:ca:72:80 brd ff:ff:ff:ff:ff:ff
    inet 10.0.3.1/24 brd 10.0.3.255 scope global lxcbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::38b8:13ff:feac:a6b0/64 scope link
       valid_lft forever preferred_lft forever
7: veth13ME9N: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master lxcbr0 state UP group default qlen 1000
    link/ether fe:ee:96:ca:72:80 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcee:96ff:feca:7280/64 scope link
       valid_lft forever preferred_lft forever

Bridge:
# brctl show lxcbr0
bridge name    bridge id        STP enabled    interfaces
lxcbr0        8000.feee96ca7280    no        veth13ME9N

I'm not sure how different MTUs are handled on a bridge.
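For what it's worth, my understanding is that the bridge itself won't fragment: it forwards Ethernet frames at layer 2, and fragmentation (or an ICMP "fragmentation needed") only happens at the IP layer when the host routes the packet out the 1454-byte eth0. If that ICMP gets dropped somewhere in the overlay, TCP segments with DF set just vanish, which would match the stalls we see. A quick back-of-the-envelope on the numbers (a hypothetical helper, not anything from lxc or juju):

```python
# Hypothetical helper: the TCP MSS implied by a link MTU, assuming plain
# IPv4 (20-byte header) and TCP with no options (20-byte header).
def mss_for_mtu(mtu: int, ip_header: int = 20, tcp_header: int = 20) -> int:
    return mtu - ip_header - tcp_header

# A container with MTU 1500 advertises MSS 1460, but the overlay path
# behind eth0 can only carry segments sized for MSS 1414.
print(mss_for_mtu(1500))  # 1460
print(mss_for_mtu(1454))  # 1414
```

So any segment between 1415 and 1460 bytes of payload is at risk, which is "almost all" bulk traffic and explains why nearly every connection stalls while small exchanges still work.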