<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Mon, Jun 16, 2014 at 10:45 AM, Serge Hallyn <span dir="ltr"><<a href="mailto:serge.hallyn@ubuntu.com" target="_blank">serge.hallyn@ubuntu.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class=""><div class="h5">Quoting Andreas Hasenack (<a href="mailto:andreas@canonical.com">andreas@canonical.com</a>):<br>
> Hi,<br>
><br>
> I'm not sure where or how this should be fixed.<br>
><br>
> I have a case where openstack was juju deployed with neutron networking,<br>
> and we needed to set instance-mtu = 1454 in the neutron-gateway charm.<br>
><br>
> Instances launched in the cloud get that 1454 MTU set in their eth0 devices<br>
> and life is good.<br>
><br>
> If, however, I create a container inside an instance (for example, via juju<br>
> deploy --to lxc:0), that container gets eth0 set with an MTU of 1500. And<br>
> that makes almost all connections stall, and the deployment fails.<br>
><br>
> Now, who should set the default MTU for containers to be 1454 in this case?<br>
> Or, more explicitly, to mimic the MTU of the "host" (in this case, the<br>
> instance)? juju? lxc-create?<br>
<br>
</div></div>Hi,<br>
<br>
Does adding<br>
<br>
lxc.network.mtu = 1454<br>
<br>
to /etc/lxc/default.conf on the instance before the containers are<br>
created fix the issue?<br>
<br></blockquote><div><br></div><div>No, it didn't help. It only worked if added to the actual container's config file and rebooting it. Maybe juju bypasses that default.conf file somehow when creating its containers.<br>
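For reference, the per-container change that did work looks like the fragment below. The container path is illustrative, and the two network keys are just the stock veth/lxcbr0 defaults for context; only the mtu line is the addition:

```
# /var/lib/lxc/<container-name>/config  (path illustrative)
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.mtu = 1454
```

The mtu key only takes effect when the veth pair is recreated, which is why a restart of the container was needed.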
</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Setting the mtu automatically in lxc is in general a good idea, but it<br>
won't catch all cases and may not catch yours. Are you using lxcbr0, or<br>
a br0 with eth0 bridged to it? (Actually if using lxcbr0 then I'd like<br>
to think that the kernel would fragment the lxcbr0 traffic as it hits<br>
your eth0 which would just slow your traffic down)<br></blockquote><div><br></div><div>It's a veth device bridged into lxcbr0. Here is some info (look at the MTUs):<br><br>root@juju-fakestack-machine-3:~# ip addr<br>
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default <br> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00<br> inet 127.0.0.1/8 scope host lo<br>
valid_lft forever preferred_lft forever<br> inet6 ::1/128 scope host <br> valid_lft forever preferred_lft forever<br>2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1454 qdisc pfifo_fast state UP group default qlen 1000<br>
link/ether fa:16:3e:94:c9:cd brd ff:ff:ff:ff:ff:ff<br> inet 10.10.0.6/16 brd 10.10.255.255 scope global eth0<br> valid_lft forever preferred_lft forever<br> inet6 fe80::f816:3eff:fe94:c9cd/64 scope link <br>
valid_lft forever preferred_lft forever<br>3: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default <br> link/ether fe:ee:96:ca:72:80 brd ff:ff:ff:ff:ff:ff<br> inet 10.0.3.1/24 brd 10.0.3.255 scope global lxcbr0<br>
valid_lft forever preferred_lft forever<br> inet6 fe80::38b8:13ff:feac:a6b0/64 scope link <br> valid_lft forever preferred_lft forever<br>7: veth13ME9N: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master lxcbr0 state UP group default qlen 1000<br>
link/ether fe:ee:96:ca:72:80 brd ff:ff:ff:ff:ff:ff<br> inet6 fe80::fcee:96ff:feca:7280/64 scope link <br> valid_lft forever preferred_lft forever<br><br></div><div>Bridge:<br># brctl show lxcbr0<br>bridge name bridge id STP enabled interfaces<br>
lxcbr0 8000.feee96ca7280 no veth13ME9N<br><br></div><div>I'm not sure how different MTUs are handled on a bridge.<br></div></div></div></div>
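For what it's worth, a Linux bridge forwards at layer 2 and does not fragment, and (on kernels of this era) the bridge device itself takes on the smallest MTU among its ports, so mixed MTUs on one bridge are best avoided. A hedged runtime sketch to pin every leg to the host MTU, using the device names from the output above (the veth name is specific to this boot, so this is not persistent):

```shell
# Not persistent: the veth name changes each time the container starts,
# and these commands need root and the live devices.
ip link set dev veth13ME9N mtu 1454
ip link set dev lxcbr0 mtu 1454
```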