[Lxc-users] Slow response times (at least, from the LAN) to LXC containers
Michael B. Trausch
mike at trausch.us
Wed Mar 10 18:57:43 UTC 2010
On 03/10/2010 12:06 PM, Daniel Lezcano wrote:
> Michael B. Trausch wrote:
>>
>> Here is ping output showing the problem:
>>
>> mbt at fennel:~$ time ping -c 4 spicerack.trausch.us
>> PING spicerack.trausch.us (173.15.213.185) 56(84) bytes of data.
>> From 172.16.0.1: icmp_seq=2 Redirect Host(New nexthop: 173.15.213.185)
>> From 172.16.0.1: icmp_seq=3 Redirect Host(New nexthop: 173.15.213.185)
>> 64 bytes from 173.15.213.185: icmp_seq=4 ttl=64 time=2.27 ms
>>
>> --- spicerack.trausch.us ping statistics ---
>> 4 packets transmitted, 1 received, 75% packet loss, time 11073ms
>> rtt min/avg/max/mdev = 2.278/2.278/2.278/0.000 ms
>>
>> real 0m21.144s
>> user 0m0.000s
>> sys 0m0.020s
>>
>> Now, this is pinging from my laptop. When I ping from the outside
>> world, it always seems to work:
>
> Mmh, some information is missing to investigate.
>
> The redirect you receive means the router found an optimized route for
> the packet you sent to it, so the ICMP redirect will trigger the kernel
> to create a new route for those packets. Maybe the route is not created
> in the right container? Can you check where this route is created?
> * ip route show table all
> or
> * route -Cn
The routing tables are set up automatically (that is, they are set up
by Debian's /etc/network/interfaces) based on the network configuration
information.
Here is the routing table from the spicerack.trausch.us container:
mbt at spicerack:~$ ip route show all
173.15.213.184/29 dev eth0 proto kernel scope link src 173.15.213.185
172.16.0.0/24 dev eth1 proto kernel scope link src 172.16.0.3
default via 173.15.213.190 dev eth0 metric 100
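For reference, the container's /etc/network/interfaces is just the
standard static stanzas; roughly this (typed from memory rather than
pasted, with the netmasks inferred from the prefixes above, and leaving
out the metric option):

# public /29 interface
auto eth0
iface eth0 inet static
        address 173.15.213.185
        netmask 255.255.255.248
        gateway 173.15.213.190

# LAN interface
auto eth1
iface eth1 inet static
        address 172.16.0.3
        netmask 255.255.255.0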
Here is the routing table from the container's host:
mbt at saffron:~$ ip route show all
172.16.0.0/24 dev br0 proto kernel scope link src 172.16.0.2
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
default via 172.16.0.1 dev br0 metric 100
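I have not pasted the route cache anywhere; if it would help, I can run
something like this on the laptop (fennel), since that is the machine
receiving the redirects, and post the output:

ip route get 173.15.213.185
ip route show cache

(or route -Cn, as you suggested.)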
> Can you give a summary of the network topology (IP of each container
> and routes)?
The network router has the global IP address 173.15.213.190 and an
address on the LAN of 172.16.0.1. It is an unfortunate piece of
hardware supplied by my ISP (it is a cable modem and router
combination). I have a /29 from my ISP, of which they use one address
(.190) and I use the other 5 (.185 through .189). Anything that doesn't
have a static IP address has a 172.16.0.0/24 address, and is NAT'd
through .190.
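A rough sketch of the layout, using the addresses above:

  Internet
     |
  ISP cable modem/router: 173.15.213.190 (public side) = 172.16.0.1 (LAN)
     |
  LAN 172.16.0.0/24 (NAT'd through .190)
     |-- fennel (my laptop, dynamic 172.16.0.0/24 address)
     |-- saffron (LXC host, br0 = 172.16.0.2)
            `-- spicerack container: eth0 = 173.15.213.185 (public /29),
                                     eth1 = 172.16.0.3 (LAN)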
> For example, where is located 172.16.0.1 ?
It is at the network edge and, as noted above, it also has the global
IP address 173.15.213.190.
> Is your host configured as a router ?
The LXC host has IPv4 forwarding enabled:
mbt at saffron:~$ cat /proc/sys/net/ipv4/conf/all/forwarding
1
Interestingly enough, IPv6 forwarding was _not_ enabled on the host
node, despite the fact that I am pretty sure I had enabled it. It is
enabled now, though:
mbt at saffron:~$ cat /proc/sys/net/ipv6/conf/all/forwarding
1
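To keep both of these from being lost again at the next reboot, the
usual fix is to set them in /etc/sysctl.conf rather than relying on
whatever turned them on last time, e.g.:

# /etc/sysctl.conf -- make forwarding survive reboots
net.ipv4.conf.all.forwarding = 1
net.ipv6.conf.all.forwarding = 1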
> Probably more questions will come later :)
Okay, that's fine. I will answer anything and everything I can. As far
as I can tell, I am only having this problem on the LAN when I try to
reach one of the global IP addresses. Unfortunately, that's the most
annoying part: most of the services on those addresses are things I use
mostly at home but sometimes need while out and about, which is why
they are on my Web server in the first place. :-)
>>
>> What I don't get is why I am receiving these redirects from ping. I
>> never get them when pinging 172.16.0.x addresses that are in LXC
>> containers on that system. And I never got these redirects before when
>> I was running containers in OpenVZ.
>>
>> Also, if I try to ping that same interface's IPv6 addresses, I get
>> failures (Address unreachable), no matter if I ping the private or the
>> global one. However, I can reach ipv6.google.com just fine, and it is
>> going through that computer to get to my tunnel to Hurricane Electric.
>> It has to, since that container is my only IPv6 route to the world and
>> the tunnel endpoint is that container's global IP address.
It seems that this issue might be fixed; I guess IPv6 forwarding simply
was not re-enabled on the host after the last reboot. That makes it
even more confusing, though: IPv6 forwarding should not be required on
the host just to ping an IPv6 address on an interface attached to the
bridge, because the host is not forwarding that traffic. Only the
container should need IPv6 forwarding enabled, since the container is
what routes between my IPv6 network and the rest of the IPv6 Internet.
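If the "Address unreachable" errors come back, the first thing I will
check is neighbour discovery from both ends before worrying about
routing, along the lines of:

ip -6 neigh show
ip -6 addr show dev eth1

(run on the laptop with its own interface name, and inside the
container); a FAILED or INCOMPLETE neighbour entry there would point at
the bridge rather than at forwarding.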
>> I am completely confused on this one. I don't know where to look next.
>
> Let's try to solve the problems one by one.
--
Michael B. Trausch ☎ (404) 492-6475