[Lxc-users] Slow response times (at least, from the LAN) to LXC containers

Daniel Lezcano daniel.lezcano at free.fr
Wed Mar 10 17:06:33 UTC 2010


Michael B. Trausch wrote:
> Hello,
>
> Alright, so I am still having some strange networking issues between my 
> LAN and my containers.  I'm not sure if the outside world is having 
> trouble with my containers or not, though.
>
> Here's the situation:  I have 3 containers running under LXC, all of 
> which have unique MAC addresses (I checked this time...!  I still feel 
> like an idiot after the last email I sent here...).  Two of these 
> addresses are static, and two are DHCP (the DHCP addresses are LAN 
> addresses in the 172.16.0.0/24 network, and the others are global IP 
> addresses).
>
> Now, I am _not_ having any issues with the LAN addresses.  Those, in 
> fact, are working just fine.  The problem I am having is that the 
> containers that have global IP addresses are having issues; they will 
> always (eventually) reply, but sometimes they simply fail to respond and 
> I have not the slightest clue why.  This is the only thing that I could 
> get working under OpenVZ that I cannot seem to get working correctly here.
>
> Here is ping output showing the problem:
>
> mbt at fennel:~$ time ping -c 4 spicerack.trausch.us
> PING spicerack.trausch.us (173.15.213.185) 56(84) bytes of data.
>  From 172.16.0.1: icmp_seq=2 Redirect Host(New nexthop: 173.15.213.185)
>  From 172.16.0.1: icmp_seq=3 Redirect Host(New nexthop: 173.15.213.185)
> 64 bytes from 173.15.213.185: icmp_seq=4 ttl=64 time=2.27 ms
>
> --- spicerack.trausch.us ping statistics ---
> 4 packets transmitted, 1 received, 75% packet loss, time 11073ms
> rtt min/avg/max/mdev = 2.278/2.278/2.278/0.000 ms
>
> real	0m21.144s
> user	0m0.000s
> sys	0m0.020s
>
> Now, this is pinging from my laptop.  When I ping from the outside 
> world, it always seems to work:
>   

Mmh, some information is missing to investigate this.

The redirect you received means the router found a better route for the 
packets you sent it, so the ICMP redirect triggers the kernel to create 
a new route for those packets. Maybe the route is not being created in 
the right container? Can you check where this route is created?
 * ip route show table all
 * ip route table show all
or
 * route -Cn
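
For instance (just a sketch, run on the machine that received the 
redirect -- fennel in your transcript -- and inside each container if 
you suspect them), something like this would show whether the redirect 
installed a host route towards 173.15.213.185; flushing the route cache 
is only a guess at a temporary workaround, not a fix:

  # list every routing table, including redirect-created entries
  ip route show table all | grep 173.15.213.185

  # or dump the kernel route cache directly
  route -Cn | grep 173.15.213.185

  # flushing the cache drops any redirect-created route
  sudo ip route flush cache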

Can you give a summary of the network topology (the IP addresses of each 
container and the routes)?

For example, where is 172.16.0.1 located?
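
One way to answer that from the laptop (a sketch; 172.16.0.1 is taken 
from your ping output) is to check which MAC address answers for it:

  ping -c 1 172.16.0.1   # populate the ARP cache
  arp -n 172.16.0.1      # show the MAC that answered
  ip neigh show          # same information via iproute2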

Is your host configured as a router?
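
For example, assuming a stock Linux setup, these commands tell you 
whether the host forwards packets and whether it is allowed to emit 
ICMP redirects:

  cat /proc/sys/net/ipv4/ip_forward         # 1 = the host routes packets
  sysctl net.ipv4.conf.all.send_redirects   # 1 = redirects may be sent
  sysctl net.ipv4.conf.br0.send_redirects   # per-interface setting for br0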

More questions will probably come later :)

> otaku% time ping -c 4 spicerack.trausch.us
> PING spicerack.trausch.us (173.15.213.185): 56 data bytes
> 64 bytes from 173.15.213.185: icmp_seq=0 ttl=44 time=42.237 ms
> 64 bytes from 173.15.213.185: icmp_seq=1 ttl=44 time=60.862 ms
> 64 bytes from 173.15.213.185: icmp_seq=2 ttl=44 time=49.686 ms
> 64 bytes from 173.15.213.185: icmp_seq=3 ttl=44 time=50.719 ms
>
> ----spicerack.trausch.us PING Statistics----
> 4 packets transmitted, 4 packets received, 0.0% packet loss
> round-trip min/avg/max/stddev = 42.237/50.876/60.862/7.655 ms
> ping -c 4 spicerack.trausch.us  0.01s user 0.01s system 0% cpu 3.067 total
>
> I don't understand why this is.  This problem also exhibits itself when 
> I try to go to my Web server (on that very same IP address).  It will 
> _always_ fail to connect the first time, and then I can reload and get 
> to it the second.  If I stop going through page-load cycles with the 
> machine, then I have the same problem again in a few minutes.
>
> The networking is all bridged like so (from the container host, obviously):
>
> mbt at saffron:~$ ifconfig
> br0       Link encap:Ethernet  HWaddr 00:e0:4d:c6:99:c3
>            inet addr:172.16.0.2  Bcast:172.16.0.255  Mask:255.255.255.0
>            inet6 addr: fe80::2e0:4dff:fec6:99c3/64 Scope:Link
>            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>            RX packets:1338484 errors:0 dropped:0 overruns:0 frame:0
>            TX packets:94110 errors:0 dropped:0 overruns:0 carrier:0
>            collisions:0 txqueuelen:0
>            RX bytes:233020157 (233.0 MB)  TX bytes:13305287 (13.3 MB)
>
> eth1      Link encap:Ethernet  HWaddr 00:e0:4d:c6:99:c3
>            inet6 addr: fe80::2e0:4dff:fec6:99c3/64 Scope:Link
>            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>            RX packets:8304537 errors:0 dropped:0 overruns:0 frame:0
>            TX packets:8319870 errors:0 dropped:0 overruns:0 carrier:0
>            collisions:0 txqueuelen:1000
>            RX bytes:5506717265 (5.5 GB)  TX bytes:5028941992 (5.0 GB)
>            Interrupt:27 Base address:0x4000
>
> lo        Link encap:Local Loopback
>            inet addr:127.0.0.1  Mask:255.0.0.0
>            inet6 addr: ::1/128 Scope:Host
>            UP LOOPBACK RUNNING  MTU:16436  Metric:1
>            RX packets:17 errors:0 dropped:0 overruns:0 frame:0
>            TX packets:17 errors:0 dropped:0 overruns:0 carrier:0
>            collisions:0 txqueuelen:0
>            RX bytes:1426 (1.4 KB)  TX bytes:1426 (1.4 KB)
>
> veth7XAGMJ Link encap:Ethernet  HWaddr 66:43:3e:f1:49:50
>            inet6 addr: fe80::6443:3eff:fef1:4950/64 Scope:Link
>            UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
>            RX packets:4667350 errors:0 dropped:0 overruns:0 frame:0
>            TX packets:6883287 errors:0 dropped:0 overruns:0 carrier:0
>            collisions:0 txqueuelen:1000
>            RX bytes:3822200715 (3.8 GB)  TX bytes:7370222136 (7.3 GB)
>
> vethZ2MPrI Link encap:Ethernet  HWaddr a2:79:e0:7d:c9:32
>            inet6 addr: fe80::a079:e0ff:fe7d:c932/64 Scope:Link
>            UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
>            RX packets:239273 errors:0 dropped:0 overruns:0 frame:0
>            TX packets:2005271 errors:0 dropped:0 overruns:0 carrier:0
>            collisions:0 txqueuelen:1000
>            RX bytes:173986823 (173.9 MB)  TX bytes:304727044 (304.7 MB)
>
> vethlyeclR Link encap:Ethernet  HWaddr 06:c1:72:45:84:6b
>            inet6 addr: fe80::4c1:72ff:fe45:846b/64 Scope:Link
>            UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
>            RX packets:1012517 errors:0 dropped:0 overruns:0 frame:0
>            TX packets:2705478 errors:0 dropped:0 overruns:0 carrier:0
>            collisions:0 txqueuelen:1000
>            RX bytes:202961661 (202.9 MB)  TX bytes:326251397 (326.2 MB)
>
> vethpzG08i Link encap:Ethernet  HWaddr ae:85:18:6c:f2:b8
>            inet6 addr: fe80::ac85:18ff:fe6c:f2b8/64 Scope:Link
>            UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
>            RX packets:1848591 errors:0 dropped:0 overruns:0 frame:0
>            TX packets:2750421 errors:0 dropped:0 overruns:0 carrier:0
>            collisions:0 txqueuelen:1000
>            RX bytes:619641970 (619.6 MB)  TX bytes:363396089 (363.3 MB)
>
> virbr0    Link encap:Ethernet  HWaddr f2:3f:6e:49:77:df
>            inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
>            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>            RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>            TX packets:77 errors:0 dropped:0 overruns:0 carrier:0
>            collisions:0 txqueuelen:0
>            RX bytes:0 (0.0 B)  TX bytes:7703 (7.7 KB)
>
> mbt at saffron:~$ sudo brctl show
> bridge name	bridge id		STP enabled	interfaces
> br0		8000.00e04dc699c3	yes		eth1
> 							veth7XAGMJ
> 							vethZ2MPrI
> 							vethlyeclR
> 							vethpzG08i
> virbr0		8000.000000000000	yes		
>
> What I don't get is why I am receiving these redirects from ping.  I 
> never get them when pinging 172.16.0.x addresses that are in LXC 
> containers on that system.  And I never got these redirects before when 
> I was running containers in OpenVZ.
>
> Also, if I try to ping that same interface's IPv6 addresses, I get 
> failures (Address unreachable), no matter if I ping the private or the 
> global one.  However, I can reach ipv6.google.com just fine, and it is 
> going through that computer to get to my tunnel to Hurricane Electric. 
> It has to, since that container is my only IPv6 route to the world and 
> the tunnel endpoint is that container's global IP address.
>
> I am completely confused on this one.  I don't know where to look next. 
>   
Let's try to solve the problems one by one.



