[lxc-users] Container doesn't connect to bridge

Benoit GEORGELIN - Association Web4all benoit.georgelin at web4all.fr
Fri Oct 23 20:25:23 UTC 2015


Yes, thanks, I saw it in your configuration file. 

Everything looks good. 
Your container does not have a gateway address, but you should still be able to ping the local network. 
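If you want to add one, something like this from inside the container should work (adjust the address if your gateway differs): 

ip route add default via 192.168.54.1 

or, if I remember the LXC 1.x key correctly, set lxc.network.ipv4.gateway = 192.168.54.1 in the container configuration. 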

This looks good too: 

Address HWtype HWaddress Flags Mask Iface 
192.168.54.65 ether 00:50:56:be:13:94 C eth0 
192.168.54.1 ether 00:13:c4:f2:64:41 C eth0 


Your container knows the MAC address of the host, so communication is working at that level. 

Do you have any iptables rules on the host? 

Can you also check this value? It should be 1: 
cat /proc/sys/net/ipv4/ip_forward 

Also, can you send the OVS DB content: 

ovs-vsctl show 
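For example, from the host (assuming the iptables and openvswitch-switch tools are installed), something like: 

# list all iptables rules with packet counters; look for DROP/REJECT entries, especially in the FORWARD chain 
sudo iptables -L -n -v 

# if ip_forward is 0, it can be enabled on the fly (persist it with net.ipv4.ip_forward=1 in /etc/sysctl.conf) 
sudo sysctl -w net.ipv4.ip_forward=1 

# dump the OVS database: bridges, ports and interfaces; the container's veth should show up as a port on br0 
sudo ovs-vsctl show 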


Regards, 

Benoît Georgelin - 
To help protect the environment, please print this email only if necessary. 


De: "Joshua Schaeffer" <jschaeffer0922 at gmail.com> 
À: "lxc-users" <lxc-users at lists.linuxcontainers.org> 
Envoyé: Vendredi 23 Octobre 2015 15:41:49 
Objet: Re: [lxc-users] Container doesn't connect to bridge 

Oh, I also forgot to mention that I'm using OVS to create the bridge. I didn't think this would be a problem if I got the bridge working on the host, but let me know if I've missed something. 
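For what it's worth, I believe I can check whether the container's veth actually got added to the bridge with something like "sudo ovs-vsctl list-ports br0" on the host; it should list both eth0 and the veth* interface. 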
Thanks, 
Joshua 

On Fri, Oct 23, 2015 at 1:36 PM, Joshua Schaeffer < jschaeffer0922 at gmail.com > wrote: 



Here ya go. It looks like the routing table is off for the container, or am I just misreading that? Also, I assigned the veth a MAC address from the config file. Everything still appears to be the same; no change. 

Host: 
jschaeffer at prvlxc01:~$ sudo route -n 
Kernel IP routing table 
Destination Gateway Genmask Flags Metric Ref Use Iface 
0.0.0.0 192.168.54.1 0.0.0.0 UG 0 0 0 br0 
192.168.54.0 0.0.0.0 255.255.255.128 U 0 0 0 br0 

jschaeffer at prvlxc01:~$ cat /etc/network/interfaces 
# This file describes the network interfaces available on your system 
# and how to activate them. For more information, see interfaces(5). 

source /etc/network/interfaces.d/* 

# The loopback network interface 
auto lo 
iface lo inet loopback 

allow-ovs br0 
iface br0 inet static 
address 192.168.54.65 
netmask 255.255.255.128 
gateway 192.168.54.1 
ovs_type OVSBridge 
ovs_ports eth0 

# The primary network interface 
allow-br0 eth0 
iface eth0 inet manual 
ovs_bridge br0 
ovs_type OVSPort 



Container: 
root at thinkweb:~# route -n 
Kernel IP routing table 
Destination Gateway Genmask Flags Metric Ref Use Iface 
192.168.54.0 0.0.0.0 255.255.255.128 U 0 0 0 eth0 

root at thinkweb:~# arp -n 
Address HWtype HWaddress Flags Mask Iface 
192.168.54.65 ether 00:50:56:be:13:94 C eth0 
192.168.54.1 ether 00:13:c4:f2:64:41 C eth0 


On Fri, Oct 23, 2015 at 12:23 PM, Benoit GEORGELIN - Association Web4all < benoit.georgelin at web4all.fr > wrote: 


Hi, 

Can you provide, from both the host and the container: 

route -n 

Can you provide from the container: 

arp -n 

Can you also share the bridge configuration from /etc/network/interfaces? 

The LXC configuration looks good to me. 
I would try setting the MAC address manually in the configuration file, like: 

lxc.network.hwaddr = fe:fa:9c:21:8d:0b 
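(If I remember correctly, LXC will also replace any "x" placeholders with random hex digits, so something like lxc.network.hwaddr = 00:16:3e:xx:xx:xx works too.) 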

Regards, 

Benoît Georgelin - 
To help protect the environment, please print this email only if necessary. 


De: "Joshua Schaeffer" < jschaeffer0922 at gmail.com > 
À: "lxc-users" < lxc-users at lists.linuxcontainers.org > 
Envoyé: Vendredi 23 Octobre 2015 13:40:35 
Objet: [lxc-users] Container doesn't connect to bridge 

I have an LXC container (version 1.1.2) on Debian that cannot connect to 
the network. My host has br0 set up, and I can access any machine on the 
network and the internet from the host: 

This is the host: 
jschaeffer at prvlxc01:~$ sudo ifconfig 
[sudo] password for jschaeffer: 
br0 Link encap:Ethernet HWaddr 00:50:56:be:13:94 
inet addr:192.168.54.65 Bcast:192.168.54.127 Mask:255.255.255.128 
inet6 addr: fe80::250:56ff:febe:1394/64 Scope:Link 
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 
RX packets:9891 errors:0 dropped:0 overruns:0 frame:0 
TX packets:4537 errors:0 dropped:0 overruns:0 carrier:0 
collisions:0 txqueuelen:0 
RX bytes:4078480 (3.8 MiB) TX bytes:521427 (509.2 KiB) 

eth0 Link encap:Ethernet HWaddr 00:50:56:be:13:94 
inet6 addr: fe80::250:56ff:febe:1394/64 Scope:Link 
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 
RX packets:10872 errors:0 dropped:0 overruns:0 frame:0 
TX packets:5085 errors:0 dropped:0 overruns:0 carrier:0 
collisions:0 txqueuelen:1000 
RX bytes:4159749 (3.9 MiB) TX bytes:575863 (562.3 KiB) 

lo Link encap:Local Loopback 
inet addr:127.0.0.1 Mask:255.0.0.0 
inet6 addr: ::1/128 Scope:Host 
UP LOOPBACK RUNNING MTU:65536 Metric:1 
RX packets:0 errors:0 dropped:0 overruns:0 frame:0 
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 
collisions:0 txqueuelen:0 
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) 

vethAGP5QO Link encap:Ethernet HWaddr fe:fa:9c:21:8d:0b 
inet6 addr: fe80::fcfa:9cff:fe21:8d0b/64 Scope:Link 
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 
RX packets:536 errors:0 dropped:0 overruns:0 frame:0 
TX packets:3013 errors:0 dropped:0 overruns:0 carrier:0 
collisions:0 txqueuelen:1000 
RX bytes:49648 (48.4 KiB) TX bytes:332247 (324.4 KiB) 

From the container I cannot even reach the gateway: 

This is the container: 
root at thinkweb:/# ifconfig 
eth0 Link encap:Ethernet HWaddr aa:0a:f7:64:12:db 
inet addr:192.168.54.110 Bcast:192.168.54.127 Mask:255.255.255.128 
inet6 addr: fe80::a80a:f7ff:fe64:12db/64 Scope:Link 
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 
RX packets:3194 errors:0 dropped:0 overruns:0 frame:0 
TX packets:536 errors:0 dropped:0 overruns:0 carrier:0 
collisions:0 txqueuelen:1000 
RX bytes:352314 (344.0 KiB) TX bytes:49648 (48.4 KiB) 

lo Link encap:Local Loopback 
inet addr:127.0.0.1 Mask:255.0.0.0 
inet6 addr: ::1/128 Scope:Host 
UP LOOPBACK RUNNING MTU:65536 Metric:1 
RX packets:4 errors:0 dropped:0 overruns:0 frame:0 
TX packets:4 errors:0 dropped:0 overruns:0 carrier:0 
collisions:0 txqueuelen:0 
RX bytes:336 (336.0 B) TX bytes:336 (336.0 B) 

root at thinkweb:/# ping 192.168.54.1 
PING 192.168.54.1 (192.168.54.1) 56(84) bytes of data. 
^C 
--- 192.168.54.1 ping statistics --- 
7 packets transmitted, 0 received, 100% packet loss, time 6049ms 

jschaeffer at prvlxc01:~$ cat /var/lib/lxc/thinkweb/config 
cat: /var/lib/lxc/thinkweb/config: Permission denied 
jschaeffer at prvlxc01:~$ sudo cat /var/lib/lxc/thinkweb/config 
# Template used to create this container: /usr/share/lxc/templates/lxc-download 
# Parameters passed to the template: -d debian -r jessie -a amd64 
# For additional config options, please look at lxc.container.conf(5) 

# Distribution configuration 
lxc.include = /usr/share/lxc/config/debian.common.conf 
lxc.arch = x86_64 

# Container specific configuration 
lxc.rootfs = /var/lib/lxc/thinkweb/rootfs 
lxc.utsname = thinkweb 
lxc.tty = 4 
lxc.pts = 1024 
lxc.cap.drop = sys_module mac_admin mac_override sys_time 
# When using LXC with apparmor, uncomment the next line to run unconfined: 
#lxc.aa_profile = unconfined 

# Network configuration 
lxc.network.type = veth 
lxc.network.flags = up 
lxc.network.link = br0 
lxc.network.ipv4 = 192.168.54.110/25 
lxc.network.name = eth0 

## Limits 
lxc.cgroup.cpu.shares = 1024 
lxc.cgroup.cpuset.cpus = 0,1,2,3 
lxc.cgroup.memory.limit_in_bytes = 2G 
#lxc.cgroup.memory.memsw.limit_in_bytes = 3G 


Thanks, 
Joshua 









_______________________________________________ 
lxc-users mailing list 
lxc-users at lists.linuxcontainers.org 
http://lists.linuxcontainers.org/listinfo/lxc-users 

