[lxc-users] Container doesn't connect to bridge
Joshua Schaeffer
jschaeffer0922 at gmail.com
Fri Oct 23 17:40:35 UTC 2015
I have an LXC container (version 1.1.2) on Debian that cannot connect to
the network. My host has br0 set up, and from the host I can reach any
machine on the network as well as the internet.
This is the host:
jschaeffer at prvlxc01:~$ sudo ifconfig
[sudo] password for jschaeffer:
br0 Link encap:Ethernet HWaddr 00:50:56:be:13:94
inet addr:192.168.54.65  Bcast:192.168.54.127  Mask:255.255.255.128
inet6 addr: fe80::250:56ff:febe:1394/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:9891 errors:0 dropped:0 overruns:0 frame:0
TX packets:4537 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:4078480 (3.8 MiB) TX bytes:521427 (509.2 KiB)
eth0 Link encap:Ethernet HWaddr 00:50:56:be:13:94
inet6 addr: fe80::250:56ff:febe:1394/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:10872 errors:0 dropped:0 overruns:0 frame:0
TX packets:5085 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:4159749 (3.9 MiB) TX bytes:575863 (562.3 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
vethAGP5QO Link encap:Ethernet HWaddr fe:fa:9c:21:8d:0b
inet6 addr: fe80::fcfa:9cff:fe21:8d0b/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:536 errors:0 dropped:0 overruns:0 frame:0
TX packets:3013 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:49648 (48.4 KiB) TX bytes:332247 (324.4 KiB)
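
For context, br0 is defined in /etc/network/interfaces on the host. The sketch below shows the general shape of a Debian bridge stanza for this kind of setup (not my exact file; the addresses are taken from the output above and other details may differ):

# /etc/network/interfaces (host) -- rough sketch, not the actual file
auto br0
iface br0 inet static
    address 192.168.54.65
    netmask 255.255.255.128
    gateway 192.168.54.1
    bridge_ports eth0
    bridge_fd 0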
From the container I cannot even reach the gateway:
This is the container:
root at thinkweb:/# ifconfig
eth0 Link encap:Ethernet HWaddr aa:0a:f7:64:12:db
inet addr:192.168.54.110  Bcast:192.168.54.127  Mask:255.255.255.128
inet6 addr: fe80::a80a:f7ff:fe64:12db/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3194 errors:0 dropped:0 overruns:0 frame:0
TX packets:536 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:352314 (344.0 KiB) TX bytes:49648 (48.4 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:4 errors:0 dropped:0 overruns:0 frame:0
TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:336 (336.0 B) TX bytes:336 (336.0 B)
root at thinkweb:/# ping 192.168.54.1
PING 192.168.54.1 (192.168.54.1) 56(84) bytes of data.
^C
--- 192.168.54.1 ping statistics ---
7 packets transmitted, 0 received, 100% packet loss, time 6049ms
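
For what it's worth, I have not yet ruled out the veth simply not being enslaved to br0. The commands below are what I would run on the host to confirm it (assuming bridge-utils is installed; the iproute2 bridge tool works as well):

# List the interfaces attached to br0; vethAGP5QO should appear
# alongside eth0 in the interfaces column
sudo brctl show br0
# Same information via iproute2
sudo bridge link show

And here is the container's config: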
jschaeffer at prvlxc01:~$ cat /var/lib/lxc/thinkweb/config
cat: /var/lib/lxc/thinkweb/config: Permission denied
jschaeffer at prvlxc01:~$ sudo cat /var/lib/lxc/thinkweb/config
# Template used to create this container: /usr/share/lxc/templates/lxc-download
# Parameters passed to the template: -d debian -r jessie -a amd64
# For additional config options, please look at lxc.container.conf(5)
# Distribution configuration
lxc.include = /usr/share/lxc/config/debian.common.conf
lxc.arch = x86_64
# Container specific configuration
lxc.rootfs = /var/lib/lxc/thinkweb/rootfs
lxc.utsname = thinkweb
lxc.tty = 4
lxc.pts = 1024
lxc.cap.drop = sys_module mac_admin mac_override sys_time
# When using LXC with apparmor, uncomment the next line to run unconfined:
#lxc.aa_profile = unconfined
# Network configuration
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.ipv4 = 192.168.54.110/25
lxc.network.name = eth0
## Limits
lxc.cgroup.cpu.shares = 1024
lxc.cgroup.cpuset.cpus = 0,1,2,3
lxc.cgroup.memory.limit_in_bytes = 2G
#lxc.cgroup.memory.memsw.limit_in_bytes = 3G
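
For reference, lxc.container.conf(5) also documents lxc.network.hwaddr and lxc.network.ipv4.gateway, neither of which is set above. Adding them would look roughly like the following (purely illustrative values, not part of my current config):

# Illustrative additions, not in the config above
# Fixed MAC for the container's eth0 (placeholder value)
lxc.network.hwaddr = 00:16:3e:12:34:56
# Default gateway pushed into the container
lxc.network.ipv4.gateway = 192.168.54.1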
Thanks,
Joshua