[lxc-users] Network space visibility in containers
steve at linuxsuite.org
Wed Jul 6 17:36:15 UTC 2016
> Try defining lxc.network.name and see if it fixes it.
>
This is with LXC 1.0.8. Nope.
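For reference, lxc.network.name in LXC 1.x sets the device name as seen
inside the container, while lxc.network.veth.pair only names the host-side
end of the pair. A minimal sketch of the suggested change, with eth0 as an
illustrative choice of name:

  lxc.network.type = veth
  lxc.network.link = br1
  lxc.network.veth.pair = admn101-1   # host-side end of the veth pair
  lxc.network.name = eth0             # device name inside the container

The container still shows the same thing: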
[root at admn-101 ~]# ifconfig
admn101-1 Link encap:Ethernet  HWaddr 26:3C:0B:06:A2:AF
          inet addr:10.2.3.101  Bcast:10.2.255.255  Mask:255.255.0.0
          inet6 addr: fe80::243c:bff:fe06:a2af/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:312 errors:0 dropped:0 overruns:0 frame:0
          TX packets:129 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:48616 (47.4 KiB)  TX bytes:26791 (26.1 KiB)

admn101-4 Link encap:Ethernet  HWaddr FE:3D:09:F8:AA:AA
          inet addr:10.5.3.101  Bcast:10.5.255.255  Mask:255.255.0.0
          inet6 addr: fe80::fc3d:9ff:fef8:aaaa/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:468 (468.0 b)  TX bytes:468 (468.0 b)

admn101-5 Link encap:Ethernet  HWaddr 72:26:66:8B:0E:FB
          inet addr:10.1.3.101  Bcast:10.1.255.255  Mask:255.255.0.0
          inet6 addr: fe80::7026:66ff:fe8b:efb/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:10 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:920 (920.0 b)  TX bytes:468 (468.0 b)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
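Note that the interfaces inside the container carry the host-side
lxc.network.veth.pair names (admn101-1, admn101-4, admn101-5) rather than
the usual eth0/eth1/eth2. One way to verify which end of each veth pair the
container is actually holding is to compare peer interface indexes; a rough
sketch, assuming the container is named admn-101:

  # Inside the container, each veth end records its peer's ifindex in iflink:
  lxc-attach -n admn-101 -- cat /sys/class/net/admn101-1/iflink
  # On the host, list interfaces and look for the index printed above:
  ip -o link show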
[root at admn-101 ~]# netstat -an
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:25              0.0.0.0:*               LISTEN
tcp        0      0 10.5.5.101:443          207.11.1.163:12508      SYN_RECV
tcp        0      0 10.5.5.101:443          1.39.15.205:41572       SYN_RECV
tcp        0      0 10.5.5.101:443          96.53.94.194:19664      SYN_RECV
tcp        0      0 10.5.5.101:443          73.112.14.86:25891      SYN_RECV
tcp        0      0 10.5.5.101:443          96.53.94.194:19641      SYN_RECV
tcp        0      0 10.5.5.101:443          1.39.15.205:3458        SYN_RECV
tcp        0      0 10.5.5.101:443          1.39.15.205:54481       SYN_RECV
tcp        0      0 10.5.5.101:443          96.53.94.194:19608      SYN_RECV
tcp        0      0 10.5.5.101:443          96.53.94.194:19644      SYN_RECV
tcp        0      0 10.5.5.101:443          96.53.94.194:19619      SYN_RECV
tcp        0      0 10.5.5.101:443          1.39.15.205:57090       SYN_RECV
tcp        0      0 10.5.5.101:443          1.39.15.205:1215        SYN_RECV
tcp        0      0 10.5.5.101:443          172.56.42.139:38995     SYN_RECV
tcp        0      0 10.5.5.101:443          96.53.94.194:19565      SYN_RECV
tcp        0      0 10.5.5.101:443          172.56.42.139:36355     SYN_RECV
tcp        0      0 10.5.5.101:443          96.53.94.194:19532      SYN_RECV
tcp        0      0 10.5.5.101:443          142.27.78.252:51543     SYN_RECV
tcp        0      0 10.5.5.101:443          172.56.42.139:27733     SYN_RECV
tcp        0      0 10.5.5.101:443          96.53.94.194:19585      SYN_RECV
tcp        0      0 10.5.5.101:443          1.39.15.205:19024       SYN_RECV
tcp        0      0 10.5.5.101:443          1.39.15.205:29653       SYN_RECV
tcp        0      0 10.5.5.101:443          96.53.94.194:19611      SYN_RECV
tcp        0      0 10.5.5.101:443          89.77.132.239:45287     SYN_RECV
tcp        0      0 10.5.5.101:443          96.53.94.194:19599      SYN_RECV
tcp        0      0 10.5.5.101:443          96.53.94.194:19629      SYN_RECV
tcp        0      0 10.5.5.101:443          1.39.15.205:32231       SYN_RECV
tcp        0      0 10.5.5.101:443          58.11.176.101:53361     SYN_RECV
tcp        0      0 10.5.5.101:443          172.56.42.139:23182     SYN_RECV
tcp        0      0 10.5.5.101:443          96.53.94.194:19558      SYN_RECV
tcp        0      0 10.5.5.101:443          96.53.94.194:19683      SYN_RECV
tcp        0      0 10.5.5.101:443          1.39.15.205:23751       SYN_RECV
tcp        0      0 10.5.5.101:443          1.39.15.205:47675       SYN_RECV
tcp        0      0 10.5.5.101:443          101.177.230.216:61453   SYN_RECV
tcp        0      0 10.5.5.101:443          172.56.42.139:21113     SYN_RECV
tcp        0      0 10.5.5.101:443          1.39.15.205:5824        SYN_RECV
tcp        0      0 10.5.5.101:443          96.53.94.194:19676      SYN_RECV
tcp        0      0 10.5.5.101:443          87.211.18.55:61326      SYN_RECV
tcp        0      0 10.5.5.101:443          154.127.125.1:1746      SYN_RECV
tcp        0      0 10.5.5.101:443          96.53.94.194:19548      SYN_RECV
tcp        0      0 10.5.5.101:443          1.39.15.205:43152       SYN_RECV
tcp        0      0 10.5.5.101:443          96.53.94.194:19672      SYN_RECV
tcp        0      0 10.5.5.101:443          172.56.42.139:48737     SYN_RECV
tcp        0      0 10.5.5.101:443          1.39.15.205:12832       SYN_RECV
tcp        0      0 10.5.5.101:443          122.106.235.197:59220   SYN_RECV
tcp        0      0 10.5.5.101:443          1.39.15.205:56063       SYN_RECV
tcp        0      0 10.5.5.101:443          66.19.70.152:49996      SYN_RECV
tcp        0      0 10.5.5.101:443          96.53.94.194:19543      SYN_RECV
tcp        0      0 10.5.5.101:443          172.56.42.139:23791     SYN_RECV
tcp        0      0 10.5.5.101:443          1.39.15.205:42423       SYN_RECV
tcp        0      0 10.5.5.101:443          96.53.94.194:19650      SYN_RECV
tcp        0      0 10.5.5.101:443          1.39.15.205:6714        SYN_RECV
tcp        0      0 10.5.5.101:443          1.39.15.205:32364       SYN_RECV
tcp        0      0 10.5.5.101:443          96.53.94.194:19572      SYN_RECV
tcp        0      0 10.5.5.101:443          96.53.94.194:19575      SYN_RECV
tcp        0      0 0.0.0.0:514             0.0.0.0:*               LISTEN
tcp        0      0 10.2.3.101:22           0.0.0.0:*               LISTEN
tcp        0     48 10.2.3.101:22           10.2.1.2:24483          ESTABLISHED
tcp        0      0 :::514                  :::*                    LISTEN
udp        0      0 0.0.0.0:514             0.0.0.0:*
udp        0      0 :::514                  :::*
Active UNIX domain sockets (servers and established)
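Note that 10.5.5.101, the local address on all of those SYN_RECV entries, is
not configured on any interface in the ifconfig output above, so the
container appears to be seeing sockets from outside its own network
namespace. A quick way to check whether the container really has a separate
netns (a sketch, assuming the container name admn-101 on an LXC 1.x host):

  # Print the container's init PID:
  lxc-info -n admn-101 -p
  # Compare network-namespace inodes; identical links mean a shared namespace:
  readlink /proc/1/ns/net
  readlink /proc/<pid>/ns/net    # substitute the PID printed above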
> On 07/06/2016 12:04 PM, steve at linuxsuite.org wrote:
>>> How are these containers networked together? Are you using bridges on
>>> the host, or are you just bringing up new interfaces on the host?
>> I have a bridge for each interface. No interfaces on the host have
>> IPs except br1. I use veth in the config:
>>
>> lxc.network.type = veth
>> lxc.network.flags = up
>> lxc.network.link = br1
>> #lxc.network.hwaddr = fe:41:31:7f:5c:d6
>> lxc.network.veth.pair = admn101-1
>> lxc.network.ipv4 = 10.2.3.101/16
>> lxc.network.ipv4.gateway = 10.2.1.2
>>
>> lxc.network.type = veth
>> lxc.network.flags = up
>> lxc.network.link = br4
>> #lxc.network.hwaddr = fe:41:31:7f:5c:d6
>> lxc.network.veth.pair = admn101-4
>> lxc.network.ipv4 = 10.5.3.101/16
>>
>> [root at lxc100 ~]$ brctl show
>> bridge name     bridge id               STP enabled     interfaces
>> br1             8000.0024e85d25ea       no              admn101-1
>>                                                         em1
>>                                                         mfs101-1
>> br2             8000.0024e85d25ec       no              em2
>>                                                         mfs101-2
>> br3             8000.0024e85d25ee       no              em3
>>                                                         mfs101-3
>> br4             8000.0024e85d25f0       no              admn101-4
>>                                                         em4
>>                                                         mfs101-4
>> br5             8000.00151778923c       no              admn101-5
>>                                                         em5
>>
>
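For comparison, the quoted config amounts to roughly the following manual
plumbing with iproute2 and bridge-utils; a sketch only, with c-eth0 as a
hypothetical name for the container-side end (LXC normally creates and
moves it itself):

  # Create the veth pair; admn101-1 stays on the host:
  ip link add admn101-1 type veth peer name c-eth0
  # Attach the host end to the bridge and bring it up:
  brctl addif br1 admn101-1
  ip link set admn101-1 up
  # Hand the other end to the container's network namespace:
  ip link set c-eth0 netns <pid-of-container-init>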
> _______________________________________________
> lxc-users mailing list
> lxc-users at lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users