Hello David,

As you can see, you only force the MAC address _inside_ the container; on the host, the veth peer gets a MAC that is outside the range ESX expects, and ESX doesn't seem to like that. At least that's my guess, because I have not been able to make it work correctly with this configuration.

The first thing to check is that promiscuous mode is enabled on your ESX vSwitch; it is disabled by default. The next thing is to use a macvlan configuration for your containers.

Here's a network config example I use successfully in my containers:
lxc.utsname = lxc1
lxc.network.type = macvlan
lxc.network.macvlan.mode = bridge
lxc.network.flags = up
lxc.network.link = br1
lxc.network.name = eth0
lxc.network.mtu = 1500
lxc.network.hwaddr = 00:50:56:3f:ff:00   # high enough to not overlap with ESX's own MAC assignments; 00 to FF in the last byte gives quite a good number of guests :)
lxc.network.ipv4 = 0.0.0.0               # I set the network inside the guest, for minimal guest modifications
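
For reference, the matching network setup inside the guest would then look something like this (just a sketch assuming a Debian-style container on your 192.168.1.0/24 network; the 192.168.1.1 gateway is only an example, adjust to your LAN):

# /etc/network/interfaces inside the container
auto eth0
iface eth0 inet static
    address 192.168.1.72      # the address you would otherwise have put in lxc.network.ipv4
    netmask 255.255.255.0
    gateway 192.168.1.1       # example default gateway, adjust to your router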

I find it a bit painful to have to configure another macvlan interface on the host just to be able to talk to the guests, so I'm assigning two interfaces to the host - the advantage of virtualization ;) - eth0 stays for the host network, and I set up a bridge over eth1, called br1, which is used for the containers.
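
For completeness, the br1 setup on the host looks roughly like this (a sketch assuming a Debian/Ubuntu-style host with bridge-utils installed; eth0/eth1/br1 are the names used in this thread):

# /etc/network/interfaces on the host (the eth0 stanza stays as it is)
auto br1
iface br1 inet manual        # the bridge carries only container traffic, so it needs no IP
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0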

Network performance has been very good since I set things up this way, and it completely fixed the stability problems I had with veth.

Tell me if you need more details.

Cheers,
Olivier

On Tue, May 17, 2011 at 5:18 PM, David Touzeau <david@touzeau.eu> wrote:

Dear list,

Following our last discussion, I have tried changing the MAC address to:
00:50:56:XX:YY:ZZ
The thread is here:
http://sourceforge.net/mailarchive/message.php?msg_id=27400968

The container uses veth + a bridge.

The host is a virtual machine running on ESXi 4.0.

The container can ping the host, and the host can ping the container.
The issue is with the other computers on the network: they cannot ping the container, and the container cannot ping the network.

Has anybody else encountered this issue?

Here is the ifconfig output of the host:

br5        Link encap:Ethernet  HWaddr 00:0C:29:AD:40:A7
           inet addr:192.168.1.64  Bcast:192.168.1.255  Mask:255.255.255.0
           inet6 addr: fe80::20c:29ff:fead:40a7/64 Scope:Link
           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
           RX packets:607044 errors:0 dropped:0 overruns:0 frame:0
           TX packets:12087 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:0
           RX bytes:54131332 (51.6 MiB)  TX bytes:6350221 (6.0 MiB)

eth1       Link encap:Ethernet  HWaddr 00:0C:29:AD:40:A7
           inet6 addr: fe80::20c:29ff:fead:40a7/64 Scope:Link
           UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
           RX packets:611474 errors:0 dropped:0 overruns:0 frame:0
           TX packets:13813 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:1000
           RX bytes:63127550 (60.2 MiB)  TX bytes:6638350 (6.3 MiB)
           Interrupt:18 Base address:0x2000

vethZS6zKh Link encap:Ethernet  HWaddr 5E:AE:96:7C:4B:D7
           inet6 addr: fe80::5cae:96ff:fe7c:4bd7/64 Scope:Link
           UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
           RX packets:56 errors:0 dropped:0 overruns:0 frame:0
           TX packets:3875 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:1000
           RX bytes:3756 (3.6 KiB)  TX bytes:437097 (426.8 KiB)

Container settings:

lxc.tty = 4
lxc.pts = 1024
lxc.network.type = veth
lxc.network.link = br5
lxc.network.ipv4 = 192.168.1.72
lxc.network.hwaddr = 00:50:56:a5:af:30
lxc.network.name = eth0
lxc.network.flags = up
lxc.cgroup.memory.limit_in_bytes = 128M
lxc.cgroup.memory.memsw.limit_in_bytes = 512M
lxc.cgroup.cpu.shares = 1024
lxc.cgroup.cpuset.cpus = 0