[Lxc-users] LXC on ESXi (help)

David Touzeau david at touzeau.eu
Tue May 17 18:43:06 UTC 2011


On Tuesday, 17 May 2011 at 17:37 +0200, Mauras Olivier wrote:
> Hello David,
> 
> As you can see, you only force the MAC address _inside_ the container;
> on the host, the veth endpoint gets a MAC that is out of the range ESX
> expects, and ESX doesn't seem to like that - at least that's my guess,
> because I have not been able to make it work correctly with this
> configuration.
> 
> First thing to check: make sure your ESX vswitch has promiscuous mode
> enabled - it's disabled by default.
> The next thing is to use a macvlan configuration for your containers.
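> 
> (On the promiscuous-mode point: on ESX 4.0 it is set in the vSphere
> Client, under the vSwitch properties -> Security -> Promiscuous Mode ->
> Accept. Later ESXi releases expose the same setting through esxcli; the
> invocation below is a best-guess sketch and the vswitch name is an
> assumption, so check both against your version:)
> 
> esxcli network vswitch standard policy security set \
>     --vswitch-name=vSwitch0 --allow-promiscuous=true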
> 
> Here's a network config example I use successfully in my containers:
> 
> lxc.utsname = lxc1
> lxc.network.type = macvlan
> lxc.network.macvlan.mode = bridge
> lxc.network.flags = up
> lxc.network.link = br1
> lxc.network.name = eth0
> lxc.network.mtu = 1500
> lxc.network.hwaddr = 00:50:56:3f:ff:00   # high enough MAC to not overlap with ESX assignments - 00 to FF gives quite a good number of guests :)
> lxc.network.ipv4 = 0.0.0.0               # the network is set inside the guest, for minimal guest modifications
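> 
> Adapted to the veth setup quoted below (bridge br5, 192.168.1.72), the
> macvlan equivalent would look roughly like this - an untested sketch,
> with the hwaddr just an example value picked from the same range:
> 
> lxc.network.type = macvlan
> lxc.network.macvlan.mode = bridge
> lxc.network.flags = up
> lxc.network.link = br5        # or eth1 directly, if you drop the bridge
> lxc.network.name = eth0
> lxc.network.mtu = 1500
> lxc.network.hwaddr = 00:50:56:3f:ff:01
> lxc.network.ipv4 = 0.0.0.0    # then configure 192.168.1.72 inside the guest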
> 
> 
> I find it a bit painful to have to configure another macvlan interface
> on the host to be able to communicate with the guests (see the sketch
> below), so I assign 2 interfaces to the host - the advantage of
> virtualization ;) - eth0 stays for the host network, and I set up a
> bridge over eth1, called br1, which is used for the containers.
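> 
> (That host-side workaround is roughly the following - the interface
> name and address are assumptions, pick any free address on your LAN.
> In macvlan bridge mode the parent interface itself cannot talk to the
> containers, but a second macvlan on the same parent can:)
> 
> ip link add link br1 name macvlan0 type macvlan mode bridge
> ip addr add 192.168.1.65/24 dev macvlan0   # example address
> ip link set macvlan0 up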
> 
> Network performance has been very good since I set it up this way, and
> it completely fixed the stability problems I had with veth.
> 
> 
> Tell me if you need some more details.
> 
> 
> Cheers,
> 
> Olivier 
> 
> 
> 
> On Tue, May 17, 2011 at 5:18 PM, David Touzeau <david at touzeau.eu>
> wrote:
>         Dear
>         
>         Following the last discussion, I tried changing the MAC address
>         to the form:
>         00:50:56:XX:YY:ZZ
>         The thread is here:
>         http://sourceforge.net/mailarchive/message.php?msg_id=27400968
>         
>         The container uses veth + bridge.
>         
>         The host is a virtual machine running on ESXi 4.0.
>         
>         The container can ping the host, and the host can ping the
>         container.
>         The issue is with the other computers on the network: they
>         cannot ping the container, and the container cannot ping the
>         network.
>         
>         Has anybody else encountered this issue?
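>         
>         (A quick way to see where the pings die: watch for ARP on the
>         host while another machine pings 192.168.1.72. If the requests
>         never show up on eth1, the ESX vswitch is dropping the frames
>         before they even reach the host:)
>         
>         tcpdump -ni eth1 arp
>         tcpdump -ni br5 arp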
>         
>         
>         Here is the ifconfig output of the host:
>         
>         br5        Link encap:Ethernet  HWaddr 00:0C:29:AD:40:A7
>                    inet addr:192.168.1.64  Bcast:192.168.1.255  Mask:255.255.255.0
>                    inet6 addr: fe80::20c:29ff:fead:40a7/64 Scope:Link
>                    UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>                    RX packets:607044 errors:0 dropped:0 overruns:0 frame:0
>                    TX packets:12087 errors:0 dropped:0 overruns:0 carrier:0
>                    collisions:0 txqueuelen:0
>                    RX bytes:54131332 (51.6 MiB)  TX bytes:6350221 (6.0 MiB)
>         
>         eth1       Link encap:Ethernet  HWaddr 00:0C:29:AD:40:A7
>                    inet6 addr: fe80::20c:29ff:fead:40a7/64 Scope:Link
>                    UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
>                    RX packets:611474 errors:0 dropped:0 overruns:0 frame:0
>                    TX packets:13813 errors:0 dropped:0 overruns:0 carrier:0
>                    collisions:0 txqueuelen:1000
>                    RX bytes:63127550 (60.2 MiB)  TX bytes:6638350 (6.3 MiB)
>                    Interrupt:18 Base address:0x2000
>         
>         vethZS6zKh Link encap:Ethernet  HWaddr 5E:AE:96:7C:4B:D7
>                    inet6 addr: fe80::5cae:96ff:fe7c:4bd7/64 Scope:Link
>                    UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
>                    RX packets:56 errors:0 dropped:0 overruns:0 frame:0
>                    TX packets:3875 errors:0 dropped:0 overruns:0 carrier:0
>                    collisions:0 txqueuelen:1000
>                    RX bytes:3756 (3.6 KiB)  TX bytes:437097 (426.8 KiB)
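>         
>         (Note: eth1 and br5 share the same MAC, so eth1 is presumably
>         enslaved to the bridge. To confirm that both eth1 and the
>         container's veth are attached to it:)
>         
>         brctl show br5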
>         
>         
>         
>         
>         Container settings:
>         
>         lxc.tty = 4
>         lxc.pts = 1024
>         lxc.network.type = veth
>         lxc.network.link = br5
>         lxc.network.ipv4 = 192.168.1.72
>         lxc.network.hwaddr = 00:50:56:a5:af:30
>         lxc.network.name = eth0
>         lxc.network.flags = up
>         lxc.cgroup.memory.limit_in_bytes = 128M
>         lxc.cgroup.memory.memsw.limit_in_bytes = 512M
>         lxc.cgroup.cpu.shares = 1024
>         lxc.cgroup.cpuset.cpus = 0
>         
> 



Thanks Olivier,

I will try it out, for the fun of it!




