[lxc-users] Macvlan

Dave Smith dave.smith at candata.com
Mon Jun 1 13:10:37 UTC 2015


For those googling later: if you are using macvlan, the container's IP has to
be on the same subnet as the "host device", which was not the case here. I
ended up using a bridge with a non-public IP as suggested and that worked
fine; a rough sketch of it is below. I never could find anything on a
bridgeless veth setup and would be curious to read about it. If someone could
point me to a link, that would be great.
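
For reference, what I ended up with looks roughly like this (x.x.x.x stands
for the extra public IP, 10.0.3.1 is lxcbr0's address from the default LXC
setup; treat it as a sketch rather than my exact files):

# container config
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.network.ipv4 = x.x.x.x/32
lxc.network.ipv4.gateway = 10.0.3.1

# on the host: allow forwarding and route the extra IP to the bridge
sysctl -w net.ipv4.ip_forward=1
ip route add x.x.x.x/32 dev lxcbr0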


On Sat, May 30, 2015 at 7:27 PM, Fajar A. Nugraha <list at fajar.net> wrote:

> On Sun, May 31, 2015 at 3:22 AM, Dave Smith <dave.smith at candata.com>
> wrote:
> > I am trying to set up a public IP via macvlan to a container. The host
> > has a public IP and then 2 extra static public IPs on one physical
> > interface (bond1) that is assigned to it by my upstream vendor. In my
> > config I have
>
> Did your upstream provider allow additional MAC addresses on your switch
> port?
>
> >
> > lxc.network.type = macvlan
> > lxc.network.flags = up
> > lxc.network.link = bond1
> > lxc.network.name = eth0
> > lxc.network.ipv4 = x.x.x.x/32 x.x.x.x
> > lxc.network.ipv4.gateway = x.x.x.x
> >
> > where x.x.x.x is the public static IP I want to use
>
>
> Assuming you have lxcbr0 (should be automatically created), try this
>
> lxc.network.type = veth
> lxc.network.flags = up
> lxc.network.link = lxcbr0
> lxc.network.ipv4 = x.x.x.x/32
> lxc.network.ipv4.gateway = 10.0.3.1
>
> ... where 10.0.3.1 is lxcbr0's IP address. This will work if:
> - your provider routes the additional IP through your main IP. That should
> be the case if your main IP and the additional IPs are on different subnets
> - you disable any networking setup on the container's OS side, since you
> already set it up in the lxc config file (see the example below)
> - on the host side, you run "ip route add x.x.x.x/32 dev lxcbr0" (or
> something similar) to tell the host that the container's IP is reachable
> through lxcbr0
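>
> For example, on a Debian/Ubuntu-style container using ifupdown (an
> assumption; adjust for your distro), disabling the OS-side setup would
> mean leaving eth0 unconfigured in /etc/network/interfaces, roughly:
>
> # /etc/network/interfaces inside the container
> auto lo
> iface lo inet loopback
>
> # do not configure eth0 here; lxc.network.ipv4/gateway handle it
> iface eth0 inet manual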
>
> > netstat -nr
> > Kernel IP routing table
> > Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
> > 0.0.0.0         x.x.x.x         0.0.0.0         UG        0 0          0 eth0
>
> There should be an additional entry, saying how to reach the gateway
> from the container. Something like this
>
> # netstat -nr
> Kernel IP routing table
> Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
> 0.0.0.0         10.0.3.1        0.0.0.0         UG        0 0          0 eth0
> 10.0.3.1        0.0.0.0         255.255.255.255 UH        0 0          0 eth0
>
>
> >
> >  ip -d link show eth0
> > 56: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UNKNOWN
> >     link/ether e6:9d:bf:fb:95:c7 brd ff:ff:ff:ff:ff:ff
> >     macvlan  mode private
> >
> >
> > Now when I ping out from my container (to google.ca) I see the packet
> > going out and coming back (using tcpdump -e) on the bond1 interface, but
> > my container never receives it. There are no iptables rules on either
> > the host or in the container.
> >
>
>
> If you use macvlan or bridge the host's public interface (eth0,
> bond0, etc), then you wouldn't use /32. You'd use the same netmask and
> gateway as you do on the host, and your provider will need to allow
> more than one MAC on your port. This way the container will be just
> like any other physical host on the same broadcast network as the host
> (e.g. /24); see the sketch below.
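>
> As a sketch (not a tested config), a same-subnet macvlan setup would look
> something like this, assuming bond1 sits on a.b.c.0/24 with gateway
> a.b.c.1; bridge mode is the usual choice so containers on the same parent
> interface can reach each other:
>
> lxc.network.type = macvlan
> lxc.network.macvlan.mode = bridge
> lxc.network.flags = up
> lxc.network.link = bond1
> lxc.network.name = eth0
> lxc.network.ipv4 = x.x.x.x/24
> lxc.network.ipv4.gateway = a.b.c.1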
>
> If you CAN'T use the same netmask and gateway as the host (e.g. when
> your provider gives additional IPs that are on a different subnet),
> then you CAN'T use macvlan (or bridge the host's public interface).
> Use a routed setup like my example instead. You can either use lxcbr0,
> create your own bridge, or use a bridgeless veth setup (not covered
> here, search the archives for details).
>
> --
> Fajar