[lxc-users] How to set up a static IP in a container with LX[C|D] 2.0.0.*

Witold Filipczyk gglater62 at gmail.com
Mon Jan 23 17:01:07 UTC 2017


On Sat, Mar 19, 2016 at 06:21:51AM +0700, Fajar A. Nugraha wrote:
> On Fri, Mar 18, 2016 at 11:09 PM, Sean McNamara <smcnam at gmail.com> wrote:
> > As part of that, I was expecting some way to tell LXD to restrict the
> > IP addresses that can be claimed/used by a given container. For
> > instance, if I have a public Internet IPv4 /26 allocated to a physical
> > host by a hosting provider, I'll want to assign only one or two IP
> > addresses to each container. Currently, I can have an LXD container
> > just spuriously decide to use any arbitrary IP, and I haven't found a
> > way to prevent it from doing that if an untrusted user has root access
> > in the container. They can just run ifconfig and specify the IP
> > address they want to use.
> 
> The same is true for lxd, lxc, xen, kvm, and any other solution that uses a
> plain bridge.
> 
> 
> > How can I configure the host environment (LXD or something else on the
> > host, assuming I'm running a very recent Ubuntu 16.04 Beta nightly) so
> > that no packets can be transmitted to/from the guest unless the guest
> > is using a specific IP or set of IPs? I also want to make sure that no
> > broadcasting is occurring; i.e., the root user in the container should
> > not be able to sniff layer 2 and see all the packets going to all the
> > other containers.
> >
> > ...Or is LXD not suitable for this use case? If it isn't, will it ever be?
> 
> 
> As Stephane said, it's not lxc's job (or xen's, or kvm's) when using a bridge.
> 
> That being said, I've been using a workaround for lxc, which can be adapted to lxd:
> - use a veth interface for the container, with a persistent name (e.g.
> veth-r-0), without a bridge. For lxd this is slightly more complicated, as it
> doesn't allow you to define a bridgeless veth interface. I had to create the
> interface and assign it to a custom bridge (e.g. br-dummy), then have a
> script take it out of the bridge later
> 
> - configure the host and the container to use the veth pair as a
> point-to-point link
> 
> - configure the host to use proxy ARP (and optionally have the host send an
> ARP update for the container's IP, so that other hosts on the network know
> about the change immediately)
> 
> 
> The host configuration part looks like this
> 
> ### container device config (lxc config device show CONTAINER_NAME),
> relevant nic config only
> eth0:
>   host_name: veth-r-0
>   name: eth0
>   nictype: bridged
>   parent: br-dummy
>   type: nic
> ###

Can it be done with nictype 'p2p' somehow?
Could you provide the lxd commands to achieve this?
The number of examples in the documentation is too low, IMO.
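
My guess is that the nic shown above could be added with a single command,
something like this (untested; CONTAINER_NAME and br-dummy are taken from the
example, and br-dummy would still have to exist on the host first):

###
lxc config device add CONTAINER_NAME eth0 nic nictype=bridged \
    parent=br-dummy host_name=veth-r-0 name=eth0
###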



> 
> ### /etc/sysctl.d/50-proxyarp.conf
> net.ipv4.conf.eth0.proxy_arp=1
> net.ipv4.ip_forward = 1
> ###
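
If I'm not mistaken, those settings can be applied without a reboot by loading
the file directly:

###
sysctl -p /etc/sysctl.d/50-proxyarp.conf
###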
> 
> ### addition to /etc/network/interfaces ###
> # dummy bridge. Needed later for bridgeless container
> auto br-dummy
> iface br-dummy inet manual
>         bridge_ports none
> 
> # host veth pair config
> allow-hotplug veth-r-0
> iface veth-r-0 inet static
>         address 10.0.0.1/32
>         scope link
>         pointopoint 192.168.0.101
>         up /etc/lxc/script/proxyarp 192.168.0.101 192.168.0.1 eth0
>         up sleep 1 && brctl delif br-dummy veth-r-0
> ###
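
For what it's worth, on a host that doesn't use ifupdown the same thing could
probably be done at runtime with plain iproute2, roughly like this (untested,
and it only makes sense once the container has started and veth-r-0 exists):

###
# point-to-point address towards the container, matching the stanza above
ip addr add 10.0.0.1 peer 192.168.0.101 dev veth-r-0 scope link
ip link set veth-r-0 up
# take the interface out of the dummy bridge
brctl delif br-dummy veth-r-0
# announce the container's IP to the upstream gateway
/etc/lxc/script/proxyarp 192.168.0.101 192.168.0.1 eth0
###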
> 
> ### /etc/lxc/script/proxyarp ###
> #!/bin/bash
> IP=$1   # container's IP address
> GW=$2   # upstream gateway to notify
> DEV=$3  # host's outward-facing interface
> # temporarily claim the container's IP on the host interface, send one ARP
> # request to the gateway sourced from that IP (so the gateway learns this
> # host's MAC for it), then release the IP again
> ip addr add dev $DEV $IP/32
> arping -c 1 -I $DEV -s $IP $GW
> ip addr del dev $DEV $IP/32
> ###
> 
> The container's /etc/network/interfaces looks something like this
> ###
> auto lo
> iface lo inet loopback
> 
> auto eth0
> iface eth0 inet static
>         address 192.168.0.101/32
>         pointopoint 10.0.0.1
>         gateway 10.0.0.1
>         dns-nameservers 8.8.8.8 8.8.4.4
> ###
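
Inside the container, the runtime equivalent without ifupdown would presumably
be something along these lines (untested):

###
# point-to-point address towards the host, plus a default route through it
ip addr add 192.168.0.101 peer 10.0.0.1 dev eth0
ip link set eth0 up
ip route add default via 10.0.0.1 dev eth0
echo "nameserver 8.8.8.8" > /etc/resolv.conf
###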
> 
> 
> With that setup, the container is only connected to the host (through the
> veth pair) and does not share L2 networking with other containers (so things
> like snooping other containers' traffic won't work). If the container
> changes its network config, it simply can't connect anywhere.
> 
> It works for me, but might not be suitable for a mass-hosting setup (due to
> the additions needed in the host's /etc/network/interfaces). In that case
> it'd probably be more suitable to use something like openvswitch and perform
> the configuration there.
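
For completeness, a per-port source-IP filter on openvswitch could look roughly
like this (only a sketch; it assumes the container's host-side veth, called
vethX here, is already a port on an OVS bridge called br-ovs):

###
# allow only traffic sourced from the container's assigned IP, drop the rest
ovs-ofctl add-flow br-ovs "priority=100,in_port=vethX,ip,nw_src=192.168.0.101,actions=NORMAL"
ovs-ofctl add-flow br-ovs "priority=100,in_port=vethX,arp,arp_spa=192.168.0.101,actions=NORMAL"
ovs-ofctl add-flow br-ovs "priority=90,in_port=vethX,actions=drop"
###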


