[lxc-users] LXC - Best way to avoid networking changes in a container

Benoit GEORGELIN - Association Web4all benoit.georgelin at web4all.fr
Mon Aug 17 04:00:54 UTC 2015


Hi, 

This is what I finally did with Open vSwitch: 

ovs-ofctl del-flows vswitch-vps 
ovs-ofctl add-flow vswitch-vps "in_port=PORT_GW ip actions=NORMAL" 
ovs-ofctl add-flow vswitch-vps "in_port=PORT_GW arp actions=NORMAL" 

# default drop communication with HOST_A 
ovs-ofctl add-flow vswitch-vps "in_port=PORT_HOST_A priority=38000 idle_timeout=0 action=drop" 

# default drop communication with HOST_B 
ovs-ofctl add-flow vswitch-vps "in_port=PORT_HOST_B priority=38000 idle_timeout=0 action=drop" 

# Allow GW communication + Hypervisor 
ovs-ofctl add-flow vswitch-vps "in_port=PORT_GW priority=39000 dl_type=0x0800 nw_src=IP_GW dl_src=MAC_GW idle_timeout=0 action=normal" 
ovs-ofctl add-flow vswitch-vps "in_port=PORT_GW priority=38500 dl_type=0x0806 dl_src=MAC_GW idle_timeout=0 action=normal" 

# Allow HOST A 
ovs-ofctl add-flow vswitch-vps "in_port=PORT_HOST_A priority=38400 dl_type=0x0800 nw_src=IP_HOST_A dl_src=MAC_HOST_A idle_timeout=0 action=normal" 
ovs-ofctl add-flow vswitch-vps "in_port=PORT_HOST_A priority=38300 dl_type=0x0806 dl_src=MAC_HOST_A idle_timeout=0 action=normal" 

# Allow HOST B 
ovs-ofctl add-flow vswitch-vps "in_port=PORT_HOST_B priority=38400 dl_type=0x0800 nw_src=IP_HOST_B dl_src=MAC_HOST_B idle_timeout=0 action=normal" 
ovs-ofctl add-flow vswitch-vps "in_port=PORT_HOST_B priority=38300 dl_type=0x0806 dl_src=MAC_HOST_B idle_timeout=0 action=normal" 
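Since the per-host rules above follow the same pattern (default drop, then allow IPv4 from one IP/MAC pair, then allow ARP from that MAC), they can be wrapped in a small script. This is only a sketch: the bridge name, port numbers, IPs and MACs below are placeholders for your own setup, and the DRY_RUN flag prints the commands instead of executing them.

```shell
#!/bin/sh
# Sketch: install the anti-spoofing flows shown above for each host port.
# All values here are placeholders -- substitute your own bridge, ports,
# MACs and IPs. DRY_RUN=1 (the default) prints commands instead of running them.
BRIDGE=vswitch-vps
DRY_RUN=${DRY_RUN:-1}

ofctl() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "ovs-ofctl $*"
    else
        ovs-ofctl "$@"
    fi
}

# allow_host PORT IP MAC -- permit only this IP/MAC pair on the port
allow_host() {
    port=$1; ip=$2; mac=$3
    # default: drop everything arriving on this port
    ofctl add-flow "$BRIDGE" "in_port=$port priority=38000 idle_timeout=0 action=drop"
    # allow IPv4 traffic only from the assigned IP/MAC pair
    ofctl add-flow "$BRIDGE" "in_port=$port priority=38400 dl_type=0x0800 nw_src=$ip dl_src=$mac idle_timeout=0 action=normal"
    # allow ARP only from the assigned MAC
    ofctl add-flow "$BRIDGE" "in_port=$port priority=38300 dl_type=0x0806 dl_src=$mac idle_timeout=0 action=normal"
}

allow_host 2 192.168.0.100 aa:bb:cc:00:00:01   # container A (placeholder values)
allow_host 3 192.168.0.200 aa:bb:cc:00:00:02   # container B (placeholder values)
```

Run once with DRY_RUN=1 to review the generated commands, then with DRY_RUN=0 to apply them.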


To find port numbers: 

ovs-ofctl show BRIDGE 
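If you want to script the port lookup rather than read it off by eye, one option is to parse the `ovs-ofctl show` output, where each port appears on a line like ` 2(veth-a): addr:...`. A minimal sketch (the bridge and interface names are placeholders; `ovs-vsctl get Interface IFACE ofport` is another way to get the same number directly):

```shell
# Sketch: extract the OpenFlow port number for a named interface from
# `ovs-ofctl show BRIDGE` output read on stdin.
# Port lines look like:  2(veth-a): addr:aa:bb:cc:00:00:01
port_of() {
    # $1 = interface name; prints the leading port number of the matching line
    sed -n "s/^ *\([0-9][0-9]*\)($1):.*/\1/p"
}

# Usage against a live switch (names are placeholders):
#   ovs-ofctl show vswitch-vps | port_of veth-a
```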


Best regards, 

Benoît Georgelin - 
To help protect the environment, please print this email only if necessary 


From: "Benoit GEORGELIN - Association Web4all" <benoit.georgelin at web4all.fr> 
To: "gustavo panizzo (gfa)" <gfa at zumbi.com.ar> 
Cc: "lxc-users" <lxc-users at lists.linuxcontainers.org> 
Sent: Friday, June 26, 2015 09:31:09 
Subject: Re: [lxc-users] LXC - Best way to avoid networking changes in a container 

Thanks for the link. 
I will consider this option too. This should be an interesting configuration. 
I'm surprised there isn't more discussion about it. 

Best regards, 


----- Original Message ----- 
From: "gustavo panizzo (gfa)" <gfa at zumbi.com.ar> 
To: "lxc-users" <lxc-users at lists.linuxcontainers.org>, "Benoit GEORGELIN - Association Web4all" <benoit.georgelin at web4all.fr> 
Sent: Friday, June 26, 2015 09:23:49 
Subject: Re: [lxc-users] LXC - Best way to avoid networking changes in a container 

You can configure Open vSwitch to drop the packets if the MAC address and/or 
IP does not match, or you can use an SDN controller, which will do it for 
you. 

See http://openvswitch.org/pipermail/discuss/2011-May/005112.html for an 
example of how to do it manually. 


On June 26, 2015 11:59:04 AM GMT+08:00, Benoit GEORGELIN - Association Web4all <benoit.georgelin at web4all.fr> wrote: 


Hi, 

I'm looking to avoid network changes in an LXC container with root access while the system is up and running. 

Let's say I have two containers running. 

A: 192.168.0.100/24 
B: 192.168.0.200/24 

They are both on the same private network but it can be a public network too. 
How can I prevent the root user of container B from changing its IP address and using the IP address of container A? 

The container network is built on top of an Open vSwitch bridge. Maybe there is a way to restrict the MAC address and IP for a specific port? I did not see any such option. 

Thanks for your advice! 

Best regards, 
Benoit G 





lxc-users mailing list 
lxc-users at lists.linuxcontainers.org 
http://lists.linuxcontainers.org/listinfo/lxc-users 




-- 
1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333 

Sent from mobile. 


