[Lxc-users] networking query

Daniel Lezcano daniel.lezcano at free.fr
Thu Jul 29 13:17:05 UTC 2010


On 07/29/2010 11:43 AM, Andy Billington wrote:
> On 29/07/2010 10:32, Daniel Lezcano wrote:
>> On 07/29/2010 01:47 AM, Andy Billington wrote:
>>> Firstly, am just starting to look at LXC as a possible migration 
>>> from OpenSolaris, so excuse me if question is obvious.
>>> Reading what I have found so far, it seems clear that with a bridged 
>>> interface on the global side, the Containers can all have separate 
>>> network info (different IPs, subnets) and so on. The question I have 
>>> is can each container run an independent, totally isolated IP stack 
>>> (like OpenSolaris Crossbow) including completely separate routing 
>>> tables and IPSec configurations?
>>
>> Yes, each container has its own private network stack; the 
>> virtualization begins at the L2 layer. The container will have its 
>> own network interfaces. On the kernel side, this was implemented by 
>> allowing a new network stack (a network namespace) to be allocated 
>> dynamically via a syscall.
>>
>> I don't use ipsec within a container, but as far as I remember it 
>> was implemented two years ago, right after the core network 
>> virtualization was merged, so I think it is supported per container.
>>
>>> The problem I'm investigating is that I currently have two Zones in 
>>> Solaris, call them Z1 (10.1.1.1/24) and Z2 (10.1.2.1/24). These then 
>>> talk to customer networks via IPSec; call them Customer1 and 
>>> Customer2. The "fun" part is the Customer networking: Customer1 uses 
>>> 192.168.1.0/24 as their internal range (ie. "behind" the VPN tunnel, 
>>> my IPSec emerges on 192.168.1.252), and Customer2 uses 
>>> 192.168.0.0/16 as their internal range. So, overlapping ranges.
>>
>> ok.
>>
>>> Z1 talks to Customer1, Z2 talks to Customer2, it is critical they 
>>> cannot "see" each other. Crossbow is doing it just fine; 
>>
>> I am not sure I understand "they cannot see each other" - can you 
>> elaborate a bit?
>>
> Z1 and Customer1 traffic must be able to route between each other, but 
> not reach either Z2 or Customer2

Ok. A container can be set up either as an additional host on the same 
network, or on a separate network where the host acts as a router 
between the real network and the virtualized one. So you can think of 
a container as behaving like a host plugged into your network, or like 
a host behind an additional router. If the containers are running on 
the same host, you can use the macvlan configuration, where the 
traffic is totally private to each container (Z1 and Z2 cannot 
communicate with each other, nor with the host).
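
For illustration, a minimal lxc configuration for such an isolated 
macvlan setup might look like the sketch below (the mode name comes 
from the lxc.conf man page; eth0 is an assumed link device - adjust it 
to your nic):

```
lxc.network.type = macvlan
lxc.network.macvlan.mode = private
lxc.network.link = eth0
lxc.network.flags = up
```

In private mode, the macvlan interfaces cannot talk to each other or 
to the underlying device, which matches the isolation you describe.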

With the VPN, I am not sure the IP addresses will actually overlap 
between Z1 and Z2, will they? The 192.168.0.0 addresses within the 
container will go through the VPN interface doing IP-in-IP 
encapsulation, and so leave via the network assigned to the container, 
right?


Sorry, I don't know exactly the network topology you are using or 
aiming for, so maybe I am wrong.

>>> can LXC do the same thing?
>>
>> I never tried this configuration, but at first glance I think the 
>> linux kernel supports it.
>> Maybe someone on this mailing list tried that ...
>>
>> If you expect LXC to do the VPN setup for you, that is not (yet) 
>> supported.
> That's fine
>>
>> If you expect to run a virtualized system like ubuntu inside a 
>> container, you can configure that system to create a vpn/ipsec by 
>> installing openvpn and whatever else you need, just like on any 
>> real host. This amounts to creating an appliance (there are some 
>> basic appliances available for lxc that you can improve).
>>
> It has got to be a full IPSec implementation, as in the future some 
> Cisco IOS endpoints are joining in. I was going to use 
> Racoon/IPsec-tools?



>>> If LXC can do it, are there any gotchas or suggestions as to the 
>>> best choice for IPSec setup/configuration?
>>
>> To test that, I suggest creating an ubuntu system (on an ubuntu 
>> host) via the command:
>>
>> lxc-create -n Z1 -f lxc.conf -t ubuntu
>>
>> where lxc.conf is:
>>
>> lxc.network.type=veth
>> lxc.network.link=br0
>> lxc.network.flags=up
>>
>> This assumes you have a bridge br0 set up on your host with your 
>> nic attached to it.
>>
>> Then start the container:
>>
>> lxc-start -n Z1
>>
>> You will get a console, and you can log in with user: root / pwd: root
>>
>> At this point you can install/configure your container with openvpn.
>>
>> Hope that helps
>>   -- Daniel
>>
> Thank you! One completely unrelated question: is there an LXC way to 
> de-duplicate on storage for Containers? The Z1 virtual machine and the 
> Z2 virtual machine will be 95% identical, so I don't really want to 
> have disks eaten up with two copies of identical files.

Yes, lxc is like a swiss army knife: flexible for a lot of purposes, 
but at the cost of requiring an administrator or developer to write 
some scripts on top to automate container creation.

In your case, here are a couple of examples:

  1 - use btrfs (a COW filesystem) where you install your bare system 
for Z1, Z2, ... . Create a snapshot of this bare system, which results 
in a directory for Z1, and repeat the operation for Z2. At this point, 
the contents of Z1 and Z2 are identical to 'bare' and the size of the 
fs is equal to the content of 'bare' alone. Set 
lxc.rootfs=/mnt/btrfs/Z1 in the configuration file as the rootfs for 
Z1, and so on for the other Zx. Any modification made to these rootfs 
will be private to the container and COWed.

I wouldn't recommend this solution for now, because btrfs is still 
"experimental" and a bit unstable - but if nobody uses it, it will 
stay unstable ;)
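As a concrete sketch of the btrfs approach (using the /mnt/btrfs paths 
from above; the commands must run as root on a mounted btrfs 
filesystem, and debootstrap is just one possible way to install the 
bare system):

```
# create the shared base as a btrfs subvolume and install the
# bare system into it
btrfs subvolume create /mnt/btrfs/bare
debootstrap lucid /mnt/btrfs/bare        # or any other installer

# snapshot it once per container; snapshots are COW, so Z1 and
# Z2 initially consume (almost) no additional disk space
btrfs subvolume snapshot /mnt/btrfs/bare /mnt/btrfs/Z1
btrfs subvolume snapshot /mnt/btrfs/bare /mnt/btrfs/Z2
```

Each snapshot then serves as the lxc.rootfs for its container.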

2 - install a rootfs containing the parts common to Z1 and Z2 and use 
read-only (ro) bind mounts. For instance, you can use your host system 
as the base for Z1 and Z2 and ro-bind mount its directories.

For example:

Create a couple of empty FHS directory trees in 
/tmp/container/rootfs/Z1 and Z2,

and in the lxc configuration file:

...

lxc.mount.entry=/dev /tmp/container/rootfs/Z1/dev none ro,bind 0 0
lxc.mount.entry=/lib /tmp/container/rootfs/Z1/lib none ro,bind 0 0
lxc.mount.entry=/bin /tmp/container/rootfs/Z1/bin none ro,bind 0 0
lxc.mount.entry=/usr /tmp/container/rootfs/Z1/usr none ro,bind 0 0
lxc.mount.entry=/sbin /tmp/container/rootfs/Z1/sbin none ro,bind 0 0
lxc.rootfs=/tmp/container/rootfs/Z1
...

You can do the same with a separate rootfs (common to Z1 and Z2) if 
you don't want to share the filesystem between the host and the 
containers.
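To create the empty FHS skeletons mentioned above, something like the 
following would do (the exact directory list is an assumption; add 
whatever else your distribution expects):

```shell
#!/bin/sh
# Create minimal empty FHS trees for two containers. The mount
# points for the ro-bind mounts (dev, lib, bin, usr, sbin) must
# exist before lxc-start can mount over them.
for zone in Z1 Z2; do
    for dir in dev lib bin usr sbin etc var/log tmp proc sys root home; do
        mkdir -p "/tmp/container/rootfs/$zone/$dir"
    done
done
```

The writable directories (etc, var, ...) stay private to each 
container, while the ro-bind mounts share the common binaries.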

Thanks
   -- Daniel
