[Lxc-users] LXC a feature complete replacement of OpenVZ?

Gordon Henderson gordon at drogon.net
Thu May 13 17:17:22 UTC 2010


On Thu, 13 May 2010, Christian Haintz wrote:

> Hi,
>
> At first, LXC seems to be great work from what we have read already.
>
> There are still a few open questions for us (we are currently running
> dozens of OpenVZ hardware nodes).

I can't answer for the developers, but here are my answers/observations 
based on what I've seen and used ...

> 1) OpenVZ in the long-term seems to be a dead end. Will LXC be a
> feature complete replacement for OpenVZ in the 1.0 version?

I looked at OpenVZ and while it looked promising, it didn't seem to be 
going anywhere. I also struggled to get their patches into a recent kernel 
and it looked like there was no Debian support for it. LXC was in the 
kernel as standard - I doubt it'll come out now... (and there is a 
back-ported LXC Debian package that works fine under Lenny)


> As of the current version:
> 2) is there iptables support - any sort of control like the OpenVZ
> iptables config?

I run iptables - and in some cases different iptables setups in each 
container on a host (which also has its own iptables).

Seems to "just work". Each container has an eth0 and the host has a br0 
(as well as an eth0).
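
For what it's worth, the plumbing is just a standard Debian bridge on the 
host plus a veth entry in each container's config. A minimal sketch 
(addresses made up; assumes the bridge-utils package is installed):

    # /etc/network/interfaces on the host
    auto br0
    iface br0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth0
        bridge_fd 0

    # and in the container's lxc config
    lxc.network.type = veth
    lxc.network.link = br0
    lxc.network.name = eth0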

Logging is at the kernel level though, so it goes into the log files on 
the host rather than in the container - it may be possible to isolate 
that, but it's not something I'm too bothered with.

My iptables are just shell-scripts that get called as part of the boot 
sequence - I really don't know what sort of control OpenVZ gives you.
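
There's nothing LXC-specific about the scripts, either - each container 
just runs its own at boot, something along these lines (the ports here are 
only an example):

    #!/bin/sh
    # default-deny inbound; allow loopback, replies, ssh and http
    iptables -F
    iptables -P INPUT DROP
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j ACCEPT
    iptables -A INPUT -p tcp --dport 80 -j ACCEPT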


> 3) Is there support for tun/tap devices?

Doesn't look like it yet...

http://www.mail-archive.com/lxc-users@lists.sourceforge.net/msg00239.html


> 4) is there support for correct memory info and disk space info? (are
> df and top showing the container's resources or the resources of the
> hardware node?)

Something I'm looking at myself - top shows only your own processes, but 
the CPU usage is for the whole machine. df I can get working by 
manipulating /etc/mtab - then it shows the size of the entire partition 
the container is sitting on. I'm not doing anything 'clever' like creating 
a file and loopback-mounting it - all my containers on a host are 
currently on the same partition. I'm not looking to give fixed-size disks 
to each container though. YMMV.
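
The mtab trick is nothing fancy - I just write a plausible root entry from 
the container's boot scripts so df has something to report (the device 
name here is made up; use whatever partition the container really sits 
on):

    # run early in the container's boot sequence
    echo '/dev/sda1 / ext3 rw,noatime 0 0' > /etc/mtab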

However, gathering CPU stats for each container is something I am 
interested in - I was about to post to the list about it. I think there 
are files (on the host) under /cgroup/container-name/cpuacct.stat and a 
few others which might help, but I'm going to have to look them up...
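
If it's the files I'm thinking of, a quick look from the host would be 
something like this (assuming the cgroup hierarchy is mounted on /cgroup, 
as above):

    # user/system CPU time per container, in USER_HZ ticks
    for f in /cgroup/*/cpuacct.stat; do
        echo "== $f"
        cat "$f"
    done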

> 5) is there something comparable to the fine-grained control over
> memory resources like vmguarpages/privmpages/oomguarpages in LXC?

Pass...

> 6) is LXC production ready?

Not sure who could make that definitive decision ;-)

It sounds like the lack of tun/tap might be a show-stopper for you though. 
(come back next week ;-)

However, I'm using it in production - I've got a dozen LAMPy type boxes 
running it so far, each with several containers inside, and a small number 
of Asterisk hosts. (I'm not mixing the LAMP and Asterisk hosts though.) My 
clients haven't noticed any changes, which makes me happy. I don't think 
what I'm doing is very stressful to the systems, but so far I'm very happy 
with it.

I did test it to my own satisfaction before I committed myself to it on 
servers 300 miles away. One test was to create 20 containers on an old 
1.8GHz Celeron box, each running Asterisk, with one connected to the next 
and so on - then place a call into the first. It managed 3 loops playing 
media before it had any problems - and those were down to kernel 
context/network switching rather than anything to do with the LXC setup. 
(I suspect there is more network overhead though, due to the bridge and 
vlan nature of the underlying plumbing.)

So right now, I'm happy with LXC - I've no need for other virtualisation 
as I'm purely running Linux, so I don't need to host Windows, different 
kernels, etc. And for me, it's a management tool - I can now take a 
container and move it to different hardware (not yet a proper "live 
migration", but the final rsync is currently only a few minutes and I can 
live with that - something like the sketch below).

I have also saved myself a headache or two by moving old servers with OSes 
I couldn't upgrade onto new hardware - so I have one server running Debian 
Lenny, kernel 2.6.33.1, hosting an old Debian Woody server inside a 
container running the customer's custom application, which they developed 
6 years ago... They're happy as they got new hardware, and I'm happy as I 
didn't have to worry about migrating their code to a new version of Debian 
on new hardware. And I can also take that entire image now and move it to 
another server if I needed to load-balance, upgrade, cater for h/w 
failure, etc.
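
The move itself is nothing magic - stop the container, do a final rsync of 
its rootfs, start it on the new box. Roughly (paths and the container name 
are made up - the rootfs lives wherever you put it, and this assumes the 
container's config has already been copied over):

    lxc-stop -n woody
    rsync -aH --numeric-ids --delete /lxc/woody/ newhost:/lxc/woody/
    ssh newhost lxc-start -n woody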

I'm using kernel 2.6.33.x (which I custom compile for the server hardware) 
and Debian Lenny FWIW.

I'm trying not to sound like a complete fanboi, but until the start of 
this year I had no interest in virtualisation at all; once I got into it 
and saw it as a management tool, I was sold - and LXC is the solution that 
seemed to work best for me. (And more so as a lot of the servers I have 
don't have those magic instructions to make Xen or KVM go faster.)

Hope this helps,

Gordon
