[Lxc-users] procfs and cpu masking.

atp Andrew.Phillips at lmax.com
Tue Feb 23 10:26:49 UTC 2010


Hi,

 Apologies for the delay - I've just got around to looking at the procfs
tarball.

> It's very experimental for the moment; it's a prototype:
> http://lxc.sourceforge.net/download/procfs/procfs.tar.gz
> 
> IMO, the code is easy to follow.
> 
> The fuse is mounted in the container, but the code expects to share the
> rootfs.
> 

  It needs access to /dev/fuse, and the cgroup mounted. One side effect
seems to be that if no memory limit is set, /proc/meminfo reports the
default value of memory.limit_in_bytes rather than the amount of RAM in
the system.
  
  I can easily fix that, and implement some other files like cpuinfo.
Before I dive in, however:

  In the README you mention:
"This code is *not* intended to be integrated to lxc, at least under
this form, fuse is too heavy and forks too much, for this reason a
single daemon on the host is better."
 
  Would you mind expanding on that? Do you have something specific in
mind? The current approach of mounting the fuse filesystem somewhere else
(/tmp/dir) and then bind-mounting /tmp/dir back over /proc seems a little
clumsy.

  If there's a better way, either a daemon on the host or a kernel
module, I'll happily start down that road given a couple of hints.

 Andy

Andrew Phillips
Head of Systems

www.lmax.com 

Office: +44 203 1922509
Mobile: +44 (0)7595 242 900

LMAX | Level 2, Yellow Building | 1 Nicholas Road | London | W11 4AN







