[lxc-devel] LXC Karmic

lxc at zitta.fr lxc at zitta.fr
Mon Mar 15 14:21:39 UTC 2010


This script is part of lxc-provider, which is a provisioning tool for LXC.
You should not use this script alone, and it is an old version.
You can read the script to understand what it does, or use the whole
project:
http://sourceforge.net/projects/lxc-provider/

regards,

Guillaume ZITTA

On 15/03/2010 14:40, Elias Olivares wrote:
> Hello !
>
> I've tried to run your script but it doesn't work. It is certainly a
> mistake on my part.
> Can you give me more details on how to run it?
>
> Thanks a lot
>
> Elias.
>
> On 04/03/2010 19:27, Daniel Lezcano wrote:
> > Elias Olivares wrote:
> >  
> >> Hi !
> >>
> >> Here is a new bug when installing Ubuntu Karmic into a container:
> >>
> >> I've installed Karmic with debootstrap, and when I try to run the
> >> container, it doesn't start and this error message appears on the screen:
> >>
> >> mountall:/dev/ppp: Operation not permitted
> >> mountall:/dev/net/tun: Operation not permitted
> >> mountall:/dev/loop0: Operation not permitted
> >>
> >>    
> In order to "containerize" Karmic, I disabled mountall.
> Here is my script to manage Karmic's upstart stuff:
> http://lxc-provider.git.sourceforge.net/git/gitweb.cgi?p=lxc-provider/lxc-provider;a=blob;f=libexec/cache_helpers/ubuntu.karmic.init.sh
> I hope it helps.
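> Roughly, the idea looks something like this (only a sketch, assuming the
> rootfs lives at /mnt/vz/karmictest as in your config; the exact upstart
> job names may differ on your install):
>
>   #!/bin/sh
>   # Disable the mountall upstart job inside the container rootfs so it
>   # does not try to set up devices the container is not allowed to touch.
>   ROOTFS=/mnt/vz/karmictest
>   if [ -f "$ROOTFS/etc/init/mountall.conf" ]; then
>       mv "$ROOTFS/etc/init/mountall.conf" \
>          "$ROOTFS/etc/init/mountall.conf.disabled"
>   fi
>
> The real script at the URL above does more than that, so please read it
> for the complete picture.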
> >> But the container seems to run :
> >>
> >> host# lxc-info -n karmic
> >> 'karmictest.1g6.biz' is RUNNING
> >>
> >> The command lxc-ls seems to be broken (it shows the container twice):
> >>
> >> vms:/mnt/vz# lxc-ls
> >> karmictest
> >> karmictest
> >>  
> >>    
> > Yes, that has been reported once before; the name is displayed twice
> > because the container is both created and running.
> > I guess there is some polishing to do with this command.
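> > In the meantime, a small workaround to query each name only once (just
> > a sketch built from the commands already shown in this thread):
> >
> >   for c in $(lxc-ls | sort -u); do lxc-info -n "$c"; done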
> >
> >  
> >> My container configuration file :
> >>
> >> lxc.utsname = karmictest
> >> lxc.tty = 4
> >> lxc.pts = 1024
> >> lxc.network.type = veth
> >> lxc.network.flags = up
> >> lxc.network.link = br0
> >> lxc.network.name = eth0
> >> lxc.network.mtu = 1500
> >> #lxc.mount =
> >> lxc.rootfs = /mnt/vz/karmictest
> >>  
> >>    
> > Can you try disabling the cgroup.devices section below and then starting
> > the container?
> > If it starts, you probably have to allow more devices to be created
> > within the container (e.g. b 7:0 for loop0); see the example allow lines
> > after the quoted section below.
> >
> >  
> >> lxc.cgroup.devices.deny = a
> >> # /dev/null and zero
> >> lxc.cgroup.devices.allow = c 1:3 rwm
> >> lxc.cgroup.devices.allow = c 1:5 rwm
> >> # consoles
> >> lxc.cgroup.devices.allow = c 5:1 rwm
> >> lxc.cgroup.devices.allow = c 5:0 rwm
> >> lxc.cgroup.devices.allow = c 4:0 rwm
> >> lxc.cgroup.devices.allow = c 4:1 rwm
> >> # /dev/{,u}random
> >> lxc.cgroup.devices.allow = c 1:9 rwm
> >> lxc.cgroup.devices.allow = c 1:8 rwm
> >> lxc.cgroup.devices.allow = c 136:* rwm
> >> lxc.cgroup.devices.allow = c 5:2 rwm
> >> # rtc
> >> lxc.cgroup.devices.allow = c 254:0 rwm
> >>  
> >>    
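> > For the errors in the first mail, the extra allow lines would look
> > something like this (a sketch; these are the usual major:minor numbers
> > for those devices, adjust to whatever mountall actually complains about):
> >
> >   # /dev/loop0
> >   lxc.cgroup.devices.allow = b 7:0 rwm
> >   # /dev/ppp
> >   lxc.cgroup.devices.allow = c 108:0 rwm
> >   # /dev/net/tun
> >   lxc.cgroup.devices.allow = c 10:200 rwm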
> >
> >