[lxc-users] Unavailable loop devices

CDR venefax at gmail.com
Tue May 6 16:42:51 UTC 2014


IMHO, the moment Novell was sold, SUSE was doomed.
They have a beta for the new version, but it is closed. If the project
were healthy, the beta would be public.
I wonder whether anybody has access to the ISO files for the SLES 12 beta.

Philip

On Tue, May 6, 2014 at 12:34 PM, Michael H. Warfield <mhw at wittsend.com> wrote:
> On Tue, 2014-05-06 at 12:20 -0400, CDR wrote:
>> The current SUSE 11 SP3 templates are useless.
>> The online repository is corrupt, and since Novell sold the company,
>> there has been no maintenance.
>> I had to install the container from the ISO files.
>> I think SLES is an abandoned distribution. Fortunately, OpenSUSE seems vibrant.
>> I have a single client that loves SUSE.
>
> It's not an abandoned distro.  I don't think...  In fact, I was thinking
> that a new release had just come out late last year, but I guess it was
> just the latest service pack to SLES 11.  Releases and service packs have
> been few and far between, though, I'll admit.  SLES 11 is 5 years old
> now.  Sigh...  I traded e-mails with a couple of their developers
> earlier this year.  They may be short-handed and unable to respond in
> the way we would like, but we've all been there.  Fortunately, I'm no
> longer involved in anything that requires SLES.
>
> Regards,
> Mike
>
>> On Tue, May 6, 2014 at 11:40 AM, Michael H. Warfield <mhw at wittsend.com> wrote:
>> > On Tue, 2014-05-06 at 11:36 -0400, Michael H. Warfield wrote:
>> >> On Tue, 2014-05-06 at 11:20 -0400, CDR wrote:
>> >> > Well, I just found a real business case where your theory falls flat.
>> >> > In a SUSE Enterprise container, the only way to allow the owner of the
>> >> > container to install new packages is to permanently mount the
>> >> > original ISO and the original SDK ISO; otherwise zypper will not
>> >> > work. Updates come from the internet, but new base packages have to
>> >> > be fetched from the ISO. I am not sure whether zypper just mounts
>> >> > and dismounts them on the spot and frees the loop device.
>> >> > Suppose my customer clones a container 50 times; this would blow
>> >> > through the available loop devices.
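>> >> > A quick way to check, assuming util-linux's losetup is available in
>> >> > the container (the package name below is just a placeholder):
>> >> >
>> >> > losetup -a               # list busy loop devices beforehand
>> >> > zypper install somepkg   # any install that pulls from the ISO repo
>> >> > losetup -a               # compare: if the list grew, nothing was freed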
>> >
>> >> Yeah, I ran into that on some mainframes using full hardware VM's.  We
>> >> had zLinux SLES running as a VM on a zSeries mainframe.  Same issue.
>> >> Same solution.  The host service provides the mounted images (they set
>> >> the container up in the first place), either through full mounts or bind
>> >> mounts.
>> >
>> >> That's actually an easier and more reliable solution which I've seen
>> >> used in practice with HW VM's.  One image in the host can service
>> >> multiple HW VM's and containers.  I've even seen that done with some
>> >> RHEL instantiations in HW VM's.  That's a fixed (pun and dual meaning
>> >> fully intended) case, and I don't see where we need anything
>> >> different here.
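>> >> For example (the /srv/iso path here is hypothetical), the host could
>> >> mount each ISO once and hand the same tree to every container through a
>> >> read-only bind mount in the container config:
>> >>
>> >> # on the host, one loop mount serves all containers:
>> >> mount -o loop,ro /images/SLE-11-SP3-SDK-DVD-x86_64-GM-DVD1.iso /srv/iso/sles11-sdk
>> >>
>> >> # in each container's config, mounted at /media inside the container:
>> >> lxc.mount.entry = /srv/iso/sles11-sdk media none bind,ro 0 0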
>> >
>> > I should also add that, if that were a hard requirement for SUSE which
>> > absolutely could not be worked around in the host, the SUSE maintainers
>> > should add that option to their template and to the common configuration
>> > include used by all the SUSE templates.  It still should not be part of
>> > the general default for all containers and all distros.
>> >
>> > Regards,
>> > Mike
>> >
>> >> > Yours
>> >> >
>> >> > Philip
>> >> >
>> >> > On Tue, May 6, 2014 at 11:06 AM, Michael H. Warfield <mhw at wittsend.com> wrote:
>> >> > > On Tue, 2014-05-06 at 10:33 -0400, CDR wrote:
>> >> > >> Dear Mike
>> >> > >> It does work indeed.
>> >> > >> I suggest that the developers add these two lines to the sample configuration.
>> >> > >
>> >> > > It's been discussed and passed on, for several reasons, for the time
>> >> > > being.  The need for it in containers is relatively limited.
>> >> > >
>> >> > > There are also currently some isolation issues between containers with
>> >> > > the loop devices.  I.e., running losetup -l currently dumps the
>> >> > > information for all the loop devices system-wide, even if you are in a
>> >> > > container.  I'm not sure at this point what would happen if you did a
>> >> > > losetup -d on a loop device from a container which had not set up that
>> >> > > loop device.  I hadn't previously tested that, but...  It seems
>> >> > > to "fail" silently, as if it succeeded, but doesn't really do anything.
>> >> > > It's not clean.  In most cases, using losetup to automatically manage
>> >> > > the appropriate loop device does the right thing and avoids collisions.
>> >> > >
>> >> > > Then there's the issue of the number of available loop devices.  Because
>> >> > > they're shared, if one container consumes 3 and another container
>> >> > > requires 2, the second one is going to fail in the default configuration
>> >> > > (the default is 4 loop devices - I run with 64).
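>> >> > > For reference, a sketch of bumping that on the host, assuming loop is
>> >> > > built as a module (if it's built into the kernel, pass max_loop=64 on
>> >> > > the kernel command line instead):
>> >> > >
>> >> > > modprobe -r loop            # unload first, if loaded and idle
>> >> > > modprobe loop max_loop=64   # preallocate 64 loop devices
>> >> > > # to make it persistent, e.g. in /etc/modprobe.d/loop.conf:
>> >> > > # options loop max_loop=64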
>> >> > >
>> >> > > I would personally advise only adding loop devices to those containers
>> >> > > that absolutely need them.  I don't think they are appropriate as
>> >> > > default devices at this time when most containers don't even need them.
>> >> > > I would especially avoid them in cases where you may be hosting
>> >> > > containers for others.  I have about half a dozen groups of containers
>> >> > > I'm hosting for friends, relatives, and business associates on a
>> >> > > colocated server I run.  I wouldn't enable loop devices in any of
>> >> > > those containers unless it was specifically requested, and even then
>> >> > > only for the duration of the need.  They know.  They've never asked.
>> >> > > Certainly no need for that to be in a default configuration.
>> >> > >
>> >> > > Yes, that limits the container owner's ability to mount images, but
>> >> > > that's really not that common in practice outside of development work.
>> >> > >
>> >> > > Building containers within containers, you may also run into problems
>> >> > > with certain package installs and builds having unusual requirements for
>> >> > > capabilities (setfcap comes immediately to mind).  I ran into this when
>> >> > > I created containers to build NST (Network Security Toolkit) images, in
>> >> > > addition to the expected loop device issues.  That's another thing that
>> >> > > should only be enabled on those specific containers requiring it.
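>> >> > > If you hit the setfcap case, the knob is the container's capability
>> >> > > list.  As a hypothetical example, if the container config (or a common
>> >> > > include it pulls in) drops the capability with a line like the one
>> >> > > below, removing setfcap from that line lets package installs set file
>> >> > > capabilities again:
>> >> > >
>> >> > > # hypothetical line in the container config or a common include:
>> >> > > # lxc.cap.drop = setfcap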
>> >> > >
>> >> > >> Yours
>> >> > >> Philip
>> >> > >
>> >> > > Regards,
>> >> > > Mike
>> >> > >
>> >> > >> On Tue, May 6, 2014 at 9:28 AM, Michael H. Warfield <mhw at wittsend.com> wrote:
>> >> > >> > On Tue, 2014-05-06 at 06:25 -0400, CDR wrote:
>> >> > >> >> Dear Friends
>> >> > >> >
>> >> > >> >> I successfully created a SLES 11 SP3 container, but when I try to do this
>> >> > >> >
>> >> > >> >> mount -o loop /images/SLE-11-SP3-SDK-DVD-x86_64-GM-DVD1.iso /media
>> >> > >> >
>> >> > >> >> mount: Could not find any loop device. Maybe this kernel does not know
>> >> > >> >>        about the loop device? (If so, recompile or `modprobe loop'.)
>> >> > >> >
>> >> > >> > Add the following to your container configuration file:
>> >> > >> >
>> >> > >> > lxc.cgroup.devices.allow = c 10:237 rwm # loop-control
>> >> > >> > lxc.cgroup.devices.allow = b 7:* rwm    # loop*
>> >> > >> >
>> >> > >> > Then make sure you have the following devices in your container /dev
>> >> > >> > directory...
>> >> > >> >
>> >> > >> > brw-rw----. 1 root disk  7,   0 May  2 13:03 /dev/loop0
>> >> > >> > brw-rw----. 1 root disk  7,   1 May  2 13:03 /dev/loop1
>> >> > >> > brw-rw----. 1 root disk  7,   2 May  2 13:03 /dev/loop2
>> >> > >> > brw-rw----. 1 root disk  7,   3 May  2 13:03 /dev/loop3
>> >> > >> > crw-------. 1 root root 10, 237 May  2 13:03 /dev/loop-control
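>> >> > >> > If they aren't there, standard mknod invocations run inside the
>> >> > >> > container should create them (the major/minor numbers match the
>> >> > >> > listing above):
>> >> > >> >
>> >> > >> > for i in 0 1 2 3; do mknod -m 660 /dev/loop$i b 7 $i; done
>> >> > >> > chown root:disk /dev/loop[0-3]
>> >> > >> > mknod -m 600 /dev/loop-control c 10 237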
>> >> > >> >
>> >> > >> > Regards,
>> >> > >> > Mike
>> >> > >> >
>> >> > >> >> My host is Fedora 20 and the LXC version is
>> >> > >> >
>> >> > >> >> rpm -qa | grep lxc
>> >> > >> >> libvirt-daemon-lxc-1.1.3.4-4.fc20.x86_64
>> >> > >> >> libvirt-daemon-driver-lxc-1.1.3.4-4.fc20.x86_64
>> >> > >> >> lxc-devel-1.0.0-1.fc20.x86_64
>> >> > >> >> lxc-debuginfo-1.0.0-1.fc20.x86_64
>> >> > >> >> lxc-libs-1.0.0-1.fc20.x86_64
>> >> > >> >> lxc-1.0.0-1.fc20.x86_64
>> >> > >> >
>> >> > >> >> the configuration is:
>> >> > >> >>
>> >> > >> >> lxc.start.auto = 0
>> >> > >> >> lxc.start.delay = 5
>> >> > >> >> lxc.start.order = 10
>> >> > >> >>
>> >> > >> >> # When using LXC with apparmor, uncomment the next line to run unconfined:
>> >> > >> >> #lxc.aa_profile = unconfined
>> >> > >> >>
>> >> > >> >> lxc.cgroup.devices.deny = a
>> >> > >> >> # /dev/null and zero
>> >> > >> >> lxc.cgroup.devices.allow = c 1:3 rwm
>> >> > >> >> lxc.cgroup.devices.allow = c 1:5 rwm
>> >> > >> >> # consoles
>> >> > >> >> lxc.cgroup.devices.allow = c 5:1 rwm
>> >> > >> >> lxc.cgroup.devices.allow = c 5:0 rwm
>> >> > >> >> lxc.cgroup.devices.allow = c 4:0 rwm
>> >> > >> >> lxc.cgroup.devices.allow = c 4:1 rwm
>> >> > >> >> # /dev/{,u}random
>> >> > >> >> lxc.cgroup.devices.allow = c 1:9 rwm
>> >> > >> >> lxc.cgroup.devices.allow = c 1:8 rwm
>> >> > >> >> lxc.cgroup.devices.allow = c 136:* rwm
>> >> > >> >> lxc.cgroup.devices.allow = c 5:2 rwm
>> >> > >> >> # rtc
>> >> > >> >> lxc.cgroup.devices.allow = c 254:0 rwm
>> >> > >> >>
>> >> > >> >> # mount points
>> >> > >> >> lxc.mount.entry = proc proc proc nodev,noexec,nosuid 0 0
>> >> > >> >> lxc.mount.entry = sysfs sys sysfs defaults  0 0
>> >> > >> >> lxc.mount.entry = /images  /var/lib/lxc/utel-kde/rootfs/images none bind 0 0
>> >> > >> >>
>> >> > >> >>
>> >> > >> >> lxc.network.type=macvlan
>> >> > >> >> lxc.network.macvlan.mode=bridge
>> >> > >> >> lxc.network.link=eth1
>> >> > >> >> lxc.network.flags=up
>> >> > >> >> lxc.network.hwaddr = e2:91:a8:17:97:e4
>> >> > >> >> lxc.network.ipv4 = 0.0.0.0/21
>> >> > >> >>
>> >> > >> >>
>> >> > >> >> How do I make the kernel loop module available to the container?
>> >> > >> >>
>> >> > >> >> Yours
>> >> > >> >> Philip
>> >> > >
>> >>
>> >
>
> --
> Michael H. Warfield (AI4NB) | (770) 978-7061 |  mhw at WittsEnd.com
>    /\/\|=mhw=|\/\/          | (678) 463-0932 |  http://www.wittsend.com/mhw/
>    NIC whois: MHW9          | An optimist believes we live in the best of all
>  PGP Key: 0x674627FF        | possible worlds.  A pessimist is sure of it!
>
>

