[lxc-users] Kernel lockups when running lxc-start (Jäkel)
Dao Quang Minh
dqminh89 at gmail.com
Wed Mar 12 14:30:25 UTC 2014
We haven't tested without the bind mounts (but we can probably try it asap).
We migrated to 1.0.0 from 0.7.5 about a week ago, and this is the first
time I've seen this bug.
`grep shared /proc/self/mountinfo` doesn't show anything, but
`/proc/self/mountinfo` does display 2 entries per physical container (I
guess because of the bind mounts):
```
1766 39 0:764 / /disk1/container-name rw,relatime - aufs none rw,si=70f12540eaf98716
1768 38 0:764 / /var/lib/lxc/container-name rw,relatime - aufs none rw,si=70f12540eaf98716
```
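(As an aside, the propagation check Serge suggests can be narrowed a little. A sketch, assuming the standard `/proc/self/mountinfo` field layout, that prints only the mount points carrying a `shared:N` propagation tag rather than the whole matching lines:)

```shell
#!/bin/sh
# Like `grep shared /proc/self/mountinfo`, but print only the
# mount-point column (field 5) of entries whose optional fields
# include a shared:N propagation tag.
awk '/shared:[0-9]/ { print $5 }' /proc/self/mountinfo
```

An empty result here means no mount in the current namespace is marked shared, which matches what the `grep` above reported.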
On Wed, Mar 12, 2014 at 10:14 PM, Serge Hallyn <serge.hallyn at ubuntu.com> wrote:
> So does the same thing happen if you don't have the
> /var/lib/lxc/container bind mount, and instead do
>
> lxc-start -P /disk1 -n container
>
> ?
>
> does 'grep shared /proc/self/mountinfo' show anything?
>
> Quoting Dao Quang Minh (dqminh89 at gmail.com):
> > Hi,
> >
> > We run a (sort of) unconventional FS layout with aufs.
> >
> >               bind-mount                    aufs
> > /var/lib/lxc/container <------- /disk1/container <----+----+ base
> >                                                       |
> >                                                       +----+ delta
> >
> > The container is then started as usual with lxc-start. `/disk1` and
> > `/var/lib/lxc` are ext4 disks.
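(For concreteness, the layout described above could be reproduced with something like the sketch below. The `base`/`delta` branch names and the `rootfs` subdirectories are assumptions, not paths from the thread; the real mounts need root and an aufs-enabled kernel, so by default the script only echoes the commands.)

```shell
#!/bin/sh
# Sketch of the aufs + bind-mount layout from the diagram above.
# Branch names (base/delta) and rootfs paths are hypothetical.
# Set DO_MOUNT=1 and run as root to actually perform the mounts;
# otherwise the commands are only printed.
BASE=/disk1/container-name/base     # read-only lower branch
DELTA=/disk1/container-name/delta   # writable upper branch
UNION=/disk1/container-name/rootfs  # aufs union mount point
LXCDIR=/var/lib/lxc/container-name/rootfs

run() {
    if [ "${DO_MOUNT:-0}" = 1 ] && [ "$(id -u)" = 0 ]; then
        "$@"
    else
        echo "$@"
    fi
}

# aufs union: writable delta stacked over read-only base
run mount -t aufs -o "br=$DELTA=rw:$BASE=ro" none "$UNION"
# bind the union into the default LXC path
run mount --bind "$UNION" "$LXCDIR"
```

With this in place, `lxc-start -n container-name` finds the rootfs under the default path, while the data actually lives on `/disk1`.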
> >
> > Daniel.
> >
> > >Dear Daniel,
> > >
> > >could you please add some information about the type and layout of
> > >the filesystems involved, to give an idea of what kind of mount
> > >operations are involved? I guess it's some bug in the FS layer while
> > >LXC is doing the mounts.
> > >
> > >thank you
> > >
> > >Guido
> > >
> > >>-----Original Message-----
> > >>From: lxc-users-bounces at lists.linuxcontainers.org [mailto:
> lxc-users-bounces at lists.linuxcontainers.org] On Behalf Of Dao
> > >>Quang Minh
> > >>Sent: Wednesday, March 12, 2014 9:03 AM
> > >>To: lxc-users at lists.linuxcontainers.org
> > >>Subject: [lxc-users] Kernel lockups when running lxc-start
> > >>
> > >>Hi all,
> > >>
> > >>We encountered a bug today when one of our systems entered a soft
> > >>lockup while we tried to start a container. Unfortunately, at that
> > >>point we had to do a power cycle because we couldn't access the
> > >>system anymore. Here is the kernel.log:
> > >>
> > >>[...]
> > >>
> > >>After this point, it seems that all lxc-start invocations fail, but
> > >>the system continued to run until we power-cycled it.
> > >>
> > >>When I inspected some of the containers that were started during
> > >>that time, I saw that one of them had an existing lxc_putold
> > >>directory (which should be removed when the container finishes
> > >>starting up, right?). However, I'm not sure if that is related to
> > >>the lockup above.
> > >>
> > >>The host is an Ubuntu 12.04 EC2 server, running LXC 1.0.0 and
> > >>kernel 3.13.0-12.32.
> > >>
> > >>Cheers,
> > >>Daniel.
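(A quick way to look for the leftover `lxc_putold` directories Daniel mentions is a sketch like the one below. `lxc_putold` is the staging directory LXC uses for the old root during pivot_root, normally removed once startup completes; `/var/lib/lxc` is the default container path, so containers started with `-P` would need the path adjusted.)

```shell
#!/bin/sh
# List containers under the default LXC path that still have a
# leftover lxc_putold directory (i.e. /var/lib/lxc/NAME/rootfs/lxc_putold).
find /var/lib/lxc -maxdepth 3 -type d -name lxc_putold 2>/dev/null
```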
>
> > _______________________________________________
> > lxc-users mailing list
> > lxc-users at lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users
>