[lxc-users] LXD uptime back to 0
Benoit GEORGELIN - Association Web4all
benoit.georgelin at web4all.fr
Fri Apr 8 17:51:24 UTC 2016
Not for now, and I did not try to watch it more closely.
I will if it happens again.
Regards,
Benoît
De: "Serge Hallyn" <serge.hallyn at ubuntu.com>
À: "lxc-users" <lxc-users at lists.linuxcontainers.org>
Envoyé: Lundi 4 Avril 2016 19:09:41
Objet: Re: [lxc-users] LXD uptime back to 0
Can you reproduce this at will?
Quoting Benoit GEORGELIN - Association Web4all (benoit.georgelin at web4all.fr):
> oh ok, no problem.
> So that could be something like
>
> ps auxf|grep 19782|grep -v grep
> 165536 19782 0.0 0.4 37388 4464 ? Ss Mar15 0:05 \_ /sbin/init
>
> stat /proc/19782
>
> File: ‘/proc/19782’
> Size: 0 Blocks: 0 IO Block: 1024 directory
> Device: 4h/4d Inode: 47494006 Links: 9
> Access: (0555/dr-xr-xr-x) Uid: (165536/ UNKNOWN) Gid: (165536/ UNKNOWN)
> Access: 2016-03-23 00:01:04.996574266 +0100
> Modify: 2016-03-23 00:01:04.996574266 +0100
> Change: 2016-03-23 00:01:04.996574266 +0100
> Birth: -
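>
> The mismatch is already visible here: ps shows the init task running since Mar 15, yet the ctime on /proc/19782 is 2016-03-23. A quick side-by-side check (a sketch, assuming procps and GNU coreutils):
>
> ps -o pid,lstart -p 19782    # actual start time of the container's init
> stat -c '%z' /proc/19782     # the ctime the container's /proc/uptime is derived from (see the st_ctime note below)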
>
> From the container's view, it looks like it happened again, but only on that one (ben)
>
>
> lxc exec ben -- bash
> root@ben:~# cd /proc
>
> root@ben:/proc# stat uptime
> File: 'uptime'
> Size: 0 Blocks: 0 IO Block: 4096 regular empty file
> Device: 2ah/42d Inode: 10669 Links: 1
> Access: (0444/-r--r--r--) Uid: (65534/ nobody) Gid: (65534/ nogroup)
> Access: 2016-03-23 02:28:41.479290940 +0000
> Modify: 2016-03-23 02:28:41.479290940 +0000
> Change: 2016-03-23 02:28:41.479290940 +0000
> Birth: -
>
>
> root@ben:/proc# uptime
> 02:28:48 up 3:27, 0 users, load average: 0.31, 0.29, 0.28
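>
> For reference, the number behind that output is the first field of /proc/uptime, in seconds. A quick way to print it in readable form (a sketch, assuming any POSIX awk):
>
> awk '{ s = int($1); printf "up %d days, %02d:%02d\n", s/86400, (s%86400)/3600, (s%3600)/60 }' /proc/uptime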
>
>
>
> Regards,
>
> Benoît
>
>
> De: "Serge Hallyn" <serge.hallyn at ubuntu.com>
> À: "lxc-users" <lxc-users at lists.linuxcontainers.org>
> Envoyé: Mardi 22 Mars 2016 22:10:13
> Objet: Re: [lxc-users] LXD uptime back to 0
>
> D'oh, I was misremembering how it works. And a good thing too, as what I was
> thinking couldn't possibly have worked. The number which should be used in
> /proc/uptime is the st.st_ctime of /proc/<pid>.
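>
> A minimal way to check that against a live container (a sketch, reusing the init PID 19782 from your ps output and assuming GNU stat):
>
> # seconds elapsed since /proc/<init-pid> last had its ctime updated on the host
> echo $(( $(date +%s) - $(stat -c %Z /proc/19782) ))
>
> If that matches the uptime reported inside the container, the reset ctime is where the low number comes from.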
>
> Quoting Benoit GEORGELIN - Association Web4all (benoit.georgelin at web4all.fr):
> > For container "ben"
> >
> > This would be:
> >
> > Dir: /sys/fs/cgroup/systemd/lxc/ben
> > File: tasks
> >
> > /sys/fs/cgroup/systemd/lxc/ben] stat tasks
> > File: ‘tasks’
> > Size: 0 Blocks: 0 IO Block: 4096 regular empty file
> > Device: 17h/23d Inode: 439 Links: 1
> > Access: (0664/-rw-rw-r--) Uid: ( 0/ root) Gid: (165536/ UNKNOWN)
> > Access: 2016-03-15 01:45:44.239558654 +0100
> > Modify: 2016-03-22 01:45:43.368945755 +0100
> > Change: 2016-03-22 01:45:43.368945755 +0100
> > Birth: -
> >
> > Dir: /sys/fs/cgroup/systemd/lxc/ben/init.scope
> > File: tasks
> >
> >
> > /sys/fs/cgroup/systemd/lxc/ben/init.scope] stat tasks
> > File: ‘tasks’
> > Size: 0 Blocks: 0 IO Block: 4096 regular empty file
> > Device: 17h/23d Inode: 445 Links: 1
> > Access: (0644/-rw-r--r--) Uid: (165536/ UNKNOWN) Gid: (165536/ UNKNOWN)
> > Access: 2016-03-15 01:45:44.415562979 +0100
> > Modify: 2016-03-15 01:45:44.415562979 +0100
> > Change: 2016-03-15 01:45:44.415562979 +0100
> > Birth: -
> >
> >
> > Or after lxc info ben --verbose
> >
> > lxc info ben --verbose
> > Name: ben
> > Architecture: x86_64
> > Created: 2016/03/07 23:39 UTC
> > Status: Running
> > Type: persistent
> > Profiles: default
> > Pid: 19782
> > Processes: 21
> >
> > Then:
> >
> > /sys/fs/cgroup/systemd/lxc/ben/init.scope] stat /sys/fs/cgroup/systemd/`awk -F: '/systemd/ { print $3 }' /proc/19782/cgroup`
> > File: ‘/sys/fs/cgroup/systemd//lxc/ben/init.scope’
> > Size: 0 Blocks: 0 IO Block: 4096 directory
> > Device: 17h/23d Inode: 441 Links: 2
> > Access: (0755/drwxr-xr-x) Uid: (165536/ UNKNOWN) Gid: (165536/ UNKNOWN)
> > Access: 2016-03-15 01:45:44.415562979 +0100
> > Modify: 2016-03-15 01:45:44.415562979 +0100
> > Change: 2016-03-15 01:45:44.415562979 +0100
> > Birth: -
> >
> >
> >
> > Thanks
> >
> > Regards,
> >
> > Benoît
> >
> >
> > De: "Serge Hallyn" <serge.hallyn at ubuntu.com>
> > À: "lxc-users" <lxc-users at lists.linuxcontainers.org>
> > Envoyé: Mardi 22 Mars 2016 19:36:09
> > Objet: Re: [lxc-users] LXD uptime back to 0
> >
> > Oh, right, but what we really want is the cgroups
> > of the container init task (lxc-info -n vps-01 -p -H)
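> >
> > Chained together, that is something like the following (a sketch; $pid is the container's init PID from the lxc-info command above, or from the "Pid:" line of lxc info, and the name=systemd hierarchy is assumed mounted in the usual place):
> >
> > cgrp=$(awk -F: '/name=systemd/ { print $3 }' /proc/$pid/cgroup)   # e.g. /lxc/ben/init.scope
> > stat /sys/fs/cgroup/systemd/$cgrp                                 # its ctime is the number of interest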
> >
> > Quoting Benoit GEORGELIN - Association Web4all (benoit.georgelin at web4all.fr):
> > > I cannot hide anything from you, then!
> > > You are right, I use openvswitch.
> > >
> > >
> > > cat /proc/1942/cgroup
> > > 10:blkio:/system.slice/lxd.service
> > > 9:memory:/system.slice/lxd.service
> > > 8:hugetlb:/system.slice/lxd.service
> > > 7:perf_event:/system.slice/lxd.service
> > > 6:cpuset:/system.slice/lxd.service
> > > 5:devices:/system.slice/lxd.service
> > > 4:cpu,cpuacct:/system.slice/lxd.service
> > > 3:freezer:/system.slice/lxd.service
> > > 2:net_cls,net_prio:/system.slice/lxd.service
> > > 1:name=systemd:/system.slice/lxd.service
> > >
> > >
> > >
> > > stat /sys/fs/cgroup/systemd/`awk -F: '/systemd/ { print $3 }' /proc/1942/cgroup`
> > > File: ‘/sys/fs/cgroup/systemd//system.slice/lxd.service’
> > > Size: 0 Blocks: 0 IO Block: 4096 directory
> > > Device: 17h/23d Inode: 144 Links: 2
> > > Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
> > > Access: 2016-03-23 00:20:41.844854933 +0100
> > > Modify: 2016-03-23 00:20:41.844854933 +0100
> > > Change: 2016-03-23 00:20:41.844854933 +0100
> > > Birth: -
> > >
> > > Regards,
> > >
> > > Benoît
> > >
> > >
> > > De: "Serge Hallyn" <serge.hallyn at ubuntu.com>
> > > À: "lxc-users" <lxc-users at lists.linuxcontainers.org>
> > > Envoyé: Mardi 22 Mars 2016 19:17:39
> > > Objet: Re: [lxc-users] LXD uptime back to 0
> > >
> > > Quoting Benoit GEORGELIN - Association Web4all (benoit.georgelin at web4all.fr):
> > > > Interesting, look at the monitor processes:
> > > >
> > > > root 1942 0.0 0.1 72544 1280 ? Ss Mar10 0:00 [lxc monitor] /var/lib/lxd/containers vps-01
> > > > root 1984 0.0 0.0 72544 948 ? S Mar10 0:00 \_ [lxc monitor] /var/lib/lxd/containers vps-01
> > > > root 19734 0.0 0.3 72544 3460 ? Ss Mar15 0:00 [lxc monitor] /var/lib/lxd/containers vps-02
> > > > root 19781 0.0 0.2 72544 2364 ? S Mar15 0:00 \_ [lxc monitor] /var/lib/lxd/containers vps-02
> > >
> > > Hi,
> > >
> > > I bet you're using openvswitch? That causes lxc to create a second
> > > thread which waits for the container to stop and deletes the port.
> > >
> > > > They exist twice for each container
> > > >
> > > > init processes:
> > > >
> > > > 165536 1987 2.3 0.2 28280 3036 ? Ss Mar10 429:30 \_ /sbin/init
> > > > 165536 19782 0.0 0.4 37388 4476 ? Ss Mar15 0:04 \_ /sbin/init
> > > >
> > > > Where can I get the ctime of the cgroups of the tasks and of the init task?
> > >
> > > cat /proc/1942/cgroup
> > > stat /sys/fs/cgroup/systemd/`awk -F: '/systemd/ { print $3 }' /proc/1942/cgroup`
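> > > # (The awk pulls the path, i.e. the third colon-separated field, out of the
> > > # systemd line of /proc/<pid>/cgroup, so stat runs against that task's systemd
> > > # cgroup directory; its ctime is the timestamp asked about above.)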
_______________________________________________
lxc-users mailing list
lxc-users at lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users