oh ok, no problem.
So that could be something like:

ps auxf|grep 19782|grep -v grep
165536   19782  0.0  0.4  37388  4464 ?  Ss  Mar15  0:05  \_ /sbin/init

stat /proc/19782
  File: ‘/proc/19782’
  Size: 0             Blocks: 0          IO Block: 1024   directory
Device: 4h/4d   Inode: 47494006    Links: 9
Access: (0555/dr-xr-xr-x)  Uid: (165536/ UNKNOWN)   Gid: (165536/ UNKNOWN)
Access: 2016-03-23 00:01:04.996574266 +0100
Modify: 2016-03-23 00:01:04.996574266 +0100
Change: 2016-03-23 00:01:04.996574266 +0100
 Birth: -

From the container's point of view, it looks like it happened again, but only on that one (ben):

lxc exec ben -- bash
root@ben:~# cd /proc

root@ben:/proc# stat uptime
  File: 'uptime'
  Size: 0             Blocks: 0          IO Block: 4096   regular empty file
Device: 2ah/42d Inode: 10669       Links: 1
Access: (0444/-r--r--r--)  Uid: (65534/  nobody)   Gid: (65534/ nogroup)
Access: 2016-03-23 02:28:41.479290940 +0000
Modify: 2016-03-23 02:28:41.479290940 +0000
Change: 2016-03-23 02:28:41.479290940 +0000
 Birth: -

root@ben:/proc# uptime
 02:28:48 up  3:27,  0 users,  load average: 0.31, 0.29, 0.28
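As a quick sanity check, and assuming (per Serge's note below) that the value in /proc/uptime is derived from the st_ctime of /proc/<pid> of the container's init task, the two numbers can be compared directly. This is only a guess at how the value is computed; 19782 is ben's init on the host, as shown above:

pid=19782                        # ben's init pid on the host (from the ps output above)
ctime=$(stat -c %Z /proc/$pid)   # last status change, in seconds since the epoch
now=$(date +%s)
echo "derived uptime: $(( now - ctime ))s"

If that guess is right, the result should be close to the 3:27 reported inside ben, and it would reset whenever the ctime of /proc/19782 changes (which appears to be what happened here around 2016-03-23 00:01). The same comparison would apply to the ctime of the container's cgroup directory discussed further down.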
data-marker="__HEADERS__"><b>De: </b>"Serge Hallyn" <serge.hallyn@ubuntu.com><br><b>À: </b>"lxc-users" <lxc-users@lists.linuxcontainers.org><br><b>Envoyé: </b>Mardi 22 Mars 2016 22:10:13<br><b>Objet: </b>Re: [lxc-users] LXD uptime back to 0<br></div><br><div data-marker="__QUOTED_TEXT__">D'oh, I was misremember how it works. And good thing too, as what I was<br>thinking couldn't possibly work. The number which should be used in<br>/proc/uptime is the st.st_ctime for /proc/<pid>.<br><br>Quoting Benoit GEORGELIN - Association Web4all (benoit.georgelin@web4all.fr):<br>> For container "ben" <br>> <br>> This would be : <br>> <br>> Dir: /sys/fs/cgroup/systemd/lxc/ben <br>> File: tasks <br>> <br>> /sys/fs/cgroup/systemd/lxc/ben] stat tasks <br>> File: ‘tasks’ <br>> Size: 0 Blocks: 0 IO Block: 4096 regular empty file <br>> Device: 17h/23d Inode: 439 Links: 1 <br>> Access: (0664/-rw-rw-r--) Uid: ( 0/ root) Gid: (165536/ UNKNOWN) <br>> Access: 2016-03-15 01:45:44.239558654 +0100 <br>> Modify: 2016-03-22 01:45:43.368945755 +0100 <br>> Change: 2016-03-22 01:45:43.368945755 +0100 <br>> Birth: - <br>> <br>> Dir: /sys/fs/cgroup/systemd/lxc/ben/init.scope <br>> File: tasks <br>> <br>> <br>> /sys/fs/cgroup/systemd/lxc/ben/init.scope] stat tasks <br>> File: ‘tasks’ <br>> Size: 0 Blocks: 0 IO Block: 4096 regular empty file <br>> Device: 17h/23d Inode: 445 Links: 1 <br>> Access: (0644/-rw-r--r--) Uid: (165536/ UNKNOWN) Gid: (165536/ UNKNOWN) <br>> Access: 2016-03-15 01:45:44.415562979 +0100 <br>> Modify: 2016-03-15 01:45:44.415562979 +0100 <br>> Change: 2016-03-15 01:45:44.415562979 +0100 <br>> Birth: - <br>> <br>> <br>> Or after lxc info ben --verbose <br>> <br>> lxc info ben --verbose <br>> Name: ben <br>> Architecture: x86_64 <br>> Created: 2016/03/07 23:39 UTC <br>> Status: Running <br>> Type: persistent <br>> Profiles: default <br>> Pid: 19782 <br>> Processes: 21 <br>> <br>> Then : <br>> <br>> /sys/fs/cgroup/systemd/lxc/ben/init.scope] stat /sys/fs/cgroup/systemd/`awk -F: '/systemd/ { print $3 }' /proc/19782/cgroup` <br>> File: ‘/sys/fs/cgroup/systemd//lxc/ben/init.scope’ <br>> Size: 0 Blocks: 0 IO Block: 4096 directory <br>> Device: 17h/23d Inode: 441 Links: 2 <br>> Access: (0755/drwxr-xr-x) Uid: (165536/ UNKNOWN) Gid: (165536/ UNKNOWN) <br>> Access: 2016-03-15 01:45:44.415562979 +0100 <br>> Modify: 2016-03-15 01:45:44.415562979 +0100 <br>> Change: 2016-03-15 01:45:44.415562979 +0100 <br>> Birth: - <br>> <br>> <br>> <br>> Thanks <br>> <br>> Cordialement, <br>> <br>> Benoît <br>> Afin de contribuer au respect de l'environnement, merci de n'imprimer ce mail qu'en cas de nécessité <br>> <br>> <br>> De: "Serge Hallyn" <serge.hallyn@ubuntu.com> <br>> À: "lxc-users" <lxc-users@lists.linuxcontainers.org> <br>> Envoyé: Mardi 22 Mars 2016 19:36:09 <br>> Objet: Re: [lxc-users] LXD uptime back to 0 <br>> <br>> Oh, right, but what we really want is the cgroups <br>> of the container init task (lxc-info -n vps-01 -p -H) <br>> <br>> Quoting Benoit GEORGELIN - Association Web4all (benoit.georgelin@web4all.fr): <br>> > I cannot hide anything then ! <br>> > You are right . 
> > I use openvswitch.
> > 
> > cat /proc/1942/cgroup
> > 10:blkio:/system.slice/lxd.service
> > 9:memory:/system.slice/lxd.service
> > 8:hugetlb:/system.slice/lxd.service
> > 7:perf_event:/system.slice/lxd.service
> > 6:cpuset:/system.slice/lxd.service
> > 5:devices:/system.slice/lxd.service
> > 4:cpu,cpuacct:/system.slice/lxd.service
> > 3:freezer:/system.slice/lxd.service
> > 2:net_cls,net_prio:/system.slice/lxd.service
> > 1:name=systemd:/system.slice/lxd.service
> > 
> > stat /sys/fs/cgroup/systemd/`awk -F: '/systemd/ { print $3 }' /proc/1942/cgroup`
> >   File: ‘/sys/fs/cgroup/systemd//system.slice/lxd.service’
> >   Size: 0             Blocks: 0          IO Block: 4096   directory
> > Device: 17h/23d Inode: 144         Links: 2
> > Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
> > Access: 2016-03-23 00:20:41.844854933 +0100
> > Modify: 2016-03-23 00:20:41.844854933 +0100
> > Change: 2016-03-23 00:20:41.844854933 +0100
> >  Birth: -
> > 
> > Regards,
> > 
> > Benoît
> > 
> > From: "Serge Hallyn" <serge.hallyn@ubuntu.com>
> > To: "lxc-users" <lxc-users@lists.linuxcontainers.org>
> > Sent: Tuesday, 22 March 2016 19:17:39
> > Subject: Re: [lxc-users] LXD uptime back to 0
> > 
> > Quoting Benoit GEORGELIN - Association Web4all (benoit.georgelin@web4all.fr):
> > > Interesting, look at the monitor processes:
> > > 
> > > root      1942  0.0  0.1  72544  1280 ?  Ss  Mar10  0:00  [lxc monitor] /var/lib/lxd/containers vps-01
> > > root      1984  0.0  0.0  72544   948 ?  S   Mar10  0:00  \_ [lxc monitor] /var/lib/lxd/containers vps-01
> > > root     19734  0.0  0.3  72544  3460 ?  Ss  Mar15  0:00  [lxc monitor] /var/lib/lxd/containers vps-02
> > > root     19781  0.0  0.2  72544  2364 ?  S   Mar15  0:00  \_ [lxc monitor] /var/lib/lxd/containers vps-02
> > 
> > Hi,
> > 
> > I bet you're using openvswitch? That causes lxc to create a second
> > thread which waits for the container to stop and then deletes the port.
> > 
> > > They exist twice for each container.
> > > 
> > > Init processes:
> > > 
> > > 165536    1987  2.3  0.2  28280  3036 ?  Ss  Mar10  429:30  \_ /sbin/init
> > > 165536   19782  0.0  0.4  37388  4476 ?  Ss  Mar15    0:04  \_ /sbin/init
> > > 
> > > Where can I get the ctime of the cgroups of the tasks and the init task?
> > 
> > cat /proc/1942/cgroup
> > stat /sys/fs/cgroup/systemd/`awk -F: '/systemd/ { print $3 }' /proc/1942/cgroup`

_______________________________________________
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users