[lxc-users] LXD uptime back to 0
Serge Hallyn
serge.hallyn at ubuntu.com
Wed Mar 23 02:10:13 UTC 2016
D'oh, I was misremembering how it works. And a good thing too, as what
I was thinking couldn't possibly have worked. The number which should
be used in /proc/uptime is the st.st_ctime of /proc/<pid>.
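In other words, the container's uptime would be "now" minus that ctime.
A minimal sketch (using pid 19782, the container init pid from the lxc
info output quoted below):

now=$(date +%s)
ctime=$(stat -c %Z /proc/19782)   # %Z = status change time, epoch seconds
echo "uptime: $(( now - ctime ))s"
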
Quoting Benoit GEORGELIN - Association Web4all (benoit.georgelin at web4all.fr):
> For container "ben"
>
> This would be :
>
> Dir: /sys/fs/cgroup/systemd/lxc/ben
> File: tasks
>
> /sys/fs/cgroup/systemd/lxc/ben] stat tasks
> File: ‘tasks’
> Size: 0 Blocks: 0 IO Block: 4096 regular empty file
> Device: 17h/23d Inode: 439 Links: 1
> Access: (0664/-rw-rw-r--) Uid: ( 0/ root) Gid: (165536/ UNKNOWN)
> Access: 2016-03-15 01:45:44.239558654 +0100
> Modify: 2016-03-22 01:45:43.368945755 +0100
> Change: 2016-03-22 01:45:43.368945755 +0100
> Birth: -
>
> Dir: /sys/fs/cgroup/systemd/lxc/ben/init.scope
> File: tasks
>
>
> /sys/fs/cgroup/systemd/lxc/ben/init.scope] stat tasks
> File: ‘tasks’
> Size: 0 Blocks: 0 IO Block: 4096 regular empty file
> Device: 17h/23d Inode: 445 Links: 1
> Access: (0644/-rw-r--r--) Uid: (165536/ UNKNOWN) Gid: (165536/ UNKNOWN)
> Access: 2016-03-15 01:45:44.415562979 +0100
> Modify: 2016-03-15 01:45:44.415562979 +0100
> Change: 2016-03-15 01:45:44.415562979 +0100
> Birth: -
>
>
> Or, after running lxc info ben --verbose:
>
> lxc info ben --verbose
> Name: ben
> Architecture: x86_64
> Created: 2016/03/07 23:39 UTC
> Status: Running
> Type: persistent
> Profiles: default
> Pid: 19782
> Processes: 21
>
> Then:
>
> /sys/fs/cgroup/systemd/lxc/ben/init.scope] stat /sys/fs/cgroup/systemd/`awk -F: '/systemd/ { print $3 }' /proc/19782/cgroup`
> File: ‘/sys/fs/cgroup/systemd//lxc/ben/init.scope’
> Size: 0 Blocks: 0 IO Block: 4096 directory
> Device: 17h/23d Inode: 441 Links: 2
> Access: (0755/drwxr-xr-x) Uid: (165536/ UNKNOWN) Gid: (165536/ UNKNOWN)
> Access: 2016-03-15 01:45:44.415562979 +0100
> Modify: 2016-03-15 01:45:44.415562979 +0100
> Change: 2016-03-15 01:45:44.415562979 +0100
> Birth: -
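>
> For the record, that Change time can be turned into an uptime in
> seconds with something like:
>
> ctime=$(stat -c %Z /sys/fs/cgroup/systemd/lxc/ben/init.scope)
> echo $(( $(date +%s) - ctime ))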
>
>
>
> Thanks
>
> Regards,
>
> Benoît
>
>
> From: "Serge Hallyn" <serge.hallyn at ubuntu.com>
> To: "lxc-users" <lxc-users at lists.linuxcontainers.org>
> Sent: Tuesday, 22 March 2016 19:36:09
> Subject: Re: [lxc-users] LXD uptime back to 0
>
> Oh, right, but what we really want is the cgroups
> of the container init task (lxc-info -n vps-01 -p -H)
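>
> For example, something like:
>
> pid=$(lxc-info -n vps-01 -p -H)   # -p -H prints just the init pid
> cat /proc/$pid/cgroup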
>
> Quoting Benoit GEORGELIN - Association Web4all (benoit.georgelin at web4all.fr):
> > I cannot hide anything then!
> > You are right, I use openvswitch.
> >
> >
> > cat /proc/1942/cgroup
> > 10:blkio:/system.slice/lxd.service
> > 9:memory:/system.slice/lxd.service
> > 8:hugetlb:/system.slice/lxd.service
> > 7:perf_event:/system.slice/lxd.service
> > 6:cpuset:/system.slice/lxd.service
> > 5:devices:/system.slice/lxd.service
> > 4:cpu,cpuacct:/system.slice/lxd.service
> > 3:freezer:/system.slice/lxd.service
> > 2:net_cls,net_prio:/system.slice/lxd.service
> > 1:name=systemd:/system.slice/lxd.service
> >
> >
> >
> > stat /sys/fs/cgroup/systemd/`awk -F: '/systemd/ { print $3 }' /proc/1942/cgroup`
> > File: ‘/sys/fs/cgroup/systemd//system.slice/lxd.service’
> > Size: 0 Blocks: 0 IO Block: 4096 directory
> > Device: 17h/23d Inode: 144 Links: 2
> > Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
> > Access: 2016-03-23 00:20:41.844854933 +0100
> > Modify: 2016-03-23 00:20:41.844854933 +0100
> > Change: 2016-03-23 00:20:41.844854933 +0100
> > Birth: -
> >
> > Regards,
> >
> > Benoît
> >
> >
> > From: "Serge Hallyn" <serge.hallyn at ubuntu.com>
> > To: "lxc-users" <lxc-users at lists.linuxcontainers.org>
> > Sent: Tuesday, 22 March 2016 19:17:39
> > Subject: Re: [lxc-users] LXD uptime back to 0
> >
> > Quoting Benoit GEORGELIN - Association Web4all (benoit.georgelin at web4all.fr):
> > > Interesting, look at the monitor processes:
> > >
> > > root 1942 0.0 0.1 72544 1280 ? Ss Mar10 0:00 [lxc monitor] /var/lib/lxd/containers vps-01
> > > root 1984 0.0 0.0 72544 948 ? S Mar10 0:00 \_ [lxc monitor] /var/lib/lxd/containers vps-01
> > > root 19734 0.0 0.3 72544 3460 ? Ss Mar15 0:00 [lxc monitor] /var/lib/lxd/containers vps-02
> > > root 19781 0.0 0.2 72544 2364 ? S Mar15 0:00 \_ [lxc monitor] /var/lib/lxd/containers vps-02
> >
> > Hi,
> >
> > I bet you're using openvswitch? That causes lxc to create a second
> > thread which waits for the container to stop and deletes the port.
> >
> > > They exist twice for each container.
> > >
> > > init process :
> > >
> > > 165536 1987 2.3 0.2 28280 3036 ? Ss Mar10 429:30 \_ /sbin/init
> > > 165536 19782 0.0 0.4 37388 4476 ? Ss Mar15 0:04 \_ /sbin/init
> > >
> > > Where can I get the ctime of the cgroups of the tasks and the init task?
> >
> > cat /proc/1942/cgroup
> > stat /sys/fs/cgroup/systemd/`awk -F: '/systemd/ { print $3 }' /proc/1942/cgroup`
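> >
> > To print just the ctime as epoch seconds, stat's %Z format should work
> > too, e.g.:
> >
> > stat -c %Z /sys/fs/cgroup/systemd/`awk -F: '/systemd/ { print $3 }' /proc/1942/cgroup`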
> > _______________________________________________
> > lxc-users mailing list
> > lxc-users at lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users
>
> _______________________________________________
> lxc-users mailing list
> lxc-users at lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users