[lxc-users] How to know available cgroup parameters?

Daniel Caillibaud ml at lairdutemps.org
Wed Aug 24 11:50:24 UTC 2016


Hi,

I'm using lxc 2.0.3 with debian jessie on kernel 4.6.6 (without lxd or any other lxc manager,
just the lxc-* binaries provided by the lxc package).

I haven't found the right lxc.cgroup.* parameters to set in my container config file to limit
memory.

# this one is fine
lxc.cgroup.cpu.shares = 512

# this one too
lxc.cgroup.blkio.weight = 300

# but with
lxc.cgroup.memory.limit_in_bytes = 1G

=> lxc-start 20160824124403.700 ERROR    lxc_cgfsng - cgfsng.c:cgfsng_setup_limits:1662 - No
such file or directory - Error setting memory.limit_in_bytes to 1G for nw1

That's because there is no /sys/fs/cgroup/memory/ on the host.
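
If I understand correctly, /proc/cgroups lists every controller the kernel knows about and
whether it is enabled, so I guess something like this would tell whether the memory controller
is missing from the kernel, disabled, or just not mounted (the output below is only an
illustration, not from my host):

cat /proc/cgroups
#subsys_name    hierarchy   num_cgroups   enabled
cpuset          2           5             1
cpu             3           42            1
memory          0           1             0    # would mean: present but disabled
grep MEMCG /boot/config-$(uname -r)   # is the controller compiled in at all?
cat /proc/cmdline                      # any cgroup_enable=/cgroup_disable= option?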


To understand, I looked into the files provided by the linux-doc-4.6 package:
/usr/share/doc/linux-doc-4.6/Documentation/cgroup-v1/memory.txt.gz
and
/usr/share/doc/linux-doc-4.6/Documentation/cgroup-v2.txt.gz
but haven't found a solution.

How can I find out which parameters are available?

Thanks a lot

Daniel



PS: Here are my host details

(I looked at lxc-cgroup, but its output doesn't really help.)

lxc-cgroup -n nw1 devices.list 
c *:* m
b *:* m
c 1:3 rwm
c 1:5 rwm
c 1:7 rwm
c 5:0 rwm
c 5:1 rwm
c 5:2 rwm
c 1:8 rwm
c 1:9 rwm
c 136:* rwm
c 10:229 rwm
c 254:0 rm
c 10:200 rwm
c 10:228 rwm
c 10:232 rwm
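
(If I understand the man page correctly, lxc-cgroup simply reads or writes the matching file
under /sys/fs/cgroup/<controller>/lxc/nw1/, so it should accept any file name that exists
there, for example, with a made-up output value:

lxc-cgroup -n nw1 cpu.shares
512

and presumably memory.limit_in_bytes fails the same way as in the config, since there is no
memory hierarchy on my host.)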



I have the classic entry in fstab:
sysfs		/sys	sysfs	defaults		0	0

which gives:

grep cgroup /proc/mounts 
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd 0 0 
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0 
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
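
If the memory controller turns out to be available in this kernel and just not mounted, I
suppose the missing hierarchy could be mounted by hand, something like (untested sketch; note
the tmpfs above is mounted read-only, hence the remount):

mount -o remount,rw /sys/fs/cgroup
mkdir /sys/fs/cgroup/memory
mount -t cgroup -o memory cgroup /sys/fs/cgroup/memory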


And for the container nw1 I see:

ls -1 /sys/fs/cgroup/*/lxc/nw1/

/sys/fs/cgroup/blkio/lxc/nw1/:
blkio.io_merged
blkio.io_merged_recursive
blkio.io_queued
blkio.io_queued_recursive
blkio.io_service_bytes
blkio.io_service_bytes_recursive
blkio.io_serviced
blkio.io_serviced_recursive
blkio.io_service_time
blkio.io_service_time_recursive
blkio.io_wait_time
blkio.io_wait_time_recursive
blkio.leaf_weight
blkio.leaf_weight_device
blkio.reset_stats
blkio.sectors
blkio.sectors_recursive
blkio.throttle.io_service_bytes
blkio.throttle.io_serviced
blkio.throttle.read_bps_device
blkio.throttle.read_iops_device
blkio.throttle.write_bps_device
blkio.throttle.write_iops_device
blkio.time
blkio.time_recursive
blkio.weight
blkio.weight_device
cgroup.clone_children
cgroup.procs
notify_on_release
tasks

/sys/fs/cgroup/cpuacct/lxc/nw1/:
# […] same as /sys/fs/cgroup/cpu,cpuacct/lxc/nw1/ because symlink

/sys/fs/cgroup/cpu,cpuacct/lxc/nw1/:
cgroup.clone_children
cgroup.procs
cpuacct.stat
cpuacct.usage
cpuacct.usage_percpu
cpu.cfs_period_us
cpu.cfs_quota_us
cpu.shares
cpu.stat
notify_on_release
tasks

/sys/fs/cgroup/cpu/lxc/nw1/:
# […] same as /sys/fs/cgroup/cpu,cpuacct/lxc/nw1/ because symlink

/sys/fs/cgroup/cpuset/lxc/nw1/:
cgroup.clone_children
cgroup.procs
cpuset.cpu_exclusive
cpuset.cpus
cpuset.effective_cpus
cpuset.effective_mems
cpuset.mem_exclusive
cpuset.mem_hardwall
cpuset.memory_migrate
cpuset.memory_pressure
cpuset.memory_spread_page
cpuset.memory_spread_slab
cpuset.mems
cpuset.sched_load_balance
cpuset.sched_relax_domain_level
notify_on_release
tasks

/sys/fs/cgroup/devices/lxc/nw1/:
cgroup.clone_children
cgroup.procs
devices.allow
devices.deny
devices.list
notify_on_release
tasks

/sys/fs/cgroup/freezer/lxc/nw1/:
cgroup.clone_children
cgroup.procs
freezer.parent_freezing
freezer.self_freezing
freezer.state
notify_on_release
tasks

/sys/fs/cgroup/net_cls/lxc/nw1/:
cgroup.clone_children
cgroup.procs
net_cls.classid
net_prio.ifpriomap
net_prio.prioidx
notify_on_release
tasks

/sys/fs/cgroup/net_cls,net_prio/lxc/nw1/:
cgroup.clone_children
cgroup.procs
net_cls.classid
net_prio.ifpriomap
net_prio.prioidx
notify_on_release
tasks

/sys/fs/cgroup/net_prio/lxc/nw1/:
cgroup.clone_children
cgroup.procs
net_cls.classid
net_prio.ifpriomap
net_prio.prioidx
notify_on_release
tasks

/sys/fs/cgroup/perf_event/lxc/nw1/:
cgroup.clone_children
cgroup.procs
notify_on_release
tasks

/sys/fs/cgroup/pids/lxc/nw1/:
cgroup.clone_children
cgroup.procs
notify_on_release
pids.current
pids.max
tasks

/sys/fs/cgroup/systemd/lxc/nw1/:
cgroup.clone_children
cgroup.procs
notify_on_release
system.slice
tasks
user.slice


