[lxc-devel] [lxc/master] doc: adapt + update

brauner on Github lxc-bot at linuxcontainers.org
Tue Sep 5 23:07:26 UTC 2017


From 594d6e30d6c86f55c340bf49f0aa15b761d7e627 Mon Sep 17 00:00:00 2001
From: Christian Brauner <christian.brauner at ubuntu.com>
Date: Wed, 6 Sep 2017 00:30:40 +0200
Subject: [PATCH 1/2] doc: lxc.sgml.in

Signed-off-by: Christian Brauner <christian.brauner at ubuntu.com>
---
 doc/lxc.sgml.in | 326 ++++++++++++++++++++------------------------------------
 1 file changed, 116 insertions(+), 210 deletions(-)

diff --git a/doc/lxc.sgml.in b/doc/lxc.sgml.in
index c1c2fedca..894e6ca90 100644
--- a/doc/lxc.sgml.in
+++ b/doc/lxc.sgml.in
@@ -52,136 +52,67 @@ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
   </refnamediv>
 
   <refsect1>
-    <title>Quick start</title>
-    <para>
-      You are in a hurry, and you don't want to read this man page. Ok,
-      without warranty, here are the commands to launch a shell inside
-      a container with a predefined configuration template, it may
-      work.
-      <command>@BINDIR@/lxc-execute -n foo -f
-      @DOCDIR@/examples/lxc-macvlan.conf /bin/bash</command>
-    </para>
-  </refsect1>
-
-  <refsect1>
     <title>Overview</title>
     <para>
-      The container technology is actively being pushed into the
-      mainstream linux kernel. It provides the resource management
-      through the control groups aka process containers and resource
-      isolation through the namespaces.
+      The container technology is actively being pushed into the mainstream
+      Linux kernel. It provides resource management through control groups and
+      resource isolation via namespaces.
     </para>
 
     <para>
-      The linux containers, <command>lxc</command>, aims to use these
-      new functionalities to provide a userspace container object
-      which provides full resource isolation and resource control for
-      an applications or a system.
+      <command>lxc</command> aims to use these new functionalities to provide a
+      userspace container object which provides full resource isolation and
+      resource control for an application or a full system.
     </para>
 
     <para>
-      The first objective of this project is to make the life easier
-      for the kernel developers involved in the containers project and
-      especially to continue working on the Checkpoint/Restart new
-      features. The <command>lxc</command> is small enough to easily
-      manage a container with simple command lines and complete enough
-      to be used for other purposes.
+      <command>lxc</command> is small enough to easily manage a container with
+      simple command lines and complete enough to be used for other purposes.
     </para>
   </refsect1>
 
   <refsect1>
     <title>Requirements</title>
     <para>
-      The <command>lxc</command> relies on a set of functionalities
-      provided by the kernel which needs to be active. Depending of
-      the missing functionalities the <command>lxc</command> will
-      work with a restricted number of functionalities or will simply
-      fail.
+      Kernel versions >= 3.10 as shipped with the distros will work with
+      <command>lxc</command>; older configurations may offer fewer
+      functionalities, but enough to be useful.
     </para>
-
+
     <para>
-      The following list gives the kernel features to be enabled in
-      the kernel to have the full features container:
+      <command>lxc</command> relies on a set of functionalities provided by the
+      kernel. The helper script <command>lxc-checkconfig</command> will report
+      which of the required features your kernel configuration enables and
+      which are missing.
     </para>
-      <programlisting>
-	    * General setup
-	      * Control Group support
-	        -> Namespace cgroup subsystem
-	        -> Freezer cgroup subsystem
-	        -> Cpuset support
-	        -> Simple CPU accounting cgroup subsystem
-	        -> Resource counters
-	          -> Memory resource controllers for Control Groups
-	      * Group CPU scheduler
-	        -> Basis for grouping tasks (Control Groups)
-	      * Namespaces support
-	        -> UTS namespace
-	        -> IPC namespace
-	        -> User namespace
-	        -> Pid namespace
-	        -> Network namespace
-	    * Device Drivers
-	      * Character devices
-	        -> Support multiple instances of devpts
-	      * Network device support
-	        -> MAC-VLAN support
-	        -> Virtual ethernet pair device
-	    * Networking
-	      * Networking options
-	        -> 802.1d Ethernet Bridging
-	    * Security options
-	      -> File POSIX Capabilities
-      </programlisting>
-
-      <para>
-
-	The kernel version >= 3.10 shipped with the distros, will
-	work with <command>lxc</command>, this one will have less
-	functionalities but enough to be interesting.
-
-	The helper script <command>lxc-checkconfig</command> will give
-	you information about your kernel configuration.
-      </para>
-
-      <para>
-	  The control group can be mounted anywhere, eg:
-	  <command>mount -t cgroup cgroup /cgroup</command>.
-
-	  It is however recommended to use cgmanager, cgroup-lite or systemd
-	  to mount the cgroup hierarchy under /sys/fs/cgroup.
-
-      </para>
-
   </refsect1>
 
   <refsect1>
     <title>Functional specification</title>
     <para>
-      A container is an object isolating some resources of the host,
-      for the application or system running in it.
+      A container is an object isolating some resources of the host, for the
+      application or system running in it.
     </para>
     <para>
-      The application / system will be launched inside a
-      container specified by a configuration that is either
-      initially created or passed as parameter of the starting commands.
+      The application / system will be launched inside a container specified by
+      a configuration that is either initially created or passed as a parameter
+      of the commands.
     </para>
 
-    <para>How to run an application in a container ?</para>
+    <para>How to run an application in a container</para>
     <para>
-      Before running an application, you should know what are the
-      resources you want to isolate. The default configuration is to
-      isolate the pids, the sysv ipc and the mount points. If you want
-      to run a simple shell inside a container, a basic configuration
-      is needed, especially if you want to share the rootfs. If you
-      want to run an application like <command>sshd</command>, you
-      should provide a new network stack and a new hostname. If you
-      want to avoid conflicts with some files
-      eg. <filename>/var/run/httpd.pid</filename>, you should
-      remount <filename>/var/run</filename> with an empty
-      directory. If you want to avoid the conflicts in all the cases,
-      you can specify a rootfs for the container. The rootfs can be a
-      directory tree, previously bind mounted with the initial rootfs,
-      so you can still use your distro but with your
+      Before running an application, you should know what resources you want
+      to isolate. The default configuration is to isolate PIDs, SysV IPC and
+      mount points. If you want to run a simple shell inside a container, a
+      basic configuration is needed, especially if you want to share the
+      rootfs. If you want to run an application like <command>sshd</command>,
+      you should provide a new network stack and a new hostname. If you want
+      to avoid conflicts with some files, e.g.
+      <filename>/var/run/httpd.pid</filename>, you should remount
+      <filename>/var/run</filename> with an empty directory. If you want to
+      avoid conflicts in all cases, you can specify a rootfs for the
+      container. The rootfs can be a directory tree, previously bind mounted
+      with the initial rootfs, so you can still use your distro but with your
       own <filename>/etc</filename> and <filename>/home</filename>
     </para>
     <para>
@@ -225,15 +156,17 @@ rootfs
       </programlisting>
     </para>
 
-    <para>How to run a system in a container ?</para>
+    <para>How to run a system in a container</para>
 
-    <para>Running a system inside a container is paradoxically easier
-    than running an application. Why ? Because you don't have to care
-    about the resources to be isolated, everything need to be
+    <para>
+    Running a system inside a container is paradoxically easier
+    than running an application. Why? Because you don't have to care
+    about the resources to be isolated, everything needs to be
     isolated, the other resources are specified as being isolated but
     without configuration because the container will set them
     up. eg. the ipv4 address will be setup by the system container
     init scripts. Here is an example of the mount points file:
+    </para>
 
       <programlisting>
 	[root at lxc debian]$ cat fstab
@@ -242,26 +175,17 @@ rootfs
 	/dev/pts /home/root/debian/rootfs/dev/pts  none bind 0 0
       </programlisting>
 
-      More information can be added to the container to facilitate the
-      configuration. For example, make accessible from the container
-      the resolv.conf file belonging to the host.
-
-      <programlisting>
-	/etc/resolv.conf /home/root/debian/rootfs/etc/resolv.conf none bind 0 0
-      </programlisting>
-    </para>
-
     <refsect2>
       <title>Container life cycle</title>
       <para>
 	When the container is created, it contains the configuration
-	information. When a process is launched, the container will be
-	starting and running. When the last process running inside the
-	container exits, the container is stopped.
+	information. When a process is launched, the container will be starting
+	and running. When the last process running inside the container exits,
+	the container is stopped.
       </para>
       <para>
-	In case of failure when the container is initialized, it will
-	pass through the aborting state.
+	In case of failure when the container is initialized, it will pass
+	through the aborting state.
       </para>
 
       <programlisting>
@@ -306,17 +230,14 @@ rootfs
     </refsect2>
 
     <refsect2>
-      <title>Creating / Destroying container
-	(persistent container)</title>
+      <title>Creating / Destroying containers</title>
       <para>
-	A persistent container object can be
-	created via the <command>lxc-create</command>
-	command. It takes a container name as parameter and
-	optional configuration file and template.
-	The name is used by the different
-	commands to refer to this
-	container. The <command>lxc-destroy</command> command will
-	destroy the container object.
+	A persistent container object can be created via the
+	<command>lxc-create</command> command. It takes a container name as
+	parameter and an optional configuration file and template. The name is
+	used by the different commands to refer to this container. The
+	<command>lxc-destroy</command> command will destroy the container
+	object.
 	<programlisting>
 	  lxc-create -n foo
 	  lxc-destroy -n foo
@@ -326,33 +247,30 @@ rootfs
 
     <refsect2>
 	<title>Volatile container</title>
-	<para>It is not mandatory to create a container object
-	before to start it.
-	The container can be directly started with a
-	configuration file as parameter.
+	<para>
+	  It is not mandatory to create a container object before starting it.
+	  The container can be directly started with a configuration file as
+	  parameter.
 	</para>
     </refsect2>
 
     <refsect2>
       <title>Starting / Stopping container</title>
-      <para>When the container has been created, it is ready to run an
-      application / system.
-      This is the purpose of the <command>lxc-execute</command> and
-      <command>lxc-start</command> commands.
-      If the container was not created before
-      starting the application, the container will use the
-      configuration file passed as parameter to the command,
-      and if there is no such parameter either, then
-      it will use a default isolation.
-      If the application is ended, the container will be stopped also,
-      but if needed the <command>lxc-stop</command> command can
-      be used to kill the still running application.
+      <para>
+	When the container has been created, it is ready to run an application /
+	system. This is the purpose of the <command>lxc-execute</command> and
+	<command>lxc-start</command> commands. If the container was not created
+	before starting the application, the container will use the
+	configuration file passed as parameter to the command, and if there is
+	no such parameter either, then it will use a default isolation. When the
+	application ends, the container will be stopped, but if needed the
+	<command>lxc-stop</command> command can be used to stop the container.
       </para>
 
       <para>
-	Running an application inside a container is not exactly the
-	same thing as running a system. For this reason, there are two
-	different commands to run an application into a container:
+	Running an application inside a container is not exactly the same thing
+	as running a system. For this reason, there are two different commands
+	to run an application into a container:
 	<programlisting>
 	  lxc-execute -n foo [-f config] /bin/bash
 	  lxc-start -n foo [-f config] [/bin/bash]
@@ -360,39 +278,35 @@ rootfs
       </para>
 
       <para>
-	<command>lxc-execute</command> command will run the
-	specified command into the container via an intermediate
-	process, <command>lxc-init</command>.
-	This lxc-init after launching  the specified command,
-	will wait for its end and all other reparented processes.
-        (to support daemons in the container).
-	In other words, in the
-	container, <command>lxc-init</command> has the pid 1 and the
-	first process of the application has the pid 2.
+	The <command>lxc-execute</command> command will run the specified
+	command in a container via an intermediate process,
+	<command>lxc-init</command>.
+	After launching the specified command, lxc-init will wait for it and
+	for all other reparented processes to exit (this supports daemons in
+	the container). In other words, in the container,
+	<command>lxc-init</command> has PID 1 and the first process of the
+	application has PID 2.
       </para>
 
       <para>
-	<command>lxc-start</command> command will run directly the specified
-	command into the container.
-	The pid of the first process is 1. If no command is
-	specified <command>lxc-start</command> will
-	run the command defined in lxc.init.cmd or if not set,
-	<filename>/sbin/init</filename> .
+	The <command>lxc-start</command> command will directly run the specified
+	command in the container. The PID of the first process is 1. If no
+	command is specified, <command>lxc-start</command> will run the command
+	defined in lxc.init.cmd or, if not set, <filename>/sbin/init</filename>.
       </para>
 
       <para>
-	To summarize, <command>lxc-execute</command> is for running
-	an application and <command>lxc-start</command> is better suited for
+	To summarize, <command>lxc-execute</command> is for running an
+	application and <command>lxc-start</command> is better suited for
 	running a system.
       </para>
 
       <para>
-	If the application is no longer responding, is inaccessible or is
-	not able to finish by itself, a
-	wild <command>lxc-stop</command> command will kill all the
-	processes in the container without pity.
+	If the application is no longer responding, is inaccessible or is not
+	able to finish by itself, a wild <command>lxc-stop</command> command
+	will kill all the processes in the container without pity.
 	<programlisting>
-	  lxc-stop -n foo
+	  lxc-stop -n foo -k
 	</programlisting>
       </para>
     </refsect2>
@@ -400,11 +314,10 @@ rootfs
     <refsect2>
       <title>Connect to an available tty</title>
       <para>
-	If the container is configured with the ttys, it is possible
-	to access it through them. It is up to the container to
-	provide a set of available tty to be used by the following
-	command. When the tty is lost, it is possible to reconnect it
-	without login again.
+	If the container is configured with ttys, it is possible to access it
+	through them. It is up to the container to provide a set of available
+	ttys to be used by the following command. When the tty is lost, it is
+	possible to reconnect to it without logging in again.
 	<programlisting>
 	  lxc-console -n foo -t 3
 	</programlisting>
@@ -430,30 +343,28 @@ rootfs
       </para>
 
       <para>
-	This feature is enabled if the cgroup freezer is enabled in the
-	kernel.
+	This feature is enabled if the freezer cgroup v1 controller is enabled
+	in the kernel.
       </para>
     </refsect2>
 
     <refsect2>
       <title>Getting information about container</title>
-      <para>When there are a lot of containers, it is hard to follow
-      what has been created or destroyed, what is running or what are
-      the pids running into a specific container. For this reason, the
-      following commands may be useful:
+      <para>
+      When there are a lot of containers, it is hard to follow what has been
+      created or destroyed, what is running, or which PIDs are running in a
+      specific container. For this reason, the following commands may be useful:
 	<programlisting>
-	  lxc-ls
+	  lxc-ls -f
 	  lxc-info -n foo
 	</programlisting>
       </para>
       <para>
-	<command>lxc-ls</command> lists the containers of the
-	system.
+	<command>lxc-ls</command> lists containers.
       </para>
 
       <para>
-	<command>lxc-info</command> gives information for a specific
-	container.
+	<command>lxc-info</command> gives information for a specific container.
       </para>
 
       <para>
@@ -464,22 +375,20 @@ rootfs
 	    lxc-info -n $i
 	  done
 	</programlisting>
-
       </para>
-
     </refsect2>
 
     <refsect2>
       <title>Monitoring container</title>
-      <para>It is sometime useful to track the states of a container,
-      for example to monitor it or just to wait for a specific
-      state in a script.
+      <para>
+	It is sometimes useful to track the states of a container, for example
+	to monitor it or just to wait for a specific state in a script.
       </para>
 
       <para>
-	<command>lxc-monitor</command> command will monitor one or
-	several containers. The parameter of this command accept a
-	regular expression for example:
+	The <command>lxc-monitor</command> command will monitor one or several
+	containers. The parameter of this command accepts a regular expression,
+	for example:
 	<programlisting>
 	  lxc-monitor -n "foo|bar"
 	</programlisting>
@@ -504,8 +413,8 @@ rootfs
 	state change and exit. This is useful for scripting to
 	synchronize the launch of a container or the end. The
 	parameter is an ORed combination of different states. The
-	following example shows how to wait for a container if he went
-	to the background.
+	following example shows how to wait for a container if it successfully
+	started as a daemon.
 
 	<programlisting>
 <![CDATA[
@@ -527,11 +436,12 @@ rootfs
     </refsect2>
 
     <refsect2>
-      <title>Setting the control group for container</title>
-      <para>The container is tied with the control groups, when a
-	container is started a control group is created and associated
-	with it. The control group properties can be read and modified
-	when the container is running by using the lxc-cgroup command.
+      <title>cgroup settings for containers</title>
+      <para>
+	The container is tied to the control groups: when a container is
+	started, a control group is created and associated with it. The control
+	group properties can be read and modified when the container is running
+	by using the lxc-cgroup command.
       </para>
       <para>
 	<command>lxc-cgroup</command> command is used to set or get a
@@ -553,18 +463,14 @@ rootfs
     </refsect2>
   </refsect1>
 
-  <refsect1>
-    <title>Bugs</title>
-    <para>The <command>lxc</command> is still in development, so the
-    command syntax and the API can change. The version 1.0.0 will be
-    the frozen version.</para>
-  </refsect1>
-
   &seealso;
 
   <refsect1>
     <title>Author</title>
     <para>Daniel Lezcano <email>daniel.lezcano at free.fr</email></para>
+    <para>Christian Brauner <email>christian.brauner at ubuntu.com</email></para>
+    <para>Serge Hallyn <email>serge at hallyn.com</email></para>
+    <para>Stéphane Graber <email>stgraber at ubuntu.com</email></para>
   </refsect1>
 
 </refentry>

From bdcbb6b377528e524094b6cefaae178c6240df51 Mon Sep 17 00:00:00 2001
From: Christian Brauner <christian.brauner at ubuntu.com>
Date: Wed, 6 Sep 2017 00:43:05 +0200
Subject: [PATCH 2/2] doc: bugfixes

- lxc.id_map -> lxc.idmap
- document lxc.cgroup.dir

Signed-off-by: Christian Brauner <christian.brauner at ubuntu.com>
---
 doc/ja/lxc.container.conf.sgml.in | 10 +++++-----
 doc/ko/lxc.container.conf.sgml.in |  6 +++---
 doc/lxc.container.conf.sgml.in    | 27 +++++++++++++++++++++++----
 src/lxc/cgroups/cgmanager.c       |  2 +-
 src/lxc/conf.c                    |  4 ++--
 src/lxc/conf.h                    |  8 ++++----
 src/tests/lxc-test-apparmor-mount |  4 ++--
 src/tests/lxc-test-unpriv         |  4 ++--
 src/tests/lxc-test-usernic.in     |  4 ++--
 src/tests/parse_config_file.c     | 28 ++++++++++++++++++++++++++++
 templates/lxc-sabayon.in          |  4 ++--
 11 files changed, 74 insertions(+), 27 deletions(-)

diff --git a/doc/ja/lxc.container.conf.sgml.in b/doc/ja/lxc.container.conf.sgml.in
index 6c4dadef0..f567e8212 100644
--- a/doc/ja/lxc.container.conf.sgml.in
+++ b/doc/ja/lxc.container.conf.sgml.in
@@ -105,11 +105,11 @@ by KATOH Yasufumi <karma at jazz.email.ne.jp>
       example, a process running as UID and GID 0 inside the container might
       appear as UID and GID 100000 on the host.  The implementation and working
       details can be gathered from the corresponding user namespace man page.
-      UID and GID mappings can be defined with the <option>lxc.id_map</option>
+      UID and GID mappings can be defined with the <option>lxc.idmap</option>
       key.
         -->
       本質的には、ユーザ名前空間は与えられた UID、GID の組を隔離します。ユーザ名前空間は、ホスト上の UID、GID のある範囲を、それとは異なるコンテナ上の UID、GID の範囲へマッピングすることで実現します。カーネルは、ホスト上では実際には UID、GID は特権を持たないにも関わらず、コンテナ内ではすべての UID、GID が期待されるように見えるように変換を行います。
-      例えば、コンテナ内では UID、GID が 0 として実行中のプロセスは、ホスト上では UID、GID が 100000 として見えるでしょう。実装と動作の詳細は、ユーザ名前空間の man ページから得られます。UID と GID のマッピングは <option>lxc.id_map</option> を使って定義できます。
+      例えば、コンテナ内では UID、GID が 0 として実行中のプロセスは、ホスト上では UID、GID が 100000 として見えるでしょう。実装と動作の詳細は、ユーザ名前空間の man ページから得られます。UID と GID のマッピングは <option>lxc.idmap</option> を使って定義できます。
     </para>
 
     <para>
@@ -1904,7 +1904,7 @@ by KATOH Yasufumi <karma at jazz.email.ne.jp>
       <variablelist>
         <varlistentry>
           <term>
-            <option>lxc.id_map</option>
+            <option>lxc.idmap</option>
           </term>
           <listitem>
             <para>
@@ -2642,8 +2642,8 @@ by KATOH Yasufumi <karma at jazz.email.ne.jp>
         この設定は、コンテナ内のユーザとグループ両方の id 0-9999 の範囲を、ホスト上の 100000-109999 へマッピングします。
       </para>
       <programlisting>
-        lxc.id_map = u 0 100000 10000
-        lxc.id_map = g 0 100000 10000
+        lxc.idmap = u 0 100000 10000
+        lxc.idmap = g 0 100000 10000
       </programlisting>
     </refsect2>
 
diff --git a/doc/ko/lxc.container.conf.sgml.in b/doc/ko/lxc.container.conf.sgml.in
index b0466a1eb..e880525a6 100644
--- a/doc/ko/lxc.container.conf.sgml.in
+++ b/doc/ko/lxc.container.conf.sgml.in
@@ -1839,7 +1839,7 @@ mknod errno 0
       <variablelist>
 	<varlistentry>
 	  <term>
-	    <option>lxc.id_map</option>
+	    <option>lxc.idmap</option>
 	  </term>
 	  <listitem>
 	    <para>
@@ -2564,8 +2564,8 @@ mknod errno 0
         이 설정은 UID와 GID 둘다를 컨테이너의 0 ~ 9999를 호스트의 100000 ~ 109999로 매핑한다.
       </para>
       <programlisting>
-	lxc.id_map = u 0 100000 10000
-	lxc.id_map = g 0 100000 10000
+	lxc.idmap = u 0 100000 10000
+	lxc.idmap = g 0 100000 10000
       </programlisting>
     </refsect2>
 
diff --git a/doc/lxc.container.conf.sgml.in b/doc/lxc.container.conf.sgml.in
index f3b594ea0..397222f0b 100644
--- a/doc/lxc.container.conf.sgml.in
+++ b/doc/lxc.container.conf.sgml.in
@@ -86,7 +86,7 @@ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
       example, a process running as UID and GID 0 inside the container might
       appear as UID and GID 100000 on the host.  The implementation and working
       details can be gathered from the corresponding user namespace man page.
-      UID and GID mappings can be defined with the <option>lxc.id_map</option>
+      UID and GID mappings can be defined with the <option>lxc.idmap</option>
       key.
     </para>
 
@@ -1129,6 +1129,25 @@ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
             </para>
           </listitem>
         </varlistentry>
+        <varlistentry>
+          <term>
+            <option>lxc.cgroup.dir</option>
+          </term>
+          <listitem>
+            <para>
+              Specify a directory or path in which the container's cgroup will
+              be created. For example, setting
+              <option>lxc.cgroup.dir = my-cgroup/first</option> for a container
+              named "c1" will create the container's cgroup as a sub-cgroup of
+              "my-cgroup". If the user's current cgroup "my-user" is located
+              in the root cgroup of the cpuset controller in a cgroup v1
+              hierarchy, this would create the cgroup
+              "/sys/fs/cgroup/cpuset/my-user/my-cgroup/first/c1" for the
+              container. Any missing cgroups will be created by LXC. This
+              presupposes that the user has write access to its current cgroup.
+            </para>
+          </listitem>
+        </varlistentry>
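For reference, a minimal container configuration fragment using the new key might look like this (the container and path names are illustrative, not defaults):

```
# Illustrative config fragment: place this container's cgroup under
# my-cgroup/first, relative to the caller's current cgroup.
lxc.cgroup.dir = my-cgroup/first
```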
       </variablelist>
     </refsect2>
 
@@ -1383,7 +1402,7 @@ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
       <variablelist>
         <varlistentry>
           <term>
-            <option>lxc.id_map</option>
+            <option>lxc.idmap</option>
           </term>
           <listitem>
             <para>
@@ -1935,8 +1954,8 @@ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
         range 0-9999 in the container to the ids 100000-109999 on the host.
       </para>
       <programlisting>
-        lxc.id_map = u 0 100000 10000
-        lxc.id_map = g 0 100000 10000
+        lxc.idmap = u 0 100000 10000
+        lxc.idmap = g 0 100000 10000
       </programlisting>
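For unprivileged containers, the host must also delegate these id ranges to the calling user. A sketch of the corresponding <filename>/etc/subuid</filename> and <filename>/etc/subgid</filename> entries, assuming a hypothetical user "myuser" and the range used above:

```
# /etc/subuid and /etc/subgid entries delegating the range from the
# lxc.idmap example above to the (illustrative) user "myuser":
myuser:100000:10000
```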
     </refsect2>
 
diff --git a/src/lxc/cgroups/cgmanager.c b/src/lxc/cgroups/cgmanager.c
index 6c6278e59..054eb1715 100644
--- a/src/lxc/cgroups/cgmanager.c
+++ b/src/lxc/cgroups/cgmanager.c
@@ -1559,7 +1559,7 @@ static bool cgm_chown(void *hdata, struct lxc_conf *conf)
 }
 
 /*
- * TODO: this should be re-written to use the get_config_item("lxc.id_map")
+ * TODO: this should be re-written to use the get_config_item("lxc.idmap")
  * cmd api instead of getting the idmap from c->lxc_conf.  The reason is
  * that the id_maps may be different if the container was started with a
  * -f or -s argument.
diff --git a/src/lxc/conf.c b/src/lxc/conf.c
index 6e5af200c..7a1188165 100644
--- a/src/lxc/conf.c
+++ b/src/lxc/conf.c
@@ -3972,8 +3972,8 @@ void suggest_default_idmap(void)
 	ERROR("To pass uid mappings to lxc-create, you could create");
 	ERROR("~/.config/lxc/default.conf:");
 	ERROR("lxc.include = %s", LXC_DEFAULT_CONFIG);
-	ERROR("lxc.id_map = u 0 %u %u", uid, urange);
-	ERROR("lxc.id_map = g 0 %u %u", gid, grange);
+	ERROR("lxc.idmap = u 0 %u %u", uid, urange);
+	ERROR("lxc.idmap = g 0 %u %u", gid, grange);
 
 	free(gname);
 	free(uname);
diff --git a/src/lxc/conf.h b/src/lxc/conf.h
index 7c38d93ba..882c9cd83 100644
--- a/src/lxc/conf.h
+++ b/src/lxc/conf.h
@@ -96,10 +96,10 @@ enum idtype {
 
 /*
  * id_map is an id map entry.  Form in confile is:
- * lxc.id_map = u 0    9800 100
- * lxc.id_map = u 1000 9900 100
- * lxc.id_map = g 0    9800 100
- * lxc.id_map = g 1000 9900 100
+ * lxc.idmap = u 0    9800 100
+ * lxc.idmap = u 1000 9900 100
+ * lxc.idmap = g 0    9800 100
+ * lxc.idmap = g 1000 9900 100
  * meaning the container can use uids and gids 0-99 and 1000-1099,
  * with [ug]id 0 mapping to [ug]id 9800 on the host, and [ug]id 1000 to
  * [ug]id 9900 on the host.
diff --git a/src/tests/lxc-test-apparmor-mount b/src/tests/lxc-test-apparmor-mount
index 390c6f46c..a09fd5443 100755
--- a/src/tests/lxc-test-apparmor-mount
+++ b/src/tests/lxc-test-apparmor-mount
@@ -102,8 +102,8 @@ mkdir -p $HDIR/.config/lxc/
 cat > $HDIR/.config/lxc/default.conf << EOF
 lxc.net.0.type = veth
 lxc.net.0.link = lxcbr0
-lxc.id_map = u 0 910000 9999
-lxc.id_map = g 0 910000 9999
+lxc.idmap = u 0 910000 9999
+lxc.idmap = g 0 910000 9999
 EOF
 chown -R $TUSER: $HDIR
 
diff --git a/src/tests/lxc-test-unpriv b/src/tests/lxc-test-unpriv
index 40c6bf667..5fe092794 100755
--- a/src/tests/lxc-test-unpriv
+++ b/src/tests/lxc-test-unpriv
@@ -118,8 +118,8 @@ mkdir -p $HDIR/.config/lxc/
 cat > $HDIR/.config/lxc/default.conf << EOF
 lxc.net.0.type = veth
 lxc.net.0.link = lxcbr0
-lxc.id_map = u 0 910000 9999
-lxc.id_map = g 0 910000 9999
+lxc.idmap = u 0 910000 9999
+lxc.idmap = g 0 910000 9999
 EOF
 chown -R $TUSER: $HDIR
 
diff --git a/src/tests/lxc-test-usernic.in b/src/tests/lxc-test-usernic.in
index 53bc8166c..f7d19a362 100755
--- a/src/tests/lxc-test-usernic.in
+++ b/src/tests/lxc-test-usernic.in
@@ -81,8 +81,8 @@ usermod -v 910000-919999 -w 910000-919999 usernic-user
 mkdir -p /home/usernic-user/.config/lxc/
 cat > /home/usernic-user/.config/lxc/default.conf << EOF
 lxc.net.0.type = empty
-lxc.id_map = u 0 910000 10000
-lxc.id_map = g 0 910000 10000
+lxc.idmap = u 0 910000 10000
+lxc.idmap = g 0 910000 10000
 EOF
 
 if which cgm >/dev/null 2>&1; then
diff --git a/src/tests/parse_config_file.c b/src/tests/parse_config_file.c
index ef03b9285..db61dd044 100644
--- a/src/tests/parse_config_file.c
+++ b/src/tests/parse_config_file.c
@@ -455,6 +455,34 @@ int main(int argc, char *argv[])
 		return -1;
 	}
 
+	/* lxc.idmap
+	 * We can't really save the config here since save_config() wants to
+	 * chown the container's directory but we haven't created an on-disk
+	 * container. So let's test set-get-clear.
+	 */
+	if (set_get_compare_clear_save_load(
+		c, "lxc.idmap", "u 0 100000 1000000000", NULL, false) < 0) {
+		lxc_error("%s\n", "lxc.idmap");
+		goto non_test_error;
+	}
+
+	if (!c->set_config_item(c, "lxc.idmap", "u 1 100000 10000000")) {
+		lxc_error("%s\n", "failed to set config item "
+				  "\"lxc.idmap\" to \"u 1 100000 10000000\"");
+		return -1;
+	}
+
+	if (!c->set_config_item(c, "lxc.idmap", "g 1 100000 10000000")) {
+		lxc_error("%s\n", "failed to set config item "
+				  "\"lxc.idmap\" to \"g 1 100000 10000000\"");
+		return -1;
+	}
+
+	if (!c->get_config_item(c, "lxc.idmap", retval, sizeof(retval))) {
+		lxc_error("%s\n", "failed to get config item \"lxc.idmap\"");
+		return -1;
+	}
+
 	c->clear_config(c);
 	c->lxc_conf = NULL;
 
diff --git a/templates/lxc-sabayon.in b/templates/lxc-sabayon.in
index 76e877d47..75e5c765e 100644
--- a/templates/lxc-sabayon.in
+++ b/templates/lxc-sabayon.in
@@ -287,8 +287,8 @@ configure_container() {
     if [[ $unprivileged && $unprivileged == true ]] ; then
         if [[ $flush_owner == true ]] ; then
             unprivileged_options="
-lxc.id_map = u 0 ${mapped_uid} 65536
-lxc.id_map = g 0 ${mapped_gid} 65536
+lxc.idmap = u 0 ${mapped_uid} 65536
+lxc.idmap = g 0 ${mapped_gid} 65536
 "
         fi
 

