[lxc-users] lxc-start failing in Fedora 20

Michael H. Warfield mhw at WittsEnd.com
Mon Jun 30 13:04:57 UTC 2014


On Mon, 2014-06-30 at 07:58 +0530, Ajith Adapa wrote:
> Thanks for the reply @Michael.

> When will the lxc 1.0.4 version be available in Rawhide?

That I can't answer myself.  Thomas Moschny is the Fedora maintainer for
the LXC package, and I think he's working on it.  He indicated in a
message to the -devel list back on the 13th that he'd be working on it
shortly, so it shouldn't be too long.  I've cc'ed him on this reply in
case he has an ETA to share.

> I see a lot of failed states and error messages when I finally start
> the lxc container, as shown below.  Many of them are from systemd, but
> are they ok?

Yes, they should be.  At one time, systemd-journald had a bad propensity
to run amok and burn CPU time like mad, so in the 1.0.3 release the
Fedora template was masking the systemd-journald.service.  That bug in
journald has been fixed (although I've seen at least one recent report),
so in 1.0.4 the template no longer does that.  You can remove that mask
in the container with:

systemctl unmask systemd-journald.service

Or, on the host, just:

rm ${container_root_fs}/etc/systemd/system/systemd-journald.service
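
If it helps, here's a small sketch of doing that host-side removal a
bit more defensively.  The rootfs path is just an example (adjust it to
your container's actual location), and the check reflects the fact that
systemd masks a unit by symlinking its name to /dev/null:

```shell
# Hypothetical rootfs path -- adjust to where your container lives.
container_root_fs=/var/lib/lxc/test/rootfs
mask="${container_root_fs}/etc/systemd/system/systemd-journald.service"

# systemd "masks" a unit by symlinking it to /dev/null; only remove the
# file if it really is such a mask symlink, so we never delete a real
# unit override by accident.
if [ -L "$mask" ] && [ "$(readlink "$mask")" = /dev/null ]; then
    rm "$mask"
    echo "journald mask removed"
else
    echo "no journald mask present"
fi
```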

I have not looked into the automount or D-Bus issues at this time.  If
you're not using autofs, you probably don't need the automount service.
If you do need autofs, you may run into other problems depending on
whether CAP_SYS_ADMIN has been dropped.

Anything to do with udev can be ignored.  Udev does not work in a
container and will not in the near future.

I'm a little concerned by some of the "security context" errors I saw,
which are probably SELinux-related.  The template disables SELinux in
the container, and I've been seeing some cross-distro problems (Ubuntu
running on Fedora) when the host has SELinux enabled in either
permissive or enforcing mode.  My development machines are currently
running "disabled", so I can't tell on that one at the moment.
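
For what it's worth, here's a quick way to check the host's SELinux
state from a script without assuming the getenforce utility is
installed.  Treat it as a sketch; /sys/fs/selinux is the standard
selinuxfs mount point on current kernels:

```shell
# /sys/fs/selinux/enforce contains 1 (enforcing) or 0 (permissive).
# If the file is absent, SELinux is disabled or not built into the kernel.
if [ -r /sys/fs/selinux/enforce ]; then
    if [ "$(cat /sys/fs/selinux/enforce)" = "1" ]; then
        mode=enforcing
    else
        mode=permissive
    fi
else
    mode=disabled
fi
echo "host SELinux mode: $mode"
```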

Regards,
Mike

> # lxc-start -n test
> <27>systemd[1]: Failed to set the kernel's timezone, ignoring:
> Operation not permitted
> <30>systemd[1]: systemd 208 running in system mode. (+PAM +LIBWRAP
> +AUDIT +SELINUX +IMA +SYSVINIT +LIBCRYPTSETUP +GCRYPT +ACL +XZ)
> <30>systemd[1]: Detected virtualization 'lxc'.
> 
> Welcome to Fedora 20 (Heisenbug)!
> 
> <30>systemd[1]: Set hostname to <test.ipinfusion.com>.
> <30>systemd[1]: Initializing machine ID from random generator.
> <30>systemd[1]: Starting Forward Password Requests to Wall Directory Watch.
> <30>systemd[1]: Started Forward Password Requests to Wall Directory Watch.
> <30>systemd[1]: Starting Remote File Systems.
> [  OK  ] Reached target Remote File Systems.
> <30>systemd[1]: Reached target Remote File Systems.
> <30>systemd[1]: Started Replay Read-Ahead Data.
> <30>systemd[1]: Started Collect Read-Ahead Data.
> <30>systemd[1]: Starting Delayed Shutdown Socket.
> [  OK  ] Listening on Delayed Shutdown Socket.
> <30>systemd[1]: Listening on Delayed Shutdown Socket.
> <30>systemd[1]: Starting Journal Socket.
> <27>systemd[1]: Socket service systemd-journald.service not loaded, refusing.
> [FAILED] Failed to listen on Journal Socket.
> See 'systemctl status systemd-journald.socket' for details.
> <27>systemd[1]: Failed to listen on Journal Socket.
> <30>systemd[1]: Started Load legacy module configuration.
> <30>systemd[1]: Starting /dev/initctl Compatibility Named Pipe.
> [  OK  ] Listening on /dev/initctl Compatibility Named Pipe.
> <30>systemd[1]: Listening on /dev/initctl Compatibility Named Pipe.
> <30>systemd[1]: Starting Root Slice.
> [  OK  ] Created slice Root Slice.
> <30>systemd[1]: Created slice Root Slice.
> <30>systemd[1]: Starting User and Session Slice.
> [  OK  ] Created slice User and Session Slice.
> <30>systemd[1]: Created slice User and Session Slice.
> <30>systemd[1]: Starting System Slice.
> [  OK  ] Created slice System Slice.
> <30>systemd[1]: Created slice System Slice.
> <30>systemd[1]: Starting Slices.
> [  OK  ] Reached target Slices.
> <30>systemd[1]: Reached target Slices.
> <30>systemd[1]: Starting system-getty.slice.
> [  OK  ] Created slice system-getty.slice.
> <30>systemd[1]: Created slice system-getty.slice.
> <30>systemd[1]: Mounting POSIX Message Queue File System...
>          Mounting POSIX Message Queue File System...
> <30>systemd[1]: Starting Dispatch Password Requests to Console Directory Watch.
> <30>systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
> <30>systemd[1]: Starting Paths.
> [  OK  ] Reached target Paths.
> <30>systemd[1]: Reached target Paths.
> <30>systemd[1]: Mounting Debug File System...
>          Mounting Debug File System...
> <30>systemd[1]: Starting Apply Kernel Variables...
>          Starting Apply Kernel Variables...
> <30>systemd[1]: Starting Arbitrary Executable File Formats File System
> Automount Point.
> <27>systemd[1]: Failed to open /dev/autofs: No such file or directory
> <27>systemd[1]: Failed to initialize automounter: No such file or directory
> [FAILED] Failed to set up automount Arbitrary Executable File Form...
> Automount Point.
> See 'systemctl status proc-sys-fs-binfmt_misc.automount' for details.
> <27>systemd[1]: Failed to set up automount Arbitrary Executable File
> Formats File System Automount Point.
> <29>systemd[1]: Unit proc-sys-fs-binfmt_misc.automount entered failed state.
> <30>systemd[1]: Starting Encrypted Volumes.
> [  OK  ] Reached target Encrypted Volumes.
> <30>systemd[1]: Reached target Encrypted Volumes.
> <30>systemd[1]: Started Set Up Additional Binary Formats.
> <30>systemd[1]: Started Setup Virtual Console.
> <30>systemd[1]: Started Create list of required static device nodes
> for the current kernel.
> <30>systemd[1]: Starting Create static device nodes in /dev...
>          Starting Create static device nodes in /dev...
> <30>systemd[1]: Mounting Huge Pages File System...
>          Mounting Huge Pages File System...
> <30>systemd[1]: Started Load Kernel Modules.
> <30>systemd[1]: Mounted FUSE Control File System.
> <30>systemd[1]: Mounting Configuration File System...
>          Mounting Configuration File System...
> <30>systemd[1]: Starting udev Kernel Socket.
> [  OK  ] Listening on udev Kernel Socket.
> <30>systemd[1]: Listening on udev Kernel Socket.
> <30>systemd[1]: Starting udev Control Socket.
> [  OK  ] Listening on udev Control Socket.
> <30>systemd[1]: Listening on udev Control Socket.
> <30>systemd[1]: Starting udev Coldplug all Devices...
>          Starting udev Coldplug all Devices...
> <30>systemd[1]: Starting Swap.
> [  OK  ] Reached target Swap.
> <30>systemd[1]: Reached target Swap.
> <30>systemd[1]: Mounting Temporary Directory...
>          Mounting Temporary Directory...
> <30>systemd[1]: Started File System Check on Root Device.
> <30>systemd[1]: Starting Remount Root and Kernel File Systems...
>          Starting Remount Root and Kernel File Systems...
> [  OK  ] Started Remount Root and Kernel File Systems.
> <30>systemd[1]: Started Remount Root and Kernel File Systems.
> <30>systemd[1]: Starting Load/Save Random Seed...
>          Starting Load/Save Random Seed...
> <30>systemd[1]: Starting Configure read-only root support...
>          Starting Configure read-only root support...
> <30>systemd[1]: Started Import network configuration from initramfs.
> [  OK  ] Mounted POSIX Message Queue File System.
> <30>systemd[1]: Mounted POSIX Message Queue File System.
> [  OK  ] Mounted Debug File System.
> <30>systemd[1]: Mounted Debug File System.
> [  OK  ] Mounted Configuration File System.
> <30>systemd[1]: Mounted Configuration File System.
> [  OK  ] Mounted Huge Pages File System.
> <30>systemd[1]: Mounted Huge Pages File System.
> [  OK  ] Mounted Temporary Directory.
> <30>systemd[1]: Mounted Temporary Directory.
> [  OK  ] Started Create static device nodes in /dev.
> <30>systemd[1]: Started Create static device nodes in /dev.
> [  OK  ] Started Load/Save Random Seed.
> <30>systemd[1]: Started Load/Save Random Seed.
> <30>systemd[1]: Starting udev Kernel Device Manager...
>          Starting udev Kernel Device Manager...
> <30>systemd[1]: Starting Local File Systems (Pre).
> [  OK  ] Reached target Local File Systems (Pre).
> <30>systemd[1]: Reached target Local File Systems (Pre).
> [  OK  ] Started udev Coldplug all Devices.
> <30>systemd[1]: Started udev Coldplug all Devices.
> [  OK  ] Started Configure read-only root support.
> <30>systemd[1]: Started Configure read-only root support.
> <30>systemd[1]: Starting Local File Systems.
> [  OK  ] Reached target Local File Systems.
> <30>systemd[1]: Reached target Local File Systems.
> <30>systemd[1]: Starting Trigger Flushing of Journal to Persistent Storage...
>          Starting Trigger Flushing of Journal to Persistent Storage...
> <30>systemd[1]: Started Relabel all filesystems, if necessary.
> <30>systemd[1]: Started Mark the need to relabel after reboot.
> <30>systemd[1]: Started Reconfigure the system on administrator request.
> <30>systemd[1]: Starting Create Volatile Files and Directories...
>          Starting Create Volatile Files and Directories...
> [  OK  ] Started Apply Kernel Variables.
> <30>systemd[1]: Started Apply Kernel Variables.
> [  OK  ] Started udev Kernel Device Manager.
> <30>systemd[1]: Started udev Kernel Device Manager.
> <30>systemd-udevd[24]: starting version 208
> [  OK  ] Started Create Volatile Files and Directories.
> <30>systemd[1]: Started Create Volatile Files and Directories.
> <30>systemd[1]: Starting Update UTMP about System Reboot/Shutdown...
>          Starting Update UTMP about System Reboot/Shutdown...
> [  OK  ] Started Update UTMP about System Reboot/Shutdown.
> <30>systemd[1]: Started Update UTMP about System Reboot/Shutdown.
> <30>systemd[1]: Starting System Initialization.
> [  OK  ] Reached target System Initialization.
> <30>systemd[1]: Reached target System Initialization.
> <30>systemd[1]: Starting Daily Cleanup of Temporary Directories.
> <30>systemd[1]: Started Daily Cleanup of Temporary Directories.
> <30>systemd[1]: Starting Timers.
> [  OK  ] Reached target Timers.
> <30>systemd[1]: Reached target Timers.
> <30>systemd[1]: Starting D-Bus System Message Bus Socket.
> [  OK  ] Listening on D-Bus System Message Bus Socket.
> <30>systemd[1]: Listening on D-Bus System Message Bus Socket.
> <30>systemd[1]: Starting Sockets.
> [  OK  ] Reached target Sockets.
> <30>systemd[1]: Reached target Sockets.
> <30>systemd[1]: Starting Basic System.
> [  OK  ] Reached target Basic System.
> <30>systemd[1]: Reached target Basic System.
> <30>systemd[1]: Starting LSB: Bring up/down networking...
>          Starting LSB: Bring up/down networking...
> <30>systemd[1]: Starting OpenSSH server daemon...
>          Starting OpenSSH server daemon...
> <30>systemd[1]: Starting System Logging Service...
>          Starting System Logging Service...
> <30>systemd[1]: Starting Login Service...
>          Starting Login Service...
> <30>systemd[1]: Starting D-Bus System Message Bus...
>          Starting D-Bus System Message Bus...
> [  OK  ] Started D-Bus System Message Bus.
> <30>systemd[1]: Started D-Bus System Message Bus.
> <29>systemd[1]: systemd-journal-flush.service: main process exited,
> code=exited, status=1/FAILURE
> [FAILED] Failed to start Trigger Flushing of Journal to Persistent Storage.
> See 'systemctl status systemd-journal-flush.service' for details.
> <27>systemd[1]: Failed to start Trigger Flushing of Journal to
> Persistent Storage.
> <29>systemd[1]: Unit systemd-journal-flush.service entered failed state.
> <30>systemd[1]: Starting Permit User Sessions...
>          Starting Permit User Sessions...
> [  OK  ] Started Permit User Sessions.
> <30>systemd[1]: Started Permit User Sessions.
> <30>systemd[1]: Starting Getty on tty4...
>          Starting Getty on tty4...
> [  OK  ] Started Getty on tty4.
> <30>systemd[1]: Started Getty on tty4.
> <30>systemd[1]: Starting Getty on tty1...
>          Starting Getty on tty1...
> [  OK  ] Started Getty on tty1.
> <30>systemd[1]: Started Getty on tty1.
> <30>systemd[1]: Starting Getty on tty3...
>          Starting Getty on tty3...
> [  OK  ] Started Getty on tty3.
> <30>systemd[1]: Started Getty on tty3.
> <30>systemd[1]: Starting Getty on tty2...
>          Starting Getty on tty2...
> [  OK  ] Started Getty on tty2.
> <30>systemd[1]: Started Getty on tty2.
> <30>systemd[1]: Starting Console Getty...
>          Starting Console Getty...
> [  OK  ] Started Console Getty.
> <30>systemd[1]: Started Console Getty.
> <30>systemd[1]: Starting Login Prompts.
> [  OK  ] Reached target Login Prompts.
> <30>systemd[1]: Reached target Login Prompts.
> <29>systemd[1]: dbus.service: main process exited, code=exited, status=1/FAILURE
> <29>systemd[1]: Unit dbus.service entered failed state.
> <30>systemd[1]: Starting D-Bus System Message Bus...
>          Starting D-Bus System Message Bus...
> [  OK  ] Started D-Bus System Message Bus.
> <30>systemd[1]: Started D-Bus System Message Bus.
> <27>systemd-udevd[67]: Failed to apply ACL on /dev/snd/pcmC0D0c: No
> such file or directory
> <27>systemd-udevd[68]: Failed to apply ACL on /dev/snd/pcmC0D0p: No
> such file or directory
> <30>systemd[1]: Starting Sound Card.
> [  OK  ] Reached target Sound Card.
> <30>systemd[1]: Reached target Sound Card.
> <29>systemd[1]: dbus.service: main process exited, code=exited, status=1/FAILURE
> <27>systemd-udevd[64]: Failed to apply ACL on /dev/snd/controlC0: No
> such file or directory
> <29>systemd[1]: Unit dbus.service entered failed state.
> <30>systemd[1]: Starting D-Bus System Message Bus...
>          Starting D-Bus System Message Bus...
> <27>systemd-udevd[63]: Failed to apply ACL on /dev/snd/pcmC0D1c: No
> such file or directory
> <27>systemd-udevd[66]: Failed to apply ACL on /dev/dri/card0: No such
> file or directory
> [  OK  ] Started D-Bus System Message Bus.
> <30>systemd[1]: Started D-Bus System Message Bus.
> <27>systemd-udevd[64]: inotify_add_watch(7, /dev/vda, 10) failed: No
> such file or directory
> <27>systemd-udevd[68]: inotify_add_watch(7, /dev/vda2, 10) failed: No
> such file or directory
> <27>systemd-udevd[71]: Failed to apply ACL on /dev/sg0: No such file
> or directory
> <27>systemd-udevd[64]: inotify_add_watch(7, /dev/vda1, 10) failed: No
> such file or directory
> <29>systemd[1]: dbus.service: main process exited, code=exited, status=1/FAILURE
> <29>systemd[1]: Unit dbus.service entered failed state.
> <30>systemd[1]: Starting D-Bus System Message Bus...
>          Starting D-Bus System Message Bus...
> [  OK  ] Started D-Bus System Message Bus.
> <30>systemd[1]: Started D-Bus System Message Bus.
> <29>systemd[1]: dbus.service: main process exited, code=exited, status=1/FAILURE
> <29>systemd[1]: Unit dbus.service entered failed state.
> <30>systemd[1]: Starting D-Bus System Message Bus...
>          Starting D-Bus System Message Bus...
> [  OK  ] Started D-Bus System Message Bus.
> <30>systemd[1]: Started D-Bus System Message Bus.
> <27>systemd-udevd[69]: Failed to apply ACL on /dev/snd/seq: No such
> file or directory
> <27>systemd-udevd[69]: Failed to apply ACL on /dev/snd/timer: No such
> file or directory
> <29>systemd[1]: dbus.service: main process exited, code=exited, status=1/FAILURE
> <29>systemd[1]: Unit dbus.service entered failed state.
> <30>systemd[1]: Starting D-Bus System Message Bus...
>          Starting D-Bus System Message Bus...
> <28>systemd[1]: dbus.service start request repeated too quickly,
> refusing to start.
> [FAILED] Failed to start D-Bus System Message Bus.
> See 'systemctl status dbus.service' for details.
> <27>systemd[1]: Failed to start D-Bus System Message Bus.
> <29>systemd[1]: Unit dbus.socket entered failed state.
> <27>systemd-udevd[63]: Failed to apply ACL on /dev/sr0: No such file
> or directory
> [  OK  ] Started System Logging Service.
> <30>systemd[1]: Started System Logging Service.
> [  OK  ] Started OpenSSH server daemon.
> <30>systemd[1]: Started OpenSSH server daemon.
> [  OK  ] Started LSB: Bring up/down networking.
> <30>systemd[1]: Started LSB: Bring up/down networking.
> <30>systemd[1]: Starting Network is Online.
> [  OK  ] Reached target Network is Online.
> <30>systemd[1]: Reached target Network is Online.
> systemd-logind.service: main process exited, code=exited, status=1/FAILURE
> Unit systemd-logind.service entered failed state.
> systemd-logind.service holdoff time over, scheduling restart.
> dbus.service start request repeated too quickly, refusing to start.
> <35>systemd-logind[316]: Failed to get system D-Bus connection: Did
> not receive a reply. Possible causes include: the remote application
> did not send a reply, the message bus security policy blocked the
> reply, the reply timeout expired, or the network connection was
> broken.
> <35>systemd-logind[316]: Failed to fully start up daemon: Connection refused
> Unit dbus.socket entered failed state.
> systemd-logind.service: main process exited, code=exited, status=1/FAILURE
> Unit systemd-logind.service entered failed state.
> systemd-logind.service holdoff time over, scheduling restart.
> 
> dbus.service start request repeated too quickly, refusing to start.
> Unit dbus.socket entered failed state.<35>systemd-logind[317]: Failed
> to get system D-Bus connection: Did not receive a reply. Possible
> causes include: the remote application did not send a reply, the
> message bus security policy blocked the reply, the reply timeout
> expired, or the network connection was broken.
> 
> <35>systemd-logind[317]: Failed to fully start up daemon: Connection refused
> systemd-logind.service: main process exited, code=exited, status=1/FAILURE
> Unit systemd-logind.service entered failed state.
> systemd-logind.service holdoff time over, scheduling restart.
> dbus.service start request repeated too quickly, refusing to start.
> <35>systemd-logind[318]: Failed to get system D-Bus connection: Did
> not receive a reply. Possible causes include: the remote application
> did not send a reply, the message bus security policy blocked the
> reply, the reply timeout expired, or the network connection was
> broken.Unit dbus.socket entered failed state.
> 
> <35>systemd-logind[318]: Failed to fully start up daemon: Connection refused
> systemd-logind.service: main process exited, code=exited, status=1/FAILURE
> Unit systemd-logind.service entered failed state.
> systemd-logind.service holdoff time over, scheduling restart.
> Fedora release 20 (Heisenbug)
> Kernel 3.14.5-200.fc20.x86_64 on an x86_64 (console)
> 
> test login: dbus.service start request repeated too quickly, refusing to start.
> <35>systemd-logind[319]: Failed to get system D-Bus connection: Did
> not receive a reply. Possible causes include: the remote application
> did not send a reply, the message bus security policy blocked the
> reply, the reply timeout expired, or the network connection was
> broken.
> <35>systemd-logind[319]: Failed to fully start up daemon: Connection refused
> Unit dbus.socket entered failed state.
> systemd-logind.service: main process exited, code=exited, status=1/FAILURE
> Unit systemd-logind.service entered failed state.
> systemd-logind.service holdoff time over, scheduling restart.
> systemd-logind.service start request repeated too quickly, refusing to start.
> Unit systemd-logind.service entered failed state.
> 
> 
> Fedora release 20 (Heisenbug)
> Kernel 3.14.5-200.fc20.x86_64 on an x86_64 (console)
> 
> test login: root
> Password:
> Unable to get valid context for root
> dbus.service: main process exited, code=exited, status=1/FAILURE
> Unit dbus.service entered failed state.
> dbus.service: main process exited, code=exited, status=1/FAILURE
> Unit dbus.service entered failed state.
> dbus.service: main process exited, code=exited, status=1/FAILURE
> Unit dbus.service entered failed state.
> dbus.service: main process exited, code=exited, status=1/FAILURE
> Unit dbus.service entered failed state.
> dbus.service: main process exited, code=exited, status=1/FAILURE
> Unit dbus.service entered failed state.
> dbus.service start request repeated too quickly, refusing to start.
> Unit dbus.socket entered failed state.
> [root at test ~]#
> 
> 
> Regards,
> Ajith
> 
> On Sat, Jun 28, 2014 at 8:42 PM, Michael H. Warfield <mhw at wittsend.com> wrote:
> > On Sat, 2014-06-28 at 20:12 +0530, Ajith Adapa wrote:
> >> Thanks @Michael
> >
> >> I am running lxc 1.0.3 version in rawhide.
> >
> > Ah.  Ok...  Understand that lxc-autostart is not fully functional in
> > 1.0.3 and will not autoboot containers on host boot.  That's in 1.0.4,
> > which should be in there real soon now.
> >
> >> My fedora 20 setup is a VM and hasn't got libvirtd running. As you
> >> mentioned earlier thats the reason why virbr0 is not created by
> >> default.
> >
> >> Why doesn't lxc directly support creating virbr0 ?  It might be one
> >> more option in the template.
> >
> > For that, I think I'll have to defer to Serge or Stéphane for a
> > definitive answer but, IMHO, it's largely because libvirt is already
> > responsible for virbr0 and it could result in conflicts.  Not saying it
> > would, just that it could.  Worst case would be a race
> > condition between us in lxc-autostart-helper and the libvirt service in
> > trying to create that bridge.  It could result in a failure in one
> > service or the other.
> >
> > It's also possible that we've just never really looked into it.  Perhaps
> > there should be a run-time dependency on libvirt running that we could
> > detect and document better.  The error message leaves a little bit to be
> > desired.
> >
> > Given that, it's not a template issue at all and wouldn't (shouldn't)
> > require any container config changes.  It would need to be some sort of
> > lxc service startup option to precreate the needed bridges or a helper.
> >
> > Given all that, 1.0.4 may very well resolve (or may compound) the
> > problem as the lxc.service systemd service uses a script that waits for
> > virbr0 from libvirt to settle before autobooting containers (that's
> > where your race conditions would live).  I'm not sure how that's going
> > to play out if libvirt is not running.  It looks like we may need to add
> > code to /usr/libexec/lxc/lxc-autostart-helper to ensure that the default
> > lxc network bridge is running.  I'd be reluctant to add it to the
> > lxc-start code as it would be difficult to ensure it would always be
> > doing the right thing, including cases like unpriv containers.
> >
> > This is a corner case that, maybe, Dwight and I may need to address or
> > punt over to Serge or Stéphane.  It's complicated in that we don't
> > always know which bridges are needed, or even whether they are needed
> > if a site is using "macvlan" or "physical" network types.  It definitely
> > needs to be tested.
> >
> >> I will try out the steps given regarding password.
> >
> > Cool.
> >
> >> Regards,
> >> Ajith
> >
> > Regards,
> > Mike
> >
> >> On Sat, Jun 28, 2014 at 7:13 PM, Michael H. Warfield <mhw at wittsend.com> wrote:
> >> > On Sat, 2014-06-28 at 15:34 +0530, Ajith Adapa wrote:
> >> >> Hi,
> >> >
> >> >> lxc-start is failing in latest fedora 20 saying virbr0 is not found.
> >> >
> >> > What version of LXC?  AFAIK, it's still 0.9.0 with 1.0.3 (hopefully
> >> > 1.0.4 real soon now) in rawhide.
> >> >
> >> >> 1. Is it mandatory for the admin to create the virbr0 interface
> >> >> before starting a container ?
> >> >
> >> > Yes.  You have two ways to do this.
> >> >
> >> > 1) [Preferred] Have libvirt running.  virbr0 is the default bridge
> >> > for libvirt, and it will set it up and manage it for you.
> >> >
> >> > 2) Create the bridge manually.
> >> >
> >> > If you have another bridge already on the system, you can change the
> >> > bridge name in the configuration files and in /etc/lxc/default.conf.
> >> > Personally, I keep libvirt and virbr0 up and running for my NAT'ed
> >> > bridge, while I have a static lxcbr0 to which the primary interface has
> >> > been added for a true bridge to the outer network (but I have lots of
> >> > IPv4 addresses, so I can allow them direct access to the address pool).
> >> >
> >> >> 2. How can I create a container with a default password for root
> >> >> rather than auto-generating one ?
> >> >
> >> > It's a tuning knob in the template.  Read the comments in
> >> > the /usr/share/lxc/templates/lxc-fedora file starting around line 32...
> >> >
> >> > --
> >> >
> >> > # Some combinations of the tuning knobs below do not exactly make sense,
> >> > # but that's ok.
> >> > #
> >> > # If the "root_password" is non-blank, use it, else set a default.
> >> > # This can be passed to the script as an environment variable and is
> >> > # set by a shell conditional assignment.  Looks weird but it is what it is.
> >> > #
> >> > # If the root password contains a ding ($) then try to expand it.
> >> > # That will pick up things like ${name} and ${RANDOM}.
> >> > # If the root password contains more than 3 consecutive X's, pass it as
> >> > # a template to mktemp and take the result.
> >> > #
> >> > # If root_display_password = yes, display the temporary root password at exit.
> >> > # If root_store_password = yes, store it in the configuration directory
> >> > # If root_prompt_password = yes, invoke "passwd" to force the user to change
> >> > # the root password after the container is created.
> >> > #
> >> > # These are conditional assignments...  They can be overridden from the
> >> > # preexisting environment variables...
> >> > #
> >> > # Make sure this is in single quotes to defer expansion to later!
> >> > # :{root_password='Root-${name}-${RANDOM}'}
> >> > : ${root_password='Root-${name}-XXXXXX'}
> >> >
> >> > # Now, it doesn't make much sense to display, store, and force change
> >> > # together.  But, we gotta test, right???
> >> > : ${root_display_password='no'}
> >> > : ${root_store_password='yes'}
> >> > # Prompting for something interactive has potential for mayhem
> >> > # with users running under the API...  Don't default to "yes"
> >> > : ${root_prompt_password='no'}
> >> >
> >> > --
> >> >
> >> > Stated plainly, create your container with the following:
> >> >
> >> > export root_store_password='no'
> >> > export root_password='my_root_password'
> >> > lxc-create -n container1 -t fedora
> >> > lxc-create -n container2 -t fedora
> >> >
> >> > etc, etc, etc...  Each container will have the root password set to
> >> > "my_root_password".
> >> >
> >> > Or this:
> >> >
> >> > export root_prompt_password='yes'
> >> >
> >> > Then run lxc-create and it will then prompt you for the new root
> >> > password.
> >> >
> >> > NOTE: This only works for the Fedora and CentOS templates.  It has not
> >> > been ported to any of the other templates at this time!
> >> >
> >> >>
> >> >> # lxc-create -n test -t fedora
> >> >> Host CPE ID from /etc/os-release: cpe:/o:fedoraproject:fedora:20
> >> >> Checking cache download in /var/cache/lxc/fedora/x86_64/20/rootfs ...
> >> >> Cache found. Updating...
> >> >> No packages marked for update
> >> >> Update finished
> >> >> Copy /var/cache/lxc/fedora/x86_64/20/rootfs to /var/lib/lxc/test/rootfs ...
> >> >> Copying rootfs to /var/lib/lxc/test/rootfs ...
> >> >> Storing root password in '/var/lib/lxc/test/tmp_root_pass'
> >> >> Expiring password for user root.
> >> >> passwd: Success
> >> >> installing fedora-release package
> >> >> Package fedora-release-20-3.noarch already installed and latest version
> >> >> Nothing to do
> >> >>
> >> >> Container rootfs and config have been created.
> >> >> Edit the config file to check/enable networking setup.
> >> >>
> >> >> The temporary root password is stored in:
> >> >>
> >> >>         '/var/lib/lxc/test/tmp_root_pass'
> >> >>
> >> >>
> >> >> The root password is set up as expired and will require it to be changed
> >> >> at first login, which you should do as soon as possible.  If you lose the
> >> >> root password or wish to change it without starting the container, you
> >> >> can change it from the host by running the following command (which will
> >> >> also reset the expired flag):
> >> >>
> >> >>         chroot /var/lib/lxc/test/rootfs passwd
> >> >>
> >> >> # lxc-start -n test
> >> >> lxc-start: failed to attach 'vethKOT10G' to the bridge 'virbr0' : No such device
> >> >> lxc-start: failed to create netdev
> >> >> lxc-start: failed to create the network
> >> >> lxc-start: failed to spawn 'test'
> >> >>
> >> >> ======================================================
> >> >> configuration
> >> >> ======================================================
> >> >>
> >> >> # yum install lxc*
> >> >> Loaded plugins: langpacks
> >> >> Package lxc-extra-1.0.3-2.fc21.x86_64 already installed and latest version
> >> >> Package lxc-templates-1.0.3-2.fc21.x86_64 already installed and latest version
> >> >> Package lxc-libs-1.0.3-2.fc21.x86_64 already installed and latest version
> >> >> Package lxc-1.0.3-2.fc21.x86_64 already installed and latest version
> >> >> Package lxc-doc-1.0.3-2.fc21.noarch already installed and latest version
> >> >> Package lxc-devel-1.0.3-2.fc21.x86_64 already installed and latest version
> >> >>
> >> >> -------------------------------------------
> >> >>
> >> >> # lxc-checkconfig
> >> >> Kernel configuration not found at /proc/config.gz; searching...
> >> >> Kernel configuration found at /boot/config-3.14.5-200.fc20.x86_64
> >> >> --- Namespaces ---
> >> >> Namespaces: enabled
> >> >> Utsname namespace: enabled
> >> >> Ipc namespace: enabled
> >> >> Pid namespace: enabled
> >> >> User namespace: enabled
> >> >> Network namespace: enabled
> >> >> Multiple /dev/pts instances: enabled
> >> >>
> >> >> --- Control groups ---
> >> >> Cgroup: enabled
> >> >> Cgroup clone_children flag: enabled
> >> >> Cgroup device: enabled
> >> >> Cgroup sched: enabled
> >> >> Cgroup cpu account: enabled
> >> >> Cgroup memory controller: enabled
> >> >> Cgroup cpuset: enabled
> >> >>
> >> >> --- Misc ---
> >> >> Veth pair device: enabled
> >> >> Macvlan: enabled
> >> >> Vlan: enabled
> >> >> File capabilities: enabled
> >> >>
> >> >> Note : Before booting a new kernel, you can check its configuration
> >> >> usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig
> >> >>
> >> >> Regards,
> >> >> Ajith
> >> >
> >> > Regards,
> >> > Mike
> >> > _______________________________________________
> >> > lxc-users mailing list
> >> > lxc-users at lists.linuxcontainers.org
> >> > http://lists.linuxcontainers.org/listinfo/lxc-users
> >
> >
> >

-- 
Michael H. Warfield (AI4NB) | (770) 978-7061 |  mhw at WittsEnd.com
   /\/\|=mhw=|\/\/          | (678) 463-0932 |  http://www.wittsend.com/mhw/
   NIC whois: MHW9          | An optimist believes we live in the best of all
 PGP Key: 0x674627FF        | possible worlds.  A pessimist is sure of it!
