[Lxc-users] Regarding creating a LXC container in fedora 17

jjs - mainphrame jjs at mainphrame.com
Sat May 18 19:09:35 UTC 2013


Interesting. I didn't realize how spoiled I am and how easy I have it with
lxc on Ubuntu!

Joe


On Sat, May 18, 2013 at 11:19 AM, Michael H. Warfield <mhw at wittsend.com> wrote:

> On Sat, 2013-05-18 at 19:41 +0530, Ajith Adapa wrote:
> > Hmm, sounds like one more roadblock for using lxc in Fedora 17
> > because of systemd.
>
> It's not a roadblock.  More like a mile-long stretch of stingers (stop
> spike strips / tire deflators).  We're getting there.  It's just one
> more unnecessary puzzle to solve.  Sigh...
>
> > Currently there is no guide for getting started with LXC on the
> > latest Fedora versions. I think a page in fedoraproject would be of
> > great help, with the known issues and steps for using lxc under
> > various Fedora versions.
>
> First we get it working, but, yeah, that would be incredibly nice, and
> then we could also add it to this project as well.
>
> > I am really thinking of starting to use LXC containers in Fedora 14.
> > Build and boot it up with the latest stable kernel version (might be
> > 3.4) and LXC version (>0.9) and try out using LXC containers :)
> >
> >
> >
> >
> > On Sat, May 18, 2013 at 7:28 PM, Michael H. Warfield
> > <mhw at wittsend.com> wrote:
> >         On Sat, 2013-05-18 at 19:02 +0530, Ajith Adapa wrote:
> >         > Sorry for the confusion.
> >
> >         > In case of issue 3, I felt the host kernel crashed because
> >         > of the soft lock issue mentioned in Issue 2. That's the
> >         > reason I was saying "as a result of ..". Ideally speaking I
> >         > haven't done anything other than creating the lxc container
> >         > at the time. Once I restarted the host machine after the
> >         > crash I haven't observed any issues.
> >
> >         > Then I started the container using the command below and
> >         > tried to connect to its shell with lxc-console, but I ended
> >         > up with the message below. Ideally I should see a prompt,
> >         > but it just hangs there. <Ctrl+a q> works and nothing else.
> >
> >         > [root at ipiblr ~]# lxc-start -n TEST -d
> >         > [root at ipiblr ~]# lxc-console -n TEST
> >
> >         > Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to
> >         > enter Ctrl+a itself
> >
> >
> >         Oh, crap...  I keep forgetting about that (because I don't
> >         use it).  That needs to be noted somewhere in the
> >         documentation.
> >
> >         That's yet another BAD decision on the part of the systemd
> >         crowd; lxc-console is probably not going to work, at least
> >         for the time being.  They (systemd) intentionally, with
> >         documented malice aforethought, disable gettys on the vtys
> >         in the container if systemd detects that it's in a
> >         container.  However, /dev/console in the container is still
> >         active and is connected to lxc-start, and I'm able to log
> >         in there, but I have never gotten lxc-console to work with
> >         a systemd container, and I don't know of anything I can do
> >         about it.  You would need some way to force the container
> >         to start gettys on the vtys.
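One shape such a workaround could take (purely a sketch, untested; the function name is made up here, and the unit path assumes Fedora 17's systemd layout) is to put the getty@ttyN symlinks back under the container's rootfs so getty.target tries to start gettys on the vtys that lxc-console attaches to:

```shell
# Hypothetical workaround (untested sketch): re-create the getty@ttyN
# unit links under a container's rootfs so systemd's getty.target pulls
# gettys onto the vtys.  Unit path assumed per Fedora 17's systemd.
enable_container_gettys() {
    rootfs="$1"
    wants="$rootfs/etc/systemd/system/getty.target.wants"
    mkdir -p "$wants"
    for n in 1 2 3 4; do
        ln -sf /usr/lib/systemd/system/getty@.service \
               "$wants/getty@tty$n.service"
    done
}

# usage: enable_container_gettys /var/lib/lxc/TEST/rootfs
```

No promises: the units' own condition checks may still defeat this inside a container, which is part of the morass. In the meantime, logging in on /dev/console by running lxc-start -n TEST in the foreground does work, as noted above.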
> >
> >         Maybe, if I (or someone else) can figure out a way to do
> >         that (force the gettys to start on the vtys), it could be
> >         integrated into the Fedora template.  My patches for the
> >         autodev stuff (plus other stuff) have now been accepted and
> >         applied by Serge, so that's done.  Maybe I can look deeper
> >         into this morass now.
> >
> >         Regards,
> >         Mike
> >
> >         > Regards,
> >         > Ajith
> >         >
> >         >
> >         >
> >         >
> >         > On Sat, May 18, 2013 at 5:55 PM, Michael H. Warfield
> >         > <mhw at wittsend.com> wrote:
> >         >         Hello,
> >         >
> >         >         On Sat, 2013-05-18 at 12:35 +0530, Ajith Adapa wrote:
> >         >         > Hi,
> >         >
> >         >         > I have installed all the rpms created by @thomas
> >         >         > and followed @michael's steps to start an lxc
> >         >         > container.
> >         >
> >         >         > I have a doubt.
> >         >
> >         >         > 1. When I give the lxc-create command I came
> >         >         > across a huge download of various files.
> >         >         > As per my understanding a rootfs is created for
> >         >         > the new container (where can I get the steps for
> >         >         > it?).
> >         >
> >         >
> >         >         Steps for what?  It's in
> >         >         /var/lib/lxc/{Container}/rootfs/
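As an aside on the "steps" question: judging from the log lines quoted in this mail, the template's creation flow is roughly "bootstrap once into a cache, then copy the cache into the new container's rootfs". A minimal sketch (my own reconstruction, not the actual template code; the function name is made up):

```shell
# Sketch of the fedora template's create flow, reconstructed from the
# "Copy /var/cache/lxc/... to /var/lib/lxc/TEST/..." log lines.
# Not lxc's real code; names and argument handling are assumptions.
install_cached_rootfs() {
    cache="$1"     # e.g. /var/cache/lxc/fedora/i686/17/rootfs
    rootfs="$2"    # e.g. /var/lib/lxc/TEST/rootfs
    mkdir -p "$(dirname "$rootfs")"
    cp -a "$cache" "$rootfs"   # -a preserves ownership/perms/links
}
```

The big one-time download goes into the cache, so later lxc-create runs for the same release should reuse it instead of downloading again.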
> >         >
> >         >         > But I see the log below.  Is there any issue?
> >         >
> >         >         > Copy /var/cache/lxc/fedora/i686/17/rootfs to /var/lib/lxc/TEST/TEST/rootfs ...
> >         >         > Copying rootfs to /var/lib/lxc/TEST/TEST/rootfs ...
> >         >         > setting root passwd to root
> >         >         > installing fedora-release package
> >         >         > warning: Failed to read auxiliary vector, /proc not mounted?
> >         >         > warning: Failed to read auxiliary vector, /proc not mounted?
> >         >         > warning: Failed to read auxiliary vector, /proc not mounted?
> >         >         > warning: Failed to read auxiliary vector, /proc not mounted?
> >         >         > warning: Failed to read auxiliary vector, /proc not mounted?
> >         >         > warning: Failed to read auxiliary vector, /proc not mounted?
> >         >         > warning: Failed to read auxiliary vector, /proc not mounted?
> >         >         > warning: Failed to read auxiliary vector, /proc not mounted?
> >         >
> >         >
> >         >         The warnings are perfectly normal and harmless.  I
> >         >         ran into this with recent versions of yum and
> >         >         researched it.  It's because /proc is not mounted
> >         >         in the container itself when the container is
> >         >         being created.  You can ignore them.
> >         >
> >         >         > Package fedora-release-17-2.noarch already
> >         >         > installed and latest version
> >         >         > Nothing to do
> >         >
> >         >
> >         >         Again, normal.
> >         >
> >         >         > container rootfs and config created
> >         >         > 'fedora' template installed
> >         >         > 'TEST' created
> >         >
> >         >
> >         >         Looks like your container was created.  I don't
> >         >         see a problem.
> >         >
> >         >         > 2. I see a SOFT LOCK issue with the latest
> >         >         > kernel version, shown below.
> >         >         > # uname -a
> >         >         > Linux blr 3.8.8-100.fc17.i686 #1 SMP Wed Apr 17 17:26:59 UTC 2013 i686 i686 i386 GNU/Linux
> >         >         >
> >         >         >
> >         >         > [1098069.351017] SELinux: initialized (dev binfmt_misc, type binfmt_misc), uses genfs_contexts
> >         >         > [1281973.370052] BUG: soft lockup - CPU#0 stuck for 23s! [kworker/0:1:2201]
> >         >
> >         >
> >         >         I've seen that on my Dell 610s but they haven't
> >         >         caused any real failures.  Not quite sure what
> >         >         that is.
> >         >
> >         >         > [1281973.370052] Modules linked in: binfmt_misc lockd sunrpc snd_intel8x0 snd_ac97_codec ac97_bus snd_seq snd_seq_device snd_pcm i2c_piix4 i2c_core microcode virtio_balloon snd_page_alloc snd_timer snd soundcore virtio_net uinput virtio_blk
> >         >         > [1281973.370052] Pid: 2201, comm: kworker/0:1 Not tainted 3.8.8-100.fc17.i686 #1 Bochs Bochs
> >         >         > [1281973.370052] EIP: 0060:[<c068b17a>] EFLAGS: 00000206 CPU: 0
> >         >         > [1281973.370052] EIP is at iowrite16+0x1a/0x40
> >         >         > [1281973.370052] EAX: 00000001 EBX: f69b3000 ECX: 0001c050 EDX: 0000c050
> >         >         > [1281973.370052] ESI: e9d9b600 EDI: 00000000 EBP: f5009b90 ESP: f5009b8c
> >         >         > [1281973.370052]  DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
> >         >         > [1281973.370052] CR0: 8005003b CR2: 09cae530 CR3: 345e0000 CR4: 000006d0
> >         >         > [1281973.370052] DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
> >         >         > [1281973.370052] DR6: ffff0ff0 DR7: 00000400
> >         >         > [1281973.370052] Process kworker/0:1 (pid: 2201, ti=f5008000 task=f6830cb0 task.ti=f4bb2000)
> >         >         > [1281973.370052] Stack:
> >         >         > [1281973.370052]  c07107cd f5009b9c c070ffb9 f4a17a00 f5009bcc f7c36f2b 00000000 e9d9b600
> >         >         > [1281973.370052]  00000020 00000000 e9d9b600 00000000 f69b2000 00000000 f4b5a740 00000036
> >         >         > [1281973.370052]  f5009c00 c088ea5e e9d9b600 00000000 f7c384c0 f6822600 f69b2000 00000000
> >         >         > [1281973.370052] Call Trace:
> >         >         > [1281973.370052]  [<c07107cd>] ? vp_notify+0x1d/0x20
> >         >         > [1281973.370052]  [<c070ffb9>] virtqueue_kick+0x19/0x20
> >         >         > [1281973.370052]  [<f7c36f2b>] start_xmit+0x14b/0x370 [virtio_net]
> >         >         > [1281973.370052]  [<c088ea5e>] dev_hard_start_xmit+0x24e/0x4c0
> >         >         > [1281973.370052]  [<c08a793f>] sch_direct_xmit+0xaf/0x180
> >         >         > [1281973.370052]  [<c088f01e>] dev_queue_xmit+0x12e/0x370
> >         >         > [1281973.370052]  [<c08bf670>] ? ip_fragment+0x870/0x870
> >         >         > [1281973.370052]  [<c08bf88e>] ip_finish_output+0x21e/0x3b0
> >         >         > [1281973.370052]  [<c08bf670>] ? ip_fragment+0x870/0x870
> >         >         > [1281973.370052]  [<c08c0354>] ip_output+0x84/0xd0
> >         >         > [1281973.370052]  [<c08bf670>] ? ip_fragment+0x870/0x870
> >         >         > [1281973.370052]  [<c08bfb00>] ip_local_out+0x20/0x30
> >         >         > [1281973.370052]  [<c08bfc3f>] ip_queue_xmit+0x12f/0x3b0
> >         >         > [1281973.370052]  [<c08d62fb>] tcp_transmit_skb+0x3cb/0x850
> >         >         > [1281973.370052]  [<c097a440>] ? apic_timer_interrupt+0x34/0x3c
> >         >         > [1281973.370052]  [<c08d8b50>] tcp_send_ack+0xd0/0x120
> >         >         > [1281973.370052]  [<c08cc096>] __tcp_ack_snd_check+0x56/0x90
> >         >         > [1281973.370052]  [<c08d3038>] tcp_rcv_established+0x1c8/0x890
> >         >         > [1281973.370052]  [<c08dc8f3>] tcp_v4_do_rcv+0x223/0x3e0
> >         >         > [1281973.370052]  [<c06233f4>] ? security_sock_rcv_skb+0x14/0x20
> >         >         > [1281973.370052]  [<c08de39c>] tcp_v4_rcv+0x53c/0x770
> >         >         > [1281973.370052]  [<c08bb110>] ? ip_rcv_finish+0x320/0x320
> >         >         > [1281973.370052]  [<c08bb1c2>] ip_local_deliver_finish+0xb2/0x260
> >         >         > [1281973.370052]  [<c08bb4ac>] ip_local_deliver+0x3c/0x80
> >         >         > [1281973.370052]  [<c08bb110>] ? ip_rcv_finish+0x320/0x320
> >         >         > [1281973.370052]  [<c08bae50>] ip_rcv_finish+0x60/0x320
> >         >         > [1281973.370052]  [<c043009c>] ? pvclock_clocksource_read+0x9c/0x130
> >         >         > [1281973.370052]  [<c08bb73c>] ip_rcv+0x24c/0x370
> >         >         > [1281973.370052]  [<c088d5db>] __netif_receive_skb+0x5bb/0x740
> >         >         > [1281973.370052]  [<c088d8ce>] netif_receive_skb+0x2e/0x90
> >         >         > [1281973.370052]  [<f7c36a49>] virtnet_poll+0x449/0x6a0 [virtio_net]
> >         >         > [1281973.370052]  [<c044d6aa>] ? run_timer_softirq+0x1a/0x210
> >         >         > [1281973.370052]  [<c088decd>] net_rx_action+0x11d/0x1f0
> >         >         > [1281973.370052]  [<c044695b>] __do_softirq+0xab/0x1c0
> >         >         > [1281973.370052]  [<c04468b0>] ? local_bh_enable_ip+0x90/0x90
> >         >         > [1281973.370052]  <IRQ>
> >         >         > [1281973.370052]  [<c0446bdd>] ? irq_exit+0x9d/0xb0
> >         >         > [1281973.370052]  [<c04258ee>] ? smp_apic_timer_interrupt+0x5e/0x90
> >         >         > [1281973.370052]  [<c097a440>] ? apic_timer_interrupt+0x34/0x3c
> >         >         > [1281973.370052]  [<c044007b>] ? console_start+0xb/0x20
> >         >         > [1281973.370052]  [<c0979bbf>] ? _raw_spin_unlock_irqrestore+0xf/0x20
> >         >         > [1281973.370052]  [<c07918d6>] ? ata_scsi_queuecmd+0x96/0x250
> >         >         > [1281973.370052]  [<c076ad18>] ? scsi_dispatch_cmd+0xb8/0x260
> >         >         > [1281973.370052]  [<c066007b>] ? queue_store_random+0x4b/0x70
> >         >         > [1281973.370052]  [<c07711b3>] ? scsi_request_fn+0x2c3/0x4b0
> >         >         > [1281973.370052]  [<c042f2b7>] ? kvm_clock_read+0x17/0x20
> >         >         > [1281973.370052]  [<c0409448>] ? sched_clock+0x8/0x10
> >         >         > [1281973.370052]  [<c065cace>] ? __blk_run_queue+0x2e/0x40
> >         >         > [1281973.370052]  [<c066214a>] ? blk_execute_rq_nowait+0x6a/0xd0
> >         >         > [1281973.370052]  [<c066221d>] ? blk_execute_rq+0x6d/0xe0
> >         >         > [1281973.370052]  [<c06620b0>] ? __raw_spin_unlock_irq+0x10/0x10
> >         >         > [1281973.370052]  [<c0446ba7>] ? irq_exit+0x67/0xb0
> >         >         > [1281973.370052]  [<c04258ee>] ? smp_apic_timer_interrupt+0x5e/0x90
> >         >         > [1281973.370052]  [<c097a440>] ? apic_timer_interrupt+0x34/0x3c
> >         >         > [1281973.370052]  [<c076ffa0>] ? scsi_execute+0xb0/0x140
> >         >         > [1281973.370052]  [<c0771429>] ? scsi_execute_req+0x89/0x100
> >         >         > [1281973.370052]  [<c077f3d5>] ? sr_check_events+0xb5/0x2e0
> >         >         > [1281973.370052]  [<c07a64cd>] ? cdrom_check_events+0x1d/0x40
> >         >         > [1281973.370052]  [<c077f856>] ? sr_block_check_events+0x16/0x20
> >         >         > [1281973.370052]  [<c06663c5>] ? disk_check_events+0x45/0xf0
> >         >         > [1281973.370052]  [<c0666485>] ? disk_events_workfn+0x15/0x20
> >         >         > [1281973.370052]  [<c045788e>] ? process_one_work+0x12e/0x3d0
> >         >         > [1281973.370052]  [<c097a440>] ? apic_timer_interrupt+0x34/0x3c
> >         >         > [1281973.370052]  [<c0459939>] ? worker_thread+0x119/0x3b0
> >         >         > [1281973.370052]  [<c0459820>] ? flush_delayed_work+0x50/0x50
> >         >         > [1281973.370052]  [<c045e2a4>] ? kthread+0x94/0xa0
> >         >         > [1281973.370052]  [<c0980ef7>] ? ret_from_kernel_thread+0x1b/0x28
> >         >         > [1281973.370052]  [<c045e210>] ? kthread_create_on_node+0xc0/0xc0
> >         >         > [1281973.370052] Code: 5d c3 8d b4 26 00 00 00 00 89 02 c3 90 8d 74 26 00 81 fa ff ff 03 00 89 d1 77 2e 81 fa 00 00 01 00 76 0e 81 e2 ff ff 00 00 66 ef <c3> 90 8d 74 26 00 55 ba 2c 5a b2 c0 89 e5 89 c8 e8 01 ff ff ff
> >         >         > [1281991.139165] ata2: lost interrupt (Status 0x58)
> >         >         > [1281991.148055] ata2: drained 12 bytes to clear DRQ
> >         >         > [1281991.165039] ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
> >         >         > [1281991.172924] sr 1:0:0:0: CDB:
> >         >         > [1281991.172932] Get event status notification: 4a 01 00 00 10 00 00 00 08 00
> >         >         > [1281991.497342] ata2.00: cmd a0/00:00:00:08:00/00:00:00:00:00/a0 tag 0 pio 16392 in
> >         >         > [1281991.497342]          res 40/00:02:00:04:00/00:00:00:00:00/a0 Emask 0x4 (timeout)
> >         >         > [1281991.523767] ata2.00: status: { DRDY }
> >         >         > [1281991.616161] ata2: soft resetting link
> >         >         > [1281998.232648] ata2.01: qc timeout (cmd 0xec)
> >         >         > [1281998.238559] ata2.01: failed to IDENTIFY (I/O error, err_mask=0x4)
> >         >         > [1281998.247432] ata2: soft resetting link
> >         >         > [1281998.575468] ata2.01: NODEV after polling detection
> >         >         > [1281998.698009] ata2.00: configured for MWDMA2
> >         >         > [1281998.714460] ata2: EH complete
> >         >
> >         >
> >         >         Not sure what the deal is with that ATA error.
> >         >         That's a hard drive lost-interrupt problem.  Looks
> >         >         to be on your CD-ROM drive?  Looks like it
> >         >         recovered.
> >         >
> >         >         > 3. Last but not least, after some time my host
> >         >         > kernel crashed; as a result I needed to restart
> >         >         > the VPC.
> >         >
> >         >
> >         >         I don't understand what you are saying here.
> >         >         You're saying your kernel crashed, but I don't
> >         >         understand the "as a result of...".  What did you
> >         >         do, why did you do it, and what happened?
> >         >
> >         >         > Regards,
> >         >         > Ajith
> >         >
> >         >         Regards,
> >         >         Mike
> >         >
> >         >         > On Thu, May 16, 2013 at 8:09 PM, Ajith Adapa
> >         >         <ajith.adapa at gmail.com> wrote:
> >         >         >
> >         >         > > Thanks @thomas and @michael.
> >         >         > >
> >         >         > > I will try the RPMs and steps provided to
> >         >         > > start a container.
> >         >         > >
> >         >         > > Regards,
> >         >         > > Ajith
> >         >         > >
> >         >         > >
> >         >         > > On Wed, May 15, 2013 at 2:01 PM, Thomas Moschny
> >         >         > > <thomas.moschny at gmail.com> wrote:
> >         >         > >
> >         >         > >> 2013/5/14 Michael H. Warfield <mhw at wittsend.com>:
> >         >         > >> > What I would recommend as steps on Fedora 17...
> >         >         > >> >
> >         >         > >> > Download lxc-0.9.0 here:
> >         >         > >> >
> >         >         > >> > http://lxc.sourceforge.net/download/lxc/lxc-0.9.0.tar.gz
> >         >         > >> >
> >         >         > >> > You should have rpm-build and friends installed via yum
> >         >         > >> > on your system.  Build the lxc rpms by running rpmbuild
> >         >         > >> > (as any user) as follows:
> >         >         > >>
> >         >         > >> You could also try using the pre-built packages I put
> >         >         > >> here: http://thm.fedorapeople.org/lxc/ .
> >         >         > >>
> >         >         > >> Regards,
> >         >         > >> Thomas
> >         >         > >>
> >         >         > >>
> >         >         > >>
> >         >
> >
> ------------------------------------------------------------------------------
> >         >         > >> AlienVault Unified Security Management (USM)
> >         platform
> >         >         delivers complete
> >         >         > >> security visibility with the essential security
> >         >         capabilities. Easily and
> >         >         > >> efficiently configure, manage, and operate all
> >         of your
> >         >         security controls
> >         >         > >> from a single console and one unified
> >         framework. Download
> >         >         a free trial.
> >         >         > >> http://p.sf.net/sfu/alienvault_d2d
> >         >         > >> _______________________________________________
> >         >         > >> Lxc-users mailing list
> >         >         > >> Lxc-users at lists.sourceforge.net
> >         >         > >>
> >         https://lists.sourceforge.net/lists/listinfo/lxc-users
> >         >         > >>
> >         >         > >
> >         >         > >
> >         >         >
> >         >         >
> >         >
> >
> >         >
> >         >
> >         >         --
> >         >         Michael H. Warfield (AI4NB) | (770) 985-6132 | mhw at WittsEnd.com
> >         >            /\/\|=mhw=|\/\/          | (678) 463-0932 | http://www.wittsend.com/mhw/
> >         >            NIC whois: MHW9          | An optimist believes we live in the best of all
> >         >          PGP Key: 0x674627FF        | possible worlds.  A pessimist is sure of it!
> >         >
> >         >
> >         >
> >         >
> >
> >         > --
> >         > This message has been scanned for viruses and
> >         > dangerous content by MailScanner, and is
> >         > believed to be clean.
> >
> >
> >
> >
> >
>
>
>