<div dir="ltr">Interesting. I didn't realize how spoiled I am and how easy I have it with lxc on ubuntu!<div><br></div><div>Joe</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Sat, May 18, 2013 at 11:19 AM, Michael H. Warfield <span dir="ltr"><<a href="mailto:mhw@wittsend.com" target="_blank">mhw@wittsend.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="im">On Sat, 2013-05-18 at 19:41 +0530, Ajith Adapa wrote:<br>
> Hmm sounds one more road block for using lxc in fedora 17 because of<br>
> systemd.<br>
<br>
</div>It's not a roadblock. More like a mile long stretch of stingers (stop<br>
spike strips / tire deflators). We're getting there. It's just one<br>
more unnecessary puzzle to solve. Sigh...<br>
<div class="im"><br>
> Currently there is no place where there is a guide for starting up<br>
> with LXC for latest fedora versions. I think a page in fedoraproject<br>
> would be of great help with the known issues and steps using lxc under<br>
> various fedora versions.<br>
<br>
</div>First we get it working but, yeah, that would be incredibly nice and<br>
then also add it to this project as well.<br>
<div class="HOEnZb"><div class="h5"><br>
> I am really thinking to start using LXC containers in fedora 14. Build<br>
> and Boot it up with latest stable kernel version (Might be 3.4) and<br>
> LXC version (>0.9) and try out using LXC- containers :)<br>
><br>
><br>
><br>
><br>
> On Sat, May 18, 2013 at 7:28 PM, Michael H. Warfield<br>
> <<a href="mailto:mhw@wittsend.com">mhw@wittsend.com</a>> wrote:<br>
> On Sat, 2013-05-18 at 19:02 +0530, Ajith Adapa wrote:
> > Sorry for the confusion.
>
> > In case of issue 3, I felt the host kernel crashed because of the
> > soft lock issue mentioned in issue 2. That's the reason I was saying
> > "as a result of ...". Ideally speaking, I hadn't done anything other
> > than creating the lxc container at the time. Once I restarted the
> > host machine after the crash, I haven't observed any issues.
>
> > Then I started the container using the command below and tried to
> > connect to its shell using the lxc-console command, but I ended up
> > with the message below. Ideally I should see a prompt, but it just
> > hangs there. <Ctrl+a q> works and nothing else.
>
> > [root@ipiblr ~]# lxc-start -n TEST -d
> > [root@ipiblr ~]# lxc-console -n TEST
>
> > Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a
> > itself
>
> Oh, crap... I keep forgetting about that (because I don't use it).
> That needs to be noted somewhere in the documentation.
>
> That's yet another BAD decision on the part of the systemd crowd:
> lxc-console is probably not going to work, at least for the time
> being. They (systemd) intentionally, with documented malice
> aforethought, disable the gettys on the vtys in the container if
> systemd detects that it's running in a container. However,
> /dev/console in the container is still active and is connected to
> lxc-start, and I'm able to log in there, but I have never gotten
> lxc-console to work with a systemd container, and I don't know of
> anything I can do about it. You would need some way to force the
> container to start gettys on the vtys.
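>
> In the meantime, a workaround that should at least get you a login,
> since /dev/console is the one thing still wired up: start the
> container in the foreground (no -d), so the console stays attached to
> your terminal, e.g. with your container:
>
>     lxc-start -n TEST
>
> and log in at the console prompt that comes up there.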
>
> Maybe, if I (or someone else) can figure out a way to do that (force
> the gettys to start on the vtys), it could be integrated into the
> Fedora template. My patches for the autodev stuff (plus other stuff)
> have now been accepted and applied by Serge, so that's done. Maybe I
> can look deeper into this morass now.
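>
> Off the top of my head, something along these lines might do it - a
> rough, untested sketch, assuming the Fedora 17 unit paths and your
> TEST container name:
>
>     # systemd disables the gettys when it detects a container;
>     # recreate the getty.target.wants links inside the rootfs by hand
>     ROOTFS=/var/lib/lxc/TEST/rootfs
>     for n in 1 2 3 4; do
>         ln -sf /usr/lib/systemd/system/getty@.service \
>            $ROOTFS/etc/systemd/system/getty.target.wants/getty@tty$n.service
>     done
>
> plus lxc.tty = 4 in the container config so the vtys actually exist.
> No guarantees systemd won't still skip them when it sees it's in a
> container, though.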
>
> Regards,
> Mike
>
> > Regards,
> > Ajith
>
> > On Sat, May 18, 2013 at 5:55 PM, Michael H. Warfield
> > <mhw@wittsend.com> wrote:
> > Hello,
> >
> > On Sat, 2013-05-18 at 12:35 +0530, Ajith Adapa wrote:
> > > Hi,
> >
> > > I have installed all the rpms created by @thomas and followed
> > > @michael's steps to start an lxc container.
> >
> > > I have a doubt.
> >
> > > 1. When I ran the lxc-create command I saw a huge download of
> > > various files. As per my understanding, a rootfs is created for
> > > the new container (where can I find the steps for it?).
> >
> > Steps for what? It's in /var/lib/lxc/{Container}/rootfs/
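> >
> > With the name from your commands, that would be:
> >
> >     ls /var/lib/lxc/TEST/rootfs
> >
> > (though note your log below says it copied to
> > /var/lib/lxc/TEST/TEST/rootfs, so look there if the first path is
> > empty). You should see an ordinary Fedora directory tree (bin, etc,
> > usr, var, and so on) that you can poke at from the host.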
> >
> > > But I see the log below. Is there any issue?
> >
> > > Copy /var/cache/lxc/fedora/i686/17/rootfs to /var/lib/lxc/TEST/TEST/rootfs ...
> > > Copying rootfs to /var/lib/lxc/TEST/TEST/rootfs ...
> > > setting root passwd to root
> > > installing fedora-release package
> > > warning: Failed to read auxiliary vector, /proc not mounted?
> > > warning: Failed to read auxiliary vector, /proc not mounted?
> > > warning: Failed to read auxiliary vector, /proc not mounted?
> > > warning: Failed to read auxiliary vector, /proc not mounted?
> > > warning: Failed to read auxiliary vector, /proc not mounted?
> > > warning: Failed to read auxiliary vector, /proc not mounted?
> > > warning: Failed to read auxiliary vector, /proc not mounted?
> > > warning: Failed to read auxiliary vector, /proc not mounted?
> >
> > The warnings are perfectly normal and harmless. I ran into this with
> > recent versions of yum and researched it. It's because /proc is not
> > mounted in the container itself while the container is being created.
> > You can ignore them.
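> >
> > (If you ever want to shut them up, mounting a proc instance into the
> > cache rootfs before the install step should do it - untested, using
> > the cache path from your log:
> >
> >     mount -t proc proc /var/cache/lxc/fedora/i686/17/rootfs/proc
> >
> > and umount it again afterwards - but they really are harmless.)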
> >
> > > Package fedora-release-17-2.noarch already installed and latest version
> > > Nothing to do
> >
> > Again, normal.
> >
> > > container rootfs and config created
> > > 'fedora' template installed
> > > 'TEST' created
> >
> > Looks like your container was created. I don't see a problem.
> >
> > > 2. I see a SOFT LOCK issue with the latest kernel version, shown
> > > below.
> >
> > > # uname -a
> > > Linux blr 3.8.8-100.fc17.i686 #1 SMP Wed Apr 17 17:26:59 UTC 2013 i686 i686 i386 GNU/Linux
> >
> > > [1098069.351017] SELinux: initialized (dev binfmt_misc, type binfmt_misc), uses genfs_contexts
> > > [1281973.370052] BUG: soft lockup - CPU#0 stuck for 23s! [kworker/0:1:2201]
> >
> > I've seen that on my Dell 610s, but they haven't caused any real
> > failures. Not quite sure what that is.
> >
> > > [1281973.370052] Modules linked in: binfmt_misc lockd sunrpc snd_intel8x0 snd_ac97_codec ac97_bus snd_seq snd_seq_device snd_pcm i2c_piix4 i2c_core microcode virtio_balloon snd_page_alloc snd_timer snd soundcore virtio_net uinput virtio_blk
> > > [1281973.370052] Pid: 2201, comm: kworker/0:1 Not tainted 3.8.8-100.fc17.i686 #1 Bochs Bochs
> > > [1281973.370052] EIP: 0060:[<c068b17a>] EFLAGS: 00000206 CPU: 0
> > > [1281973.370052] EIP is at iowrite16+0x1a/0x40
> > > [1281973.370052] EAX: 00000001 EBX: f69b3000 ECX: 0001c050 EDX: 0000c050
> > > [1281973.370052] ESI: e9d9b600 EDI: 00000000 EBP: f5009b90 ESP: f5009b8c
> > > [1281973.370052] DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
> > > [1281973.370052] CR0: 8005003b CR2: 09cae530 CR3: 345e0000 CR4: 000006d0
> > > [1281973.370052] DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
> > > [1281973.370052] DR6: ffff0ff0 DR7: 00000400
> > > [1281973.370052] Process kworker/0:1 (pid: 2201, ti=f5008000 task=f6830cb0 task.ti=f4bb2000)
> > > [1281973.370052] Stack:
> > > [1281973.370052]  c07107cd f5009b9c c070ffb9 f4a17a00 f5009bcc f7c36f2b 00000000 e9d9b600
> > > [1281973.370052]  00000020 00000000 e9d9b600 00000000 f69b2000 00000000 f4b5a740 00000036
> > > [1281973.370052]  f5009c00 c088ea5e e9d9b600 00000000 f7c384c0 f6822600 f69b2000 00000000
> > > [1281973.370052] Call Trace:
> > > [1281973.370052]  [<c07107cd>] ? vp_notify+0x1d/0x20
> > > [1281973.370052]  [<c070ffb9>] virtqueue_kick+0x19/0x20
> > > [1281973.370052]  [<f7c36f2b>] start_xmit+0x14b/0x370 [virtio_net]
> > > [1281973.370052]  [<c088ea5e>] dev_hard_start_xmit+0x24e/0x4c0
> > > [1281973.370052]  [<c08a793f>] sch_direct_xmit+0xaf/0x180
> > > [1281973.370052]  [<c088f01e>] dev_queue_xmit+0x12e/0x370
> > > [1281973.370052]  [<c08bf670>] ? ip_fragment+0x870/0x870
> > > [1281973.370052]  [<c08bf88e>] ip_finish_output+0x21e/0x3b0
> > > [1281973.370052]  [<c08bf670>] ? ip_fragment+0x870/0x870
> > > [1281973.370052]  [<c08c0354>] ip_output+0x84/0xd0
> > > [1281973.370052]  [<c08bf670>] ? ip_fragment+0x870/0x870
> > > [1281973.370052]  [<c08bfb00>] ip_local_out+0x20/0x30
> > > [1281973.370052]  [<c08bfc3f>] ip_queue_xmit+0x12f/0x3b0
> > > [1281973.370052]  [<c08d62fb>] tcp_transmit_skb+0x3cb/0x850
> > > [1281973.370052]  [<c097a440>] ? apic_timer_interrupt+0x34/0x3c
> > > [1281973.370052]  [<c08d8b50>] tcp_send_ack+0xd0/0x120
> > > [1281973.370052]  [<c08cc096>] __tcp_ack_snd_check+0x56/0x90
> > > [1281973.370052]  [<c08d3038>] tcp_rcv_established+0x1c8/0x890
> > > [1281973.370052]  [<c08dc8f3>] tcp_v4_do_rcv+0x223/0x3e0
> > > [1281973.370052]  [<c06233f4>] ? security_sock_rcv_skb+0x14/0x20
> > > [1281973.370052]  [<c08de39c>] tcp_v4_rcv+0x53c/0x770
> > > [1281973.370052]  [<c08bb110>] ? ip_rcv_finish+0x320/0x320
> > > [1281973.370052]  [<c08bb1c2>] ip_local_deliver_finish+0xb2/0x260
> > > [1281973.370052]  [<c08bb4ac>] ip_local_deliver+0x3c/0x80
> > > [1281973.370052]  [<c08bb110>] ? ip_rcv_finish+0x320/0x320
> > > [1281973.370052]  [<c08bae50>] ip_rcv_finish+0x60/0x320
> > > [1281973.370052]  [<c043009c>] ? pvclock_clocksource_read+0x9c/0x130
> > > [1281973.370052]  [<c08bb73c>] ip_rcv+0x24c/0x370
> > > [1281973.370052]  [<c088d5db>] __netif_receive_skb+0x5bb/0x740
> > > [1281973.370052]  [<c088d8ce>] netif_receive_skb+0x2e/0x90
> > > [1281973.370052]  [<f7c36a49>] virtnet_poll+0x449/0x6a0 [virtio_net]
> > > [1281973.370052]  [<c044d6aa>] ? run_timer_softirq+0x1a/0x210
> > > [1281973.370052]  [<c088decd>] net_rx_action+0x11d/0x1f0
> > > [1281973.370052]  [<c044695b>] __do_softirq+0xab/0x1c0
> > > [1281973.370052]  [<c04468b0>] ? local_bh_enable_ip+0x90/0x90
> > > [1281973.370052]  <IRQ>
> > > [1281973.370052]  [<c0446bdd>] ? irq_exit+0x9d/0xb0
> > > [1281973.370052]  [<c04258ee>] ? smp_apic_timer_interrupt+0x5e/0x90
> > > [1281973.370052]  [<c097a440>] ? apic_timer_interrupt+0x34/0x3c
> > > [1281973.370052]  [<c044007b>] ? console_start+0xb/0x20
> > > [1281973.370052]  [<c0979bbf>] ? _raw_spin_unlock_irqrestore+0xf/0x20
> > > [1281973.370052]  [<c07918d6>] ? ata_scsi_queuecmd+0x96/0x250
> > > [1281973.370052]  [<c076ad18>] ? scsi_dispatch_cmd+0xb8/0x260
> > > [1281973.370052]  [<c066007b>] ? queue_store_random+0x4b/0x70
> > > [1281973.370052]  [<c07711b3>] ? scsi_request_fn+0x2c3/0x4b0
> > > [1281973.370052]  [<c042f2b7>] ? kvm_clock_read+0x17/0x20
> > > [1281973.370052]  [<c0409448>] ? sched_clock+0x8/0x10
> > > [1281973.370052]  [<c065cace>] ? __blk_run_queue+0x2e/0x40
> > > [1281973.370052]  [<c066214a>] ? blk_execute_rq_nowait+0x6a/0xd0
> > > [1281973.370052]  [<c066221d>] ? blk_execute_rq+0x6d/0xe0
> > > [1281973.370052]  [<c06620b0>] ? __raw_spin_unlock_irq+0x10/0x10
> > > [1281973.370052]  [<c0446ba7>] ? irq_exit+0x67/0xb0
> > > [1281973.370052]  [<c04258ee>] ? smp_apic_timer_interrupt+0x5e/0x90
> > > [1281973.370052]  [<c097a440>] ? apic_timer_interrupt+0x34/0x3c
> > > [1281973.370052]  [<c076ffa0>] ? scsi_execute+0xb0/0x140
> > > [1281973.370052]  [<c0771429>] ? scsi_execute_req+0x89/0x100
> > > [1281973.370052]  [<c077f3d5>] ? sr_check_events+0xb5/0x2e0
> > > [1281973.370052]  [<c07a64cd>] ? cdrom_check_events+0x1d/0x40
> > > [1281973.370052]  [<c077f856>] ? sr_block_check_events+0x16/0x20
> > > [1281973.370052]  [<c06663c5>] ? disk_check_events+0x45/0xf0
> > > [1281973.370052]  [<c0666485>] ? disk_events_workfn+0x15/0x20
> > > [1281973.370052]  [<c045788e>] ? process_one_work+0x12e/0x3d0
> > > [1281973.370052]  [<c097a440>] ? apic_timer_interrupt+0x34/0x3c
> > > [1281973.370052]  [<c0459939>] ? worker_thread+0x119/0x3b0
> > > [1281973.370052]  [<c0459820>] ? flush_delayed_work+0x50/0x50
> > > [1281973.370052]  [<c045e2a4>] ? kthread+0x94/0xa0
> > > [1281973.370052]  [<c0980ef7>] ? ret_from_kernel_thread+0x1b/0x28
> > > [1281973.370052]  [<c045e210>] ? kthread_create_on_node+0xc0/0xc0
> > > [1281973.370052] Code: 5d c3 8d b4 26 00 00 00 00 89 02 c3 90 8d 74 26 00 81 fa ff ff 03 00 89 d1 77 2e 81 fa 00 00 01 00 76 0e 81 e2 ff ff 00 00 66 ef <c3> 90 8d 74 26 00 55 ba 2c 5a b2 c0 89 e5 89 c8 e8 01 ff ff ff
> > > [1281991.139165] ata2: lost interrupt (Status 0x58)
> > > [1281991.148055] ata2: drained 12 bytes to clear DRQ
> > > [1281991.165039] ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
> > > [1281991.172924] sr 1:0:0:0: CDB:
> > > [1281991.172932] Get event status notification: 4a 01 00 00 10 00 00 00 08 00
> > > [1281991.497342] ata2.00: cmd a0/00:00:00:08:00/00:00:00:00:00/a0 tag 0 pio 16392 in
> > > [1281991.497342]          res 40/00:02:00:04:00/00:00:00:00:00/a0 Emask 0x4 (timeout)
> > > [1281991.523767] ata2.00: status: { DRDY }
> > > [1281991.616161] ata2: soft resetting link
> > > [1281998.232648] ata2.01: qc timeout (cmd 0xec)
> > > [1281998.238559] ata2.01: failed to IDENTIFY (I/O error, err_mask=0x4)
> > > [1281998.247432] ata2: soft resetting link
> > > [1281998.575468] ata2.01: NODEV after polling detection
> > > [1281998.698009] ata2.00: configured for MWDMA2
> > > [1281998.714460] ata2: EH complete
> >
> > Not sure what the deal is with that ATA error. That's a hard-drive
> > lost-interrupt problem. Looks to be on your CD-ROM drive? Looks like
> > it recovered.
> >
> > > 3. Last but not least, after some time my host kernel crashed, and
> > > as a result I needed to restart the VPC.
> >
> > I don't understand what you are saying here. You're saying your
> > kernel crashed, but I don't understand the "as a result of..." What
> > did you do, why did you do it, and what happened?
> >
> > > Regards,
> > > Ajith
> >
> > Regards,
> > Mike
> >
> > > On Thu, May 16, 2013 at 8:09 PM, Ajith Adapa
> > > <ajith.adapa@gmail.com> wrote:
> > >
> > > > Thanks @thomas and @michael.
> > > >
> > > > I will try the RPMs and the steps provided to start a container.
> > > >
> > > > Regards,
> > > > Ajith
> > > >
> > > > On Wed, May 15, 2013 at 2:01 PM, Thomas Moschny
> > > > <thomas.moschny@gmail.com> wrote:
> > > >
> > > >> 2013/5/14 Michael H. Warfield <mhw@wittsend.com>:
> > > >> > What I would recommend as steps on Fedora 17...
> > > >> >
> > > >> > Download lxc-0.9.0 here:
> > > >> >
> > > >> > http://lxc.sourceforge.net/download/lxc/lxc-0.9.0.tar.gz
> > > >> >
> > > >> > You should have rpm-build and friends installed via yum on
> > > >> > your system. Build the lxc rpms by running rpmbuild (as any
> > > >> > user) as follows:
> > > >>
> > > >> You could also try using the pre-built packages I put here:
> > > >> http://thm.fedorapeople.org/lxc/ .
> > > >>
> > > >> Regards,
> > > >> Thomas

--
Michael H. Warfield (AI4NB)  |  (770) 985-6132  |  mhw@WittsEnd.com
/\/\|=mhw=|\/\/              |  (678) 463-0932  |  http://www.wittsend.com/mhw/
NIC whois: MHW9              |  An optimist believes we live in the best of all
PGP Key: 0x674627FF          |  possible worlds. A pessimist is sure of it!

_______________________________________________
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users