[lxc-users] [lxc-devel] CentOS 6.3 kernel-2.6.32-279.el6.x86_64 crash
CDR
venefax at gmail.com
Wed May 7 16:44:34 UTC 2014
I had to install kernel 3.14.2 in order to avoid crashes with LXC.
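For anyone else hitting this: one common way to get a mainline kernel onto
CentOS 6 is ELRepo's kernel-ml package (a sketch only, assuming the third-party
ELRepo repository is already configured; kernel-ml and the elrepo-kernel repo
name come from ELRepo, not the stock CentOS repos):

  uname -r                                           # kernel currently running
  yum --enablerepo=elrepo-kernel install kernel-ml   # mainline kernel from ELRepo
  # check that the new entry is the default in /boot/grub/grub.conf, then:
  reboot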
On Wed, May 7, 2014 at 12:41 PM, Shibashish <shib4u at gmail.com> wrote:
> Upgraded lxc
> lxc-libs-1.0.3-1.el6.x86_64
> lxc-1.0.3-1.el6.x86_64
>
> CentOS release 6.3 (Final)
>
> uname -a
> Linux myhost 2.6.32-279.el6.x86_64 #1 SMP Fri Jun 22 12:19:21 UTC 2012
> x86_64 x86_64 x86_64 GNU/Linux
>
>
> But the problem persists; I have had a couple of kernel panics.
>
> ------------[ cut here ]------------
> kernel BUG at mm/slab.c:533!
> invalid opcode: 0000 [#1] SMP
> last sysfs file: /sys/devices/virtual/dmi/id/sys_vendor
> CPU 0
> Modules linked in: veth bridge stp llc ipv6 e1000e(U) sg microcode i2c_i801
> iTCO_wdt iTCO_vendor_support shpchp i5000_edac edac_core i5k_amb ioatdma dca
> ext3 jbd mbcache sd_mod crc_t10dif aacraid pata_acpi ata_generic ata_piix
> radeon ttm drm_kms_helper drm i2c_algo_bit i2c_core dm_mirror dm_region_hash
> dm_log dm_mod [last unloaded: scsi_wait_scan]
>
> Pid: 0, comm: swapper Tainted: G I---------------
> 2.6.32-279.el6.x86_64 #1 Supermicro X7DVL/X7DVL
> RIP: 0010:[<ffffffff81163f75>] [<ffffffff81163f75>] free_block+0x165/0x170
> RSP: 0018:ffff8800282032d0 EFLAGS: 00010046
> RAX: ffffea000a54e368 RBX: ffff88042fcf03c0 RCX: 0000000000000010
> RDX: 0040000000000000 RSI: ffff8802f3bb6d40 RDI: ffff8802f3aeb000
> RBP: ffff880028203320 R08: ffffea000e79b720 R09: 0000000000000000
> R10: 0000000000000000 R11: 0000000080042000 R12: 000000000000000c
> R13: ffff88042fea13a8 R14: 0000000000000002 R15: ffffea0000000000
> FS: 0000000000000000(0000) GS:ffff880028200000(0000) knlGS:0000000000000000
> CS: 0010 DS: 0018 ES: 0018 CR0: 000000008005003b
> CR2: 00007fc077681000 CR3: 000000042216f000 CR4: 00000000000006f0
> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> Process swapper (pid: 0, threadinfo ffffffff81a00000, task ffffffff81a8d020)
> Stack:
> ffff88042fc216c0 ffff8802f3bb6d40 000000000000100c ffff8802f3aeb000
> <d> ffff880028203360 ffff8802f3bc4000 ffff88042fea1380 0000000000000286
> <d> ffff88042fcf03c0 ffff88042fea1398 ffff880028203390 ffffffff81164500
> Call Trace:
> <IRQ>
> [<ffffffff81164500>] kfree+0x310/0x320
> [<ffffffff8143c949>] ? enqueue_to_backlog+0x179/0x210
> [<ffffffff8142fef8>] skb_release_data+0xd8/0x110
> [<ffffffff8143c949>] ? enqueue_to_backlog+0x179/0x210
> [<ffffffff8142fa2e>] __kfree_skb+0x1e/0xa0
> [<ffffffff8142fb72>] kfree_skb+0x42/0x90
> [<ffffffff8143c949>] enqueue_to_backlog+0x179/0x210
> [<ffffffff8143fb20>] netif_rx+0xb0/0x160
> [<ffffffff8143fe32>] dev_forward_skb+0x122/0x180
> [<ffffffffa03446e6>] veth_xmit+0x86/0xe0 [veth]
> [<ffffffff8143b0cc>] dev_hard_start_xmit+0x2bc/0x3f0
> [<ffffffff81458c1a>] sch_direct_xmit+0x15a/0x1c0
> [<ffffffff8143f878>] dev_queue_xmit+0x4f8/0x6f0
> [<ffffffffa03276bc>] br_dev_queue_push_xmit+0x6c/0xa0 [bridge]
> [<ffffffffa032d378>] br_nf_dev_queue_xmit+0x28/0xa0 [bridge]
> [<ffffffffa032de10>] br_nf_post_routing+0x1d0/0x280 [bridge]
> [<ffffffff814665e9>] nf_iterate+0x69/0xb0
> [<ffffffffa0327650>] ? br_dev_queue_push_xmit+0x0/0xa0 [bridge]
> [<ffffffff814667a4>] nf_hook_slow+0x74/0x110
> [<ffffffffa0327650>] ? br_dev_queue_push_xmit+0x0/0xa0 [bridge]
> [<ffffffffa03276f0>] ? br_forward_finish+0x0/0x60 [bridge]
> [<ffffffffa0327733>] br_forward_finish+0x43/0x60 [bridge]
> [<ffffffffa032d9b8>] br_nf_forward_finish+0x128/0x140 [bridge]
> [<ffffffffa032eea8>] ? br_nf_forward_ip+0x318/0x3c0 [bridge]
> [<ffffffffa032eea8>] br_nf_forward_ip+0x318/0x3c0 [bridge]
> [<ffffffff814665e9>] nf_iterate+0x69/0xb0
> [<ffffffffa03276f0>] ? br_forward_finish+0x0/0x60 [bridge]
> [<ffffffff814667a4>] nf_hook_slow+0x74/0x110
> [<ffffffffa03276f0>] ? br_forward_finish+0x0/0x60 [bridge]
> [<ffffffffa0327750>] ? __br_forward+0x0/0xc0 [bridge]
> [<ffffffffa03277c2>] __br_forward+0x72/0xc0 [bridge]
> [<ffffffffa0327601>] br_flood+0xc1/0xd0 [bridge]
> [<ffffffffa0327625>] br_flood_forward+0x15/0x20 [bridge]
> [<ffffffffa03287ae>] br_handle_frame_finish+0x27e/0x2a0 [bridge]
> [<ffffffffa032e318>] br_nf_pre_routing_finish+0x228/0x340 [bridge]
> [<ffffffffa032e88f>] br_nf_pre_routing+0x45f/0x760 [bridge]
> [<ffffffff814665e9>] nf_iterate+0x69/0xb0
> [<ffffffffa0328530>] ? br_handle_frame_finish+0x0/0x2a0 [bridge]
> [<ffffffff814667a4>] nf_hook_slow+0x74/0x110
> [<ffffffffa0328530>] ? br_handle_frame_finish+0x0/0x2a0 [bridge]
> [<ffffffffa032895c>] br_handle_frame+0x18c/0x250 [bridge]
> [<ffffffff8143a839>] __netif_receive_skb+0x519/0x6f0
> [<ffffffff8143ca38>] netif_receive_skb+0x58/0x60
> [<ffffffff8143cbe4>] napi_gro_complete+0x84/0xe0
> [<ffffffff8143ce0b>] dev_gro_receive+0x1cb/0x290
> [<ffffffff8143cf4b>] __napi_gro_receive+0x7b/0x170
> [<ffffffff8143f06f>] napi_gro_receive+0x2f/0x50
> [<ffffffffa027233b>] e1000_receive_skb+0x5b/0x90 [e1000e]
> [<ffffffffa0275601>] e1000_clean_rx_irq+0x241/0x4c0 [e1000e]
> [<ffffffffa027cb8d>] e1000e_poll+0x8d/0x380 [e1000e]
> [<ffffffff8143aaaa>] ? process_backlog+0x9a/0x100
> [<ffffffff8143f193>] net_rx_action+0x103/0x2f0
> [<ffffffff81073ec1>] __do_softirq+0xc1/0x1e0
> [<ffffffff810db800>] ? handle_IRQ_event+0x60/0x170
> [<ffffffff8100c24c>] call_softirq+0x1c/0x30
> [<ffffffff8100de85>] do_softirq+0x65/0xa0
> [<ffffffff81073ca5>] irq_exit+0x85/0x90
> [<ffffffff81505af5>] do_IRQ+0x75/0xf0
> [<ffffffff8100ba53>] ret_from_intr+0x0/0x11
> <EOI>
> [<ffffffff81014877>] ? mwait_idle+0x77/0xd0
> [<ffffffff8150338a>] ? atomic_notifier_call_chain+0x1a/0x20
> [<ffffffff81009e06>] cpu_idle+0xb6/0x110
> [<ffffffff814e433a>] rest_init+0x7a/0x80
> [<ffffffff81c21f7b>] start_kernel+0x424/0x430
> [<ffffffff81c2133a>] x86_64_start_reservations+0x125/0x129
> [<ffffffff81c21438>] x86_64_start_kernel+0xfa/0x109
> Code: 41 5c 41 5d 41 5e 41 5f c9 c3 0f 1f 40 00 48 8b 72 08 48 89 c7 e8 2c
> f0 11 00 e9 07 ff ff ff 48 8b 40 10 48 8b 10 e9 3e ff ff ff <0f> 0b eb fe 0f
> 1f 80 00 00 00 00 55 48 89 e5 48 83 ec 30 48 89
> RIP [<ffffffff81163f75>] free_block+0x165/0x170
> RSP <ffff8800282032d0>
>
> ShiB.
> while ( ! ( succeed = try() ) );
>
>
> On Sat, May 3, 2014 at 8:14 PM, Michael H. Warfield <mhw at wittsend.com>
> wrote:
>>
>> On Sat, 2014-05-03 at 19:40 +0530, Shibashish wrote:
>> > Hi,
>>
>> > My server with 4 LXC VMs is kernel panicking often. Analyzing the
>> > crash dump shows the following. I have 4 VMs with 3G of memory each,
>> > and memory+swap set at 4G in the cgroup settings.
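>>
>> (Those limits would normally be set per container in its LXC config,
>> something along these lines; a sketch that assumes they are applied via
>> the config's memory cgroup keys rather than written into the cgroup
>> filesystem directly:)
>>
>>   lxc.cgroup.memory.limit_in_bytes = 3G
>>   lxc.cgroup.memory.memsw.limit_in_bytes = 4G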
>>
>> > lxc version: 0.9.0.alpha2
>>
>> You definitely need to upgrade that version of LXC.
>>
>> 1) It's old.
>> 2) It's an alpha version.
>> >
>>
>> > I did a hardware swap, but the problem persists. Please let me know
>> > what to do next. Should I upgrade the kernel
>> > to 2.6.32-431.11.2.el6.centos.plus?
>>
>> I would most definitely update the entire system including and
>> especially the kernel. Nothing that LXC does should cause a kernel
>> panic.
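>>
>> A minimal sketch of that on CentOS 6, using the stock repos (exact
>> package versions will depend on what the mirrors carry at the time):
>>
>>   yum clean all
>>   yum update      # full system update, pulls in the newest distro kernel
>>   reboot
>>   uname -r        # confirm the new kernel is actually running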
>> >
>> > KERNEL: /usr/lib/debug/lib/modules/2.6.32-279.el6.x86_64/vmlinux
>> > DUMPFILE: /var/crash/127.0.0.1-2014-05-03-06:11:01/vmcore [PARTIAL DUMP]
>> > CPUS: 8
>> > DATE: Sat May 3 06:09:12 2014
>> > UPTIME: 07:56:53
>> > LOAD AVERAGE: 0.07, 0.06, 0.01
>> > TASKS: 651
>> > NODENAME: myhost
>> > RELEASE: 2.6.32-279.el6.x86_64
>> > VERSION: #1 SMP Fri Jun 22 12:19:21 UTC 2012
>> > MACHINE: x86_64 (1866 Mhz)
>> > MEMORY: 16 GB
>> > PANIC: "kernel BUG at mm/slab.c:533!"
>> > PID: 0
>> > COMMAND: "swapper"
>> > TASK: ffff880426373540 (1 of 8) [THREAD_INFO:
>> > ffff880426374000]
>> > CPU: 7
>> > STATE: TASK_RUNNING (PANIC)
>> >
>> >
>> I'm not even sure how this relates to LXC. In the trace I do see
>> functions in veth and br_*, which could be from an LXC container, so the
>> failure is happening somewhere down in the bridging code and into the
>> e1000 NIC driver and interrupt handlers. That's a kernel fault of some
>> sort, and really deep. Definitely upgrade that kernel.
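>>
>> For anyone wanting to poke at the vmcore themselves, a sketch of how that
>> summary and backtrace are typically pulled out with the crash utility
>> (assuming the matching kernel-debuginfo vmlinux is installed):
>>
>>   crash /usr/lib/debug/lib/modules/2.6.32-279.el6.x86_64/vmlinux \
>>         /var/crash/<dump directory>/vmcore
>>   crash> sys      # the summary (PANIC, LOAD AVERAGE, ...)
>>   crash> bt       # backtrace of the panicking task
>>   crash> log      # kernel ring buffer, including the BUG/oops text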
>> >
>> > ------------[ cut here ]------------
>> > kernel BUG at mm/slab.c:533!
>> > invalid opcode: 0000 [#1] SMP
>> > last sysfs file: /sys/devices/system/cpu/online
>> > CPU 7
>> > Modules linked in: veth bridge stp llc ipv6 e1000e(U) sg microcode
>> > i2c_i801 iTCO_wdt iTCO_vendor_support i5000_edac edac_core i5k_amb
>> > ioatdma dca shpchp ext3 jbd mbcache sd_mod crc_t10dif aacraid
>> > pata_acpi ata_generic ata_piix radeon ttm drm_kms_helper drm
>> > i2c_algo_bit i2c_core dm_mirror dm_region_hash dm_log dm_mod [last
>> > unloaded: scsi_wait_scan]
>> >
>> >
>> > Pid: 0, comm: swapper Tainted: G I---------------
>> > 2.6.32-279.el6.x86_64 #1 Supermicro X7DVL/X7DVL
>> > RIP: 0010:[<ffffffff81163f75>] [<ffffffff81163f75>] free_block
>> > +0x165/0x170
>> > RSP: 0018:ffff8800283c32d0 EFLAGS: 00010046
>> > RAX: ffffea0009fd5878 RBX: ffff88042fcf03c0 RCX: 0000000000000010
>> > RDX: 0040000000000000 RSI: ffff8802bba2cec0 RDI: ffff8802daab9800
>> > RBP: ffff8800283c3320 R08: ffffea0009d7b600 R09: 0000000000000000
>> > R10: 0000000000000000 R11: 0000000080042000 R12: 000000000000000c
>> > R13: ffff880426350aa8 R14: 0000000000000002 R15: ffffea0000000000
>> > FS: 0000000000000000(0000) GS:ffff8800283c0000(0000)
>> > knlGS:0000000000000000
>> > CS: 0010 DS: 0018 ES: 0018 CR0: 000000008005003b
>> > CR2: 0000003fafe7b3f0 CR3: 00000004240a1000 CR4: 00000000000006e0
>> > DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>> > DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
>> > Process swapper (pid: 0, threadinfo ffff880426374000, task
>> > ffff880426373540)
>> > Stack:
>> > ffff88042fc216c0 ffff8802bba2cec0 000000000000100c ffff8802daab9800
>> > <d> ffff8800283c3360 ffff8802daabc800 ffff880426350a80
>> > 0000000000000286
>> > <d> ffff88042fcf03c0 ffff880426350a98 ffff8800283c3390
>> > ffffffff81164500
>> > Call Trace:
>> > <IRQ>
>> > [<ffffffff81164500>] kfree+0x310/0x320
>> > [<ffffffff8143c949>] ? enqueue_to_backlog+0x179/0x210
>> > [<ffffffff8142fef8>] skb_release_data+0xd8/0x110
>> > [<ffffffff8143c949>] ? enqueue_to_backlog+0x179/0x210
>> > [<ffffffff8142fa2e>] __kfree_skb+0x1e/0xa0
>> > [<ffffffff8142fb72>] kfree_skb+0x42/0x90
>> > [<ffffffff8143c949>] enqueue_to_backlog+0x179/0x210
>> > [<ffffffff8143fb20>] netif_rx+0xb0/0x160
>> > [<ffffffff8143fe32>] dev_forward_skb+0x122/0x180
>> > [<ffffffffa02396e6>] veth_xmit+0x86/0xe0 [veth]
>> > [<ffffffff8143b0cc>] dev_hard_start_xmit+0x2bc/0x3f0
>> > [<ffffffff81458c1a>] sch_direct_xmit+0x15a/0x1c0
>> > [<ffffffff8143f878>] dev_queue_xmit+0x4f8/0x6f0
>> > [<ffffffffa032c6bc>] br_dev_queue_push_xmit+0x6c/0xa0 [bridge]
>> > [<ffffffffa0332378>] br_nf_dev_queue_xmit+0x28/0xa0 [bridge]
>> > [<ffffffffa0332e10>] br_nf_post_routing+0x1d0/0x280 [bridge]
>> > [<ffffffff814665e9>] nf_iterate+0x69/0xb0
>> > [<ffffffffa032c650>] ? br_dev_queue_push_xmit+0x0/0xa0 [bridge]
>> > [<ffffffff814667a4>] nf_hook_slow+0x74/0x110
>> > [<ffffffffa032c650>] ? br_dev_queue_push_xmit+0x0/0xa0 [bridge]
>> > [<ffffffffa032c6f0>] ? br_forward_finish+0x0/0x60 [bridge]
>> > [<ffffffffa032c733>] br_forward_finish+0x43/0x60 [bridge]
>> > [<ffffffffa03329b8>] br_nf_forward_finish+0x128/0x140 [bridge]
>> > [<ffffffffa0333ea8>] ? br_nf_forward_ip+0x318/0x3c0 [bridge]
>> > [<ffffffffa0333ea8>] br_nf_forward_ip+0x318/0x3c0 [bridge]
>> > [<ffffffff814665e9>] nf_iterate+0x69/0xb0
>> > [<ffffffffa032c6f0>] ? br_forward_finish+0x0/0x60 [bridge]
>> > [<ffffffff814667a4>] nf_hook_slow+0x74/0x110
>> > [<ffffffffa032c6f0>] ? br_forward_finish+0x0/0x60 [bridge]
>> > [<ffffffffa032c750>] ? __br_forward+0x0/0xc0 [bridge]
>> > [<ffffffffa032c7c2>] __br_forward+0x72/0xc0 [bridge]
>> > [<ffffffffa032c601>] br_flood+0xc1/0xd0 [bridge]
>> > [<ffffffffa032c625>] br_flood_forward+0x15/0x20 [bridge]
>> > [<ffffffffa032d7ae>] br_handle_frame_finish+0x27e/0x2a0 [bridge]
>> > [<ffffffffa0333318>] br_nf_pre_routing_finish+0x228/0x340 [bridge]
>> > [<ffffffffa033388f>] br_nf_pre_routing+0x45f/0x760 [bridge]
>> > [<ffffffff814665e9>] nf_iterate+0x69/0xb0
>> > [<ffffffffa032d530>] ? br_handle_frame_finish+0x0/0x2a0 [bridge]
>> > [<ffffffff814667a4>] nf_hook_slow+0x74/0x110
>> > [<ffffffffa032d530>] ? br_handle_frame_finish+0x0/0x2a0 [bridge]
>> > [<ffffffffa032d95c>] br_handle_frame+0x18c/0x250 [bridge]
>> > [<ffffffff8143a839>] __netif_receive_skb+0x519/0x6f0
>> > [<ffffffff8143ca38>] netif_receive_skb+0x58/0x60
>> > [<ffffffff8143cbe4>] napi_gro_complete+0x84/0xe0
>> > [<ffffffff8143ce0b>] dev_gro_receive+0x1cb/0x290
>> > [<ffffffff8143cf4b>] __napi_gro_receive+0x7b/0x170
>> > [<ffffffff8143f06f>] napi_gro_receive+0x2f/0x50
>> > [<ffffffffa027733b>] e1000_receive_skb+0x5b/0x90 [e1000e]
>> > [<ffffffffa027a601>] e1000_clean_rx_irq+0x241/0x4c0 [e1000e]
>> > [<ffffffffa0281b8d>] e1000e_poll+0x8d/0x380 [e1000e]
>> > [<ffffffff8143aaaa>] ? process_backlog+0x9a/0x100
>> > [<ffffffff8143f193>] net_rx_action+0x103/0x2f0
>> > [<ffffffff81073ec1>] __do_softirq+0xc1/0x1e0
>> > [<ffffffff810db800>] ? handle_IRQ_event+0x60/0x170
>> > [<ffffffff8100c24c>] call_softirq+0x1c/0x30
>> > [<ffffffff8100de85>] do_softirq+0x65/0xa0
>> > [<ffffffff81073ca5>] irq_exit+0x85/0x90
>> > [<ffffffff81505af5>] do_IRQ+0x75/0xf0
>> > [<ffffffff8100ba53>] ret_from_intr+0x0/0x11
>> > <EOI>
>> > [<ffffffff81014877>] ? mwait_idle+0x77/0xd0
>> > [<ffffffff8150338a>] ? atomic_notifier_call_chain+0x1a/0x20
>> > [<ffffffff81009e06>] cpu_idle+0xb6/0x110
>> > [<ffffffff814f6cdf>] start_secondary+0x22a/0x26d
>> > Code: 41 5c 41 5d 41 5e 41 5f c9 c3 0f 1f 40 00 48 8b 72 08 48 89 c7
>> > e8 2c f0 11 00 e9 07 ff ff ff 48 8b 40 10 48 8b 10 e9 3e ff ff ff <0f>
>> > 0b eb fe 0f 1f 80 00 00 00 00 55 48 89 e5 48 83 ec 30 48 89
>> > RIP [<ffffffff81163f75>] free_block+0x165/0x170
>> > RSP <ffff8800283c32d0>
>> >
>> >
>> >
>> >
>> > ShiB.
>> > while ( ! ( succeed = try() ) );
>> >
>> >
>>
>> Regards,
>> Mike
>> --
>> Michael H. Warfield (AI4NB) | (770) 978-7061 | mhw at WittsEnd.com
>> /\/\|=mhw=|\/\/ | (678) 463-0932 | http://www.wittsend.com/mhw/
>> NIC whois: MHW9 | An optimist believes we live in the best of all
>> PGP Key: 0x674627FF | possible worlds. A pessimist is sure of it!
>>
>>
>> _______________________________________________
>> lxc-users mailing list
>> lxc-users at lists.linuxcontainers.org
>> http://lists.linuxcontainers.org/listinfo/lxc-users
>
>
>
> _______________________________________________
> lxc-users mailing list
> lxc-users at lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users