[Lxc-users] LXC container SSH X forwarding kernel crash

Ferenc Holzhauser ferenc.holzhauser at gmail.com
Wed Jun 16 13:51:13 UTC 2010


Dear fellow LXC users,

I'm experiencing an annoying kernel crash each time I try to use
SSH X forwarding into a container.
I can open an SSH session, but as soon as I start an X application the crash happens.
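For reference, the sequence that triggers it is just this (the hostname,
user and X client below are placeholders, not my real setup):

```shell
# From a remote machine running an X server, open a forwarded
# session into the container ("mycontainer"/"user" are placeholders):
ssh -X user@mycontainer

# Inside that session, starting any X client is enough to
# trigger the oops on the host -- e.g.:
xterm
```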

I'm using Lucid (both host and container) with the latest updates.
I have disabled IPv6 on both host and container (kernel and ssh) to
maximize my chances (Google suggested it might help), but no luck yet.
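Concretely, this is roughly how I disabled IPv6 on both sides -- a sketch;
the exact file names and mechanism on Lucid may differ slightly:

```shell
# Kernel side: turn IPv6 off via sysctl (available on 2.6.32):
sysctl -w net.ipv6.conf.all.disable_ipv6=1

# sshd side: restrict the daemon to IPv4 only, then restart it:
echo "AddressFamily inet" >> /etc/ssh/sshd_config
service ssh restart
```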

Unfortunately I'm not an expert in kernel debugging, so I'm somewhat
stuck at the moment.
I hope someone with more experience has seen this before and can give
me some pointers on where to look further.

The crash log looks like this:
[  177.390249] BUG: unable to handle kernel NULL pointer dereference at (null)
[  177.390975] IP: [<(null)>] (null)
[  177.391362] PGD 0
[  177.391649] Oops: 0010 [#1] SMP
[  177.392110] last sysfs file: /sys/devices/system/cpu/cpu7/cpufreq/scaling_governor
[  177.392632] CPU 1
[  177.392917] Modules linked in: veth bridge stp fbcon tileblit font
bitblit softcursor vga16fb vgastate serio_raw ioatdma lp parport
raid10 raid456 async_raid6_recov async_pq usbhid hid raid6_pq
async_xor mptsas mptscsih xor async_memcpy mptbase async_tx ahci igb
raid1 scsi_transport_sas dca raid0 multipath linear
[  177.398732] Pid: 0, comm: swapper Not tainted 2.6.32-22-server #36-Ubuntu SUN FIRE X4170 SERVER
[  177.399353] RIP: 0010:[<0000000000000000>]  [<(null)>] (null)
[  177.399939] RSP: 0018:ffff880010e23d38  EFLAGS: 00010293
[  177.400355] RAX: ffff8802739ccec0 RBX: ffff8802757a9b00 RCX: 0000000000000000
[  177.400820] RDX: 0000000000000000 RSI: ffff8802757a9b00 RDI: ffff8802757a9b00
[  177.401284] RBP: ffff880010e23d70 R08: ffffffff8149f0a0 R09: ffff880010e23d38
[  177.401748] R10: ffff88027728d080 R11: 0000000000000000 R12: ffff8802674e1050
[  177.402212] R13: ffff8802757a9b00 R14: 0000000000000008 R15: ffffffff8185eca0
[  177.402679] FS:  0000000000000000(0000) GS:ffff880010e20000(0000) knlGS:0000000000000000
[  177.403227] CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
[  177.403621] CR2: 0000000000000000 CR3: 0000000001001000 CR4: 00000000000006e0
[  177.404086] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[  177.404550] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[  177.418610] Process swapper (pid: 0, threadinfo ffff88027710c000, task ffff8802771044d0)
[  177.447188] Stack:
[  177.447190]  ffffffff8149f1cd 0000000000000002 ffff8802757a9b00 ffff8802757a9b00
[  177.447193] <0> ffff88024d92b800 ffff8802757a9b00 0000000000000008 ffff880010e23db0
[  177.447196] <0> ffffffff8149f755 0000000080000000 ffff880273381158 ffff8802733f8000
[  177.447199] Call Trace:
[  177.447201]  <IRQ>
[  177.447207]  [<ffffffff8149f1cd>] ? ip_rcv_finish+0x12d/0x440
[  177.447210]  [<ffffffff8149f755>] ip_rcv+0x275/0x360
[  177.447216]  [<ffffffff8146ffea>] netif_receive_skb+0x38a/0x5d0
[  177.447219]  [<ffffffff814702b3>] process_backlog+0x83/0xe0
[  177.447225]  [<ffffffff810880c2>] ? enqueue_hrtimer+0x82/0xd0
[  177.447229]  [<ffffffff81470adf>] net_rx_action+0x10f/0x250
[  177.447233]  [<ffffffff8106e257>] __do_softirq+0xb7/0x1e0
[  177.447237]  [<ffffffff810c4880>] ? handle_IRQ_event+0x60/0x170
[  177.447242]  [<ffffffff810142ec>] call_softirq+0x1c/0x30
[  177.447245]  [<ffffffff81015cb5>] do_softirq+0x65/0xa0
[  177.447247]  [<ffffffff8106e0f5>] irq_exit+0x85/0x90
[  177.447252]  [<ffffffff8155c675>] do_IRQ+0x75/0xf0
[  177.447255]  [<ffffffff81013b13>] ret_from_intr+0x0/0x11
[  177.447256]  <EOI>
[  177.447261]  [<ffffffff8130ccd7>] ? acpi_idle_enter_bm+0x28a/0x2be
[  177.447265]  [<ffffffff8130ccd0>] ? acpi_idle_enter_bm+0x283/0x2be
[  177.447270]  [<ffffffff81449297>] ? cpuidle_idle_call+0xa7/0x140
[  177.447278]  [<ffffffff81011e63>] ? cpu_idle+0xb3/0x110
[  177.447283]  [<ffffffff8154f5e0>] ? start_secondary+0xa8/0xaa
[  177.447284] Code:  Bad RIP value.
[  177.447290] RIP  [<(null)>] (null)
[  177.447292]  RSP <ffff880010e23d38>
[  177.447293] CR2: 0000000000000000

Thanks a lot for your help in advance,
Ferenc
