Hello,

I am testing the forwarding performance of 1000 containers running at the same time.
I am running Linux 3.8.5 and lxc 0.8.0.

Each container is a simple router: 2 IPv4 interfaces and a very small routing table (3 or 4 routes) to allow bidirectional traffic to flow between the 2 interfaces; the routes are sketched just after the network config below.

lxc.network.type = phys
lxc.network.flags = up
lxc.network.link = eth6.3
lxc.network.name = eth2
lxc.network.hwaddr = 00:50:56:a8:03:03
lxc.network.ipv4 = 192.168.1.1/24

lxc.network.type = phys
lxc.network.flags = up
lxc.network.link = eth7.3
lxc.network.name = eth1
lxc.network.ipv4 = 2.2.2.2/24
lxc.network.ipv6 = 2003:1339:0:12::2/64

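The per-container routes look roughly like this (a sketch only: the 10.0.0.0/8 and 20.0.0.0/8 prefixes and the next-hop addresses are hypothetical placeholders for whatever the traffic generator uses):

ip route add 10.0.0.0/8 via 192.168.1.254 dev eth2   # hypothetical prefix behind the eth2 side
ip route add 20.0.0.0/8 via 2.2.2.1 dev eth1         # hypothetical prefix behind the eth1 side

The two connected routes (192.168.1.0/24 on eth2, 2.2.2.0/24 on eth1) come for free from the addresses configured above, which accounts for the 3 or 4 routes in total.
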
With anywhere between 1 and 100 containers running and forwarding at the same time, the forwarding performance stays roughly the same.

If I run the same test with 1000 containers, performance drops by a factor of 4.

I have captured oprofile data for 1000 containers forwarding traffic:

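The profile was collected along these lines with the legacy opcontrol interface (the vmlinux path is just whatever matches the running kernel; --separate=kernel is what produces the per-application breakdown of kernel samples):

opcontrol --vmlinux=/boot/vmlinux-3.8.5 --separate=kernel \
          --event=CPU_CLK_UNHALTED:100000:0x00 \
          --event=UNHALTED_REFERENCE_CYCLES:1000500:0x01 \
          --event=LLC_REFS:1000500:0x4f
opcontrol --start
# ... run the forwarding test ...
opcontrol --stop
opreport -l
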
CPU: Intel Sandy Bridge microarchitecture, speed 2.001e+06 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (No unit mask) count 100000
Counted UNHALTED_REFERENCE_CYCLES events (Unhalted reference cycles) with a unit mask of 0x01 (No unit mask) count 1000500
Counted LLC_REFS events (Last level cache demand requests from this core) with a unit mask of 0x4f (No unit mask) count 1000500
samples % samples % samples % image name app name symbol name
3799100 7.0555 27682 6.4292 8307 9.3270 vmlinux-3.8.5 vmlinux-3.8.5 tg_load_down
1842136 3.4211 13082 3.0383 3278 3.6805 vmlinux-3.8.5 udevd tg_load_down
1611025 2.9919 12869 2.9888 245 0.2751 vmlinux-3.8.5 udevd do_int3
1314286 2.4408 8738 2.0294 1711 1.9211 libnih.so.1.0.0 init /lib/x86_64-linux-gnu/libnih.so.1.0.0
1011093 1.8777 9121 2.1184 1047 1.1756 vmlinux-3.8.5 vmlinux-3.8.5 intel_idle
860652 1.5984 6949 1.6139 468 0.5255 vmlinux-3.8.5 vmlinux-3.8.5 __ticket_spin_lock
761266 1.4138 5773 1.3408 290 0.3256 vmlinux-3.8.5 udevd __ticket_spin_lock
731444 1.3584 4237 0.9841 163 0.1830 oprofiled oprofiled sfile_find
718165 1.3337 5176 1.2021 1123 1.2609 libc-2.15.so udevd /lib/x86_64-linux-gnu/libc-2.15.so
704824 1.3090 4875 1.1322 1464 1.6438 libc-2.15.so init /lib/x86_64-linux-gnu/libc-2.15.so
696393 1.2933 3914 0.9090 3319 3.7265 oprofiled oprofiled for_one_sfile
690144 1.2817 5997 1.3928 1020 1.1452 vmlinux-3.8.5 vmlinux-3.8.5 update_sd_lb_stats
674357 1.2524 4928 1.1445 646 0.7253 libdbus-1.so.3.5.8 upstart-udev-bridge /lib/x86_64-linux-gnu/libdbus-1.so.3.5.8
639174 1.1870 6348 1.4743 1247 1.4001 ixgbe ixgbe /ixgbe
622320 1.1557 6120 1.4214 1354 1.5203 vmlinux-3.8.4 vmlinux-3.8.4 fib_table_lookup
607881 1.1289 4372 1.0154 101 0.1134 vmlinux-3.8.5 vmlinux-3.8.5 try_to_wake_up
590477 1.0966 3245 0.7537 116 0.1302 libc-2.15.so sudo /lib/x86_64-linux-gnu/libc-2.15.so
558932 1.0380 4792 1.1130 29 0.0326 vmlinux-3.8.5 vmlinux-3.8.5 mutex_spin_on_owner
531910 0.9878 5514 1.2806 1725 1.9368 vmlinux-3.8.4 vmlinux-3.8.4 ipt_do_table
517979 0.9620 4511 1.0477 197 0.2212 vmlinux-3.8.5 sudo snmp_fold_field
504656 0.9372 3767 0.8749 604 0.6782 libdbus-1.so.3.5.8 init /lib/x86_64-linux-gnu/libdbus-1.so.3.5.8
478770 0.8891 3342 0.7762 959 1.0768 init init /sbin/init
440734 0.8185 3189 0.7407 56 0.0629 vmlinux-3.8.5 udevd try_to_wake_up
437560 0.8126 3671 0.8526 12 0.0135 vmlinux-3.8.4 vmlinux-3.8.4 mutex_spin_on_owner
420793 0.7815 3239 0.7523 732 0.8219 vmlinux-3.8.5 upstart-udev-bridge tg_load_down

Oprofile data for 100 containers forwarding traffic; it does not look very different:

CPU: Intel Sandy Bridge microarchitecture, speed 2.001e+06 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (No unit mask) count 100000
Counted UNHALTED_REFERENCE_CYCLES events (Unhalted reference cycles) with a unit mask of 0x01 (No unit mask) count 1000500
Counted LLC_REFS events (Last level cache demand requests from this core) with a unit mask of 0x4f (No unit mask) count 1000500
samples % samples % samples % image name app name symbol name
3812298 6.5582 27823 5.8604 8326 8.4405 vmlinux-3.8.5 vmlinux-3.8.5 tg_load_down
1842136 3.1690 13082 2.7555 3278 3.3231 vmlinux-3.8.5 udevd tg_load_down
1611025 2.7714 12869 2.7106 245 0.2484 vmlinux-3.8.5 udevd do_int3
1314286 2.2609 8738 1.8405 1711 1.7345 libnih.so.1.0.0 init /lib/x86_64-linux-gnu/libnih.so.1.0.0
1290389 2.2198 13032 2.7449 2483 2.5172 ixgbe ixgbe /ixgbe
1089462 1.8742 11091 2.3361 576 0.5839 vmlinux-3.8.5 vmlinux-3.8.5 intel_idle_cpu_init
1014068 1.7445 9151 1.9275 1062 1.0766 vmlinux-3.8.5 vmlinux-3.8.5 intel_idle
971666 1.6715 8058 1.6973 576 0.5839 vmlinux-3.8.5 vmlinux-3.8.5 __ticket_spin_lock
761947 1.3108 4446 0.9365 167 0.1693 oprofiled oprofiled sfile_find
761266 1.3096 5773 1.2160 290 0.2940 vmlinux-3.8.5 udevd __ticket_spin_lock
718165 1.2354 5176 1.0902 1123 1.1384 libc-2.15.so udevd /lib/x86_64-linux-gnu/libc-2.15.so
704824 1.2125 4875 1.0268 1464 1.4841 libc-2.15.so init /lib/x86_64-linux-gnu/libc-2.15.so
697389 1.1997 6077 1.2800 1035 1.0492 vmlinux-3.8.5 vmlinux-3.8.5 update_sd_lb_stats
696399 1.1980 3914 0.8244 3319 3.3647 oprofiled oprofiled for_one_sfile
674357 1.1601 4928 1.0380 646 0.6549 libdbus-1.so.3.5.8 upstart-udev-bridge /lib/x86_64-linux-gnu/libdbus-1.so.3.5.8
656320 1.1291 5755 1.2122 33 0.0335 vmlinux-3.8.5 vmlinux-3.8.5 mutex_spin_on_owner
622320 1.0706 6120 1.2891 1354 1.3726 vmlinux-3.8.4 vmlinux-3.8.4 fib_table_lookup
609061 1.0478 4384 0.9234 104 0.1054 vmlinux-3.8.5 vmlinux-3.8.5 try_to_wake_up
590477 1.0158 3245 0.6835 116 0.1176 libc-2.15.so sudo /lib/x86_64-linux-gnu/libc-2.15.so
531910 0.9150 5514 1.1614 1725 1.7487 vmlinux-3.8.4 vmlinux-3.8.4 ipt_do_table
517979 0.8911 4511 0.9502 197 0.1997 vmlinux-3.8.5 sudo snmp_fold_field
504656 0.8682 3767 0.7934 604 0.6123 libdbus-1.so.3.5.8 init /lib/x86_64-linux-gnu/libdbus-1.so.3.5.8
478770 0.8236 3342 0.7039 959 0.9722 init init /sbin/init
475511 0.8180 5009 1.0550 1320 1.3382 ip_tables ip_tables /ip_tables
452566 0.7785 2642 0.5565 215 0.2180 oprofiled oprofiled odb_update_node_with_offset

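In both profiles the symbol at the top by a wide margin is tg_load_down, which as far as I can tell belongs to the fair group scheduler's per-task-group load accounting, and each container adds its own cgroup. A quick sanity check that group scheduling is compiled into this kernel (the config file path varies by distro):

grep -E 'CONFIG_FAIR_GROUP_SCHED|CONFIG_SCHED_AUTOGROUP' /boot/config-$(uname -r)
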
Is there any theoretical reason for this performance drop when running 1000 containers?

Regards

Benoit