[Lxc-users] Packet forwarding performance drop with 1000 containers

Benoit Lourdelet blourdel at juniper.net
Tue Apr 23 21:34:21 UTC 2013


Hello,

Forwarding throughput is decreasing gradually as I add containers. I don't
see any sudden drop.

If we consider the aggregated forwarding performance with 100 containers
to be 1, here are the measurements:

# containers    Aggregated throughput
-------------------------------------
 100            1.00
 500            0.71
1000            0.27
1100            0.23
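
An aggregate like this can be sampled straight from the physical ports'
byte counters, e.g. (a minimal sketch; the 10 s window is arbitrary):

  tx1=$(cat /sys/class/net/eth7/statistics/tx_bytes)
  sleep 10
  tx2=$(cat /sys/class/net/eth7/statistics/tx_bytes)
  echo $(( (tx2 - tx1) * 8 / 10 ))   # forwarded bits/s out of eth7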


Benoit

On 22/04/2013 17:16, "Serge Hallyn" <serge.hallyn at ubuntu.com> wrote:

>Quoting Benoit Lourdelet (blourdel at juniper.net):
>> Hello,
>> 
>> I am testing the forwarding performance of 1000 containers running at
>>the same time.
>> I am running Linux 3.8.5 and lxc 0.8.0.
>> 
>> Each container is a simple router: 2 IPv4 interfaces and a very small
>>routing table of 3-4 routes, just enough to let bidirectional traffic
>>flow between the 2 interfaces.
>> 
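>> For illustration, a routing table of that size in one container might
>>look like this (a hypothetical example built from the addresses in the
>>config below; the third route is made up):
>> 
>>   2.2.2.0/24 dev eth1  proto kernel  scope link  src 2.2.2.2
>>   192.168.1.0/24 dev eth2  proto kernel  scope link  src 192.168.1.1
>>   10.0.0.0/8 via 192.168.1.254 dev eth2
>> 
>> The network section of one container's config:
>> 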
>> lxc.network.type = phys
>> lxc.network.flags = up
>> lxc.network.link = eth6.3
>> lxc.network.name = eth2
>> lxc.network.hwaddr = 00:50:56:a8:03:03
>> lxc.network.ipv4 = 192.168.1.1/24
>> lxc.network.type = phys
>> lxc.network.flags = up
>> lxc.network.link = eth7.3
>> lxc.network.name = eth1
>> lxc.network.ipv4 = 2.2.2.2/24
>> lxc.network.ipv6 = 2003:1339:0:12::2/64
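>> 
>> The VLAN subinterfaces each config points at (eth6.3, eth7.3, ...) can
>>be created with a loop along these lines (a sketch, assuming one VLAN
>>pair per container with IDs 3..1002):
>> 
>>   for v in $(seq 3 1002); do
>>     ip link add link eth6 name eth6.$v type vlan id $v
>>     ip link add link eth7 name eth7.$v type vlan id $v
>>   done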
>> 
>> With anywhere from 1 to 100 containers running and forwarding at the
>>same time, the forwarding performance is relatively similar.
>
>And looks like what?  (you only showed data for 100 and 1000, which as
>you say look similar)
>
>It could be a whole slew of things.  You could suddenly be hitting swap,
>or it could be some cache which isn't scaling well - though if 100 and
>1000 containers look similar, and both worse than 1-99, that's weird :)
>What about 1100 and 1500?
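>
>One quick way to rule the swap theory in or out while the test runs
>(a sketch):
>
>  vmstat 5                    # si/so columns should stay at 0
>  grep -i swap /proc/meminfo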
>
>> If I run the same test with 1000 containers, performance is divided by
>>4.
>> 
>> I have captured oprofile data for 1000 containers forwarding traffic:
>> 
>> CPU: Intel Sandy Bridge microarchitecture, speed 2.001e+06 MHz
>>(estimated)
>> Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a
>>unit mask of 0x00 (No unit mask) count 100000
>> Counted UNHALTED_REFERENCE_CYCLES events (Unhalted reference cycles)
>>with a unit mask of 0x01 (No unit mask) count 1000500
>> Counted LLC_REFS events (Last level cache demand requests from this
>>core) with a unit mask of 0x4f (No unit mask) count 1000500
>> samples   %       samples  %       samples  %       image name           app name             symbol name
>> 3799100   7.0555  27682    6.4292  8307     9.3270  vmlinux-3.8.5        vmlinux-3.8.5        tg_load_down
>> 1842136   3.4211  13082    3.0383  3278     3.6805  vmlinux-3.8.5        udevd                tg_load_down
>> 1611025   2.9919  12869    2.9888  245      0.2751  vmlinux-3.8.5        udevd                do_int3
>> 1314286   2.4408  8738     2.0294  1711     1.9211  libnih.so.1.0.0      init                 /lib/x86_64-linux-gnu/libnih.so.1.0.0
>> 1011093   1.8777  9121     2.1184  1047     1.1756  vmlinux-3.8.5        vmlinux-3.8.5        intel_idle
>> 860652    1.5984  6949     1.6139  468      0.5255  vmlinux-3.8.5        vmlinux-3.8.5        __ticket_spin_lock
>> 761266    1.4138  5773     1.3408  290      0.3256  vmlinux-3.8.5        udevd                __ticket_spin_lock
>> 731444    1.3584  4237     0.9841  163      0.1830  oprofiled            oprofiled            sfile_find
>> 718165    1.3337  5176     1.2021  1123     1.2609  libc-2.15.so         udevd                /lib/x86_64-linux-gnu/libc-2.15.so
>> 704824    1.3090  4875     1.1322  1464     1.6438  libc-2.15.so         init                 /lib/x86_64-linux-gnu/libc-2.15.so
>> 696393    1.2933  3914     0.9090  3319     3.7265  oprofiled            oprofiled            for_one_sfile
>> 690144    1.2817  5997     1.3928  1020     1.1452  vmlinux-3.8.5        vmlinux-3.8.5        update_sd_lb_stats
>> 674357    1.2524  4928     1.1445  646      0.7253  libdbus-1.so.3.5.8   upstart-udev-bridge  /lib/x86_64-linux-gnu/libdbus-1.so.3.5.8
>> 639174    1.1870  6348     1.4743  1247     1.4001  ixgbe                ixgbe                /ixgbe
>> 622320    1.1557  6120     1.4214  1354     1.5203  vmlinux-3.8.4        vmlinux-3.8.4        fib_table_lookup
>> 607881    1.1289  4372     1.0154  101      0.1134  vmlinux-3.8.5        vmlinux-3.8.5        try_to_wake_up
>> 590477    1.0966  3245     0.7537  116      0.1302  libc-2.15.so         sudo                 /lib/x86_64-linux-gnu/libc-2.15.so
>> 558932    1.0380  4792     1.1130  29       0.0326  vmlinux-3.8.5        vmlinux-3.8.5        mutex_spin_on_owner
>> 531910    0.9878  5514     1.2806  1725     1.9368  vmlinux-3.8.4        vmlinux-3.8.4        ipt_do_table
>> 517979    0.9620  4511     1.0477  197      0.2212  vmlinux-3.8.5        sudo                 snmp_fold_field
>> 504656    0.9372  3767     0.8749  604      0.6782  libdbus-1.so.3.5.8   init                 /lib/x86_64-linux-gnu/libdbus-1.so.3.5.8
>> 478770    0.8891  3342     0.7762  959      1.0768  init                 init                 /sbin/init
>> 440734    0.8185  3189     0.7407  56       0.0629  vmlinux-3.8.5        udevd                try_to_wake_up
>> 437560    0.8126  3671     0.8526  12       0.0135  vmlinux-3.8.4        vmlinux-3.8.4        mutex_spin_on_owner
>> 420793    0.7815  3239     0.7523  732      0.8219  vmlinux-3.8.5        upstart-udev-bridge  tg_load_down
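>> 
>> tg_load_down at the top is part of the kernel's fair group scheduler:
>>it walks every task group, and with one cpu cgroup per container that
>>walk grows with the container count. A quick way to check, assuming the
>>usual cgroup mount point:
>> 
>>   grep FAIR_GROUP_SCHED /boot/config-$(uname -r)
>>   ls /sys/fs/cgroup/cpu/lxc | wc -l    # one cpu cgroup per container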
>> 
>> 
>> 
>> Oprofile data for 100 containers forwarding traffic does not look very
>>different:
>> 
>> CPU: Intel Sandy Bridge microarchitecture, speed 2.001e+06 MHz
>>(estimated)
>> Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a
>>unit mask of 0x00 (No unit mask) count 100000
>> Counted UNHALTED_REFERENCE_CYCLES events (Unhalted reference cycles)
>>with a unit mask of 0x01 (No unit mask) count 1000500
>> Counted LLC_REFS events (Last level cache demand requests from this
>>core) with a unit mask of 0x4f (No unit mask) count 1000500
>> samples   %       samples  %       samples  %       image name           app name             symbol name
>> 3812298   6.5582  27823    5.8604  8326     8.4405  vmlinux-3.8.5        vmlinux-3.8.5        tg_load_down
>> 1842136   3.1690  13082    2.7555  3278     3.3231  vmlinux-3.8.5        udevd                tg_load_down
>> 1611025   2.7714  12869    2.7106  245      0.2484  vmlinux-3.8.5        udevd                do_int3
>> 1314286   2.2609  8738     1.8405  1711     1.7345  libnih.so.1.0.0      init                 /lib/x86_64-linux-gnu/libnih.so.1.0.0
>> 1290389   2.2198  13032    2.7449  2483     2.5172  ixgbe                ixgbe                /ixgbe
>> 1089462   1.8742  11091    2.3361  576      0.5839  vmlinux-3.8.5        vmlinux-3.8.5        intel_idle_cpu_init
>> 1014068   1.7445  9151     1.9275  1062     1.0766  vmlinux-3.8.5        vmlinux-3.8.5        intel_idle
>> 971666    1.6715  8058     1.6973  576      0.5839  vmlinux-3.8.5        vmlinux-3.8.5        __ticket_spin_lock
>> 761947    1.3108  4446     0.9365  167      0.1693  oprofiled            oprofiled            sfile_find
>> 761266    1.3096  5773     1.2160  290      0.2940  vmlinux-3.8.5        udevd                __ticket_spin_lock
>> 718165    1.2354  5176     1.0902  1123     1.1384  libc-2.15.so         udevd                /lib/x86_64-linux-gnu/libc-2.15.so
>> 704824    1.2125  4875     1.0268  1464     1.4841  libc-2.15.so         init                 /lib/x86_64-linux-gnu/libc-2.15.so
>> 697389    1.1997  6077     1.2800  1035     1.0492  vmlinux-3.8.5        vmlinux-3.8.5        update_sd_lb_stats
>> 696399    1.1980  3914     0.8244  3319     3.3647  oprofiled            oprofiled            for_one_sfile
>> 674357    1.1601  4928     1.0380  646      0.6549  libdbus-1.so.3.5.8   upstart-udev-bridge  /lib/x86_64-linux-gnu/libdbus-1.so.3.5.8
>> 656320    1.1291  5755     1.2122  33       0.0335  vmlinux-3.8.5        vmlinux-3.8.5        mutex_spin_on_owner
>> 622320    1.0706  6120     1.2891  1354     1.3726  vmlinux-3.8.4        vmlinux-3.8.4        fib_table_lookup
>> 609061    1.0478  4384     0.9234  104      0.1054  vmlinux-3.8.5        vmlinux-3.8.5        try_to_wake_up
>> 590477    1.0158  3245     0.6835  116      0.1176  libc-2.15.so         sudo                 /lib/x86_64-linux-gnu/libc-2.15.so
>> 531910    0.9150  5514     1.1614  1725     1.7487  vmlinux-3.8.4        vmlinux-3.8.4        ipt_do_table
>> 517979    0.8911  4511     0.9502  197      0.1997  vmlinux-3.8.5        sudo                 snmp_fold_field
>> 504656    0.8682  3767     0.7934  604      0.6123  libdbus-1.so.3.5.8   init                 /lib/x86_64-linux-gnu/libdbus-1.so.3.5.8
>> 478770    0.8236  3342     0.7039  959      0.9722  init                 init                 /sbin/init
>> 475511    0.8180  5009     1.0550  1320     1.3382  ip_tables            ip_tables            /ip_tables
>> 452566    0.7785  2642     0.5565  215      0.2180  oprofiled            oprofiled            odb_update_node_with_offset
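>> 
>> The counter setup in both report headers corresponds to an opcontrol
>>session along these lines (a sketch of the invocation, for
>>reproducibility):
>> 
>>   opcontrol --event=CPU_CLK_UNHALTED:100000:0x00 \
>>             --event=UNHALTED_REFERENCE_CYCLES:1000500:0x01 \
>>             --event=LLC_REFS:1000500:0x4f
>>   opcontrol --start
>>   # ... run the forwarding test ...
>>   opcontrol --stop && opcontrol --dump
>>   opreport -l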
>> 
>> 
>> Is there any theoretical reason for this performance drop when running
>>1000 containers?
>> 
>> Regards
>> 
>> Benoit
>
>> _______________________________________________
>> Lxc-users mailing list
>> Lxc-users at lists.sourceforge.net
>> https://lists.sourceforge.net/lists/listinfo/lxc-users