[Lxc-users] Stats 'n' Stuff

Matt Franz mdfranz at gmail.com
Sat Nov 12 19:32:01 UTC 2011


Yes.  The random Ethernet device names make monitoring with Munin, Zenoss, or whatever very painful.

One of the nice features of OpenVZ is that it uses the container ID in the device name. That name stays consistent across container reboots, and it also lets you easily identify which NIC belongs to which container, so you can view traffic stats across multiple containers while only monitoring the bare-metal host.

The right answer would be to find the .c or .sh that creates the devices (assuming it is in user space) and modify that code so the random device names can be overridden.
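
(Though, if I remember right, lxc already lets you pin the host-side name via lxc.network.veth.pair in the container config, so it may just be a matter of setting that rather than patching anything. A minimal sketch - the container and bridge names here are only examples:

   # /var/lib/lxc/web01/config  (illustrative)
   lxc.network.type      = veth
   lxc.network.link      = br0
   lxc.network.flags     = up
   lxc.network.veth.pair = veth-web01    # fixed host-side device name

With a fixed name like veth-web01, the munin/snmp/mrtg targets stay stable across container restarts.)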


Mdf

On Nov 12, 2011, at 1:57 PM, Gordon Henderson <gordon at drogon.net> wrote:

> 
> I'm looking for ways to get stats out of each container on a host - the 
> sort of stuff I'm after is the bandwidth of the network interface and cpu 
> cycles.
> 
> On the CPU monitoring front there is /cgroup/xxx/cpuacct.stat, memory from 
> memory.usage_in_bytes and memory.memsw.usage_in_bytes ...
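> 
> Reading those per container is straightforward - a rough sketch, assuming 
> the hierarchy is mounted at /cgroup and each container gets its own 
> directory there:
> 
>   #!/bin/sh
>   # dump per-container CPU and memory figures from the cgroup files above
>   for d in /cgroup/*/ ; do
>       c=$(basename "$d")
>       [ -f "$d/cpuacct.stat" ] || continue
>       echo "== $c =="
>       cat "$d/cpuacct.stat"                                  # user/system ticks
>       echo "mem:   $(cat "$d/memory.usage_in_bytes")"        # RAM
>       echo "memsw: $(cat "$d/memory.memsw.usage_in_bytes")"  # RAM + swap
>   done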
> 
> But on the network side...
> 
> There is nothing in /cgroup that I can see..
> 
> And there are some oddities: the internal interface index is allocated 
> afresh for each interface - so on the host, eth0 is (typically) interface 
> number 2, but in a container it's not 2, it's something higher. The index 
> can be found, but the next time you reboot the container or the host it 
> will change (or it's highly probable that it will change).
> 
> So even if I ran snmpd in each container, using something like mrtg to get 
> the stats is going to be problematic, as you need to encode the interface 
> number in the mrtg config file...
> 
> In any case, I'd really rather only run one snmpd on the host... The same 
> downside applies there: the interface number changes every time you 
> restart a container...
> 
> Has anyone worked round this?
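> 
> (One half-formed idea: the kernel does expose per-interface byte counters 
> by *name* on the host, so reading those directly sidesteps the SNMP index 
> problem - just a sketch, nothing clever:
> 
>   #!/bin/sh
>   # per-veth byte counters straight from sysfs, keyed by name rather than index
>   for i in /sys/class/net/veth* ; do
>       n=$(basename "$i")
>       echo "$n rx=$(cat "$i/statistics/rx_bytes") tx=$(cat "$i/statistics/tx_bytes")"
>   done
> 
> though that still leaves the job of mapping each vethxxxx back to its 
> container.)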
> 
> Beyond that, my thought is that every time I start/stop a container or 
> reboot the host, I run mrtg's 'cfgmaker' program, parse the output, and 
> match the interface name to the container name (fixing the vethxxxx 
> interface name in the container's config file so I know which is which), 
> then extract the interface number, dynamically write an mrtg.cfg file, and 
> run mrtg...
> 
> e.g.
> 
>   # cfgmaker public@localhost | grep 'Interface.*veth'
>   ### Interface 6 >> Descr: 'vethdU4ae6' | Name: 'vethdU4ae6' | Ip: '' | Eth: '4a-d4-c2-97-a9-c0' ###
>   ### Interface 10 >> Descr: 'vethr3M2xv' | Name: 'vethr3M2xv' | Ip: '' | Eth: '76-9a-d2-be-a2-50' ###
>   ### Interface 14 >> Descr: 'vethiPsSOE' | Name: 'vethiPsSOE' | Ip: '' | Eth: '9e-da-70-6c-b1-93' ###
>   ### Interface 18 >> Descr: 'vethQ6lLx8' | Name: 'vethQ6lLx8' | Ip: '' | Eth: '6a-c2-18-f6-10-95' ###
>   ### Interface 22 >> Descr: 'veth8gX8cw' | Name: 'veth8gX8cw' | Ip: '' | Eth: '76-16-5d-a9-0c-fb' ###
>   ### Interface 26 >> Descr: 'vethOSG0De' | Name: 'vethOSG0De' | Ip: '' | Eth: '5e-95-9b-b7-b4-e8' ###
> 
> and parse that to extract the interface numbers (6, 10, 14..) and write an 
> appropriate mrtg config file ...
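> 
> Roughly the sort of glue I have in mind - a rough sketch only, and the 
> file names are just examples:
> 
>   #!/bin/sh
>   # rebuild the per-container mrtg config whenever a container starts or stops
>   CFG=/etc/mrtg/mrtg-containers.cfg
>   : > "$CFG"
>   cfgmaker public@localhost |
>     grep 'Interface.*veth' |
>     sed -e 's/^### Interface //' -e 's/ >> Descr: / /' -e "s/'//g" |
>     while read ifnum name rest ; do
>         printf 'Target[%s]: %s:public@localhost\n' "$name" "$ifnum" >> "$CFG"
>         printf 'Title[%s]: traffic for %s\n'       "$name" "$name"  >> "$CFG"
>         printf 'MaxBytes[%s]: 125000000\n\n'       "$name"          >> "$CFG"
>     done
>   mrtg "$CFG"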
> 
> Or am I missing something obvious?
> 
> Does anyone else bother with the stats of the individual containers?
> 
> Gordon
> 
> _______________________________________________
> Lxc-users mailing list
> Lxc-users at lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/lxc-users



