<div dir="ltr"><br><div class="gmail_extra"><br><br><div class="gmail_quote">On Tue, Nov 26, 2013 at 8:12 AM, Serge E. Hallyn <span dir="ltr"><<a href="mailto:serge@hallyn.com" target="_blank">serge@hallyn.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div>Quoting Tim Hockin (<a href="mailto:thockin@google.com" target="_blank">thockin@google.com</a>):<br>
> What are the requirements/goals around performance and concurrency?<br>
> Do you expect this to be a single-threaded thing, or can we handle<br>
> some number of concurrent operations? Do you expect to use threads or<br>
> processes?<br>
<br>
</div>The cgmanager should be pretty dumb, so I would expect it to be<br>
quite fast. I don't have any specific perf goals though. If you<br>
have requirements I'm very interested to hear them. I should be<br>
able to tell pretty soon how far short I fall.<br>
<br>
By default I'd expect to run with a single thread, but I don't<br>
imagine one thread can serve a busy 1024-cpu system very well.<br>
Unless you have guidance right now, I think I'd like to get<br>
started with the basic functionality and see how it measures<br>
up to your requirements. I should add perf counters from the<br>
start so we can figure out where bottlenecks (if any) are and<br>
how to handle them.<br>
<br>
Otherwise I could start out with a basic numcpus/10 threadpool<br>
and have the main thread do socket i/o and parcel access<br>
verification and vfs work out to the threadpool, but I'd rather<br>
first know where the problems lie.<br></blockquote><div><br></div><div>From Rohit's talk at Linux plumbers:</div><div><br></div><div><a href="http://www.linuxplumbersconf.net/2013/ocw//system/presentations/1239/original/lmctfy%20(1).pdf">http://www.linuxplumbersconf.net/2013/ocw//system/presentations/1239/original/lmctfy%20(1).pdf</a><br>
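<div>The split described above (main thread doing the socket i/o, a numcpus/10 pool handling access verification and vfs work) could look roughly like this. This is purely an illustrative sketch, not cgmanager code; all names here are made up:</div>

```python
# Rough illustration (not cgmanager code) of the split described above:
# the main thread accepts requests and parcels access-check + vfs work
# out to a pool sized off the cpu count. All names are invented.
import os
from concurrent.futures import ThreadPoolExecutor

# the "numcpus/10 threadpool" from the mail, with a floor of one worker
workers = max(1, (os.cpu_count() or 1) // 10)
pool = ThreadPoolExecutor(max_workers=workers)

def handle_request(req):
    """Stand-in for the per-request access verification plus vfs work."""
    return f"done: {req}"

def main_loop(requests):
    # main thread does the socket i/o; the pool does the slow work
    futures = [pool.submit(handle_request, r) for r in requests]
    return [f.result() for f in futures]

print(main_loop(["create /foo", "chown /foo"]))
```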
</div><div><br></div><div>The goal is O(1000) reads and O(100) writes per second.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div><br>
> Can you talk about logging - what and where?<br>
<br>
</div>When started under upstart, anything we print out goes to<br>
/var/log/upstart/cgmanager.log. Would be nice to keep it<br>
that simple. We could log requests by a caller to do something<br>
it is not allowed to do, but it seems to me the failed<br>
attempts cause no harm, while overflowing the logs<br>
could.<br>
<br>
Did you have anything in mind? Did you want logging to help<br>
detect certain conditions for system optimization, or just<br>
for failure notices and security violations?<br>
<div><br>
> How will we handle event_fd? Pass a file-descriptor back to the caller?<br>
<br>
</div>The only thing currently supporting eventfd is memory threshold,<br>
right? I haven't tested whether this will work or not, but<br>
ideally the caller would open the eventfd, pass it, the<br>
cgroup name, the controller file to be watched, and the args to<br>
cgmanager; cgmanager confirms read access, opens the<br>
controller fd, makes the request over cgroup.event_control,<br>
then passes the controller fd back to the caller and closes<br>
its own copy.<br>
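<br>
The hand-off described above is essentially SCM_RIGHTS fd-passing over a unix socket. A minimal sketch of that mechanism (illustrative only, not cgmanager's actual interface; the helper names are made up):<br>

```python
# Sketch of passing a live file descriptor between processes over a unix
# socket with SCM_RIGHTS, as in the eventfd hand-off described above.
# Names here are illustrative, not cgmanager's real API.
import array, os, socket

def send_fd(sock, fd, payload=b"fd"):
    """Pass one file descriptor over a unix socket using SCM_RIGHTS."""
    sock.sendmsg([payload],
                 [(socket.SOL_SOCKET, socket.SCM_RIGHTS,
                   array.array("i", [fd]))])

def recv_fd(sock):
    """Receive one file descriptor; returns (payload, fd)."""
    msg, ancdata, flags, addr = sock.recvmsg(64, socket.CMSG_LEN(4))
    fds = array.array("i")
    for level, ctype, data in ancdata:
        if level == socket.SOL_SOCKET and ctype == socket.SCM_RIGHTS:
            fds.frombytes(data[:4])
    return msg, fds[0]

if __name__ == "__main__":
    caller, manager = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)
    r, w = os.pipe()               # stand-in for the controller/eventfd fd
    send_fd(caller, r)             # caller passes its fd
    _, passed = recv_fd(manager)   # receiver gets a live copy
    os.close(r)                    # sender can close its own copy...
    os.write(w, b"ok")
    print(os.read(passed, 2).decode())  # ...the passed fd still works
```

The same mechanism works in either direction, which matches the flow above: the caller passes the eventfd in, and cgmanager passes the controller fd back before closing its own copy.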
<br>
I'm also not sure whether the cgroup interface is going to be<br>
offering a new feature to replace eventfd, since it wants<br>
people to stop using cgroupfs... Tejun?<br></blockquote><div><br></div><div>From my discussions with Tejun, he wanted to move to using inotify so it may still be an fd we pass around. </div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div><div><br>
> That's all I can come up with for now.<br>
</div></div></blockquote></div><br></div></div>