[lxc-devel] Working with glibc (PID/TID caches).
Eric W. Biederman
ebiederm at xmission.com
Mon Aug 4 18:10:35 UTC 2014
Serge Hallyn <serge.hallyn at ubuntu.com> writes:
> Quoting Eric W. Biederman (ebiederm at xmission.com):
>> Serge Hallyn <serge.hallyn at ubuntu.com> writes:
>>
>> > Quoting Eric W. Biederman (ebiederm at xmission.com):
>> >>
>> >> Serge Hallyn <serge.hallyn at ubuntu.com> writes:
>> >> > Quoting Carlos O'Donell (carlos at redhat.com):
>> >> >> There was a complaint a while back from someone working
>> >> >> on containers about glibc PID caching. I recently received
>> >> >> another request to provide userspace with a way to reset
>> >> >> any PID or TID caches to make clone-based sandboxing easier
>> >> >> (CLONE_NEWPID).
>> >> >>
>> >> >> How did lxc workaround the PID cache in glibc? What APIs
>> >> >> could glibc provide to help the implementation of containers?
>>
>> >> That said, clone(3) is a cumbersome API to use when you don't want to
>> >> share the same address space (it sucks to have to allocate an extra
>> >> stack just to call fork(2)), and that probably gets people resorting to
>> >> calling syscall(SYS_clone,...) and then having pid problems.
>>
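
[For illustration, a minimal sketch of the pattern just described: calling
the raw clone system call directly so the child keeps the parent's stack.
The argument order below is the x86-64 one, CLONE_NEWPID requires
CAP_SYS_ADMIN, and the flag choice and error handling are only
illustrative; the pid-caching behaviour applies to glibc versions that
cache the pid.]

/* Sketch: fork-style use of the raw clone system call to create a new
 * PID namespace.  Because glibc's clone() wrapper is bypassed, a pid
 * value cached by glibc in the parent is not refreshed in the child. */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	/* No new stack is passed, so the child continues on a copy of the
	 * parent's stack (there is no CLONE_VM), exactly like fork(). */
	long pid = syscall(SYS_clone, CLONE_NEWPID | SIGCHLD,
			   NULL, NULL, NULL, NULL);

	if (pid < 0) {
		perror("clone");
		return 1;
	}
	if (pid == 0) {
		/* In the new namespace the kernel reports pid 1, but a glibc
		 * that caches the pid was bypassed and getpid() may still
		 * return the parent's old value. */
		printf("raw getpid syscall: %ld, glibc getpid(): %ld\n",
		       (long)syscall(SYS_getpid), (long)getpid());
		_exit(0);
	}
	waitpid(pid, NULL, 0);
	return 0;
}
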
>> > So, the long and short of it is, we're all happy with what we've got?
>> >
> Or did I, as usual, misread your main point?
>>
>> clone(3) sucks to use for creating namespaces. Having to create a stack
>> when you don't pass CLONE_VM adds all sorts of unnecessary complexity.
>
> Ah, yes. Well at least there are enough examples of boilerplate out
> there for people to cut and paste, but agreed it would be nice to have a
> simpler-to-use API. Could clone3(3) simply calculate 10 * the native page
> size and allocate that much space, or something like that?
When CLONE_VM is not passed, clone(3) should allow me to use my current
stack, just like fork does, because it is a giant waste to allocate a
second stack when you already have a perfectly good stack and no other
threads using it.
clone(2) allows that. clone(3) sucks at being fork with more flags.
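
[To make the contrast concrete, a rough sketch of the boilerplate the
glibc clone() wrapper forces on the fork-like case: the stack size, the
child_fn name, and the flag choice here are arbitrary and picked only for
illustration; CLONE_NEWPID again requires CAP_SYS_ADMIN.]

/* Sketch: glibc's clone() wrapper requires a separately allocated stack
 * and a callback even when no address space is shared and the parent's
 * stack would have been fine. */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define CHILD_STACK_SIZE (1024 * 1024)	/* arbitrary; the "10 * page size" idea */

static int child_fn(void *arg)
{
	printf("child in new pid namespace, getpid() = %ld\n", (long)getpid());
	return 0;
}

int main(void)
{
	char *stack = malloc(CHILD_STACK_SIZE);

	if (!stack) {
		perror("malloc");
		return 1;
	}
	/* The wrapper wants the *top* of the stack on architectures where
	 * the stack grows down. */
	pid_t pid = clone(child_fn, stack + CHILD_STACK_SIZE,
			  CLONE_NEWPID | SIGCHLD, NULL);

	if (pid < 0) {
		perror("clone");
		free(stack);
		return 1;
	}
	waitpid(pid, NULL, 0);
	free(stack);
	return 0;
}
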
Eric