[Lxc-users] Cluster filesystem?

Toens Bueker toens.bueker at lists0903.nurfuerspam.neuroserve.de
Mon Oct 8 20:53:57 UTC 2012


Ulli Horlacher <framstag at rus.uni-stuttgart.de> wrote:

> > > "should" - I prefer recommendations ny experience :-)
> > >
> > > I have tried Gluster myself and it is HORRIBLY slow.
> > 
> > If you are interested, try Moosefs. I have had quite good experiences
> > with it, though not with containers.
> 
> Moosefs is FUSE-based (for clients) and therefore will be very slow.
> I suspect NFS is faster, even on (only) GbE.
> 
> > multiple mount protection
> > 
> > You cannot mount the partition multiple times at the same time. It's a
> > safety feature. With this trick you can be safe and fast, with all the
> > benefits of true POSIX filesystems.
> 
> Ubuntu 12.04 does not have ext4 MMP support.
> Besides this, I would need n filesystems for n hosts. A failover solution
> would be very complex.

Although it is only partially open source, "Parallels Cloud Storage"
might give us a hint about where to look (http://www.parallels.com/products/pcs/).

As far as I understand it, it consists of basically two components:

- a distributed "filesystem" (which seems to be very similar to cephfs
  (http://ceph.com/) or scality ring (http://www.scality.com/) with
  its metadata servers, chunk servers, number of replicas, etc.)
  
- a special loopback block device: ploop (http://wiki.openvz.org/Ploop)

With ploop you can keep your container as one big "image file" on your
distributed object store (which is not very good at serving small files
anyway) and, at the same time, access the contents of your container
through a separate mount point.
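To make the ploop idea a bit more concrete, here is a minimal sketch
(wrapped in Python only so the steps can be commented) of creating such an
image file on the shared storage and mounting its contents. All paths, the
size and the mount point are made-up assumptions, and the exact ploop CLI
options may differ between OpenVZ releases - treat it as an illustration,
not a tested recipe.

    # Rough sketch of the ploop workflow described above
    # (see http://wiki.openvz.org/Ploop). Paths, size and mount point
    # are placeholder assumptions.
    import os
    import subprocess

    SHARED_DIR = "/mnt/distfs/ct101"              # hypothetical dir on the shared storage
    IMAGE = os.path.join(SHARED_DIR, "root.hdd")  # the container as one big image file
    DESCRIPTOR = os.path.join(SHARED_DIR, "DiskDescriptor.xml")
    MOUNTPOINT = "/vz/root/101"                   # where the container's contents appear

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.check_call(cmd)

    os.makedirs(MOUNTPOINT, exist_ok=True)

    # 1. Create the ploop image; ploop writes the DiskDescriptor.xml
    #    next to it on the shared storage.
    run(["ploop", "init", "-s", "10g", IMAGE])

    # 2. Mount the image: the object store only ever sees one large file,
    #    while the container's filesystem is reachable under MOUNTPOINT.
    run(["ploop", "mount", "-m", MOUNTPOINT, DESCRIPTOR])

    # ... the container would run from MOUNTPOINT here ...

    # 3. Clean up.
    run(["ploop", "umount", DESCRIPTOR])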

The distributed filesystem can be mounted on all servers running
containers. The file servers can be container servers at the same time.
The containers of a failed container server could then be restarted on any
other container server in the cluster.
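As a very rough illustration of that failover idea, the sketch below
(again Python around plain mount/ssh/lxc commands) mounts the shared
filesystem and restarts the containers of a dead node on a spare one. The
host names, the container inventory, the CephFS-style mount options and
the ping-based health check are all assumptions on my part; a real cluster
would need proper fencing and cluster management, not a ping.

    # Hedged failover sketch: every container server mounts the same
    # distributed filesystem, so a spare node can start the containers of
    # a failed node without copying any data. All names below are made up.
    import os
    import subprocess

    MONITOR = "10.0.0.1:6789"      # hypothetical CephFS monitor address
    MOUNTPOINT = "/mnt/distfs"     # shared storage, identical on all container servers

    # Which containers normally run on which server (made-up inventory).
    INVENTORY = {
        "ct-host1": ["web1", "db1"],
        "ct-host2": ["web2"],
    }
    SPARE = "ct-host3"             # node that takes over failed containers

    def mount_shared_fs():
        # Kernel CephFS mount; ceph-fuse would also work, at the usual FUSE cost.
        os.makedirs(MOUNTPOINT, exist_ok=True)
        subprocess.check_call([
            "mount", "-t", "ceph", MONITOR + ":/", MOUNTPOINT,
            "-o", "name=admin,secretfile=/etc/ceph/admin.secret",
        ])

    def is_alive(host):
        # Crude health check: a single ping. Real clusters need fencing.
        return subprocess.call(["ping", "-c", "1", "-W", "2", host],
                               stdout=subprocess.DEVNULL) == 0

    def restart_on(container, target):
        # Start the container on the target node via ssh; its rootfs lives
        # on the shared mount, so nothing has to be copied first.
        subprocess.check_call(["ssh", target, "lxc-start", "-d", "-n", container])

    if __name__ == "__main__":
        mount_shared_fs()
        for host, containers in INVENTORY.items():
            if not is_alive(host):
                for ct in containers:
                    restart_on(ct, SPARE)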

This is only a wild guess. Probably there is more to it - but it could
be a starting point.

by
Töns
-- 
There is no safe distance.



