[lxc-users] Question about your storage on multiple LXC/LXD nodes

Benoit GEORGELIN - Association Web4all benoit.georgelin at web4all.fr
Thu Nov 3 17:25:06 UTC 2016


It's kind of you to share your experience / setup. 
I will have a look on ScaleIO as it seems to be interesting . 

Have a nice day 

Cordialement, 

Benoît 


De: "Ron Kelley" <rkelleyrtp at gmail.com> 
À: "lxc-users" <lxc-users at lists.linuxcontainers.org> 
Envoyé: Jeudi 3 Novembre 2016 12:37:36 
Objet: Re: [lxc-users] Question about your storage on multiple LXC/LXD nodes 

Hi Benoit, 

Our environment is pretty locked down when it comes to upgrades at the Ubuntu server level. We don't upgrade often (mainly for security-related stuff). That said, in the event of a mandatory reboot, we take a VM snapshot then take a small downtime. Since Ubuntu 16 (re)boots so quickly, the downtime is usually less than 30secs for our servers. Thus, no extended outage times. If the upgrade fails, we easily roll back to the snapshot. 

At this time, NFSv3 is the best solution for us at this time. Each NFS server has multiple NICs, redundant power supplies, etc (real enterprise-class systems). In the event of an NFS server failure, we can reload from our backup servers (again, multiple backup servers, etc). 

To address the single point of failure for NFS, we have been looking at something called ScaleIO. It is a distributed/replicated block-level storage system much like gluster. You create virtual LUNs and mount them on your hypervisor host; the hypervisor is responsible for managing the distributed access (think VMFS). Each hypervisor sees the same LUN which makes VM migration simple. This technology builds a Storage Area Network (SAN) over an IP network without expensive Fiber Channel infrastructure. ScaleIO allows you to take multiple HDD failures or even complete storage node failures w/out downtime on your storage network. The software is free for testing but you must purchase a support contract to use in production. Just do a quick search for ScaleIO and read the literature. 

Let me know if you have more questions... 

Thanks, 

-Ron 




On 11/3/2016 11:38 AM, Benoit GEORGELIN - Association Web4all wrote: 
> Hi Ron, 
> sounds like a good way to manage it. Thanks 
> How do you handle your Ubuntu 16.04 upgrade / kernel update ? I case of 
> a mandatory reboot, your LXD containers will have a downtime but maybe 
> not a problem in your situation? 
> 
> Regarding ceph, gluster and drdb, the main concern is about 
> performance/stability so you are right, NFS could be the "best" way to 
> share the data across hyperviseurs 
> 
> Cordialement, 
> 
> Benoît 
> 
> ------------------------------------------------------------------------ 
> *De: *"Ron Kelley" <rkelleyrtp at gmail.com> 
> *À: *"lxc-users" <lxc-users at lists.linuxcontainers.org> 
> *Envoyé: *Jeudi 3 Novembre 2016 10:53:05 
> *Objet: *Re: [lxc-users] Question about your storage on multiple 
> LXC/LXD nodes 
> 
> We do it slightly differently. We run LXD containers on Ubuntu 16.04 
> Virtual Machines (inside a virtualized infrastructure). Each physical 
> server has redundant network links to highly-available storage. Thus, 
> we don't have to migrate containers between LXD servers; instead we 
> migrate the Ubuntu VM to another server/storage pool. Additionally, we 
> use BTRFS snapshots inside the Ubuntu server to quickly restore backups 
> for the LXD containers themselves. 
> 
> So far, everything has been rock solid. The LXD containers work great 
> inside Ubuntu VMs (performance, scale, etc). In the unlikely event we 
> have to migrate an LXD container from one server to another, we will 
> simply do an LXD copy (with a small maintenance window). 
> 
> As an aside: I have tried gluster, ceph, and even DRBD in the past w/out 
> much success. Eventually, we went back to NFSv3 servers for 
> performance/stability. I am looking into setting up an HA NFSv4 config 
> to address the single point of failure with NFS v3 setups. 
> 
> -Ron 
> 
> 
> 
> 
> On 11/3/2016 9:42 AM, Benoit GEORGELIN - Association Web4all wrote: 
>> Thanks, looks like nobody use LXD in a cluster 
>> 
>> Cordialement, 
>> 
>> Benoît 
>> 
>> ------------------------------------------------------------------------ 
>> *De: *"Tomasz Chmielewski" <mangoo at wpkg.org> 
>> *À: *"lxc-users" <lxc-users at lists.linuxcontainers.org> 
>> *Cc: *"Benoit GEORGELIN - Association Web4all" 
> <benoit.georgelin at web4all.fr> 
>> *Envoyé: *Mercredi 2 Novembre 2016 12:01:50 
>> *Objet: *Re: [lxc-users] Question about your storage on multiple LXC/LXD 
>> nodes 
>> 
>> On 2016-11-03 00:53, Benoit GEORGELIN - Association Web4all wrote: 
>>> Hi, 
>>> 
>>> I'm wondering what kind of storage are you using in your 
>>> infrastructure ? 
>>> In a multiple LXC/LXD nodes how would you design the storage part to 
>>> be redundant and give you the flexibility to start a container from 
>>> any host available ? 
>>> 
>>> Let's say I have two (or more) LXC/LXD nodes and I want to be able to 
>>> start the containers on one or the other node. 
>>> LXD allow to move containers across nodes by transferring the data 
>>> from node A to node B but I'm looking to be able to run the containers 
>>> on node B if node A is in maintenance or crashed. 
>>> 
>>> There is a lot of distributed file system (gluster, ceph, beegfs, 
>>> swift etc..) but I my case, I like using ZFS with LXD and I would 
>>> like to try to keep that possibility . 
>> 
>> If you want to stick with ZFS, then your only option is setting up DRBD. 
>> 
>> 
>> Tomasz Chmielewski 
>> https://lxadm.com 
>> 
>> 
>> _______________________________________________ 
>> lxc-users mailing list 
>> lxc-users at lists.linuxcontainers.org 
>> http://lists.linuxcontainers.org/listinfo/lxc-users 
>> 
> _______________________________________________ 
> lxc-users mailing list 
> lxc-users at lists.linuxcontainers.org 
> http://lists.linuxcontainers.org/listinfo/lxc-users 
> 
> 
> _______________________________________________ 
> lxc-users mailing list 
> lxc-users at lists.linuxcontainers.org 
> http://lists.linuxcontainers.org/listinfo/lxc-users 
> 
_______________________________________________ 
lxc-users mailing list 
lxc-users at lists.linuxcontainers.org 
http://lists.linuxcontainers.org/listinfo/lxc-users 
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.linuxcontainers.org/pipermail/lxc-users/attachments/20161103/ddaf44bb/attachment.html>


More information about the lxc-users mailing list