It's kind of you to share your experience and setup.
I will have a look at ScaleIO, as it seems interesting.

Have a nice day.

Best regards,

Benoît

------------------------------------------------------------------------
From: "Ron Kelley" <rkelleyrtp@gmail.com>
To: "lxc-users" <lxc-users@lists.linuxcontainers.org>
Sent: Thursday, November 3, 2016 12:37:36
Subject: Re: [lxc-users] Question about your storage on multiple LXC/LXD nodes

Hi Benoit,

Our environment is pretty locked down when it comes to upgrades at the Ubuntu server level. We don't upgrade often (mainly for security-related fixes). That said, in the event of a mandatory reboot, we take a VM snapshot and then take a short downtime. Since Ubuntu 16 (re)boots so quickly, the downtime is usually less than 30 seconds for our servers, so there are no extended outages. If the upgrade fails, we can easily roll back to the snapshot.

At this time, NFSv3 is the best solution for us. Each NFS server has multiple NICs, redundant power supplies, etc. (real enterprise-class systems). In the event of an NFS server failure, we can reload from our backup servers (again, multiple backup servers, etc.).

To address the single point of failure for NFS, we have been looking at something called ScaleIO. It is a distributed/replicated block-level storage system, much like Gluster. You create virtual LUNs and mount them on your hypervisor host; the hypervisor is responsible for managing the distributed access (think VMFS). Each hypervisor sees the same LUN, which makes VM migration simple. This technology builds a Storage Area Network (SAN) over an IP network without expensive Fibre Channel infrastructure. ScaleIO can tolerate multiple HDD failures or even complete storage-node failures without downtime on your storage network. The software is free for testing, but you must purchase a support contract to use it in production.
Just do a quick search for ScaleIO and read the literature.

Let me know if you have more questions...

Thanks,

-Ron


On 11/3/2016 11:38 AM, Benoit GEORGELIN - Association Web4all wrote:
> Hi Ron,
> That sounds like a good way to manage it. Thanks.
> How do you handle your Ubuntu 16.04 upgrades / kernel updates? In case of
> a mandatory reboot, your LXD containers will have some downtime, but maybe
> that is not a problem in your situation?
>
> Regarding Ceph, Gluster and DRBD, the main concern is about
> performance/stability, so you are right, NFS could be the "best" way to
> share the data across hypervisors.
>
> Best regards,
>
> Benoît
>
> ------------------------------------------------------------------------
> From: "Ron Kelley" <rkelleyrtp@gmail.com>
> To: "lxc-users" <lxc-users@lists.linuxcontainers.org>
> Sent: Thursday, November 3, 2016 10:53:05
> Subject: Re: [lxc-users] Question about your storage on multiple
> LXC/LXD nodes
>
> We do it slightly differently. We run LXD containers on Ubuntu 16.04
> virtual machines (inside a virtualized infrastructure). Each physical
> server has redundant network links to highly available storage. Thus,
> we don't have to migrate containers between LXD servers; instead we
> migrate the Ubuntu VM to another server/storage pool. Additionally, we
> use BTRFS snapshots inside the Ubuntu server to quickly restore backups
> of the LXD containers themselves.
>
> So far, everything has been rock solid. The LXD containers work great
> inside Ubuntu VMs (performance, scale, etc.). In the unlikely event we
> have to migrate an LXD container from one server to another, we will
> simply do an LXD copy (with a small maintenance window).
>
> As an aside: I have tried Gluster, Ceph, and even DRBD in the past without
> much success. Eventually, we went back to NFSv3 servers for
> performance/stability. I am looking into setting up an HA NFSv4 config
> to address the single point of failure with NFSv3 setups.
>
> -Ron
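
(For reference, a rough sketch of the "LXD copy" migration and the BTRFS snapshot idea Ron describes above. The container name, remote name and address below are invented for illustration, and the /var/lib/lxd path assumes the stock LXD 2.x layout on a BTRFS subvolume; exact commands can vary between LXD versions, so treat this as a hedged example rather than the poster's actual procedure.)

    # Snapshot the container, then copy it to another LXD host
    lxc snapshot web01 pre-migration        # point-in-time snapshot on the source host
    lxc remote add lxd-host-b 192.0.2.20    # register the target LXD server (example address; needs its trust password)
    lxc stop web01                          # small maintenance window, as described above
    lxc copy web01 lxd-host-b:web01         # transfer the container and its snapshots
    lxc start lxd-host-b:web01              # bring it up on the target host

    # Read-only BTRFS snapshot of the LXD data directory for quick local restores
    btrfs subvolume snapshot -r /var/lib/lxd /var/lib/lxd-snap-$(date +%F)

(lxc move would do the copy-and-delete in one step; the separate copy is shown here to match the small-maintenance-window approach above.)
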
>
>
> On 11/3/2016 9:42 AM, Benoit GEORGELIN - Association Web4all wrote:
>> Thanks, it looks like nobody uses LXD in a cluster.
>>
>> Best regards,
>>
>> Benoît
>>
>> ------------------------------------------------------------------------
>> From: "Tomasz Chmielewski" <mangoo@wpkg.org>
>> To: "lxc-users" <lxc-users@lists.linuxcontainers.org>
>> Cc: "Benoit GEORGELIN - Association Web4all" <benoit.georgelin@web4all.fr>
>> Sent: Wednesday, November 2, 2016 12:01:50
>> Subject: Re: [lxc-users] Question about your storage on multiple LXC/LXD
>> nodes
>>
>> On 2016-11-03 00:53, Benoit GEORGELIN - Association Web4all wrote:
>>> Hi,
>>>
>>> I'm wondering what kind of storage you are using in your
>>> infrastructure.
>>> With multiple LXC/LXD nodes, how would you design the storage part to
>>> be redundant and give you the flexibility to start a container from
>>> any available host?
>>>
>>> Let's say I have two (or more) LXC/LXD nodes and I want to be able to
>>> start the containers on one node or the other.
>>> LXD allows moving containers across nodes by transferring the data
>>> from node A to node B, but I'm looking to be able to run the containers
>>> on node B if node A is in maintenance or has crashed.
>>>
>>> There are a lot of distributed file systems (Gluster, Ceph, BeeGFS,
>>> Swift, etc.), but in my case I like using ZFS with LXD and I would
>>> like to keep that possibility.
>>
>> If you want to stick with ZFS, then your only option is setting up DRBD.
>>
>>
>> Tomasz Chmielewski
>> https://lxadm.com
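
(For completeness, a minimal sketch of one way to combine ZFS and DRBD as Tomasz suggests: DRBD replicates a block device between two nodes, and the ZFS pool used by LXD sits on top of it. The resource name, node names, addresses and disk path below are invented for illustration, and real setups also need fencing and automated failover, which are omitted here.)

    # /etc/drbd.d/lxd.res  (identical on both nodes)
    resource lxd {
        device    /dev/drbd0;
        disk      /dev/sdb1;                  # local partition dedicated to replication
        meta-disk internal;
        on node-a { address 10.0.0.1:7789; }
        on node-b { address 10.0.0.2:7789; }
    }

    # On both nodes: initialise the metadata and bring the resource up
    drbdadm create-md lxd
    drbdadm up lxd

    # On the node that should run the containers first:
    drbdadm primary --force lxd
    zpool create lxd-pool /dev/drbd0          # ZFS pool on top of the replicated device
    # ...then point LXD at that pool (e.g. during lxd init)

(On failover, the surviving node would be promoted with drbdadm primary and the pool brought in with zpool import; automating that, for example with Pacemaker, is beyond this sketch.)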