[lxc-users] Kubernetes Storage Provisioning using LXD
Charles Butler
charles.butler at canonical.com
Thu Feb 16 23:22:32 UTC 2017
Greetings,
The TL;DR - we don’t fully support this today, but are cycling towards a
resolution. I would love to have your thoughts/requirements added to the
bug listed below.
I've given this thread some thought and you're encountering an edge that we
haven't thoroughly tested. We do have a desire to enable developers to
properly model their workloads in kubernetes running on LXD just like they
would on a cloud. The conjure-up folks opened the issue below, which explores
the initial thoughts on this work:
https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/202
I spent about an hour diving into the Ceph integration path we have already
completed to see if it is a viable option, but I was not successful, and what
does work appears to be order dependent. This is not an ideal solution.
What I can say is that we are aware of this limitation and would love to
enable this. We're looking towards a lighter-weight solution (like gluster
or nfs) for the initial enablement on local development setups.
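For a sense of what that lighter-weight path might look like, here is a
minimal sketch of a Kubernetes PersistentVolume backed by NFS; the server
address and export path are placeholders of my own, not anything we ship
today:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.10   # hypothetical NFS server address
    path: /srv/nfs      # hypothetical export path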
I’ll include those links and a bit of instruction from my Ceph hacking for
further reading material just in case you feel like diving in and hacking
on that vector:
Solving for RBD mount/format "permission denied" errors
https://github.com/lxc/lxd/issues/2709
Install ceph-common on the host
apt-get install ceph-common
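Assuming you want to sanity check the host first, something like the
following should do; loading rbd on the host matters because the containers
share the host kernel:

sudo modprobe rbd   # containers share the host kernel, so load rbd there
rbd --version       # confirms ceph-common landed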
Next step would be to stand up CDK
conjure-up canonical-kubernetes
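conjure-up will take a while; a standard juju idiom (my habit, not part of
the spell itself) for watching the units settle is:

watch -c juju status --color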
Evaluate the nova-lxd LXD profile for kernel modules and escalated security
on the container. This yields a less secure LXD container in this
configuration, but more operability in this context:
https://github.com/conjure-up/spells/blob/master/openstack-novalxd/steps/lxd-profile.yaml
Specifically, you're going to need to whitelist some kernel modules, set the
container to privileged, and add the rbd devices (minor + major numbers,
found via lsblk /dev/rbd# -- order dependent, as the device has to exist
first; an example follows after the profile below).
(Note: I'm using the snap, so my commands will be prefixed with lxd. to
scope the request to the snap binaries; this may diverge from your commands,
which will just be the native lxc profile show, and so on.)
$ lxd.lxc profile show juju-storage-test
config:
  boot.autostart: "true"
  linux.kernel_modules: openvswitch,nbd,ip_tables,ip6_tables,netlink_diag,rbd
  raw.lxc: |
    lxc.aa_profile=unconfined
    lxc.mount.auto=sys:rw
  security.nesting: "true"
  security.privileged: "true"
description: ""
devices:
  root:
    path: /
    type: disk
name: juju-storage-test
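To add the rbd device entry mentioned above, this is roughly the shape of
it; the major/minor numbers below are examples from my machine and will
differ on yours (read them from the MAJ:MIN column of lsblk, and remember
the device has to exist first):

$ lsblk /dev/rbd0
$ lxd.lxc profile device add juju-storage-test rbd0 unix-block path=/dev/rbd0 major=251 minor=0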
Once the deployment has converged and it's pulled down your credentials,
you're ready to deploy Ceph and start enlisting OSDs (using the file storage
type):
# note: you will indeed need six LXD containers in total to run the ceph
service: three mons for quorum, and three OSDs to ensure cluster health. I
tried with one and it failed.
juju deploy ceph-mon -n 3
juju deploy ceph-osd -n 3
juju add-relation ceph-mon ceph-osd
juju config ceph-osd osd-devices=/srv/ceph-osd use-direct-io=false
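Before relating Ceph to Kubernetes and running the action below, I'd wait
for the cluster to report healthy; the usual checks look something like:

juju status ceph-mon ceph-osd
juju ssh ceph-mon/0 sudo ceph -s   # look for HEALTH_OK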
juju add-relation kubernetes-master ceph-mon
juju run-action kubernetes-master/0 create-rbd-pv name=testpv size=50
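If the action succeeds, you should be able to confirm the PV landed using
the credentials conjure-up pulled down (assuming kubectl is configured
locally):

juju show-action-output <action-id>   # the id comes back from run-action
kubectl get pv testpv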
-- this is where things failed for me: the RBD could not be mounted or
formatted because the tooling believed there was already a mounted
filesystem on it (I presume watchers were to blame).
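If you hit the same wall, checking for watchers on the image is where I'd
start; assuming the image lands in the default rbd pool, something like:

rbd status rbd/testpv   # lists clients currently watching the image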
Cited sources for the answers:
Nova-lxd bundle configuration for ceph units
https://github.com/conjure-up/spells/blob/master/openstack-novalxd/bundle.yaml#L60-L76
Enable loopback storage support on the localhost provider for Juju
(unreferenced)
https://github.com/juju/docs/issues/1665
cholcomb and icey on #juju on freenode (storage engineers)
stokachu and lazypower on #juju on freenode (kubernetes engineers)