[lxc-users] LXD static IP in container
Michael Eager
eager at eagerm.com
Wed Feb 12 16:08:01 UTC 2020
On 2/11/20 4:57 PM, Joshua Schaeffer wrote:
> Not sure if this will help, but I've provided my configuration for LXD
> below. I use Ubuntu, so you'd have to translate the network
> configuration portions over to RedHat/CentOS. My containers configure
> their own interfaces (static, DHCP, or whatever); LXD simply defines
> the interface. These are the basic steps that I follow:
>
> 1. On the LXD host I set up bridges based on the VLANs that I want a
> NIC to connect to. Those VLAN interfaces use a bond in LACP mode. If
> you don't use VLANs or bonds in your setup, then just create the
> bridge from a physical Ethernet device.
> 2. I then create a profile for each bridge corresponding to a VLAN.
> 3. When I create a container I can assign those profiles (one or
> multiple) to create the network devices.
> 4. Inside the container I configure the network device just like on
> any other system: physical, VM, container, or otherwise.
>
> I do not use LXD-managed network devices. All my network devices are
> managed by the host operating system. Again, if you don't use VLANs or
> bonds, then you can jump straight to creating a bridge.
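>
> If you ever want to attach one of those host bridges to a single
> container without going through a profile, something along these lines
> should also work (the container name "c1" and device name "eth1" are
> just placeholders):
>
> # Attach the host bridge vbridge-28 to container c1 as interface eth1.
> lxc config device add c1 eth1 nic nictype=bridged parent=vbridge-28 name=eth1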
>
> Here are the details of the steps:
>
> Step 1:
> Create the network devices that the LXD containers will use.
>
> lxcuser@blllxc02:~$ cat /etc/network/interfaces.d/01-physical-network.device
> # This file contains the physical NIC definitions.
>
> ############################
> # PHYSICAL NETWORK DEVICES #
> ############################
>
> # Primary services interface.
> auto enp3s0
> iface enp3s0 inet manual
>     bond-master bond-services
>
> # Secondary services interface.
> auto enp4s0
> iface enp4s0 inet manual
>     bond-master bond-services
>
> lxcuser@blllxc02:~$ cat /etc/network/interfaces.d/02-bonded.device
> # This file is used to create network bonds.
>
> ##################
> # BONDED DEVICES #
> ##################
>
> # Services bond device.
> auto bond-services
> iface bond-services inet manual
>     bond-mode 4
>     bond-miimon 100
>     bond-lacp-rate 1
>     bond-slaves enp3s0 enp4s0
>     bond-downdelay 400
>     bond-updelay 800
>
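> On Ubuntu the bond-* options above are handled by the ifenslave
> package, so that needs to be installed for the stanza to work. Once the
> bond is up, something like this should show the negotiated LACP state
> and the status of each slave:
>
> # Kernel bonding driver report: mode, LACP rate, and per-slave status.
> cat /proc/net/bonding/bond-services
>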
> lxcuser@blllxc02:~$ cat /etc/network/interfaces.d/03-vlan-raw.device
> # This file creates raw vlan devices.
>
> ####################
> # RAW VLAN DEVICES #
> ####################
>
> # Tagged traffic on bond-services for VLAN 28
> auto vlan0028
> iface vlan0028 inet manual
>     vlan-raw-device bond-services
>
> # Tagged traffic on bond-services for VLAN 36
> auto vlan0036
> iface vlan0036 inet manual
>     vlan-raw-device bond-services
>
> # Tagged traffic on bond-services for VLAN 40
> auto vlan0040
> iface vlan0040 inet manual
>     vlan-raw-device bond-services
> ...
>
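> The vlan-raw-device option comes from the vlan package on Ubuntu. Once
> one of these interfaces is up, you can double-check the tag with
> something like:
>
> # -d prints protocol details, e.g. "vlan protocol 802.1Q id 28".
> ip -d link show vlan0028
>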
> lxcuser@blllxc02:~$ cat /etc/network/interfaces.d/04-bridge.device
> # This file creates network bridges.
>
> ##################
> # BRIDGE DEVICES #
> ##################
>
> # Bridged interface for VLAN 28.
> auto vbridge-28
> iface vbridge-28 inet manual
>     bridge_ports vlan0028
>     bridge_stp off
>     bridge_fd 0
>     bridge_maxwait 0
>
> # Bridged interface for VLAN 36.
> auto vbridge-36
> iface vbridge-36 inet manual
>     bridge_ports vlan0036
>     bridge_stp off
>     bridge_fd 0
>     bridge_maxwait 0
>
> # Bridged interface for VLAN 40.
> auto vbridge-40
> iface vbridge-40 inet manual
>     bridge_ports vlan0040
>     bridge_stp off
>     bridge_fd 0
>     bridge_maxwait 0
>
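> The bridge_* options are provided by the bridge-utils package. After an
> ifup (or a reboot) you can sanity-check that each bridge picked up its
> VLAN port with something like:
>
> # Bring the bridge up and list its member ports.
> sudo ifup vbridge-28
> brctl show vbridge-28
>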
> Step 2:
> Create profiles for the network devices. This is technically not
> required, but it helps to set up new containers much more quickly.
>
> lxcuser@blllxc02:~$ lxc profile list
> +----------------------+---------+
> | NAME | USED BY |
> +----------------------+---------+
> | 1500_vlan_dns_dhcp | 5 |
> +----------------------+---------+
> | 28_vlan_virt_mgmt | 15 |
> +----------------------+---------+
> | 40_vlan_ext_core_svc | 0 |
> +----------------------+---------+
> | 44_vlan_ext_svc | 4 |
> +----------------------+---------+
> | 48_vlan_ext_cloud | 0 |
> +----------------------+---------+
> | 80_vlan_int_core_svc | 2 |
> +----------------------+---------+
> | 84_vlan_int_svc | 4 |
> +----------------------+---------+
> | 88_vlan_int_cloud | 0 |
> +----------------------+---------+
> | 92_vlan_storage | 0 |
> +----------------------+---------+
> | default | 15 |
> +----------------------+---------+
>
> lxcuser@blllxc02:~$ lxc profile show 28_vlan_virt_mgmt
> config: {}
> description: ""
> devices:
>   mgmt_net:
>     name: veth-mgmt
>     nictype: bridged
>     parent: vbridge-28
>     type: nic
> name: 28_vlan_virt_mgmt
>
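> For reference, a profile like that can be built with something along
> these lines (same names as above):
>
> lxc profile create 28_vlan_virt_mgmt
> lxc profile device add 28_vlan_virt_mgmt mgmt_net nic \
>     nictype=bridged parent=vbridge-28 name=veth-mgmt
>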
> Step 3:
> Create the container with the correct profile(s) to add the network
> device(s) to the container.
>
> lxcuser@blllxc02:~$ lxc init -p default -p 28_vlan_virt_mgmt \
>     -p 44_vlan_ext_svc ubuntu:18.04 bllmail02
>
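> If you need to add a profile to an existing container later, or check
> the merged result, something like this should do it (the extra profile
> here is just an example):
>
> lxc profile add bllmail02 84_vlan_int_svc
> lxc config show bllmail02 --expanded
>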
> Step 4:
> Connect to the container and set up the interface the same way you set
> up any other system. The example below is set to manual, but just
> change it to however you want to configure your device.
>
> lxcuser@blllxc02:~$ lxc exec bllmail02 -- \
>     cat /etc/network/interfaces.d/51-container-network.device
> auto veth-mgmt
> iface veth-mgmt inet manual
> ...
>
> auto veth-ext-svc
> iface veth-ext-svc inet manual
> ...
>
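> For a static address you would just swap "manual" for a static stanza,
> roughly like this (the gateway below is a placeholder for whatever the
> VLAN 28 router actually is):
>
> auto veth-mgmt
> iface veth-mgmt inet static
>     address 10.2.28.129/22
>     gateway 10.2.28.1
>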
> lxcuser@blllxc02:~$ lxc exec bllmail02 -- ip link show veth-mgmt
> 316: veth-mgmt@if317: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>   noqueue state UP mode DEFAULT group default qlen 1000
>     link/ether 00:16:3e:f6:e5:ec brd ff:ff:ff:ff:ff:ff link-netnsid 0
> lxcuser@blllxc02:~$ lxc exec bllmail02 -- ip -4 addr show veth-mgmt
> 316: veth-mgmt@if317: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>   noqueue state UP group default qlen 1000 link-netnsid 0
>     inet 10.2.28.129/22 brd 10.2.31.255 scope global veth-mgmt
>        valid_lft forever preferred_lft forever
>
> lxcuser@blllxc02:~$ lxc exec bllmail02 -- ip link show veth-ext-svc
> 314: veth-ext-svc@if315: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
>   qdisc noqueue state UP mode DEFAULT group default qlen 1000
>     link/ether 00:16:3e:21:ac:dc brd ff:ff:ff:ff:ff:ff link-netnsid 0
> lxcuser@blllxc02:~$ lxc exec bllmail02 -- ip -4 addr show veth-ext-svc
> 314: veth-ext-svc@if315: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
>   qdisc noqueue state UP group default qlen 1000 link-netnsid 0
>     inet 192.41.41.85/26 brd 192.41.41.127 scope global veth-ext-svc
>        valid_lft forever preferred_lft forever
>
> --
> Thanks,
> Joshua Schaeffer
Thanks.
That's a lot to unpack and translate from Ubuntu to CentOS.
-- Mike Eager