[lxc-users] LXD Bridged IPv6

Nick Falcone nick at nfalcone.net
Tue Apr 26 18:15:57 UTC 2016


I am not running radvd or dnsmasq; that is why I thought it was odd that
SLAAC works and the container got an address.  I thought a neighbor
relationship had to be established for this to work.

Honestly, I am not sure how routes are advertised; this current test
machine is a DigitalOcean VPS.  There is also no firewalling happening at
the moment.  I was under the impression that only routing would work? If
not, that is okay, I just do not understand how it works then.
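
To figure out where the advertisements actually come from, I will probably
just watch ICMPv6 on the uplink and on the bridge, something along these
lines (untested filter; type 134 is a router advertisement):

tcpdump -n -i eth0 icmp6
tcpdump -n -i lxdbr0 icmp6
tcpdump -n -i eth0 'icmp6 and ip6[40] = 134'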

`ip a` on the host:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
state UP group default qlen 1000
    link/ether 04:01:d4:50:c4:01 brd ff:ff:ff:ff:ff:ff
    inet 162.243.200.170/24 brd 162.243.200.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.13.0.5/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2604:a880:0:1010::623:1/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::601:d4ff:fe50:c401/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group
default qlen 1000
    link/ether 04:01:d4:50:c4:02 brd ff:ff:ff:ff:ff:ff
4: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
state UP group default qlen 1000
    link/ether fe:09:9d:73:d1:98 brd ff:ff:ff:ff:ff:ff
    inet 10.195.87.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 2604:a880:0:1010::623:2/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::3caf:7aff:fe86:1914/64 scope link 
       valid_lft forever preferred_lft forever
6: veth4GSVLP@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
noqueue master lxdbr0 state UP group default qlen 1000
    link/ether fe:09:9d:73:d1:98 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::fc09:9dff:fe73:d198/64 scope link 
       valid_lft forever preferred_lft forever

`ip -6 r` on the host:
2604:a880:0:1010::/64 dev lxdbr0  proto kernel  metric 256  pref medium
2604:a880:0:1010::/64 dev eth0  proto kernel  metric 256  pref medium
fe80::/64 dev lxdbr0  proto kernel  metric 256  pref medium
fe80::/64 dev eth0  proto kernel  metric 256  pref medium
fe80::/64 dev veth4GSVLP  proto kernel  metric 256  pref medium
default via 2604:a880:0:1010::1 dev eth0  metric 1024  pref medium
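
From that output, both eth0 and lxdbr0 carry the same 2604:a880:0:1010::/64,
so I suspect this is the single-subnet case you describe.  If I understand
the per-container `ip neighbor` idea correctly, an untested sketch for my
one container (its address taken from the earlier mails) would be roughly:

sysctl -w net.ipv6.conf.eth0.proxy_ndp=1
ip -6 neigh add proxy 2604:a880:0:1010:216:3eff:fe87:ff20 dev eth0
ip -6 route add 2604:a880:0:1010:216:3eff:fe87:ff20 dev lxdbr0

Please correct me if that is not what you meant.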


IPv4 is working with what `lxd init` sets up, which I believe is just
some basic iptables masquerade rules.  I am not using proxy ARP, and have
never had to (I am looking to move my existing KVM VMs to LXD containers).
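
For the record, I assume the NAT side is just the usual masquerade of the
bridge subnet, roughly equivalent to:

iptables -t nat -A POSTROUTING -s 10.195.87.0/24 ! -d 10.195.87.0/24 -j MASQUERADE

but I will double-check what is actually there with `iptables -t nat -S`.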




On Tue, Apr 26, 2016, at 03:45 AM, Wolfgang Bumiller wrote:
> Curious, the symptoms are almost consistent with when you're trying
> to do routing within a single subnet (which means NDP packets won't
> reach their destination and you need to either set up neighbor proxying
> with `ip neighbor` per-container or set up an NDP proxy daemon (ndppd)),
> yet your container did get successfully autoconfigured? Are you
> running a router advertiser on your host (radvd, dnsmasq, ...) or are
> the routes advertised by your provider? (If the latter is the case,
> is eth0 attached to lxdbr0 or are you really only routing?)
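
(If it comes to the ndppd route, my understanding is that its config would
be a small /etc/ndppd.conf along these lines, untested, just proxying the
provider /64 on eth0:

proxy eth0 {
    rule 2604:a880:0:1010::/64 {
        auto
    }
}

though I would try the per-address ip neighbor entries first.)
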
> 
> The host's `ip a` and `ip -6 r` output would be useful (ifconfig
> lacks bridge port information and instead contains lots of useless
> stuff).
> 
> (Obviously any relevant firewall configuration would also be useful)
> 
> You can also use `tcpdump` to try and track the NDP and ping packets,
> see which part fails and where.
> 
> How does your IPv4 setup compare to this, and do you use proxy_arp
> with IPv4?
> 
> > On April 25, 2016 at 2:30 PM Nick Falcone <nick at nfalcone.net> wrote:
> > 
> > 
> > root@test9001:~# ip -6 r
> > 2604:a880:0:1010::/64 dev eth0  proto kernel  metric 256  expires
> > 3434sec pref medium
> > fe80::/64 dev eth0  proto kernel  metric 256  pref medium
> > default via fe80::684e:dcff:feae:fd61 dev eth0  proto ra  metric 1024 
> > expires 1634sec hoplimit 64 pref medium
> > 
> > 
> > root@test9001:~# default via fe80::1 dev eth0  metric 1024  pref medium
> > 
> > 
> > After adding the route you suggested:
> > ip -6 route del default
> > ip -6 route add default via fe80::1 dev eth0
> > I still get:
> > From 2604:a880:0:1010:216:3eff:fe87:ff20 icmp_seq=15 Destination
> > unreachable: Address unreachable
> > 
> > On Mon, Apr 25, 2016, at 07:25 AM, Wojciech Arabczyk wrote:
> > > What are your route settings in the container?
> > > ip -6 route show
> > > 
> > > Have you tried adding the generic default route via:
> > > ip -6 route add default via fe80::1 dev eth0
> > > on the container itself?
> > > 
> > > On 25 April 2016 at 13:11, Nick Falcone <nick at nfalcone.net> wrote:
> > > > In my sysctl.conf I have:
> > > >
> > > > net.ipv4.ip_forward=1
> > > > net.ipv6.conf.all.forwarding=1
> > > >
> > > >
> > > > and just to double check
> > > >
> > > > root@lxdtest:~# sysctl net.ipv4.ip_forward
> > > > net.ipv4.ip_forward = 1
> > > > root@lxdtest:~# sysctl net.ipv6.conf.all.forwarding
> > > > net.ipv6.conf.all.forwarding = 1
> > > >
> > > > On Mon, Apr 25, 2016, at 03:44 AM, Wojciech Arabczyk wrote:
> > > >> Are you sure you have enabled IPv6 forwarding via sysctl?
> > > >>
> > > >> On 22 April 2016 at 18:10, Nick Falcone <nick at nfalcone.net> wrote:
> > > >> > Hello
> > > >> >
> > > >> > I have been banging my head up against a wall for a few days now trying
> > > >> > to get IPv6 to work across my bridged interface for my containers.
> > > >> >
> > > >> > I have tried different VPSes and dedicated servers, as well as Ubuntu
> > > >> > 14.04, 15.10, and 16.04, to get this working.  The latest test, which all
> > > >> > of this info is from, is Ubuntu 16.04 with its included version of LXD.
> > > >> >
> > > >> > First I install LXD and run lxd init, configuring the bridge like so:
> > > >> >
> > > >> > lxdbr0    Link encap:Ethernet  HWaddr fe:82:af:f0:5d:ce
> > > >> >           inet addr:10.195.87.1  Bcast:0.0.0.0  Mask:255.255.255.0
> > > >> >           inet6 addr: 2604:a880:0:1010::623:2/64 Scope:Global
> > > >> >           inet6 addr: fe80::40c6:84ff:fe18:22fb/64 Scope:Link
> > > >> >           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
> > > >> >           RX packets:294 errors:0 dropped:0 overruns:0 frame:0
> > > >> >           TX packets:18 errors:0 dropped:0 overruns:0 carrier:0
> > > >> >           collisions:0 txqueuelen:1000
> > > >> >           RX bytes:21612 (21.6 KB)  TX bytes:2127 (2.1 KB)
> > > >> >
> > > >> > This is my host information too
> > > >> >
> > > >> > eth0      Link encap:Ethernet  HWaddr 04:01:d4:50:c4:01
> > > >> >           inet addr:162.243.200.170  Bcast:162.243.200.255
> > > >> >           Mask:255.255.255.0
> > > >> >           inet6 addr: fe80::601:d4ff:fe50:c401/64 Scope:Link
> > > >> >           inet6 addr: 2604:a880:0:1010::623:1/64 Scope:Global
> > > >> >           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
> > > >> >           RX packets:76258 errors:0 dropped:0 overruns:0 frame:0
> > > >> >           TX packets:8187 errors:0 dropped:0 overruns:0 carrier:0
> > > >> >           collisions:0 txqueuelen:1000
> > > >> >           RX bytes:111074998 (111.0 MB)  TX bytes:1230729 (1.2 MB)
> > > >> >
> > > >> > I launch and enter the first container; it has this info:
> > > >> >
> > > >> > eth0      Link encap:Ethernet  HWaddr 00:16:3e:87:ff:20
> > > >> >           inet addr:10.195.87.69  Bcast:10.195.87.255
> > > >> >           Mask:255.255.255.0
> > > >> >           inet6 addr: 2604:a880:0:1010:216:3eff:fe87:ff20/64
> > > >> >           Scope:Global
> > > >> >           inet6 addr: fe80::216:3eff:fe87:ff20/64 Scope:Link
> > > >> >           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
> > > >> >           RX packets:20 errors:0 dropped:0 overruns:0 frame:0
> > > >> >           TX packets:294 errors:0 dropped:0 overruns:0 carrier:0
> > > >> >           collisions:0 txqueuelen:1000
> > > >> >           RX bytes:2175 (2.1 KB)  TX bytes:25728 (25.7 KB)
> > > >> >
> > > >> > So here I can see SLAAC is successful, but I cannot ping6
> > > >> > 2604:a880:0:1010::623:1 (the host's IPv6), and I cannot ping6 Google's
> > > >> > public DNS over IPv6 either.  I CAN successfully ping6
> > > >> > 2604:a880:0:1010::623:2, my bridge's public IPv6 address.
> > > >> >
> > > >> > Seems like a routing issue, so on the host I add:
> > > >> > ip -6 route add 2604:a880:0:1010:216:3eff:fe87:ff20 dev lxdbr0
> > > >> >
> > > >> >
> > > >> > Still not able to ping6 out.  As a side note, IPv4 works great.
> > > >> >
> > > >> > Am I missing something here? I cannot seem to find many docs on this
> > > >> > particular part.  I also tried the demo containers on
> > > >> > https://linuxcontainers.org/lxd/try-it/ but am unable to ping6 out on
> > > >> > those either; is that just a limitation of the demo?
> > > >> >
> > > >> > Thanks in advance for any help; I would really like to use LXD for a
> > > >> > project.  Also, I have not bothered to redact these real IPs: they belong
> > > >> > to a box used only for getting this working, which will then be destroyed.
> 

