
IPv6 for a Belgian hosting provider

Note from the IPv6 Council: this article was written by Frank before June 2011 and it does not reflect the current situation at Openminds as their own web sites and some of their customers have been IPv6-enabled since the World IPv6 Day of 2011 🙂

Our history with and interest in IPv6 go back quite a while. One of the Openminds co-founders wrote his master's thesis on IPv6 at a big mobile provider, about 10 years ago. But we decided that if we were going to provide IPv6, we would do it to the same quality standards as we do for IPv4. That meant being dual-homed, with native IPv6 from providers that can guarantee the same standards as for IPv4. No 100ms+ latencies that are common with a tunnel mesh, no “yeah, we broke it, but IPv6 is experimental, right” mentality.

That means we got our /32 block more than two years ago, but only connected our first IPv6 customers about a year ago. Today we announced that we have finished phase 2 of our IPv6 migration plan. At this stage, all of our network equipment in both our datarooms is fully dual-stacked, which means we can enable IPv6 for all VDS, dedicated server and colocation customers.

We’ve turned on IPv6 for quite a few customers already. We chose *not* to go for SLAAC, as we want to take the “route of least surprise”. Imagine a host that has (IPv4) firewall rules allowing only certain IP ranges to access a particular website or service. Enabling IPv6 just like that would punch a big hole in the firewall. Other examples are webapps that assume the remote client has an IPv4 address and fail when trying to insert an IPv6 address into a database. We do use ND/RA, as we see the benefits, but without the “auto-configuration” flag.
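That database failure mode is easy to demonstrate: an IPv4-era schema often uses a 15-character column (just wide enough for 255.255.255.255), which any IPv6 address overflows. A minimal sketch in Python; the function and column names here are illustrative, not from any real app:

```python
import ipaddress

def normalize_client_ip(raw):
    """Return the canonical text form of a client address, v4 or v6.

    Raises ValueError for garbage input.
    """
    return str(ipaddress.ip_address(raw))

# A hypothetical IPv4-era schema: VARCHAR(15) fits "255.255.255.255"...
IPV4_COLUMN_WIDTH = 15

v4 = normalize_client_ip("192.0.2.1")  # fits the column
v6 = normalize_client_ip("2A02:0D08:1001:0101:0000:0000:0000:0001")
# ...but even the compressed IPv6 form is wider than the column:
print(len(v6) > IPV4_COLUMN_WIDTH, v6)  # True 2a02:d08:1001:101::1
```

Widening such columns to 45 characters (the longest textual IPv6 form, including IPv4-mapped addresses) is one of the small migration chores that turns up once real IPv6 clients appear.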

It is absolutely right that we don’t have AAAA records on openminds.be at the moment. Enabling IPv6 for key internal services was planned for phase 3 of our migration path. I will try to enable it for our own site in time for World IPv6 Day, but can’t promise it yet (we still have to upgrade the kernel on the reverse proxy that sits in front of that server).

On the technical side, we’ve dual-stacked all our networking gear (a mix of Juniper, Cisco and HP) and kept the same topology for v6 as for v4 (so it’s a real dual stack), all native, no tunnels, except for an HE tunnel at the office to have something “outside our net” to test connectivity from. For rDNS, we use a PowerDNS pipe backend that auto-encodes ip6.arpa addresses, so our entire /32 has rDNS and we can override it for specific hosts if needed.
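A pipe backend is just a script that speaks PowerDNS’s line-based pipe protocol (ABI version 1) on stdin/stdout. As a sketch of the idea, the following Python answers PTR queries for any /128 under ip6.arpa with a synthetic hostname; the hostname pattern and the rdns.example.net suffix are placeholders for illustration, not our production naming:

```python
#!/usr/bin/env python3
"""Sketch of a PowerDNS pipe backend (pipe ABI version 1) that
synthesises a PTR record for every address under ip6.arpa."""
import sys

RDNS_SUFFIX = "rdns.example.net"  # placeholder domain

def ptr_for(qname):
    """Turn a 32-nibble ip6.arpa name into a synthetic hostname."""
    labels = qname.lower().rstrip(".").split(".")
    if len(labels) != 34 or labels[-2:] != ["ip6", "arpa"]:
        return None                       # not a full /128 reverse name
    nibbles = labels[:32][::-1]           # back into address order
    groups = ["".join(nibbles[i:i + 4]) for i in range(0, 32, 4)]
    return "-".join(groups) + "." + RDNS_SUFFIX

def main():
    # Handshake: PowerDNS sends "HELO\t1", we must answer OK.
    if not sys.stdin.readline().startswith("HELO"):
        sys.stdout.write("FAIL\n")
        sys.stdout.flush()
        return
    sys.stdout.write("OK\tip6.arpa synthesiser\n")
    sys.stdout.flush()
    for line in sys.stdin:
        parts = line.rstrip("\n").split("\t")
        # ABI v1 query line: Q <qname> <qclass> <qtype> <id> <remote-ip>
        if len(parts) >= 6 and parts[0] == "Q" and parts[3] in ("PTR", "ANY"):
            target = ptr_for(parts[1])
            if target:
                sys.stdout.write("DATA\t%s\tIN\tPTR\t3600\t%s\t%s\n"
                                 % (parts[1], parts[4], target))
        sys.stdout.write("END\n")
        sys.stdout.flush()

if __name__ == "__main__":
    main()
```

A real backend would also check an override table first (so specific hosts can get proper names) and fall through to the synthetic answer only when no override exists.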

For the Cisco (just pasting the IPv6-relevant config here):


interface Vlan101
ipv6 address 2A02:D08:1001:101::1/64
ipv6 nd ra-interval 3
ipv6 nd ra-lifetime 9
ipv6 nd prefix 2A02:D08:1001:101::/64 2592000 604800 no-autoconfig

This gives the switch the ::1 address in the subnet, sets short RA intervals, and announces the prefix, but sets the no-autoconfig flag in the packet, so clients won’t do SLAAC, as per our “route of least surprise” approach.
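For the curious: that no-autoconfig knob simply clears the A (autonomous) bit in the RA’s Prefix Information option (RFC 4861, section 4.6.2), while leaving the L (on-link) bit set. A sketch of the wire format in Python, using the lifetimes from the config above (the function name is made up for illustration):

```python
import ipaddress
import struct

def prefix_info_option(prefix, plen, valid, preferred,
                       on_link=True, autonomous=True):
    """Pack an RFC 4861 Prefix Information option (type 3, 32 bytes).

    The flags byte carries L (0x80, on-link) and A (0x40, SLAAC).
    """
    flags = (0x80 if on_link else 0) | (0x40 if autonomous else 0)
    return struct.pack("!BBBBIII16s",
                       3, 4,              # option type, length in 8-byte units
                       plen, flags,
                       valid, preferred,
                       0,                 # reserved
                       ipaddress.ip_address(prefix).packed)

# With autoconfig disabled, only the on-link bit survives in the flags byte:
opt = prefix_info_option("2a02:d08:1001:101::", 64, 2592000, 604800,
                         autonomous=False)
assert opt[3] == 0x80  # L set, A cleared: prefix is on-link, but no SLAAC
```

Hosts receiving this RA still learn the on-link prefix and the default router, but won’t derive their own addresses from it, which is exactly the behaviour we want.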

For the HP/H3C:


interface Vlan-interface102
ipv6 address 2A02:D08:1002:102::1/64
ipv6 nd ra interval 10 3
ipv6 nd ra prefix 2A02:D08:1002:102::/64 2592000 604800 no-autoconfig
undo ipv6 nd ra halt