Fortinet IPv6 SIT Tunnel Support – Preferring Tunnel Routes Over Router Advertisements

The Problem:

Background

Back at the beginning of the year, before the Covid-19 pandemic had everyone in the current work-from-home mode, I started scrubbing my home IPv6 connectivity in preparation for finalizing various IPv6 presentations that were planned for the year.

One seemingly minor change (minor at the time, in this context) kicked off the preparation. I decided to upgrade my home Fortinet firewall from a FortiOS 5.6 build to 6.2.3, which was the current release at the time of the change. It’s possible that this behavior was occurring prior to the upgrade as well. Initially, TimeWarner/Spectrum Cable was not allocating IPv6 addresses for residential users. After performing the upgrade, I also enabled IPv6 autoconfiguration on the firewall’s WAN1 interface. I was curious to see whether Spectrum had started providing allocations, and if so, what size could be expected.
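For reference, turning on address autoconfiguration for the upstream interface is a short CLI change. This is just a minimal sketch assuming the upstream interface is wan1; the exact options under the ipv6 sub-tree vary somewhat between FortiOS builds.

    config system interface
        edit "wan1"
            config ipv6
                # accept router advertisements / SLAAC from the upstream
                set autoconf enable
            end
        next
    end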

Not long after the upgrade, I noticed that I had lost all IPv6 connectivity. Most of the configuration needed to get a Fortigate talking to Hurricane Electric has to be done from the command line, as these options are not mainstream enough to be available in the WebUI. Because of that, I anticipated that an upgrade could break the connectivity. From my quick checking, though, this was not the case.

Troubleshooting

From the Fortigate, the SIT tunnel to my Hurricane Electric tunnel broker was verified as active. I could ping across the tunnel to the other side. I double-checked that the static routing was still in place for the SIT tunnel, and it showed up both in the static route configuration and in the routing monitor. So, why couldn’t I ping Hurricane Electric’s DNS servers from the Fortigate, and why could none of my downstream devices get to anything on Hurricane Electric’s network or beyond (via IPv6, anyway)?
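For anyone following along, these are the kinds of checks I mean. The commands below are standard FortiOS CLI commands, but the far-side tunnel address is a placeholder, and your tunnel interface will have its own name.

    # Ping the Hurricane Electric side of the tunnel
    execute ping6 <far-side tunnel address>

    # Show the configured IPv6 static routes
    show router static6

    # Show the IPv6 routing table (what the Routing Monitor presents)
    get router info6 routing-table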

After a little over a month working with Fortinet’s TAC on this, we finally have a theory on what’s happening and why. Fortinet TAC is still looking into whether this is a bug or intended (but misunderstood, by me anyway) behavior.

From the Fortigate CLI, using “diagnose ipv6 route list”, we can see that the Fortigate is learning a default route from Spectrum as part of the dynamic address autoconfiguration. This learned route is ingested with a much higher priority value, 1024, which is the same priority as the static route configured on the Fortigate for reaching back into my home network. Oddly, this default route does NOT show up in the Fortigate’s Routing Monitor, while the configured static routes do, as do the link-local IPv6 routes derived from the interface configurations on the internal and WAN1 interfaces.
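If you want to see the same discrepancy on your own Fortigate, compare the kernel-level view against the routing-table view. I’m only listing the commands here, since the output format differs between FortiOS releases:

    # Kernel-level IPv6 route entries, including the RA-learned default route
    diagnose ipv6 route list

    # Routing-table view; the RA-learned default route was absent here in my case
    get router info6 routing-table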

Identification

It wasn’t until I started monitoring the IPv6 implicit deny firewall rule that I got a clue to the issue. I saw the route selection for my outbound traffic attempting to take the Spectrum Cable route instead of my static default route pointing to the Hurricane Electric tunnel. Now I at least knew why it was taking the path that it was taking. I still needed to figure out how to get the traffic behaving as expected.

Solutions:

Option 1

The first suggestion from Fortinet TAC was to use an IPv6 policy route. The policy specifies that traffic coming into my internal interface, destined for anywhere (::/0), be sent to the Hurricane Electric tunnel interface. This works! The downside is that for every internal interface on the Fortigate, I need a corresponding policy route. It also makes things more complicated if I decide that some low-latency IPv6 traffic needs to be shunted directly out the Spectrum Cable interface via NAT66, rather than ride in the overlay tunnel.
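Here’s roughly what that policy route looks like from the CLI. The interface names are placeholders: I’m assuming the LAN interface is "internal" and the Hurricane Electric SIT tunnel interface is "HE-tunnel", so substitute your own names.

    config router policy6
        edit 1
            set input-device "internal"
            # match all IPv6 destinations and force them out the tunnel
            set dst ::/0
            set output-device "HE-tunnel"
        next
    end

Each additional internal interface needs its own entry (edit 2, edit 3, and so on), which is the scaling downside mentioned above.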

Option 2

The other option was to modify the IPv6 default route slightly; I’m testing this now. Instead of using the standard “::/0” notation to signify that all bits match this mask, we changed the destination to “::/1”. This makes the static route more specific than the default route being learned from Spectrum, so this route takes precedence. While it’s possible that some traffic won’t be caught by this mask, it seems to work for now. The long-term use of this option would be suspect, though, due to the possibility of inconsistent connectivity when prefixes start to fall outside of that range.
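The change itself is a one-line tweak to the static route. Again, "HE-tunnel" is a placeholder for whatever the SIT tunnel interface is called on your firewall; the trick works for now because all currently allocated global unicast space (2000::/3) begins with a zero bit and therefore falls inside ::/1.

    config router static6
        edit 1
            # ::/1 is more specific than the ::/0 learned via RA from Spectrum
            set dst ::/1
            set device "HE-tunnel"
        next
    end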

My long-term answer, for now, is going to be the policy route. I’ll update this post once I have the official resolution or disposition from Fortinet TAC.
