Fortinet IPv6 SIT Tunnel Support – Preferring Tunnel Routes Over Router Advertisements

The Problem:

Background

Back at the beginning of the year, before the Covid-19 pandemic had everyone in the current work-from-home mode, I started scrubbing my home IPv6 connectivity in preparation for finalizing various IPv6 presentations that were planned for the year.

One seemingly minor change (minor at the time, in this context) kicked off the preparation. I decided to upgrade my home Fortinet firewall from a FortiOS 5.6 build to 6.2.3, which was the current release at the time of the change. It’s possible that this behavior was occurring prior to the upgrade as well. Initially, TimeWarner/Spectrum Cable was not allocating IPv6 addresses for residential users. After performing the upgrade, I also enabled IPv6 autoconfiguration on the firewall’s WAN1 interface. I was curious to see if Spectrum had started providing allocations, and if so, what size could be expected.
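For context, enabling autoconfiguration is itself a CLI change. The following is a rough sketch of what that looks like (the interface name and exact options are illustrative assumptions, not a copy of my running config):

config system interface
edit "wan1"
config ipv6
set autoconf enable
end
next
end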

Not long after the upgrade, I noticed that I had lost all IPv6 connectivity. Most of the configuration needed to get a Fortigate talking to Hurricane Electric has to be done from the command line, as these options were not mainstream enough to be available in the WebUI. Because of that, I anticipated that an upgrade could break the connectivity. From my quick checking, though, this was not the case.
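For anyone unfamiliar with the setup, the Hurricane Electric tunnel itself is a SIT (IPv6-in-IPv4) tunnel defined from the CLI, with a static default route pointing out the tunnel interface. A minimal sketch, with placeholder names and documentation addresses standing in for the real endpoints:

config system sit-tunnel
edit "HE-tunnel"
set source 203.0.113.10
set destination 198.51.100.1
set ip6 2001:db8:1f0a::2/64
set interface "wan1"
next
end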

Troubleshooting

From the Fortigate, the SIT tunnel to my Hurricane Electric tunnel broker was verified as active. I could ping across the tunnel to the other side. I double checked that the static routing was still in place for the SIT tunnel, and it showed up in both the static route configuration as well as in the routing monitor. So, why couldn’t I ping Hurricane Electric’s DNS servers from the Fortigate, and why could none of my downstream devices get to anything on Hurricane Electric’s network or beyond (via IPv6 anyway)?

After a little over a month working with Fortinet’s TAC on this, we finally have a theory on what’s happening and why. Fortinet TAC is still looking into whether this is a bug or intended (but misunderstood, by me anyway) behavior.

From the Fortigate CLI, “diagnose ipv6 route list” shows that the Fortigate is learning a default route from Spectrum as part of the dynamic address autoconfiguration. This learned route is ingested with a much higher priority value (1024), yet with the same priority that the static route configured on the Fortigate for reaching back into my home network has. Oddly, this default route does NOT show up in the Fortigate’s Routing Monitor, while the configured static routes do, as do the link-local IPv6 routes derived from the interface configurations on the internal and WAN1 interfaces.
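For anyone wanting to compare the two views on their own Fortigate, these are the commands involved (no output shown here, since yours will differ). The second is roughly the CLI equivalent of what the Routing Monitor displays:

diagnose ipv6 route list
get router info6 routing-table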

Identification

It wasn’t until I started monitoring the IPv6 implicit deny firewall rule that I got a clue to the issue. I saw the route selection for my outbound traffic attempting to take the Spectrum Cable route instead of my static default route pointing to the Hurricane Electric tunnel. Now I at least knew why the traffic was taking the path that it was. I still needed to figure out how to get it behaving as expected.

Solutions:

Option 1

The first suggestion from Fortinet TAC was to use an IPv6 policy route. The policy was set to specify that traffic coming into my internal interface destined for any destination (::/0) be sent to the Hurricane Electric tunnel interface. This works! The downside is that for every internal interface on the Fortigate, I need a corresponding policy route. It also makes it more complicated if I decide that some low-latency IPv6 traffic needs to be shunted directly out the Spectrum Cable interface via NAT66, rather than ride in the overlay tunnel.
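For reference, the policy route looks roughly like the following. The interface names are placeholders, and the sketch assumes a single internal interface (each additional internal interface would need its own entry):

config router policy6
edit 1
set input-device "internal"
set dst ::/0
set output-device "HE-tunnel"
next
end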

Option 2

The other option was to modify the IPv6 default route slightly. I’m currently testing this now. Instead of using the standard “::/0” notation to signify that all bits match this mask, we changed the destination to “::/1”. This makes the static route more specific than the default route being learned from Spectrum, so this route takes precedence. While it’s possible that some traffic won’t be caught with this mask, it seems to work for now. The long-term use of this option would be suspect, though, due to the possibility of inconsistent connectivity when prefixes start to fall outside of that range.
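In configuration terms, the change is just to the destination of the existing static route. A sketch with a placeholder tunnel interface name:

config router static6
edit 1
set dst ::/1
set device "HE-tunnel"
next
end

Since the global unicast space currently being allocated (2000::/3) falls inside ::/1, this covers everything that matters today; truly covering the whole address space would also require a matching 8000::/1 route.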

My long-term answer right now is going to be the policy route. I’ll update this post once I have the official resolution or disposition from Fortinet TAC.


IPv6Buzz Podcast Appearance

As a byproduct of my February IPv6 presentation at the Wireless LAN Professionals Conference (WLPC) 2020 in Phoenix, AZ, I was asked to take part in a follow-on conversation with the Packet Pushers Podcast Series, IPv6 Buzz.

I’ve known of Tom Coffeen since my time at Infoblox, and have followed Ed Horley on Twitter for about as long. When it was suggested that I join the podcast to talk about the WLPC presentation and my experiences with IPv6 in the wireless network edge, I was grateful to see the calendars align!


WLPC IPv6 Update Presentation – What IPv6 Means for Wireless Engineers

Hard to believe it has been four years since I first gave an IPv6 for Wireless Engineers presentation at the Wireless LAN Professionals Conference (WLPC) in Dallas, TX. Fortunately, there was interest from the attendees not just in my discussion, but also in an IPv6 presentation from John Kilpatrick (@Meatwad650).

Some things have changed since my first presentation on the subject, but some things have stayed the same as well. Primarily, as it relates to address auto-configuration … the industry has settled on Android’s insistence on not using DHCPv6, instead relying on SLAAC with extensions for deriving the typical network config. Other clients might support other capabilities, however, so it’s up to network operators to decide how in-sync … or not … they want the different methods to be.

This talk was derived from my Atmosphere 2019 and Atmosphere 2020 deck. Originally intended to be a 75-minute discussion, it was shortened to remove the vendor-specific details and compressed into a 30-minute time frame.

Enjoy, and please don’t hesitate to connect with me here, on social media, or via other means with any questions or comments.


Aruba ACMPv8 Certified!

May has been a good month for catching up on my Aruba certifications!

This exam should have been done at the end of May, but scheduling and customer work resulted in it being pushed off until June 1st. Regardless, I’m glad to have this one done and on the books!

While I feel like I previously had the knowledge to earn the ACMP certification on prior versions of AOS (ArubaOS), I never took the time to sit the exam and obtain the certification. With coming back to Aruba, and more importantly … some of the fundamental changes that have taken place with AOS 8, it was important to me to actually take this one to completion. Now I just need to wait for the ACMX v8 exam to be released, and in the meantime study for that while working on my CWNP certifications!


CWNA Learning – RF Math

As mentioned earlier (CWNA Re-certified for 2018), I finally got around to re-certifying the CWNA certification that I originally obtained in 2007.

While I consider the CWNA to be the one essential certification for anyone who’s interested in getting into wireless network engineering, the certification is generally viewed as anything but a beginner cert … primarily because of the range of topics covered in this exam. Unlike the more specialized certifications like CWAP, CWSP, and CWDP, the CWNA covers topics from all three of these specializations … albeit at a higher level.

I originally sat and passed the CWNA exam back in 2007 with the original PW0-100 exam. I started down the path of re-certifying in 2013 after letting my initial CWNA status lapse (the certification is valid for 3 years from the date of the successful exam completion), this time with the PW0-106 exam. Last week, I finally sat my second CWNA exam, this time the PW0-107, and passed.

One response I received on LinkedIn after posting about the completion asked what study methods I use when going for technical certifications. That question brought up some specific methodologies, but it also shed some light for me personally on the topic I usually have the most resistance to, but this time finally had a better grasp on … RF Math.

You see, for most Associate and Professional level certifications, my typical learning method usually has me sitting down with an official study guide … working my way through the text and focusing on any practice questions available to make sure I comprehend the content well enough. In the case of the CWNA, my go-to book is the study guide published by Sybex … expertly written by David Coleman and David Westcott. Most anyone who’s read through that book knows (or at least has heard of) the dreaded chapter 3 … “Radio Frequency Components, Measurements, and Mathematics”. Kind of a mouthful right there, but this is where the beginner wireless engineer gets their first exposure to dB math, and to breaking the mindset that only big linear deltas in measurements are meaningful … while smaller changes are insignificant. As Matthew Gast said at the first WLPC conference in Austin … ask your manager for a 2 dB pay raise and see how significant logarithmic math can be.

So, where am I going with this? Sometime between 2007 when I originally got the CWNA certification and now … I picked up photography as a hobby. Wait, what the heck does photography have to do with wireless, you ask? Simply that both get into physics … one dealing with RF energy, and the other dealing with light energy, and the implications of the Inverse Square Law.

With wireless, we often talk about the rule of 10s and 3s … basically, every time a signal changes (increases or decreases) by 3 decibels, the signal level has actually doubled or halved in strength. What’s more, a change of 10 dB is an order of magnitude change: take the original signal level and multiply or divide by 10 for the new level. So remember that 2 dB pay raise … yeah, it’s a lot more than just the standard 2% some people got.

Another quick calculation, and the one that really brought my understanding together (and the reason for this post), is the 6 dB rule: a change of 6 dB effectively doubles or halves the distance of the usable signal.
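To put some numbers to those rules, here’s a quick worked example starting from an arbitrary 100 mW signal:

100 mW + 3 dB = 200 mW (double)
100 mW - 3 dB = 50 mW (half)
100 mW + 10 dB = 1,000 mW (10x)
100 mW - 6 dB = 25 mW (half, then half again … one quarter)

That last line is the 6 dB rule at work: the same two 3 dB halvings you get by doubling the distance from the source.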

In photography, controlling light is crucial to capturing great images in camera. (I’ll ignore all the possibilities that come through photo editing, since my interest in photography is to get me out from behind the monitor and out in the world … not to spend hours behind the monitor editing images to make a great image out of a mediocre capture.) So if I have this battery-powered light, how do I make the light brighter when it’s already turned up as bright as the unit can go? Simply … by moving it to half the distance from the subject. If the light was 2 ft from the subject, I can move it to 1 ft from the subject and have more light available at my disposal without using any more battery power than I already was. Similarly … if I’m too lazy to adjust settings on my light, I can move that light from 2 ft away from the subject to 4 ft away, and I’ll have 1/4th the light available with the same settings.

And this is where it dawned on me. The typical question when the Inverse Square Law is discussed in photography is “why don’t I have half/double the light when I move the light half/double the distance to the subject?” Remember that rule of 3s? Every 3 dB is a half/double in power. But if doubling the distance means I cut the power (either light or RF) by 6 dB … that’s not a change of a half … that’s a change of a half and then another half, or quarter power.

So what does this all really mean? For me, the rule of 6 dB is good for understanding coverage as it relates to distance. If you add 6 dB to the signal level, the effective distance is doubled, while reducing power by 6 dB cuts the distance in half. If those distance changes are relative to a client position and I move the AP to twice the distance it was (aka, I move it from 2 ft away to 4 ft away), I’m now receiving 1/4th the signal level that I was before.

The rule of 3 dB explains why that signal level change is what it is. If lowering the signal by 6 dB is the result of doubling the distance from my client to the AP, that’s two separate 3 dB changes.

Hopefully that example with light helps. If all I’ve done is further confused the subject (hopefully not), leave me a note in the comments and we’ll discuss it further to either make sure my understanding is right or to get me corrected.


CWNA Re-certified for 2018!

Just a quick note that I should have posted last week …

For the second time since 2007, I’ve completed the certification process for the CWNA! I’ll have more notes posted with my thoughts from the current CWNA, but I’m also making the commitment to go for the CWNE and to share that knowledge along the way. I’m still on track for the Aruba certifications as well, so hopefully lots of good content (or at least good content ideas) in the near future. 🙂


A New Year, New Perspectives

It wasn’t that long ago (just January 22nd, in my post here) that I was updating anyone still following my blog that I had moved from Skyfii over to Fortinet in the middle of 2016. I started blogging more about my experiences with the Fortinet products, which turned into more posts about switching than wireless, unfortunately, but things have once again changed.

As of March 12th, I am back with Aruba, a Hewlett Packard Enterprise company, working as a Principal Network Engineer on our Aruba Customer Engineering (ACE) team. As part of this move, I’ve got some recertification to do and some new certs to obtain. I’ll be using my blog here to keep notes along the way, plus share things of interest that I learn.

My first step with returning to Aruba is to get elbow deep in AOS8, which should give me plenty of topics to write about, along with the ACMA/ACMP/ACMX and CWNP certifications that are waiting for me. 🙂


Fortinet FortiSwitch – Firmware 3.6.5 Released

Just a short note: FortiSwitch firmware version 3.6.5 was released yesterday. The release notes can be found on Fortinet’s Docs website.

While there are no new features added in this firmware release, there are a few spanning-tree and MCLAG fixes that should make this the current “go to” release for switches. It does not appear that 3.6.5 has been released for the latest E series switches yet.

Slightly unrelated, when looking for the image downloads for 3.6.5, I did notice that firmware 3.6.4 has been added to the support site for the E series switches.

Happy switching!


Fortinet FortiSwitch – Customize Your Switch View

Integrated FortiSwitch Mode

As a follow-up to my last post on setting QoS profiles for FortiSwitch, I stumbled across a useful way to apply those profiles to individual ports.

WebUI

As with most networking products, there are at least two ways to do just about any task. While my last article focused on the CLI method, applying policies to individual ports can also be done using the WebUI. Why is this helpful? The biggest advantage of the WebUI is speed: multiple ports can be selected and a given action applied to all of them at once.

How to Do It

First, add the QoS Policy option to the displayed columns. You’ll want to be in the “FortiSwitch Ports” view, so navigate to “WiFi & Switch Controller -> FortiSwitch Ports”. Right click on any of the column names to bring up a list of selected and available columns to display. The default view will show things like Description, Native VLAN, Allowed VLANs, Security Policy, Device Information, and POE Status. I find it helpful to add the LLDP Profile and QoS Profile to this view.

With the column(s) added, the port or ports to configure can be selected. A range of ports can be selected by first clicking the top port, then shift-clicking the bottom port in the range to highlight everything in between. Or, use ctrl/cmd-click (for Windows or Mac users, respectively) to select multiple non-adjacent ports. From there, select the QoS Policy name to pop up a window of all available policies, and make your selection.

Notes

It should be noted that this WebUI method only works once the QoS (or LLDP) profiles have been created. To add or change the profiles, the CLI is still the way to go. Also note, there are both ingress and egress settings when QoS comes into play, and this profile setting only sets the egress queuing policy. (See more information about the various QoS policies in last week’s post: Fortinet FortiSwitch QoS Primer.) Setting the ingress mapping policies (802.1p and/or DSCP mapping) still needs to be done via the CLI as well.
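For completeness, here’s roughly what that CLI step looks like in integrated mode. This is a sketch that assumes the same trust-dot1p-map and trust-ip-dscp-map port options from the standalone examples in the QoS Primer carry over into the managed-switch context; the switch ID, port, and profile names are placeholders:

config switch-controller managed-switch
edit "S224DF3X12345678"
config ports
edit "port1"
set trust-dot1p-map "custom-dot1p"
set trust-ip-dscp-map "custom-dscp"
next
end
next
end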

Comments?

As always, I welcome any feedback in the form of questions or comments. If there are specific topics you’d like to see covered, let me know as well! I do expect this to migrate back towards a predominantly wireless-focused blog … but first, I’m sensing a spanning-tree post coming…. 🙂


Fortinet FortiSwitch – QOS Primer

Quality of Service – QOS

Overview

I originally planned to have my first few Fortinet posts be wireless specific, but since getting questions about migrating QoS configurations from various 3rd-party manufacturers into FortiOS syntax … this seemed like a good place to start documenting this stuff for myself.

As background, Fortinet’s FortiSwitch has two primary modes that it can operate under: standalone, or integrated into the switch-controller built into Fortinet’s Fortigate firewall products. The hierarchy used to configure QoS is basically the same across both platforms, with the primary difference being that profiles can be shared across multiple switches when operating in Integrated mode, vs each switch needing to have profiles created locally when running in standalone mode. But before I get into those details, how ’bout the basics on what QoS profiles are available?

When working with QoS, there are a couple of criteria that the switch will use to make its prioritization decisions. These criteria fall into one of two buckets based on the direction of traffic flow: ingress or egress.

On ingress into the switch, the switch will inspect the traffic to determine how it should be prioritized. For most switches, including FortiSwitch, the switch can look at information included in the layer 2 header, referred to as the 802.1p priority code point. This 3-bit field is part of the overall 802.1Q tag that can be added to a frame, which also carries VLAN information for tagged trunk links. Similarly, FortiSwitch can look at the IP header to determine the Differentiated Services Code Point (DSCP). Some devices may mark one or the other, or both, depending on whether they operate at layer 2 only, layer 3, or a mix of both.

Probably not surprisingly, FortiSwitch provides two profiles to be used on ingress: the dot1p-map and the ip-dscp-map, for mapping 802.1p and DSCP priority bits into the appropriate queues, respectively. Because these are configured as profiles, different profiles can be applied to different ports; so if we find a device out on the network that is incorrectly mixing 802.1p and DSCP, profiles can be created to handle that mix correctly for the ports those devices plug into.

Implementing queueing consistently across an enterprise can be interesting, as different devices or manufacturers may support different numbers of queues. For Fortinet, 8 queues are supported. For some of the integrations with typical Cisco implementations I’ve seen, only four queues are used.

802.1p Queue Mapping

A typical 802.1p mapping policy might look as follows:

config switch qos dot1p-map
edit "custom-dot1p"
set priority-0 queue-4
set priority-1 queue-4
set priority-2 queue-3
set priority-3 queue-2
set priority-4 queue-3
set priority-5 queue-1
set priority-6 queue-2
set priority-7 queue-2
next
end

With the 802.1p priority levels, this map is effectively mapping out the following:
– PCP/Priority values 0 and 1, Best Effort (BE) and Background (BK), map to queue 4
– PCP/Priority values 2 and 4, Excellent Effort (EE) and Video (VI) map to queue 3
– PCP/Priority values 3, 6, and 7, Critical Applications (CA), Internetwork Control (IC), and Network Control (NC) map to queue 2
– PCP/Priority value 5, Voice (VO) maps to queue 1

We’ll talk more about these queues in a moment.

DiffServ Queue Mapping

As I mentioned above, devices like IP phones might mark QoS settings using 802.1p as above, or may choose to mark via DSCP instead (or in addition). A typical DSCP mapping policy might look as follows:

config switch qos ip-dscp-map
edit "custom-dscp"
config map
edit "1"
set cos-queue 1
set value 46
next
edit "2"
set cos-queue 2
set value 24,26,48,56
next
edit "5"
set cos-queue 3
set value 34
next
end
next
end

With typical DSCP values, this map is mapping out the following:
– DSCP value 46, Expedited Forwarding (EF), maps to queue 1
– DSCP values 24, 26, 48, and 56 (CS3 Signalling, Assured Forwarding 31 (AF31), CS6 Network Control, and CS7) map to queue 2
– DSCP value 34, Assured Forwarding 41 (AF41), maps to queue 3

Queue Policy

There’s a trend starting to form. High-priority traffic, either marked with DSCP EF or an 802.1p VO priority, goes to queue 1. While the DSCP marking of Expedited Forwarding isn’t as obvious as marking voice traffic as VO (in 802.1p), the switch is grabbing low-latency traffic and putting it into queue 1. Queue 2 is then grabbing network control traffic: things like routing protocols and signalling protocols. Queue 3 is next up, primarily handling video traffic, whether that be marked with PCP VI (for Video) or DSCP 34 for AF41. Finally, queue 4 is picking up background and best effort traffic. Anything not captured here stays in queue 0.

If 802.1p and DSCP are used at ingress for determining what queue to sort traffic into, the queueing policy itself is used on egress of the switch to determine how packets are ordered when leaving the switch. If no configuration is done on uplink/outbound ports, the default policy is applied. That policy looks like this:

config switch qos qos-policy
edit "default"
config cos-queue
edit “queue-0”
next
edit “queue-1”
next
edit “queue-2”
next
edit “queue-3”
next
edit “queue-4”
next
edit “queue-5”
next
edit “queue-6”
next
edit “queue-7”
next
end
set schedule round-robin
next
end

Basically, there is no difference in how the queues get treated. Each queue gets serviced in a round-robin order.

In order to provide priority to the different traffic classes, a queuing profile like the following would be preferred:

config switch qos qos-policy
edit "voice_policy"
config cos-queue
edit “queue-0”
next
edit “queue-1”
set weight 0
next
edit “queue-2”
set weight 6
next
edit “queue-3”
set weight 37
next
edit “queue-4”
set weight 12
next
edit “queue-5”
next
edit “queue-6”
next
edit “queue-7”
next
end
set schedule weighted
next
end

Applying QOS Policy to Ports

Once the queue policy has been configured, the last step is to apply this policy to the egress ports. With QoS, the point is to affect how queues are serviced when traffic starts to queue on the ports. If ports are operating under capacity, there shouldn’t be traffic queueing, so the queueing policy we created will typically be applied to uplink interfaces where traffic is aggregated. Conversely, the switch needs to decide where to queue traffic when it ingresses the switch, so the mapping profiles for 802.1p and DSCP are applied to the edge ports where the priority traffic is generated.

For setting the queueing policy for trunk ports, the following example would apply:

config ports
edit "port20"
set qos-policy "voice_policy"
next
end

To add the queue mapping profiles to edge ports, the following example would apply:

config ports
edit "port1"
set trust-dot1p-map "dot1p-map"
set trust-ip-dscp-map "ip-dscp-map"
next
edit "port2"
set trust-dot1p-map "dot1p-map"
set trust-ip-dscp-map "ip-dscp-map"
next
end

If different mapping profiles were created, either because devices weren’t using both 802.1p and DSCP … or if they were using them differently, then those different profiles could be added per port as needed.

Conclusion

The last remaining point to make relates to whether the switch is operating in standalone mode or integrated into the Fortigate’s switch controller.

With standalone switches, the config context is not shared across switches, so profiles and port configuration are done per switch. The profile examples above can be applied cookie-cutter to each switch, and then assigned to ports as part of the standard config rollout.

With integrated switches, all of the configuration is applied at the Fortigate itself. This provides easy reuse and standardization of the mapping and queue profiles across multiple switches for consistent operation. The difference then is in applying the profiles to individual switch ports.

With switches in integrated, or FortiLink, mode, the “config switch-controller managed-switch” context is used to identify switch-specific configuration. Because the profiles are leveraged across all the switches, this context is not needed for creating the profiles themselves … it only comes into play when applying specific settings to individual ports.

Switches are listed on the Fortigate by their switch ID. Effectively, the switch ID is an abbreviation of the switch model number, followed by the serial number. To apply the port specific settings on the Fortigate then, the following example shows the logic:

config switch-controller managed-switch
edit "S224DF3X12345678"
config ports
edit "port1"
next
end
next
edit "FS108D3W12345678"
config ports
edit "port1"
next
end
next
end

Similarly, when applying the configuration to multiple switches, whether standalone or integrated, it’s best to ensure that all switches are capable of performing QoS functionality. As of this writing, the D series switches are the prevalent models, while the E series switches are just starting to release. With that, QoS functionality is supported across the FS-2xxD, FS-4xxD, and FS-5xxD switches, as well as on the new FS-2xxE series access switches. It should be noted that the recently EOO’ed FS-1xxD and the newly available FS-1xxE do not support QoS-based priority queuing.

Comments?

While I wrote this article more to memorialize what I learned about configuring QOS on FortiSwitch equipment, I welcome any feedback in the form of questions or comments. I plan to review similar configurations for other enterprise switching gear that I have in my lab, but first … I need to get back to more wireless! 🙂

 
