
What you need to know about NetApp’s 40GbE options

With the introduction of the new NetApp platforms back in September 2016 came 40GbE as well as 32Gb Fibre Channel connectivity.

I had my first taste of 40GbE on the NetApp side back in January when I got to install the first All Flash FAS A700 in Canada. The client requested a mix of 40GbE and 16Gb FC with some of the 40GbE being broken out into 4 × 10GbE interfaces and some being used natively.

NetApp is deploying two flavours of 40GbE cards: the X1144A for the AFF A300, AFF A700s and FAS8200, and the X91440A for the AFF A700 and FAS9000 storage systems. At first glance, you might be tempted to assume that those are the same PCIe card, since the part numbers are very similar (the latter just being in some sort of carrier to satisfy the I/O module requirement for the blade-style chassis that is home to the A700 and FAS9000). Upon further inspection, however, the two are not exactly equal.

The ports on most PCIe cards and onboard interfaces are deployed in pairs, with one shared application-specific integrated circuit (ASIC) on the board behind the physical ports. On the X1144A, both external ports share one ASIC with an available combined bandwidth of 40Gb/s, whereas the X91440A has two ASICs, each with two ports; one port per ASIC is internal and not connected to anything, giving you 40Gb/s per external port.

The ASIC (or controller) in question is the Intel XL710. What’s important about this is that both external ports on an X91440A can be broken out to 4 × 10GbE interfaces for a total of eight, or one can remain at 40GbE while the other is broken out. On the X1144A, however, you can either connect both ports to your switch using 40GbE connections, or you can break out port A to 4 × 10GbE, in which case port B is disabled. According to Intel, if you connect both ports via 40GbE, “The total throughput supported by the 710 series is 40 Gb/s, even when connected via two 40 Gb/s connections.”
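If it helps to see the permutations side by side, here is the same information as a small Python sketch; the structure and wording are just my own shorthand for the options above, not anything NetApp publishes:

# Purely illustrative summary of the breakout options described above;
# the labels are my own shorthand, not official NetApp terminology.
breakout_options = {
    "X1144A (AFF A300 / AFF A700s / FAS8200)": [
        "both ports at 40GbE (one shared XL710, 40Gb/s combined)",
        "port A broken out to 4 x 10GbE, port B disabled",
    ],
    "X91440A (AFF A700 / FAS9000)": [
        "both ports at 40GbE (one XL710 per external port, 40Gb/s each)",
        "one port at 40GbE, the other broken out to 4 x 10GbE",
        "both ports broken out, for 8 x 10GbE in total",
    ],
}

for card, options in breakout_options.items():
    print(card)
    for option in options:
        print(f"  - {option}")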

Now before we get all up in arms about this, let’s really get into the weeds here. Both the FAS8200/FAS9000 and the AFF A300/A700 use PCIe 3.0. Each PCIe 3.0 lane can carry 8 gigatransfers per second (GT/s); for the purposes of this post, that is close enough to 8Gb/s. The FAS8200/AFF A300 has an Intel D-1587 CPU with a maximum of eight lanes per slot, so roughly 64Gb/s of throughput, whereas the FAS9000/AFF A700 has an Intel E5-2697 with a maximum of 16 lanes per I/O slot, which gives it about 128Gb/s of throughput. So even if NetApp included a network interface card for the A300/FAS8200 with two XL710s on it, the PCIe slot it’s connected to couldn’t provide 80Gb/s of throughput, whereas the I/O modules in the A700/FAS9000 can.
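Here is that arithmetic as a deliberately rough Python sketch; it treats 8 GT/s as 8Gb/s and ignores PCIe encoding and protocol overhead, so the numbers are ballpark figures only:

# Back-of-the-envelope PCIe 3.0 throughput: treat each lane's 8 GT/s
# as roughly 8Gb/s and ignore encoding/protocol overhead.
GBPS_PER_LANE = 8

def slot_throughput_gbps(lanes: int) -> int:
    """Approximate bandwidth of a PCIe 3.0 slot with the given lane count."""
    return lanes * GBPS_PER_LANE

print(slot_throughput_gbps(8))   # FAS8200/AFF A300, x8 slot:  64Gb/s, short of 2 x 40GbE
print(slot_throughput_gbps(16))  # FAS9000/AFF A700, x16 slot: 128Gb/s, room for 2 x 40GbE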

Say you want to change between 40GbE and 10GbE. Unlike modifying UTA2 profiles (as explained here), with the XL710, you need to get into maintenance mode first and use the nicadmin command. Here’s an example:

sysconfig output before:

slot 1: 40 Gigabit Ethernet Controller XL710 QSFP+
                 e1a MAC Address:    00:a0:98:c5:b2:fb (auto-40g_cr4-fd-up)
                 e1e MAC Address:    00:a0:98:c5:b2:ff (auto-unknown-down)

At this point I already had the breakout cable installed. That’s why the second link shows as down.

Conversion example:

*> nicadmin
nicadmin convert -m { 40G | 10G } <port-name>

*> nicadmin convert -m 10g e1e
Converting e1e 40G port to four 10G ports
Halt, install/change the cable, and then power-cycle the node for
the conversion to take effect.  Depending on the hardware model,
the SP (Service Processor) or BMC (Baseboard Management Controller)
can be used to power-cycle the node.

sysconfig output after:

slot 1: 40 Gigabit Ethernet Controller XL710 QSFP+
                 e1a MAC Address:    00:a0:98:c5:b2:fb (auto-40g_cr4-fd-up)
                 e1e MAC Address:    00:a0:98:c5:b2:ff (auto-10g_twinax-fd-up)
                 e1f MAC Address:    00:a0:98:c5:b3:00 (auto-10g_twinax-fd-up)
                 e1g MAC Address:    00:a0:98:c5:b3:01 (auto-10g_twinax-fd-up)
                 e1h MAC Address:    00:a0:98:c5:b3:02 (auto-10g_twinax-fd-up)

Unfortunately, I don’t have access to either a FAS8200 or an AFF A300 with 40GbE, otherwise I’d provide the before-and-after sysconfig output there as well.

Now, there’s a bit of a debate going on around the viability of 40GbE versus 100GbE. While 40GbE is simply 4 × 10GbE combined, 100GbE is likewise just 4 × 25GbE combined. With regard to production costs, apparently to make a 40GbE QSFP+ you literally combine four lasers (hence the Q in QSFP) into the module; well, the same goes for 100GbE. You only need one laser to produce the wavelength for 25GbE, and while that still means you need four for 100GbE, the same four-laser build yields 250% of the throughput of 40GbE, which makes me wonder where the market will end up in a year.
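The arithmetic behind that comparison is simple, but here it is as a small Python sketch for completeness; laser count is used as a very crude stand-in for production cost:

# Compare the two module types purely by laser count and aggregate
# throughput, with laser count as a crude proxy for production cost.
modules = {
    "40GbE (4 x 10GbE lanes)":  {"lasers": 4, "gbps_per_lane": 10},
    "100GbE (4 x 25GbE lanes)": {"lasers": 4, "gbps_per_lane": 25},
}

for name, m in modules.items():
    total = m["lasers"] * m["gbps_per_lane"]
    print(f"{name}: {m['lasers']} lasers, {total}Gb/s aggregate")

print(f"100GbE delivers {100 / 40:.0%} of 40GbE's throughput")  # 250%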

So there you go: more than you ever wanted to know about NetApp’s recent addition of 40GbE to the ONTAP line of products, as well as my personal waxing philosophical on the 40 versus 100 GbE debate.

NetApp Refreshes Entry-Level FAS Systems

Today NetApp announced the successors to their entry-level line of FAS storage arrays: the FAS2552, FAS2554 and the FAS2520, which replace the FAS2240-2, FAS2240-4 and FAS2220 respectively.

Why is this important? Until now, in order to run Clustered Data ONTAP, you had to use your one and only expansion option for a 10GbE card for the cluster interconnect network, giving up any chance of deploying Fibre Channel. Technically, since this was a two-port card, you could still provide 10GbE uplink at the expense of redundancy on the ClusterNet backend. The new models, however, give up the mezzanine slot altogether in favour of onboard connectivity, ranging from 4 × 10GbE on the FAS2520 to 4 × UTA2 ports on both the FAS2552 and FAS2554.

Highlights:

With this refresh NetApp continues to use the same dual-core, hyper-threaded, 1.73GHz Jasper Forest processors as before (a part which, incidentally, was specifically designed for both embedded and storage applications), but the quantity is doubled to four, not to mention there’s a three-fold increase in memory. All of this added memory increases the ability for Data ONTAP to address more flash, raising the Flash Pool™ caching limit to 4TB. Finally, with the addition of onboard 10GbE across the line, NetApp closes the gap in regard to ClusterNet interconnect requirements. The minimum version of ONTAP required for either 7-Mode or Cluster-Mode will be the one it ships with, 8.2.2RC1.

FAS2520

The FAS2520A is a 2U appliance supporting 12 SAS, SATA and NSE drives internally, and up to 72 additional drives externally. Connectivity is provided by 4 × 6Gb SAS ports, 4 × 1GbE interfaces and 8 × 10GBASE-T ports. Unlike its predecessor, the FAS2520 has no expansion slots.


NetApp’s new FAS2520, rear view.

FAS2552/FAS2554

The FAS2552A is a 2U appliance supporting 24 SAS, NSE and SSD drives internally, while the FAS2554A is a 4U appliance supporting SATA, NSE and SSD drives internally; both models support up to an additional 120 drives externally. Connectivity is provided by 4 × 6Gb SAS ports, 4 × 1GbE interfaces and 8 × UTA2 ports. The UTA2 ports can be configured as either 8Gb FC, 16Gb FC, or 10GbE; the 10GbE configuration supports FCoE as well as the usual CIFS, NFS and iSCSI options. Because each pair of ports is driven by one ASIC, the UTA2 ports must be configured in pairs. Their personality can, however, be modified in the field; this requires a reboot as well as the requisite SFP.


NetApp’s new FAS2552, rear view.


NetApp’s new FAS2554, rear view.


Summary

With this second round of major updates to the FAS systems this year, the entire line is now truly Clustered Data ONTAP-ready, with every model sporting 10GbE connectivity on board. What I find most noteworthy is the amount of RAM that has been added, which significantly increases the amount of flash-based cache the devices can address. Flash Pools abound!