Yesterday, NetApp announced a new addition to the midrange tier of their All-Flash FAS line, the AFF A320. With this announcement, end-to-end NVMe is now available in the midrange, from the host all the way to the NVMe SSD. The new platform is a svelte 2RU that supports up to two of the new NS224 NVMe SSD shelves, which are also 2RU. NetApp has set performance expectations in the ~100µs latency range.
Up to two PCIe cards can be added per controller; the options are:
- 4-port 32Gb FC SFP+ fibre
- 2-port 100GbE RoCEv2* QSFP28 fibre (40GbE supported)
- 2-port 25GbE RoCEv2* SFP28 fibre
- 4-port 10GbE SFP+ Cu and fibre
*RoCE host-side NVMe-oF support not yet available
A couple of important points to also note:
- 200-240VAC required
- DS-series, SAS-attached SSD shelves are NOT supported
An end-to-end NVMe solution obviously needs storage of some sort, so announced alongside it was the NS224 NVMe SSD Storage Shelf:
- NVMe-based storage expansion shelf
- 2RU, 24 storage SSDs
- 400Gb/s capable per shelf, 200Gb/s per shelf module
- Uplinked to controller via RoCEv2
- Drive sizes available: 1.9TB, 3.8TB and 7.6TB. 15.3TB with restrictions.
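For a rough sense of scale, here's a small, purely illustrative Python sketch of what a fully populated shelf works out to at each drive size. These are raw numbers only (no RAID, spares, or ONTAP efficiency accounted for), using the drive count, drive sizes and per-module bandwidth listed above:

```python
# Rough, illustrative numbers for a fully populated NS224 shelf.
# Raw capacity only -- ignores RAID, spares, and ONTAP overhead.

DRIVES_PER_SHELF = 24
DRIVE_SIZES_TB = [1.9, 3.8, 7.6, 15.3]  # 15.3TB carries restrictions

MODULES_PER_SHELF = 2
BANDWIDTH_PER_MODULE_GBPS = 200  # Gb/s per shelf module, 400Gb/s per shelf

for size in DRIVE_SIZES_TB:
    raw_tb = DRIVES_PER_SHELF * size
    print(f"{size:>5} TB drives -> {raw_tb:7.1f} TB raw per shelf")

print(f"Shelf uplink bandwidth: {MODULES_PER_SHELF * BANDWIDTH_PER_MODULE_GBPS} Gb/s "
      f"({BANDWIDTH_PER_MODULE_GBPS} Gb/s per module)")
```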
Each controller in the A320 has eight 100GbE ports on-board, but not all of them are available for client-side connectivity. They are allocated as follows (a quick sketch of the split follows the list):
- e0a → ClusterNet/HA
- e0b → Second NS224 connectivity by default, or can be configured for client access, 100GbE or 40GbE
- e0c → First NS224 connectivity
- e0d → ClusterNet/HA
- e0e → Second NS224 connectivity by default, or can be configured for client access, 100GbE or 40GbE
- e0f → First NS224 connectivity
- e0g → Client network, 100GbE or 40GbE
- e0h → Client network, 100GbE or 40GbE
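To make that split easier to reason about, here's a small illustrative Python sketch of the on-board port map and how many ports are left for client traffic, depending on whether e0b/e0e stay dedicated to a second NS224 or get re-purposed. The dictionary and helper are just my own shorthand, not anything from ONTAP:

```python
# Illustrative map of the A320's on-board 100GbE ports (per controller).
# "shelf2-or-client" means the port carries the second NS224 by default
# but can instead be configured for client access.

PORT_ROLES = {
    "e0a": "clusternet/ha",
    "e0b": "shelf2-or-client",
    "e0c": "shelf1",
    "e0d": "clusternet/ha",
    "e0e": "shelf2-or-client",
    "e0f": "shelf1",
    "e0g": "client",
    "e0h": "client",
}

def client_ports(second_shelf_attached: bool) -> list[str]:
    """Return the on-board ports available for client traffic."""
    usable = []
    for port, role in PORT_ROLES.items():
        if role == "client":
            usable.append(port)
        elif role == "shelf2-or-client" and not second_shelf_attached:
            usable.append(port)
    return usable

print(client_ports(second_shelf_attached=True))   # ['e0g', 'e0h']
print(client_ports(second_shelf_attached=False))  # ['e0b', 'e0e', 'e0g', 'e0h']
```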
If you don’t get enough client connectivity with the on-board ports, then as listed previously, there are several PCIe options available to populate the two available slots. In addition to all that on-board connectivity, there’s also MicroUSB and RJ-45 for serial console access, as well as the RJ-45 wrench port to host e0M and out-of-band management via the BMC. As with most port pairs, the 100GbE ports are hosted by a single ASIC which is capable of a total effective bandwidth of ~100Gb/s.
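That shared ASIC matters when sizing: assuming both ports of a pair are driven hard, the ~100Gb/s is split rather than doubled. A quick worked example, illustrative only:

```python
# Illustrative only: effective throughput when a 100GbE port pair shares
# one ASIC limited to roughly 100Gb/s of total bandwidth.

ASIC_LIMIT_GBPS = 100    # approximate total for the port pair
PORT_LINE_RATE_GBPS = 100

for active_ports in (1, 2):
    per_port = min(PORT_LINE_RATE_GBPS, ASIC_LIMIT_GBPS / active_ports)
    total = min(ASIC_LIMIT_GBPS, active_ports * PORT_LINE_RATE_GBPS)
    print(f"{active_ports} port(s) active -> ~{per_port:.0f} Gb/s each, ~{total:.0f} Gb/s total")
```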
Food for thought…
One interesting design change in this HA pair is that there is no backplane HA interconnect as has been the case historically; instead, the HA interconnect function is placed on the same connections as ClusterNet, e0a and e0d. This enables some interesting future design possibilities, like HA pairs in differing chassis. Also of interest is the shelf connectivity being NVMe/RoCEv2: while the shelves are currently connected directly to the controllers, what’s stopping NetApp from putting them on a switched fabric? Once they do that, they could drop the HA-pair concept above and instead run N+1 controllers on a ClusterNet fabric. Scaling, failovers and upgrades just got a lot more interesting.