
ONTAP 9.8 has been announced

Timed perfectly with NetApp INSIGHT 2020 is the annual ONTAP payload announcement. Once again, there’s a lot in this payload, so I will simply deliver a list of bulleted sections, addressing as many of the changes as I’m able. I’ll provide additional detail on the ones I feel are the most interesting. For a full rundown, please consult the release notes or start a conversation with me on Twitter.

FlexGroup Volume Enhancements

  • Async Delete
    • Delete large datasets rapidly from the CLI.
      • This is great for those high file count deployments.
  • Backup enhancements
    • 1,023 snapshots supported
    • NDMP enhancements
  • FlexVol to FlexGroup in-place conversion enhancements
  • VMware datastore support
  • Proactive resizing of constituent volumes

FlexCache Volumes, a true global namespace

  • SMB support added with distributed locking
  • 10x origin to cache fan-out ratio, now 1:100
  • Caching of SnapMirror secondary volumes
  • Cache pre-population

Data Visibility

  • File system analytics, viewable in System Manager
    • Enabled on a per-volume basis
    • Can also be queried via API access
  • QoS for Qtrees
    • IOPS and throughput policies available per qtree object
    • Managed via REST API or CLI (see the sketch after this list)
    • Qtree-level statistics
    • NFS only in this release, no adaptive QoS
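
Since those qtree QoS policies are managed via the REST API or CLI, here’s a minimal Python sketch of what applying a throughput ceiling to a qtree over REST might look like. The /api/storage/qtrees endpoint and the qos_policy field names are my assumptions based on ONTAP’s REST resource model rather than something confirmed above, so verify them against the API reference for your release; the cluster address and credentials are placeholders.

```python
import requests
from requests.auth import HTTPBasicAuth

CLUSTER = "cluster1.example.com"           # placeholder cluster management LIF
AUTH = HTTPBasicAuth("admin", "password")  # placeholder credentials

# Assumed endpoint and field names: create a qtree with an inline QoS ceiling.
# Remember: in 9.8 this is NFS-only and there are no adaptive qtree policies.
payload = {
    "svm": {"name": "svm1"},
    "volume": {"name": "vol_projects"},
    "name": "qtree_eng",
    "qos_policy": {
        "max_throughput_iops": 5000,  # IOPS ceiling for this qtree
        "max_throughput_mbps": 200,   # throughput ceiling in MB/s
    },
}

resp = requests.post(f"https://{CLUSTER}/api/storage/qtrees",
                     json=payload, auth=AUTH, verify=False)  # verify=False: lab only
resp.raise_for_status()
print(resp.json())
```

The same ceiling could just as easily be set from the CLI; the point is simply that qtrees now accept IOPS and throughput limits as first-class objects with their own statistics.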

All-SAN Array (ASA) enhancements

  • Persistent FC Ports
    • Symmetric active/active host-to-LUN access
    • Each node on the ASA will maintain a “shadow FC LIF”, reducing SAN failover times even further.
  • Larger Capacities
    • Max LUN size = 128TB
    • Max FlexVol = 300TB
      • These limit increases are on the ASA only
  • MCC-IP support
  • Priced ~20% less than unified platforms
Diagrams: SAN pathing before and with Persistent FC Ports

ONTAP S3

  • Preview-only in 9.7, GA in 9.8
  • System manager integration
  • Bucket access policies
  • Multiple buckets per volume
  • TLS 1.2 support
  • Multi-part upload
Note: ONTAP S3 is not a replacement for a dedicated, global object store.
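
From the client side, ONTAP S3 looks like any other S3 endpoint, so a quick boto3 sketch shows how the bucket and multi-part upload support above gets exercised; the endpoint URL, access keys and bucket name are placeholders for whatever you configure on your SVM, not values from this post.

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Placeholders: the S3 server LIF on your SVM and the keys ONTAP generated for your user.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.svm1.example.com",
    aws_access_key_id="ONTAP_ACCESS_KEY",
    aws_secret_access_key="ONTAP_SECRET_KEY",
)

# List the buckets this SVM is serving.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])

# Upload a large object; anything over the threshold is sent as a multi-part upload.
cfg = TransferConfig(multipart_threshold=16 * 1024 * 1024)
s3.upload_file("backup.tar", "my-bucket", "backups/backup.tar", Config=cfg)
```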

Storage Efficiency Enhancements

  • FabricPool
    • Tiering from HDD aggregates
    • Object tagging (For information life cycle policies)
    • Increased cooling period (max 183 days)
    • Cloud retrieval
  • Storage efficiencies
    • Differentiation of hot and cold data for application of different compression methods, 8K compression group for hot, 32K for cold
    • Deduplication prior to compression

Simplification

  • Upgrade directly to a release two versions newer without passing through an intermediary version
  • Head swaps: replacement nodes running the latest version of ONTAP can take over from nodes running a version of ONTAP up to two versions behind
  • REST API enhancements
    • ZAPI to REST mapping documentation
    • ONTAP version information in API documentation (see the sketch after this list)
  • System Manager Improvements
    • Single-click firmware upgrades
    • File system analytics
      • Granular details about your NAS file systems
    • Hardware and Network visualization
    • Data Protection Enhancements
      • Reverse resync
  • Simpler Compliance
    • Volume move support, no second copy required
    • WORM as the default
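
As a quick taste of the REST API those documentation improvements point at (flagged in the list above), here’s a minimal Python sketch that asks the cluster which version it’s running; the hostname and credentials are placeholders, and a real script would use proper certificate verification.

```python
import requests
from requests.auth import HTTPBasicAuth

CLUSTER = "cluster1.example.com"  # placeholder cluster management LIF

# /api/cluster returns cluster-wide details, including the running ONTAP release.
resp = requests.get(
    f"https://{CLUSTER}/api/cluster",
    params={"fields": "version"},
    auth=HTTPBasicAuth("admin", "password"),
    verify=False,  # lab only
)
resp.raise_for_status()
print(resp.json()["version"]["full"])  # e.g. "NetApp Release 9.8: ..."
```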

Security and Data Protection Enhancements

  • Secure purge
    • crypto shred individual files
  • IPSec
    • Encrypted network traffic, regardless of protocol
      • Simplifies secure NFS, no need for Kerberos
      • iSCSI traffic on the wire can now be encrypted
  • Node root volume encryption
  • MetroCluster
    • Unmirrored aggregate support
  • SnapMirror
    • SnapMirror Business Continuity (SM-BC) provides automated failover of synchronous SnapMirror relationships for application-level, granular protection
      • These are non-disruptive
      • SM-BC is preview-only in 9.8 and SAN-only.
    • SnapMirror to Object Store
      • Google Cloud, Azure, or AWS
      • Metadata included so the object store copy is a complete archive
      • Efficiencies maintained

Virtualization Enhancements

  • FlexGroup volumes as VMware datastores
  • SnapCenter backup support
  • 64TB SAN datastore on the ASA
  • SRA support for SnapMirror Synchronous
  • Support for Tanzu storage

That sums up the majority of the improvements; I’m looking forward to this release coming out. See you at NetApp INSIGHT 2020!

NetApp releases a new AFF and a new FAS(?)

While we ramp up for NetApp INSIGHT next week, (the first virtual edition, for obvious reasons), NetApp has announced a couple of new platforms. First off, the AFF A220, NetApp’s entry-level, expandable AFF is getting a refresh in the AFF A250. While the 250 is a recycled product number, the AFF A250 is a substantial evolution of the original FAS250 from 2004.

The front bezel looks pretty much the same as the A220:

AFF A250 – Front Bezel

Once you remove the bezel, you get a sneak peek of what lies within from those sexy blue drive carriers which indicate NVMe SSDs inside:

AFF A250 – Bezel Removed

While the NVMe SSDs alone are a pretty exciting announcement for this entry-level AFF, once you see the rear, that’s when the possibilities start to come to mind:

AFF A250 – Rear View

Before I address the fact that there are two slots for expansion cards, let’s go over the internals. Much like its predecessor, each controller contains a 12-core processor. While the A220 contained an Intel Broadwell-DE running at 1.5GHz, the A250 contains an Intel Skylake-D running at 2.2GHz, providing roughly a 45% performance increase over the A220, not to mention 32 third-generation PCIe lanes. System memory gets doubled from 64GB to 128GB, as does NVRAM, going from 8GB to 16GB. Onboard connectivity consists of two 10GBASE-T (e0a/e0b) ports for 10 gigabit client connectivity along with two 25GbE SFP28 ports for ClusterNet/HA connectivity. Since NetApp continues to keep HA off the backplane in newer models, they keep that door open for HA pairs living in separate chassis, as I waxed about previously here. Both e0M and the BMC continue to share a 1000Mbit RJ-45 port, and the usual console and USB ports are also included.

Hang on, how do I attach an expansion shelf to this? Well, at launch there will be four different mezzanine cards available to slot into one of the two expansion slots per controller. There will be two host connectivity cards available: one is a 4-port, 10/25Gb, RoCEv2, SFP28 card and the other is a 4-port, 32Gb Fibre Channel card leveraging SFP+. The second type of card available is for storage expansion: one is a 2-port, 100Gb Ethernet, RoCEv2, QSFP28 card for attaching up to one additional NS224 shelf, and the other is a 4-port, 12Gb SAS, mini-SAS HD card for attaching up to one additional DS224c shelf populated with SSDs. That’s right folks, this new platform will only support up to 48 storage devices, though in the AFF world, I don’t see this being a problem. The minimum configuration is 8 NVMe SSDs; the max is 48 NVMe SSDs or 24 NVMe + 24 SAS SSDs, but you won’t be able to buy it with SAS SSDs. That compatibility is being included only for migrating off of or reusing an existing DS224x populated with SSDs. If that’s a DS2246, you’ll need to upgrade the IOM modules to 12Gb prior to attachment.

Next up in the hardware announcement is the new FAS(?)…but why the question mark, you ask? That’s because this “FAS” is all-flash. That’s right, the newest FAS to hit the streets is the FAS500f. Now before I get into those details, I’d love to get into the speeds and feeds as I did above. The problem is that I would simply be repeating myself. This is the same box as the AFF A250, much like how the AFF A220 is the same box as the FAS27x0. The differences between the AFF A250 and the FAS500f are in the configurations and the capabilities or restrictions imposed upon them.

While most of the information above can be ⌘-C’d, ⌘-V’d here, this box does not support the connection of any SAS-based media. That fourth mez card I mentioned, the 4-port SAS one? Can’t have it. As for storage device options, much like Henry Ford’s famous quote:

Any customer can have a car painted any color that he wants so long as it is black.

-Henry Ford

Any customer can have any size NVMe drive they want in the FAS500f, so long as it’s a 15.3TB QLC. That’s right, not only are there no choices to be made here other than drive quantity, but those drives are QLC. On the topic of quantity, the available configurations start at a minimum of 24 drives and can be grown to either 36 or 48, but that’s it. So why QLC? By now, you should be aware that the 10k/15k SAS drives we are so used to today for our tier 2 workloads are going away. In fact, the current largest spindle size of 1.8TB is slated to be the last drive size in this category. NetApp’s adoption of QLC media is a direct result of the sunsetting of this line of media. While I don’t expect to get into all of the differences between Single, Multi, Triple, Quad or Penta-level (SLC, MLC, TLC, QLC or PLC) cell NAND memory in this post, the rule of thumb is the more levels, the lower the speed, reliability and cost. QLC is slated to be the replacement for 10k/15k SAS, yet it is expected to perform better and only be slightly more expensive. In fact, the FAS500f is expected to be able to do 333,000 IOPS at 3.6ms of latency for 100% 8KB random read workloads, or 170,000 IOPS at 2ms for OLTP workloads with a 40/60 r/w split.
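
To put that 333,000 IOPS figure in context, a quick back-of-the-envelope calculation converts it into raw throughput; this is just arithmetic on the numbers quoted above, nothing official.

```python
# Back-of-the-envelope: what 333,000 8KB random-read IOPS means in throughput.
iops = 333_000           # quoted 100% 8KB random read figure
block_size = 8 * 1024    # 8KB in bytes

throughput = iops * block_size
print(f"{throughput / 1e9:.2f} GB/s")  # ≈ 2.73 GB/s of small random reads
```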

Those are this Fall’s new platforms. If you have any questions, put them in a comment or tweet at me, @ChrisMaki; I’d love to hear your thoughts on these new platforms. See you next week at INSIGHT 2020, virtual edition!

What’s going on with Intel’s X710 Ethernet controller?

I’ve previously written about this Ethernet controller back when 40GbE was relatively new to NetApp’s FAS and AFF controllers. Since that article, I’ve started to come across various oddities with this Ethernet controller.

Last Fall, I had a customer who was experiencing problems with LACP during an ONTAP upgrade (9.1 → 9.3 → 9.5P6) on their AFF A700s using the X1144A, a dual-port 40GbE card which uses the Intel X710 Ethernet controller. We had the first 40GbE port broken out into 4x10GbE links, two each to either half of a pair of Cisco Nexus N9K-C9396PX switches in the same vPC domain. During a controller reboot, we noticed that on the interface group using multimode_lacp, most or all of the ports wouldn’t come up, and on the Cisco side the port(s) would become disabled due to too many link up/down events. We immediately suspected cable problems but quickly dismissed that idea. After some digging, it looked as though NetApp was referencing Cisco Bug ID CSCuv87644 as potentially related. This led me down a long path of investigating the changes made to the networking stack in ONTAP over the past couple of years, and I’ve still got a post I’m working on around that. The workaround was to increase the debounce timer value on the Cisco 9k to 525ms; the default value is 100ms.

The port debounce time is the amount of time that an interface waits to notify the supervisor of a link going down. During this time, the interface waits to see if the link comes back up. The wait period is a time when traffic is stopped.

Source: Cisco

Recently, a different customer of mine was trying to buy a Nimble HF20 and they wanted to include the Q8C17B, a four-port, 10GbE NIC, also based on the Intel X710 Ethernet controller. The vendor came back to me and said they needed to know if the customer was going to be using VLAN tagging on the Q8C17B, because if they needed VLAN tagging, they’d have to choose a two-port NIC instead. This confused me, but after some emails back and forth, HPE Nimble Storage Alert # EXT-0061 was referenced as the reason for this. At some point Nimble will release a patch that updates the firmware on this NIC, hopefully bringing back VLAN functionality. A bit of looking around, and the same VLAN issue has been identified by VMware in KB2149781.

Lastly, I also came across a NIST vulnerability from 2017 regarding the same Ethernet controller; it seems that has since been addressed in a firmware update.

While the above doesn’t necessarily imply a huge problem with the X710, I simply found these issues interesting and thought I’d collect them all in one post.

ONTAP Fall 2019 Update – 9.7

Right on schedule to coincide with NetApp INSIGHT 2019 is the announcement of the next release of NetApp’s ONTAP, 9.7. Going over the list of improvements, much of what is expected in 9.7 seems incremental. The themes for this release are High Performance, Simplicity and Data Protection. This release will also bring support for a few new platforms: the FAS8300, FAS8700 and the AFF A400. There’s also a new twist on the A220 and A700, the first models in the new All SAN Array (ASA) versions of the all-flash FAS line.

FlexCache, the most recent feature to be brought back from the depths of 7-mode, gets a bit more attention. First up, both FC and IP MetroCluster support, allowing you to extend a volume namespace across MCC sites and provide per-site load-balancing for NFS clients. Also, FlexGroups can now be the origin volume for FlexCache, allowing for origin volumes greater than 100TB and higher file counts.

In the realm of security, data-at-rest encryption is on by default for all newly created volumes provided there is a key manager configured. ONTAP will encrypt the data using hardware encryption if self-encrypting drives are present, otherwise it will leverage software-based encryption. Setting up the onboard key manager is now extra simple with a setup wizard available in System Manager.

The MetroCluster network can now co-exist on your data access switches, provided they comply with specifications. MCCs with either an A220 or FAS2750 do not qualify.

There’s an interesting new bit of engineering coming in the new AFF A400 platform where compression will be offloaded to a PCI network card.

FlexGroup improvements include NDMP support, allowing backup by any 3rd-party application that supports NDMP. ONTAP 9.7 brings NFS v4.0 and v4.1 to FlexGroups, including support for pNFS. The long-awaited in-place conversion from FlexVol to single-member FlexGroup is here, allowing you to scale capacity and performance without having to perform a client-based copy. While VMware datastores will work on FlexGroups, this isn’t supported quite yet. If you’re a NetApp partner and you have a customer who would like to use FlexGroups as a VMware datastore, contact your SE.

Another oft-requested feature, this one for FabricPool, is the ability to tier to more than one object store. In 9.7, FabricPool Mirrors is announced, allowing you to tier to two separate object stores. FabricPool Mirrors can be used to add resiliency, or change providers, perhaps to repatriate your data to an on-premises StorageGRID deployment. Keeping on the topic of FabricPool, customers wanting to tier to an object store that isn’t officially qualified no longer need an FPVR, though they must perform their own testing to ensure the object store meets their needs. The officially qualified object stores are: Alibaba Cloud Object Storage Service, Amazon S3, Amazon Commercial Cloud Services, Google Cloud Storage, IBM Cloud Object Storage, Microsoft Azure Blob Storage and StorageGRID.


Wrapping up the 9.7 updates, ONTAP Select gets NVMe device support, 12-node clusters and NSX-T support on ESXi.

Rubrik and NetApp, did that just happen?

I wasn’t sure I’d ever see the day where I’d be writing about not only the partnership of NetApp and Rubrik but actual technological integration; this always seemed somewhat unlikely. While there have been some rumours flying around in the background for some time, the first real sign of cooperation between the two companies was the publication of a Solution Brief around combining NetApp StorageGRID with Rubrik Cloud Data Management (CDM) to automate data lifecycle management through Rubrik’s simple control plane while using StorageGRID as a cloud-scale, object-based archive target. And then…nothing, not even the sound of crickets.

As Summer started to draw to a close and the kids were back in school, those in the inner circle started to hear things, interesting things. If you were to talk to your local Rubrik reps or sales engineers, the stories they had to tell were around NAS Backup with NAS Direct Archive as well as using older NetApp gear as a NAS target, nothing game-changing. This backing up of the NAS filesystems still involved completely crawling the directory structure, which was time-consuming and performance-impacting; something was still missing.

On September 24th this year, exactly one month ago, a new joint announcement hit the Internet: Rubrik and NetApp Bring Policy-Based Data Management to Cloud-Scale Architectures. While interesting, it was still not exactly what some of us were waiting for. Well, wait no longer: as of now, Rubrik has officially announced plans to integrate with NetApp’s SnapDiff API. What’s that, you may ask? It is the ability to poll ONTAP via API call to leverage the internal metadata catalogue to quickly identify the file and directory differences between two snapshots. This is a game changer for indexing NAS backups; since Rubrik will no longer need to scan the file shares manually, backup windows will shrink dramatically. Also, while other SnapDiff licensees can send data to another NetApp target, Rubrik is the first backup vendor to license SnapDiff and be able to send the data to standard public cloud storage.
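
To make the indexing benefit concrete, here’s a purely conceptual Python sketch of how a backup engine might consume a snapshot-diff style API instead of walking the whole share. The function names are hypothetical stand-ins of my own, not the actual SnapDiff interface Rubrik has licensed.

```python
from typing import Iterable

def get_snapshot_diff(volume: str, base_snap: str, new_snap: str) -> Iterable[str]:
    """Hypothetical stand-in for a SnapDiff-style call: yields the paths of files
    created, modified or deleted between two snapshots of a volume."""
    raise NotImplementedError  # supplied by the storage system's API in practice

def backup_file(path: str) -> None:
    """Hypothetical stand-in for the backup engine's per-file processing."""
    print(f"backing up {path}")

# Instead of crawling millions of files on the share, touch only what changed
# between yesterday's and today's snapshots.
for changed_path in get_snapshot_diff("vol_nas01", "daily.2019-10-23", "daily.2019-10-24"):
    backup_file(changed_path)
```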

Since the ink is just drying on Rubrik’s licensing of the SnapDiff API, it’s not quite ready in their code yet, but integration is being targeted for release 5.2 of CDM. Also, Rubrik will have a booth at INSIGHT (207) and be presenting on Tuesday, session number 9019-2, stop by to see what all the fuss is about. Also, be sure to look for me and my fellow A-Team members, there’s a good chance you’ll find us hanging around near the NetAppU booth where you’ll find a pretty cool surprise! You can also find me Wednesday, October 30th, at 11:30 am presenting 3009-2 Ask the A-Team – Building A Modern Data Platform, register for that today.

Gartner’s new Magic Quadrant for Primary Storage

Hot off the presses is Gartner’s new Magic Quadrant (GMQ) for Primary Storage and it’s great to see NetApp at the top-right, right where I’d expect them to be. This is the first time Gartner has combined rankings for primary arrays and not separated out all-flash from spinning media and hybrid arrays, acknowledging that all-flash is no longer a novelty.

As you can see on the GMQ below, the x-axis represents completeness of vision while the y-axis measures ability to execute, NetApp being tied with Pure on X and leading on Y.

As mentioned, this new MQ marks the retiring of the previous divided GMQs of Solid-State Arrays and General-Purpose Disk Arrays. To read more about NetApp’s take on this new GMQ, head over to their blog post on the subject or request a copy of the report here.

There’s a new NVMe AFF in town!

Yesterday, NetApp announced a new addition to the midrange tier of their All-Flash FAS line, the AFF A320. With this announcement, end-to-end NVMe is now available in the midrange, from the host all the way to the NVMe SSD. This new platform is a svelte 2RU that supports up to two of the new NS224 NVMe SSD shelves, which are also 2RU. NetApp has set performance expectations to be in the ~100µs range.

Up to two PCIe cards per controller can be added, options are:

  • 4-port 32Gb FC SFP+ fibre
  • 2-port 100GbE RoCEv2* QSFP28 fibre (40GbE supported)
  • 2-port 25GbE RoCEv2* SFP28 fibre
  • 4-port 10GbE SFP+ Cu and fibre
    *RoCE host-side NVMeoF support not yet available

A couple of important points to also note:

  • 200-240VAC required
  • DS, SAS-attached SSD shelves are NOT supported

An end-to-end NVMe solution obviously needs storage of some sort, so also announced today was the NS224 NVMe SSD Storage Shelf:

  • NVMe-based storage expansion shelf
  • 2RU, 24 storage SSDs
  • 400Gb/s capable, 200Gb/s per shelf module
  • Uplinked to controller via RoCEv2
  • Drive sizes available: 1.9TB, 3.8TB and 7.6TB. 15.3TB with restrictions.

Each controller in the A320 has eight 100GbE ports on-board, but not all of them are available for client-side connectivity. They are allocated as follows:

  • e0a → ClusterNet/HA
  • e0b → Second NS224 connectivity by default, or can be configured for client access, 100GbE or 40GbE
  • e0c → First NS224 connectivity
  • e0d → ClusterNet/HA
  • e0e → Second NS224 connectivity by default, or can be configured for client access, 100GbE or 40GbE
  • e0f → First NS224 connectivity
  • e0g → Client network, 100GbE or 40GbE
  • e0h → Client network, 100GbE or 40GbE

If you don’t get enough client connectivity with the on-board ports, then as listed previously, there are myriad PCIe options available to populate the two available slots. In addition to all that on-board connectivity, there’s also MicroUSB and RJ45 for serial console access as well as the RJ-45 Wrench port to host e0M and out-of-band management via BMC. As with most port-pairs, the 100GbE ports are hosted by a single ASIC which is capable of a total effective bandwidth of ~100Gb.

Food for thought…
One interesting design change in this HA pair is that there is no backplane HA interconnect as has been the case historically; instead, the HA interconnect function is placed on the same connections as ClusterNet, e0a and e0d. This enables some interesting future design possibilities, like HA pairs in differing chassis. Also of interest is the shelf connectivity being NVMe/RoCEv2; while the shelves are currently connected directly to the controllers, what’s stopping NetApp from putting these on a switched fabric? Once they do that, drop the HA pair concept above and instead have N+1 controllers on a ClusterNet fabric. Scaling, failovers and upgrades just got a lot more interesting.

ONTAP 9.6

UPDATE, MAY 17: RC1 is out, you can grab it here.

It’s my favourite time of year folks, yup, it’s time for some new ONTAP feature announcements. It feels as though 9.6 is going to have quite the payload, so I’m not going to cover every little tidbit, just the pieces that I’m excited about. For the full release notes, go here; NetApp SSO credentials required. Or, if you’re one of my customers, feel free to email me for a meeting and we can go over this release in detail.

The first thing worth mentioning is that with 9.6, NetApp is dropping the whole LTS/STS distinction and all releases going forward will receive long-term support. What this means is every release has three years of full support, plus two years of limited support.

The rest of the updates can be grouped into three themes or highlights:

  1. Simplicity and Productivity
  2. Expanded customer use cases
  3. Security and Data Protection

Some of the Simplicity highlights are:

  • System Manager gets renamed to ONTAP System Manager and overhauled, now based on REST APIs with a Python SDK available at GA (see the sketch after this list)
    • Expect a preview of a new dashboard in 9.6
  • Automatic Inactive Data Reporting for SSD aggregates
    • This tells you how much data you could tier to an object store, freeing up that valuable SSD storage space
  • FlexGroup volume management has gotten simpler with the ability to shrink them, rename them and MetroCluster support
  • Cluster setup has gotten even easier with automatic node discovery
  • Adaptive QoS support for NVMe/FC (maximums) and ONTAP Select (minimums)
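
Since the refreshed ONTAP System Manager is built on REST APIs with a Python SDK slated for GA (noted in the list above), here’s a minimal sketch of what that SDK workflow is expected to look like, assuming the netapp_ontap package; the cluster address and credentials are placeholders, and details may shift by the time the SDK ships.

```python
from netapp_ontap import config, HostConnection
from netapp_ontap.resources import Volume

# Placeholder cluster management LIF and credentials.
config.CONNECTION = HostConnection(
    "cluster1.example.com", username="admin", password="password", verify=False
)

# List every volume in the cluster with its size, via the same REST layer
# that the new ONTAP System Manager uses.
for vol in Volume.get_collection(fields="size,svm.name"):
    print(vol.svm.name, vol.name, vol.size)
```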

Here’s what the System Manager dashboard currently looks like:

And here’s what we can look forward to in 9.6

The Network Topology Visualization is very interesting; I’m looking forward to seeing how in-depth it gets.

Expanded Customer Use Cases

  • NVMe over FC gets more host support; it now includes VMware ESXi, Windows 2012/2016, Oracle Linux, Red Hat Linux and SUSE Linux.
  • FabricPools improvements:
    • Gains support for two more hyperscalers: Google Cloud and Alibaba Cloud
    • The Backup policy is gone, replaced with a new All policy, great for importing known-cold data directly to the cloud
    • Inactive Data Reporting is now on by default for SSD aggregates and is viewable in ONTAP System Manager – Use this to determine how much data you could tier.
    • FabricPool aggregates can now store twice as much data
    • SVM-DR support
    • Volume move – can now be done without re-ingesting the cloud tier; moves the metadata and hot data only
  • FlexGroup Volume Improvements:
    • Elastic sizing to automatically protect against one constituent member filling up and returning an error to the client
    • MetroCluster support, both FC and IP MetroCluster
    • Volume rename now trivial
    • Volume size reduction now available
    • SMB Continuous Availability (CA) file share support
  • FlexCache Improvements:
    • Caching to and from Cloud Volumes ONTAP
    • End-to-end data encryption
    • Max cached volumes per node increased to 100 from 10
    • Soft and hard quota (tree) on origin volume enforced on cached volume
    • FPolicy support

Security and Data Protection

  • Over-the-wire encryption for SnapMirror
    • Coupled with at-rest encryption, data can now be encrypted end-to-end
  • SnapMirror Synchronous now supports
    • NFSv4, SMB 2 & 3 and mixed NFSv3/SMB volumes
    • This is in addition to existing support for FCP, iSCSI and NFSv3
  • NetApp Aggregate Encryption (NAE)
    • This can be seen as an evolution of NetApp Volume Encryption (NVE), all volumes in the aggregate share the same key.
    • Deduplication across volumes in the aggregate is supported for added space savings
  • Multi-tenant Key Management for Data At-Rest Encryption
    • Each tenant SVM can be configured with its own key management servers
    • Neighbour tenants are unaffected by each other’s encryption actions and maintain control of their own keys
    • This is an added license
  • MetroCluster IP Updates
    • Support for entry AFF and FAS systems!
      • Personally I think this one is a game-changer and will really drive MetroCluster adoption now that the barrier to entry is so low
    • AFF A220 and FAS2750 and newer only

And that is most of the new enhancements and features appearing in 9.6; 9.6RC1 is expected around the second half of May, and GA typically comes about six weeks later. You can bet that I’ll have it running in my lab the day it comes out.

ONTAP 9.5

UPDATE: 9.5RC1 is now out and you can grab it here.

It’s that time of year again, time for NetApp’s annual technical conference, Insight. This also means that a Long-Term Support (LTS) release of ONTAP is due; this time it’s 9.5. As I write this, I am sitting in the boarding lounge of YVR, waiting for my flight to Las Vegas for NetApp Insight, and I see the Release Candidate (RC) for 9.5 is not out quite yet, but I do have the list of new features for you nonetheless.

The primary new features of 9.5 are:

  • New FlexCache accelerates performance for key workloads with read caching across a cluster and at remote sites.
  • SnapMirror Synchronous protects critical applications with synchronous replication
  • MetroCluster-IP enhancements reduce cost for business continuity: 700km between sites; support midrange systems (A300/FAS8200)
  • FabricPool now supports automated cloud tiering for FlexGroup volumes

Now, let’s dig into each one of these new features a bit.

FlexCache: FlexCache makes its return in 9.5 and provides the ability to cache hot blocks, user data and metadata on a more performant tier while the bulk of the data sits in a volume elsewhere in the cluster or even on a remote cluster. FlexCache can enable you to provide lower read latency while not having to store the bulk of your data on the same tier. At this time, only NFSv3 is supported, though the origin volume can be on AFF, FAS or ONTAP Select. While the cache volume you access is a FlexGroup volume, the origin volume itself cannot be a FlexGroup but rather must be a FlexVol. An additional license is required.

SnapMirror Synchronous: SM-S also makes a long-awaited return to ONTAP, allowing you to provide a recovery point objective (RPO) of zero and a very low recovery time objective (RTO). FC, iSCSI and NFSv3 only at this time, and your network must have a maximum roundtrip latency of no more than 10ms; FlexGroup volumes are not supported. An additional license is required.

MetroCluster-IP (MC-IP): NetApp continues to add value to the mid-range of appliances by bringing MC-IP support to both the AFF A300 as well as the FAS8200. At the same time, NetApp has increased the maximum distance to 700km, provided your application can tolerate up to 10ms of write acknowledgement latency.

FabricPool: Previously hampered by the need to tier volumes greater than 100TiB? Now that FabricPool supports FlexGroups, you’re in luck. Also supported in 9.5 is end-to-end encryption of data stored in FabricPool volumes using only one encryption key. Lastly, up until now, data would only migrate to your capacity tier once your FabricPool aggregate reached a fullness of 50%; this parameter is now adjustable, though 50% remains the default.

While those are the primary features included in this latest payload, existing features continue to gain refinement, especially in the realm of storage efficiency. Specifically, around logical space consumption reporting, useful for service providers. Also, adaptive compression is now applied when 8KB compression groups (CGs) are <50% compressible, allowing CGs to be compacted together. Databases will see the most benefit here, with typical aggregate savings in the 10-15% range. Finally, provided you have provisioned your storage using System Manager’s application provisioning, adaptive compression will be optimized for the database being deployed: Oracle, SQL Server or MongoDB.

That’s all for now; if you want more details, come find me at NetApp Insight on the show floor near the Social Media Hub or at my Birds of a Feather session, Monday at 11:15am, where I and other NetApp A-Team members will discuss the Next Generation Data Centre.

NetApp HCI Update

As NetApp continues to make its mark on and help define the Next Generation Data Centre, the need for more node types of their HCI offering has become apparent and they are responding in kind.

First up, staying current by using the latest generation of Intel Skylake processors in the new nodes is a given, as is offering myriad combinations of both CPU and memory while maintaining interoperability with the current generation of HCI nodes.

To begin, there’s a raft of new compute nodes, some of which are optimized around core count, which you can use to satisfy various licensing models.

 

Model #        Processor                                  Memory
H410C-14020    2 x Xeon Silver 4110 (8 core @ 2.1GHz)     384 GB
H410C-15020    2 x Xeon Silver 4110 (8 core @ 2.1GHz)     512 GB
H410C-17020    2 x Xeon Silver 4110 (8 core @ 2.1GHz)     768 GB
H410C-25020    2 x Xeon Gold 5120 (14 core @ 2.2GHz)      512 GB
H410C-27020    2 x Xeon Gold 5120 (14 core @ 2.2GHz)      768 GB
H410C-28020    2 x Xeon Gold 5120 (14 core @ 2.2GHz)      1 TB
H410C-35020    2 x Xeon Gold 5122 (4 core @ 3.6GHz)       512 GB
H410C-37020    2 x Xeon Gold 5122 (4 core @ 3.6GHz)       768 GB
H410C-57020    2 x Xeon Gold 6138 (20 core @ 2.0GHz)      768 GB
H410C-58020    2 x Xeon Gold 6138 (20 core @ 2.0GHz)      1 TB

 

Next up, the much-requested GPU-accelerated compute nodes have been announced, optimized for Windows 10 VDI deployments. This one moves away from the 2 RU chassis with 4 compute nodes and is a single 2 RU server in itself, consisting of:

  • 2 x NVIDIA Tesla M10 GPUs
  • An Intel Skylake Xeon 6130 (16 cores @ 2.1GHz)
  • 512GB RAM

On to the networking side of things: your concerns have been heard. NetApp will soon begin offering their H-Series switch, the Mellanox SN2010, to help complete your HCI build-outs. This switch is a paltry 1RU, half-width, consisting of 18 SFP+/SFP28 ports with optional cable and transceiver bundles. Support for this switch will be NetApp-direct, so no worries around cross-vendor finger pointing.

Keeping with the network mindset, NetApp is making things simpler by reducing the required network port count and associated infrastructure by 40%. HCI compute nodes now only require two SFP28 connections, down from four; a vSphere Distributed Switch is a requirement.

Tied closely to NetApp’s HCI offering is their SolidFire storage, whose latest release, version 11, provides some great new features. Version 11 brings to the table the ability to SnapMirror to ONTAP Cloud, an IPv6 management network, a 16TB maximum volume size and protection domains. This last feature helps protect your HCI deployments against chassis failure by automatically detecting HCI chassis and node configuration. SolidFire’s double-helix data layout ensures that secondary blocks span domains.

All the above should allow you to build a truly Next Generation Data Centre for your employer or your customers.