Tag Archives: ONTAP

ONTAP 9.6

UPDATE, MAY 17: RC1 is out, you can grab it here.

It’s my favourite time of year, folks: it’s time for some new ONTAP feature announcements. It feels as though 9.6 is going to have quite the payload, so I’m not going to cover every little tidbit, just the pieces that I’m excited about. For the full release notes, go here (NetApp SSO credentials required). Or, if you’re one of my customers, feel free to email me for a meeting and we can go over this release in detail.

The first thing worth mentioning is that with 9.6, NetApp is dropping the whole LTS/STS distinction; all releases going forward will be considered Long-Term Support (LTS) releases. What this means is that every release gets three years of full support plus two years of limited support.

The rest of the updates can be grouped into three themes or highlights:

  1. Simplicity and Productivity
  2. Expanded customer use cases
  3. Security and Data Protection

Some of the Simplicity highlights are:

  • System Manager gets renamed to ONTAP System Manager and overhauled; it is now based on REST APIs, with a Python SDK available at GA (see the sketch after this list)
    • Expect a preview of a new dashboard in 9.6
  • Automatic Inactive Data Reporting for SSD aggregates
    • This tells you how much data you could tier to an object store, freeing up that valuable SSD storage space
  • FlexGroup volume management has gotten simpler: volumes can now be shrunk and renamed, and MetroCluster is supported
  • Cluster setup has gotten even easier with automatic node discovery
  • Adaptive QoS support for NVMe/FC (maximums) and ONTAP Select (minimums)
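
Since the new System Manager is just a consumer of these same REST APIs, anything it can show you is scriptable. Here’s a minimal sketch of pulling a volume list with plain Python requests; the cluster address and credentials are placeholders, and I’m skipping the SDK since it only arrives at GA:

    # Minimal sketch: list volumes via the ONTAP 9.6 REST API.
    # "cluster.example.com" and the credentials are placeholders.
    import requests

    CLUSTER = "https://cluster.example.com"
    AUTH = ("admin", "password")

    # Ask for each volume's name and size; verify=False only because a lab
    # cluster typically has a self-signed certificate.
    resp = requests.get(
        f"{CLUSTER}/api/storage/volumes",
        params={"fields": "name,size"},
        auth=AUTH,
        verify=False,
    )
    resp.raise_for_status()

    for vol in resp.json()["records"]:
        print(vol["name"], vol.get("size"))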

Here’s what the System Manager dashboard currently looks like:

And here’s what we can look forward to in 9.6:

The Network Topology Visualization is very interesting; I’m looking forward to seeing how in-depth it gets.

Expanded Customer Use Cases

  • NVMe over FC gets more host support; it now includes VMware ESXi, Windows Server 2012/2016, Oracle Linux, Red Hat Enterprise Linux and SUSE Linux.
  • FabricPool improvements:
    • Gains support for two more hyperscalers: Google Cloud and Alibaba Cloud
    • The Backup policy is gone, replaced with a new All policy that’s great for importing known-cold data directly to the cloud (see the sketch after this list)
    • Inactive Data Reporting is now on by default for SSD aggregates and is viewable in ONTAP System Manager; use it to determine how much data you could tier
    • FabricPool aggregates can now store twice as much data
    • SVM-DR support
    • Volume moves can now be done without re-ingesting the cloud tier; only the metadata and hot data move
  • FlexGroup Volume Improvements:
    • Elastic sizing to automatically protect against one constituent member filling up and returning an error to the client
    • MetroCluster support, both FC and IP MetroCluster
    • Volume renames are now trivial
    • Volume size reduction is now available
    • SMB Continuous Availability (CA) file share support
  • FlexCache Improvements:
    • Caching to and from Cloud Volumes ONTAP
    • End-to-end data encryption
    • Max cached volumes per node increased to 100 from 10
    • Soft and hard tree quotas on the origin volume are enforced on the cached volume
    • FPolicy support
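
To give you an idea of how consumable this is programmatically, here’s a hedged sketch that flips an existing volume over to the new All policy via REST. The cluster address, credentials and volume name are placeholders, and I’m assuming the tiering.policy field on the volume object as documented:

    # Hedged sketch: set a volume's FabricPool tiering policy to "all".
    import requests

    CLUSTER = "https://cluster.example.com"  # placeholder
    AUTH = ("admin", "password")             # placeholder

    # Look up the volume's UUID by name first...
    vols = requests.get(
        f"{CLUSTER}/api/storage/volumes",
        params={"name": "archive_vol"},      # hypothetical volume
        auth=AUTH,
        verify=False,
    ).json()["records"]

    # ...then PATCH its tiering policy; "all" replaces the old "backup"
    # policy and sends known-cold blocks straight to the cloud tier.
    requests.patch(
        f"{CLUSTER}/api/storage/volumes/{vols[0]['uuid']}",
        json={"tiering": {"policy": "all"}},
        auth=AUTH,
        verify=False,
    ).raise_for_status()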

Security and Data Protection

  • Over-the-wire encryption for SnapMirror
    • Coupled with at-rest encryption, data can now be encrypted end-to-end
  • SnapMirror Synchronous now supports
    • NFSv4, SMB 2 & 3 and mixed NFSv3/SMB volumes
    • This is in addition to existing support for FCP, iSCSI and NFSv3 (see the sketch after this list)
  • NetApp Aggregate Encryption (NAE)
    • This can be seen as an evolution of NetApp Volume Encryption (NVE): all volumes in the aggregate share the same key.
    • Deduplication across volumes in the aggregate is supported for added space savings
  • Multi-tenant Key Management for Data At-Rest Encryption
    • Each tenant SVM can be configured with its own key management servers
    • Neighbouring tenants are unaffected by each other’s encryption actions and maintain control of their own keys
    • This is an added license
  • MetroCluster IP Updates
    • Support for entry AFF and FAS systems!
      • Personally I think this one is a game-changer and will really drive MetroCluster adoption now that the barrier to entry is so low
    • AFF A220 and FAS2750 and newer only
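
Here’s a rough sketch of what creating one of those synchronous relationships could look like over the new REST API. I’m assuming the /api/snapmirror/relationships endpoint and the built-in Sync policy; the SVM and volume names are placeholders:

    # Hedged sketch: create a SnapMirror Synchronous relationship over REST.
    import requests

    CLUSTER = "https://dst-cluster.example.com"  # placeholder destination
    AUTH = ("admin", "password")                 # placeholder credentials

    resp = requests.post(
        f"{CLUSTER}/api/snapmirror/relationships",
        json={
            "source": {"path": "svm_prod:smb_vol"},        # now valid for SMB 2/3
            "destination": {"path": "svm_dr:smb_vol_dst"},
            "policy": {"name": "Sync"},                    # or "StrictSync"
        },
        auth=AUTH,
        verify=False,
    )
    resp.raise_for_status()
    print(resp.json())  # the POST is asynchronous and returns a job to poll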

And that is most of the new features and enhancements appearing in 9.6. 9.6RC1 is expected around the second half of May, and GA typically comes about six weeks later. You can bet that I’ll have it running in my lab the day it comes out.

ADP(v1) and ADPv2 in a nutshell, it’s delicious!

Ever since clustered Data ONTAP went mainstream over 7-Mode, the dedicated root aggregate tax has been a bone of contention for many, especially for those entry-level systems with internal drives. Can you imagine buying a brand new FAS2220 or FAS2520 and being told that not only are you going to lose two drives as spares, but also another six to your root aggregates? This effectively left you with four drives for your data aggregate, two of which would be devoted to parity. I don’t think so. Now, this is a bit of an extreme example that was seldom deployed. Hopefully you had a deployment engineer who cared about the end result and would use RAID-4 for the root aggregates and maybe not even assign a spare to one controller, giving you seven whole disks for your active-passive deployment. Still, this was kind of a shaft. In a 24-disk system deployed active-active, you’d likely get something like this:

Traditional cDOT

Enter ADP.

In the first version of ADP introduced in version 8.3, clustered Data ONTAP gained the ability to partition drives on systems with internal drives as well as the first two shelves of drives on All Flash FAS systems. What this meant was the dedicated root aggregate tax got a little less painful. In this first version of ADP, clustered Data ONTAP carved each disk into two partitions: a small one for the root aggregates and a larger one for the data aggregate(s). This was referred to as root-data or R-D partitioning. The smaller partition’s size depended on how many drives existed. You could technically buy a system with fewer than 12 drives, but the ADP R-D minimum was eight drives. By default, both partitions on a disk were owned by the same controller, splitting overall disk ownership in half.

8.3 ADP, R-D


You could change this with some advanced command-line trickery to still build active-passive systems and gain two more drive partitions’ worth of data. Since you were likely only building one large aggregate on your system, you could also accomplish this in System Setup if you told it to create one large pool. This satisfied the masses for a while, but then those crafty engineers over at NetApp came up with something better.

Enter ADPv2.

Starting with ONTAP 9, not only did the operating system get a name change (7-Mode hasn’t been an option since version 8.2.3), but it also gained ADPv2, which carves the aforementioned data partition in half: root-data-data (R-D2) partitioning. Take note that this applies to SSDs only; spinning disks aren’t eligible for this secondary partitioning. In this new version, you get back one drive that you would have allocated as a spare, and you also get two of the parity drives back, lessening the pain of the RAID tax. With a minimum requirement of eight drives and a maximum of 48, here are the three main scenarios for this type of partitioning.

12 Drives:

ADPv2, R-D2 ½ shelf

24 Drives:

ADPv2, R-D2 1 shelf

48 Drives:

ADPv2, R-D2 2 shelves

As you can see, this is a far more efficient way of allocating your storage, one that yields up to ~17% more usable space on your precious SSDs.
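
If you’re wondering where that ~17% comes from, here’s a toy drive-equivalent calculation for a 24-drive active-active RAID-DP system. The spare and parity accounting below is a deliberate simplification (one RAID group per node, root slices ignored), not an official layout table, but it reproduces the figure:

    # Toy accounting in drive-equivalents; layouts are illustrative only.
    DRIVES = 24

    # ADPv1 (R-D): one near-full-size data partition per drive.
    # Per node: 1 spare drive and 2 parity drives' worth of partitions.
    adpv1_data = DRIVES - 2 - 4            # = 18 drive-equivalents

    # ADPv2 (R-D2): two half-size data partitions per drive, so spares and
    # parity are paid in half-drive units: roughly 1 spare drive-equivalent
    # total, plus 2 parity partitions (1 drive-equivalent) per node.
    adpv2_data = DRIVES - 1 - 2            # = 21 drive-equivalents

    gain = adpv2_data / adpv1_data - 1
    print(f"{adpv1_data} -> {adpv2_data} drive-equivalents (+{gain:.0%})")  # +17%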

So that’s ADP and ADPv2 in a nutshell—a change for the better. Interestingly enough, the ability to partition disks has led to a radical change in the Flash Pool world called “Storage Pools,” but that’s a topic for another day.

NetApp announces Clustered Data ONTAP 8.3

Today NetApp announced the next major release of its clustered Data ONTAP operating system, and a major release it is. This is the first release of ONTAP that does not include the dual payload of both 7-Mode and cluster-mode, and that will be the norm going forward. This release has three major themes:

  1. Flash, data protection, multi-tenancy, cloud, and efficiency enhancements
  2. Simplified Deployment, upgrade, transition, and support
  3. Clustered ONTAP in mission-critical environments with MetroCluster

Flash, data protection, multi-tenancy, cloud, and efficiency enhancements

The first theme brings with it performance enhancements in the following ways:

  • More consistent and predictable performance and higher IOPS at lower latency in the All Flash FAS (AFF) and other flash-enabled systems thanks to read-path optimization.

Random Read IO

  • The CIFS lock manager has been parallelized, bringing improvements to CIFS-based file-services workloads.
  • The initial transfer as well as incremental updates for both SnapMirror and SnapVault relationships have been improved.
  • 8.3 has been optimized for more CPU cores, bringing performance enhancements to pre-FAS8000 systems. Initial claims are that FAS62xx performance is similar to that of systems running 8.1, while the FAS32xx and FAS22xx are showing 8.1-type performance in SAN deployments.

As far as efficiency enhancements are concerned, a feature I have long awaited is Advanced Disk Partitioning (ADP), which has three use cases:

  1. Root-data partitioning for All Flash FAS (AFF) systems.
  2. Root-data partitioning for Entry-level platforms.
  3. SSD partitioning for Flash Pools

The first two use cases mentioned above will greatly ease the dedicated root aggregate disk tax, which has been the bane of SMB buyers since cDOT’s initial (non-GX) release, providing a 20+% increase in storage efficiency on the 24-drive FAS255x as well as the FAS2240. This will be the default configuration for systems purchased with 8.3, but if you wish to retrofit an existing system you’ll have to evacuate your data and start fresh. As far as the third use case is concerned, the benefit here is a reduced parity-disk tax, as represented by the graphic below:


ADP

Other efficiency enhancements come by way of addressable cache: in fact, the limit for the complete complement of contemporary systems (read: FAS80xx and FAS25xx) has been quadrupled. Also, the 16KB cutoff for Flash Pool has been eliminated; compressed blocks are now read-cacheable, as are read-only volumes such as SnapMirror and SnapVault destinations.

Simplified Deployment, upgrade, transition, and support

In the never-ending quest to make its product easier to deploy, transition to, and use, NetApp brings the following laundry list of improvements.

  1. System Setup 3.0
    • Support of AFF aggregate creation
    • 8.3 networking support (More on this in a subsequent post.)
    • Four port cluster interconnect support
  2. System Manager 3.2
    • This becomes a cluster-hosted web service which can be reached from the network using Firefox, Chrome and IE on Windows, Linux and Mac platforms.
    • 8.3 networking support
  3. Automated NDU
    • Three commands to upgrade your cluster.
    • One command to monitor the progress (see the sketch after this list for the commands themselves).
  4. Networking
    • There is a whole litany of changes/improvements, too many to list here. The biggest one, however, may be IPspaces: now you can have overlapping subnets in those multi-tenant environments.
  5. Virtualization
    • vVol support (pending VMware support)
    • FlexClone for SVI
    • Inline zero write detection and elimination.
  6. 7MTT
    • Version 1.4 will bring with it a new collect-and-assess feature to validate the destination cluster based on an assessment of the source 7-Mode system.
    • 2.0 brings with it the much-sought-after SAN migration.
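
For the curious, those are the cluster image commands in the clustershell. Here’s a hedged sketch driving them from Python over SSH with paramiko, since there’s no REST API at this vintage; the host, credentials and package URL are placeholders:

    # Hedged sketch: automated NDU via SSH. Host, credentials, and the
    # image URL are placeholders; the commands are the automated-NDU
    # clustershell commands.
    import paramiko

    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect("cluster.example.com", username="admin", password="password")

    for cmd in (
        "cluster image package get -url http://web.example.com/83_image.tgz",
        "cluster image validate -version 8.3",
        "cluster image update -version 8.3",
    ):
        _, stdout, _ = ssh.exec_command(cmd)
        print(stdout.read().decode())

    # ...and the one command to watch the rolling upgrade run:
    _, stdout, _ = ssh.exec_command("cluster image show-update-progress")
    print(stdout.read().decode())
    ssh.close()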

Clustered ONTAP in mission-critical environments with MetroCluster

Not a whole lot more to say about that except that it is finally here. Some of the highlights are:

  • A two-node cluster at either site
  • Clients can be served from all four nodes at the same time
  • Support for Non Disruptive Operations (NDO)

While I covered a lot in this post, I didn’t cover everything, as 8.3 is a major release indeed. Now the big question many of you will have is: what platforms will support it? Look no further:

  • FAS8xxx
  • FAS25xx
  • FAS62xx
  • FAS32xx (except the FAS3210)
  • FAS22xx

As for what I didn’t cover in this post but you may wish to research further:

  • VM Granular Management
  • 8.3 style networking
  • DataMotion for LUNs
  • Offline Foreign LUN Import
  • Version Independent SnapMirror (this one’s pretty cool)
  • Other Performance Improvements
  • Further Protocol Enhancements (SAN and NAS)
  • Data ONTAP in the cloud (Cloud ONTAP)