Tag Archives: NetApp

C-Series lineup

NetApp Announces A Whole New Line

Up until today, if you were looking for a physical ONTAP array for your environment, your choices were the hybrid-flash FAS arrays, offering around 5–10 ms of latency, or the sub-millisecond AFF A-Series. Sure, there was one anomaly in there, the QLC-based FAS 500f, but that AFF in FAS clothing was just that, an anomaly. I have no evidence to point to here, but my theory is that the 500f was NetApp’s way of dipping a toe into the water of QLC-based arrays. At launch, the 500f was pricey and its configurations limited and restricted, both of which were addressed at some point afterwards. As an employee of a partner that sells a lot of NetApp, I looked at the 500f when it first launched and then basically never looked at it again because of those two points.

Today, NetApp is announcing the all-new C-Series of QLC-based arrays, the “C” standing for “Capacity Enterprise Flash”. While the controllers themselves aren’t new, the fact that they support only QLC media is what is different. I won’t go into the details of what QLC, or quad-level cell flash, is in this post; the fact of the matter is that it is more affordable than triple-level cell (TLC) flash and almost as performant. What this means for those purchasing NetApp arrays is that they can get near the performance of an AFF system at a fraction of the cost. Most of us in the storage world know that 10k and 15k RPM SAS drives are slowly being phased out in favour of high-capacity SATA drives and high-performance NAND storage, leaving a void. QLC-based arrays will fill that void, and at a higher performance level. If you start to research QLC vs. TLC, you’ll find lots of concerns around endurance, which are not completely unfounded, but you would have found the same concerns when the industry went from multi-level cell (MLC) to TLC, and that seems to have gone well enough. The technology of the storage devices themselves improves over time, and software-based mitigation strategies such as write avoidance improve as well. I’m not knowledgeable enough on this latter point to go into details, but ONTAP is a beast and has all sorts of tricks up its sleeve.

So without further ado, I present NetApp’s Capacity Enterprise Flash line, the AFF C800, AFF C400 and AFF C250:

[Images: AFF C800, AFF C400 and AFF C250]

Quick Specs:

                                             | AFF C800     | AFF C400            | AFF C250
  Max drive count (15.3 TB NVMe QLC)         | 144          | 96                  | 48
  Max effective capacity (5:1 efficiencies)  | 8.8 PB       | 5.9 PB              | 2.9 PB
  Max usable capacity (1:1)                  | 1.6 PiB      | 1.06 PiB            | 540.37 TiB
  Minimum configuration                      | 12 × 15.3 TB | 8 × 15.3 TB         | 8 × 15.3 TB
  100GbE ports per HA pair                   | 20           | 16                  | 4
  25GbE ports per HA pair                    | 16           | 12 onboard / 16 HBA | 4 onboard / 16 HBA
  32Gb FC ports                              | 32           | 32                  | 16

By the numbers

Now, some of you may have thought, “I thought there was already a C-Series with the C190?”, and you’d be right. NetApp is repurposing the C-Series branding as well as introducing a successor to the C190, the AFF A150. While the new A150 will still have some restrictions, it won’t be nearly as restrictive as the C190. The physical form factor remains the same as the C190, but the A150 will allow for up to two expansion shelves, for a total of 72 SAS SSDs including the internal ones, in capacities of 960 GB, 3.8 TB and 7.6 TB, coming to a max usable capacity of roughly 402 TiB, or 2.2 PB at a 5:1 efficiency level.

Back to the new C-Series conversation: they bring with them a new default licensing model, ONTAP One. ONTAP One is something I have personally been asking for for many years at this point, and it includes all of the licenses: Core, Data Protection, Hybrid Cloud, and Security & Compliance. Personally, I’m looking forward to not having to worry about which features are available with a given license offering; with ONTAP One as the default licensing model on the C-Series, you or your customers will never be left wondering whether their array has a particular feature.

The C-Series should be available to quote as of March 27, 2023, and should start shipping by the end of April. This, as well as all of the information above, is based on pre-release information I received and may be subject to change. I will endeavour to add corrections below should any of it change at launch.

Rubrik and NetApp, did that just happen?

I wasn’t sure I’d ever see the day where I’d be writing about not only the partnership of NetApp and Rubrik, but actual technological integration; this always seemed somewhat unlikely. While there had been some rumours flying around in the background for some time, the first real sign of cooperation between the two companies was the publication of a Solution Brief on combining NetApp StorageGRID with Rubrik Cloud Data Management (CDM) to automate data lifecycle management through Rubrik’s simple control plane while using StorageGRID as a cloud-scale, object-based archive target. And then…nothing, not even the sound of crickets.

As summer started to draw to a close and the kids were back in school, those in the inner circle started to hear things, interesting things. If you were to talk to your local Rubrik reps or sales engineers, the stories they had to tell were around NAS backup with NAS Direct Archive, as well as using older NetApp gear as a NAS target; nothing game-changing. Backing up NAS filesystems still involved fully trawling the directory structure, which was time-consuming and performance-impacting; something was still missing.

On September 24th this year, exactly one month ago, a new joint announcement hit the Internet: Rubrik and NetApp Bring Policy-Based Data Management to Cloud-Scale Architectures. While interesting, it was still not exactly what some of us were waiting for. Well, wait no longer: as of now, Rubrik has officially announced plans to integrate with NetApp’s SnapDiff API. What’s that, you may ask? It is the ability to poll ONTAP via API calls that leverage the internal metadata catalogue to quickly identify the file and directory differences between two snapshots. This is a game changer for indexing NAS backups: since Rubrik will no longer need to scan the file shares manually, backup windows will shrink dramatically. Also, while other SnapDiff licensees can send data to another NetApp target, Rubrik is the first backup vendor to license SnapDiff and be able to send the data to standard public cloud storage.

Since the ink is just drying on Rubrik’s licensing of the SnapDiff API, it’s not quite ready in their code yet, but integration is being targeted for release 5.2 of CDM. Rubrik will also have a booth at INSIGHT (207) and be presenting on Tuesday, session number 9019-2; stop by to see what all the fuss is about. And be sure to look for me and my fellow A-Team members; there’s a good chance you’ll find us hanging around near the NetAppU booth, where you’ll find a pretty cool surprise! You can also find me Wednesday, October 30th, at 11:30 am presenting 3009-2, Ask the A-Team – Building A Modern Data Platform; register for that today.

ONTAP 9.6

UPDATE, MAY 17: RC1 is out; you can grab it here.

It’s my favourite time of year, folks: yup, it’s time for some new ONTAP feature announcements. It feels as though 9.6 is going to have quite the payload, so I’m not going to cover every little tidbit, just the pieces that I’m excited about. For the full release notes, go here (NetApp SSO credentials required). Or, if you’re one of my customers, feel free to email me for a meeting and we can go over this release in detail.

The first thing worth mentioning is that with 9.6, NetApp is dropping the whole LTS/STS thing, and all releases going forward will be considered Long-Term Service (LTS) releases. What this means is that every release gets three years of full support plus two years of limited support.

The rest of the updates can be grouped into three themes or highlights:

  1. Simplicity and Productivity
  2. Expanded customer use cases
  3. Security and Data Protection

Some of the Simplicity highlights are:

  • System Manager gets renamed to ONTAP System Manager and overhauled; it is now based on REST APIs, with a Python SDK available at GA (see the example after this list)
    • Expect a preview of a new dashboard in 9.6
  • Automatic Inactive Data Reporting for SSD aggregates
    • This tells you how much data you could tier to an object store, freeing up that valuable SSD storage space
  • FlexGroup volume management has gotten simpler, with the ability to shrink and rename them, plus MetroCluster support
  • Cluster setup has gotten even easier with automatic node discovery
  • Adaptive QoS support for NVMe/FC (maximums) and ONTAP Select (minimums)
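
Since the overhauled System Manager sits on top of the new ONTAP REST API, you can hit the same endpoints yourself once you’re on 9.6. Here is a minimal sketch using curl; the cluster name and credentials are placeholders:

  # List volumes via the ONTAP REST API (9.6 and later).
  # "cluster1.example.com" and the admin account are placeholders.
  curl -k -u admin "https://cluster1.example.com/api/storage/volumes?fields=name,size,state"

  # The API is self-documenting; you can browse it at https://cluster1.example.com/docs/api/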

Here’s what the System Manager dashboard currently looks like:

And here’s what we can look forward to in 9.6:

The Network Topology Visualization is very interesting; I’m looking forward to seeing how in-depth it gets.

Expanded Customer Use Cases

  • NVMe over FC gets more host support; it now includes VMware ESXi, Windows 2012/2016, Oracle Linux, Red Hat Enterprise Linux and SUSE Linux.
  • FabricPools improvements:
    • Gains support for two more hyperscalers: Google Cloud and Alibaba Cloud
    • The Backup policy is gone, replaced with a new All policy, great for importing known-cold data directly to the cloud (see the example after this list)
    • Inactive Data Reporting is now on by default for SSD aggregates and is viewable in ONTAP System Manager; use this to determine how much data you could tier.
    • FabricPool aggregates can now store twice as much data
    • SVM-DR support
    • Volume moves can now be done without re-ingesting the cloud tier; only the metadata and hot data are moved
  • FlexGroup Volume Improvements:
    • Elastic sizing to automatically protect against one constituent member filling up and returning an error to the client
    • MetroCluster support, both FC and IP MetroCluster
    • Volume rename now trivial
    • Volume size reduction now available
    • SMB Continuous Availability (CA) file share support
  • FlexCache Improvements:
    • Caching to and from Cloud Volumes ONTAP
    • End-to-end data encryption
    • Max cached volumes per node increased to 100 from 10
    • Soft and hard quota (tree) on origin volume enforced on cached volume
    • FPolicy support
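
To put the new All tiering policy in context, here is roughly what tagging a known-cold volume for the cloud tier looks like from the CLI; the SVM and volume names are placeholders:

  # Tier every block of a known-cold volume straight to the cloud tier.
  # The "all" policy replaces the old "backup" policy in 9.6.
  volume modify -vserver svm1 -volume vol_archive -tiering-policy all

  # Confirm the change took effect.
  volume show -vserver svm1 -volume vol_archive -fields tiering-policy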

Security and Data Protection

  • Over-the-wire encryption for SnapMirror
    • Coupled with at-rest encryption, data can now be encrypted end-to-end
  • SnapMirror Synchronous now supports
    • NFSv4, SMB 2 & 3 and mixed NFSv3/SMB volumes
    • This is in addition to existing support for FCP, iSCSI and NFSv3
  • NetApp Aggregate Encryption (NAE)
    • This can be seen as an evolution of NetApp Volume Encryption (NVE); all volumes in the aggregate share the same key (see the example after this list).
    • Deduplication across volumes in the aggregate is supported for added space savings
  • Multi-tenant Key Management for Data At-Rest Encryption
    • Each tenant SVM can be configured with its own key management servers
    • Neighbour tenants are unaffected by each other’s encryption actions and maintain control of their own keys
    • This is an added license
  • MetroCluster IP Updates
    • Support for entry AFF and FAS systems!
      • Personally I think this one is a game-changer and will really drive MetroCluster adoption now that the barrier to entry is so low
    • AFF A220 and FAS2750 and newer only
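
As a rough sketch of the NAE workflow mentioned above (aggregate, SVM and volume names are placeholders, and the exact syntax may vary slightly by release):

  # Create an aggregate whose volumes will all share a single encryption key (NAE).
  storage aggregate create -aggregate aggr1_nae -diskcount 24 -encrypt-with-aggr-key true

  # Volumes placed in that aggregate are encrypted by default and can still be
  # deduplicated against one another for added space savings.
  volume create -vserver svm1 -volume vol_secure -aggregate aggr1_nae -size 1TB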

And that is most of the new enhancements and features appearing in 9.6; 9.6RC1 is expected around the second half of May, and GA typically comes about six weeks later. You can bet that I’ll have it running in my lab the day it comes out.

NetApp announces Clustered Data ONTAP 8.3

Today NetApp announced the next major release of its clustered Data ONTAP operating system, and a major release it is. This is the first release of ONTAP that does not include the dual payload of both 7-mode and cluster-mode, and that will be the norm going forward. This release has three major themes:

  1. Flash, data protection, multi tenancy, cloud, and efficiency enhancements
  2. Simplified Deployment, upgrade, transition, and support
  3. Clustered ONTAP in mission critical environments with MetroCluster

Flash, data protection, multi tenancy, cloud, and efficiency enhancements

The first theme brings with it performance enhancements in the following ways:

  • More consistent and predictable performance and higher IOPS at lower latency in the All Flash FAS (AFF) and other flash-enabled systems thanks to read-path optimization.

[Graph: Random Read IO]

  • The CIFS lock manager has been parallelized, bringing improvements to CIFS-based file-services workloads.
  • The initial transfer as well as incremental updates for both SnapMirror and SnapVault relationships have been improved.
  • 8.3 has been optimized for more CPU cores, bringing performance enhancements to pre-FAS8000 systems. Initial claims are that FAS62xx performance is similar to that of systems running 8.1, while the FAS3xxx and FAS22xx are showing 8.1-type performance in SAN deployments.

As far as efficiency enhancements are concerned, a long-awaited feature (by me, at least) is Advanced Disk Partitioning (ADP), which has three use cases:

  1. Root-data partitioning for All Flash FAS (AFF) systems.
  2. Root-data partitioning for Entry-level platforms.
  3. SSD partitioning for Flash Pools

The first two use cases mentioned above will greatly ease the dedicated root aggregate disk tax, which has been the bane of SMB buyers since cDOT’s initial (non-GX) release, providing a 20+% increase in storage efficiency on the 24-drive FAS255x as well as the FAS2240. This will be the default configuration for systems purchased with 8.3, but if you wish to retrofit an existing system you’ll have to evacuate your data and start fresh. As far as the third use case is concerned, the benefit here is a reduction in the parity disk tax, as represented by the graphic below:

 

[Graphic: ADP]

Other efficiency enhancements come by way of addressable cache; in fact, it has been quadrupled across the complete complement of contemporary systems (read: FAS80xx and FAS25xx). Also, the 16KB cutoff for Flash Pool has been eliminated, and compressed blocks are now read-cacheable, as are read-only volumes such as SnapMirror and SnapVault destinations.

Simplified Deployment, upgrade, transition, and support

In the never-ending quest to make their product easier to deploy, transition to, and use, NetApp brings the following laundry list of improvements.

  1. System Setup 3.0
    • Support of AFF aggregate creation
    • 8.3 networking support (More on this in a subsequent post.)
    • Four port cluster interconnect support
  2. System Manager 3.2
    • This becomes a cluster-hosted web service which can be reached from the network using Mozilla Firefox, Chrome and IE on Windows, Linux and Mac platforms.
    • 8.3 networking support
  3. Automated NDU
    • Three commands to upgrade your cluster.
    • One command to monitor the progress (see the example after this list).
  4. Networking
    • There is a whole litany of changes/improvements, too many to list here. The biggest one, however, may be IPspaces, so now you can have overlapping subnets in those multi-tenant environments.
  5. Virtualization
    • vVol support (pending VMware support)
    • FlexClone for SVI
    • Inline zero write detection and elimination.
  6. 7MTT
    • Version 1.4 will bring with it a new collect and assess feature to validate the destination cluster based on the assessment of the source 7-mode system.
    • 2.0 brings with it the much-sought-after SAN migration.
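
To give the automated NDU item above some substance, the upgrade really does boil down to a handful of commands; a sketch, with the package URL and target version as placeholders:

  # Stage the Data ONTAP 8.3 image on the cluster (the URL is a placeholder).
  cluster image package get -url http://web.example.com/83_image.tgz

  # Validate the target version, then kick off the rolling, non-disruptive update.
  cluster image validate -version 8.3
  cluster image update -version 8.3

  # The one command to monitor progress.
  cluster image show-update-progress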

Clustered ONTAP in mission critical environments with MetroCluster

Not a whole lot more to say around that except that it is finally here. Some of the highlights are:

  • Two node cluster at either site
  • Clients can be served from all four nodes at the same time
  • Support for Non Disruptive Operations (NDO)

While I covered a lot in this post, I didn’t cover everything as 8.3 is a major release indeed. Now the big question many of you will have is what platforms will support it? Look no further:

  • FAS8xxx
  • FAS25xx
  • FAS62xx
  • FAS32xx (except the FAS3210)
  • FAS22xx

As for what I didn’t cover in this post but you may wish to research further:

  • VM Granular Management
  • 8.3 style networking
  • DataMotion for LUNs
  • Offline Foreign LUN Import
  • Version Independent SnapMirror (this one’s pretty cool)
  • Other Performance Improvements
  • Further Protocol Enhancements (SAN and NAS)
  • Data ONTAP in the cloud (Cloud ONTAP)

NetApp Refreshes Entry-Level FAS Systems

Today NetApp announced the successors to their entry-level line of FAS storage arrays: the FAS2552, FAS2554 and FAS2520, which replace the FAS2240-2, FAS2240-4 and FAS2220 respectively.

Why is this important? Until now, in order to run Clustered Data ONTAP, you had to use your one and only expansion option for a 10GbE card for the cluster interconnect network, giving up any chance of deploying Fibre Channel. Technically, since this was a two-port card, you could still provide a 10GbE uplink at the expense of redundancy on the ClusterNet backend. The new models, however, give up the mezzanine slot altogether in favour of a minimum of 4 × 10GbE ports on board on the FAS2520, and 4 × UTA2 ports on both the FAS2552 and FAS2554.

Highlights:

With this refresh, NetApp continues to use the same dual-core, hyper-threaded, 1.73GHz Jasper Forest processors as before (which, incidentally, were specifically designed for both embedded and storage applications), but the quantity is doubled to four, not to mention there’s a three-fold increase in memory. All of this added memory increases the ability of Data ONTAP to address more flash, raising the Flash Pool™ caching limit to 4TB. Finally, with the addition of onboard 10GbE across the line, NetApp closes the gap in regard to ClusterNet interconnect requirements. The minimum version of ONTAP required for either 7-mode or Cluster-Mode will be the one it ships with, 8.2.2RC1.

FAS2520

The FAS2520A is a 2U appliance supporting 12 SAS, SATA and NSE drives internally, and up to 72 additional drives externally. Connectivity is provided by 4 × 6Gb SAS ports, 4 × 1GbE interfaces and 8 × 10GBASE-T ports. Unlike its predecessor, there are no expansion slots.


NetApp’s new FAS2520, rear view.

FAS2552/FAS2554

The FAS2552A is a 2U appliance supporting 24 SAS, NSE and SSD drives internally, and the FAS2554A is a 4U appliance supporting SATA, NSE and SSD drives internally; both models support up to an additional 120 drives externally. Connectivity is provided by 4 × 6Gb SAS ports, 4 × 1GbE interfaces and 8 × UTA2 ports. The UTA2 ports can be configured as either 8Gb FC, 16Gb FC or 10GbE, and the 10GbE configuration does indeed support FCoE as well as the usual CIFS, NFS and iSCSI options. Because each pair of ports is driven by one ASIC, the UTA2 ports must be configured in pairs. It should be noted, however, that their personality can be modified in the field; this requires a reboot as well as the requisite SFP.
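
For reference, on current clustered Data ONTAP releases changing a UTA2 port pair’s personality looks something like the following; the node and adapter names are placeholders, and the change only takes effect after a reboot with the matching SFP+ modules installed:

  # Check the current mode of the converged ports.
  system node hardware unified-connect show -node fas2552-01

  # Flip a port pair from FC to CNA (10GbE/FCoE) mode; the ports change in pairs.
  system node hardware unified-connect modify -node fas2552-01 -adapter 0e -mode cna
  system node hardware unified-connect modify -node fas2552-01 -adapter 0f -mode cna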


NetApp’s new FAS2552, rear view.


NetApp’s new FAS2554, rear view.

[Image: Port legend]

Summary

With this second round of major updates to the FAS systems this year, the entire line is now truly Clustered Data ONTAP-ready, with every model sporting 10 Gig connectivity on board. What I find most noteworthy is the amount of RAM that has been added, which significantly increases the amount of flash-based cache the devices can address. Flash Pools abound!

Simulate a two node cDOT 8.2 cluster on ESXi 5.1 in 17 easy steps.

Here’s a quick how-to I wrote a few months back when I ran into trouble trying to install the cDOT simulator in an ESXi environment. This was done on 5.1 but should work for 5.0 and 5.5 as well.

  1. Load the VMware multiextent module: “/sbin/vmkload_mod multiextent”. (Add this to /etc/rc.local.d/local.sh so it gets loaded on boot going forward. It used to be loaded by default, but that changed in VMware 4.1; more here.)
  2. Create a new vSwitch to use for the Cluster Network.
  3. Download the 8.2 cDOT VMDK here.
  4. Untar and ungzip the vsim_esx-cm.tgz and copy it to your datastore.
  5. Using vCenter, browse the directory on your datastore that contains the unarchived files above.
  6. Locate DataONTAP.vmx, right-click and choose “Add to Inventory.”
  7. Give it a name (cDOT 8.2 Cluster node 1), choose a host, click Finish. DO NOT POWER IT ON YET.
  8. Edit the properties of this newly created VM and make sure that the first two NICs (e0a and e0b) are on the cluster vSwitch.
  9. Power on the vm and open the console.
  10. Hit CTRL-C to enter the Boot Menu.
  11. Choose option 4, type “yes” to the two questions and wait for your disks to zero.
  12. Run through the cluster setup script, entering the required licenses (available here) when prompted. The only required one is the Cluster License itself; the rest can be added later.
  13. Repeat steps 4-8 from above, choosing a different name in step 7 (cDOT 8.2 Cluster node 2). You MUST repeat step 4; do NOT leverage cloning, as it will NOT work.
  14. When you power up this VM, it is VERY important to not let it boot, so open up the console right away and hit any key other than Enter for the VLOADER prompt.
  15. Set and verify the SYS_SERIAL_NUM and bootarg.nvram.sysid as described on page 32, steps 10 and 11, in the Simulate ONTAP 8.2 Installation and Setup Guide (see the sketch after this list).
  16. Type boot at the VLOADER prompt to boot the node.
  17. Repeat steps 10-12 from above, choosing to join an existing cluster and using the second set of licenses located in the text file linked to in step 12. ONTAP 8.2 introduced node-locked licensing, so it is important to use the right keys.
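
For step 15, the interaction at the VLOADER prompt on the second node looks roughly like this; the actual serial number and system ID values come from the setup guide and are shown here only as placeholders:

  # At node 2's VLOADER prompt, set the unique serial number and system ID from the
  # Simulate ONTAP 8.2 guide, verify them, then boot the node (step 16).
  setenv SYS_SERIAL_NUM <serial-number-from-guide>
  setenv bootarg.nvram.sysid <system-id-from-guide>
  printenv SYS_SERIAL_NUM
  printenv bootarg.nvram.sysid
  boot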

 

You should now have a functioning, simulated two node cluster.