Category Archives: ONTAP

ONTAP One for all and all for One

Back in February, NetApp announced their new QLC-based AFF systems, the NetApp C-Series, C being for capacity flash. That new product line alone was celebration-worthy, but what was really exciting, and a touch of burying the lede, was the inclusion of a licensing model called ONTAP One, the all-you-can-eat equivalent of NetApp licensing. When a C-Series system is licensed with ONTAP One, you get to use all of the features of ONTAP. At the time of launch, my only complaint was that it was only available on the new platform, but behind the scenes I was told to watch that space. Well, as of today, all new and existing FAS, AFF and ASA systems licensed with anything more than the bare minimum can get licenses for everything ONTAP.

NetApp has simplified their licensing to only two options, ONTAP Base and ONTAP One. If your existing system had either Flash, Core+DP, or Premium, you are now entitled to ONTAP One licensing. What exactly does that look like? Here’s a picture:

How do you acquire your new licenses, you may ask? Customers with a valid support contract can log in to the NetApp support portal, download a new license file and install it. Some features may require an upgrade to ONTAP 9.10.1, but you should really be on at least that release by now.
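If you prefer the command line, keys can be added there as well. A minimal sketch; the 28-character code below is a placeholder, and newer NLF-style license files are typically uploaded through System Manager instead:

system license show
system license add -license-code AAAAAAAAAAAAAAAAAAAAAAAAAAAA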

As with all great things, there are some caveats and restrictions but not enough to warrant covering them here. The majority of my readers will be able to proceed as above, and edge cases around the IPA license model versus LICKEYs or SnapMirror Cloud/S3 SnapMirror licenses can be found in the documentation.

ONTAP One is something I’ve wanted NetApp to introduce for years. It will not only eliminate post-sales problems caused by improper configurations but also remove a FUD point used by their competition.

Installing the ONTAP 9.7 simulator in Fusion 12.1

At the time of writing, 9.8 is available, but I’m writing this specifically for someone who is trying to install 9.7 and having problems. Before I get into the actual simulator installation, we need to cover some VMware Fusion basics first.

With regards to networking, VMware Fusion can provide three different interface types; they are as follows:

  1. Bridged – this type puts the interface directly on the same LAN as your Mac, this is great if you want the VM to appear as though it’s on the network that your Mac is using.
  2. Host-only – this is a completely isolated network, the only hosts that can access it are those on your Mac configured with this type of interface. There is no external access with this type.
  3. NAT – this is similar to host-only, but it also allows hosts with this type of interface to reach out beyond the Mac, such as for Internet access.

If you want more details on this, please go read this KB.

By default, the simulator has four network interfaces. The first two, e0a/e0b, are for the ClusterNet network, the back-end network used by cluster nodes to communicate with each other, and should be of type host-only. The second two, e0c/e0d, are for client and management access; these are of type NAT but can also be set to bridged. If you use NAT, VMware will assign IP addresses via DHCP based on the configuration of the vmnet8 interface settings; to view this, cat the file located here:

/Library/Preferences/VMware\ Fusion/vmnet8/dhcpd.conf

Mine looks like this:

allow unknown-clients;
default-lease-time 1800;                # default is 30 minutes
max-lease-time 7200;                    # default is 2 hours

subnet 172.16.133.0 netmask 255.255.255.0 {
	range 172.16.133.128 172.16.133.254;
	option broadcast-address 172.16.133.255;
	option domain-name-servers 172.16.133.2;
	option domain-name localdomain;
	default-lease-time 1800;                # default is 30 minutes
	max-lease-time 7200;                    # default is 2 hours
	option netbios-name-servers 172.16.133.2;
	option routers 172.16.133.2;
}
host vmnet8 {
	hardware ethernet 00:50:56:C0:00:08;
	fixed-address 172.16.133.1;
	option domain-name-servers 0.0.0.0;
	option domain-name "";
	option routers 0.0.0.0;
}

What this means is that any interface set to NAT in my instance of Fusion will receive a DHCP address in the subnet 172.16.133.0/24, but the DHCP pool itself is only 172.16.133.[128-254]. The subnet mask will still be 255.255.255.0 (i.e., /24) and the default gateway is 172.16.133.2, as that is the internal interface of the virtual router created to do the NAT; .1 is held by the “external” interface, which you can view by issuing ifconfig vmnet8 at the command prompt. Note that this interface is created when Fusion is launched and torn down when you quit. If you set the interface type to bridged, those interfaces will get DHCP addresses from the same LAN that the Mac is connected to.
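To put that into practice, here are the two quick checks from a Terminal window on the Mac:

# Show the DHCP scope Fusion uses for NAT interfaces
cat /Library/Preferences/VMware\ Fusion/vmnet8/dhcpd.conf

# Show the host-side vmnet8 interface (only exists while Fusion is running)
ifconfig vmnet8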

On to the actual installation…

The first thing you need to do is download the OVA from NetApp:

  1. Go to https://support.netapp.com/
  2. Log in (yes, it’s required)
  3. At the top, click Downloads → Product Evaluation
  4. Click “Data ONTAP™ Simulator”
  5. Agree to the terms
  6. Download the OVA and license keys for the version you’re looking for.

Now that you have the OVA, you’re ready to import it into Fusion. Launch Fusion, then click the + sign and choose Import:

[Screenshot: Import]

Browse for and open the downloaded OVA:

[Screenshot: Choose File]
[Screenshot: Open]

Now click continue:

[Screenshot: Continue]

Give the folder you’re going to store it in a name and click Save; I like to name it after the node:

[Screenshot: ONTAP 9.7, Node 1]

Fusion will import the OVA and present you with the settings. You can modify them if you want, but for now I’m going to leave them at the defaults. Click Finish:

[Screenshot: Finish]

You’ll likely be asked if you’d like to upgrade the VM version; don’t bother:

At this point the vSIM will boot for the first time. I believe the official instructions tell you to hit Ctrl-C to halt the boot and call up the boot menu, then issue option 4, but if this is the first node, you do not have to do that. The root aggregate is automatically created:

[Screenshot: First boot with aggr0 creation]

Now you can open a browser and point it at the IP address listed on your screen; in my case it will be https://172.16.133.132/, but it may be different for you. You will get a certificate error, but bypass that to access the GUI and finish the configuration. If you do not get the following screen, or get no site at all, something else is wrong. Also, hover your mouse over the node in the Health card; if the serial number doesn’t appear, refresh the web page, otherwise configuration will fail:

[Screenshot: No node serial]

It should look like this:

[Screenshot: With node serial]

Now enter all the required information. Since the IP addresses are being statically assigned, I’m choosing ones outside of the DHCP range, as should you:

[Screenshot: Cluster name and admin password]
[Screenshot: Networking information]

I don’t check the “single-node” box. The cluster will still work as a single node if you don’t, but if you do, it removes the ClusterNet interfaces completely. I like having those interfaces for experimentation and teaching purposes; it also keeps the door open to adding a second node, which I will cover in a follow-up post if there is anyone interested. Now click Submit:

[Screenshot: Other info]

At this point I like to start pinging either the cluster IP I specified or the node IP so I can see when the cluster gets configured, since the browser doesn’t always refresh to the new IP address:

[Screenshot: ping]

Once ping starts responding, go ahead and visit the new IP address via your browser:

Now, the person I wrote this blog entry for isn’t getting the GUI above, but instead the GUI for the out-of-band interface of a UCS server, because the IP space their vmnet8 is using collides with production IP space. You can verify this kind of collision by disconnecting any Ethernet connections and turning off Wi-Fi; once that is done, reload the browser and the IP conflict should be resolved until you’re connected once again. To resolve it permanently, that person will need to edit the dhcpd.conf file for vmnet8 mentioned above, using a subnet known not to conflict. Here’s an example alternative dhcpd.conf:

allow unknown-clients;
default-lease-time 1800;                # default is 30 minutes
max-lease-time 7200;                    # default is 2 hours

subnet 10.0.0.0 netmask 255.255.255.0 {
	range 10.0.0.128 10.0.0.254;
	option broadcast-address 10.0.0.255;
	option domain-name-servers 10.0.0.2;
	option domain-name localdomain;
	default-lease-time 1800;                # default is 30 minutes
	max-lease-time 7200;                    # default is 2 hours
	option netbios-name-servers 10.0.0.2;
	option routers 10.0.0.2;
}
host vmnet8 {
	hardware ethernet 00:50:56:C0:00:08;
	fixed-address 10.0.0.1;
	option domain-name-servers 0.0.0.0;
	option domain-name "";
	option routers 0.0.0.0;
}

This changes the subnet in use to 10.0.0.0/24 with the DHCP range being 10.0.0.[128-254] and the default gateway of VMs using it to 10.0.0.2.
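Since the vmnet interfaces are created at launch, quitting and relaunching Fusion will pick up the edited file. Alternatively, Fusion ships with a helper that restarts its virtual networking; a sketch, assuming the default install path (flags may vary by Fusion version):

# Stop, re-read the configuration and restart Fusion's virtual networking
sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --stop
sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --configure
sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --start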

This is where I’m going to end this post for now, as the simulator is accessible via HTTPS and SSH and ONTAP is ready to be configured. You will still need to assign disks, create a local storage tier (aggregate) and an SVM with volume(s) for data, among other things. The intent of this post was to get this far, not to teach ONTAP. If you’d like to see a post on either adding a second node to the cluster or configuring ONTAP on the first one, please leave a comment and I’ll try to get around to it.
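If you want a head start before that post, the CLI steps look roughly like this; all of the names below are placeholders and the exact flags may vary slightly by release:

# Assign all unowned disks to the node, then build a data aggregate
storage disk assign -all true -node cluster1-01
storage aggregate create -aggregate aggr1 -node cluster1-01 -diskcount 14

# Create an SVM and a data volume mounted at /data1
vserver create -vserver svm1 -rootvolume svm1_root -aggregate aggr1 -rootvolume-security-style unix
volume create -vserver svm1 -volume data1 -aggregate aggr1 -size 10g -junction-path /data1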

ONTAP 9.6

UPDATE, MAY 17: RC1 is out; you can grab it here.

It’s my favourite time of year, folks: yup, it’s time for some new ONTAP feature announcements. It feels as though 9.6 is going to have quite the payload, so I’m not going to cover every little tidbit, just the pieces that I’m excited about. For the full release notes, go here (NetApp SSO credentials required). Or, if you’re one of my customers, feel free to email me for a meeting and we can go over this release in detail.

The first thing worth mentioning is that with 9.6, NetApp is dropping the whole LTS/STS distinction; all releases going forward will be considered Long Term Support releases. What this means is every release has three years of full support, plus two years of limited support.

The rest of the updates can be grouped into three themes or highlights:

  1. Simplicity and Productivity
  2. Expanded customer use cases
  3. Security and Data Protection

Some of the Simplicity highlights are:

  • System Manager gets renamed to ONTAP System Manager and overhauled; it is now based on REST APIs, with a Python SDK available at GA (see the sketch after this list)
    • Expect a preview of a new dashboard in 9.6
  • Automatic Inactive Data Reporting for SSD aggregates
    • This tells you how much data you could tier to an object store, freeing up that valuable SSD storage space
  • FlexGroup volume management has gotten simpler, with the ability to shrink and rename volumes, plus MetroCluster support
  • Cluster setup has gotten even easier with automatic node discovery
  • Adaptive QoS support for NVMe/FC (maximums) and ONTAP Select (minimums)
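Since the new System Manager is built on the same public REST endpoints, you can poke at the API directly once a 9.6 cluster is up. A minimal sketch, assuming a cluster management LIF reachable at cluster1.example.com and the admin account:

# List volumes via the ONTAP 9.6 REST API (curl prompts for the password)
curl -sk -u admin https://cluster1.example.com/api/storage/volumes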

Here’s what the System Manager dashboard currently looks like:

And here’s what we can look forward to in 9.6:

The Network Topology Visualization is very interesting; I’m looking forward to seeing how in-depth it gets.

Expanded Customer Use Cases

  • NVMe over FC gets more host support; it now includes VMware ESXi, Windows 2012/2016, Oracle Linux, Red Hat Enterprise Linux and SUSE Linux.
  • FabricPools improvements:
    • Gains support for two more hyperscalers: Google Cloud and Alibaba Cloud
    • The Backup policy is gone, replaced with a new All policy, great for importing known-cold data directly to the cloud
    • Inactive Data Reporting is now on by default for SSD aggregates and is viewable in ONTAP System Manager – Use this to determine how much data you could tier.
    • FabricPool aggregates can now store twice as much data
    • SVM-DR support
    • Volume move – can now be done without re-ingesting the cloud tier; it moves the metadata and hot data only
  • FlexGroup Volume Improvements:
    • Elastic sizing to automatically protect against one constituent member filling up and returning an error to the client
    • MetroCluster support, both FC and IP MetroCluster
    • Volume rename now trivial
    • Volume size reduction now available
    • SMB Continuous Availability (CA) file share support
  • FlexCache Improvements:
    • Caching to and from Cloud Volumes ONTAP
    • End-to-end data encryption
    • Max cached volumes per node increased to 100 from 10
    • Soft and hard quota (tree) on origin volume enforced on cached volume
    • FPolicy support

Security and Data Protection

  • Over-the-wire encryption for SnapMirror
    • Coupled with at-rest encryption, data can now be encrypted end-to-end
  • SnapMirror Synchronous now supports
    • NFSv4, SMB 2 & 3 and mixed NFSv3/SMB volumes
    • This is in addition to existing support for FCP, iSCSI and NFSv3
  • NetApp Aggregate Encryption (NAE)
    • This can be seen as an evolution of NetApp Volume Encryption (NVE); all volumes in the aggregate share the same key (see the sketch after this list)
    • Deduplication across volumes in the aggregate is supported for added space savings
  • Multi-tenant Key Management for Data At-Rest Encryption
    • Each tenant SVM can be configured with its own key management servers
    • Neighbouring tenants are unaffected by each other’s encryption actions and maintain control of their own keys
    • This is an added license
  • MetroCluster IP Updates
    • Support for entry AFF and FAS systems!
      • Personally I think this one is a game-changer and will really drive MetroCluster adoption now that the barrier to entry is so low
    • AFF A220 and FAS2750 and newer only
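To give a flavour of NAE, the aggregate-level key comes into play when the aggregate is created, and volumes placed on it share that key. A hedged sketch with placeholder names; the flag is as I understand it from the 9.6 documentation:

# Create an aggregate whose volumes share one encryption key
storage aggregate create -aggregate aggr_nae -node cluster1-01 -diskcount 10 -encrypt-with-aggr-key true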

And that is most of the new enhancements and features appearing in 9.6. 9.6RC1 is expected around the second half of May; GA typically comes about six weeks later. You can bet that I’ll have it running in my lab the day it comes out.

ONTAP 9.5

UPDATE: 9.5RC1 is now out and you can grab it here.

It’s that time of year again: time for NetApp’s annual technical conference, Insight. This also means that a Long-Term Support (LTS) release of ONTAP is due; this time it’s 9.5. As I write this, I am sitting in the boarding lounge of YVR, waiting for my flight to Las Vegas for NetApp Insight, and I see the Release Candidate (RC) for 9.5 is not quite out yet, but I do have the list of new features for you nonetheless.

The primary new features of 9.5 are:

  • New FlexCache accelerates performance for key workloads with read caching across a cluster and at remote sites.
  • SnapMirror Synchronous protects critical applications with synchronous replication
  • MetroCluster-IP enhancements reduce the cost of business continuity: 700km between sites and support for midrange systems (A300/FAS8200)
  • FabricPool now supports automated cloud tiering for FlexGroup volumes

Now, let’s dig into each one of these new features a bit.

FlexCache: FlexCache makes its return in 9.5, providing the ability to cache hot blocks, user data and metadata on a more performant tier while the bulk of the data sits in a volume elsewhere in the cluster, or even on a remote cluster. FlexCache can enable you to provide lower read latency while not having to store the bulk of your data on the same tier. At this time only NFSv3 is supported, though the source volume can be on AFF, FAS or ONTAP Select. While the volume you access is a FlexGroup volume, the source volume itself cannot be a FlexGroup but rather must be a FlexVol. An additional license is required.
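Creating a cache is a single command once the license is in place. A rough sketch with placeholder names, caching an origin volume from one SVM into another:

volume flexcache create -vserver svm_edge -volume vol_data_cache -aggr-list aggr1 -size 100g -origin-vserver svm_core -origin-volume vol_data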

SnapMirror Synchronous: SM-S also makes a long-awaited return to ONTAP, allowing you to provide a recovery point objective (RPO) of zero and a very low recovery time objective (RTO). It is FC, iSCSI and NFSv3 only at this time, your network must have a maximum round-trip latency of no more than 10ms, and FlexGroup volumes are not supported. An additional license is required.
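Operationally, SM-S looks like regular SnapMirror with a synchronous policy; Sync and StrictSync are the two policies I’ve seen documented. A sketch with placeholder paths:

snapmirror create -source-path svmA:vol_db -destination-path svmB:vol_db_dst -policy Sync
snapmirror initialize -destination-path svmB:vol_db_dst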

MetroCluster-IP (MC-IP): NetApp continues to add value to the mid-range of appliances by bringing MC-IP support to both the AFF A300 as well as the FAS8200. At the same time, NetApp has increased the maximum distance to 700km, provided your application can tolerate up to 10ms of write acknowledgement latency.

FabricPool: Previously hampered by the need to tier volumes greater than 100TiB? Now that FabricPool supports FlexGroups, you are in luck. Also supported in 9.5 is end-to-end encryption of data stored in FabricPool volumes using only one encryption key. Lastly, up until now data would only migrate to your capacity tier once your FabricPool aggregate reached 50% fullness; this parameter is now adjustable, though 50% remains the default.
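If I’m reading the documentation right, that fullness threshold is adjusted on the object-store attachment itself; a sketch with placeholder names:

storage aggregate object-store modify -aggregate aggr1 -object-store-name my_s3 -tiering-fullness-threshold 60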

While those are the primary features included in this latest payload, existing features continue to gain refinement, especially in the realm of storage efficiency; specifically, around logical space consumption reporting, which is useful for service providers. Also, adaptive compression is now applied when 8KB compression groups (CGs) are <50% compressible, allowing CGs to be compacted together; databases will see the most benefit here, with typical aggregate savings in the 10-15% range. Finally, provided you have provisioned your storage using System Manager’s application provisioning, adaptive compression will be optimized for the database being deployed: Oracle, SQL Server or MongoDB.

That’s all for now. If you want more details, come find me at NetApp Insight on the show floor near the Social Media Hub, or at my Birds of a Feather session, Monday at 11:15am, where other NetApp A-Team members and I will discuss the Next Generation Data Centre.

Raw AutoSupport, tried and true – sysconfig

While NetApp keeps improving the front end that is Active IQ, for both pre-sales and support purposes I constantly find myself going into the Classic AutoSupport and accessing the raw AutoSupport data; most often it’s sysconfig -a. Recently I was trying to explain the contents to a co-worker and realized that I should just document it as a blog post. So here is sysconfig explained.

The command sysconfig -a is the old 7-Mode command that gives you all the hardware information from the point of view of ONTAP. All the onboard ports are assigned to “slot 0”, whereas slots 1-X are the physical PCIe slots where a myriad of cards can be inserted. Here’s one example; I’ll insert comments where I feel it is appropriate. Continue reading

ONTAP 9.4 – Improvements and Additions

While the actual payload hasn’t hit the street yet, here’s what I can tell you about the latest release in the ONTAP 9 family, which should be available here any day now. EDIT: RC1 is here. 9.4 went GA today.

FabricPool

Lots of improvements to ONTAP’s object-tiering code in this release; it appears they’re really pushing development here:

  • Support for Azure Blob, both hot and cool tiers, no archive tier support
    • This adds to the already supported AWS-S3 and StorageGRID Webscale object stores
  • Support for cold-data tiering policies, whereas 9.2 and 9.3 had only backup and snapshot-only tiering policies
    • The default definition of cold data is 31 days but can be adjusted to anywhere from 2-63 days.
    • Not all cold blocks need to be made hot again, such as snapshot-only blocks. Random reads are considered application access, so those blocks are declared hot and written back into the performance tier, whereas sequential reads are assumed to come from indexers, virus scanners or the like and should be kept cold, so they are not written back into the performance tier.
  • Now supported in ONTAP Select, in addition to the existing ONTAP and ONTAP Cloud. Wherever you run ONTAP, you can now run FabricPool; the SSD-aggregate caveat still exists.
  • Inactive Data Reporting in OnCommand System Manager to determine how much data would be tiered if FabricPool were implemented.
    • This one will be key for clients thinking about adopting FabricPool
  • The Object Store Profiler is a new tool in ONTAP that will test the performance of the object store you’re thinking of attaching, so you don’t have to dive in without knowing what your expected performance should be (see the sketch after this list).
  • Object Defragmentation now optimizes your capacity tier by reclaiming space that is no longer referenced by the performance tier
  • Compaction comes to FabricPool, ensuring that your write stripes are full, as well as applying compression and deduplication
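The profiler is driven from the CLI; a rough sketch with placeholder names, assuming the object store has already been defined on the cluster:

storage aggregate object-store profiler start -object-store-name my_s3 -node cluster1-01
storage aggregate object-store profiler show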

Continue reading

ONTAP 9.3 is out soon, here are the details you need.

It’s that time of year again: time for an ONTAP release…or at least an announcement. When 9.3 drops, not only will it be an LTS (Long Term Support) version, but NetApp also continues to refine and enhance ONTAP.

Simplifying operations:

  • Application-aware data management for MongoDB
  • Adaptive QoS
  • Guided cluster setup and expansion
  • Much simpler data protection setup

Efficiencies:

Not so long ago, in ONTAP 9.2, NetApp introduced inline, aggregate-level dedupe. What many people may not have realized, due to the way ONTAP coalesces writes in NVRAM prior to flushing them to the durable layer, is that this inline aggregate dedupe’s domain was restricted to the data in NVRAM. With 9.3, a post-process aggregate scanner has been implemented to provide true aggregate-level dedupe.
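To run the new scanner against existing data, something along these lines should do it; the command and flag are as I recall them from the 9.3 documentation, so verify before running:

storage aggregate efficiency cross-volume-dedupe start -aggregate aggr1 -scan-old-data true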

Continue reading

ONTAP 9.2RC1 is out

Continuing with their new standard six-month release cadence, ONTAP 9.2RC1 was released today, and I continue to be impressed with the feature payload NetApp has been delivering with each new release; here are the highlights:

  • FabricPools
    • Automated cloud-tiering of NetApp Snapshots to a target that speaks S3 (AWS or StorageGRID)
  • QoS Floors or minimums
    • Allows you to reserve performance for critical workloads; SAN on AFF only.
  • Efficiencies:
    • Inline dedupe at the aggregate level
      • Previously volume-only
      • Not applicable to post-process dedupe
      • AFF only
    • Advanced Disk Partitioning now available on FAS8xxx and FAS9xxx, for the first 48 drives.
  • NetApp Volume Encryption
    • Now available for FlexGroups

Here’s the list of supported platforms for ONTAP 9.2:

AFF          FAS
AFF A700s    FAS9000
AFF A700     FAS8200
AFF A300     FAS2650
AFF A200     FAS2620
AFF80x0      FAS80x0
             FAS25xx

ONTAP Select, the software-only version of ONTAP gets a few nice improvements as well:

  • FlexGroups
  • 2-Node HA
  • ESXi ROBO licensing (this is a big deal for me and my customers)

Read the full release notes here.

NetApp Volume Encryption, The Nitty Gritty

It all begins in the configuration builder tool

This article focuses on the implementation and management of encryption with NetApp storage. Data at Rest Encryption (NetApp Volume Encryption or NVE for short) is one of the ways that you can achieve encryption with NetApp, and it’s one of the most exciting new features of ONTAP 9.1. Here’s how you go about implementing it.

If you’re a partner or NetApp SE building configurations, as long as the cluster software version is set to 9.x, there is a checkbox that lets you decide which version of ONTAP gets written to the device at the factory. As of 9.1, ONTAP software images will either be capable of encryption via a software encryption module, or not. There are laws around both the import and export of software that is capable of encryption, but that is beyond the scope of this article. I do know you can use the encryption-capable image in Canada (where I am located), so I’m covered. If you’re unsure about the laws in your country, consult your legal adviser on this matter.

Once this cluster-level toggle has been set and you add hardware into the configuration, there are two more checkboxes in the software section:

  1. NetApp Volume Encryption (off by default)
  2. Trusted Platform Module (TPM, on by default) (Clarification update: a TPM is NOT required for NVE)

The first one triggers the generation of the license key for NVE, and the second one activates a piece of hardware dedicated to dealing with cryptographic keys. One thing I was not sure of was whether, should you choose to remove the checkmark, the TPM is simply disabled or doesn’t physically exist in your NetApp controller; I had an email into NetApp to confirm this. [Update: The module is integral to the controller and is disabled in firmware if being shipped to certain countries. Shout out to @Keith_Aasen for tracking that down for me.]

Okay, now for the more customer-relevant information…

To get started with NVE, you’re going to need a few things:

  1. An encryption-capable platform
  2. An encryption-capable image of ONTAP
  3. A key manager
  4. A license key for NVE

Encryption-capable platform

The following platforms are currently capable of encryption: FAS6290, FAS80xx, FAS8200, and AFF A300. This is limited by the CPU in the platform, as it must have a sufficient clock speed and core count with support for the AES instruction set. I’m sure this list will be ever-expanding, but be sure to check first if you’re hoping to use NVE. [UPDATE: After some digging, I can confirm that all the new models support NVE, the entry-level FAS2650 included.]

Encryption-capable image of ONTAP

Provided you’re not in a restricted country as per the above, your image will follow the standard nomenclature of X_q_image.tgz, where X is the version number. The non-encryption-capable version will be X_q_nodar_image.tgz, which I’ll simply refer to as nodar(e) (No Data At Rest Encryption) for the rest of this article. The output of version -v will tell you whether you’re running nodar or standard:

NetApp Release 9.1RC1: Sun Sep 25 20:10:49 UTC 2016 <1O>

NetApp Release 9.1RC1: Sun Sep 25 20:10:49 UTC 2016 <1Ono-DARE>

Key manager

The on-board key manager introduced in ONTAP 9.0 enables you to manage keys for use with your NSE drives, helping you avoid costly and possibly complex external solutions. Currently, NVE only supports using the on-board manager, so if you’re going to use NVE layered on top of NSE, you need to use the on-board one.

Setting this up is exactly one command:

security key-manager setup

You’ll be prompted for a passphrase, and that’s it; you’re done.
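If you want to verify the result, the key manager has show commands as well; my recollection of the exact command path varies by release, so treat this as an assumption to check against your version’s docs:

security key-manager key show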

License key for NVE

If you didn’t get this license key at the time of purchase, talk to your account representative or SE over at NetApp (though hopefully, if you’ve bought one of the new systems announced at Insight 2016, they decided to include it since, at least for now, it is a no-cost license).

What next?

Now that you’ve got all the prerequisites covered, encrypting your data is very simple. As the name implies, encryption is done at the volume level, so naturally it’s a volume command that encrypts the data (a volume move command, in fact):

volume move start -volume vol_name -destination-aggregate aggr_name -encrypt-destination true

The destination aggregate can even be the same aggregate that the volume is already hosted on. Don’t want that volume encrypted anymore for some reason? Change that last flag to false.

If you’re creating a new volume that you want encrypted, that’s just as simple:

volume create -volume vol_name -aggregate aggr_name -size 1g -encrypt true
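Either way, you can confirm which volumes ended up encrypted:

volume show -is-encrypted true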

Wrapping up

NetApp Volume Encryption is pretty easy, but since it’s so new, OnCommand System Manager doesn’t support it just yet. You’ll have to stick to the CLI for now, although I’m sure the GUI will catch up eventually if that’s your preferred point of administration. It should also be noted that while NSE solutions are FIPS 140-2 compliant, NVE has yet to go through that qualification; if FIPS is a requirement, the on-board key manager isn’t compliant yet either. Since the on-board key manager stores the keys on the same hardware using them, NVE only protects you from compromised data on individual drives removed from your environment through theft or RMA; if someone gained wholesale access to the HA pairs, the data would still be retrievable. Also, this is for data at rest only; you must take other precautions for data-in-flight encryption.

Into the weeds

I did all my tests for this post using the simulator, and I learned a lot, but your mileage may vary. In the end, only you are responsible for what you do to your data. I had heard that if you have the wrong software image then you’d have to do a complete wipe of your HA pair in order to convert it. I have since proven this wrong (at least in the simulator) and I definitely can’t guarantee the following will be supported.

For my tests I had two boot images loaded: one standard and one nodar. What I learned is that you can boot into either mode, provided you don’t have any encrypted data. Even if you have the key manager set up and NVE licensed, you can still boot back and forth. The first time you boot your system using the nodar image with encrypted data on the system, however, you’ll hose the whole thing. I did test first encrypting data, then decrypting it, then converting to nodar, and the simulator booted fine. When I booted into nodar with an encrypted volume, though, even going back to standard didn’t work; booting into maintenance mode showed the aggregates with a status of partial, and the boot process hinted that they were in some sort of transition phase (7MTT?). Either way, I was unable to recover my simulator once I got it into this state, so I definitely advise against doing this in production. Heck, I’d advise you to just use the proper image to start with.

I hope you learned something. If you have any questions or comments, either post them below or reach out on Twitter. I’m @ChrisMaki from the #NetAppATeam and a Solution Architect @ScalarDecisions.

ADP(v1) and ADPv2 in a nutshell, it’s delicious!

Ever since clustered Data ONTAP went mainstream over 7-Mode, the dedicated root aggregate tax has been a bone of contention for many, especially on entry-level systems with internal drives. Can you imagine buying a brand new FAS2220 or FAS2520 and being told that not only are you going to lose two drives as spares, but also another six to your root aggregates? That effectively left you with four drives for your data aggregate, two of which would be devoted to parity. I don’t think so. Now, this is a bit of an extreme example that was seldom deployed; hopefully you had a deployment engineer who cared about the end result and would use RAID-4 for the root aggregates, and maybe not even assign a spare to one controller, giving you seven whole disks for your active-passive deployment. Still, this was kind of a shaft. In a 24-disk system deployed active-active, you’d likely get something like this:

[Diagram: Traditional cDOT]

Enter ADP.

In the first version of ADP, introduced in version 8.3, clustered Data ONTAP gained the ability to partition drives on systems with internal drives, as well as the first two shelves of drives on All Flash FAS systems. This meant the dedicated root aggregate tax got a little less painful. In this first version, clustered Data ONTAP carved each disk into two partitions: a small one for the root aggregates and a larger one for the data aggregate(s). This was referred to as root-data or R-D partitioning. The smaller partition’s size depended on how many drives existed. You could technically buy a system with fewer than 12 drives, but the ADP R-D minimum was eight drives. By default, both partitions on a disk were owned by the same controller, splitting overall disk ownership in half.

[Diagram: 8.3 ADP, R-D]

You could change this with some advanced command-line trickery (sketched below) to still build active-passive systems and gain two more drive partitions’ worth of data. Since you were likely only building one large aggregate on your system, you could also accomplish this in System Setup if you told it to create one large pool. This satisfied the masses for a while, but then those crafty engineers over at NetApp came up with something better.
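For the curious, the trickery amounted to reassigning one node’s data partitions to its partner at the advanced privilege level; a rough sketch with a placeholder disk and node name, and the exact flags are release-dependent:

set -privilege advanced
# Give node01 the data partition of a disk whose root partition it doesn't own
storage disk assign -disk 1.0.12 -owner node01 -data true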

Enter ADPv2.

Starting with ONTAP 9, not only did ONTAP get a name change (7-Mode hasn’t been an option since version 8.2.3), but it also gained ADPv2, which carves the aforementioned data partition in half, known as R-D2 (Root-Data,Data) sharing, for SSDs. Take note of “SSDs” there, as spinning disks aren’t eligible for this secondary partitioning. In this new version, you get back one drive that you would have allocated as a spare, and you also get two of the parity drives back, lessening the pain of the RAID tax. With a minimum requirement of eight drives and a maximum of 48, here are the three main scenarios for this type of partitioning.

12 Drives:

[Diagram: ADPv2, R-D2 ½ shelf]

24 Drives:

[Diagram: ADPv2, R-D2 1 shelf]

48 Drives:

[Diagram: ADPv2, R-D2 2 shelves]

As you can see, this is a far more efficient way of allocating your storage, yielding up to ~17% more usable space on your precious SSDs.

So that’s ADP and ADPv2 in a nutshell: a change for the better. Interestingly enough, the ability to partition disks has led to a radical change in the FlashPool world called “Storage Pools,” but that’s a topic for another day.