NetApp Volume Encryption, The Nitty Gritty

It all begins in the configuration builder tool

This article focuses on the implementation and management of encryption with NetApp storage. Data at Rest Encryption (NetApp Volume Encryption or NVE for short) is one of the ways that you can achieve encryption with NetApp, and it’s one of the most exciting new features of ONTAP 9.1. Here’s how you go about implementing it.

If you’re a partner or a NetApp SE building configurations, then as long as the cluster software version is set to 9.x, there is a checkbox that lets you decide which version of ONTAP gets written to the device at the factory. As of 9.1, ONTAP software images will either be capable of encryption via a software encryption module, or not. There are laws around both the import and export of software capable of encryption, but those are beyond the scope of this article. I do know you can use the encryption-capable image in Canada (where I am located), so I’m covered. If you’re unsure about the laws in your country, consult your legal adviser on this matter.

Once this cluster-level toggle has been set and you add hardware into the configuration, there are two more checkboxes in the software section:

  1. NetApp Volume Encryption (off by default)
  2. Trusted Platform Module (TPM, on by default)

The first one triggers the generation of the license key for NVE, and the second one activates a piece of hardware dedicated to dealing with cryptographic keys. One thing I’m still not sure of is whether, should you choose to remove the checkmark, the TPM is simply disabled or physically absent from your NetApp controller; I have an email in to NetApp to confirm this. [Update: The module is integral to the controller and disabled in firmware if being shipped to certain countries. Shout out to @Keith_Aasen for tracking that down for me.]

Okay, now for the more customer-relevant information…

To get started with NVE, you’re going to need a few things:

  1. An encryption-capable platform
  2. An encryption-capable image of ONTAP
  3. A key manager
  4. A license key for NVE

Encryption-capable platform

The following platforms are currently capable of encryption: FAS6290, FAS80xx, FAS8200, and AFF A300. This is limited by the CPU in the platform, as it must have sufficient clock speed and core count along with support for the AES-NI instruction set. I’m sure this list will be ever expanding, but be sure to check first if you’re hoping to use NVE. [UPDATE: After some digging, I can confirm that all the new models support NVE, the entry-level FAS2650 included.]

Encryption-capable image of ONTAP

Provided you’re not in a restricted country as per the above, your image will follow the standard nomenclature of X_q_image.tgz, where X is the version number. The non-encryption-capable version will be X_q_nodar_image.tgz, which I’ll simply refer to as nodar(e) (No Data At Rest Encryption) for the rest of this article. The output of version -v will tell you whether you’re running standard or nodar:

Standard:
NetApp Release 9.1RC1: Sun Sep 25 20:10:49 UTC 2016 <1O>

no-DARE:
NetApp Release 9.1RC1: Sun Sep 25 20:10:49 UTC 2016 <1Ono-DARE>

Key manager

The on-board key manager introduced in ONTAP 9.0 enables you to manage keys for use with your NSE drives, helping you avoid costly and possibly complex external solutions. Currently, NVE only supports using the on-board manager, so if you’re going to use NVE layered on top of NSE, you need to use the on-board one.

Setting this up is exactly one command:

security key-manager setup

You’ll be prompted for a passphrase, and that’s it, you’re done.
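Once the wizard completes, there are a couple of commands worth running; this is a sketch based on the ONTAP 9.x command set as I know it, so verify against your version’s documentation:

security key-manager key show

security key-manager backup show

The first lists the key IDs the onboard manager is tracking; the second displays backup data for the key hierarchy, which you should copy somewhere safe off the cluster in case you ever need to recover the key manager.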

License key for NVE

If you didn’t get this license key at time of purchase, talk to your account representative or SE over at NetApp (though, hopefully, if you’ve bought one of the new systems announced at Insight 2016, they decided to include it since, at least for now, it is a no-cost license).

What next?

Now that you’ve got all the prerequisites covered, encrypting your data is very simple. As the name implies, encryption is done at the volume level, so naturally it’s a volume command that encrypts the data (a volume move command, in fact):

volume move start -volume vol_name -destination-aggregate aggr_name -encrypt-destination true

The destination aggregate can even be the same aggregate that the volume is already hosted on. Don’t want that volume encrypted anymore for some reason? Change that last flag to false.
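To keep an eye on things, you can watch the move and then confirm the result afterwards. A quick sketch, assuming 9.1 syntax (svm_name is a placeholder for your SVM):

volume move show -vserver svm_name -volume vol_name

volume show -volume vol_name -fields encrypt

Once the move completes, the encrypt field should read true.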

If you’re creating a new volume that you want encrypted, that’s just as simple:

volume create -volume vol_name -aggregate aggr_name -size 1g -encrypt true

Wrapping up

NetApp Volume Encryption is pretty easy, but since it’s so new, OnCommand System Manager doesn’t support it just yet. You’ll have to stick to the CLI for now, although I’m sure the GUI will catch up eventually, if that’s your preferred point of administration. It should also be noted that while NSE solutions are FIPS 140-2 compliant, NVE has yet to go through that qualification; and if FIPS is a requirement, the on-board key manager isn’t compliant yet either. Since the on-board key manager stores the keys on the very hardware that uses them, NVE only protects you from compromised data on individual drives removed from your environment through theft or RMA. If someone gained wholesale access to the HA pairs, the data would still be retrievable. Also, this is for data at rest only; you must take other precautions for data-in-flight encryption.

Into the weeds

I did all my tests for this post in the simulator, and I learned a lot, but your mileage may vary; in the end, only you are responsible for what you do to your data. I had heard that if you had the wrong software image, you’d have to do a complete wipe of your HA pair in order to convert it. I have since proven this wrong (at least in the simulator), though I definitely can’t guarantee the following is supported.

For my tests I had two boot images loaded: one standard and one nodar. What I learned is that you can boot into either mode, provided you don’t have any encrypted data; even with the key manager set up and NVE licensed, you can still boot back and forth. The first time you boot your system from the nodar image with encrypted data on it, however, you’ll hose the whole thing. I tested first encrypting data, then decrypting it, then converting to nodar, and the simulator booted fine. But once I booted into nodar with an encrypted volume present, even going back to standard didn’t work: booting into maintenance mode showed the aggregates with a status of partial, and the boot process hinted that they were in some sort of transition phase (7MTT?). Either way, I was unable to recover my simulator once I got it into this state, so I definitely advise against doing this in production. Heck, I’d advise you to just use the proper image to start with.

I hope you learned something. If you have any questions or comments, either post them below or reach out on Twitter. I’m @ChrisMaki from the #NetAppATeam and a Solution Architect @ScalarDecisions.

NetApp is Finally Making the Data Fabric Real

 

Insight Berlin is now done, and several of my fellow A-Team’ers have already published their own “Insight wrap-up” blogs; you can check them out for a detailed rundown of all the awesome stuff NetApp is coming out with in the next year (posts from @mcbride_ruairi and @NFSDudeAbides to start you off), so I figured I should get mine done as well. While all that stuff is really cool (ONTAP 9, new All Flash FAS, etc.), what was most inspiring for me at this year’s conference is how NetApp is bringing the “vision” of the Data Fabric to life with a couple of key advancements.

Flash (and ONTAP) Everywhere

Since 2014, when George Kurian announced the Data Fabric on stage at Insight, NetApp has really pushed its flash portfolio in new directions. The subsequent acquisition of SolidFire has bolstered NetApp’s position in the market, while giving customers more options and improved capabilities for deploying flash in their data centers. This year, George set the goal (albeit a bit lofty) of being the number one flash vendor in the world, and with that attitude and the right strategy, they seem like they’re on their way.

Of course, the big to-do around flash this year at Insight was the refresh of the All Flash FAS line of storage systems. It was the biggest hardware refresh in NetApp’s history, with the introduction of the new FAS2600 and FAS8200 lines as well as the new AFF A300 and AFF A700 systems. Along with this come 32Gb Fibre Channel and 40Gb Ethernet, firsts for the SAN world that help position the storage better for flash.

But instead of these announcements just being the shiny new toys for engineers to ooh and ah over, NetApp brought them to market with the strategic intent to help customers access their data anywhere. The importance of data was a central element throughout the conference, with the “data is the currency of the digital economy” message resonating loud and clear. One of my favorite quotes from the event was, “[In the past,] data was just there to run your business. Now data is your business.”

With the increased capabilities and applicability of flash comes the more pronounced ubiquity of ONTAP in the Data Fabric. NetApp’s strategy has always been to help customers move and utilize data where it can deliver the most value to them, and this year they are making some big strides in that direction. Tools like Flash Cache, Flash Pool, SnapMirror, SnapVault, and others are being expanded across the portfolio to deliver a seamless data management experience.

Another cool piece of software they’re bringing out is called Cloud Control, which enables you to back up data that resides with SaaS providers, such as Office 365, from within SnapCenter, with support for other SaaS providers to come. They showcased a pretty fancy-looking SnapCenter with cataloguing and all sorts of other cool features to help you manage cloud data like you would your on-premises storage. SnapCenter is currently free, but with how cool Cloud Control is going to be, I honestly have no idea why they wouldn’t want to monetize it.

Renewed Focus on Developers, a Sign of Greatness to Come

This idea of an open, flexible, and secure ecosystem for your data is only going to get more crystallized as time goes on. Something that my fellow A-Teamer Jesse Anderson blogged about a while back, and that struck me as well, was NetApp’s focus on the developer at Insight. I think it’s a sign of a cultural shift that will only further enable the Data Fabric. The introduction of thePub, a place where developers can coalesce and share ideas to make the Data Fabric better, is a huge step in the right direction as far as I’m concerned.

I’m really excited to see what the future has in store for NetApp and the Data Fabric. See you next year!

ADP(v1) and ADPv2 in a nutshell, it’s delicious!

Ever since clustered Data ONTAP went mainstream over 7-Mode, the dedicated root aggregate tax has been a bone of contention for many, especially on entry-level systems with internal drives. Can you imagine buying a brand-new FAS2220 or FAS2520 and being told that not only are you going to lose two drives as spares, but also another six to your root aggregates? That would effectively leave you with four drives for your data aggregate, two of which would be devoted to parity. I don’t think so. Now, this is a bit of an extreme example that was seldom deployed; hopefully you had a deployment engineer who cared about the end result, used RAID-4 for the root aggregates, and maybe didn’t even assign a spare to one controller, giving you seven whole disks for your active-passive deployment. Still, this was kind of a shaft. In a 24-disk system deployed active-active, you’d likely get something like this:

Traditional cDOT

Enter ADP.

In the first version of ADP, introduced in 8.3, clustered Data ONTAP gained the ability to partition drives on systems with internal drives, as well as the first two shelves of drives on All Flash FAS systems. This meant the dedicated root aggregate tax got a little less painful. In this first version, clustered Data ONTAP carved each disk into two partitions: a small one for the root aggregates and a larger one for the data aggregate(s). This was referred to as root-data or R-D partitioning. The smaller partition’s size depended on how many drives existed. You could technically buy a system with fewer than 12 drives, but the ADP R-D minimum was eight. By default, both partitions on a disk were owned by the same controller, splitting overall disk ownership in half.

8.3 ADP, R-D

 

You could change this with some advanced command-line trickery to still build active-passive systems and gain two more drive partitions’ worth of data; since you were likely only building one large aggregate on your system anyway, you could also accomplish this in System Setup by telling it to create one large pool. This satisfied the masses for a while, but then those crafty engineers over at NetApp came up with something better.
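For the curious, that trickery went something along these lines; this is a sketch from memory with hypothetical disk and node names, so double-check against the official docs before trying it on real gear:

storage disk show -partition-ownership

storage disk removeowner -disk 1.0.13 -data true

storage disk assign -disk 1.0.13 -data true -owner node01

In other words, you strip the data-partition ownership from one node’s disks and hand it to the partner, leaving the root partitions where they were.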

Enter ADPv2.

Starting with ONTAP 9, not only did the OS get a name change (7-Mode hasn’t been an option since version 8.2.3), it also gained ADPv2, which carves the aforementioned data partition in half: R-D2 (Root-Data,Data) sharing for SSDs. Take note of the SSD qualifier there, as spinning disks aren’t eligible for this secondary partitioning. In this new version, you get back one drive that you would have allocated as a spare, plus two of the parity drives, lessening the pain of the RAID tax. With a minimum requirement of eight drives and a maximum of 48, here are the three main scenarios for this type of partitioning.

12 Drives:

ADPv2, R-D2 ½ shelf

24 Drives:

ADPv2, R-D2 1 shelf

48 Drives:

ADPv2, R-D2 2 shelves

As you can see, this is a far more efficient way of allocating your storage, yielding up to ~17% more usable space on your precious SSDs.

So that’s ADP and ADPv2 in a nutshell: a change for the better. Interestingly enough, the ability to partition disks has led to a radical change in the FlashPool world called “Storage Pools,” but that’s a topic for another day.

NetApp refreshes entire line of FAS and AFF platforms

Today NetApp announced a complete revamp of both the FAS and AFF lines, and with it a divergence in model numbers. My favourite improvement is the change in how FlashCache gets delivered: all FAS platforms, even the entry-level models, can now take advantage of FlashCache using an M.2 NVMe device onboard the controller; in fact, it’s standard on all models. In the realm of connectivity, both the top-end FAS and all the AFFs can now offer not only 40GbE but 32Gb FC as well, a first to market on both counts.

Without further ado, here are the new models in the FAS line:

  • FAS2620 and FAS2650
    • Appears to be the same 2RU enclosure as the FAS2240-2, FAS2552, and DS2246, likely with an upgraded mid-plane.
    • FAS2620 holds 12 large form factor (3.5″ NL-SAS/SSD) drives internally
    • FAS2650 holds 24 small form factor (2.5″ SAS/SSD) drives internally
    • Both models come with 1TB of FlashCache
  • FAS8200
    • Appears to be the same 3RU enclosure as the FAS8020
    • 1TB of FlashCache is now standard, upgradeable to 4TB
  • FAS9000
    • This all-new chassis separates the I/O from the controller so there are no more onboard ports and all I/O is done using PCIe cards, 10 slots per node.
    • 2TB of FlashCache are now part of the standard configuration, upgradeable to 16TB.

And the new AFF line now consists of:

  • A300 (Same chassis as FAS8200)
  • A700 (Same chassis as FAS9000)

Strictly the numbers*:

FAS:

| Model | RU | RAM | NVRAM (NVMEM) | Max HDD (SSD) | Max FlashCache | Max Flash Pool | Onboard UTA2 | Onboard 10GbE | Onboard 10GbE Base-T | Onboard 12Gb SAS | PCIe Expansion Slots | Cores |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| FAS2620 | 2 | 64GB | (8GB) | 144 | 1TB | 24TB | 8 | 4 | 4 | N/A | N/A | 12 |
| FAS2650 | 2 | 64GB | (8GB) | 144 | 1TB | 24TB | 8 | 4 | 4 | N/A | N/A | 12 |
| FAS8200 | 3 | 256GB | 16GB | 480 | 4TB | 48TB | 8 | 4 | 4 | 4 | 4 | 32 |
| FAS9000 | 8 | 1024GB | 64GB | 1440 (480) | 16TB | 144TB | N/A | N/A | N/A | N/A | 20 | 72 |

AFF:

| Model | RU | RAM | NVRAM (NVMEM) | Max HDD (SSD) | Max FlashCache | Max Flash Pool | Onboard UTA2 | Onboard 10GbE | Onboard 10GbE Base-T | Onboard 12Gb SAS | PCIe Expansion Slots | Cores |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| A300 | 3 | 256GB** | 16GB | 384 | N/A | N/A | 8 | 4 | 4 | 4 | 4 | 32 |
| A700 | 8 | 1024GB | 64GB | 480 | N/A | N/A | N/A | N/A | N/A | N/A | 20 | 72 |
  • *Numbers are per HA pair
  • **16GB Carved out for NVLOGS

Performance Improvements

The FAS2600 comes with three times as many cores, twice as much memory, and more than three times the NVMEM of the FAS2500, and adds 12Gb SAS and 1TB of NVMe FlashCache; it is expected to perform 200% faster than its predecessor running 8.3.x, making the entry-level line of controllers smoking fast. The FAS8200 has twice as many cores and four times as much memory as the FAS8040, and also comes with 12Gb SAS and 1TB of FlashCache, making it roughly 50% faster.

The new top-end model, the FAS9000, goes modular, decoupling I/O from the controllers. This performance monster, which has 2TB of FlashCache standard and 20 PCIe slots for I/O, is expected to run 50% faster than the FAS8080 on 8.3.x. A cluster of 24 FAS9000 nodes (12 HA pairs) scales up to as much as 172PB.

FAS9000 AFF A700 Chassis

Here’s how the new models map to the old:

New FAS platforms

As for the new AFF models, the A300 should deliver about 50% more throughput than the AFF8040 running 8.3.1, while the A700 aims to replace the dual-chassis AFF8080, saving four precious rack units while still providing 100% more IOPS; in fact, it should be able to handle about double the workload at half the latency.

Oracle testing

And here’s how the new AFFs line up with the existing ones:

AFF model alignment

The new lineup, both FAS and AFF, definitely addresses some concerns: FlashCache being not only available throughout the FAS line but standard as well is a move in the right direction, as is the addition of 12Gb SAS. The introduction of both 40GbE and 32Gb FC into the mid-range and upper models of both lines should provide the fire hose required to deliver all that new controller and storage back-end performance. The two new AFF model numbers lead me to believe they may be leaving room in the middle to add models to the line.

While ADP has been around for a while and is a great workaround to dedicated root aggregates, I would love to see NetApp move away from root aggregates completely and do something with M.2. I’ll keep my fingers crossed on this one, but I won’t hold my breath either.

ONTAP 9.0 is here.

That’s right folks, not the 8.4 you were thinking was next, but straight to 9. With the jump to 9 in the versioning comes a bit of rebranding: when referring to the OS of your FAS, you can finally simply say ONTAP, with no more qualifying it as “7-Mode” or “clustered”, or even prefixing it with “Data”. Alongside this new version come two variants, one that runs in the cloud and one that you can run in your VMware environment: ONTAP Cloud and ONTAP Select, respectively. Currently ONTAP Cloud is AWS-only, but all signs point to an Azure release in the very near future. ONTAP Select picks up where Edge left off.

I’ve had some helm time with the new version, and the first thing you’ll notice is that System Manager has been cleaned up and rearranged. When you first log in, you’ll get the following dashboard with a quick view of your cluster:

[Screenshot: the new System Manager cluster dashboard]

This dashboard is good for a quick glance at performance and capacity, but clicking around the other tabs still leaves something to be desired on the usability front, if only because there is just so much available in this interface; I feel like I’ve got the “advanced view” option permanently checked. Personally, I don’t mind all the various tabs and sub-tabs, but for your day-to-day operator, most of the options available aren’t necessary.

Moving over to the technical side of the equation, ONTAP 9 brings with it a few exciting new features. First of all, support for the new 15.3TB SSDs makes its way into the payload, as does RAID-TEC triple-parity protection. As far as I know, NetApp will be first to market with these 15.3s, and I can’t wait to see them in the field. RAID-TEC, or RAID Triple Erasure Coding, will really help out with the larger disk sizes. While it won’t be mandatory for the large SSDs, I highly recommend it for spinning drives of 6TB or larger. These drives currently have a max RAID group size of 14, but the introduction of RAID-TEC will increase that to 28 drives. This will not only add a third level of RAID protection to ride out those long rebuilds, but most importantly, the RAID tax won’t be so bad in larger deployments. If you’ve already got these large drives deployed, you can move to RAID-TEC and larger RAID groups provided you have the disks to add to the aggregate.
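To put that in command form, triple parity is just another raidtype. A minimal sketch with hypothetical aggregate names and disk counts (check the current docs before running it):

storage aggregate create -aggregate aggr_sata -diskcount 29 -raidtype raid_tec

storage aggregate modify -aggregate aggr_existing -raidtype raid_tec -maxraidsize 28

The first creates a new RAID-TEC aggregate; the second converts an existing aggregate and bumps the max RAID group size so you can add disks to grow it.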

In the realm of performance, NetApp is claiming a 60% increase in IOPS over 8.3.1, as well as introducing “headroom” visibility into performance capacity, meaning that at a glance you should be able to see how much performance is left in your cluster. NetApp has also introduced a new data-reduction technology called Data Compaction. With this latest addition to the existing data-reduction tricks, namely deduplication, compression, cloning, and thin provisioning, NetApp is now boasting a 4:1 data-reduction ratio and backing it with a guarantee.

Finally, two more feature introductions for the compliance-minded folks out there. First, you’ll be happy to hear SnapLock® software is back; and for those not looking to take on the cost and complexity of an external key manager for NSE drives, NetApp has introduced an onboard key manager.

 

Make sure to check out some of the other posts on this subject by my fellow NetApp A-Team members:

***Note: ONTAP 9 is not out yet, but the details are; the exact release date is not yet public.

ONTAP 8.3.2 is GA!

Just a quick note now that 8.3.2 is GA; here are a few features that I may have missed in my combined post on 8.3.1 and 8.3.2, or whose details just weren’t available to me at the time. I’ll start by reiterating some of the highlights in point form:

  • Copy Free Transition (CFT)
  • In-line DeDupe (on AFF)
  • In-place, adaptive compression

Those were some of the more poignant features, but here are some others that I either didn’t know about or whose details just weren’t available to me:

  • MetroCluster distance increased by 50%, bringing it to 300km
  • Oracle NFS workload provisioning for All Flash FAS (AFF)
    • Using a Quick Start guide and a wizard in the on-board OnCommand System Manager, the claim is you can have your new AFF cabled and serving Oracle over NFS in under 15 minutes.

The real beauty in this release, however, is CFT. I’ve said it before, but I’m still impressed by this feature: basically, you can stand up a new cDOT system with its own root aggregates and connect your 7-Mode disk to it (yes, there are caveats), and with some 7MTT magic smoke, your data is now being served out of your shiny new cluster-mode environment. Previously, 7MTT (the 7-Mode Transition Tool) only supported source data in the 8.1.x code line, but with this new release, 7MTT supports 7-Mode systems running Data ONTAP 8.1.4P4 through 8.1.4P9 and Data ONTAP 8.2.1 or later. Also, in 8.3.2RC, CFT would only work on a net-new system with only its root aggregates, but as of 8.3.2GA, CFT supports importing your 7-Mode disk onto systems with pre-existing data aggregates and volumes. Here are all the permutations in which you can now leverage CFT to import your 7-Mode data:

  • Import 7-Mode disk shelves in the following ways:
    • Import disk shelves from a 7-Mode HA pair to a new HA pair in a new cluster.
    • Import disk shelves from a 7-Mode HA pair to a new HA pair in an existing cluster that has additional data-serving nodes.
    • Import disk shelves from a 7-Mode HA pair to an HA pair that has data aggregates in an existing cluster that is serving data.
    • Import disk shelves from an HA pair that contains volumes in a volume SnapMirror relationship to an HA pair in a new or existing cluster.
      You must manually create the cluster peer relationship after transition; however, a rebaseline transfer is not required, and you can retain the SnapMirror relationship after transition.

*UPDATE: The above is actually more a function of 7MTT 2.3, but you need it for CFT anyway.

For all the gory details of CFT and how awesome it is, go here for your copy of the Copy-Free Transition Guide.

What does all this actually mean? It means that soon we can finally stop referring to “it” as either 7-mode or cluster-mode and just refer to “it” as ONTAP or Data ONTAP again.

Have Your Pi and Eat It Too!

Last year I gained access to an arcade cabinet that had been outfitted with a standard consumer PC running Windows as the front end to a handful of emulators. The cabinet itself is actually a custom build, not a repurposed one from years ago; it was purchased from Arcade Time Machine and it is pretty awesome. That, however, is not the point of this post; what I am doing is tracing back the root of my RaspberryPi obsession. When I talked about the cabinet with some co-workers, they told me that you can do all of that on a RaspberryPi, and that there was actually a community around exactly that. The “solution” is called RetroPie, and you can read more about it over there. I quickly visited Amazon and put together a list of components, which included the following:

  • Raspberry Pi 2 Model B
  • Power Supply and Case
  • SD Card
  • WiFi dongle
  • HDMI Cable
  • 2 × Logitech F710 gaming controllers
  • Logitech K400 wireless keyboard

Pretty much everything I needed to get my own RetroPie gaming system up and running at home.

Talking with various co-workers around the country over our internal Slack, I found that apparently one guy had built his RetroPie image and everyone else had pretty much just copied his SD card. I decided that I would also go down this route. I got the image burnt, and after a bit of tweaking I had a mostly functioning system. I played some of the old 8-bit games that I used to love: Pyros, Shinobi, and Mario Brothers, to name a few. Some things just didn’t work how I wanted them to, however, so I figured I’d have to build my own RetroPie image. Currently my Pi boots into RetroPie using the image you can download directly from RetroPie, but I just can’t get it working the way I want, and I think it’s because I don’t know enough about what’s going on behind the GUIs; the only way to figure that out is to install everything myself rather than rely on an image file. While I haven’t yet tried installing the RetroPie packages on top of Raspbian, that is the next step.

Between the co-worker-supplied image and now, however, I’ve done a few other things with my Pi. One of the neat things you can do is turn it into a digital media set-top box that can replace or supplement your AppleTV, Chromecast, or Roku. I did this by installing Kodi (formerly XBMC) on it. I was already using XBMC on my jailbroken ATV2, so I had some expectations, which it pretty much met. The Pi, however, appears to run Kodi much faster than the ATV2, possibly because it’s newer, faster hardware; I don’t really know or care, as it was only a test and I still rely on my ATV2 for day-to-day viewing, at least until I buy a supplementary Pi to devote to this. As a bit of a side note, I’ve also been using Plex on my QNAP TS-653 via my Chromecast, which is even slicker than Kodi, so that may end up being my end solution anyway.

The thing that really got me excited about my Pi is the ability to leverage the GPIO pins to do things outside of the Pi. I’ve always had an interest in coding and have even done a bunch of Python back in the day; since Python is the preferred language for those hacking on the Pi, this was quite convenient. So I ordered myself a starter kit of sorts. Don’t order that one for your Pi2 though; it comes with the wrong extension board, as this one is for the original Pi. When it arrived and I realized I had ordered the wrong part, a quick trip to Lee’s Electronics put me back in business. I did what most people do and wrote the Hello World of the Pi and hardware-hacking world: I made some LEDs blink:

I continued to play with LEDs for a while, writing various Python functions to make them blink in different orders, but I soon grew bored and decided I needed to try another one of the external pieces of kit I had, so I figured the 16 × 2 character display was a good next step. This was actually quite easy, so after I figured it out, I quickly whipped together some code to wish my good friend Jason a happy birthday, since it happened to be the day I was working on this:

Cue the oohs and ahs!

Using a 16 × 2 character display with the RaspberryPi.

The next thing I did was attach a DS18B20 temperature sensor, because now I had a project in mind. I started with some code that displayed the current temperature in Celsius and Fahrenheit both in the terminal and on the character display; I then added logging and removed the character-display code. Lately I’ve been finding that my bedroom is a little warm at night, so I figured I could deploy the Pi as a temperature-logging device and find out whether the room was actually getting hot or whether I’m just eating too much salt or drinking too much red wine; after all, you can’t manage what you can’t measure. That’s where my Pi currently sits: powered up in the bedroom, logging the temperature to a CSV file every five minutes, which it has been doing for about three days. I have yet to come up with my next project, but I do know that I want/need at least one more Pi, since my son Jordan has been playing with the Pi as well and I wouldn’t want to hinder his ability to play with it and learn because I’m hogging it for the same purpose. Also, since the Pi3 is now out, I would like to have that model, if only for the built-in WiFi. I’m still not done with the RetroPie project, but I am a little frustrated by it, so more to come on that one in the future.

Feel free to suggest any fun projects I might want to take on; I have a new soldering station on the way from Amazon and am looking for something cool to do with it.

 

Geek Defined or: How I Learned To Stop Worrying And Love So Many Things.

I’m going to start deviating from my usual NetApp-centric posts and just talk about what I’ve been up to on the geeky side of Chris. Out of the very limited list of podcasts I listen to, one is The Geek Whisperers, which focuses not on specific technologies but rather on personal and professional advancement from a geek’s point of view. Due to my association with the NetApp A-Team and NetApp’s acquisition of SolidFire, I was lucky enough to find myself on a call with the inspirational Amy Lewis, a.k.a. CommsNinja, one of the hosts of TGW. We were discussing social media, personal development, and other such things, and Amy made a comment about how people should blog about more of the things they’re interested in, because you never know which members of your readership may share those interests, and it may also draw in new readers.

While the term “geek” used to mean either “an unfashionable or socially inept person” or “a carnival performer who performs wild or disgusting acts”, these are no longer the de facto definitions. These days, “geek” often refers to someone who “engages in or discusses computer-related tasks obsessively or with great attention to technical detail” and/or someone who “is or becomes extremely excited or enthusiastic about a subject, typically one of specialist or minority interest”. It is definitely this latter one that my geek brethren and I identify with. While the stereotypical geek is typically also a “computer nerd”, I’m betting most, if not all, of them have some pretty intense non-computer-related hobbies as well. For example, here’s a list of interests I’ve obsessed over in the past five to ten years, some of which are current while others have passed or are waning:

  • Photography
  • “Computers” (this one has so many facets, I almost hesitate to include it on its own)
  • Sailing
  • Languages (not programming)
  • Cooking
  • RaspberryPi
  • Ingress
  • Carpentry
  • Snowboarding
  • Virtualization
  • Magic: The Gathering (This one is actually from about 20 years ago but has resurfaced now that I can play with my son)

This list is by no means exhaustive, but rather a glimpse into who I am. Personally, I find myself geeking out over so many different things that the hardest part about being a geek is finding the time to devote to such a plethora of hobbies, and then, when something new strikes my fancy, finding time to fit that one in too. Maybe that’s what makes a geek a geek: someone who is willing to obsess over something that interests them until they get good at it. Or maybe a geek is just someone who loves to learn new things? I do find that my interests are often related to the sciences in one way or another, be it math, physics, or chemistry. I never thought I’d mention that last one, until recently, when I started reading about soap making after a discussion with a newer co-worker; so stay tuned, there may be a soap-making post in my future.

So why exactly am I posting this at all? Well, I’m basically giving myself permission to post about whatever the heck I want; hopefully it’ll be enough to keep some people interested.

8.3.1 and 8.3.2…dot releases never felt so good.

NetApp released ONTAP 8.3 over a year ago now, and since then two minor releases have followed, with far more payload than you’d usually expect from dot releases. Typically the major releases get all the hype, but after you see all that has been included in the two minor releases of 8.3, you’ll see what all the fuss is about.

First of all, if you can’t remember what was included with 8.3, go over here and read about it. Highlights included but weren’t limited to:

  • MetroCluster
  • Non-disruptive LUN migration
  • Serious performance improvements in the flash space
  • Version independent SnapMirror

When 8.3.1 came out in early September, it brought some pretty spectacular features:

  • More flash performance improvements
  • Storage Virtual Machine Disaster Recovery (SVM DR)
    • This is the ability to replicate entire SVMs, not just volumes, to another cluster. It has two modes, Identity Preserve True or False; the former replicates all the network-related info for those whose DR site supports it, i.e., L2 connectivity. (There’s a rough CLI sketch after this list.)
  • In-line compression and zero elimination
  • Two-node MetroCluster, i.e., one node per site
    • Uses an ATTO bridge to connect the disk
    • This is more of a “stretch MetroCluster” and is suitable for campus-level DR, where the loss of a building is being protected against.
  • Some performance metrics now available in System Manager
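On that SVM DR note, under the hood it’s a vserver-scoped SnapMirror relationship. A rough sketch only, with hypothetical SVM names and going from the 8.3.1 documentation rather than my own lab, so verify the syntax:

snapmirror create -source-path svm1: -destination-path svm1_dr: -type DP -identity-preserve true

snapmirror initialize -destination-path svm1_dr:

Note the trailing colons: they tell SnapMirror you mean the whole SVM rather than an individual volume.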

8.3.2:

  • Copy Free Transition
    • This has got to be one of the coolest features so far: it lets you stand up a new cDOT system with minimal disk, then move your 7-Mode disk over to it without having to do a data migration.
  • In-line deduplication
  • More performance improvements for SAN on AFF
  • In-place, adaptive compression
  • Fibre Channel over IP for MetroCluster
    • Up to 200km, between switches that support it, such as the Cisco 9250i
  • Quality of Service policies, previously limited to 8 nodes, can now be applied to up to 24
  • System Manager Improvements:
    • Cluster performance charts with IOPS and latency available within System Manager
    • Manual IP assignment
      • Previously you had to create a subnet first; that is no longer the case
    • SyncMirror (introduced in 8.3 with MetroCluster) support in System Manager
      • This is not the same as synchronous SnapMirror, which is still not available in cDOT
    • You can now manage your MirrorVaults in the GUI
    • Various other System Manager improvements, far too many to list.

As you can see from the points I’ve covered off, the dot releases of cDOT 8.3 have been packing quite the payload. I’m sure that not having to support 7-Mode in the same release has helped speed up the development cycle for many features, not to mention that some of those engineers have probably been reassigned to cDOT work. I’ve left some of the more esoteric details out, but if you want to see them all, head over here to read the release notes for the individual versions.

NetApp Certifications, What’s New and What You Need to Know.

If you’re a NetApp nerd like myself, or if you prefer to call yourself an “avid NetApp user”, then you’re probably familiar with their annual conference, NetApp Insight, and the fact that it is just around the corner. Since you’re reading this article at all, you may already be certified or have at least considered getting certified. There’s not a lot new since the major update back in April, when the exams were updated to reflect the release of 8.3, but there is at least one completely new exam and certification, the NetApp Certified Storage Installation Engineer, Clustered Data ONTAP (NS0-180), which becomes available on September 23, 2015.

This year at Insight, there are going to be a whopping 14 separate exam prep sessions at both the Las Vegas and Berlin editions of the conference, covering the following:

  • NS0-155, NCDA 7-Mode
  • NS0-157, NCDA cDOT
  • NS0-505, NCIE-SAN E-Series
  • NS0-506, NCIE-SAN cDOT
  • NS0-511, NCIE-Data Protection

The beauty of Insight is that during the course of the conference, you can take as many exams for free as you’d like, as long as you stay within the exam retake policy. If the exam centre is anything like years past, it will be very busy, so I highly recommend you pre-register for up to three of your exams now over here.

While we’re on the topic of certifications, NetApp is going to show the proverbial love to those of us who are already certified, as well as to those who get certified while at Insight. I won’t give away all the details, but there will be different schwag based on which certifications you already hold. They’re also going to hold the first-ever Appreciation and Recognition event for the NetApp Certified.

So with all this talk about certifications, let’s talk about preparing to get certified. The first thing you should do is follow @NetAppCertify on Twitter, join the discussion over at the NetAppU Community, and peruse the materials and sample exams available here. Sample exams are available for NS0-157, NS0-506, NS0-511, and the latest addition, NS0-180. If you already have your NAIPCDOT, you’ll need to earn the NCSIE cDOT by November 1, 2016, so I’m sure this will be a popular one. For the complete low-down on what NS0-180 might mean to you, check out this NetApp Community entry here. Lastly, be sure to check out The Value of NetApp Certification video as well, especially since some of my friends are in it.

Finally, to further emphasize the value of NetApp certifications, starting in October you’ll be able to add all-new digital badges to your LinkedIn profile, which will help job-seekers and recruiters find each other.


This new digital badge helps protect the value of your certification while providing easy verification of your NetApp certifications.

Badgers

Insight is just over a month away as of this writing, and it’s time to start studying so that you can take advantage of those free exams, which by now you’ve registered for, right? I know I have.

At the Las Vegas version of Insight, make sure you stop by the Geek & Greet Certification Appreciation Event on Wednesday at 5:15 and say hi to me and my fellow A-Team members; we may even buy you a beer.