Profiling a NetApp filer
Our preview of NetApp profiling support is right around the corner, and this entry will give you some features to anticipate as it starts to roll out.
We are expecting select-customer profiling to start as early as May 23rd, with a preview release at the end of May 2018.
NetApp holds a significant share of the storage/filer market, so it only makes sense that Live Optics supports the ability to profile this technology.
Live Optics support will be released in two phases: configuration and performance.
In Phase 1, filer configurations will be pulled automatically by the Live Optics collector.
The collector does this by establishing an SSH connection to the CLI on the NetApp array and running CLI commands to collect the data. The data is then seamlessly transmitted back to the Live Optics Viewer Portal for analysis and display.
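As a rough illustration of that flow, the sketch below runs a set of read-only CLI commands and bundles their raw output for upload. The command names come from this post's command list; the `run` callable is a stand-in for whatever transport the real collector uses (e.g. an SSH channel), and the function names are hypothetical, not the collector's actual API.

```python
# Illustrative sketch only: the real Live Optics collector's internals
# are not public. "run" stands in for an SSH-backed command executor.
CONFIG_COMMANDS = [
    "cluster image show",
    "system node show",
    "volume show",
    "disk show",
    "lun show",
]

def collect_config(run):
    """Run each read-only command and map it to its raw output.

    `run` is any callable that takes a command string and returns its
    stdout; in a real collector this would execute over SSH.
    """
    results = {}
    for cmd in CONFIG_COMMANDS:
        try:
            results[cmd] = run(cmd)
        except Exception as exc:  # keep going if one command fails
            results[cmd] = "ERROR: %s" % exc
    return results

# Example with a stand-in runner instead of a live array:
fake = lambda cmd: "output of " + cmd
bundle = collect_config(fake)
print(bundle["volume show"])  # -> output of volume show
```

Because every command is read-only, a failure on one command simply records the error and moves on rather than aborting the whole collection.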
The SSH session gathers data from various sources, and for those of you who are curious, here are some of the commands that would be issued. The entire process takes about one minute or less from start to finish.
Command list: df, df -S, cluster image show, system node show, volume show, disk show, lun show, node external-cache show, system feature-usage show-summary, cf partner, sysconfig -a, aggr show space, vol status -r
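To give a feel for what the collector has to do with the raw text these commands return, here is a small illustrative parser for df-style output. The sample output and column layout are assumptions for the example; actual formatting varies by ONTAP version and mode.

```python
# Hypothetical example: turn whitespace-separated df-style output
# into per-volume capacity records (values in KB). The sample text
# below is made up for illustration, not real array output.
SAMPLE_DF = """\
Filesystem               kbytes       used      avail   capacity
/vol/vol0/             12345678    2345678   10000000        19%
/vol/data1/            99999999   55555555   44444444        56%
"""

def parse_df(text):
    """Parse df-style output into a list of per-volume dicts."""
    rows = []
    for line in text.splitlines()[1:]:  # skip the header row
        parts = line.split()
        if len(parts) < 4:
            continue
        rows.append({
            "filesystem": parts[0],
            "kbytes": int(parts[1]),
            "used": int(parts[2]),
            "avail": int(parts[3]),
        })
    return rows

for vol in parse_df(SAMPLE_DF):
    print(vol["filesystem"], vol["used"])
```

Structured records like these are what ultimately feed the capacity fields listed at the end of this post.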
Here is the full version support matrix for the preview release.
ONTAP 8.3, 8.3.1, 8.3.2, 9.0, 9.1, 9.2, 9.3
Note: 7-Mode and versions 8.2 and below will be supported shortly afterward, in a subsequent release.
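One way a tool might gate collection on this matrix is to parse the array's version banner and check it against the supported list. The banner format and function below are illustrative assumptions, not the collector's actual logic.

```python
import re

# Supported versions from the preview matrix above.
SUPPORTED = {"8.3", "8.3.1", "8.3.2", "9.0", "9.1", "9.2", "9.3"}

def is_supported(version_banner):
    """Check a 'NetApp Release X.Y.Z ...' style banner against the matrix.

    The banner shape here is an assumption for illustration; real
    version output varies by release and mode.
    """
    m = re.search(r"NetApp Release (\d+(?:\.\d+)+)", version_banner)
    return bool(m) and m.group(1) in SUPPORTED

print(is_supported("NetApp Release 9.3: Mon Feb 12 2018"))  # True
print(is_supported("NetApp Release 8.2.4: older release"))  # False
```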
Here are some screenshots from our mockups; a matrix of data points follows at the end of this post.
Here is a list of fields to be provided (this may not be the final list):
NetApp Release/Installation Date
Total Raw Capacity
Total Size for Aggregate Name
Total Used for Aggregate Name
Total Free for Aggregate Name
Total Size of Node Volume
Total Used of Node Volume
Total Free of Node Volume
Total Snapshot Size of Node Volume
Total Snapshot Used of Node Volume
Total Snapshot Free of Node Volume
Total Deduplication of Volume
Total Compressed of Volume
Total Size of Cluster
Total Used of Cluster
Total Free of Cluster
Total # Drives of Cluster
Total # Shelves
Total Node Total
Total Node Used
Total Node Free