IBM Spectrum Protect (TSM) - Release Notes (02/22/19)

Live Optics Agent

Release Date: 2/18/19

Live Optics Collector Version: 2.5.15.466821

Live Optics Script Versions: Linux 1.0.9 / Windows 1.0.6

IBM Spectrum Protect (TSM) Versions Supported: 7.x/8.x

 

 

  • IBM Spectrum Protect (TSM)
    • (NEW!) No longer in Preview Only mode - PowerPoint presentations are now available for download! Please note that this release is associated with a new version of the Live Optics Collector and its associated IBM Spectrum Protect (TSM) assessment scripts, which create a new Node tab in the XLSX. It is strongly recommended that you download the latest Collector before running an assessment:  https://app.liveoptics.com/collector/download

 

You can download the PowerPoint by clicking the Download PPT button while viewing a completed project:

 

 

PowerPoint slides and descriptions

 

Slide 1 – Title Slide

 

 

Among other standard Live Optics information, this slide contains the version of IBM Spectrum Protect under assessment; in this case, 8.1.5.0.

 

Slide 2 – Summary

 

 

This Summary slide is meant to convey an overall picture of the backup environment.  TSM-specific categories and terminology are used, and each category is calculated as follows:

 

FETB:

  • Step 1
    • Referencing the FileSpace tab: for each row, multiply Column E [Capacity] by the adjacent cell in Column F [Pct_Util] (which is a percentage, so convert it to a decimal first), then sum all results. For example,

Col E * Col F (%) = Result

214936.99 * 56.03% = 120429.1955

 

  • Step 2
    • Add the following to the total from Step 1:

For each Node_name (Column A) which has a Filespace_type (Column D) starting with API:, look up that Node_name in the Summary tab's Column E (Entity). There will be duplicates; take the largest backup Bytes value (Column L) from each grouping.

Note that Column E [Capacity] is in MB and needs to be converted!
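The two-step FETB calculation above can be sketched in plain Python. The dictionary field names are illustrative stand-ins for the lettered columns described above; units follow the notes (Capacity in MB, Bytes in bytes):

```python
from collections import defaultdict

def fetb_mb(filespace_rows, summary_rows):
    """Sketch of the two-step FETB calculation; result is in MB.

    filespace_rows: dicts with 'node_name' (Col A), 'filespace_type' (Col D),
                    'capacity' (Col E, MB) and 'pct_util' (Col F, percent).
    summary_rows:   dicts with 'entity' (Col E) and 'bytes' (Col L, bytes).
    """
    # Step 1: sum Capacity * Pct_Util (percent -> decimal) over FileSpace rows.
    total = sum(r["capacity"] * r["pct_util"] / 100.0 for r in filespace_rows)

    # Step 2: for nodes whose Filespace_type starts with "API:", take the
    # largest backup Bytes per matching Entity in the Summary tab.
    api_nodes = {r["node_name"] for r in filespace_rows
                 if r["filespace_type"].startswith("API:")}
    largest = defaultdict(int)
    for r in summary_rows:
        if r["entity"] in api_nodes:
            largest[r["entity"]] = max(largest[r["entity"]], r["bytes"])
    total += sum(largest.values()) / (1024 * 1024)  # bytes -> MB
    return total
```

Converting the MB result onward to TB (for "front-end TB") is a straight division by 1024 twice.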

 

Number of Nodes: From the (new for this release) Node tab

 

Number of Files: Sum of Column E [Num_Files] in the Occupancy tab.  Please note that this is inclusive of offsite copies (Copypool is offsite, Tapepool is onsite).

 

Note that the number of Jobs can be deduced as (number of rows in the Summary tab - 1).

 

Slide 3 – Environmental Topology

 

 

This slide helps give an overall picture of how operating systems are distributed in the environment, and breaks down capacity by specific TSM Domains (limited to 28 entries).

 

The "Capacity per Domain Name" table shows the distribution of capacity across the environment.  It is based on the Domain Names and is presented in GB (not percentages), in descending order, capped at 14 table rows on the slide.  Leveraging information from the Nodes tab, we find the total capacity for each Domain Name (capacity is in the Occupancy tab, listed by Node_Name).

 

Here is how this is calculated:

 

1- We group all the nodes together by “Domain_Name” (there can be, and usually are, multiple Node_Names in each domain name).

 

2- We look up the Node_Names in the Occupancy tab, sum Column H [Reporting_Mb], and convert to GB. That sum is what is listed in the table for each Domain_Name.
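The two steps above can be sketched as follows (field names are illustrative; the real data lives in the lettered columns of the Nodes and Occupancy tabs):

```python
from collections import defaultdict

def capacity_per_domain_gb(node_rows, occupancy_rows):
    """Group nodes by Domain_Name, then sum Reporting_Mb per domain (as GB).

    node_rows:      dicts with 'node_name' and 'domain_name' (Nodes tab).
    occupancy_rows: dicts with 'node_name' and 'reporting_mb' (Occupancy
                    tab, Column H, in MB).
    """
    domain_of = {r["node_name"]: r["domain_name"] for r in node_rows}
    totals = defaultdict(float)
    for r in occupancy_rows:
        domain = domain_of.get(r["node_name"])
        if domain is not None:
            totals[domain] += r["reporting_mb"] / 1024.0  # MB -> GB
    # Descending by capacity, capped at 14 rows as on the slide.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:14]
```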

 

For the OS Distribution Chart:

A similar process as above, except we sum capacity by OS type, where capacity is taken from Column H [Reporting_Mb] in the Occupancy tab. From the Nodes tab, we group on Column C (Platform_Name), which reflects each node's OS. Grouping is done by flavors of Unix, Windows, and Other (for example, DB2 might be listed with no OS, or the field is blank).  Architectures are not delineated (i.e., 64-bit or otherwise); that information shows in the table on the right, or you can look it up in the XLSX for this project.  The bars reflect total capacity (converted from MB to GB).
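A minimal sketch of the Platform_Name grouping might look like the function below. The exact matching rules Live Optics uses are not documented here, so the substring checks are assumptions chosen to illustrate the Unix / Windows / Other split:

```python
def os_bucket(platform_name):
    """Illustrative grouping of Platform_Name (Nodes tab, Column C) into
    the chart's three buckets; the matching rules are an assumption."""
    p = (platform_name or "").strip().lower()
    if not p:
        return "Other"  # e.g., a DB2 node may be listed with a blank OS
    if "win" in p:
        return "Windows"
    if any(u in p for u in ("aix", "linux", "solaris", "hp-ux", "unix")):
        return "Unix"
    return "Other"
```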

 

-------------------------------------------------------------------------------------------------

NOTE: If this slide is blank, it's likely that Node information is missing.  With this release, there is a new tab in the XLSX called “Node” which is populated via a new select statement in the assessment script.  Please check the XLSX for this project, and if the Node tab is missing, please ensure you are running the latest version of the TSM Live Optics script (version 1.0.9 for Linux, 1.0.6 for Windows). If you are sure you are running the latest version, then please contact support@liveoptics.com for assistance.

-------------------------------------------------------------------------------------------------

 

Slide 4 – Backup Overview

 

 

The goal of this slide is to present an overview of backups which may be of interest. 

 

The chart on the left presents an overall picture of backup time frames, to help show the number of jobs running 8+ hours (which may be of interest to the backup admin).  To determine this, we reference the Summary tab and calculate Column B - Column A (the time delta) for rows where Column C (Activity) is set to BACKUP only.  All types of backups are considered for this chart.

 

The table on the right lists (in descending order) the Top 10 largest Entity backups.  With Column C (Activity) filtered to BACKUP only, we take the largest Bytes value (Column L) for each unique Entity name and convert it to GB.
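The Top 10 table logic can be sketched like this (field names are illustrative stand-ins for the Summary tab columns):

```python
def top_10_largest_backups_gb(summary_rows):
    """For rows with Activity == BACKUP, take the largest Bytes (Column L)
    per unique Entity (Column E), convert to GB, and list the Top 10
    in descending order."""
    largest = {}
    for r in summary_rows:
        if r["activity"] == "BACKUP":
            e = r["entity"]
            largest[e] = max(largest.get(e, 0), r["bytes"])
    ranked = sorted(largest.items(), key=lambda kv: kv[1], reverse=True)[:10]
    return [(e, b / 1024**3) for e, b in ranked]  # bytes -> GB
```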

 

NOTE: A common issue in discussions with TSM backup admins is about them backing up their TSM server (i.e., they added a client to the TSM server because they want to backup their system hosting TSM). In some environments, this can be the largest backup and isn't a Best Practice.  Refer to the project's XLSX and check the Summary Tab, Column C (Activity) for DBBackup, which is the TSM catalog.

 

Slide 5 – Top 5 Domain-Class Retention

 

 

This slide depicts the Top 5 Domain-Class pairings with regards to retention, with preference given to VerExists.  This is determined by filtering Set_Name on Active only and sorting the VerExists, VerDeleted, RetExtra and RetOnly columns (Columns E through H) in the Bu_Copygroup tab in the XLSX for this project.

 

The Y axis is the [Domain Name-Class Name] pairs from (Column A-Column C) in Bu_Copygroup tab

 

To get the Top 5, we sort Column E in descending order, then F, then G, then H, and we do not include rows where any of these columns = NOLIMIT (forever).  It's not that NOLIMITs aren't important to review, but they are easily found in the XLSX, and this chart helps show the other, more interesting retention settings from a business perspective.
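The filter-then-sort described above can be sketched as a single multi-key sort (field names are illustrative stand-ins for the Bu_Copygroup columns):

```python
def top_5_retention(copygroup_rows):
    """Filter to Set_Name == Active, drop rows where any retention column
    is NOLIMIT, then sort VerExists, VerDeleted, RetExtra, RetOnly (Columns
    E-H) in descending order and return the Top 5 Domain-Class pairs."""
    cols = ("ver_exists", "ver_deleted", "ret_extra", "ret_only")
    rows = [r for r in copygroup_rows
            if r["set_name"].upper() == "ACTIVE"
            and not any(r[c] == "NOLIMIT" for c in cols)]
    # Sorting on a tuple of the four columns gives the E-then-F-then-G-then-H order.
    rows.sort(key=lambda r: tuple(int(r[c]) for c in cols), reverse=True)
    return [f"{r['domain_name']}-{r['class_name']}" for r in rows[:5]]
```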

 

Slide 6 – Top 10 Longest Running Backups

 

 

To calculate the longest running backups, we reference the Summary tab in the XLSX for this project and populate the chart in descending order as follows:

 

Entity: [Entity], Column E

Duration: ([End_Time] - [Start_Time]) (Column B - Column A)

Data Transferred: [Bytes] Column L (converted to Gigabytes)

Throughput:

[Bytes] Column L (converted from Bytes to Megabytes) / Duration (from above, converted to seconds)

 

As obvious as it sounds, this is the listing of the backups which are in the Top 10 for duration.

Note:

Capacity is taken from Column AI (Bjs Backed Up Size, Raw Data tab) and converted to GB from Bytes (please note that Bjs Backed Up Size may be zero in some cases and you may want to reference Columns AF, AG and AH to determine read, stored and processed sizes respectively).

Duration is derived from Raw Data Tab’s Column M (Ts End Time) – Column L (Ts Creation Time)

Throughput is from Column T (Ts Average Speed, Raw Data tab) converted to MB/Sec

 

Slide 7 – Top 10 Largest Nodes

 

 

To determine the Top 10 Largest Nodes, we reference the Occupancy tab to get the following and take the largest from each unique Node_Name set:

 

Node name: Column A, [Node_Name]

Number of files: Column E, [Num_Files]

Data Stored: Column H, [Reporting_Mb] (converted from Megabytes to Gigabytes)

 

The table is sorted in descending order by Data Stored.
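A sketch of the per-node selection and sort (field names are illustrative stand-ins for the Occupancy tab columns):

```python
def top_10_largest_nodes(occupancy_rows):
    """Per unique Node_Name (Column A), keep the row with the largest
    Reporting_Mb (Column H), then return the Top 10 as
    (node_name, num_files, data_stored_gb), sorted descending by stored data."""
    biggest = {}
    for r in occupancy_rows:
        node = r["node_name"]
        if node not in biggest or r["reporting_mb"] > biggest[node]["reporting_mb"]:
            biggest[node] = r
    ranked = sorted(biggest.values(),
                    key=lambda r: r["reporting_mb"], reverse=True)[:10]
    return [(r["node_name"], r["num_files"], r["reporting_mb"] / 1024.0)
            for r in ranked]  # MB -> GB
```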

 

Slide 8 – Total Daily Data Transfer

 

 

An often-overlooked aspect of a backup environment assessment is the bigger picture of the amount of data being transferred each day.  This slide helps paint a picture of a backup admin’s practices along with the environment.

 

Basically, total backup jobs are broken down by day by referencing the Summary tab, with Activity set to BACKUP only:

 

Number of Backup Jobs:

To determine this, we sort the Summary tab by Start_Time, then count the number of rows per day.

 

Capacity:

The sum of the data in Column L [Bytes] (converted to GB) per day, keyed on Column A [Start_Time].

 

The peak of each bar shows the total capacity initiated for backup on that day.  Jobs can span more than one day, so a job is attributed to the day on which it started, provided it completed successfully, using the capacity backed up at completion.  Ts End Time is not evaluated for the specific date/time.

 

Average Daily Transfer is the amount of data transferred (not the transfer rate) and is an average of the bar chart total values (in red): Sum(each day’s total capacity backed up) / number of days, where the 30 days are the most recent 30 days of backups, starting at the most recent backup (i.e., not calendar days).  It correlates with the left vertical axis.

 

Backup Jobs (purple line) are the number of jobs that were initiated on each day.  These values align with those on the right-hand vertical axis.

 

Slide 9 – Backup Summary: Average Backup Rate by Number of Jobs / Top 5 Slowest Node Backups

 

 

How this slide is calculated:

 

Referencing the Summary tab in the XLSX for this project the Top Chart is populated as follows:

 

We calculate the Throughput in MB/s for each job,

 

(Column L), in MB

---------------------

(Column B [End Time] – Column A [Start_Time]), in Seconds

 

Then we “bucketize” the range of throughput by rounding up to the nearest 10's value (using a Ceiling function), with MB/s shown as a whole number.  So, for example:

>0 and <=10 fall in the 10 MB/s bucket

>10 and <=20 fall in the 20 MB/s bucket, and so on.
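The bucketing rule is a ceiling to the nearest multiple of 10:

```python
import math

def throughput_bucket(mb_per_s):
    """Round a job's MB/s up to the nearest multiple of 10 (Ceiling),
    so >0..10 -> 10, >10..20 -> 20, and so on."""
    return int(math.ceil(mb_per_s / 10.0) * 10)
```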

 

We ignore zeros and truncate the chart at 300 MB/s.  The general idea of this chart is to give an overview of transfer rates across the environment.  The highest-throughput jobs (perhaps fast Tape/VTL) aren't factored into the chart; it's probably known, and expected, that these fast jobs exist.  You can always reference the spreadsheet, create a throughput column, and sort to look at more detailed information.

 

Again, referencing the Summary tab in the XLSX for this project the Bottom Table is populated as follows:

 

The average throughput (using the formula above) and capacity for each unique grouping of Entity is calculated and the 5 slowest of these averages are listed in the table with slowest at the top.

Rows with an empty Entity in Column E are not factored.

 

Note: NODE is equivalent to ENTITY (Column E in Summary tab), IBM has a naming inconsistency here.

Note: Each Excel row in the Summary tab is a backup job

 

Known Issues

 

Please note that if you select the Download PPT button on a project which was created prior to this release, you may get the following:

 

 

The Live Optics engineering team is researching the feasibility of enabling this functionality for projects created prior to this release, for all DPS platforms.
