Running Squeeze on VMware ESXi for UNITY (12/18/18)

Keith Younger


Live Optics Squeeze supports analyzing a VMware ESXi server’s block storage devices and estimating how well the data will compress when deployed on a UNITY storage system.  This document outlines the process for running one of these collections.

Before You Get Started

Live Optics Squeeze reads data directly from block devices connected to your VMware ESXi server.  Squeeze does not support collecting from a mounted network file share (NFS or CIFS mount).  Squeeze requires root-level privileges to read from the block devices.

Quick-Start Directions

  1. Download the squeeze_x86_64.tar.gz package from Live Optics
  2. Use either a Linux shell, or a tool such as WinSCP, Putty, etc. to copy the tar file to your ESXi server
  3. Unpack the package on your target VMware ESXi server
  4. Run the squeeze executable with the appropriate command options (./squeeze -UNITY)
  5. Wait for the collection to complete.
  6. Upload the output file to Live Optics using the upload command (./squeeze -upload <filename.SIOKIT>)
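Taken together, the quick-start steps look like the following shell session. This is a sketch: the IP address is a placeholder for your ESXi host, and the .SIOKIT filename will be whatever Squeeze generates on your system.

```shell
# Copy the collector to the ESXi host (replace 192.0.2.10 with your host's address)
scp squeeze_x86_64.tar.gz root@192.0.2.10:~/.

# On the ESXi host: unpack the collector and run it against all block devices
tar -xzvf squeeze_x86_64.tar.gz
./squeeze -UNITY

# After the collection completes, upload the results
./squeeze -upload <filename.SIOKIT>
```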

Detailed Instructions

Step 1: Download the Squeeze collector from the Live Optics portal

  • Log in to the Live Optics portal
  • Expand the left menu and click on Collectors -> Squeeze
  • Click on the Linux64 button to download the appropriate version of the Squeeze collector.  

Step 2: Transfer and Extract Squeeze software

  • Use either a Linux shell, or a tool such as WinSCP, Putty, etc. to copy the tar file to your ESXi server

scp squeeze_x86_64.tar.gz root@<IP Address>:~/.

  • Unpack the compressed TAR file.  
    tar -xzvf squeeze_x86_64.tar.gz

NOTE:  Squeeze generates some local files in the current working directory, wherever you run it.  You may therefore wish to create a temporary folder from which to run Squeeze.
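For example, you could extract and run the collector from a scratch directory so its local files stay in one place. The /tmp/squeeze path below is only an illustration; any writable directory on the host will do.

```shell
# Create a scratch directory and move into it (path is an example)
mkdir -p /tmp/squeeze && cd /tmp/squeeze

# Then extract the collector here and run it from this directory:
#   tar -xzvf ~/squeeze_x86_64.tar.gz
#   ./squeeze -UNITY
```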

Step 3: Execute the Squeeze program

  • Squeeze requires root privileges to execute. If you are logged in as root, simply run squeeze like so:
    root$ ./squeeze -UNITY 
  • For additional configuration options, refer to Appendix I below.

Step 4: Squeeze output files

  • Upon completion, Squeeze writes its output files to the current working folder, including the results file (<filename.SIOKIT>) and a trace file.

The trace file is useful for Live Optics Support in the unlikely event that something goes wrong.

  • Upload the output file to Live Optics using the upload command (./squeeze -upload <filename.SIOKIT>)

Appendix I

Specifying Devices for Collection

By default, Squeeze collects data from all of your block devices.  You may optionally specify on the command line the devices you wish to collect from.

     ./squeeze -UNITY  /dev/sda  /dev/sdb  /dev/sdc

Reading More or Less Data

By default, Squeeze reads only a small sample (15%) of your data.  The compression results are extrapolated from this sample to provide an estimate for your entire dataset.  This greatly reduces the time required for a Squeeze collection and is generally recommended.

To change the amount of data that Squeeze reads, adjust the Skip percentage.  Skip is the percentage of data that Squeeze will overlook.  The default Skip is 85%.

This example will skip only half of the data:

     ./squeeze -UNITY -Skip 50

This example will read all of the data (nothing will be skipped):

     ./squeeze -UNITY -Skip 0

Note that no % sign is given on the command line.  Skip must be an integer value between 0 and 99.
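The relationship between the Skip value and the amount of data read is simple percentage arithmetic: Squeeze reads the remainder after skipping. A small shell sketch of that relationship:

```shell
# A -Skip value of N means Squeeze reads the remaining (100 - N)% of each device
SKIP=85                        # the default -Skip value
READ=$((100 - SKIP))
echo "Skip ${SKIP}% -> Squeeze reads ${READ}% of each device"
# prints: Skip 85% -> Squeeze reads 15% of each device
```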

Speeding Up Squeeze with Lots of Devices

By default, Squeeze reads from only one block device at a time.  You may increase this to any number.  Increasing the number of concurrent devices generally decreases the overall scan time, since devices are scanned in parallel.  However, before doing this, keep the following in mind:

  • Increasing concurrent devices will only reduce collection time if the block devices have independent backing disks. Concurrent scans on devices that share the same backing disks will not improve performance and might even slow down the collection.
  • Increasing the concurrent operations increases Squeeze's CPU consumption roughly linearly and will likely increase the IO read workload on the backing disks. Only use this option if you are not concerned about overwhelming the server and backing storage.

This example will increase the number of concurrent devices to four:

     ./squeeze -UNITY -concurrent 4

Reducing the CPU usage and IO from Squeeze

By default, Squeeze will attempt to saturate up to three CPU cores for each concurrent device being read.  You may reduce this to a single core.  Reducing Squeeze to a single core also significantly reduces the IO read workload on the underlying backing disks, but will increase the overall collection time.

