Running Squeeze on VMware ESXi for VMAX/PowerMax (12/18/18)

Keith Younger


Live Optics Squeeze supports analyzing a VMware ESXi server’s block storage devices and estimating how well the data will compress when deployed on a VMAX/PowerMax storage system.  This document outlines the process for running one of these collections.

Before You Get Started

Live Optics Squeeze reads data directly from block devices connected to your VMware ESXi server.  Squeeze does not support collecting from a mounted network file share (NFS or CIFS mount).  Squeeze requires root-level privileges to read from the block devices.
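As a quick sanity check before you begin, you can confirm that your shell session on the ESXi host actually has root privileges (`id -u` prints `0` for root in ESXi's busybox shell). This is a generic check, not part of the Squeeze tooling:

```shell
# Confirm the current session is root; Squeeze cannot open
# block devices without root privileges.
if [ "$(id -u)" -eq 0 ]; then
    echo "Running as root - OK"
else
    echo "Not running as root - Squeeze will be unable to read block devices" >&2
fi
```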

Quick-Start Directions

  1. Download the squeeze_x86_64.tar.gz package from Live Optics.
  2. Use a Linux shell, or a tool such as WinSCP or PuTTY, to copy the tar file to your ESXi server.
  3. Unpack the package on your target VMware ESXi server.
  4. Pick a datastore volume with suitable free space for the map file.
    ~ # df -h
    Filesystem Size Used Available Use% Mounted on
    VMFS-5 199.8G 173.0G 26.8G 87% /vmfs/volumes/dpack-ds-vcenter51-linux
    VMFS-5 833.2G 49.5G 783.8G 6% /vmfs/volumes/datastore55
    vfat 4.0G 36.1M 4.0G 1% /vmfs/volumes/50c5f527-4c3ce996-6f0e-24b6fda29bc2
    vfat 249.7M 130.1M 119.6M 52% /vmfs/volumes/1622f970-f02bfff7-ab46-35dc0d2c185e
    vfat 249.7M 8.0K 249.7M 0% /vmfs/volumes/16288d70-cfb3d647-2888-59f3d44268a0
    vfat 285.8M 201.9M 83.9M 71% /vmfs/volumes/50c5f510-8b52a5c2-fe8b-24b6fda29bc2
  5. Run the squeeze executable with the appropriate command options

     ./squeeze -VMAX -Dedup /vmfs/volumes/datastore55/myMapFile

  6. Wait for the collection to complete.
  7. Upload the output file to Live Optics using the upload command (./squeeze -upload <filename.SIOKIT>).

Detailed Instructions

Step 1: Download the Squeeze collector from the Live Optics portal

  • Log in to the Live Optics portal
  • Expand the left menu and click on Collectors -> Squeeze
  • Click on the Linux64 button to download the appropriate version of the Squeeze collector.  

Step 2: Transfer and Extract Squeeze software

  • Use a Linux shell, or a tool such as WinSCP or PuTTY, to copy the tar file to your ESXi server

scp squeeze_x86_64.tar.gz root@<IP Address>:~/.

  • Unpack the compressed TAR file.  
    tar -xzvf squeeze_x86_64.tar.gz

NOTE:  Squeeze generates some local files in its current working directory.  You may therefore wish to create a temporary folder and run Squeeze from there.
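For example, you might create a scratch folder on a datastore and work from there, so Squeeze's local files land in one easy-to-clean-up place (the datastore name below is an example; substitute one of your own):

```shell
# Create a scratch folder on a datastore and make it the
# working directory before running squeeze.
WORKDIR=/vmfs/volumes/datastore55/squeeze-tmp   # example path
mkdir -p "$WORKDIR"
cd "$WORKDIR"
```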

Step 3: Execute the Squeeze program

  • Squeeze requires root privileges to execute.  If you are logged in as root, simply run Squeeze like so:
    root$ ./squeeze -VMAX -Dedupe <mapfile name>
  • For additional configuration options, refer to Appendix I below.

Step 4: Squeeze output files

  • Upon completion, Squeeze will write its output files, including the .SIOKIT results file and a trace file, to the current working folder.
The trace file is useful for Live Optics Support in the unlikely event that something goes wrong.

  • Upload the output file to Live Optics using the upload command (./squeeze -upload <filename.SIOKIT>)

Appendix I

Specifying Devices for Collection

By default, Squeeze will collect data from all of your block devices.  You may optionally specify the devices you wish to collect from on the command line.

     ./squeeze -VMAX -Mapfile <Mapfile path> /dev/sda /dev/sdb /dev/sdc
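If you are unsure which block devices are present, you can enumerate them on the host before choosing. These are standard ESXi commands, not part of Squeeze, and device naming varies from host to host:

```shell
# Two ways to enumerate the host's block devices before a collection:
ls /vmfs/devices/disks/              # raw device nodes exposed by ESXi
esxcli storage core device list      # detailed per-device information
```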

Warning:  Do not use the -skip option when running Squeeze with deduplication.

Locating the De-Dup mapfile

When estimating dedupe and compression ratios for VMAX/PowerMax, you must specify a path for where the dedup map file will be stored. 

Warning:  The de-dup map file can grow very large; there is no fixed upper bound on its size.

Be sure to provide a folder with enough disk space to accommodate the de-dup map file.  Contact your Live Optics representative for more information.
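One way to check the available space before starting a run is shown below. The datastore path is an example, and in ESXi's busybox `df -k` output the fourth column is the Available column (as in the `df -h` listing earlier in this document):

```shell
# Print the free space (in KB) on the datastore that will hold the
# de-dup map file, to judge whether it has enough room.
DS=/vmfs/volumes/datastore55             # example datastore path
avail_kb=$(df -k "$DS" | awk 'NR==2 {print $4}')
echo "Available on $DS: ${avail_kb} KB"
```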

Speeding Up Squeeze with Lots of Devices

By default, Squeeze will only read from one block device at a time.  You may increase this number.  Increasing the number of concurrent devices will generally decrease the overall scan time, as devices are scanned in parallel.  However, before doing this, keep the following in mind:

  • Increasing concurrent devices will only reduce collection time if the block devices have independent backing disks. Concurrent scans on devices that share the same backing disks will not improve performance and might even slow down the collection.
  • Increasing the concurrent operations will increase Squeeze's CPU consumption roughly linearly and will likely increase the IO read workload on the backing disks. Only use this option if the server and backing storage can absorb the extra load.

This example will increase the number of concurrent devices to four:

     ./squeeze -VMAX -concurrent 4

Reducing the CPU usage and IO from Squeeze

By default, Squeeze will attempt to saturate up to three CPU cores for each concurrent device being read.  You may reduce this to a single core, which will also significantly reduce the IO read workload on the underlying backing disks.  However, doing so will increase the overall collection time.

