Ignore latencies that look abnormal when IOPS are low (say, below 200).
The reason lies in how latencies are measured by most server OSes and storage subsystems. The operating system maintains a cumulative counter of the time each IO takes. A tool like Live Optics reads that counter along with the IO count for the corresponding sample period and does simple division: avg. latency = total IO time / total IOs for the sample period.
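The calculation above can be sketched as follows. This is a minimal illustration with assumed counter names, not the actual Live Optics implementation: two snapshots of the cumulative counters bracket a sample period, and the deltas are divided.

```python
# Sketch (hypothetical names) of deriving average latency from two
# snapshots of cumulative OS counters: avg = delta time / delta IOs.

def avg_latency_ms(io_time_ms_start, io_time_ms_end,
                   io_count_start, io_count_end):
    """Average IO latency for one sample period, in milliseconds."""
    delta_time = io_time_ms_end - io_time_ms_start  # total IO time accrued in the period
    delta_ios = io_count_end - io_count_start       # IOs completed in the period
    if delta_ios == 0:
        return 0.0  # no IO this period; average latency is undefined
    return delta_time / delta_ios

# 10,000 IOs that together accrued 50,000 ms of IO time -> 5 ms average
print(avg_latency_ms(0, 50_000, 0, 10_000))  # 5.0
```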
This works fine when there is a steady stream of plentiful IO.
However, when IO starts and stops around idle periods, the results can be misleading. This is partly because reading the latency counter and reading the IO counter are not atomic operations. A small timing skew between the moments the two counters are read can result in overstated latencies, and the fewer IOs in the sample, the larger that distortion becomes.
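The effect of that skew can be sketched with hypothetical numbers: suppose the time counter is read a moment after the IO counter, so it includes time from a few IOs the IO count has not yet captured. The same absolute skew barely moves a large sample but badly distorts a small one.

```python
# Illustrative sketch (made-up numbers): a fixed read skew leaks the time of
# a few extra IOs into the numerator but not the denominator.

TRUE_LATENCY_MS = 5.0
SKEW_IOS = 20  # IOs whose time leaked into the sample via the read gap

def measured_latency_ms(ios_in_period):
    # time counter is inflated by the skewed IOs; IO counter is not
    total_time = (ios_in_period + SKEW_IOS) * TRUE_LATENCY_MS
    return total_time / ios_in_period

print(measured_latency_ms(10_000))  # 5.01 ms: negligible error at high IOPS
print(measured_latency_ms(100))     # 6.0 ms: 20% overstated at low IOPS
```

The true latency is 5 ms in both cases; only the small sample reports it as abnormal.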
As a general rule, unless you are seeing a sustained run of bad latencies, ignore latency readings taken while IOPS are low.