PAVMUG’s annual users conference was held today at the
Valley Forge Radisson. This event is a
good way to socialize with peers, learn
about VMware-related products, and talk to vendors.
One of the more interesting vendors at today’s conference
was PernixData, a storage performance acceleration company co-founded by Satyam Vaghani, who created the
VMFS file system while working for VMware. PernixData came out
of stealth mode earlier this year; its product is currently in beta, with the 1.0 release expected to ship soon.
What is Flash Virtualization Platform?
Flash Virtualization Platform (FVP) is PernixData’s first product. FVP takes the local PCIe flash and SSDs in your vSphere hosts and
pools them together to localize
disk reads and writes through high-speed caching. The
more data a VM can read from and write to this flash pool, the faster the VM’s disk
performance and the lower the latency and utilization on the existing SAN spindles.
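To make the idea concrete, here is a minimal, purely conceptual Python sketch of a host-local flash cache fronting a SAN datastore; the class and names are my own invention for illustration, not anything from FVP itself.

# Conceptual sketch only: a host-local flash cache sitting in a VM's read path
# in front of a SAN datastore. All names are hypothetical.
class FlashCache:
    def __init__(self, san):
        self.san = san      # backing SAN datastore (here just a dict of block -> data)
        self.flash = {}     # local PCIe flash / SSD cache

    def read(self, block):
        if block in self.flash:       # cache hit: served from local flash,
            return self.flash[block]  # no round trip to the SAN spindles
        data = self.san[block]        # cache miss: fetch from the SAN...
        self.flash[block] = data      # ...and keep a local copy for next time
        return data

san = {0: "block-0", 1: "block-1"}
cache = FlashCache(san)
cache.read(0)   # miss: goes to the SAN
cache.read(0)   # hit: stays on local flash, the SAN is never touched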
What makes FVP unique:
- An FVP-enhanced VM’s write cache can be made highly available on one or more
other hosts in the cluster. Why is this
important? If a VM’s writes are only being written
to a local flash resource and that host fails,
then unless there is a redundant copy of the write cache on another
host, data could be lost when the
VM is restarted on a different host.
- FVP-enhanced VMs can be vMotioned to other FVP-enabled
vSphere hosts, and the VM’s cache is copied to the destination host’s local cache
via the vMotion network. The result is that
the cache’s benefits follow the VM no matter where it resides in the pool (see the sketch after this list).
- FVP-enhanced VMs can be vMotioned to non-FVP-enabled
vSphere hosts as well. When migrated to
a non-FVP-enabled vSphere host, the VM continues
to work as normal, just without the benefit
of the high-speed local cache that FVP provides.
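The cache-follows-the-VM behavior can be pictured with another small conceptual sketch. Again, every name here is hypothetical; it only illustrates the idea of copying a VM’s cached blocks to the destination host’s local flash during a migration.

# Conceptual sketch: when a VM moves between FVP-enabled hosts, its cached
# blocks are copied to the destination host so the cache stays warm.
# Hypothetical names; not PernixData code.
class Host:
    def __init__(self, name):
        self.name = name
        self.cache = {}   # per-VM cached blocks held on this host's local flash

def migrate_vm(vm_name, source_host, dest_host):
    cached_blocks = source_host.cache.pop(vm_name, {})
    # In the real product this copy rides over the vMotion network;
    # here it is just a dictionary handed from one host object to another.
    dest_host.cache[vm_name] = cached_blocks

esx1, esx2 = Host("esx1"), Host("esx2")
esx1.cache["web01"] = {0: "block-0", 7: "block-7"}
migrate_vm("web01", esx1, esx2)
assert esx2.cache["web01"] == {0: "block-0", 7: "block-7"}   # cache arrived with the VM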
FVP is not a SAN
replacement. VM data still resides on
your SAN; FVP increases VM disk
performance while reducing SAN utilization.
How does it work?
PernixData developed a driver that uses vSphere’s storage
APIs. One of the keys to the product is
its simplicity. Here is the “complexity”
involved in getting FVP running:
- Download FVP VIB
- Install FVP VIB using vSphere Update Manager (a command-line alternative is sketched after this list)
- Install vSphere plugin
- Add PCIe Flash/SSD drives to vSphere hosts
- Install the FVP management software (a management interface that gathers performance data; it is not part of the caching mechanism)
- Open FVP vSphere Plugin
- Choose flash resources to add to flash pool
- Choose VMs that will use flash pool
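As an aside on the VIB step, Update Manager is not the only way to get a VIB onto a host; ESXi’s own esxcli can install one from an offline bundle. The snippet below simply wraps that command from Python, and the bundle path and filename are hypothetical; use whatever PernixData actually ships.

import subprocess

# Hypothetical path; substitute the actual FVP offline bundle you downloaded.
bundle = "/vmfs/volumes/datastore1/pernixdata-fvp-offline-bundle.zip"

# "esxcli software vib install -d <bundle>" is the standard command-line
# alternative to Update Manager for installing a VIB from an offline bundle.
subprocess.check_call(["esxcli", "software", "vib", "install", "-d", bundle])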
That’s it. No VM
drivers, no special configuration. As
Satyam was going through the install process, all I could think of was the Geico
commercial: “So easy a caveman can do it.”
VMs can be configured to use write-through mode, which acts
as a read cache, or write-back mode, which caches both reads and writes. When choosing write-back mode, you have the
option of creating redundant copies of each write on other FVP-enabled
hosts. These redundant copies
prevent the data loss that could occur if a vSphere host fails before the VM’s writes in
the flash pool cache have been committed to the SAN.
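Here is one more rough Python sketch of the difference between the two modes, including the redundant peer copy; it only illustrates the ordering of acknowledgements and destaging under my own hypothetical names, not how FVP is actually implemented.

# Conceptual sketch of write-through vs. write-back caching with an optional
# redundant peer copy. Hypothetical names; ordering only, not FVP internals.
class CachedDatastore:
    def __init__(self, san, peer_flash=None, write_back=False):
        self.san = san                  # the authoritative copy of the data
        self.flash = {}                 # this host's local flash cache
        self.peer_flash = peer_flash    # redundant write cache on another host
        self.write_back = write_back

    def write(self, block, data):
        self.flash[block] = data
        if self.write_back:
            # Write-back: acknowledge once the data is in local flash (and,
            # optionally, mirrored to a peer host); destage to the SAN later.
            if self.peer_flash is not None:
                self.peer_flash[block] = data
        else:
            # Write-through: the SAN gets the data before the write completes,
            # so the local flash only ever accelerates reads.
            self.san[block] = data

    def destage(self):
        self.san.update(self.flash)     # lazily commit cached writes to the SAN

san, peer = {}, {}
ds = CachedDatastore(san, peer_flash=peer, write_back=True)
ds.write(3, "block-3")   # acknowledged from flash; the peer holds a redundant copy
ds.destage()             # the SAN catches up afterwards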
Once the product is in place and VMs are using the flash
pools, you can view performance data
that shows your cache IOPS, throughput, hit rate, and, most importantly, how much throughput the
product kept off the SAN.
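The hit rate on that screen is conceptually just hits divided by total cache lookups; a trivial illustration with made-up numbers:

# Made-up counters, purely to show what a cache hit rate means.
hits, misses = 9000, 1000
hit_rate = hits / (hits + misses)   # 0.9, i.e. 90% of reads never touched the SAN
print(f"hit rate: {hit_rate:.0%}")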
Is FVP the answer to a vSphere environment’s storage I/O and
latency problems? Based on the simplicity, performance, and polish
of the product, I hope so.
Labels: FVP, Latency, PernixData, SAN, Satyam Vaghani, Storage, vscsifilter, vSphere, vStorageAPI