
PernixData FVP [EOL] Overview

What is PernixData FVP [EOL]?
PernixData FVP is a 100% software solution that clusters server-side flash and RAM to create a low-latency I/O acceleration tier for any shared storage environment.

PernixData FVP [EOL] Customers
GKL Marketing-Marktforschung, University Hospital Leipzig

Archived PernixData FVP [EOL] Reviews (more than two years old)

it_user405018
Technology Sales Director at a tech vendor with 51-200 employees
Vendor
Our company needed IOPS to solve the bottleneck on the storage side, but storage upgrade costs were awful. We achieved IOPS values nearly 6-10 times higher without any hardware upgrades.

What is most valuable?

Incredible IOPS increase in storage: IOPS (input/output operations per second) is what you need to complete disk work faster. When you run a report on your SQL Server, you need as many IOPS as you can get. That is why so many IT teams put their most important databases on SSDs to reach the maximum IOPS their storage can deliver, but SSDs are still very expensive. On the other hand, if you have multiple VDI systems in different locations, you will experience a boot storm every morning on your VM servers. You need IOPS to handle this problem, and that leads you to buying more SSDs (a rough sizing sketch follows the list below).

With an FVP deployment, you create more IOPS than by adding SSDs, at a lower price. Additionally, you will be able to create tiering across your clusters.

  • Architect dashboard tool: You can see the overall performance of your FVP cluster. It even provides valuable information about VM performance that VMware can't report to the user. You can add resources to your cluster or move VMs among tiered clusters. It is also possible to see how much latency has decreased and how many IOPS have been created without adding any SSDs to the storage.
  • Low initial cost
  • Works with different flash resources, such as NVMe/PCIe flash cards, memory, and SSDs
  • Ease of use
  • Stability
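As a rough illustration of the boot-storm sizing mentioned above, here is a back-of-the-envelope calculation; the desktop count and per-desktop IOPS figures are hypothetical and only meant to show how quickly the required IOPS add up.

  # VDI boot-storm sizing sketch (all numbers hypothetical, for illustration only)
  DESKTOPS=200          # desktops booting at roughly the same time
  IOPS_PER_BOOT=50      # assumed peak IOPS each desktop generates while booting
  echo "Peak IOPS the storage tier must absorb: $(( DESKTOPS * IOPS_PER_BOOT ))"   # prints 10000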

How has it helped my organization?

Basically, our company needed IOPS to solve the bottleneck on the storage side, but storage upgrade costs were awful. Instead, we chose FVP and achieved IOPS values nearly 6-10 times higher without any hardware upgrades.

What needs improvement?

Right now, you can only use this technology in VMware environments. Support for other operating systems and hypervisors would definitely help this product reach a much larger community.

This issue has not actually affected me. However, other people won't be changing their hypervisors and OSs to use FVP, so it does limit the audience.

For how long have I used the solution?

I have been using it for more than 18 months.

What do I think about the stability of the solution?

Stability is never a problem with FVP: just install it and forget it. It is a software-only solution, and with parity level 2 you get the same protection level as RAID 6; even if you lose two of your nodes, your data is still reachable.

What do I think about the scalability of the solution?

You can add as many server nodes as you want to your FVP cluster, so it isn’t an issue. But you have to start with at least three nodes.

How are customer service and technical support?

Technical support is quite qualified and responds fast. Fortunately, except for technical questions, we haven’t had to open a ticket for an error or malfunction.

Which solution did I use previously and why did I switch?

We previously used different solutions, but the main problem was that none of them supported all flash resources the way PernixData does. Some of them work only with memory, and some only with SSDs.

How was the initial setup?

You don't even have to shut down the system to install FVP; it is installed on the fly with no downtime. Of course, you have to follow a procedure to install FVP plus Architect. It takes about an hour, at most, under normal conditions.
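For reference, the host-side part of that procedure is a single esxcli VIB install per host; the command family is the same one shown in the upgrade instructions further down this page, and the bundle filename below is only an example.

  # Minimal sketch: installing the FVP host extension on one ESXi host (bundle name is an example)
  esxcli software vib install -d /tmp/PernixData-host-extension-vSphere5.5.0_2.0.0.0-31699.zip
  # No host reboot is required; the management server/Architect component is installed separately.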

What's my experience with pricing, setup cost, and licensing?

If you are a vSphere Essentials Plus customer, then it's a real bargain, because you can buy three FVP hosts at a very low price. Unfortunately, prices are much higher if you use the vSphere Standard or Enterprise editions.

What other advice do I have?

It is pretty effective for companies who need to upgrade their storage or server systems to reach higher IOPS with high protection levels.

It definitely costs less than hardware upgrades, which cannot provide IOPS as high as FVP does.

I can answer any technical question as well because I have installed and managed 7 FVP deployments.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
Matt Baltz
Data Center Engineer at Strategic Solutions of Virginia
Real User
It increased the performance of our legacy SAN while decreasing its load.

What is most valuable?

The analytic data reported by PernixData Architect is by far the best set of storage metrics we have been able to gather for our environment. PernixData FVP increased the performance of our legacy SAN while decreasing its load.

How has it helped my organization?

Architect has reduced the time we spend distinguishing infrastructure issues from application issues. We can tell in real time, on a per-virtual-machine basis, whether there is an underlying storage performance issue that needs to be addressed.

FVP has saved millions of IOPS from being served off of our array, and has therefore decreased the traffic sent to it, all while lowering latency to our VMs.

What needs improvement?

In moving from version 2 to version 3 of the product, the console was moved from the vSphere client to an independent HTML5 interface. While the new interface is a step forward, I believe the transition focused more on the per-VM view, and the holistic, environment-wide metrics are not as easy to view.

For how long have I used the solution?

For nearly 2 years.

What was my experience with deployment of the solution?

Deployment is fairly straightforward and can be done in a 30-day trial period.

What do I think about the stability of the solution?

The product is stable, and if there are any issues, they are addressed swiftly by the PernixData support team.

What do I think about the scalability of the solution?

Scalability issues are practically nonexistent with FVP, as you may use RAM, NVMe, or flash media in a scale-out fashion to increase performance.

Which solution did I use previously and why did I switch?

Prior to PernixData, we did not have a solution for analytic data or host-side acceleration.

How was the initial setup?

The setup entails a management server, a VMware installation bundle (VIB), and acceleration media (FVP only).

What about the implementation team?

We set up our own FVP cluster and have completed multiple deployments for customers.

What other advice do I have?

I would no longer recommend this product since the company has been bought by Nutanix and the brand is effectively being killed off.

Disclosure: My company has a business relationship with this vendor other than being a customer: We are a PernixData Customer and Partner.
ITCS user
Cloud Solutions Architect at Clouditalia Telecomunicazioni
Real User
​We would like to have deeper analysis per virtual machine, but it helps us to boost the performance of our existing storage systems.

What is most valuable?

  • Boosts storage performance, giving new life to our existing storage investment
  • Possibility to use just RAM, or both RAM and SSD storage
  • Pay-as-you-go (PAYG) licensing model (at least for CSPs)

How has it helped my organization?

We've been able to create a new catalogue item; our customers now have the possibility to choose a faster storage for their IaaS at a different price.

What needs improvement?

We would like to have deeper analysis per virtual machine.

For how long have I used the solution?

I've been using it since February 2015, so six months.

What was my experience with deployment of the solution?

No issues encountered.

What do I think about the stability of the solution?

Not because of the product itself. During a power outage we lost some data, but it was our fault, since we had set it up with the write-back option instead of write-through. Anyway, it's a very unlikely scenario.

What do I think about the scalability of the solution?

No; in fact, scalability is one of the product's key advantages.

How are customer service and technical support?

Customer Service:

9/10. Highly responsive; only in the critical situation mentioned above did we have to escalate to a third-level engineer.

Technical Support:

7/10. Qualified, but a little bit rigid when it came to finding a workaround for the situation mentioned above.

Which solution did I use previously and why did I switch?

This is our first solution of this type.

How was the initial setup?

I can say it was reasonably straightforward.

What about the implementation team?

We used a vendor team, and they were highly qualified. Our engineer was one of the first developers of the product and one of their best engineers.

What's my experience with pricing, setup cost, and licensing?

In our case (as a CSP), the PAYG model is perfect. With perpetual licenses, I evaluated the cost as too high, and too hard to afford for a typical private mid-sized company.

Which other solutions did I evaluate?

We evaluated some SDS products, but we wanted to keep our existing storage systems. We decided on a software-only solution, and we didn't even consider other options because of PernixData's leadership in the market.

What other advice do I have?

Ask for support during the setup process. Implement it if it is really needed, i.e., when you have old storage and applications demanding high IOPS, and a new storage investment would cost too much to deliver the same performance.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
ITCS user
Vice President, Products and Services with 51-200 employees
Vendor
By adding storage intelligence at the server tier, it enables an architecture decoupling storage performance from capacity to utilize high speed server media in conjunction with any shared storage.

Originally posted at https://www.freeitdata.com/era-serverside-storage-intelligence.

Virtualization has revolutionized the modern datacenter. However, with increasingly capable servers increasing the virtual machine (VM) density, I/O traffic becoming more randomized, heavy congestion on the network fabric, and storage controller cycles not scaling proportionately with capacity, traditional storage systems are struggling to keep pace. In fact, in a survey of more than 350 IT professionals ranging from C-level executives to hands-on administrators, four out of their top six priorities were either directly related or attributable to storage performance.

Much of this is due to the shared storage array being forced to deliver fast access to active data (performance), retain inactive or “cold” data (capacity) while providing value-added data services at the same time. The end result is wasted capacity, higher cost and an environment not optimized for VMs.

What if you could solve this storage performance challenge by installing a powerful new piece of software? What if this software let you scale-out storage performance in a predictable manner, while avoiding the purchase of unnecessary storage hardware? And what if it was installed in minutes, with no changes to your existing applications, hosts or storage environment?

That’s what PernixData FVP software delivers. By adding storage intelligence at the server tier, it enables an architecture decoupling storage performance from capacity to utilize high speed server media in conjunction with any shared storage to deliver the maximum virtualized application performance at the lowest cost. Installed in the hypervisor kernel in minutes with no reboots or configuration changes required, FVP accelerates both read and write operations using flash and/or RAM to speed any I/O-intensive workload. As a clustered solution, it’s transparent to VM operations (e.g. vMotion, DRS, HA), and provides performance on-demand simply by adding more server resources to the FVP cluster.

Tata Steel saw Oracle query times balloon from 12 to 30 minutes after virtualizing their database. With PernixData FVP software and Intel DC S3700 series SSDs installed in the existing hosts, latency dropped 73% (reports completed in 8 minutes instead of 30), enabling the virtualized environment to outperform bare metal. The remarkable increase in performance with the in-place hardware obviated the need for a redundant pair of all-flash arrays.

An early adopter of Citrix XenDesktop, Southern Waste Systems found itself unable to expand the virtual desktop deployment until the storage latencies were under control. Leveraging the spare RAM footprint in the ESXi hosts to create a clustered layer of acceleration using PernixData FVP software, the latency spikes reaching upwards of 500 ms were eliminated to provide consistent sub-millisecond service levels without the need to make a trip to the colocation facility. The seamless installation was done 100% remotely, during the day, in the production environment without a service interruption.

Disclosure: My company has a business relationship with this vendor other than being a customer: We implement PernixData FVP software for our customers.
ITCS user
Sr. Virtualization Consultant at a tech services company with 51-200 employees
Consultant
It will deliver once you’ve chosen it, but you do need to rethink your current design principles and building blocks on storage performance.

A while ago I did a write-up about PernixData FVP and their new 2.0 release. In blogpost “Part 2: My take on PernixData FVP2.0” I ran a couple of tests which were based on a Max IOPS load using I/O Analyzer.

This time 'round, I wanted to run some more 'real-life' workload tests in order to show the difference between a non-accelerated VM, an FVP-accelerated VM using SSD, and an FVP-accelerated VM using RAM. So I'm not per se in search of mega-high IOPS numbers, but looking to give a more realistic view of what PernixData FVP can do for your daily workloads. While testing, I proved to myself that it's still pretty hard to simulate a real-life workload, but I had a go at it nonetheless… :)

Equipment

As stated in previous posts, it is important to understand that I ran these tests on a home lab, which is not representative of decent enterprise server hardware. That said, it should still be able to show the differences in performance gained by using FVP acceleration. Our so-called 'nano-lab' consists of:

3x Intel NUC D54250WYB (Intel core i5-4250U / 16GB 1.35V 1600Mhz RAM)
3x Intel DC S3700 SSD 100GB (one per NUC)
3x Dual-NIC Gbit mini PCIe expansion (3 GbE NICs per NUC)
1x Synology DS412+ (4x 3TB)
1x Cisco SG300-20 gigabit L3 switch

Note the bold 1.35V. This is low-voltage memory! While perfect for keeping power consumption down in my home lab, it comes with the concession of lower performance compared to 1.5V memory. Since we are testing FVP in combination with RAM, it's good to keep this in mind.

Pre-build, the lab looked something like this (photo in the original post):

FVP version

I updated my FVP installation (host extension and management server) to the newest version, which contains further enhancements to the new FVP 2.0 features.

IO tests

It felt like I fooled around with pretty much every ICF (IOmeter Configuration File) out there. Eventually I customized an ICF based on a 'bursty OLTP (Online Transaction Processing)' workload. OLTP database workloads seemed like a legitimate I/O test, as they are a good example of a workload that needs low latency and high data availability rather than high throughput.

So, the I/O test consists of two workers in I/O Analyzer using a raw VMDK residing on an iSCSI LUN, accessed through the default vSphere software iSCSI adapter. The VMDK has a size of 10GB, representing the working set of my fictional application. I made sure my Synology was pretty much idle when performing the tests.

FVP is configured with the policy 'Write Back (local host and 1 peer)' in order to meet the data availability 'requirement'. I did test with the FVP policy set to write back with zero peers and noticed an improvement, because no additional latency is created by writing cache data to the network peer(s). However, I don't believe that is a configuration that would be used when accelerating an application in an enterprise environment.

The 2 IO workers are configured with the specifications as listed below. The workers are run simultaneously during tests.

Write worker: Constant Write → Bursty Write Seq → Constant Write → Bursty Write Seq → etc.
Read worker:  Bursty Read Seq → Constant Read → Bursty Read Seq → Constant Read → etc.

Constant Write   = 8KB, 100% random write, 1ms transfer delay, 4 I/Os burst length
Bursty Write Seq = 8KB, 100% sequential write, 0ms transfer delay, 1 I/O burst length
Constant Read    = 8KB, 100% random read, 1ms transfer delay, 32 I/Os burst length
Bursty Read Seq  = 8KB, 100% sequential read, 0ms transfer delay, 1 I/O burst length
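If you want to approximate a similar bursty OLTP-style load without IOmeter, fio can express most of the parameters above (8KB blocks, random versus sequential access, think time and burst length). The sketch below only approximates the two 'constant' phases, not the exact ICF used here; the 10GB size mirrors the test disk, while the target file path, runtime and engine are illustrative. It would be run inside a test VM whose virtual disk lives on the accelerated datastore.

  # Rough fio equivalent of the two 'constant' workers (approximation, not the author's ICF)
  fio --ioengine=libaio --direct=1 --bs=8k --size=10g --filename=/data/fio-workingset.dat \
      --time_based --runtime=300 \
      --name=constant-write --rw=randwrite --thinktime=1000 --thinktime_blocks=4 \
      --name=constant-read  --rw=randread  --thinktime=1000 --thinktime_blocks=32
  # thinktime is in microseconds, so 1000 matches the 1ms transfer delay; thinktime_blocks mimics the burst length.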

Results

I used the numbers given by ESXTOP, filtered out the useful ones, and did some Excel work to create these graphs. The contents of these graphs can be compared to the PernixData FVP 'VM observed' numbers.

I could, as in previous FVP posts, have used the much slicker-looking FVP graphs… but this time I did not want to take the FVP graphs for granted. I also wanted to be able to compare FVP modes within the same graphs. A concession of not using the FVP graphs is losing the ability to see the network peer latency, so we'll keep the focus on VM observed latency.
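The filtering and averaging done in Excel can also be scripted. A minimal sketch, assuming the esxtop batch output is first trimmed down to a single column of the metric of interest (esxtop's raw CSV has a very large number of columns, so that extraction step is left out here):

  # Capture 150 samples at 2-second intervals in esxtop batch (CSV) mode
  esxtop -b -d 2 -n 150 > esxtop-run.csv
  # After extracting the column of interest into values.txt (with cut, or a spreadsheet), average it:
  awk '{ sum += $1; n++ } END { if (n) printf "average: %.2f\n", sum / n }' values.txt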

First let us have a look at the latency graphs:

The most important thing to notice in the graphs above is that the latency peaks are flattened and consistent when accelerated by FVP, on top of being dramatically lowered! Lower and consistent latency is a game changer for your customers' user experience: their applications will be more responsive and, again, consistent in performance!

Now check the IOPS graphs:

When comparing the I/O performance, there is a vast improvement when accelerated by FVP. I guess I don't have to point out that a higher number of IOPS is preferred.

Although crunching the data down to averages hides some detail, the average numbers are useful for indicating the difference in performance between the non-accelerated and the accelerated modes:

                            avg. read IOPS   avg. write IOPS   avg. read latency (ms)   avg. write latency (ms)
No(!) FVP acceleration                 184              1520                    23.04                      5.77
FVP 2.0 SSD acceleration              1876              2028                     1.20                      1.28
FVP 2.0 RAM acceleration              4544              2262                     0.27                      0.33

Conclusion

Again, I'm impressed by FVP! From your customers' point of view, they will notice a great deal of performance improvement and performance consistency while using their applications running on your VMs!

PernixData's view ('decouple performance from capacity') is a very interesting one. When adopting these kinds of technologies, we consultants/architects should rethink our current design principles and building blocks for storage performance.

As always, it fully depends on your workloads and on your current experience with storage performance. If you are using an enterprise-class array with FC connections to your hosts, you are probably used to more acceptable latency numbers than if you are using a midrange NFS array with Ethernet connections to your hosts.

But even with that enterprise array delivering acceptable latency and performance: what do you choose when your storage and/or host assets are financially depreciated or running out of support? Will you still go for a traditional storage array? Or will you rethink your design principles and building blocks by designing your array to deliver data services only, while your performance layer resides on your hosts?

Food for thought… We can state that PernixData FVP will do a great job once you've chosen it as your performance layer, and I'm glad to see it being adopted by customers.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
ITCS user
Sr. Virtualization Consultant at a tech services company with 51-200 employees
Consultant
This new FVP compression tech is only used when using 1Gbit network interfaces for your FVP acceleration traffic. It won’t even work on a 10Gbit network because the gain would be close to nothing.

In the blog post "Part 1: My take on PernixData FVP", I mentioned that the release of FVP version 2.0 was coming very soon. Well… PernixData went GA with FVP 2.0 on the 1st of October.

I liked the announcement e-mail from Jeff Aaron (VP Marketing at PernixData), in which he first looks back at the release of FVP 1.0 before mentioning the new features in FVP 2.0:


FVP version 1.0 took the world by storm a year ago with the following unique features:

  • Read and write acceleration with fault tolerance
  • Clustered platform, whereby any VM can remotely access data on any host
  • 100% seamless deployment inside the hypervisor using public APIs certified by VMware.

Now FVP version 2.0 raises the bar even higher with the following groundbreaking capabilities:

  • Distributed Fault Tolerant Memory (DFTM) – Listen to PernixData Co-founder and CTO, Satyam Vaghani, describe how we turn RAM into an enterprise class medium for storage acceleration in this recent VMUG webcast
  • Optimize any storage device (file, block or direct attached)
  • User defined fault domains
  • Adaptive network compression

We will take a look at PernixData FVP 2.0, how to upgrade from version 1.5 and explore the newly introduced features…

Upgrade from FVP 1.5

In order to test FVP 2.0, we first have to upgrade the existing install base. The base components haven’t changed and still consist of the management software and the vSphere host extension.
We are using the FVP host extension version 2.0.0 (duh) build 31699 and management server version 2.0.0 build 6701.0.

1. Before you upgrade!

Note that before upgrading, you may first need to change your write policy to Write Through. This change is not(!) instant; you should monitor the 'Usage' tab and check the 'Requested Write Policy' column to confirm that all your accelerated VMs have transitioned to Write Through mode!

After that, PernixData states that when upgrading from 1.5, you should upgrade the management server first, before upgrading the host extension on your vSphere hosts.

2. Upgrading management server

Upgrading the management server is dead easy; I won't bother you with the next > next > finish windows. :) During or after the upgrade, while viewing your PernixData tab, you may get an error in your vSphere (web) client like the one shown in the original post.

Don't worry… Because of the upgraded management server, your vSphere client plugin is just outdated. Because the FVP extension is still active, acceleration continues during the management server upgrade! Restart your client (or browser) and upgrade to the new FVP 2.0 plugin if you are using the thick client. This is, of course, not necessary when using the web client (which you should probably be using when running vSphere 5.5).

3. Upgrading host extension

In order to install the FVP 2.0 extension, you must uninstall the FVP 1.5 host extension. Therefore, using VUM is not supported for upgrading the FVP extension. Clean installs, of course, can be done perfectly well with VUM.

After upgrading, a reboot is not necessary.

PernixData provides instructions in their upgrade guide. Follow these instructions per host:

1. Put the host in maintenance mode.

2. Login to the host ESXi shell or via SSH as a root user.

3. Using the command below, copy and then execute the uninstall script to remove the existing FVP host extension module: cp /opt/pernixdata/bin/prnxuninstall.sh /tmp/ && /tmp/prnxuninstall.sh
The uninstall process may take a few minutes.

4. Using the esxcli command below, install the PernixData FVP host extension module for version 2.0. For example, if you copied the host extension file to the /tmp directory on your ESXi server, you would execute: esxcli software vib install -d /tmp/PernixData-host-extension-vSphere5.5.0_2.0.0.0-31699.zip

5. Using the command below, back up the ESXi configuration to the boot device: /sbin/auto-backup.sh

6. Remove the host from maintenance mode
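Putting the shell portion of steps 2 through 5 together, the per-host upgrade looks roughly like the sketch below; entering and leaving maintenance mode (steps 1 and 6) is done from vCenter and is only shown as comments, and the bundle path is the example from step 4.

  # Run on each ESXi host after it has been placed in maintenance mode (step 1)
  cp /opt/pernixdata/bin/prnxuninstall.sh /tmp/ && /tmp/prnxuninstall.sh   # step 3: remove the FVP 1.5 extension
  esxcli software vib install -d /tmp/PernixData-host-extension-vSphere5.5.0_2.0.0.0-31699.zip   # step 4: install 2.0
  /sbin/auto-backup.sh   # step 5: back up the ESXi configuration to the boot device
  # Exit maintenance mode from vCenter (step 6); no reboot is required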

Aaaannnd… we’re done! Moving on to do some testing!

DFTM

Or Distributed Fault Tolerant Memory… a key feature of the 2.0 release. Previous FVP versions made it possible to accelerate write caching, with fault tolerance, on supported SSD or PCIe flash devices. Now we can use a RAM repository for cached blocks, with fault tolerance! RAM! That should be fast!

RAM is added with a minimum of 4GB per host and scales up to 1TB per host in increments of 4GB. 1TB should be sufficient in most cases :)

So, in our nano lab, let’s select 8GB of RAM on one of our hosts:

Do note that RAM and a flash device on the same host cannot be selected together: you can have either your flash device or RAM. Or… can you?! You can configure multiple FVP clusters, one containing your flash devices and one containing RAM. Frank Denneman did a nice write-up on such a scenario (see the link in the original post).

We ran a test on FVP 1.5 using VMware I/O Analyzer in blog post part 1. Although that test (a max write IOPS workload) isn't really representative of a real-life workload, it does show the performance gain of accelerated VMs compared to non-accelerated VMs. To keep a clear overview, we will run the same test on an FVP 2.0-accelerated VM on SSD and on an accelerated VM on RAM.
These screenshots were taken during the individual tests on our nano lab (screenshots in the original post): FVP1.5 – SSD – IOPS, FVP1.5 – SSD – latency, FVP2.0 – SSD – IOPS, FVP2.0 – SSD – latency, FVP2.0 – RAM – IOPS*, and FVP2.0 – RAM – latency (note that network acceleration is still handled by an SSD).

To summarize some of the numbers:

                             max IOPS               latency
FVP 1.5 SSD acceleration     ~56,000                0.22 ms
FVP 2.0 SSD acceleration     ~34,000                0.12 ms
FVP 2.0 RAM acceleration*    ~150,000 to ~35,000*   0.04 ms

* Note: I did notice some strange numbers here. Testing began at around 200,000 IOPS(!) and then dropped to ~35,000. Considering we're using a nano lab, with not the fastest 1.35V DDR3 1600MHz RAM, on an Intel NUC, this may not be an ideal platform for testing RAM with FVP. We will configure and test another FVP cluster on other machinery!

Conclusion:

Needless to say, the VM accelerated on RAM is clearly the winner on latency! During the tests we never encountered a latency higher than 0.04ms! It also looks like overall latency on FVP 2.0 is lower than on FVP 1.5. Having said that, I couldn't get the number of IOPS on SSD to the same level as it used to be on FVP 1.5: same test, same host, 100% hit rate. A changed algorithm for crunching the numbers, maybe? More focus on latency? Who knows… but it was something that caught the eye.

Due to our testing platform, we're not convinced that FVP 2.0 on RAM has shown its full potential. We will therefore test on another platform to get a clear view of what FVP 2.0 is capable of when the cache resides in RAM.

Any storage device

With iSCSI, FC, and FCoE already supported in previous versions, the only missing protocol was NFS. With FVP 2.0 also supporting NFS, there shouldn't be any boundaries on which datastores can be accelerated. A quick look shows that it is now possible to select NFS datastores together with my existing (iSCSI) datastores.

I could not spot significant differences in performance between file- and block-based storage when accelerated by FVP using write back, which is pretty logical… :)

User defined fault domains

Fault domains allow us to take control of where cache data is replicated when using write back with peers. The options are up to two peers in the same fault domain, or peers in (multiple) different fault domains.

Think about a stretched-cluster environment, where it would seem logical to configure the fault-tolerant cache to reside on hosts within the same site because of the lower latency. Or maybe peers in the same fault domain as well as in a different fault domain are the way to go, if higher latency on your peers isn't a big deal…

Adaptive network compression

This new FVP compression technology is only used when you are using 1Gbit network interfaces for your FVP acceleration traffic (the vMotion network by default). It won't even engage on a 10Gbit network, because the gain would be close to nothing.

I could go into detail, but what better way to show the ins and outs of adaptive network compression than insider Frank Denneman's article, found here: https://frankdenneman.nl/2014/10/03/whats-new-pernixdata-fvp-2-0-adaptive-network-compression/

Licensing

FVP is available in five license types. Note that user-defined fault domains for write back are only available in the Enterprise or Subscription version. The overview below is from the PernixData website:

  • FVP Enterprise: FVP Enterprise is designed for the most demanding applications in the data center. Deployment can be on flash, RAM or a combination of the two. FVP Enterprise also introduces topology aware Write Back acceleration via Fault Domains that allows enterprises to align FVP with their data center design best practices. In addition, FVP Enterprise comes with sophisticated, built-in resource management that makes the best possible use of available server resources. With FVP Enterprise, there is no limit placed on the number of hosts or VMs supported in an FVP Cluster™.
  • FVP Subscription: A version of FVP Enterprise that is purchased using a subscription model, making it ideal for service provider environments.
  • FVP Standard: FVP Standard is designed for the most common virtualized applications within the data center. It supports deployments via all flash or all RAM. No limit is placed on the number of hosts or VMs in an FVP cluster. FVP Standard is purchased as a perpetual license only.
  • FVP VDI: A version of FVP exclusively for virtual desktop infrastructures (priced on a per-desktop basis).
  • FVP Essentials Plus: A bundled version of FVP Standard that supports 3 hosts and accelerates up to 100 VMs (in alignment with vSphere Essentials Plus). This product replaces the FVP SMB Edition.

What’s next?

Well, FVP 2.0 is a major step for PernixData, and it should now be highly usable as an addition to any type of storage. But what's next for PernixData? I know, version 2.0 has only just been released, but I keep wondering what direction their development will take. I discussed VMware's VAIO in part 1; will that be something PernixData hooks into? What more is there to gain in flash virtualization?

I’m sure the clever minds at Pernix will have an answer to that. Time will tell. For now, let’s enjoy this product!!!

Disclosure: I am a real user, and this review is based on my own experience and opinions.
ITCS user
Sr. Virtualization Consultant at a tech services company with 51-200 employees
Consultant
Considering this is a ‘nano’ lab one can still see a pretty awesome gain of performance.

Having posted an article on Software Defined Storage a short while ago, I want to follow up with some posts on vendors/products I mentioned.

First, we'll have a closer look at PernixData. Their product FVP, which stands for Flash Virtualization Platform, is a flash virtualization layer that enables read and write caching using server-side SSDs or PCIe flash devices. That almost sounds like the other caching products out there, doesn't it… Well, PernixData FVP has features that are really distinctive advantages over other vendors/products. With a new (2.0) version of FVP coming up, I decided to do a dual post. Version 2.0 should be released very soon.

What will FVP do for you? PernixData states:

"Decouple storage performance from capacity"

So what does that mean? Well, it means we no longer have to try to fulfill storage performance requirements by adding more spindles in order to reach the demanded IOPS, while also trying to keep latency as low as possible. To that end, what better place for flash to reside than in the server! Keeping the I/O path as short as possible is key!
When storage performance is no longer an issue, capacity requirements are easily met.

PernixData is a young company that came out of the shadows early in 2013. The company was started by Satyam Vaghani and Poojan Kumar, who both worked at VMware prior to PernixData and were closely involved with data services there. Since 2013, PernixData has been making a pretty decent name for itself by delivering a revolutionary product together with decent marketing, including persuading Frank Denneman to become their technology evangelist.

I won’t bother you with any installation or configuration procedures as there are many blogs describing these parts already.

Why FVP?

So why FVP? What makes this product so special?

  • FVP runs within the VMware vSphere kernel; the FVP VIB installs as a kernel module per vSphere host (a quick verification sketch follows this list).
  • Fully compatible with VMware services such as (Storage) vMotion, (Storage) DRS, and HA.
  • Facilitates read and write server-side caching.
  • Write-through and fault-tolerant write-back options. One of my favorites! Write-back caching can be replicated to one or two additional flash devices before being destaged to the storage array itself, providing data protection.
  • Vendor-independent backend storage, supporting the iSCSI and FC(oE) protocols.
  • Scale-out architecture by adding hosts and flash devices.
  • Very easy implementation: a VIB per vSphere host and management software running on an MSSQL backend.
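As a quick sanity check that the per-host kernel module mentioned in the first bullet is actually in place, something like the following can be run from the ESXi shell; the VIB and module name patterns are guesses and may differ between FVP versions.

  # Verify the FVP VIB is installed on this host (name pattern is a guess)
  esxcli software vib list | grep -i pernix
  # Verify the host extension module is loaded (module name pattern is a guess)
  esxcli system module list | grep -i prnx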

Future for FVP

Although FVP is an outstanding product, I wonder what the next level of development will bring. As said earlier, version 2.0 is about to go GA, and I will dedicate part 2 of this post to FVP 2.0. Version 2.0 in short: NFS is supported and, more importantly, RAM can be assigned as a cache repository! A further enhancement will be the possibility to create multiple flash replication groups.

But what about VMware and their development of vSphere APIs for I/O Filters, in short VAIO? VMware's intention with VAIO is to make it easy for partners to hook their products directly into the I/O path of VMs. With this becoming easier to implement, isn't one of the key features of FVP (running in the vSphere kernel) going to disappear in the near future? VMware has already partnered with SanDisk's FlashSoft team to provide a distributed flash cache filter for read and write acceleration of virtual machine I/O.

I am curious to see how PernixData will develop their software post version 2.0. I hope to find out on VMworld EMEA 2014!!

See for myself

As always, writing and reading about something is nice, but it is an absolute necessity to try a product for yourself. PernixData provided me, as a PernixPrime, with NFR licenses. I've got a nano lab at my disposal, which is perfect for testing server-side caching solutions.

The nano lab contains three Intel NUC hosts, each with a single Intel i5-4250U CPU and 16GB RAM. Each NUC is equipped with an Intel DC S3700 series SSD. The NUCs are connected by a Cisco SG300 Gbit switch. A Synology NAS is used as backend storage, providing multiple iSCSI LUNs.

I'm running vSphere 5.5 (build 2068190) and the PernixData FVP host extension for vSphere 5.5, version 1.5.0.5 build 30449. The FVP write policy is configured for write back with one network flash device for data protection.

Using this setup, I ran benchmark tests with the VMware I/O Analyzer fling, using its predefined max write IOPS workload. Running just one instance, the performance enhancements are amazing!

First notice a warmed up cache repository with a hit rate of 100% during the write tests:

During the write test we experienced a constant rate of ~56,000 IOPS and a latency of 0.22ms on the local flash and ~11ms on the network flash.

Considering this is a 'nano' lab, that is still a pretty awesome performance gain!!

Other sources

While reading about PernixData, some blog posts stood out that are definitely worth reading:

https://willemterharmsel.nl/interview-poojan-kumar-ceo-of-pernix-data/
https://frankdenneman.nl/pernixdata/
https://vmwarepro.wordpress.com/2014/09/17/pernixdata-top-20-frequently-asked-questions/

Enthusiastic

I became an instant fan of FVP: its easy deployment and awesome performance gains… If you're a PernixData enthusiast yourself, consider becoming a PernixPrime or PernixPro!
We will do a write-up on FVP 2.0 as soon as possible.

Disclosure: I am a real user, and this review is based on my own experience and opinions.