What is most valuable?
The most valuable feature is that it reduces dependencies, so that environment downtime is not a major cost. That money can be spent on something else, like the cloud.
The important thing is that Service Virtualization allows each and every individual, and each and every system, to be up at any time throughout the cycle.
That indirect cost impact is in the millions of dollars for any IT organization. The direct cost may be that the environment is down and a tester sits idle for ten hours.
However, the indirect cost is that the end product misses its release date. That is the business impact. Looked at in silos, the loss may seem contained, but the end-to-end picture shows a huge impact, and that impact is effectively priceless.
I cannot say, "Okay, tomorrow I am launching a mobile service, and my apps are not going to make it onto mobile." I cannot calculate that loss or put a dollar value on it, because I never know what it will be.
For example, a sales platform with the features I have added may give me a million-dollar sale, or it may not. However, Service Virtualization at least allows you to release on time, so you can align with your business strategy.
It gives you granularity and transformation. My personal view is that Service Virtualization plays a major role in transforming your development, testing, and operations engineers into business engineers.
The industry is moving towards Agile and DevOps. The most important thing is that each individual in your company thinks not as an individual, but as the end consumer, or as if they were running the business themselves.
Certain products hold a monopoly, and their vendors really don't want to give up market share. Service Virtualization enables you to share everything at the protocol tier.
With a CRM system, all the other systems are integrated around it. You can virtualize those integrations while the core CRM system stays real. If you never virtualize them, your whole virtualization effort falls down. This is how it works.
How has it helped my organization?
I can use functional and performance automation everywhere. However, the questions remain of when and how I need to use it.
If I run a load test and I need Service Virtualization, I can use the same asset. But when I use that same asset in a functional test, do I use it as is, or do I need to modify the workflow?
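One way to picture reusing a single asset across functional and load runs is to wrap it with configurable simulated latency. This is a minimal illustrative sketch, not a feature of any specific product; the names `with_latency` and `account_lookup` are invented for the example.

```python
import time

def with_latency(asset_fn, delay_seconds=0.0):
    """Wrap a virtual-service call so load runs can model realistic delays."""
    def wrapped(*args, **kwargs):
        time.sleep(delay_seconds)  # 0.0 for functional runs, > 0 for load runs
        return asset_fn(*args, **kwargs)
    return wrapped

# One canned lookup asset, reused in two modes.
def account_lookup(user):
    return {"user": user, "status": "active"}

functional_lookup = with_latency(account_lookup)        # as-is, for functional tests
load_lookup = with_latency(account_lookup, 0.05)        # adds ~50 ms per call for load tests
print(functional_lookup("alice")["status"])  # active
```

The asset itself is untouched; only the workflow around it changes between the two kinds of test runs.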
Teams must keep that pace, but the product itself is up and coming, still new in the market. Companies are struggling to work out how to align the technology with their testing and development practices, or where it fits into functional automation, performance testing, or integration.
At the same time, a lot of emerging technologies are coming onto the market. These companies say that Service Virtualization is the first step, and then they realize that the protocol work takes them to the fifth step.
The problem is that there is too much noise around it. I hope that the company manufacturing the product will focus more on how to shorten this timeline by utilizing Service Virtualization.
If I have an in-house application in place, it allows me to pull information. If I can create an in-house environment manager, I can then buy some off-the-shelf products from the industry and make it work.
This allows me to have control of each Service Virtualization asset. It also tells me if somebody has devised their own test or test flow. It lets me know how I can adopt that workflow and add my own ideas. Alternatively, I can build on their idea and tell the developer that it sounds great, but that it should have been done another way.
What needs improvement?
Awareness of Service Virtualization needs to improve. People still have doubts about using a virtual asset instead of a physical environment. They want to know whether the two will behave differently, or in a similar way.
They want to know whether, if they run everything through Service Virtualization, there will be some sort of compliance issue.
As I see it, it is a simulation of an actual environment, and the compliance issue can be addressed. Most products do not come with detailed information, and that creates a lot of hesitation in the market about whether to adopt it.
If it is adopted, they need to know whether or not they're going to get a similar result to what they see in a physical environment.
They need to know whether they have to educate their staff, and whether the staff can be educated prior to implementation, given the amount of CapEx invested.
HPE products are good, but they never make a product for a specific use. They make a product for the enterprise because that is their vision. They like multi-generational business plans. That means that they don't deliver small bits and pieces, but rather, they deliver to the enterprise.
When you talk about a complex domain like Service Virtualization, and when you talk about delivering such a wide-landscape product, it has to go through a lot of improvement cycles.
They are doing it. They are putting in hard effort, dollars, and manpower, and they are hiring good engineers. We respect that, and hopefully they will arrive at the point where they can compete in the market and become one of the dominant leaders, or THE dominant leader, in Service Virtualization.
When you have business potential, why not spend money on R&D? Anybody would. And rather than being led by the CEO, the R&D is being led by customers who are thinking in the same direction.
Protocol maturity, and technology maturity, is about to arrive. Service Virtualization is still an early use case, and it is not marketed well by any of the vendors. It should have a very strategic view, and that is also missing.
The most important thing is that more and more customers need to be involved in lunch-and-learn sessions where they can give feedback, so the program is not run solely by the product companies.
If you look at the networking, infrastructure, and other ADM spaces, you have a bunch of customers running such programs, but not in this particular space.
What do I think about the stability of the solution?
Service Virtualization is not stable in the market right now, if you really check the technology behind it.
We mapped the system onto the OSI reference model; most layers are skipped, and traffic is recorded at the application and session tiers.
I have a web server, an application server, a file server, and a database server that handle a normal user workflow. A user comes in, logs in, and the traffic flows through. Service Virtualization allows you to record that traffic and simulate it the way it would behave in a real environment.
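The record-and-replay idea behind this can be sketched in a few lines. This is a toy illustration of the concept, assuming a simple request/response lookup; real tools capture traffic at the protocol layer, and the `VirtualService` class here is invented for the example.

```python
import json

class VirtualService:
    """Replays canned responses keyed by (method, path, request body)."""

    def __init__(self):
        self._recordings = {}

    def _key(self, method, path, body):
        return (method, path, json.dumps(body, sort_keys=True))

    def record(self, method, path, body, response):
        # Store one exchange observed against the real backend.
        self._recordings[self._key(method, path, body)] = response

    def replay(self, method, path, body):
        # Serve the recorded response; unknown requests get a stub error.
        return self._recordings.get(self._key(method, path, body),
                                    {"status": 404, "body": "not recorded"})

# Record one login exchange, then replay it with the real servers down.
svc = VirtualService()
svc.record("POST", "/login", {"user": "alice"}, {"status": 200, "body": "welcome"})
print(svc.replay("POST", "/login", {"user": "alice"})["status"])  # 200
```

Once the exchange is recorded, testers no longer depend on the web, application, file, and database servers all being up at the same time.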
I would like to address the stability of a critical system, for example, the mainframe. We are still struggling with how to create APIs in Service Virtualization to talk with our AS/400 system, and with a couple of CRM systems that are so proprietary that you cannot even record a session from them.
Protocols are constantly changing. If you asked me ten years ago how many protocols I was using, I could give a number.
However, today it's not X; it's X plus Y plus Z. The list keeps growing, and the companies making Service Virtualization products are unable to keep up with that pace.
What do I think about the scalability of the solution?
In terms of scalability, I have designed one of the biggest Service Virtualization systems where I'm running a bunch of assets.
In my opinion, having seen the scalability firsthand, I know that it extends from the network tier to the application tier.
In terms of technology, the scalability is good and robust. It varies from industry to industry, and it is dependent on your business model and your technology landscape. There are a lot of “ifs and buts”.
How are customer service and technical support?
I did use technical support. My personal view is that HPE is maturing in the Service Virtualization domain. They have a long way to go.
Which solution did I use previously and why did I switch?
We were not using a different solution. One of my customers had heard about Service Virtualization and wanted to go in that direction.
So I went to the market, performed a lot of Google research, and then called the vendors. They came in and gave a number of presentations to help the customer understand what they offered. We then had our own matrix in place while vendor selection took place, and we decided which product to take.
We contacted the top three vendors again and asked their solutions architects to design a prototype in our environment. One prototype showed real promise, while the other two had nothing. At the end of the day, nobody was perfect, but HPE was the best.
How was the initial setup?
I'm not normally involved in the setup. We usually have a process in place, where we follow the enterprise architect cycle.
It is not like I just go and get the product. I have to make sure that the product is stable. I have to ask if it has the ability to design a prototype and whether or not it fits.
I was involved in that. I ran a cycle and identified a product. It took a long time to explain to people how to create assets and to build up communication and collaboration.
We had to go through an eight to twelve month cycle to make them understand how to set up the actual tools and learn where they needed to use them.
It was probably a two-year plan, or roadmap, to educate our staff, but we succeeded. It is one of our most successful projects, giving us millions of dollars in ROI. There was a lot of integration, so it was great fun to work on and learn from, and it is a very good teaching tool as well.
It's a complex system and eventually it becomes straightforward. It's simple for an individual to use, but complex when you have the entire organization adopting it.
What was our ROI?
You should stick with it for a minimum of eighteen months to two years. You will then see a lot of ROI from it. The ROI may look small in immediate dollar value, but the long-term ROI is very good.
I have done it personally. I drew up an end-to-end implementation, covering the whole process along with the technology, so I have proven experience behind it.
Which other solutions did I evaluate?
I don't want to provide any names, but there are many other players in the market, from thirty-year-old companies to newer two-year-old ones. They are all doing well, and they all provide benefits in their respective domains.
If you ask me, everybody is trying and definitely HPE is among the market leaders. They really perform good market research and have good expertise with a good amount of dollars invested.
I don't just go and purchase by name, by product, or by need. Here’s my process:
- I would go and explore a minimum of three different vendors.
- Bring them in-house, ask them to design a prototype, and perform a PoC.
- Study the vendor profile, how good they are with you in your journey, and where your IT is moving.
- Follow the end-to-end processes, because it gives you enterprise framework vision, of how your IT landscape is going to look.
- Then make the selection based on your financials, technology, maintenance, support, and comfort, and see if they match your organization's needs.
Use these five major criteria, which I usually call the five-finger criteria; the vendor that ticks the most boxes is the product you select.
What other advice do I have?
Service Virtualization is one of the great tool sets coming into the market. It's really going to change the industry.
In order to release my product on schedule, development and testing time becomes one of the major challenges, although it is not the only challenge.
Other challenges include dependency and time consumption. On the other hand, the industry is dealing with a big shift in transformation from legacy to brownfield, and from brownfield to greenfield. We have a separate greenfield layout going on.
When I have a tough integration, a very complex integration system, and when I'm going to roll out my product quickly, I'm looking at concepts like DevOps and Agile, where time to market is going to be much less.
I have to expedite my testing and development. To do that, I have to make sure my pre-production, staging, and integration environments are fully ready. If they are not, I cannot achieve my desired goal.
It plays a very vital role in this whole cycle, where you can really create a Service Virtualization asset. An asset may be your environment, a web service, or even a single call. We create that asset and we reduce the dependency.
As an example, if I want to virtualize my Telco system, I need payment and billing gateway services, plus some sort of verification service from a third-party vendor. So, rather than building an entire environment, I will create a Service Virtualization payment asset, such as a Visa stub.
I then use this Visa asset with the billing system, so I have created an asset that I can recall as many times as I want in my different testing scenarios.
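A payment asset like this can be pictured as a small stub factory that returns canned gateway outcomes per test scenario. This is a hypothetical sketch; the function name `make_payment_stub` and the response codes are invented for illustration, not taken from any real gateway API.

```python
def make_payment_stub(scenario="approve"):
    """Return a fake authorization call that always yields the scenario's outcome."""
    outcomes = {
        "approve": {"approved": True,  "code": "00"},
        "decline": {"approved": False, "code": "05"},
        "timeout": {"approved": False, "code": "91"},
    }
    def authorize(card_number, amount):
        # Billing tests exercise their workflow without touching a real gateway.
        return {"card": card_number[-4:], "amount": amount, **outcomes[scenario]}
    return authorize

# The same asset is "recalled" under different testing scenarios.
visa_ok = make_payment_stub("approve")
visa_declined = make_payment_stub("decline")
print(visa_ok("4111111111111111", 25.00)["approved"])  # True
```

The billing system under test calls the stub exactly as it would call the real gateway, which is what removes the environment dependency.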