What is our primary use case?
In terms of the primary use case, I have a contract with a Fortune 500 company. I was hired to maintain their test evaluation and test development networks and to manage components of them. It is a DevOps operation where they develop software drivers, and the network monitoring tools used there are standard IT-based tools. The testing devices I had to monitor and manage were a combination of Cisco, Extreme, Aruba, and Mellanox equipment. It was all a test development environment, developing software that they would monetize by selling it to distributors and to companies that incorporate their product into network devices.
How has it helped my organization?
For the customers I work with, it provides flexibility as far as storage is concerned, along with security and access control.
What is most valuable?
We use the HyperCloud for instant storage. It is all intellectual property, so the controls and constraints and the data associated with it, all of the "secret sauce" as they call it, are always kept on-prem. But some of the old documentation, rather than having it take up space locally, they've pushed to the cloud, and they access it at will, using the hybrid cloud concept.
Everybody can get to it. You don't have to rely on remote access the old-school way, where you had to log into the company's network to be able to get to the data. Now, because you're in the cloud, technically you don't have to log into the company network; you can log directly into the cloud from wherever you are.
It just makes it a lot easier. Instead of going in through the company network and then up to the cloud, depending on what type of web-based portal you've created, you can log into that portal directly into the cloud and come onto the premises on the back side of that if you want to.
What do I think about the stability of the solution?
The stability is a double-edged sword. There are a lot of legacy systems out there that cannot talk to the cloud: on-prem, monolithic, and proprietary. Then there are contemporary systems that have been upgraded but still do not communicate between the two. That's the reason for the "hybrid" title: it's not fully cloud-based, it's got to be able to talk to the prem.
A lot of the legacy software wasn't designed to talk to the cloud the way new contemporary software does, which makes the connection completely transparent. It's the next best thing to having two separate environments with two separate logons to access them. We have an integrated environment with some patches and some bells and whistles, and it's got to be monitored a little more heavily than two separate networks would be.
What do I think about the scalability of the solution?
The scalability of the solution is a godsend. How it's set up determines how successful it's going to be.
For example, VMware is the dominant on-prem virtualized operating system; they were the ones who created the whole virtual machine concept. Microsoft jumped on board with Hyper-V a couple of years after VMware established itself. VMware on-prem is essentially a cloud on the premises.
The cloud itself is strictly virtualized. As far as the efficacy of the cloud and the hybrid cloud concept goes, VMware is the clear winner when it comes to virtualization. Microsoft is the clear winner when it comes to desktops.
Now VMware can talk to Microsoft Azure. It can talk to Google Cloud. It can talk to AWS. It can talk to Oracle. It has connectors, which is the hybrid cloud piece. They've developed connectors now for all of the multi-cloud environments.
How are customer service and technical support?
Their customer support leaves a lot to be desired. The standards are not that great.
A good example is that Apple has its own proprietary operating system. Other phones may be Android-based, but iOS, the operating system that runs on Apple devices, is proprietary. At the end of the day, it comes down to testing.
Back in the '90s, there was a practice called straight-line testing. Essentially, a software developer would create an app pursuant to a set of specifications: there would be the manufacturer's specification and then the engineering spec. From those, you would create the application. It would be given to testers to verify that it worked as described in the specification, and then integration or compatibility testing would be done with the application to find conflicts and to see if it broke anything else it would be installed alongside.
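The straight-line flow described above, a spec, functional tests written from that spec, and a regression suite re-run on every build, can be sketched in miniature. This is an illustrative example only; the function and its discount rule are hypothetical, not anything from the product being reviewed:

```python
"""Minimal sketch of spec-driven ("straight-line") testing."""

# Hypothetical engineering spec: orders of 100 or more get a 10%
# discount; negative totals are invalid and must be rejected.
def apply_discount(total: float) -> float:
    if total < 0:
        raise ValueError("total cannot be negative")
    return total * 0.9 if total >= 100 else total

# Functional tests written directly from the specification.
def test_functional():
    assert apply_discount(50) == 50        # below threshold: unchanged
    assert apply_discount(100) == 90.0     # at threshold: 10% off

# Regression test: behavior locked in from an earlier release and
# re-run on every build, so a later change can't silently break it.
def test_regression():
    try:
        apply_discount(-1)
    except ValueError:
        pass  # spec says negative totals are rejected
    else:
        raise AssertionError("negative totals must be rejected")

test_functional()
test_regression()
```

The point of the regression half is exactly what the waterfall process enforced: every change is re-tested against the old, known-good behavior before it ships.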
My point is that all of the apps people have on their cell phones have, at minimum, been designed to a common standard according to what Apple published. The problem is that a small software company can write an application remotely from a desktop at home, and an up-and-coming software developer doesn't have the deep pockets or the expertise to do quality testing against the other applications that will be installed on the phone. The developer has no idea what you're going to put on the phone. Therefore, any app you put on the phone has the potential to be affected by, or to affect, every other app installed on it. This is the reason why phones have the problems they do.
It's a computer, but you have to have a common ground for it to be able to interoperate in the presence of all the other applications that are there. This is where the hybrid cloud, and knowing how all the different operating systems function out of the box, is important. That's the only reference you have, because that's the only consistency when you go into an environment that you didn't install yourself. That's what's missing as far as the support is concerned.
With agile development, applications get thrown over the wall before they're mature, and then there's a series of updates after that. It's not fully baked before they put it out to the public. Whatever the problems are, they know they'll try to fix them in the next release, but they're not doing any regression testing against the product.
It took longer to go to market with waterfall because it was a thorough testing path. Now everybody is rushing to market to try to get their app out and make their money.
Anybody can create anything, throw it out there, and say it's something and because it hasn't been thoroughly tested, you have no idea until you put it on the phone whether it's going to break it or not.
That's the problem with hybrid environments and providing technical support for them: there's no watermark established for how the product behaved six months ago.
The problem is that everybody is throwing it over the wall and letting the customer field-test the product.
How was the initial setup?
The complexity of the setup is subjective. The initial setup is a matter of end-user preference, and of what the security protocols and policies are for a particular company. For example, if you're in Europe or otherwise outside the United States, you're subject to data governance that falls under the GDPR regulations. It's a straightforward protocol: set up, then provide access to end-users, making sure that whatever they access is on a need-to-know, least-privilege basis. So it's site-specific and customer-specific as far as the complexity or the simplicity of it altogether.
What other advice do I have?
My advice to someone considering this solution is to take a pragmatic approach. There are set pieces in a hybrid cloud deployment, and you have to do a proof of concept with them. You have to go in and do an audit. You've got to create a statement of work. You've got to identify who the administrators are and who's impacted by any downtime once you introduce this. You've got to be able to sandbox it: duplicate the environment in a lab, sandbox the improvements you want to integrate, and then roll out a pilot and try it.
My biggest pet peeve about doing anything with clouds is that once you sign up for something in the cloud, you don't have any control over what they do with that information. Because of the convenience, nobody thinks about the consequences of putting that stuff out there and what will be done with it. Unless you're super technical, you have no concept of what's being done. All you know is that you are getting what you want, and you really don't want to burden yourself with figuring out what's being done.
If you knew what's being done, you wouldn't do it.
What people don't know is that IPv6 is the back door; this is where the security issue comes in. IPv6 is enabled by default across all of Microsoft's network operating systems, and you have to go in and turn it off manually.
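For reference, the Microsoft-documented way to turn IPv6 off on a Windows host (beyond unticking it per adapter) is the DisabledComponents registry value. Below is a minimal sketch, assuming a Windows machine, Administrator rights, and a reboot afterward; the helper name is mine, not part of any product discussed here:

```python
"""Sketch: disable IPv6 on Windows via the documented
DisabledComponents registry value (hypothetical helper)."""
import sys

# Microsoft-documented registry location; a DWORD of 0xFF disables all
# IPv6 components except the loopback interface.
KEY_PATH = r"SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters"
VALUE_NAME = "DisabledComponents"
DISABLE_ALL = 0xFF

def disable_ipv6() -> bool:
    """Write the value; returns True if applied, False on non-Windows.
    A reboot is required before the change takes effect."""
    if sys.platform != "win32":
        return False  # registry edit only applies to Windows hosts
    import winreg  # Windows-only stdlib module
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD,
                          DISABLE_ALL)
    return True
```

Whether disabling IPv6 outright is the right call is site-specific; the point is that it ships on by default and nobody turns it off for you.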
You have to understand those things, too. It's second nature for people who work in the environment, but you have to be aware of it, because when you're troubleshooting in a hybrid environment you have to rely on the existing documentation and on your working knowledge of how things connect, and then you have to try to figure out what's been done. You also have to identify what's on and what's not on, and who's talking and where.
So the hybrid cloud adds a layer of complexity, as opposed to having two separate environments to log into: the on-premises environment and the cloud environment.
If you're separate, it makes it a lot easier. But when you're integrated, you're bringing in a whole can of worms at that point.
The next feature I would like is full disclosure of what's being done with the data.
I would rate it a ten out of ten. But it's a catch-22: it requires a thorough understanding of what the limitations are and what the consequences of utilizing the system are, and a willingness to accept those limitations.