What is our primary use case?
We've dealt with reverse proxies before, and it pretty much goes in that direction. It has a load balancer component alongside the reverse proxy, which also allows us to do some content caching.
These are use cases that I have experience with. The native integration method is something you don't see with the traditional NGINX controllers, whether the proprietary version or the open-source one.
When I started this project, I wanted to see what steps needed to be taken in order to add a third-party application into Harmony Controller.
To begin with, I just put a simple web server in it. I just tried to set up the initial environment. I asked myself, "If this is what it should look like in the end, what steps should I perform to reach this goal?"
At the end of the day, I needed to perform some custom integration between Lightning ADC and Harmony Controller on the front-end side. Once Kubernetes was involved, I could deploy and monitor whatever applications I wanted.
What is most valuable?
What I like about Lightning ADC is that instead of having a big appliance sitting in front of the Kubernetes cluster, Lightning can pretty much go inside of Kubernetes.
What needs improvement?
Sometimes, multiple ADC components run at the same time, which causes problems with load balancing the traffic and with the metrics being pushed back towards the Harmony side. This is an area for improvement that I have seen.
It's not like AWS or Azure; the documentation is pretty much closed, and I think A10 crashes sometimes. Initially, I was not able to set up Lightning ADC at all, and I eventually figured it out myself. The problem was a mismatch with the Docker version; that's why Lightning ADC and Harmony were not communicating with each other.
A10 documentation is not as open and accessible as AWS and Azure documentation is. For a beginner, it takes a bit of time to get used to it.
The ADC devices that I have seen so far seem to follow in the direction of an IOS. It's pretty much an operating system: you have to perform your own configuration depending on the use cases you want to use it for. Kubernetes is different from Lightning in this sense.
For how long have I used the solution?
I haven't been using this solution for long. I've used Kubernetes many times before, so the environment itself isn't new to me, but Lightning ADC is all new to me. Lightning has a native integration with the Kubernetes environment.
What do I think about the scalability of the solution?
Regarding scalability, we would need to generate some traffic and load the system to see how it behaves. We would also need a benchmark, maybe another product running in parallel, to compare the numbers against.
We're not at that stage at the moment. Currently, we're thinking about provisioning: if you have a third-party application, how would you provision the appliance or the device? Once we have done that, we will put it in an environment where we can get the numbers to study the performance aspect. We could load test it, but I think it's a little premature to say anything about that at the moment.
How are customer service and technical support?
In the beginning, I emailed A10 customer support a couple of times because a few of the initial steps were overlapping. I had some questions about whether it was possible to put Harmony, Lightning ADC, and the application all into one Kubernetes cluster.
I got a reply, but it wasn't a technical reply, so I just gave it a shot anyway; eventually, I figured out that there was a version mismatch. Once I fixed that, as well as the configuration for the Harmony Controller credentials on the Kubernetes side, everything worked just fine with no issues at all.
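Since the version mismatch cost me the most time, a quick pre-flight check on the Docker daemon is worth doing before installing anything. This is only a generic sketch: the `ver_ge` helper is mine, and the `19.03` minimum is an illustrative assumption, not a value from A10's documentation; check the release notes for the real requirement.

```shell
# ver_ge MIN CUR succeeds when CUR >= MIN (simple sort -V comparison).
ver_ge() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

min="19.03"   # illustrative minimum; use the version A10's docs actually require
cur="$(docker version --format '{{.Server.Version}}' 2>/dev/null || echo 0.0)"

if ver_ge "$min" "$cur"; then
  echo "Docker $cur meets the assumed minimum $min"
else
  echo "Docker $cur is older than $min; align versions before installing Lightning ADC"
fi
```

A check like this turns a silent "Lightning and Harmony aren't talking" failure into an explicit message before deployment starts.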
How was the initial setup?
If you happen to know the exact steps, then deployment should take less than five minutes. Figuring out the exact three to four steps can take some time, though.
Docker installation only takes two, maybe three minutes; it just downloads the image. You then need to supply the ADC cluster name, the API server URL, and the cluster ID.
For one device, it doesn't take that much time. If you happen to know the exact steps, it should not take more than five to 10 minutes.
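The steps above can be sketched roughly as follows. This is a hypothetical outline only: the image name, environment variable names, and values are placeholders I made up for illustration, and the real ones come from A10's Lightning ADC install guide.

```shell
# Placeholder image reference, not a real registry path.
IMAGE="registry.example.com/a10/lightning-adc:latest"

# Step 1: pull the image (the two-to-three-minute download mentioned above).
docker pull "$IMAGE"

# Step 2: run the device, supplying the ADC cluster name, the Harmony
# API server URL, and the cluster ID as hypothetical environment variables.
docker run -d --name lightning-adc \
  -e ADC_CLUSTER_NAME="my-adc-cluster" \
  -e API_SERVER_URL="https://harmony.example.com/api" \
  -e CLUSTER_ID="my-cluster-id" \
  "$IMAGE"
```

The point is that once these three or four identifiers are known, the whole deployment really is a pull and a run, which matches the five-to-ten-minute estimate.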
What other advice do I have?
I think most people use NGINX. It's pretty much open-source and there is no cost barrier. But for the enterprise edition, it's another story.
If you already have experience with NGINX, you can probably configure the Lightning ADC controller easily. It's a good IT solution. I think it requires a license to integrate it with other things.
Most of the people that I see here in Pakistan, in the industry, use NGINX on the front-end. Lightning ADC is pretty much for clients not residing in Pakistan, but residing in America — the licensing is not cheap.
On a scale from one to ten, I would give Lightning ADC a rating of seven.
Kubernetes' environment is not simple. I used it in the last company that I worked for and it could be a real mess if there were issues during production.
If you don't have the right set of tools, it can be really problematic for whoever is managing the cluster. The spike would occur at roughly 2 am when everybody's sleeping. We would get a few alerts and then the application would crash because the containers were dead or something really weird happened.
After a while, you start to realize what the problem is. You start looking at the NGINX side: how many client connections are there? What is happening with the traffic that is being brought in from the front-end towards the Kubernetes side? How is the traffic getting to the ports?
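That investigation can be started with a few plain Kubernetes and NGINX commands; none of this is A10-specific, and names like `my-frontend` are placeholders for your own resources.

```shell
# Are the containers actually running, and on which nodes?
kubectl get pods --all-namespaces -o wide

# Do the service ports match the pod ports? (placeholder service name)
kubectl describe svc my-frontend

# Recent NGINX errors from the front-end deployment.
kubectl logs deploy/nginx --tail=50

# If NGINX exposes stub_status, check active client connections;
# the port and path depend entirely on your own configuration.
curl -s http://localhost:8080/nginx_status
```

Running these before restarting anything gives you a record of what the cluster looked like during the incident, which matters when the spike happens at 2 am.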
Watching YouTube videos and demos can be a big help. They can give you a better picture of what is happening from the front-end to the back-end. It might also help to slow down the video to help you with problems you may encounter at various points with the application.
The problem happens in the production environment. If the application chokes or there's a really bad bottleneck that occurs, it's going to be really hard to fix.
The easiest way that people seem to resolve this issue is by restarting the pod: restarting everything and giving the application some downtime. The problem is that once the application has downtime, the findings are affected. These issues could be reduced with the help of some simple videos and demos. If A10 incorporated some, it would be a huge improvement and I would give them a higher rating.