Like many companies in the financial services market, our bank has grown significantly in recent years through a careful strategy of acquiring and merging with other banks. Combining the networks of these separate organizations with overlapping and often completely different hardware, applications and third-party service providers requires us to have a deep level of understanding about how applications and services are behaving on the network and how users interact with them.
The network we manage is fairly typical of a company in an industry with a lot of merger and acquisition activity. It’s a mixed bag: we inherited some smaller networks that were not up to our overall standards, we have some aging equipment, and we also have state-of-the-art platforms, including a lot of virtual desktops and servers running on Citrix.
With a wide variety of service offerings for customers, we rely on a number of hosted and cloud-based applications. This large stable of applications presents a challenge when it comes to evaluating usage, rationalizing support and managing them on an ongoing basis.
We are also in the midst of opening a new data center in Birmingham, Alabama. This creates a major new challenge: identifying the users who rely on specific applications in order to make the migration process as painless as possible. While we have a list of all the applications running on our network, we don’t know who is running them and when. We just want to be able to give users notice when we need to shut down their applications in order to move the servers that are running them into the new data center. But, without an easy way to see this data, the migration process has the potential to create a lot of problems.
The most common complaint that our network team gets is about a “slow network.” Users experience delays in opening applications and files or accessing data, and assume it is a network performance problem. Some problems occur consistently at the same time every day for weeks or months. I knew that these problems could just as easily be utilization issues, with applications creating large amounts of traffic simultaneously or at the wrong times. Unfortunately, our team lacked the ability to easily analyze utilization information.
We selected Visual Performance Manager (VPM) from Fluke Networks because of its combination of capabilities, its price and its ability to serve as a turnkey solution for a wide variety of visibility issues. This was something we needed yesterday, and Fluke Networks was clearly going to be able to come in and get us up and running quickly.
I was familiar with VPM from using it with a previous employer and knew it was the right tool to help us solve our challenges. In particular, VPM could play a key role as we consolidated networks and applications from a collection of separate and overlapping organizations.
One of the main reasons we purchased VPM was to give us visibility into applications and connections. We need to be able to map application-to-user connectivity and vice versa so that we are not blind to what is on our network. When we go forward with the consolidation of all of our applications, we need to ensure we’ve identified all of the users of each application so that the transition is seamless.
We saw the benefits of using VPM within days, as it was able to identify some unexpected disk-to-disk replication that was causing a network slowdown. In another case, VPM helped to uncover storage replication traffic among three branch offices that was also affecting network performance. The replication was supposed to occur overnight during non-business hours, but was taking too long and bleeding into the business day, where it impacted users. After using VPM to find the source of the slowdown, we were able to apply some QoS policies to the replication traffic to address the problem.
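The fix for the replication bleed-over was a rate-limiting QoS policy at the WAN edge. As a rough sketch, a policy of that shape on Cisco IOS-style equipment might look like the following; the port number, class name, interface and policing rate are all illustrative assumptions, not our production configuration:

```
! Illustrative only: match the storage replication traffic by TCP port
! (3260/iSCSI is an assumption; real traffic would be classified by
! whatever ports or addresses the replication actually uses).
access-list 110 permit tcp any any eq 3260
!
class-map match-all REPLICATION
 match access-group 110
!
! Police replication to 10 Mbps so it cannot crowd out business traffic
! if it runs past its overnight window; excess packets are dropped and
! the transfer simply takes longer.
policy-map WAN-EDGE
 class REPLICATION
  police 10000000 conform-action transmit exceed-action drop
 class class-default
  fair-queue
!
interface GigabitEthernet0/1
 service-policy output WAN-EDGE
```

The point is not the exact numbers but the pattern: classify the replication flows that VPM identified, then cap or deprioritize them so an overrun degrades the replication job rather than the business day.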
As we bring more of the responsibility for managing our network in-house, VPM’s ability to generate utilization reports is extremely valuable. In some cases, VPM can create reports that third-party vendors cannot. And with a small team – we have just four people managing the network across more than 100 locations – the ability to remotely identify and diagnose problems saves time and travel costs.
By using VPM, we have been able to discover a number of utilization issues that affect network performance. It turns out that many “slow network” complaints are actually the result of things like traffic from automated updates to Microsoft Office applications and virus definition downloads. VPM allows us to identify those issues and apply policies that control when they happen.
And, VPM has helped me get a handle on the circuit optimization problem. By taking a close look at application and user behavior with VPM, we believe that we can significantly reduce the number of broadband circuits our sites require. In fact, the savings will be so great that I expect we will see payback on the purchase price of VPM within 6-8 months based on circuit optimization alone.
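The payback math is straightforward once you know the recurring circuit cost. The article gives no actual figures, so every number in this sketch is a hypothetical stand-in used only to show the shape of the calculation:

```python
# All figures below are hypothetical -- chosen to illustrate the payback
# calculation, not taken from the article.
vpm_purchase_price = 50_000.0      # assumed one-time cost of VPM
circuits_eliminated = 10           # assumed broadband circuits removed
monthly_cost_per_circuit = 700.0   # assumed recurring cost per circuit

# Recurring savings from the circuits we no longer pay for.
monthly_savings = circuits_eliminated * monthly_cost_per_circuit

# Months until cumulative savings cover the purchase price.
payback_months = vpm_purchase_price / monthly_savings
print(f"payback in about {payback_months:.1f} months")
```

With these assumed numbers the payback lands at roughly 7.1 months, inside the 6-8 month window; plugging in real circuit counts and tariffs gives the actual figure.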
Looking ahead, I see VPM playing a key role in managing and optimizing our entire network infrastructure. We want to use application fingerprinting to ensure that business-critical applications are consistently performing within target parameters and not being slowed down by network latency or server response time issues. We want to create an automated reporting system that will alert us when packets are not moving to and from a server in a timely fashion. With a growing base of users and locations, we also expect to use VPM to uncover trends that will allow us to proactively add circuits to sites that are trending toward overutilization and better optimize key financial service applications.
Down the road, we also plan to bring some application development in-house. We plan to use VPM as part of the application testing and network validation process. We even envision an automated help desk process that would use VPM data to quickly identify whether a problem is network-, application- or server-related.
VPM gives us a level of insight we previously didn’t have and will allow our team to work more collaboratively to solve IT challenges. It’s already proven itself as an indispensable system and the go-to resource when evaluating problems, planning changes or taking the pulse of key systems.