Pico Corvil Analytics Benefits

Ted Hruzd - PeerSpot reviewer
Founder at AI Fit LLC

In the trading business, generating revenue is of utmost importance, and any disruption can result in significant losses. Therefore, my number one piece of advice is to protect your trading revenue and take preemptive measures to avoid disruptions. For instance, during my time at an investment bank, I used Pico Corvil Analytics for preemptive analysis. I had a well-defined process in place in which I adjusted data and used my software to identify unusual connections through network analysis. This helped me detect potential issues and resolve them before they could cause any harm.

SV
Director at a financial services firm with 10,001+ employees

Before Corvil, the business side of our company would talk to the client and get first-hand information on what the client perceived as issues. They would convey that to the networking folks, who would try to look into all the issues pertaining to that client. Troubleshooting was a long process. With Corvil, we are able to perform real-time analysis, monitor things in real time, measure packet-level details, and pull out specific scenarios as the case demands, all on a real-time basis. Corvil has enabled us to troubleshoot and analyze problems in real time.

Previously, we had sniffers placed in the networking area, and we had to capture data and wait for an issue to recur before we could analyze the packets. Now, Corvil makes it easy to monitor the entire network. It has reduced resolution time from days to hours or minutes.

Using Corvil, the time it takes to isolate a root cause is less than an hour. If the session is already monitored, it's easy to drill down and get to the root cause of the problem.

Corvil definitely delivers a performance advantage for our firm over our competition because we are able to address all issues on a near real-time basis. Obviously, clients look for venues that are more receptive to their needs and companies that listen to their issues. That's one of the reasons that most of the client flow is coming to our company.

We have a few devices on the client-facing side as well as on the internal banking side. Our primary focus was to get things analyzed in real time for the client-facing sessions, and Corvil has helped us a lot with that. We are planning to get into the internals of the network on the banking side and monitor those as well, because we have operations worldwide. We have a diverse network, and it's good to monitor everything in one central location.

Corvil helps us correlate individual client or trade-desk transactions to infrastructure and venue latency. That is what we primarily use it for. We make sure that the fill rates for a given client are as expected, and we do the same for our algorithmic trading. We look at the way orders get executed on different venues and which venue is giving us better fill rates. It's used for analysis and tweaking. We haven't gone to the extent of tweaking algorithms based on the Corvil analytics, but that is in the pipeline.
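
As a rough illustration of this kind of venue comparison, here is a minimal Python sketch that aggregates fill rates per venue from execution records. The record fields ('venue', 'order_qty', 'filled_qty') and the sample numbers are hypothetical, not the actual Corvil export schema.

```python
from collections import defaultdict

def fill_rates_by_venue(executions):
    """Aggregate fill rate (filled quantity / ordered quantity) per venue.

    `executions` is assumed to be an iterable of dicts with hypothetical
    keys 'venue', 'order_qty', and 'filled_qty' -- illustrative field
    names, not a real export format.
    """
    ordered = defaultdict(float)
    filled = defaultdict(float)
    for ex in executions:
        ordered[ex["venue"]] += ex["order_qty"]
        filled[ex["venue"]] += ex["filled_qty"]
    return {venue: filled[venue] / ordered[venue]
            for venue in ordered if ordered[venue] > 0}

# Example: two venues with noticeably different fill quality
sample = [
    {"venue": "VENUE_A", "order_qty": 1000, "filled_qty": 950},
    {"venue": "VENUE_B", "order_qty": 1000, "filled_qty": 620},
]
print(fill_rates_by_venue(sample))  # {'VENUE_A': 0.95, 'VENUE_B': 0.62}
```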

In addition, it helps us determine where to focus our performance-improvement efforts. It frees up the people who were looking at network packets and connectivity, trying to analyze things packet by packet; Corvil now does most of that job. It gives us a better picture and a better front end to navigate. That's one of the reasons we are able to work in real time and monitor most of the venues.

We have seen an increase in productivity from using Corvil. For the people who were put to the task when there was an issue - who had to set up sniffers, record a session, and analyze packets - it was a very manual process. That has improved drastically. Now, with the same number of people, we are able to monitor more sessions in real time.

We run a dark pool and have a lot of clients coming into it. Clients measure us by the latency with which we match their orders. Corvil gives us the ability to identify the points in our system that are bottlenecks with high latency, make processing more efficient, and focus on the right things.

Corvil has also greatly reduced the time it takes to provide reporting or dashboards to answer business questions. The overall dashboard and the views available to the trade desk - who field client questions about fill rates and quality of execution - make it very easy for them to answer those questions, because they have more information at hand.

RM
Performance Engineer at a financial services firm with 1,001-5,000 employees

Previously, we relied on application logging, which causes a performance hit on the very applications we're trying to understand better. Corvil, being a passive listener, gives us the same information that application logs previously gave us.

It is also more granular in time. Because we can point multiple applications to the same clock, we have more confidence in the measurements we're taking.

Because it helps us identify performance issues, it does give us a performance advantage over competitors. For example, we found an issue in a part of the network we thought was running efficiently. When we hooked Corvil up to monitor that specific feed, it showed us that the network was not performing as anticipated. Corvil helped us make that area more efficient, which resulted in a customer-experience advantage. And that helps us as a company in general.

Also, customer order entry has gotten better thanks to our production monitoring measurements. We're also able to monitor how efficient our market data dissemination is, which answers customers' questions when they wonder, "Hey, why am I seeing message B before message A? It should be reversed." Corvil helps us identify those things and know what's really going on.
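
As an illustration of the kind of sequencing check involved, here is a minimal Python sketch that flags messages arriving out of sequence-number order in a capture. The (recv_time, seq_no) layout and the sample values are assumptions for the example, not a real feed format.

```python
def find_out_of_order(messages):
    """Flag market-data messages whose sequence number goes backwards.

    `messages` is assumed to be a time-ordered list of (recv_time, seq_no)
    tuples taken from a packet capture.
    """
    issues = []
    last_seq = None
    for recv_time, seq_no in messages:
        if last_seq is not None and seq_no < last_seq:
            # This message arrived after one with a higher sequence number
            issues.append((recv_time, seq_no, last_seq))
        last_seq = seq_no if last_seq is None else max(last_seq, seq_no)
    return issues

# Message 1002 arrives before 1001, so 1001 is reported as out of order
capture = [(0.000010, 1000), (0.000025, 1002), (0.000031, 1001)]
print(find_out_of_order(capture))  # [(3.1e-05, 1001, 1002)]
```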

Corvil helps to correlate individual client or trade-desk transactions to infrastructure and venue latency. We do order-to-market-data correlation and latency measurement. That's what the majority of people you talk to in this field would say they're measuring. There are also features that let you go deeper and tell you whether or not a message was stored correctly.

When it comes to determining where to focus performance-improvement efforts, suppose you have a multi-server architecture. Corvil has what's called a multi-hop configuration, which allows you to correlate a message across hops. Say it's four hops deep - four different servers, each of which may speak the same protocol or a different one. Corvil lets you build channels and signatures so that, once a message has been seen on the first server and then on the second, it correlates the two. That tells you how long the message spent in each server, going in that particular direction, and similarly across the other servers and back again. You can isolate things: "Hey, this took a second. Oh, but server three took three-quarters of a second to pass it along to server four."
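
To make the per-hop idea concrete, here is a minimal Python sketch that takes the timestamps at which one correlated message was seen at each capture point and reports the latency of each leg. The hop names, the order ID, and the timestamp values are all illustrative assumptions, not Corvil configuration.

```python
def per_hop_latency(order_id, hop_timestamps):
    """Compute per-hop latency for one correlated message.

    `hop_timestamps` is assumed to map hop names (e.g. the capture point in
    front of each server) to the time, in seconds, the message was seen there.
    """
    hops = sorted(hop_timestamps.items(), key=lambda kv: kv[1])
    report = []
    for (prev_name, prev_t), (name, t) in zip(hops, hops[1:]):
        report.append((f"{prev_name} -> {name}", t - prev_t))
    return order_id, report

order_id, legs = per_hop_latency(
    "ORD-1234",
    {"server1_in": 0.000000, "server2_in": 0.000120,
     "server3_in": 0.000180, "server4_in": 0.000930},
)
for leg, latency in legs:
    print(f"{order_id} {leg}: {latency * 1e6:.0f} us")
# server3_in -> server4_in dominates at ~750 us, so that hop gets the attention
```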

If something is happening right now, Corvil gives us immediate awareness of what is going on. Previously, we either had to have an operations team extract and dig through logs and then do manual correlations to figure out what was going on, or wait until the end of the day to get those logs.

In terms of increasing our productivity, ops uses it to communicate with clients and to see their behavior, so there is a productivity gain there. In performance engineering, it has greatly increased our productivity.

Also, the new version has reduced the time it takes to provide a new dashboard. My business guys will call up and say, "Hey, I'd like to do this," and it is much easier to do now than it was with the old dashboard. Depending on what they wanted, a dashboard used to take a couple of hours to get right. The old dashboard was Flash-based, so the performance was not all that good and it was all drag-and-drop. The new one is HTML5 and built on XML, so I can actually script something together and throw a new one in there.

Finally, it indirectly helps improve our order execution and revenue, because the more efficient we become, the more attractive it is to send orders to our system.

KE
Works at a financial services firm with 10,001+ employees

The product that we offer to our clients internally is a low-latency platform. It's a trading platform, and the primary pitch for the product is low latency. We have clients who want to get to the market as fast as they can; that's how we have designed our product and that's what we sell. The way we constantly reevaluate whether our product is the best in the market, and how it is doing against competitors, is by measuring latency. Corvil has done exactly that throughout our product's evolution over the last three or four years. I've been using it for two years, but the Corvil installation has been with our firm for a while now, four-plus years.

This information that constantly comes from Corvil is what has helped us to evolve our product in terms of latency. That's primarily how Corvil has helped our product.

In the last year and a half, we've also extracted this information and presented it in different ways so we can pinpoint, at various points of the day, where we see higher latency or a hit to our median latency. We then want to know what caused the latency to deviate from the median. Our application is Java-based, and there's something in Java called "garbage collection." When a garbage collection hits, latency goes up, and we can pinpoint these points from the Corvil data because Corvil has statistics for each individual order throughout the day. So we can look at a certain time of day, get the metrics for that order, then go to our application logs, see what was happening at that point, and confirm that the issue was Java garbage collection.

We had a client who was concerned with outliers. There is the median latency and, at various points in time, we find outliers that jump away from the median. We could use Corvil data to identify that the garbage-collection points were where the client was seeing quite high latencies compared to the median. We were then able to redesign our product, make some changes to the JVM parameters, and make the product perform better. Doing so made the client happy.
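
A minimal Python sketch of that kind of cross-check, assuming the per-order latencies have been exported from Corvil and the GC pause times parsed out of the JVM GC log; the field layouts, the 1 ms outlier threshold, and the 50 ms matching window are illustrative choices, not anything dictated by Corvil or the JVM.

```python
from bisect import bisect_left

def outliers_near_gc(order_latencies, gc_times, threshold, window=0.050):
    """Match order-latency outliers against nearby GC pauses.

    `order_latencies` is a list of (timestamp, latency_seconds) per order;
    `gc_times` is a sorted list of GC-pause timestamps from the GC log.
    """
    matches = []
    for ts, lat in order_latencies:
        if lat < threshold:
            continue  # not an outlier
        i = bisect_left(gc_times, ts)
        nearby = [g for g in gc_times[max(0, i - 1):i + 1] if abs(g - ts) <= window]
        matches.append((ts, lat, nearby))
    return matches

orders = [(100.010, 0.00009), (100.120, 0.00350), (100.500, 0.00008)]
gc_log = [100.118, 250.300]
# The 3.5 ms outlier at t=100.120 lines up with the GC pause at t=100.118
print(outliers_near_gc(orders, gc_log, threshold=0.001))
```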

In terms of the product's performance analysis for our electronic trading environment, as I've said, our application is a low-latency application. We have our own internally developed proprietary system that runs on our own servers. Corvil sniffs the traffic that comes from the client to our process and through to the exchange, and calculates latency between these endpoints: from the client to our servers and from our servers to the exchange. Apart from the network latency, the area where we want more control is inside our process. As I already mentioned, it has helped us improve our product.

The solution helps to correlate individual client or trade-desk transactions to infrastructure and venue latency. As long as the client or trader message has the relevant information, you decode it and you have that information. So we have statistics based on client and statistics based on exchange. With the newly introduced Intelligence Hub, which we are still reviewing, you can break it down by any number of parameters, like symbol or site, etc.

We have all that information. In fact, we were able to break down the information by client because we have a particular instance of our process that just handles client traffic. We get multiple clients sending into the same process, and we can see, through visualization, that one client seems to have better performance while another seems to have slightly degraded performance. Why is that the case? We were able to drill down into it, and we saw that the client who was doing better was trading on a different market using a particular protocol, FIX for example, while the other client, trading on another market, could be on an OUCH protocol, and his trading behavior is different. All of these patterns of trading from different clients on different markets can be drilled down into. In this scenario, it helped us look at the protocol implementation of our process and see if we could make improvements. It did help us: we found an issue with one of our protocol implementations and had to go and fix it.

When you break it down by client or by market you can see which client is performing better, which market is doing better. Then you can drill down into that and see the trading pattern of that client: Why he is doing well, why the other guy seems a little off.

We have also seen increased productivity from using this solution. If I had to go figure out the latency and then see where the problem is, I would have to do a lot more analysis from my own logs, but that wouldn't be as reliable. If I'm capturing anything in my process then I'm adding a latency on top of my processing, as well as disk latency, network latency, etc. Having a source outside of my process telling me how my process is doing is way better than just doing everything from my process. It has definitely helped us improve a lot of things including our productivity.

RP
Senior Network Engineer at a financial services firm with 501-1,000 employees

What we've done a lot of work on is tick flow. We generate prices on various instruments, for example DAX Futures. We need to understand, internally, how long it takes for the DAX Futures tick to leave the exchange and exit our pricing infrastructure, which generates the prices and feeds them into the apps that the clients use. We need to know what the latency is at every step.

What we've built in Corvil is a dashboard that shows exactly that. It shows the latency from the exchange to us, then from us into what we call our MDS system, from there into our calc servers, which do the grunt work of generating the prices, and from there into our price distribution system. At each point, we have a nice, stacked view that shows the latency of each component.

We can look at this and say, "Oh, actually it's our calc servers that are causing the most latency." A good example: recently our platform team did some analysis of the kind of improvement we could expect if we put Solarflare network cards in our servers. The analysis showed we could gain 50 microseconds. Using our Corvil data, we could say that, while Solarflare would give us 50 microseconds of improvement, our calc servers alone generate something like 20 to 30 milliseconds of latency on a bad day. So, in the grand scheme of things, spending all that extra money to save 50 microseconds isn't going to cut it when there is far more scope to save latency by rewriting the code on our calc servers. That's a good example of how Corvil helps us. It gives us that level of detail, so we can pinpoint exactly where the latency is.
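
A back-of-the-envelope version of that comparison, in Python, with latency numbers that are illustrative rather than measured: it finds the stage that dominates the tick path and expresses a proposed saving as a share of the total.

```python
def worth_it(component_latencies, proposed_saving):
    """Compare a proposed optimisation against measured component latency.

    `component_latencies` maps pipeline stages to measured latency in
    seconds (illustrative values, in line with the numbers above);
    `proposed_saving` is the estimated improvement of the change.
    """
    worst_stage, worst = max(component_latencies.items(), key=lambda kv: kv[1])
    share = proposed_saving / sum(component_latencies.values())
    return worst_stage, worst, share

stages = {"exchange_to_us": 0.0002, "mds": 0.0005,
          "calc_servers": 0.025, "price_distribution": 0.0008}
stage, latency, share = worth_it(stages, proposed_saving=50e-6)
print(f"{stage} dominates at {latency * 1e3:.1f} ms; "
      f"a 50 us NIC saving is {share:.2%} of the total path")
```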

Corvil helps us determine where to focus our performance-improvement efforts. With the tick-flow process, we've split it to cover four key parts of our infrastructure. We can see, straightaway, which part is generating the most latency, and that tells us where we should focus our efforts - where we should spend time, effort, and money. You do have to build that view; it doesn't come out-of-the-box. You have to understand how your traffic flows and build it yourself.

In addition, we do venue performance analysis. A good example is FX pricing. We take all the OTC pricing from various liquidity providers, such as the Tier 1 banks. Key metrics for us with FX are things like sending-time latencies, so we look at those. We always knew anecdotally that one of our feeds was really poor when it came to latency, but without Corvil we didn't have the numbers to prove it. We could only tell, by looking at the quotes from the others, how far out this particular feed was, and from that deduce that the latency was really bad. Corvil helped us present that information in a nice, graphical manner and gave us metrics to justify a scheme of works to improve matters. Corvil also makes it really simple to extract the information required. For example, we are sometimes asked something along the lines of, "Could you supply us with the quote IDs where you observed these issues?" - perhaps, in some respects, with the expectation that it would take us a long time to get that information. But we can literally export the spreadsheet in two clicks and send it to them.
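
As a sketch of how a sending-time latency can be derived, here is minimal Python that compares FIX tag 52 (SendingTime) in a quote against the wire-capture timestamp. The message and timestamps are made up, '|' stands in for the SOH delimiter, and this is a bare-bones illustration rather than a full FIX parser.

```python
from datetime import datetime, timezone

def sending_time_latency(fix_msg, wire_time):
    """Estimate quote latency as wire-capture time minus FIX SendingTime."""
    # Split the pipe-delimited message into tag=value fields
    fields = dict(f.split("=", 1) for f in fix_msg.strip("|").split("|") if "=" in f)
    sent = datetime.strptime(fields["52"], "%Y%m%d-%H:%M:%S.%f").replace(tzinfo=timezone.utc)
    return (wire_time - sent).total_seconds()

msg = "8=FIX.4.4|35=S|52=20240405-14:30:00.120|55=EUR/USD|"
capture = datetime(2024, 4, 5, 14, 30, 0, 180_000, tzinfo=timezone.utc)
print(f"sending-time latency: {sending_time_latency(msg, capture) * 1e3:.0f} ms")  # ~60 ms
```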

Having latency information helps us improve order-routing decisions. A lot of our trading is automated. It's not that the Corvil tool directly feeds the automation, but it has provided the visibility to support the process. For example, we can determine precisely when a venue was down and which trades were impacted. That's definitely helpful. We never had that kind of correlation in the past. It was a case of a trader coming up to us and saying, "Did you have a problem with XYZ at this particular time?" We'd have to dig out multiple logs from the network and other infrastructure components and try to figure out what was going on at that time. Corvil allows us to narrow down the problem a lot quicker. We've got a copy of all the messages, so we know what went on at each point.

Corvil has definitely helped reduce incident diagnosis time, simply because it's so easy to pull out captures or the actual messages. Before, getting the data was probably the thing that took us the longest; before you can start looking at the data, you actually have to get it. Now it's easy. We just say to the user, "Tell us the time XYZ happened." We can find it, zoom in on it, and extract the messages. That happens a lot when we're talking to third parties about latency.

In terms of Corvil reducing the time it takes to isolate root causes, quite a few times we've been able to look at a stream in Corvil and identify straightaway what the issue is. A good example is batching. Nowadays, we can always tell when latency is being caused by batching. We can download the capture straightaway, take a look at it, and it's very easy to see when a particular venue is sending multiple quotes together, queued up one behind the other, rather than as they are generated.
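
A minimal Python sketch of that kind of batching check, assuming the quote wire timestamps have been exported from a capture; the one-microsecond gap threshold is an illustrative choice.

```python
def detect_batching(quote_times, gap_threshold=1e-6):
    """Group quotes that arrive back-to-back into suspected batches.

    `quote_times` is a sorted list of wire timestamps in seconds; quotes
    separated by less than `gap_threshold` are treated as one batch.
    """
    batches, current = [], [quote_times[0]]
    for prev, t in zip(quote_times, quote_times[1:]):
        if t - prev < gap_threshold:
            current.append(t)
        else:
            batches.append(current)
            current = [t]
    batches.append(current)
    # Only batches with more than one quote suggest queuing at the venue
    return [b for b in batches if len(b) > 1]

# Three quotes landing within a microsecond of each other look batched
times = [1.000000, 1.0000002, 1.0000004, 1.250000]
print(detect_batching(times))  # [[1.0, 1.0000002, 1.0000004]]
```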

We're definitely able to diagnose things like that really quickly now, whereas before it would be a big struggle just to get the data in the first place. If I had to put a number on how much time we save, it's at least a good three or four hours.

Finally, the dashboards have helped reduce the time it takes to answer business questions. We sit down with some of the application support teams and ask them, "What do you want to see?" What they need is then immediately there for them, so they don't have to generate their own views. I don't know exactly how much time it saves, but the ability is there for them to log in and look at the dashboards for their particular products. The information is all there for them.

reviewer1020198 - PeerSpot reviewer
Works with 10,001+ employees

We can measure latency. Usually, we measure the client round-trip and venue round-trip latency for different markets and clients. We also track the internal latency of our applications. So it has helped us understand how much time is spent within the applications to process a particular order, how much time it takes for an acknowledgement to come back from the exchange, and how long it takes to get back to the client. It gives us an understanding of how much time is spent in the applications and where we could improve it - for example, where extra time is being spent within the applications for an order to be processed. We are currently trying to improve this. This is how it has helped my team.
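
A minimal Python sketch of that decomposition, using four hypothetical wire timestamps; the field names are illustrative, not Corvil's own terminology.

```python
def decompose_round_trip(t_client_in, t_venue_out, t_venue_ack, t_client_out):
    """Split a client round trip into application, venue, and response time.

    The four arguments are wire timestamps (seconds) for: the order arriving
    from the client, the order leaving for the exchange, the exchange
    acknowledgement arriving, and the response going back to the client.
    """
    return {
        "app_inbound": t_venue_out - t_client_in,       # time in our applications
        "venue_round_trip": t_venue_ack - t_venue_out,  # out to exchange and back
        "app_outbound": t_client_out - t_venue_ack,     # passing the ack to the client
        "client_round_trip": t_client_out - t_client_in,
    }

print(decompose_round_trip(0.000000, 0.000045, 0.000325, 0.000360))
# {'app_inbound': 4.5e-05, 'venue_round_trip': 0.00028, 'app_outbound': 3.5e-05, ...}
```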

We use the data to analyze how much time we spend within the applications. Based on that, we run multiple analyses and investigations to work on reducing that latency, which helps our applications.

Corvil helps us correlate individual client or trade-desk transactions to infrastructure and venue latency. We use it to track client round-trip times as well as venue round-trip times: from the time an order comes in from the client to the time it goes out, plus the time it takes for an acknowledgement to come back from the exchange. These are the types of statistics we are using at the moment.

JS
EMEA Head of Electronic Trading App Management at a financial services firm with 10,001+ employees

The benefits are two-fold:

  1. Corvil is industry-recognized. We can provide a client with latency metrics from Corvil showing that it took X milliseconds or microseconds for us to execute their order or reach the exchange. Since it is an industry-standard product, clients recognize and trust it, which makes it easier for us to demonstrate the performance benefits of our applications.
  2. We don't have anything internally that would holistically provide this type of granular breakdown of each hop through the different systems, which use different messaging protocols, different applications, and different languages. We wouldn't be able to do it easily without an off-the-shelf product like Corvil.

FM
Network Operations at a financial services firm with 1,001-5,000 employees

Most of the tools we have don't have any visibility into microbursts. Corvil provides that visibility, with microsecond timestamping. It gives us more granularity into how data has passed through.
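
As a rough illustration of what microburst detection involves, here is a minimal Python sketch that bins packet sizes into one-millisecond windows and flags windows that approach line rate; the window size, 10 Gb/s line rate, and 80% threshold are illustrative assumptions, not Corvil defaults.

```python
from collections import Counter

def microbursts(packets, window=0.001, line_rate_bps=10e9):
    """Find sub-millisecond windows where traffic approaches line rate.

    `packets` is a list of (timestamp_seconds, size_bytes) pairs.
    """
    bytes_per_window = Counter()
    for ts, size in packets:
        bytes_per_window[int(ts / window)] += size
    capacity = line_rate_bps * window / 8  # bytes a full window can carry
    return {w * window: b / capacity
            for w, b in bytes_per_window.items() if b / capacity > 0.8}

# 700 full-size frames landing in the same millisecond: a clear microburst
burst = [(0.0005 + i * 1e-7, 1500) for i in range(700)]
print(microbursts(burst))  # {0.0: 0.84}: that millisecond ran at ~84% of line rate
```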

The performance analysis that Corvil provides for electronic trading environments is very nice. That is what we primarily use it for: To monitor order routing and order execution to all the venues, as well as to our trading partners. It performs very well.
