PubSub+ Event Broker Overview

PubSub+ Event Broker is the #2 ranked solution in top Message Oriented Middleware (MOM) tools, the #3 ranked solution in top Message Queue Software, and the #3 ranked solution in Streaming Analytics tools. IT Central Station users give PubSub+ Event Broker an average rating of 8 out of 10. PubSub+ Event Broker is most commonly compared to Apache Kafka. The top industry researching this solution is computer software, whose professionals account for 25% of all views.
What is PubSub+ Event Broker?

PubSub+ is a complete event streaming and management platform for the real-time enterprise.
PubSub+ helps enterprises design, deploy and manage event-driven architectures across hybrid cloud, multi-cloud and IoT environments, so they can be more integrated and event-driven.

The "+" in PubSub+ means it supports a wide range of message exchange patterns beyond publish/subscribe, including request/reply, streaming and replay, as well as different qualities of service, such as best effort and guaranteed delivery. It's available as an appliance, software, and as-a-service. All options offer the same functionality and management experience.

PubSub+ lets users connect event brokers to form an event mesh: an architectural layer that supports dynamically routing events from one application to any other application, no matter where those applications are deployed (on-premises, private cloud, or public cloud). Users can connect and orchestrate microservices, push events from on-premises systems of record to cloud services, and enable digital transformation across lines of business and IoT.

Its APIs support popular programming languages, and its open APIs and protocols can connect to any application, providing a best-in-class approach to messaging that never locks users in to any technology, including the vendor's own.
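
To make that concrete, here is a minimal connection-and-publish sketch using Solace's JMS API. It is a sketch, not an official sample: the host, VPN, credentials, and topic are placeholders, and the factory calls follow the pattern in Solace's public JMS documentation.

```java
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;

import com.solacesystems.jms.SolConnectionFactory;
import com.solacesystems.jms.SolJmsUtility;

public class HelloEventBroker {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; a real deployment would use TLS and
        // centrally managed credentials.
        SolConnectionFactory cf = SolJmsUtility.createConnectionFactory();
        cf.setHost("tcp://broker.example.com:55555");
        cf.setVPN("default");
        cf.setUsername("app-user");
        cf.setPassword("app-password");

        // From here on, the code is plain javax.jms, so the application logic
        // is not tied to any one broker vendor.
        Connection conn = cf.createConnection();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("demo/greetings/hello");
            MessageProducer producer = session.createProducer(topic);
            TextMessage msg = session.createTextMessage("hello, event broker");
            producer.send(msg);
        } finally {
            conn.close();
        }
    }
}
```

Because everything after the factory setup is standard javax.jms, the same application code can run against any JMS-compliant broker.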

PubSub+ Event Broker is also known as Solace Virtual Message Router, Solace Cloud, Solace Message Router Appliance.

PubSub+ Event Broker Customers

FxPro, TP ICAP, Barclays, Airtel, American Express, Cobalt, Legal & General, LSE Group, Akuna Capital, Azure Information Technology, Brand.net, Canadian Securities Exchange, Core Transport Technologies, Crédit Agricole, Fluent Trade Technologies, Harris Corporation, Korea Exchange, Live E!, Mercuria Energy, Myspace, NYSE Technologies, Pico, RBC Capital Markets, Standard Chartered Bank, Unibet 

Pricing Advice

What users are saying about PubSub+ Event Broker pricing:
  • "We have been really happy with the product licensing rates. It has been free for us, up to a 100,000 transactions per second, and all we have to do is pay for support. Making their product available and accessible to us has not been a problem at all."
  • "Having a free version is critical for our technology operations use case. This is primarily because our technology operations team is a cost center in our company. They are not profit drivers and having a free version for installation will probably meet our needs. Even for production, it'll support up to a 100,000 messages per second. I don't think in technology operations that we have that many events and alerts from our detection tools. Even if I have 20 or 30 event detection products out there, they're only going to publish the things which are critical or warnings. I don't think we'll ever reach a 100,000 messages per second."
  • "There are different tiers where you can choose what would work for you. As a customer, you need to know roughly how many messages a month you will use."
  • "We are looking for something that will add value and fit for purpose. Freeware is good if you want to try something quickly without putting in much money. However, as far as our decision is concerned, I don't think it helps. At the end of the day, if we are convinced that a capability is required, we will ask for the funding. Then, when the funding is available, we will go for an enterprise solution only."
  • "The licensing is dependent on the volume that is flowing. If you go for their support services, it will cost some more money, but I think it is worth it, especially if you are just starting your journey."
  • "Having a free version of the solution was a big, important part of our decision to go with it. This was the big driver for us to evaluate Solace. We started using it as the free version. When we felt comfortable with the free version, that is when we bought the enterprise version."
  • "The pricing and licensing were very transparent and well-communicated by our account manager."

PubSub+ Event Broker Reviews

JC
Managing Director at a financial services firm with 5,001-10,000 employees
Real User
Top 5 Leaderboard
We can add an application or users in the middle of the day, with no disruption to anyone

Pros and Cons

  • "We've built a lot of products into it and it's been quite easy to feed market data onto the systems and put entitlements and controls around that. That was a big win for us when we were consolidating our platforms down. Trying to have one event bus, one messaging bus, for the whole globe, and consolidate everything over time, has been key for us. We've been able to do that through one API, even if it's across the different languages."
  • "When it comes to granularity, you can literally do anything regarding how the filtering works."
  • "We've pointed out some things with the DMR piece, the event mesh, in edge cases where we could see a problem. Something like 99 percent of users wouldn't ever see this problem, but it has to do with if you get multiple bad clients sending data over a WAN, for example. That could then impact other clients."

What is our primary use case?

We do a lot of pricing data through here: market data from the street that we feed onto the event bus and distribute out using permissioning and controls. Some of that external data has to have controls on top of it so we can give access to it. We also have internal pricing information that we generate ourselves and distribute out. So we have both server-based clients connecting and end-user clients from PCs. We have about 25,000 to 30,000 connections to the different appliances globally, from either servers or end-users, whether a desktop application or a back-end trading service. These two use cases use direct messaging: fire-and-forget scenarios.

We also have what we call post-trade information, which is the guaranteed messaging piece for us. Once we book a trade, for example, that data, obviously, cannot be lost. It's a regulatory obligation to record that information, send it back out to the street, report it to regulators, etc. Those messages are all guaranteed.

We also have app-to-app messaging where, within an application team, they want to be able to send messages from the application servers, sharing data within their application stack. 

Those are the four big use cases that make up a large majority of the data.
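
To make the direct-versus-guaranteed distinction concrete, here is a minimal JMS sketch, assuming a Session obtained as in the connection example earlier on this page. In JMS terms, NON_PERSISTENT delivery commonly maps to Solace's direct (fire-and-forget) messaging and PERSISTENT to guaranteed messaging; the topic names and payloads are invented for illustration and are not this reviewer's actual schema.

```java
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.Topic;

public final class TradingPublishers {

    // Fire-and-forget market data: NON_PERSISTENT delivery, which Solace's JMS
    // implementation can send over its low-latency direct transport. Losing a
    // tick under failure is acceptable because the next tick supersedes it.
    public static void publishPrice(Session session, String payload) throws Exception {
        Topic prices = session.createTopic("NY/fx/marketdata/prices/EURUSD");
        MessageProducer producer = session.createProducer(prices);
        producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
        producer.send(session.createTextMessage(payload));
        producer.close();
    }

    // Post-trade bookings: PERSISTENT delivery, spooled by the broker until the
    // consumer acknowledges it, so a booked trade cannot be silently lost.
    public static void publishBooking(Session session, String payload) throws Exception {
        Topic bookings = session.createTopic("NY/fi/posttrade/bookings");
        MessageProducer producer = session.createProducer(bookings);
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);
        producer.send(session.createTextMessage(payload));
        producer.close();
    }

    private TradingPublishers() {}
}
```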

But we have about 400 application teams using it. There are varied use cases and, from an API perspective, we're using Java, .NET, C, and we're using WebSockets and their JavaScript. We have quite a variety of connections to the different appliances, using it for slightly different use cases.

It's all on-prem across physical appliances. We have some that live in our DMZ, so external clients can connect to those. But the majority, 95 percent of the stuff, is on-prem and for internal clients. It's deployed across Sydney, Hong Kong, Tokyo, London, New York, and Toronto, all connected together.

How has it helped my organization?

With the old platforms we were coming from, if we wanted to make changes, some of those changes were intrusive to make. For example, to add a new application into the environment, we would have to make a change that might cause some disruptions to the environment. We only have very limited downtime for our environment on a Saturday after midnight and before midnight again on Sunday. That is our only change-window for the week, if we have to do something intrusive. That limited us to when we could truly make changes. On a lot of other vendors' platforms, to add things, you've got to restart components and cause disruption. 

The benefit of Solace is that we can add an application in the middle of the day, with no disruption to anyone. It's purely based on our access-control list and permissioning. We can add an application in with zero disruption. We can onboard applications during the middle of our business day. It's still under change control, but there's zero impact by doing it. For us, that is super-powerful. Whether we're adding users or adding applications, we can do it, without causing any disruption. For a lot of other products, that's not the case. That's been a huge win for us.

In terms of application design, I've seen applications go live in less than a week, from the first line of code to putting something into production. It depends on how complex the application is. We have a central team that supports the wrappers on top of the vendor's API, and we have some example code bases where we show a simple application built using our wrapper on top of Solace's API. A developer who joins our company knowing nothing about Solace can walk through our documentation, have a look at our wrappers, take some of our example code, and get up and running pretty quickly. Getting up to speed is definitely not difficult.
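
The reviewer's wrapper is internal and its API is not public; the sketch below only illustrates the general shape such a wrapper might take, using plain javax.jms underneath. Everything here is hypothetical.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;

// Hypothetical wrapper, invented for illustration: application teams code
// against a small in-house interface while connection details, topic
// conventions, and entitlements are handled centrally.
public final class BusClient implements AutoCloseable {

    private final Connection connection;
    private final Session session;

    private BusClient(Connection connection, Session session) {
        this.connection = connection;
        this.session = session;
    }

    // In a real wrapper, the factory would come from centrally managed
    // configuration keyed by application name, not be passed in by the caller.
    public static BusClient connect(ConnectionFactory factory) throws JMSException {
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        return new BusClient(connection, session);
    }

    // A single place to validate or prefix topics against a firm-wide schema
    // before anything is published.
    public void publish(String topic, String payload) throws JMSException {
        MessageProducer producer = session.createProducer(session.createTopic(topic));
        producer.send(session.createTextMessage(payload));
        producer.close();
    }

    @Override
    public void close() throws JMSException {
        connection.close();
    }
}
```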

We might get a new user in our bank who is familiar with other messaging systems and who has preconceived ideas on how they want to do things. They might ask us, "How do I get access to this messaging system that I used to use at my old organization? That's what I'm familiar with." Sometimes we have to do sessions with those people and say, "Okay, we're familiar with the systems you're talking about. We supported them in the past. Talk us through your use case, what it is you are trying to achieve." Once they explain their use cases, we can say, "Okay, great. We actually have this, and here's some example code, and this is how to do it." Within a day, that person has gone from knowing nothing about it to saying, "Okay, you're absolutely meeting my application needs and now I'm educated on how this works." They're off and running very quickly.

We take all kinds of data onto the environment to share. Because the event bus is the place where every application always needs to start, no one builds an application now within the capital markets organization without putting its data onto our bus in some way. It's definitely a way of lowering the barrier to sharing data and getting things up and running quickly. Similarly, teams can take data from other teams, once they find out what's available. Someone might say, "I need all the FX prices in the bank. Oh, I can just subscribe to them from here. I don't even need to talk to the FX team." Teams can get up and running very quickly without having to spend a lot of time working with other groups to make that happen.

By having all of that data together in one place, Event Broker has definitely reduced the amount of time it takes to get a new application onboarded. We came from a place with six or seven different systems, where we might have bridged some of those together in some way, but it wasn't one common environment. Now, we've got application A that comes online and starts putting that data out for application B to get up to speed and to start looking at that data. That is very quick and easy for us to do. All the messaging that we do is self-describing. They can look at the payload of a message and understand it without even needing to talk to the upstream application. We can have applications starting to look at data where they didn't even have to speak to the upstream application. We've gone from 8 x 1 Gig, 10 years ago, to 8 x 10 Gigs today, and the reason for that is because we keep putting more and more data and applications on here. That continues to grow exponentially. If it wasn't easy to do, the data wouldn't be going up and we wouldn't have all these applications now on here. It's hard for me to say it has definitely increased the productivity, because I don't own the application development piece but, anecdotally, I would say it has.

Another area of benefit is that we're in the process of containerizing all of our applications at the moment, whether they'll run on-prem or in the public cloud. The underlying piece is that these containers, wherever they run, are going to need to share data between the different applications and then back to the users. The Solace event mesh and event brokers are the underlying lifeblood among all of these containers. They need some way of communicating with each other, and we see Solace as that connection among all of them. All the different cloud environments have their own messaging, and we don't want to build applications that are specific to any one cloud; we want to be cloud-agnostic. To do that, we need a messaging system that is equally agnostic. Given that we already have a huge investment on-premises for all of our Solace stuff, we see that the future of containerizing our applications goes hand-in-hand with our messaging strategy around Solace, so we can be totally cloud-agnostic.

Technology, in the last 10 years, has probably become a lot more stable generally, but I can say that, with the amount of data we put through these appliances and route globally every day, if our environment was down, capital markets wouldn't be operating for the bank. That's how critical it is. We can't afford to have any issues. At the same time, literally no application can run in our front office without this. If I look back 10 years ago, we might have had six or seven different distributed systems, all with their own problems. Now that we've consolidated all that, there's a huge efficiency in sharing all our data between the different groups. It means we can get up to speed very quickly, but also, what we're enabling from a business perspective, by sharing 95 billion messages a day, is hugely valuable to our front office.

What is most valuable?

I've been running messaging systems for most of my career, getting on toward 16 or 17 years. The most valuable feature is the ability of the appliances to cope in a way that I haven't seen from other vendors. With some other products out there, you get into message-loss states that can't be explained; you raise tickets with the vendors and never get a real explanation. But in the 10 years that we've been in production with Solace, we've never had something that cannot be explained. I've got tickets open with the likes of IBM that have never been resolved, for years. The Solace product's stability is absolutely essential.

There is also the ability to layer so many things in: we're doing guaranteed messaging and direct messaging on the same appliance.

There is also the interoperability. We've built a lot of products into it and it's been quite easy to feed market data onto the systems and put entitlements and controls around that. That was a big win for us when we were consolidating our platforms down. Trying to have one event bus, one messaging bus, for the whole globe, and consolidate everything over time, has been key for us. We've been able to do that through one API, even if it's across the different languages. We support a wrapper on top of the vendor's API and we enforce certain specifications for connecting to our messaging environment. That way, we've been able to have that common way of sending and sharing data across all the groups. That has been very important for us. 

In terms of ease of management, from a configuration perspective you can have all your appliances within one central console. You can see your whole estate from there. And you can configure the appliances through API calls, so you can be centrally polling, managing, and monitoring them, and configuring them as you need to. There are certain things where that's a little more tricky to do, but at a general level we have abstracted things like user-permissioning into other systems. So we just have a front-end where we change the permissioning and push it to the appliance in whatever region, and it updates the permissioning. From a central management and configuration point of view, it's been extremely easy to interact, operate, and support.
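
As an illustration of polling and managing brokers through API calls, the sketch below queries a broker's connected-client list over SEMP v2, Solace's management REST API. The hostname, port, VPN name, and credentials are placeholders, and while the /SEMP/v2/monitor path follows Solace's documented URL scheme, the exact resource names should be checked against your broker's SEMP documentation.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BrokerClientPoll {
    public static void main(String[] args) throws Exception {
        // Placeholder admin credentials, Base64-encoded for HTTP Basic auth.
        String credentials = Base64.getEncoder().encodeToString(
                "admin:admin-password".getBytes(StandardCharsets.UTF_8));
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://broker.example.com:943"
                        + "/SEMP/v2/monitor/msgVpns/default/clients?count=100"))
                .header("Authorization", "Basic " + credentials)
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON listing of connected clients
    }
}
```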

When it comes to granularity, you can literally do anything regarding how the filtering works. There is a caching product that sits on top, and depending on the region you're trying to filter, filtering at the caching level can be a bit more difficult than on the real-time stream. But on a real-time stream you can filter at pretty much any level or component, and it's extremely flexible in that regard.

What needs improvement?

We have various items on the docket with Solace. We've pointed out some things with the DMR piece, the event mesh, in edge cases where we could see a problem. Something like 99 percent of users wouldn't ever see this problem, but it has to do with if you get multiple bad clients sending data over a WAN, for example. That could then impact other clients. In our current state, we've architected around that with Solace. We can see, in the future, with the event mesh DMR, that there is potential for several bad clients to cause problems for other clients. We actually had a design session yesterday with the head of engineering where we started working on how to solve that 1 percent "corner case." We're working on the basis that if it can happen it will, even if it's very unlikely. We design for those kinds of days as opposed to, "Oh, this will never happen." 

It's really about multiple streams and guaranteed messaging between multiple regions over WAN potentially causing one user a problem. We're trying to solve for that kind of stuff. It's very specific, but that's just how we think. Being a financial organization that is obviously regulated, we can't afford downtime. So we try to look at everything through that lens.

For how long have I used the solution?

We've been using Solace products for over a decade.

What do I think about the stability of the solution?

The uptime on the appliances is huge. We just don't have problems with these appliances in production. For example, we have just gone through the whole COVID-19 situation and the markets went crazy during that. Our previous maximum through the appliances in any one day was about 67 billion messages. During the COVID-19 volatility of February and March, we hit 95 billion messages a day. That was a 40 percent increase in data rates, and the environment coped just fine. We didn't have any problems. There was zero business disruption. I don't know of any other system where, if I threw an extra 20 or 30 billion messages at it, without adding anything, without having to change anything, it would just cope. If it hadn't been able to cope with that, the amount of money lost to the organization would have been enormous. It's definitely paying for itself.

Going back 10 years ago — I want to be real clear, not recently — there were some issues with disks that were in the devices. It was just a faulty batch of disks from their supplier. We had to change the disks. But everything is resilient, so when we had these failures — they were more common than you would expect — we might have an HA failover but not an outage, per se. But that was a very long time ago.

The only other thing that causes issues, and I use that term loosely, is that these are the biggest things on our network within the bank. An 80-Gig appliance is the biggest thing that talks on the network, and it's sending an awful lot of traffic. What you tend to then get into are problems with your own network not being able to cope. You may not have built your network to cope with the volume of traffic you want to try putting over it. As a company we have definitely experienced that over the last few years. It's not a Solace issue, but more a pure core-networking issue. That's a common issue that I know Solace's clients deal with. I meet other Solace clients through various events and they're all having challenges with their network team actually providing a good network to be able to cope. You've got a very strong messaging product that sits on top of the network. It's the biggest thing on the network. Is your network then able to cope with it? So we've had Solace's engineers on calls with our network team, walking them through. That's probably the biggest pain point we have, but it's not a Solace fault.

What do I think about the scalability of the solution?

The scalability of these over time has been very good. When we started on them 10 years ago, we were on 8 x 1 Gig appliances, so we had 8 Gig of capacity. We're now doing 8 x 10 Gigs. In 10 years we've grown our footprint by 10 times in terms of volume, and the number of appliances in our data centers hasn't really increased. They've continued to grow the capacity of the appliances over those 10 years, without us needing to buy another 20 or 30 appliances to continue to build out. They have the ability to scale.

In terms of users of the solution, there are about 9,000 people in capital markets, of which I'd say about 6,000 or 7,000 of them are using it across the different geographies. Each of those users might be running multiple applications and making multiple connections to the appliances for different applications. A user might have four different applications on their desktop, and they would be making four connections. That works out to about 20,000 to 30,000 actual connections to the appliances. And we have about 5,000 servers in our data centers. A good 80 percent of those are making connections to Solace.

The amount of messaging that we put through it grows every year. We're constantly looking at the volume of data that goes through there and deciding if we need to stripe out the number of appliances to support that. Or, if Solace produces a bigger appliance, do we need to be buying it from a pure networking or volume-of-traffic point of view?

We are in the process of working through what our cloud implementation is going to look like with them. It's going to be a mixture of their messaging-as-a-service piece and us running our own Docker engines with the software version. There's going to be a bit of a mix as we bridge data between the public cloud, as we stand that up, and our existing on-prem appliances. We don't see the on-prem appliances going away anytime soon; there's no plan to get rid of those. We're putting so much traffic through them, it's massive. But as some of our workload moves to the cloud, so will some of that traffic, and we will need to be able to support that.

But every year the messaging rates only ever go up, as does the number of applications that come on. Last week we added another 1,300 users for a new application across three or four geographies and that was all completely seamless. It's continually growing. It's like the blood that pumps around the body, to be honest.

How are customer service and technical support?

Solace is truly the best company that we deal with when it comes to tech support. In my role I deal with about 100 different vendors, everything from market data exchanges to software vendors, through the likes of IBM and Microsoft. Ten years ago, when we first started dealing with them, Solace was obviously a much smaller company. They've grown. They were only some 50 or 60 people at the time and I think there are a couple of hundred now. All their support guys who were there originally are still there, and they've added more over time. They are excellent. They know everything about their hardware and their environment.

If I reach out to IBM, for example, I'm going to get passed to six help desks before anyone I reach even knows what product I'm talking about. I support Cloudera for our company, as well. Cloudera has sold its support to IBM and when I raise a ticket with IBM, I wait a week to get a response. I have had some pretty shocking support experiences.

We always felt that Solace's support wasn't going to survive as they grew as a company. It was so good. That was one issue I kept raising because it was so good I couldn't see how it would scale. Surely it couldn't. But I can tell you, 10 years later, Solace is still the only company where I have zero outstanding issues, or unknown items, or support tickets that they haven't resolved. If you have a problem, they jump on a WebEx with you and, within minutes, we know what it is. Whereas I can't even get IBM to respond to a support ticket.

I deal with a lot of different people in my role and I can genuinely put my hand on my heart and say they're the best support company that we deal with.

Which solution did I use previously and why did I switch?

We had TIBCO EMS, TIBCO RV, IBM MQ, and Informatica's LBM; the latter came from a company called 29West, which Informatica bought. We also had the Thomson Reuters RMDS platform, which is now called TREP, sending messages around the planet.

We were using Thomson Reuters RMDS — Reuters Market Data System — as a generic messaging bus at the time. Even though you can put their data onto the platform, you can also use it to route your own data around the world. That was a big platform for us at the time and it was carrying data from two of the underlying systems. You could publish any message onto that bus and send it around. I worked at another bank before the one I'm at now, and we did exactly the same thing there. We were putting a lot of our own internal data onto their messaging bus. It was a good message bus and it still is.

But Thomson Reuters, at the time, now Refinitiv, decided to license it differently. They said that if you put your own data on their platform, they wanted to be paid by every message you sent. We thought, "Okay, well that's crazy. If we buy something from you and pay you a million dollars for it, and then send a hundred messages or a million messages with it, that's nothing to do with you and we're not going to pay you for it." They tried across the entire street to change their pricing model and they really shot themselves in the foot. A lot of people walked away from them over it.

We knew at that point we needed to do something else. We had TIBCO RV, TIBCO EMS; we had so many different systems that we were trying to bridge and connect together, but the RMDS platform along with TIBCO RV dwarfed all the others. Those two together made up 90 percent of all the traffic. That really pushed us to go out.

How was the initial setup?

We spent about two to three months designing our topic hierarchy when we started this 10 years ago. In the last 10 years we've made very few changes to our topic hierarchy and schema. But we sat with Solace and designed it out. We created a 90-page manual for how we wanted to stand up our event mesh at the time. Bear in mind that our first implementation was not guaranteed messaging, but direct messaging. It was between Sydney, Hong Kong, Tokyo, London, New York, and Toronto. We had primary and secondary data centers in every region. I would never characterize it as simple, because of the overall scale of what we were putting in place. But the actual configuration, and working with Solace to implement it originally, wasn't the difficult piece. Once we had the appliances in our data centers and on the network, hooking them up and making them work together wasn't complex.

What was more complex was the fact that we were meshing up six regions at the same time, and turning on a brand new environment. We didn't stay in one region. We didn't just turn London on. We went big from day one, so it was complex from a geographies perspective, but not complex from a Solace-configuration perspective.

We paid for their heads of engineering to come and sit onsite with us and work through that document. I've actually recommended to Solace that they shouldn't sell their product to anyone without doing that design work upfront because I think it's extremely valuable.

This is true of any system. If you take a good system and don't architect it well, then you can make a good system really bad. Two years down the road you've got people saying, "Okay, I want to go somewhere else," because we've done a bad job of this. Anecdotally, I was talking to the CEO of Confluent about six to nine months ago, and he told me that a large, well-known company has redone its Kafka implementation three times in two years, because they hadn't architected it properly. You can take any technology and make it bad.

Our deployment took about six months, start to finish, from initial discussions and purely white-boarding through to being live in six regions. The first five years after it was implemented, we weren't allowed to build any net-new application that didn't go onto the bus. Every application has a three-year life cycle within the bank. In that five years, a good 80 percent of our applications had been completely rewritten, at which point we only had 20 percent left on our old environment to force over and bridge between old and new environments. After a couple of years of doing that, we didn't have to run any of the old environments anymore and just had one major platform that everyone connects to. That has been the state for the last five or six years.

I speak to other Solace clients occasionally, new ones who are looking at starting up, and they say, "Well, can we be done in a year?" And I say, "Well, your Solace can be done. That's not the issue. It's your life cycle of applications. If anyone tells you you're going to switch all your applications in one year, it's nonsense." Yes, it depends on the scale. If you're a small company, sure. But if you're a company of our size, you've got hundreds of applications and you're not going to rewrite them all overnight. That said, we did a migration of JMS users from TIBCO EMS a few years ago and that was actually very simple. It was two or three lines of code for each of the 200 applications that were connected. Within about three months we'd moved 200 applications. So pure JMS conversions, for example, are easy to do. But if you've actually got to rewrite the application completely, because you're changing how it operates, that's very different.
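
A sketch of why a pure JMS conversion can be just a few lines: if an application codes against javax.jms and resolves its connection factory through JNDI, only the environment entries change. The context-factory class names shown are the documented TIBCO EMS and Solace JNDI factories, but the URLs, principal format, and JNDI lookup name are placeholders.

```java
import java.util.Hashtable;
import javax.jms.ConnectionFactory;
import javax.naming.Context;
import javax.naming.InitialContext;

public class JndiSwapExample {
    public static void main(String[] args) throws Exception {
        Hashtable<String, Object> env = new Hashtable<>();

        // Before, with TIBCO EMS:
        //   env.put(Context.INITIAL_CONTEXT_FACTORY,
        //           "com.tibco.tibjms.naming.TibjmsInitialContextFactory");
        //   env.put(Context.PROVIDER_URL, "tibjmsnaming://ems-host:7222");

        // After, with Solace:
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.solacesystems.jndi.SolJNDIInitialContextFactory");
        env.put(Context.PROVIDER_URL, "smf://solace-host:55555");
        env.put(Context.SECURITY_PRINCIPAL, "app-user@trading-vpn"); // user@message-vpn form
        env.put(Context.SECURITY_CREDENTIALS, "app-password");

        Context ctx = new InitialContext(env);
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("/jms/cf/trading");
        // Everything downstream (sessions, producers, consumers) is unchanged.
        System.out.println("Looked up connection factory: " + cf);
    }
}
```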

In that three months of discussion that I mentioned, we were working on our topic hierarchy and making sure that we didn't have any pitfalls. The rest was that it takes a long time to get things set up at data centers, racked and networked and dealing with the firewalls. But the actual configuration of the appliances between all the regions was only about two weeks' worth total, for 12 different data centers. That was not the lion's share of the work. The planning for doing it across multiple regions was the lion's share of that.

The topic hierarchy is hugely flexible, but you do have to put time in to plan your hierarchy and try to think through all the eventualities of how you're going to use it. Otherwise, it can become a bit of a free-for-all if you don't govern and control it in some way. You need a good onboarding process for how you want to use things. If you leave it totally open to your teams to choose, you're going to end up with a bit of a mess.

For naming, we start everything with a region and go from there:

  • where the data is coming from or to
  • what business area the data is related to
  • what type of data it is
  • what application team
  • what instances they're coming from
  • then we get into the actual data name itself.

There are six or seven layers of our topic schema that we have published. After that, the application teams can be specific on how they want to name the seventh or eighth level. But the first several levels are defined by us and we say, "Okay, if you're this, you're going to be choosing New York, you're going to be choosing fixed income, you're going to be choosing that this is market-data price, and then you're going to be choosing that your application name is this, and the datatype is real-time. And the message instrument itself is X and the data it contains is Y." So we've already mapped out our schema for all those levels, and then they can put their payload in at that level.

This way, it becomes really easy if you're trying to wildcard things at a higher level. You can say, "I just want to see all the market data prices." I can wildcard three levels and pick those up without having to know anything else. I can look at pretty much any topic name that someone has. And you've got 255 characters to work with. I've seen people who try to map everything, but then it becomes unreadable. Unless you've got a guide to figure out what the topic schema looks like, it becomes very difficult for a human to interpret. It has to be readable. Six to eight levels works, without needing some sort of decoder to work out what things mean.
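
Here is a sketch of what wildcarding a few levels might look like against a schema such as the one described above, assuming a Session obtained as in the earlier connection example. The topic string is hypothetical; the wildcard semantics follow Solace's documented topic syntax.

```java
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;

public final class MarketDataSubscriber {

    // Solace topic wildcards: '*' matches a single level, and '>' (valid only
    // as the final level) matches all remaining levels.
    public static MessageConsumer subscribeToRealtimePrices(Session session)
            throws JMSException {
        Topic allRealtimePrices =
                session.createTopic("NY/fixedincome/marketdata/*/realtime/>");
        MessageConsumer consumer = session.createConsumer(allRealtimePrices);
        consumer.setMessageListener(message -> {
            try {
                System.out.println(((TextMessage) message).getText());
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });
        return consumer;
    }

    private MarketDataSubscriber() {}
}
```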

In terms of staff involved in the deployment at the time, we had about 16 people globally, across the different regions. But this wasn't the only thing they were doing; we also support 20 or 30 different systems, because we look after the market data system for the bank as well. Solace isn't our only job. In addition to those 16 people for the initial implementation, we had 30-something appliances across Prod, QA, and Dev, etc.

Today, the number of people we have doing maintenance on it is in the high 20s. We haven't exponentially grown our staff around what we're charging back to the business for the true staffing of this. The only thing we have grown out a little bit, over time, is the development team that supports the applications, as we've had 400 applications come on. They have general, day-to-day questions. We only have three people on that Dev team, but they act as first responders before we raise a question to Solace's support team about API issues. A lot of the questions people ask are common questions that we've answered multiple times already. We have a lot of Confluence pages with basic how-tos and FAQs. But sometimes people just want to jump on a call or a WebEx and walk through what they're thinking of doing. We only had one developer doing that originally and we've got three now.

We're just going through an upgrade at the moment. We've been trying out a few of the version 9 releases: 9.1, 9.2, and 9.3. Version 9.5 is the one we're planning to roll out to production at the moment.

What about the implementation team?

Although we didn't do so on day one, we now work with three companies in this ecosystem. There is a Germany-based company called BCCG. We originally wrote some feed-handlers with Solace to bring market data from companies like Refinitiv and Bloomberg onto the platform. We didn't want to own those long-term. We felt it was something that could be out on the street. So we partnered up with BCCG, whom Bloomberg recommended to us. They're a small startup company; they now own the feed-handlers and the permissioning agents and are selling those as a product on the street. They have a partnership with Solace.

We also partnered with a company called MDX Technology, and that was really for an Excel plugin. We have a lot of users who use Excel sheets, and we want to be able to send and receive data to and from Excel. So MDXT wrote a plugin for Solace. They have plugins for a lot of other messaging environments; they just created one for Solace and, again, they're selling it out on the street. They built it based on us and have now sold it to plenty of other Solace clients.

We also partnered with ITRS, which is a monitoring company, to build plugins on top of Solace's environment. ITRS is our monitoring system. Every major bank uses them. They have plugins into all the different systems that you might have. We worked with ITRS and Solace to create monitoring for Solace. Again, ITRS has then sold that to a whole bunch of Solace's customers.

The only other one is a company called CJC, which is more of a consultancy and support company. During Asia-PAC hours, they look after first-line support of the whole platform, including the market data as well as the Solace platform. They're doing level-one and level-two during the day in Hong Kong. That's not in any way expensive. They're the company that actually supports Refinitiv's platform so they already have people and staff there.

What was our ROI?

Capital markets couldn't operate today if Solace were down. Our turnover on a daily basis is significant. To put a dollar value on it would be very difficult. But by not having 500 servers across the globe and having about 54 appliances at the moment instead, we've got a 10-to-one footprint, so in pure infrastructure costs we have hard-dollar savings. By having the appliances in, we've enabled the business to make millions on a daily basis.

Which other solutions did I evaluate?

We did an RFP and pulled all the vendors in, including Thomson Reuters, TIBCO, and a whole bunch of others such as Informatica, and we did a proper vendor evaluation. It came down to Informatica and Solace, head-to-head, in the final decision.

The choice to go with the Solace appliances has actually paid off massively in savings from an infrastructure point of view. The reason is that, in our old platforms, for example our RMDS Thomson Reuters platform, we had about 500 servers around the globe sending all the data to each other, meshed up in a huge administrative nightmare. The Informatica solution was going to be very similar, as in commodity hardware that you would mesh up to send all the data. We looked at that and said, "Well, a server in our data center is going to cost us $20,000 a year to run," so if we still had 500 of those, you can do the math. If we were to buy the Solace appliances, working out to about $100,000 each, we would then only have to pay support and maintenance on them for the next two or three years, at about $20,000 a year. We only needed 30 of them, compared to the 500 servers. This has been a huge cost saving for us. The 500 servers that we used to have are all gone, and we have replaced them with 30 to 40 appliances. The cost of running things in the data center has, therefore, shrunk significantly.

Although people do view Solace as being this premium product you pay a lot of money for, if you're going to put a lot of data through these things, the amount of servers you need to do that with is also extremely costly. We have saved millions a year by having the appliances, and that was something we picked up right at the beginning. We said, "If we go down this path and these appliances can truly do what they say, then the footprint in our data center is going to shrink 10-to-one, and the cost of running this in our data center is going to be significantly less."

We also support multiple instances of Kafka. There's an enterprise version within our bank, which is the biggest one, and we have some small pockets of it within capital markets. The configuration and support around Kafka, and the quantity of components needed to keep it going, are a nightmare. We use the software broker for development. In our non-production environments we have a non-appliance-based version running in things like Docker. But the ability to have one component that does everything, as opposed to having to layer in multiple components to build out the ecosystem for messaging or storage, is extremely powerful from a support perspective. The time spent keeping Kafka running, compared to Solace, is not in the same league.

We have a lot of problems with Kafka, generally, that we do not have with Solace. The enterprise side runs the majority of the Kafka, the stuff that we support for our regular Cloudera stack. To give an idea of scale, the enterprise bank is doing maybe a few million messages a day on its Kafka environment, which is still a big environment for them. But we're doing 95 billion messages, so we're not even in the same swim lanes. We know they have a lot of problems on that. And in our own Cloudera Kafka, we have problems with Cloudera, period, and their IBM stuff. We have been paying for an onsite consultant from Cloudera for the last nine months to try and fix their stuff. It's just awful. Whereas our Solace stuff is bulletproof.

Kafka has its place. There's absolutely no question about that. There is some stuff that it does really well, like some of the elastically expanding storage concepts that people have where they want to keep storing everything forever. They can keep elastically expanding their Kafka brokers to do that. Whereas, with a Solace appliance, you are going to have a SAN storage connected to it and you're limited by the size of the SAN you can put on there, or you're going to need to buy another appliance and buy another SAN. With their software broker you could elastically expand that, but you still have the storage issues. 

The one real positive with Kafka is that you have a big community of people, and this is something I've spoken to Solace about too. There is this groundswell of community around it, where there are a lot of adapters that are off-the-shelf to a lot of other things. It's a double-edged sword. Sometimes we have new users join the bank who say, "Yeah, but Kafka has a SQL adapter off-the-shelf." We say, "Okay, but we already have written a SQL adapter for Solace. Here you go. It was 10 minutes' work." At the same time, it is nice to have a catalog of 200 adapters that you can use on Kafka. That is definitely a benefit of Kafka, with the community around it. But at the same time, when you scratch the surface of it, the amount of work to do a plugin isn't actually much more, and with the Kafka stuff you need six or seven different components to run it. 

In my last design overview with the Confluent guys they said, "And then we're going to add this component, and if you want global..." and I said, "Well, actually, all our stuff is global. We don't do anything that's just one region." They said, "Well, we haven't gotten our global solution built yet, so you could run two versions and start copying data." I said, "Well, I don't really want to do that. We want you to be able to replicate data between regions, under the covers." They're now doing that. They're getting up to speed on some of those things. It all depends on what your use case is.

We even have some stuff where, at the edge of our environment, we might bridge data between Solace and Kafka and we've got a bridge component to do that. It would be when there's a very specific use case around what someone wanted to do. For example, if a third-party vendor is only supporting Kafka, we'll plug in Kafka there, but we don't want people then connecting to Kafka because there's no need for it. So we'll then bridge from Kafka to Solace so the data is all on Solace. There are definitely use cases for Kafka. It's just that the scale of Kafka, depending on what the use case is, is a little bit different. I feel people use Kafka because they're just trying to lazily store everything as a long-term retention process.

The implementation of Kafka compared to Solace is very different. As I mentioned, there are multiple components to build up Kafka. I can tell you that our Confluent contract is not cheap because we're really employing Confluent employees to come and help configure half the stuff and do hand-holding all the time. We don't really have those kinds of challenges on the Solace environment. We're far more comfortable supporting the Solace environment than our Kafka environment.

What other advice do I have?

If I was coming into this cold, and knowing what I know today, the one thing we would do differently is we'd have the network team involved throughout the whole process of bringing it into the bank. Bring your network team on that journey with you, because if it's going to become like it has with us — the biggest thing on the network — then you want to have the network team at the table from day one. That way, networking knows things are coming. We're putting these huge things into the data centers and they're going to send huge amounts of data around. That team needs to be ready, so they need to be at the table. 

In terms of the onboarding and governance processes, fortunately we did think ahead and plan that stuff. But I speak to other customers that didn't and they're struggling with having the right onboarding processes and the right governance around things. At the end of the day, if you've got 95 billion messages going around, if you don't have a good onboarding and governance process, you could just have a 95-billion message mess. We don't have that because we had a good governance and a good architecture to begin with.

As I mentioned, I've suggested to Solace that they shouldn't sell their products without enforcing a bit of the architectural piece to begin with. The problem is that everyone has their own budgets and thinks, "Oh, I don't need you guys to help me, and I don't want to pay for it," figuring that Solace is just trying to push its Professional Services. But that small investment in Professional Services, when you first stand it up, can contribute hugely to the success of your platform. The Solace Professional Services that we've experienced, and the general value from it, is worth the dollars you pay.

From a maintenance point of view, every time Solace releases a new version of the API, we review what has changed in that and whether it affects us in any way. Sometimes a release is for something specific that another client has asked for and that doesn't have any value to us. We don't force applications to upgrade every time a version changes. We tend to do a yearly request of the application teams to upgrade their API to the latest one that we vetted. It's like a yearly maintenance to update the API. And to do that work, to integrate the new API version, it's generally not more than half an afternoon's work to put it in. It might take longer than that to QA, test, and validate your application to put it into production, but the actual coding piece takes an hour or two at most. It's not a huge overhead to be able to do that.

In terms of the event mesh feature, we're a bit of a "halfway house." They have multiple things. One is called dynamic message routing (DMR) and another is multi-node routing (MNR). We use the multi-node routing piece. We are testing out the DMR piece of it, which is their newest function for public cloud use. We're in a proof of concept with them around using that for expanding out into Azure and AWS.

Internally, we're using their MNR, so it's all an event mesh and everything is automatic. If you publish a message in Sydney and you want to subscribe to it in New York, we have to do nothing to get that message from A to B. You subscribe and it gets there. Depending on which terminology you're using around event mesh, we consider ourselves to be on an event mesh, but we have not deployed that for guaranteed messaging for our general population. We're still using their multi-node routing, which means direct messages fly on demand, and we have to bridge guaranteed messaging.

The clustering feature is really designed to make configuration easier for clients, so that you don't have to think of things as an HA pair plus a DR device; that is represented as a cluster node instead. This is all work aimed at making things easier from a support perspective. Today, if you make a change on an HA pair, you can then force-sync that to DR. It automatically happens on the HA pair, so you only make a change on the primary and it syncs to the backup. You can then choose whether you want to sync that to the DR device or not by putting it into a cluster node. They're just making it simpler for people. It's definitely a positive. We've actually been involved in helping them design that, because we were one of their first and one of their bigger customers. We sit in with their engineering at least every six months and they walk through things they've got coming down the road, and we talk about how they go about implementing stuff.

As for the free version of Solace, at the time, 10 years ago, the free version — that's the software version — didn't exist. With the software version there are limits to the number of messages, something like 10,000 messages a second. We're doing 1,000,000 messages a second. We could run lots of 10,000 messages-a-second instances, but then we would need a lot of commodity servers to run them on. If you are a small company that has some messaging requirements and you are looking for a good way to do that, the free version is absolutely an option. It doesn't come with any support either, obviously. You can pay for support on top of that version, but it's only going to do you 10,000 messages a second. At the scale we have, that wouldn't work. For non-production, giving that to a developer to run on their machine, to play around with, absolutely. So we don't really pay for any of the Dev stuff that we have. We're only paying for the physical production appliances and the reason we need those is just the scale of messaging that we do.

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
DN
Enterprise Automation Architect at CIBC
Real User
Top 10
Provides operational efficiency and better mean time to resolution for incidents

Pros and Cons

  • "The most useful features has been the WAN optimization and probably the HybridEdge, which requires some third-party adapters or plugins. The idea that we can position Solace as a protocol-agnostic message transport fabric is key to our company having all manners of asynchronous messaging protocols from MQ, Kafka, JMS, etc. I really like the WAN optimization: Send once over a WAN, then distribute locally as many times as there are subscribers."
  • "One of the areas of improvement would be if we could tell the story a bit better about what an event mesh does or why an event mesh is foundational to a large enterprise that has a wide diversity of applications that are homegrown and a small number off the shelf."

What is our primary use case?

The first use case is technology operations tools. We are a best-of-breed monitoring shop. We have all kinds of tools that monitor things like storage, network, servers, and applications, and all types of stovepipes that do domain-specific monitoring. Each one of those tools was sold to us with what they called a single pane of glass for their stovepipe. However, none of the tools actually publish or share the events that they detect. So we have been doing a poor job of correlating events to try to figure out what's going on in our operations.

Our use case was to leverage that existing investment. For about a year, we have been proving that we can build publishing adapters from these legacy monitoring tools, each valid in its own right: storage monitoring tools, network monitoring tools, and more modern application monitoring tools like Dynatrace. We have been building publishing adapters from those tools so we can transport those events to an event aggregation and event correlation service. We're still running through our list of candidates for what our event correlation will be, but the popular players are Splunk, Datadog, and Moogsoft, and ServiceNow has its own event management module.

From an IT systems management perspective, our use case is to have a common event transport fabric that spans multiclouds and is WAN optimized. What is important for me is topic wildcarding and prioritization/QoS. We want to be able to set some priorities on IT events versus real business events. 

The second use case is more of an application focus. I'm only a contributor on the app side. I'm more of an infrastructure cloud architect and don't really lead any of the application modernization programs, but I'm a participant in almost all of them. E.g., we have application A and application B side by side sitting in our on-prem data center, and they happen to use IBM MQ Hub to share our data as an integration. Application A wants to move to Azure. They are willing to make their investment to modernize the app, not a forklift, but some type of transformation event. Their very first question to us is, "I need to bring IBM MQ with me because I need to talk to app B who has no funding and is not going to do anything." Therefore, our opening position is, "Let's not do that. Let's use cloud-native technology where possible when you're replatforming your application. Use whatever capability you have for asynchronous messaging that Azure offers you. Let's get that message onto the Azure Event Hub. Don't worry about it arriving where it needs to arrive because we'll have Solace do some protocol transformation with HybridEdge, essentially building a bridge between the Azure Event Hub and MQ Hub that we have in our data center."

The idea is to build bridges between our asynchronous messaging hubs, and there's only a small handful of them, where Azure Event Hub is the most modern. We have an MQ Hub that runs on a mainframe and IBM DataPower appliances that serve as our enterprise service bus (ESB). Therefore, if we build bridges between those systems, then our app modernization strategy is facilitated by a seamless migration to Azure.
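
One hedged sketch of the protocol-agnostic idea: because Solace brokers natively accept AMQP 1.0, an application can publish with a vendor-neutral AMQP client such as Apache Qpid JMS and carry no broker-specific code at all. The endpoint, credentials, and topic are placeholders, and this is not CIBC's actual bridge design.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.Topic;

import org.apache.qpid.jms.JmsConnectionFactory;

public class AmqpPublisher {
    public static void main(String[] args) throws Exception {
        // Plain AMQP 1.0 over JMS: no Solace-specific code here, so the same
        // client could target another AMQP 1.0 broker unchanged.
        ConnectionFactory factory =
                new JmsConnectionFactory("amqp://broker.example.com:5672");
        Connection connection = factory.createConnection("app-user", "app-password");
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("apps/appA/events/orders");
            MessageProducer producer = session.createProducer(topic);
            producer.send(session.createTextMessage("order-created"));
        } finally {
            connection.close();
        }
    }
}
```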

The most recent version is what we installed about three weeks ago.

The solution is deployed on Azure for now. We will be standing up some nodes in our on-prem data centers during the next phase, probably in the next six months.

The plan is to use event mesh. We're not using it as an event mesh yet, as we are only deployed with Azure. We want to position a Solace event mesh for enterprise, but we're just now stretching into Azure. We're a little slow on the cloud adoption thing. We've got 1200 applications at CIBC with about four of them hosted in clouds: one at AWS and three at Azure. So, we're tiptoeing into Azure right now. We're probably going to focus our energy on moving stuff into Azure. However, for now, because the volume is so low on stuff that's outside of our data center, the concept of a mesh has been socialized. There's not a ton of enthusiasm for it, even though I might be shouting from the rooftops saying, "It's a foundational capability in a multicloud world." It looks like we're putting that funding on the back burner for using it as an event mesh.

How has it helped my organization?

This solution has increased our application design productivity compared to other solutions. There is a ton of momentum in our application development space for leveraging Dynatrace with Solace's monitoring tool. We have made the investment in getting Dynatrace to publish events that it detects, mostly application performance related events. The app development teams have taken a liking to implementing the application monitoring tool early in their development cycles, maybe not in development, but in their performance testing cycles. We can practice what a code drop stack shift would look like if they're shifting from stack A to stack B or if they're doing rolling reboots on some of their app servers as they're doing upgrades. We get to exercise that and see what the monitoring patterns look like correlated with servers going up and down along with web services coming up and down. That's been helpful to the development community to see that automation occur in mid-environments.

There have been quite a few incidents where a test infrastructure has become unavailable because of some change going on, and the app developers aren't on the hook for fixing the problem in the UAT environment because they know the outage was caused by the fact that they did a code drop 20 minutes earlier, which is a legitimate server outage. We are seeing some benefit, but it's more an optimization of incident management resources. E.g., we have somewhere between five and 10 Internet-facing applications, and when something goes bump on the firewall that's behind them (we have DMZs, or different zones, where we put our web tier and app tier), 10 application teams all get fired up and say, "What's going on?" Then they all spin up their own tech bridges. Meanwhile, the firewall guys are working on a problem that we just don't know about. So we have wasted a lot of time and energy trying to figure out things that aren't our problem. The bad scenario hasn't happened to us in production, but in our test environment it has happened once that a couple of app dev teams were able to stand down because we were correlating events correctly.

We struggle with mean time to resolution on things. We do have a lot of change control rigor, but the solution hasn't changed our organization yet. The idea is that when we get the events from our service provider for operating systems, servers, storage, and network correlated intelligently with our application changes, application performance monitors, and application availability monitoring tools, then we'll make more intelligent decisions about root cause and where problems lie, and be able to react more intelligently. This will reduce mean time to resolution, but we're not there yet.

The division that has been using Solace for years has a mature costing estimator model for internal projects. That model will certainly be leverageable for the technology operations guys. We haven't crossed that bridge yet because we're still in PoC mode. It's very likely that once we hit prod, we'll have ease of solution design with a protocol-agnostic message transport in place, and our solutions will be easier to craft and give cost estimates for.

It is easy for architects and developers to extend their design and development investment to new applications using this solution. In our architecture practices, we are always documenting compositions. We care a lot about the data exchanges between applications, or the integrations. We have a lot of contracts and other integrations that we care about. Having transmission facilitators definitely makes the architect's life a lot easier, when we just put a message on the queue and it gets transported by the facilitators to wherever it needs to go. It is definitely easier when we have Solace and an event mesh up and running. Today, when we have integrations that don't leverage those transmission facilitators, like an MQ Hub or a Solace event mesh, those integrations are much harder to get approved because we have to dive into the security, access controls, encryption, and all that other stuff.

What is most valuable?

The most useful features have been the WAN optimization and probably the HybridEdge, which requires some third-party adapters or plugins. The idea that we can position Solace as a protocol-agnostic message transport fabric is key, as our company has all manner of asynchronous messaging protocols: MQ, Kafka, JMS, etc. I really like the WAN optimization: send once over a WAN, then distribute locally as many times as there are subscribers.

I don't think we have yet unleashed the full potential of topic wildcarding. That is a silver bullet that we haven't yet maximized the value on because we don't have a ton of subscribers yet. Coming up with a topic naming convention in our large company has been difficult. However, once we start forking data over to some of our data lakes, enterprise data hub, and security event depositories, it will become a useful feature in the future.
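To make the wildcarding concrete, here is a minimal sketch of Solace-style topic matching in Python, assuming a hypothetical topic naming convention. The simplified semantics (a "*" level matches exactly one level, and a trailing ">" matches all remaining levels) follow Solace's documented wildcard rules, though the real broker also supports prefix matching within a level, which is omitted here.

```python
# Simplified sketch of Solace-style topic wildcard matching.
# Topic levels are separated by "/"; a "*" level matches exactly one
# level, and a trailing ">" matches one or more remaining levels.

def matches(subscription: str, topic: str) -> bool:
    sub_levels = subscription.split("/")
    topic_levels = topic.split("/")
    for i, sub in enumerate(sub_levels):
        if sub == ">" and i == len(sub_levels) - 1:
            return len(topic_levels) > i  # ">" absorbs all remaining levels
        if i >= len(topic_levels):
            return False
        if sub != "*" and sub != topic_levels[i]:
            return False
    return len(sub_levels) == len(topic_levels)

# Hypothetical naming convention: org/domain/source/env/app/event-type.
assert matches("cibc/ops/*/prod/>", "cibc/ops/dynatrace/prod/app42/alert")
assert not matches("cibc/ops/*/prod/>", "cibc/ops/dynatrace/uat/app42/alert")
```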

What needs improvement?

The storytelling about the benefits needs improvement. We have four major lines of business in our company: retail, capital markets, and the internal corporate center, along with technology operations, which is more of a cost center. Technology operations is not an innovator, but more a keep-the-lights-on arm of the business. One of the areas of improvement would be if we could tell the story a bit better about what an event mesh does, or why an event mesh is foundational to a large enterprise that has a wide diversity of homegrown applications and a small number of off-the-shelf ones. I wish we were better able to tell the story in a cohesive way to multiple lines of business, but that's more a statement of our own internal structure and how we absorb or adopt new technology than it is about Solace or the product itself.

It has been a bit of a tough slog to try and get everybody to see that event meshes are foundational in a multi-data center, multicloud landscape, when we're not there yet. Our company has most of our applications in two data centers that are close to each other. There is no real geo-redundancy, and everything we've ever done has been on-prem, with only a small handful of Azure adoptions. Therefore, having folks see the benefit of an event mesh has been tough. I wish we could improve our storytelling a little bit.

We have struggled in a sort of perpetual PoC mode internally. This is no fault of Solace's. It's just that the only executive looking to benefit here is our technology operations team, and they have no money for investments. They're a cost center internally, so they have to be able to make the case that we're going to improve efficiency by leveraging this tech. Thus, the adoption has been slow.

For how long have I used the solution?

We have three different lines of business in our company. One of them has been using Event Broker for about six or seven years.

Personally, I have been engaged in a proof of concept for about 18 months.

What do I think about the stability of the solution?

Solace has been incident free in an HA deployment for seven years. Before we started our PoC for the technology operations team, I did an analysis looking for incidents. One of the pieces of work I did internally was to figure out our app stabilization, and I couldn't find anything Solace-related in terms of bumpiness. It had a clean track record, unlike our DataPower appliances, which have gotten us in the newspapers a couple of times in the last three years.

When I did my analysis, I found a lot of dependencies on our file transmission hub and the product that we use. I found a lot of victims of our DataPower appliances. I found no victims nor incidents related to our Solace hardware appliances over the period I covered. There was not a single incident in six years. I went back to the well to try and see if I could find more, but I can speak to the hardware appliances and how stable they have been. They were only deployed within a single line of business, so they didn't have the complexity of an enterprise shared service in multi-LOB mode. However, the stability has been really good, with a good track record.

What do I think about the scalability of the solution?

If we deploy this the right way, we get a presence on each cloud and at each data center, and the full mesh effect. Plugging them into each other, or making them part of the same ecosystem so they are aware of each other, is not complicated for the guy we have working on this. He's not deploying it that way yet for our technology operations use case.

As we start to generate a little more momentum for our event correlation engine, we're probably going to uplift ourselves to a Tier 1 capability that has more of these nodes deployed throughout our various geographies around the globe. But, for now, it's only in one Azure region, Canada Central.

The group that has been using the solution for six or seven years has the physical appliances. Within the last two years, they refreshed on physical appliances again. We're probably not going to do that at all. The physical appliances have been in the control of a single line of business in our company, which has been able to self-manage. There wasn't really an enterprise-wide adoption that required a lot of coordination in our change process. We have a lot of change management rigor in our company, so when a service is wholly contained within a particular line of business, the ease of getting stuff done is a lot higher.

We have a small set of publishers, probably eight or 10, with maybe two subscribers. We haven't had the need to get into a whole bunch of granularity. The scope of our program is that all publishers send to the two subscribers. There is really not a need to get very granular about who sends to where.

Today, in IT operations, the usage number is still zero because we are not live. The benefit will probably reach 2,000 operations staff across our own company and our service provider, DXC. It's a 50/50 split. DXC has hundreds of guys doing incident management and operations for servers and below. We have retained services in the application space, who are application operators and security operators. Those retained people will be working more efficiently as well.

How are customer service and technical support?

I have not personally dealt with their technical support. They are always responsive. I do talk with them over emails that go back and forth, but that's really about sales, e.g., trying to get statuses on our proof of concept and how it's going. We've not had any reason to reach out to them for tech support issues.

Occasionally, we have needed help with HybridEdge when we were trying to build a new protocol transformation adapter, and then we would reach out to them. However, this is not in incident mode. It's always in a sort of how-to mode for a PoC. We have never had to reach out to them for urgent requests.

Which solution did I use previously and why did I switch?

We have protocol-specific message transport hubs, like an SFTP hub or IBM MQ Hub, but we never had a tech that is protocol-agnostic. Therefore, the solution is kind of new.

Our IBM DataPower appliances have had the capability to do protocol transformation, but we've never done it. We've always just used it for REST and XML type stuff.

Our enterprise data hub has been essentially a big data lake for business data, customer information, etc. They are in year three of the enterprise data hub program. For the first three years, they had been receiving data only by file transfer, which was yesterday's data at best. Only because I'm a participant in different projects, I happen to know that two months ago they enabled real-time event streaming by Cloudera Kafka from our customer information repository. When a customer update happens and changes their street address, for example, we publish through Kafka to get that information into our enterprise data hub in near real-time, as opposed to waiting for tomorrow's file transfer. My understanding of that tech is that it requires a queue to be defined between the source and destination, which may not scale. It kind of reminds me of the early days of MQ, when we had point-to-point MQ happening all over the place. We got about 150 queues in and realized, "Oh my God! Having a hub would be nice." Then, we implemented the IBM MQ hub and waited for the next best opportunity to get folks to talk to the hub.
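As a minimal illustration of that near-real-time pattern, here is a hedged sketch of publishing a customer-update event with the kafka-python client. The broker address, topic name, and event schema are invented for illustration; they are not the actual pipeline described above.

```python
# Hedged sketch of a change event published to Kafka in near real-time.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="kafka.example.internal:9092",  # hypothetical broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# A customer address change captured from the customer information
# repository, pushed to the enterprise data hub instead of waiting for
# tomorrow's file transfer.
event = {
    "event_type": "customer.address.updated",
    "customer_id": "C-102938",
    "street_address": "123 New Street",
}
producer.send("customer-updates", value=event)  # hypothetical topic
producer.flush()
```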

I'm thinking the same thing will probably happen with Kafka emerging through our enterprise data hub service: individually setting up queues to get events into the enterprise data hub, one by one for 600 applications, will become onerous for the operations and support teams. I suspect that before we get to that number, an event mesh will garner more attention.

How was the initial setup?

The initial setup was straightforward. We were a bit lucky because we have a guy on our technology operations team who did the initial setup of the physical appliances. When it came time to get the software and run it on servers, like Azure, it was relatively easy. Because we outsourced our infrastructure operations and monitoring tools to a service provider, the most complicated part was getting the firewall rules figured out for the publishers from the legacy systems. The complexity of setting up their product had nothing to do with Solace.

We are not live yet, but we're deploying on Azure with the intent to build our first bridge to the Azure Event Hub. The applications are hosted on Azure, so we're recommending that they leverage cloud-native messaging technology, or Azure-native messaging tech. We'll listen in on the messages that traverse the Azure Event Hub and fork them over to Splunk (probably). The strategy is sort of non-disruptive and not mission-critical. In technology operations, we are just looking to see what events occur at Azure and trying to correlate them with events that are happening on-prem, since our customer information and account information are all stored in mainframes, NonStop environments, and platforms which are not moving to Azure. The implementation strategy is to insert Solace as a means of transporting events into common spots so we can have a view of what's happening.
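For illustration, here is a minimal sketch of that listen-and-fork step using the azure-eventhub Python SDK. The connection string, hub name, and the idea of forwarding to Splunk are assumptions for the sketch, not the team's actual implementation.

```python
# Hedged sketch: listen in on events traversing an Azure Event Hub so
# they can be forked to a log aggregator such as Splunk.
from azure.eventhub import EventHubConsumerClient

def on_event(partition_context, event):
    body = event.body_as_str()
    # Forwarding to Splunk (e.g., via its HTTP Event Collector) would
    # happen here; omitted in this sketch.
    print(f"partition {partition_context.partition_id}: {body}")

client = EventHubConsumerClient.from_connection_string(
    conn_str="Endpoint=sb://example.servicebus.windows.net/;...",  # placeholder
    consumer_group="$Default",
    eventhub_name="ops-events",  # hypothetical hub name
)
with client:
    client.receive(on_event=on_event, starting_position="-1")  # from the start
```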

In a company that does rigorous change management, the initial setup took one of our guys probably three or four weeks. He was already supporting the physical appliances, so he had a bit of a running start. However, every time we cut a change record in our company, we need two weeks of lead time: two weeks to get our server infrastructure provisioned, then two weeks to get our firewall rules implemented. After four weeks, we were done.

Maintenance takes about a quarter of the time of the same person who is also supporting the physical appliances.

What about the implementation team?

I have two techie guys who work on installing it. I am more of the enterprise architect, PowerPoint guy.

For use case number one, we struggle with our mean time to resolution in technology operations. We've outsourced a lot of our data center operations and server, storage, and network operations to a third party (DXC), which was formerly HPE Enterprise Services. They manage our data centers, OSs, and servers. CIBC applications are mostly homegrown, so we support and maintain our applications. We do code drops, code changes, DevOps toolchains, etc. So, when something goes bump, there is a lot of finger-pointing.

We have DXC publishing their events now. Going forward, we need to figure out which tools we correlate those events to and start recognizing some of the benefits.

What was our ROI?

We have not seen ROI.

The operational efficiencies that we intend to gain should result in a reduced internal chargeback of tech resources. That's really the ROI that we're going after: operational efficiency and better mean time to resolution for our incidents. 

What's my experience with pricing, setup cost, and licensing?

We have been really happy with the product licensing rates. It has been free for us, up to 100,000 transactions per second, and all we have to do is pay for support. Making their product available and accessible to us has not been a problem at all.

Having a free version is critical for our technology operations use case. This is primarily because our technology operations team is a cost center in our company. They are not profit drivers, and having a free version for installation will probably meet our needs. Even for production, it'll support up to 100,000 messages per second. I don't think in technology operations that we have that many events and alerts from our detection tools. Even if I have 20 or 30 event detection products out there, they're only going to publish the things which are critical or warnings. I don't think we'll ever reach 100,000 messages per second.

We have been dealing with the free version for the better part of 18 months now. There have been no allergic reactions. You should expect maintenance costs, but we've not really needed that because we're not live yet in production for our first use case. As for our physical appliances, the capital markets folks were happy to get a big discount on the last version of the physical appliances. I've heard no complaints about what they're being charged for the Solace product that they've had in use for seven years. However, they haven't modernized any of their applications into Azure yet.

Which other solutions did I evaluate?

When we were searching for protocol-agnostic event meshes, I wasn't the one doing the research. It was our integration domain architect. He had experience with Solace already. When he was doing market research for protocol-agnostic event meshes, his input to me was that there was only one player, a Canadian company based out of Ottawa. Therefore, we didn't do a bake-off with anything else.

Other lines of business in our company have been using things like the MQ Hub and IBM DataPower appliances. Our technology operations division has a program, which I'm working on right now, to start getting our tools to interact together using Solace Event Broker.

Our company is pretty passionate about making sure that we have vendor support. When we do use open-source products, we go out and get third-party support. Compared to some other messaging hubs that we have, I have to admit that our IBM MQ Hub has also been incident free for many years while running on a mainframe, but our IBM DataPower experience has not been good. I would say that Solace fits right up there with the best that we have for message transport in our company.

Topic wildcarding implies that, if we had a set hierarchy for our topic naming convention, we could deliver messages to subscribers based on wildcards, which is something that differentiates it from Kafka. We're not leveraging topic wildcarding yet, but my understanding of the tech is that it would allow our security tools (for example) to poke their nose into topics of interest to them, using authorizations that Solace would control.

Kafka is really the only other competitor. We have IBM DataPower, but that's not really a fair comparison. We aren't intending to do format or data transformations with this tech. We're only looking at protocol transformations and message transport. Kafka has gotten a lot of momentum; whenever our app developers Google that stuff, they get a lot of support and hits. Trying to find some momentum for Solace has been a bit difficult, but the idea of having Solace be our protocol-agnostic message transport system is the plan. However, with only a small number of applications hosted in the cloud right now, the point-to-point message delivery is not unmanageable. Building a Kafka interface to something in Azure is tolerable and manageable when we have fewer than five subscribers.

When we realize that a message would be best consumed by something that talks a different language, we'll start recognizing that Solace is important, instead of publishing a message twice in two different protocols. We'll be able to do it by publishing to the Azure Event Hub, not worrying about what language our subscribers talk. We've been juggling between: do we do Kafka, or do we do Solace? Right now, the momentum for Solace is not there yet because the volume of applications modernizing is so low. But that tide is changing; we're gaining some speed.

In technology operations, we have no use cases that are Kafka-centric. That's mostly because our enterprise tooling doesn't exchange data with anything. There are just these stovepipes of monitoring data.

What other advice do I have?

Get folks in various stovepipes to recognize that their data is valuable to aggregate for the entire enterprise. The biggest lesson learnt for me in use case number one has been to get various support organizations to realize that publishing your data is not about pointing fingers and finding culprits. It's about efficiency of restoring service.

The solution got us to look internally at how we operate and behave as a split-brain support organization, where we have some of it on the inside and some of it outsourced. That has been a benefit to us.

I would rate this solution as a 10 (out of 10).

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Microsoft Azure
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
SA
Technology Lead at a pharma/biotech company with 10,001+ employees
Real User
Top 10
Event life cycle management changes the way a designer or architect will design a topic and discover what is available

Pros and Cons

  • "In my assessment of Solace against other products — as I was responsible for evaluating various products and bringing the right tool into companies in the past — I worked with multiple platforms like RabbitMQ, Confluent, Kafka, and various other tools in the market. But I found the event mesh capability to be a very interesting as well as fulfilling capability, towards what we want to achieve from a digital-integration-strategy point of view... It's distributed, yet it is intelligently connected. It can also span and I can plug and play any number of brokers into the event mesh, so it's a great deal. That's a differentiator."
  • "A challenge we currently have is Solace's ability to integrate with single sign-on in our Active Directory and other single sign-on tools and platforms that any company would have. It's important for the platforms to work. Typically, they support only LDAP-based connectivity to our SQL Servers."

What is our primary use case?

We have a hybrid model because we have a lot of systems on-premise as well as a lot on the cloud. We have one instance of Solace in AWS Europe, and the other one is an on-premise setup in our data center, also in Europe.

How has it helped my organization?

Given the levels that we have designed into our topic taxonomy and the hierarchies, Solace gives us decent levels that we can get down to, in terms of granularity. It supports two to three character sets of their entire, end-to-end topic structure, so I can actually get down to level six or seven, or even more than that.

The last couple of releases have brought about event life cycle management. That changes the way a designer or an architect will design a topic and quickly discover what is available, and whether something has to be built out. That's pretty easy. With the life cycle of the event portal and the event cataloging that is available, it makes life easier for them. With all these new features in place it increases our productivity by something like 50 percent. Now, because we have a nice, curated view of the contents of the event in the event portal, it is easy to discover and to publish new topics. What used to take one day can be done in half a day, leveraging all the best-practices and the features that come with this product. Of course, you need to pay more if you use the event portal or catalog, but assuming all those tools are in place, it is beneficial for the productivity side.

There has also been an increase in productivity around solution management because of the ease of the key features that they offer. You don't need to spend time moving around multiple screens to manage something on the monitor, implement fixes, find hotspots, or even to publish something new. Because it is easier to navigate around, following the life cycle of an event, it definitely increases the productivity, whether it is from a solution management point of view or an operations point of view. From whichever angle you look at it, it makes life easier for that particular person.

What is most valuable?

We are implementing the event mesh feature right now. In my previous organization, we used the event mesh. Solace DMR, its dynamic message routing, and the event mesh capability are among their unique selling points. It's a stand-out, distinctive capability and a differentiator. It is a great feature and, honestly speaking, one of the biggest differentiators they bring to the table, compared to many of the message broker or event broker platforms that I have used in the past.

In my assessment of Solace against other products — as I was responsible for evaluating various products and bringing the right tool into companies in the past — I worked with multiple platforms like RabbitMQ, Confluent, Kafka, and various other tools in the market. But I found the event mesh capability to be a very interesting, as well as fulfilling capability, towards what we want to achieve from a digital-integration-strategy point of view. It's distributed, yet it is intelligently connected. It can also span and I can plug and play any number of brokers into the event mesh, so it's a great deal. That's a differentiator.

It is completely self-sufficient when it comes to connecting the brokers together because it uses a proprietary protocol over the TCP layer. It is a Solace messaging protocol and it is not very difficult to configure it and use it. It is easy to use, easy to configure brokers and to connect them all together. 

From an administration point of view, Solace gives us a visual view of all the brokers in there. The capability of spinning up a broker and connecting it visually is still in progress in their roadmap. But, technically speaking, if somebody knows the administration of Solace very well, they can actually spin up a broker easily, either on a cloud or on-premises, on Kubernetes or on Docker, and can quickly connect them all together, and it starts showing up in their portal. It is pretty straightforward and pretty easy to implement. Here, we have been able to quickly set up the basic mesh architecture for the sandbox environment. It's straightforward and pretty cool as well.

Another feature and selling point of Solace is that it promotes and uses open standard protocols like SOAP or REST. We use AMQP in some scenarios, and there are multiple other ways that we could connect as well, including JMS and TCP. There are five or six different ways that we could integrate with other interoperating, distributed applications within our enterprise. Since Solace supports all of these open, standards-based protocols, it is pretty easy to connect.
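Because the broker speaks open protocols, any standards-based client can connect. As a hedged example, this sketch uses the Apache Qpid Proton Python client over AMQP 1.0; the broker URL, port, and topic address are assumptions for illustration.

```python
# Hedged sketch: receive messages from a Solace topic over open-standard
# AMQP 1.0 using Apache Qpid Proton (pip install python-qpid-proton).
from proton.handlers import MessagingHandler
from proton.reactor import Container

class Receiver(MessagingHandler):
    def __init__(self, url, address):
        super().__init__()
        self.url = url
        self.address = address

    def on_start(self, event):
        conn = event.container.connect(self.url)
        event.container.create_receiver(conn, self.address)

    def on_message(self, event):
        print("received:", event.message.body)

# 5672 is the conventional AMQP port; the topic address is hypothetical.
Container(Receiver("amqp://broker.example.com:5672", "topic://orders/created")).run()
```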

It is also pretty simple to manage. The two major standout points are a very simple architecture and that it's a lightweight middleware platform. You just spin it up somewhere and connect. On the top layer there is a single pane of glass to monitor, to keep the checks and balances in place, and also to administer from a cloud platform. That's a pretty simple, straightforward setup, like any cloud-based middleware platform. The model that I have for MuleSoft in my company is the same for Solace as well. I would rate it as simple and straightforward.

I would rate Solace's ease of management better than competitive or open-source solutions, because they have brought thought leadership to the table for looking at event management and building a complete life cycle view of an event. Right from the time an event starts in the company, until the time that the event has to be retired, it goes through a life cycle. That includes discovering an event, designing the event, adding certain rules to it, configuring it, and deploying it. Finally, you'll want to monitor and operate it. The whole life cycle is completely manageable using Solace's UI. That is a great deal. None of the competition has brought that view to the table yet. This is another distinctive differentiator that Solace has.

In terms of the solution's topic hierarchy there are two ways to look at it. One is that there are particular topics that we set up and that are very static in nature because we know about their data already. For any other areas that are fixed, it is pretty straightforward because the topic taxonomy is already agreed on. It is already aligned with the stakeholders and it is easily implementable in Solace.

The other side is that a publisher can choose to dynamically post a new topic. If they know what the topic taxonomy model looks like for our company, then it is also possible to dynamically put the topic in place and publish it as-is.

It also gives you wildcard-based routing rules. Based on the topic taxonomy and hierarchy, I am able to route a message or use the wildcards that are placed in the higher topic hierarchy to even put in security. If a particular group shouldn't see a particular message coming in on a topic, I can control that as well using the right topic taxonomy or the topic hierarchy. In Solace, that is also pretty straightforward because their topic taxonomy definition and the way that they promote it and the way that we have understood it from them is pretty easy.

Kafka has a different way of doing that. RabbitMQ is very similar to the JMS-type of message platforms. Solace is very similar and it supports both dynamic and static. The solutions are even, from that perspective.

What needs improvement?

Another product that I use very much in my current portfolio is MuleSoft. It's an API management platform and also an iPaaS, and it's a Salesforce company now. Both these products have to work together to give an assured-delivery type of middleware platform. We felt that having a connectivity layer, a connector, or an adapter already pre-built in Solace for platforms like MuleSoft and Dell Boomi, middleware especially, would be pretty interesting. It would make it a more authentic and credible connector as well.

Today, we have to rely on JMS or a REST-based protocol, but we have raised this request with Solace. While connectivity is definitely easier, at the same time, Solace needs to work on some of the connectors for industry-leading applications like Salesforce and Workday, the typical distributed applications that we might have. It is pretty good at this point, but they can do better on that.

Also, a challenge we currently have is Solace's ability to integrate with single sign-on in our Active Directory and other single sign-on tools and platforms that any company would have. It's important for the platforms to work. Typically, they support only LDAP-based connectivity to our SQL Servers. 

We have one critical step, from an IT security point of view. If there are any SaaS applications or cloud applications which are hosted outside of our cloud platform, then the only way that we can do SSO is through SAML or another specific protocol. Solace doesn't support those at this point in time, and we have raised this as a platform request. I think it is on their roadmap. But currently, it supports only LDAP. That is an improvement area for them.

For how long have I used the solution?

This is going to be my third year using PubSub+ Event Broker. I was with another company before I joined my current one. It was on the fast-moving consumer goods side, and I started using Solace there. In my current company, this is a very new platform and I'm setting it up. But my overall experience with Solace would be two to three years.

What do I think about the stability of the solution?

Stability is definitely one of the key factors for us. My experience is that it's one of the robust platforms, because of the way that it's engineered and designed to work. It's absolutely a stable solution. We've never had any problems, given the way that we have implemented it.

What do I think about the scalability of the solution?

It's a completely scalable solution. Our architects have been looking at using Solace for multiple different use cases, whether it is to do with event architecture or assured-delivery types of projects or even for a simple publish/subscribe type of messaging or an async-API type of model. It seems that our architects find this to be a tool that can extend across these lines of capabilities. Solace brings that to the table.

From the developer's point of view, it provides ease of use and ease of configuration. After somebody has worked on and is really proficient in IBM MQ or TIBCO EMS, which are heavyweight platforms that come with certain benefits, those architects and developers find Solace pretty easy to handle and to extend to other application areas or use cases, including IoT, async APIs, pub/sub, and event-driven messaging. We are also using it for assured delivery, leveraging their queues and persistence layer. It does help our architects and our developers to extend their applications to all of those areas.

How are customer service and technical support?

Their technical support is pretty quick. We are bound by an SLA and we have the highest tier of support from them. The turnaround time is pretty good and they are strong technically. I would rate their technical support as good.

Which solution did I use previously and why did I switch?

The fact that there is a free version of Solace was something that we looked at from multiple angles. For example, when we need sandboxes, the question we had in mind was whether we should go for the paid version or use the free version. The free version doesn't come with support but it offers a lot of capabilities which a developer can play around with. 

But when we had to choose between the free version and the licensed version for anything on our test stage, pre-prod, and prod, which are the other instances that we have, it was a no-brainer that we wanted to go with the paid version, because that brings in a whole lot of enterprise-class support and multiple other things along with it. We take advantage of the free version for sandbox, for a little bit of training, and PoCs. But predominantly, we use the enterprise-class version for the other instances we have.

How was the initial setup?

The initial setup was straightforward in terms of: 

  1. The architecture design. The tool is organized in a very clean way, the mesh is organized. It is easy to spin up a particular broker in an instant, purely from an architecture point of view, as compared to real heavyweights like IBM MQ or TIBCO EMS.
  2. From a solutioning point of view, because they have features which were released recently which cover the life cycle of an event, it is easier and quick to handle the event flow from start to end. Whether it is for an architect or for a developer, it's a pretty nice tool to have. That's the second point: the simplicity of their UI and the way the life cycle works.
  3. From an ops point of view, after our applications go live, the dashboards and some of their operational monitoring capabilities or features are also simple and straightforward.

We haven't found anything significantly complex.

What was our ROI?

We haven't seen return on our investment with Solace yet because it's pretty new in our environment. But we do see the value it brings to the table from a digital-transformation point of view. Both the companies that I was part of, where I was fortunate to lead the digital transformation projects, identified Solace as the platform to make that change: from a heavyweight, old or legacy model of middleware or MQ platform, to a very lightweight, modern, completely distributed model. It's quick, nimble, and agile in all types of setups. That is a huge shift in the way that we do things, and it makes things notably faster. Qualitatively, this has definitely been a great tool.

Quantitatively, I would not be able to disclose any numbers, but we sense that there is going to be a huge return on investment because we might shut down some of those old, heavyweight, on-premise-only platforms. Because this is also a pay-as-you-use model, we can effectively make use of the license, as and when we require it. There are definitely going to be good cost savings as well.

What's my experience with pricing, setup cost, and licensing?

They have good pricing in place. Their licensing model is a simple model. 

There are different tiers where you can choose what would work for you. As a customer, you need to know roughly how many messages a month you will use. 

If you know that it is going to be between 50,000 and 100,000, while there is a large gap between those two figures, you can start small and scale it over a period of time until you reach 100,000. You might start with 50,000. Since it might take six months to reach 100,000, what I would suggest is starting with the lower tier, because you don't need to pay for something that is higher. Then, as the demand grows, the tier can be revisited. That's based on the license agreement that you should have as part of the contract. You should agree with Solace that you will start small but that your intentions are to grow, depending on the demand that's coming in. Provide a roadmap of how long it will take to reach the next tier.

Solace appreciates that view of your roadmap, and they will also come along with you in that journey. They will tell you, "Okay, start with a giga tier, don't go for a tera," or even start with a kilo tier. Slowly, as you see demand going up — it could be once every two or three months — you can have a look at it. It could also be once in six months if you don't want that many interactions. See how many you have done. If it has not gone beyond 75,000, you can continue to operate under the current tier. But if you think it's going beyond 75,000, you can move to 100,000 tier. It's a staged and calculated approach.
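As a minimal sketch of that staged approach, the check below encodes the 75 percent review threshold mentioned above; the function and the numbers plugged into it are illustrative, not part of any Solace tooling.

```python
# Hedged sketch of the staged tier review described above.
def tier_review_due(monthly_messages: int, tier_limit: int,
                    review_threshold: float = 0.75) -> bool:
    """True when usage crosses the review threshold of the current tier."""
    return monthly_messages >= tier_limit * review_threshold

# On the 100,000-message tier, a review is due once usage passes 75,000.
print(tier_review_due(monthly_messages=76_000, tier_limit=100_000))  # True
print(tier_review_due(monthly_messages=60_000, tier_limit=100_000))  # False
```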

You also have to choose which of their product models would work for you. They have an appliance, a software-as-a-service model, and an on-premise model using a Kubernetes-based setup. You need to look at your architecture and where your real needs are for event-driven brokers to be sitting. The licensing model changes accordingly.

You have to have the right contract in place so that you can reassess it every few months to see whether you have breached your threshold. It's not that it will stop working, but you need to have it in your agreement that even before usage reaches 70 or 80 percent of the threshold, you will have a call to see whether you want to upgrade or not. That's all part of the contractual terms, conditions, and negotiations.

What other advice do I have?

There are a few important things to keep in mind when considering this tool. The first is to know what kind of problem you're trying to solve. If it is just about having pub/sub, there are a number of other tools in the market, including Solace, which offers a simple, straightforward solution. But if you are looking at completely digitally transforming your company and bringing in event-driven architecture as a key factor in your integration strategy, then Solace is definitely a go-to tool. Knowing the end goal you're going toward, the objective you're trying to meet, is very important. That is the first thing one needs to be aware of and clear about.

The second thing is the engagement model with Solace, whether it is the terms of the licensing model or the way you will work with their Professional Services team or their support team. All that has to be discussed and agreed with a clear customer-success plan in place.

Thirdly, you want to clearly identify what architecture you want to implement, because the mesh can span across anything. But you don't want a big-bang approach. Start small and then grow. You need to know how your architecture is evolving. Start by putting a simple MVP in place, and from there you can grow it in multiple phases. That's what we are doing.

Have the right people in place. Somebody who has a good background and experience in implementing Solace can turn things around quickly.

We have four or five architects who use Solace, and we have two administrators of the platform, or platform architects. And we have about five developers now using it, but that will probably go up a little bit once we extend the mesh further. We also have two or three in support.

I would rate the solution at eight out of 10. I don't want to give them full marks because there is a lot that they could improve on: the SSO front, and also the community front; they are also evolving their architecture based on community best practices and the way the community works. There's a lot of work for them to do to reinvent their on-premise model for a Kafka container-based solution. I would give those additional two points, out of 10, if I had seen all of that in action. There is definitely thought leadership within Solace, so I'm assuming that it will come through at some time.

Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Sushil Sarda
Lead Manager at a manufacturing company with 10,001+ employees
Real User
Top 5Leaderboard
Based on your requirements, there are various size levels, similar to t-shirt sizing

Pros and Cons

  • "When we went to add another installation in our private cloud, it was easy. We received support from Solace and the install was seamless with no issues."
  • "We have requested to be able to get into the payload to do dynamic topic hierarchy building. A current workaround is using the message's header, where the business data can be put into this header and be used for a dynamic topic lookup. I want to see this in action when there are a couple of hundred cases live. E.g., how does it perform? From an administration perspective, is the ease of use there?"

What is our primary use case?

One of our use cases at our global company went live recently. We have a lot of goods that move via sea routes. While there are other modes of transport, for the sea route in particular we wanted to track our shipments, their location, and that type of information, and generate some reports. Also, there are multiple applications which need this data.

With Solace, we are bringing information in every minute (almost real-time) from our logistic partners and putting it on Solace. Then, from Solace, the applications that want to consume the information can take it. E.g., we are generating some dashboards in Power BI using this information. We are also pushing this information into our data lakes, where more reporting plus slicing and dicing is available. In the future, if more subscribers want this information, they will also be able to take it.

We have both our private cloud and a version completely hosted on SaaS by Solace. 

How has it helped my organization?

The base use case is that we wanted the shipment tracking information in multiple applications, like Power BI and the data lake, for more reporting, etc. If we had not got Solace, we would need to extract this information multiple times from the source application. Now, we pull this information only once and put it on Solace. Anybody can take it from Solace because the information is readily available. We are generating the data only once.

Every organisation's data exposure grows as its devices become interconnected. We don't want to transfer the same data multiple times. Solutions like Solace help us publish data only once; then anybody can pick it up from there. This reduces the cost of data transfer. It reduces the load on data sources, because we aren't asking them to generate the same data multiple times.

Our company is an SAP-centred company. A lot of our key applications use the SAP product suite. When we talk about transaction data and master data, that is where the real complexity comes into play. There are a couple of use cases that we have discussed with Solace for topic hierarchy. E.g., a product master might be sold by multiple channels, produced in multiple factories, and sold in multiple geographies, so creating a topic hierarchy for these could be challenging. When we started, we discussed this complexity with Solace. They helped us arrive at an initial topic hierarchy based on some similar use cases which had been implemented for other customers, sharing their insights.

Another point is their overall approach to topic building. They have very good documentation. It will be our own internal complexity that drives the topic hierarchy. We are currently in the early stages, and so far the journey has been good. Right now, we are comfortable with the information and help we are getting from Solace, along with the overall approach recommended for topic building. However, time will tell how this topic hierarchy functions, especially after we generate very complex cases with Solace.
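As a concrete illustration of the kind of hierarchy being discussed, here is a hypothetical sketch of a product-master topic taxonomy. The level names, their order, and the organization prefix are invented for illustration and are not the actual design agreed with Solace.

```python
# Hypothetical topic taxonomy for a product master that is sold through
# multiple channels, produced in multiple factories, and sold in
# multiple geographies.
TOPIC_TEMPLATE = "acme/masterdata/product/{region}/{channel}/{factory}/{sku}/{event}"

topic = TOPIC_TEMPLATE.format(
    region="emea", channel="retail", factory="plant-07",
    sku="SKU-123456", event="updated",
)
# -> "acme/masterdata/product/emea/retail/plant-07/SKU-123456/updated"
# Subscribers can then filter with wildcards, e.g. every product update
# in EMEA, regardless of channel or factory:
#   "acme/masterdata/product/emea/>"
print(topic)
```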

What is most valuable?

The most valuable thing for us is being able to publish a message, then have the ability to subscribe to it on the fly. We want to democratize the usage of this going forward.

We are currently using the basic platform, and as we become more mature, I am particularly excited about using the Event Catalog, which was launched recently. There are certain features, like event visualisation and event discovery, which we want to see in action. It will take some time for us to get more events published on Solace.

The software has been very good because:

  1. You can spin off a Solace instance very quickly. 
  2. Based on your requirements, there are various size levels, similar to t-shirt sizing. 
  3. When we went to add another installation in our private cloud, it was easy. We received support from Solace and the installation was seamless with no issues. 

After publishing, we have seen the solution's topic filtering go down to approximately six levels, which is quite granular. That many levels are good enough. Also, business payload lookup is supported.

What needs improvement?

We have requested to be able to get into the payload to do dynamic topic hierarchy building. A current workaround is using the message's header, where the business data can be put into the header and used for a dynamic topic lookup. I want to see this in action when there are a couple of hundred cases live. E.g., how does it perform? From an administration perspective, is the ease of use there?
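To show what that header-based workaround could look like, here is a hedged sketch over MQTT, one of the open protocols Solace supports, using the paho-mqtt client. The broker address, header fields, and topic layout are assumptions for illustration.

```python
# Hedged sketch: business data is carried in a header-like structure and
# used to build the topic dynamically at publish time, instead of
# inspecting the payload body.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("broker.example.com", 1883)  # hypothetical broker

headers = {"region": "apac", "shipment_id": "SH-7781", "status": "in-transit"}
payload = {"lat": 1.29, "lon": 103.85, "ts": "2021-11-02T10:15:00Z"}

# The topic is derived from the header fields, not the payload.
topic = "acme/logistics/shipment/{region}/{shipment_id}/{status}".format(**headers)
client.publish(topic, json.dumps(payload))
client.disconnect()
```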

The second challenge is about skills, and it is not related to the product directly. Resource availability can be a challenge, e.g., if we have a lot of use cases for this and insufficient manpower, which comes from our partner companies and other IT companies, that will slow us down. This is an area where, if Solace could do something, it would be good. If they could add some training or certification, that would be good. From a product perspective, so far in the journey it looks okay, but only time will tell once we have put a lot of volume and use cases on it.

The topic catalog was actually a gap in the product about a year and a half ago, when we started. It was not available in the basic platform, and we said, "This will become a challenge." Now, they have recently launched the Event Catalog and event visualization, where you can do an impact assessment if you change something, e.g., you can see the whole visual of the impact. If you are publishing an event, you can see who the subscribers are, etc. It does look good, but we are at a very early stage. Therefore, we want to see it in action for a broader base and more use cases before we say, "Yes, from an administration perspective, this makes sense."

For how long have I used the solution?

We have been using the solution close to a year. Our use case recently went live. Now, we are working on a couple more projects.

What do I think about the stability of the solution?

So far, we haven't seen any downtime or issues. We are in our initial journey, but we don't see any such challenges of instability at all. In fact, during the platform setup and initial test cases, we never encountered any issues of services not working, downtime, etc.

What do I think about the scalability of the solution?

As far as the SaaS solution is concerned, it is available in t-shirt sizes. Capacity can be added to the existing subscription on the fly. Scalability should not be a challenge.

We keep looking at our usage and approach. When we reach 70 percent usage, that's an indication of the need for further scaling up. By the time we reach 85 or 90 percent, we will have already added capacity to the solution.

Our users are IT teams, not business users. Our use case originates from the supply chain, and the integration team manages Solace. We have around five to six IT users who interact with the platform to develop the solution. Once it has gone live, they support it in the production environment.

How are customer service and technical support?

The technical support has helped us in all aspects. So far, there are no complaints. We got really fantastic support from them. Their leadership is also very much committed. Their senior VP joins us in a weekly review, which is the kind of commitment coming from them, since their leadership is involved. Their technical teams are definitely involved and fully committed to our success.

A year and a half ago, when we started our journey, the Event Portal was not available. This is a strong feature that they added based on feedback that they heard from us. This is not something we could have requested with an open-source product, like Kafka. We would have had to outsource it to a partner for them to build it.

Which solution did I use previously and why did I switch?

Solace is a new product for us that changes our approach to architecture. Solace is helping us to add some workarounds which will convert messages into event-enabled messages. That is how we are using Solace right now. However, before this solution, we did not have anything.

How was the initial setup?

The initial setup was pretty straightforward.

The deployment took four to five weeks. When we got the first use case, we started by understanding the requirements pretty well, then built the solution, did the testing, and made it live. Once the solution is live, if anyone else needs the data, that can be done very quickly. It won't take a couple of weeks like the first time. You can just connect, pull the data, and test it.

From an implementation strategy perspective, we wanted a simple use case, e.g., just publish and subscribe. The easiest case could be point-to-point, where there is only one publisher and one subscriber. We wanted to put non-business-critical things on Solace first to see the performance and learn how they worked, their challenges, and the dos and don'ts.

Later, we gradually want to move into business-critical cases. For the next set of our use cases, which are running on other middleware, we are trying to replicate them on Solace. However, we will not be jumping to Solace directly. Rather, it is like a parallel solution being built on the Solace layer. We want to see whether it is working fine. Gradually, we'll start switching the scenarios which are running on other middleware over to the Solace layer.

During this journey, we have also been targeting our topic building mechanism and approach, which will get firmly established. That is how we are approaching Solace overall. At the same time, we have also brought in our partners for other middleware - MuleSoft, Dell EMC, and SAP. These are some of our strategic partners. It is not just a big product which you take and then forget. It's the ability of the tool and how well it fits into your ecosystem as well. Solace can be a very good tool, but if other applications are not able to communicate with it, then it will be of no use to us. Therefore, we are also seeing how Solace with Dell EMC, MuleSoft, or SAP could create value for us. That's another thing which we are doing from a strategic perspective.

What about the implementation team?

The technical support partnered with us in creating the initial use cases and setting up the platform. Our IT team and infra network team worked from the back-end to install Solace on our private cloud.

When we did the first project, we worked with the Solace team. They were good people who helped us go into the smallest level of detail for the project requirements. 

Our staff resources customize whatever work has to be done on the Solace piece. Once it goes live, they do regular monitoring. For a new onboarding project, we rely on these staff resources.

What was our ROI?

We have just started. The journey for us is new; we are not mature yet.

What's my experience with pricing, setup cost, and licensing?

Go for the best deal that you can get from Solace. Primarily, the licensing is dependent on the volume that is flowing. If you go for their support services, it will cost some more money, but I think it is worth it, especially if you are just starting your journey.

I don't think it makes any difference if somebody gives us a free version. That would be very small from a capacity perspective. For an enterprise organization that would not be sufficient, so we are not looking for freeware. 

We are looking for something that will add value and be fit for purpose. Freeware is good if you want to try something quickly without putting in much money. However, as far as our decision is concerned, I don't think it helps. At the end of the day, if we are convinced that a capability is required, we will ask for the funding. Then, when the funding is available, we will go for an enterprise solution only.

Which other solutions did I evaluate?

Our journey to Solace was not very long. We started interacting with Solace leadership probably about a year and a half back. That was the first time that we spoke with them about this concept and product. There were a few things that we asked for as part of the product roadmap. Then, we moved to the product evaluation, where we also brought in a couple of other competing tools. Finally, we selected Solace.

The challenge with open source is they give you a basic flavor, which is decent enough. However, at the enterprise level, you need the following:

  • A good support mechanism available
  • Reporting 
  • Administration
  • A distributed license, since there is talk about how to decentralize usage.

These are the challenges that come with an open-source product. They do the basic thing well, but if you need to make the solution fit for purpose, you have to maintain the custom solution on your own. This becomes a problem from a resource and investment perspective, as technologies keep changing.

If we talk about Solace, you see the value-add layer. I can say that Solace covers basic Kafka, but on top of that Kafka layer, they have added their own layer. That is really good, as this is where it adds value and why we went for it.

There are a lot of good things that made us decide to go for Solace. Looking at Kafka, the value-added monitoring, Event Catalogs, and visualization are not there. When we rank Solace's competitors on certain aspects, we rank them a little lower. Overall, Solace was the one that scored highest, so we went for it.

We do not use other competitor products, so we don't have direct experience with their ease of design. We also evaluated:

  • A Microsoft solution: This solution was the closest to Solace.
  • OpenText
  • Kafka (open source)
  • Confluent
  • Also, two data stream solutions for high volume data.

The challenge with Kafka is you have to think of everything on your own. You have to build the complete service part of the solution on Kafka. Solace compared to Kafka was a no-brainer. Solace distinguished itself with topic building and scalability. When in the cloud, you can quickly scale up.

With Kafka, the challenge comes when you design a solution that has topic management. How do you make a topic discoverable? How do you define the dependencies between one topic and a subscriber?

From a monitoring perspective, I also feel Solace has a better product. More than that, there are commitments that come from Solace, such as improvements. They are open to hearing what we are saying. If there are certain things which are not available, they have said that they will try to plug those gaps.

What other advice do I have?

Start with the simplest use case. Learn how Solace operates and about the ways it will work in your own internal organization. You will have to come up with standard guidelines, best practices, ways of working, etc. Once you understand all of these things, then start picking more use cases at the next level of complexity.

Before you put anything directly into production, do a pilot run. Only once you are pretty comfortable with this new technology should you switch over to it.

We want to use the solution's event mesh feature, but we are not there yet. Currently, we have two instances of Solace that are connected in a small mesh, but this is a very basic thing.

We have the software but did not go for the hardware part of the solution.

I would rate this solution as an eight (out of 10).

Which deployment model are you using for this solution?

Private Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Amazon Web Services (AWS)
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
MO
Senior Project Manager at a financial services firm with 5,001-10,000 employees
Real User
Top 10
Enabled business and cross-business synchronization, allowing data to be shared across application teams and business verticals

Pros and Cons

  • "Going from something where we had outages and capacity issues constantly to a system that was able to scale with the massive market data and messaging spikes that happened during the initial stages of the COVID crisis in March, we were able to scale with 40 plus percent growth in our platform over the course of days."
  • "Some of the feature's gaps with some of the open-source vendors have been closed in a lot of ways. Being more agile and addressing those earlier could be an area for improvement."

What is our primary use case?

We're a capital markets organization, so we primarily use it for our trading algos' order management, streaming market data, and general application messaging. Those are our key use cases.

Our other use cases range from guaranteed messaging, where we absolutely need the resiliency of every message, to higher-performance streaming market data, meaning the millisecond latency-sensitive algorithm operations that are running as well.

We also use it for general messaging and to displace some of our legacy messaging applications such as MQ, EMS, and things of that sort. We are standardized on Solace PubSub+; it's an architectural standard at our company.

How has it helped my organization?

From our standpoint, Solace has been tremendously successful in enabling business and cross-business synchronization, letting us share data across application teams and across business verticals. It's the success and the uptime stats that Solace has been able to provide that have allowed more and more users to use the system.

Going from something where we had outages and capacity issues constantly to a system that was able to scale with the massive market data and messaging spikes that happened during the initial stages of the COVID crisis in March, we were able to scale with 40-plus percent growth in our platform over the course of days. We never had any outages or any issues despite those massive spikes. The system was able to cope. There were things that we needed to do to make sure we had headroom, and then we were able to scale out, but we can do that seamlessly without our users even knowing.

It has increased application design productivity compared with competitive or open-source alternatives. From my pursuit of the "secret sauce," I found that for us it really is the service and the partnership that we have with the company. Having that level of expertise and subject matter knowledge available to us is second to none. It's not just that Solace is entrenched at my company, is the standard, and would be too hard to roll off. It's more that the level of performance and the level of service we get from Solace precludes that. I'm looking at where other vendors are, and I just can't see them getting to the level that Solace is at for us, to really make an impact. Again, there's a small use case here and there where something else could potentially be of use, but for the most part, it's not going to get much traction, given the success of the platform currently.

What is most valuable?

Performance and stability were absolutely key areas for us. Having a rock-solid appliance-based architecture, with the support that goes behind it from Solace, is the most valuable aspect of this solution. From our perspective, Solace's background building network devices with very low error tolerances is key for us. What separated them from other vendors, at least initially, was the appliance option; versus commodity hardware, that was definitely a very important distinction for us. From a management standpoint, being able to have a system we can manage internally, and keep within our engineering group, is key. We isolate it from our standard infrastructure and commodity hardware group.

If we had to deploy to a messaging platform that uses commodity hardware or converged infrastructure, the costs would be much higher for us, especially due to certain internal costs. The appliance-based architecture was, at least initially, a big advantage for Solace. On the other side, the support that we have experienced from Solace as a company has been very positive.

Our background is primarily on the market data side, where we deal with a lot of different vendors, from Reuters and Bloomberg and other very big systems to vendor appliance hardware. The support we receive from Solace is by far the best; it is the top of the market. The level of expertise in troubleshooting and identifying issues is absolutely key.

Our messaging platform is the largest thing on our internal network; the last messaging spike was close to 10 billion messages a day. We're very large consumers on the network. We need to have key exposure to everything that's going on within our platform. And when we do need to get Solace on the line, they know more than our network team does about troubleshooting, where our constraints are in the system, and what's going on. Those are the key advantages for us. Solace support is the best that I've experienced among any of the vendors that I deal with. The competitors I am referring to are TIBCO, Kafka, MQ, and EMS, as well as messaging platforms on the ultra-low-latency side, like Dell. We have various small installations and pockets of those technologies everywhere, but compared to other vendors and database companies, Solace's response time is better. The depth of knowledge and the consistency of knowledge are far and away better than any other vendor partner that we deal with.

In terms of the ease of management, we have a very large deployment. We've globally deployed dozens of appliances in various data centers across North America, Europe, and Asia-Pacific. To manage that, we do require an organized team, good infrastructure, and a support structure. Solace, as a partner, helped us with the initial installation, doing the legwork and initial analysis that positioned us to be successful with our deployment. It's a complex messaging platform. It is not a simple thing to do, but the tools and the support you get from Solace definitely enable you to be successful with the installation.

How dynamic and flexible the topic hierarchy is was one of the definite initial benefits of Solace compared to other legacy systems. The ease of doing in-service upgrades and in-service deployments without affecting the environment was no less key, coming from legacy platforms where deploying any new topics, topic structures, or publishing structures wouldn't be allowed; it would force you to do system-wide restarts and involve every user on the platform. With Solace, being able to not only deploy topic changes without any issues but to do so without impacting clients is an advantage. Similarly, upgrades and patches are much more seamless compared to the legacy systems that we supported in the past. Reuters and EMS were the systems that were displaced by Solace.

In terms of the granularity of the topic filtering feature, Solace was heavily involved from a professional services standpoint, working with us and our architects to define our topic structure and hierarchy for the initial deployment. Only recently did we make some structural updates to enable more agile, cross-business sharing, but for the most part, it's been a very successful deployment of the topic hierarchy. It is flexible enough to allow us to use subscriptions and publishers as needed. We have a strong process to make sure that folks conform to those topic structure formats, but Solace was involved in the development of that structure initially, and we have been pretty successful with it.
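
To make that concrete, here is a minimal sketch of what a capital-markets-style topic hierarchy convention can look like in code. The level names and helper class are hypothetical illustrations, not this firm's actual scheme; what is accurate is Solace's matching rule that "*" matches a single topic level and ">" matches all remaining levels.

    // Hypothetical topic scheme: region/assetClass/eventType/symbol
    public final class MarketTopics {
        private MarketTopics() { }

        public static String trade(String region, String assetClass, String symbol) {
            return String.join("/", region, assetClass, "trade", symbol);
        }

        public static void main(String[] args) {
            System.out.println(trade("amer", "equity", "ACME")); // amer/equity/trade/ACME
            // A desk interested in every AMER equity trade subscribes to:
            //   amer/equity/trade/*
            // A risk system that wants everything in AMER subscribes to:
            //   amer/>
        }
    }

Because the filtering happens on the broker, tightening or loosening a subscriber's view is a subscription change, not a publisher change.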

We do have Kafka deployed to serve some use cases in capital markets. We've evaluated it and continue to evaluate it. But from our perspective, the performance and scale that we have with Solace preclude a large-scale deployment. It's also a platform that requires a significant commodity hardware installation. Along with that, we always license the open-source software and platforms we use; we make sure that we're fully covered from a support perspective, which is how we always operate from a regulatory and risk perspective. It doesn't make a lot of sense to move there, given the level of investment and performance that we have currently. In cases where certain vendors have out-of-the-box plugins with Kafka, we build connectors to allow them to publish onto our platform, and that has worked seamlessly for us.

What needs improvement?

Some of the feature gaps with some of the open-source vendors have been closed in a lot of ways. Being more agile and addressing those earlier could be an area for improvement. Obviously, their movement to cloud and integration is there. There's a need to keep investing in that area to make sure that there's feature parity with their competitors and to have that seamless burst to the cloud available like the other vendors out there. But from our perspective, if they can keep that feature parity, there'll be little appetite to move.

For how long have I used the solution?

We have been using Solace since 2010 or 2012. We've had them for quite some time, so it has gone through multiple interfaces and many iterations of product names, but we're using direct event messaging, Solace caching, their VMR, and the other PubSub+ products. We use all of their products that I can think of, so we have had a pretty large installation of Solace for quite some time.

We're primarily on-prem. As we're a bank, the biggest installation stays on-prem due to privacy concerns, and we have primarily deployed appliances, with some of the Event Broker software and some virtual appliances as well. We have deployed both in internal data centers and in colo data centers that connect us to certain trading and market use cases.

How was the initial setup?

We started the deployment in the fall. The bulk of the fixed-income use cases was rolled out within around three months for the initial use case. At that point, fully migrating every last legacy publisher over probably took another year or two, but that's really not on the vendor side. It's more that application teams and legacy systems are slow to move away. We were able to fully remove every legacy publisher, including every EMS publisher, within capital markets and fully migrate over to Solace.

We have roughly 200-plus applications across capital markets and over 1,000 client end-user subscribers. That doesn't even count the clients on the application side. Our large applications account for a billion messages across our global architecture.

What was our ROI?

ROI would be hard to calculate, but just talking from a storage perspective, we've been able to use Solace storage appliances to take what would have cost around $2 million a year in storage down to half that amount as an initial investment. We were able to pay that business case off in year one, whereas three years would have been perfectly acceptable from a cost-savings perspective. And that's just on the storage side. We can pay between $10,000 and $20,000 a year in internal charges per server, versus just paying rack-space costs for our infrastructure. It's a significant amount when you consider that you'll often need many commodity hardware servers to replace a single appliance pair. It's a significant cost saving.

What's my experience with pricing, setup cost, and licensing?

In terms of pricing, you need to take into account the internal infrastructure chargebacks, monitoring, and service chargebacks that come from using your internal hardware versus what you get by deploying an appliance. In many cases, the service costs for having your infrastructure team manage your underlying hardware, plus the monitoring and service costs, can quickly spiral; storage can drive costs much, much higher. Whereas sitting on appliance-based software that we can manage internally within our own engineering team has been much more cost-effective for us. The amount of internal chargebacks we've been able to avoid is significant.

It's really the reason we're able to have a pseudo cloud-based costing model by using appliances. Everyone's going cloud; it's a cost play as well as an agility play, but we get that cost play by using the Solace appliance model on our side. I think many people who deal with similarly large infrastructure teams are trying to get to the same sort of model.

What other advice do I have?

The key is defining the topic structure: working with Solace, their professional services, and the engineering team to define a flexible, well-defined, and extensible topic structure. It's also important to put a very defined process around application onboarding. Do proper monitoring post-onboarding to make sure that, as application publishing, subscription, and behavioral changes occur, you can be on top of them, be aware of them, and monitor for them. Initially, the key is to set up a very good governance structure for onboarding, and then go back and make sure you monitor onboarded applications for changes. Know your clients, your applications, and their behavior, and be on top of that.

From our perspective, Solace has been a true partner. We work not only with their sales, engineering, and support teams to make sure we're aware of all the processes, but we also make sure to keep on top of the product roadmaps. We're constantly talking about what we're doing, sharing with other clients, and learning about what they're doing at different customer events. Really, the key takeaways from Solace as a company are being a true partner, having transparency into what's going on on the platform and what's coming down the pipe, and then that world-class support.

I would rate it a ten out of ten. 

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
NK
Head of Enterprise Architecture & Digital Innovation Lab at a tech vendor with 10,001+ employees
Real User
Top 10
Can add multiple subscribers seamlessly to topics and queues using different formats and protocols

Pros and Cons

  • "This solution reduces the latency to access changes in real-time and the effort required to onboard a new subscriber. It also reduces the maintenance of each of those interfaces because now the publisher and subscribers are decoupled. Event Broker handles all the communication and engagement. We can just push one update, then we don't have to know who is consuming it and what's happening to that publication downstream. It's all done by the broker, which is a huge benefit of using Event Broker."
  • "I would like them to design topic and queue schemas, mapping them to the enterprise data structure."

What is our primary use case?

We are using Event Broker to publish data across the enterprise, sharing transaction data updates in real-time and, in some cases, telemetry data.

We do use event mesh, but our use is limited. The reason is that we have our publishers and consumers on-prem, while we have applications on AWS, Azure, and SaaS. It's a multicloud hybrid infrastructure, but the majority is still on-prem. We are slowly moving to AWS, Azure, and SaaS. As we expand to AWS and Azure, event mesh will be a key feature that we would like to leverage.

We are using the latest version.

How has it helped my organization?

When we publish a product update, it goes out across the enterprise. We have 100 to 200 consumers, which are basically the applications interested in product changes in real-time. We can publish these product changes to Solace Event Broker with one update. Then, all 100 to 200 consumers can be listening to this topic or queue. Any time a change happens, it's pushed to the topic. They have access to it and can take whatever actions they need based on those changes. This all happens in real-time.

We used more point-to-point integration in the past. This solution reduces the latency to access changes in real-time and the effort required to onboard a new subscriber. It also reduces the maintenance of each of those interfaces because now the publisher and subscribers are decoupled. Event Broker handles all the communication and engagement. We can just push one update, then we don't have to know who is consuming it and what's happening to that publication downstream. It's all done by the broker, which is a huge benefit of using Event Broker.
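
As an illustration of that decoupling, here is a minimal publisher sketch using Solace's Java API (JCSMP). The host, VPN, username, and topic string are placeholders, not this organization's actual configuration:

    import com.solacesystems.jcsmp.JCSMPException;
    import com.solacesystems.jcsmp.JCSMPFactory;
    import com.solacesystems.jcsmp.JCSMPProperties;
    import com.solacesystems.jcsmp.JCSMPSession;
    import com.solacesystems.jcsmp.JCSMPStreamingPublishEventHandler;
    import com.solacesystems.jcsmp.TextMessage;
    import com.solacesystems.jcsmp.Topic;
    import com.solacesystems.jcsmp.XMLMessageProducer;

    public class ProductUpdatePublisher {
        public static void main(String[] args) throws JCSMPException {
            JCSMPProperties props = new JCSMPProperties();
            props.setProperty(JCSMPProperties.HOST, "tcp://broker.example.com:55555"); // placeholder
            props.setProperty(JCSMPProperties.VPN_NAME, "default");
            props.setProperty(JCSMPProperties.USERNAME, "product-publisher");
            JCSMPSession session = JCSMPFactory.onlyInstance().createSession(props);
            session.connect();

            XMLMessageProducer producer = session.getMessageProducer(
                    new JCSMPStreamingPublishEventHandler() {
                        public void responseReceived(String messageID) { /* broker ack */ }
                        public void handleError(String messageID, JCSMPException e, long timestamp) {
                            System.err.println("Publish failed: " + e);
                        }
                    });

            // One publish; the broker fans it out to every current subscriber.
            TextMessage msg = JCSMPFactory.onlyInstance().createMessage(TextMessage.class);
            msg.setText("{\"productId\":\"P-1001\",\"change\":\"price\"}");
            Topic topic = JCSMPFactory.onlyInstance().createTopic("enterprise/product/P-1001/updated");
            producer.send(msg, topic);

            session.closeSession();
        }
    }

The publisher never enumerates its consumers; adding the 201st subscriber requires no change to this code.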

With the event mesh feature's dynamic message routing across the enterprise, you could have an event getting published from on-prem and consumers in the AWS public cloud. The publisher doesn't have to know where the consumers are. The publisher publishes to the event broker, which could be on-prem, and the broker has the intelligence to route the events to wherever the consumers are, whether that's AWS or Azure. If there's another broker in Azure, it will have the intelligence to route the event dynamically, so the publisher doesn't need to know where the consumers are. Event mesh's ability to have brokers installed across a diverse multicloud and on-prem infrastructure gives us the flexibility to support applications across our enterprise. That is a big advantage.

If you just have one broker trying to do all this routing of events to different subscribers across different infrastructures, it will have a huge impact on performance. With Solace, events are routed based on the load of the broker. It can dynamically adjust the burst capacity and scale based on the events being pushed as well as events that aren't getting consumed. The logic about how to manage the routing and scaling happens dynamically between brokers. 

What is most valuable?

  • The ability to publish data events in real-time to the broker.
  • The ability to add multiple subscribers seamlessly to topics and queues using different formats and protocols.
  • The Solace Admin Utility is pretty intuitive and flexible.

E.g., without Solace Admin, the publisher of each event would have to manually map events to topics, provide access, and do monitoring. Solace Admin provides a UI where any publisher with appropriate access can create their own topics and queues. It can also provide access to subscribers so they can administer their own events.

There is another feature where subscribers can easily discover different topics to consume. If they can find a topic, then they can get access to it through a workflow in Solace.

An advantage of Solace is the way they define their topic hierarchy. With filtering on the topic, we are able to publish data to multiple systems without creating new topic fragments. For instance, if you didn't have that flexibility of the topic hierarchy and the ability to do filtering, then you would have to create new topics for every different combination of data sets. This filtering ability creates a lot of flexibility: you can create generic topics, and subscribers can just apply a filter and consume whatever data they need. That's a powerful feature.

It's very granular. If you define your topic schema with some order, then you can pretty much target whatever data set you need at the lowest level. It provides a lot of flexibility that way, without making any changes upstream.

The solution's topic filtering, in terms of the ease of application design and maintenance, provides us flexibility. The solution makes it easy to keep consuming data on the same topic while changing the logic or filtering. E.g., you may want columns one, two, and five from a topic schema today, then decide the next day that you need columns four and seven.
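
A sketch of what that looks like from the subscriber's side, continuing from a connected JCSMPSession as in the publisher example above (the topic strings are again hypothetical). The consumer changes what it receives purely by swapping its own subscriptions:

    JCSMPFactory f = JCSMPFactory.onlyInstance();

    // Broad filter: every product update, regardless of product ID.
    session.addSubscription(f.createTopic("enterprise/product/*/updated"));

    // Later, narrow to a single product; the publisher is untouched.
    session.removeSubscription(f.createTopic("enterprise/product/*/updated"));
    session.addSubscription(f.createTopic("enterprise/product/P-1001/updated"));

Note that what the broker filters natively is the topic string; selecting particular columns out of the payload, as in the reviewer's example, is then up to the consumer's own logic.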

The solution's event mesh has the ability to make a network of brokers look like a single broker. E.g., if you have consumers on-prem, in AWS, and in Azure, along with some SaaS providers, external customers, or partners, you could have brokers deployed for AWS, for Azure, and outside for external customers, respectively. If the publisher is publishing an event from on-prem, they just publish that one event to the broker deployed on-prem. The on-prem broker will route the request to the AWS broker, the Azure broker, and the external broker seamlessly. This is transparent to the publisher and consumers, which is a positive feature.

What needs improvement?

The discovery part needs improvement. E.g., if I have a topic or queue, I want a common place to look at all the different subscribers who are using them. I think they have added this to the Event Portal, but it's not live yet. If they could have the ability to discover events and the relationship between publisher and subscriber for each topic, that would be a very powerful feature. 

I would like them to design topic and queue schemas, mapping them to the enterprise data structure. We have recommended this feature to Solace. 

For how long have I used the solution?

About eight months.

What do I think about the stability of the solution?

It's very stable. There's high availability. The architecture is pretty robust and can fail over. It's pretty much a nonstop platform as long as it's architected the right way.

We have a small team who is responsible for monitoring the alerts. However, they're not dedicated to Solace as they also look at other platforms. The maintenance is low because you can pretty much automate all the alerts. In a lot of cases, you can also resolve them systematically. 

What do I think about the scalability of the solution?

You can scale it across multiple instances seamlessly. You can add instances without really disrupting operations. It's now on multiple environments, so you can easily add hardware or resources as required. It's very robust in that sense.

We have about eight people using the solution. Their roles are mostly cloud architects and integration architects, as well as some integration developers. 

Right now, we have about 25 applications using Solace, but we anticipate this to increase significantly as we onboard more data sets. By the end of this year, there should potentially be about 100 applications using Solace.

How are customer service and technical support?

We have used their technical support as well as their professional services. 

  • They have a very strong support team. 
  • Some improvement is required with Solace professional services. They really need to drive the solutions for the customers, share best practices, and guide teams toward the right approaches.

Which solution did I use previously and why did I switch?

We use Apache Kafka, which is more of an API gateway for us. Events are a new concept for us; we do more request/reply, API-based integration patterns. A typical event-driven architecture is still a new concept for us that we are trying to evolve.

How was the initial setup?

The initial setup is straightforward. One of the good features about Solace is their documentation and onboarding scripts are very intuitive, easy, and simple to follow.

The broker took three to four hours to deploy.

We had an implementation strategy before we actually deployed it. In terms of:

  • How are we going to create this event mesh across the organization? 
  • Where are we going to deploy this broker? 
  • Which applications are going to onboard as a publisher, or which events? 
  • Defining the topic schema. 

We did spend some time planning for that process in terms of how we were going to do the maintenance of the platform.

What was our ROI?

We have seen ROI because we started with the free version. Even now, we have a basic enterprise license and are getting the business value from its cost.

We have seen at least a 50 percent increase in productivity (compared to using Kafka) when using Solace for the following use cases:

  • Sharing changes in real-time.
  • Onboarding new subscribers.
  • Modifying data sets.

What's my experience with pricing, setup cost, and licensing?

The pricing and licensing are painless. Having a free version of the solution was a big, important part of our decision to go with it. This was the big driver for us to evaluate Solace. We started using it as the free version. When we felt comfortable with the free version, that is when we bought the enterprise version.

For simple use cases, the free version works. Because we wanted the maintenance and access to the technical support, we went with the enterprise license which is pretty cost-efficient compared to other commercial products. Licensing-wise, it's pretty much free if you want to start off with the basic version, then you can expand to other additional features as you feel comfortable and confident. You have that flexibility from a licensing perspective.

Which other solutions did I evaluate?

Once we decided to go with Solace, we then evaluated Kafka and also looked at RabbitMQ. However, this was mostly to ensure we were making the right decision.

Some of Solace's key differentiators versus Kafka and RabbitMQ are its free version with the ability to deploy and try the product. It's very easy to implement the broker and create the topics and queues. It also has helpful documentation and videos.

Kafka has some admin features, but nothing like Solace Admin or the Solace Portal. It has limited UI features, as most administration is through a CLI. The key difference is that you need a specialized skill set to administer and maintain an event broker if you are using open source.

This solution has increased application design productivity compared with competitive or open-source alternatives. The caveat is that it's a concept that is not obvious: event-driven architecture is still evolving, and people are still more comfortable with the traditional way of designing these products. If you purely compare it with open source, this solution has a lot of advantages. In our case, adoption is still slow, primarily because of the skill set and the maturity of our architecture.

The solution management productivity increased by 50 percent when compared to using Kafka.

Compared to Kafka, with our internal use cases, Solace is definitely the right solution now. If we use the telemetry IoT use cases, such as real-time streaming and analytics, then Kafka would probably have an edge. However, for our use cases and the volume that we have, Solace is good for us.

What other advice do I have?

It would be good to think through your event-driven architecture, roadmap, and design.

It is very easy for architects and developers to extend their design and development investment to new applications using this solution. Compared to the legacy integration pattern, there has been a mindset shift because changes now arrive in real-time. The solution has the ability to consume those events in real-time, then process them. While there is a learning curve, it's pretty easy to consume changes.

Biggest lesson learned: think through the whole event-driven architecture and involve other stakeholders. Prepare a good business case and have a good MOC before getting started.

I would rate this solution as an eight (out of 10).

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Sachar De Vries
Head of Infrastructure at Grasshopper
Real User
Top 5 Leaderboard
Guaranteed Messaging allows us to transport messages between on-prem and the cloud without any loss of data

Pros and Cons

  • "Guaranteed Messaging allows for us to transport messages between on-prem and the cloud without any loss of data."
  • "The ease of management could be approved. The GUI is very good, but to configure and manage these devices programmatically in the software version is not easy. For example, if I would like to spin up a new software broker, then I could in theory use the API, but it would require a considerable amount of development effort to do so. There should be a tool, or something that Solace supports, that we could use for this, e.g., a platform like Terraform where we could use infrastructure as code to configure our source appliances."

What is our primary use case?

We use it as a central message bus to interconnect all our applications as well as for the transportation of market data.

We're using the 3560s for the hardware appliances and version 9.3 for the software.

How has it helped my organization?

It has helped a lot in unifying the way we develop software. Having a common message processing protocol has helped a lot with maintainability and how software is designed. It also removes worries about the message bus not performing well; the throughput rates are so high that it works very well.

The solution has increased application design productivity.

It is easy for architects and developers to extend their design and development investment to new applications using this solution. That's never been a roadblock.

What is most valuable?

PubSub+ capabilities make it all work. 

Guaranteed Messaging allows us to transport messages between on-prem and the cloud without any loss of data.
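
A minimal sketch of what Guaranteed Messaging looks like in JCSMP, assuming a connected session and producer as in a standard setup; the queue name and topic are hypothetical. The key points are the persistent delivery mode on the publish side and the client acknowledgement on the consume side:

    // Publisher: mark the message persistent so the broker spools it to disk.
    TextMessage msg = JCSMPFactory.onlyInstance().createMessage(TextMessage.class);
    msg.setText("order-fill ACME 100@9.99");
    msg.setDeliveryMode(DeliveryMode.PERSISTENT);
    producer.send(msg, JCSMPFactory.onlyInstance().createTopic("orders/fills/ACME"));

    // Consumer: bind to a queue and acknowledge only after processing, so a
    // message in flight during a crash or WAN outage is redelivered.
    ConsumerFlowProperties flowProps = new ConsumerFlowProperties();
    flowProps.setEndpoint(JCSMPFactory.onlyInstance().createQueue("Q.ORDER.FILLS")); // hypothetical name
    flowProps.setAckMode(JCSMPProperties.SUPPORTED_MESSAGE_ACK_CLIENT);
    FlowReceiver flow = session.createFlow(new XMLMessageListener() {
        public void onReceive(BytesXMLMessage m) {
            // ... process the message ...
            m.ackMessage(); // the broker may now discard its copy
        }
        public void onException(JCSMPException e) { System.err.println(e); }
    }, flowProps);
    flow.start();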

The solution's topic hierarchy is pretty flexible and works well. It does require some engineering thought in the beginning to ensure that the hierarchy works and you don't shoot yourself in the foot. But if it is architected well, it allows for very nice filtering and subscription based on what you are interested in.

The topic hierarchy's application design and maintenance work very well.

What needs improvement?

The ease of management could be improved. The GUI is very good, but configuring and managing these devices programmatically in the software version is not easy. For example, if I would like to spin up a new software broker, I could in theory use the API, but it would require a considerable amount of development effort. There should be a tool, or something that Solace supports, that we could use for this, e.g., a platform like Terraform where we could use infrastructure as code to configure our Solace appliances.
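
For context, the software broker does expose a REST management API (SEMP v2) that can be scripted, even if it falls short of a purpose-built infrastructure-as-code tool. Below is a hedged sketch of creating a queue that way in plain Java; the management URL, port, credentials, and queue attributes are assumptions based on common broker defaults, not this firm's setup:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    public class CreateQueueViaSemp {
        public static void main(String[] args) throws Exception {
            String body = "{ \"queueName\": \"Q.ALERTS\", \"accessType\": \"exclusive\","
                        + "  \"ingressEnabled\": true, \"egressEnabled\": true }";
            String auth = Base64.getEncoder().encodeToString("admin:admin".getBytes()); // placeholder creds

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://broker.example.com:8080/SEMP/v2/config/msgVpns/default/queues"))
                    .header("Content-Type", "application/json")
                    .header("Authorization", "Basic " + auth)
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();

            // POST creates the object; a 200-level response carries the created queue as JSON.
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }

Wrapping calls like this for every object type is exactly the "considerable amount of development effort" described above, which is why a supported Terraform-style layer would help.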

Monitoring needs improvement. There is no way to get useful statistics out of the machine without implementing our own monitoring solution.

I would like to see improvement in the message promotion rate for software-based brokers.

For how long have I used the solution?

More than 15 years.

What do I think about the stability of the solution?

It is extremely stable. The number of hardware-based interruptions that we have had from the Solace products is fewer than 10 in the last seven to eight years. It has extremely high reliability.

What do I think about the scalability of the solution?

Since it is a hardware-based solution, what you buy is what you get. You can then upgrade it, but we have never had a need to upgrade and scale the solution.

It is used for all our applications. The whole company is using it, including traders, developers, and risk.

How are customer service and technical support?

The technical support is very good. Our questions have always been answered and resolved in a very good way. They seem very knowledgeable about their product and can go into depth about how and why we should implement it in certain ways.

How was the initial setup?

The initial setup was pretty straightforward.

The deployment was part of a larger rollout. For just the physical deployment, it took a day per site.

What about the implementation team?

We had good support from Solace during the deployment and the architecture phase of designing how we would use the product.

What was our ROI?

It has provided us with a return on our investment. It has enabled us to do what we do now.

What's my experience with pricing, setup cost, and licensing?

The pricing and licensing were very transparent and well-communicated by our account manager.

There was no free version when we evaluated it.

Which other solutions did I evaluate?

We compared a few message bus solutions, like TIBCO. At that point, Solace came out ahead, both in throughput and probably cost.

We haven't really used any competitors since. I don't think there are many on the market still. I don't think the open-source solutions really compare that well. Maybe the setup ease with MQ is similar to Solace, but to keep it operational, Solace is much easier. It's a hardware appliance that you can install in a data center, and it just keeps working. That is amazing. It is something that software or open-source solutions don't offer.

What other advice do I have?

It is a product that is more like a switch or router, where you install it, then it keeps on working. The operational maintenance is extremely low.

Read the documentation. Talk to Solace about any questions that you might have to find out the best implementation for whatever it is you need to solve.

I would rate the product as an eight (out of 10).

Which deployment model are you using for this solution?

Hybrid Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Google
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
CK
Manager, IT at a financial services firm with 501-1,000 employees
Real User
Top 10
Makes information flow very seamless; templates and naming conventions make it easy to use

Pros and Cons

  • "The topic hierarchy is pretty flexible. Once you have the subject defined just about anybody who knows Java can come onboard. The APIs are all there."
  • "The product should allow third-party agents to be installed. Currently, it is quite proprietary."

What is our primary use case?

We use it as a message bus for our different systems to connect to Solace on a pub/sub basis. We have about 10 systems interfacing with it. It is used for our critical payment systems which are mostly online payment transactions. There are also messages for streaming and data warehouse info.

We are using the Solace PubSub+ 3530 appliance, and the AMI (Amazon Machine Image) version. We have a mixture of an on-premise deployment and a cloud deployment. The cloud part is more the AMI.

How has it helped my organization?

Because we use it as a message broker, it makes information flow very seamless.

When we did the setup, we established the naming conventions. All we need to do is tell the stakeholders who are interested in using Solace to follow the naming convention. That way, everybody can implement things according to their own timing and schedule. We decouple implementation from the various systems. We just publish things, and whoever is ready to consume does so.

From an application design perspective, it is quite easy for teams to interface with it, and they don't need a lot of rules. Solace has increased application design productivity. It has reduced dependency: anybody can work with it based on their own timeline so, to a certain extent, there's no bottleneck when they use it.

We have also seen an increase in productivity when it comes to solution management, by about 30 percent.

It's very easy for architects and developers to extend their design and development investment to new applications using Solace because it's quite standardized. As long as they follow the template when they do the design and publish according to that template, there is no need to redesign.

What is most valuable?

  • Everything is good in this solution. We only use the PubSub feature. We use a minimum of topics to publish and they are consumed through the Solace message broker.
  • We have a standard template for any new configuration, so it's very easy to manage.
  • The topic hierarchy is pretty flexible. Once you have the subject defined, just about anybody who knows Java can come onboard. The APIs are all there (see the sketch after this list).
  • Topic filtering is easy to use and easy to maintain. Sometimes we filter on a lot of detail in the content, and filtering can also be applied at a higher level. So it's very flexible.
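
As a sketch of how small the onboarding surface is for a Java developer, here is a minimal subscriber. The connection details and the payments/<channel>/<status> topic convention are hypothetical stand-ins for the templates described above:

    import com.solacesystems.jcsmp.BytesXMLMessage;
    import com.solacesystems.jcsmp.JCSMPException;
    import com.solacesystems.jcsmp.JCSMPFactory;
    import com.solacesystems.jcsmp.JCSMPProperties;
    import com.solacesystems.jcsmp.JCSMPSession;
    import com.solacesystems.jcsmp.XMLMessageConsumer;
    import com.solacesystems.jcsmp.XMLMessageListener;

    public class PaymentEventListener {
        public static void main(String[] args) throws JCSMPException, InterruptedException {
            JCSMPProperties props = new JCSMPProperties();
            props.setProperty(JCSMPProperties.HOST, "tcp://broker.example.com:55555"); // placeholder
            props.setProperty(JCSMPProperties.VPN_NAME, "default");
            props.setProperty(JCSMPProperties.USERNAME, "payments-subscriber");
            JCSMPSession session = JCSMPFactory.onlyInstance().createSession(props);
            session.connect();

            XMLMessageConsumer consumer = session.getMessageConsumer(new XMLMessageListener() {
                public void onReceive(BytesXMLMessage msg) {
                    System.out.println("Received on " + msg.getDestination());
                }
                public void onException(JCSMPException e) {
                    System.err.println("Consumer error: " + e);
                }
            });

            // Subscribe to every online-payment event under the naming convention.
            session.addSubscription(JCSMPFactory.onlyInstance().createTopic("payments/online/>"));
            consumer.start();
            Thread.sleep(Long.MAX_VALUE); // keep the process alive to receive events
        }
    }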

What needs improvement?

The product should allow third-party agents to be installed. Currently, it is quite proprietary and doesn't allow them.

For how long have I used the solution?

I have been using Solace PubSub+ Event Broker for three years.

What do I think about the stability of the solution?

The product is pretty stable. Since the time we set it up, there has been no need for us to reboot the appliance. We have had zero downtime.

What do I think about the scalability of the solution?

It's quite easy to scale. We just have to build in another set.

We have plans to extend it into our warehousing systems — those are portals — so that the information can be shared.

How are customer service and technical support?

They are very knowledgeable and their responses are pretty fast.

Which solution did I use previously and why did I switch?

We did not have a previous solution. We went with it because we liked the features that Solace provides and, to date, it has delivered.

The free version allows people to do a proof of concept easily. It helps people when they want to see how easy it is to use. The free version helped us to decide to go with the solution.

How was the initial setup?

The initial setup of Solace was straightforward. We just had to buy the product, install it, have a few templates, and that was it. We were already good to go.

Our deployment took about a week or so. After that, we did integration testing. Once that was okay, we came up with a template for people to follow. The learning curve is quite small.

To administer Solace we only need two people. That's because it has role segregation.

What was our ROI?

In terms of dollar value we have not seen ROI, but in terms of availability, we have, because the product is very stable.

What's my experience with pricing, setup cost, and licensing?

So far, we are okay with the pricing and the licensing.

Which other solutions did I evaluate?

We didn't compare Solace with competing solutions. 

What other advice do I have?

It's a good product to go with if you are interested in uptime, availability, and ease of implementation.

The biggest lesson I have learned from using Solace is that once you get the design correct, everything flows very seamlessly.

Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.