PubSub+ Event Broker Review

Can add multiple subscribers seamlessly to topics and queues using different formats and protocols


What is our primary use case?

We are using Event Broker to publish data across the enterprise, sharing transaction data updates in real time and, in some cases, telemetry data.

We do use event mesh, but our use is limited. The reason for that is we have our publishers and consumers on-prem, while we have our applications on AWS, Azure, and SaaS. It's a multicloud hybrid infrastructure, but the majority is still on-prem. We are slowly moving to AWS, Azure, and SaaS. As we expand into AWS and Azure, event mesh will be a key feature that we would like to leverage.

We are using the latest version.

How has it helped my organization?

When we publish a product change, it is updated across the enterprise. We have 100 to 200 consumers, which are basically the applications interested in product changes in real time. We can publish these product changes to Solace Event Broker with one update. Then, all 100 to 200 consumers can be listening to this topic or queue. Any time a change happens, it's pushed to this topic; the consumers have access to it and can take whatever actions they need based on those changes. This all happens in real time.
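
For illustration, here is a minimal sketch of this publish/subscribe pattern using the open-source paho-mqtt client (Solace brokers support MQTT natively). The broker address, topic, and payload below are hypothetical, not our actual setup:

    # Minimal pub/sub sketch against an MQTT-capable broker (paho-mqtt 1.x).
    # Broker address, topic, and payload are hypothetical.
    import json
    import paho.mqtt.client as mqtt

    BROKER_HOST = "solace.example.com"    # hypothetical broker address
    TOPIC = "acme/product/updated/12345"  # hypothetical topic

    # Publisher: one update, pushed once to the broker.
    pub = mqtt.Client()
    pub.connect(BROKER_HOST, 1883)
    pub.publish(TOPIC, json.dumps({"productId": "12345", "price": 19.99}), qos=1)
    pub.disconnect()

    # Subscriber: each of the 100 to 200 consumers runs its own process
    # along these lines, and the broker pushes every matching change to it.
    def on_message(client, userdata, msg):
        change = json.loads(msg.payload)
        print(f"Product change on {msg.topic}: {change}")

    sub = mqtt.Client()
    sub.on_message = on_message
    sub.connect(BROKER_HOST, 1883)
    sub.subscribe("acme/product/updated/#", qos=1)  # '#' matches all sub-levels
    sub.loop_forever()

The point of the sketch is the decoupling: the publisher never enumerates its consumers, and adding a 201st subscriber is just another subscription against the broker.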

We used more point-to-point integration in the past. This solution reduces the latency to access changes in real-time and the effort required to onboard a new subscriber. It also reduces the maintenance of each of those interfaces because now the publisher and subscribers are decoupled. Event Broker handles all the communication and engagement. We can just push one update, then we don't have to know who is consuming it and what's happening to that publication downstream. It's all done by the broker, which is a huge benefit of using Event Broker.

With the event mesh feature's dynamic message routing across the enterprise, you could have an event getting published from on-prem and consumers in the AWS public cloud. The publisher doesn't have to know where the consumers are. The publisher publishes to the event broker, which could be on-prem, and the broker has the intelligence to route the events to wherever those consumers are, whether that's AWS or Azure. If there's another broker in Azure, then the mesh will route to it dynamically, so the publisher doesn't need to know where the consumers are. Event mesh's ability to have brokers installed across a diverse multicloud and on-prem infrastructure gives us the flexibility to support applications across our enterprise. That is a big advantage.

If you just have one broker trying to do all this routing of events to different subscribers across different infrastructures, it will have a huge impact on performance. With Solace, events are routed based on the load of the broker. It can dynamically adjust the burst capacity and scale based on the events being pushed as well as events that aren't getting consumed. The logic about how to manage the routing and scaling happens dynamically between brokers. 

What is most valuable?

  • The ability to publish data events in real-time to the broker.
  • The ability to add multiple subscribers seamlessly to topics and queues using different formats and protocols.
  • The Solace Admin Utility is pretty intuitive and flexible.

E.g., if you had to configure these manually, then the publisher of each event would have to manually map events to topics, provide access, and do monitoring. All these activities would have to be done by hand without the Solace Admin Utility. The utility provides a UI where any publisher with appropriate access can create their own topics and queues. It can also provide access to subscribers so they can administer their own events.

There is another feature where subscribers can easily discover different topics to consume. If they can find a topic, then they can get access to it through the workflow in Solace.

An advantage of Solace is the way they define their topic hierarchy. With filtering on the topic, we are able to publish data to multiple systems without creating new topic fragments. For instance, if you didn't have the flexibility of the topic hierarchy and the ability to do filtering, then you would have to create new topics for every different combination of data sets. This filtering ability creates a lot of flexibility: you can create generic topics, and subscribers can just apply a filter and consume whatever data they need. That's a powerful feature.

It's very granular. If you define your topic schema with some order, then you can filter down to pretty much any data set at the lowest level. It provides a lot of flexibility that way, without making any changes upstream.

The solution's topic filtering, in terms of ease of application design and maintenance, gives us flexibility. It makes it easy to keep consuming data on the same topic while changing the logic or filtering. E.g., you may want columns one, two, and five from a topic schema today, then decide the next day that you need columns four and seven; the subscriber can make that change on its own.
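
As a rough sketch of what that filtering looks like from the subscriber side, here is the same idea expressed with MQTT wildcards ('+' matches exactly one topic level, '#' matches all remaining levels; Solace's native SMF protocol has the equivalent '*' and '>' wildcards). The topic hierarchy and broker address are hypothetical:

    # Subscriber-side filtering over a hypothetical hierarchy:
    #   <region>/<object>/<event>/<entity-id>
    # MQTT wildcards: '+' = exactly one level, '#' = all remaining levels.
    # (Solace's native SMF protocol uses '*' and '>' for the same idea.)
    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        print(f"{msg.topic}: {msg.payload.decode()}")

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("solace.example.com", 1883)  # hypothetical broker address

    # Today: only price updates, for any product, in the EU region.
    client.subscribe("eu/product/price-updated/+", qos=1)

    # Tomorrow: every product event in every region. Only the filter
    # changes; the publisher and the topic tree stay exactly as they were.
    client.unsubscribe("eu/product/price-updated/+")
    client.subscribe("+/product/#", qos=1)

    client.loop_forever()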

The solution's event mesh has the ability to make a network of brokers look like a single broker. E.g., if you have consumers on-prem, in AWS, and in Azure, along with some SaaS providers, external customers, or partners, you could have brokers deployed for AWS, for Azure, and outside for external customers, respectively. If the publisher is publishing an event from on-prem, then they just publish that one event to the broker deployed on-prem. The on-prem broker will route the request to the AWS broker, the Azure broker, and the external broker seamlessly. This is transparent to the publisher and consumers, which is a positive feature.

What needs improvement?

The discovery part needs improvement. E.g., if I have a topic or queue, I want a common place to look at all the different subscribers who are using them. I think they have added this to the Event Portal, but it's not live yet. If they could have the ability to discover events and the relationship between publisher and subscriber for each topic, that would be a very powerful feature. 

I would also like the ability to design topic and queue schemas and map them to the enterprise data structure. We have recommended this feature to Solace.

For how long have I used the solution?

About eight months.

What do I think about the stability of the solution?

It's very stable. There's high availability. The architecture is pretty robust and can fail over. It's pretty much a nonstop platform as long as it's architected the right way.

We have a small team that is responsible for monitoring the alerts. However, they're not dedicated to Solace, as they also look after other platforms. The maintenance is low because you can pretty much automate all the alerts. In a lot of cases, you can also resolve them systematically.

What do I think about the scalability of the solution?

You can scale it across multiple instances seamlessly. You can add instances without really disrupting operations. Because it runs across multiple environments, you can easily add hardware or resources as required. It's very robust in that sense.

We have about eight people using the solution. Their roles are mostly cloud architects and integration architects, as well as some integration developers. 

Right now, we have about 25 applications using Solace, but we anticipate this to increase significantly as we onboard more data sets. By the end of this year, there should potentially be about 100 applications using Solace.

How are customer service and technical support?

We have used their technical support as well as their professional services. 

  • They have a very strong support team. 
  • Some improvement is required with Solace professional services. Professional services really need to drive solutions for customers and share best practices. They also need to guide teams toward the right approach.

Which solution did I use previously and why did I switch?

We use Apache Kafka, which for us is more of an API gateway. Events are a new concept for us; we do more request/reply, API-based integration patterns. We also have some typical event-driven architecture, but this is still a new concept for us that we are trying to evolve.

How was the initial setup?

The initial setup is straightforward. One of the good features about Solace is their documentation and onboarding scripts are very intuitive, easy, and simple to follow.

The broker took three to four hours to deploy.

We had an implementation strategy before we actually deployed it, in terms of:

  • How are we going to create this event mesh across the organization? 
  • Where are we going to deploy this broker? 
  • Which applications are going to onboard as a publisher, or which events? 
  • Defining the topic schema. 

We did spend some time planning for that process, in terms of how we were going to do the maintenance of the platform. A hypothetical sketch of what "defining the topic schema" can look like follows.
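
For illustration only, the hierarchy levels and names below are hypothetical, not our actual schema:

    # Hypothetical topic-schema convention, ordered general-to-specific so
    # that wildcard filtering stays useful at every level:
    #   <domain>/<object>/<verb>/<version>/<region>/<entity-id>
    def product_topic(verb: str, region: str, product_id: str,
                      version: str = "v1") -> str:
        return f"erp/product/{verb}/{version}/{region}/{product_id}"

    print(product_topic("updated", "eu", "12345"))  # erp/product/updated/v1/eu/12345
    print(product_topic("created", "us", "98765"))  # erp/product/created/v1/us/98765

Agreeing on this ordering up front is what later makes subscriber-side filtering cheap, since a consumer can pin any prefix and wildcard the rest.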

What was our ROI?

We have seen ROI because we started with the free version. Even now, with a basic enterprise license, we are getting business value relative to its cost.

We have seen at least a 50 percent increase in productivity (compared to using Kafka) when using Solace for the following use cases:

  • Sharing changes in real-time.
  • Onboarding new subscribers.
  • Modifying data sets.

What's my experience with pricing, setup cost, and licensing?

The pricing and licensing are painless. Having a free version of the solution was a big, important part of our decision to go with it. This was the big driver for us to evaluate Solace. We started using it as the free version. When we felt comfortable with the free version, that is when we bought the enterprise version.

For simple use cases, the free version works. Because we wanted the maintenance and access to technical support, we went with the enterprise license, which is pretty cost-efficient compared to other commercial products. Licensing-wise, it's pretty much free if you want to start off with the basic version; you can then expand to additional features as you feel comfortable and confident. You have that flexibility from a licensing perspective.

Which other solutions did I evaluate?

Once we decided to go with Solace, we then evaluated Kafka and also looked at RabbitMQ. However, this was mostly to ensure we were making the right decision.

One of Solace's key differentiators versus Kafka and RabbitMQ is its free version, with the ability to deploy and try the product. It's very easy to implement the broker and create the topics and queues. It also has helpful documentation and videos.

Kafka has some admin features, but nothing like Solace Admin or the Solace Portal. It has limited UI features, as most administration is through a CLI. The key difference is that you need a specialized skill set to administer and maintain an event broker if you are using an open-source product.

This solution has increased application design productivity compared with competitive or open-source alternatives. The caveat is that it's a concept that is not yet obvious: event-driven architecture is still evolving, and people are still comfortable with the traditional way of designing these products. If you purely compare it with open source, this solution has a lot of advantages. In our case, adoption is still slow, primarily because of the skill set and the maturity of our architecture.

Solution management productivity increased by 50 percent compared to using Kafka.

Compared to Kafka, for our internal use cases, Solace is definitely the right solution now. For telemetry and IoT use cases, such as real-time streaming and analytics, Kafka would probably have an edge. However, for our use cases and the volume that we have, Solace is good for us.

What other advice do I have?

It would be good to think through your event-driven architecture, roadmap, and design.

It is very easy for architects and developers to extend their design and development investment to new applications using this solution. Compared to the legacy integration pattern, there has been a mindset shift because the changes are coming in real time. The solution has the ability to consume those events in real time, then process them. While there is a learning curve there, it's pretty easy to consume changes.

Biggest lesson learnt: Think through the whole event-driven architecture and involve other stakeholders. Prepare a good business case and have a good PoC before getting started.

I would rate this solution as an eight (out of 10).

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.