Choose a specific buying criterion from the list and see what real users have to say about it.

Apache Kafka Room for Improvement

Deputy General Manager, DevOps Manager at a tech services company
GUI for Kafka infrastructure monitoring and deployment.
Java Architect at a tech vendor with 51-200 employees
Too much dependency on ZooKeeper, and leader election is still the bottleneck for Kafka implementations.
Enterprise Architect at a logistics company with 1,001-5,000 employees
A good, free monitoring tool from the Apache foundation would be great for Apache Kafka.
Technical Lead/Project Manager (Consulting Apple Inc) at a tech services company
I would like to see a more user-friendly GUI.
Lead Engineer at a retailer with 1,001-5,000 employees
This product guarantees at-least-once delivery. We have filed a JIRA request for features such as at-most-once delivery, to remove duplicate message consumption.
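At-least-once delivery means a consumer can see the same message more than once after a retry, so until the broker offers stronger semantics, duplicates are typically filtered on the consumer side. A minimal sketch of that idea, assuming each message carries a unique ID (a hypothetical field; a real deployment might key on topic/partition/offset instead):

```python
def consume_without_duplicates(messages, processed_ids, handler):
    """Process each message at most once by tracking seen IDs.

    `processed_ids` stands in for a durable store (e.g. a database
    table updated in the same transaction as the handler's side
    effects); a plain set is used here only for illustration.
    """
    for msg in messages:
        if msg["id"] in processed_ids:
            continue  # redelivered duplicate: drop it
        handler(msg)
        processed_ids.add(msg["id"])

# At-least-once delivery may hand us message 2 twice:
deliveries = [
    {"id": 1, "value": "a"},
    {"id": 2, "value": "b"},
    {"id": 2, "value": "b"},  # duplicate redelivery
    {"id": 3, "value": "c"},
]
seen = set()
out = []
consume_without_duplicates(deliveries, seen, lambda m: out.append(m["value"]))
print(out)  # ['a', 'b', 'c'] -- each message handled exactly once
```

The trade-off is the cost of the ID store; bounding it (e.g. per-partition high-water marks) is what makes this practical at scale.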
Java Developer at a media company with 1,001-5,000 employees
It’s pretty easy to use for now. I haven’t had any difficulty or problems that I can complain about. Maybe they can add a UI to configure queues and to display statistics about data stores.
Technical Architect at a tech vendor with 51-200 employees
As an open-source project, Kafka is still fairly young and has not yet built out the stability and features that other open-source projects have acquired over many years. If done correctly, Kafka could also take over the stream-processing space that technologies such as Apache Storm cover. Currently, in the big/fast data integration world, you need to piece together many different open-source technologies. For example, to create a reliable, fault-tolerant stream-processing system that ingests data, you need:
* a producer service
* an event/message buffer such as Kafka or a message queue
* a stream-processing consumer such as Spark, Flink, Storm, etc.
* something to help facilitate the ingestion into target datasources, such as Flume or some customized concoction.
This is simply to ingest the data and does not necessarily account for the analytical pieces, which may consist of Spark ML, SystemML, ElasticSearch, Mahout, etc. What I'm getting at is basically the need for a Spring framework of big data.
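The four pieces the reviewer lists can be sketched end to end with pure-Python stand-ins. This is only an illustration of how the stages fit together; the function names are hypothetical, and in a real pipeline each slot would be a Kafka producer, a Kafka topic, a Flink/Spark/Storm job, and a sink connector such as Flume.

```python
from collections import deque

def producer_service(records, buffer):
    """Stage 1: the producer pushes raw events into the buffer."""
    for record in records:
        buffer.append(record)

def stream_processor(buffer, sink):
    """Stage 3: the stream-processing consumer drains the buffer,
    transforms each event, and hands the result to whatever
    facilitates ingestion into the target datasource (stage 4)."""
    while buffer:
        event = buffer.popleft()
        sink(event.upper())  # trivial stand-in for real processing

warehouse = []               # target datasource (stand-in)
queue = deque()              # stage 2: the event/message buffer (Kafka's slot)
producer_service(["click", "view"], queue)
stream_processor(queue, warehouse.append)
print(warehouse)  # ['CLICK', 'VIEW']
```

The reviewer's point is that today each of these arrows is a separate project with its own configuration and failure modes, which is exactly the integration burden a unified framework would remove.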