Apache Kafka: Room for Improvement

Michael Silvi
Senior Software Engineering Consultant at a tech services company with 51-200 employees
Kafka requires non-trivial DevOps expertise to deploy in production at scale. The organization needs to understand both ZooKeeper and Kafka, and should consider additional tools, such as MirrorMaker, so that it can survive an availability zone or a region going down. Shifting availability concerns onto Kafka means that Kafka itself cannot be allowed to go down. It's important to understand the partitioning model and replication needs before relying on it for critical business functions. I'd suggest using it behind a feature toggle on a non-critical path in production and learning from failures before depending on it. While Kafka is built to scale, that does not mean applications can start as many consumers or producers as they like without considering how the Kafka brokers will perform. Scaling out brokers needs to be considered before publishing millions of messages.
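To illustrate the replication point, here is a minimal producer sketch. It assumes a three-broker cluster and a hypothetical "orders" topic created with replication.factor=3 and min.insync.replicas=2; the broker addresses and topic name are placeholders, not details from the review.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class DurableProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Hypothetical broker addresses for a three-broker cluster.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092,broker3:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Wait for all in-sync replicas to acknowledge before a write is considered successful.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        // Retry transient broker failures instead of dropping messages.
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.toString(Integer.MAX_VALUE));

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // "orders" is a hypothetical topic, assumed to be created with
            // replication.factor=3 and min.insync.replicas=2.
            producer.send(new ProducerRecord<>("orders", "order-42", "created"));
            producer.flush();
        }
    }
}
```

With acks=all, the leader only acknowledges a write once all in-sync replicas have it, which is what allows a properly replicated topic to tolerate losing a broker.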
Sean Hickey
Solutions Architect at a consultancy with 1,001-5,000 employees
The GUI tools for monitoring and support are still very basic and not very rich. There is no help in determining a shard key for performance.
Kevin Quon
Technical Architect at a tech vendor with 51-200 employees
As an open-source project, Kafka is still fairly young and has not yet built out the stability and features that other open-source projects have acquired over many years. If done correctly, Kafka could also take over the stream-processing space that technologies such as Apache Storm cover. Currently, in the big/fast data integration world, you need to piece together many different open-source technologies. For example, to create a reliable, fault-tolerant stream processing system that ingests data, you need:
* a producer service
* an event/message buffer, such as Kafka or a message queue
* a stream processing consumer, such as Spark, Flink, Storm, etc.
* something to help facilitate the ingestion into target data sources, such as Flume or some customized concoction.
This is simply to ingest the data and does not necessarily account for the analytical pieces, which may consist of Spark ML, SystemML, Elasticsearch, Mahout, etc. What I'm getting at is basically the need for a "Spring framework" of big data (a sketch of the consumer piece in this chain follows below).
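As an illustration of the consumer piece in that chain, here is a minimal sketch. The "events" topic and the writeToSink method are hypothetical placeholders standing in for the Flume-style glue code; none of these names come from the review.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class IngestConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "ingest-pipeline");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events"));
            while (true) {
                // Pull the next batch of records buffered in Kafka.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Hand off to the downstream sink (Flume, a database writer, etc.);
                    // writeToSink is a hypothetical placeholder for that glue code.
                    writeToSink(record.key(), record.value());
                }
            }
        }
    }

    private static void writeToSink(String key, String value) {
        System.out.printf("sink <- %s: %s%n", key, value);
    }
}
```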
FounderC32bc
Founder, CEO at a tech vendor with 1-10 employees
The product is good, but it needs implementation effort and ongoing support. The whole cloud engagement model has made the adoption of Kafka easier, due to PaaS offerings (Amazon Kinesis, a fully managed service from AWS).
HeadOfEn9a94
Head of Engineering
Stability of the API and the technical support could be improved. The Kafka API changes quite radically between releases. There are many new improvements, and that's good, but the inherent cost of adapting to a new version of the platform was a concern at the time. The documentation was sometimes misleading, since it described features from a newer version of the API rather than the one we were using.
Ivan Dyachkov
Team Lead at a financial services firm with 1,001-5,000 employees
The standard Kafka Java library, which is shipped with the product, is too complex for inexperienced users. At my company, engineering teams ended up writing wrapper libraries to solve complex issues. Kafka client libraries in general are complex, regardless of language. This is the price Kafka users have to pay for having simple, yet robust, server-side code. What could be improved is the hard dependency on ZooKeeper; work in this direction has already been started, though. Overall, the project is moving forward at a very good pace.
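As an illustration of the kind of wrapper library mentioned above, here is a minimal sketch; SimpleKafkaSender is a hypothetical class name, not an actual library from the review.

```java
import java.util.Properties;
import java.util.concurrent.Future;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

/** Hypothetical in-house wrapper that hides client configuration behind one constructor call. */
public class SimpleKafkaSender implements AutoCloseable {
    private final KafkaProducer<String, String> producer;

    public SimpleKafkaSender(String bootstrapServers) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        this.producer = new KafkaProducer<>(props);
    }

    /** Sends a message and returns the broker acknowledgement future. */
    public Future<RecordMetadata> send(String topic, String key, String value) {
        return producer.send(new ProducerRecord<>(topic, key, value));
    }

    @Override
    public void close() {
        producer.close();
    }
}
```

The point of such a wrapper is that application teams call send(topic, key, value) and never touch the Properties-based configuration the standard client exposes.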
JavaDeve0c6d
Java Developer at a media company with 1,001-5,000 employees
It’s pretty easy to use for now. I haven’t had any difficulty or problems that I can complain about. Maybe they can add a UI to configure queues and to display statistics about data stores.
kafkakid
Lead Engineer at a retailer with 1,001-5,000 employees
This product guarantees at-least-once delivery. We have filed a JIRA request for features such as at-most-once delivery, to eliminate duplicate message consumption.
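One common consumer-side approximation of at-most-once behaviour today is to commit offsets before processing, so a crash skips records rather than redelivering them. A minimal sketch follows, assuming a hypothetical "events" topic; this is a workaround, not a built-in Kafka feature.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AtMostOnceConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "at-most-once-demo");
        // Disable auto-commit so the commit point can be controlled explicitly.
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                if (records.isEmpty()) {
                    continue;
                }
                // Commit offsets BEFORE processing: if the process crashes mid-batch,
                // the records are skipped rather than redelivered (at-most-once).
                consumer.commitSync();
                for (ConsumerRecord<String, String> record : records) {
                    process(record.value());
                }
            }
        }
    }

    private static void process(String value) {
        System.out.println("processed: " + value);
    }
}
```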
VINOD K KUMARAN
Technical Lead/Project Manager (Consulting, Apple Inc) at a tech services company with 1,001-5,000 employees
I would like to see a more user-friendly GUI.
Chandra Keerthy
Principal Software Architect at a tech services company with 11-50 employees
The management tools are still maturing. When we have thousands of topics, it is hard to visualize them.
Dori Waldman
Big Data Lead at a marketing services firm with 51-200 employees
* Maintenance: Sometimes brokers disconnect and there are repartitioning issues.
* A built-in monitoring application for the Kafka infrastructure.
* A UI for Kafka would also be great.
SeniorJa44a7
Senior Java Consultant at a tech services company with 501-1,000 employees
It’s perfect for our requirements.
Jyothish Kalavoor Parambil
Hadoop Technical Lead (Assistant Consultant) at a tech services company with 10,001+ employees
* It needs a separate cluster and a separate administrator to manage the Kafka cluster, which adds extra cost.
* It is challenging when data is moved to a mirror cluster for disaster recovery, because it doesn't keep the offset (a workaround is sketched below).
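One partial workaround when offsets do not carry over to the mirror cluster is to reposition consumers by timestamp using KafkaConsumer.offsetsForTimes after failover. The sketch below assumes a hypothetical "events" topic with two partitions, a placeholder mirror-cluster address, and a recorded failover time; none of these names come from the review.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class FailoverSeekSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "mirror-broker1:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "dr-consumer");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // Assume the application recorded roughly when the primary cluster was lost.
        long failoverTimestampMs = Instant.now().minusSeconds(300).toEpochMilli();

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            List<TopicPartition> partitions = List.of(
                    new TopicPartition("events", 0),
                    new TopicPartition("events", 1));
            consumer.assign(partitions);

            Map<TopicPartition, Long> query = new HashMap<>();
            for (TopicPartition tp : partitions) {
                query.put(tp, failoverTimestampMs);
            }
            // Translate the wall-clock failover time into per-partition offsets on the
            // mirror cluster, since the primary cluster's offsets do not carry over.
            Map<TopicPartition, OffsetAndTimestamp> offsets = consumer.offsetsForTimes(query);
            offsets.forEach((tp, ot) -> {
                if (ot != null) {
                    consumer.seek(tp, ot.offset());
                }
            });
            consumer.poll(Duration.ofMillis(500)); // resume processing from here
        }
    }
}
```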
Piyush Ranjan
SDET II at a tech services company with 5,001-10,000 employees
One improvement relates to OS memory management: when there are too many partitions, Kafka runs into memory issues. Although this is a very rare scenario, it can happen.
Mehul Jani
Deputy General Manager, DevOps Manager at a comms service provider
* A GUI for Kafka infrastructure monitoring and deployment.
Enterpri157e
Enterprise Architect at a logistics company with 1,001-5,000 employees
A good, free monitoring tool for Apache Kafka, from the Apache Foundation itself, would be great.
Sendil S
Java Architect at a tech vendor with 51-200 employees
There is too much dependency on ZooKeeper, and leader election is still the bottleneck for Kafka implementations.
