Just got back from CrunchConf 2018. A good panel of speakers and an interesting conference. Lots of food and drinks. Good atmosphere, helpful organizers. Fun times, good memories. The conference was a blast, with most of my questions hitting the top votes, helped along a little by the community.
While still in conference mode, I decided to share my thoughts on the presentations, at least on those that intrigued me and on those where my questions got the top votes. All in all, I would like to praise the good presentations, the ones devoid of hype and commercialism. There is a lot of hype in today's world around Big Data projects, with the naive jumping ship to the next cool project.
"SoftwareMill tested a few MQs with Artemis & Kafka tied performance-wise. Why Kafka over a scalable pedigree MQ, with richer connectivity (amqp, mqtt, JMS) and semantics? (e.g. exclusive/last-value queues)" – c0d3 guru @ CrunchConf 2018
My colleague and I attended Tim Berglund's "Kafka as a Platform: the Ecosystem from the Ground Up" presentation. Let's say it was a nice presentation, but one that did not make clear where Kafka fits in the MQ landscape. I thought I'd clear that up below.
Internally, to decouple our systems and provide a way to transport messages from one system to another, we first went with ActiveMQ, but decided to steer toward Apache Artemis after seeing on the project's roadmap that the internal engine would switch to Artemis.
Note that we did our own internal evaluation of the messaging solutions, but also read about the experience of others. SoftwareMill is just one of them, and they went the extra mile of also publishing the code of their tests.
If you don't know Artemis: it's the HornetQ code donation to the Apache Software Foundation. What's HornetQ?! Probably the fastest MQ engine of the "old" Java EE world before micro-services kicked in, born in the financial industry where fast means FAST. What's so special about MQs in general? The domain.
Most people these days, and I'm sorry to say this, have no clue what the "messaging domain" (or messaging models in general) is. It's like nobody reads books anymore; we rely only on blog articles and, by extension, on StackOverflow to do our coding or architecture.
So why am I ranting here about messaging models? Because there are two of them: P2P (meaning queues) and Pub/Sub (meaning topics). And in most pedigree MQs the broker is the one responsible for keeping the "offsets" of the subscribers to topics and for dispatching messages to consumers.
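The two models are easy to sketch in a few lines of toy code (an in-memory simulation of my own, not a real broker): a queue delivers each message to exactly one of its consumers, while a topic fans every message out to all subscribers.

```python
from itertools import cycle

class Queue:
    """P2P model: each message goes to exactly one consumer (round-robin here)."""
    def __init__(self, consumers):
        self.consumers = cycle(consumers)

    def send(self, msg):
        next(self.consumers)(msg)

class Topic:
    """Pub/Sub model: each message is delivered to every subscriber."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, fn):
        self.subscribers.append(fn)

    def publish(self, msg):
        for fn in self.subscribers:
            fn(msg)

a, b = [], []
q = Queue([a.append, b.append])
for m in ("m1", "m2", "m3", "m4"):
    q.send(m)
# Work is split: a == ["m1", "m3"], b == ["m2", "m4"]

x, y = [], []
t = Topic()
t.subscribe(x.append)
t.subscribe(y.append)
t.publish("event")
# Both subscribers see it: x == ["event"], y == ["event"]
```

Same messages, two very different delivery contracts; that contract is what the "messaging domain" is all about.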
So what's special about this compared to Kafka?! Simplicity. Down the line, the pedigree MQs save a lot of development time, as they have been built with the following aspects in mind:
- a truckload of connectors (mqtt, amqp, openwire, JMS, stomp, REST even);
- a broker of technology type A can talk over the above protocols with a broker of technology type B;
- non-uniform (polygon) clusters, with filtering on the edges, can be created to allow for very diverse networking or filtering of messages to/from a destination;
- they support both messaging models (P2P and Pub/Sub) but also allow extensions such as exclusive/last-value queues, virtual destinations/subscribers, message grouping, scheduled messages, dead-letter queues, etc.;
- the "Camel in Action" book on EIPs (enterprise integration patterns) is written against a virtual idea of an MQ producing "messages" that pass through the framework (a testimony to the decoupled architectures that existed before our hyped days of "Big Data");
- they are delivered as supported products by more than one company (Red Hat ships Artemis as AMQ; WildFly integrates Artemis as its messaging service).
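To make the "extensions" bullet concrete, here is a toy sketch (again in-memory, assumptions mine, no real broker) of last-value queue semantics: messages carry a key, and the queue retains only the newest message per key, so a slow or late consumer sees current state instead of the whole history. Brokers such as Artemis implement this server-side.

```python
class LastValueQueue:
    """Toy last-value queue: retain only the latest message per key."""
    def __init__(self):
        self._latest = {}  # key -> newest message for that key

    def send(self, key, msg):
        self._latest[key] = msg  # a newer message supersedes the older one

    def drain(self):
        # Deliver at most one (latest) message per key, then clear.
        pending, self._latest = self._latest, {}
        return list(pending.values())

lvq = LastValueQueue()
lvq.send("EUR/USD", 1.16)
lvq.send("GBP/USD", 1.30)
lvq.send("EUR/USD", 1.17)  # supersedes the 1.16 quote

drained = lvq.drain()
# The consumer gets only the latest value per key: [1.17, 1.30]
```

This is the financial-industry use-case HornetQ grew up in: a price ticker where only the freshest quote per instrument matters.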
Now one might say Kafka has all that. Sure. But Confluent is the only provider of connectors, along with some that are "community supported". It relies on a smart client keeping track of an offset. It carries a ZooKeeper dependency one must be aware of, for the smart clients of course. It's a Pub/Sub acting as a distributed log. When did putting a square plug in a round hole ever work? Unless you make the hole bigger and the square smaller, sure.
So why am I saying "simplicity"? Because a pedigree MQ provides that simplicity by managing "the offsets", handling that complexity on behalf of the consumer or subscriber talking to it. Try implementing a queue on top of Kafka with all the bells and whistles. Sure, you can show me this from SoftwareMill again. But seriously?
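The "who manages the offsets" difference can also be sketched (a toy in-memory model of my own, not either product's API): with a broker-managed model the consumer stays dumb and just acknowledges, while a Kafka-style smart client must track, advance, and durably commit its own position in the log.

```python
class Broker:
    """Pedigree-MQ style: offsets live broker-side; the consumer just acks."""
    def __init__(self, log):
        self.log = log
        self.offsets = {}  # subscriber name -> next position, kept by the broker

    def receive(self, subscriber):
        pos = self.offsets.get(subscriber, 0)
        return self.log[pos] if pos < len(self.log) else None

    def ack(self, subscriber):
        # Only on ack does the broker advance this subscriber's position.
        self.offsets[subscriber] = self.offsets.get(subscriber, 0) + 1

class SmartClient:
    """Kafka style: the log is a dumb append-only array; the client owns
    its offset and must commit it somewhere durable, or re-read/skip on crash."""
    def __init__(self, log):
        self.log = log
        self.offset = 0  # client-side state

    def poll(self):
        if self.offset >= len(self.log):
            return None
        msg = self.log[self.offset]
        self.offset += 1  # the client is responsible for committing this
        return msg

log = ["m1", "m2", "m3"]
mq = Broker(log)
first = mq.receive("s1")   # "m1"
mq.ack("s1")
second = mq.receive("s1")  # "m2" - the broker remembered where s1 was

client = SmartClient(log)
client.poll()              # "m1"; losing self.offset now means duplicates or gaps
```

Both work; the difference is where the bookkeeping (and the failure modes of that bookkeeping) lives.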
Seriously speaking, I do respect Tim's experience; he previously worked for DataStax and put together a great course on Cassandra. He is a teacher nonetheless. Furthermore, I appreciate what the team at Confluent is doing. And Apache Kafka, taken in isolation, is a great project.
But before starting a presentation, and when asked a serious question, I would not dismiss or dodge a good architectural and integration question: "Why would I choose Kafka over a full-blown/pedigree MQ?" Because in most cases these types of decisions cost money. I would start this way:
- I would explain the messaging domain and the two messaging models;
- I would explain what Kafka (a good Pub/Sub system) is and what it is not (not an MQ, so it lacks much of that diversity);
- I would advise people to read and judge on their own, and not hype it to death that they should all use Kafka because of its performance or the new KSQL (SQL support for message selection based on headers, though not KSQL-style analytics, has been in Artemis for years now);
- I would present the pedigree of the old solutions (2005 and earlier) versus the one I'm about to talk about (Kafka's first release was in 2011), as a starting point for a better discussion of the advantages and disadvantages of each;
- then I’d do my usual commercial presentation of a product I’d like developers to eat out of my hand;
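On the message-selection point above: in JMS terms a selector is an SQL-92-style expression, something like `region = 'EU' AND priority > 5`, that the broker evaluates against message headers so a consumer only ever sees matching messages. Here is a toy client-side equivalent of that filtering (my own sketch; a real broker like Artemis parses the selector string and filters server-side):

```python
messages = [
    {"headers": {"region": "EU", "priority": 7}, "body": "order-1"},
    {"headers": {"region": "US", "priority": 9}, "body": "order-2"},
    {"headers": {"region": "EU", "priority": 3}, "body": "order-3"},
]

# Toy stand-in for the JMS selector "region = 'EU' AND priority > 5".
selector = lambda h: h["region"] == "EU" and h["priority"] > 5

selected = [m["body"] for m in messages if selector(m["headers"])]
# Only the matching message is delivered: ["order-1"]
```

Note the difference from KSQL: this selects which messages a consumer receives; it does not run analytics over the stream.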
How did Tim answer the question? He dodged it, repeating what we all know: that it's a distributed log. We all know that. It's everywhere: on the site, on StackOverflow, on Reddit. Just google Kafka and "distributed log" will pop up in the first 128 words. On the "performance" front the dodging continued with a statement that "performance tests can prove anything".
Not really, no. It's easy, as a presenter, to dodge such hard questions. But it's even harder to accept a few realities when pushing a product to the market, hyping up all the developers about this new "event driven" propeller engine that is Kafka, while ignoring the niche place it has in the EDA (event driven architecture) domain, whose concepts were born long, loooong before Kafka even existed.
I don't know what's up with this "Big Data" community, hyping like crazy over random pieces of technology (with some even trying to do MQs over document stores like MongoDB) just for the fun of it all. It's a bit crazy, contradicting even the popular words, "pick the best tool for the job", with thousands of dollars poured into development teams trying to make a Pub/Sub system behave like a queue, for which the behavioral requirements are different.
Dear random architect out there, do yourself a favor. Before trying to put Kafka in the picture, or any other amazing, hyped piece of technology, read the books and understand the concepts of "messaging" in full, so you can at least make an informed decision.
Afterwards, position the messaging solution of your choice against the others on an X/Y quadrant, picking the most important X and Y axes for your use-case. Then think about your evolving messaging requirements and decide whether Kafka, Rabbit, Artemis, EventStore or others are a good fit.
Save yourself time. A few more days of reading, asking yourself and others questions, bouncing ideas around, testing things out and comparing may save a few months or years of work down the line.
I've been there. And although I have use-cases for Kafka at my day-to-day job, I still fight the idea of using it "for everything". Ask yourself this: why does AWS offer both Kinesis, which is oddly similar to Kafka, and SQS, which is oddly similar to a pedigree MQ?
Because there is no "one good tool". It's a combination, and for the most part my opinion is that MQs tend to simplify the architecture in a non-trivial set-up of interconnected services, each with its own consumption patterns, while the likes of Kafka make it relatively easy to move huge amounts of data from specific technology A to specific technology B, provided you're willing to integrate specific connector X.
Keep it simple. Don’t add complexity where it’s not required.