
Orchestrating notebooks with Camel K, an integration-based approach

When your data analytics pipelines reach a certain level of maturity, you tend to start exploring flexible ways to orchestrate your ETL processes and to derive tables for the different access patterns required by the business downstream. However, much like the “object-relational impedance mismatch” between the object-oriented and relational database worlds, there is an “impedance mismatch” between data engineers and business folks when it comes to their expectations around speed of delivery, quality, correctness, and maintainability.

In the mindset of the business folk, the derived data they need is just a simple SQL query run against the “raw data”, on a system with infinite CPU power and infinite memory, probably running on GPUs anyway, so it just needs to be written and it will return results in an instant.

Continue reading →

On workflow engines and where Airflow fits in

On the occasion of CrunchConf 2018, there was a presentation on “Operating data pipeline using Airflow @ Slack” by Ananth Packkildurai. If you don’t know what Airflow is, it’s a workflow engine along the lines of Oozie and Azkaban. It’s based on the concept of a DAG (directed acyclic graph), which you write in Python and execute on a cluster.
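To make that concrete, here is a minimal sketch of what such a DAG looks like, written against the Airflow 1.x API that was current around 2018; the DAG id, task ids, and shell commands are hypothetical placeholders, not taken from the talk.

```python
# Minimal Airflow 1.x DAG sketch -- dag_id, task_ids and commands are
# illustrative placeholders, not from the Slack presentation.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

default_args = {
    "owner": "data-eng",
    "retries": 1,
    "retry_delay": timedelta(minutes=5),
}

# The DAG object groups the tasks and carries the schedule.
dag = DAG(
    dag_id="example_etl",
    default_args=default_args,
    start_date=datetime(2018, 1, 1),
    schedule_interval="@daily",
)

extract = BashOperator(task_id="extract", bash_command="echo extract", dag=dag)
transform = BashOperator(task_id="transform", bash_command="echo transform", dag=dag)

# ">>" declares the dependency edge: transform runs only after extract succeeds.
extract >> transform
```

The scheduler walks the graph of such files, running each task once its upstream dependencies have completed, which is what distinguishes a workflow engine from a plain cron job.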

As with the Kafka presentation by Tim Berglund, we asked the hard questions, and they got popular pretty quickly. In the case of Airflow, within the ecosystem of workflow engines, we had quite a heavy question.

Continue reading →