This post was originally published on the Meltano blog on May 13, 2020.
This post is part 2 of a 2-part series to announce and provide context on the new direction of Meltano. If you've been following Meltano for a while or would like to have some historical context, start with part 1: Revisiting the Meltano strategy: a return to our roots. If you're new to Meltano or are mostly interested in what's coming, feel free to skip part 1 and start here. If you're worried that reading this entire post will take a lot of time, feel free to jump right to the conclusion: Where Meltano fits in.
Introduction
If you've read part 1 of the series, you know that Meltano is now focused on building an open source platform for data integration and transformation (ELT) pipelines, and that we're very excited about it.
But why are we even building this?
Isn't data integration (getting data from sources, like SaaS tools, to destinations, like data warehouses) a solved problem by now? Modern off-the-shelf tools have taken the industry by storm over the past few years, to the point that many (smaller) companies and data teams no longer even need data engineers on staff.
Off-the-shelf ELT tools are not that expensive, especially compared to other tools in the data stack, like Looker, and not having to worry about keeping your pipelines up and running or writing and maintaining data source connectors (extractors) is obviously extremely valuable to a business.
On top of that, writing and maintaining extractors can be tedious, thankless work, so why would anyone want to do this themselves instead of just paying a vendor to handle the burden?
Who would ever want to use a self-managed ELT platform? And why would anyone think building this is a good use of time or money, especially if it's going to be free and open source?