A curated list of notable ETL (extract, transform, load) frameworks, libraries and software.
- Airflow - "Use airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed."
- Azkaban - "a batch workflow job scheduler created at LinkedIn to run Hadoop jobs. Azkaban resolves the ordering through job dependencies and provides an easy to use web user interface to maintain and track your workflows."
- Dray.it - "Docker workflow engine. Allows users to separate a workflow into discrete steps each to be handled by a single container."
- Luigi - "a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in." A minimal task sketch follows this group of entries.
- Pinball - "a scalable workflow management platform developed at Pinterest. It is built based on layered approach."
- TaskFlow - "allows the creation of lightweight task objects and/or functions that are combined together into flows (aka: workflows) in a declarative manner. It includes engines for running these flows in a manner that can be stopped, resumed, and safely reverted."
- Toil - Similar to Luigi, jobs are classes with a run method. Supports executing jobs on other machines (workers) which can include AWS spot instances.
- Chronos - "a distributed and fault-tolerant scheduler that runs on top of Apache Mesos that can be used for job orchestration."
- Dagobah - "a simple dependency-based job scheduler written in Python. Dagobah allows you to schedule periodic jobs using Cron syntax. Each job then kicks off a series of tasks (subprocesses) in an order defined by a dependency graph you can easily draw with click-and-drag in the web interface."
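To make the task/dependency model concrete, here is a minimal Luigi sketch (not from any real project; the `Extract`/`Transform` names and file paths are invented for illustration). Luigi resolves the dependency declared in `requires()` and only runs a task whose `output()` target doesn't exist yet:

```python
import luigi


class Extract(luigi.Task):
    """Writes a raw file; stands in for any extraction step."""

    def output(self):
        return luigi.LocalTarget("raw.csv")  # illustrative path

    def run(self):
        with self.output().open("w") as f:
            f.write("id,name\n1,Ada\n")


class Transform(luigi.Task):
    """Depends on Extract; Luigi runs Extract first if raw.csv is missing."""

    def requires(self):
        return Extract()

    def output(self):
        return luigi.LocalTarget("clean.csv")  # illustrative path

    def run(self):
        with self.input().open() as src, self.output().open("w") as dst:
            for line in src:
                dst.write(line.lower())


if __name__ == "__main__":
    luigi.run()  # e.g. python pipeline.py Transform --local-scheduler
```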
- DataPipeLine - Java library for running performance-oriented flows
- GETL - Groovy toolbox for ETL tasks, drawn from working data architectures
- JSR 352 - Java native API for batch processing
- Scriptella - Java-XML ETL toolbox for everyday use.
- Spring Batch - ETL on the Spring ecosystem
- BeautifulSoup - Popular library used to extract data from web pages.
- Blaze - "translates a subset of modified NumPy and Pandas-like syntax to databases and other computing systems."
- Bonobo - Simple, modern and atomic data transformation graphs for Python 3.5+.
- Bubbles - "a Python ETL Framework and set of tools. It can be used for processing, auditing and inspecting data. Focus is on understandability and transparency of the process."
- Celery - "an asynchronous task queue/job queue based on distributed message passing. It is focused on real-time operation, but supports scheduling as well."
- Dask - Ever tried using Pandas to process data that won't fit into memory? Dask makes it easy. Dask also makes it easy to process continuous streams of data. A short sketch appears after this group of Python entries.
- dataset - A wrapper around SQLAlchemy that simplifies database operations, including upserting (sketch below).
- ijson - Allows processing JSON iteratively (as a stream) without loading the whole file into memory at once (example below).
- Joblib - "a set of tools to provide lightweight pipelining in Python."
- lxml - Parses XML using the C libraries libxml2 and libxslt, so it's very fast. Also supports a "recover" mode that will try its best to parse invalid XML, discarding what it can't salvage. Great for large XML files and advanced functionality (like using XPath expressions). IBM also has a great article on high-performance parsing with lxml here: http://www.ibm.com/developerworks/library/x-hiperfparse/
- MrJob - "lets you write MapReduce jobs in Python 2.6+ and run them on several platforms. The easiest route to writing Python programs that run on Hadoop."
- Odo - Moves data across containers (SQL, CSV, MongoDB, Pandas, etc). Claims to be the easiest and fastest way to load a CSV into your database.
- Pandas - Implements dataframes in Python for easier data processing and includes a number of tools that make it easier to extract data from multiple file formats.
- PETL - "a general purpose Python package for extracting, transforming and loading tables of data." Slower than Pandas and not as good for larger amounts of data, but simpler.
- PyQuery - Extracts data from web pages with a jQuery-like syntax.
- Retrying - Allows you to add a decorator to any function/method to retry on an exception.
- riko - A Python stream processing engine modeled after Yahoo! Pipes.
- Ruffus - "The Ruffus module is a lightweight way to add support for running computational pipelines."
- SQLAlchemy - "the Python SQL toolkit and Object Relational Mapper that gives application developers the full power and flexibility of SQL."
- Toolz - "A functional standard library for python." Includes a `pipe` function that allows you to pipe a value through a sequence of functions (sketch below). There's also a Cython implementation here: https://github.com/pytoolz/cytoolz
- xmltodict - Makes working with XML as easy as working with JSON. Also allows streaming so you don't run out of memory on large XML files. Great for simple operations on small XML files.
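A few of the Python libraries above are easiest to understand from a couple of lines of code. First, a minimal Dask sketch: the file pattern and column names are invented for illustration.

```python
import dask.dataframe as dd

# Lazily partitions CSVs that may not fit in memory (glob is illustrative).
df = dd.read_csv("logs-*.csv")

# Nothing runs until .compute() is called; Dask then processes the
# partitions in parallel and returns an ordinary pandas result.
daily_bytes = df.groupby("date")["bytes"].sum()
print(daily_bytes.compute())
```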
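Next, dataset's upsert: insert a row, or update it in place when a row with the same key already exists. The connection string and table name are placeholders.

```python
import dataset

db = dataset.connect("sqlite:///example.db")  # placeholder DSN
users = db["users"]

# First call inserts; second call updates the row whose "id" matches.
users.upsert({"id": 1, "name": "Ada"}, ["id"])
users.upsert({"id": 1, "name": "Ada Lovelace"}, ["id"])

print(users.find_one(id=1)["name"])  # Ada Lovelace
```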
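ijson's streaming model, assuming a file containing one top-level JSON array of objects (the filename and the "id" field are made up):

```python
import ijson

# "item" addresses each element of a top-level JSON array, so only
# one record is held in memory at a time.
with open("data.json", "rb") as f:
    for record in ijson.items(f, "item"):
        print(record["id"])  # hypothetical field
```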
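And the Toolz `pipe` function mentioned above, where `pipe(x, f, g)` is equivalent to `g(f(x))`:

```python
from toolz import pipe

# Threads the value through each function, left to right.
result = pipe(
    "  Hello, ETL  ",
    str.strip,
    str.lower,
    lambda s: s.replace(",", ""),
)
print(result)  # "hello etl"
```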
- Kiba - "provides you with a DSL to define ETL jobs"
- nokogiri - an excellent XML parser that "just works"
- Square ETL - Ruby ETL library open-sourced by Square.
- Crunch - "A fast to develop, fast to run, Go based toolkit for ETL and feature extraction on Hadoop."
- Pachyderm - A system for running processing pipeline jobs in containers and version controlling all data using a commit-based distributed filesystem.
- Datapumps - "Use pumps to import, export, transform or transfer data."
- NoFlo - "a JavaScript implementation of Flow-Based Programming"
- Building Analytics at 500px - https://medium.com/@samson_hu/building-analytics-at-500px-92e9a7005c83
- http://www.slideshare.net/g33ktalk/data-pipeline-acial-lyceum20140624
- Building Out the SeatGeek Data Pipeline - http://chairnerd.seatgeek.com/building-out-the-seatgeek-data-pipeline/
- ETL: Hand Code or Tool? - http://www.garynissen.com/etl-hand-code-or-tool/
- Big Data Warehousing Meetup, BigETL: Traditional Tool vs. Pig vs. Hive vs. Python, What to Use When - http://www.slideshare.net/CasertaConcepts/big-data-warehousing-meetup-bigetl-trad-tool-vs-pig-vs-hive-vs-python-what-to-use-when-slide-set-2
- https://deepfriedcode.com/books/darps/index.html
- Introduction to Pig (Cloudera) - http://blog.cloudera.com/wp-content/uploads/2010/01/IntroToPig.pdf
- Luigi at AdRoll - http://tech.adroll.com/blog/data/2015/10/15/luigi.html?adrolldev
- Alteryx - Cloud ETL tool with an interface similar to GUI ETL tools.
- AWS Data Pipeline - "a web service that helps you reliably process and move data between different AWS compute and storage services, as well as on-premise data sources, at specified intervals."
- AWS Glue - Generates the code (using Python and Spark) to execute your data transformations and data loading processes.
- Amazon Simple Workflow Service (SWF) - "helps developers build, run, and scale background jobs that have parallel or sequential steps. You can think of Amazon SWF as a fully-managed state tracker and task coordinator in the Cloud."
- AWS Batch - Allows executing jobs as containerized applications running on Amazon ECS. Also includes features for dynamically bidding for Spot Instances, integration with existing workflow engines, scheduling, monitoring, dependency modeling, and dynamic scaling/provisioning based on amount of work.
- Google Dataflow - "Google Cloud Dataflow provides a simple, powerful model for building both batch and streaming parallel data processing pipelines."
- Snaplogic - "a self-upgrading, elastic execution grid that streams data between applications, databases, files, social and big data sources."
- Pig - "a platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs."
- Spark - "a fast and general-purpose cluster computing system. It provides high-level APIs in Scala, Java, and Python that make parallel jobs easy to write, and an optimized engine that supports general computation graphs. It also supports a rich set of higher-level tools including Shark (Hive on Spark), MLlib for machine learning, GraphX for graph processing, and Spark Streaming." A short PySpark sketch follows.
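To give a flavor of Spark's Python API, here is a minimal word-count sketch; the input and output paths are placeholders, and local mode is used only for illustration.

```python
from pyspark import SparkContext

sc = SparkContext("local[*]", "wordcount")  # local mode for illustration

counts = (
    sc.textFile("input.txt")                 # placeholder input path
      .flatMap(lambda line: line.split())    # one record per word
      .map(lambda word: (word, 1))
      .reduceByKey(lambda a, b: a + b)       # sum the counts per word
)
counts.saveAsTextFile("word_counts")         # placeholder output dir
```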
Warning: If you're already familiar with a scripting language, GUI ETL tools are not a good replacement for a well-structured application written in that language. These tools lack flexibility and are a good example of the "inner-platform effect". On a large project, you will most likely run into cases where "the tool doesn't do that" and end up implementing something hacky with a script run by the GUI ETL tool. The GUI can also conceal complexity, and the files these tools generate are impossible to code review. However, the GUI and out-of-the-box functionality can make some tasks simpler, especially for people not comfortable writing code.
- Apache NiFi - "a rich, web-based interface for designing, controlling, and monitoring a dataflow."
- Informatica PowerCenter - "a toolset for establishing and maintaining enterprise-wide data warehouses. It has a customer base of over 5,000 companies."
- Jitterbit - "commercial software integration product that facilitates transport between legacy, enterprise, and on-demand computing applications."
- Microsoft SSIS - "a component of the Microsoft SQL Server database software that can be used to perform a broad range of data migration tasks."
- Pentaho Kettle - One of the most popular open-source graphical ETL tools.
- Talend - "an open source application for data integration job design with a graphical development environment"