
data load tool (dlt): the open-source Python library for data loading

Be it a Google Colab notebook, an AWS Lambda function, an Airflow DAG, your local laptop, or a GPT-4-assisted development playground, dlt can be dropped in anywhere.

🚀 Join our thriving community of like-minded developers and build the future together!

Installation

dlt supports Python 3.8+.

pip install dlt

More options: Install via Conda or Pixi

Quick Start

Load chess player data from the chess.com API and save it in DuckDB:

import dlt
from dlt.sources.helpers import requests

# Create a dlt pipeline that will load
# chess player data to the DuckDB destination
pipeline = dlt.pipeline(
    pipeline_name='chess_pipeline',
    destination='duckdb',
    dataset_name='player_data'
)

# Grab some player data from Chess.com API
data = []
for player in ['magnuscarlsen', 'rpragchess']:
    response = requests.get(f'https://api.chess.com/pub/player/{player}')
    response.raise_for_status()
    data.append(response.json())

# Extract, normalize, and load the data
pipeline.run(data, table_name='player')
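
Once the pipeline has run, you can inspect the result directly in DuckDB. The snippet below is a minimal sketch: it assumes the default DuckDB destination wrote its database file as chess_pipeline.duckdb (pipeline name plus the .duckdb extension) in the working directory, and that the chess.com profile fields username and followers survived normalization unchanged.

import duckdb

# Assumption: default DuckDB destination file location (./chess_pipeline.duckdb)
conn = duckdb.connect('chess_pipeline.duckdb')

# dlt created the table inside the dataset (schema) configured on the pipeline
print(conn.sql('SELECT username, followers FROM player_data.player').fetchall())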

Try it out in our Colab Demo

Features

  • Automatic Schema: Data structure inspection and schema creation for the destination.
  • Data Normalization: Consistent and verified data before loading.
  • Seamless Integration: Colab, AWS Lambda, Airflow, and local environments.
  • Scalable: Adapts to growing data needs in production.
  • Easy Maintenance: Clear data pipeline structure for updates.
  • Rapid Exploration: Quickly explore and gain insights from new data sources.
  • Versatile Usage: Suitable for everything from ad-hoc exploration to production-grade loading infrastructure.
  • Start in Seconds with CLI: Powerful CLI for managing, deploying and inspecting local pipelines.
  • Incremental Loading: Load only new or changed data and avoid reloading old records (see the sketch after this list).
  • Open Source: Free and Apache 2.0 Licensed.
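
To make incremental loading concrete, here is a minimal sketch that fetches only GitHub issues updated since the previous pipeline run. The repository URL, field names, and cursor column are illustrative assumptions; the pattern itself relies on dlt.sources.incremental to remember the updated_at cursor between runs and on write_disposition='merge' to deduplicate by primary key.

import dlt
from dlt.sources.helpers import requests

@dlt.resource(primary_key='id', write_disposition='merge')
def issues(
    updated_at=dlt.sources.incremental('updated_at', initial_value='1970-01-01T00:00:00Z')
):
    # Ask the API only for records changed since the last stored cursor value
    # (single page shown for brevity; a real source would paginate)
    url = (
        'https://api.github.com/repos/dlt-hub/dlt/issues'
        f'?since={updated_at.last_value}&per_page=100&state=all'
    )
    response = requests.get(url)
    response.raise_for_status()
    yield response.json()

pipeline = dlt.pipeline(
    pipeline_name='github_issues',
    destination='duckdb',
    dataset_name='github_data'
)
pipeline.run(issues())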

Ready-to-use Sources and Destinations

Explore ready-to-use sources (e.g., Google Sheets) in the Verified Sources docs and supported destinations (e.g., DuckDB) in the Destinations docs.

Documentation

For detailed usage and configuration, please refer to the official documentation.

Examples

You can find examples for various use cases in the examples folder.

Adding as dependency

dlt follows semantic versioning with the MAJOR.MINOR.PATCH pattern:

  • major: breaking changes and removal of deprecated functionality
  • minor: new features, sometimes with automatic migrations
  • patch: bug fixes

We suggest that you allow only patch-level updates automatically:
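
For example, using pip's compatible-release specifier in requirements.txt (the version shown is purely illustrative):

dlt~=1.5.0

With a full MAJOR.MINOR.PATCH version, the ~= operator permits upgrades to newer patch releases (here anything >=1.5.0 and <1.6.0) but not to a new minor or major version.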

Get Involved

The dlt project is quickly growing, and we're excited to have you join our community! Here's how you can get involved:

  • Connect with the Community: Join other dlt users and contributors on our Slack.
  • Report issues and suggest features: Please use the GitHub Issues to report bugs or suggest new features. Before creating a new issue, make sure to search the tracker for possible duplicates and add a comment if you find one.
  • Track progress of our work and our plans: Please check out our public GitHub project.
  • Contribute Verified Sources: Contribute your custom sources to dlt-hub/verified-sources to help others handle their data tasks.
  • Contribute code: Check out our contributing guidelines for information on how to make a pull request.
  • Improve documentation: Help us enhance the dlt documentation.

License

dlt is released under the Apache 2.0 License.