The sqlpack module provides:
- A CLI utility for compiling templated SQL files with macros into standard SQL.
- A Python module for shipping SQL components with your code.
- A repository of standard SQL templates for ETL operations with Snowflake.
If you want to contribute to the project, please refer to the CONTRIBUTING.md file.
Install sqlpack with pipx:
pipx install sqlpack
To list the available SQL modules (ETLs), use the list command:
sqlpack list
Use the print-sample-data command to see the required parameters for a built-in pack's templated SQL:
sqlpack print-sample-data pack_name
To store these parameters in a YAML file, run:
sqlpack print-sample-data pack_name > parameters.yaml
Update the parameter values in the YAML file with the editor of your choice. If you use VSCode, run:
code parameters.yaml
To compile a built-in template at the CLI, use the print-sql sub-command:
sqlpack print-sql <pack_name> [parameters.yaml] [--params ...]
Parameter values are read from:
- default parameter values set by the template author
- a parameter file provided at the CLI
- values passed into the call.
If a parameter is missing from all three sources, the following error is printed:
MISSING VALUE for <parameter_name>
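The three-source lookup can be sketched in Python. The merge order below (author defaults, then the parameter file, then call-site values, with later sources winning) is an assumption about sqlpack's precedence, and all function and variable names here are illustrative, not sqlpack's internals:

```python
# Illustrative sketch of the three-source parameter lookup; not sqlpack's
# actual implementation. Assumes later sources override earlier ones.

MISSING = "MISSING VALUE for {name}"

def resolve_params(defaults, file_params, call_params, required):
    """Merge the three sources; later dicts win on key collisions."""
    merged = {**defaults, **file_params, **call_params}
    # Report any required parameter absent from all three sources.
    errors = [MISSING.format(name=n) for n in required if n not in merged]
    return merged, errors

merged, errors = resolve_params(
    defaults={"parameter_1": "author_default"},   # set by the template author
    file_params={"parameter_1": "val_1"},         # from parameters.yaml
    call_params={"parameter_2": "val_2"},         # passed into the call
    required=["parameter_1", "parameter_2", "parameter_3"],
)
print(merged["parameter_1"])  # val_1 (file value overrides the default)
print(errors[0])              # MISSING VALUE for parameter_3
```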
For example, given a parameters.yaml containing:
parameter_1: val_1
parameter_2: val_2
and a template:
simple_replace = {parameter_1}
simple_replace_with_additional_text = {parameter_2}_name
nested_replace = schema_{simple_replace}_end
sqlpack print-sql pack_name --parameter_1 val_1 --parameter_2 val_2
or
sqlpack print-sql pack_name parameters.yaml
Either command produces:
simple_replace = val_1
simple_replace_with_additional_text = val_2_name
nested_replace = schema_val_1_end
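The nested substitution above can be sketched with plain str.format by resolving lines top to bottom, so each resolved value is visible to the lines after it. This is an illustration of the observed behaviour under that assumption, not sqlpack's internals:

```python
# Sketch of nested template substitution: lines are resolved in order,
# and each resolved value becomes available to later lines by name.
# Illustrative only; not sqlpack's implementation.

def render(template_lines, params):
    scope = dict(params)   # parameter values visible to every line
    resolved = {}
    for key, text in template_lines.items():
        value = text.format(**scope)
        resolved[key] = value
        scope[key] = value  # later lines may reference this line by name
    return resolved

template = {
    "simple_replace": "{parameter_1}",
    "simple_replace_with_additional_text": "{parameter_2}_name",
    "nested_replace": "schema_{simple_replace}_end",
}
out = render(template, {"parameter_1": "val_1", "parameter_2": "val_2"})
print(out["nested_replace"])  # schema_val_1_end
```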
To accomplish the same thing in a Python script:
from sqlpack import print_sql
print_sql('pack_name', parameter_1='val1', parameter_2='val2')
or
print_sql('pack_name', 'parameters.yaml')