I will provide code that sources data from APIs or external systems into a data lake (e.g., AWS S3 or Google Cloud Storage), loads relevant data into a warehouse (e.g., BigQuery, Redshift, or Snowflake), and performs transformations and analytics to produce production-ready tables in the warehouse. Pipelines may be written in SQL and Python and automated with workflow tools such as dbt, Airflow, and AWS Step Functions; these workflows may run on a schedule or be triggered by events such as new data arriving.
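The extract → lake → warehouse → transform flow described above can be sketched in Python. This is a minimal, self-contained illustration, not a production implementation: the "API" response is hard-coded sample data, a local temp directory stands in for S3/GCS, and `sqlite3` stands in for BigQuery/Redshift/Snowflake. All function names and the sample schema (`raw_orders`, `completed_revenue`) are hypothetical.

```python
import json
import sqlite3
import tempfile
from pathlib import Path


def extract_to_lake(lake_dir: Path) -> Path:
    """Simulate an API extract landing raw JSON lines in the data lake.

    In a real pipeline this would be an HTTP call (e.g. requests.get)
    writing to S3 or GCS rather than a local directory.
    """
    records = [
        {"order_id": 1, "amount": 120.0, "status": "complete"},
        {"order_id": 2, "amount": 35.5, "status": "pending"},
        {"order_id": 3, "amount": 99.9, "status": "complete"},
    ]
    raw_path = lake_dir / "orders.jsonl"
    with raw_path.open("w") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")
    return raw_path


def load_to_warehouse(conn: sqlite3.Connection, raw_path: Path) -> None:
    """Load raw lake files into the warehouse (sqlite3 as a stand-in)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS raw_orders "
        "(order_id INTEGER, amount REAL, status TEXT)"
    )
    with raw_path.open() as f:
        rows = [json.loads(line) for line in f]
    conn.executemany(
        "INSERT INTO raw_orders VALUES (:order_id, :amount, :status)", rows
    )


def transform(conn: sqlite3.Connection) -> None:
    """SQL transformation producing a production-ready table.

    This is the step a tool like dbt would own in a real deployment.
    """
    conn.execute(
        """
        CREATE TABLE completed_revenue AS
        SELECT COUNT(*) AS orders, SUM(amount) AS revenue
        FROM raw_orders
        WHERE status = 'complete'
        """
    )


def run_pipeline(conn: sqlite3.Connection) -> None:
    """One end-to-end run: extract to the lake, load, then transform."""
    with tempfile.TemporaryDirectory() as d:
        raw_path = extract_to_lake(Path(d))
        load_to_warehouse(conn, raw_path)
    transform(conn)
```

In a scheduled deployment, `run_pipeline` would be the body of an Airflow task or a Step Functions state, invoked on a cron schedule or by an event notification (e.g. a new object landing in the lake bucket).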