[show-and-tell] Using {targets} with Docker, DuckDB, and {renv} for Reproducible Data Pipelines #1335
Replies: 4 comments 3 replies
- Nice example, thanks for sharing!
- Thanks for sharing, @philiporlando! What is the advantage of passing a table hash via …
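For context on the table-hash question above, here is a minimal sketch of the pattern (not the project's actual code; the database path and `trips` table are hypothetical). An always-run target hashes the table, and downstream targets depend on that hash, so they are invalidated only when the table's contents actually change:

```r
# _targets.R (sketch)
library(targets)

list(
  tar_target(
    trips_hash,
    {
      con <- DBI::dbConnect(duckdb::duckdb(), dbdir = "data/warehouse.duckdb")
      on.exit(DBI::dbDisconnect(con, shutdown = TRUE))
      # Hash the table contents (a cheaper fingerprint such as row counts or
      # max timestamps would also work for large tables).
      digest::digest(DBI::dbGetQuery(con, "SELECT * FROM trips"))
    },
    cue = tar_cue(mode = "always") # re-check the hash on every tar_make()
  ),
  tar_target(
    trips_summary,
    {
      trips_hash # mentioning the hash here makes it an upstream dependency
      con <- DBI::dbConnect(duckdb::duckdb(), dbdir = "data/warehouse.duckdb")
      on.exit(DBI::dbDisconnect(con, shutdown = TRUE))
      DBI::dbGetQuery(con, "SELECT count(*) AS n_trips FROM trips")
    }
  )
)
```

The benefit over querying unconditionally in every target is that expensive downstream work is skipped when the database has not changed, at the cost of one lightweight hash query per run.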
- A couple thoughts to add: …
- I found this guide very useful, thank you. I've been trying to make sense of pipelines that involve a database call as well. I had wanted to create a database connection as a …
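On the connection-as-a-target question above: DBI connections are external pointers, so they generally cannot be serialized to `_targets/` storage or shipped to parallel workers, which is why storing the connection itself as a target tends not to work. A common workaround, sketched below with hypothetical names and paths, is a small helper that opens and closes a short-lived connection inside each target that needs one:

```r
# Sketch of a _targets.R that avoids storing the connection itself.
library(targets)

# Helper: open a DuckDB connection, run `f`, and always disconnect, even on error.
with_duckdb <- function(f, dbdir = "data/warehouse.duckdb") {
  con <- DBI::dbConnect(duckdb::duckdb(), dbdir = dbdir)
  on.exit(DBI::dbDisconnect(con, shutdown = TRUE), add = TRUE)
  f(con)
}

list(
  tar_target(
    raw_trips,
    with_duckdb(function(con) DBI::dbGetQuery(con, "SELECT * FROM trips"))
  )
)
```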
Description
I've been working on this project, which demonstrates how to build reproducible data pipelines using Docker for isolated environments, DuckDB for fast, in-process SQL analytics, and `{targets}` for orchestrating R workflows. The project also incorporates `{renv}` to enhance Docker's handling of R package dependencies, letting users manage and version-control packages through an `renv.lock` file.

I wanted to share this with other `{targets}` users who may be searching for a portable data pipeline solution. Looking forward to any feedback and to learning about other ways folks are building reproducible data pipelines.
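For readers who find this thread later, here is a rough sketch of how the pieces typically fit together; all target names, paths, columns, and queries below are illustrative rather than taken from the linked repository:

```r
# _targets.R (illustrative skeleton)
library(targets)

tar_option_set(
  packages = c("DBI", "duckdb", "dplyr") # versions pinned by renv.lock
)

list(
  # Track the DuckDB database file so changes to it invalidate downstream targets.
  tar_target(duckdb_file, "data/warehouse.duckdb", format = "file"),
  tar_target(trips, {
    con <- DBI::dbConnect(duckdb::duckdb(), dbdir = duckdb_file)
    on.exit(DBI::dbDisconnect(con, shutdown = TRUE))
    DBI::dbGetQuery(con, "SELECT * FROM trips")
  }),
  tar_target(trip_counts, dplyr::count(trips, vendor_id))
)

# On the host, renv records the package versions the Docker image will restore:
#   renv::snapshot()   # writes/updates renv.lock
# Inside the container, the pipeline is typically run with something like:
#   Rscript -e 'renv::restore()'
#   Rscript -e 'targets::tar_make()'
```

Tracking the `.duckdb` file with `format = "file"` makes any change to the database invalidate downstream targets; the table-hash approach sketched earlier in the thread is a finer-grained alternative when only specific tables matter.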