Scenario is working with very large (many GB) csv files and wanting to save them to parquet files after processing, without reading/collecting them entirely into memory.

I know how to do this with arrow:
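A minimal sketch of that pattern (paths are placeholders and the processing step is elided):

library(arrow)
library(dplyr)

open_dataset("path/to/csv.csv", format = "csv") |>
  # ... dplyr processing steps here ...
  # write_dataset() streams record batches to disk rather than collecting into memory
  write_dataset("path/to/export", format = "parquet")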
I also know how to do this with duckdb and SQL:

COPY (SELECT * FROM 'path/to/csv.csv')
TO 'path/to/export.parquet'
(FORMAT 'parquet');
But I would like to figure out how to do this with duckdb and dbplyr. The context is that members of my team are unfamiliar with SQL but are comfortable with dplyr syntax.

The closest I've got so far is:
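Roughly this (the read function, paths, and processing step are stand-ins for my real code):

library(dplyr)

con <- DBI::dbConnect(duckdb::duckdb())

tbl(con, "read_csv_auto('path/to/csv.csv')") |>
  # ... dplyr verbs, translated to SQL by dbplyr ...
  arrow::to_arrow() |>   # hands the duckdb result to arrow as a RecordBatchReader
  arrow::write_dataset("path/to/export", format = "parquet")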
note: in my testing you have to pass tbl() the read function as well as the file path as a text string, even if the file path contains the csv or parquet extension. Is this where tbl_file() comes in? If so, how do you pass tbl_file() parameters to the read_csv function, such as which columns are dates? (As mentioned in #159, it's unclear from the duckdb documentation how to use duckdb-specific dbplyr functions.)
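With the text-string approach, column types can at least be forced inside the string itself, relying on DuckDB's read_csv types option (the column name here is a placeholder):

tbl(con, "read_csv('path/to/csv.csv', types = {'my_date_col': 'DATE'})")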
I'm a bit nervous about my workaround of using to_arrow(), because one of the reasons we're starting to use duckdb as a team in preference to arrow is that arrow's auto-detection of schemas from csvs is nowhere near as good as duckdb's, and it's very fussy and slow at parsing dates. I've also noticed that arrow's interpretation of empty strings differs from duckdb's (arrow leaves them as "", duckdb makes them NA). And I'm cautious about introducing a step that might complicate matters and make it unclear what type casting has occurred in the transfer between the two libraries.
I've also tried various versions of copy_to() and db_copy_to() with temporary = FALSE. I've managed to create an in-memory table called "'path/to/export.parquet' (FORMAT 'parquet')" (!) but not actually save anything to disk.
If I should ask this question elsewhere, please let me know.
I am interested in the official answer to this question!
On my side, I've been doing the following: convert my dbplyr operations to SQL using sql_render(), then pipe the result into a function that wraps COPY ... TO.
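In outline (the function and argument names are my own; error handling omitted):

copy_to_parquet <- function(query, path) {
  con <- dbplyr::remote_con(query)   # connection behind the lazy tbl
  sql <- dbplyr::sql_render(query)   # the dbplyr pipeline rendered as SQL
  DBI::dbExecute(con, sprintf("COPY (%s) TO '%s' (FORMAT 'parquet')", sql, path))
}

# so the team can stay in dplyr syntax end to end:
con <- DBI::dbConnect(duckdb::duckdb())
tbl(con, "read_csv_auto('path/to/csv.csv')") |>
  # ... dplyr verbs ...
  copy_to_parquet("path/to/export.parquet")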
Thanks. There is duckplyr::df_to_parquet(), but nothing comparable in this package. We could certainly implement a tbl_to_parquet() -- the con can be retrieved via dbplyr::remote_con(), and we probably want to pass along options (hive partitioning, ...). Happy to review an implementation sketch here or in a PR.
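To make that concrete, one possible shape (names, arguments, and the option handling are all up for discussion, not an agreed API):

tbl_to_parquet <- function(tbl, path, partition_by = NULL) {
  con <- dbplyr::remote_con(tbl)
  sql <- dbplyr::sql_render(tbl)
  opts <- "FORMAT 'parquet'"
  if (!is.null(partition_by)) {
    # duckdb's COPY supports hive-style partitioning via PARTITION_BY
    opts <- paste0(opts, ", PARTITION_BY (", paste(partition_by, collapse = ", "), ")")
  }
  DBI::dbExecute(con, sprintf("COPY (%s) TO '%s' (%s)", sql, path, opts))
  invisible(tbl)
}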