A small cast toolkit class derived from _ParquetDatasetV2 to support cast in the filters argument
View Code with Demo
PyArrow 2.0.0 introduces Expression into its dataset API; an Expression can be passed as the filters argument of ParquetDataset to pre-filter at the partition level before reading. The actual expression construction used by ParquetDataset can be found in filters_to_expression in parquet.py, which uses field (an Expression) as the formal representation. Expression has cast as a method, so a straightforward inference about what should be supported is that one can compare a field with an object by casting the field into the object's type. For example, we can compare with a pandas datetime:
("time_two_pos", ">", pd.to_datetime("1970-01-01 00:24:01.200000001"))
by cast "time_two_pos" field into timestamp, when use time_two_pos as partition key. But it seems not work in the 2.0.0 version.
This project overrides the expression construction process of the _ParquetDatasetV2 class to perform a type-level transformation from partition strings to cast objects, compares each cast object with its counterpart in pandas space, filters the cast objects, and transforms them back into partition expressions (reduced by boolean combinations) to obtain a final expression, which is then applied to the filters argument of _ParquetDatasetV2 to configure it properly.
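A rough sketch of what such a cast-aware expression construction could look like (an illustrative reimplementation under assumed semantics, not the project's actual code; the helper name filters_to_cast_expression is made up):

import functools
import operator
import pyarrow as pa
import pyarrow.dataset as ds

_OPS = {"==": operator.eq, "!=": operator.ne, "<": operator.lt,
        "<=": operator.le, ">": operator.gt, ">=": operator.ge}

def filters_to_cast_expression(filters_objects, filters_types):
    # build one conjunction per inner filter list, then OR the conjunctions together
    disjunction = []
    for conjunction, conj_types in zip(filters_objects, filters_types):
        type_map = dict(conj_types)
        exprs = []
        for col, op, val in conjunction:
            field = ds.field(col)
            # cast the partition field when a non-string type is declared for it
            if type_map.get(col, "str") != "str":
                field = field.cast(pa.type_for_alias(type_map[col]))
            exprs.append(field.isin(val) if op == "in" else _OPS[op](field, val))
        disjunction.append(functools.reduce(operator.and_, exprs))
    return functools.reduce(operator.or_, disjunction)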
- Install PyArrow with pip
pip install pyarrow==2.0.0
- Download the example data from Kaggle: https://www.kaggle.com/retailrocket/ecommerce-dataset?select=events.csv
import os
import numpy as np
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

events_df = pd.read_csv("/home/svjack/temp_dir/events.csv")
### render the timestamp as a string (pd.to_datetime on the raw integers interprets them as nanosecond offsets from 1970-01-01)
events_df["time"] = pd.to_datetime(events_df["timestamp"]).astype(str)
### truncate to one fractional digit, zero-pad to nanosecond width, and append a trailing "1"
events_df["time_two_pos"] = events_df["time"].map(lambda x: str(x)[:str(x).find(".") + 2] + "0" * (28 - len(str(x)[:str(x).find(".") + 2])) + "1")
event_table = pa.Table.from_pandas(events_df)
write_path = os.path.join("/home/svjack/temp_dir" ,"event_log")
### write the table to a local partitioned dataset
pq.write_to_dataset(event_table, write_path, partition_cols=["event", "time_two_pos"])
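write_to_dataset lays the files out in Hive-style key=value directories, one level per partition column, roughly like this (values illustrative):

event_log/
    event=addtocart/
        time_two_pos=<value>/
            <uuid>.parquet
    event=transaction/
        ...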
### read by condition, using the toolkit class this project provides
filters_objects = [[("event", "in", ["transaction", "addtocart"]), ("time_two_pos", ">", pd.to_datetime("1970-01-01 00:24:01.200000001"))]]
filters_types = [[("event", "str"), ("time_two_pos", "timestamp[ms]")]]
p_path = write_path
### expose Timestamp by name (the toolkit appears to rely on it being in scope when evaluating cast objects)
Timestamp = pd.Timestamp
exp_cast_ds = _ParquetDatasetV2PartitionCastWellFormatted(
    p_path,
    filters_objects=filters_objects,
    filters_types=filters_types)
all_filtered_df_in_upper = exp_cast_ds.read().to_pandas()
assert np.all(pd.to_datetime(all_filtered_df_in_upper["time_two_pos"]) > pd.to_datetime("1970-01-01 00:24:01.200000001"))
filters_objects = [[("event", "in", ["transaction", "addtocart"]), ("time_two_pos", "<=", pd.to_datetime("1970-01-01 00:24:01.200000001"))]]
filters_types = [[("event", "str"), ("time_two_pos", "timestamp[ms]")]]
Timestamp = pd.Timestamp
exp_cast_ds = _ParquetDatasetV2PartitionCastWellFormatted(
    p_path,
    filters_objects=filters_objects,
    filters_types=filters_types)
all_filtered_df_in_lower = exp_cast_ds.read().to_pandas()
assert np.all(pd.to_datetime(all_filtered_df_in_lower["time_two_pos"]) <= pd.to_datetime("1970-01-01 00:24:01.200000001"))
filters_objects = [[("event", "in", ["transaction", "addtocart"]),]]
filters_types = [[("event", "str"),]]
exp_cast_ds = _ParquetDatasetV2PartitionCastWellFormatted(
    p_path,
    filters_objects=filters_objects,
    filters_types=filters_types)
all_filtered_df_in = exp_cast_ds.read().to_pandas()
assert all_filtered_df_in.shape[0] == all_filtered_df_in_lower.shape[0] + all_filtered_df_in_upper.shape[0]
"filters_objects" can be seen as original "filters" in PyArrow, "filters_types" is the cast type that can retrieve from type_aliases in PyArrow.
Distributed under the MIT License. See LICENSE for more information.
svjack - svjackbt@gmail.com
Project Link: https://github.com/svjack/PyArrowExpressionCastToolkit