docs: fix docstring format (#392)
douglasdavis authored Oct 20, 2023
1 parent 07237de commit e220179
Showing 1 changed file with 9 additions and 2 deletions.
11 changes: 9 additions & 2 deletions src/dask_awkward/lib/inspect.py
@@ -118,8 +118,9 @@ def report_necessary_buffers(
def report_necessary_columns(
*args: Any, traverse: bool = True
) -> dict[str, frozenset[str] | None]:
r"""Determine the columns necessary to compute a collection built from
a columnar source.
    r"""Get columns necessary to compute a collection

    This function is specific to sources that are columnar (e.g. Parquet).

Parameters
----------
@@ -129,25 +130,31 @@ def report_necessary_columns(
traverse : bool, optional
If True (default), builtin Python collections are traversed
        looking for any Dask collections they might contain.

    Returns
-------
dict[str, frozenset[str] | None]
Mapping that pairs the input layers in the graph to the
set of necessary IO columns that have been identified by column
optimisation of the given layer. If the layer is not backed by a
        columnar source, then None is returned instead of a set.

    Examples
--------
    If we have a hypothetical parquet dataset (``ds``) with the fields

    - "foo"
    - "bar"
    - "baz"

    And the "baz" field has fields

    - "x"
    - "y"

    The calculation of ``ds.bar + ds.baz.x`` will only require the
    ``bar`` and ``baz.x`` columns from the parquet file.

    >>> import dask_awkward as dak
    >>> ds = dak.from_parquet("some-dataset")
    >>> ds.fields
